Article

A Computer-Assisted Diagnostic Method for Accurate Detection of Early Nondisplaced Fractures of the Femoral Neck

1 Minimally Invasive Spine and Joint Center, Buddhist Tzu Chi General Hospital Taichung Branch, Taichung 427213, Taiwan
2 College of Electrical Engineering and Computer Science, National Chin-Yi University of Technology, Taichung 411030, Taiwan
3 Department of Orthopedic Surgery, China Medical University Hospital, Taichung 404327, Taiwan
* Author to whom correspondence should be addressed.
Biomedicines 2023, 11(11), 3100; https://doi.org/10.3390/biomedicines11113100
Submission received: 15 October 2023 / Revised: 12 November 2023 / Accepted: 15 November 2023 / Published: 20 November 2023

Abstract

Nondisplaced femoral neck fractures are sometimes misdiagnosed on radiographs and may deteriorate into displaced fractures; however, few efficient artificial intelligence methods for detecting them have been reported. We developed an automatic detection method using deep learning networks to pinpoint femoral neck fractures on radiographs and assist physicians in making an accurate initial diagnosis. The proposed method, called the direction-aware fracture-detection network (DAFDNet), consists of two steps, namely region-of-interest (ROI) segmentation and fracture detection. The first step removes noise regions and pinpoints the femoral neck region; the fracture-detection step uses a direction-aware deep learning algorithm to mark the exact fracture location within the region detected in the first step. A total of 3840 femoral neck regions in anterior–posterior (AP) pelvis radiographs collected from the China Medical University Hospital database were used to test our method. The simulation results showed that DAFDNet outperformed the U-Net and DenseNet methods in terms of the IOU, Dice, and Jaccard values. DAFDNet demonstrated over 94.8% accuracy in differentiating nondisplaced Garden type I and type II femoral neck fracture cases and outperformed the diagnostic accuracy of general practitioners and orthopedic surgeons in locating Garden type I and type II fractures. This study demonstrates the feasibility of applying artificial intelligence in a clinical setting and shows how deep learning networks can assist physicians in improving diagnostic correctness compared with current traditional manual orthopedic assessments.

Graphical Abstract

1. Introduction

A femoral neck fracture (FNF) is one of the most common osteoporotic fractures in the elderly and causes substantial morbidity and mortality [1,2,3]. Figure 1 shows a normal lateral view of the pelvis and proximal femur. According to the radiograph-based Garden classification system for assessing fracture severity, FNFs can be classified into four types, namely nondisplaced Garden I and II and displaced Garden III and IV [4]. The Garden classification incorporates the displacement, fracture completeness, and relationship of the bony trabeculae in the femoral head and neck. Type I is a nondisplaced, valgus-impacted incomplete fracture with disruption of the lateral cortex while the medial cortex is preserved. Type II is a complete fracture without displacement. Type III is a complete fracture with partial displacement, indicated by a change in the angle of the trabeculae. Type IV is a complete fracture with complete displacement. Displaced FNFs are clinically and radiographically distinct, whereas nondisplaced FNFs are challenging to identify and have received less attention [5,6,7,8]. The radiographic imaging of nondisplaced FNFs can be compromised by osteoporosis, obesity, patient positioning, the use of portable radiographic equipment, and poor image quality, which creates additional difficulties for clinicians [6,9].
To understand the misdiagnosis rate of nondisplaced FNFs on radiographs, we conducted a trial at the China Medical University Hospital (CMUH). One emergency room (ER) doctor, one junior PGY-1 doctor, and one senior orthopedic surgeon with ten years of experience volunteered to read 480 AP-view pelvis X-rays containing nondisplaced FNFs. The diagnosis of fractures was confirmed by a follow-up pelvis CT scan and radiologist reports. The overall misdiagnosis rates for nondisplaced FNFs were 7.87% for the PGY-1 doctor, 4.19% for the ER doctor, and 2.44% for the senior orthopedic surgeon. Table 1 compares these results with previous reviews. The misdiagnosis rate was higher for the junior and ER doctors than for the senior orthopedic surgeon. Therefore, AI-assisted diagnosis of radiographs can alert ER doctors to arrange an advanced CT scan to identify occult fractures in highly suspect cases.
Recent advances in artificial intelligence using deep learning techniques, such as deep convolutional neural networks (DCNNs), have shown remarkable results on a range of medical tasks, rivaling human experts [10,11,12,13,14]. A growing number of studies support that deep learning networks can be trained to identify fractures in orthopedic radiographs with satisfactory accuracy [15,16]. Although deep learning has been applied to fracture detection for radiological diagnosis, nondisplaced FNFs are still often misdiagnosed, which may allow them to deteriorate into displaced fractures. Therefore, we propose a new direction-aware fracture-detection network, termed DAFDNet, for the automatic detection of FNFs on anterior–posterior pelvic radiographs. The Gabor filter is a differentiable band-pass filter with adjustable scales and orientations, and it has therefore been integrated into DCNNs [17,18,19]. Garden type I and Garden type II FNFs present different orientations and frequencies in frequency space, depending on the patient's imaging position and conditions. By integrating a Gabor filter into the DCNN, the network can learn the filter's optimal parameters and thereby learn robust feature representations. We present this study to validate the accuracy of a DCNN in detecting nondisplaced FNFs, and it showed substantial improvements in performance. This study utilized a deep learning network to help physicians improve diagnostic correctness compared to current traditional manual orthopedic evaluations.

2. Materials and Methods

2.1. DCNN for FNFs

Recently, DCNN-based methods have shown great potential in many areas of medical diagnosis and have encouraged further applied research [20]. The use of a DCNN can reduce the need for expensive computed tomography (CT) and magnetic resonance imaging (MRI) scans, and its automatic and accurate detection results can reduce the burden on clinicians for the urgent identification of fractures [12]. However, the feasibility and efficiency of detecting FNFs using a DCNN remain challenging and have not been fully investigated, especially for the occult presentations of Garden I and II fractures. To the best of our knowledge, typical DCNN-based methods [16,21,22,23,24], such as U-Net [21] and DenseNet [24], can be applied to detect FNFs, but the subtle features of these fractures may vanish after a series of convolutional operations as the layers deepen, owing to the tiny variations in the grayscale distribution of these regions in radiographic images.

2.2. Gabor Filter

A two-dimensional Gabor filter is a directional band-pass wavelet filter that is the multiplication of a Gaussian function and a cosine function, defined as follows [17]:
$$G_{u,v}(z) = \frac{\| k_{u,v} \|^2}{(2\pi)^2} \, e^{-\| k_{u,v} \|^2 \| z \|^2 / \left( 2 (2\pi)^2 \right)} \left[ e^{i k_{u,v} \cdot z} - e^{-(2\pi)^2 / 2} \right]$$
where $k_{u,v} = k_v e^{i k_u}$, $k_v = (\pi/2)/(\sqrt{2})^{v-1}$, $k_u = u\pi/U$, and $v$ and $u$ index the frequency and orientation, respectively. $U$ stands for the total number of directions; we set up 8 different angles to capture the directionality of the features. Essentially, the Gabor transform is a windowed short-time Fourier transform that can extract features locally or within certain frequency components. Its direction- and frequency-selection properties make the Gabor filter sensitive to certain types of boundaries. In radiography, the orientations of nondisplaced FNFs are related to the patient's position, and their frequency components lie in certain ranges. Therefore, in our study, a multiple-direction Gabor filter with adjusted frequency bands was employed as an input layer of the DCNN to detect tiny changes in the grayscale distribution of Garden type I and type II fractures.
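As a concrete illustration, the following minimal pure-Python sketch samples the Gabor kernel defined above on a small grid and builds an 8-direction filter bank. The 7 × 7 kernel size and the choice of a single scale (v = 1) are illustrative assumptions, not values taken from the paper.

```python
import cmath
import math

def gabor_kernel(u, v, size=7, U=8):
    """Sample the complex Gabor kernel G_{u,v}(z) on a size x size grid.

    k_v = (pi/2) / sqrt(2)**(v - 1) sets the scale and k_u = u*pi/U the
    orientation; sigma = 2*pi, as in the kernel definition above.
    """
    k_v = (math.pi / 2) / math.sqrt(2) ** (v - 1)
    phi = u * math.pi / U
    sigma2 = (2 * math.pi) ** 2                     # sigma^2 with sigma = 2*pi
    k = (k_v * math.cos(phi), k_v * math.sin(phi))  # wave vector k_{u,v}
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            z2 = x * x + y * y
            # Gaussian envelope scaled by ||k||^2 / sigma^2.
            envelope = (k_v ** 2 / sigma2) * math.exp(-k_v ** 2 * z2 / (2 * sigma2))
            # Complex carrier minus the DC-compensation term.
            carrier = cmath.exp(1j * (k[0] * x + k[1] * y)) - math.exp(-sigma2 / 2)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

# A bank of 8 orientations at one scale, as used for the direction-aware layer.
bank = [gabor_kernel(u, v=1) for u in range(8)]
```

Convolving an image patch with the real and imaginary parts of each kernel responds most strongly to edges aligned with that kernel's orientation, which is what makes the resulting feature maps direction-aware.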

2.3. Attention Mechanism

The attention mechanism was invented to tell a DCNN where or what features to focus on and has been demonstrated to significantly improve model performance [25,26]. The squeeze-and-excitation network (SENet) exploits a squeeze function and an excitation function, i.e., a global average pooling operation and a sigmoid function, respectively, to encode inter-channel information [27]. This simple and innovative model provides a significant performance improvement for DCNNs but ignores location information that is important for capturing features. Therefore, several extension studies have proposed solutions such as the bottleneck attention module (BAM) [28], the convolutional block attention module (CBAM) [29], and spatial and channel-wise attention (SCA) [30] to further extract spatial and channel information and improve network effectiveness. The self-attention attention-in-attention network (A2Net) divides its branches into attention and non-attention branches to maximize the use of high-contributing information and minimize the suppression of redundant information [31]. Although A2Net exhibits excellent performance, its large amount of computation requires significant hardware costs. In summary, this study used the SCA strategy to obtain spatial and channel-wise information.

2.4. Direction-Aware Segmentation Network

In this section, we introduce the implementation details of the proposed DAFDNet model, including the attention mechanism, the ghost convolution, and the model architecture.

2.5. Squeeze-and-Excitation Ghost Convolution

GhostNet was first proposed in [32] to reduce computational consumption by replacing part of the ordinary convolution with simple linear transformations. A ghost module divides the result of a convolution into two parts: the first part involves ordinary convolution, while the other uses a series of linear transformations to generate more feature maps, as shown in Figure 2a. With this strategy, the lightweight ghost module produces more feature maps with inexpensive operations and performs better than other lightweight DCNNs, which also accelerates the learning process. However, the linear transformation does not capture cross-channel relationships, which have proven to be robust cues in object detection. Therefore, a squeeze-and-excitation (SE) block, consisting of global average pooling and two fully connected layers, was embedded in the ghost module in place of each linear transformation, as shown in Figure 2b. The weights calculated by the SE blocks were multiplied channel-wise with the input convolution results and then concatenated with the original convolution results to generate the final feature maps.
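The channel reweighting performed by the embedded SE block can be sketched in pure Python as follows; the tiny feature maps and the weight matrices `w1`/`w2` used below are toy values for illustration only, and the rest of the ghost module (cheap feature generation and concatenation) is omitted for brevity.

```python
import math

def se_reweight(feats, w1, w2):
    """Squeeze-and-excitation over a list of C channel maps (each H x W).

    Squeeze: global average pooling, one scalar per channel.
    Excite:  two fully connected layers (w1: C -> C/r, w2: C/r -> C)
             with ReLU and sigmoid, yielding one weight per channel.
    The resulting weights rescale each channel map, as in Figure 2b.
    """
    # Squeeze: global average pool per channel.
    pooled = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feats]
    # Excite: FC -> ReLU -> FC -> sigmoid.
    hidden = [max(0.0, sum(w * p for w, p in zip(col, pooled))) for col in w1]
    weights = [1 / (1 + math.exp(-sum(w * h for w, h in zip(col, hidden))))
               for col in w2]
    # Scale each channel map by its learned weight.
    return [[[v * s for v in row] for row in ch] for ch, s in zip(feats, weights)]
```

For example, with two 2 × 2 channels and a reduction to one hidden unit, each channel map is scaled by a sigmoid weight derived from the pooled channel statistics, which is the cross-channel interaction that the plain linear transformation in the original ghost module lacks.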

2.6. Model Architecture

As shown in Figure 3, DAFDNet uses the popular encoder–decoder framework to first encode the input image by focusing on the attention ghost convolutional module and Gabor-filter convolution. The detailed structure of the DAFDNet framework is shown in Table 2. The input image is manipulated with the ghost module, and then two down-sampling operations are performed to gain global feature maps with different resolutions, i.e., GhoM1, GhoM2, and GhoM3. The output feature map GhoMi in the ghost module is represented as follows:
$$\mathrm{GhoM}_i = \mathrm{GhostConv}(G_i, L_i), \quad i = 1, 2, \ldots, n$$
where $L_i$ is the $i$th input feature map and $G_i$ is the $i$th ghost module.
In the process of bypassing the Gabor convolution, we used an 8-direction Gabor with diverse scales to extract the boundary of the FNF, and then performed two down-sampling operations to generate two additional batches of shrinking feature maps. Then, we obtained the feature maps Gabi from the Gabor convolution, which are represented as follows:
$$\mathrm{Gab}_i = \mathrm{GaborConv}(\mathrm{GF}_i(\theta, s), L_i), \quad i = 1, 2, \ldots, n$$
where GFi(θ,s) is the Gabor filter with directions θ and scales s. Afterwards, in our study, Gab1, Gab2, and Gab3 were concatenated with GhoM1, GhoM2, and GhoM3, respectively, to obtain the aggregated feature maps GG1, GG2, and GG3, expressed by:
$$\mathrm{GG}_i = \mathrm{Concat}(\mathrm{Gab}_i, \mathrm{GhoM}_i), \quad i = 1, 2, \ldots, n$$
Then, the extracted features were refined with a 2 × 2 average pooling layer, a 3 × 3 convolution layer, and a batch normalization layer to obtain the feature map PFi. We concatenated the corresponding GhoMi and PFi with the same resolution to obtain Ai, and each result was then processed with the attention module. The feature maps with a lower resolution, such as A3 and A2 in Figure 3, were resized with a 2 × 2 up-sampling layer before being concatenated with the feature maps of a higher resolution. In general, the process can be expressed as follows:
$$\begin{aligned} \mathrm{Concat}_{i+1} &= \mathrm{Concat}(\mathrm{Upsampling}(A_{i+2}), A_{i+1}) \\ \mathrm{Concat}_{i} &= \mathrm{Concat}(\mathrm{Upsampling}(A_{i+1}), A_{i}), \quad i = 1, 2, \ldots, n \end{aligned}$$
Hence, the output of our network was the result of the sequential operations of Equations (2)–(5) with a 2 × 2 up-sampling layer to recover the feature size as the input image, followed by a 1 × 1 convolution layer to reduce the dimension of the channels.
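The 2 × 2 up-sampling and channel concatenation used in the decoder can be sketched as below; nearest-neighbor interpolation is an assumption here, since the paper does not state the interpolation mode of its up-sampling layers.

```python
def upsample2x(ch):
    """Nearest-neighbor 2x up-sampling of one 2D feature map."""
    out = []
    for row in ch:
        wide = [v for v in row for _ in (0, 1)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                   # duplicate each row
    return out

def concat(maps_a, maps_b):
    """Channel-wise concatenation of two lists of feature maps."""
    return maps_a + maps_b
```

Applying `upsample2x` to every channel of a lower-resolution group of maps before calling `concat` with the higher-resolution group reproduces one decoder step of the process described above.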
The loss function of DAFDNet is the mean-squared error, which can be expressed as:
$$L(\vartheta) = \frac{1}{N} \sum_{i=1}^{N} \left\| \mathrm{DAFDNet}(I_i^{IN}) - I_i^{GT} \right\|_2^2$$
where $\vartheta$ represents the learnable parameters of DAFDNet and $\|\cdot\|_2$ is the L2-norm. $I_i^{IN}$ and $I_i^{GT}$ denote the input images and the corresponding ground truth, respectively.
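In code, the mean-squared-error loss reduces to averaging the squared L2 distance over the batch; representing each image as a flattened pixel list is an illustrative simplification.

```python
def mse_loss(preds, gts):
    """Mean over the batch of the squared L2 distance between each predicted
    map and its ground truth (both flattened to plain pixel lists)."""
    assert len(preds) == len(gts)
    total = 0.0
    for p, g in zip(preds, gts):
        total += sum((a - b) ** 2 for a, b in zip(p, g))
    return total / len(preds)
```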

3. Experiments and Results

3.1. Dataset and Metrics

We extracted anterior–posterior pelvic radiographs of 240 patients with nondisplaced ipsilateral FNFs (Garden type I and II), as noted in the relevant radiologists' reports, from the China Medical University Hospital (CMUH, Taichung, Taiwan) between 2018 and 2020; the images were taken from the PACS (picture-archiving and communication system) database and identified through the RIS (radiology information system). This study was approved by the institutional review board (IRB number: CMUH111-REC2-110). The inclusion criteria were individuals diagnosed with nondisplaced femoral neck fractures classified as Garden type I or II and patients with diagnostic reports. Individuals with displaced femoral neck fractures, preexisting implanted hardware around the fracture site, or musculoskeletal neoplasms were excluded. The 240 radiographs of unilateral nondisplaced fractures were split into 480 right- and left-hip images, of which 240 were normal and 240 contained nondisplaced fractures. There was no direct interaction with the patients, as we acquired de-identified data. This study was in accordance with the ethical standards of the institutional and national committee on human experimentation and conducted according to the guidelines of the Declaration of Helsinki. The radiographic images were augmented with randomized rotations from −15 to +15 degrees and a magnification reduction of 0.05 times to increase the number of images to 3840, of which 3000 were used for training and 840 for testing. Two senior orthopedic surgeons independently annotated the femoral neck region and the fracture line. In our algorithm, the femoral neck region was used to train the DCNN for ROI segmentation and the fracture line was used to train the DCNN for fracture detection.
All labeled images were made under the guidance of a professional orthopedist, and new images of 1024 × 1024 pixels were extracted accordingly to reduce the computing time. We used the intersection-over-union (IOU) value between the predicted FNF region and the labeled region as the assessment metric, defined as $\mathrm{IOU} = |A \cap B| / |A \cup B|$, where $\cap$ and $\cup$ denote the intersection and union of two sets, A is the predicted region, and B is the labeled region. In addition, we used the Dice and Jaccard coefficients as evaluation indicators, with the following formulas:
$$\mathrm{Dice} = \frac{2\,|A \cap B|}{|A| + |B|}$$
$$\mathrm{Jaccard} = \frac{|A \cap B|}{|A \cup B|}$$
where A is the predicted region and B is the label region.
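Treating the predicted and labeled regions as sets of pixel coordinates, all three metrics can be computed directly; note that, on set inputs, the Jaccard formula coincides with the IOU. This is a minimal sketch, not the authors' evaluation code.

```python
def iou(a, b):
    """IOU = |A ∩ B| / |A ∪ B| for pixel-coordinate sets a (predicted) and b (labeled)."""
    return len(a & b) / len(a | b)

def dice(a, b):
    """Dice = 2|A ∩ B| / (|A| + |B|)."""
    return 2 * len(a & b) / (len(a) + len(b))

jaccard = iou  # identical formula for set-valued regions
```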

3.2. Implementation Details

Figure 4 shows the FNF detection strategy used in this paper, which consists of two phases, namely femoral neck localization and fracture detection. In the first stage, the original image is fed into a segmentation network to accurately localize the femoral neck. In the second stage, a network trained on surgeon-made labels localizes the exact fracture position within the femoral neck image output by the first stage.
We augmented the data by rotating and rescaling the images and labels with various degrees and scales. During the training process, 12 images were randomly chosen as the input of each training batch. The model was trained using the Adam optimizer with a learning rate initialized to 1 × 10−5 and increased to 4 × 10−5 in steps of 1 × 10−5. Two typical DCNNs, U-Net [21] and DenseNet [24], were used as comparison algorithms; their code was downloaded from the GitHub repositories shared by the original authors. Up-sampling layers were added to DenseNet to achieve femoral neck segmentation. The parameters of all three methods were optimized until the networks converged.
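The learning-rate ramp described above can be sketched as a simple schedule; the paper does not state how often a step is taken, so one step per epoch is assumed here for illustration.

```python
def lr_schedule(epoch, base=1e-5, step=1e-5, cap=4e-5):
    """Ramp the learning rate from `base` toward `cap` in fixed increments
    (assumed one per epoch), matching the 1e-5 -> 4e-5 schedule used with Adam."""
    return min(base + epoch * step, cap)
```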

3.3. Results and Comparison

Figure 5 shows pelvic radiographic images with FNFs and compares the detection results of U-Net, DenseNet, and our proposed DAFDNet. U-Net is widely used in image segmentation and can roughly delineate the contour of a target, but it handles the cracks of nondisplaced fractures poorly: these cracks are very small and inconspicuous, and U-Net is better suited to more obvious, coarse segmentation. DenseNet is a related method used for image classification and detection; thanks to its dense connections, it can extract the features of smaller objects in an image, strengthening feature reuse and reducing the number of parameters. However, during feature extraction, its down-sampling discards the information of many small, dense objects, which reduces detection accuracy, and the deeper layers of DenseNet may still suffer from vanishing gradients [33], so it may also be unsuited to such small cracks. As shown in the enlarged view of Figure 5a, the fractures were labeled by an experienced surgeon and delineated with blue lines. The coordinates of both the labeled and predicted results were calculated and plotted in different colors in Figure 5b–d, where the labeled fracture region is plotted in red and the predicted fracture regions are plotted in yellow. The detection results show that our proposed method was the closest to the actual size and area of the labeled region, while the other methods produced areas several times larger than the labeled region. Accordingly, our proposed method attained the largest IOU value among the three methods, indicating that the fracture it detected was closest to the ground truth.
Figure 6 shows a comparison of the IOU values of our proposed DAFDNet, DenseNet, and U-Net. We can see that DAFDNet outperformed the other two methods, and most of the IOU values of DAFDNet were much larger than those of DenseNet and U-Net.
In Table 3, we divided the IOU values into three categories. A total of 73.1% of the DAFDNet results were above 0.5 (50%), while neither of the other two methods achieved this. A total of 21.7% of the DAFDNet results were between 0.2 and 0.5 (20–50%), while more than 90% of the DenseNet and U-Net results fell below 0.2 (0–20%). The table also lists the average IOU values, which were 0.648, 0.084, and 0.062 (64.8%, 8.4%, and 6.2%) for DAFDNet, DenseNet, and U-Net, respectively. The average Dice values were 0.542, 0.06, and 0.041 (54.2%, 6.0%, and 4.1%), and the average Jaccard values were 0.426, 0.031, and 0.021 (42.6%, 3.1%, and 2.1%), respectively. These results show that DAFDNet outperformed the U-Net and DenseNet methods in terms of the IOU, Dice, and Jaccard values.
Figure 7 shows the detection results of DAFDNet for different IOU values, divided into three classes: values above 50% are shown in (a–c), those between 20% and 50% in (d–f), and those below 20% in (g–i). In the enlarged views of all sub-images, the red rectangle indicates the labeled region (ground truth) and the yellow rectangle is the fracture region predicted by DAFDNet. We conclude that DAFDNet achieved better performance than the DenseNet and U-Net comparison methods. The diagnostic correctness of DAFDNet exceeded 94.8%, and the method could assist general practitioners and orthopedic surgeons in the initial diagnosis of Garden type I and II fractures, avoiding misclassification and improving diagnostic correctness.

4. Discussion and Conclusions

In this study, we proposed a new method for detecting FNFs. The results show that our method was effective in detecting precise fracture locations and outperformed the comparative methods. The proposed method operates in two phases, i.e., localization of the femoral neck and fracture detection. The benefit of the localization phase is that, by localizing the ROI in the original image, the input data size of the DCNN can be greatly reduced. On the one hand, this saves substantial computation time; on the other hand, disturbances such as regions of the pelvis image with a similar gray distribution can be excluded, improving detection accuracy. In the fracture-detection stage, because the orientation of a fracture is random, DAFDNet introduces an orientation-aware algorithm to detect fracture directionality. The band-pass Gabor filter enables the network to detect image gray-level changes by adjusting its frequency and orientation. In addition, an attention mechanism and ghost convolution were incorporated to improve the performance of DAFDNet.
Although deep learning has made great advances in medical image processing, few publications have shown clinical utility for detecting FNFs, especially nondisplaced Garden I and Garden II fractures. The success of our study in detecting the precise location of nondisplaced fractures provides the first evidence that DCNNs can help physicians improve the diagnostic accuracy of nondisplaced Garden I and Garden II fractures. As shown by the predicted rectangles and IOU values, our proposed method obtained better results than a physician diagnosis. However, fractures were not detected in about 5.2% of the tested images due to poor image contrast; better radiographic image quality would therefore greatly improve our approach.
Elderly patients suffering from nondisplaced FNFs may initially be missed on plain films. The overall sensitivity of plain film radiography (anteroposterior pelvis and lateral hip views) to hip fractures is about 90–98% [34]. A surgical intervention for nondisplaced FNFs usually involves closed reduction and internal fixation with multiple cannulated screws or sliding hip screws. Early surgery (within 48 h of admission) after a hip fracture reduces the hospital stay and may also reduce complications and mortality [35]. Delayed recognition or misdiagnosis of nondisplaced FNFs may lead to further displacement of the fracture site. A displaced FNF is the major risk factor for avascular necrosis of the femoral head and fracture nonunion. Elderly patients with displaced FNFs should be treated with bipolar hemiarthroplasty; compared with closed reduction and internal fixation with multiple cannulated screws, the surgical time of bipolar hemiarthroplasty is significantly longer and perioperative blood loss is significantly increased [36]. Therefore, recognizing nondisplaced FNFs as soon as possible is crucial for better outcomes.

Author Contributions

Conceptualization, S.L.H. and Y.Y.C.; Methodology, S.L.H.; Software, J.L.C. and C.H.C.; Validation, J.L.C. and C.H.C.; Formal analysis, C.J.H.; Investigation, J.L.C.; Resources, S.L.H. and C.J.H.; Data curation, J.L.C., C.H.C. and C.J.H.; Writing—original draft, S.L.H. and Y.Y.C.; Writing—review & editing, Y.Y.C.; Supervision, S.L.H.; Project administration, Y.Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets used or analyzed during the current study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Papadimitriou, N.; Tsilidis, K.; Orfanos, P.; Benetou, V.; Ntzani, E.; Soerjomataram, I.; Künn-Nelen, A.; Pettersson-Kymmer, U.; Eriksson, S.; Brenner, H.; et al. Burden of hip fracture using disability-adjusted life-years: A pooled analysis of prospective cohorts in the CHANCES consortium. Lancet Public Health 2017, 2, e239–e246. [Google Scholar] [CrossRef]
  2. Brauer, C.A.; Coca-Perraillon, M.; Cutler, D.M.; Rosen, A.B. Incidence and Mortality of Hip Fractures in the United States. JAMA 2009, 302, 1573–1579. [Google Scholar] [CrossRef] [PubMed]
  3. Marks, R. Hip fracture epidemiological trends, outcomes, and risk factors, 1970–2009. Int. J. Gen. Med. 2009, 3, 1–17. [Google Scholar] [CrossRef]
  4. Garden, R.S. Low-Angle Fixation in Fractures of the Femoral Neck. J. Bone Jt. Surgery. Br. Vol. 1961, 43, 647–663. [Google Scholar] [CrossRef]
  5. Florschutz, A.V.; Langford, J.R.; Haidukewych, G.J.; Koval, K.J. Femoral Neck Fractures: Current Management. J. Orthop. Trauma 2015, 29, 3. [Google Scholar] [CrossRef] [PubMed]
  6. Hoskins, W.; Rayner, J.; Sheehy, R.; Claireaux, H.; Bingham, R.; Santos, R.; Bucknill, A.; Griffin, X. The effect of patient, fracture and surgery on outcomes of high energy neck of femur fractures in patients aged 15–50. HIP Int. 2019, 29, 77–82. [Google Scholar] [CrossRef]
  7. Gjertsen, J.; Fevang, J.; Matre, K.; Vinje, T.; Engesæter, L. Clinical outcome after undisplaced femoral neck fractures. Acta Orthop. 2011, 82, 268–274. [Google Scholar] [CrossRef]
  8. Mutasa, S.; Varada, S.; Goel, A.; Wong, T.; Rasiej, M. Advanced Deep Learning Techniques Applied to Automated Femoral Neck Fracture Detection and Classification. J. Digit. Imaging 2020, 33, 1209–1217. [Google Scholar] [CrossRef]
  9. Kim, K.C.; Ha, Y.C.; Kim, T.Y.; Choi, J.A.; Koo, K.H. Initially missed occult fractures of the proximal femur in elderly patients: Implications for need of operation and their morbidity. Arch. Orthop. Trauma Surg. 2010, 130, 915–920. [Google Scholar] [CrossRef]
  10. Prevedello, L.M.; Erdal, B.S.; Ryu, J.L.; Little, K.J.; Demirer, M.; Qian, S.; White, R.D. Automated Critical Test Findings Identification and Online Notification System Using Artificial Intelligence in Imaging. Radiology 2017, 285, 923–931. [Google Scholar] [CrossRef]
  11. Al Arif, S.M.M.R.; Knapp, K.; Slabaugh, G. Fully automatic cervical vertebrae segmentation framework for X-ray images. Comput. Methods Programs Biomed. 2018, 157, 95–111. [Google Scholar] [CrossRef]
  12. Gale, W.; Oakden-Rayner, L.; Carneiro, G.; Palmer, L. Detecting hip fractures with radiologist-level performance using deep neural networks. arXiv 2017, arXiv:1711.06504. [Google Scholar]
  13. Kazi, A.; Albarqouni, S.; Sanchez, A.J.; Kirchhoff, S.; Biberthaler, P.; Navab, N.; Mateus, D. Automatic Classification of Proximal Femur Fractures Based on Attention Models. In Proceedings of the International Workshop on Machine Learning in Medical Imaging, Quebec City, QC, Canada, 10 September 2017; pp. 70–78. [Google Scholar]
  14. Cheng, C.; Ho, T.Y.; Lee, T.Y.; Chang, C.; Chou, C.; Chen, C.; Chung, I.; Liao, C. Application of a deep learning algorithm for detection and visualization of hip fractures on plain pelvic radiographs. Eur. Radiol. 2019, 29, 5469–5477. [Google Scholar] [CrossRef] [PubMed]
  15. Olczak, J.; Fahlberg, N.; Maki, A.; Razavian, A.S.; Jilert, A.; Stark, A.; Sköldenberg, O.; Gordon, M. Artificial intelligence for analyzing orthopedic trauma radiographs: Deep learning algorithms—Are they on par with humans for diagnosing fractures? Acta Orthop. 2017, 88, 581–586. [Google Scholar] [CrossRef] [PubMed]
  16. Cheng, C.T.; Chen, C.C.; Cheng, F.J.; Chen, H.W.; Su, Y.S.; Yeh, C.N.; Chung, I.F.; Liao, C. A Human-Algorithm Integration System for Hip Fracture Detection on Plain Radiography: System Development and Validation Study. JMIR Med. Inform. 2020, 8, e19416. [Google Scholar] [CrossRef] [PubMed]
  17. Luan, S.; Chen, C.; Zhang, B.; Han, J.; Liu, J. Gabor Convolutional Networks. IEEE Trans. Image Process. 2018, 27, 4357–4366. [Google Scholar] [CrossRef]
  18. Yao, H.; Li, C.; Dan, H.; Yu, W. Gabor Feature Based Convolutional Neural Network for Object Recognition in Natural Scene. In Proceedings of the 2016 3rd International Conference on Information Science and Control Engineering (ICISCE), Beijing, China, 8–10 July 2016. [Google Scholar]
  19. Sarwar, S.S.; Panda, P.; Roy, K. Gabor Filter Assisted Energy Efficient Fast Learning Convolutional Neural Networks. In Proceedings of the IEEE/ACM International Symposium on Low Power Electronics and Design (ISLPED), Taipei, Taiwan, 24–26 July 2017; pp. 1–6. [Google Scholar]
  20. Kermany, D.S.; Goldbaum, M.; Cai, W.; Valentim, C.C.S.; Liang, H.; Baxter, S.L.; McKeown, A.; Yang, G.; Wu, X.; Yan, F.; et al. Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning. Cell 2018, 172, 1122–1131.e9. [Google Scholar] [CrossRef] [PubMed]
  21. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015. [Google Scholar]
  22. Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar]
  23. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar]
  24. Huang, G.; Liu, Z.; van der Maaten, L.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017. [Google Scholar]
  25. Hou, Q.; Zhou, D.; Feng, J. Coordinate Attention for Efficient Mobile Network Design. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 13708–13717. [Google Scholar]
26. Oktay, O.; Schlemper, J.; Le Folgoc, L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.; Kainz, B. Attention U-Net: Learning Where to Look for the Pancreas. arXiv 2018, arXiv:1804.03999. [Google Scholar]
27. Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Wu, E. Squeeze-and-Excitation Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018. [Google Scholar]
  28. Park, J.; Woo, S.; Lee, J.Y. BAM: Bottleneck Attention Module. arXiv 2018, arXiv:1807.06514. [Google Scholar]
  29. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; Springer: Cham, Switzerland, 2018. [Google Scholar]
  30. Chen, L.; Zhang, H.; Xiao, J.; Nie, L.; Shao, J.; Liu, W.; Chua, T.-S. SCA-CNN: Spatial and Channel-Wise Attention in Convolutional Networks for Image Captioning. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 6298–6306. [Google Scholar]
  31. Chen, H.; Gu, J.; Zhang, Z. Attention in Attention Network for Image Super-Resolution. arXiv 2021, arXiv:2104.09497. [Google Scholar]
  32. Han, K.; Wang, Y.; Tian, Q.; Guo, J.; Xu, C. GhostNet: More Features from Cheap Operations. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020. [Google Scholar]
  33. Yang, L.; Chen, G.; Ci, W. Multiclass objects detection algorithm using DarkNet-53 and DenseNet for intelligent vehicles. EURASIP J. Adv. Signal Process. 2023, 2023, 85. [Google Scholar] [CrossRef]
34. Hakkarinen, D.K.; Banh, K.V.; Hendey, G.W. Magnetic resonance imaging identifies occult hip fractures missed by 64-slice computed tomography. J. Emerg. Med. 2012, 43, 303–307. [Google Scholar] [CrossRef] [PubMed]
  35. Khan, S.K.; Kalra, S.; Khanna, A.; Thiruvengada, M.M.; Parker, M.J. Timing of surgery for hip fractures: A systematic review of 52 published studies involving 291,413 patients. Injury 2009, 40, 692–697. [Google Scholar] [CrossRef]
36. Dolatowski, F.C.; Frihagen, F.; Bartels, S.; Opland, V.; Benth, J.; Talsnes, O.; Hoelsbrekken, S.E.; Utvåg, S.E. Screw Fixation versus Hemiarthroplasty for Nondisplaced Femoral Neck Fractures in Elderly Patients: A multicenter randomized controlled trial. J. Bone Jt. Surg. Am. 2019, 101, 136–144. [Google Scholar] [CrossRef]
Figure 1. A normal lateral view of the pelvis and proximal femur.
Figure 2. Original ghost convolution and SE ghost module. (a) Ghost convolution, which uses a simple linear transformation to generate more features; (b) SE ghost module, which incorporates the SE attention mechanism into the ghost module to discriminate the weight of each channel.
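The two blocks in Figure 2 lend themselves to a compact sketch. The NumPy code below is an illustrative assumption, not the paper's exact layers: it uses a 1 × 1 convolution for the primary features and a per-channel scale as a stand-in for the cheap depthwise linear transform that generates the ghost maps, then applies squeeze-and-excitation channel reweighting as in panel (b).

```python
import numpy as np

def ghost_se(x, w_primary, ghost_scale, w1, w2):
    """Sketch: ghost features + squeeze-and-excitation (SE) reweighting.

    x           : input feature map, shape (H, W, C_in)
    w_primary   : 1x1 conv weights, shape (C_in, C_half) -- primary features
    ghost_scale : per-channel scales, shape (C_half,) -- stand-in for the
                  cheap linear transform that generates the ghost maps
    w1, w2      : SE bottleneck weights, shapes (C, C//r) and (C//r, C)
    """
    # Primary feature maps from an ordinary (here 1x1) convolution.
    primary = np.tensordot(x, w_primary, axes=([2], [0]))      # (H, W, C_half)
    # "Ghost" maps from a cheap linear transform of the primary maps.
    ghost = primary * ghost_scale                              # (H, W, C_half)
    feats = np.concatenate([primary, ghost], axis=2)           # (H, W, C)

    # Squeeze: global average pool, one value per channel.
    z = feats.mean(axis=(0, 1))                                # (C,)
    # Excite: bottleneck MLP + sigmoid gives a weight in (0, 1) per channel.
    s = 1.0 / (1.0 + np.exp(-(np.maximum(z @ w1, 0.0) @ w2)))  # (C,)
    # Reweight each channel, so informative channels dominate.
    return feats * s
```

Ghost convolution halves the number of expensive convolutions needed for a given output width, which is why the module in Figure 2 can enlarge the channel count cheaply before the SE step discriminates among those channels.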
Figure 3. Schematic diagram of the proposed DAFDNet.
Figure 4. Workflow of the fracture-detection strategy. Two phases are included: femoral neck localization and fracture detection.
Figure 5. Radiographic images of pelvis and FNFs and the detection results of U-Net, DenseNet, and our proposed DAFDNet. The arrow indicates the location of the FNF and the dashed line indicates the zoomed-in area. (a) Imaging of the pelvis with FNF labeled by blue lines in the magnified view. (b) Fracture detected by DAFDNet, (c) fracture detected by U-Net, and (d) fracture location detected by DenseNet. As shown in the enlarged view, the fracture-detection range for each method is in the yellow rectangle and the physician-delineated labels are in the red rectangle.
Figure 6. Comparison of IOU values of all the test images predicted by the three methods. Most of the IOU values of DAFDNet were larger than those of DenseNet and U-Net.
Figure 7. Detection results of DAFDNet with different IOU values. In the enlarged views, the red rectangle illustrates the region of ground truth and the yellow rectangle is the fracture region predicted by DAFDNet.
Table 1. Comparison of misclassification rate in nondisplaced FNFs in radiographs among different professional physicians.
Profession | Misrecognized Fracture Rate | p-Value 1
ER doctor | 4.19% | <0.0001
PGY-1 doctor | 7.87% | <0.0001
Senior orthopedic doctor | 2.44% | <0.0001
1 The p-values were estimated using the chi-squared test.
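The footnote's chi-squared test reduces, for a 2 × 2 contingency table, to a one-line statistic. The stdlib sketch below is generic; the paper's actual comparison groups and counts are not given in this excerpt, so the counts in the usage example are purely illustrative.

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for the 2x2 contingency table
    [[a, b], [c, d]] (e.g., misread vs. correctly read radiographs
    for two reader groups)."""
    n = a + b + c + d
    # Shortcut form of sum((O - E)^2 / E) for a 2x2 table.
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Illustrative counts only: 10/100 misread in one group, 30/100 in another.
stat = chi2_2x2(10, 90, 30, 70)  # 12.5, well above the 0.05 cutoff of 3.84
```

With 1 degree of freedom, a statistic above 3.84 corresponds to p < 0.05, so a rate difference of this size would be significant.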
Table 2. The detailed structure of the DAFDNet framework.
Stage | Layer | Size | Channel
Input | Input | 1024 × 1024 | 1
Gabor | Gab1 | 1024 × 1024 | 32
Gabor | Gab2 | 512 × 512 | 32
Gabor | Gab3 | 256 × 256 | 32
Ghost | GhoM1 | 1024 × 1024 | 32
Ghost | GhoM2 | 512 × 512 | 64
Ghost | GhoM3 | 256 × 256 | 128
Gabor + Ghost | GG1 | 1024 × 1024 | 64
Gabor + Ghost | GG2 | 512 × 512 | 96
Gabor + Ghost | GG3 | 256 × 256 | 150
PF | PF1 | 1024 × 1024 | 32
PF | PF2 | 512 × 512 | 64
PF | PF3 | 256 × 256 | 128
Attention Module | A1 | 1024 × 1024 | 64
Attention Module | A2 | 512 × 512 | 128
Attention Module | A3 | 256 × 256 | 256
Output | Output | 1024 × 1024 | 1
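The Gabor stage applies fixed orientation- and frequency-selective kernels, in the spirit of refs. [17,18,19]. Below is a minimal stdlib sketch of the standard 2-D Gabor kernel; the parameter names follow the common convention, and the values in the usage line are illustrative, since the paper's actual σ, θ, λ settings are not listed in this excerpt.

```python
import math

def gabor_kernel(size, sigma, theta, lambd, psi=0.0, gamma=0.5):
    """Real part of a 2-D Gabor kernel of odd side length `size`.

    sigma : Gaussian envelope width      theta : filter orientation (rad)
    lambd : sinusoid wavelength          psi   : phase offset
    gamma : spatial aspect ratio of the envelope
    """
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            # Gaussian envelope times an oriented cosine carrier.
            row.append(math.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
                       * math.cos(2 * math.pi * xr / lambd + psi))
        kernel.append(row)
    return kernel

# Illustrative parameters: a horizontal-frequency kernel.
k = gabor_kernel(7, sigma=2.0, theta=0.0, lambd=4.0)
```

Convolving a radiograph with a bank of such kernels at several orientations emphasizes the thin, oriented edges typical of nondisplaced fracture lines, which is what the Gab1–Gab3 feature maps feed into the rest of the network.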
Table 3. Statistical results of the IOU, Dice, and Jaccard values.
Method | U-Net [21] (%) | DenseNet [24] (%) | DAFDNet (%)
IOU in [50, 100] | 0 | 0 | 73.1
IOU in [20, 50] | 0.4 | 1.6 | 21.7
IOU in [0, 20] | 99.6 | 98.4 | 5.2
Average IOU | 6.2 | 8.4 | 64.8
Dice | 4.1 | 6.0 | 54.2
Jaccard | 2.1 | 3.1 | 42.6
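The overlap metrics in Table 3 compare a predicted region against the physician-delineated label. A minimal stdlib sketch over pixel-coordinate sets (the set representation is an illustrative choice; the paper may compute these over masks or bounding boxes):

```python
def iou(pred, truth):
    """Intersection over union of two pixel-coordinate sets."""
    union = len(pred | truth)
    return len(pred & truth) / union if union else 0.0

def dice(pred, truth):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    total = len(pred) + len(truth)
    return 2 * len(pred & truth) / total if total else 0.0

# Two 4x4 regions overlapping in a 2x4 strip: IOU = 8/24, Dice = 16/32.
pred = {(x, y) for x in range(4) for y in range(4)}
truth = {(x, y) for x in range(2, 6) for y in range(4)}
```

Note that on identical inputs the Jaccard index equals IOU by definition, so the separate Jaccard row in Table 3 presumably averages a different quantity (e.g., per-image box overlap vs. pixel overlap), which this excerpt does not specify.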
Hsieh, S.L.; Chiang, J.L.; Chuang, C.H.; Chen, Y.Y.; Hsu, C.J. A Computer-Assisted Diagnostic Method for Accurate Detection of Early Nondisplaced Fractures of the Femoral Neck. Biomedicines 2023, 11, 3100. https://doi.org/10.3390/biomedicines11113100