Deep Learning for Computer-Aided Diagnosis in Biomedical Imaging

A special issue of Diagnostics (ISSN 2075-4418). This special issue belongs to the section "Machine Learning and Artificial Intelligence in Diagnostics".

Deadline for manuscript submissions: closed (31 March 2021) | Viewed by 64792

Special Issue Editor

Dr. Alexandr Kalinin
1. Shenzhen Research Institute of Big Data, Shenzhen, China
2. University of Michigan, Ann Arbor, MI, USA
Interests: biomedical imaging; computer vision; machine learning; visual analytics

Special Issue Information

Dear Colleagues,

Deep learning has led to dramatic advances in the analysis of images and video and has demonstrated the potential to transform computer-aided diagnosis in biomedical imaging. Innovations in algorithm and software development, together with the availability of larger annotated biomedical imaging datasets, are driving improvements in the automated classification, localization, retrieval, and segmentation of molecules, cells, lesions, nodules, tumors, organs, and other structures of interest. Deep neural networks have also been employed for medical image generation and enhancement, and for integrating images with biomedical data from other modalities. However, applying deep learning to computer-assisted diagnosis in biomedical imaging also poses important challenges, such as learning from small, imbalanced, and noisy data, estimating model uncertainty, evaluating and interpreting models, limiting computing requirements, and designing and building interfaces between algorithms and clinicians. This Special Issue will focus on recent advances, prospects, and challenges in deep learning applications for computer-aided diagnosis in biomedical imaging.

Dr. Alexandr Kalinin
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Diagnostics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • biomedical image analysis
  • computer-assisted diagnosis
  • computer vision
  • artificial intelligence in biomedicine
  • self- and semi-supervised learning
  • one- and few-shot learning
  • transfer learning
  • model interpretability
  • biomedical image augmentation
  • biomedical image segmentation
  • object detection and localization
  • image generation and enhancement
  • 2D and 3D modeling
  • 2D and 3D reconstruction
  • image-guided surgery and intervention
  • biomarkers
  • personalized medicine
  • on-device deep learning

Published Papers (16 papers)


Research

20 pages, 23305 KiB  
Article
Dilated Semantic Segmentation for Breast Ultrasonic Lesion Detection Using Parallel Feature Fusion
by Rizwana Irfan, Abdulwahab Ali Almazroi, Hafiz Tayyab Rauf, Robertas Damaševičius, Emad Abouel Nasr and Abdelatty E. Abdelgawad
Diagnostics 2021, 11(7), 1212; https://doi.org/10.3390/diagnostics11071212 - 05 Jul 2021
Cited by 61 | Viewed by 3921
Abstract
Breast cancer is a growing threat, and death rates in developing countries are rising rapidly, so early detection is critical to lowering mortality. Several researchers have worked on breast cancer segmentation and classification using various imaging modalities; ultrasound is one of the most cost-effective, with high sensitivity for diagnosis. The proposed study segments ultrasonic breast lesion images using a Dilated Semantic Segmentation Network (Di-CNN) combined with a morphological erosion operation. For feature extraction, we used the deep neural network DenseNet201 with transfer learning, alongside a proposed 24-layer CNN whose transfer-learning-based features further validate and enrich the target-intensity features. To classify the nodules, the feature vectors obtained from DenseNet201 and the 24-layer CNN were fused in parallel. The proposed methods were evaluated using 10-fold cross-validation on various vector combinations. With a Support Vector Machine (SVM) classifier, the CNN-activated and DenseNet201-activated feature vectors achieved accuracies of 90.11% and 98.45%, respectively, while the fused feature vector with SVM outperformed both at 98.9%. Compared with recent algorithms, the proposed algorithm achieves a better breast cancer diagnosis rate.
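
For orientation, here is a minimal sketch of the fusion-and-classify stage the abstract describes: pooled DenseNet201 features concatenated with features from a second CNN, then fed to an SVM. Feature dimensions, pooling, and SVM settings are illustrative assumptions, and `custom_cnn_features` stands in for the paper's 24-layer network.

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.svm import SVC

densenet = models.densenet201(weights="IMAGENET1K_V1")
densenet.classifier = torch.nn.Identity()  # keep the 1920-d pooled features
densenet.eval()

@torch.no_grad()
def densenet_features(batch):              # batch: (N, 3, 224, 224) tensor
    return densenet(batch).cpu().numpy()

def fuse(f_a: np.ndarray, f_b: np.ndarray) -> np.ndarray:
    """Parallel fusion by concatenating the two feature vectors."""
    return np.concatenate([f_a, f_b], axis=1)

# X_dense, X_custom: per-image features from the two networks;
# y: benign/malignant labels. A hypothetical training call:
# clf = SVC(kernel="rbf").fit(fuse(X_dense, X_custom), y)
```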

17 pages, 1944 KiB  
Article
NanoChest-Net: A Simple Convolutional Network for Radiological Studies Classification
by Juan Eduardo Luján-García, Yenny Villuendas-Rey, Itzamá López-Yáñez, Oscar Camacho-Nieto and Cornelio Yáñez-Márquez
Diagnostics 2021, 11(5), 775; https://doi.org/10.3390/diagnostics11050775 - 26 Apr 2021
Cited by 2 | Viewed by 2465
Abstract
The new coronavirus disease (COVID-19), pneumonia, tuberculosis, and breast cancer have one thing in common: they can all be diagnosed from radiological studies such as X-ray images. Combined with such studies, computer-aided diagnosis (CAD) is a very useful technique for analyzing and detecting abnormalities in images generated by X-ray machines. Deep learning techniques such as convolutional neural networks (CNNs) can help physicians obtain an effective pre-diagnosis; however, popular CNNs are enormous models that need huge amounts of data to achieve good results. In this paper, we introduce NanoChest-net, a small but effective CNN model for classifying diseases such as tuberculosis, pneumonia, and COVID-19 from radiological studies. On two of the five datasets used in the experiments, NanoChest-net obtained the best results, while on the remaining datasets it proved as good as state-of-the-art baselines such as ResNet50, Xception, and DenseNet121. NanoChest-net thus classifies radiological studies on the same level as state-of-the-art algorithms, with the advantage of requiring far fewer operations.
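
As a rough illustration of the "small but effective" idea, a compact radiograph classifier might look like the sketch below; the exact NanoChest-net layer configuration is given in the paper, so every depth and channel count here is an assumption.

```python
import torch.nn as nn

class SmallChestNet(nn.Module):
    """Illustrative compact CNN for grayscale chest radiographs."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True), nn.MaxPool2d(2))
        self.features = nn.Sequential(block(1, 16), block(16, 32),
                                      block(32, 64), block(64, 128))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(128, num_classes))

    def forward(self, x):
        return self.head(self.features(x))
```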

18 pages, 29796 KiB  
Article
A Novel Deep Learning Method for Recognition and Classification of Brain Tumors from MRI Images
by Momina Masood, Tahira Nazir, Marriam Nawaz, Awais Mehmood, Junaid Rashid, Hyuk-Yoon Kwon, Toqeer Mahmood and Amir Hussain
Diagnostics 2021, 11(5), 744; https://doi.org/10.3390/diagnostics11050744 - 21 Apr 2021
Cited by 69 | Viewed by 5588
Abstract
A brain tumor is an abnormal growth in brain cells that causes damage to various blood vessels and nerves in the human body. Early and accurate diagnosis of a brain tumor is of foremost importance to avoid future complications, and precise segmentation of tumors provides a basis for surgical planning and treatment. Manual detection from MRI images is complex and slow in cases where patient survival depends on timely treatment, and its performance relies on domain expertise, while computerized detection remains challenging due to significant variations in tumor location and structure, i.e., irregular shapes and ambiguous boundaries. In this study, we propose a custom Mask Region-based Convolutional Neural Network (Mask R-CNN) with a DenseNet-41 backbone, trained via transfer learning, for precise classification and segmentation of brain tumors. Our method is evaluated on two different benchmark datasets using various quantitative measures. Comparative results show that the custom Mask R-CNN can more precisely detect tumor locations using bounding boxes and return segmentation masks delineating the exact tumor regions. The proposed model achieved accuracies of 96.3% and 98.34% for segmentation and classification, respectively, demonstrating enhanced robustness compared to state-of-the-art approaches.
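
A hedged sketch of pairing a torchvision Mask R-CNN head with a DenseNet feature extractor: DenseNet-121 stands in for the paper's custom DenseNet-41 backbone, and the anchor and RoI settings follow the torchvision documentation example rather than the paper.

```python
import torchvision
from torchvision.models.detection import MaskRCNN
from torchvision.models.detection.anchor_utils import AnchorGenerator
from torchvision.ops import MultiScaleRoIAlign

# DenseNet-121 as a stand-in backbone; its final dense block has 1024 channels.
backbone = torchvision.models.densenet121(weights="IMAGENET1K_V1").features
backbone.out_channels = 1024

model = MaskRCNN(
    backbone,
    num_classes=2,  # background + tumor (the paper also distinguishes tumor types)
    rpn_anchor_generator=AnchorGenerator(
        sizes=((32, 64, 128, 256),), aspect_ratios=((0.5, 1.0, 2.0),)),
    box_roi_pool=MultiScaleRoIAlign(["0"], output_size=7, sampling_ratio=2),
    mask_roi_pool=MultiScaleRoIAlign(["0"], output_size=14, sampling_ratio=2),
)
```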

22 pages, 2332 KiB  
Article
Multi-Level Seg-Unet Model with Global and Patch-Based X-ray Images for Knee Bone Tumor Detection
by Nhu-Tai Do, Sung-Taek Jung, Hyung-Jeong Yang and Soo-Hyung Kim
Diagnostics 2021, 11(4), 691; https://doi.org/10.3390/diagnostics11040691 - 13 Apr 2021
Cited by 22 | Viewed by 6129
Abstract
Tumor classification and segmentation problems have attracted interest in recent years. In contrast to the abundance of studies examining brain, lung, and liver cancers, there has been a lack of studies using deep learning to classify and segment knee bone tumors. In this study, our objective is to assist physicians in radiographic interpretation by detecting and classifying knee bone regions as normal, benign-tumor, or malignant-tumor regions. We propose the Seg-Unet model with global and patch-based approaches to deal with the challenges posed by the small size, varied appearance, and uncommon nature of bone lesions. Our model contains classification, tumor segmentation, and high-risk region segmentation branches that learn mutual benefits between the global context of the whole image and the local texture at every pixel. The patch-based model improves our performance in malignant-tumor detection. We built a knee bone tumor dataset with the support of the physicians of Chonnam National University Hospital (CNUH). Experiments on the dataset demonstrate that our method outperforms other methods, with an accuracy of 99.05% for classification and an average mean IoU of 84.84% for segmentation. Our results are a significant contribution toward helping physicians detect knee bone tumors.
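
The multi-branch idea (one shared encoder feeding both a segmentation decoder and a classification head) can be sketched as follows; depths, channel counts, and the two-branch simplification are illustrative assumptions, not the published Seg-Unet.

```python
import torch
import torch.nn as nn

class MultiBranchSegNet(nn.Module):
    """Shared encoder with joint classification and segmentation branches."""
    def __init__(self, n_classes: int = 3, n_seg: int = 2):
        super().__init__()
        def conv(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.ReLU(inplace=True))
        self.enc1, self.enc2 = conv(1, 32), conv(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(64, n_classes))
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = conv(64, 32)
        self.seg_head = nn.Conv2d(32, n_seg, 1)

    def forward(self, x):
        s1 = self.enc1(x)                # full-resolution features
        s2 = self.enc2(self.pool(s1))    # downsampled, more global features
        logits_cls = self.cls_head(s2)   # normal / benign / malignant
        d = self.dec(torch.cat([self.up(s2), s1], dim=1))  # skip connection
        return self.seg_head(d), logits_cls
```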

10 pages, 2018 KiB  
Article
Robustness of Deep Learning Algorithm to Varying Imaging Conditions in Detecting Low Contrast Objects in Computed Tomography Phantom Images: In Comparison to 12 Radiologists
by Hae Young Kim, Kyeorye Lee, Won Chang, Youngjune Kim, Sungsoo Lee, Dong Yul Oh, Leonard Sunwoo, Yoon Jin Lee and Young Hoon Kim
Diagnostics 2021, 11(3), 410; https://doi.org/10.3390/diagnostics11030410 - 28 Feb 2021
Cited by 2 | Viewed by 1940
Abstract
The performance of a deep learning algorithm (DLA) in detecting low contrast objects in CT phantom images under various imaging conditions was compared with that of radiologists. For training, 10,000 images were created using the American College of Radiology CT phantom as the background. In half of the images, objects of 3–20 mm size and 5–30 HU contrast difference were generated at random locations. Binary responses were used as the ground truth. For testing, 640 images of the Catphan® phantom were used, half of which had objects of either 5 or 9 mm size with 10 HU contrast difference. Twelve radiologists evaluated the presence of objects on a five-point scale. The performances of the DLA and radiologists were compared across different imaging conditions in terms of area under the receiver operating characteristic curve (AUC), using multi-reader multi-case AUC and Hanley and McNeil tests together with a post-hoc bootstrap analysis. The AUC of the DLA was consistently higher than those of the radiologists across different imaging conditions (p < 0.0001) and was less affected by varying imaging conditions. The DLA thus outperformed the radiologists and showed more robust performance under varying imaging conditions.
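
The resampling idea behind comparing two readers' AUCs on the same cases can be sketched with a simple case-level bootstrap; the paper's formal analysis used multi-reader multi-case methods and the Hanley and McNeil test, so this is only a schematic stand-in.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_diff(y, s_dla, s_reader, n_boot=10000, seed=0):
    """95% bootstrap CI for AUC(DLA) - AUC(reader) on the same cases."""
    rng = np.random.default_rng(seed)
    n, diffs = len(y), []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        if len(np.unique(y[idx])) < 2:
            continue                          # AUC needs both classes present
        diffs.append(roc_auc_score(y[idx], s_dla[idx]) -
                     roc_auc_score(y[idx], s_reader[idx]))
    lo, hi = np.percentile(diffs, [2.5, 97.5])
    return float(np.mean(diffs)), (float(lo), float(hi))
```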

15 pages, 4112 KiB  
Article
Automatic Pharyngeal Phase Recognition in Untrimmed Videofluoroscopic Swallowing Study Using Transfer Learning with Deep Convolutional Neural Networks
by Ki-Sun Lee, Eunyoung Lee, Bareun Choi and Sung-Bom Pyun
Diagnostics 2021, 11(2), 300; https://doi.org/10.3390/diagnostics11020300 - 13 Feb 2021
Cited by 9 | Viewed by 2367
Abstract
Background: The videofluoroscopic swallowing study (VFSS) is considered the gold standard diagnostic tool for evaluating dysphagia. However, it is time-consuming and labor-intensive for the clinician to manually search the long recorded video frame by frame to identify instantaneous swallowing abnormalities in VFSS images. Therefore, this study presents a deep learning-based approach, using transfer learning with a convolutional neural network (CNN), that automatically annotates pharyngeal phase frames in untrimmed VFSS videos so that frames need not be searched manually. Methods: To determine whether an image frame in a VFSS video belongs to the pharyngeal phase, a single-frame baseline architecture based on the deep CNN framework is used and a transfer learning technique with fine-tuning is applied. Results: Among all experimental CNN models, the model fine-tuned on two blocks of VGG-16 (VGG16-FT5) achieved the highest performance in recognizing pharyngeal phase frames: an accuracy of 93.20 (±1.25)%, sensitivity of 84.57 (±5.19)%, specificity of 94.36 (±1.21)%, AUC of 0.8947 (±0.0269), and kappa of 0.7093 (±0.0488). Conclusions: With appropriate fine-tuning and explainable deep learning techniques such as Grad-CAM, this study shows that the proposed single-frame baseline architecture based on a deep CNN framework can achieve high performance in the full automation of VFSS video analysis.
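
Freezing early layers and fine-tuning only the last convolutional blocks of VGG-16 with a new binary head, as the abstract describes, looks roughly like this in PyTorch; which blocks to unfreeze (here the last two) is an assumption mirroring the VGG16-FT5 naming, not the paper's exact recipe.

```python
import torch.nn as nn
import torchvision.models as models

vgg = models.vgg16(weights="IMAGENET1K_V1")
for p in vgg.parameters():
    p.requires_grad = False                 # freeze everything first
for p in vgg.features[17:].parameters():    # conv blocks 4-5 of VGG-16
    p.requires_grad = True                  # ... then unfreeze for fine-tuning
vgg.classifier[6] = nn.Linear(4096, 2)      # pharyngeal phase vs. not
```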

13 pages, 3250 KiB  
Article
Deep Learning for Diagnosis of Paranasal Sinusitis Using Multi-View Radiographs
by Yejin Jeon, Kyeorye Lee, Leonard Sunwoo, Dongjun Choi, Dong Yul Oh, Kyong Joon Lee, Youngjune Kim, Jeong-Whun Kim, Se Jin Cho, Sung Hyun Baik, Roh-eul Yoo, Yun Jung Bae, Byung Se Choi, Cheolkyu Jung and Jae Hyoung Kim
Diagnostics 2021, 11(2), 250; https://doi.org/10.3390/diagnostics11020250 - 05 Feb 2021
Cited by 17 | Viewed by 6812
Abstract
Accurate interpretation of the Waters’ and Caldwell view radiographs used for sinusitis screening is challenging. Therefore, we developed a deep learning algorithm for diagnosing frontal, ethmoid, and maxillary sinusitis on both Waters’ and Caldwell views. The data were split by temporal separation into a training and validation set (n = 1403; sinusitis prevalence 34.3%) and a test set (n = 132; sinusitis prevalence 29.5%). The algorithm simultaneously detects and classifies each paranasal sinus on both Waters’ and Caldwell views without manual cropping, and single- and multi-view models were compared. The one-sided DeLong’s test was used to compare AUCs, and the Obuchowski–Rockette model was used to pool the AUCs of the radiologists. Our proposed algorithm satisfactorily diagnosed frontal, ethmoid, and maxillary sinusitis on both views (area under the curve (AUC), 0.71 (95% confidence interval, 0.62–0.80), 0.78 (0.72–0.85), and 0.88 (0.84–0.92), respectively) and yielded a higher AUC than the radiologists for ethmoid and maxillary sinusitis (p = 0.012 and 0.013, respectively). The multi-view model also exhibited a higher AUC than the single Waters’ view model for maxillary sinusitis (p = 0.038). Our algorithm therefore showed diagnostic performance comparable to radiologists and enhances the value of radiography as a first-line imaging modality for assessing multiple sinusitis.
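
One plausible reading of the multi-view model is a two-branch network whose per-view features are concatenated before a shared head; the backbone choice and fusion by concatenation below are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class TwoViewSinusNet(nn.Module):
    """Illustrative two-branch fusion of Waters' and Caldwell views."""
    def __init__(self, n_outputs: int = 3):   # frontal / ethmoid / maxillary
        super().__init__()
        def branch():
            m = models.resnet18(weights="IMAGENET1K_V1")
            m.fc = nn.Identity()               # 512-d features per view
            return m
        self.waters, self.caldwell = branch(), branch()
        self.head = nn.Linear(512 * 2, n_outputs)

    def forward(self, x_waters, x_caldwell):
        f = torch.cat([self.waters(x_waters),
                       self.caldwell(x_caldwell)], dim=1)
        return self.head(f)                    # per-sinus logits
```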

15 pages, 7218 KiB  
Article
Automatic Fetal Middle Sagittal Plane Detection in Ultrasound Using Generative Adversarial Network
by Pei-Yin Tsai, Ching-Hui Hung, Chi-Yeh Chen and Yung-Nien Sun
Diagnostics 2021, 11(1), 21; https://doi.org/10.3390/diagnostics11010021 - 24 Dec 2020
Cited by 6 | Viewed by 2151
Abstract
Background and Objective: In the first trimester of pregnancy, fetal growth and abnormalities can be assessed using the exact middle sagittal plane (MSP) of the fetus. However, ultrasound (US) image quality and operator experience affect the accuracy. We present an automatic system, built on a generative adversarial network (GAN) framework, that enables precise fetal MSP detection from three-dimensional (3D) US, and we evaluate its performance. Method: The neural network is designed as a filter that generates masks to obtain the MSP, learning the features and MSP location in 3D space. Using the proposed image analysis system, a seed point was obtained from 218 first-trimester fetal 3D US volumes using deep learning, and the MSP was automatically extracted. Results: The experimental results reveal the feasibility and excellent performance of the proposed approach, with close agreement between the automatically and manually detected MSPs and no significant difference between the semi-automatic and automatic systems. Further, inference in the automatic system was up to two times faster than in the semi-automatic approach. Conclusion: The proposed system offers precise fetal MSP measurements, so this automatic MSP detection and measurement approach is anticipated to be clinically useful and applicable to other relevant clinical fields in the future.
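
Once a mid-sagittal plane has been located in a 3D volume, the corresponding 2D image can be resampled from it. The hypothetical helper below assumes the plane is given by a point `origin` and orthonormal in-plane axes `u` and `v` (NumPy arrays of shape (3,)), which in the paper's pipeline would be derived from the predicted masks.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def extract_plane(volume, origin, u, v, size=(256, 256), spacing=1.0):
    """Resample a 2D plane image from a 3D volume by trilinear interpolation."""
    h, w = size
    r = (np.arange(h) - h / 2) * spacing
    c = (np.arange(w) - w / 2) * spacing
    rr, cc = np.meshgrid(r, c, indexing="ij")
    pts = origin + rr[..., None] * u + cc[..., None] * v   # (h, w, 3) voxel coords
    coords = pts.reshape(-1, 3).T                          # (3, h*w) for scipy
    return map_coordinates(volume, coords, order=1).reshape(h, w)
```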

15 pages, 42128 KiB  
Article
Deep Learning Assisted Localization of Polycystic Kidney on Contrast-Enhanced CT Images
by Djeane Debora Onthoni, Ting-Wen Sheng, Prasan Kumar Sahoo, Li-Jen Wang and Pushpanjali Gupta
Diagnostics 2020, 10(12), 1113; https://doi.org/10.3390/diagnostics10121113 - 21 Dec 2020
Cited by 15 | Viewed by 4608
Abstract
Total Kidney Volume (TKV) is essential for analyzing the progressive loss of renal function in Autosomal Dominant Polycystic Kidney Disease (ADPKD). Conventionally, to measure TKV from medical images, a radiologist must localize and segment the kidneys by defining and delineating the kidney boundary slice by slice. However, kidney localization is a time-consuming and challenging task given unstructured medical images from big data sources such as Contrast-enhanced Computed Tomography (CCT). This study aimed to design an automatic localization model for ADPKD using artificial intelligence. A robust detection model using CCT images, image preprocessing, and the Single Shot Detector (SSD) Inception V2 Deep Learning (DL) model is designed here, trained and evaluated on 110 CCT images comprising 10,078 slices. The experimental results show that the derived detection model outperforms other DL detectors in terms of Average Precision (AP) and mean Average Precision (mAP), achieving mAP = 94% for image-wise testing and mAP = 82% for subject-wise testing at an Intersection over Union (IoU) threshold of 0.5. This study demonstrates that the derived automatic detection model can assist radiologists in locating and classifying ADPKD kidneys precisely and rapidly, improving the downstream segmentation task and TKV calculation.
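
Detection results such as the reported mAP hinge on the Intersection over Union criterion; a minimal IoU check for axis-aligned boxes, as applied at the 0.5 threshold, is below.

```python
def iou(box_a, box_b):
    """IoU of two (x1, y1, x2, y2) pixel boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A predicted kidney box counts as a true positive when
# iou(pred, ground_truth) >= 0.5, which underlies the reported mAP.
```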

17 pages, 1795 KiB  
Article
An Aggregated-Based Deep Learning Method for Leukemic B-lymphoblast Classification
by Payam Hosseinzadeh Kasani, Sang-Won Park and Jae-Won Jang
Diagnostics 2020, 10(12), 1064; https://doi.org/10.3390/diagnostics10121064 - 08 Dec 2020
Cited by 27 | Viewed by 2667
Abstract
Leukemia is a cancer of blood cells in the bone marrow that affects both children and adolescents. The rapid growth of abnormal lymphocyte cells leads to bone marrow failure, which may slow the production of new blood cells and hence increase patient morbidity and mortality. Age is a crucial clinical factor in leukemia diagnosis, and leukemia diagnosed in its early stages is highly curable. Incidence is increasing globally: around 412,000 people worldwide are likely to be diagnosed with some type of leukemia, of which acute lymphoblastic leukemia accounts for approximately 12% of all cases. Reliable and accurate detection of normal and malignant cells is therefore of major interest, and automatic detection with computer-aided diagnosis (CAD) models can assist clinicians and support the early detection of leukemia. In this paper, a single-center study, we aimed to build an aggregated deep learning model for leukemic B-lymphoblast classification. To make the deep learner reliable and accurate, data augmentation techniques were applied to tackle the limited dataset size, and a transfer learning strategy was employed to accelerate learning and further improve the performance of the proposed network. The results show that our proposed approach fuses features extracted from the best deep learning models and outperforms the individual networks, with a test accuracy of 96.58% in leukemic B-lymphoblast diagnosis.
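
The aggregation idea (features from several pretrained backbones fused into one classifier) can be sketched as follows; the specific backbone pair and concatenation fusion are illustrative assumptions, not necessarily the networks the authors selected.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class AggregatedNet(nn.Module):
    """Concatenate features from multiple pretrained backbones, then classify."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        r = models.resnet50(weights="IMAGENET1K_V1")
        r.fc = nn.Identity()                      # 2048-d features
        d = models.densenet121(weights="IMAGENET1K_V1")
        d.classifier = nn.Identity()              # 1024-d features
        self.backbones = nn.ModuleList([r, d])
        self.head = nn.Linear(2048 + 1024, num_classes)

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.backbones], dim=1)
        return self.head(feats)                   # lymphoblast vs. normal
```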

20 pages, 5275 KiB  
Article
Convolutional Neural Network-Based Humerus Segmentation and Application to Bone Mineral Density Estimation from Chest X-ray Images of Critical Infants
by Yung-Chun Liu, Yung-Chieh Lin, Pei-Yin Tsai, Osuke Iwata, Chuew-Chuen Chuang, Yu-Han Huang, Yi-Shan Tsai and Yung-Nien Sun
Diagnostics 2020, 10(12), 1028; https://doi.org/10.3390/diagnostics10121028 - 30 Nov 2020
Cited by 5 | Viewed by 3994
Abstract
Measuring bone mineral density (BMD) is important for monitoring osteopenia in premature infants. However, the clinical availability of dual-energy X-ray absorptiometry (DEXA), the standard for BMD measurement, is very limited, and it is not a practical technique for critically premature infants, so alternative approaches might improve clinical care for bone health. This study aimed to measure the BMD of premature infants from routine chest X-rays taken in the intensive care unit. A convolutional neural network (CNN) for humeral segmentation, together with quantification of BMD using calibration phantoms (QRM-DEXA) and soft tissue correction, was developed. In total, 210 X-rays of premature infants were evaluated by this system, with an average Dice similarity coefficient of 97.81% for humeral segmentation. The estimated humeral BMDs (g/cm3; mean ± standard deviation) were 0.32 ± 0.06, 0.37 ± 0.06, and 0.32 ± 0.09 for the upper, middle, and bottom parts of the left humerus, respectively. To our knowledge, this is the first pilot study to apply a CNN model to humeral segmentation and BMD measurement in preterm infants. These preliminary results may accelerate BMD research in critical care medicine and assist nutritional care in premature infants.
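
The Dice similarity coefficient quoted for humeral segmentation is the standard overlap measure between a predicted and a reference binary mask:

```python
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum() + eps)
```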

12 pages, 2182 KiB  
Article
Automatic Grading of Individual Knee Osteoarthritis Features in Plain Radiographs Using Deep Convolutional Neural Networks
by Aleksei Tiulpin and Simo Saarakkala
Diagnostics 2020, 10(11), 932; https://doi.org/10.3390/diagnostics10110932 - 10 Nov 2020
Cited by 58 | Viewed by 5206
Abstract
Knee osteoarthritis (OA) is the most common musculoskeletal disease in the world. In primary healthcare, knee OA is diagnosed using clinical examination and radiographic assessment. The Osteoarthritis Research Society International (OARSI) atlas of OA radiographic features allows independent assessment of knee osteophytes, joint space narrowing, and other knee features, providing a fine-grained OA severity assessment compared to the gold-standard and most commonly used Kellgren–Lawrence (KL) composite score. In this study, we developed an automatic method to predict KL and OARSI grades from knee radiographs. Our method is based on deep learning and leverages an ensemble of residual networks with 50 layers, using transfer learning from ImageNet with fine-tuning on the Osteoarthritis Initiative (OAI) dataset. Independent testing of our model was performed on the Multicenter Osteoarthritis Study (MOST) dataset. Our method yielded Cohen’s kappa coefficients of 0.82 for the KL grade and 0.79, 0.84, 0.94, 0.83, 0.84, and 0.90 for femoral osteophytes, tibial osteophytes, and joint space narrowing in the lateral and medial compartments, respectively. Furthermore, our method yielded an area under the ROC curve of 0.98 and an average precision of 0.98 for detecting the presence of radiographic OA, which is better than the current state of the art.
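
Cohen's kappa, the agreement measure reported for KL and OARSI grading, is readily computed with scikit-learn; whether plain or weighted kappa matches the paper's exact protocol for each grade is not restated here, so both calls are shown on toy grades.

```python
from sklearn.metrics import cohen_kappa_score

kl_pred = [0, 2, 3, 1, 2]   # toy predicted KL grades, not the paper's data
kl_ref  = [0, 2, 2, 1, 2]   # toy reference KL grades

print(cohen_kappa_score(kl_ref, kl_pred))                        # unweighted
print(cohen_kappa_score(kl_ref, kl_pred, weights="quadratic"))   # weighted
```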

9 pages, 1757 KiB  
Article
A Performance Comparison between Automated Deep Learning and Dental Professionals in Classification of Dental Implant Systems from Dental Imaging: A Multi-Center Study
by Jae-Hong Lee, Young-Taek Kim, Jong-Bin Lee and Seong-Nyum Jeong
Diagnostics 2020, 10(11), 910; https://doi.org/10.3390/diagnostics10110910 - 07 Nov 2020
Cited by 46 | Viewed by 4104
Abstract
In this study, the efficacy of an automated deep convolutional neural network (DCNN) for the classification of dental implant systems (DISs) was evaluated, and its accuracy was compared against that of dental professionals using dental radiographic images collected from three dental hospitals. A total of 11,980 panoramic and periapical radiographic images covering six different types of DISs were divided into training (n = 9584) and testing (n = 2396) datasets. To compare the trained automated DCNN with dental professionals (six board-certified periodontists, eight periodontology residents, and 11 residents not specialized in periodontology), 180 images were randomly selected from the test dataset. The automated DCNN achieved an AUC, Youden index, sensitivity, and specificity of 0.954, 0.808, 0.955, and 0.853, respectively, and outperformed most of the participating dental professionals, including board-certified periodontists, periodontology residents, and residents not specialized in periodontology. The automated DCNN was highly effective in distinguishing the similar shapes of different types of DISs on dental radiographic images. Further studies are necessary to determine the efficacy and feasibility of applying an automated DCNN in clinical practice.
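
The Youden index reported alongside the AUC is sensitivity + specificity - 1, maximized over classification thresholds; a small helper makes the definition concrete.

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_index(y_true, y_score):
    """Return (max J, threshold) where J = sensitivity + specificity - 1."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    j = tpr - fpr                      # tpr = sensitivity, 1 - fpr = specificity
    best = int(np.argmax(j))
    return float(j[best]), float(thresholds[best])
```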

28 pages, 3136 KiB  
Article
CoSinGAN: Learning COVID-19 Infection Segmentation from a Single Radiological Image
by Pengyi Zhang, Yunxin Zhong, Yulin Deng, Xiaoying Tang and Xiaoqiong Li
Diagnostics 2020, 10(11), 901; https://doi.org/10.3390/diagnostics10110901 - 03 Nov 2020
Cited by 24 | Viewed by 3266
Abstract
Computed tomography (CT) images are currently adopted as visual evidence for COVID-19 diagnosis in clinical practice, and automated detection of COVID-19 infection from CT images based on deep models is important for faster examination. Unfortunately, collecting large-scale training data systematically in the early stage of an outbreak is difficult. To address this problem, we explore the feasibility of learning deep models for lung and COVID-19 infection segmentation from a single radiological image by resorting to synthesizing diverse radiological images. Specifically, we propose a novel conditional generative model, called CoSinGAN, which can be learned from a single radiological image with a given condition, i.e., the annotation mask of the lungs and infected regions. CoSinGAN captures the conditional distribution of the single radiological image and synthesizes high-resolution (512 × 512), diverse radiological images that precisely match the input conditions. We evaluate the efficacy of CoSinGAN in learning lung and infection segmentation from very few radiological images by performing 5-fold cross-validation on the COVID-19-CT-Seg dataset (20 CT cases) and independent testing on the MosMed dataset (50 CT cases). Both 2D U-Net and 3D U-Net models, learned from four CT slices using CoSinGAN, achieved notable infection segmentation performance, surpassing the COVID-19-CT-Seg benchmark counterparts trained on an average of 704 CT slices by a large margin. Such results strongly confirm that our method has the potential to learn COVID-19 infection segmentation from few radiological images in the early stage of the COVID-19 pandemic.
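
CoSinGAN itself is a multi-stage, single-image-specific model, but the underlying mask-conditioned GAN step (a generator mapping an annotation mask to an image, and a discriminator judging mask-image pairs) can be sketched generically; every architectural detail below is a stand-in, not the published model.

```python
import torch
import torch.nn as nn

# Toy mask -> CT generator and pair discriminator (illustrative shapes only).
G = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(64, 1, 3, padding=1), nn.Tanh())
D = nn.Sequential(nn.Conv2d(2, 64, 3, stride=2, padding=1),
                  nn.LeakyReLU(0.2), nn.Conv2d(64, 1, 3, padding=1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(mask, ct):                      # both (N, 1, H, W) in [-1, 1]
    fake = G(mask)
    # Discriminator: real pair vs. generated pair, conditioned on the mask.
    d_real = D(torch.cat([mask, ct], 1))
    d_fake = D(torch.cat([mask, fake.detach()], 1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool D while staying close to the real image (L1 term).
    d_fake = D(torch.cat([mask, fake], 1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + \
             nn.functional.l1_loss(fake, ct)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```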

17 pages, 3970 KiB  
Article
Automated Segmentation and Severity Analysis of Subdural Hematoma for Patients with Traumatic Brain Injuries
by Negar Farzaneh, Craig A. Williamson, Cheng Jiang, Ashok Srinivasan, Jayapalli R. Bapuraj, Jonathan Gryak, Kayvan Najarian and S. M. Reza Soroushmehr
Diagnostics 2020, 10(10), 773; https://doi.org/10.3390/diagnostics10100773 - 30 Sep 2020
Cited by 22 | Viewed by 3018
Abstract
Detection and severity assessment of subdural hematoma is a major step in the evaluation of traumatic brain injuries. This is a retrospective study of 110 computed tomography (CT) scans from patients admitted to the Michigan Medicine Neurological Intensive Care Unit or Emergency Department. A machine learning pipeline was developed to segment subdural hematomas and assess their severity. First, the probability of each point belonging to the hematoma region was determined using a combination of hand-crafted and deep features; this probability provided the initial state of the segmentation. Next, a 3D post-processing model was applied to evolve the initial state and delineate the hematoma. The recall, precision, and Dice similarity coefficient of the proposed segmentation method were 78.61%, 76.12%, and 75.35%, respectively, for the entire population. The Dice similarity coefficient was 79.97% for clinically significant hematomas, which compares favorably to the inter-rater Dice similarity coefficient. In the volume-based severity analysis, the proposed model yielded an F1, recall, and specificity of 98.22%, 98.81%, and 92.31%, respectively, in detecting moderate and severe subdural hematomas based on hematoma volume. These results show that combining classical image processing with deep learning can outperform deep-learning-only methods, achieving greater average performance and robustness. Such a system can aid critical care physicians in reducing time to intervention and thereby improve long-term patient outcomes.
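
The volume-based severity call reduces to measuring hematoma volume from the segmented mask; a minimal version, with an illustrative 25 mL cut-off that is not the paper's exact rule, is below.

```python
import numpy as np

def hematoma_volume_ml(mask: np.ndarray, voxel_spacing_mm) -> float:
    """Volume in mL from a binary mask and voxel spacing, e.g. (0.45, 0.45, 5.0) mm."""
    voxel_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

def is_moderate_or_severe(mask, spacing, threshold_ml: float = 25.0) -> bool:
    # threshold_ml is an assumed illustrative cut-off, not the paper's criterion
    return hematoma_volume_ml(mask, spacing) >= threshold_ml
```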

22 pages, 5422 KiB  
Article
Analyzing Malaria Disease Using Effective Deep Learning Approach
by Krit Sriporn, Cheng-Fa Tsai, Chia-En Tsai and Paohsi Wang
Diagnostics 2020, 10(10), 744; https://doi.org/10.3390/diagnostics10100744 - 24 Sep 2020
Cited by 32 | Viewed by 4714
Abstract
Medical tools that bolster decision-making by specialists who treat malaria include image processing equipment and computer-aided diagnostic systems, which can identify and detect malaria in images and help monitor patients' symptoms, although atypical cases may need more time for assessment. This research used 7000 images to verify and analyze the Xception, Inception-V3, ResNet-50, NasNetMobile, VGG-16, and AlexNet models, prevalent convolutional neural network models for precise image classification, with a rotation-based augmentation method applied to improve performance on the training and validation datasets. Evaluating these models on classifying malaria from thin blood smear images showed that Xception, using the state-of-the-art Mish activation function and Nadam optimizer, was the most effective, achieving a combined score of 99.28% across recall, accuracy, precision, and F1 measure. A further 10% of images, outside the training and testing datasets, were then evaluated with this model, which reached a 98.86% accuracy level, highlighting notable directions for improving computer-aided diagnostics toward an optimal malaria detection approach.
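
The paper's winning combination pairs Xception with the Mish activation and the Nadam optimizer in Keras; a comparable PyTorch setup (with a stand-in backbone, since torchvision ships no Xception) might look like this.

```python
import torch.nn as nn
import torch.optim as optim
import torchvision.models as models

# ResNet-50 is a stand-in for Xception here; the activation/optimizer
# pairing is the point being illustrated.
model = models.resnet50(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)   # parasitized vs. uninfected

for m in model.modules():                        # swap every ReLU for Mish
    for name, child in m.named_children():
        if isinstance(child, nn.ReLU):
            setattr(m, name, nn.Mish(inplace=True))

optimizer = optim.NAdam(model.parameters(), lr=1e-4)  # Nadam counterpart
```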
