Advance in Deep Learning-Based Medical Image Analysis

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (31 December 2023) | Viewed by 22,689

Special Issue Editor

Department of Computer Science, Faculty of Information Technology and Electrical Engineering, Norges Teknisk-Naturvitenskapelige Universitet, Trondheim, Norway
Interests: image and video analysis; remote sensing; deep learning; pattern recognition; medical imaging

Special Issue Information

Dear Colleagues,

Deep learning has become the prominent research direction in medical image analysis. The hierarchical nature of deep models enables them to learn the complex patterns in medical images, facilitating image-based diagnostics and prognosis. Different imaging modalities, including but not limited to RGB, CT, MRI, X-ray, ultrasound, PET, EEG, and mammography, are used to infer valuable insights about a patient’s medical condition. In addition, multi-modality- and cross-modality-based learning algorithms have been explored, in which models are trained on more than a single imaging modality.

In the last decade, many algorithms have been proposed, from cell segmentation to anomaly detection, with the aim of aiding radiologists and medical doctors. However, many limiting factors create barriers to the ubiquitous application of such techniques in clinical practice. The limited availability of large amounts of high-quality labeled data, real-time performance bottlenecks, and the accuracy of the algorithms themselves are some of the key challenges that researchers currently face.

Irrespective of the type of data modality, this Special Issue of Applied Sciences, titled “Advance in Deep Learning-Based Medical Image Analysis”, is dedicated to covering recent advancements in this domain.

Dr. Mohib Ullah
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • anomaly detection
  • weakly supervised learning
  • tumor segmentation
  • oxygenation measurements
  • desmoking
  • image enhancement
  • activity recognition
  • classification
  • semi-supervised learning
  • multi- and cross-modality learning

Published Papers (15 papers)


Research


16 pages, 2779 KiB  
Article
Breast Cancer Diagnosis Using YOLO-Based Multiscale Parallel CNN and Flattened Threshold Swish
by Ahmed Dhahi Mohammed and Dursun Ekmekci
Appl. Sci. 2024, 14(7), 2680; https://doi.org/10.3390/app14072680 - 22 Mar 2024
Viewed by 366
Abstract
In the field of biomedical imaging, the use of Convolutional Neural Networks (CNNs) has achieved impressive success. However, the detection and pathological classification of breast masses still present significant challenges. Traditional mammogram screening, conducted by healthcare professionals, is often exhausting, costly, and prone to errors. To address these issues, this research proposes an end-to-end Computer-Aided Diagnosis (CAD) system utilizing the ‘You Only Look Once’ (YOLO) architecture. The proposed framework begins by enhancing digital mammograms using the Contrast Limited Adaptive Histogram Equalization (CLAHE) technique. Then, features are extracted using the proposed CNN, leveraging multiscale parallel feature extraction capabilities while incorporating DenseNet and InceptionNet architectures. To combat the ‘dead neuron’ problem, the CNN architecture utilizes the ‘Flattened Threshold Swish’ (FTS) activation function. Additionally, the YOLO loss function has been enhanced to effectively handle lesion scale variation in mammograms. The proposed framework was thoroughly tested on two publicly available benchmarks: INbreast and CBIS-DDSM. It achieved an accuracy of 98.72% for breast cancer classification on the INbreast dataset and a mean Average Precision (mAP) of 91.15% for breast cancer detection on CBIS-DDSM. The proposed CNN architecture used only 11.33 million parameters for training. These results highlight the proposed framework’s potential to advance vision-based breast cancer diagnosis.
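The CLAHE step above is a contrast booster. A minimal NumPy sketch of plain (global) histogram equalization, the simpler technique that CLAHE extends with tiling and a clip limit, looks like this; the synthetic patch and the seed are illustrative, not from the paper:

```python
import numpy as np

def equalize_hist(img, levels=256):
    """Global histogram equalization: map intensities through the
    normalized cumulative histogram. CLAHE (used in the paper) adds
    tiling and a clip limit on top of this basic idea."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                   # normalize CDF to [0, 1]
    lut = np.round(cdf * (levels - 1)).astype(img.dtype)
    return lut[img]                                  # apply lookup table

# A low-contrast toy "mammogram patch": values squeezed into [100, 120]
patch = np.random.default_rng(0).integers(100, 121, size=(64, 64), dtype=np.uint8)
enhanced = equalize_hist(patch)                      # spreads values over [0, 255]
```

After equalization the intensity range stretches across the full 8-bit scale, which is the effect the pre-processing stage relies on.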
(This article belongs to the Special Issue Advance in Deep Learning-Based Medical Image Analysis)

12 pages, 3527 KiB  
Article
Classification of Alzheimer’s Disease Based on White Matter Connectivity Network
by Xiaoli Yang, Yuxin Xia, Zhenwei Li, Lipei Liu, Zhipeng Fan and Jiayi Zhou
Appl. Sci. 2023, 13(21), 12030; https://doi.org/10.3390/app132112030 - 04 Nov 2023
Viewed by 590
Abstract
Alzheimer’s disease (AD) is one of the most common irreversible brain diseases in the elderly. Mild cognitive impairment (MCI) is an early symptom of AD, and early intervention in MCI may slow down the progression of AD. However, due to the subtle neuroimaging differences between MCI and normal controls (NC), clinical diagnosis is subjective and prone to misdiagnosis. Machine learning can extract deep features from neuroimages and analyze and label them to assist in the diagnosis of diseases. This paper combines diffusion tensor imaging (DTI) and a support vector machine (SVM) to classify AD, MCI, and NC. First, the white matter connectivity network was constructed based on DTI. Second, the nodes with significant differences between groups were screened out by a two-sample t-test. Third, the optimal feature subset was selected as the classification feature by recursive feature elimination (RFE). Finally, a Gaussian-kernel support vector machine was used for classification. The experiments tested and verified data downloaded from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database; the areas under the curve (AUC) for AD/MCI and MCI/NC are 0.94 and 0.95, respectively, giving the method certain competitive advantages compared with other methods.
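The node-screening step can be sketched with a plain NumPy Welch t-statistic; the threshold of 2.0 and the synthetic group data below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def welch_t(a, b):
    """Welch's two-sample t statistic for one network node/feature."""
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

def screen_features(X_pos, X_neg, thresh=2.0):
    """Keep feature columns whose |t| between groups exceeds thresh,
    mimicking the node-screening step that precedes RFE and the SVM."""
    t = np.array([welch_t(X_pos[:, j], X_neg[:, j]) for j in range(X_pos.shape[1])])
    return np.flatnonzero(np.abs(t) > thresh)

rng = np.random.default_rng(1)
X_ad = rng.normal(0.0, 1.0, (30, 5))   # toy "AD" group, 5 node features
X_nc = rng.normal(0.0, 1.0, (30, 5))   # toy "NC" group
X_ad[:, 2] += 3.0                       # make feature 2 genuinely different
kept = screen_features(X_ad, X_nc)      # indices of significant features
```

In the paper this filtering feeds RFE, which then prunes the surviving features down to the optimal subset.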

17 pages, 3994 KiB  
Article
Semi-Supervised Seizure Prediction Model Combining Generative Adversarial Networks and Long Short-Term Memory Networks
by Xiaoli Yang, Lipei Liu, Zhenwei Li, Yuxin Xia, Zhipeng Fan and Jiayi Zhou
Appl. Sci. 2023, 13(21), 11631; https://doi.org/10.3390/app132111631 - 24 Oct 2023
Viewed by 723
Abstract
In recent years, significant progress has been made in seizure prediction using machine learning methods. However, fully supervised learning methods often rely on a large amount of labeled data, which can be costly and time-consuming. Unsupervised learning overcomes these drawbacks but can suffer from issues such as unstable training and reduced prediction accuracy. In this paper, we propose a semi-supervised seizure prediction model called WGAN-GP-Bi-LSTM. Specifically, we utilize the Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP) as the feature learning model, using the Earth Mover’s distance and gradient penalty to guide the unsupervised training process and train a high-order feature extractor. Meanwhile, we build a prediction model based on the Bidirectional Long Short-Term Memory Network (Bi-LSTM), which enhances seizure prediction performance by incorporating the high-order time-frequency features of the brain signals. An independent, publicly available dataset, CHB-MIT, was used to train and validate the model’s performance. The results showed that the model achieved an average AUC of 90.08%, an average sensitivity of 82.84%, and an average specificity of 85.97%. A comparison with previous research demonstrates that our proposed method outperforms traditional adversarial network models and optimizes unsupervised feature extraction for seizure prediction.
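The gradient penalty referred to above is the WGAN-GP term lam * E[(||grad||_2 - 1)^2]. The NumPy fragment below evaluates it on precomputed critic gradients purely to illustrate the formula; in real training the gradients come from automatic differentiation, not NumPy:

```python
import numpy as np

def gradient_penalty(grads, lam=10.0):
    """WGAN-GP penalty: lam * E[(||grad||_2 - 1)^2], computed from
    critic gradients at interpolated samples (shape [batch, dim]).
    lam = 10 is the conventional default, assumed here."""
    norms = np.linalg.norm(grads, axis=1)   # per-sample gradient L2 norm
    return lam * np.mean((norms - 1.0) ** 2)

unit_grads = np.eye(4)                 # each row has L2 norm exactly 1
print(gradient_penalty(unit_grads))    # 0.0 (unit-norm gradients incur no penalty)
```

The penalty pushes the critic toward unit-norm gradients, which is what stabilizes the unsupervised feature-extractor training the abstract describes.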

13 pages, 2336 KiB  
Article
A Multi Brain Tumor Region Segmentation Model Based on 3D U-Net
by Zhenwei Li, Xiaoqin Wu and Xiaoli Yang
Appl. Sci. 2023, 13(16), 9282; https://doi.org/10.3390/app13169282 - 16 Aug 2023
Viewed by 992
Abstract
Accurate segmentation of different brain tumor regions from MR images is of great significance in the diagnosis and treatment of brain tumors. In this paper, an enhanced 3D U-Net model was proposed to address the shortcomings of 2D U-Net in brain tumor segmentation tasks. While retaining the U-shaped characteristics of the original U-Net network, an enhanced encoding module and decoding module were designed to increase the extraction and utilization of image features. Then, a hybrid loss function combining the binary cross-entropy loss function and the Dice similarity coefficient was adopted to speed up the model’s convergence and to achieve accurate and fast automatic segmentation. The model’s performance was studied on the segmentation of the whole tumor region, the tumor core region, and the enhanced tumor region. The results showed that the proposed 3D U-Net model can achieve better segmentation performance, especially for the tumor core and enhanced tumor regions.
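A hybrid loss of the kind described, a weighted sum of binary cross-entropy and one minus the Dice coefficient, can be sketched in NumPy; the equal 0.5 weighting is an assumption, since the paper's exact weighting is not given here:

```python
import numpy as np

def hybrid_loss(pred, target, alpha=0.5, eps=1e-7):
    """Hybrid segmentation loss: alpha * BCE + (1 - alpha) * (1 - Dice).
    pred holds probabilities in (0, 1); target holds {0, 1} masks."""
    pred = np.clip(pred, eps, 1.0 - eps)             # avoid log(0)
    bce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    dice = (2 * np.sum(pred * target) + eps) / (np.sum(pred) + np.sum(target) + eps)
    return alpha * bce + (1 - alpha) * (1 - dice)

mask = np.array([[1, 1], [0, 0]], dtype=float)       # toy ground-truth mask
good = np.array([[0.9, 0.95], [0.05, 0.1]])          # confident, mostly correct
bad = 1.0 - good                                     # confidently wrong
```

A correct prediction should score a much lower loss than an inverted one, which is what drives convergence toward accurate masks.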

22 pages, 2177 KiB  
Article
Deep Cellular Automata-Based Feature Extraction for Classification of the Breast Cancer Image
by Surasak Tangsakul and Sartra Wongthanavasu
Appl. Sci. 2023, 13(10), 6081; https://doi.org/10.3390/app13106081 - 15 May 2023
Viewed by 1223
Abstract
Feature extraction is an important step in classification, as it directly improves classification performance. Recent successes of convolutional neural networks (CNNs) have revolutionized image classification in computer vision. The convolution layer of a CNN performs feature extraction to obtain promising features from images. However, it faces overfitting and computational complexity due to the complicated structure of the convolution layer and deep computation, which makes this research problem challenging. This paper proposes a novel deep feature extraction method based on a cellular automata (CA) model for image classification. It is established on the basis of a deep learning approach and a multilayer CA, with two main processes. First, in the feature extraction process, a multilayer CA with rules is built as the deep feature extraction model based on CA theory. The model aims at extracting multilayer features, called feature matrices, from images. These feature matrices are then used to generate score matrices for the deep feature model trained by the CA rules. Second, in the decision process, the score matrices are flattened and fed into the fully connected layer of an artificial neural network (ANN) for classification. For performance evaluation, the proposed method is empirically tested on BreaKHis, a popular public breast cancer image dataset used in several prominent studies, in comparison with the state-of-the-art methods. The experimental results show that the proposed method achieves better results, with an improvement of up to 7.95% on average over the state-of-the-art methods.
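A single CA layer of the sort the paper stacks can be sketched with NumPy rolls; the neighbour-count rule, the mean-threshold binarization, and the three-layer depth below are illustrative assumptions, not the paper's trained rules:

```python
import numpy as np

def ca_layer(grid, low=2, high=3):
    """One CA step on a binary grid: each cell counts its 8 neighbours
    (toroidal wrap) and turns on iff the count falls in [low, high].
    The rule here is illustrative, not the paper's."""
    n = sum(np.roll(np.roll(grid, dy, 0), dx, 1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0))
    return ((n >= low) & (n <= high)).astype(np.uint8)

def ca_features(img, layers=3, thresh=None):
    """Binarize an image, iterate the CA, and return one feature matrix
    per layer, echoing the paper's stack of CA-derived feature matrices."""
    t = img.mean() if thresh is None else thresh
    grid = (img > t).astype(np.uint8)
    feats = []
    for _ in range(layers):
        grid = ca_layer(grid)
        feats.append(grid.copy())
    return feats

img = np.random.default_rng(2).random((32, 32))   # toy grayscale patch
feature_matrices = ca_features(img)               # one binary matrix per CA layer
```

In the full method these per-layer matrices are scored and flattened before the ANN decision stage.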

24 pages, 1614 KiB  
Article
Image Analysis System for Early Detection of Cardiothoracic Surgery Wound Alterations Based on Artificial Intelligence Models
by Catarina Pereira, Federico Guede-Fernández, Ricardo Vigário, Pedro Coelho, José Fragata and Ana Londral
Appl. Sci. 2023, 13(4), 2120; https://doi.org/10.3390/app13042120 - 07 Feb 2023
Cited by 2 | Viewed by 1364
Abstract
Cardiothoracic surgery patients are at risk of developing surgical site infections, which cause hospital readmissions, increase healthcare costs, and may lead to mortality. This work aims to tackle the problem of surgical site infections by predicting the existence of worrying alterations in wound images with a wound image analysis system based on artificial intelligence. The developed system comprises a deep learning segmentation model (MobileNet-Unet), which detects the wound region and categorizes the wound type (chest, drain, and leg), and a machine learning classification model, which predicts the occurrence of wound alterations (random forest, support vector machine, and k-nearest neighbors for chest, drain, and leg, respectively). The deep learning model segments the image and assigns the wound type. Then, a group of color and textural features extracted from the output region of interest feeds one of the three wound-type classifiers, which reaches the final binary decision on wound alteration. The segmentation model achieved a mean Intersection over Union of 89.9% and a mean average precision of 90.1%. Separating the final classification into different classifiers per wound type was more effective than a single classifier for all the wound types. The leg wound classifier exhibited the best results, with an 87.6% recall and 52.6% precision.
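The color-and-texture feature stage might be sketched as below; the 8-bin histograms and gradient-magnitude statistics are illustrative stand-ins for the paper's richer feature set, not its actual features:

```python
import numpy as np

def wound_features(rgb, bins=8):
    """Illustrative feature vector for a wound-type classifier:
    per-channel colour histograms plus two cheap texture statistics
    (gradient-magnitude mean and std) over the region of interest."""
    feats = []
    for c in range(3):                         # colour: normalized histograms
        h, _ = np.histogram(rgb[..., c], bins=bins, range=(0, 256))
        feats.append(h / h.sum())
    gray = rgb.mean(axis=2)
    gy, gx = np.gradient(gray)                 # texture: gradient magnitude stats
    mag = np.hypot(gx, gy)
    feats.append(np.array([mag.mean(), mag.std()]))
    return np.concatenate(feats)

roi = np.random.default_rng(3).integers(0, 256, (64, 64, 3)).astype(float)
vec = wound_features(roi)                      # 3*8 colour bins + 2 texture stats
```

A vector like this is what would feed the per-wound-type random forest, SVM, or k-NN classifier.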

17 pages, 5440 KiB  
Article
Analysis of Breath-Holding Capacity for Improving Efficiency of COPD Severity-Detection Using Deep Transfer Learning
by Narendra Kumar Rout, Nirjharinee Parida, Ranjeet Kumar Rout, Kshira Sagar Sahoo, N. Z. Jhanjhi, Mehedi Masud and Mohammed A. AlZain
Appl. Sci. 2023, 13(1), 507; https://doi.org/10.3390/app13010507 - 30 Dec 2022
Viewed by 1585
Abstract
Air collection around the lung regions can cause the lungs to collapse. Conditions like emphysema can cause chronic obstructive pulmonary disease (COPD), wherein the lungs get progressively damaged, and the damage cannot be reversed by treatment. It is recommended that these conditions be detected early, via highly complex image processing models applied to chest X-rays, so that the patient’s life may be extended. Due to COPD, the bronchioles become narrowed and blocked with mucus, causing destruction of the alveolar geometry. These changes can be visually monitored via feature analysis using effective image classification models such as convolutional neural networks (CNNs). CNNs have proven to achieve more than 95% accuracy in detecting COPD conditions on static datasets. For consistent CNN performance, this paper presents an incremental learning mechanism that uses deep transfer learning to incrementally update classification weights in the system. The proposed model is tested on three different lung X-ray datasets, and an accuracy of 99.95% is achieved for the detection of COPD. This paper also proposes a model for the temporal analysis of COPD imagery, which uses Gated Recurrent Units (GRUs) to evaluate the lifespan of patients with COPD. Lifespan analysis can assist doctors and other medical practitioners in taking recommended steps for aggressive treatment. Only a smaller dataset was available for the temporal analysis because patients are not advised continuous chest X-rays due to their long-term side effects, and this resulted in an accuracy of 97% for lifespan analysis.

11 pages, 789 KiB  
Article
Automated Hybrid Model for Detecting Perineural Invasion in the Histology of Colorectal Cancer
by Jiyoon Jung, Eunsu Kim, Hyeseong Lee, Sung Hak Lee and Sangjeong Ahn
Appl. Sci. 2022, 12(18), 9159; https://doi.org/10.3390/app12189159 - 13 Sep 2022
Viewed by 1406
Abstract
Perineural invasion (PNI) is a well-established independent prognostic factor for poor outcomes in colorectal cancer (CRC). However, PNI detection in CRC is a cumbersome and time-consuming process, with low inter- and intra-rater agreement. In this study, a deep-learning-based approach was proposed for detecting PNI using histopathological images. We collected 530 regions of histology from 77 whole-slide images (PNI, 100 regions; non-PNI, 430 regions) for training. The proposed hybrid model consists of two components: a segmentation network for tumor and nerve tissues, and a PNI classifier. Unlike a “black-box” model that is unable to account for errors, the proposed approach enables false predictions to be explained and addressed. We presented a high-performance, automated PNI detector with an area under the receiver operating characteristic (ROC) curve (AUC) of 0.92. Thus, the potential of deep neural networks for PNI screening was demonstrated, providing a possible alternative to conventional methods for the pathologic diagnosis of CRC.
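The reported AUC can be computed directly from classifier scores via the Mann-Whitney formulation: AUC is the probability that a randomly chosen positive outscores a randomly chosen negative, with ties counted as one half. A small NumPy sketch with toy scores:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """ROC AUC as the probability that a positive scores above a negative
    (the Mann-Whitney U formulation), counting ties as one half."""
    s_pos = np.asarray(scores_pos, dtype=float)[:, None]
    s_neg = np.asarray(scores_neg, dtype=float)[None, :]
    wins = (s_pos > s_neg).sum() + 0.5 * (s_pos == s_neg).sum()
    return wins / (s_pos.size * s_neg.size)

# 8 of the 9 positive-negative pairs are ranked correctly
print(auc([0.9, 0.8, 0.4], [0.7, 0.3, 0.2]))   # 0.8888888888888888
```

This pairwise view makes clear why an AUC of 0.92 means the detector ranks PNI regions above non-PNI regions 92% of the time.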

16 pages, 4459 KiB  
Article
Optimal and Efficient Deep Learning Model for Brain Tumor Magnetic Resonance Imaging Classification and Analysis
by Manar Ahmed Hamza, Hanan Abdullah Mengash, Saud S. Alotaibi, Siwar Ben Haj Hassine, Ayman Yafoz, Fahd Althukair, Mahmoud Othman and Radwa Marzouk
Appl. Sci. 2022, 12(15), 7953; https://doi.org/10.3390/app12157953 - 08 Aug 2022
Cited by 5 | Viewed by 1935
Abstract
A brain tumor (BT) is an abnormal development of brain cells that causes damage to the nerves and blood vessels. An accurate and early diagnosis of BT is important to prevent future complications. Precise segmentation of the BT provides physicians with a basis for surgical planning and treatment. Manual detection from MRI images is difficult, and, due to significant variation in tumor structure and location, viz., ambiguous boundaries and irregular shapes, computerized tumor diagnosis remains a challenging task. The application of a convolutional neural network (CNN) helps radiotherapists categorize the types of BT from magnetic resonance images (MRI). This study designs an evolutionary algorithm with a deep learning-driven brain tumor MRI image classification (EADL-BTMIC) model. The presented EADL-BTMIC model aims to accurately recognize and categorize MRI images to identify BT. The EADL-BTMIC model first applies bilateral filtering (BF)-based noise removal and skull stripping as a pre-processing stage. In addition, a morphological segmentation process is carried out to determine the affected regions in the image. Moreover, sooty tern optimization (STO) with the Xception model is exploited for feature extraction. Furthermore, the attention-based long short-term memory (ALSTM) technique is exploited for the classification of BT into distinct classes. To portray the increased performance of the EADL-BTMIC model, a series of simulations were carried out on the benchmark dataset. The experimental outcomes highlighted the enhancements of the EADL-BTMIC model over recent models.

14 pages, 1599 KiB  
Article
Semi-Supervised Medical Image Classification Based on Attention and Intrinsic Features of Samples
by Zhuohao Zhou, Chunyue Lu, Wenchao Wang, Wenhao Dang and Ke Gong
Appl. Sci. 2022, 12(13), 6726; https://doi.org/10.3390/app12136726 - 02 Jul 2022
Cited by 2 | Viewed by 1639
Abstract
The training of deep neural networks usually requires a large amount of high-quality data with good annotations to obtain good performance. However, in clinical medicine, obtaining high-quality annotated data is laborious and expensive because it requires the professional skill of clinicians. In this paper, based on the consistency strategy, we propose a new semi-supervised model for medical image classification which introduces a self-attention mechanism into the backbone network to learn more meaningful features in image classification tasks, and uses an improved version of the focal loss as the supervised loss to reduce the misclassification of samples. Finally, we add a consistency loss, similar to the unsupervised consistency loss, to encourage the model to learn more of the intrinsic features of unlabeled samples. Our method achieved 94.02% AUC and 72.03% sensitivity on the ISIC 2018 dataset and 79.74% AUC on the ChestX-ray14 dataset. These results show the effectiveness of our method in single-label and multi-label classification.
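The focal loss mentioned above down-weights easy examples by a factor (1 - p_t)^gamma so that training focuses on hard, misclassified samples. A standard NumPy formulation is sketched below; the paper uses an improved variant whose modifications are not detailed here, so this shows only the baseline, with the conventional gamma = 2 and alpha = 0.25:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t),
    averaged over samples. p holds predicted probabilities, y holds labels."""
    p = np.clip(p, eps, 1 - eps)
    p_t = np.where(y == 1, p, 1 - p)         # probability of the true class
    a_t = np.where(y == 1, alpha, 1 - alpha)
    return np.mean(-a_t * (1 - p_t) ** gamma * np.log(p_t))

y = np.array([1, 0, 1, 0])
easy = np.array([0.95, 0.05, 0.9, 0.1])      # confident, correct predictions
hard = np.array([0.55, 0.45, 0.6, 0.4])      # barely correct predictions
```

Because the modulating factor shrinks the contribution of confident correct predictions, the easy batch incurs a far smaller loss than the hard one.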

23 pages, 3880 KiB  
Article
DETECT-LC: A 3D Deep Learning and Textural Radiomics Computational Model for Lung Cancer Staging and Tumor Phenotyping Based on Computed Tomography Volumes
by Karma M. Fathalla, Sherin M. Youssef and Nourhan Mohammed
Appl. Sci. 2022, 12(13), 6318; https://doi.org/10.3390/app12136318 - 21 Jun 2022
Cited by 3 | Viewed by 1975
Abstract
Lung cancer is one of the primary causes of cancer-related deaths worldwide. Timely diagnosis and precise staging are pivotal for treatment planning and can thus lead to increased survival rates. The application of advanced machine learning techniques helps in effective diagnosis and staging. In this study, a multistage neural computational model, DETECT-LC, is proposed. DETECT-LC handles the challenge of choosing discriminative CT slices for constructing 3D volumes using Haralick and histogram-based radiomics and unsupervised clustering. The ALT-CNN-DENSE Net architecture is introduced as part of DETECT-LC for voxel-based classification. DETECT-LC offers an automatic threshold-based segmentation approach instead of a manual procedure, helping to mitigate this burden for radiologists and clinicians. DETECT-LC also presents a slice-selection approach and a newly proposed, relatively lightweight 3D CNN architecture to improve on the performance of existing studies. The proposed pipeline is employed for tumor phenotyping and staging. DETECT-LC's performance is assessed through a range of experiments, in which it attains outstanding performance, surpassing its counterparts in terms of accuracy, sensitivity, F1-score, and area under the curve (AUC). For histopathology classification, DETECT-LC achieved an average improvement of 20% in overall accuracy, 0.19 in sensitivity, 0.16 in F1-score, and 0.16 in AUC over the state of the art. A similar enhancement is reached for staging, where higher overall accuracy, sensitivity, and F1-score are attained, with differences of 8%, 0.08, and 0.14.

20 pages, 7141 KiB  
Article
A Fast Method for Whole Liver- and Colorectal Liver Metastasis Segmentations from MRI Using 3D FCNN Networks
by Yuliia Kamkova, Egidijus Pelanis, Atle Bjørnerud, Bjørn Edwin, Ole Jakob Elle and Rahul Prasanna Kumar
Appl. Sci. 2022, 12(10), 5145; https://doi.org/10.3390/app12105145 - 19 May 2022
Cited by 2 | Viewed by 1620
Abstract
The liver is the most frequent organ for metastasis from colorectal cancer, one of the most common tumor types with a poor prognosis. Despite reducing surgical planning time and providing better spatial representation, current methods of 3D modeling of patient-specific liver anatomy are extremely time-consuming. The purpose of this study was to develop a deep learning model, trained on an in-house dataset of 84 MRI volumes, to rapidly provide fully automated whole liver and liver lesion segmentation from volumetric MRI series. A cascade approach was utilized to address the problem of class imbalance. The trained model achieved an average Dice score of 0.944 ± 0.009 for whole liver segmentation and 0.780 ± 0.119 for liver lesion segmentation. Furthermore, applying this method to an unannotated dataset creates a complete 3D segmentation in less than 6 s per MRI volume, with a mean segmentation Dice score of 0.994 ± 0.003 for the liver and 0.709 ± 0.171 for tumors when compared against manual corrections applied after inference. Availability and integration of our method in clinical practice may improve diagnosis and treatment planning in patients with colorectal liver metastasis and open new possibilities for research into liver tumors.

16 pages, 66557 KiB  
Article
Cerebrovascular Segmentation Model Based on Spatial Attention-Guided 3D Inception U-Net with Multi-Directional MIPs
by Yongwei Liu, Hyo-Sung Kwak and Il-Seok Oh
Appl. Sci. 2022, 12(5), 2288; https://doi.org/10.3390/app12052288 - 22 Feb 2022
Cited by 11 | Viewed by 2090
Abstract
The segmentation of cerebrovascular magnetic resonance angiography (MRA) images based on deep learning plays an essential role in medical research. Traditional segmentation algorithms yield poor segmentation results and poor connectivity when the cerebral vessels are thin. An improved segmentation algorithm based on deep convolutional networks is proposed in this research. The proposed segmentation network combines the original 3D U-Net with maximum intensity projections (MIPs) transformed from the corresponding patches of a 3D MRA image. An MRA dataset provided by Jeonbuk National University Hospital was used to evaluate the experimental results in comparison with traditional 3D cerebrovascular segmentation methods and other state-of-the-art deep learning methods. The experimental results showed that our method achieved the best test performance among the compared methods in terms of the Dice score when Inception blocks and attention modules were placed in the proposed dual-path networks.
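A maximum intensity projection simply keeps the brightest voxel along a chosen axis, which is why bright vessels survive the projection and why the paper pairs MIPs with the 3D U-Net input. A one-line NumPy sketch with a toy volume:

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection: collapse a 3D patch along one axis,
    keeping the brightest voxel per projection ray."""
    return volume.max(axis=axis)

vol = np.zeros((4, 5, 5))
vol[2, 1, 3] = 7.0            # one bright "vessel" voxel inside the volume
proj = mip(vol, axis=0)       # axial projection, shape (5, 5)
```

Projecting along each of the three axes, as the multi-directional variant in the title suggests, just means calling this with axis 0, 1, and 2.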

Review


26 pages, 1424 KiB  
Review
Biomedical Image Segmentation Using Denoising Diffusion Probabilistic Models: A Comprehensive Review and Analysis
by Zengxin Liu, Caiwen Ma, Wenji She and Meilin Xie
Appl. Sci. 2024, 14(2), 632; https://doi.org/10.3390/app14020632 - 11 Jan 2024
Viewed by 1145
Abstract
Biomedical image segmentation plays a pivotal role in medical imaging, facilitating the precise identification and delineation of anatomical structures and abnormalities. This review explores the application of the Denoising Diffusion Probabilistic Model (DDPM) in the realm of biomedical image segmentation. DDPM, a probabilistic generative model, has demonstrated promise in capturing complex data distributions and reducing noise in various domains. In this context, the review provides an in-depth examination of the present status, obstacles, and future prospects in the application of biomedical image segmentation techniques. It addresses challenges associated with the uncertainty and variability of imaging data, analyzing commonalities on the basis of probabilistic methods. The paper concludes with insights into the potential impact of DDPM on advancing medical imaging techniques and fostering reliable segmentation results in clinical applications. This comprehensive review aims to provide researchers, practitioners, and healthcare professionals with a nuanced understanding of the current state, challenges, and future prospects of utilizing DDPM in the context of biomedical image segmentation.

29 pages, 7637 KiB  
Review
A Review of Medical Diagnostic Video Analysis Using Deep Learning Techniques
by Moomal Farhad, Mohammad Mehedy Masud, Azam Beg, Amir Ahmad and Luai Ahmed
Appl. Sci. 2023, 13(11), 6582; https://doi.org/10.3390/app13116582 - 29 May 2023
Cited by 1 | Viewed by 2426
Abstract
The automated analysis of medical diagnostic videos, such as ultrasound and endoscopy, provides significant benefits in clinical practice by improving the efficiency and accuracy of diagnosis. Deep learning techniques show remarkable success in analyzing these videos by automating tasks such as classification, detection, and segmentation. In this paper, we review the application of deep learning techniques for analyzing medical diagnostic videos, with a focus on ultrasound and endoscopy. The methodology for selecting the papers consists of two major steps. First, we selected around 350 papers based on the relevance of their titles to our topic. Second, we chose the research articles that focus on deep learning and medical diagnostic videos based on our inclusion and exclusion criteria. We found that convolutional neural networks (CNNs) and long short-term memory (LSTM) are the two most commonly used models, achieving good results on different types of medical videos. We also highlight the limitations and open challenges in this field, such as the labeling and preprocessing of medical videos, class imbalance, and time complexity, as well as incorporating expert knowledge, k-shot learning, live feedback from experts, and medical history with video data. Our review can encourage collaborative research with domain experts and patients to improve the diagnosis of diseases from medical videos.
