Medical Image Processing Using AI

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Bioelectronics".

Deadline for manuscript submissions: closed (15 February 2023) | Viewed by 20916

Special Issue Editor


Dr. Panagiota Spyridonos
Guest Editor
Faculty of Medicine, School of Health Sciences, University of Ioannina, 45110 Ioannina, Greece
Interests: medical image analysis; machine learning; computer-aided skin diagnostics; treatment evaluation

Special Issue Information

Dear Colleagues,

Medical imaging data are among the richest and often most complex sources of information about patients. Artificial intelligence (AI) has been proven to boost the power of automated medical image processing. Researchers have applied AI not only to automate the most tedious and time-consuming tasks, such as image segmentation, but also to develop advanced image quantification and automated image interpretation that improve patient diagnosis, management, and follow-up. Although numerous AI-based approaches, in particular deep learning algorithms, now demonstrate outstanding performance in several image analysis tasks, their adoption in real clinical environments remains limited. This Special Issue therefore aims to promote novel and efficient AI algorithms and methods for medical image analysis, with a main focus on the current challenges facing practical applications of AI in clinical environments, and to propose possible solutions for integrating AI in real settings. Submissions regarding computer-aided diagnosis are of particular interest; however, contributions concerning other aspects of medical image processing (e.g., image segmentation, image synthesis) are also welcome. Topics of interest include:

  • The challenges of computer-assisted diagnosis with deep learning;
  • Enforcing invariance in deep learning models;
  • The repeatability and robustness of feature extraction;
  • Deep models for small data sets;
  • Data augmentation and data synthesis;
  • Lesion detection;
  • Lesion classification;
  • Assisted diagnosis;
  • Treatment evaluation and patient monitoring.

Dr. Panagiota Spyridonos
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image analysis
  • deep learning
  • knowledge transfer
  • assisted diagnosis
  • image interpretation
  • image embeddings
  • lesion detection
  • lesion classification
  • image segmentation
  • synthetic data

Published Papers (9 papers)


Research

20 pages, 8862 KiB  
Article
Efficient Gastrointestinal Disease Classification Using Pretrained Deep Convolutional Neural Network
by Muhammad Nouman Noor, Muhammad Nazir, Sajid Ali Khan, Oh-Young Song and Imran Ashraf
Electronics 2023, 12(7), 1557; https://doi.org/10.3390/electronics12071557 - 26 Mar 2023
Cited by 14 | Viewed by 2586
Abstract
Gastrointestinal (GI) tract diseases are on the rise worldwide. These diseases can have fatal consequences if not diagnosed in their initial stages. Wireless capsule endoscopy (WCE) is an advanced technology used to inspect gastrointestinal diseases such as ulcerative colitis, polyps, esophagitis, and ulcers. WCE produces thousands of frames for a single patient’s procedure, for which manual examination is tiresome, time-consuming, and prone to error; therefore, an automated procedure is needed. WCE images suffer from low contrast, which increases inter-class and intra-class similarity and reduces the achievable performance. In this paper, an efficient GI tract disease classification technique is proposed that uses an optimized brightness-controlled contrast-enhancement method to improve the contrast of WCE images. The technique applies a genetic algorithm (GA) to adjust the contrast and brightness of an image through a modified fitness function, which improves the overall quality of WCE images. This quality improvement is reported using quantitative measures, such as peak signal-to-noise ratio (PSNR), mean square error (MSE), visual information fidelity (VIF), similarity index (SI), and information quality index (IQI). As a second step, data augmentation is performed on the WCE images by applying multiple transformations, and transfer learning is then used to fine-tune a modified pre-trained model on the WCE images. Finally, for the classification of GI tract disease, the extracted features are passed through multiple machine-learning classifiers. To show the efficacy of the proposed technique in improving classification performance, results are reported for the original dataset as well as the contrast-enhanced dataset. The results show an overall improvement of 15.26% in accuracy, 13.3% in precision, 16.77% in recall rate, and 15.18% in F-measure. Finally, a comparison with existing techniques shows that the proposed framework outperforms the state of the art.
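To make the GA-driven enhancement step concrete, here is a minimal sketch that searches for a linear contrast (alpha) and brightness (beta) transform maximizing an entropy-based fitness. The entropy fitness, parameter bounds, and GA operators are illustrative assumptions; the paper's modified fitness function is not reproduced here.

```python
# Hypothetical sketch: GA search for brightness/contrast parameters.
import numpy as np

def adjust(img, alpha, beta):
    """Linear transform: clip(alpha * img + beta) into the 8-bit range."""
    return np.clip(alpha * img.astype(np.float32) + beta, 0, 255).astype(np.uint8)

def entropy(img):
    """Shannon entropy of the grey-level histogram (proxy fitness)."""
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def ga_enhance(img, pop_size=20, generations=30, seed=0):
    rng = np.random.default_rng(seed)
    # Each individual is (alpha, beta); bounds are illustrative assumptions.
    pop = np.column_stack([rng.uniform(0.5, 3.0, pop_size),
                           rng.uniform(-50, 50, pop_size)])
    for _ in range(generations):
        fit = np.array([entropy(adjust(img, a, b)) for a, b in pop])
        parents = pop[np.argsort(fit)[::-1][:pop_size // 2]]  # truncation selection
        children = parents.copy()
        rng.shuffle(children)
        children = (parents + children) / 2                   # arithmetic crossover
        children += rng.normal(0, [0.1, 2.0], children.shape) # Gaussian mutation
        children = np.clip(children, [0.5, -50], [3.0, 50])   # keep within bounds
        pop = np.vstack([parents, children])
    best = pop[np.argmax([entropy(adjust(img, a, b)) for a, b in pop])]
    return adjust(img, *best)
```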

13 pages, 1030 KiB  
Article
Efficient Training on Alzheimer’s Disease Diagnosis with Learnable Weighted Pooling for 3D PET Brain Image Classification
by Xin Xing, Muhammad Usman Rafique, Gongbo Liang, Hunter Blanton, Yu Zhang, Chris Wang, Nathan Jacobs and Ai-Ling Lin
Electronics 2023, 12(2), 467; https://doi.org/10.3390/electronics12020467 - 16 Jan 2023
Cited by 8 | Viewed by 2234
Abstract
Three-dimensional convolutional neural networks (3D CNNs) have been widely applied to analyze Alzheimer’s disease (AD) brain images, both to better understand disease progression and to predict conversion from cognitively unimpaired (CU) or mild cognitive impairment status. It is well known that training a 3D CNN is computationally expensive and prone to overfitting due to the small sample sizes available in the medical imaging field. Here, we propose a novel 3D-to-2D approach that converts a 3D brain image into a fused 2D image using a Learnable Weighted Pooling (LWP) method, improving training efficiency while maintaining comparable model performance. Through the 3D-to-2D conversion, the proposed model can forward the fused 2D image through a pre-trained 2D model while achieving better performance than various 3D and 2D baselines. In the implementation, we chose ResNet34 for feature extraction, as it outperformed other 2D CNN backbones. We further show that the slice weights are location-dependent and that model performance depends on the 3D-to-2D fusion view, with the best outcomes from the coronal view. With the new approach, we reduced the training time by 75% and increased the accuracy to 0.88, compared with conventional 3D CNNs, for distinguishing amyloid-beta PET images of AD patients from those of CU participants using the publicly available Alzheimer’s Disease Neuroimaging Initiative dataset. The novel 3D-to-2D model may have profound implications for timely AD diagnosis in clinical settings.
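The core idea, collapsing a volume to a 2D image with one learnable weight per slice, can be sketched as follows. This is a simplified reading, not the authors' code; the slice count and the channel-replication step are assumptions.

```python
# Hypothetical sketch of Learnable Weighted Pooling (LWP) in PyTorch.
import torch
import torch.nn as nn
from torchvision.models import resnet34

class LWPClassifier(nn.Module):
    def __init__(self, num_slices, num_classes=2):
        super().__init__()
        # One learnable weight per slice; softmax keeps the sum a convex combination.
        self.slice_weights = nn.Parameter(torch.zeros(num_slices))
        self.backbone = resnet34(weights="IMAGENET1K_V1")  # pre-trained 2D model
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, vol):                                  # vol: (B, depth, H, W)
        w = torch.softmax(self.slice_weights, dim=0)         # (depth,)
        fused = (vol * w.view(1, -1, 1, 1)).sum(dim=1)       # (B, H, W) fused image
        x = fused.unsqueeze(1).repeat(1, 3, 1, 1)            # grey -> 3 channels
        return self.backbone(x)

model = LWPClassifier(num_slices=79)        # 79 coronal slices is an assumption
logits = model(torch.rand(4, 79, 224, 224)) # (4, 2) class logits
```

After training, inspecting `softmax(slice_weights)` would show which slice locations the model relies on, matching the paper's observation that the weights are location-dependent.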

27 pages, 5628 KiB  
Article
A Lightweight CNN and Class Weight Balancing on Chest X-ray Images for COVID-19 Detection
by Noha Alduaiji, Abeer Algarni, Saadia Abdalaha Hamza, Gamil Abdel Azim and Habib Hamam
Electronics 2022, 11(23), 4008; https://doi.org/10.3390/electronics11234008 - 02 Dec 2022
Cited by 2 | Viewed by 1937
Abstract
In many locations, reverse transcription polymerase chain reaction (RT-PCR) tests are used to identify COVID-19; results can take more than 48 h, which is a key factor in the disease’s severity and rapid spread. Chest X-ray images are therefore used to diagnose COVID-19, a task that generally suffers from imbalanced classification. The purpose of this paper is to improve a CNN’s ability to classify chest X-ray images when the classes are imbalanced. The CNN is trained with class weights that compensate for the classes with more examples, and the training set is additionally enlarged with data augmentation. The performance of the proposed method is assessed on two chest X-ray datasets using criteria such as accuracy, specificity, sensitivity, and F1 score. The proposed method attained accuracies of 94%, 97%, and 100% in the worst, average, and best cases, respectively, and F1 scores of 96%, 98%, and 100% in the worst, average, and best cases, respectively.
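Class weight balancing of this kind is commonly implemented by scaling the loss inversely to class frequency; a minimal sketch, with assumed class counts, is shown below (not the authors' code).

```python
# Hypothetical sketch: inverse-frequency class weights in a weighted loss.
import torch
import torch.nn as nn

counts = torch.tensor([3000.0, 500.0])           # assumed: normal vs. COVID-19
weights = counts.sum() / (len(counts) * counts)  # rarer class gets a larger weight
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 2)                       # model outputs for a batch
labels = torch.randint(0, 2, (8,))
loss = criterion(logits, labels)                 # minority-class errors cost more
```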

35 pages, 43569 KiB  
Article
Deep Learning Approach for Automatic Segmentation and Functional Assessment of LV in Cardiac MRI
by Anupama Bhan, Parthasarathi Mangipudi and Ayush Goyal
Electronics 2022, 11(21), 3594; https://doi.org/10.3390/electronics11213594 - 03 Nov 2022
Cited by 3 | Viewed by 1647
Abstract
The early diagnosis of cardiovascular diseases (CVDs) can effectively prevent them from worsening. The source of the disease can be detected through analysis with cardiac magnetic resonance imaging (CMRI), and the segmentation of the left ventricle (LV) in CMRI images plays an indispensable role in the diagnosis of CVDs. However, automated LV segmentation is a challenging task, as the LV is easily confused with neighboring regions in cardiac MRI. Deep learning models are effective at such complex segmentation because of high-performing convolutional neural networks (CNNs). However, since segmentation with a CNN amounts to pixel-level classification of the image, it lacks the contextual information that is highly desirable when analyzing medical images. In this research, we propose a modified U-Net model to accurately segment the LV using context-enabled segmentation. The proposed model achieves automatic segmentation and quantitative assessment of the LV, and it reaches state-of-the-art accuracy by effectively tuning hyperparameters such as batch size, batch normalization, activation function, loss function, and dropout. Our method achieved Dice scores of 0.96 and 0.93 on the endocardial and epicardial walls, respectively, an average perpendicular distance of 1.73, and a percentage of good contours of 96.22%. Furthermore, a high positive correlation of 0.98 was obtained between clinical parameters, such as ejection fraction, end-diastolic volume (EDV), and end-systolic volume (ESV), and the gold standard.
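For readers unfamiliar with the headline metric, the Dice similarity coefficient used here measures mask overlap; a short reference implementation follows (the masks and values are illustrative, not the paper's data).

```python
# Dice similarity coefficient for binary segmentation masks.
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Two 100x100 squares offset by 10 rows overlap on 90 rows -> Dice = 0.9.
pred = np.zeros((256, 256), dtype=np.uint8); pred[100:200, 100:200] = 1
gt = np.zeros((256, 256), dtype=np.uint8);   gt[110:210, 100:200] = 1
print(f"Dice: {dice(pred, gt):.3f}")
```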

13 pages, 7045 KiB  
Article
Improving Pneumonia Classification and Lesion Detection Using Spatial Attention Superposition and Multilayer Feature Fusion
by Kang Li, Fengbo Zheng, Panpan Wu, Qiuyuan Wang, Gongbo Liang and Lifen Jiang
Electronics 2022, 11(19), 3102; https://doi.org/10.3390/electronics11193102 - 28 Sep 2022
Cited by 2 | Viewed by 1475
Abstract
Pneumonia is a severe inflammation of the lung that can cause serious complications. Chest X-rays (CXRs) are commonly used to diagnose pneumonia. In this paper, we propose a deep-learning-based method with spatial attention superposition (SAS) and multilayer feature fusion (MFF) to facilitate pneumonia diagnosis based on CXRs. Specifically, an SAS module, which takes advantage of channel and spatial attention mechanisms, was designed to identify intrinsic imaging features of pneumonia-related lesions and their locations, and an MFF module was designed to harmonize disparate features from different channels and emphasize important information. These two modules were concatenated to extract critical image features serving as the basis for pneumonia diagnosis. We further embedded the proposed modules into a baseline neural network and developed a model called SAS-MFF-YOLO to diagnose pneumonia. To validate the effectiveness of our model, extensive experiments were conducted on two CXR datasets provided by the Radiological Society of North America (RSNA) and the AI Research Institute. SAS-MFF-YOLO achieved a precision of 88.1% and a recall of 98.2% for pneumonia classification, and an AP50 of 99% for lesion detection on the AI Research Institute dataset. Visualization of intermediate feature maps showed that our method can help uncover pneumonia-related lesions in CXRs. Our results demonstrate that the approach can enhance overall pneumonia detection performance on CXR imaging.
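Combined channel and spatial attention of the kind the SAS module builds on is often written in a CBAM-like form; the sketch below is that generic pattern, an assumption for illustration, and the paper's exact SAS and MFF designs differ in detail.

```python
# Hypothetical CBAM-style channel + spatial attention block.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                                  # x: (B, C, H, W)
        # Channel attention: squeeze spatial dims, re-weight channels.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        # Spatial attention: highlight where in the map to attend.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feats = ChannelSpatialAttention(64)(torch.rand(2, 64, 32, 32))  # same shape out
```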

18 pages, 3156 KiB  
Article
MfdcModel: A Novel Classification Model for Classification of Benign and Malignant Breast Tumors in Ultrasound Images
by Wei Liu, Minghui Guo, Peizhong Liu and Yongzhao Du
Electronics 2022, 11(16), 2583; https://doi.org/10.3390/electronics11162583 - 18 Aug 2022
Cited by 3 | Viewed by 2594
Abstract
Automatic classification of benign and malignant breast ultrasound images is an important and challenging task for improving the efficiency and accuracy of the clinical diagnosis of breast tumors and reducing the rates of missed diagnosis and misdiagnosis. The task typically requires a large amount of training data; however, medical images are difficult to obtain, which conflicts with the data demands of good diagnostic models. In this paper, a novel classification model for breast tumors is proposed to improve the performance of diagnostic models trained on small datasets. The method integrates three kinds of features: medical features extracted from segmented images, features selected by principal component analysis (PCA) from the output of a pre-trained ResNet101, and texture features. The medical features are used to train a naive Bayes (NB) classifier, and the PCA-selected features are used to train a support vector machine (SVM) classifier; the final boosted result is obtained by weighting the classifiers. A five-fold cross-validation experiment yields an average accuracy of 89.17%, an average precision of 90.00%, and an average AUC of 0.95. According to the experimental results, the proposed method achieves better classification accuracy than other models trained only on small datasets. This approach can serve as a reliable second opinion for radiologists, and it can also provide useful advice for junior radiologists who do not yet have sufficient clinical experience.
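The NB-plus-SVM fusion can be sketched with scikit-learn as below. The PCA dimensionality and the combination weights are assumptions (the paper does not state them here), and the texture-feature branch is omitted for brevity.

```python
# Hypothetical sketch: weighted fusion of an NB and an SVM classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

def fit_fusion(deep_feats, med_feats, y):
    pca = PCA(n_components=64).fit(deep_feats)             # 64 is an assumption
    svm = SVC(probability=True).fit(pca.transform(deep_feats), y)
    nb = GaussianNB().fit(med_feats, y)
    return pca, svm, nb

def predict_fusion(pca, svm, nb, deep_feats, med_feats, w_svm=0.6, w_nb=0.4):
    # Weighted sum of class probabilities; weights are illustrative.
    p = (w_svm * svm.predict_proba(pca.transform(deep_feats))
         + w_nb * nb.predict_proba(med_feats))
    return p.argmax(axis=1)                                # 0 = benign, 1 = malignant
```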

24 pages, 7958 KiB  
Article
MobileUNetV3—A Combined UNet and MobileNetV3 Architecture for Spinal Cord Gray Matter Segmentation
by Alhanouf Alsenan, Belgacem Ben Youssef and Haikel Alhichri
Electronics 2022, 11(15), 2388; https://doi.org/10.3390/electronics11152388 - 30 Jul 2022
Cited by 1 | Viewed by 2508
Abstract
The inspection of gray matter (GM) tissue of the human spinal cord is a valuable tool for the diagnosis of a wide range of neurological disorders. Thus, the detection and segmentation of GM regions in magnetic resonance images (MRIs) is an important task when studying the spinal cord and its related medical conditions. This work proposes a new method for the segmentation of GM tissue in spinal cord MRIs based on deep convolutional neural network (CNN) techniques. Our proposed method, called MobileUNetV3, has a UNet-like architecture, with the MobileNetV3 model used as a pre-trained encoder. MobileNetV3 is lightweight and yields high accuracy compared with many other CNN architectures of similar size. It is composed of a series of blocks that produce feature maps optimized using residual connections and squeeze-and-excitation modules. We carefully added a set of upsampling layers and skip connections to MobileNetV3 in order to build an effective UNet-like model for image segmentation. To illustrate the capabilities of the proposed method, we tested it on the spinal cord gray matter segmentation challenge dataset and compared it to a number of recent state-of-the-art methods. We obtained results that outperformed seven methods with respect to five evaluation metrics: Dice similarity coefficient (0.87), Jaccard index (0.78), sensitivity (87.20%), specificity (99.90%), and precision (87.96%). Based on these highly competitive results, MobileUNetV3 is an effective deep-learning model for the segmentation of GM in spinal cord MRIs.
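The encoder-decoder pattern described here, a pre-trained MobileNetV3 encoder with added upsampling layers and skip connections, can be sketched as follows. The skip-connection stages and decoder widths are assumptions, not the paper's exact layer choices; lazy convolutions are used so the sketch avoids hard-coding MobileNetV3 channel counts.

```python
# Hypothetical sketch: UNet-like decoder over a pre-trained MobileNetV3 encoder.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v3_small

class MobileUNetV3Sketch(nn.Module):
    def __init__(self, num_classes=1):
        super().__init__()
        self.encoder = mobilenet_v3_small(weights="IMAGENET1K_V1").features
        self.skip_ids = [1, 2, 4, 9]   # assumed stages feeding skip connections
        # LazyConv2d infers input channels on the first forward pass.
        self.fuse = nn.ModuleList(
            nn.Sequential(nn.LazyConv2d(64, 3, padding=1),
                          nn.BatchNorm2d(64), nn.ReLU())
            for _ in self.skip_ids)
        self.head = nn.LazyConv2d(num_classes, 1)

    def forward(self, x):
        skips = []
        for i, layer in enumerate(self.encoder):
            if i in self.skip_ids:
                skips.append(x)                      # keep map before this stage
            x = layer(x)
        for skip, fuse in zip(reversed(skips), reversed(self.fuse)):
            x = nn.functional.interpolate(x, size=skip.shape[2:],
                                          mode="bilinear", align_corners=False)
            x = fuse(torch.cat([x, skip], dim=1))    # upsample + skip connection
        out = self.head(x)
        return nn.functional.interpolate(out, scale_factor=2,
                                         mode="bilinear", align_corners=False)

net = MobileUNetV3Sketch()
mask = net(torch.rand(1, 3, 224, 224))  # dummy pass also materializes lazy convs
```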

11 pages, 2320 KiB  
Article
Improved Image Fusion Method Based on Sparse Decomposition
by Xiaomei Qin, Yuxi Ban, Peng Wu, Bo Yang, Shan Liu, Lirong Yin, Mingzhe Liu and Wenfeng Zheng
Electronics 2022, 11(15), 2321; https://doi.org/10.3390/electronics11152321 - 26 Jul 2022
Cited by 56 | Viewed by 1929
Abstract
In lens imaging, when a three-dimensional object is projected onto a photosensitive element through a convex lens, object points on the focal plane produce sharp image points, while points far from the focal plane produce blurred ones; an image is considered sharp only within a limited range in front of and behind the focal plane, and blurred otherwise. In microscopic scenes, an electron microscope is usually used as the imaging device, which largely eliminates defocus between the lens and the object; most of the blur is instead caused by the microscope’s shallow depth of field. Based on this, this paper analyzes the causes of defocusing in a video microscope, identifies the shallow depth of field as the main one, and accordingly chooses multi-focus image fusion as the deblurring method. We propose a new multi-focus image fusion method based on the discrete wavelet transform and sparse representation (DWT-SR). Decomposing the images into multiple frequency bands reduces the computational burden, and the bands are processed in parallel on a GPU, further reducing the running time. The results indicate that the DWT-SR algorithm yields higher contrast and considerably more detail, and it avoids the long runtime of dictionary-trained sparse approximation.
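A toy sketch of the wavelet side of such a fusion scheme is shown below: both images are decomposed with a 2D DWT, detail bands are fused by max-absolute selection, and the result is reconstructed. The sparse-representation fusion of the approximation band is simplified here to averaging, an assumption made purely for brevity.

```python
# Hypothetical sketch: one-level wavelet-domain multi-focus fusion.
import numpy as np
import pywt  # PyWavelets

def fuse_dwt(img_a, img_b, wavelet="db2"):
    ca, (ch_a, cv_a, cd_a) = pywt.dwt2(img_a.astype(np.float32), wavelet)
    cb, (ch_b, cv_b, cd_b) = pywt.dwt2(img_b.astype(np.float32), wavelet)
    fused_approx = (ca + cb) / 2               # placeholder for the SR fusion step
    # For detail bands, keep the coefficient with the larger magnitude,
    # since in-focus regions carry stronger high-frequency content.
    details = [np.where(np.abs(a) >= np.abs(b), a, b)
               for a, b in [(ch_a, ch_b), (cv_a, cv_b), (cd_a, cd_b)]]
    return pywt.idwt2((fused_approx, tuple(details)), wavelet)
```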

15 pages, 3702 KiB  
Article
Depth Estimation Method for Monocular Camera Defocus Images in Microscopic Scenes
by Yuxi Ban, Mingzhe Liu, Peng Wu, Bo Yang, Shan Liu, Lirong Yin and Wenfeng Zheng
Electronics 2022, 11(13), 2012; https://doi.org/10.3390/electronics11132012 - 27 Jun 2022
Cited by 62 | Viewed by 2844
Abstract
When using a monocular camera for detection or observation, one obtains only two-dimensional information, which is far from adequate for surgical robot manipulation and workpiece detection. At this scale, obtaining three-dimensional information about the observed object, especially estimating the depth of points on each object’s surface, therefore becomes a key issue. This paper proposes two methods to solve the problem of depth estimation from defocused images in microscopic scenes: a depth estimation method based on a Markov random field, and a method based on geometric constraints. From the real-aperture imaging principle, geometric constraints on the relative defocus parameters of the point spread function are derived, which improves on the traditional iterative method and increases the algorithm’s efficiency.
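The geometric intuition behind depth from defocus is the thin-lens relation: the blur-circle diameter grows with an object's distance from the focal plane, so measured blur constrains depth. The sketch below illustrates that relation with made-up parameter values; it is not the authors' formulation or their point-spread-function constraints.

```python
# Hypothetical sketch: thin-lens blur-circle diameter vs. object depth.
def blur_circle_diameter(depth, focused_depth, focal_len, aperture):
    """c = A * f * |d - d_f| / (d * (d_f - f)), all distances in mm."""
    return aperture * focal_len * abs(depth - focused_depth) / (
        depth * (focused_depth - focal_len))

# Example: 50 mm lens at f/2 (25 mm aperture), focused at 500 mm,
# object at 700 mm -> blur circle of roughly 0.8 mm on the sensor plane.
c = blur_circle_diameter(depth=700.0, focused_depth=500.0,
                         focal_len=50.0, aperture=25.0)
print(f"blur circle ≈ {c:.2f} mm")  # inverting this relation recovers depth
```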
