
Image Processing and Machine Learning in Disease Predictions and Diagnosis

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Biomedical Engineering".

Deadline for manuscript submissions: closed (20 May 2023) | Viewed by 18774

Special Issue Editors


Guest Editor
Faculty of Information Engineering, Meijo University, Nagoya 468-8502, Japan
Interests: deep learning; medical imaging; lung cancer; gastric cancer

Guest Editor
The Center for Data Science Education and Research, Shiga University, Ōtsu 522-0069, Japan
Interests: medical imaging

Special Issue Information

Dear Colleagues,

With the emergence of various deep learning techniques, medical image processing methods have undergone a significant transformation. For image classification and organ segmentation, deep learning has shown excellent performance, and traditional methods have been almost completely replaced by deep learning-based methods. On the other hand, for prediction problems such as survival prediction and risk prediction, many studies combine traditional image processing algorithms, such as texture analysis, with machine learning algorithms. This Special Issue covers research on image processing techniques and machine learning for image classification, segmentation, survival prediction, treatment response prediction, risk prediction, and triage.

Dr. Atsushi Teramoto
Dr. Tomoko Tateyama
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, authors can proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical imaging
  • diagnosis
  • disease detection
  • prediction
  • risk analysis
  • artificial intelligence
  • machine learning

Published Papers (10 papers)


Research

18 pages, 2852 KiB  
Article
Development of Artificial Intelligence-Based Dual-Energy Subtraction for Chest Radiography
by Asumi Yamazaki, Akane Koshida, Toshimitsu Tanaka, Masashi Seki and Takayuki Ishida
Appl. Sci. 2023, 13(12), 7220; https://doi.org/10.3390/app13127220 - 16 Jun 2023
Viewed by 1647
Abstract
Recently, some facilities have utilized the dual-energy subtraction (DES) technique for chest radiography to increase pulmonary lesion detectability. However, the availability of the technique is limited to certain facilities, and it has other limitations, such as increased noise in high-energy images with the one-shot method and motion artifacts with the two-shot method. The aim of this study was to develop artificial intelligence-based DES (AI-DES) technology for chest radiography to overcome these limitations. Using a pix2pix model trained on clinically acquired chest radiograph pairs, we successfully converted 130 kV images into virtual 60 kV images that closely resemble the real images. The averaged peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) between virtual and real 60 kV images were 33.8 dB and 0.984, respectively. We also produced soft-tissue- and bone-enhanced images using a weighted image subtraction process with the virtual 60 kV images. The soft-tissue-enhanced images exhibited sufficient bone suppression, particularly within the lung fields. Although the bone-enhanced images contained artifacts on and around the lower thoracic and lumbar spines, they presented superior sharpness and noise characteristics. The main contribution of our development is its ability to provide selectively enhanced images for specific tissues using only the high-energy images obtained via routine chest radiography. This suggests the potential to improve the detectability of pulmonary lesions while addressing the challenges associated with the existing DES technique. However, further work is needed to improve the image quality.
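The weighted subtraction step described in the abstract can be illustrated with a toy log-domain model. The attenuation coefficients and phantom below are hypothetical values chosen only to show how the bone and soft-tissue terms cancel; this is not the authors' implementation.

```python
import numpy as np

# Hypothetical linear attenuation coefficients (not measured values), keyed by
# tissue type and tube voltage, just to demonstrate the cancellation.
MU = {"soft": {60: 0.20, 130: 0.15}, "bone": {60: 0.60, 130: 0.30}}

def log_signals(t_soft, t_bone):
    """Idealized log-domain signals of a 60 kV / 130 kV image pair."""
    low = MU["soft"][60] * t_soft + MU["bone"][60] * t_bone
    high = MU["soft"][130] * t_soft + MU["bone"][130] * t_bone
    return low, high

# Digital phantom: uniform soft tissue with a bone "rib" in columns 3-4.
t_soft = np.full((4, 8), 10.0)
t_bone = np.zeros((4, 8))
t_bone[:, 3:5] = 2.0
low, high = log_signals(t_soft, t_bone)

# Weighting by the ratio of bone attenuations cancels the bone term;
# the ratio of soft-tissue attenuations cancels soft tissue instead.
w_bone_cancel = MU["bone"][60] / MU["bone"][130]
w_soft_cancel = MU["soft"][60] / MU["soft"][130]
soft_img = low - w_bone_cancel * high   # soft-tissue-enhanced image
bone_img = low - w_soft_cancel * high   # bone-enhanced image
```

With these weights, `soft_img` is flat across the rib (bone suppressed) while `bone_img` is nonzero only where bone is present.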

9 pages, 1063 KiB  
Communication
Automated Classification of Urinary Cells: Using Convolutional Neural Network Pre-trained on Lung Cells
by Atsushi Teramoto, Ayano Michiba, Yuka Kiriyama, Eiko Sakurai, Ryoichi Shiroki and Tetsuya Tsukamoto
Appl. Sci. 2023, 13(3), 1763; https://doi.org/10.3390/app13031763 - 30 Jan 2023
Viewed by 1408
Abstract
Urine cytology, which is based on the examination of cellular images obtained from urine, is widely used for the diagnosis of bladder cancer. However, diagnosis is sometimes difficult in highly heterogeneous carcinomas exhibiting weak cellular atypia. In this study, we propose a new deep learning method that utilizes image information from another organ for the automated classification of urinary cells. We first extracted 3137 images from 291 lung cytology specimens obtained from lung biopsies and trained a classifier for benign and malignant cells using VGG-16, a convolutional neural network (CNN). Subsequently, 1380 images were extracted from 123 urine cytology specimens and used to fine-tune the CNN pre-trained on lung cells. To confirm the effectiveness of the proposed method, we introduced three different CNN training methods and compared their classification performances. The fine-tuned CNN achieved 98.8% sensitivity and 98.2% specificity for malignant cells, higher than those of CNNs trained with only lung cells or only urinary cells. These results show that urinary cells can be automatically classified with high accuracy and suggest the possibility of building a versatile deep learning model using cells from different organs.
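The transfer-learning idea, training a feature extractor on one organ and fine-tuning only a classifier head on another, can be sketched in miniature. The "pretrained" extractor below is a fixed random projection standing in for VGG-16 convolutional features, and the data are synthetic; none of this is the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_head(features, labels, lr=0.5, epochs=200):
    """Train a logistic-regression head on frozen features by gradient descent."""
    w = np.zeros(features.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(features @ w + b)))
        g = p - labels
        w -= lr * features.T @ g / len(labels)
        b -= lr * g.mean()
    return w, b

# "Pretrained" feature extractor: a fixed projection standing in for the
# convolutional features learned on the source domain (lung cells).
W_feat = rng.normal(size=(20, 8))
extract = lambda x: np.tanh(x @ W_feat)

# Target-domain (urinary-cell) stand-in data: two separable classes.
X0 = rng.normal(loc=-1.0, size=(50, 20))
X1 = rng.normal(loc=+1.0, size=(50, 20))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

w, b = train_head(extract(X), y)          # only the head is updated
pred = (extract(X) @ w + b > 0).astype(int)
acc = (pred == y).mean()
```

Freezing the extractor and updating only the head is the cheapest form of fine-tuning; the paper also compares variants where more of the network is retrained.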

22 pages, 4629 KiB  
Article
Automatic Hepatic Vessels Segmentation Using RORPO Vessel Enhancement Filter and 3D V-Net with Variant Dice Loss Function
by Petra Svobodova, Khyati Sethia, Petr Strakos and Alice Varysova
Appl. Sci. 2023, 13(1), 548; https://doi.org/10.3390/app13010548 - 30 Dec 2022
Cited by 2 | Viewed by 1801
Abstract
The segmentation of hepatic vessels is crucial for liver surgical planning. It is also a challenging task because of the small diameter of the vessels, which are often captured in images of low contrast and resolution. Our research uses filter enhancement to improve their contrast, which helps with their detection and final segmentation. We designed a specific fusion of the Ranking Orientation Responses of Path Operators (RORPO) enhancement filter with the raw image and compared it with fusions of different enhancement filters based on Hessian eigenvectors. Additionally, we evaluated the 3D U-Net and 3D V-Net neural networks as segmentation architectures and selected 3D V-Net as the better architecture in combination with the vessel enhancement technique. Furthermore, to tackle the pixel imbalance between the liver (background) and vessels (foreground), we examined several variants of the Dice loss function and selected the Weighted Dice Loss for its performance. We used the public 3D Image Reconstruction for Comparison of Algorithm Database (3D-IRCADb) dataset, in which we manually improved the vessel annotations, since the dataset has poor-quality annotations for certain patients. The experiments demonstrate that our method achieves a mean Dice score of 76.2%, which outperforms other state-of-the-art techniques.
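A weighted soft Dice loss of the kind the abstract mentions can be sketched as follows. The class weights here are illustrative; the paper's exact weighting scheme may differ.

```python
import numpy as np

def weighted_dice_loss(pred, target, w_fg=0.9, w_bg=0.1, eps=1e-6):
    """Soft Dice loss with per-class weights; up-weighting the sparse vessel
    foreground counters the background/foreground pixel imbalance."""
    loss = 0.0
    for cls, w in ((1, w_fg), (0, w_bg)):
        p = pred if cls == 1 else 1.0 - pred           # predicted probability of cls
        t = (target == cls).astype(float)              # one-hot ground truth for cls
        dice = (2.0 * (p * t).sum() + eps) / (p.sum() + t.sum() + eps)
        loss += w * (1.0 - dice)
    return loss / (w_fg + w_bg)

# Tiny example: an 8x8 mask with a 2x2 vessel region.
target = np.zeros((8, 8))
target[2:4, 2:4] = 1.0
perfect = weighted_dice_loss(target, target)   # ~0: prediction matches exactly
miss = weighted_dice_loss(np.zeros((8, 8)), target)  # penalized for missing vessels
```

Because the foreground term dominates the weighted sum, predicting "all background" is heavily penalized even though it is pixel-wise almost correct.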

11 pages, 2651 KiB  
Article
Quantitative Analysis of Retinal Vascular Leakage in Retinal Vasculitis Using Machine Learning
by Hiroshi Keino, Tomoki Wakitani, Wataru Sunayama and Yuji Hatanaka
Appl. Sci. 2022, 12(24), 12751; https://doi.org/10.3390/app122412751 - 12 Dec 2022
Cited by 2 | Viewed by 1451
Abstract
Retinal vascular leakage is an important biomarker for monitoring the disease activity of uveitis. Although fluorescein angiography (FA) is the gold standard for the diagnosis and assessment of uveitis activity, the evaluation of FA findings, especially retinal vascular leakage, remains subjective and descriptive. In the current study, we developed an automatic segmentation model using a deep learning system, U-Net, and subtraction of the retinal vessel area between early-phase and late-phase FA images to detect the retinal vascular leakage area in ultrawide-field (UWF) FA images in three patients with Behçet's disease and three patients with idiopathic uveitis with retinal vasculitis. The automated model for segmentation of the retinal vascular leakage area in UWF FA images reached a precision of 0.434, a recall of 0.529, and a Dice coefficient of 0.467 without using UWF FA images for training. There was a significant positive correlation between the automatically segmented area (in pixels) of retinal vascular leakage and the FA vascular leakage score. The mean automatically segmented leakage area in UWF FA images was significantly reduced after treatment compared with before treatment. The automated segmentation of retinal vascular leakage in UWF FA images may be useful for the objective and quantitative assessment of disease activity in posterior segment uveitis. Larger studies are warranted to improve the performance of this automatic segmentation model.
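The subtraction step and the reported metrics can be illustrated on binary masks. This is a simplified sketch of the idea (late-phase hyperfluorescence minus early-phase vessels), not the authors' pipeline, which uses U-Net segmentations of real angiograms.

```python
import numpy as np

def leakage_mask(early_vessels, late_hyperfluor):
    """Leakage = late-phase hyperfluorescence not explained by early-phase vessels."""
    return late_hyperfluor & ~early_vessels

def precision_recall_dice(pred, truth):
    """Standard overlap metrics on boolean masks."""
    tp = (pred & truth).sum()
    fp = (pred & ~truth).sum()
    fn = (~pred & truth).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    dice = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return precision, recall, dice

# Toy masks: a vessel column in the early phase, plus a leakage patch later.
vessels = np.zeros((10, 10), bool); vessels[:, 4] = True
leak = np.zeros((10, 10), bool); leak[6:9, 6:9] = True
late = vessels | leak

pred = leakage_mask(vessels, late)
p, r, d = precision_recall_dice(pred, leak)
```

On this idealized example the subtraction recovers the leakage patch exactly; on real images, registration errors and segmentation noise pull the metrics down, as the reported 0.434/0.529/0.467 reflect.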

27 pages, 4019 KiB  
Article
Two-View Mammogram Synthesis from Single-View Data Using Generative Adversarial Networks
by Asumi Yamazaki and Takayuki Ishida
Appl. Sci. 2022, 12(23), 12206; https://doi.org/10.3390/app122312206 - 29 Nov 2022
Cited by 1 | Viewed by 2174
Abstract
While two-view mammography taking both mediolateral-oblique (MLO) and cranio-caudal (CC) views is the current standard method of examination in breast cancer screening, single-view mammography is still performed in some countries on women of specific ages. The rate of cancer detection is lower with single-view mammography than with two-view mammography, due to the lack of available image information. The goal of this work is to improve single-view mammography's ability to detect breast cancer by providing two-view mammograms from single projections. The synthesis of novel-view images from single-view data has recently been achieved using generative adversarial networks (GANs). Here, we apply complete representation GAN (CR-GAN), a novel-view image synthesis model, to produce CC-view mammograms from MLO views. Additionally, we incorporate two adaptations, the progressive growing (PG) technique and feature matching loss, into CR-GAN. Our results show that the PG technique reduces the training time, while feature matching loss improves the synthesized image quality, compared with the method using only CR-GAN. Using the proposed method with the two adaptations, CC views similar to real views are successfully synthesized for some cases, but not all; in particular, synthesis rarely succeeds when calcifications are present. Even though the image resolution and quality are still far from clinically acceptable levels, our findings establish a foundation for further improvements toward clinical applications. As the first report applying novel-view synthesis in medical imaging, this work contributes a methodology for two-view mammogram synthesis.
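Feature matching loss, one of the two adaptations named above, compares discriminator activations of real and generated images instead of only the final real/fake score. A generic sketch follows; the exact formulation used with CR-GAN in the paper may differ (e.g., per-layer weighting).

```python
import numpy as np

def feature_matching_loss(real_feats, fake_feats):
    """Mean L1 distance between per-layer discriminator feature statistics.

    real_feats / fake_feats: lists of (batch, channels) arrays, one per
    discriminator layer. Matching their batch means pushes the generator to
    reproduce intermediate statistics of real images, stabilizing training.
    """
    total = 0.0
    for fr, ff in zip(real_feats, fake_feats):
        total += np.abs(fr.mean(axis=0) - ff.mean(axis=0)).mean()
    return total / len(real_feats)

rng = np.random.default_rng(1)
real = [rng.normal(size=(16, 32)), rng.normal(size=(16, 64))]   # two "layers"
fake_same = [f.copy() for f in real]     # generator matches real statistics
fake_off = [f + 1.0 for f in real]       # statistics shifted by 1
```

The loss is zero when the generated features match the real ones and grows with the statistical gap, giving the generator a smoother training signal than the adversarial term alone.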

15 pages, 4628 KiB  
Article
Fully Automated Electronic Cleansing Using CycleGAN in Computed Tomography Colonography
by Yoshitaka Isobe, Atsushi Teramoto, Fujio Morita, Kuniaki Saito and Hiroshi Fujita
Appl. Sci. 2022, 12(21), 10789; https://doi.org/10.3390/app122110789 - 25 Oct 2022
Viewed by 1150
Abstract
In computed tomography colonography (CTC), an electronic cleansing technique is used in which barium is mixed with residual fluid and the tagged colon residue is removed by image processing. However, a nonhomogeneous mixture of barium and residue may not be properly removed. We developed an electronic cleansing method using CycleGAN, a deep learning technique, to assist diagnosis in CTC. In this method, original computed tomography (CT) images taken during CTC examinations and manually cleansed images, in which the barium area was removed by hand, were prepared, and CycleGAN was trained to convert original CT images into images with the barium removed. In the experiment, the electronic cleansing images obtained using the conventional method were compared with those obtained using the proposed method. The average barium cleansing rates obtained by the conventional and proposed methods were 72.3% and 96.3%, respectively. A visual evaluation of the images showed that it was possible to remove only the barium without removing the intestinal tract. Furthermore, colorectal polyps and early-stage cancerous lesions in the colon were extracted as with the conventional method. These results indicate that the proposed method using CycleGAN may be useful for accurately visualizing the colon without barium.
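A barium cleansing rate like the 72.3% / 96.3% figures above can be computed by checking how many tagged voxels have been pushed below a barium Hounsfield-unit threshold after cleansing. The threshold and phantom values below are hypothetical; the paper does not publish this exact formula.

```python
import numpy as np

AIR_HU = -1000.0   # Hounsfield value of air, the target after cleansing

def cleansing_rate(ct, barium_mask, barium_threshold=200.0):
    """Fraction of barium-tagged voxels no longer above the barium HU threshold
    after cleansing; a crude stand-in for the paper's cleansing-rate metric."""
    remaining = ct[barium_mask] > barium_threshold
    return 1.0 - remaining.mean()

# Toy slice: soft tissue (~40 HU) with tagged residue (~800 HU) in rows 5-7.
ct = np.full((8, 8), 40.0)
mask = np.zeros((8, 8), bool)
mask[5:, :] = True
ct[mask] = 800.0

cleansed = ct.copy()
cleansed[5:7, :] = AIR_HU       # cleansing removed two of the three residue rows
rate = cleansing_rate(cleansed, mask)   # 16 of 24 tagged voxels removed -> 2/3
```

A nonhomogeneous mixture shows up in this metric as tagged voxels whose HU never crossed the threshold, which is exactly what the CycleGAN-based method improves.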

10 pages, 8027 KiB  
Article
Prediction of Intracranial Aneurysm Rupture Risk Using Non-Invasive Radiomics Analysis Based on Follow-Up Magnetic Resonance Angiography Images: A Preliminary Study
by Masayuki Yamanouchi, Hidetaka Arimura, Takumi Kodama and Akimasa Urakami
Appl. Sci. 2022, 12(17), 8615; https://doi.org/10.3390/app12178615 - 28 Aug 2022
Cited by 1 | Viewed by 1439
Abstract
This is the first preliminary study to develop prediction models for aneurysm rupture risk using radiomics analysis based on follow-up magnetic resonance angiography (MRA) images. We selected 103 follow-up images from 18 unruptured aneurysm (UA) cases and 10 follow-up images from 10 ruptured aneurysm (RA) cases to build the prediction models. A total of 486 image features were calculated within each aneurysm region in the MRA images, including 54 original features and 432 wavelet-based features describing the texture patterns. We randomly divided the 103 UA data into 50 training and 53 test data and separated the 10 RA data into 1 test and 9 training data, the latter increased to 54 using the synthetic minority oversampling technique (SMOTE). We selected 11 image features associated with UAs and RAs from the 486 image features using least absolute shrinkage and selection operator (LASSO) logistic regression and input them into a support vector machine to build the rupture prediction models, with a training and test strategy adjusted for the class imbalance. The area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity were 0.971, 0.948, 0.700, and 0.953, respectively. This prediction model based on non-invasive MRA images could predict aneurysm rupture risk for the prevention of subarachnoid hemorrhage (SAH).
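The SMOTE step, growing the 9 ruptured-aneurysm training samples to 54, can be sketched in a few lines: each synthetic sample is an interpolation between a minority sample and one of its nearest minority neighbours. This is a minimal re-implementation for illustration (the feature dimension and data below are made up), not the library the authors used.

```python
import numpy as np

def smote(minority, n_new, k=3, seed=0):
    """Minimal SMOTE: synthesize samples by interpolating between a minority
    sample and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(seed)
    out = np.empty((n_new, minority.shape[1]))
    for m in range(n_new):
        i = rng.integers(len(minority))
        d = np.linalg.norm(minority - minority[i], axis=1)
        nbr = rng.choice(np.argsort(d)[1 : k + 1])   # skip the sample itself
        lam = rng.random()                           # interpolation factor in [0, 1)
        out[m] = minority[i] + lam * (minority[nbr] - minority[i])
    return out

rng = np.random.default_rng(42)
ra = rng.normal(size=(9, 11))      # 9 RA training samples, 11 selected features
synth = smote(ra, 45)              # 9 + 45 = 54 training samples, as in the paper
```

Because every synthetic point is a convex combination of two real minority points, the oversampled set stays inside the minority class's feature range rather than duplicating samples outright.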

14 pages, 3148 KiB  
Article
Temporal Subtraction Technique for Thoracic MDCT Based on Residual VoxelMorph
by Noriaki Miyake, Huimin Lu, Tohru Kamiya, Takatoshi Aoki and Shoji Kido
Appl. Sci. 2022, 12(17), 8542; https://doi.org/10.3390/app12178542 - 26 Aug 2022
Viewed by 1483
Abstract
The temporal subtraction technique is a useful tool for computer-aided diagnosis (CAD) in visual screening. The technique subtracts the previous image set from the current one for the same subject to emphasize temporal changes and/or new abnormalities. However, it is difficult to obtain a subtraction image free of subtraction artifacts. VoxelMorph, a deep learning-based registration method, is useful because preparing large training datasets is difficult in medical image analysis; however, incorrect learning, vanishing gradients, and overfitting remain concerns. To overcome these problems, we propose a new method for generating temporal subtraction images of thoracic multi-detector row computed tomography (MDCT) images based on Residual VoxelMorph, which introduces residual blocks into VoxelMorph to enable flexible registration at a low computational cost. High learning efficiency can be expected even with a limited training set. We applied our method to 84 clinical images and evaluated it using three-fold cross-validation. The results showed that the proposed method reduced subtraction artifacts, improving the root mean square error (RMSE) by 11.3% (p < 0.01), verifying its effectiveness. The proposed temporal subtraction method for thoracic MDCT thus has the potential to improve the observer's performance.
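Why registration quality dominates subtraction artifacts, and why RMSE is a sensible yardstick, can be seen in a toy example: subtracting a misaligned previous scan leaves residue everywhere, while subtracting a well-registered one leaves only true change. The shift-based "registration" below is a stand-in for the deformable alignment Residual VoxelMorph learns.

```python
import numpy as np

def rmse(diff):
    """Root mean square error of a subtraction image."""
    return float(np.sqrt(np.mean(np.square(diff))))

# Toy current/previous pair: the previous scan is the current one circularly
# shifted 2 pixels, standing in for positioning differences between visits.
current = np.tile(np.arange(16.0), (16, 1))
previous = np.roll(current, 2, axis=1)

naive = current - previous                              # no registration
registered = current - np.roll(previous, -2, axis=1)    # after (perfect) alignment
```

Here perfect alignment drives the subtraction RMSE to zero; in the paper, the learned registration reduces RMSE by 11.3% on clinical data, where alignment can never be exact.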

15 pages, 28801 KiB  
Article
Change Detection Based on Patch Robust Principal Component Analysis
by Wenqi Zhu, Zili Zhang, Xing Zhao and Yinghua Fu
Appl. Sci. 2022, 12(15), 7713; https://doi.org/10.3390/app12157713 - 31 Jul 2022
Cited by 2 | Viewed by 1209
Abstract
Change detection on retinal fundus image pairs seeks to identify important differences between two images acquired at different time points, such as changes in anatomical structures or lesions. Illumination variation challenges change detection methods in many cases. Robust principal component analysis (RPCA) uses intensity normalization and linear interpolation to greatly reduce the illumination variation between frames and then decomposes the image matrix to obtain a robust background model. Matrix-based RPCA can obtain clear change regions, but when there are local bright spots on the image, the background model is vulnerable to illumination and the change detection results are inaccurate. In this paper, a patch-based RPCA (P-RPCA) is proposed to detect changes in fundus image pairs: a pair of fundus images is normalized and linearly interpolated to expand it into a low-rank image sequence; the images are then divided into patches to obtain an image-patch matrix; and finally, the change regions are obtained by low-rank decomposition. The proposed method is validated on a set of large-lesion image pairs from clinical data, achieving an area under the curve (AUC) of 0.9832 and a mean average precision (mAP) of 0.8641. For a group of small-lesion image pairs with obvious local illumination changes, P-RPCA achieves an AUC of 0.9893 and an mAP of 0.9401. The results show that P-RPCA is more robust to local illumination changes than RPCA and performs better in change detection.
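The patch-matrix construction and low-rank split can be sketched as follows. For brevity this uses a plain truncated SVD in place of the robust PCA programme the paper solves, so it only illustrates how a change concentrated in one patch shows up in the residual term.

```python
import numpy as np

def to_patch_matrix(img, p):
    """Stack non-overlapping p x p patches as columns of a matrix."""
    h, w = img.shape
    cols = [img[i:i + p, j:j + p].ravel()
            for i in range(0, h, p) for j in range(0, w, p)]
    return np.array(cols).T

def low_rank_split(M, rank=1):
    """Truncated SVD: L is the low-rank background, S the residual (changes)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    return L, M - L

img = np.ones((16, 16))        # flat background shared by both time points
img[4:6, 4:6] += 5.0           # a "lesion" appearing inside one 4x4 patch

M = to_patch_matrix(img, 4)    # 16 patch-columns of length 16
L, S = low_rank_split(M)
changed_col = int(np.abs(S).sum(axis=0).argmax())   # patch flagged as changed
```

The background patches are (near-)identical, so they span a rank-1 subspace; the lesion cannot be explained by that subspace and lands in `S`, which is exactly how P-RPCA localizes change regions despite a dominant shared background.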

33 pages, 12872 KiB  
Article
Mixed Machine Learning Approach for Efficient Prediction of Human Heart Disease by Identifying the Numerical and Categorical Features
by Ghulab Nabi Ahmad, Shafiullah, Hira Fatima, Mohamed Abbas, Obaidur Rahman, Imdadullah and Mohammed S. Alqahtani
Appl. Sci. 2022, 12(15), 7449; https://doi.org/10.3390/app12157449 - 25 Jul 2022
Cited by 10 | Viewed by 3657
Abstract
Heart disease is a danger to people's health because of its prevalence and high mortality risk. Predicting cardiac disease early, using a few simple physical indicators collected from a routine physical examination, remains difficult, yet accurate forecasts from such signs are clinically critical for concrete steps toward diagnosis. The manual analysis and prediction of a massive volume of data are challenging and time-consuming. In this paper, a heart disease prediction model is proposed to predict heart disease correctly and rapidly using a variety of bodily signs. A prediction algorithm based on the classification performance of predictive models on combined datasets and the train-test split technique is presented, and its training results are compared with previous works. For the Cleveland, Switzerland, Hungarian, and Long Beach VA heart disease datasets, accuracy, precision, recall, F1-score, and ROC-AUC curves are used as the performance indicators. On the combined heart disease datasets, the Random Forest Classifier (RFC) achieved an F1-score of 100%, accuracy of 100%, precision of 100%, recall of 100%, and ROC-AUC of 100%. The Decision Tree Classifier achieved an F1-score of 100%, accuracy of 98.80%, precision of 98%, recall of 99%, and ROC-AUC of 99%, and both the RFC and the Gradient Boosting Classifier (GBC) reached a ROC-AUC of 100%. The performance of the machine learning algorithms is further improved by five-fold cross-validation, and a Stacking CV Classifier combining two or three techniques improves the individual algorithms' performance. Several dimensionality reduction methods are also incorporated. The RFC attains the highest classification accuracy, and the developed method is efficient and reliable for predicting heart disease.
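The five-fold cross-validation used to firm up these scores can be sketched as an index generator: shuffle once, cut into k folds, and rotate which fold is held out. The sample count of 303 below is illustrative (roughly the size of the Cleveland dataset), not a figure taken from the paper.

```python
import numpy as np

def kfold_indices(n, k=5, seed=0):
    """Shuffled k-fold split: each sample lands in exactly one test fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# Five train/test partitions over 303 samples (illustrative count).
splits = list(kfold_indices(303, k=5))
```

Averaging a classifier's score over the k held-out folds gives a far less optimistic estimate than a single train-test split, which is why perfect single-split scores like those above warrant cross-validated confirmation.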
