Artificial Intelligence-Based Diagnostics and Biomedical Analytics

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: 31 July 2024 | Viewed by 15987

Special Issue Editors

Guest Editor
Department of Electrical and Computer Engineering, Idaho State University, Pocatello, ID 83209, USA
Interests: cybersecurity; artificial intelligence (AI); internet of things (IoT); smart grids; 5G/6G networks; vehicular networks; communication networks; image processing; signal processing; smart healthcare

Guest Editor
Artificial Intelligence Center for Health and Biomedical Research (ArCHER), National Institutes of Biomedical Innovation, Health and Nutrition (NIBIOHN), Osaka, Japan
Interests: computational biology; deep learning; medicine; neuropharmacology; microbiome; simulation; histamine

Guest Editor
Department of Computer Science, Western University, London, ON N6A 5B7, Canada
Interests: IoT; communication networks; medical cyber-physical system; smart health; smart society

Special Issue Information

Dear Colleagues,

The field of Artificial Intelligence (AI) has made significant strides in recent years, driving advances across many domains, including biomedical research and diagnostics. AI-based techniques in biomedical analytics have opened up new avenues for disease diagnosis, treatment, and drug discovery. This Special Issue aims to highlight the latest research and breakthroughs in AI-based diagnostics and biomedical analytics.

This Special Issue will cover a broad range of topics, including, but not limited to, the following:

  • AI-based methods for medical image analysis and interpretation;
  • AI-based methods for the prediction and diagnosis of diseases;
  • AI-based methods for early disease detection and prevention;
  • AI-based methods for disease prognosis and survival prediction;
  • AI-based methods for precision medicine;
  • AI-based methods for genomics;
  • AI-based drug discovery and personalized medicine;
  • AI-based medical decision support systems;
  • Biomedical data mining and knowledge discovery;
  • Ethical considerations in AI-based diagnostics and biomedical analytics.

Dr. Mostafa Fouda
Dr. Attayeb Mohsen
Dr. Zubair Fadlullah
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • diagnostics
  • biomedical analytics
  • medical image analysis
  • machine learning
  • deep learning
  • drug discovery
  • personalized medicine
  • data mining
  • knowledge discovery
  • genomics
  • proteomics
  • medical decision support systems
  • early disease detection
  • disease prognosis
  • precision medicine
  • ethical considerations
  • risk prediction
  • disease screening
  • biomarkers
  • patient datasets
  • expert systems
  • reinforcement learning
  • natural language processing
  • electronic medical records
  • transparency
  • bias
  • data privacy

Published Papers (10 papers)


Research

19 pages, 3542 KiB  
Article
Advancing Ocular Imaging: A Hybrid Attention Mechanism-Based U-Net Model for Precise Segmentation of Sub-Retinal Layers in OCT Images
by Prakash Kumar Karn and Waleed H. Abdulla
Bioengineering 2024, 11(3), 240; https://doi.org/10.3390/bioengineering11030240 - 28 Feb 2024
Viewed by 876
Abstract
This paper presents a novel U-Net model incorporating a hybrid attention mechanism for automating the segmentation of sub-retinal layers in Optical Coherence Tomography (OCT) images. OCT is an ophthalmology tool that provides detailed insights into retinal structures. Manual segmentation of these layers is time-consuming and subjective, calling for automated solutions. Our proposed model combines edge and spatial attention mechanisms with the U-Net architecture to improve segmentation accuracy. By leveraging attention mechanisms, the U-Net focuses selectively on image features. Extensive evaluations using datasets demonstrate that our model outperforms existing approaches, making it a valuable tool for medical professionals. The study also highlights the model’s robustness through performance metrics such as an average Dice score of 94.99%, Adjusted Rand Index (ARI) of 97.00%, and Strength of Agreement (SOA) classifications like “Almost Perfect”, “Excellent”, and “Very Strong”. This advanced predictive model shows promise in expediting processes and enhancing the precision of ocular imaging in real-world applications. Full article
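The Dice score the authors report quantifies the overlap between a predicted segmentation mask and a reference mask. As a minimal illustration (not the authors' code), it can be computed for binary masks represented as sets of pixel coordinates:

```python
def dice_coefficient(mask_a, mask_b):
    """Dice similarity: 2|A∩B| / (|A|+|B|) for two binary masks
    given as collections of pixel coordinates."""
    a, b = set(mask_a), set(mask_b)
    if not a and not b:
        return 1.0  # two empty masks agree perfectly by convention
    return 2 * len(a & b) / (len(a) + len(b))

# two 3-pixel masks sharing 2 pixels -> Dice = 2*2/6 ≈ 0.667
score = dice_coefficient([(0, 0), (0, 1), (1, 0)], [(0, 1), (1, 0), (1, 1)])
```

A score of 94.99%, as reported, means the predicted and reference layer masks overlap almost completely.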
(This article belongs to the Special Issue Artificial Intelligence-Based Diagnostics and Biomedical Analytics)

16 pages, 1475 KiB  
Article
Carpal Tunnel Syndrome Automated Diagnosis: A Motor vs. Sensory Nerve Conduction-Based Approach
by Dimitrios Bakalis, Prokopis Kontogiannis, Evangelos Ntais, Yannis V. Simos, Konstantinos I. Tsamis and George Manis
Bioengineering 2024, 11(2), 175; https://doi.org/10.3390/bioengineering11020175 - 11 Feb 2024
Viewed by 824
Abstract
The objective of this study was to evaluate the effectiveness of machine learning classification techniques applied to nerve conduction studies (NCS) of motor and sensory signals for the automatic diagnosis of carpal tunnel syndrome (CTS). Two methodologies were tested. In the first methodology, motor signals recorded from the patients’ median nerve were transformed into time-frequency spectrograms using the short-time Fourier transform (STFT). These spectrograms were then used as input to a deep two-dimensional convolutional neural network (CONV2D) for classification into two categories: patients and controls. In the second methodology, sensory signals from the patients’ median and ulnar nerves were subjected to multilevel wavelet decomposition (MWD), and statistical and non-statistical features were extracted from the decomposed signals. These features were utilized to train and test classifiers. The classification target was set to three categories: normal subjects (controls), patients with mild CTS, and patients with moderate to severe CTS based on conventional electrodiagnosis results. The results of the classification analysis demonstrated that both methodologies surpassed previous attempts at automatic CTS diagnosis. The classification models utilizing the motor signals transformed into time-frequency spectrograms exhibited excellent performance, with average accuracy of 94%. Similarly, the classifiers based on the sensory signals and the extracted features from multilevel wavelet decomposition showed significant accuracy in distinguishing between controls, patients with mild CTS, and patients with moderate to severe CTS, with accuracy of 97.1%. The findings highlight the efficacy of incorporating machine learning algorithms into the diagnostic processes of NCS, providing a valuable tool for clinicians in the diagnosis and management of neuropathies such as CTS. Full article
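The first methodology's preprocessing step, converting a 1-D nerve signal into a time-frequency spectrogram with the STFT, can be sketched in numpy. This is a simplified stand-in for the authors' pipeline; the frame length and hop size here are illustrative:

```python
import numpy as np

def stft_spectrogram(signal, frame_len=128, hop=64):
    """Magnitude spectrogram: slide a Hann-windowed frame over the
    signal and take the real FFT of each frame."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1)).T  # shape: (freq_bins, n_frames)

# a 50 Hz tone sampled at 1 kHz: energy concentrates near bin 50*128/1000 ≈ 6
t = np.arange(1000) / 1000.0
spec = stft_spectrogram(np.sin(2 * np.pi * 50 * t))
```

Each column of the resulting spectrogram is one time frame, so the 2-D array can be fed to a CONV2D classifier as an image.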

40 pages, 14434 KiB  
Article
Utilizing Deep Learning Algorithms for Signal Processing in Electrochemical Biosensors: From Data Augmentation to Detection and Quantification of Chemicals of Interest
by Fatemeh Esmaeili, Erica Cassie, Hong Phan T. Nguyen, Natalie O. V. Plank, Charles P. Unsworth and Alan Wang
Bioengineering 2023, 10(12), 1348; https://doi.org/10.3390/bioengineering10121348 - 23 Nov 2023
Viewed by 1041
Abstract
Nanomaterial-based aptasensors serve as useful instruments for detecting small biological entities. This work utilizes data gathered from three electrochemical aptamer-based sensors varying in receptors, analytes of interest, and lengths of signals. Our ultimate objective was the automatic detection and quantification of target analytes from a segment of the signal recorded by these sensors. Initially, we proposed a data augmentation method using conditional variational autoencoders to address data scarcity. Secondly, we employed recurrent-based networks for signal extrapolation, ensuring uniform signal lengths. In the third step, we developed seven deep learning classification models (GRU, unidirectional LSTM (ULSTM), bidirectional LSTM (BLSTM), ConvGRU, ConvULSTM, ConvBLSTM, and CNN) to identify and quantify specific analyte concentrations for six distinct classes, ranging from the absence of analyte to 10 μM. Finally, a second classification model was created to distinguish between abnormal and normal data segments, detect the presence or absence of analytes in the sample, and, if detected, identify the specific analyte and quantify its concentration. Evaluating the time series forecasting showed that the GRU-based network outperformed the other two networks (ULSTM and BLSTM). Regarding the classification models, it turned out that signal extrapolation was not effective in improving classification performance. Comparing the network architectures, hybrid networks (combining convolutional and recurrent layers) and CNN networks achieved 82% to 99% accuracy across all three datasets. Utilizing the short-time Fourier transform (STFT) as the preprocessing technique improved performance on all datasets, with accuracies from 84% to 99%. 
These findings underscore the effectiveness of suitable data preprocessing methods in enhancing neural network performance, enabling automatic analyte identification and quantification from electrochemical aptasensor signals. Full article

13 pages, 1749 KiB  
Article
Interobserver Agreement in Automatic Segmentation Annotation of Prostate Magnetic Resonance Imaging
by Liang Jin, Zhuangxuan Ma, Haiqing Li, Feng Gao, Pan Gao, Nan Yang, Dechun Li, Ming Li and Daoying Geng
Bioengineering 2023, 10(12), 1340; https://doi.org/10.3390/bioengineering10121340 - 21 Nov 2023
Viewed by 991
Abstract
We aimed to compare the performance and interobserver agreement of radiologists manually segmenting images or those assisted by automatic segmentation. We further aimed to reduce interobserver variability and improve the consistency of radiomics features. This retrospective study included 327 patients diagnosed with prostate cancer from September 2016 to June 2018; images from 228 patients were used to construct the automatic segmentation model, and images from the remaining 99 were used for testing. First, four radiologists with varying experience levels retrospectively segmented 99 axial prostate images manually using T2-weighted fat-suppressed magnetic resonance imaging. Automatic segmentation was performed after 2 weeks. The Pyradiomics software package v3.1.0 was used to extract the texture features. The Dice coefficient and intraclass correlation coefficient (ICC) were used to evaluate segmentation performance and the interobserver consistency of prostate radiomics. The Wilcoxon rank sum test was used to compare the paired samples, with the significance level set at p < 0.05. The Dice coefficient was used to accurately measure the spatial overlap of manually delineated images. Across all 99 prostate segmentation results, the manual and automatic segmentation results of the senior group were significantly better than those of the junior group (p < 0.05). Automatic segmentation was more consistent than manual segmentation (p < 0.05), and the average ICC reached >0.85. The automatic segmentation annotation performance of junior radiologists was similar to that of senior radiologists performing manual segmentation. The ICC of radiomics features increased to excellent consistency (0.925 [0.888~0.950]). Automatic segmentation annotation provided better results than manual segmentation by radiologists. 
Our findings indicate that automatic segmentation annotation helps reduce variability in the perception and interpretation between radiologists with different experience levels and ensures the stability of radiomics features. Full article

18 pages, 2739 KiB  
Article
Anatomical Prior-Based Automatic Segmentation for Cardiac Substructures from Computed Tomography Images
by Xuefang Wang, Xinyi Li, Ruxu Du, Yong Zhong, Yao Lu and Ting Song
Bioengineering 2023, 10(11), 1267; https://doi.org/10.3390/bioengineering10111267 - 31 Oct 2023
Viewed by 1246
Abstract
Cardiac substructure segmentation is a prerequisite for cardiac diagnosis and treatment, providing a basis for accurate calculation, modeling, and analysis of the entire cardiac structure. CT (computed tomography) imaging can be used for a noninvasive qualitative and quantitative evaluation of the cardiac anatomy and function. Cardiac substructures have diverse grayscales, fuzzy boundaries, irregular shapes, and variable locations. We designed a deep learning-based framework to improve the accuracy of the automatic segmentation of cardiac substructures. This framework integrates cardiac anatomical knowledge; it uses prior knowledge of the location, shape, and scale of cardiac substructures and separately processes the structures of different scales. Through two successive segmentation steps with a coarse-to-fine cascaded network, the more easily segmented substructures were coarsely segmented first; then, the more difficult substructures were finely segmented. The coarse segmentation result was used as prior information and combined with the original image as the input for the model. Anatomical knowledge of the large-scale substructures was embedded into the fine segmentation network to guide and train the small-scale substructures, achieving efficient and accurate segmentation of ten cardiac substructures. Sixty cardiac CT images and ten substructures manually delineated by experienced radiologists were retrospectively collected; the model was evaluated using the DSC (Dice similarity coefficient), Recall, Precision, and the Hausdorff distance. Compared with current mainstream segmentation models, our approach demonstrated significantly higher segmentation accuracy, with accurate segmentation of ten substructures of different shapes and sizes, indicating that the segmentation framework fused with prior anatomical knowledge has superior segmentation performance and can better segment small targets in multi-target segmentation tasks. Full article
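Of the metrics listed, the Hausdorff distance is the one that penalizes outlier boundary points: it is the largest distance from any point of one contour to the nearest point of the other. A minimal numpy sketch (not the authors' implementation) for two 2-D point sets:

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a (n,2) and b (m,2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
    return max(d.min(axis=1).max(),   # worst a-point to its nearest b-point
               d.min(axis=0).max())   # worst b-point to its nearest a-point
```

Unlike the Dice coefficient, which averages over the whole region, a single stray boundary point is enough to inflate this metric, which is why it complements DSC in segmentation evaluation.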

16 pages, 9185 KiB  
Article
Correction of Arterial-Phase Motion Artifacts in Gadoxetic Acid-Enhanced Liver MRI Using an Innovative Unsupervised Network
by Feng Pan, Qianqian Fan, Han Xie, Chongxin Bai, Zhi Zhang, Hebing Chen, Lian Yang, Xin Zhou, Qingjia Bao and Chaoyang Liu
Bioengineering 2023, 10(10), 1192; https://doi.org/10.3390/bioengineering10101192 - 13 Oct 2023
Viewed by 937
Abstract
This study aims to propose and evaluate DR-CycleGAN, a disentangled unsupervised network by introducing a novel content-consistency loss, for removing arterial-phase motion artifacts in gadoxetic acid-enhanced liver MRI examinations. From June 2020 to July 2021, gadoxetic acid-enhanced liver MRI data were retrospectively collected in this center to establish training and testing datasets. Motion artifacts were semi-quantitatively assessed using a five-point Likert scale (1 = no artifact, 2 = mild, 3 = moderate, 4 = severe, and 5 = non-diagnostic) and quantitatively evaluated using the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR). The datasets comprised a training dataset (308 examinations, including 58 examinations with artifact grade = 1 and 250 examinations with artifact grade ≥ 2), a paired test dataset (320 examinations, including 160 examinations with artifact grade = 1 and paired 160 examinations with simulated motion artifacts of grade ≥ 2), and an unpaired test dataset (474 examinations with artifact grade ranging from 1 to 5). The performance of DR-CycleGAN was evaluated and compared with a state-of-the-art network, Cycle-MedGAN V2.0. As a result, in the paired test dataset, DR-CycleGAN demonstrated significantly higher SSIM and PSNR values and lower motion artifact grades compared to Cycle-MedGAN V2.0 (0.89 ± 0.07 vs. 0.84 ± 0.09, 32.88 ± 2.11 vs. 30.81 ± 2.64, and 2.7 ± 0.7 vs. 3.0 ± 0.9, respectively; p < 0.001 each). In the unpaired test dataset, DR-CycleGAN also exhibited a superior motion artifact correction performance, resulting in a significant decrease in motion artifact grades from 2.9 ± 1.3 to 2.0 ± 0.6 compared to Cycle-MedGAN V2.0 (to 2.4 ± 0.9, p < 0.001). In conclusion, DR-CycleGAN effectively reduces motion artifacts in the arterial phase images of gadoxetic acid-enhanced liver MRI examinations, offering the potential to enhance image quality. Full article
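The PSNR values quoted above follow directly from the mean squared error between a reference image and a test image. A minimal numpy sketch (assuming 8-bit pixel intensities, hence max_val=255):

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * np.log10(max_val ** 2 / mse)
```

Because the scale is logarithmic, the reported gain from ~30.8 dB to ~32.9 dB corresponds to roughly a 38% reduction in mean squared error.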

26 pages, 4948 KiB  
Article
MIL-CT: Multiple Instance Learning via a Cross-Scale Transformer for Enhanced Arterial Light Reflex Detection
by Yuan Gao, Chenbin Ma, Lishuang Guo, Xuxiang Zhang and Xunming Ji
Bioengineering 2023, 10(8), 971; https://doi.org/10.3390/bioengineering10080971 - 16 Aug 2023
Viewed by 1050
Abstract
One of the early manifestations of systemic atherosclerosis, which leads to blood circulation issues, is the enhanced arterial light reflex (EALR). Fundus images are commonly used for regular screening purposes to intervene and assess the severity of systemic atherosclerosis in a timely manner. However, there is a lack of automated methods that can meet the demands of large-scale population screening. Therefore, this study introduces a novel cross-scale transformer-based multi-instance learning method, named MIL-CT, for the detection of early arterial lesions (e.g., EALR) in fundus images. MIL-CT utilizes the cross-scale vision transformer to extract retinal features in a multi-granularity perceptual domain. It incorporates a multi-head cross-scale attention fusion module to enhance global perceptual capability and feature representation. By integrating information from different scales and minimizing information loss, the method significantly improves the performance of the EALR detection task. Furthermore, a multi-instance learning module is implemented to enable the model to better comprehend local details and features in fundus images, facilitating the classification of patch tokens related to retinal lesions. To effectively learn the features associated with retinal lesions, we utilize weights pre-trained on a large fundus image Kaggle dataset. Our validation and comparison experiments conducted on our collected EALR dataset demonstrate the effectiveness of the MIL-CT method in reducing generalization errors while maintaining efficient attention to retinal vascular details. Moreover, the method surpasses existing models in EALR detection, achieving an accuracy, precision, sensitivity, specificity, and F1 score of 97.62%, 97.63%, 97.05%, 96.48%, and 97.62%, respectively. These results exhibit the significant enhancement in diagnostic accuracy of fundus images brought about by the MIL-CT method. 
Thus, it holds potential for various applications, particularly in the early screening of cardiovascular diseases such as hypertension and atherosclerosis. Full article

13 pages, 3298 KiB  
Article
Deep Learning-Based Recognition of Periodontitis and Dental Caries in Dental X-ray Images
by Ivane Delos Santos Chen, Chieh-Ming Yang, Mei-Juan Chen, Ming-Chin Chen, Ro-Min Weng and Chia-Hung Yeh
Bioengineering 2023, 10(8), 911; https://doi.org/10.3390/bioengineering10080911 - 1 Aug 2023
Cited by 2 | Viewed by 2886
Abstract
Dental X-ray images are important and useful for dentists to diagnose dental diseases. Utilizing deep learning in dental X-ray images can help dentists quickly and accurately identify common dental diseases such as periodontitis and dental caries. This paper applies image processing and deep learning technologies to dental X-ray images to propose a simultaneous recognition method for periodontitis and dental caries. The single-tooth X-ray image is detected by the YOLOv7 object detection technique and cropped from the periapical X-ray image. Then, it is processed through contrast-limited adaptive histogram equalization to enhance the local contrast, and bilateral filtering to eliminate noise while preserving the edge. The deep learning architecture for classification comprises a pre-trained EfficientNet-B0 and fully connected layers that output two labels by the sigmoid activation function for the classification task. The average precision of tooth detection using YOLOv7 is 97.1%. For the recognition of periodontitis, the area under the curve (AUC) of the receiver operating characteristic (ROC) curve is 98.67%, and the AUC of the precision-recall (PR) curve is 98.38%. For the recognition of dental caries, the AUC of the ROC curve is 98.31%, and the AUC of the PR curve is 97.55%. Different from the conventional deep learning-based methods for a single disease such as periodontitis or dental caries, the proposed approach can provide the recognition of both periodontitis and dental caries simultaneously. This recognition method presents good performance in the identification of periodontitis and dental caries, thus facilitating dental diagnosis. Full article
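Contrast-limited adaptive histogram equalization (CLAHE) extends plain histogram equalization by tiling the image and clipping each tile's histogram before remapping. The core remapping step, shown here as global (non-adaptive, non-clipped) equalization in numpy, is a simplified sketch rather than the authors' pipeline:

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image:
    remap intensities so the cumulative histogram becomes roughly linear."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0].min()           # first nonzero cumulative count
    denom = max(cdf[-1] - cdf_min, 1)      # guard against constant images
    lut = np.round(np.clip(cdf - cdf_min, 0, None) * 255.0 / denom)
    return lut.astype(np.uint8)[img]

# a low-contrast image (values 100 and 200) is stretched to the full range
out = equalize_histogram(np.array([[100, 100], [200, 200]], dtype=np.uint8))
```

CLAHE's tiling makes the same stretch locally adaptive, and the clip limit prevents noise in near-uniform regions from being over-amplified.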

29 pages, 7108 KiB  
Article
Detection of Cardiovascular Disease from Clinical Parameters Using a One-Dimensional Convolutional Neural Network
by Mohammad Mahbubur Rahman Khan Mamun and Tarek Elfouly
Bioengineering 2023, 10(7), 796; https://doi.org/10.3390/bioengineering10070796 - 3 Jul 2023
Cited by 3 | Viewed by 2011
Abstract
Heart disease is a significant public health problem, and early detection is crucial for effective treatment and management. Conventional and noninvasive techniques are cumbersome, time-consuming, inconvenient, expensive, and unsuitable for frequent measurement or diagnosis. With the advance of artificial intelligence (AI), new noninvasive techniques emerging in research are detecting heart conditions using machine learning (ML) and deep learning (DL). Machine learning models have been used with publicly available heart-health datasets from the internet; in contrast, deep learning techniques have recently been applied to analyze electrocardiograms (ECG) or similar vital data to detect heart diseases. Significant limitations of these datasets are their small size in terms of the number of patients and features, and the fact that many are imbalanced. Furthermore, the trained models need to be more reliable and accurate for use in medical settings. This study proposes a hybrid one-dimensional convolutional neural network (1D CNN), which uses a large dataset accumulated from online survey data and features selected using feature selection algorithms. The 1D CNN proved to show better accuracy compared to contemporary machine learning algorithms and artificial neural networks. The non-coronary heart disease (no-CHD) and CHD validation data showed an accuracy of 80.1% and 76.9%, respectively. The model was compared with an artificial neural network, random forest, AdaBoost, and a support vector machine. Overall, the 1D CNN proved to show better performance in terms of accuracy, false negative rates, and false positive rates. Similar strategies were applied for four more heart conditions, and the analysis proved that using the hybrid 1D CNN produced better accuracy. 
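At its core, each layer of a 1D CNN slides learned kernels along the input feature vector and computes a strided cross-correlation. A dependency-free sketch of that single operation (illustrative only, not the authors' model):

```python
def conv1d(signal, kernel, stride=1):
    """'Valid' strided 1-D cross-correlation, the core operation of a
    one-dimensional convolutional layer (bias and activation omitted)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(signal) - k + 1, stride)]

# a length-2 summing kernel over a 4-sample input yields 3 outputs
out = conv1d([1, 2, 3, 4], [1, 1])  # [3, 5, 7]
```

In a hybrid 1D CNN such as the one described, many such kernels are applied to the selected clinical features and their outputs passed through further layers before the final classification.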

16 pages, 1901 KiB  
Article
Retinal Vascular Image Segmentation Using Improved UNet Based on Residual Module
by Ko-Wei Huang, Yao-Ren Yang, Zih-Hao Huang, Yi-Yang Liu and Shih-Hsiung Lee
Bioengineering 2023, 10(6), 722; https://doi.org/10.3390/bioengineering10060722 - 14 Jun 2023
Cited by 2 | Viewed by 2103
Abstract
In recent years, deep learning technology for clinical diagnosis has progressed considerably, and the value of medical imaging continues to increase. In the past, clinicians evaluated medical images according to their individual expertise. In contrast, the application of artificial intelligence technology for automatic analysis and diagnostic assistance to support clinicians in evaluating medical information more efficiently has become an important trend. In this study, we propose a machine learning architecture designed to segment images of retinal blood vessels based on an improved U-Net neural network model. The proposed model incorporates a residual module to extract features more effectively, and includes a full-scale skip connection to combine low level details with high-level features at different scales. The results of an experimental evaluation show that the model was able to segment images of retinal vessels accurately. The proposed method also outperformed several existing models on the benchmark datasets DRIVE and ROSE, including U-Net, ResUNet, U-Net3+, ResUNet++, and CaraNet. Full article
