Artificial Intelligence in Biological and Biomedical Imaging 2.0

A special issue of Biomedicines (ISSN 2227-9059). This special issue belongs to the section "Biomedical Engineering and Materials".

Deadline for manuscript submissions: closed (28 February 2023) | Viewed by 23,854

Special Issue Editors

Guest Editor
Dr. Yu-Te Wu
Institute of Biophotonics, National Yang Ming Chiao Tung University, Taipei, Taiwan
Interests: medical image processing and analysis; structural and functional magnetic resonance imaging; artificial intelligence

Guest Editor
Dr. Wan-Yuo Guo
Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan
Interests: neuroradiology; magnetic resonance imaging; fetal magnetic resonance imaging; brain tumor imaging; cerebrovascular disease diagnosis and treatment

Special Issue Information

Dear Colleagues,

This Special Issue, “Artificial Intelligence (AI) in Biological and Biomedical Imaging 2.0”, is open for submissions. In recent years, AI, including machine learning and deep learning, has been widely applied to automate time-consuming and labor-intensive work and to assist in diagnosis and prognosis. AI’s role in biological and biomedical imaging is an emerging research topic and represents a future trend in the field.

Imaging plays an essential role in the fields of biology and biomedicine, providing information about the structural and functional mechanisms of cells and the human body. Biological and biomedical imaging covers microscopy, molecular imaging, pathological imaging, optical coherence tomography, nuclear medicine, ultrasound imaging, X-ray radiography, computed tomography, magnetic resonance imaging, etc. AI has been applied in biological and biomedical imaging research to address imaging reconstruction, registration, detection, classification, lesion segmentation, diagnosis, prognosis, and so on. This Special Issue will present diverse and up-to-date AI research findings in this field.

This Special Issue is open to basic, clinical, and multidisciplinary research on AI in the field of biological and biomedical imaging. Articles may address, but are not limited to, the following topics:

  • AI-based biological and biomedical image reconstruction, registration, classification, detection and segmentation, and applications;
  • AI-aided diagnosis, prognosis, and decision making.

Dr. Yu-Te Wu
Dr. Wan-Yuo Guo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Biomedicines is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • machine learning
  • deep learning
  • biological imaging
  • biomedical imaging
  • image analysis and processing
  • diagnosis
  • prognosis
  • decision-making

Published Papers (10 papers)


Research

14 pages, 3593 KiB  
Article
Lightweight Visual Transformers Outperform Convolutional Neural Networks for Gram-Stained Image Classification: An Empirical Study
by Hee E. Kim, Mate E. Maros, Thomas Miethke, Maximilian Kittel, Fabian Siegel and Thomas Ganslandt
Biomedicines 2023, 11(5), 1333; https://doi.org/10.3390/biomedicines11051333 - 30 Apr 2023
Viewed by 1733
Abstract
We aimed to automate Gram-stain analysis to speed up the detection of bacterial strains in patients suffering from infections. We performed comparative analyses of visual transformers (VT) in various configurations, including model size (small vs. large), training epochs (1 vs. 100), and quantization schemes (tensor- or channel-wise) with float32 or int8 precision, on a publicly available dataset (DIBaS, n = 660) and a locally compiled dataset (n = 8500). Six VT models (BEiT, DeiT, MobileViT, PoolFormer, Swin, and ViT) were evaluated and compared to two convolutional neural networks (CNN), ResNet and ConvNeXT. An overall comparison of performance, including accuracy, inference time, and model size, was also visualized. The frames per second (FPS) of small models consistently surpassed those of their large counterparts by a factor of 1–2×. DeiT small was the fastest VT in the int8 configuration (6.0 FPS). In conclusion, VTs consistently outperformed CNNs for Gram-stain classification in most settings, even on smaller datasets.
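
As a hedged illustration of the quantization comparison the abstract describes, the sketch below benchmarks an int8 dynamically quantized copy of a classifier against its float32 original in PyTorch; the model choice, input size, and iteration count are assumptions for illustration, not the study's setup.

```python
# A minimal sketch (assumed model, input size, and iteration count; not the
# study's code) of benchmarking an int8 dynamically quantized classifier
# against its float32 original in PyTorch.
import time

import torch
import torchvision.models as models

model = models.resnet18(weights=None)  # stand-in for a ViT/CNN classifier
model.eval()

# Tensor-wise int8 dynamic quantization of the linear layers.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def fps(net, n_images=50, size=224):
    """Single-image inference throughput in frames per second (CPU)."""
    x = torch.randn(1, 3, size, size)
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(n_images):
            net(x)
    return n_images / (time.perf_counter() - start)

print(f"float32: {fps(model):.1f} FPS | int8: {fps(quantized):.1f} FPS")
```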

14 pages, 2919 KiB  
Article
Artificial Intelligence Assisted Computational Tomographic Detection of Lung Nodules for Prognostic Cancer Examination: A Large-Scale Clinical Trial
by Heng-Sheng Chao, Chiao-Yun Tsai, Chung-Wei Chou, Tsu-Hui Shiao, Hsu-Chih Huang, Kun-Chieh Chen, Hao-Hung Tsai, Chin-Yu Lin and Yuh-Min Chen
Biomedicines 2023, 11(1), 147; https://doi.org/10.3390/biomedicines11010147 - 6 Jan 2023
Cited by 4 | Viewed by 2255
Abstract
Low-dose computed tomography (LDCT) has emerged as a standard method for detecting early-stage lung cancer. However, tedious computed tomography (CT) slide reading, patient-by-patient checks, and the lack of standard criteria for judging vague but possible nodules lead to variable outcomes of CT slide interpretation. To evaluate artificial intelligence (AI)-assisted CT examination, AI algorithm-assisted CT screening was embedded in the hospital picture archiving and communication system, and a 200-person clinical trial was conducted at two medical centers. With AI algorithm-assisted CT screening, the sensitivity of detecting nodules sized 4–5 mm, 6–10 mm, 11–20 mm, and >20 mm increased by 41%, 11.2%, 10.3%, and 18.7%, respectively. Remarkably, the overall sensitivity of detecting varied nodules increased by 20.7%, from 67.7% to 88.4%. Furthermore, the sensitivity of detecting ground-glass nodules (GGN), which are challenging for radiologists and physicians, increased by 18.5%, from 72.5% to 91%. At a free-response receiver operating characteristic (FROC) AI score of ≥0.4, the standalone AI CT screening sensitivity reached >95%, with an area under the localization receiver operating characteristic curve (LROC-AUC) of >0.88. Our study demonstrates that AI algorithm-embedded CT screening significantly eases the tedious LDCT reading workload for doctors.
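
For readers unfamiliar with the sensitivity figures quoted above, the short sketch below tabulates per-size-bin detection sensitivity (detected nodules over total nodules) with and without AI assistance; all counts are hypothetical placeholders, not trial data.

```python
# Toy illustration (hypothetical counts, NOT trial data) of the metric used
# above: per-size-bin sensitivity = detected nodules / total nodules.
bins = {
    # size bin: (detected without AI, detected with AI, total nodules)
    "4-5 mm": (12, 20, 25),
    "6-10 mm": (30, 34, 40),
    "11-20 mm": (25, 28, 30),
    ">20 mm": (13, 16, 16),
}

for label, (without_ai, with_ai, total) in bins.items():
    gain = (with_ai - without_ai) / total
    print(f"{label}: sensitivity gain with AI = {gain:+.1%}")
```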

23 pages, 3893 KiB  
Article
A Hybrid Workflow of Residual Convolutional Transformer Encoder for Breast Cancer Classification Using Digital X-ray Mammograms
by Riyadh M. Al-Tam, Aymen M. Al-Hejri, Sachin M. Narangale, Nagwan Abdel Samee, Noha F. Mahmoud, Mohammed A. Al-masni and Mugahed A. Al-antari
Biomedicines 2022, 10(11), 2971; https://doi.org/10.3390/biomedicines10112971 - 18 Nov 2022
Cited by 22 | Viewed by 3256
Abstract
Breast cancer, which attacks the glandular epithelium of the breast, is the second most common kind of cancer in women after lung cancer, and it affects a significant number of people worldwide. Building on the advantages of the residual convolutional network and the Transformer encoder with a multilayer perceptron (MLP), this study proposes a novel hybrid deep learning computer-aided diagnosis (CAD) system for breast lesions. While the backbone residual deep learning network creates the deep features, the Transformer classifies breast cancer based on the self-attention mechanism. The proposed CAD system can recognize breast cancer in two scenarios: Scenario A (binary classification) and Scenario B (multi-classification). Data collection and preprocessing, patch image creation and splitting, and artificial intelligence-based breast lesion identification are all components of the execution framework and are applied consistently across both scenarios. The effectiveness of the proposed AI model is compared against three separate deep learning models: a custom CNN, VGG16, and ResNet50. Two datasets, CBIS-DDSM and DDSM, are used to construct and test the proposed CAD system. Five-fold cross-validation of the test data is used to evaluate the accuracy of the performance results. The suggested hybrid CAD system achieves encouraging evaluation results, with overall accuracies of 100% and 95.80% for the binary and multiclass prediction challenges, respectively. The experimental results reveal that the proposed hybrid AI model can reliably distinguish benign from malignant breast tissue, which is important for radiologists to recommend further investigation of abnormal mammograms and to provide the optimal treatment plan.
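
The sketch below outlines, under assumed hyperparameters, the general shape of such a hybrid design: a ResNet backbone extracts a deep feature map whose spatial positions serve as tokens for a Transformer encoder, followed by an MLP head. It is an illustrative PyTorch skeleton, not the authors' exact architecture.

```python
# A minimal sketch (assumed hyperparameters, not the authors' architecture)
# of a hybrid CAD classifier: ResNet features as tokens for a Transformer
# encoder with self-attention, followed by an MLP classification head.
import torch
import torch.nn as nn
import torchvision.models as models

class HybridResNetTransformer(nn.Module):
    def __init__(self, num_classes=2, d_model=512, nhead=8, num_layers=2):
        super().__init__()
        backbone = models.resnet50(weights=None)
        # Keep everything up to the last conv stage (2048-channel feature map).
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.proj = nn.Conv2d(2048, d_model, kernel_size=1)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.mlp_head = nn.Sequential(
            nn.LayerNorm(d_model), nn.Linear(d_model, num_classes)
        )

    def forward(self, x):
        f = self.proj(self.features(x))            # (B, d_model, H', W')
        tokens = f.flatten(2).transpose(1, 2)      # (B, H'*W', d_model)
        encoded = self.encoder(tokens)             # self-attention over positions
        return self.mlp_head(encoded.mean(dim=1))  # mean-pool tokens, classify

logits = HybridResNetTransformer()(torch.randn(1, 3, 224, 224))
```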

15 pages, 4097 KiB  
Article
Artificial Intelligence for Early Detection of Chest Nodules in X-ray Images
by Hwa-Yen Chiu, Rita Huan-Ting Peng, Yi-Chian Lin, Ting-Wei Wang, Ya-Xuan Yang, Ying-Ying Chen, Mei-Han Wu, Tsu-Hui Shiao, Heng-Sheng Chao, Yuh-Min Chen and Yu-Te Wu
Biomedicines 2022, 10(11), 2839; https://doi.org/10.3390/biomedicines10112839 - 7 Nov 2022
Cited by 7 | Viewed by 2773
Abstract
Early detection increases overall survival among patients with lung cancer. This study formulated a machine learning method that processes chest X-rays (CXRs) to detect lung cancer early. After preprocessing our dataset with monochrome and brightness correction, we applied several contrast-enhancement methods and then used U-Net to perform lung segmentation. We used 559 CXRs, each with a single lung nodule labeled by experts, to train a You Only Look Once version 4 (YOLOv4) deep learning architecture to detect lung nodules. In a testing dataset of 100 CXRs from patients at Taipei Veterans General Hospital and 154 CXRs from the Japanese Society of Radiological Technology dataset, the AI model using a combination of preprocessing methods performed best, with a sensitivity of 79% at 3.04 false positives per image. We then tested the AI on 383 sets of CXRs obtained up to 5 years before lung cancer diagnosis. The median time from detection to diagnosis was 46 (range 3–523) days for radiologists assisted by AI, longer than the 8 (range 0–263) days for radiologists alone. The AI model can assist radiologists in the early detection of lung nodules.
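
One common choice for the contrast-enhancement step mentioned above is contrast-limited adaptive histogram equalization (CLAHE); the snippet below sketches it with OpenCV, with the clip limit and tile grid as assumed parameters rather than the study's settings.

```python
# A plausible contrast-enhancement step (CLAHE via OpenCV); the clip limit
# and tile grid are assumed parameters, not the study's settings.
import cv2
import numpy as np

def enhance_cxr(image_u8: np.ndarray) -> np.ndarray:
    """Contrast-Limited Adaptive Histogram Equalization on an 8-bit CXR."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(image_u8)

# Synthetic stand-in; replace with cv2.imread("cxr.png", cv2.IMREAD_GRAYSCALE).
cxr = (np.random.rand(512, 512) * 255).astype(np.uint8)
enhanced = enhance_cxr(cxr)
```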

12 pages, 3188 KiB  
Article
Rapid Convolutional Neural Networks for Gram-Stained Image Classification at Inference Time on Mobile Devices: Empirical Study from Transfer Learning to Optimization
by Hee E. Kim, Mate E. Maros, Fabian Siegel and Thomas Ganslandt
Biomedicines 2022, 10(11), 2808; https://doi.org/10.3390/biomedicines10112808 - 4 Nov 2022
Cited by 2 | Viewed by 1416
Abstract
Despite the emergence of mobile health and the success of deep learning (DL), deploying production-ready DL models to resource-limited devices remains challenging. In particular, the speed of DL models at inference time becomes relevant. We aimed to accelerate inference time for Gram-stained analysis, a tedious and manual task involving microorganism detection on whole-slide images. Three DL models were optimized in three steps, transfer learning, pruning, and quantization, and then evaluated on two Android smartphones. Most convolutional layers (≥80%) had to be retrained for adaptation to the Gram-stained classification task. The combination of pruning and quantization demonstrated its utility in reducing the model size and inference time without compromising model quality. Pruning mainly contributed to model size reduction, by 15×, while quantization reduced inference time by 3× and decreased model size by 4×. Combined, the two reduced the baseline model size by an overall factor of 46×. The optimized models were smaller than 6 MB and could process one image in <0.6 s on a Galaxy S10. Our findings demonstrate that model compression methods are highly relevant for the successful deployment of DL solutions to resource-limited devices.
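
The sketch below illustrates one plausible way to combine magnitude pruning with int8 dynamic quantization in PyTorch, in the spirit of the pipeline described; the model, pruning amount, and layer selection are assumptions, not the authors' recipe.

```python
# A sketch (assumed model, pruning amount, and layer selection; not the
# authors' recipe) of combining magnitude pruning with int8 dynamic
# quantization in PyTorch to shrink a model for mobile deployment.
import torch
import torch.nn.utils.prune as prune
import torchvision.models as models

model = models.mobilenet_v2(weights=None)

# Globally prune 50% of convolution weights by L1 magnitude.
conv_params = [(m, "weight") for m in model.modules()
               if isinstance(m, torch.nn.Conv2d)]
prune.global_unstructured(conv_params,
                          pruning_method=prune.L1Unstructured, amount=0.5)
for module, name in conv_params:
    prune.remove(module, name)  # bake the sparsity mask into the weights

# Quantize the linear layers to int8 for smaller size and faster inference.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
```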

14 pages, 3071 KiB  
Article
Artificial Intelligence Driven Biomedical Image Classification for Robust Rheumatoid Arthritis Classification
by Marwa Obayya, Mohammad Alamgeer, Jaber S. Alzahrani, Rana Alabdan, Fahd N. Al-Wesabi, Abdullah Mohamed and Mohamed Ibrahim Alsaid Hassan
Biomedicines 2022, 10(11), 2714; https://doi.org/10.3390/biomedicines10112714 - 26 Oct 2022
Cited by 4 | Viewed by 1667
Abstract
Recently, artificial intelligence (AI), including machine learning (ML) and deep learning (DL) models, has been commonly employed for automated disease diagnosis. AI in biological and biomedical imaging is an emerging area and will be a future trend in the field. At the same time, biomedical images can be used for the classification of rheumatoid arthritis (RA). RA is an autoimmune illness that affects the musculoskeletal system with systemic, inflammatory, and chronic effects. The disease is frequently progressive and decreases physical function, causing articular damage, suffering, and fatigue. Over time, RA damages the cartilage of the joints and bones, weakens the tendons and joints, and finally causes joint destruction. Sensors (thermal infrared cameras, accelerometers, and wearable sensors) are commonly employed to collect data for RA. This study develops an Automated Rheumatoid Arthritis Classification using an Arithmetic Optimization Algorithm with Deep Learning (ARAC-AOADL) model. The goal of the presented ARAC-AOADL technique is to classify health disorders related to RA and orthopaedics. First, the ARAC-AOADL technique pre-processes the input images with a median filtering (MF) technique. Then, it uses the AOA with an enhanced capsule network (ECN) model to produce feature vectors. For RA classification, the ARAC-AOADL technique uses a multi-kernel extreme learning machine (MKELM) model. Experimental analysis of the ARAC-AOADL technique on a benchmark dataset reported a maximum accuracy of 98.57%. Therefore, the ARAC-AOADL technique can be employed for accurate and timely RA classification.
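
As a small illustration of the median-filtering (MF) preprocessing step named above, the snippet below applies a median filter with SciPy; the 3×3 kernel size is an assumption.

```python
# A small sketch of the median-filtering (MF) preprocessing step; the 3x3
# kernel size is an assumption.
import numpy as np
from scipy.ndimage import median_filter

image = np.random.rand(256, 256)         # stand-in for a biomedical image
denoised = median_filter(image, size=3)  # median filter suppresses speckle
```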

13 pages, 2153 KiB  
Article
How Resilient Are Deep Learning Models in Medical Image Analysis? The Case of the Moment-Based Adversarial Attack (Mb-AdA)
by Theodore V. Maliamanis, Kyriakos D. Apostolidis and George A. Papakostas
Biomedicines 2022, 10(10), 2545; https://doi.org/10.3390/biomedicines10102545 - 12 Oct 2022
Cited by 5 | Viewed by 1270
Abstract
In the past years, deep neural networks (DNNs) have become popular in many disciplines, such as computer vision (CV), and medical image analysis (MIA) is one of the most important challenges in the CV area. However, adversarial attacks (AdAs) have proven to be an important threat to vision systems, significantly reducing the performance of models. This paper proposes a new black-box adversarial attack based on orthogonal image moments, named Mb-AdA. Additionally, a corresponding defensive method of adversarial training using Mb-AdA adversarial examples is investigated, with encouraging results. The proposed attack was applied to classification and segmentation tasks with six state-of-the-art deep learning (DL) models on X-ray, histopathology, and nuclei cell images. The main advantage of Mb-AdA is that, unlike other attacks, it does not destroy the structure of images: instead of adding noise, it removes specific image information that is critical for medical models’ decisions. The proposed attack is more effective than the compared ones, achieving degradation of up to 65% in accuracy for classification tasks and 18% in IoU for segmentation tasks while maintaining relatively high SSIM. At the same time, Mb-AdA adversarial examples were shown to enhance the robustness of the model.
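
As a loose analogy to "removing information instead of adding noise" (and not the paper's moment machinery), the sketch below reconstructs an image from a truncated orthogonal basis, here the 2-D DCT, discarding high-order coefficients while preserving coarse structure.

```python
# A loose analogy (not the paper's moment machinery): "removing information
# instead of adding noise" by reconstructing an image from a truncated
# orthogonal basis (here the 2-D DCT), keeping only low-order coefficients.
import numpy as np
from scipy.fft import dctn, idctn

def truncate_orthogonal(image: np.ndarray, keep: int) -> np.ndarray:
    """Zero all but the lowest-order keep x keep DCT coefficients."""
    coeffs = dctn(image, norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0
    return idctn(coeffs * mask, norm="ortho")

img = np.random.rand(128, 128)
attacked = truncate_orthogonal(img, keep=32)  # coarse structure preserved
```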

11 pages, 1332 KiB  
Article
Deep Learning for Bone Mineral Density and T-Score Prediction from Chest X-rays: A Multicenter Study
by Yoichi Sato, Norio Yamamoto, Naoya Inagaki, Yusuke Iesaki, Takamune Asamoto, Tomohiro Suzuki and Shunsuke Takahara
Biomedicines 2022, 10(9), 2323; https://doi.org/10.3390/biomedicines10092323 - 19 Sep 2022
Cited by 12 | Viewed by 3677
Abstract
Although the number of patients with osteoporosis is increasing worldwide, diagnosis and treatment are presently inadequate. In this study, we developed a deep learning model to predict bone mineral density (BMD) and T-score from chest X-rays, which are among the most common, easily accessible, and low-cost medical imaging examinations. The dataset used in this study comprised patients who underwent dual-energy X-ray absorptiometry (DXA) and chest radiography at six hospitals between 2010 and 2021. We trained the deep learning model through ensemble learning on chest X-rays, age, and sex, predicting BMD by regression and T-score by multiclass classification. We assessed two metrics to evaluate the performance of the deep learning model: (1) the correlation between the predicted and true BMDs and (2) the consistency between the predicted and true T-score classes. The correlation coefficients for BMD prediction were 0.75 for the hip and 0.63 for the lumbar spine. The areas under the curve for the T-score predictions of normal, osteopenia, and osteoporosis diagnoses were 0.89, 0.70, and 0.84, respectively. These results suggest that the proposed deep learning model may be suitable for screening patients with osteoporosis by predicting BMD and T-score from chest X-rays.
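
The two evaluation metrics described, correlation for the BMD regression and area under the curve for a T-score class, can be computed as in the sketch below; the values are synthetic stand-ins, not study data, and the 0.75 BMD cutoff is purely illustrative.

```python
# Synthetic stand-ins (NOT study data) for the two evaluation metrics:
# Pearson correlation for the BMD regression and AUC for a T-score class.
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
true_bmd = rng.normal(0.9, 0.15, 200)             # toy ground-truth BMD
pred_bmd = true_bmd + rng.normal(0.0, 0.08, 200)  # imperfect predictions
r, _ = pearsonr(true_bmd, pred_bmd)

is_osteoporosis = (true_bmd < 0.75).astype(int)   # illustrative cutoff
auc = roc_auc_score(is_osteoporosis, -pred_bmd)   # lower BMD = higher risk
print(f"Pearson r = {r:.2f}, osteoporosis AUC = {auc:.2f}")
```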

14 pages, 3216 KiB  
Article
Human Blastocyst Components Detection Using Multiscale Aggregation Semantic Segmentation Network for Embryonic Analysis
by Muhammad Arsalan, Adnan Haider, Se Woon Cho, Yu Hwan Kim and Kang Ryoung Park
Biomedicines 2022, 10(7), 1717; https://doi.org/10.3390/biomedicines10071717 - 15 Jul 2022
Cited by 6 | Viewed by 2139
Abstract
Infertility is one of the most important health concerns worldwide. It is characterized by the failure to achieve pregnancy after a period of regular unprotected sexual intercourse. In vitro fertilization (IVF) is an assisted reproduction technique that efficiently addresses infertility. IVF replaces the natural mode of reproduction with a manual procedure wherein embryos are cultivated in a controlled laboratory environment until they reach the blastocyst stage. The standard IVF procedure includes the transfer of one or two blastocysts selected from several grown in a controlled environment. The morphometric properties of blastocysts and their compartments, such as the trophectoderm (TE), zona pellucida (ZP), inner cell mass (ICM), and blastocoel (BL), are analyzed through manual microscopic analysis to predict viability. Deep learning has been extensively used for medical diagnosis and analysis and can be a powerful tool to automate the morphological analysis of human blastocysts. However, existing approaches are inaccurate and require extensive preprocessing and expensive architectures. Thus, to automate the detection of blastocyst components, this study proposed a novel multiscale aggregation semantic segmentation network (MASS-Net) that combines four different scales via depth-wise concatenation. The extensive use of depthwise separable convolutions decreases the number of trainable parameters, and the innovative multiscale design provides rich spatial information at different resolutions, achieving good segmentation performance without a very deep architecture. MASS-Net uses 2.06 million trainable parameters and accurately detects the TE, ZP, ICM, and BL without preprocessing stages. Moreover, it can simultaneously provide a separate binary mask for each blastocyst component, and these masks capture the structure of each component for embryonic analysis. The proposed MASS-Net was evaluated on publicly available human blastocyst (microscopic) imaging data. The experimental results revealed that it can effectively detect the TE, ZP, ICM, and BL, with mean Jaccard indices of 79.08%, 84.69%, 85.88%, and 89.28%, respectively, higher than those of state-of-the-art methods.
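
The parameter savings come largely from depthwise separable convolutions; the sketch below shows a generic such block in PyTorch (channel counts and normalization choices are assumptions, not the exact MASS-Net design).

```python
# A generic depthwise separable convolution block (channel counts and
# normalization are assumptions, not the exact MASS-Net design).
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        # Pointwise: 1x1 convolution mixes channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

y = DepthwiseSeparableConv(64, 128)(torch.randn(1, 64, 56, 56))
```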

12 pages, 2771 KiB  
Article
Automatic Segmentation of Retinal Fluid and Photoreceptor Layer from Optical Coherence Tomography Images of Diabetic Macular Edema Patients Using Deep Learning and Associations with Visual Acuity
by Huan-Yu Hsu, Yu-Bai Chou, Ying-Chun Jheng, Zih-Kai Kao, Hsin-Yi Huang, Hung-Ruei Chen, De-Kuang Hwang, Shih-Jen Chen, Shih-Hwa Chiou and Yu-Te Wu
Biomedicines 2022, 10(6), 1269; https://doi.org/10.3390/biomedicines10061269 - 29 May 2022
Cited by 6 | Viewed by 2290
Abstract
Diabetic macular edema (DME) is a highly common cause of vision loss in patients with diabetes. Optical coherence tomography (OCT) is crucial in classifying DME and tracking the results of DME treatment. The presence of intraretinal cystoid fluid (IRC) and subretinal fluid (SRF) and the disruption of the ellipsoid zone (EZ), which is part of the photoreceptor layer, are three crucial factors affecting best-corrected visual acuity (BCVA). However, the manual segmentation of retinal fluid and the EZ from retinal OCT images is laborious and time-consuming, and current methods focus only on the segmentation of retinal features, lacking a correlation with visual acuity. Therefore, we proposed a modified U-Net, a deep learning algorithm, to segment these features from OCT images of patients with DME, and we correlated these features with visual acuity. The IRC, SRF, and EZ in the OCT retinal images were manually labeled and checked by doctors. We trained the modified U-Net model on these labeled images. Our model achieved Sørensen–Dice coefficients of 0.80 and 0.89 for the IRC and SRF, respectively, and the area under the receiver operating characteristic (ROC) curve for EZ disruption was 0.88. Linear regression indicated that EZ disruption was the factor most strongly correlated with BCVA, a finding that agrees with previous studies on OCT images. Thus, our segmentation network can be feasibly applied to OCT image segmentation and can assist physicians in assessing the severity of the disease.
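
For reference, the Sørensen–Dice coefficient reported above can be computed for binary masks as in the minimal sketch below.

```python
# The Soerensen-Dice coefficient for binary masks, as reported above.
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks A and B."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Example: two random 64x64 masks.
rng = np.random.default_rng(0)
print(dice(rng.random((64, 64)) > 0.5, rng.random((64, 64)) > 0.5))
```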
