Artificial Intelligence in Advanced Medical Imaging

A special issue of Bioengineering (ISSN 2306-5354). This special issue belongs to the section "Biosignal Processing".

Deadline for manuscript submissions: closed (31 March 2023) | Viewed by 29,958

Special Issue Editors

Guest Editor
School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
Interests: artificial intelligence; MRI image denoising; computational imaging

Guest Editor
School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
Interests: deep learning; computational imaging technique; noninvasive measurement of physiological parameters

Guest Editor
School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
Interests: deep learning; image fusion for medical imaging; MRI image enhancement; transformer

Special Issue Information

Dear Colleagues,

Biomedical imaging technology has been widely used to generate anatomically precise images of in vivo tissue. However, the quality of medical images can be severely degraded by many factors during acquisition, mainly stochastic variation, numerous physiological processes, eddy currents, magnetic susceptibility artifacts between neighboring tissues, rigid body motion, and nonrigid motion. Recently, with the development of artificial intelligence, the combination of advanced technologies from these two fields can provide important clinical information and play an important role in disease diagnosis, staging, treatment, and surgical planning.

Therefore, this Special Issue on “Artificial Intelligence in Advanced Medical Imaging” will focus on original research papers and comprehensive reviews involving computational biomedical image processing, the information fusion of multimodality medical bioimages, quality evaluation, and biomedical image improvement. Topics of interest for this Special Issue include, but are not limited to, the following:

  1. Advanced artificial-intelligence-based computational imaging techniques for biomedical imaging, including the application of few-/zero-shot learning and self-supervised learning;
  2. Advanced medical image processing technology based on deep learning, such as denoising, reconstruction, and enhancement;
  3. Accurate assessment methods for biomedical image quality, including full-reference and no-reference approaches;
  4. Novel artificial-intelligence-based multimodality biomedical image fusion methods.

Dr. Ming Liu
Prof. Dr. Liquan Dong
Dr. Qingliang Jiao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, authors can use the online submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Bioengineering is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • image processing
  • biomedical image fusion
  • biomedical image quality assessment

Published Papers (13 papers)


Research

14 pages, 3729 KiB  
Article
Automated Segmentation of Levator Ani Muscle from 3D Endovaginal Ultrasound Images
by Nada Rabbat, Amad Qureshi, Ko-Tsung Hsu, Zara Asif, Parag Chitnis, Seyed Abbas Shobeiri and Qi Wei
Bioengineering 2023, 10(8), 894; https://doi.org/10.3390/bioengineering10080894 - 28 Jul 2023
Viewed by 1164
Abstract
Levator ani muscle (LAM) avulsion is a common complication of vaginal childbirth and is linked to several pelvic floor disorders. Diagnosing and treating these conditions require imaging of the pelvic floor and examination of the obtained images, which is a time-consuming process subject to operator variability. In our study, we proposed using deep learning (DL) to automate the segmentation of the LAM from 3D endovaginal ultrasound images (EVUS) to improve diagnostic accuracy and efficiency. Over one thousand images extracted from the 3D EVUS data of healthy subjects and patients with pelvic floor disorders were utilized for the automated LAM segmentation. A U-Net model was implemented, with Intersection over Union (IoU) and Dice metrics being used for model performance evaluation. The model achieved a mean Dice score of 0.86, demonstrating better performance than existing works. The mean IoU was 0.76, indicative of a high degree of overlap between the automated and manual segmentation of the LAM. Three other models, Attention UNet, FD-UNet, and Dense-UNet, were also applied to the same images and showed comparable results. Our study demonstrated the feasibility and accuracy of using DL segmentation with the U-Net architecture to automate LAM segmentation and reduce the time and resources required for manual segmentation of 3D EVUS images. The proposed method could become an important component in AI-based diagnostic tools, particularly in low socioeconomic regions where access to healthcare resources is limited. By improving the management of pelvic floor disorders, our approach may contribute to better patient outcomes in these underserved areas. Full article
(This article belongs to the Special Issue Artificial Intelligence in Advanced Medical Imaging)
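The Dice and IoU scores reported in this abstract are standard overlap measures for segmentation. A minimal sketch of how they are computed from binary masks (an illustrative computation, not the authors' code):

```python
def dice_iou(pred, truth):
    """Compute Dice and IoU for two binary masks given as flat 0/1 lists."""
    tp = sum(p and t for p, t in zip(pred, truth))  # overlap (true positives)
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - tp
    dice = 2 * tp / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = tp / union if union else 1.0
    return dice, iou

pred  = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 0, 1, 0]
d, i = dice_iou(pred, truth)  # overlap of 2 voxels -> Dice 2/3, IoU 0.5
```

Dice and IoU are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why the two reported scores move together.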

11 pages, 1648 KiB  
Article
Performance Comparison of Object Detection Networks for Shrapnel Identification in Ultrasound Images
by Sofia I. Hernandez-Torres, Ryan P. Hennessey and Eric J. Snider
Bioengineering 2023, 10(7), 807; https://doi.org/10.3390/bioengineering10070807 - 05 Jul 2023
Cited by 4 | Viewed by 1394
Abstract
Ultrasound imaging is a critical tool for triaging and diagnosing subjects but only if images can be properly interpreted. Unfortunately, in remote or military medicine situations, the expertise to interpret images can be lacking. Machine-learning image interpretation models that are explainable to the end user and deployable in real time with ultrasound equipment have the potential to solve this problem. We have previously shown how a YOLOv3 (You Only Look Once) object detection algorithm can be used for tracking shrapnel, artery, vein, and nerve fiber bundle features in a tissue phantom. However, real-time implementation of an object detection model requires optimizing model inference time. Here, we compare the performance of five different object detection deep-learning models with varying architectures and trainable parameters to determine which model is most suitable for this shrapnel-tracking ultrasound image application. We used a dataset of more than 16,000 ultrasound images from gelatin tissue phantoms containing artery, vein, nerve fiber, and shrapnel features for training and evaluating each model. Every object detection model surpassed 0.85 mean average precision except for the detection transformer model. Overall, the YOLOv7tiny model had the highest mean average precision and the quickest inference time, making it the clear model choice for this ultrasound imaging application. The other object detection models overfit the data, as indicated by lower testing performance relative to their training performance. In summary, the YOLOv7tiny object detection model had the best mean average precision and inference time and was selected as optimal for this application. Future work will implement this object detection algorithm for real-time applications, an important step in translating AI models for emergency and military medicine. Full article
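Mean average precision, the metric used to compare the detectors above, rests on matching predicted boxes to ground truth at an IoU threshold before averaging precision over classes and thresholds. A toy sketch of that box-matching step (hypothetical coordinates; not the study's evaluation code):

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero if boxes are disjoint
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def is_true_positive(pred_box, gt_box, thresh=0.5):
    """A detection counts as correct when its IoU with ground truth meets thresh."""
    return box_iou(pred_box, gt_box) >= thresh
```

Precision is then the fraction of predicted boxes that are true positives at each confidence level; averaging over recall levels gives average precision.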

20 pages, 3777 KiB  
Article
AHANet: Adaptive Hybrid Attention Network for Alzheimer’s Disease Classification Using Brain Magnetic Resonance Imaging
by T. Illakiya, Karthik Ramamurthy, M. V. Siddharth, Rashmi Mishra and Ashish Udainiya
Bioengineering 2023, 10(6), 714; https://doi.org/10.3390/bioengineering10060714 - 12 Jun 2023
Cited by 7 | Viewed by 1619
Abstract
Alzheimer’s disease (AD) is a progressive neurological problem that causes brain atrophy and affects the memory and thinking skills of an individual. Accurate detection of AD has been a challenging research topic for a long time in the area of medical image processing. Detecting AD at its earliest stage is crucial for the successful treatment of the disease. The proposed Adaptive Hybrid Attention Network (AHANet) has two attention modules, namely Enhanced Non-Local Attention (ENLA) and Coordinate Attention. These modules extract global-level features and local-level features separately from the brain Magnetic Resonance Imaging (MRI), thereby boosting the feature extraction power of the network. The ENLA module extracts spatial and contextual information on a global scale while also capturing important long-range dependencies. The Coordinate Attention module captures local features from the input images. It embeds positional information into the channel attention mechanism for enhanced feature extraction. Moreover, an Adaptive Feature Aggregation (AFA) module is proposed to fuse features from the global and local levels in an effective way. As a result of incorporating the above architectural enhancements into the DenseNet architecture, the proposed network exhibited better performance compared to the existing works. The proposed network was trained and tested on the ADNI dataset, yielding a classification accuracy of 98.53%. Full article
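The idea behind the Adaptive Feature Aggregation module, fusing global (ENLA) and local (Coordinate Attention) features with learned weights, can be illustrated by a softmax-gated blend of two feature vectors. This is a toy sketch of the gating concept, not AHANet's actual AFA module; `w_g` and `w_l` stand in for learned gating logits:

```python
import math

def adaptive_fuse(global_feat, local_feat, w_g=1.0, w_l=1.0):
    """Blend global and local feature vectors with softmax-normalized weights,
    so the network can favor whichever feature stream is more informative."""
    eg, el = math.exp(w_g), math.exp(w_l)
    ag, al = eg / (eg + el), el / (eg + el)  # weights sum to 1
    return [ag * g + al * l for g, l in zip(global_feat, local_feat)]

# Equal logits reduce to a plain average of the two streams:
fused = adaptive_fuse([2.0, 4.0], [0.0, 0.0])  # -> [1.0, 2.0]
```

With unequal logits the blend shifts smoothly toward one stream, which is the adaptivity the abstract describes.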

18 pages, 4975 KiB  
Article
Deep-Learning-Based Segmentation of Extraocular Muscles from Magnetic Resonance Images
by Amad Qureshi, Seongjin Lim, Soh Youn Suh, Bassam Mutawak, Parag V. Chitnis, Joseph L. Demer and Qi Wei
Bioengineering 2023, 10(6), 699; https://doi.org/10.3390/bioengineering10060699 - 08 Jun 2023
Viewed by 1544
Abstract
In this study, we investigated the performance of four deep learning frameworks of U-Net, U-NeXt, DeepLabV3+, and ConResNet in multi-class pixel-based segmentation of the extraocular muscles (EOMs) from coronal MRI. Performances of the four models were evaluated and compared with the standard F-measure-based metrics of intersection over union (IoU) and Dice, where the U-Net achieved the highest overall IoU and Dice scores of 0.77 and 0.85, respectively. Centroid distance offset between identified and ground truth EOM centroids was measured where U-Net and DeepLabV3+ achieved low offsets (p > 0.05) of 0.33 mm and 0.35 mm, respectively. Our results also demonstrated that segmentation accuracy varies in spatially different image planes. This study systematically compared factors that impact the variability of segmentation and morphometric accuracy of the deep learning models when applied to segmenting EOMs from MRI. Full article

16 pages, 2370 KiB  
Article
Automated Classification of Lung Cancer Subtypes Using Deep Learning and CT-Scan Based Radiomic Analysis
by Bryce Dunn, Mariaelena Pierobon and Qi Wei
Bioengineering 2023, 10(6), 690; https://doi.org/10.3390/bioengineering10060690 - 06 Jun 2023
Cited by 5 | Viewed by 2285
Abstract
Artificial intelligence and emerging data science techniques are being leveraged to interpret medical image scans. Traditional image analysis relies on visual interpretation by a trained radiologist, which is time-consuming and can, to some degree, be subjective. The development of reliable, automated diagnostic tools is a key goal of radiomics, a fast-growing research field which combines medical imaging with personalized medicine. Radiomic studies have demonstrated potential for accurate lung cancer diagnoses and prognostications. The practice of delineating the tumor region of interest, known as segmentation, is a key bottleneck in the development of generalized classification models. In this study, the incremental multiple resolution residual network (iMRRN), a publicly available and trained deep learning segmentation model, was applied to automatically segment CT images collected from 355 lung cancer patients included in the dataset “Lung-PET-CT-Dx”, obtained from The Cancer Imaging Archive (TCIA), an open-access source for radiological images. We report a failure rate of 4.35% when using the iMRRN to segment tumor lesions within plain CT images in the lung cancer CT dataset. Seven classification algorithms were trained on the extracted radiomic features and tested for their ability to classify different lung cancer subtypes. Over-sampling was used to handle unbalanced data. Chi-square tests revealed the higher order texture features to be the most predictive when classifying lung cancers by subtype. The support vector machine showed the highest accuracy, 92.7% (0.97 AUC), when classifying three histological subtypes of lung cancer: adenocarcinoma, small cell carcinoma, and squamous cell carcinoma. The results demonstrate the potential of AI-based computer-aided diagnostic tools to automatically diagnose subtypes of lung cancer by coupling deep learning image segmentation with supervised classification. 
Our study demonstrated the integrated application of existing AI techniques in the non-invasive and effective diagnosis of lung cancer subtypes, and also shed light on several practical issues concerning the application of AI in biomedicine. Full article

13 pages, 4238 KiB  
Article
Intra-Patient Lung CT Registration through Large Deformation Decomposition and Attention-Guided Refinement
by Jing Zou, Jia Liu, Kup-Sze Choi and Jing Qin
Bioengineering 2023, 10(5), 562; https://doi.org/10.3390/bioengineering10050562 - 08 May 2023
Viewed by 1326
Abstract
Deformable lung CT image registration is an essential task for computer-assisted interventions and other clinical applications, especially when organ motion is involved. While deep-learning-based image registration methods have recently achieved promising results by inferring deformation fields in an end-to-end manner, large and irregular deformations caused by organ motion still pose a significant challenge. In this paper, we present a method for registering lung CT images that is tailored to the specific patient being imaged. To address the challenge of large deformations between the source and target images, we break the deformation down into multiple continuous intermediate fields. These fields are then combined to create a spatio-temporal motion field. We further refine this field using a self-attention layer that aggregates information along motion trajectories. By leveraging temporal information from a respiratory cycle, our proposed methods can generate intermediate images that facilitate image-guided tumor tracking. We evaluated our approach extensively on a public dataset, and our numerical and visual results demonstrate the effectiveness of the proposed method. Full article
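Breaking one large deformation into continuous intermediate fields, as described above, relies on composing displacement fields: the net displacement at a point is the first step's displacement plus the second field sampled at the moved position. A 1-D nearest-neighbour sketch of that composition (illustrative only, not the paper's implementation):

```python
def compose(d1, d2):
    """Compose two discrete 1-D displacement fields: the net displacement at x
    is d1(x) plus d2 sampled (nearest-neighbour) at the moved position."""
    n = len(d1)
    out = []
    for x in range(n):
        moved = min(max(round(x + d1[x]), 0), n - 1)  # clamp to the grid
        out.append(d1[x] + d2[moved])
    return out

# Decomposing one large deformation into two small steps:
step = [1, 1, 1, 0]          # each voxel shifts right by one (last one fixed)
total = compose(step, step)   # net field accumulates where the grid allows
```

Real registration pipelines do this with trilinear interpolation on 3-D vector fields, but the accumulation principle is the same.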

16 pages, 3684 KiB  
Article
AI Model for Detection of Abdominal Hemorrhage Lesions in Abdominal CT Images
by Young-Jin Park, Hui-Sup Cho and Myoung-Nam Kim
Bioengineering 2023, 10(4), 502; https://doi.org/10.3390/bioengineering10040502 - 21 Apr 2023
Cited by 1 | Viewed by 1671
Abstract
Information technology has been actively utilized in the field of imaging diagnosis using artificial intelligence (AI), which provides benefits to human health. AI-based readings of abdominal hemorrhage lesions can be utilized in situations where lesions cannot be read due to emergencies or the absence of specialists; however, there is a lack of related research due to the difficulty of collecting and acquiring images. In this study, we processed the abdominal computed tomography (CT) database provided by multiple hospitals for utilization in deep learning and detected abdominal hemorrhage lesions in real time using an AI model designed in a cascade structure using deep learning, a subfield of AI. The AI model used a detection model to detect lesions of various sizes with high accuracy, with a classification model that screens out images without lesions placed before the detection model to solve the problem of increasing false positives caused by the input of lesion-free images in actual clinical cases. The developed method achieved 93.22% sensitivity and 99.60% specificity. Full article
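The cascade structure described above, a classification model gating a detection model, can be sketched as follows (the stub models are hypothetical placeholders for the trained networks, used only to show the control flow):

```python
def cascade_detect(image, classify, detect):
    """Run the detector only on images the classifier flags as containing a
    lesion; gating like this suppresses false positives on lesion-free scans."""
    if not classify(image):
        return []                 # screened out: no detection pass at all
    return detect(image)

# Stub models for illustration (real ones would be trained CNNs):
classify = lambda img: "lesion" in img
detect   = lambda img: [(10, 20, 30, 40)]  # dummy bounding box

boxes = cascade_detect("lesion scan", classify, detect)
clean = cascade_detect("clean scan", classify, detect)  # empty result
```

The design trade-off is that the classifier's false negatives become unrecoverable, which is why the reported 93.22% sensitivity matters alongside the 99.60% specificity.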

13 pages, 1960 KiB  
Article
Machine Learning Diffuse Optical Tomography Using Extreme Gradient Boosting and Genetic Programming
by Ami Hauptman, Ganesh M. Balasubramaniam and Shlomi Arnon
Bioengineering 2023, 10(3), 382; https://doi.org/10.3390/bioengineering10030382 - 21 Mar 2023
Cited by 1 | Viewed by 1828
Abstract
Diffuse optical tomography (DOT) is a non-invasive method for detecting breast cancer; however, it struggles to produce high-quality images due to the complexity of scattered light and the limitations of traditional image reconstruction algorithms. These algorithms can be affected by boundary conditions and have a low imaging accuracy, a shallow imaging depth, a long computation time, and a low signal-to-noise ratio. However, machine learning can potentially improve the performance of DOT by being better equipped to solve inverse problems, perform regression, classify medical images, and reconstruct biomedical images. In this study, we utilized a machine learning model called “XGBoost” to detect tumors in inhomogeneous breasts and applied a post-processing technique based on genetic programming to improve accuracy. The proposed algorithm was tested using simulated DOT measurements from complex inhomogeneous breasts and evaluated using the cosine similarity metric and root mean square error loss. The results showed that the use of XGBoost and genetic programming in DOT could lead to more accurate and non-invasive detection of tumors in inhomogeneous breasts compared to traditional methods, with the reconstructed breasts having an average cosine similarity of more than 0.97 ± 0.07 and an average root mean square error of around 0.1270 ± 0.0031 compared to the ground truth. Full article
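The two evaluation measures used above, cosine similarity and root mean square error, compare a reconstructed image (flattened to a vector) against the ground truth. A minimal sketch of both (illustrative only, not the authors' code):

```python
import math

def cosine_similarity(a, b):
    """Angle-based agreement between two flattened images: 1.0 = identical
    direction, 0.0 = orthogonal."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def rmse(a, b):
    """Root mean square error: average per-element deviation magnitude."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))
```

Cosine similarity ignores overall intensity scaling (a vector and twice that vector score 1.0), while RMSE penalizes it, which is why the two metrics are complementary.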

26 pages, 4663 KiB  
Article
Optimization System Based on Convolutional Neural Network and Internet of Medical Things for Early Diagnosis of Lung Cancer
by Yossra Hussain Ali, Varghese Sabu Chooralil, Karthikeyan Balasubramanian, Rajasekhar Reddy Manyam, Sekar Kidambi Raju, Ahmed T. Sadiq and Alaa K. Farhan
Bioengineering 2023, 10(3), 320; https://doi.org/10.3390/bioengineering10030320 - 02 Mar 2023
Cited by 11 | Viewed by 2879
Abstract
Recently, deep learning and the Internet of Things (IoT) have been widely used in the healthcare monitoring system for decision making. Disease prediction is one of the emerging applications in current practices. In the method described in this paper, lung cancer prediction is implemented using deep learning and IoT, which is a challenging task in computer-aided diagnosis (CAD). Because lung cancer is a dangerous medical disease that must be identified at a higher detection rate, disease-related information is obtained from IoT medical devices and transmitted to the server. The medical data are then processed and classified into two categories, benign and malignant, using a multi-layer CNN (ML-CNN) model. In addition, a particle swarm optimization method is used to improve the learning ability (loss and accuracy). This step uses medical data (CT scan and sensor information) based on the Internet of Medical Things (IoMT). For this purpose, sensor information and image information from IoMT devices and sensors are gathered, and then classification actions are taken. The performance of the proposed technique is compared with well-known existing methods, such as the Support Vector Machine (SVM), probabilistic neural network (PNN), and conventional CNN, in terms of accuracy, precision, sensitivity, specificity, F-score, and computation time. For this purpose, two lung datasets were tested to evaluate the performance: Lung Image Database Consortium (LIDC) and Linear Imaging and Self-Scanning Sensor (LISS) datasets. Compared to alternative methods, the trial outcomes showed that the suggested technique has the potential to help the radiologist make an accurate and efficient early lung cancer diagnosis. 
The performance of the proposed ML-CNN was analyzed using Python. Relative to the compared methods, accuracy was higher by 2.5–10.5%, precision by 2.3–9.5%, sensitivity by 2.4–12.5%, and the F-score by 2–30% across the tested numbers of instances, while the error rate was lower by 0.7–11.5% and the computation time (170 ms to 400 ms) was shorter. The proposed ML-CNN architecture thus outperforms previous works. Full article
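The particle swarm optimization step used above to improve the ML-CNN's learning can be illustrated with a generic PSO minimizer on a toy objective (the sphere function). This is a sketch of the technique, not the paper's implementation; the inertia and attraction coefficients are assumed typical values:

```python
import random

def pso(loss, dim, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimizer: each particle keeps its personal best,
    and all particles are attracted toward the swarm's global best."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [loss(p) for p in pos]
    g = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # personal pull
                             + 1.5 * r2 * (g[d] - pos[i][d]))        # social pull
                pos[i][d] += vel[i][d]
            v = loss(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < loss(g):
                    g = pos[i][:]
    return g

# Toy use: minimize the sphere function, whose optimum is the origin.
best = pso(lambda p: sum(x * x for x in p), dim=2)
```

In the paper's setting, the particle positions would encode network hyperparameters and the loss would be the validation error rather than a closed-form function.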

14 pages, 3434 KiB  
Article
Multi-Layered Non-Local Bayes Model for Lung Cancer Early Diagnosis Prediction with the Internet of Medical Things
by Yossra Hussain Ali, Seelammal Chinnaperumal, Raja Marappan, Sekar Kidambi Raju, Ahmed T. Sadiq, Alaa K. Farhan and Palanivel Srinivasan
Bioengineering 2023, 10(2), 138; https://doi.org/10.3390/bioengineering10020138 - 20 Jan 2023
Cited by 7 | Viewed by 2427
Abstract
The Internet of Things (IoT) has been influential in predicting major diseases in current practice. The deep learning (DL) technique is vital in monitoring and controlling the functioning of the healthcare system and ensuring an effective decision-making process. In this study, we aimed to develop a framework implementing the IoT and DL to identify lung cancer. The accurate and efficient prediction of disease is a challenging task. The proposed model deploys a DL process with a multi-layered non-local Bayes (NL Bayes) model to manage the process of early diagnosis. The Internet of Medical Things (IoMT) could be useful in determining factors that could enable the effective sorting of quality values through the use of sensors and image processing techniques. We studied the proposed model by analyzing its results with regard to specific attributes such as accuracy, quality, and system process efficiency. In this study, we aimed to overcome problems in the existing process through the practical results of a computational comparison process. The proposed model provided a low error rate (2%, 5%) and an increase in the number of instance values. The experimental results led us to conclude that the proposed model can make predictions based on images with high sensitivity and better precision values compared to other specific results. The proposed model achieved the expected accuracy (81%, 95%), the expected specificity (80%, 98%), and the expected sensitivity (80%, 99%). This model is adequate for real-time health monitoring systems in the prediction of lung cancer and can enable effective decision-making with the use of DL techniques. Full article

12 pages, 3479 KiB  
Article
AI-Driven Robust Kidney and Renal Mass Segmentation and Classification on 3D CT Images
by Jingya Liu, Onur Yildirim, Oguz Akin and Yingli Tian
Bioengineering 2023, 10(1), 116; https://doi.org/10.3390/bioengineering10010116 - 13 Jan 2023
Cited by 7 | Viewed by 2648
Abstract
Early intervention in kidney cancer helps to improve survival rates. Abdominal computed tomography (CT) is often used to diagnose renal masses. In clinical practice, the manual segmentation and quantification of organs and tumors are expensive and time-consuming. Artificial intelligence (AI) has shown a significant advantage in assisting cancer diagnosis. To reduce the workload of manual segmentation and avoid unnecessary biopsies or surgeries, in this paper, we propose a novel end-to-end AI-driven automatic kidney and renal mass diagnosis framework to identify the abnormal areas of the kidney and diagnose the histological subtypes of renal cell carcinoma (RCC). The proposed framework first segments the kidney and renal mass regions by a 3D deep learning architecture (Res-UNet), followed by a dual-path classification network utilizing local and global features for the subtype prediction of the most common RCCs: clear cell, chromophobe, oncocytoma, papillary, and other RCC subtypes. To improve the robustness of the proposed framework on data collected from various institutions, a weakly supervised learning schema is proposed to bridge the domain gap between vendors using very few CT slice annotations. Our proposed diagnosis system can accurately segment the kidney and renal mass regions and predict tumor subtypes, outperforming existing methods on the KiTS19 dataset. Furthermore, cross-dataset validation results demonstrate the robustness of the framework on datasets collected from different institutions when trained via the weakly supervised learning schema. Full article

12 pages, 245 KiB  
Article
Attitudes toward the Integration of Radiographers into the First-Line Interpretation of Imaging Using the Red Dot System
by Ammar A. Oglat, Firas Fohely, Ali AL Masalmeh, Ismail AL Jbour, Laith AL Jaradat and Sema I. Athamnah
Bioengineering 2023, 10(1), 71; https://doi.org/10.3390/bioengineering10010071 - 05 Jan 2023
Cited by 3 | Viewed by 2106
Abstract
The red dot system uses radiographers' expertise in the identification of anomalies to assist radiologists in distinguishing radiological abnormalities and managing them before the radiologist's report is sent. This is a small step on the road to greater role development for radiographers. This practice has existed for more than 20 years in the UK, and today the UK remains the only country seeking to legislate radiographer reporting. The aim of this paper is to put focus on this issue, determine whether radiographer reports are necessary, and explore whether there are any benefits that can be highlighted to encourage health authorities worldwide to allow radiographers to write clinical reports. Additionally, this study was conducted to evaluate the role of radiographers (non-radiologists) in medical image interpretation, using 95 randomly collected samples representative of radiographers and radiologists of both genders. The SPSS program was used for the statistical analysis of the samples and to scientifically explain the results. We found that radiologists have no objections to the participation of radiographers in diagnosis assistance, interpretation, and clinical reporting through the red dot system. Therefore, there was support for the future implementation of such a system in health care. Full article
19 pages, 1213 KiB  
Article
A Deep Learning Approach for Liver and Tumor Segmentation in CT Images Using ResUNet
by Hameedur Rahman, Tanvir Fatima Naik Bukht, Azhar Imran, Junaid Tariq, Shanshan Tu and Abdulkareeem Alzahrani
Bioengineering 2022, 9(8), 368; https://doi.org/10.3390/bioengineering9080368 - 05 Aug 2022
Cited by 29 | Viewed by 5266
Abstract
According to the most recent estimates from global cancer statistics for 2020, liver cancer is the ninth most common cancer in women. Segmenting the liver is difficult, and segmenting the tumor from the liver adds further difficulty. After a sample of liver tissue is taken, imaging tests, such as magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US), are used to segment the liver and liver tumor. Due to overlapping intensities and variability in the position and shape of soft tissues, segmentation of the liver and tumor from abdominal CT images based on gray shade or shape is unreliable. This study proposed a more efficient method for segmenting the liver and tumors from CT image volumes using a hybrid ResUNet model, combining the ResNet and UNet models to address this gap. The two overlapping models were primarily used in this study to segment the liver and for region of interest (ROI) assessment. Segmentation of the liver is done to examine the liver with an abdominal CT image volume. The proposed model is based on CT volume slices of patients with liver tumors and was evaluated on the public 3D dataset IRCADB01. Based on the experimental analysis, the accuracy for liver segmentation was found to be approximately 99.55%, 97.85%, and 98.16%. The Dice coefficient also increased, indicating that the experiment went well and that the model is ready for use in the detection of liver tumors. Full article
