Current Methods in Medical Image Segmentation

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Medical Imaging".

Deadline for manuscript submissions: closed (31 December 2022) | Viewed by 42233

Special Issue Editor


Prof. Dr. Caroline Petitjean
Guest Editor
LITIS EA 4108, Université de Rouen Normandie, 22 bd Gambetta, 76183 Rouen CEDEX, France
Interests: image segmentation; medical imaging; deep learning; prior knowledge integration; label-efficient learning; explainability in segmentation CNNs

Special Issue Information

Dear Colleagues,

Image segmentation is a key step in medical imaging, as it assists early disease detection, diagnosis, treatment monitoring, and follow-up. Segmentation is a challenging task due to noise, lack of contrast, and object variability in medical images. Decade after decade, research on automatic or semi-automatic image segmentation has continuously provided the medical expert with increasingly powerful and time-saving algorithms. In this regard, the breakthrough of deep convolutional neural networks (CNNs) has focused almost all research efforts in medical image segmentation in recent years, and CNNs are now the state of the art in this field. Medical images raise specific issues, which have been addressed with novel backbone architectures, network blocks, frameworks, and loss functions. Annotating medical images also carries a substantial cost, which has led researchers to design methodologies with various levels of supervision (label-efficient machine learning), including weakly and semi-supervised learning, and to investigate transfer learning.

This Special Issue will present review and original articles on medical image segmentation with CNNs, targeting various modalities and pathologies. We invite submissions presenting research on topics including, but not limited to, the following:

  • CNN architectures and frameworks
  • novel loss functions
  • weakly and semi-supervised learning
  • multi-task and transfer learning
  • extensive experimental validation
  • integration of imaging and clinical data
  • interpretability and explainability of segmentation CNNs

Prof. Dr. Caroline Petitjean
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • image segmentation
  • computer-aided diagnosis
  • deep learning
  • machine learning
  • label-efficient machine learning

Published Papers (13 papers)


Research


16 pages, 4174 KiB  
Article
Development and In-Silico and Ex-Vivo Validation of a Software for a Semi-Automated Segmentation of the Round Window Niche to Design a Patient Specific Implant to Treat Inner Ear Disorders
by Farnaz Matin-Mann, Ziwen Gao, Chunjiang Wei, Felix Repp, Eralp-Niyazi Artukarslan, Samuel John, Dorian Alcacer Labrador, Thomas Lenarz and Verena Scheper
J. Imaging 2023, 9(2), 51; https://doi.org/10.3390/jimaging9020051 - 20 Feb 2023
Cited by 1 | Viewed by 2043
Abstract
The aim of this study was to develop and validate a semi-automated segmentation approach that identifies the round window niche (RWN) and round window membrane (RWM) for use in the development of patient-individualized round window niche implants (RNI) to treat inner ear disorders. Twenty cone beam computed tomography (CBCT) datasets of unilateral temporal bones of patients were included in the study. Defined anatomical landmarks, such as the RWM, were used to develop a customized 3D Slicer™ plugin for semi-automated segmentation of the RWN. Two otolaryngologists (User 1 and User 2) segmented the datasets manually and semi-automatically using the developed software. Both methods were compared in-silico regarding the resulting RWM area and RWN volume. Finally, the developed software was validated ex-vivo in N = 3 body donor implantation tests with additively manufactured RNI. The temporal bones independently segmented by the different users showed strong consistency in the volume of the RWN and the area of the RWM. On average, the volumes of the semi-automated RWN segmentations were 48 ± 11% smaller than those of the manual segmentations, and the RWM areas of the semi-automated segmentations were 21 ± 17% smaller than those of the manual segmentations. All additively manufactured implants based on the semi-automated segmentation method could be implanted successfully, in a pressure-tight fit, into the RWN. The implants based on the manual segmentations failed to fit into the RWN, which suggests that the larger manual segmentations were over-segmentations. This study presents a semi-automated approach for segmenting the RWN and RWM in temporal bone CBCT scans that is efficient, fast, accurate, and not dependent on trained users. Notably, the manual segmentation, often positioned as the gold standard, failed to pass the implantation validation.
(This article belongs to the Special Issue Current Methods in Medical Image Segmentation)
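
The in-silico comparison above reduces to simple mask arithmetic. The following sketch (our illustration, not the authors' code) computes the volume of a binary segmentation mask and how much smaller one segmentation is than another; the masks and voxel size are hypothetical.

```python
# Illustrative sketch: compare manual and semi-automated segmentations by
# volume, as done for the RWN in this study. Assumes two binary masks of
# equal shape and an isotropic voxel edge length (a placeholder value here).
import numpy as np

def mask_volume_mm3(mask: np.ndarray, voxel_size_mm: float) -> float:
    """Volume of a binary mask given an isotropic voxel edge length."""
    return float(mask.sum()) * voxel_size_mm ** 3

def percent_smaller(reference_mm3: float, other_mm3: float) -> float:
    """How much smaller `other` is than `reference`, in percent."""
    return 100.0 * (reference_mm3 - other_mm3) / reference_mm3

# Hypothetical manual and semi-automated masks.
manual = np.zeros((64, 64, 64), dtype=bool)
manual[16:48, 16:48, 16:48] = True
semi = np.zeros_like(manual)
semi[20:44, 20:44, 20:44] = True

v_manual = mask_volume_mm3(manual, voxel_size_mm=0.15)
v_semi = mask_volume_mm3(semi, voxel_size_mm=0.15)
print(f"semi-automated volume is {percent_smaller(v_manual, v_semi):.0f}% smaller")
```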

25 pages, 1043 KiB  
Article
A Standardized Approach for Skin Detection: Analysis of the Literature and Case Studies
by Loris Nanni, Andrea Loreggia, Alessandra Lumini and Alberto Dorizza
J. Imaging 2023, 9(2), 35; https://doi.org/10.3390/jimaging9020035 - 6 Feb 2023
Cited by 6 | Viewed by 2914
Abstract
Skin detection involves identifying skin and non-skin areas in a digital image and is commonly used in a variety of applications, ranging from hand-gesture analysis to body-part tracking to facial recognition. Skin detection is a challenging problem that has received considerable attention from the research community in the context of intelligent systems, but the lack of common benchmarks and unified testing protocols has made fair comparisons between approaches very difficult. Recently, the success of deep neural networks has had a major impact on the field of image segmentation, resulting in various successful models to date. In this work, we survey the most recent research in this field and propose fair comparisons between approaches, using several different datasets. The main contributions of this work are (i) a comprehensive review of the literature on approaches to skin-color detection and a comparison of approaches that may help researchers and practitioners choose the best method for their application; (ii) a comprehensive list of datasets that report ground truth for skin detection; and (iii) a testing protocol for evaluating and comparing different skin-detection approaches. Moreover, we propose an ensemble of convolutional neural networks and transformers that obtains state-of-the-art performance.
(This article belongs to the Special Issue Current Methods in Medical Image Segmentation)
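
The proposed ensemble fuses convolutional and transformer predictions. A minimal sketch of one common fusion rule, averaging per-pixel probability maps and thresholding, is shown below; the maps are random placeholders standing in for model outputs, and the paper's exact fusion scheme may differ.

```python
# Hedged sketch of probability-map fusion for an ensemble of skin detectors.
import numpy as np

def ensemble_skin_mask(prob_maps, threshold=0.5):
    """Fuse per-model probability maps (values in [0, 1]) into one binary mask."""
    fused = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return fused >= threshold

rng = np.random.default_rng(0)
p_cnn = rng.random((256, 256))          # hypothetical CNN output
p_transformer = rng.random((256, 256))  # hypothetical transformer output
mask = ensemble_skin_mask([p_cnn, p_transformer])
```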

21 pages, 3573 KiB  
Article
Local-Sensitive Connectivity Filter (LS-CF): A Post-Processing Unsupervised Improvement of the Frangi, Hessian and Vesselness Filters for Multimodal Vessel Segmentation
by Erick O. Rodrigues, Lucas O. Rodrigues, João H. P. Machado, Dalcimar Casanova, Marcelo Teixeira, Jeferson T. Oliva, Giovani Bernardes and Panos Liatsis
J. Imaging 2022, 8(10), 291; https://doi.org/10.3390/jimaging8100291 - 21 Oct 2022
Cited by 3 | Viewed by 1929
Abstract
Retinal vessel analysis is a procedure that can be used to assess risks to the eye. This work proposes an unsupervised multimodal approach that improves the response of the Frangi filter, enabling automatic vessel segmentation. We propose a filter that computes pixel-level vessel continuity while introducing a local tolerance heuristic to fill in vessel discontinuities produced by the Frangi response. This proposal, called the local-sensitive connectivity filter (LS-CF), is compared against the baseline thresholded Frangi filter response, a naive connectivity filter, the naive connectivity filter combined with morphological closing, and current approaches in the literature. The proposal achieved competitive results on a variety of multimodal datasets. It was robust enough to outperform all state-of-the-art approaches in the literature on the OSIRIX angiographic dataset in terms of accuracy, as well as 4 out of 5 works on the IOSTAR dataset, while also outperforming several works on the DRIVE and STARE datasets and 6 out of 10 on the CHASE-DB dataset. On CHASE-DB, it also outperformed all state-of-the-art unsupervised methods.
(This article belongs to the Special Issue Current Methods in Medical Image Segmentation)
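
For context, the baseline that LS-CF post-processes can be sketched in a few lines: threshold the Frangi vesselness response, then apply a naive connectivity filter that keeps only sufficiently large connected components. The threshold, minimum component size, and ridge polarity below are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of the baseline pipeline: thresholded Frangi response plus
# a naive connected-component size filter.
import numpy as np
from skimage.filters import frangi
from skimage.morphology import remove_small_objects

def baseline_vessel_mask(image: np.ndarray,
                         threshold: float = 0.05,
                         min_component_px: int = 50) -> np.ndarray:
    """Thresholded Frangi response cleaned by connected-component size."""
    # black_ridges=True matches dark vessels on a bright background, as in
    # fundus images; angiographic modalities may need the opposite setting.
    vesselness = frangi(image, black_ridges=True)
    mask = vesselness > threshold
    return remove_small_objects(mask, min_size=min_component_px)

# Hypothetical input; real use would load a retinal image.
img = np.random.default_rng(1).random((128, 128))
vessels = baseline_vessel_mask(img)
```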

17 pages, 5386 KiB  
Article
GUBS: Graph-Based Unsupervised Brain Segmentation in MRI Images
by Simeon Mayala, Ida Herdlevær, Jonas Bull Haugsøen, Shamundeeswari Anandan, Nello Blaser, Sonia Gavasso and Morten Brun
J. Imaging 2022, 8(10), 262; https://doi.org/10.3390/jimaging8100262 - 27 Sep 2022
Cited by 3 | Viewed by 2157
Abstract
Brain segmentation in magnetic resonance imaging (MRI) images is the process of isolating the brain from non-brain tissues to simplify further analysis, such as detecting pathology or calculating volumes. This paper proposes Graph-based Unsupervised Brain Segmentation (GUBS), which processes 3D MRI images and segments them into brain, non-brain tissues, and background. GUBS first constructs an adjacency graph from a preprocessed MRI image, weights it by the difference between voxel intensities, and computes its minimum spanning tree (MST). It then uses domain knowledge about the different regions of MRIs to sample representative points from the brain, non-brain, and background regions of the MRI image. The adjacency graph nodes corresponding to the sampled points in each region are identified and used as the terminal nodes for paths connecting the regions in the MST. GUBS then computes a subgraph of the MST by first removing the longest edge of the path connecting the terminal nodes in the brain and other regions, followed by removing the longest edge of the path connecting the non-brain and background regions. This process results in three labeled, connected components, whose labels are used to segment the brain, non-brain tissues, and the background. GUBS was tested by segmenting 3D T1-weighted MRI images from three publicly available datasets, and it shows comparable performance to state-of-the-art methods. However, many competing methods rely on having labeled data available for training. Labeling is a time-intensive and costly process, and a big advantage of GUBS is that it does not require labels.
(This article belongs to the Special Issue Current Methods in Medical Image Segmentation)
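
The edge-removal step at the heart of GUBS can be illustrated on a toy graph. The sketch below (our simplification, not the authors' code) computes an MST, recovers the path between two terminal nodes, cuts its heaviest edge, and labels the resulting components; GUBS applies this cut twice, on a graph weighted by voxel-intensity differences, with terminal nodes sampled from each region.

```python
# Toy illustration of the GUBS cut: remove the longest edge on the MST path
# between two terminal nodes so the graph splits into two labeled components.
# The 5-node chain below is hypothetical; GUBS builds its graph from
# voxel-intensity differences in the MRI volume.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import (connected_components,
                                  minimum_spanning_tree, shortest_path)

def split_at_longest_edge(weights: csr_matrix, src: int, dst: int) -> np.ndarray:
    mst = minimum_spanning_tree(weights)
    sym = mst + mst.T  # treat the tree as undirected
    _, pred = shortest_path(sym, directed=False, indices=src,
                            return_predecessors=True)
    path = [dst]                      # walk predecessors back to src
    while path[-1] != src:
        path.append(pred[path[-1]])
    u, v = max(zip(path, path[1:]), key=lambda e: sym[e[0], e[1]])
    lil = sym.tolil()
    lil[u, v] = lil[v, u] = 0         # cut the heaviest edge on the path
    cut = lil.tocsr()
    cut.eliminate_zeros()
    return connected_components(cut, directed=False)[1]

# Chain 0-1-2-3-4 with one heavy edge between nodes 2 and 3.
w = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 9, 0],
              [0, 0, 9, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
print(split_at_longest_edge(csr_matrix(w), src=0, dst=4))  # [0 0 0 1 1]
```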

19 pages, 15773 KiB  
Article
Four Severity Levels for Grading the Tortuosity of a Retinal Fundus Image
by Sufian Abdul Qader Badawi, Maen Takruri, Yaman Albadawi, Muazzam A. Khan Khattak, Ajay Kamath Nileshwar and Emad Mosalam
J. Imaging 2022, 8(10), 258; https://doi.org/10.3390/jimaging8100258 - 22 Sep 2022
Cited by 3 | Viewed by 1779
Abstract
Hypertensive retinopathy severity classification is proportionally related to tortuosity severity grading, yet no existing tortuosity severity scale enables a computer-aided system to classify the tortuosity severity of a retinal image. This work aimed to introduce a machine learning model that can identify the tortuosity severity of a retinal image automatically and hence contribute to developing an automated grading system for hypertensive or diabetic retinopathy. First, tortuosity is quantified using fourteen tortuosity measurement formulas for the retinal images of the AV-Classification dataset to create the tortuosity feature set. Second, manual labeling is performed and reviewed by two ophthalmologists to construct a tortuosity severity ground truth grading for each image in the AV-Classification dataset. Finally, the feature set is used to train and validate the machine learning models (J48 decision tree, ensemble rotation forest, and distributed random forest). The best-performing learned model is used as the tortuosity severity classifier to identify the tortuosity severity (normal, mild, moderate, and severe) of any given retinal image. The distributed random forest model reported the highest accuracy (99.4%) compared to the J48 decision tree and rotation forest models, with the lowest root mean squared error (0.0000192) and the lowest mean absolute error (0.0000182). The proposed tortuosity severity grading matched the ophthalmologists' judgment. Moreover, optimizing the vessel segmentation, the vessel segment extraction, and the created feature set increased the accuracy of the automatic tortuosity severity detection model.
(This article belongs to the Special Issue Current Methods in Medical Image Segmentation)
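
As a concrete example of the kind of measure that enters such a feature set, the classical arc-to-chord ratio is sketched below. It is one plausible member of a tortuosity feature set; the paper's fourteen formulas are not reproduced here, and the centerline points are synthetic.

```python
# Hedged sketch of one classical tortuosity measure: the arc-to-chord ratio
# of a vessel centerline (1.0 for a straight segment, larger when tortuous).
import numpy as np

def arc_chord_tortuosity(points: np.ndarray) -> float:
    """points: (N, 2) ordered centerline coordinates of one vessel segment."""
    arc = np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))
    chord = np.linalg.norm(points[-1] - points[0])
    return float(arc / chord)

t = np.linspace(0, np.pi, 100)
wavy = np.column_stack([t, 0.2 * np.sin(4 * t)])  # a synthetic wiggly segment
print(arc_chord_tortuosity(wavy))  # > 1, grows with tortuosity
```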

19 pages, 6366 KiB  
Article
Towards More Accurate and Complete Heterogeneous Iris Segmentation Using a Hybrid Deep Learning Approach
by Yuan Meng and Tie Bao
J. Imaging 2022, 8(9), 246; https://doi.org/10.3390/jimaging8090246 - 10 Sep 2022
Cited by 1 | Viewed by 1861
Abstract
Accurate iris segmentation is a crucial preprocessing stage for computer-aided ophthalmic disease diagnosis. The quality of iris images taken under different camera sensors varies greatly, and thus the accurate segmentation of heterogeneous iris databases is a huge challenge. At present, network architectures based on convolutional neural networks (CNNs) have been widely applied in iris segmentation tasks. However, due to the limited kernel size of convolution layers, iris segmentation networks based on CNNs cannot learn global and long-term semantic information interactions well, and this poses a challenge for accurately segmenting the iris region. Inspired by the success of the vision transformer (ViT) and swin transformer (Swin T), a hybrid deep learning approach is proposed to segment heterogeneous iris images. Specifically, we first propose a bilateral segmentation backbone network that combines the benefits of Swin T with CNNs. Then, a multiscale feature information extraction module (MFIEM) is proposed to extract multiscale spatial information at a more granular level. Finally, a channel attention mechanism module (CAMM) is used to enhance the discriminability of the iris region. Experimental results on a multisource heterogeneous iris database show that our network has a significant performance advantage over some state-of-the-art (SOTA) iris segmentation networks.
(This article belongs to the Special Issue Current Methods in Medical Image Segmentation)
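
The paper's CAMM is defined in the article itself; as a general, hedged illustration of channel attention, a squeeze-and-excitation-style block in PyTorch re-weights feature channels as follows.

```python
# Illustrative squeeze-and-excitation-style channel attention (a common
# way to re-weight channels); not the paper's exact CAMM.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global context per channel
        self.fc = nn.Sequential(                 # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                             # re-weight feature channels

feats = torch.randn(2, 64, 32, 32)
out = ChannelAttention(64)(feats)  # same shape, channel-re-weighted
```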

16 pages, 3349 KiB  
Article
Cardiac Disease Classification Using Two-Dimensional Thickness and Few-Shot Learning Based on Magnetic Resonance Imaging Image Segmentation
by Adi Wibowo, Pandji Triadyaksa, Aris Sugiharto, Eko Adi Sarwoko, Fajar Agung Nugroho, Hideo Arai and Masateru Kawakubo
J. Imaging 2022, 8(7), 194; https://doi.org/10.3390/jimaging8070194 - 11 Jul 2022
Cited by 6 | Viewed by 2424
Abstract
Cardiac cine magnetic resonance imaging (MRI) is a widely used technique for the noninvasive assessment of cardiac function. Deep neural networks have achieved considerable progress in overcoming various challenges in cine MRI analysis; however, deep learning models cannot be used directly for classification because only limited cine MRI data are available. To overcome this problem, features are typically handcrafted from the cine images and combined with other clinical features in a classical machine learning approach, so that the model fits the MRI device settings and image parameters required in the analysis. In this study, a novel method was proposed for classifying heart disease (cardiomyopathy patient groups) using only segmented output maps. In the encoder–decoder network, the fully convolutional EfficientNetB5-UNet was modified to perform semantic segmentation of the MRI image slices. A two-dimensional thickness algorithm was used to combine the segmentation outputs into 2D representations of the end-diastole (ED) and end-systole (ES) cardiac volumes. The thickness images were subsequently used for classification by a few-shot model with an adaptive subspace classifier. Model performance was verified on the 2017 Medical Image Computing and Computer-Assisted Intervention (MICCAI) challenge dataset. High segmentation performance was achieved: the average Dice coefficients were 96.24% (ED) and 89.92% (ES) for the left ventricle (LV), 92.90% (ED) and 86.92% (ES) for the right ventricle (RV), and 88.90% (ED) and 90.48% (ES) for the myocardium. An accuracy of 92% was achieved in the classification of the various cardiomyopathy groups without clinical features. This constitutes a rapid analysis approach for heart disease diagnosis, especially for cardiomyopathy conditions, using cine MRI based on segmented output maps.
(This article belongs to the Special Issue Current Methods in Medical Image Segmentation)
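
The paper's two-dimensional thickness algorithm is its own; as a hedged stand-in, a common way to approximate local wall thickness from a binary myocardium mask combines the Euclidean distance transform with the mask's medial axis, as sketched below on a synthetic ring.

```python
# Illustrative thickness estimate (not the paper's algorithm): along the
# medial axis of the wall, twice the distance to the nearest background
# pixel approximates the local wall thickness.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import medial_axis

def wall_thickness_map(myocardium: np.ndarray) -> np.ndarray:
    """Thickness estimates (in pixels) on the medial axis of a binary mask."""
    dist = distance_transform_edt(myocardium)
    skeleton = medial_axis(myocardium)
    thickness = np.zeros_like(dist)
    thickness[skeleton] = 2.0 * dist[skeleton]
    return thickness

# Hypothetical ring-shaped myocardium between two concentric circles.
yy, xx = np.mgrid[:128, :128]
r = np.hypot(yy - 64, xx - 64)
ring = (r > 20) & (r < 30)
print(wall_thickness_map(ring).max())  # roughly the 10 px wall width
```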

17 pages, 3478 KiB  
Article
Integration of Deep Learning and Active Shape Models for More Accurate Prostate Segmentation in 3D MR Images
by Massimo Salvi, Bruno De Santi, Bianca Pop, Martino Bosco, Valentina Giannini, Daniele Regge, Filippo Molinari and Kristen M. Meiburger
J. Imaging 2022, 8(5), 133; https://doi.org/10.3390/jimaging8050133 - 11 May 2022
Cited by 12 | Viewed by 2830
Abstract
Magnetic resonance imaging (MRI) has a growing role in the clinical workup of prostate cancer. However, manual three-dimensional (3D) segmentation of the prostate is a laborious and time-consuming task. In this scenario, the use of automated algorithms for prostate segmentation allows us to bypass the huge workload of physicians. In this work, we propose a fully automated hybrid approach for prostate gland segmentation in MR images, in which prostate volumes are first segmented by a custom-made 3D deep network (VNet-T2) and then refined using an Active Shape Model (ASM). While the deep network focuses on the three-dimensional spatial coherence of the shape, the ASM relies on local image information, and this joint effort allows for improved segmentation of the organ contours. Our method is developed and tested on a dataset composed of T2-weighted (T2w) MRI prostatic volumes of 60 male patients. On the test set, the proposed method shows excellent segmentation performance, achieving a mean Dice score and Hausdorff distance of 0.851 and 7.55 mm, respectively. In the future, this algorithm could serve as an enabling technology for the development of computer-aided systems for prostate cancer characterization in MR imaging.
(This article belongs to the Special Issue Current Methods in Medical Image Segmentation)
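
The two metrics reported above can be computed from binary masks as in the following sketch; the masks and voxel spacing are hypothetical, and the Hausdorff distance is taken as the maximum of SciPy's two directed distances.

```python
# Hedged sketch of the evaluation metrics: Dice coefficient and symmetric
# Hausdorff distance between two binary masks, assuming isotropic spacing.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff_mm(a: np.ndarray, b: np.ndarray, spacing_mm: float = 1.0) -> float:
    pa, pb = np.argwhere(a), np.argwhere(b)
    d = max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
    return d * spacing_mm

pred = np.zeros((64, 64, 32), dtype=bool); pred[20:40, 20:40, 10:20] = True
gt = np.zeros_like(pred);                  gt[22:42, 20:40, 10:20] = True
print(f"Dice = {dice(pred, gt):.3f}, HD = {hausdorff_mm(pred, gt):.1f} mm")
```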

31 pages, 1204 KiB  
Article
Kidney Tumor Semantic Segmentation Using Deep Learning: A Survey of State-of-the-Art
by Abubaker Abdelrahman and Serestina Viriri
J. Imaging 2022, 8(3), 55; https://doi.org/10.3390/jimaging8030055 - 25 Feb 2022
Cited by 18 | Viewed by 8288
Abstract
Cure rates for kidney cancer vary according to stage and grade; hence, accurate diagnostic procedures for early detection and diagnosis are crucial. Some difficulties with manual segmentation have necessitated the use of deep learning models to assist clinicians in effectively recognizing and segmenting tumors. Deep learning (DL), particularly convolutional neural networks, has produced outstanding success in classifying and segmenting images. Simultaneously, researchers in the field of medical image segmentation employ DL approaches to solve problems such as tumor, cell, and organ segmentation. Semantic segmentation of tumors is critical in radiation and therapeutic practice. This article discusses current advances in DL-based kidney tumor segmentation systems. We discuss the various types of medical images, segmentation techniques, and assessment criteria for segmentation outcomes in kidney tumor segmentation, highlighting their building blocks and various strategies.
(This article belongs to the Special Issue Current Methods in Medical Image Segmentation)

17 pages, 3352 KiB  
Article
Segmentation-Based vs. Regression-Based Biomarker Estimation: A Case Study of Fetus Head Circumference Assessment from Ultrasound Images
by Jing Zhang, Caroline Petitjean and Samia Ainouz
J. Imaging 2022, 8(2), 23; https://doi.org/10.3390/jimaging8020023 - 25 Jan 2022
Cited by 4 | Viewed by 3398
Abstract
The fetus head circumference (HC) is a key biometric used to monitor fetus growth during pregnancy and is estimated from ultrasound (US) images. The standard approach to automatically measure the HC is to use a segmentation network to segment the skull and then estimate the head contour length from the segmentation map via ellipse fitting, usually after post-processing. In this application, segmentation is just an intermediate step in the estimation of a parameter of interest. Another possibility is to estimate the HC directly with a regression network. Even though this type of segmentation-free approach has been boosted by deep learning, it is not yet clear how well direct approaches compare to segmentation-based approaches, which are expected to remain more accurate. This observation motivates the present study, in which we propose a fair, quantitative comparison of segmentation-based and segmentation-free (i.e., regression) approaches to estimate how far regression-based approaches stand from segmentation approaches. We experiment with various convolutional neural network (CNN) architectures and backbones for both the segmentation and regression models and provide estimation results on the HC18 dataset, as well as an agreement analysis, to support our findings. We also investigate memory usage and computational efficiency to compare the two types of approaches. The experimental results demonstrate that, even if segmentation-based approaches deliver the most accurate results, regression CNNs are actually learning to find prominent features, leading to promising yet improvable HC estimation results.
(This article belongs to the Special Issue Current Methods in Medical Image Segmentation)
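
The segmentation-based pipeline described above (segment, extract the contour, fit an ellipse, measure) can be sketched as follows. The contour extraction, the Ramanujan perimeter approximation, and the pixel spacing are our assumptions for illustration, not the paper's exact post-processing.

```python
# Hedged sketch of segmentation-based HC estimation: fit an ellipse to the
# contour of a binary segmentation map and return the circumference.
import numpy as np
from skimage.measure import EllipseModel, find_contours

def head_circumference_mm(skull_mask: np.ndarray, px_mm: float) -> float:
    contour = max(find_contours(skull_mask, 0.5), key=len)  # longest contour
    model = EllipseModel()
    model.estimate(contour[:, ::-1])       # (row, col) -> (x, y)
    _, _, a, b, _ = model.params           # semi-axes a, b
    # Ramanujan's approximation to the ellipse perimeter.
    h = 3 * (a + b) - np.sqrt((3 * a + b) * (a + 3 * b))
    return float(np.pi * h * px_mm)

# Hypothetical elliptical "skull" mask.
yy, xx = np.mgrid[:256, :256]
mask = (((xx - 128) / 90) ** 2 + ((yy - 128) / 60) ** 2 < 1).astype(float)
print(head_circumference_mm(mask, px_mm=0.1))
```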

18 pages, 4407 KiB  
Article
Automatic Aortic Valve Cusps Segmentation from CT Images Based on the Cascading Multiple Deep Neural Networks
by Gakuto Aoyama, Longfei Zhao, Shun Zhao, Xiao Xue, Yunxin Zhong, Haruo Yamauchi, Hiroyuki Tsukihara, Eriko Maeda, Kenji Ino, Naoki Tomii, Shu Takagi, Ichiro Sakuma, Minoru Ono and Takuya Sakaguchi
J. Imaging 2022, 8(1), 11; https://doi.org/10.3390/jimaging8010011 - 14 Jan 2022
Cited by 7 | Viewed by 3189
Abstract
Accurate morphological information on aortic valve cusps is critical in treatment planning. Image segmentation is necessary to acquire this information, but manual segmentation is tedious and time-consuming. In this paper, we propose a fully automatic aortic valve cusp segmentation method for CT images that combines two deep neural networks: SpatialConfiguration-Net for detecting anatomical landmarks and U-Net for segmenting the aortic valve components. A total of 258 CT volumes of end-systolic and end-diastolic phases, including cases with and without severe calcifications, were collected and manually annotated for each aortic valve component. The collected CT volumes were split 6:2:2 for the training, validation, and test steps, and our method was evaluated by five-fold cross-validation. The segmentation was successful for all CT volumes, with a mean processing time of 69.26 s. For the segmentation of the aortic root, the right-coronary cusp, the left-coronary cusp, and the non-coronary cusp, the mean Dice coefficients were 0.95, 0.70, 0.69, and 0.67, respectively. There were strong correlations between measurement values automatically calculated from the annotations and those calculated from the segmentation results. The results suggest that our method can be used to automatically obtain measurement values for aortic valve morphology.
(This article belongs to the Special Issue Current Methods in Medical Image Segmentation)
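
Agreement between annotation-derived and segmentation-derived measurements, as reported above, is typically quantified with a correlation coefficient; the sketch below uses Pearson's r on hypothetical measurement values.

```python
# Hedged sketch: correlation between manual and automatic measurements.
# The values below are made up for illustration.
import numpy as np
from scipy.stats import pearsonr

manual_mm = np.array([23.1, 25.4, 21.8, 24.9, 22.6, 26.0])
auto_mm = np.array([23.4, 25.1, 22.0, 24.5, 22.9, 25.7])
r, p = pearsonr(manual_mm, auto_mm)
print(f"r = {r:.3f} (p = {p:.4f})")
```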

18 pages, 26877 KiB  
Article
Single-Input Multi-Output U-Net for Automated 2D Foetal Brain Segmentation of MR Images
by Andrik Rampun, Deborah Jarvis, Paul D. Griffiths, Reyer Zwiggelaar, Bryan W. Scotney and Paul A. Armitage
J. Imaging 2021, 7(10), 200; https://doi.org/10.3390/jimaging7100200 - 1 Oct 2021
Cited by 6 | Viewed by 2040
Abstract
In this work, we develop the Single-Input Multi-Output U-Net (SIMOU-Net), a hybrid network for foetal brain segmentation inspired by the original U-Net fused with the holistically nested edge detection (HED) network. The SIMOU-Net is similar to the original U-Net, but it has a deeper architecture and takes account of the features extracted from each side output. It acts similarly to an ensemble of neural networks; however, instead of averaging the outputs of several independently trained models, which is computationally expensive, our approach combines the outputs from a single network to reduce the variance of predictions and generalization errors. Experimental results using 200 normal foetal brains consisting of over 11,500 2D images produced Dice and Jaccard coefficients of 94.2 ± 5.9% and 88.7 ± 6.9%, respectively. We further tested the proposed network on 54 abnormal cases (over 3500 images) and achieved Dice and Jaccard coefficients of 91.2 ± 6.8% and 85.7 ± 6.6%, respectively.
(This article belongs to the Special Issue Current Methods in Medical Image Segmentation)
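
The single-network fusion idea can be shown in miniature: average side outputs taken at different depths of one network (as in HED-style deep supervision) rather than averaging separately trained models. The tiny two-block network below is a hypothetical stand-in for the SIMOU-Net backbone.

```python
# Hedged sketch of side-output fusion within a single network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySideOutputNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = nn.Conv2d(1, 8, 3, padding=1)
        self.block2 = nn.Conv2d(8, 16, 3, padding=1, stride=2)
        self.side1 = nn.Conv2d(8, 1, 1)   # side output at full resolution
        self.side2 = nn.Conv2d(16, 1, 1)  # side output at half resolution

    def forward(self, x):
        f1 = F.relu(self.block1(x))
        f2 = F.relu(self.block2(f1))
        s1 = self.side1(f1)
        s2 = F.interpolate(self.side2(f2), size=x.shape[-2:],
                           mode="bilinear", align_corners=False)
        fused = (s1 + s2) / 2             # combine side outputs of one network
        return torch.sigmoid(fused)

pred = TinySideOutputNet()(torch.randn(1, 1, 64, 64))  # (1, 1, 64, 64) mask
```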

Review


28 pages, 424 KiB  
Review
Literature Review on Artificial Intelligence Methods for Glaucoma Screening, Segmentation, and Classification
by José Camara, Alexandre Neto, Ivan Miguel Pires, María Vanessa Villasana, Eftim Zdravevski and António Cunha
J. Imaging 2022, 8(2), 19; https://doi.org/10.3390/jimaging8020019 - 20 Jan 2022
Cited by 19 | Viewed by 4833
Abstract
Artificial intelligence techniques are now being applied in different medical solutions, ranging from disease screening to activity recognition and computer-aided diagnosis. The combination of computer science methods and medical knowledge facilitates and improves the accuracy of the different processes and tools. Inspired by these advances, this paper presents a literature review focused on state-of-the-art glaucoma screening, segmentation, and classification based on images of the papilla and excavation using deep learning techniques. These techniques have been shown to have high sensitivity and specificity in glaucoma screening based on papilla and excavation images. The automatic segmentation of the contours of the optic disc and the excavation then allows the identification and assessment of the progression of glaucomatous disease. As a result, we assessed whether deep learning techniques may be helpful in performing accurate and low-cost measurements related to glaucoma, which may promote patient empowerment and help medical doctors better monitor patients.
(This article belongs to the Special Issue Current Methods in Medical Image Segmentation)
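
One example of the low-cost measurements such segmentations enable (our choice for illustration; the review does not fix one) is the vertical cup-to-disc ratio, computed here from hypothetical binary optic-disc and cup masks.

```python
# Hedged sketch: vertical cup-to-disc ratio from two binary masks.
import numpy as np

def vertical_extent_px(mask: np.ndarray) -> int:
    """Height in pixels of the mask's bounding box along the vertical axis."""
    rows = np.flatnonzero(mask.any(axis=1))
    return int(rows[-1] - rows[0] + 1)

def vertical_cup_to_disc_ratio(disc: np.ndarray, cup: np.ndarray) -> float:
    return vertical_extent_px(cup) / vertical_extent_px(disc)

# Hypothetical concentric disc and cup masks.
yy, xx = np.mgrid[:128, :128]
disc = np.hypot(yy - 64, xx - 64) < 40
cup = np.hypot(yy - 64, xx - 64) < 22
print(f"vertical CDR = {vertical_cup_to_disc_ratio(disc, cup):.2f}")  # ~0.54
```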