
Medical Image Segmentation: Role in Diagnostics, Prognostics and Decision Making

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (10 March 2024) | Viewed by 3205

Special Issue Editors


Guest Editor
Department of Pathology and Clinical Bioinformatics, Erasmus Medical Center, 3015 GD Rotterdam, The Netherlands
Interests: deep learning; radiomics; histopathology; medical image analysis; image segmentation; image classification; CAD systems

Guest Editor
Tissue Hybridisation & Digital Pathology, Precision Medicine Centre of Excellence, Queen’s University Belfast, 97 Lisburn Rd., Belfast BT9 7AE, Northern Ireland, UK
Interests: artificial intelligence; deep learning; medical image analysis; histopathology; segmentation; classification

Special Issue Information

Dear Colleagues,

Medical image segmentation involves the extraction of regions of interest (ROIs) from different imaging modalities, such as X-ray, magnetic resonance imaging (MRI), computed tomography (CT), thermal sensor images, histopathological hematoxylin and eosin (H&E) whole slide images (WSIs), and immunohistochemistry WSIs. The main purpose of delineating the boundaries of ROIs is to understand the underlying anatomical structures for a particular study, for example, to grade tumor severity or to classify a tumor into a molecular subtype based on its shape, size, or microenvironment. Manual segmentation of ROIs in medical images is time-consuming, and it is usually challenging to create a robust and generalizable algorithm with traditional image processing approaches. Recent advances in artificial intelligence (AI) have paved a new path for image segmentation, making robust, generalizable, and accurate models possible. However, this typically requires a large amount of labeled data and substantial computational resources.

This Special Issue focuses on novel segmentation methods devised using AI for various imaging modalities, such as X-ray, CT, positron emission tomography (PET), ultrasound, MRI, histopathological H&E WSIs, and immunohistochemistry WSIs, that help clinicians, radiologists, and pathologists with diagnosis, prognosis, and decision making in clinical settings. Topics of interest include:

  • Sensor applications
  • Imaging sensors
  • Sensor data processing
  • Camera sensors
  • Thermal imaging sensors
  • Image segmentation
  • Radiology
  • Histopathology
  • Cancer diagnosis
  • Medical images
  • Machine learning
  • Deep learning
  • Artificial intelligence
  • Explainable AI models
  • Decision making

Dr. Farhan Akram
Dr. Vivek Kumar Singh
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensor applications
  • imaging sensors
  • sensor data processing
  • camera sensors
  • thermal imaging sensors
  • image segmentation
  • radiology
  • histopathology
  • cancer diagnosis
  • medical images
  • machine learning
  • deep learning
  • artificial intelligence
  • explainable AI models
  • decision making

Published Papers (3 papers)


Research

20 pages, 3271 KiB  
Article
FNeXter: A Multi-Scale Feature Fusion Network Based on ConvNeXt and Transformer for Retinal OCT Fluid Segmentation
by Zhiyuan Niu, Zhuo Deng, Weihao Gao, Shurui Bai, Zheng Gong, Chucheng Chen, Fuju Rong, Fang Li and Lan Ma
Sensors 2024, 24(8), 2425; https://doi.org/10.3390/s24082425 - 10 Apr 2024
Viewed by 359
Abstract
The accurate segmentation and quantification of retinal fluid in Optical Coherence Tomography (OCT) images are crucial for the diagnosis and treatment of ophthalmic diseases such as age-related macular degeneration. However, accurate segmentation of retinal fluid is challenging due to significant variations in the size, position, and shape of fluid regions, as well as their complex, curved boundaries. To address these challenges, we propose a novel multi-scale feature fusion attention network (FNeXter), based on ConvNeXt and Transformer, for OCT fluid segmentation. In FNeXter, we introduce a novel global multi-scale hybrid encoder module that integrates ConvNeXt, Transformer, and region-aware spatial attention. This module captures long-range dependencies and non-local similarities while also focusing on local features, and its spatial region-aware capability enables it to adaptively focus on lesion regions. Additionally, we propose a novel self-adaptive multi-scale feature fusion attention module to enhance the skip connections between the encoder and the decoder. The inclusion of this module elevates the model’s capacity to learn global features and multi-scale contextual information effectively. Finally, we conduct comprehensive experiments to evaluate the performance of the proposed FNeXter. Experimental results demonstrate that our approach outperforms other state-of-the-art methods in the task of fluid segmentation.
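To make the hybrid design described in the abstract concrete, here is a minimal PyTorch sketch, written under stated assumptions rather than taken from the authors' code: one encoder block pairing a ConvNeXt-style convolutional branch (local features) with a multi-head self-attention branch (long-range dependencies), fused by a learned 1x1 gate. The module names, channel sizes, and fusion scheme are illustrative assumptions.

```python
# Illustrative sketch of a ConvNeXt + Transformer hybrid block
# (an assumed structure, not the FNeXter implementation).
import torch
import torch.nn as nn


class ConvNeXtBranch(nn.Module):
    """Depthwise 7x7 convolution plus pointwise MLP, as in ConvNeXt blocks."""
    def __init__(self, dim: int):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x):                       # x: (B, C, H, W)
        y = self.dwconv(x).permute(0, 2, 3, 1)  # to (B, H, W, C) for LayerNorm
        y = self.mlp(self.norm(y)).permute(0, 3, 1, 2)
        return x + y                            # residual connection


class HybridBlock(nn.Module):
    """Fuses the convolutional branch with self-attention over spatial tokens."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.conv_branch = ConvNeXtBranch(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Conv2d(2 * dim, dim, kernel_size=1)  # learned fusion gate

    def forward(self, x):
        local = self.conv_branch(x)                       # local features
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)             # (B, H*W, C)
        global_, _ = self.attn(tokens, tokens, tokens)    # long-range context
        global_ = global_.transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(torch.cat([local, global_], dim=1))


# Example: one hybrid block applied to a 64-channel feature map.
feat = torch.randn(1, 64, 32, 32)
print(HybridBlock(64)(feat).shape)  # torch.Size([1, 64, 32, 32])
```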

16 pages, 2088 KiB  
Article
CFANet: Context Feature Fusion and Attention Mechanism Based Network for Small Target Segmentation in Medical Images
by Ruifen Cao, Long Ning, Chao Zhou, Pijing Wei, Yun Ding, Dayu Tan and Chunhou Zheng
Sensors 2023, 23(21), 8739; https://doi.org/10.3390/s23218739 - 26 Oct 2023
Cited by 2 | Viewed by 1151
Abstract
Medical image segmentation plays a crucial role in clinical diagnosis, treatment planning, and disease monitoring. Automatic segmentation methods based on deep learning have developed rapidly, with results comparable to those of clinical experts for large objects, but segmentation accuracy for small objects remains unsatisfactory: current deep learning methods find it difficult to extract multi-scale features from medical images, leading to insufficient detection capability for smaller objects. In this paper, we propose a context feature fusion and attention mechanism based network for small target segmentation in medical images, called CFANet. CFANet is based on the U-Net structure, comprising an encoder and a decoder, and incorporates two key modules, context feature fusion (CFF) and effective channel spatial attention (ECSA), to improve segmentation performance. The CFF module utilizes contextual information from different scales to enhance the representation of small targets; by fusing multi-scale features, the network captures local and global contextual cues that are critical for accurate segmentation. The ECSA module further enhances the network’s ability to capture long-range dependencies by incorporating attention mechanisms at the spatial and channel levels, allowing the network to focus on information-rich regions while suppressing irrelevant or noisy features. Extensive experiments are conducted on four challenging medical image datasets, namely ADAM, LUNA16, Thoracic OAR, and WORD. Experimental results show that CFANet outperforms state-of-the-art methods in terms of segmentation accuracy and robustness, achieving excellent performance in segmenting small targets and demonstrating its potential in various clinical applications.
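The two modules named in the abstract can be sketched as follows. This is an assumed, minimal rendering of the ideas (dilated-convolution context fusion and combined channel/spatial attention), not the paper's CFF or ECSA implementation; all layer choices and dilation rates are illustrative.

```python
# Illustrative sketches of multi-scale context fusion and channel/spatial
# attention (assumed structures, not the CFANet implementation).
import torch
import torch.nn as nn


class ContextFeatureFusion(nn.Module):
    """Fuses dilated convolutions at several rates to capture multi-scale context."""
    def __init__(self, dim: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(dim, dim, kernel_size=3, padding=r, dilation=r) for r in rates
        )
        self.project = nn.Conv2d(len(rates) * dim, dim, kernel_size=1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))


class ChannelSpatialAttention(nn.Module):
    """Channel reweighting (squeeze-and-excite style) followed by spatial attention."""
    def __init__(self, dim: int, reduction: int = 8):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(dim, dim // reduction, 1), nn.ReLU(),
            nn.Conv2d(dim // reduction, dim, 1), nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid()
        )

    def forward(self, x):
        x = x * self.channel(x)                    # emphasize informative channels
        pooled = torch.cat(                        # avg + max pooling over channels
            [x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1
        )
        return x * self.spatial(pooled)            # emphasize informative locations


# Example: fuse context, then apply attention, on a decoder feature map.
feat = torch.randn(1, 64, 48, 48)
print(ChannelSpatialAttention(64)(ContextFeatureFusion(64)(feat)).shape)
```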

13 pages, 2784 KiB  
Article
Deep Learning-Based Evaluation of Ultrasound Images for Benign Skin Tumors
by Hyunwoo Lee, Yerin Lee, Seung-Won Jung, Solam Lee, Byungho Oh and Sejung Yang
Sensors 2023, 23(17), 7374; https://doi.org/10.3390/s23177374 - 24 Aug 2023
Cited by 1 | Viewed by 1075
Abstract
In this study, a combined convolutional neural network for the diagnosis of three benign skin tumors was designed, and its effectiveness was verified through quantitative and statistical analysis. To this end, 698 sonographic images were taken and diagnosed at the Department of Dermatology at Severance Hospital in Seoul, Korea, between 10 November 2017 and 17 January 2020. Through an empirical process, a convolutional neural network combining two structures, a residual structure and an attention-gated structure, was designed. Five-fold cross-validation was applied, and the training set for each fold was augmented with the Fast AutoAugment technique. Training yielded, for the three benign skin tumors, an average accuracy of 95.87%, an average sensitivity of 90.10%, and an average specificity of 96.23%. Furthermore, statistical analysis using class activation maps and physicians’ findings showed that the judgment criteria of the physicians and the trained combined convolutional neural network were similar. This study suggests that the model designed and trained here can serve as a diagnostic aid to assist physicians and enable more efficient and accurate diagnoses.
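As a hedged illustration of the two ingredients the abstract names, the sketch below combines a residual block with a simple attention gate and sets up a five-fold split. It is an assumed structure, not the authors' network; the block design, the gate, and the fold setup are illustrative, and only the image count of 698 comes from the abstract.

```python
# Illustrative residual + attention-gated block and a five-fold CV scaffold
# (assumed structures, not the paper's network).
import torch
import torch.nn as nn
from sklearn.model_selection import KFold


class AttentionGatedResidualBlock(nn.Module):
    """Residual conv block whose output is reweighted by a learned attention map."""
    def __init__(self, dim: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1), nn.BatchNorm2d(dim), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.BatchNorm2d(dim),
        )
        self.gate = nn.Sequential(nn.Conv2d(dim, 1, kernel_size=1), nn.Sigmoid())
        self.act = nn.ReLU()

    def forward(self, x):
        y = self.act(x + self.body(x))   # residual structure
        return y * self.gate(y)          # attention-gated output


# Five-fold cross-validation scaffold over image indices; the paper augmented
# each fold's training split with Fast AutoAugment before training.
indices = list(range(698))  # the study used 698 sonographic images
for fold, (train_idx, val_idx) in enumerate(
    KFold(n_splits=5, shuffle=True, random_state=0).split(indices)
):
    # train one model per fold on train_idx, validate on val_idx,
    # then average accuracy/sensitivity/specificity across folds
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val")
```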
