
Medical Imaging: Artificial Intelligence, Image Recognition, and Machine Learning Techniques

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (10 June 2022) | Viewed by 27748

Special Issue Editor


Dr. Inês Domingues
Guest Editor
Centro de Investigação do Instituto Português de Oncologia do Porto (CI-IPOP), Grupo de Física Médica, Radiobiologia e Protecção Radiológica, 4200-072 Porto, Portugal
Interests: pattern recognition; image processing; data science; artificial intelligence; machine learning; deep learning; biomedical applications; health applications; cancer

Special Issue Information

Dear Colleagues,

Medical imaging has become an essential component of many fields of medical research and clinical practice. Medical imaging techniques, combined with deep learning and artificial intelligence, bring many benefits to healthcare. Using computing, networking technologies, and artificial intelligence, we can collect, measure, and analyse vast volumes of health-related data, leading to tremendous advances in healthcare and excellent opportunities for the medical imaging community. Meanwhile, these technologies have also brought new challenges and issues.

This Special Issue of the journal Sensors focuses on advanced techniques, new challenges, and open issues in medical imaging.

Dr. Inês Domingues
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical imaging
  • artificial intelligence
  • information fusion for medical data
  • image recognition
  • machine learning
  • deep learning
  • image processing
  • image analysis
  • computer vision

Published Papers (8 papers)


Research


10 pages, 3138 KiB  
Article
In-Bed Posture Classification Using Deep Neural Network
by Lindsay Stern and Atena Roshan Fekr
Sensors 2023, 23(5), 2430; https://doi.org/10.3390/s23052430 - 22 Feb 2023
Cited by 13 | Viewed by 2158
Abstract
In-bed posture monitoring has become a prevalent area of research to help minimize the risk of pressure sore development and to increase sleep quality. This paper proposes 2D and 3D Convolutional Neural Networks, trained respectively on images and videos of an open-access dataset consisting of 13 subjects’ body heat maps captured from a pressure mat in 17 positions. The main goal of this paper is to detect the three main body positions: supine, left, and right. We compare the use of image and video data through 2D and 3D models in our classification. Since the dataset was imbalanced, three strategies were evaluated, i.e., downsampling, oversampling, and class weights. The best 3D model achieved accuracies of 98.90 ± 1.05% and 97.80 ± 2.14% for 5-fold and leave-one-subject-out (LOSO) cross validations, respectively. To compare the 3D model with 2D, four pre-trained 2D models were evaluated, where the best-performing one was ResNet-18, with accuracies of 99.97 ± 0.03% for 5-fold and 99.62 ± 0.37% for LOSO. The proposed 2D and 3D models provided promising results for in-bed posture recognition and can be used in the future to further distinguish postures into more detailed subclasses. The outcome of this study can be used to remind caregivers at hospitals and long-term care facilities to reposition their patients if they do not reposition themselves naturally, in order to prevent pressure ulcers. In addition, the evaluation of body postures and movements during sleep can help caregivers understand sleep quality.
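
The class-imbalance handling mentioned in the abstract (downsampling, oversampling, and class weights) can be illustrated with a short sketch. The posture labels come from the abstract, but the sample counts below are invented purely for illustration and are not the dataset's actual numbers.

    # Minimal sketch of inverse-frequency class weighting for an imbalanced
    # 3-class posture problem (counts are hypothetical, not from the paper).
    counts = {"supine": 5200, "left": 1800, "right": 1600}

    n_samples = sum(counts.values())
    n_classes = len(counts)

    # Rare classes receive proportionally larger weights, so the training loss
    # penalises their misclassification more heavily.
    weights = {c: n_samples / (n_classes * n) for c, n in counts.items()}

    for c, w in weights.items():
        print(f"{c}: weight = {w:.3f}")

Such weights would typically be passed to the loss function instead of resampling the data, which is what distinguishes the class-weight strategy from downsampling and oversampling.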

20 pages, 6317 KiB  
Article
Automatic Detection of Liver Cancer Using Hybrid Pre-Trained Models
by Esam Othman, Muhammad Mahmoud, Habib Dhahri, Hatem Abdulkader, Awais Mahmood and Mina Ibrahim
Sensors 2022, 22(14), 5429; https://doi.org/10.3390/s22145429 - 20 Jul 2022
Cited by 10 | Viewed by 2673
Abstract
Liver cancer is a life-threatening illness and one of the fastest-growing cancer types in the world. Consequently, the early detection of liver cancer leads to lower mortality rates. This work aims to build a model that will help clinicians determine the type of tumor when it occurs within the liver region by analyzing images of tissue taken from a biopsy of this tumor. Working at this stage requires effort, time, and the accumulated experience of a tissue expert to determine whether the tumor is malignant and needs treatment. Thus, a histology expert can make use of this model to obtain an initial diagnosis. This study proposes a deep learning model using convolutional neural networks (CNNs) that transfers knowledge from pre-trained global models and distills this knowledge into a single model to help diagnose liver tumors from CT scans. Thus, we obtained a hybrid model capable of detecting liver tumors in CT images of biopsies. The best results obtained in this research reached an accuracy of 0.995, a precision of 0.864, and a recall of 0.979, which are higher than those obtained using other models. It is worth noting that this model was tested on a limited set of data and gave good detection results. The model can be used to support the decisions of specialists in this field and to save their effort and time, especially during yearly periodic examination campaigns.
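
As a rough illustration of the kind of feature-level fusion of pre-trained CNNs described above, the following PyTorch sketch concatenates the features of two backbones under a new classification head. The choice of ResNet-18 and DenseNet-121, the hidden size, and the two-class output are assumptions for illustration, not the authors' exact architecture.

    import torch
    import torch.nn as nn
    from torchvision import models

    class HybridClassifier(nn.Module):
        """Fuse features from two CNN backbones (normally loaded with pre-trained weights)."""
        def __init__(self, num_classes: int = 2):
            super().__init__()
            # In practice both backbones would be initialised with pre-trained weights.
            self.backbone_a = models.resnet18()
            self.backbone_b = models.densenet121()
            feat_a = self.backbone_a.fc.in_features          # 512
            feat_b = self.backbone_b.classifier.in_features  # 1024
            # Strip the original classification heads so the backbones output feature vectors.
            self.backbone_a.fc = nn.Identity()
            self.backbone_b.classifier = nn.Identity()
            # New head trained on the concatenated feature vector.
            self.head = nn.Sequential(
                nn.Linear(feat_a + feat_b, 256),
                nn.ReLU(),
                nn.Dropout(0.5),
                nn.Linear(256, num_classes),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            fused = torch.cat([self.backbone_a(x), self.backbone_b(x)], dim=1)
            return self.head(fused)

    # Quick shape check on a dummy batch of 224x224 RGB slices.
    logits = HybridClassifier()(torch.randn(2, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 2])

Concatenating backbone features and training only a small new head is one common way to "decant" knowledge from several pre-trained models into a single classifier when labelled data are limited.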

26 pages, 10525 KiB  
Article
Unified End-to-End YOLOv5-HR-TCM Framework for Automatic 2D/3D Human Pose Estimation for Real-Time Applications
by Hung-Cuong Nguyen, Thi-Hao Nguyen, Rafal Scherer and Van-Hung Le
Sensors 2022, 22(14), 5419; https://doi.org/10.3390/s22145419 - 20 Jul 2022
Cited by 16 | Viewed by 4847
Abstract
Three-dimensional human pose estimation is widely applied in sports, robotics, and healthcare. In the past five years, CNN-based studies for 3D human pose estimation have been numerous and have yielded impressive results. However, studies often focus only on improving the accuracy of the estimation results. In this paper, we propose a fast, unified end-to-end model for estimating 3D human pose, called YOLOv5-HR-TCM (YOLOv5-HRNet-Temporal Convolution Model). Our proposed model is based on the 2D-to-3D lifting approach for 3D human pose estimation, while taking care of each step in the estimation process: person detection, 2D human pose estimation, and 3D human pose estimation. The proposed model combines best practices at each stage. It is evaluated on the Human 3.6M dataset and compared with other methods at each step. The method achieves high accuracy without sacrificing processing speed; the whole pipeline runs at 3.146 FPS on a low-end computer. In particular, we propose a sports scoring application based on the deviation angle between the estimated 3D human posture and the standard (reference) posture. The average deviation angle evaluated on the Human 3.6M dataset (Protocol #1) is 8.2 degrees.
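
A minimal sketch of how a deviation angle between an estimated and a reference 3D pose might be computed, assuming both poses are given as arrays of 3D joint coordinates and that angles are measured at each joint between its two adjacent segments. The joint triplets and the random poses below are illustrative only, not the paper's protocol.

    import numpy as np

    def joint_angle(a, b, c):
        """Angle (degrees) at joint b formed by segments b->a and b->c."""
        v1, v2 = a - b, c - b
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    def mean_deviation(pose_est, pose_ref, triplets):
        """Average absolute difference of joint angles between two 3D poses."""
        devs = [abs(joint_angle(*pose_est[list(t)]) - joint_angle(*pose_ref[list(t)]))
                for t in triplets]
        return float(np.mean(devs))

    # Toy example: 17 joints, two random poses, angles at three illustrative joints.
    rng = np.random.default_rng(0)
    est, ref = rng.normal(size=(17, 3)), rng.normal(size=(17, 3))
    print(mean_deviation(est, ref, triplets=[(0, 1, 2), (1, 2, 3), (4, 5, 6)]))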

12 pages, 535 KiB  
Article
A Feasibility Study on Deep Learning Based Brain Tumor Segmentation Using 2D Ellipse Box Areas
by Muhaddisa Barat Ali, Xiaohan Bai, Irene Yu-Hua Gu, Mitchel S. Berger and Asgeir Store Jakola
Sensors 2022, 22(14), 5292; https://doi.org/10.3390/s22145292 - 15 Jul 2022
Cited by 4 | Viewed by 1776
Abstract
In most deep learning-based brain tumor segmentation methods, training the deep network requires annotated tumor areas. However, accurate tumor annotation puts high demands on medical personnel. The aim of this study is to train a deep network for segmentation by using ellipse box areas surrounding the tumors. In the proposed method, the deep network is trained using a large number of unannotated tumor images with foreground (FG) and background (BG) ellipse box areas surrounding the tumor and background, and a small number of patients (<20) with annotated tumors. The training consists of initial training on the two ellipse boxes on unannotated MRIs, followed by refined training on a small number of annotated MRIs. We use a multi-stream U-Net, an extension of the conventional U-Net, which enables the use of complementary information from multi-modality MRIs (e.g., T1, T1ce, T2, and FLAIR). To test the feasibility of the proposed approach, experiments and evaluation were conducted on two datasets for glioma segmentation. Segmentation performance on the test sets was then compared with that of the same network trained entirely on annotated MRIs. Our experiments show that the proposed method obtained good tumor segmentation results on the test sets: the Dice score on tumor areas is (0.8407, 0.9104) and the segmentation accuracy on tumor areas is (83.88%, 88.47%) for the MICCAI BraTS’17 and US datasets, respectively. Compared with the results of the network trained on all annotated tumors, the drop in segmentation performance from the proposed approach is (0.0594, 0.0159) in Dice score and (8.78%, 2.61%) in segmented tumor accuracy for the MICCAI and US test sets, which is relatively small. Our case studies demonstrate that training the network for segmentation using ellipse box areas in place of fully annotated tumors is feasible and can be considered an alternative, trading a small drop in segmentation performance for a saving in the time medical experts spend annotating tumors.
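
As a simple illustration of the weak labels described above, the sketch below generates a binary elliptical foreground mask inscribed in a rectangular box. The image size and box coordinates are made-up values, and the actual FG/BG box construction in the paper may differ.

    import numpy as np

    def ellipse_mask(shape, box):
        """Binary mask of the ellipse inscribed in box = (row0, col0, row1, col1)."""
        r0, c0, r1, c1 = box
        cr, cc = (r0 + r1) / 2.0, (c0 + c1) / 2.0   # centre of the box
        ar, ac = (r1 - r0) / 2.0, (c1 - c0) / 2.0   # semi-axes of the ellipse
        rows, cols = np.ogrid[:shape[0], :shape[1]]
        return ((rows - cr) / ar) ** 2 + ((cols - cc) / ac) ** 2 <= 1.0

    # Example: a 240x240 slice with a hypothetical foreground box around a tumor.
    fg = ellipse_mask((240, 240), (60, 80, 140, 180))
    print(fg.sum(), "foreground pixels")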

13 pages, 31784 KiB  
Article
The Influence of a Coherent Annotation and Synthetic Addition of Lung Nodules for Lung Segmentation in CT Scans
by Joana Sousa, Tania Pereira, Inês Neves, Francisco Silva and Hélder P. Oliveira
Sensors 2022, 22(9), 3443; https://doi.org/10.3390/s22093443 - 30 Apr 2022
Cited by 1 | Viewed by 1644
Abstract
Lung cancer is a highly prevalent pathology and a leading cause of cancer-related deaths. Most patients are diagnosed when the disease has already manifested itself, usually a sign of lung cancer at an advanced stage, and, as a consequence, the 5-year survival rates are low. To increase the chances of survival, improving early cancer detection capacity is crucial, and computed tomography (CT) scans play a key role in this. The manual evaluation of CTs is a time-consuming task, and computer-aided diagnosis (CAD) systems can help relieve that burden. The segmentation of the lung is one of the first steps in these systems, yet it is very challenging given the heterogeneity of lung diseases usually present and associated with cancer development. In our previous work, a segmentation model based on a ResNet34 and U-Net combination was developed on a cross-cohort dataset; it yielded good segmentation masks for multiple pathological conditions but misclassified some of the lung nodules. The multiple datasets used for the model's development originated from different annotation protocols, which generated inconsistencies in the learning process, and the annotations are usually not adequate for lung cancer studies since they do not comprise lung nodules. In addition, the initial datasets used for training presented a reduced number of nodules, which was shown not to be enough to allow the segmentation model to learn to include them as part of the lung. In this work, an objective protocol for lung mask segmentation was defined, and the previous annotations were carefully reviewed and corrected to create consistent and adequate ground-truth masks for the development of the segmentation model. Data augmentation with domain knowledge was used to create lung nodules in the cases used to train the model. The developed model achieved a Dice similarity coefficient (DSC) above 0.9350 for all test datasets and showed an ability to cope not only with a variety of lung patterns but also with the presence of lung nodules. This study shows the importance of using consistent annotations for the supervised learning process, which is a very time-consuming task but one of great importance for healthcare applications. Given the lack of massive datasets in the medical field, and the consequent lack of wide representativeness, data augmentation with domain knowledge could be a promising way to overcome this limitation in the development of learning models.
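
For reference, the Dice similarity coefficient (DSC) reported above can be computed from two binary masks as follows; this is the standard definition, not code from the paper.

    import numpy as np

    def dice(pred, target, eps=1e-8):
        """Dice similarity coefficient between two binary masks."""
        pred, target = pred.astype(bool), target.astype(bool)
        inter = np.logical_and(pred, target).sum()
        return 2.0 * inter / (pred.sum() + target.sum() + eps)

    # Toy example with two overlapping square masks.
    a = np.zeros((64, 64), dtype=bool); a[10:40, 10:40] = True
    b = np.zeros((64, 64), dtype=bool); b[20:50, 20:50] = True
    print(round(dice(a, b), 4))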

25 pages, 6806 KiB  
Article
Spatial Distribution Analysis of Novel Texture Feature Descriptors for Accurate Breast Density Classification
by Haipeng Li, Ramakrishnan Mukundan and Shelley Boyd
Sensors 2022, 22(7), 2672; https://doi.org/10.3390/s22072672 - 30 Mar 2022
Cited by 1 | Viewed by 1699
Abstract
Breast density has been recognised as an important biomarker that indicates the risk of developing breast cancer. Accurate classification of breast density plays a crucial role in developing a computer-aided detection (CADe) system for mammogram interpretation. This paper proposes a novel texture descriptor, namely, rotation invariant uniform local quinary patterns (RIU4-LQP), to describe texture patterns in mammograms and to improve the robustness of image features. In conventional processing schemes, image features are obtained by computing histograms from texture patterns. However, such processes ignore very important spatial information related to the texture features. This study designs a new feature vector, namely, the K-spectrum, using Baddeley's K-inhom function to characterise the spatial distribution of feature point sets. Texture features extracted by RIU4-LQP and the K-spectrum are used to classify mammograms into BI-RADS density categories. Three feature selection methods are employed to optimise the feature set. In our experiments, two mammogram datasets, INbreast and MIAS, are used to test the proposed methods, and comparative analyses and statistical tests between different schemes are conducted. Experimental results show that our proposed method outperforms other approaches described in the literature, with best classification accuracies of 92.76% (INbreast) and 86.96% (MIAS).
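
The descriptor above builds on local quinary patterns, which quantise the difference between each neighbour and the centre pixel into five levels using two thresholds. The sketch below illustrates only that basic quantisation step with illustrative threshold values; the rotation-invariant uniform mapping (RIU4) and the K-spectrum, which are the paper's contributions, are not reproduced here.

    import numpy as np

    def quinary_codes(neighbours, centre, t1=2, t2=5):
        """Map neighbour-centre intensity differences to the five levels {-2, -1, 0, 1, 2}."""
        d = neighbours.astype(int) - int(centre)
        codes = np.zeros_like(d)
        codes[d >= t2] = 2
        codes[(d >= t1) & (d < t2)] = 1
        codes[(d <= -t1) & (d > -t2)] = -1
        codes[d <= -t2] = -2
        return codes

    # Example: 8 neighbours of a centre pixel on a grey-level mammogram patch.
    print(quinary_codes(np.array([120, 117, 113, 122, 130, 118, 110, 125]), centre=119))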

20 pages, 4208 KiB  
Article
Impact of Image Enhancement Module for Analysis of Mammogram Images for Diagnostics of Breast Cancer
by Yassir Edrees Almalki, Toufique Ahmed Soomro, Muhammad Irfan, Sharifa Khalid Alduraibi and Ahmed Ali
Sensors 2022, 22(5), 1868; https://doi.org/10.3390/s22051868 - 26 Feb 2022
Cited by 12 | Viewed by 5830
Abstract
Breast cancer is widespread around the world and can be cured if diagnosed at an early stage. Digital mammography is one of the most effective imaging modalities for the diagnosis of breast cancer. However, mammography images suffer from low contrast, background noise, and non-coherency among regions, and these factors make breast cancer diagnosis challenging. These problems can be overcome by using a new image enhancement technique. The objective of this research work is to enhance mammography images to improve the overall process of segmentation and classification in breast cancer diagnosis. We propose an image enhancement technique for mammogram images, together with removal of the pectoral muscle. The technique involves several steps. In the first step, we process the mammography images in three channels (red, green, and blue); the second step makes the background uniform using morphological operations; and the third step obtains a well-contrasted image using principal component analysis (PCA). The fourth step removes the pectoral muscle using a seed-based region-growing technique, and the last step enforces coherence among the different regions of the image using a second-order Laplacian of Gaussian (LoG) and an oriented diffusion filter to obtain a much-improved contrast image. The proposed image enhancement technique is tested on data collected from different hospitals in the Qassim health cluster, Qassim Province, Saudi Arabia. This database covers the five Breast Imaging Reporting and Data System (BI-RADS) categories and contains 11,194 images (cranio-caudal (CC) and mediolateral oblique (MLO) views), of which approximately 700 images were used for validation. We achieved improved performance in terms of peak signal-to-noise ratio, contrast, and effective measure of enhancement (EME), and the proposed technique outperforms existing image enhancement methods. This performance demonstrates the ability to improve the diagnostic performance of computerized breast cancer detection methods.
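
One of the steps above removes the pectoral muscle with a seed-based region-growing technique. The sketch below shows a generic, simplified intensity-tolerance region growing from a seed pixel; the tolerance, seed position, and toy image are assumptions, not the authors' exact algorithm.

    import numpy as np

    def region_grow(img, seed, tol=10):
        """Grow a 4-connected region of pixels whose intensity stays within tol of the seed value."""
        h, w = img.shape
        seed_val = float(img[seed])
        mask = np.zeros((h, w), dtype=bool)
        stack = [seed]
        while stack:
            r, c = stack.pop()
            if not (0 <= r < h and 0 <= c < w) or mask[r, c]:
                continue
            if abs(float(img[r, c]) - seed_val) > tol:
                continue
            mask[r, c] = True
            stack.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
        return mask

    # Toy example: bright pectoral-like triangle in the image corner, seed placed inside it.
    img = np.full((100, 100), 50, dtype=np.uint8)
    for i in range(40):
        img[i, :40 - i] = 200
    print(region_grow(img, seed=(5, 5)).sum(), "pixels grown")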

Review


48 pages, 2956 KiB  
Review
Deep Learning for Diabetic Retinopathy Analysis: A Review, Research Challenges, and Future Directions
by Muhammad Waqas Nadeem, Hock Guan Goh, Muzammil Hussain, Soung-Yue Liew, Ivan Andonovic and Muhammad Adnan Khan
Sensors 2022, 22(18), 6780; https://doi.org/10.3390/s22186780 - 8 Sep 2022
Cited by 28 | Viewed by 5652
Abstract
Deep learning (DL) enables the creation of computational models comprising multiple processing layers that learn data representations at multiple levels of abstraction. In the recent past, the use of deep learning has been proliferating, yielding promising results in applications across a growing number of fields, most notably in image processing, medical image analysis, data analysis, and bioinformatics. DL algorithms have also had a significant positive impact by yielding improvements in screening, recognition, segmentation, prediction, and classification applications across different domains of healthcare, such as those concerning the abdomen, cardiac imaging, pathology, and the retina. Given the extensive body of recent scientific contributions in this discipline, a comprehensive review of deep learning developments in the domain of diabetic retinopathy (DR) analysis, viz., screening, segmentation, prediction, classification, and validation, is presented here. A critical analysis of the relevant reported techniques is carried out, and the associated advantages and limitations are highlighted, culminating in the identification of research gaps and future challenges that help inform the research community in developing more efficient, robust, and accurate DL models for the various challenges in the monitoring and diagnosis of DR.
