
Machine and Deep Learning in Sensing and Imaging

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 6 November 2024 | Viewed by 6793

Special Issue Editor


Dr. Kate Saenko
Guest Editor
Department of Computer Science, Boston University, Boston, MA 02215, USA
Interests: computational intelligence; image processing; machine learning; artificial intelligence

Special Issue Information

Dear Colleagues,

The application of machine and deep learning methods in sensing and imaging can have a significant and profound impact on the analysis and treatment of the human body and on therapeutic decisions, and may ultimately improve outcomes for patients. A wide range of machine and deep learning methods have been applied to analyze and interpret data of various kinds, whether from sensors embedded in different tools and devices or from portable sensor devices. Advances in network design, processing power, the availability of easy-to-use software packages, and the scale of available medical image databases have accelerated developments in this exciting field. Nevertheless, studies evaluating the potential applications of machine and deep learning methods for the detection, lesion segmentation, therapeutic decision-making, and prognosis of human body diseases remain relatively sparse.

This Special Issue encourages authors from academia and industry to submit new research results regarding methods and applications in this field. We welcome high-quality original research or review articles relating to the application of current machine and deep learning methods to the human body, including clinical applications, methods, data augmentation, machine learning interpretation, and new algorithm design. The Special Issue topics include, but are not limited to:

  • Medical imaging
  • Biomedical engineering
  • Data fusion techniques
  • Human body imaging and therapy
  • Imaging modality
  • Decision support algorithms
  • Predictive modelling of treatment efficacy
  • Multi-parametric study

Dr. Kate Saenko
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • medical imaging
  • biomedical engineering
  • data fusion techniques
  • human body imaging and therapy
  • imaging modality
  • decision support algorithms
  • predictive modelling of treatment efficacy

Published Papers (2 papers)


Research

18 pages, 3601 KiB  
Article
Using Deep Learning Architectures for Detection and Classification of Diabetic Retinopathy
by Cheena Mohanty, Sakuntala Mahapatra, Biswaranjan Acharya, Fotis Kokkoras, Vassilis C. Gerogiannis, Ioannis Karamitsos and Andreas Kanavos
Sensors 2023, 23(12), 5726; https://doi.org/10.3390/s23125726 - 19 Jun 2023
Cited by 17 | Viewed by 4156
Abstract
Diabetic retinopathy (DR) is a common complication of long-term diabetes, affecting the human eye and potentially leading to permanent blindness. The early detection of DR is crucial for effective treatment, as symptoms often manifest in later stages. The manual grading of retinal images is time-consuming, prone to errors, and lacks patient-friendliness. In this study, we propose two deep learning (DL) architectures, a hybrid network combining VGG16 and XGBoost Classifier, and the DenseNet 121 network, for DR detection and classification. To evaluate the two DL models, we preprocessed a collection of retinal images obtained from the APTOS 2019 Blindness Detection Kaggle Dataset. This dataset exhibits an imbalanced image class distribution, which we addressed through appropriate balancing techniques. The performance of the considered models was assessed in terms of accuracy. The results showed that the hybrid network achieved an accuracy of 79.50%, while the DenseNet 121 model achieved an accuracy of 97.30%. Furthermore, a comparative analysis with existing methods utilizing the same dataset revealed the superior performance of the DenseNet 121 network. The findings of this study demonstrate the potential of DL architectures for the early detection and classification of DR. The superior performance of the DenseNet 121 model highlights its effectiveness in this domain. The implementation of such automated methods can significantly improve the efficiency and accuracy of DR diagnosis, benefiting both healthcare providers and patients.
(This article belongs to the Special Issue Machine and Deep Learning in Sensing and Imaging)
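The hybrid architecture described in the abstract pairs a CNN feature extractor with a gradient-boosted tree classifier. The sketch below shows only the shape of that pipeline, under loud assumptions: synthetic feature vectors stand in for real VGG16 activations over retinal images, and scikit-learn's GradientBoostingClassifier stands in for XGBoost, so the numbers are illustrative, not the paper's results.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# In the paper, a frozen VGG16 backbone turns each retinal image into a
# fixed-length feature vector. Here we use synthetic 512-dim features as
# stand-ins, shifted per class so the toy problem is learnable.
rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 400, 512, 5  # APTOS grades DR on 5 levels
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_classes, size=n_samples)
X += y[:, None] * 0.5  # class-dependent shift (toy separability)

# Gradient-boosted trees on top of the extracted features
# (sklearn's GradientBoostingClassifier in place of XGBoost).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

In a real replication, the feature matrix `X` would come from running images through `VGG16(include_top=False)` followed by global pooling, and class imbalance would be handled before the split, as the authors note.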

17 pages, 3617 KiB  
Article
Saliency Map and Deep Learning in Binary Classification of Brain Tumours
by Wojciech Chmiel, Joanna Kwiecień and Kacper Motyka
Sensors 2023, 23(9), 4543; https://doi.org/10.3390/s23094543 - 7 May 2023
Cited by 2 | Viewed by 2070
Abstract
The paper was devoted to the application of saliency analysis methods in the performance analysis of deep neural networks used for the binary classification of brain tumours. We have presented the basic issues related to deep learning techniques. A significant challenge in using deep learning methods is the ability to explain the decision-making process of the network. To ensure accurate results, the deep network being used must undergo extensive training to produce high-quality predictions. There are various network architectures that differ in their properties and number of parameters. Consequently, an intriguing question is how these different networks arrive at similar or distinct decisions based on the same set of prerequisites. Therefore, three widely used deep convolutional networks have been discussed, such as VGG16, ResNet50 and EfficientNetB7, which were used as backbone models. We have customized the output layer of these pre-trained models with a softmax layer. In addition, an additional network has been described that was used to assess the saliency areas obtained. For each of the above networks, many tests have been performed using key metrics, including statistical evaluation of the impact of class activation mapping (CAM) and gradient-weighted class activation mapping (Grad-CAM) on network performance on a publicly available dataset of brain tumour X-ray images.
(This article belongs to the Special Issue Machine and Deep Learning in Sensing and Imaging)
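The Grad-CAM method referenced in this abstract reduces to a small computation once a framework's autograd has produced the last convolutional layer's activations and the gradients of the class score with respect to them: average the gradients spatially to get per-channel weights, take the weighted sum of activation maps, and apply a ReLU. A minimal NumPy sketch of that step, with random arrays standing in for real network tensors:

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Core Grad-CAM step: (C, H, W) activations and dScore/dA -> (H, W) map."""
    weights = gradients.mean(axis=(1, 2))             # alpha_k: spatial average of grads
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0.0)                        # ReLU keeps positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalise to [0, 1] for display
    return cam

# Toy stand-ins; in practice these come from the backbone's last conv layer.
rng = np.random.default_rng(1)
A = rng.random((8, 7, 7))        # 8 channels of a 7x7 feature map
dYdA = rng.normal(size=A.shape)  # gradients of the class score w.r.t. A
heatmap = grad_cam(A, dYdA)
print(heatmap.shape)
```

The resulting heatmap is then upsampled to the input image size and overlaid on it, which is how the saliency regions assessed in the paper are visualised.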
