Deep Learning and Machine Learning in Biomedical Signal and Image Processing

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Biomedical Engineering".

Deadline for manuscript submissions: closed (20 August 2023) | Viewed by 5137

Special Issue Editor


Dr. Keun-Chang Kwak
Guest Editor
Department of Electronics Engineering, Chosun University, Gwangju 61452, Republic of Korea
Interests: biometrics; computational intelligence; human-robot interaction

Special Issue Information

Dear Colleagues,

Machine learning (ML) is a field dedicated to understanding and building methods that learn, that is, methods that use data to improve performance on a set of tasks. Among machine learning methods, deep learning (DL) is defined as a family of algorithms that attempt high-level abstraction through a combination of several nonlinear transformations. In particular, recent advances in deep learning techniques for biomedical image and signal processing have made this an actively studied field in both medicine and academia.

This Special Issue is concerned with biomedical signal processing (ECG, EEG, EMG, PPG, BCG, etc.), biomedical image processing (MRI, CT, PET, X-ray, etc.), disease diagnosis, biometrics, and biomedical analysis based on deep learning and machine learning.

Dr. Keun-Chang Kwak
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning
  • machine learning
  • medicine
  • biomedical signal processing
  • biomedical image processing
  • disease diagnosis
  • biometrics

Published Papers (2 papers)


Research

14 pages, 7117 KiB  
Article
Image Visualization and Classification Using Hydatid Cyst Images with an Explainable Hybrid Model
by Muhammed Yildirim
Appl. Sci. 2023, 13(17), 9926; https://doi.org/10.3390/app13179926 - 2 Sep 2023
Cited by 1 | Viewed by 1513
Abstract
Hydatid cysts are most commonly found in the liver, but they can also occur in other parts of the body such as the lungs, kidneys, bones, and brain. These cysts grow through the division and proliferation of cells over time. They usually grow slowly, and symptoms are initially absent. Symptoms often vary with the size and location of the cyst and the affected organ; common symptoms include abdominal pain, vomiting, nausea, shortness of breath, and foul odor. Early diagnosis and treatment are of great importance, so computer-aided systems can be used to support early diagnosis. It is also very important that these cysts can be interpreted more easily by the specialist and that error is minimized. Therefore, in this study, data visualization was performed using the Grad-CAM and LIME methods for easier interpretation of hydatid cyst images. In addition, feature extraction was performed with the MobileNetV2 architecture using the original, Grad-CAM, and LIME-processed data for the grading of hydatid cyst CT images. The feature maps obtained from these three inputs were combined to increase the performance of the proposed method. The Kruskal method was then used to reduce the size of the combined feature map, from 2416 × 3000 to 2416 × 700. The accuracy of the proposed model in classifying hydatid cyst images is 94%.
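To make the fusion-and-selection pipeline concrete, the following is a minimal sketch (not the author's code) of the approach described in the abstract: MobileNetV2 features are extracted from the original, Grad-CAM, and LIME images, concatenated into a 3000-dimensional vector per sample, and reduced with a Kruskal-Wallis test before a conventional classifier. The 1000-dimensional MobileNetV2 output per stream, the top-700 selection rule, and the SVM classifier are assumptions for illustration.

import numpy as np
from scipy.stats import kruskal
from sklearn.svm import SVC
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input

# MobileNetV2 with its classification head yields a 1000-dim vector per image,
# so three streams concatenate to the 3000-dim feature map reported above.
backbone = MobileNetV2(weights="imagenet", include_top=True)

def extract_features(images):
    # images: array of shape (n_samples, 224, 224, 3)
    return backbone.predict(preprocess_input(images.astype("float32")))

def fuse_and_select(original_imgs, gradcam_imgs, lime_imgs, labels, keep=700):
    # Concatenate the three per-stream feature maps -> (n_samples, 3000).
    fused = np.hstack([extract_features(x)
                       for x in (original_imgs, gradcam_imgs, lime_imgs)])
    # Score every column with a Kruskal-Wallis H-test across the classes and
    # keep the `keep` highest-scoring features (assumed selection rule).
    classes = np.unique(labels)
    scores = np.array([
        kruskal(*[fused[labels == c, j] for c in classes]).statistic
        for j in range(fused.shape[1])
    ])
    selected = np.argsort(scores)[::-1][:keep]
    return fused[:, selected], selected

# Example downstream use on the reduced 700-dim features (classifier assumed):
# X_sel, idx = fuse_and_select(orig, gradcam, lime, y, keep=700)
# clf = SVC().fit(X_sel, y)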

14 pages, 5185 KiB  
Article
Speech Emotion Recognition Based on Two-Stream Deep Learning Model Using Korean Audio Information
by A-Hyeon Jo and Keun-Chang Kwak
Appl. Sci. 2023, 13(4), 2167; https://doi.org/10.3390/app13042167 - 8 Feb 2023
Cited by 9 | Viewed by 3207
Abstract
Identifying a person’s emotions is an important element in communication. In particular, voice is a means of communication for easily and naturally expressing emotions. Speech emotion recognition technology is a crucial component of human–computer interaction (HCI), in which accurately identifying emotions is key. Therefore, this study presents a two-stream emotion recognition model based on bidirectional long short-term memory (Bi-LSTM) and convolutional neural networks (CNNs) using a Korean speech emotion database, and its performance is comparatively analyzed. The data used in the experiment were obtained from the Korean speech emotion recognition database built by Chosun University. Two deep learning models, Bi-LSTM and YAMNet, a CNN-based transfer learning model, were connected in a two-stream architecture to design the emotion recognition model. Various speech feature extraction methods and deep learning models were compared in terms of performance. Consequently, the speech emotion recognition performance of Bi-LSTM and YAMNet was 90.38% and 94.91%, respectively, whereas the two-stream model reached 96%, an improvement of at least 1.09% and up to 5.62% over the single models.
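As a rough illustration of the two-stream idea (not the authors' implementation), the sketch below fuses a Bi-LSTM branch over frame-level acoustic features with a dense branch over utterance-level YAMNet embeddings before a shared softmax layer. The input shapes, layer sizes, and the assumed number of emotion classes are illustrative; the YAMNet embeddings are assumed to be computed offline with the public TensorFlow Hub model and mean-pooled per utterance.

from tensorflow.keras import layers, Model

NUM_CLASSES = 7                # assumed number of emotion classes
FRAME_FEAT_SHAPE = (300, 40)   # (frames, coefficients per frame), assumed
YAMNET_DIM = 1024              # YAMNet embedding size per audio patch

# Stream 1: Bi-LSTM over hand-crafted frame-level speech features (e.g. MFCCs).
frames_in = layers.Input(shape=FRAME_FEAT_SHAPE, name="frame_features")
x1 = layers.Bidirectional(layers.LSTM(128))(frames_in)
x1 = layers.Dense(64, activation="relu")(x1)

# Stream 2: utterance-level YAMNet embedding, mean-pooled over patches and
# fed in as a fixed-length vector.
yamnet_in = layers.Input(shape=(YAMNET_DIM,), name="yamnet_embedding")
x2 = layers.Dense(128, activation="relu")(yamnet_in)
x2 = layers.Dense(64, activation="relu")(x2)

# Late fusion of the two streams, then a softmax over the emotion classes.
fused = layers.Concatenate()([x1, x2])
out = layers.Dense(NUM_CLASSES, activation="softmax")(fused)

model = Model(inputs=[frames_in, yamnet_in], outputs=out)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()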
