Deep Learning and Transfer Learning

A special issue of Signals (ISSN 2624-6120).

Deadline for manuscript submissions: closed (31 January 2023) | Viewed by 11028

Special Issue Editor


Dr. Sheikh Shanawaz Mostafa
Guest Editor
ITI—Interactive Technologies Institute, LARSyS, Laboratory of Robotics and Systems in Engineering and Science, M-ITI, ARDITI, 9000 Funchal, Portugal
Interests: artificial intelligence; biomedical engineering; electronic design; biomedical applications

Special Issue Information

Dear Colleagues,

Supervised learning has achieved outstanding results over the last decade. With the introduction of deep learning models, strong results can be obtained with minimal domain knowledge, and human-level or, in some cases, better-than-human-level accuracy has been reached. However, most deep learning model building relies on vast amounts of labeled data, which is usually expensive to obtain; in some specific circumstances, the nature of the problem makes it difficult to collect a large dataset at all. Deep learning combined with transfer learning can mitigate these problems. This Special Issue on Deep Learning and Transfer Learning aims to present state-of-the-art research on both theoretical issues and applications of deep learning and transfer learning. Papers may emphasize either theory or practical applications of architectures such as multi-layer perceptrons, convolutional neural networks, recurrent neural networks, generative adversarial networks, and deep belief networks.
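To make the core idea concrete, here is a minimal, hypothetical sketch in plain Python (not from any paper in this issue): a "pretrained" feature extractor is kept frozen, and only a small linear head is fit by least squares on the scarce labeled data of the target task. All names and the toy feature map are illustrative assumptions.

```python
def pretrained_features(x):
    """Stand-in for a frozen, pretrained feature extractor.
    Its parameters are NOT updated on the target task."""
    return [1.0, x, x * x]  # bias feature, raw input, a "learned" nonlinearity

def solve(a, b):
    """Solve a small linear system a.w = b by Gaussian elimination with pivoting."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            m[r] = [v - f * p for v, p in zip(m[r], m[col])]
    w = [0.0] * n
    for i in range(n - 1, -1, -1):
        w[i] = (m[i][n] - sum(m[i][j] * w[j] for j in range(i + 1, n))) / m[i][i]
    return w

def fit_head(xs, ys):
    """Least-squares linear head on top of the frozen features:
    only these few weights are trained on the (small) target dataset."""
    feats = [pretrained_features(x) for x in xs]
    n = len(feats[0])
    ata = [[sum(f[i] * f[j] for f in feats) for j in range(n)] for i in range(n)]
    aty = [sum(f[i] * y for f, y in zip(feats, ys)) for i in range(n)]
    return solve(ata, aty)

def predict(w, x):
    return sum(wi * fi for wi, fi in zip(w, pretrained_features(x)))
```

With only five labeled points generated by y = 3x² + 1, the head recovers the target function exactly because the frozen features already contain the right nonlinearity; this data-scarce regime is precisely where transfer learning pays off.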

Dr. Sheikh Shanawaz Mostafa
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Signals is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • novel deep learning methods
  • optimization of deep learning models
  • multi-layer perceptrons
  • convolutional neural networks
  • recurrent neural networks
  • generative adversarial networks
  • deep belief networks
  • computer vision and image processing
  • handwriting analysis
  • medical image analysis
  • medical signal analysis
  • video and image sequence analysis
  • content-based retrieval of image and video
  • face and gesture recognition
  • hardware and/or software implementations
  • literature reviews
  • natural language processing

Published Papers (4 papers)


Research

22 pages, 7396 KiB  
Article
Dialogue Act Classification via Transfer Learning for Automated Labeling of Interviewee Responses in Virtual Reality Job Interview Training Platforms for Autistic Individuals
by Deeksha Adiani, Kelley Colopietro, Joshua Wade, Miroslava Migovich, Timothy J. Vogus and Nilanjan Sarkar
Signals 2023, 4(2), 359-380; https://doi.org/10.3390/signals4020019 - 19 May 2023
Viewed by 1665
Abstract
Computer-based job interview training, including virtual reality (VR) simulations, has gained popularity in recent years to support and aid autistic individuals, who face significant challenges and barriers in finding and maintaining employment. Although popular, these training systems often fail to resemble the complexity and dynamism of the employment interview, as the dialogue management for the virtual conversation agent either relies on choosing from a menu of prespecified answers, or dialogue processing is based on keyword extraction from the transcribed speech of the interviewee, which depends on the interview script. We address this limitation through automated dialogue act classification via transfer learning. This allows for recognizing intent from user speech, independent of the domain of the interview. We also redress the lack of training data for a domain-general job interview dialogue act classifier by providing an original dataset with responses to interview questions within a virtual job interview platform from 22 autistic participants. Participants’ responses to a customized interview script were transcribed to text and annotated according to a custom 13-class dialogue act scheme. The best classifier was a fine-tuned bidirectional encoder representations from transformers (BERT) model, with an F1-score of 87%.
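A side note on the reported metric: for a multi-class dialogue-act scheme such as the 13-class one used here, an F1-score is typically computed per class from precision and recall and then averaged. A minimal illustration in plain Python (the labels below are invented, not the paper's actual scheme):

```python
def macro_f1(gold, pred):
    """Macro-averaged F1: per-class F1 from precision/recall, then a plain mean."""
    classes = sorted(set(gold) | set(pred))
    scores = []
    for c in classes:
        tp = sum(1 for g, p in zip(gold, pred) if g == c and p == c)
        fp = sum(1 for g, p in zip(gold, pred) if g != c and p == c)
        fn = sum(1 for g, p in zip(gold, pred) if g == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores.append(f1)
    return sum(scores) / len(scores)
```

For example, with gold labels ["greet", "question", "answer", "answer"] and predictions ["greet", "answer", "answer", "answer"], the per-class F1 values are 1.0, 0.0, and 0.8, giving a macro F1 of 0.6.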
(This article belongs to the Special Issue Deep Learning and Transfer Learning)

18 pages, 4166 KiB  
Article
Graphical User Interface for the Development of Probabilistic Convolutional Neural Networks
by Aníbal Chaves, Fábio Mendonça, Sheikh Shanawaz Mostafa and Fernando Morgado-Dias
Signals 2023, 4(2), 297-314; https://doi.org/10.3390/signals4020016 - 20 Apr 2023
Viewed by 1255
Abstract
Through the development of artificial intelligence, some capabilities of human beings have been replicated in computers. Among the developed models, convolutional neural networks stand out considerably because they make it possible for systems to have the inherent capabilities of humans, such as pattern recognition in images and signals. However, conventional methods are based on deterministic models, which cannot express the epistemic uncertainty of their predictions. The alternative consists of probabilistic models, although these are considerably more difficult to develop. To address the problems related to the development of probabilistic networks and the choice of network architecture, this article proposes an application that allows the user to choose the desired architecture and obtain a model trained on the given data. This application, named “Graphical User Interface for Probabilistic Neural Networks”, allows the user to develop or to use a standard convolutional neural network for the provided data, with networks already adapted to implement a probabilistic model. Contrary to the existing models for generic use, which are deterministic and already pre-trained on databases to be used in transfer learning, the approach followed in this work creates the network layer by layer, with training performed on the provided data, yielding a specific model for the data in question.
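To illustrate what "expressing epistemic uncertainty" means in practice, here is a generic Monte Carlo dropout sketch in plain Python (not the interface or the networks from the paper): a probabilistic model is sampled repeatedly with dropout left on, and the spread of its predictions serves as an uncertainty estimate, whereas a deterministic network returns a single point value.

```python
import random
import statistics

def stochastic_forward(x, weights, keep_prob=0.5, rng=random):
    """One forward pass of a toy linear model with (inverted) dropout
    left ON at prediction time."""
    total = 0.0
    for w in weights:
        if rng.random() < keep_prob:          # randomly keep each unit...
            total += (w / keep_prob) * x      # ...and rescale to preserve the mean
    return total

def predict_with_uncertainty(x, weights, samples=1000, seed=42):
    """Mean prediction plus an epistemic-uncertainty estimate (std. dev.)."""
    rng = random.Random(seed)
    outputs = [stochastic_forward(x, weights, rng=rng) for _ in range(samples)]
    return statistics.mean(outputs), statistics.stdev(outputs)
```

A deterministic model with weights [1.0, 2.0] would always output 3·x; the probabilistic version returns a value close to that on average, together with a non-zero spread quantifying how unsure the model is about each prediction.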
(This article belongs to the Special Issue Deep Learning and Transfer Learning)

12 pages, 1807 KiB  
Article
Evolving Optimised Convolutional Neural Networks for Lung Cancer Classification
by Maximilian Achim Pfeffer and Sai Ho Ling
Signals 2022, 3(2), 284-295; https://doi.org/10.3390/signals3020018 - 5 May 2022
Cited by 10 | Viewed by 2754
Abstract
Detecting pulmonary nodules early significantly contributes to the treatment success of lung cancer. Several deep learning models for medical image analysis have been developed to help classify pulmonary nodules. The design of convolutional neural network (CNN) architectures, however, is still heavily reliant on human domain knowledge. Manually designing CNN architectures has been shown to limit the data’s utility by creating a co-dependency on the creator’s cognitive bias, which urges the development of smart CNN architecture design solutions. In this paper, an evolutionary algorithm is used to optimise the classification of pulmonary nodules with CNNs. The implementation of a genetic algorithm (GA) for CNN architecture design and hyperparameter optimisation is proposed, which approximates optimal solutions by implementing a range of bio-inspired mechanisms of natural selection and Darwinism. For comparison purposes, two manually designed deep learning models, FractalNet and Deep Local-Global Network, were trained. The results show an outstanding classification accuracy of the fittest GA-CNN (91.3%), which outperformed both manually designed models. The findings indicate that GAs offer advantageous solutions to diagnostic challenges, the development of which may be fully automated in the future using GAs to design and optimise CNN architectures for various clinical applications.
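The GA machinery itself is generic; a toy version in plain Python (illustrating selection, crossover, and mutation on a made-up one-dimensional "hyperparameter", not the paper's CNN search space) looks like this:

```python
import random

def evolve(fitness, low, high, pop_size=20, generations=60, seed=1):
    """Minimal real-valued GA: tournament selection, blend crossover,
    Gaussian mutation, clamped to the search interval [low, high]."""
    rng = random.Random(seed)
    pop = [rng.uniform(low, high) for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament selection: the fitter of two random individuals
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) > fitness(b) else b
        children = []
        for _ in range(pop_size):
            mum, dad = pick(), pick()
            child = 0.5 * (mum + dad)                     # blend crossover
            child += rng.gauss(0.0, 0.05 * (high - low))  # Gaussian mutation
            children.append(min(max(child, low), high))
        pop = children
    return max(pop, key=fitness)

# A stand-in "validation accuracy" peaking at hyperparameter value 3.0;
# in the paper's setting, fitness would be the CNN's classification accuracy.
def toy_fitness(x):
    return -(x - 3.0) ** 2
```

Over successive generations, selection concentrates the population near the fitness peak, which is exactly how a GA steers CNN architecture and hyperparameter choices toward higher validation accuracy.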
(This article belongs to the Special Issue Deep Learning and Transfer Learning)

18 pages, 5085 KiB  
Article
Wearable Device for Observation of Physical Activity with the Purpose of Patient Monitoring Due to COVID-19
by Angelos-Christos Daskalos, Panayiotis Theodoropoulos, Christos Spandonidis and Nick Vordos
Signals 2022, 3(1), 11-28; https://doi.org/10.3390/signals3010002 - 6 Jan 2022
Cited by 19 | Viewed by 3938
Abstract
In late 2019, a novel coronavirus (COVID-19) was first identified in humans in Wuhan, China. Because COVID-19 spreads through droplets, quarantine is necessary to halt the spread and to recover physically. This modern urgency creates a critical challenge for the latest technologies to detect and monitor potential patients of this new disease, and the Internet of Things (IoT) contributes to solving such problems. This paper proposes a wearable device that uses real-time monitoring to detect body temperature and ambient conditions. The system automatically alerts the person concerned when the body exceeds the allowed temperature threshold. To achieve this, we developed an accelerometer-based algorithm named the “Continuous Displacement Algorithm” that detects physical exercise, to see whether a potential temperature rise can be attributed to physical activity. The people responsible for the person in quarantine can then connect via nRF Connect or a similar central application to acquire an accurate picture of the person’s condition. The experiment used an Arduino Nano 33 BLE Sense, which contains a 9-axis IMU along with several temperature, ambient, and other sensors. The device successfully measured wrist temperature in all states, ranging from 32 °C to 39 °C, provided better battery autonomy than other similar devices (lasting over 12 h, with 500 mA fast charging), and used the BLE 5.0 protocol for low-power wireless data transmission. Furthermore, a 1D convolutional neural network (CNN) was employed to classify whether the user is feverish while considering the physical activity status. The results obtained from the 1D CNN illustrate how it can be leveraged to acquire insight into the health of users in the setting of the COVID-19 pandemic.
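The classification step rests on 1D convolution over the device's time series; the operation itself (shown here in plain Python on a made-up signal, independent of the authors' actual network) is just a sliding dot product followed by a nonlinearity:

```python
def conv1d(signal, kernel, bias=0.0):
    """Valid-mode 1D convolution (cross-correlation, as in CNN layers) + ReLU."""
    k = len(kernel)
    out = []
    for i in range(len(signal) - k + 1):
        z = sum(signal[i + j] * kernel[j] for j in range(k)) + bias
        out.append(max(0.0, z))  # ReLU activation
    return out

def global_max_pool(features):
    """Collapse the time axis so a classifier head sees a fixed-size value."""
    return max(features)
```

A kernel like [-1, 1] responds to upward jumps in the signal; on [0, 0, 1, 1, 0] the convolution yields [0, 1, 0, 0] after ReLU, and global max pooling reduces it to 1 regardless of where the jump occurred, which is what lets a 1D CNN flag fever-like patterns at any position in the recording.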
(This article belongs to the Special Issue Deep Learning and Transfer Learning)
