
Emotion Recognition Based on Sensors (Volume II)

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 July 2023) | Viewed by 10695

Special Issue Editors


Dr. Mariusz Szwoch
Guest Editor
Electronics, Telecommunications and Informatics Faculty, Gdansk University of Technology, Gdansk, Poland
Interests: affective computing (multimodal emotion recognition; development of affect-aware video games; image processing; computer vision; artificial intelligence; 3D graphics)

Dr. Agata Kołakowska
Guest Editor
Electronics, Telecommunications and Informatics Faculty, Gdansk University of Technology, Gdansk, Poland
Interests: machine learning; feature selection and extraction; behavioral biometrics; emotion recognition

Special Issue Information

Dear Colleagues,

Affective computing is an emerging field of computer science that plays, and will continue to play, an increasingly important role in human–computer interaction. Recognition of user emotions is a fundamental and vital element of any affective and affect-aware system. In recent years, many approaches to emotion recognition have been proposed and developed, using different input devices and channels as well as different reasoning algorithms. Various sensors, connected to or embedded in computer devices, smartphones, and training devices for fitness, health, and everyday use, play a special role in providing input data for such systems. These include cameras, microphones, depth sensors, and biometric sensors, among others.

This Special Issue of the journal Sensors is focused on emotion recognition methods based on such sensory data. We are inviting original research articles covering novel theories, innovative machine learning methods, and meaningful applications that can potentially lead to significant advances in this field. The goal is to collect a diverse set of articles on emotion recognition that span across a wide range of sensors, data modalities, their fusion, and classification.

Dr. Mariusz Szwoch
Dr. Agata Kołakowska
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and using the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • emotion recognition
  • affective computing
  • sensory data processing
  • sensors
  • human–computer interaction
  • biomedical signal processing

Published Papers (5 papers)


Research


22 pages, 520 KiB  
Article
Physiological Signals and Affect as Predictors of Advertising Engagement
by Gregor Strle, Andrej Košir and Urban Burnik
Sensors 2023, 23(15), 6916; https://doi.org/10.3390/s23156916 - 03 Aug 2023
Viewed by 984
Abstract
This study investigated the use of affect and physiological signals of heart rate, electrodermal activity, pupil dilation, and skin temperature to classify advertising engagement. The ground truth for the affective and behavioral aspects of ad engagement was collected from 53 young adults using the User Engagement Scale. Three gradient-boosting classifiers, LightGBM (LGBM), HistGradientBoostingClassifier (HGBC), and XGBoost (XGB), were used along with signal fusion to evaluate the performance of different signal combinations as predictors of engagement. The classifiers trained on the fusion of skin temperature, valence, and tiredness (features n = 5) performed better than those trained on all signals (features n = 30). The average AUC ROC scores for the fusion set were XGB = 0.68 (0.10), LGBM = 0.69 (0.07), and HGBC = 0.70 (0.11), compared to the lower scores for the set of all signals (XGB = 0.65 (0.11), LGBM = 0.66 (0.11), HGBC = 0.64 (0.10)). The results also show that the signal fusion set based on skin temperature outperforms the fusion sets of the other three signals. The main finding of this study is the role of specific physiological signals and how their fusion aids in more effective modeling of ad engagement while reducing the number of features.
(This article belongs to the Special Issue Emotion Recognition Based on Sensors (Volume II))
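The AUC ROC scores reported above can be read as the probability that a randomly chosen engaged sample is ranked above a randomly chosen non-engaged one. A minimal illustrative sketch of the metric from its pairwise definition (not the authors' code; the labels and scores below are invented):

```python
def auc_roc(labels, scores):
    """AUC ROC via its pairwise definition: the probability that a
    randomly chosen positive example outscores a randomly chosen
    negative one, with ties counted as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical engagement labels and classifier scores
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.7, 0.4, 0.5, 0.3, 0.1]
print(auc_roc(labels, scores))
```

Library implementations (e.g., scikit-learn's `roc_auc_score`) compute the same quantity from the ROC curve.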

15 pages, 8007 KiB  
Article
Software Usability Testing Using EEG-Based Emotion Detection and Deep Learning
by Sofien Gannouni, Kais Belwafi, Arwa Aledaily, Hatim Aboalsamh and Abdelfettah Belghith
Sensors 2023, 23(11), 5147; https://doi.org/10.3390/s23115147 - 28 May 2023
Cited by 1 | Viewed by 1454
Abstract
It is becoming increasingly attractive to detect human emotions using electroencephalography (EEG) brain signals. EEG is a reliable and cost-effective technology used to measure brain activities. This paper proposes an original framework for usability testing based on emotion detection using EEG signals, which can significantly affect software production and user satisfaction. This approach can provide an in-depth understanding of user satisfaction accurately and precisely, making it a valuable tool in software development. The proposed framework includes a recurrent neural network algorithm as a classifier, a feature extraction algorithm based on event-related desynchronization and event-related synchronization analysis, and a new method for selecting EEG sources adaptively for emotion recognition. The framework results are promising, achieving 92.13%, 92.67%, and 92.24% for the valence–arousal–dominance dimensions, respectively.
(This article belongs to the Special Issue Emotion Recognition Based on Sensors (Volume II))
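The event-related desynchronization/synchronization (ERD/ERS) analysis this framework builds on quantifies band-power change during an event relative to a reference interval: negative values indicate desynchronization (a power drop), positive values synchronization. A minimal sketch of the classical percentage formula (not the paper's implementation; the sample values are invented):

```python
def band_power(samples):
    # Mean squared amplitude as a simple band-power estimate
    # (assumes the signal has already been band-pass filtered)
    return sum(x * x for x in samples) / len(samples)

def erd_ers_percent(reference_power, event_power):
    # Classical ERD/ERS%: band-power change relative to a
    # pre-stimulus reference interval; ERD < 0, ERS > 0
    return (event_power - reference_power) / reference_power * 100.0

# Invented pre-stimulus (reference) and post-stimulus (event) segments
ref = band_power([2.0, -2.0, 2.0, -2.0])
evt = band_power([1.0, -1.0, 1.0, -1.0])
print(erd_ers_percent(ref, evt))
```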

23 pages, 2756 KiB  
Article
Online Learning for Wearable EEG-Based Emotion Classification
by Sidratul Moontaha, Franziska Elisabeth Friederike Schumann and Bert Arnrich
Sensors 2023, 23(5), 2387; https://doi.org/10.3390/s23052387 - 21 Feb 2023
Cited by 6 | Viewed by 3278
Abstract
Giving emotional intelligence to machines can facilitate the early detection and prediction of mental diseases and symptoms. Electroencephalography (EEG)-based emotion recognition is widely applied because it measures electrical correlates directly from the brain rather than indirect measurement of other physiological responses initiated by the brain. Therefore, we used non-invasive and portable EEG sensors to develop a real-time emotion classification pipeline. The pipeline trains different binary classifiers for Valence and Arousal dimensions from an incoming EEG data stream, achieving a 23.9% (Arousal) and 25.8% (Valence) higher F1-Score on the state-of-the-art AMIGOS dataset than previous work. Afterward, the pipeline was applied to a curated dataset from 15 participants using two consumer-grade EEG devices while they watched 16 short emotional videos in a controlled environment. Mean F1-Scores of 87% (Arousal) and 82% (Valence) were achieved for an immediate label setting. Additionally, the pipeline proved fast enough to deliver predictions in real time in a live scenario with delayed labels while being continuously updated. The significant discrepancy in classification scores between the immediate and delayed label settings motivates future work that includes more data. Thereafter, the pipeline is ready to be used for real-time applications of emotion classification.
(This article belongs to the Special Issue Emotion Recognition Based on Sensors (Volume II))
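The F1-Scores reported above are the harmonic mean of precision and recall on the binary Valence/Arousal labels. A self-contained sketch of the metric (illustrative predictions only, not the study's data):

```python
def f1_score(y_true, y_pred):
    # Binary F1: harmonic mean of precision and recall
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical high/low arousal labels vs. classifier predictions
print(f1_score([1, 1, 1, 0, 0], [1, 1, 0, 0, 1]))
```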

13 pages, 532 KiB  
Article
Self-Relation Attention and Temporal Awareness for Emotion Recognition via Vocal Burst
by Dang-Linh Trinh, Minh-Cong Vo, Soo-Hyung Kim, Hyung-Jeong Yang and Guee-Sang Lee
Sensors 2023, 23(1), 200; https://doi.org/10.3390/s23010200 - 24 Dec 2022
Cited by 2 | Viewed by 1499
Abstract
Speech emotion recognition (SER) is one of the most exciting topics many researchers have recently been involved in. Although much research has been conducted recently on this topic, emotion recognition via non-verbal speech (known as the vocal burst) is still sparse. The vocal burst is concise and has meaningless content, which is harder to deal with than verbal speech. Therefore, in this paper, we proposed a self-relation attention and temporal awareness (SRA-TA) module to tackle this problem with vocal bursts, which could capture the dependency in a long-term period and focus on the salient parts of the audio signal as well. Our proposed method contains three main stages. Firstly, the latent features are extracted using a self-supervised learning model from the raw audio signal and its Mel-spectrogram. After the SRA-TA module is utilized to capture the valuable information from latent features, all features are concatenated and fed into ten individual fully-connected layers to predict the scores of 10 emotions. Our proposed method achieves a mean concordance correlation coefficient (CCC) of 0.7295 on the test set, which achieves the first ranking of the high-dimensional emotion task in the 2022 ACII Affective Vocal Burst Workshop & Challenge.
(This article belongs to the Special Issue Emotion Recognition Based on Sensors (Volume II))
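The concordance correlation coefficient (CCC) used to rank the challenge combines Pearson correlation with penalties for scale and location differences between predicted and gold emotion scores. A minimal sketch of Lin's formula (toy values, not challenge data):

```python
def ccc(x, y):
    # Lin's concordance correlation coefficient:
    # 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Perfect agreement yields 1.0; any bias or rescaling lowers the score
print(ccc([0.1, 0.5, 0.9], [0.1, 0.5, 0.9]))
```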

Other


11 pages, 1052 KiB  
Brief Report
Multi-Input Speech Emotion Recognition Model Using Mel Spectrogram and GeMAPS
by Itsuki Toyoshima, Yoshifumi Okada, Momoko Ishimaru, Ryunosuke Uchiyama and Mayu Tada
Sensors 2023, 23(3), 1743; https://doi.org/10.3390/s23031743 - 03 Feb 2023
Cited by 3 | Viewed by 2617
Abstract
The existing research on emotion recognition commonly uses mel spectrogram (MelSpec) and Geneva minimalistic acoustic parameter set (GeMAPS) as acoustic parameters to learn the audio features. MelSpec can represent the time-series variations of each frequency but cannot manage multiple types of audio features. On the other hand, GeMAPS can handle multiple audio features but fails to provide information on their time-series variations. Thus, this study proposes a speech emotion recognition model based on a multi-input deep neural network that simultaneously learns these two audio features. The proposed model comprises three parts, specifically, for learning MelSpec in image format, learning GeMAPS in vector format, and integrating them to predict the emotion. Additionally, a focal loss function is introduced to address the imbalanced data problem among the emotion classes. The results of the recognition experiments demonstrate weighted and unweighted accuracies of 0.6657 and 0.6149, respectively, which are higher than or comparable to those of the existing state-of-the-art methods. Overall, the proposed model significantly improves the recognition accuracy of the emotion “happiness”, which has been difficult to identify in previous studies owing to limited data. Therefore, the proposed model can effectively recognize emotions from speech and can be applied for practical purposes with future development.
(This article belongs to the Special Issue Emotion Recognition Based on Sensors (Volume II))
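The focal loss mentioned above down-weights well-classified examples so that training concentrates on hard, under-represented emotion classes. A hedged sketch of the binary form (the gamma value is the common default from the original focal-loss paper, not necessarily the one used here):

```python
import math

def focal_loss(p, y, gamma=2.0):
    # Binary focal loss: -(1 - p_t)^gamma * log(p_t), where p_t is
    # the probability assigned to the true class y; gamma = 0
    # recovers plain cross-entropy
    p_t = p if y == 1 else 1.0 - p
    return -((1.0 - p_t) ** gamma) * math.log(p_t)

# An easy example (p = 0.9) contributes far less than a hard one (p = 0.6)
print(focal_loss(0.9, 1), focal_loss(0.6, 1))
```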
