Emotion Recognition Based on Sensors

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (29 July 2022) | Viewed by 39047

Special Issue Editors

Electronics, Telecommunications and Informatics Faculty, Gdansk University of Technology, Gdańsk, Poland
Interests: affective computing (multimodal emotion recognition; development of affect-aware video games; image processing; computer vision; artificial intelligence; 3D graphics)
Electronics, Telecommunications and Informatics Faculty, Gdansk University of Technology, Gdańsk, Poland
Interests: machine learning; feature selection and extraction; behavioral biometrics; emotion recognition
Instituto de Investigación e Innovación en Bioingeniería (i3B), Universitat Politècnica de València, 46022 Valencia, Spain
Interests: virtual reality; virtual rehabilitation; consumer neuroscience; organizational neuroscience; emotion recognition; algorithms; eye tracking; machine learning; behavioural data

Special Issue Information

Dear Colleagues,

Affective computing is an emerging field of computer science that plays, and will continue to play, an increasing role in human–computer interaction. Recognition of user emotions is a fundamental element of any affective and affect-aware system. In recent years, many approaches to emotion recognition have been proposed and developed, using different input devices and channels as well as different reasoning algorithms. Various sensors, connected to or embedded in computers, smartphones, training and fitness equipment, health monitors, and everyday devices, play a special role in providing input data for such systems. They include, among others, cameras, microphones, depth sensors, and biometric sensors.

This Special Issue of the journal Sensors focuses on emotion recognition methods based on such sensory data. We invite original research covering novel theories, innovative machine learning methods, and meaningful applications that can potentially lead to significant advances in this field. The goal is to collect a diverse set of articles on emotion recognition that span a wide range of sensors, data modalities, their fusion, and classification.

Dr. Mariusz Szwoch
Dr. Agata Kołakowska
Prof. Dr. Mariano Alcañiz Raya
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

We are inviting the submission of original and unpublished work addressing several research topics of interest, including but not limited to the following issues:

  • Emotion recognition
  • Affective computing
  • Sensory data processing
  • Sensors
  • Human–computer interaction
  • Biomedical signal processing

Published Papers (11 papers)

Research

18 pages, 568 KiB  
Article
Cross-Language Speech Emotion Recognition Using Bag-of-Word Representations, Domain Adaptation, and Data Augmentation
by Shruti Kshirsagar and Tiago H. Falk
Sensors 2022, 22(17), 6445; https://doi.org/10.3390/s22176445 - 26 Aug 2022
Cited by 5 | Viewed by 1764
Abstract
To date, several methods have been explored for the challenging task of cross-language speech emotion recognition, including the bag-of-words (BoW) methodology for feature processing, domain adaptation for feature distribution “normalization”, and data augmentation to make machine learning algorithms more robust across testing conditions. Their combined use, however, has yet to be explored. In this paper, we aim to fill this gap and compare the benefits achieved by combining different domain adaptation strategies with the BoW method, as well as with data augmentation. Moreover, while domain adaptation strategies, such as the correlation alignment (CORAL) method, require knowledge of the test data language, we propose a variant that we term N-CORAL, in which test languages (in our case, Chinese) are mapped to a common distribution in an unsupervised manner. Experiments with German, French, and Hungarian language datasets were performed, and the proposed N-CORAL method, combined with BoW and data augmentation, was shown to achieve the best arousal and valence prediction accuracy, highlighting the usefulness of the proposed method for “in the wild” speech emotion recognition. In fact, N-CORAL combined with BoW was shown to provide robustness across languages, whereas data augmentation provided additional robustness against cross-corpus nuance factors.
(This article belongs to the Special Issue Emotion Recognition Based on Sensors)
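
For readers unfamiliar with correlation alignment, the sketch below illustrates the classic CORAL step that the paper's N-CORAL variant builds on: source features are whitened and then re-coloured with the target covariance. It is a minimal illustration on synthetic data, not the authors' implementation.

```python
# Minimal sketch of classic CORAL feature alignment (the building block
# behind the paper's N-CORAL variant), on synthetic data.
import numpy as np
from scipy.linalg import fractional_matrix_power

def coral(Xs: np.ndarray, Xt: np.ndarray) -> np.ndarray:
    """Re-colour source features so their covariance matches the target's."""
    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False) + np.eye(d)   # regularised source covariance
    Ct = np.cov(Xt, rowvar=False) + np.eye(d)   # regularised target covariance
    Xs_white = Xs @ fractional_matrix_power(Cs, -0.5)       # whiten source
    return np.real(Xs_white @ fractional_matrix_power(Ct, 0.5))  # re-colour

# Example: align source-language features to an unlabelled test corpus.
rng = np.random.default_rng(0)
Xs, Xt = rng.normal(size=(200, 16)), rng.normal(size=(150, 16)) * 2.0
Xs_aligned = coral(Xs, Xt)
```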

15 pages, 2561 KiB  
Article
Facial Emotion Recognition in Verbal Communication Based on Deep Learning
by Mohammed F. Alsharekh
Sensors 2022, 22(16), 6105; https://doi.org/10.3390/s22166105 - 16 Aug 2022
Cited by 10 | Viewed by 2893
Abstract
Facial emotion recognition from facial images is considered a challenging task due to the unpredictable nature of human facial expressions. The current literature on emotion classification has achieved high performance with deep learning (DL)-based models. However, performance degradation occurs in these models due to the poor selection of layers in the convolutional neural network (CNN) model. To address this issue, we propose an efficient DL technique using a CNN model to classify emotions from facial images. The proposed algorithm is an improved network architecture of its kind, developed to process aggregated expressions produced by the Viola–Jones (VJ) face detector. The internal architecture of the proposed model was finalised after performing a set of experiments to determine the optimal model. The results of this work were assessed through both subjective and objective performance measures. An analysis of the results presented herein establishes the reliability of each type of emotion, along with its intensity and classification. The proposed model is benchmarked against state-of-the-art techniques and evaluated on the FER-2013, CK+, and KDEF datasets. The utility of these findings lies in their application by law-enforcement bodies in smart cities.
(This article belongs to the Special Issue Emotion Recognition Based on Sensors)
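
As a rough illustration of the pipeline the abstract describes, the hedged sketch below chains an OpenCV Viola–Jones face detector with a small CNN over the seven FER-2013 emotion classes; the layer configuration is illustrative, not the architecture finalised in the paper.

```python
# Hedged sketch: Viola-Jones face detection feeding a small CNN emotion
# classifier (7 FER-2013 classes, 48x48 grayscale crops).
import cv2
import torch
import torch.nn as nn

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

class EmotionCNN(nn.Module):
    def __init__(self, n_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)) # 12 -> 6
        self.classifier = nn.Linear(128 * 6 * 6, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def classify_faces(frame_bgr, model: EmotionCNN):
    """Yield (face box, predicted class) for each detected face."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        crop = cv2.resize(gray[y:y + h, x:x + w], (48, 48))
        logits = model(torch.from_numpy(crop).float().div(255)[None, None])
        yield (x, y, w, h), logits.argmax(1).item()
```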

14 pages, 103905 KiB  
Article
Single Image Video Prediction with Auto-Regressive GANs
by Jiahui Huang, Yew Ken Chia, Samson Yu, Kevin Yee, Dennis Küster, Eva G. Krumhuber, Dorien Herremans and Gemma Roig
Sensors 2022, 22(9), 3533; https://doi.org/10.3390/s22093533 - 06 May 2022
Cited by 2 | Viewed by 2400
Abstract
In this paper, we introduce an approach for future frame prediction based on a single input image. Our method is able to generate an entire video sequence based on the information contained in the input frame. We adopt an autoregressive approach in our generation process, i.e., the output from each time step is fed as the input to the next step. Unlike other video prediction methods that use “one shot” generation, our method is able to preserve much more detail from the input image, while also capturing the critical pixel-level changes between the frames. We overcome the problem of generation quality degradation by introducing a “complementary mask” module in our architecture, and we show that this allows the model to focus only on the generation of the pixels that need to be changed, and to reuse those that should remain static from the previous frame. We empirically validate our method against various video prediction models on the UT Dallas Dataset and show that our approach is able to generate high-quality, realistic video sequences from one static input image. In addition, we validate the robustness of our method by testing a pre-trained model on the unseen ADFES facial expression dataset. We also provide qualitative results of our model tested on a human action dataset: the Weizmann Action database.
(This article belongs to the Special Issue Emotion Recognition Based on Sensors)
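
The core of the complementary-mask idea can be sketched in a few lines: the generator proposes a candidate frame together with a soft mask, and only masked pixels are updated while the rest are copied from the previous frame. The generator below is a deliberately tiny placeholder, not the paper's GAN.

```python
# Hedged sketch of the autoregressive rollout with a "complementary mask":
# masked pixels come from the generator, the rest are reused as-is.
import torch
import torch.nn as nn

class MaskedFrameGenerator(nn.Module):
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Conv2d(channels, channels + 1, 3, padding=1)  # frame + mask

    def forward(self, prev_frame: torch.Tensor) -> torch.Tensor:
        out = self.net(prev_frame)
        candidate = torch.tanh(out[:, :-1])    # proposed next frame
        mask = torch.sigmoid(out[:, -1:])      # which pixels to change
        return mask * candidate + (1 - mask) * prev_frame  # complementary blend

def rollout(gen: MaskedFrameGenerator, first_frame: torch.Tensor, steps: int):
    frames, frame = [first_frame], first_frame
    for _ in range(steps):                     # feed each output back in
        frame = gen(frame)
        frames.append(frame)
    return torch.stack(frames, dim=1)          # (batch, time, C, H, W)

video = rollout(MaskedFrameGenerator(), torch.rand(1, 3, 64, 64), steps=16)
```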

25 pages, 771 KiB  
Article
Emotion Recognition from Physiological Channels Using Graph Neural Network
by Tomasz Wierciński, Mateusz Rock, Robert Zwierzycki, Teresa Zawadzka and Michał Zawadzki
Sensors 2022, 22(8), 2980; https://doi.org/10.3390/s22082980 - 13 Apr 2022
Cited by 7 | Viewed by 3392
Abstract
In recent years, a number of new research papers have emerged on the application of neural networks in affective computing. One of the newest trends observed is the utilization of graph neural networks (GNNs) to recognize emotions. The study presented in this paper follows this trend. Within the work, GraphSleepNet (a GNN for classifying the stages of sleep) was adjusted for emotion recognition and validated for this purpose. The key assumption of the validation was to analyze its correctness for the Circumplex model and to further analyze the solution for emotion recognition in the Ekman model. The novelty of this research lies not only in the utilization of a GNN with the GraphSleepNet architecture for emotion recognition, but also in the analysis of the potential of emotion recognition based on differential entropy features in the Ekman model with a neutral state, with a special focus on continuous emotion recognition during the performance of an activity. The GNN was validated against the AMIGOS dataset. The research shows how the use of various modalities influences the correctness of the recognition of basic emotions and the neutral state. Moreover, the correctness of the recognition of basic emotions is validated for two configurations of the GNN. The results show numerous interesting observations for Ekman’s model, while the accuracy for the Circumplex model is similar to that of the baseline methods.
(This article belongs to the Special Issue Emotion Recognition Based on Sensors)
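
The differential entropy (DE) features mentioned in the abstract have a closed form under a Gaussian assumption, DE = ½ ln(2πeσ²) per channel and frequency band. A minimal sketch follows; the band edges are the usual EEG conventions, an assumption rather than the paper's exact settings.

```python
# Sketch of per-band differential-entropy (DE) features for EEG:
# band-pass filter each channel, then DE = 0.5 * ln(2*pi*e*variance).
import numpy as np
from scipy.signal import butter, filtfilt

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def de_features(eeg: np.ndarray, fs: float) -> np.ndarray:
    """eeg: (n_channels, n_samples) -> DE matrix (n_channels, n_bands)."""
    feats = np.empty((eeg.shape[0], len(BANDS)))
    for j, (lo, hi) in enumerate(BANDS.values()):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        filtered = filtfilt(b, a, eeg, axis=1)   # zero-phase band-pass
        feats[:, j] = 0.5 * np.log(2 * np.pi * np.e * filtered.var(axis=1))
    return feats
```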

17 pages, 815 KiB  
Article
The Emotion Probe: On the Universality of Cross-Linguistic and Cross-Gender Speech Emotion Recognition via Machine Learning
by Giovanni Costantini, Emilia Parada-Cabaleiro, Daniele Casali and Valerio Cesarini
Sensors 2022, 22(7), 2461; https://doi.org/10.3390/s22072461 - 23 Mar 2022
Cited by 20 | Viewed by 2562
Abstract
Machine Learning (ML) algorithms within a human–computer framework are the leading force in speech emotion recognition (SER). However, few studies explore cross-corpora aspects of SER; this work aims to explore the feasibility and characteristics of cross-linguistic, cross-gender SER. Three ML classifiers (SVM, Naïve Bayes and MLP) are applied to acoustic features obtained through a procedure based on Kononenko’s discretization and correlation-based feature selection. The system encompasses five emotions (disgust, fear, happiness, anger and sadness), using the Emofilm database, composed of short clips of English movies and the respective Italian and Spanish dubbed versions, for a total of 1115 annotated utterances. The results show MLP to be the most effective classifier, with accuracies higher than 90% for single-language approaches, while the cross-language classifier still yields accuracies higher than 80%. The results show cross-gender tasks to be more difficult than those involving two languages, suggesting greater differences between emotions expressed by male versus female subjects than between different languages. Four feature domains, namely RASTA, F0, MFCC and spectral energy, are algorithmically assessed as the most effective, refining existing literature and approaches based on standard sets. To our knowledge, this is one of the first studies encompassing cross-gender and cross-linguistic assessments of SER.
(This article belongs to the Special Issue Emotion Recognition Based on Sensors)
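
A hedged sketch of the cross-corpus evaluation loop is given below: train each of the three classifiers on one language's features and score on another. Since scikit-learn ships neither Kononenko's discretization nor CFS, a mutual-information ranking stands in for the paper's feature-selection step.

```python
# Sketch: cross-language train/test with the three classifier families the
# abstract names; mutual information approximates the CFS selection step.
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def cross_language_scores(X_train, y_train, X_test, y_test, k: int = 40):
    """Train on one language's features, report accuracy on another's."""
    models = {
        "SVM": SVC(kernel="rbf"),
        "NaiveBayes": GaussianNB(),
        "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500),
    }
    scores = {}
    for name, clf in models.items():
        pipe = make_pipeline(
            StandardScaler(),
            SelectKBest(mutual_info_classif, k=k),  # stand-in for CFS
            clf)
        scores[name] = pipe.fit(X_train, y_train).score(X_test, y_test)
    return scores
```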

19 pages, 12963 KiB  
Article
A Wearable Head Mounted Display Bio-Signals Pad System for Emotion Recognition
by Chunting Wan, Dongyi Chen, Zhiqi Huang and Xi Luo
Sensors 2022, 22(1), 142; https://doi.org/10.3390/s22010142 - 26 Dec 2021
Cited by 7 | Viewed by 4412
Abstract
Multimodal bio-signal acquisition based on wearable devices, using virtual reality (VR) as a stimulus source, is a promising technique in the emotion recognition research field. Numerous studies have shown that emotional states can be better evoked through Immersive Virtual Environments (IVE). The main goal of this paper is to provide researchers with a system for emotion recognition in VR environments. In this paper, we present a wearable forehead bio-signal acquisition pad which is attached to Head-Mounted Displays (HMD), termed HMD Bio Pad. This system can simultaneously record emotion-related two-channel electroencephalography (EEG), one-channel electrodermal activity (EDA), photoplethysmography (PPG) and skin temperature (SKT) signals. In addition, we developed a human–computer interaction (HCI) interface with which researchers can carry out emotion recognition research using a VR HMD as the stimulus presentation device. To evaluate the performance of the proposed system, we conducted separate experiments to validate the quality of each of the multimodal bio-signals. To validate the EEG signal, we assessed performance on an eyes-blink task and an eyes-open/eyes-closed task. The eyes-blink task indicates that the proposed system can achieve EEG signal quality comparable to that of a dedicated bio-signal measuring device. The eyes-open/eyes-closed task proves that the proposed system can efficiently record the alpha rhythm. We then used the signal-to-noise ratio (SNR) and the skin conductance response (SCR) signal to validate the performance of the EDA acquisition system. A filtered EDA signal, with a high mean SNR of 28.52 dB, is plotted on the HCI interface. Moreover, the SCR signal related to the stimulus response can be correctly extracted from the EDA signal. The SKT acquisition system was validated by a temperature-change experiment in which subjects experienced unpleasant emotions. The pulse rate (PR) estimated from the PPG signal achieved a low mean average absolute error (AAE) of 1.12 beats per minute (BPM) over 8 recordings. In summary, the proposed HMD Bio Pad offers a portable, comfortable and easy-to-wear device for recording bio-signals. The proposed system could contribute to emotion recognition research in VR environments.
(This article belongs to the Special Issue Emotion Recognition Based on Sensors)
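
The kind of SNR figure quoted for the EDA channel can be computed in principle as below: low-pass filter the raw signal (EDA is dominated by very low frequencies), treat the residual as noise, and report the power ratio in decibels. The 1 Hz cut-off is our assumption, not the paper's setting.

```python
# Sketch: SNR in dB for an EDA recording, with the low-pass residual
# taken as the noise estimate.
import numpy as np
from scipy.signal import butter, filtfilt

def eda_snr_db(raw: np.ndarray, fs: float, cutoff_hz: float = 1.0) -> float:
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    clean = filtfilt(b, a, raw)          # slow EDA component
    noise = raw - clean                  # everything above the cut-off
    return 10 * np.log10(np.mean(clean ** 2) / np.mean(noise ** 2))
```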

22 pages, 699 KiB  
Article
Keystroke Dynamics Patterns While Writing Positive and Negative Opinions
by Agata Kołakowska and Agnieszka Landowska
Sensors 2021, 21(17), 5963; https://doi.org/10.3390/s21175963 - 06 Sep 2021
Cited by 3 | Viewed by 2671
Abstract
This paper deals with the analysis of behavioural patterns in human–computer interaction. In the study, keystroke dynamics were analysed while participants were writing positive and negative opinions. A semi-experiment with 50 participants was performed. The participants were asked to recall their most negative and most positive learning experiences (subject and teacher) and write an opinion about them. Keystroke dynamics were captured, and over 50 diverse features were calculated and checked for their ability to differentiate positive and negative opinions. Moreover, classification of opinions was performed, providing accuracy slightly above the random guess level. A second classification approach used self-reported labels of pleasure and arousal and showed more accurate results. The study confirmed that it is possible to recognize positive and negative opinions from keystroke patterns with accuracy above the random guess level; however, combination with other modalities might produce more accurate results.
(This article belongs to the Special Issue Emotion Recognition Based on Sensors)
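
Two staple keystroke-dynamics features used in studies of this kind, dwell time and flight time, can be computed from a key-event log as in the sketch below; the event format is a simplifying assumption.

```python
# Sketch: dwell time (press -> release of one key) and flight time
# (release -> next press) from a time-ordered key-event log.
from statistics import mean

def keystroke_features(events):
    """events: list of (timestamp_s, key, 'down'|'up'), time-ordered."""
    down_at, dwells, flights, last_up = {}, [], [], None
    for t, key, kind in events:
        if kind == "down":
            if last_up is not None:
                flights.append(t - last_up)      # previous release -> press
            down_at[key] = t
        elif key in down_at:
            dwells.append(t - down_at.pop(key))  # press -> release
            last_up = t
    return {"mean_dwell": mean(dwells), "mean_flight": mean(flights)}

demo = [(0.00, "h", "down"), (0.09, "h", "up"),
        (0.21, "i", "down"), (0.30, "i", "up")]
print(keystroke_features(demo))  # ~ {'mean_dwell': 0.09, 'mean_flight': 0.12}
```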

18 pages, 5965 KiB  
Article
Multi-Modal Residual Perceptron Network for Audio–Video Emotion Recognition
by Xin Chang and Władysław Skarbek
Sensors 2021, 21(16), 5452; https://doi.org/10.3390/s21165452 - 12 Aug 2021
Cited by 11 | Viewed by 3426
Abstract
Emotion recognition is an important research field for human–computer interaction. Audio–video emotion recognition is now tackled with deep neural network modeling tools. In published papers, as a rule, the authors show only cases where multi-modality is superior to audio-only or video-only modalities. However, cases where a single modality is superior can also be found. In our research, we hypothesize that, for fuzzy categories of emotional events, the within-modal and inter-modal noisy information represented indirectly in the parameters of the modeling neural network impedes better performance in the existing late-fusion and end-to-end multi-modal network training strategies. To take advantage of and overcome the deficiencies of both solutions, we define a multi-modal residual perceptron network which performs end-to-end learning from multi-modal network branches and generalizes multi-modal feature representations better. With the proposed multi-modal residual perceptron network and a novel time augmentation for streaming digital movies, the state-of-the-art average recognition rate was improved to 91.4% for the Ryerson Audio–Visual Database of Emotional Speech and Song dataset and to 83.15% for the Crowd-Sourced Emotional Multimodal Actors dataset. Moreover, the multi-modal residual perceptron network concept shows its potential for multi-modal applications dealing with signal sources not only of optical and acoustical types.
(This article belongs to the Special Issue Emotion Recognition Based on Sensors)
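
The residual-fusion idea can be sketched as a head that learns only a correction on top of averaged uni-modal logits, as below; the layer sizes are illustrative, and this is not the paper's exact network.

```python
# Hedged sketch of residual-style audio-video fusion: uni-modal heads
# produce logits, and a fusion perceptron adds a learned residual.
import torch
import torch.nn as nn

class ResidualFusionHead(nn.Module):
    def __init__(self, audio_dim: int, video_dim: int, n_classes: int):
        super().__init__()
        self.audio_head = nn.Linear(audio_dim, n_classes)
        self.video_head = nn.Linear(video_dim, n_classes)
        self.residual = nn.Sequential(
            nn.Linear(audio_dim + video_dim, 128), nn.ReLU(),
            nn.Linear(128, n_classes))

    def forward(self, audio_emb, video_emb):
        unimodal = (self.audio_head(audio_emb) + self.video_head(video_emb)) / 2
        return unimodal + self.residual(torch.cat([audio_emb, video_emb], dim=1))

head = ResidualFusionHead(audio_dim=256, video_dim=512, n_classes=8)
logits = head(torch.rand(4, 256), torch.rand(4, 512))  # shape (4, 8)
```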

31 pages, 1027 KiB  
Article
Graph Representation Integrating Signals for Emotion Recognition and Analysis
by Teresa Zawadzka, Tomasz Wierciński, Grzegorz Meller, Mateusz Rock, Robert Zwierzycki and Michał R. Wróbel
Sensors 2021, 21(12), 4035; https://doi.org/10.3390/s21124035 - 11 Jun 2021
Cited by 2 | Viewed by 3613
Abstract
Data reusability is an important feature of current research in virtually every field of science. Modern research in Affective Computing often relies on datasets containing data originating from experiments, such as biosignals, video clips, or images. Moreover, conducting experiments with a vast number of participants to build datasets for Affective Computing research is time-consuming and expensive. Therefore, it is extremely important to provide solutions that allow data from a variety of sources to be (re)used, which usually demands data integration. This paper presents the Graph Representation Integrating Signals for Emotion Recognition and Analysis (GRISERA) framework, which provides a persistent model for storing integrated signals and methods for its creation. To the best of our knowledge, this is the first approach in the Affective Computing field that addresses the problem of integrating data from multiple experiments, storing it in a consistent way, and providing query patterns for data retrieval. The proposed framework is based on a standardized graph model, which is known to be highly suitable for signal processing purposes. The validation proved that data from the well-known AMIGOS dataset can be stored in the GRISERA framework and later retrieved for training deep learning models. Furthermore, a second case study proved that it is possible to integrate signals from multiple sources (AMIGOS, ASCERTAIN, and DEAP) into GRISERA and retrieve them for further statistical analysis.
(This article belongs to the Special Issue Emotion Recognition Based on Sensors)
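
To make the graph-integration idea concrete, the toy sketch below lays experiments, participants, and signal channels out as a property graph with networkx and runs one query pattern. The node labels and properties are illustrative, not GRISERA's actual schema.

```python
# Toy property-graph layout in the spirit of GRISERA: experiments,
# participants, and signal channels as nodes, relations as typed edges.
import networkx as nx

g = nx.MultiDiGraph()
g.add_node("exp:AMIGOS", kind="experiment")
g.add_node("p:17", kind="participant", age=24)
g.add_node("ch:ECG", kind="channel", sampling_hz=256)
g.add_edge("p:17", "exp:AMIGOS", relation="PARTICIPATED_IN")
g.add_edge("ch:ECG", "p:17", relation="RECORDED_FROM")

# Query pattern: all channels recorded from participants of one experiment.
participants = [u for u, v, d in g.edges(data=True)
                if v == "exp:AMIGOS" and d["relation"] == "PARTICIPATED_IN"]
channels = [u for u, v, d in g.edges(data=True)
            if v in participants and d["relation"] == "RECORDED_FROM"]
print(channels)  # ['ch:ECG']
```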

Other

22 pages, 1511 KiB  
Systematic Review
Datasets for Automated Affect and Emotion Recognition from Cardiovascular Signals Using Artificial Intelligence—A Systematic Review
by Paweł Jemioło, Dawid Storman, Maria Mamica, Mateusz Szymkowski, Wioletta Żabicka, Magdalena Wojtaszek-Główka and Antoni Ligęza
Sensors 2022, 22(7), 2538; https://doi.org/10.3390/s22072538 - 25 Mar 2022
Cited by 4 | Viewed by 3601
Abstract
Our review aimed to assess the current state and quality of publicly available datasets used for automated affect and emotion recognition (AAER) with artificial intelligence (AI), with an emphasis on cardiovascular (CV) signals. The quality of such datasets is essential for creating replicable systems on which future work can build. We investigated nine sources up to 31 August 2020, using a developed search strategy, including studies considering the use of AI in AAER based on CV signals. Two independent reviewers performed the screening of identified records, full-text assessment, data extraction, and credibility assessment. All discrepancies were resolved by discussion. We descriptively synthesised the results and assessed their credibility. The protocol was registered on the Open Science Framework (OSF) platform. Of the 4649 records identified, 195 received full-text assessment, and 18 were ultimately selected, focusing on datasets containing CV signals for AAER. The included papers analysed and shared data of 812 participants aged 17 to 47. Electrocardiography was the most explored signal (83.33% of datasets). Authors utilised video stimulation most frequently (52.38% of experiments). Despite these results, much information was not reported by researchers. The quality of the analysed papers was mainly low. Researchers in the field should concentrate more on methodology.
(This article belongs to the Special Issue Emotion Recognition Based on Sensors)

29 pages, 1076 KiB  
Systematic Review
Automatic Emotion Recognition in Children with Autism: A Systematic Literature Review
by Agnieszka Landowska, Aleksandra Karpus, Teresa Zawadzka, Ben Robins, Duygun Erol Barkana, Hatice Kose, Tatjana Zorcec and Nicholas Cummins
Sensors 2022, 22(4), 1649; https://doi.org/10.3390/s22041649 - 20 Feb 2022
Cited by 11 | Viewed by 5420
Abstract
The automatic emotion recognition domain brings new methods and technologies that might be used to enhance the therapy of children with autism. This paper explores the methods and tools used to recognize emotions in children. It presents a literature review study that was performed using a systematic approach and the PRISMA methodology for reporting quantitative and qualitative results. Diverse observation channels and modalities are used in the analyzed studies, including facial expressions, prosody of speech, and physiological signals. Regarding representation models, the basic emotions are the most frequently recognized, especially happiness, fear, and sadness. Both single-channel and multichannel approaches are applied, with a preference for the former. For multimodal recognition, early fusion was most frequently applied. SVMs and neural networks were the most popular choices for building classifiers. Qualitative analysis revealed important clues on participant group construction and the most common combinations of modalities and methods. All channels are reported to be prone to some disturbance, and as a result, information on specific symptoms of emotions might be temporarily or permanently unavailable. The challenges of proper stimuli, labelling methods, and the creation of open datasets were also identified.
(This article belongs to the Special Issue Emotion Recognition Based on Sensors)
