Advanced Technologies for Emotion Recognition

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 20 May 2024 | Viewed by 17,341

Special Issue Editors

Dr. Xavier Solé
AIwell Research Group, the eHealth Center, Universitat Oberta de Catalunya (UOC), Barcelona 08018, Spain
Interests: computer science; statistical machine learning; artificial intelligence
Dr. Xiaoming Zhang
School of Cyber Science and Technology, Beihang University, Beijing 100191, China
Interests: social network analysis; social media data mining; network event detection, influence analysis, and prediction; deep fusion of multimodal network data; information extraction from text data; multimodal deep learning
Prof. Dr. Sergio Escalera
Computer Vision Centre, Universitat Autònoma de Barcelona, Bellaterra (Cerdanyola), 08193 Barcelona, Spain
Interests: human behaviour analysis; pattern recognition; machine learning

Special Issue Information

Dear Colleagues,

With the development of computing and human–computer interaction technologies, emotion recognition has gradually become an important research topic in human-like artificial intelligence (AI). It is widely used in healthcare, education, public service, and social network analysis. Emotion recognition research covers the recognition of facial expressions, speech, heart rate, behavior, text, and physiological signals, from which a user's emotional state can be inferred.

We invite original research papers and review articles on innovations in emotion recognition, including but not limited to the following topics:

  • Systems and devices for capturing physiological signals;
  • Data preprocessing;
  • Non-invasive sensor technology;
  • Machine learning techniques for emotion recognition;
  • Deep learning for emotion recognition;
  • Facial expression recognition;
  • Audio analysis for emotion recognition;
  • Brain signals analysis for emotion recognition;
  • Behavior analysis for emotion recognition;
  • Emotion models;
  • Explainable models for emotion recognition.

Dr. Xavier Solé
Dr. Xiaoming Zhang
Prof. Dr. Sergio Escalera
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (11 papers)


Research

18 pages, 5069 KiB  
Article
Personalization of Affective Models Using Classical Machine Learning: A Feasibility Study
by Ali Kargarandehkordi, Matti Kaisti and Peter Washington
Appl. Sci. 2024, 14(4), 1337; https://doi.org/10.3390/app14041337 - 06 Feb 2024
Cited by 1 | Viewed by 545
Abstract
Emotion recognition, a rapidly evolving domain in digital health, has witnessed significant transformations with the advent of personalized approaches and advanced machine learning (ML) techniques. These advancements have shifted the focus from traditional, generalized models to more individual-centric methodologies, underscoring the importance of understanding and catering to the unique emotional expressions of individuals. Our study delves into the concept of model personalization in emotion recognition, moving away from the one-size-fits-all approach. We conducted a series of experiments using the Emognition dataset, comprising physiological and video data of human subjects expressing various emotions, to investigate this personalized approach to affective computing. For the 10 individuals in the dataset with a sufficient representation of at least two ground truth emotion labels, we trained a personalized version of three classical ML models (k-nearest neighbors, random forests, and a dense neural network) on a set of 51 features extracted from each video frame. We ensured that all the frames used to train the models occurred earlier in the video than the frames used to test the model. We measured the importance of each facial feature for all the personalized models and observed differing ranked lists of the top features across the subjects, highlighting the need for model personalization. We then compared the personalized models against a generalized model trained using data from all 10 subjects. The mean F1 scores for the personalized models, specifically for the k-nearest neighbors, random forest, and dense neural network, were 90.48%, 92.66%, and 86.40%, respectively. In contrast, the mean F1 scores for the generalized models, using the same ML techniques, were 88.55%, 91.78%, and 80.42%, respectively, when trained on data from various human subjects and evaluated using the same test set. The personalized models outperformed the generalized models for 7 out of the 10 subjects. The PCA analyses on the remaining three subjects revealed relatively small facial configuration differences across the emotion labels within each subject, suggesting that personalized ML will fail when the variation among data points within a subject's data is too low. This preliminary feasibility study demonstrates the potential of, as well as the ongoing challenges in, implementing personalized models that predict highly subjective outcomes like emotion. Full article
(This article belongs to the Special Issue Advanced Technologies for Emotion Recognition)
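
As a rough illustration of the protocol described in the abstract (not the authors' code), the sketch below trains one classifier per subject on a chronological split, so that all training frames precede all test frames; the feature arrays are synthetic stand-ins for the 51 per-frame facial features.

```python
# Minimal sketch of per-subject ("personalized") training with a chronological
# split, as described in the abstract. Feature arrays are synthetic stand-ins
# for the 51 per-frame facial features used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

def chronological_split(X, y, train_frac=0.7):
    """All training frames occur earlier in the video than test frames."""
    cut = int(len(X) * train_frac)
    return X[:cut], y[:cut], X[cut:], y[cut:]

# Synthetic data: 10 subjects, each with 500 frames x 51 features, 3 emotions.
subjects = {s: (rng.normal(size=(500, 51)), rng.integers(0, 3, 500))
            for s in range(10)}

for subject_id, (X, y) in subjects.items():
    X_tr, y_tr, X_te, y_te = chronological_split(X, y)
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_tr, y_tr)                      # one model per subject
    f1 = f1_score(y_te, model.predict(X_te), average="macro")
    print(f"subject {subject_id}: macro-F1 = {f1:.3f}")
```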

14 pages, 1397 KiB  
Article
Enhancing Emotion Recognition through Federated Learning: A Multimodal Approach with Convolutional Neural Networks
by Nikola Simić, Siniša Suzić, Nemanja Milošević, Vuk Stanojev, Tijana Nosek, Branislav Popović and Dragana Bajović
Appl. Sci. 2024, 14(4), 1325; https://doi.org/10.3390/app14041325 - 06 Feb 2024
Viewed by 874
Abstract
Human–machine interaction covers a range of applications in which machines should understand humans' commands and predict their behavior. Humans commonly change their mood over time, which affects the way we interact, particularly by changing speech style and facial expressions. As interaction requires quick decisions, low latency is critical for real-time processing. Edge devices, strategically placed near the data source, minimize processing time, enabling real-time decision-making. Edge computing allows us to process data locally, thus reducing the need to send sensitive information further through the network. Despite the wide adoption of audio-only, video-only, and multimodal emotion recognition systems, there is a research gap in terms of analyzing lightweight models and solving privacy challenges to improve model performance. This motivated us to develop a privacy-preserving, lightweight, CNN-based (CNNs are frequently used for processing audio and video modalities) audiovisual emotion recognition model, deployable on constrained edge devices. The model is further paired with a federated learning protocol to preserve the privacy of local clients on edge devices and improve detection accuracy. The results show that the adoption of federated learning improved classification accuracy by ~2% and that the proposed federated learning-based model provides performance competitive with other baseline audiovisual emotion recognition models. Full article
(This article belongs to the Special Issue Advanced Technologies for Emotion Recognition)
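
The federated protocol can be pictured as standard federated averaging (FedAvg), sketched below; the tiny CNN, the simulated clients, and the training loop are illustrative assumptions, not the paper's exact model.

```python
# Minimal federated-averaging (FedAvg) sketch: each edge client trains a small
# CNN locally, and only the weights (not the raw audiovisual data) are shared
# and averaged on the server. Illustrative only; not the paper's exact model.
import copy
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(8 * 4 * 4, n_classes))
    def forward(self, x):
        return self.net(x)

def local_update(model, data, target, epochs=1, lr=1e-2):
    """Train a copy of the global model on one client's private data."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(data), target)
        loss.backward()
        opt.step()
    return model.state_dict()

def fed_avg(states):
    """Average client weights parameter-by-parameter on the server."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in states]).mean(0)
    return avg

global_model = SmallCNN()
for rnd in range(3):                     # communication rounds
    client_states = []
    for _ in range(4):                   # 4 simulated edge clients
        x = torch.randn(16, 1, 32, 32)   # stand-in for local audiovisual data
        y = torch.randint(0, 7, (16,))
        client_states.append(local_update(global_model, x, y))
    global_model.load_state_dict(fed_avg(client_states))
```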

18 pages, 1769 KiB  
Article
Predicting Team Well-Being through Face Video Analysis with AI
by Moritz Müller, Ambre Dupuis, Tobias Zeulner, Ignacio Vazquez, Johann Hagerer and Peter A. Gloor
Appl. Sci. 2024, 14(3), 1284; https://doi.org/10.3390/app14031284 - 03 Feb 2024
Viewed by 749
Abstract
Well-being is one of the pillars of positive psychology, which is known to have positive effects not only on the personal and professional lives of individuals but also on teams and organizations. Understanding and promoting individual well-being is essential for staff health and long-term success, but current tools for assessing subjective well-being rely on time-consuming surveys and questionnaires, which limit the possibility of providing the real-time feedback needed to raise awareness and change individual behavior. This paper proposes a framework for understanding the process of non-verbal communication in teamwork, using video data to identify significant predictors of individual well-being. It relies on video acquisition technologies and state-of-the-art artificial intelligence tools to extract individual, relative, and environmental characteristics from panoramic video. Statistical analysis is applied to each time series, leading to the generation of a dataset of 125 features, which are then linked to PERMA (Positive Emotion, Engagement, Relationships, Meaning, and Accomplishments) surveys developed in the context of positive psychology. Each pillar of the PERMA model is evaluated as a regression or classification problem using machine learning algorithms. Our approach was applied to a case study in which 80 students collaborated in 20 teams for a week on a team task in a face-to-face setting. This enabled us to formulate several hypotheses identifying factors that influence individual well-being in teamwork. These promising results point to interesting avenues for research, for instance, fusing different media for the analysis of individual well-being in teamwork. Full article
(This article belongs to the Special Issue Advanced Technologies for Emotion Recognition)
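
The final modeling stage, in which each PERMA pillar is predicted separately from the 125 video-derived features, might look like the following sketch; the features and scores here are synthetic, and the video feature-extraction pipeline is not reproduced.

```python
# Sketch of the last modeling stage: each PERMA pillar is predicted separately
# from the 125 per-individual video features. Data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 125))                         # 80 participants x 125 features
pillars = ["P", "E", "R", "M", "A"]
scores = {p: rng.uniform(1, 7, 80) for p in pillars}   # survey scores per pillar

for pillar in pillars:
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    r2 = cross_val_score(model, X, scores[pillar], cv=5, scoring="r2")
    print(f"pillar {pillar}: mean CV R^2 = {r2.mean():.3f}")
```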

23 pages, 4436 KiB  
Article
Uni2Mul: A Conformer-Based Multimodal Emotion Classification Model by Considering Unimodal Expression Differences with Multi-Task Learning
by Lihong Zhang, Chaolong Liu and Nan Jia
Appl. Sci. 2023, 13(17), 9910; https://doi.org/10.3390/app13179910 - 01 Sep 2023
Cited by 1 | Viewed by 757
Abstract
Multimodal emotion classification (MEC) has been extensively studied in human–computer interaction, healthcare, and other domains. Previous MEC research has utilized identical multimodal annotations (IMAs) to train unimodal models, hindering the learning of effective unimodal representations due to differences between unimodal expressions and multimodal perceptions. Additionally, most MEC fusion techniques fail to consider the unimodal–multimodal inconsistencies. This study addresses two important issues in MEC: learning satisfactory unimodal representations of emotion and accounting for unimodal–multimodal inconsistencies during the fusion process. To tackle these challenges, the authors propose the Two-Stage Conformer-based MEC model (Uni2Mul) with two key innovations: (1) in stage one, unimodal models are trained using independent unimodal annotations (IUAs) to optimize unimodal emotion representations; (2) in stage two, a Conformer-based architecture is employed to fuse the unimodal representations learned in stage one and predict IMAs, accounting for unimodal–multimodal differences. The proposed model is evaluated on the CH-SIMS dataset. The experimental results demonstrate that Uni2Mul outperforms baseline models. This study makes two key contributions: (1) the use of IUAs improves unimodal learning; (2) the two-stage approach addresses unimodal–multimodal inconsistencies during Conformer-based fusion. Uni2Mul advances MEC by enhancing unimodal representation learning and Conformer-based fusion. Full article
(This article belongs to the Special Issue Advanced Technologies for Emotion Recognition)
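
The two-stage idea can be sketched as follows: unimodal encoders are first fitted to independent unimodal annotations (IUAs), then frozen while a fusion model is trained on the multimodal annotations (IMAs). A vanilla Transformer encoder stands in for the Conformer here, and the feature dimensions and data are illustrative assumptions.

```python
# Two-stage training sketch: stage one fits unimodal encoders on unimodal
# labels (IUAs); stage two freezes them and trains a fusion model on the
# multimodal labels (IMAs). A vanilla TransformerEncoder stands in for the
# Conformer block used in the paper.
import torch
import torch.nn as nn

class UnimodalEncoder(nn.Module):
    """Stage one: trained against its own independent unimodal annotation."""
    def __init__(self, in_dim, hid=64, n_classes=3):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.head = nn.Linear(hid, n_classes)
    def forward(self, x, features_only=False):
        h = self.body(x)
        return h if features_only else self.head(h)

class FusionModel(nn.Module):
    """Stage two: attends over the frozen unimodal features, predicts IMAs."""
    def __init__(self, hid=64, n_classes=3):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=hid, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(hid, n_classes)
    def forward(self, feats):                  # feats: (batch, n_modalities, hid)
        return self.head(self.encoder(feats).mean(dim=1))

def fit(model, params, x, y, steps=100):
    opt = torch.optim.Adam(params, lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(x), y).backward()
        opt.step()

# Synthetic stand-ins: text (300-d) and audio (74-d) features, 64 samples.
x_text, x_audio = torch.randn(64, 300), torch.randn(64, 74)
y_text, y_audio, y_multi = (torch.randint(0, 3, (64,)) for _ in range(3))

text_enc, audio_enc = UnimodalEncoder(300), UnimodalEncoder(74)
fit(text_enc, text_enc.parameters(), x_text, y_text)       # stage one (IUAs)
fit(audio_enc, audio_enc.parameters(), x_audio, y_audio)

fusion = FusionModel()
with torch.no_grad():                                      # freeze the encoders
    feats = torch.stack([text_enc(x_text, features_only=True),
                         audio_enc(x_audio, features_only=True)], dim=1)
fit(fusion, fusion.parameters(), feats, y_multi)           # stage two (IMAs)
```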

30 pages, 19394 KiB  
Article
Facial Emotion Recognition for Photo and Video Surveillance Based on Machine Learning and Visual Analytics
by Oleg Kalyta, Olexander Barmak, Pavlo Radiuk and Iurii Krak
Appl. Sci. 2023, 13(17), 9890; https://doi.org/10.3390/app13179890 - 31 Aug 2023
Cited by 3 | Viewed by 2055
Abstract
Modern video surveillance systems mainly rely on human operators to monitor and interpret the behavior of individuals in real time, which may lead to severe delays in responding to an emergency. Therefore, there is a need for continued research into the design of interpretable and more transparent emotion recognition models that can effectively detect emotions in safety video surveillance systems. This study proposes a novel technique incorporating a straightforward model for detecting sudden changes in a person's emotional state using low-resolution photos and video frames from surveillance cameras. The proposed technique includes a method for the geometric interpretation of facial areas to extract facial expression features, a hyperplane classification method for identifying emotional states in the feature vector space, and the principles of visual analytics and "human in the loop" to obtain transparent and interpretable classifiers. Experimental testing using the developed software prototype validates the scientific claims of the proposed technique. Its implementation improves the reliability of abnormal behavior detection via facial expressions by 0.91–2.20%, depending on different emotions and environmental conditions. Moreover, it decreases the error probability in identifying sudden emotional shifts by 0.23–2.21% compared to existing counterparts. Future research will aim to improve the approach quantitatively and address the limitations discussed in this paper. Full article
(This article belongs to the Special Issue Advanced Technologies for Emotion Recognition)
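
The core pipeline, geometric features computed from facial landmarks followed by hyperplane classification in the feature space, can be roughly illustrated as below; the landmarks are synthetic, a linear SVM stands in for the paper's hyperplane method, and the chosen landmark pairs are arbitrary.

```python
# Rough sketch: geometric facial-expression features (normalized distances
# between landmark pairs) fed to a hyperplane (linear SVM) classifier.
# Landmarks are synthetic; the paper's exact facial areas are not reproduced.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

def geometric_features(landmarks):
    """Pairwise distances between a few landmark pairs, normalized by the
    inter-ocular distance so the features are scale-invariant."""
    pairs = [(2, 3), (4, 5), (0, 4), (1, 5)]           # illustrative pairs
    iod = np.linalg.norm(landmarks[0] - landmarks[1])  # eye-to-eye distance
    return np.array([np.linalg.norm(landmarks[a] - landmarks[b]) / iod
                     for a, b in pairs])

# 400 synthetic faces, 6 landmarks each (x, y), 3 emotion classes.
landmarks = rng.normal(size=(400, 6, 2))
labels = rng.integers(0, 3, 400)
X = np.array([geometric_features(lm) for lm in landmarks])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = LinearSVC().fit(X_tr, y_tr)          # separating hyperplanes in feature space
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```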

17 pages, 3934 KiB  
Article
Lightweight Facial Expression Recognition Based on Class-Rebalancing Fusion Cumulative Learning
by Xiangwei Mou, Yongfu Song, Rijun Wang, Yuanbin Tang and Yu Xin
Appl. Sci. 2023, 13(15), 9029; https://doi.org/10.3390/app13159029 - 07 Aug 2023
Cited by 1 | Viewed by 898
Abstract
In Facial Expression Recognition (FER) research, the class distribution of facial expression data is uneven, the features extracted by networks are insufficient, and FER accuracy and speed are relatively low for practical applications. Therefore, a lightweight and efficient method based on class-rebalancing fusion cumulative learning for FER is proposed in our research. A dual-branch network (Regular feature learning and Rebalancing-Cumulative learning Network, RLR-CNet) is proposed, where RLR-CNet builds on an improved lightweight ShuffleNet with two branches (feature learning and class-rebalancing) based on cumulative learning, which improves the efficiency of our model's recognition. Then, to enhance the generalizability of our model and pursue better recognition efficiency in real scenes, an improved random masking method is used to process the datasets. Finally, in order to extract local detailed features and further improve FER efficiency, a shuffle attention (SA) module is embedded in the model. The results demonstrate that the recognition accuracy of our RLR-CNet is 71.14%, 98.04%, and 87.93% on FER2013, CK+, and RAF-DB, respectively. Compared with other FER methods, our method has high recognition accuracy, and the number of parameters is only 1.02 MB, which is 17.74% lower than that of the original ShuffleNet. Full article
(This article belongs to the Special Issue Advanced Technologies for Emotion Recognition)
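
The cumulative-learning idea, which gradually shifts training emphasis from the regular feature-learning branch to the class-rebalancing branch, can be sketched as a weighted two-branch loss; the backbone, schedule, and simulated batches below are illustrative assumptions rather than the RLR-CNet architecture.

```python
# Sketch of cumulative learning over two branches: a regular branch trained on
# the natural (imbalanced) distribution and a rebalancing branch trained on
# class-balanced samples. The weight alpha shifts from the regular branch
# toward the rebalancing branch as training progresses.
import torch
import torch.nn as nn

def cumulative_alpha(epoch, total_epochs):
    """1.0 at the start (regular branch only), decaying toward 0.0."""
    return 1.0 - (epoch / total_epochs) ** 2

backbone = nn.Sequential(nn.Flatten(), nn.Linear(48 * 48, 128), nn.ReLU())
regular_head = nn.Linear(128, 7)       # natural-distribution branch
rebalance_head = nn.Linear(128, 7)     # class-balanced branch
params = (list(backbone.parameters()) + list(regular_head.parameters())
          + list(rebalance_head.parameters()))
opt = torch.optim.SGD(params, lr=1e-2)

total_epochs = 10
for epoch in range(total_epochs):
    alpha = cumulative_alpha(epoch, total_epochs)
    # x_regular: batch drawn with the natural class frequencies;
    # x_balanced: batch drawn with a class-balanced sampler (simulated here).
    x_regular, y_regular = torch.randn(32, 1, 48, 48), torch.randint(0, 7, (32,))
    x_balanced, y_balanced = torch.randn(32, 1, 48, 48), torch.randint(0, 7, (32,))
    loss = (alpha * nn.functional.cross_entropy(
                regular_head(backbone(x_regular)), y_regular)
            + (1 - alpha) * nn.functional.cross_entropy(
                rebalance_head(backbone(x_balanced)), y_balanced))
    opt.zero_grad()
    loss.backward()
    opt.step()
```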

17 pages, 4131 KiB  
Article
Predicting Choices Driven by Emotional Stimuli Using EEG-Based Analysis and Deep Learning
by Mashael Aldayel, Amira Kharrat and Abeer Al-Nafjan
Appl. Sci. 2023, 13(14), 8469; https://doi.org/10.3390/app13148469 - 22 Jul 2023
Cited by 1 | Viewed by 970
Abstract
Individual choices and preferences are important factors that impact decision making. Artificial intelligence can predict decisions by objectively detecting individual choices and preferences using natural language processing, computer vision, and machine learning. Brain–computer interfaces can measure emotional reactions and identify brain activity changes linked to positive or negative emotions, enabling more accurate prediction models. This research aims to build an individual choice prediction system using electroencephalography (EEG) signals from the Shanghai Jiao Tong University emotion and EEG dataset (SEED). Using EEG, we built different deep learning models, such as a convolutional neural network, long short-term memory (LSTM), and a hybrid model, to predict choices driven by emotional stimuli. We compared their performance with that of classical classifiers, such as k-nearest neighbors, support vector machines, and logistic regression, and also utilized ensemble classifiers such as random forest, adaptive boosting, and extreme gradient boosting. We evaluated our proposed models and compared them with previous studies on SEED. Our proposed LSTM model achieved good results, with an accuracy of 96%. Full article
(This article belongs to the Special Issue Advanced Technologies for Emotion Recognition)
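
A minimal version of the LSTM classifier over windowed EEG sequences might look like the sketch below; the data are synthetic stand-ins for the SEED recordings, and the paper's preprocessing and feature extraction are not reproduced.

```python
# Minimal LSTM classifier over windowed EEG feature sequences, in the spirit
# of the paper's best-performing model. Data are synthetic stand-ins for the
# SEED recordings (62 channels, here treated as 62 features per time step).
import torch
import torch.nn as nn

class EEGLSTM(nn.Module):
    def __init__(self, n_features=62, hidden=128, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)
    def forward(self, x):                  # x: (batch, time, features)
        _, (h_n, _) = self.lstm(x)         # final hidden state summarizes the window
        return self.head(h_n[-1])

model = EEGLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(16, 200, 62)               # 16 windows of 200 time steps
y = torch.randint(0, 3, (16,))             # emotion-driven choice classes
for _ in range(5):                         # a few illustrative training steps
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
print(model(x).argmax(dim=1))              # predicted choice per window
```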

19 pages, 3919 KiB  
Article
Analysis of Deep Learning-Based Decision-Making in an Emotional Spontaneous Speech Task
by Mikel de Velasco, Raquel Justo, Asier López Zorrilla and María Inés Torres
Appl. Sci. 2023, 13(2), 980; https://doi.org/10.3390/app13020980 - 11 Jan 2023
Cited by 3 | Viewed by 1949
Abstract
In this work, we present an approach to understand the computational methods and decision-making involved in the identification of emotions in spontaneous speech. The selected task consists of Spanish TV debates, which entail a high level of complexity as well as additional subjectivity in the human perception-based annotation procedure. A simple convolutional neural model is proposed, and its behaviour is analysed to explain its decision-making. The proposed model slightly outperforms commonly used CNN architectures such as VGG16, while being much lighter. Internal layer-by-layer transformations of the input spectrogram are visualised and analysed. Finally, a class model visualisation is proposed as a simple interpretation approach whose usefulness is assessed in this work. Full article
(This article belongs to the Special Issue Advanced Technologies for Emotion Recognition)
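
Class model visualisation, the interpretation approach assessed in the paper, amounts to optimizing an input spectrogram by gradient ascent so that one emotion class's score is maximized; the sketch below uses a tiny stand-in CNN rather than the authors' model.

```python
# Sketch of class model visualisation: starting from zeros, the input
# spectrogram is optimized by gradient ascent to maximize the score of one
# emotion class, revealing what the network "listens" for. The tiny CNN is a
# stand-in for the paper's light convolutional model.
import torch
import torch.nn as nn

cnn = nn.Sequential(                       # (1, 64, 64) spectrogram -> 4 classes
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 4))

target_class = 2                           # e.g., an "anger" class index
spec = torch.zeros(1, 1, 64, 64, requires_grad=True)
opt = torch.optim.Adam([spec], lr=0.1)

for _ in range(100):
    opt.zero_grad()
    score = cnn(spec)[0, target_class]
    # Maximize the class score with light L2 regularization on the input.
    loss = -score + 1e-3 * spec.pow(2).sum()
    loss.backward()
    opt.step()
# `spec` now approximates the class model: the spectrogram pattern the
# network most strongly associates with the chosen emotion.
```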

18 pages, 1655 KiB  
Article
Smart Classroom Monitoring Using Novel Real-Time Facial Expression Recognition System
by Shariqa Fakhar, Junaid Baber, Sibghat Ullah Bazai, Shah Marjan, Michal Jasinski, Elzbieta Jasinska, Muhammad Umar Chaudhry, Zbigniew Leonowicz and Shumaila Hussain
Appl. Sci. 2022, 12(23), 12134; https://doi.org/10.3390/app122312134 - 27 Nov 2022
Cited by 7 | Viewed by 3235
Abstract
Emotions play a vital role in education. Technological advancement in computer vision using deep learning models has improved automatic emotion recognition. In this study, a real-time automatic emotion recognition system is developed incorporating novel salient facial features for classroom assessment using a deep learning model. Faces are initially detected using HOG, and automatic emotion recognition is then performed by training a convolutional neural network (CNN) that takes real-time input from a camera deployed in the classroom. The proposed emotion recognition system analyzes the facial expressions of each student during learning. The selected emotional states are happiness, sadness, and fear, along with the cognitive–emotional states of satisfaction, dissatisfaction, and concentration. The selected emotional states are tested against the variables of gender, department, lecture time, seating position, and subject difficulty. The proposed system contributes to improving classroom learning. Full article
(This article belongs to the Special Issue Advanced Technologies for Emotion Recognition)
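
The real-time loop, HOG-based face detection on camera frames followed by CNN emotion classification of each face crop, can be sketched with OpenCV and dlib as below; emotion_cnn is a hypothetical placeholder for the trained network.

```python
# Sketch of the real-time loop: HOG face detection (dlib's frontal face
# detector is HOG-based) on camera frames, then emotion classification of each
# face crop. `emotion_cnn` is a hypothetical placeholder for the trained model.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()      # HOG + linear SVM detector
LABELS = ["happiness", "sadness", "fear",
          "satisfaction", "dissatisfaction", "concentration"]

def emotion_cnn(face_48x48):
    """Placeholder for the trained CNN; returns per-class scores."""
    return np.random.rand(len(LABELS))

cap = cv2.VideoCapture(0)                        # camera in the classroom
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for rect in detector(gray):                  # one rect per detected face
        crop = gray[max(rect.top(), 0):rect.bottom(),
                    max(rect.left(), 0):rect.right()]
        if crop.size == 0:
            continue
        crop = cv2.resize(crop, (48, 48)) / 255.0
        label = LABELS[int(np.argmax(emotion_cnn(crop)))]
        cv2.rectangle(frame, (rect.left(), rect.top()),
                      (rect.right(), rect.bottom()), (0, 255, 0), 2)
        cv2.putText(frame, label, (rect.left(), rect.top() - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("classroom", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```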

17 pages, 6704 KiB  
Article
A Data-Driven Approach for University Public Opinion Analysis and Its Applications
by Miao He, Chunyan Ma and Rui Wang
Appl. Sci. 2022, 12(18), 9136; https://doi.org/10.3390/app12189136 - 12 Sep 2022
Cited by 9 | Viewed by 2124
Abstract
In the era of mobile Internet, college students increasingly tend to express their opinions and views through online social media; furthermore, social media influence the value judgments of college students. Therefore, it is vital to understand and analyze university online public opinion over time. In this paper, we propose a data-driven architecture for the analysis of university online public opinion. The Weibo, WeChat, Douyin, Zhihu, and Toutiao apps are selected as sources for the collection of public opinion data. Crawler technology is utilized to automatically obtain user data about target topics to form a database. To avoid the drawbacks of traditional methods, such as sentiment lexicons and machine learning, which rely on a priori knowledge and complex handcrafted features, the Word2Vec tool is used to perform word embedding, the LSTM-CRF model is proposed to realize Chinese word segmentation, and a convolutional neural network (CNN) is built to automatically extract implicit features from word vectors, ultimately establishing the nonlinear relationships between implicit features and the sentiment tendency of university public opinion. The experimental results show that the proposed model is more accurate than the SVM, RF, NBC, and GMM methods, providing valuable information for public opinion management. Full article
(This article belongs to the Special Issue Advanced Technologies for Emotion Recognition)
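
The embedding-plus-CNN sentiment stage can be sketched as follows; the toy corpus, labels, and network sizes are assumptions, and the crawler and LSTM-CRF segmentation steps are not reproduced.

```python
# Sketch of the Word2Vec + CNN sentiment stage: word vectors feed a 1-D
# convolution whose pooled features predict sentiment. The corpus is a toy
# stand-in; the crawler and LSTM-CRF word segmentation are not reproduced.
import torch
import torch.nn as nn
from gensim.models import Word2Vec

corpus = [["campus", "food", "is", "great"],
          ["tuition", "increase", "makes", "students", "angry"],
          ["library", "hours", "are", "too", "short"]]
w2v = Word2Vec(corpus, vector_size=50, min_count=1, seed=0)

def embed(tokens, max_len=10):
    """Stack word vectors into a fixed-length (max_len, 50) matrix."""
    vecs = [torch.tensor(w2v.wv[t]) for t in tokens[:max_len]]
    vecs += [torch.zeros(50)] * (max_len - len(vecs))      # zero-pad
    return torch.stack(vecs)

class TextCNN(nn.Module):
    def __init__(self, emb_dim=50, n_classes=2):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, 32, kernel_size=3, padding=1)
        self.head = nn.Linear(32, n_classes)
    def forward(self, x):                  # x: (batch, max_len, emb_dim)
        h = torch.relu(self.conv(x.transpose(1, 2)))   # (batch, 32, max_len)
        return self.head(h.max(dim=2).values)          # max-pool over time

x = torch.stack([embed(s) for s in corpus])
y = torch.tensor([1, 0, 0])                # 1 = positive, 0 = negative
model = TextCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(50):
    opt.zero_grad()
    nn.functional.cross_entropy(model(x), y).backward()
    opt.step()
```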

Review

33 pages, 2905 KiB  
Review
Systematic Review: Emotion Recognition Based on Electrophysiological Patterns for Emotion Regulation Detection
by Mathilde Marie Duville, Yeremi Pérez, Rodrigo Hugues-Gudiño, Norberto E. Naal-Ruiz, Luz María Alonso-Valerdi and David I. Ibarra-Zarate
Appl. Sci. 2023, 13(12), 6896; https://doi.org/10.3390/app13126896 - 07 Jun 2023
Cited by 1 | Viewed by 1641
Abstract
The electrophysiological basis of emotion regulation (ER) has gained increased attention, since efficient emotion recognition and ER allow humans to develop high emotional intelligence. However, no methodological standardization has been established yet. Therefore, this paper aims to provide a critical systematic review to identify experimental methodologies that evoke emotions and record, analyze, and link electrophysiological signals with emotional experience by means of statistics and artificial intelligence, and, lastly, to define a clear application for assessing emotion processing. A total of 42 articles were selected after a search of six scientific databases: Web of Science, EBSCO, PubMed, Scopus, ProQuest, and ScienceDirect during the first semester of 2020. Studies were included if (1) electrophysiological signals recorded on human subjects were correlated with emotional recognition and/or regulation, and (2) statistical models or machine or deep learning methods based on electrophysiological signals were used to analyze the data. Studies were excluded if they met one or more of the following criteria: (1) emotions were not described in terms of continuous dimensions (valence and arousal) or by discrete variables, (2) a control group or neutral state was not implemented, and (3) results were not obtained from a previous experimental paradigm that aimed to elicit emotions. No distinction was made in the selection as to whether the participants presented a pathological or non-pathological condition, but the condition of the subjects had to be efficiently detailed for the study to be included. The risk of bias was limited by extracting and organizing information in spreadsheets and through discussions among the authors. However, data size considerations, such as sample size, were not taken into account, introducing bias into the validity of the analysis. This systematic review is presented as a consulting source to accelerate the development of neuroengineering-based systems to regulate the trajectory of emotional experiences early on. Full article
(This article belongs to the Special Issue Advanced Technologies for Emotion Recognition)

Planned Papers

The list below represents planned manuscripts only. Some of these manuscripts have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.

Title: Emotion Recognition Using Audio-Visual Features
Author:
Highlights: This paper seeks common ground for emotion recognition across people of various cultures who speak the same language. In real-world scenarios, speech, images, and videos may not be as clean as the conditions under which datasets are collected.

Title: Systematic Review: Emotion Recognition based on Electrophysiological Patterns for Emotion Regulation Detection
Author: Ibarra-Zarate
Highlights: Electrophysiological signals; Emotional Intelligence; Emotion Recognition; Emotion Regulation; Methodology
