Emotion Intelligence Based on Smart Sensing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (31 October 2022) | Viewed by 52033

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Prof. Dr. Mincheol Whang
Guest Editor
Department of Human-Centered Artificial Intelligence, Sangmyung University, Seoul 03016, Korea
Interests: emotion recognition; emotion intelligence; social emotion; social neuroscience; emotional neuroscience; brain-computer interface; human-computer interaction

Prof. Dr. Sung Park
Guest Editor
School of Design, the Savannah College of Art and Design, Savannah, GA 31401, USA
Interests: human-robot interaction; embodied conversational agent; human-computer interaction; user experience design; affective interaction

Special Issue Information

Dear Colleagues,

Emotion intelligence is important in shaping human relationships in communities, organizations, and society in daily life. Emotion intelligence concerns how well emotion is recognized and how well it is expressed. Emotion is individual, while social emotion is interdependent. Digital environments such as VR, AR, and robots require the ability to track and recognize emotion quantitatively, in real time, during emotional interaction between two or more people. Emotion machines equipped with emotion intelligence have therefore been studied as a step toward a digital society.

Emotion has been quantified by sensing facial expressions, gestures, and physiological signals such as EEG, ECG, and EDA. In addition, emotion can be recognized more accurately by considering the emotional context, including the spatiotemporal variability of the situation, the congruency of implicit and explicit responses, the consistency of human action, and human relationships in society.

Human emotion is not only a short-term response but also a long-term response with patterns and trends in daily life. Laboratory studies of emotion sensing should therefore extend to smart sensing, which monitors and tracks emotion variation with predictable patterns.

This Special Issue explores empirical studies of emotion mechanisms, qualitative and quantitative measurement of emotion, recognition of emotional context, and applications of emotion, and addresses engineering problems that limit the accuracy of emotion recognition.

Prof. Dr. Mincheol Whang
Prof. Dr. Sung Park
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (15 papers)


Editorial

3 pages, 184 KiB  
Editorial
Special Issue “Emotion Intelligence Based on Smart Sensing”
by Sung Park and Mincheol Whang
Sensors 2023, 23(3), 1098; https://doi.org/10.3390/s23031098 - 18 Jan 2023
Cited by 1 | Viewed by 1259
Abstract
Emotional intelligence is essential to maintaining human relationships in communities, organizations, and societies [...] Full article
(This article belongs to the Special Issue Emotion Intelligence Based on Smart Sensing)

Research

17 pages, 9194 KiB  
Article
Recognition of Emotion by Brain Connectivity and Eye Movement
by Jing Zhang, Sung Park, Ayoung Cho and Mincheol Whang
Sensors 2022, 22(18), 6736; https://doi.org/10.3390/s22186736 - 06 Sep 2022
Cited by 5 | Viewed by 2369
Abstract
Simultaneous activation of brain regions (i.e., brain connection features) is an essential mechanism of brain activity in emotion recognition of visual content. The occipital cortex of the brain is involved in visual processing, but the frontal lobe processes cranial nerve signals to control higher emotions. However, recognition of emotion in visual content merits the analysis of eye movement features, because the pupils, iris, and other eye structures are connected to the nerves of the brain. We hypothesized that when viewing video content, the activation features of brain connections are significantly related to eye movement characteristics. We investigated the relationship between brain connectivity (strength and directionality) and eye movement features (left and right pupils, saccades, and fixations) when 47 participants viewed an emotion-eliciting video on a two-dimensional emotion model (valence and arousal). We found that the connectivity eigenvalues of the long-distance prefrontal lobe, temporal lobe, parietal lobe, and center are related to cognitive activity involving high valence. In addition, saccade movement was correlated with long-distance occipital-frontal connectivity. Finally, short-distance connectivity results showed emotional fluctuations caused by unconscious stimulation. Full article
(This article belongs to the Special Issue Emotion Intelligence Based on Smart Sensing)
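As a rough, hypothetical illustration of the kind of relationship tested in this study (the authors' actual connectivity estimator, trial structure, and eye-tracking pipeline are not reproduced here), one could correlate a per-trial connectivity strength with a per-trial saccade count:

```python
# Hypothetical sketch: correlating a per-trial occipital-frontal connectivity
# strength with per-trial saccade counts. All values are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Assume 47 trials, each with a precomputed connectivity strength and a saccade count.
connectivity_strength = rng.normal(0.5, 0.1, size=47)                     # placeholder values
saccade_count = 40 + 30 * connectivity_strength + rng.normal(0, 2, size=47)

r, p = stats.pearsonr(connectivity_strength, saccade_count)
print(f"Pearson r = {r:.2f}, p = {p:.3g}")
```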

18 pages, 3306 KiB  
Article
EEG Emotion Classification Network Based on Attention Fusion of Multi-Channel Band Features
by Xiaoliang Zhu, Wenting Rong, Liang Zhao, Zili He, Qiaolai Yang, Junyi Sun and Gendong Liu
Sensors 2022, 22(14), 5252; https://doi.org/10.3390/s22145252 - 13 Jul 2022
Cited by 8 | Viewed by 2274
Abstract
Understanding learners’ emotions can help optimize instruction and further conduct effective learning interventions. Most existing studies on student emotion recognition are based on multiple manifestations of external behavior, which do not fully use physiological signals. In this context, on the one hand, a learning emotion EEG dataset (LE-EEG) is constructed, which captures physiological signals reflecting the emotions of boredom, neutrality, and engagement during learning; on the other hand, an EEG emotion classification network based on attention fusion (ECN-AF) is proposed. To be specific, on the basis of key frequency bands and channels selection, multi-channel band features are first extracted (using a multi-channel backbone network) and then fused (using attention units). In order to verify the performance, the proposed model is tested on an open-access dataset SEED (N = 15) and the self-collected dataset LE-EEG (N = 45), respectively. The experimental results using five-fold cross validation show the following: (i) on the SEED dataset, the highest accuracy of 96.45% is achieved by the proposed model, demonstrating a slight increase of 1.37% compared to the baseline models; and (ii) on the LE-EEG dataset, the highest accuracy of 95.87% is achieved, demonstrating a 21.49% increase compared to the baseline models. Full article
(This article belongs to the Special Issue Emotion Intelligence Based on Smart Sensing)
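The following is a minimal sketch of the general idea of attention-based fusion over per-band EEG features; it is not the ECN-AF architecture itself, and the backbone, feature dimensions, and band/channel selection are assumptions:

```python
# Hypothetical sketch of attention-weighted fusion of per-band EEG features.
import torch
import torch.nn as nn

class BandAttentionFusion(nn.Module):
    def __init__(self, n_bands: int, feat_dim: int, n_classes: int):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)            # attention score per band
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, band_feats: torch.Tensor) -> torch.Tensor:
        # band_feats: (batch, n_bands, feat_dim), e.g., backbone outputs per band
        weights = torch.softmax(self.score(band_feats), dim=1)   # (batch, n_bands, 1)
        fused = (weights * band_feats).sum(dim=1)                # (batch, feat_dim)
        return self.classifier(fused)

model = BandAttentionFusion(n_bands=5, feat_dim=64, n_classes=3)
logits = model(torch.randn(8, 5, 64))   # dummy batch of 8 samples
print(logits.shape)                     # torch.Size([8, 3])
```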

21 pages, 11525 KiB  
Article
Multimodal Data Collection System for Driver Emotion Recognition Based on Self-Reporting in Real-World Driving
by Geesung Oh, Euiseok Jeong, Rak Chul Kim, Ji Hyun Yang, Sungwook Hwang, Sangho Lee and Sejoon Lim
Sensors 2022, 22(12), 4402; https://doi.org/10.3390/s22124402 - 10 Jun 2022
Cited by 6 | Viewed by 2451
Abstract
As vehicles provide various services to drivers, research on driver emotion recognition has been expanding. However, current driver emotion datasets are limited by inconsistencies in collected data and inferred emotional state annotations by others. To overcome this limitation, we propose a data collection system that collects multimodal datasets during real-world driving. The proposed system includes a self-reportable HMI application into which a driver directly inputs their current emotion state. Data collection was completed without any accidents for over 122 h of real-world driving using the system, which also considers the minimization of behavioral and cognitive disturbances. To demonstrate the validity of our collected dataset, we also provide case studies for statistical analysis, driver face detection, and personalized driver emotion recognition. The proposed data collection system enables the construction of reliable large-scale datasets on real-world driving and facilitates research on driver emotion recognition. The proposed system is available on GitHub. Full article
(This article belongs to the Special Issue Emotion Intelligence Based on Smart Sensing)
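One practical step in such a self-reporting setup is attaching sparse self-reports to continuous sensor streams. A possible approach, not necessarily the one used by the authors, is a nearest-earlier-timestamp join:

```python
# Hypothetical sketch: label each sensor sample with the most recent
# self-reported emotion using a backward nearest-timestamp join.
import pandas as pd

sensor = pd.DataFrame({
    "t": pd.to_datetime(["2022-01-01 10:00:00", "2022-01-01 10:00:01",
                         "2022-01-01 10:00:02"]),
    "heart_rate": [72, 74, 73],
})
reports = pd.DataFrame({
    "t": pd.to_datetime(["2022-01-01 10:00:00"]),
    "emotion": ["neutral"],
})

labeled = pd.merge_asof(sensor.sort_values("t"), reports.sort_values("t"),
                        on="t", direction="backward")
print(labeled)
```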

18 pages, 3383 KiB  
Article
Multispectral Facial Recognition in the Wild
by Pedro Martins, José Silvestre Silva and Alexandre Bernardino
Sensors 2022, 22(11), 4219; https://doi.org/10.3390/s22114219 - 01 Jun 2022
Cited by 4 | Viewed by 2005
Abstract
This work proposes a multi-spectral face recognition system in an uncontrolled environment, aiming to identify or authenticate identities (people) through their facial images. Face recognition systems in uncontrolled environments have shown impressive performance improvements over recent decades. However, most are limited to the use of a single spectral band in the visible spectrum. The use of multi-spectral images makes it possible to collect information that is not obtainable in the visible spectrum when certain occlusions exist (e.g., fog or plastic materials) and in low- or no-light environments. The proposed work uses the scores obtained by face recognition systems in different spectral bands to make a joint final decision in identification. The evaluation of different methods for each of the components of a face recognition system allowed the most suitable ones for a multi-spectral face recognition system in an uncontrolled environment to be selected. The experimental results, expressed in Rank-1 scores, were 99.5% and 99.6% in the TUFTS multi-spectral database with pose variation and expression variation, respectively, and 100.0% in the CASIA NIR-VIS 2.0 database, indicating that the use of multi-spectral images in an uncontrolled environment is advantageous when compared with the use of single spectral band images. Full article
(This article belongs to the Special Issue Emotion Intelligence Based on Smart Sensing)
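A minimal sketch of score-level fusion across spectral bands, assuming per-band gallery similarity scores are already available; the normalization, equal weighting, and band set here are illustrative, not the authors' exact method:

```python
# Hypothetical sketch: normalize each band's gallery-similarity scores and
# average them before taking the Rank-1 identification decision.
import numpy as np

def minmax(scores: np.ndarray) -> np.ndarray:
    span = scores.max() - scores.min()
    return (scores - scores.min()) / span if span > 0 else np.zeros_like(scores)

# Similarity of one probe image against a gallery of 4 identities,
# computed independently in three spectral bands (placeholder values).
scores_per_band = {
    "visible": np.array([0.62, 0.40, 0.35, 0.30]),
    "nir":     np.array([0.55, 0.58, 0.20, 0.25]),
    "thermal": np.array([0.70, 0.45, 0.50, 0.35]),
}

fused = np.mean([minmax(s) for s in scores_per_band.values()], axis=0)
print("Rank-1 identity index:", int(np.argmax(fused)))
```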

27 pages, 7042 KiB  
Article
Fear Detection in Multimodal Affective Computing: Physiological Signals versus Catecholamine Concentration
by Laura Gutiérrez-Martín, Elena Romero-Perales, Clara Sainz de Baranda Andújar, Manuel F. Canabal-Benito, Gema Esther Rodríguez-Ramos, Rafael Toro-Flores, Susana López-Ongil and Celia López-Ongil
Sensors 2022, 22(11), 4023; https://doi.org/10.3390/s22114023 - 26 May 2022
Cited by 2 | Viewed by 3325
Abstract
Affective computing through physiological signals monitoring is currently a hot topic in the scientific literature, but also in the industry. Many wearable devices are being developed for health or wellness tracking during daily life or sports activity. Likewise, other applications are being proposed for the early detection of risk situations involving sexual or violent aggressions, with the identification of panic or fear emotions. The use of other sources of information, such as video or audio signals will make multimodal affective computing a more powerful tool for emotion classification, improving the detection capability. There are other biological elements that have not been explored yet and that could provide additional information to better disentangle negative emotions, such as fear or panic. Catecholamines are hormones produced by the adrenal glands, two small glands located above the kidneys. These hormones are released in the body in response to physical or emotional stress. The main catecholamines, namely adrenaline, noradrenaline and dopamine, have been analysed, as well as four physiological variables: skin temperature, electrodermal activity, blood volume pulse (to calculate heart rate activity, i.e., beats per minute) and respiration rate. This work presents a comparison of the results provided by the analysis of physiological signals in reference to catecholamines, from an experimental task with 21 female volunteers receiving audiovisual stimuli through an immersive environment in virtual reality. Artificial intelligence algorithms for fear classification with physiological variables and plasma catecholamine concentration levels have been proposed and tested. The best results have been obtained with the features extracted from the physiological variables. Adding the catecholamines’ maximum variation during the five minutes after the video clip visualization, as well as adding the five measurements (1-min interval) of these levels, did not provide better performance in the classifiers. Full article
(This article belongs to the Special Issue Emotion Intelligence Based on Smart Sensing)
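As a hypothetical sketch of the kind of comparison reported above, the same classifier can be cross-validated on physiological features alone and on physiological plus catecholamine features; the classifier choice, feature counts, and data here are placeholders:

```python
# Hypothetical sketch: cross-validated accuracy with and without
# catecholamine features added to the physiological feature set.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 84                                    # e.g., stimuli x participants (assumed)
physio = rng.normal(size=(n, 4))          # skin temp, EDA, HR, respiration features
catecholamine = rng.normal(size=(n, 3))   # adrenaline, noradrenaline, dopamine
y = rng.integers(0, 2, size=n)            # fear vs. no-fear labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc_physio = cross_val_score(clf, physio, y, cv=5).mean()
acc_both = cross_val_score(clf, np.hstack([physio, catecholamine]), y, cv=5).mean()
print(f"physio only: {acc_physio:.2f}, physio + catecholamines: {acc_both:.2f}")
```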

14 pages, 1434 KiB  
Article
Non-Contact Measurement of Empathy Based on Micro-Movement Synchronization
by Ayoung Cho, Sung Park, Hyunwoo Lee and Mincheol Whang
Sensors 2021, 21(23), 7818; https://doi.org/10.3390/s21237818 - 24 Nov 2021
Cited by 4 | Viewed by 2519
Abstract
Tracking consumer empathy is one of the biggest challenges for advertisers. Although numerous studies have shown that consumers’ empathy affects purchasing, there are few quantitative and unobtrusive methods for assessing whether the viewer is sharing congruent emotions with the advertisement. This study suggested a non-contact method for measuring empathy by evaluating the synchronization of micro-movements between consumers and people within the media. Thirty participants viewed 24 advertisements classified as either empathy or non-empathy advertisements. For each viewing, we recorded the facial data and subjective empathy scores. We recorded the facial micro-movements, which reflect the ballistocardiography (BCG) motion, through the carotid artery remotely using a camera without any sensory attachment to the participant. Synchronization in cardiovascular measures (e.g., heart rate) is known to indicate higher levels of empathy. We found that through cross-entropy analysis, the more similar the micro-movements between the participant and the person in the advertisement, the higher the participant’s empathy scores for the advertisement. The study suggests that non-contact BCG methods can be utilized in cases where sensor attachment is ineffective (e.g., measuring empathy between the viewer and the media content) and can be a complementary method to subjective empathy scales. Full article
(This article belongs to the Special Issue Emotion Intelligence Based on Smart Sensing)
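A minimal sketch of a cross-entropy-style dissimilarity between two micro-movement signals, assuming the signals have already been extracted from video; the binning and signal model are assumptions and do not reproduce the authors' analysis:

```python
# Hypothetical sketch: cross-entropy between the micro-movement distributions
# of a viewer and a person in the advertisement (lower = more similar).
import numpy as np

def cross_entropy(p_signal: np.ndarray, q_signal: np.ndarray, bins: int = 16) -> float:
    edges = np.histogram_bin_edges(np.concatenate([p_signal, q_signal]), bins=bins)
    p, _ = np.histogram(p_signal, bins=edges, density=True)
    q, _ = np.histogram(q_signal, bins=edges, density=True)
    p = p / p.sum()
    q = q / q.sum() + 1e-12            # avoid log(0)
    return float(-(p * np.log(q)).sum())

rng = np.random.default_rng(2)
viewer = rng.normal(0, 1.0, 3000)              # viewer micro-movement (placeholder)
actor = viewer + rng.normal(0, 0.3, 3000)      # similar movement -> lower cross-entropy
print(f"cross-entropy: {cross_entropy(viewer, actor):.3f}")
```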

26 pages, 6168 KiB  
Article
A Robust Facial Expression Recognition Algorithm Based on Multi-Rate Feature Fusion Scheme
by Seo-Jeon Park, Byung-Gyu Kim and Naveen Chilamkurti
Sensors 2021, 21(21), 6954; https://doi.org/10.3390/s21216954 - 20 Oct 2021
Cited by 34 | Viewed by 2732
Abstract
In recent years, the importance of catching humans’ emotions grows larger as the artificial intelligence (AI) field is being developed. Facial expression recognition (FER) is a part of understanding the emotion of humans through facial expressions. We proposed a robust multi-depth network that can efficiently classify the facial expression through feeding various and reinforced features. We designed the inputs for the multi-depth network as minimum overlapped frames so as to provide more spatio-temporal information to the designed multi-depth network. To utilize a structure of a multi-depth network, a multirate-based 3D convolutional neural network (CNN) based on a multirate signal processing scheme was suggested. In addition, we made the input images to be normalized adaptively based on the intensity of the given image and reinforced the output features from all depth networks by the self-attention module. Then, we concatenated the reinforced features and classified the expression by a joint fusion classifier. Through the proposed algorithm, for the CK+ database, the result of the proposed scheme showed a comparable accuracy of 96.23%. For the MMI and the GEMEP-FERA databases, it outperformed other state-of-the-art models with accuracies of 96.69% and 99.79%. For the AFEW database, which is known as one in a very wild environment, the proposed algorithm achieved an accuracy of 31.02%. Full article
(This article belongs to the Special Issue Emotion Intelligence Based on Smart Sensing)
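As a small illustration of the multirate idea, frames of a clip can be sampled at several temporal strides so that each network branch sees a different temporal resolution; the strides and clip length below are assumptions, not the authors' configuration:

```python
# Hypothetical sketch: sample a clip's frames at multiple rates so that
# parallel branches receive coarser and finer temporal views.
import numpy as np

frames = np.arange(48)                       # indices of 48 frames in a clip (placeholder)

def sample(frames: np.ndarray, stride: int, length: int = 8) -> np.ndarray:
    return frames[::stride][:length]         # one temporal view for one branch

for stride in (1, 2, 4):                     # each stride would feed a separate 3D CNN branch
    print(f"stride {stride}: {sample(frames, stride)}")
```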

19 pages, 8617 KiB  
Article
Subject-Specific Cognitive Workload Classification Using EEG-Based Functional Connectivity and Deep Learning
by Anmol Gupta, Gourav Siddhad, Vishal Pandey, Partha Pratim Roy and Byung-Gyu Kim
Sensors 2021, 21(20), 6710; https://doi.org/10.3390/s21206710 - 09 Oct 2021
Cited by 17 | Viewed by 4590
Abstract
Cognitive workload is a crucial factor in tasks involving dynamic decision-making and other real-time and high-risk situations. Neuroimaging techniques have long been used for estimating cognitive workload. Given the portability, cost-effectiveness and high time-resolution of EEG as compared to fMRI and other neuroimaging modalities, an efficient method of estimating an individual’s workload using EEG is of paramount importance. Multiple cognitive, psychiatric and behavioral phenotypes have already been known to be linked with “functional connectivity”, i.e., correlations between different brain regions. In this work, we explored the possibility of using different model-free functional connectivity metrics along with deep learning in order to efficiently classify the cognitive workload of the participants. To this end, 64-channel EEG data of 19 participants were collected while they were doing the traditional n-back task. These data (after pre-processing) were used to extract the functional connectivity features, namely Phase Transfer Entropy (PTE), Mutual Information (MI) and Phase Locking Value (PLV). These three were chosen to do a comprehensive comparison of directed and non-directed model-free functional connectivity metrics (allows faster computations). Using these features, three deep learning classifiers, namely CNN, LSTM and Conv-LSTM were used for classifying the cognitive workload as low (1-back), medium (2-back) or high (3-back). With the high inter-subject variability in EEG and cognitive workload and recent research highlighting that EEG-based functional connectivity metrics are subject-specific, subject-specific classifiers were used. Results show the state-of-the-art multi-class classification accuracy with the combination of MI with CNN at 80.87%, followed by the combination of PLV with CNN (at 75.88%) and MI with LSTM (at 71.87%). The highest subject specific performance was achieved by the combinations of PLV with Conv-LSTM, and PLV with CNN with an accuracy of 97.92%, followed by the combination of MI with CNN (at 95.83%) and MI with Conv-LSTM (at 93.75%). The results highlight the efficacy of the combination of EEG-based model-free functional connectivity metrics and deep learning in order to classify cognitive workload. The work can further be extended to explore the possibility of classifying cognitive workload in real-time, dynamic and complex real-world scenarios. Full article
(This article belongs to the Special Issue Emotion Intelligence Based on Smart Sensing)
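Of the three connectivity metrics named above, the phase locking value (PLV) is straightforward to sketch: it is the magnitude of the mean phase difference between two channels. The channel count, data, and filtering below are placeholders (real EEG would be band-pass filtered per frequency band first):

```python
# Hypothetical sketch: PLV connectivity matrix from multichannel EEG.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(3)
eeg = rng.normal(size=(8, 2000))             # 8 channels x 2000 samples (placeholder)

phase = np.angle(hilbert(eeg, axis=1))       # instantaneous phase per channel
n_ch = eeg.shape[0]
plv = np.zeros((n_ch, n_ch))
for i in range(n_ch):
    for j in range(n_ch):
        plv[i, j] = np.abs(np.mean(np.exp(1j * (phase[i] - phase[j]))))

print(plv.round(2))                          # values near 1 indicate strong phase locking
```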

18 pages, 4682 KiB  
Article
Individual’s Social Perception of Virtual Avatars Embodied with Their Habitual Facial Expressions and Facial Appearance
by Sung Park, Si Pyoung Kim and Mincheol Whang
Sensors 2021, 21(17), 5986; https://doi.org/10.3390/s21175986 - 06 Sep 2021
Cited by 30 | Viewed by 6203
Abstract
With the prevalence of virtual avatars and the recent emergence of metaverse technology, there has been an increase in users who express their identity through an avatar. The research community focused on improving the realistic expressions and non-verbal communication channels of virtual characters to create a more customized experience. However, there is a lack in the understanding of how avatars can embody a user’s signature expressions (i.e., user’s habitual facial expressions and facial appearance) that would provide an individualized experience. Our study focused on identifying elements that may affect the user’s social perception (similarity, familiarity, attraction, liking, and involvement) of customized virtual avatars engineered considering the user’s facial characteristics. We evaluated the participant’s subjective appraisal of avatars that embodied the participant’s habitual facial expressions or facial appearance. Results indicated that participants felt that the avatar that embodied their habitual expressions was more similar to them than the avatar that did not. Furthermore, participants felt that the avatar that embodied their appearance was more familiar than the avatar that did not. Designers should be mindful about how people perceive individuated virtual avatars in order to accurately represent the user’s identity and help users relate to their avatar. Full article
(This article belongs to the Special Issue Emotion Intelligence Based on Smart Sensing)

14 pages, 1195 KiB  
Article
Changes in Computer-Analyzed Facial Expressions with Age
by Hyunwoong Ko, Kisun Kim, Minju Bae, Myo-Geong Seo, Gieun Nam, Seho Park, Soowon Park, Jungjoon Ihm and Jun-Young Lee
Sensors 2021, 21(14), 4858; https://doi.org/10.3390/s21144858 - 16 Jul 2021
Cited by 4 | Viewed by 4020
Abstract
Facial expressions are well known to change with age, but the quantitative properties of facial aging remain unclear. In the present study, we investigated the differences in the intensity of facial expressions between older (n = 56) and younger adults (n = 113). In laboratory experiments, the posed facial expressions of the participants were obtained based on six basic emotions and neutral facial expression stimuli, and the intensities of their faces were analyzed using a computer vision tool, OpenFace software. Our results showed that the older adults expressed strong expressions for some negative emotions and neutral faces. Furthermore, when making facial expressions, older adults used more face muscles than younger adults across the emotions. These results may help to understand the characteristics of facial expressions in aging and can provide empirical evidence for other fields regarding facial recognition. Full article
(This article belongs to the Special Issue Emotion Intelligence Based on Smart Sensing)
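A minimal sketch of the group comparison described above, using Welch's t-test on per-participant action unit intensities; in practice these would come from OpenFace output, whereas here they are random placeholders:

```python
# Hypothetical sketch: compare mean AU12 (lip corner puller) intensity
# between older and younger groups with Welch's t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
au12_older = rng.normal(1.4, 0.5, size=56)       # mean AU12 intensity per older adult
au12_younger = rng.normal(1.1, 0.5, size=113)    # per younger adult

t, p = stats.ttest_ind(au12_older, au12_younger, equal_var=False)
print(f"AU12 intensity difference: t = {t:.2f}, p = {p:.3g}")
```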

13 pages, 1913 KiB  
Article
Identification of Video Game Addiction Using Heart-Rate Variability Parameters
by Jung-Yong Kim, Hea-Sol Kim, Dong-Joon Kim, Sung-Kyun Im and Mi-Sook Kim
Sensors 2021, 21(14), 4683; https://doi.org/10.3390/s21144683 - 08 Jul 2021
Cited by 8 | Viewed by 3532
Abstract
The purpose of this study is to determine heart rate variability (HRV) parameters that can quantitatively characterize game addiction by using electrocardiograms (ECGs). Twenty-three subjects were classified into two groups prior to the experiment, 11 game-addicted subjects and 12 non-addicted subjects, using questionnaires (CIUS and IAT). Various HRV parameters were tested to identify the addicted subjects. The subjects played the League of Legends game for 30–40 min. The experimenter measured ECG during the game at various window sizes and specific events. Moreover, correlation and factor analyses were used to find the most effective parameters. A logistic regression equation was formed to calculate the accuracy in diagnosing addicted and non-addicted subjects. The most accurate set of parameters was found to be pNNI20, RMSSD, and LF in the 30 s after the “being killed” event. The logistic regression analysis provided an accuracy of 69.3% to 70.3%. AUC values in this study ranged from 0.654 to 0.677. This study can be noted as an exploratory step in the quantification of game addiction based on the stress response that could be used as an objective diagnostic method in the future. Full article
(This article belongs to the Special Issue Emotion Intelligence Based on Smart Sensing)
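Two of the HRV parameters named above, RMSSD and pNN20 (pNNI20 in the abstract), can be computed directly from successive RR intervals; LF power would additionally require spectral analysis of the RR series. The RR values below are placeholders:

```python
# Hypothetical sketch: RMSSD and pNN20 from a short series of RR intervals (ms).
import numpy as np

rr_ms = np.array([812, 798, 805, 830, 790, 815, 802, 840, 795, 810], dtype=float)

diff = np.diff(rr_ms)
rmssd = np.sqrt(np.mean(diff ** 2))          # root mean square of successive differences
pnn20 = np.mean(np.abs(diff) > 20) * 100     # % of successive differences exceeding 20 ms

print(f"RMSSD = {rmssd:.1f} ms, pNN20 = {pnn20:.1f} %")
```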

17 pages, 37184 KiB  
Article
The Analysis of Emotion Authenticity Based on Facial Micromovements
by Sung Park, Seong Won Lee and Mincheol Whang
Sensors 2021, 21(13), 4616; https://doi.org/10.3390/s21134616 - 05 Jul 2021
Cited by 7 | Viewed by 3443
Abstract
People tend to display fake expressions to conceal their true feelings. False expressions are observable by facial micromovements that occur for less than a second. Systems designed to recognize facial expressions (e.g., social robots, recognition systems for the blind, monitoring systems for drivers) may better understand the user’s intent by identifying the authenticity of the expression. The present study investigated the characteristics of real and fake facial expressions of representative emotions (happiness, contentment, anger, and sadness) in a two-dimensional emotion model. Participants viewed a series of visual stimuli designed to induce real or fake emotions and were signaled to produce a facial expression at a set time. From the participant’s expression data, feature variables (i.e., the degree and variance of movement, and vibration level) involving the facial micromovements at the onset of the expression were analyzed. The results indicated significant differences in the feature variables between the real and fake expression conditions. The differences varied according to facial regions as a function of emotions. This study provides appraisal criteria for identifying the authenticity of facial expressions that are applicable to future research and the design of emotion recognition systems. Full article
(This article belongs to the Special Issue Emotion Intelligence Based on Smart Sensing)
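A hypothetical sketch of simple feature variables for a facial-region micro-movement time series, in the spirit of the degree, variance, and vibration-level features described above; the signal and the frequency band treated as vibration are assumptions:

```python
# Hypothetical sketch: degree, variance, and vibration-level features from a
# per-frame facial-region displacement signal.
import numpy as np

rng = np.random.default_rng(5)
fps = 30
displacement = rng.normal(0, 0.2, size=90)           # 3 s of per-frame movement (placeholder)

degree = np.mean(np.abs(displacement))                # overall amount of movement
variance = np.var(displacement)                       # variability of movement
spectrum = np.abs(np.fft.rfft(displacement)) ** 2
freqs = np.fft.rfftfreq(displacement.size, d=1 / fps)
vibration = spectrum[(freqs >= 3) & (freqs <= 7)].sum()   # power in an assumed 3-7 Hz band

print(f"degree={degree:.3f}, variance={variance:.4f}, vibration={vibration:.2f}")
```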

18 pages, 2869 KiB  
Article
Multi-Path and Group-Loss-Based Network for Speech Emotion Recognition in Multi-Domain Datasets
by Kyoung Ju Noh, Chi Yoon Jeong, Jiyoun Lim, Seungeun Chung, Gague Kim, Jeong Mook Lim and Hyuntae Jeong
Sensors 2021, 21(5), 1579; https://doi.org/10.3390/s21051579 - 24 Feb 2021
Cited by 12 | Viewed by 3513
Abstract
Speech emotion recognition (SER) is a natural method of recognizing individual emotions in everyday life. To distribute SER models to real-world applications, some key challenges must be overcome, such as the lack of datasets tagged with emotion labels and the weak generalization of the SER model for an unseen target domain. This study proposes a multi-path and group-loss-based network (MPGLN) for SER to support multi-domain adaptation. The proposed model includes a bidirectional long short-term memory-based temporal feature generator and a transferred feature extractor from the pre-trained VGG-like audio classification model (VGGish), and it learns simultaneously based on multiple losses according to the association of emotion labels in the discrete and dimensional models. For the evaluation of the MPGLN SER as applied to multi-cultural domain datasets, the Korean Emotional Speech Database (KESD), including KESDy18 and KESDy19, is constructed, and the English-speaking Interactive Emotional Dyadic Motion Capture database (IEMOCAP) is used. The evaluation of multi-domain adaptation and domain generalization showed 3.7% and 3.5% improvements, respectively, of the F1 score when comparing the performance of MPGLN SER with a baseline SER model that uses a temporal feature generator. We show that the MPGLN SER efficiently supports multi-domain adaptation and reinforces model generalization. Full article
(This article belongs to the Special Issue Emotion Intelligence Based on Smart Sensing)
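A minimal sketch of training with simultaneous losses over discrete and dimensional emotion labels, which is the general idea behind learning from multiple label associations; this is not the MPGLN model, and the encoder, feature sizes, and loss weighting are assumptions:

```python
# Hypothetical sketch: one shared encoder with a categorical head and a
# valence/arousal head, trained with a summed loss.
import torch
import torch.nn as nn

class TwoHeadSER(nn.Module):
    def __init__(self, feat_dim: int = 128, n_classes: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU())
        self.discrete_head = nn.Linear(64, n_classes)    # e.g., angry/happy/neutral/sad
        self.dimensional_head = nn.Linear(64, 2)         # valence, arousal

    def forward(self, x):
        h = self.encoder(x)
        return self.discrete_head(h), self.dimensional_head(h)

model = TwoHeadSER()
x = torch.randn(16, 128)                  # dummy utterance-level features
y_cls = torch.randint(0, 4, (16,))        # discrete labels
y_dim = torch.rand(16, 2)                 # dimensional labels

logits, dims = model(x)
loss = nn.CrossEntropyLoss()(logits, y_cls) + 0.5 * nn.MSELoss()(dims, y_dim)
loss.backward()
print(float(loss))
```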

Review

80 pages, 6415 KiB  
Review
A Review of AI Cloud and Edge Sensors, Methods, and Applications for the Recognition of Emotional, Affective and Physiological States
by Arturas Kaklauskas, Ajith Abraham, Ieva Ubarte, Romualdas Kliukas, Vaida Luksaite, Arune Binkyte-Veliene, Ingrida Vetloviene and Loreta Kaklauskiene
Sensors 2022, 22(20), 7824; https://doi.org/10.3390/s22207824 - 14 Oct 2022
Cited by 11 | Viewed by 5100
Abstract
Affective, emotional, and physiological states (AFFECT) detection and recognition by capturing human signals is a fast-growing area, which has been applied across numerous domains. The research aim is to review publications on how techniques that use brain and biometric sensors can be used for AFFECT recognition, consolidate the findings, provide a rationale for the current methods, compare the effectiveness of existing methods, and quantify how likely they are to address the issues/challenges in the field. In efforts to achieve the key goals of Society 5.0, Industry 5.0, and human-centered design better, the recognition of emotional, affective, and physiological states is progressively becoming an important matter and offers tremendous growth of knowledge and progress in these and other related fields. In this research, a review of AFFECT recognition brain and biometric sensors, methods, and applications was performed, based on Plutchik’s wheel of emotions. Due to the immense variety of existing sensors and sensing systems, this study aimed to provide an analysis of the available sensors that can be used to define human AFFECT, and to classify them based on the type of sensing area and their efficiency in real implementations. Based on statistical and multiple criteria analysis across 169 nations, our outcomes introduce a connection between a nation’s success, its number of Web of Science articles published, and its frequency of citation on AFFECT recognition. The principal conclusions present how this research contributes to the big picture in the field under analysis and explore forthcoming study trends. Full article
(This article belongs to the Special Issue Emotion Intelligence Based on Smart Sensing)