Emotion Monitoring System Based on Sensors and Data Analysis

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (15 May 2021) | Viewed by 46019

Special Issue Editor


Prof. Dr. Zdzisław Kowalczuk
Guest Editor
Department of Decision Systems and Robotics, Faculty of Electronics, Telecommunications and Informatics, Gdańsk University of Technology, Narutowicza 11/12, 80-233 Gdańsk, Poland
Interests: automatic control and robotics; modeling and identification; estimation; artificial intelligence; evolutionary computations; computational intelligence; cognitive systems

Special Issue Information

Dear Colleagues,

The automatic monitoring of emotions is a problem that must be solved to successfully perform various tasks, from enabling more natural human–computer interaction to assessing the effectiveness of a patient's medical therapy. The most advanced solutions are based on a fusion of many data sources, including audiovisual measurements, text, and medical parameters. These data are then processed by advanced data analysis algorithms, ranging from statistics and mathematical modeling to machine learning and artificial neural networks.

The great complexity of the problem of recognizing emotions, which under some conditions is difficult even for people, requires a large data stream that, in subsequent stages, is processed into features of increasingly high levels of abstraction, ending with the rather difficult-to-define concept of emotion. This mirrors the deep learning process, in which raw data are transformed, stage by stage, into higher-level features.

For this reason, deep neural networks are considered one of the most promising methods of data analysis in emotion monitoring systems (EMS). Deep neural networks, if applicable at all, are, however, only part of a larger process that includes the following steps:

  1. Selecting the right set of sensors and methods for measuring and combining data (raw data are created at this stage),
  2. Preliminary processing (selection of features) so that the original large data stream can be limited to the appropriate size and meaning (at this stage, statistical analysis and machine learning methods are used, supported by expert knowledge),
  3. Building a system to analyze the processed data stream to generate output variables describing emotions.

The second stage is extremely important because it is part of data mining and iterative, data-based learning; it enables the integration of expert knowledge while not yet producing a so-called black-box solution that (ultimately) processes raw data directly.
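
As a rough illustration of these three stages, consider the hedged sketch below: synthetic "raw" fused sensor data pass through statistical feature selection (stage 2) into a simple classifier (stage 3). All names, dimensions, and model choices are illustrative assumptions, not a reference EMS implementation.

```python
# A minimal, hypothetical sketch of the three-stage EMS pipeline described
# above, using synthetic stand-in data and scikit-learn.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)

# Stage 1: acquisition and fusion -- a stand-in for fused audiovisual, text,
# and medical measurements (500 samples, 200 raw features, 4 emotion labels).
X_raw = rng.normal(size=(500, 200))
y = rng.integers(0, 4, size=500)

# Stage 2: preliminary processing -- statistical feature selection narrows
# the raw stream to a subset of appropriate size and meaning.
# Stage 3: analysis -- a classifier maps the selected features to emotions.
ems = Pipeline([
    ("select", SelectKBest(f_classif, k=20)),
    ("classify", LogisticRegression(max_iter=1000)),
])
ems.fit(X_raw, y)
print("training accuracy:", ems.score(X_raw, y))
```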

The proposed topic of EMS based on sensors and data analysis (SDA) also includes its specific applications in the form of human reaction validation (HRV) subprojects, whose main goal is to develop a data-based tool that allows estimating or forecasting the broadly understood emotional state of humans. In the simplest case, a database model based on the provided data lake can be developed, describing the effect of certain stimuli on people. Alternatively, one can try to develop a cybernetic model of human psychology based on the same data and adapted to the needs of various projects.

We are pleased to invite you to submit your papers to this Special Issue of Sensors, "Emotion Monitoring System Based on Sensors and Data Analysis". This Special Issue aims to discuss the latest research progress in the above-described field of emotion monitoring. We encourage submissions of conceptual and empirical papers focused on this subject. Different types of approaches related to this field are welcome.

Prof. Dr. Zdzisław Kowalczuk
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensor networks
  • data analysis
  • data fusion
  • human–computer interactions
  • diagnosis and medical treatment
  • statistical analysis
  • mathematical modeling
  • cybernetic models
  • artificial and deep neural networks
  • machine learning
  • expert knowledge
  • monitoring, recognizing, estimating, and forecasting of emotions

Published Papers (10 papers)


Research


17 pages, 379 KiB  
Article
Emotion Recognition on Edge Devices: Training and Deployment
by Vlad Pandelea, Edoardo Ragusa, Tommaso Apicella, Paolo Gastaldo and Erik Cambria
Sensors 2021, 21(13), 4496; https://doi.org/10.3390/s21134496 - 30 Jun 2021
Cited by 4 | Viewed by 2852
Abstract
Emotion recognition, among other natural language processing tasks, has greatly benefited from the use of large transformer models. Deploying these models on resource-constrained devices, however, is a major challenge due to their computational cost. In this paper, we show that the combination of large transformers, as high-quality feature extractors, and simple hardware-friendly classifiers based on linear separators can achieve competitive performance while allowing real-time inference and fast training. Various solutions, including batch and Online Sequential Learning, are analyzed. Additionally, our experiments show that latency and performance can be further improved via dimensionality reduction and pre-training, respectively. The resulting system is implemented on two types of edge devices, namely an edge accelerator and two smartphones.
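
The general recipe this abstract describes — a frozen transformer as a high-quality feature extractor, with a hardware-friendly linear separator (and optional dimensionality reduction) trained on top — might be sketched as follows. The embeddings are random stand-ins for real transformer outputs, and all shapes and model choices are assumptions, not the authors' implementation.

```python
# Hedged sketch: linear separator on top of (simulated) frozen transformer
# embeddings, with PCA to cut latency, loosely following the abstract.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
emb = rng.normal(size=(1000, 768))      # stand-in for transformer features
labels = rng.integers(0, 6, size=1000)  # six emotion classes (assumed)

# PCA reduces dimensionality (lower latency); the linear separator keeps
# training cheap enough for on-device learning.
clf = make_pipeline(PCA(n_components=64), SGDClassifier(loss="log_loss"))
clf.fit(emb, labels)
print(clf.predict(emb[:5]))
```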

18 pages, 5821 KiB  
Article
Utterance Level Feature Aggregation with Deep Metric Learning for Speech Emotion Recognition
by Bogdan Mocanu, Ruxandra Tapu and Titus Zaharia
Sensors 2021, 21(12), 4233; https://doi.org/10.3390/s21124233 - 20 Jun 2021
Cited by 12 | Viewed by 3143
Abstract
Emotion is a form of high-level paralinguistic information that is intrinsically conveyed by human speech. Automatic speech emotion recognition is an essential challenge for various applications, including mental disease diagnosis, audio surveillance, human behavior understanding, e-learning, and human–machine/robot interaction. In this paper, we introduce a novel speech emotion recognition method based on the Squeeze and Excitation ResNet (SE-ResNet) model fed with spectrogram inputs. In order to overcome the limitations of state-of-the-art techniques, which fail to provide a robust feature representation at the utterance level, the CNN architecture is extended with a trainable discriminative GhostVLAD clustering layer that aggregates the audio features into a compact, single-utterance vector representation. In addition, an end-to-end neural embedding approach is introduced, based on an emotionally constrained triplet loss function. The loss function integrates the relations between the various emotional patterns and thus improves the latent space data representation. The proposed methodology achieves 83.35% and 64.92% global accuracy rates on the publicly available RAVDESS and CREMA-D datasets, respectively. When compared with the results provided by human observers, the gains in global accuracy exceed 24%. Finally, an objective comparative evaluation with state-of-the-art techniques demonstrates accuracy gains of more than 3%.
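
The emotionally constrained triplet loss is not spelled out in the abstract; the sketch below shows only the plain triplet objective such a loss builds on, with the emotional constraint terms omitted. Shapes and the margin value are assumptions.

```python
# Plain triplet objective (PyTorch): pull same-emotion utterance embeddings
# together and push different-emotion embeddings apart by a margin. The
# paper's additional emotional constraints are not reproduced here.
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_pos = F.pairwise_distance(anchor, positive)   # same-emotion distance
    d_neg = F.pairwise_distance(anchor, negative)   # different-emotion distance
    return F.relu(d_pos - d_neg + margin).mean()

# Toy 128-D utterance embeddings; in the paper these would come from
# SE-ResNet features aggregated by the GhostVLAD layer.
a, p, n = (torch.randn(8, 128) for _ in range(3))
print(triplet_loss(a, p, n))
```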

29 pages, 84427 KiB  
Article
Estimation of Organizational Competitiveness by a Hybrid of One-Dimensional Convolutional Neural Networks and Self-Organizing Maps Using Physiological Signals for Emotional Analysis of Employees
by Saad Awadh Alanazi, Madallah Alruwaili, Fahad Ahmad, Alaa Alaerjan and Nasser Alshammari
Sensors 2021, 21(11), 3760; https://doi.org/10.3390/s21113760 - 28 May 2021
Cited by 12 | Viewed by 2617
Abstract
The theory of modern organizations considers emotional intelligence to be the metric for tools that enable organizations to create a competitive vision. It also helps corporate leaders enthusiastically adhere to the vision and energize organizational stakeholders to accomplish the vision. In this study, a one-dimensional convolutional neural network classification model is initially employed to interpret and evaluate shifts in emotion over time by categorizing emotional states that occur at particular moments during mutual interaction, using physiological signals. The self-organizing map technique is implemented to cluster overall organizational emotions to represent organizational competitiveness. The analysis of variance test results indicates no significant difference in age and body mass index for participants exhibiting different emotions. However, a significant mean difference was observed for the blood volume pulse, galvanic skin response, skin temperature, valence, and arousal values, indicating the effectiveness of the chosen physiological sensors and their measures for analyzing emotions for organizational competitiveness. We achieved 99.8% classification accuracy for emotions using the proposed technique. The study precisely identifies the emotions and locates a connection between emotional intelligence and organizational competitiveness (i.e., a positive relationship with employees augments organizational competitiveness).
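
As one piece of the pipeline above, a one-dimensional CNN over windows of physiological signals might look like the hedged sketch below; the layer sizes, channel count (BVP, GSR, skin temperature), and class count are assumptions, and the self-organizing map stage is not shown.

```python
# Hypothetical 1-D CNN for emotion classification from physiological
# signal windows; sizes are illustrative, not the authors' architecture.
import torch
import torch.nn as nn

class Emotion1DCNN(nn.Module):
    def __init__(self, channels=3, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # one summary value per filter
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                     # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = Emotion1DCNN()
print(model(torch.randn(2, 3, 256)).shape)    # torch.Size([2, 4])
```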

21 pages, 2283 KiB  
Article
EEG-Based Emotion Recognition Using an Improved Weighted Horizontal Visibility Graph
by Tianjiao Kong, Jie Shao, Jiuyuan Hu, Xin Yang, Shiyiling Yang and Reza Malekian
Sensors 2021, 21(5), 1870; https://doi.org/10.3390/s21051870 - 07 Mar 2021
Cited by 17 | Viewed by 3475
Abstract
Emotion recognition, as a challenging and active research area, has received considerable attention in recent years. In this study, an attempt was made to extract complex network features from electroencephalogram (EEG) signals for emotion recognition. We proposed a novel method of constructing forward weighted horizontal visibility graphs (FWHVG) and backward weighted horizontal visibility graphs (BWHVG) based on angle measurement. The two types of complex networks were used to extract network features. Then, the two feature matrices were fused into a single feature matrix to classify EEG signals. The average emotion recognition accuracies of the proposed method based on complex network features were 97.53% and 97.75% in the valence and arousal dimensions, respectively. The proposed method achieved classification accuracies of 98.12% and 98.06% for valence and arousal when combined with time-domain features.
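
The horizontal visibility rule itself is standard: two samples are connected if every sample between them is lower than both. The sketch below builds such a graph and attaches an angle-based weight; the specific forward/backward weighting of the paper's FWHVG/BWHVG is not reproduced, so treat the weight definition as an assumption.

```python
# Weighted horizontal visibility graph from a 1-D signal. Adjacent samples
# always see each other; a taller-or-equal sample blocks all further views.
import numpy as np

def weighted_hvg(x):
    """Return edges (i, j, weight) under the horizontal visibility rule."""
    edges = []
    for i in range(len(x) - 1):
        for j in range(i + 1, len(x)):
            if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                # one plausible angle-based weight: slope angle of the chord
                edges.append((i, j, np.arctan2(x[j] - x[i], j - i)))
            if x[j] >= x[i]:       # view from i is now blocked; stop early
                break
    return edges

print(weighted_hvg(np.array([1.0, 3.0, 2.0, 4.0, 1.5])))
```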

26 pages, 1446 KiB  
Article
Real-Time Emotion Classification Using EEG Data Stream in E-Learning Contexts
by Arijit Nandi, Fatos Xhafa, Laia Subirats and Santi Fort
Sensors 2021, 21(5), 1589; https://doi.org/10.3390/s21051589 - 25 Feb 2021
Cited by 29 | Viewed by 6950
Abstract
In face-to-face and online learning, emotions and emotional intelligence have an influence and play an essential role. Learners’ emotions are crucial for e-learning systems because they promote or restrain learning. Many researchers have investigated the impact of emotions in enhancing and maximizing e-learning outcomes. Several machine learning and deep learning approaches have also been proposed to achieve this goal. All such approaches are suited to an offline mode, where the data for emotion classification are stored and can be accessed indefinitely. However, these offline approaches are inappropriate for real-time emotion classification, where the data arrive in a continuous stream and the model can see each sample only once. Real-time responses according to the emotional state are also needed. For this, we propose a real-time emotion classification system (RECS) based on Logistic Regression (LR) trained in an online fashion using the Stochastic Gradient Descent (SGD) algorithm. The proposed RECS is capable of classifying emotions in real time by training the model in an online fashion using an EEG signal stream. To validate the performance of the RECS, we used the DEAP data set, the most widely used benchmark data set for emotion classification. The results show that the proposed approach can effectively classify emotions in real time from the EEG data stream, achieving better accuracy and F1-scores than other offline and online approaches. The developed real-time emotion classification system is analyzed in an e-learning context scenario.
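
The core training loop of such a system — logistic regression updated sample-by-sample with SGD as the EEG feature stream arrives — can be sketched with scikit-learn's partial_fit; the stream is simulated here, and the feature dimension and class coding are assumptions.

```python
# Hedged sketch of online (streaming) emotion classification: logistic
# regression trained by SGD, one windowed EEG feature vector at a time.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
classes = np.array([0, 1, 2, 3])          # e.g., valence/arousal quadrants
clf = SGDClassifier(loss="log_loss")      # logistic regression via SGD

for step in range(1000):                  # simulated EEG feature stream
    x = rng.normal(size=(1, 32))          # one feature vector per window
    y = rng.integers(0, 4, size=1)
    clf.partial_fit(x, y, classes=classes)  # each sample is seen only once
    if step % 250 == 0:
        print("step", step, "prediction:", clf.predict(x))
```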

22 pages, 7213 KiB  
Article
EEG-Based Emotion Classification Using Long Short-Term Memory Network with Attention Mechanism
by Youmin Kim and Ahyoung Choi
Sensors 2020, 20(23), 6727; https://doi.org/10.3390/s20236727 - 25 Nov 2020
Cited by 39 | Viewed by 3922
Abstract
Recently, studies that analyze emotions based on physiological signals, such as the electroencephalogram (EEG), by applying deep learning algorithms have been actively conducted. However, sequence modeling that considers the change of emotional signals over time has not been fully investigated. To consider the long-term interaction of emotion, in this study, we propose a long short-term memory network that considers changes in emotion over time and apply an attention mechanism to assign weights to the emotional states appearing at specific moments, based on the peak–end rule in psychology. We used 32-channel EEG data from the DEAP database. Two-level (low and high) and three-level (low, middle, and high) classification experiments were performed on the valence and arousal emotion models. The results show accuracies of 90.1% and 87.9% using the two-level classification for the valence and arousal models with four-fold cross validation, respectively. In the case of the three-level classification, these values were 83.5% and 82.6%, respectively. Additional experiments were conducted using a network combining a convolutional neural network (CNN) submodule with the proposed model. The obtained results showed accuracies of 90.1% and 88.3% in the case of the two-level classification and 86.9% and 84.1% in the case of the three-level classification for the valence and arousal models with four-fold cross validation, respectively. In 10-fold cross validation, accuracies were 91.8% for valence and 91.6% for arousal.
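
A minimal version of the idea — an LSTM over the EEG sequence with an attention layer that weights specific moments before classification — is sketched below; dimensions are assumptions, and this is not the authors' exact architecture.

```python
# LSTM with additive attention pooling over time (PyTorch sketch).
import torch
import torch.nn as nn

class AttnLSTM(nn.Module):
    def __init__(self, in_dim=32, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)         # score each time step
        self.out = nn.Linear(hidden, n_classes)

    def forward(self, x):                        # x: (batch, time, features)
        h, _ = self.lstm(x)                      # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)   # attention weights over time
        context = (w * h).sum(dim=1)             # weighted sequence summary
        return self.out(context)

model = AttnLSTM()
print(model(torch.randn(4, 128, 32)).shape)      # torch.Size([4, 2])
```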

19 pages, 1473 KiB  
Article
Physiological Sensors Based Emotion Recognition While Experiencing Tactile Enhanced Multimedia
by Aasim Raheel, Muhammad Majid, Majdi Alnowami and Syed Muhammad Anwar
Sensors 2020, 20(14), 4037; https://doi.org/10.3390/s20144037 - 21 Jul 2020
Cited by 48 | Viewed by 5564
Abstract
Emotion recognition has increased the potential of affective computing by obtaining instant feedback from users and thereby enabling a better understanding of their behavior. Physiological sensors have been used to recognize human emotions in response to audio and video content that engages single (auditory) and multiple (two: auditory and vision) human senses, respectively. In this study, human emotions were recognized using physiological signals observed in response to tactile enhanced multimedia content that engages three (tactile, vision, and auditory) human senses. The aim was to give users an enhanced real-world sensation while engaging with multimedia content. To this end, four videos were selected and synchronized with an electric fan and a heater, based on timestamps within the scenes, to generate tactile enhanced content with cold and hot air effects, respectively. Physiological signals, i.e., electroencephalography (EEG), photoplethysmography (PPG), and galvanic skin response (GSR), were recorded using commercially available sensors while experiencing these tactile enhanced videos. The precision of the acquired physiological signals (including EEG, PPG, and GSR) is enhanced using pre-processing with a Savitzky–Golay smoothing filter. Frequency-domain features (rational asymmetry, differential asymmetry, and correlation) from EEG, time-domain features (variance, entropy, kurtosis, and skewness) from GSR, and heart rate and heart rate variability from PPG data are extracted. The k nearest neighbor classifier is applied to the extracted features to classify four emotions (happy, relaxed, angry, and sad). Our experimental results show that, among individual modalities, PPG-based features give the highest accuracy of 78.57%, as compared to EEG- and GSR-based features. The fusion of EEG, GSR, and PPG features further improved the classification accuracy to 79.76% (for four emotions) when interacting with tactile enhanced multimedia.
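
The feature-level fusion and k nearest neighbor classification step might be sketched as below; the per-modality feature counts are assumptions loosely matching the features listed in the abstract.

```python
# Sketch: concatenate EEG, GSR, and PPG features, then classify with k-NN.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(4)
eeg = rng.normal(size=(120, 6))   # e.g., asymmetry/correlation features
gsr = rng.normal(size=(120, 4))   # variance, entropy, kurtosis, skewness
ppg = rng.normal(size=(120, 2))   # heart rate, heart rate variability
y = rng.integers(0, 4, size=120)  # happy, relaxed, angry, sad

X = np.hstack([eeg, gsr, ppg])    # feature-level fusion
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
print(knn.predict(X[:5]))
```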

16 pages, 2157 KiB  
Article
Enhancing BCI-Based Emotion Recognition Using an Improved Particle Swarm Optimization for Feature Selection
by Zina Li, Lina Qiu, Ruixin Li, Zhipeng He, Jun Xiao, Yan Liang, Fei Wang and Jiahui Pan
Sensors 2020, 20(11), 3028; https://doi.org/10.3390/s20113028 - 27 May 2020
Cited by 37 | Viewed by 4266
Abstract
Electroencephalogram (EEG) signals have been widely used in emotion recognition. However, current EEG-based emotion recognition has low classification accuracy, and its real-time application is limited. In order to address these issues, in this paper, we proposed an improved feature selection algorithm to recognize subjects’ emotional states based on EEG signals, and combined this feature selection method to design an online emotion recognition brain–computer interface (BCI) system. Specifically, first, features of different dimensions from the time domain, frequency domain, and time-frequency domain were extracted. Then, a modified particle swarm optimization (PSO) method with multi-stage linearly-decreasing inertia weight (MLDW) was proposed for feature selection. The MLDW algorithm can be used to easily refine the process of decreasing the inertia weight. Finally, the emotion types were classified by a support vector machine classifier. We extracted different features from the EEG data in the DEAP data set, collected from 32 subjects, to perform two offline experiments. Our results showed that the average accuracy of four-class emotion recognition reached 76.67%. Compared with the latest benchmark, our proposed MLDW-PSO feature selection improves the accuracy of EEG-based emotion recognition. To further validate the efficiency of the MLDW-PSO feature selection method, we developed an online two-class emotion recognition system evoked by Chinese videos, which achieved good performance for 10 healthy subjects, with an average accuracy of 89.5%. The effectiveness of our method was thus demonstrated.
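
The abstract does not give the MLDW stage boundaries or weight values; the sketch below shows one way a multi-stage linearly-decreasing inertia weight schedule could look, with all numbers as assumptions.

```python
# Hypothetical multi-stage linearly-decreasing inertia weight (MLDW) for PSO:
# the weight falls linearly within each stage, restarting the slope per stage.
def mldw_inertia(t, t_max, stages=((0.9, 0.7), (0.7, 0.5), (0.5, 0.4))):
    stage_len = t_max / len(stages)
    s = min(int(t // stage_len), len(stages) - 1)  # current stage index
    w_start, w_end = stages[s]
    frac = (t - s * stage_len) / stage_len         # progress within the stage
    return w_start + (w_end - w_start) * frac

for t in (0, 25, 50, 75, 99):
    print(t, round(mldw_inertia(t, 100), 3))
```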

15 pages, 541 KiB  
Article
A Wrapper Feature Selection Algorithm: An Emotional Assessment Using Physiological Recordings from Wearable Sensors
by Inma Mohino-Herranz, Roberto Gil-Pita, Joaquín García-Gómez, Manuel Rosa-Zurera and Fernando Seoane
Sensors 2020, 20(1), 309; https://doi.org/10.3390/s20010309 - 06 Jan 2020
Cited by 7 | Viewed by 3234
Abstract
Assessing emotional state is an emerging application field, boosting research on the analysis of non-invasive biosignals to find effective markers that accurately determine the emotional state in real time. Nowadays, using wearable sensors, electrocardiogram and thoracic impedance measurements can be recorded, facilitating the analysis of cardiac and respiratory functions directly and autonomic nervous system function indirectly. Such analysis allows distinguishing between different emotional states: neutral, sadness, and disgust. This work focused specifically on the proposal of a k-fold approach for selecting features while training the classifier that reduces the loss of generalization. The performance of the proposed algorithm used as the selection criterion was compared to the commonly used standard error function. The proposed k-fold approach outperforms the conventional method with a 4% improvement in hit rate, reaching an accuracy near 78%. Moreover, the proposed selection criterion allows the classifier to achieve its best performance using a smaller number of features at a lower computational cost. A reduced number of features reduces the risk of overfitting, while a lower computational cost contributes to implementing real-time systems using wearable electronics.
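
One plausible reading of the approach — a wrapper that scores candidate feature subsets by k-fold cross-validated accuracy rather than a single training-error criterion — is sketched below with greedy forward selection; the data, classifier, and stopping rule are assumptions.

```python
# Greedy forward wrapper selection scored by 5-fold cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 15))    # synthetic biosignal features
y = rng.integers(0, 3, size=200)  # neutral / sadness / disgust (assumed)

selected, remaining, best = [], list(range(X.shape[1])), 0.0
while remaining:
    scores = {f: cross_val_score(DecisionTreeClassifier(random_state=0),
                                 X[:, selected + [f]], y, cv=5).mean()
              for f in remaining}
    f_best = max(scores, key=scores.get)
    if scores[f_best] <= best:    # stop when k-fold accuracy stops improving
        break
    best = scores[f_best]
    selected.append(f_best)
    remaining.remove(f_best)

print("selected:", selected, "cv accuracy:", round(best, 3))
```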

Review


43 pages, 674 KiB  
Review
A Review of Emotion Recognition Methods Based on Data Acquired via Smartphone Sensors
by Agata Kołakowska, Wioleta Szwoch and Mariusz Szwoch
Sensors 2020, 20(21), 6367; https://doi.org/10.3390/s20216367 - 08 Nov 2020
Cited by 33 | Viewed by 8190
Abstract
In recent years, emotion recognition algorithms have achieved high efficiency, allowing the development of various affective and affect-aware applications. This advancement has taken place mainly in the environment of personal computers, which offer the appropriate hardware and sufficient power to process complex data from video, audio, and other channels. However, the increase in the computing and communication capabilities of smartphones, the variety of their built-in sensors, and the availability of cloud computing services have made them an environment in which the task of recognising emotions can be performed at least as effectively. This is possible, and particularly important, due to the fact that smartphones and other mobile devices have become the main computing devices used by most people. This article provides a systematic overview of publications from the last 10 years related to emotion recognition methods using smartphone sensors. The characteristics of the most important sensors in this respect are presented, along with the methods applied to extract informative features from the data read from these input channels. Then, various machine learning approaches implemented to recognise emotional states are described.
