Advances in Computer Vision and Affective Computing for Emotion Understanding and Interpretation

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (15 August 2023)

Special Issue Editors


Dr. Elena Carlotta Olivetti
Guest Editor
Department of Management and Production Engineering, Politecnico di Torino, 10129 Torino, Italy
Interests: 3D face analysis; emotional analysis

Dr. Luca Ulrich
Guest Editor
Department of Management and Production Engineering, Politecnico di Torino, 10129 Torino, Italy
Interests: computer vision; digital image processing; image segmentation; human–computer interaction; image analysis; 3D image processing; medical and biomedical image processing; extended reality; augmented reality; mixed reality; artificial intelligence

Special Issue Information

Dear Colleagues,

Advances in computer vision and graphics enable the more effective development of virtual reality (VR), making this technology suitable for a wide range of applications, from medicine to entertainment and product design. Additionally, an increasing body of literature shows the suitability of virtual environments (VEs) for eliciting emotional activity, thanks to their affective potential and to the link between emotion and the sense of presence and immersion. Nevertheless, our understanding of emotion elicitation and categorization is still far from satisfactory. The study of specific patterns of emotions, emotional states, and affective dimensions would advance knowledge in this interdisciplinary branch of research. New findings in the area of emotional pattern detection and recognition would thus strengthen research on affective computing, paving the way for radically new insights into human–computer interaction (HCI) and emotional artificial intelligence.

This Special Issue is open to, but not limited to, articles on emotional analysis and elicitation involving:

  • VR technologies (standard monitors, head-mounted displays, etc.) to elicit emotions;
  • Strategies to design affective virtual environments;
  • Facial expression recognition/facial emotion recognition;
  • Multimodal experiments investigating conscious/unconscious emotional states after affective stimuli;
  • Stress detection;
  • Suitability of physiological responses to detect affective and cognitive states (electroencephalographic, cardiac, electrodermal, muscular, etc.);
  • Methodologies for assessing emotions based on physiological data, with a particular focus on machine/deep learning techniques;
  • Novel, experimentally grounded theories of emotional representation beyond the established dimensional (valence, arousal, dominance) and discrete (basic emotions) models;
  • Theories and experiments aimed at investigating the physiological signature of emotional activation.

Technical Program Committee Member:

  • Prof. Sandro Moos, Politecnico di Torino

Dr. Elena Carlotta Olivetti
Dr. Luca Ulrich
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • human–computer interaction
  • affective computing
  • emotion detection
  • emotion recognition
  • virtual environments
  • physiological responses
  • face analysis
  • emotional pattern

Published Papers (3 papers)


Research

27 pages, 14701 KiB  
Article
Exploring User Engagement in Museum Scenario with EEG—A Case Study in MAV Craftsmanship Museum in Valle d’Aosta Region, Italy
by Ivonne Angelica Castiblanco Jimenez, Francesca Nonis, Elena Carlotta Olivetti, Luca Ulrich, Sandro Moos, Maria Grazia Monaci, Federica Marcolin and Enrico Vezzetti
Electronics 2023, 12(18), 3810; https://doi.org/10.3390/electronics12183810 - 08 Sep 2023
Cited by 1
Abstract
In the last decade, museums and exhibitions have benefited from advances in virtual reality technologies to create virtual elements that complement the traditional visit. The aim is to make collections more engaging, interactive, comprehensible and accessible. Moreover, studies of users' and visitors' engagement suggest that the real affective state cannot be fully assessed with self-assessment techniques and that physiological techniques, such as EEG, should be adopted to gain a more unbiased and mature understanding of their feelings. With the aim of contributing to bridging this knowledge gap, this work adopts EEG-based indicators from the literature (valence, arousal, engagement) to analyze the affective state of 95 visitors interacting physically or virtually (in a VR environment) with five handicraft objects belonging to the permanent collection of the Museo dell’Artigianato Valdostano di Tradizione, a traditional craftsmanship museum in the Valle d’Aosta region. Extreme Gradient Boosting (XGBoost) was adopted to classify the obtained engagement measures, which were labeled according to questionnaire replies. EEG analysis played a fundamental role in understanding the cognitive and emotional processes underlying immersive experiences, highlighting the potential of VR technologies in enhancing participants’ cognitive engagement. The results indicate that EEG-based indicators share common trends with self-assessment, suggesting that their use as ‘the ground truth of emotion’ is a viable option.
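As a rough illustration of the analysis step summarized above (per-interaction EEG-based valence, arousal, and engagement indicators classified with XGBoost against questionnaire-derived labels), the following is a minimal sketch; the file name, column names, and hyperparameters are assumptions for illustration, not the authors' actual code.

```python
# Minimal sketch: classifying questionnaire-derived engagement labels from
# EEG-based indicators (valence, arousal, engagement) with XGBoost.
# File name, column names, and hyperparameters are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

# One row per visitor/object interaction: EEG indicators + questionnaire label.
data = pd.read_csv("eeg_engagement_indicators.csv")  # hypothetical file
X = data[["valence", "arousal", "engagement"]]
y = data["engaged_label"]  # e.g., 0 = low engagement, 1 = high engagement

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))
```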

16 pages, 3532 KiB  
Article
A Model for EEG-Based Emotion Recognition: CNN-Bi-LSTM with Attention Mechanism
by Zhentao Huang, Yahong Ma, Rongrong Wang, Weisu Li and Yongsheng Dai
Electronics 2023, 12(14), 3188; https://doi.org/10.3390/electronics12143188 - 22 Jul 2023
Cited by 6
Abstract
Emotion analysis is a key technology in human–computer emotional interaction and has gradually become a research hotspot in the field of artificial intelligence. The key problems of EEG-based emotion analysis are feature extraction and classifier design. Existing methods mainly use machine learning and rely on manually extracted features. As an end-to-end approach, deep learning can automatically extract EEG features and classify them. However, most deep learning models for EEG-based emotion recognition still require manual screening and data pre-processing, and their accuracy and convenience are not high enough. Therefore, this paper proposes a CNN-Bi-LSTM-Attention model to automatically extract features and classify emotions from EEG signals. The original EEG data are used as input, a CNN and a Bi-LSTM network are used for feature extraction and fusion, and the electrode channel weights are then balanced through an attention mechanism layer. Finally, the EEG signals are classified into different kinds of emotions. An EEG-based emotion classification experiment is conducted on the SEED dataset to evaluate the performance of the proposed model. The experimental results show that the proposed method can effectively classify EEG emotions. The method was assessed on two distinct classification tasks, one with three and one with four target classes. The average ten-fold cross-validation classification accuracy is 99.55% and 99.79% for the three-class and four-class tasks, respectively, which is significantly better than the other methods. It can be concluded that the method is superior to existing methods in emotion recognition and can be widely used in many fields, including modern neuroscience, psychology, neural engineering, and computer science.
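To make the architecture described in the abstract concrete, below is a minimal PyTorch sketch of a CNN plus Bi-LSTM model with an attention layer operating on raw EEG windows. The channel count, layer sizes, and window length are illustrative assumptions, and for brevity the attention here weights the fused time steps rather than re-balancing electrode channels as the paper describes.

```python
# Minimal sketch of a CNN-Bi-LSTM-Attention classifier for raw EEG windows.
# Channel count (62 electrodes, as in SEED), layer sizes, and window length
# are illustrative assumptions, not the authors' published configuration.
import torch
import torch.nn as nn

class CNNBiLSTMAttention(nn.Module):
    def __init__(self, n_channels=62, n_classes=3):
        super().__init__()
        # 1D convolution over time extracts local features from each window.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        # Bi-LSTM fuses the convolutional features across time.
        self.lstm = nn.LSTM(64, 64, batch_first=True, bidirectional=True)
        # Attention layer scores each fused time step before pooling.
        self.attn = nn.Linear(128, 1)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, x):                        # x: (batch, channels, time)
        h = self.cnn(x)                          # (batch, 64, time / 4)
        h, _ = self.lstm(h.transpose(1, 2))      # (batch, time / 4, 128)
        w = torch.softmax(self.attn(h), dim=1)   # attention weights per step
        context = (w * h).sum(dim=1)             # weighted sum -> (batch, 128)
        return self.fc(context)

model = CNNBiLSTMAttention()
logits = model(torch.randn(8, 62, 800))  # 8 windows of 800 samples each
print(logits.shape)                      # torch.Size([8, 3])
```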

23 pages, 6806 KiB  
Article
Emotion Classification Based on CWT of ECG and GSR Signals Using Various CNN Models
by Amita Dessai and Hassanali Virani
Electronics 2023, 12(13), 2795; https://doi.org/10.3390/electronics12132795 - 24 Jun 2023
Cited by 3
Abstract
Emotions expressed by humans can be identified from facial expressions, speech signals, or physiological signals. Among these, the use of physiological signals for emotion classification is a notable emerging area of research. In emotion recognition, a person’s electrocardiogram (ECG) and galvanic skin response (GSR) signals cannot be manipulated, unlike facial and voice signals. Moreover, wearables such as smartwatches and wristbands enable the detection of emotions in people’s naturalistic environment. During the COVID-19 pandemic, it was necessary to detect people’s emotions in order to ensure that appropriate actions were taken according to the prevailing situation and to achieve societal balance. Experimentally, the duration of the emotion stimulus period and the social and non-social contexts of participants influence the emotion classification process. Hence, the classification of emotions when participants are exposed to the elicitation process for a longer duration, and with the social context taken into consideration, needs to be explored. This work explores the classification of emotions using five pretrained convolutional neural network (CNN) models: MobileNet, NASNetMobile, DenseNet 201, InceptionResnetV2, and EfficientNetB7. Continuous wavelet transform (CWT) coefficients were computed from ECG and GSR recordings from the AMIGOS database after suitable filtering. Scalograms of the sum of frequency coefficients versus time were obtained and converted into images. Emotions were classified using the pretrained CNN models. The valence and arousal classification accuracies obtained using ECG and GSR data were, respectively, 91.27% and 91.45% with the InceptionResnetV2 classifier and 99.19% and 98.39% with the MobileNet classifier. Other studies have not explored the use of scalograms to represent ECG and GSR CWT features for emotion classification using deep learning models. Additionally, this study provides a novel classification of emotions in individual and group settings using ECG data. When the participants watched long-duration emotion elicitation videos individually and in groups, the accuracy was around 99.8%. MobileNet had the highest accuracy and the shortest execution time. These subject-independent classification methods enable emotion classification independent of varying human behavior.
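The feature pipeline described above (CWT of an ECG or GSR segment rendered as a scalogram image and classified with a pretrained CNN) can be sketched as follows. The wavelet, scale range, image size, and two-class head are assumptions for illustration and do not reproduce the authors' exact setup.

```python
# Minimal sketch: turning an ECG/GSR segment into a CWT scalogram image and
# classifying it with a pretrained MobileNet backbone. Wavelet, scales,
# image size, and the two-class head are illustrative assumptions.
import numpy as np
import pywt
import tensorflow as tf

def signal_to_scalogram(signal, scales=np.arange(1, 128), wavelet="morl"):
    """Compute CWT coefficients and return them as a 224x224x3 'image'."""
    coeffs, _ = pywt.cwt(signal, scales, wavelet)
    img = np.abs(coeffs)
    img = (img - img.min()) / (np.ptp(img) + 1e-8)      # normalize to [0, 1]
    img = tf.image.resize(img[..., None], (224, 224))   # H x W x 1
    return tf.repeat(img, 3, axis=-1)                   # replicate to 3 channels

# Pretrained MobileNet backbone with a small classification head
# (e.g., high vs. low valence).
base = tf.keras.applications.MobileNet(include_top=False, weights="imagenet",
                                        input_shape=(224, 224, 3), pooling="avg")
model = tf.keras.Sequential([base, tf.keras.layers.Dense(2, activation="softmax")])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Example: one 10 s ECG segment at 128 Hz (random placeholder data).
segment = np.random.randn(1280)
x = tf.expand_dims(signal_to_scalogram(segment), 0)
print(model.predict(x).shape)  # (1, 2)
```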
