
Emotion Recognition and Biometric Authentication with Contactless Sensing

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Electronic Sensors".

Deadline for manuscript submissions: closed (15 March 2024) | Viewed by 579

Special Issue Editor


Prof. Dr. Eui Chul Lee
Guest Editor
Department of Intelligent Engineering Informatics for Human, Sangmyung University, Seoul 03016, Republic of Korea
Interests: computer vision; pattern recognition; biometrics

Special Issue Information

Dear Colleagues,

This Special Issue of Sensors explores the confluence of emotion recognition and biometric authentication through novel contactless sensing technologies. Emotions are pivotal in shaping human interactions and responses; when accurately deciphered, they open far-reaching opportunities in mental health monitoring, enhanced human–machine interaction, and tailored services. Concurrently, biometric authentication has solidified its place as a linchpin of modern security architectures, demanding continual innovation to improve accuracy and enrich the user experience. This Special Issue spotlights investigations into pioneering techniques, state-of-the-art algorithms, and novel methodologies for recognizing emotions and performing biometric verification with non-invasive sensors, including but not limited to cameras, microphones, and other unobtrusive devices. By intertwining emotion recognition and biometric authentication, this curated collection aims to catalyze the emergence of systems that are more natural, secure, and attuned to user needs. Research addressing either emotion recognition or biometric authentication on its own, without combining the two, may also be submitted to this Special Issue.

Prof. Dr. Eui Chul Lee
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • emotion recognition
  • biometric authentication
  • contactless sensing
  • remote biosignal sensing
  • multimodal fusion

Published Papers (1 paper)


Research

16 pages, 3452 KiB  
Article
Emotion Classification Based on Pulsatile Images Extracted from Short Facial Videos via Deep Learning
by Shlomi Talala, Shaul Shvimmer, Rotem Simhon, Michael Gilead and Yitzhak Yitzhaky
Sensors 2024, 24(8), 2620; https://doi.org/10.3390/s24082620 - 19 Apr 2024
Viewed by 344
Abstract
Most human emotion recognition methods largely depend on classifying stereotypical facial expressions that represent emotions. However, such facial expressions do not necessarily correspond to actual emotional states and may instead reflect communicative intentions. In other cases, emotions are hidden, cannot be expressed, or have lower arousal manifested by less pronounced facial expressions, as may occur during passive video viewing. This study improves an emotion classification approach developed in a previous study, which classifies emotions remotely, without relying on stereotypical facial expressions or contact-based methods, from short facial video data. In this approach, we aim to remotely sense transdermal cardiovascular spatiotemporal facial patterns associated with different emotional states and analyze these data via machine learning. In this paper, we propose several improvements, including better remote heart rate estimation via a preliminary skin segmentation, an improved heartbeat peak and trough detection process, and better emotion classification accuracy obtained by employing an appropriate deep learning classifier that uses input data from an RGB camera only. We used the dataset obtained in the previous study, which contains facial videos of 110 participants who passively viewed 150 short videos eliciting five emotion types: amusement, disgust, fear, sexual arousal, and no emotion, while three cameras with different wavelength sensitivities (visible spectrum, near-infrared, and longwave infrared) recorded them simultaneously. From the short facial videos, we extracted unique high-resolution spatiotemporal, physiologically affected features and examined them as input features with different deep learning approaches. An EfficientNet-B0 model was able to classify participants' emotional states with an overall average accuracy of 47.36% using a single input spatiotemporal feature map obtained from a regular RGB camera.
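
To make the remote pulse-signal extraction described in the abstract more concrete, the following is a minimal illustrative sketch in Python (NumPy and SciPy) of an rPPG-style pipeline: a crude skin mask, a green-channel pulsatile signal averaged over skin pixels in each frame, band-pass filtering, and peak counting to estimate heart rate. The skin-segmentation heuristic, frame rate, band limits, and all function names are assumptions made for illustration; this is not the authors' pipeline, which instead builds spatiotemporal feature maps that feed an EfficientNet-B0 classifier.

# Illustrative sketch only (assumed names, thresholds, and frame rate), not the authors' method:
# extract a crude pulsatile signal from facial video frames and estimate heart rate.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FPS = 30  # assumed camera frame rate

def skin_mask(frame_rgb):
    """Very rough skin segmentation in RGB space (placeholder heuristic)."""
    r, g, b = frame_rgb[..., 0], frame_rgb[..., 1], frame_rgb[..., 2]
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

def pulsatile_signal(frames):
    """Average the green channel over skin pixels per frame (frames: T x H x W x 3 uint8)."""
    values = []
    for frame in frames:
        mask = skin_mask(frame)
        green = frame[..., 1]
        values.append(green[mask].mean() if mask.any() else green.mean())
    signal = np.asarray(values, dtype=float)
    return signal - signal.mean()  # remove the DC component

def heart_rate_bpm(signal, fps=FPS):
    """Band-pass 0.7-4.0 Hz (42-240 bpm), then count peaks to estimate heart rate."""
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
    filtered = filtfilt(b, a, signal)
    peaks, _ = find_peaks(filtered, distance=0.4 * fps)  # at least 0.4 s between beats
    return 60.0 * len(peaks) / (len(signal) / fps)

In the paper itself, the analogous pulsatile information is not reported as a heart rate value but is used to construct high-resolution spatiotemporal feature maps that serve as input to the deep learning classifier.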
