Human-Computer Interaction and 3D Face Analysis

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (26 January 2020) | Viewed by 30593

Special Issue Editors


Guest Editor
Department of Management and Production Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Turin, Italy
Interests: product lifecycle management; product design and development; human–machine interaction; human–machine interface; human–computer interaction; computer-aided design; extended reality; augmented reality; virtual reality; artificial intelligence; behavioral analysis; digital therapies; minimally invasive surgery

Guest Editor
Department of Management and Production Engineering, Politecnico di Torino, Torino, Italy
Interests: 3D face analysis; facial expression; automated pattern recognition; face recognition

Special Issue Information

Dear Colleagues,

Over the last decade, face analysis has become an emerging topic for researchers working in the field of human–computer interaction, especially for analysing emotions. Recent studies have focused on designing methods for detecting and processing facial expressions to develop emotionally intelligent information systems and improve the effectiveness of human–robot interaction. In addition, advances in acquisition sensors and in photogrammetry have made it possible to obtain accurate 3D facial data, suitable even for real-time applications. The latest research trends in the area concern how human perception works in the categorization of facial visual stimuli, a question that involves several fields, such as computer vision and pattern recognition, psychology, and computational neuroscience.

This Special Issue is aimed at contributions to 3D face analysis for applications in human–computer interaction scenarios. Work on novel facial expression recognition methodologies and facial feature extraction is welcome, together with contributions on human perception wherever it is applied (or could be applied) to computational techniques for the analysis, interpretation, and categorization of the human face. Studies on 3D sensors and neural networks related to the field are also welcome.

The topics of interest for this Special Issue include, but are not limited to:

  • New findings in 3D facial feature extraction;
  • Socially intelligent human–machine interaction;
  • Systems for facial expression/action unit categorization;
  • Application of human perception foundations to computational face analysis;
  • Psychological and neuroscientific theories of human face interpretation;
  • Neural network models for 3D face categorization;
  • State-of-the-art sensors for three-dimensional facial acquisition.

Prof. Enrico Vezzetti
Dr. Federica Marcolin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • 3D face analysis
  • facial expression recognition
  • emotionally intelligent information systems
  • human–machine interaction
  • face perception
  • human face categorization
  • human face interpretation

Published Papers (7 papers)


Research


23 pages, 13501 KiB  
Article
Observing Pictures and Videos of Creative Products: An Eye Tracking Study
by Aurora Berni, Lorenzo Maccioni and Yuri Borgianni
Appl. Sci. 2020, 10(4), 1480; https://doi.org/10.3390/app10041480 - 21 Feb 2020
Cited by 13 | Viewed by 3521
Abstract
The paper offers insights into people’s exploration of creative products shown on a computer screen within the overall task of capturing artifacts’ original features and functions. In particular, the study presented here analyzes the effects of different forms of representations, i.e., static pictures and videos. While the relevance of changing stimuli’s forms of representation is acknowledged in both engineering design and human-computer interaction, scarce attention has been paid to this issue hitherto when creative products are in play. Six creative products have been presented to twenty-eight subjects through either pictures or videos in an Eye-Tracking-supported experiment. The results show that major attention is paid by people to original product features and functional elements when products are displayed by means of videos. This aspect is of paramount importance, as original shapes, parts, or characteristics of creative products might be inconsistent with people’s habits and cast doubts about their rationale and utility. In this sense, videos seemingly emphasize said original elements and likely lead to their explanation/resolution. Overall, the outcomes of the study strengthen the need to match appropriate forms of representation with different design stages in light of the needs for designs’ evaluation and testing user experience. Full article
(This article belongs to the Special Issue Human-Computer Interaction and 3D Face Analysis)

17 pages, 2739 KiB  
Article
Perspective Morphometric Criteria for Facial Beauty and Proportion Assessment
by Luca Ulrich, Jean-Luc Dugelay, Enrico Vezzetti, Sandro Moos and Federica Marcolin
Appl. Sci. 2020, 10(1), 8; https://doi.org/10.3390/app10010008 - 18 Dec 2019
Cited by 8 | Viewed by 3267
Abstract
Common sense usually considers the assessment of female human attractiveness to be subjective. Nevertheless, in the past decades, several studies and experiments showed that an objective component in beauty assessment exists and can be strictly related, even if it does not match, with proportions of features. Proportions can be studied through analysis of the face, which relies on landmarks, i.e., specific points on the facial surface, which are shared by everyone, and measurements between them. In this work, several measures have been gathered from studies in the literature considering datasets of beautiful women to build a set of measures that can be defined as suggestive of female attractiveness. The resulting set consists of 29 measures applied to a public dataset, the Bosphorus database, whose faces have been both analyzed by the developed methodology based on the expanded set of measures and judged by human observers. Results show that the set of chosen measures is significant in terms of attractiveness evaluation, confirming the key role of proportions in beauty assessment; furthermore, the sorting of identified measures has been performed to identify the most significant canons involved in the evaluation. Full article
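The measure-based approach the abstract describes (landmark distances combined into proportion canons) can be sketched as follows. The landmark names, toy coordinates, and the single golden-ratio canon below are illustrative assumptions, not the paper's actual set of 29 measures:

```python
import numpy as np

GOLDEN_RATIO = (1 + 5 ** 0.5) / 2

def distance(a, b):
    """Euclidean distance between two 3D landmarks."""
    return float(np.linalg.norm(np.asarray(a) - np.asarray(b)))

def proportion_scores(landmarks, canons):
    """Score each canon as |measured ratio - reference|; lower means the
    face is closer to that canon's ideal proportion."""
    scores = {}
    for name, (num, den, ref) in canons.items():
        ratio = distance(*num) / distance(*den)
        scores[name] = abs(ratio - ref)
    return scores

# Toy landmark coordinates (x, y, z), chosen so that face length over
# face width is approximately the golden ratio.
lm = {
    "trichion": (0.0, 18.0, 0.0),    # hairline midpoint
    "gnathion": (0.0, 0.0, 0.0),     # chin tip
    "zygion_l": (-5.5625, 9.0, 0.0), # left cheekbone extreme
    "zygion_r": (5.5625, 9.0, 0.0),  # right cheekbone extreme
}
canons = {
    "face_length_to_width": (
        (lm["trichion"], lm["gnathion"]),
        (lm["zygion_l"], lm["zygion_r"]),
        GOLDEN_RATIO,
    ),
}
print(proportion_scores(lm, canons))
```

In the same spirit, each of the paper's measures contributes one score, and the scores can then be aggregated or ranked to study which canons matter most.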

15 pages, 9257 KiB  
Article
A Framework for Real-Time 3D Freeform Manipulation of Facial Video
by Jungsik Park, Byung-Kuk Seo and Jong-Il Park
Appl. Sci. 2019, 9(21), 4707; https://doi.org/10.3390/app9214707 - 04 Nov 2019
Viewed by 3654
Abstract
This paper proposes a framework that allows 3D freeform manipulation of a face in live video. Unlike existing approaches, the proposed framework provides natural 3D manipulation of a face without background distortion and interactive face editing by a user’s input, which leads to freeform manipulation without any limitation of range or shape. To achieve these features, a 3D morphable face model is fitted to a face region in a video frame and is deformed by the user’s input. The video frame is then mapped as a texture to the deformed model, and the model is rendered on the video frame. Because of the high computational cost, parallelization and acceleration schemes are also adopted for real-time performance. Performance evaluation and comparison results show that the proposed framework is promising for 3D face editing in live video. Full article
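The deformation step of such a pipeline can be illustrated in isolation. The sketch below moves a user-picked handle vertex and propagates the edit with a Gaussian falloff on a toy vertex grid; the actual framework deforms a 3D morphable model fitted to the video frame, and the falloff scheme here is an assumption for illustration:

```python
import numpy as np

def freeform_deform(vertices, handle_idx, displacement, sigma=1.0):
    """Move the handle vertex by `displacement` and blend the edit into
    nearby vertices with a Gaussian falloff so the surface stays smooth."""
    v = np.asarray(vertices, dtype=float)
    d = np.linalg.norm(v - v[handle_idx], axis=1)  # distance to the handle
    w = np.exp(-(d ** 2) / (2 * sigma ** 2))       # falloff weights in [0, 1]
    return v + w[:, None] * np.asarray(displacement, dtype=float)

# Toy "mesh": a flat 5x5 grid of vertices; pull the center vertex 0.5 along z.
grid = np.array([[x, y, 0.0] for x in range(5) for y in range(5)])
center = 12  # vertex at (2, 2, 0)
deformed = freeform_deform(grid, center, [0.0, 0.0, 0.5], sigma=1.0)
print(deformed[center])  # the handle itself moves the full 0.5
```

Because the weights fall off smoothly with distance, far-away vertices (and hence the background, once only the model is rendered) are left essentially untouched.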

15 pages, 2596 KiB  
Article
An Interactive and Personalized Erasure Animation System for a Large Group of Participants
by Hua Wang, Xiaoyu He and Mingge Pan
Appl. Sci. 2019, 9(20), 4426; https://doi.org/10.3390/app9204426 - 18 Oct 2019
Viewed by 1769
Abstract
This paper introduces a system to realize interactive and personalized erasure animations by using mobile terminals, a shared display terminal, and a database server for a large group of participants. In the system, participants shake their mobile terminals with their hands. Their shaking data are captured by the database server. Then there are immersive and somatosensory erasure animations on the shared display terminal according to the participants’ shaking data in the database server. The system is implemented by a data preprocessing module and an interactive erasure animation module. The former is mainly responsible for the cleaning and semantic standardization of the personalized erasure shape data. The latter realizes the interactive erasure animation, which involves shaking the mobile terminal, visualizations of the erasure animation on the shared display terminal, and dynamic and personalized data editing. The experimental results show that the system can realize various styles of personalized erasure animation and can respond to more than 2000 shaking actions simultaneously and present the corresponding erasure animations on the shared display terminal in real time. Full article
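A minimal sketch of the shake-detection side of such a system, assuming a simple threshold on accelerometer magnitude (the abstract does not specify the actual detection rule):

```python
import math

GRAVITY = 9.81  # m/s^2

def count_shakes(samples, threshold=3.0):
    """Count shake events from raw accelerometer samples (x, y, z).
    A shake starts when the acceleration magnitude deviates from gravity
    by more than `threshold`; consecutive over-threshold samples belong
    to the same shake. Threshold and logic are assumptions of this sketch."""
    shakes, in_shake = 0, False
    for x, y, z in samples:
        deviation = abs(math.sqrt(x * x + y * y + z * z) - GRAVITY)
        if deviation > threshold and not in_shake:
            shakes += 1
            in_shake = True
        elif deviation <= threshold:
            in_shake = False
    return shakes

# Rest, a spike, rest, another spike, rest: two distinct shakes.
samples = [(0, 0, 9.81), (12, 5, 9.81), (0, 0, 9.81),
           (-11, -4, 9.81), (0, 0, 9.81)]
print(count_shakes(samples))
```

In a deployment like the one described, each mobile terminal would report its per-participant shake counts to the database server, which the shared display then maps to erasure progress.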

15 pages, 1535 KiB  
Article
Applying Eye-Tracking Technology to Measure Interactive Experience Toward the Navigation Interface of Mobile Games Considering Different Visual Attention Mechanisms
by Jun-Yi Jiang, Fu Guo, Jia-Hao Chen, Xiao-Hui Tian and Wei Lv
Appl. Sci. 2019, 9(16), 3242; https://doi.org/10.3390/app9163242 - 08 Aug 2019
Cited by 16 | Viewed by 3786
Abstract
As an initial channel for users learning about a mobile game, the interactive experience of the navigation interface will directly affect the first impression of the users on the game and their subsequent behaviors and willingness to use. This study aims to investigate players’ visual attention mechanisms of various interactive levels of mobile games’ interfaces under free-browsing and task-oriented conditions. Eye-tracking glasses and a questionnaire were used to measure the interactive experience of mobile games. The results show that in the free-browsing condition, the fixation count, saccade count and average saccade amplitude can be used to reflect and predict the interactive experiences of mobile games’ navigation interface; while in the task-oriented condition, the fixation count, first fixation duration, dwell time ratio and saccade count can be used to reflect and predict the interactive experience of mobile games’ navigation interface. These findings suggest that apart from the different eye movement indicators, players’ motivations should also be considered during the process of the games’ navigation interface design. Full article
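The eye-movement indicators named above can be derived from detected fixations roughly as follows; treating every transition between consecutive fixations as one saccade is a simplifying assumption of this sketch, since real pipelines detect saccades from raw gaze samples:

```python
import math

def gaze_metrics(fixations):
    """Derive study-style indicators from detected fixations, each given
    as (x, y, duration_ms): fixation count, first fixation duration,
    saccade count, and average saccade amplitude (in screen units)."""
    n = len(fixations)
    saccade_amplitudes = [
        math.dist(fixations[i][:2], fixations[i + 1][:2])
        for i in range(n - 1)
    ]
    return {
        "fixation_count": n,
        "first_fixation_duration_ms": fixations[0][2] if fixations else 0,
        "saccade_count": len(saccade_amplitudes),
        "avg_saccade_amplitude": (
            sum(saccade_amplitudes) / len(saccade_amplitudes)
            if saccade_amplitudes else 0.0
        ),
    }

# Three fixations on a navigation interface, in pixel coordinates.
fixes = [(100, 100, 220), (400, 100, 180), (400, 500, 260)]
print(gaze_metrics(fixes))
```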

20 pages, 12776 KiB  
Article
Real Time Shadow Mapping for Augmented Reality Photorealistic Rendering
by Francesco Osti, Gian Maria Santi and Gianni Caligiana
Appl. Sci. 2019, 9(11), 2225; https://doi.org/10.3390/app9112225 - 30 May 2019
Cited by 9 | Viewed by 5087
Abstract
In this paper, we present a solution for the photorealistic ambient light render of holograms into dynamic real scenes, in augmented reality applications. Based on Microsoft HoloLens, we achieved this result with an Image Base Lighting (IBL) approach. The real-time image capturing that has been designed is able to automatically locate and position directional lights providing the right illumination to the holograms. We also implemented a negative “shadow drawing” shader that contributes to the final photorealistic and immersive effect of holograms in real life. The main focus of this research was to achieve a superior photorealism through the combination of real-time lights placement and negative “shadow drawing” shader. The solution was evaluated in various Augmented Reality case studies, from classical ones (using Vuforia Toolkit) to innovative applications (using HoloLens). Full article
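The light-placement step of an IBL approach like this can be sketched by locating the brightest region of a captured environment map and converting it to a directional light; the equirectangular layout and brightest-texel heuristic below are assumptions for illustration, not the paper's method:

```python
import numpy as np

def dominant_light_direction(env_map):
    """Estimate a directional light from an equirectangular luminance map
    (rows sweep latitude from the zenith down, columns sweep longitude).
    Picks the brightest texel and converts it to a unit world direction."""
    h, w = env_map.shape
    row, col = np.unravel_index(np.argmax(env_map), env_map.shape)
    theta = np.pi * (row + 0.5) / h    # polar angle, 0 at the zenith
    phi = 2 * np.pi * (col + 0.5) / w  # azimuth
    return np.array([
        np.sin(theta) * np.cos(phi),
        np.cos(theta),                 # y is up
        np.sin(theta) * np.sin(phi),
    ])

# Toy capture: one bright texel near the top of the map -> light from above.
env = np.zeros((16, 32))
env[0, 8] = 1.0
print(dominant_light_direction(env))
```

The resulting direction would then drive both the hologram's shading and the negative shadow shader, so the virtual shadow falls opposite the estimated real-world light.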

Review


33 pages, 1004 KiB  
Review
3D Approaches and Challenges in Facial Expression Recognition Algorithms—A Literature Review
by Francesca Nonis, Nicole Dagnes, Federica Marcolin and Enrico Vezzetti
Appl. Sci. 2019, 9(18), 3904; https://doi.org/10.3390/app9183904 - 18 Sep 2019
Cited by 55 | Viewed by 8396
Abstract
In recent years, facial expression analysis and recognition (FER) have emerged as an active research topic with applications in several different areas, including the human-computer interaction domain. Solutions based on 2D models are not entirely satisfactory for real-world applications, as they present some problems of pose variations and illumination related to the nature of the data. Thanks to technological development, 3D facial data, both still images and video sequences, have become increasingly used to improve the accuracy of FER systems. Despite the advance in 3D algorithms, these solutions still have some drawbacks that make pure three-dimensional techniques convenient only for a set of specific applications; a viable solution to overcome such limitations is adopting a multimodal 2D+3D analysis. In this paper, we analyze the limits and strengths of traditional and deep-learning FER techniques, intending to provide the research community with an overview of the results obtained and a look toward the near future. Furthermore, we describe in detail the most used databases to address the problem of facial expressions and emotions, highlighting the results obtained by the various authors. The different techniques used are compared, and some conclusions are drawn concerning the best recognition rates achieved. Full article
