Face Recognition and Its Applications

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Computer Science & Engineering".

Deadline for manuscript submissions: closed (15 August 2021) | Viewed by 78439

Special Issue Editors


Guest Editor: Prof. Dr. Abdelmalik Taleb-Ahmed
IEMN DOAE Laboratory, UMR CNRS 8520, University of Valenciennes, 59313 Valenciennes, France
Interests: signal and image processing; medical imaging; face recognition; data fusion; speech processing; machine learning

Guest Editor: Prof. Dr. Abdeldjalil Ouahabi
UMR 1253, iBrain, Université de Tours, INSERM, 37000 Tours, France
Interests: signal and image processing; machine/deep learning; medical imaging

Special Issue Information

The human face carries in its appearance and shape a number of clues that enable the extraction of information about a person's identity, gender, age, ethnicity, health, emotional state, and physical wellness, to name but a few. Face recognition plays a critical role in biometric systems and is attractive for numerous applications, including visual surveillance and security, medical imaging, and affective computing. Although there has been a great deal of progress in face detection and recognition in recent years, many problems remain unsolved. Research on face detection must confront many challenges, especially outdoor illumination, pose variation with large rotation angles, low image quality, low resolution, occlusion, and background changes in complex real-life scenes. Before a facial image processing/analysis system can be claimed reliable, rigorous testing and verification on real-world datasets must be performed, including databases for face analysis and tracking in digital video. Vigorous research is therefore needed to solve these outstanding challenges and to propose advanced solutions and systems for emerging applications of facial image processing and analysis.

Topics of interest include, but are not limited to, the following:

  • Face and feature detection
  • Pre-processing methods
  • New sensors or data sources
  • Data set and evaluation
  • Machine learning and deep learning
  • Applications of face recognition

Prof. Dr. Abdelmalik Taleb-Ahmed
Prof. Dr. Abdeldjalil Ouahabi
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers are published continuously in the journal (as soon as accepted) and are listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Face recognition
  • Machine learning
  • Pattern recognition
  • Computer vision
  • Information fusion
  • Applications in affective computing
  • Applications in biometrics
  • Applications in video surveillance
  • Applications in autonomous driving
  • Applications in medical image analysis

Published Papers (6 papers)


Research


12 pages, 2523 KiB  
Article
Automatic Pain Estimation from Facial Expressions: A Comparative Analysis Using Off-the-Shelf CNN Architectures
by Safaa El Morabit, Atika Rivenq, Mohammed-En-nadhir Zighem, Abdenour Hadid, Abdeldjalil Ouahabi and Abdelmalik Taleb-Ahmed
Electronics 2021, 10(16), 1926; https://doi.org/10.3390/electronics10161926 - 11 Aug 2021
Cited by 28 | Viewed by 2734
Abstract
Automatic pain recognition from facial expressions is a challenging problem that has attracted significant attention from the research community. This article provides a comprehensive analysis of the topic by comparing several popular off-the-shelf CNN (Convolutional Neural Network) architectures, including MobileNet, GoogleNet, ResNeXt-50, ResNet18, and DenseNet-161. We use these networks in two distinct modes: stand-alone mode and feature-extractor mode. In stand-alone mode, the models (i.e., the networks) are used to estimate pain directly. In feature-extractor mode, the activations ("values") of the middle layers are extracted and used as inputs to classifiers such as SVR (Support Vector Regression) and RFR (Random Forest Regression). We perform extensive experiments on the publicly available UNBC-McMaster Shoulder Pain benchmark database. The obtained results are interesting, as they give valuable insights into the usefulness of the hidden CNN layers for automatic pain estimation.
(This article belongs to the Special Issue Face Recognition and Its Applications)
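The two usage modes can be contrasted in a minimal pure-NumPy sketch. The toy network, its random weights, and the closed-form ridge regressor below are stand-ins for illustration only; the paper uses pretrained CNNs (MobileNet, ResNet18, etc.) with SVR and RFR.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a pretrained CNN backbone: weights would normally be
# learned on a large dataset; here they are random, for illustration only.
W1 = rng.standard_normal((256, 64)) * 0.1   # "early layers"
W2 = rng.standard_normal((64, 1))           # "regression head"

def forward(x, return_features=False):
    """Run the toy network; optionally expose mid-layer activations."""
    h = np.maximum(x @ W1, 0.0)             # hidden features (ReLU)
    if return_features:
        return h                            # feature-extractor mode
    return h @ W2                           # stand-alone mode: direct estimate

# Fake "face images" flattened to 256-dim vectors, with pain scores.
X = rng.standard_normal((100, 256))
y = rng.uniform(0, 10, size=100)            # pain intensity in [0, 10]

# Feature-extractor mode: take mid-layer activations ...
F = forward(X, return_features=True)

# ... and fit a separate regressor on them (closed-form ridge regression
# here, standing in for the SVR / random-forest regressors of the paper).
lam = 1.0
w = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)
pred = F @ w
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"ridge-on-features RMSE: {rmse:.3f}")
```

The same pattern applies with a real backbone: freeze the network, read out an intermediate layer, and train only the lightweight regressor on those features.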

23 pages, 4759 KiB  
Article
IoMT Based Facial Emotion Recognition System Using Deep Convolution Neural Networks
by Navjot Rathour, Sultan S. Alshamrani, Rajesh Singh, Anita Gehlot, Mamoon Rashid, Shaik Vaseem Akram and Ahmed Saeed AlGhamdi
Electronics 2021, 10(11), 1289; https://doi.org/10.3390/electronics10111289 - 28 May 2021
Cited by 17 | Viewed by 3497
Abstract
Facial emotion recognition (FER) is the process of identifying human emotions from facial expressions. It is often difficult to identify the stress and anxiety levels of an individual from visuals captured by computer vision alone. However, technological advances in the Internet of Medical Things (IoMT) have yielded impressive results in gathering various forms of emotional and physical health-related data. Novel deep learning (DL) algorithms make it possible to run such applications in a resource-constrained edge environment, allowing data from IoMT devices to be processed locally at the edge. This article presents an IoMT-based facial emotion detection and recognition system implemented in real time on a small, powerful, resource-constrained device, the Raspberry Pi, with the assistance of deep convolutional neural networks. For this purpose, we conducted an empirical study of human facial emotions alongside emotional state measured by physiological sensors. We then propose a model for real-time emotion detection on the Raspberry Pi together with a co-processor, the Intel Movidius NCS2. Facial emotion detection test accuracy ranged from 56% to 73% across the evaluated models; the best model reached 73% on the FER-2013 dataset, compared with a state-of-the-art result reported as 64% at most. A t-test is performed to extract significant differences in the systolic and diastolic blood pressure and heart rate of an individual watching three different stimuli (angry, happy, and neutral).
(This article belongs to the Special Issue Face Recognition and Its Applications)
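The statistical part of such a study, comparing physiological readings across stimuli, can be sketched with a paired t-test. The readings and sample size below are invented for illustration and are not data from the study.

```python
import math

def paired_t(before, after):
    """Paired t-statistic for matched physiological readings
    (e.g., systolic blood pressure under two different video stimuli)."""
    assert len(before) == len(after)
    diffs = [b - a for b, a in zip(before, after)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical systolic readings (mmHg) for 5 viewers: neutral vs. angry clip.
neutral = [118, 122, 115, 130, 125]
angry   = [121, 126, 117, 135, 128]
t = paired_t(angry, neutral)
print(f"t = {t:.3f}")
```

A large |t| (compared against the t-distribution with n-1 degrees of freedom) indicates a significant difference between the two viewing conditions.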

15 pages, 1428 KiB  
Article
Custom Face Classification Model for Classroom Using Haar-Like and LBP Features with Their Performance Comparisons
by Sirajdin Olagoke Adeshina, Haidi Ibrahim, Soo Siang Teoh and Seng Chun Hoo
Electronics 2021, 10(2), 102; https://doi.org/10.3390/electronics10020102 - 6 Jan 2021
Cited by 19 | Viewed by 3779
Abstract
Face detection by electronic systems has been leveraged by private and government establishments to enhance the effectiveness of a wide range of applications in day-to-day activities, security, and business. Most face detection algorithms that can cope with the problems posed by constrained and unconstrained environmental conditions, such as unbalanced illumination, weather, distance from the camera, and background variation, are highly computationally intensive and therefore largely unusable in real-time applications. This paper develops face detectors by selecting Haar-like and local binary pattern (LBP) features based on how often they are used at each stage of training with MATLAB's trainCascadeObjectDetector function. We used 2577 positive face samples and 37,206 negative samples to train Haar-like and LBP face detectors for a range of False Alarm Rate (FAR) values (0.01, 0.05, and 0.1). The study shows that the Haar cascade face detector with few stages (six) at a FAR of 0.1 is the most efficient when tested on a classroom image dataset, with 100% True Positive Rate (TPR) face detection accuracy. Deep learning models ResNet101 and ResNet50 outperformed the average TPR of the Haar cascade by 9.09% and 0.76%, respectively. However, the simplicity and relatively low computation time of our approach (1.09 s, versus 139.5 s for deep learning) give it an edge in online classroom applications. The TPR of the proposed algorithm is 92.71% on images in the synthetic Labeled Faces in the Wild (LFW) dataset and 98.55% on images in MUCT face dataset "a", a modest improvement in average TPR over the conventional face identification system.
(This article belongs to the Special Issue Face Recognition and Its Applications)
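The speed of Haar cascades comes from evaluating rectangle features in constant time via an integral image. The following minimal NumPy sketch shows that mechanism; it is independent of MATLAB's trainCascadeObjectDetector, and the test image and feature position are invented.

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y, :x]."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def rect_sum(ii, y, x, h, w):
    """Sum of img[y:y+h, x:x+w] in O(1) using the integral image."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_haar(ii, y, x, h, w):
    """Edge-type Haar-like feature: left half minus right half."""
    half = w // 2
    return rect_sum(ii, y, x, h, half) - rect_sum(ii, y, x + half, h, half)

# Toy 4x4 patch with a dark-left / bright-right vertical edge.
img = np.array([[0, 0, 9, 9],
                [0, 0, 9, 9],
                [0, 0, 9, 9],
                [0, 0, 9, 9]], dtype=float)
ii = integral_image(img)
f = two_rect_haar(ii, 0, 0, 4, 4)
print(f)   # strongly negative response on this edge
```

Cascade training then picks the most discriminative of these features per stage, which is why a shallow (e.g., six-stage) cascade can already run in about a second on a full image.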

21 pages, 3822 KiB  
Article
OLIMP: A Heterogeneous Multimodal Dataset for Advanced Environment Perception
by Amira Mimouna, Ihsen Alouani, Anouar Ben Khalifa, Yassin El Hillali, Abdelmalik Taleb-Ahmed, Atika Menhaj, Abdeldjalil Ouahabi and Najoua Essoukri Ben Amara
Electronics 2020, 9(4), 560; https://doi.org/10.3390/electronics9040560 - 27 Mar 2020
Cited by 33 | Viewed by 5146
Abstract
Reliable environment perception is a crucial task for autonomous driving, especially in dense traffic areas. Recent improvements and breakthroughs in scene understanding for intelligent transportation systems are mainly based on deep learning and the fusion of different modalities. In this context, we introduce OLIMP: A heterOgeneous Multimodal Dataset for Advanced EnvIronMent Perception. This is the first public, multimodal, and synchronized dataset to include UWB radar data, acoustic data, narrow-band radar data, and images. OLIMP comprises 407 scenes and 47,354 synchronized frames covering four categories: pedestrian, cyclist, car, and tram. The dataset includes various challenges related to dense urban traffic, such as cluttered environments and varying weather conditions. To demonstrate the usefulness of the introduced dataset, we propose a fusion framework that combines the four modalities for multi-object detection. The obtained results are promising and encourage future research.
(This article belongs to the Special Issue Face Recognition and Its Applications)
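One simple way to combine per-modality detection confidences is weighted-average late fusion. The sketch below is a hypothetical baseline, not the fusion framework proposed in the paper; the modality names follow OLIMP, but the scores, weights, and threshold are invented.

```python
def fuse_detections(scores, weights=None):
    """Weighted-average late fusion of per-modality confidence scores
    for a single candidate object. Equal weights by default."""
    if weights is None:
        weights = {m: 1.0 for m in scores}
    total = sum(weights[m] for m in scores)
    return sum(weights[m] * s for m, s in scores.items()) / total

# One candidate "pedestrian" as seen by the four OLIMP modalities
# (confidence values are made up for illustration).
scores = {"camera": 0.90, "uwb_radar": 0.60,
          "narrowband_radar": 0.55, "acoustic": 0.70}
fused = fuse_detections(scores)
accept = fused >= 0.5          # hypothetical decision threshold
print(f"fused confidence: {fused:.4f}, accept: {accept}")
```

Per-modality weights would normally be tuned on validation data, so that, for example, radar can dominate in fog while the camera dominates in clear daylight.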

16 pages, 803 KiB  
Article
Face Recognition via Deep Learning Using Data Augmentation Based on Orthogonal Experiments
by Zhao Pei, Hang Xu, Yanning Zhang, Min Guo and Yee-Hong Yang
Electronics 2019, 8(10), 1088; https://doi.org/10.3390/electronics8101088 - 25 Sep 2019
Cited by 32 | Viewed by 5798
Abstract
Class attendance is an important part of the management of university students, and face recognition is one of the most effective techniques for taking daily class attendance. Recently, many face recognition algorithms based on deep learning have achieved promising results with large-scale labeled samples. However, owing to the difficulty of collecting samples, face recognition using convolutional neural networks (CNNs) for daily attendance taking remains a challenging problem. Data augmentation can enlarge the sample set and has been applied to small-sample learning. In this paper, we address the problem using data augmentation through geometric transformations, image brightness changes, and different filter operations. In addition, we determine the best data augmentation method using orthogonal experiments. Finally, the performance of our attendance method is demonstrated in a real class. Compared with PCA and LBPH methods with data augmentation and with the VGG-16 network, our proposed method achieves an accuracy of 86.3%. After a period of collecting more data, the accuracy improves to 98.1%.
(This article belongs to the Special Issue Face Recognition and Its Applications)
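The three augmentation families mentioned (geometric transformation, brightness change, filter operation) can be sketched in a few lines of NumPy. The specific transforms and parameters below are illustrative; they are not the paper's orthogonal-experiment design.

```python
import numpy as np

def augment(img, brightness=0, flip=False, blur=False):
    """Produce one augmented sample from a uint8 grayscale image:
    brightness shift (clipped to [0, 255]), horizontal flip (a simple
    geometric transform), and a 3-tap horizontal mean filter."""
    out = np.clip(img.astype(int) + brightness, 0, 255).astype(np.uint8)
    if flip:
        out = out[:, ::-1]
    if blur:
        padded = np.pad(out.astype(float), ((0, 0), (1, 1)), mode="edge")
        out = ((padded[:, :-2] + padded[:, 1:-1] + padded[:, 2:]) / 3
               ).astype(np.uint8)
    return out

# Toy 4x4 grayscale "face" patch.
face = np.arange(16, dtype=np.uint8).reshape(4, 4) * 16
brighter = augment(face, brightness=40)
flipped  = augment(face, flip=True)
```

An orthogonal experiment would then sample combinations of such factors (e.g., brightness level x flip x filter type) to find the most effective augmentation recipe with far fewer trials than a full grid search.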

Review


53 pages, 18444 KiB  
Review
Past, Present, and Future of Face Recognition: A Review
by Insaf Adjabi, Abdeldjalil Ouahabi, Amir Benzaoui and Abdelmalik Taleb-Ahmed
Electronics 2020, 9(8), 1188; https://doi.org/10.3390/electronics9081188 - 23 Jul 2020
Cited by 281 | Viewed by 54870
Abstract
Face recognition is one of the most active research fields in computer vision and pattern recognition, with many practical and commercial applications including identification, access control, forensics, and human-computer interaction. However, identifying a face in a crowd raises serious questions about individual freedoms and poses ethical issues. Significant methods, algorithms, approaches, and databases have been proposed in recent years to study constrained and unconstrained face recognition. 2D approaches have reached some degree of maturity and report very high recognition rates. This performance is achieved in controlled environments where the acquisition parameters, such as lighting, viewing angle, and camera-subject distance, are controlled. However, if the ambient conditions (e.g., lighting) or the facial appearance (e.g., pose or facial expression) change, performance degrades dramatically. 3D approaches have been proposed as an alternative solution to these problems. The advantage of 3D data lies in its invariance to pose and lighting conditions, which has enhanced the efficiency of recognition systems; 3D data, however, are somewhat sensitive to changes in facial expression. This review presents the history of face recognition technology, the current state-of-the-art methodologies, and future directions. We concentrate in particular on the most recent databases and on 2D and 3D face recognition methods, and we pay particular attention to deep learning approaches, as they represent the current state of the field. Open issues are examined and potential directions for research in facial recognition are proposed, providing the reader with a point of reference for topics that deserve consideration.
