Article

Perception of Facial Impressions Using Explicit Features of the Face (xFoFs)

Jihyeon Yeom, Jeongin Lee, Heekyung Yang and Kyungha Min

1 Department of Computer Science, Sangmyung University, 20, Hongjimun 2-gil, Jongno-gu, Seoul 03016, Republic of Korea
2 Department of Software, Sangmyung University, 31, Sangmyeongdae-gil, Dongnam-gu, Cheonan 31066, Republic of Korea
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2023, 11(17), 3779; https://doi.org/10.3390/math11173779
Submission received: 10 July 2023 / Revised: 18 August 2023 / Accepted: 21 August 2023 / Published: 3 September 2023
(This article belongs to the Section Mathematics and Computer Science)

Abstract

We present a novel approach to perceiving facial impressions by defining the explicit features of the face (xFoFs) based on anthropometric studies. The xFoFs estimate 35 anthropometric features of human faces with neutral expressions and frontalized poses. Using these xFoFs, we developed a method to objectively measure facial impressions, compiling a dataset of 4896 facial images to validate the method. The ranking of xFoFs within this face image dataset guides an objective and quantitative estimation of facial impressions. To corroborate our study, we conducted two user studies: an examination of the first and strongest impression perception and a validation of the consistency of multiple important impression perceptions. Our work contributes to the fields of facial recognition and explainable artificial intelligence (XAI) by providing an effective means of integrating xFoFs with existing facial recognition models.

1. Introduction

As humans interact with one another, they receive a multitude of sensory inputs that coalesce to form a holistic impression of the other individual. One key component of this impression comes from a person's facial appearance. We define a facial impression as the mental image of a person, derived primarily from the most distinctive facial features that are recognized and subsequently remembered. However, these impressions are highly subjective, which makes them difficult to express explicitly. For instance, if one observer perceives an individual as having large eyes, other observers may not concur with this assessment. Furthermore, providing an objective description of the shape of an individual's face is a complex task. This is compounded by the difficulty of forming clear and objective facial impressions of individuals with ordinary facial features. Therefore, expressing facial impressions in an objective and quantitative way remains a significant challenge.
We propose a framework for perceiving facial impressions in a quantitative and objective way. The first challenge of our framework is to define the overall facial features quantitatively and extract them properly. Many existing studies extracted facial features using landmark points, which are discretized approximations of the boundaries of important facial components. However, such approaches showed limitations in extracting correct facial features, precisely because landmarks only approximate these boundaries. Diverse poses of a face may also cause incorrect features, since the landmarks extracted from faces inclined to the left or right become inconsistent. We resolve these limitations by employing deep learning-based techniques that frontalize faces of diverse poses [1], segment important facial components [2] and align facial landmarks [3]. Based on anthropometry studies, we define 35 explicit and explainable facial features (i.e., explicit features of faces (xFoFs)) that capture the critical aspects of facial morphology using segmented regions and landmark points.
The second challenge of our framework is how to perceive facial impressions in a quantitative and objective way. We employ xFoFs for this purpose and note that the ranking of xFoFs plays a key role in perceiving facial impressions. Therefore, we curate a dataset of 4896 facial images and estimate the xFoFs for each face in the dataset. The distinctive impression of an individual's face is expressed as the ranking of the corresponding xFoF of that face. For example, the 17th xFoF denotes the ratio of the eye area to the face area; a face whose 17th xFoF ranks high is identified as having big eyes, and a face whose 17th xFoF ranks low is identified as having small eyes. Our framework identifies a face with highly ranked or lowly ranked xFoFs as a face with a distinctive impression. This approach presents a quantitative and objective estimation of facial impressions, which existing approaches have rarely addressed.
The third challenge is how to prove that our framework perceives impressions correctly. We claim the perception of a facial impression should be examined for two aspects. The first aspect regards the first and strongest impression, and the second aspect is about multiple important impressions. Therefore, we designed two distinctive user studies. In the first user study, the first impression perceived by our framework is compared to the first impression recognized by human participants. The similarity of the perceived and recognized first impressions shows that our framework successfully perceives the first impression from face images. In the second user study, the distribution of multiple important impressions perceived by our framework is compared to that recognized by the human participants. The similarity of the distributions shows that our framework successfully perceives multiple important impressions from face images.
This study makes several significant contributions to the field of facial impression recognition, which can be summarized as follows:
(1)
The existing anthropometric studies defined and extracted only restricted sets of facial features. We define a set of explicit and explainable facial features (xFoFs) for the overall facial components from 68 facial landmark points and present robust extraction schemes for xFoFs using deep learning-based techniques.
(2)
Since impressions from human faces are regarded as subjective and qualitative, few studies have addressed the problem of measuring facial impressions. For an objective and quantitative estimation of facial impressions, we present a scheme that measures the rankings of the xFoFs of an individual face against the faces in a dataset. A face with high- or low-ranking xFoFs is identified as possessing distinctive, recognizable facial impressions.
(3)
A challenge in this study is how to prove that the facial impressions perceived by our framework are correct. To overcome this challenge, we design two user studies: one to verify that the first and strongest impression perceived by our framework is correct, and another to verify that the multiple important facial impressions perceived by our framework correspond to those recognized by humans. This pair of user studies can also be employed in other studies on facial feature recognition.

2. Related Work

2.1. Anthropometrical Facial Feature Measurements

Farkas [4] presented a fundamental study for an anthropometrical approach to facial features that measures the head and face anthropometric features, including angles, distances and proportions. He also investigated several derived features such as facial asymmetry, gender differences and related facial changes. Vegter and Hage [5] compared modern clinical anthropometric methods with traditional human facial measurement methods that have been employed since ancient Greek times. They analyzed the effects of the ancient golden ratio, the standards of Renaissance artists, body anthropology and cephalometry on modern facial anthropometry. They also investigated how anthropometry methods evolved from the traditional methods.
Merler et al. [6] presented a method for measuring diversity in a data-dependent face recognition model in order to verify that the data on a human face contain sufficient information for the recognition model. First, they extracted landmarks from the detected faces and presented 10 coding schemes, including the vertical distance between face elements, proportions and facial symmetry. These coding schemes were employed for the verification of human faces. Kukharev and Kaziyeva [7] addressed the issues of morphology and shape measurement for digital facial anthropometry and presented qualitative and quantitative estimation schemes for facial features and parameters. They also investigated the relationship between facial features, genes and the attractiveness of a human. Furthermore, they designed biometric barcodes with facial measurements for analysis of the close relationship between digital facial anthropometry and the Internet of Things.

2.2. Anthropometrical Facial Feature-Based Deep Learning Models

Many deep learning studies employed anthropometrical facial features to enrich their models and performance.
Szlavik and Sziranyi [8] improved the performance of a face recognition model by measuring the coordinates of the nose and eyes from video using histogram and CCD methods and by extracting facial features through identification of the position of the mouth from the position of the nose.
Alrubaish and Zagrouba [9] analyzed the effects of expressions on face biometric recognition models by calculating the similarity between 22 facial feature values captured from various facial expressions. The facial feature values of neutral expressions were extracted to analyze the effects of individual facial expressions on the biometric recognition system.
Alsawwaf et al. [10] presented a scheme that measures the similarity of two human faces. For this purpose, they detected sibling landmarks, calculated landmark-based feature values and compared the similarities of two faces. Therefore, they provided quantitative numerical information on the similarities and differences of two faces.
Hong [11] presented a robust face identification scheme between two persons by extracting major landmarks in 2D and 3D face images of an identical person. The allowable range in distinguishing two persons was estimated from the distance between the mutual landmarks of the two detected images.
Ramanathan and Chellappa [12] presented a scheme that generates an aged version of an input face. They hypothesized that the growth of the cranial face follows a geometric invariance and, based on this hypothesis, modified the growth parameters by applying a revised cardioid strain transformation. After synthesizing the face contour using the modified parameters, an age conversion model synthesized the grown face image from the frontal view of a child's face image.
Sunhem and Pasupa [13] presented a feature-based recommendation model for hairstyles. Their model extracts facial features to classify faces into several facial types. This approach presents an explanation for the recommendation of hairstyles for an input face image.
Alzahrani et al. [14] presented a recommendation model that recognizes gender, face type and eyebrow type from an input face image. Their model recommends hairstyles and eyelash styles for women and hair styles for men by combining the recognized information.
Chen et al. [15] developed a recommendation system for the CelebHair dataset, which inherits the properties of the CelebA dataset. They provided additional attributes such as face length and proportion to the CelebHair dataset and improved the performance of their system using these attributes. They also proposed a try-on procedure for previewing hairstyles.
In the above studies, the anthropometric facial features not only improved the performance of the models but also presented the backgrounds for the predicted results.

2.3. Anthropometrical Facial Feature-Based Studies

The anthropometrical approach is widely used in various areas such as aesthetics, forensics and anthropology.

2.3.1. Aesthetics

Aesthetics, which studies the essence of beauty, employs anthropometrical facial features for analyzing the beauty of human faces. Liu et al. [16] employed anthropometrical facial features to measure the center line of the face that divides the left and right sides of the face and to trace the change of the center line due to facial expressions. They also measured the strength of the D face and S face to investigate left-right symmetry with quantitative values.
Little et al. [17] examined the facial features that affect facial attractiveness. Based on the features, they analyzed important causes of individual differences in face preference.
Xie et al. [18] constructed the SCUT-FBP dataset, which matches face images and their attractiveness as rated by 70 participants. They also measured 18 anthropometric feature values on the face to predict facial attractiveness with a deep learning model trained by their SCUT-FBP dataset.
Zheng et al. [19] presented a mathematical model of facial proportions to explain the attractiveness of faces in a quantitative way and proved that there are statistical differences according to gender and race.
Wei et al. [20] implemented an application that predicted facial beauty scores by measuring the anthropometric feature values from facial landmarks using the Google Face API. They also presented a model that predicted perceived attractiveness based on their feature values.

2.3.2. Forensics

Roelofse et al. [21] analyzed 13 measurements and 8 morphological features in the frontal facial photographs of 100 volunteers to determine common and rare facial features that could be observed in a South African male group.
Moreton [22] discussed the various ways in which face identification can be presented as evidence in British court. He also demonstrated the scientific validity of the process used for forensic matching. Finally, he presented simulated examples that show how face identification is demonstrated.
Verma et al. [23] measured the ratios and distances of the landmarks extracted from face images and conducted a study on estimating gender from face images using logistic regression and likelihood ratio approaches with corresponding feature values. They devised a face recognition scheme by defining 11 distances and ratios from the facial feature values. Finally, they presented a new approach using a likelihood ratio based on the feature values.
Sezgin and Karadayi [24] defined 11 anthropometric values from the face images and conducted a study to estimate the gender of the Turkish population based on the anthropometric values using a statistical analysis method.

2.3.3. Anthropology

Anthropology is the science of human beings and culture aiming to comprehend human beings from cultural and biological aspects. Therefore, anthropometrical facial features play an important role in anthropology.
Porter and Olson [25] analyzed the differences between African American and Caucasian women. For this purpose, they measured the horizontal and vertical distances and proportions of the face and evaluated the differences in facial proportions between these two groups through a statistical analysis process.
Farkas et al. [26] analyzed the significant differences between races by obtaining the facial measurements of 1470 people. They constructed a craniofacial database based on accurate anthropometric measurements and showed that their approach contributes to the successful treatment of congenital or post-traumatic facial deformities. Zhuang et al. [27] designed a respirator from data measured from 3997 US civilian workers. They analyzed the differences in facial shape and size between racial and age groups from the samples.
Packiriswamy et al. [28] quantified the size and position of the eyebrows and eyelids in two groups from standardized photographs of 200 South Indians and 200 Malaysian South Indians. They also measured whether there were significant differences between genders and ethnicities.
Maalman et al. [29] examined the vertical length of a face and a forehead. By measuring anthropometric feature values such as length, they found that there were large differences in 6 out of 10 features between the compared tribes.

3. Overview

Figure 1 depicts our framework for quantitatively measuring and extracting impressions from human faces using xFoFs. We define xFoFs by categorizing a human face into components such as the eyes, eyebrows, facial shape, nose, lips, eyelid, glabella, forehead, philtrum and chin and then devise a series of formulas to measure the xFoFs of each component. Based on these formulas, each xFoF is extracted from preprocessed face photographs. We then estimate the impression of an input face by measuring the ranking of its xFoFs against the xFoFs obtained from the faces in the face database. To validate the estimated impressions, we conduct a pair of user studies that collect the impressions of a face from participants and compare them with the xFoF ranking-based impressions.

4. Method

4.1. Preprocessing

Our framework assumes that the input image is a frontal headshot. However, the face images collected in this study consisted not only of frontal headshots but also side-view or upper-body images. To achieve consistent results from the diverse poses of the face images, we converted them to frontal headshot images in the preprocessing stage. To this end, we employed Zhou et al.'s frontalization algorithm [1] to transform face images of different poses into frontal headshot images.
The xFoFs estimated from the face images are defined based on the landmark and region information of the face. Hence, we segmented the face into regions using Yu et al.'s segmentation algorithm [2] and located the landmarks using the Dlib library [3]. The segmentation information is used to measure the areas of the facial components, while the landmarks are used to measure their lengths, ratios and angles. Specifically, in addition to the 68 landmark points used in previous studies, we added a new landmark point at the tip of the forehead, giving a total of 69 landmark points for defining the xFoFs.
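To make the preprocessing stage concrete, the following minimal sketch shows how the 68 Dlib landmarks could be extracted and extended with an extra forehead point. The model file name is the standard one distributed with Dlib, and the forehead extrapolation heuristic is our own assumption rather than the exact construction used in this study.

```python
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Pretrained 68-landmark model shipped with Dlib (assumed to be available locally).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def extract_landmarks(image: np.ndarray) -> np.ndarray:
    """Return a (69, 2) array of landmark coordinates for the first detected face."""
    faces = detector(image, 1)
    if not faces:
        raise ValueError("no face detected")
    shape = predictor(image, faces[0])
    points = np.array([[p.x, p.y] for p in shape.parts()], dtype=np.float32)
    # Hypothetical forehead-tip point: extrapolate upward from the brow midpoint
    # along the chin-to-brow direction (a heuristic, not the paper's definition).
    brow_mid = points[19:25].mean(axis=0)
    chin = points[8]
    forehead_tip = brow_mid + 0.6 * (brow_mid - chin)
    return np.vstack([points, forehead_tip])
```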

4.2. Definition of an xFoF

To provide a quantitative measurement of a facial impression, we conducted a comprehensive review of anthropometry studies [4,6,7,9,10,11,12,13,14,15,18,19,23,25,28] and collected 68 facial features, eliminating any duplicates. These features are listed in Appendix A. We classified the collected features according to their corresponding facial components and selected 25 features for inclusion in our study. We then investigated the correlations between these features and merged those with a correlation value of 0.7 or higher, following the methodology suggested by Dancey and Reidy [30]. Additionally, to capture essential aspects of a facial impression not covered by the collected features, we identified and defined 10 new features. As a result, our study defines a total of 35 xFoFs: 4 for the facial shape, 4 for the eyebrows, 9 for the eyes, 5 for the nose, 7 for the lips, 2 for the chin, 1 for the forehead, 1 for the eyelid height, 1 for the inter-brow distance and 1 for the philtrum. We present the xFoFs, their formulas and their references in Figure 2 and Figure 3.
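The correlation-based merging step can be illustrated with a short sketch. It assumes the candidate features have already been measured for every face and stored in a pandas DataFrame (one row per face, one column per candidate feature); the greedy keep-or-drop strategy shown here is one plausible reading of the merging procedure, not necessarily the exact one used in this study.

```python
import pandas as pd

def merge_correlated_features(features: pd.DataFrame, threshold: float = 0.7) -> pd.DataFrame:
    """Keep only features that are not strongly correlated with an already kept feature."""
    corr = features.corr().abs()  # pairwise absolute Pearson correlations
    kept = []
    for col in corr.columns:
        # Drop the feature if it correlates at or above the threshold with any kept one.
        if all(corr.loc[col, k] < threshold for k in kept):
            kept.append(col)
    return features[kept]
```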

4.2.1. xFoFs for Face Shape

The 1st∼3rd xFoFs are adopted from previous research. The first xFoF quantifies the area of the face by measuring all horizontal lengths of the facial shape, while the second xFoF quantifies the length of the face by measuring the ratio of the vertical to horizontal lengths of the face. The third xFoF quantifies facial slimness by measuring the angles between the eighth landmark point and the 0th–7th and 9th–16th landmark points on the facial contour. However, these features have limitations in capturing faces with prominent cheekbones. To address this issue, we propose a fourth xFoF, defined as the angle between the two adjacent straight lines that make up the facial contour. The more acute the angle, the more developed the cheekbones are perceived to be, while a larger angle indicates a flatter face. This feature enables us to determine the degree of development of not only the cheekbones but also the chin and cheeks.
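As an illustration of how a contour-angle feature such as the fourth xFoF could be computed from the landmarks, the sketch below measures the angle formed by the two adjacent contour segments at each interior jaw-contour point (Dlib points 0–16); the aggregation into a single value (the minimum angle) is our assumption.

```python
import numpy as np

def contour_angles(landmarks: np.ndarray) -> np.ndarray:
    """Angles (in degrees) at the interior jaw-contour landmarks 1..15."""
    contour = landmarks[0:17]
    angles = []
    for i in range(1, 16):
        v1 = contour[i - 1] - contour[i]
        v2 = contour[i + 1] - contour[i]
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angles.append(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
    return np.array(angles)

def xfof_contour_sharpness(landmarks: np.ndarray) -> float:
    """Smaller values suggest more prominent cheekbones or chin (assumed aggregation)."""
    return float(contour_angles(landmarks).min())
```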

4.2.2. xFoFs for Eyebrows

Among the xFoFs related to the eyebrows, the 5th xFoF, defined in previous research [28], measures the degree to which the eyebrows are raised or lowered by calculating the angle formed by the 17th–20th landmark points with respect to the 21st point and by the 23rd–26th landmark points with respect to the 22nd point. However, despite its utility in eyebrow shape analysis, the 5th xFoF has limitations in accurately quantifying the length, shape and thickness of the eyebrows.
To address this issue, we propose three additional xFoFs for eyebrow analysis. The 7th xFoF measures the length of the eyebrows while excluding the distance between them, improving upon the feature proposed by Chen et al. [15]. The 6th xFoF quantifies the location and degree of curvature of the eyebrows by measuring the angle between the straight lines that make up each eyebrow. Finally, the 8th xFoF quantifies the thickness of the eyebrows by measuring their area based on the segmentation information. Our proposed xFoFs provide a more comprehensive analysis of the eyebrows, enabling improved eyebrow feature extraction for various computer vision applications.

4.2.3. xFoFs for the Eyes

Among the xFoFs that describe an eye, we identified the 9th∼14th xFoFs from previous studies. The 9th xFoF measures the horizontal length of the eye, while the 10th xFoF measures its vertical length. The 11th xFoF measures the ratio of the width to the height of the eye to determine how narrow the eye is. The 12th xFoF measures the angle of the eye tail, quantifying how much the eye tail rises or falls. The 13th and 14th xFoFs measure the vertical lengths of the front and back of the eye, respectively, quantitatively expressing the degree to which the eye widens or narrows toward its end. While these xFoFs are useful for characterizing the eye, they have limitations in measuring its curvature and size.
To address these limitations, we propose two new kinds of measures. First, the 15th and 16th xFoFs measure the roundness of the upper and lower lines of the eye by measuring the curvature of the upper and lower eye curves, respectively. These measures provide additional information on the shape of the eye that is not captured by the existing xFoFs. Second, the 17th xFoF measures the area of the eye based on the segmentation information, complementing the shape information with a measure of the eye's size.
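Area-based features such as the 17th xFoF can be read directly from the segmentation map. The sketch below assumes a per-pixel integer label map; the specific label values are placeholders rather than the labels actually produced by the segmentation model.

```python
import numpy as np

FACE_SKIN_LABEL = 1   # placeholder label for the facial skin region
EYE_LABELS = (4, 5)   # placeholder labels for the left and right eye regions

def xfof_eye_area_ratio(seg_map: np.ndarray) -> float:
    """Ratio of eye pixels to face pixels in a per-pixel label map."""
    eye_pixels = int(np.isin(seg_map, EYE_LABELS).sum())
    face_pixels = int(np.count_nonzero(seg_map == FACE_SKIN_LABEL)) + eye_pixels
    return eye_pixels / max(face_pixels, 1)
```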

4.2.4. xFoFs for the Nose

Among the xFoFs defined for the nose, the 18th∼21st xFoFs are based on features presented in earlier studies. The 18th xFoF measures the vertical length of the nose, while the 19th xFoF measures its horizontal length. The 20th xFoF quantifies whether the nose is round or long by measuring the ratio of the vertical length to the horizontal length of the nose. The 21st xFoF quantifies the size of the nostrils when viewed from the front by measuring the distance between the 30th and 31st landmark points. These xFoFs effectively capture the morphological characteristics of the nose, but they have limited ability to measure its size. In this study, we introduce the 22nd xFoF, which characterizes the area of the nose based on the segmentation information to provide a more accurate measurement of the nose's size.

4.2.5. xFoFs for the Lips

The 23rd and 24th xFoFs for the lips are defined based on features from earlier studies, where they measure the horizontal and vertical lengths of the lips, respectively. The 25th xFoF quantifies the thickness of the upper lip by measuring its vertical length, while the 26th one quantifies the thickness of the lower lip by measuring its vertical length. The 27th xFoF quantifies whether the lips are round or long by measuring the ratio of the horizontal length to the vertical length of the lips. However, these xFoFs have limitations in accurately representing the overall shape of the lips.
In this paper, we introduce two new xFoFs to better quantify the lip features. First, the 28th xFoF measures the degree to which the angle of the lip corners is raised or lowered relative to the midpoint of the lips. This new feature is expected to provide a more complete and precise understanding of the shape of the lips. Secondly, we suggest the 29th xFoF, which characterizes the area of the lips based on segmentation information, thereby offering a more objective quantification of lip size.

4.2.6. xFoFs for Other Features

We defined the xFoFs for various facial components, including the forehead, glabella, philtrum, upper eyelid height and chin, based on previous research. Specifically, the width of the forehead is measured using the 30th xFoF. The 31st xFoF is used to express the length of the glabella. The 32nd and 33rd xFoFs are used to define the horizontal and vertical lengths of the chin, respectively. The height of the upper eyelid, defined as the distance between the eyebrow and the eye, is estimated in the 34th xFoF. Lastly, the 35th xFoF is employed to measure the length of the philtrum.

4.3. Extraction of xFoFs

In the process of extracting xFoFs, the landmark and segmentation information extracted in the preprocessing stage are used as input. The xFoFs, which are defined by the formulas presented in Figure 2 and Figure 3, measure the length, proportion, angle and area of a face. The xFoFs of a face are assembled into a 54-dimensional feature vector; note that some xFoFs occupy multiple dimensions. The extraction of xFoFs is illustrated in Figure 4.
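A minimal sketch of this assembly step is shown below. It assumes each xFoF is implemented as a function returning a scalar or a small array (e.g., one value per eye), which is why the concatenated vector has more dimensions (54) than the number of xFoFs (35); the uniform function signature is our assumption.

```python
import numpy as np
from typing import Callable, Sequence

# Each xFoF takes (landmarks, segmentation map) and returns one or more values.
XFoF = Callable[[np.ndarray, np.ndarray], np.ndarray]

def extract_xfofs(landmarks: np.ndarray, seg_map: np.ndarray,
                  xfof_functions: Sequence[XFoF]) -> np.ndarray:
    """Concatenate every xFoF measurement into one flat feature vector."""
    values = [np.atleast_1d(f(landmarks, seg_map)) for f in xfof_functions]
    return np.concatenate(values).astype(np.float32)
```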

4.4. Estimating xFoF Rankings

The impressions of a face are generated from its relatively prominent features, and quantifying the prominence of certain components of an input face is achieved through the process of estimating the xFoF ranking. To evaluate the impression of an input face objectively, two publicly available face datasets, SCUT-FBP [18] and AFAD [31], are utilized in this research. A total of 4896 facial images were collected from these two datasets after excluding images with exaggerated expressions that distorted the facial identity. The xFoFs were extracted from the facial images in this dataset to build an xFoF database for 4896 individuals.
The xFoF ranking estimation process consists of the following steps. First, the xFoF features extracted from the input face are added to the database. Then, a t-score is calculated for each xFoF as t = 50 + 10(x − μ)/σ, where μ and σ are the mean and standard deviation of that xFoF over the database, so that the t-scores have a mean of 50 and a standard deviation of 10. The t-score indicates the relative prominence of a feature compared with the others. Thus, the greater the deviation of the t-score from the mean, the higher the weight assigned to the corresponding feature, and vice versa. The weight vector is created and sorted in descending order, and the impression of the input face is quantitatively measured and represented based on the sorted weights.
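The ranking step can be sketched as follows, assuming the xFoF database is stored as a pandas DataFrame with one row per face and one column per xFoF dimension; the weight of each xFoF is taken as the absolute deviation of its t-score from 50.

```python
import pandas as pd

def t_scores(db: pd.DataFrame, query: pd.Series) -> pd.Series:
    """t-score (mean 50, std 10) of the query face for every xFoF column."""
    return 50 + 10 * (query - db.mean()) / db.std()

def distinctiveness_ranking(db: pd.DataFrame, query: pd.Series) -> pd.Series:
    """Sort the xFoFs of the query face by how far their t-scores deviate from 50."""
    weights = (t_scores(db, query) - 50).abs()
    return weights.sort_values(ascending=False)
```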
Figure 5 illustrates a comparison of faces with significant t-score differences for the same xFoF component in the xFoF database. A large difference in t-scores indicates a significant difference in the ranking of the corresponding xFoF component, implying that the two faces have opposite features. This figure visually demonstrates that the farther the t-score values of facial components are from each other, the more opposing features they possess.

5. Implementation and Results

5.1. Implementation

This approach was implemented on a PC with an Intel Core i9-9900X 3.50 GHz CPU, 128 GB of main memory and an Nvidia Titan RTX GPU. We implemented the approach in Python with the PyTorch library in the Jupyter Notebook environment and employed Pandas for the manipulation of the xFoF database. Landmark detection was implemented using the Dlib library [3], while semantic segmentation and frontalization were implemented using Yu et al.'s work [2] and Zhou et al.'s work [1], respectively. The face image datasets we employed were AFAD [31] and SCUT-FBP [18].
The deep learning models [1,2,3] used in this study were pretrained models that did not require further training.

5.2. Results

In order to normalize and compare the xFoF rankings, we estimated the t-scores from the xFoF distribution and their corresponding percentages using the following approach:
(1)
For each set of xFoFs extracted from a dataset of 4896 faces, we calculated the mean and standard deviation of each xFoF distribution and computed the t-scores from the values [32].
(2)
The xFoF ranking for a prominent impression may be very high or very low, indicating that an xFoF with a large or small t-score value is an important feature for a face. For example, a face with very large eyes would have a high t-score for an xFoF related to the eye size, while a face with very small eyes would have a small t-score for the same xFoF. Faces with very large or very small eyes can both be classified as having prominent impressions.
(3)
The percentage of the population whose t-scores fall into a given interval can be used as a measure of the likelihood that a feature is prominent. Using this percentage as a quantitative indicator is more effective than using the t-score alone. For example, if the t-score for very large eyes is 99 and the t-score for very small eyes is 1, the two t-scores differ greatly, yet both lie at the extremes of the distribution and both belong to intervals covering only 0.13% of the population. Therefore, we divide the t-scores into appropriate intervals and calculate the percentage of each interval as follows [32].
(4)
The percentage of t-scores falling into the intervals 0∼20 and 80∼100 is 0.13% each, the percentage falling into the intervals 20∼30 and 70∼80 is 2.14% each, the percentage falling into the intervals 30∼40 and 60∼70 is 13.59% each, and the percentage falling into the interval 40∼60 is 68.26% [32]. This process is illustrated in Figure 6, and a short numerical check of these percentages is given in the sketch after this list.
(5)
All xFoFs can be classified according to facial components, and the rankings of the extracted xFoFs can be organized as shown in Figure 7.
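As referenced in step (4), the interval percentages can be verified numerically under the assumption that the t-scores follow a normal distribution with a mean of 50 and a standard deviation of 10:

```python
from scipy.stats import norm

dist = norm(loc=50, scale=10)  # t-score distribution
for lo, hi in [(0, 20), (20, 30), (30, 40), (40, 60), (60, 70), (70, 80), (80, 100)]:
    pct = (dist.cdf(hi) - dist.cdf(lo)) * 100
    print(f"{lo}~{hi}: {pct:.2f}%")
# Prints approximately 0.13, 2.14, 13.59, 68.27, 13.59, 2.14 and 0.13 percent,
# matching the interval percentages used for the xFoF rankings.
```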

6. Evaluation

6.1. First User Study on Catching Dominant Impressions

The primary aim of the first user study was to assess the efficacy of the xFoF ranking in identifying the impressions perceived by the participants. We defined a dominant impression as the facial component that the participants recognized most prominently. For this study, we recruited a sample of 30 participants, 24 of whom were in their 20s and 6 of whom were in their 30s, comprising 17 males and 13 females. Of these participants, 19 were undergraduate students, and 11 were graduate students.
To conduct the study, we selected a representative sample of 30 facial images from our dataset, which comprises a total of 4896 images.

6.1.1. Results of First User Study on Catching Dominant Impressions

In order to assess the participants’ perceptions of dominant impressions for various facial components, they were asked to assign a clear ranking to each of the 10 components: facial shape, eyes, eyebrows, nose, mouth, chin, eyelid, glabella, philtrum and forehead. This ranking system utilized a 10-point scale, with a score of 10 indicating the component with the most dominant impression and a score of 1 indicating the component with the least dominant impression.
Figure 8, Figure 9 and Figure 10 present the results of the user study, with 10 faces per figure. The results for each face are arranged in two rows: the top row displays the average values of the face components rated by the 30 participants using the 10-point metric (1–10). The component perceived as the most dominant impression is shown in the leftmost column, while the component perceived as the least dominant impression is shown in the rightmost column. Each cell shows the average score and the corresponding component.
The bottom row displays the 10 components sorted in descending order of their xFoF ranking percentages. Each cell shows the xFoF ranking and the corresponding percentage of the t-score. The xFoF ranking indicates how distinct a component was perceived to be, with values closer to the ends of the scale indicating higher distinctiveness. The value within parentheses indicates the probability of the t-score belonging to that interval. For example, if the xFoF ranking for the face shape component in the leftmost column of the first face is 4895 out of 4896, it is a highly distinctive value, and its t-score falls in the 80∼100 interval, which covers 0.13% of the population.
The components that received the top rank in the user study are highlighted in bold in the table. Thus, the bold figures in the bottom row of each face indicate the component that was ranked first in the user study. For instance, the first and second faces show that the feature that was ranked first in the user study was also ranked first in the xFoF ranking. However, for the third face, the feature that was ranked first in the user study was ranked second in the xFoF ranking.
These tables highlight components based on their corresponding t-scores in the xFoF ranking. Specifically, dark orange cells denote components with a t-score falling in the 0.13% range, orange cells denote t-scores in the 2.14% range, and light orange cells indicate t-scores in the 13.59% range. Note that the distribution of highlighted cells varied across the different faces due to the different xFoF ranking distributions. Faces with more dark orange cells indicate more distinctive components, while faces with fewer orange cells suggest more typical components.

6.1.2. Analysis of the First User Study

We executed two analyses to determine whether the dominant components identified by the xFoF ranking matched the components selected from the participants' rankings. In the first analysis, the dominant components recognized by the xFoF ranking percentages were compared to the results of the participants' rankings. Table 1 presents how the participants ranked the components whose t-score percentages were 0.13% (very dominant), 2.14% (dominant) and 13.59% (less dominant).
The table shows that the eight components belonging to the very dominant 0.13% percentage had an average ranking of 1.25 by the participants, with six components being ranked first and two being ranked second. The dominant 2.14% percentage included 42 components and had an average ranking of 1.80, with 20 components being ranked first, 14 being ranked second, 5 being ranked third, 2 being ranked fourth and 1 being ranked fifth. The less dominant 13.59% percentage had an average ranking of 4.24. Based on the table, the components identified as dominant based on the xFoF ranking were also perceived as having high rankings by the participants in the user test.
The second analysis examined whether the components that the participants perceived as dominant had high xFoF rankings. Table 2 presents the xFoF ranking percentages for the participants’ ranks. Specifically, this analysis presented the t-score percentages for the components that the participants ranked first, second and third. Among the components that the participants perceived as being first, there were 4 in the 0.13% range, 15 in the 2.14% range and 10 in the 13.59% range. For the components perceived as being second, there was 1 in the 0.13% range, 7 in the 2.14% range and 17 in the 13.59% range. Finally, for the components perceived as being third, there were 2 in the 0.13% range, 9 in the 2.14% range, and 14 in the 13.59% range. The analysis revealed that the components that the participants considered most important tended to be located in the lower xFoF ranking percentage ranges.

6.2. Second User Study on Catching Overall Impressions

The primary goal of the second user study was to compare the perception of the overall impressions between the participants and xFoF-based approach. In this user study, a total of 30 participants were carefully selected to avoid any overlap with the individuals involved in the first user study. Of the participants, 14 were male and 16 were female, with 23 falling within the 20s age group and 7 being within the 30s age group. The user study encompassed an evaluation of 30 questions related to 20 faces, where the participants were instructed to assess each facial component using the five-point metric. Figure 11 illustrates the questions presented to the participants. Each question corresponded to a facial component and its morphological feature, which were assumed to comprise the overall impression of a face.

6.2.1. Catching Impressions Using a User Study

In this user study, the participants were requested to provide responses using a 5-point metric for 30 facial morphological features. Notably, the ninth feature utilized a four-point metric. These questions for the facial morphological features are presented in Figure 11. The scores gathered from the 30 participants were averaged to determine the overall user study score for each feature.
For the 20 target faces, Figure 12 illustrates the average scores and standard deviations derived from the assessments of the 30 features by the 30 participants. In this figure, each row corresponds to a question in Figure 11, and each column represents a test face. Each cell within the figure comprises two components: the upper component represents the mean, while the lower component indicates the standard deviation. Cells shaded in orange indicate a standard deviation value of 1 or higher, whereas dark orange cells indicate a value of 1.5 or higher. A question with a higher number of orange cells suggests a lack of consensus among the participants regarding that particular feature. Within each row, the light yellow items denote features where the standard deviation is 1.0 or higher for 5–10 out of the 20 faces, while the dark yellow items indicate features where the standard deviation is 1.0 or higher for more than 10 faces. Likewise, within each column, the blue items represent faces where the standard deviation is 1.0 or higher for 5–10 items.
Table 3 displays the distribution of standard deviations presented in Figure 12. A standard deviation of one or lower implies that the average difference in user responses was one or less, indicating consistent answers. Notably, the user scores with a standard deviation of 1 or lower amounted to 503 out of the total 580 scores, accounting for 86.7% of the scores. This observation suggests a significant level of consistency in 86.7% of the user responses.
In this figure, it is notable that only the 16th question exhibited a standard deviation of one or higher for more than 10 faces. Additionally, there were seven questions where the standard deviation was 1.0 or higher for 5–10 of the faces. From this perspective, 21 out of the 29 features demonstrated a consistent convergence of user opinions. Furthermore, upon examining the faces, there were six faces for which five or more features showed a standard deviation of one or higher. Consequently, it can be concluded that out of the 20 faces, 14 exhibited a consistent convergence of user opinions.

6.2.2. Catching Impressions Using xFoFs

For each facial component, a score on the five-level scale was derived from the xFoF ranking percentages. For components composed of multiple xFoFs, the scores of the corresponding xFoFs were averaged to determine the component's score. Figure 13 provides an overview of the xFoFs associated with each facial component and the scores computed from the xFoF ranking percentages. It is worth noting that the eye shape has six corresponding xFoFs, setting it apart from the other components, which mostly have one or two.

6.2.3. Comparison of Impressions Caught by Both Methods

We compared the distributions of user scores obtained from the human participants in the second user study, as depicted in Figure 12, with the distribution of xFoF scores presented in Figure 13. We analyzed the distributions of scores for both the components and the faces. Since the scores cannot safely be assumed to follow normal distributions, we applied nonparametric tests, namely the Wilcoxon rank-sum test and the Mann–Whitney U test, to compare the distributions.
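Both tests are available in SciPy. A minimal sketch of the comparison for a single face (or a single feature) is given below, where user_scores and xfof_scores are hypothetical arrays holding the two score distributions being compared.

```python
import numpy as np
from scipy.stats import ranksums, mannwhitneyu

def compare_distributions(user_scores: np.ndarray, xfof_scores: np.ndarray) -> dict:
    """p-values of both nonparametric tests; p >= 0.05 means no significant
    difference at the 95% confidence level."""
    _, p_ranksum = ranksums(user_scores, xfof_scores)
    _, p_mwu = mannwhitneyu(user_scores, xfof_scores, alternative="two-sided")
    return {"wilcoxon_rank_sum": p_ranksum, "mann_whitney_u": p_mwu}
```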

Analysis of the Faces

The results of the two tests conducted on each face are illustrated in Figure 14. This figure presents the p-values associated with each face, where a p-value exceeding 0.05 indicates no significant difference between the user score and the xFoF rank score at the 95% confidence level, and a p-value exceeding 0.01 indicates no significant difference at the 99% confidence level. With the Wilcoxon rank-sum test, there was no significant difference at the 95% confidence level for 16 of the 20 faces, and an additional 3 faces showed no significant difference at the 99% confidence level. The Mann–Whitney U test yielded identical results. Consequently, there was no considerable disparity between the user study-based scores and the xFoF ranking-based scores for 19 of the 20 faces.

Analysis of Features

The results of conducting these two tests for each component are presented in Figure 15. This figure displays the p-value for each component, where a p-value greater than 0.05 indicates no significant difference between the user score and the xFoF rank score for that component at the 95% confidence level, and a p-value greater than 0.01 indicates no significant difference at the 99% confidence level. The Wilcoxon rank-sum test revealed no significant difference at the 95% confidence level for 18 out of 29 features, with an additional 6 features showing no significant difference at the 99% confidence level. The Mann–Whitney U test yielded similar results: no significant difference at the 95% confidence level for 17 out of 29 features, with an additional 7 features at the 99% confidence level. Consequently, according to this validation method, there was no significant difference between the user study-based scores and the xFoF ranking-based scores for 24 out of the total of 29 features.

6.3. Discussion

In the first user study, we showed that the xFoF-based approach could catch the dominant impression perceived by a person. Through Table 1 and Table 2, we showed that the facial components estimated to be very distinctive by the xFoF-based approach were also perceived as distinctive by the participants, and vice versa. In the second user study, we showed that the xFoF-based approach could estimate the overall impressions perceived by a person. We applied the Wilcoxon rank-sum test and the Mann–Whitney U test to the results of both the xFoF-based approach and the participants' evaluations and demonstrated the similarity of the results for both the faces and the features in Figure 14 and Figure 15.

6.4. Limitations

The primary limitation of our study comes from the face landmark alignment and face segmentation schemes, since an xFoF is defined by the landmarks and segmented regions of a face. Facial features such as pupils and wrinkles that are not identified by the landmarks and regions are excluded from the xFoF definition. Many important features such as hairstyles, mustaches and eyeglasses are likewise not considered in the xFoFs, since they are not captured through the landmarks and segmentation. Another limitation comes from the dataset employed in this study. Since our dataset comprised only Asian and Caucasian faces, facial features of other ethnicities, such as African American or Indian faces, were not considered in estimating the xFoF rankings.

7. Conclusions and Future Studies

In this study, we defined the explicit features of faces (xFoFs) to estimate the facial impression perceived by a person. The xFoFs are designed based on anthropometry studies and are defined using facial landmarks and segmented regions. The rankings of the xFoFs over the faces in a dataset were estimated to measure the human perception of facial impressions. We executed two user studies to examine our approach. The first user study showed that the dominant impressions perceived by a person and by the xFoF-based method were similar. The second user study showed that the overall impressions of a human face perceived by a person and by the xFoF-based method followed similar distributions.
In future work, we plan to apply the xFoF-based method to caption generation for human faces, which can help people with disabilities recognize human faces. Another direction is to apply our approach to a generative model that can control and explain the process of producing human faces or character faces.

Author Contributions

Conceptualization, J.L.; Methodology, J.Y.; Validation, H.Y.; Writing— original draft, K.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Sangmyung University in 2021.

Data Availability Statement

Not available.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

We list the facial features suggested in various anthropometry studies [4,6,7,9,10,11,12,13,14,15,18,19,23,25,28,33] in Figure A1.

References

  1. Zhou, H.; Liu, J.; Liu, Z.; Liu, Y.; Wang, X. Rotate-and-render: Unsupervised photorealistic face rotation from single-view images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; pp. 5911–5920. [Google Scholar]
  2. Yu, C.; Gao, C.; Wang, J.; Yu, G.; Shen, C.; Sang, N. Bisenet v2: Bilateral network with guided aggregation for real-time semantic segmentation. Int. J. Comput. Vis. 2021, 129, 3051–3068. [Google Scholar] [CrossRef]
  3. King, D.E. Dlib-ml: A machine learning toolkit. J. Mach. Learn. Res. 2009, 10, 1755–1758. [Google Scholar]
  4. Farkas, L. Anthropometry of the Head and Face; Raven Press: New York, NY, USA, 1994. [Google Scholar]
  5. Vegter, F.; Hage, J.J. Clinical anthropometry and canons of the face in historical perspective. Plast. Reconstr. Surg. 2000, 106, 1090–1096. [Google Scholar] [CrossRef] [PubMed]
  6. Merler, M.; Ratha, N.; Feris, R.S.; Smith, J.R. Diversity in faces. arXiv 2019, arXiv:1901.10436. [Google Scholar]
  7. Kukharev, G.A.; Kaziyeva, N. Digital facial anthropometry: Application and implementation. Pattern Recognit. Image Anal. 2020, 30, 496–511. [Google Scholar] [CrossRef]
  8. Szlávik, Z.; Szirányi, T. Face identification with CNN-UM. In Proceedings of the IEEE International Workshop on Cellular Neural Networks and their Applications(CNNA), Budapest, Hungary, 22–24 July 2004; pp. 190–195. [Google Scholar]
  9. Alrubaish, H.A.; Zagrouba, R. The effects of facial expressions on face biometric system’s reliability. Information 2020, 11, 485. [Google Scholar] [CrossRef]
  10. Alsawwaf, M.; Chaczko, Z.; Kulbacki, M.; Sarathy, N. In your face: Person identification through ratios and distances between facial features. Vietnam J. Comput. Sci. 2022, 9, 187–202. [Google Scholar] [CrossRef]
  11. Hong, Y.-J. Facial Identity Verification Robust to Pose Variations and Low Image Resolution: Image Comparison Based on Anatomical Facial Landmarks. Electronics 2022, 11, 1067. [Google Scholar] [CrossRef]
  12. Ramanathan, N.; Chellappa, R. Modeling age progression in young faces. In Proceedings of the Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; pp. 387–394. [Google Scholar]
  13. Sunhem, W.; Pasupa, K. An approach to face shape classification for hairstyle recommendation. In Proceedings of the Eighth International Conference on Advanced Computational Intelligence (ICACI), Chiang Mai, Thailand, 14–16 February 2016; pp. 390–394. [Google Scholar]
  14. Alzahrani, T.; Al-Nuaimy, W.; Al-Bander, B. Integrated multi-model face shape and eye attributes identification for hair style and eyelashes recommendation. Computation 2021, 9, 54. [Google Scholar] [CrossRef]
  15. Chen, Y.; Zhang, Y.; Huang, Z.; Luo, Z.; Chen, J. CelebHair: A new large-scale dataset for hairstyle recommendation based on CelebA. In Proceedings of the International Conference on Knowledge Science, Engineering and Management, Tokyo, Japan, 14 August 2021; pp. 323–336. [Google Scholar]
  16. Liu, Y.; Schmidt, K.L.; Cohn, J.F.; Mitra, S. Facial asymmetry quantification for expression invariant human identification. Comput. Vis. Image Underst. 2003, 91, 138–159. [Google Scholar] [CrossRef]
  17. Little, A.C.; Jones, B.C.; DeBruine, L.M. Facial attractiveness: Evolutionary based research. Philos. Trans. R. Soc. Biol. Sci. 2011, 366, 1638–1659. [Google Scholar] [CrossRef]
  18. Xie, D.; Liang, L.; Jin, L.; Xu, J.; Li, M. Scut-fbp: A benchmark dataset for facial beauty perception. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Hong Kong, China, 9–12 October 2015; pp. 1821–1826. [Google Scholar]
  19. Zheng, S.; Chen, K.; Lin, X.; Liu, S.; Han, J.; Wu, G. Quantitative analysis of facial proportions and facial attractiveness among Asians and Caucasians. Math. Biosci. Eng. 2022, 19, 6379–6395. [Google Scholar] [CrossRef] [PubMed]
  20. Wei, W.; Ho, E.S.L.; McCay, K.D.; Damaševičius, R.; Maskeliūnas, R.; Esposito, A. Assessing facial symmetry and attractiveness using augmented reality. Pattern Anal. Appl. 2022, 25, 635–651. [Google Scholar] [CrossRef]
  21. Roelofse, M.M.; Steyn, M.; Becker, P.J. Photo identification: Facial metrical and morphological features in South African males. Forensic Sci. Int. 2008, 177, 168–175. [Google Scholar] [CrossRef] [PubMed]
  22. Moreton, R. Forensic face matching: Procedures and application. In Forensic Face Matching: Research and Practice; Bindemann, M., Ed.; Oxford University Press: Oxford, UK, 2021; pp. 144–173. [Google Scholar]
  23. Verma, R.; Bhardwaj, N.; Singh, P.D.; Bhavsar, A.; Sharma, V. Estimation of sex through morphometric landmark indices in facial images with strength of evidence in logistic regression analysis. Forensic Sci. Int. Rep. 2021, 4, 100226. [Google Scholar] [CrossRef]
  24. Sezgin, N.; Karadayi, B. Sex estimation from biometric face photos for forensic purposes. Med. Sci. Law 2023, 63, 105–113. [Google Scholar] [CrossRef]
  25. Porter, J.P.; Olson, K.L. Anthropometric facial analysis of the African American woman. Arch. Fac. Plast. Surg. 2001, 3, 191–197. [Google Scholar] [CrossRef]
  26. Farkas, L.G.; Katic, M.J.; Forrest, C.R. International anthropometric study of facial morphology in various ethnic groups/races. J. Craniofac. Surg. 2005, 16, 615–646. [Google Scholar] [CrossRef]
  27. Zhuang, Z.; Landsittel, D.; Benson, S.; Roberge, R.; Shaffer, R. Facial anthropometric differences among gender, ethnicity, and age groups. Ann. Occup. Hyg. 2018, 54, 391–402. [Google Scholar]
  28. Packiriswamy, V.; Kumar, P.; Bashour, M. Photogrammetric analysis of eyebrow and upper eyelid dimensions in South Indians and Malaysian South Indians. Aesthetic Surg. J. 2013, 33, 975–982. [Google Scholar] [CrossRef]
  29. Maalman, R.S.-E.; Abaidoo, C.S.; Tetteh, J.; Darko, N.D.; Atuahene, O.O.-D.; Appiah, A.K.; Diby, T. Anthropometric study of facial morphology in two tribes of the upper west region of Ghana. Int. J. Anat. Res. 2017, 5, 4129–4135. [Google Scholar] [CrossRef]
  30. Dancey, C.; Reidy, J. Statistics without Maths for Psychology; Pearson Education: New York, NY, USA, 2017. [Google Scholar]
  31. Niu, Z.; Zhou, M.; Wang, L.; Gao, X.; Hua, G. Ordinal regression with multiple output CNN for age estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 4920–4928. [Google Scholar]
  32. Ward, A.; Murray-Ward, M. Assessment in the Classroom, 1st ed.; Wadsworth Pub.: Wadsworth, OH, USA, 1999. [Google Scholar]
  33. Verma, R.; Bhardwaj, N.; Bhavsar, A.; Krishan, K. Towards facial recognition using likelihood ratio approach to facial landmark indices from images. Forensic Sci. Int. Rep. 2022, 5, 100254. [Google Scholar] [CrossRef]
Figure 1. The overview of our approach.
Figure 2. The xFoFs and their formulas (0th∼17th xFoFs).
Figure 3. The xFoFs and their formulas (18th∼35th xFoFs).
Figure 4. The extraction of xFoFs.
Figure 5. Pairs of faces whose t-scores have great differences.
Figure 6. The t-scores and the percentage of each portion of t-scores.
Figure 7. The xFoF rankings by t-scores.
Figure 8. The results of the user study for the first 10 face images (1∼10). Dark orange cells denote components with a t-score falling in the 0.13% range, orange cells denote t-scores in the 2.14% range, and light orange cells indicate t-scores in the 13.59% range.
Figure 9. The results of the user study for the second 10 face images (11∼20). Dark orange cells denote components with a t-score falling in the 0.13% range, orange cells denote t-scores in the 2.14% range, and light orange cells indicate t-scores in the 13.59% range.
Figure 10. The results of the user study for the third 10 face images (21∼30). Dark orange cells denote components with a t-score falling in the 0.13% range, orange cells denote t-scores in the 2.14% range, and light orange cells indicate t-scores in the 13.59% range.
Figure 11. The questions on facial impressions used in the second user study.
Figure 12. The results of the second user study. Cells shaded in orange indicate a standard deviation value of 1 or higher, whereas dark orange cells indicate a value of 1.5 or higher. Within each row, the light yellow items denote features where the standard deviation is 1.0 or higher for 5–10 out of the 20 faces, while the dark yellow items indicate features where the standard deviation is 1.0 or higher for more than 10 faces. Likewise, within each column, the blue items represent faces where the standard deviation is 1.0 or higher for 5–10 items.
Figure 13. The xFoF score, showing the score of the facial components from the xFoF ranking percentage.
Figure 14. Wilcoxon rank-sum test and Mann–Whitney U test results for the distributions of user scores and xFoF rank scores for the faces, where 16 out of 20 faces satisfy p ≥ 0.05 and 19 out of 20 satisfy p ≥ 0.01. Red figures denote p-values greater than 0.05, blue figures denote p-values in (0.01, 0.05) and black figures denote p-values smaller than 0.01.
Figure 15. Wilcoxon rank-sum test and Mann–Whitney U test results for the distributions of user scores and xFoF rank scores for the facial components, where 18 out of 29 features satisfy p ≥ 0.05 and 24 out of 29 satisfy p ≥ 0.01. Red figures denote p-values greater than 0.05, blue figures denote p-values in (0.01, 0.05) and black figures denote p-values smaller than 0.01.
Figure A1. Facial features in anthropometry studies.
Table 1. Participants’ rankings for the dominant components that matched the xFoF ranking percentages.

| xFoF Rank Percentage | Count | Rank 1 | Rank 2 | Rank 3 | Rank 4 | Rank 5 | Rank 6 | Rank 7 | Rank 8 | Rank 9 | Rank 10 | Average |
| 0.13% | 8 | 6 | 2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1.25 |
| 2.14% | 42 | 20 | 14 | 5 | 2 | 1 | 0 | 0 | 0 | 0 | 0 | 1.80 |
| 13.59% | 118 | 4 | 14 | 25 | 26 | 21 | 15 | 9 | 4 | 0 | 0 | 4.24 |
Table 2. The t-score percentages for the most dominant facial components selected by the participants.

| Participants’ Rank | 0.13% | 2.14% | 13.59% | Above 13.59% | Total |
| Rank 1 | 4 | 15 | 10 | 1 | 30 |
| Rank 2 | 1 | 7 | 17 | 5 | 30 |
| Rank 3 | 2 | 9 | 14 | 5 | 30 |
Table 3. The count of answers in standard deviation categories.

| Category of Standard Deviation | Count |
| 0∼0.5 | 70 |
| 0.5∼1 | 433 |
| 1∼1.5 | 74 |
| 1.5∼2 | 3 |
| Total | 580 |

