Review

Advancement in Human Face Prediction Using DNA

1. Department of Biomedical Engineering, Khalifa University of Science and Technology, Abu Dhabi P.O. Box 127788, United Arab Emirates
2. Center for Biotechnology, Khalifa University of Science and Technology, Abu Dhabi P.O. Box 127788, United Arab Emirates
3. College of Medicine and Health Sciences, Khalifa University of Science and Technology, Abu Dhabi P.O. Box 127788, United Arab Emirates
4. Department of Electrical Engineering and Computer Science, Khalifa University of Science and Technology, Abu Dhabi P.O. Box 127788, United Arab Emirates
5. Division of Psychiatry, Faculty of Health and Medical Sciences, The University of Western Australia, Crawley, WA 6009, Australia
6. School of Medical and Health Sciences, Edith Cowan University, Joondalup, WA 6027, Australia
7. Emirates Bio-Research Center, Ministry of Interior, Abu Dhabi P.O. Box 389, United Arab Emirates
* Author to whom correspondence should be addressed.
Genes 2023, 14(1), 136; https://doi.org/10.3390/genes14010136
Submission received: 26 October 2022 / Revised: 15 December 2022 / Accepted: 21 December 2022 / Published: 3 January 2023
(This article belongs to the Special Issue Feature Papers in Technologies and Resources for Genetics)

Abstract:
The rapid improvements in identifying the genetic factors contributing to facial morphology have enabled the early identification of craniofacial syndromes. Similarly, this technology can be vital in forensic cases involving human identification from biological traces or human remains, especially when reference samples are not available in the deoxyribonucleic acid (DNA) database. This review summarizes the currently used methods for predicting human phenotypes such as age, ancestry, pigmentation, and facial features based on genetic variations. To identify the facial features affected by DNA, various two-dimensional (2D) and three-dimensional (3D) scanning techniques and analysis tools are reviewed. A comparison between the scanning technologies is also presented in this review. Face-landmarking techniques and face-phenotyping algorithms are discussed in chronological order. Then, the latest approaches in genetics-to-3D-face-shape analysis are emphasized. A systematic review of the current markers that passed the genome-wide association study (GWAS) threshold for single nucleotide polymorphism (SNP)–face trait associations in the GWAS Catalog is also provided, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) approach. Finally, the current challenges in forensic DNA phenotyping are analyzed and discussed.

1. Introduction

During the last two decades, various genotyping techniques have been used to discover genetic factors responsible for variations in human appearance. Predicting facial characteristics has long been a challenge for anthropologists, medical geneticists, and criminalists. In this review, this type of prediction will be referred to as forensic DNA phenotyping (FDP). FDP aims to infer the unknown, externally visible characteristics (EVCs) of a person from DNA. After anthropologists established the phenotypic bases for human identification, geneticists carried out research into the genetic variations underlying morphological features commonly used in human identification, such as age, ancestry, eye color, hair color, skin color, and facial features [1,2,3,4,5]. In 1996, Charles H. Brenner published a paper extending the use of DNA short tandem repeat (STR) profiles to estimate, using Bayesian reasoning, the likelihood ratio for racially distinguishing Caucasians from African-Americans [6]. This established the mathematical grounds for determining the ancestry of suspects who leave biological evidence at a crime scene. In addition, the inference of human physical appearance, such as ancestry and other phenotypes, from DNA testing was discussed in a 2008 book by Tony Frudakis under the term molecular photofitting [7]. DNA-phenotyping techniques can be significant in disaster victim identification (DVI), for example after tsunamis and hurricanes, wherein facial identification of deceased individuals is difficult due to decomposition-induced changes in the skin, eye color, and other environmental factors [8]. In addition, this type of facial prediction is vital in cases where searches in DNA and fingerprint databases and crime-scene clues have been exhausted without identification [9].
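The Bayesian likelihood-ratio reasoning mentioned above can be illustrated with a toy calculation: given the frequency of a marker's allele in two reference populations, compare how probable the observed genotype is under each. The markers, genotypes, and frequencies below are invented for illustration, not real forensic data.

```python
# Toy sketch of likelihood-ratio ancestry reasoning: for each marker,
# compute P(genotype | population) under Hardy-Weinberg equilibrium,
# then multiply the per-marker ratios. Frequencies are illustrative only.

def genotype_prob(allele_freq: float, n_copies: int) -> float:
    """Hardy-Weinberg probability of carrying n_copies (0, 1, or 2)
    of an allele, given its frequency in a reference population."""
    p, q = allele_freq, 1.0 - allele_freq
    return {0: q * q, 1: 2 * p * q, 2: p * p}[n_copies]

def likelihood_ratio(genotypes, freqs_pop1, freqs_pop2) -> float:
    """Product over independent markers of P(genotype|pop1) / P(genotype|pop2)."""
    lr = 1.0
    for g, f1, f2 in zip(genotypes, freqs_pop1, freqs_pop2):
        lr *= genotype_prob(f1, g) / genotype_prob(f2, g)
    return lr

# Hypothetical allele copy numbers from a trace sample, and hypothetical
# allele frequencies of three markers in two reference populations:
observed = [2, 1, 0]
pop_a    = [0.80, 0.60, 0.10]
pop_b    = [0.20, 0.40, 0.70]
print(likelihood_ratio(observed, pop_a, pop_b))  # LR >> 1 favors population A
```

A likelihood ratio well above 1 supports origin from the first population; in practice many more markers and careful reference data are needed.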
In 2003, genetic ancestry testing was first used to identify the race of a suspect behind a series of rape and murder cases in South Louisiana. Analyzing the evidence using a DNA Witness kit revealed that the suspect was of mixed ancestry, specifically 85% African and 15% Native American. This DNA profile led to the identification and conviction of the suspect in 2004 [10]. As the technology has advanced and its reliability has increased, it can now provide additional support to traditional DNA-profiling methods. By providing an informative description of a suspect’s physical features, the technique can accelerate an investigation through the possible inclusion or exclusion of suspects based on the provided data [3,11,12]. In addition, when a victim’s skull is not available for facial reconstruction, victim identification can be hindered [13].
This review is organized as follows: Section 1 provides a general overview of DNA phenotyping; Section 2 covers DNA phenotyping in detail; Section 3 describes the different 2D and 3D facial scanning techniques and analysis tools; Section 4 provides an overview of the current face-landmarking techniques, algorithms, and analysis tools; a detailed survey of approaches for analyzing and understanding facial features from DNA follows; and the review closes with the present challenges in forensic DNA phenotyping.

2. DNA Phenotyping

The genetic influence on facial features has been investigated through studying various factors such as the impact of Sonic-Hedgehog, bone morphogenetic proteins, and homeobox genes on facial feature development [14,15,16,17]. Moreover, some genetic disorders, such as Neurofibromatosis, Fetal Alcohol Spectrum Disorder, the deletion of 22q11.2, and chromosomal abnormalities such as Down Syndrome can cause changes in facial features when compared to healthy individuals [18,19,20].
Epigenetic factors such as DNA methylation have shown reproducible results in age prediction due to the association between DNA methylation levels and age at some CpG sites [21,22,23,24]. DNA methylation levels in other genes, such as ELOVL2, FHL2, KLF14, C1orf132/MIR29B2C, and TRIM59, were also correlated with a mean absolute deviation (MAD) of 3.844 years from chronological age [25]. On the other hand, Xia et al. used three-dimensional facial image analysis to predict the age of Chinese participants with an average difference between the predicted and chronological age of only ±2.8 to 2.9 years [26]. This finding demonstrates the importance of age in face prediction, along with other genetic factors.
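The mean absolute deviation (MAD) figures quoted above summarize how far predicted ages fall from chronological ages. The sketch below shows how such a MAD is computed from a simple linear methylation-age model; the CpG weights, intercept, and beta values are hypothetical stand-ins, not the published model coefficients.

```python
# Sketch of computing the mean absolute deviation (MAD) between ages
# predicted from DNA methylation levels and chronological ages.
# Weights, intercept, and methylation beta values are hypothetical.

def predict_age(betas, weights, intercept):
    """Linear age estimate from CpG methylation levels (0..1)."""
    return intercept + sum(b * w for b, w in zip(betas, weights))

def mean_absolute_deviation(predicted, chronological):
    return sum(abs(p - c) for p, c in zip(predicted, chronological)) / len(predicted)

# Hypothetical methylation levels at two CpG sites for three subjects:
subjects  = [[0.62, 0.41], [0.75, 0.55], [0.50, 0.30]]
weights   = [55.0, 20.0]          # illustrative coefficients per CpG site
intercept = -10.0
true_ages = [33, 42, 24]

preds = [predict_age(b, weights, intercept) for b in subjects]
print(mean_absolute_deviation(preds, true_ages))  # average error in years
```

Published age-prediction models are fitted on hundreds of samples; the principle of reporting a MAD in years is the same.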
In addition, examining the European population within datasets such as the 1000 Genomes Project showed that polygenicity strongly affects phenotypes. Nevertheless, there is a correlation between different phenotypes within specific groups of people, as they share similar genetic variations [27,28]. There is a link between facial traits and population substructures, which suggests that facial morphology could be affected by ancestry [17]. Most of the studies included in the genome-wide association study (GWAS) Catalog on genetic-to-phenotype associations were conducted on populations of European descent, with an underrepresentation of other populations, such as Middle Eastern populations, which contributed only 0.08% of the GWAS database [29,30]. Studying samples from under-represented population groups can improve our knowledge of genetic structures and extend the applicability of forensic and medical findings [31].
Ancestry-based SNPs (AISNP) are usually linked with facial traits because the predictability of a given population’s ethnicity is higher when the population’s facial features are more distinct [32].
Research reports have demonstrated the importance of AISNPs in forensic, medical, and anthropological applications [33,34,35]. The Kidd and Seldin AISNP panels include diverse data concerning reference populations from major continental regions [33]. Most ancestry markers are di-allelic (insertion–deletion) markers or SNPs, as the current method of using short tandem repeat (STR) markers does not predict ancestry; hence, SNPs can provide investigative leads [36]. The Snipper App Suite is an open-source tool that provides multiple solutions for biogeographical classification based on massive parallel sequencing (MPS) panels that contain AISNPs [37,38]. Other commercial kits have also been developed to determine ethnic background and ancestry, including AncestryDNA, 23andMe, and National Geographic [39,40,41].
AISNPs have different frequencies in different populations, thus enabling the determination of individuals’ ancestry using DNA [33,42,43]. An individual’s genetic ancestry is mainly presented as a proportional ancestry or admixture by determining the population most correlated with the genetic variation in the DNA sample [44,45,46].
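The idea of reporting ancestry as a proportional admixture can be sketched as a maximum-likelihood fit: assume an individual's allele frequencies are a mixture of two reference populations and find the mixing proportion that best explains the observed genotypes. The markers and frequencies below are invented; real panels use many AISNPs and more than two reference populations.

```python
import math

# Sketch of estimating a proportional admixture between two reference
# populations from one individual's genotypes, by maximizing the
# Hardy-Weinberg likelihood over a mixing proportion alpha.
# Allele frequencies are illustrative, not real AISNP data.

def log_likelihood(alpha, genotypes, freqs_a, freqs_b):
    ll = 0.0
    for g, fa, fb in zip(genotypes, freqs_a, freqs_b):
        p = alpha * fa + (1 - alpha) * fb      # admixed allele frequency
        q = 1 - p
        hw = {0: q * q, 1: 2 * p * q, 2: p * p}[g]
        ll += math.log(hw)
    return ll

genotypes = [2, 2, 0, 0]            # allele copy numbers at four markers
freqs_a   = [0.9, 0.9, 0.9, 0.1]    # hypothetical population A frequencies
freqs_b   = [0.1, 0.1, 0.1, 0.9]    # hypothetical population B frequencies

# Grid search over admixture proportions from 0 to 1:
alphas = [i / 100 for i in range(101)]
best = max(alphas, key=lambda a: log_likelihood(a, genotypes, freqs_a, freqs_b))
print(f"estimated ancestry: {best:.2f} A / {1 - best:.2f} B")
```

Dedicated tools replace the grid search with proper optimization and handle linkage and uncertainty, but the proportional-ancestry output has this form.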
In addition to the investigative leads that AISNPs can provide, phenotype-informative SNPs (PISNPs) reveal more information regarding a suspect’s physical appearance. Such phenotypes include the color of the subject’s eyes, hair, and skin, as well as their facial features [5,47,48,49,50,51,52,53,54]. There are complex interactions between the genes controlling the phenotypes of individuals, such as mutations, genetic drift, recombination, segregating variants, and copy number variants. Therefore, scientific collaborators from genetics, image-processing engineering, bioinformatics, statistics, and other backgrounds have focused on categorizing the genetic variations correlated with a specific phenotype during the last decade. This collaboration aims to understand the aforementioned correlation and accurately predict facial features [17,55].
One of the main categories that identifies and distinguishes an individual from another is color. The prominent, primarily identifiable visible colors are related to a person’s eyes, hair, and skin. One of the tools designed to predict these three phenotypes is the HIrisPlex-S system, which combines prediction models for eye, hair, and skin color [51,56].
The IrisPlex eye color prediction tool demonstrated a prediction accuracy of over 90% for blue/brown phenotypes using only 31 pg of DNA when applied to Dutch Europeans, making this kit sensitive and suitable for low-copy-number DNA samples [57,58,59,60]. However, the tool shows low prediction accuracies for green-hazel or intermediate dark phenotypes and admixed populations, indicating the need to further investigate the tool using admixed populations and larger sample sizes [61,62,63]. Another GWAS, incorporating ~193,000 European participants from 10 population groups, identified 124 genetic loci for eye color prediction, of which 50 had not been reported previously. The findings also demonstrated consistencies in the genetic structure of eye color traits between East Asian and European populations [64]. In addition, some researchers have tested the tool on the Pakistani population using degraded DNA samples, which showed lower accuracies (70%), mainly due to the considerable variation of phenotypic eye color in Europeans compared to South Asians. They recommended refraining from using the tool if some SNPs had dropout alleles [65].
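Eye-color tools of the IrisPlex kind are built on multinomial logistic regression: each SNP's minor-allele count contributes to per-category scores, and a softmax turns the scores into probabilities for blue, intermediate, and brown. The sketch below shows this style of model with invented coefficients over three hypothetical SNPs, not the published IrisPlex parameters.

```python
import math

# Sketch of multinomial-logistic eye color prediction: linear scores per
# color category from minor-allele counts, converted to probabilities.
# Coefficients and intercepts are invented for illustration.

def predict_eye_color(allele_counts, coef, intercepts):
    scores = {
        c: intercepts[c] + sum(w * g for w, g in zip(coef[c], allele_counts))
        for c in coef
    }
    z = max(scores.values())                       # for numerical stability
    exps = {c: math.exp(s - z) for c, s in scores.items()}
    total = sum(exps.values())
    return {c: exps[c] / total for c in exps}      # softmax probabilities

# Hypothetical model over three SNPs ("brown" as the reference category):
coef = {"blue": [2.0, 0.5, -0.3], "intermediate": [0.8, 0.2, 0.1], "brown": [0, 0, 0]}
intercepts = {"blue": -3.0, "intermediate": -1.5, "brown": 0.0}

probs = predict_eye_color([2, 1, 0], coef, intercepts)
print(max(probs, key=probs.get))   # category with the highest probability
```

In practice the category probabilities are reported alongside thresholds, so that low-confidence calls (e.g. intermediate phenotypes) can be flagged rather than forced.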
Regarding hair color prediction, the HIrisPlex tool showed area under the curve (AUC) values between 0.72 and 0.92 for blond, brown, red, and black hair colors, which indicates the tool’s potential for forensic applications [5,47,48,49,50,52].
SNPs for skin color prediction were included in the HIrisPlex-S tool, distributed among 16 pigmentation genes. Two skin tone models were assessed, using three- and five-category skin tone scales. The three-category model (light, dark, and dark-black) demonstrated accuracies ranging from 83% to 97%, while the five-category model, ranging from very pale to dark-black, showed accuracies of 72% to 97% [51,53]. Some of the genes in the HIrisPlex-S tool were further studied in admixed populations and showed similar associations [66,67,68,69]. In addition, the association of the SNP rs12913832 (HERC2) with the three skin pigmentation traits was discovered in Polish, European, and mixed populations, including Hispanics [70,71,72,73]. Furthermore, a GWAS conducted on 17,019 Korean women revealed associations of seven loci with facial pigmentation, of which three had not been previously reported [74]. Other researchers are testing these multiplexes and trying to discover their associations with nearby genes, with the aim of validating and optimizing the accuracy of the results when applied to different populations [75,76,77].
On the other hand, Arab researchers have tested some of the primary loci associated with eye color, such as HERC2 and OCA2, on a Middle Eastern (Iraqi) population. They discovered that, due to linkage disequilibrium between the SNPs of these loci and variation in the minor allele frequencies, the model deviated when predicting dark-brown, hazel, and blue eye colors, since the tools did not account for Middle Eastern populations [78]. Other researchers confirmed similar outcomes in other Middle Eastern (Saudi and Iranian) populations [72,79]. This shows the importance of validating prediction tools on multiple population groups in order to understand the resulting variations before officially using the tool in forensic applications.
Overall, eye, hair, and skin color prediction showed different accuracy levels among population groups and lower prediction accuracies for intermediate color groups, which can be improved by increasing the genetic markers that account for ancestry and by expanding studies on admixed populations [5,47,49,50,51,53,56,72,80,81,82]. The HIrisPlex-S tool is an open-source tool that is available online for any forensic investigators who are interested in obtaining an inference of hair/skin/eye color using the allele copy number of the specified SNPs of interest. The software provides the accuracy level of the results based on AUC values [83].

Inference of Face Features

The use of the human skull to reconstruct an entire face has been employed in the forensic investigation of deceased bodies for the last few decades, and facial images have been successfully used to identify unknown individuals [84,85,86]. In 2004, Turner et al. developed reality enhancement/facial approximation by computational estimation (RE/FACE) software to predict a skull’s soft tissue structure. This software was created for the Federal Bureau of Investigation (FBI) to automatically place dense landmarks using computerized tomography (CT) scans. Recently, other open-source computerized tools, such as FacIT, were developed to allow the reconstruction of a person’s face from a CT scan of their skull [13,87]. These tools have helped investigators and anthropologists identify people from different eras and population groups, despite the controversy. Some researchers have explored the influence of DNA on the skull, and their findings suggest that facial traits are composed of variations in the cranium and soft tissue thickness [88].
To address instances where there is no skull from which to build a face, facial trait inference is currently being researched using DNA-phenotyping techniques. This method provides investigative leads regarding a person’s physical appearance from biological evidence. Similar to face reconstruction from skulls, facial landmarks are essential in this type of analysis. They can be used to distinguish faces through linear measurements between them, such as the face height, nose (width, prominence, and size), interocular distance, chin and forehead prominence [5,47,48,49,50,51,89,90].
One of the approaches that investigated the effect of gender, ancestry, and genetic variations on facial measurements used bootstrapped response-based imputation modeling (BRIM) to measure and model facial shape variations. The related study involved 592 participants from an admixed population of West Africans and Europeans. The sample pool was obtained from the United States, Brazil, and Cape Verde. They found 24 SNPs distributed among 20 genes that significantly affect face morphology. Moreover, from the total number of 7150 high-density quasi-landmark (QL) configurations of the superimposed and symmetrized 3D faces, 44 principal components (PCs) were selected, which described 98% of the total variation [80].
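The dimensionality-reduction step described above, where 44 principal components were retained to describe 98% of the variation across thousands of quasi-landmarks, can be sketched as follows. Random data stand in for real superimposed landmark configurations; the point is selecting the smallest number of components whose cumulative explained variance reaches the 98% threshold.

```python
import numpy as np

# Sketch of PCA on landmark configurations: center the data, take the
# SVD, and keep the smallest number of principal components (PCs) whose
# cumulative explained variance reaches 98%. Data are random stand-ins.

rng = np.random.default_rng(0)
n_faces, n_coords = 100, 60          # e.g. 20 3D landmarks -> 60 coordinates
X = rng.normal(size=(n_faces, n_coords)) @ rng.normal(size=(n_coords, n_coords))

Xc = X - X.mean(axis=0)                        # center the configurations
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)          # variance ratio per component
cum = np.cumsum(explained)
n_pcs = int(np.searchsorted(cum, 0.98) + 1)    # smallest k reaching >= 98%
print(f"{n_pcs} PCs explain {cum[n_pcs - 1]:.1%} of the variance")
```

With real 3D face data, the retained PC scores then serve as the low-dimensional shape variables that are tested for association with SNPs.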
Other GWAS studies demonstrated the strong association between genetic variations and facial features such as face width, forehead protrusion, cheek protrusion, nose ridge elevation, nasal length and protrusion, nasion position, nose bridge breadth, and the distance between the eyeballs. These associations were mostly presented as p-values and standard deviation values rather than as a percentage of accuracy. Each study targeted a different population, which may have affected the correlation values due to factors affecting ancestrally correlated facial features. Optimizing such technology for forensic applications requires a further understanding of the genetic bases of human appearance. Such optimization relies on the use of larger sample sizes from different population groups, understanding the related epigenetic factors, investigating possible environmental factors, involving collaborators of varying scientific disciplines, and improving analysis methods [1,8,17,89,91,92,93,94,95].

3. Facial Screening and Scanning Tools

3.1. Facial Screening Using 2D Approach

Two-dimensional images of the face have been used in clinical applications for the last decade (Ferry et al., 2014). Different medical applications have incorporated 2D photographs in the diagnosis of genetic syndromes and face-related anomalies at earlier stages of human development [96]. The establishment of highly sophisticated centers with advanced equipment and technologies is very difficult in some rural areas and poor communities. The use of phones offers a readily available method and can provide the information required by geneticists to offer their diagnoses online. This type of inference creates a practical option for the early diagnosis of children from underdeveloped areas. Moreover, many datasets, such as Face2Gene and that of Ferry and colleagues, have been made publicly available online to allow researchers to incorporate them into their tools [97,98]. As a result, the algorithms have been simplified to be trained on face photos of patients with various genetic conditions [99].
The architecture of the deep learning algorithm comprises three levels of neural networks, which first standardize the 2D face images, then detect the shape of the face, and finally estimate the genetic syndrome risks. Trained on 2800 children’s photos from different countries, together with age and gender, the model demonstrated that phone photos and phone-based applications can serve as primary genetic screening tools. In general, the model evaluated the risk of children presenting with a genetic syndrome with an average accuracy of 88%. This study shows the amount of information a face can reveal about an individual’s genetic makeup using 2D photographs. The system required the manual landmarking of the faces at 44 locations. Measurements between these landmarks were then taken with an in-house application, as each quantified dysmorphology was associated with a specific genetic syndrome. Since photographs cannot provide measurements in millimeter (mm) units, each patient’s interpupillary distance was used as a standard to normalize the error.
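The normalization step mentioned above, expressing inter-landmark distances relative to the interpupillary distance to remove the unknown photo scale, can be sketched in a few lines. The landmark coordinates below are made up for illustration.

```python
import math

# Sketch of scale normalization for 2D landmark measurements: distances
# in pixels are divided by the interpupillary distance (IPD), so the
# result is scale-free. Landmark coordinates are illustrative only.

def dist(a, b):
    """Euclidean distance between two 2D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

landmarks = {
    "pupil_left":  (120.0, 200.0),
    "pupil_right": (200.0, 200.0),
    "nose_tip":    (160.0, 260.0),
    "chin":        (160.0, 340.0),
}

ipd = dist(landmarks["pupil_left"], landmarks["pupil_right"])
nose_to_chin = dist(landmarks["nose_tip"], landmarks["chin"]) / ipd
print(f"nose-to-chin distance: {nose_to_chin:.2f} interpupillary units")
```

The same ratio computed from photos taken at different distances or resolutions remains comparable, which is exactly why a within-face reference length is used.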
These technologies have higher success rates when the genetic diseases/syndromes involve joint deformation morphologies of the face, such as in Williams–Beuren, Cornelia de Lange, Down, 22q11.2 deletion, and Noonan syndromes [96].
However, to acquire a higher level of facial detail for human identification purposes and to better represent the depth of a face, a third dimension is required. Most researchers recommend using 3D scanners, which add higher resolution and greater accuracy in capturing facial details [100]. Thus, the following section focuses on 3D face scanning techniques.

3.2. 3D Face-Phenotyping Techniques

Three-dimensional surface imaging refers to the technique wherein three-dimensional data are acquired from an object as a function of three coordinates (x, y, and z). Three-dimensional scanners generate exact point clouds by obtaining an object’s fine details and capturing free-form shapes. Thus, once these features are transformed into digital data, they can be used for different purposes, such as quality checks and measurements. Moreover, surface imaging mainly works through measurements of coordinate points on the surface of an image. These measurements can be viewed as a depth map function (z) of the position (x, y) in the Cartesian coordinate system. They can also be expressed in matrix form as {z_ij = z(x_i, y_j), i = 1, 2, …, L, j = 1, 2, …, M} [101,102].
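The depth-map representation above, an L × M matrix of z values sampled on a grid of (x, y) positions, maps directly onto the point clouds that scanners output. The sketch below builds a toy depth matrix and unrolls it into an (x, y, z) point cloud; the surface function is a stand-in for real scanner output.

```python
import numpy as np

# Sketch of the depth-map representation: a matrix z_ij = z(x_i, y_j)
# sampled on an L x M grid is unrolled into an N x 3 point cloud.
# The toy surface below stands in for real scanner measurements.

L_pts, M_pts = 4, 5
x = np.linspace(0.0, 3.0, L_pts)               # x_i sample positions
y = np.linspace(0.0, 4.0, M_pts)               # y_j sample positions
xx, yy = np.meshgrid(x, y, indexing="ij")      # grid of (x_i, y_j) pairs
zz = np.sqrt(xx ** 2 + yy ** 2)                # toy depth function z(x, y)

# Flatten the L x M depth matrix into an (L*M) x 3 point cloud:
cloud = np.column_stack([xx.ravel(), yy.ravel(), zz.ravel()])
print(cloud.shape)                             # one (x, y, z) row per sample
```

Quality checks and measurements then operate on the point cloud, e.g. distances between selected (x, y, z) points.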
Different technologies are used in 3D scanners, in which each technique serves an assorted purpose and has its advantages and disadvantages. These technologies include phone applications, laser triangulation, structured light, and stereophotogrammetry.

3.2.1. Advanced Phone Application in 3D Scanning

The practice of deriving a set of points in space from a series of images is known as photogrammetry. First, multiple 2D images of the object from all possible angles are required; then, the software connects all the relevant points from the overlap of these images and creates a 3D mesh [103]. The latest innovations by mobile technology companies such as Apple, Sony, and Samsung have made it possible to generate 3D photos using their devices. iPhone models such as the 12 Pro and 13 Pro, and the newest iPad Pro, may use LiDAR scanning. These devices are equipped with built-in LiDAR sensors, which enable them to easily scan oversized objects using depth data. Multiple types of software and applications, such as Trnio and Scann3d, utilize such technologies to process photos and create 3D objects. In addition, such phones can use augmented reality (AR) to 3D-register physical objects with exceptional accuracy. While these techniques are promising, these phones need between 20 and 40 different photos of an object to acquire an acceptable scan [104]. The Trnio 3D scanning software has two configurations, object and scene modes, in which the software provides immediate assistance after the photographer scans the object. In object mode, the photographer moves around the object and the application combines the views into a complete circular capture. Scene mode is for free scanning, meaning it can be used to scan massive objects or outdoor scenes in 3D.
Other options may require the use of plug-in devices along with the phone, such as itSeez3D and Bevel [105]. These devices are usually beneficial for lower phone capabilities as they provide extra sets of cameras and eye-safe lasers as detectors. The collected information is analyzed with software that collects size and geometric information from the laser, while it collects the color, texture, and other object features from the phone camera [104].
Although smartphone applications are promising and have successfully recreated 3D-printed household objects, the high number of photos required and the long time it takes to create a scan can lower the accuracy, especially in forensics-related research.

3.2.2. Laser Triangulation-Based 3D Scanners

These scanners scan an object using either a laser line or a single laser point. The scanner emits the laser, and its light is reflected off the scanned object. First, a sensor targets the initial trajectory. Based on the changes between the trajectory and the angle of triangulation, the system perceives specific aberration angles. These angles are associated with the distance between the scanner and the object. When sufficient distance measurements are collected, the scanner maps the object’s surface, thus creating a 3D picture. The scanner can also be used on moving objects, as it can collect a series of profiles from the laser lines that form a complete 3D map of an object. However, triangulation-based scanners raise concerns regarding the safety of participants’ eyes [106]. They may also perform poorly when scanning shiny objects and materials with significant subsurface scattering [107,108]. This method was used to capture 3D face scans of the participants of the Avon Longitudinal Study of Parents and Children (ALSPAC) [109,110]. The laser used an eye-safe wavelength of 690 nm at 30 mW, accepted by the U.S. Food and Drug Administration (FDA). The use of this type of scanner shows how valuable it can be to revisit older technologies and improve them for possible applications in facial scanning.
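The angle-to-distance relationship behind laser triangulation can be sketched with a single right triangle: the laser and the sensor sit a known baseline apart, and the angle at which the reflected spot is observed fixes the distance to the surface. The numbers below are illustrative, not taken from any particular scanner.

```python
import math

# Sketch of laser-triangulation geometry: for a laser beam perpendicular
# to the baseline between emitter and sensor, the observed angle of the
# reflected spot determines the distance to the surface.
# Baseline and angle values are illustrative only.

def triangulated_distance(baseline_mm: float, observed_angle_deg: float) -> float:
    """Distance to the laser spot, from the sensor's observation angle
    measured relative to the baseline."""
    return baseline_mm * math.tan(math.radians(observed_angle_deg))

# With a 100 mm baseline, a 45-degree observation angle corresponds to a
# surface 100 mm away; steeper angles correspond to more distant surfaces.
print(triangulated_distance(100.0, 45.0))
print(triangulated_distance(100.0, 60.0))
```

Sweeping the laser line across the face and repeating this computation per sensed point is what yields the series of surface profiles described above.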

3.2.3. Structured Light 3D Scanners

This type of scanner uses active illumination: it flashes 2D, spatially varying intensity patterns generated by a special light source or a projector, and then obtains the object’s surface information using an imaging sensor. If the light is projected onto a planar (i.e., 2D) surface, the pattern acquired by the camera will be similar to the projected pattern, meaning there will be no distortion of the projected structured light. However, if the surface is nonplanar, with prominence and depth, the projected structured light will be distorted. The distortion pattern can be computed using different algorithms to generate the 3D surface of the object. These scanners can be either handheld or stationary. Typically, they are made from static SLR cameras and a light projector, such as the SL2 system produced by XYZ RGB Inc. (Kanata, ON, Canada) [101,111]. Revopoint 3D Technologies Inc. (Xi’an and Shenzhen, China) produces other devices that use the same technique, including the Handysense handheld 3D scanner [112].

3.2.4. Stereophotogrammetry 3D Scanners

The concept of these scanners involves the production of 3D images from a series of 2D images using computer vision algorithms. In this method, several photographs are taken of the object from different viewpoints using any accessible camera. The changes from one photo to the next are calculated via algorithms that automatically recognize pixels corresponding to the same physical point, resulting in a 3D image. This technology can scan objects of various scales [113]. Canfield Scientific Inc. (Parsippany-Troy Hills, NJ, USA) is a company that invented the VECTRA® M3 system based on the stereophotogrammetry 3D scanner model. Plastic surgeons mainly use this system to view high-resolution images of the face and neck [114].
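The core relation the computer-vision algorithms exploit in stereophotogrammetry is that a point seen by two calibrated, rectified cameras shifts between the images (the disparity), and depth follows from similar triangles as Z = f · B / d. The focal length, baseline, and disparity values below are illustrative only.

```python
# Sketch of the rectified-stereo depth relation used in
# stereophotogrammetry: Z = focal_length * baseline / disparity.
# All numbers are illustrative, not from any particular system.

def stereo_depth(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Depth (mm) of a point from its pixel disparity between two
    rectified camera views."""
    return focal_px * baseline_mm / disparity_px

# A point whose corresponding pixels are 40 px apart, with an 800 px
# focal length and a 60 mm camera baseline:
print(stereo_depth(800.0, 60.0, 40.0))   # 1200.0 mm, i.e. 1.2 m away
```

Repeating this for every matched pixel pair produces a dense depth map, which is why the automatic matching of corresponding pixels is the computationally hard part of the method.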

3.2.5. Selecting the Right Type of Scanners

When choosing a 3D scanner for capturing facial features, several factors need to be considered. The foremost factors are the scanning resolution and the accuracy. Scanning resolution is the smallest distance between two points on the object that the scanner can measure, and accuracy is the degree to which the measured value conforms to the object’s actual value. Moreover, as the distance between the scanner and the object increases, the absolute value of the error increases accordingly, which should also be considered. When dealing with human subjects and reflective objects, laser triangulation systems may harm the subjects’ eyes if suitable wavelengths are not used. To compare the three types of systems, four devices were selected in terms of their popularity of use in 3D-based facial research (Table 1).
Konica Minolta Vivid 900 laser cameras have the highest 3D resolution, the smallest file size, and the fastest processing speed among the four devices. The 3dMDhead system is a stationary stereophotogrammetry device that can scan the entire face, head, and neck in 360 degrees and allows the user to capture frames at the highest speed; it has been used in multiple forensically related papers, as shown in Table 1. The other device that uses stereophotogrammetry is the Vectra H1 from Canfield. It has the lowest geometry resolution, at 0.8 mm, and the lowest accuracy, at 0.84, among the instruments, while the vertex count of the images it creates is the highest, at a 1.2 mm resolution.
Moreover, Vectra H1 can be purchased at the lowest price compared to the other two devices (about half of their respective prices). On the other hand, the Artec Eva system is a handheld device that requires rotation around the object while capturing the pictures; thus, a great deal of time is required to capture a face (~4 min processing time per person). It also requires more effort from the subject to stabilize their facial expression and the person using the device to maintain a specific distance range from the subject to achieve an adequate scan. Overall, the four scanners are suitable for capturing facial details. Selecting the proper device depends on the nature of its application and the target price, resolution, coverage, and accuracy. From the literature, the 3dMDhead scanner seems to be more favorable among face-related studies due to its high speed with respect to capturing details and the consistency of the results between the collected data and the facial measurements of the subjects. However, this scanner is costly and requires a high level of experience to set up. Moreover, it requires a designated room to work at a high level of accuracy.
In the future, the use of phones to take 2D/3D photos could advance face–DNA research in the forensic, anthropological, and medical fields. If their accuracy and precision were benchmarked against the discussed scanners, researchers would have practical options with which to conduct their investigations, with fewer expenses and smaller setup areas. This option would positively impact the study of facial traits in the forensic field, as it has for emerging genetic medical diagnosis tools.

4. Face Landmarks, Algorithms, and Analysis Tools

4.1. Face Landmarks

A face landmark can be defined as a prominent, discriminative position on the face that can be used as a reference point for facial comparison. The selection of landmarks depends on the phenotype being investigated, but most of these landmarks lie in the oral-nasal region of the face. Face landmarks were introduced in 1994 by the pioneer of modern craniofacial research, Leslie Farkas, who suggested modeling faces using 17 landmarks [128].
Face landmarks are grouped into primary and secondary groups. The primary (first-order) landmarks consist of the nose tip, the corners of the mouth, the corners of the eyes, etc. They are central to forensic applications because they include the major features used in human identification. Primary landmarks can be localized using tools such as the histogram of oriented gradients (HOG) and the scale-invariant feature transform (SIFT) [129]. Secondary (second-order) landmarks are guided by the primary landmarks and capture finer detail at non-extremity points. These landmarks usually represent features such as nose saddles, chin tips, and cheek curves, and they are typically used to understand facial expressions or to analyze one-sided/incomplete facial photos.
Several issues can compromise a high-quality facial scan, including pose, expression, illumination, and occlusion [129]. Landmarking approaches include manual, semi-automated, and fully automated methods, as well as face masking using quasi-landmarks. An overview of these methods is provided below.

4.2. Manual Landmarking

Distances between landmarks are obtained by taking anthropometric linear measurements. Weinberg et al. compared direct anthropometry using calipers, 2D photogrammetry from photos, and cephalometry of the skull using radiography techniques, and showed that these techniques could not accurately capture the details of 3D human faces [128,130].
Landmarks are usually placed either by marking the face with points using a marker and measuring the distances between them or by estimating landmark positions without a marker. This process can be performed manually using digital or ruled calipers. The measurements can also be obtained by uploading facial scans to software that allows 2D photos and 3D face meshes to be rotated [128]. Software-based measurement is preferable because manual landmark identification is difficult to control and is prone to a higher degree of error through intra- and inter-operator variation [127,128,131,132,133]. Multiple statistical analyses have been performed to investigate the precision of each technique (manual calipers vs. 3D images with and without a reference of dots). The mean absolute difference, relative error magnitude, practical error of measurement, and coefficient of consistency of each landmarking technique were compared, and the results showed higher accuracy when the 3D images were marked using computer software [128].
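The agreement statistics compared in these studies can be illustrated with a small sketch on hypothetical repeated measurements (the values below are invented for illustration, not taken from the cited work):

```python
# Sketch of landmark-measurement agreement statistics (hypothetical data).
# Two operators measure the same five inter-landmark distances (in mm).
op1 = [32.1, 45.7, 61.3, 28.9, 54.0]
op2 = [31.8, 46.2, 60.7, 29.4, 53.5]

n = len(op1)
# Mean absolute difference (MAD) between the two sets of measurements.
mad = sum(abs(a - b) for a, b in zip(op1, op2)) / n
# Relative error magnitude (REM): MAD expressed as a percentage of the
# grand mean of all measurements.
grand_mean = sum(op1 + op2) / (2 * n)
rem = 100 * mad / grand_mean

print(f"MAD = {mad:.2f} mm, REM = {rem:.2f}%")  # MAD = 0.48 mm, REM = 1.08%
```

A lower MAD/REM between repeated measurements indicates a more precise landmarking technique, which is how the caliper-based and software-based approaches can be ranked.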

4.3. Semi-Automated Landmarking

The semi-automatic landmarking of facial features can be performed using MATLAB® (Natick, MA, USA) and in-house-developed software, including image analysis tools. Viola-Jones is an object-detection algorithm used to detect the face and eyes, while an Active Appearance Model annotates the remaining landmarks. An operator supervises this process and confirms the landmarks annotated by the automated system [93]. Another semi-automatic approach generates 500 uniformly spread digitized landmarks using a sliding technique. The points are generated from a template 3D mesh that initially contains 16 manually selected landmarks and then radiate from the template landmarks to ensure a uniform spread over the facial surface (approximately 1.5 mm radius). Sliding the landmarks is expected to increase the accuracy of geometric analysis because it captures extra characteristics of the face, such as curves and the Procrustes distance. This technique is implemented in multiple programs, such as Viewbox (Kifissia, Greece) [134] and R packages such as geomorph, run in RStudio (Boston, MA, USA) [135,136].

4.4. Automated Landmarking

Since manual landmarking is time-consuming and challenging to replicate, automating the process can help deal with large databases and cover larger face areas. Therefore, different statistical models and algorithms have been developed to detect facial features and automatically place landmarks. Some techniques use the geometric properties of the facial surface by incorporating one or more of the following differential geometry descriptors: the mean, Gaussian, and principal curvatures, shape indices, and curvedness [137]. One method used a thresholding technique to examine the correlation between the location of each landmark and the behavior of each predefined geometric descriptor on the face [138]. These methods were effective in localizing/detecting segments of faces without expressions or occlusions.
Moreover, landmarks were successfully extracted from 3D faces with neutral, expressive, and occluded mouths/eyes using MATLAB® algorithms [138,139]. The success rate of the facial feature extraction algorithms was tested on available face databases such as FRAV3D, FRGC 1.0 and 2.0, FaceWarehouse, and GavabDB [140,141,142,143]. This study demonstrated the possibility of building a reliable, automated landmarking method even for faces with different expressions and occlusions. Multiple online datasets of 3D faces/scans are also available for evaluating the accuracy and efficiency of new methods, as the performance of an algorithm has been shown to decrease when it is tested on another dataset [129].
Another automatic landmarking technique used a statistical ensemble approach based on comparing the magnitudes of complex Gabor wavelet coefficients with the dataset. The algorithm improved the accuracy of landmarking facial scans by up to 22%. Stacked generalization over facial features was found to decrease the average error to 1.7 mm across 21 landmarks. The model could also be trained with a minimal number of 3D facial images and was able to handle large-cohort GWAS studies [144,145]. It is important to note that most of these methods are under development and have not been used intensively in genetic studies [146]. One GWAS study utilized 2D facial photographs, obtaining metric measurements by converting the pixels of the face photos into millimeters via different algorithms [146].

4.5. Estimation of 3D Face Landmarks Using Mobile Devices

Multiple solutions are available for detecting and estimating 3D face landmarks from 2D photos taken by phones running Android and iOS. One of these programs is MediaPipe Face Mesh [147]. This application can estimate 468 face landmarks in real time. It can track the position of the face through face transformations in space by bridging the gaps between the estimated landmarks. The machine learning pipeline is built on one neural network model that detects the full-face image in order to align and connect the frames and another that approximates a 3D face using regression models [147].

4.6. Face Masking and Quasi-Landmarks

Some researchers have established a guide to map the 3D face meshes in question to a template that resembles an anthropometric mask [148]. The open-source MeshMonk tool is a script designed to automatically quantify the dense surfaces of the biological phenotypes in question. It automates the orientation and resizing of the face, maps the face using a non-rigid transformation to a spatially dense face anthropometry model, and then uses the resulting representation in multivariate statistical analysis [149]. This mapping also establishes correspondences between the quasi-landmarks of the model and the mesh points of the targeted faces. Generalized Procrustes Analysis (GPA) was used to account for changes in the orientation, position, and size of the face. This superimposition combined the original and reflected configurations of the faces, decomposing each face shape into an asymmetric and a bilaterally symmetric component. The asymmetry was determined as the deviation from the average of the original and reflected configurations. Through these steps, differences in facial asymmetry were ignored, and only the symmetrical components were used [150]. Multiple researchers have confirmed that automated measurements from 3D facial scans are more accurate than direct anthropometry [127,128]. Researchers tested the accuracy of the MeshMonk tool on 41 human faces and found it comparable to 19 manually placed landmarks, with an average Euclidean distance error of 1.26 mm and a range of 0.7–1.68 mm [151].
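The decomposition into symmetric and asymmetric components can be illustrated with a minimal 2D sketch. The landmark coordinates and bilateral pairing below are hypothetical, and a full pipeline would first superimpose the configurations with GPA; this sketch assumes the face is already aligned with the midline at x = 0:

```python
# Minimal sketch (hypothetical 2D data) of decomposing a landmark
# configuration into symmetric and asymmetric components by averaging
# it with its relabeled reflection.

# Five 2D landmarks: left eye, right eye, nose tip,
# left mouth corner, right mouth corner.
face = [(-3.0, 2.0), (3.1, 2.1), (0.2, 0.0), (-2.0, -2.0), (1.9, -2.1)]

# Reflect across the midline (x = 0) and relabel bilateral pairs
# (left <-> right); midline landmarks map to themselves.
pairs = {0: 1, 1: 0, 2: 2, 3: 4, 4: 3}
reflected = [None] * len(face)
for i, (x, y) in enumerate(face):
    reflected[pairs[i]] = (-x, y)

# Symmetric component: average of the original and relabeled reflection.
symmetric = [((x1 + x2) / 2, (y1 + y2) / 2)
             for (x1, y1), (x2, y2) in zip(face, reflected)]
# Asymmetric component: deviation of the original from the symmetric part.
asymmetric = [(x1 - x2, y1 - y2)
              for (x1, y1), (x2, y2) in zip(face, symmetric)]
```

By construction, the symmetric configuration is perfectly mirror symmetric, which is the component retained in the analyses described above, while the asymmetric residual is discarded.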
Some computer programs, such as Cliniface, include tools for the automatic extraction of facial landmarks from 3D facial scans [100,149]. This software uses the MeshMonk algorithms to landmark 69 points on a 3D face. In addition, linear and angular facial measurements can be obtained with the same software [100,149]. This tool is of great use since it is available online for free and is designed for dysmorphological facial analysis research. Furthermore, the software extracts measurements from 3D faces with a high level of accuracy, as verified by comparison with manual extraction performed by an expert [100].
Most researchers in forensic and anthropological fields use linear distance measurements. These measurements can allow for greater collaboration between different study groups, especially when 3D face datasets cannot be shared due to ethical research restrictions set to protect the volunteers [91].

5. Current Advances in Approaching Genetically Based 3D Facial Shape Analysis

EVC inference using DNA is complex because environmental factors also affect facial feature formation. Thus, larger 3D facial datasets are required to begin exploiting the capabilities of this technology. Furthermore, FDP techniques and analytical approaches are still in their development stages, and greater agreement is required among the scientific community concerning the best approaches [149,152]. Sero et al. presented a framework for modeling facial features based on average faces that are then refined using DNA [124]. The framework consists of three stages: unraveling genetic architectures, perceptual analysis and applications, and the predictive modeling of faces. The model also lists the disciplines needed at each stage, mainly comprising biologists, geneticists, bioinformaticians, image analysts, forensic scientists, lawyers, and policymakers. There are currently two approaches to studying facial features using genetics: the DNA-to-face approach and the face-to-DNA approach [124].

5.1. DNA to Face Approach

The DNA-to-face approach is a mode of investigation used by geneticists to understand the phenotypic aspects of the face based on genetic mutations using DNA-genotyping techniques [153]. The method has also been investigated using enhancer-activity detection by reporter gene assays and RNA-seq assays in non-human models such as mice and zebrafish [154,155].
GWASs have investigated the associations between SNPs and facial features, age, and genomic ancestry. Forensic scientists have focused on DNA-phenotyping research, especially in the last ten years. Owing to the abundance of generated data, multiple databases have been established to organize the related research findings. For the purposes of this study, the National Human Genome Research Institute–European Bioinformatics Institute (NHGRI–EBI) GWAS Catalog database was used to report the SNPs that have been significantly associated with facial features to date. This database is one of the main resources for GWAS studies, and its tools provided the basis for identifying, excluding, and including the studies in this review, as detailed in Figure 1 [156].
The results shown in the identification stage of Figure 1 were obtained by first searching for the registered trait "facial morphology measurement" in the GWAS Catalog on 30 April 2022. Facial morphology measurement is described as the "quantification of some aspect of facial morphology", such as "lip thickness", "forehead height", or "chin protrusion" [156]. The results included 1101 SNP associations with 109 traits in 19 publications (p-value 1 × 10−79–9 × 10−2) [91,146,154,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172]. However, this search did not return facial features in the nose area. As a result, another search was conducted for the "nose morphology measurement" trait, described as the "quantification of some aspect of nose morphology", such as "nose wing breadth", "nose tip shape", or "nose profile" [156]. This search yielded 250 associations with 33 nose traits in 11 studies [146,154,157,158,160,161,162,163,165,168,169]. From these two searches, a total of 1351 associations with 142 facial traits were reported in 19 GWAS studies [91,146,154,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172]. A total of five papers were excluded [91,157,166,167,172].
Analysis of the population backgrounds in these papers revealed that most of the studied populations were of European ancestry (80%), followed by East Asian (10%), Hispanic or Latin American (7%), African (2%), and admixed (1%) ancestries (Figure 2). These results corroborate the persistent European bias in GWAS data, previously reported in 2016 [30].
For the purposes of this review, the "facial attractiveness measurement" trait was excluded because it is not relevant to facial feature prediction using DNA. In addition, only SNPs that met the GWAS significance threshold (p ≤ 5 × 10−8) were reported. This threshold was selected based on the Bonferroni correction method, which divides the family-wise significance level by the number of tests under the assumption that every SNP on an array is tested independently [173]. To account for the linkage disequilibrium (LD) that may be present between SNPs on the same array, the calculation assumes approximately 1,000,000 independent tests across the genome; thus, 0.05/1,000,000 yields the threshold (p < 5 × 10−8) [174]. This is considered one of the most conservative methods for selecting the p-value threshold [174,175]. In addition, note that the database reports one p-value per correlated trait, which may be the p-value of discovery, replication, or meta-analysis. Therefore, the database was used as a filtering tool for all SNPs that reached the genome-wide significance threshold. A total of 614 associations with 98 traits (p-value 1 × 10−79–5 × 10−8) met our inclusion criteria (Tables S1–S6). Figure 3 shows the distribution of these associations across six facial regions (traits affecting multiple areas of the face, forehead, nose, mouth, lip, and chin/lower face). Most of the associations were found in the mouth area (29%), followed by the nose (21%), eye (20%), face (15%), chin/lower face (13%), and forehead (2%).
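The threshold arithmetic above can be written out explicitly:

```python
# Sketch of the genome-wide significance threshold described above.
alpha = 0.05                 # family-wise error rate
m_effective = 1_000_000      # conventional number of independent tests,
                             # accounting for linkage disequilibrium (LD)
threshold = alpha / m_effective   # Bonferroni-corrected per-SNP threshold

print(f"{threshold:.0e}")  # prints "5e-08"

# A SNP association is reported only if its p-value passes this cutoff.
def passes_gwas_threshold(p_value: float) -> bool:
    return p_value <= threshold

assert passes_gwas_threshold(1e-79)       # most significant reported p-value
assert not passes_gwas_threshold(9e-2)    # least significant catalog entry
```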
When assessing genes with two or more associations with facial features, inconsistent patterns were observed within and between the population groups (Table 2). Some genes were associated with features in the same facial region in different publications on the same population group. For instance, the PABPC1L2A and PABPC1L2B genes were associated with intercanthal width in two studies conducted on the European population [154,165]. PABPC1L2 encodes a cytoplasmic poly(A)-binding protein, and its related pathways include the mRNA surveillance and RNA transport pathways [176]. Similarly, two studies conducted on individuals of European ancestry reported associations of the NAV3 gene with mouth morphology measurements [165,169]. The NAV3 gene is a member of the neuron navigator family and is mainly expressed in the nervous system [177].
In contrast, some genes showed associations with different facial features within the same population group. For example, the transmembrane protein 74 (TMEM74) gene was associated with the vermillion height of the lower lip in the European population [165]. This gene was also associated with features in the nose segment in another study conducted on the same population group [169]. In addition, several studies suggest the involvement of the TMEM74 gene in tumor cell survival through the induction of autophagy in multiple tumor cell lines [178,179].
When investigating the associations between different population groups, some genes showed consistent associations with features in the same facial region, while others showed inconsistent findings. For instance, the PAX3 gene was associated with nasion position in Hispanic/Latin American individuals [158] and with features in the nose segment in the European population [169]. The PAX3 gene plays a critical role during fetal development and is involved in normal bone development in the skull and face [180]. Similarly, the regulator of chondrogenesis RNA (ROCR) gene was associated with five nose traits in different population groups, including profile nasal angle, nasal tip protrusion, and nasolabial angle in East Asians [146], and nose size and traits in the nose segment in two studies conducted on Europeans [162,169]. The ROCR gene shows biased expression, mainly in the salivary gland [181].
Other findings suggest the association of genes with different face areas among different population groups. The SUPT3H gene was associated with forehead protrusion in Hispanic/Latin Americans [161], nose morphology measurements in East Asians [160], and chin dimples and the nose segment in Europeans [162,169]. The SUPT3H gene is related to pathways involved in transcriptional misregulation in cancer and chromatin-folding patterns [182]. It is also associated with several diseases, including Cleidocranial Dysplasia, a rare genetic disorder that affects tooth and bone development [183]. Another gene, HDAC9, was associated with columella inclination in Hispanic/Latin Americans [161], while it was associated with mouth morphology measurements in Europeans [165]. HDAC9 (histone deacetylase 9) is an enzyme engaged in regulating gene expression. Although HDAC9 is not expressed in the craniofacial tissues of developing mice, it has been proposed to regulate the expression of TWIST1, a neighboring gene affecting limb and craniofacial development in mice [161].
The above observations suggest the involvement of multiple genes in face morphology. Some of these genes affect traits in the same facial region within and between population groups, while others show inconsistent patterns. These findings indicate the complexity of gene–face morphology interactions, and additional studies are essential to understand such associations and how other anatomical and developmental factors drive variation between different ancestries.

5.2. Face to DNA Approach

Naturally, a person can distinguish between feminine and masculine facial features. In addition to symmetry characteristics, the differences between male and female faces are statistically significant [184]. Moreover, face-shape differences between phylogenetically related populations have been shown to be statistically significant [185]. The phenotype-to-genotype approach, also known as the face-to-DNA approach, uses an average face generated for each sex and genomic ancestry. The face is then remodeled/modified based on multiple SNPs that have previously been associated with specific facial features. The modifications are performed using machine learning tools, 3D facial scan databases, genetic traits, and other human biometric authentication measures [168].
Another aspect investigated in this approach is the face-to-DNA classifier, a labeling approach that categorizes given faces into different classes based on molecular features. The algorithm used by the authors of [124] generates 7150 quasi-landmarks (QLs) by warping a templated average face onto the given face. Afterward, a square similarity matrix is constructed using the RV coefficient between each pair of QL configurations. The segments are first arranged hierarchically using hierarchical spectral clustering and then divided repeatedly until the total number of facial areas reaches 63 face segments. This dense surface registration tool targets specific 3D facial features to support statistical analysis, classification, regression, score fusion, and biometric evaluations, as well as the discovery of new associations between phenotypes and genotypes. Using samples grouped into two primary study cohorts (global and European), full 3D faces were segregated into 63 segments/modules based on the ethnicity of the cohort. The preselected SNPs were drawn from 9.5 million GWAS SNPs, of which 1932 SNPs were positioned at 38 separate markers. Using this approach, the authors demonstrated 83% and 80% verified matches in the global and European cohorts, respectively [124].
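The RV coefficient used to compare pairs of QL configurations can be sketched as follows. The matrix layout, one face per row with flattened QL coordinates as columns, is an assumption for illustration, and the data are simulated:

```python
import numpy as np

def rv_coefficient(X: np.ndarray, Y: np.ndarray) -> float:
    """RV coefficient between two multivariate configurations.

    X, Y: (n_samples, n_coordinates) matrices, one row per face and one
    column per flattened quasi-landmark coordinate of a facial segment.
    Returns a value in [0, 1]; 1 means the two segments carry identical
    shape covariation across the sample.
    """
    Xc = X - X.mean(axis=0)            # column-center each configuration
    Yc = Y - Y.mean(axis=0)
    Sx, Sy = Xc @ Xc.T, Yc @ Yc.T      # sample-by-sample cross-products
    return np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 6))           # 20 faces, 6 flattened coordinates
print(round(rv_coefficient(X, 2.0 * X), 3))   # scaling is ignored -> 1.0
```

The resulting square matrix of pairwise RV values is what hierarchical spectral clustering would operate on to group QLs into facial segments.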
Other researchers have also studied the phenotype-based genomics approach. They compared different models to test the effects of including SNPs and genetic factors related to age, ethnicity, sex, height, body mass index (BMI), vocal pitch, and other parameters in the prediction algorithms used for identification. The prediction accuracy with respect to facial structure was enhanced when BMI and age were considered. Facial features were predicted using multiple types of algorithms, such as principal component (PC) analysis, linear discriminant analysis, neural networks, sparse representation, and the local representation of facial features. Their approach was mainly to deform the face, map it against a template, and calculate the displacements between them to overcome the challenge of data privacy in personalized medicine. Using 1000 genomic-ancestry PCs, the algorithm predicts the face's PC values with ridge regression and multiple covariates such as sex, age, and BMI. The algorithm was developed based on maximum entropy in order to combine phenotypic features and GWAS data. Future studies should include populations of different ethnic groups to validate the current research outcomes and to explore facial traits that are less common among individuals of European ancestry [186].
Considering the differences between the two approaches, the authors of [187] tested whether the prediction of hair structure, freckles, and the color of hair, skin, and eyes could be improved using trait-prevalence-informed priors. The priors were modeled by including biogeographical ancestry groups in a Bayesian framework, and the model was compared with previously proposed DNA-based (prior-free) EVC-prediction models. The priors model had a minimal effect on the prediction of some traits and no effect on others. This study suggests that using prevalence priors, as in the face-to-DNA approach, may not be the right way to understand EVCs. The researchers recommend focusing on the genetic factors that directly affect facial traits, independently of population-level genetic factors [187]. However, considering the difference between the well-established pigmentation prediction methods and the complex genetic architecture of facial feature prediction, additional traits ought to be assessed with the trait-prevalence-informed priors model, including nose protrusion, nose length, eye curvature, and chin depression.

5.3. Statistical Approaches

Given the complexity of facial prediction, applying machine learning techniques, algorithms, and other statistical models to the available big data can change how the challenge is approached. Although simple, linear models are good classifiers when overfitting must be avoided. Linear models may be fitted with little regularization, e.g., by maximum likelihood, or with regularization and simultaneous variable selection, as in the lasso, which is a preferred model for increasing accuracy with high-dimensional data [188,189]. Using the right training data based on a known variable, a supervised learning model can be targeted to each type of trait. In general, quantitative traits that involve measurements are usually approached using regression, whereas categorical traits such as pigmentation (more than two categories) call for multi-class classification [190]. As an example, the polygenic score model, employing weighted allele sums over multiple SNPs, was used to predict height-related features [189]. The area under the receiver operating characteristic curve (AUC) and R2 were used to quantify the overall performance or accuracy of the prediction models [189].
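The polygenic score model mentioned above, a weighted sum of effect-allele counts, can be sketched with hypothetical SNP identifiers and weights (the rs numbers and effect sizes below are invented for illustration):

```python
# Sketch of a polygenic score: a weighted sum of effect-allele counts.
# Hypothetical SNPs; each weight is the per-allele effect on the trait.
snp_weights = {"rs0001": 0.42, "rs0002": -0.17, "rs0003": 0.08}

def polygenic_score(genotypes: dict) -> float:
    """Genotypes map SNP id -> count (0, 1, or 2) of the effect allele."""
    return sum(snp_weights[snp] * genotypes.get(snp, 0)
               for snp in snp_weights)

person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(round(polygenic_score(person), 2))  # 2*0.42 + 1*(-0.17) + 0 = 0.67
```

In practice, the weights come from GWAS effect estimates, and the resulting score is evaluated against the observed trait using metrics such as R2 for quantitative traits or AUC for binary ones.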
In addition, researchers have used partial-least-squares regression (PLSR) to predict face-related features based on genomic ancestry, sex, and 24 SNPs [80]. Others used ridge regression on 1000 genomic principal components along with genomic ancestry and sex, coupled with age and BMI, to increase accuracy [187]. In addition, a shape-similarity statistic based on the shape-space angle between 3D faces was used in PCA and PLSR models with 277 SNPs [168]. There are also black-box models that utilize ensembles, decision trees, and neural networks. Most of these networks are deep, with hidden layers of combined signals that are trained mostly by gradient-based back-propagation algorithms [190]. Choosing the right settings for a neural network's learning rate, layers, and neurons can help optimize such algorithms for prediction. Since the face is a complex, non-linear, continuous model, continuous latent features are better approached using variational autoencoders. In these methods, building a cost-effective prediction model depends on selecting features that are informative for the target variable. Information theory models are among the methods used to filter redundancies and connect variables, even in a non-linear manner [190].
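A ridge-regression step of the kind described above can be sketched on simulated data. The covariate set and dimensions are illustrative assumptions, not those of the cited studies:

```python
import numpy as np

# Minimal sketch of ridge regression for predicting one facial principal
# component (PC) from covariates; the data are simulated, not real.
rng = np.random.default_rng(1)
n, p = 200, 10
X = rng.normal(size=(n, p))            # e.g., genomic PCs, sex, age, BMI
true_beta = rng.normal(size=p)
y = X @ true_beta + rng.normal(scale=0.1, size=n)   # one facial-shape PC

lam = 1.0                              # regularization strength
# Closed-form ridge solution: beta = (X'X + lam*I)^{-1} X'y
beta = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# R^2 of in-sample predictions, as used to report prediction accuracy.
pred = X @ beta
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

The regularization term lam shrinks the coefficients toward zero, which is what makes the approach stable when thousands of genomic principal components are used as predictors.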

6. Challenges in Forensic DNA Phenotyping

Multiple challenges need to be addressed before FDP is applied in forensic cases. First, the accuracy of FDP needs to be assessed, particularly when the resulting evidence is used for exclusion or conviction. Second, FDP raises ethical concerns by revealing medical information that suspects/victims may not wish to know or have released. Third, FDP has been legislated in only a few countries, and the legal terms need to be updated in others. Fourth, some individuals make cosmetic or surgical alterations to their facial features, making FDP more challenging. Lastly, the generated FDP data and their analytical procedures need to be further assessed by forensic and research laboratories worldwide to establish a valid scientific basis before FDP is used as legal evidence in forensic cases.

6.1. Accuracy

The success of FDP can be measured by validating the accuracy of predicted facial features in real forensic cases. However, the current research on FDP focuses on associating genes with face-related phenotypes, which predicts a class of phenotypes but does not yet individualize one face from another [191]. Moreover, several aspects need to be addressed before using the technology in forensic cases, including its accuracy and the standardization of validation testing [192]. The accuracy of FDP can be advanced by identifying more gene-related phenotypes and conducting studies on larger population groups from different ethnicities [186,193].

6.2. Ethical Issues

As research on FDP progresses, more information can be predicted using DNA. Initially, DNA was used to generate profiles for identification in forensic cases. These STR profiles were thought to be uninformative and were used mainly for identification purposes. However, research has shown that these genetic markers can have a regulatory role in gene expression. For instance, Bañuelos et al. demonstrated correlations between genotypes at the Combined DNA Index System (CODIS) loci and expression variation of neighboring genes and, possibly, medical information [194]. Therefore, genetic privacy has become one of the major challenges for FDP because of the continuous improvements and advances in this technology [193]. Another aspect of privacy is the availability of FDP data to third parties, which grants them power through holding such information while leaving the individual vulnerable from a "knowledge is power" perspective [186,195].
Not all characteristics revealed by FDP have the same level of sensitivity. For instance, some characteristics are trivial and not private, such as the external features of a subject, including voice type or right-handedness. In addition, police and law enforcement agencies already have access to portrait photographs on drivers' licenses and ID cards. On the other hand, medical history is a sensitive trait; if revealed to the public, it could be used as a filter criterion in the employment process. Moreover, it is argued that the advantages of the limited use of these features in criminal investigations do not override the privacy risks faced by the individual [195]. It is important to note that although the same gene can underlie both pathological and normal physical variation, the mutations causing each variation differ. For instance, mutations in the OCA2 gene cause oculocutaneous albinism type 2, while other variants in the gene underlie variation in eye, skin, and hair color; the SNPs associated with each are different [1].
FDP can potentially reveal information regarding genetic diseases that individuals may not wish to be informed about, thus violating the "right not to know" principle. Regarding the human genome, the United Nations Educational, Scientific and Cultural Organization (UNESCO) declared in Article 5c that "The right of every individual to decide whether or not to be informed of the results of the genetic examination and the resulting consequences should be respected" [196]. Similar statements appear in the Rights of the Patient approved by the World Medical Association, patients' rights in French law, the European Convention, the Human Genetics Advisory Commission (HGAC) in the United Kingdom, the Dutch Medical Treatment Act of 1994, the Hungarian Health Act of 1997, and the Belgian Patient's Rights Act of 2002 [196]. Generally, it is believed that the advantages of identifying criminals and preventing them from committing more crimes exceed the benefits of preventing patient discrimination. This argument has shaped the legislation on FDP in the state of Texas, which legalized the use of FDP, including testing for diseases. On the other hand, other countries, such as the Netherlands, disallowed the use of disease-related information in forensic investigations in 2003 [1].
From another perspective, some argue that the weight of the right to remain ignorant depends on the value of the information revealed by FDP. For example, some traits do not reveal medical information and do not necessarily violate privacy rights, such as left-handedness, the tendency to smoke, voice type, or geographic origin. These traits are usually already known to the individual and will not raise an issue if confirmed by FDP [5,47,48,49,50,51,52]. In addition to the possibility of violating the "right not to know" principle, it is important to define the circumstances of the collection and storage of FDP data, e.g., whether FDP analysis will be implemented in all forensic cases or restricted to cases where a DNA match has not been found [197].
To address the data protection and privacy issues raised, it has been suggested that FDP data be destroyed following the criminal investigation or when a match is found. This option ensures that the collected data are used only for their original purpose, identification, and potentially prevents the use of such information for other purposes. However, destroying FDP information may violate the Universal Declaration on the Human Genome, which gives the individual the right to know, or to remain unaware of, the result of a genetic test [196,198]. Further FDP-related ethical concerns include the storage of such data during and after investigations and access to FDP information, which need to be addressed by the scientific community before this information is used as evidence in forensic investigations [199].

6.3. Bias

There is evidence of bias in FDP research, stemming mainly from the researchers themselves or from the instruments used. Researchers may recruit easier-to-access populations and may pursue grants tied to particular populations of interest. Accordingly, some populations, such as Europeans, are more likely to be included in human genetics research. In addition, since most researchers work at universities and research centers in well-developed countries, it is easier to sample nearby populations than to collect samples from other areas, which introduces logistical bias.
Moreover, it is easier for researchers to compare against and reuse pre-established databases such as the GWAS Catalog. However, this perpetuates bias over the years, because machine learning and the verification of results are easier when more data from the same ancestry are used. Hence, many variables must be controlled when introducing a new population, and more must be corrected for before learning from novel populations. In addition, instruments that have already been developed may struggle to provide accurate comparisons, since the available ancestry information (genotypes) and phenotypic information—such as lifestyle, facial characteristics, and diet—vary between populations.
Consequently, some technologies may not accurately capture complete phenotypic information if this bias is not considered. For example, lighting, darker skin tones, and reflection are not well handled by scanning devices that were developed using lighter-skinned populations. These devices must be optimized to realize the full potential of the data and to reach higher levels of accuracy.
GWAS has proven to be a powerful tool for discovering the genetic factors involved in complex diseases. Hundreds of thousands of significant associations between biological characteristics and genetic loci have been found. These associations have been highly valuable, as they have helped elucidate the biological mechanisms of diseases and other phenotypes. However, admixed populations and those of non-European ancestry are under-represented in the GWAS Catalog. Hispanic and Latin American, Pacific Islander, Native, and Arab and Middle Eastern subjects together comprised less than 1% of the catalog in 2016; Arab and Middle Eastern populations alone contributed only about 0.08% of the whole GWAS dataset as of that year. This continuing bias has many implications for research, such as (1) impairing the accuracy of findings if direct associations are applied across populations, (2) hindering the discovery of novel genetic associations, and (3) limiting the understanding of face-related genomics in forensic, anthropological, and medical research on unexplored populations [30].
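The representation figures above can be illustrated programmatically. The sketch below is purely hypothetical (the rows do not come from the real GWAS Catalog): it filters a toy set of catalog-style records by the conventional genome-wide significance threshold of 5 × 10−8 and tallies ancestry representation among the surviving associations.

```python
# Illustrative sketch (not the authors' pipeline): tallying ancestry
# representation in a simplified, hypothetical export of GWAS Catalog rows.
from collections import Counter

# Hypothetical rows: (ancestry label, association p-value)
catalog = [
    ("European", 3e-12), ("European", 1e-9), ("European", 4e-8),
    ("East Asian", 2e-10), ("African", 6e-9),
    ("Arab/Middle Eastern", 7e-7),  # fails the genome-wide threshold
]

GWS = 5e-8  # conventional genome-wide significance threshold

# Keep only associations that pass the threshold, then count by ancestry.
significant = [ancestry for ancestry, p in catalog if p < GWS]
counts = Counter(significant)
total = len(significant)

for ancestry, n in counts.most_common():
    print(f"{ancestry}: {n}/{total} = {100 * n / total:.1f}%")
```

The same filter-and-tally logic, applied to the real catalog export, is what produces the under-representation percentages discussed above.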

6.4. Legal Issues

FDP was only recently introduced into forensic investigations; therefore, some countries are still updating their laws regarding its use in forensic cases. The Netherlands, Slovakia, and Germany are the only countries that have legalized FDP for forensic purposes, while Belgium, Greece, France, Luxembourg, and Ireland prohibit its use [54,200]. While some argue that FDP acts as a biological eyewitness and thus requires no legislation, others hold restrictive views on its application in forensic investigations. For instance, some countries, such as South Africa, restrict FDP to non-coding markers; notably, most SNPs associated with facial features are located in intronic regions [1,195]. For example, the intronic SNP rs2045145 is associated with the female European second-PC extreme profile [121]. In addition, an intronic SNP of the PARK2 gene (rs9456748) was found to be significantly associated with the height of the midface [158], and an intronic variant of COL23A1 (collagen type 23 α 1) (rs118078182) is associated with variations in nasal shape across Eurasia [95].
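A "non-coding markers only" restriction of the kind described above could, in principle, be screened programmatically. The sketch below is hypothetical: the SNP annotations come from the text, but the policy-check function itself is an illustrative construction, not an established forensic tool.

```python
# Illustrative sketch: screening a candidate marker panel against a
# "non-coding markers only" policy like the one described above.
# The annotations below are taken from the text; the check is hypothetical.
face_snps = {
    "rs2045145": "intronic",     # female European second-PC extreme profile
    "rs9456748": "intronic",     # PARK2; midface height
    "rs118078182": "intronic",   # COL23A1; nasal shape across Eurasia
}

def noncoding_only(panel, annotations):
    """Return the subset of panel SNPs annotated as non-coding."""
    noncoding = {"intronic", "intergenic", "regulatory"}
    return [rsid for rsid in panel if annotations.get(rsid) in noncoding]

allowed = noncoding_only(["rs2045145", "rs9456748", "rs118078182"], face_snps)
print(allowed)  # all three markers above are intronic, so all pass
```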
As FDP technology advances, another approach to legislating the use of FDP in forensic investigations is to specify the forensic purpose, i.e., identification or 3D facial prediction, rather than restricting the use of certain markers [1]. Overall, most current DNA regulations are related to traditional DNA profiling, which is based on comparisons of DNA profiles obtained from reference and evidence samples. Consequently, FDP technology requires new legal considerations that are different from those applied to traditional DNA typing [201].

6.5. Facial Cosmetic Changes

The number of plastic surgeries and cosmetic procedures performed worldwide has increased exponentially. This introduces a limitation that needs to be addressed, especially for technologies based on face-recognition algorithms [202]. Cosmetic changes range from minor procedures, such as fillers, skin lifts, and OnabotulinumtoxinA injections, to plastic surgeries of the eyelids and reshaping of the nose. Their effects can likewise be minor, such as changes in skin texture, or major, such as changes in the natural measurements of facial features [203]. An evaluation of current facial recognition algorithms showed lower performance when applied to faces that had undergone plastic surgery [204]. This may be even more challenging for FDP studies, as the genetic components can no longer be accurately associated with the natural landmark positions or face measurements. Other temporary modifications include hair dye, colored contact lenses, and tanning products; such modifications are also expected from fugitives trying to avoid recognition by law enforcement agencies.
Consequently, caution must be taken in FDP-guided investigations to avoid being misled by falsified appearances [1,205]. The common changes that people make to their natural appearance indicate that new FDP technologies need to remain robust and accurate even in the presence of cosmetic changes. A new aspect of FDP research is the study of the extent of such modifications and their effects on the accuracy of newly developed techniques [206]. Overall, the limitations of 3D facial prediction using DNA can be overcome by increasing the volume of EVC-related studies [5,47,48,49,50,51,52].

6.6. Evaluation and Validation

The evaluated research studies demonstrated that the prediction of ancestry and pigmentation traits such as skin, eye, and hair color is more developed than the prediction of facial features, indicating the need for additional research before this technology is accepted in forensic fields [62,193,200]. Extensive research is needed to precisely identify the genes associated with variations in each trait. Massive genome-wide association studies will require the collaboration of research institutions, governments, private organizations, commercial companies, and the forensic laboratories of law enforcement agencies around the world. In addition, intensive research is essential to establish a database of candidate genes associated with facial features. Importantly, a comprehensive database can greatly enhance the current state-of-the-art technologies used in forensic laboratories by supporting the transition of DNA-profiling databases to a technology that can potentially generate evidence without the need for reference samples. Such databases are most effective when they include many subjects, a wide age range, different facial expressions, and various ethnic groups [1,206]. Available 3D face databases include FaceBase, the Stirling ESRC 3D Face Database, and the Bosphorus Database, among others [207,208,209,210]. In 2017, scientists established the VISible Attributes Through GEnomics (VISAGE) Consortium to validate and enhance the application of FDP in forensic cases. This consortium brings together eight working groups covering multiple disciplines related to FDP, such as the confirmation of genetic markers and statistical tools, cooperation with face sketch artists, the training of individuals of interest, policy setting, and the acquisition of the required ethical approvals. The collaboration aims to predict individuals’ ancestry, facial features, and age using the massively parallel sequencing of large datasets [211,212].
FaceBase also serves as a hub for collaborators and researchers interested in craniofacial research and has criteria for accepting and sharing facial data from different models (animal and human) [213]. In addition, the European DNA Profiling Group (EDNAP) aims to assess the reliability and consistency of the technologies currently used in forensic science by holding meetings and comparing data from laboratories around the world. Moreover, the reproducibility of the IrisPlex system has been validated by EDNAP across 21 laboratories, demonstrating its potential for success. However, since most genetic associations with face morphology were determined from homogeneous populations, additional studies are needed to validate such associations in admixed populations [52].
Although the promising results of FDP indicate that it could replace current STR-profiling techniques, this replacement may not be feasible in the near future because of the large investments in existing STR profiles and the infrastructure of established national databases worldwide. As a result, future genetic markers used for FDP will likely be added to the core STR markers rather than replacing the existing STR technology [92].

7. Discussion and Conclusions

This paper has highlighted the current state of the art in DNA-phenotyping techniques. Multiple technologies (scanning tools, software, algorithms, etc.) are available for acquiring facial-feature measurements, and they vary in their ease of use and data accuracy. Meanwhile, new technologies are evolving rapidly, making it important to revisit the literature regularly to remain informed of the latest technologies and algorithms in this research area.
Most scientists believe that it is too early for DNA technologies to be fully employed in face prediction. Nevertheless, genetic studies have clarified the influence of genetics on facial morphology. Using GWAS on limited population groups, researchers have been able to identify some associations between genetic markers and facial features. However, DNA-based facial analysis differs from that of Mendelian diseases because of the multiple, complex factors affecting facial morphology; thus, other approaches, or combinations of them, need to be considered.
Some challenges encountered by FDP include assessing its accuracy before implementation in forensic cases. In addition, applying practices that maintain the privacy of FDP data is fundamental, especially if such data reveal medical information. Regulating access to FDP data and its storage is essential to prevent the exploitation of information by third parties or less scientifically sophisticated police officers. Limited efforts have been made to address these concerns, which is understandable given that FDP is an emerging technology whose scope and validity are not fully established [211]. Moreover, the fitness of face-inference algorithms and statistical tests can be improved by increasing the number of investigated individuals, conducting studies on under-represented population groups, and identifying more face-related genetic markers. Lately, finding homogeneous populations has become a challenge, as migration has become easier and intermarriage more common across the world. For example, 3% of newlyweds in the USA were intermarried in 1967; by 2015, this figure had increased more than fivefold, reaching 17% [213]. These admixed populations could affect the outcomes of prediction models and could increase error rates if population stratification is not corrected for. Moreover, it is recommended that researchers investigate faces both globally and locally by correlating grouped features or measurements with individual or multiple genetic factors [167]. Other factors to be included in FDP analysis include epigenetics, telomere lengths, and non-genetic factors [214,215].
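Population stratification correction, mentioned above, is commonly implemented by including the top principal components (PCs) of the genotype matrix as covariates in the association model. The following is a minimal sketch of this idea on simulated data (assumed standard practice, not the authors' specific method): a trait shifted by ancestry alone produces a spurious SNP effect in a naive regression, which shrinks once genotype PCs are included.

```python
# Sketch of PC-based stratification correction on simulated data:
# regress a facial trait on a SNP genotype with and without adjusting
# for the top genotype principal components.
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 50                       # individuals, background SNPs

# Two ancestral groups with different allele frequencies -> stratification.
group = rng.integers(0, 2, n)
freqs = np.where(group[:, None] == 0, 0.2, 0.8)
genotypes = rng.binomial(2, freqs)   # n x m genotype matrix (0/1/2)

# Trait shifted by ancestry only (no true SNP effect).
trait = 2.0 * group + rng.normal(0, 1, n)
snp = genotypes[:, 0]                # candidate SNP; frequency differs by group

# Top principal components of the standardized genotype matrix (via SVD).
G = (genotypes - genotypes.mean(0)) / (genotypes.std(0) + 1e-9)
pcs = np.linalg.svd(G, full_matrices=False)[0][:, :2]

# Naive model vs PC-adjusted model (ordinary least squares).
X_naive = np.column_stack([np.ones(n), snp])
X_adj = np.column_stack([np.ones(n), snp, pcs])
beta_naive = np.linalg.lstsq(X_naive, trait, rcond=None)[0][1]
beta_adj = np.linalg.lstsq(X_adj, trait, rcond=None)[0][1]

print(f"SNP effect, naive: {beta_naive:.2f}; PC-adjusted: {beta_adj:.2f}")
```

Because the simulated trait has no true SNP effect, the naive estimate is inflated by ancestry confounding, while the PC-adjusted estimate falls close to zero; the same logic underlies stratification control in face-related GWAS.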

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/genes14010136/s1: Table S1: Summary of data obtained from the NHGRI-EBI GWAS Catalog for SNPs that met the genome-wide significance p-value threshold of 5 × 10−8 for the face region, including traits affecting multiple face areas. RAF = risk allele frequency; OR = odds ratio; CI = confidence interval; NR = not reported [154,165,168]. Table S2: Summary of data obtained from the NHGRI-EBI GWAS Catalog for SNPs that met the genome-wide significance p-value threshold of 5 × 10−8 for traits in the forehead region. RAF = risk allele frequency; OR = odds ratio; CI = confidence interval; NR = not reported [146,161,163]. Table S3: Summary of data obtained from the NHGRI-EBI GWAS Catalog for SNPs that met the genome-wide significance p-value threshold of 5 × 10−8 for traits in the eye region. RAF = risk allele frequency; OR = odds ratio; CI = confidence interval; NR = not reported [144,145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160,161,163,168]. Table S4: Summary of data obtained from the NHGRI-EBI GWAS Catalog for SNPs that met the genome-wide significance p-value threshold of 5 × 10−8 for traits in the nose region. RAF = risk allele frequency; OR = odds ratio; CI = confidence interval; NR = not reported [146,158,160,161,162,163,165,168,169]. Table S5: Summary of data obtained from the NHGRI-EBI GWAS Catalog for SNPs that met the genome-wide significance p-value threshold of 5 × 10−8 for traits in the mouth region. RAF = risk allele frequency; OR = odds ratio; CI = confidence interval; NR = not reported [146,154,156,160,161,163,165,168,169]. Table S6: Summary of data obtained from the NHGRI-EBI GWAS Catalog for SNPs that met the genome-wide significance p-value threshold of 5 × 10−8 for traits in the chin/lower face region. RAF = risk allele frequency; OR = odds ratio; CI = confidence interval; NR = not reported [158,160,161,162,163,169].

Author Contributions

A.A. (Aamer Alshehhi), A.A. (Aliya Almarzooqi) and K.A. performed the literature search and data analysis. A.A. (Aamer Alshehhi) drafted the manuscript. N.W., G.K.T. and H.A. revised the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this review are available within the article and in the supplementary material.

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Bhatia, S.; Wright, G.; Leighton, B. A proposed multivariate model for prediction of facial growth. Am. J. Orthod. 1979, 75, 264–281. [Google Scholar] [CrossRef] [PubMed]
  2. Richmond, S.; Howe, L.J.; Lewis, S.; Stergiakouli, E.; Zhurov, A. Facial Genetics: A Brief Overview. Front. Genet. 2018, 9, 462. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Kayser, M. Forensic DNA Phenotyping: Predicting human appearance from crime scene material for investigative purposes. Forensic Sci. Int. Genet. 2015, 18, 33–48. [Google Scholar] [CrossRef] [PubMed]
  4. Margiotta, G.; Iacovissi, E.; Tommolini, F.; Carnevali, E. Forensic DNA Phenotyping: A New Powerful Tool in Forensic Medicine. In Forensic Medicine: Fundamentals, Clinical Perspectives and Challenges; Nova Science Pub. Inc.: Hauppauge, NY, USA, 2016; pp. 1–23. [Google Scholar]
  5. Schneider, P.M.; Prainsack, B.; Kayser, M. The Use of Forensic DNA Phenotyping in Predicting Appearance and Biogeographic Ancestry. Dtsch. Arztebl. Int. 2019, 51–52, 873–880. [Google Scholar] [CrossRef] [PubMed]
  6. Brenner, C. Probable Race of a Stain Donor. In Proceedings of the Seventh International Symposium on Human Identification, Madison, WI, USA, 16–18 September 1997; pp. 48–52. [Google Scholar]
  7. Frudakis, T.N. Molecular Photofitting: Predicting Ancestry and Phenotype from DNA; Academic Press Publishers: Amsterdam, The Netherlands, 2008. [Google Scholar]
  8. Digilio, M.; Marino, B.; Capolino, R.; Dallapiccola, B. Clinical manifestations of Deletion 22q11.2 syndrome (DiGeorge/Velo-Cardio-Facial syndrome). Images Paediatr. Cardiol. 2005, 7, 23–34. [Google Scholar]
  9. Sharma, S. Fetal Alcohol Spectrum Disorders: Concepts, Mechanisms, and Cure; Nova Science Publishers, Incorporated: Hauppauge, NY, USA, 2017. [Google Scholar]
  10. Cornejo, J.Y.R.; Pedrini, H.; Lima, A.M.; Nunes, F.D.L.D.S. Down syndrome detection based on facial features using a geometric descriptor. J. Med. Imaging 2017, 4, 044008. [Google Scholar] [CrossRef]
  11. MacLean, C.; Lamparello, A. Forensic DNA Phenotyping in Criminal Investigations and Criminal Courts: Assessing and Mitigating the Dilemmas Inherent in the Science. Recent Adv. DNA Gene Seq. 2015, 8, 104–112. [Google Scholar] [CrossRef]
  12. Vidaki, A.; Ballard, D.; Aliferi, A.; Miller, T.H.; Barron, L.P.; Court, D.S. DNA methylation-based forensic age prediction using artificial neural networks and next generation sequencing. Forensic Sci. Int. Genet. 2017, 28, 225–236. [Google Scholar] [CrossRef] [Green Version]
  13. Miranda, G.E.; Wilkinson, C.; Roughley, M.; Beaini, T.L.; Melani, R.F.H. Assessment of accuracy and recognition of three-dimensional computerized forensic craniofacial reconstruction. PLoS ONE 2018, 13, e0196770. [Google Scholar] [CrossRef] [Green Version]
  14. Hong, S.R.; Jung, S.-E.; Lee, E.H.; Shin, K.-J.; Yang, W.I.; Lee, H.Y. DNA methylation-based age prediction from saliva: High age predictability by combination of 7 CpG markers. Forensic Sci. Int. Genet. 2017, 29, 118–125. [Google Scholar] [CrossRef]
  15. Parson, W. Age Estimation with DNA: From Forensic DNA Fingerprinting to Forensic (Epi)Genomics: A Mini-Review. Gerontology 2018, 64, 326–332. [Google Scholar] [CrossRef]
  16. Thong, Z.; Chan, X.L.S.; Tan, J.Y.Y.; Loo, E.S.; Syn, C.K.C. Evaluation of DNA methylation-based age prediction on blood. Forensic Sci. Int. Genet. Suppl. Ser. 2017, 6, e249–e251. [Google Scholar] [CrossRef] [Green Version]
  17. Jung, S.-E.; Lim, S.M.; Hong, S.R.; Lee, E.H.; Shin, K.-J.; Lee, H.Y. DNA methylation of the ELOVL2, FHL2, KLF14, C1orf132/MIR29B2C, and TRIM59 genes for age prediction from blood, saliva, and buccal swab samples. Forensic Sci. Int. Genet. 2018, 38, 1–8. [Google Scholar] [CrossRef]
  18. Bulik-Sullivan, B.; Finucane, H.K.; Anttila, V.; Gusev, A.; Day, F.R.; Loh, P.-R.; Duncan, L.; Perry, J.R.B.; Patterson, N.; Robinson, E.B.; et al. An atlas of genetic correlations across human diseases and traits. Nat. Genet. 2015, 47, 1236–1241. [Google Scholar] [CrossRef] [Green Version]
  19. Okbay, A.; Beauchamp, J.P.; Fontana, M.A.; Lee, J.J.; Pers, T.H.; Rietveld, C.A.; Turley, P.; Chen, G.-B.; Emilsson, V.; Meddens, S.F.W.; et al. Genome-wide association study identifies 74 loci associated with educational attainment. Nature 2016, 533, 539–542. [Google Scholar] [CrossRef] [Green Version]
  20. Pakstis, A.J.; Speed, W.C.; Soundararajan, U.; Rajeevan, H.; Kidd, J.R.; Li, H.; Kidd, K.K. Population relationships based on 170 ancestry SNPs from the combined Kidd and Seldin panels. Sci. Rep. 2019, 9, 18874. [Google Scholar] [CrossRef] [Green Version]
  21. Richmond, S.; Toma, A.M.; Zhurov, A.I. New perspectives on craniofacial growth. Orthod. Fr. 2009, 80, 359–369. [Google Scholar] [CrossRef]
  22. Richmond, S.; Wilson-Nagrani, C.; Zhurov, A.; Farnell, D.; Galloway, J.; Ali, A.S.M.; Pirttiniemi, P.; Katic, V. Factors Influencing Facial Shape. In Evidence-Based Orthodontics; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2018; pp. 69–81. [Google Scholar] [CrossRef]
  23. Dunn, R.R.; Spiros, M.C.; Kamnikar, K.R.; Plemons, A.M.; Hefner, J.T. Ancestry estimation in forensic anthropology: A review. WIREs Forensic Sci. 2020, 2, e1369. [Google Scholar] [CrossRef]
  24. Zbieć-Piekarska, R.; Spólnicka, M.; Kupiec, T.; Parys-Proszek, A.; Makowska, Ż.; Pałeczka, A.; Kucharczyk, K.; Płoski, R.; Branicki, W. Development of a forensically useful age prediction method based on DNA methylation analysis. Forensic Sci. Int. Genet. 2015, 17, 173–179. [Google Scholar] [CrossRef]
  25. Sun, Q.; Jiang, L.; Zhang, G.; Liu, J.; Zhao, L.; Zhao, W.; Li, C. Twenty-seven continental ancestry-informative SNP analysis of bone remains to resolve a forensic case. Forensic Sci. Res. 2019, 4, 364–366. [Google Scholar] [CrossRef]
  26. Xia, X.; Chen, X.; Wu, G.; Li, F.; Wang, Y.; Chen, Y.; Chen, M.; Wang, X.; Chen, W.; Xian, B.; et al. Three-dimensional facial-image analysis to predict heterogeneity of the human ageing rate and the impact of lifestyle. Nat. Metab. 2020, 2, 946–957. [Google Scholar] [CrossRef] [PubMed]
  27. Kidd, K.K.; Speed, W.C.; Pakstis, A.J.; Furtado, M.R.; Fang, R.; Madbouly, A.; Maiers, M.; Middha, M.; Friedlaender, F.R.; Kidd, J.R. Progress toward an efficient panel of SNPs for ancestry inference. Forensic Sci. Int. Genet. 2014, 10, 23–32. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  28. AncestryDNA®. 2020. Available online: https://www.ancestry.com/dna/ (accessed on 30 September 2020).
  29. Roosenboom, J.; Hens, G.; Mattern, B.C.; Shriver, M.D.; Claes, P. Exploring the Underlying Genetics of Craniofacial Morphology through Various Sources of Knowledge. BioMed Res. Int. 2016, 2016, 3054578. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  30. Popejoy, A.B.; Fullerton, S.M. Genomics is failing on diversity. Nature 2016, 538, 161–164. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  31. Peterson, R.E.; Kuchenbaecker, K.; Walters, R.K.; Chen, C.Y.; Popejoy, A.B.; Periyasamy, S.; Lam, M.; Iyegbe, C.; Strawbridge, R.J.; Brick, L.; et al. Genome-wide Association Studies in Ancestrally Diverse Populations: Opportunities, Methods, Pitfalls, and Recommendations. Cell 2019, 179, 589–603. [Google Scholar] [CrossRef]
  32. 23andMe DNA Genetic Testing & Analysis—23andMe AU, DE, FR & EU. 2020. Available online: https://www.23andme.com/en-int/ (accessed on 30 September 2020).
  33. Geno DNA Ancestry Kit. National Geographic. 2020. Available online: https://helpcenter.nationalgeographic.com/s/article/Genographic-DNA-Ancestry-Project-and-Kit-Discontinuation (accessed on 1 October 2022).
  34. Butler, K.; Peck, M.; Hart, J.; Schanfield, M.; Podini, D. Molecular ‘eyewitness’: Forensic prediction of phenotype and ancestry. Forensic Sci. Int. Genet. Suppl. Ser. 2011, 3, e498–e499. [Google Scholar] [CrossRef]
  35. Jin, X.-Y.; Guo, Y.-X.; Chen, C.; Cui, W.; Liu, Y.-F.; Tai, Y.-C.; Zhu, B.-F. Ancestry Prediction Comparisons of Different AISNPs for Five Continental Populations and Population Structure Dissection of the Xinjiang Hui Group via a Self-Developed Panel. Genes 2020, 11, 505. [Google Scholar] [CrossRef]
  36. Budowle, B.; van Daal, A. Forensically relevant SNP classes. Biotechniques 2008, 44, 603–608. [Google Scholar] [CrossRef] [Green Version]
  37. Phillips, C.; Salas, A.; Sánchez, J.; Fondevila, M.; Tato, A.G.; Alvarez-Dios, J.A.; Calaza, M.; de Cal, M.C.; Ballard, D.; Lareu, M.; et al. Inferring ancestral origin using a single multiplex assay of ancestry-informative marker SNPs. Forensic Sci. Int. Genet. 2007, 1, 273–280. [Google Scholar] [CrossRef]
  38. Alvarez Dios, J.A.; Gómez Tato, A.; Casares de Cal, M.Á. Classification of Individuals Using AIMs; Departamento de Matemática Aplicada. Available online: http://mathgene.usc.es/index.php (accessed on 1 October 2022).
  39. Guo, Y.-X.; Jin, X.-Y.; Xia, Z.-Y.; Chen, C.; Cui, W.; Zhu, B.-F. A small NGS-SNP panel of ancestry inference designed to distinguish African, European, East, and South Asian populations. Electrophoresis 2020, 41, 649–656. [Google Scholar] [CrossRef]
  40. Lan, Q.; Fang, Y.; Mei, S.; Xie, T.; Liu, Y.; Jin, X.; Yang, G.; Zhu, B. Next generation sequencing of a set of ancestry-informative SNPs: Ancestry assignment of three continental populations and estimating ancestry composition for Mongolians. Mol. Genet. Genom. 2020, 295, 1027–1038. [Google Scholar] [CrossRef]
  41. Draus-Barini, J.; Walsh, S.; Pośpiech, E.; Kupiec, T.; Głąb, H.; Branicki, W.; Kayser, M. Bona fide colour: DNA prediction of human eye and hair colour from ancient and contemporary skeletal remains. Investig. Genet. 2013, 4, 3–15. [Google Scholar] [CrossRef] [Green Version]
  42. Walsh, S.; Liu, F.; Wollstein, A.; Kovatsi, L.; Ralf, A.; Kosiniak-Kamysz, A.; Branicki, W.; Kayser, M. The HIrisPlex system for simultaneous prediction of hair and eye colour from DNA. Forensic Sci. Int. Genet. 2012, 7, 98–115. [Google Scholar] [CrossRef] [Green Version]
  43. Van Laan, M. The genetic witness: Forensic DNA phenotyping. J. Emerg. Forensic Sci. Res. 2017, 2, 33–52. [Google Scholar]
  44. Chaitanya, L.; Breslin, K.; Zuñiga, S.; Wirken, L.; Pośpiech, E.; Kukla-Bartoszek, M.; Sijen, T.; de Knijff, P.; Liu, F.; Branicki, W.; et al. The HIrisPlex-S system for eye, hair and skin colour prediction from DNA: Introduction and forensic developmental validation. Forensic Sci. Int. Genet. 2018, 35, 123–135. [Google Scholar] [CrossRef] [Green Version]
  45. Walsh, S.; Chaitanya, L.; Clarisse, L.; Wirken, L.; Draus-Barini, J.; Kovatsi, L.; Maeda, H.; Ishikawa, T.; Sijen, T.; de Knijff, P.; et al. Developmental validation of the HIrisPlex system: DNA-based eye and hair colour prediction for forensic and anthropological usage. Forensic Sci. Int. Genet. 2013, 9, 150–161. [Google Scholar] [CrossRef]
  46. King, T.E.; Fortes, G.G.; Balaresque, P.; Thomas, M.G.; Balding, D.; Delser, P.M.; Neumann, R.; Parson, W.; Knapp, M.; Walsh, S.; et al. Identification of the remains of King Richard III. Nat. Commun. 2014, 5, 5631. [Google Scholar] [CrossRef] [Green Version]
  47. Marano, L.A.; Andersen, J.D.; Goncalves, F.T.; Garcia, A.L.O.; Fridman, C. Evaluation of HIrisplex-S system markers for eye, skin and hair color prediction in an admixed Brazilian population. Forensic Sci. Int. Genet. Suppl. Ser. 2019, 7, 427–428. [Google Scholar] [CrossRef]
  48. Marano, L.A.; Fridman, C. DNA phenotyping: Current application in forensic science. Res. Rep. Forensic Med. Sci. 2019, 9, 1–8. [Google Scholar] [CrossRef] [Green Version]
  49. Norrgard, K.; Schultz, J. SNPs and Population Differentiation. Nature 2008, 1, 85. [Google Scholar]
  50. Breslin, K.; Wills, B.; Ralf, A.; Garcia, M.V.; Kukla-Bartoszek, M.; Pospiech, E.; Freire-Aradas, A.; Xavier, C.; Ingold, S.; de La Puente, M.; et al. HIrisPlex-S system for eye, hair, and skin color prediction from DNA: Massively parallel sequencing solutions for two common forensically used platforms. Forensic Sci. Int. Genet. 2019, 43, 102152. [Google Scholar] [CrossRef] [PubMed]
  51. Walsh, S.; Liu, F.; Ballantyne, K.N.; van Oven, M.; Lao, O.; Kayser, M. IrisPlex: A sensitive DNA tool for accurate prediction of blue and brown eye colour in the absence of ancestry information. Forensic Sci. Int. Genet. 2011, 5, 170–180. [Google Scholar] [CrossRef] [PubMed]
  52. Chaitanya, L.; Walsh, S.; Andersen, J.D.; Ansell, R.; Ballantyne, K.; Ballard, D.; Banemann, R.; Bauer, C.M.; Bento, A.M.; Brisighelli, F.; et al. Collaborative EDNAP exercise on the IrisPlex system for DNA-based prediction of human eye colour. Forensic Sci. Int. Genet. 2014, 11, 241–251. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  53. Purps, J.; Geppert, M.; Nagy, M.; Roewer, L. Evaluation of the IrisPlex eye colour prediction tool in a German population sample. Forensic Sci. Int. Genet. Suppl. Ser. 2011, 3, e202–e203. [Google Scholar] [CrossRef]
  54. Walsh, S.; Wollstein, A.; Liu, F.; Chakravarthy, U.; Rahu, M.; Seland, J.H.; Soubrane, G.; Tomazzoli, L.; Topouzis, F.; Vingerling, J.R.; et al. DNA-based eye colour prediction across Europe with the IrisPlex system. Forensic Sci. Int. Genet. 2012, 6, 330–340. [Google Scholar] [CrossRef]
  55. Bulbul, O.; Zorlu, T.; Filoglu, G. Prediction of human eye colour using highly informative phenotype SNPs (PISNPs). Aust. J. Forensic Sci. 2020, 52, 27–37. [Google Scholar] [CrossRef]
  56. Prestes, P.R.; Mitchell, R.J.; Daniel, R.; Ballantyne, K.N.; van Oorschot, R.A.H. Evaluation of the IrisPlex system in admixed individuals. Forensic Sci. Int. Genet. Suppl. Ser. 2011, 3, e283–e284. [Google Scholar] [CrossRef]
  57. Dembinski, G.M.; Picard, C.J. Evaluation of the IrisPlex DNA-based eye color prediction assay in a United States population. Forensic Sci. Int. Genet. 2014, 9, 111–117. [Google Scholar] [CrossRef]
  58. Branicki, W.; Liu, F.; van Duijn, K.; Draus-Barini, J.; Pośpiech, E.; Walsh, S.; Kupiec, T.; Wojas-Pelc, A.; Kayser, M. Model-based prediction of human hair color using DNA variants. Hum. Genet. 2011, 129, 443–454. [Google Scholar] [CrossRef] [Green Version]
  59. De Cerqueira, C.C.S.; Hünemeier, T.; Gomez-Valdés, J.; Ramallo, V.; Volasko-Krause, C.D.; Barbosa, A.A.L.; Vargas-Pinilla, P.; Dornelles, R.C.; Longo, D.; Rothhammer, F.; et al. Implications of the admixture process in skin color molecular assessment. PLoS ONE 2014, 9, e96886. [Google Scholar] [CrossRef]
  60. Lima, F.D.A.; Gonçalves, F.D.T.; Fridman, C. SLC24A5 and ASIP as phenotypic predictors in Brazilian population for forensic purposes. Leg. Med. 2015, 17, 261–266. [Google Scholar] [CrossRef]
  61. Dario, P.; Mouriño, H.; Oliveira, A.R.; Lucas, I.; Ribeiro, T.; Porto, M.J.; Costa Santos, J.; Dias, D.; Corte Real, F. Assessment of IrisPlex-based multiplex for eye and skin color prediction with application to a Portuguese population. Int. J. Leg. Med. 2015, 129, 1191–1200. [Google Scholar] [CrossRef]
62. Fracasso, N.C.d.A.; de Andrade, E.S.; Wiezel, C.E.V.; Andrade, C.C.F.; Zanão, L.R.; da Silva, M.S.; Marano, L.A.; Donadi, E.A.; Castelli, E.C.; Simões, A.L.; et al. Haplotypes from the SLC45A2 gene are associated with the presence of freckles and eye, hair and skin pigmentation in Brazil. Leg. Med. 2017, 25, 43–51.
63. Branicki, W.; Brudnik, U.; Wojas-Pelc, A. Interactions between HERC2, OCA2 and MC1R may influence human pigmentation phenotype. Ann. Hum. Genet. 2009, 73, 160–170.
64. Simcoe, M.; Valdes, A.; Liu, F.; Furlotte, N.A.; Evans, D.M.; Hemani, G.; Ring, S.M.; Smith, G.D.; Duffy, D.L.; Zhu, G.; et al. Genome-wide association study in almost 195,000 individuals identifies 50 previously unidentified genetic loci for eye color. Sci. Adv. 2021, 7, eabd1239.
65. Devranoglu, D.; Tavaci, I.; Filoglu, G.; Bulbul, O. Effect of Type of Degraded DNA Samples on Human Eye Color Prediction. Pak. J. Zool. 2020, 53, 1–9.
66. Hart, K.L.; Kimura, S.L.; Mushailov, V.; Budimlija, Z.M.; Prinz, M.; Wurmbach, E. Improved eye- and skin-color prediction based on 8 SNPs. Croat. Med. J. 2013, 54, 248–256.
67. Zaorska, K.; Zawierucha, P.; Nowicki, M. Prediction of skin color, tanning and freckling from DNA in Polish population: Linear regression, random forest and neural network approaches. Hum. Genet. 2019, 138, 635–647.
68. Stokowski, R.P.; Pant, P.K.; Dadd, T.; Fereday, A.; Hinds, D.; Jarman, C.; Filsell, W.; Ginger, R.S.; Green, M.R.; van der Ouderaa, F.J.; et al. A Genomewide Association Study of Skin Pigmentation in a South Asian Population. Am. J. Hum. Genet. 2007, 81, 1119–1132.
69. Spichenok, O.; Budimlija, Z.M.; Mitchell, A.A.; Jenny, A.; Kovacevic, L.; Marjanovic, D.; Caragine, T.; Prinz, M.; Wurmbach, E. Prediction of eye and skin color in diverse populations using seven SNPs. Forensic Sci. Int. Genet. 2011, 5, 472–478.
70. Hysi, P.G.; Valdes, A.M.; Liu, F.; Furlotte, N.A.; Evans, D.M.; Bataille, V.; Visconti, A.; Hemani, G.; McMahon, G.; Ring, S.M.; et al. Genome-wide association meta-analysis of individuals of European ancestry identifies new loci explaining a substantial fraction of hair color variation and heritability. Nat. Genet. 2018, 50, 652–656.
71. Liu, F.; van Duijn, K.; Vingerling, J.R.; Hofman, A.; Uitterlinden, A.G.; Janssens, A.C.J.; Kayser, M. Eye color and the prediction of complex phenotypes from genotypes. Curr. Biol. 2009, 19, R192–R193.
72. Alghamdi, J.; Amoudi, M.; Kassab, A.C.; Al Mufarrej, M.; Al Ghamdi, S. Eye color prediction using single nucleotide polymorphisms in Saudi population. Saudi J. Biol. Sci. 2019, 26, 1607–1612.
73. Balanovska, E.; Lukianova, E.; Kagazezheva, J.; Maurer, A.; Leybova, N.; Agdzhoyan, A.; Gorin, I.; Petrushenko, V.; Zhabagin, M.; Pylev, V.; et al. Optimizing the genetic prediction of the eye and hair color for North Eurasian populations. BMC Genom. 2020, 21, 527.
74. Seo, J.Y.; You, S.W.; Shin, J.-G.; Kim, Y.; Park, S.G.; Won, H.-H.; Kang, N.G. GWAS Identifies Multiple Genetic Loci for Skin Color in Korean Women. J. Investig. Dermatol. 2022, 142, 1077–1084.
75. Djordjevic, J.; Zhurov, A.I.; Richmond, S.; Visigen Consortium. Genetic and Environmental Contributions to Facial Morphological Variation: A 3D Population-Based Twin Study. PLoS ONE 2016, 11, e0162250.
76. Tsagkrasoulis, D.; Hysi, P.; Spector, T.; Montana, G. Heritability maps of human face morphology through large-scale automated three-dimensional phenotyping. Sci. Rep. 2017, 7, 45885.
77. Claes, P.; Hill, H.; Shriver, M.D. Toward DNA-based facial composites: Preliminary results and validation. Forensic Sci. Int. Genet. 2014, 13, 208–216.
78. Tabarek, M.N.; Alkazaz, A.K.A. The two single nucleotide polymorphism haplotypes on chromosome 15 of the HERC2 and OCA2 genes of the color variation of the human eye in a sample of Iraqi population. Iraqi J. Agric. Sci. 2022, 53, 67–74.
79. Rafati, A.; Hosseini, M.; Tavallaei, M.; Naderi, M.; Sarveazad, A. Association of rs12913832 in the HERC2 Gene Affecting Human Iris Color Variation. Anat. Sci. J. 2015, 12, 9–16.
80. Claes, P.; Liberton, D.K.; Daniels, K.; Rosana, K.M.; Quillen, E.E.; Pearson, L.N.; McEvoy, B.; Bauchet, M.; Zaidi, A.A.; Yao, W.; et al. Modeling 3D Facial Shape from DNA. PLoS Genet. 2014, 10, e1004224.
81. Liu, F.; Van Der Lijn, F.; Schurmann, C.; Zhu, G.; Chakravarty, M.M.; Hysi, P.G.; Wollstein, A.; Lao, O.; de Bruijne, M.; Ikram, M.A.; et al. A Genome-Wide Association Study Identifies Five Loci Influencing Facial Morphology in Europeans. PLoS Genet. 2012, 8, e1002932.
82. Peng, S.; Tan, J.; Hu, S.; Zhou, H.; Guo, J.; Jin, L.; Tang, K. Detecting Genetic Association of Common Human Facial Morphological Variation Using High Density 3D Image Registration. PLoS Comput. Biol. 2013, 9, e1003375.
83. HIrisPlex-S Eye, Hair and Skin Colour DNA Phenotyping Webtool. Available online: https://hirisplex.erasmusmc.nl/ (accessed on 8 May 2022).
84. Wilkinson, C. Craniofacial Identification; Cambridge University Press: Cambridge, UK, 2012.
85. Claes, P.; Vandermeulen, D.; De Greef, S.; Willems, G.; Clement, J.G.; Suetens, P. Computerized craniofacial reconstruction: Conceptual framework and review. Forensic Sci. Int. 2010, 201, 138–145.
86. Claes, P.; Vandermeulen, D.; De Greef, S.; Willems, G.; Suetens, P. Craniofacial reconstruction using a combined statistical model of face shape and soft tissue depths: Methodology and validation. Forensic Sci. Int. 2006, 159, S147–S158.
87. Decker, S.; Ford, J.; Davy-Jow, S.; Faraut, P.; Neville, W.; Hilbelink, D. Who is this person? A comparison study of current three-dimensional facial approximation methods. Forensic Sci. Int. 2013, 229, 161.e1–161.e8.
88. Qian, W.; Zhang, M.; Wan, K.; Xie, Y.; Du, S.; Li, J.; Mu, X.; Qiu, J.; Xue, X.; Zhuang, X.; et al. Genetic evidence for facial variation being a composite phenotype of cranial variation and facial soft tissue thickness. J. Genet. Genom. 2022, 49, 934–942.
89. Fagertun, J.; Wolffhechel, K.; Pers, T.H.; Nielsen, H.B.; Gudbjartsson, D.; Stefansson, H.; Stefansson, K.; Paulsen, R.R.; Jarmer, H. Predicting facial characteristics from complex polygenic variations. Forensic Sci. Int. Genet. 2015, 19, 263–268.
90. Adhikari, K.; Fontanil, T.; Cal, S.; Mendoza-Revilla, J.; Fuentes-Guajardo, M.; Chacon-Duque, J.C.; Al-Saadi, F.; Johansson, J.A.; Quinto-Sanchez, M.; Acuña-Alonzo, V.; et al. A genome-wide association scan in admixed Latin Americans identifies loci influencing facial and scalp hair features. Nat. Commun. 2016, 7, 10815.
91. Xiong, Z.; Dankova, G.; Howe, L.J.; Lee, M.K.; Hysi, P.G.; de Jong, M.A.; Zhu, G.; Adhikari, K.; Li, D.; Li, Y.; et al. Novel genetic loci affecting facial shape variation in humans. eLife 2019, 8, e49898.
92. Thompson, T.; Black, S. Forensic Human Identification: An Introduction; CRC Press: Boca Raton, FL, USA, 2006.
93. Wilkinson, C. Facial Anthropology and Reconstruction. In Forensic Human Identification; CRC Press: Boca Raton, FL, USA, 2006; pp. 231–255.
94. Fullwiley, D. Can DNA ‘Witness’ Race?: Forensic Uses of an Imperfect Ancestry Testing Technology. Genewatch 2008, 21, 12–14.
95. Kayser, M.; Schneider, P.M. DNA-based prediction of human externally visible characteristics in forensics: Motivations, scientific challenges, and ethical considerations. Forensic Sci. Int. Genet. 2009, 3, 154–161.
96. Porras, A.R.; Rosenbaum, K.; Tor-Diez, C.; Summar, M.; Linguraru, M.G. Development and evaluation of a machine learning-based point-of-care screening tool for genetic syndromes in children: A multinational retrospective study. Lancet Digit. Health 2021, 3, e635–e643.
97. Gurovich, Y.; Hanani, Y.; Bar, O.; Nadav, G.; Fleischer, N.; Gelbman, D.; Basel-Salmon, L.; Krawitz, P.M.; Kamphausen, S.B.; Zenker, M.; et al. Identifying facial phenotypes of genetic disorders using deep learning. Nat. Med. 2019, 25, 60–64.
98. Ferry, Q.; Steinberg, J.; Webber, C.; FitzPatrick, D.R.; Ponting, C.P.; Zisserman, A.; Nellåker, C. Diagnostically relevant facial gestalt information from ordinary photos. eLife 2014, 3, e02020.
99. Hsieh, T.-C.; Bar-Haim, A.; Moosa, S.; Ehmke, N.; Gripp, K.W.; Pantel, J.T.; Danyel, M.; Mensah, M.A.; Horn, D.; Rosnev, S.; et al. GestaltMatcher facilitates rare disease matching using facial phenotype descriptors. Nat. Genet. 2022, 54, 349–357.
100. Palmer, R.L.; Helmholz, P.; Baynam, G. Cliniface: Phenotypic visualisation and analysis using non-rigid registration of 3D facial images. ISPRS Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2020, 43, 301–308.
101. Butler, J.M. The future of forensic DNA analysis. Philos. Trans. R. Soc. B Biol. Sci. 2015, 370, 20140252.
102. Agrawal, K. Forensic DNA Phenotyping: Significance in Criminal Investigations. Forensic Sci. 2020, 3, WGK360047.
103. Hedstrom, J. Scanning Process: It’s Easy to Generate a 3D Printing. Sculpteo Blog, 11 November 2015. Available online: https://www.sculpteo.com/blog/2015/11/11/scanning-for-3d-printing-using-photogrammetry/ (accessed on 8 May 2022).
104. Carreel, E.; Moreau, C. How to 3D Scan with a Phone: Here Are Our Best Tips. Sculpteo, 29 January 2020. Available online: https://www.sculpteo.com/en/3d-learning-hub/best-articles-about-3d-printing/3d-scan-smartphone/ (accessed on 8 May 2022).
105. #1 Mobile 3D Scanning App for iPad. 23 April 2014. Available online: https://itseez3d.com/scanner.html (accessed on 8 May 2022).
106. Geng, J. Structured-light 3D surface imaging: A tutorial. Adv. Opt. Photon. 2011, 3, 128–160.
107. Ebrahim, M.A.B. 3D Laser Scanners’ Techniques Overview. Int. J. Sci. Res. (IJSR) 2015, 4, 5–611.
108. LMI Technologies Inc. Structured Light vs. Laser Triangulation for 3D Scanning and Inspection. Vision Online, 3 August 2016. Available online: https://www.visiononline.org/vision-resources-details.cfm/vision-resources/Structured-Light-vs-Laser-Triangulation-for-3D-Scanning-and-Inspection/content_id/6071 (accessed on 10 November 2020).
109. Bernardini, F.; Rushmeier, H. The 3D Model Acquisition Pipeline. Comput. Graph. Forum 2002, 21, 149–172.
110. Peiravi, A.; Taabbodi, B. A reliable 3D laser triangulation-based scanner with a new simple but accurate procedure for finding scanner parameters. J. Am. Sci. 2010, 6, 80–85.
111. Kau, C.H.; Richmond, S. Three-dimensional analysis of facial morphology surface changes in untreated children from 12 to 14 years of age. Am. J. Orthod. Dentofac. Orthop. 2008, 134, 751–760.
112. Abbas, H.H.; Hicks, Y.; Zhurov, A.; Marshall, D.; Claes, P.; Wilson-Nagrani, C.; Richmond, S. An automatic approach for classification and categorisation of lip morphological traits. PLoS ONE 2019, 14, e0221197.
113. Georgopoulos, A.; Ioannidis, C.; Valanis, A. Assessing the performance of a structured light scanner. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2010, 38, 251–255.
114. Revopoint3D. High-Accuracy Handheld 3D Scanner Handysense Developed by Revopoint. 2019. Available online: https://www.revopoint3d.com/handheld-3d-scanner-handysense/ (accessed on 8 November 2020).
115. Pucciarelli, V.; Gibelli, D.M.; Codari, M.; Sforza, C. Laser Scanner Versus Stereophotogrammetry: A Three-Dimensional Quantitative Approach for Morphological; Hometrica Consulting: Ascona, Switzerland, 2016; pp. 80–87.
116. Canfield Scientific Inc. VECTRA M3 3D Imaging System. 2020. Available online: https://www.canfieldsci.com/imaging-systems/vectra-m3-3d-imaging-system/ (accessed on 8 November 2020).
117. Lane, C.; Duncan, K.; Nugent, M. 3dMD Products—3dMD, LLC. 2020. Available online: https://3dmd.com/products/#!/head (accessed on 8 November 2020).
118. Artec3D. Artec Eva. 2020. Available online: https://www.artec3d.com/portable-3d-scanners/artec-eva?utm_source=google&utm_medium=cpc&utm_campaign=2030432791&utm_term=%2Bartec%20%2Beva||kwd-57271721806&utm_content=71839602156||&keyword=%2Bartec%20%2Beva&gclid=Cj0KCQiAy579BRCPARIsAB6QoIYRa3X-b28HsyzYRfP37TKS63H2AKnkcmSUE6VQlmZdMpcyVrskc1EaAhw5EALw_wcB (accessed on 8 November 2020).
119. Canfield Scientific Inc. VECTRA H1 3D Imaging System. 2020. Available online: https://www.canfieldsci.com/imaging-systems/vectra-h1-3d-imaging-system/ (accessed on 8 November 2020).
120. Minolta, K. VIVID 910—Laser Scanner. 2015. Available online: http://laserscannervivid.blogspot.com/2015/05/vivid-910.html (accessed on 16 December 2020).
121. Aeria, G.; Claes, P.; Vandermeulen, D.; Clement, J.G. Targeting specific facial variation for different identification tasks. Forensic Sci. Int. 2010, 201, 118–124.
122. Vuollo, V.; Sidlauskas, M.; Sidlauskas, A.; Harila, V.; Salomskiene, L.; Zhurov, A.; Holmström, L.; Pirttiniemi, P.; Heikkinen, T. Comparing Facial 3D Analysis With DNA Testing to Determine Zygosities of Twins. Twin Res. Hum. Genet. 2015, 18, 306–313.
123. Crouch, D.J.M.; Winney, B.; Koppen, W.P.; Christmas, W.J.; Hutnik, K.; Day, T.; Meena, D.; Boumertit, A.; Hysi, P.; Nessa, A.; et al. Genetics of the human face: Identification of large-effect single gene variants. Proc. Natl. Acad. Sci. USA 2018, 115, E676–E685.
124. Sero, D.; Zaidi, A.; Li, J.; White, J.D.; Zarzar, T.B.G.; Marazita, M.L.; Weinberg, S.M.; Suetens, P.; Vandermeulen, D.; Wagner, J.K.; et al. Facial recognition from DNA using face-to-DNA classifiers. Nat. Commun. 2019, 10, 2557.
125. Camison, L.; Bykowski, M.; Lee, W.W.; Carlson, J.C.; Roosenboom, J.; Goldstein, J.A.; Losee, J.E.; Weinberg, S.M. Validation of the Vectra H1 portable three-dimensional photogrammetry system for facial imaging. Int. J. Oral Maxillofac. Surg. 2018, 47, 403–410.
126. Savoldelli, C.; Benat, G.; Castillo, L.; Chamorey, E.; Lutz, J.-C. Accuracy, repeatability and reproducibility of a handheld three-dimensional facial imaging device: The Vectra H1. J. Stomatol. Oral Maxillofac. Surg. 2019, 120, 289–296.
127. Modabber, A.; Peters, F.; Brokmeier, A.; Goloborodko, E.; Ghassemi, A.; Lethaus, B.; Hölzle, F.; Möhlhenrich, S.C. Influence of Connecting Two Standalone Mobile Three-Dimensional Scanners on Accuracy Comparing with a Standard Device in Facial Scanning. J. Oral Maxillofac. Res. 2016, 7, e4.
128. Almulla, S.; Premjani, P.; Vaid, N.R.; Fadia, D.F.; Ferguson, D.J. Evaluating the accuracy of facial models obtained from volume wrapping: 2D images on CBCT versus 3D on CBCT. Semin. Orthod. 2018, 24, 443–450.
129. Toma, A.M.; Zhurov, A.; Playle, R.; Ong, E.; Richmond, S. Reproducibility of facial soft tissue landmarks on 3D laser-scanned facial images. Orthod. Craniofacial Res. 2009, 12, 33–42.
130. Weinberg, S.; Scott, N.M.; Neiswanger, K.; Brandon, C.A.; Marazita, M.L. Digital Three-Dimensional Photogrammetry: Evaluation of Anthropometric Precision and Accuracy Using a Genex 3D Camera System. Cleft Palate-Craniofac. J. 2004, 41, 507–518.
131. Çeliktutan, O.; Ulukaya, S.; Sankur, B. A comparative study of face landmarking techniques. EURASIP J. Image Video Process. 2013, 2013, 13.
132. Moyers, R.E.; Bookstein, F.L. The inappropriateness of conventional cephalometrics. Am. J. Orthod. 1979, 75, 599–617.
133. Fagertun, J.; Harder, S.; Rosengren, A.; Moeller, C.; Werge, T.; Paulsen, R.R.; Hansen, T.F. 3D facial landmarks: Inter-operator variability of manual annotation. BMC Med. Imaging 2014, 14, 35.
134. Von Cramon-Taubadel, N.; Frazier, B.C.; Lahr, M.M. The problem of assessing landmark error in geometric morphometrics: Theory, methods, and modifications. Am. J. Phys. Anthropol. 2007, 134, 24–35.
135. Wong, J.Y.; Oh, A.K.; Ohta, E.; Hunt, A.T.; Rogers, G.F.; Mulliken, J.B.; Deutsch, C.K. Validity and Reliability of Craniofacial Anthropometric Measurement of 3D Digital Photogrammetric Images. Cleft Palate-Craniofac. J. 2008, 45, 232–239.
136. Halazonetis, D. Viewbox 4 Software—Viewbox Cephalometric Software. 2014. Available online: http://www.dhal.com/index.htm (accessed on 16 November 2020).
137. Adams, D.C.; Otárola-Castillo, E. Geomorph: An R package for the collection and analysis of geometric morphometric shape data. Methods Ecol. Evol. 2013, 4, 393–399.
138. Nazri, A.; Agbolade, O.; Yaakob, R.; Ghani, A.A.; Cheah, Y.K. A novel investigation of the effect of iterations in sliding semi-landmarks for 3D human facial images. BMC Bioinform. 2020, 21, 208.
139. Segundo, M.P.; Silva, L.; Bellon, O.R.P.; Queirolo, C.C. Automatic Face Segmentation and Facial Landmark Detection in Range Images. IEEE Trans. Syst. Man Cybern. Part B 2010, 40, 1319–1330.
140. Vezzetti, E.; Marcolin, F.; Tornincasa, S.; Ulrich, L.; Dagnes, N. 3D geometry-based automatic landmark localization in presence of facial occlusions. Multimed. Tools Appl. 2017, 77, 14177–14205.
141. Vezzetti, E.; Marcolin, F. 3D Landmarking in Multiexpression Face Analysis: A Preliminary Study on Eyebrows and Mouth. Aesthetic Plast. Surg. 2014, 38, 796–811.
142. Bagchi, P.; Bhattacharjee, D.; Nasipuri, M.; Basu, D.K. A novel approach to nose-tip and eye corners detection using H-K curvature analysis in case of 3D images. In Proceedings of the 2012 Third International Conference on Emerging Applications of Information Technology, Kolkata, India, 30 November–1 December 2012; pp. 311–315.
143. Li, Y.; Wang, Y.; Wang, B.; Sui, L. Nose tip detection on three-dimensional faces using pose-invariant differential surface features. IET Comput. Vis. 2014, 9, 75–84.
144. Boukamcha, H.; Elhallek, M.; Atri, M.; Smach, F. 3D face landmark auto detection. In Proceedings of the 2015 World Symposium on Computer Networks and Information Security (WSCNIS), Hammamet, Tunisia, 19–21 September 2015; pp. 1–6.
145. De Giorgis, N.; Rocca, L.; Puppo, E. Scale-Space Techniques for Fiducial Points Extraction from 3D Faces. In Proceedings of the Image Analysis and Processing—ICIAP, Genoa, Italy, 7–11 September 2015; pp. 421–431.
146. Cha, S.; Lim, J.E.; Park, A.Y.; Do, J.-H.; Lee, S.W.; Shin, C.; Cho, N.H.; Kang, J.-O.; Nam, J.M.; Kim, J.-S.; et al. Identification of five novel genetic loci related to facial morphology by genome-wide association studies. BMC Genom. 2018, 19, 481.
147. Face Mesh. MediaPipe. Available online: https://google.github.io/mediapipe/solutions/face_mesh.html (accessed on 30 April 2022).
148. Zhang, Y.; Prakash, E.C. Face to Face: Anthropometry-Based Interactive Face Shape Modeling Using Model Priors. Int. J. Comput. Games Technol. 2009, 2009, 1–15.
149. White, J.D.; Ortega-Castrillón, A.; Matthews, H.; Zaidi, A.A.; Ekrami, O.; Snyders, J.; Fan, Y.; Penington, T.; Van Dongen, S.; Shriver, M.D.; et al. MeshMonk: Open-source large-scale intensive 3D phenotyping. Sci. Rep. 2019, 9, 6085.
150. Hoskens, H.; Li, J.; Indencleef, K.; Gors, D.; Larmuseau, M.H.D.; Richmond, S.; Zhurov, A.I.; Hens, G.; Peeters, H.; Claes, P. Spatially Dense 3D Facial Heritability and Modules of Co-heritability in a Father-Offspring Design. Front. Genet. 2018, 9, 554.
151. White, J. Investigations into the Genetic Architecture of the Human Face; Pennsylvania State University: State College, PA, USA, 2019.
152. De Jong, M.A.; Wollstein, A.; Ruff, C.; Dunaway, D.; Hysi, P.; Spector, T.; Liu, F.; Niessen, W.; Koudstaal, M.J.; Kayser, M.; et al. An Automatic 3D Facial Landmarking Algorithm Using 2D Gabor Wavelets. IEEE Trans. Image Process. 2016, 25, 580–588.
153. Hallgrimsson, B.; Mio, W.; Marcucio, R.S.; Spritz, R. Let’s face it--complex traits are just not that simple. PLoS Genet. 2014, 10, e1004724.
154. Shaffer, J.R.; Orlova, E.; Lee, M.K.; Leslie, E.J.; Raffensperger, Z.D.; Heike, C.L.; Cunningham, M.L.; Hecht, J.T.; Kau, C.H.; Nidey, N.L.; et al. Genome-Wide Association Study Reveals Multiple Loci Influencing Normal Human Facial Morphology. PLoS Genet. 2016, 12, e1006149.
155. Waltoft, B.L.; Pedersen, C.B.; Nyegaard, M.; Hobolth, A. The importance of distinguishing between the odds ratio and the incidence rate ratio in GWAS. BMC Med. Genet. 2015, 16, 71.
156. Buniello, A.; MacArthur, J.A.L.; Cerezo, M.; Harris, L.W.; Hayhurst, J.; Malangone, C.; McMahon, A.; Morales, J.; Mountjoy, E.; Sollis, E.; et al. The NHGRI-EBI GWAS Catalog of published genome-wide association studies, targeted arrays and summary statistics 2019. Nucleic Acids Res. 2019, 47, D1005–D1012.
157. Hoskens, H.; Liu, D.; Naqvi, S.; Lee, M.K.; Eller, R.J.; Indencleef, K.; White, J.D.; Li, J.; Larmuseau, M.H.D.; Hens, G.; et al. 3D facial phenotyping by biometric sibling matching used in contemporary genomic methodologies. PLoS Genet. 2021, 17, e1009528.
158. Adhikari, K.; Fuentes-Guajardo, M.; Quinto-Sánchez, M.; Mendoza-Revilla, J.; Camilo Chacón-Duque, J.; Acuña-Alonzo, V.; Jaramillo, C.; Arias, W.; Lozano, R.B.; Pérez, G.M.; et al. A genome-wide association scan implicates DCHS2, RUNX2, GLI3, PAX1 and EDAR in human facial variation. Nat. Commun. 2016, 7, 11616.
159. Laville, V.; Le Clerc, S.; Ezzedine, K.; Jdid, R.; Taing, L.; Labib, T.; Coulonges, C.; Ulveling, D.; Galan, P.; Guinot, C.; et al. A genome wide association study identifies new genes potentially associated with eyelid sagging. Exp. Dermatol. 2019, 28, 892–898.
160. Huang, Y.; Li, D.; Qiao, L.; Liu, Y.; Peng, Q.; Wu, S.; Zhang, M.; Yang, Y.; Tan, J.; Xu, S.; et al. A genome-wide association study of facial morphology identifies novel genetic loci in Han Chinese. J. Genet. Genom. 2019, 48, 198–207.
161. Bonfante, B.; Faux, P.; Navarro, N.; Mendoza-Revilla, J.; Dubied, M.; Montillot, C.; Wentworth, E.; Poloni, L.; Varón-González, C.; Jones, P.; et al. A GWAS in Latin Americans identifies novel face shape loci, implicating VPS13B and a Denisovan introgressed region in facial variation. Sci. Adv. 2021, 7, i161–i168.
162. Pickrell, J.K.; Berisa, T.; Liu, J.Z.; Ségurel, L.; Tung, J.Y.; Hinds, D.A. Detection and interpretation of shared genetic influences on 42 human traits. Nat. Genet. 2016, 48, 709.
163. Liu, C.; Lee, M.K.; Naqvi, S.; Hoskens, H.; Liu, D.; White, J.D.; Indencleef, K.; Matthews, H.; Eller, R.J.; Li, J.; et al. Genome scans of facial features in East Africans and cross-population comparisons reveal novel associations. PLoS Genet. 2021, 17, e1009695.
164. Endo, C.; Johnson, T.A.; Morino, R.; Nakazono, K.; Kamitsuji, S.; Akita, M.; Kawajiri, M.; Yamasaki, T.; Kami, A.; Hoshi, Y.; et al. Genome-wide association study in Japanese females identifies fifteen novel skin-related trait associations. Sci. Rep. 2018, 8, 8974.
165. Lee, M.K.; Shaffer, J.R.; Leslie, E.J.; Orlova, E.; Carlson, J.C.; Feingold, E.; Marazita, M.L.; Weinberg, S.M. Genome-wide association study of facial morphology reveals novel associations with FREM1 and PARK2. PLoS ONE 2017, 12, e0176566.
166. Hu, B.; Shen, N.; Li, J.J.; Kang, H.; Hong, J.; Fletcher, J.; Greenberg, J.; Mailick, M.R.; Lu, Q. Genome-wide association study reveals sex-specific genetic architecture of facial attractiveness. PLoS Genet. 2019, 15, e1007973.
167. Claes, P.; Roosenboom, J.; White, J.D.; Swigut, T.; Sero, D.; Li, J.; Lee, M.K.; Zaidi, A.; Mattern, B.C.; Liebowitz, C.; et al. Genome-wide mapping of global-to-local genetic effects on human facial shape. Nat. Genet. 2018, 50, 414–423.
168. Qiao, L.; Yang, Y.; Fu, P.; Hu, S.; Zhou, H.; Peng, S.; Tan, J.; Lu, Y.; Lou, H.; Lu, D.; et al. Genome-wide variants of Eurasian facial shape differentiation and a prospective model of DNA based face prediction. J. Genet. Genom. 2018, 45, 419–432.
169. White, J.D.; Indencleef, K.; Naqvi, S.; Eller, R.J.; Hoskens, H.; Roosenboom, J.; Lee, M.K.; Li, J.; Mohammed, J.; Richmond, S.; et al. Insights into the genetic architecture of the human face. Nat. Genet. 2021, 53, 45–53.
170. Jacobs, L.C.; Liu, F.; Bleyen, I.; Gunn, D.; Hofman, A.; Klaver, C.C.W.; Uitterlinden, A.G.; Neumann, H.A.M.; Bataille, V.; Spector, T.D.; et al. Intrinsic and Extrinsic Risk Factors for Sagging Eyelids. JAMA Dermatol. 2014, 150, 836–843.
171. Howe, L.J.; Lee, M.K.; Sharp, G.C.; Smith, G.D.; Pourcain, B.S.; Shaffer, J.R.; Ludwig, K.U.; Mangold, E.; Marazita, M.L.; Feingold, E.; et al. Investigating the shared genetics of non-syndromic cleft lip/palate and facial morphology. PLoS Genet. 2018, 14, e1007501.
172. Indencleef, K.; Hoskens, H.; Lee, M.K.; White, J.D.; Liu, C.; Eller, R.J.; Naqvi, S.; Wehby, G.L.; Uribe, L.M.M.; Hecht, J.T.; et al. The Intersection of the Genetic Architectures of Orofacial Clefts and Normal Facial Variation. Front. Genet. 2021, 12, 626403.
173. Shin, J.; Lee, C. Statistical power for identifying nucleotide markers associated with quantitative traits in genome-wide association analysis using a mixed model. Genomics 2015, 105, 1–4.
174. Fadista, J.; Manning, A.K.; Florez, J.C.; Groop, L. The (in)famous GWAS P-value threshold revisited and updated for low-frequency variants. Eur. J. Hum. Genet. 2016, 24, 1202–1205.
175. Kaler, A.S.; Purcell, L.C. Estimation of a significance threshold for genome-wide association studies. BMC Genom. 2019, 20, 618.
176. PABPC1L2B Gene. GeneCards. Available online: https://www.genecards.org/cgi-bin/carddisp.pl?gene=PABPC1L2B (accessed on 1 October 2022).
177. NAV3 Neuron Navigator 3 Homo Sapiens (Human)—Gene—NCBI. Available online: https://www.ncbi.nlm.nih.gov/gene/89795 (accessed on 9 May 2022).
178. Sun, Y.; Chen, Y.; Zhang, J.; Cao, L.; He, M.; Liu, X.; Zhao, N.; Yin, A.; Huang, H.; Wang, L. TMEM74 promotes tumor cell survival by inducing autophagy via interactions with ATG16L1 and ATG9A. Cell Death Dis. 2017, 8, e3031.
179. Sun, Y.; Deng, J.; Xia, P.; Chen, W.; Wang, L. The Expression of TMEM74 in Liver Cancer and Lung Cancer Correlating With Survival Outcomes. Appl. Immunohistochem. Mol. Morphol. 2019, 27, 618–625.
180. PAX3 Gene. Available online: https://medlineplus.gov/genetics/gene/pax3/ (accessed on 4 May 2022).
181. GeneCards Human Gene Database. ROCR Gene. Available online: https://www.genecards.org/cgi-bin/carddisp.pl?gene=ROCR (accessed on 4 May 2022).
182. Dessen, P. SUPT3H (SPT3 homolog, SAGA and STAGA complex component). Atlas Genet. Cytogenet. Oncol. Haematol. 1 May 2003. Available online: https://atlasgeneticsoncology.org/gene/42451/supt3h-(spt3-homolog-saga-and-staga-complex-component) (accessed on 4 May 2022).
183. Cleidocranial Dysplasia (CCD). 21 December 2021. Available online: https://www.hopkinsmedicine.org/health/conditions-and-diseases/cleidocranial-dysplasia-ccd (accessed on 4 May 2022).
184. NCBI Resource Coordinators. Database Resources of the National Center for Biotechnology Information. Nucleic Acids Res. 2016, 44, D7–D19.
185. Claes, P.; Walters, M.; Shriver, M.D.; Puts, D.; Gibson, G.; Clement, J.; Baynam, G.; Verbeke, G.; Vandermeulen, D.; Suetens, P. Sexual dimorphism in multiple aspects of 3D facial symmetry and asymmetry defined by spatially dense geometric morphometrics. J. Anat. 2012, 221, 97–114.
186. Hopman, S.M.J.; Merks, J.H.M.; Suttie, M.; Hennekam, R.C.M.; Hammond, P. Face shape differs in phylogenetically related populations. Eur. J. Hum. Genet. 2014, 22, 1268–1271.
187. Lippert, C.; Sabatini, R.; Maher, M.C.; Kang, E.Y.; Lee, S.; Arikan, O.; Harley, A.; Bernal, A.; Garst, P.; Lavrenko, V.; et al. Identification of individuals by trait prediction using whole-genome sequencing data. Proc. Natl. Acad. Sci. USA 2017, 114, 10166–10171.
188. Vasquez, M.M.; Hu, C.; Roe, D.J.; Chen, Z.; Halonen, M.; Guerra, S. Least absolute shrinkage and selection operator type methods for the identification of serum biomarkers of overweight and obesity: Simulation and application. BMC Med. Res. Methodol. 2016, 16, 154.
189. Choi, S.W.; Mak, T.S.-H.; O’Reilly, P. Tutorial: A guide to performing polygenic risk score analyses. Nat. Protoc. 2020, 15, 2759–2772.
190. Pośpiech, E.; Teisseyre, P.; Mielniczuk, J.; Branicki, W. Predicting Physical Appearance from DNA Data—Towards Genomic Solutions. Genes 2022, 13, 121.
191. Hopman, R. Opening up forensic DNA phenotyping: The logics of accuracy, commonality and valuing. New Genet. Soc. 2020, 39, 424–440.
192. Stephan, C.N.; Caple, J.M.; Guyomarc’h, P.; Claes, P. An overview of the latest developments in facial imaging. Forensic Sci. Res. 2019, 4, 10–28.
193. Claes, P.; Shriver, M. Establishing a Multidisciplinary Context for Modeling 3D Facial Shape from DNA. PLoS Genet. 2014, 10, e1004725.
194. Bañuelos, M.M.; Zavaleta, Y.J.A.; Roldan, A.; Reyes, R.-J.; Guardado, M.; Rojas, B.C.; Nyein, T.; Vega, A.R.; Santos, M.; Huerta-Sanchez, E.; et al. Associations between forensic loci and expression levels of neighboring genes may compromise medical privacy. Proc. Natl. Acad. Sci. USA 2022, 119, e2121024119.
195. Koops, B.-J.; Schellekens, M. Forensic DNA phenotyping: Regulatory issues. Columbia Sci. Technol. Law Rev. 2008, 9, 158–160.
196. Andorno, R. The right not to know: An autonomy based approach. J. Med. Ethics 2004, 30, 435–439.
197. Hunter, P. Uncharted waters: Next-generation sequencing and machine learning software allow forensic science to expand into phenotype prediction from DNA samples. EMBO Rep. 2018, 19, e45810.
198. Scudder, N.; McNevin, D.; Kelty, S.F.; Walsh, S.J.; Robertson, J. Forensic DNA phenotyping: Developing a model privacy impact assessment. Forensic Sci. Int. Genet. 2018, 34, 222–230.
199. Toom, V.; Wienroth, M.; M’Charek, A.; Prainsack, B.; Williams, R.; Duster, T.; Heinemann, T.; Kruse, C.; Machado, H.; Murphy, E. Approaching ethical, legal and social issues of emerging forensic DNA phenotyping (FDP) technologies comprehensively: Reply to ‘Forensic DNA phenotyping: Predicting human appearance from crime scene material for investigative purposes’ by Manfred Kayser. Forensic Sci. Int. Genet. 2016, 22, e1–e4.
200. Queirós, F. The visibilities and invisibilities of race entangled with forensic DNA phenotyping technology. J. Forensic Leg. Med. 2019, 68, 101858.
201. Nogel, M.; Pádár, Z.; Czebe, A.; Kovács, G. Developing legal regulation of forensic DNA-phenotyping in Hungary. Forensic Sci. Int. Genet. Suppl. Ser. 2019, 7, 609–611.
202. Bouguila, J.; Khochtali, H. Facial plastic surgery and face recognition algorithms: Interaction and challenges. A scoping review and future directions. J. Stomatol. Oral Maxillofac. Surg. 2020, 121, 696–703.
  203. Liu, X.; Shan, S.; Chen, X. Face Recognition after Plastic Surgery: A Comprehensive Study. In Proceedings of the Computer Vision—ACCV 2012, Daejeon, Korea, 5–9 November 2012; pp. 565–576. [Google Scholar] [CrossRef]
  204. Singh, R.; Vatsa, M.; Bhatt, H.S.; Bharadwaj, S.; Noore, A.; Nooreyezdan, S.S. Plastic Surgery: A New Dimension to Face Recognition. IEEE Trans. Inf. Secur. 2010, 5, 441–448. [Google Scholar] [CrossRef]
  205. Samuel, G.; Prainsack, B. Forensic DNA phenotyping in Europe: Views “on the ground” from those who have a professional stake in the technology. New Genet. Soc. 2018, 38, 119–141. [Google Scholar] [CrossRef] [Green Version]
  206. Nappi, M.; Ricciardi, S.; Tistarelli, M. Deceiving faces: When plastic surgery challenges face recognition. Image Vis. Comput. 2016, 54, 71–82. [Google Scholar] [CrossRef]
  207. Hammond, P. The use of 3D face shape modelling in dysmorphology. Arch. Dis. Child. 2007, 92, 1120–1126. [Google Scholar] [CrossRef]
  208. De la Puente, M.; Xavier, C.; Mosquera, A.; Freire-Aradas, A.; Kalamara, V.; Vidaki, A.; Gross, T.; Revoir, A.; Pośpiech, E.; Kartasińska, E.; et al. VISAGE—Visible Attributes through Genomics. ResearchGate. 20 October 2020. Available online: https://www.researchgate.net/project/VISAGE-Visible-Attributes-through-Genomics (accessed on 27 February 2022).
  209. Katsara, M.-A.; Branicki, W.; Pośpiech, E.; Hysi, P.; Walsh, S.; Kayser, M.; Nothnagel, M. Testing the impact of trait prevalence priors in Bayesian-based genetic prediction modeling of human appearance traits. Forensic Sci. Int. Genet. 2020, 50, 102412. [Google Scholar] [CrossRef]
  210. Savran, A.; Alyüz, N.; Dibeklioğlu, H.; Çeliktutan, O.; Gökberk, B.; Sankur, B.; Akarun, L. Bosphorus Database for 3D Face Analysis. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2008; pp. 47–56. [Google Scholar] [CrossRef] [Green Version]
  211. Hodge, S. Current Controversies in the Use of DNA in Forensic Investigations. Univ. Baltim. Law Rev. 2018, 48, 39. [Google Scholar]
  212. Xavier, C.; de la Puente, M.; Mosquera-Miguel, A.; Freire-Aradas, A.; Kalamara, V.; Ralf, A.; Revoir, A.; Gross, T.; Schneider, P.; Ames, C.; et al. Development and inter-laboratory evaluation of the VISAGE Enhanced Tool for Appearance and Ancestry inference from DNA. Forensic Sci. Int. Genet. 2022, 61, 102779. [Google Scholar] [CrossRef]
  213. Livingston, A.B.G. Intermarriage in the U.S. 50 Years After Loving v. Virginia; Pew Research Center: Washington, DC, USA, 2017. [Google Scholar]
  214. Noroozi, R.; Ghafouri-Fard, S.; Pisarek, A.; Rudnicka, J.; Spólnicka, M.; Branicki, W.; Taheri, M.; Pośpiech, E. DNA methylation-based age clocks: From age prediction to age reversion. Ageing Res. Rev. 2021, 68, 101314. [Google Scholar] [CrossRef]
  215. Gunn, D.A.; Rexbye, H.; Griffiths, C.; Murray, P.G.; Fereday, A.; Catt, S.D.; Tomlin, C.C.; Strongitharm, B.H.; Perrett, D.; Catt, M.; et al. Why Some Women Look Young for Their Age. PLoS ONE 2009, 4, e8021. [Google Scholar] [CrossRef]
Figure 1. Literature review stages using the PRISMA approach.
Figure 2. Population ancestries in 19 research studies investigating SNP-face morphology [91,146,154,157,158,159,160,161,162,163,164,165,166,167,168,169,170,171,172]. These percentages include both discovery and replication samples.
Figure 3. Percentage of SNPs that met the GWAS significance threshold in the GWAS Catalog, grouped into six facial regions (face, forehead, nose, mouth, lip, and chin/lower face). “Face” indicates traits that were associated with multiple facial regions. Numbers on each bar reflect the number of associations for each facial region.
Table 1. A comparison between four scanning tools using three different 3D scanning techniques: 3dMDhead, Canfield VECTRA H1, Artec Eva, and Vivid 900 [115,116,117,118].
| Model/Product | 3dMDhead | Canfield VECTRA H1 | Artec Eva | Konica Minolta Vivid 900 Laser Cameras (Mid Lens) |
|---|---|---|---|---|
| Realization | Active/passive stereo photogrammetry | Passive stereo photogrammetry | Structured light | Laser scan |
| Coverage | Full 360-degree capture of the head, face, and neck | Capture volume (H × W × D): 220 × 130 × 70 mm; typical application: 100 degrees of the left, right, or front of the face | Closest (H × W): 90 × 70 mm; furthest (H × W): 180 × 140 mm | Closest (H × V): 204.7 × 153.6 mm; furthest (H × V): 830.6 × 622.9 mm |
| 3D Resolution | 0.2 mm | 0.8 mm geometric resolution (triangle edge length) | 0.5 mm | 0.016 mm |
| 3D Point Accuracy | 0.58 ± 0.11 mm | Average: 0.84 mm (range 0.19–1.54 mm) | 0.1 mm; 0.03% over 100 cm | X: ±0.38 mm, Y: ±0.31 mm, Z: ±0.20 mm to the Z reference plane |
| Capture Speed | ~0.0015 s at highest resolution | 0.008 s | 0.067 s/frame | 0.3 s (fast mode)/2.5 s (fine mode)/0.5 s (color mode) |
| Processing Speed | <15 s | ~20 s | 4 min (for facial scans) | 1 s (fast), 1.5 s (fine) |
| File Size | 15–95 MB, depending on configuration | 8 MB | 10–20 MB (a full-body scan ranges from 2–4 GB, according to Artec3D technical support) | 1.6 MB (fast), 3.6 MB (fine) |
| Geometric Representation | A continuous point cloud available as a textured mesh and densely textured point model | Mesh | Mesh | Original format converted to 3D by the utility software (640 × 480) |
| Error in Geometry | <0.2 mm | <0.1 mm | <0.1 mm | N/A |
| Approximate Price | Starting at USD 25,700 (each system is custom-configured and upgraded from standard modules to meet the customer’s specific imaging workflow requirements) | USD 11,000 | ~USD 21,000 | USD 25,000–55,000 |
| Utilized by | [119,120,121,122,123,124] | [123,124,125,126] | [125,126] | [109,127] |
Table 2. Genes that demonstrated two or more associations with facial traits, compiled from Tables S1–S6 [146,154,158,160,161,162,163,165,169,170,171].
| Number of Associations for Each Gene | Genes | Facial Region | Phenotypes | Ancestry | Reference |
|---|---|---|---|---|---|
| 2 | TRPC6 | Face | Upper facial depth | European | [154] |
| | | | Middle facial depth | European | [154] |
| 2 | LINC01470 | Face | Facial width measurement | European | [165] |
| | LINC01470, PRKAA1 | | Facial width measurement | European | [165] |
| 2 | ZRANB2-AS2 | Face | Factor 13, vertical position of alar curvature relative to upper lip | European | [165] |
| | | | Factor 13, vertical position of alar curvature relative to upper lip | European | [165] |
| 2 | TRPM1, LINC02352 | Face | Facial width measurement | European | [165] |
| | | | Middle facial depth | European | [154] |
| 2 | RERE | Eye | Right eyelid peak position ratio | East Asian | [146] |
| | | | Tangent line angle of er3 | East Asian | [146] |
| 2 | ATP8A1 | Eye | Upper eyelid sagging severity | European | [170] |
| | | | Upper eyelid sagging severity | European | [170] |
| 2 | PABPC1L2A, PABPC1L2B | Eye | Factor 14, intercanthal width | European | [165] |
| | | | Intercanthal width | European | [154] |
| 2 | ZNF385D | Eye | Upper eyelid sagging severity | European | [170] |
| | | | Upper eyelid sagging severity | European | [170] |
| 2 | CACNA2D3 | Nose | Segment 52 | African | [163] |
| | | | Nose size | European | [162] |
| 2 | GLI3 | Nose | Segment 22 | European | [169] |
| | | | Nose wing breadth | Hispanic/Latin American | [158] |
| 2 | LINC00399, LINC00676 | Nose | Nose protrusion | Hispanic/Latin American | [161] |
| | | | Nose size | Hispanic/Latin American | [161] |
| 2 | LINC00676 | Nose | Nose size | European | [162] |
| | LINC00676, LINC00399 | | Segment 20 | European | [169] |
| 2 | LINC01121, SIX2 | Nose | Columella size | Hispanic/Latin American | [161] |
| | | | Segment 44 | European | [169] |
| 2 | LINC01432 | Nose | Nostril size | Hispanic/Latin American | [161] |
| | | | Segment 54 | African | [163] |
| 2 | PAX3 | Nose | Nasion position | Hispanic/Latin American | [158] |
| | PAX3, RPL23AP28 | | Segment 11 | European | [169] |
| 2 | PAX7 | Nose | Columella inclination | Hispanic/Latin American | [161] |
| | | | Segment 11 | European | [169] |
| 2 | PKHD1 | Nose | Segment 11 | European | [169] |
| | PKHD1, FTH1P5 | | Segment 22 | European | [169] |
| 2 | PRDM16 | Nose | Nose roundness 1 | Hispanic/Latin American | [161] |
| | | | Nose size | Hispanic/Latin American | [161] |
| 2 | RUNX2, SUPT3H | Nose | Nose bridge breadth | Hispanic/Latin American | [158] |
| | | | Nose morphology measurement | East Asian | [160] |
| 2 | LINC00620 | Mouth | Mouth morphology measurement | European | [165] |
| | | | Lower lip height | European | [154] |
| 2 | LINC02820, RASSF9 | Mouth | Segment 30 | African | [163] |
| | | | Segment 9 | European | [169] |
| 2 | NAPB | Mouth | Factor 15, philtrum width | European | [165] |
| | | | Factor 15, philtrum width | European | [165] |
| 2 | PCCA | Mouth | Factor 5, width of mouth relative to central midface | European | [165] |
| | | | Factor 5, width of mouth relative to central midface | European | [165] |
| 2 | NAV3 | Mouth | Mouth morphology measurement | European | [165] |
| | | | Segment 35 | European | [169] |
| 2 | NHP2P2, HOXA1 | Mouth | Segment 9 | European | [169] |
| | | | Philtrum width | European | [171] |
| 2 | SACM1L | Mouth | Factor 5, width of mouth relative to central midface | European | [165] |
| | | | Labial fissure width | European | [154] |
| 2 | SDK1 | Mouth | Factor 5, width of mouth relative to central midface | European | [165] |
| | | | Factor 5, width of mouth relative to central midface | European | [165] |
| 2 | STXBP5-AS1 | Mouth | Lip protrusion | Hispanic/Latin American | [161] |
| | | | Lower lip protrusion | Hispanic/Latin American | [161] |
| 2 | LINC01117 | Chin/Lower face | Chin dimples | European | [162] |
| | | | Segment 24 | European | [169] |
| 2 | LINC01965 | Chin/Lower face | Chin dimples | European | [162] |
| | LINC01965, AHCYP3 | | Jaw slope 2 | Hispanic/Latin American | [161] |
| 2 | CPED1 | Chin/Lower face | Jaw protrusion 2 | Hispanic/Latin American | [161] |
| | CPED1 | | Jaw protrusion 5 | Hispanic/Latin American | [161] |
| 2 | RNU7-147P, PLCL1 | Chin/Lower face | Chin dimples | European | [162] |
| | | | Segment 53 | European | [169] |
| 2 | TNFSF12, TNFSF12-TNFSF13 | Chin/Lower face | Segment 26 | European | [169] |
| | | | Chin dimples | European | [162] |
| 2 | SEM1 | Chin/Lower face | Chin dimples | European | [162] |
| | | | Segment 54 | European | [169] |
| 2 | ADAM15 | Forehead | Segment 41 | African | [163] |
| | | Chin/Lower face | Chin dimples | European | [162] |
| 2 | ADGRL4 | Face | Factor 9, facial height related to vertical position of nasion | European | [165] |
| | | Mouth | Factor 5, width of mouth relative to central midface | European | [165] |
| 2 | CLYBL | Eye | Segment 59 | African | [163] |
| | | Mouth | Factor 5, width of mouth relative to central midface | European | [165] |
| 2 | DENND1B | Face | Factor 4, facial height related to vertical position of gnathion | European | [165] |
| | | Chin/Lower face | Chin morphology | East Asian | [160] |
| 2 | HDAC9 | Nose | Columella inclination | Hispanic/Latin American | [161] |
| | | Mouth | Mouth morphology measurement | European | [165] |
| 2 | KCNQ1 | Face | Factor 13, vertical position of alar curvature relative to upper lip | European | [165] |
| | KCNQ1, KCNQ1OT1 | Mouth | Segment 9 | European | [169] |
| 2 | LINC01376 | Nose | Segment 22 | European | [169] |
| | LINC01376 | Chin/Lower face | Segment 24 | European | [169] |
| 2 | MN1 | Face | Middle facial depth | European | [154] |
| | | Eye | Factor 8, orbital inclination due to vertical and horizontal position of exocanthion | European | [165] |
| 2 | PRRX1, GORAB | Chin/Lower face | Segment 51 | European | [169] |
| | PRRX1, MROH9 | Mouth | Segment 9 | European | [169] |
| 2 | RAD51B | Nose | Nose size | European | [162] |
| | RAD51B | Mouth | Segment 17 | European | [169] |
| 2 | RN7SL720P, BNC2 | Nose | Columella size | Hispanic/Latin American | [161] |
| | | Chin/Lower face | Chin dimples | European | [162] |
| 2 | RPS27AP14, DMRT2 | Face | Factor 9, facial height related to vertical position of nasion | European | [165] |
| | | Nose | Nose size | European | [162] |
| 2 | TBX3, UBA52P7 | Eye | Segment 14 | African | [163] |
| | | Nose | Segment 5 | European | [169] |
| 2 | TMEM74 | Mouth | Factor 6, height of vermillion lower lip | European | [165] |
| | TMEM74, EMC2 | Nose | Segment 10 | European | [169] |
| 3 | VPS13B | Nose | Columella size | Hispanic/Latin American | [161] |
| | | | | East Asian | [160] |
| | | | Nasolabial angle | East Asian | [146] |
| 3 | LSP1 | Mouth | Lip thickness 1 | Hispanic/Latin American | [161] |
| | | | Lower lip thickness 1 | Hispanic/Latin American | [161] |
| | | | Lower lip thickness 2 | Hispanic/Latin American | [161] |
| 3 | WARS2 | Mouth | Lower lip thickness 2 | Hispanic/Latin American | [161] |
| | | | Lip thickness ratio 1 | Hispanic/Latin American | [161] |
| | | | Lip thickness ratio 2 | Hispanic/Latin American | [161] |
| 3 | BMP7 | Nose | Segment 23 | European | [169] |
| | | Nose | Nose size | European | [162] |
| | | Mouth | Factor 5, width of mouth relative to central midface | European | [165] |
| 3 | C17orf67 | Face | Lower facial depth | European | [154] |
| | C17orf67 | Eye | Factor 8, orbital inclination due to vertical and horizontal position of exocanthion | European | [165] |
| | C17orf67, NOG | Mouth | Segment 38 | European | [169] |
| 3 | CRYGFP, MEAF6P1 | Mouth | Factor 17, height of vermillion upper lip | European | [165] |
| | CRYGGP | Face | Cheek morphology partial-least-square model | East Asian + Admixed Ancestry | [168] |
| | CRYGGP | Eye | Factor 8, orbital inclination due to vertical and horizontal position of exocanthion | European | [165] |
| 3 | DLGAP1 | Eye | Upper eyelid sagging severity | European | [170] |
| | | Eye | Upper eyelid sagging severity | European | [170] |
| | | Mouth | Factor 6, height of vermillion lower lip | European | [165] |
| 3 | MAGEF1, EPHB3 | Eye | Upper eyelid sagging severity | European | [170] |
| | | Nose | Segment 5 | European | [169] |
| | | Nose | Nose size | European | [162] |
| 3 | SMG6 | Forehead | Forehead protrusion 1 | Hispanic/Latin American | [161] |
| | | Forehead | Upper forehead slant angle | East Asian | [146] |
| | | Chin/Lower face | Segment 51 | European | [169] |
| 3 | THSD4 | Eye | Right eye tail length | East Asian | [146] |
| | | Eye | Outercanthal width | East Asian | [146] |
| | | Chin/Lower face | Segment 24 | European | [169] |
| 5 | Y_RNA | Face | Factor 13, vertical position of alar curvature relative to upper lip | European | [165] |
| | Y_RNA, ARHGAP15 | Chin/Lower face | Chin dimples | European | [162] |
| | Y_RNA, CFAP20 | Eye | Factor 14, intercanthal width | European | [165] |
| | Y_RNA, MED13 | Face | Factor 4, facial height related to vertical position of gnathion | European | [165] |
| | Y_RNA, RPL35AP3 | Mouth | Factor 6, height of vermillion lower lip | European | [165] |
| 5 | SFRP2, DCHS2 | Nose | Nose roundness 1 | Hispanic/Latin American | [161] |
| | | | Nose roundness 3 | Hispanic/Latin American | [161] |
| | | | Nostril size | Hispanic/Latin American | [161] |
| | | | Segment 27 | African | [163] |
| 5 | SLC24A2, MLLT3 | Nose | Segment 48 | African | [163] |
| | SLC24A5 | Nose | Nose roundness 3 | Hispanic/Latin American | [161] |
| | SLC24A5 | Mouth | Lip thickness 1 | Hispanic/Latin American | [161] |
| | SLC24A5 | Mouth | Lower lip thickness 1 | Hispanic/Latin American | [161] |
| | SLC24A5 | Mouth | Lower lip thickness 2 | Hispanic/Latin American | [161] |
| 5 | SUPT3H | Forehead | Forehead protrusion 1 | Hispanic/Latin American | [161] |
| | SUPT3H | Nose | Nose morphology measurement | East Asian | [160] |
| | SUPT3H | Nose | Nose morphology measurement | East Asian | [160] |
| | SUPT3H | Chin/Lower face | Chin dimples | European | [162] |
| | SUPT3H, CDC5L | Nose | Segment 23 | European | [169] |
| 6 | CRB1 | Face | Factor 4, facial height related to vertical position of gnathion | European | [165] |
| | | Mouth | Lip protrusion | Hispanic/Latin American | [161] |
| | | Mouth | Lower lip protrusion | Hispanic/Latin American | [161] |
| | | Chin/Lower face | Chin protrusion 1 | Hispanic/Latin American | [161] |
| | | Chin/Lower face | Chin protrusion 2 | Hispanic/Latin American | [161] |
| | | Chin/Lower face | Chin dimples | European | [162] |
| 6 | GCC2 | Mouth | Lip protrusion | Hispanic/Latin American | [161] |
| | | Mouth | Lower lip protrusion | Hispanic/Latin American | [161] |
| | | Chin/Lower face | Jaw protrusion 2 | Hispanic/Latin American | [161] |
| | | Chin/Lower face | Jaw protrusion 5 | Hispanic/Latin American | [161] |
| | | Chin/Lower face | Jaw slope 2 | Hispanic/Latin American | [161] |
| | | Chin/Lower face | Lower face flatness | Hispanic/Latin American | [161] |
| 7 | CASC17 | Nose | Columella inclination | Hispanic/Latin American | [161] |
| | | Nose | Nose roundness 1 | Hispanic/Latin American | [161] |
| | | Nose | Nose size | Hispanic/Latin American | [161] |
| | | Nose | Segment 5 | European | [169] |
| | | Nose | Nasal tip protrusion | East Asian | [146] |
| | | Nose | Profile nasal area | East Asian | [146] |
| | | Chin/Lower face | Chin dimples | European | [162] |
| 8 | ROCR | Nose | Profile nasal angle | East Asian | [146] |
| | ROCR | Nose | Profile nasal angle | East Asian | [146] |
| | ROCR | Nose | Nasal tip protrusion | East Asian | [146] |
| | ROCR | Nose | Nasolabial angle | East Asian | [146] |
| | ROCR | Nose | Nasal tip protrusion | East Asian | [146] |
| | ROCR | Nose | Nasolabial angle | East Asian | [146] |
| | ROCR | Nose | Nose size | European | [162] |
| | ROCR, LINC01152 | Nose | Segment 44 | European | [169] |
| 9 | MTX2, RPSAP25 | Eye | Right eye tail length | East Asian | [146] |
| | | Eye | Eye morphology | East Asian | [160] |
| | | Eye | Tangent line angle of er4 | East Asian | [146] |
| | | Eye | Right eyelid peak position ratio | East Asian | [146] |
| | | Eye | Tangent line angle of el3 | East Asian | [146] |
| | | Eye | Tangent line angle of el4 | East Asian | [146] |
| | | Eye | Tangent line angle of el6 | East Asian | [146] |
| | | Eye | Tangent line angle of er3 | East Asian | [146] |
| | | Mouth | Mouth morphology | East Asian | [160] |