Review

Facial Scanners in Dentistry: An Overview

Jason D. Lee, Olivia Nguyen, Yu-Chun Lin, Dianne Luu, Susie Kim, Ashley Amini and Sang J. Lee
Department of Restorative Dentistry and Biomaterials Sciences, Harvard School of Dental Medicine, Boston, MA 02115, USA
* Author to whom correspondence should be addressed.
Prosthesis 2022, 4(4), 664-678; https://doi.org/10.3390/prosthesis4040053
Submission received: 18 October 2022 / Revised: 8 November 2022 / Accepted: 8 November 2022 / Published: 15 November 2022
(This article belongs to the Special Issue Feature Review Papers for Prosthesis)

Abstract

Purpose: This narrative review aims to explore the current status of facial scanning technology in the dental field, outlining the history, mechanisms, and current evidence regarding its use and limitations within digital dentistry. Methods: Subtopics within facial scanner technology in dentistry were identified and divided among four reviewers. Electronic searches of the Medline (PubMed) database were performed with the following search terms: facial scanner, dentistry, prosthodontics, virtual patient, sleep apnea, maxillofacial prosthetics, accuracy. Only studies or review papers evaluating facial scanning technology for dental or medical applications were included; a total of 44 articles met these criteria. Because of the narrative nature of this review, no formal evidence-based quality assessment was performed, and the search was limited to the English language. No further restrictions were applied. Results: The significance, applications, limitations, and future directions of facial scanning technology were reviewed. Specific subtopics include the history of facial scanner use and development in dentistry, the different types and mechanisms of facial scanning technology, scanner accuracy, use as a diagnostic tool, creation of a virtual patient, virtual articulation, smile design, diagnosis and treatment of obstructive sleep apnea, limitations of the technology, and future directions with artificial intelligence. Conclusions: Despite limitations in scan quality and software operation, 3D facial scanners are rapid and non-invasive tools that can be utilized in multiple facets of dental care. Facial scanners can serve an invaluable role in the digital workflow by capturing facial records to facilitate interdisciplinary communication, virtual articulation, smile design, and obstructive sleep apnea diagnosis and treatment. Looking into the future, facial scanning technology has promising applications in the fields of craniofacial research and prosthodontic diagnosis and treatment planning.

1. Introduction: Significance of Facial Scanners in Dentistry

The continuous advancement of digital technologies has led to major innovations in dental techniques and workflows. The use of intraoral scanners and CAD/CAM technology for the restoration of teeth and dental implants has become commonplace. More recently, facial scanners have found a foothold in the digital dental workflow. This technology uses optical scanning techniques to digitally capture and present a detailed three-dimensional representation of a subject's face and head. Subsequently, the data can be utilized for a multitude of analyses, including patient treatment planning, diagnosis, and communication.
Despite the recent resurgence of facial scanning technology in dentistry, facial scans have a relatively long history of applications in the dental field, some of which are highlighted in Figure 1. In 1991, Moss and colleagues became the first to use a 3D laser scanning system routinely in a clinical setting, monitoring the growth of children with facial deformities at the Department of Orthodontics at University College in London [1]. Soon after, in 1995, Cacou and colleagues combined electromyography and laser scanning technology to study facial muscle function. In 2009, Lee et al. used features of craniofacial surface structures from digital photographs to predict obstructive sleep apnea [2]. As the technology continues to mature, facial scanning systems will become more accessible and affordable in the dental field, inviting novel applications.
Three-dimensional visualization for diagnostics and treatment planning is crucial for prosthetic rehabilitations. Facial scanning technology facilitates the collection and analysis of pretreatment and ongoing clinical data. Creating a virtual patient by combining image files from a facial scanner, an intraoral scanner, and CBCT can streamline diagnostic procedures, allow accurate analysis of the patient, and serve as an efficient communication tool to relay information to the laboratory technician and to simulate treatment outcomes for the patient. Facial scanning can also be used in esthetic dentistry as an integral part of the digital smile design workflow [3]. Utilizing a facial scanner to capture the natural rest position, maximum smile, and different smile lines allows for a virtual tooth set-up on a 3D model to digitally evaluate tooth positions, forms, and colors against the face, in immediate dentures as well [4]. Orthognathic surgery using computer-aided surgical simulation is possible by merging the patient's CBCT DICOM file and intraoral scan STL file. Virtual orthognathic surgical simulation allows for precise and crucial pre-operative planning.
Although digital technology and facial scanners have contributed to the advancement of dentistry, their slow adoption into mainstream practice is mostly due to novelty and cost. Understanding the ways in which facial scanning can be incorporated within the dental workflow can help practitioners become more comfortable with implementing this technology. This review aims to explore the current types of facial scanners and elucidate their various applications within dentistry.

1.1. Scanning Mechanisms

A list of currently available facial scanners and software can be found in Table 1. There are four scanning methods utilized by facial scanners: photogrammetry, stereophotogrammetry, structured light scanning, and laser scanning. These mechanisms fall into two methodologies: passive and active. In the passive methods (photogrammetry and stereophotogrammetry), a patient's face is reconstructed from two or more photographs. In the active methods (laser beam and structured light), 3D sensors capture projected light patterns via active triangulation. Regardless of the scanning method, all four approaches are advantageous because they are noninvasive, accurate, and reproducible [5].
Photogrammetry, a technology originally developed for topographic mapping, obtains information from multiple photographs that are then stitched into a 3D image. Similarly, in stereophotogrammetry, a 3D object is created by photographing the object from two different planes, which allows for calculation of distance, surface area, and volume [6]. However, stereophotogrammetry struggles to accurately capture hair and reflective or shiny skin surfaces, a capability that would be especially useful in maxillofacial prosthetics. As depicted in Figure 2, stereophotogrammetry requires a dedicated space to set up two or more cameras at different viewpoints surrounding the subject, as well as a computer [7]. Because it involves multiple photographs, this method is sensitive to light and requires specific software calibrated to render the scan. Adequate lighting is necessary, since a lack of light or excessive ambient light can distort the captured images. After completion of the scan, a computer algorithm combines the multiple photos to form a 3D face model.
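At the core of both photogrammetry and stereophotogrammetry is the triangulation of each facial landmark from its pixel coordinates in two or more calibrated photographs. The sketch below is a minimal, illustrative example of that single step; the camera matrices, pixel coordinates, and 150 mm baseline are hypothetical values, not parameters of any commercial system.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two calibrated views.

    P1, P2 : 3x4 camera projection matrices (intrinsics @ [R | t]).
    x1, x2 : (u, v) pixel coordinates of the same facial landmark in each photo.
    Returns the 3D point in the shared world frame.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous solution: right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical setup: two identical cameras 150 mm apart, focal length 1000 px.
K = np.array([[1000, 0, 640], [0, 1000, 480], [0, 0, 1]], float)
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-150.0], [0.0], [0.0]])])
point_3d = triangulate_point(P1, P2, (700, 500), (550, 500))
print(point_3d)  # approximate landmark position in mm, here about (60, 20, 1000)
```

A full photogrammetry pipeline additionally estimates the camera poses and matches thousands of such points automatically, but the underlying geometric principle is the same.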
A structured light scanner uses trigonometric triangulation to capture light patterns and obtain the 3D shape of the surface of a subject's face. This mechanism requires a projector that casts a pattern of light onto the object and a calibrated camera that captures the projection [8], as seen in Figure 3. Most structured light scanners use either blue or white light; however, white light scanners are being phased out because blue light, with its shorter wavelength, is less prone to reflection and therefore allows for a more accurate scan. The light is registered from various angles, and a 3D mesh can be computed based on the displacement of the light pattern. Advantages of structured light technology include its speed, accuracy, and reproducibility. However, this method is sensitive to lighting conditions, as additional ambient light can distort the scan.
Laser scanning utilizes similar technology to structured light scanning by capturing the reflection of a laser from the object being scanned. A camera detects the geometry of the cast laser beam and computes the distance and shape of the illuminated surface in three dimensions [9]. For a complete 3D facial scan, the laser scanner requires multiple, consecutive scans with the subject rotated into different positions, as seen in Figure 4. Laser scanning is highly accurate, although its accuracy can be affected by additional light sources, and it is slower than structured light scanning [10].
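Both structured light and laser scanners recover depth by the same active-triangulation geometry: the light source and the camera sit a known baseline apart, and the angle at which the reflected light returns encodes how far away the surface is. The snippet below works through that relationship with purely hypothetical numbers (a 120 mm baseline and a 75 degree projection angle); it is a geometric illustration, not the calibration model of any particular scanner.

```python
import numpy as np

def triangulated_depth(baseline_mm, emitter_angle_deg, camera_angle_deg):
    """Depth of a surface point lit by one projected stripe or laser spot.

    baseline_mm       : distance between the light source and the camera.
    emitter_angle_deg : angle of the outgoing ray, measured from the baseline.
    camera_angle_deg  : angle of the returning ray seen by the camera,
                        also measured from the baseline.
    """
    a = np.tan(np.radians(emitter_angle_deg))
    c = np.tan(np.radians(camera_angle_deg))
    return baseline_mm * a * c / (a + c)

# Hypothetical numbers: 120 mm baseline, 75 degree projection angle.
# Small changes in the observed camera angle map to measurable depth changes,
# which is how displacement of the light pattern encodes facial topography.
for cam_angle in (70.0, 71.0, 72.0):
    print(cam_angle, round(triangulated_depth(120, 75, cam_angle), 1), "mm")
```

Note that a one-degree shift in the observed angle changes the computed depth by several millimetres in this toy setup, which is one reason ambient light that blurs the detected pattern degrades accuracy.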

1.2. Accuracy of Facial Scanners

Facial scanning accuracy has proven to be clinically acceptable for use in dental applications. Accuracy is measured by comparing the scan to a control model and measuring the deviations between the two. Scanners with deviation values less than 2 mm are considered acceptable [11]. Deviation values for facial scanners range between 140 and 1330 μm. Reported accuracy for most facial scanners was approximately 500 μm, which is within the limits for clinical acceptability. However, scanner accuracy can be influenced by factors such as the scanner technology used, object shape, and scanning depth and speed.
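In practice, the deviation figures quoted above come from aligning the facial scan to a reference model and measuring point-to-surface (or point-to-nearest-point) distances. The sketch below, which assumes the two point clouds have already been registered in a common coordinate system, shows one simple way such summary statistics could be computed; the function name and the choice of metrics are illustrative rather than taken from any of the cited studies.

```python
import numpy as np
from scipy.spatial import cKDTree

def scan_deviation(scan_points, reference_points):
    """Unsigned deviation of a facial scan from a reference model.

    Both inputs are N x 3 arrays of surface points (e.g., exported mesh vertices), in mm,
    assumed to be pre-aligned. Returns mean, RMS, and 95th-percentile deviations,
    metrics commonly reported in trueness studies.
    """
    tree = cKDTree(reference_points)
    distances, _ = tree.query(scan_points)  # nearest reference point for every scan point
    return {
        "mean_mm": float(distances.mean()),
        "rms_mm": float(np.sqrt((distances ** 2).mean())),
        "p95_mm": float(np.percentile(distances, 95)),
    }

# Hypothetical usage with two vertex arrays loaded from exported meshes:
# report = scan_deviation(scan_vertices, reference_vertices)
# print(report, "clinically acceptable" if report["mean_mm"] < 2.0 else "outside threshold")
```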
Zhao et al. determined and compared the accuracy of scanning technology in stereophotogrammetry, white (structured) light scanners, magnetic resonance imaging, and infrared scanners [12]. Stereophotogrammetry was found to be more accurate than magnetic resonance imaging (MRI) and infrared (IR) scanners. Stereophotogrammetry and structured light scanners had no significant difference in accuracy. Both stereophotogrammetry and structured light scanning were deemed acceptable for clinical use with accuracy of 0.58 ± 0.11 mm and 0.57 ± 0.07 mm, respectively. The mean deviation for a stereophotogrammetric facial scan was reported to be 0.15 ± 0.02 mm [13]. In another study which compared the accuracy of MRI with both a stereophotogrammetry and structured light 3D scanner, the percentage of data points that were reported within a range of 2 mm were 86%, 94%, and 90%, respectively [14].
Object shape also influences the accuracy of facial scan. Studies have shown that concave surfaces, such as the lower face, were less accurate compared to flat and convex surfaces of the upper face [12]. Areas with undercuts, such as the labiofacial sulcus, oral commissures, and oral fissure, had increased scanning difficulty and decreased scanning accuracy. The area of the face with the best precision and best comprehensive accuracy performance was the midface, in comparison to the upper face and the lower face. Other landmarks that were more difficult to capture included the teeth, extra-auditory canal, and nostril [10].
Other factors that affect accuracy are scanning depth and speed. In a study testing accuracy in relation to depth and speed of four different facial optical scanners, it was reported that the structured light scanner, Einscan Pro2x Plus (Shining 3D Tech. Co., Ltd., Hangzhou, China), had the best performance at or less than 2 mm depth in comparison to the EinScan Pro (Shining 3D Tech. Co., Ltd., Hangzhou, China), Planmeca ProMax 3D MiD (Planmeca USA, Inc., Hoffman Estates, IL, USA), and iPhoneX (Apple Store, Cupertino, CA, USA) [10]. Measurements of areas with greater depth than 2 mm showed less accuracy. This inaccuracy may be due to a failure in passing light into deep areas while scanning. Closer scanning techniques were shown to provide more accuracy in the scans. As scanning length increased, the accuracy of the scan decreased. Accurate scanning lengths varied between the scanners studied, with the greatest length measuring 150 mm for the Einscan Pro2x Plus (Shining 3D Tech. Co., Ltd., Hangzhou, China). By reducing the span of scanning and, if possible, ensuring that there are minimal surface irregularities, accuracy can be maintained. Additionally, the effect of scanning speeds of each scanner on accuracy were taken into account. Scanners with quicker scan speed, iPhoneX (Apple Store, Cupertino, CA, USA) at 0.57 min and Planmeca ProMax 3D MiD (Planmeca USA, Inc., Hoffman Estates, IL, USA) at 0.7 min, had lower accuracy than scanners with longer scan times, the EinScan Pro (Shining 3D Tech. Co., Ltd., Hangzhou, China) at 6.7 min and EinScan Pro2x Plus (Shining 3D Tech. Co., Ltd., Hangzhou, China) at 9.4 min. Rapid scanning capability may be possible by compromising on the accuracy and resolution of the scanned data file.

2. Applications of Facial Scanners in Dentistry

2.1. Diagnostic Records and the Virtual Patient

A detailed collection of patient information, history, and records is required in order to form predictable treatment plans and execute successful dental treatment. Facial scanners have the potential to digitize and replace conventional extraoral records, the analog facebow, occlusal analysis, and diagnostic wax-ups. In this section we review the progress made in obtaining a facebow record, tracking the mandible, and virtually designing a smile, all of which can be used to create a virtual patient. Information provided by this virtual patient allows the practitioner to digitally plan treatment across multiple facets of dentistry, such as prosthetic and implant rehabilitation, smile design, orthognathic surgery, and maxillofacial prosthodontics. This system provides information that facilitates digital communication between providers, patients, and laboratory technicians in order to achieve a more predictable end result.
Routine digital radiography and photography provide standard diagnostic information critical to the development of an accurate diagnosis and treatment plan. With the advent of facial scanners, additional extraoral tissue and facial structure data can be acquired and superimposed to form a 3D virtual model of the patient. A virtual patient is created by merging digital diagnostic records such as a face scan, virtual facebow, intraoral scan, and cone-beam computed tomography (CBCT) via "best fit" analysis, in which common landmarks are identified in the different sets of scan data and superimposed to align the files, as seen in Figure 5 [15]. However, each imaging modality exists within its own operating system: CBCT images are stored in the Digital Imaging and Communications in Medicine (DICOM) format, facial scans as object (OBJ) files, and intraoral scans typically as stereolithography (STL) files. Combining these pieces of information can be complicated.
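The "best fit" step is, at its simplest, a rigid registration that finds the rotation and translation bringing corresponding landmarks (or matched surface points) from one file into the coordinate frame of another. Below is a minimal sketch of such a landmark-based alignment using the standard Kabsch/SVD solution; the function names are illustrative, and commercial planning software typically refines this initial fit with surface-based matching such as iterative closest point.

```python
import numpy as np

def best_fit_rigid(source_landmarks, target_landmarks):
    """Rigid (rotation + translation) alignment of corresponding landmarks (Kabsch method).

    source_landmarks, target_landmarks : N x 3 arrays of the same anatomical points,
    e.g., picked on the facial-scan OBJ and on the CBCT-derived surface.
    Returns a 4 x 4 homogeneous transform mapping source coordinates into the target frame.
    """
    src_c = source_landmarks.mean(axis=0)
    tgt_c = target_landmarks.mean(axis=0)
    H = (source_landmarks - src_c).T @ (target_landmarks - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def registration_error_mm(T, source_landmarks, target_landmarks):
    """Residual landmark distances (mm) after alignment, i.e., the registration error."""
    src_h = np.hstack([source_landmarks, np.ones((len(source_landmarks), 1))])
    aligned = (T @ src_h.T).T[:, :3]
    return np.linalg.norm(aligned - target_landmarks, axis=1)
```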
A literature review by Mangano et al. looked at 25 studies that analyzed the matching of CBCT, facial scanner, intraoral scanner, and desktop scanner [16]. Nine studies combined information regarding a CBCT and intraoral scanner, another nine studies combined information from a CBCT and facial scanner, four studies looked at facial scanners and intraoral scanners, and lastly only three studies combined all three (CBCT, facial scanner, and intraoral scanner). Merging was accomplished through related points, surfaces, and voxels. Creation of a “virtual dentate patient” under static conditions is feasible; however, the procedure to combine all the data into a single entity is complex and a single device that can combine the different source files has yet to be developed.
Another approach to combining scan data was proposed by Ayoub et al., who superimposed a stereophotogrammetric facial scan onto a CT scan of the patient's underlying bone by converting both data sets into virtual reality modelling language (VRML) [17]. Ten landmarks were chosen on both surfaces to facilitate superimposition: the right and left inner canthus, right and left outer canthus, nose, right and left alar cartilage, right and left cheilion, and pronasale. The study reported registration errors within ±1.5 mm between the aligned surfaces.
Joda and Gallucci discussed a technique of superimposing DICOM, OBJ, and STL files by utilizing a landmark consistent across all three data sets [15]. In dentistry, the patient's existing dentition is a common landmark used to overlay the three files and create a 3D virtual patient under static conditions. This allows for the simulation of treatment plans, management of patient expectations, facilitation of effective communication with patients and colleagues, and non-invasive acquisition of clinical documentation. However, a common concern remains that no single application can presently process all three file types to generate the virtual patient. Joda and Gallucci also suggested developing a commercially available system that is able to incorporate patient facial movements and fuse them onto DICOM, STL, and OBJ files to create a 4D virtual patient.
Developing a 4D virtual patient would require incorporating virtual articulation data. Virtual facebows would be used to orient digital models within the virtual space with respect to a reference plane and then the models articulated with respect to each other. Different methods reported in literature for transferring the virtual facebow include: facial scanning with reference points, photographs, and stereophotogrammetry. Virtual facebow and articulators are advantageous in that they circumvent common issues experienced with conventional manual articulating techniques such as material distortion, errors during orientation and positioning, and difficulties simulating patient data in 3D [18]. Elimination of these challenges could increase efficiency and decrease complications at the time of prosthesis delivery.
Lam et al. described a technique for mounting scanned data on a virtual articulator by registering the patient's horizontal plane in the natural head position [19]. A 3D facial scan was taken of the patient in this position and oriented using a reference board to establish X, Y, and Z axes. The teeth were scanned intraorally in maximum intercuspation with an intraoral scanner (True Definition scanner; 3M ESPE, St. Paul, MN, USA). An additional 3D facial scan was taken with a facebow to register the relation of the maxilla. Markers on the transfer device were used as reference points to align the outside of the face to the maxilla, and the dental arch was correctly positioned relative to the face. CAD-CAM software (Exocad GmbH, Darmstadt, Germany) oriented the virtual patient model to the virtual articulator. After calibration, the horizontal plane was registered to the virtual model and aligned to the virtual articulator at the transverse horizontal axis. Deviations in this proposed method were less than 1 degree and 1 mm for five repeated measurements on the same patient. This technique had repeatable results and is a good alternative to using a CBCT to register the position of the maxilla, circumventing radiation exposure. A digitized facial scan also provides abundant reference points, especially in edentulous cases or any case in which reference points are minimal and technicians must otherwise rely on analog reference points provided by the dentist. The technician may place lines and planes on the "digital skull" to correctly determine the Frankfort horizontal plane, Camper's line, and the patient's horizontal line, among others, to use as occlusal plane reference points [20].
In a series of studies by Solaberrieta et al., various methods for relating models via virtual articulation were proposed [21]. In 2015, their study used an extraoral light scanner (ATOS Compact Scan 5M; GOM mbH, Braunschweig, Germany) to locate six reference points on the head and jaws of the patient through a device worn on the patient's head. Points 1–3 were located on the horizontal plane and points 4–6 on the patient's occlusal plane. These were then directly transferred to the virtual articulator with reverse engineering software (Geomagic Design X; 3D Systems, Littleton, CO, USA). The position of the maxilla was compared with results from a conventional pantograph. Although an advantage of this proposed methodology was that it did not require a physical articulator or facebow, deviation values suggested that the procedure be improved before use in orthognathic or restorative treatment.
Another technique presented by Solaberrieta et al. used an intraoral scanner, digital camera, and reverse engineering software to virtually locate casts onto a virtual articulator [22]. Intraoral scans created digital casts of the maxilla and mandible. Adhesive points were applied directly to the patient’s face with two points next to both temporomandibular joints and a third point on the infraorbital point. A facebow fork which included an elastomeric impression was placed into the subject’s mouth. Photographs were taken of the face to obtain a 3D image of the head with target points related to the facebow. The impression and facebow fork were scanned. Using reverse engineering software, the facebow fork was aligned to the maxillary jig. This proposed method allowed for the location of a terminal transverse hinge axis through the use of posterior landmarks near the temporomandibular joints.
More recently, Inoue et al. compared a novel method of mounting virtual casts using landmarks from facial scans, to virtual casts mounted using average values, and those mounted based on a traditional facebow [23]. The Bergstrom points and orbitale were marked on the patient prior to capturing the facial scan. The facial scans were then merged with the intraoral scans using a target bitefork (SNAP; Degrees of Freedom Inc., Marlboro, VT, USA), and the previously marked facial landmarks were used to align this file to the virtual articulator. They compared positional differences between the mounted virtual casts in the different groups and found no significant difference in the mounting accuracy when comparing the facial scanner and the traditional facebow groups. However, a statistically significant difference was found in the virtually mounted casts using average values to those mounted with a facebow. They concluded that virtual mounting using landmarks from a facial scan could be a more accurate option for virtual articulation versus the use of average values.
An accurate diagnostic reproduction of a patient's mandibular movements is important for the fabrication of prostheses without occlusal interferences. A factor to consider in eliminating interferences with virtual articulation is precise registration of the patient's sagittal condylar inclination (SCI). Hong and Noh used the Christensen method of measuring SCI with a protrusive bite record captured by a facial scan and an intraoral scan [24]. A custom bite registration device was 3D printed with round markers for the subject to bite on, while additional markers were placed on the infraorbital point and on the left and right Beyron points to record the hinge axis. A facial scan was captured using a mobile phone with a dedicated application. The intraoral scan was taken with the patient in the maximal intercuspal position and in the protrusive interocclusal position. The scans were superimposed in a 3D image analysis software (Geomagic Control X; 3D Systems, Luxembourg, Germany), and the Frankfort horizontal plane and midsagittal plane were established. The angle between a line projected onto the midsagittal plane and the Frankfort horizontal plane was measured to obtain the SCI. With this technique, incorporation of the SCI within the virtual articulator enables customization of the occlusal surfaces of the designed prosthesis to prevent interferences.
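Geometrically, the SCI obtained this way is the angle between the condylar displacement vector (from maximal intercuspation to protrusion) and the Frankfort horizontal plane, after that vector has been projected onto the midsagittal plane. The short sketch below works through the vector math with made-up marker coordinates in an idealized coordinate frame; it illustrates the calculation only and is not the cited authors' software workflow.

```python
import numpy as np

def sagittal_condylar_inclination(marker_mip, marker_protrusive, fh_normal, midsagittal_normal):
    """Angle (degrees) between the protrusive condylar path and the Frankfort horizontal plane.

    marker_mip, marker_protrusive : 3D coordinates of the same condylar (Beyron) marker
                                    in maximal intercuspation and in protrusion.
    fh_normal                     : unit normal of the Frankfort horizontal plane.
    midsagittal_normal            : unit normal of the midsagittal plane.
    """
    path = np.asarray(marker_protrusive, float) - np.asarray(marker_mip, float)
    # Project the condylar path onto the midsagittal plane.
    path_sag = path - np.dot(path, midsagittal_normal) * midsagittal_normal
    # Angle between a vector and a plane = arcsin of its normalized component along the normal.
    sin_angle = abs(np.dot(path_sag, fh_normal)) / np.linalg.norm(path_sag)
    return np.degrees(np.arcsin(np.clip(sin_angle, 0.0, 1.0)))

# Hypothetical coordinates (mm) in a frame where the FH normal is +z and the midsagittal normal is +x:
sci = sagittal_condylar_inclination(
    marker_mip=(60.0, 0.0, 0.0),
    marker_protrusive=(59.0, 6.0, -4.0),
    fh_normal=np.array([0.0, 0.0, 1.0]),
    midsagittal_normal=np.array([1.0, 0.0, 0.0]),
)
print(round(sci, 1), "degrees")  # about 33.7 degrees for this example
```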
Facial scanners can also be used to record mandibular movements. In a study assessing occlusal dynamics with a structured light facial scanner, non-reflective targets were placed on the facial surface of the mandibular incisors and used to digitally record the motion of the mandible and temporomandibular joints by merging the motion data with the patient’s CBCT. This proposed technique was suggested for application during the fabrication of dental prostheses or diagnosis of temporomandibular disease [25]. In another digital method to record mandibular movement, real time jaw movements of scanned casts were created by transforming target tracking data of four incisors in the maxilla and mandible. With the further development of techniques for virtually recording mandibular movements, practitioners may have the opportunity to digitally check occlusal contacts during eccentric movement [26] and simulate articular guidance [27].

2.2. Smile Design

In the past, smile design was done in 2D by physically cutting and annotating printed photographs in order to simulate desired final results of treatments. 2D plans would then be converted into a 3D model through an additive wax up, which was transferred to the patient via a physical mock-up [28]. The greatest drawback of such multi-step analog conversions is the risk of introducing error and distortion into the workflow [28]. With advances in technology and digital cameras, digital smile design protocols and systems were created to be used in conjunction with Keynote, PowerPoint, or specialized programs to streamline the design process and make the final results more predictable. In digital workflows with facial scanners, the patient’s facial features are recorded digitally, and the conversions that were once done by hand can be done virtually, eliminating the introduction of those errors [29].
Utilizing a patient's facial scan to create a 3D virtual patient model also allows for faster and better communication within an interdisciplinary team and with the patient. Effective communication between the patient, clinician, and laboratory technician is vital when it comes to esthetics. Since esthetics are highly subjective and dependent on many factors, a virtual model of the patient enables the clinician and patient to collaborate effectively and customize treatment on a digital platform. 3D virtual smile design is also valuable as a non-invasive, fast, and inexpensive tool to motivate patients to accept treatment. Simulating future treatment gives the patient a better opportunity to visualize how the planned dental prosthesis can affect their esthetics.

2.3. Obstructive Sleep Apnea (OSA)

According to an American Heart Association scientific statement, obstructive sleep apnea syndrome is a breathing disorder associated with partial or complete cessation of airflow in the upper airway during sleep, leading to abrupt reductions in oxygen levels to 80–89%, increased heart rate, and elevated risk of stroke. Obesity and craniofacial abnormalities are key predisposing factors for airway collapse, although this varies between ethnicities. Currently, the polysomnography (PSG) test is the gold standard for diagnosing OSA. During the test, the patient undergoes an overnight sleep study with sensors that monitor breathing patterns, heart rate, electrical activity of the heart, brain, and eyes, and body movements. Unfortunately, extensive wait times and costs for PSG leave many patients undiagnosed, posing a public health issue.
Since the 1980s, studies have examined the relationship between craniofacial abnormalities and upper airway collapsibility [30,31,32,33]. Because craniofacial structure correlates with airway collapse, facial scanners have been proposed as a possible alternative diagnostic tool for OSA. 2D photographs have been used to classify OSA, since photos are easily obtained and inexpensive [34,35]. In a study that classified the clinical phenotypes of the disease, facial landmarks were placed on photographs of patients who had completed a PSG, separating those with and without OSA [36]. The recorded landmarks (face width, mandibular length, eye width, and cervicomental angle) were used to distinguish images of normal subjects from OSA subjects with a computer algorithm whose performance could improve with additional training data. Classification accuracy was similar for manual and automatic landmarking, at 69–70%. Further application of landmarks to facial scans that create facial depth maps could generate more information about facial morphology than a simple 2D image [37].
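As a concrete illustration of this kind of landmark-based classification, the sketch below trains a simple logistic-regression model on a handful of craniofacial measurements. The data are synthetic placeholders generated for the example (real studies use PSG-labelled patients), and the choice of scikit-learn and of logistic regression is an assumption for illustration, not the algorithm used in the cited work.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical, synthetic measurements standing in for landmark-derived values
# (face width, mandibular length, eye width, cervicomental angle); in a real study
# these would come from PSG-labelled patients.
n = 200
X = rng.normal(loc=[140.0, 110.0, 30.0, 120.0], scale=[6.0, 5.0, 2.0, 8.0], size=(n, 4))

# Synthetic label rule: wider faces and more obtuse cervicomental angles skew toward OSA.
risk = 0.05 * (X[:, 0] - 140) + 0.04 * (X[:, 3] - 120) + rng.normal(0, 0.5, n)
y = (risk > 0).astype(int)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)  # cross-validated classification accuracy
print(f"mean accuracy: {scores.mean():.2f}")
```

Reported accuracies of around 69–70% with 2D landmarks suggest that such models are better suited to screening than to definitive diagnosis.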
In a recent study, 3D facial scans (3dMDface; 3dMD, Atlanta, GA, USA) were analyzed to predict the presence and severity of OSA through development of a mathematical algorithm for OSA clinical assessment. This mathematical algorithm involved linear measurements, the shortest distance between two points, and geodesic measurements, the shortest distance between two points on a curved surface. They were able to predict OSA with a 91% accuracy when linear and geodesic measurements from the 3D images were merged into one algorithm [38]. This work provided an innovative push towards more stream-lined and accurate diagnostic tools for OSA using facial scanners, in comparison to 2D photographic analysis.
Advancements in OSA treatment have also benefited from facial scanners. One feasibility study fabricated customized continuous positive airway pressure (CPAP) masks and compared their air leak and comfort to commercially designed CPAP masks [39]. A facial scanner (3dMDface; 3dMD, Atlanta, GA, USA) was used to 3D print the mask, and silicone was injected to make the cushion that contacts the skin. In a sample of six subjects, many found the customized mask to be highly comfortable, with air leak rates similar to those of each subject's favored commercially made mask.
Though there are limited studies on the use of facial scanners for OSA patients currently, the technology has significant potential in advancing the field of OSA. Diagnosis of OSA can be substantially improved since PSG is time consuming, expensive, and resource-draining. Developments in OSA diagnostic technology show promise in allowing for easier diagnosis and accuracy in comparison to the current standard. The challenge remains to translate scientific findings to clinical applications, but with OSA being quite underdiagnosed and untreated, there is significant promise for growth in this field [40].

3. Future Applications

To date, only a few published studies in the literature have examined the combination of artificial intelligence (AI) and machine learning (ML) with facial scanners [41,42]. Although AI use in orthognathic treatment is not new, further advancement is currently limited by small sample sizes, a lack of mathematical models, and cumbersome processing methods [42]. There are clear areas, however, in which the strengths of both AI and facial scanners may supplement the final end-use operation in the future.
For example, Jiang’s team developed a novel method to generate a dental arch [43]. It was an attempt to replace or assist the analog method of determining the dental arch, especially in denture teeth set-up and orthodontic wire bending. The coordinates of the “tooth-arrangement robot” were set up to meet the arch characteristics of any patient. While the dental arch is one component of the occlusal plane, there are many other references involved to create an ideal prosthesis including esthetics, function, and facial reference points such as the patient’s horizontal plane, Camper’s plane, and interpupillary line. The collaboration of Jiang’s ML tooth-arrangement robot and a virtual articulator with a facial scan could eventually allow for setting bilaterally balanced denture teeth with the click of a button.
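To make the idea of a digital arch generator concrete, the sketch below fits a smooth arch-form curve to a few occlusal-plane landmark coordinates and samples candidate tooth positions along it. The coordinates, the polynomial arch model, and the 14-position sampling are hypothetical illustrations; this is not Jiang's published method, which additionally handles robot kinematics and synchronized control.

```python
import numpy as np

# Hypothetical occlusal-plane coordinates (mm) of cusp tips / incisal edges picked from a
# virtual patient; x runs left-right, y runs posterior-anterior.
arch_points = np.array([
    [-28.0,  2.0], [-24.0, 12.0], [-18.0, 21.0], [-10.0, 28.0], [-3.5, 31.0],
    [  3.5, 31.0], [ 10.0, 28.0], [ 18.0, 21.0], [ 24.0, 12.0], [28.0,  2.0],
])

# Least-squares fit of a 4th-degree polynomial approximating the arch form y(x).
coeffs = np.polyfit(arch_points[:, 0], arch_points[:, 1], deg=4)
arch = np.poly1d(coeffs)

# Sample candidate tooth positions along the fitted arch for a virtual set-up.
xs = np.linspace(-28, 28, 14)
setup = np.column_stack([xs, arch(xs)])
print(setup.round(1))
```

In a combined workflow, positions sampled from such a curve could be handed to a virtual articulator referenced to the facial scan, which is the collaboration the paragraph above envisions.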
Perhaps most notably, AI has contributed greatly to the digitization of maxillofacial surgery, implant placement [44,45,46], and robot-assisted surgery in head and neck cancer [47]. For example, the Yomi robot (Neocis Inc., Miami, FL, USA) can create an osteotomy and place an implant according to CT scan planning [48]. Similarly, RoboDent (RoboDent GmbH, Ismaning, Germany) placed two implants in a human fully autonomously [49]. In the case of prosthetically driven implant planning and placement, a facial scan would be beneficial where extensive full-mouth work is planned and landmarks for the occlusal plane are limited. Having a virtual patient would provide sufficient facial landmarks to plan the final prosthesis.
In the field of maxillofacial surgery, orthognathic surgery has been performed on a “jawbone skull phantom” patient with a six-axis AI arm to assist the surgeon during surgery based on pre-operative planning from a 3D CT scan [50]. Use of a facial scan within this procedure has the potential to estimate end results for facial soft tissue which can help the surgical planning as well as communicate the treatment plan with the patient.

4. Conclusions

This review article outlines the history, mechanisms, and applications of 3D facial scanners and presents current evidence of their use within the dental field. Although there are some limitations in scan quality and software operation, 3D facial scanners are rapid and non-invasive tools that can aid in achieving accurate and highly esthetic results. Facial scans offer a wide variety of diagnostic applications, including interdisciplinary communication, virtual articulation, smile design, and OSA diagnosis and treatment. Looking into the future, facial scanning technology has promising applications in the fields of craniofacial research and prosthodontic diagnosis, treatment planning, and treatment sequencing. Ultimately, with further technological advancements, improved machine learning, and cost-effective solutions, high-performance facial scanners can be expected to play an ever-increasing role throughout the dental field.

Author Contributions

Conceptualization, S.J.L. and J.D.L.; methodology, S.J.L. and A.A.; data curation, Y.-C.L., D.L., O.N. and S.K.; writing—original draft preparation, Y.-C.L., D.L., O.N. and S.K.; writing—review and editing, J.D.L., S.J.L. and A.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Moss, J.P.; Coombes, A.M.; Linney, A.D.; Campos, J. Methods of three dimensional analysis of patients with asymmetry of the face. Proc. Finn. Dent. Soc. Suom. Hammaslaak. Toim. 1991, 87, 139–149.
2. Karatas, O.H.; Toy, E. Three-dimensional imaging techniques: A literature review. Eur. J. Dent. 2014, 8, 132–140.
3. Mangano, F.; Gandolfi, A.; Luongo, G.; Logozzo, S. Intraoral scanners in dentistry: A review of the current literature. BMC Oral Health 2017, 17, 149.
4. Hassan, B.; Greven, M.; Wismeijer, D. Integrating 3D facial scanning in a digital workflow to CAD/CAM design and fabricate complete dentures for immediate total mouth rehabilitation. J. Adv. Prosthodont. 2017, 9, 381–386.
5. Piedra-Cascón, W.; Meyer, M.J.; Methani, M.M.; Revilla-León, M. Accuracy (trueness and precision) of a dual-structured light facial scanner and interexaminer reliability. J. Prosthet. Dent. 2020, 124, 567–574.
6. Hong, C.; Choi, K.; Kachroo, Y.; Kwon, T.; Nguyen, A.; McComb, R.; Moon, W. Evaluation of the 3dMDface system as a tool for soft tissue analysis. Orthod. Craniofac. Res. 2017, 20 (Suppl. S1), 119–124.
7. Heike, C.L.; Upson, K.; Stuhaug, E.; Weinberg, S.M. 3D digital stereophotogrammetry: A practical guide to facial image acquisition. Head Face Med. 2010, 6, 18.
8. Tzou, C.-H.J.; Artner, N.M.; Pona, I.; Hold, A.; Placheta, E.; Kropatsch, W.G.; Frey, M. Comparison of three-dimensional surface-imaging systems. J. Plast. Reconstr. Aesthetic Surg. JPRAS 2014, 67, 489–497.
9. Gibelli, D.; Pucciarelli, V.; Poppa, P.; Cummaudo, M.; Dolci, C.; Cattaneo, C.; Sforza, C. Three-dimensional facial anatomy evaluation: Reliability of laser scanner consecutive scans procedure in comparison with stereophotogrammetry. J. Cranio-Maxillofac. Surg. Off. Publ. Eur. Assoc. Cranio-Maxillofac. Surg. 2018, 46, 1807–1813.
10. Amornvit, P.; Sanohkan, S. The Accuracy of Digital Face Scans Obtained from 3D Scanners: An In Vitro Study. Int. J. Environ. Res. Public Health 2019, 16, 5061.
11. Bohner, L.; Gamba, D.D.; Hanisch, M.; Marcio, B.S.; Neto, P.T.; Laganá, D.C.; Sesma, N. Accuracy of digital technologies for the scanning of facial, skeletal, and intraoral tissues: A systematic review. J. Prosthet. Dent. 2019, 121, 246–251.
12. Zhao, Y.J.; Xiong, Y.X.; Wang, Y. Three-Dimensional Accuracy of Facial Scan for Facial Deformities in Clinics: A New Evaluation Method for Facial Scanner Accuracy. PLoS ONE 2017, 12, e0169402.
13. Artopoulos, A.; Buytaert, J.; Dirckx, J.; Coward, T. Comparison of the accuracy of digital stereophotogrammetry and projection moiré profilometry for three-dimensional imaging of the face. Int. J. Oral Maxillofac. Surg. 2014, 43, 654–662.
14. Knoops, P.G.; Beaumont, C.A.A.; Borghi, A.; Rodriguez-Florez, N.; Breakey, R.W.; Rodgers, W.; Angullia, F.; Jeelani, N.O.; Schievano, S.; Dunaway, D.J. Comparison of three-dimensional scanner systems for craniomaxillofacial imaging. J. Plast. Reconstr. Aesthetic Surg. JPRAS 2017, 70, 441–449.
15. Joda, T.; Gallucci, G.O. The virtual patient in dental medicine. Clin. Oral Implants Res. 2015, 26, 725–726.
16. Mangano, C.; Luongo, F.; Migliario, M.; Mortellaro, C.; Mangano, F.G. Combining Intraoral Scans, Cone Beam Computed Tomography and Face Scans: The Virtual Patient. J. Craniofac. Surg. 2018, 29, 2241–2246.
17. Ayoub, A.F.; Xiao, Y.; Khambay, B.; Siebert, J.P.; Hadley, D. Towards building a photo-realistic virtual human face for craniomaxillofacial diagnosis and treatment planning. Int. J. Oral Maxillofac. Surg. 2007, 36, 423–428.
18. Lepidi, L.; Galli, M.; Mastrangelo, F.; Venezia, P.; Joda, T.; Wang, H.; Li, J. Virtual Articulators and Virtual Mounting Procedures: Where Do We Stand? J. Prosthodont. Off. J. Am. Coll. Prosthodont. 2020, 30, 24–35.
19. Lam, W.Y.H.; Hsung, R.T.C.; Choi, W.W.S.; Luk, H.W.K.; Pow, E.H.N. A 2-part facebow for CAD-CAM dentistry. J. Prosthet. Dent. 2016, 116, 843–847.
20. Schweiger, J. 3D Facial Scanning. Published December 2018. Available online: https://www.zirkonzahn.com/assets/files/publications/EN-Dental-Dialogue-2018-12-web.pdf (accessed on 29 December 2020).
21. Solaberrieta, E.; Garmendia, A.; Minguez, R.; Brizuela, A.; Pradies, G. Virtual facebow technique. J. Prosthet. Dent. 2015, 114, 751–755.
22. Solaberrieta, E.; Mínguez, R.; Barrenetxea, L.; Otegi, J.R.; Szentpétery, A. Comparison of the accuracy of a 3-dimensional virtual method and the conventional method for transferring the maxillary cast to a virtual articulator. J. Prosthet. Dent. 2015, 113, 191–197.
23. Inoue, N.; Scialabba, R.; Lee, J.D. A comparison of virtually mounted dental casts from traditional facebow records, average values, and 3D facial scans. J. Prosthet. Dent. 2022.
24. Hong, S.J.; Noh, K. Setting the sagittal condylar inclination on a virtual articulator by using a facial and intraoral scan of the protrusive interocclusal position: A dental technique. J. Prosthet. Dent. 2021, 125, 392–395.
25. Kwon, J.H.; Im, S.; Chang, M.; Kim, J.E.; Shim, J.S. A digital approach to dynamic jaw tracking using a target tracking system and a structured-light three-dimensional scanner. J. Prosthodont. Res. 2019, 63, 115–119.
26. Kim, J.E.; Park, J.H.; Moon, H.S.; Shim, J.S. Complete assessment of occlusal dynamics and establishment of a digital workflow by using target tracking with a three-dimensional facial scanner. J. Prosthodont. Res. 2019, 63, 120–124.
27. Stavness, I.K.; Hannam, A.G.; Tobias, D.L.; Zhang, X. Simulation of dental collisions and occlusal dynamics in the virtual environment. J. Oral Rehabil. 2016, 43, 269–278.
28. Antolín, A.; Rodríguez, N.A.; Crespo, J.A. Digital Flow in Implantology Using Facial Scanner. Published 2018. Available online: https://www.semanticscholar.org/paper/Digital-Flow-in-Implantology-Using-Facial-Scanner-Antol%C3%ADn-Rodr%C3%ADguez/0397531202d32a61f18337de99e4b3acf546206b (accessed on 23 December 2020).
29. Lin, W.S.; Harris, B.T.; Phasuk, K.; Llop, D.R.; Morton, D. Integrating a facial scan, virtual smile design, and 3D virtual patient for treatment with CAD-CAM ceramic veneers: A clinical report. J. Prosthet. Dent. 2018, 119, 200–205.
30. Jamieson, A.; Guilleminault, C.; Partinen, M.; Quera-Salva, M.A. Obstructive sleep apneic patients have craniomandibular abnormalities. Sleep 1986, 9, 469–477.
31. Ferguson, K.A.; Ono, T.; Lowe, A.A.; Ryan, C.F.; Fleetham, J.A. The relationship between obesity and craniofacial structure in obstructive sleep apnea. Chest 1995, 108, 375–381.
32. Kushida, C.A.; Efron, B.; Guilleminault, C. A predictive morphometric model for the obstructive sleep apnea syndrome. Ann. Intern. Med. 1997, 127 (8 Pt 1), 581–587.
33. Lee, R.W.W.; Vasudavan, S.; Hui, D.; Prvan, T.; Petocz, P.; Darendeliler, M.A.; Cistulli, P.A. Differences in Craniofacial Structures and Obesity in Caucasian and Chinese Patients with Obstructive Sleep Apnea. Sleep 2010, 33, 1075–1080. Available online: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2910536/ (accessed on 29 December 2020).
34. Lee, C.H.; Kim, J.-W.; Lee, H.J.; Yun, P.-Y.; Kim, D.-Y.; Seo, B.S.; Yoon, I.-Y.; Mo, J.-H. An investigation of upper airway changes associated with mandibular advancement device using sleep videofluoroscopy in patients with obstructive sleep apnea. Arch. Otolaryngol. Head Neck Surg. 2009, 135, 910–914.
35. Espinoza-Cuadros, F.; Fernández-Pozo, R.; Toledano, D.T.; Alcázar-Ramírez, J.D.; López-Gonzalo, E.; Hernández-Gómez, L.A. Speech Signal and Facial Image Processing for Obstructive Sleep Apnea Assessment. Comput. Math. Methods Med. 2015, 2015, 489761.
36. Balaei, A.T.; Sutherland, K.; Cistulli, P.A.; de Chazal, P. Automatic detection of obstructive sleep apnea using facial images. In Proceedings of the 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), Melbourne, Australia, 18–21 April 2017; pp. 215–218.
37. Islam, S.M.S.; Mahmood, H.; Al-Jumaily, A.A.; Claxton, S. Deep Learning of Facial Depth Maps for Obstructive Sleep Apnea Prediction. In Proceedings of the 2018 International Conference on Machine Learning and Data Engineering (ICMLDE), Sydney, Australia, 3–7 December 2018; pp. 154–157.
38. Eastwood, P.; Gilani, S.Z.; McArdle, N.; Hillman, D.; Walsh, J.; Maddison, K.; Goonewardene, B.M.; Mian, A. Predicting sleep apnea from three-dimensional face photography. J. Clin. Sleep Med. 2020, 16, 493–502.
39. Duong, K.; Glover, J.; Perry, A.; Olmstead, D.; Colarusso, P.; Ungrin, M.; MacLean, J.; Martin, A. Customized Facemasks for Continuous Positive Airway Pressure: Feasibility Study in Healthy Adults Volunteers. Am. J. Respir. Crit. Care Med. 2020, 201, A2432.
40. Luyster, F.S.; Buysse, D.J.; Strollo, P.J. Comorbid insomnia and obstructive sleep apnea: Challenges for clinical practice and research. J. Clin. Sleep Med. JCSM Off. Publ. Am. Acad. Sleep Med. 2010, 6, 196–204.
41. Liu, W.; Li, M.; Yi, L. Identifying children with autism spectrum disorder based on their face processing abnormality: A machine learning framework. Autism Res. Off. J. Int. Soc. Autism Res. 2016, 9, 888–898.
42. Knoops, P.G.M.; Papaioannou, A.; Borghi, A.; Breakey, R.W.F.; Wilson, A.T.; Jeelani, O.; Zafeiriou, S.; Steinbacher, D.; Padwa, B.L.; Dunaway, D.J.; et al. A machine learning framework for automated diagnosis and computer-assisted planning in plastic and reconstructive surgery. Sci. Rep. 2019, 9, 13597.
43. Jiang, J.G.; Zhang, Y.D. Motion planning and synchronized control of the dental arch generator of the tooth-arrangement robot. Int. J. Med. Robot. Comput. Assist. Surg. MRCAS 2013, 9, 94–102.
44. Burgert, O.; Seifert, S.; Salb, T.; Gockel, T.; Dillmann, R.; Hassfeld, S.; Mühling, J. A VR-system supporting symmetry related cranio-maxillofacial surgery. Stud. Health Technol. Inform. 2003, 94, 33–35.
45. Gulati, M.; Anand, V.; Salaria, S.K.; Jain, N.; Gupta, S. Computerized implant-dentistry: Advances toward automation. J. Indian Soc. Periodontol. 2015, 19, 5–10.
46. Azari, A.; Nikzad, S. Computer-assisted implantology: Historical background and potential outcomes-a review. Int. J. Med. Robot. Comput. Assist. Surg. MRCAS 2008, 4, 95–104.
47. Du, Y.F.; Chen, N.; Li, D.Q. Application of robot-assisted surgery in the surgical treatment of head and neck cancer. Zhonghua Kou Qiang Yi Xue Za Zhi Zhonghua Kouqiang Yixue Zazhi Chin. J. Stomatol. 2019, 54, 58–61.
48. Grischke, J.; Johannsmeier, L.; Eich, L.; Griga, L.; Haddadin, S. Dentronics: Towards robotics and artificial intelligence in dentistry. Dent. Mater. Off. Publ. Acad. Dent. Mater. 2020, 36, 765–778.
49. Maikuma, Y.; Usui, K.; Araki, K.; Mataki, S.; Kurosaki, N.; Furuya, N. Evaluation of an articulated measuring apparatus for use in the oral cavity. Dent. Mater. J. 2003, 22, 168–179.
50. Woo, S.-Y.; Lee, S.-J.; Yoo, J.-Y.; Han, J.-J.; Hwang, S.-J.; Huh, K.-H.; Lee, S.-S.; Heo, M.-S.; Choi, S.-C.; Yi, W.-J. Autonomous bone reposition around anatomical landmark for robot-assisted orthognathic surgery. J. Cranio-Maxillofac. Surg. Off. Publ. Eur. Assoc. Cranio-Maxillofac. Surg. 2017, 45, 1980–1988.
Figure 1. Historical timeline of imaging technology.
Figure 2. Photogrammetry and stereophotogrammetry require a designated space for the subject, multiple cameras at various viewpoints surrounding the subject, and a computer to process the images.
Figure 3. Structured light scanning utilizes a projector to cast a pattern of light onto the subject, which is then captured by the camera and processed by a computer. The projector and the camera may be housed in a single device, depending on the manufacturer.
Figure 4. Laser scanners record the subject by taking scans in multiple positions. These scans are captured by a camera and combined into one image by a computer algorithm. The laser and the camera may be housed in a single device, depending on the manufacturer.
Figure 5. Virtual patient workflow. Facial scan captured (A); facial scan merged with intraoral scan (B); facial scan and intraoral scan mounted on a virtual articulator (C).
Table 1. List of available facial scanners and software (feature data obtained from manufacturers' websites).

Photogrammetry/stereophotogrammetry:
- 3dMD Face system (3dMD, Atlanta, GA, USA). Applications: medical and dental fields; 4D records of facial expressions, function, smile, and speech. Features: anatomically precise 3D surface images for surgical intervention assessment and measurement of long-term morphological change outcomes, such as cleft lip and palate; no manual registration is required.
- iPhone X (Apple, Cupertino, CA, USA) with the Bellus3D Face application (Bellus3D, Inc., Campbell, CA, USA). Applications: Face ID authentication; facial recognition to unlock the phone and authorize purchases. Features: the dot projector in the TrueDepth camera projects over 30,000 IR dots and captures the infrared image; the neural engine of the Bionic chip transforms the mathematical model to build a facial map, which is compared with the enrolled data.
- Di4D (Dimensional Imaging, Glasgow, Scotland). Applications: video games and movies; crosses into the entertainment sector to provide high-density facial scans. Features: brings digital humans to life; captures an actor's high-fidelity motion data and translates it to a virtual character.
- Planmeca ProMax 3D Mid (PM) (Planmeca USA, Inc., Hoffman Estates, IL, USA). Applications: dental and ENT imaging. Features: handles a wide range of diagnostic tasks including CBCT, orthodontic planning, CAD/CAM, implant planning, and maxillofacial surgery, with anatomical presets for ears, nose, and throat.

Structured light scanners:
- Facehunter (Zirkonzahn, South Tyrol, Italy). Applications: designed for dentists and dental technicians. Features: efficient patient consultation; planning reliability for the patient, the dentist, and the dental technician; well integrated with the digital workflow.
- FaceScan system (Isravision, Darmstadt, Germany). Applications: designed for the dental field. Features: a double-mirror structure that captures three angles of 3D images and combines them into 3D models.
- EinScan Pro (Shining 3D Tech. Co., Ltd., Hangzhou, China). Applications: handheld; designed for engineers, designers, art and heritage, and custom orthotics. Features: multifunctional; utilizes infrared light instead of visible light during face scan mode, which is comfortable for the eyes.
- EinScan Pro 2X Plus (Shining 3D Tech. Co., Ltd., Hangzhou, China) with Shining software. Applications: handheld; not specific to the dental workflow. Features: multifunctional; manual or marker-based alignment modes; not recommended for moving or hairy objects.
- Priti mirror scanner and priti image software (Isravision, Polymetric, Germany). Applications: digital models of the face and teeth merged together with image software. Features: takes pictures at different angles; open 3D software compatible with CAD construction programs (ExoCAD Dental CAD, 3Shape dental designer).
- ATOS Compact Scan 5M (GOM mbH, Braunschweig, Germany). Applications: reverse engineering and dimensional inspection. Features: precise optical measuring system; probing feature that can digitize deep pockets and areas that are not optically accessible.

Laser scanners:
- ObiScanner (ObiScanner, Milano, Italy). Applications: dental field. Features: a scan lasts 15 s to produce a 3D model; requires no training; open software compatible with CAD systems.
