Systematic Review

Deep Learning in Diagnosis of Dental Anomalies and Diseases: A Systematic Review

1 Department of Computer Engineering, Cankiri Karatekin University, Cankiri 18100, Turkey
2 Department of Pediatric Dentistry, Baskent University, Ankara 06810, Turkey
3 Department of Computer Engineering, Ankara University, Ankara 06830, Turkey
4 Department of Artificial Intelligence and Data Engineering, Ankara University, Ankara 06830, Turkey
5 Faculty of Medicine and Health Technology, Tampere University, 33720 Tampere, Finland
* Author to whom correspondence should be addressed.
Diagnostics 2023, 13(15), 2512; https://doi.org/10.3390/diagnostics13152512
Submission received: 11 July 2023 / Revised: 21 July 2023 / Accepted: 25 July 2023 / Published: 27 July 2023
(This article belongs to the Special Issue Artificial Intelligence in the Diagnostics of Dental Disease)

Abstract

Deep learning and diagnostic applications in oral and dental health have received significant attention recently. In this review, studies applying deep learning to diagnose anomalies and diseases in dental image material were systematically compiled, and their datasets, methodologies, test processes, explainable artificial intelligence methods, and findings were analyzed. Tests and results in studies involving human–artificial intelligence comparisons are discussed in detail to draw attention to the clinical importance of deep learning. In addition, the review critically evaluates the literature to guide and further develop future studies in this field. An extensive literature search covering 2019–May 2023 was conducted using the Medline (PubMed) and Google Scholar databases to identify eligible articles, and 101 studies were shortlisted, including applications for diagnosing dental anomalies (n = 22) and diseases (n = 79) using deep learning for classification, object detection, and segmentation tasks. According to the results, the most commonly used task type was classification (n = 51), the most commonly used dental image material was panoramic radiographs (n = 55), and the most frequently used performance metrics were sensitivity/recall/true positive rate (n = 87) and accuracy (n = 69). Dataset sizes ranged from 60 to 12,179 images. Although some studies designed individual or at least individualized architectures, standardized architectures such as pre-trained CNNs, Faster R-CNN, YOLO, and U-Net were used in most studies. Few studies used an explainable AI method (n = 22) or applied tests comparing human and artificial intelligence performance (n = 21). Deep learning is promising for better diagnosis and treatment planning in dentistry based on the high-performance results reported by the studies. Nevertheless, its safety should be demonstrated using a more reproducible and comparable methodology, including tests that provide information about clinical applicability, by defining a standard set of tests and performance metrics.

1. Introduction

Today, despite technological advances that allow early diagnosis and treatment of most oral and dental diseases, their global increase has not been prevented. According to the WHO Global Oral Health Status Report (2022) [1], oral and dental diseases affect approximately 3.5 billion people worldwide. In low- and middle-income countries in particular, oral and dental health services are inadequate because of the costs of diagnosis and treatment. As a result, the WHO estimates that three out of four people in low- and middle-income countries are affected by oral and dental diseases [1]. The most common oral and dental diseases are dental caries, periodontal diseases, edentulism, oral cancer, dental anomalies, and cleft lip and palate [1]. Without efficient diagnosis and treatment, these diseases can cause complications ranging from mild discomfort to death.
In addition to clinical examination, dental imaging technologies play a critical role in diagnosing oral and dental diseases. Figure 1A gives examples of anomalies and diseases associated with different dental imaging techniques. Advances in three-dimensional imaging technologies such as cone-beam computed tomography (CBCT), magnetic resonance imaging, and ultrasound, alongside two-dimensional panoramic and periapical radiographs, have increased diagnostic success rates [2,3]. However, image-based dental diagnosis has some limitations. It is not fully objective, as it depends on specialist experience and inter-observer variability. Radiographs have noisy backgrounds, and anatomical structures overlap. Computed tomography has poor resolution compared to radiographs due to scattering from metallic objects. Ultrasonography contains high levels of noise. These limitations make images difficult to interpret and increase the rate of expert oversight and error.
Expert systems, which aim to assist experts in managing images, formerly applied strict rules and methods modeled on how experts think. In recent years, with easier access to data and the development of computers with faster processing power, artificial intelligence (AI) technologies have advanced, and expert systems have evolved into data-driven AI applications. In particular, the growing number of studies demonstrating the strong performance of deep learning methods in challenging image-based diagnostic tasks, such as cancer [4] and lung and eye diseases [5,6], has increased interest in medical applications of AI [7,8]. Recent literature reviews have acknowledged the success of expert systems based on deep learning methods that compete with the performance of experts in image-based dental diagnostic tasks, including the research compiled in this article.
Deep learning is a form of machine learning that uses multilayer artificial neural networks in a wide range of applications, from image, audio, and video processing to natural language processing. Unlike traditional machine learning methods, deep learning automatically extracts features from raw data instead of relying on hand-crafted rules and learns these features jointly with the prediction task. In addition to this flexible structure, prediction accuracy can increase with the size of the data. The concept of deep learning was first proposed by Hinton in 2006 as a more efficient version of multilayer artificial neural networks [9]. The CNN architecture, the most commonly used deep learning algorithm, is presented in Figure 1B. Since the emergence of deep learning, it has been proposed for many applications in the field of oral and dental health, such as tooth classification [10], detection [11] and segmentation [12], endodontic treatment and diagnosis [13], detection of periodontally compromised teeth [14], oral lesion pathology [15,16], forensic medicine applications [17,18], and classification of dental implants [19,20]. Considering the large number of images obtained in the field of oral and dental health, dentists' reliance on computer applications for analyzing these images, and the need to improve decision-making performance in limited time, deep learning applications appear to have excellent potential for the future.
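The automatic feature extraction at the heart of the CNN architecture in Figure 1B can be made concrete with a minimal sketch: a plain-Python 2D convolution (in the CNN sense, i.e., sliding cross-correlation) of a toy image with a vertical-edge kernel. The image, kernel, and values here are purely illustrative and not taken from any reviewed study; in a trained CNN, such kernels are learned from the data rather than hand-designed.

```python
def conv2d(image, kernel):
    """Valid 2D convolution (CNN-style): slide the kernel over the image
    and sum elementwise products to produce a feature map."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + r][j + c] * kernel[r][c]
                 for r in range(kh) for c in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A toy 4x4 "image" with a vertical intensity edge (e.g., a tooth boundary).
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
# A hand-written vertical-edge kernel; a CNN would learn this automatically.
kernel = [[-1, 1],
          [-1, 1]]

feature_map = conv2d(image, kernel)
# The feature map responds strongly (18) exactly where the edge lies.
```

Stacking many such learned kernels, nonlinearities, and pooling layers yields the convolutional feature hierarchy that the diagnostic models in this review build on.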
Figure 1. (A). Examples of dental anomalies and diseases on dental imaging techniques; a. Mesiodens on panoramic radiographs [21], b. Apical lesions on periapical radiographs [22], c. Temporomandibular joint osteoarthritis on orthopantomograms [23], d. Missing tooth on cone beam computed tomography [24], e. Dental caries on near-infrared-light transillumination [25], f. Dental caries on bite viewing radiographs [26], g. Dental calculus and inflammation on optical color images [27], h. Gingivitis on intraoral photos [28]. (B). Convolutional neural network architecture.
Several reviews of deep learning for oral and dental health have been published recently. These studies have focused on specific research areas such as dental caries [29], dental implants [30,31], forensic [32], endodontics [13], temporomandibular joint disorder [33], periapical radiolucent lesions [14], gingivitis and periodontal disease [34], and dental informatics [35]. Other reviews have addressed deep learning issues in dentistry [36,37,38,39,40] and dental imaging [41,42]. However, a comprehensive review study on deep learning methods used to diagnose dental diseases, including dental anomalies, has yet to be conducted. This study aims to systematically review 101 related research articles applying deep learning methods to diagnose dental anomalies and diseases.
The essential contributions of this article can be listed as follows:
  • This study is the first systematic review of dental anomalies and deep learning.
  • This study includes 101 shortlisted research articles from Google Scholar and PubMed that apply deep learning methods for diagnosing dental anomalies and diseases.
  • This review extracted variables such as dataset size, dental imaging method, deep learning architecture, performance evaluation criteria, and explainable AI method.
  • Unlike other reviews in the literature, this review discusses in detail the studies comparing human and AI performance among the shortlisted articles, with particular attention to the statistical tests applied.
As per the workflow of the current article, Section 2 describes the research methodology, including the research question, information sources, eligibility criteria, search strategy, selection process, and data extraction and analysis processes. Section 3 synthesizes the dataset features of the shortlisted studies and presents findings such as the deep learning methods, performance metrics, and human–AI comparisons. Section 4 discusses these findings with emphasis on managerial and academic implications, shortcomings of the literature, and suggested solutions; the problems and proposed solutions for increasing the clinical utility of deep learning, as well as the limitations of the current article, are also covered in this section. Finally, Section 5 presents conclusions and potential research directions.

2. Material and Methods

This systematic review was conducted by referring to the PRISMA 2020 statement [43], an updated guideline for reporting systematic reviews. The review question determining the study’s eligibility criteria and search strategy is based on the PICO (problem/population, intervention/indicator, comparison, and outcome) framework in Table 1.

2.1. Information Sources and Eligibility Criteria

The systematic literature search was carried out by a reviewer through an extensive investigation of two electronic databases, Medline via PubMed and Google Scholar, for studies published in the last five years (2019–May 2023). Google Scholar is a comprehensive database of scholarly material from academic research, including books, journal articles, conference reports, chapters, and theses. It is free to use, with no subscription required, and orders search results by relevance, taking into account publication venue, authors, full-text matches, and citation counts. Medline is a database containing international publications on clinical medicine and biomedical research, and PubMed is a freely accessible interface to Medline. The research articles included in this systematic review were selected according to the eligibility criteria below.
Inclusion criteria:
  • Articles published between January 2019 and May 2023.
  • Articles on the diagnosis of dental anomalies or diseases.
  • Articles suggesting deep learning methods.
  • Articles created using a reference dataset on dental imaging techniques.
  • Full-text research articles.
  • Articles written in English.
  • The article must contain detailed information about the dataset, methods, results, and tests applied.
Exclusion criteria:
  • Articles on topics such as healthy tooth detection, tooth labeling/numbering, dental implants, and endodontic treatment.
  • Articles that have applied other AI methods that do not include deep learning methodologies, such as classical machine learning.
  • Review articles and other publication types such as conference papers, abstracts, book chapters, preprints, and articles for which the full text is unavailable.

2.2. Search Strategy and Selection Process

Keywords combining techniques of interest (such as deep learning/CNN), image materials (such as radiographs), and areas of interest (such as dental anomalies/diseases) were used to navigate through articles. The Medical Subject Headings (MeSH) terms deep learning, CNN, convolutional neural networks, oral, dental, tooth, teeth, anomalies, and diseases were included. The included MeSH terms were combined with Boolean operators (AND/OR), and the advanced settings of the databases were used for selections such as inclusion date range, publication types, and language. The electronic search strategy applied to the databases is given in Table 2.
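The way such a Boolean search string is assembled from keyword groups can be sketched as follows. The term groups below are illustrative placeholders; the actual strategy applied to each database is the one given in Table 2.

```python
# Hypothetical keyword groups (the real groups are listed in Table 2).
techniques = ['"deep learning"', '"convolutional neural network"', 'CNN']
materials  = ['radiograph*', 'panoramic', 'CBCT']
areas      = ['dental', 'tooth', 'teeth', 'oral']
targets    = ['anomal*', 'disease*', 'diagnosis']

def or_group(terms):
    """Join synonyms of one concept with OR inside parentheses."""
    return "(" + " OR ".join(terms) + ")"

# Concepts are then intersected with AND, mirroring the PICO structure.
query = " AND ".join(or_group(g) for g in (techniques, materials, areas, targets))
```

Database-specific advanced settings (date range, publication type, language) are applied on top of such a string through each database's own interface.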
The articles included in this systematic review were selected in two stages. In the first stage, a reviewer evaluated the articles according to the relevance of their titles and abstracts to our research topic; studies whose titles and abstracts were unrelated to oral and dental health, as well as records without full text (such as abstracts), were eliminated. In the second stage, a second reviewer conducted a detailed examination according to the eligibility criteria, excluding review articles, articles whose method was not deep learning, and articles that did not focus on diagnosing oral/dental anomalies or diseases.

2.3. Data Extraction and Analysis

One reviewer performed data extraction from the included studies. From the included articles, the primary author, publication year, anomaly/disease targeted for diagnosis, image type, number of images, primary performance metric and its outcome value, other measured performance criteria, and explainable AI method were obtained by reviewing the full texts in detail. The shortlist presented in the article was thoroughly reviewed and checked by a second reviewer (a specialist dentist). Separate shortlists were created for anomaly and disease studies, and the two subjects were analyzed separately. The included studies were categorized as classification, object detection, or segmentation studies. Data such as the country of origin of the studies, the data division strategy determining the sizes of the training and test datasets, and the field of dentistry were not extracted.
The distribution of the number of publications by year, task type, anomaly/disease type, and dental imaging technique was visualized and analyzed. Given the heterogeneity of the index and reference tests, their performance measures, and their outcome measures, a meta-analysis was not performed, as the results were largely unsuitable for heterogeneity tests. Instead, for quality assessment, a separate shortlist was created of the included studies that performed human–AI comparison tests. From these studies, data on reference datasets, statistical significance tests, diagnostic performance results, diagnostic time, and the impact of AI on performance were extracted and analyzed. Further analysis, including the clinical significance of deep learning, was performed narratively alongside descriptive statistics.
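The extraction variables listed above can be pictured as one structured record per included study. The following is a hypothetical sketch; the field names mirror the variables named in this section, and the example values are ours, not drawn from any included study.

```python
from dataclasses import dataclass, asdict
from typing import Optional, Tuple

@dataclass
class ExtractionRecord:
    first_author: str
    year: int
    condition: str              # anomaly or disease targeted for diagnosis
    task: str                   # "classification", "object detection", or "segmentation"
    image_type: str             # e.g., "panoramic", "periapical", "CBCT"
    n_images: int               # dataset size
    primary_metric: str         # e.g., "ACC", "SEN", "AUC"
    primary_value: float        # reported outcome of the primary metric (%)
    other_metrics: Tuple[str, ...]
    xai_method: Optional[str]   # explainable AI method, if any

# A hypothetical record, not taken from any reviewed study:
record = ExtractionRecord(
    first_author="Example", year=2022, condition="mesiodens",
    task="classification", image_type="panoramic", n_images=1000,
    primary_metric="ACC", primary_value=95.0,
    other_metrics=("SEN", "SPEC", "PPV"), xai_method="Grad-CAM",
)
```

Tabulating such records directly yields the summary tables (Tables 3 and 4) and the distributions analyzed in Section 3.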

3. Results

According to the search results, a total of 1997 records were identified: 1860 from Google Scholar and 137 from PubMed. After duplicates were removed, 545 studies that were not full-text research articles (n = 497) or were not related to dental health topics (n = 48) were excluded, and 296 records were screened. According to the screening results, 101 studies that met the eligibility criteria were included in the systematic review: 22 on dental anomalies (Table 3) and 79 on dental diseases (Table 4). Figure 2 presents the search results in detail according to the PRISMA 2020 flowchart.
Figure 3 shows the distribution of publications by year, task performed, anomaly/disease application, and dental imaging technique. When the distribution of publications over 2019–May 2023 is examined, the highest number belongs to 2022, with 14 anomaly and 23 disease studies. Although 2023 is not yet finished (n = 17), its publication count is already more than double that of 2019 (n = 7), and the number of publications has increased yearly. The most common task in diagnosing dental anomalies/diseases is classification (n = 51); the next most common is object detection for anomaly diagnosis (n = 7) and segmentation for disease diagnosis (n = 19). The most common diagnostic studies concern dental caries and plaque (n = 31), followed by periodontal diseases (n = 23) and cysts and tumors (n = 11). Cleft lip and palate (n = 2), temporomandibular joint osteoarthritis (TMJOA), gingivitis, and missing teeth (n = 3) are the least researched conditions in deep learning-based dental disease diagnosis. There are also studies on diagnosing inflammation, osteoporosis, and fractures. Since mesiodens are a type of supernumerary tooth, the most common anomaly type diagnosed with deep learning methods is supernumerary teeth (n = 11). Other common anomaly types examined were impacted teeth (n = 6), hypomineralization (n = 4), and ectopic eruption (n = 2). Three other anomaly applications in Figure 3 are the diagnosis of taurodontism [64], maxillary canine impaction [48], and odontomas [46]. Another study classifies ten different types of anomalies [47].
Dataset sizes ranged from 60 (CBCT) [136] to 12,179 (panoramic) [87] images. The most commonly used image types in dental disease studies are panoramic radiographs (n = 38), followed by periapical radiographs (n = 13), CBCT (n = 9), bite viewing radiographs (n = 8), and intraoral photographs (n = 7). Orthopantomograms (OPG, n = 2), NILT (n = 2), optical color images (n = 1), and microscopic histopathology (n = 1) are the other dental imaging techniques used with deep learning for diagnosis. In the diagnosis of dental anomalies, no image types were studied other than panoramic radiographs (n = 17), intraoral photographs (n = 4), and periapical radiographs (n = 1). One study evaluated both CBCT scans and panoramic radiographs [88], while another evaluated both periapical and panoramic radiographs [76].
The most commonly used deep learning approach for the classification task in dental anomaly diagnosis is pre-trained CNN models, either fine-tuned (transfer learning) or modified. InceptionResNet, VGG16, AlexNet, InceptionV3, SqueezeNet, ResNet, and DenseNet are the CNN models used as solution methods. In one study [44], hybrid graph-cut segmentation was applied to separate the background and anatomy in panoramic radiography images, and the preprocessed images were then classified with a CNN. YOLO (n = 2), Faster R-CNN (n = 2), DetectNet, and EfficientDet-D3 models were used for object detection tasks in dental anomaly diagnosis. In one study [55], the authors designed a new DMLnet model based on the YOLOv5 architecture for automatically diagnosing mesiodens on panoramic radiographs. U-Net (n = 3) or modified U-Net (n = 1) architectures are generally used for the segmentation task. In a study on the diagnosis of mesiodens [60], multiple tasks were combined: segmentation with the DeepLabV3 model and classification with the InceptionResNetV2 model.
While the methods used for disease diagnosis are more diverse than those used for dental anomaly diagnosis, the most commonly used approach is the same: models built on pre-trained CNNs. ResNet (n = 5), DenseNet (n = 5), and AlexNet (n = 4) are the most commonly used pre-trained CNNs, with VGG (n = 2), Inception (n = 2), EfficientNet (n = 2), LeNet (n = 2), and MobileNet (n = 1) also used. Another common method is custom CNN models designed by the authors (n = 6). In addition, hybrid methods combining two different algorithms were used: CNN-LSTM [70], CNN-SVM [71], Siamese Network-DenseNet121 [71], and CNN-fuzzy logic [84]. One study [74] used a Swin Transformer, one of the transformer architectures recently shown to compete with CNNs. Faster R-CNN (n = 7), DetectNet (n = 5), YOLO (n = 4), Single-Shot Detector (SSD, n = 2), and Mask R-CNN (n = 1) were used for the object detection task. U-Net (n = 14) and DeepLabV3+ (n = 2) were the most commonly used architectures for the segmentation task in disease diagnosis, as in dental anomaly diagnosis. Two-stage methods combining different tasks are frequently proposed for diagnosing dental diseases. Among studies that applied segmentation as image preprocessing in the first stage and classification in the second stage, the methods used were U-Net + DenseNet (n = 2), Mask R-CNN + CNN, morphology-based segmentation + modified LeNet, and curvilinear semantic DCNN + InceptionResNetV2. In one study [27], a parallel 1D CNN was used as the classifier, with YOLOv5 applied as an image preprocessing method. To optimize network weights, methods combining CNNs with different optimization algorithms, such as the antlion optimizer [83] and pervasive deep gradient [26], have also been proposed.
The ACC metric was used as the primary performance measure in sixty studies and as a secondary measure in nine, while thirty-two studies did not report it. The highest ACC value, 100%, was obtained in a study using Faster R-CNN to diagnose gingivitis from intraoral photographs [26]. The lowest ACC value, 69%, was obtained in a study using ResNet18 to diagnose dental caries on NILT images [67]. After ACC, the most frequently used performance measure is SEN (SEN = recall = TPR), which was used in eighty-seven studies and was the primary metric in nine of them; the values obtained in studies using SEN as the primary metric range from 81% to 99% [59,122]. Another frequently used metric is precision (precision = PPV), which was used in sixty-seven studies and was the primary metric in nine. The lowest value, a mean average precision (mAP) of 59.09%, was reported in a study proposing Faster R-CNN for detecting missing teeth on panoramic radiographs [119]; the highest precision value, 98.50%, was achieved in a study proposing YOLOv4 for detecting mandibular fractures on panoramic radiographs [117]. Another metric used as a primary measure is the area under the ROC curve (AUC), which was used in thirty studies, was the primary metric in nine, and gave results in the 57.10–99.87% range [47,91]. The F score (n = 47) and SPEC (n = 48) are among the other frequently used metrics. In addition, some studies use Intersection over Union (IoU), negative predictive value (NPV), Dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC), Matthews correlation coefficient (MCC), false positive rate (FPR), loss, error rate (ER), and classification rate (CR) as performance measures.
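All of the threshold-based metrics above derive from the same four confusion-matrix counts, as the following minimal sketch shows. The counts are hypothetical and not taken from any reviewed study.

```python
# Hypothetical confusion-matrix counts for a binary caries classifier
# evaluated on 200 test images (not from any reviewed study).
TP, FP, FN, TN = 80, 10, 5, 105

ACC  = (TP + TN) / (TP + FP + FN + TN)   # accuracy
SEN  = TP / (TP + FN)                    # sensitivity = recall = TPR
SPEC = TN / (TN + FP)                    # specificity
PPV  = TP / (TP + FP)                    # precision = positive predictive value
NPV  = TN / (TN + FN)                    # negative predictive value
F1   = 2 * PPV * SEN / (PPV + SEN)       # F score (harmonic mean of PPV and SEN)
```

Because each metric weights the four counts differently, reporting several of them together (as the better studies in this review do) gives a far more complete picture than any single value.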
Of the 101 studies, 22 addressed explainable AI. Five of these described a class activation heat map (CAM) without detailing the method, while the others used gradient-weighted class activation mapping (Grad-CAM).
In 21 studies, human and AI performance were tested and compared. The reference data, comparative tests, and performance results of these studies are summarized in Table 5. In these studies, test datasets drawn from the reference dataset were also used in the comparative tests, and test dataset sizes varied between 25 and 800 images [86,87]. In one study [47], a separate test dataset of 7697 images was used to evaluate the model's performance, and a different test dataset of 30 images was used to compare the model's performance with human performance. In one study [77], no test dataset was created, and validation performance was used for the tests instead. In 14 studies, reference datasets were annotated by physicians experienced in oral and dental health, such as pediatric specialists (PS), general practitioners (GP), oral and maxillofacial radiologists (OMFR), oral and maxillofacial surgeons (OMFS), and endodontic specialists (ES). In 10 studies, more than one specialist performed the annotation to ensure the reliability of the reference dataset. In two studies, dentist-trained researchers annotated the reference dataset [74,123]. In two studies, CBCT data were used as the reference rather than annotated, as the images from retrospective databases were already labeled [23,97].
In all but two studies, comparative tests were performed by comparing the performance of a group of human examiners on the test data with the performance of the model. In addition, two studies compared the performance of an AI-unaided group with that of an AI-aided group, measuring the effect of the AI model on the diagnostic performance of the specialists [75,124]. Statistical tests such as the Kruskal–Wallis test, t-tests, the Mann–Whitney U test, and Kappa statistics, and especially McNemar's χ2 test, were used to measure the significance of performance differences between specialists and AI models. Statistical significance was not measured in one study [87]. Of the eighteen studies that calculated a p-value, thirteen reported that the performance difference was significant (p < 0.05) and five that it was not. In addition to test performance, test times were measured in seven studies; in only one of these did the AI model provide a diagnosis more slowly than the specialist [74]. In the other studies, the authors compared only diagnostic performance, stating that the diagnostic time of the AI model would be shorter than that of the specialists. In all but six studies, the diagnostic performance of the proposed AI models exceeded that of the human examiner groups. In four of the six exceptions, AI lagged behind the experts by a small margin; in the other two, the performance gap was considerable [21,47].
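McNemar's χ2 test, the most frequently used significance test in these comparisons, depends only on the discordant pairs, i.e., the test images on which the specialist and the model disagree. A minimal sketch with hypothetical counts (not from any reviewed study), using the common Yates continuity-corrected form of the statistic:

```python
def mcnemar_chi2(b, c):
    """McNemar's chi-square statistic with Yates continuity correction (df = 1).

    b: cases the specialist classified correctly but the model did not.
    c: cases the model classified correctly but the specialist did not.
    Concordant pairs (both right or both wrong) do not enter the statistic.
    """
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical discordant counts from a paired specialist-vs-model comparison.
chi2 = mcnemar_chi2(b=5, c=20)

# For df = 1, chi2 > 3.841 corresponds to p < 0.05.
significant = chi2 > 3.841
```

For small discordant counts, the exact binomial form of the test is preferred over this chi-square approximation; either way, the test is paired, which is why it suits reader-versus-model comparisons on a shared test set.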

4. Discussion

This study systematically reviewed the last five years of research on diagnosing dental anomalies and diseases with various deep learning methods, mainly CNNs. This evaluation produced several findings that merit discussion.
Although not as numerous as diagnostic applications in general medicine, the 1997 records returned by searches of the two databases, Google Scholar and PubMed, for the last five years show that deep learning is experiencing a golden age in dental diagnostic applications. Only 137 of these records were available on PubMed, often an essential resource for medical and dental research. This indicates that a significant part of the identified research originates not from dentistry but from the technical sciences and has been published in other venues.
Deep learning has grown in popularity as a research topic in diagnosing dental diseases over the years, given the number of studies shortlisted before the first half of 2023. However, diagnosing dental anomalies with deep learning has not yet been sufficiently investigated (n = 22); this article is the first systematic review of the topic. Because dental anomalies are rare compared to other dental diseases, the scarcity of data has made deep learning research on this subject rare as well [137,138]. It is also no coincidence that dental caries, the most common oral and dental disease reported worldwide by the WHO, is the primary focus of the shortlisted studies (n = 31). Because of the working principles of deep learning algorithms, the qualities of the data used, such as the number of images, image quality, and expert annotation, are far more important than for other AI algorithms. These findings indicate that deep learning has gained the most ground in the literature for diagnosing diseases that are common worldwide and for which quality data are easy to obtain. In general, the progress of the global health sector draws the boundaries of AI in medicine. Similarly, panoramic radiographs, the most widely used imaging technique in oral and dental health worldwide owing to their advantages over other techniques, were used as data in more than half (n = 55) of the shortlisted studies.
Since classification is generally the most appropriate task type for disease diagnosis, classification was the most common task (n = 51) and segmentation the least common (n = 24) in the shortlisted studies. Although deep learning algorithms can work with raw data, segmentation and object detection are used as preprocessing tools applied before classification to overcome the difficulties of dental images. Although some studies design individual or at least individualized architectures, standardized architectures such as pre-trained CNN models, Faster R-CNN, YOLO, and U-Net have been used in most studies. The longstanding presence of these architectures in the literature suggests that deep learning diagnostic studies in dentistry lag behind other fields. The use of transformer architectures, a relatively new research direction compared to CNNs, in only one study indicates a possible delay in adopting the latest architectures. Explainable AI methods are used to explain the decision-making processes of models. Visualizing why and how deep learning models, often described as black-box AI models, reach a diagnostic decision is vital to making the model's accuracy, objectivity, and results trustworthy. Of the 101 included studies, only 22 mentioned an explainable AI method (CAM or Grad-CAM). Considering the clinical importance of deep learning diagnostic studies, it is essential to include explainable AI methods for reliability, and developing new and different explainable AI methods is equally important.
Although the high performance of the proposed deep learning algorithms suggests their reliability, metrics appropriate for clinical performance were often not selected, and additional tests were not carried out. Some studies used only ACC as a performance metric [27,65,68,80,123,136], yet ACC can be misleading in problems with class imbalance. Likewise, AUC is only partially informative when the costs of over- and under-detection differ. In problems involving such imbalances, additional metrics must be measured. Although PPV (= precision), one of the metrics informative about clinical benefit, was measured in 67 studies, NPV was measured in only 16. Beyond the limitations of the reported metrics, very few studies applied tests that provide information on clinical utility: only 21 compared human and AI performance. In some of these studies, the reference dataset was annotated by a single expert [21,47,133]; in others, a researcher annotated the data instead of an expert [74,123]. Using more experts to overcome the limitations of a single annotator when creating reference datasets would increase reliability. Some studies reported performance measured on validation data instead of creating a separate test dataset [77]; since the validation phase is used to tune the hyperparameters of a deep learning model, its use in the final testing phase can make the reported results misleading. Only two studies tested how deep learning affects expert performance in human–AI benchmark tests (AI-unaided versus AI-aided group comparison) [75,124]. The other studies simply compared the performance of deep learning algorithms with that of experts, hoping that deep learning performance would reach or exceed the experts'.
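The class-imbalance caveat can be made concrete with a toy calculation: on an imbalanced test set, a degenerate model that labels every image "healthy" achieves a high ACC while detecting no disease at all. The counts below are hypothetical, chosen only to illustrate the effect.

```python
# Hypothetical imbalanced test set: 95 healthy images, 5 diseased.
n_healthy, n_diseased = 95, 5

# A degenerate "classifier" that predicts healthy for every image:
TP, FN = 0, n_diseased      # every diseased case is missed
TN, FP = n_healthy, 0       # every healthy case is trivially correct

ACC = (TP + TN) / (n_healthy + n_diseased)   # looks excellent
SEN = TP / (TP + FN)                         # clinically useless
```

Here ACC is 0.95 while SEN is 0, which is why studies reporting only ACC on imbalanced dental datasets give little information about clinical utility, and why complementary metrics such as SEN, SPEC, PPV, and NPV need to be reported alongside it.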
At this point, although almost all studies emphasize the aim of using AI-supported expert systems as auxiliary tools for clinicians, it is clear that tests suited to this purpose are lacking. As a result of this inadequacy, research on the clinical applicability and on the ethical and legal dimensions of AI algorithms may take many years to accumulate. As in every multidisciplinary task, cooperation between health institutions, clinical experts, and computer scientists is the most critical factor in preventing this situation. Another key step may be defining a standard set of tests and performance criteria for deep learning studies in oral and dental health. An open-access, standardized test dataset created by experts for each dental image type would enable the performance of deep learning algorithms to be reliably evaluated and compared.
This systematic review article has some limitations. Given the size of today's databases and publication volumes, only two databases were searched for this review. The selected articles were evaluated against the inclusion and exclusion criteria and the boundaries we drew; conference papers, preprint articles, and book chapters were excluded. Some articles were not open access or lacked information required for the summary tables we created, which further limited the included studies. One reviewer performed the data extraction from the included studies, and the resulting shortlist was thoroughly reviewed and checked only by a second reviewer (a specialist dentist). A traditional systematic review was conducted; no meta-analysis was performed, and the scope of the results is broad. Consequently, the study findings were compiled narratively according to a systematization we designed, with the aim of guiding and further developing future studies in this field.

5. Conclusions

In this systematic review, deep learning-based diagnosis of dental anomalies and diseases was discussed, and the 101 shortlisted studies were analyzed and evaluated together with their limitations. Deep learning algorithms show promising performance in evaluating visual data for diagnosing dental anomalies and diseases. Applications of deep learning in oral and dental health services can ease the workload of professionals by enabling more comprehensive, reliable, and objective image evaluation and disease detection, and, by reducing costs, can improve access to diagnosis and treatment in developing countries. To realize these advantages, clinical applications of deep learning in oral and dental health must be further developed, including the definition of standard test datasets, testing procedures, and performance metrics.

Author Contributions

Conceptualization, G.B.S., M.S.G. and E.B.; Formal Analysis, E.S. and E.B.; Investigation, E.S., E.B., T.A. and K.A.; Writing—Original Draft Preparation, E.S., G.B.S., T.A. and K.A.; Writing—Review and Editing, T.A. and K.A.; Visualization, E.S.; Supervision, M.S.G., E.B. and G.B.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript: AI, Artificial Intelligence; ACC, Accuracy; AP, Average Precision; AUC, Area Under the ROC Curve; CAM, Class Activation Mapping; CBCT, Cone Beam Computed Tomography; CNN, Convolutional Neural Network; CR, Classification Rate; DSC, Dice Similarity Coefficient; ER, Error Rate; ES, Endodontic Specialist; FN, False Negative; FNR, False Negative Rate; FP, False Positive; FPR, False Positive Rate; GP, General Practitioner; Grad-CAM, Gradient-weighted Class Activation Mapping; ICC, Intraclass Correlation; IoU, Intersection over Union; JD, Junior Dentist; JSC, Jaccard Similarity Coefficient; LSTM, Long Short-Term Memory; mAP, mean Average Precision; MCC, Matthews Correlation Coefficient; mIoU, mean Intersection over Union; NILT, Near-Infrared-Light Transillumination; NPV, Negative Predictive Value; OMFR, Oral and Maxillofacial Radiologist; OMFS, Oral and Maxillofacial Surgeon; OPG, Orthopantomogram; PPV, Positive Predictive Value; PS, Pediatric Specialist; R-CNN, Region-based Convolutional Neural Network; SD, Specialist Dentist; SEN, Sensitivity; SPEC, Specificity; SSD, Single Shot Detector; SVM, Support Vector Machine; TMJOA, Temporomandibular Joint Osteoarthritis; TN, True Negative; TNR, True Negative Rate; TP, True Positive; TPR, True Positive Rate; YOLO, You Only Look Once.

References

  1. WHO. Global Oral Health Status Report: Towards Universal Health Coverage for Oral Health by 2030; WHO: Geneva, Switzerland, 2022. [Google Scholar]
  2. Pauwels, R. A brief introduction to concepts and applications of artificial intelligence in dental imaging. Oral Radiol. 2021, 37, 153–160. [Google Scholar] [CrossRef] [PubMed]
  3. Folly, P. Imaging Techniques in Dental Radiology: Acquisition, Anatomic Analysis and Interpretation of Radiographic Images. BDJ Stud. 2021, 28, 11. [Google Scholar] [CrossRef]
  4. Mazhar, T.; Haq, I.; Ditta, A.; Mohsan, S.A.H.; Rehman, F.; Zafar, I.; Gansau, J.A.; Goh, L.P.W. The Role of Machine Learning and Deep Learning Approaches for the Detection of Skin Cancer. Healthcare 2023, 11, 415. [Google Scholar] [CrossRef] [PubMed]
  5. Haq, I.; Mazhar, T.; Malik, M.A.; Kamal, M.M.; Ullah, I.; Kim, T.; Hamdi, M.; Hamam, H. Lung Nodules Localization and Report Analysis from Computerized Tomography (CT) Scan Using a Novel Machine Learning Approach. Appl. Sci. 2022, 12, 12614. [Google Scholar] [CrossRef]
  6. Naqvi, R.A.; Hussain, D.; Loh, W.K. Artificial Intelligence-Based Semantic Segmentation of Ocular Regions for Biometrics and Healthcare Applications. Comput. Mater. Contin. 2020, 66, 715–732. [Google Scholar] [CrossRef]
  7. Prados-Privado, M.; Villalón, J.G.; Martínez-Martínez, C.H.; Ivorra, C. Dental images recognition technology and applications: A literature review. Appl. Sci. 2020, 10, 2856. [Google Scholar] [CrossRef]
  8. Naqvi, R.A.; Arsalan, M.; Qaiser, T.; Khan, T.M.; Razzak, I. Sensor Data Fusion Based on Deep Learning for Computer Vision Applications and Medical Applications. Sensors 2022, 22, 8058. [Google Scholar] [CrossRef]
  9. Hinton, G.E.; Osindero, S.; Teh, Y.W. A Fast Learning Algorithm for Deep Belief Nets. Neural Comput. 2006, 18, 1527–1554. [Google Scholar] [CrossRef]
  10. Miki, Y.; Muramatsu, C.; Hayashi, T.; Zhou, X.; Hara, T.; Katsumata, A.; Fujita, H. Classification of teeth in cone-beam CT using deep convolutional neural network. Comput. Biol. Med. 2017, 80, 24–29. [Google Scholar] [CrossRef]
  11. Estai, M.; Tennant, M.; Gebauer, D.; Brostek, A.; Vignarajan, J.; Mehdizadeh, M.; Saha, S. Deep learning for automated detection and numbering of permanent teeth on panoramic images. Dentomaxillofacial Radiol. 2022, 51, 20210296. [Google Scholar] [CrossRef]
  12. Awari, H.; Subramani, N.; Janagaraj, A.; Balasubramaniapillai Thanammal, G.; Thangarasu, J.; Kohar, R. Three-dimensional dental image segmentation and classification using deep learning with tunicate swarm algorithm. Expert Syst. 2022, e13198. [Google Scholar] [CrossRef]
  13. Khanagar, S.B.; Alfadley, A.; Alfouzan, K.; Awawdeh, M.; Alaqla, A.; Jamleh, A. Developments and Performance of Artificial Intelligence Models Designed for Application in Endodontics: A Systematic Review. Diagnostics 2023, 13, 414. [Google Scholar]
  14. Sadr, S.; Mohammad-Rahimi, H.; Motamedian, S.R.; Zahedrozegar, S.; Motie, P.; Vinayahalingam, S.; Dianat, O.; Nosrat, A. Deep Learning for Detection of Periapical Radiolucent Lesions: A Systematic Review and Meta-analysis of Diagnostic Test Accuracy. J. Endod. 2023, 49, 248–261.e3. [Google Scholar]
  15. Sultan, A.S.; Elgharib, M.A.; Tavares, T.; Jessri, M.; Basile, J.R. The use of artificial intelligence, machine learning and deep learning in oncologic histopathology. J. Oral Pathol. Med. 2020, 49, 849–856. [Google Scholar] [CrossRef]
  16. Alhazmi, A.; Alhazmi, Y.; Makrami, A.; Masmali, A.; Salawi, N.; Masmali, K.; Patil, S. Application of artificial intelligence and machine learning for prediction of oral cancer risk. J. Oral Pathol. Med. 2021, 50, 444–450. [Google Scholar] [CrossRef]
  17. Rajee, M.V.; Mythili, C. Gender classification on digital dental x-ray images using deep convolutional neural network. Biomed. Signal Process. Control 2021, 69, 102939. [Google Scholar] [CrossRef]
  18. Guo, Y.-C.; Han, M.; Chi, Y.; Long, H.; Zhang, D.; Yang, J.; Yang, Y.; Chen, T.; Du, S. Accurate age classification using manual method and deep convolutional neural network based on orthopantomogram images. Int. J. Legal Med. 2021, 135, 1589–1597. [Google Scholar] [CrossRef]
  19. Takahashi, T.; Nozaki, K.; Gonda, T.; Mameno, T.; Wada, M.; Ikebe, K. Identification of dental implants using deep learning—Pilot study. Int. J. Implant Dent. 2020, 6, 53. [Google Scholar] [CrossRef]
  20. Lee, D.W.; Kim, S.Y.; Jeong, S.N.; Lee, J.H. Artificial intelligence in fractured dental implant detection and classification: Evaluation using dataset from two dental hospitals. Diagnostics 2021, 11, 233. [Google Scholar] [CrossRef]
  21. Ahn, Y.; Hwang, J.J.; Jung, Y.-H.; Jeong, T.; Shin, J. Automated Mesiodens Classification System Using Deep Learning on Panoramic Radiographs of Children. Diagnostics 2021, 11, 1477. [Google Scholar] [CrossRef]
  22. Li, C.W.; Lin, S.Y.; Chou, H.S.; Chen, T.Y.; Chen, Y.A.; Liu, S.Y.; Liu, Y.L.; Chen, C.A.; Huang, Y.C.; Chen, S.L.; et al. Detection of Dental Apical Lesions Using CNNs on Periapical Radiograph. Sensors 2021, 21, 7049. [Google Scholar] [CrossRef] [PubMed]
  23. Choi, E.; Kim, D.; Lee, J.Y.; Park, H.K. Artificial intelligence in detecting temporomandibular joint osteoarthritis on orthopantomogram. Sci. Rep. 2021, 11, 10246. [Google Scholar] [CrossRef] [PubMed]
  24. Bayrakdar, S.K.; Orhan, K.; Bayrakdar, I.S.; Bilgir, E.; Ezhov, M.; Gusarev, M.; Shumilov, E. A deep learning approach for dental implant planning in cone-beam computed tomography images. BMC Med. Imaging 2021, 21, 86. [Google Scholar] [CrossRef]
  25. Casalegno, F.; Newton, T.; Daher, R.; Abdelaziz, M.; Lodi-Rizzini, A.; Schürmann, F.; Krejci, I.; Markram, H. Caries Detection with Near-Infrared Transillumination Using Deep Learning. J. Dent. Res. 2019, 98, 1227–1233. [Google Scholar] [CrossRef] [PubMed]
  26. Vimalarani, G.; Ramachandraiah, U. Automatic diagnosis and detection of dental caries in bitewing radiographs using pervasive deep gradient based LeNet classifier model. Microprocess. Microsyst. 2022, 94, 104654. [Google Scholar] [CrossRef]
  27. Park, S.; Erkinov, H.; Hasan, M.A.M.; Nam, S.H.; Kim, Y.R.; Shin, J.; Chang, W. Du Periodontal Disease Classification with Color Teeth Images Using Convolutional Neural Networks. Electronics 2023, 12, 1518. [Google Scholar] [CrossRef]
  28. Alalharith, D.M.; Alharthi, H.M.; Alghamdi, W.M.; Alsenbel, Y.M.; Aslam, N.; Khan, I.U.; Shahin, S.Y.; Dianišková, S.; Alhareky, M.S.; Barouch, K.K. A Deep Learning-Based Approach for the Detection of Early Signs of Gingivitis in Orthodontic Patients Using Faster Region-Based Convolutional Neural Networks. Int. J. Environ. Res. Public Health 2020, 17, 8447. [Google Scholar] [CrossRef]
  29. Mohammad-Rahimi, H.; Motamedian, S.R.; Rohban, M.H.; Krois, J.; Uribe, S.E.; Mahmoudinia, E.; Rokhshad, R.; Nadimi, M.; Schwendicke, F. Deep learning for caries detection: A systematic review. J. Dent. 2022, 122, 104115. [Google Scholar] [CrossRef]
  30. Benakatti, V.; Nayakar, R.; Anandhalli, M.; Lagali-Jirge, V. Accuracy of machine learning in identification of dental implant systems in radiographs-A systematic review and meta-analysis. J. Indian Acad. Oral Med. Radiol. 2022, 34, 354–358. [Google Scholar] [CrossRef]
  31. Alwohaibi, R.N.; Almaimoni, R.A.; Alshrefy, A.J.; AlMusailet, L.I.; AlHazzaa, S.A.; Menezes, R.G. Dental implants and forensic identification: A systematic review. J. Forensic Leg. Med. 2023, 96, 102508. [Google Scholar] [CrossRef]
  32. Khanagar, S.B.; Vishwanathaiah, S.; Naik, S.; Al-Kheraif, A.A.; Devang Divakar, D.; Sarode, S.C.; Bhandi, S.; Patil, S. Application and performance of artificial intelligence technology in forensic odontology—A systematic review. Leg. Med. 2021, 48, 101826. [Google Scholar]
  33. Farook, T.H.; Dudley, J. Automation and deep (machine) learning in temporomandibular joint disorder radiomics: A systematic review. J. Oral Rehabil. 2023, 50, 501–521. [Google Scholar] [CrossRef]
  34. Revilla-León, M.; Gómez-Polo, M.; Barmak, A.B.; Inam, W.; Kan, J.Y.K.; Kois, J.C.; Akal, O. Artificial intelligence models for diagnosing gingivitis and periodontal disease: A systematic review. J. Prosthet. Dent. 2022. [Google Scholar] [CrossRef]
  35. AbuSalim, S.; Zakaria, N.; Islam, M.R.; Kumar, G.; Mokhtar, N.; Abdulkadir, S.J. Analysis of Deep Learning Techniques for Dental Informatics: A Systematic Literature Review. Healthcare 2022, 10, 1892. [Google Scholar]
  36. Corbella, S.; Srinivas, S.; Cabitza, F. Applications of deep learning in dentistry. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2021, 132, 225–238. [Google Scholar] [CrossRef]
  37. Carrillo-Perez, F.; Pecho, O.E.; Morales, J.C.; Paravina, R.D.; Della Bona, A.; Ghinea, R.; Pulgar, R.; del Mar Pérez, M.; Herrera, L.J. Applications of artificial intelligence in dentistry: A comprehensive review. J. Esthet. Restor. Dent. 2022, 34, 259–280. [Google Scholar]
  38. Tandon, D.; Rajawat, J. Present and future of artificial intelligence in dentistry. J. Oral Biol. Craniofacial Res. 2020, 10, 391–396. [Google Scholar]
  39. Khanagar, S.B.; Al-ehaideb, A.; Maganur, P.C.; Vishwanathaiah, S.; Patil, S.; Baeshen, H.A.; Sarode, S.C.; Bhandi, S. Developments, application, and performance of artificial intelligence in dentistry—A systematic review. J. Dent. Sci. 2021, 16, 508–522. [Google Scholar]
  40. Mohammad-Rahimi, H.; Rokhshad, R.; Bencharit, S.; Krois, J.; Schwendicke, F. Deep learning: A primer for dentists and dental researchers. J. Dent. 2023, 130, 104430. [Google Scholar]
  41. Shah, N. Recent advances in imaging technologies in dentistry. World J. Radiol. 2014, 6, 794. [Google Scholar] [CrossRef]
  42. Singh, N.K.; Raza, K. Progress in deep learning-based dental and maxillofacial image analysis: A systematic review. Expert Syst. Appl. 2022, 199, 116968. [Google Scholar]
  43. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef] [PubMed]
  44. Al Kheraif, A.A.; Wahba, A.A.; Fouad, H. Detection of dental diseases from radiographic 2d dental image using hybrid graph-cut technique and convolutional neural network. Meas. J. Int. Meas. Confed. 2019, 146, 333–342. [Google Scholar] [CrossRef]
  45. Mine, Y.; Iwamoto, Y.; Okazaki, S.; Nakamura, K.; Takeda, S.; Peng, T.; Mitsuhata, C.; Kakimoto, N.; Kozai, K.; Murayama, T. Detecting the presence of supernumerary teeth during the early mixed dentition stage using deep learning algorithms: A pilot study. Int. J. Paediatr. Dent. 2022, 32, 678–685. [Google Scholar] [CrossRef]
  46. Okazaki, S.; Mine, Y.; Iwamoto, Y.; Urabe, S.; Mitsuhata, C.; Nomura, R.; Kakimoto, N.; Murayama, T. Analysis of the feasibility of using deep learning for multiclass classification of dental anomalies on panoramic radiographs. Dent. Mater. J. 2022, 41, 889–895. [Google Scholar] [CrossRef]
  47. Ragodos, R.; Wang, T.; Padilla, C.; Hecht, J.T.; Poletta, F.A.; Orioli, I.M.; Buxó, C.J.; Butali, A.; Valencia-Ramirez, C.; Restrepo Muñeton, C.; et al. Dental anomaly detection using intraoral photos via deep learning. Sci. Rep. 2022, 12, 11577. [Google Scholar] [CrossRef]
  48. Aljabri, M.; Aljameel, S.S.; Min-Allah, N.; Alhuthayfi, J.; Alghamdi, L.; Alduhailan, N.; Alfehaid, R.; Alqarawi, R.; Alhareky, M.; Shahin, S.Y.; et al. Canine impaction classification from panoramic dental radiographic images using deep learning models. Inform. Med. Unlocked 2022, 30, 100918. [Google Scholar] [CrossRef]
  49. Liu, J.; Liu, Y.; Li, S.; Ying, S.; Zheng, L.; Zhao, Z. Artificial intelligence-aided detection of ectopic eruption of maxillary first molars based on panoramic radiographs. J. Dent. 2022, 125, 104239. [Google Scholar] [CrossRef]
  50. Askar, H.; Krois, J.; Rohrer, C.; Mertens, S.; Elhennawy, K.; Ottolenghi, L.; Mazur, M.; Paris, S.; Schwendicke, F. Detecting white spot lesions on dental photography using deep learning: A pilot study. J. Dent. 2021, 107, 103615. [Google Scholar] [CrossRef]
  51. Schönewolf, J.; Meyer, O.; Engels, P.; Schlickenrieder, A.; Hickel, R.; Gruhn, V.; Hesenius, M.; Kühnisch, J. Artificial intelligence-based diagnostics of molar-incisor-hypomineralization (MIH) on intraoral photographs. Clin. Oral Investig. 2022, 26, 5923–5930. [Google Scholar] [CrossRef]
  52. Alevizakos, V.; Bekes, K.; Steffen, R.; von See, C. Artificial intelligence system for training diagnosis and differentiation with molar incisor hypomineralization (MIH) and similar pathologies. Clin. Oral Investig. 2022, 26, 6917–6923. [Google Scholar] [CrossRef]
  53. Ha, E.G.; Jeon, K.J.; Kim, Y.H.; Kim, J.Y.; Han, S.S. Automatic detection of mesiodens on panoramic radiographs using artificial intelligence. Sci. Rep. 2021, 11, 23061. [Google Scholar] [CrossRef]
  54. Jeon, K.J.; Ha, E.G.; Choi, H.; Lee, C.; Han, S.S. Performance comparison of three deep learning models for impacted mesiodens detection on periapical radiographs. Sci. Rep. 2022, 12, 15402. [Google Scholar] [CrossRef]
  55. Dai, X.; Jiang, X.; Jing, Q.; Zheng, J.; Zhu, S.; Mao, T.; Wang, D. A one-stage deep learning method for fully automated mesiodens localization on panoramic radiographs. Biomed. Signal Process. Control 2023, 80, 104315. [Google Scholar] [CrossRef]
  56. Kuwada, C.; Ariji, Y.; Fukuda, M.; Kise, Y.; Fujita, H.; Katsumata, A.; Ariji, E. Deep learning systems for detecting and classifying the presence of impacted supernumerary teeth in the maxillary incisor region on panoramic radiographs. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2020, 130, 464–469. [Google Scholar] [CrossRef]
  57. Celik, M.E. Deep Learning Based Detection Tool for Impacted Mandibular Third Molar Teeth. Diagnostics 2022, 12, 942. [Google Scholar] [CrossRef]
  58. Başaran, M.; Çelik, Ö.; Bayrakdar, I.S.; Bilgir, E.; Orhan, K.; Odabaş, A.; Aslan, A.F.; Jagtap, R. Diagnostic charting of panoramic radiography using deep-learning artificial intelligence system. Oral Radiol. 2022, 38, 363–369. [Google Scholar] [CrossRef]
  59. Lee, S.; Kim, D.; Jeong, H.G. Detecting 17 fine-grained dental anomalies from panoramic dental radiography using artificial intelligence. Sci. Rep. 2022, 12, 5172. [Google Scholar] [CrossRef]
  60. Kim, J.; Hwang, J.J.; Jeong, T.; Cho, B.-H.H.; Shin, J. Deep learning-based identification of mesiodens using automatic maxillary anterior region estimation in panoramic radiography of children. Dentomaxillofacial Radiol. 2022, 51, 20210528. [Google Scholar]
  61. Ariji, Y.; Mori, M.; Fukuda, M.; Katsumata, A.; Ariji, E. Automatic visualization of the mandibular canal in relation to an impacted mandibular third molar on panoramic radiographs using deep learning segmentation and transfer learning techniques. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2022, 134, 749–757. [Google Scholar] [CrossRef]
  62. Imak, A.; Çelebi, A.; Polat, O.; Türkoğlu, M.; Şengür, A. ResMIBCU-Net: An encoder–decoder network with residual blocks, modified inverted residual block, and bi-directional ConvLSTM for impacted tooth segmentation in panoramic X-ray images. Oral Radiol. 2023, 1, 1–15. [Google Scholar] [CrossRef]
  63. Zhu, H.; Yu, H.; Zhang, F.; Cao, Z.; Wu, F.; Zhu, F. Automatic segmentation and detection of ectopic eruption of first permanent molars on panoramic radiographs based on nnU-Net. Int. J. Paediatr. Dent. 2022, 32, 785–792. [Google Scholar] [CrossRef] [PubMed]
  64. Duman, S.; Yılmaz, E.F.; Eşer, G.; Çelik, Ö.; Bayrakdar, I.S.; Bilgir, E.; Costa, A.L.F.; Jagtap, R.; Orhan, K. Detecting the presence of taurodont teeth on panoramic radiographs using a deep learning-based convolutional neural network algorithm. Oral Radiol. 2023, 39, 207–214. [Google Scholar] [CrossRef] [PubMed]
  65. Megalan Leo, L.; Kalpalatha Reddy, T. Dental Caries Classification System Using Deep Learning Based Convolutional Neural Network. J. Comput. Theor. Nanosci. 2020, 17, 4660–4665. [Google Scholar] [CrossRef]
  66. Wang, C.; Qin, H.; Lai, G.; Zheng, G.; Xiang, H.; Wang, J.; Zhang, D. Automated classification of dual channel dental imaging of auto-fluorescence and white lightby convolutional neural networks. J. Innov. Opt. Health Sci. 2020, 13, 2050014. [Google Scholar] [CrossRef]
  67. Schwendicke, F.; Elhennawy, K.; Paris, S.; Friebertshäuser, P.; Krois, J. Deep learning for caries lesion detection in near-infrared light transillumination images: A pilot study. J. Dent. 2020, 92, 103260. [Google Scholar] [CrossRef]
  68. Megalan Leo, L.; Kalapalatha Reddy, T. Learning compact and discriminative hybrid neural network for dental caries classification. Microprocess. Microsyst. 2021, 82, 103836. [Google Scholar] [CrossRef]
  69. Vinayahalingam, S.; Kempers, S.; Limon, L.; Deibel, D.; Maal, T.; Hanisch, M.; Bergé, S.; Xi, T. Classification of caries in third molars on panoramic radiographs using deep learning. Sci. Rep. 2021, 11, 12609. [Google Scholar] [CrossRef]
  70. Singh, P.; Sehgal, P.G. V Black dental caries classification and preparation technique using optimal CNN-LSTM classifier. Multimed. Tools Appl. 2021, 80, 5255–5272. [Google Scholar] [CrossRef]
  71. Bui, T.H.; Hamamoto, K.; Paing, M.P. Automated Caries Screening Using Ensemble Deep Learning on Panoramic Radiographs. Entropy 2022, 24, 1358. [Google Scholar] [CrossRef]
  72. Panyarak, W.; Wantanajittikul, K.; Suttapak, W.; Charuakkra, A.; Prapayasatok, S. Feasibility of deep learning for dental caries classification in bitewing radiographs based on the ICCMSTM radiographic scoring system. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2023, 135, 272–281. [Google Scholar] [CrossRef]
  73. Haghanifar, A.; Majdabadi, M.M.; Haghanifar, S.; Choi, Y.; Ko, S.B. PaXNet: Tooth segmentation and dental caries detection in panoramic X-ray using ensemble transfer learning and capsule classifier. Multimed. Tools Appl. 2023, 82, 27659–27679. [Google Scholar] [CrossRef]
  74. Zhou, X.; Yu, G.; Yin, Q.; Yang, J.; Sun, J.; Lv, S.; Shi, Q. Tooth Type Enhanced Transformer for Children Caries Diagnosis on Dental Panoramic Radiographs. Diagnostics 2023, 13, 689. [Google Scholar] [CrossRef]
  75. Ezhov, M.; Gusarev, M.; Golitsyna, M.; Yates, J.M.; Kushnerev, E.; Tamimi, D.; Aksoy, S.; Shumilov, E.; Sanders, A.; Orhan, K. Clinically applicable artificial intelligence system for dental diagnosis with CBCT. Sci. Rep. 2021, 11, 15006. [Google Scholar] [CrossRef]
  76. Rajee, M.V.; Mythili, C. Dental Image Segmentation and Classification Using Inception Resnetv2. IETE J. Res. 2021, 1–17. [Google Scholar] [CrossRef]
  77. Pauwels, R.; Brasil, D.M.; Yamasaki, M.C.; Jacobs, R.; Bosmans, H.; Freitas, D.Q.; Haiter-Neto, F. Artificial intelligence for detection of periapical lesions on intraoral radiographs: Comparison between convolutional neural networks and human observers. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2021, 131, 610–616. [Google Scholar] [CrossRef]
  78. Calazans, M.A.A.; Ferreira, F.A.B.S.; de Lourdes Melo Guedes Alcoforado, M.; dos Santos, A.; dos Anjos Pontual, A.; Madeiro, F. Automatic Classification System for Periapical Lesions in Cone-Beam Computed Tomography. Sensors 2022, 22, 6481. [Google Scholar] [CrossRef]
  79. Sankaran, K.S. An improved multipath residual CNN-based classification approach for periapical disease prediction and diagnosis in dental radiography. Neural Comput. Appl. 2022, 34, 20067–20082. [Google Scholar] [CrossRef]
  80. Chuo, Y.; Lin, W.M.; Chen, T.Y.; Chan, M.L.; Chang, Y.S.; Lin, Y.R.; Lin, Y.J.; Shao, Y.H.; Chen, C.A.; Chen, S.L.; et al. A High-Accuracy Detection System: Based on Transfer Learning for Apical Lesions on Periapical Radiograph. Bioengineering 2022, 9, 777. [Google Scholar] [CrossRef]
  81. Li, S.; Liu, J.; Zhou, Z.; Zhou, Z.; Wu, X.; Li, Y.; Wang, S.; Liao, W.; Ying, S.; Zhao, Z. Artificial intelligence for caries and periapical periodontitis detection. J. Dent. 2022, 122, 104107. [Google Scholar] [CrossRef]
  82. Liu, F.; Gao, L.; Wan, J.; Lyu, Z.L.; Huang, Y.Y.; Liu, C.; Han, M. Recognition of Digital Dental X-ray Images Using a Convolutional Neural Network. J. Digit. Imaging 2022, 36, 73–79. [Google Scholar] [CrossRef] [PubMed]
  83. Jaiswal, P.; Bhirud, D.S. An intelligent deep network for dental medical image processing system. Biomed. Signal Process. Control 2023, 84, 104708. [Google Scholar] [CrossRef]
  84. Chauhan, R.B.; Shah, T.; Shah, D.; Gohil, T. A novel convolutional neural network–Fuzzy-based diagnosis in the classification of dental pulpitis. Adv. Hum. Biol. 2023, 13, 79–86. [Google Scholar] [CrossRef]
  85. Chang, H.J.; Lee, S.J.; Yong, T.H.; Shin, N.Y.; Jang, B.G.; Kim, J.E.; Huh, K.H.; Lee, S.S.; Heo, M.S.; Choi, S.C.; et al. Deep Learning Hybrid Method to Automatically Diagnose Periodontal Bone Loss and Stage Periodontitis. Sci. Rep. 2020, 10, 7531. [Google Scholar] [CrossRef]
  86. Krois, J.; Ekert, T.; Meinhold, L.; Golla, T.; Kharbot, B.; Wittemeier, A.; Dörfer, C.; Schwendicke, F. Deep Learning for the Radiographic Detection of Periodontal Bone Loss. Sci. Rep. 2019, 9, 8495. [Google Scholar] [CrossRef]
  87. Kim, J.; Lee, H.S.; Song, I.S.; Jung, K.H. DeNTNet: Deep Neural Transfer Network for the detection of periodontal bone loss using panoramic dental radiographs. Sci. Rep. 2019, 9, 17615. [Google Scholar] [CrossRef]
  88. Lee, J.H.; Kim, D.H.; Jeong, S.N. Diagnosis of cystic lesions using panoramic and cone beam computed tomographic images based on deep learning neural network. Oral Dis. 2020, 26, 152–158. [Google Scholar] [CrossRef]
  89. Rao, R.S.; Shivanna, D.B.; Mahadevpur, K.S.; Shivaramegowda, S.G.; Prakash, S.; Lakshminarayana, S.; Patil, S. Deep Learning-Based Microscopic Diagnosis of Odontogenic Keratocysts and Non-Keratocysts in Haematoxylin and Eosin-Stained Incisional Biopsies. Diagnostics 2021, 11, 2184. [Google Scholar] [CrossRef]
  90. Sivasundaram, S.; Pandian, C. Performance analysis of classification and segmentation of cysts in panoramic dental images using convolutional neural network architecture. Int. J. Imaging Syst. Technol. 2021, 31, 2214–2225. [Google Scholar] [CrossRef]
  91. Lee, J.S.; Adhikari, S.; Liu, L.; Jeong, H.G.; Kim, H.; Yoon, S.J. Osteoporosis detection in panoramic radiographs using a deep convolutional neural network-based computer-assisted diagnosis system: A preliminary study. Dentomaxillofacial Radiol. 2019, 48, 20170344. [Google Scholar] [CrossRef]
  92. Lee, K.S.; Jung, S.K.; Ryu, J.J.; Shin, S.W.; Choi, J. Evaluation of Transfer Learning with Deep Convolutional Neural Networks for Screening Osteoporosis in Dental Panoramic Radiographs. J. Clin. Med. 2020, 9, 392. [Google Scholar] [CrossRef]
  93. Sukegawa, S.; Fujimura, A.; Taguchi, A.; Yamamoto, N.; Kitamura, A.; Goto, R.; Nakano, K.; Takabatake, K.; Kawai, H.; Nagatsuka, H.; et al. Identification of osteoporosis using ensemble deep learning model with panoramic radiographs and clinical covariates. Sci. Rep. 2022, 12, 6088. [Google Scholar] [CrossRef]
  94. Tassoker, M.; Öziç, M.Ü.; Yuce, F. Comparison of five convolutional neural networks for predicting osteoporosis based on mandibular cortical index on panoramic radiographs. Dentomaxillofacial Radiol. 2022, 51, 20220108. [Google Scholar] [CrossRef]
  95. Nishiyama, M.; Ishibashi, K.; Ariji, Y.; Fukuda, M.; Nishiyama, W.; Umemura, M.; Katsumata, A.; Fujita, H.; Ariji, E. Performance of deep learning models constructed using panoramic radiographs from two hospitals to diagnose fractures of the mandibular condyle. Dentomaxillofacial Radiol. 2021, 50, 50. [Google Scholar] [CrossRef]
  96. Yang, P.; Guo, X.; Mu, C.; Qi, S.; Li, G. Detection of vertical root fractures by cone-beam computed tomography based on deep learning. Dentomaxillofacial Radiol. 2023, 52, 20220345. [Google Scholar] [CrossRef]
  97. Murata, M.; Ariji, Y.; Ohashi, Y.; Kawai, T.; Fukuda, M.; Funakoshi, T.; Kise, Y.; Nozawa, M.; Katsumata, A.; Fujita, H.; et al. Deep-learning classification using convolutional neural network for evaluation of maxillary sinusitis on panoramic radiography. Oral Radiol. 2019, 35, 301–307. [Google Scholar] [CrossRef]
  98. Li, W.; Liang, Y.; Zhang, X.; Liu, C.; He, L.; Miao, L.; Sun, W. A deep learning approach to automatic gingivitis screening based on classification and localization in RGB photos. Sci. Rep. 2021, 11, 16831. [Google Scholar] [CrossRef]
  99. Jung, W.; Lee, K.E.; Suh, B.J.; Seok, H.; Lee, D.W. Deep learning for osteoarthritis classification in temporomandibular joint. Oral Dis. 2023, 29, 1050–1059. [Google Scholar] [CrossRef]
  100. Kuwada, C.; Ariji, Y.; Kise, Y.; Fukuda, M.; Nishiyama, M.; Funakoshi, T.; Takeuchi, R.; Sana, A.; Kojima, N.; Ariji, E. Deep-learning systems for diagnosing cleft palate on panoramic radiographs in patients with cleft alveolus. Oral Radiol. 2022, 39, 349–354. [Google Scholar] [CrossRef]
  101. Al-Sarem, M.; Al-Asali, M.; Alqutaibi, A.Y.; Saeed, F. Enhanced Tooth Region Detection Using Pretrained Deep Learning Models. Int. J. Environ. Res. Public Health 2022, 19, 15414. [Google Scholar] [CrossRef]
  102. Zhang, X.; Liang, Y.; Li, W.; Liu, C.; Gu, D.; Sun, W.; Miao, L. Development and evaluation of deep learning for screening dental caries from oral photographs. Oral Dis. 2022, 28, 173–181. [Google Scholar] [CrossRef] [PubMed]
  103. Chen, H.; Li, H.; Zhao, Y.; Zhao, J.; Wang, Y. Dental disease detection on periapical radiographs based on deep convolutional neural networks. Int. J. Comput. Assist. Radiol. Surg. 2021, 16, 649–661. [Google Scholar] [CrossRef] [PubMed]
  104. Kim, C.; Jeong, H.; Park, W.; Kim, D. Tooth-Related Disease Detection System Based on Panoramic Images and Optimization Through Automation: Development Study. JMIR Med. Inform. 2022, 10, e38640. [Google Scholar] [CrossRef]
  105. Chen, X.; Guo, J.; Ye, J.; Zhang, M.; Liang, Y. Detection of Proximal Caries Lesions on Bitewing Radiographs Using Deep Learning Method. Caries Res. 2022, 56, 455–463. [Google Scholar] [CrossRef] [PubMed]
  106. Park, E.Y.; Cho, H.; Kang, S.; Jeong, S.; Kim, E.K. Caries detection with tooth surface segmentation on intraoral photographic images using deep learning. BMC Oral Health 2022, 22, 573. [Google Scholar] [CrossRef]
  107. Fatima, A.; Shafi, I.; Afzal, H.; Mahmood, K.; de la Torre Díez, I.; Lipari, V.; Ballester, J.B.; Ashraf, I. Deep Learning-Based Multiclass Instance Segmentation for Dental Lesion Detection. Healthcare 2023, 11, 347. [Google Scholar] [CrossRef]
  108. Jiang, L.; Chen, D.; Cao, Z.; Wu, F.; Zhu, H.; Zhu, F. A two-stage deep learning architecture for radiographic staging of periodontal bone loss. BMC Oral Health 2022, 22, 106. [Google Scholar] [CrossRef]
  109. Thanathornwong, B.; Suebnukarn, S. Automatic detection of periodontal compromised teeth in digital panoramic radiographs using faster regional convolutional neural networks. Imaging Sci. Dent. 2020, 50, 169. [Google Scholar] [CrossRef]
  110. Kwon, O.; Yong, T.H.; Kang, S.R.; Kim, J.E.; Huh, K.H.; Heo, M.S.; Lee, S.S.; Choi, S.C.; Yi, W.J. Automatic diagnosis for cysts and tumors of both jaws on panoramic radiographs using a deep convolution neural network. Dentomaxillofacial Radiol. 2020, 49, 20200185. [Google Scholar] [CrossRef]
  111. Yang, H.; Jo, E.; Kim, H.J.; Cha, I.H.; Jung, Y.S.; Nam, W.; Kim, J.Y.; Kim, J.K.; Kim, Y.H.; Oh, T.G.; et al. Deep Learning for Automated Detection of Cyst and Tumors of the Jaw in Panoramic Radiographs. J. Clin. Med. 2020, 9, 1839. [Google Scholar] [CrossRef]
  112. Ariji, Y.; Yanashita, Y.; Kutsuna, S.; Muramatsu, C.; Fukuda, M.; Kise, Y.; Nozawa, M.; Kuwada, C.; Fujita, H.; Katsumata, A.; et al. Automatic detection and classification of radiolucent lesions in the mandible on panoramic radiographs using a deep learning object detection technique. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2019, 128, 424–430. [Google Scholar] [CrossRef]
  113. Kise, Y.; Ariji, Y.; Kuwada, C.; Fukuda, M.; Ariji, E. Effect of deep transfer learning with a different kind of lesion on classification performance of pre-trained model: Verification with radiolucent lesions on panoramic radiographs. Imaging Sci. Dent. 2023, 53, 27–34. [Google Scholar] [CrossRef]
  114. Kuwana, R.; Ariji, Y.; Fukuda, M.; Kise, Y.; Nozawa, M.; Kuwada, C.; Muramatsu, C.; Katsumata, A.; Fujita, H.; Ariji, E. Performance of deep learning object detection technology in the detection and diagnosis of maxillary sinus lesions on panoramic radiographs. Dentomaxillofacial Radiol. 2020, 50, 20200171. [Google Scholar] [CrossRef]
  115. Watanabe, H.; Ariji, Y.; Fukuda, M.; Kuwada, C.; Kise, Y.; Nozawa, M.; Sugita, Y.; Ariji, E. Deep learning object detection of maxillary cyst-like lesions on panoramic radiographs: Preliminary study. Oral Radiol. 2021, 37, 487–493. [Google Scholar] [CrossRef]
  116. Fukuda, M.; Inamoto, K.; Shibata, N.; Ariji, Y.; Yanashita, Y.; Kutsuna, S.; Nakata, K.; Katsumata, A.; Fujita, H.; Ariji, E. Evaluation of an artificial intelligence system for detecting vertical root fracture on panoramic radiography. Oral Radiol. 2020, 36, 337–343. [Google Scholar] [CrossRef]
  117. Son, D.M.; Yoon, Y.A.; Kwon, H.J.; An, C.H.; Lee, S.H. Automatic Detection of Mandibular Fractures in Panoramic Radiographs Using Deep Learning. Diagnostics 2021, 11, 933. [Google Scholar] [CrossRef]
  118. Lee, K.S.; Kwak, H.J.; Oh, J.M.; Jha, N.; Kim, Y.J.; Kim, W.; Baik, U.B.; Ryu, J.J. Automated Detection of TMJ Osteoarthritis Based on Artificial Intelligence. J. Dent. Res. 2020, 99, 1363–1367. [Google Scholar] [CrossRef]
  119. Park, J.; Lee, J.; Moon, S.; Lee, K. Deep Learning Based Detection of Missing Tooth Regions for Dental Implant Planning in Panoramic Radiographic Images. Appl. Sci. 2022, 12, 1595. [Google Scholar] [CrossRef]
  120. Khan, H.A.; Haider, M.A.; Ansari, H.A.; Ishaq, H.; Kiyani, A.; Sohail, K.; Muhammad, M.; Khurram, S.A. Automated feature detection in dental periapical radiographs by using deep learning. Oral Surg. Oral Med. Oral Pathol. Oral Radiol. 2021, 131, 711–720. [Google Scholar] [CrossRef]
  121. Cantu, A.G.; Gehrung, S.; Krois, J.; Chaurasia, A.; Rossi, J.G.; Gaudin, R.; Elhennawy, K.; Schwendicke, F. Detecting caries lesions of different radiographic extension on bitewings using deep learning. J. Dent. 2020, 100, 103425. [Google Scholar] [CrossRef]
  122. Bayrakdar, I.S.; Orhan, K.; Akarsu, S.; Çelik, Ö.; Atasoy, S.; Pekince, A.; Yasa, Y.; Bilgir, E.; Sağlam, H.; Aslan, A.F.; et al. Deep-learning approach for caries detection and segmentation on dental bitewing radiographs. Oral Radiol. 2022, 38, 468–479. [Google Scholar] [CrossRef] [PubMed]
  123. You, W.; Hao, A.; Li, S.; Wang, Y.; Xia, B. Deep learning-based dental plaque detection on primary teeth: A comparison with clinical assessments. BMC Oral Health 2020, 20, 141. [Google Scholar] [CrossRef]
  124. Lee, S.; Oh, S.-I.; Jo, J.; Kang, S.; Shin, Y.; Park, J.-W. Deep learning for early dental caries detection in bitewing radiographs. Sci. Rep. 2021, 11, 16807. [Google Scholar] [CrossRef] [PubMed]
  125. Lian, L.; Zhu, T.; Zhu, F.; Zhu, H. Deep Learning for Caries Detection and Classification. Diagnostics 2021, 11, 1672. [Google Scholar] [CrossRef] [PubMed]
  126. Zhu, H.; Cao, Z.; Lian, L.; Ye, G.; Gao, H.; Wu, J. CariesNet: A deep learning approach for segmentation of multi-stage caries lesion from oral panoramic X-ray image. Neural Comput. Appl. 2022, 35, 16051–16059. [Google Scholar] [CrossRef]
  127. Ari, T.; Sağlam, H.; Öksüzoğlu, H.; Kazan, O.; Bayrakdar, İ.Ş.; Duman, S.B.; Çelik, Ö.; Jagtap, R.; Futyma-Gąbka, K.; Różyło-Kalinowska, I.; et al. Automatic Feature Segmentation in Dental Periapical Radiographs. Diagnostics 2022, 12, 3081. [Google Scholar] [CrossRef]
  128. Dayı, B.; Üzen, H.; Çiçek, İ.B.; Duman, Ş.B. A Novel Deep Learning-Based Approach for Segmentation of Different Type Caries Lesions on Panoramic Radiographs. Diagnostics 2023, 13, 202. [Google Scholar] [CrossRef]
  129. Rajee, M.V.; Mythili, C. Novel technique for caries detection using curvilinear semantic deep convolutional neural network. Multimed. Tools Appl. 2023, 82, 10745–10762. [Google Scholar] [CrossRef]
  130. Kirnbauer, B.; Hadzic, A.; Jakse, N.; Bischof, H.; Stern, D. Automatic Detection of Periapical Osteolytic Lesions on Cone-beam Computed Tomography Using Deep Convolutional Neuronal Networks. J. Endod. 2022, 48, 1434–1440. [Google Scholar] [CrossRef]
  131. Song, I.S.; Shin, H.K.; Kang, J.H.; Kim, J.E.; Huh, K.H.; Yi, W.J.; Lee, S.S.; Heo, M.S. Deep learning-based apical lesion segmentation from panoramic radiographs. Imaging Sci. Dent. 2022, 52, 351–357. [Google Scholar] [CrossRef]
  132. Chen, C.C.; Wu, Y.F.; Aung, L.M.; Lin, J.C.Y.; Ngo, S.T.; Su, J.N.; Lin, Y.M.; Chang, W.J. Automatic recognition of teeth and periodontal bone loss measurement in digital radiographs using deep-learning artificial intelligence. J. Dent. Sci. 2023, 18, 1301–1309. [Google Scholar] [CrossRef]
  133. Endres, M.G.; Hillen, F.; Salloumis, M.; Sedaghat, A.R.; Niehues, S.M.; Quatela, O.; Hanken, H.; Smeets, R.; Beck-Broichsitter, B.; Rendenbach, C.; et al. Development of a Deep Learning Algorithm for Periapical Disease Detection in Dental Radiographs. Diagnostics 2020, 10, 430. [Google Scholar] [CrossRef]
  134. Yu, D.; Hu, J.; Feng, Z.; Song, M.; Zhu, H. Deep learning based diagnosis for cysts and tumors of jaw with massive healthy samples. Sci. Rep. 2022, 12, 1855. [Google Scholar] [CrossRef]
  135. Chau, R.C.W.; Li, G.-H.; Tew, I.M.; Thu, K.M.; McGrath, C.; Lo, W.-L.; Ling, W.-K.; Hsung, R.T.-C.; Lam, W.Y.H. Accuracy of Artificial Intelligence-Based Photographic Detection of Gingivitis. Int. Dent. J. 2023. [Google Scholar] [CrossRef]
  136. Wang, X.; Pastewait, M.; Wu, T.H.; Lian, C.; Tejera, B.; Lee, Y.T.; Lin, F.C.; Wang, L.; Shen, D.; Li, S.; et al. 3D morphometric quantification of maxillae and defects for patients with unilateral cleft palate via deep learning-based CBCT image auto-segmentation. Orthod. Craniofac. Res. 2021, 24, 108–116. [Google Scholar] [CrossRef]
  137. Laganà, G.; Venza, N.; Borzabadi-Farahani, A.; Fabi, F.; Danesi, C.; Cozza, P. Dental anomalies: Prevalence and associations between them in a large sample of non-orthodontic subjects, a cross-sectional study. BMC Oral Health 2017, 17, 62. [Google Scholar] [CrossRef] [Green Version]
  138. Sella Tunis, T.; Sarne, O.; Hershkovitz, I.; Finkelstein, T.; Pavlidi, A.M.; Shapira, Y.; Davidovitch, M.; Shpack, N. Dental Anomalies’ Characteristics. Diagnostics 2021, 11, 1161. [Google Scholar] [CrossRef]
Figure 2. Search results according to the PRISMA-2020 flowchart.
Figure 3. Distribution of publications by years, tasks performed, anomaly/disease applications, and dental imaging techniques.
Table 1. Definition of the research question within the framework of PICO.
| Research Question | What Are the Applications and Performance of Deep Learning for Diagnosing Dental Anomalies and Diseases? |
|---|---|
| Population | Diagnostic medical images of patients with dental anomalies or diseases (radiographs, CBCT, intraoral images, near-infrared-light transillumination (NILT) images, optical color images, microscopic histopathology) |
| Intervention | Deep learning-based models for diagnosis and clinical decision making |
| Comparison | Expert diagnosis |
| Outcome | Predicted results that can be measured with performance metrics (accuracy (ACC), sensitivity (SEN), specificity (SPEC), area under the curve (AUC), Matthews correlation coefficient (MCC), intersection over union (IoU), positive/negative predictive values (PPV/NPV), etc.) |
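The outcome metrics above all derive from the 2×2 confusion matrix of a binary diagnostic test. As a minimal illustrative sketch (the counts below are hypothetical, not taken from any reviewed study), they can be computed as:

```python
from math import sqrt

def confusion_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Common diagnostic metrics from a 2x2 confusion matrix."""
    acc = (tp + tn) / (tp + fp + fn + tn)       # accuracy (ACC)
    sen = tp / (tp + fn)                        # sensitivity / recall / TPR (SEN)
    spec = tn / (tn + fp)                       # specificity / TNR (SPEC)
    ppv = tp / (tp + fp)                        # positive predictive value / precision
    npv = tn / (tn + fn)                        # negative predictive value
    f1 = 2 * ppv * sen / (ppv + sen)            # harmonic mean of precision and recall
    mcc = (tp * tn - fp * fn) / sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )                                           # Matthews correlation coefficient
    return {"ACC": acc, "SEN": sen, "SPEC": spec,
            "PPV": ppv, "NPV": npv, "F1": f1, "MCC": mcc}

# Hypothetical counts for a caries classifier evaluated on 100 radiographs
metrics = confusion_metrics(tp=40, fp=5, fn=10, tn=45)
print({k: round(v, 3) for k, v in metrics.items()})
```

Reporting several of these together matters because, on the imbalanced datasets common in this review, a high ACC can coexist with a low SEN or MCC.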
Table 2. Structure of electronic search strategies.
| Database | Search Strategy | Search Date |
|---|---|---|
| Google Scholar | all: ("deep learning" OR "CNN" OR "convolutional neural network") AND ("oral" OR "dental" OR "tooth" OR "teeth") AND ("anomalies" OR "diseases") | 26 May 2023 |
| Medline/PubMed | ("deep learning"[All Fields] OR "CNN"[All Fields] OR "convolutional neural network"[All Fields]) AND ("oral"[All Fields] OR "dental"[All Fields] OR "tooth"[All Fields] OR "teeth"[All Fields]) AND ("anomalies"[All Fields] OR "diseases"[All Fields]) | 25 May 2023 |
Table 3. Summary of studies on deep learning diagnosis of dental anomalies.
| Author, Year, Reference | Anomaly | Image Type | Dataset Size | Method | Primary Performance Metrics and Values (%) | Other Performance Metrics | Explainable AI Method |
|---|---|---|---|---|---|---|---|
| Classification | | | | | | | |
| Ahn et al., 2021, [21] | Mesiodens | Panoramic | 1100 | InceptionResNetV2 | ACC: 92.40 | Precision, Recall, F1 score, AUC | Grad-CAM |
| Kheraif et al., 2019, [44] | Supernumerary, Number teeth, Jaws position, Structure, Restoration, Implants, Cavities | Panoramic | 1500 | Hybrid Graph Cut Segmentation + CNN | ACC: 97.07 | Precision, Recall, F1 score, SPEC | - |
| Mine et al., 2022, [45] | Supernumerary | Panoramic | 220 | VGG16 | ACC: 84.00 | SEN, SPEC, AUC | - |
| Okazaki et al., 2022, [46] | Supernumerary, Odontomas | Panoramic | 150 | AlexNet | ACC: 70.00 | Precision, SEN, F1 score | - |
| Ragodos et al., 2022, [47] | Supernumerary, Rotation, Agenesis, Mammalons, Microdontia, Impacted, Hypoplasia, Incisal fissure, Hypocalcification, Displaced | Intraoral photos | 38,486 | ResNet18 | AUC for supernumerary class: 57.10 | Precision, Recall, F1 score | Grad-CAM |
| Aljabri et al., 2022, [48] | Maxillary canine impaction | Panoramic | 416 | InceptionV3 | ACC: 92.59 | Precision, Recall, F1 score, SPEC | Grad-CAM |
| Liu et al., 2022, [49] | Ectopic eruption of maxillary first molars | Panoramic | 1580 | CNN-based Fusion Model | SPEC: 86.00 | SEN, F1 score, PPV, NPV | Grad-CAM |
| Askar et al., 2021, [50] | White spot lesions, Hypomineralized lesions | Intraoral photos | 434 | SqueezeNet | ACC: 84.00 | SEN, SPEC, F1 score, AUC, PPV, NPV | Grad-CAM |
| Schönewolf et al., 2022, [51] | Molar-incisor hypomineralization, Enamel breakdown | Intraoral photos | 3241 | ResNeXt-101 | ACC: 95.20 | SEN, SPEC, AUC, PPV, NPV | Grad-CAM |
| Alevizakos et al., 2022, [52] | Molar-incisor hypomineralization, Amelogenesis imperfecta, Dental fluorosis, White spot lesions | Intraoral photos | 462 | DenseNet121 | ACC: 92.86 | Loss | - |
| Detection | | | | | | | |
| Ha et al., 2021, [53] | Mesiodens | Panoramic | 612 | YOLOv3 | ACC: 96.20 | SEN, SPEC | - |
| Jeon et al., 2022, [54] | Mesiodens | Periapical | 720 | EfficientDet-D3 | ACC: 99.20 | SEN, SPEC | - |
| Dai et al., 2023, [55] | Mesiodens | Panoramic | 850 | Authors' specific CNN: DMLnet | ACC: 94.00 | SEN, SPEC, mAP | - |
| Kuwada et al., 2020, [56] | Supernumerary | Panoramic | 550 | DetectNet | AUC: 96.00 | Precision, Recall, F1 score, ACC | - |
| Celik, 2022, [57] | Third molar impacted teeth | Panoramic | 440 | YOLOv3 | mAP: 96.00 | IoU, ACC, Precision, Recall | - |
| Başaran et al., 2022, [58] | Impacted tooth, Residual root, and eight fine-grained dental anomalies | Panoramic | 1084 | Faster R-CNN InceptionV2 (COCO) | SEN for Impacted class: 96.58 | TP, FP, FN, Precision, F1 score | - |
| Lee et al., 2022, [59] | Supernumerary, Impacted, Residual root, and 14 fine-grained dental anomalies | Panoramic | 23,000 | Faster R-CNN | SEN: 99.00 | Precision, SPEC | - |
| Segmentation | | | | | | | |
| Kim et al., 2022, [60] | Mesiodens | Panoramic | 988 | DeepLabV3plus + InceptionResNetV2 | ACC: DeepLabV3plus: 83.90, InceptionResNetV2: 97.10 | IoU, MeanBF score, Precision, Recall, F1 score | Grad-CAM |
| Ariji et al., 2022, [61] | Third molar impacted teeth | Panoramic | 3200 | U-Net | DSC: 83.10 | JSC, SEN | - |
| Imak et al., 2023, [62] | Impacted tooth | Panoramic | 304 | Authors' specific CNN: ResMIBCU-Net (an encoder–decoder network with residual blocks, a modified inverted residual block, and bi-directional ConvLSTM) | ACC: 99.82 | IoU, Recall, F1 score | - |
| Zhu et al., 2022, [63] | Ectopic eruption of first permanent molars | Panoramic | 285 | nnU-Net | ACC: 99.00 | DSC, IoU, Precision, SEN, SPEC, F1 score | - |
| Duman et al., 2023, [64] | Taurodont | Panoramic | 434 | U-Net | SEN: 86.50 | TP, FP, FN, Precision, F1 score | - |
ACC, Accuracy; AUC, Area Under the ROC Curve; CAM, Class Activation Mapping; CBCT, Cone Beam Computed Tomography; CNN, Convolutional Neural Network; DSC, Dice Similarity Coefficient; FN, False Negative; FP, False Positive; Grad-CAM, Gradient-weighted Class Activation Mapping; IoU, Intersection over Union; JSC, Jaccard Similarity Coefficient; mAP, mean Average Precision; NILT, Near-Infrared-Light Transillumination; NPV, Negative Predictive Value; OPG, Orthopantomogram; PPV, Positive Predictive Value; SEN, Sensitivity; SPEC, Specificity; TMJOA, Temporomandibular Joint Osteoarthritis; TP, True Positive; YOLO, You Only Look Once.
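The overlap metrics reported by the segmentation studies above (IoU/JSC and DSC) compare predicted and ground-truth masks. A minimal sketch on hypothetical pixel sets (the example masks are illustrative, not from any reviewed study):

```python
def iou_and_dice(pred: set, truth: set) -> tuple:
    """IoU (Jaccard similarity) and Dice similarity between two pixel sets.

    Masks are represented as sets of (row, col) coordinates; empty-vs-empty
    is scored as a perfect match.
    """
    inter = len(pred & truth)
    union = len(pred | truth)
    iou = inter / union if union else 1.0
    dice = 2 * inter / (len(pred) + len(truth)) if (pred or truth) else 1.0
    return iou, dice

# Two 3-pixel masks that agree on 2 pixels
pred = {(0, 0), (0, 1), (1, 0)}
truth = {(0, 1), (1, 0), (1, 1)}
print(iou_and_dice(pred, truth))  # IoU 0.5, Dice 2/3
```

The two metrics are monotonically related (DSC = 2·IoU / (1 + IoU)), which is why studies often report either one as the primary segmentation score.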
Table 4. Summary of studies on deep learning diagnosis of dental diseases.
| Author, Year, Reference | Disease | Image Type | Dataset Size | Method | Primary Performance Metrics and Values (%) | Other Performance Metrics | Explainable AI Method |
|---|---|---|---|---|---|---|---|
| Classification | | | | | | | |
| Megalan Leo and Kalpalatha Reddy, 2020, [65] | Dental caries | Bitewing | 480 | InceptionV3 | ACC: 86.70 | - | - |
| Wang et al., 2020, [66] | Dental caries, Dental plaque | Intraoral photos | 7200 | Authors' specific CNN | ACC: Dental caries: 95.30, Dental plaque: 95.90 | SEN, SPEC | - |
| Schwendicke et al., 2020, [67] | Dental caries | NILT | 226 | ResNet18 | ACC: 69.00 | SEN, SPEC, AUC, PPV, NPV | CAM |
| Megalan Leo and Kalpalatha Reddy, 2021, [68] | Dental caries: Enamel, Dentin, Pulp, Root lesions | Bitewing | 480 | Hybrid Neural Network (HNN) | ACC: 96.00 | - | - |
| Vinayahalingam et al., 2021, [69] | Dental caries | Panoramic | 400 | MobileNetV2 | ACC: 87.00 | SEN, SPEC, AUC | CAM |
| Singh and Sehgal, 2021, [70] | G.V. Black dental caries | Periapical | 1500 | CNN-LSTM | ACC: 96.00 | Precision, SEN, SPEC, F1 score, G-mean, AUC | - |
| Bui et al., 2022, [71] | Dental caries | Panoramic | 95 | Pretrained CNNs + SVM | ACC: 93.58 | SEN, SPEC, F1 score, PPV, NPV | - |
| Vimalarani and Ramachandraiah, 2022, [26] | Dental caries | Bitewing | 1000 | Pervasive deep gradient-based LeNet | ACC: 98.74 | SEN, SPEC, ER, PPV, NPV | - |
| Panyarak et al., 2023, [72] | Dental caries | Bitewing | 2758 | ResNet152 | ACC: 71.11 | SEN, SPEC, CR, AUC | CAM |
| Haghanifar et al., 2023, [73] | Dental caries | Panoramic | 470 | Authors' specific CNN: PaXNet (ensemble transfer learning and capsule classifier) | ACC: 86.05 | Loss, Precision, Recall, F0.5 score | Grad-CAM |
| Zhou et al., 2023, [74] | Dental caries | Panoramic | 304 | Swin Transformer | ACC: 85.57 | Precision, Recall, F1 score | - |
| Ezhov et al., 2021, [75] | Dental caries, Periapical lesion, Periodontal bone loss | CBCT | 1346 | U-Net + DenseNet | SEN: 92.39 | SPEC | - |
| Rajee and Mythili, 2021, [76] | Dental caries, Periapical infection, Periodontal and Pericoronal diseases | Periapical, Panoramic | 2000 | Curvilinear Semantic DCNN + InceptionResNetV2 | ACC: 94.51 | MCC, DSC, JSC, ER, Precision, Recall, SPEC | - |
| Pauwels et al., 2021, [77] | Periapical lesion | Periapical | 280 | Authors' specific CNN | SEN: 87.00 | SPEC, AUC | - |
| Calazans et al., 2022, [78] | Periapical lesion | CBCT | 1000 | Siamese Network + DenseNet121 | ACC: 70.00 | SPEC, Precision, Recall, F1 score | - |
| Sankaran, 2022, [79] | Periapical lesion | Panoramic | 1500 | Improved Multipath Residual CNN (IMRCNN) | ACC: 98.90 | SEN, SPEC, Precision, F1 score | - |
| Li et al., 2021, [22] | Dental apical lesions | Periapical | 476 | Authors' specific CNN | ACC: 92.50 | Loss | - |
| Chuo et al., 2022, [80] | Dental apical lesions | Periapical | 760 | AlexNet | ACC: 96.21 | - | - |
| Li et al., 2022, [81] | Dental caries, Periapical periodontitis | Periapical | 4129 | Modified ResNet18 | F1 score: Dental caries: 82.90, Periapical periodontitis: 82.80 | SEN, SPEC, AUC, PPV, NPV | Grad-CAM |
| Liu et al., 2022, [82] | Dental caries, Periapical periodontitis, Periapical cysts | Periapical | 1880 | DenseNet121 | ACC: 99.50 | SEN, SPEC, PPV, NPV | CAM |
| Park et al., 2023, [27] | Calculus and Inflammation | Optical color images | 220 | YOLOv5 + Parallel 1D CNN | ACC: 74.54 | - | - |
| Jaiswal and Bhirud, 2023, [83] | Erosive wear, Periodontitis | OPG | 500 | CNN with Antlion Optimization | ACC: 77.00 | Precision, Recall, F1 score | - |
| Chauhan et al., 2023, [84] | Dental pulpitis | Periapical | 428 | CNN-Fuzzy logic | ACC: 94.00 | SEN, SPEC, Precision, F1 score, MCC | Grad-CAM |
| Chang et al., 2020, [85] | Periodontal bone loss, Periodontitis | Panoramic | 330 | Mask R-CNN + CNN | Pixel ACC: 92.00 | DSC, JSC | - |
| Krois et al., 2019, [86] | Periodontal bone loss | Panoramic | 85 | Authors' specific CNN | ACC: 81.00 | SEN, SPEC, F1 score, AUC, PPV, NPV | - |
| Kim et al., 2019, [87] | Periodontal bone loss | Panoramic | 12,179 | Authors' specific CNN: DeNTNet (Deep Neural Transfer Network) | F1 score: 75.00 | Precision, Recall, AUC, NPV | Grad-CAM |
| Lee et al., 2020, [88] | Odontogenic cyst | Panoramic, CBCT | Panoramic 1140, CBCT 986 | InceptionV3 | AUC: Panoramic: 84.70, CBCT: 91.40 | SEN, SPEC | - |
| Rao et al., 2021, [89] | Odontogenic cysts | Microscopic histopathology | 2657 | DenseNet169 | ACC: 93.00 | Loss, Precision, Recall, F1 score | - |
| Sivasundaram and Pandian, 2021, [90] | Dental cyst | Panoramic | 1171 | Morphology-based Segmentation + Modified LeNet | ACC: 98.50 | CR, Precision, F1 score, DSC, SEN, SPEC, PPV, NPV | - |
| Lee et al., 2019, [91] | Osteoporosis | Panoramic | 1268 | Multicolumn DCNN | AUC: 99.87 | ACC, Precision, Recall, F1 score | - |
| Lee et al., 2020, [92] | Osteoporosis | Panoramic | 680 | VGG16 | AUC: 85.80 | SEN, SPEC, ACC | Grad-CAM |
| Sukegawa et al., 2022, [93] | Osteoporosis | Panoramic | 778 | EfficientNet Ensemble Model | ACC: 84.50 | Precision, Recall, F1 score, AUC | Grad-CAM |
| Tassoker et al., 2022, [94] | Osteoporosis | Panoramic | 1488 | AlexNet | ACC: 81.14 | SEN, SPEC, F1 score, AUC | Grad-CAM |
| Nishiyama et al., 2021, [95] | Mandibular condyle fractures | Panoramic | 400 | AlexNet | ACC: 84.50 | SEN, SPEC, AUC | - |
| Yang et al., 2023, [96] | Vertical root fractures | CBCT | 1641 | ResNet50 | AUC: 92.90 | SEN, SPEC, ACC, PPV, NPV | CAM |
| Murata et al., 2019, [97] | Maxillary sinusitis | Panoramic | 920 | AlexNet | ACC: 87.50 | SEN, SPEC, AUC | - |
| Li et al., 2021, [98] | Gingivitis | Intraoral photos | 625 | CNN with Multi-task Learning | AUC: 87.11 | SEN, SPEC, FPR | Grad-CAM |
| Choi et al., 2021, [23] | TMJOA | OPG | 1189 | ResNet | ACC: 80.00 | SEN, SPEC, Cohen's Kappa | - |
| Jung et al., 2023, [99] | TMJOA | Panoramic | 858 | EfficientNetB7 | ACC: 88.37 | SEN, SPEC, AUC | Grad-CAM |
| Kuwada et al., 2022, [100] | Cleft palate | Panoramic | 491 | DetectNet, VGG16 | AUC: DetectNet: 95.00, VGG16: 93.00 | SEN, SPEC, ACC | - |
| Al-Sarem et al., 2022, [101] | Missing tooth | CBCT | 500 | U-Net + DenseNet169 | Precision: 94.00 | ACC, Recall, F1 score, Loss, MCC | - |
| Detection | | | | | | | |
| Zhang et al., 2022, [102] | Dental caries | Intraoral photos | 3932 | Single-Shot Detector | AUC: 95.00 | TPR | - |
| Chen et al., 2021, [103] | Dental caries, Periapical periodontitis, Periodontitis | Periapical | 2900 | Faster R-CNN | IoU: Dental caries: 71.59, Periapical periodontitis: 69.42, Periodontitis: 68.35 | AP, AUC, Recall | - |
| Kim et al., 2022, [104] | Dental caries, Periapical radiolucency, Residual root | Panoramic | 10,000 | Fast R-CNN | ACC: 90.00 | SEN, SPEC, Precision | - |
| Chen et al., 2022, [105] | Dental caries | Bitewing | 978 | Faster R-CNN | ACC: 87.00 | SEN, SPEC, PPV, NPV | - |
| Park et al., 2022, [106] | Dental caries | Intraoral photos | 2348 | Faster R-CNN | ACC: 81.30 | AUC, SEN, AP | - |
| Fatima et al., 2023, [107] | Periapical lesions | Periapical | 534 | Lightweight Mask R-CNN | ACC: 94.00 | IoU, mAP | - |
| Jiang et al., 2022, [108] | Periodontal bone loss | Panoramic | 640 | U-Net + YOLOv4 | ACC: 77.00 | AP, Recall, F1 score | - |
| Thanathornwong and Suebnukarn, 2020, [109] | Periodontally compromised teeth | Panoramic | 100 | Faster R-CNN | Precision: 81.00 | SEN, SPEC, F1 score | - |
| Kwon et al., 2020, [110] | Odontogenic cysts | Panoramic | 1282 | YOLOv3 | ACC: 91.30 | SEN, SPEC, AUC | - |
| Yang et al., 2020, [111] | Odontogenic cysts | Panoramic | 1603 | YOLOv2 | Precision: 70.70 | Recall, ACC, F1 score | - |
| Ariji et al., 2019, [112] | Radiolucent lesions in the mandible (Ameloblastomas, Odontogenic keratocysts, Dentigerous cysts, Radicular cysts, Simple bone cysts) | Panoramic | 285 | DetectNet | SEN: 88.00 | IoU, FPR | - |
| Kise et al., 2023, [113] | Mandibular radiolucent cyst-like lesions (Radicular cyst, Dentigerous cyst, Odontogenic keratocyst, Ameloblastoma) | Panoramic | 310 | DetectNet | ACC: 89.00 | SEN, SPEC | - |
| Kuwana et al., 2020, [114] | Inflamed maxillary sinuses, Maxillary sinus cysts | Panoramic | 611 | DetectNet | ACC: 92.00 | SEN, SPEC, FPR | - |
| Watanabe et al., 2021, [115] | Maxillary cyst-like lesions | Panoramic | 410 | DetectNet | Precision: 90.00 | Recall, F1 score | - |
| Fukuda et al., 2020, [116] | Vertical root fractures | Panoramic | 300 | DetectNet | Precision: 93.00 | Recall, F1 score | - |
| Son et al., 2021, [117] | Mandibular fractures | Panoramic | 420 | YOLOv4 | Precision: 98.50 | Recall, F1 score | - |
| Alalharith et al., 2020, [28] | Gingivitis | Intraoral photos | 134 | Faster R-CNN | ACC: 100 | Recall, mAP | - |
| Lee et al., 2020, [118] | TMJOA | CBCT | 3514 | Single-Shot Detector | ACC: 86.00 | Precision, Recall, F1 score | - |
| Park et al., 2022, [119] | Missing tooth | Panoramic | 455 | Faster R-CNN | mAP: 59.09 | AP, IoU | - |
| Segmentation | | | | | | | |
| Casalegno et al., 2019, [25] | Dental caries | NILT | 217 | U-Net | mIoU: 72.70 | AUC | - |
| Khan et al., 2021, [120] | Dental caries, Alveolar bone recession, Interradicular radiolucencies | Periapical | 206 | U-Net | mIoU: 40.20 | DSC, Precision, Recall, NPV, F1 score | - |
| Cantu et al., 2020, [121] | Dental caries | Bitewing | 3686 | U-Net | ACC: 80.00 | SEN, SPEC, PPV, NPV, MCC, F1 score | - |
| Bayrakdar et al., 2022, [122] | Dental caries | Bitewing | 621 | U-Net | SEN: 81.00 | Precision, F1 score | - |
| You et al., 2020, [123] | Dental plaque | Intraoral photos | 886 | DeepLabV3+ | mIoU: 72.60 | - | - |
| Lee et al., 2021, [124] | Dental caries | Bitewing | 304 | U-Net | Precision: 63.29 | Recall, F1 score, PPV | - |
| Lian et al., 2021, [125] | Dental caries | Panoramic | 1160 | Caries detection: nnU-Net, Caries severity detection: DenseNet121 | IoU: nnU-Net: 78.50, ACC: DenseNet121: 95.70 | DSC, Precision, Recall, NPV, F1 score | - |
| Zhu et al., 2022, [126] | Dental caries | Panoramic | 1159 | Authors' specific CNN: CariesNet | DSC: 93.64 | ACC, Precision, Recall, F1 score | - |
| Ari et al., 2022, [127] | Dental caries, Periapical lesion | Periapical | 1169 | U-Net | SEN: Dental caries: 82.00, Periapical lesion: 92.00 | Precision, F1 score | - |
| Dayı et al., 2023, [128] | Dental caries | Panoramic | 504 | Authors' specific CNN: DCDNet | F1 score: 61.86 | Precision, Recall | - |
| Rajee and Mythili, 2023, [129] | Dental caries | Panoramic | 2000 | Curvilinear Semantic DCNN | ACC: 93.70 | DSC, JSC, TPR, FPR | - |
| Kirnbauer et al., 2022, [130] | Periapical lesion | CBCT | 144 | U-Net | ACC: 97.30 | SEN, SPEC, FNR, DSC | - |
| Song et al., 2022, [131] | Dental apical lesions | Panoramic | 1000 | U-Net | IoU: 82.80 | Precision, Recall, F1 score | - |
| Chen et al., 2023, [132] | Periodontal bone loss | Periapical | 8000 | U-Net-based Ensemble Model | ACC: 97.00 | AP | - |
| Endres et al., 2020, [133] | Periapical inflammation, Granuloma, Cysts, Osteomyelitis, Tumor | Panoramic | 2902 | U-Net | PPV: 67.00 | TPR, AP, F1 score | - |
| Yu et al., 2022, [134] | Odontogenic cysts | Panoramic | 10,872 | MoCoV2 + U-Net | ACC: MoCoV2: 88.72, IoU: U-Net: 70.84 | Precision, F1 score, SEN, SPEC | Grad-CAM |
| Chau et al., 2023, [135] | Gingivitis | Intraoral photos | 567 | DeepLabV3plus | SEN: 92.00 | SPEC, IoU | - |
| Wang et al., 2021, [136] | Cleft lip and palate | CBCT | 60 | 3D U-Net | DSC: 77.00 | - | - |
| Bayrakdar et al., 2021, [24] | Missing tooth | CBCT | 75 | 3D U-Net | Right percentages: 95.30 | False percentages | - |
ACC, Accuracy; AP, Average Precision; AUC, Area Under the ROC Curve; CAM, Class Activation Mapping; CBCT, Cone Beam Computed Tomography; CNN, Convolutional Neural Network; CR, Classification Rate; DSC, Dice Similarity Coefficient; ER, Error Rate; FN, False Negative; FNR, False Negative Rate; FP, False Positive; FPR, False Positive Rate; Grad-CAM, Gradient-weighted Class Activation Mapping; IoU, Intersection over Union; JSC, Jaccard Similarity Coefficient; LSTM, Long Short-Term Memory; mAP: mean Average Precision; MCC, Matthews Correlation Coefficient; mIoU, mean Intersection over Union; NILT, Near-Infrared-Light Transillumination; NPV, Negative Predictive Value; OPG, Orthopantomogram; PPV, Positive Predictive Value; R-CNN, Region-based Convolutional Neural Network; SEN, Sensitivity; SPEC, Specificity; SSD, Single Shot Detector; SVM, Support Vector Machine; TMJOA, Temporomandibular Joint Osteoarthritis; TN, True Negative; TNR, True Negative Rate; TP, True Positive; TPR, True Positive Rate; YOLO, You Only Look Once.
Table 5. Summary of studies involving human-AI comparisons.
| Author, Year, Reference | Test Dataset | Reference Dataset Annotators | Comparator Dentists | Statistical Significance Test | Diagnostic Performance (%) | Diagnostic Time | AI Performance: (+) Effective, (−) Noneffective |
|---|---|---|---|---|---|---|---|
| Ahn et al., 2021, [21] | Panoramic, 100 | 1 PS | 6 GP, 6 PS | Kruskal–Wallis test, p < 0.05 | (ACC) GP: 95.00, PS: 99.00, AI Model: 88.00 | GP: 811.8 s, PS: 375.5 s, AI Model: 1.5 s | Performance: −, Time: + |
| Ragodos et al., 2022, [47] | Intraoral photos, reference test size 7,697, comparative test size 30 | 1 SD | 1 SD | Pre-calibration performance measurements | (F1 score for mammalons class) SD: 85.70, AI Model: 50.60 | SD: 1 year, AI Model: 16 min for the entire dataset | Performance: −, Time: + |
| Liu et al., 2022, [49] | Panoramic, 100 | 3 PS | 3 PS | Cochran test, p = 0.114 | (SPEC) PS1: 88.00, PS2: 83.00, PS3: 87.00, AI Model: 86.00 | PS: -, AI Model: 1 s | Performance: −, Time: + |
| Zhu et al., 2022, [63] | Panoramic, 65 | 1 OMFR, 2 PS | 2 GP, 1 PS | McNemar's χ2 test, p < 0.001 | (ACC) GP1: 82.50, GP2: 83.30, PS: 77.50, AI Model: 99.00 | - | Performance: + |
| Zhou et al., 2023, [74] | Panoramic, 30 | An experienced data annotation worker trained by dentists | 2 SD | Kappa statistic | (ACC) SD: 88.42, AI Model: 85.57 | SD: 64.5 s, AI Model: 68.97 s | Performance: −, Time: − |
| Ezhov et al., 2021, [75] | CBCT, 600 | OMFR | 4 OMFR | Student's t-test, p < 0.05 | (SEN) OMFR1: 94.11, OMFR2: 94.38, OMFR3: 93.18, OMFR4: 93.37, AI Model: 92.39 | - | Performance: − |
| | CBCT, 40 | OMFR | 12 AI-aided group, 12 AI-unaided group | Mann–Whitney U test, p < 0.05 | (SEN) AI-unaided group: 76.72, AI-aided group: 85.37 | AI-unaided group: 18.74 min, AI-aided group: 17.55 min | Performance: +, Time: + |
| Pauwels et al., 2021, [77] | Periapical, 112 (validation dataset) | 3 OMFR | 3 OMFR | Quadratic weighted kappa | (SEN) OMFR: 58.00, AI Model: 87.00 | - | Performance: + |
| Li et al., 2022, [81] | Periapical, 300 | 3 SD | 3 JD | Kappa statistic | (F1 score) JD1: 61.29, JD2: 61.87, JD3: 65.39, AI Model: 82.85 | - | Performance: + |
| Chang et al., 2020, [85] | Panoramic, 34 | OMFR | 3 OMFR (1 Professor, 1 Fellow, 1 Resident) | Intraclass Correlation Coefficient (ICC), p < 0.01 | (ICC) AI Model-Professor: 86.00, AI Model-Fellow: 84.00, AI Model-Resident: 82.00 | - | Performance: + |
| Krois et al., 2019, [86] | Panoramic, 25 | 3 SD | 6 SD (1 PS, 1 ES, 4 GP) | Welch's t-test, p = 0.067 | (ACC) SD average: 76.00, AI Model: 81.00 | - | Performance: + |
| Kim et al., 2019, [87] | Panoramic, 800 | 5 SD | Same 5 SD | - | (F1 score) SD average: 69.00, AI Model: 75.00 | - | Performance: + |
| Murata et al., 2019, [97] | Panoramic, 120 | CBCT | 2 OMFR, 2 JD | McNemar's χ2 test, p < 0.05 | (ACC) OMFR: 89.60, JD: 76.70, AI Model: 87.50 | OMFR, JD: -, AI Model: 9 s | Performance: +, Time: + |
| Choi et al., 2021, [23] | OPG, 450 | CBCT | 1 SD | McNemar's test, p < 0.05 | (ACC) SD: 81.00, AI Model: 80.00 | - | Performance: − |
| Kuwada et al., 2022, [100] | Panoramic, 60 | - | 2 OMFR | McNemar's χ2 test, p < 0.05 | (AUC) OMFR1: 70.00, OMFR2: 63.00, AI Models: 95.00, 93.00 | - | Performance: + |
| Chen et al., 2022, [105] | Bitewing, 160 | 2 ES, 1 OMFR | 2 JD | McNemar's χ2 test, p < 0.05 | (ACC) JD: 82.00, AI Model: 87.00 | - | Performance: + |
| Yang et al., 2020, [111] | Panoramic, 181 | - | 3 OMFS, 2 GP | Kruskal–Wallis test, p = 0.77 | (Precision) OMFS: 67.10, GP: 65.80, AI Model: 70.70 | OMFS and GP average time: 33.8 min, AI Model: - | Performance: +, Time: + |
| Cantu et al., 2020, [121] | Bitewing, 141 | 3 SD | 7 SD | Two-sided paired t-test, p < 0.05 | (ACC) SD average: 71.00, AI Model: 80.00 | - | Performance: + |
| You et al., 2020, [123] | Intraoral photos, 98 | A researcher | 1 PS | Paired t-test, p > 0.05 | (mIoU) PS: 69.50, AI Model: 73.60 | - | Performance: + |
| | Intraoral photos, 102 | A researcher | 1 PS | Paired t-test, p > 0.05 | (mIoU) PS: 65.20, AI Model: 72.40 | - | Performance: + |
| Lee et al., 2021, [124] | Bitewing, 50 | 2 SD | 3 SD | Generalized estimating equations, p < 0.05 | (SEN) AI-unaided group: SD1: 85.34, SD2: 85.86, SD3: 69.11; AI-aided group: SD1: 92.15, SD2: 93.72, SD3: 79.06; AI Model: 83.25 | - | Performance: + |
| Lian et al., 2021, [125] | Panoramic, 89 | 4 SD | 6 SD | McNemar's χ2 test, p < 0.05 | Segmentation (IoU) SD average: 69.60, AI Model: 78.50; Classification (ACC) SD average: 91.50, AI Model: 95.70 | - | Performance: + |
| Endres et al., 2020, [133] | Panoramic, 102 | 1 OMFS | 24 OMFS | Wilcoxon signed-rank test, p < 0.05 | (PPV) OMFS average: 69.00, AI Model: 67.00 | - | Performance: + (the AI model outperformed 14 of the 24 OMFS) |
ACC, Accuracy; AUC, Area Under the ROC Curve; CBCT, Cone Beam Computed Tomography; ES, Endodontic Specialist; GP, General Practitioner; ICC, Intraclass Correlation Coefficient; JD, Junior Dentist; OMFR, Oral and Maxillofacial Radiologist; OMFS, Oral and Maxillofacial Surgeon; OPG, Orthopantomogram; PPV, Positive Predictive Value; PS, Pediatric Specialist; SD, Specialist Dentist; SEN, Sensitivity; SPEC, Specificity.
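Several of the human-AI comparisons above rely on McNemar's χ2 test, which considers only the discordant cases: images one reader classified correctly and the other misclassified. A minimal sketch with the continuity-corrected statistic and a 1-degree-of-freedom p-value (the discordant counts below are hypothetical, not from any reviewed study):

```python
from math import erfc, sqrt

def mcnemar(b: int, c: int) -> tuple:
    """Continuity-corrected McNemar's chi-squared test on discordant pairs.

    b: cases the dentist classified correctly and the model misclassified
    c: cases the model classified correctly and the dentist misclassified
    Returns (chi2 statistic, two-sided p-value).
    """
    chi2 = (abs(b - c) - 1) ** 2 / (b + c)
    # Survival function of the chi-square distribution with 1 df:
    # P(X > x) = erfc(sqrt(x / 2))
    p = erfc(sqrt(chi2 / 2))
    return chi2, p

# Hypothetical: model right on 20 images the dentist missed, wrong on 5 the dentist got
chi2, p = mcnemar(b=5, c=20)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```

Because the test conditions on discordant pairs only, two readers can share a high accuracy yet still differ significantly, which is why the studies report it alongside the raw performance percentages.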