Article

Automatic Segmentation of Teeth, Crown–Bridge Restorations, Dental Implants, Restorative Fillings, Dental Caries, Residual Roots, and Root Canal Fillings on Orthopantomographs: Convenience and Pitfalls

1 Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Near East University, 99138 Nicosia, Cyprus
2 DESAM Institute, Near East University, 99138 Nicosia, Cyprus
3 Department of Computer Engineering, Applied Artificial Intelligence Research Centre, Near East University, 99138 Nicosia, Cyprus
4 Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, 06560 Ankara, Turkey
* Author to whom correspondence should be addressed.
Diagnostics 2023, 13(8), 1487; https://doi.org/10.3390/diagnostics13081487
Submission received: 22 December 2022 / Revised: 26 February 2023 / Accepted: 1 March 2023 / Published: 20 April 2023
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

Abstract

Background: The aim of our study was to provide successful automatic segmentation of various objects on orthopantomographs (OPGs). Methods: A total of 8138 OPGs obtained from the archives of the Department of Dentomaxillofacial Radiology were included. OPGs were converted into PNGs and transferred to the segmentation tool’s database. All teeth, crown–bridge restorations, dental implants, composite–amalgam fillings, dental caries, residual roots, and root canal fillings were manually segmented by two experts with the manual drawing semantic segmentation technique. Results: The intra-class correlation coefficient (ICC) for both inter- and intra-observer reliability of the manual segmentation was excellent (ICC > 0.75). The intra-observer ICC was 0.994, while the inter-observer reliability was 0.989. No significant difference was detected amongst observers (p = 0.947). The calculated DSC and accuracy values across all OPGs were 0.85 and 0.95 for tooth segmentation, 0.88 and 0.99 for dental caries, 0.87 and 0.99 for dental restorations, 0.93 and 0.99 for crown–bridge restorations, 0.94 and 0.99 for dental implants, 0.78 and 0.99 for root canal fillings, and 0.78 and 0.99 for residual roots, respectively. Conclusions: Thanks to faster, automated diagnosis on both 2D and 3D dental images, dentists will achieve higher diagnostic rates in a shorter time, even without excluding challenging cases.

1. Introduction

Technological advances have driven great changes in the fields of medicine and dentistry, and one of the most important innovations behind this change is artificial intelligence (AI) technology. AI will come to be increasingly preferred in medicine and dentistry due to its important contributions to patient health services and the convenience that it provides to practitioners. The increase in processing speed, computing power, storage capacity, and ability to perform different tasks, together with the affordability of advanced graphics processing units and computers, is considered the beginning of a new era in medicine and especially in radiology [1,2,3].
Artificial intelligence (AI) is most simply defined as systems that mimic human intelligence to perform specific tasks and can improve themselves through repeated exposure to the data that they process. AI systems can display behaviors associated with human intelligence, such as planning, learning, reasoning, problem solving, perception, movement, manipulation, and, to a lesser extent, social intelligence and creativity [4,5,6,7,8]. Machine learning is considered a type of AI that can produce results for tasks it was not explicitly programmed to perform. In 1959, Arthur Samuel defined machine learning as “the ability of machines to learn results for which they were not specifically programmed” [9]. He also developed a checkers program that could run in a computer environment, learn from its own mistakes, and thus improve itself [1,10,11,12].
One of the applications that made machine learning popular is image recognition. For a machine to learn what a relevant image is, it must be trained on as many similar images as possible [13,14,15,16]. Thus, by recognizing similar sequences, similar motifs, and similar pixels, the machine can detect, segment, and classify what those pictures are; the more data there are, the better the resulting performance [1,11,12,17,18,19,20,21,22,23,24,25,26,27,28,29]. Deep neural networks have many hidden layers, with millions of interconnected artificial neurons. A number, called a weight, represents the strength of the connection between one node and another [11,12,30,31,32]. Theoretically, deep neural networks can map any type of input to any type of output; however, they require much more training than other machine learning methods, on the order of millions of samples compared to the hundreds or thousands that a simpler network might need [1,2,11,12,17,18,19,20,21,30,33,34,35,36,37,38,39].
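As a toy illustration of the “weight” notion described above, the following minimal Python sketch computes what a single artificial neuron does: a weighted sum of its inputs, a bias, and a nonlinearity. The specific numbers and the ReLU choice are illustrative, not taken from any model in this study.

```python
# Toy illustration of a "weight": one artificial neuron computing a weighted
# sum of its inputs plus a bias, followed by a ReLU nonlinearity.
import numpy as np

def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """ReLU(w·x + b): the basic unit that deep networks stack by the millions."""
    return max(0.0, float(np.dot(weights, inputs) + bias))

# 0.2*0.5 + 0.7*(-1.0) + 0.4 = -0.2, which ReLU clips to 0.0.
print(neuron(np.array([0.2, 0.7]), np.array([0.5, -1.0]), 0.4))
```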
AI has quickly become more well-known and significant in clinical dental research. AI applications that could alter the conventional organization of dentistry education in universities remain in development [40,41,42]. Many technological advancements are now being quickly incorporated into the demanding dental training phase, from AI-supported virtual reality simulations to robotic patients [43,44]. Clinical decision support systems are a valuable aspect of computer technology that lower clinical inaccuracies, assist doctors in making better and more effective choices regarding patient care, and ultimately improve the standard of care [6,45]. This innovation has the potential to be an effective tool for clinical teaching in dental schools, since it may give medical professionals and students more confidence in the course of treatment [46,47,48,49]. AI can also be used to diagnose many diseases in countries with poor socioeconomic status. For example, in countries with a high prevalence of tuberculosis, diagnosis is sometimes impossible simply because too few radiologists are available to evaluate patients’ images. Because artificial intelligence can diagnose pulmonary tuberculosis with 95% sensitivity and 100% specificity, the disease can be diagnosed even where radiologists are scarce by having AI evaluate radiographs uploaded from health centers [15,32,50,51,52].
Common applications of AI in oral diagnosis and dentomaxillofacial radiology (DMFR) are as follows:
  • Oral cancer prognosis and assessment of oral cancer risk [45,53,54,55,56,57,58,59,60,61,62];
  • Determination of temporomandibular joint disorder progression and temporomandibular internal derangements [27,30,34,38,63];
  • Interpretation of conventional 2D imaging [31,64,65,66,67,68];
  • Interpretation of cone beam computed tomography and other 3D imaging methods [1,10,12,17,18,19,21,23,27,69,70,71].
The majority of the research was designed to address dental issues; periapical radiography, orthopantomography (OPG), and lateral cephalometric radiographs represent the most frequent imaging techniques in dentomaxillofacial radiology. One of the earliest experiments using a 3D imaging AI model sought to distinguish between radicular cysts and apical granulomas [72]. Cephalometric landmark detection, osteoporosis analysis, odontogenic lesion categorization, and the detection of periapical/periodontal pathologies are the most frequently studied topics for AI in DMFR, according to a review by Hung et al. [7].
The majority of the studies in DMFR are about the “localization” and “basic features of teeth”, rather than general evaluations. Although the focus is on three-dimensional radiographs, such as cone beam computed tomography (CBCT), and two-dimensional radiographs, such as OPG and periapical radiographs, there are various studies that involve intra-oral scanners as well as quantitative, fluorescence, and other new modalities [2,7,8,32,35,73,74,75].
Although OPG is the most widely utilized extra-oral imaging modality in dental care, more standardized protocols should be employed to prevent errors owing to image quality, patient orientation, or magnification. To guarantee the creation of a valid dataset, radiographs collected with various OPG equipment should be assessed collaboratively [76]. The research to date frequently commits the mistake of obtaining data from a single imaging device, which is problematic since distinct models are developed for each machine, and a model for one device may not apply to other devices. AI models that were trained with manually cropped radiograph data are likewise a problem, since the algorithms may not interpret images without a specific region of interest [1,76].
Even an OPG acquired with proper technique has its own limitations and difficulties that affect both the clinician’s decision and the precision of AI models. For instance, some scenarios for these OPGs are as follows [77,78]:
  • Individuals with maxillofacial disorders or anatomical variations who are unable to maintain an upright spinal posture;
  • Patients with severe Class II or Class III malocclusions (due to the inability to position both jaws within the focal trough at the same time);
  • In patients with facial skeletal asymmetry, difficulties situating the left and right sides of the face inside the focal trough may mean that only one side of the face is clearly seen on the radiograph;
  • Patients with moderate or severe periodontitis may find it difficult to bite the groove of the biting block. As mobile teeth tend to tilt or move during biting, placing a cotton roll between the upper and lower incisors may be indicated. Although this facilitates acquisition, the artefacts related to the increased distance affect image quality;
  • Even within the same OPG, image magnification varies, often because of anatomical variations and defects. The horizontal plane exhibits more distortion than the vertical plane, which might affect the interpretation of the OPG;
  • The OPG’s diagnostic quality is impacted by the image’s tomographic characteristics. The focal trough measures approximately 20 mm in the lateral regions and 10 mm in the anterior area, and only those structures located inside the focal trough are clearly visible on an OPG. Any structures that will be examined outside of this focal trough might cause underdiagnosis;
  • Due to superimpositions, bone loss and carious lesions that are localized at the interproximal areas cannot be demonstrated by OPG images. As those superimpositions are more common and problematic in the premolar region, even tooth segmentations may have lower success;
  • Ghost images and double shadows are two of the phenomena of OPG imaging that drastically affect interpretation of radiographs.
The aim of this study is to create a deep learning model that provides automatic segmentation of teeth, dental caries, dental restorations, crown–bridge restorations, dental implants, root canal fillings, and residual roots with a high Dice similarity coefficient value on OPGs acquired from three different OPG units, thereby significantly reducing the time dentists spend on radiological evaluations.

2. Materials and Methods

This study was ethically approved by the Health Sciences Ethics Committee of the Near East University Ethics Review Board (YDÜ/2022/108-1651) in December 2022.
In order to eliminate any biases that might negatively affect generalizability [79], the whole OPG database of the faculty of dentistry, a total of 8138 OPGs acquired by 3 different OPG devices, was obtained from the archive of the Near East University, Department of Dentomaxillofacial Radiology.
The exclusion criteria of OPGs were as follows:
  • Presence of motion artefacts;
  • Presence of removable dentures;
  • Presence of fixed orthodontic appliances;
  • Presence of ghost images due to glasses, earrings, piercings, and hearing aids;
  • OPGs of edentulous patients;
  • OPGs with positioning problems (head tilted downwards/upwards, head twisted to one side, head tipped, etc.).
Of the OPGs, 442 were excluded, and 7696 images that were suitable for the study were included.
After the images were obtained in DICOM format, they were converted into PNG files and transferred to the database of the segmentation tool, the Computer Vision Annotation Tool (CVAT), for the segmentation process.
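For readers wishing to reproduce this preprocessing step, a minimal Python sketch is given below. It assumes the pydicom and Pillow libraries; the folder names and the min–max windowing to 8 bits are our illustrative assumptions, as the exact conversion settings were not part of the protocol.

```python
# Minimal DICOM-to-PNG conversion sketch; "opg_dicom"/"opg_png" folder names
# and the min-max windowing are illustrative assumptions.
from pathlib import Path

import numpy as np
import pydicom
from PIL import Image

def dicom_to_png(dicom_path: Path, png_path: Path) -> None:
    """Read one DICOM OPG, rescale its pixels to 8 bits, and save as PNG."""
    ds = pydicom.dcmread(dicom_path)
    pixels = ds.pixel_array.astype(np.float32)
    pixels -= pixels.min()            # shift the minimum to zero
    pixels /= max(pixels.max(), 1.0)  # normalize to the 0..1 range
    Image.fromarray((pixels * 255).astype(np.uint8)).save(png_path)

out_dir = Path("opg_png")
out_dir.mkdir(exist_ok=True)
for dicom_file in Path("opg_dicom").glob("*.dcm"):
    dicom_to_png(dicom_file, out_dir / f"{dicom_file.stem}.png")
```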
All teeth, crown–bridge restorations, dental implants, composite and amalgam fillings, dental caries, residual roots, and root canal fillings were manually segmented by 2 dentomaxillofacial radiologists with the manual drawing semantic segmentation technique.
All of the objects mentioned above were segmented by determining their margins by creating points, and the model was trained separately for each of the structures with those segmentations (Figure 1).
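Although the downstream handling of the CVAT annotations is not detailed here, a common route is to export them in the “CVAT for images” XML format, in which every polygon is stored as a label plus a “x1,y1;x2,y2;…” points string, and to rasterize each label into a binary mask. The sketch below follows that assumption; the file name and label are hypothetical.

```python
# Hedged sketch: rasterize CVAT polygon annotations into a per-label binary
# mask, assuming the "CVAT for images" XML export format.
import xml.etree.ElementTree as ET

import numpy as np
from PIL import Image, ImageDraw

def cvat_polygons_to_mask(xml_path: str, image_name: str, label: str) -> np.ndarray:
    """Return an HxW uint8 mask (1 inside the polygons of `label`, else 0)."""
    root = ET.parse(xml_path).getroot()
    image_node = next(n for n in root.iter("image") if n.get("name") == image_name)
    width, height = int(image_node.get("width")), int(image_node.get("height"))
    mask = Image.new("L", (width, height), 0)
    draw = ImageDraw.Draw(mask)
    for polygon in image_node.iter("polygon"):
        if polygon.get("label") != label:
            continue
        points = [tuple(float(v) for v in pair.split(","))
                  for pair in polygon.get("points").split(";")]
        draw.polygon(points, fill=1)
    return np.asarray(mask, dtype=np.uint8)

# Example with hypothetical names: the implant mask for one radiograph.
implant_mask = cvat_polygons_to_mask("annotations.xml", "case_0001.png", "dental_implant")
```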
We resized all of the images in the dataset to 512 × 1280 pixels and implemented our algorithm as a U-Net in Python. We split our segmentations into a training set (80%), a validation set (10%), and a test set (10%). The model that performed best on the test set was selected. For statistical analysis, the Dice similarity coefficient (DSC) and accuracy values were calculated.
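Since only the U-Net family, the 512 × 1280 input size, and the 80/10/10 split are specified above, the following sketch is one plausible reading rather than the exact network used; the Keras framework, depth, filter counts, and loss are our assumptions.

```python
# Illustrative U-Net for 512x1280 grayscale OPGs with an 80/10/10 split;
# framework, depth, and filter counts are assumptions, not the paper's model.
import random
from pathlib import Path

from tensorflow.keras import layers, models

def conv_block(x, filters):
    """Two 3x3 convolutions, the standard U-Net building block."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(input_shape=(512, 1280, 1)):
    inputs = layers.Input(input_shape)
    skips, x = [], inputs
    for filters in (32, 64, 128):            # contracting path
        x = conv_block(x, filters)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, 256)                   # bottleneck
    for filters, skip in zip((128, 64, 32), reversed(skips)):  # expanding path
        x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])
        x = conv_block(x, filters)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)     # per-pixel mask
    return models.Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# 80/10/10 split over the case list (seeded for reproducibility).
case_ids = sorted(p.stem for p in Path("opg_png").glob("*.png"))
random.Random(42).shuffle(case_ids)
n = len(case_ids)
train_ids = case_ids[: int(0.8 * n)]
val_ids = case_ids[int(0.8 * n): int(0.9 * n)]
test_ids = case_ids[int(0.9 * n):]
```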
The DSC, an indicator of how similar two objects are, was utilized to score the algorithm. The DSC is calculated by dividing twice the overlap between the two segmentations by the total area of both. Like precision, the DSC counts the true positives that were discovered while penalizing the approach for false positives. The sole distinction is the denominator, which includes the total number of actual positives rather than only the positives that the approach identifies. As a result, the DSC also penalizes for the positives that an algorithm or approach missed [80,81].
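As a concrete reference for the two reported metrics, the sketch below computes the DSC and pixel accuracy from a pair of binary masks; it assumes NumPy arrays of equal shape.

```python
# DSC and pixel accuracy for binary masks, matching the definitions above.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DSC = 2|A∩B| / (|A| + |B|): rewards overlap, and the denominator
    penalizes both false positives and false negatives."""
    intersection = np.logical_and(pred, truth).sum()
    denominator = pred.sum() + truth.sum()
    return 2.0 * intersection / denominator if denominator else 1.0

def pixel_accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of pixels classified correctly, background included."""
    return float((pred == truth).mean())
```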

3. Results

The calculated DSC values across all OPGs (Table 1) were 0.85 for the teeth, 0.88 for dental caries, 0.87 for dental restorations, 0.93 for crown–bridge restorations, 0.94 for dental implants, 0.78 for root canal fillings, and 0.78 for residual roots. Manual segmentations and successful automatic segmentations of the model are given in Figure 2, Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7, while common erroneous automatic segmentations with the most possible reasons are given in Figure 8, Figure 9, Figure 10, Figure 11 and Figure 12.
Most of the erroneous segmentations occurred due to the limitations of OPG devices, and some of the examples are as follows:
In Figure 8, a more successful automatic segmentation is observed at the maxillary right third molar than at the maxillary left third molar, where a missing segmented area in the form of a notch is observed. As most of the upper third molars were superimposed on the floor of the maxillary sinus and the zygomatic process of the maxilla, erroneous segmentations were inevitable. In Figure 9, an erroneous automatic segmentation at the mandibular left second molar due to the superimposition between the mandibular left first and second premolars can be seen.
In Figure 10, it can be seen that a wide amalgam restoration in the mandibular left first premolar tooth was mis-segmented as a crown restoration. The mis-segmentation of wide amalgam restorations was seen in a total of five OPGs.
Although the DSC for dental implant segmentation was 0.94 in our study, after checking the output data it was seen that the implant abutments were segmented as dental implants or crowns (Figure 11).
In Figure 12, an erroneous segmentation of an unsuccessful root canal filling can be seen. While our model had a fair DSC value for successful root canal treatments, cases with inadequate root canal fillings and gutta-percha that superimposed with the neighboring teeth’s roots caused a relatively lower DSC value.

4. Discussion

In this study, semantic segmentation was performed on OPGs. Semantic segmentation is the pixel-wise classification of all of the different structures on an OPG, namely teeth, implants, fillings, caries, root remnants, and canal fillings. U-Net architecture was used to achieve this. U-Net is a convolutional neural network developed at the Computer Science Department of the University of Freiburg for segmentation in biomedical image processing [10,11,12,17,20,21,26,30,31,34,38,50,64,66,67,70,71,82,83,84,85,86].
Although the high DSC and accuracy rates demonstrated the highly convenient nature of automatic segmentation, there are numerous pitfalls that we would like to discuss in order to elaborate on both the limitations of OPGs and the automatic segmentation of 2D images [87,88,89,90]. Most of the pitfalls were associated with the geometrical limitations of 2D images; to be more precise, detailed explanations of the pitfalls are as follows:
In tooth segmentation, the root apices of the maxillary third molars, especially vertically impacted ones whose apices are superimposed on the maxillary sinus floor, were incorrectly automatically segmented to varying degrees. This is one of the reasons for erroneous segmentation, which reduced the Dice score in our study, albeit only by a small amount. The superimposition, which we saw especially in the premolar region on OPGs, actually reflects a limitation of OPGs, not a deficiency of our model (Figure 9). The primary reason why our model could not achieve a perfect result in tooth segmentation is that superimpositions are almost impossible to avoid on OPGs, especially for premolar teeth, and that patients with crowding were included in the study. Several studies excluded patients with orthodontic problems; however, one of our main goals was to evaluate the success of our model in the general population, since there will not be any such exclusions in dental clinics [91,92].
In crown–bridge segmentations, several large amalgam fillings were mis-segmented as crown restorations due to both their width and their metallic opacities. To avoid this type of error, amalgam restorations and composite restorations can be segmented with separate labels, and more OPGs that contain both amalgam restorations and crown–bridge restorations can be included in the dataset.
In a systematic review conducted by Revilla-León et al., the reported accuracy of automatic segmentation of dental implants by AI models in the literature was between 93.8% and 98% [93]. In line with the literature, our model was successful, with a DSC value of 0.94. Due to both their grooved external structure and their metallic opacities, dental implants were not mis-segmented as anatomical structures or restorations, and the automatic segmentation had an almost perfect DSC. When the reasons why the implant DSC nevertheless fell short of perfection were examined, it was seen that our model segmented the implant abutment in some OPGs and not in others; therefore, in further studies, segmenting dental implants, abutments, and the crowns on implants with three separate labels might increase the DSC, as mis-segmentation between the abutment and the implant would no longer occur.
In root canal filling segmentations, the number of erroneous cases was higher than in the rest of the segmentations, as there were multiple limitations. The pitfalls in the automatic segmentation of root canal fillings in this study’s dataset were as follows: inadequate fillings carried out with one or several gutta-percha points, root canal fillings of multirooted teeth superimposed with an adjacent tooth, gutta-percha that did not extend through the entire root canal, and cases in which the restoration in the pulp chamber was misinterpreted and segmented as gutta-percha. Despite these limitations, the automatic segmentations of our model were more precise and sharply demarcated than those of the radiologists. In residual root segmentations, the only limitation was related to superimpositions, as residual roots that were superimposed with neighboring structures were not automatically segmented by our model.
Although it was possible to achieve a higher DSC, we preferred to test our model’s success on a retrospective dataset in which no abnormalities were excluded. Multiple studies in the literature avoided the controversies that might be caused by superimpositions, and in those studies a higher DSC was achieved. For instance, Bayraktar et al. conducted caries detection on bitewing radiographs, and thanks to the bitewing radiographs they excluded the possibility of any superimpositions, as the modality is superior in interproximal caries detection [94]. Moreover, Fontenele et al. conducted a similar study with CBCT for the detection of caries, excluding cases that had metal/motion artefacts [95]. Zhu et al. conducted tooth segmentation for ectopic eruptions and excluded any cases that had an extraction history, periapical periodontitis, or the presence of cystic lesions, and they also excluded poor-quality OPGs [66]. In our study, however, we only excluded the metal/motion artefacts that create an image too challenging to interpret, even for radiologists.
Furthermore, all of the studies mentioned above, as well as those conducted by Sheng et al. and Ying et al., used only a single imaging unit, which might cause a bias, as models tend to learn the patterns characteristic of each imaging unit [67,96]. In order to eliminate this bias, we conducted our study with three different OPG units that have different imaging parameters. One of the few studies that used several units with different models was Schneider et al.’s study, in which they built 72 models with 6 different deep learning network architectures for 1625 bitewing radiographs [31].
This study has several limitations: First, although we conducted our study with three different OPG units, the success of our model does not imply generalizability to all OPG units on the market [26,31]. Second, we concentrated on the DSC values of the segmentations in order to evaluate the success of our model. There are other reliable metrics, such as pixel accuracy and intersection-over-union (the Jaccard index); however, we used the DSC since it not only measures how many positives were found but also penalizes false positives. Additionally, theory states that the DSC and the Jaccard index approximate each other both relatively and absolutely [97,98]. Third, although the dataset contained OPGs of participants of different nationalities and ethnic backgrounds, we collected the data from a single center, which probably had a negative effect on the generalizability of the model [76]. Last, but not least, it was assumed that increasing the number of OPGs would significantly increase the accuracy, DSC, and robustness of our model; however, Lei et al. [99] investigated how robustness changes with increased amounts of training data for several representative neural networks on different datasets. They stated that, with increased amounts of training data, both accuracy and robustness improve initially; however, there exists a turning point after which accuracy keeps increasing while robustness starts to decrease. Thus, it is possible that our use of a very large number of OPGs might actually have deteriorated the robustness of our model.
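The approximation between the DSC and the Jaccard index noted above follows from a monotone one-to-one relation between the two scores, shown below for reference; the numeric check uses this study’s tooth DSC.

```python
# Dice <-> Jaccard conversion: J = D / (2 - D) and D = 2J / (1 + J).
def jaccard_from_dice(dsc: float) -> float:
    return dsc / (2.0 - dsc)

def dice_from_jaccard(iou: float) -> float:
    return 2.0 * iou / (1.0 + iou)

# A tooth segmentation DSC of 0.85 corresponds to an IoU of about 0.74.
assert abs(jaccard_from_dice(0.85) - 0.7391) < 1e-3
```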
In this study, we would like to emphasize the success of our model in multiple tasks on OPGs while discussing its limitations. It must be remembered that any AI application in DMFR requires better standardization of both 2D and 3D imaging (such as patient positioning), bigger datasets (>1000), public datasets from multiple institutions, higher computational power, unsupervised/semi-supervised learning instead of supervised learning, prospectively collected data instead of retrospectively collected and preprocessed data, and randomized controlled trials.

5. Conclusions

Artificial intelligence in dentomaxillofacial radiology is a fast-progressing field that has had exceptional success in its early stage. With faster, automated diagnosis on 2D and 3D dental images, dentists will achieve higher diagnostic rates in a shorter time. Although AI applications are not yet routine in dental clinics, future clinics will certainly integrate most of these implementations.

Author Contributions

Conceptualization, K.O. and S.A.; methodology, S.A.; software, N.A.; validation, E.G., N.A. and G.Ü.; formal analysis, N.A. and G.Ü.; investigation, E.G. and G.Ü.; resources, E.G.; data curation, N.A.; writing—original draft preparation, E.G. and G.Ü.; writing—review and editing, S.A. and K.O.; visualization, N.A.; supervision, K.O.; project administration, K.O. and S.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and was approved by the Institutional Review Board of the Near East University (YDÜ/2022/108-1651 in December 2022).

Informed Consent Statement

Informed consent was obtained from all of the subjects involved in the study. Written informed consent for publication must be obtained from participating patients who can be identified (including by the patients themselves).

Data Availability Statement

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. The data are not publicly available due to privacy/ethical restrictions.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Orhan, K.; Shamshiev, M.; Ezhov, M.; Plaksin, A.; Kurbanova, A.; Ünsal, G.; Gusarev, M.; Golitsyna, M.; Aksoy, S.; Mısırlı, M.; et al. AI-based automatic segmentation of craniomaxillofacial anatomy from CBCT scans for automatic detection of pharyngeal airway evaluations in OSA patients. Sci. Rep. 2022, 12, 11863. [Google Scholar] [CrossRef] [PubMed]
  2. Sin, Ç.; Akkaya, N.; Aksoy, S.; Orhan, K.; Öz, U. A deep learning algorithm proposal to automatic pharyngeal airway detection and segmentation on CBCT images. Orthod. Craniofacial Res. 2021, 24, 117–123. [Google Scholar] [CrossRef] [PubMed]
  3. Willaert, R.; Degrieck, B.; Orhan, K.; Deferm, J.; Politis, C.; Shaheen, E.; Jacobs, R. Semi-automatic magnetic resonance imaging based orbital fat volumetry: Reliability and correlation with computed tomography. Int. J. Oral Maxillofac. Surg. 2021, 50, 416–422. [Google Scholar] [CrossRef] [PubMed]
  4. Ahmed, N.; Abbasi, M.S.; Zuberi, F.; Qamar, W.; Bin Halim, M.S.; Maqsood, A.; Alam, M.K. Artificial Intelligence Techniques: Analysis, Application, and Outcome in Dentistry—A Systematic Review. BioMed Res. Int. 2021, 2021, 9751564. [Google Scholar] [CrossRef]
  5. Carrillo-Perez, F.; Pecho, O.E.; Morales, J.C.; Paravina, R.D.; Della Bona, A.; Ghinea, R.; Pulgar, R.; Pérez, M.D.M.; Herrera, L.J. Applications of artificial intelligence in dentistry: A comprehensive review. J Esthet Restor Dent. 2022, 34, 259–280. [Google Scholar] [CrossRef]
  6. Chen, Y.-W.; Stanley, K.; Att, W. Artificial intelligence in dentistry: Current applications and future perspectives. Quintessence Int. 2020, 51, 248–257. [Google Scholar]
  7. Hung, K.; Montalvao, C.; Tanaka, R.; Kawai, T.; Bornstein, M.M. The use and performance of artificial intelligence applications in dental and maxillofacial radiology: A systematic review. Dentomaxillofacial Radiol. 2020, 49, 20190107. [Google Scholar] [CrossRef]
  8. Issa, J.; Olszewski, R.; Dyszkiewicz-Konwińska, M. The Effectiveness of Semi-Automated and Fully Automatic Segmentation for Inferior Alveolar Canal Localization on CBCT Scans: A Systematic Review. Int. J. Environ. Res. Public Health 2022, 19, 560. [Google Scholar] [CrossRef]
  9. Wiederhold, G.; McCarthy, J. Arthur Samuel: Pioneer in Machine Learning. IBM J. Res. Dev. 1992, 36, 329–331. [Google Scholar] [CrossRef]
  10. Hsu, K.; Yuh, D.-Y.; Lin, S.-C.; Lyu, P.-S.; Pan, G.-X.; Zhuang, Y.-C.; Chang, C.-C.; Peng, H.-H.; Lee, T.-Y.; Juan, C.-H.; et al. Improving performance of deep learning models using 3.5D U-Net via majority voting for tooth segmentation on cone beam computed tomography. Sci. Rep. 2022, 12, 19809. [Google Scholar] [CrossRef]
  11. Ariji, Y.; Gotoh, M.; Fukuda, M.; Watanabe, S.; Nagao, T.; Katsumata, A.; Ariji, E. A preliminary deep learning study on automatic segmentation of contrast-enhanced bolus in videofluorography of swallowing. Sci. Rep. 2022, 12, 18754. [Google Scholar] [CrossRef] [PubMed]
  12. Choi, H.; Jeon, K.J.; Kim, Y.H.; Ha, E.-G.; Lee, C.; Han, S.-S. Deep learning-based fully automatic segmentation of the maxillary sinus on cone-beam computed tomographic images. Sci. Rep. 2022, 12, 14009. [Google Scholar] [CrossRef] [PubMed]
  13. Chen, T.; Kim, S.; Zhou, J.; Metaxas, D.; Rajagopal, G.; Yue, N. 3D Meshless Prostate Segmentation and Registration in Image Guided Radiotherapy. Med. Image Comput. Comput. Assist. Interv. 2009, 12, 43–50. [Google Scholar] [CrossRef] [PubMed]
  14. Aliaga, I.; Vera, V.; Vera, M.; García, E.; Pedrera, M.; Pajares, G. Automatic computation of mandibular indices in dental panoramic radiographs for early osteoporosis detection. Artif. Intell. Med. 2020, 103, 101816. [Google Scholar] [CrossRef]
  15. van Ginneken, B.; Katsuragawa, S.; Romeny, B.T.H.; Doi, K.; Viergever, M. Automatic detection of abnormalities in chest radiographs using local texture analysis. IEEE Trans. Med. Imaging 2002, 21, 139–149. [Google Scholar] [CrossRef]
  16. Rueda, S.; Gil, J.A.; Pichery, R.; Alcaniz, M. Automatic Segmentation of Jaw Tissues in CT Using Active Appearance Models and Semi-automatic Landmarking. Med. Image Comput. Comput. Assist. Interv. 2006, 9, 167–174. [Google Scholar] [CrossRef]
  17. Chifor, R.; Hotoleanu, M.; Marita, T.; Arsenescu, T.; Socaciu, M.A.; Badea, I.C.; Chifor, I. Automatic Segmentation of Periodontal Tissue Ultrasound Images with Artificial Intelligence: A Novel Method for Improving Dataset Quality. Sensors 2022, 22, 7101. [Google Scholar] [CrossRef]
  18. Nogueira-Reis, F.; Morgan, N.; Nomidis, S.; Van Gerven, A.; Oliveira-Santos, N.; Jacobs, R.; Tabchoury, C.P.M. Three-dimensional maxillary virtual patient creation by convolutional neural network-based segmentation on cone-beam computed tomography images. Clin. Oral Investig. 2022, 27, 1133–1141. [Google Scholar] [CrossRef]
  19. Alqahtani, K.A.; Jacobs, R.; Smolders, A.; Van Gerven, A.; Willems, H.; Shujaat, S.; Shaheen, E. Deep convolutional neural network-based automated segmentation and classification of teeth with orthodontic brackets on cone-beam computed-tomographic images: A validation study. Eur. J. Orthod. 2022, 45, 169–174. [Google Scholar] [CrossRef]
  20. Cho, H.-N.; Gwon, E.; Kim, K.-A.; Baek, S.-H.; Kim, N.; Kim, S.-J. Accuracy of convolutional neural networks-based automatic segmentation of pharyngeal airway sections according to craniofacial skeletal pattern. Am. J. Orthod. Dentofac. Orthop. 2022, 162, e53–e62. [Google Scholar] [CrossRef]
  21. Morgan, N.; Van Gerven, A.; Smolders, A.; Vasconcelos, K.D.F.; Willems, H.; Jacobs, R. Convolutional neural network for automatic maxillary sinus segmentation on cone-beam computed tomographic images. Sci. Rep. 2022, 12, 7523. [Google Scholar] [CrossRef] [PubMed]
  22. Wang, X.; Kittaka, M.; He, Y.; Zhang, Y.; Ueki, Y.; Kihara, D. OC_Finder: Osteoclast Segmentation, Counting, and Classification Using Watershed and Deep Learning. Front. Bioinform. 2022, 2, 819570. [Google Scholar] [CrossRef] [PubMed]
  23. Cui, Z.; Fang, Y.; Mei, L.; Zhang, B.; Yu, B.; Liu, J.; Jiang, C.; Sun, Y.; Ma, L.; Huang, J.; et al. A fully automatic AI system for tooth and alveolar bone segmentation from cone-beam CT images. Nat. Commun. 2022, 13, 2096. [Google Scholar] [CrossRef]
  24. Adışen, M.Z.; Aydoğdu, M. Comparison of mastoid air cell volume in patients with or without a pneumatized articular tubercle. Imaging Sci. Dent. 2022, 52, 27–32. [Google Scholar] [CrossRef] [PubMed]
  25. Lee, S.; Kim, J.-E. Evaluating the Precision of Automatic Segmentation of Teeth, Gingiva and Facial Landmarks for 2D Digital Smile Design Using Real-Time Instance Segmentation Network. J. Clin. Med. 2022, 11, 852. [Google Scholar] [CrossRef]
  26. Dot, G.; Schouman, T.; Dubois, G.; Rouch, P.; Gajny, L. Fully automatic segmentation of craniomaxillofacial CT scans for computer-assisted orthognathic surgery planning using the nnU-Net framework. Eur. Radiol. 2022, 32, 3639–3648. [Google Scholar] [CrossRef]
  27. Nozawa, M.; Ito, H.; Ariji, Y.; Fukuda, M.; Igarashi, C.; Nishiyama, M.; Ogi, N.; Katsumata, A.; Kobayashi, K.; Ariji, E. Automatic segmentation of the temporomandibular joint disc on magnetic resonance images using a deep learning technique. Dentomaxillofacial Radiol. 2022, 51, 20210185. [Google Scholar] [CrossRef]
  28. Ararat, E.; Yalcin, E.D. Morphometric and volumetric evaluation of maxillary sinus in patients with chronic obstructive pulmonary disease using cone-beam CT. Oral Radiol. 2022, 38, 261–268. [Google Scholar] [CrossRef]
  29. Jang, T.J.; Kim, K.C.; Cho, H.C.; Seo, J.K. A fully automated method for 3D individual tooth identification and segmentation in dental CBCT. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 6562–6568. [Google Scholar] [CrossRef]
  30. Kwak, G.H.; Kwak, E.-J.; Song, J.M.; Park, H.R.; Jung, Y.-H.; Cho, B.-H.; Hui, P.; Hwang, J.J. Automatic mandibular canal detection using a deep convolutional neural network. Sci. Rep. 2020, 10, 5711. [Google Scholar] [CrossRef]
  31. Schneider, L.; Arsiwala-Scheppach, L.; Krois, J.; Meyer-Lueckel, H.; Bressem, K.; Niehues, S.; Schwendicke, F. Benchmarking Deep Learning Models for Tooth Structure Segmentation. J. Dent. Res. 2022, 101, 1343–1349. [Google Scholar] [CrossRef] [PubMed]
  32. Moses, D.A. Deep learning applied to automatic disease detection using chest X-rays. J. Med. Imaging Radiat. Oncol. 2021, 65, 498–517. [Google Scholar] [CrossRef]
  33. Morgan, N.; Shujaat, S.; Jazil, O.; Jacobs, R. Three-dimensional quantification of skeletal midfacial complex symmetry. Int. J. Comput. Assist. Radiol. Surg. 2022, 18, 611–619. [Google Scholar] [CrossRef] [PubMed]
  34. Kajor, M.; Kucharski, D.; Grochala, J.; Loster, J.E. New Methods for the Acoustic-Signal Segmentation of the Temporomandibular Joint. J. Clin. Med. 2022, 11, 2706. [Google Scholar] [CrossRef]
  35. Li, C.; Chen, H.; Wang, Y.; Sun, Y.C. Labelling, segmentation and application of neural network based on machine learning of three-dimensional intraoral anatomical features. Zhonghua Kou Qiang Yi Xue Za Zhi 2022, 57, 540–546. [Google Scholar]
  36. Xu, J.; Liu, J.; Zhang, D.; Zhou, Z.; Zhang, C.; Chen, X. A 3D segmentation network of mandible from CT scan with combination of multiple convolutional modules and edge supervision in mandibular reconstruction. Comput. Biol. Med. 2021, 138, 104925. [Google Scholar] [CrossRef]
  37. Giudice, A.L.; Ronsivalle, V.; Spampinato, C.; Leonardi, R. Fully automatic segmentation of the mandible based on convolutional neural networks (CNNs). Orthod. Craniofacial Res. 2021, 24, 100–107. [Google Scholar] [CrossRef]
  38. Kim, Y.H.; Shin, J.Y.; Lee, A.; Park, S.; Han, S.-S.; Hwang, H.J. Automated cortical thickness measurement of the mandibular condyle head on CBCT images using a deep learning method. Sci. Rep. 2021, 11, 14852. [Google Scholar] [CrossRef]
  39. Leonardi, R.; Giudice, A.L.; Farronato, M.; Ronsivalle, V.; Allegrini, S.; Musumeci, G.; Spampinato, C. Fully automatic segmentation of sinonasal cavity and pharyngeal airway based on convolutional neural networks. Am. J. Orthod. Dentofac. Orthop. 2021, 159, 824–835.e1. [Google Scholar] [CrossRef]
  40. Islam, N.M.; Laughter, L.; Sadid-Zadeh, R.; Smith, C.; Dolan, T.A.; Crain, G.; Squarize, C.H. Adopting artificial intelligence in dental education: A model for academic leadership and innovation. J. Dent. Educ. 2022, 86, 1545–1551. [Google Scholar] [CrossRef]
  41. Schwendicke, F.; Chaurasia, A.; Wiegand, T.; Uribe, S.E.; Fontana, M.; Akota, I.; Tryfonos, O.; Krois, J. Artificial intelligence for oral and dental healthcare: Core education curriculum. J. Dent. 2023, 128, 104363. [Google Scholar] [CrossRef] [PubMed]
  42. Saghiri, M.A.; Bs, J.V.; Nadershahi, N. Scoping review of artificial intelligence and immersive digital tools in dental education. J. Dent. Educ. 2021, 86, 736–750. [Google Scholar] [CrossRef] [PubMed]
  43. Abe, S.; Noguchi, N.; Matsuka, Y.; Shinohara, C.; Kimura, T.; Oka, K.; Okura, K.; Rodis, O.M.M.; Kawano, F. Educational effects using a robot patient simulation system for development of clinical attitude. Eur. J. Dent. Educ. 2017, 22, e327–e336. [Google Scholar] [CrossRef] [PubMed]
  44. Abouzeid, H.L.; Chaturvedi, S.; Abdelaziz, K.M.; Alzahrani, F.A.; AlQarni, A.A.S.; Alqahtani, N.M. Role of Robotics and Artificial Intelligence in Oral Health and Preventive Dentistry—Knowledge, Perception and Attitude of Dentists. Oral Health Prev. Dent 2021, 19, 353–363. [Google Scholar]
  45. Khanagar, S.B.; Al-Ehaideb, A.; Maganur, P.C.; Vishwanathaiah, S.; Patil, S.; Baeshen, H.A.; Sarode, S.C.; Bhandi, S. Developments, application, and performance of artificial intelligence in dentistry—A systematic review. J. Dent. Sci. 2021, 16, 508–522. [Google Scholar] [CrossRef]
  46. Seguin, D.; Pac, S.; Wang, J.; Nicolson, R.; Martinez-Trujillo, J.; Duerden, E.G. Amygdala subnuclei development in adolescents with autism spectrum disorder: Association with social communication and repetitive behaviors. Brain Behav. 2021, 11, e2299. [Google Scholar] [CrossRef]
  47. Cheng, B.; Wang, W. Dental hard tissue morphological segmentation with sparse representation-based classifier. Med. Biol. Eng. Comput. 2019, 57, 1629–1643. [Google Scholar] [CrossRef]
  48. Xu, T.; Tay, F.R.; Gutmann, J.L.; Fan, B.; Fan, W.; Huang, Z.; Sun, Q. Micro–Computed Tomography Assessment of Apical Accessory Canal Morphologies. J. Endod. 2016, 42, 798–802. [Google Scholar] [CrossRef]
  49. Seipel, S.; Wagner, I.-V.; Koch, S.; Schneider, W. Three-dimensional visualization of the mandible: A new method for presenting the periodontal status and diseases. Comput. Methods Programs Biomed. 1995, 46, 51–57. [Google Scholar] [CrossRef]
  50. Ma, L.; Wang, Y.; Guo, L.; Zhang, Y.; Wang, P.; Pei, X.; Qian, L.; Jaeger, S.; Ke, X.; Yin, X.; et al. Developing and verifying automatic detection of active pulmonary tuberculosis from multi-slice spiral CT images based on deep learning. J. X-Ray Sci. Technol. 2020, 28, 939–951. [Google Scholar] [CrossRef]
  51. Zurac, S.; Mogodici, C.; Poncu, T.; Trăscău, M.; Popp, C.; Nichita, L.; Cioplea, M.; Ceachi, B.; Sticlaru, L.; Cioroianu, A.; et al. A New Artificial Intelligence-Based Method for Identifying Mycobacterium Tuberculosis in Ziehl–Neelsen Stain on Tissue. Diagnostics 2022, 12, 1484. [Google Scholar] [CrossRef] [PubMed]
  52. Yan, C.; Wang, L.; Lin, J.; Xu, J.; Zhang, T.; Qi, J.; Li, X.; Ni, W.; Wu, G.; Huang, J.; et al. A fully automatic artificial intelligence-based CT image analysis system for accurate detection, diagnosis, and quantitative severity evaluation of pulmonary tuberculosis. Eur. Radiol. 2022, 32, 2188–2199. [Google Scholar] [CrossRef] [PubMed]
  53. Hegde, S.; Ajila, V.; Zhu, W.; Zeng, C. Artificial intelligence in early diagnosis and prevention of oral cancer. Asia-Pac. J. Oncol. Nurs. 2022, 9, 100133. [Google Scholar] [CrossRef]
  54. Chen, W.; Zeng, R.; Jin, Y.; Sun, X.; Zhou, Z.; Zhu, C. Artificial Neural Network Assisted Cancer Risk Prediction of Oral Precancerous Lesions. BioMed. Res. Int. 2022, 2022, 7352489. [Google Scholar] [CrossRef] [PubMed]
  55. Tobias, M.A.S.; Nogueira, B.P.; Santana, M.C.; Pires, R.G.; Papa, J.P.; Santos, P.S. Artificial intelligence for oral cancer diagnosis: What are the possibilities? Oral Oncol. 2022, 134, 106117. [Google Scholar] [CrossRef]
  56. Kolokythas, A. Can Artificial Intelligence (AI) assist in the diagnosis of oral mucosal lesions and/or oral cancer? Oral. Surg. Oral. Med. Oral. Pathol. Oral. Radiol. 2022, 134, 413–414. [Google Scholar] [CrossRef]
  57. Sarode, G.S.; Kumari, N.; Sarode, S.C. Oral cancer histopathology images and artificial intelligence: A pathologist’s perspective. Oral Oncol. 2022, 132, 105999. [Google Scholar] [CrossRef]
  58. Al-Rawi, N.; Sultan, A.; Rajai, B.; Shuaeeb, H.; Alnajjar, M.; Alketbi, M.; Mohammad, Y.; Shetty, S.R.; Mashrah, M.A. The Effectiveness of Artificial Intelligence in Detection of Oral Cancer. Int. Dent. J. 2022, 72, 436–447. [Google Scholar] [CrossRef]
  59. Ramezani, K.; Tofangchiha, M. Oral Cancer Screening by Artificial Intelligence-Oriented Interpretation of Optical Coherence Tomography Images. Radiol. Res. Pract. 2022, 2022, 1–10. [Google Scholar] [CrossRef]
  60. Rai, R.; Vats, R.; Kumar, M. Detecting Oral Cancer: The Potential of Artificial Intelligence. Curr. Med. Imaging. 2022, 18, 919–923. [Google Scholar] [CrossRef]
  61. Baniulyte, G.; Ali, K. Artificial intelligence—Can it be used to outsmart oral cancer? Evid. Based Dent. 2022, 23, 12–13. [Google Scholar] [CrossRef] [PubMed]
  62. García-Pola, M.; Pons-Fuster, E.; Suárez-Fernández, C.; Seoane-Romero, J.; Romero-Méndez, A.; López-Jornet, P. Role of Artificial Intelligence in the Early Diagnosis of Oral Cancer. A Scoping Review. Cancers 2021, 13, 4600. [Google Scholar] [CrossRef] [PubMed]
  63. Minervini, G.; Fiorillo, L.; Russo, D.; Lanza, A.; D’Amico, C.; Cervino, G.; Meto, A.; Di Francesco, F. Prosthodontic Treatment in Patients with Temporomandibular Disorders and Orofacial Pain and/or Bruxism: A Review of the Literature. Prosthesis 2022, 4, 253–262. [Google Scholar] [CrossRef]
  64. Bayrakdar, I.S.; Orhan, K.; Çelik, Ö.; Bilgir, E.; Sağlam, H.; Kaplan, F.A.; Görür, S.A.; Odabaş, A.; Aslan, A.F.; Różyło-Kalinowska, I. A U-Net Approach to Apical Lesion Segmentation on Panoramic Radiographs. BioMed. Res. Int. 2022, 2022, 7035367. [Google Scholar] [CrossRef]
  65. Kim, J.; Hwang, J.J.; Jeong, T.; Cho, B.-H.; Shin, J. Deep learning-based identification of mesiodens using automatic maxillary anterior region estimation in panoramic radiography of children. Dentomaxillofac. Radiol. 2022, 51, 7. [Google Scholar] [CrossRef] [PubMed]
  66. Zhu, H.; Yu, H.; Zhang, F.; Cao, Z.; Wu, F.; Zhu, F. Automatic segmentation and detection of ectopic eruption of first permanent molars on panoramic radiographs based on nnU-Net. Int. J. Paediatr Dent. 2022, 32, 785–792. [Google Scholar] [CrossRef]
  67. Sheng, C.; Wang, L.; Huang, Z.; Wang, T.; Guo, Y.; Hou, W.; Xu, L.; Wang, J.; Yan, X. Transformer-Based Deep Learning Network for Tooth Segmentation on Panoramic Radiographs. J. Syst. Sci. Complex. 2022, 36, 257–272. [Google Scholar] [CrossRef]
  68. Leite, A.F.; Van Gerven, A.; Willems, H.; Beznik, T.; Lahoud, P.; Gaêta-Araujo, H.; Vranckx, M.; Jacobs, R. Artificial intelligence-driven novel tool for tooth detection and segmentation on panoramic radiographs. Clin. Oral Investig. 2020, 25, 2257–2267. [Google Scholar] [CrossRef]
  69. Giudice, A.L.; Ronsivalle, V.; Gastaldi, G.; Leonardi, R. Assessment of the accuracy of imaging software for 3D rendering of the upper airway, usable in orthodontic and craniofacial clinical settings. Prog. Orthod. 2022, 23, 22. [Google Scholar] [CrossRef]
  70. Szentimrey, Z.; de Ribaupierre, S.; Fenster, A.; Ukwatta, E. Automated 3D U-net based segmentation of neonatal cerebral ventricles from 3D ultrasound images. Med. Phys. 2021, 49, 1034–1046. [Google Scholar] [CrossRef]
  71. Groves, L.A.; VanBerlo, B.; Veinberg, N.; Alboog, A.; Peters, T.M.; Chen, E.C.S. Automatic segmentation of the carotid artery and internal jugular vein from 2D ultrasound images for 3D vascular reconstruction. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 1835–1846. [Google Scholar] [CrossRef]
  72. De Rosa, C.S.; Bergamini, M.L.; Palmieri, M.; Sarmento, D.J.D.S.; de Carvalho, M.O.; Ricardo, A.L.F.; Hasseus, B.; Jonasson, P.; Braz-Silva, P.H.; Costa, A.L.F. Differentiation of periapical granuloma from radicular cyst using cone beam computed tomography images texture analysis. Heliyon 2020, 6, E05194. [Google Scholar] [CrossRef]
  73. Kim, J.J.; Nam, H.; Kaipatur, N.R.; Major, P.W.; Flores-Mir, C.; Lagravere, M.O.; Romanyk, D.L. Reliability and accuracy of segmentation of mandibular condyles from different three-dimensional imaging modalities: A systematic review. Dentomaxillofac. Radiol. 2020, 49, 20190150. [Google Scholar] [CrossRef]
  74. Wallner, J.; Schwaiger, M.; Hochegger, K.; Gsaxner, C.; Zemann, W.; Egger, J. A review on multiplatform evaluations of semi-automatic open-source based image segmentation for cranio-maxillofacial surgery. Comput. Methods Programs Biomed. 2019, 182, 105102. [Google Scholar] [CrossRef]
  75. Xi, T.; Schreurs, R.; Heerink, W.; Bergé, S.J.; Maal, T.J.J. A Novel Region-Growing Based Semi-Automatic Segmentation Protocol for Three-Dimensional Condylar Reconstruction Using Cone Beam Computed Tomography (CBCT). PLoS ONE 2014, 9, e111126. [Google Scholar] [CrossRef]
  76. Ünsal, G.; Orhan, K. Deep Learning and Artificial Intelligence Applications in Dentomaxillofacial Radiology. In Applied Machine Learning and Multi-Criteria Decision-Making in Healthcare; Özşahin, D., Özşahin, I., Eds.; Bentham Science Publishers: Sharjah, United Arab Emirates, 2021; pp. 126–140. [Google Scholar]
  77. Perschbacher, S. Interpretation of panoramic radiographs. Aust. Dent. J. 2012, 57, 40–45. [Google Scholar] [CrossRef]
  78. Różyło-Kalinowska, I. Panoramic radiography in dentistry. Clin. Dent. Rev. 2021, 5, 26. [Google Scholar] [CrossRef]
  79. Arsiwala-Scheppach, L.T.; Chaurasia, A.; Müller, A.; Krois, J.; Schwendicke, F. Machine Learning in Dentistry: A Scoping Review. J. Clin. Med. 2023, 12, 937. [Google Scholar] [CrossRef]
  80. Zou, K.H.; Warfield, S.; Bharatha, A.; Tempany, C.M.; Kaus, M.R.; Haker, S.J.; Wells, W.M.; Jolesz, F.A.; Kikinis, R. Statistical validation of image segmentation quality based on a spatial overlap index1: Scientific reports. Acad. Radiol. 2004, 11, 178–189. [Google Scholar] [CrossRef]
  81. Ye, J. Multicriteria decision-making method using the Dice similarity measure based on the reduct intuitionistic fuzzy sets of interval-valued intuitionistic fuzzy sets. Appl. Math. Model. 2012, 36, 4466–4472. [Google Scholar] [CrossRef]
  82. Hou, S.; Zhou, T.; Liu, Y.; Dang, P.; Lu, H.; Shi, H. Teeth U-Net: A segmentation model of dental panoramic X-ray images for context semantics and contrast enhancement. Comput. Biol. Med. 2023, 152, 106296. [Google Scholar] [CrossRef]
  83. Lin, X.; Fu, Y.; Ren, G.; Yang, X.; Duan, W.; Chen, Y.; Zhang, Q. Micro–Computed Tomography–Guided Artificial Intelligence for Pulp Cavity and Tooth Segmentation on Cone-beam Computed Tomography. J. Endod. 2021, 47, 1933–1941. [Google Scholar] [CrossRef]
  84. Nishitani, Y.; Nakayama, R.; Hayashi, D.; Hizukuri, A.; Murata, K. Segmentation of teeth in panoramic dental X-ray images using U-Net with a loss function weighted on the tooth edge. Radiol. Phys. Technol. 2021, 14, 64–69. [Google Scholar] [CrossRef]
  85. Shaheen, E.; Leite, A.; Alqahtani, K.A.; Smolders, A.; Van Gerven, A.; Willems, H.; Jacobs, R. A novel deep learning system for multi-class tooth segmentation and classification on cone beam computed tomography. A validation study. J. Dent. 2021, 115, 103865. [Google Scholar] [CrossRef]
  86. Zhang, J.; Xia, W.; Dong, J.; Tang, Z.; Zhao, Q. Root Canal Segmentation in CBCT Images by 3D U-Net with Global and Local Combination Loss. In Proceedings of the 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Mexico, 1–5 November 2021; pp. 3097–3100. [Google Scholar] [CrossRef]
  87. Fowler, P. Limitations of the panoramic radiograph’s focal trough: A case report. N. Z. Dent. J. 1991, 87, 92–93. [Google Scholar]
  88. Lopes, L.J.; Gamba, T.O.; Bertinato, J.V.J.; Freitas, D. Comparison of panoramic radiography and CBCT to identify maxillary posterior roots invading the maxillary sinus. Dentomaxillofacial Radiol. 2016, 45, 20160043. [Google Scholar] [CrossRef]
  89. Ohba, T.; Ogawa, Y.; Shinohara, Y.; Hiromatsu, T.; Uchida, A.; Toyoda, Y. Limitations of panoramic radiography in the detection of bone defects in the posterior wall of the maxillary sinus: An experimental study. Dentomaxillofac. Radiol. 1994, 23, 149–153. [Google Scholar] [CrossRef]
  90. Nakagawa, Y.; Ishii, H.; Nomura, Y.; Watanabe, N.Y.; Hoshiba, D.; Kobayashi, K.; Ishibashi, K. Third Molar Position: Reliability of Panoramic Radiography. J. Oral Maxillofac. Surg. 2007, 65, 1303–1308. [Google Scholar] [CrossRef]
  91. Abdalla-Aslan, R.; Yeshua, T.; Kabla, D.; Leichter, I.; Nadler, C. An artificial intelligence system using machine-learning for automatic detection and classification of dental restorations in panoramic radiography. Oral Surgery, Oral Med. Oral Pathol. Oral Radiol. 2020, 130, 593–602. [Google Scholar] [CrossRef]
  92. Kuwada, C.; Ariji, Y.; Fukuda, M.; Kise, Y.; Fujita, H.; Katsumata, A.; Ariji, E. Deep learning systems for detecting and classifying the presence of impacted supernumerary teeth in the maxillary incisor region on panoramic radiographs. Oral Surgery, Oral Med. Oral Pathol. Oral Radiol. 2020, 130, 464–469. [Google Scholar] [CrossRef]
  93. Revilla-León, M.; Gómez-Polo, M.; Vyas, S.; Barmak, A.B.; Özcan, M.; Att, W.; Krishnamurthy, V.R. Artificial intelligence applications in restorative dentistry: A systematic review. J. Prosthet. Dent. 2021, 128, 867–875. [Google Scholar] [CrossRef]
  94. Bayraktar, Y.; Ayan, E. Diagnosis of interproximal caries lesions with deep convolutional neural network in digital bitewing radiographs. Clin. Oral Investig. 2022, 26, 623–632. [Google Scholar] [CrossRef]
  95. Fontenele, R.C.; Gerhardt, M.D.N.; Pinto, J.C.; Van Gerven, A.; Willems, H.; Jacobs, R.; Freitas, D.Q. Influence of dental fillings and tooth type on the performance of a novel artificial intelligence-driven tool for automatic tooth segmentation on CBCT images—A validation study. J. Dent. 2022, 119, 104069. [Google Scholar] [CrossRef]
  96. Ying, S.; Wang, B.; Zhu, H.; Liu, W.; Huang, F. Caries segmentation on tooth X-ray images with a deep network. J. Dent. 2022, 119, 104076. [Google Scholar] [CrossRef]
  97. Eelbode, T.; Bertels, J.; Berman, M.; Vandermeulen, D.; Maes, F.; Bisschops, R.; Blaschko, M.B. Optimization for Medical Image Segmentation: Theory and Practice When Evaluating with Dice Score or Jaccard Index. IEEE Trans. Med. Imaging 2020, 39, 3679–3690. [Google Scholar] [CrossRef]
  98. Feng, H.; Fu, Z.; Wang, Y.; Zhang, P.; Lai, H.; Zhao, J. Automatic segmentation of thrombosed aortic dissection in post-operative CT-angiography images. Med. Phys. 2022. [Google Scholar] [CrossRef]
  99. Lei, S.; Zhang, H.; Wang, K.; Su, Z. How Training Data Affect the Accuracy and Robustness of Neural Networks for Image Classification; ICLR: New Orleans, LA, USA, 2019. [Google Scholar]
Figure 1. Manual segmentation process of the dental implants. Note the precision of the dental implant’s groove segmentation, required to achieve a higher accuracy and DSC.
Figure 2. Automatic segmentation of the teeth. Manual segmentation (upper image) and automatic segmentation (lower image) can be seen above. Each tooth has a unique label according to FDI World Dental Federation notation.
Figure 3. Automatic segmentation of the carious lesions at the upper-left second molar and lower-right first molar. Manual segmentation (left) as well as automatic segmentation (right) can be seen above.
Figure 4. Automatic segmentation of the bridges. Manual segmentation (upper image) and automatic segmentation (lower image) can be seen above.
Figure 5. Automatic segmentation of dental implants. Manual segmentation (upper image) and automatic segmentation (lower image) can be seen above.
Figure 6. Automatic segmentation of root-canal fillings. Manual segmentation (upper image) and automatic segmentation (lower image) can be seen above.
Figure 7. Automatic segmentation of residual roots. Manual segmentation (upper image) and automatic segmentation (lower image) can be seen above.
Figure 8. Erroneous automatic segmentation at the maxillary left third molar due to the superimposition between the maxillary sinus floor and root apices of the tooth. Manual segmentation (upper image) and automatic segmentation (lower image) can be seen above.
Figure 9. Erroneous automatic segmentation at the mandibular left second molar due to the superimposition between the mandibular left first and mandibular left second premolars. Manual segmentation (upper image) and automatic segmentation (lower image) can be seen above.
Figure 10. A wide amalgam restoration in the mandibular left first premolar tooth was mis-segmented as a crown restoration. Manual segmentation (upper image) and automatic segmentation (lower image) can be seen above.
Figure 11. Crown–abutment and implant parts of a dental implant with a crown (left). Manual segmentation of the dental implant (middle) and automatic segmentation of the dental implant (right). Note that the abutment part and the superior portion of the implant were not segmented by our model in this case.
Figure 12. Imperfect segmentation of a gutta-percha at the mesial root of the mandibular left first molar. Manual segmentation (upper image) and automatic segmentation (lower image) can be seen above.
Table 1. The calculated Dice similarity coefficient values for segmentations of teeth, dental caries, dental restorations, crown–bridge restorations, dental implants, root canal fillings, and residual roots.

Structure                          DSC
Tooth segmentation (Figure 2)      0.85
Dental caries (Figure 3)           0.88
Dental restorations                0.87
Crown–bridge restorations (Figure 4)  0.93
Dental implants (Figure 5)         0.94
Root canal fillings (Figure 6)     0.78
Residual roots (Figure 7)          0.78
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
