Periodontal Disease Classification with Color Teeth Images Using Convolutional Neural Networks

Artificial Intelligence and Convergence Department, Pukyong National University, Busan 48513, Republic of Korea
Department of Computer Science Engineering, Rajshahi University of Engineering and Technology, Rajshahi 6204, Bangladesh
Department of Dental Hygiene, Kangwon National University, Samcheok 25949, Republic of Korea
Department of Dental Hygiene, Silla University, Busan 46958, Republic of Korea
School of Computer Science and Engineering, The University of Aizu, Aizuwakamatsu 965-8580, Japan
Authors to whom correspondence should be addressed.
Electronics 2023, 12(7), 1518;
Submission received: 23 January 2023 / Revised: 10 March 2023 / Accepted: 21 March 2023 / Published: 23 March 2023
(This article belongs to the Collection Image and Video Analysis and Understanding)


Oral health plays an important role in people's quality of life, as it is related to eating, talking, and smiling. In recent years, many studies have applied artificial intelligence to oral health care. Many studies have been published on tooth identification or the recognition of dental diseases using X-ray images, but studies with RGB images are rarely found. In this paper, we propose a deep convolutional neural network (CNN) model that classifies teeth with periodontal diseases from optical color images captured in front of the mouth. A novel network module with one-dimensional convolutions in parallel was proposed and compared to conventional models, including ResNet152. The proposed model achieved an accuracy 11.45% higher than that of the ResNet152 model, showing that the proposed structure enhances training performance, especially when the amount of training data is insufficient. This paper demonstrates the possibility of utilizing optical color images for the detection of periodontal diseases, which may lead to a mobile oral healthcare system in the future.

1. Introduction

Oral health plays an important role in people’s quality of life [1]. It is considered to contribute to well-being, as it is related to daily activities, such as eating, talking, and smiling. Recently, many researchers have proposed oral healthcare systems, as artificial intelligence (AI) has many potential advantages [2,3,4]. Most studies on this topic have been conducted using dental panoramic radiography (DPR) to identify the types of teeth (often called tooth numbering) and to recognize ages and dental caries. The earliest studies on tooth numbering were conducted in 2005 using periapical images [5,6]. These studies classified molars and premolars using pattern recognition and numbered them according to the position of each tooth. Since then, studies on tooth detection and numbering have become popular with the use of deep learning techniques [7].
In 2017, Miki et al. classified teeth into seven types according to their location using 52 cone-beam computed tomography (CBCT) images. This study used AlexNet as the convolutional neural network (CNN) structure and obtained a classification accuracy of 91% [8]. Later, an alternative approach pursued by Oktay et al. used 100 DPR images to classify teeth into three types. They used AlexNet as the CNN structure and obtained a classification accuracy of over 90% [9].
In 2018, Zhang et al. proposed a method to recognize 32 teeth and number them according to their location. They utilized the VGG16 model with datasets of 1000 X-ray images, and their proposed approach achieved a precision and recall of more than 95% [10]. Likewise, Tuzoff et al. proposed a computer-aided diagnosis solution to detect teeth and number them using the VGG16 model with 1574 DPR image datasets. Their proposed system achieved over 99% sensitivity and precision for teeth detection and over 98% for teeth numbering [11].
In 2019, Chen et al. studied tooth detection and numbering based on the ResNet model, with 1250 dental periapical images [6]. Muramatsu et al. studied tooth recognition and numbering using 100 DPR images. They used DetectNet, GoogLeNet, and ResNet structures and obtained a teeth detection sensitivity of 96% and a classification accuracy of more than 93% [12].
In 2020, Sukegawa et al. conducted a study that classified 11 implant types using 8859 dental X-ray images that included implants. They used VGG-16 and VGG-19 models and obtained a classification accuracy of more than 90% [13]. Kim et al. studied tooth and implant recognition and numbering using 303 DPR images. They used a region-based CNN (R-CNN) together with heuristics and obtained an accuracy of more than 96% for tooth recognition and more than 84% for tooth numbering [14]. Yasa et al. proposed a faster R-CNN model based on the GoogLeNet Inception v2 network for tooth recognition and numbering using 1125 bitewing radiographs. Their proposed model exhibited high sensitivity and precision rates, with values of 97.48%, 92.93%, and 95.15% for the sensitivity, precision, and F-measure, respectively [15].
In 2021, Kılıc et al. proposed a method for the automated detection and numbering of deciduous teeth in children using 421 DPR images. They implemented a faster R-CNN model based on the Inception v2 network. The performances were 96%, 95%, and 98% for the F1 score, precision, and sensitivity, respectively [16].
Görürgöz et al. studied tooth recognition and numbering with 1686 X-ray images. They utilized GoogLeNet and obtained an F1 score, precision, and sensitivity of 87%, 78%, and 98%, respectively [17]. Estai et al. studied tooth recognition and numbering based on VGG-16 with 591 DPR images. They achieved a recall and precision of more than 99% for tooth detection and more than 98% for tooth numbering [18].
Thus, the usability of deep learning models for teeth in X-ray images has been widely studied. However, the detection of tooth diseases has received comparatively little attention. Prajapati et al. classified 251 DPR images of disease-infected teeth using VGG-16 [19]. They obtained a classification accuracy of more than 88%. In another study, in 2020, You et al. designed a model for identifying areas of calculus using tooth images obtained by an intraoral camera [20]. A tooth region was cropped manually to train the AI model, and the calculus regions were identified from a single-tooth image. The model was trained on color images of a tooth using a disclosing agent. Its mean intersection-over-union (MIoU) was 0.724 ± 0.159, compared to an MIoU of 0.652 ± 0.195 for a dentist with 20 years of experience.
Despite these recent advances in teeth classification and oral healthcare using X-ray images [8,11,12,16], the number of studies using color images is relatively small in the literature. Li et al. detected plaque regions from color tooth images using super-pixel level features [21]. The detection accuracy was 86.42% with 607 tooth images. You et al. also presented a method to detect plaque regions from a tooth image using a transfer learning technique. The MIoU of the detected regions was statistically similar to that of the regions detected by dentists [20].
One of the issues in tooth disease classification is the collection and preparation of a labeled dataset, as studies using color teeth images remain scarce. It is difficult to obtain a precise model with a small amount of data, because deep neural networks generally require large datasets for training.
In this paper, we propose a neural network model that recognizes the presence of periodontal diseases, including calculus and inflammation, from a small teeth dataset. Teeth images, taken in front of the mouth with an optical camera, were collected from the internet. The proposed method automatically detects the tooth area and classifies tooth images with and without the diseases. Novel network structures were designed by utilizing one-dimensional convolutions and shortcuts.
The remainder of this paper is organized as follows. Section 2 describes the research method by explaining the data collection and network models. Section 3 presents the experimental results. Finally, Section 4 gives the conclusions.

2. Methods

2.1. Data Acquisition

To train the tooth image recognition model, we used 220 frontal tooth images collected from the internet. The collected data include 82 images of healthy teeth and 138 images of teeth with calculus or inflammation. During data collection, we selected teeth images that covered the entire tooth region using a mouth opener. Figure 1 and Figure 2 show examples of healthy teeth and teeth with calculus, respectively. The manual classification of calculus was conducted by two dental hygiene experts and a dentist. The dataset is open to the public on our website (accessed on 20 March 2023).
The tooth regions were labeled with rectangular bounding boxes using an open-source labeling tool (accessed on 15 February 2023). All labeling tasks were conducted manually to minimize errors (Figure 3).
All images in the dataset were resized to (640, 640, 3), as the deep neural network requires a fixed input size. The images were resized with a fixed aspect ratio, and the empty regions were padded with zeros. Color values were normalized to the range between 0 and 1.
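The preprocessing above (aspect-ratio-preserving resize, zero padding, and normalization) can be sketched in NumPy as follows. This is a minimal illustration, not the authors' implementation: the `letterbox` helper and nearest-neighbor resampling are our assumptions for brevity.

```python
import numpy as np

def letterbox(img, target=640):
    """Resize an H x W x 3 uint8 image to target x target, keeping the
    aspect ratio, zero-padding the empty regions, and normalizing
    color values to [0, 1] (nearest-neighbor resampling for simplicity)."""
    h, w = img.shape[:2]
    scale = target / max(h, w)
    nh, nw = int(round(h * scale)), int(round(w * scale))
    # nearest-neighbor index maps back into the original image
    rows = np.clip((np.arange(nh) / scale).astype(int), 0, h - 1)
    cols = np.clip((np.arange(nw) / scale).astype(int), 0, w - 1)
    resized = img[rows][:, cols]
    out = np.zeros((target, target, 3), dtype=np.float32)  # zero padding
    out[:nh, :nw] = resized / 255.0                        # normalization
    return out

img = np.full((480, 320, 3), 255, dtype=np.uint8)  # synthetic 480x320 image
x = letterbox(img)
```

A 480x320 input is scaled to 640x427 and padded on the right to 640x640; a real pipeline would typically use bilinear interpolation instead.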

2.2. Method Overview

The proposed calculus recognition method consists of two steps: teeth region detection and the classification of teeth with calculus or inflammation (Figure 4). Teeth region detection was employed to increase the accuracy of the proposed system and to avoid overfitting when the amount of training data is limited.
We adopted a 10-fold cross-validation policy for precise validation of the methods. Ninety percent of the data was used for training, and the remaining ten percent was used for testing. Training and testing were repeated 10 times so that every sample was used for testing exactly once across the folds. The same training/test divisions were used for both the teeth region detection and the classification of calculus or inflammation.
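The fold construction described above can be sketched as follows; the shuffling seed and the interleaved partitioning are illustrative assumptions, not the authors' exact procedure.

```python
import random

def ten_fold_splits(n_samples, seed=0):
    """Partition sample indices into 10 disjoint folds; each fold serves
    once as the test set (10%) while the remaining 90% forms the
    training set for that round."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::10] for i in range(10)]
    splits = []
    for k in range(10):
        test = set(folds[k])
        train = [i for i in idx if i not in test]
        splits.append((train, folds[k]))
    return splits

splits = ten_fold_splits(220)  # dataset size from Section 2.1
```

With 220 images, each round trains on 198 images and tests on 22, and every image is tested exactly once over the 10 rounds.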

2.3. Tooth Region Detection

Tooth detection was performed using YOLOv5 [22,23]. YOLOv5 was developed by Ultralytics in 2020 and has been widely used for various tasks. The detection accuracy and speed of YOLOv5 are known to be significantly better than those of the original YOLO [23]. YOLOv5 provides five sub-models: "x", "l", "m", "s", and "n". We utilized the "s" model, as only a single object class needs to be detected. The number of training epochs was set to 300, the batch size was set to 8, and pretrained weights were used for transfer learning. Stochastic gradient descent (SGD) was used as the optimizer, with the momentum, learning rate, and decay set to 0.98, 0.01, and 0.001, respectively, which are the default parameters of the YOLOv5 optimizer.
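One plausible invocation of the YOLOv5 training script matching the stated settings is sketched below; `teeth.yaml` is a hypothetical dataset configuration file naming the single teeth-region class, and the exact flags may vary across YOLOv5 releases.

```shell
# Train YOLOv5 "s" with the settings reported in Section 2.3:
# 300 epochs, batch size 8, pretrained yolov5s weights.
# teeth.yaml (hypothetical) points to the labeled frontal tooth images.
python train.py --img 640 --epochs 300 --batch-size 8 \
    --weights yolov5s.pt --data teeth.yaml
```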

2.4. Calculus Classification

The proposed network structure for the classification of calculus or inflammation is illustrated in Figure 5. Overall, the network is designed by stacking convolutional blocks together with max-pooling layers. A global average pooling layer summarizes the features extracted by the convolutional blocks, followed by dropout and fully connected layers that classify the features into two groups.
In this paper, we propose a novel way of utilizing two 1D convolutional layers by placing them in parallel (see the parallel conv block in Figure 5). One of the 1D convolutions was designed to detect features along the horizontal axis, and the other to detect vertical features. The filter sizes of both layers were the same, but the directions of the filters were orthogonal. The number of weights in the network was thereby reduced to two-thirds of that required by 2D convolutions.
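A quick parameter count illustrates this saving. The sketch below assumes a filter size of S = 3 with equal input and output channel counts and ignores biases; the actual layer widths are those of Figure 5.

```python
S, c_in, c_out = 3, 32, 32  # assumed filter size and channel counts

# conventional 2D convolution: S x S kernels
w_2d = S * S * c_in * c_out

# two parallel 1D convolutions: one [1, S] and one [S, 1] kernel set
w_parallel = 2 * S * c_in * c_out

ratio = w_parallel / w_2d  # = 2 / S, i.e., 2/3 for S = 3
```

For any channel configuration the ratio is 2/S, so the benefit grows with larger filter sizes.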
The parallel convolutional block has shortcut paths that bypass the 1D convolutional layers inside. By allowing this bypass, the network can learn with deeper layers, as demonstrated by ResNet [24].
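The data flow of the parallel block can be sketched on a single-channel feature map. This toy NumPy version is illustrative only: the "same" zero padding, the example kernels, and the summation merge of the two branches with the identity shortcut are our assumptions, as the figure does not spell out the merge rule.

```python
import numpy as np

def conv1d_on_2d(x, k, axis):
    """'Same'-padded 1D correlation of a 2D map x with kernel k along the
    given axis (axis=0: vertical [S,1] filter, axis=1: horizontal [1,S])."""
    pad = len(k) // 2
    widths = [(pad, pad) if a == axis else (0, 0) for a in range(2)]
    xp = np.pad(x, widths)
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            window = xp[i, j:j + len(k)] if axis == 1 else xp[i:i + len(k), j]
            out[i, j] = float(np.dot(window, k))
    return out

def parallel_block(x, k_h, k_v):
    # horizontal and vertical 1D convolutions applied in parallel,
    # merged with the identity shortcut by summation (assumed)
    return conv1d_on_2d(x, k_h, axis=1) + conv1d_on_2d(x, k_v, axis=0) + x

x = np.arange(16, dtype=float).reshape(4, 4)
y = parallel_block(x, k_h=np.array([0.0, 1.0, 0.0]),
                   k_v=np.array([0.0, 1.0, 0.0]))
```

With identity kernels each branch passes the input through unchanged, so the block output equals three times the input, which makes the shortcut's contribution easy to see.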
Figure 6 illustrates the variations of convolutional blocks designed to evaluate the effectiveness of the parallel convolutions and shortcuts. The shortcut was removed in type A; the two 1D convolutional layers were placed serially in type B; a single 1D convolutional layer was used in types C and D. The size of type C's convolution was [1, S], whereas that of type D was [S, 1]. Type E utilized a conventional 2D convolutional layer. The primary difference between types A and E is the number of weights, as type A does not consider diagonal features.

3. Results

3.1. Tooth Detection

Several studies have utilized deep transfer learning to detect teeth in radiographic images (see Table 1), but studies using color images are rare. In those studies, tooth detection was successful when pretrained networks were employed with the faster R-CNN model. Various networks, such as AlexNet, VGG, and GoogLeNet, were utilized, and the accuracies were higher than 95% for all pretrained networks. In this study, we achieved an F1 score of 99.9% and an mAP50 of 99.5% for teeth region detection. This is partly because the task is relatively easier than individual tooth detection.

3.2. Classification of Periodontal Disease

The proposed model showed superior accuracy for the classification of periodontal disease, as listed in Table 2. It achieved 74.54%, whereas the ResNet model achieved 63.09%. It was also shown that the proposed parallel convolutions and shortcuts were effective, as the mean accuracy decreased when they were removed or substituted with other components. Removing the shortcut decreased the accuracy by 7.72%, and the models with a single vertical 1D convolution, a single horizontal 1D convolution, serial 1D convolutions, and 2D convolutions achieved 69.54%, 68.19%, 67.73%, and 65.00%, respectively. This indicates that the proposed model learned effectively from the small number of training images.
Table 3 lists recent reports on the performance of the detection and classification of tooth diseases. As discussed in the Introduction, studies on this topic using color images are hardly found, and direct comparisons are difficult because of differing experimental conditions, such as the datasets used. One of the recent advances in this field is the work of Liang et al. [25], who detected and classified calculus, gingivitis, and deposits. The area under the curve (AUC) of their model was 80.11% for calculus. The accuracy was not reported numerically, but the reported AUC graph indicated that the accuracies were lower than 80%. The present study achieved an accuracy 11.45% higher than that of ResNet152, and the accuracy is expected to increase further with additional training images.

4. Conclusions

It is important to be able to analyze teeth in color images automatically, because this enables people to assess the health of their teeth. However, despite the number of studies on tooth recognition and classification in recent decades, the analysis of teeth in color images has not been extensively studied. This paper presented a convolutional neural network model for classifying periodontal diseases from color teeth images.
The proposed model was designed to classify teeth images with calculus and inflammation, especially when the amount of training data is insufficient, by substituting one-dimensional convolutions for two-dimensional convolutions and employing shortcuts in the network structure. The proposed model identified the teeth region and classified periodontal diseases successfully, increasing the classification accuracy by 11.45% in comparison to the ResNet152 model.
The present work showed that it was possible to utilize optical color images for the detection of periodontal diseases. This may lead to the development of a mobile oral healthcare system in the future because cameras on mobile phones are widely available.
In addition, it was found that one-dimensional convolutions in a parallel structure could avoid overfitting and increase accuracy when the amount of training data is insufficient. This network structure could be utilized in various image classification tasks.
Another future topic is the use of different types of datasets, including frontal teeth images taken without a mouth opener. All images in the experiments were taken with a mouth opener, which would not be suitable for public use. A larger dataset with precise labeling would also be necessary for a future study.

Author Contributions

Conceptualization, S.P. and W.-D.C.; Data Curation, S.P., S.-H.N. and Y.-R.K.; Methodology, S.P.; Project Administration, W.-D.C. and J.S.; Software, S.P.; Supervision, W.-D.C. and J.S.; Validation, H.E., S.-H.N. and Y.-R.K.; Writing—Original Draft Preparation, S.P., H.E. and M.A.M.H.; Writing—Review and Editing, H.E., M.A.M.H., J.S. and W.-D.C. All authors have read and agreed to the published version of the manuscript.


Funding

This work was supported by the Pukyong Global Joint Research Program (202301570001) of Pukyong National University.

Data Availability Statement

The dataset is open to the public on the authors' website (accessed on 20 March 2023).

Conflicts of Interest

The authors declare no conflict of interest.


References

1. Baiju, R.; Peter, E.; Varghese, N.; Sivaram, R. Oral Health and Quality of Life: Current Concepts. J. Clin. Diagn. Res. 2017, 11, ZE21–ZE26.
2. Yu, K.H.; Beam, A.L.; Kohane, I.S. Artificial Intelligence in Healthcare. Nat. Biomed. Eng. 2018, 2, 719–731.
3. Jiang, F.; Jiang, Y.; Zhi, H.; Dong, Y.; Li, H.; Ma, S.; Wang, Y.; Dong, Q.; Shen, H.; Wang, Y. Artificial Intelligence in Healthcare: Past, Present and Future. Stroke Vasc. Neurol. 2017, 2, 230–243.
4. Shen, C.; Nguyen, D.; Zhou, Z.; Jiang, S.B.; Dong, B.; Jia, X. An Introduction to Deep Learning in Medical Physics: Advantages, Potential, and Challenges. Phys. Med. Biol. 2020, 65, 05TR01.
5. Mahoor, M.H.; Abdel-Mottaleb, M. Classification and Numbering of Teeth in Dental Bitewing Images. Pattern Recognit. 2005, 38, 577–586.
6. Chen, H.; Zhang, K.; Lyu, P.; Li, H.; Zhang, L.; Wu, J.; Lee, C.H. A Deep Learning Approach to Automatic Teeth Detection and Numbering Based on Object Detection in Dental Periapical Films. Sci. Rep. 2019, 9, 1–11.
7. LeCun, Y.; Bengio, Y.; Hinton, G. Deep Learning. Nature 2015, 521, 436–444.
8. Miki, Y.; Muramatsu, C.; Hayashi, T.; Zhou, X.; Hara, T.; Katsumata, A.; Fujita, H. Classification of Teeth in Cone-Beam CT Using Deep Convolutional Neural Network. Comput. Biol. Med. 2017, 80, 24–29.
9. Betul Oktay, A. Tooth Detection with Convolutional Neural Networks. In Proceedings of the 2017 Medical Technologies National Congress (TIPTEKNO), Trabzon, Turkey, 12–14 October 2017; pp. 1–4.
10. Zhang, K.; Wu, J.; Chen, H.; Lyu, P. An Effective Teeth Recognition Method Using Label Tree with Cascade Network Structure. Comput. Med. Imaging Graph. 2018, 68, 61–70.
11. Tuzoff, D.V.; Tuzova, L.N.; Bornstein, M.M.; Krasnov, A.S.; Kharchenko, M.A.; Nikolenko, S.I.; Sveshnikov, M.M.; Bednenko, G.B. Tooth Detection and Numbering in Panoramic Radiographs Using Convolutional Neural Networks. Dentomaxillofac. Radiol. 2019, 48, 20180051.
12. Muramatsu, C.; Morishita, T.; Takahashi, R.; Hayashi, T.; Nishiyama, W.; Ariji, Y.; Zhou, X.; Hara, T.; Katsumata, A.; Ariji, E.; et al. Tooth Detection and Classification on Panoramic Radiographs for Automatic Dental Chart Filing: Improved Classification by Multi-Sized Input Data. Oral Radiol. 2021, 37, 13–19.
13. Sukegawa, S.; Yoshii, K.; Hara, T.; Yamashita, K.; Nakano, K.; Yamamoto, N.; Nagatsuka, H.; Furuki, Y. Deep Neural Networks for Dental Implant System Classification. Biomolecules 2020, 10, 984.
14. Kim, C.; Kim, D.; Jeong, H.G.; Yoon, S.J.; Youm, S. Automatic Tooth Detection and Numbering Using a Combination of a CNN and Heuristic Algorithm. Appl. Sci. 2020, 10, 5624.
15. Yasa, Y.; Çelik, Ö.; Bayrakdar, I.S.; Pekince, A.; Orhan, K.; Akarsu, S.; Atasoy, S.; Bilgir, E.; Odabaş, A.; Aslan, A.F. An Artificial Intelligence Proposal to Automatic Teeth Detection and Numbering in Dental Bite-Wing Radiographs. Acta Odontol. Scand. 2021, 79, 275–281.
16. Kılıc, M.C.; Bayrakdar, I.S.; Çelik, Ö.; Bilgir, E.; Orhan, K.; Aydın, O.B.; Kaplan, F.A.; Sağlam, H.; Odabaş, A.; Aslan, A.F.; et al. Artificial Intelligence System for Automatic Deciduous Tooth Detection and Numbering in Panoramic Radiographs. Dentomaxillofac. Radiol. 2021, 50, 20200172.
17. Görürgöz, C.; Orhan, K.; Bayrakdar, I.S.; Çelik, Ö.; Bilgir, E.; Odabaş, A.; Aslan, A.F.; Jagtap, R. Performance of a Convolutional Neural Network Algorithm for Tooth Detection and Numbering on Periapical Radiographs. Dentomaxillofac. Radiol. 2022, 51, 20210246.
18. Estai, M.; Tennant, M.; Gebauer, D.; Brostek, A.; Vignarajan, J.; Mehdizadeh, M.; Saha, S. Deep Learning for Automated Detection and Numbering of Permanent Teeth on Panoramic Images. Dentomaxillofac. Radiol. 2022, 51, 20210296.
19. Prajapati, S.A.; Nagaraj, R.; Mitra, S. Classification of Dental Diseases Using CNN and Transfer Learning. In Proceedings of the 5th International Symposium on Computational and Business Intelligence (ISCBI), Dubai, United Arab Emirates, 11–14 August 2017; pp. 70–74.
20. You, W.; Hao, A.; Li, S.; Wang, Y.; Xia, B. Deep Learning-Based Dental Plaque Detection on Primary Teeth: A Comparison with Clinical Assessments. BMC Oral Health 2020, 20, 141.
21. Li, S.; Pang, Z.; Song, W.; Guo, Y.; You, W.; Hao, A.; Qin, H. Low-Shot Learning of Automatic Dental Plaque Segmentation Based on Local-to-Global Feature Fusion. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), Iowa City, IA, USA, 3–7 April 2020; pp. 664–668.
22. Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
23. Available online: https://github.com/ultralytics/yolov5 (accessed on 10 February 2023).
24. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Los Alamitos, CA, USA, 27–30 June 2016; pp. 770–778.
25. Li, W.; Liang, Y.; Zhang, X.; Liu, C.; He, L.; Miao, L.; Sun, W. A Deep Learning Approach to Automatic Gingivitis Screening Based on Classification and Localization in RGB Photos. Sci. Rep. 2021, 11, 16831.
26. Kats, L.; Vered, M.; Zlotogorski-Hurvitz, A.; Harpaz, I. Atherosclerotic Carotid Plaques on Panoramic Imaging: An Automatic Detection Using Deep Learning with Small Dataset. arXiv 2018, arXiv:1808.08093.
Figure 1. Examples of healthy teeth images.
Figure 2. Examples of teeth images with calculus and inflammation.
Figure 3. Example of frontal tooth image data (a) and its labeled areas (b).
Figure 4. System overview.
Figure 5. Proposed network structure for calculus and inflammation classifications.
Figure 6. Five different types of convolutional blocks.
Table 1. Results from different studies of tooth recognition models based on transfer learning.

| Ref. | Year | Image Type | Goal | Dataset Size | CNN Architecture | Accuracy (%) |
|---|---|---|---|---|---|---|
| [8] | 2017 | CBCT | Tooth classification (7 classes) | 52 | AlexNet | 88.4 |
| [9] | 2017 | Panoramic | Tooth detection (3 classes) | 100 | AlexNet | 92.84 |
| [10] | 2018 | Periapical | Tooth classification (binary) | 1000 | VGG16 | 98.1 (F1 score) |
| [6] | 2019 | Periapical | Tooth classification (binary) | 1250 | ResNet | 98.65 (F1 score) |
| [11] | 2018 | Panoramic | Tooth detection | 1574 | VGG16 | 99.42 (F1 score) |
| [14] | 2020 | Panoramic | Tooth detection | 303 | Inception v3 | 96.7 (mean average precision) |
| [15] | 2020 | Bitewing | Tooth classification (12 classes) | 1125 | Inception v2 | 95.15 (F1 score) |
| [16] | 2021 | Panoramic | Tooth detection | 421 | Inception v2 | 96.86 (F1 score) |
| Proposed | | Color images | Teeth region detection | 220 | YOLOv5s | 99.99 (F1 score) |
Table 2. Classification accuracy (%) of periodontal disease in comparison to other methods.

| Method | Fold 1 | Fold 2 | Fold 3 | Fold 4 | Fold 5 | Fold 6 | Fold 7 | Fold 8 | Fold 9 | Fold 10 | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Parallel 1D conv + shortcut | 86.36 | 86.36 | 68.18 | 68.18 | 63.64 | 72.73 | 72.73 | 77.27 | 68.18 | 81.81 | 74.54 |
| Parallel 1D conv | 72.73 | 54.55 | 72.73 | 59.09 | 68.18 | 72.73 | 72.73 | 86.36 | 50.00 | 59.09 | 66.82 |
| Single 1D (vertical) conv + shortcut | 77.27 | 72.73 | 59.09 | 63.64 | 68.18 | 68.18 | 68.18 | 90.90 | 68.18 | 59.09 | 69.54 |
| Single 1D (horizontal) conv + shortcut | 77.27 | 86.36 | 68.18 | 63.64 | 68.18 | 72.78 | 68.18 | 68.18 | 54.55 | 54.55 | 68.19 |
| Serial 1D conv + shortcut | 72.73 | 68.18 | 63.64 | 63.64 | 54.55 | 72.73 | 72.73 | 72.73 | 59.09 | 77.27 | 67.73 |
| 2D conv + shortcut | 72.73 | 59.09 | 68.18 | 77.27 | 54.55 | 59.09 | 59.09 | 59.09 | 68.18 | 72.73 | 65.00 |
| ResNet152 + transfer learning | 63.64 | 77.27 | 59.09 | 50.00 | 59.09 | 59.09 | 68.18 | 63.63 | 63.64 | 63.64 | 62.73 |
| ResNet152 | 49.09 | 81.82 | 68.18 | 63.64 | 54.55 | 77.27 | 63.64 | 50.00 | 54.55 | 68.18 | 63.09 |
Table 3. The performance of different models for the localization, segmentation, detection, and classification of tooth diseases based on different datasets with varying sizes.

| Ref. | Year | Data Type | Dataset Size | Target | Model Architecture | Detection/Classification Accuracy (%) |
|---|---|---|---|---|---|---|
| [25] | 2021 | RGB images | 921 | Calculus | Multi-task learning CNN | AUC 87.11 (gingivitis), 80.11 (calculus), 78.57 (deposits) |
| [21] | 2020 | RGB images | 607 | Plaque | Super-pixel based CNN | CA 86.42 |
| [20] | 2020 | RGB intraoral images | 886 | Plaque | CNN model | MIoU 0.726 |
| [26] | 2020 | Panoramic images | 65 | Plaque | Faster R-CNN | AUC 83 |
| Proposed | | Optical color images | 220 | Calculus and inflammation | Parallel 1D CNN | CA 74.54 |
Area Under Curve (AUC), Intersection Over Union (IOU), Pixel Accuracy (PA), Mean Intersection Over Union (MIoU), Classification Accuracy (CA) in percentage.

Park, S.; Erkinov, H.; Hasan, M.A.M.; Nam, S.-H.; Kim, Y.-R.; Shin, J.; Chang, W.-D. Periodontal Disease Classification with Color Teeth Images Using Convolutional Neural Networks. Electronics 2023, 12, 1518.
