Article

Relevance of the Operator’s Experience in Conditioning the Static Computer-Assisted Implantology: A Comparative In Vitro Study with Three Different Evaluation Methods

Gerardo Pellegrino, Giuseppe Lizio, Filippo D’Errico, Agnese Ferri, Annalisa Mazzoni, Federico Del Bianco, Luigi Vito Stefanelli and Pietro Felice
1 Department of Biomedical and Neuromotor Sciences (DIBINEM), University of Bologna, 40100 Bologna, Italy
2 Department of Oral and Maxillo-Facial Sciences, Sapienza University of Rome, 00100 Rome, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(19), 9561; https://doi.org/10.3390/app12199561
Submission received: 28 July 2022 / Revised: 19 September 2022 / Accepted: 20 September 2022 / Published: 23 September 2022
(This article belongs to the Special Issue Dental Materials: Latest Advances and Prospects - Volume II)

Abstract
The present study aimed to evaluate the influence of manual expertise on static computer-aided implantology (s-CAI) in terms of accuracy and operative timing. After cone-beam CT (CBCT) scanning of eleven mandibular models, a full-arch rehabilitation was planned, and two operators with different levels of skill performed s-CAI. The distances between the virtual and actual implant positions were calculated along the three spatial vectorial axes and as the three-dimensional Euclidean value for the entry (E) and apical (A) points, together with the axis orientation differences (Ax). These values were obtained by superimposing the pre-op CBCT data onto the post-op CBCT data (method 1), onto the laboratory scanner data (method 2), and onto the intra-oral scanner data (method 3), and were correlated with the operators’ expertise and operative timings. The mean accuracy values from the three methods were E = 0.57 (0.8, 0.45, 0.47) mm, A = 0.6 (0.8, 0.48, 0.49) mm, and Ax = 1.04 (1.05, 1.03, 1.05)° for the expert operator, and E = 0.85 (0.9, 0.87, 0.77) mm, A = 0.95 (1.02, 0.95, 0.89) mm, and Ax = 1.64 (1.78, 1.58, 1.58)° for the novice. The mean operative timing was significantly lower for the expert operator (p < 0.05), with accuracy improving over time for both operators. A significant difference (p < 0.05) emerged between method 1 and methods 2 and 3 for seven of the nine variables, with no differences between the evaluations from the two scanners. The support of digital surgical guides does not eliminate the importance of manual expertise for the reliability and speed of the surgical procedure, which still requires a learning pathway over time.

1. Introduction

Standardizing a treatment to a high level of reliability and effectiveness, beyond personal human experience and capabilities, is the goal of applied Digital Science [1]. Computer-aided implantology (CAI), both static (s-CAI) and dynamic (d-CAI), allows more precise fixture placement than a freehand approach, reducing surgical time and patient discomfort [2,3,4,5,6].
This procedure, however, presents critical issues related to its complexity, the number of steps involved, and the competence required. The quality of the Digital Imaging and Communications in Medicine (DICOM) data from cone-beam CT (CBCT) and of the Standard Tessellation Language (STL) data, together with the reliability of the digital algorithm that merges these files, conditions the virtual planning [4,7,8,9,10]. Furthermore, in s-CAI, the computer-aided manufacturing (CAM) process [11] and the structural features of the surgical guide are relevant to the final result [2,4,12].
Assuming the integrity of the digital planning and guide printing, it appears helpful to verify whether, and to what extent, the operator’s personal ability can influence the final surgical phase, improving the precision of implant placement and shortening the surgical time.
Conflicting outcomes have emerged from in vitro and in vivo trials regarding the relevance of human expertise, not only as a freehand implantologist but also in terms of competence in using s-CAI and the existence of a learning curve. No relevance of s-CAI expertise has been recorded for skilled implantologists [13], and better accuracy has emerged for novices under senior supervision than for expert operators [14]. Significant deviations between the planned and the actual positions have been attributed to improper use of the handpiece, which conditions the action of the drills in the sleeves [15], and to the positioning of the guide in the patient’s mouth [13]. Improvement over time for experienced implantologists has been excluded, at least for edentulous patients [16]. In the most recent literature reviews on s-CAI, the operator’s experience has not emerged as a factor influencing the accuracy of the procedure. Nevertheless, the variability and unreliability of the assessment methods adopted by the studies considered were highlighted as a source of difficulty in performing meta-analyses [6,7,12,17].
Concerning the methods of accuracy evaluation, the degree of digital superimposition of the postoperative onto the virtually planned implant positions (the matching of the pre- and postoperative CT DICOM data) has been accepted and shared in the literature as an accuracy parameter. The shifts of the coronal and apical portions of the fixture and its tilting and deepening were considered geometrical reference points [18]. Optical scanners introduced an alternative to this analysis, but the same factors that affect the registration process in the planning phase with these tools also condition the assessment method [19,20,21,22]. No study has investigated the influence of operator skill on reducing the operative time.
The primary objective of the present study was to understand the relevance of the operator’s expertise in conditioning implant placement accuracy with the s-CAI approach and to determine if accuracy improved over time for experts and novices (learning curve).
The secondary objective was to understand if the operator’s expertise could influence the operative time in the s-CAI approach.
Three methods of superimposition data evaluation were adopted and compared to each other: CBCT (pre and post), CBCT (pre) and laboratory scanning (post), and CBCT (pre) and intraoral scanning (post).

2. Materials and Methods

2.1. Digital and Operative Flow-Chart

Eleven acrylic resin models reproducing completely edentulous mandibles with different alveolar shapes were scanned with a CBCT. The models reproduced typical situations that could be managed without grafting techniques, avoiding dehiscences or fenestrations. The DICOM data were imported into dedicated software (B&B Dent SL GS, B&B Dental Implant Company, Bologna, Italy). The placement of eight implants, 4 mm wide and 11 mm long (Duravit 3P, B&B Dental Implant Company, Bologna, Italy), in each model, uniformly distributed between posterior and anterior locations, was planned for a fixed full-arch rehabilitation according to the bone volumes and shapes simulated by the models. The virtual implant positions were identified on the DICOM data, which were then converted into STL files for model segmentation and for the design and printing of the guides according to the implant positions. The first dentist (GP), the more experienced operator, carried out the planning phase in collaboration with the digital designer.
The surgical guides were designed as bone-supported devices. Since there was no occlusal relationship with an opposing maxilla, the plan could not start from actual prosthetic demands. Three vestibular fixing pins to stabilize the guides and eight sleeves, 4.2 mm in diameter and 5 mm in length, were designed. The sleeves were made of poly-ether-ether-ketone (PEEK) instead of metal to avoid friction and overheating during implant site preparation. Five and a half models, with the guides applied, were randomly assigned to each of the two operators and fixed to the dental chair headboard to simulate clinical ergonomics (Figure 1). The guides were fixed to all the models by the more experienced operator (GP), who was unaware of the subsequent random operator assignment.
The first dentist (GP) had more than fifteen years of experience in freehand implantology and more than ten years of experience in s-CAI. The second (FD) was a post-graduate dentist, a complete novice in general dental practice. The implants were placed independently in separate sessions, each dedicated to one model, and the time taken was recorded. For each model, a new kit of burs was used to eliminate the bias of tool wear. The operative timings were recorded in seconds and statistically analyzed.
The implanted models were then CBCT scanned again, and the DICOM data were imported into the same software mentioned above (method 1). The “image substitution analysis” matched the pre- and postoperative CT sessions without segmentation of the implants or conversion to STL data.
For methods 2 and 3, optical scans of the resin models were taken with a laboratory scanner (EGS dscan3) and an intra-oral scanner (3Shape TRIOS, Copenhagen, Denmark), respectively, after connecting the scan bodies to the implants.
The flow-chart is shown in Figure 2.
The data relating to the external surfaces of the resin models from the three techniques, exported as STL files, were volumetrically analyzed, and the pre- and postoperative virtual models and implants were aligned with GOM Inspect software. Each aligned pair of pre- and postoperative surfaces was exported individually and loaded into Meshmixer software, which removed the superimposed surfaces of the resin copies while keeping those of the implants (Figure 3).
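The superimposition itself was performed with GOM Inspect and Meshmixer; no code belongs to the original workflow. Purely as an illustrative sketch of the kind of best-fit rigid alignment such software relies on, the following Python snippet applies the Kabsch algorithm to a small set of hypothetical corresponding points sampled on the pre- and postoperative surfaces; the point coordinates and the simulated displacement are invented.

```python
import numpy as np

def best_fit_rigid_transform(source_pts, target_pts):
    """Kabsch algorithm: rotation R and translation t that best map
    source_pts onto target_pts in the least-squares sense.
    Both inputs are (N, 3) arrays of corresponding points."""
    src_centroid = source_pts.mean(axis=0)
    tgt_centroid = target_pts.mean(axis=0)
    src_c = source_pts - src_centroid
    tgt_c = target_pts - tgt_centroid
    # Cross-covariance matrix and its SVD
    H = src_c.T @ tgt_c
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (determinant -1)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_centroid - R @ src_centroid
    return R, t

# Hypothetical landmarks (mm) on the unchanged external surface of the
# preoperative scan, and the same landmarks in the postoperative scan
# (here simulated with a small known rotation and translation).
pre_pts = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0],
                    [0.0, 10.0, 0.0], [0.0, 0.0, 10.0]])
theta = np.deg2rad(2.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
post_pts = pre_pts @ R_true.T + np.array([0.2, -0.1, 0.05])

R, t = best_fit_rigid_transform(pre_pts, post_pts)
aligned = pre_pts @ R.T + t
print("mean residual distance (mm):",
      np.linalg.norm(aligned - post_pts, axis=1).mean())
```

Once the unchanged external surfaces are aligned in this way, any residual displacement of the implant geometry reflects the placement deviation rather than a scanning offset, which is the rationale of the superimposition step described above.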

2.2. Measurement Parameters for Accuracy Analysis

For each pre- and postoperative implant surface, the entry point (E) and the apical point (A) were identified as the most coronal and the most apical points, respectively, obtained from the geometric intersection of the fixture’s axis of symmetry with its surface. The distance between the planned and actual positions of each point was decomposed into three vectorial components, vestibular-lingual (x), mesiodistal (y), and apical-coronal (z), taken as deviation values in the respective implant portions (Edx, Edy, Edz; Adx, Ady, Adz). The three-dimensional Euclidean distance was calculated to obtain a single entry (Ed) and a single apical (Ad) deviation. The angular discrepancy (Ang), expressed in degrees, was further calculated from the longitudinal axes of the pre- and postoperative implants. The greater the deviation from zero, the greater the inaccuracy.
These calculations were performed for the six groups derived from matching each of the two operators with the three analysis methods.
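For illustration only, the deviation parameters defined above can be expressed as a short computation. The following Python sketch uses made-up planned and placed coordinates; in the study these coordinates were derived from the superimposed CBCT/STL data, not from code of this kind.

```python
import numpy as np

def implant_deviations(planned_entry, planned_apex, placed_entry, placed_apex):
    """Per-implant accuracy metrics as defined in Section 2.2:
    component-wise and 3D Euclidean deviations at the entry (E) and
    apical (A) points, plus the angle between the two implant axes."""
    planned_entry, planned_apex = np.asarray(planned_entry, float), np.asarray(planned_apex, float)
    placed_entry, placed_apex = np.asarray(placed_entry, float), np.asarray(placed_apex, float)

    e_dx, e_dy, e_dz = np.abs(placed_entry - planned_entry)   # Edx, Edy, Edz (mm)
    a_dx, a_dy, a_dz = np.abs(placed_apex - planned_apex)     # Adx, Ady, Adz (mm)
    ed = np.linalg.norm(placed_entry - planned_entry)         # Ed, 3D Euclidean (mm)
    ad = np.linalg.norm(placed_apex - planned_apex)           # Ad, 3D Euclidean (mm)

    # Angular deviation between the planned and placed long axes (degrees)
    planned_axis = planned_apex - planned_entry
    placed_axis = placed_apex - placed_entry
    cos_ang = np.dot(planned_axis, placed_axis) / (
        np.linalg.norm(planned_axis) * np.linalg.norm(placed_axis))
    ang = np.degrees(np.arccos(np.clip(cos_ang, -1.0, 1.0)))

    return {"Edx": e_dx, "Edy": e_dy, "Edz": e_dz, "Ed": ed,
            "Adx": a_dx, "Ady": a_dy, "Adz": a_dz, "Ad": ad, "Ang": ang}

# Illustrative coordinates (mm) for one planned vs. placed implant
print(implant_deviations(planned_entry=[0.0, 0.0, 0.0], planned_apex=[0.0, 0.0, -11.0],
                         placed_entry=[0.3, 0.2, 0.1],  placed_apex=[0.5, 0.4, -10.9]))
```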

2.3. Measurement Parameters for Operative Timing Analysis

The time counter was managed by an external assessor (FDB); counting started when the motor was switched on and stopped when it was switched off for each site preparation and implant placement. The times of all phases were summed to obtain a total value for each implant site. A mean value per model was calculated over the ten simulators, excluding the shared one, for a total of 80 implants.

2.4. Statistical Analysis

A descriptive analysis was performed, presenting continuous variables as means ± standard deviations (SDs) with minimum and maximum values. An inferential analysis compared the groups in terms of the mean values of all the accuracy variables (the Friedman test was used since the data were not normally distributed).
The significance level was set at 0.05. All the analyses were performed using Stata, version 15 (Stata Corp LP, College Station, TX, USA).
The sample size was calculated considering an alpha error of 0.05, a power of 0.80, and an effect size (f) of 0.40, taking the literature data into consideration.
A double-tailed t-test evaluated the significant difference between the two operators regarding implant placement velocity.
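The analyses were run in Stata 15. As a hedged illustration only, the following Python sketch (assuming scipy and statsmodels are available) reproduces the same kinds of tests on invented numbers: a Friedman test across the three evaluation methods, a double-tailed t-test on the per-model operative times, and an ANOVA-type power calculation with f = 0.40. The k_groups value is an assumption of this sketch, since the grouping used for the sample size calculation is not reported.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import FTestAnovaPower

rng = np.random.default_rng(0)

# Friedman test: the same 88 implants evaluated with the three methods
# (paired, non-normally distributed data). The values are invented.
ed_method1 = rng.normal(0.86, 0.44, 88).clip(min=0)
ed_method2 = rng.normal(0.66, 0.41, 88).clip(min=0)
ed_method3 = rng.normal(0.62, 0.34, 88).clip(min=0)
chi2, p_friedman = stats.friedmanchisquare(ed_method1, ed_method2, ed_method3)
print(f"Friedman chi2 = {chi2:.2f}, p = {p_friedman:.4f}")

# Double-tailed t-test on the per-model operative times (minutes) of the two
# operators (five models each); again, the numbers are illustrative only.
expert_times = [11.5, 13.0, 14.8, 15.2, 16.7]
novice_times = [29.0, 27.5, 25.1, 23.9, 21.9]
t_stat, p_ttest = stats.ttest_ind(expert_times, novice_times)
print(f"t = {t_stat:.2f}, p = {p_ttest:.4f}")

# A priori sample size for an ANOVA-type comparison with alpha = 0.05,
# power = 0.80, and effect size f = 0.40, as stated in Section 2.4.
# k_groups = 2 (the two operators) is an assumption of this sketch.
n_total = FTestAnovaPower().solve_power(effect_size=0.40, alpha=0.05,
                                        power=0.80, k_groups=2)
print(f"required total number of observations: {n_total:.1f}")
```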

3. Results

3.1. Accuracy

A total of 88 implant positions were compared with the corresponding preoperatively planned ones.
The mean values from the three evaluation methods showed a significantly higher overall level of accuracy for the first operator than for the second (p < 0.05) (Table 1).
According to method 1, the first operator reached Ed = 0.8 ± 0.43 mm, Ad = 0.85 ± 0.39 mm, and Ang = 1.05 ± 0.53°; the second reached Ed = 0.91 ± 0.46 mm, Ad = 1.02 ± 0.45 mm, and Ang = 1.78 ± 1.12°. The overall mean values were Ed = 0.86 ± 0.44 mm, Ad = 0.93 ± 0.43 mm, and Ang = 1.41 ± 0.95°. Considering each vectorial component, the second operator obtained lower minimum values than the first for all the variables except Ady, compensated by higher maximum values.
According to method 2, the first operator reached Ed = 0.45 ± 0.23 mm, Ad = 0.48 ± 0.24 mm, and Ang = 1.03 ± 0.47°; the second reached Ed = 0.87 ± 0.45 mm, Ad = 0.95 ± 0.42 mm, and Ang = 1.58 ± 0.97°, with overall means of Ed = 0.66 ± 0.41 mm, Ad = 0.72 ± 0.42 mm, and Ang = 1.31 ± 0.81°.
With this method, the second operator obtained lower minimum deviations than the first, but for fewer variables (Ed, Adx, Ady, and Ang) than in method 1. According to method 3, the first operator reached Ed = 0.47 ± 0.27 mm, Ad = 0.49 ± 0.25 mm, and Ang = 1.05 ± 0.57°; the second reached Ed = 0.77 ± 0.34 mm, Ad = 0.89 ± 0.36 mm, and Ang = 1.58 ± 0.83°, with overall means of Ed = 0.62 ± 0.34 mm, Ad = 0.69 ± 0.37 mm, and Ang = 1.31 ± 0.76°. An identical Edx value, 0.15 ± 0.1 mm, was found for both operators. As with the previous methods, the second operator’s lower minimum deviations regarded the Edx, Ed, and Ad variables (Table 2).
A statistically significant difference (p < 0.05) resulted between method 1 and both methods 2 and 3, except for the Adx and Ang values, with no differences between the two optical scanner evaluations (methods 2 and 3) for any of the variables (Table 3).
The mean values from the three methods were 0.57 mm (Ed), 0.60 mm (Ad), and 1.04° (Ang) for the first operator, and 0.85 mm (Ed), 0.95 mm (Ad), and 1.64° (Ang) for the second. Figure 4 contains the box plots comparing the two operators on the mean values from the three methods.
A positive correlation resulted between the number of progressively implanted models and the implant placement accuracy, although this result was statistically significant for the second operator only (Table 4).

3.2. Operative Timings

The first operator completed the implant seating with a mean time of 14.24 min per model, compared with 25.48 min for the second operator; this difference was statistically significant (p < 0.05) (Table 5).

4. Discussion

Freehand dental implantology depends on the manual skill of the operator, particularly regarding the deviation of the implant axis from what is prosthetically required [23]. Average angular deviations of 3.04° (range 0.4–6.3°) and 7.03° (range 0.7–21.3°) have been reported for s-CAI and traditional treatment, respectively [24]. Younes et al. reported that 19.2% of freehand-placed implants were bearing cement-retained prostheses, compared with 100% screw-retained superstructures on fixtures inserted with digital guides [6]. Overall accuracy values with s-CAI systems have been reported as ≤1 mm for all linear evaluations and ≤5° for angular shifting [4,25]. Provided the digital template is correctly produced, the main reason for the higher precision of s-CAI could be the reduction in human error. The present study investigated the possible relevance of human expertise, understood as the enhancement of accuracy and reduction of operative time derived from the repetition of a specific practical activity over time. The method followed aimed to eliminate, as much as possible, the factors interfering with a univocal interpretation of the data. Since the present study was undertaken in vitro, it was not affected by the variables of clinical practice, which an operative guide would not have reduced, and it allowed a novice operator to carry out the procedure independently. The models used in this trial reproduced similar edentulous mandibular situations with the same level of complexity. The flow-chart for guide manufacturing was simplified to minimize the risk of procedural mistakes, and a bone-supported device was planned without an opposing dental arch or soft tissue features. Since the guides were applied to the models by the expert dentist and accurately checked, possible bias related to an error in this phase was prevented. On the other hand, the present results are poorly transferable to flapless approaches, where soft tissue scanning is mandatory.
The greater accuracy recorded for the expert operator in this study confirms what has been reported in in vitro trials [2,26,27,28], while clinical studies are inconsistent [29,30]. One study reported higher accuracy with the guided digital system than with the freehand approach when used by novices [2]. In in vivo studies summarized in a literature review, an average of 1.12 mm and a maximum of 4.5 mm at the entry point, an average of 1.39 mm and a maximum of 7.1 mm at the apex, and an average axis shift of 3.89° with a maximum of 21.16° were recorded [4]. Naeini et al. reported 1.00 mm, 1.23 mm, and 3.13° for coronal, apical, and angular deviations, respectively, in another clinical literature review on the flapless technique [7]. Cunha et al. recorded, in vivo, a mean angular deviation of 2.04° and mean coronal, central, and apical linear deviations of 0.68 mm, 0.72 mm, and 0.82 mm, respectively [29]. The overall deviations in the present study were lower than those in the literature on patients, particularly for angular shifting, for both the expert and the inexpert operator and regardless of the evaluation method.
Strictly correlated with experience is the concept of a learning curve, i.e., the progressive number of attempts needed to reach a repeatable level of precision. Awareness of these data helps standardize and optimize training. With reference to d-CAI, Block et al., in a clinical trial, reported the presence of a learning curve and set the twentieth attempt as the initial plateau [30]. Cassetta et al. did not find a learning curve for s-CAI after analyzing the results of two implantologists, both expert in the freehand procedure but inexpert with s-CAI, working on totally edentulous patients; at the same time, a significant correlation between coronal and angular deviations and the progressive number of treated cases emerged for partially edentulous subjects [16]. Nevertheless, the authors acknowledged the reduced sample size and the absence of an operative timing evaluation as limits of their trial. No in vitro study has correlated the accuracy of implant placement with the progressive number of operative performances.
Even if a true learning curve reaching a common accuracy plateau was not identified, a continuous improvement in accuracy over time was observed for both operators in the present study.
The habit of managing the physical resistance of the bone during implant placement by keeping the handpiece in a steady position maintains its importance [7,15]. Nevertheless, the deviations reported in the present study were so low that the differences between the two operators did not compromise the final result of the procedure, and the greater reliability of s-CAI with respect to the freehand approach was confirmed. Since the fit between the sleeves and the drills is considered crucial for implant placement accuracy, the characteristics of the adopted device may have positively affected the outcomes. The present study used specific disposable poly-ether-ether-ketone (PEEK) sleeves, as in another similar trial [31]. This solution, proposed for the same purposes in orthopedics [32], should give less rigidity to the system while keeping a good fit with the drills, thanks to the thermoplastic properties of PEEK. Templates with tunnels printed directly according to the digital project appear more customized than those with standard metallic sleeves inserted afterwards at the designed positions.
Nevertheless, particularly in partial edentulism with at least five residual teeth, these devices have reached a similar level of accuracy [33,34].
Before the advent of optical scanners, the superimposition of virtually planned and actually placed implants was evaluated by comparing the pre- and postoperative CT scans [35,36]. The “indirect” methods, using extraoral (EOS) or intraoral (IOS) scanners, avoid a second X-ray examination for the patient but are prone to the above-cited shortcomings in the imaging data acquisition phase. Impression distortions and related cast imprecision for the EOS, and intraoral conditions, patient movements, or erroneous device movements during IOS, can affect the accuracy of the STL data [20,34], along with an imprecise connection of the scanning pins in both techniques. Furthermore, the quality of the 3D planning software algorithms for matching DICOM and STL files affects the absolute reliability of s-CAI based on optical scanners [35]. Regarding in vitro surveys, Stefanelli et al. reported a minimal discrepancy between a new type of IOS with improved scan bodies and the EOS [37], and Franchina et al. did not find a significant difference among the outcomes of radiographic, EOS, and IOS methods of accuracy evaluation applied to freehand, s-CAI, and d-CAI implantology, each performed by a relevant expert [31]. A notable discrepancy among the outcomes of scanning and radiologic evaluation methods has emerged in the literature [19,20,34,37]. Since a gold standard has not yet been established, cross-validation using both direct and indirect methods has been recommended [7,11,16]. The present trial adopted and compared three assessment methods to eliminate the evaluation bias highlighted by several papers.
A recent literature review of clinical studies revealed that a pre- and postoperative comparison of CT images, a scanned post-surgical model, and intraoral optical scanning were adopted for accuracy evaluation in eight, two, and four studies, respectively [35].
The average differences between the expert and the novice operator across the three methods were 0.2 mm, 0.3 mm, and 0.6° for the Ed, Ad, and Ang parameters, respectively, confirming the apex position and the implant inclination as the most critical points. Skjerven et al., in an in vivo study, reported a significant difference between the deviation measurements for the coronal and apex points obtained with pre- and postoperative CBCT and those obtained with preoperative CBCT and postoperative IOS [19].
In particular, the overall mean angular deviations from the three methods, i.e., 1.04° and 1.64° for the expert and the novice operator, respectively, were both lower than the 3.23° ± 1.00° reported by a similar in vitro study as an average of the radiological and optical scanning analyses. In addition, our angular results with method 1, i.e., 1.05 ± 0.53° and 1.78 ± 1.12° for the experienced and novice operators, respectively, showed lower deviations than the 2.60 ± 1.25° and 3.96 ± 1.64° reported by Cushen and Turkyilmaz, who adopted only the superimposition of pre- and postoperative CBCT data [28]. In any case, an exhaustive comparison among different trials is challenging because of the many variables related to data importing, matching, and elaboration processes, even within the same method.

5. Conclusions

The present trial confirmed, with all three analysis methods, a high level of accuracy for both the novice and the skilled implantologist using s-CAI, undoubtedly superior to the freehand approach. The starting experience and the improvement in precision over time were significantly relevant to the reliability of implant placement.

Author Contributions

Conceptualization: G.P. and F.D.; methodology: G.P., A.F. and G.L.; software: F.D., L.V.S. and G.P.; validation: A.M., F.D.B. and P.F.; formal analysis: A.F., G.L., G.P. and F.D.B.; resources: G.P.; data curation: P.F., A.M. and G.P.; writing—original draft preparation: G.L.; writing—review and editing: G.L.; visualization: P.F. and G.P.; supervision: G.P. and L.V.S.; project administration: G.P.; funding acquisition: G.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Pellegrino, G.; Lizio, G.; Ferri, A.; Marchetti, C. Flapless and bone-preserving extraction of partially impacted mandibular third molars with dynamic navigation technology. A report of three cases. Int. J. Comput. Dent. 2021, 24, 253–262. [Google Scholar]
  2. Abduo, J.; Lau, D. Accuracy of static computer-assisted implant placement in long span edentulous area by novice implant clinicians: A cross-sectional in vitro study comparing fully-guided, pilot-guided, and freehand implant placement protocols. Clin. Implant Dent. Relat. Res. 2021, 23, 361–372. [Google Scholar] [CrossRef]
  3. Pellegrino, G.; Bellini, P.; Cavallini, P.F.; Ferri, A.; Zacchino, A.; Taraschi, V.; Marchetti, C.; Consolo, U. Dynamic Navigation in Dental Implantology: The Influence of Surgical Experience on Implant Placement Accuracy and Operating Time. An in Vitro Study. Int. J. Environ. Res. Public Health 2020, 17, 2153. [Google Scholar] [CrossRef]
  4. Tahmaseb, A.; Wu, V.; Wismeijer, D.; Coucke, W.; Evans, C. The accuracy of static computer-aided implant surgery: A systematic review and meta-analysis. Clin. Oral Implants Res. 2018, 29 (Suppl. S16), 416–435. [Google Scholar] [CrossRef]
  5. Younes, F.; Eghbali, A.; De Bruyckere, T.; Cleymaet, R.; Cosyn, J. A randomized controlled trial on the efficiency of free-handed, pilot-drill guided and fully guided implant surgery in partially edentulous patients. Clin. Oral Implants Res. 2019, 30, 131–138. [Google Scholar] [CrossRef]
  6. Vermeulen, J. The Accuracy of Implant Placement by Experienced Surgeons: Guided vs. Freehand Approach in a Simulated Plastic Model. Int. J. Oral Maxillofac. Implants 2017, 32, 617–624. [Google Scholar] [CrossRef]
  7. Naeini, E.N.; Atashkadeh, M.; De Bruyn, H.; D’Haese, J. Narrative review regarding the applicability, accuracy, and clinical outcome of flapless implant surgery with or without computer guidance. Clin. Implant Dent. Relat. Res. 2020, 22, 454–467. [Google Scholar] [CrossRef]
  8. Joda, T.; Derksen, W.; Wittneben, J.G.; Kuehl, S. Static computer-aided implant surgery (s-CAIS) analysing patient-reported outcome measures (PROMs), economics and surgical complications: A systematic review. Clin. Oral Implants Res. 2018, 29 (Suppl. S16), 359–373. [Google Scholar] [CrossRef]
  9. Vercruyssen, M.; Laleman, I.; Jacobs, R.; Quirynen, M. Computer-supported implant planning and guided surgery: A narrative review. Clin. Oral Implants Res. 2015, 26 (Suppl. S11), 69–76. [Google Scholar] [CrossRef]
  10. Elliott, T.; Hamilton, A.; Griseto, N.; Gallucci, G.O. Additively Manufactured Surgical Implant Guides: A Review. J. Prosthodont. 2022, 31 (Suppl. S1), 38–46. [Google Scholar] [CrossRef]
  11. Anadioti, E.; Kane, B.; Zhang, Y.; Bergler, M.; Mante, F.; Blatz, M.B. Accuracy of Dental and Industrial 3D Printers. J. Prosthodont. 2022, 31 (Suppl. S1), 30–37. [Google Scholar] [CrossRef]
  12. Derksen, W.; Wismeijer, D.; Flügge, T.; Hassan, B.; Tahmaseb, A. The accuracy of computer-guided implant surgery with tooth-supported, digitally designed drill guides based on CBCT and intraoral scanning. A prospective cohort study. Clin. Oral Implants Res. 2019, 30, 1005–1015. [Google Scholar] [CrossRef]
  13. Cassetta, M.; Bellardini, M. How much does experience in guided implant surgery play a role in accuracy? A randomized controlled pilot study. Int. J. Oral Maxillofac. Surg. 2017, 46, 922–930. [Google Scholar] [CrossRef]
  14. Van de Wiele, G.; Teughels, W.; Vercruyssen, M.; Coucke, W.; Temmerman, A.; Quirynen, M. The accuracy of guided surgery via mucosa-supported stereolithographic surgical templates in the hands of surgeons with little experience. Clin. Oral Implants Res. 2015, 26, 1489–1494. [Google Scholar] [CrossRef]
  15. Van Assche, N.; Quirynen, M. Tolerance within a surgical guide. Clin. Oral Implants Res. 2010, 21, 455–458. [Google Scholar] [CrossRef]
  16. Cassetta, M.; Altieri, F.; Giansanti, M.; Bellardini, M.; Brandetti, G.; Piccoli, L. Is there a learning curve in static computer-assisted implant surgery? A prospective clinical study. Int. J. Oral Maxillofac. Surg. 2020, 49, 1335–1342. [Google Scholar] [CrossRef]
  17. Bover-Ramos, F.; Viña-Almunia, J.; Cervera-Ballester, J.; Peñarrocha-Diago, M.; García-Mira, B. Accuracy of Implant Placement with Computer-Guided Surgery: A Systematic Review and Meta-Analysis Comparing Cadaver, Clinical, and In Vitro Studies. Int. J. Oral Maxillofac. Implants 2018, 33, 101–115. [Google Scholar] [CrossRef]
  18. Verhamme, L.M.; Meijer, G.J.; Boumans, T.; Schutyser, F.; Bergé, S.J.; Maal, T.J.J. A clinically relevant validation method for implant placement after virtual planning. Clin. Oral Implants Res. 2013, 24, 1265–1272. [Google Scholar] [CrossRef]
  19. Skjerven, H.; Olsen-Bergem, H.; Rønold, H.J.; Riis, U.H.; Ellingsen, J.E. Comparison of postoperative intraoral scan versus cone beam computerised tomography to measure accuracy of guided implant placement—A prospective clinical study. Clin. Oral Implants Res. 2019, 30, 531–541. [Google Scholar] [CrossRef]
  20. Park, J.H.; Hwang, C.J.; Choi, Y.J.; Houschyar, K.S.; Yu, J.-H.; Bae, S.-Y.; Cha, J.-Y. Registration of digital dental models and cone-beam computed tomography images using 3-dimensional planning software: Comparison of the accuracy according to scanning methods and software. Am. J. Orthod. Dentofac. Orthop. 2020, 157, 843–851. [Google Scholar] [CrossRef]
  21. Kessler, A.; Le, V.; Folwaczny, M. Influence of the tooth position, guided sleeve height, supporting length, manufacturing methods, and resin E-modulus on the in vitro accuracy of surgical implant guides in a free-end situation. Clin. Oral Implants Res. 2021, 32, 1097–1104. [Google Scholar] [CrossRef]
  22. Schnutenhaus, S.; Edelmann, C.; Rudolph, H. Does the macro design of an implant affect the accuracy of template-guided implantation? A prospective clinical study. Int. J. Implant Dent. 2021, 7, 42. [Google Scholar] [CrossRef]
  23. Jemt, T.; Olsson, M.; Renouard, F.; Stenport, V.; Friberg, B. Early Implant Failures Related to Individual Surgeons: An Analysis Covering 11,074 Operations Performed during 28 Years. Clin. Implant Dent. Relat. Res. 2016, 18, 861–872. [Google Scholar] [CrossRef]
  24. Varga, E., Jr.; Antal, M.; Major, L.; Kiscsatári, R.; Braunitzer, G.; Piffkó, J. Guidance means accuracy: A randomized clinical trial on freehand versus guided dental implantation. Clin. Oral Implants Res. 2020, 31, 417–430. [Google Scholar] [CrossRef]
  25. Jorba-García, A.; González-Barnadas, A.; Camps-Font, O.; Figueiredo, R.; Valmaseda-Castellón, E. Accuracy assessment of dynamic computer-aided implant placement: A systematic review and meta-analysis. Clin. Oral Investig. 2021, 25, 2479–2494. [Google Scholar] [CrossRef]
  26. Fernández-Gil, Á.; Gil, H.S.; Velasco, M.G.; Vázquez, J. An In Vitro Model to Evaluate the Accuracy of Guided Implant Placement Based on the Surgeon’s Experience. Int. J. Oral Maxillofac. Implants 2017, 32, 515–524. [Google Scholar] [CrossRef]
  27. Pettersson, A.; Kero, T.; Söderberg, R.; Näsström, K. Accuracy of virtually planned and CAD/CAM-guided implant surgery on plastic models. J. Prosthet. Dent. 2014, 112, 1472–1478. [Google Scholar] [CrossRef]
  28. Cushen, S.E.; Turkyilmaz, I. Impact of operator experience on the accuracy of implant placement with stereolithographic surgical templates: An in vitro study. J. Prosthet. Dent. 2013, 109, 248–254. [Google Scholar] [CrossRef]
  29. Cunha, R.M.; Souza, F.A.; Hadad, H.; Poli, P.P.; Maiorana, C.; Carvalho, P.S.P. Accuracy evaluation of computer-guided implant surgery associated with prototyped surgical guides. J. Prosthet. Dent. 2021, 125, 266–272. [Google Scholar] [CrossRef]
  30. Block, M.S.; Emery, R.W.; Lank, K.; Ryan, J. Implant Placement Accuracy Using Dynamic Navigation. Int. J. Oral Maxillofac. Implants 2017, 32, 92–99. [Google Scholar] [CrossRef]
  31. Franchina, A.; Stefanelli, L.V.; Maltese, F.; Mandelaris, G.A.; Vantaggiato, A.; Pagliarulo, M.; Pranno, N.; Brauner, E.; Angelis, F.D.; Carlo, S.D. Validation of an Intra-Oral Scan Method Versus Cone Beam Computed Tomography Superimposition to Assess the Accuracy between Planned and Achieved Dental Implants: A Randomized In Vitro Study. Int. J. Environ. Res. Public Health 2020, 17, 9358. [Google Scholar] [CrossRef]
  32. Choi, D.; Yoon, Y.S.; Hwang, D. Evaluation of sleeved implant fixation using a rat model. Med. Eng. Phys. 2011, 33, 310–314. [Google Scholar] [CrossRef] [PubMed]
  33. Kiatkroekkrai, P.; Takolpuckdee, C.; Subbalekha, K.; Mattheos, N.; Pimkhaokham, A. Accuracy of implant position when placed using static computer-assisted implant surgical guides manufactured with two different optical scanning techniques: A randomized clinical trial. Int. J. Oral Maxillofac. Surg. 2020, 49, 377–383. [Google Scholar] [CrossRef] [PubMed]
  34. Tallarico, M.; Xhanari, E.; Kim, Y.J.; Cocchi, F.; Martinolli, M.; Alushi, A.; Baldoni, E.; Meloni, S.M. Accuracy of computer-assisted template-based implant placement using conventional impression and scan model or intraoral digital impression: A randomised controlled trial with 1 year of follow-up. Int. J. Oral Implantol. 2019, 12, 197–206. [Google Scholar]
  35. Putra, R.H.; Yoda, N.; Astuti, E.R.; Sasaki, K. The accuracy of implant placement with computer-guided surgery in partially edentulous patients and possible influencing factors: A systematic review and meta-analysis. J. Prosthodont. Res. 2022, 66, 29–39. [Google Scholar] [CrossRef]
  36. Pyo, S.W.; Lim, Y.J.; Koo, K.T.; Lee, J. Methods Used to Assess the 3D Accuracy of Dental Implant Positions in Computer-Guided Implant Placement: A Review. J. Clin. Med. 2019, 8, 54. [Google Scholar] [CrossRef]
  37. Stefanelli, L.V.; Franchina, A.; Pranno, A.; Pellegrino, G.; Ferri, A.; Pranno, N.; Di Carlo, S.; De Angelis, F. Use of Intraoral Scanners for Full Dental Arches: Could Different Strategies or Overlapping Software Affect Accuracy? Int. J. Environ. Res. Public Health 2021, 18, 9946. [Google Scholar] [CrossRef]
Figure 1. Surgical guide fixed to the acrylic resin model.
Figure 2. The scheme of the flow-chart.
Figure 3. Pre- (A) and post- (B) operative virtual models and their alignment superimposing the external surfaces (C). Planned (D) and actual (E) implant positions after removing the models’ superimposing surface.
Figure 4. Box-plots showing the comparison of the two operators relative to the mean values from the three methods evaluation for E point (A), A point (B), and angular deviation (C).
Table 1. Pairwise ANOVA analysis (Tukey HSD test) comparing the two operators’ mean values (averaged over the three methods) for each dependent variable. Linear values in mm; Ang in degrees.
Parameter of evaluation | Operator 1 mean | Operator 2 mean | Difference between operators | HSD test | p value
Ed point | 0.5728 | 0.8505 | 0.2777 | 16.2522 * | <0.0001
Edx | 0.1489 | 0.1983 | 0.0494 | 6.1613 * | 0.0031
Edy | 0.2655 | 0.4597 | 0.1942 | 11.7764 * | <0.0001
Edz | 0.3980 | 0.5842 | 0.1862 | 7.7265 * | 0.0002
Ad point | 0.6084 | 0.9533 | 0.3449 | 20.4794 * | <0.0001
Adx | 0.1671 | 0.2436 | 0.0765 | 8.8757 * | 0.0002
Ady | 0.3035 | 0.5516 | 0.2481 | 14.1916 * | <0.0001
Adz | 0.4028 | 0.5721 | 0.1693 | 6.9093 * | 0.001
Ang | 1.0424 | 1.6463 | 0.6039 | 21.6030 * | <0.0001
* Statistically significant difference between the operators’ mean results.
Table 2. Means, SDs (standard deviations), and extreme values (mm; Ang in degrees) for each operator from the three methods of evaluation.
Method | Operator | Statistic | Edx | Edy | Edz | Ed | Adx | Ady | Adz | Ad | Ang
n.1 | 1 | mean | 0.14 | 0.16 | 0.74 | 0.80 | 0.15 | 0.25 | 0.76 | 0.85 | 1.05
n.1 | 1 | SD | 0.10 | 0.13 | 0.45 | 0.43 | 0.10 | 0.21 | 0.44 | 0.39 | 0.53
n.1 | 1 | minimum | 0.00 | 0.01 | 0.08 | 0.22 | 0.17 | 0.00 | 0.03 | 0.36 | 0.24
n.1 | 1 | maximum | 0.34 | 0.61 | 1.92 | 1.92 | 0.41 | 1.15 | 1.92 | 1.93 | 2.13
n.1 | 2 | mean | 0.28 | 0.31 | 0.75 | 0.91 | 0.28 | 0.43 | 0.77 | 1.02 | 1.78
n.1 | 2 | SD | 0.19 | 0.22 | 0.49 | 0.46 | 0.22 | 0.30 | 0.50 | 0.45 | 1.12
n.1 | 2 | minimum | 0.41 | 0.01 | 0.00 | 0.19 | 0.00 | 0.01 | 0.01 | 0.24 | 0.13
n.1 | 2 | maximum | 0.83 | 0.76 | 2.02 | 2.15 | 0.88 | 1.11 | 1.96 | 2.27 | 4.13
n.2 | 1 | mean | 0.15 | 0.30 | 0.22 | 0.45 | 0.18 | 0.32 | 0.22 | 0.48 | 1.03
n.2 | 1 | SD | 0.12 | 0.20 | 0.19 | 0.23 | 0.13 | 0.25 | 0.18 | 0.24 | 0.47
n.2 | 1 | minimum | 0.00 | 0.01 | 0.01 | 0.16 | 0.02 | 0.01 | 0.01 | 0.21 | 0.11
n.2 | 1 | maximum | 0.50 | 0.71 | 0.92 | 1.18 | 0.63 | 1.07 | 0.84 | 1.36 | 2.19
n.2 | 2 | mean | 0.16 | 0.58 | 0.52 | 0.87 | 0.23 | 0.57 | 0.53 | 0.95 | 1.58
n.2 | 2 | SD | 0.13 | 0.36 | 0.41 | 0.45 | 0.19 | 0.43 | 0.45 | 0.42 | 0.97
n.2 | 2 | minimum | 0.01 | 0.07 | 0.07 | 0.13 | 0.01 | 0.01 | 0.07 | 0.24 | 0.04
n.2 | 2 | maximum | 0.49 | 1.23 | 1.65 | 1.77 | 0.64 | 1.50 | 1.78 | 1.79 | 3.62
n.3 | 1 | mean | 0.15 | 0.33 | 0.23 | 0.47 | 0.17 | 0.34 | 0.23 | 0.49 | 1.05
n.3 | 1 | SD | 0.10 | 0.27 | 0.15 | 0.27 | 0.14 | 0.25 | 0.15 | 0.25 | 0.57
n.3 | 1 | minimum | 0.01 | 0.01 | 0.00 | 0.09 | 0.00 | 0.01 | 0.01 | 0.06 | 0.21
n.3 | 1 | maximum | 0.40 | 0.86 | 0.65 | 1.04 | 0.55 | 0.89 | 0.63 | 1.06 | 2.66
n.3 | 2 | mean | 0.15 | 0.49 | 0.48 | 0.77 | 0.22 | 0.65 | 0.41 | 0.89 | 1.58
n.3 | 2 | SD | 0.11 | 0.32 | 0.33 | 0.34 | 0.17 | 0.42 | 0.27 | 0.36 | 0.83
n.3 | 2 | minimum | 0.01 | 0.03 | 0.02 | 0.07 | 0.01 | 0.04 | 0.02 | 0.30 | 0.08
n.3 | 2 | maximum | 0.43 | 1.24 | 1.05 | 1.44 | 0.63 | 1.65 | 1.01 | 1.77 | 3.19
Table 3. Pairwise ANOVA analysis (Tukey HSD test) comparing, for each dependent variable, the mean results of the two operators combined between each pair of evaluation methods. Linear values in mm; Ang in degrees.
Parameter of evaluation | Methods compared | Mean (first method) | Mean (second method) | Difference | HSD test | p value
Ed | n.1 vs. n.2 | 0.8564 | 0.6577 | 0.1987 | 9.4939 * | <0.05
Ed | n.1 vs. n.3 | 0.8564 | 0.6209 | 0.2355 | 11.2534 * | <0.05
Ed | n.2 vs. n.3 | 0.6577 | 0.6209 | 0.0368 | 1.7594 | >0.05
Edx | n.1 vs. n.2 | 0.2147 | 0.1560 | 0.0586 | 5.9700 * | <0.05
Edx | n.1 vs. n.3 | 0.2147 | 0.1501 | 0.0646 | 6.5715 * | <0.05
Edx | n.2 vs. n.3 | 0.1560 | 0.1501 | 0.0059 | 0.6015 | >0.05
Edy | n.1 vs. n.2 | 0.2345 | 0.4414 | 0.2069 | 10.2443 * | <0.05
Edy | n.1 vs. n.3 | 0.2345 | 0.4120 | 0.1776 | 8.7926 * | <0.05
Edy | n.2 vs. n.3 | 0.4414 | 0.4120 | 0.0293 | 1.4518 | >0.05
Edz | n.1 vs. n.2 | 0.7485 | 0.3717 | 0.3768 | 12.7676 * | <0.05
Edz | n.1 vs. n.3 | 0.7485 | 0.3532 | 0.3953 | 13.3953 * | <0.05
Edz | n.2 vs. n.3 | 0.3717 | 0.3532 | 0.0185 | 0.6277 | >0.05
Ad | n.1 vs. n.2 | 0.9335 | 0.7185 | 0.2150 | 10.4235 * | <0.05
Ad | n.1 vs. n.3 | 0.9335 | 0.6905 | 0.2431 | 11.7842 * | <0.05
Ad | n.2 vs. n.3 | 0.7185 | 0.6905 | 0.0281 | 1.3607 | >0.05
Adx | n.1 vs. n.2 | 0.2166 | 0.2019 | 0.0147 | 1.3931 | >0.05
Adx | n.1 vs. n.3 | 0.2166 | 0.1974 | 0.0193 | 1.8236 | >0.05
Adx | n.2 vs. n.3 | 0.2019 | 0.1974 | 0.0045 | 0.4305 | >0.05
Ady | n.1 vs. n.2 | 0.3387 | 0.4486 | 0.1099 | 5.1349 * | <0.05
Ady | n.1 vs. n.3 | 0.3387 | 0.4952 | 0.1565 | 7.3113 * | <0.05
Ady | n.2 vs. n.3 | 0.4486 | 0.4952 | 0.0466 | 2.1763 | >0.05
Adz | n.1 vs. n.2 | 0.7626 | 0.3759 | 0.3867 | 12.8877 * | <0.05
Adz | n.1 vs. n.3 | 0.7626 | 0.3239 | 0.4387 | 14.6223 * | <0.05
Adz | n.2 vs. n.3 | 0.3759 | 0.3239 | 0.0520 | 1.7346 | >0.05
Ang | n.1 vs. n.2 | 1.4127 | 1.3059 | 0.1068 | 3.1201 | >0.05
Ang | n.1 vs. n.3 | 1.4127 | 1.3144 | 0.0983 | 2.8712 | >0.05
Ang | n.2 vs. n.3 | 1.3059 | 1.3144 | 0.0085 | 0.2489 | >0.05
* Statistically significant difference between the methods.
Table 4. Double-tailed t-test verifying a significant difference between the two operators in terms of implant placement velocity (mean operative time per model, in minutes).
Operator | Obs | Mean | SD | 95% Conf. Interval | t | p value
1 | 5 | 14.24 | 3.91 | 9.38 to 19.10 | −3.9 | 0.0046
2 | 5 | 25.48 | 5.13 | 19.10 to 31.86 | |
combined | 10 | 19.86 | 7.32 | 14.62 to 25.10 | |
difference | | −11.24 | | −17.89 to −4.59 | |
Table 5. Linear regression model for the correlation between the sequence of model preparation and the operator error in implant placement.
Operator | Coef. | Standard error | t | 95% Conf. Interval | p value
1 | −1.271402 | 0.29 | −4.42 | −2.19 to −0.36 | 0.021
2 | −1.30442 | 0.47 | −2.80 | −2.79 to 0.18 | 0.068
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
