Article

Toward Supporting Maxillo-Facial Surgical Guides Positioning with Mixed Reality—A Preliminary Study

1 Politecnico di Torino, Corso Duca Abruzzi 24, 10129 Torino, Italy
2 Fondazione Piemonte per l'Oncologia, Strada Prov. 142, 10060 Candiolo, Italy
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(16), 8154; https://doi.org/10.3390/app12168154
Submission received: 20 June 2022 / Revised: 31 July 2022 / Accepted: 8 August 2022 / Published: 15 August 2022
(This article belongs to the Special Issue Advances in Augmented Medicine)

Abstract

Following an oncological resection or trauma, it may be necessary to reconstruct the normal anatomical and functional structures of the mandible to ensure the effective and complete social reintegration of patients. In most surgical procedures, reconstruction of the mandibular shape and its occlusal relationship is performed through the free fibula flap, using a surgical guide that allows the surgeon to easily identify the location and orientation of the cutting plane. In the present work, we present a Mixed Reality (MR)-based solution to support professionals in surgical guide positioning. The proposed solution, through the use of a Head-Mounted Display (HMD) such as the HoloLens 2, visualizes a 3D virtual model of the surgical guide positioned over the patient's real fibula in the correct position, as identified by the medical team before the procedure. The professional wearing the HMD is then assisted by our solution in positioning the real guide over the virtual one; the solution tracks the real guide during the whole process and computes its distance from the final position. The assessment results highlight that Mixed Reality is an eligible technology to support surgeons, combining the usability of the device with an improvement in accuracy in fibula flap removal surgery.

1. Introduction

Surgery has changed significantly over the years and continues to evolve with the introduction of new technologies and techniques. There is an increasing interest in the use of virtual, augmented and mixed reality approaches to support different medical areas, from medical teaching [1] to intra-operative guidance [2]. Recently, Mixed Reality (MR) has acquired a key role in pre-, intra-, and post-operative medical procedures [3]. MR merges both Virtual Reality (VR) and Augmented Reality (AR) to produce new environments where real and virtual objects co-exist and interact in real time [4]. This provides significant advantages in medical applications: MR offers new ways to interact with patients [5], to accelerate diagnosis [6], to reduce intra-operative time [7], to enable treatment personalization [8] and, more generally, to improve procedure outcomes [9]. MR systems are currently used in different surgical disciplines [10]. Incekara et al. [11] demonstrated the clinical feasibility of a wearable Mixed Reality device for preoperative neurosurgical planning using 3D hologram reconstructions of brain tumors, achieving more precise tumor localization than the standard neuronavigation system. Verhey et al. [5] identified MR as the most functional technology for the orthopedic surgeon, as it allows more freedom of control over CT reconstructions for preoperative planning and intra-operative visualization. In a study by Gregory et al. [12], surgeons performed a standard reverse shoulder arthroplasty aided by an MR system, remotely connected with other surgeons in the United States and the United Kingdom. Post-operative CT evaluation demonstrated proper positioning of the prosthesis, and the patient presented no complications at a clinical visit 45 days after the procedure. MR systems are also used in oral and maxillofacial surgery, a clinical specialty that treats diseases, injuries and defects in the area of the face, mouth and jaws [13]. Since 1992, free fibula flaps have been used to reconstruct mandibular and maxillary defects greater than 4–6 cm [14]. However, achieving a natural and functional mandibular reconstruction requires a precise fit between the native mandible and the modeled fibular segments, because this influences facial support, occlusal function, and masticatory movements [15]. Existing methods use customized surgical guides to harvest fibula flaps, ensuring a secure and precise cut. The use of guides provides good functional outcomes and correct occlusion, reducing operation time [16]. Since a guide positioning error implies an error in the shape of the fibula segments, inaccurate guide positioning can compromise the overall outcome of the mandibular reconstruction [17].
Wenjuan Zhou et al. [18] reviewed the current literature regarding the clinical accuracy of guided surgery and analyzed the clinical factors involved. Fourteen case studies published between 1951 and 2016 were analyzed, showing that the position of the guide, guide fixation, type of guide, and flap approach could influence the accuracy of computer-aided implant surgery. Moreover, a totally guided system using fixation screws with a flapless protocol demonstrated the greatest accuracy. Jacques Blanc et al. [19] analyzed the advantages of virtual surgical planning using customized devices. Fourteen patients undergoing maxillary reconstruction were enrolled in their study and, in all cases, the flaps fitted precisely into the defects and no bone grafts were needed. They demonstrated that preoperative virtual planning enables accurate and reliable customized cutting guide positioning with better operative results. Jingya Jane Pu et al. [20] developed a novel fibula malleolus cap to overcome sliding and rotational errors during fibula flap harvesting for oncologic jaw reconstruction, with increased accuracy in simultaneous dental implant placement. This method significantly reduced the deviations in the locations and angles of distal fibula osteotomies and increased the accuracy of implant platform locations. Sophisticated robotic devices, such as the da Vinci Surgical System [21], can be used to increase the accuracy of surgical guide positioning [22]. However, they are expensive, bulky and require a special operating room (OR) configuration [23].
The main contribution of this paper is a novel Mixed Reality-based methodology to support the mandibular reconstruction procedure. This methodology was used to develop an application that provides the correct reference position of the surgical guide through holograms projected into the visual field of a user wearing a headset device, in order to assess whether the guide positioning process could be improved.
The development and testing of the application have been conducted jointly with the Maxillo-Facial Surgery Department located in Molinette hospital (Azienda Ospedaliero-Universitaria Città della Salute e della Scienza di Torino, Italy).
This paper is structured as follows. In Section 2, our Mixed Reality-based methodology is presented. In Section 3, we present the application that implements our methodology. This section addresses the reconstruction and tracking of 3D fibula and surgical guide models required by the application. A detailed description of the steps used to realize the application is also provided. Results obtained from the testing phase are presented in Section 4, while in Section 5, conclusions and future work are drawn.

2. Methods

Mandibular bony defects can result from oncological disease, trauma, facial malformations, or infection. In most surgical procedures, an anatomic and functional reconstruction is performed using a free fibula flap. The surgeon removes part of the fibular bone, a small thin bone that can be entirely removed without affecting the patient's ability to bear weight. The donor leg is usually selected from the same side as the mandibular defect. The fibular flap is removed along with two blood vessels: the artery, which supplies blood to the flap, and the vein, which drains blood from it. The flap is used to recreate the missing alveolar process. Moreover, the fibula flap contains a skin paddle and muscle: the skin paddle is used to seal the mucosal defect and the muscle is used to fill the mandibular cavity [24]. CAD simulations are performed prior to surgery to analyze the normal distance between the maxilla and the mandible and to locate the fibula bone in the appropriate 3D position for future dental restoration. All bony segments available for contact with the fibula flap are exposed, and the upper neck vessels are isolated. A surgical guide is fabricated to model the flap, considering the estimated number, length, and orientation of the fibula segments (Figure 1a,b).
Once the bone is raised, it is transferred to the mandible and secured in position with small titanium plates and screws as shown in Figure 2. The blood vessels supplying and draining the flap are then joined to blood vessels in the neck.
In Figure 3, the steps composing our methodology are presented.
Computed Tomographic (CT) scans of the patient’s mandible and of the lower leg as the donor site are performed to obtain DICOM images.
Using the DICOM image data, a CAD simulation of the mandibular osteotomies is performed to design and fabricate custom surgical guides, which surgeons can use to harvest the fibula flaps more accurately. Then, a tracking system is necessary to recognize the fibula and the surgical guide in order to align the virtual and real models.
MR tracking techniques are grouped into three classes: sensor-based, vision-based, and hybrid [25]. Sensor-based tracking relies on active sensors placed in the environment. Vision-based tracking techniques use image or 3D model information to track the position and orientation of objects, enabling accurate real-time overlays for a handheld device. Hybrid tracking combines sensor-based and vision-based tracking, using multiple measurements to produce robust results [26]. In this study, the vision-based method was used. This method minimizes the amount of data that needs to be extracted from the video feed and provides robust, real-time and accurate alignment of objects, allowing the overlay to remain attached to the patient even when the patient and the camera move.
In this approach, CAD models of the fibula and the surgical guide are used to improve the robustness and efficiency of tracking. Using the tracking system, it is possible to recognize the fibula and superimpose on it the 3D virtual model, on which the hologram of the guide is located in the correct position. Observing the hologram, the user overlaps the real guide, hand-held and tracked by the tracking system, on the virtual model.
Moreover, an MR display system is necessary to visualize and analyze the results of the guide alignment. The recent deployment of MR systems in the medical field has necessitated the development of displays that can provide a hands-free view of information in the user's field of view, allowing free movement of the head and avoiding constraints that would limit the mobility of the entire body. Head-Mounted Displays (HMDs) currently on the market are portable, do not compromise sterility, and can support safe practice [27]. For these reasons, HMDs can be used for image guidance in endoscopy and robotic-assisted endoscopy, data display, training and many surgical specialties [28], such as urology [29], neurosurgery [30] and craniomaxillofacial surgery [31].

3. Case Study

The method presented in Section 2 has been used to develop a Mixed Reality-based application to support surgeons during mandibular reconstruction with fibula bone graft by increasing the accuracy of surgical guide positioning. In this Section, we describe the developed application in detail.
To obtain the 3D models of the fibula and the surgical guide, a CT scan of the lower leg is performed first in order to acquire DICOM images.
Then, using image segmentation software, the fibula is identified and exported to Blender, an open-source 3D computer graphics application, to realize the 3D virtual model used to identify the correct position of the surgical guide. The long axis of the fibula should be aligned with the y-axis, with the backside facing the positive direction of the y-axis. In addition to the fibula, a 3D model of a surgical guide for the fabrication of fibula flaps is produced; the guide is likewise positioned with its long axis aligned with the y-axis. These virtual models are then customized for our purpose by modifying the objects' materials; in particular, different colors are assigned to different parts of the fibula and the surgical guide.
These colors are helpful to distinguish the fibula and the guide from other objects and to handle illumination changes. A 3D printer is used to turn the 3D CAD models into real synthetic objects. In this way, the presence of the patient is not necessary for the development of the application.
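As a side note, the axis convention described above can also be enforced programmatically once the models are imported into Unity. The following is a minimal C# sketch of such an alignment step; it is our own illustration (the authors performed the alignment in Blender), and the ModelAlignment class name is hypothetical.

```csharp
using UnityEngine;

public static class ModelAlignment
{
    // Rotates a model so that its longest bounding-box axis points along world +Y,
    // mirroring the axis convention described above for the Blender step.
    public static void AlignLongAxisToY(Transform model)
    {
        // Local-space bounding box of the mesh.
        Vector3 size = model.GetComponent<MeshFilter>().sharedMesh.bounds.size;

        // World direction of the local axis with the largest extent.
        Vector3 longAxisWorld =
            (size.x >= size.y && size.x >= size.z) ? model.right :
            (size.z >= size.x && size.z >= size.y) ? model.forward :
                                                     model.up;

        // Rotate that axis onto world +Y.
        model.rotation = Quaternion.FromToRotation(longAxisWorld, Vector3.up) * model.rotation;
    }
}
```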
As a tracking system, the MR application uses Vuforia Engine, which is already available on the market. It enables reliable tracking and performance on a variety of hardware, including mobile devices and HMDs. Vuforia makes it possible to position and orient virtual objects, such as 3D models and other media, with respect to the real world when they are framed through the camera of a mobile device [32].
It uses Model Targets to recognize and track particular objects in the real world based on the shape of the object. Starting from the 3D models, the Model Targets of the fibula and the guide are configured using the Model Target Generator (MTG) (Figure 4).
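For reference, detection and loss-of-tracking events for a Model Target can be consumed through Vuforia's observer API. The sketch below is a minimal illustration assuming Vuforia Engine 10 (ObserverBehaviour, TargetStatus and the Status enum are part of that API version); the handler logic and class name are our own, not the authors' code.

```csharp
using UnityEngine;
using Vuforia;

public class ModelTargetHandler : MonoBehaviour
{
    [SerializeField] private ObserverBehaviour fibulaTarget; // Model Target configured in the MTG

    void OnEnable()  => fibulaTarget.OnTargetStatusChanged += OnStatusChanged;
    void OnDisable() => fibulaTarget.OnTargetStatusChanged -= OnStatusChanged;

    private void OnStatusChanged(ObserverBehaviour behaviour, TargetStatus status)
    {
        bool tracked = status.Status == Status.TRACKED ||
                       status.Status == Status.EXTENDED_TRACKED;

        // Show the holograms parented under the target only while it is tracked.
        foreach (Transform child in behaviour.transform)
            child.gameObject.SetActive(tracked);
    }
}
```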
From a software perspective, a Unity3D project was created. It leverages the Mixed Reality Toolkit for interfacing with the HMDs and accessing their main capabilities, such as spatial anchoring and advanced interaction methods between users and the virtual content, i.e., head movements, gestures, and voice commands.
For our purpose, the Microsoft HoloLens 2 headset was used as the MR display system. It can map the user's real environment to place virtual 3D elements in a position relative to real objects. HoloLens 2 contains sensors, processors and hardware, including four cameras used to scan the environment, collect data and measure distance and spatial depth [33]. The displayed images are created by two high-definition light engines whose output is reflected onto each of the user's retinas. Gaze commands and head-tracking allow the user to bring the application's attention to what they are perceiving [34]. Being an all-in-one mobile device, it is easily portable and hence usable both in the operating room and in wards, depending on the patient's needs.
A feedback window has been implemented to visualize and register data, giving feedback to the user (upper part of Figure 5). The window is a user experience (UX) control that contains a button and an input field to save the results concerning the overlap between models, and a UI text element that informs the user about the tracking status of the fibula and the surgical guide and provides quantitative measures of the overlap in real time.
It follows the user's head so as to remain in the user's visual field and be easily accessible at any time.
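Such head-following behavior can be obtained with the Mixed Reality Toolkit's solver system. The following minimal sketch assumes MRTK 2.x, whose SolverHandler and RadialView components implement exactly this kind of tag-along window; the parameter values are illustrative.

```csharp
using Microsoft.MixedReality.Toolkit.Utilities;
using Microsoft.MixedReality.Toolkit.Utilities.Solvers;
using UnityEngine;

public static class FeedbackWindowSetup
{
    // Attaches MRTK solvers so the feedback window keeps following the user's head.
    public static void MakeHeadFollowing(GameObject window)
    {
        var handler = window.AddComponent<SolverHandler>();
        handler.TrackedTargetType = TrackedObjectType.Head;

        var view = window.AddComponent<RadialView>();
        view.MinDistance  = 0.4f;  // keep the window between 0.4 m ...
        view.MaxDistance  = 0.8f;  // ... and 0.8 m from the user's head
        view.MoveLerpTime = 0.2f;  // smooth, slightly lagged follow
    }
}
```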

The Application

When the application launches, it searches for the fibula object in the visual field of the user. Once the Model Target of the fibula is detected, Vuforia starts tracking it and continues as long as the bone is at least partially visible to the camera. When the fibula is detected, the application superimposes the virtual model containing the cutting guide on the physical 3D-printed model (Figure 5).
After the fibula is recognized by the application, the detection of the physical surgical guide starts. This guide, held by the professional using our application, is tracked by Vuforia during the procedure so that it can be overlapped with its virtual counterpart, which is visible as a hologram in the correct position (Figure 5).
To assess the alignment of the physical surgical guide over the virtual one, we use the Euclidean distance, as in Equation (1). The subscript "r" indicates the coordinates of the real object and the subscript "v" the coordinates of the virtual model.
D = \sqrt{(x_v - x_r)^2 + (y_v - y_r)^2 + (z_v - z_r)^2}    (1)
Rotation between the two models is computed using quaternions, as in Equation (2). Each rotation of a rigid body is equivalent to a single rotation of a given angle \phi around a fixed axis (called the Euler axis \mathbf{u}).

q = \exp\left( \frac{\phi}{2} (u_x \mathbf{i} + u_y \mathbf{j} + u_z \mathbf{k}) \right)    (2)
So, for each guide movement, the system calculates the distance and the rotation between the two models.
Using the above metrics, it is possible to follow the professional during the alignment procedure, giving them feedback about how close the current positioning is to the expected one.
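In Unity terms, both metrics map directly onto built-in types. The following minimal sketch is our own illustration of Equations (1) and (2), not the authors' code; the field names are hypothetical, and Quaternion.Angle returns the smallest angle, in degrees, between two orientations.

```csharp
using UnityEngine;

public class GuideAlignmentMetrics : MonoBehaviour
{
    [SerializeField] private Transform realGuide;    // physical guide, pose updated by the tracker
    [SerializeField] private Transform virtualGuide; // hologram in the planned pose

    public float DistanceMm  { get; private set; }
    public float RotationDeg { get; private set; }

    void Update()
    {
        // Equation (1): Euclidean distance between the two positions
        // (assuming 1 Unity unit = 1 m, hence the conversion to millimetres).
        DistanceMm = Vector3.Distance(realGuide.position, virtualGuide.position) * 1000f;

        // Equation (2): the angle of the single rotation separating the two
        // orientations, computed from their quaternions.
        RotationDeg = Quaternion.Angle(realGuide.rotation, virtualGuide.rotation);
    }
}
```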

4. Results and Discussion

This section reports the results of the study conducted. Fifteen healthy subjects were recruited, five males and ten females, with a mean age of 29.3 ± 8.4 years (mean ± standard deviation). The subjects had different backgrounds: some were computer engineers and some were biomedical engineers. However, all subjects had some experience with the HoloLens and basic knowledge of Mixed Reality. None of them reported neurological, psychiatric, or other brain-related diseases. All volunteers were informed about the experimental procedure and written consent was obtained from all participants.
Firstly, the accuracy of the positioning of the guide on the fibula was studied. In order to evaluate the progress of surgical guide placement over time, the distance and rotation data from all subjects were averaged (Figure 6). To highlight the trend of the data, in both cases a regression model with a polynomial curve of order 2 was used. The R-squared (R2) value was calculated to evaluate how well the data approximated the curve. R2 can assume any value between 0 and 1: 0 represents a model that does not account for any of the variation in the variable, while 1 represents a model that captures all of it. In this case, R2 has a value of 0.98 for the distance and 0.99 for the rotation, indicating a good fit of the regression model. For both distance and rotation, the graphs show a trend that decreases over time. Moreover, in order to obtain the error, the difference between the last value obtained in the averaged session and the reference value (zero) was calculated. We obtained a distance error of 6 mm and a rotation error of 5.25 degrees. Furthermore, since the curve is almost regular, it can be concluded that the application is quite stable.
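Purely as an illustration of this analysis, the sketch below fits an order-2 polynomial by least squares and computes the R² value; it is our own minimal implementation, not the tool used in the study, and assumes a well-conditioned system.

```csharp
using System.Linq;

public static class TrendAnalysis
{
    // Fits y = c0 + c1*t + c2*t^2 by least squares (normal equations)
    // and returns the coefficients together with the R-squared value.
    public static (double[] Coeffs, double R2) FitQuadratic(double[] t, double[] y)
    {
        int n = t.Length;
        var a = new double[3, 3]; // normal-equation matrix
        var b = new double[3];    // right-hand side
        for (int i = 0; i < n; i++)
        {
            double[] basis = { 1.0, t[i], t[i] * t[i] };
            for (int r = 0; r < 3; r++)
            {
                b[r] += basis[r] * y[i];
                for (int c = 0; c < 3; c++) a[r, c] += basis[r] * basis[c];
            }
        }
        double[] coeffs = Solve3x3(a, b);

        // R^2 = 1 - SS_res / SS_tot
        double mean = y.Average();
        double ssRes = 0, ssTot = 0;
        for (int i = 0; i < n; i++)
        {
            double fit = coeffs[0] + coeffs[1] * t[i] + coeffs[2] * t[i] * t[i];
            ssRes += (y[i] - fit) * (y[i] - fit);
            ssTot += (y[i] - mean) * (y[i] - mean);
        }
        return (coeffs, 1.0 - ssRes / ssTot);
    }

    // Gaussian elimination (no pivoting) for the small 3x3 system.
    private static double[] Solve3x3(double[,] a, double[] b)
    {
        for (int p = 0; p < 3; p++)
            for (int r = p + 1; r < 3; r++)
            {
                double f = a[r, p] / a[p, p];
                for (int c = p; c < 3; c++) a[r, c] -= f * a[p, c];
                b[r] -= f * b[p];
            }
        var x = new double[3];
        for (int r = 2; r >= 0; r--)
        {
            double s = b[r];
            for (int c = r + 1; c < 3; c++) s -= a[r, c] * x[c];
            x[r] = s / a[r, r];
        }
        return x;
    }
}
```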
In addition, a questionnaire was administered to assess the usefulness of the MR application. Questionnaires are an inexpensive and very effective tool to obtain a quantitative measure of user experience. The User Experience Questionnaire (UEQ) is a widely used evaluation tool for interactive products. The questionnaire is composed of 26 items. Each element is represented by two terms with opposite meanings. The order of the terms is randomized: half of the items on a scale begin with the positive term and the other half begin with the negative term. Each element can be rated on a 7-point Likert scale. According to [35], the Likert scale is a psychometric scale frequently used in research to represent people’s opinions on a topic or subject. It consists of seven different response options from 1 to 7 used to assess the degree to which a user agrees or disagrees with a statement.
The UEQ is composed of six scales: Attractiveness, Perspicuity, Efficiency, Dependability, Stimulation and Novelty. Attractiveness is the overall impression of the application; Perspicuity is a measure of how easy the application is to learn to use. Efficiency expresses the extent to which the application can be used without effort; Dependability assesses how much the user feels in control of the application. Stimulation and Novelty allow us to understand how stimulating and innovative the product is for the user. Usually, 3–5 min are sufficient for a participant to read the instructions and complete the questionnaire. The UEQ was provided in English, after verifying that all subjects were able to understand the language.
The results obtained were then analyzed. An MS Excel file was created to visualize the results and calculate some basic statistical indicators necessary for interpreting the data. Firstly, the data were converted by assigning the value +3 to the most positive evaluation and −3 to the most negative one. The answers were grouped into the six scales (Attractiveness, Perspicuity, Efficiency, Dependability, Stimulation, and Novelty) and averaged in order to derive the associated values. For each of the six scales, the mean value was calculated (Table 1).
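As a worked example of this conversion and aggregation, consider the sketch below. The 1-to-7 mapping follows the UEQ convention described above, while the item-to-scale assignment would come from the questionnaire's official key (not reproduced here); function and type names are our own.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class UeqScoring
{
    // Maps a 1–7 Likert answer to the −3…+3 range used by the UEQ.
    // Items whose positive term appears first are reverse-coded so that
    // +3 always corresponds to the most positive evaluation.
    public static int ToUeqValue(int answer, bool positiveTermFirst) =>
        positiveTermFirst ? 4 - answer : answer - 4;

    // Mean per scale over the converted item values.
    public static Dictionary<string, double> ScaleMeans(
        IEnumerable<(string Scale, int Answer, bool PositiveFirst)> items) =>
        items.GroupBy(i => i.Scale)
             .ToDictionary(g => g.Key,
                           g => g.Average(i => (double)ToUeqValue(i.Answer, i.PositiveFirst)));

    // Half-width of the 95% confidence interval for a scale mean
    // (normal approximation, as used for the error bars discussed below).
    public static double ConfidenceInterval95(IReadOnlyList<double> values)
    {
        double mean = values.Average();
        double sd = Math.Sqrt(values.Sum(v => (v - mean) * (v - mean)) / (values.Count - 1));
        return 1.96 * sd / Math.Sqrt(values.Count);
    }
}
```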
Values between −0.8 and 0.8 represent a more or less neutral evaluation, values greater than 0.8 represent a positive evaluation and values less than −0.8 represent a negative evaluation.
The range of the scales is between −3 (terribly bad) and +3 (extremely good). The values in Table 1 were automatically represented in a graph, in which the ranges for a positive, neutral, or negative value were highlighted in different colors (Figure 7). The error bars represent the 5% confidence intervals for the scale means; in other words, the probability that the true value of the scale mean lies outside this interval is less than 5%. The size of the error bars depends on the number of respondents and on the level of agreement between them: the more the participants agree in their evaluation of the product, the smaller the error bars typically are [36].
As this was the first time this questionnaire was used to evaluate our MR application, in order to obtain a more objective assessment, the results were compared with a benchmark [37], which allows them to be compared against a large data set. The data set used contains data from 21,175 people from 468 studies concerning different products. Ranges of values were identified to classify each of the six scales into one of the following categories: (1) Excellent, (2) Good, (3) Above average, (4) Below average, and (5) Bad. The ranges identifying the various categories are summarized in Table 2.
By comparing the mean values obtained in this study with the reference ranges, a category was assigned to each UEQ scale (Figure 8).
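Mapping a scale mean onto one of the five categories is a simple threshold comparison. The sketch below is an illustration using the Attractiveness row of Table 2; the threshold values are taken from the table, while the function name is ours.

```csharp
public static class UeqBenchmark
{
    // Thresholds for one UEQ scale, e.g., Attractiveness in Table 2:
    // Excellent ≥ 1.75, Good ≥ 1.52, Above average ≥ 1.17, Below average ≥ 0.70.
    public static string Categorize(double mean, double excellent, double good,
                                    double aboveAverage, double belowAverage) =>
        mean >= excellent    ? "Excellent" :
        mean >= good         ? "Good" :
        mean >= aboveAverage ? "Above average" :
        mean >= belowAverage ? "Below average" : "Bad";
}

// Example: Categorize(2.133, 1.75, 1.52, 1.17, 0.70) returns "Excellent",
// matching the category assigned to Attractiveness in this study.
```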
The average value for each UEQ scale has been superimposed on the graph. As shown in Figure 8, "Excellent" was assigned to four scales (Attractiveness, Dependability, Stimulation and Novelty), indicating that the developed application is an innovative, safe and user-friendly product. Perspicuity, instead, presents a slightly lower value, meaning that the subjects encountered difficulties in learning to use the application. This result can be explained by considering that the subjects had minimal experience with the HoloLens and, therefore, part of the difficulty was related to the use of this device and not to the application itself. Despite this, the application is still "Above Average" in terms of ease of learning. The category "Good" was assigned to Efficiency, meaning that users judged the application as not very fast. This result is probably due to the fact that the application is slow in recognizing the fibula.
In fact, during the tests, it was observed that Vuforia is sensitive to variations in the light and/or color of the object, and that a cooler light seemed to allow Vuforia to track the fibula faster. For this reason, to improve the performance and efficiency of the system in tracking objects, new technologies can be considered, such as VisionLib by Visometry GmbH.

5. Conclusions and Future Works

Sometimes it may be necessary to reconstruct the normal anatomical and functional structures of the mandible as a consequence of trauma or oncological resection. In most surgical procedures, the reconstruction is performed using a free fibula flap. The bone segments must be harvested with extreme accuracy and, for this reason, the procedure is often performed using a cutting guide, which allows the surgeon to easily identify the cutting point and angle. This paper focused on the development of a Mixed Reality application for HoloLens 2 that supports the positioning of the surgical guide on the fibula during surgery. The application was tested to estimate the accuracy of the system, evaluating the positioning error in terms of both distance and rotation. Results show good accuracy and stable performance in positioning the guide, with a distance error of 6 mm and a rotation error of 5.25 degrees. Furthermore, from the study conducted to assess the usability of the system, the application was judged by users to be innovative, helpful, and satisfying, although they encountered some difficulties in learning to use it. From this preliminary study, the application showed good results and potential. The next steps include clinical trials and validation in surgery to assess the application from a medical perspective.
Furthermore, the main problem encountered during this study was related to the recognition of the fibula by Vuforia. This software kit is sensitive to variations in the light and/or color of the object, which does not guarantee optimal performance, especially in terms of time. For this reason, future work will include the use of different technologies for object tracking and recognition to improve the performance and efficiency of the system, also in view of real surgical procedures, where obstacles such as blood or tissue may affect the visibility of the fibula.

Author Contributions

Conceptualization, L.U. and P.P.; Methodology, L.U. and P.P.; Software, C.P.; Supervision, P.P. and E.V.; Validation, L.U.; Writing—original draft, C.P. and P.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gerup, J.; Soerensen, C.B.; Dieckmann, P. Augmented reality and mixed reality for healthcare education beyond surgery: An integrative review. Int. J. Med. Educ. 2020, 11, 1. [Google Scholar] [CrossRef]
  2. Cartucho, J.; Shapira, D.; Ashrafian, H.; Giannarou, S. Multimodal mixed reality visualisation for intraoperative surgical guidance. Int. J. Comput. Assist. Radiol. Surg. 2020, 15, 819–826. [Google Scholar] [CrossRef] [PubMed]
  3. Innocente, C.; Ulrich, L.; Moos, S.; Vezzetti, E. Augmented Reality: Mapping Methods and Tools for Enhancing the Human Role in Healthcare HMI. Appl. Sci. 2022, 12, 4295. [Google Scholar] [CrossRef]
  4. Bartella, A.K.; Kamal, M.; Kuhnt, T.; Hering, K.; Halama, D.; Pausch, N.C.; Lethaus, B. Mixed reality in oral and maxillofacial surgery: A symbiosis of virtual and augmented reality or a pointless technological gadget? Int. J. Comput. Dent. 2021, 24, 65–76. [Google Scholar] [PubMed]
  5. Verhey, J.T.; Haglin, J.M.; Verhey, E.M.; Hartigan, D.E. Virtual, augmented, and mixed reality applications in orthopedic surgery. Int. J. Med. Robot. Comput. Assist. Surg. 2020, 16, e2067. [Google Scholar] [CrossRef] [PubMed]
  6. Ye, W.; Zhang, X.; Li, T.; Luo, C.; Yang, L. Mixed-reality hologram for diagnosis and surgical planning of double outlet of the right ventricle: A pilot study. Clin. Radiol. 2021, 76, 237.e1–237.e7. [Google Scholar] [CrossRef]
  7. Cai, E.Z.; Gao, Y.; Ngiam, K.Y.; Lim, T.C. Mixed Reality Intraoperative Navigation in Craniomaxillofacial Surgery. Plast. Reconstr. Surg. 2021, 148, 686e–688e. [Google Scholar] [CrossRef]
  8. Tian, S.; Yang, W.; Le Grange, J.M.; Wang, P.; Huang, W.; Ye, Z. Smart healthcare: Making medical care more intelligent. Glob. Health J. 2019, 3, 62–65. [Google Scholar] [CrossRef]
  9. Azimi, E.; Winkler, A.; Tucker, E.; Qian, L.; Doswell, J.; Navab, N.; Kazanzides, P. Can mixed-reality improve the training of medical procedures? In Proceedings of the 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 4065–4068. [Google Scholar]
  10. Lungu, A.J.; Swinkels, W.; Claesen, L.; Tu, P.; Egger, J.; Chen, X. A review on the applications of virtual reality, augmented reality and mixed reality in surgical simulation: An extension to different kinds of surgery. Expert Rev. Med. Devices 2021, 18, 47–62. [Google Scholar] [CrossRef] [PubMed]
  11. Incekara, F.; Smits, M.; Dirven, C.; Vincent, A. Clinical feasibility of a wearable mixed-reality device in neurosurgery. World Neurosurg. 2018, 118, e422–e427. [Google Scholar] [CrossRef]
  12. Gregory, T.M.; Gregory, J.; Sledge, J.; Allard, R.; Mir, O. Surgery guided by mixed reality: Presentation of a proof of concept. Acta Orthop. 2018, 89, 480–483. [Google Scholar] [CrossRef]
  13. Andersson, L.; Kahnberg, K.E.; Pogrel, M.A. Oral and Maxillofacial Surgery; John Wiley & Sons: Hoboken, NJ, USA, 2012. [Google Scholar]
  14. De Santis, G.; Pinelli, M.; Starnoni, M. Extended and unusual indications in jaw reconstruction with the fibula flap: An overview based on our 30-year experience. Ann. Med. Surg. 2021, 62, 37–42. [Google Scholar] [CrossRef]
  15. Metzler, P.; Geiger, E.J.; Alcon, A.; Ma, X.; Steinbacher, D.M. Three-dimensional virtual surgery accuracy for free fibula mandibular reconstruction: Planned versus actual results. J. Oral Maxillofac. Surg. 2014, 72, 2601–2612. [Google Scholar] [CrossRef]
  16. Prevost, A.; Delanoë, F.; Cavallier, Z.; Diakité, C.; Muller, S.; Lopez, R.; Briot, J.; Lauwers, F. Universal surgical guide dedicated to mandibular reconstruction by fibula flap: A pilot multicentric feasibility study. Int. J. Oral Maxillofac. Surg. 2019, 48, 146. [Google Scholar] [CrossRef]
  17. Caiti, G.; Dobbe, J.G.; Strijkers, G.J.; Strackee, S.D.; Streekstra, G.J. Positioning error of custom 3D-printed surgical guides for the radius: Influence of fitting location and guide design. Int. J. Comput. Assist. Radiol. Surg. 2018, 13, 507–518. [Google Scholar] [CrossRef]
  18. Zhou, W.; Liu, Z.; Song, L.; Kuo, C.L.; Shafer, D.M. Clinical Factors Affecting the Accuracy of Guided Implant Surgery—A Systematic Review and Meta-analysis. J. Evid. Based Dent. Pract. 2018, 18, 28–40. [Google Scholar] [CrossRef]
  19. Blanc, J.; Fuchsmann, C.; Nistiriuc-Muntean, V.; Jacquenot, P.; Philouze, P.; Ceruse, P. Evaluation of virtual surgical planning systems and customized devices in fibula free flap mandibular reconstruction. Eur. Arch. Oto-Rhino-Laryngol. 2019, 276, 3477–3486. [Google Scholar] [CrossRef] [PubMed]
  20. Pu, J.J.; Choi, W.S.; Yeung, W.K.; Yang, W.F.; Zhu, W.Y.; Su, Y.X. A Comparative Study on a Novel Fibula Malleolus Cap to Increase the Accuracy of Oncologic Jaw Reconstruction. Front. Oncol. 2022, 11, 743389. [Google Scholar] [CrossRef] [PubMed]
  21. Douissard, J.; Hagen, M.E.; Morel, P. The da Vinci surgical system. In Bariatric Robotic Surgery; Springer: Berlin/Heidelberg, Germany, 2019; pp. 13–27. [Google Scholar]
  22. Ramirez, E.A.; Helguero, C.G.; Amaya, J.L.; Mustahsan, V.; Komatsu, D.E.; Khan, F.; Kao, I. Improving positioning of 3D-printed surgical guides using image-processing techniques. In Proceedings of the IEEE Second Ecuador Technical Chapters Meeting (ETCM), Salinas, Ecuador, 16–20 October 2017; pp. 1–6. [Google Scholar]
  23. Ruhle, B.C.; Ferguson Bryan, A.; Grogan, R.H. Robot-assisted endocrine surgery: Indications and drawbacks. J. Laparoendosc. Adv. Surg. Tech. 2019, 29, 129–135. [Google Scholar] [CrossRef]
  24. Peng, X.; Mao, C.; Yu, G.Y.; Guo, C.B.; Huang, M.X.; Zhang, Y. Maxillary reconstruction with the free fibula flap. Plast. Reconstr. Surg. 2005, 115, 1562–1569. [Google Scholar] [CrossRef] [PubMed]
  25. Rokhsaritalemi, S.; Sadeghi-Niaraki, A.; Choi, S.M. A review on mixed reality: Current trends, challenges and prospects. Appl. Sci. 2020, 10, 636. [Google Scholar] [CrossRef]
  26. Rabbi, I.; Ullah, S. A survey on augmented reality challenges and tracking. Acta Graph. Znan. Časopis Tisk. Graf. Komun. 2013, 24, 29–46. [Google Scholar]
  27. Al Janabi, H.F.; Aydin, A.; Palaneer, S.; Macchione, N.; Al-Jabir, A.; Khan, M.S.; Dasgupta, P.; Ahmed, K. Effectiveness of the HoloLens mixed-reality headset in minimally invasive surgery: A simulation-based feasibility study. Surg. Endosc. 2020, 34, 1143–1149. [Google Scholar] [CrossRef] [PubMed]
  28. Rahman, R.; Wood, M.E.; Qian, L.; Price, C.L.; Johnson, A.A.; Osgood, G.M. Head-mounted display use in surgery: A systematic review. Surg. Innov. 2020, 27, 88–100. [Google Scholar] [CrossRef] [PubMed]
  29. Reis, G.; Yilmaz, M.; Rambach, J.; Pagani, A.; Suarez-Ibarrola, R.; Miernik, A.; Lesur, P.; Minaskan, N. Mixed reality applications in urology: Requirements and future potential. Ann. Med. Surg. 2021, 66, 102394. [Google Scholar] [CrossRef]
  30. Ivan, M.E.; Eichberg, D.G.; Di, L.; Shah, A.H.; Luther, E.M.; Lu, V.M.; Komotar, R.J.; Urakov, T.M. Augmented reality head-mounted display–based incision planning in cranial neurosurgery: A prospective pilot study. Neurosurg. Focus 2021, 51, E3. [Google Scholar] [CrossRef]
  31. Wagner, A.; Rasse, M.; Millesi, W.; Ewers, R. Virtual reality for orthognathic surgery: The augmented reality environment concept. J. Oral Maxillofac. Surg. 1997, 55, 456–462. [Google Scholar] [CrossRef]
  32. Liu, X.; Sohn, Y.H.; Park, D.W. Application development with vuforia and unity 3D. Int. J. Appl. Eng. Res. 2018, 13, 43. [Google Scholar]
  33. Vassallo, R.; Rankin, A.; Chen, E.C.; Peters, T.M. Hologram stability evaluation for Microsoft HoloLens. In Proceedings of Medical Imaging 2017: Image Perception, Observer Performance, and Technology Assessment, International Society for Optics and Photonics, Orlando, FL, USA, 11–16 February 2017; p. 1013614. [Google Scholar]
  34. Ungureanu, D.; Bogo, F.; Galliani, S.; Sama, P.; Meekhof, C.; Stühmer, J.; Cashman, T.J.; Tekin, B.; Schönberger, J.L.; Olszta, P.; et al. Hololens 2 research mode as a tool for computer vision research. arXiv 2020, arXiv:2008.11239. [Google Scholar]
  35. Sullivan, G.M.; Artino, A.R., Jr. Analyzing and interpreting data from Likert-type scales. J. Grad. Med. Educ. 2013, 5, 541–542. [Google Scholar] [CrossRef]
  36. Rauschenberger, M.; Schrepp, M.; Pérez Cota, M.; Olschner, S.; Thomaschewski, J. Efficient measurement of the user experience of interactive products. How to use the user experience questionnaire (UEQ). Example: Spanish language version. Int. J. Interact. Multimed. Artif. Intell. 2013, 2, 39–45. [Google Scholar] [CrossRef]
  37. Schrepp, M.; Hinderks, A.; Thomaschewski, J. Construction of a Benchmark for the User Experience Questionnaire (UEQ). Int. J. Interact. Multim. Artif. Intell. 2017, 4, 40–44. [Google Scholar] [CrossRef]
Figure 1. Surgical guide and fibula 3D models. (a) 3D models of surgical guide (red and white) and fibula (red and green) before the positioning. (b) 3D models positioned as planned for the intervention.
Figure 2. 3D model of the mandible after the resection. The bone tissue of the fibula should replace the missing part. The guide used to keep the bones in place during the surgery is shown in red.
Figure 3. Workflow of the Mixed Reality application supporting surgical guides positioning.
Figure 4. Model Target of surgical guide and fibula.
Figure 5. Example of fibula tracking. (a) When the fibula is detected, its virtual model (white colored) appears superimposed on it along with the surgical guide in the correct position, as determined by the medical team before the procedure. (b) The real surgical guide is tracked during its positioning by our system, providing feedback about its current distance, both in position and in rotation, from its correct position.
Figure 6. Experimental mean distance and rotation values. The polynomial curve of order 2 is overlapped showing a decreasing trend.
Figure 7. Mean value of each UEQ scale.
Figure 8. Comparison to the benchmark, highlighting the trend of the mean values of each scale (Attractiveness, Perspicuity, Efficiency, Dependability, Stimulation, and Novelty).
Table 1. Mean values and confidence interval for each scale (Attractiveness, Perspicuity, Efficiency, Dependability, Stimulation, and Novelty).

Scale           Mean    Confidence Interval
Attractiveness  2.133   0.263
Perspicuity     1.533   0.373
Efficiency      1.733   0.251
Dependability   1.817   0.315
Stimulation     2.167   0.354
Novelty         2.433   0.237
Table 2. Reference ranges of the five categories for each scale (Attractiveness, Perspicuity, Efficiency, Dependability, Stimulation, and Novelty) [37].

Scale           Excellent   Good            Above Average   Below Average   Bad
Attractiveness  ≥1.75       ≥1.52, <1.75    ≥1.17, <1.52    ≥0.70, <1.17    <0.70
Perspicuity     ≥1.90       ≥1.56, <1.90    ≥1.08, <1.56    ≥0.64, <1.08    <0.64
Efficiency      ≥1.78       ≥1.47, <1.78    ≥0.98, <1.47    ≥0.54, <0.98    <0.54
Dependability   ≥1.65       ≥1.48, <1.65    ≥1.14, <1.48    ≥0.78, <1.14    <0.78
Stimulation     ≥1.55       ≥1.31, <1.55    ≥0.99, <1.31    ≥0.50, <0.99    <0.50
Novelty         ≥1.40       ≥1.05, <1.40    ≥0.71, <1.05    ≥0.30, <0.71    <0.30
