Article

Enhanced Wearable Force-Feedback Mechanism for Free-Range Haptic Experience Extended by Pass-Through Mixed Reality

Peter Kudry * and Michael Cohen *
Spatial Media Group, University of Aizu, Tsuruga, Ikki-Machi, Aizu-Wakamatsu 965-8580, Fukushima, Japan
* Authors to whom correspondence should be addressed.
Electronics 2023, 12(17), 3659; https://doi.org/10.3390/electronics12173659
Submission received: 20 June 2023 / Revised: 4 August 2023 / Accepted: 20 August 2023 / Published: 30 August 2023
(This article belongs to the Special Issue Wearable Sensing Devices and Technology)

Abstract

We present an extended prototype of a wearable force-feedback mechanism coupled with a Meta Quest 2 head-mounted display to enhance immersion in virtual environments. Our study focuses on the development of devices and virtual experiences that place significant emphasis on personal sensing capabilities, such as precise inside-out optical hand, head, and controller tracking, as well as lifelike haptic feedback utilizing servos and vibration rumble motors, among others. The new prototype addresses various limitations and deficiencies identified in previous stages of development, resulting in significant user performance improvements. Key enhancements include weight reduction, wireless connectivity, optimized power delivery, refined haptic feedback intensity, improved stylus alignment, and smooth transitions between stylus use and hand-tracking. Furthermore, the integration of a mixed reality pass-through feature enables users to experience a comprehensive and immersive environment that blends physical and virtual worlds. These advancements pave the way for future exploration of mixed reality applications, opening up new possibilities for immersive and interactive experiences that combine useful aspects of real and virtual environments.


1. Introduction

In recent years, virtual reality (VR), augmented reality (AR), and mixed reality (MR) have seen rapid advancements, leading to exciting new trends in these immersive technologies. One significant trend is the convergence of VR and AR, giving rise to the concept of MR. Mixed reality combines virtual elements with the real-world environment, allowing users to interact with both physical and digital objects [1]. In the field of virtual reality, research has focused on improving the hardware and user experience. Development of stand-alone VR headsets, such as Meta Quest 2, has eliminated the need for external sensors or cables, enhancing mobility and accessibility. Moreover, advancements in tracking technologies, such as inside-out tracking, have improved accuracy of hand and body movement tracking within virtual environments [2]. Augmented reality has experienced significant growth, especially in industrial applications. The use of AR in fields such as manufacturing, maintenance, and training has shown promising results. By overlaying digital information upon the real world, AR enhances situational awareness, reduces errors, and improves efficiency [3,4]. MR has emerged as a promising area, combining the best aspects of VR and AR. It enables users to interact with virtual objects while maintaining a sense of presence in the real world. Research in this domain has focused on seamless integration of digital content with the physical environment, precise alignment of virtual objects, and realistic occlusion effects. Furthermore, advancements in haptic feedback technologies have contributed to immersive experiences in VR, AR, and MR. Researchers have explored various haptic feedback modalities, such as vibration, texture rendering, and force-feedback, to enhance user interactions and improve the sense of presence [5,6,7]. For example, vibrotactile feedback is commonly employed through handheld controllers, providing users with tactile sensations corresponding to virtual interactions. Moreover, researchers have investigated the integration of wearable haptic devices, such as gloves or suits, to enhance realism and immersion in virtual environments [8,9]. In AR and MR, haptic feedback is important for creating compelling experiences. Haptic feedback can be utilized to enhance the perception of physical objects and interactions in the augmented space. For instance, researchers have explored the use of devices with force-feedback capabilities to simulate the feeling of touching virtual objects in AR [10]. Wearable haptic devices, such as finger-mounted actuators or arm exoskeletons, provide users with force-feedback and simulate the resistance and interaction forces of virtual objects [11,12]. These advancements in haptic feedback technology contribute to creating more immersive, accurate, and realistic mixed reality experiences. In VR experiences, even pseudo-haptic feedback has been found to improve object recognition and manipulation tasks [13]. In AR, haptic feedback has been shown to enhance the perception of virtual objects’ properties, such as texture and stiffness [14]. As haptic feedback continues to evolve, one particularly notable trend is the development of lightweight and wearable haptic devices that provide high-fidelity feedback while ensuring user comfort and freedom of movement.

2. Materials and Methods

2.1. Problem Description

While existing haptic feedback devices have made significant progress, there remain limitations that hinder the full realization of accurate and realistic haptic experiences. Current wearable or handheld force-feedback devices lack sufficient accuracy, simulation of textures and stiffness, support for positional translation, or a combination of these features, thus impacting the fidelity of user interactions. For instance, handheld controllers with vibrotactile feedback allow tactile sensation that corresponds to virtual interactions but lack the ability to constrain the movement of fingers, limiting the realism of grasping and virtual object stiffness simulation. Similarly, haptic suits provide vibrotactile feedback or other force-feedback sensations but fail to arrest the movement of a user’s arms, compromising immersive experience. Arm exoskeletons or fixtures can affect limb movement, but they often lack fine precision, simulation of grasping, and vibrotactile feedback. To address these challenges, our research projects have focused on developing (designing, prototyping, refining) a wearable haptic interface derived from a force-feedback stylus initially intended for desktop use. Our primary objective has been to build a device that can offer untethered six degrees of freedom in virtual reality while ensuring accurate tracking and force-feedback capabilities, specifically tailored for computer-aided design (CAD) applications such as 3D modeling and sculpting. In addition to these functionalities, our prototype aims to simulate textures and object stiffness, and provide vibrotactile feedback. Development of this prototype has presented various additional challenges and complexities, including the ergonomic design of a wearable harness, considerations regarding weight, and the device’s overall form factor [15]. The capabilities of this prototype had been previously demonstrated exclusively within VR environments, with no exploration in the context of AR or the intermediate spectrum of MR experiences.

2.2. Antecedent Prototype Description

2.2.1. Hardware Overview

In our previous work, we presented a prototype for immersive and realistic interaction with virtual objects. The ambulatory prototype includes an adjustable platform with a 3D Systems Touch force-feedback device mounted in front of the user, providing lifelike tactile sensations. The force-feedback device, operated through a stylus affordance held in the user’s hand like a pen, can exert forces up to 3.3 N and features a force-feedback workspace size of 431 W × 348 H × 165 D mm. It can simulate stiffness on all three of its force-feedback-providing axes: X (1.26 N/mm), Y (2.31 N/mm), and Z (1.02 N/mm), as well as inertia (mass at the stylus tip) of up to 45 g [16]. This platform is integrated into a versatile tactical vest with aluminum plates on its front and back. The backplate holds a laptop that streams content to a Meta Quest 2 HMD, while the frontplate serves as the haptic device base. This cantilevered design allows adjustments for varying user heights and arm lengths. The stylus assembly underwent modifications, including weight removal and relocation of the main board. The wearable assembly comprises three devices with distinct power sources. The HMD and laptop use internal batteries, while the device’s servomotors rely on an external power bank conveniently strapped to the user’s waist. These hardware changes enable a more immersive and realistic haptic feedback experience with virtual objects [15].

2.2.2. Software Overview

In the antecedent prototype’s software implementation, Meta Quest 2, an Android-based VR headset, was utilized. However, to overcome compatibility issues with the Open Haptics SDK, it operated in Link mode, which tethered the “thin client” device to a laptop. The VR environment was implemented in the Unity game engine, using a combination of an Oculus XR plugin and a Mixed Reality Toolkit (MRTK). The Oculus XR plugin handled lower-level tasks, including stereoscopic rendering, Quest Link functionality, input subsystems for controller support, and HMD tracking [17]. Additional features—such as hand-tracking, gesture-operated teleportation, ray-cast reticle operation, and physics-enabled hand models—were extended through the MRTK [18]. For controlling the 3D Systems stylus, the Open Haptics for Unity plugin was employed, allowing integration of various 3D haptic interactions within Unity. This plugin comprises the Quick Haptics micro API, Haptic Device API (HDAPI), Haptic Library API (HLAPI), Geomagic Touch Device Drivers (GTDD), and additional utilities [19]. However, the structure of the Open Haptics plugin for Unity differs from that of the native edition. Instead of using Quick Haptics, it employs the OHToUnityBridge dynamic-link library (DLL) to establish communication between the Unity controller, the HapticPlugin script written in C#, and the HD/HL APIs written in the C programming language. The OHToUnityBridge.dll library directly invokes the HD, HL, and OpenGL libraries without relying on Quick Haptics [15,20].
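To make the data flow concrete, the sketch below shows, schematically, how a Unity script can mirror the stylus pose each frame and return a contact force each physics step. The HapticBridge class is a hypothetical stand-in for the plugin's OHToUnityBridge/HapticPlugin layer; its method names and the ray-cast contact test are illustrative assumptions rather than the plugin's actual API.

```csharp
using UnityEngine;

// Hypothetical stand-in for the plugin's OHToUnityBridge/HapticPlugin layer;
// the real plugin exposes a different API.
static class HapticBridge
{
    public static Vector3 GetStylusPosition() => Vector3.zero;           // would marshal from OHToUnityBridge.dll
    public static Quaternion GetStylusRotation() => Quaternion.identity;
    public static void SendForce(Vector3 newtons) { /* forwarded to the HD/HL APIs */ }
}

public class StylusProxy : MonoBehaviour
{
    public Transform virtualStylus;   // stylus model rendered in the scene
    public float stiffness = 500f;    // N/m for a simple penetration spring (illustrative)

    void Update()
    {
        // Keep the rendered stylus in step with the physical device.
        virtualStylus.localPosition = HapticBridge.GetStylusPosition();
        virtualStylus.localRotation = HapticBridge.GetStylusRotation();
    }

    void FixedUpdate()
    {
        // Crude contact test: if the tip is near geometry below it, push back along
        // the surface normal; otherwise command zero force.
        Vector3 force = Vector3.zero;
        if (Physics.Raycast(virtualStylus.position, -virtualStylus.up, out RaycastHit hit, 0.01f))
            force = hit.normal * stiffness * (0.01f - hit.distance);
        HapticBridge.SendForce(force);
    }
}
```

In the actual plugin, contact detection and force shaping run in the native HD/HL servo loop at a much higher rate than Unity's physics step; the sketch only illustrates the direction of data through the bridge.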

2.2.3. Pilot Experience Overview

The virtual environment for the pilot study, as shown in Figure 1, immersed users in a range of haptic simulations that demonstrated various functionalities of the haptic mechanism. The experience began with a shape-sorting activity, wherein users interacted with a virtual table containing diverse shapes. Using the physical haptic stylus, users could manipulate virtual shapes with realistic feedback. When the stylus touched a virtual object, it responded by mechanically locking or constraining movement and rotation, creating a lifelike sensation of the object’s massive presence. The HMD also presented spatialized binaural audio cues indicating that an object had been touched or grabbed. Another simulation involved two sculptures. One sculpture featured outer and inner spheres, the outer layer being deformable and the inner layer rigid. Users experienced the tactile sensation of piercing deformable material, like popping a balloon with a pin. The second sculpture virtually attracted the stylus to its surface and limited movement, providing the feeling of dragging a magnetic stick across a metal table. Next, users encountered two boards made of different materials, glass and wood, to showcase the haptic system’s ability to simulate textures and surface friction effectively. The difference between these materials deepened the user’s perception of texture roughness or smoothness, complemented by audio cues such as the sound of scratching on wood or squeaking when dragging the stylus across the glass surface. The final segment presented a table with five colored capsules, each representing a distinct tactile effect: elasticity, viscosity, vibration, constant force, and friction. Users could explore these effects to enhance their interactive experience. Furthermore, users had the option to create basic 3D forms (cube, cylinder, sphere, or plane) and manipulate their scale, location, and orientation using bimanual manipulation via hand-tracking. Interaction with objects could be achieved through ray-casting selection, collision-based interaction, or direct hand-grabbing, depending on their proximity. These capabilities highlighted the creative potential of free-range haptic and tactile interfaces in digital content creation and CAD applications, resembling the functionality of the desktop edition of the Geomagic Freeform modeling tool [21].
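As an illustration of how such touch responses can be wired up in Unity (a hedged sketch, not the study's actual script), a trigger handler can freeze a touched shape and play a spatialized cue; the "StylusTip" tag and the touchCue clip are assumed names.

```csharp
using UnityEngine;

// Hedged illustration: freeze a touched shape and play a spatialized cue when the
// stylus tip makes contact. The "StylusTip" tag and touchCue clip are assumptions.
[RequireComponent(typeof(Collider))]
public class TouchLockSketch : MonoBehaviour
{
    public AudioClip touchCue;   // short click confirming contact

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("StylusTip")) return;

        // Constrain the shape so it feels massive and immovable while touched.
        if (TryGetComponent<Rigidbody>(out var body))
            body.constraints = RigidbodyConstraints.FreezeAll;

        // Spatialized confirmation cue at the object's position.
        AudioSource.PlayClipAtPoint(touchCue, transform.position);
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("StylusTip") && TryGetComponent<Rigidbody>(out var body))
            body.constraints = RigidbodyConstraints.None;
    }
}
```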

2.3. Preliminary Validation

2.3.1. Summary of a Pilot Experiment

A pilot experiment compared two conditions of experiencing haptic feedback: seated with a normal desktop monitor, and mobile VR with a head-mounted display (HMD). The baseline condition involved playing a Jenga game on a desktop computer with a haptic device, while the room-scale condition allowed participants to explore an immersive haptic sandbox using a harness and VR headset. After experiencing both setups, participants completed a questionnaire, rating their impressions on a Likert scale. The results showed that users had neutral to slightly positive feelings of immersion during the desktop experience. Feedback regarding the wearable harness was mixed, with concerns about weight and comfort, but participants provided strong positive feedback on immersiveness, despite the crudeness of the harness. The combination of virtual reality and haptic force-feedback contributed to overall immersion. Users found the combination of inputs usable but somewhat tricky, with some issues related to avatar misalignment, drift, and hand-tracking transitions.
Overall, participants believed this type of device arrangement could be used for computer-aided design (CAD) applications. The participants’ overall experience manifested positive feedback, with an average score of 8.5 out of 10 for the proof of concept. The experiment provided valuable feedback for future refinement and expansion of the system [21].

2.3.2. Summary of a Performance Experiment

The performance experiment aimed to confirm whether the ambulatory condition, utilizing mobile VR with an HMD, could match or surpass performance of the seated condition with a regular monitor. The experiment included block-sorting and Jenga games presented to separate groups of participants for the ambulatory and desktop conditions to avoid bias from learning effects. During the Jenga game, participants were instructed to remove as many blocks as possible from the tower within a 4-min time limit without causing it to collapse. The block-sorting game required participants to successfully sort a variety of shapes within the same time frame. The student participants had diverse levels of VR experience, and all were right-handed, with academic backgrounds in Computer Science or Software Engineering. Data collection involved observation, stopwatch timing, and questionnaires eliciting user experience and subjective impressions. The results indicated that there were no significant differences between the desktop and ambulatory conditions in terms of user experience dimensions (attractiveness, dependability, efficiency, novelty, perspicuity, stimulation) and subjective experience factors (effort, importance, interest, enjoyment, perceived choice, perceived competence, pressure, tension, value, usefulness). Likewise, the performance measurements, including the maximum number of removed and stacked Jenga blocks, number of Jenga trials, and number of filled shape-sorter boards, did not exhibit any notable variances between the two conditions.
The experiment demonstrated that the ambulatory condition achieved comparable performance to the seated condition, while user experience, subjective impressions, and performance measurements showed no significant disparities between the two setups [21].

2.4. Identified Deficiencies

Based on the summarized experiments, several shortcomings could be identified that affected user experience with the prototype. These deficiencies had an impact on user performance, and despite the haptic feedback enhancing immersiveness in this particular use case, performance was comparable between the seated and mobile VR conditions. This suggests that improving performance could potentially further enhance immersiveness and depth of presence. One significant deficiency is related to the comfort of the wearable harness, which received varied feedback from the participants. The weight of the system emerged as a primary concern, indicating suboptimal material choices. The wearable harness was only comfortable to wear for a limited duration, typically 15 min to around half an hour. Another deficiency lies in the haptic feedback, where some effects were reported to lack the necessary strength (force) to replicate real sensations accurately. This indicates a shortfall in delivering realistic haptic feedback to users. The combination of inputs proved to be usable but posed certain challenges as well. A rotational reset feature was implemented to address user avatar misalignment with the stylus after locomotion. However, the virtual stylus sometimes drifted from its real-world representation, making the rotational reset function unreliable. This limitation highlights the difficulty of maintaining consistent alignment and synchronization between the virtual and physical aspects of the system. While hand-tracking was generally considered reasonably accurate, transitioning between using the haptic stylus and hand gestures required users to temporarily hide their hand and bring it back into view to reload the hand-tracking mode. This process was deemed inconvenient, suggesting a deficiency in achieving smooth transitions between different interaction modes. These identified deficiencies indicated aspects that required improvement to enhance the user experience. Addressing these concerns involves improving comfort, enhancing the realism of touch sensations, refining input usability, ensuring alignment accuracy, and facilitating smoother transitions between different interaction modes in the system.

3. Resulting Enhancements

3.1. Weight Reduction

The prototype’s comfort aspects and certain software-related shortcomings were the main foci for improvement, based on the identified deficits mentioned above. A key objective of this new edition was to achieve significant weight reduction by streamlining components attached to the harness. As part of this effort, the entire structure of the prototype underwent redesign, starting with the removal of the aluminum plating. The front shelf carrying 3D Systems Touch was also eliminated, making way for a more robust frame made of extruded aluminum that reduced flexing, particularly on the shelf in front of the user. The front cantilever was replaced with an extruded aluminum bar as well. To further reduce weight, the stylus device underwent additional modifications, stripping it down to essentials: servo motors, pulleys, and wiring. This resulted in less strain on the user due to reduced weight at the end of the cantilever. Moreover, the laptop computer formerly carried at the back of the user was removed, allowing the stylus control board to be relocated alongside the addition of a Raspberry Pi 3. While it would have been feasible to remove and replace these components without encountering difficulties, our specific implementation and use case demanded the preservation of untethered six-degrees-of-freedom movement within the virtual space. Consequently, we had to revector connectivity between Quest 2 and 3D Systems Touch, transitioning from a wired to a wireless setup to maintain the desired freedom of movement. The redesigned harness is shown in Figure 2 and Figure 3.

3.2. Wireless Connectivity

Switching from Quest Link to Air Link, which involves the HMD acting as a thin client of a PC, was a rather straightforward process. However, it required upgrading to newer versions of our development environment, Unity, and its Oculus XR libraries to ensure reliable functionality of Air Link. The conversion of the haptic device from wired to wireless connection posed significant challenges. We initially explored the possibility of hardware-level conversion from USB to Bluetooth, but ultimately opted for USB over network. To accomplish this, we employed a VirtualHere server running on Raspberry Pi 3, augmented with a Wi-Fi adapter capable of a 5 GHz Wi-Fi connection. VirtualHere allows the network to act as a pathway for transmitting USB signals, effectively enabling USB over IP [22]. This means that the USB device, such as the stylus apparatus, behaves as though it is directly connected to the client machine (e.g., a laptop computer), even though it is physically plugged into a remote server, specifically the Raspberry Pi 3. As a result, existing drivers and software function without requiring special modifications. However, there is a slight latency penalty that, to a small extent, negatively impacts our implementation under certain conditions. As described in Section 2.2.2, Unity utilizes the OHToUnityBridge.dll library, whereas vanilla applications from 3D Systems directly interface with the device via HD/HL APIs. When the stylus is driven directly using HD/HL libraries, the latency and reliability of haptic feedback are perceived as identical to when the device is physically connected to the target machine. However, when utilizing an application built in Unity, the additional overhead of the translation layer, which involves conversion of native API calls through middleware to accurately express virtual objects’ physical properties in real life via the stylus, introduces occasional perceivable delays in force-feedback response and subtle stylus jitter. The occurrence of these issues depends on various factors, such as the environment and Wi-Fi signal quality, which may be beyond our control. Furthermore, even without perceptible issues, a slight degradation in the servoloop frequency, which facilitates bi-directional communication between the application and the device, can be observed when comparing wired and wireless communication. However, there are plans to address these concerns and improve the application in future updates by exploring ways to minimize overhead in the middleware.
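One way to quantify such degradation is to log the worst-case interval between servo-loop callbacks and compare wired against wireless runs. The sketch below is hedged: OnHapticTick is an assumed hook invoked once per servo-loop iteration by the device bridge, not part of the Open Haptics plugin.

```csharp
using UnityEngine;

// Hedged sketch of logging haptic update intervals to compare the wired and wireless
// (VirtualHere) configurations. OnHapticTick is an assumed per-iteration hook.
public class ServoLoopMonitor : MonoBehaviour
{
    readonly System.Diagnostics.Stopwatch clock = System.Diagnostics.Stopwatch.StartNew();
    long lastTicks;
    double worstMs;
    int samples;

    public void OnHapticTick()
    {
        long now = clock.ElapsedTicks;
        double intervalMs = (now - lastTicks) * 1000.0 / System.Diagnostics.Stopwatch.Frequency;
        lastTicks = now;

        worstMs = System.Math.Max(worstMs, intervalMs);
        if (++samples % 1000 == 0)
        {
            // A healthy servo loop runs near 1 kHz (~1 ms intervals); spikes hint at network jitter.
            Debug.Log($"Worst servo-loop interval over last 1000 ticks: {worstMs:F2} ms");
            worstMs = 0;
        }
    }
}
```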

3.3. Power Delivery

Wireless communication between the host laptop, client HMD, and the stylus allows for a reduction in power delivery requirements for all the devices involved. Previously, we were constrained by the built-in battery of our host machine, a 90 Wh Li-ion battery. With several power-saving techniques in place, such as undervolting the CPU and underclocking the CPU and GPU, we achieved approximately 60 to 90 min of screen-on time. However, by eliminating the need for the user to carry the laptop, we can disable all power-saving measures, increase rendering performance output, and strive for higher texture quality and visuals while considering the limitations of the Air Link function of Meta Quest 2, rather than being restricted by the power consumption of the PC. The power delivery for the stylus device remains the same as described in Section 2.2.1, with an external power bank. However, the battery now needs to supply power to the Raspberry Pi 3 as well, which reduces the stylus’s power-on time. Previously estimated at approximately 12 h, the stylus’s system-on time can now be estimated at a maximum of about 10 h due to the added power requirements of the Raspberry Pi 3. As the 10-h run-time of this system exceeds the presumed continuous session duration by a single user, we utilize this external power source to extend the battery life of Quest 2 as well. Taking into account the average power consumption of Quest 2 (between 4.7 and 7 W), Raspberry Pi 3 (between 1.3 and 3.7 W), and 3D Systems Touch (between 18 and 31.5 W), we can estimate that the entire system can run on a single charge of a 20 Ah battery for approximately 4.5 to 8 h, effectively quadrupling the minimum system up-time. Considering (as mentioned in Section 2.4) that the comfortable usability of this prototype is up to 30 min, the extended run-time significantly exceeds this use case. Even if the weight reduction and ergonomic improvements described in Section 3.1 only quadruple usability time to 2 h, the power requirements would not become a significant limitation. All components of the wireless connectivity and power delivery are illustrated in Figure 4.
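For reference, the quoted run-time band follows from a simple energy budget. Summing the component draws gives a total load of roughly 24 to 42.2 W; assuming the 20 Ah pack delivers about 12 V with roughly 80% conversion efficiency (assumptions, since the pack's output voltage is not stated here), the usable energy is on the order of 192 Wh:

$$P_{\min} = 4.7 + 1.3 + 18 = 24\ \mathrm{W}, \qquad P_{\max} = 7 + 3.7 + 31.5 = 42.2\ \mathrm{W}$$

$$E \approx 20\ \mathrm{Ah} \times 12\ \mathrm{V} \times 0.8 \approx 192\ \mathrm{Wh}, \qquad t = \frac{E}{P} \approx \frac{192\ \mathrm{Wh}}{42.2\ \mathrm{W}}\ \text{to}\ \frac{192\ \mathrm{Wh}}{24\ \mathrm{W}} \approx 4.5\ \text{to}\ 8\ \mathrm{h}$$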

3.4. Haptic Feedback Intensity

In the previous section (Section 2.4), it was mentioned that the strength of certain haptic effects was perceived as inadequate. To address this, adjustments were made to enhance the modeling of virtual objects’ physical properties. Unity’s physics engine handles the simulation, which is then rendered into forces exerted through the stylus. Various properties of virtual objects were reviewed and modified to simulate realistic physical attributes and their corresponding responses when interacted with using the stylus. These properties include perceived weight, drag, bounciness, friction coefficients between different materials, their respective smoothness, stiffness, and the intensity of damping when the stylus interacts with a touched object. By fine-tuning these properties, we aimed to create a more immersive experience that closely resembles real-life interaction. The adjustments allow users to perceive and interact with virtual objects in a manner that aligns with their expectations and provides a more satisfying haptic experience.
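The sketch below shows the kind of per-object tuning involved, using Unity's standard PhysicMaterial and Rigidbody properties; the numeric values are illustrative rather than the prototype's tuned settings.

```csharp
using UnityEngine;

// Illustrative per-object tuning using Unity's standard physics properties; the
// numeric values are examples, not the prototype's tuned settings.
public static class HapticMaterialTuning
{
    public static void MakeWoodLike(GameObject board)
    {
        var wood = new PhysicMaterial("Wood")
        {
            staticFriction  = 0.6f,                             // grippy, rough surface
            dynamicFriction = 0.5f,
            bounciness      = 0.05f,
            frictionCombine = PhysicMaterialCombine.Average,
            bounceCombine   = PhysicMaterialCombine.Minimum
        };
        board.GetComponent<Collider>().material = wood;

        if (board.TryGetComponent<Rigidbody>(out var body))
        {
            body.mass = 1.5f;   // perceived weight when lifted with the stylus
            body.drag = 0.2f;   // damping felt while dragging the object
        }
    }
}
```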

3.5. Stylus-to-Avatar Alignment and Reset Function

Previously, there were issues encountered in maintaining consistent alignment and synchronization between the avatar of the user and the stylus device. Whenever a user utilized the locomotion feature, which involved initiating teleportation through hand gestures, slight drift or rotational misalignment occurred between the virtual representation of the stylus and its real-life counterpart. In the implementation described in Section 2.2.2, the MRTK was utilized to enable features such as hand-tracking and gesture-controlled teleportation. However, the teleportation process resulted in the avatar being independently rotated around its local gravitational vertical axis (yaw), separate from the orientation of the stylus attached to the avatar. To address this discrepancy, a reset function was implemented, allowing users to request realignment by simultaneously pressing two buttons on the stylus. Due to the decision to switch to the latest version of the Oculus XR Plugin, departure from the outdated MRTK was necessary. This switch required reimplementation of the features previously provided by MRTK into the application. The newer XR Interaction Toolkit was utilized for this purpose and served as an aid in implementing the updated locomotion system. From the user’s perspective, the locomotion function operates in an indistinguishable manner. However, the reimplementation resolves the issue of independently rotating the avatar and stylus after each teleportation. The updated system ensures that the user and their stylus face the same direction as they did before initiating the teleportation process.
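A minimal sketch of such a yaw-only reset is shown below; the rig, headset, and stylus references and the two-button trigger are assumptions standing in for the prototype's actual implementation. Because the stylus is mounted on the harness, its heading should coincide with the user's body heading, so the rig is rotated about the headset position until the yaw error vanishes.

```csharp
using UnityEngine;

// Hedged sketch of a yaw-only realignment between the user's rig and the virtual
// stylus; the references and trigger mechanism are illustrative assumptions.
public class StylusYawReset : MonoBehaviour
{
    public Transform xrRig;          // root of the tracked play space
    public Transform virtualStylus;  // virtual counterpart of the harness-mounted device
    public Transform headset;        // HMD anchor

    // Called when both stylus buttons are pressed simultaneously.
    public void ResetAlignment()
    {
        // Compute the heading (yaw) error between stylus and headset, then rotate
        // the rig around the headset position to cancel it.
        float headingError = Mathf.DeltaAngle(
            virtualStylus.eulerAngles.y, headset.eulerAngles.y);
        xrRig.RotateAround(headset.position, Vector3.up, headingError);
    }
}
```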

3.6. Transition between Stylus Use and Hand-Tracking

The availability of newer libraries and prefabs within the Meta Quest Interaction SDK allows integration of enhanced hand-tracking and controller tracking into Unity applications [23]. By incorporating updated hand models and modifying relevant scripts to suit our specific use case, the problem of inconsistent transitions between using the stylus and hand-tracking with the user’s dominant hand, based on their selected chirality (handedness), was resolved. The adjustments made to the hand-tracking functionality ensure that the transition between using the stylus and hand-tracking is now seamless. Users no longer experience a sense of disconnection or inconsistency between the perceived tracking performance and the movements of their real hands. This improvement significantly enhances the overall accuracy of tracking and reinforces the naturalness of the user’s hand movements. This enhancement becomes particularly important in light of the introduction of a new feature to this prototype’s software, a mixed reality pass-through, described below. All aforementioned software changes and the overall software architecture are illustrated in Figure 5.
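A simplified sketch of the dominant-hand hand-off is given below. The OVRHand component and its IsTracked property come from the Meta (Oculus) integration package; treating stylus grip state plus tracking availability as the switching condition is an assumption about the prototype's logic, not a documented behavior.

```csharp
using UnityEngine;

// Hedged sketch of swapping between the stylus representation and the tracked hand
// model for the dominant hand, without requiring the user to hide and re-show the hand.
public class DominantHandSwitcher : MonoBehaviour
{
    public OVRHand dominantHand;       // tracked hand for the selected chirality
    public GameObject handVisual;      // rendered hand model
    public GameObject stylusVisual;    // rendered stylus and grip pose

    public bool stylusInUse;           // set true while the stylus is gripped

    void Update()
    {
        bool showHand = !stylusInUse && dominantHand != null && dominantHand.IsTracked;

        // Swap representations in place; tracking continues regardless of which is shown.
        handVisual.SetActive(showHand);
        stylusVisual.SetActive(!showHand);
    }
}
```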

3.7. Mixed Reality Pass-Through

The mixed reality pass-through feature on Meta Quest 2 enhances the virtual reality (VR) experience by integrating real-world surroundings into the virtual environment. This capability utilizes the built-in cameras of the headset to capture a live stereoscopic video feed of the user’s physical environment, which is then blended with the virtual content. As a result, users can view and interact with their surroundings while wearing the headset, allowing virtual objects and characters to coexist with the real-world environment. This feature enables users to move around, navigate obstacles, and interact with physical objects while still being immersed in the virtual world. It enhances situational awareness, promoting user safety and preventing collisions with physical objects. The integration of real-world surroundings into virtual experiences creates an augmented reality-like effect, offering a more comprehensive and immersive encounter for users. Combining the mixed reality pass-through feature with haptic force-feedback devices further enriches the experience, providing users with heightened sensory stimulation and a seamless blend of virtual and physical realms.
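For orientation, enabling pass-through as an underlay in Unity can look roughly like the following sketch, assuming the OVRPassthroughLayer component from the Oculus Integration package; exact property names and required project settings vary across SDK versions.

```csharp
using UnityEngine;

// Hedged sketch of enabling pass-through as an underlay behind the Unity scene,
// assuming the Oculus Integration's OVRPassthroughLayer component.
public class PassthroughBootstrap : MonoBehaviour
{
    void Start()
    {
        var layer = gameObject.AddComponent<OVRPassthroughLayer>();
        layer.overlayType = OVROverlay.OverlayType.Underlay;   // show the camera feed behind virtual content
        layer.textureOpacity = 1f;

        // The camera must clear to transparent black so the pass-through feed
        // remains visible wherever no virtual geometry is drawn.
        var cam = Camera.main;
        cam.clearFlags = CameraClearFlags.SolidColor;
        cam.backgroundColor = new Color(0f, 0f, 0f, 0f);
    }
}
```

With the camera clearing to transparent black, the live feed shows through wherever no virtual geometry is rendered, which is how the virtual desks in Figure 6 appear anchored within the real room.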

4. Enhancements Validation

To assess the impact of the implemented enhancements, we conducted an experiment similar to the one outlined above in Section 2.3.2. However, the focus this time was on evaluating the effects of the enhancements on user performance and overall experience.

4.1. Conditions: Virtual Reality and Mixed Reality

In the baseline condition, each participant was equipped with our harness and guided through a set of tasks in virtual reality, including block-sorting and Jenga games. To precisely replicate the mixed reality environment, we re-implemented the virtual reality environment. The laboratory room where the experiment took place was scanned using a LiDAR sensor on an iPad Pro. The resulting mesh model was edited to remove non-manifold vertices, and its textures were adjusted to appear monochromatic with reduced resolution, while a grain effect was added to match the resolution and image quality of the Quest 2 pass-through function. Figure 6 and Figure 7 illustrate the compositions of these scenes.
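The monochromatic, grainy look of the scanned room can be approximated with a simple texture pass such as the sketch below; the grain amplitude and the pixel-level approach are assumptions, not the exact processing used for the study scene.

```csharp
using UnityEngine;

// Illustrative texture treatment: desaturate the scanned room's texture and add
// grain so the static VR backdrop resembles the Quest 2 pass-through image.
// The source texture must be imported as readable for GetPixels to work.
public static class PassthroughLookalike
{
    public static Texture2D Process(Texture2D source, float grain = 0.04f)
    {
        var result = new Texture2D(source.width, source.height);
        Color[] pixels = source.GetPixels();

        for (int i = 0; i < pixels.Length; i++)
        {
            float g = pixels[i].grayscale + Random.Range(-grain, grain);  // desaturate + grain
            pixels[i] = new Color(g, g, g, 1f);
        }

        result.SetPixels(pixels);
        result.Apply();
        return result;
    }
}
```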

4.1.1. Procedure and Controls

In the Jenga game, both VR and MR segments had a 4-min time limit for participants to remove as many blocks as possible without toppling the tower. If the tower toppled, the participants had to reset and start again. To reset, the users pushed a physics-reactive virtual button, placed next to the block sorter and Jenga tower, using their index finger (enabled via hand-tracking). The highest number of removed blocks from any number of runs was recorded, along with the number of resets. Similarly, in the block-sorting game, participants had 4 min to successfully sort the full range of shapes. The score was incremented only if all blocks were successfully sorted, preventing participants from sorting only easier shapes. To minimize the influence of learning on results, participants were divided into two groups: (1) those who first participated in the virtual reality segment and then in the mixed reality block, and (2) those who first participated in the mixed reality segment and then in the virtual reality block. Each experiment segment took approximately 10 min, excluding the introduction, warm-up sessions, and questionnaire answering. The warm-up session, lasting about 2 min for each game segment (8 min in total), primarily focused on harness adjustment to ensure comfort and familiarity with the interface. The questionnaire, administered after each segment, took up to 10 min, resulting in an overall experiment duration of 30 to 40 min per participant. Participants removed the harness and HMD while answering the questionnaire between the VR and MR segments. Furthermore, as described in Section 3.2, due to the varied latency introduced by the middleware in the wireless set-up and dependency on the quality of the network, the participants were tethered to a nearby laptop via 5 m long Quest Link and 5 m long USB cables for the HMD and haptic devices, respectively. This configuration ensured consistency between the experiment runs, as interference of the university network could impact consistency of the experiment. No hardware that would be otherwise present in the wireless set-up was removed or adjusted, and only the connectivity was changed from self-contained to external.

4.1.2. Participants

Ten adults were recruited to participate in this experiment, with 40% falling between the ages of 18 and 25, and the remaining 60% between 26 and 35. Among the participants, 60% were males and 40% were females. All subjects were right-handed and had a background in Computer Science or Software Engineering, making them familiar with standard human–computer interactions. However, some participants had no prior experience with VR or haptic devices. They were given a basic introduction to VR concepts and usage, followed by a brief explanation of interactions with force-feedback haptics. Participants were compensated JPY 1000 (approximately USD 8) for their involvement in this study.

4.1.3. Data Acquisition and Composition

The experimental supervisor collected the measured data using a stopwatch and Google Forms to record the scores. After each measured segment (VR or MR), each participant completed a User Experience Questionnaire (UEQ) consisting of 26 pairs of bipolar extremes, such as “complicated/easy” and “inventive/conventional,” evaluated on a quantized Likert scale from 1 to 7 [24,25]. Additionally, participants were asked to rate their agreement with eight statements regarding the improvements, such as “The harness was heavy.” or “The tethered connection bothered me.” on a seven-point Likert scale, where 1 indicated “Strongly Disagree” and 7 “Strongly Agree.” Participants were also asked to estimate the duration they could comfortably wear the harness and to rate their overall experience on a scale from 1 to 10.

4.1.4. Results

The User Experience Questionnaire (UEQ) was utilized to evaluate participants’ experiences across six dimensions: Attractiveness, Dependability, Efficiency, Novelty, Perspicuity, and Stimulation. The questionnaire responses were collected on a zero-centered seven-point scale (−3 → +3). Analysis of the UEQ data revealed that there were no statistically significant differences between the VR and MR experiences. Moreover, a comparison was made between the UEQ results obtained from the ambulatory condition in the previous experiment, which utilized the old hardware, and the VR condition in the current experiment, where the enhanced hardware was employed. The results of estimated marginal means showed no significant differences between these conditions. To calculate the estimated marginal means of the fitted linear model, we employed the “lme4” and “emmeans” libraries in the R programming language. This statistical analysis allowed us to assess the differences between treatment conditions while considering the influence of potential covariates and interaction effects, providing valuable insights into the user experience across the three conditions [26,27,28]. These results are presented in Figure 8. Furthermore, the UEQ also includes benchmarking of our results against data from 21,175 individuals who participated in 468 studies involving various products, such as business software, web pages, web shops, and social networks. Our study found that both conditions, VR and MR, performed similarly when compared to this extensive dataset.
Performance measurements were analyzed similarly to the UEQ, using estimated marginal means, but with a fitted generalized linear mixed-effects model [29]. Three dimensions were considered: the maximum number of removed and stacked Jenga blocks, the number of Jenga trials (resets + 1), and the number of filled block-sorter boards. Regarding comparison between the VR and MR segments, results showed that there was no significant performance difference. However, when comparing the VR segments using the previous version of the hardware with those using the latest one, results indicated a significant difference in performance across all three metrics, favoring the revised hardware implementation. These results are presented in Figure 9.
Additionally, the collected responses concerning the prototype enhancements indicate that participants perceived both the VR and MR segments as immersive, with a slight preference, albeit not statistically significant, toward the MR segment. The comfort of the harness was also perceived positively, although the standard deviation suggests high variability in the responses. Notably, participants strongly disagreed with the statement that the harness was heavy. As described in Section 4.1.1, users were tethered to a nearby PC to ensure system stability; while the key features of this prototype are freedom of movement and ambulatory aspects, in this experiment, every part of the virtual environment was reachable by walking without being limited by the wired connection. Nonetheless, participants were asked if the wired connection inconvenienced them in any way. The results indicate that the wired connection was generally not cumbersome. Furthermore, responses regarding the software enhancements, such as the strength of simulated forces, hand-tracking, stylus precision, and transitions between them, demonstrate that these improvements positively influenced the strength of force-feedback, precision, reliability of tracking, and seamlessness of transitions between the two types of control. These results are summarized in Table 1.
Upon completion of both the VR and MR segments of this experiment, participants were surveyed to ascertain their estimated time limit for comfortable wear. The aggregate of all responses indicates that the wearability limit significantly improved to approximately 1.2 h (or 72 min). This represents a substantial enhancement, doubling the prototype’s wearability time compared to its earlier version.
Upon evaluating the overall experience on a scale ranging from 1 to 10, participants provided remarkably positive feedback. The quality of our proof of concept received an average score of 9.1/10, affirming strong reception and high satisfaction with our refreshed implementation.

5. Discussion

The results of our study, based on the analysis of the User Experience Questionnaire (UEQ) data, provide valuable insights into the user experiences in both virtual reality (VR) and mixed reality (MR) segments. Our statistical analysis revealed no significant differences between the VR and MR experiences, indicating that the environments offered comparable user experiences. This finding is particularly interesting, as it suggests that the advanced mixed reality pass-through feature on the HMD successfully achieved seamless integration of the virtual and physical realms, allowing users to interact with the real world while remaining immersed in the virtual environment.
However, it is worth noting that the understood definitions of terms such as “immersion” and “presence” lack uniformity. Some studies define immersion as a subjective property and presence as objective, while others assume the opposite [30]. Nonetheless, by considering immersion as an inherent characteristic of a VR system that assesses the extent to which it can faithfully replicate natural sensorimotor contingencies for perception and appropriately respond to perceptual actions [31], one can claim that a VR headset employing visual pass-through via cameras can impact one’s sense of immersion and interaction between augmented virtuality and augmented reality.
Presence can be understood as a perceptual illusion of being present in a virtual environment while knowing that you are not physically there [31]. With this definition, we can explore the notion that the difference in levels of presence between virtual and mixed realities could be negligible, given that a virtual reality system can render the virtual representation of a real environment, excluding the augmented elements, to the same quality as the video pass-through.
To assess the impact of hardware enhancements, we compared the UEQ results obtained from the previous ambulatory condition, which utilized the old hardware, with the current VR condition employing the enhanced hardware, as well as an MR version of the current system implementation. Statistical analysis using the estimated marginal means did not show any significant differences among any of the conditions. This indicates that the hardware enhancements did not significantly alter the user experience when comparing VR with the previous ambulatory setup, nor with the MR extension.
Comparison of performance measurements between the VR and MR segments, using a fitted generalized linear mixed-effects model, demonstrated no significant performance difference. This suggests that both environments were equally effective in allowing participants to interact with virtual objects. However, when comparing the VR segments utilizing the previous version of the hardware with the latest one, a significant difference in performance was observed across all three metrics: the maximum number of removed and stacked Jenga blocks, the number of Jenga trials (resets + 1), and the number of filled shape-sorter boards. This result supports the notion that the hardware enhancements led to improved performance and precision in VR interactions.
The responses regarding the hardware enhancements indicated that participants perceived both the VR and MR segments as immersive, with a slight preference toward the MR segment, although this preference was not statistically significant. The comfort of the harness was generally perceived positively, though some variability was observed in the responses. Notably, participants strongly disagreed with the statement that the harness was heavy, indicating that the redesigned prototype effectively addressed previous weight-related concerns.
The software enhancements—including the strength of simulated forces, hand-tracking, stylus precision, and transitions between control modes—received positive feedback from subjects. This indicates that the software modifications contributed to enhancing the overall immersion in the virtual and virtually augmented environments.
Lastly, the improved wearability of the prototype represents an advancement, as participants reported an average anticipated wearability time of approximately 1.2 h (or 72 min), doubling the previous version’s wearability time. This improvement is crucial for longer virtual experiences, such as immersive content-creation tasks, where user comfort and sustained engagement are essential.
Using the definitions stated above, these results suggest that there is no significant difference in the sense of presence within these three environments (based on the results of UEQ). However, there is a potential for differences in levels of immersion (based on the performance results), as the level of immersion depends on the properties of a system, such as clarity, responsiveness, smoothness, precision, endurance, stability, and comfort.
Overall, our findings suggest that the implemented hardware and software enhancements in the prototype led to a more comfortable and immersive user experience. The integration of VR and MR segments, combined with the refined hardware and software components, positively influenced user interactions and performance within the virtual environment. The positive feedback received from participants further validates the successful implementation of the proof of concept. These outcomes highlight the potential of our prototype in diverse applications, ranging from training simulations to content creation or computer-aided design, where users can benefit from free-range haptic and tactile interfaces, thereby bridging the gap between the virtual and physical worlds.

Future Work

Currently, the haptic stylus’s base is not being explicitly tracked; rather, it is positioned based on the user’s height and arm’s length. The Unity scene takes into account the distance from the user’s torso (X-axis) and chin (Y-axis), which determines the virtual representation of the stylus and its offset from the headset’s anchor point. However, this setup has a limitation. If a user leans sideways without moving their hips, the virtual stylus follows such movement, while the real stylus remains stationary, causing a tracking disconnect.
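A minimal sketch of this headset-anchored placement is shown below (the field names and offset values are illustrative); because everything hangs off the headset anchor, leaning the head without moving the hips shifts the virtual base while the physical base stays put, which is exactly the disconnect described above.

```csharp
using UnityEngine;

// Illustrative sketch of the current (untracked) stylus-base placement: the base is
// offset from the headset anchor by per-user distances measured from the torso and chin.
public class StylusBasePlacement : MonoBehaviour
{
    public Transform headsetAnchor;         // HMD-tracked anchor
    public float forwardFromTorso = 0.35f;  // metres, measured per user (X)
    public float downFromChin     = 0.25f;  // metres, measured per user (Y)

    void LateUpdate()
    {
        // Project the headset's forward direction onto the horizontal plane so the
        // virtual base stays level in front of the user.
        Vector3 forward = Vector3.ProjectOnPlane(headsetAnchor.forward, Vector3.up).normalized;
        transform.position = headsetAnchor.position
                           + forward * forwardFromTorso
                           + Vector3.down * downFromChin;
        transform.rotation = Quaternion.LookRotation(forward, Vector3.up);
    }
}
```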
To overcome this limitation in the future, we propose the use of an additional pair of cameras for image or object recognition. Deploying technologies like OpenCV or other image-processing frameworks would enable an analysis of the real space around the user, allowing accurate estimation of the true position of the haptic interface. This improvement would enhance tracking accuracy, providing a more realistic experience for users.
Additionally, the current prototype only provides force-feedback for one hand, while immersive experiences often involve both hands. Hence, enabling force-feedback for bimanual manipulation would be beneficial. One idea is to replace the stylus with a haptic glove attached to the gimbal part of the device at the wrist, allowing for the positional arrest of the user’s arm and individual finger simulation, including touching and grabbing. This concept could be scaled to a bimanual setup where two Touch devices are used.
Furthermore, the current system only facilitates tactile perception with pre-made virtual elements within a scene. However, with future enhancements involving depth cameras or environment-scanning technologies, real-world objects could be rendered in real-time into simplified virtual representations. This extension would usher in the realm of augmented virtuality, seamlessly blending real and virtual objects.
The potential outcome of these developments is the creation of a “portable room” experience for users, regardless of their physical location. This advancement opens up exciting possibilities for various applications—such as remote collaboration, training simulations, and interactive experiences—that merge the virtual and physical worlds. The combination of improved tracking accuracy and augmented virtuality would provide users with immersive and engaging experiences beyond the confines of traditional virtual environments.

6. Conclusions

In conclusion, analysis of our ambulatory haptic system found no significant differences in user experiences between virtual and mixed reality segments, indicating seamless integration. The hardware enhancements positively impacted interactions, improving performance and precision. Participants reported enhanced wearability, doubling the previous version’s duration of comfortable use. Positive feedback validated the prototype’s success, showcasing its potential in various applications and bridging the virtual and physical worlds. Future developments in VR and MR technologies can build upon these findings, enriching immersive interactions via haptic feedback.

Author Contributions

The primary development of this project was carried out by the first author, P.K., under the guidance and support of the supervising professor, M.C. All authors have read and agreed to the published version of this manuscript.

Funding

The Spatial Media Group at the University of Aizu provided financial support for the development of the prototype of the wearable haptic device.

Institutional Review Board Statement

Subjective experiments complied with the requirements established by the University of Aizu Research Ethics Committee.

Informed Consent Statement

Prior to their participation, experimental participants provided written consent after being fully informed about the contents of this study. Additionally, explicit written consent was obtained from the subjects regarding publication of any potentially identifiable images or data.

Data Availability Statement

Publicly available datasets were analyzed in this study. These data can be found online: https://github.com/peterukud/WearableHapticsVRMR (accessed 23 July 2023).

Acknowledgments

We express our gratitude to Julián Alberto Villegas Orozco, Camilo Arévalo Arboleda, Wen Wen, James Pinkl, and experimental subjects for their valuable support during the implementation of this project. Additionally, we extend our appreciation to the anonymous referees for their useful and thoughtful suggestions on this submission.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations, initializations, and acronyms are used in this manuscript:
3D Three-dimensional
API Application programming interface
AR Augmented reality
CAD Computer-aided design
DLL Dynamic-link library
GTDD Geomagic touch device drivers
HDAPI Haptic device API
HLAPI Haptic library API
LAN Local area network
HMD Head-mounted display
MRTK Mixed reality toolkit
SDK Software development kit
USB Universal serial bus
VR Virtual reality
XR Extended reality
MR Mixed reality

References

  1. Skarbez, R.; Smith, M.; Whitton, M.C. Revisiting Milgram and Kishino’s Reality-Virtuality Continuum. Front. Virtual Real. 2021, 2, 647997.
  2. Hillmann, C. Comparing the Gear VR, Oculus Go, and Oculus Quest. In Unreal Mob. Standalone VR; Apress: Berkeley, CA, USA, 2019; pp. 141–167.
  3. Azuma, R.; Baillot, Y.; Behringer, R.; Feiner, S.; Julier, S.; MacIntyre, B. Recent advances in augmented reality. IEEE Comput. Graph. Appl. 2001, 21, 34–47.
  4. Billinghurst, M.; Kato, H. Collaborative augmented reality. Commun. ACM 2002, 45, 64–70.
  5. Burdea, G. Force and Touch Feedback for Virtual Reality; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 1996.
  6. Slater, M. Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philos. Trans. R. Soc. B Biol. Sci. 2009, 364, 3549–3557.
  7. Dangxiao, W.; Yuan, G.; Shiyi, L.; Yuru, Z.; Weiliang, X.; Jing, X. Haptic display for virtual reality: Progress and challenges. Virtual Real. Intell. Hardw. 2019, 1, 136.
  8. Perret, J.; Vander Poorten, E. Touching Virtual Reality: A Review of Haptic Gloves. In Proceedings of the ACTUATOR International Conference on New Actuators, Bremen, Germany, 25–27 June 2018; pp. 1–5.
  9. Kang, D.; Lee, C.G.; Kwon, O. Pneumatic and acoustic suit: Multimodal haptic suit for enhanced virtual reality simulation. Virtual Real. 2023, 1, 1–23.
  10. Valentini, P.P.; Biancolini, M.E. Interactive Sculpting Using Augmented-Reality, Mesh Morphing, and Force Feedback: Force-Feedback Capabilities in an Augmented Reality Environment. IEEE Consum. Electron. Mag. 2018, 7, 83–90.
  11. Tzemanaki, A.; Al, G.A.; Melhuish, C.; Dogramadzi, S. Design of a wearable fingertip haptic device for remote palpation: Characterisation and interface with a virtual environment. Front. Robot. AI 2018, 5, 62.
  12. Kishishita, Y.; Das, S.; Ramirez, A.V.; Thakur, C.; Tadayon, R.; Kurita, Y. Muscleblazer: Force-feedback suit for immersive experience. In Proceedings of the IEEE Conference on Virtual Reality and 3D User Interfaces (VR), Osaka, Japan, 23–27 March 2019; pp. 1813–1818.
  13. Son, E.; Song, H.; Nam, S.; Kim, Y. Development of a Virtual Object Weight Recognition Algorithm Based on Pseudo-Haptics and the Development of Immersion Evaluation Technology. Electronics 2022, 11, 2274.
  14. Bermejo, C.; Hui, P. A Survey on Haptic Technologies for Mobile Augmented Reality. ACM Comput. Surv. 2021, 54, 184.
  15. Kudry, P.; Cohen, M. Prototype of a wearable force-feedback mechanism for free-range immersive experience. In Proceedings of the RACS International Conference on Research in Adaptive and Convergent Systems, Virtual, 3–6 October 2022; pp. 178–184.
  16. Haptic Devices; 3D Systems: Rock Hill, SC, USA, 2022.
  17. About the Oculus XR Plugin | Oculus XR Plugin | 3.3.0. 2023. Available online: https://docs.unity3d.com/Packages/com.unity.xr.oculus@3.3/ (accessed on 16 June 2023).
  18. MRTK2-Unity Developer Documentation—MRTK 2 | Microsoft Learn. Available online: https://learn.microsoft.com/en-us/windows/mixed-reality/mrtk-unity/mrtk2/?view=mrtkunity-2022-05 (accessed on 16 June 2023).
  19. OpenHaptics® Toolkit Version 3.5.0 API Reference Guide Original Instructions. 2018. Available online: https://s3.amazonaws.com/dl.3dsystems.com/binaries/Sensable/OH/3.5/OpenHaptics_Toolkit_API_Reference_Guide.pdf (accessed on 16 June 2023).
  20. OpenHaptics® Toolkit Version 3.5.0 Programmer’s Guide. 2018. Available online: https://s3.amazonaws.com/dl.3dsystems.com/binaries/Sensable/OH/3.5/OpenHaptics_Toolkit_ProgrammersGuide.pdf (accessed on 16 June 2023).
  21. Kudry, P.; Cohen, M. Development of a wearable force-feedback mechanism for free-range haptic immersive experience. Front. Virtual Real. 2022, 3.
  22. Home | VirtualHere. Available online: https://www.virtualhere.com/ (accessed on 16 June 2023).
  23. Interaction SDK Overview | Oculus Developers. Available online: https://developer.oculus.com/ (accessed on 16 June 2023).
  24. Schrepp, M.; Hinderks, A.; Thomaschewski, J. Construction of a benchmark for the User Experience Questionnaire (UEQ). Int. J. Interact. Multimed. Artif. Intell. 2017, 4, 40.
  25. Schrepp, M.; Hinderks, A.; Thomaschewski, J. Design and evaluation of a short version of the User Experience Questionnaire (UEQ-S). Int. J. Interact. Multimed. Artif. Intell. 2017, 4, 103.
  26. R Core Team. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria, 2018.
  27. R Core Team. lm: Fitting Linear Models; R Foundation for Statistical Computing: Vienna, Austria, 2023.
  28. Lenth, R.V. emmeans: Estimated Marginal Means, aka Least-Squares Means, R Package Version 1.8.7. 2023. Available online: https://cran.r-project.org/web/packages/emmeans/index.html (accessed on 16 June 2023).
  29. R Core Team. glmer: Fitting Generalized Linear Mixed-Effects Models; R Foundation for Statistical Computing: Vienna, Austria, 2023.
  30. Wilkinson, M.; Brantley, S.; Feng, J. A Mini Review of Presence and Immersion in Virtual Reality. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2021, 65, 1099–1103.
  31. Slater, M. Immersion and the illusion of presence in virtual reality. Br. J. Psychol. 2018, 109, 431–433.
Figure 1. Virtual environment showcasing haptic simulation. Shape-sorting activity, two sculptures (nested balloon and a magnetic sphere), model of a Stanford bunny, glass and wood material simulation, and five distinct tactile effects.
Figure 2. System assembly profile depicting the wearable harness and its adjustability, supporting the devices used for personal sensing. Head-mounted display (Meta Quest 2) and modified haptic force-feedback stylus (3D Systems Touch).
Figure 3. System assembly front view.
Figure 4. System assembly rear view showing the Haptics logic board, as well as the devices enabling wireless communication (Raspberry Pi 3 and a 5-GHz-capable Wi-Fi router), their power source, and the wired connections between Pi 3 and haptics logic board.
Figure 5. Wireless software architecture depicting the four devices used in this prototype, their operating systems, applications, and libraries utilized, as well as physical connections via cables and wireless data links, including the type of data being transmitted.
Figure 6. The mixed reality scene showcases virtual Jenga and shape-sorter desks created with computer graphics, overlaid onto photographic pass-through imagery rendered as a skybox within the Unity game engine. This scene incorporates three types of personal sensing: hand-tracking, head-tracking, and stylus-sensing. The user’s hands are accurately represented by their corresponding 3D models. However, the virtual stylus is positioned slightly in front of the real electro-mechanical device to avoid the need for the user to be in close proximity to interactable objects.
Figure 7. Similar to the mixed reality scene, the virtual reality scene showcases virtual Jenga and block-sorter desks. However, the seemingly real environment lacks the dynamicity that the MR provides as it is a static texture map.
Figure 8. Comparison of User Experience Questionnaire Scores in (A) VR using the previous version (“v1”) of hardware, (B) VR using the latest version (“v2”) of hardware, and (C) MR using the latest version of hardware. Used confidence level: 95%.
Figure 9. Comparison of performance results in (A) VR using the previous version (“v1”) of hardware, (B) VR using the latest version (“v2”) of hardware, and (C) MR using the latest version of hardware. Used confidence level: 95%.
Table 1. User Perceptions and Responses to Prototype Enhancements in VR and MR Segments.
Statement | Mean ± SD
The VR mode was immersive. | 5.5 ± 0.92
The MR mode was immersive. | 6.1 ± 0.83
The wearable harness was comfortable. | 5.9 ± 1.22
The harness was heavy. | 2.5 ± 0.81
The tethered connection bothered me. | 2.7 ± 1.27
The intensity of simulated force-feedback was adequate. | 5.7 ± 1.27
The stylus and hand-tracking were adequately precise and reliable. | 5.1 ± 1.51
The transition between stylus use and hand-tracking was seamless. | 5.5 ± 1.12
