Article

Development of a Virtual Object Weight Recognition Algorithm Based on Pseudo-Haptics and the Development of Immersion Evaluation Technology

Eunjin Son, Hayoung Song, Seonghyeon Nam and Youngwon Kim
Korea Electronics Technology Institute, Gwangju 61011, Korea
*
Author to whom correspondence should be addressed.
Electronics 2022, 11(14), 2274; https://doi.org/10.3390/electronics11142274
Submission received: 14 June 2022 / Revised: 13 July 2022 / Accepted: 19 July 2022 / Published: 21 July 2022

Abstract

In this work, we propose a qualitative immersion evaluation technique based on a pseudo-haptic, user-customized virtual object weight recognition algorithm and an immersive experience questionnaire (IEQ). The proposed weight recognition algorithm considers the moving speed of a virtual object manipulated through natural hand tracking with the camera in a VR headset, together with a realistic offset derived from the speed of lifting objects of known weight in real space. A customized standard speed is defined for each user to enable customized weight recognition, and an experiment measuring the speed of lifting objects of different weights in real space provides the weighting that keeps the offset natural. To evaluate the weight perception and immersion produced by the developed simulation content, participants completed three IEQ-based immersion surveys. Based on the analysis of the questionnaire responses and follow-up interviews, the proposed immersion evaluation technique shows whether a realistic tactile experience can be evaluated in VR content. We expect that the proposed weight recognition algorithm and evaluation technology can be applied to fields such as content production and service support, in line with market demand in the rapidly growing VR, AR, and MR fields.

1. Introduction

Research continues on increasing the utilization of VR devices, as lighter and more convenient headsets improve accessibility and usability for ordinary users [1]. VR devices are applied in industries such as education, medical care, and entertainment, but several problems remain before they can be fully commercialized [2]. The most important of these is providing users with realistic stimuli and experiences [3]. To this end, a realistic haptic experience should be provided in the virtual environment to induce a higher sense of immersion and to enable appropriate interaction. Various studies [4,5,6] therefore continue to pursue realistic tactile experiences in VR environments. Manipulating virtual objects is how users interact in a VR environment, and it is a key factor in increasing the immersion of virtual reality content [7]. Representative interactions with virtual objects mainly involve handling them: lifting, catching, throwing, picking up, and rolling. The weight a user feels when lifting a virtual object, one of the most basic interactions, is typically conveyed through visual and haptic feedback; pseudo-haptics is a technique that simulates tactile sensations in a virtual environment by exploiting visual feedback and the properties of human visual-touch perception [8]. Pseudo-haptic research has mainly focused on providing an appropriate sense of force when grasping virtual objects through haptic interfaces. Haptic interfaces such as haptic gloves are attached to the user's arms, hands, and fingers and are adjusted to convey weight by applying a force appropriate to the weight of the virtual object while it is being lifted [9,10,11]. In addition, research is being conducted to let users feel the weight of several objects by varying the degree of vibration and pressure felt on the skin according to the weight of each virtual object [12,13,14]. Weight perception in VR environments is one of the most important tactile properties contributing to realistic interactions with virtual objects and immersive experiences, and most methods of conveying the weight of virtual objects rely on manipulating either physical sensation or visual information [15]. Physical sensation has been produced with electrical muscle stimulation or controller vibration, while visual information has conveyed weight by applying an offset between the real and virtual hands [3,16,17]. Although these methods increase immersion in a VR environment, the additional devices (controllers, electrical muscle stimulation devices, etc.) are costly and can make interaction with users difficult, thereby lowering immersion. To address this problem, we propose a pseudo-haptic-based, user-customized virtual object weight recognition algorithm that uses only the cameras in a VR headset, combining natural hand tracking with a visual illusion effect.
In this study, we propose a method to convey the weight of a virtual object and a method to qualitatively measure the user's immersion in the content, focusing on weight recognition implemented with a camera-based offset and no separate haptic device in the VR headset environment. The pseudo-haptic-based virtual object weight recognition algorithm induces the user to feel the weight of a user-customized virtual object without a controller, and it sets a natural offset threshold by incorporating the results of an experiment measuring object-lifting speed in real space so that the offset remains realistic. In addition, we measure the weight perception and content immersion produced by the proposed algorithm by asking experimental participants to complete an evaluation questionnaire.
Our proposed pseudo-haptic-based virtual object weight recognition method conveys the weights of virtual objects lifted in a VR environment while the real hands are tracked. By creating a deliberate error in the distance between the mapped 3D hand model and the lifted virtual object, we create a visual illusion of weight and qualitatively measure immersion in the given environment. Here, the visual illusion effect means that the algorithm developed in this paper decouples the user's hand position in the XR content from the user's hand position in real space, so that the hand position shown to the user differs from the actual hand position. We expect that this study will provide algorithms and pilot content for pseudo-haptics technology based on this visual illusion effect, enabling more detailed research in the area and contributing to the development of a cultural content industry built on increased immersion.
Our main suggestions for weight sensing are as follows:
  • Development of a decoupling algorithm between the user's hand position in the VR environment and the hand position in real space;
  • A software-only weight recognition approach that takes into account the volume, density, and speed of a virtual object, based on hand tracking with the camera in a VR headset;
  • Development of a customized virtual object weight algorithm based on the speed of lifting a standard object in real space;
  • An experiment measuring the speed of lifting objects of different weights in real space, whose result is used as a weighting factor in the algorithm;
  • Because users perceive weight differently depending on their VR experience and degree of immersion, each user lifts a golf ball in real space and this speed is measured and defined as the customized standard speed;
  • A study showing the weight perceived by subjects using the proposed approach;
  • Development of a qualitative evaluation technique based on the immersive experience questionnaire (IEQ) within a VR environment;
  • A qualitative user experience evaluation based on the developed pseudo-haptics technology and an analysis of the results.

2. Related Work

Multi-sensory feedback, such as the sense of touching a virtual object in a VR environment, provides a high degree of immersion to the user [3,18,19]. Among the senses, touch can explore texture, hardness, temperature, and weight, and it is a major factor in improving user immersion in VR content [20]. Accordingly, many studies related to tactile sensation have been conducted, and pseudo-haptic feedback is being studied for various applications [4,21]. Pseudo-haptic feedback helps one explore VR content, for example by conveying the size, texture, and weight of an object [22]. There are two main approaches to such feedback: one detects the actual weight by adding separate equipment, and the other adds visual cues [2,23]. In this study, a pseudo-haptic-based virtual object weight recognition algorithm is implemented to induce a sense of weight for a virtual object, and a more realistic offset is considered in order to set a natural offset threshold according to weight.

2.1. Pseudo-Haptics

Rietzler et al. [16] conveyed virtual object weight through intentional offsets applied to virtual objects in a virtual environment. Through experiments quantifying the perceptual threshold of the offset, they suggested that participants could infer object weight from its visual volume. However, they used VR headset controllers to track the hands, which reduced immersion, and they did not consider volume or speed in implementing virtual object weights. In a related study, Kim et al. [24] showed that the effect of visual pseudo-haptic feedback differs depending on the size of the virtual object. Another study by Kim et al. [3] examined a multi-sensory pseudo-haptic effect combining control-to-display ratio manipulation and electrical muscle stimulation. These studies showed that multi-sensory pseudo-haptic feedback can produce stronger simulated sensations but tends to be somewhat device-dependent. These earlier studies, conducted at the early stage of university-centered technology development, relied on equipment including controllers. In addition, although they address weight recognition using offset-based visual illusion effects, their purpose is to quantify the offset threshold for weight recognition, so they do not provide a systematic algorithm relating the weight and moving speed of a virtual object. They also show that the offset threshold at which weight is felt differs across participants, but they have the limitation of not providing a customized weight algorithm.

2.2. Muscle Strength for One Hand

Mo et al. [25] reviewed the maximum weight that does not cause fatigue during one-handed lifting and reported that men had a relatively higher recommended weight than women; for repetitive motion, it is recommended that the weight be reduced by 30%. According to Kim [26], the static maximum muscle strength when lifting with one hand differs according to the height of exertion, while the dynamic maximum muscle strength shows no statistically significant difference between one hand and both hands or between the left and right hands. Based on this research, in order to set a natural offset threshold according to weight, we conducted an experiment measuring the speed of lifting an object of a given weight in real space, and the result of the experiment was used as a weighting factor to implement a more realistic offset.

3. Designing a Virtual Object Weight Algorithm

This chapter describes the design of the virtual object weight algorithm, which considers the density, volume, and lifting speed of virtual objects in a VR environment.
The proposed algorithm is based on a pseudo-haptic visual illusion effect. When the user lifts a virtual object in a VR environment, the difference in weight between virtual objects is expressed as an offset, i.e., the distance between the virtual object and the user's hand. The algorithm sets the speed at which a standard object (a golf ball) is lifted in real space as the user's standard speed. Based on this, when objects of different weights are lifted, the algorithm deliberately controls the speed at which the virtual object rises, thereby inducing customized weight recognition. In addition, to reflect the perception of real-world weights, we measured the speed of lifting objects of different weights in real space and used the resulting values as weighting factors for defining a realistic offset. Considering that immersion can be weakened by an excessive offset during a relatively long interaction, the algorithm slows the moving speed further as the distance between the virtual object and the hand increases. The proposed algorithm ultimately derives the moving speed v_final_virtual of the object as a function of the weight of the virtual object.
Figure 1 shows the basic operating principle of the pseudo-haptic customized virtual object weight algorithm based on the visual illusion effect. Two speeds drive personalized weight recognition: v_real, the actual movement speed of the user's hand, and v_vr_obj, the speed at which the virtual object rises according to its weight. Users perceive weight differently depending on their VR experience and degree of immersion. The algorithm therefore induces user-customized weight recognition based on the speed of lifting a standard-weight object (a golf ball) in real space, and this speed affects the rate at which a virtual object of any other weight is lifted (see Figure 1).

3.1. Ideas and Algorithms

When lifting an object in real space, there is a weight W determined by the mass m of the object and the gravitational acceleration g, and a physical force F_physical exerted by the muscles used to lift it. In a VR environment, however, no force acts on a camera-tracked hand lifting a virtual object. We call the corresponding force implemented to mimic this in a VR environment the virtual force F_virtual. In real space, the mass m of an object is determined by its density ρ and volume V, and the moving speed of the object is determined by the force lifting it, none of which is normally considered in a VR environment. Therefore, we propose a virtual object weight algorithm that takes into account the weight of the virtual object, a customized moving speed, and a realistic offset derived from the speed of lifting objects of the corresponding weight in real space.
For customized virtual object weight recognition, each user lifts an object of the standard weight so that the y-axis movement speed can be defined as that user's customized speed, and the speed of the virtual object is controlled according to its volume and density. To set a natural offset threshold according to weight, the result of the experiment measuring the speed of lifting objects of different weights in real space is incorporated so that the offset is more realistic.
The constraints of our virtual object weight algorithm were as follows:
  • The virtual object can be lifted only on the y-axis in a three-dimensional VR environment;
  • The location of each virtual object is represented by local coordinates and moves in a positive space greater than or equal to 0 (the y-axis coordinates of a virtual object cannot be negative);
  • The unit time for measuring the average speed of a virtual object is defined as 1 s;
  • The densities of the virtual objects compared are the same.
The proposed virtual object weight recognition algorithm was implemented so that when two virtual objects of different weights are lifted in a VR environment, the heavier object moves a shorter distance even if the real hand is raised to the same height. In addition, when two virtual objects of the same weight are lifted to the same height, the object lifted with the faster hand movement moves further, taking into account the instantaneous acceleration of the 3D hand model lifting the virtual object:
W = m × g  (1)
In Equation (1), m is the mass determined by the density and volume of the object, g is the gravitational acceleration, and W is the weight of the object.
When lifting a virtual object in a VR environment while considering its weight, an object with a standard weight (W_standard) that can be lifted at the same speed as the tracked 3D hand model is required:
W_standard = 0.045 kg × 9.80665 m/s²  (2)
In Equation (2), W_standard is the standard weight of the virtual object required to measure the user's lifting force, obtained by applying the gravitational acceleration to the mass of a typical golf ball (0.045 kg). To consider the speed of lifting a virtual object, a standard speed at which the virtual object can move together with the 3D hand model in the same position is required. Because the speed of lifting a virtual object differs between users, a standard speed is needed for each user. The initial speed at which a user lifts a virtual object of weight W_standard with a force F_v_standard is called v_standard:
F_v_standard = W_standard × v_real_st  (3)
Equation (3) shows the relationship between the standard weight W_standard and v_real_st according to the customized speed v_standard of lifting the object. The customized standard speed v_standard for lifting a virtual object of the standard weight W_standard defined in Equation (2) determines v_real_st, the speed of the real hand lifting the standard-weight object:
v_virtual = v_real_v × (W_standard / W_virtual)  (4)
Equation (4) gives the speed v_virtual of lifting a virtual object other than the standard weight. The equation was cross-verified through comparisons with W_standard and v_real_v.
The cases of cross-verification are as follows:
  • The weight is lighter and the speed is faster;
  • The weight is lighter and the speed is slower;
  • The weight is heavier and the speed is the same;
  • The weight is lighter and the speed is the same;
  • The weight is heavier and the speed is slower.
According to Equation (4), the moving speed v_virtual of the virtual object, which takes the weight of each virtual object into account, may be derived. However, the speed derived in this way is not realistic. Therefore, a more realistic offset was realized through the experiment measuring the speed of lifting an object according to its weight:
v_experiment = 1.6793 × e^(−0.24 × W_virtual),  (5)
Equation (5) is the formula derived from the experiment measuring the one-handed lifting speed according to weight in real space; it describes how the speed of lifting a real object changes with its weight, where v_experiment is the lifting speed of a real object derived from its actual weight:
v_final_virtual = (v_virtual + v_experiment) / 2,  (6)
Equation (6) combines (averages) the speed v_virtual derived from Equation (4) and the speed v_experiment derived from the experiment. Through this equation, a more realistic offset, i.e., the distance between the hand and the virtual object, is induced. The speed of the user-customized virtual object considering its weight is then determined by the derived speed v_final_virtual. However, considering that the sense of immersion weakens when an excessive offset persists during a relatively long interaction, the algorithm is designed to slow the moving speed further as the distance between the virtual object and the hand increases.
If the distance between the center point of the virtual object and the tracked 3D hand model exceeded 0.2 m during the lifting operation, the speed v_final_virtual obtained from Equation (6) was corrected by gradual deceleration. After checking the distance between the virtual object and the hand, the y-axis movement was corrected by +0.02 (faster) if the object was lighter than W_standard and by −0.02 (slower) if the object was heavier than the standard weight.
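To make the computation concrete, the following is a minimal Python sketch of how Equations (2) to (6) and the distance-based correction described above could be combined. The function names (virtual_speed, experiment_speed, final_virtual_speed) are our own; Equation (4) is read here as scaling the hand speed by the ratio W_standard/W_virtual, and the sketch assumes the fitted exponent in Equation (5) takes the weight expressed in kilograms. It is an illustration, not the paper's Unity implementation.

```python
import math

G = 9.80665                 # gravitational acceleration (m/s^2)
W_STANDARD = 0.045 * G      # Equation (2): weight of a standard golf ball (N)

def virtual_speed(v_real_v: float, w_virtual: float) -> float:
    """Equation (4): hand speed scaled by the ratio of standard weight to virtual weight (N)."""
    return v_real_v * W_STANDARD / w_virtual

def experiment_speed(w_virtual_kg: float) -> float:
    """Equation (5): real-space lifting speed as a function of the lifted mass (kg)."""
    return 1.6793 * math.exp(-0.24 * w_virtual_kg)

def final_virtual_speed(v_real_v: float, w_virtual: float, hand_obj_distance: float) -> float:
    """Equation (6) averaged speed, plus the distance-based correction from Section 3.1."""
    v = (virtual_speed(v_real_v, w_virtual) + experiment_speed(w_virtual / G)) / 2.0
    # If the offset between hand and object exceeds 0.2 m, nudge the speed so the
    # offset stops growing during long interactions (lighter -> faster, heavier -> slower).
    if hand_obj_distance > 0.2:
        v += 0.02 if w_virtual < W_STANDARD else -0.02
    return v

# Example: a virtual object weighing 2 kg x g, hand moving upward at 0.8 m/s,
# object currently lagging 0.25 m below the hand.
print(final_virtual_speed(0.8, 2.0 * G, 0.25))
```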

3.2. Experiment to Measure the Speed of a One-Handed Lift According to the Weight in Real Space

In order to set a natural offset according to the weight of an object in a VR environment, an experiment was conducted to measure the one-handed lifting speed according to weight in real space.

3.2.1. Participants and Experimental Tools

A simple experiment was conducted with one woman in her 20s, using five exercise dumbbells of different weights. Based on the maximum weight that does not cause fatigue during one-handed lifting reported by Mo et al. [25], the maximum weight was set to 5 kg, and the five weights were 12.25 N, 19.60 N, 24.50 N, 39.20 N, and 49.00 N (see Figure 2).

3.2.2. Experimental Procedures and Methods

Taking into account the simplicity of the procedure for the participant, the moving speed of the object was measured over a lift from palm-center height to the top of the head, a distance of 78.00 cm (approximately 30.71 in), chosen with reference to the general knuckle and elbow heights. The dynamic muscle strength during the lifting operation was measured for the five weights of 12.25 N, 19.60 N, 24.50 N, 39.20 N, and 49.00 N. The speed measurement was performed with the right hand, the participant's dominant hand, and the lifting operation was repeated six times for each of the five weights. The experimental procedure was organized as follows (see Figure 3):
  • The experimenter set the camera position and angle in consideration of the height of the participant;
  • The position and angle of the camera filming the participant were fixed;
  • Participants lifted an object weighing 12.25 N using one hand;
  • The lifting of an object of the same weight was repeated six times;
  • The experimenter measured the movement time of the object from the center of the participant’s palm to the top of the head;
  • Repeated lifting of objects weighing 19.60 N, 24.50 N, 39.20 N, and 49.00 N (steps 2–5);
  • The experimenter used the measured times to calculate the speed of lifting objects of each weight;
  • A formula fitting the lifting speed as a function of weight was derived from the calculated speeds.

3.2.3. Speed Measurement Experiment Results

The measured moving speeds for the lifting task were averaged for each weight to derive an appropriate equation. The lighter the object, the larger the deviation in its measured lifting speed, so using the average speed for each weight made it easier to suppress outliers than fitting the individual measurements as a whole (see Figure 4).
The resulting value of the experiment is as shown in Equation (5).
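For illustration, the sketch below shows how such a fit could be computed: it averages six repetitions per weight and fits an exponential curve with SciPy's curve_fit. The timing values are placeholders invented for the example, not the data measured in this experiment.

```python
import numpy as np
from scipy.optimize import curve_fit

LIFT_DISTANCE_M = 0.78          # lift distance from palm-center height (m)

# Placeholder timings (s): six repetitions for each of the five weights.
# These are illustrative values only, not the data measured in the paper.
times_s = {
    1.25: [0.62, 0.65, 0.60, 0.66, 0.63, 0.64],   # 12.25 N
    2.00: [0.74, 0.78, 0.72, 0.77, 0.75, 0.76],   # 19.60 N
    2.50: [0.84, 0.88, 0.83, 0.86, 0.85, 0.87],   # 24.50 N
    4.00: [1.10, 1.15, 1.08, 1.12, 1.11, 1.14],   # 39.20 N
    5.00: [1.30, 1.36, 1.28, 1.33, 1.31, 1.35],   # 49.00 N
}

weights_kg = np.array(sorted(times_s))
mean_speeds = np.array([LIFT_DISTANCE_M / np.mean(times_s[w]) for w in weights_kg])

def exp_model(w, a, b):
    """Exponential decay of lifting speed with weight, the form used in Equation (5)."""
    return a * np.exp(b * w)

(a, b), _ = curve_fit(exp_model, weights_kg, mean_speeds, p0=(1.5, -0.2))
print(f"v_experiment(W) = {a:.4f} * exp({b:.3f} * W)")
```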

3.3. Exclude Inertia

If only the weight due to gravity and the muscle force lifting the object are applied in the VR environment, large inertia occurs: it takes a relatively long time for the object to stop, and the virtual object shakes. Therefore, we remove the falling inertia that exists at the moment the object is grasped in order to reduce the maximum inertia of the object. When the tracked 3D hand model passes through a weighted virtual object in the VR environment, the hand model applies a force F to lift the object at that moment so that the virtual object does not fall to the floor, and thereafter the virtual object is moved using the y-axis position of the tracked hand.
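A minimal sketch of this idea follows, using hypothetical class and method names rather than the actual Unity implementation: on grasp, the accumulated falling velocity is cleared, and afterwards the object's vertical position is driven from the tracked hand and the weight-dependent speed instead of the physics simulation.

```python
class WeightedVirtualObject:
    """Illustrative model of a virtual object whose vertical motion is driven by the hand."""

    def __init__(self, weight_n: float):
        self.weight_n = weight_n
        self.y = 0.0
        self.fall_velocity = 0.0
        self.grabbed = False

    def on_grab(self):
        # Clear any accumulated falling velocity so no residual inertia
        # makes the object overshoot or shake after it is picked up.
        self.fall_velocity = 0.0
        self.grabbed = True

    def update(self, hand_y: float, speed_scale: float, dt: float):
        if self.grabbed:
            target = max(hand_y, 0.0)          # y-coordinates cannot be negative
            if target > self.y:
                # Rise toward the hand at the weight-dependent speed
                # (speed_scale would come from the v_final_virtual computation).
                self.y = min(self.y + speed_scale * dt, target)
            else:
                self.y = target
        else:
            # Simple free fall when not held, clamped at the floor.
            self.fall_velocity -= 9.80665 * dt
            self.y = max(self.y + self.fall_velocity * dt, 0.0)
```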

3.4. Hand Tracking

In this study, the controller was not used, in favor of a natural user interface (NUI) for natural interaction; hand tracking with the camera in the VR headset was used for interaction instead. Hand tracking using the Oculus Interaction Software Development Kit (SDK) can improve user immersion in a VR environment.

4. Design and System Implementation of Simulated Content

In this chapter, we describe the construction of a virtual environment for virtual object weight recognition algorithm-based simulation content and the method of hand tracking through a camera within a VR headset. We also describe an experimental simulation for immersion evaluation based on the proposed algorithm.

4.1. Building a Virtual Environment

We used an Oculus Quest 2, one of the most popular VR headsets, as the equipment for running the proposed virtual object weight recognition algorithm simulation software. Our authoring environment was developed with Unity 2019.4.18f1. The virtual environment includes tasks implemented to show interactions with virtual objects.
Based on the pseudo-haptic virtual object weight recognition algorithm proposed in this study, we implemented a virtual experience space for the simulation content. The 3D virtual experience space and a table model on which the virtual objects are placed were constructed from simple objects available in the Unity Asset Store (see Figure 5).

4.2. User and 3D Virtual Object Interaction

The Oculus Interaction SDK version 38.0, provided by Meta, the manufacturer of the Oculus Quest 2, was used to recognize the user's hand movements. Using the camera in the VR headset, the hands were recognized without additional equipment and mapped to a hand model in the VR environment, which was used to interact with the 3D virtual objects to which the weight algorithm was applied.
When the content started, three virtual objects floated in the virtual reality (VR) environment, along with a 3D hand model to which the hands recognized by the VR headset's camera were mapped. The three floating virtual objects had different sizes, and their weights and renderings could differ depending on the density assigned to each object. The user could lift more than one object using both hands, and a lifted virtual object moved in the y-axis direction. To experience the weight recognition algorithm, the user had to lift each virtual object at least once. One of the three virtual objects had the standard weight, so it moved together with the hand model; however, when a virtual object heavier than the standard weight was lifted, the object fell short of the position of the 3D hand model.
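To make the lag concrete, the toy loop below (illustrative numbers only, not values from the content) simulates one second of lifting a heavier-than-standard object and prints the hand-object offset that the user perceives as weight.

```python
# Hand rises at v_hand while a heavier-than-standard object rises at v_obj
# (v_obj would come from the weight algorithm); the growing gap is the offset
# that produces the pseudo-haptic impression of weight.
DT = 1.0 / 72.0                     # a typical VR frame time (s), assumed here
v_hand, v_obj = 0.8, 0.5            # m/s, illustrative values

hand_y = obj_y = 0.0
for _ in range(72):                 # one second of simulated lifting
    hand_y += v_hand * DT
    obj_y += v_obj * DT
print(f"offset after 1 s: {hand_y - obj_y:.2f} m")   # 0.30 m gap below the hand
```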

4.3. System Implementation and Enforcement

Figure 6 shows a scene from the simulation content running the proposed algorithm.

5. Questionnaire Development

As various media such as games and videos have appeared, questionnaires measuring immersion suited to the characteristics of each medium have been developed. Jennett et al. [27] prepared a questionnaire for experiments measuring immersion in game content. Three experiments were conducted to measure game immersion, with the goal of measuring immersion so as to better understand how participants engage with a game. Overall, the results showed that immersion can be measured not only by objective factors (e.g., task completion time and eye movement) but also by subjective factors (e.g., questionnaires). Rigby et al. [28] developed a questionnaire to measure viewers' immersion when watching video. Based on Jennett's validated game immersion questionnaire, four factors were found through exploratory factor analysis, and the questionnaire was constructed by modifying the items to suit the measurement of video-viewing immersion.
The questionnaire design method used in this study was as follows. It is important to clarify the objectives when developing a questionnaire, so the aspects to be demonstrated through the experimental results were selected first. Based on existing studies [16,17], we judged that perceiving the weight of virtual objects in a VR environment can enhance immersion, and we designed a questionnaire to determine whether and to what degree users perceive weight in a VR environment. For the first experiment, we developed a questionnaire to determine whether a user could detect the weight of an object in the virtual environment, and for the second experiment, we developed a questionnaire to measure the user's immersion. In the second experiment's immersion measurement section, a separate item was added so that participants could rate their own level of immersion. The resulting value must also be considered when developing a questionnaire, and to reduce small errors when rounding the resulting value, a scale of 10 units was chosen. Reflecting the characteristics of the two experiments, the sensitivity of the weight felt by the user and immersion in the content were measured. To select appropriate questions, the items also had to be chosen from the respondent's point of view: technical terms were excluded in favor of intuitive wording that respondents could easily understand, and each question focused on obtaining one specific piece of information so that respondents would not be confused. The questions used in this study were selected to measure weight perception and immersion while reflecting the characteristics of the experiments, and positive and negative questions were mixed to check the consistency of user answers. Using the characteristics of the pseudo-haptic content, the questionnaire items were divided into weight detection measurement (Part 1), immersion measurement (Part 2), and immersion self-evaluation (Part 3) (Appendix A and Appendix B). Weight detection was measured in terms of visual and touch cues, object movement speed, virtual versus real hand position, and weight comparison. For the immersion measurement, items were selected considering the participant's concentration, interest, loss of the sense of time, and sense of reality. Finally, a self-evaluation item was included so that participants could rate how immersed they were in the content.
Specifically, for the weight detection and immersion measurements, participants were asked to what extent they agreed with each statement describing their experience in the experiment. The answers were recorded on a five-point scale, and the immersion self-evaluation consisted of a single item asking how immersed the participant was overall on a scale from 1 to 10. For negatively worded items, the scale was reversed when computing scores. In the first experiment, we validated the implemented virtual object weight recognition algorithm using the weight detection questionnaire, and in the second experiment, we used the immersion measurement and immersion self-assessment questionnaires to measure the user's state of immersion when weight was conveyed in virtual reality through the validated algorithm.
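As a small illustration of the scoring scheme described above, the sketch below averages five-point Likert responses after reversing the negatively worded items; the function name and the example answers are hypothetical, while the reversed-item indices follow Experiment 2 (Q3, Q5, Q8, Q9).

```python
def score_questionnaire(responses: list[int], reversed_items: set[int]) -> float:
    """Average a participant's five-point Likert responses.

    `responses` holds one answer per question (1 = strongly disagree ... 5 = strongly agree),
    and answers to negatively worded questions (1-based indices in `reversed_items`)
    are reversed (x -> 6 - x) before averaging.
    """
    adjusted = [
        6 - r if i in reversed_items else r
        for i, r in enumerate(responses, start=1)
    ]
    return sum(adjusted) / len(adjusted)

# Example: a hypothetical 10-item immersion questionnaire with Q3, Q5, Q8, Q9 reversed,
# as in Experiment 2.
answers = [4, 5, 2, 4, 1, 5, 4, 2, 1, 5]
print(score_questionnaire(answers, reversed_items={3, 5, 8, 9}))
```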

6. Experiment 1

6.1. Procedure

Using the developed virtual object weight recognition algorithm, we conducted an experiment to determine whether users could detect the weight of a virtual object in VR content. As in the proposed algorithm, the algorithm in the experiment induced user-customized virtual object weights based on the speed of lifting a standard object in real space. However, in the final stage of the algorithm, no correction was made according to the distance between the center point of the virtual object and the tracked 3D hand model. In addition, for the composition of the experiment, the constraint that the virtual objects have the same density was relaxed. The experiment measured whether participants detected the weights of three rectangular boxes of the same size but different densities (see Figure 7). Wooden boxes of the same size but different densities were implemented against a plain background, and participants had to recognize that each object had a different weight by lifting it with both hands without a controller. Twelve participants were recruited, covering a range of ages from 23 to 39 years, all with VR experience; 7 were male and 5 were female. The participants wore a VR headset (Oculus Quest 2), and because the content did not use a controller, the experiment was conducted after instructing them in how to use hand tracking. The experiment was conducted one participant at a time, and the questionnaire was completed after experiencing the content for 5 min. The questionnaire used was the weight detection questionnaire. The participants were not informed of the densities of the objects.

6.2. Results

To verify the virtual object weight recognition algorithm, the participants performed the weight detection experiment, and we analyzed the completed questionnaires. Each answer in the weight detection questionnaire recorded the degree to which the participant agreed with the statement on a five-point scale: strongly agree (SA) was scored as 5 points and strongly disagree (SD) as 1 point, and the negatively worded items Q4 and Q7 were scored in reverse. The average scores of the participants' questionnaires are shown in Table 1, indicating that most participants felt weight when lifting a virtual object in the virtual environment. In the interview that followed the questionnaire, 9 of the 12 participants named one object (B), the correct answer, when asked which object was the heaviest. When asked about their preference for applying offsets to virtual objects, 10 of the 12 said that applying offsets helped them sense weight. On the other hand, one participant said that the screen appeared to stutter during hand-tracked operation, and another said it was uncomfortable because they were unfamiliar with operating by hand tracking. One participant who responded positively during the experiment expressed surprise that the weight of the virtual object could be felt without using a controller.

7. Experiment 2

7.1. Procedure

The second experiment was conducted to measure the user's state of immersion when weight was conveyed for a virtual object in virtual reality through the verified virtual object weight recognition algorithm. Before this experiment, a preliminary experiment was conducted to examine the difference in perceived weight between an object in the virtual world and the corresponding object in the real world. The preliminary experiment showed that the weight of a water bottle felt in the virtual environment was similar to the weight felt when lifting the actual water bottle, and the second experiment was designed with reference to this. In the second experiment, water bottles commonly encountered in daily life were placed by size so that participants could easily compare and judge the experience in the virtual environment against the real world, and the degree of immersion was measured (see Figure 8). As in the first experiment, the algorithm induced user-customized virtual object weights based on the speed of lifting a standard object in real space; however, in the final stage of the algorithm, no correction was made according to the distance between the center point of the virtual object and the tracked 3D hand model. By lifting the water bottles with both hands without a controller, participants were expected to recognize that each object had a different weight and that the differences were similar to the weight differences in the real world. The experiment was conducted with 12 participants aged 23 to 39; eight were male and four were female. The participants wore a VR headset (Oculus Quest 2), and because the content did not use a controller, the experiment was conducted after instructing them in how to operate hand tracking. The experiment was conducted one participant at a time, and the questionnaires were completed after experiencing the content for 5 min. The questionnaires used were the immersion measurement questionnaire and the immersion self-evaluation questionnaire. The participants were not informed that weight was reflected in each object.

7.2. Results

We conducted an experiment to measure user immersion when weight was conveyed for virtual objects through content reflecting the previously verified virtual object weight recognition algorithm. The immersion measurement and immersion self-evaluation questionnaires were used to measure the participants' state of immersion. The immersion measurement questionnaire consisted of 10 questions, each answered on a five-point scale indicating the degree of agreement; participants noted the extent to which each item applied to them. Answers closer to "a lot" were scored as 5 points and answers closer to "not at all" as 1 point, and Q3, Q5, Q8, and Q9, which were worded negatively to check user consistency, were scored in reverse. The average survey scores of the participants are shown in Table 2, indicating that the majority of participants experienced higher immersion in the virtual environment with the virtual object weight recognition algorithm. In addition, the immersion self-assessment consisted of a single question in which participants rated their own degree of immersion on a 10-point scale. As shown in Table 3, the immersion level rated directly by the participants was high, and the participants' responses in the subsequent interviews were positive. Most said that they felt the weight of lifting a real water bottle even though nothing acted on their hands when holding the water bottle in virtual reality. The dominant evaluation was that applying the offset to the weights of the virtual objects had a significant impact on the high immersion in the content. On the other hand, one participant said he was unable to immerse himself due to VR sickness. Nevertheless, the 11 participants with prior VR experience answered that the content used in the experiment was more immersive than content they had previously experienced.

8. Discussion

Experiment 1 attempted to verify the developed algorithm by testing whether the weight of a virtual object could be detected in a virtual environment. The questionnaires completed by participants after the weight detection experiment showed that most participants felt weight when lifting the virtual objects in the virtual environment. However, some participants answered that they were uncomfortable because they were unfamiliar with manipulation via hand tracking. In Experiment 2, we measured the user's immersion when weight was conveyed for virtual objects through content reflecting the previously verified virtual object weight recognition algorithm. It was confirmed that most participants experienced a high degree of immersion in the virtual environment to which the algorithm was applied. While most participants said they felt the weight they would feel when picking up the object in the real world, even though nothing acted on their hands when holding it in the virtual environment, one participant could not become immersed due to motion sickness in virtual reality. Nevertheless, the 11 participants with prior VR experience answered that the content used in the experiment was more immersive than content they had previously experienced. We believe that inducing a tactile experience through Experiments 1 and 2 contributed to increasing the immersion of the VR content, and in future studies we intend to implement more stable content by upgrading the virtual object weight recognition algorithm.

9. Conclusions

In this paper, to improve immersion in the VR headset environment, we produced pseudo-haptic simulation content based on a visual illusion effect and proposed a qualitative evaluation technique using an IEQ. Experiment 1 validated the proposed virtual object weight recognition algorithm, and Experiment 2 demonstrated that the algorithm was effective at increasing the user's immersion in the content. In other words, it was confirmed that the visual illusion effect, which uses the weight and speed of the virtual object, helped the user recognize the weight of the object in the VR environment. This is a standalone, software-based approach built on the VR headset's own hand tracking. It does not depend on other equipment because it uses only the VR headset, without controllers or other peripherals, and using a single device reduces the burden on the user. Through the virtual object weight recognition algorithm and the IEQ-based qualitative evaluation technique proposed in this paper, algorithms and pilot content for pseudo-haptic technologies can be developed to support more detailed research in the field, and they can be applied to various other fields.
On the other hand, the proposed algorithm only applies to lifting an object along a single axis (the y-axis), and it has the limitation that deceleration of the travel speed with height for heavy objects is not considered. In future work, we intend to extend the algorithm with speed deceleration according to movement in the x, y, and z directions and according to the height reached in real space.

Author Contributions

Conceptualization, E.S.; methodology, Y.K.; software, E.S.; validation, E.S.; formal analysis, H.S.; investigation, H.S. and S.N.; data curation, H.S.; writing—original draft preparation, H.S. and E.S.; writing—review and editing, E.S., H.S. and S.N.; visualization, E.S.; supervision, Y.K.; project administration, H.S. and Y.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Culture, Sports and Tourism R&D Program through the Korea Creative Content Agency grant funded by the Ministry of Culture, Sports and Tourism in 2022 (Project Name: The expandable Park Platform technology development and demonstration based on Metaverse·AI convergence, Project Number: R2021040269, Contribution Rate: 100%).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Acknowledgments

Thank you to everyone who supported this study.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Immersion Questionnaire Used in Experiment 1


Appendix B. Immersion Questionnaire Used in Experiment 2


References

  1. MacIsaac, D. Google Cardboard: A virtual reality headset for $10? AAPT 2015, 53, 125.
  2. Praveena, P.; Rakita, D.; Mutlu, B.; Gleicher, M. Supporting Perception of Weight through Motion-induced Sensory Conflicts in Robot Teleoperation. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Online, 9 March 2020; pp. 509–517.
  3. Kim, J.; Kim, S.; Lee, J. The Effect of Multisensory Pseudo-Haptic Feedback on Perception of Virtual Weight. IEEE Access 2022, 10, 5129–5140.
  4. Collins, K.; Kapralos, B. Pseudo-haptics: Leveraging cross-modal perception in virtual environments. Senses Soc. 2019, 14, 313–329.
  5. Lécuyer, A.; Coquillart, S.; Kheddar, A.; Richard, P.; Coiffet, P. Pseudo-haptic feedback: Can isometric input devices simulate force feedback? In Proceedings of the IEEE Virtual Reality 2000, New Brunswick, NJ, USA, 18–22 March 2000; pp. 83–90.
  6. Punpongsanon, P.; Iwai, D.; Sato, K. SoftAR: Visually manipulating haptic softness perception in spatial augmented reality. IEEE Trans. Vis. Comput. Graph. 2015, 21, 1279–1288.
  7. Lopes, P.; You, S.; Cheng, L.P.; Marwecki, S.; Baudisch, P. Providing haptics to walls & heavy objects in virtual reality by means of electrical muscle stimulation. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, Online, 2 May 2017; pp. 1471–1482.
  8. Culbertson, H.; Schorr, S.B.; Okamura, A.M. Haptics: The present and future of artificial touch sensation. Annu. Rev. Control Robot. Auton. Syst. 2018, 1, 385–409.
  9. Borst, C.W.; Indugula, A. Realistic virtual grasping. In Proceedings of the IEEE Virtual Reality, Bonn, Germany, 12–16 March 2005; pp. 91–98.
  10. Ott, R.; De Perrot, V.; Thalmann, D.; Vexo, F. MHaptic: A haptic manipulation library for generic virtual environments. In Proceedings of the 2007 International Conference on Cyberworlds (CW'07), Hannover, Germany, 24–26 October 2007; pp. 338–345.
  11. Ott, R.; Vexo, F.; Thalmann, D. Two-handed haptic manipulation for CAD and VR applications. Comput. Des. Appl. 2010, 7, 125–138.
  12. Fisch, A.; Mavroidis, C.; Melli-Huber, J.; Bar-Cohen, Y. Haptic devices for virtual reality, telepresence, and human-assistive robotics. Biol. Inspired Intell. Robot. 2003, 73, 1–24.
  13. Turner, M.L.; Gomez, D.H.; Tremblay, M.R.; Cutkosky, M.R. Preliminary tests of an arm-grounded haptic feedback device in telemanipulation. ASME 1998, 15861, 145–149.
  14. Giachritsis, C.D.; Garcia-Robledo, P.; Barrio, J.; Wing, A.M.; Ferre, M. Unimanual, bimanual and bilateral weight perception of virtual objects in the Master Finger 2 environment. In Proceedings of the 19th International Symposium in Robot and Human Interactive Communication, Viareggio, Italy, 13–15 September 2010; pp. 513–519.
  15. Hummel, J.; Dodiya, J.; Wolff, R.; Gerndt, A.; Kuhlen, T. An evaluation of two simple methods for representing heaviness in immersive virtual environments. In Proceedings of the 2013 IEEE Symposium on 3D User Interfaces (3DUI), Orlando, FL, USA, 16–17 March 2013; pp. 87–94.
  16. Rietzler, M.; Geiselhart, F.; Gugenheimer, J.; Rukzio, E. Breaking the tracking: Enabling weight perception using perceivable tracking offsets. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Online, 19 April 2018; pp. 1–12.
  17. Rietzler, M.; Geiselhart, F.; Frommel, J.; Rukzio, E. Conveying the perception of kinesthetic feedback in virtual reality using state-of-the-art hardware. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Online, 21 April 2018; pp. 1–13.
  18. Lécuyer, A.; Burkhardt, J.M.; Coquillart, S.; Coiffet, P. "Boundary of illusion": An experiment of sensory integration with a pseudo-haptic system. In Proceedings of the IEEE Virtual Reality 2001, Yokohama, Japan, 13–17 March 2001; pp. 115–122.
  19. Lopes, P.; Baudisch, P. Interactive systems based on electrical muscle stimulation. Computer 2017, 50, 28–35.
  20. Azmandian, M.; Hancock, M.; Benko, H.; Ofek, E.; Wilson, A.D. Haptic retargeting: Dynamic repurposing of passive haptics for enhanced virtual reality experiences. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, Online, 7 May 2016; pp. 1968–1979.
  21. Lopes, P.; You, S.; Ion, A.; Baudisch, P. Adding force feedback to mixed reality experiences and games using electrical muscle stimulation. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Online, 21 April 2018; p. 446.
  22. Choi, I.; Culbertson, H.; Miller, M.R.; Olwal, A.; Follmer, S. Grabity: A wearable haptic interface for simulating weight and grasping in virtual reality. In Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, Online, 20 October 2017; pp. 119–130.
  23. Park, C.H.; Kim, H.T. Induced pseudo-haptic sensation using multisensory congruency in virtual reality. In Proceedings of the HCI Society of Korea Conference, Jeju-si, Korea, 14 February 2017; pp. 1055–1059.
  24. Kim, J.; Lee, J. The Effect of the Virtual Object Size on Weight Perception Augmented with Pseudo-Haptic Feedback. In Proceedings of the 2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), Lisbon, Portugal, 27 March–1 April 2021; pp. 575–576.
  25. Mo, S.; Kwag, J.; Jung, M.C. Literature Review on One-Handed Manual Material Handling. J. Ergon. Soc. Korea 2010, 29, 819–829.
  26. Kim, H.K. Comparison of Muscle Strength for One-hand and Two-hands Lifting Activity. J. Ergon. Soc. Korea 2007, 26, 35–44.
  27. Jennett, C.; Cox, A.L.; Cairns, P.; Dhoparee, S.; Epps, A.; Tijs, T.; Walton, A. Measuring and defining the experience of immersion in games. Int. J. Hum. Comput. Stud. 2008, 66, 641–661.
  28. Rigby, J.M.; Brumby, D.P.; Gould, S.J.; Cox, A.L. Development of a questionnaire to measure immersion in video media: The Film IEQ. In Proceedings of the 2019 ACM International Conference on Interactive Experiences for TV and Online Video, Online, 4 June 2019; pp. 35–46.
Figure 1. Fundamental operating principle of the pseudo-haptic customized virtual object weight algorithm based on the visual illusion effect. (a) The speed of lifting a golf ball in real space, which is the y-axis movement speed of the 3D hand model (v_real_st); (b) the speed of the 3D hand model lifting a virtual object other than the standard weight (v_real_v); (c) the operating principle for an object of the standard weight. To recognize a user-customized weight, the user must lift an object of the same weight in real space. The weight of the standard-weight object is called W_standard and is based on the actual weight of a golf ball; the speed of lifting this object is called v_standard, which is defined as the standard speed of raising the standard weight, so it can be considered equal to the speed v_real_st of the 3D hand model. (d,e) Situations in which the virtual object is lighter or heavier than the standard-weight golf ball, where W_virtual is the weight of a virtual object differing from the standard weight and v_virtual is the speed at which a virtual object of weight W_virtual is lifted, i.e., the moving speed of the virtual object rather than the speed of the 3D hand model: (d) if the virtual object is lighter than W_standard, it rises higher even when lifted at the speed v_standard, and (e) if it is heavier than W_standard, it stays lower even when lifted at the speed v_standard. (f) The equation from the experiment of lifting objects of different weights in real space (v_experiment); this experiment was conducted in real space to set a natural offset threshold in the VR environment.
Figure 2. Five-step object image used in the experiment.
Figure 3. Experimental scene of measuring the speed of one-hand raising by weight in real space.
Figure 4. Experimental results for measuring the lifting speed of objects by weight in real space (from 12.25 N to 49 N).
Figure 5. Virtual environment of the simulation content based on the virtual object weight recognition algorithm. (a) The virtual experience space where the proposed algorithm is simulated; (b) a 3D hand model that receives and maps posture information from the user's hand via the camera of the Oculus Quest 2; (c) a table model on which the virtual objects are placed; (d) 3D virtual objects for the weight recognition experiment; and (e) 3D virtual objects for the immersion measurement experiment.
Figure 6. Simulation scene based on the virtual object weight algorithm. (a) Because each user perceives weight differently depending on their VR experience and degree of immersion, the user lifts a golf ball in real space and this speed is measured and defined as the customized standard speed; (b) the virtual object weight algorithm in operation.
Figure 7. Screenshot of the scene in Experiment 1. Different offsets were applied to objects A, B, and C so that their weights felt different. Because participants might assume the objects were ordered from light to heavy, B was set as the heaviest object.
Figure 8. Screenshot of scene from Experiment 2.
Table 1. Results of the weight sensing experiment questionnaire.
P1    P2    P3    P4    P5    P6
4.6   5.0   3.8   3.2   4.3   4.8
P7    P8    P9    P10   P11   P12
5.0   4.7   2.9   3.7   4.5   4.9
Table 2. Results of the immersion measurement questionnaire.
P1    P2    P3    P4    P5    P6
3.8   4.2   3.1   4.6   4.7   2.1
P7    P8    P9    P10   P11   P12
3.5   4.8   4.7   4.2   4.3   4.6
Table 3. Results of the immersion self-assessment.
P1    P2    P3    P4    P5    P6
7     10    8     10    10    5
P7    P8    P9    P10   P11   P12
8     9     8     8     9     9
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
