Article

A Modular Haptic Agent System with Encountered-Type Active Interaction

Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
* Author to whom correspondence should be addressed.
Electronics 2023, 12(9), 2069; https://doi.org/10.3390/electronics12092069
Submission received: 24 March 2023 / Revised: 28 April 2023 / Accepted: 29 April 2023 / Published: 30 April 2023
(This article belongs to the Special Issue Human Robot Interaction and Intelligent System Design)

Abstract

Virtual agents are artificial intelligence systems that can interact with users in virtual reality (VR), providing companionship and entertainment. Virtual pets have become the most popular virtual agents because of their many benefits. However, haptic interaction with virtual pets involves two challenges: the rapid construction of various haptic proxies and the design of agent-initiated active interaction. In this paper, we propose a modular haptic agent (MHA) prototype system that enables tactile simulation and encountered-type haptic interaction for common virtual pet agents through a modular design method and a haptic mapping method. In the MHA system, haptic interaction is actively initiated by the agents according to the user’s intention, which makes the virtual agents appear more autonomous and provides a better human–agent interaction experience. Finally, we conduct three user studies to demonstrate that the MHA system offers clear advantages in terms of realism, interactivity, attraction, and user emotion. Overall, MHA can build multiple companion agents, provide active interaction, and has the potential to quickly construct diverse haptic agents for an intelligent and comfortable virtual world.

1. Introduction

The increasing engagement of people in virtual reality (VR) can lead to psychological and cognitive problems in relatively extreme situations. For example, living or working in VR for long periods may increase users’ loneliness and anxiety [1]. This stems from the large gap between virtual environments and users’ real lives. An intuitive way to tackle this problem is to add more participants (i.e., virtual agents) to the virtual environment, because virtual agents can provide users with companionship [2]. Companion virtual agents mainly fall into three categories: virtual avatars of real humans [3], virtual characters [4], and virtual pets [5]. However, virtual avatars of real humans cannot provide companionship all the time because of the limitations of the humans behind the avatars. Virtual human characters are also not widely used because of the complexity of their appearance and interaction design, especially the lack of high-level intelligence [6]. In contrast, virtual pets can stay online all day and can be built far more easily than virtual humans [7]. As a result, virtual pets have emerged as a prominent choice for companion agents [8]. A good companion agent needs two important properties: it should not only offer variable appearance to provide different kinds of companionship but also sense the user’s interaction and give feedback [9]. Existing work mainly focuses on the appearance and behavior of virtual agents in the visual channel [5] and ignores their haptic channel. However, haptic perception plays an important role in the interaction between users and agents [10]. It can significantly improve the user’s sense of immersion and help users understand the interactive behaviors of virtual agents [11]. Haptic design is therefore necessary for a good interaction experience. Among the various kinds of haptics, encountered-type haptics can provide force feedback based on the user’s behavior, which improves the interaction experience and provides a better sense of companionship [12].
Beyond visual and tactile perception, the paradigm of agent-initiated interaction can further enhance the sense of companionship [13]. This type of interaction demonstrates the intelligence of the agent, leading to a more immersive and engaging user experience [14]. However, because designing haptic proxies is complex and varied, agent-initiated haptic behaviors have not been considered in previous research [15,16]. The haptic proxies of virtual agents thus remain difficult to design and face major challenges in widespread application.
In this paper, we present a modular haptic agent (MHA) prototype system that enables the rapid construction of diverse virtual agents’ haptic proxies and interaction modes, as shown in Figure 1. The MHA system addresses two key challenges: the diverse design of haptic proxies and the active interaction of agents. For the first challenge, MHA adopts a modular design structure with which different pet agents with physical haptic modules are implemented. The distinct tactile elements of various agents are assembled into diverse haptic proxies and physical props. By changing the patches and props on the module, MHA can simulate the different tactile elements of various pets and map them to different parts of the pets through a movable mechanical structure. For the second challenge, we examine different interactions between users and virtual pets and divide the interactive modes into passive interaction and active interaction. Passive interaction is easy to implement, whereas active interaction requires additional design effort. To support active interaction, we achieve pet-initiated interaction by monitoring the user’s head movement, hand position, and the position of the interactive object in real time. The design methods of MHA enable designers to quickly build diverse virtual agent haptic proxies in VR and improve the user’s interactive experience to provide better companionship.
Our main contributions are three-fold:
1. We propose a design method for the haptic modules of virtual agents by deconstructing the tactile elements of haptic proxies. We also provide a haptic mapping method for haptic simulation to fit the appearance and animation of different virtual agents.
2. We develop a prototyping system called MHA to build haptic proxies of virtual agents in VR. The MHA system allows some of the user’s interaction intentions to be predicted and enables the agent to initiate a haptic interaction with the user.
3. Three user studies have been conducted to verify the advantages of building modular haptic agents based on the modular design method and the encountered-type haptic interactions. Experimental results show that MHA has the potential to quickly build diversified companion agents with haptic interactions in VR.
The remainder of this paper is structured as follows. Section 2 reviews related work on encountered-type haptics and companion agents. Section 3 and Section 4 introduce the design methods and the system implementation, respectively. Section 5 evaluates the system performance through user studies. Section 6 discusses further results, and Section 7 concludes the paper.

2. Related Work

2.1. Virtual Pets

Pets have long been regarded as partners of human beings. The interactions between humans and animals can influence health and mood on both psychological and physiological levels [17]. There have been many studies on the value of human–pet interactions. Pet companions can reduce anxiety, loneliness, and depression, thereby delaying the onset of stress-related illness [17,18]. Pets can also increase sensory stimulation while lowering blood pressure and heart rate, making people of all ages feel comfortable [19]. In addition, studies have shown that virtual pets have a very good adjunctive therapeutic effect on children with autism [20] or asthma [21].
Ways of interacting with pets or virtual pets have long been a focus for researchers. One approach is to interact without physical haptics through a display screen; another is to interact with physical virtual pet toys or pet haptic proxies that engage multiple senses of the users. Chahyana et al. proposed an Android-based virtual pet system in which users could interact through applications and follow the daily behavior of the pets [22]. Their virtual pets could only provide visual feedback, and the authors expressed concern that such non-haptic virtual pet applications cannot replace real pets. Pretty Pelvis added a force feedback device to the interaction system to sense the user’s sitting posture, guiding the user to adjust it promptly by interacting with virtual pets [23]. Building on this, MobiLimb presented a 5-degree-of-freedom mechanical device installed at the bottom of a phone, giving the user limited force feedback from a virtual pet [24]. Metazoa Ludens allowed users to play with their pets remotely in a mixed reality environment [25].
Another approach is to simulate physical pets through pet toys or proxies to provide a human–pet interaction experience. One of the most typical examples is the robot dog AIBO developed by Sony. AIBO had an image sensor that allowed it to walk to its target, kick a ball, or bump its head, and users could touch AIBO’s head sensor to obtain force feedback [26]. AIBO found a wide range of applications not only in entertainment but also in medical treatment and education [27]. It is also worth mentioning iCat, which combined signal input and output, sensing the user’s intention and providing feedback through expressions and actions [28]. In addition, Lee et al. proposed a mixed-reality remote interaction device that simulates the actions of remote pets through mechanical entities to achieve two-way interaction [16].

2.2. Encountered-Type Haptics in VR

The haptic interface has become essential for enhancing user experiences in VR systems [29]. In early research, McNeely proposed the concept of robotic graphics [30] and pointed out the need for various types of haptic feedback in VR. Encountered-type haptics refers to haptic interaction in which the user, the interactive object, or both move toward each other [31,32]. Haptic solutions of this kind in VR fall on a spectrum from passive props to active props [33].
To achieve encountered-type haptics with passive props, physical props need to be matched with virtual environments or virtual objects to provide haptic feedback. Many representative works have been put forward, such as integrated modules that produce different haptics [34,35], manipulated props [36,37], self-assembled cell modules [38], and synchronized physical scenes [39]. Snake Charmer is a typical example [35]: it rotates a polyhedron to dynamically change the contact surface presented to the user, simulating texture, temperature, and force feedback from multiple angles. However, passive props have certain limitations. The physical props must be meticulously aligned with their virtual counterparts in shape, position, and tactile feedback [40,41]. Furthermore, passive props do not offer force feedback during user interaction, which can diminish the user’s sense of presence [31].
To provide more general haptic feedback in VR, researchers have proposed active props for encountered-type haptics. For example, robots [42] or humans [43,44] carry devices to the target position to match the user’s sensations in the virtual world [45]. When users contact virtual objects, haptic devices can dynamically reposition physical props or provide force feedback, which significantly improves the user’s sense of realism. Active props also open new possibilities for human–robot interaction.
Bouzbib et al. categorized current encountered-type haptic devices along two dimensions: physicality and actuation [11]. Physicality refers to whether the haptic system uses real props to simulate haptics; actuation refers to whether motor-based hardware is deployed to move the haptic display autonomously. For a virtual pet agent, we argue that it is necessary to simulate both the tactile sensation with real props (the physical dimension) and the kinetic feedback with mechanical devices (the actuation dimension). Although several works have investigated encountered-type haptic devices that provide feedback by predicting the user’s intention [37,46,47,48], little attention has been paid to achieving both tactile sensation and kinetic feedback when interacting with virtual agents [32], especially virtual pets. One way to address this is to build different interaction objects using changeable interaction devices [49,50]. We build on the concept of changeable interaction devices to achieve a variety of haptic interactions with virtual intelligent pets through the modular design method and the haptic mapping method.

3. Design Methods

Designing haptic proxies and human–agent interaction in VR involves answering two main research questions.
  • RQ1: What is an efficient method for constructing a large number of haptic proxies for different agents using minimal props?
  • RQ2: How can agent-initiated interactions be designed to enhance the sense of companionship?
To design haptic interaction between users and virtual pets in VR with quality and efficiency, we propose two key schemes addressing the two questions above. For diverse haptic proxy construction, we adopt the modular design method and the visual–haptic mapping method. For agent-initiated interaction, we classify human–agent interaction and design an active interaction system. We use a prototype system of various virtual pets to illustrate our modular haptic proxy design and interaction design.

3.1. Haptic Proxies Construction with Minimal Props

3.1.1. Classification of Tactile Elements

Tactile elements are sets of haptic props that share the same haptic rendering properties, reflecting a similar class of attributes (e.g., size, texture, softness, and temperature) [51,52,53]. Tactile elements of the same type can be used to approximate the tactile sensation of a class of objects [54,55]. Therefore, tactile elements of the same class can be represented by the same haptic props.
We surveyed more than 20 common household pets through questionnaires and expert interviews to investigate their tactile elements and interactive behavior. Although different pets have different shapes and haptic rendering properties, we found that most pets share very similar tactile elements. Their shapes and tactile elements can be roughly divided into five categories: flat surface, cambered surface, hard sharp corner, triangular- or stripe-shaped ears and tails, and other special categories. As a result, the tactile sensation of different parts of a virtual pet can be simulated by constructing tactile props of the corresponding categories and mapping them to the positions of the specified virtual avatar. Figure 2 shows the categories of tactile elements, example haptic props, and the parts of the virtual pets they correspond to. Each of the five categories is described below, followed by a small code sketch of how the taxonomy can be catalogued.
  • Flat surface: A large flat soft surface, often covered with long or short fur (e.g., the heads and bodies of dogs, pet pigs, and pet calves). When the users come in contact with the flat surfaces, their hands are often extended, and users touch along the plane or gently pat along the normal direction of the plane.
  • Cambered surface: A soft surface with a large or small radius of curvature, often covered with long or short fur (e.g., the heads of kittens, ducks, and rabbits). When the users come in contact with the cambered surfaces, their hands are often half open, and users touch them with their palms or fingertips, sliding in a circle with their hands.
  • Sharp corner: A hard, sharp pyramid or cylinder. For small sharp corners (e.g., the mouths of ducks), users often touch them with the tips of their fingers or palms. For large sharp corners (e.g., the horns of pet calves or lambs), users often touch or hold them with multiple fingers and palms.
  • Ears and tails: A soft triangular or elliptical piece or column, often covered with long or short fur (e.g., the triangular ears and elliptical tails of dogs, pet pigs, and kittens). When the users come in contact with the ears, their hands are either fully extended or bent to pinch. When the users come in contact with the tails, their hands often hold the tails or rub up and down.
  • Others: Some pets have tactile elements that are unique to them (e.g., the long ears of rabbits, the mane of pet ponies, and the nose and tail of pet pigs). When interacting with these pets, users make specific haptic interactions according to the situation (e.g., patting the ears of rabbits, fondling the mane of ponies, and pinching the nose of pigs).
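As a minimal illustration of this taxonomy (the names and example props below are our own and not part of the MHA implementation), the five categories can be encoded as a small lookup structure that a designer could query when assembling a module:

from enum import Enum, auto

class TactileElement(Enum):
    """The five tactile element categories identified in our survey."""
    FLAT_SURFACE = auto()      # large flat soft surface (e.g., heads and bodies of dogs, pigs, calves)
    CAMBERED_SURFACE = auto()  # curved soft surface (e.g., heads of kittens, ducks, rabbits)
    SHARP_CORNER = auto()      # hard pyramid or cylinder (e.g., duck mouths, calf horns)
    EARS_AND_TAILS = auto()    # soft triangular or elliptical piece or column
    OTHERS = auto()            # special elements (e.g., rabbit ears, pony mane, pig nose)

# Hypothetical prop catalogue: each category maps to example props that can
# approximate it on a module patch.
PROP_CATALOGUE = {
    TactileElement.FLAT_SURFACE: ["flat plush patch"],
    TactileElement.CAMBERED_SURFACE: ["large curved plush prop", "small curved plush prop"],
    TactileElement.SHARP_CORNER: ["hard cone prop", "hard cylinder prop"],
    TactileElement.EARS_AND_TAILS: ["triangular fabric ear", "furry tail column"],
    TactileElement.OTHERS: ["custom prop"],
}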

3.1.2. Modular Design Method

Based on the composition of virtual pets, the five categories of tactile elements can be used to represent the physical proxies of virtual pets. Tactile proxies can therefore be built in modular form to achieve realistic pet tactile sensations in the virtual environment. The main idea of the modular design method is to create virtual pet proxies using a modular structure: by assembling modules containing tactile elements, designers can quickly construct haptic proxies.
We design and produce the prototype MHA module. The MHA module consists of a module core, detachable patches, and props on the patches (see Figure 1a). The module core is located in the center of the module and is connected to the patches and mechanical devices. The detachable patches can be secured on the module core as holders for props. Props are closely connected to the patches to form tactile elements of virtual pets. In addition, a variety of sensors and drivers can be installed on the patches to detect the users’ touch behavior and provide feedback. By combining the module core, the detachable patches, and the props, all the tactile elements of a pet can be represented.
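To make the assembly concrete, the following sketch (our own illustrative Python, not the system’s actual software) models a module as a core with five patch faces, each optionally holding a prop together with a sensor or servo:

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Patch:
    """A detachable patch fixed on one face of the module core."""
    prop: str                       # e.g., "flat plush patch" or "hard cone prop"
    element: str                    # one of the five tactile element categories
    has_photo_sensor: bool = False  # detects the user's touch
    has_servo: bool = False         # drives small prop movements (e.g., a duck's mouth)

@dataclass
class ModuleCore:
    """A 12 cm cube; one face mounts to the arm, the other five accept patches."""
    faces: Dict[str, Optional[Patch]] = field(
        default_factory=lambda: {f: None for f in ("top", "front", "back", "left", "right")}
    )

    def attach(self, face: str, patch: Patch) -> None:
        if face not in self.faces:
            raise ValueError(f"unknown face: {face}")
        self.faces[face] = patch

# Example assembly loosely following the pet-dog module described in Section 4.4:
dog_module = ModuleCore()
dog_module.attach("top", Patch("flat plane prop", "flat surface", has_photo_sensor=True))
dog_module.attach("left", Patch("large curved prop", "cambered surface", has_photo_sensor=True))
dog_module.attach("right", Patch("large curved prop", "cambered surface", has_photo_sensor=True))
dog_module.attach("front", Patch("small curved prop", "cambered surface", has_photo_sensor=True))
dog_module.attach("back", Patch("tail prop", "ears and tails", has_photo_sensor=True))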

3.1.3. Haptic Mapping Method

When all the tactile elements of a haptic proxy have been assembled with the modular method, they need to be allocated to specific spatial positions to reproduce the tactile sensation of the pet. Since the same module core is used for pets of different sizes, a visual–haptic mapping method is needed to map the module to virtual agents of different sizes when constructing the haptic proxies. Below we show example assemblies and mappings for three different virtual pets; for the same module core, different patches and props construct the tactile elements required by each pet.
For the small pet duck, the module is roughly the same size as the virtual duck’s head. Moving and fixing the physical module at the position of the virtual image, as shown in Figure 3a, is sufficient to simulate the duck.
For the medium-sized pet dog, the module is smaller than the virtual dog. Because users’ vision and touch are closely coupled, users usually look at the position they are about to touch [56]. We therefore monitor the user’s gaze by detecting the collision between the user’s view ray and parts of the virtual image to predict the position the user intends to touch. A mechanical device continuously drives the module to the position where the user is looking and aligns the haptic props with the tactile elements of the virtual image, so one small module can cover the whole virtual dog’s tactile elements, as shown in Figure 3b.
For the large pet calf, the same mapping method achieves tactile stimulation of the different parts of the calf, as shown in Figure 3c. For other pets, designers can likewise divide the virtual image into regions of tactile elements according to its size and define the corresponding mappings.
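A minimal sketch of this mapping logic follows; the region names, coordinates, and the move_arm_to call are hypothetical and only illustrate the idea of repositioning one physical module to the gazed region:

from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

# Hypothetical layout: each region of the virtual calf, paired with the pose in
# the arm's workspace where the module must sit so that its props align with
# that region's tactile elements.
CALF_REGIONS: Dict[str, Vec3] = {
    "head": (0.30, 0.00, 0.45),
    "back": (0.10, 0.00, 0.55),
    "hindquarters": (-0.15, 0.00, 0.50),
}

def map_gaze_to_module_pose(gazed_region: str, regions: Dict[str, Vec3]) -> Vec3:
    """Return the physical pose the module should move to for the gazed region."""
    if gazed_region not in regions:
        raise ValueError(f"no haptic mapping defined for region '{gazed_region}'")
    return regions[gazed_region]

# e.g., when the user's gaze ray hits the calf's back:
target = map_gaze_to_module_pose("back", CALF_REGIONS)
# move_arm_to(target)  # hypothetical driver call commanding the mechanical arm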

3.2. Agent-Initiated Active Interaction

A large part of human–agent communication is achieved through haptics, and interactive behavior can be initiated by either users or virtual agents. Suzuki proposed the concepts of explicit and implicit interaction [57]: in explicit interaction, users directly touch the interaction props with their hands or bodies, whereas implicit interaction takes the user’s implicit motion, such as position or proximity to the robot, as input [58]. In haptic interaction, we divide these two forms into passive interaction and active interaction, depending on which party initiates the haptic contact.
From the perspective of the virtual agent, we define passive interaction as user-initiated haptic interaction, whereas active interaction is initiated by the agent when it notices some intention of the user to interact. In passive interaction, users approach and touch the pets, and the pets give feedback according to the users’ touch behavior. In active interaction, pets sense the user’s intention and spontaneously approach the user to apply contact forces.
Based on the survey mentioned in Section 3.1, we summarize five common haptic interactions of household pets: head touch, body touch, discipline, feeding, and rubbing. Head touch and body touch fall under passive interaction; the user touches different parts of the pets, and the pets respond to the user’s haptic behavior, reacting differently depending on where they are touched. Discipline, feeding, and rubbing are active interactions: the pets sense the user’s intent and interact spontaneously with the user. In particular, the term “discipline” in our study refers to the haptic interaction initiated by the virtual pet after the user has issued a non-haptic command, such as beckoning the pet to come closer.
User intention in active interaction can be judged from head direction [59,60,61], proximity [58,62], gesture [63,64], voice [65,66], and other interaction modalities [57,67]. Among them, head direction and proximity are the two most important cues [68,69]. Head movements provide a convenient and natural addition to human–agent interaction [70,71], and proximity effectively reflects the interpersonal relationship between the human and the agent [72]. We use two rules for active interaction decisions in our system: (i) if the user’s head direction stays oriented toward the virtual agent for a period of time, the agent initiates an interaction, and (ii) if the user’s hand or an interactive object approaches the virtual agent, the agent initiates the interaction.
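These two rules can be expressed as a small decision function; the sketch below uses the thresholds from Algorithm 1 (t = 5 s, d = 10 cm), and the variable names are ours:

def should_initiate_active_interaction(
    gaze_on_agent_time: float,        # seconds the head-direction ray has stayed on the agent
    hand_distance: float,             # metres between the user's hand and the module
    object_distance: float,           # metres between a tracked interactive object and the module
    dwell_threshold: float = 5.0,     # rule (i): sustained head orientation
    distance_threshold: float = 0.10, # rule (ii): proximity of hand or object (10 cm)
) -> bool:
    """Return True if the agent should initiate an encountered-type interaction."""
    gazing_long_enough = gaze_on_agent_time >= dwell_threshold
    something_close = min(hand_distance, object_distance) < distance_threshold
    return gazing_long_enough or something_close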
Not every agent supports all of these interaction types. For instance, some birds do not allow their bodies to be touched in passive interaction, and some pets do not understand the user’s intent and may not rub themselves against the user’s hand. Even so, these interaction types cover most of the interaction designs of common virtual pets. Compared with previous works [7,15], the active interaction design expands the space of possible interactions between users and virtual agents. Combining agent-initiated interaction with user-initiated interaction is more reasonable, more likely to be preferred by users, and provides better companionship.

4. System Implementation

4.1. Overview

Following the modular design method and the mapping method, we built the modular haptic agent, a prototype system for the rapid construction of virtual agents’ haptic proxies and interactions. We attached an MHA module carrying all tactile elements of a virtual pet to a mechanical arm that drives it to provide tactile simulation and encountered-type passive and active interaction, as shown in Figure 4. The prototype system consists of (i) a haptic module with tactile elements, (ii) a mechanical device that provides haptic mapping and force feedback, (iii) a microcontroller for signal exchange and circuits, and (iv) a suite of virtual reality devices for display and tracking. Additional information can be found in Supplementary Video S1.

4.2. Module Design

4.2.1. Module Core

The module core is a hollow cube shell; one face is fixed to the mechanical device, and the other five faces are perforated to connect different detachable patches that simulate tactile elements. To ensure realistic interaction, the size of the module core has certain requirements. The module should not be too small, so that the user’s whole palm can touch the tactile elements, but it should not be too large either, owing to the limited range of the mechanical manipulator. We investigated the sizes of common pets and the palm sizes of people of different ages and genders in advance and found that a 12 cm × 12 cm surface was acceptable to users. We therefore 3D-printed a cube with a side length of 12 cm to serve as the module core.

4.2.2. Detachable Patches and Props

Detachable patches were holders for sensors and props, and various props could be used to simulate the tactile elements of different pets. Each patch was perforated in a standardized pattern matching the holes on the module core faces, so any patch could be connected to any face of the module core.
After fixing the patches, we installed sensors and servos on the patch surfaces for haptic input and output, as shown in Figure 5. Hand tracking is not always accurate because depth information is lost when the user’s hands sink into the plush, so for haptic input we pasted a photo sensor on the patch and punched a tiny hole in the surface of the prop above it. When the user touches the patch, the hand covers the photo sensor and the sensor captures the touch signal. We did not use touch sensors because their large metallic surface might have affected the users’ sense of touch. For haptic output, we installed a servo motor on a slotted patch and placed the prop on top of it to produce movements of the duck’s mouth; since the servo motor moves quickly, the tiny movements of the duck’s mouth could be simulated.
We used an Arduino Uno microcontroller board to process the user’s touch signals and the servo driving signals. Signals were exchanged in real time between the microcontroller and the host processor: the microcontroller transmitted touch signals to the processor, and the processor transmitted servo motion commands back to the microcontroller.
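As an illustration only, this exchange can be pictured as a simple serial protocol. The sketch below shows a host-side loop in Python with pyserial; the actual host logic runs in Unity3D, and the port name and byte codes here are invented:

import serial  # pyserial

# Hypothetical single-byte protocol:
#   microcontroller -> host: b'0'..b'4' = index of the patch whose photo sensor was covered
#   host -> microcontroller: b'M'       = command the servo to play the mouth movement
def run_host_loop(port: str = "/dev/ttyUSB0", baud: int = 9600) -> None:
    with serial.Serial(port, baud, timeout=0.01) as link:
        while True:
            raw = link.read(1)             # poll for a touch event from the microcontroller
            if raw:
                patch_index = int(raw)     # which patch was touched
                handle_touch(patch_index)  # placeholder: trigger animation/force feedback
                link.write(b'M')           # request haptic output from the servo

def handle_touch(patch_index: int) -> None:
    print(f"touch detected on patch {patch_index}")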

4.3. Tracking System

We captured some of the user’s interaction intentions in real time from the line of sight, the hand position, and the positions of interactive objects.
(1) Line of sight. A virtual ray emitted from the center of the head-mounted display determined the user’s visual focus, reflecting the user’s head direction. We detected in real time whether this ray crossed the mesh of the virtual agent (a generic ray-intersection sketch is given after this list).
(2) User’s hand position. We used Leap Motion to track the user’s hands. The Leap Motion Controller provided a virtual hand image, guided the user to touch the virtual pet, and monitored the distance between the user’s hand and the module.
(3) Position of interactive objects. When the user interacted with the virtual agent using an interactive object, the tracker on the object was used to determine the spatial position for active interaction.
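For illustration, detecting whether the head-direction ray crosses the agent can be approximated by a ray-versus-bounding-box test (the slab method). This is a generic geometric sketch rather than the engine’s actual raycast; in the real system, the engine’s raycast against the agent’s mesh presumably plays this role:

from typing import Tuple

Vec3 = Tuple[float, float, float]

def ray_hits_aabb(origin: Vec3, direction: Vec3,
                  box_min: Vec3, box_max: Vec3) -> bool:
    """Slab-method test: does the ray from `origin` along `direction` hit the box?"""
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if abs(d) < 1e-9:                 # ray parallel to this slab
            if o < lo or o > hi:
                return False
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
    return t_far >= max(t_near, 0.0)      # intersection exists in front of the viewer

# e.g., head at the origin looking along +z toward a duck-sized box about 1 m away:
print(ray_hits_aabb((0, 0, 0), (0, 0, 1), (-0.2, -0.2, 0.8), (0.2, 0.2, 1.2)))  # True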

4.4. Encountered-Type Haptic Interaction

To illustrate the MHA encountered-type haptic design process, we walk through an example of interaction with a pet dog. The encountered-type haptic interaction of other agents can be designed in the same way.
We assembled a module with five patches. The props on the patches consisted of a flat plane prop (top of the head), two large curved props (serving as either the cheeks of the head or the back of the body), a small curved prop (front of the face), and a tail prop (tail of the body). Each patch carried a sensor to register the user’s touch behavior. We pre-edited the positions, moving speeds, and forces of the mechanical arm to provide shape simulation and encountered-type haptic interaction.
The virtual dog was about the size of two modules, so we divided it into a head area and a body area, each approximately the size of the module. When the user’s visual ray hit an area, the processor sent a motion command to the mechanical arm, driving the same module to match that area. In the encountered-type haptic interactions, we edited the force of the mechanical arm to match the feedback animation of the virtual pets.
Passive interaction. The users came close to the virtual pets and touched them, and the virtual pets responded with movement feedback and force feedback. For the pet dog, different animation and force feedback were given depending on which part was touched. As shown in Figure 6a, when the users touched the dog’s face, the virtual dog nodded, and the mechanical arm drove the module along an arc to provide force feedback.
Active interaction. When the user’s hands or the tracked food container came close to the dog, it recognized some of the user’s intentions and approached the user. When the distance between the user’s palm or an interactive prop and the MHA module fell below the 10 cm threshold, the animation system played the animation of the pet moving toward the user, and the mechanical arm synchronously drove the module to the user’s hand position to realize the active interaction behavior of the virtual pet. Different animation and force feedback could be triggered depending on where the user placed their hands or interactive objects. For instance, as shown in Figure 6b, when the user’s hand or both hands were placed in front of the virtual dog for a period of time, the dog took the initiative to rub against the user’s palm, and the mechanical arm drove the module back and forth to provide force feedback.
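The per-part feedback design can be summarized as a lookup from a region and interaction mode to the animation and arm motion to trigger. The entries below are an illustrative paraphrase of the pet-dog example, not an exhaustive listing of the implemented behaviors:

# Hypothetical feedback table for the pet-dog example: the touched or approached
# region, together with the interaction mode, selects the virtual animation and
# the pre-edited arm motion.
DOG_FEEDBACK = {
    ("face", "passive"): {"animation": "nod", "arm_motion": "arc"},
    ("body", "passive"): {"animation": "body_response", "arm_motion": "gentle_push"},
    ("front", "active"): {"animation": "rub_palm", "arm_motion": "back_and_forth"},
}

def select_feedback(region: str, mode: str) -> dict:
    """Look up the feedback spec for a touched or approached region; default to idle."""
    return DOG_FEEDBACK.get((region, mode), {"animation": "idle", "arm_motion": "hold"})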

4.5. Software

We used Unity3D on the host processor to render the virtual pets, process signals, and make the logical judgments for all kinds of interactions. The processor received the user’s gaze position, hand position, tracker position, and touch signals in real time and judged the user’s interaction intention and behavior. Based on these inputs, the processor determined the current animation to play as well as the commands to send to the servo motor and the mechanical arm. The pseudo-code of the interaction logic is shown in Algorithm 1.
Algorithm 1 MHA system control algorithm.
Input: line of sight L, haptic input H_i, hand–module distance D_h, tracker–module distance D_t, head orientation time threshold t = 5 s, interaction distance threshold d = 10 cm.
Output: mechanical device terminal position P, mechanical device force feedback M, module haptic output H_o, virtual animation feedback A.
for each time of interaction do
    Move P to align the position of L; (Haptic Mapping)
    if H_i != 0 then
        Passive interaction (user-initiated) is detected;
        Activate M feedback, activate H_o feedback, activate A feedback;
    else
        Remain stationary and be ready to move;
        if D_h < d or D_t < d or (the time of L on P) > t then
            Active interaction (user's intent of interaction) is detected;
            Activate M feedback, activate H_o feedback, activate A feedback;
        end if
    end if
end for
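For readers who prefer an executable form, the loop above can be sketched in Python as follows. The system object and its methods are hypothetical stand-ins for the Unity- and Arduino-side implementation:

import time

T_DWELL = 5.0   # head-orientation time threshold t (seconds)
D_NEAR = 0.10   # interaction distance threshold d (metres)

def control_loop(system) -> None:
    """One MHA control cycle per tick, following Algorithm 1."""
    gaze_start = None
    while system.running():
        gaze_region = system.gazed_region()       # region hit by the head-direction ray, or None
        if gaze_region is not None:
            system.move_module_to(gaze_region)    # haptic mapping: align P with L

        if system.touch_detected():               # H_i != 0: passive (user-initiated) interaction
            system.activate_feedback("passive")   # activate M, H_o, and A feedback
            gaze_start = None
        else:
            # Remain stationary and watch for signs of interaction intent.
            if gaze_region is None:
                gaze_start = None
            elif gaze_start is None:
                gaze_start = time.time()
            dwell = time.time() - gaze_start if gaze_start else 0.0
            close = min(system.hand_distance(), system.tracker_distance()) < D_NEAR
            if close or dwell > T_DWELL:          # active interaction detected
                system.activate_feedback("active")
                gaze_start = None
        time.sleep(0.01)                          # ~100 Hz polling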

5. System Evaluation and User Study

To verify that the MHA system has the potential to address both RQ1 and RQ2, we conducted three studies. For RQ1, Study 1 and Study 2 evaluated the modular design method and the haptic mapping method, respectively. For RQ2, Study 3 evaluated how active interaction by the virtual agent enhances the user experience. Study 1 verified whether the MHA system could simulate tactile sensation with a modular unit smaller than the virtual agent while providing the same user experience as a larger haptic prop matching the agent’s size and shape. Study 2 examined the generalization of the system when users touched virtual pet agents of different sizes and shapes. Study 3 tested the effect of passive and active interaction on the users’ interactive experience in encountered-type interaction.

5.1. Participants

In total, 30 participants (10 females and 20 males) aged 21–32 (M = 24.2) were recruited. All participants reported no fear of or resistance toward the virtual pets we designed. We trained each participant for 5–10 min to ensure that they were familiar with the images of the virtual pets and could interact with the MHA system safely.
Each participant took part in all three studies (Studies 1, 2, and 3), and the order of the three studies was randomized for each participant.

5.2. Apparatus and Materials

The virtual environments were displayed with an HTC VIVE Pro. A Leap Motion Controller was attached to the head-mounted display so that users could observe their hands, and an HTC VIVE Tracker was used for tracking interactive props. The experiment ran on a computer with an Intel Core i7 processor and an NVIDIA RTX 3080 graphics card, and the software was implemented in Unity 2019.4.10f1. All virtual pets’ 3D models were obtained from the Unity Asset Store. The module containing all tactile elements for each study was assembled in advance and fixed on a KUKA LBR iiwa R820 mechanical arm, whose end effector could move within a 1 m cubic workspace within 2 s and provide forces of less than 30 N in any direction. An Arduino Uno was selected for signal processing and exchange, and an SG-90 servo motor provided feedback on the module. The touchable haptic props on the module were equipped with three-pin photo sensors to detect touch signals. We planned the movement path of the mechanical arm in advance for the different pets and programmed the arm accordingly. Although we limited the movement speed and force of the robotic arm, participants could still hit the module or the arm if they moved their arms blindly, which could be dangerous. We therefore informed the participants in advance of the area of the virtual pet they were allowed to touch, and we provided instructions both with and without the VR headset.

5.3. Questionnaire

Four main aspects were considered when designing a virtual agent with a sense of companionship: realism, interactivity, attraction, and user emotion [15,73]. We designed our questionnaire based on standardized human–computer interaction questionnaires [74,75,76]. The questionnaire used a 7-point Likert scale, with scores ranging from zero for “strongly disagree” to six for “strongly agree”. Participants filled out the questionnaire after finishing each group of tests. A total of 4 questions × 4 indexes = 16 questions were asked, covering realism, interactivity, attraction, and user emotion, respectively. The three studies shared the same questionnaire to maintain the same evaluation standard. The questionnaire can be found in Appendix A.

5.4. Study 1: Module Design Method and Mapping Method

5.4.1. Experiment Design and Procedure

We adopted a comparative experiment to verify the modular design method and the haptic mapping method. Each participant interacted with a virtual dog in a random group order. Across the groups, the image and animation of the virtual dog were the same; the only difference was the haptic perception. Users interacted with the virtual dog for three to five minutes in each group.
  • NHD group (non-haptic dog): Participants could see the pet dog but could not touch it (see Figure 7a). When their virtual hands collided with the virtual dog, the virtual dog provided animation feedback.
  • HapticDog group (MHA haptic dog): We designed the module in advance and planned the path of the mechanical arm (see Figure 7b). When a user looked at different positions of the dog, the mechanical arm drove the module to move to the corresponding positions of the virtual dog. When their virtual hands touched the virtual dog, the dog provided animation feedback.
  • ST group (stuffed toy): We prepared a plush toy dog similar in shape and size to the virtual image (see Figure 7c), which could be regarded as a 1:1 haptic proxy of the virtual dog based on previous research [77]. The size and shape of the stuffed toy were identical to those of the avatar. Therefore, it was only necessary to position the stuffed toy in the corresponding location.

5.4.2. Results

We collected 30 results from each group (see Figure 8). Except for the three indexes of realism, interactivity, and attraction in the ST group, the data were normally distributed. A Wilcoxon signed-rank test was used to analyze the results.
Significant differences were found between the NHD group and the HapticDog group in realism (Z = −4.60, p < 0.001), interactivity (Z = −4.75, p < 0.001), attraction (Z = −4.46, p < 0.001) and user emotion (Z = −4.58, p < 0.001). Additionally, significant differences were found between the NHD and ST groups in realism (Z = −4.67, p < 0.001), interactivity (Z = −4.66, p < 0.001), attraction (Z = −4.53, p < 0.001) and user emotion (Z = −3.88, p < 0.001). No significant difference was found between the HapticDog group and the ST group in realism (Z = −1.60, p = 0.11), interactivity (Z = −1.12, p = 0.24), attraction (Z = −0.30, p = 0.77) and user emotion (Z = −0.32, p = 0.76).
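Tests of this kind can be reproduced with SciPy; the sketch below uses placeholder arrays rather than the study data, simply to show the two tests applied in this and the following studies:

from scipy.stats import ttest_rel, wilcoxon

# Placeholder per-participant scores for one index under two conditions
# (NOT the study data): one value per participant, paired across conditions.
cond_a = [1.50, 2.00, 2.50, 3.00, 2.25, 1.75]
cond_b = [2.50, 3.50, 4.50, 5.50, 5.25, 5.25]

# Wilcoxon signed-rank test for paired samples that may not be normal (Study 1).
w_stat, w_p = wilcoxon(cond_a, cond_b)

# Paired t-test for normally distributed paired samples (Studies 2 and 3).
t_stat, t_p = ttest_rel(cond_a, cond_b)

print(f"Wilcoxon: W={w_stat:.2f}, p={w_p:.4f}; paired t-test: t={t_stat:.2f}, p={t_p:.4f}")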

5.4.3. Results Discussion

We found that participants with VR experience rated the ST group lower, whereas participants without VR experience rated it higher, which may be why the three indexes for the stuffed toy did not follow a normal distribution. Some participants were more familiar with haptic interaction in VR environments, and their expectations may have affected their ratings.
In addition, all four indexes improved from the NHD group to the HapticDog group and from the NHD group to the ST group, showing the enhancement provided by haptic interaction compared to the visual-only condition. There was no significant difference between the HapticDog group and the ST group, indicating that a small module combined with the mapping method can achieve a haptic simulation as realistic as a full-size haptic prop.

5.5. Study 2: Simulation of Different Pets

5.5.1. Experiment Design and Procedure

To verify that our system could simulate pets of different sizes and shapes, we designed a pet calf. Each participant interacted with the calf in a random group order.
  • NHC group (non-haptic calf): Participants could see the pet calf but could not touch it (see Figure 7d). When their virtual hands collided with the virtual calf, the calf provided animation feedback.
  • HapticCalf group (MHA haptic calf): We reassembled the module and planned the path of the mechanical arm (see Figure 7e). When a user looked at different positions of the calf, the mechanical arm drove the module to move to the corresponding positions of the virtual calf. When their virtual hands touched the virtual calf, the calf provided animation feedback.
The interaction process and the animation feedback of the virtual calf were the same as those of the virtual dog in Study 1. Users interacted with the virtual calf for three to five minutes in each group. We again used the four indexes of realism, interactivity, attraction, and user emotion in each group and compared the results of Study 2 with Study 1 to examine whether users scored the different agents differently on the four indexes.

5.5.2. Results

The results of the NHD, HapticDog, NHC, and HapticCalf groups were compared together (see Figure 9). The four indexes of all groups were normally distributed. We paired the four indexes across the four groups and conducted several paired t-tests to analyze the differences.
From the comparison of different pet images, no significant difference was found in any of the factors, including realism (t = −1.07, p = 0.29), interactivity (t = −0.50, p = 0.62), attraction (t = −0.92, p = 0.37) and user emotion (t = −1.00, p = 0.33) between the NHD and NHC groups. Additionally, there was no significant difference in the realism (t = −1.38, p = 0.18), attraction (t = −0.30, p = 0.77) and user emotion (t = −1.19, p = 0.24) between the HapticDog and HapticCalf groups. A significant difference was found in the interactivity index (t = −2.80, p = 0.01) between the HapticDog and HapticCalf groups.
From the comparison of haptic feedback, significant differences were found between the NHD and HapticDog groups in realism (t = −8.15, p < 0.001), interactivity (t = −9.32, p < 0.001), attraction (t = −6.86, p < 0.001) and user emotion (t = −7.53, p < 0.001). Meanwhile, significant differences were found between the NHC and HapticCalf groups in realism (t = −8.36, p < 0.001), interactivity (t = −11.00, p < 0.001), attraction (t = −8.27, p < 0.001) and user emotion (t = −6.27, p < 0.001).

5.5.3. Results Discussion

Comparing Study 1 and Study 2, there were no significant differences in the visual aspects of the virtual pet images experienced by the participants, which provides a baseline for haptic interaction with different pets. The participants did perceive a difference in interactivity between the HapticDog and HapticCalf groups. A possible reason is that the virtual calf was larger than the virtual dog, so participants were more likely to observe the calf’s feedback and therefore gave higher interactivity ratings. Within Study 2, the modular design and mapping methods effectively improved all indexes.
These results verify the potential of the MHA system to simulate virtual pets of different sizes and shapes. The proposed modular design method and mapping method have the potential to be extended to the design of other pet agent haptic proxies, and MHA significantly improved users’ virtual experience compared with the conditions without haptic feedback.

5.6. Study 3: Passive Interaction and Active Interaction

5.6.1. Experiment Design and Procedure

To explore the influence of force feedback on passive interaction and active interaction, we adopted another set of comparative studies. To eliminate the influence of the previous two experiments, we adopted another virtual avatar of a duck and reassembled the module. Each participant interacted with the duck in a random group order:
  • NF group (no feedback): When a participant touched different parts of the virtual duck, it responded to the user’s touch with only animation feedback.
  • PF group (passive feedback): When a participant touched different parts of the virtual duck, the mechanical arm drove the module to generate different animation feedback and force feedback for the participant.
  • AF group (active feedback): When the hands of a participant approached different parts around the virtual duck or the participant’s head oriented towards the virtual duck for 5 s, the mechanical arm drove the module to generate different animation feedback and force feedback for the participant.
The participants stood in front of a virtual duck in the virtual environment. We placed two photo sensors on different module patches. In the NF group, when a photo sensor received a touch signal, the duck played the animation corresponding to the touched part. In the PF group, when a photo sensor received a touch signal, the mechanical arm drove the module to provide force feedback, and the virtual duck played the equivalent animation at the same time. In the AF group, in addition to the PF behavior, we tracked the user’s head direction and hand position. When the center ray of the head-mounted display stayed on the bounding box of the virtual duck for 5 s, or the center of one of the user’s palms came within the given threshold (10 cm) of the virtual duck, the mechanical arm drove the module to the user’s palm, and the virtual duck played the equivalent animation at the same time (see Figure 10). For each set of experiments, the virtual duck had the same image and animation. Again, users interacted with the virtual duck for three to five minutes in each group and rated the 16 questions covering all 4 indexes.

5.6.2. Results

We collected 30 results from each of the NF, PF, and AF groups (see Figure 11). The four indexes of the three groups were all normally distributed. Paired t-tests showed significant differences between the NF and PF groups in realism (t = −6.98, p < 0.001), interactivity (t = −6.63, p < 0.001), attraction (t = −5.32, p < 0.001) and user emotion (t = −8.08, p < 0.001). There were further improvements in realism (t = −5.53, p < 0.001), interactivity (t = −4.51, p < 0.001), attraction (t = −2.97, p < 0.001) and user emotion (t = −5.89, p < 0.001) when comparing the AF and PF groups. Additionally, significant differences were found between the AF and NF groups in realism (t = −8.93, p < 0.001), interactivity (t = −7.29, p < 0.001), attraction (t = −5.49, p < 0.001) and user emotion (t = −9.01, p < 0.001).

5.6.3. Results Discussion

The comparison between the NF and PF groups showed that encountered-type haptic passive interaction received better ratings for realism, interactivity, attraction, and user emotion. The comparison between the PF and AF groups showed that the encountered-type haptic active interaction design further improved the user experience when interacting with virtual pets.
It is worth mentioning that in the AF group, the average score of the user emotion index was 5.29 out of 6. For the prompt “it makes me feel happy”, 24 out of 30 participants answered “strongly agree”, indicating that active interactions greatly improved the users’ psychological experience and provided better companionship.

6. Discussion

6.1. Further Analysis

Three user experiments evaluated our designed MHA system from different perspectives, focusing on both the design of different haptic proxies and the active interaction of agents.
For the problem of quickly constructing multiple haptic agents with limited haptic props, Study 1 and Study 2 together validate the modular design method and the haptic mapping method. By modularizing the tactile elements and using a mechanical device to map different parts of the agent, designers can assemble different haptic proxies to simulate the shapes and tactile elements of different agents, thereby reducing the number of haptic props required. At the same time, the design methods of MHA allow designers to build a virtually unlimited variety of haptic proxies with limited props, a design space that can be extended further.
In addition, we implemented haptic interaction actively initiated by the haptic agents. The agents can infer certain interaction intentions of the users from their head movements, hand positions, and interactive object positions, and take the initiative to start a haptic interaction. This active interaction design makes up for the lack of agent-initiated haptic interaction in VR and received better ratings from users. The MHA system improves the realism, interactivity, and attraction of virtual agents, as well as user emotion in VR, and provides better companionship. The design methods of the MHA system can help stakeholders quickly build actively interactive virtual pet agents and have the potential to be applied to the haptic design of various intelligent agents in VR.

6.2. Limitations and Future Work

Based on our design and application demonstrations of the MHA system, we identified several issues and potential future works to discuss the prototyping of haptic proxies.
Our current system uses a 3D-printed module, some handmade patches and props for the pets, and a lightweight robotic arm. The system is expensive because it is still at the prototype stage. Since virtual companion agents may be used in home scenarios in the future, further research may investigate users’ needs for haptic agent design, including simpler methods for patch design and smaller robotic arms or unmanned vehicles to provide encountered-type haptics.
We presented three examples of different pets to illustrate the module design method and haptic mapping method of MHA, showing that our system can be adapted to different agents. Although we achieved haptic interaction at different locations of some pets with one module, the challenge of continuous haptics remains, and careful configuration is required during module assembly and mapping. The current system uses a cube module with a side length of 12 cm, which is sufficient to simulate the tactile sensation of a given part of a virtual pet. However, when stroking a large agent from head to tail, the user’s hand inevitably crosses a 90-degree edge, which may break the immersion. Moreover, accidentally touching haptic props that should not be touched may confuse the user. Future research could therefore explore multiple smaller self-assembling modules or robots to achieve continuous haptics; smaller robots could enable automatic assembly and spatial reorganization during user interaction, yielding a continuous haptic approach.
In the MHA system, the line-of-sight position, hand position, and tracked object position are used together to predict some of the user’s interaction intentions and drive the agent to initiate interaction with the user. However, given the diversity of interaction scenarios and virtual agents, the user’s interaction intention is difficult to predict. In the future, we plan to use a larger model for the companion agent to process multimodal user input, jointly infer the user’s interaction intentions, and provide feedback in a proper manner.
In this work, we use the Leap Motion Controller to capture and present the user’s hand position. Its advantage is that it allows haptic interaction with bare hands; users do not need extra devices on their hands. However, as an infrared detector, the Leap Motion Controller may be affected by the shape of the module. For instance, when a user touches a soft patch, the fingers may sink into the plush; the resulting loss of depth information makes the fingers difficult to recognize. Future work may investigate ways to obtain the hand position more accurately, which would bring a better experience for users.

7. Conclusions

In this paper, we propose the MHA prototype system based on the modular design method, the haptic mapping method, and an encountered-type haptic device, and we demonstrate the system with examples of human–agent interaction. The MHA system has two notable advantages. First, the modular design method and haptic mapping method allow multiple haptic proxies to be constructed with limited props. Second, the haptic agents can initiate interaction with the user through real-time tracking of the user’s gaze, hands, and interactive objects. Study 1 and Study 2 show that the MHA system can achieve tactile simulation of pets of different sizes and shapes with a single module. Study 3 shows that virtual agents with active interaction received better ratings than those with only passive interaction or no haptics in terms of realism, interactivity, attraction, and user emotion, and thus provide better companionship.
We believe that the modular design method and the haptic mapping method for providing haptics at different locations are effective for the design of haptic proxies and can be extended to other types of haptic proxy design. In addition, the encountered-type active interaction design expands the design space of haptic feedback and can provide better companionship in virtual environments. We expect that our system and demonstrations will help future researchers provide better and more diverse experiences in VR.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/electronics12092069/s1, Video S1: A Modular Haptic Agent System with Encountered-Type Active Interaction.

Author Contributions

Conceptualization, D.W. and H.J.; methodology, X.D. and H.J.; software, X.D.; validation, X.D. and L.F.; formal analysis, X.D. and H.J.; investigation, X.D. and L.F.; resources, X.D. and L.F.; data curation, X.D. and H.J.; writing—original draft preparation, X.D.; writing—review and editing, X.D., D.W., H.J. and L.F.; visualization, X.D., H.J. and L.F.; supervision, D.W. and H.J.; project administration, D.W.; funding acquisition, D.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Key-Area Research and Development Program of Guangdong Province (No. 2019B010149001), the National Natural Science Foundation of China (No. 62072036, No. 61902026), and the 111 Project (B18005).

Institutional Review Board Statement

The authors’ institution does not require ethical review board approval for non-interventional studies.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data can be made available upon request from the authors.

Acknowledgments

We would like to thank Zhenliang Zhang for his assistance in our paper revision and Xinnan Song’s tutorial on 3D printing.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MHA  Modular Haptic Agent
VR   Virtual Reality

Appendix A. System Evaluation Questionnaire

We first investigated the background information of the participants, including age, gender, VR-use experience, fear of pets, etc. We then used the following questionnaire in Study 1, Study 2, and Study 3. Participants were asked to rate each question on a scale of 0–6. All the subjects received compensation for finishing all the experiments.
Q1: This virtual pet is real.
Q2: This virtual pet is interesting.
Q3: I like this virtual pet better than the real one.
Q4: We can be good friends.
Q5: I’m happy with this virtual pet.
Q6: I feel its sense of touch.
Q7: I feel like it’s alive.
Q8: I feel involved with it.
Q9: I want to own a virtual pet like this.
Q10: I want its company.
Q11: This virtual pet is smart.
Q12: I find this virtual pet attractive.
Q13: I enjoy playing with it.
Q14: I notice its feedback.
Q15: I want to spend time with it.
Q16: It makes me feel happy.
These 16 questions evaluate the virtual pet interaction from four perspectives. The four major indexes are realism, interactivity, attraction, and emotion.
Questions 1–4 assess the realism of the virtual pet. Questions 5–8 assess its haptic interactivity. Questions 9–12 assess its attraction for participants. Questions 13–16 assess its effect on participants’ emotions. The average score of the four questions in each group constitutes a participant’s rating for that index.
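For clarity, the sketch below (ours, not code from the paper; function and variable names are illustrative) shows how the four index scores can be computed from the 16 ratings described above.

```python
# Minimal sketch of the questionnaire scoring described above.
# Assumes `ratings` is a list of 16 integers (0-6), ordered Q1..Q16.
# Grouping follows the text: Q1-4 realism, Q5-8 interactivity,
# Q9-12 attraction, Q13-16 emotion. Names are illustrative.

from statistics import mean

INDEXES = ("realism", "interactivity", "attraction", "emotion")

def score_participant(ratings):
    if len(ratings) != 16:
        raise ValueError("expected 16 ratings (Q1-Q16)")
    return {
        name: mean(ratings[i * 4:(i + 1) * 4])
        for i, name in enumerate(INDEXES)
    }

# Example: one participant's answers to Q1..Q16
print(score_participant([4, 5, 3, 4, 5, 5, 4, 6, 3, 4, 4, 5, 5, 6, 5, 6]))
```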

Figure 1. The modular haptic agent (MHA) is a modular encountered-type haptic interaction prototype system that enables the simulation of various shapes and interactions through a single module. (a) A haptic module to simulate different tactile elements. (b) The mechanical-device-based haptic mapping for simulating a large agent with a small mobile module. (c) The virtual pets’ agent-initiated haptic interactions based on the MHA. (d) The physical–virtual correspondence of the proposed system.
Figure 2. Classification of tactile elements of common pets. Almost all pets have very similar tactile elements. By deconstructing the tactile elements of virtual pets, designers can use the modular design method to build haptic proxies. The red boxes represent the positions of the virtual pets corresponding to the haptic props.
Figure 3. Examples of different module assembling and mapping. Different colors on the virtual pets’ images correspond to the tactile elements on the module. (a) The small pet duck and its mapping method; (b) the medium-sized pet dog and its mapping method; (c) the large pet calf and its mapping method.
Figure 4. The Modular Haptic Agent prototype system. The head-mounted display obtains the user’s line-of-sight information. The Leap Motion Controller obtains the user’s hand position. The tracker on the interacting object obtains its position and rotation. Sensors on the module take haptic input from the user and, after processing by a microcomputer, the touch signal is fed to a processor. After receiving all inputs, the processor determines the user’s passive and active interaction intent and aligns the MHA module with the virtual avatar. Visual feedback is given to the user via the head-mounted display, haptic output is given to the user via the module, and force feedback is given to the user via the mechanical device. L is the line-of-sight position, D_h is the hand–module distance, D_t is the tracker–module distance, H_i is the haptic input received by the sensors, H_o is the haptic output provided by the servo on the module, A is the animation feedback of the virtual agent, P is the position of the robot arm end, and M is the force feedback provided by the robot arm.
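To make the data flow in Figure 4 more concrete, the following sketch (ours, not code from the paper; all device wrappers, thresholds, and helper names are illustrative placeholders) outlines one iteration of a processing loop that maps the inputs L, D_h, D_t, and H_i to the outputs H_o, A, P, and M.

```python
# Illustrative sketch of one processor update step, assuming hypothetical
# device wrappers (hmd, hand_tracker, object_tracker, module, robot_arm).
# Thresholds and helper names are placeholders, not values from the paper.

GAZE_DWELL_S = 1.0   # assumed dwell time before gaze counts as intent
NEAR_HAND_M = 0.25   # assumed hand-module distance for active interaction

def update(hmd, hand_tracker, object_tracker, module, robot_arm, agent, dt, state):
    L = hmd.gaze_ray()                       # line-of-sight ray
    d_h = hand_tracker.distance_to(module)   # hand-module distance D_h
    d_t = object_tracker.distance_to(module) # tracker-module distance D_t
    h_i = module.read_sensors()              # haptic input H_i

    # Passive interaction: the user is already touching the module.
    if h_i.touched:
        agent.play_animation("react_to_touch")   # animation feedback A
        module.drive_servo(h_i.contact_region)   # haptic output H_o

    # Active interaction: gaze dwell on the avatar, or an approaching hand
    # or tracked object, triggers the agent to move toward the user.
    state.dwell = state.dwell + dt if L.hits(agent.bounding_box) else 0.0
    if state.dwell >= GAZE_DWELL_S or d_h < NEAR_HAND_M or d_t < NEAR_HAND_M:
        p = agent.contact_point_near(hand_tracker.palm_position())
        robot_arm.move_end_effector(p)           # arm end position P, force feedback M
        agent.play_animation("approach_user")    # animation feedback A
```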
Figure 5. Examples of patchwork. (a) Haptic input (for head-touch scenario) through the holder patch, the photo sensor, and the haptic prop of the cambered surface; (b) Haptic output (for feeding scenario) through the slotted patch, the micro servo, and the haptic prop of a sharp corner.
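As a rough illustration of the two patch types in Figure 5, the sketch below shows how a module-side loop might poll a photo sensor for head-touch input and drive a micro servo for the feeding prop; the threshold, timing, and helper functions are assumptions, not details taken from the paper.

```python
# Illustrative only: hypothetical stand-ins for the module's microcomputer I/O.
# The photo sensor and micro servo correspond to the patches in Figure 5;
# the functions below are placeholders.

import time

TOUCH_THRESHOLD = 0.5   # assumed normalized photo-sensor level for "covered"

def read_photo_sensor() -> float:
    """Placeholder: return a normalized 0-1 reading from the holder patch."""
    raise NotImplementedError

def set_servo_angle(degrees: float) -> None:
    """Placeholder: rotate the micro servo on the slotted patch."""
    raise NotImplementedError

def poll_once(send_touch_event) -> None:
    # Head-touch input: the palm covering the photo sensor darkens it.
    if read_photo_sensor() < TOUCH_THRESHOLD:
        send_touch_event("head_touch")   # forwarded to the processor as H_i

def nibble(cycles: int = 3) -> None:
    # Feeding output: wiggle the sharp-corner prop to mimic nibbling (H_o).
    for _ in range(cycles):
        set_servo_angle(20)
        time.sleep(0.15)
        set_servo_angle(0)
        time.sleep(0.15)
```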
Figure 6. Examples of animation feedback and force feedback for passive and active interaction. (a) In passive interaction, the agent provides feedback only when the user touches the agent; (b) in active interaction, the agent will actively initiate interaction with nearby users.
Figure 7. Configuration of Study 1 and Study 2. (ac) Study 1 used the same image of a virtual dog, while the physical haptic proxies were different. (d,e) Study 2 used the same image of a virtual calf, and the difference was the presence or absence of a haptic proxy.
Figure 8. Subjective results of Study 1. The NHD (non-haptic dog) group served as a control group without haptics. The HapticDog group and the ST (stuffed toy) group achieved similar tactile perception in all four perspectives, showing that a virtual haptic dog can be simulated efficiently via the MHA modular design method and haptic mapping method, which enable the simulation of a large haptic agent using a small module. “***” indicates a significant difference (p < 0.001); “ns” indicates no significant difference (p > 0.05). Error bars indicate the standard deviation of the data.
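The captions of Figures 8, 9, and 11 report pairwise significance levels, but the excerpt here does not name the statistical test used. Purely as a generic illustration (SciPy-based, with made-up data, and not necessarily the test chosen by the authors), a between-group comparison of one index score could be run as follows.

```python
# Generic illustration of a pairwise group comparison for one index
# (e.g., "realism" scores of a haptic group vs. a non-haptic control).
# The data are invented and the choice of test is ours, not the paper's.

import numpy as np
from scipy.stats import mannwhitneyu

haptic_dog = np.array([4.75, 5.0, 4.25, 5.5, 4.5, 5.25, 4.0, 5.0])
nhd        = np.array([3.0, 2.75, 3.5, 2.5, 3.25, 2.0, 3.0, 2.75])

stat, p = mannwhitneyu(haptic_dog, nhd, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")   # p < 0.05 would be marked "*", etc.
```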
Figure 9. Subjective results of Study 2. In each subplot, the non-haptic dog and non-haptic calf groups served as control groups, demonstrating that visual differences did not affect the evaluation results, and the results of the HapticDog and HapticCalf groups were compared to show the difference between users’ scores for different agents. “*” indicates a significant difference (0.01 < p < 0.05); “ns” indicates no significant difference (p > 0.05). Error bars indicate the standard deviation of the data.
Figure 10. Two examples of the AF (active interaction feedback) group in Study 3. The green boxes represent the correspondence between virtual and real spaces. The red arrows represent the direction of movement. (a) When the center ray of the head-mounted display hits the bounding box of the avatar, the mechanical arm moves forward and collides with the user’s palm; (b) when the hand-tracking device detects that the user’s palm is close, the mechanical arm moves down and collides with the user’s palm.
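The trigger in Figure 10a reduces to a ray–box intersection test between the headset’s forward ray and the avatar’s bounding box. A minimal sketch of the standard slab method is given below as our own illustration; in practice, an engine-provided raycast would typically be used instead.

```python
# Minimal slab-method test: does a ray (origin o, direction d) hit an
# axis-aligned bounding box [box_min, box_max]? Illustrative only.

def ray_hits_aabb(o, d, box_min, box_max, eps=1e-9):
    t_near, t_far = 0.0, float("inf")
    for axis in range(3):
        if abs(d[axis]) < eps:
            # Ray parallel to this slab: origin must lie between the planes.
            if not (box_min[axis] <= o[axis] <= box_max[axis]):
                return False
        else:
            t1 = (box_min[axis] - o[axis]) / d[axis]
            t2 = (box_max[axis] - o[axis]) / d[axis]
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
            if t_near > t_far:
                return False
    return True

# Example: gaze ray from the headset, tilted slightly downward toward a
# pet-sized box on the floor in front of the user.
print(ray_hits_aabb((0, 1.6, 0), (0, -0.6, 1), (-0.3, 0.0, 1.5), (0.3, 0.6, 2.2)))  # True
```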
Figure 11. Subjective results of Study 3. The haptic agent that provided passive interaction feedback (PF) received better ratings than the non-haptic feedback (NF) agent in all four perspectives. In addition, the haptic agent that provided active interaction feedback based on user intent (AF) further improved realism, interactivity, attraction, and users’ emotions. “***” indicates a significant difference (p < 0.001). Error bars indicate the standard deviation of the data.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
