Article

Remote Control Device to Drive the Arm Gestures of an Assistant Humanoid Robot

Robotics Laboratory, Universitat de Lleida, Jaume II, 69, 25001 Lleida, Spain
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(19), 11115; https://doi.org/10.3390/app131911115
Submission received: 28 June 2023 / Revised: 1 October 2023 / Accepted: 6 October 2023 / Published: 9 October 2023

Abstract

This work presents a remote control device designed to drive the arm gestures of an assistant humanoid mobile robot. The remote control is a master device with two passive arms configured to replicate the four degrees of freedom of each arm of the original assistant humanoid robot and send this information to the robot. This configuration allows the mobile robot to directly replicate the position of the arms of the remote controller. The objective of this proposal is to provide the robot with enhanced non-verbal and pointing communication capabilities during human interaction or assistance. The master device registers the angular position of each joint of its passive arms and transmits this information to the mobile robot, which replicates it. The experimental evaluation of the system has shown that the humanoid robot is able to successfully replicate any gesture made on the remote controller. The positions of the arms have been sampled every 20 ms, and the average telecontrol delay obtained in the gesture experiments has been 549 ms, without appreciable jumps or irregularities in the gestures. The conclusion is that the direct manipulation of the passive arms of the remote control device provides the APR-02 humanoid robot with enhanced non-verbal and pointing communication capabilities during human interaction or assistance.

1. Introduction

The technological development of autonomous mobile robots is leading to increasingly capable devices [1], but there are still many applications that require supervision or direct operator guidance in unknown and unconstrained environments [2].
Teleoperation is the extension of sense and manipulation capacities to a distant location [3]. Teleoperated robots are devices remotely controlled by operators in distant environments [4,5,6,7,8] to collect information or carry out tasks. In this context, telepresence provides people with the feeling of being present in a distant environment [9].
Teleoperation remains essential in some critical robot applications [10]. Even fully autonomous mobile robots may require some degree of supervision, teleoperation or telepresence to complete complex tasks [11], such as search and rescue missions [12], delicate maintenance labor [13] or elderly care [14]. Indeed, in situations where human life depends on the action and precision of robot movements, a human operator controlling the teleoperation device is indispensable [15]. Mimicking human behavior is also useful in simple and repetitive chores [16], and imitation learning allows a mobile robot to be trained to perform tasks from human demonstrations [17].
The development of cooperative tasks is naturally supported via gestures and non-verbal communication because they ensure successful expression and task comprehension [18]. When using teleoperated robots in teleworking and collaboration over distance, non-verbal communication also plays an important role if the robots are capable of expressing gestures [19].
Teleoperated systems can be classified according to the type and direction of the information exchanged between the operator and the robot [20]. According to this criterion, the first teleoperated systems worked under unilateral control, in which there was a one-way communication channel from the operator to the robot. The subsequent introduction of feedback from the robot to the operator resulted in the bilateral control method, and the combination of force feedback with other feedback modalities such as tactile, visual and sound led to the multimodal feedback method.
The most commonly used teleoperation methods can be sorted according to the mode of control into direct, supervisory and multimodal teleoperation [2,3]. The direct or master-slave control mode requires the operator's commands to drive the robot and make decisions. It employs visual feedback from the remote vehicle and traditional controllers such as joysticks for control input. In supervisory teleoperation, the operator monitors the robot and provides assistance when making decisions. In this method, an autonomous control loop provides continuous feedback from the robot, and the control of the system is shared by the robot and the operator, who performs a higher level of overall monitoring [21]. Multimodal teleoperation systems collect information from several sensors, synthesize it and provide a multimodal view of the world to the teleoperator. Multimodal sensor feedback offers efficient control command generation tools and rich situational awareness that help the operator make correct and timely control decisions in dynamic and complex environments [22]. In all telecontrolled systems, the operator is primarily responsible for instructing or monitoring robot tasks in a remote environment.
New teleoperation methods are continually researched, and the devices used vary depending on the application [23,24]. More information can be found in the review on robotic remote control methods based on human motion for realistic interaction in virtual collaboration systems presented by Jung et al. [20]. In summary, robot control techniques allow different degrees of operation intervention; the most widespread include collaborative control [25], mixed-initiative control [26], adjustable autonomy [27] and sliding autonomy [28].

New Contribution

The exact reproduction of human motion by robots is usually a challenging task because of the different proportions, degrees of freedom and joint ranges of humans and robots. Considering the mechanical configuration of the robot joints as a limitation, the new contribution of this work is the development of a low-cost remote control device that replicates the proportions and degrees of freedom of the arms of a humanoid robot. The advantage of this proposal is twofold: it simplifies the task of mapping the gestures for robot replication, and the remote telecontrol is easier because a human operator intuitively maps user input to output when controlling bioinspired devices [29,30].
The device allows a remote operator to define expressive and pointing arm gestures [31], which the humanoid robot then replicates. The device is composed of a mechanical structure with a static T-shaped configuration with two passive articulated arms attached that replicate the shape and degrees of freedom of the telecontrolled robot. The system registers the angular position of each joint using potentiometers, and this information is mapped and transmitted to the robot. Then, the robot arms mimic the angular positions of the joints defined in the remote controller.
The paper is structured as follows. Section 2 describes some background on remote robot control. Section 3 describes the humanoid mobile robot that is remotely controlled. In Section 4, the remote controller is presented in a detailed manner. Section 5 presents the experiments performed to validate the remote controller. Final remarks are given in Section 6.

2. Background on Remote Robot Telecontrol

In the scientific literature, there are a variety of proposals focused on mobile robot telecontrol. Some of these approaches are based on data gathered from the human body, which are adapted offline to the kinematic structure and constraints of the telecontrolled robot [32,33,34,35,36,37,38]. Other approaches are focused on imitating human motion [39] in real time [40]. In some of them, the lower body of the robot does not move [41,42] or is only used for balancing [43,44,45], whereas some applications are focused on providing efficient and stable locomotion [46].
Chen et al. [47] introduced a collaborative project to put assistive robot manipulators into homes to help people with disabilities. Researchers identified assistive capabilities of the PR2 robot and developed a suite of open-source software tools that blend the capabilities of the robot and the user. The system interface consists of a screen with button-click inputs executed with an on-screen cursor controlled by the user via a head tracker. Koenemann et al. [48] proposed a real-time imitation of human body motions for the Nao robot based on registering human motion with an MVN Suit by Xsens (Enschede, The Netherlands). The system maps the motion to the robot end effectors while balancing the center of mass of the robot. In this case, the joints of the operator and the Nao robot are not equivalent, so the human motion had to be calibrated and mapped. Cerón et al. [49] developed a multimodal teleoperated assistive robot with real-time motion mimic. The system uses a NAO robot, a Kinect V2 sensor, a set of Meta Quest virtual reality glasses and Nintendo Switch controllers to implement communication between devices. The robot can operate under two configurations: using the Nintendo Switch controllers to drive the robot in long-distance travel and using the Kinect sensor and the virtual reality glasses to control the arm gestures and head orientation. In this case, the virtual reality glasses are also used to give image feedback to the operator.
Balmik et al. [50] proposed a motion recognition framework based on a deep convolutional neural network for a Kinect-based Nao teleoperation. They developed an adaptive technique that dynamically balances the center of mass of the robot and allows whole-body imitation. The robot recognizes human motions and imitates them with an accuracy of 95%, which demonstrates that the scheme presented is robust and can be used in teleoperated robots. Eirale et al. [51] presented Marvin, an omnidirectional robotic assistant for domestic environments tailored to implement three target service functions: monitoring of elderly and reduced mobility users, remote presence and connectivity and night assistance. The platform can be controlled with a wireless gamepad and uses a lightweight deep-learning solution for visual perception and vocal command processing.
Materna et al. [52] presented a user interface for a semiautonomous assistive robot based on a mixed virtual 3D environment and sensor data. In this case, robot control is achieved using low-cost commodity hardware, with the optional addition of a 3D mouse and stereoscopic display. Moczulski [53] proposed the combination of autonomous and virtual teleoperation technologies in robots to deal with complex situations in which the control system is unable to find the right solution to solve the problem. In such cases, the control to solve the problem is transferred to a remote operator that feels immersed in the operating scene of the robotic system. Su et al. [54] proposed an approach for intuitive and immersive teleoperation of a single robotic manipulator using a mixed reality subspace for visualization. In a similar direction, Lim et al. [55] used six virtual reality trackers to teleoperate the upper body of a humanoid robot. The advantage of input devices implemented in virtual reality is that they are part of the virtual environment and no special input device is required [47,56].
In contrast to the cited methods, the remote control device proposed in this work is based on the use of two passive arms replicating the four degrees of freedom of each arm of the telecontrolled humanoid robot. In general, remote control devices that are geometrically similar and have the same degrees of freedom as human arms and hands are easier to use because the operator intuitively maps user input to output [29,30]. This intuitive configuration simplifies the task of mapping the gestures in the robot, increases the efficiency of the telecontrolled mobile robot and improves teleoperator performance [29,57]. Additional information on input devices can be found in the review on human–machine interfaces for systems with robotic manipulators presented by Young et al. [58].

3. APR-02 Humanoid Robot

The humanoid mobile robot telecontrolled in this work was developed under the Assistant Personal Robot (APR) concept described in [59,60]. The first prototype developed under the APR concept was the APR-01 mobile robot, which was originally designed as a teleoperated robot [59]. The next APR-02 mobile robot prototype was designed as an autonomous robot with teleoperation capabilities [60]. The APR-02 mobile robot has been applied as a walk-helper [61] and used as a tool for systematic odometry error correction [62,63]. In the walk-helper application, the height and weight of the robot provided physical support and guidance to people walking, using the arms as a holding support.
The APR prototypes include communication features such as audio and video connection between the local user and the remote operator combined with omnidirectional mobility. The APR-02 mobile robot includes a set of improvements to increase the anthropomorphism and affinity with the robot. The mobile robot includes a vertical tactile screen to display an iconic face with an animated mouth and eyes that follow the user to enhance the sense of attention from the robot [64]. The robot generates short characteristic sounds to express salutations and provide acceptance or rejection feedback. It has two articulated arms with four degrees of freedom each: three in the shoulder and one in the elbow. The servomotors used in each joint of the arms are digital bus servos (Dynamixel AX-12) that operate in a closed loop to manage arm motion robustly in terms of fault-tolerant control [65]. Figure 1a shows the APR-02 mobile robot used in this paper moving its hands to knock at a door, and Figure 1b shows the same scene from the teleoperator perspective, captured with a panoramic camera located on the head of the robot. Currently, the hands of the APR-02 robot are figurative and fixed because the use of hands for object manipulation is not usually required in social telepresence robots [19].
In a first stage, the mobile robot gestures were controlled with an online imitation system using a finite number of prerecorded gestures. This work expands the gesture control options by allowing a human operator to directly control the pose of the arms of the robot.

4. Remote Control Device

The mechanical structure of the remote control device developed in this work is a scaled reproduction of the upper part of the APR-02 robot. This mechanical structure has a fixed T-shaped configuration with two mobile arms attached at the ends. Each arm replicates the degrees of freedom of one robot arm and includes a gamepad at the end of the forearm, providing direct access to some mobile robot functions. Figure 2 shows the kinematic representation of the remote control device, which coincides with the kinematics of the robot. This kinematic model is a simplified interpretation of the human arm: for example, the robot elbow has only one degree of freedom and the wrist is fixed. Hence, the remote control device and the mobile robot do not allow an exact imitation of human arm motions.
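As an illustration only (the names and types below are assumptions, not taken from the device firmware), the joint state shared by the remote control device and the robot can be summarized as four angles per arm, written here as a minimal C sketch:

/* Illustrative joint-state structure: joints 1-3 are the shoulder and
   joint 4 is the elbow, following the numbering of Figure 2. */
#define JOINTS_PER_ARM 4

typedef struct {
    float right_deg[JOINTS_PER_ARM];  /* right-arm joint angles in degrees */
    float left_deg[JOINTS_PER_ARM];   /* left-arm joint angles in degrees  */
} arm_pose_t;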

4.1. Electronic Control Board and Sensors

The electronic control board of the remote control device is based on a microcontroller responsible for reading the sensors and sending the commands to the robot. Figure 3 shows the schematic diagram of the board and its connections with the sensors. The microcontroller used is the STM32F407G from STMicroelectronics (Geneva, Switzerland). This microcontroller includes an ARM Cortex-M4 32-bit processor with a clock frequency of 168 MHz. The chip also packages 1 MB of non-volatile flash memory for user code and 196 kB of internal SRAM. The STM32F407G provides a wide variety of hardware peripherals such as precision timers, USART, I2C, SPI, CAN, and USB OTG communications, as well as three 12-bit Analog to Digital Converters (ADC) that can be configured to sample different input pins, two Digital to Analog Converters (DAC), two General Purpose Direct Memory Access (DMA) controllers and 80 General Purpose Input and Output (GPIO) pins. The sensors used are one potentiometer at each arm joint and multiple selection buttons on the auxiliary gamepads placed in the hand regions.
The joints’ angular positions in the remote control device are measured using 3547S three-turn precision potentiometers manufactured by Bourns, Inc. (Riverside, CA, USA), which have a total resistance of 1 kΩ ± 3% and a maximum dissipated power of 1 W. The potentiometers are connected to a supply voltage of 3.3 V, resulting in a current draw of 3.3 mA. Each potentiometer actuation knob is coupled to the mechanical structure of the arms so that any change in the angular position of the joints causes a change in its resistance. The resistance of each potentiometer is indirectly measured by reading its output voltage using the internal ADCs, which return a value between 0 and 4095 that is calibrated and mapped to an angular position. Each auxiliary gamepad has 8 buttons and a joystick with two additional potentiometers. Each button is connected to an individual GPIO pin and read as a digital input. The potentiometers of the joysticks are also connected to ADC channels.
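A minimal sketch of this calibration mapping is shown below; the read_adc_raw() helper is hypothetical (standing in for the actual firmware routine that reads the internal ADCs), and the per-joint calibration endpoints are illustrative, not values quoted from the paper:

#include <stdint.h>

/* Hypothetical low-level helper: returns the 12-bit ADC conversion result
   (0..4095) for the given channel. */
extern uint16_t read_adc_raw(int channel);

/* Per-joint calibration: raw ADC counts measured at two known joint angles,
   obtained during a calibration step. */
typedef struct {
    uint16_t raw_min;    /* ADC counts measured at angle_min */
    uint16_t raw_max;    /* ADC counts measured at angle_max */
    float    angle_min;  /* joint angle in degrees at raw_min */
    float    angle_max;  /* joint angle in degrees at raw_max */
} joint_cal_t;

/* Convert a raw 12-bit reading into a calibrated joint angle in degrees. */
float joint_angle_deg(const joint_cal_t *cal, uint16_t raw)
{
    float span = (float)cal->raw_max - (float)cal->raw_min;
    float t = ((float)raw - (float)cal->raw_min) / span;  /* 0.0 .. 1.0 */
    return cal->angle_min + t * (cal->angle_max - cal->angle_min);
}

Storing two measured endpoints per joint allows mounting offsets of the potentiometer shafts to be absorbed by the calibration rather than by the mechanics.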
Table 1 shows the sensors and the sampling periods used to read their outputs. The microcontroller samples the position of the joints every 20 ms and sends this information to the mobile robot every 20 ms. Other information, such as the state of the buttons of the auxiliary gamepads, is sampled every 100 ms but is only submitted to the robot when it changes, as it is only used to activate specific functionalities in the mobile robot. One of the joysticks in the auxiliary gamepads is used to control the motion of the mobile robot (forward, backward, left and right displacements) and is sampled every 100 ms. The other joystick is used to control the gaze of the eyes of the robot, which is updated every 20 ms in order to synchronize the motion of the gaze and the arms. The submission rate can be decreased if arm movement becomes discontinuous.
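The sampling scheme of Table 1 can be sketched as the following polling loop; the millis() tick and the read/send helpers are hypothetical placeholders for the actual firmware routines, and only the 20 ms and 100 ms periods come from the text:

#include <stdint.h>

extern uint32_t millis(void);                       /* free-running ms tick */
extern void read_joint_angles(float angles[8]);     /* 8 calibrated joints  */
extern void read_buttons(uint8_t *state);           /* gamepad button mask  */
extern void send_joint_frame(const float angles[8]);
extern void send_button_event(uint8_t state);

void control_loop(void)
{
    uint32_t next_joint = 0, next_buttons = 0;
    uint8_t last_buttons = 0;

    for (;;) {
        uint32_t now = millis();

        if (now >= next_joint) {                    /* every 20 ms */
            float angles[8];
            read_joint_angles(angles);
            send_joint_frame(angles);               /* always transmitted */
            next_joint = now + 20;
        }
        if (now >= next_buttons) {                  /* every 100 ms */
            uint8_t state;
            read_buttons(&state);
            if (state != last_buttons) {            /* submitted only on change */
                send_button_event(state);
                last_buttons = state;
            }
            next_buttons = now + 100;
        }
    }
}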

4.2. Humanoid Robot Joint Replication

Figure 4a shows the shoulder of the APR-02 robot, which has three degrees of freedom (DOF) implemented with servomotors. The view is partially oriented from one side and below to highlight the disposition of the three servomotors in the left shoulder of the robot. The three-DOF motion simulating a shoulder is achieved by concatenating the three servomotors: a motor shaft is fixed to a 3D-printed part, which in turn is attached to the housing of another motor. Hence, the first motor is housed inside a part that is fixed to the central structure of the robot, the second motor is attached to the shaft of the first one, and the third motor is attached to the shaft of the second one. The shaft of the third motor is then connected to a 3D-printed part that represents the robot arm.
The shoulder of the remote control device acquires three DOFs using the same method described above, but with potentiometers instead of motors. The casing of the first potentiometer is fixed to a 3D-printed part that is statically joined to the main structure of the device. The second potentiometer is fixed to the shaft of the first one, and the third potentiometer is fixed to the shaft of the second one using 3D-printed parts. Finally, a part simulating the arm of the remote control device is fixed to the shaft of the third potentiometer. Figure 4b shows the CAD design of the shoulder of the remote control device with the same view orientation as Figure 4a. Each independent 3D-printed part is represented in a different color to enhance comprehension, potentiometers are represented in green, and the main body of the structure is grey. Figure 4c shows the implementation of the shoulder in the remote control device with the same view orientation as Figure 4a. In this case, the 3D-printed parts and potentiometers are black.
Figure 5a shows the elbow of the APR-02 robot, which has only one degree of freedom implemented with one servomotor; the elbow is shown in a slightly bent position. The motion is achieved by fixing the servomotor casing to the robot arm with a 3D-printed part and joining the motor shaft to the forearm with another 3D-printed part. The elbow of the remote control device is assembled in the same way: fixing the potentiometer to the arm and its shaft to the forearm. Figure 5b shows the CAD design of the elbow of the remote control device, with the arm represented in yellow, the forearm in grey, and the potentiometer in green. Figure 5c shows the final elbow implementation; the arm, forearm and potentiometer are black.

4.3. Hand Gamepads

The arms of the mobile robot include figurative representations of human hands that are not articulated. Instead of replicating the figurative hands, the remote control device includes a gamepad at the end of each arm. Figure 6 shows the gamepads, which have multiple buttons and one joystick each. The buttons provide access to some specific mobile robot functionalities without releasing the arms from the device during robot telecontrol. Figure 6a shows the functions associated with each button in the right hand and Figure 6b in the left hand. For example, the gamepad corresponding to the robot’s right arm (Figure 6a) allows telecontrolling the motion of the robot, and the gamepad corresponding to the robot’s left arm (Figure 6b) allows controlling the eye movements, the facial expression and the sounds emitted by the robot, as well as the activation and deactivation of the robot arms and the on-screen camera. In both controllers, there is an emergency stop button and a button to emit a warning sound in case of emergency. The inclusion of these two gamepads increases the versatility of the remote control device presented because it enables a direct control of the main robot functionalities.

4.4. Device Implementation

Figure 7 shows the CAD design and the final implementation of the remote control device. The assembly is composed of a base that supports a central T-shaped mechanical structure with two arms and an electronic control board attached. The fixed structure is made of an aluminum structural profile with 3D-printed parts fastened to hold the arms. The electronic control board is inside a casing that is joined to the top of the main structure. The connections between the electronics, the potentiometers and the gamepads are made via bus cables. The distance between both shoulders is 0.43 m, their height from the base is 0.52 m, and the total length of the arms is 0.42 m. These dimensions allow the teleoperator to control the robot by standing behind the device and using its arms as an extension of their own arms.

4.5. Imitation Model

Motion-based remote control is an important technology for intuitive interaction with remotely placed robots [20]. An imitation model needs to be defined to achieve a perceptive system that successfully captures motion commands and translates them to the robot. The imitation model implemented in the remote control device involves the reproduction of the motion of the master device by the humanoid robot.
The sense of embodiment in teleoperation applications contributes to making the operator feel present in the environment where the robot is [66]. Some embodiment components include the sense of ownership, agency and self-location, which can be increased through visual feedback [67]. The combination of the sense of embodiment, communication quality, robustness of the robot control system and experience of the operator determine the effectiveness of the control achieved [67]. In this work, the sense of embodiment is achieved through visual and auditory feedback. To this end, the APR-02 mobile robot includes a panoramic camera located on the top of the robot and its screen can be used to represent an iconic face or the teleoperator’s face through an additional videoconference application.
Figure 8 shows the imitation model proposed to telecontrol the APR-02 mobile robot. The teleoperator moves the arms of the remote control device and the mobile robot imitates the motion of the arms. The imitation model includes four stages: joint position measurement every 20 ms, mapping, data transmission, and execution by the robot. The positions of the four joints of each arm in the remote control device are measured as a direct reference for the imitation process. The advantage of a remote control device that replicates the joints of the mobile robot is that there is no need to compute the inverse kinematics of the arms in order to properly map the position of the arms of the mobile robot. Several communication systems are available: a wired USB connection for short-distance telecontrol, a Zigbee wireless personal area network (WPAN) for telecontrol distances from 10 to 100 m, a WiFi internet connection combined with videoconference communication for telepresence, and a 4G/5G wireless broadband cellular network as an internet access point [68,69] in case the remote controller, the robot or both devices do not have access to a WiFi network. The use of a wireless communication system introduces a characteristic communication delay that affects the telecontrol of the arms and the remote videoconference established between the mobile robot and the remote control device. A pending future work is the implementation of direct 5G device-to-device communication [70,71] between the remote control device and the mobile robot in order to take advantage of its low communication delay.
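On the robot side, the execution stage can be illustrated with the sketch below, which clamps each received target to the joint range and forwards it to the corresponding closed-loop servo. The servo_set_goal() helper and the mapping from degrees to Dynamixel AX-12 goal-position counts (0..1023 over roughly 300 degrees, centered at 150 degrees) are assumptions used only for illustration, not the actual APR-02 firmware:

#include <stdint.h>

/* Hypothetical robot-side helper that writes the goal position of one
   closed-loop servo over the digital servo bus. */
extern void servo_set_goal(int joint_id, uint16_t goal);

typedef struct {
    float min_deg;  /* lower joint limit in degrees */
    float max_deg;  /* upper joint limit in degrees */
} joint_limit_t;

static float clampf(float v, float lo, float hi)
{
    return (v < lo) ? lo : ((v > hi) ? hi : v);
}

/* Execute one received frame: clamp each target angle to the robot joint
   range and forward it to the corresponding servo. */
void execute_frame(const float target_deg[8], const joint_limit_t limits[8])
{
    for (int i = 0; i < 8; i++) {
        float deg = clampf(target_deg[i], limits[i].min_deg, limits[i].max_deg);
        /* Assumed conversion: -150..+150 degrees mapped to 0..1023 counts. */
        uint16_t goal = (uint16_t)((deg + 150.0f) * 1023.0f / 300.0f);
        servo_set_goal(i, goal);
    }
}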

5. Experimental Evaluation

This section describes the experimental evaluation of the remote control device driving the arm gestures of the APR-02 mobile robot. The proposed system is evaluated regarding the joint position similarity between the remote control device and the humanoid robot. The experimental evaluation is focused on the time delay of the imitation process, whereas other factors also studied in legged humanoid robots such as standing stability are not analyzed because the APR-02 is a wheeled robot.

5.1. Measurement of Target Joint Positions

Joint trajectories are the evolution of the angular values of each joint over time, and they define the performance of the imitation [39]. The target joint positions established in the remote control device have been monitored to assess the information submitted to the mobile robot.
Figure 9 shows the evolution of the joint angles in the right and left arms measured in the remote controller during a dynamic gesticulation example. The joint positions have been sampled at a frequency of 50 Hz and do not show discontinuities. The joints are numbered as in Figure 2: joints 1, 2 and 3 are in the shoulder, and joint 4 is in the elbow. In the gesturing example represented, the arms of the remote control device made three inward circles in the air, with the arms raised (almost parallel to the ground) and the elbow slightly bent. This circular profile can be appreciated in Figure 9, where each ascending curve corresponds to the start of a new turn. The comparison of the joints’ angle profiles in the right and left arm registering the same symmetric gesture shows similarities between both arms, but they are not equal because the operator is focused on performing a natural gesticulation rather than generating a perfectly symmetric movement. The small differences and irregularities between the movements of the two arms are what make the robot movement feel more natural and familiar.

5.2. Measurement of the Structural Delay of the Imitation Model

The structural delay is the time required by the mobile robot to replicate the position of the joints submitted by the remote controller device. The structural delay depends on the closed-loop control applied to drive the arms of the mobile robot and does not depend on the communication network.
Figure 10 and Figure 11 show the evolution over time of the target joint angles of the right and left arms of the mobile robot (solid lines) during a dynamic gesticulation, and also the real evolution of the joint angles of the robot (dotted lines). The joints are numbered as in Figure 2. In all experiments, the starting position of the arms of the remote controller device and the humanoid robot corresponds to a resting position in which the arms are extended and hanging. This initial position is called the neutral position in this work.
In the gesturing example represented in Figure 10, the arms of the robot perform four inward circles in the air, with the arms raised (almost parallel to the ground) and the elbow slightly bent. The dynamic joint evolution generated by this circular gesture is difficult to imitate because the circular profile is always accelerating and decelerating the motors of the robot joints. Similarly, in the gesturing example represented in Figure 11, the arms of the robot perform three lateral displacements in the air: starting from the neutral position, moving upwards and diagonally to the left, and then horizontally to the right, returning to the left and repeating to the right.
The analysis of the information displayed in Figure 10 and Figure 11 shows that the joint positions received by the robot are the same as those submitted by the remote control device, without discontinuities or jumps. Additionally, the comparison of the evolution of the target joint (solid lines) and the real robot joint (dotted lines) reveals that the structural delay of the humanoid robot replicating arm gestures ranges from 120 to 540 ms, depending on the joint and movement performed, with an average structural delay of 274 ms.
The arms of the robot are programmed to reach all target joint positions, but the structural delay may reduce the amplitude of the motion when the direction of the motion changes. In the case of the periodic gestures shown in Figure 10 and Figure 11, this effect reduces the amplitude of the joint evolution performed by the robot with respect to the target joint evolution.
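One simple way to estimate such a delay from the recorded curves is to find the time shift that best aligns the target and executed trajectories of a joint. The sketch below illustrates this idea with a minimum-error lag search; this procedure is an assumption made for illustration and is not necessarily the method used to obtain the values reported above:

#include <stddef.h>

/* Estimate the lag (in samples) between a target joint trajectory and the
   trajectory actually executed by the robot as the shift that minimizes the
   mean squared error. Assumes max_lag < n. With a 20 ms sampling period,
   the delay in milliseconds is lag * 20. */
int estimate_lag(const float *target, const float *actual, size_t n, int max_lag)
{
    int best_lag = 0;
    float best_err = -1.0f;

    for (int lag = 0; lag <= max_lag; lag++) {
        float err = 0.0f;
        size_t count = n - (size_t)lag;
        for (size_t i = 0; i < count; i++) {
            float d = actual[i + (size_t)lag] - target[i];
            err += d * d;
        }
        err /= (float)count;
        if (best_err < 0.0f || err < best_err) {
            best_err = err;
            best_lag = lag;
        }
    }
    return best_lag;   /* multiply by the 20 ms sampling period for the delay */
}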

5.3. Measurement of the Telecontrol Delay

In networked communication, the communication delay or latency is the time delay between sending a control command and obtaining an output response [2]. The communication delay depends on the time required to pass via the communication channel established between the remote control device and the mobile robot [72]. The total telecontrol delay during a robot telecontrol is twofold, caused by the communication delay (time required to send target information from the remote control device to the robot) and the structural delay (time required by the robot to imitate the joint positions received).
In a previous study, the average latency measured in the WiFi network available in the experimentation area was 266 ms [73], showing low variability. Therefore, the combination of this communication latency (266 ms) and the structural delay (274 ms) means that an average telecontrol delay of approximately 540 ms must be expected during mobile robot telecontrol. In general, it is assumed that a teleoperator remotely tracking a trajectory with a robotic manipulator will show compromised performance if the cumulative latency exceeds 300 ms [74], a threshold beyond which task efficiency decreases when grasping objects [20]. However, this is not the case in this application because the telecontrol of the humanoid robot is designed for arm gesture imitation and pointing during human interaction rather than object tracking and grasping. In such applications, latency variability can be more problematic than average latency when using gestures for non-verbal communication [75].
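Expressed as a simple delay budget with the values above:

expected telecontrol delay ≈ communication delay + structural delay = 266 ms + 274 ms = 540 ms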
The total average telecontrol delay has been measured by comparing the motion of an operator moving the arms of the remote control device and the motion of the arms of the humanoid robot. Some of the gestures assessed include upwards, downwards, lateral and circular movements, waves and claps. In all experiments, the starting position of the arms is the neutral position.
Table 2 shows a sequence of frames extracted from a video recording of an upwards movement. The relative time-lapse between the frames displayed is approximately 0.42 s. In each picture, a circle has been manually drawn to mark the position of the arms of the human operator and the robot, and the trajectory between frames has been depicted with a spline line. Blue color has been used to identify the right arms of the robot and the user, while green color has been used for the left arms. The lines help to appreciate the time delay between the remote control device commands and the robot’s execution of them. Although this gesture requires rapid joint acceleration and deceleration, the robot was able to replicate this upwards gesture without any noticeable problem or difficulty, showing an average telecontrol delay of 680 ms.
Similarly, Table 3 shows a sequence of frames extracted from a video recording of an upwards and lateral movement, in which the arms start at the neutral position, move diagonally slightly upwards and to the left, and then move to the right horizontally. The relative time-lapse between frames is 0.42 s, circles are also superimposed on the arms of both devices, and curved lines join the positions in each frame. Again, although this gesture requires rapid joint acceleration and deceleration, the mobile robot was able to replicate this gesture without any noticeable problem or difficulty, showing an average telecontrol delay of 445 ms.
Finally, Table 4 summarizes the structural delay of the robot and the total telecontrol delay measured while performing nine arm gestures. The gestures analyzed are upwards, upwards-lateral, downwards, circular-inwards, circular-clockwise, wave and clap. The movement column of Table 4 provides a short description of the gesture performed. To simplify the interpretation of the results, the structural delay (established in the experiment described in Section 5.2) is assumed as a fixed value in all the gesture experiments. The telecontrol delay column shows the motion delay observed by visually comparing the gesture of the human operator and that of the robot. The telecontrol delay ranges from 340 ms to 900 ms with an average value of 549 ms, which is very similar to the value estimated from the average structural delay and the average communication delay (540 ms). This delay does not compromise the dynamic visual perception of the remote operator controlling the humanoid robot or the user interacting with the robot. A video showing the telecontrol of the APR-02 robot can be found in [76] and in the Supplementary Materials.

6. Discussion and Conclusions

This work presents a remote control device designed to drive the arm gestures of an assistant humanoid robot. The remote control is a master device that has been implemented with two passive arms replicating the four degrees of freedom of each arm of the humanoid robot. The electronic control board of the master device is based on a microcontroller that senses the position of each passive joint by using potentiometers and then sends this information to the humanoid robot for direct arm gesture replication. The remote control device also includes two gamepads instead of hands in order to control the motion and head of the mobile robot without releasing the arms of the device during telecontrol.
The imitation model implemented includes four stages: joint position measurement, mapping, data transmission and execution by the robot. The positions of the joints of each arm in the remote control device are measured and submitted every 20 ms as a direct reference for the imitation process. The advantage of a remote control replicating the joints of a humanoid robot is that there is no need to compute the inverse kinematics of the arms to properly map the position of the arms of the robot. This exact joint replication avoids the inverse kinematic problems reported by Koenemann et al. [48] when controlling the position of the end effector of a humanoid robot using a complete human model as a reference. The imitation model is compatible with a half-duplex or full-duplex videoconference call to fully develop a telepresence application. To this end, the robot includes a high-resolution panoramic camera on its top and three RGB-D cameras placed at different heights.
The experimental evaluation of the remote control device has been performed by comparing the time-evolution of the joints’ angles of the arms of the device and of the APR-02 humanoid robot. A first experiment was conducted to monitor the joint information registered by the remote control device before submitting it to the mobile robot. This monitoring showed no jumps or irregularities while the remote control device sampled and submitted the position of the joints and other information every 20 ms.
A second experiment was conducted to measure the average structural delay of the mobile robot to replicate the position of the joints received from the remote controller device. This structural delay only depends on the current closed-loop control applied to drive the arms of the mobile robot and does not depend on the communication network. The results show that the APR-02 mobile robot required from 120 to 540 ms to replicate the position of the arms of the remote control device, which represents an average structural delay of 274 ms. The evaluation of this structural delay has also shown that the use of a target sampling time of 20 ms in the remote controller device does not generate discontinuities or data loss during the replication of the gestures.
A third experiment was conducted to measure the total telecontrol delay, which cumulatively includes the communication delay required to send the target information from the remote control device and to receive it on the robot plus the structural delay required by the robot to replicate the joint positions received. The total telecontrol delay has been estimated by comparing the gesture established in the remote control device and the gesture of the humanoid robot. Some of the gestures assessed include upwards, downwards, lateral and circular movements, waves and claps. This experiment has shown that the humanoid robot is able to replicate any target gesture successfully. The telecontrol delay measured while performing these gestures was found to be in a range from 340 ms to 900 ms with an average value of 549 ms. Although time delays and data loss during networked communication are inevitable [20], the results showed no jumps, discontinuities or data loss during the development of all the gesture experiments performed in this work. At this point, it should be noted that the communication delay in a remote telecontrol application depends on the quality and performance of the communication network [67,77], which can be significantly enhanced, for example, by using 5G device-to-device communication [70,71].
In this work, the teleoperator performing the gestures tried to perform all movements at a normal (not too fast and not too slow) human speed. However, the highest telecontrol delays were obtained in the cases in which the operator moved the arms upwards and circularly (inwards), which were the fastest gestures assessed. These results agree with the imitation experiments conducted by Koenemann et al. [48], who concluded that gesture error is correlated with the velocity of the gesture. In any case, similarly to the results obtained by Koenemann et al. [48] and Cerón et al. [49] controlling a NAO robot, the communication delay did not compromise the dynamic visual perception of the remote operator controlling the humanoid robot and the user perception while interacting with the robot.
The development of the experiments has validated the importance of the gamepads included in the remote control device as they provide direct access to some common robot functionalities without releasing the hands from the controller. These results agree with Young et al. [58], who stated that the physical control of input devices plays an important role in the link between the robot and the remote operator.
As a final conclusion, the direct manipulation of the passive arms of the remote control device provides the APR-02 humanoid robot with enhanced non-verbal and pointing communication capabilities during human interaction or assistance.

Limitations and Future Work

The main limitation of this proposal is the fact that the remote control device is customized to match the mechanical structure of the APR-02 mobile robot. This limitation can be avoided by using inverse kinematics to compute the joint angles of any specific humanoid robot configuration. Another limitation is that this approach is not designed to control end effectors.
Future work will cover the application of the remote control device to optimize the control and torque demands of each joint and the addition of end effectors in the hands [78,79]. Additional future work will focus on improving the spatial awareness of the teleoperator [80] by using a virtual reality headset and a stereoscopic camera in the mobile robot [81], and on the implementation of direct 5G device-to-device communication [70,71].

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app131911115/s1, Video S1: APR-02 Remote gesture control, also available at https://youtu.be/EmjljtDh3YI (accessed on 27 June 2023).

Author Contributions

Conceptualization, E.R.; Data curation, E.C.; Funding acquisition, J.P.; Investigation, E.R. and J.P.; Methodology, E.C. and J.P.; Supervision, J.P.; Validation, E.R.; Writing—original draft, E.R. and R.B.; Writing—review and editing, J.P. All authors have read and agreed to the published version of the manuscript.

Funding

The research was supported by predoctoral grants from the Departament de Recerca i Universitats de la Generalitat de Catalunya and the European Social Fund Plus: AGAUR FI SDUR 2022 and AGAUR FI Joan Oró 2023.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

This work does not report any data.

Acknowledgments

The authors acknowledge the engineering work done to build the device by Albert Farré and David Martínez.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Royakkers, L.; van Est, R. A literature review on new robotics: Automation from love to war. Int. J. Soc. Robot. 2015, 7, 549–570. [Google Scholar] [CrossRef]
  2. Moniruzzaman, M.; Rassau, A.; Chai, D.; Islam, S.M.S. Teleoperation methods and enhancement techniques for mobile robots: A comprehensive survey. Robot. Auton. Syst. 2022, 150, 103973. [Google Scholar] [CrossRef]
  3. Fong, T.; Thorpe, C. Vehicle teleoperation interfaces. Auton. Robot. 2001, 11, 9–18. [Google Scholar] [CrossRef]
  4. Weber, B.; Balachandran, R.; Riecke, C.; Stulp, F.; Stelzer, M. Teleoperating Robots from the International Space Station: Microgravity Effects on Performance with Force Feedback. In Proceedings of the International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019. [Google Scholar] [CrossRef]
  5. Dimitrov, V.; Padir, T. A comparative study of teleoperated and autonomous task completion for sample return rover missions. In Proceedings of the IEEE Aerospace Conference, Big Sky, MT, USA, 1–8 March 2014. [Google Scholar] [CrossRef]
  6. Watanabe, M. Decontamination and concrete core sampling by teleoperated robot at Fukushima Daiichi reactor buildings. In Proceedings of the International Conference on Nuclear Engineering: Nuclear Power—Reliable Global Energy, ICONE, Chiba, Japan, 17–21 May 2015. [Google Scholar]
  7. Qian, K.; Qian, K.; Song, A.; Bao, J.; Zhang, H. Small teleoperated robot for nuclear radiation and chemical leak detection. Int. J. Adv. Robot. Syst. 2012, 9, 73. [Google Scholar] [CrossRef]
  8. Vitanov, I.; Farkhatdinov, I.; Denoun, B.; Palermo, F.; Otaran, A.; Brown, J.; Omarali, B.; Abrar, T.; Hansard, M.; Oh, C.; et al. A Suite of Robotic Solutions for Nuclear Waste Decommissioning. Robotics 2021, 10, 112. [Google Scholar] [CrossRef]
  9. Minsky, M. Telepresence (Essay). In OMNI Magazine; OMNI Publications International Ltd.: New York, NY, USA, 1980; Volume 2, pp. 45–52. [Google Scholar]
  10. Pawłowski, A.; Wolniakowski, A.; Romaniuk, S. Comparison of Mobile Platform Teleoperation Systems Using a Force-Torque Sensor and a Joystick. In Proceedings of the 26th International Conference on Methods and Models in Automation and Robotics (MMAR), Międzyzdroje, Poland, 22–25 August 2022. [Google Scholar] [CrossRef]
  11. Choi, J.J.; Kim, Y.; Kwak, S.S. The autonomy levels and the human intervention levels of robots: The impact of robot types in humanrobot interaction. In Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication, Edinburgh, UK, 25–29 August 2014. [Google Scholar] [CrossRef]
  12. Klamt, T.; Rodriguez, D.; Schwarz, M.; Lenz, C.; Pavlichenko, D.; Droeschel, D.; Behnke, S. Supervised Autonomous Locomotion and Manipulation for Disaster Response with a Centaur-Like Robot. In Proceedings of the 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018. [Google Scholar] [CrossRef]
  13. Di Bona, G.; Cesarotti, V.; Arcese, G.; Gallo, T. Implementation of Industry 4.0 technology: New opportunities and challenges for maintenance strategy. Procedia Comput. Sci. 2021, 180, 424–429. [Google Scholar] [CrossRef]
  14. Luperto, M.; Romeo, M.; Monroy, J.; Renoux, J.; Vuono, A.; Moreno, F.-A.; Gonzalez-Jimenez, J.; Basilico, N.; Borghese, N.A. User feedback and remote supervision for assisted living with mobile robots: A field study in long-term autonomy. Robot. Auton. Syst. 2022, 155, 104170. [Google Scholar] [CrossRef]
  15. Bao, X.; Guo, S.; Guo, Y.; Yang, C.; Shi, L.; Li, Y.; Jiang, Y. Multilevel Operation Strategy of a Vascular Interventional Robot System for Surgical Safety in Teleoperation. IEEE Trans. Robot. 2022, 38, 2238–2250. [Google Scholar] [CrossRef]
  16. Zhu, Z.; Hu, H. Robot Learning from Demonstration in Robotic Assembly: A Survey. Robotics 2018, 7, 17. [Google Scholar] [CrossRef]
  17. Hussein, A.; Gaber, M.M.; Elyan, E.; Jayne, C. Imitation Learning: A Survey of Learning Methods. ACM Comput. Surv. 2017, 50, 21. [Google Scholar] [CrossRef]
  18. Bekker, M.M.; Olson, J.S.; Olson, G.M. Analysis of gestures in face-to-face design teams provides guidance for how to use groupware in design. In Proceedings of the Conference on Designing Interactive Systems: Processes, Practices, Methods, & Techniques (DIS ’95), Ann Arbor, MI, USA, 23–25 August 1995. [Google Scholar] [CrossRef]
  19. Stahl, C.; Anastasiou, D.; Latour, T. Social Telepresence Robots: The role of gesture for collaboration over a distance. In Proceedings of the 11th Pervasive Technologies Related to Assistive Environments Conference (PETRA ’18), Corfu, Greece, 26–29 June 2018. [Google Scholar] [CrossRef]
  20. Jung, H.; Song, Y.-E. Robotic remote control based on human motion via virtual collaboration system: A survey. J. Adv. Mech. Des. Syst. Manuf. 2018, 12, JAMDSM0126. [Google Scholar] [CrossRef]
  21. Sheridan, T. Human supervisory control of robot systems. In Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA, USA, 7–10 April 1986. [Google Scholar] [CrossRef]
  22. Sergeant, J.; Sunderhauf, N.; Milford, M.; Upcroft, B. Multimodal deep autoencoders for control of a mobile robot. In Proceedings of the Australasian Conference on Robotics and Automation, ACRA, Canberra, Australia, 2–4 December 2015. [Google Scholar]
  23. Nahri, S.N.F.; Du, S.; Van Wyk, B.J. A Review on Haptic Bilateral Teleoperation Systems. J. Intell. Robot. Syst. 2022, 104, 13. [Google Scholar] [CrossRef]
  24. Deng, Y.; Tang, Y.; Yang, B.; Zheng, W.; Liu, S.; Liu, C. A Review of Bilateral Teleoperation Control Strategies with Soft Environment. In Proceedings of the 6th IEEE International Conference on Advanced Robotics and Mechatronics (ICARM), Chongqing, China, 3–5 July 2021. [Google Scholar] [CrossRef]
  25. Fong, T.; Thorpe, C.; Baur, C. Robot as partner: Vehicle teleoperation with collaborative control. In Multi-Robot Systems: From Swarms to Intelligent Automata; Schultz, A.C., Parker, L.E., Eds.; Springer: Dordrecht, The Netherlands, 2002; pp. 195–202. [Google Scholar] [CrossRef]
  26. Bruemmer, D.J.; Marble, J.L.; Dudenhoeffer, D.D.; Anderson, M.; McKay, M.D. Mixed-initiative control for remote characterization of hazardous environments. In Proceedings of the 36th Annual Hawaii International Conference on System Sciences, HICSS, Big Island, HI, USA, 6–9 January 2003. [Google Scholar] [CrossRef]
  27. Dorais, G.; Bonasso, R.P.; Kortenkamp, D.; Pell, B.; Schreckenghost, D. Adjustable autonomy for human-centered autonomous systems. In Proceedings of the Sixteenth International Joint Conference on Artificial Intelligence Workshop on Adjustable Autonomy Systems, Stockholm, Sweden, 31 July–6 August 1999; pp. 16–35. [Google Scholar]
  28. Sellner, B.; Heger, F.W.; Hiatt, L.M.; Simmons, R.; Singh, S. Coordinated multiagent teams and sliding autonomy for large-scale assembly. Proc. IEEE 2006, 94, 1425–1444. [Google Scholar] [CrossRef]
  29. Vozar, S.; Tilbury, D.M. Augmented reality user interface for mobile robots with manipulator arms: Development, testing, and qualitative analysis. In Proceedings of the ASME 2012 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Chicago, IL, USA, 12–15 August 2012. [Google Scholar] [CrossRef]
  30. Shim, H.; Jun, B.H.; Lee, P.M.; Baek, H.; Lee, J. Workspace control system of underwater tele-operated manipulators on an ROV. Ocean. Eng. 2010, 37, 1036–1047. [Google Scholar] [CrossRef]
  31. Deichler, A.; Wang, S.; Alexanderson, S.; Beskow, J. Learning to generate pointing gestures in situated embodied conversational agents. Front. Robot. AI 2023, 10, 1110534. [Google Scholar] [CrossRef]
  32. Kim, S.; Kim, C.; You, B.; Oh, S. Stable whole-body motion generation for humanoid robots to imitate human motions. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009. [Google Scholar] [CrossRef]
  33. Suleiman, W.; Yoshida, E.; Kanehiro, F.; Laumond, J.-P.; Monin, A. On human motion imitation by humanoid robot. In Proceedings of the IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 19–23 May 2008. [Google Scholar] [CrossRef]
  34. Chalodhorn, R.; Grimes, D.B.; Grochow, K.; Rao, R.P. Learning to Walk through Imitation. In Proceedings of the International Joint Conference on Artificial Intelligence, Hyderabad, India, 6–12 January 2007. [Google Scholar]
  35. Nakaoka, S.; Nakazawa, A.; Kanehiro, F.; Kaneko, K.; Morisawa, M.; Hirukawa, H.; Ikeuchi, K. Learning from observation paradigm: Leg task models for enabling a biped humanoid robot to imitate human dances. Int. J. Robot. Res. 2007, 26, 777–884. [Google Scholar] [CrossRef]
  36. Ude, A.; Atkeson, C.; Riley, M. Programming full-body movements for humanoid robots by observation. Robot. Auton. Syst. 2004, 47, 93–108. [Google Scholar] [CrossRef]
  37. Safonova, A.; Pollard, N.; Hodgins, J.K. Optimizing human motion for the control of a humanoid robot. In Proceedings of the International Symposium on Adaptive Motion of Animals and Machines (AMAM ’03), Kyoto, Japan, 4–8 March 2003. [Google Scholar]
  38. Munirathinam, K.; Chevallereau, C.; Sakka, S. Offline Imitation of a Human Motion by a Humanoid Robot Under Balance Constraint. In New Trends in Medical and Service Robots; Rodić, A., Pisla, D., Bleuler, H., Eds.; Springer: Cham, Switzerland, 2014; Volume 20, pp. 269–282. [Google Scholar] [CrossRef]
  39. Gonen, E.C.; Chae, Y.J.; Kim, C. Imitation of Human Upper-Body Motions by Humanoid Robots. In Proceedings of the 16th International Conference on Ubiquitous Robots (UR), Jeju, Republic of Korea, 24–27 June 2019. [Google Scholar] [CrossRef]
  40. Oh, J.; Lee, I.; Jeong, H.; Oh, J.-H. Real-time humanoid whole-body remote control framework for imitating human motion based on kinematic mapping and motion constraints. Adv. Robot. 2019, 33, 293–305. [Google Scholar] [CrossRef]
Figure 1. APR-02 robot used in this paper moving its hands to knock at a door: (a) Side view of the robot; (b) teleoperator view from the panoramic camera available on the head of the robot.
Figure 2. Kinematic representation of the joints of the remote control device, which is a scaled-down reproduction of the upper part of the APR-02 robot, with four degrees of freedom in each arm.
Figure 3. Schematic diagram of the electronic control board: the position of the arm joints is measured using potentiometers, and two gamepads, with buttons and joystick potentiometers, provide direct access to several functions of the mobile robot.
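For illustration, the conversion from a raw potentiometer reading to a joint angle can be sketched as a simple linear calibration. The following minimal Python example is a sketch only: the 12-bit ADC range, the joint names and the per-joint calibration endpoints are assumptions made for this example and are not taken from the APR-02 design.

```python
# Minimal sketch of converting raw potentiometer readings into joint angles.
# The ADC range (assumed 12-bit, 0-4095) and the calibration endpoints are
# illustrative assumptions.

# Hypothetical per-joint calibration: (raw_min, raw_max, angle_min, angle_max),
# in ADC counts and degrees.
CALIBRATION = {
    "right_shoulder_pitch": (410, 3685, -90.0, 90.0),
    "right_elbow": (520, 3560, 0.0, 135.0),
}


def raw_to_angle(joint: str, raw: int) -> float:
    """Linearly interpolate a raw ADC reading into a joint angle in degrees."""
    raw_min, raw_max, ang_min, ang_max = CALIBRATION[joint]
    raw = max(raw_min, min(raw_max, raw))          # clamp to the calibrated range
    frac = (raw - raw_min) / (raw_max - raw_min)   # 0..1 along the joint travel
    return ang_min + frac * (ang_max - ang_min)


if __name__ == "__main__":
    print(raw_to_angle("right_elbow", 2040))       # 67.5 deg with the assumed calibration
```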
Figure 4. Detail of the shoulder joints, seen from below: (a) Robot shoulder; (b) CAD model of the remote control device; (c) implementation of the remote control device.
Figure 5. Detail of the elbow joint: (a) Robot elbow; (b) CAD model of the remote control device; (c) implementation of the remote control device.
Figure 6. Front view of the gamepads and functions of each button: (a) right arm; (b) left arm.
Figure 7. Body structure of the remote control device: (a) CAD model; (b) implementation.
Figure 8. Structure of the imitation model: the operator moves the arms of the remote control device; the positions of the joints are measured, mapped, and sent to the APR-02 mobile robot.
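As a complement to Figure 8, the following Python sketch shows one plausible organization of the measure, map and send loop, assuming eight joints (three per shoulder and one per elbow, consistent with Figures 4 and 5), the 50 Hz joint sampling rate of Table 1 and a UDP transport. The joint names, packet format and network address are hypothetical and do not reproduce the actual APR-02 communication protocol.

```python
# Sketch of the imitation loop: read the joint angles of the master device,
# map them to robot joint commands and stream them to the mobile robot.
# Transport, packet layout and addresses are illustrative assumptions.
import json
import socket
import time

JOINTS = [
    "right_shoulder_1", "right_shoulder_2", "right_shoulder_3", "right_elbow",
    "left_shoulder_1", "left_shoulder_2", "left_shoulder_3", "left_elbow",
]

ROBOT_ADDR = ("192.168.1.50", 5005)  # hypothetical robot address and port
PERIOD_S = 0.02                      # 50 Hz joint sampling, as in Table 1


def read_joint_angles() -> dict:
    """Placeholder for the potentiometer acquisition (see the Figure 3 sketch)."""
    return {name: 0.0 for name in JOINTS}


def map_to_robot(angles: dict) -> dict:
    """Map master-device angles to robot joint commands (identity mapping assumed)."""
    return {name: round(angle, 2) for name, angle in angles.items()}


def main() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        t0 = time.monotonic()
        packet = {"t": t0, "joints": map_to_robot(read_joint_angles())}
        sock.sendto(json.dumps(packet).encode(), ROBOT_ADDR)
        time.sleep(max(0.0, PERIOD_S - (time.monotonic() - t0)))


if __name__ == "__main__":
    main()
```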
Figure 9. Evolution of the joint angles measured by the remote control device when the arms perform three inward circular trajectories in the air: (a) right arm; (b) left arm.
Figure 10. Evolution of the joint angles when the arms of the mobile robot perform four inward circular trajectories in the air: (a) right arm; (b) left arm. Solid lines depict the joint positions received by the mobile robot, and dotted lines depict the real evolution of the joints of the robot.
Figure 11. Evolution of the joint angles when the arms of the remote control device perform three lateral trajectories in the air: starting from a neutral position, the arms move upwards to the left, then horizontally to the right, return to the left, and repeat the motion to the right: (a) right arm; (b) left arm. Solid lines depict the joint positions received by the mobile robot, and dotted lines depict the real evolution of the joints of the robot.
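The offset between the solid and dotted curves in Figures 10 and 11 is a direct picture of the telecontrol delay. One plausible way to quantify such an offset, not necessarily the procedure used in this work, is to cross-correlate the commanded and measured trajectories of a joint; the Python sketch below demonstrates the idea on a synthetic signal delayed by 0.54 s.

```python
# Sketch: estimating the lag between a commanded and a measured joint trajectory
# (solid vs. dotted lines in Figures 10 and 11) by cross-correlation.
# This is one plausible estimation method, not necessarily the one used in the paper.
import numpy as np


def estimate_delay(commanded: np.ndarray, measured: np.ndarray, dt: float) -> float:
    """Return the lag (in seconds) that best aligns the measured signal with the command."""
    c = commanded - commanded.mean()
    m = measured - measured.mean()
    corr = np.correlate(m, c, mode="full")       # cross-correlation over all lags
    lag_samples = corr.argmax() - (len(c) - 1)   # positive: measured lags the command
    return lag_samples * dt


if __name__ == "__main__":
    dt = 0.02                                     # 50 Hz sampling, as in Table 1
    t = np.arange(0.0, 10.0, dt)
    cmd = np.sin(2 * np.pi * 0.25 * t)            # synthetic commanded joint angle
    meas = np.sin(2 * np.pi * 0.25 * (t - 0.54))  # same signal delayed by 0.54 s
    print(f"estimated delay: {estimate_delay(cmd, meas, dt):.2f} s")
```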
Table 1. Sensors, sampling frequencies and submission rates.
Sensor | Sampling Frequency | Submission Rate
Joint potentiometers | 50 Hz | 50 Hz
Gamepad buttons | 10 Hz | In case of changes
Gamepad joystick (for motion control) | 10 Hz | 10 Hz
Gamepad joystick (for eye-gaze control) | 50 Hz | 50 Hz
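Table 1 implies two submission policies: fixed-rate streaming for the joint potentiometers and joysticks, and event-driven submission for the gamepad buttons. The Python sketch below illustrates one way of combining both policies in a single loop; the timer mechanism and message contents are illustrative assumptions only.

```python
# Sketch of the per-sensor scheduling suggested by Table 1: the joint potentiometers
# and the eye-gaze joystick are streamed at 50 Hz, the motion joystick at 10 Hz,
# and the gamepad buttons are sampled at 10 Hz but transmitted only when they change.
import time


def send(message: dict) -> None:
    """Placeholder for the actual transmission to the mobile robot."""
    print(message)


def read_buttons() -> tuple:
    """Placeholder for reading the gamepad buttons."""
    return (0, 0, 0, 0)


last_buttons = None
next_fast = next_slow = time.monotonic()

for _ in range(500):  # bounded loop so the example terminates
    now = time.monotonic()
    if now >= next_fast:  # 50 Hz group: joint angles and eye-gaze joystick
        send({"group": "fast", "joints": [0.0] * 8, "eye_gaze": (0.0, 0.0)})
        next_fast += 1 / 50
    if now >= next_slow:  # 10 Hz group: motion joystick and button sampling
        send({"group": "slow", "motion": (0.0, 0.0)})
        buttons = read_buttons()
        if buttons != last_buttons:  # buttons are submitted only on change
            send({"group": "buttons", "buttons": buttons})
            last_buttons = buttons
        next_slow += 1 / 10
    time.sleep(0.001)
```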
Table 2. Sequence of movements obtained when the remote operator performs an upwards gesture, moving the arms upwards from the neutral position.
(a) t = 0.00 s; (b) t = 0.42 s; (c) t = 0.83 s; (d) t = 1.25 s; (e) t = 1.67 s; (f) t = 2.08 s; (g) t = 2.50 s
Table 3. Sequence of movements obtained when the operator performs an upwards and lateral gesture, moving the arms from the neutral position upwards to the left and then horizontally to the right.
(a) t = 0.00 s; (b) t = 0.42 s; (c) t = 0.83 s; (d) t = 1.25 s; (e) t = 1.67 s; (f) t = 2.08 s; (g) t = 2.50 s
Table 4. Telecontrol performance of the imitation model.
Gesture | Movement | Structural Delay | Total Telecontrol Delay
Upwards | Neutral (Table 2, (a)) → Up (Table 2, (g)) | 0.274 s | 0.68 s
Upwards and lateral | Neutral (Table 3, (a)) → Diagonal left (Table 3, (d)) | 0.274 s | 0.45 s
Upwards and lateral | Left (Table 3, (d)) → Right (Table 3, (g)) | 0.274 s | 0.44 s
Upwards and lateral | Right (Table 3, (g)) → Left | 0.274 s | 0.50 s
Downwards | Up (Table 2, (g)) → Neutral | 0.274 s | 0.50 s
Circular (inwards) | Neutral → Neutral (both arms inwards, 4 turns) | 0.274 s | 0.90 s
Circular (clockwise) | Neutral → Neutral (both arms clockwise, 4 turns) | 0.274 s | 0.34 s
Wave | Neutral → up → wave (right arm, 4 waves) | 0.274 s | 0.60 s
Clap | Neutral → clap (4 claps) | 0.274 s | 0.53 s
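As a consistency check, averaging the nine total telecontrol delays of Table 4 gives (0.68 + 0.45 + 0.44 + 0.50 + 0.50 + 0.90 + 0.34 + 0.60 + 0.53)/9 ≈ 0.55 s; since the structural delay is a constant 0.274 s, roughly 0.27 s per gesture remains, on average, for the other stages of the telecontrol chain.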