
Motor Imagery Based Continuous Teleoperation Robot Control with Tactile Feedback

1 School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China
2 College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210003, China
* Author to whom correspondence should be addressed.
Electronics 2020, 9(1), 174; https://doi.org/10.3390/electronics9010174
Submission received: 16 December 2019 / Revised: 9 January 2020 / Accepted: 15 January 2020 / Published: 17 January 2020

Abstract

A brain–computer interface (BCI) uses human brain signals to control external devices directly, bypassing the normal neural pathways. Recent studies have explored many applications, such as controlling a teleoperation robot with electroencephalography (EEG) signals. However, using a motor imagery EEG-based BCI to teleoperate reach-and-grasp tasks still faces many difficulties, especially in the continuous multidimensional control of the robot and in tactile feedback. In this research, a motor imagery EEG-based continuous teleoperation robot control system with tactile feedback was proposed. Firstly, mental imagination of different hand movements was translated into continuous commands that controlled the remote robotic arm to reach the hover area of the target through a wireless local area network (LAN). Then, the robotic arm automatically completed the task of grasping the target. Meanwhile, the tactile information of the remote robotic gripper was detected and converted into a feedback command. Finally, a vibrotactile stimulus was delivered to users to improve their telepresence. Experimental results demonstrate the feasibility of using motor imagery EEG acquired by wireless portable equipment to realize a continuous teleoperation robot control system for the reach-and-grasp task. The average two-dimensional continuous control success rates for online Task 1 and Task 2 of the six subjects were 78.0% ± 6.1% and 66.2% ± 6.0%, respectively. Furthermore, compared with traditional EEG-triggered robot control along a predefined trajectory, fully continuous two-dimensional control not only improves the teleoperation robot system's efficiency but also gives the subject more natural control, which is critical to human–machine interaction (HMI). In addition, the vibrotactile stimulus can improve the operator's telepresence and task performance.

1. Introduction

A brain–computer interface (BCI) utilizes brain activity to communicate with external devices directly [1,2,3]. In the past few decades, both invasive and noninvasive BCIs have received much attention from researchers. The BCI has evolved from basic communication to a state in which some complex tasks can be routinely performed by healthy subjects. Soekadar et al. [4] demonstrated that a hand exoskeleton based on a hybrid electroencephalography (EEG)/electro-oculogram (EOG) BCI can restore the autonomy and independence of paraplegic individuals in everyday life. The feasibility of inducing neurological recovery in paraplegic patients through long-term training with a BCI-based gait protocol was shown in [5]. In addition, BCI-based control of virtual objects [6], robotic arms [7,8,9], robotic prostheses [10,11], wheelchairs [12], and various rehabilitation devices [13,14,15,16] has also been reported in previous research.
With the development of BCI technology, some groups have carried out research on BCI-based teleoperation. In [17], a brain-driven telepresence system gave the user a presence in remote environments through a mobile robot, using the P300 EEG signal. A BCI-based teleoperation control for an exoskeleton robotic system was proposed in [18]. In that study, the subject could guide the robot to perform manipulation tasks by integrating the BCI commands and adaptive fuzzy controllers. Zhao et al. [19] developed a teleoperation control framework for multiple coordinated mobile robots using BCI. Both of the latter two studies adopted the EEG signals of steady-state visually evoked potentials (SSVEP).
However, few research groups have attempted to apply motor imagery EEG signals to teleoperation robotic systems in reach-and-grasp tasks, and many difficulties remain, such as continuously extracting trajectory information from human movement imagination, continuous multidimensional control, and tactile feedback. Most previous research focuses on the conventional two-class or four-class classification of EEG signals to obtain discrete control commands that trigger the robot to move along a predefined trajectory, instead of directly converting the EEG signals into multidimensional continuous control information [20]. For example, Bousseta et al. [21] proposed a BCI system that controlled a robot arm based on the user's thought. Four subjects were instructed to imagine the execution of movements of the left hand, right hand, both hands, or the feet. The classifier generated four commands to control the robot arm to move along a predefined trajectory instead of providing continuous control. Li et al. [22] presented an EEG-based system classifying the signals from the Emotiv EPOC into corresponding action commands. The trigger commands were sent to the robot, and the robot performed predefined basic maneuvers, such as moving forward, moving backward, turning left, and turning right. In our previous work, trigger commands were also used to control a rehabilitation robot [23]. In addition, most BCI-based online control systems adopt nonportable equipment and can only be used in laboratory environments. Although such equipment has a high signal-to-noise ratio, it is very inconvenient to use and carry.
Aiming at these problems, we designed a BCI-based continuous teleoperation robot control system with tactile feedback and recruited a group of healthy participants to use their movement imagination EEG to continuously control a remote robotic arm performing reach-and-grasp tasks. Wireless portable acquisition equipment was adopted to obtain the EEG signals in order to make the BCI-based system easier to use in daily life. Moreover, the tactile information of the remote robotic gripper was detected and transferred to the local computer. Finally, the subjects were provided with a vibrotactile stimulus to improve their telepresence and task performance. The motor imagery EEG-based continuous teleoperation robot control system with vibrotactile feedback for the reach-and-grasp task may offer the following advantages: (a) it allows subjects to perform teleoperation using only movement imagination and exploits the operator's motor initiative; (b) continuous control not only improves the teleoperation robot system's efficiency but also offers the subject more natural control, which is very important in human–machine interaction, especially in teleoperation robot systems; (c) biofeedback based on the vibrotactile stimulus may improve the operator's telepresence, enhance the operator's confidence, and eventually improve task performance; (d) it may provide critical data for understanding the processes of a motor imagery-based teleoperation robot control system.

2. Materials and Methods

2.1. System Description

Figure 1 shows the experimental setup used in this study. The EEG-based continuous teleoperation robot system consists of the following components: a wireless EEG amplifier, a vibrotactile stimulation system, a robotic arm, a robotic gripper force detection system, a wireless gateway, a master PC, and a slave PC.
Firstly, the master PC displayed the target to prompt the participant to perform the corresponding motor imagery of the left hand, right hand, both hands, or relaxation. Secondly, the cursor was controlled by the motor imagery EEG and kept moving on the screen as visual feedback until it hit the virtual target. Following the continuous motion trajectory of the cursor, the robotic arm moved continuously toward the target. The wireless EEG amplifier records the motor imagery EEG and sends the data to the master PC by Bluetooth. The master PC is responsible for processing the movement imagination EEG signals, sending the continuous motion commands to the slave PC, controlling the virtual cursor to provide subjects with visual feedback, receiving the force information of the robotic gripper from the slave PC, and controlling the vibrotactile stimulation system to supply the subjects with tactile feedback. The slave PC is in charge of controlling the robot to finish the reach-and-grasp task according to the motion trajectory from the master PC, acquiring the tactile information of the robotic gripper, and sending it to the master PC. In addition, the communication between the master and slave PCs is based on a client–server architecture over the TCP/IP protocol; the master PC and slave PC act as the server and client, respectively.
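The master–slave exchange described above can be sketched as a minimal TCP client–server pair. The command and force message formats below are illustrative assumptions (the paper does not specify its wire format), and both ends run on one machine over loopback for demonstration:

```python
import socket
import threading

HOST = "127.0.0.1"                      # loopback stand-in for the wireless LAN
ready = threading.Event()
shared = {}

def master_server():
    """Master PC (server): send one motion command, read back the gripper force."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, 0))             # let the OS pick a free port
        shared["port"] = srv.getsockname()[1]
        srv.listen(1)
        ready.set()                     # the slave may now connect
        conn, _ = srv.accept()
        with conn:
            conn.sendall(b"DX:+0.12,DY:-0.05")        # hypothetical command format
            shared["force"] = conn.recv(64).decode()  # e.g. "FORCE:350" (grams)

server = threading.Thread(target=master_server)
server.start()
ready.wait()

# Slave PC (client): receive the motion command, answer with a force reading.
# A single recv suffices for these short single-send messages over loopback.
with socket.create_connection((HOST, shared["port"])) as s:
    cmd = s.recv(64).decode()
    s.sendall(b"FORCE:350")             # simulated gripper force in grams
server.join()

assert cmd == "DX:+0.12,DY:-0.05"
assert shared["force"] == "FORCE:350"
```

In the real system the two endpoints would of course be separate machines on the wireless LAN, with the motion commands streamed continuously rather than sent once.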

2.2. Control Architecture

The control architecture of the EEG-based continuous teleoperation robot control system is shown in Figure 2. To begin with, the multichannel EEG signals were spatially filtered by a common average reference (CAR) filter. Secondly, every 50 ms, the amplitudes of the specific mu or beta rhythm bands over the left (channel C3) and right (channel C4) hemispheres were estimated with a 16th-order autoregressive (AR) model using the last 500 ms of EEG signals. Thirdly, these amplitudes were linearly mapped to generate the two-dimensional continuous motion trajectory. Next, the motion trajectory information was used to control the cursor to give visual feedback. At the same time, the motion trajectory information was transmitted to the slave PC over the wireless local area network (LAN). Then, the robotic arm was controlled according to the continuous motion trajectory. Once the robotic arm arrived at the hover area, the robotic gripper reached and grasped the target automatically in order to reduce the mental load. The "hover area" was a virtual cylindrical region with a radius of 1 cm centered above the wooden target. Moreover, the grasp force was detected and sent back to the master PC, also over the wireless LAN, as feedback. Finally, vibrotactile feedback was given to the subjects when the robotic gripper grasped the target in order to improve telepresence and task performance.

2.3. Human Subjects

Six right-handed healthy subjects were the users of the BCI-based teleoperation robot system. None of the participants had previous experience with motor imagery-based BCI experiments or teleoperation experiments. They were all recruited from Southeast University, Nanjing, China. Before the experiments, informed consent was obtained from each subject. This study was approved by the local Ethics Committee.

2.4. Experimental Paradigm

Each subject sat in a reclining chair facing a screen while scalp electrodes recorded the EEG. The robotic arm was placed on another table about 8 m away from the subjects, as shown in Figure 3. The subject's task was to imagine movement of the left hand, right hand, or both hands, or relaxation of both hands. The subject watched the virtual cursor during motor imagery and could not see the robotic arm. Through motor imagery, the subjects learned to modulate their sensorimotor rhythm amplitudes in the mu (8–12 Hz) and beta (18–26 Hz) frequency bands. Each subject performed five runs, and each run included 12 trials.
Each trial started with a 2 s preparation period. Next, the virtual target was displayed on the screen to prompt the subject to perform the corresponding movement imagination. After 2 s, the cursor began moving on the screen as visual feedback. Meanwhile, the continuous motion trajectory was sent to the slave PC by wireless LAN, and the robotic arm moved continuously toward the target in real time. Once the movement imagination-controlled cursor hit the target, the robotic gripper had arrived at the hover area and performed the grasp automatically. At the same time, the force information of the robotic gripper was transmitted from the slave PC to the master PC, and vibrotactile feedback was supplied to the subject so as to improve telepresence. Finally, the robotic arm moved back to its original position, ready to reach and grasp in the next trial.

2.5. EEG Recording and Processing

According to the international standard electrode placement [24], electrodes (Figure 4) at C3, FC3, CP3, C5, C4, FC4, CP4, and C6 were fed into g.tec's portable acquisition system, the g.MOBIlab module. All channels were referenced to the left earlobe, and the ground electrode was AFz. Movement imagination EEG signals were acquired from the amplifier via Bluetooth using the BCI2000 software [25]. The sampling rate was 256 Hz. The impedance of the electrodes was kept below 10 kΩ. BCI2000 is in charge of controlling the virtual cursor and displaying the virtual targets.
In EEG-based BCI systems, the use of spatial filters can significantly improve the user's performance [20]. A CAR filter was used to preprocess the eight-channel motor imagery EEG signals. Then, an autoregressive (AR) model was adopted to estimate the amplitude of the sensorimotor rhythm in a subject-specific frequency band. The AR model is given by:
x_t = Σ_{i=1}^{p} w_i x_{t−i} + ε,  (1)
where x_t is the estimated signal at time t, the w_i are the weight coefficients, and ε is the estimation error. Every 50 ms, the 16th-order AR model was applied to the last 500 ms of EEG data to obtain the online amplitude of mu or beta rhythmic activity. The coefficients of the AR model were calculated by the least-squares criterion.
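As an illustrative reconstruction of this processing chain (CAR re-referencing, least-squares AR fitting over the last 500 ms, and band-amplitude readout from the AR spectrum), the following sketch shows one possible implementation; BCI2000's actual spectral estimator may differ in detail, and the synthetic data here are not from the experiment:

```python
import numpy as np

def car_filter(eeg):
    """Common average reference: subtract the across-channel mean
    from every channel at each sample. eeg: (n_channels, n_samples)."""
    return eeg - eeg.mean(axis=0, keepdims=True)

def ar_coefficients(x, p=16):
    """Least-squares fit of the p-th order AR model x_t = sum_i w_i x_{t-i} + eps."""
    X = np.column_stack([x[p - i:len(x) - i] for i in range(1, p + 1)])
    w, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    return w

def band_amplitude(x, fs=256.0, band=(8.0, 12.0), p=16):
    """Mean AR spectral amplitude over a frequency band (e.g. the mu rhythm)."""
    w = ar_coefficients(x, p)
    freqs = np.linspace(band[0], band[1], 16)
    z = np.exp(-2j * np.pi * freqs / fs)
    # AR transfer-function magnitude: |1 / (1 - sum_i w_i z^{-i})|
    denom = 1.0 - sum(w[i] * z ** (i + 1) for i in range(p))
    return float(np.mean(1.0 / np.abs(denom)))

# 500 ms (128 samples at 256 Hz) of synthetic 8-channel data, with one
# channel carrying a strong 10 Hz (mu band) oscillation as a stand-in for C3
fs, n = 256, 128
t = np.arange(n) / fs
rng = np.random.default_rng(1)
eeg = rng.normal(scale=0.1, size=(8, n))
eeg[2] += np.sin(2 * np.pi * 10 * t)

c3 = car_filter(eeg)[2]
mu_amp = band_amplitude(c3, fs, (8, 12))
beta_amp = band_amplitude(c3, fs, (18, 26))
assert mu_amp > beta_amp   # the mu band dominates on the simulated C3 channel
```

In the online system this computation would be repeated every 50 ms on the newest 500 ms window for both the C3 and C4 channels.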
The vertical movement of the cursor (D_V) was obtained by:
D_V = w_RV R_V + w_LV L_V + b_V,  (2)
where R_V and L_V are the right-side and left-side amplitudes, respectively, w_RV and w_LV are the weight coefficients, and b_V is the offset. The initial values of w_RV, w_LV, and b_V were +1.0, +1.0, and 0.0, respectively.
The horizontal movement of the cursor (D_H) was given by:
D_H = w_RH R_H + w_LH L_H + b_H,  (3)
where R_H and L_H are the right-side and left-side amplitudes, respectively, w_RH and w_LH are the weight coefficients, and b_H is the offset. The initial values of w_RH, w_LH, and b_H were +1.0, −1.0, and 0.0, respectively.
According to Equations (2) and (3), when subjects imagine the movement of both hands, the right-side amplitude (R_V) and the left-side amplitude (L_V) decrease; therefore, D_V becomes negative and the cursor moves upwards. Conversely, when subjects imagine the relaxation of both hands, R_V and L_V increase; as a result, D_V becomes positive and the cursor moves downwards. Similarly, when subjects imagine right-hand or left-hand movement, the right-side amplitude (R_H) decreases or increases while the left-side amplitude (L_H) increases or decreases; consequently, D_H becomes negative or positive and the cursor moves right or left. After the first trial, the least-mean-square (LMS) algorithm adaptively adjusts the weights and offsets according to past trials. This adaptation optimizes the online translation of EEG control into cursor control for the next trial [26].
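The LMS adaptation mentioned above can be sketched as a stochastic gradient step on the linear mapping of Equation (2). The learning rate, amplitude values, and target displacement below are illustrative assumptions; the paper does not give the exact update rule used in [26]:

```python
import numpy as np

def lms_update(w, b, features, target, mu=0.1):
    """One LMS step for the linear mapping D = w . features + b.

    w, b     : current weights and offset
    features : [R, L] band amplitudes for this trial
    target   : desired cursor displacement for the trial
    mu       : learning rate (illustrative value)
    """
    features = np.asarray(features, dtype=float)
    d = float(np.dot(w, features) + b)   # predicted displacement
    err = target - d                     # signed prediction error
    w = w + mu * err * features          # gradient step on the weights
    b = b + mu * err                     # gradient step on the offset
    return w, b

# Hypothetical example: adapt the vertical mapping toward target -1 (move up)
w_v = np.array([1.0, 1.0])   # initial w_RV, w_LV from the paper
b_v = 0.0
amps = [0.8, 0.7]            # assumed observed R_V, L_V amplitudes
for _ in range(100):
    w_v, b_v = lms_update(w_v, b_v, amps, target=-1.0)

# The adapted mapping now reproduces the target displacement
assert abs(np.dot(w_v, amps) + b_v + 1.0) < 1e-3
```

In practice the update would be driven by a stream of trials with varying amplitudes and targets, so the weights track the subject's evolving rhythm modulation rather than converging to a single fixed point.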
As a result, the sensorimotor rhythm amplitudes in the specific frequency bands over the left and right sensorimotor cortex were linearly mapped to control the virtual cursor in one or two dimensions. Simultaneously, the control signals were sent to the robot control software through the TCP/IP protocol at 5 Hz to continuously control the remote robotic arm in two dimensions.

2.6. Grasp Force Detection and Biofeedback System

Figure 5 shows the grasp force detection system. Firstly, the differential signals from the FSS1500 micro force sensor supplied by Honeywell were amplified and filtered. Then, a microcontroller with an internal analog-to-digital converter transformed the analog force signals into digital signals. Finally, the digital signals were calibrated and sent to the slave PC via a wireless interface. The sampling rate of this force detection system is 100 Hz, and the measurement range is 0–2000 g. Figure 6 shows the calibration data of two micro force sensors. It can be seen that these sensors have good linearity and meet the requirements of measuring the force between the robotic gripper and the target.
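Since the sensors are nearly linear, the per-sensor calibration reduces to a least-squares line fit from raw ADC counts to grams. A small illustrative sketch, where the ADC/load pairs are made-up calibration data, not the values behind Figure 6:

```python
import numpy as np

# Hypothetical calibration pairs: raw ADC counts vs. applied load in grams,
# spanning the 0-2000 g measurement range of the force detection system
adc   = np.array([ 120,  480,  850, 1210, 1580, 1950], dtype=float)
grams = np.array([   0,  400,  800, 1200, 1600, 2000], dtype=float)

# Least-squares fit of grams = a * adc + b (the sensors are nearly linear)
a, b = np.polyfit(adc, grams, 1)

def counts_to_grams(counts):
    """Convert a raw ADC reading to a calibrated force value in grams."""
    return a * counts + b

# The fitted line reproduces the calibration points closely
residual = float(np.max(np.abs(counts_to_grams(adc) - grams)))
assert residual < 20  # worst-case calibration error, in grams
```

On the actual microcontroller the same fit would be done once offline and only the two coefficients stored, so the per-sample conversion costs a single multiply-add.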
There are many ways to realize tactile feedback, such as vibration stimulation [27], electrical stimulation [28], and thermal stimulation. Vibration stimulation was chosen as the biofeedback for the motor imagery EEG-based teleoperation robot system due to its fast response, wearability, small size, and low power consumption. Moreover, vibrotactile stimulation has been widely used in other research fields, such as rehabilitation training and prosthesis control [29].
The vibrotactile feedback system consists of vibrating motors, motor driving modules, a microcontroller, and a wireless module for communication with the master PC. The vibrating motors were attached to a cuff, which was wrapped around the user's upper arm during the online experiment. Figure 7 illustrates the locations of the vibrating motors. These locations were selected on the basis of our previous prosthetic tactile feedback and teleoperation robot studies [29,30]. The vibrating stimulation waveform was a series of discrete pulses with a duty cycle of 50%, and the frequency of each pulse was 250 Hz. Once the robotic arm grasped the target, the grasp information was measured by the force detection system and sent to the master PC via the wireless network. At the same time, vibration stimulation-based feedback was provided to the subject. Finally, the robotic arm released the target and returned to the center, and the vibration stimulation was stopped. The vibrotactile feedback system is expected to improve the operator's telepresence and task performance.
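The drive waveform described above (250 Hz pulses at a 50% duty cycle) can be generated, for simulation or for feeding a PWM driver, with a few lines of integer arithmetic. The 10 kHz output rate here is our own illustrative choice, not a parameter from the paper:

```python
import numpy as np

def pulse_train(n_samples, period=40, high=20):
    """Binary drive waveform: within each `period`-sample cycle the first
    `high` samples are 1 (motor on) and the rest are 0 (motor off).
    At an assumed 10 kHz output rate, period=40 gives 250 Hz, and
    high=20 gives the 50% duty cycle described in the text."""
    n = np.arange(n_samples)
    return ((n % period) < high).astype(int)

wave = pulse_train(200)        # 20 ms of drive signal at 10 kHz
assert wave.size == 200
assert wave.sum() == 100       # exactly half the samples are high (50% duty)
```

Using integer sample counts rather than floating-point time avoids rounding drift at the pulse edges over long stimulation periods.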

2.7. Task Design

In order to improve the task performance of the motor imagery EEG-based teleoperation, a series of experiments with progressively increasing task difficulty was designed. The success rate, defined as the ratio of correct target hits to all targets, was adopted to evaluate the performance of the participants.
As shown in Figure 8, in the first stage, experiments were performed with one-dimensional left vs. right virtual cursor movement control, by imagining movement of the left or right hand, for several sessions until the task success rate reached 80%. In the second stage, subjects were asked to control the virtual cursor in the other dimension (up vs. down) by imagining movement of both hands or relaxing until the task performance exceeded 80%. In the third stage, two-dimensional virtual cursor control tasks, namely Task 1 and Task 2, were used to further enhance the subjects' ability to modulate their mu and beta rhythms. After these three stages, participants used the EEG of imagined left-hand movement, right-hand movement, both-hands movement, and both-hands relaxation to control the remote robotic arm continuously, and tactile feedback was provided.

3. Experiment Results

3.1. Task Success Rate of Cursor Control Training

Figure 9a,b show the training task success rates of cursor control in the one-dimensional left vs. right and up vs. down tasks, respectively. The average one-dimensional training duration for all subjects was 4.6 h, completed over several days. The subjects' ability to control the one-dimensional virtual cursor improved significantly after training, and the task performance of all subjects exceeded 80%. Moreover, subject 4 achieved a task success rate of 93%, a great increase compared with the start of the training.
Figure 10a,b show the success rates of the two different cursor control tasks in two-dimensional space. The average training duration of the two-dimensional control for all subjects was 3.8 h within several days. Similarly, the task performance after training was clearly better than before. However, the overall success rate of the two-dimensional control was lower than that of the one-dimensional control due to the increased difficulty. Additionally, the performance in Task 2 dropped considerably, and the success rate of all subjects was less than 80%. Subject 4 still achieved the best task performance, with a success rate of 77%.
After the training, subjects used the motor imagery EEG to continuously control the remote robotic arm. Figure 11 and Figure 12 show the topographies of power in the 8–13 Hz frequency band for two subjects controlling the robotic arm to perform the reach-and-grasp task. In these figures, blue represents an energy decrease and red indicates an energy increase.

3.2. The ERD/ERS Phenomenon

From Figure 11 and Figure 12, we can observe the event-related desynchronization (ERD) and event-related synchronization (ERS) phenomena of subject 1 and subject 2 in the online experiments. When the subjects imagined unilateral hand movement (Figure 11a,b and Figure 12a,b), the power in the specific band over the contralateral hemisphere decreased compared with the motor imagery of relaxation (Figure 11d and Figure 12d), while the power in the specific band over the ipsilateral hemisphere increased. Moreover, the decrease in band power over both hemispheres was apparent when the subjects performed movement imagination of both hands (Figure 11c and Figure 12c). Based on these different phenomena, two-dimensional continuous control information was extracted from the mu or beta rhythm frequency bands using Equations (2) and (3), and the remote teleoperation robot was controlled continuously.
In addition, from the topographies in different frequency bands, we can see that the most obvious difference between the four imagery tasks lies near 12 Hz for these two subjects. Determining the optimal frequency for every subject is critical for the motor imagery EEG-based teleoperation control system. The offline analysis tool of the BCI2000 platform can be used to identify the specific electrodes and frequencies that were most discriminative during the execution of the movement imagination tasks.

3.3. Trajectory of Robotic Arm

Figure 13 shows the target distribution, the coordinate system, and an example trajectory of the robotic arm within a two-dimensional plane. There are four targets located in a restricted square area. At the beginning of the experiment, the robotic arm is in the center of the square area. Targets B and D lie on the X axis, with B on the positive side and D on the negative side. Similarly, targets A and C lie on the Y axis, with A positive and C negative. The initial position of the robotic arm is at the origin.
Figure 14 and Figure 15 show the normalized trajectories of the robotic arm within a two-dimensional plane above the target objects. The remote continuous movement of the robotic arm was driven directly by motor imagery EEG signals acquired from local subjects 1 and 2.
The trajectories of the robotic arm moving toward the region above target A are shown in Figure 14a and Figure 15a. The subjects performed movement imagination of both hands to modulate their sensorimotor rhythm amplitudes, which were used to generate the continuous trajectories based on Equations (2) and (3). When the subjects imagined the movement of the left hand more intensely than that of the right hand, the robotic arm deviated to the right of the Y axis. Conversely, the robotic arm moved to the left of the Y axis if movement imagination of the right hand was performed more intensely than that of the left hand.
Next, Figure 14b and Figure 15b show the normalized trajectories when subjects imagined the movement of their right hand to guide the robotic arm to the hover region above target B. If the robotic arm deviated below the X axis, the subjects carried out movement imagination of both hands to move the robotic arm forward. Conversely, when the robotic arm deviated above the X axis, the subjects imagined the relaxation of both hands to move the robotic arm backward.
Furthermore, when subjects imagined relaxation of both hands, the robotic arm was controlled to move toward target C. In this case, the trajectories of the robotic arm are illustrated in Figure 14c and Figure 15c. If the robotic arm moved to the right of the Y axis, the subjects imagined movements of the left hand to drive the robotic arm to the left. Conversely, the subjects imagined movements of the right hand to move the robotic arm to the right if it deviated to the left of the Y axis.
Moreover, the trajectories of the robotic arm moving toward target D are shown in Figure 14d and Figure 15d. Movement imagination of the left hand was executed to control the robotic arm. If the robotic arm deviated below the X axis, the subjects imagined movements of both hands to move the robotic arm forward. Conversely, if the robotic arm deviated above the X axis, imagination of the relaxation of both hands was performed to move the robotic arm backward.
Then, when the robotic arm reached the hover region above the target, it descended automatically to grasp the object. As illustrated in Figure 5, the tactile information between the robotic gripper and the target was acquired by the grasp force detection system, which adopted FSS1500 micro force sensors from Honeywell. Finally, biofeedback based on the vibrotactile stimulus was provided to the participants to improve telepresence and task performance.
Additionally, the maximum trajectory errors for targets A, B, C, and D were 6.20%, 6.53%, 8.27%, and 7.46%, respectively, for subject 1. Similarly, for subject 2, the maximum trajectory errors were 8.53%, 8.20%, 8.52%, and 8.26%.

3.4. Online Control Task

Figure 16 presents the success rate of the reach-and-grasp task, in which the subjects used motor imagery EEG signals to control the remote robotic arm. Similar to the cursor control training, the movement imagination EEG control performance in Task 1 was better than that in Task 2. After the series of cursor control training, the success rate of the reach-and-grasp task was greatly increased. The average two-dimensional continuous control success rates for online Task 1 and Task 2 of the six subjects were 78.0% ± 6.1% and 66.2% ± 6.0%, respectively. This indicates that the subjects' ability to modulate their sensorimotor rhythm amplitudes in the mu or beta frequency band over the left and right sensorimotor cortex greatly improved, and thus their two-dimensional continuous control ability for the teleoperation robot increased dramatically. The teleoperation system can effectively and continuously control the robotic arm to complete the four-target reach-and-grasp task using motor imagery EEG signals. In addition, the operator obtains telepresence through the vibration stimulus-based feedback.

4. Discussion

In the past, a variety of studies focused on the interaction of living organisms with robotic cues [31,32,33,34] and on controlling robots through BCI, but there has been little research on the control of teleoperation robots based on motor imagery EEG. Meanwhile, most previous research groups focused on the conventional two-class or four-class classification of EEG signals to obtain discrete control commands, which can only trigger the robot arm to move along a predefined trajectory. However, continuous multidimensional control is very important, especially in teleoperation. It can not only improve the teleoperation robot system's efficiency but also give the subject a more natural human–machine interaction.
In this paper, a continuous teleoperation robot control system is used to demonstrate the feasibility of using motor imagery EEG acquired by wireless portable equipment to realize the reach-and-grasp task; it is a fully two-dimensional continuous control system. The real-time continuous control signals sent to the robotic arm include both horizontal and vertical position control signals while the subject performs motor imagery. The robotic arm can thus be controlled to move toward the target in a fully two-dimensional plane.
Furthermore, visual feedback has been utilized in EEG-based teleoperation systems. In [18], a BCI-driven teleoperation control of an exoskeleton robotic system was presented. Zhao et al. [19] developed an EEG-based teleoperation control framework for multiple coordinated mobile robots. Both of these studies used visual feedback.
Nevertheless, tactile feedback also plays an important role in traditional teleoperation systems and will be a critical part of any EEG-based teleoperation system. We adopted both visual and vibrotactile feedback in the proposed motor imagery-based teleoperation system. Thus, it can not only fully exploit the operator's initiative and attention but also increase the operator's telepresence.
Moreover, a wireless portable EEG acquisition device was adopted in this study to overcome the disadvantages of traditional EEG equipment, which is inconvenient to carry and complex to operate. Thus, the proposed motor imagery EEG-based teleoperation system provides a novel and convenient way to interact with external devices without using a keyboard, joystick, hand controller, or any other traditional input equipment. Additionally, it allows people to interact with various remote scenarios.

5. Conclusions and Future Work

In this paper, a BCI-based teleoperation robot control system was developed with a wireless portable EEG acquisition device. The movement imagination EEG signals were translated into continuous two-dimensional control signals and transmitted to the remote robotic arm using TCP/IP, and the robotic arm moved according to the control signals in real time. Following the continuously extracted trajectory information, the remote robotic arm completed the reach-and-grasp task. Furthermore, the grasp force was detected and sent to the master PC. Finally, biofeedback based on the vibrotactile stimulus was given to the subjects in order to improve telepresence and task performance.
An important next step is to extend the current two-dimensional control to three-dimensional (3D) control. Due to the characteristics of noninvasive EEG, it is very difficult to directly extract 3D trajectory information from the EEG signals. In order to control the robotic arm to move toward the target in 3D space, we are currently conducting research on a hybrid BCI system. Moreover, the teleoperation system can be extended from four targets to more targets in the reach-and-grasp task.
Furthermore, potential applications of this system include the continuous control of teleoperation robots, multi-degree-of-freedom prostheses, and rehabilitation robots using only motor imagery for paraplegic patients.

Author Contributions

Conceptualization, B.X. and X.H.; methodology, B.X., W.L., X.H., Z.W. and D.Z.; software, B.X. and X.H.; formal analysis, B.X., W.L. and X.H.; investigation, X.H. and W.L.; supervision, A.S.; writing—original draft preparation, B.X. and Z.W.; writing—review and editing, W.L., D.Z. and C.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Key Research and Development Program of China, grant number: 2016YFB1001303; National Natural Science Foundation of China, grant number: 61673114, 91648206, 61803201 and 61673105; Natural Science Foundation of Jiangsu Province, grant number: BK20170803.

Conflicts of Interest

The authors declare no conflict of interest.

Figure 1. Motor imagery EEG-based continuous teleoperation robot control system with biofeedback.
Figure 2. Control architecture of BCI-driven continuous teleoperation robot system.
Figure 3. A subject with our BCI-driven system (a) and the teleoperation robotic arm (b).
Figure 4. The EEG electrode positions adopted in our study.
Figure 5. Robotic grasp force detection system.
Figure 6. Calibration of left (a) and right (b) force sensors of the robot gripper.
Figure 7. Vibrating motor (a) and vibration stimulation location (b).
Figure 8. Cursor control tasks at different stages.
Figure 9. Task success rate of one-dimensional cursor control training. (a) Left vs. right. (b) Up vs. down.
Figure 10. Task success rate of two-dimensional cursor control training. (a) Task 1. (b) Task 2.
Figure 11. Topographies of power in the 8–13 Hz frequency band during motor imagery of subject 1.
Figure 12. Topographies of power in the 8–13 Hz frequency band during motor imagery of subject 2.
Figure 13. Target distribution, coordinate system, and an example trajectory of the robotic arm.
Figure 14. Normalized continuous trajectories of the robotic arm moving towards target A (a), target B (b), target C (c), and target D (d), controlled by movement imagination EEG signals from subject 1.
Figure 15. Normalized continuous trajectories of the robotic arm moving towards target A (a), target B (b), target C (c), and target D (d), controlled by movement imagination EEG signals from subject 2.
Figure 16. Task success rate of the motor imagery EEG-based teleoperation robot system for online Task 1 (a) and online Task 2 (b).
