Article

EEG-Based Empathic Safe Cobot

by Alberto Borboni 1,*, Irraivan Elamvazuthi 2 and Nicoletta Cusano 1,3
1 Mechanical and Industrial Engineering Department, University of Brescia, Via Branze 38, 25073 Brescia, Italy
2 Department of Electrical and Electronic Engineering, Universiti Teknologi Petronas, Seri Iskandar 32610, Malaysia
3 Faculty of Political Science and Sociopsychological Dynamics, Università degli Studi Internazionali di Roma, Via Cristoforo Colombo 200, 00147 Rome, Italy
* Author to whom correspondence should be addressed.
Machines 2022, 10(8), 603; https://doi.org/10.3390/machines10080603
Submission received: 10 June 2022 / Revised: 19 July 2022 / Accepted: 21 July 2022 / Published: 24 July 2022

Abstract: An empathic collaborative robot (cobot) was realized through the transmission of fear from a human agent to a robot agent. Such empathy was induced through an electroencephalographic (EEG) sensor worn by the human agent, thus realizing an empathic safe brain–computer interface (BCI). The empathic safe cobot reacts to the fear and in turn transmits it to the human agent, forming a social circle of empathy and safety. A first randomized, controlled experiment involved two groups of 50 healthy subjects (100 total subjects) to measure the EEG signal in the presence or absence of a frightening event. A second randomized, controlled experiment on two groups of 50 different healthy subjects (100 total subjects) exposed the subjects to comfortable and uncomfortable movements of the cobot while the subjects’ EEG signal was acquired. The result was that a spike in the subject’s EEG signal was observed in the presence of uncomfortable movement. Questionnaires distributed to the subjects confirmed the results of the EEG signal measurement. In a controlled laboratory setting, all experiments were found to be statistically significant. In the first experiment, the peak EEG signal measured just after the activating event was greater than the resting EEG signal (p < 10⁻³). In the second experiment, the peak EEG signal measured just after the uncomfortable movement of the cobot was greater than the EEG signal measured under conditions of comfortable movement of the cobot (p < 10⁻³). In conclusion, within the isolated and constrained experimental environment, the results were satisfactory.

1. Introduction

Collaborative robots (cobots) [1] are special robots that can collaborate with humans. They were originally intended for industrial applications [2,3,4,5,6,7,8,9], but they have also found use in the biomedical [10,11,12,13,14,15,16,17,18], domestic [19,20,21,22], and military [23,24,25,26] fields.
Because of the physical proximity of humans and robots, many scientific works in the field of mechatronics [27,28,29] focus on the safety [30] and comfort [31] of this relationship. Several authors have evaluated the use of natural interface systems, such as vision [32,33], face recognition and communication [34,35,36,37], gestures [38,39,40], and spoken natural language [41,42,43,44], to improve the relationship. This process has led to research into incorporating the concept of empathy [45] into the human–robot relationship by giving the robot the ability to decode the emotional state of the human subject [46]. The results in the literature are almost entirely based on the analysis of human facial expressions [47,48,49,50], sometimes supplemented by gestures and body language [51,52,53] or by voice modulation [54,55,56,57].
On the basis of psychological studies, the literature differentiates at least two groups of emotions, namely primary and secondary emotions [58]. The category of primary emotions is thought to be intrinsic [58]. The literature suggests that primary emotions developed during phylogeny to help people act quickly and instinctively when danger is near. Primary emotions are also thought to be prototypical emotion types that can already be attributed to one-year-old children [59]. Secondary emotions such as “relief” or “hope” are thought to emerge from higher cognitive processes based on the ability to evaluate choices over outcomes and expectations; secondary emotions are therefore acquired through social learning processes. The adjective “secondary” is used in the literature to allude to “adult” emotions. Furthermore, secondary emotions, like primary emotions, influence bodily (and facial) expressions. Different authors have listed different primary and secondary emotions; for example, according to Becker et al. [60], the nine primary emotions are: angry, annoyed, bored, concentrated, depressed, fearful, happy, sad, and surprised. Again according to Becker et al. [60], the three secondary emotions are: hope, fears-confirmed, and relief.
Studies on the functional neuroanatomy of emotion show that, in the majority of investigations, across individual emotions and induction methods, no unique brain region was consistently active, implying that no single brain region is routinely activated by all emotional tasks [61]. The same meta-analysis [61] classifies five primary emotions (happiness, fear, anger, sadness, and disgust) and enumerates the studies in the literature that reported the activation of twenty brain regions. Happiness, fear, anger, sadness, and disgust activated 18, 17, 16, 19, and 13 brain regions, respectively. Thus, a mathematical relationship based on electroencephalographic (EEG) signals and on the activation, or the sequence of activation, of multiple brain regions could be used to infer which emotions a person is feeling.
EEG applications were first developed to help people with disabilities [62] in a variety of areas, including the substitution of natural verbal and non-verbal communication [63,64,65], the connection with ICT devices [66], and the use of assistive orthoses [67], such as mobile systems [68], robotic manipulators [69], exoskeletons [70], prosthetic limbs [71], or grippers [72]. Other uses for EEG-based prostheses include assisting patients suffering from paralysis [73], amputations [74], and dysfunctions of the central nervous system [75].
EEG-based brain–computer interface applications have also been used in non-medical domains, e.g., the entertainment industry [76]. Studies have been conducted in non-medical fields, such as the creation of devices to evaluate the level of attentiveness [77]. Another topic that has been investigated is how to ensure overall safety in professional and daily living activities [78,79,80]. EEG-based applications might be used to control the operation of intelligent houses [81]. EEG also has the potential to be used for a variety of other non-medical applications, including the enhancement of non-pathological subjects’ performance [82]. The control of drones [83] and robots [84] might also fall under this category of applications, along with video games [85] designed to increase a person’s ability to concentrate or pay attention.
Some authors have used EEG technology [86,87,88] to identify emotions [89,90,91,92,93], but this approach does not appear to have been applied to robotics, particularly collaborative robotics, to the best of the authors’ knowledge. In general, the EEG signal is used in robotics to identify stressful conditions or decreased concentration [94,95] or to send voluntary commands to the robot [96].
The work proposed in [97] analyzed the EEG signal to discriminate between interest, stress, relaxation, excitement, engagement, long-term excitement, and focus. This approach may be of interest, for example, in assessing operator stress and ability to concentrate in order to reduce one of the risk factors for an accident and recommend a break or a slowdown in operations, as described in [98]. The work proposed in [99] describes a brain–computer interface based on an EEG sensor to command the motion of a virtual robot. All of these studies exhibited accuracies limited to the range of 80–90%.
Thus, in this work, we introduce a first novel element in comparison to the literature: we use EEG to transfer information about the emotional state of the human subject from the human subject to the cobot, thereby promoting the formation of an empathic relationship between human and cobot. We want to focus on the aspect of safety in particular: when a human subject perceives danger, he or she feels fear, which causes hyperactivation of various brain regions [100] and then produces a rapid reaction to move away from the dangerous situation. As a result, monitoring the EEG signal could quickly transfer the fear information to the cobot. To be able to use this approach, an initial hypothesis will be introduced in this paper, namely that the subject and the robot proceed in their activity without interactions with other subjects, so that the human subject’s brain hyperactivation can actually be associated with a fear emotion. Fear is the most important and intense emotion because it allows the subject to survive [101]; thus, the rationale of this article is in favor of the subject’s safety; that is, even if other strong emotions distract the subject from interacting with the robot, the robot will interpret them as an emotion of fear in order to maintain itself in a safe condition. Fear is transmitted from one subject to another in human society through the decoding of facial expressions and nonverbal signals, with the receiving subject unaware of the reason for the fear [102]. This evolutionary phenomenon serves to shorten the reaction time to danger and protect the entire society and all of its members. As a result, the approach proposed in this paper tends to facilitate the transition from a collaborative to a social dimension, even for current collaborative robots that become sobots [103]. Following the perception of the fear signal, the robot implements a strategy to contain the danger by moving to a safe position; this behavior should be recognized by human subjects as a nonverbal message of danger, allowing for a rapid transmission of fear from the robot to the human subjects.

2. Materials and Methods

The materials consist of two distinct but interconnected systems: an electroencephalographic (EEG) system and a collaborative robot. There are two experimental protocols: one in which decision thresholds are identified and one in which decision thresholds are validated. In the first protocol, we only use the EEG system, whereas in the second protocol, we use both systems, EEG and cobot, which are made to communicate with one another.

2.1. EEG Sensor System

The EEG system used (Figure 1) [104] consists of a band to which the electrodes are attached. The electrodes are connected to an acquisition board, and the band is worn around the head of a human subject. The acquisition board receives and partially processes the EEG signal before wirelessly transmitting it to a USB dongle connected to a PC.
The electrical scheme of the acquisition board is described in Appendix A.
In the first experimental protocol, the data are saved offline and later reprocessed using the following Matlab script, where the initial lines concerning input file configuration are omitted.
% ... (input file configuration omitted) ...
%data input
filename = strcat(path, slash, file);
fileID = fopen(filename);
data = textscan(fileID, '%f%f%f%f%f%f%f%f%f%f%f%f%f%f%f%f%f%f%f%f%f%f%f%f%q', ...
    'Delimiter', ',', 'headerlines', 5);
fclose(fileID);
sample_index = data{1};
k = 1;
for i = 2:9
    eeg_data(:,k) = data{i}; %electrode measures [microV]
    k = k + 1;
end
reference = data{10};
%example parameters
fs = 250; %[Hz] - sampling rate
Channel = 8; %number of channels
n = length(sample_index);
time = data{25};
start_time = time(1);
epoch_start = datetime(start_time);
end_time = time(n);
epoch_end = datetime(end_time);
elapsed_time = epoch_end - epoch_start;
elapsed_time = seconds(elapsed_time); %registration duration [s]
t = linspace(0, elapsed_time, n);
%bandpass filter
Wp = [3 15]/(fs/2); %pass band
Ws = [2 20]/(fs/2); %attenuation band
Rp = 1; %[dB] maximum passband loss
Rs = 60; %[dB] stopband attenuation
[N, Wp] = ellipord(Wp, Ws, Rp, Rs);
[B, A] = ellip(N, Rp, Rs, Wp);
X = filtfilt(B, A, double(eeg_data));
m = mean(X, 2); %mean of the eight filtered channels [microV], used as the primary variable
%calculation of values at rest and during fright
std_val = mean(m(fs*4:fs*10)); %resting average [microV] between 4-10 s
[fright_val, i] = max(m(fs*10:length(m))); %maximum after 10 s
fright_time = i/fs + 10;
threshold = 100; %fright threshold example [microV]
if fright_val - std_val > threshold
    disp('frightened')
else
    disp('unfrightened')
end
In the second experimental protocol, instead, the data are processed online by a ROS (Robot Operating System) application that implements the decision strategy, and a command signal is sent to a cobot that interacts with the subject wearing the EEG system.
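A minimal sketch of such an online decision node is given below. It is not the authors’ implementation: the topic names (/eeg_mean, /threshold), the message types, and the numeric values are assumptions chosen for illustration, mirroring the thresholding logic of the Matlab script above.
#!/usr/bin/env python
# Sketch of the online decision step: subscribe to the mean EEG value, estimate the
# resting level, and publish True when the fear threshold is exceeded (all names assumed).
import rospy
from std_msgs.msg import Float32, Bool

REST_WINDOW_S = 6.0       # seconds used to estimate the resting level
FRIGHT_THRESHOLD = 100.0  # example threshold [microvolt], as in the Matlab script

class ThresholdNode:
    def __init__(self):
        self.rest_samples = []
        self.rest_level = None
        self.pub = rospy.Publisher('/threshold', Bool, queue_size=1)
        rospy.Subscriber('/eeg_mean', Float32, self.on_sample)
        self.start = rospy.get_time()

    def on_sample(self, msg):
        elapsed = rospy.get_time() - self.start
        if elapsed < REST_WINDOW_S or not self.rest_samples:
            self.rest_samples.append(msg.data)   # accumulate samples for the resting mean
            return
        if self.rest_level is None:
            self.rest_level = sum(self.rest_samples) / len(self.rest_samples)
        frightened = (msg.data - self.rest_level) > FRIGHT_THRESHOLD
        self.pub.publish(Bool(data=frightened))  # True triggers the cobot reaction

if __name__ == '__main__':
    rospy.init_node('eeg_threshold_node')
    ThresholdNode()
    rospy.spin()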

2.2. Cobot

Rethink Robotics’ Sawyer is the adopted cobot (Figure 2). It consists of a 7-degree-of-freedom robotic arm, a gripping system (tool), a screen (head), a vision system with two cameras (one on the end effector and one on the head), and a control system. The arm is outfitted with seven servomotors that allow movement in all seven degrees of freedom; it has a maximum extension of 1260 mm and a maximum payload of 4 kg. The entire system weighs 19 kg.
According to the Denavit–Hartenberg notation, the robot can be represented by the parameters listed in Table 1.
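As an illustration of how the Denavit–Hartenberg parameters of Table 1 can be used, the following sketch chains the seven link transforms to compute the end-effector pose; the numeric rows are placeholders, not Sawyer’s actual parameters from Table 1.
# Sketch: forward kinematics from Denavit-Hartenberg parameters (placeholder values).
import numpy as np

def dh_transform(a, alpha, d, theta):
    # homogeneous transform between consecutive links (standard DH convention)
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

def forward_kinematics(dh_rows, joint_angles):
    # chain the link transforms to obtain the end-effector pose
    T = np.eye(4)
    for (a, alpha, d), theta in zip(dh_rows, joint_angles):
        T = T @ dh_transform(a, alpha, d, theta)
    return T

# placeholder rows (a [m], alpha [rad], d [m]) for a generic 7-DOF arm
dh_rows = [(0.081, -np.pi/2, 0.317), (0.0, np.pi/2, 0.193), (0.0, -np.pi/2, 0.400),
           (0.0,  np.pi/2, 0.169), (0.0, -np.pi/2, 0.400), (0.0,  np.pi/2, 0.137),
           (0.0,  0.0,     0.134)]
print(forward_kinematics(dh_rows, np.zeros(7)))  # pose at the zero joint configuration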
The robot communicates with a ROS node via a proper driver. The cobot transfers information to the ROS node via the driver from encoders that measure the angular positions at the joints, torque sensors at the joints, and cameras mounted on the cobot itself. The ROS node also sends commands to the cobot via the driver based on the required function, as well as by processing data from the cobot’s sensors or other sensors (EEG) connected to the PC. The ROS application is described in Appendix B.

2.3. Identification of the Decision Threshold

2.3.1. Participants

A total of 100 subjects were recruited ranging in age from 19 to 30 years. The subjects were divided into two groups, experimental group and control group, for an equal number of 50 subjects in each group. Subjects completed an informed consent form, as well as information and evidence of consent to the processing of personal data. Following that, subjects underwent a preliminary screening based on the following inclusion criteria:
- No cardiovascular disease, which could pose a risk factor during the experiment;
- No neurological disorders, which could change the intensity, shape, and latency time of the response signal;
- No abnormal eating habits, no excessive sports activity, and no over-hydration or exercise shortly before the experiment, particularly to limit changes in skin hydration;
- No creams or other cosmetic or medicinal products applied to the skin in the area where the electrodes were to be applied, and no long hair, to limit changes in the contact impedance between the skin and the electrodes;
- No substance abuse that alters the psychophysical state or general hydration level (alcohol, drugs, systemic medications).

2.3.2. Experimental Protocol

The experimental observations were carried out in a robotics laboratory while all other electrical and electromechanical equipment was turned off. The season was winter, and the indoor temperature was set at 21 °C with relative humidity in the 40–60% range. All cables and measuring equipment were shielded. The computer connected to the measurement system via Bluetooth was placed at the maximum possible distance so as to limit possible electrical interference.
EEG data were collected using active electrodes placed on the scalp at the location(s) of interest (Table 2 and Figure 3). The reference was the average ear value. The impedances of all electrodes were kept under 6 kΩ. The sampling rate of the EEG channels was 500 Hz, while the accelerometer sampling rate was 25 Hz.
Using the definitions in Table 2 and the positioning standard in Figure 3, according to the 10-10 system with Modified Combinatorial Nomenclature (MCN), the following procedure was used to place the electrodes on the subject.
  • The Velcro strip was placed on the subject, and a mark was made on the strip to indicate the standard positions of the spikey electrodes;
  • The band was removed from the subject, and all the spikey electrodes were then mounted in the correct positions indicated in the previous step;
  • The first lobe clip electrode was connected to the corresponding pin on the board (BIAS);
  • The second lobe clip electrode was connected to the corresponding pin on the board (SRB);
  • The three flat electrodes were connected to the three respective pins of the board (N1P, N2P, and N3P);
  • The five spikey electrodes were connected to the five respective pins of the board (N4P, N5P, N6P, N7P, and N8P);
  • The Velcro band with the eight electrodes (three flat and five spikey) was placed on the subject;
  • Finally, it was verified that the electrodes were correctly positioned after assembly.
Electrooculogram activity was not recorded because its effect on the EEG signal is very small compared to the hyperactivation associated with the fear phenomenon.
Facial-muscle activity was not recorded because the subject was required to maintain a relaxed condition, and data acquisition did not begin until the relaxed condition was achieved. To ensure this condition, the subject was asked to raise his or her hand slightly when he or she felt ready, and an operator who could observe the subject’s face would subsequently give a start signal with his or her hand upon observing the subject in the relaxed state. The operator observing the subject never detected macro-movements of the subject’s face. Had macro-movements of the face been observed, the measurement would have been discarded and not repeated on the same subject.
The data of the subjects were then anonymized using a three-digit numerical code. To avoid bias in the analysis stages, the research team member who performed the screening and anonymization did not participate in the later stages of the study and was also kept anonymous to the other members of the research team, aside from the group leader.
The subject is made to sit at the data collection station, which includes a chair and a table to support the device. In order to obtain reliable results, the subject is placed away from any nearby electronic devices and facing away from the research group. A member of the research team ensures that the electrode band is properly placed on the subject so that the sensors and the subject’s head are in constant contact. The test subject is instructed to remain calm and relaxed. After the sensors have been tested to make sure they work, the Open BCI software records the signal.
The EEG signal is recorded for approximately 20 s for each subject; after 10 s, a member of the research team induces a fear reaction in the subject through the percussion of a pair of metal objects in order to detect any changes in the EEG signal.
Following the experiment, a member of the research team asks the subject if he or she was scared in order to confirm the presence of the fear emotion.

2.4. Randomized Controlled Trial (RCT) of Empathic Collaboration

2.4.1. Participants

The sample size for the RCT described in this section could be estimated using the research protocol described in Section 2.3; however, as will be noted in Section 3.1, the statistics do not require a large sample. The sample size was conventionally chosen to be 100 subjects to account for the difference between the two experiments; in particular, the subject in the protocol described here is exposed to the motion of a cobot while at rest, so his or her level of EEG activation may differ from that identified in Section 2.3. A total of 100 subjects, all of whom are different from those who participated in the previous experimental protocol and range in age from 19 to 30 years, were recruited. The subjects were divided into two groups, experimental group A and control group B, with an equal number of 50 subjects in each group. The subjects completed an informed consent form, as well as information and expressions of consent to the processing of personal data. Following that, subjects were subjected to the same screening described in Section 2.3.
The data of subjects are then anonymized using a three-digit numeric code. To avoid bias in the analysis stages, the member of the research team who performed the screening and anonymization does not participate in the subsequent stages of the study and is also kept anonymous to the other members of the research team, apart from the group leader. Subjects are put into one of two groups, A or B, based on a random draw. Group A is the intervention group, and Group B is the control group.

2.4.2. Experimental Protocol

The subject is made to sit at the data collection station, which includes a chair and table to support the device, as well as a collaborative robot in front of the subject. To ensure reliable results, the subject is placed away from any nearby electronic devices to avoid interference. A member of the research team ensures that the electrode band is correctly placed on the subject so that the sensors and the subject’s head are in constant contact. For safety reasons, a member of the research team stays on the subject’s right side, holding an emergency button and ensuring that the subject always remains behind the edge of the table with his or her hands under the table. The subject being tested is instructed to remain calm and relaxed. After the correct sensor activity has been checked with the Open BCI software, the signal is recorded.
If the subject is assigned to group A, the robot moves at a distance of 800 mm for the first 10 s, then moves rapidly toward the subject to a distance of 150 mm and a height of 400 mm from the table, and finally moves back slowly, closing on itself. The EEG signal is measured for a total of 10 s. This rapid movement is sufficient to generate a feeling of fear and still provide a sufficient level of safety for the subject. In fact, it is known in the literature that an intense acoustic stimulus coming from a source near a subject produces a stunning effect. The stunning effect is followed by autonomic reflexes such as increased heart rate and danger avoidance behaviors. It has been observed in these subjects that the autonomic reflex is closely related to the emotion of fear [105]. Therefore, it can be said that autonomic reflex and fear occur together in the subject.
If the subject is in group B, the robot moves for 20 s while staying 800 mm away from the subject. During the robot’s work session, the EEG signal is measured according to the procedure described in Section 2.3.
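For illustration only, the two motion scripts can be summarized as timed Cartesian waypoints, as in the following sketch; the move_linear helper, the reference frame (origin at the subject’s edge of the table, distances in millimetres), and the segment durations are assumptions and do not correspond to the code actually used to drive the cobot.
# Sketch of the group A and group B motion scripts (all values illustrative).
import time

def move_linear(target_xyz_mm, duration_s):
    # placeholder: drive the cobot end effector to target_xyz_mm over duration_s
    print("moving to", target_xyz_mm, "mm over", duration_s, "s")
    time.sleep(duration_s)

def run_group_a():
    move_linear((800, 0, 400), 10.0)  # stay about 800 mm away for the first 10 s
    move_linear((150, 0, 400), 1.0)   # rapid approach: 150 mm away, 400 mm above the table
    move_linear((800, 0, 400), 5.0)   # slow retreat
    move_linear((900, 0, 200), 4.0)   # close in on itself

def run_group_b():
    for _ in range(4):                # keep working at about 800 mm for 20 s
        move_linear((800, 100, 400), 2.5)
        move_linear((800, -100, 400), 2.5)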
Following the experiment, a member of the research team asks the subject in both groups A and B whether he or she was scared to check the manifestation of the emotion of fear. This questionnaire serves to confirm the hypothesis in the literature.

3. Results

This section is divided into two parts: the first is for the experiment described in Section 2.3, in which the subject is subjected to rapid high-intensity sound stimulation; the second is for the experiment described in Section 2.4, in which the subject is subjected to rapid, close-range movement of a collaborative robot.

3.1. Results—Identification of the Decision Threshold

In accordance with the protocol outlined in Section 2.3, 100 subjects were screened for enrolment. As shown in Figure 4, five subjects were discarded: one had engaged in sports activity just before the test session, one had abused alcohol the night before, two had hair that was too long, and two had applied an oil-based moisturizer to their face. Two more subjects were discarded after the experimental phase because the measurement was incomplete, most likely due to the detachment of one or more wires during the experiment. The experiment was not repeated with these subjects so as not to bias it, since they would already have known what to expect.
Table 3 summarizes the experiment’s results, emphasizing the statistically significant difference between the data composing the rest distribution and the data composing the peak distribution. There is also a statistically significant difference between the data composing the rest distribution and the data composing the peak/rest distribution.
The probability that the mean value of the rest distribution is equal to 53.58 (the minimum value of the peak distribution) was computed with the Mann–Whitney U test and produced a p-value < 10⁻³. According to this result and to the results in Table 3, the rest distribution always lies below the minimum value measured for the peak distribution.
Figure 5 and Figure 6 show comparisons of the probability distributions in Table 3, where it can be seen qualitatively, among other things, that the probability distributions are not normal; this justifies the choice of the Mann–Whitney U test as the statistical criterion for comparison.
Similarly, it is possible to verify that the peak/rest ratio exceeds a threshold value that is less than the minimum observed value, obtaining the same result. This second analysis, although more accurate, requires a measurement of the rest value over a period of time before one can assess whether a peak is present.
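As an illustration of the statistical criterion, the comparison can be reproduced with the Mann–Whitney U test as sketched below; the values are synthetic placeholders, not the measured rest and peak distributions.
# Sketch of the statistical comparison on synthetic (non-normal) data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
rest = rng.gamma(shape=4.0, scale=5.0, size=93)          # synthetic resting values [microvolt]
peak = 150 + rng.gamma(shape=6.0, scale=40.0, size=93)   # synthetic peak values [microvolt]

# one-sided test: is the peak distribution stochastically greater than the rest one?
u_stat, p_value = mannwhitneyu(peak, rest, alternative='greater')
print("U =", u_stat, " p =", p_value)

# threshold criterion on the peak/rest ratio, as described in the text
ratio = peak / rest
print("minimum peak/rest ratio:", ratio.min())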
The raw data (Figure 7) acquired from the eight EEG channels listed in Table 2 and depicted in Figure 3 illustrate an example of acquisition for a single individual. After band-pass filtering that preserves the fundamental signal content, the same data are depicted in Figure 8 in the frequency domain and in Figure 9 in the time domain. In addition, Figure 10 shows the mean of the data received from the eight EEG channels already mentioned; this mean is utilized as the primary variable to identify the peak conditions discussed in this section, according to the technique given in Section 2.1.

3.2. Results—Randomized Controlled Trial (RCT) of Empathic Collaboration

A total of 100 subjects were identified in accordance with the protocol outlined in Section 2.4. As shown in Figure 11, two subjects were discarded: one had abused drugs the night before, and the other had applied moisturizer to his face. The remaining 98 subjects were divided into two groups of 49 each: experimental group A and control group B.
Table 4 summarizes the experiment’s results, highlighting a statistically significant difference between the data composing the rest distribution and the data composing the peak distribution in experimental group A (p < 10⁻³). In experimental group A, there is a statistically significant difference between the data composing the rest distribution and the data composing the peak/rest distribution (p < 10⁻³). Finally, there is no statistically significant difference (p = 0.287) between the data that comprise group A’s rest distribution and the data that comprise control group B’s rest distribution.
Figure 12, Figure 13 and Figure 14 show comparisons of the probability distributions in Table 4, where it can be seen qualitatively, among other things, that the probability distributions are not normal; this justifies the choice of the Mann–Whitney U test as the statistical criterion for comparison.
To determine whether the subjects felt fear, they were given a questionnaire in which they were asked to assign a value from zero to ten to the amount of fear they felt during the experiment, with zero indicating no fear and ten indicating maximum fear. Table 5 summarizes the findings, demonstrating that the experimental group’s level of fear is significantly higher (p < 10⁻³) than the control group’s level of fear.
Figure 15 shows a comparison of the probability distributions in Table 5.

4. Discussion

There are multiple articles in the literature on the recognition of emotional state by means of EEG instruments [106]. In particular, the emotion manifested in the observed subject can be elicited in several ways, the most popular being: visual elicitation using images, a prepared task, audio-visual elicitation using short film video clips, audio elicitation using music, multiple techniques, imagination techniques/memory recall, and social interactions. This paper presents two experiments. In the first experiment, emotion is elicited through an auditory stimulus. In the second experiment, there is a task performed to distract the subject and an interaction with a robot, which is the source of the emotion elicitation. It should be noted that the interaction with the robot takes place in the visual field, although the phenomenon cannot be limited to a merely visual dimension, as the robot enters the subject’s peripersonal space. Therefore, compared to the literature, the second experiment has a rather hybrid elicitation technique, but one certainly tending toward interactivity.
In the literature [107], there are several methods of feature extraction and classification, the main ones being: analysis in the frequency domain, in the time domain, and in the wavelet domain; methods based on statistical features; support vector machines; K-nearest neighbors; linear discriminant analysis; and artificial neural networks. The algorithm proposed in this paper, although it uses data filtered in the frequency domain, is based on analysis in the time domain. Probably the class of classification methods in the literature that comes closest to the one proposed here is that based on statistical features, in that the discriminant is the maximum value of the mean of the EEG signals reached in the observation time frame and synchronized with the source of artificial elicitation of the emotion.
At present, there are several types of applications in the literature that can be attributed to a medical and a non-medical macro class. Medical classifications often follow the pathology of the subject or the stage of health treatment. Non-medical classifications are often divided into two large families: entertainment and safety. Certainly, the application proposed in this paper is non-medical and aimed at safety.
From the point of view of the hardware used, the type of electrode influences the measurement; however, since the electrode type is only a rough indication, because various factors influence the measurement, it is more relevant to state the contact impedance, which in our case was kept under 6 kΩ. The number of channels in the literature ranges from 5 up to 256; in our case, it is 8.
Some scientific works are based on standard emotion databases. In our case it was not possible to use these standard data because we are interested in a source of emotion elicitation based on interaction with the robot that is not represented in standard databases.
The number of emotions that are recognized ranges from 2 to 11. In our case, under a working hypothesis that will be expounded later, the model is based on two emotions: fear and pleasure. To state this hypothesis clearly, we introduce Russell’s 2D emotion model, denoted by the expression:
arousal² + valence² = 1
In the two-emotion model, both emotions can reach all values of arousal, that is, between −1 and +1, while from the perspective of valence, one emotion is considered positive (pleasure) and the other is considered negative (fear). Therefore, both fear and pleasure take values in modulus between 0 and 1, but fear is the name of the valence only when it is negative, and pleasure is the name of the valence only when it is positive.
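A minimal sketch of this two-emotion reading of Russell’s model follows; the mapping is an illustrative interpretation, not the authors’ implementation.
# Sketch: map a (valence, arousal) point on Russell's circle to the two-emotion model.
def two_emotion_model(valence, arousal, tol=1e-6):
    # the point must satisfy arousal^2 + valence^2 = 1 (Russell's circle)
    assert abs(valence**2 + arousal**2 - 1.0) < tol, "point must lie on the unit circle"
    if valence < 0:
        return "fear", abs(valence)      # negative valence is read as fear
    return "pleasure", abs(valence)      # positive valence is read as pleasure

print(two_emotion_model(-0.8, 0.6))      # ('fear', 0.8)
print(two_emotion_model(0.6, -0.8))      # ('pleasure', 0.6)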
We now come to the working hypotheses. The first hypothesis is that the maximum of fear is always greater than the maximum of pleasure. Since valence measures the intensity of emotion, this assumption is reasonable in the industrial work environment, where fear of losing one’s life in the face of a dangerous event (valence of modulus 1) may manifest, but it is difficult, if not impossible, for such intense pleasure (valence of modulus 1) to manifest. This also represents a limitation on the type of work, as there are rare types of work that produce intense emotions: think of a performer on stage. The second limitation is related to the intense positive or negative emotions that may nonetheless be produced in a work environment due to non-work causes that may occur unexpectedly. This second case, although not explicitly contemplated, should still be considered a condition of potential danger, as it distracts the subject’s attention from his or her activity, so it is appropriate for the cobot to react by interrupting the work activity and producing a safe situation.
A previous work [108] presents the same hypothesis as ours, using the emotion of pleasantness in the case of odor-induced EEG signal measurement.
Finally, regarding the accuracy of emotion recognition [109], apart from the case where valence is measured indiscriminately by its sign, the literature presents various results ranging from 50% to 95%, although most works report accuracies between 75% and 85%. However, no paper assesses the uncertainty of the classification accuracy, since each implicitly introduces a strong limitation by considering the measurement database as universal, i.e., as exactly statistically representative of the actual population from which it is extracted. In this respect, our work introduces a methodological innovation by using probability distributions reconstructed from the data and thus being able to assess accuracy as well as precision. Moreover, the accuracy results obtained are greater than 99.99%.
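One possible way to carry out such an assessment is sketched below: the rest and peak distributions are reconstructed with kernel density estimates, and the misclassification probabilities beyond the decision threshold are integrated. The data are synthetic and the procedure is an illustration under our own assumptions, not the authors’ exact method.
# Sketch: accuracy estimated from probability distributions reconstructed via KDE.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
rest = rng.gamma(shape=4.0, scale=5.0, size=93)           # synthetic rest values [microvolt]
peak = 150 + rng.gamma(shape=6.0, scale=40.0, size=93)    # synthetic peak values [microvolt]
threshold = 100.0                                         # example decision threshold [microvolt]

rest_kde, peak_kde = gaussian_kde(rest), gaussian_kde(peak)
grid = np.linspace(0.0, peak.max() * 1.5, 20000)
dx = grid[1] - grid[0]

p_false_alarm = rest_kde(grid)[grid > threshold].sum() * dx   # rest classified as fear
p_miss = peak_kde(grid)[grid <= threshold].sum() * dx         # fear classified as rest
accuracy = 1.0 - 0.5 * (p_false_alarm + p_miss)               # balanced accuracy estimate
print("false alarm:", p_false_alarm, " miss:", p_miss, " accuracy:", accuracy)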
As stated in Section 3.1, when a subject is relaxed, an abrupt sound signal of high intensity can be used to produce an artificially induced fear emotion in the subject. This can be accomplished simply by using EEG sensors, without requiring particularly relevant or precise electrode locations on the scalp. This property is thought to be due to the fact that primary emotions activate various brain regions [61] and that, in particular, fear is associated with high EEG signal intensities [110,111], making this phenomenon easily measurable, as was later confirmed experimentally.
Eye movement, as mentioned in Section 2, produces a negligible artifact in relation to the observed phenomenon [112]. There is an accelerometer in the measurement system that is used by the board’s internal system for appropriate compensation (Figure A9). As previously stated in Section 2, the autonomic response to a sudden, intense sound from a source close to the subject is inextricably linked to the emotion of fear [105]. As a result, whether we measure one or the other, the important thing for us is that the cobot receives the information that there is a danger, and this result was obtained. During the experimental phase, we realized that the observed phenomenon is so evident that excessive signal processing is not required to highlight the presence of the subject’s perception of danger. Artifacts are present in the signal, but they are not significant. This observation enables us to accelerate signal processing and quickly send a reaction decision to the robot. To avoid danger, it is critical that the reaction be quick.
The results of the second experiment, conducted with a collaborative robot and described in Section 3.2, produced the same result. This second study was performed in an especially rigorous manner using an RCT (randomized controlled trial) scheme, preserving an extremely high level of statistical reliability. It can be seen, for example, that identifying a discrimination threshold between the relaxation and fear conditions using an EEG sensor is always statistically possible. It is important to note, however, a numerical detail that is unusual in comparison to the literature: the intensity of the peak signal associated with the occurrence of fear is extremely high, much higher than would be expected. In fact, some authors report that the presence of strong emotions, such as fear, is associated with EEG signal intensities greater than 100 µV [110], while others indicate EEG signal intensities greater than 200 µV [111]; in particular, the experiment by Kometer et al. [111] identifies an EEG signal intensity of 500 µV in subjects watching scary movies. This phenomenon is most likely associated with the presence of significant artifacts as a result of macroscopic and microscopic movements in response to the emotion of fear. As a result, the spike appears to be associated with the artifacts rather than the EEG signal itself. In this particular case, artifacts should not be discarded or deleted because they always occur concurrently with the emotion of fear, serving as an indicator of the discrimination threshold between normal and fearful conditions. It is possible for the artifacts to appear a few milliseconds after the fear emotion has occurred in the encephalon, but reaction times to a fearful event are extremely limited, so even while maintaining a high EEG threshold, the decision can be identified with a delay of a few tens of milliseconds from the occurrence of the emotion in the encephalon. As a result, a fear communication channel was established between the human subject and the robot.
After detecting the fear condition, the robot can react and move to a position that reduces the chances of colliding with the subject in the work cell. In the experiment described in Section 2.4, the results of which are listed in Section 3.2, the collaborative robot closes in on itself, moving away from the area where the human subject is present (which is separated from the robot by a work table), reducing its overall moment of inertia and stopping, bringing its kinetic energy to zero. An alternative reaction would be to switch to a control that only compensates for static actions, so that any dynamic actions from the subject would allow the robot to be moved away quickly and easily, while any static actions (including the weight of the robot itself and any objects carried by the gripper) would be compensated for. If the human subject is aware of and recognizes the robot’s reaction to a dangerous situation, the human subject is also capable of receiving a signal equivalent to its fear emotion, this time emanating from the robot toward the human subject. In the future, it would be interesting to test this phenomenon experimentally, but for now, we can assume it as a reasonable working hypothesis.
As previously stated in Section 1, the phenomenon of fear transmission through the members of a subject society, even without knowing the exact cause of the danger but only by decoding the attitudes of the frightened subjects, is a property of societies that promotes the preservation of those societies and their members. Because it can send and receive fear, the EEG sensor-based empathic cobot becomes a part of society and helps to keep it and its members alive.
One might object that the subject interacting with the cobot is not always in a relaxed state and may experience emotions other than fear. There are several possible responses to these observations. In the case where the subject’s standard condition differs from rest, as long as there are no sudden spikes in the EEG signal, i.e., no strong emotions, a decision threshold that is relative to the subject, determined by the ratio of standard signal to spike signal using only data from that subject, can be used; that is, the cobot must be aware of the peculiarities of the subject interacting with it and, possibly, know how to identify him or her. In the case where the subject exhibits sudden spikes in the EEG signal that cannot be associated with fear, it should be recalled that the phenomenon of fear is due to a danger; thus, treating EEG spikes as associated with a potential danger allows the effect of these potential dangers to be limited as much as possible. This approach may reduce the efficiency of the interaction, especially if there are too many interruptions that are not related to fear. In the latter case, it would be beneficial to introduce additional information via other sensors to differentiate between emotions. The robot’s response time should be carefully monitored; it should not be increased excessively, as this would reduce the safety level of the interaction.

5. Limitations and Future Developments

5.1. General Considerations

This study proposes a new communication model between human and cobot that aims to protect the safety of the human.
A human and a cobot arm are connected in the previously illustrated manner, which can be summarized as follows: the human has a band of electrodes on his head that record electroencephalographic signals of his brain activity; the electrodes are connected via cable to a sensor; the sensor is connected via Bluetooth to a computer, which in turn is connected via cable to the cobot arm. By means of a simple AI programme in the computer, the cobotic arm is able to distinguish, among various electroencephalographic signals, a certain brain hyperactivity of the human, decode it as a fear signal, and stop and close in on itself upon its detection.
The reasoning that guided the study was as follows: where the EEG signal indicates a certain brain hyperactivity, this must be interpreted by the cobot as fear; if the human feels fear, then it means there is danger; if there is danger, the cobot must stop.
The study is valuable because it maintains the basic direction of cobotics research, which, from its inception in the 1990s [1] to the present day, has always focused on the relationship between human safety and machine performance. In the first decade of the 2000s, a group of scholars from the University of Southern Denmark, coordinated by E. Østergaard [113], focused on the issue of safety and in 2008 realized the UR5 cobot, equipped with virtual barriers that are as effective as they are sustainable.
As valuable as it is, however, this study has its limitations, which should be worked on in order to develop its strengths and potential. Our analysis first turns to the three logical assumptions underlying the study:
  • Fear in humans is an emotion capable of more intensely activating brain activity. This ‘cerebral hyperactivity’ enables fear to be distinguished from other emotions;
  • Precisely because it is recognizable and distinguishable, fear is detectable by the EEG;
  • If the human feels fear, then there is a real danger that requires the cobot to stop.
Consider points 1 and 2, whereby fear causes a unique brain hyperactivity in the human, different from any other emotion, which makes it easily detectable by the EEG.
Some recent neuroscientific studies, coordinated by Alexander Shackman (University of Maryland) in collaboration with other universities [114], have shown that this is not the case at all, leading the scientific community to revise the neurobiological model by which the etiology of anxiety and that of fear were considered different and separate. Shackman showed that psychological-behavioral reactions to threats, whether concrete, possible, or imaginary, are actually controlled by common neural circuits. According to the previous scientific literature, fear was controlled by the amygdala, which in the event of perceived (or supposed) danger causes the hormones associated with attack and flight to be released, and the circulatory, muscular, and intestinal systems to be activated. Anxiety, on the other hand, was thought to depend on the bed nucleus of the stria terminalis (BNST).
Studies coordinated by Shackman challenged this difference. One hundred volunteers were subjected to a painful shock associated with an unpleasant image and sound, with or without warning, and brain responses were observed through functional magnetic resonance imaging to determine which areas were activated. The experiment found that when subjects perceived a threat, both the neural circuits afferent to the amygdala and those afferent to the bed nucleus of the stria terminalis were activated. The two structures gave statistically indistinguishable responses. The study was therefore able to conclude that there is a common neural basis controlling both anxiety and fear.
It is now clear that if the studies coordinated by Shackman are right, assumptions 1 and 2 lose their value: one cannot distinguish between fear and anxiety, and who knows whether this also applies to other primary emotions. The cobotic arm is therefore exposed to the misinterpretation of the electroencephalographic signal. A human subject who is particularly anxious, or even just prone to anxiety, would continually generate ambiguous signals, because his anxiety-driven brain activity would be neurobiologically identical to fear and the cobot would interpret it as fear, hence danger, and would continually stop. By not distinguishing anxiety from fear, the cobotic arm would be blocked continuously by false alarms. Result: ensuring the safety of the human would kill the performance of the machine.
Such a machine would not only fail to increase production performance, it would actually damage it. It could in fact lead to the psychological alteration of the human, who would be irritated by the constant interruption of the machine for no reason. In the case of an anxious human, the blocking of the machine would feed the anxiety, and the constant occurrence of anxiety in the human—interpreted by the cobot as fear—would lead to further and continuous stoppage of the cobot. In an endless spiral, anxiety and blockage would feed each other in a directly proportional way. Human–cobot interaction would spiral into an endless loop.
In addition, it cannot be ruled out that other brain activities could also be interpreted by the machine as fear, since here the cobot does not use any special magnetic resonance imaging, but only a simple EEG. In that case, any hyperactivity could lead to an unmotivated blocking of the machine, resulting in performance inhibition and psychological alteration of the human.
First outcome of the analysis: the impossibility of distinguishing fear nullifies the fear/danger equation on which this work is based.
But beyond the hermeneutic difficulties, is the fear–danger equation really valid in an absolute sense? Even if the cobot’s interpretation were correct and the human was afraid, would that fear signify such danger that stopping the cobot was necessary? This is assumption 3.

5.2. If the Human Feels Fear, It Is Because There Is Some Situation of Real Danger

Assuming that the machine has correctly registered the feeling of fear: the human subject is not anxious, not euphoric, but genuinely frightened. Fear could indicate real danger, and therefore stopping the cobot would be correct. The conditional has been used because the reasoning here rests on the validity of another equation, which is taken for granted when it should not be: granted that the human is really afraid, does this fear necessarily relate to a danger arising from the human–cobot interaction, one that therefore requires the cobot to be stopped?
The study does not ask this question. Yet without the relation between danger and the cobot’s work, there is no need for the cobot to stop: if the danger does not concern what the cobot is doing with the human, but an external and unrelated situation, why should the cobot stop?
This second problem is undoubtedly less serious than the first, but it is not absent: the fear of the human is real and correctly read by the machine, but it is related to factors that are completely external and unrelated to the interaction with the cobot. The outbreak of a thunderstorm, the memory of an event, or any other event extrinsic to the human–machine collaborative relationship might frighten the human; however, that fear, actually experienced and correctly registered by the machine, would not affect the human–cobot interactive activity at all and would not require the cobot to stop. But the cobot stops. It cannot do anything else, because it cannot distinguish between fear that involves danger and fear that does not involve danger. It is not the hermeneutics of fear that does the damage here, but the fear–danger equation, as it is understood in an uncritical, undifferentiated, and indeterminate way. Hence the risk of the cobot getting stuck unnecessarily, damaging the human’s performance and psychological state.

5.3. How to Solve the Problems?

One possible solution to the problems that emerged could be to enhance the cobot’s AI.
Owing to the increased AI, the cobot would be able to analyze its surroundings and obtain useful data to recognize the different nature of events. More importantly, it would be able to observe the human’s behavior and obtain valuable additional information on his or her nature. It could retrieve that information through natural language, through biometric facial, sound, and tactile recognition, and through temperature and heart rate detection and pupil observation. With these data, it could analyze human behavior, recognize its reactions, and distinguish its emotions.
Owing to AI, the cobot would self-learn from the relationship, could make autonomous decisions, and react to stimuli in a relevant and effective manner, also owing to facial and sound expressions. Its interaction with the human would be much more empathetic.
The human would then have to learn to decipher the cobot’s language correctly in order to interact with it. The interaction between human and cobot would be similar in many respects to that between human and social robot. The only difference would be the context: personal that of social robots, work that of cobots. Even at work, the interaction could be personalized: the cobot could learn the specific anthropological characteristics of the subject it works with and conform completely to them in manner and timing. The cobot would then be an ideal collaborator for the human, because in addition to safety, it would also guarantee the serenity of the working environment.

6. Conclusions

An EEG signal was used to connect a human agent and a cobot agent. This link allowed the human agent to transmit a fearful emotion to the cobot agent. Furthermore, the cobot’s reaction allowed a message of fear to be transferred from the cobot agent to the human agent. The social relationship between the two agents was strengthened by the transmission of fear. The experiments revealed an extremely high level of statistical reliability. The experiments were carried out in a controlled environment. In a more complicated setting, there may be strong limitations to how this technique can be used, necessitating the development of more complex strategies than those proposed in this paper.

Author Contributions

Conceptualization, A.B. and I.E.; methodology, A.B.; software, A.B., I.E.; validation, A.B.; formal analysis, A.B., I.E.; investigation, A.B.; resources, A.B.; data curation, A.B., I.E.; writing—original draft section ‘Limitations and future developments’, N.C.; writing—original draft preparation of the other sections, A.B.; writing—review and editing section ‘Limitations and future developments’, N.C.; writing—review and editing sections A.B., I.E.; visualization, A.B.; supervision, A.B.; project administration, A.B.; funding acquisition, A.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, with the positive report of the “Consulta Etica del laboratorio SAR” (protocol code 021-0001 date 18-10-2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data are not public.

Acknowledgments

Special thanks to students Alessandro Bettinsoli, Alessandro Bossoni, Andrea Bordoni, Simona Sgarbi, and Sofia Sina for their preliminary work on material testing.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Description of The Acquisition Board

The electrical scheme of the acquisition board is depicted in Figure A1, Figure A2, Figure A3, Figure A4 and Figure A5 [115].
Figure A1. Acquisition board—part A.
Figure A2. Acquisition board—part B.
Figure A3. Acquisition board—part C.
Figure A4. Acquisition board—part D.
Figure A5. Acquisition board—part E.
The acquisition board is powered through an appropriately shielded power supply system (Figure A6 and Figure A7) to limit electromagnetic interference [115].
Figure A6. Power supply system—part A: battery connect (left) and voltage inverter (right).
Figure A7. Power supply system: first voltage regulator at −2.5 V (up); second voltage regulator at 3 V (middle); and third voltage regulator at +2.5 V (down).
The power system is endowed with an automatic shutdown (ASD) relay [115], as shown in Figure A8.
Figure A8. Automatic shutdown (ASD) relay.
The measurement system is equipped with an accelerometer (Figure A9) to acquire motion signals and appropriately compensate for any disturbances of kinematic origin in the measurement signal [115].
Figure A9. Equipped accelerometer.
At the heart of the measurement system is the integrated ADS1299 (Figure A10). It is an EEG and biopotential measurement analog-to-digital converter with a flexible per-channel input multiplexer that can be independently linked to internally generated signals for test, temperature, and lead-off detection. Furthermore, any combination of input channels can be used to generate the patient bias output signal [116].
Figure A10. Integrated ADS1299 for EEG conversion and its connections.
The Bluetooth connection to the PC is provided by an RFDuino RFD22102 module, an ultra-small programmable computer, according to the connections shown in Figure A11.
Figure A11. RFDuino RFD22102 Bluetooth communication module.

Appendix B. Description of the ROS Application

As mentioned in Section 2.2, the interaction between the human agent and the robotic agent is realized through an application in the ROS development environment. This application (Figure A12) consists of three groups of serially connected nodes. The first group consists of a driver interface to the EEG acquisition hardware system and a set of instructions for processing EEG signals in order to check whether the threshold associated with the human agent’s perception of danger has been reached. If so, one of the nodes in the block publishes a True value on the topic threshold, indicating that the threshold has been exceeded. Downstream of the first group of nodes is an action client node, which, upon reading the True value in the topic threshold, initiates an action command to a subsequent action server node. The action server node incorporates (Figure A12) a number of other nodes that take care of the interface with the robot through a dedicated driver.
Figure A12. ROS application scheme.
Real-time control of the robot, on the other hand, is left to the dedicated controller and is not coded in the ROS application, although the controller sends signals about the kinematic state of the joints to the action server at a constant cadence so that the server can process some simple decisions based on information that may describe normal or abnormal situations. The action client communicates with the action server through two topics: goal and cancel. Goal defines the kinematic configuration that the robot must achieve, i.e., complete closure on itself, while cancel sends a command to suspend the request indicated in goal. If the robot is correctly positioned in the working layout, it is unlikely that a collision with people or things will occur in the phase of closing the robot on itself. In fact, nothing should be present between the end of the gripper and the base of the robot other than the kinematic chain of the robot itself. This consideration makes it possible to state that, under normal conditions, it is unlikely that the topic cancel will be activated; however, for safety reasons, this possibility was provided for. When the goal has been reached, the action server signals the event by writing to the topic result. In general, the action server sends frequent updates of the current kinematic situation by writing to the topic status. In the event that an abnormal event occurs, particularly in this application, if the current kinematic configuration deviates excessively from the expected kinematic configuration, the action server reports this eventuality on the topic feedback. In fact, if the current kinematic configuration is far from the expected one, it means that a collision with an obstacle may have occurred, or damage to the motion system may have occurred, or other unexpected anomalies may have occurred. If the action client reads this information on the feedback topic, it proceeds to send a signal to the action server by writing to the cancel topic to suspend the command previously sent via the goal topic. The state diagrams of the action server and action client conform to the ROS standard and are shown in Figure A13 and Figure A14, respectively.
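A minimal sketch of the action-client logic described above, written against the standard ROS actionlib API, is given below; the action name, the use of FollowJointTrajectoryAction as the goal type, and the deviation limit are assumptions made for illustration.
#!/usr/bin/env python
# Sketch: action client that reacts to the fear threshold and cancels the goal
# when the feedback reports an excessive deviation (names and limits assumed).
import rospy
import actionlib
from std_msgs.msg import Bool
from control_msgs.msg import FollowJointTrajectoryAction, FollowJointTrajectoryGoal

class FearReactionClient:
    def __init__(self):
        self.client = actionlib.SimpleActionClient('/close_on_itself',
                                                   FollowJointTrajectoryAction)
        self.client.wait_for_server()
        rospy.Subscriber('/threshold', Bool, self.on_threshold)

    def on_threshold(self, msg):
        if msg.data:  # the EEG node signalled that the fear threshold was exceeded
            goal = FollowJointTrajectoryGoal()  # safe "closed" configuration to be filled in
            self.client.send_goal(goal, feedback_cb=self.on_feedback)

    def on_feedback(self, feedback):
        # cancel the goal if the measured configuration deviates too much from the expected one
        if any(abs(e) > 0.2 for e in feedback.error.positions):  # [rad], example limit
            self.client.cancel_goal()

if __name__ == '__main__':
    rospy.init_node('fear_reaction_client')
    FearReactionClient()
    rospy.spin()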
Figure A13. State diagrams of the action server.
Specifically, Figure A13 shows the various states of the action server, divided into two categories: intermediate states and terminal states. The intermediate states are Pending, Recalling, Active, and Preempting; the terminal states are Rejected, Recalled, Preempted, Aborted, and Succeeded. While running, the action server processes information coming from the action client, specifically the goal to be executed and any request to cancel a goal previously sent by the client. The transition from one state to another depends on the information coming from the client (goal and cancel) and on internal server variables (rejected, canceled, accepted, aborted, succeeded). The client's request may be rejected because it fails internal checks and is not an acceptable request; if the request is accepted, the server takes action to execute the goal. The execution may then be aborted or preempted for various reasons, bringing the server into the corresponding terminal states.
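The execute callback below sketches how these terminal states can be reached with the actionlib SimpleActionServer, which accepts incoming goals automatically; reproducing the Rejected and Recalled states would require the lower-level ActionServer interface. The helper functions stand in for the robot-driver nodes and are assumptions.

```python
#!/usr/bin/env python
# Sketch of an execute callback reaching the terminal states of Figure A13
# (Preempted, Aborted, Succeeded). Helper functions are placeholders for the
# robot-driver interface; all names are assumptions.
import actionlib
import rospy
from control_msgs.msg import (FollowJointTrajectoryAction,
                              FollowJointTrajectoryFeedback,
                              FollowJointTrajectoryResult)


def motion_finished():
    return False          # placeholder: goal configuration reached?


def anomaly_detected():
    return False          # placeholder: collision, drive fault, ...


def current_tracking_error():
    return [0.0] * 7      # placeholder: commanded minus measured joint angles


def execute_cb(goal):
    rate = rospy.Rate(20)
    feedback = FollowJointTrajectoryFeedback()
    while not motion_finished():
        if server.is_preempt_requested():      # the client wrote on "cancel"
            server.set_preempted()             # terminal state: Preempted
            return
        if anomaly_detected():
            server.set_aborted()               # terminal state: Aborted
            return
        feedback.error.positions = current_tracking_error()
        server.publish_feedback(feedback)      # written on the topic "feedback"
        rate.sleep()
    server.set_succeeded(FollowJointTrajectoryResult())  # terminal: Succeeded


if __name__ == "__main__":
    rospy.init_node("robot_action_server")
    server = actionlib.SimpleActionServer("robot/follow_joint_trajectory",
                                          FollowJointTrajectoryAction,
                                          execute_cb=execute_cb,
                                          auto_start=False)
    server.start()
    rospy.spin()
```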
Figure A14. State diagrams of the action client.
Figure A14 shows the diagram of the action client states, in which all states are intermediate except the last one (Stop), which defines the end of the process. Two intermediate states, Sending goal and Cancelling, receive no information from the action server and essentially represent the execution of a single instruction with a rather limited duration. The transition from one state of the action client to another depends on the information received from the action server and on the state of the action server itself, which is indicated next to the arrows connecting the client states.
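As a complement, the short sketch below shows how the client can translate the server-reported status, which corresponds to the labels next to the arrows in Figure A14, into the terminal state of the goal; it assumes the same SimpleActionClient used in the previous sketch.

```python
# Sketch: naming the terminal state of a goal from the status reported by the
# action server (assumes the SimpleActionClient of the previous sketch).
import rospy
from actionlib_msgs.msg import GoalStatus

STATE_NAMES = {GoalStatus.SUCCEEDED: "Succeeded",
               GoalStatus.PREEMPTED: "Preempted",
               GoalStatus.ABORTED: "Aborted",
               GoalStatus.REJECTED: "Rejected",
               GoalStatus.RECALLED: "Recalled"}


def wait_and_report(client, timeout_s=30.0):
    """Block until the goal terminates, then name the terminal state."""
    finished = client.wait_for_result(rospy.Duration(timeout_s))
    if not finished:
        return "goal still Pending/Active on the server"
    return STATE_NAMES.get(client.get_state(), "other state")
```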

References

  1. Colgate, J.E.; Wannasuphoprasit, W.; Peshkin, M.A. Cobots: Robots for collaboration with human operators. In Proceedings of the American Society of Mechanical Engineers, Dynamic Systems and Control Division (Publication) DSC, Atlanta, GA, USA, 17–22 November 1996; pp. 433–439. [Google Scholar]
  2. Paula, G. Cobots for the assembly line. Mech. Eng. 1997, 119, 82–84. [Google Scholar]
  3. Wannasuphoprasit, W.; Akella, P.; Peshkin, M.; Colgate, J.E. Cobots: A novel material handling technology. In Proceedings of the American Society of Mechanical Engineers (Paper), Anaheim, CA, USA, 15–20 November 1998; pp. 1–7. [Google Scholar]
  4. Akella, P.; Peshkin, M.; Colgate, E.; Wannasuphoprasit, W.; Nagesh, N.; Wells, J.; Holland, S.; Pearson, T.; Peacock, B. Cobots for the automobile assembly line. Proc.-IEEE Int. Conf. Robot. Autom. 1999, 1, 728–733. [Google Scholar]
  5. Peshkin, M.; Colgate, J.E. Cobots. Ind. Robot 1999, 26, 335–341. [Google Scholar] [CrossRef]
  6. Peshkin, M.A.; Edward Colgate, J.; Wannasuphoprasit, W.; Moore, C.A.; Brent Gillespie, R.; Akella, P. Cobot architecture. IEEE Trans. Robot. Autom. 2001, 17, 377–390. [Google Scholar] [CrossRef] [Green Version]
  7. Šurdilović, D.; Bernhardt, R.; Zhang, L. New intelligent power-assist systems based on differential transmission. Robotica 2003, 21, 295–302. [Google Scholar] [CrossRef]
  8. Surdilovic, D.; Simon, H. Singularity avoidance and control of new cobotic systems with differential CVT. In Proceedings of the IEEE International Conference on Robotics and Automation, New Orleans, LA, USA, 26 April–01 May 2004; pp. 715–720. [Google Scholar]
  9. Bi, Z.M.; Lang, S.Y.T.; Wang, L. Improved control and simulation models of a tricycle collaborative robot. J. Intell. Manuf. 2008, 19, 715–722. [Google Scholar] [CrossRef]
  10. Rastegarpanah, A.; Saadat, M.; Borboni, A. Parallel Robot for Lower Limb Rehabilitation Exercises. Appl. Bionics Biomech. 2016, 2016, 8584735. [Google Scholar] [CrossRef] [Green Version]
  11. Mehrotra, Y.; Yadav, S. Coupled Bi-Orientation Octet Pattern for Medical Image Retrieval. In Proceedings of the 2020 IEEE 15th International Conference on Industrial and Information Systems, ICIIS 2020–Proceedings, Rupnagar, India, 26–28 November 2020; pp. 472–477. [Google Scholar]
  12. Aggogeri, F.; Borboni, A.; Pellegrini, N.; Adamini, R. Design and development of a mechanism for lower limb movement. Int. J. Mech. Eng. Robot. Res. 2019, 8, 911–920. [Google Scholar] [CrossRef]
  13. Riwan, A.; Giudicelli, B.; Taha, F.; Lazennec, J.Y.; Sabhani, A.; Kilian, P.; Jabbour, Z.; VanRhijn, J.; Louveau, F.; Morel, G.; et al. Surgicobot project: Robotic assistant for spine surgery. IRBM 2011, 32, 130–134. [Google Scholar] [CrossRef]
  14. Amici, C.; Borboni, A.; Faglia, R.; Fausti, D.; Magnani, P.L. A parallel compliant meso-manipulator for finger rehabilitation treatments: Kinematic and dynamic analysis. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, Nice, France, 22–26 September 2008; pp. 735–740. [Google Scholar]
  15. Boy, E.S.; Burdet, E.; Teo, C.L.; Colgate, J.E. Motion guidance experiments with Scooter Cobot. In Proceedings of the 11th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, HAPTICS 2003, Los Angeles, CA, USA, 22–23 March 2003; pp. 63–69. [Google Scholar]
  16. Trochimczuk, R.; Łukaszewicz, A.; Mikołajczyk, T.; Aggogeri, F.; Borboni, A. Finite element method stiffness analysis of a novel telemanipulator for minimally invasive surgery. Simulation 2019, 95, 1015–1025. [Google Scholar] [CrossRef] [Green Version]
  17. Rossi, F.; Pini, F.; Carlesimo, A.; Dalpadulo, E.; Blumetti, F.; Gherardini, F.; Leali, F. Effective integration of Cobots and additive manufacturing for reconfigurable assembly solutions of biomedical products. Int. J. Interact. Des. Manuf. 2020, 14, 1085–1089. [Google Scholar] [CrossRef]
  18. Villafañe, J.H.; Taveggia, G.; Galeri, S.; Bissolotti, L.; Mullè, C.; Imperio, G.; Valdes, K.; Borboni, A.; Negrini, S. Efficacy of Short-Term Robot-Assisted Rehabilitation in Patients With Hand Paralysis After Stroke: A Randomized Clinical Trial. Hand 2018, 13, 95–102. [Google Scholar] [CrossRef] [Green Version]
  19. Pascher, M.; Kronhardt, K.; Franzen, T.; Gruenefeld, U.; Schneegass, S.; Gerken, J. My Caregiver the Cobot: Comparing Visualization Techniques to Effectively Communicate Cobot Perception to People with Physical Impairments. Sensors 2022, 22, 755. [Google Scholar] [CrossRef]
  20. Rosenthal, S.; Biswas, J.; Veloso, M. An effective personal mobile robot agent through symbiotic human-robot interaction. In Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS, Toronto, ON, Canada, 10–14 May 2010; pp. 915–922. [Google Scholar]
  21. Rosenthal, S.; Veloso, M. Mixed-initiative long-term interactions with an all-day-companion robot. In Proceedings of the AAAI Fall Symposium–Technical Report, Arlington, VA, USA, 11–13 November 2010; pp. 97–102. [Google Scholar]
  22. Vaz, M.; Ventura, R. Real-Time Ground-Plane Based Mobile Localization Using Depth Camera in Real Scenarios. J. Intell. Robot. Syst. Theory Appl. 2015, 80, 525–536. [Google Scholar] [CrossRef]
  23. Hamed, O.; Hamlich, M.; Ennaji, M. Hunting strategy for multi-robot based on wolf swarm algorithm and artificial potential field. Indones. J. Electr. Eng. Comput. Sci. 2022, 25, 159–171. [Google Scholar] [CrossRef]
  24. Krishna Kumar, K.; Karthikeyan, A.; Elango, M. Selection of a Best Humanoid Robot Using “TOPSIS” for Rescue Operation. Lect. Notes Mech. Eng. 2022, 943–953. [Google Scholar] [CrossRef]
  25. Nikitha, M.A.; Sai Swetha, B.S.; Mantripragada, K.H.; Jayapandian, N. The Future Warfare with Multidomain Applications of Artificial Intelligence: Research Perspective. Lect. Notes Netw. Syst. 2022, 351, 329–341. [Google Scholar] [CrossRef]
  26. Vardhini, P.A.H.; Babu, K.M.C. IoT based Autonomous Robot Design Implementation for Military Applications. In Proceedings of the 2022 IEEE Delhi Section Conference, DELCON, New Delhi, India, 11–13 February 2022. [Google Scholar]
  27. Bishop, R.H. The Mechatronics Handbook; CRC Press: Boca Raton, FL, USA, 2002; pp. 1–1251. [Google Scholar]
  28. Aggogeri, F.; Borboni, A.; Merlo, A.; Pellegrini, N.; Ricatto, R. Real-time performance of mechatronic PZT module using active vibration feedback control. Sensors 2016, 16, 1577. [Google Scholar] [CrossRef] [Green Version]
  29. Tomizuka, M. Mechatronics: From the 20th to 21st century. Control Eng. Pract. 2002, 10, 877–886. [Google Scholar] [CrossRef]
  30. Borboni, A.; Carbone, G.; Pellegrini, N. Reference Frame Identification and Distributed Control Strategies in Human-Robot Collaboration. In Advances in Service and Industrial Robotics, Proceedings of the International Conference on Robotics in Alpe-Adria Danube Region, Kaiserslautern, Germany, 19 June 2020; Mechanisms and Machine Science; Springer: Cham, Switzerland, 2020; pp. 93–102. [Google Scholar]
  31. Rubagotti, M.; Tusseyeva, I.; Baltabayeva, S.; Summers, D.; Sandygulova, A. Perceived safety in physical human–robot interaction—A survey. Robot. Auton. Syst. 2022, 151, 104047. [Google Scholar] [CrossRef]
  32. Pagani, R.; Nuzzi, C.; Ghidelli, M.; Borboni, A.; Lancini, M.; Legnani, G. Cobot user frame calibration: Evaluation and comparison between positioning repeatability performances achieved by traditional and vision-based methods. Robotics 2021, 10, 45. [Google Scholar] [CrossRef]
  33. AlAttar, A.; Rouillard, L.; Kormushev, P. Autonomous air-hockey playing cobot using optimal control and vision-based bayesian tracking. In Towards Autonomous Robotic Systems, Proceedings of the Annual Conference Towards Autonomous Robotic Systems, London, UK, 3–5 July 2019; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2019; Volume 11650, pp. 358–369. [Google Scholar] [CrossRef] [Green Version]
  34. Borboni, A.; Marinoni, P.; Nuzzi, C.; Faglia, R.; Pagani, R.; Panada, S. Towards safe collaborative interaction empowered by face recognition. In Proceedings of the 2021 24th International Conference on Mechatronics Technology, ICMT 2021, Singapore, 18–22 December 2021. [Google Scholar]
  35. Boucher, J.D.; Pattacini, U.; Lelong, A.; Bailly, G.; Elisei, F.; Fagel, S.; Dominey, P.F.; Ventre-Dominey, J. I reach faster when i see you look: Gaze effects in human-human and human-robot face-to-face cooperation. Front. Neurorobotics 2012, 6, 3. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Brèthes, L.; Menezes, P.; Lerasle, F.; Hayet, J. Face tracking and hand gesture recognition for human-robot interaction. In Proceedings of the IEEE International Conference on Robotics and Automation, New Orleans, LA, USA, 26 April–01 May 2004; pp. 1901–1906. [Google Scholar]
  37. Wilhelm, T.; Böhme, H.J.; Gross, H.M. A multi-modal system for tracking and analyzing faces on a mobile robot. Robot. Auton. Syst. 2004, 48, 31–40. [Google Scholar] [CrossRef]
  38. Nuzzi, C.; Pasinetti, S.; Pagani, R.; Docchio, F.; Sansoni, G. Hand Gesture Recognition for Collaborative Workstations: A Smart Command System Prototype. In New Trends in Image Analysis and Processing–ICIAP 2019, Proceedings of the International Conference on Image Analysis and Processing, Trento, Italy, 9–13 September 2019; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2019; Volume 11808, pp. 332–342. [Google Scholar] [CrossRef]
  39. Calinon, S.; D’Halluin, F.; Sauser, E.L.; Caldwell, D.G.; Billard, A.G. Learning and reproduction of gestures by imitation. IEEE Robot. Autom. Mag. 2010, 17, 44–54. [Google Scholar] [CrossRef] [Green Version]
  40. Nagi, J.; Ducatelle, F.; Di Caro, G.A.; Cireşan, D.; Meier, U.; Giusti, A.; Nagi, F.; Schmidhuber, J.; Gambardella, L.M. Max-pooling convolutional neural networks for vision-based hand gesture recognition. In Proceedings of the 2011 IEEE International Conference on Signal and Image Processing Applications, ICSIPA 2011, Kuala Lumpur, Malaysia, 16–18 November 2011; pp. 342–347. [Google Scholar]
  41. Lauria, S.; Bugmann, G.; Kyriacou, T.; Bos, J.; Klein, E. Training personal robots using natural language instruction. IEEE Intell. Syst. Appl. 2001, 16, 38–45. [Google Scholar] [CrossRef]
  42. Adamini, R.; Antonini, N.; Borboni, A.; Medici, S.; Nuzzi, C.; Pagani, R.; Pezzaioli, A.; Tonola, C. User-friendly human-robot interaction based on voice commands and visual systems. In Proceedings of the 2021 24th International Conference on Mechatronics Technology, ICMT 2021, Singapore, 18–22 December 2021. [Google Scholar]
  43. Scheutz, M.; Schermerhorn, P.; Kramer, J.; Middendorff, C. The utility of affect expression in natural language interactions in joint human-robot tasks. In Proceedings of the HRI 2006: ACM Conference on Human-Robot Interaction, Salt Lake City, UT, USA, 2–3 May 2006; pp. 226–233. [Google Scholar]
  44. Thomason, J.; Zhang, S.; Mooney, R.; Stone, P. Learning to interpret natural language commands through human-robot dialog. In Proceedings of the IJCAI International Joint Conference on Artificial Intelligence, Buenos Aires, Argentina, 25–31 July 2015; pp. 1923–1929. [Google Scholar]
  45. Preston, S.D.; de Waal, F.B.M. Empathy: Its ultimate and proximate bases. Behav. Brain Sci. 2002, 25, 1–20. [Google Scholar] [CrossRef] [Green Version]
  46. Ekman, P. An Argument for Basic Emotions. Cogn. Emot. 1992, 6, 169–200. [Google Scholar] [CrossRef]
  47. Jain, N.; Kumar, S.; Kumar, A.; Shamsolmoali, P.; Zareapoor, M. Hybrid deep neural networks for face emotion recognition. Pattern Recognit. Lett. 2018, 115, 101–106. [Google Scholar] [CrossRef]
  48. Maglogiannis, I.; Vouyioukas, D.; Aggelopoulos, C. Face detection and recognition of natural human emotion using Markov random fields. Pers. Ubiquitous Comput. 2009, 13, 95–101. [Google Scholar] [CrossRef]
  49. Wegrzyn, M.; Vogt, M.; Kireclioglu, B.; Schneider, J.; Kissler, J. Mapping the emotional face. How individual face parts contribute to successful emotion recognition. PLoS ONE 2017, 12, e0177239. [Google Scholar] [CrossRef] [Green Version]
  50. Zhang, H.; Jolfaei, A.; Alazab, M. A Face Emotion Recognition Method Using Convolutional Neural Network and Image Edge Computing. IEEE Access 2019, 7, 159081–159089. [Google Scholar] [CrossRef]
  51. Gunes, H.; Piccardi, M. Fusing face and body gesture for machine recognition of emotions. In Proceedings of the IEEE International Workshop on Robot and Human Interactive Communication, Nashville, TN, USA, 13–15 August 2005; pp. 306–311. [Google Scholar]
  52. Gunes, H.; Piccardi, M. Bi-modal emotion recognition from expressive face and body gestures. J. Netw. Comput. Appl. 2007, 30, 1334–1345. [Google Scholar] [CrossRef] [Green Version]
  53. Wang, K.; Meng, D.; Zeng, X.; Zhang, K.; Qiao, Y.; Yang, J.; Peng, X. Cascade attention networks for group emotion recognition with face, body and image cues. In Proceedings of the ICMI 2018 International Conference on Multimodal Interaction, New York, NY, USA, 16–20 October 2018; pp. 640–645. [Google Scholar]
  54. Castellano, G.; Kessous, L.; Caridakis, G. Emotion recognition through multiple modalities: Face, body gesture, speech. In Affect and Emotion in Human-Computer Interaction; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2008; Volume 4868, pp. 92–103. [Google Scholar] [CrossRef]
  55. Cohn, J.F.; Katz, G.S. Bimodal expression of emotion by face and voice. In Proceedings of the 6th ACM International Conference on Multimedia: Face/Gesture Recognition and their Applications, MULTIMEDIA 1998, Bristol, UK, 13–16 September 1998; pp. 41–44. [Google Scholar]
  56. Mansoorizadeh, M.; Charkari, N.M. Multimodal information fusion application to human emotion recognition from face and speech. Multimed. Tools Appl. 2010, 49, 277–297. [Google Scholar] [CrossRef]
  57. Metallinou, A.; Lee, S.; Narayanan, S. Audio-visual Emotion recognition using Gaussian Mixture Models for face and voice. In Proceedings of the 10th IEEE International Symposium on Multimedia, ISM 2008, Berkeley, CA, USA, 15–17 December 2008; pp. 250–257. [Google Scholar]
  58. Bechara, A.; Damasio, H.; Damasio, A.R. Role of the amygdala in decision-making. Ann. N. Y. Acad. Sci. 2003, 985, 356–369. [Google Scholar] [CrossRef]
  59. Shaver, P.; Schwartz, J.; Kirson, D.; O’Connor, C. Emotion Knowledge: Further Exploration of a Prototype Approach. J. Personal. Soc. Psychol. 1987, 52, 1061–1086. [Google Scholar] [CrossRef]
  60. Becker-Asano, C.; Wachsmuth, I. Affective computing with primary and secondary emotions in a virtual human. Auton. Agents Multi-Agent Syst. 2010, 20, 32–49. [Google Scholar] [CrossRef] [Green Version]
  61. Phan, K.L.; Wager, T.; Taylor, S.F.; Liberzon, I. Functional neuroanatomy of emotion: A meta-analysis of emotion activation studies in PET and fMRI. NeuroImage 2002, 16, 331–348. [Google Scholar] [CrossRef] [Green Version]
  62. Kang, S.J.; Kim, H.S.; Baek, K.H. Effects of Nature-Based Group Art Therapy Programs on Stress, Self-Esteem and Changes in Electroencephalogram (EEG) in Non-Disabled Siblings of Children with Disabilities. Int. J. Environ. Res. Public Health 2021, 18, 5912. [Google Scholar] [CrossRef]
  63. Balconi, M.; Fronda, G. The Use of Hyperscanning to Investigate the Role of Social, Affective, and Informative Gestures in Non-Verbal Communication. Electrophysiological (EEG) and Inter-Brain Connectivity Evidence. Brain Sci. 2020, 10, 29. [Google Scholar] [CrossRef] [Green Version]
  64. Sheikh, H.; McFarland, D.J.; Sarnacki, W.A.; Wolpaw, J.R. Electroencephalographic(EEG)-based communication: EEG control versus system performance in humans. Neurosci. Lett. 2003, 345, 89–92. [Google Scholar] [CrossRef]
  65. Krishna, G.; Tran, C.; Han, Y.; Carnahan, M.; Tewfik, A.H. Speech Synthesis Using EEG. In Proceedings of the 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, Barcelona, Spain, 4–8 May 2020; pp. 1235–1238. [Google Scholar]
  66. Formica, C.; De Salvo, S.; Micchia, K.; La Foresta, F.; Dattola, S.; Mammone, N.; Corallo, F.; Ciavola, A.; Arcadi, F.A.; Marino, S.; et al. Cortical Reorganization after Rehabilitation in a Patient with Conduction Aphasia Using High-Density EEG. Appl. Sci. 2020, 10, 5281. [Google Scholar] [CrossRef]
  67. Al-Hudhud, G.; Alqahtani, L.; Albaity, H.; Alsaeed, D.; Al-Turaiki, I. Analyzing Passive BCI Signals to Control Adaptive Automation Devices. Sensors 2019, 19, 3042. [Google Scholar] [CrossRef] [Green Version]
  68. Palumbo, A.; Gramigna, V.; Calabrese, B.; Ielpo, N. Motor-Imagery EEG-Based BCIs in Wheelchair Movement and Control: A Systematic Literature Review. Sensors 2021, 21, 6285. [Google Scholar] [CrossRef] [PubMed]
  69. Xu, B.G.; Li, W.L.; Liu, D.P.; Zhang, K.; Miao, M.M.; Xu, G.Z.; Song, A.G. Continuous Hybrid BCI Control for Robotic Arm Using Noninvasive Electroencephalogram, Computer Vision, and Eye Tracking. Mathematics 2022, 10, 618. [Google Scholar] [CrossRef]
  70. Barria, P.; Pino, A.; Tovar, N.; Gomez-Vargas, D.; Baleta, K.; Diaz, C.A.R.; Munera, M.; Cifuentes, C.A. BCI-Based Control for Ankle Exoskeleton T-FLEX: Comparison of Visual and Haptic Stimuli with Stroke Survivors. Sensors 2021, 21, 6431. [Google Scholar] [CrossRef]
  71. Amici, C.; Borboni, A.; Tuveggia, G.; Legnani, G. Bioelectric prostheses: Review of classifications and control strategies. G. Ital. Med. Del Lav. Ergon. 2015, 37, 39–44. [Google Scholar]
  72. Lee, J.; Mukae, N.; Arata, J.; Iihara, K.; Hashizume, M. Comparison of Feature Vector Compositions to Enhance the Performance of NIRS-BCI-Triggered Robotic Hand Orthosis for Post-Stroke Motor Recovery. Appl. Sci. 2019, 9, 3845. [Google Scholar] [CrossRef] [Green Version]
  73. Tran, Y.; Austin, P.; Lo, C.; Craig, A.; Middleton, J.W.; Wrigley, P.J.; Siddall, P. An Exploratory EEG Analysis on the Effects of Virtual Reality in People with Neuropathic Pain Following Spinal Cord Injury. Sensors 2022, 22, 2629. [Google Scholar] [CrossRef]
  74. Gannouni, S.; Belwafi, K.; Aboalsamh, H.; AlSamhan, Z.; Alebdi, B.; Almassad, Y.; Alobaedallah, H. EEG-Based BCI System to Detect Fingers Movements. Brain Sci. 2020, 10, 965. [Google Scholar] [CrossRef]
  75. Sanchez-Cuesta, F.J.; Arroyo-Ferrer, A.; Gonzalez-Zamorano, Y.; Vourvopoulos, A.; Badia, S.B.I.; Figuereido, P.; Serrano, J.I.; Romero, J.P. Clinical Effects of Immersive Multimodal BCI-VR Training after Bilateral Neuromodulation with rTMS on Upper Limb Motor Recovery after Stroke. A Study Protocol for a Randomized Controlled Trial. Medicina 2021, 57, 736. [Google Scholar] [CrossRef]
  76. Anwar, S.M.; Saeed, S.M.U.; Majid, M.; Usman, S.; Mehmood, C.A.; Liu, W. A Game Player Expertise Level Classification System Using Electroencephalography (EEG). Appl. Sci. 2017, 8, 18. [Google Scholar] [CrossRef] [Green Version]
  77. Al-Nafjan, A.; Aldayel, M. Predict Students’ Attention in Online Learning Using EEG Data. Sustainability 2022, 14, 6553. [Google Scholar] [CrossRef]
  78. Yang, Y.Z.; Du, Z.G.; Jiao, F.T.; Pan, F.Q. Analysis of EEG Characteristics of Drivers and Driving Safety in Undersea Tunnel. Int. J. Environ. Res. Public Health 2021, 18, 9810. [Google Scholar] [CrossRef] [PubMed]
  79. Zhang, Y.M.; Zhang, M.Y.; Fang, Q. Scoping Review of EEG Studies in Construction Safety. Int. J. Environ. Res. Public Health 2019, 16, 4146. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  80. Zhang, Z.T.; Luo, D.Y.; Rasim, Y.; Li, Y.J.; Meng, G.J.; Xu, J.; Wang, C.B. A Vehicle Active Safety Model: Vehicle Speed Control Based on Driver Vigilance Detection Using Wearable EEG and Sparse Representation. Sensors 2016, 16, 242. [Google Scholar] [CrossRef] [PubMed]
  81. Kim, M.; Kim, M.K.; Hwang, M.; Kim, H.Y.; Cho, J.; Kim, S.P. Online Home Appliance Control Using EEG-Based Brain-Computer Interfaces. Electronics 2019, 8, 1101. [Google Scholar] [CrossRef] [Green Version]
  82. Hong, Y.G.; Kim, H.K.; Son, Y.D.; Kang, C.K. Identification of Breathing Patterns through EEG Signal Analysis Using Machine Learning. Brain Sci. 2021, 11, 293. [Google Scholar] [CrossRef]
  83. Chen, Y.J.; Chen, S.C.; Zaeni, I.A.E.; Wu, C.M. Fuzzy Tracking and Control Algorithm for an SSVEP-Based BCI System. Appl. Sci. 2016, 6, 270. [Google Scholar] [CrossRef] [Green Version]
  84. Korovesis, N.; Kandris, D.; Koulouras, G.; Alexandridis, A. Robot Motion Control via an EEG-Based Brain-Computer Interface by Using Neural Networks and Alpha Brainwaves. Electronics 2019, 8, 1387. [Google Scholar] [CrossRef] [Green Version]
  85. Martinez-Tejada, L.A.; Puertas-Gonzalez, A.; Yoshimura, N.; Koike, Y. Exploring EEG Characteristics to Identify Emotional Reactions under Videogame Scenarios. Brain Sci. 2021, 11, 378. [Google Scholar] [CrossRef]
  86. American Electroencephalographic Society Guidelines for Standard Electrode Position Nomenclature. J. Clin. Neurophysiol. 1991, 8, 200–202. [CrossRef]
  87. Al-Quraishi, M.S.; Elamvazuthi, I.; Daud, S.A.; Parasuraman, S.; Borboni, A. Eeg-based control for upper and lower limb exoskeletons and prostheses: A systematic review. Sensors 2018, 18, 3342. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  88. van Albada, S.J.; Robinson, P.A. Relationships between electroencephalographic spectral peaks across frequency bands. Front. Hum. Neurosci. 2013, 7, 56. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  89. Coan, J.A.; Allen, J.J.B. Frontal EEG asymmetry as a moderator and mediator of emotion. Biol. Psychol. 2004, 67, 7–50. [Google Scholar] [CrossRef]
  90. Jenke, R.; Peer, A.; Buss, M. Feature extraction and selection for emotion recognition from EEG. IEEE Trans. Affect. Comput. 2014, 5, 327–339. [Google Scholar] [CrossRef]
  91. Lin, Y.P.; Wang, C.H.; Jung, T.P.; Wu, T.L.; Jeng, S.K.; Duann, J.R.; Chen, J.H. EEG-based emotion recognition in music listening. IEEE Trans. Biomed. Eng. 2010, 57, 1798–1806. [Google Scholar] [CrossRef]
  92. Petrantonakis, P.C.; Hadjileontiadis, L.J. Emotion recognition from EEG using higher order crossings. IEEE Trans. Inf. Technol. Biomed. 2010, 14, 186–197. [Google Scholar] [CrossRef]
  93. Zheng, W.L.; Lu, B.L. Investigating Critical Frequency Bands and Channels for EEG-Based Emotion Recognition with Deep Neural Networks. IEEE Trans. Auton. Ment. Dev. 2015, 7, 162–175. [Google Scholar] [CrossRef]
  94. Krishnan, N.M.; Mariappan, M.; Muthukaruppan, K.; Hijazi, M.H.A.; Kitt, W.W. Electroencephalography (EEG) based control in assistive mobile robots: A review. IOP Conf. Ser. Mater. Sci. Eng. 2016, 121, 012017. [Google Scholar] [CrossRef]
  95. Li, P.; Meziane, R.; Otis, M.J.D.; Ezzaidi, H.; Cardou, P. A smart safety helmet using IMU and EEG sensors for worker fatigue detection. In Proceedings of the ROSE 2014 IEEE International Symposium on RObotic and SEnsors Environments, Timisoara, Romania, 16–18 October 2014; pp. 55–60. [Google Scholar]
  96. Fu, Y.; Xiong, X.; Jiang, C.; Xu, B.; Li, Y.; Li, H. Imagined Hand Clenching Force and Speed Modulate Brain Activity and Are Classified by NIRS Combined with EEG. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 1641–1652. [Google Scholar] [CrossRef]
  97. Eyam, A.T.; Mohammed, W.M.; Martinez Lastra, J.L. Emotion-driven analysis and control of human-robot interactions in collaborative applications. Sensors 2021, 21, 4626. [Google Scholar] [CrossRef] [PubMed]
  98. Arpaia, P.; Moccaldi, N.; Prevete, R.; Sannino, I.; Tedesco, A. A Wearable EEG Instrument for Real-Time Frontal Asymmetry Monitoring in Worker Stress Analysis. IEEE Trans. Instrum. Meas. 2020, 69, 8335–8343. [Google Scholar] [CrossRef]
  99. Martinez-Peon, D.; Parra-Vega, V.; Sanchez-Orta, A. EEG-motor sequencing signals for online command of dynamic robots. In Proceedings of the 3rd International Winter Conference on Brain-Computer Interface, BCI 2015, Gangwon, Korea, 12–14 January 2015. [Google Scholar]
  100. Alvarez, R.P.; Chen, G.; Bodurka, J.; Kaplan, R.; Grillon, C. Phasic and sustained fear in humans elicits distinct patterns of brain activity. NeuroImage 2011, 55, 389–400. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  101. Isbell, L.A. Snakes as agents of evolutionary change in primate brains. J. Hum. Evol. 2006, 51, 1–35. [Google Scholar] [CrossRef] [PubMed]
  102. Olsson, A.; Nearing, K.I.; Phelps, E.A. Learning fears by observing others: The neural systems of social fear transmission. Soc. Cogn. Affect. Neurosci. 2007, 2, 3–11. [Google Scholar] [CrossRef] [Green Version]
  103. Cusano, N. Cobot and sobot: For a new ontology of collaborative and social robot. Found. Sci. 2022, 27, 1–20. [Google Scholar]
  104. Durka, P.J.; Kus, R.; Zygierewicz, J.; Michalska, M.; Milanowski, P.; Labecki, M.; Spustek, T.; Laszuk, D.; Duszyk, A.; Kruszynski, M. User-centered design of brain-computer interfaces: OpenBCI.pl and BCI Appliance. Bull. Pol. Acad. Sci. Tech. Sci. 2012, 60, 427–431. [Google Scholar] [CrossRef] [Green Version]
  105. Götz, T.; Janik, V.M. Repeated elicitation of the acoustic startle reflex leads to sensitisation in subsequent avoidance behaviour and induces fear conditioning. BMC Neurosci. 2011, 12, 30. [Google Scholar] [CrossRef] [Green Version]
  106. Al-Nafjan, A.; Hosny, M.; Al-Ohali, Y.; Al-Wabil, A. Review and classification of emotion recognition based on EEG brain-computer interface system research: A systematic review. Appl. Sci. 2017, 7, 1239. [Google Scholar] [CrossRef] [Green Version]
  107. Houssein, E.H.; Hammad, A.; Ali, A.A. Human emotion recognition from EEG-based brain–computer interface using machine learning: A comprehensive review. Neural Comput. Appl. 2022, 34, 12527–12557. [Google Scholar] [CrossRef]
  108. Pham, T.D.; Tran, D.; Ma, W.; Tran, N.T. Enhancing performance of EEG-based emotion recognition systems using feature smoothing. In Neural Information Processing, ICONIP 2015, Proceedings of the International Conference on Neural Information Processing, New Delhi, India, 22–26 November 2022; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2015; Volume 9492, pp. 95–102. [Google Scholar] [CrossRef]
  109. Alarcão, S.M.; Fonseca, M.J. Emotions recognition using EEG signals: A survey. IEEE Trans. Affect. Comput. 2019, 10, 374–393. [Google Scholar] [CrossRef]
  110. Bălan, O.; Moise, G.; Moldoveanu, A.; Leordeanu, M.; Moldoveanu, F. Fear level classification based on emotional dimensions and machine learning techniques. Sensors 2019, 19, 1738. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  111. Kometer, H.; Luedtke, S.; Stanuch, K.; Walczuk, S.; Wettstein, J. The Effects Virtual Reality Has on Physiological Responses as Compared to Two-Dimensional Video; University of Wisconsin School of Medicine and Public Health: Madison, WI, USA, 2010. [Google Scholar]
  112. Wallstrom, G.L.; Kass, R.E.; Miller, A.; Cohn, J.F.; Fox, N.A. Automatic correction of ocular artifacts in the EEG: A comparison of regression-based and component-based methods. Int. J. Psychophysiol. 2004, 53, 105–119. [Google Scholar] [CrossRef] [PubMed]
  113. Østergaard, E.H.; Lund, H.H. Evolving control for modular robotic units. In Proceedings of the IEEE International Symposium on Computational Intelligence in Robotics and Automation, CIRA, Kobe, Japan, 16–20 July 2003; pp. 886–892. [Google Scholar]
  114. Shackman, A.J.; Fox, A.S. Contributions of the central extended amygdala to fear and anxiety. J. Neurosci. 2016, 36, 8050–8063. [Google Scholar] [CrossRef]
  115. Murphy, J. Hardware Design Files. 2022. Available online: https://github.com/OpenBCI (accessed on 27 June 2022).
  116. Texas Instruments. ADS1299 Low-Noise, 8-Channel, 24-Bit Analog-to-Digital Converter for Biopotential Measurements. 2022. Available online: https://www.ti.com/product/ADS1299 (accessed on 28 June 2022).
Figure 1. EEG sensor system, where: e indicates electrodes attached to the band, r represents two ear electrodes, b is the acquisition board, d is the USB dongle.
Figure 2. Cobot testbench.
Figure 3. Identification of the code and position of electrodes (red circles with red arrows) on the subject.
Figure 4. Application of the first protocol. 100 subjects were screened, 5 were not eligible, 2 were discarded after the experiment.
Figure 5. Comparison between peak (left) and rest (right) distributions with Mann–Whitney U test.
Figure 6. Comparison between relative peak (left) and rest (right) distributions with Mann–Whitney U test.
Figure 7. Example of raw data from the three EEG selected channels for a single subject.
Figure 8. Example of raw data from three EEG selected channels for a single subject in the frequency domain.
Figure 9. Example of band-pass filtered data from the three EEG-selected channels for a single subject.
Figure 10. Example of mean value from the eight EEG-selected channels for a single subject, used to identify the signal peak.
Figure 11. Application of the second protocol.
Figure 12. Comparison between Rest distributions in the experimental (left) and control (right) groups with Mann–Whitney U test.
Figure 13. Comparison between rest (left) and peak (right) distributions in the experimental group with Mann–Whitney U test.
Figure 14. Comparison between rest (left) and relative peak (right) distributions in the experimental group with Mann–Whitney U test.
Figure 15. Comparison of the probability distributions of the level of fear declared in the questionnaire in the experimental group (a) and the control group (b).
Table 1. Sawyer robot structure described with DH parameters a, d, α, and θ, sequentially from the base joint to the gripper joint.

α       θ     a            d
−π/2    θ1    8.1 × 10−2   0
π/2     θ2    0            1.91 × 10−1
−π/2    θ3    0            3.99 × 10−1
π/2     θ4    0            −1.683 × 10−1
−π/2    θ5    0            3.965 × 10−1
π/2     θ6    0            1.360 × 10−1
0       θ7    0            1.785 × 10−1
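As a worked example, the rows of Table 1 can be composed into a base-to-gripper homogeneous transform. The sketch below assumes the classic Denavit–Hartenberg convention and that a and d are expressed in meters; both points are assumptions, since the paper does not state the convention or the units explicitly.

```python
# Sketch: forward kinematics from the DH rows of Table 1, assuming the classic
# Denavit-Hartenberg convention and lengths in meters.
import numpy as np

# Each row of Table 1 as (alpha, a, d); theta_1..theta_7 are the joint variables.
DH_ROWS = [(-np.pi / 2, 8.1e-2,  0.0),
           ( np.pi / 2, 0.0,     1.91e-1),
           (-np.pi / 2, 0.0,     3.99e-1),
           ( np.pi / 2, 0.0,    -1.683e-1),
           (-np.pi / 2, 0.0,     3.965e-1),
           ( np.pi / 2, 0.0,     1.360e-1),
           ( 0.0,       0.0,     1.785e-1)]


def dh_transform(alpha, a, d, theta):
    """Homogeneous transform from link i-1 to link i (classic DH)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])


def forward_kinematics(joint_angles):
    """Base-to-gripper transform for seven joint angles in radians."""
    T = np.eye(4)
    for (alpha, a, d), theta in zip(DH_ROWS, joint_angles):
        T = T @ dh_transform(alpha, a, d, theta)
    return T


if __name__ == "__main__":
    print(np.round(forward_kinematics([0.0] * 7), 4))  # illustrative configuration
```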
Table 2. Placement of electrodes on the subject according to Figure 3 and respective connections to the data acquisition board.

Electrode Code   Electrode Type   Board Pin   GUI Channel
FP1              Flat             N1P         1
FP2              Flat             N2P         2
FPZ              Flat             N3P         3
TP7              Spikey           N4P         4
TP8              Spikey           N5P         5
P7               Spikey           N6P         6
P8               Spikey           N7P         7
OZ               Spikey           N8P         8
A1               Ear clip         SRB         -
A2               Ear clip         BIAS        -
Table 3. Comparison between rest, peak, and peak to rest ratio distributions of the EEG signal; where m is the minimal observed value, M is the maximum observed value, µ is the mean value, σ is the standard deviation, p is the statistical significance for the comparison with the rest distribution.

Signal       m        M            µ            σ            p 1
rest         0.0043   7.9100       1.1796       1.6861       -
peak         53.58    5004.4       1386.95      1362.19      <10−3
peak/rest    69.06    127,785.2    11,142.21    25,189.99    <10−3

1 p measures the difference with rest distribution and is computed with Mann–Whitney U test with independent samples. m, M, µ, and σ are expressed in mV for the rest and peak distributions; whereas they are non-dimensional for the peak/rest distribution.
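For reference, the comparison reported in the last column can be reproduced in form, not in values, with SciPy's Mann–Whitney U test on two independent samples; the arrays below are placeholders and not the data measured in the study.

```python
# Sketch of the statistical comparison used in Tables 3-5: a Mann-Whitney U
# test between two independent samples (placeholder data, not the study data).
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
rest = rng.exponential(scale=1.2, size=93)     # placeholder rest-like sample
peak = rng.exponential(scale=1400.0, size=93)  # placeholder peak-like sample

stat, p_value = mannwhitneyu(peak, rest, alternative="two-sided")
print("U = %.1f, p = %.3g" % (stat, p_value))
```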
Table 4. Comparison between rest, peak, peak to rest ratio distributions in the experimental group A and the rest distribution in the control group B of the EEG signal; where m is the minimal observed value, M is the maximum observed value, µ is the mean value, σ is the standard deviation, p is the statistical significance for the comparison with the rest distribution.

Signal          m        M         µ         σ         p 1
rest—group A    0.1976   8.853     2.7174    2.3624    -
peak group A    49.03    3492.3    1578.7    961.44    <10−3
peak/rest A     33.54    7641.8    1253.0    1526.7    <10−3
rest—group B    0.5775   8.7525    4.732     2.459     0.287

1 p measures the difference with rest distribution of the experimental group A and is computed with Mann–Whitney U test with independent samples. m, M, µ, and σ are expressed in mV for the rest and peak distributions; whereas they are non-dimensional for the peak/rest distribution.
Table 5. Comparison between the measured fear of the subjects in the experimental group A and in the control group B.

Signal     m    M     µ      σ      p 1
group A    6    10    8.5    1.4    -
group B    0    1     0.6    0.5    <10−3

1 p measures the difference with the group A distribution and is computed with Mann–Whitney U test with independent samples.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
