Article

Multi-Objective Navigation Strategy for Guide Robot Based on Machine Emotion

Dan Chen and Yuncong Ge
1 School of Automation, China University of Geosciences, Wuhan 430074, China
2 Hubei Key Laboratory of Advanced Control and Intelligent Automation for Complex Systems, Wuhan 430074, China
3 Engineering Research Center of Intelligent Technology for Geo-Exploration, Ministry of Education, Wuhan 430074, China
4 School of Future Technology, China University of Geosciences, Wuhan 430074, China
* Author to whom correspondence should be addressed.
Electronics 2022, 11(16), 2482; https://doi.org/10.3390/electronics11162482
Submission received: 30 June 2022 / Revised: 24 July 2022 / Accepted: 4 August 2022 / Published: 9 August 2022
(This article belongs to the Special Issue Path Planning for Mobile Robots)

Abstract

In recent years, the rapid development of robot technology has brought more kinds of robots into daily life and into different fields of society. Service robots are mainly used to provide convenience for human beings, and guide robots are a kind of service robot that can replace manual instruction and guidance. However, most existing studies either give the guide robot a preset guidance trajectory or let the user choose the next target point, which lacks intelligence. To solve these problems, a robot navigation strategy based on machine emotion is proposed. Firstly, the machine emotion of the guide robot is established according to the user's emotional state and environmental information. Then, the machine emotion and current location information are used to estimate the user's intention, i.e., the most desired next target point. Finally, classical indoor path planning and obstacle avoidance methods are employed to calculate a passable path between the target point and the current position. Simulation results show that the proposed strategy executes different navigation behaviors according to user emotion. The strategy has also been tested on a Pepper robot and received good feedback from the subjects.

1. Introduction

With the rapid development of science and technology, people in the information age continuously enjoy the convenience brought about by scientific and technological progress [1]. More and more robot products appear in our field of vision, bringing great convenience to contemporary life [2]. In terms of application scenarios, robots can be divided into industrial robots and service robots. The International Organization for Standardization defines a service robot as a robot that performs useful tasks for humans or equipment, excluding industrial automation applications [3]. Service robots share environments with human beings to actively collaborate with them in specific daily tasks [4]. Guide robots are a kind of service robot that can welcome guests and guide tourists to designated tourist spots. The use of robot guides can help alleviate manpower shortages. Meanwhile, a good human-robot interaction system has rich means of information transmission, which can give tourists a better experience.
In an ideal intelligent interactive environment, the machine has the same ability to perceive external stimuli as a human does [5]. However, at the level of interaction, people prefer to interact with people rather than robots, because people can understand each other, while current robots are still far from humanlike. For the guide robot, it is necessary to improve the degree of intelligence in service target selection, navigation strategy and other aspects, so that tourists feel they are understood. This paper discusses how to make the guide robot choose the service target autonomously and execute the corresponding navigation strategy according to the emotional state of the service target.
To achieve the above objectives, the following challenges need to be addressed:
(1) In the navigation scene, the number of guide robots is often far smaller than the number of tourists, and only a small number of people can be selected for service. Therefore, how to make the guide robot choose the service object reasonably needs to be studied.
(2) Traditional guide robots cannot adjust their navigation strategies according to users' emotional states. For users, such a guide robot is not humanized. Therefore, how to make the robot dynamically adjust the navigation strategy according to the perceived user state and environmental information remains to be studied.
(3) To realize a humanized autonomous navigation strategy, the guide robot must understand the attributes of different tourist spots in order to estimate the most expected one. In the guided tour scenario, the decision-making indexes that may affect tourists include the content of and distance to a tour site. How to make the robot understand this information remains to be studied.
(4) To achieve target navigation guided by tourists' emotions, the guide robot should perform corresponding actions according to users' emotions on the basis of understanding the environment. Therefore, it is necessary to design a detailed and fully functional navigation strategy.
The main contributions of this paper can be summarized as follows:
(1) This paper proposes a tourist-emotion-oriented navigation strategy for guide robots. Machine emotions are established according to tourists' emotional states, and they give the robot the ability to "empathize" with tourists, so as to find the target points tourists most expect.
(2) To help guide robots judge the most expected target points, a novel guide map is proposed that includes the distance and relevance between different locations. Such a map helps the guide robot figure out which target points are relevant to the current location. Combined with machine emotion, the robot can judge whether the user is interested in the current tour content and, together with the distance information between tour points, make the most satisfying decision.
(3) In view of the situation in which the number of tourists is much larger than the number of guide robots, the group that needs a guide robot most is identified by evaluating the communication atmosphere field of different groups.
The rest of this paper is organized as follows. In Section 2, a brief overview of relevant literature is given. In Section 3, the method of determining the service object by atmosphere field recognition is introduced, and a multi-objective navigation strategy based on machine emotion is proposed. The experimental results are shown in Section 4. In Section 5, summaries and future works are given.

2. Related Works

2.1. Artificial Emotional Model

Constructing machine emotion is one kind of artificial emotion. The recognition and expression of artificial emotion cannot be separated from emotion modeling. In artificial emotion modeling, there are the discrete emotion model, the dimensional emotion model, the cognitive evaluation model and the machine learning emotion recognition model [6]. Wessman et al. [7] believed that emotion has a polarity dimension (positive and negative emotion) and an intensity dimension (strong and weak emotion), and proposed a two-dimensional emotion model on this basis. Russell's two-dimensional circumplex model of affect [8] is a representative emotion computing model, taking factors beyond the polarity and intensity of emotions into account in the description of emotions. Miwa et al. [9] established an emotional space composed of arousal, pleasure and certainty and applied it to the WE-3RV robot.
Today, more scholars are engaged in artificial emotion research, and artificial emotion models are improving constantly. Wu et al. [10] established a universal artificial emotion computing model in the three-dimensional emotion space of pleasure, activation and dominance (PAD), which can perform certain emotional processing of external stimuli and possesses emotional decision-making and expression abilities. Tian et al. [11] constructed a machine-personalized artificial emotion model. Chen et al. [12] designed a memory pool mechanism and an emotion random change mechanism based on emotion consistency, providing a reference scheme for artificial emotion simulation. Bi [13] regards the emotion in AI-generated literature as a kind of artificial emotion. Jiang et al. [14] studied the minimal emotional expression of intelligent terminals.
The results of the researchers mentioned above have been able to simulate the function of artificial emotion, but how to apply artificial emotion to robots and improve the process of human-robot interaction needs further research.

2.2. Robot Navigation Strategy

With the rapid development of sensor technology and the deep integration of artificial intelligence and robot technology, intelligent robot navigation has made great progress, realizing autonomous movement and dynamic obstacle avoidance in complex environments [15]. However, for service robots to truly enter people's daily life, autonomous navigation that merely avoids obstacles to reach a destination can no longer meet the needs of human-machine integration. Attention has gradually turned to human comfort, naturalness and sociality [16] in the autonomous navigation process, and to establishing intelligent navigation planning systems with social awareness.
Charalampous et al. [17] proposed introducing social mapping into map construction to represent the acquired human interaction information in the map, so as to further improve the service robot's socially aware navigation ability. Möller et al. [18] combined the four functional modules of active vision, robot navigation, human-robot interaction and human social behavior modeling to enable the service robot to better integrate into people's daily life and perform socially acceptable "correct" behaviors. Socially aware navigation in harmony with people has thus always been a hotspot in service robot research aimed at improving social acceptability [19]. Ferrer et al. [20] introduced the social force model (SFM) into navigation and proposed a socially aware robot navigation method based on it. Malviya et al. [21] considered not only the attraction and repulsion between human and robot, but also whether the distance maintained between them was balanced, and how these distances change across different human behaviors and social customs. Pérez-Hurtado et al. [22] adopted a social navigation model based on membrane computing to provide an inherently parallel computing framework that can be simulated on parallel hardware to meet the real-time requirements of robot navigation, and combined the dynamic window method with the social force model to plan paths that meet social requirements. The above methods are all based on the social force model, achieving navigation by defining or improving different "social forces". However, they regard humans as part of the environment and do not interact with specific individuals. Moreover, they only suit a fixed social environment, with poor generalization and low flexibility.
Wang et al. [23] proposed an adaptive motion control method considering speed constraints for model matching on a robot navigation framework based on the social force model and spatial relations. Reddy et al. [24] added a new social force model and selected geometric gaps according to social behaviors to ensure a comfortable distance between the robot and the crowd, proposing a hybrid algorithm that combines the social force model, a geometric method and a gap selection strategy. Kivrak et al. [25] extended the local planner based on the social force model and combined it with the A* algorithm, proposing a critical path point selection algorithm to address the tendency of the artificial potential field method to fall into local optima. Repiso et al. [26] proposed a navigation method in which robots accompany individuals or groups with human social behaviors: robots keep abreast of pedestrians or form a V formation with them, avoid static and dynamic obstacles in advance, and can dynamically change their position in the group. Kivrak et al. [27] proposed a social force model based on collision prediction and used it as a local path planner to enable robots to navigate smoothly and safely in locally unknown environments and to generate human-friendly collision-free paths in indoor environments such as corridors. Patompak et al. [28] improved the social force model, extended it to the social relationship between human and robot, and proposed a navigation method based on a social relation model (SRM). However, these methods do not consider individual human differences and cannot achieve personalized navigation for people with different preferences.
Truong et al. [29] considered robot navigation in a complex social environment, innovatively took human-object interaction into account in the navigation system, and proposed a socially responsive control scheme enabling mobile service robots to navigate safely and socially in human interactive environments. Yang et al. [30] proposed an extended social force model method for an omnidirectional (holonomic) mobile robot, using laser rangefinders and cameras as sensors to build environmental models and detect human behavior information. These methods can sense dynamic human behavioral information for socially aware navigation, but they do not consider the important factor of human emotion. To achieve a more precise and natural human-robot relationship, a deeper perception of human nature is required, namely the detection of human emotions.
Currently, roboticists create robots that look like humans to increase robot acceptance [31]. However, users are disappointed by the lack of peer-to-peer empathy from robots [32]. Decision-making models based on user emotions can give robots such capabilities, but relevant research is still lacking. In short, service robot navigation strategies that consider user emotion need to be studied, and the service robot's ability to empathize with human needs must be further improved.

3. Methodology

3.1. Concept of Machine Emotion

Machine emotion refers to a simulated emotional state, which can affect decision-making according to the state of the machine itself, environmental factors and users’ emotions [33].
Emotion originally refers to the attitude human beings form about whether objective things meet their own needs; it is a positive or negative attitude generated based on the environment and one's own situation. A large number of studies show that people are not always completely rational, and emotion affects the results of decision-making. If an emotional influence mechanism is added to the intelligent decision-making process, it is possible to find a more humanized solution. Machine emotion aims to give machines or intelligent products emotions like people's, in order to control and influence the decision-making process. Machine emotion is expected to reflect the user's emotions to some extent, so that the user feels understood by the robot.
In the application of the guide robot, it makes no sense to consider the state of the robot itself, and environmental factors will act as external events. Therefore, we omit the robot’s own state and environmental factors in machine emotion, and only use users’ emotions as the basis to establish machine emotion.

3.1.1. Description of Machine Emotion

We use the dimensional method to describe machine emotion. The dimensional representation describes emotion along continuously varying dimensions [34], so an emotion can be expressed as a point in a multi-dimensional emotion space. Wundt first put forward the viewpoint of emotional dimensions in 1896, holding that emotion is composed of three dimensions: pleasure-unhappiness, excitement-calm and tension-relaxation. In the field of emotional psychology, the PAD three-dimensional emotional space divides emotion into pleasure-displeasure, arousal-nonarousal and dominance-submissiveness [35]. In the application of the guide robot, the division of emotion does not need to be very fine-grained, so we modified the PAD emotion space by dropping the dominance dimension and using the resulting two-dimensional pleasure-arousal space to describe machine emotion. The emotional space for machine emotion is shown in Figure 1.

3.1.2. Transformation of Machine Emotion

People's own conditions and external events may both cause emotion to change, but the input of external events is the most direct cause. In the absence of external event input, machine emotion flattens over time, expressed as a gradual approach to the zero point of the two-dimensional emotion space. When an external event occurs, we define it as an emotion vector $e_i$ in the two-dimensional space; this vector acts on the current emotion $E_0 = (P_0, A_0)$ and transforms it, expressed as a change of the point's position in emotion space. Obviously, different external event inputs may have the same emotion vector. As with human emotion, the same external event input may correspond to different emotion vectors when the current emotional state differs.
Our method treats the user's visiting experience as the emotion vector input by external events. The emotion vector is composed of two indexes, $e_{i,j} = (c_{i,j}, d_{i,j})$, where $d_{i,j}$ represents the walking distance between points $i$ and $j$, and $c_{i,j}$ represents the correlation index between points $i$ and $j$:
$$E_j = E_i + e_{i,j} \times \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix} = (P_j, A_j), \qquad (1)$$

where $e_{i,j}$ is the emotion vector, and $\lambda_1, \lambda_2$ are the weights of the two dimensions of the emotion vector under different emotional states, determined by the user's emotion. $E_i$ is the current machine emotion, and $E_j$ is the predicted machine emotion after reaching the target point. A schematic diagram is shown in Figure 2.

3.1.3. The Influence of Current Emotion

Different current emotional states lead to different reactions to the same target point. For example, a visitor in a pleasant mood will be interested in a next target point that is highly relevant to the current spot, while a less happy visitor is likely to be frustrated by such a point. Therefore, we set up two coefficients to distinguish the impact of different current emotional states:
$$\lambda_1 = \frac{|P_0|}{|P_0| + |A_0|}, \qquad (2)$$

$$\lambda_2 = \frac{|A_0|}{|P_0| + |A_0|}. \qquad (3)$$
These two coefficients simulate the different responses of tourists visiting in different emotional states. If the user's emotion is identified as pleasant, this indicates interest in the current tour point. The $P_0$ value of the corresponding machine emotion will then be higher, so $\lambda_1$ will be higher, and target points highly correlated with the current tour point will have a higher chance of being selected.
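To make the update concrete, the following Python sketch implements Equations (1)-(3). It is illustrative only: the function names are ours, and the even split of the weights for a perfectly neutral emotion ($|P_0| + |A_0| = 0$) is an assumption the paper does not specify.

```python
import numpy as np

def emotion_weights(P0, A0):
    """Equations (2)-(3): dimension weights derived from the current emotion."""
    total = abs(P0) + abs(A0)
    if total == 0:
        return 0.5, 0.5  # neutral emotion: even split (our assumption)
    return abs(P0) / total, abs(A0) / total

def predict_emotion(E_i, e_ij):
    """Equation (1): E_j = E_i + e_ij * diag(lam1, lam2).

    E_i  -- current machine emotion (P, A)
    e_ij -- emotion vector (c_ij, d_ij) of moving from point i to point j
    """
    lam1, lam2 = emotion_weights(*E_i)
    return np.asarray(E_i) + np.asarray(e_ij) @ np.diag([lam1, lam2])
```

For the "pleasant" state of Table 1, $E_0 = (2.77, 1.21)$ gives $\lambda_1 \approx 0.70$, so the correlation component of the emotion vector dominates the update, matching the behavior described above.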

3.1.4. Synthesis of Initial Machine Emotion

Before navigation, the initial machine emotion is evaluated from the emotional state of the user. Some basic sources for this assessment are shown in Table 1.

3.2. Navigation Strategy

Applying machine emotion to robots can help them understand human intentions and make them more like a real person. Applying this technology to guide robot indoor navigation provides users with a more humanized service and solves a navigation problem that jointly optimizes user preferences, user interests, walking time, etc.
This method takes machine emotion as the optimization target of the navigation strategy. The strategy selects target points according to three parameters: the distance to the next target point, the correlation coefficient between the next target point and the current point, and the crowding degree of the target point.
In the navigation process, by establishing multiple objective functions to predict the attractiveness of different locations to users, our method will find the target point with the maximum attractiveness to users, that is, the target point that users most expect to reach.

3.2.1. The Establishment of the Prior Scene Map

Before performing the navigation task, a detailed prior scene map needs to be established using a grid map, as shown in Figure 3. The map must record not only the locations of the visiting points, but also the walking distance and the correlation index between each pair of points. Understanding the environment's characteristics is a significant task in allowing the robot to move autonomously and make suitable decisions accordingly [36].
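As an illustration, such a map can be stored as two plain tables: point coordinates and pairwise correlation indexes. The sketch below uses the test-site points later listed in Table 4; mapping the qualitative relevance labels weak/middle/strong to 0.2/0.5/0.8 is our own assumption, chosen only for demonstration.

```python
# Grid coordinates of the tour points (from Table 4).
guide_map = {"A": (8, 9), "B": (17, 9), "C": (13, 6), "D": (24, 6)}

# Pairwise correlation indexes in [-1, 1]; the numeric values assigned to
# the qualitative labels are illustrative assumptions.
REL = {"weak": 0.2, "middle": 0.5, "strong": 0.8}
correlation = {
    ("A", "B"): REL["middle"], ("A", "C"): REL["weak"],   ("A", "D"): REL["weak"],
    ("B", "A"): REL["middle"], ("B", "C"): REL["weak"],   ("B", "D"): REL["weak"],
    ("C", "A"): REL["weak"],   ("C", "B"): REL["middle"], ("C", "D"): REL["strong"],
    ("D", "A"): REL["weak"],   ("D", "B"): REL["middle"], ("D", "C"): REL["strong"],
}
```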

3.2.2. Machine Emotion Objective Function

The distance between each pair of visiting points is the Manhattan distance between the two points:

$$d_{i,j} = |x_i - x_j| + |y_i - y_j|, \qquad (4)$$

where $(x_i, y_i)$ is the coordinate of point $i$ and $(x_j, y_j)$ is the coordinate of point $j$.
Correlation index refers to the correlation between two tourist attractions, which can be type correlation, time correlation, etc. Taking the museum scene as an example, there will be a high correlation between the cultural relics display sites in the adjacent period.
$$c_{i,j} = \frac{\sum_{n=1}^{N} corr_n(i,j)}{N}, \qquad (5)$$

where $corr_n(i,j)$ represents the correlation degree of the $n$-th dimension. If only type correlation and time correlation are considered, the correlation between point $i$ and point $j$ is the average of the type correlation and the time correlation. The correlation degree of each dimension needs to be defined in advance, and each indicator is quantified by a number in the range $[-1, 1]$.
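Both indexes are simple to compute; a minimal sketch on top of the map structure above (the helper names are ours):

```python
def manhattan(p, q):
    """Equation (4): walking distance between two grid points."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def correlation_index(corrs):
    """Equation (5): mean of the predefined per-dimension correlation
    degrees (e.g., type and time correlation), each in [-1, 1]."""
    return sum(corrs) / len(corrs)

# Example: manhattan(guide_map["A"], guide_map["B"]) == 9
```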

3.2.3. Machine Emotion Penalty Function

Taking the museum scene as an example, $N$ is the theoretical average number of tourists near each tourist attraction, that is, the total number of visitors $num_{total}$ divided by the total number of tourist attractions $num_p$:

$$N = \frac{num_{total}}{num_p}, \qquad (6)$$
Our method introduces a penalty function into the distance and relevance indicator model, expressed as:
$$p(N) = \begin{cases} 1, & N_0 \le 2 \times N \\ \dfrac{1}{(N_0 / N) \times K}, & N_0 > 2 \times N \end{cases} \qquad (7)$$

where $K$ is defined as a penalty coefficient with a large value, and $N_0$ is defined as the number of visitors currently detected at the target point. A classical object recognition algorithm is used to count the tourists at the destination. If the number of visitors is more than twice the theoretical average, a large penalty is applied, which significantly reduces the attraction of that target point to users. Otherwise, if the number of tourists around the destination is within a reasonable range, the penalty has no effect on the attraction of the target point.
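A direct transcription of Equations (6) and (7) as a hedged sketch; the concrete value of the penalty coefficient $K$ is not given in the paper, so the default below is a placeholder.

```python
def penalty(n0, n_avg, K=100.0):
    """Equation (7): p(N) stays 1 while the detected crowd n0 is within
    twice the theoretical average n_avg (Equation (6), num_total / num_p);
    beyond that, 1 / ((n0 / n_avg) * K) sharply damps the target point's
    attractiveness. K = 100 is a placeholder value."""
    if n0 <= 2 * n_avg:
        return 1.0
    return 1.0 / ((n0 / n_avg) * K)
```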

3.2.4. Linear Weighted Multi-Objective Optimization Method Based on Machine Emotion

Among various algorithms for multi-objective optimization, the linear weighting method is widely used. It sets different weights for multiple objectives according to their importance, transforming the problem into a single-objective optimization:
$$Att_j = (\lambda_1 \times c_{i,j} + \lambda_2 \times d_{i,j}) \times p(N), \qquad (8)$$

where $Att_j$ is defined as the attraction of target point $j$ to visitors. The disadvantage of the linear weighting method is that the weights are difficult to determine and the quality of the result cannot be guaranteed. Our method provides a basis for judging the weights: the linear weighted multi-objective optimization algorithm based on machine emotion has a dynamic, personalized weight judgment method, which transforms the multi-objective optimization into a single-objective optimization problem over machine emotion. The target point with the maximum $Att_j$ is the one our method predicts the users most expect.
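Putting the pieces together, here is a sketch of the scoring and selection step, reusing the helpers sketched in the previous subsections; the combination of the weighted terms follows Equation (8) as printed, and visitor_count stands for the output of the target recognition step.

```python
def attractiveness(i, j, emotion, visitor_count, n_avg):
    """Equation (8): Att_j = (lam1 * c_ij + lam2 * d_ij) * p(N)."""
    lam1, lam2 = emotion_weights(*emotion)          # Equations (2)-(3)
    c = correlation[(i, j)]                         # correlation index
    d = manhattan(guide_map[i], guide_map[j])       # Equation (4)
    return (lam1 * c + lam2 * d) * penalty(visitor_count[j], n_avg)

def most_expected(i, emotion, visitor_count, n_avg, hidden=()):
    """Return the not-yet-hidden target point with maximal attractiveness."""
    candidates = [j for j in guide_map if j != i and j not in hidden]
    return max(candidates,
               key=lambda j: attractiveness(i, j, emotion, visitor_count, n_avg))
```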

3.2.5. Navigation Process

As shown in Figure 4, the navigation strategy based on machine emotion is divided into three parts: loading the map for initialization, calculating the target point most expected by users, and conducting indoor navigation. Each time a navigation leg is completed, the current point is hidden on the map to avoid repeated navigation to the same location.
Prior to this, the communication atmosphere field is used to analyze the communication atmosphere of different groups [37]. The communication atmosphere is composed of the psychological factors and feelings that pervade a space and can affect the behavior process and its results [38]. Created by the dialogue between multiple parties, it exists in space but cannot be seen or touched, only perceived. By analyzing the communication atmosphere, the robot can determine the emotional state of a group of people and thereby determine the target group to serve.
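The overall loop of Figure 4 then reads roughly as below; navigate_to and sense_emotion are hypothetical placeholders for the classical indoor planner with obstacle avoidance and for the facial expression recognizer, respectively.

```python
def guide(start, emotion, visitor_count, n_avg):
    """Sketch of the Figure 4 process: repeatedly pick the most expected
    point, navigate there, and hide each served point afterwards."""
    current, hidden = start, {start}
    while len(hidden) < len(guide_map):
        target = most_expected(current, emotion, visitor_count, n_avg, hidden)
        navigate_to(target)         # classical path planning + obstacle avoidance
        emotion = sense_emotion()   # re-estimate user emotion at the new point
        current = target
        hidden.add(target)
```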

4. Experiments and Discussions

4.1. Introduction to Experimental Platform

4.1.1. Hardware Environment Introduction

The Pepper robot is chosen as the development platform for this design, as shown in Figure 5. Pepper is a programmable humanoid robot designed and developed by France's Aldebaran Robotics and Japan's SoftBank Group.
Pepper has 20 degrees of freedom, and the sensors and audio-visual functions all over its body make it "sentient". Pepper provides more than 2000 API interfaces, so applications can be freely extended, and its rich set of sensors supports multiple styles of human-robot interaction. Its main specification parameters are shown in Table 2.

4.1.2. Introduction to Software Development Environment

Choregraphe is multi-platform application software. Using Choregraphe, one can create behavior modules for the Pepper robot and connect to the robot to test them. In a nutshell, Choregraphe is control software dedicated to Pepper application development, as shown in Figure 6. It uses graphical programming, with which a variety of complex behaviors can be achieved by combining instruction boxes, and it also lets users write modules in Python and call SDK interfaces [39].

4.2. Verification on Pepper

This paper initially deploys and tests the above strategies on the Pepper robot.
Since the NAOqi system utilized by the Pepper robot only supports Python 2.7, the emotional atmosphere field modeling and analysis cannot be performed on the Pepper robot. Therefore, the emotional atmosphere field modeling and analysis are performed on the local server, and the results are transmitted through Wi-Fi to achieve the function of service target selection. In practical applications, the sound source localization method proposed in [40] can be used to simplify the service target selection process.
Figure 7 shows an example of the Pepper robot navigating in a simply arranged laboratory scene, and Table 3 shows the preset parameters for each point. The Pepper robot perceives the current emotion of tourists through facial expression recognition and, based on that emotion, calculates the attractiveness of all other target points to find the most attractive one.
The implementation of the navigation function requires the Pepper robot to localize itself. In the experiment, the Pepper robot's built-in positioning function is used, and the coordinate relationships between the accessible target points are drawn. Once the next target point is confirmed and the positioning succeeds, Pepper follows the specified direction and route to the destination.
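For reference, the navigation call itself can be issued through the NAOqi Python SDK roughly as in the sketch below. The IP address is a placeholder, and the robot-frame offset (dx, dy) to the chosen target is assumed to have been computed from the positioning result.

```python
# -*- coding: utf-8 -*-
# Pepper's NAOqi system runs Python 2.7.
from naoqi import ALProxy

PEPPER_IP, PEPPER_PORT = "192.168.1.10", 9559   # placeholder address

tts = ALProxy("ALTextToSpeech", PEPPER_IP, PEPPER_PORT)
nav = ALProxy("ALNavigation", PEPPER_IP, PEPPER_PORT)

def walk_to(dx, dy, label):
    """Announce the target, then walk to the robot-frame offset (dx, dy)
    in meters; ALNavigation.navigateTo plans around obstacles on the way."""
    tts.say("Please follow me to " + label)
    reached = nav.navigateTo(dx, dy)
    if not reached:
        tts.say("Sorry, I could not reach " + label)
```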
To better evaluate the proposed method, a real-world test was performed. The test site and related data are shown in Figure 8 and Table 4.
Sixteen experimenters were divided into two groups to participate in the experiment and evaluated the human-robot interaction experience during the experiment. The first group visited the laboratory in a designated order led by Pepper. The second group was guided using the navigation strategy proposed in this paper. After the tour, the experimenters rated the tour experience through the evaluation form.
List 1. Questionnaire
  • Can you feel the robot sensing your mood changes?
  • Can you feel that the robot senses your preferences/has some empathy?
  • Does such a robot bring you a better experience?
Since the human-robot interaction experience is subjective, experimenters used five levels of 1–5 to evaluate each item. The levels 1–5 represent completely disagree, somewhat disagree, neutral, somewhat agree and strongly agree. The results are shown in Table 5.
According to the scoring results, the experimenters in group 1 felt that the guide robot did not have the ability to understand humans. However, the experimenters in group 2 clearly felt that the navigation strategy of the guide robot changed according to their emotions, and that the robot had a certain degree of empathy. Compared with the guide robot using fixed-route navigation, the robot using the proposed navigation strategy provided the experimenters with a better human-robot interaction experience.

5. Conclusions and Future Work

It is difficult for existing service robots to give feedback corresponding to the user's emotions, which makes people feel, when interacting with a robot, that it lacks humanity. This paper proposes a multi-objective navigation strategy based on machine emotion, which enables the guide robot to predict the destination tourists expect to visit according to their emotional state and enhances the guide robot's ability to empathize with humans, an ability that can be widely applied to various service robots.
Further exploration and analysis can be carried out in the following aspects:
(a) Using more informative maps to increase the basis on which the guide robot can predict tourists' preferences.
(b) At present, only two dimensions are used to describe machine emotions; it is hoped that a more appropriate emotional space can be used to better judge user emotions.
(c) Specifying more navigation strategies for different scenarios, and optimizing and applying them to more types of service robots.
(d) After navigation starts, input is only taken from a single tourist. In future research, real-time communication atmosphere field analysis will be used to improve the strategy so that inputs from multiple tourists can be taken.

Author Contributions

Investigation, D.C. and Y.G.; Project administration, Y.G.; Validation, D.C.; Writing—original draft, D.C.; Writing—review & editing, D.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the College Students’ Innovative Entrepreneurial Training Plan Programs under Grant 202110491053 and 202210491031, and in part by College Students’ Launching Project of Independent Innovation Funding Program, China University of Geosciences (Wuhan) under Grant CUGDCJJ202246.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Acknowledgments

We would like to thank Zhen-Tao Liu of China University of Geosciences for his guidance and project support and Xin-Heng Li for his contribution to the experiment. Without their support and help, this article may not have been completed successfully.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tao, Y. Development of Service Robots for The Development of Intelligent Society. Sci. Technol. Rev. 2015, 33, 58–65. [Google Scholar]
  2. Guo, Y.; Xie, Y.; Chen, Y.; Ban, X.; Sadoun, B.; Obaidat, M.S. An Efficient Object Navigation Strategy for Mobile Robots Based on Semantic Information. Electronics 2022, 11, 1136. [Google Scholar] [CrossRef]
  3. IFR. Service-Robots. Available online: https://ifr.org/service-robots (accessed on 28 June 2022).
  4. Graterol, W.; Diaz-Amado, J.; Cardinale, Y.; Dongo, I.; Lopes-Silva, E.; Santos-Libarino, C. Emotion Detection for Social Robots Based on NLP Transformers and an Emotion Ontology. Sensors 2021, 21, 1322. [Google Scholar] [CrossRef] [PubMed]
  5. Xiao, G.; Ma, Y.; Liu, C.; Jiang, D. A machine emotion transfer model for intelligent human-machine interaction based on group division. Mech. Syst. Signal Process. 2020, 142, 106736. [Google Scholar] [CrossRef]
  6. Su, Y. Ontology-Based Agent Emotion Recognition and Emotion Induction Research; Lanzhou University: Lanzhou, China, 2019. [Google Scholar]
  7. Wessman, A.E.; Ricks, D.F. Mood and Personality; Holt Rinehart and Winston: Oxford, UK, 1966; pp. 23–99. [Google Scholar]
  8. Russell, J.A. A circumplex model of affect. J. Personal. Soc. Psychol. 1980, 39, 1161–1178. [Google Scholar] [CrossRef]
  9. Miwa, H.; Takanobu, H.; Takanishi, A. Human-Like Head Robot WE-3RV for Emotional Human-Robot Interaction. In Romansy; Bianchi, G., Guinot, J.-C., Rzymkowski, C., Eds.; Springer: Vienna, Austria, 2002; Volume 14, pp. 519–526. [Google Scholar]
  10. Wu, W.; Li, H. Artificial emotion modeling and human-computer interaction experiment in PAD emotion space. J. Harbin Inst. Technol. 2019, 51, 29–37. [Google Scholar]
  11. Tian, Z.; Chen, X.; Jiang, D. An artificial emotion model based on mutual mapping between discrete states and latitude space. J. Syst. Simul. 2021, 33, 1062–1069. [Google Scholar]
  12. Chen, J.; Jiang, D. A machine-oriented framework for artificial emotion simulation. J. Shantou Univ. 2020, 35, 36–46. [Google Scholar]
  13. Bi, R. AI Literary Emotion Is a Kind of Artificial Emotion; Yangtze River Art and Literature Publishing House: Beijing, China, 2020; Volume 19, pp. 136–139. [Google Scholar]
  14. Jiang, H.; Xu, J.; Lin, S.; Yang, C.; Yang, W.; Guo, J. Minimal emotion expression for intelligent terminals. J. Comput. Aided Des. Comput. Graph. 2020, 32, 1042–1051. [Google Scholar]
  15. Niloy, M.A.; Shama, A.; Chakrabortty, R.K.; Ryan, M.J.; Badal, F.R.; Tasneem, Z.; Ahamed, H.; Moyeen, S.I.; Das, S.K.; Ali, F.; et al. Critical design and control issues of indoor autonomous mobile robots: A review. IEEE Access 2021, 9, 35338–35370. [Google Scholar] [CrossRef]
  16. Kruse, T.; Pandey, A.K.; Alami, R.; Kirsch, A. Human-Aware Robot Navigation: A Survey. Robot. Auton. Syst. 2013, 61, 1726–1743. [Google Scholar] [CrossRef]
  17. Charalampous, K.; Kostavelis, I.; Gasteratos, A. Recent trends in social aware robot navigation: A survey. Robot. Auton. Syst. 2017, 93, 85–104. [Google Scholar] [CrossRef]
  18. Möller, R.; Furnari, A.; Battiato, S.; Härmä, A.; Farinella, G.M. A survey on human-aware robot navigation. Robot. Auton. Syst. 2021, 145, 103837. [Google Scholar] [CrossRef]
  19. He, L.; Zhang, H.; Yuan, L.; Liu, Z.; Zhang, W.; Zhong, R.; Zhang, S. A review of social awareness navigation methods for service robots. Comput. Eng. Appl. 2022, 58, 1–11. [Google Scholar]
  20. Ferrer, G.; Garrell, A.; Sanfeliu, A. Social-Aware Robot Navigation in Urban Environments. In Proceedings of the European Conference on Mobile Robots, Catalonia, Spain, 25–27 September 2013; pp. 331–336. [Google Scholar]
  21. Malviya, A.; Kala, R. Social robot motion planning using contextual distances observed from 3D human motion tracking. Expert Syst. Appl. 2021, 184, 115515. [Google Scholar] [CrossRef]
  22. Pérez-Hurtado, I.; Orellana-Martín, D.; Martínez-del-Amor, M.Á.; Valencia-Cabrera, L. A membrane computing framework for social navigation in robotics. Comput. Electr. Eng. 2021, 95, 107408. [Google Scholar] [CrossRef]
  23. Wang, C.; Li, Y.; Ge, S.S.; Lee, T.H. Adaptive Control for Robot Navigation in Human Environments based on Social Force Model. In Proceedings of the 2016 IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016; pp. 5690–5695. [Google Scholar]
  24. Reddy, A.K.; Malviya, V.; Kala, R. Social cues in the autonomous navigation of indoor mobile robots. Int. J. Soc. Robot. 2021, 13, 1335–1358. [Google Scholar] [CrossRef]
  25. Kivrak, H.; Cakmak, F.; Kose, H.; Yavuz, S. Waypoint based path planner for socially aware robot navigation. Clust. Comput. 2022, 25, 1665–1675. [Google Scholar] [CrossRef]
  26. Repiso, E.; Zanlungo, F.; Kanda, T.; Garrell, A.; Sanfeliu, A. People’s v-Formation and Side-by-Side Model adapted to Accompany Groups of People by Social Robots. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 2082–2088. [Google Scholar]
  27. Kivrak, H.; Cakmak, F.; Kose, H.; Yavuz, S. Social navigation framework for assistive robots in human inhabited unknown environments. Eng. Sci. Technol. 2021, 24, 284–298. [Google Scholar] [CrossRef]
  28. Patompak, P.; Jeong, S.; Chong, N.Y.; Nilkhamhang, I. Mobile Robot Navigation for Human-Robot Social Interaction. In Proceedings of the 16th International Conference on Control, Automation and Systems (ICCAS), Gyeongju, Korea, 16–19 October 2016; pp. 1298–1303. [Google Scholar]
  29. Truong, X.T.; Yoong, V.N.; Ngo, T.D. Socially aware robot navigation system in human interactive environments. Intell. Serv. Robot. 2017, 10, 287–295. [Google Scholar] [CrossRef]
  30. Yang, C.T.; Zhang, T.; Chen, L.P.; Fu, L.C. Socially-Aware Navigation of Omnidirectional Mobile Robot with Extended Social Force Model in Multi-Human Environment. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC), Bari, Italy, 6–9 October 2019; pp. 1963–1968. [Google Scholar]
  31. James, J.; Watson, C.I.; Macdonald, B. Artificial Empathy in Social Robots: An analysis of Emotions in Speech. In Proceedings of the 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018. [Google Scholar]
  32. Fung, P.; Bertero, D.; Wan, Y.; Dey, A.; Chan, R.H.Y.; Siddique, F.B.; Yang, Y.; Wu, C.; Lin, R. Towards Empathetic Human-Robot Interactions; Springer: Cham, Switzerland, 2016. [Google Scholar]
  33. Chen, D.; Liu, Z.T. Multi-Objective Route Planning Based on Machine Emotion. In Proceedings of the 7th International Workshop on Advanced Computational Intelligence and Intelligent Informatics (IWACIII 2021), Beijing, China, 31 October–3 November 2021. [Google Scholar]
  34. Russell, J.A.; Mehrabian, A. Evidence for a three-factor theory of emotions. J. Res. Personal. 1977, 11, 273–294. [Google Scholar] [CrossRef]
  35. Mehrabian, A. Framework for a comprehensive description and measurement of emotional states. Genet. Soc. Gen. Psychol. Monogr. 1995, 121, 339–361. [Google Scholar]
  36. Alenzi, Z.; Alenzi, E.; Alqasir, M.; Alruwaili, M.; Alhmiedat, T.; Alia, O.M. A Semantic Classification Approach for Indoor Robot Navigation. Electronics 2022, 11, 2063. [Google Scholar] [CrossRef]
  37. Zhang, R. Human-Computer Communication Atmosphere Field Modeling Based on Fuzzy AHP and Its Application in Human-Computer Interaction System; China University of Geosciences: Wuhan, China, 2018. [Google Scholar]
  38. Liu, Z.T.; Wu, M.; Li, D.-Y.; Chen, L.-F.; Dong, F.-Y.; Yamazaki, Y.; Hirota, K. Concept of Fuzzy Atmosfield for Representing Communication Atmosphere and its Application to Humans-Robots Interaction. J. Adv. Comput. Intell. Intell. Inform. 2013, 17, 3–17. [Google Scholar] [CrossRef]
  39. Zhang, X. Development and application of Pepper robot intelligent interaction based on Choregraphe. Netw. Secur. Technol. Appl. 2020, 12, 55–57. [Google Scholar]
  40. Chen, B.; Lu, Z.; Zhou, Y.; Ye, Q. Indoor speech separation and sound source localization system based on dual microphones. Comput. Appl. 2018, 38, 3643–3648. [Google Scholar]
Figure 1. Emotional space for machine emotion.
Figure 2. A schematic diagram for Equation (1).
Figure 3. An example of a grid map.
Figure 4. Navigation strategy process based on machine emotion.
Figure 5. Pepper.
Figure 6. Choregraphe programming interface.
Figure 7. The process of robot navigation.
Figure 8. Schematic diagram of field test site.
Table 1. Initial emotion state.

Emotion State    P        A
urgent          −0.95    −0.32
not urgent       1.57    −0.79
pleasant         2.77     1.21
unpleasant      −1.60    −0.80
active           1.72     1.71
upset           −1.20     0.40
Table 2. Main specification parameters of Pepper.

Pepper      Specifications
Size        1200 mm × 425 mm × 485 mm
Weight      28 kg
Battery     Lithium battery; capacity: 30.0 Ah/795 Wh; run time: more than 12 h
Sensors     Head: mic × 4, speaker × 2, 5-megapixel camera × 2, 3D camera, etc.
            Chest: gyro sensor, inertial sensor
            Hand: touch sensor × 2
            Leg: sonar sensor × 2, laser sensor × 6, infrared sensor × 2, omnidirectional wheel × 3, etc.
Display     10.1-inch touch display
Platform    NAOqi OS
Internet    Wi-Fi/Ethernet/Bluetooth
Speed       Up to 3 km/h
Climbing    Up to 1.5 cm
Movement    20 degrees of freedom in total (head: 2, arm: 2 × 5, leg: 3, hand: 2)
Table 3. Preset parameters for each point.

Current Point    Machine Emotion    Target Point
C                unpleasant         A
C                active             B
C                pleasant           D
Table 4. Parameters for each point.

Tour Point    Location    Relevance to Other Locations
Lab A         (8, 9)      B: middle; C: weak; D: weak
Lab B         (17, 9)     A: middle; C: weak; D: weak
Lab C         (13, 6)     A: weak; B: middle; D: strong
Lab D         (24, 6)     A: weak; B: middle; C: strong
Table 5. Questionnaire results.

Group      Experimenter    Evaluation 1    Evaluation 2    Evaluation 3
Group 1    01              2               2               4
           02              1               1               3
           03              1               1               3
           04              1               1               4
           05              1               1               2
           06              1               1               2
           07              2               1               3
           08              1               1               3
           Average         1.25            1.125           3
Group 2    09              3               3               4
           10              4               3               3
           11              4               4               4
           12              5               4               5
           13              4               3               3
           14              4               4               4
           15              4               4               4
           16              5               5               4
           Average         4.125           3.75            3.875
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
