Review

Application of Social Robots in Healthcare: Review on Characteristics, Requirements, Technical Solutions

1 Department of Mechanical and Industrial Engineering, Università degli Studi di Brescia, Via Branze 38, 25123 Brescia, Italy
2 IRCCS Fondazione Don Carlo Gnocchi, Via di Scandicci 269, 50143 Florence, Italy
3 Faculty of Political Science and Sociopsychological Dynamics, Università degli Studi Internazionali, Via Cristoforo Colombo 200, 00147 Rome, Italy
* Author to whom correspondence should be addressed.
Sensors 2023, 23(15), 6820; https://doi.org/10.3390/s23156820
Submission received: 28 May 2023 / Revised: 14 July 2023 / Accepted: 26 July 2023 / Published: 31 July 2023
(This article belongs to the Special Issue Feature Papers in Physical Sensors 2023)

Abstract

Cyber-physical or virtual systems or devices capable of autonomously interacting with human or non-human agents in real environments are referred to as social robots. The primary areas of application for this biomedical technology are nursing homes, hospitals, and private homes, where assistance is provided to the elderly, people with disabilities, children, and medical personnel. This review examines the current state of the art of social robots used in healthcare applications, with a particular emphasis on the technical characteristics and requirements of these different types of systems. Humanoid robots, companion robots, and telepresence robots are the three primary categories of devices identified and discussed in this article. The research examines the scientific literature (according to the Scopus Elsevier database), patents (using the Espacenet search engine), and commercial applications (searched with the Google search engine). A variety of devices are enumerated and categorized, and their respective specifications are then discussed and organized.

1. Introduction

Social robots (SRs) are artificial systems capable of playing an active and positive social role within a society in which human and non-human agents are present. Several of these systems are involved in medical contexts, for clinical protocols and follow-up processes, or more generally in the rehabilitation field, for instance to restore the motor functionality of limbs. Thanks to the great improvements of recent years, social robots have become more similar to human entities; they not only allow us to connect people around the world with telepresence systems, but in some cases they even act as family members, caregivers, and nurses in the companionship of people with dementia, autism spectrum disorder, and other mental or physical disorders. Some of the greatest advantages provided by social robots applied in older-adult care consist of increasing individual independence, facilitating everyday routines, limiting human caregivers' intervention, and easing communication with family members through remote connections [1]. Furthermore, robotic animal companions have been developed and adopted inside care facilities to provide pet therapy to patients. In other cases, social robots can be used to reduce children's pain and anxiety related to surgery, adapting their behavior to the estimated emotional state of children, who have a reduced capability to communicate their possible discomfort [2]. Moreover, the ethical implications of social robots and embedded artificial intelligence have to be carefully considered in designing their behavior, especially when they are involved in pediatric environments [3]. Although this research field is rapidly progressing and increasingly effective solutions are being employed, numerous issues and social stigmas still have to be overcome.
Thus, multidisciplinary research teams (i.e., physicians, engineers, and social scientists) have to be involved to define social robots' design and behavior in ways that can reduce people's negative attitudes toward them [4]. The literature proposes different scientific reviews of social robots, mainly focused on more or less broad application sectors; nonetheless, there is an absence of scientific reviews that address the functional aspects and the technical features used to achieve those functionalities. For this reason, this work focuses on the state of the art of social robotics from a technical perspective, specifically in medical, healthcare, and educational environments.
According to the design-centered approach proposed by Bartneck and Forlizzi [5], a social robot is an autonomous or semi-autonomous robot that interacts with humans by following the behavioral norms expected by the people with whom it is intended to interact. Their definition presupposes three conditions: the robot has to be autonomous; depending on the case, it has to interact cooperatively or non-cooperatively; and it has to recognize human values and roles. Furthermore, Bartneck and Forlizzi asserted that verbal and non-verbal communication between social robots and human entities is fundamental. These systems must be perfectly matched with their final user: much work must be carried out to obtain an appearance that avoids the "uncanny valley" phenomenon. In fact, in 1970 Masahiro Mori hypothesized that once a robot has reached a threshold value of lifelikeness, it may evoke a negative response in people. This phenomenon is strongly related to the appearance of robots and its impact on human targets. The effectiveness in establishing a relationship that provides benefits for users is defined by the robot's degree of similarity to the human being in terms of behavior and aesthetic features. Thus, the design of humanoid robots may be limited, and the development of more machine-like systems may be preferred to more realistic ones. Emotion and expression recognition is also important to generate a strong connection between humans and cyber systems. Moreover, it is essential to design a robot that transmits trust and is able to behave as naturally as possible. In current robots, all these features can be achieved thanks to several sensors, actuators, and devices connected to complex hardware and managed by operating systems on which programs run to guide robots in the environment, allowing them to relate to and be part of the daily lives of humans.
In this context, this paper aims to provide a mapping of the methodologies and the various technical solutions required to build a social robot for healthcare environments. The review is performed by analyzing data extracted from three specific fields: SL (the scientific literature, investigated through the Scopus database (Elsevier ©)), PL (the patent literature, analyzed by querying the Espacenet database of the European Patent Office (EPO)), and MA (a wider market analysis, performed with the Google search engine (Google LLC, Mountain View, CA, USA)). This work is organized according to these three branches of investigation: the first section considers the scientific research; the second section collects patent data; and the last part depicts the heuristic research on market solutions. For each section, the data selection procedure and the performed data analyses are described in a first Materials and Methods subsection; then, the results are presented and discussed to capture, explain, and reorganize the most relevant technical characteristics of social robots for healthcare environments. In conclusion, this paper consists of a very extensive technical review that also focuses attention on the ethics and privacy concerns related to the deployment of social robots in healthcare environments, although these are not a primary goal. Summarizing the principal features of this work, the following are presented:
  • It stands out as a broad and comprehensive review, both from the temporal and sectorial points of view (i.e., years, scientific field, patents field, and global market);
  • The healthcare environment is very wide and diverse; thus, specific applications, stakeholders, and diseases in which SRs are involved are presented;
  • Technical devices and requirements of SRs are discussed in detail;
  • Guidelines for a multidisciplinary team of researchers involved in the social robotics field are reported, supporting the development of cyber solutions capable of establishing effective human–robot interactions.

Organization

In this paragraph, the general organization of the present work is reported. The complexity and breadth of the field under investigation have required an adequate structure, capable of providing all the information, in an effective and easily understandable manner, to whoever is interested in social robotics applied in healthcare environments. The structure of this review can be summarized using the compact framework described below:
  • Scientific literature research and an accurate analysis of the Scopus database (Elsevier ©) are reported. In this first part, the method used to obtain the data is presented; the information is then elaborated upon, mapped, and filtered according to different parameters to present the results in the Discussion (Paragraph 5).
  • Patent research (in this case, the database involved is the Espacenet database) and information on the analytical methods used and data treatments are presented. In this second part, a classification based on the international patent classification (IPC) has highlighted important patents, reported and analyzed in the Discussion.
  • Market research has been executed through Google search engine (Google LLC, Mountain View, CA, USA), and several pieces of information have been reported about market goals, limitations, and global dimension. Also in this case, the obtained results have been properly discussed in Paragraph 5.
Information, technical data, and descriptions of solutions coming from the three sources mentioned above have been obtained using the same taxonomy (i.e., analytical review and maturity review) and then incorporated and discussed extensively in the Discussion. The present work is intended as an optimal starting point for multidisciplinary teams of scientists approaching the study of social robots with the aim of developing their own projects. This review provides information about the most active countries and institutions in the sector, the rapid growth of interest in this field from both the industrial and scientific communities' points of view, and the technical innovations that have occurred over time.

2. Scientific Literature Research

2.1. Materials and Methods

2.1.1. Data Selection Procedure

The literature research was performed on documents indexed in the Scopus database (Elsevier ©). The conceptual keywords for the query were social and robots, combined with medical or healthcare to focus the review. For the purpose of the current analysis, those keywords or their variations are expected in the title, abstract, or keyword metadata field of the documents. According to these considerations, the following search string was composed: “TITLE-ABS-KEY (social AND robot* AND (*medic* OR *care OR clinic* OR therap* OR treatmen* OR disabilit* OR dementia OR autism OR diabet* OR stroke OR pain))”. This query, run for the last time on 13 March 2023, provided 6036 research products. The results were then filtered, applying as an inclusion criterion the classification of the document in at least one of the following subject areas of the Scopus database: Computer Science (“COMP”) and Engineering (“ENGI”). The subset of products compliant with the inclusion criteria was composed of 3977 documents, distributed among years and by document type as Figure 1 and Figure 2 describe, respectively. The collected results have thus been stratified in different ways to stress temporal, topical, and geographical correlations: publications by year, documents by subject area, documents by country, and documents by source type. This analysis has been performed directly with Scopus Elsevier instruments.
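As an illustration, the subject-area inclusion step described above can be sketched in a few lines. This is a hypothetical sketch: the record structure (a dict with a `subject_areas` field) is invented for the example and does not reflect the actual Scopus export schema.

```python
# Hypothetical sketch of the subject-area inclusion criterion.
# The record layout ("title", "subject_areas") is illustrative only.

INCLUDED_AREAS = {"COMP", "ENGI"}  # Computer Science, Engineering

def passes_inclusion(record: dict) -> bool:
    """Keep a document if it belongs to at least one included subject area."""
    return bool(set(record.get("subject_areas", [])) & INCLUDED_AREAS)

records = [
    {"title": "A social robot for dementia care", "subject_areas": ["COMP", "MEDI"]},
    {"title": "Ethics of robot companionship", "subject_areas": ["SOCI"]},
]

subset = [r for r in records if passes_inclusion(r)]
print(len(subset))  # 1
```

A document belonging to several areas (as noted later for Figure 3) is kept as soon as one of its labels matches, which is exactly the "at least one" criterion applied here.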
In Figure 1, only the 1992–2022 range is shown for greater comparability between years, although it should be noted that indexing for a given year is generally completed by June of the following year; a careful analysis thus shows a very high and continuous growth trend in scientific production since at least 2003. From Figure 2, we can infer some observations that will be further elaborated in the discussion of the results. We observe a notably greater number of conference products than journal products. This phenomenon is typical of disciplines with a high rate of innovation, such as computer science, where the lengthy publication times of academic journals are incompatible with the pace of innovation. Nevertheless, journals play an important role in establishing scientific results and laying a solid foundation for future development; therefore, researchers also utilize this publication venue. Journals are recommended for obtaining reliable information, but conference materials cannot be ignored. This outcome is also a consequence of the fact that the current work focuses on the technical aspects of social robotics. The high number of scientific reviews in both journals and conferences, comparable to that in medical disciplines, is an additional intriguing phenomenon. In medical disciplines, the phenomenon is a result of meta-analyses that combine difficult-to-replicate experiments involving human subjects. In the field of social robotics, however, the authors believe that this phenomenon is due to the high level of interdisciplinarity, which will be explored in greater detail in the discussion section, and to the inability of a single reader to delve into all the disciplines required to comprehend and develop a social robot.
For this reason, researchers in the field of social robotics feel the need to write and read scientific reviews, which represent a moment of interdisciplinary synthesis and are a crucial tool for fostering team cohesion.
Scientific products segmented by subject areas are collected in Figure 3. It should be specified that a document can belong to multiple subjects simultaneously, which therefore represent labels attributed to the individual document. Scientific subjects are associated with subject areas; in fact, editorial sources traditionally deal with disciplines rather than applications, apart from special cases. For this reason, Figure 3 can be interpreted to understand which disciplines are most involved in social robotics for technical design purposes (because of the particular selection string). As might be expected, computer science and engineering are determinant, but mathematics (it would be better to specify logic, theoretical computer science, and control theory), social science, medicine, and psychology are also relevant. Physics and astronomy are very general fields, so it is not appropriate to consider them. Arts and humanities, as well as decision sciences, are interesting to emphasize in order to accurately represent the human dimension and develop decision strategies in complex situations. Chemical, biological, and materials sciences are absolutely relevant for proposing innovative materials that mimic biological ones, and also for studying material interactions with humans. Neurosciences are present, although in limited quantity, because the Scopus classification includes in this subject only neurology and related disciplines, which represent a very broad field with great scientific impact but are only marginally involved in social robotics compared with, for example, the cognitive sciences, which Scopus tends to include in the social sciences. The spatial stratification (Figure 4) is much more complex than it appears in the breakdown by country. In fact, different nations have different geographic sizes, numbers of inhabitants, and numbers of active researchers, and may be composed of several states.
A compromise was chosen by identifying three determining geographic areas: the Commonwealth, Europe (including the European Union, Eastern Europe, Russia, and Israel), and Asia, as well as including the rest of the world in the “other” container. Certainly, this segmentation has many weaknesses; for example, it is difficult to compare the life expectancy and gross domestic product of former USSR areas with those of Western European areas. On the other hand, Japan has great similarities, e.g., with Italy, due to the presence of a strong drive for tradition side by side with a great tendency for innovation. However, from the proposed segmentation and a more in-depth analysis, which is omitted here, it appears that areas where the following factors are present are facilitated in the production of research in the field of social robotics: good social sensitivity combined with respect for the individual, high life expectancy combined with a good gross domestic product per capita, and, above all, a holistic cultural dimension.

2.1.2. Subset Pre-Filtering

In order to obtain a reduced dataset focused on documents with technical relevance to the social robotics field, the subset mentioned above has been treated by analyzing the title and the abstract of each document. This process has been carried out by assigning a numerical marker (0, 1, 2) to each element of the subset, imported into an Excel worksheet, to answer three research questions: (0) Can the described system be used as a social robot? (1) How is it used, or what is needed to make it a good social robot? and (2) What is the system composed of? The marker “0” has therefore been related to documents that describe the results of social robots' involvement in medical trials or any cyber-apparatus, such as exoskeletons or surgical robots, incapable of acting as a social partner to a human being, even if characterized by a high level of autonomy. Investigations of users' opinions and of the ethical and social impact of social robots on people have been grouped in this category as well. The marker “1” has been assigned to documents that provide information about the development and fields of application of social robotic platforms and prototypes but lack technical information about the embedded equipment. The main focus of these documents is evaluating the medical path in which the systems have been inserted and their effectiveness, although more technical information can be obtained by analyzing their references. The marker “2” has been related to documents that contain technical information about social robots' equipment and detailed descriptions of robotic platforms. Moreover, technological improvements of already known robotic features (e.g., algorithms, frameworks, sensors, and motors) are grouped in this category. In conclusion, 77 documents (marked by value “2”) passed the selection process and have been assembled into the final reduced dataset for further analysis.
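The three-level marking scheme can be summarized in code. This is only an illustration of the decision logic: in the review, the markers were assigned by manually reading each title and abstract, and the sample documents and boolean judgments below are invented.

```python
# Illustrative sketch of the (0, 1, 2) pre-filtering markers.
# The boolean inputs stand in for the human judgment made on each
# title and abstract; the sample entries are hypothetical.

def assign_marker(is_social_robot: bool, has_technical_details: bool) -> int:
    """Marker 0: system cannot act as a social partner (or opinion/ethics study).
    Marker 1: social robotic platform, but no details on embedded equipment.
    Marker 2: technical description of the robot's equipment -> kept."""
    if not is_social_robot:
        return 0
    return 2 if has_technical_details else 1

documents = [
    ("Surgical robot outcomes trial", False, True),
    ("Companion robot in elder care (clinical path only)", True, False),
    ("Humanoid sensor suite and control architecture", True, True),
]

reduced_dataset = [d for d in documents if assign_marker(d[1], d[2]) == 2]
print(len(reduced_dataset))  # 1
```

Only documents reaching marker "2" survive into the reduced dataset; markers "0" and "1" are excluded, as described in the sub-filtering results.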

2.1.3. Data Treatment

The subset and the reduced dataset are analyzed separately. For the first one, a bibliometric analysis has been performed, mapping the whole literature with “VOSviewer” (Leiden University, The Netherlands) to highlight keyword and author bibliometric networks.

Bibliometric Analysis

The results in the subset are first presented with a map approach [6] to enhance the bibliometric network in terms of keywords, using the software VOSviewer. Figure 5 shows a keyword network in the “network visualization” display mode, obtained by selecting “Co-occurrence” and “Author keywords” to extract the map. Figure 5 identifies the structure of the research area of social robotics and guides possible heuristic sub-searches, representing the most relevant keywords as wider and more interconnected poles.

Taxonomy

The reduced dataset of 77 documents has been investigated with respect to two different analysis processes, enabling the synthesis of an analytical and maturity review.

Analytical Review

In this phase, in order to carry out the reduced dataset analysis, the following classification has been used:
  • Embodiments: This category focuses on describing the main structure of the robots related to their specific functions and field of application;
  • Appearance: This category considers the appearance of social robots applied in the healthcare area, discussing materials, shapes, and innovative techniques used to produce some aesthetic parts of the robots;
  • Movements: In this category, the motion capability of robots has been analyzed, concerning the degrees of freedom and actuators used to generate movements;
  • Sensors: In this category, the sensory equipment of social robots has been described. It has been classified with respect to the measured target magnitude and main aim of each type of device;
  • Algorithms: This category gathers algorithms used to define the system interactions between environment and social partners;
  • Hardware and Connectivity: This category describes the involved communication protocols and the adopted hardware and architecture;
  • Human–robot interaction: This category describes methodologies and devices that allow information to be shared between social robots and human beings, as well as the controlling systems used to direct robots' assistance;
  • Ethical, privacy, and security issues: This category describes aspects that lawmakers are focusing on most, because of several potentially controversial implications.

Maturity Review

In this phase, the technology readiness level (TRL) scale defined by the European Commission [7] has been used to carry out the maturity analysis of the solutions described in the reduced dataset. This scale has been used to determine the development state, maturity, or market uptake of the system under investigation. It is characterized by the 9 levels reported in Table 1. Adaptations to the medical environment have been considered during the analysis process.
The evaluation criteria used have required the analysis of the full texts, considering mainly the environment of the experimental trials performed, the validation procedure, the reported results, and the concluding remarks.
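Once each document has received a TRL from the full-text reading, the maturity review reduces to tabulating the distribution of levels 1–9 (as summarized in Figure 9). A minimal sketch, with invented sample assignments:

```python
from collections import Counter

# Sketch of the maturity tabulation: each reduced-dataset entry gets a
# TRL (1-9) after full-text analysis; the distribution summarizes the
# field's maturity. The assignments below are hypothetical examples.

trl_assignments = {"doc_a": 3, "doc_b": 6, "doc_c": 6, "doc_d": 9}

distribution = Counter(trl_assignments.values())
for level in range(1, 10):
    print(f"TRL {level}: {distribution.get(level, 0)} document(s)")
```

Levels with zero documents are reported explicitly, so the resulting histogram always spans the full 1–9 scale of Table 1.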

2.2. Results

2.2.1. Results for Subset Sub-Filtering Process

The subset has been processed according to the above-described taxonomy. The numerical markers (0, 1, 2) have been used as labels to identify which documents have passed the validation process and which ones have been excluded:
  • Marked by value “2”, 96 documents have passed the validation process, and they have been subjected to deeper analyses (Analytical review and Maturity review);
  • Marked by value “1”, 1021 documents have not passed the validation process, and a further review is required;
  • Marked by value “0”, 2860 documents have not passed the validation process.
In conclusion, documents labeled with tags “0” and “1” have been excluded from the present review, and the result of the sub-filtering process is reported in Figure 6.
It is important to specify that the articles that were included, in addition to being relevant in terms of content related to social robotics for health, present technical information about how a specific robot or some parts of it are constituted. This scientific review therefore focuses on technical aspects of social robotics, which, as we will see in the discussion section, cannot be completely separated from other aspects because social robotics is inherently multidisciplinary, and application in the health sector adds additional areas of scientific expertise.

2.2.2. Bibliometric Results

The set of keywords appearing in each document, both author-defined and indexed, was represented using VOSviewer to identify a bibliometric co-occurrence map, i.e., the relatedness of the items is determined based on the number of documents in which they occur together. A full counting method was adopted; i.e., each co-occurrence has the same weight in the analysis. To be represented, a keyword must appear with a minimum number of occurrences of five.
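The full-counting co-occurrence tally described above can be sketched directly: every pair of keywords appearing in the same document counts once with equal weight, and keywords below the minimum occurrence threshold are dropped before mapping. The keyword lists below are illustrative, not taken from the actual dataset.

```python
from collections import Counter
from itertools import combinations

# Sketch of VOSviewer-style "full counting" co-occurrence, assuming one
# keyword list per document. Sample data is invented and deliberately
# small, so nothing survives the threshold of five occurrences here.

MIN_OCCURRENCES = 5

docs_keywords = [
    ["social robots", "human", "robotics"],
    ["social robots", "dementia", "aged"],
]

# Count in how many documents each keyword occurs (set() deduplicates
# repeats within one document).
occurrences = Counter(k for doc in docs_keywords for k in set(doc))
kept = {k for k, n in occurrences.items() if n >= MIN_OCCURRENCES}

cooccurrence = Counter()
for doc in docs_keywords:
    for a, b in combinations(sorted(set(doc) & kept), 2):
        cooccurrence[(a, b)] += 1  # full counting: equal weight per pair
```

In the map, the resulting pair counts determine link strength, and total occurrences determine node size, which is why the most recurring keywords appear as the widest poles in Figure 5.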
Two classes of observations emerge from the analysis of the keywords depicted in Figure 5. First, it is possible to identify the most recurring keywords in the class of articles considered, namely robotics, human, and social robots, alongside which emerge some keywords that could have been imagined, such as diseases, aged, adult, autism, and artificial intelligence, and some less obvious ones, such as economic and social effects, sustainability, male, priority journal, and cohort analysis. Focusing on the less obvious keywords, economic and social effects and sustainability draw attention to the strong economic and social motivations underlying the investment in social robotics for health. The appearance of the keyword male is interesting, as it alludes to an aspect of gender that needs to be explored, also because robots have no inherent gender, and their entry into a society of gendered individuals represents an element of complexity. The term priority journal also needs to be explained, because it can have a multiplicity of meanings, in this case generally representing the set of priorities of the 2030 Agenda for Sustainable Development Goals. The presence of the term cohort analysis, on the other hand, alludes to the fact that even technical studies of social robotics can be validated through experimental interaction approaches on human subjects, using methods typical of medical disciplines. A second class of observations can be made on the macroclasses of keywords formed by proximity, which do not exactly coincide with the color segmentation assigned automatically by the software in Figure 5. One notices keywords typical of technical disciplines on the left and keywords typical of medical disciplines on the right, but in the middle of the picture, keywords typical of the social sciences and a kaleidoscope of other disciplines dominate, holding technology and medicine together through a complex web of cultural glue.

2.2.3. Analytical Results

The analytical review has led to the topic arrangement reported in Figure 7. A second classification layer has been identified for the following blocks: “Movements”, “Sensors”, and “Human–robot interaction”. It has been used to direct and focus the discussion phase on specific targets of interest. The results in Figure 8 show which topics are most addressed by the documents in the reduced dataset. The indexed labels reflect a homogeneous treatment of all types of technical equipment, with particular attention paid to the “Sensors” and “Algorithms” fields.

2.2.4. Maturity Results

The reduced dataset has been treated by investigating the technology readiness level (TRL) of the described solutions to analyze the development status of the research projects. This type of evaluation, reported in Figure 9, can be a useful parameter to investigate and discuss the implementation of social robots in real cases and field trials. Moreover, the TRL analysis can fill the gap between the SL, PL, and MA analyses.

3. Patents

3.1. Materials and Methods

The patent research was conducted on the Espacenet database. The research was performed for the last time on 3 January 2022, considering patents published between 1995 and 2021. An ADVANCED SEARCH based on “all text fields or names” was conducted using the string “(Social) AND (Robot) AND (Healthcare)”, which produced 1985 results, collected in the main dataset. This is composed of all the available results, since at this step of the process no exclusion criteria have been applied to select a specific patent inside the patent families. In order to identify the application information of every patent in the main dataset, focusing the subsequent analysis on a narrower and more specific collection, the international patent classification (IPC) index has been considered. The IPC is a classification system that organizes inventions and their documents into technical fields covering all areas of technology. The IPC has a hierarchical structure and is subdivided into sections, classes, subclasses, groups, and subgroups [1]. The analysis of the main dataset has been based on three specific IPC classes individuated by a group of experts:
  • B25, hand tools; portable power-driven tools; manipulators [1];
  • A61, medical or veterinary science; hygiene [1];
  • G06, computing; calculating; counting [1].
Thus, the main dataset has been treated, and 512 patents have been excluded from further analysis. Once this first selection criterion was applied, 1457 patents belonging to at least one of the previously reported IPC classes were identified. These results have been arranged in the filtered dataset; to perform a more detailed analysis, title and abstract have been considered to classify them as follows:
  • Marked “III” patents: These documents are related to robots involved in social and/or crowded environments for personal assistance, medical, and telepresence purposes.
  • Marked “II” patents: These documents are related to non-specific embodiments and general technologies that are not yet implemented in the social robotics field, or whose aim is not expressly related to medical purposes and/or robotic applications.
  • Marked “I” patents: These documents include systems that are not related to social robots and their application in medical and assistance fields. For instance, patents focused on industrial devices or on methods and products used to treat infections and diseases have been marked with tag “I”.
In the following, Figure 10 reports the applicants per country, whereas Figure 11 and Figure 12 analyze the earliest priority and publication dates, respectively. Figure 13 finally collects the applicants by number of documents. These figures allow describing at a glance the field under investigation in terms of temporal, geographical, and industrial coordinates. Moreover, further research may be oriented differently to develop more specific strings for the Espacenet search engine, involving a narrower group of countries or applicants.
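The IPC-based pre-filtering of the main dataset amounts to a prefix match on each patent's classification codes. The following is a hedged sketch: the record layout is invented and does not reflect the Espacenet export format, but the retention rule (at least one code under B25, A61, or G06) is the one described above.

```python
# Sketch of the IPC class filter: a patent is retained if at least one
# of its IPC codes falls under B25, A61, or G06. The record layout
# ("id", "ipc") is illustrative, not the Espacenet export format.

IPC_CLASSES = ("B25", "A61", "G06")

def in_scope(ipc_codes: list[str]) -> bool:
    """True if any IPC code belongs to one of the selected classes."""
    return any(code.startswith(IPC_CLASSES) for code in ipc_codes)

patents = [
    {"id": "P1", "ipc": ["A61B5/00", "G06F3/01"]},  # kept (A61 and G06)
    {"id": "P2", "ipc": ["C07D401/12"]},            # excluded (chemistry)
]

filtered_dataset = [p for p in patents if in_scope(p["ipc"])]
print([p["id"] for p in filtered_dataset])  # ['P1']
```

Because IPC codes are hierarchical strings (section, class, subclass, group, subgroup), matching on the three-character class prefix is enough to cover all subordinate subclasses and groups.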

3.2. Results

The main dataset of 1985 results obtained by the Espacenet database has been treated as mentioned in the previous section. Figure 14 shows the flowchart of the applied data selection process.
The final set of 74 collected results has been stratified according to the same classification labels used in the scientific literature research and reported in the previous figure, Figure 6. For each patent family, the analytical review of the final dataset has been carried out on the patent presenting the most recent publication date; thus, the resulting treated documents number 42. In order to determine which areas are most addressed by the collected patents, the frequency distribution of documents among the identified labels has been investigated: Figure 15 presents the most frequently addressed labels for patent classification and the corresponding number of patents that refer to each specific label.
Figure 16 reports the IPC groups most frequently present in the main dataset, whereas Table 2 allows correlating every IPC group with its frequency of occurrence.
Figure 17 shows a comparison between SL and PL analyses in terms of indexed labels and their overall collection. It must be considered that the SL dataset (77 items) and PL dataset (42 items) have different dimensions. The overall collection made by the sum of indexed labels of scientific and patent datasets aims to identify the dimension of the literature investigated in the following Section 5 “Discussion”.

4. Market

4.1. Materials and Methods

The market analysis has been carried out between June and August 2021 by querying the Google search engine. This heuristic research has been focused on commercialized products and the real-world application of findings related to the scientific and patent filtered literatures. Thus, in this case, random queries based on the "Social", "Robots", and "Healthcare" keywords have been run. In order to obtain detailed information on the social robotics market and its dimension, documents that consider economic, social, and political aspects have not been excluded a priori from the collection of final results. The use of a single search engine constitutes the main limitation of this research process; possible improvements may be related to the choice of other web search engines as additional data sources. Finally, the same taxonomy adopted in the analysis of the scientific literature for the study of the reduced dataset has been used to classify the found products.

4.2. Results

The integration of data collected by the market analysis has been preceded by considerations about market dimensions, goals, and applications of social robots in specific medical and assistive tasks. Concerning service robots, i.e., those social robots specifically able to assist humans in performing useful tasks, the analysis of their sales by industry in 2014 shows that robots were more involved in defense, farming, and logistics than in the medical field [8]. However, in the last decade, the aging of the population has increased the need for support systems for medical personnel. In fact, it is expected that the social robots' market capitalization will grow annually by 12.68%, starting from USD 395.577 million in 2019 [8]. This amount is related to several devices which have already been deployed in hospitals for surgical purposes and physical therapy (e.g., exoskeletons), but many other fields will be improved by robotics in the near future. For example, SARs (socially assistive robots) will be very useful in taking care of people affected by dementia or neurodegenerative diseases. Several companion bots such as PARO [9] and AIBO [10] have already been deployed in care houses or private homes, and their efficacy has been confirmed. Thus, they represent both effective solutions currently in use and a starting point for the economic growth of this sector. As suggested by the analysis of the Scopus database, North America and Pacific Asia will benefit from their research programs in social robotics, and they will act as protagonists in the global market by the end of 2026. For instance, in 2018 the Japanese government allocated a USD 100 million investment to develop and deploy nursing robots. South Korea is also currently devoting great resources to social robots in healthcare. In fact, according to the World Bank's estimations, the aging population of South Korea is rapidly increasing and is expected to surpass that of Japan, reaching 37% by 2045.
Further impetus for investment has been provided in the last two years by the COVID-19 pandemic, during which robots have guaranteed both the care of people and social distancing. In this specific situation, exploiting a medical robot classification proposed by Khadidos (i.e., disinfecting/spraying robots, robotic hospitality, telepresence robots, and surgical robots) and based on their main function, it can be observed that devices involved in robotic hospitality (i) and telepresence solutions (ii) are the most promising for the near-future medical robot market. Considering the two above-mentioned categories, the robots' technical endowment must guarantee:
(i)
Food and medication distribution (e.g., Sona-2.5, Zafi medic robot, KARMI-Bot, and CO-Bot) through a high degree of automatic handling (e.g., SLAM algorithms) and load-carrying capacity (above 15 kg);
(ii)
A high level of robot interaction capacity with users or patients. Touchscreens, displays, and cameras constitute essential devices to provide teleconference ability to the robots (based on WebRTC, Web Real-Time Communication), while algorithms such as speech recognition, emotional state recognition (e.g., monitoring systems based on deep neural networks), and SLAM localization allow robots to collect data on people and the environment [11].

5. Discussion

Social robots are becoming increasingly significant in healthcare, and technical advancements will increase their usage and impact. The emotional support these robots provide may benefit patients' health, and these systems may improve healthcare accessibility, efficiency, and cost. One of their biggest advantages in healthcare is emotional support: according to research, social robots may reduce anxiety, depression, and other mental illnesses. Social robots have been applied to several disorders in psychiatry, pediatrics, geriatrics, and rehabilitation. Interactive play and behavioral feedback from social robots have helped autistic patients improve their speech and social abilities. In the case of dementia, social robots have been utilized to provide cognitive stimulation and companionship to patients through talking, playing games, and sending daily reminders. In the rehabilitation phase, stroke patients have utilized this technology to receive tailored feedback, assistance with exercises, and motivation for recovery. In addition, social robots have been utilized to provide therapy, tailored support, and advice for various mental illnesses, such as depression and anxiety, as just mentioned, supporting patients through behavioral activation approaches. Social robots are also being employed in the healthcare sector to improve patient communication and education. For instance, a social robot may be utilized to educate patients on their medical issues, prescriptions, and therapies. This can aid individuals in comprehending their healthcare requirements and improve their overall health. Social robots allow healthcare practitioners to offer distant care, lowering costs and increasing access. This technology can monitor patients at home, eliminating the need for hospitalization and frequent doctor visits; this has also freed up healthcare resources for those with the greatest need. A second effect concerns the impact of social robots on healthcare personnel.
Social robots can alleviate the workload of healthcare professionals by performing routine tasks such as collecting vital signs and patient data. This permits physicians and therapists to concentrate on more complex duties and spend more time with patients. In addition to these effects, social robots have contributed to an increase in patient satisfaction with healthcare experiences. Social robots have made patients feel more comfortable and involved, improving their treatment experience and increasing the likelihood that they seek further care. Thus, this technology might enhance patient outcomes, save healthcare costs, and boost treatment availability.

5.1. Embodiments

Social robots are involved in various fields, and their features are strictly related to the functions they have to perform. Thus, functionality and structural and aesthetic characteristics must be completely synchronized, and different types of social robots, such as those involved in entertainment, companionship, monitoring, delivering supplies, or rehabilitation, are subject to different engineering processes (i.e., conception, design, and manufacturing) [12]. Social robots can take several different embodiments depending on the target people, the working environment, and the tasks to carry out:
  • Humanoid robots are systems characterized by human-like appearance. They can present a virtual or a physical face; the first is shown on a screen or by LED arrays where the mouth, eyes, and nose can be displayed; the second can be realized by three-dimensional printing or other processes and be coated by synthetic skin or left uncoated. Moreover, they can present legs and/or arms to improve their capability of interaction.
  • Pet companion-bots are systems designed to replicate the shape of a companion animal such as a cat, dog, or seal. The most famous embodiments are AIBO and PARO. They are used for pet therapy, when using real animals could be difficult or impossible due to allergies or when the patient is not able to take care of the companion.
  • Telepresence robots are systems mounted on wheels with a motor drive unit. They have a vertical structure which ends with a display or a touchscreen, capable of showing video calls or a virtual human face to better interact with people.
This latter robotic field in particular is rapidly progressing and benefits from other relevant technologies [13,14,15]. A classification of different robots according to their embodiments is listed in Table 3.
The patent state of the art usually presents generalized embodiments with unrecognizable animal or human traits, because these innovations are applicable to a wide range of robots with different embodiments. Different patents have defined the main characteristics and the principles of operation, whose customization according to the final users' needs in specific environments will be established subsequently [58,59,60,61,62,63,64,65,66,67,68]. Products currently available on the market exhibit several types of embodiments intended for different application sectors:
  • Small-sized robots are desktop-designed bots, a very common solution inside the market. They are involved in children’s educational environment and personal assistance (i.e., Dinsow Mini [69], Little Sophia [70], Buddy PRO [71], ELLI-Q [72], SOTA [73], and CANBOTU05 [74]);
  • Large-sized robots are involved in healthcare environments to deliver drugs in various hospital wards (i.e., Relay [75], Tug T3 [76], Moxi [77], and CSJBOT [78]) or as receptionists (i.e., CSJBOT robots [79,80,81] and ROBOVIE R3 [82]). This type of social robot also includes more realistic human-like devices involved in university research, such as Sophia [83].
In this context, the social robot Sophia can be considered an illustrative case to clarify the adopted classification: the robot has been specially developed for university research, to experiment with advanced artificial intelligence and ever more human-like behaviors and appearance. Instead, its small-sized version Little Sophia is involved in schools and educational environments with the aim of bringing children together with the social robot ecosystem. The ultimate purpose of Little Sophia is first to introduce kids to coding and technology.

5.2. Appearance

5.2.1. Description of the Uncanny Valley Phenomenon

Concerning the customization and the appearance of social robots, considerations about the Uncanny Valley phenomenon have been reported by Zia-ul-Haque et al. [84]. This research was based on the model proposed by Mori M. in 1970, who theorized that robots too similar to humans evoke negative emotions and fear in people. Zia-ul-Haque et al. [84] describe an increase in positive human response when the robots' appearance is close to that of humans but the mechanical nature of the systems is still recognizable, and people are interested in interacting with them. The Uncanny Valley sets in when the mechanical aspect of the system is identified only with difficulty, producing a feeling of unease and disturbance (lowest point of the curve): robots as a result are unattractive and transmit mistrust, and people are reluctant to interact with them. The highest level of the curve is reached when there is theoretically no difference between the cyber entity and the human entity involved. Similar results have been collected when studying the acceptance ratio depending on the complexity of behavior. For instance, particular attention has been paid to implementing human skills in the gait of robots. Amira et al. [85] proposed kinematic and dynamic models of a humanoid female prototype that is able to swing its arms during its gait. In this case, the proposed solution ensures realistic and easy-to-control movements through separate consideration of the lower and upper regions of the robot's body. Moreover, the application field of the Uncanny Valley curve has been extended, and it can also be adopted to study more complex phenomena. For instance, Chung et al. [86], starting from the theory of Mori M., have investigated how robots' level of anthropomorphism can affect users' privacy perception.
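As a minimal illustration of the curve's qualitative shape, affinity can be sketched as a rising trend with a sharp dip near, but short of, full human-likeness; the functional form and numeric constants below are arbitrary assumptions chosen only to reproduce this behavior, not Mori's data:

```python
import math

def affinity(human_likeness: float) -> float:
    """Toy Uncanny Valley model: affinity rises with human-likeness,
    but a sharp 'valley' dip appears when the robot is almost, yet
    not quite, human (here centered at 0.8)."""
    base = human_likeness  # overall rising trend
    valley = 1.4 * math.exp(-((human_likeness - 0.8) ** 2) / (2 * 0.05 ** 2))
    return base - valley
```

In this sketch, a near-human robot (0.8) scores lower than a clearly mechanical one (0.5), while a hypothetical perfect replica (1.0) recovers to the top of the curve.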
Finally, the ultimate goal of research teams involved in social robot development is to realize devices capable of providing personalized assistance related to a very specific health context but characterized by a high degree of adaptability to different environments [87]. In this particular scenario, social robots can be perceived as effective social companions and engage in positive interactions with patients.

5.2.2. Design Guidelines

From an aesthetic point of view, the adoption of 3D printing can instead be suggested as a possible way to design highly customizable social robots and avoid the Uncanny Valley phenomenon. An example of the application of this approach is MaFaRo [88], a Many Faced Robot that can adopt different appearances according to the situation. It constitutes an optimal starting point to develop a low-cost head for social robots. The functionalities of a conventional robotic head have been guaranteed, considering that the manufacturing process involves 3D printing with commercially available desktop printers. Another essential aspect of MaFaRo is its modularity, whereby each part may be plugged and played easily without using any screws. Finally, as mentioned above, designing robots' appearance and behavior in order to avoid the Uncanny Valley effect may affect the emotions evoked in people by the systems. In this sense, technological improvements have been collected both from the patent and the bibliographic analysis. Stiehl et al. [40] have proposed the Huggable, a small bear robot involved in nursing homes and hospitals. An ad hoc designed sensitive skin has been implemented, in which tactile information from electric field sensors and force sensors is coupled with signals from potentiometers and thermistors to obtain a specific somatic map. Hug-ability and affective touch have been demonstrated to be particularly relevant also in the patent field. Boyle et al. [89] have filed a patent that proposes the general structure for a huggable companion bot to assist individuals with mental illness. The system is composed of a first robust layer, capable of sustaining a composite clothing made of a memory foam layer and synthetic fur.

5.3. Degrees of Freedom

Social robots are required to move nimbly in hospitals, homes, and other human environments. In order to be socially interconnected with people and to help them in daily life, they may have actuated joints (axes) to perform limb motions, as well as motors that allow the eyelids to blink, the gaze to focus on objects and interlocutors, or other expressive motions. For social robots used in medical environments, two main categories have been theorized (Table 4):
  • Highly actuated robots (HAR), that are cyber-physical systems in which several actuators and sensors (e.g., encoders) associated with the respective movable joints allow realistic movements. In this way, robots (humanoid or pet companion bots) are capable of better reproducing the motional behavior of their natural counterparts. For instance, NAO and PEPPER robots (Softbank Robotics) can be grouped in this section;
  • Slightly actuated robots (SAR), that are cyber-physical systems in which some actuators, located at specific relevant points on the robot, allow carrying out significant yet elementary movements, such as gaze-following or other human-like behaviors. For instance, JIBO [90] and VITA (InTouch Health) can be grouped in this section.
Slightly actuated robots are the preferred embodiments for telepresence robots, nursing robots, and telemedicine robots; on the other hand, highly actuated robots are the preferred embodiments for extremely realistic systems where the human-like aspect is essential. This last type of device may include commercial robotic platforms used for research, such as Robovie-PC [91]. Every social robot must have actuators or servomotors in order to move and complete tasks. In their study [41], Bethel et al. used Dynamixel AX-12A servomotors on Therabot, an adaptive therapeutic support robot. The same devices were used by Salichs et al. [92] to develop the desktop robot Mini, which included them in the base, arms, and neck. As a result, it is possible to conclude that servomotors are interesting solutions for pet companion bots due to their low weight, high stall torque, and small size. This type of actuator is also used in some Robotis platforms and in general in four-leg and six-leg robots; it allows creating motion even through servomotor chains (daisy-chaining). These servomotors consist of a fully integrated DC motor with gearhead reduction and a controller with its own driver and network. They have been used in the Bioloid robot [93], a highly actuated robot (18 degrees of freedom) aimed at encouraging the elderly to accomplish 15 min of physical exercise. Other Dynamixel servomotor versions (MX-64AT and MX-28AT) have been implemented in the social robot "Arash" [32], whereas a geared brushed DC motor has been preferred for the mobile base. Regarding robots' movements and navigation, telepresence and telemedicine robots are developed preferring wheels or omnidirectional bases, managed by motor control units. For instance, to minimize the magnitude of errors in tracking the position and orientation of four-wheeled robots, Hasan et al. have proposed and evaluated a hybrid controller, combining Backstepping-Type 2 fuzzy logic control and social spider optimization [94].
It performs better than traditional controllers, thanks to the backstepping controller used to compute the torque on the wheels, while the fuzzy logic control and the social spider optimization are used to compute the gain parameters. Moreover, robots' bases are typically surrounded by bumpers, as in the Maggie robot [95] or Turtlebot 2 [48] for example, to prevent damaging collisions. Mobile bases may allow connection to docking stations that may be equipped with solar cells capable of providing electric energy to charge the entire system [96]. Moreover, specific motors may be implemented inside social robots to carry out specific tasks, such as the motors for vibrational cues provided by the penguin companion bot developed by IKKIWORKS PTY LTD [97] or by the therapeutic robot for the elderly by UNIV HONG KONG POLYTECHNIC [98].
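Underneath any of the wheeled-base controllers discussed above sits a basic kinematic model of the base itself. A minimal sketch of a differential-drive pose update (generic symbols and units, not tied to any cited controller) could be:

```python
import math

def diff_drive_step(x, y, theta, v_left, v_right, wheel_base, dt):
    """One odometry step for a differential-drive base: the two wheel
    speeds (m/s) are combined into linear and angular velocity, and
    the pose (x, y, theta) is integrated over dt seconds."""
    v = (v_left + v_right) / 2.0             # linear velocity of the base
    omega = (v_right - v_left) / wheel_base  # angular velocity (rad/s)
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta
```

Equal wheel speeds move the base straight ahead, while opposite speeds rotate it in place, which is the behavior a higher-level tracking controller then shapes.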

5.4. Sensors

Several different sensing devices are implemented in social robots in order to create an "analogic to digital bridge", interfacing the cyber-physical systems with the environment and people. Sensors applied in this field can be considered as sensory organs and networks, analogously to their human counterparts. In this way, robots not only resemble human entities and are able to relate with them, but with specific medical sensors robots can help the elderly, people with impairments, or children with cognitive disorders. In this section, the sensing devices have been grouped depending on their functionality: tactile sensors, audio sensors, video sensors, RFID devices, and sensors for medical purposes. All these macro-groups are described and analyzed in the following subsections.

5.4.1. Tactile Sensors

Tactile sensors constitute a composite and engineered skin that coats specific surfaces or parts of social robots. For instance, they are placed on the hands and arms of the NAO and PEPPER robots, allowing them to perceive the environment and touch objects and people. Networks composed of tactile sensors and pressure sensors are essential in order to obtain human-like behavior in robots, not only in humanoid systems during hugs, for example, but also in pet companions such as PARO. In this case, several tactile sensors are located under a synthetic fur coating and give feedback to elderly people when they touch or stroke it. The ultimate goal of providing this artificial sense is to realize a strong link between robots and humans. Commonly, social robots adopt array structures of tactile sensors; in order to realize easily manufacturable and lower-cost solutions, innovative flexible array-less tactile sensors have been developed [99]. Willemse and Van Erp described the importance of the touching phenomenon [100]: if social touch is carried out in a realistic and precise way, the engagement between humans and machines can be improved, as happens in human–human interaction. In general, touches are complex actions that can reduce stress and anxiety and can enhance pro-social behavior. In order to assure a realistic touching phenomenon, Cabibihan et al. [101] have conducted experiments on synthetic skin that may cover the forearm, palm, and fingers of a social robot arm. Thus, a power control scheme has been proposed that can regulate the surface temperature of robots' anthropometric regions to match the corresponding human body parts (e.g., arms and hands).
Currently developed tactile sensors are constituted of thin polymer sheets with piezoelectric or piezoresistive properties; moreover, they can be haptic devices with magnetic or optical properties, although in this case their implementation in social robotics is at the earliest stage. Mazzei et al. [102] have proposed an innovative technology based on a stretchable silicone touch-sensitive surface. This type of sensor records a pressure field which is elaborated by specific algorithms and exhibits a behavior similar to human mechanoreceptors.
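The somatic-map idea used by systems such as the Huggable can be illustrated with a toy aggregation step; the taxel-to-region assignment below is a made-up example, not the published mapping:

```python
def somatic_map(pressures, region_of):
    """Aggregate raw tactile readings into a per-region somatic map.
    `pressures` lists one value per taxel; `region_of[i]` names the
    body region (e.g., 'head', 'back') the i-th taxel belongs to."""
    regions = {}
    for value, region in zip(pressures, region_of):
        regions[region] = regions.get(region, 0.0) + value
    return regions
```

A reading concentrated on the 'back' taxels could then trigger a stroking response, as in PARO-like companion robots.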

5.4.2. Audio Sensors

Audio sensors (microphones) are an essential part of social robots. Single-microphone systems have been gradually substituted by microphone array systems, in which a plurality of sensors allows capturing sound from the entire surrounding environment, in order to obtain a natural relationship with humans and eventually locate them through voice perception alone. Complex systems such as Pepper [22,23,24,25,26] and NAO [16,17,18,19] present composite audio systems with four directional microphones; similarly, the tabletop robot KURI is equipped with a microphone array [103].
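Locating a speaker with a microphone array rests on estimating inter-microphone time delays. A minimal two-microphone sketch using plain cross-correlation follows (a real array would use more robust estimators such as GCC-PHAT, and would then convert the delay to a direction using the microphone spacing and the speed of sound):

```python
import numpy as np

def estimate_delay(mic_a, mic_b):
    """Estimate the arrival delay (in samples) of mic_b relative to
    mic_a via the peak of their cross-correlation."""
    corr = np.correlate(mic_b, mic_a, mode="full")
    return int(np.argmax(corr)) - (len(mic_a) - 1)

# Synthetic example: the same burst reaches mic_b 5 samples later.
rng = np.random.default_rng(0)
sig = rng.standard_normal(256)
mic_a = sig
mic_b = np.concatenate([np.zeros(5), sig[:-5]])
```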

5.4.3. Video Sensors

A large number of different video sensors are implemented in social robots in order to record video parameters about the surrounding environment and humans. The main sensors used in social robotics are RGB cameras, thermal cameras, wide-angle cameras, and three-dimensional cameras (stereo vision cameras and depth cameras). Different camera technologies allow obtaining different results according to the image recognition algorithms. Depth cameras, such as Kinect-type sensors, are constituted of infrared sensors coupled with an RGB camera, where the acquired data are represented in three-dimensional space through proper mathematical models based on the emission and detection of infrared rays. Examples of these technologies applied in social robotics are the Microsoft Kinect [104,105] and the Asus Xtion Pro Camera [106]. Stereovision cameras, based on the stereovision principle, typically require two or more sensors to obtain three-dimensional information about a subject through triangulation, as adopted, e.g., in the Bumblebee2 FireWire camera [107]. Video sensors are not used just for facial and object recognition and space mapping; for instance, in telepresence robotic systems, dedicated cameras may exist in order to provide images of humans during a remote conversation. Other types of devices, such as thermal cameras, have been adopted in human–robot interaction [108] or, specifically, to recognize sign language, offering a new way to communicate with impaired people through a deep learning-based approach [109]. Commonly, the cameras are situated on the head or trunk of a humanoid robot or at specific and strategic points for non-humanoid systems. For instance, in the telepresence robot RP-VITA (InTouch Health), a camera is incorporated in the upper part of the larger display to record the patients and can be frontally rotated. The camera is mounted at the same level as an average-height interlocutor in an upright position.
Thus, in remote medical examinations, the faces of the involved people can be at the same height, generating an adequate interaction between them. Finally, QR code scanners may be useful for identifying patients and their clinical history; an example of this solution is adopted in the Tico robot [110].
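The triangulation principle behind stereovision cameras reduces, in the rectified two-camera case, to the classic depth-disparity relation Z = f·B/d; the numbers in the sketch below are illustrative:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo triangulation: depth Z = f * B / d, where f is the focal
    length in pixels, B the baseline between the two cameras in meters,
    and d the horizontal pixel shift of the same feature."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A feature shifted by 40 px, seen by cameras 12 cm apart with
# f = 800 px, lies 800 * 0.12 / 40 = 2.4 m away.
```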

5.4.4. Navigation Sensors

The navigation system of a social robot can be highly complex and is realized with different sensors. These devices and the vision sensors, mounted in different positions, can work together to assure collision avoidance, human following, and autonomous navigation inside environments, or to aid teleoperation during remote control of the robot. Many types of sensors may be adopted to allow robot navigation, such as ultrasonic sensors (SONAR), bumpers, LiDAR (Light Detection And Ranging) sensors, infrared sensors, IMUs (Inertial Measurement Units), and laser rangefinders. Inertial Measurement Units (IMUs) are used for the self-perception of a robot's kinematics, e.g., in order to guarantee equilibrium during motion and operation. IMUs are mainly composed of gyroscopes, magnetometers, and accelerometers [111]. Infrared sensors (IR) are used, for instance, in combination with sonar to improve robots' capability to efficiently follow physicians in hospital wards. In fact, ultrasonic sensors are adopted in some applications to map environments in a reliable and accurate way, while infrared sensors are used to define the edges of an area. In other cases, IR sensors are mounted on a nurse robot and used in combination with a marked path line on the ward floor to obtain precise following capability [112]. This last type of device may be affected by interference caused by sunlight; thus, sonar must be used to acquire precise information about the distance between robots and objects or humans, in outdoor and indoor environments. The constant innovation requested in this field has led to the application of new and more precise sensors such as LiDAR sensors. A Pepper robot application in an assistive care scenario integrated with the AMIRO framework [26] utilizes LiDAR sensors to drive its obstacle avoidance module. The main purposes of the module are to detect obstacles, plan the path, and localize the robot inside the environment.
In order to perform this task, a 360° RP1 LiDAR has been implemented with ROS and connected to an acquisition board. Advanced solutions with LiDAR sensors allow an improvement in terms of embeddability and accuracy with the aid of SLAM algorithms integrated in a SLAM module, to provide a precisely mapped environment for the robots. The most affordable solutions found in the patent literature [113,114] base obstacle avoidance on infrared sensors and sonars. In order to provide an optimal solution in terms of reliability and performance, Dongqing Du et al. have proposed a multi-sensor obstacle recognition system [115], adopting sonar, IR, and LiDAR sensors simultaneously. Table 5 reports the main features of each sensor. A proper combination of these devices allows for the accurate mapping of wide-range environments, reduces interference from lights, and improves reliability, in accordance with the desired applications.
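One simple way to combine such heterogeneous range sensors is to keep only readings inside each technology's trusted interval and report the closest confirmed obstacle; the intervals below are rough illustrative assumptions, not the values reported in [115]:

```python
def fuse_ranges(readings):
    """Fuse range readings (meters) from heterogeneous sensors: discard
    readings outside each sensor's trusted interval and return the
    closest confirmed obstacle, or None if nothing valid remains."""
    trusted = {  # assumed working ranges per technology (illustrative)
        "sonar": (0.05, 4.0),
        "ir": (0.1, 0.8),
        "lidar": (0.15, 12.0),
    }
    valid = [r for sensor, r in readings
             if trusted[sensor][0] <= r <= trusted[sensor][1]]
    return min(valid) if valid else None
```

For example, an IR reading of 0.05 m falls below its trusted minimum and is ignored, so the fused obstacle distance comes from the remaining sonar and LiDAR readings.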

5.4.5. RFID Devices

Radio frequency identification (RFID) devices are sometimes implemented in social robots in the healthcare environment [116]; they are based on the emission of radio waves at defined frequencies by emitters or solicited passive tags, which are read by scanners to extract information. The scanners are mounted on nursing robots and are able to read unique tags related to the patient in order to deliver specified drugs [117]. Moreover, RFID can also be used in a localization system based on a Particle Filter (PF) and implemented in robots [118]. In this case, deployed tags, whose positions are known, can provide 2D and 3D spatial and navigation information to robotic devices.
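A minimal sketch of this RFID-based Particle Filter idea, assuming range measurements to tags at known positions (the arena size, Gaussian noise model, and particle count are illustrative choices, not the formulation of [118]):

```python
import math
import random

def pf_localize(tag_xy, ranges, n=3000, noise=0.3, seed=1):
    """One particle-filter weighting step: score random pose hypotheses
    by how well their distances to the known RFID tag positions match
    the measured ranges, then return the weighted mean position."""
    rng = random.Random(seed)
    particles = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(n)]
    weights = []
    for px, py in particles:
        w = 1.0
        for (tx, ty), r in zip(tag_xy, ranges):
            d = math.hypot(px - tx, py - ty)
            w *= math.exp(-((d - r) ** 2) / (2 * noise ** 2))
        weights.append(w)
    total = sum(weights)
    x = sum(w * p[0] for w, p in zip(weights, particles)) / total
    y = sum(w * p[1] for w, p in zip(weights, particles)) / total
    return x, y

# Robot assumed at ~(3, 4); tag positions known, one range per tag.
tags = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
measured = [5.0, 8.0623, 6.7082]
```

A full filter would additionally resample the particles and propagate them with the robot's motion model between measurement updates.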

5.4.6. Sensors for Medical Purposes

Several specialized sensors can be successfully implemented in assistive care environments and hospitals. The main application of these devices involves the use of nurse robots and telepresence robots that allow medical personnel to visit hard-to-reach people and, in general, monitor inpatient rooms and their bio-signals. According to the results of the current analysis, sensors for medical purposes may include blood pressure sensors [119,120], blood glucose sensors [121,122], blood oxygen saturation meters [119,123], temperature sensors [124,125], humidity sensors [125], and gas sensors [124]. Finally, it has to be reported that, especially after the COVID-19 pandemic, the equipment of nurse robots can also include disinfectant sprayers and UV-C light sterilizers [126].
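A nurse robot combining such sensors would typically compare each reading against alert thresholds before reporting to medical personnel; the limits in this sketch are illustrative placeholders, not clinical guidelines:

```python
def check_vitals(vitals):
    """Return the names of the measurements that fall outside their
    allowed interval. Thresholds are illustrative placeholders only."""
    limits = {
        "systolic_bp": (90, 140),     # mmHg
        "spo2": (95, 100),            # %
        "glucose": (70, 140),         # mg/dL
        "temperature": (36.0, 37.5),  # degrees Celsius
    }
    alerts = []
    for name, value in vitals.items():
        lo, hi = limits[name]
        if not (lo <= value <= hi):
            alerts.append(name)
    return alerts
```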

5.5. Algorithms

5.5.1. Data Elaboration

Several sensors extract and transform various analogic signals into digital parameters that are computed together to define the behavior of robots during human–robot interaction (HRI), allowing robots to fulfill tasks and to move around the environment. In this way, algorithms supported by the robots' framework exploit the hardware calculation power to make robots autonomous in movement, in speaking and listening, as well as in people and object detection or environment mapping. In this sense, for instance, Ravindu et al. [107] have proposed an advanced mapping system able to identify objects and assign attributes to them, amalgamating spatial and object maps to provide a better human–robot interaction. Matsuo et al. [127] have elaborated an innovative entropy method to avoid collisions for omnidirectional platforms. Their considerations are based on the probability of collision; once the environment is mapped in blocks, each one is related to a definite entropy, and higher values correspond to a higher probability of collision. Moreover, the movement of social robots, particularly inside crowded environments, is one of the biggest sources of concern. Thus, Correa et al. [128] developed a specific tracker, the Probability Hypothesis Density (PHD) filter, implemented in a system using a laser range sensor and capable of tracking people in a crowd. Although algorithms are involved in every single feature of robots' automaticity, localization algorithms are particularly under development, being the object of numerous studies. Systems based on Particle Filter (PF) localization filled the gap left by SLAM algorithms, solving the positioning problem in an unknown environment. The PF framework can also be used in combination with the Unscented Kalman Filter (UKF) to achieve self-localization, as demonstrated in trials executed using the NAO robot. In this context, Ullah et al. have proposed UKF and PF localization algorithms to enhance localization systems involved in robotic applications and wireless sensor networks [118].
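An entropy-based collision criterion of the kind described above can be sketched in a few lines; the flat grid of occupancy probabilities and the use of binary Shannon entropy as the risk score are illustrative assumptions, not the exact formulation of [127]:

```python
import math

def block_entropy(p_occupied):
    """Binary Shannon entropy of a map block's occupancy probability;
    values near 0.5 (most uncertain) give the highest entropy and
    hence, in this sketch, the highest assumed collision risk."""
    p = p_occupied
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def riskiest_blocks(grid, k=2):
    """Return the indices of the k blocks with the highest entropy."""
    order = sorted(range(len(grid)),
                   key=lambda i: block_entropy(grid[i]), reverse=True)
    return order[:k]
```

A planner could then penalize paths crossing the highest-entropy blocks, steering the platform through regions whose occupancy is known with more confidence.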

5.5.2. Artificial Intelligence

Algorithms and frameworks together constitute artificial intelligence, in which machine learning and deep learning have been revealed to be promising techniques in the realization of more realistic social partners for people, especially in the healthcare field. Concerning the relevance of machine learning, some experiments have been conducted using the Support Vector Machine, Hidden Markov Model, and Artificial Neural Network in order to provide human activity recognition [129]. Other machine learning techniques are implemented in facial expression recognition (FER) using artificial neural network architectures (e.g., with Autoencoders), transfer functions, and regression models to predict continuous emotions (e.g., with Support Vector Regressors) [130]. Social robots may have both the capability of recognizing emotions and of expressing them. In this context, numerous scientific efforts have been made to develop robots capable of providing sympathy, an altruistic response to limit and reduce others' pain and discomfort, rather than programming empathetic robots, which still remains the main limit of this field [131]. Furthermore, every emotional aspect of the robot or way of behaving in the interaction with people can be tailored to the target user, enhancing their mutual engagement. According to these considerations, multiple kinds of personalities have been developed in the analyzed social robots [132]. To investigate the connection between the patent analysis and the scientific one, an interesting illustrative case is the patent [133] filed by Korea Ind Tech Inst. This patent is based on considerations made by C. Breazeal on Kismet's emotional space [35] and proposes an emotional robot with a lip sync unit that is able to modulate the mouth shape according to the acoustic signals emitted. Concerning frameworks, they constitute the starting point for generating algorithms that can provide robots with the ability to interact with humans.
In particular, the CORTEX framework [134], the RoboComp framework [135], the UoA Robotic Software Framework [19], KnowRob [19], NAOqi [16,26], and the AMIRO framework based on ROS (Robot Operating System) [26] have been highlighted as frameworks specific to social robotics. The artificial intelligence inside a social robot is complex, and its structure is composed of many different types of algorithms, such as speech recognition, autonomous navigation, gesture recognition, facial recognition, object detection, and emotion detection algorithms. Table 6 briefly collects the set of algorithms and AI services captured in the current analysis, depicting the main characteristics of each. The results of the patent analysis suggest that considerable effort has been made to develop AIs capable of endowing social robots with human-like behavior [63,68,136,137,138,139,140,141,142,143]. Particular importance has been given to AI embedded in humanoid robots, which may be equipped with algorithms enabling spatial-temporal and emotional reasoning. In this context, the main purpose of the patent filed by Singh et al. [136] is to build a cognitive structure able to show empathy, focus, and goal-reasoning skills. Empathy, and the emotional sphere more generally, should not be intended merely as output features provided by robots but as a means to generate bi-directional communication between social robots and humans, and several algorithms are employed to interpret inputs from users. For instance, in chat robots, the adoption of a BERT pre-trained transformer model has been proposed for recognizing emotions from text [144]. Other algorithms, namely text-to-speech algorithms, synthesize voices specialized in transmitting robot-simulated emotions and instilling trust in users (i.e., patients, children, the elderly, and medical personnel).
This is performed through different paralinguistic cues such as tone, accent, and vocal fillers. Finally, the complex system represented by HRI can benefit from the integration of all these types of algorithms into a single, more autonomous cyber entity, although it has to be noted that facial and movement cues are the most important in evoking users’ trust [145]. Thus, major efforts should be directed specifically at algorithms in these areas.
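Text-based emotion recognition of the kind performed with a BERT pre-trained transformer [144] can be pictured through the interface it exposes: text in, (emotion label, confidence score) out. The sketch below is a deliberately lightweight stand-in that fills that interface with an invented keyword lexicon rather than a trained model; the lexicon and function names are illustrative assumptions.

```python
# Toy stand-in for a fine-tuned transformer emotion classifier.
# EMOTION_LEXICON is invented for illustration; a real system would
# obtain scores from a model such as BERT [144] instead.
EMOTION_LEXICON = {
    "joy":     {"happy", "glad", "great", "wonderful", "love"},
    "sadness": {"sad", "lonely", "tired", "miss", "cry"},
    "fear":    {"afraid", "scared", "worried", "anxious"},
}

def classify_emotion(text):
    """Return (best_label, normalized_score) for a user utterance."""
    words = set(text.lower().split())
    scores = {label: len(words & vocab)
              for label, vocab in EMOTION_LEXICON.items()}
    best = max(scores, key=scores.get)
    total = sum(scores.values()) or 1  # avoid division by zero
    return best, scores[best] / total

label, score = classify_emotion("I feel sad and lonely today")
print(label)  # sadness
```

The robot’s dialogue manager can then select a sympathetic response conditioned on the returned label, regardless of whether the classifier behind the interface is a lexicon or a transformer.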

5.6. Connectivity and Hardware

Connectivity is important in collaborative robotics when a large amount of data needs to be processed and it is not possible to keep the computational hardware on board the machine for economic or technical reasons. In this case, various over-the-air technologies and data transfer protocols are generally available that enable the transfer of sufficient data for this type of operation. To create a virtual bridge between remote computers and robots in cloud-based systems, the most widely applied technologies are Wi-Fi, Bluetooth, ZigBee, and GPRS/3G/4G/5G, all wireless connections. The first three are short-range wireless technologies. While Wi-Fi and Bluetooth have become mass-adopted in consumer smart devices, ZigBee is a less common mesh wireless protocol: it is standardized as IEEE 802.15.4-2003 and has found great use in IoT and advanced communication between smart devices (e.g., wearable devices [119]). The adoption of 5G transmission increases the amount of information that can be transferred wirelessly, reduces latency, and opens the possibility of implementing more intelligent systems [158]. Concerning edge-based systems, data collected and elaborated by the robots are exchanged with computers through wired Ethernet or USB ports. Compared to a wireless connection, cabled connections have the advantage of transferring sensitive data safely, preventing possible leakage due to malicious attacks. Moreover, wires allow faster transfer speeds, making them a better solution for larger datasets or latency-critical applications.
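Whatever the physical link (Wi-Fi, Ethernet, or cellular), a common pattern for the robot-to-cloud bridging described above is to frame each telemetry message with a length prefix before sending it over a TCP stream. The sketch below shows such a hypothetical wire format; the field names and values are illustrative and not taken from any cited system.

```python
import json
import struct

def encode_frame(payload: dict) -> bytes:
    """Serialize a telemetry dict into a length-prefixed JSON frame,
    a typical scheme for streaming robot data to a remote elaborator."""
    body = json.dumps(payload).encode("utf-8")
    return struct.pack(">I", len(body)) + body  # 4-byte big-endian length

def decode_frame(frame: bytes) -> dict:
    """Inverse of encode_frame: read the length, then parse the body."""
    (length,) = struct.unpack(">I", frame[:4])
    return json.loads(frame[4:4 + length].decode("utf-8"))

# Hypothetical telemetry message; a round trip must be lossless.
msg = {"robot_id": "r1", "battery": 0.82, "pose": [1.2, 0.4, 0.0]}
assert decode_frame(encode_frame(msg)) == msg
```

The length prefix lets the receiver split a continuous byte stream back into messages, which matters on links such as 5G where several frames may arrive back to back.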
Starting from the Intel Pentium-based computer mounted on the Security Warrior [147], the realism and the activities required of social robots have become increasingly complex; the technological and scientific response to these needs has been more advanced and modular hardware. Complex architectures may require specific solutions; for instance, Meng et al. [159] present an innovative edge-based service robot whose hardware system is composed of two main parts, with every algorithm executed on a specific module according to the required computing power and the hardware architecture. The architecture is divided into two systems: the lower microprocessor (an STM32) serves as the core of the control system, while the upper microprocessor (an Nvidia Jetson TX2 board) hosts the artificial intelligence based on deep learning techniques. Another use case for the Nvidia Jetson TX2 is the social robot ARI [160], in which the GPU provides the artificial intelligence for self-learning and deep learning processes, while Intel i5 or i7 microprocessors handle general computational operations. The literature presents some highly complex systems in which every part is specialized for a specific task, allowing the construction of smart robots capable of interacting with humans and the environment in natural and pro-social ways. Each hardware architecture is designed according to the goals the robot has to achieve and the features needed in its field of work. In general, parallel computing devices can be kept on board the machine for fast computation associated with social and security functions that require a quick response, such as computer vision. All computational elements associated with mechanical functions must remain on board the machine, whether they are strictly social, such as gestures, or instrumental to sociality, such as movement in an unstructured human social environment.
Computational systems for more complex functions, such as verbal communication, must or may reside remotely for the time being, depending on the level of cognition to be incorporated into the robot (see Figure 18).
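The on-board versus remote placement logic discussed above can be sketched as a small task router that assigns each function to on-board or cloud execution based on its response-time requirement. The threshold value, task names, and API below are assumptions made for illustration, not a description of any cited system.

```python
# Assumed deadline threshold below which a function must run on board
# (e.g., obstacle avoidance); slower, compute-heavy functions (e.g.,
# dialogue) may be off-loaded to remote hardware.
ONBOARD_DEADLINE_S = 0.05

class TaskRouter:
    """Route robot functions to on-board or cloud execution."""
    def __init__(self):
        self.tasks = {}

    def register(self, name, fn, deadline_s):
        placement = "onboard" if deadline_s <= ONBOARD_DEADLINE_S else "cloud"
        self.tasks[name] = (fn, placement)

    def placement(self, name):
        return self.tasks[name][1]

    def run(self, name, *args):
        fn, _ = self.tasks[name]
        return fn(*args)

router = TaskRouter()
router.register("obstacle_check", lambda dist: dist < 0.3, deadline_s=0.01)
router.register("chat_reply", lambda text: "Hello, " + text, deadline_s=1.0)
print(router.placement("obstacle_check"))  # onboard
print(router.placement("chat_reply"))      # cloud
```

In a real robot, the "cloud" branch would dispatch over the network, while the "onboard" branch would call into local hardware; the routing decision itself is what Figure 18 of the reviewed literature captures.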

5.7. Human–Robot Interaction

The social aspect of the robot is based on the depth and intricacy of its interactions with the human. Undoubtedly, engaging in close proximity while ensuring the safety of the human subject can promote a sense of positive social interaction. In this context, the role of actuators [161,162], the implementation of safe compliant transmissions [163,164,165] and control systems [166,167], and the use of intelligence to anticipate intentions by analyzing human movements [168,169] are of the utmost significance. Certainly, the fundamental components of auditory and visual communication [170], along with the complex cognitive abilities required for comprehending language, visual stimuli, and audiovisual content, serve as the foundation upon which the social robot can fulfill its intended purpose. In addition to typical human capabilities, the robot can gather supplementary information through its on-board sensors, which can perceive audio and optical phenomena beyond the typical human range. The Internet of Things (IoT) [171,172] enables the transmission of information from a sensorized environment to a robot. For instance, Zhao et al. [173] proposed a method, involving inertial measurement units (i.e., wearable sensors) positioned on the patient’s body, to recognize a subject’s emotional state through convolutional neural network (CNN) processing of the inertial signals. Furthermore, wearable sensors facilitate the direct exchange of additional information between a human subject and the robot [174,175,176]; in the case of active devices [177,178], this exchange occurs in both directions. The availability of extensive information from curated databases or direct access to the Internet offers a valuable repository of knowledge that can be exploited for effective socialization [179].
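CNN-based processing of inertial signals, as used by Zhao et al. [173], rests on convolution and pooling primitives. The plain-Python sketch below shows only those primitives on a synthetic accelerometer trace; the kernel, signal, and the single-feature output are invented for illustration and are not from the cited work, which trains many such filters end to end.

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as in CNN layers)
    over an inertial signal."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    """Standard CNN nonlinearity: clamp negatives to zero."""
    return [max(0.0, x) for x in xs]

def global_max_pool(xs):
    """Reduce the filtered signal to its strongest response."""
    return max(xs)

# Synthetic accelerometer trace: a sharp spike on a flat baseline.
imu = [0.0, 0.0, 0.1, 1.0, 0.1, 0.0, 0.0]
edge_kernel = [-1.0, 2.0, -1.0]  # responds to sudden changes

feature = global_max_pool(relu(conv1d(imu, edge_kernel)))
print(round(feature, 2))  # 1.8
```

A trained network stacks many learned kernels of this kind and feeds the pooled features into a classifier over emotional states.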
Furthermore, the availability of powerful parallel computing hardware platforms is a crucial factor in augmenting the analytical and generative capabilities of social robots [180]. Hence, it can be inferred that the convergence of the aforementioned technologies will culminate in the development of progressively more complex and independent social robots. These robots are complex mechatronic systems [181] empowered by artificial intelligence [182] and possess the capacity to engage in meaningful communication with human counterparts, imparting novel insights and perspectives that have the potential to enhance interpersonal connections. Consequently, this dynamic interaction will foster an engaging and continuously evolving relationship, owing to the remarkable capabilities inherent to the robot.

5.7.1. Input Control Device

Social robots in healthcare and medical environments can be guided by expert operators through a remote workstation, can be fully autonomous, such as the follow-robots that aid physicians and nurses in their daily work routine [183], or may be controlled by the patients themselves. Moreover, robots used in the neurorehabilitation field, such as “Jessie”, do not require a complex workstation, since a simple application running on a tablet can manage them [103]. In addition, it has been demonstrated that social robots positively impact early-stage education; thus, many of them are involved in diagnostic trials and medical research programs involving children affected by neurological diseases. In one case, the NAO robot (recognized by subject matter experts as the main teaching robot) was employed with the Wizard of Oz approach to investigate children with dysgraphia [21]. This type of control allows the physician or the educator to completely manage robot behavior. It is mandatory that device controls and the degree of autonomy reflect caregiver, medical personnel, and patient needs. External controls can be realized in very different ways according to the type of robot and the type of interaction with humans. Inputs may be sent to the robot through a joystick, either virtual (on a screen) or physical, or through a specific application with a GUI (graphical user interface). Different design solutions can be found in the literature to optimize the interaction; e.g., patents filed by IROBOT CORPORATION and IN-TOUCH HEALTH describe some effective and simple interfaces [184,185]. Furthermore, headsets for brain–computer communication [16,22] as well as voice and gesture controls offer innovative ways to enrich the interaction.
For instance, CHONGQING YOUBAN TECH CO LTD has filed a patent [123] in which an anthropomorphic system recognizes its specific partner by voice and remains in standby mode when other people try to interact with it. In other cases, social robots may be deployed in hospitals to assist the physician rather than the patient directly. Biswas et al. developed a touch-less nursing robot in which an integrated voice recognition module is connected to an Arduino MEGA [186]. The system can be controlled by a physician’s voice, for example to open their lockers or to be switched off. Moreover, RND GLOBAL CO LTD has filed a patent [122] for a wellness and human care robot able to provide several biological measurements (i.e., urine test results, blood glucose, blood pressure, body temperature, and body weight) that can be controlled by an elderly user’s voice. Another field in which social robotics is especially useful and may be further implemented is childcare. In this context, parents or educators may be remotely involved in children’s activities thanks to the physical presence of a social robot that can be fully autonomous or remotely controlled. In the first case, an HMD (head-mounted display) gives the parents the possibility of visualizing what the robot is looking at and doing; in the second case, the robot can be guided, providing an alter ego of the adult [187]. Concerning the hospital environment, as proposed in the patent [188] filed by WEIN LEILA MAE, the medical robotic system has been programmed to operate in three modalities: the autonomous decision-making mode, providing output based on the robot’s embedded artificial intelligence; the doctor collaborative mode, in which artificial intelligence outputs are merged with medical staff experience; and the patient collaborative mode, in which the patient is interviewed by a robot interviewer to establish direct communication.
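The three operating modalities described in the patent filed by WEIN LEILA MAE [188] can be sketched as a simple mode dispatcher. The merging rules below (e.g., staff input overriding the AI output, the interview being attached to the suggestion) are invented for illustration and are not claimed by the patent.

```python
# Illustrative dispatcher over the three modes described in [188];
# the decision-merging logic is hypothetical.
def decide(mode, ai_output, staff_input=None, patient_answers=None):
    if mode == "autonomous":
        # Output comes from the robot's embedded AI alone.
        return ai_output
    if mode == "doctor_collaborative":
        # Merge AI output with medical staff experience: staff can veto.
        return staff_input if staff_input is not None else ai_output
    if mode == "patient_collaborative":
        # The robot interviews the patient and attaches their answers.
        return {"suggestion": ai_output, "interview": patient_answers or []}
    raise ValueError(f"unknown mode: {mode}")

print(decide("autonomous", "administer dose A"))
print(decide("doctor_collaborative", "dose A", staff_input="dose B"))
print(decide("patient_collaborative", "dose A",
             patient_answers=["mild pain", "no allergies"]))
```

Keeping the mode explicit in the interface makes the degree of autonomy auditable, which matters when caregivers and patients must understand who made a given decision.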
Finally, the benefits provided by social robotics can be greatly enhanced by incorporating IoT units into these systems, generating a dense network of information extracted from various sensors located on human users or in the surrounding environment. Several applicative examples of this technology can be found in the scientific and patent literature. For instance, Yoon Sang-seok et al. [137] proposed an IoT unit, embedded inside the robot, that collects significant parameters from a wearable device and a fine-dust sensor; once the information has been processed, an external air purifier is activated accordingly.

5.7.2. Display and Touchscreen

Displays and touchscreens mounted on robots allow not only medical personnel but also children, the elderly, and impaired people to interact with the systems. Using a touchscreen display, it is possible to exchange information, preferences, and feedback with the cyber assistant. In this way, social robots become essential partners in children’s education and in assisting elderly and impaired people who may have difficulty establishing communication. For instance, Neef et al. [28] proposed a novel health monitoring system based on the Pepper robot, exploiting its frontal touchscreen, a customized Android application, and a sensing device. Evaluation of the user experience and the data acquisition system showed that users with less prior expertise exhibit higher robot acceptance (e.g., the elderly and children have lower expectations). This highlights the importance of adequately training people to use health monitoring systems and similar solutions. Moreover, whether a more transparent robot, one that provides information about itself, can positively affect subsequent interaction with children still has to be investigated more deeply. In this context, the training has been performed directly by social robots [189].
Analogously, displays play a fundamental role in telepresence robots, giving the possibility to show the faces of medical personnel during an examination or family members during a visit to a nursing home.

5.8. Impact of Ethics, Security, and Privacy on the Design of Social Robots

5.8.1. Ethics in Social Robotics

In healthcare, social robots are artificial entities that can positively interact with the disabled, children, and the elderly, reducing the workload of nurses, physicians, and caregivers. Thanks to their artificial intelligence, they offer a high level of interactivity. This feature allows them to be involved in multiple use cases, but because many ethical issues arise, rules and design constraints must be considered. The main issue is the dehumanization of interactions, in which vulnerable people are assisted by entities that are unable to experience emotions or understand human fragility. In this sense, replacing human caregivers with care robots may be considered unethical. To mitigate this, developers have created algorithms that recognize emotions and facial expressions recorded as sensory data. This improves robot acceptance and usability, which is related to the system’s apparent natural cognitive ability. As in human–human interaction, artificial entities do not perceive people’s internal states but only their external representation. Furthermore, public trust influences the acceptance and subsequent deployment of social robots in assisting humans; this trust improves as artificial systems increase the benefits they provide to humans in terms of safety and well-being. The ethical design of social robots has become a prerequisite for their development, and stakeholder research must be carried out at the very beginning of the design process. Such research may reveal that human–robot interaction is preferred in some cases because robots behave predictably, or because wider adoption of robots allows nurses and physicians to devote their time to the activities that most benefit the patient. The primary challenge in creating a dependable and effective solution is to translate social norms, laws, and human behaviors into algorithms for artificial intelligence.
Finally, to combat loneliness and dehumanization caused by robots, they can be used as companions to help people rather than as replacements for human caregivers [190].

5.8.2. Privacy and Security

As the term “social” implies, social robots are used in public and private settings where they have close contact with people. In healthcare, robots may be used by the elderly, the disabled, children, or physicians, and this variety of end users corresponds to a variety of possible deployment locations. The need for data protection stems from concerns about protecting people’s privacy. In this regard, the law has introduced requirements and constraints for safely managing data recorded by social robots, such as video and clinical files, which frequently travel over wireless connectivity (e.g., in cloud-based systems) to be stored on servers. As a result, “privacy by design” must be the starting point when developing social robots that process personal data. In this context, blockchain technology has the potential to provide two critical benefits: privacy and security. Nonetheless, this is an innovative solution, and more research is needed to validate and implement it in real-world applications. In fact, blockchain technology currently faces a trilemma among security, performance, and decentralization, which cannot all be maximized at the same time. The BlockRobot [191] proposed by Vasylkovskyi et al. exploits this technology to provide private data access in human–robot interaction, and Table 7 reports the main principles of this solution.
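The tamper-evidence principle behind blockchain-based solutions such as BlockRobot [191] can be illustrated with a minimal hash-chained access log: each record’s hash covers the previous record, so any later modification breaks the chain and is detectable. This is a toy sketch of the principle, not the BlockRobot protocol, and the event strings are invented.

```python
import hashlib
import json

def add_record(chain, event):
    """Append an access event whose hash covers the previous record."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    """Recompute every hash; any tampering breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        body = {"event": rec["event"], "prev": rec["prev"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

log = []
add_record(log, "robot r1 accessed vitals of patient p7")
add_record(log, "clinician c2 reviewed access")
print(verify(log))            # True
log[0]["event"] = "tampered"
print(verify(log))            # False
```

A distributed blockchain additionally replicates such a chain across nodes and adds consensus, which is where the security/performance/decentralization trilemma mentioned above arises.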

5.9. Guidelines for Social Robot Design in Healthcare Applications

Drawing up guidelines for the design of social robots in medicine presents at least two problems: (1) one is related to the design of any social robot precisely because it is social, that is, intended to develop a relationship with the human; (2) a further one is determined by the medical-therapeutic context. In fact, designing social robots for medicine requires first and foremost the constant collaboration of the physician, who must indicate precisely the functions for which the machine is intended but must also take into account the type of human being with whom the robot must interact. The machine in this context must know how to interact with sick, weak, and fragile people, often infirm and sometimes disabled, for whom the average standard of safety must be raised, and it must comply with interactive rules not present in non-medical intersubjective relationships. Therefore, the delineation of guidelines for social robots in the medical-therapeutic setting must give strong consideration to the two issues outlined above. Some more precise guidelines are given below regarding appropriate methods and content. In general, teaching a machine how to interact with a human means knowing how the human thinks, moves, and acts, but also knowing the laws and ethical values that govern human social interaction. This has led, and increasingly leads, robotic engineering to work in interdisciplinary teams consisting of neuroscientists (psychologists, neurologists, sociologists, psychiatrists, etc.) together with philosophers, ethicists, and jurists. However, things are more complex than might be expected, because these teams not only face completely new scenarios but are tasked with teaching the machine a model of the human being that is not fixed and unchanging but gradually modified by the very interaction with the machine.
The intersubjective relationship between humans and social robots has changed and is changing the human as much as the machine: while, on the one hand, the design of the social robot is modeled on the human, on the other hand, interaction with the social robot is changing the human, their sociality, habits, actions, and very way of thinking [192]. In this way, the human that neuroscience and philosophy are called upon to describe today is a “mobile,” dynamic human with a “fluid” physiognomy, and this dynamism and continuous transformation depend precisely on proximity to the machine, which has changed humanity’s rhythms, times, values, and habits. The person who relates to the machine today is no longer the same as 50 years ago. The contribution of neuroscience and philosophy here is truly decisive. No researcher or scholar today questions this, and it must therefore be held firm: the design of social robots needs a multidisciplinary team [193] of experts who keep in mind the processuality, the becoming, of the human with whom the robot must interact. Robots and humans grow and develop together: knowing that this process exists, and knowing it in its particularity, is fundamental to effectively designing a social robot. That is why, in these teams, philosophy plays a fundamental role: not only because, thanks to its holistic approach, it establishes the necessary interdisciplinary links, but especially because it is well aware of the dynamic and processual nature of the intersubjective relationship and has the task of urging team members to always keep it in mind during design. Let us now see how all this applies when implementing a social robot in medicine. As anticipated, the figure of the physician is central and decisive: the team members must first listen to the physician’s description of the robot’s purpose, applications, and functions.
Here, it is precisely the purpose that determines the type of machine: if it is intended for heavy work (such as lifting patients to prevent bed sores), then the robot will need a certain size and mechanics; if it is intended for pediatric settings, then physiognomy, expression, and tactility will be decisive: the robot will have to be smaller in size, have particularly developed facial expressions, be capable of particular movements and sounds, present specifically designed tactility and sensoriality, be equipped with profound biometric recognition capabilities, etc. The composition of the team is a particularly important part of the work, to which great attention should therefore be devoted. Once the work team has been constituted, an interactive working method must be chosen. One approach on which many authors agree is to place the user (the human) at the center of the robot design activity, through a wide range of methodologies that are more or less assimilated to, or share similarities with, codesign [194], which consists of having users or user representatives actively participate in the definition of technical specifications in various ways, as discussed later. From this point of view, the problem with the healthcare domain, as pointed out, for example, in [195], is that it is not so clear who the user is. The physician prescribes a drug or therapy to the patient, that is, uses a drug or therapy to treat the patient. Generally, the therapy or drug is delivered by a therapist or other healthcare personnel, who follow the doctor’s prescription to actually treat the patient. The patient must definitely enter into a relationship with the drug or therapy for the treatment to occur.
The caregiver participates in the entire therapeutic process both emotionally, because he or she often has a personal relationship with the patient, and practically, by following the directions of the therapist and physician, even without specific training in treating the disease. Indeed, the overall goal of treating patients is precisely why hospitals engage the entire therapeutic team. Since identifying a single user is not so easy, it is more reasonable to put the disease at the center of the therapeutic experience, in which all these individuals participate according to their specific roles. More generally, a usability-based design approach has recently been proposed for the healthcare environment [196], but this needs to be reworked to address the social aspect of the robot we are focusing on. Usability and sociality must be held together precisely by placing the disease at the center of the design. The disease is one attribute of the patient, who remains a social human subject and should not be identified only with the disease. This attribute identifies a specific class of patients who generally differ from a standard healthy subject, even in the way they socialize, because of disorders that may involve the physical and especially the cognitive sphere. There is a pedagogy of relationship that the social robot must implement in order to contribute to the treatment of the disease, because the patient is a subject weakened by the disease itself. This weakness calls for additional assurance toward the patient, which is, among other things, imposed by the regulations of each state; in particular, the robot may be classified as a therapeutic tool, in which case it must comply with all regulations specific to such devices.
Having defined the working method, which is always implemented iteratively through trial-and-error approaches converging on the optimal accommodation of the social robot in the context in which it is to operate, a major choice must be made. Because the patient is not standard and the diseases may be very different from each other, a decision must be made about whether to make a specific device for a disease or class of diseases or to make a multifunctional system. The approach changes radically: in the case of a multifunction device, the adaptation of the system is achieved a posteriori by analyzing the characteristics of the disease. In the case of the single-function system, on the other hand, the whole robot is designed to best fulfill a specific function. In both cases, it is necessary to identify all the subfunctions that the robot must perform by starting with a general definition and articulating it as specifically as possible. Functional analysis produces a set of qualitative attributes that the social robot must have that can be further identified quantitatively by technical specifications. Each attribute of the social robot can be associated with a module, i.e., an artifact capable of accomplishing the function, e.g., a module for verbal communication, another for vision, and yet another for nonverbal communication. To the extent possible, functional modularization is a simplifying factor in design in general and, especially in such a complex context as social robotics, can become a winning element in achieving a practical goal in a reasonable amount of time even by using modules previously made by the work team or other teams. In order to facilitate reusability and interoperability of the various modules, it is essential to interpose an abstraction layer (interface) around the module itself, whether it is computer-based, electrical/electronic, or mechanical in nature. 
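The functional modularization with an abstraction layer described above can be sketched as a common interface behind which on-board and in-cloud implementations are interchangeable. The module names, `process()` interface, and behaviors below are hypothetical, chosen only to make the design pattern concrete.

```python
# Sketch of functional modularization: each function of the social
# robot is a module behind the same abstract interface, so modules can
# be reused, swapped, or moved between on-board and in-cloud hardware.
class Module:
    name = "abstract"
    def process(self, data):
        raise NotImplementedError

class OnboardVision(Module):
    """Vision must answer quickly, so it runs on local hardware."""
    name = "vision"
    def process(self, data):
        return f"detected {len(data)} objects locally"

class CloudDialogue(Module):
    """Dialogue tolerates latency; a real system would call remote
    hardware here."""
    name = "dialogue"
    def process(self, data):
        return f"reply to '{data}' computed remotely"

class SocialRobot:
    """Composes modules; callers only see the abstraction layer."""
    def __init__(self, modules):
        self.modules = {m.name: m for m in modules}
    def handle(self, module_name, data):
        return self.modules[module_name].process(data)

robot = SocialRobot([OnboardVision(), CloudDialogue()])
print(robot.handle("vision", ["cup", "chair"]))
print(robot.handle("dialogue", "hello"))
```

Because the robot composes modules only through the interface, a team can later replace `CloudDialogue` with an on-board implementation, or reuse `OnboardVision` in another robot, without touching the rest of the system.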
Although it is theoretically possible to decompose a function into smaller and smaller parts, there is a level below which, pragmatically, it is not possible to go, because the various elements of the function must be highly correlated (integrated) and the communication time between the elements becomes excessive compared to the time in which the function itself must be performed. Consider, for example, the realization of gestures, in which the various motors of a robotic arm and hand must move in a coordinated manner: one can decompose this function considerably, but the level of physical integration between the various functional elements must remain high. When the response time of a certain function must be short, it is often important that the function be performed locally. If, on the other hand, the response time can be longer, one may decide to perform the function remotely, thus centralizing complex functions in a single piece of very powerful hardware, for reasons of economic efficiency but also because of practical limitations of footprint, weight, and energy delivery. This is the case for natural language analysis and the generation of appropriate responses, also in natural language. We can thus distinguish on-board modules from in-cloud modules. Key characteristics present in most social robots are outward appearance, verbal communication, sensing, cognitive abilities (artificial intelligence), personality, and safety. The outward appearance must be evaluated very carefully, for example with regard to anthropomorphism, depending on the target subject the robot is to interact with and also considering cultural aspects, among which the choice of colors plays a relevant role, with white and light blue tending to be preferred. Facial expressions may also be desirable, but at the expense of high complexity in actuation.
Alongside nonverbal facial communication, gestures also play a reinforcing role and, for certain types of patients, such as the deaf, are strictly necessary. In general, verbal communication is the typical vehicle of human sociality, and because of the high complexity of this function, it is actually necessary to use an artificial intelligence system on remote hardware to process information, resulting in information security issues. The ability to process images and video in a real-world environment is another function that can be complex and require remote hardware, depending on how much information is to be extracted from the data. Other sensors can be used to capture health information from the patient and transfer it to a centralized database. The choice of a personality for the social robot is critical to achieving engagement and must clarify early on what the robot’s role is within the treatment team. The ability to move around in a partially unstructured hospital setting and in the presence of various individuals who are not necessarily professionals is an important lever for all of the robot’s social functions, but it represents an additional safety issue that must be addressed to avoid accidents. In general, the presence of sensor elements, moving mechanical parts, functions performed remotely, and the particular fragility of patients interacting with social robots force the designer to focus very carefully on cybersecurity aspects, more so than in cases of interactions with healthy subjects. In particular, issues concerning data privacy, authentication and access control, network security, firmware security, physical security, human factors, and regulatory compliance must be carefully analyzed. While all the aspects listed quite obviously lead toward the technical disciplines of cybersecurity, there is one that needs to be emphasized more than all the others: the human factor. 
In fact, because the social robot operates within human society, it is subject to social engineering and policy circumvention strategies that cannot be fully predicted, and, therefore, there can never be total security unless the risk factors are removed at the root and the social attribute is removed from the robot.

6. Conclusions

The present investigation collected data from three different sources (Scopus, Espacenet, and Google), corresponding to different levels of technological readiness (R&D, patents, and market). According to the results, the introduction of artificial intelligence has undoubtedly accelerated the evolution of social robots, which is also fueled by real-world issues such as the aging population and the COVID-19 pandemic. Indeed, robots have improved healthcare management in a variety of ways, from personal assistance to the elderly (e.g., nursing robots), to reducing anxiety and mental illness in people with disabilities (e.g., animal companion bots), to assisting with children's education.

Author Contributions

Conceptualization, A.B. and L.R.; methodology, C.A. and N.C.; software, L.R.; validation, A.B., C.A. and F.V.; formal analysis, A.B.; investigation, L.R.; resources, L.R.; data curation, L.R. and C.A.; writing—original draft preparation, L.R.; writing—review and editing, A.B. and C.A.; visualization, F.V.; supervision, A.B.; project administration, A.B.; funding acquisition, A.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the Italian Ministry of Health (Ricerca Corrente program).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available upon request to the authors.

Conflicts of Interest

The authors declare no conflict of interest.

  139. Stokman, H.M.G.; van Oldenborgh, M.J.B. Adaptive Artificial Intelligence System for Event Categorizing by Switching between Different States. U.S. Patent 17/042,061, 29 March 2018. [Google Scholar]
  140. Ye, G.; Ye, J.Z. First Trainable Robot Doctor Named AIPD Using New Methods and Systems for Its Artificial Brain. U.S. Patent 15/844,666, 18 December 2017. [Google Scholar]
  141. Favis, S.; Srivastava, D. Multiple Interactive Personalities Robot. WO2017189559A1, 26 April 2016. [Google Scholar]
  142. Zhou, H.; Lin, L.; Zhuang, Y. Robot Intelligent Hospital Guidance System. CN106965193A, 31 March 2017. [Google Scholar]
  143. Lim, C.G.; Lee, C.S. Method for Recognizing User Intention. KR20170014704A, 31 July 2015. [Google Scholar]
  144. Zhang, B.; Sun, X.; Gao, Y.; Jia, J.; Wang, S.; Li, H.; Liu, D.; Wang, M. Emotional Dialogue Generation Method and System Based on Interactive Fusion. CN113254625B, 15 July 2021. [Google Scholar]
  145. Xu, K.; Chen, M.; You, L. The Hitchhiker’s Guide to a Credible and Socially Present Robot: Two Meta-Analyses of the Power of Social Cues in Human–Robot Interaction. Int. J. Soc. Robot. 2023, 15, 269–295. [Google Scholar] [CrossRef]
  146. Strathearn, C.; Ma, M. Modelling user preference for embodied artificial intelligence and appearance in realistic humanoid robots. Informatics 2020, 7, 28. [Google Scholar] [CrossRef]
  147. Lima, M.R.; Wairagkar, M.; Natarajan, N.; Vaitheswaran, S.; Vaidyanathan, R. Robotic Telemedicine for Mental Health: A Multimodal Approach to Improve Human-Robot Engagement. Front. Robot. AI 2021, 8, 618866. [Google Scholar] [CrossRef]
  148. Pu, Y.; Peng, J. Nounou Intelligent Monitoring Device for Health Care and Accompanying of the Old. CN107016224A, 3 October 2016. [Google Scholar]
  149. Kim, N.V.; Zhidkov, V.N.; Bodunkov, N.E.; Mamonov, A.V.; Fedorova, T.A.; Kim, T.V. Development of a medical social robot. In Proceedings of the 12th International Conference on Developments in eSystems Engineering (DeSE), Kazan, Russia, 7–10 October 2019; pp. 243–246. [Google Scholar]
  150. Gavril, A.F.; Ghita, A.S.; Sorici, A.; Florea, A.M. Towards a modular framework for human-robot interaction and collaboration. In Proceedings of the 22nd International Conference on Control Systems and Computer Science (CSCS), Bucharest, Romania, 28–30 May 2019; pp. 667–674. [Google Scholar]
  151. Callemein, T.; Van Beeck, K.; Goedemé, T. Multi-view real-time 3D occupancy map for machine-patient collision avoidance. In Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2021), Online, 8–10 February 2021; pp. 627–636. [Google Scholar]
  152. Ngo, H.Q.T.; Le, V.N.; Thien, V.D.N.; Nguyen, T.P.; Nguyen, H. Develop the socially human-aware navigation system using dynamic window approach and optimize cost function for autonomous medical robot. Adv. Mech. Eng. 2020, 12, 1–17. [Google Scholar] [CrossRef]
  153. Fischinger, D.; Einramhof, P.; Papoutsakis, K.; Wohlkinger, W.; Mayer, P.; Panek, P.; Hofmann, S.; Koertner, T.; Weiss, A.; Argyros, A.; et al. Hobbit, a care robot supporting independent living at home: First prototype and lessons learned. Robot. Auton. Syst. 2016, 75, 60–78. [Google Scholar] [CrossRef]
  154. Wang, Y.; Jordan, C.S.; Wright, T.; Chan, M.; Pinter, M.; Hanrahan, K.; Sanchez, D.; Ballantyne, J.; Herzog, C.; Whitney, B.; et al. Interfacing with a Mobile Telepresence Robot. WO2012103525A3, 28 January 2011. [Google Scholar]
  155. Gross, H.M.; Debes, K.; Einhorn, E.; Mueller, S.; Scheidig, A.; Weinrich, C.; Bley, A.; Martin, C. Mobile robotic rehabilitation assistant for walking and orientation training of stroke patients: A report on work in progress. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), San Diego, CA, USA, 5–8 October 2014; pp. 1880–1887. [Google Scholar]
  156. Masmoudi, R.; Bouchouicha, M.; Gorce, P. Expressive robot to support elderly. Assist. Technol. Res. Ser. 2011, 29, 557–564. [Google Scholar] [CrossRef]
  157. Infantino, I.; Machí, A. Towards an assistive social robot interacting with human patient to establish a mutual affective support. In Proceedings of the 14th Italian Workshop, WIVACE 2019, Rende, Italy, 18–20 September 2019; Volume 1200, pp. 1–6. [Google Scholar] [CrossRef]
  158. Ozdemir, D.; Cibulka, J.; Stepankova, O.; Holmerova, I. Design and implementation framework of social assistive robotics for people with dementia—A scoping review. Health Technol. 2021, 11, 367–378. [Google Scholar] [CrossRef]
159. Meng, L.; Yuesong, W.; Jinqi, L. Design of an Intelligent Service Robot based on Deep Learning. In Proceedings of the 2020 8th International Conference on Information Technology: IoT and Smart City, Xi’an, China, 25–27 December 2020; pp. 153–158. [Google Scholar]
  160. Cooper, S.; Di Fava, A.; Vivas, C.; Marchionni, L.; Ferro, F. ARI: The Social Assistive Robot and Companion. In Proceedings of the 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Naples, Italy, 31 August–4 September 2020; pp. 745–751. [Google Scholar]
  161. Borboni, A.; Aggogeri, F.; Pellegrini, N.; Faglia, R. Innovative modular SMA actuator. Adv. Mater. Res. 2012, 590, 405–410. [Google Scholar] [CrossRef]
  162. Goris, K.; Saldien, J.; Vanderborght, B.; Lefeber, D. How to achieve the huggable behavior of the social robot Probo? A reflection on the actuators. Mechatronics 2011, 21, 490–500. [Google Scholar] [CrossRef]
  163. Borboni, A.; De Santis, D.; Faglia, R. Large deflection of a non-linear, elastic, asymmetric Ludwick cantilever beam. In Proceedings of the ASME 2010 10th Biennial Conference on Engineering Systems Design and Analysis, ESDA2010, Istanbul, Turkey, 12–14 July 2010; pp. 99–106. [Google Scholar]
  164. Rattanagulvaranon, P.; Nithisoponpat, P.; Maliwan, N.; Mongkolluck, N.; Maneewarn, T. Embodiment of Interaction Design for a Compliant Social Robot. In Proceedings of the 19th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology, ECTI-CON 2022, Prachuap Khiri Khan, Thailand, 24–27 May 2022. [Google Scholar]
  165. Amici, C.; Borboni, A.; Faglia, R.; Fausti, D.; Magnani, P.L. A parallel compliant meso-manipulator for finger rehabilitation treatments: Kinematic and dynamic analysis. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS, Nice, France, 22–26 September 2008; pp. 735–740. [Google Scholar]
  166. Islam, M.S.; Rahman, M.M.; Muhammad, G.; Hossain, M.S. Design of a Social Robot Interact with Artificial Intelligence by Versatile Control Systems. IEEE Sens. J. 2022, 22, 17542–17549. [Google Scholar] [CrossRef]
  167. Borboni, A.; Aggogeri, F.; Pellegrini, N.; Faglia, R. Precision point design of a cam indexing mechanism. Adv. Mater. Res. 2012, 590, 399–404. [Google Scholar] [CrossRef]
  168. Negrini, S.; Piovanelli, B.; Amici, C.; Cappellini, V.; Bovi, G.; Ferrarin, M.; Zaina, F.; Borboni, A. Trunk motion analysis: A systematic review from a clinical and methodological perspective. Eur. J. Phys. Rehabil. Med. 2016, 52, 583–592. [Google Scholar] [PubMed]
  169. Görür, O.C.; Rosman, B.; Sivrikaya, F.; Albayrak, S. Social Cobots: Anticipatory Decision-Making for Collaborative Robots Incorporating Unexpected Human Behaviors. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 398–406. [Google Scholar]
  170. Adamini, R.; Antonini, N.; Borboni, A.; Medici, S.; Nuzzi, C.; Pagani, R.; Pezzaioli, A.; Tonola, C. User-friendly human-robot interaction based on voice commands and visual systems. In Proceedings of the 2021 24th International Conference on Mechatronics Technology, ICMT 2021, Singapore, 18–22 December 2021. [Google Scholar]
  171. Luperto, M.; Monroy, J.; Renoux, J.; Lunardini, F.; Basilico, N.; Bulgheroni, M.; Cangelosi, A.; Cesari, M.; Cid, M.; Ianes, A.; et al. Integrating Social Assistive Robots, IoT, Virtual Communities and Smart Objects to Assist at-Home Independently Living Elders: The MoveCare Project. Int. J. Soc. Robot. 2022, 15, 517–545. [Google Scholar] [CrossRef]
  172. Zhang, Q.; Quan, R.; Qimuge, S.; Wei, R.; Zan, X.; Wang, F.; Chen, C.; Wei, Q.; Liu, X.; Qiao, F. On the Way from Lightweight to Powerful Intelligence: A Heterogeneous Multi-Robot Social System with IoT Devices. In Proceedings of the IEEE International Conference on Automation Science and Engineering, Mexico City, Mexico, 20–24 August 2022; pp. 842–848. [Google Scholar]
  173. Zhao, Y.; Guo, M.; Sun, X.; Chen, X.; Zhao, F. Attention-based sensor fusion for emotion recognition from human motion by combining convolutional neural network and weighted kernel support vector machine and using inertial measurement unit signals. IET Signal Process. 2023, 17, e12201. [Google Scholar] [CrossRef]
  174. Borboni, A.; Elamvazuthi, I.; Cusano, N. EEG-Based Empathic Safe Cobot. Machines 2022, 10, 603. [Google Scholar] [CrossRef]
  175. Villafañe, J.H.; Valdes, K.; Vanti, C.; Pillastrini, P.; Borboni, A. Reliability of handgrip strength test in elderly subjects with unilateral thumb carpometacarpal osteoarthritis. Hand 2015, 10, 205–209. [Google Scholar] [CrossRef]
  176. Al-Quraishi, M.S.; Elamvazuthi, I.; Tang, T.B.; Al-Qurishi, M.S.; Adil, S.H.; Ebrahim, M.; Borboni, A. Decoding the User’s Movements Preparation from EEG Signals Using Vision Transformer Architecture. IEEE Access 2022, 10, 109446–109459. [Google Scholar] [CrossRef]
  177. Aggogeri, F.; Borboni, A.; Merlo, A.; Pellegrini, N.; Ricatto, R. Real-time performance of mechatronic PZT module using active vibration feedback control. Sensors 2016, 16, 1577. [Google Scholar] [CrossRef] [Green Version]
  178. Paterson, M. Social robots and the futures of affective touch. Senses Soc. 2023, 18, 110–125. [Google Scholar] [CrossRef]
  179. Takagi, K.; Rzepka, R.; Araki, K. Just keep tweeting, dear: Web-mining methods for helping a social robot understand user needs. In Proceedings of the AAAI Spring Symposium—Technical Report, Stanford, CA, USA, 21–23 March 2011; pp. 60–65. [Google Scholar]
  180. Toussaint, C.; Schwarz, P.T.; Petermann, M. Navel—A social robot with verbal and nonverbal communication skills. In Proceedings of the Conference on Human Factors in Computing Systems—Proceedings, Hamburg, Germany, 23–28 April 2023. [Google Scholar]
  181. Aggogeri, F.; Borboni, A.; Faglia, R. Reliability roadmap for mechatronic systems. Appl. Mech. Mater. 2013, 373–375, 130–133. [Google Scholar] [CrossRef]
  182. Sætra, H.S. First, They Came for the Old and Demented: Care and Relations in the Age of Artificial Intelligence and Social Robots. Hum. Arenas 2022, 5, 25–43. [Google Scholar] [CrossRef]
  183. Islam, M.J.; Hong, J.; Sattar, J. Person-following by autonomous robots: A categorical overview. Int. J. Robot. Res. 2019, 38, 1581–1618. [Google Scholar] [CrossRef] [Green Version]
  184. Wang, Y.; Jordan, C.S.; Wright, T.; Chan, M.; Pinter, M.; Hanrahan, K.; Sanchez, D.; Ballantyne, J.; Herzog, C.; Whitney, B.; et al. Time-Dependent Navigation of Telepresence Robots. U.S. Patent 9785149B2, 2 August 2013. [Google Scholar]
  185. Jordan, C.S.; Young, A.; Ng, M.S.; Lurie, Y.; Lai, F.; Wright, T.C.; Herzog, C.; Whitney, B.; Rizzi, B.; Ballantyne, J.; et al. Graphical User Interfaces Including Touchpad Driving Interfaces for Telemedicine Devices. U.S. Patent 9361021B2, 22 May 2012. [Google Scholar]
  186. Biswas, T.; Kumar Maduri, P.; Singh, R.; Srivastava, R.; Singh, K. Autonomous Robot to Perform Touch-less Assistance for Doctors. In Proceedings of the 2nd International Conference on Advances in Computing, Communication Control and Networking (ICACCCN), Greater Noida, India, 18–19 December 2020; pp. 929–933. [Google Scholar]
  187. Arroyo, D.; Ishiguro, Y.; Tanaka, F. Design of a home telepresence robot system for supporting childcare. In Proceedings of the Computer Supported Cooperative Work and Social Computing, Portland, OR, USA, 25 February–1 March 2017; pp. 131–134. [Google Scholar]
  188. Wein, L.M. Robotic Medical System Having Human Collaborative Modes. U.S. Patent 17/311,201, 4 December 2018. [Google Scholar]
  189. Van Straten, C.L.; Peter, J.; Kühne, R. Transparent robots: How children perceive and relate to a social robot that acknowledges its lack of human psychological capacities and machine status. Int. J. Hum. Comput. Stud. 2023, 177, 103063. [Google Scholar] [CrossRef]
  190. Yew, G.C.K. Trust in and Ethical Design of Carebots: The Case for Ethics of Care. Int. J. Soc. Robot. 2021, 13, 629–645. [Google Scholar] [CrossRef] [PubMed]
  191. Vasylkovskyi, V.; Guerreiro, S.; Sequeira, J.S. BlockRobot: Increasing Privacy in Human Robot Interaction by Using Blockchain. In Proceedings of the IEEE International Conference on Blockchain (Blockchain), Rhodes, Greece, 2–6 November 2020; pp. 106–115. [Google Scholar]
  192. Cusano, N. Cobot and Sobot: For a new Ontology of Collaborative and Social Robots. Found. Sci. 2022. [Google Scholar] [CrossRef]
  193. Alves-Oliveira, P.; Orr, A.; Björling, E.A.; Cakmak, M. Connecting the Dots of Social Robot Design from Interviews with Robot Creators. Front. Robot. AI 2022, 9, 720799. [Google Scholar] [CrossRef]
  194. Olivier, M.; Rey, S.; Voilmy, D.; Ganascia, J.G.; Lan Hing Ting, K. Combining Cultural Probes and Interviews with Caregivers to Co-Design a Social Mobile Robotic Solution. IRBM 2023, 44, 100729. [Google Scholar] [CrossRef]
  195. Borboni, A.; Mor, M.; Faglia, R. Gloreha-Hand Robotic Rehabilitation: Design, Mechanical Model, and Experiments. J. Dyn. Syst. Meas. Control Trans. ASME 2016, 138, 111003. [Google Scholar] [CrossRef]
  196. Formicola, R.; Amici, C.; Mor, M.; Bissolotti, L.; Borboni, A. Design of Medical Devices with Usability in Mind: A Theoretical Proposal and Experimental Case Study Using the LEPRE Device. Designs 2023, 7, 9. [Google Scholar] [CrossRef]
Figure 1. Number of documents related to the year of publication in the range 1992–2022.
Figure 2. Number of documents related to the type of publication.
Figure 3. Number of documents by subject area.
Figure 4. Number of documents related to the affiliation territory.
Figure 5. Keyword cloud obtained with the VOSviewer software.
Figure 6. Flowchart of the research: the starting subset was obtained from the Scopus database after applying the inclusion criteria; tag 2 identifies highly relevant documents; tag 1 labels documents with some relevant information; tag 0 collects non-pertinent documents.
Figure 7. Block diagram of the general arrangement of the analysis.
Figure 8. Number of documents that contain a detailed description, according to the related labels.
Figure 9. TRL (technology readiness level) of every document, assigned as a numerical label from 1 to 9 according to its maturity state.
Figure 10. Applicants per country chart. Country is the country or organization where the patent application was filed or granted [1].
Figure 11. Earliest priority date chart. Earliest priority date is the filing date of the very first patent application for a specific invention. Within 12 months of that first filing, a subsequent patent application for the same invention can be filed claiming this “priority right” [1].
Figure 12. Earliest publication date chart. Earliest publication date is the date on which a patent application is first published. It is the date on which the document is made available to the public, thereby making it part of the state of the art [1].
Figure 13. Applicants by number of documents chart. Applicant is a person (i.e., natural person) or organization (i.e., legal entity) that has filed a patent application. There may be more than one applicant per application. The applicant may (but need not) also be the inventor [1].
Figure 14. Flowchart of patent analysis.
Figure 15. Number of patents that contain a detailed description, according to the related labels.
Figure 16. Main IPC groups. Descriptions from Espacenet website are reported in the following [1]: A61B5: Measuring for diagnostic purposes; Identification of persons; H04L29: Arrangements, apparatus, circuits, or systems, not covered by a single one of groups; G06F3: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g., interface arrangements; G06Q50: Systems or methods specially adapted for specific business sectors, e.g., utilities or tourism; G06K9: Methods or arrangements for recognizing patterns; G06F17: Digital computing or data processing equipment or methods, specially adapted for specific functions.
Figure 17. Comparison of labels between the scientific-literature (SL) analysis of the reduced dataset and the patent-literature (PL) analysis of the final dataset, with cumulated values added to show the breadth of the review.
Figure 18. Factors that affect the final design of a social robot. Starting from the goals to be achieved in the conceptual layer, a technical choice must be made on where to locate the hardware providing the required computational power. Both factors influence the final system.
Table 1. TRL levels and related description.
TRL | Description
1 | basic principles observed
2 | technology concept formulated
3 | experimental proof of concept
4 | technology validated in lab
5 | technology validated in relevant environment (industrially relevant environment in the case of key enabling technologies)
6 | technology demonstrated in relevant environment (industrially relevant environment in the case of key enabling technologies)
7 | system prototype demonstration in operational environment
8 | system complete and qualified
9 | actual system proven in operational environment (competitive manufacturing in the case of key enabling technologies or in space)
Table 2. List of all IPC groups present in the main dataset, together with their frequency of occurrence. Concerning the first letter of the IPC number: index A identifies the “human necessities” field; index B identifies the “performing operations” and “transporting” fields; index C identifies the “chemistry and metallurgy” field; index G identifies the “instruments” field; index H identifies the “electricity” field [1]. The subsequent indexes (numbers and letters) guide to more detailed information on the patents and can be obtained from the Espacenet website.
IPC Group | N. of Patents | IPC Group | N. of Patents | IPC Group | N. of Patents | IPC Group | N. of Patents
A61B5 | 181 | G10L15 | 43 | A61P25 | 21 | G05B13 | 15
H04L29 | 162 | B25J11 | 42 | A63F13 | 21 | G06N99 | 15
G06F3 | 137 | G06T7 | 41 | B25J19 | 21 | G06T15 | 15
G06Q50 | 135 | G05B19 | 37 | G06F11 | 21 | G10L25 | 15
G06K9 | 121 | G06F15 | 36 | C12M1 | 20 | H04W8 | 15
G06F17 | 111 | H04M1 | 36 | C12N5 | 19 | A61B90 | 14
G06Q30 | 101 | H04W12 | 35 | G01N1 | 19 | A61P43 | 14
G06Q10 | 95 | H04N7 | 34 | H04B1 | 19 | G01B11 | 14
G06F19 | 88 | C12N15 | 33 | H04N21 | 19 | G09B5 | 14
G16H40 | 86 | H04L9 | 33 | H04W84 | 19 | H04L5 | 14
G06F21 | 84 | G06F1 | 31 | A61K45 | 18 | H04M3 | 14
G16H50 | 81 | H04N5 | 31 | B01L3 | 18 | A61B6 | 13
G16H20 | 78 | G06T19 | 30 | C12N1 | 18 | A63B71 | 13
G06F16 | 77 | G01N35 | 29 | G01S7 | 18 | C07H21 | 13
G16H10 | 74 | G06Q40 | 29 | G06F40 | 18 | G06F7 | 13
H04W4 | 73 | G08B21 | 29 | G06N7 | 18 | G16H30 | 13
G06N20 | 68 | G09B19 | 27 | A61B8 | 17 | A61B34 | 12
G01N33 | 58 | A61K31 | 25 | G06F13 | 17 | A61H1 | 12
G06N5 | 58 | B25J13 | 25 | H04W88 | 17 | A63B21 | 12
H04L12 | 58 | G16H80 | 24 | A61K9 | 16 | G06F8 | 12
B25J9 | 56 | G02B27 | 23 | A61N1 | 16 | G08B25 | 12
G06N3 | 54 | G05D1 | 23 | C07K14 | 16 | G09G5 | 12
C12Q1 | 51 | G06T11 | 22 | G01N21 | 16 | G16H70 | 12
G06F9 | 45 | H04Q9 | 22 | C07K16 | 15 | H01L31 | 12
G06Q20 | 44 | A61K39 | 21 | G01S17 | 15 | H04B17 | 12
H04N19 | 12
Table 3. Robotic platforms relevant to social robotics in the medical environment.
Type of Robot | Models
Humanoid robots | NAO [16,17,18,19,20,21]; PEPPER [22,23,24,25,26,27,28]; ROBOTIS OP3 [29]; INMOOV [30]; KIRO [31]; ARASH [32]; RAPIRO [33]; BRIAN 2.1 [34]; KISMET [35].
Pet companion-bots | NECORO [36,37]; PARO [9,38,39]; AIBO [10,36,39]; HUGGABLE [39,40]; THERABOT [41]; PLEO [39]; SNUGGLEBOT [42]; HAPTIC CREATURE [43,44]; EDU’ [45]; MAYA [46].
Telepresence robots | TURTLEBOT 2 [47,48,49]; CHICARO [50]; DALI [51]; INTOUCH VITA [52,53]; GIRAFFPLUS [54,55,56]; SIRA [57].
Table 4. Social robot classification: highly actuated robots have DOF ≥ 14; slightly actuated robots have DOF < 14.
Highly Actuated Robots (DOF) | Slightly Actuated Robots (DOF)
NAO (25) | RP-VITA (5)
PEPPER (20) | HUGGABLE (8)
INMOOV (47) | THERABOT (10)
ROBOTIS OP3 (20) | CHICARO (4)
KIRO (18) | EMIR (4)
ARASH (15) | RAPIRO (12)
KISMET (21, actuated face) | ARNA (7, robotic arm)
AIBO (22) | ROREAS (6)
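The classification rule of Table 4 can be sketched as a few lines of code. This is an illustrative example, not material from the reviewed papers; only the DOF values and the ≥ 14 threshold come from the table.

```python
# Classify social robots as "highly" or "slightly" actuated using the
# DOF >= 14 threshold adopted in Table 4.

DOF_THRESHOLD = 14  # degrees of freedom separating the two classes

# Degrees of freedom reported in Table 4
ROBOT_DOF = {
    "NAO": 25, "PEPPER": 20, "INMOOV": 47, "ROBOTIS OP3": 20,
    "KIRO": 18, "ARASH": 15, "KISMET": 21, "AIBO": 22,
    "RP-VITA": 5, "HUGGABLE": 8, "THERABOT": 10, "CHICARO": 4,
    "EMIR": 4, "RAPIRO": 12, "ARNA": 7, "ROREAS": 6,
}

def actuation_class(dof: int) -> str:
    """Return the Table 4 actuation class for a given DOF count."""
    return "highly actuated" if dof >= DOF_THRESHOLD else "slightly actuated"

highly = sorted(name for name, dof in ROBOT_DOF.items()
                if actuation_class(dof) == "highly actuated")
```

Applied to the dictionary above, the `highly` list reproduces the left-hand column of the table.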
Table 5. Comparison between LiDAR, infrared, and ultrasonic sensors.
Device | Advantages | Disadvantages
LiDAR | High detection accuracy | Long response time; expensive
Infrared sensor | Not expensive | Low accuracy; light interference
Ultrasonic sensor | High frequencies; good directivity; short wavelengths | Easily affected by noise; low accuracy; expensive; indicated for low-speed and short-distance measurements
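Because the sensors in Table 5 are complementary, a social robot typically fuses their distance readings rather than relying on one of them. The sketch below is a hypothetical illustration of that idea; the weights and the speed/range envelope are invented assumptions, not values from the review.

```python
# Hypothetical sketch: combining the complementary range sensors of
# Table 5 for obstacle distance estimation. Weights reflect the
# qualitative accuracy ranking of the table and are assumptions.

SENSOR_ACCURACY_WEIGHT = {"lidar": 0.7, "infrared": 0.1, "ultrasonic": 0.2}

def fuse_distances(readings: dict) -> float:
    """Weighted average of per-sensor distance readings (metres),
    normalised over the sensors actually reporting (non-None)."""
    available = {s: d for s, d in readings.items() if d is not None}
    total_w = sum(SENSOR_ACCURACY_WEIGHT[s] for s in available)
    return sum(SENSOR_ACCURACY_WEIGHT[s] * d
               for s, d in available.items()) / total_w

def usable_sensors(speed_mps: float, approx_range_m: float) -> list:
    """Ultrasonic sensors suit low-speed, short-distance measurements
    (Table 5), so drop them outside an assumed operating envelope."""
    sensors = ["lidar", "infrared"]
    if speed_mps < 0.5 and approx_range_m < 2.0:  # assumed envelope
        sensors.append("ultrasonic")
    return sensors
```

A real platform would also account for the per-sensor latencies and noise models listed in the table; the weighted average only conveys the structure of the fusion step.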
Table 6. Most frequently encountered algorithms and AI services in social robotics, with referring documents and peculiar features.
Service | Peculiar Features
Amazon Polly Speech Synthesis [146] | Text-to-speech algorithm
Google Text-To-Speech [147] | Text-to-speech algorithm
Google Speech-To-Text [147] | Speech-to-text algorithm
PocketSphinx [19,148] | Speech recognition algorithm
IBM Tone Analyzer [147] | Emotion-from-text algorithm
YOLO V3 [25,26,134,149,150] | Object detection algorithm
YOLACT++ [151] | Object detection algorithm
SLAM [26,32,47,51,106,118,152,153,154] | Mapping and navigation algorithm
Baudi [146] | Face recognition algorithm
Procrob Functional [19] | Face recognition algorithm
Euclid [146] | Face recognition algorithm
Viola-Jones algorithm [155,156] | Face recognition algorithm
Google Dialogflow [19] | Speech and text machine learning service
IBM Watson Assistant [147,157] | Question-answering computing system
Amazon Lex [19,146] | Deep learning service for natural speech recognition
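The services in Table 6 typically chain together: a speech-to-text transcript is mapped to an intent, and a textual reply is handed to the text-to-speech stage. The following self-contained sketch illustrates only the intent-matching step with invented intents and keywords; a deployed robot would delegate this to a service such as Google Dialogflow or Amazon Lex.

```python
# Illustrative keyword-based intent matcher for the middle stage of a
# speech pipeline (STT -> intent -> TTS). Intents, keywords, and replies
# are invented for the example.

INTENT_KEYWORDS = {
    "call_nurse": {"nurse", "help", "emergency"},
    "small_talk": {"hello", "morning", "thanks"},
    "navigation": {"go", "come", "follow"},
}

REPLIES = {
    "call_nurse": "I am alerting the nursing staff now.",
    "small_talk": "Hello! How are you feeling today?",
    "navigation": "Okay, I am on my way.",
    "unknown": "Sorry, could you repeat that?",
}

def detect_intent(transcript: str) -> str:
    """Pick the intent whose keyword set overlaps the transcript most."""
    words = set(transcript.lower().split())
    best, best_hits = "unknown", 0
    for intent, keywords in INTENT_KEYWORDS.items():
        hits = len(words & keywords)
        if hits > best_hits:
            best, best_hits = intent, hits
    return best

def reply_for(transcript: str) -> str:
    """Text that would be sent to the text-to-speech service."""
    return REPLIES[detect_intent(transcript)]
```

Machine-learning services replace the keyword sets with trained language models, but the input/output contract (transcript in, intent and reply out) is the same.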
Table 7. Main features of the BlockRobot.
Feature | Description
Correct data outline | Robotic events (i.e., videos or RFID tag reads) are sent to BlockRobot. Humans’ identities are pseudonymized and unrecognizable to robots.
Data persistence | Private data are hashed and stored on the blockchain for verifiability. Private data can be stored encrypted in an external off-chain repository (ledger). The proposed solution is a public key infrastructure; the robot’s signature is required to mark recorded data.
User interface and blockchain transactions | An intuitive GUI is provided. The involved user can access or erase data; the user’s digital signature is required. Blockchain blocks are immutable and unchangeable, and data accountability is assured because the blockchain holds the history of all data.
Identity management by blockchain | Users register on the network through the BlockRobot API, with subsequent smart-contract authorization; pre-registered users are authenticated.
Core functionalities | Adding private data by hashing: RFID events and video recordings are collected by the robot and processed by the BlockRobot API; the private data are hashed to provide a new transaction that builds the block for the chain, and are finally encrypted and stored off-chain. Accessing private data through a smart contract that verifies the user’s identity: a new transaction is generated, and the previously hashed private data become accessible in their original form on the user’s GUI. Deleting private data through a smart-contract transaction after the user’s identity is verified.
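The hashing-and-chaining mechanism at the heart of Table 7 can be sketched in a few lines. This is a minimal illustration of the general idea under stated assumptions, not the BlockRobot implementation: each block stores only a hash of the private datum (which would live encrypted off-chain) plus a link to the previous block, making the recorded history tamper-evident.

```python
# Minimal tamper-evident hash chain: private data are hashed into
# chained blocks; the raw data never enter the chain.
import hashlib
import json

def _digest(payload: dict) -> str:
    """Deterministic SHA-256 digest of a JSON-serializable dict."""
    return hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, private_data: str, signer: str) -> dict:
    """Append a block holding only the hash of the private data."""
    block = {
        "index": len(chain),
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
        "data_hash": hashlib.sha256(private_data.encode()).hexdigest(),
        "signer": signer,  # placeholder for the robot/user signature
    }
    block["hash"] = _digest(block)  # seal the block's own fields
    chain.append(block)
    return block

def verify_chain(chain: list) -> bool:
    """Recompute every block hash and check the prev_hash links."""
    for i, block in enumerate(chain):
        expected = _digest({k: v for k, v in block.items() if k != "hash"})
        if block["hash"] != expected:
            return False
        prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev_hash"] != prev:
            return False
    return True
```

Any edit to a stored block, or reordering of blocks, breaks verification, which is the accountability property the table describes; the smart-contract access and deletion logic sits on top of this core.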
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Ragno, L.; Borboni, A.; Vannetti, F.; Amici, C.; Cusano, N. Application of Social Robots in Healthcare: Review on Characteristics, Requirements, Technical Solutions. Sensors 2023, 23, 6820. https://doi.org/10.3390/s23156820