Review

Exploring the Potential of Social Robots for Speech and Language Therapy: A Review and Analysis of Interactive Scenarios

by Galya Georgieva-Tsaneva 1,*, Anna Andreeva 1,2, Paulina Tsvetkova 1,3, Anna Lekova 1, Miglena Simonska 2, Vaska Stancheva-Popkostadinova 4, Georgi Dimitrov 1,3, Katia Rasheva-Yordanova 3 and Iva Kostadinova 3

1 Institute of Robotics, Bulgarian Academy of Sciences, 1113 Sofia, Bulgaria
2 Department of Logopedics, South-West University “Neofit Rilski”, 2700 Blagoevgrad, Bulgaria
3 Faculty of Information Sciences, University of Library Studies and Information Technologies, 1784 Sofia, Bulgaria
4 Department of Medical-Social Sciences, South-West University “Neofit Rilski”, 2700 Blagoevgrad, Bulgaria
* Author to whom correspondence should be addressed.
Machines 2023, 11(7), 693; https://doi.org/10.3390/machines11070693
Submission received: 30 April 2023 / Revised: 23 June 2023 / Accepted: 27 June 2023 / Published: 1 July 2023
(This article belongs to the Special Issue Design and Applications of Service Robots)

Abstract:
The use of innovative technology in the field of Speech and Language Therapy (SLT) has gained significant attention in recent years. Despite being a promising research area, Socially Assistive Robots (SARs) have not been thoroughly studied and used in SLT. This paper makes two main contributions: first, it provides a comprehensive review of existing research on the use of SARs to enhance communication skills in children and adolescents; second, it organizes the information into tables that categorize the interactive play scenarios described in the surveyed papers. The inclusion criteria for play scenarios in the tables are based solely on their effectiveness for SLT, as proven by experimental findings. The data, systematically presented in table format, allow readers to easily find relevant information based on various factors, such as disorder type, age, treatment technique, robot type, etc. The study concludes that despite limited research on the use of social robots for children and adolescents with communication disorders (CD), promising outcomes have been reported. The authors discuss the methodological, technical, and ethical limitations related to the use of SARs for SLT in clinical or home environments, as well as the considerable potential of conversational Artificial Intelligence (AI) as a secondary assistive technology to facilitate speech and language interventions.

1. Introduction

There has been growing interest in the use of innovative technology in Speech and Language Therapy in recent years, and SARs have drawn significant attention in the field over the last decade. While initial results have been promising, further exploration is needed to fully understand the potential and usefulness of SARs in SLT. It has been observed that such robots can provide effective and engaging therapy experiences for children and adolescents with different communication disorders. The objectives of this paper are to examine the use of SARs in the treatment of such disorders among juveniles and to explore the benefits and limitations of existing research in this area.
Challenges commonly associated with the use of SLT in a virtual environment and with social robots include technical issues (software compatibility, hardware failures, internet connectivity), methodological issues (lack of customization, limited personal social interaction, constraints in social cues), a lack of ethical guidelines, insufficient training for therapists, difficulties integrating the technology with traditional therapy methods and fostering collaboration between therapists and technology (guidance and feedback during therapy sessions), and high cost. Ongoing research and evaluation of therapy based on social robots can provide evidence-based insights regarding their efficacy and facilitate the identification of areas that could be improved.
A communication disorder refers to a condition in which an individual has difficulty sending, receiving, processing, or comprehending information represented in verbal, nonverbal, or graphical modes [1]. This can include difficulties with speech, language, and/or hearing. An impairment of the articulation of speech sounds, voice, and/or fluency is defined as a speech disorder. According to the DSM-V, a language disorder involves persistent difficulties in the production and/or comprehension of any language form (spoken, written, sign language, etc.) [2]. A hearing disorder that is congenital or acquired in early childhood can limit speech and language development. A communication disorder affects interpersonal, educational, and social functioning, and it can reduce the quality of life of a person and their family [1,2].
Play is widely recognized as a fundamental activity for the overall development of every child [3] and it is essential for health and growth [4,5]. Play is an important part of speech and language therapy. Today, a range of play-based interventions have been developed to foster these skills in autistic children [6]. The child needs a combination of skills-based therapeutic work and play-based learning. Assistive technology allows for comfort and can broaden the scope for the child to play [7]. The majority of interventions have focused on children’s social, communication, and language skills [8].
The current survey was performed via exploring the following scientific databases: Scopus, Web of Science, PubMed, Google Scholar, MDPI, and IEEE.
This review investigates the potential of social robots for speech and language therapy. The authors focused on the following research questions:
  • Are SARs used in speech and language therapy, and how common are they?
  • What types of scenarios with social robots have been developed for therapies of children/adolescents with communication disorders?
  • What are the technical, methodological, and ethical limitations and challenges of using social robots in speech and language therapy?
The objectives of the current paper are: (1) to present a literature review of research on the application of social robots in logopedic therapy and rehabilitation for children and adolescents with communication disorders, and (2) to design tables that systematically display the interactive play scenarios between humans and robots, as described in the papers reviewed. The inclusion criteria for the play scenarios in the tables are based solely on their effectiveness for SLT, supported by experimental findings, and involve the use of socially assistive robots. This tabular presentation enables readers to quickly locate relevant information based on factors such as the type of disorder, age, treatment technique, type of robot, and others.

2. Methodology

Six electronic databases (Scopus, Web of Science, PubMed, Google Scholar, MDPI, and IEEE) were searched, covering a fifteen-year period (January 2008–April 2023).
Eligibility criteria:
  • Sample—Children and adolescents with Disabilities, Neurodevelopmental disorders, Language Disorders, Hearing impairments, Cerebral Palsy, Learning Disabilities, Fluency disorder, Stuttering, Autism Spectrum Disorders, Intellectual disability.
  • Intervention—Study reports speech and language therapy, rehabilitation for social skills, language and communication disorders.
Search strategy
Keywords and phrases: Communication disorders, Children with disabilities, Neurodevelopmental disorders, Language Disorders, Hearing impairments, Cerebral Palsy, Learning Disabilities, Fluency disorder, Stuttering, Autism Spectrum Disorders, Intellectual disability, Rehabilitation of Speech and Language Disorders, Speech-language therapy, Communication skills, Turn-taking, Joint attention, Social interaction, Non-verbal communication; Robot-assisted therapy; Play-Based Intervention, Speech therapy, Robot-mediated intervention, Robotic assistant, Humanoid robot, Human–Robot Interaction, Social robot, Nao, Furhat Robot.
Inclusion criteria: The searched keywords must be present in the title, abstract, and full text of the articles. Articles must include at least two of the keywords (a robot term plus at least one communication disorder), and the SAR must have been used as an assistant in SLT.
In total, 100 articles were reviewed, but only 30 of them met the inclusion criteria and were analyzed.
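The keyword-based screening rule described above can be sketched as a simple text filter. This is only an illustration of the stated rule (a robot term plus at least one communication disorder term must co-occur); the function name and keyword lists are assumptions, not the authors' actual screening procedure.

```python
# Illustrative sketch of the inclusion rule: keep an article only if it
# mentions a robot AND at least one communication disorder.
# Keyword lists are abbreviated examples drawn from the search strategy.

ROBOT_TERMS = {"robot", "nao", "social robot", "humanoid robot", "furhat"}
DISORDER_TERMS = {
    "communication disorder", "language disorder", "stuttering",
    "hearing impairment", "autism spectrum disorder", "cerebral palsy",
}

def meets_inclusion_criteria(title: str, abstract: str, full_text: str) -> bool:
    """Return True if the article satisfies the robot + disorder keyword rule."""
    text = " ".join([title, abstract, full_text]).lower()
    has_robot = any(term in text for term in ROBOT_TERMS)
    has_disorder = any(term in text for term in DISORDER_TERMS)
    return has_robot and has_disorder

# Example: an article on NAO-assisted stuttering therapy passes the filter.
print(meets_inclusion_criteria(
    "Robot-assisted therapy",
    "NAO robot for children who stutter",
    "stuttering intervention with a humanoid robot"))  # → True
```

In practice such a filter is only a first pass; the review additionally required, per the criteria above, that the SAR was actually used as an assistant in SLT, which calls for manual full-text screening.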
The references in Section 5 are related to future directions on how to optimize the role and importance of social robots in speech and language therapy.
The main criterion for the selection of articles in the review was the existence of scenarios of interventions between child and robot.
The human–robot interactive scenarios are presented in two tables. Table 1 includes descriptions of scenarios from pilot studies, and Table 2 presents scenarios from empirical ones.
The structure of the presented scenarios includes: reference number, objectives, treatment domain, type of CD, treatment technique, play type (social, cognitive), interaction technique, age, activity description, robot configuration and mode of operation, used software, setting and time, variation.
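The scenario fields listed above amount to a fixed record schema for each table row. As a purely illustrative sketch (the class name and all field values below are hypothetical, not taken from the review's tables), the schema could be modeled as:

```python
from dataclasses import dataclass

@dataclass
class PlayScenario:
    """One row of the scenario tables, mirroring the fields listed above."""
    reference: str             # reference number of the surveyed paper
    objectives: str
    treatment_domain: str
    disorder_type: str         # type of communication disorder (CD)
    treatment_technique: str
    play_type: str             # social or cognitive
    interaction_technique: str
    age: str
    activity_description: str
    robot_configuration: str   # robot model and mode of operation
    software: str
    setting_and_time: str
    variation: str

# Hypothetical example row (values are placeholders, not data from the review):
row = PlayScenario(
    reference="[40]", objectives="Improve joint attention",
    treatment_domain="Social communication", disorder_type="ASD",
    treatment_technique="Robot-assisted procedures", play_type="Social",
    interaction_technique="Child-robot turn-taking", age="8",
    activity_description="Attention and imitation games with a NAO robot",
    robot_configuration="NAO, semi-autonomous", software="(unspecified)",
    setting_and_time="School, single case study", variation="None",
)
print(row.disorder_type)  # → ASD
```

A fixed schema like this is what makes the tables searchable by disorder type, age, robot, and the other factors mentioned in the Introduction.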

3. Related Works on Social Robots for the Therapy of Communication Disorders

This section reviews the existing research on the use of SARs to enhance communication skills in children and adolescents in the last 15 years. Then, the authors discuss the methodological, technical, and ethical limitations related to the use of SARs for SLT in clinical or home environments.
The specific focus of our research questions is narrowed to the use of SARs in speech and language therapy and the interactive scenarios developed for therapies of children/adolescents with communication disorders. Therefore, more general research questions concerning the degree of integration of social robots in special education, the impairments that social robots have been used for, and the challenges in integrating social robots in special education are not directly addressed; these are covered in [9,10]. The study in [9] analyzed the interaction of social robots with children in special education, including quantitative information on studies with multiple impairments that aim to improve social, cognitive, and communication skills. Table 2 in [9] presents quantitative information on the examined studies. The study in [10] examined 344 social robots along with their applications in various domains, including service, healthcare, entertainment, education, research, and telepresence.
The outcome of our systematic review of the use of SARs in speech and language therapy for children and adolescents is the set of table-format scenarios presented in Section 4. Twenty-three articles have been analyzed in tabular format and can be searched using keywords such as treatment domain, type of communication disorder, treatment technique, play type, interaction technique, participants’ role and behavior, age, activity description, robot configuration and mode of operation, used software, setting and time, and variation.

3.1. Social Robots as ATs in the Rehabilitation of Communication Disorders

In the 2022 UNICEF report [11] on assistive technologies for children with neurodevelopmental disorders, socially assistive robots and virtual reality are identified as high-tech assistive technologies with the most promising results in promoting social interaction and communication. SARs have the greatest potential: robots can play the role of a friend in a game or a mediator in the interaction with other children or adults, promote social interaction, and change the role of the child from a spectator to an active participant. Research shows that Assistive Technologies (AT) improve the skills of children with CD [12,13]; however, their use is still limited, possibly due to a lack of methodological and ethical guidelines and instructions for their use. SARs must be “empathetic”, “digitally intelligent”, stable, and reliable so that they can be assistants in speech and language therapy work. Therefore, they need assistive technologies such as mobile platforms for telepresence, virtual reality, interfaces for tracking body behavior, user interfaces for interaction through gestures or emotions, natural language interfaces, etc.
There are very few scientific studies on the use of SARs in speech and language therapy practice [14,15,16,17,18,19,20,21,22,23,24,25]. The most frequently used robot is the NAO, which takes part in individual sessions with children with CD. Reported results show that it increases motivation and enhances children’s attention [12,13,14,15,16,17]. In addition to making the intervention more engaging, the robot supports therapists and the child’s family. For example, NAO does not have a human mouth and does not allow lip reading, which makes children rely on their hearing when the robot speaks [15]. Experiments [17] on the use of NAO in speech therapy conducted by members of the project team led to the conclusion that the environment in which children and robots communicate needs to be expanded by applying more innovative image recognition and improving verbal interactivity with the robots. For example, one of children’s favorite games is shopping and telling a story via a series of pictures, but picture-based actions are difficult for NAO to animate, while a 3D project offering shopping in a virtual grocery store would be more realistic and attractive. In addition, NAO is not as engaging for children with CD in middle schools, so different 3D applications as well as a more innovative SAR need to be developed and used. Other robots used to improve the social and communication skills of children with CD are robots that change their emotional facial expression or have cloud-based chat services, such as iRobiQ [16] and QTrobot [18]. Unfortunately, their price is not affordable for home use.
Robot assistants using other innovative assistive technologies in speech therapy intervention and inclusive education are presented in [26,27]. They integrate intelligent ICT components and tools and robotic systems such as cloud services [27], expert systems for speech therapy purposes, and a database of knowledge, ontologies, and concepts from the language–speech field [26]. Assistive technologies such as virtual reality are used more among children with autism spectrum disorders [28,29,30,31,32]. The STAR platform [29] is oriented toward speech and language therapy and integrates augmented reality for practicing communication skills, strategies for analyzing alternative communication, and applied behavior analysis (ABA). Another platform suitable for speech therapy is VRESS [31], which supports the development of customized virtual reality scenarios that help children practice and develop their social skills by participating in selected social stories. The platform is integrated with sensors for heart rate detection and eye tracking, which provide important feedback for further customization of scenarios as well as their evaluation. In general, eye tracking technology has recently been widely used to assess engagement in intervention tasks. With the help of new-generation Tobii eye- and head-tracking devices, not only the attention but also the emotions of the child can be assessed.
Although research on the use of social robots in communication disorders is limited, some studies have reported promising results. For example, in [33], children with special needs who interacted with the social robots Nao and CommU showed increased verbal production and engagement in therapy sessions. Another study [34] found that the presence of two social robots in a disability unit for adolescents with special needs led to improvements in articulation, verbal participation, and spontaneous conversations over a two-year period. The robot Kaspar has also been used in a long-term study by caregivers in a nursery school for children with ASD and has shown beneficial outcomes for the participants [35]: in some special moments, children used phrases and showed interactive behaviour learned during the interactions with Kaspar and applied them to situations outside of their play with the robot. Additionally, the humanoid robot iRobi positively impacted communication skills in children with pervasive developmental disorders using augmentative and alternative communication strategies. A low-cost robot named SPELTRA was also used to support therapy sessions for children with neurodevelopmental disorders [36], resulting in improvements in phonological, morphosyntactical, and semantic communication measures. The work in [37] designed a program for improving articulation in children with cleft lip and palate using a social robot named Buddy. Similarly, Castillo et al. [38] created an application utilizing a desktop social robot called Mini to support rehabilitation exercises for adults with apraxia. In addressing stuttering, Kwaśniewicz et al. [39] employed the social robot Nao to provide an “echo” by combining delayed auditory feedback and choral speech while clients worked on improving their fluency.
The reviewed paper [6] provides a comprehensive map of the research on play-based interventions targeting social and communication outcomes for autistic children, along with a conceptual framework for the appraisal of play-based interventions intended to inform the clinical decision-making of practitioners and families. The study reported positive outcomes in terms of social and communication skills, including related skills in social cognition, following play interventions in autistic children aged 2–8 years. This paper summarizes the disparate literature on the role of play in social and communication interventions in a manner that is relevant to stakeholders. In terms of clinical implications, the conceptual framework proposed in that review can help practitioners evaluate the literature and aid families in making joint decisions about an intervention (p. 21).

3.2. Technical, Methodological, and Ethical Limitations and Challenges of Using SARs in Speech and Language Therapy

Technical limitations include the high cost of purchasing and maintaining SARs, potential technical malfunctions and limitations in their ability for speech recognition and natural language processing capabilities, and emotional and social intelligence. Methodological limitations include the lack of standardization in the use of SARs in speech and language therapy. Ethical concerns include issues related to privacy and data protection, as well as potential negative effects on the therapeutic relationship between the therapist and the patient. In addition, there is a risk that children may become overly dependent on SARs for communication, rather than developing their natural communication skills.
The use of SARs in speech and language therapy can present several technical challenges and limitations, including:
  • Limited adaptability and personalization: Most SARs are pre-programmed with a fixed set of responses and behaviors, which may not be tailored to the individual needs and preferences of each patient.
  • Limited physical capabilities: SARs may have limited physical capabilities, such as the ability to manipulate objects or to move around in the environment, which may limit their effectiveness in certain therapy contexts.
  • Limited speech recognition and natural language processing capabilities: SARs may have difficulty accurately recognizing and understanding speech, especially in noisy environments or when dealing with non-standard dialects or accents, or in cases of speech and/or language disorders.
  • Limited emotional and social intelligence: Although SARs are designed to interact with humans, they may lack the emotional and social intelligence needed to provide appropriate responses to patients who are experiencing strong emotions or who have complex social communication needs.
  • Technical failures and maintenance issues: Like any technology, SARs may experience technical failures or require maintenance and updates, which can disrupt therapy sessions and create additional stress for patients and therapists.
  • Cost: The cost of SAR technology and maintenance may be prohibitively high for some healthcare organizations, limiting their opportunity to provide this type of therapy to patients who could benefit from it.
The use of SARs in speech and language therapy poses several methodological challenges, including:
  • Reliability and Validity: One of the main challenges is ensuring the reliability and validity of results when using SARs in speech and language therapy. This requires careful study design and controlled data collection methods to minimize sources of bias and error.
  • Usability and User Acceptance: SARs must be usable and acceptable to the target population, including children with communication disorders, to be effective. This may require significant efforts to design and refine the user interface and user experience of the robot.
  • Standardization: There is a lack of standardized protocols and assessment methods for using SARs in speech and language therapy, which can make it difficult to compare results across studies and determine the effectiveness of different approaches.
  • Evaluation: Assessing the effectiveness of SARs in speech and language therapy often requires multiple raters to evaluate the therapy sessions, so inter-rater reliability must be ensured.
  • Long-term effectiveness: Another challenge is demonstrating the long-term effectiveness of SARs in speech and language therapy. Many studies have only measured short-term outcomes, so longer-term studies are needed to determine the sustainability of the benefits of using SARs in therapy.
  • Number of participants: The sample is small in most published research on children/adolescents with communication disorders who interact with SARs. The study groups consist of heterogeneous types of neurodevelopmental disorders and lack control groups; therefore, it is difficult to apply statistical analysis.
The use of SARs in speech and language therapy poses several ethical challenges, including:
  • Privacy and Confidentiality: SARs collect and store sensitive information about the users, such as their speech and language patterns, which can raise concerns about privacy and confidentiality. This requires appropriate data protection measures, such as encryption and secure storage, to prevent unauthorized access to the data.
  • Bias and Discrimination: SARs are designed and programmed by humans, which raises the possibility of unintended bias and discrimination in their behavior and interactions with users. This requires careful consideration of the design and programming of SARs to ensure that they do not perpetuate or amplify existing biases and discrimination.
  • Responsibility and Liability: SARs are increasingly being used in healthcare settings, which raises questions about who is responsible and liable for any harm caused by their use. This requires clear and well-defined policies and procedures for the use of SARs in healthcare and speech and language therapy, as well as appropriate insurance coverage and risk management strategies.
  • Interpersonal Relationships: SARs may have the potential to affect interpersonal relationships and human interactions, including the relationships between patients, therapists, and caregivers. This requires careful consideration of the design and use of SARs to ensure that they enhance, rather than undermine, existing relationships and interactions.
  • Dependence and Over-Reliance: There is a risk that users may become overly dependent on SARs and cease to engage in important interpersonal relationships and activities, which can have negative impacts on their health and well-being. This requires careful monitoring and evaluation of the use of SARs in speech and language therapy to ensure that they are not creating negative consequences for users.

4. Related Works on Interactive Scenarios with Social Robots for the Therapy of Communication Disorders

This section presents the interactive play scenarios involving SARs for SLT derived from the surveyed papers. We analyzed some of the most useful, informative, or easily replicable scenarios in more detail. The interactive play scenarios, which have been verified by experiments, are organized and categorized in table format. Table 1 describes scenarios from pilot studies, while Table 2 presents scenarios from empirical studies.
The authors of article [40] describe a case study with an eight-year-old child with autism spectrum disorder (ASD). The research aims to improve the student’s joint attention and social skills. For this purpose, one assistive technology is used: the socially assistive robot NAO. The article offers the robot-assisted procedures used in the study to practitioners in schools. In addition, the authors claim that using robots to teach and practice social and communication skills can be interesting, motivating, and fun for students, and they encourage researchers to focus more on robots in their future work so that the communication skills of students with ASD can be enhanced.
In [41], robot-assisted therapy is applied to three children with ASD via a single assistive technology: a pet robot called CuDDler (A*STAR Singapore). The authors design a training protocol based on experimental psychology which emphasizes the cognitive mechanisms of responding to joint attention (RJA). Results indicate improvement in RJA in the post-training test compared with the pre-training test. However, the joint attention skills were evaluated soon after the last session (around 3 days later); therefore, future work should investigate whether the improvement in RJA persists in the long term. Moreover, follow-up studies should include a larger sample and test the effects of longer training. The authors also suggest that future work should develop methods to make robot training more engaging.
In [42], the scientists apply music-based therapy, assisted by the social robot NAO, to four autistic children. The case study lasted three months, and the results show that the students learned to play musical notes and that the scenarios positively influenced fine movements, communication abilities, and autism as a whole. The limitations of that study are related to the small number of autistic participants and limited access to valid tools that can measure children’s behavior accurately. Thus, the authors propose larger samples in future research so that statistical analysis can be applied.
The article [43] presents a case study in which children with ASD interact with the humanoid robot NAO in a football game scenario over four sessions. The qualitative and quantitative analyses show improvement in communication skills, social interplay, turn taking, and eye contact. Although the article provides valuable insights into child–robot interaction, the sample size is small, and a larger number of individuals should be considered in future work.
The case study in [44] again presents the use of the social robot NAO as an assistant in logopedic and pedagogical therapy with children with different needs, especially ones related to speech and language. The authors propose an architecture that develops “adaptive behavior” and is applied to engineer–therapist–child interaction. Findings in the study suggest that the use of humanoid robots is promising and can improve interventions in speech-therapy centers. Regarding future work, the scientists point to the development of intelligent algorithms so that the presence of the engineer–programmer in the sessions can be eliminated.
Another study [45] presents robot-based augmentative and alternative communication (AAC) for non-verbal children with communication disorders. The assistive technology used in the case study is the humanoid robot iRobi. The study evaluates changes in gestures, speech, vocalization, and verbal expression, and the results show a positive influence on the communication abilities of non-verbal children. The limitations of the work concern the small number of participants, and the authors recommend that the robot-based AAC be modified to expand the vocabulary of non-verbal children.
The authors of article [46] propose a conceptual framework for designing linguistic activities for children with developmental speech and language impairments or autism. This is a pilot study that develops and assesses only a tablet-based experimental condition; a subsequent empirical study is expected to examine children’s performance in linguistic activities during interaction with robots. The results of the study are promising, and in the future the scientists will explore the use of Activity Patterns for other technological solutions, such as augmented reality (AR), virtual reality (VR), and smart spaces.
The paper [47] introduces a newly developed socially assistive robot (MARIA T21) used in psychomotor therapies for children with Down syndrome and in psychosocial and cognitive therapies for children with ASD. The authors describe a pilot study with serious games in four stages and highlight the children’s emotional interplay with the robot; the robot MARIA T21 is observed to represent a notable gain in the field of assistive robotics. In terms of the limitations of the study, the main difficulty was encountered during the selection of children, due to the COVID-19 pandemic, which reduced the options of available clinics. Subsequently, the scientists carried out a new testing protocol with 15 children with ASD and will analyze the new data for future publications.
In [48], the authors propose the use of a robotic assistant and a mobile support environment in speech and language therapy. The approach in the pilot study is based on an integrative environment that involves mobile Information and Communications Technology (ICT) tools, an expert system, a knowledge layer, and standardized vocabularies. The experiment was tested on 65 children with different types of disabilities, and the results achieved are encouraging: it proved possible to automate several activities linked to speech language therapy. Regarding future work, the scientists propose developing an inference mechanism that can automatically select activities according to changes in the concentration level of each patient and designing more specific activities according to the stimulation requirements of the skills affected in each patient.
The authors of [49] present a robotic assistant that can provide support during speech language therapy for children with communication disorders. The researchers conducted a pilot experiment in two stages: one to determine the robot’s response in controlled environments, and another to analyze the children’s responses to the robot. The variety of functionalities implemented in the robot enables speech language therapists to perform therapy sessions efficiently. In terms of future work, the scientists propose developing a mobile application for smart watches to monitor the user’s response to robot-assisted therapy (pulse, vital signs, etc.); integrating the robotic assistant into multisensory stimulation rooms; developing a module based on computer vision to incorporate facial gesture recognition in support of oral motor therapy; and designing a voice analyzer to determine the patient’s voice quality (vocal tract configuration + laryngeal anatomy + learned component).
The research study [14] is based on a team approach involving professionals in engineering/programming, qualitative analysis and education, psychology, special education, and speech therapy. For logopedic and educational treatment, an Aldebaran Robotics NAO named “EBA” (educational behavior aid) is used. Five Spanish-speaking children aged 9–12 years are included in the study, assessed by a psychologist and a speech therapist to determine their cognitive and language levels of development. The children have different diagnoses: (1) cleft palate and cleft lip; (2) specific language impairment (SLI) comorbid with attention deficit and hyperactivity disorder (ADHD), dyslexia, and disruptive mood dysregulation disorder (DMDD) with oppositional defiant disorder (ODD); (3) SLI; (4) language developmental delay and several types of dyslalia; and (5) dyslexia with evolving dysgraphic and dysorthographic symptoms and an ADD. The sessions with EBA address the following treatment aspects: (1) for the children with dyslalia and dyslexia: reading comprehension, short/medium- and long-term memory, literacy, storytelling, tales, vocabulary, phonological awareness, articulation and phonetic–phonological pronunciation, and phonetic segmentation; (2) for the children with ADD: attention and writing; and (3) for the children with SLI: oral and written comprehension, reading, and writing. The obtained results draw attention to both the positive aspects and the limitations of implementing NAO in speech therapy. The data show improvement in the children's learning process, vocalization, and sentence construction and structure, as well as an increase in the children's self-confidence and motivation. An essential part of the results in an educational and therapeutic context is the non-judgmental nature of the work and the well-structured language, which children understand more easily.
The limitations of the study are the small number of children included and the use of a single camera, which made it impossible to analyze the children's entire non-verbal behavior. Concerning future work, the research team states that more research is needed to determine how EBA can best be implemented in speech and language therapy for more inclusive education.
The paper [50] presents the use of a robot-like assistant that provides tactile, auditory, and visual stimuli for children with speech and language disorders, with the aim of motivating the children to perform exercises and treatment activities and of extending their attention span. For the study, a FONA robot was developed that “has 6 degrees of freedom (can move arms and walk), weights six pounds, implements a touch screen (5 inches), has three push buttons (chest) and two resistive sensors (head) that can trigger any script written in Python or C, and can play sounds through a Bluetooth speaker” (p. 588). The two-month study included eight children, aged 4–5 years, with functional dyslalia. The speech and language therapy is based on the combination of the robotic assistant, a rules-based reasoning system, and Motor Learning Theory. The data show an improvement in attention span during the treatment sessions with the robot (the average child's attention span increased by more than 17 min in a 40 min therapy session). As future work, the authors plan to develop a remote control system based on video game controls that helps children with severe motor disabilities interact with the box, and to add new elements and relationships to the ontology, allowing the modeling of psychomotor abilities and concept development of children up to 7 years of age.
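The paper states that FONA's push buttons and resistive sensors “can trigger any script written in Python or C”. That coupling can be sketched as a minimal event dispatcher; the event names and bound activities below are hypothetical illustrations, not taken from [50]:

```python
from typing import Callable, Dict


class SensorDispatcher:
    """Map sensor event names to therapy-activity scripts (a minimal sketch).

    Hypothetical: mirrors the idea that a button or touch-sensor press on the
    robot triggers an arbitrary script, without modeling real FONA hardware.
    """

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[], str]] = {}

    def bind(self, event: str, handler: Callable[[], str]) -> None:
        self._handlers[event] = handler

    def trigger(self, event: str) -> str:
        # Unknown events fall through to a no-op so a stray press
        # cannot interrupt a therapy session.
        handler = self._handlers.get(event, lambda: "ignored")
        return handler()


# Hypothetical bindings: a chest button starts an exercise,
# a head touch plays an encouragement sound.
robot = SensorDispatcher()
robot.bind("chest_button_1", lambda: "start articulation exercise")
robot.bind("head_touch", lambda: "play encouragement sound")
```

Keeping the sensor-to-script mapping in a table like this (rather than hard-wiring it) is what lets a therapist reconfigure which activity each button launches between sessions.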
The paper [39] describes an application that allows a humanoid robot NAO to act as an assistant and therapist for people who stutter, based on auditory and visual feedback. For auditory feedback, the “echo” method, also known as delayed auditory feedback (DAF), is modified. For visual feedback, the robot moves its hands according to the shape of the speech-signal envelope, and speech can additionally be paced with a metronome effect. The DAF applications are installed both on the computer and on NAO, because the microphone must be close to the mouth; for this purpose the computer/laptop's microphones are used. The sound from the microphones is streamed by the application over the network to NAO's speakers with a short delay, creating the impression of an echo. The robot moves its hands based on the auditory signal, and in metronome mode the robot's hand moves up and down between its lowest and highest positions. The authors state that the robot can substitute for the speech therapist while accompanying the patient and leading the treatment, using various types of exercises such as reading, conversing, or delivering a monologue. The researchers suggest that the treatment procedures be used for at least 20 min every day. Another advantage is that the robot can use pre-prepared questions directly, giving the patient the impression of a real conversation; this helps break through the social barriers of shyness and discomfort that result from stuttering. For future research, the application will be tested on a group of people with speech disfluency and compared with other methods.
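The echo (DAF) effect described above amounts to a fixed-length delay line between the microphone and the robot's speakers. The sketch below models only the delay itself on a single audio channel; the sample rate and delay value are illustrative assumptions, and the real application in [39] streams the audio over the network to NAO's speakers:

```python
from collections import deque


class DelayLine:
    """Fixed-delay audio buffer: each pushed sample comes back delay_ms later.

    A sketch of the DAF principle only; it does not model microphone capture
    or network streaming to the robot.
    """

    def __init__(self, sample_rate: int, delay_ms: float) -> None:
        n = int(sample_rate * delay_ms / 1000)
        # Pre-fill with silence so the first n output samples are silent.
        self._buf = deque([0.0] * n, maxlen=n + 1)

    def process(self, sample: float) -> float:
        self._buf.append(sample)
        return self._buf.popleft()


# Illustrative values: a 50 ms delay at 16 kHz is 800 samples of latency.
line = DelayLine(16000, 50.0)
out = [line.process(s) for s in [0.5] * 1000]
```

Feeding the same delayed stream to an envelope follower would also drive the hand movements described in the paper, since both feedback channels derive from the one microphone signal.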
The study [19] investigates parental attitudes towards robot-assisted therapy in pediatric (re)habilitation with humanoid robots. Thirty-two parents took part in the study. They gave feedback about their children, who participated in treatment with a mobile anthropomorphic robot with cognitive skills, and completed the Frankenstein Syndrome Questionnaire. The children had been referred to a two-week-long corrective exercise training program due to poor posture.
The findings of this investigation show that parental attitudes towards humanoid robots and robot-assisted therapy are positive and that most parents are informed about the latest technological advances. The parents hold neutral attitudes and positive expectations of humanoid robots and accept them socially. It was also found that less educated parents were more apprehensive of the technology, and older parents tended to be more anxious.
In [51], the authors investigate the potential of a robotic learning assistant that could monitor the engagement of children with ASD and support them when necessary. The scientists focused on acceptable and content-specific behavior of the robot, which they handled through an AI-based detection system. The results of the survey suggest that an emphasis on speech and interaction competences is needed, as well as adaptability, customizability, and variability of the robot's behavior. In addition, the robot needs to detect and react to the current state of the children. The study has a few limitations. On the one hand, the sample size is relatively small, and the participants were recruited through the same ASD program and interviewed via Internet-based technologies, which can limit the applicability of the findings. On the other hand, this was a qualitative study, so it is not possible to determine the frequency of the identified requirements.

4.1. Components of the Play Scenarios (in Tables)

Drawing on the current survey, useful interactive play scenarios that involve social robots as assistants during SLT are presented in a table format. The format of the play scenarios is inspired by previously developed robot-assisted play scenarios originating from [13] and broadly conforms to several current robot-assisted interventions used with children, in particular children with ASD [52,53].

4.1.1. Description of Interactive Scenarios with SARs (Pilot Studies)

A review was made of the scientific studies in which interactive scenarios with SARs are presented to support the development of children with CD. Table 1 presents the interactive scenarios with SARs described in pilot studies, ordered chronologically with the most recent publications first.
Table 1. Description of interactive scenarios with SARs (pilot studies).
Reference: [17], 2022. Name of Scenario: Farm Animals—Voices and Names
Objectives: Remote speech and language therapy; enriching the child's vocabulary.
Treatment domain, Type of CD: Language domain, farm animals' voices and names; children with neurodevelopmental disorders.
Treatment technique: Identification of farm animal voices; identification and pronunciation of words for farm animals.
Play type (social∣cognitive): Cognitive play.
Interaction technique: Child–robot interaction.
Age: Four years old.
Participants' role and behavior: There are five participants in this scenario: a speech and language therapist (controls the game), a social robot NAO (instructor), a social robot EmoSan (playmate), a parent (co-therapist), and a child with neurodevelopmental disorders (playmate).
Activity description: [17], page 123 (https://youtu.be/KpeQcIXG6cA, accessed on 16 April 2023).
Robot configuration and mission: A social robot NAO, a social robot EmoSan, pictures of farm animals, a tablet and a laptop, BigBlueButton platform for telepresence.
Used software: NAOqi software v.2.8.6.23, Python v.2.7, Node-RED v.2.1.3.
Setting and time: This scenario was carried out in a clinical setting over multiple sessions.
Variation: The activity can also include more participants.
Reference: [17], 2022. Name of Scenario: Storytime
Objectives: Following a story and representing a story as a sequence of scenes in time.
Treatment domain, Type of CD: Language domain; children with neurodevelopmental disorders.
Treatment technique: Story as a sequence of scenes in time.
Play type (social∣cognitive): Cognitive play.
Interaction technique: Child–robot interaction.
Participants' role and behavior: There are four participants in this scenario: a speech and language therapist (controls the game), a social robot NAO (instructor), a social robot EmoSan (playmate), and a child with neurodevelopmental disorders (playmate).
Age: 3–10 years old (15 children).
Activity description: [17], page 123 (https://youtu.be/AZhih7KlaPc, accessed on 16 April 2023).
Robot configuration and mode of operation: A social robot NAO; a social robot EmoSan was used with 3 pictures of story scenes and a whisk.
Used software: NAOqi software v.2.8.6.23, Python 2.7, Node-RED v.2.1.3.
Setting and time: This scenario was carried out in a clinical setting over multiple sessions.
Variation: The activity can also include more participants to promote cooperative play.
Reference: [46], 2021. Name of Scenario: Different interactive activities with a tablet; robots are expected to be used
Objectives: To propose a conceptual framework for designing linguistic activities (for assessment and training), based on advances in psycholinguistics.
Treatment domain, Type of CD: Speech and language impairments—developmental language disorder, autism spectrum disorder.
Treatment technique: Interactive therapeutic activities.
Play type (social∣cognitive): Social and cognitive.
Interaction technique: The child performs activities on a tablet.
Age: 4–12 years old.
Participants' role and behavior: The participants in this scenario are the children (30), performing activities via a tablet.
Activity description: [46], pages 2–6.
Robot configuration and mission: Socially assistive robots/tablets with different modules for training and assessing the linguistic capabilities of children with structural language impairments.
Used software: Socially assistive robot and/or mobile device.
Setting and time: This scenario was carried out in clinical settings over multiple sessions; two groups were included—a target and a control group.
Variation: There are different linguistic tasks which evaluate different linguistic skills. Activities can include more than one participant.
Reference: [47], 2021. Name of Scenario: Serious games conducted by a social robot via an embedded mini-video projector
Objectives: To show the application of a robot, called MARIA T21, as a therapeutic tool.
Treatment domain, Type of CD: Autism spectrum disorder, Down syndrome.
Treatment technique: Interactive serious games.
Play type (social∣cognitive): Social and cognitive.
Interaction technique: Robot–child interaction.
Age: 4–9 years old.
Participants' role and behavior: The participants in this scenario are the social robot and eight children, supervised by the therapist and a group of researchers.
Activity description: [47], pages 6–14 (see Section 5, Methodology).
Robot configuration and mission: A new socially assistive robot termed MARIA T21, which uses an innovative embedded mini-video projector able to project serious games on the floor or tables.
Used software: A set of libraries (PyGame), written in Python 2.7; an open-source robot operating system.
Setting and time: The tests were carried out partly in a countryside region and partly in a metropolitan area, in order to expand socioeconomic diversity.
Variation: The games were created with all their possible events, characters, awards, and stories and included different types of serious games.
Reference: [52], 2021. Name of Scenario: Questions and Answering with NAO Robot
Objectives: Initiation of conversation.
Treatment domain, Type of CD: Language domain; language disorder due to ASD.
Treatment technique: Asking and answering simple questions.
Play type (social∣cognitive): Social play.
Interaction technique: Child–robot interaction.
Age: 5–24 years old (4 children).
Participants' role and behavior: There are five participants in this scenario: two teachers, two researchers, the social robot, and the child.
Activity description: [52], page 0357.
Robot configuration and mission: A social robot NAO is talking with a child.
Used software: NAOqi software v.2.8.6.23.
Setting and time: This scenario was carried out in a classroom of a special school, in 4 sessions.
Variation: -
Reference: [52], 2021. Name of Scenario: Physical Activities with NAO Robot
Objectives: Initiation of physical movements.
Treatment domain, Type of CD: Basic communication domain; social and communication interaction due to ASD.
Treatment technique: Provocation of imitation of physical movements.
Play type (social∣cognitive): Social play.
Interaction technique: Child–robot interaction.
Age: 5–24 years old (4 children).
Participants' role and behavior: There are five participants in this scenario: two teachers, two researchers, the social robot, and the child.
Activity description: [52], page 0357.
Robot configuration and mission: A social robot NAO is interacting with a child.
Used software: NAOqi software v.2.8.6.23.
Setting and time: This scenario was carried out in a classroom of a special school, in 4 sessions.
Variation: -
Reference: [54], 2021. Name of Scenario: I like to eat popcorn
Objectives: Learning Bulgarian Sign Language.
Treatment domain, Type of CD: Language domain; language disorder due to hearing impairment.
Treatment technique: Demonstration of signs, video, and pronunciation of words from Sign Language.
Play type (social∣cognitive): Social play.
Interaction technique: Child–robot interaction.
Participants' role and behavior: There are two participants in this scenario: a social robot (instructor) and a typically developing child.
Age: 5 years.
Activity description: [54], pages 72–73.
Robot configuration and mode of operation: A social robot Pepper.
Used software: NAOqi v.2.8.6.23.
Setting and time: This scenario was carried out in a lab setting, in one session.
Variation: The activity can also include more participants to promote cooperative play.
Reference: [49], 2016. Name of Scenario: Different activities between a robot and children
Objectives: To present a robotic assistant which can provide support during therapy and can manage the information.
Treatment domain, Type of CD: Communication disorders.
Treatment technique: Tasks and exercises for language, pragmatics, phonetics, oral-motor, phonological, morphosyntactic, and semantic interventions.
Play type (social∣cognitive): Social and cognitive.
Interaction technique: Robot–child interaction.
Age: -
Participants' role and behavior: The participants in this scenario are the robot and 32 children from regular schools.
Activity description: [49], see pages 4–6.
Robot configuration and mission: The robot was designed via 3D technology and has a humanoid form with the possibility to wear any costume representing animals (dogs, cats, etc.), children (boys or girls), or any other characters. The main controller of the robot (brain) is a Raspberry Pi 2 board.
Used software: The Raspbian operating system (Raspberry Pi 2 Model B+).
Setting and time: The pilot experiment consists of two stages—lab tests to determine the robot's performance (over multiple activities) and analyses of the patients' responses to the robot's appearance.
Variation: The robot offers different activities (playing, dancing, talking, walking, acting, singing, jumping, moving, and receiving voice commands). The system automates report generation, monitoring of activities, patient data management, and others. The robot's appearance can be customized according to the preferences of the patients.
Reference: [36], 2016. Name of Scenario: Therapy mode
Objectives: Development of phonological, morphological, and semantic areas.
Treatment domain, Type of CD: Language and speech domain; children with cerebral palsy.
Treatment technique: The robot displays on its screen activities related to speech therapy, such as phonological, semantic, and morphosyntactic exercises.
Play type (social∣cognitive): Cognitive play.
Interaction technique: Child–robot interaction.
Age: 7 years.
Participants' role and behavior: There are three participants in this scenario: a speech and language therapist, the social robot, and the child.
Activity description: [36], page 4.
Robot configuration and mission: SPELTRA (Speech and Language Therapy Robotic Assistant) with a display, based on a Raspberry Pi Model 2 B+ (2015).
Used software: A mobile application (Android–Raspberry Pi Model 2 B+, 2015).
Setting and time: This scenario was carried out in a school setting, in three sessions.
Variation: Generates a complete report of the activities and language areas on which the child has worked; it could be used by parents and their children at home.
Reference: [55], 2016. Name of Scenario: Fruit Salad
Objectives: Assessment of nonverbal communication behavior and verbal utterances; transferring skills in life.
Treatment domain, Type of CD: Nonverbal behavior and language domain; children with ASD.
Treatment technique: The robot had the role of presenting each trial by following the same repetitive pattern of behaviors: calling the child's name, looking at each fruit, expressing the pre-established facial expression, and providing an answer at the end, after the child placed a fruit in the salad bowl.
Play type (social∣cognitive): Social play.
Interaction technique: Child–robot interaction.
Age: 5–7 years.
Participants' role and behavior: There are three participants in this scenario: an adult, the social robot, and the child.
Activity description: [55], page 118.
Robot configuration and mission: Social robot Probo and plastic fruit toys.
Used software: ELAN—Linguistic Annotator, version 4.5.
Setting and time: This scenario was carried out in the therapy rooms of three schools, in two sessions.
Variation: The game is played in a child–adult condition or in a child–robot condition.
Reference: [56], 2016. Name of Scenario: Shapes
Objectives: Assessment of decoding/understanding words.
Treatment domain, Type of CD: Language domain; language disorder due to hearing impairment.
Treatment technique: Identification; listening to and following spoken instructions; a Sign Language interpreter helps with the instructions if the child needs it.
Play type (social∣cognitive): Cooperative and practice play.
Interaction technique: Child–robot interaction.
Participants' role and behavior: There are three participants in this scenario: a speech and language therapist (mediator), a social robot (instructor), and the child with hearing impairment.
Age: 5–15 years old.
Activity description: [56], page 257.
Robot configuration and mode of operation: A social robot NAO was used with pictures of different shapes and colors.
Used software: NAOqi software v.2.8.6.23.
Setting and time: This scenario was carried out in a school setting, in one session.
Variation: The activity can also include more participants to promote cooperative play.
Reference: [56], 2016. Name of Scenario: Emotions
Objectives: Understanding emotion sounds and naming the emotion; transferring skills in life.
Treatment domain, Type of CD: Language domain; language disorder due to hearing impairment.
Treatment technique: Identification of emotion sounds; a Sign Language interpreter helps with the instructions if the child needs it.
Play type (social∣cognitive): Cognitive play.
Interaction technique: Peer interaction.
Participants' role and behavior: There are three participants in this scenario: a speech and language therapist (mediator), a social robot (instructor), and the child with hearing impairment.
Age: 5–15 years.
Activity description: [56], page 257.
Robot configuration and mode of operation: A social robot NAO was used with pictures of emotions.
Used software: NAOqi software v.2.8.6.23.
Setting and time: This scenario was carried out in a school setting, in one session.
Variation: The activity can also include more participants to promote cooperative play.
Reference: [56], 2016. Name of Scenario: Shopping_1
Objectives: Identification of environmental sounds and word pronunciation; transferring skills in life.
Treatment domain, Type of CD: Language domain; language disorder due to hearing impairment.
Treatment technique: Identification of environmental sounds; demonstration of body movements; a Sign Language interpreter helps with the instructions if the child needs it.
Play type (social∣cognitive): Cognitive play.
Interaction technique: Peer interaction.
Participants' role and behavior: There are three participants in this scenario: a speech and language therapist (mediator), a social robot (instructor), and the child with hearing impairment.
Age: 5–15 years.
Activity description: [56], page 257.
Robot configuration and mode of operation: A social robot NAO and hygienic products (soap, shampoo, sponge, toothpaste, etc.).
Used software: NAOqi software v.2.8.6.23.
Setting and time: This scenario was carried out in a school setting, in one session.
Variation: The activity can also include more participants to promote cooperative play.
Reference: [56], 2016. Name of Scenario: Shopping_2
Objectives: Identification of sentences and word pronunciation; transferring skills in life.
Treatment domain, Type of CD: Language domain; language disorder due to hearing impairment.
Treatment technique: Identification of sentences; categorization of words according to a certain criterion; a Sign Language interpreter helps with the instructions if the child needs it.
Play type (social∣cognitive): Cognitive play.
Interaction technique: Peer interaction.
Participants' role and behavior: There are three participants in this scenario: a speech and language therapist (mediator), a social robot (instructor), and the child with hearing impairment.
Age: 5–15 years.
Activity description: [56], page 258.
Robot configuration and mode of operation: A social robot NAO and toys.
Used software: NAOqi software v.2.8.6.23.
Setting and time: This scenario was carried out in a school setting, in one session.
Variation: The activity can also include more participants to promote cooperative play.
Reference: [57], 2016. Name of Scenario: Order a doughnut
Objectives: How to order a doughnut from a menu in a doughnut shop; transferring skills in life.
Treatment domain, Type of CD: Language domain; ASD.
Treatment technique: Imitation of actions and words.
Play type (social∣cognitive): Social play.
Interaction technique: Child–robot interaction.
Participants' role and behavior: The child's family, the robot programmer, the special education teacher, the social robot NAO, and the child.
Age: 6 years old.
Activity description: [57], pages 132–133.
Robot configuration and mode of operation: A social robot NAO and a menu.
Used software: NAOqi software v.2.8.6.23.
Setting and time: This scenario was carried out at the subject's home, in two sessions.
Variation: -
Reference: [57], 2016. Name of Scenario: Joint Attention
Objectives: Joint attention skills.
Treatment domain, Type of CD: Joint attention; developmental delay and speech-language impairments.
Treatment technique: Understanding instructions.
Play type (social∣cognitive): Social play.
Interaction technique: Child–robot interaction.
Participants' role and behavior: The robot programmer, the speech and language pathologist, the social robot NAO, and two children.
Age: 7 and 9 years old.
Activity description: [57], page 135.
Robot configuration and mode of operation: A social robot NAO and objects in the speech and language pathologist's office.
Used software: NAOqi software v.2.8.6.23.
Setting and time: This scenario was carried out at the speech and language pathologist's office, in five sessions.
Variation: After each session, the robot's behaviors were modified according to the child's needs.
Reference: [57], 2016. Name of Scenario: Joint Attention, Turn-Taking, Initiative
Objectives: Joint attention; introduction of turn-taking and initiative skills.
Treatment domain, Type of CD: Language domain; speech-language impairment.
Treatment technique: Imitation of actions and sentences.
Play type (social∣cognitive): Social play.
Interaction technique: Child–robot interaction.
Participants' role and behavior: The robot operator, the speech and language pathologist, the social robot NAO, and a child.
Age: 7 years.
Activity description: [57], pages 136–137.
Robot configuration and mode of operation: A social robot NAO and cue cards.
Used software: NAOqi software v.2.8.6.23.
Setting and time: This scenario was carried out in the school's playroom over eight months, with twice-weekly sessions.
Variation: Playing the game without the cue cards.
Reference: [48], 2015. Name of Scenario: Auditory Memory Stimulation, Comprehensive Reading, Visual Stimulation, Stimulation of Motor Skills
Objectives: To offer a robotic assistant able to provide support for speech language practitioners.
Treatment domain, Type of CD: Autism spectrum disorder, Down syndrome, cerebral palsy, mild and moderate intellectual disability, epilepsy, unspecified intellectual disabilities, other disabilities.
Treatment technique: Interactive therapy exercises, assessment tasks.
Play type: Social and cognitive.
Interaction technique: Therapist–patient interaction via an intelligent integrative environment.
Age: -
Participants' role and behavior: The participants in this scenario are the therapist, the children, and the robotic assistant (the model can be used by relatives and students, too).
Activity description: [48], page 75.
Robot configuration and mission: RAMSES (v.2)—an intelligent environment that uses mobile devices, embedded electronic systems, and a robotic assistant. The robotic assistant consists of a central processor (an Android smartphone or tablet, or an embedded electronic system) and a displacement system.
Used software: Electronic platform.
Setting and time: This is a pilot study, conducted in clinical settings over multiple activities.
Variation: The proposed model relies on different ICT tools, knowledge structures, and functionalities.
Reference: [58], 2014. Name of Scenario: The impact of humanoid robots in teaching sign languages
Objectives: Teaching Sign Language.
Treatment domain, Type of CD: Language domain; language disorder due to hearing impairment.
Treatment technique: Demonstration of sign language and special flashcards illustrating the signs.
Play type (social∣cognitive): Cognitive play.
Interaction technique: Child–robot interaction.
Age: 9–16 years (10 children with hearing impairment).
Participants' role and behavior: Individual and group sessions with a sign language therapist, a social robot, and a child/children.
Activity description: [58], pages 1124–1125.
Robot configuration and mission: A social robot Robovie R3 and pictures of signs.
Used software: Robovie Maker 2 software (v.1.4).
Setting and time: This scenario was carried out in a computer laboratory, in one session.
Variation: Individual or group sessions.
Reference: [59], 2014. Name of Scenario: Sign Language Game for Beginners
Objectives: Learning signs from Turkish Sign Language.
Treatment domain, Type of CD: Language domain; language disorder due to hearing impairment.
Treatment technique: Identification of words in Turkish Sign Language at beginners' level (children of an early age group); most frequently used daily signs.
Play type (social∣cognitive): Cognitive play.
Interaction technique: Child–robot interaction.
Age: Average age of 10:6 (years:months).
Participants' role and behavior: There are two participants in this scenario: the typically developing child and a humanoid social robot (instructor).
Activity description: [59], pages 523, 525.
Robot configuration and mission: A social robot NAO H25 and a modified Robovie R3 robot.
Used software: NAOqi software v.2.8.6.23.
Setting and time: This scenario was carried out in a university setting, in one session.
Variation: The game can also be played with children with hearing impairment.

4.1.2. Description of Interactive Scenarios with SARs (Empirical Use Studies)

Table 2 presents interactive scenarios with SARs described in empirical use cases.
Table 2. Description of human–robot interactive scenarios—empirical.
Reference: [60], 2022. Name of Scenario: Ling Six-Sound Test
Objectives: Assessment of auditory skills/identification.
Treatment domain, Type of CD: Frequency speech sounds; children with neurodevelopmental disorders.
Treatment technique: Discrimination and identification of speech sounds.
Play type (social∣cognitive): Cooperative and practice play.
Interaction technique: Child–robot interaction.
Participants' role and behavior: There are four participants in this scenario: a speech and language therapist (controls the game), a social robot NAO (instructor), a social robot EmoSan (playmate), and a child with neurodevelopmental disorders (playmate).
Age: 3–10 years old.
Activity description: [60], page 491.
Robot configuration and mode of operation: A social robot NAO; a social robot EmoSan was used with pictures of different speech sounds.
Used software: NAOqi software v.2.8.6.23 and Python 2.7.
Setting and time: This scenario was carried out in a clinical setting over multiple sessions.
Variation: The instructions play in random order. The activity can also include more participants to promote cooperative play.
Reference: [60], 2022. Name of Scenario: Warming up
Objectives: Identification of speech.
Treatment domain, Type of CD: Common greeting and introduction of someone; children with neurodevelopmental disorders.
Treatment technique: Identification of speech.
Play type (social∣cognitive): Cooperative and practice play.
Interaction technique: Child–robot interaction.
Participants' role and behavior: There are four participants in this scenario: a speech and language therapist (controls the game), a social robot NAO (instructor), a social robot EmoSan (playmate), and a child with neurodevelopmental disorders (playmate).
Age: 3–10 years old.
Activity description: [60], page 491.
Robot configuration and mode of operation: A social robot NAO, a social robot EmoSan.
Used software: NAOqi software v.2.8.6.23 and Python 2.7.
Setting and time: This scenario was carried out in a clinical setting over multiple sessions.
Variation: The activity can also include more participants to promote cooperative play.
Reference: [60], 2022. Name of Scenario: Farm animals—receptive vocabulary
Objectives: Receptive vocabulary of children for this particular closed set of words.
Treatment domain, Type of CD: Receptive vocabulary of a closed set of words; children with neurodevelopmental disorders.
Treatment technique: Identification of vocabulary of a closed set of words.
Play type (social∣cognitive): Cooperative and practice play.
Interaction technique: Child–robot interaction.
Participants' role and behavior: There are four participants in this scenario: a speech and language therapist (controls the game), a social robot NAO (instructor), a social robot EmoSan (playmate), and a child with neurodevelopmental disorders (playmate).
Age: 3–10 years old.
Activity description: [60], page 492.
Robot configuration and mode of operation: A social robot NAO; a social robot EmoSan was used with pictures of different farm animals.
Used software: NAOqi software v.2.8.6.23 and Python 2.7.
Setting and time: This scenario was carried out in a clinical setting over multiple sessions.
Variation: The instructions are played in random order. The activity can also include more participants to promote cooperative play.
Reference: [60], 2022. Name of Scenario: Colors
Objectives: Receptive vocabulary of children for this particular closed set of words.
Treatment domain, Type of CD: Receptive vocabulary of a closed set of words; children with neurodevelopmental disorders.
Treatment technique: Identification of vocabulary from a closed set of words.
Play type (social/cognitive): Cooperative and practice play.
Interaction technique: Child–robot interaction.
Participants’ role and behavior: Four participants: a speech and language therapist (controls the game), the social robot NAO (instructor), the social robot EmoSan (playmate), and a child with neurodevelopmental disorders (playmate).
Age: 3–10 years old.
Activity description: [60], page 492.
Robot configuration and mode of operation: A social robot NAO; a social robot EmoSan was used with pictures of different colors.
Used software: NAOqi software v.2.8.6.23 and Python 2.7.
Setting and time: This scenario was carried out in a clinical setting over multiple sessions.
Variation: The instructions are played in random order. The activity can also include more participants to promote cooperative play.
Reference: [60], 2022. Name of Scenario: Shopping game
Objectives: Identification of environmental sounds and expressive vocabulary of a closed set of words; transferring skills into everyday life.
Treatment domain, Type of CD: Identification of sounds and expressive vocabulary of a closed set of words; children with neurodevelopmental disorders.
Treatment technique: Identification of sounds and words.
Play type (social/cognitive): Cooperative and practice play.
Interaction technique: Child–robot interaction.
Participants’ role and behavior: Four participants: a speech and language therapist (controls the game), the social robot NAO (instructor), the social robot EmoSan (playmate), and a child with neurodevelopmental disorders (playmate).
Age: 3–10 years old.
Activity description: [60], page 492.
Robot configuration and mode of operation: A social robot NAO; a social robot EmoSan was used with pictures of different colors.
Used software: NAOqi software v.2.8.6.23 and Python 2.7.
Setting and time: This scenario was carried out in a clinical setting over multiple sessions.
Variation: The instructions are played in random order. The activity can also include more participants to promote cooperative play.
Reference: [61], 2022. Name of Scenario: Imitation games and speech therapy sessions
Objectives: To compare children’s engagement while playing a mimic game with the affective robot versus the therapist; to assess the efficacy of the robot’s presence in the speech therapy sessions alongside the therapist.
Treatment domain, Type of CD: Language disorders.
Treatment technique: Mimic game; speech therapy sessions.
Play type: Social and cognitive play.
Interaction technique: Robot–child–therapist interaction.
Age: Average age of 6.4 years.
Participants’ role and behavior: The social robot RASA, six children in the intervention group, six children in the control group, and the therapist.
Activity description: [61], pages 10–11.
Robot configuration and mission: A humanoid Robot Assistant for Social Aims (RASA), designed primarily for teaching Persian Sign Language to children with hearing disabilities.
Used software: The robot is controlled by a central PC carrying out high-level control and two local controllers.
Setting and time: Scenarios were carried out in a clinical setting over ten therapy sessions (one per week).
Variation: The robot uses an external graphics processing unit to perform facial expression recognition due to the limited power of its onboard computer.
Reference: [12], 2022. Name of Scenario: Reading skills
Objectives: Social robots are used as tutors with the assistance of a special educator.
Treatment domain, Type of CD: Specific Learning Disorder (dyslexia, dysgraphia, dysorthography).
Treatment technique: Teaching cognitive and metacognitive strategies.
Play type: Cognitive play.
Interaction technique: Robot–child interaction enhanced by the special education teacher.
Age: Mean age 8.58 years.
Participants’ role and behavior: All scenarios were similar in content, structure, and succession for both the NAO and the control group, the only difference being that the welcoming, instructions, support, and feedback for the activities were delivered by the special educator for the control group.
Activity description: [12], pages 4–5.
Robot configuration and mission: A humanoid robot NAO.
Used software: NAOqi software v.2.8.6.23.
Setting and time: Interventions took place in a specially designed room in a center; 24 sessions, with a frequency of two sessions per week.
Variation: -
Reference: [14], 2021. Name of Scenario: Therapy session with EBA
Objectives: Formulation of questions and answers; comprehension and construction of sentences; articulation and pronunciation; voice volume; dictations; literacy; reading comprehension.
Treatment domain, Type of CD: Treatment domain: nasality, vocalization, language, attention, motivation, memory, calculation, visual perception. Type of CD: children with language disorders (cleft palate and cleft lip, ADHD, dyslexia, language development delay).
Treatment technique: Storytelling; dictations to check spelling; questions about a text that has been read or listened to; asking the child for words starting with a given letter, or to identify how many syllables a spoken word contains; prompting the child to repeat more clearly anything not pronounced properly; giving the child instructions for all defined activities.
Play type (social/cognitive): Social and cognitive play.
Interaction technique: Robot–child–therapist interaction.
Participants’ role and behavior: Three participants: a speech and language therapist (controls the game), a social robot NAO, and the child with a language disorder.
Age: 9–12 years old (five children).
Activity description: [14], pages 8–9.
Robot configuration and mode of operation: A social robot NAO was used, preprogrammed with the modules: reading comprehension; dictations, stories, and vocabulary; improvement of oral comprehension; articulation and phonetic-phonological pronunciation; phonological awareness and phonetic segmentation; literacy skills.
Used software: NAOqi software v.2.8.6.23 and Python 2.7.
Setting and time: Thirty-minute sessions were conducted once a week for 30 weeks, during ordinary therapy sessions in a room at the speech therapy centre.
Variation: Possible software modifications for different behaviors and scenarios.
Reference: [44], 2020. Name of Scenario: Different scenarios for child–robot interaction
Objectives: To achieve significant changes in social interaction and communication.
Treatment domain, Type of CD: Different speech and language impairments: specific language impairment, ADHD, dyslexia, oppositional defiant disorder, misuse of oral language, dyslalia, ADD, problems with oral language, nasality, vocalization.
Treatment technique: Logopedic and pedagogical therapy.
Play type: Social and cognitive play.
Interaction technique: Robot–child–therapist interaction.
Age: 9–12 years old (ages 9, 10, and 12).
Participants’ role and behavior: The social robot (instructor), five children, the therapist, and a researcher-programmer.
Activity description: [44], pages 564–565.
Robot configuration and mission: A social robot NAO was used, preprogrammed with the modules: reading comprehension; dictations, stories, and vocabulary; improvement of oral comprehension; articulation and phonetic-phonological pronunciation; phonological awareness and phonetic segmentation; literacy skills.
Used software: NAOqi v.2.8.6.23.
Setting and time: This scenario was carried out in a clinical setting over multiple sessions (once a week for 30 weeks).
Variation: Possible software modifications for different behaviors, faster modules, and adaptation to unpredictable scenarios.
Reference: [35], 2020. Name of Scenario: Physically explore the robot
Objectives: Joint attention; identification of emotional expressions.
Treatment domain, Type of CD: Language disorders in children with complex social and communication conditions.
Treatment technique: Cause-and-effect game.
Play type: Social and cognitive play.
Interaction technique: Robot–child–therapist interaction.
Age: From 2 to 6 years.
Participants’ role and behavior: The social robot Kaspar; nursery staff, teachers, and volunteers; children with complex social and communication conditions.
Activity description: [35], pages 306–307.
Robot configuration and mission: A social robot Kaspar.
Used software: The robot is controlled by dedicated Kaspar software developed to facilitate semi-autonomous behavior and make the robot more user-friendly for non-technical users.
Setting and time: Scenarios were carried out in a nursery, and the children interacted with the robot for as many sessions as were deemed meaningful within the day-to-day running of the nursery. The mean number of interactions with the robot per child was 27.37 (SD = 18.62).
Variation: The robot Kaspar can be used in different play scenarios.
Reference: [15], 2019. Name of Scenario: Ling sounds story
Objectives: Acquisition of hearing skills.
Treatment domain, Type of CD: Language domain; language disorder due to hearing impairment.
Treatment technique: Ling sounds; auditory-verbal therapy method.
Play type (social/cognitive): Cooperative and practice play.
Interaction technique: Child–robot interaction.
Participants’ role and behavior: Two participants: a social robot (instructor) and the individual who has hearing impairments (learner).
Age: 3–4 years old.
Activity description: [15], page 442.
Robot configuration and mode of operation: A social robot NAO was used with toys correlated with the Ling sounds.
Used software: NAOqi software v.2.8.6.23.
Setting and time: This scenario was carried out in a clinical setting over multiple sessions.
Variation: The level of difficulty can be adjusted. The activity can also include more participants to promote cooperative play.
Reference: [15], 2019. Name of Scenario: Music density
Objectives: Acquisition of hearing skills.
Treatment domain, Type of CD: Language domain; language disorder due to hearing impairment.
Treatment technique: Listening to environmental sounds; discrimination and identification; sound intensity; auditory-verbal therapy method.
Play type (social/cognitive): Cooperative and practice play.
Interaction technique: Child–robot interaction.
Participants’ role and behavior: Two participants: a social robot (instructor) and the individual who has hearing impairments (learner).
Age: 3–4 years old.
Activity description: [15], page 443.
Robot configuration and mode of operation: A social robot NAO was used with toys correlated with musical instruments.
Used software: NAOqi software v.2.8.6.23.
Setting and time: This scenario was carried out in a clinical setting over multiple sessions.
Variation: The level of difficulty can be adjusted. The activity can also include more participants to promote cooperative play.
Reference: [15], 2019. Name of Scenario: Farm animals—discrimination and identification of animal sounds
Objectives: Acquisition of hearing skills.
Treatment domain, Type of CD: Language domain; language disorder due to hearing impairment.
Treatment technique: Discrimination and identification of animal sounds of different frequencies (e.g., low frequency: cow sound; high frequency: cat sound); auditory-verbal therapy method.
Play type (social/cognitive): Cooperative and practice play.
Interaction technique: Child–robot interaction.
Participants’ role and behavior: Two participants: a social robot (instructor) and the individual who has hearing impairments (learner).
Age: 3–4 years old.
Activity description: [15], page 443.
Robot configuration and mode of operation: A social robot NAO was used with toys correlated with farm animals.
Used software: NAOqi software v.2.8.6.23.
Setting and time: This scenario was carried out in a clinical setting over multiple sessions.
Variation: The instructions are played in random order. The activity can also include more participants to promote cooperative play.
Reference: [15], 2019. Name of Scenario: Vegetables
Objectives: Acquisition of word decoding/understanding.
Treatment domain, Type of CD: Language domain; language disorder due to hearing impairment.
Treatment technique: Discrimination and identification of words; auditory-verbal therapy method.
Play type (social/cognitive): Cooperative and practice play.
Interaction technique: Child–robot interaction.
Participants’ role and behavior: Two participants: a social robot (instructor) and the individual who has hearing impairments (learner).
Age: 3–4 years old.
Activity description: [15], page 443.
Robot configuration and mode of operation: A social robot NAO was used with vegetable toys and a basket.
Used software: NAOqi software v.2.8.6.23.
Setting and time: This scenario was carried out in a clinical setting over multiple sessions.
Variation: The instructions are played in random order. The activity can also include more participants to promote cooperative play.
Reference: [33], 2019. Name of Scenario: Responding to directives
Objectives: Language expansion.
Treatment domain, Type of CD: Language domain; autism spectrum.
Treatment technique: The robot tells the student what to do and initiates social engagement.
Play type: Cooperative and practice play.
Interaction technique: Teacher–robot–student.
Age: Eight-year-old student.
Participants’ role and behavior: Three participants: a social robot (instructor), a speech-language pathologist (teacher), and the individual who has a communication disorder (learner).
Activity description: [33], pages 5–6.
Robot configuration and mission: A social robot NAO was used with the learner’s favorite toys.
Used software: NAOqi software v.2.8.6.23.
Setting and time: This scenario was carried out in a clinical setting over multiple sessions.
Variation: The level of difficulty can be adjusted. The activity can also include more participants to promote cooperative play.
Reference: [42], 2016. Name of Scenario: Teaching fundamentals of music
Objectives: Facilitation of multisystem development in children with autism.
Treatment domain, Type of CD: Autism; fine movements; communication skills.
Treatment technique: Robot–child or robot–child–therapist/parent imitation turn-taking games and playing a Kinect-based virtual xylophone on the screen.
Play type: Cooperative and practice play.
Interaction technique: Interaction between a robot, a child, and a therapist/parent.
Age: 6-year-old children.
Participants’ role and behavior: A social robot (instructor) and the individual who has autism (learner).
Activity description: [42], page 543.
Robot configuration and mission: A social robot NAO was used with a drum and a xylophone.
Used software: NAOqi v.2.8.6.23.
Setting and time: This scenario was carried out in a clinical setting over 11 sessions.
Variation: The study design contains a baseline, a pre-test, a post-test, and a follow-up test. Each participant’s skill is compared with his previous skill based on assessment tools.
Reference: [43], 2016. Name of Scenario: Football game
Objectives: To achieve significant changes in social interaction and communication.
Treatment domain, Type of CD: ASD; communication and social behavior.
Treatment technique: Play therapy.
Play type: Collaborative physical play.
Interaction technique: Interaction between a robot, a child, a therapist, and a parent.
Age: 3–10 years old (ages 3.5, 5, and 7).
Participants’ role and behavior: The social robot (instructor), the individual who has autism spectrum disorder, his parent, and a trainer (teacher at the elementary school).
Activity description: [43], pages 564–565.
Robot configuration and mission: A social robot NAO uses a ball and participates in an interactive football game with the child.
Used software: NAOqi v.2.8.6.23.
Setting and time: This scenario was carried out in a clinical setting over four sessions.
Variation: There are various specific autonomous behaviors that may lead to cross-platform utility of socially assistive robots.
Reference: [62], 2016. Name of Scenario: Interactive play with a song
Objectives: Promoting foundational communication and socialization skills.
Treatment domain, Type of CD: Eliciting child communication and socialization; language disorder due to ASD.
Treatment technique: Playing a song and performing appropriate hand/arm motions.
Play type (social/cognitive): Social play.
Interaction technique: Child–robot interaction.
Participants’ role and behavior: A social robot, two researchers (a computer scientist and a clinical instructor), and a child.
Age: 3–6 years (8 children).
Activity description: [62], pages 643 and 645.
Robot configuration and mode of operation: A robot Probo, named CHARLIE.
Used software: New software was designed to promote two fundamental skills linked to communication: turn-taking and imitation.
Setting and time: This scenario was carried out in a university setting twice a week for 6 weeks.
Variation: The game could be played by one or more participants (the child with ASD plus a sibling/caregiver).
Reference: [62], 2016. Name of Scenario: The hat game
Objectives: Verbal utterances.
Treatment domain, Type of CD: To encourage eye contact, directed attention, speech, and social interaction by providing a positive sensory response to reinforce each child’s efforts to communicate; language disorder due to ASD.
Treatment technique: Asking and answering a simple question.
Play type (social/cognitive): Social play.
Interaction technique: Child–robot interaction.
Participants’ role and behavior: A social robot, two researchers (a computer scientist and a clinical instructor), and the child.
Age: 3–6 years (8 children).
Activity description: [62], pages 645–646.
Robot configuration and mode of operation: A robot Probo, named CHARLIE.
Used software: New software was designed to promote two fundamental skills known to be closely linked to communication: turn-taking and imitation.
Setting and time: This scenario was carried out in a university setting twice a week for 6 weeks.
Variation: The game could be played by one or more participants (the child with ASD plus a sibling/caregiver).
Reference: [41], 2015. Name of Scenario: Giving color responses
Objectives: Training joint attention skills.
Treatment domain, Type of CD: Joint attention skills; autism.
Treatment technique: The student is instructed to follow the robot’s head movement, to verbally (out loud) name the color of the target at which the robot is looking, and additionally to press the corresponding button.
Play type: Cooperative and practice play.
Interaction technique: Robot–student (and a teacher in the pre- and post-tests).
Age: Mean age 4.6 years (from 4 to 5).
Participants’ role and behavior: Three participants: a pet robot (instructor), a speech-language pathologist (teacher), and the individual who has autism (learner).
Activity description: [41], pages 3–5.
Robot configuration and mission: A social robot, CuDDler (A*STAR), was used with 10 colorful line drawings of various objects in 4 colors.
Used software: Two Android phones execute the software modules.
Setting and time: This scenario was carried out in a clinical setting over 3 sessions.
Variation: The level of difficulty can be adjusted. The activity can also include more participants.
Reference: [16], 2015. Name of Scenario: “Special Friend, iRobiQ”
Objectives: To promote language interaction for children with speech-language disorders.
Treatment domain, Type of CD: Speech and language disorders; emotional expression.
Treatment technique: Scripts for practical language goals with the robot as an interlocutor friend (turn-taking; functional communication).
Play type: Entertaining educational elements for initiating conversations with a robot.
Interaction technique: Robot–child–therapist interaction.
Age: Four children with autism/MR (Mental Retardation), 4–5 years old.
Participants’ role and behavior: The social robot (instructor), four children, and the therapist.
Activity description: [16]; greetings and a birthday celebration script (cake, gift, song), themed around the practical language goals.
Robot configuration and mission: A humanoid robot iRobiQ with a touch screen.
Used software: iRobiQ software v.2.8.6.23.
Setting and time: This scenario was carried out in a clinical setting over eight sessions.
Variation: Variation of facial expressions (happy, surprised, neutral, disappointed, and shy).
Reference: [45], 2014. Name of Scenario: Different interactive plays
Objectives: To achieve a positive impact on the communication skills of nonverbal children via robot-based augmentative and alternative communication.
Treatment domain, Type of CD: Communication disorders; language impairments (pervasive developmental disorder).
Treatment technique: Play therapy.
Play type: Social and cognitive play.
Interaction technique: Robot–child–therapist interaction.
Age: From 2 years, 9 months to 5 years, 4 months old.
Participants’ role and behavior: The social robot (instructor), four children, the therapist, and a researcher-programmer.
Activity description: [45], pages 855–857.
Robot configuration and mission: A humanoid robot iRobi with robot-based augmentative and alternative communication programs.
Used software: The robot can be controlled by a smartphone through Wi-Fi.
Setting and time: This scenario was carried out in a clinical setting over multiple sessions in 3 phases for 6 months, three times a week.
Variation: Multi-functional sensors that can motivate children to initiate social communication.
Reference: [59], 2014. Name of Scenario: Sign Language Game for Advanced Users
Objectives: Recognition of signs from Turkish Sign Language.
Treatment domain, Type of CD: Language domain; language disorder due to hearing impairment.
Treatment technique: Recognition of words in Turkish Sign Language at an advanced level.
Play type (social/cognitive): Cognitive play.
Interaction technique: Child–robot interaction.
Participants’ role and behavior: Two participants: a humanoid social robot (instructor) and the child with hearing impairment.
Age: 7–11 years (21 children) and 9–16 years (10 children).
Activity description: [59], page 525.
Robot configuration and mode of operation: A social robot NAO H25 and a modified Robovie R3 robot.
Used software: NAOqi software v.2.8.6.23.
Setting and time: This scenario was carried out in a university setting for 6–9 games with each robot.
Variation: The participants can randomly select a robot to play with.
We also reviewed articles that serve as models for future applications of different frameworks [13,23]. In [23], the authors propose the possible employment of social robots as additional tools in stuttering intervention. The scientists describe eight scenarios with social robots that can be adjusted in therapies with children and adults. The authors emphasize that HRI (Human–Robot Interaction) can significantly aid people who stutter and argue that there is a need to explore the prospects of robotics via experiments and studies with relevant participants.
The paper [39] reports an application that makes it possible to use a humanoid robot as a stutterer’s aide and therapist. Visual and auditory feedback was applied during the therapy with the robot. The major advantage of the suggested application is the possibility of using a humanoid robot in therapy sessions accompanied by the “echo” method and expanded by the visual feedback. The robot can substitute for the therapist and can lead the treatment of the patient, who performs different activities, such as conversing, reading, or delivering a monologue. Another advantage is the possibility of connecting to the robot remotely, which removes external noise. The proposed scenario is to be tested on a group of people, and more experiments are necessary to confirm the relevance of this application.
The article [63] offers a systematic review of research on robot-assisted therapies for children with autism. The authors examine the tendencies in studies of this type of therapy in order to propose probable prospects in the field. Thirty-eight articles were analyzed, and it was concluded that there is a substantial number of publications on robot-assisted autism therapy (RAAT). This points to growing interest in the use of robots in logopedic sessions, an interest greatly reinforced by advances in artificial intelligence and machine learning. These data suggest that robot-assisted therapies are promising tools that can support the cognitive, social, and emotional development of children with ASD. The authors hope that the present challenges will be addressed successfully via skilled interdisciplinary cooperation.
The scientific team in [64] compared two storytelling situations for a person with a neurodevelopmental disorder: (1) human–human interaction and (2) robot–human interaction. Their results showed that the story told by the plush robot ELE is more engaging. The potential advantages of the presented social robot are: enhancing and encouraging verbal communication in persons with neurodevelopmental disorders; the limited non-verbal characteristics of the robot’s communication, which make the playful situation predictable; monitoring, gathering, and analyzing data on the client’s behavior from a distance; and saving time and money, as it enables remote therapy. Future work is directed toward applying the social robot to a larger number of people with neurodevelopmental disorders. This study can be taken as a model for working with children with neurological disorders.
A summary of the results from this Section is presented in Figure 1. In conclusion, we may say that over the years, empirical studies have increased, while pilot studies have decreased. More experimental studies will facilitate the establishment of standards and a common methodology for applying SARs in SLT. At the same time, there is an emerging trend of publications offering only models and interactive scenarios with SARs, without experiments. This provides directions for future studies.

5. Discussion and Future Directions

To summarize, the range of potential scenarios for using SARs in the rehabilitation of communication disorders in children and adolescents is huge. Social robots can assist in vocabulary and language development, articulation therapy, speech rate control, storytelling, and the improvement of social skills. Through engaging and playful activities, social robots can offer real-time feedback and guidance to help individuals practice and enhance their communication skills.
The types of communication disorders (Figure 2) indicated in the reviewed studies are few: dyslexia, dysgraphia, specific language impairment, and dyslalia. The number of articles reporting the participation of speech therapists in the team is small; for this reason, we assume that the authors preferred to describe the primary disorder, for example, ASD, cerebral palsy, or hearing impairment. All these conditions involve different kinds of communication disorders. They belong to the category of neurodevelopmental disorders; in most of them, language acquisition is affected at different levels and varies in severity.
Figure 3 presents the age distribution of participants interacting with robots in the pilot and case (empirical) studies. There was a tendency toward a larger and more heterogeneous age range in the groups of children studied in the pilot studies, while in the empirical studies, the children interacting with robots had a small age difference. Sixty percent of the studies focused on children between 2 and 6 years of age. This can be explained by the fact that the first 5–6 years are a period of tremendous growth and change in language, cognition, adaptive skills, emotional intelligence, and social functioning. Evidence from many studies in neurolinguistics has shown that the critical and most sensitive period for language development is in the early years of life. After the end of this period, there is a reduction in the plasticity of the specific neural pathways responsible for language coding and decoding and for functional communication; this means that early intervention is crucial. Language exposure and multichannel stimulation (engaging more senses: hearing, watching, touching, experiencing) in the early years have a significant effect on the verbal skills of children. Child–robot interaction gives opportunities to play, experience, and repeat scenarios that copy everyday situations with communication models. Such stimulation at an early age will enhance the child’s development and positively affect language, cognition, and behavior in later stages of life.

5.1. How Social Robots Can Assist in the Intervention of Communication Disorders

Possible applications of SARs in the intervention of communication disorders in children and adolescents based on the reviewed papers are:
  • Vocabulary and language development (verbal and sign language): Social robots can assist children in practicing and improving their language skills through playful and engaging activities, offering real-time feedback and encouragement. SARs are able to initiate and support communication and enrich child’s vocabulary. They also help therapists train and assess linguistic capabilities of children and adolescents with language impairments [6,7,8,13,15,16,17,23,32,35,38,39,41,42,43,44,45,47,48,49,53,54,55,56,57,58,59,62].
  • Articulation therapy: Social robots can help children with speech disorders practice pronunciation and articulation exercises. The youngsters are observed to show increased verbal production and participation via SARs, which contribute to improvements in articulation and in phonological, morphosyntactical, and semantic communication [13,33,35,36,37,43,44,48,49,57].
  • Auditory skills: Children learn and develop language through listening. Some SARs are used to develop auditory skills as well as verbal speech. Robots are able to offer sounds of different frequencies, repeat words, and provide support when necessary. In addition, robots can give visual and auditory feedback, which is essential for therapists [15,48,60].
  • Speech rate control: Social robots can aid children in practicing speaking at a slower rate, offering real-time feedback to improve fluency gradually [22,23,39].
  • Storytelling: Social robots can assist children in practicing storytelling and engaging in conversation. Stories told by robots are found to be more engaging and funnier for children. SARs encourage verbal communication and enhance cognitive abilities in youngsters. Robots can also monitor, gather, and analyze data from the child’s behavior [16,33,35,64].
  • Social skills: Social robots can help children improve social skills, such as turn-taking, joint attention, emotion regulation, and eye contact through playful and engaging activities. During these activities, different participants, together with the robots, can take part—peers, therapists, or parents. Children are provided support and guidance during play. Youngsters learn to interact and cooperate with the others and robot-based therapies enhance their cognitive and emotional abilities [6,8,15,17,39,42,43,46,55,56,62].
  • Transfer of skills to everyday life: Some studies indicate that the skills acquired in play-based interaction between a child and the SAR are transferred to real life and applied in everyday situations [55,56,57,60].
  • Personalization and adaptation: SARs have the ability to personalize the interactive scenarios by utilizing individual data, performance metrics, and individual progress to adapt therapy exercises, content, and level of difficulty to the specific CD [15,33,36,41,44,46,47,49,57].
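The speech-rate control idea above can be illustrated with a minimal sketch that estimates syllables per second from syllable onset timestamps and turns the result into spoken feedback. This is a toy illustration, not code from the cited studies; the function names, target rate, and tolerance are hypothetical choices:

```python
def speech_rate(syllable_times):
    """Syllables per second over an utterance, from syllable onset timestamps."""
    if len(syllable_times) < 2:
        return 0.0
    duration = syllable_times[-1] - syllable_times[0]
    return (len(syllable_times) - 1) / duration

def rate_feedback(syllable_times, target_sps=3.0, tolerance=0.5):
    """Compare the estimated rate to a target band and pick a feedback message."""
    rate = speech_rate(syllable_times)
    if rate > target_sps + tolerance:
        return "Let's slow down a little."
    if rate < target_sps - tolerance:
        return "Good and steady; you can speak a bit faster."
    return "Nice pace, keep going!"

# 5 syllable onsets within 0.8 s -> 5 syllables/second, well above the target band
fast = [0.0, 0.2, 0.4, 0.6, 0.8]
msg = rate_feedback(fast)
```

In practice the syllable onsets would come from an ASR or voice-activity front-end rather than being supplied by hand.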
The scenarios described in Table 1 and Table 2 represent different studies of child–robot interaction, all of which share the main goal of developing communication. Table 3 summarizes the objectives and the levels of communication (pre-verbal, non-verbal, and verbal) targeted in the research.
NAO is the most commonly utilized robot, as evidenced by the data presented in Table 4, which displays the number of articles reporting the use of each SAR type in pilot or case studies. This finding is in line with other studies on children’s acceptance and perception of SARs [65]. The cost of each robot is indicated in the table; where it is not specified, the cost is considered moderate, neither low nor high. Because SLT places a strong emphasis on communication and language skills, intensive practice of speaking and listening is crucial. As a result, it is important for robots utilized in SLT to have access to cloud-based chat services, as iRobi [16] and QTrobot [18] do. Unfortunately, these robots are not priced affordably for home use.

5.2. Future Directions on How to Optimize the Role and Importance of Social Robots in Speech and Language Therapy

Our research findings indicate that integrating innovative technologies, such as Conversational AI, extended reality, biofeedback, affective computing, and additional tactile, visual, and auditory modalities, can enhance human–robot interaction in the intervention. SLT focuses on communication and language skills, so everyday practice of speaking and listening in dialogue is central. This requires integrating services and models for Natural Language Processing (NLP) and adaptive dialogue into SARs. Moreover, children expect robots to understand and produce human-like language, which makes Conversational AI, combining NLP with machine and deep learning models, a key enabling technology. An additional challenge is that SARs cannot easily replicate complex scenarios from everyday life solely through embedded robot skills/behaviors and physical/digital speech therapy materials. More effective rehabilitation through “perception-cognition-action-experience” requires extended reality and multimodal interactions within a virtual environment (VE). Owing to its interactivity, extended reality has potential beyond simple replication of the real world. Through the use of VEs that simulate real-world situations, social communication training becomes more effective, promoting the development of brain areas responsible for sensory-motor processing, which in turn improves language learning and usage.
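The required integration of NLP and dialogue services can be pictured as a simple dialogue-turn loop in which the robot forwards the child’s utterance to a cloud language backend and speaks the reply. The sketch below is our own illustration, not code from any cited system; the backend is a stub so the example runs offline, whereas a real SAR would call a Conversational AI cloud service and drive its own text-to-speech:

```python
def make_cloud_backend():
    # Stand-in for a real cloud call (e.g. an HTTPS request to a
    # text-completion service); stubbed here so the sketch runs offline.
    def complete(history):
        last = history[-1]["content"]
        return f"That's interesting! Can you tell me more about {last.split()[-1]}?"
    return complete

def robot_turn(history, user_utterance, complete, speak=print):
    """One dialogue turn: record user input, query the backend, speak the reply."""
    history.append({"role": "user", "content": user_utterance})
    reply = complete(history)
    history.append({"role": "assistant", "content": reply})
    speak(reply)  # on a real SAR this would drive the robot's TTS engine
    return reply

history = [{"role": "system", "content": "You are a friendly therapy assistant."}]
reply = robot_turn(history, "I played with my dog", make_cloud_backend())
```

Keeping the full `history` list is what lets a cloud language model produce coherent, context-aware replies across turns.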
Here are some of the technological advancements that have great potential to assist SARs in their role in SLT:
  • Natural Language Processing (NLP): Advanced NLP techniques can enable SARs to better understand and interpret speech and language patterns via speech recognition models tailored for specific speech and language therapy tasks, such as articulation exercises or language comprehension activities.
  • Adaptive Dialogue Systems: Implementation of adaptive dialogue systems that allow SARs to adapt their conversational style, pacing, and prompts based on the individual’s progress and needs.
  • Multimodal Interaction: Incorporating visual, auditory, and tactile modalities enables SARs to engage children through several channels at once. SARs can use visual aids, such as interactive displays or gesture recognition, to supplement verbal instructions and support visual learning, as well as tactile aids, such as interactive touchscreens, to facilitate hands-on activities.
  • Virtual (VR), Augmented (AR), and Mixed Reality (MR): These technologies can help SARs support intervention sessions with more immersive and interactive therapy environments. Using AR, SARs can overlay virtual objects or visual cues on the real world to support language or articulation exercises. VR can help simulate real-life scenarios for social communication training, providing children and adolescents with a safe and controlled environment in which to practice their skills. Three-dimensional modeling or avatars can be applied to help both children and robots immerse themselves in a shared virtual environment. Furthermore, VR can support gradual adaptation within a protected environment.
  • Affective Computing for improving SARs’ emotion recognition and facial expression capabilities: Incorporating emotion recognition algorithms (visual, voice, speech, gesture, or physiologically based) can enhance SARs’ ability to detect and respond to individuals’ emotional states during therapy sessions. By tailoring their responses and interventions accordingly, SARs can create a more personalized and empathetic therapeutic environment. Furthermore, developing expressive capabilities for SARs enables them to display appropriate emotional responses and gestures, further enhancing their ability to provide sympathetic and supportive communication.
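As a concrete illustration of the adaptive-dialogue idea listed above, a controller might track the child’s recent success rate and adjust exercise difficulty and the robot’s speaking rate accordingly. The class, window size, and thresholds below are hypothetical, chosen only to make the behavior visible, and do not come from any surveyed system:

```python
from collections import deque

class AdaptiveDialogue:
    """Toy controller: adapts difficulty and speech rate to recent performance."""

    def __init__(self, window=5):
        self.results = deque(maxlen=window)  # last N exercise outcomes
        self.difficulty = 1                  # 1 (easiest) .. 5 (hardest)
        self.speech_rate = 1.0               # 1.0 = robot's normal TTS rate

    def record(self, success):
        self.results.append(success)
        rate = sum(self.results) / len(self.results)
        if rate > 0.8 and self.difficulty < 5:
            self.difficulty += 1             # coping well: present harder items
        elif rate < 0.4:
            if self.difficulty > 1:
                self.difficulty -= 1         # struggling: easier items...
            self.speech_rate = max(0.6, self.speech_rate - 0.1)  # ...spoken more slowly
        else:
            self.speech_rate = min(1.0, self.speech_rate + 0.05)

d = AdaptiveDialogue()
for outcome in [True, True, True, True, True]:
    d.record(outcome)
```

A real system would feed `record()` from the therapist’s scoring or automatic speech assessment, and map `speech_rate` onto the robot’s TTS parameters.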
Assistive technologies that can support SARs during SLT are described below and linked to the relevant references:
  • Integrating SARs with Conversational AI can create a more engaging and interactive speech and language experience by providing personalized intervention and real-time feedback to children with CD, as well as encouragement and guidance in tasks or play. Currently, robots such as Furhat [66], iRobi [16], and QTrobot [18] have access to cloud-based services and are used for rehabilitation of children with ASD. For instance, researchers have successfully integrated OpenAI’s text completion services, including GPT-3 and ChatGPT, into the Furhat robot, enabling real-time, non-scripted, automated interactions [67]. Furthermore, different frameworks based on cloud computing and clustering are employed to enhance the capabilities of social robots and address the limitations of existing embedded platforms [68,69,70]. These papers present approaches that enhance the capabilities of social robots via NLP cloud services. The authors also provide detailed descriptions of the architecture and components of the proposed frameworks and discuss the challenges and opportunities of using cloud computing for social robots, such as security and privacy concerns, network latency, and scalability.
  • Integrating SARs with Adaptive Dialogue Systems can contribute to effective dialogue management. In [71], the authors present the design and implementation of a cloud system for knowledge-based autonomous interaction, CAIR (Cloud-based Autonomous Interaction with Robots), created for social robots and other conversational agents. The system is particularly convenient for low-cost robots and devices: it can be used as a stand-alone dialogue system or integrated to provide “background” dialogue capabilities to any preexisting natural language interface.
  • Automatic Speech Recognition and Text Generation technologies can aid children in language learning through storytelling, while also letting them practice social skills and behaviors [72]. The robot Furhat can tell stories to children through interactive conversation, natural language processing, expressive facial expressions and gestures, voice and speech synthesis, and personalization. It engages children in dialogue-like interactions, understands their speech, and adapts the story based on their preferences for a personalized experience [73]. In [74], the authors propose a method for recommending books in a child–robot interactive environment based on speech input and the child’s profile. Experimental results show that the proposed recommender system performs well and can operate on embedded consumer devices with limited computational resources. Speech recognition software can also provide real-time feedback on the accuracy of a child’s speech production.
  • Graphics, Touch, Sound, and Barcode user interface design can enhance multimodal interaction with SARs by enriching the visual, auditory, and tactile modalities. Graphical interfaces support robot interaction by presenting complex information, allowing text input, or showing a game interface. Many SARs provide a touch GUI via their own tablet [54] or an external touchscreen [16,18,75,76]. Typically, the GUI is used to acquire and rehearse knowledge through pictures displayed on a touchscreen connected to the SAR. The QR-code-scanning capabilities of SARs [16,17,18,77] offer therapists and children an additional interaction tool, either in clinics or at home. An auditory interface can also be integrated into SARs to enable users to interact with robots via spoken commands, voice recognition, altered auditory feedback [23], etc. As future work in [25], a voice analyzer could assess the quality of the patient’s voice (vocal tract configuration, laryngeal anatomy, and an acoustic analysis component). The AI methods used for automatic classification of autistic children include Support Vector Machines (SVMs) and Convolutional Neural Networks (CNNs): SVMs use acoustic feature sets from multiple Interspeech COMPARE challenges, while CNNs extract deep spectrum features from the spectrograms of autistic speech instances [78].
  • VR/AR/MR technologies can provide significant support for both children and robots to immerse themselves in a shared virtual environment. Through the use of a 3D model, SARs can engage with children and adolescents in a virtual setting, allowing them to explore and interact within the virtual environment together [79]. Virtual reality-based therapy can reduce anxiety and improve speech fluency, as well as social and emotional skills [80]. In terms of application categories, simulations make up approximately 60% of the content, while games account for the remaining 40%. The majority of simulation content focuses on specific scenes and aims to imitate various real-life events and situations, such as classroom scenes, bus stop scenes, and pedestrian crossing scenes. Many studies report on virtual reality training with children with CD and have explored these scenes and their potential therapeutic benefits [81,82,83]. For example, the therapy in [84] involves different stages, including auto-navigation from outside the school to the classroom, welcome and task introduction, and completion of tasks. These stages are accompanied by verbal and nonverbal cues, such as hand-waving animations. Examples of how to use AR to overlay virtual objects or visual cues in the real world to support language or articulation exercises can be found in [85].
  • AI-based visual and audio algorithms for Affective Computing enable enhanced human–robot interaction in therapy sessions by improving social robots’ emotion recognition and facial expression capabilities. Such algorithms can detect the emotional state of the individual, allowing the SAR to tailor its responses and interventions during therapy sessions. The reviewed papers in which SARs integrate AI-based emotion recognition and facial expression technologies for showing real-time animation of facial emotions on a display in the robot’s head are [36,45,47,50,61]; the last also illustrates lip-syncing. Visual-based algorithms for affective computing analyze visual cues, such as facial expressions, body language, and gestures, to recognize and interpret emotions. They use computer vision techniques to extract relevant features from visual data and apply machine learning or deep learning models to classify and understand emotional states [86,87,88]. Audio-based algorithms analyze audio signals, such as speech and vocal intonations, to detect and classify emotions. They utilize signal processing techniques, feature extraction, and machine learning to analyze acoustic properties and patterns related to emotional states; acoustic features, including voice quality features, are extracted to capture the valence and arousal dimensions [89,90,91,92]. Such AI-based visual and audio algorithms are integrated into the Furhat robot, allowing it to display emotions and expressions on its face through animations that correspond with the emotional content and tone of the speech being delivered.
  • Biofeedback algorithms for Affective Computing monitor and analyze physiological signals to infer and understand emotional states. These algorithms use techniques such as signal processing and machine learning to identify patterns and correlations associated with specific emotions, interpreting physiological responses such as heart rate, muscle tension, pupil dilation and eye movements, skin conductance, or sweating. Children and adolescents with CD, especially ASD, frequently have difficulties with social skills, such as communicating wants and needs and maintaining eye contact. Biofeedback is a technique often recommended for those struggling with anxiety, anger, or stress. Some authors have explored various machine learning methods and types of data collected through wearable sensors worn by children diagnosed with ASD [93,94]. Other authors have designed a prototype to reinforce the mental skills of children with ASD through neurofeedback using EEG data and a small humanoid robot to stimulate attention towards a joint activity [95]. The results encourage the development of more precise measures of attention, combining EEG data and behavioral information. In addition, scientists have worked on EEG measures suitable for robotic neurofeedback systems and able to detect and intervene in case of attention breakdowns [96]. In [97], the children’s gaze towards the robot and other individuals present was observed; the analysis revealed that the children’s attention and engagement towards their parents increased. Eye tracking can help in understanding and quantifying where children direct their visual attention during therapy sessions, their engagement levels, and their emotional responses to different stimuli, such as the robot, other humans, and logopedic materials. Via this feedback, SARs can assess the child’s affect during SLT in real time and personalize the interactive scenarios.
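The biofeedback principle above can be sketched as a minimal stress monitor: keep a rolling baseline of heart-rate samples and flag readings that deviate sharply from it, a signal a SAR could use to soften its behavior in real time. This is an illustrative sketch only; the class name, window size, and z-score threshold are our own assumptions, not taken from the cited systems:

```python
from collections import deque
from statistics import mean, pstdev

class StressMonitor:
    """Flags heart-rate samples that deviate strongly from a rolling baseline."""

    def __init__(self, window=30, z_thresh=2.5):
        self.baseline = deque(maxlen=window)
        self.z_thresh = z_thresh

    def update(self, bpm):
        """Return True when the new sample looks like an arousal spike."""
        spike = False
        if len(self.baseline) >= 5:
            mu, sigma = mean(self.baseline), pstdev(self.baseline)
            if sigma > 0 and (bpm - mu) / sigma > self.z_thresh:
                spike = True
        if not spike:
            self.baseline.append(bpm)  # only calm samples update the baseline
        return spike

m = StressMonitor()
calm = [78, 80, 79, 81, 80, 79, 78, 80]
flags = [m.update(b) for b in calm] + [m.update(120)]
```

The same pattern generalizes to skin conductance or eye-tracking dwell times; only the signal source and threshold change.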

6. Conclusions

After conducting a thorough review and answering the research questions, we can conclude that despite the limited research on the use of social robots in communication disorders, certain studies have reported promising results for speech and language therapy. It is important to consider the methodological, technical, and ethical limitations and challenges associated with their use and to carefully evaluate their effectiveness before implementing them in clinical settings.
The use of assistive technologies can create a supportive and non-intrusive environment for children, leading to better outcomes in therapy. However, continuous exploration, evaluation, and monitoring of the effectiveness of these technologies is crucial to ensure that integrating ATs into SLT has a beneficial effect on children’s communication skills. Ethical and privacy concerns should also be taken into account when implementing ATs in speech and language therapy. It is necessary for scientists to conduct more comprehensive experimental studies before considering the widespread implementation of social robots as standard therapeutic interventions in speech and language therapy.

Author Contributions

Conceptualization, A.L. and A.A.; methodology, A.L.; investigation, P.T., M.S., V.S.-P., G.D., K.R.-Y. and I.K.; writing—original draft preparation, G.G.-T. and A.L.; writing—review and editing, A.L. All authors have read and agreed to the published version of the manuscript.

Funding

These research findings are supported by the National Scientific Research Fund, Project “Innovative methodology for integration of assistive technologies in speech therapy for children and adolescents”, Grant No. KΠ-06-H67/1, 12 December 2022.

Data Availability Statement

Data supporting this study are available at https://www.youtube.com/watch?v=KpeQcIXG6cA (accessed on 16 April 2023) and https://youtu.be/AZhih7KlaPc (accessed on 16 April 2023).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

AAC  Augmentative and Alternative Communication
ABA  Applied Behavior Analysis
ADD  Attention Deficit Disorder
ADHD  Attention Deficit and Hyperactivity Disorder
AI  Artificial Intelligence
AR  Augmented Reality
ASD  Autism Spectrum Disorder
AT  Assistive Technologies
CD  Communication Disorders
DAF  Delayed Auditory Feedback
DMDD  Disruptive Mood Dysregulation Disorder
DSM-V  Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition
IEEE  Institute of Electrical and Electronics Engineers
ICT  Information and Communications Technology
HRI  Human–Robot Interaction
MDPI  Multidisciplinary Digital Publishing Institute
MR  Mixed Reality
ODD  Oppositional Defiant Disorder
RAAT  Robot-Assisted Autism Therapy
RJA  Responding to Joint Attention
SAR  Socially Assistive Robot
SLI  Specific Language Impairment
SLT  Speech and Language Therapy
VR  Virtual Reality

References

  1. Fogle, P.T. Essentials of Communication Sciences & Disorders; Jones & Bartlett Learning: Burlington, MA, USA, 2022; p. 8. [Google Scholar]
  2. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders, 5th ed.; American Psychiatric Association: Arlington, VA, USA, 2013; Available online: https://dsm.psychiatryonline.org/doi/book/10.1176/appi.books (accessed on 20 April 2023).
  3. Besio, S.; Bulgarelli, D.; Stancheva-Popkostadinova, V. (Eds.) Play Development in Children with Disabilities; De Gruyter: Berlin, Germany, 2017. [Google Scholar]
  4. United Nations. Convention on the Rights of Persons with Disabilities; United Nations: New York, NY, USA, 2007; Available online: https://www.un.org/development/desa/disabilities/convention-on-the-rights-of-persons-withdisabilities.html (accessed on 16 April 2023).
  5. World Health Organisation. The Global Strategy for Women’s, Children’s and Adolescents’ Health, 2016–2030; World Health Organization: Geneva, Switzerland, 2015; Available online: https://www.who.int/life-course/partners/global-strategy/globalstrategy-2016-2030/en/ (accessed on 16 April 2023).
  6. Gibson, J.L.; Pritchard, E.; de Lemos, C. Play-based interventions to support social and communication development in autistic children aged 2–8 years: A scoping review. Autism Dev. Lang. Impair. 2021, 6, 1–30. [Google Scholar] [CrossRef] [PubMed]
  7. Baker, F.S. Engaging in play through assistive technology: Closing gaps in research and practice for infants and toddlers with disabilities. In Assistive Technology Research, Practice, and Theory; IGI Global: Hershey, PA, USA, 2014; pp. 207–221. [Google Scholar] [CrossRef] [Green Version]
  8. Francis, G.; Deniz, E.; Torgerson, C.; Toseeb, U. Play-based interventions for mental health: A systematic review and meta-analysis focused on children and adolescents with autism spectrum disorder and developmental language disorder. Autism Dev. Lang. Impair. 2022, 7, 1–44. [Google Scholar] [CrossRef] [PubMed]
  9. Papakostas, G.A.; Sidiropoulos, G.K.; Papadopoulou, C.I.; Vrochidou, E.; Kaburlasos, V.G.; Papadopoulou, M.T.; Holeva, V.; Nikopoulou, V.-A.; Dalivigkas, N. Social Robots in Special Education: A Systematic Review. Electronics 2021, 10, 1398. [Google Scholar] [CrossRef]
  10. Mahdi, H.; Akgun, S.A.; Salen, S.; Dautenhahn, K. A survey on the design and evolution of social robots—Past, present and future. Robot. Auton. Syst. 2022, 156, 104193. [Google Scholar] [CrossRef]
  11. World Health Organization; United Nations Children’s Fund (UNICEF). Global Report on Assistive Technology; World Health Organization: Geneva, Switzerland, 2022; Available online: https://www.unicef.org/reports/global-report-assistive-technology (accessed on 16 April 2023).
  12. Papadopoulou, M.T.; Karageorgiou, E.; Kechayas, P.; Geronikola, N.; Lytridis, C.; Bazinas, C.; Kourampa, E.; Avramidou, E.; Kaburlasos, V.G.; Evangeliou, A.E. Efficacy of a Robot-Assisted Intervention in Improving Learning Performance of Elementary School Children with Specific Learning Disorders. Children 2022, 9, 1155. [Google Scholar] [CrossRef]
  13. Robins, B.; Dautenhahn, K.; Ferrari, E.; Kronreif, G.; Prazak-Aram, B.; Marti, P.; Laudanna, E. Scenarios of robot-assisted play for children with cognitive and physical disabilities. Interact. Stud. 2012, 13, 189–234. [Google Scholar] [CrossRef] [Green Version]
  14. Estévez, D.; Terrón-López, M.-J.; Velasco-Quintana, P.J.; Rodríguez-Jiménez, R.-M.; Álvarez-Manzano, V. A Case Study of a Robot-Assisted Speech Therapy for Children with Language Disorders. Sustainability 2021, 13, 2771. [Google Scholar] [CrossRef]
  15. Ioannou, A.; Andreva, A. Play and Learn with an Intelligent Robot: Enhancing the Therapy of Hearing-Impaired Children. In Proceedings of the IFIP Conference on Human-Computer Interaction—INTERACT 2019. INTERACT 2019. Lecture Notes in Computer Science, Paphos, Cyprus, 2–6 September 2019; Springer: Cham, Switzerland, 2019; Volume 11747. [Google Scholar] [CrossRef]
  16. Hawon, L.; Hyun, E. The Intelligent Robot Contents for Children with Speech-Language Disorder. J. Educ. Technol. Soc. 2015, 18, 100–113. Available online: http://www.jstor.org/stable/jeductechsoci.18.3.100 (accessed on 16 April 2023).
  17. Lekova, A.; Andreeva, A.; Simonska, M.; Tanev, T.; Kostova, S. A system for speech and language therapy with a potential to work in the IoT. In Proceedings of the CompSysTech ‘22: International Conference on Computer Systems and Technologies 2022, Ruse, Bulgaria, 17–18 June 2022; pp. 119–124. [Google Scholar] [CrossRef]
  18. QTrobot for Education of Children with Autism and Other Special Needs. Available online: https://luxai.com/assistive-tech-robot-for-special-needs-education/ (accessed on 16 April 2023).
  19. Vukliš, D.; Krasnik, R.; Mikov, A.; Zvekić Svorcan, J.; Janković, T.; Kovačević, M. Parental Attitudes Towards The Use Of Humanoid Robots In Pediatric (Re)Habilitation. Med. Pregl. 2019, 72, 302–306. [Google Scholar] [CrossRef] [Green Version]
  20. Szymona, B.; Maciejewski, M.; Karpiński, R.; Jonak, K.; Radzikowska-Büchner, E.; Niderla, K.; Prokopiak, A. Robot-Assisted Autism Therapy (RAAT). Criteria and Types of Experiments Using Anthropomorphic and Zoomorphic Robots. Review of the Research. Sensors 2021, 21, 3720. [Google Scholar] [CrossRef]
  21. Nicolae, G.; Vlãdeanu, G.; Saru, L.M.; Burileanu, C.; Grozãvescu, R.; Craciun, G.; Drugã, S.; Hãþiş, M. Programming The Nao Humanoid Robot For Behavioral Therapy In Romania. Rom. J. Child Amp Adolesc. Psychiatry 2019, 7, 23–30. [Google Scholar]
  22. Gupta, G.; Chandra, S.; Dautenhahn, K.; Loucks, T. Stuttering Treatment Approaches from the Past Two Decades: Comprehensive Survey and Review. J. Stud. Res. 2022, 11, 1–24. [Google Scholar] [CrossRef]
  23. Chandra, S.; Gupta, G.; Loucks, T.; Dautenhahn, K. Opportunities for social robots in the stuttering clinic: A review and proposed scenarios. Paladyn J. Behav. Robot. 2022, 13, 23–44. [Google Scholar] [CrossRef]
  24. Bonarini, A.; Clasadonte, F.; Garzotto, F.; Gelsomini, M.; Romero, M. Playful interaction with Teo, a Mobile Robot for Children with Neurodevelopmental Disorders. In Proceedings of the 7th International Conference on Software Development and Technologies for Enhancing Accessibility and Fighting Info-exclusion (DSAI 2016), Portugal, 1–3 December 2016; pp. 223–231. [Google Scholar] [CrossRef] [Green Version]
  25. Kose, H.; Yorganci, R. Tale of a robot: Humanoid Robot Assisted Sign Language Tutoring. In Proceedings of the 2011 11th IEEE-RAS International Conference on Humanoid Robots, Bled, Slovenia, 26–28 October 2011; pp. 105–111. [Google Scholar]
  26. Robles-Bykbaev, V.; López-Nores, M.; Pazos-Arias, J.; Quisi-Peralta, D.; García-Duque, J. An Ecosystem of Intelligent ICT Tools for Speech-Language Therapy Based on a Formal Knowledge Model. Stud. Health Technol. Inform. 2015, 216, 50–54. [Google Scholar]
  27. Fosch-Villaronga, E.; Millard, C. Cloud Robotics Law and Regulation, Challenges in the Governance of Complex and Dynamic Cyber-Physical Ecosystems. Robot. Auton. Syst. 2019, 119, 77–91. [Google Scholar] [CrossRef]
  28. Samaddar, S.; Desideri, L.; Encarnação, P.; Gollasch, D.; Petrie, H.; Weber, G. Robotic and Virtual Reality Technologies for Children with Disabilities and Older Adults. In Computers Helping People with Special Needs. ICCHP-AAATE 2022. Lecture Notes in Computer Science; Miesenberger, K., Kouroupetroglou, G., Mavrou, K., Manduchi, R., Covarrubias Rodriguez, M., Penáz, P., Eds.; Springer: Cham, Switzerland, 2022; Volume 13342. [Google Scholar] [CrossRef]
  29. da Silva, C.A.; Fernandes, A.R.; Grohmann, A.P. STAR: Speech Therapy with Augmented Reality for Children with Autism Spectrum Disorders. In Enterprise Information Systems. ICEIS 2014. Lecture Notes in Business Information Processing; Cordeiro, J., Hammoudi, S., Maciaszek, L., Camp, O., Filipe, J., Eds.; Springer: Cham, Switzerland, 2015; Volume 227. [Google Scholar] [CrossRef]
  30. Lorenzo, G.; Lledó, A.; Pomares, J.; Roig, R. Design and application of an immersive virtual reality system to enhance emotional skills for children with autism spectrum disorders. Comput. Educ. 2016, 98, 192–205. [Google Scholar] [CrossRef] [Green Version]
  31. Kotsopoulos, K.I.; Katsounas, M.G.; Sofios, A.; Skaloumbakas, C.; Papadopoulos, A.; Kanelopoulos, A. VRESS: Designing a platform for the development of personalized Virtual Reality scenarios to support individuals with Autism. In Proceedings of the 2021 12th International Conference on Information, Intelligence, Systems & Applications (IISA), Vila Real, Portugal, 1–3 December 2016; pp. 1–4. [Google Scholar] [CrossRef]
  32. Furhat and Social Robots in Rehabilitation. Available online: https://furhatrobotics.com/habilitation-concept/ (accessed on 16 April 2023).
  33. Charron, N.; Lindley-Soucy, E.D.K.; Lewis, L.; Craig, M. Robot therapy: Promoting Communication Skills for Students with Autism Spectrum Disorders. New Hampshire J. Edu. 2019, 21, 10983. [Google Scholar]
  34. Silvera-Tawil, D.; Bradford, D.; Roberts-Yates, C. Talk to me: The role of human–robot interaction in improving verbal communication skills in students with autism or intellectual disability. In Proceedings of the 2018 27th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Nanjing, China, 27–31 August 2018; pp. 1–6. [Google Scholar] [CrossRef]
  35. Syrdal, D.S.; Dautenhahn, K.; Robins, B.; Karakosta, E.; Jones, N.C. Kaspar in the wild: Experiences from deploying a small humanoid robot in a nursery school for children with autism. Paladyn J. Behav. Robot. 2020, 11, 301–326. [Google Scholar] [CrossRef]
  36. Robles-Bykbaev, V.; Ochoa-Guaraca, M.; Carpio-Moreta, M.; Pulla-Sánchez, D.; Serpa-Andrade, L.; López-Nores, M.; García-Duque, J. Robotic assistant for support in speech therapy for children with cerebral palsy. In Proceedings of the 2016 IEEE International Autumn Meeting on Power, Electronics and Computing (ROPEC), Ixtapa, Mexico, 9–11 November 2016; pp. 1–6. [Google Scholar] [CrossRef]
  37. Pereira, J.; de Melo, M.; Franco, N.; Rodrigues, F.; Coelho, A.; Fidalgo, R. Using assistive robotics for aphasia rehabilitation. In Proceedings of the 2019 Latin American Robotics Symposium (LARS), Brazilian Symposium on Robotics (SBR) and 2019 Workshop on Robotics in Education (WRE), Rio Grande, Brazil, 23–25 October 2019; pp. 387–392. [Google Scholar] [CrossRef]
  38. Castillo, J.C.; Alvarez-Fernandez, D.; Alonso-Martin, F.; Marques-Villarroya, S.; Salichs, M.A. Social robotics in therapy of apraxia of speech. J. Healthcare Eng. 2018, 2018, 11. [Google Scholar] [CrossRef] [Green Version]
  39. Kwaśniewicz, Ł.; Kuniszyk-Jóźkowiak, W.; Wójcik, G.M.; Masiak, J. Adaptation of the humanoid robot to speech disfluency therapy. Bio-Algorithms Med-Syst. 2016, 12, 169–177. [Google Scholar] [CrossRef]
  40. Charron, N.; Lewis, L.; Craig, M. A Robotic Therapy Case Study: Developing Joint Attention Skills With a Student on the Autism Spectrum. J. Educ. Technol. Syst. 2017, 46, 137–148. [Google Scholar] [CrossRef]
  41. Kajopoulos, J.; Wong, A.H.Y.; Yuen, A.W.C.; Dung, T.A.; Kee, T.Y.; Wykowska, A. Robot-Assisted Training of Joint Attention Skills in Children Diagnosed with Autism. In Social Robotics. ICSR 2015. Lecture Notes in Computer Science; Tapus, A., André, E., Martin, J.C., Ferland, F., Ammi, M., Eds.; Springer: Cham, Switzerland, 2015; Volume 9388. [Google Scholar] [CrossRef]
  42. Taheri, A.; Meghdari, A.; Alemi, M.; Pouretemad, H.; Poorgoldooz, P.; Roohbakhsh, M. Social Robots and Teaching Music to Autistic Children: Myth or Reality? In Social Robotics. ICSR 2016. Lecture Notes in Computer Science; Agah, A., Cabibihan, J.J., Howard, A., Salichs, M., He, H., Eds.; Springer: Cham, Switzerland, 2016; Volume 9979. [Google Scholar] [CrossRef]
  43. Tariq, S.; Baber, S.; Ashfaq, A.; Ayaz, Y.; Naveed, M.; Mohsin, S. Interactive Therapy Approach Through Collaborative Physical Play Between a Socially Assistive Humanoid Robot and Children with Autism Spectrum Disorder. In Social Robotics. ICSR 2016. Lecture Notes in Computer Science; Agah, A., Cabibihan, J.J., Howard, A., Salichs, M., He, H., Eds.; Springer: Cham, Switzerland, 2016; Volume 9979. [Google Scholar] [CrossRef]
  44. Egido-García, V.; Estévez, D.; Corrales-Paredes, A.; Terrón-López, M.-J.; Velasco-Quintana, P.-J. Integration of a Social Robot in a Pedagogical and Logopedic Intervention with Children: A Case Study. Sensors 2020, 20, 6483. [Google Scholar] [CrossRef]
  45. Jeon, K.H.; Yeon, S.J.; Kim, Y.T.; Song, S.; Kim, J. Robot-based augmentative and alternative communication for nonverbal children with communication disorders. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ‘14), Seattle, Washington, USA, 13–17 September 2014; Association for Computing Machinery: New York, NY, USA, 2014; pp. 853–859. [Google Scholar] [CrossRef]
  46. Spitale, M.; Silleresi, S.; Leonardi, G.; Arosio, F.; Giustolisi, B.; Guasti, M.T.; Garzotto, F. Design Patterns of Technology-based Therapeutic Activities for Children with Language Impairments: A Psycholinguistic-Driven Approach, 2021. In Proceedings of the CHI EA ‘21: Extended Abstracts of the 2021 CHI Virtual Conference on Human Factors in Computing Systems, Yokohama, Japan, 8–13 May 2021; pp. 1–7. [Google Scholar] [CrossRef]
  47. Panceri, J.A.C.; Freitas, É.; de Souza, J.C.; da Luz Schreider, S.; Caldeira, E.; Bastos, T.F. A New Socially Assistive Robot with Integrated Serious Games for Therapies with Children with Autism Spectrum Disorder and Down Syndrome: A Pilot Study. Sensors 2021, 21, 8414. [Google Scholar] [CrossRef]
  48. Robles-Bykbaev, V.E.; Lopez-Nores, M.; Pazos-Arias, J.J.; Garcia-Duque, J. RAMSES: A robotic assistant and a mobile support environment for speech and language therapy. In Proceedings of the Fifth International Conference on the Innovative Computing Technology (INTECH 2015), Galcia, Spain, 20–22 May 2015; pp. 1–4. [Google Scholar] [CrossRef]
  49. Ochoa-Guaraca, M.; Carpio-Moreta, M.; Serpa-Andrade, L.; Robles-Bykbaev, V.; Lopez-Nores, M.; Duque, J.G. A robotic assistant to support the development of communication skills of children with disabilities. In Proceedings of the 2016 IEEE 11th Colombian Computing Conference (CCC), Popayan, Colombia, 27–30 September 2016; pp. 1–8. [Google Scholar] [CrossRef]
  50. Velásquez-Angamarca, V.; Mosquera-Cordero, K.; Robles-Bykbaev, V.; León-Pesántez, A.; Krupke, D.; Knox, J.; Torres-Segarra, V.; Chicaiza-Juela, P. An Educational Robotic Assistant for Supporting Therapy Sessions of Children with Communication Disorders. In Proceedings of the 2019 7th International Engineering, Sciences and Technology Conference (IESTEC), Panama City, Panama, 9–11 October 2019; pp. 586–591. [Google Scholar] [CrossRef]
  51. Horstmann, A.C.; Mühl, L.; Köppen, L.; Lindhaus, M.; Storch, D.; Bühren, M.; Röttgers, H.R.; Krajewski, J. Important Preliminary Insights for Designing Successful Communication between a Robotic Learning Assistant and Children with Autism Spectrum Disorder in Germany. Robotics 2022, 11, 141. [Google Scholar] [CrossRef]
  52. Farhan, S.A.; Rahman Khan, M.N.; Swaron, M.R.; Saha Shukhon, R.N.; Islam, M.M.; Razzak, M.A. Improvement of Verbal and Non-Verbal Communication Skills of Children with Autism Spectrum Disorder using Human Robot Interaction. In Proceedings of the 2021 IEEE World AI IoT Congress (AIIoT), Seattle, WA, USA, 10–13 May 2021; pp. 356–359. [Google Scholar] [CrossRef]
  53. van den Berk-Smeekens, I.; van Dongen-Boomsma, M.; De Korte, M.W.P.; Boer, J.C.D.; Oosterling, I.J.; Peters-Scheffer, N.C.; Buitelaar, J.K.; Barakova, E.I.; Lourens, T.; Staal, W.G.; et al. Adherence and acceptability of a robot-assisted Pivotal Response Treatment protocol for children with autism spectrum disorder. Sci. Rep. 2020, 10, 8110. [Google Scholar] [CrossRef] [PubMed]
  54. Lekova, A.; Kostadinova, A.; Tsvetkova, P.; Tanev, T. Robot-assisted psychosocial techniques for language learning by hearing-impaired children. Int. J. Inf. Technol. Secur. 2021, 13, 63–76. [Google Scholar]
  55. Simut, R.E.; Vanderfaeillie, J.; Peca, A.; Van de Perre, G.; Vanderborght, B. Children with Autism Spectrum Disorders Make a Fruit Salad with Probo, the Social Robot: An Interaction Study. J. Autism. Dev. Disord. 2016, 46, 113–126. [Google Scholar] [CrossRef] [PubMed]
  56. Polycarpou, P.; Andreeva, A.; Ioannou, A.; Zaphiris, P. Don’t Read My Lips: Assessing Listening and Speaking Skills Through Play with a Humanoid Robot. In HCI International 2016—Posters’ Extended Abstracts. HCI 2016. Communications in Computer and Information Science; Stephanidis, C., Ed.; Springer: Cham, Switzerland, 2016; Volume 618. [Google Scholar] [CrossRef]
  57. Lewis, L.; Charron, N.; Clamp, C.; Craig, M. Co-robot therapy to foster social skills in special need learners: Three pilot studies. In Methodologies and Intelligent Systems for Technology Enhanced Learning: 6th International Conference; Springer International Publishing: Berlin/Heidelberg, Germany, 2016; pp. 131–139. [Google Scholar]
  58. Akalin, N.; Uluer, P.; Kose, H. Non-verbal communication with a social robot peer: Towards robot assisted interactive sign language tutoring. In Proceedings of the 2014 IEEE-RAS International Conference on Humanoid Robots, Madrid, Spain, 18–20 November 2014; pp. 1122–1127. [Google Scholar] [CrossRef]
  59. Özkul, A.; Köse, H.; Yorganci, R.; Ince, G. Robostar: An interaction game with humanoid robots for learning sign language. In Proceedings of the 2014 IEEE International Conference on Robotics and Biomimetics (ROBIO 2014), Bali, Indonesia, 5–10 December 2014; pp. 522–527. [Google Scholar] [CrossRef]
  60. Andreeva, A.; Lekova, A.; Simonska, M.; Tanev, T. Parents’ Evaluation of Interaction Between Robots and Children with Neurodevelopmental Disorders. In Smart Education and e-Learning—Smart Pedagogy. SEEL-22 2022. Smart Innovation, Systems and Technologies; Uskov, V.L., Howlett, R.J., Jain, L.C., Eds.; Springer: Singapore, 2022; Volume 305. [Google Scholar] [CrossRef]
  61. Esfandbod, A.; Rokhi, Z.; Meghdari, A.F.; Taheri, A.; Alemi, M.; Karimi, M. Utilizing an Emotional Robot Capable of Lip-Syncing in Robot-Assisted Speech Therapy Sessions for Children with Language Disorders. Int. J. Soc. Robot. 2023, 15, 165–183. [Google Scholar] [CrossRef]
  62. Boccanfuso, L.; Scarborough, S.; Abramson, R.K.; Hall, A.V.; Wright, H.H.; O’Kane, J.M. A low-cost socially assistive robot and robot-assisted intervention for children with autism spectrum disorder: Field trials and lessons learned. Auton. Robot. 2016, 41, 637–655. [Google Scholar] [CrossRef]
  63. Alabdulkareem, A.; Alhakbani, N.; Al-Nafjan, A. A Systematic Review of Research on Robot-Assisted Therapy for Children with Autism. Sensors 2022, 22, 944. [Google Scholar] [CrossRef]
  64. Fisicaro, D.; Pozzi, F.; Gelsomini, M.; Garzotto, F. Engaging Persons with Neuro-Developmental Disorder with a Plush Social Robot. In Proceedings of the 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Daegu, Republic of Korea, 11–14 March 2019; pp. 610–611. [Google Scholar] [CrossRef]
  65. Cifuentes, C.; Pinto, M.J.; Céspedes, N.; Múnera, M. Social Robots in Therapy and Care. Curr. Robot. Rep. 2020, 1, 59–74. [Google Scholar] [CrossRef]
  66. Available online: https://furhatrobotics.com/furhat-robot/ (accessed on 16 April 2023).
  67. Integrating Furhat with OpenAI. Available online: https://docs.furhat.io/tutorials/openai/ (accessed on 16 April 2023).
  68. Elfaki, A.O.; Abduljabbar, M.; Ali, L.; Alnajjar, F.; Mehiar, D.; Marei, A.M.; Alhmiedat, T.; Al-Jumaily, A. Revolutionizing Social Robotics: A Cloud-Based Framework for Enhancing the Intelligence and Autonomy of Social Robots. Robotics 2023, 12, 48. [Google Scholar] [CrossRef]
  69. Lekova, A.; Tsvetkova, P.; Andreeva, A. System software architecture for enhancing human-robot interaction by Conversational AI. In Proceedings of the 2023 International Conference on Information Technologies (InfoTech-2023), Bulgaria, 20–21 September 2023; in press. [Google Scholar]
  70. Dino, F.; Zandie, R.; Abdollahi, H.; Schoeder, S.; Mahoor, M.H. Delivering Cognitive Behavioral Therapy Using A Conversational Social Robot. In Proceedings of the 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China, 3–8 November 2019; pp. 2089–2095. [Google Scholar] [CrossRef] [Green Version]
  71. Grassi, L.; Recchiuto, C.T.; Sgorbissa, A. Sustainable Cloud Services for Verbal Interaction with Embodied Agents. Intell. Serv. Robot. 2023; in press. [Google Scholar]
  72. Available online: https://furhatrobotics.com/blog/5-ways-social-robots-are-innovating-education/ (accessed on 16 April 2023).
  73. Elgarf, M.; Skantze, G.; Peters, C. Once upon a story: Can a creative storyteller robot stimulate creativity in children? In Proceedings of the 21st ACM International Conference on Intelligent Virtual Agents, Fukuchiyama, Japan, 14–17 September 2021; pp. 60–67. [Google Scholar]
  74. Liu, Y.; Gao, T.; Song, B.; Huang, C. Personalized Recommender System for Children’s Book Recommendation with A Realtime Interactive Robot. J. Data Sci. Intell. Syst. 2017. [Google Scholar] [CrossRef]
  75. Available online: furhatrobotics.com/Furhat-robot (accessed on 16 April 2023).
  76. AskNAO Tablet. Available online: https://www.asknao-tablet.com/en/home/ (accessed on 16 April 2023).
  77. Available online: https://furhatrobotics.com (accessed on 16 April 2023).
  78. Baird, A.; Amiriparian, S.; Cummins, N.; Alcorn, A.M.; Batliner, A.; Pugachevskiy, S.; Freitag, M.; Gerczuk, M.; Schuller, B. Automatic classification of autistic child vocalisations: A novel database and results. Proc. Interspeech 2017, 849–853. [Google Scholar] [CrossRef]
  79. Shahab, M.; Taheri, A.; Hosseini, S.R.; Mokhtari, M.; Meghdari, A.; Alemi, M.; Pouretemad, H.; Shariati, A.; Pour, A.G. Social Virtual Reality Robot (V2R): A Novel Concept for Education and Rehabilitation of Children with Autism. In Proceedings of the 2017 5th RSI International Conference on Robotics and Mechatronics (ICRoM), Tehran, Iran, 25–27 October 2017; pp. 82–87. [Google Scholar] [CrossRef]
  80. Marušić, P.; Krhen, A.L. Virtual reality as a therapy for stuttering. Croat. Rev. Rehabil. Res. 2022, 58. [Google Scholar] [CrossRef]
  81. Jingying, C.; Hu, J.; Zhang, K.; Zeng, X.; Ma, Y.; Lu, W.; Zhang, K.; Wang, G. Virtual reality enhances the social skills of children with autism spectrum disorder: A review. Interact. Learn. Environ. 2022, 1–22. [Google Scholar] [CrossRef]
  82. Lee, S.A.S. Virtual Speech-Language Therapy for Individuals with Communication Disorders: Current Evidence, Limitations, and Benefits. Curr. Dev. Disord. Rep. 2019, 6, 119–125. [Google Scholar] [CrossRef]
  83. Bailey, B.; Bryant, L.; Hemsley, B. Virtual Reality and Augmented Reality for Children, Adolescents, and Adults with Communication Disability and Neurodevelopmental Disorders: A Systematic Review. Rev. J. Autism. Dev. Disord. 2022, 9, 160–183. [Google Scholar] [CrossRef]
  84. Halabi, O.; Abou El-Seoud, S.; Alja’am, J.; Alpona, H.; Al-Hemadi, M.; Al-Hassan, D. Design of Immersive Virtual Reality System to Improve Communication Skills in Individuals with Autism. Int. J. Emerg. Technol. Learn. (iJET) 2017, 12, 50–64. [Google Scholar] [CrossRef] [Green Version]
  85. Almurashi, H.; Bouaziz, R.; Alharthi, W.; Al-Sarem, M.; Hadwan, M.; Kammoun, S. Augmented Reality, Serious Games and Picture Exchange Communication System for People with ASD: Systematic Literature Review and Future Directions. Sensors 2022, 22, 1250. [Google Scholar] [CrossRef] [PubMed]
  86. Chai, J.; Zeng, H.; Li, A.; Ngai, E.W. Deep learning in computer vision: A critical review of emerging techniques and application scenarios. Mach. Learn. Appl. 2021, 6, 100134. [Google Scholar] [CrossRef]
  87. O’Mahony, N.; Campbell, S.; Carvalho, A.; Harapanahalli, S.; Hernandez, G.V.; Krpalkova, L.; Riordan, D.; Walsh, J. Deep learning vs. traditional computer vision. In Advances in Computer Vision: Proceedings of the 2019 Computer Vision Conference (CVC); Springer International Publishing: Berlin/Heidelberg, Germany, 2020; Volume 1, pp. 128–144. [Google Scholar]
  88. Debnath, B.; O’Brien, M.; Yamaguchi, M.; Behera, A. A review of computer vision-based approaches for physical rehabilitation and assessment. Multimed. Syst. 2022, 28, 209–239. [Google Scholar] [CrossRef]
  89. Aouani, H.; Ayed, Y.B. Speech emotion recognition with deep learning. Procedia Comput. Sci. 2020, 176, 251–260. [Google Scholar] [CrossRef]
  90. Sekkate, S.; Khalil, M.; Adib, A. A statistical feature extraction for deep speech emotion recognition in a bi-lingual scenario. Multimed. Tools Appl. 2023, 82, 11443–11460. [Google Scholar] [CrossRef]
  91. Samyak, S.; Gupta, A.; Raj, T.; Karnam, A.; Mamatha, H.R. Speech Emotion Analyzer. In Innovative Data Communication Technologies and Application: Proceedings of ICIDCA 2021; Springer Nature: Singapore, 2022; pp. 113–124. [Google Scholar]
  92. Zou, C.; Huang, C.; Han, D.; Zhao, L. Detecting Practical Speech Emotion in a Cognitive Task. In Proceedings of the 20th International Conference on Computer Communications and Networks (ICCCN), Lahaina, HI, USA, 31 July–4 August 2011; pp. 1–5. [Google Scholar] [CrossRef]
  93. Fioriello, F.; Maugeri, A.; D’alvia, L.; Pittella, E.; Piuzzi, E.; Rizzuto, E.; Del Prete, Z.; Manti, F.; Sogos, C. A wearable heart rate measurement device for children with autism spectrum disorder. Sci. Rep. 2020, 10, 18659. [Google Scholar] [CrossRef]
  94. Alban, A.Q.; Alhaddad, A.Y.; Al-Ali, A.; So, W.-C.; Connor, O.; Ayesh, M.; Qidwai, U.A.; Cabibihan, J.-J. Heart Rate as a Predictor of Challenging Behaviours among Children with Autism from Wearable Sensors in Social Robot Interactions. Robotics 2023, 12, 55. [Google Scholar] [CrossRef]
  95. Anzalone, S.M.; Tanet, A.; Pallanca, O.; Cohen, D.; Chetouani, M. A Humanoid Robot Controlled by Neurofeedback to Reinforce Attention in Autism Spectrum Disorder. In Proceedings of the 3rd Italian Workshop on Artificial Intelligence and Robotics, Genova, Italy, 28 November 2016. [Google Scholar]
  96. Nahaltahmasebi, P.; Chetouani, M.; Cohen, D.; Anzalone, S.M. Detecting Attention Breakdowns in Robotic Neurofeedback Systems. In Proceedings of the 4th Italian Workshop on Artificial Intelligence and Robotics, Bari, Italy, 14–15 November 2017. [Google Scholar]
  97. Van Otterdijk, M.T.H.; de Korte, M.W.P.; van den Berk-Smeekens, I.; Hendrix, J.; van Dongen-Boomsma, M.; den Boer, J.C.; Buitelaar, J.K.; Lourens, T.; Glennon, J.C.; Staal, W.G.; et al. The effects of long-term child–robot interaction on the attention and the engagement of children with autism. Robotics 2020, 9, 79. [Google Scholar] [CrossRef]
Figure 1. The number of published articles for empirical vs. pilot studies over the last 15 years.
Figure 2. Summary of types of disorders.
Figure 3. Age of the participants interacting with robots in the studies: (A) pilot studies; (B) case studies.
Table 3. Description of communication objectives used in the research with scenarios for child–robot interaction.

Communication Objectives | References | Number of Articles
Joint attention | [35,41,45,47,57,62] | 6
Turn-taking | [14,16,35,41,43,45,57,62] | 8
Imitation/repetition | [35,41,43,47,48,52,57,61,62] | 9
Sign recognition/understanding/receptive vocabulary | [54,56,58,59] | 4
Sign production/expressive vocabulary | [58] | 1
Sound and speech sound recognition | [14,15,17,42,47,48,56,60,61] | 9
Speech recognition/receptive vocabulary | [14,15,33,35,44,45,47,48,55,56,57,61,62] | 13
Speech production/expressive vocabulary | [14,17,35,36,41,44,45,48,49,54,56,60,61] | 13
Morphosyntax skills | [12,14,33,36,44,46,48,49,52,61] | 10
Functional communication/maintain conversation/pragmatic skills | [14,16,36,44,48,49,52,57,60,61] | 10
Table 4. Distribution of SAR types across pilot/case studies and the number of articles in which they are used.

Socially Assistive Robot | Articles
NAO | 27
Custom-made 3D-printed/Arduino-based robot (low cost) | 8
Probo (low cost) | 3
Robovie R3 (high cost) | 2
iRobi (high cost) | 2
Cozmo (low cost) | 1
SPELTRA (low cost) | 1
CuDDler (low cost) | 1
CASPER | 1
RASA | 1
MARIA T21 | 1
QTrobot (high cost) | 1
Pepper (high cost) | 1
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.


