Systematic Review

Artificial Intelligence of Things Applied to Assistive Technology: A Systematic Literature Review

by Maurício Pasetto de Freitas 1,*, Vinícius Aquino Piai 1, Ricardo Heffel Farias 1, Anita M. R. Fernandes 1, Anubis Graciela de Moraes Rossetto 2 and Valderi Reis Quietinho Leithardt 3,4

1 School of Sea, Science and Technology, University of the Itajaí Valley, Itajaí 88302-901, Brazil
2 Federal Institute of Education, Science and Technology Sul-Rio-Grandense, Passo Fundo 99064-440, Brazil
3 COPELABS, Lusófona University of Humanities and Technologies, Campo Grande 376, 1749-024 Lisboa, Portugal
4 VALORIZA, Research Center for Endogenous Resources Valorization, Instituto Politécnico de Portalegre, 7300-555 Portalegre, Portugal
* Author to whom correspondence should be addressed.
Sensors 2022, 22(21), 8531; https://doi.org/10.3390/s22218531
Submission received: 19 September 2022 / Revised: 27 October 2022 / Accepted: 31 October 2022 / Published: 5 November 2022
(This article belongs to the Special Issue Feature Papers in the Internet of Things Section 2022)

Abstract

According to the World Health Organization, about 15% of the world’s population has some form of disability. Assistive Technology, in this context, contributes directly to overcoming the difficulties encountered by people with disabilities in their daily lives, allowing them to receive education and become part of the labor market and society in a worthy manner. Assistive Technology has made great advances in its integration with Artificial Intelligence of Things (AIoT) devices. AIoT processes and analyzes the large amount of data generated by Internet of Things (IoT) devices and applies Artificial Intelligence models, specifically machine learning, to discover patterns for generating insights and assisting in decision making. Based on a systematic literature review, this article aims to identify the machine-learning models used across different research on Artificial Intelligence of Things applied to Assistive Technology. The survey of the topics approached in this article also highlights the context of such research, their applications, the IoT devices used, and gaps and opportunities for further development. The survey results show that 50% of the analyzed research addresses visual impairment and, for this reason, most of the topics cover issues related to computer vision. Portable devices, wearables, and smartphones constitute the majority of IoT devices. Deep neural networks represent 81% of the machine-learning models applied in the reviewed research.

1. Introduction

According to the World Health Organization, about 15% of the world population has some type of disability, of whom approximately 190 million people experience significant difficulties in functioning [1]. The same source indicates that this number continues to grow due to the increase in chronic health conditions and the aging of the world population. Considering that disabilities may be of a temporary or permanent nature, they may encompass a wide range of special needs, restrictions, and health conditions. Such disabilities include degenerative diseases such as Parkinson’s disease, amyotrophic lateral sclerosis (ALS), and Alzheimer’s disease; physical, mental, visual, and hearing impairments; and chronic non-communicable diseases. Additionally, disabilities can result from aging, the period in which the incidence of many disabilities is highest [1].
According to the United Nations Convention on the Rights of Persons with Disabilities (CRPD), disability is not an attribute of the person but the result of environmental and behavioral barriers that arise from the interaction between people with disabilities and society, thereby preventing them from participating equally, fully, and effectively as citizens. Therefore, dealing with the obstacles that affect people with disabilities contributes to improving their social participation in general. Within this context, Assistive Technology (AT) contributes directly to reducing the difficulties encountered by people with disabilities in their daily lives. Assistive Technology allows them to live independent, healthy, and productive lives, to receive education, and to participate in the labor market and society in a worthy manner [2].
Assistive Technology encompasses services, products, methodologies, strategies, and practices that aim to minimize and/or eliminate restrictions and limitations imposed on a person due to a disability or incapacity [3]. It focuses on providing independence, quality of life, and social inclusion to people with disabilities. Some examples of Assistive Technology are hearing aids, memory aids, eyeglasses, wheelchairs, pill organizers, and communication aids. Assistive Technology has made great advances in its integration with Artificial Intelligence of Things—from now on referred to as AIoT—devices and machine learning [4,5,6]. AIoT processes and analyzes the large amount of data generated by Internet of Things—from now on referred to as IoT—devices and applies Artificial Intelligence techniques, specifically, machine learning, to discover patterns for generating insights and assisting in decision making [7].
When applied to AT, AIoT allows the conception of an array of disruptive solutions to address the disability issue. Some examples of such solutions are navigation systems for blind people, voice assistants for people with disabilities [8], the remote monitoring of health conditions [9], telemedicine and telehealth [10], communication systems based on sign language [11], auxiliary memory for people with cognitive disabilities, and a series of smart objects such as medicine dispensers, wheelchairs [12], exoskeletons [13], etc. These are just a few of the numerous applications of great value for those in need. As Mary Pat Radabaugh, then director of IBM’s National Support Center for People with Disabilities, put it in 1988: “For people without disabilities, technology makes life easier. For people with disabilities, technology makes life possible” [14].
Given the importance of the development of AIoT applied to Assistive Technology, this systematic literature review aims to identify the machine-learning models used in the different research on this topic. Since reviews of the literature on this subject are still scarce compared to the amount of relevant research conducted in the area, it is important that the data and evidence collected in recent studies be presented in a coherent and cohesive fashion. The present article gathers this previous research to provide such a review.
For this purpose, the present Systematic Literature Review (SLR) is based on the guidelines defined by Kitchenham and Charters in 2007 [15] and by Petersen et al. [16]. In this scope, an SLR surveys the relevant previous research conducted on a particular theme or research question to find evidence that answers its proposed objectives. An SLR is therefore considered evidence-based research. This process takes place through a rigorous, reliable, and replicable methodology [15].
The idea of using evidence-based research in the field of computer science was originally proposed by Kitchenham, Dybå, and Jørgensen in 2004 and 2005 [17,18], who applied an “Evidence-Based Software Engineering” (EBSE) methodology to the research and practice of software engineering. The stages of a systematic literature review addressed in this work were: planning, consisting of the creation of the SLR protocol; conducting, consisting of the application of the SLR protocol; and the SLR documentation and report [19]. Each of these stages is composed of a series of steps, as defined and presented throughout this SLR.
The survey of the topics approached in this article also highlights the context of such research, their applications, the IoT devices used, and gaps and opportunities for further development. This article is organized as follows: Section 2 presents the concepts related to the research. Section 3 presents the research methodology used, which includes the research objective and questions, the steps of the systematic literature review, and the threats to the validity of the results. Section 4 presents the results of this review. Section 5 presents the conclusion based on the study results and recommendations for future studies.

2. Assistive Technology, AIoT, and Machine Learning

Aiming at minimizing and/or eliminating restrictions and limitations imposed on a person due to a disability or incapacity, the World Health Organization defines Assistive Technology as an umbrella term encompassing any system or service related to assistive products and services. The Assistive Technology Industry Association defines Assistive Technology products and services as any item, piece of equipment, hardware, or software intended to assist people with some type of disability. Such products and services result from the combination of AT and AIoT [11]. Industry 4.0 is an ally in the improvement of AT. The development of devices based on Artificial Intelligence and the Internet of Things, at a low cost, will benefit a large part of society that depends on ATs for better living conditions. Conversely, persons with disabilities can qualify for the labor market in the Industry 4.0 environment by using ATs [20]. The design of Assistive Technologies that enable greater involvement of people with disabilities in the field of employment is therefore extremely important.
Artificial Intelligence of Things results from the combination of IoT [20] and Artificial Intelligence (AI) techniques [21]. IoT refers to interconnectable devices—such as sensors—that can collect relevant data in real time [22]. The relevance of these data is revealed by processing them with Artificial Intelligence models, especially machine learning (ML). Some cases also require deep learning (DL) to analyze the collected data and extract useful information for decision making [23,24,25]. The application of ML techniques shows promise for the healthcare sector [26] by improving its efficiency [27].
The term AI was coined by John McCarthy [28], considered the father of AI, in 1956, during the first AI conference at Dartmouth College. McCarthy defined AI as “the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable” [29]. Several areas have been expanding rapidly in recent years and need dynamic solutions that can be provided with AI [30], such as sustainability [31], health [32], telecommunication systems [33], data privacy [34,35], electric vehicles [36], and electrical power systems [37,38,39].
Earlier, in 1950, Alan Turing had posed the question: “Can machines think?” The Turing test was then proposed with the aim of determining whether a computer can demonstrate the same intelligence as a human being [40]. To pass this test, a system would need capabilities that are currently the subject of study in machine learning, such as natural language processing [41], knowledge representation [42], and automated reasoning [43]. Given the advances in AI models, several applications are being used to improve the quality of life of people with physical disabilities and to improve applications for smart healthcare [44], such as smart robots [45,46,47], or more specific applications, such as sign language [48,49,50,51,52,53].
Machine learning is a subfield of AI that aims at developing models and computer programs that can learn automatically by extracting knowledge from data [54]. These programs must be able to improve and extend themselves from experience, without being explicitly programmed. In the IoT context, these models are used to process and analyze a large amount of data collected by devices, automatically discover patterns, and generate meaningful insights from this data. Such a task would be impossible for humans to perform manually. Some of these ML models are echo state networks [55], ensemble learning methods [56,57,58], k-nearest neighbors (K-NN) [59], group method of data handling (GMDH) [60], long short-term memory (LSTM) [61], convolutional neural networks (CNNs) [62,63,64,65], and adaptive neuro-fuzzy inference system (ANFIS) [66].
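To illustrate how one of the listed models can be applied to data collected by IoT devices, the following minimal sketch trains a k-nearest neighbors classifier on synthetic, placeholder sensor features; the feature layout and activity labels are hypothetical and not taken from any of the reviewed studies.

```python
# Minimal, hypothetical sketch: K-NN classifying synthetic accelerometer windows
# into two activity classes. Feature layout and labels are illustrative only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)

# 200 windows x 6 features (e.g., mean and variance of the x, y, z axes)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # synthetic "walking" vs "resting" label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = KNeighborsClassifier(n_neighbors=5)  # K-NN, one of the models listed above
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```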
Deep learning is a subfield of machine learning that specifically studies deep neural networks. Like ML, deep learning also uses data-based learning methods; however, computation and processing are performed using multilayer neural networks [67]. Experimental results and heuristic considerations suggest that deep architectures are more suitable than shallow ones for modern applications facing very complex problems, e.g., vision, human language understanding, and the processing of big data [68]. These models can be applied to prediction [69,70,71], classification [72,73,74], and optimization [75,76,77] problems.
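As a minimal illustration of such a multilayer (deep) architecture, the sketch below defines and trains a tiny convolutional network in Keras on random placeholder data; the input shape, number of classes, and training data are assumptions made only for this example.

```python
# Illustrative sketch of a small "deep" (multilayer) CNN classifier, similar in
# spirit to the architectures that dominate the reviewed studies.
# Input shape and number of classes are placeholder assumptions.
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(64, 64, 1)),           # e.g., a small grayscale image
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),     # 10 hypothetical object classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Train on random placeholder data just to show the end-to-end flow.
X = np.random.rand(32, 64, 64, 1).astype("float32")
y = np.random.randint(0, 10, size=(32,))
model.fit(X, y, epochs=1, batch_size=8, verbose=0)
```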

3. Research Methodology

A systematic literature review (SLR) was performed to achieve the objectives of the current study. SLR is a methodological review of research results that aims to aggregate existing evidence on a research problem, as well as identify, select, evaluate, and summarize primary articles considered relevant on the research topic in an unbiased and repeatable way. SLR is considered a secondary study for aggregating previous studies [15].
The stages and sub-stages of a systematic review addressed in this work were: planning, conducting, and documenting the review. After presenting the research objectives and questions, the topics related to planning were presented according to the progress of the SLR. This approach was based on the work of Kitchenham et al. [78] and Banijamali and Ahmad et al. [79]. Banijamali and Ahmad et al. [79] define these research stages as follows: planning, which includes identifying the research objective, defining the research questions, and developing and evaluating a review protocol, activities that can be performed iteratively; conducting, which includes identifying primary articles using search strategies, selecting studies using inclusion and exclusion criteria, extracting the data, and synthesizing the data; and publication, which includes specifying, formatting, and evaluating the report.
Figure 1 presents the systematic review steps (guidelines) based on those proposed by Kitchenham [80].
Figure 2 presents the systematic review process model on which this work is based, also following Kitchenham [80].

3.1. Research Purpose

The objective of this literature review is to identify the ML models used in research on Artificial Intelligence of Things-based Assistive Technology. Additionally, it aims to identify the context of these models’ applications through a survey of the topics of study and the IoT devices used, and to identify development gaps and opportunities. Due to the wide range of conditions characterized as disabilities, the scope of this study was limited to research addressing the following: visual impairment, hearing impairment, cognitive impairment, and degenerative diseases such as ALS, Alzheimer’s, and Parkinson’s. Table 1 presents the research questions of this study.
QP1 intends to identify the machine-learning models applied in the development of AIoT applied to Assistive Technology. All models mentioned in the selected primary articles were identified for this purpose.
QP2 intends to provide context related to the topics addressed in the development of AIoT applied to Assistive Technology. The study topics addressed by the selected primary articles were identified for this purpose.
QP3 intends to provide context related to the types of IoT devices that have been used for the development of AIoT applied to Assistive Technology. All IoT devices listed in the proposed solutions were identified for this analysis. The devices were also classified as Arduino-, Raspberry Pi-, or Nvidia Jetson-based.
QP4 identifies the gaps for AIoT applied to Assistive Technology research and development, indicating for which disability or incapacity the study intends to develop a solution.

3.2. Research Process

This section aims to detail the research process used in this SLR. A database search or automatic search strategy was used for this study. It consists of searching digital libraries using a search string [80]. A search string consists of combining keywords using logical operators such as OR and AND. Usually, synonyms of each keyword are grouped by the OR operator, and an AND operation is used to join these groups.
The keywords defined for this study were: Assistive Technology, AIoT, and Machine Learning. Iterative tests were carried out on each of the databases to identify the search strings whose returns were the most significant in terms of scope and relevance of the studies. The selected libraries were chosen because they are among the most used in computer science research [80]. Table 2 presents the databases used and the respective search strings defined after the iterative validation process.
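To make the construction rule concrete, the following sketch assembles a boolean query by OR-ing synonyms within a group and AND-ing the groups together; the synonym lists shown are illustrative, and the exact strings used for each database are those listed in Table 2.

```python
# Illustrative sketch of how a search string is assembled: synonyms are OR-ed
# within a group and the groups are AND-ed together. The synonym lists below
# are examples, not the exact strings of Table 2.
def build_search_string(keyword_groups):
    clauses = []
    for synonyms in keyword_groups:
        clauses.append("(" + " OR ".join(f'"{term}"' for term in synonyms) + ")")
    return " AND ".join(clauses)

groups = [
    ["Assistive Technology", "assistive device"],
    ["AIoT", "Artificial Intelligence of Things", "Internet of Things"],
    ["machine learning", "deep learning"],
]
print(build_search_string(groups))
# ("Assistive Technology" OR "assistive device") AND ("AIoT" OR ...) AND ...
```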
The automatic database search was performed on 14 September 2021. Each database returned a set of articles, as shown in Table 3. These articles were processed to remove duplicated entries; this task was performed automatically by the tool Parsifal. Initially, a total of two hundred and sixty-seven articles were selected, of which seventy-nine were duplicates, leaving one hundred and eighty-eight articles. The set of articles resulting from the research stage passed to the next stage of the SLR, the Study Selection.
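A minimal sketch of the duplicate-removal step, which Parsifal performs automatically, is shown below; the record fields ("doi", "title") and the matching rule are assumptions made purely for illustration.

```python
# Minimal sketch of duplicate removal across database exports. Parsifal performs
# this automatically; field names ("doi", "title") are assumed for illustration.
import re

def normalize(title):
    # Lowercase and collapse punctuation so formatting differences do not matter.
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records):
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or normalize(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "AIoT for Assistive Technology", "doi": None},
    {"title": "AIoT for assistive technology.", "doi": None},  # same study, different formatting
]
print(len(deduplicate(records)))  # -> 1
```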

3.3. Study Selection Criteria

This section aims to present the selection and exclusion criteria used in this SLR. The presented criteria were preconditions for the acceptance or exclusion of an article in the SLR. This selection seeks to identify relevant studies that can answer the proposed research questions [78,79,80]. The criteria were applied to the set of articles resulting from the search process. Some of the criteria were applied directly to the databases, according to the available filters, with variations for each of the databases. Table 4 presents the inclusion criteria, and the indication of those applied directly to the bases. Table 5 lists the exclusion criteria.
It is worth mentioning that meeting a single exclusion criterion was enough for an article to be excluded. Conversely, for an article to be selected, all the inclusion criteria had to be satisfied. Initially, the title and abstract of each article were read and taken into consideration when applying the selection criteria; in cases where this reading was insufficient, the article was read until the selection or exclusion could be confirmed. At the end of this stage, a total of 30 articles remained; this set passed to the next stage of the process, the Quality Assessment.

3.4. Quality Assessment

This section presents the quality assessment process used in this SLR. The previous step resulted in a set of pre-selected articles. However, for these articles to be accepted by this SLR, they had to pass the quality assessment process. At this stage, each study was evaluated to ensure the quality of the data extracted in the subsequent data extraction stage [15,80]; that is, the process was intended to identify studies relevant to answering the research questions.
Table 6 shows a questionnaire containing five questions prepared for this SLR to assess the quality of the articles. Each question has three answer options, each with its respective score: “yes”, “partially”, and “no”, scoring 1.0, 0.5, and 0.0, respectively. The total score for each article was defined as the sum of the values obtained from the five answers. A maximum value of 5.0 indicated an article well matched to this SLR, and a minimum of 0.0 indicated that the article was not suitable. A cutoff score of 2.0 was defined, so only articles with a score greater than 2.0 were considered accepted for this SLR. This quality assessment method was implemented and applied using Parsifal.
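The scoring rule can be expressed compactly, as in the sketch below, which reproduces the sum-and-cutoff logic with hypothetical answer sets.

```python
# Sketch of the quality-assessment scoring used in this SLR: each of the five
# questions is answered "yes" (1.0), "partially" (0.5), or "no" (0.0); articles
# with a total score greater than 2.0 are accepted. Answers below are hypothetical.
SCORES = {"yes": 1.0, "partially": 0.5, "no": 0.0}
CUTOFF = 2.0

def quality_score(answers):
    return sum(SCORES[a] for a in answers)

assessments = {
    "article A": ["yes", "yes", "partially", "no", "yes"],       # 3.5 -> accepted
    "article B": ["no", "partially", "partially", "no", "yes"],  # 2.0 -> rejected
}
accepted = [name for name, ans in assessments.items() if quality_score(ans) > CUTOFF]
print(accepted)  # ['article A']
```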
The primary articles were read in their entirety for the attribution of scores. Table 7 shows the result of the quality assessment. The input set of this stage contained 30 articles, of which 3 received a score less than or equal to 2.0. Thus, 27 articles passed to the next stage of the SLR, the Data Extraction.

3.5. Data Extraction

The data extraction process makes use of a data extraction form generated, specifically for this SLR, using the Parsifal tool. Filling in the fields of this form after the reading of each selected article allows the recovery of data to answer the research questions raised by this SLR, as can be seen in Table 1. This form also collects metadata used to identify the studies individually, thereby assisting the extraction process [78]. Table 8 presents the fields and the purpose of each field within the extraction form.
Table 8 presents ten data properties defined for this study, where PD1 to PD3 were used to identify and locate articles. The remaining properties were defined to answer the research questions: PD4 answers QP1; PD5 and PD6 answer QP2; PD7 answers QP3; and PD8 answers QP4. Data were extracted and organized with the Parsifal tool after the complete reading of each selected article (see Table 7), thereby facilitating the extraction process.

3.6. Threats to the Validity of the Study

Bias in the identification of primary articles and in the extraction of data from the articles is a threat to the validity of this study, aggravated by the fact that each researcher was responsible for evaluating a disjoint set of articles, both in the selection and in the quality assessment, with no peer validation. Another threat is the small number of articles selected in this SLR, which raises the possibility that the sample is not representative enough to extract evidence that can effectively answer the research questions.
The selection of databases, or digital libraries, can also be considered a threat, since they may not cover all the studies carried out in the context of the problem, which spans areas such as sociology and medicine.

4. Results

This section summarizes the findings and results of the analyses of the selected primary articles. The selection process conducted in the selected databases collected an initial set of two hundred and sixty-seven articles. Of these, twenty-seven were considered for this SLR (see Table 7) for their contribution to the topic. Section 4.1 presents the contributions of the articles with a score greater than 3 (see Table 7). Section 4.2 proposes answers to the research questions presented in Table 1, based on the data extracted with the form presented in Table 8. Section 4.3 revisits the threats to the validity of the study.

4.1. Contributions of Selected Articles

The articles presented below have relevant contributions to the research in AIoT applied to Assistive Technology. It can be noticed that visual impairment is the area of research with the highest number of presented works.
The work of Junior et al. [81] presents a framework that applies computer vision and machine-learning techniques through an IoT network with the use of cloud computing to increase capacity. The images are captured by an IoT device and sent to an edge element (IoT node), which processes them, identifies objects, computes distances, and, ultimately, converts that information into audible commands to provide guidance for visually impaired people.
Chang et al. [82] present a deep-learning-based wearable medicines recognition system for visually impaired people. The proposed system is composed of a pair of wearable smart glasses, a wearable waist-mounted drug pills recognition device, a mobile device application, and a cloud-based management platform. This system uses deep-learning technology to assist visually impaired people in identifying drug pills and help them avoid taking the wrong medicines. The experimental results show that the accuracy of the proposed system reached up to 90%.
A system for localized scene understanding to assist sufferers of visual disabilities is proposed by Ghazal et al. [83]. The system determines the user’s indoor location using Wi-Fi fingerprinting and synthesizes a real-time description of the surrounding environment, using deep learning, and sensory information collected from IoT sensors. The results show that the system can be an effective tool in helping the visually challenged navigate unknown environments by using increasingly available smart home technologies.
Chang et al. [84] present a wearable smart glasses-based drug pill recognition system, using deep learning, for visually impaired people to improve their medication-use safety. The system consists of a pair of wearable smart glasses, an artificial intelligence-based intelligent drug pill recognition box, a mobile device app, and a cloud-based information management platform. The experimental results show that a recognition accuracy of up to 95.1% can be achieved.
A wearable assistive system based on artificial intelligence edge computing techniques to help visually impaired consumers safely use marked crosswalks, or zebra crossings, is presented by Chang et al. [86]. The system consists of a pair of smart sunglasses, a waist-mounted intelligent device, and an intelligent walking cane (stick). A deep-learning technique is adopted for zebra crossing image recognition in real time. The experimental results show that the accuracy of the real-time zebra crossing recognition of the proposed system can reach up to 90%.
Su et al. [88] designed a finger-worn device that visually impaired users can apply to recognize traditional Chinese characters on a micro IoT processor. The device, worn on the index finger, contains a small camera and buttons: the camera captures images based on the relative position of the index finger to the printed text, and the buttons allow visually impaired users to trigger an image capture and receive the audio output of the corresponding Chinese character through a voice prompt. To recognize Chinese characters, English letters, and numbers, a robust Chinese optical character recognition (OCR) system was developed using the training strategy of an augmented convolutional neural network algorithm. The Chinese OCR system can segment a single character from the captured image, and the system can accurately recognize rotated Chinese characters. The experimental results revealed that, compared with the OCR application programming interfaces of Google and Microsoft, the proposed OCR system obtained a 95% accuracy rate in dealing with rotated character images, whereas the Google and Microsoft OCR APIs only obtained 65% and 34% accuracy rates, respectively.
Yadav et al. [89] propose a smart navigation system that continuously scans the environment and then detects and classifies neighboring objects using a 4-layer convolutional neural network (CNN), trained on a data set containing 2513 permutations of various images of household objects that an individual may encounter in daily life. The device serves requests within a response time of less than 50 ms. The accuracy of the CNN algorithm, at 94.6%, also sets a distinguished benchmark as an object detection algorithm, thereby contributing to the success of the simulations of the proposed device in a constrained environment.
Kandoth et al. [96] present an application developed based on a smart portable cane equipped with sensors and a camera to help the visually impaired to understand their surroundings, detect obstacles and avoid them, using computer vision, neural networks, and IoT in outdoor spaces, thereby helping them to navigate safely. The idea is to implement the YOLO (you-only-look-once) algorithm for obstacle detection using the DarkNet 1.0 framework, which is then used for object avoidance using an ultrasonic sensor. The results are considerably better than those of the SegNet networks when trained only with the original images and other state-of-the-art results using the Synthia database.
Karkar et al. [105] present the concept of scene to speech (STS). STS recognizes the elements in a captured image or a video clip and speaks, loudly, informative textual content that describes the scene. The contemporary progression of convolution neural networks allows attaining object recognition procedures, in real time, on mobile handheld devices.
Boppana et al. [93] present the design of a prototype assistive device for deaf–mute people to reduce the communication gap with hearing people. The device is portable and can hang around the neck. It allows the person to communicate with sign hand postures, recognizing different gesture-based signs. The controller of this assistive device processes the images of gestures using various image-processing techniques and deep-learning models to recognize the sign, which is then converted into speech in real time using a text-to-speech module. This sign language converter was found to be 99% accurate in recognizing the signs and generating the correct words.
The study of Lee et al. [101] presents the design and implementation of a smart wearable American Sign Language (ASL) interpretation system, using deep learning, which applies sensor fusion to combine six inertial measurement units (IMUs). The IMUs are attached to all fingertips and the back of the hand to recognize sign language gestures; thus, the proposed method is not restricted by the field of view. The model achieves an average recognition rate of 99.81% for dynamic ASL gestures. The ASL recognition system can be integrated with ICT and IoT technology to provide a feasible solution to assist hearing-impaired people in communicating with others.
The paper of Javed and Sarwar [95] proposes a novel smartphone-based end-to-end framework, named PP-SPA, for privacy-preserved human activity recognition (HAR) and real-time activity functioning support using a smartphone-based virtual personal assistant. PP-SPA helps to improve the routine life functioning of cognitively impaired individuals. It uses a highly accurate machine-learning model that takes input from smartphone sensors (i.e., accelerometer, gyroscope, magnetometer, and GPS) for accurate HAR and uses a digital diary to recommend real-time support for the improvement of an individual’s health. PP-SPA achieves an accuracy of 90% with the Hoeffding tree and logistic regression algorithms, which provide reasonable models in terms of uncertainty.
The paper of Jacob et al. [13] presents an artificial intelligence-powered smart and lightweight exoskeleton system (AI-IoT-SES), which receives data from various sensors, classifies them intelligently, and generates the desired commands via IoT for rendering rehabilitation and support, with the help of caretakers, for paralyzed patients in smart and connected communities. The navigation module uses AI- and IoT-enabled simultaneous localization and mapping. The risks to a paralyzed person are reduced by commissioning the IoT platform to exchange data from the intelligent sensors with the remote location of the caretaker monitoring the real-time movement and navigation of the exoskeleton. The simulated experimental results show that the proposed system is an ideal method for rendering rehabilitation and support for paralyzed patients in smart communities.
Al Shabibi and Kesavan [94] present a cost-effective smart wheelchair based on an Arduino Nano microcontroller and IoT technology, with several features to benefit disabled people, especially those who cannot afford expensive smart wheelchairs or the help required to complete daily tasks. The smart wheelchair, affordable to a wide range of disabled people, is equipped with a module providing Wi-Fi access, fall detection with voice message notification using the IFTTT platform, obstacle detection with a buzzer, LEDs that work as hazard lights, a voice recognition system, and joysticks to control the wheelchair.
Wang et al. [99] present the development of a compact, non-obtrusive, and ergonomic wearable device to measure signals associated with human physiological gestures and, thereafter, generate useful commands to interact with the environment. It uses machine learning and non-invasive biosensors on top of the ears to identify eye movements and facial expressions with over 95% accuracy. Users can control different applications, such as robots, powered wheelchairs, cell phones, smart home, or other IoT devices. The experimental results show satisfactory performance in different applications.
Sharma et al. [97] present DeTrAs, a deep-learning-based Internet of Health framework for the assistance of Alzheimer’s patients. It comprises three components: a recurrent neural network-based Alzheimer’s prediction scheme, which uses sensory movement data; an ensemble approach for the abnormality tracking of Alzheimer’s patients composed of two parts, (a) a convolutional neural network-based emotion detection scheme and (b) a timestamp window-based natural language processing scheme; and, finally, an IoT-based assistance mechanism for Alzheimer’s patients. The evaluation of DeTrAs shows an improvement of roughly 10–20% in accuracy compared with different existing machine-learning algorithms.

4.2. Research Questions Answers

This section presents the answers to the four established research questions.
QP1. What are the Machine Learning models used in AIoT applied to Assistive Technology?
Out of the twenty-seven primary articles studied, 81% presented solutions based on ANN, 15% of them applied other ML techniques, and 7% did not present the techniques used. As shown in Table 9, within the context of neural networks, the following ML techniques were addressed: ANN, CNN, the use of multiple-CNN, clever CNN, R-CNN (region-based CNN), faster R-CNN, PNN (probabilistic neural network), RNN (recurrent neural network), and multi-trained DL models. The models not based on neural networks were: Hoeffding tree, logistic regression, naïve Bayes, random forest, k-means, linear regression, independent component analysis, support vector machines (SVM), and HOG (histogram of oriented gradients).
QP2. What are the topics of the study that have been researched in the context of AIoT applied to Assistive Technology?
Figure 3 presents a word cloud, created using the keywords of each of these articles, to provide a general idea of the topics studied in the selected primary articles.
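A cloud such as the one in Figure 3 can be generated directly from the articles’ keywords; the sketch below uses the third-party wordcloud package (an assumption made for illustration, not necessarily the tool used here), with a handful of keywords drawn from the research topics.

```python
# Illustrative sketch of generating a keyword cloud like Figure 3 using the
# third-party "wordcloud" package (assumed here; not necessarily what the
# authors used). The keyword list is a small example drawn from the topics.
from wordcloud import WordCloud

keywords = [
    "assistive technology", "deep learning", "object detection",
    "visual impairment", "IoT", "wearable", "sign language", "navigation",
]
cloud = WordCloud(width=800, height=400, background_color="white")
cloud.generate(" ".join(keywords))
cloud.to_file("keyword_cloud.png")
```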
The research topics discussed in the primary articles were assisted locomotion, assisted navigation, facial recognition, human activity recognition, image captioning, object detection, object recognition, OCR (optical character recognition), scene to speech, self-balancing object, smart assistant, speech recognition, rehabilitation, text to speech, and text detection. Table 10 shows the articles wherein these topics occurred.
QP3. What are the IoT devices used in the context of AIoT applied to Assistive Technology?
The IoT devices used in the primary articles’ proposed solutions were: portable devices, wearables, various sensors, smartphones, canes, finger-worn wireless devices, exoskeletons, wheelchairs, and others. Table 11 presents the articles wherein these devices were used.
Out of the primary articles, 60% used Raspberry Pi-, Arduino-, or Nvidia Jetson-based devices, at 41%, 15%, and 7%, respectively, of the device total. Table 12 presents the articles that used Raspberry Pi-, Arduino-, and Nvidia Jetson-based devices in their research.
QP4. Is there a disparity in the number of studies found according to the problems selected in the research?
The selected articles point to a large disparity in the development of AIoT applied to Assistive Technology in relation to the impairments chosen for this study. A total of 52% of the total number of articles addressed issues related to visual impairment, 19% of them addressed issues related to hearing impairment, and the rest were distributed between 11% motor coordination impairments, 7% degenerative diseases, and 4% cognitive impairment. This shows that most AIoT applied to Assistive Technology developments, within the selected articles, address visual impairment.

4.3. Threats to the Validity of the Study

Bias in the identification of primary articles and in the extraction of data from the articles is a threat to the validity of this study, aggravated by the fact that each researcher was responsible for evaluating a disjoint set of articles, both in the selection and in the quality assessment, with no peer validation. Another threat is the small number of articles selected in this SLR, which raises the possibility that the sample is not representative enough to extract evidence that can effectively answer the research questions. The selection of databases, or digital libraries, can also be considered a threat, since they may not cover all the studies carried out in the context of the problem, which spans areas such as sociology and medicine.

5. Conclusions

This article presented a systematic literature review, which aimed to identify machine-learning algorithms and techniques used in AIoT applied to Assistive Technology solutions, as well as the context of its applications. Two hundred and sixty-seven articles were pre-selected using automatic search mechanisms on previously selected databases or digital libraries. After applying the selection criteria and quality assessment process, twenty-seven articles were considered from this initial set.
The final set of articles was submitted to the data extraction process, after which the extracted data were organized, summarized, and analyzed to answer the research questions raised in this SLR. After surveying the findings, it was possible to conclude that 50% of the analyzed articles addressed visual impairment, thereby identifying a gap and an opportunity to develop Assistive Technology for all other disabilities. It was also possible to observe that most topics were influenced by the many studies focused on visual impairment, resulting in a majority of topics related to, or based on, computer vision.
It was possible to identify that 81% of the studies used machine-learning algorithms and techniques based on neural networks and only 15% of the studies used different techniques. This shows not only the interest of researchers in neural networks but also the great applicability of these learning techniques in the solution of Assistive Technology problems. Conversely, it also shows that there is a gap waiting to be filled in relation to the other algorithms and techniques.
Some threats to the validity of the results were also raised, such as biases in the identification of primary articles and extraction of results, the selection of digital libraries, and the number of studies selected in the SLR. Regarding the first threat, future work should adopt peer review in the SLR stages, such as the application of selection criteria, quality assessment, and data extraction. To handle the second threat, the inclusion of other digital libraries and databases not covered by this SLR could significantly broaden the scope of primary articles. Finally, for the last threat, the adoption of a hybrid search strategy, possibly combining snowballing and a manual search with the automated search, should be considered in future work.

Author Contributions

Writing—original draft, methodology, software, validation, and formal analysis, M.P.d.F.; writing—review and editing, V.A.P.; writing—review and editing, R.H.F.; writing—original draft, methodology, software, validation, formal analysis, writing review and editing, A.M.R.F.; writing original draft, methodology, software, validation, formal analysis, writing review, V.R.Q.L.; and project administration, A.G.d.M.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by national funds through the Foundation for Science and Technology, I.P. (Portuguese Foundation for Science and Technology), project UIDB/05064/2020; VALORIZA—Research Center for Endogenous Resource Valorization, project UIDB/04111/2020; and ILIND—Lusophone Institute of Investigation and Development, project COFAC/ILIND/COPELABS/3/2020.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

FAPESC (Foundation for Support to Research and Innovation of the State of Santa Catarina); projects from public call FAPESC No. 29/2021, Academic Structuring Program Support for the Infrastructure of Academic Laboratories in the State of Santa Catarina; and from public call FAPESC No. 15/2021, Science, Technology, and Innovation Program to support ACAFE Research Groups.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. WHO. World Report on Disability; WHO: Geneva, Switzerland, 2011; Available online: https://apps.who.int/iris/handle/10665/44575 (accessed on 14 September 2022).
  2. King, P.; Guevara Martínez, E. Robotic Assistive Technologies: Principles and Practice. IEEE Pulse 2020, 11, 27–28. [Google Scholar] [CrossRef]
  3. Fall, C.L.; Gagnon-Turcotte, G.; Dubé, J.F.; Gagné, J.S.; Delisle, Y.; Campeau-Lecours, A.; Gosselin, C.; Gosselin, B. Wireless sEMG-Based Body–Machine Interface for Assistive Technology Devices. IEEE J. Biomed. Health Inform. 2017, 21, 967–977. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Tyagi, N.; Sharma, D.; Singh, J.; Sharma, B.; Narang, S. Assistive Navigation System for Visually Impaired and Blind People: A Review. In Proceedings of the 2021 International Conference on Artificial Intelligence and Machine Vision (AIMV), Gandhinagar, India, 24–26 September 2021; pp. 1–5. [Google Scholar] [CrossRef]
  5. Baucas, M.J.; Spachos, P.; Gregori, S. Internet-of-Things Devices and Assistive Technologies for Health Care: Applications, Challenges, and Opportunities. IEEE Signal Process. Mag. 2021, 38, 65–77. [Google Scholar] [CrossRef]
  6. Hussain Shah, S.J.; Albishri, A.A.; Lee, Y. Deep Learning Framework for Internet of Things for People With Disabilities. In Proceedings of the 2021 IEEE International Conference on Big Data (Big Data), Orlando, FL, USA, 15–18 December 2021; pp. 3609–3614. [Google Scholar] [CrossRef]
  7. Sung, T.W.; Tsai, P.W.; Gaber, T.; Lee, C.Y. Artificial Intelligence of Things (AIoT) technologies and applications. Wirel. Commun. Mob. Comput. 2021, 2021, 9781271. [Google Scholar] [CrossRef]
  8. Polyakov, E.V.; Mazhanov, M.S.; Rolich, A.Y.; Voskov, L.S.; Kachalova, M.V.; Polyakov, S.V. Investigation and development of the intelligent voice assistant for the Internet of Things using machine learning. In Proceedings of the 2018 Moscow Workshop on Electronic and Networking Technologies (MWENT), Moscow, Russia, 14–16 March 2018; pp. 1–5. [Google Scholar] [CrossRef]
  9. Qian, K.; Zhang, Z.; Yamamoto, Y.; Schuller, B.W. Artificial Intelligence Internet of Things for the Elderly: From Assisted Living to Health-Care Monitoring. IEEE Signal Process. Mag. 2021, 38, 78–88. [Google Scholar] [CrossRef]
  10. Carter, D.; Kolencik, J.; Cug, J. Smart Internet of Things-enabled Mobile-based Health Monitoring Systems and Medical Big Data in COVID-19 Telemedicine. Am. J. Med. Res. 2021, 8, 20–30. [Google Scholar]
  11. Zhang, J.; Tao, D. Empowering Things with Intelligence: A Survey of the Progress, Challenges, and Opportunities in Artificial Intelligence of Things. IEEE Internet Things J. 2021, 8, 7789–7817. [Google Scholar] [CrossRef]
  12. Soma, S.; Patil, N.; Salva, F.; Jadhav, V. An Approach to Develop a Smart and Intelligent Wheelchair. In Proceedings of the 2018 9th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Bengaluru, India, 10–12 July 2018; pp. 1–7. [Google Scholar] [CrossRef]
  13. Jacob, S.; Alagirisamy, M.; Xi, C.; Balasubramanian, V.; Srinivasan, R.; Parvathi, R.; Jhanjhi, N.Z.; Islam, S.M.N. AI and IoT-Enabled Smart Exoskeleton System for Rehabilitation of Paralyzed People in Connected Communities. IEEE Access 2021, 9, 80340–80350. [Google Scholar] [CrossRef]
  14. Bryant, B.R.; Seok, S. Introduction to the special series: Technology and disabilities in education. Assist. Technol. 2017, 29, 121–122. [Google Scholar] [CrossRef]
  15. Kitchenham, B.; Brereton, P. A systematic review of systematic review process research in software engineering. Inf. Softw. Technol. 2013, 55, 2049–2075. [Google Scholar] [CrossRef]
  16. Petersen, K.; Feldt, R.; Mujtaba, S.; Mattsson, M. Systematic mapping studies in software engineering. In Proceedings of the International Conference on Evaluation and Assessment in Software Engineering (EASE), Bari, Italy, 26–27 June 2008; pp. 68–77. [Google Scholar]
  17. Barbara, K.; Tore, D.; Magne, J. Evidence-based Software Engineering. In Proceedings of the 26th International Conference on Software Engineering, (ICSE ’04), Edinburgh, UK, 23–28 May 2004; IEEE Computer Society: Washington DC, USA, 2004; pp. 273–281, ISBN 0-7695-2163-0. [Google Scholar]
  18. Kitchenham, B.A.; Dyba, T.; Jorgensen, M. Evidence-based software engineering for practitioners. IEEE Softw. 2005, 22, 58–65. [Google Scholar]
  19. Peraković, D.; Periša, M.; Cvitić, I. Analysis of the Possible Application of Assistive Technology in the Concept of Industry 4.0. In Proceedings of the XXXVI Simpozijum o Novim Tehnologijama u Poštanskom i Telekomunikacionom Saobraćaju—PosTel 2018, Beograd, Serbia, 4–5 December 2018. [Google Scholar]
  20. Viel, F.; Silva, L.A.; Valderi Leithardt, R.Q.; Zeferino, C.A. Internet of Things: Concepts, Architectures and Technologies. In Proceedings of the 2018 13th IEEE International Conference on Industry Applications (INDUSCON), Sao Paulo, Brazil, 12–14 November 2018; pp. 909–916. [Google Scholar] [CrossRef]
  21. Sopelsa Neto, N.F.; Stefenon, S.F.; Meyer, L.H.; Bruns, R.; Nied, A.; Seman, L.O.; Gonzalez, G.V.; Leithardt, V.R.Q.; Yow, K.C. A Study of Multilayer Perceptron Networks Applied to Classification of Ceramic Insulators Using Ultrasound. Appl. Sci. 2021, 11, 1592. [Google Scholar] [CrossRef]
  22. Leithardt, V.; Santos, D.; Silva, L.; Viel, F.; Zeferino, C.; Silva, J. A Solution for Dynamic Management of User Profiles in IoT Environments. IEEE Lat. Am. Trans. 2020, 18, 1193–1199. [Google Scholar] [CrossRef]
  23. Stefenon, S.F.; Kasburg, C.; Nied, A.; Klaar, A.C.R.; Ferreira, F.C.S.; Branco, N.W. Hybrid deep learning for power generation forecasting in active solar trackers. IET Gener. Transm. Distrib. 2020, 14, 5667–5674. [Google Scholar] [CrossRef]
  24. Kasburg, C.; Stefenon, S.F. Deep Learning for Photovoltaic Generation Forecast in Active Solar Trackers. IEEE Lat. Am. Trans. 2019, 17, 2013–2019. [Google Scholar] [CrossRef]
  25. Dingli, A.; Fournier, K.S. Financial time series forecasting-a deep learning approach. Int. J. Mach. Learn. Comput. 2017, 7, 118–122. [Google Scholar] [CrossRef]
  26. Salazar, L.H.A.; Leithardt, V.R.Q.; Parreira, W.D.; da Rocha Fernandes, A.M.; Barbosa, J.L.V.; Correia, S.D. Application of Machine Learning Techniques to Predict a Patient’s No-Show in the Healthcare Sector. Future Internet 2022, 14, 3. [Google Scholar] [CrossRef]
  27. Salazar, L.H.; Fernandes, A.M.R.; Dazzi, R.; Raduenz, J.; Garcia, N.M.; Leithardt, V.R.Q. Prediction of Attendance at Medical Appointments Based on Machine Learning. In Proceedings of the 2020 15th Iberian Conference on Information Systems and Technologies (CISTI), Seville, Spain, 24–27 June 2020; pp. 1–6. [Google Scholar] [CrossRef]
  28. Dick, S. Artificial intelligence. Harv. Data Sci. Rev. 2019, 1, 1–8. [Google Scholar] [CrossRef]
  29. Stefenon, S.F.; Branco, N.W.; Nied, A.; Bertol, D.W.; Finardi, E.C.; Sartori, A.; Meyer, L.H.; Grebogi, R.B. Analysis of training techniques of ANN for classification of insulators in electrical power systems. IET Gener. Transm. Distrib. 2020, 14, 1591–1597. [Google Scholar] [CrossRef]
  30. Suzin, J.C.; Zeferino, C.A.; Leithardt, V.R.Q. Digital Statelessness. In Proceedings of the New Trends in Disruptive Technologies, Tech Ethics and Artificial Intelligence; de Paz Santana, J.F., de la Iglesia, D.H., López Rivero, A.J., Eds.; Springer International Publishing: Cham, Switzerland, 2022; pp. 178–189. [Google Scholar] [CrossRef]
  31. Muniz, R.N.; Stefenon, S.F.; Buratto, W.G.; Nied, A.; Meyer, L.H.; Finardi, E.C.; Kühl, R.M.; Sá, J.A.S.d.; Rocha, B.R.P.d. Tools for Measuring Energy Sustainability: A Comparative Review. Energies 2020, 13, 2366. [Google Scholar] [CrossRef]
  32. da Silva, L.D.L.; Pereira, T.F.; Leithardt, V.R.Q.; Seman, L.O.; Zeferino, C.A. Hybrid Impedance-Admittance Control for Upper Limb Exoskeleton Using Electromyography. Appl. Sci. 2020, 10, 7146. [Google Scholar] [CrossRef]
  33. Kaur, J.; Khan, M.A.; Iftikhar, M.; Imran, M.; Emad Ul Haq, Q. Machine Learning Techniques for 5G and beyond. IEEE Access 2021, 9, 23472–23488. [Google Scholar] [CrossRef]
  34. Lopes, H.; Pires, I.M.; Sánchez San Blas, H.; García-Ovejero, R.; Leithardt, V. PriADA: Management and Adaptation of Information Based on Data Privacy in Public Environments. Computers 2020, 9, 77. [Google Scholar] [CrossRef]
  35. Silva, L.A.; Leithardt, V.R.Q.; Rolim, C.O.; González, G.V.; Geyer, C.F.R.; Silva, J.S. PRISER: Managing Notification in Multiples Devices with Data Privacy Support. Sensors 2019, 19, 98. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Pinto, H.; Américo, J.; Leal, O.; Stefenon, S. Development of Measurement Device and Data Acquisition for Electric Vehicle. Rev. GEINTEC 2021, 11, 5809–5822. [Google Scholar]
  37. Stefenon, S.F.; Furtado Neto, C.S.; Coelho, T.S.; Nied, A.; Yamaguchi, C.K.; Yow, K.C. Particle swarm optimization for design of insulators of distribution power system based on finite element method. Electr. Eng. 2022, 104, 615–622. [Google Scholar] [CrossRef]
  38. Stefenon, S.F.; Seman, L.O.; Pavan, B.A.; Ovejero, R.G.; Leithardt, V.R.Q. Optimal design of electrical power distribution grid spacers using finite element method. IET Gener. Transm. Distrib. 2022, 16, 1865–1876. [Google Scholar] [CrossRef]
  39. Stefenon, S.F.; Kasburg, C.; Freire, R.Z.; Silva Ferreira, F.C.; Bertol, D.W.; Nied, A. Photovoltaic power forecasting using wavelet Neuro-Fuzzy for active solar trackers. J. Intell. Fuzzy Syst. 2021, 40, 1083–1096. [Google Scholar] [CrossRef]
  40. Turing, A.M. Computing Machinery and Intelligence. Mind 1950, LIX, 433–460. [Google Scholar] [CrossRef]
  41. Tissot, H.C.; Shah, A.D.; Brealey, D.; Harris, S.; Agbakoba, R.; Folarin, A.; Romao, L.; Roguski, L.; Dobson, R.; Asselbergs, F.W. Natural Language Processing for Mimicking Clinical Trial Recruitment in Critical Care: A Semi-Automated Simulation Based on the LeoPARDS Trial. IEEE J. Biomed. Health Inform. 2020, 24, 2950–2959. [Google Scholar] [CrossRef]
  42. Zhou, H.; Yang, Y.; Ning, S.; Liu, Z.; Lang, C.; Lin, Y.; Huang, D. Combining Context and Knowledge Representations for Chemical-Disease Relation Extraction. IEEE/ACM Trans. Comput. Biol. Bioinform. 2019, 16, 1879–1889. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  43. Kumar, S.A.; Brown, M.A. Spatio-Temporal Reasoning within a Neural Network framework for Intelligent Physical Systems. In Proceedings of the 2018 IEEE Symposium Series on Computational Intelligence (SSCI), Bangalore, India, 18–21 November 2018; pp. 274–280. [Google Scholar] [CrossRef]
  44. Nasr, M.; Islam, M.M.; Shehata, S.; Karray, F.; Quintana, Y. Smart Healthcare in the Age of AI: Recent Advances, Challenges, and Future Prospects. IEEE Access 2021, 9, 145248–145270. [Google Scholar] [CrossRef]
  45. Fuadi, D.H.; Novita, D.; Taufik, M. Socially Assistive Robot Interaction by Objects Detection and Face Recognition on Convolutional Neural Network for Parental Monitoring. In Proceedings of the 2021 International Conference on Artificial Intelligence and Mechatronics Systems (AIMS), Bandung, Indonesia, 28–30 April 2021; pp. 1–6. [Google Scholar] [CrossRef]
  46. Kearney, K.T.; Presenza, D.; Saccà, F.; Wright, P. Key challenges for developing a Socially Assistive Robotic (SAR) solution for the health sector. In Proceedings of the 2018 IEEE 23rd International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD), Barcelona, Spain, 17–19 September 2018; pp. 1–7. [Google Scholar] [CrossRef]
  47. Sarkar, P.P.; Tohin, M.A.; Khaled, M.A.; Islam, M.R. Design Process of an Affordable Smart Robotic Crutch for Paralyzed Patients. In Proceedings of the 2019 IEEE International Conference on Robotics, Automation, Artificial-Intelligence, and Internet-of-Things (RAAICON), Dhaka, Bangladesh, 29 November–1 December 2019; pp. 112–115. [Google Scholar] [CrossRef]
  48. Caliwag, A.; Angsanto, S.R.; Lim, W. Korean Sign Language Translation Using Machine Learning. In Proceedings of the 2018 Tenth International Conference on Ubiquitous and Future Networks (ICUFN), Prague, Czech Republic, 3–6 July 2018; pp. 826–828. [Google Scholar] [CrossRef]
  49. Feng, S.; Yuan, T. Sign language translation based on new continuous sign language dataset. In Proceedings of the 2022 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA), Dalian, China, 24–26 June 2022; pp. 491–494. [Google Scholar] [CrossRef]
  50. Hossain, S.; Sarma, D.; Mittra, T.; Alam, M.N.; Saha, I.; Johora, F.T. Bengali Hand Sign Gestures Recognition using Convolutional Neural Network. In Proceedings of the 2020 Second International Conference on Inventive Research in Computing Applications (ICIRCA), Coimbatore, India, 15–17 July 2020; pp. 636–641. [Google Scholar] [CrossRef]
  51. Suardi, C.; Handayani, A.N.; Asmara, R.A.; Wibawa, A.P.; Hayati, L.N.; Azis, H. Design of Sign Language Recognition Using E-CNN. In Proceedings of the 2021 3rd East Indonesia Conference on Computer and Information Technology (EIConCIT), Surabaya, Indonesia, 9–11 April 2021; pp. 166–170. [Google Scholar] [CrossRef]
  52. Mohameed, R.A.A.; Naji, R.M.S.; Ahmeed, A.M.A.; Saeed, D.A.A.; Mosleh, M.A.A. Automated translation for Yemeni’s Sign Language to Text UsingTransfer Learning-based Convolutional Neural Networks. In Proceedings of the 2021 1st International Conference on Emerging Smart Technologies and Applications (eSmarTA), Sana’a, Yemen, 10–12 August 2021; pp. 1–5. [Google Scholar] [CrossRef]
  53. Zikky, M.; Hakkun, R.Y.; Ainun Rizqi, A.F.; Hamid, A.; Basuki, A. Development of Educational Game for Recognizing Indonesian Sign Language (SIBI) and Breaking Down the Communication Barrier with Deaf People. In Proceedings of the 2017 21st International Computer Science and Engineering Conference (ICSEC), Bangkok, Thailand, 15–18 November 2017; pp. 1–6. [Google Scholar] [CrossRef]
  54. Stefenon, S.F.; Silva, M.C.; Bertol, D.W.; Meyer, L.H.; Nied, A. Fault diagnosis of insulators from ultrasound detection using neural networks. J. Intell. Fuzzy Syst. 2019, 37, 6655–6664. [Google Scholar] [CrossRef]
  55. Stefenon, S.F.; Seman, L.O.; Sopelsa Neto, N.F.; Meyer, L.H.; Nied, A.; Yow, K.C. Echo state network applied for classification of medium voltage insulators. Int. J. Electr. Power Energy Syst. 2022, 134, 107336. [Google Scholar] [CrossRef]
  56. Stefenon, S.F.; Ribeiro, M.H.D.M.; Nied, A.; Yow, K.C.; Mariani, V.C.; dos Santos Coelho, L.; Seman, L.O. Time series forecasting using ensemble learning methods for emergency prevention in hydroelectric power plants with dam. Electr. Power Syst. Res. 2022, 202, 107584. [Google Scholar] [CrossRef]
  57. Ribeiro, M.H.D.M.; Stefenon, S.F.; de Lima, J.D.; Nied, A.; Mariani, V.C.; Coelho, L.S. Electricity Price Forecasting Based on Self-Adaptive Decomposition and Heterogeneous Ensemble Learning. Energies 2020, 13, 5190. [Google Scholar] [CrossRef]
  58. Stefenon, S.F.; Ribeiro, M.H.D.M.; Nied, A.; Mariani, V.C.; Coelho, L.S.; Leithardt, V.R.Q.; Silva, L.A.; Seman, L.O. Hybrid Wavelet Stacking Ensemble Model for Insulators Contamination Forecasting. IEEE Access 2021, 9, 66387–66397. [Google Scholar] [CrossRef]
  59. Corso, M.P.; Perez, F.L.; Stefenon, S.F.; Yow, K.C.; Ovejero, R.G.; Leithardt, V.R.Q. Classification of Contaminated Insulators Using k-Nearest Neighbors Based on Computer Vision. Computers 2021, 10, 112. [Google Scholar] [CrossRef]
  60. Stefenon, S.F.; Ribeiro, M.H.D.M.; Nied, A.; Mariani, V.C.; Coelho, L.S.; da Rocha, D.F.M.; Grebogi, R.B.; Ruano, A.E.B. Wavelet group method of data handling for fault prediction in electrical power insulators. Int. J. Electr. Power Energy Syst. 2020, 123, 106269. [Google Scholar] [CrossRef]
  61. Fernandes, F.; Stefenon, S.F.; Seman, L.O.; Nied, A.; Ferreira, F.C.S.; Subtil, M.C.M.; Klaar, A.C.R.; Leithardt, V.R.Q. Long short-term memory stacking model to predict the number of cases and deaths caused by COVID-19. J. Intell. Fuzzy Syst. 2022, 6, 6221–6234. [Google Scholar] [CrossRef]
  62. Vieira, J.C.; Sartori, A.; Stefenon, S.F.; Perez, F.L.; de Jesus, G.S.; Leithardt, V.R.Q. Low-Cost CNN for Automatic Violence Recognition on Embedded System. IEEE Access 2022, 10, 25190–25202. [Google Scholar] [CrossRef]
  63. Stefenon, S.F.; Singh, G.; Yow, K.C.; Cimatti, A. Semi-ProtoPNet Deep Neural Network for the Classification of Defective Power Grid Distribution Structures. Sensors 2022, 22, 4859. [Google Scholar] [CrossRef] [PubMed]
  64. Leithardt, V. Classifying garments from fashion-MNIST dataset through CNNs. Adv. Sci. Technol. Eng. Syst. J. 2021, 6, 989–994. [Google Scholar]
  65. dos Santos, G.H.; Seman, L.O.; Bezerra, E.A.; Leithardt, V.R.Q.; Mendes, A.S.; Stefenon, S.F. Static Attitude Determination Using Convolutional Neural Networks. Sensors 2021, 21, 6419. [Google Scholar] [CrossRef] [PubMed]
  66. Stefenon, S.F.; Freire, R.Z.; Coelho, L.S.; Meyer, L.H.; Grebogi, R.B.; Buratto, W.G.; Nied, A. Electrical Insulator Fault Forecasting Based on a Wavelet Neuro-Fuzzy System. Energies 2020, 13, 484. [Google Scholar] [CrossRef]
  67. Chang, C.H. Deep and Shallow Architecture of Multilayer Neural Networks. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 2477–2486. [Google Scholar] [CrossRef]
  68. Kim, D.E.; Gofman, M. Comparison of shallow and deep neural networks for network intrusion detection. In Proceedings of the 2018 IEEE 8th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, 8–10 January 2018; pp. 204–208. [Google Scholar] [CrossRef]
  69. Salazar, L.H.; Fernandes, A.; Dazzi, R.; Garcia, N.; Leithardt, V.R. Using different models of machine learning to predict attendance at medical appointments. J. Inf. Syst. Eng. Manag. 2020, 5, em0122. [Google Scholar] [CrossRef]
  70. Medeiros, A.; Sartori, A.; Stefenon, S.F.; Meyer, L.H.; Nied, A. Comparison of artificial intelligence techniques to failure prediction in contaminated insulators based on leakage current. J. Intell. Fuzzy Syst. 2022, 42, 3285–3298. [Google Scholar] [CrossRef]
  71. Sopelsa Neto, N.F.; Stefenon, S.F.; Meyer, L.H.; Ovejero, R.G.; Leithardt, V.R.Q. Fault Prediction Based on Leakage Current in Contaminated Insulators Using Enhanced Time Series Forecasting Models. Sensors 2022, 22, 6121. [Google Scholar] [CrossRef]
  72. Stefenon, S.F.; Grebogi, R.B.; Freire, R.Z.; Nied, A.; Meyer, L.H. Optimized Ensemble Extreme Learning Machine for Classification of Electrical Insulators Conditions. IEEE Trans. Ind. Electron. 2020, 67, 5170–5178. [Google Scholar] [CrossRef]
  73. Stefenon, S.F.; Corso, M.P.; Nied, A.; Perez, F.L.; Yow, K.C.; Gonzalez, G.V.; Leithardt, V.R.Q. Classification of insulators using neural network based on computer vision. IET Gener. Transm. Distrib. 2021, 16, 1096–1107. [Google Scholar] [CrossRef]
  74. Stefenon, S.F.; Bruns, R.; Sartori, A.; Meyer, L.H.; Ovejero, R.G.; Leithardt, V.R.Q. Analysis of the Ultrasonic Signal in Polymeric Contaminated Insulators Through Ensemble Learning Methods. IEEE Access 2022, 10, 33980–33991. [Google Scholar] [CrossRef]
  75. Zahraee, S.; Khalaji Assadi, M.; Saidur, R. Application of Artificial Intelligence Methods for Hybrid Energy System Optimization. Renew. Sustain. Energy Rev. 2016, 66, 617–630. [Google Scholar] [CrossRef]
  76. Islam, J.; Vasant, P.M.; Negash, B.M.; Laruccia, M.B.; Myint, M.; Watada, J. A holistic review on artificial intelligence techniques for well placement optimization problem. Adv. Eng. Softw. 2020, 141, 102767. [Google Scholar] [CrossRef]
  77. Stefenon, S.F.; Seman, L.O.; Schutel Furtado Neto, C.; Nied, A.; Seganfredo, D.M.; Garcia da Luz, F.; Sabino, P.H.; Torreblanca González, J.; Quietinho Leithardt, V.R. Electric Field Evaluation Using the Finite Element Method and Proxy Models for the Design of Stator Slots in a Permanent Magnet Synchronous Motor. Electronics 2020, 9, 1975. [Google Scholar] [CrossRef]
  78. Kitchenham, B.; Brereton, O.P.; Budgen, D.; Turner, M.; Bailey, J.; Linkman, S. Systematic literature reviews in software engineering—A systematic literature review. Inf. Softw. Technol. 2009, 51, 7–15. [Google Scholar] [CrossRef]
  79. Banijamali, A.; Pakanen, O.P.; Kuvaja, P.; Oivo, M. Software architectures of the convergence of cloud computing and the Internet of Things: A systematic literature review. Inf. Softw. Technol. 2020, 122, 106271. [Google Scholar] [CrossRef]
  80. Keele, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering; EBSE Technical Report, ver. 2.3; EBSE: Goyang-si, Korea, 2007. [Google Scholar]
  81. Júnior, M.J.; Maia, O.B.; Oliveira, H.; Souto, E.; Barreto, R. Assistive Technology through Internet of Things and Edge Computing. In Proceedings of the 2019 IEEE 9th International Conference on Consumer Electronics (ICCE-Berlin), Berlin, Germany, 8–11 September 2019; pp. 330–332. [Google Scholar] [CrossRef]
  82. Chang, W.J.; Yu, Y.X.; Chen, J.H.; Zhang, Z.Y.; Ko, S.J.; Yang, T.H.; Hsu, C.H.; Chen, L.B.; Chen, M.C. A Deep Learning Based Wearable Medicines Recognition System for Visually Impaired People. In Proceedings of the 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Hsinchu, Taiwan, 18–20 March 2019; pp. 207–208. [Google Scholar] [CrossRef]
  83. Ghazal, M.; Basmaji, T.; Qasymeh, M.; Salim, R.; Khalil, A. Localized Assistive Scene Understanding using Deep Learning and the IoT. In Proceedings of the 2019 7th International Conference on Future Internet of Things and Cloud Workshops (FiCloudW), Istanbul, Turkey, 26–28 August 2019; pp. 53–58. [Google Scholar] [CrossRef]
  84. Chang, W.J.; Chen, L.B.; Hsu, C.H.; Chen, J.H.; Yang, T.C.; Lin, C.P. MedGlasses: A Wearable Smart-Glasses-Based Drug Pill Recognition System Using Deep Learning for Visually Impaired Chronic Patients. IEEE Access 2020, 8, 17013–17024. [Google Scholar] [CrossRef]
  85. Rao, S.; Singh, V.M. Computer Vision and IoT-Based Smart System for Visually Impaired People. In Proceedings of the 2021 11th International Conference on Cloud Computing, Data Science & Engineering (Confluence), Noida, India, 28–29 January 2021; pp. 552–556. [Google Scholar] [CrossRef]
  86. Chang, W.J.; Chen, L.B.; Sie, C.Y.; Yang, C.H. An Artificial Intelligence Edge Computing-Based Assistive System for Visually Impaired Pedestrian Safety at Zebra Crossings. IEEE Trans. Consum. Electron. 2021, 67, 3–11. [Google Scholar] [CrossRef]
  87. Bal, D.; Arfi, A.M.; Dey, S. Dynamic Hand Gesture Pattern Recognition Using Probabilistic Neural Network. In Proceedings of the 2021 IEEE International IOT, Electronics and Mechatronics Conference (IEMTRONICS), Toronto, ON, Canada, 21–24 April 2021; pp. 1–4. [Google Scholar] [CrossRef]
  88. Su, Y.S.; Chou, C.H.; Chu, Y.L.; Yang, Z.Y. A Finger-Worn Device for Exploring Chinese Printed Text with Using CNN Algorithm on a Micro IoT Processor. IEEE Access 2019, 7, 116529–116541. [Google Scholar] [CrossRef]
  89. Yadav, D.K.; Mookherji, S.; Gomes, J.; Patil, S. Intelligent Navigation System for the Visually Impaired—A Deep Learning Approach. In Proceedings of the 2020 Fourth International Conference on Computing Methodologies and Communication (ICCMC), Erode, India, 11–13 March 2020; pp. 652–659. [Google Scholar] [CrossRef]
  90. Jiang, B.; Yang, J.; Lv, Z.; Song, H. Wearable Vision Assistance System Based on Binocular Sensors for Visually Impaired Users. IEEE Internet Things J. 2019, 6, 1375–1383. [Google Scholar] [CrossRef]
  91. Li, T.; Yan, Y.; Du, W. Sign Language Recognition Based on Computer Vision. In Proceedings of the 2022 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA), Dalian, China, 24–26 June 2022; pp. 927–931. [Google Scholar] [CrossRef]
  92. Punsara, K.; Premachandra, H.; Chanaka, A.; Wijayawickrama, R.; Nimsiri, A.; Rajitha de, S. IoT Based Sign Language Recognition System. In Proceedings of the 2020 2nd International Conference on Advancements in Computing (ICAC), Malabe, Sri Lanka, 10–11 December 2020; Volume 1, pp. 162–167. [Google Scholar] [CrossRef]
  93. Boppana, L.; Ahamed, R.; Rane, H.; Kodali, R.K. Assistive Sign Language Converter for Deaf and Dumb. In Proceedings of the 2019 International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), Atlanta, GA, USA, 14–17 July 2019; pp. 302–307. [Google Scholar] [CrossRef]
  94. Al Shabibi, M.A.K.; Kesavan, S.M. IoT Based Smart Wheelchair for Disabled People. In Proceedings of the 2021 International Conference on System, Computation, Automation and Networking (ICSCAN), Puducherry, India, 30–31 July 2021; pp. 1–6. [Google Scholar] [CrossRef]
  95. Javed, A.; Sarwar, M.; ur Rehman, S.; Khan, H.U.; Al-Otaibi, Y.D.; Alnumay, W.S. PP-SPA: Privacy preserved smartphone-based personal assistant to improve routine life functioning of cognitive impaired individuals. Neural Process. Lett. 2021, 10, 1–18. [Google Scholar] [CrossRef]
  96. Kandoth, A.; Arya, N.R.; Mohan, P.R.; Priya, T.V.; Geetha, M. Dhrishti: A Visual Aiding System for Outdoor Environment. In Proceedings of the 2020 5th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, 10–12 June 2020; pp. 305–310. [Google Scholar] [CrossRef]
  97. Sharma, S.; Dudeja, R.K.; Aujla, G.S.; Bali, R.S.; Kumar, N. DeTrAs: Deep learning-based healthcare framework for IoT-based assistance of Alzheimer patients. Neural Comput. Appl. 2020, 1, 1–13. [Google Scholar] [CrossRef]
  98. Baby, C.J.; Mazumdar, A.; Sood, H.; Gupta, Y.; Panda, A.; Poonkuzhali, R. Parkinson’s Disease Assist Device Using Machine Learning and Internet of Things. In Proceedings of the 2018 International Conference on Communication and Signal Processing (ICCSP), Chennai, India, 3–5 April 2018; pp. 0922–0927. [Google Scholar] [CrossRef]
  99. Wang, K.J.; Tung, H.W.; Huang, Z.; Thakur, P.; Mao, Z.H.; You, M.X. EXGbuds: Universal wearable assistive device for disabled people to interact with the environment seamlessly. In Proceedings of the Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, Chicago, IL, USA, 5–8 March 2018; pp. 369–370. [Google Scholar] [CrossRef]
  100. Kumar, P.; Gandhi, U.; Varatharajan, R.; Manogaran, G.; Jidhesh, R.; Vadivel, T. Intelligent face recognition and navigation system using neural learning for smart security in Internet of Things. Clust. Comput. 2017, 22, 7733–7744. [Google Scholar] [CrossRef]
  101. Lee, B.G.; Chong, T.W.; Chung, W.Y. Sensor fusion of motion-based sign language interpretation with deep learning. Sensors 2020, 20, 6256. [Google Scholar] [CrossRef]
  102. Sreeraj, M.; Joy, J.; Kuriakose, A.; Bhameesh, M.B.; Babu, A.K.; Kunjumon, M. VIZIYON: Assistive handheld device for visually challenged. Procedia Comput. Sci. 2020, 171, 2486–2492. [Google Scholar] [CrossRef]
  103. Hengle, A.; Kulkarni, A.; Bavadekar, N.; Kulkarni, N.; Udyawar, R. Smart Cap: A Deep Learning and IoT Based Assistant for the Visually Impaired. In Proceedings of the 2020 Third International Conference on Smart Systems and Inventive Technology (ICSSIT), Tirunelveli, India, 20–22 August 2020; pp. 1109–1116. [Google Scholar] [CrossRef]
  104. Akbari, Y.; Hassen, H.; Subramanian, N.; Kunhoth, J.; Al-Maadeed, S.; Alhajyaseen, W. A vision-based zebra crossing detection method for people with visual impairments. In Proceedings of the 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT), Doha, Qatar, 2–5 February 2020; pp. 118–123. [Google Scholar] [CrossRef]
  105. Karkar, A.; Kunhoth, J.; Al-Maadeed, S. A Scene-to-Speech Mobile based Application: Multiple Trained Models Approach. In Proceedings of the 2020 IEEE International Conference on Informatics, IoT, and Enabling Technologies (ICIoT), Doha, Qatar, 2–5 February 2020; pp. 490–497. [Google Scholar] [CrossRef]
Figure 1. The systematic review steps (guidelines) [80].
Figure 2. The systematic review model [80].
Figure 3. Word cloud.
Table 1. Research questions.
ID | Question | Justification
QP1 | Which machine-learning models are used in AIoT applied to Assistive Technology? | Identification of the ML models used in AIoT applied to AT.
QP2 | What topics of study have been researched in the context of AIoT applied to Assistive Technology? | Provides context related to the research topics.
QP3 | What IoT devices are used in the context of AIoT applied to Assistive Technology? | Provides context related to the types of IoT devices used.
QP4 | Is there a disparity in the number of studies found according to the problems selected in the research? | Identifies gaps for research and the development of solutions.
By the authors.
Table 2. Databases and search strings.
Database | ID | Search String | URL
Ei Compendex | EIC | ("assistive technology" OR "impaired people") AND ("AIoT" OR "IoT" OR "internet of things") AND ("machine learning" OR "deep learning" OR "neural networks") | http://www.engineeringvillage.com (accessed on 4 August 2022)
IEEE Digital Library | IEEE | ("assistive technology" OR "impaired" OR "parkinson" OR "alzheimer") AND ("IoT" OR "AIoT" OR "Internet of Things" OR "artificial intelligence of things") AND ("machine learning" OR "deep learning" OR "neural network") | http://ieeexplore.ieee.org (accessed on 12 August 2022)
ISI Web of Science | WOS | ("assistive technology" OR "impaired" OR "parkinson" OR "alzheimer") AND ("iot" OR "aiot" OR "Internet of Things" OR "artificial intelligence of things") AND ("machine learning" OR "deep learning" OR "neural network") | http://www.isiknowledge.com (accessed on 6 August 2022)
ScienceDirect | SCD | ("assistive technology") AND ("IoT" OR "internet of things") AND ("machine learning" OR "deep learning" OR "neural networks") | http://www.sciencedirect.com (accessed on 10 August 2022)
Scopus | SCPS | ("assistive technology" OR "impaired" OR "parkinson" OR "alzheimer") AND ("iot" OR "aiot" OR "Internet of Things" OR "artificial intelligence of things") AND ("machine learning" OR "deep learning" OR "neural network") | http://www.scopus.com (accessed on 15 August 2022)
By the authors.
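For reproducibility, search strings of this form can be assembled from reusable term groups so that the same vocabulary is applied consistently across databases. The following Python sketch is illustrative only and is not the tooling used in this review; the group names AT_TERMS, IOT_TERMS, and ML_TERMS are assumptions introduced here.

# Compose a boolean search string from OR groups, as in Table 2.
AT_TERMS = ['"assistive technology"', '"impaired"', '"parkinson"', '"alzheimer"']
IOT_TERMS = ['"IoT"', '"AIoT"', '"Internet of Things"', '"artificial intelligence of things"']
ML_TERMS = ['"machine learning"', '"deep learning"', '"neural network"']

def or_group(terms):
    """Join a list of terms into a parenthesised OR group."""
    return "(" + " OR ".join(terms) + ")"

def build_query(*groups):
    """AND the OR groups together, matching the structure of the IEEE, WOS, and Scopus strings."""
    return " AND ".join(or_group(g) for g in groups)

print(build_query(AT_TERMS, IOT_TERMS, ML_TERMS))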
Table 3. Number of selected articles from each database.
Database | Number of Selected Articles
Ei Compendex | 37
IEEE Digital Library | 63
ISI Web of Science | 32
ScienceDirect | 67
Scopus | 68
Total number of articles | 267
Number of duplicated articles | 79
Number of selected articles | 188
By the authors.
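The counts above imply a de-duplication step across the five database exports. The sketch below shows one plausible way to detect such duplicates, assuming records are matched on DOI (or on title when no DOI is available); it is illustrative and does not reproduce the authors' actual workflow, and the sample DOI is a placeholder.

# Remove duplicate records returned by multiple databases.
from dataclasses import dataclass

@dataclass
class Record:
    title: str
    doi: str
    database: str

def deduplicate(records):
    """Keep the first occurrence of each study, keyed on DOI (or title when the DOI is missing)."""
    seen, unique = set(), []
    for r in records:
        key = r.doi.lower().strip() or r.title.lower().strip()
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

records = [
    Record("Example wearable recognition system", "10.0000/example.1", "IEEE"),
    Record("Example wearable recognition system", "10.0000/example.1", "Scopus"),
]
print(len(deduplicate(records)))  # -> 1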
Table 4. Inclusion criteria.
ID | Criteria | Applied Directly to the Databases
IC 1 | Studies published between 2017 and 2021 | EIC, IEEE, WOS, SCD, SCPS
IC 2 | Peer-reviewed primary articles | EIC, WOS, SCD, SCPS
IC 3 | Studies within the context of AIoT applied to AT, within the scope of the established deficiencies | -
IC 4 | Articles published in English | EIC, IEEE, WOS, SCPS
By the authors.
Table 5. Exclusion criteria.
ID | Criteria | Applied Directly to the Databases
EC 1 | Secondary or tertiary studies | -
EC 2 | Studies outside the context of AIoT applied to AT or outside the scope of the established deficiencies | -
EC 3 | Short articles, books, and gray literature (manuals, reports, theses, and dissertations) | EIC, WOS, SCD, SCPS
EC 4 | No access to the study | -
EC 5 | Duplicated studies | -
EC 6 | Redundant studies by the same author | -
EC 7 | Studies published before 2017 | EIC, IEEE, WOS, SCD, SCPS
By the authors.
Table 6. Questionnaire for quality evaluation.
ID | Question
AQ 1 | Are the study objectives clearly defined?
AQ 2 | Is the problem to be solved clearly described?
AQ 3 | Do the authors describe in detail the ML models used in the solution?
AQ 4 | Did the study perform a well-described experiment to evaluate the proposal?
AQ 5 | Do the findings of the study indicate relevant validity?
By the authors.
Table 7. Quality evaluation of the selected articles.
ID | Subject | Author | Score
A01 | Assistive Technology | Júnior et al. [81] | 4.0
A02 | Medicines Recognition | Chang et al. [82] | 3.5
A03 | Localized Assistive Scene | Ghazal et al. [83] | 4.5
A04 | Drug Pill Recognition | Chang et al. [84] | 5.0
A05 | Visually Impaired People | Rao and Singh [85] | 2.5
A06 | Visually Impaired Pedestrian | Chang et al. [86] | 4.5
A07 | Pattern Recognition | Bal et al. [87] | 3.0
A08 | Exploring Printed Text | Su et al. [88] | 5.0
A09 | Intelligent Navigation | Yadav et al. [89] | 5.0
A10 | Rehabilitation of People | Jacob et al. [13] | 5.0
A11 | Visually Impaired Users | Jiang et al. [90] | 3.0
A12 | Sign Language Recognition | Li et al. [91] | 2.5
A13 | Sign Language Recognition | Punsara et al. [92] | 2.5
A14 | Assistive Sign Language | Boppana et al. [93] | 5.0
A15 | Smart Wheelchair | Al Shabibi and Kesavan [94] | 4.0
A16 | Personal Assistant | Javed and Sarwar [95] | 5.0
A17 | Visual Aiding System | Kandoth et al. [96] | 3.5
A18 | Assistance of Patients | Sharma et al. [97] | 5.0
A19 | Parkinson's Disease Assist | Baby et al. [98] | 3.0
A20 | Assistive Device | Wang et al. [99] | 3.5
A21 | Navigation System | Kumar et al. [100] | 3.0
A22 | Sign Language Interpretation | Lee et al. [101] | 5.0
A23 | Visual Assistive | Sreeraj et al. [102] | 4.0
A24 | Visual Assistant | Hengle et al. [103] | 5.0
A25 | Zebra Crossing Detection | Akbari et al. [104] | 5.0
A26 | Scene-to-Speech Mobile | Karkar et al. [105] | 4.5
By the authors.
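The scores in Table 7 fall in half-point steps between 2.5 and 5.0 over the five questions of Table 6, which is consistent with each question being answered "yes", "partially", or "no" and weighted 1.0, 0.5, or 0.0. That weighting is an assumption made here for illustration, not a scheme stated in this section; the sketch below simply shows the aggregation under that assumption.

# Aggregate quality scores under an assumed yes/partially/no weighting.
WEIGHTS = {"yes": 1.0, "partially": 0.5, "no": 0.0}

def quality_score(answers):
    """Sum the per-question weights; five questions give a maximum score of 5.0."""
    return sum(WEIGHTS[a] for a in answers)

# An article judged "yes" on four questions and "partially" on one scores 4.5.
print(quality_score(["yes", "yes", "partially", "yes", "yes"]))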
Table 8. Data extraction form.
ID | Field | Values | Objectives
PD 1 | ID | Incremental numeric value | Study identification
PD 2 | Title | Textual value | Study identification
PD 3 | DOI | Textual value | Study location
PD 4 | Machine-learning model | Textual value | Answers QP1
PD 5 | Topics | Textual value | Answers QP2
PD 6 | Keywords | Textual value | Answers QP2
PD 7 | IoT device | Textual value | Answers QP3
PD 8 | Addressed issue | Multiple-selection options: hearing impairment, cognitive impairment, motor impairment, visual impairment, and degenerative disease | Answers QP4
By the authors.
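The extraction form maps naturally onto a typed record. The sketch below is an illustrative assumption (not the authors' tooling) of how fields PD 1 to PD 8 could be captured and validated for each selected article.

# Typed representation of the Table 8 data extraction form.
from dataclasses import dataclass, field
from typing import List

ADDRESSED_ISSUES = {
    "hearing impairment", "cognitive impairment", "motor impairment",
    "visual impairment", "degenerative disease",
}

@dataclass
class ExtractionForm:
    id: int                       # PD 1 - study identification
    title: str                    # PD 2
    doi: str                      # PD 3
    ml_model: str                 # PD 4 - answers QP1
    topics: List[str]             # PD 5 - answers QP2
    keywords: List[str]           # PD 6 - answers QP2
    iot_device: str               # PD 7 - answers QP3
    addressed_issues: List[str] = field(default_factory=list)  # PD 8 - answers QP4

    def __post_init__(self):
        # Reject values outside the multiple-selection options of PD 8.
        unknown = set(self.addressed_issues) - ADDRESSED_ISSUES
        if unknown:
            raise ValueError(f"Unknown addressed issue(s): {unknown}")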
Table 9. Machine-learning algorithms and techniques applied, and the respective impairment addressed in each study.
Applied ML Models | Articles | Impairments
ANN | A1, A10, A13, A21 | Visual, motor coordination, hearing
CNN | A6, A8, A9, A11, A12, A14, A24 | Visual, hearing, degenerative
RNN | A18, A22 | Degenerative, hearing
Multiple CNN | A25 | Visual
Clever CNN | A5, A17 | Visual
R-CNN | A3, A4 | Visual, elderly care
Faster R-CNN | A2, A23 | Elderly care, visual
PNN | A7 | Hearing
Multi-trained DL models | A26 | Visual
Linear regression | A19 | Degenerative
SVM | A20, A24 | Motor coordination, visual
Independent component analysis | A20 | Motor coordination
Naïve Bayes | A16, A18 | Cognitive, degenerative
Hoeffding tree | A16 | Cognitive
Logistic regression | A16 | Cognitive
Random forest | A16 | Cognitive
K-means | A16 | Cognitive
HOG | A25 | Visual
By the authors.
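Convolutional networks dominate the table above, typically applied to camera input for tasks such as sign recognition and obstacle detection. For readers unfamiliar with the family, the sketch below defines a deliberately small, generic CNN image classifier in PyTorch; it is illustrative only and does not reproduce the architecture of any reviewed article.

# Minimal generic CNN image classifier (illustrative, PyTorch).
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 56 * 56, num_classes)  # for 224x224 RGB inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# One forward pass on a dummy 224x224 RGB image batch.
logits = SmallCNN(num_classes=5)(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 5])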
Table 10. Topics and their occurrences in the selected articles.
Topic | Primary Articles
Scene to speech | A1, A3
Assisted navigation | A3, A5, A6, A9, A17, A21, A25
Sign recognition | A7, A12, A13, A14, A22
Object recognition | A2, A4, A9
Object detection | A11, A15, A23
Facial recognition | A21, A24
OCR | A7, A24
Assisted locomotion | A15
Speech recognition | A16
Text to speech | A24
Image captioning | A24
Text detection | A24
Smart assistant | A24
Human activity recognition | A16, A18
Rehabilitation | A10
Self-balancing object | A19
By the authors.
Table 11. IoT devices and their occurrences in the selected articles.
IoT Device | Primary Articles
Portable device | A1, A2, A4, A5, A6, A7, A9, A13, A14, A17, A19, A23, A24
Wearable | A2, A4, A5, A6, A7, A11, A13, A20, A22, A24
Various sensors | A3, A9, A13, A15, A16, A18, A19, A21, A22
Smartphone | A3, A4, A5, A13, A16, A21, A26
Cane | A6, A17
Finger-worn wireless device | A8
Exoskeleton | A10
Wheelchair | A15
Other | A18
Not defined | A12, A25
By the authors.
Table 12. Raspberry Pi-, Arduino-, and Nvidia Jetson-based IoT devices.
Board | Primary Articles
Raspberry Pi | A1, A4, A5, A7, A9, A13, A14, A17, A23, A24
Arduino | A8, A15, A21, A23
Nvidia Jetson | A2, A4
By the authors.
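Boards such as these typically run the trained model locally, close to the sensors. The sketch below shows one common pattern, running a TensorFlow Lite classifier with the tflite_runtime package on a Raspberry Pi class device; the model file name model.tflite is a placeholder, and the reviewed articles do not prescribe this particular stack.

# Run a TensorFlow Lite model on an edge board (illustrative).
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")   # hypothetical model file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Feed one dummy frame shaped like the model's input (e.g., a camera image).
frame = np.zeros(inp["shape"], dtype=inp["dtype"])
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))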
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
