Machine Learning Techniques for Assistive Robotics

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: closed (15 March 2020) | Viewed by 59262

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Guest Editor

Guest Editor
University Institute for Computer Research, University of Alicante, 03690 San Vicente del Raspeig (Alicante), Spain
Interests: 3D sensors; deep learning; depth estimation; calibration

Guest Editor
RoViT, University of Alicante, 03690 San Vicente del Raspeig (Alicante), Spain
Interests: object detection and action recognition

Special Issue Information

Dear Colleagues,

Assistive robots are a category of robots that share their workspace with humans and interact with them. Their main objective is to help humans, especially people with disabilities. To achieve this goal, these robots must possess a series of capabilities: perceiving their environment through their sensors and acting accordingly, interacting with people in a multimodal manner, and navigating and making decisions autonomously. This complexity demands computationally expensive algorithms that must run in real time. With the advent of high-end embedded processors, several such algorithms can now be processed concurrently and in real time.

All these capabilities involve, to a greater or lesser extent, the use of machine learning techniques. New deep learning techniques have enabled a major qualitative leap in different areas of perception.

Novel theoretical approaches and practical applications covering all aspects of assistive robotics are welcomed. Reviews, datasets, benchmarks, and surveys of the state of the art are also welcomed. Topics of interest to this Special Issue include, but are not limited to, the following:

  • Emotion recognition models and systems
  • Object recognition & pose estimation for assistive robotics
  • Activity recognition
  • Navigation, localization, and mapping
  • Ambient assisted living
  • Robot vision
  • Applications for people with disabilities
  • Scene understanding & description
  • Human–robot interaction
  • Embedded systems for assistive robotics

Prof. Dr. Miguel Cazorla
Dr. Sergio Orts-Escolano
Dr. Ester Martinez-Martin
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (12 papers)


Editorial

Jump to: Research, Review

3 pages, 165 KiB  
Editorial
Machine Learning Techniques for Assistive Robotics
by Ester Martinez-Martin, Miguel Cazorla and Sergio Orts-Escolano
Electronics 2020, 9(5), 821; https://doi.org/10.3390/electronics9050821 - 16 May 2020
Cited by 2 | Viewed by 2201
Abstract
Assistive robots are a category of robots that share their area of work and interact with humans [...]
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)

Research

Jump to: Editorial, Review

22 pages, 960 KiB  
Article
Pattern Recognition Techniques for the Identification of Activities of Daily Living Using a Mobile Device Accelerometer
by Ivan Miguel Pires, Gonçalo Marques, Nuno M. Garcia, Francisco Flórez-Revuelta, Maria Canavarro Teixeira, Eftim Zdravevski, Susanna Spinsante and Miguel Coimbra
Electronics 2020, 9(3), 509; https://doi.org/10.3390/electronics9030509 - 19 Mar 2020
Cited by 28 | Viewed by 4623
Abstract
The application of pattern recognition techniques to data collected from the accelerometers available in off-the-shelf devices, such as smartphones, allows for the automatic recognition of activities of daily living (ADLs). These data can later be used to create systems that monitor the behaviors of their users. The main contribution of this paper is the use of artificial neural networks (ANN) for the recognition of ADLs with the data acquired from the sensors available in mobile devices. Firstly, before ANN training, the mobile device is used for data collection. After training, mobile devices are used to apply the previously trained ANN for ADL identification on a less restrictive computational platform. The motivation is to verify whether the overfitting problem can be solved using only the accelerometer data, which also requires fewer computational resources and reduces the energy expenditure of the mobile device compared with the use of multiple sensors. This paper presents a method based on ANN for the recognition of a defined set of ADLs. It provides a comparative study of different ANN implementations to choose the most appropriate method for ADL identification. The results show an accuracy of 85.89% using deep neural networks (DNN).
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)
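The pipeline the abstract describes (feature vectors per accelerometer window, a feed-forward network trained off-device) can be sketched as follows. This is an illustrative stand-in, not the authors' implementation: the synthetic features, class labels, and network size are all assumptions.

```python
# Hypothetical sketch: classifying ADLs from per-window accelerometer
# feature vectors with a small feed-forward network. The data are
# synthetic placeholders, not the paper's dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Five synthetic features per window (e.g. mean, std dev, peak value, ...)
# for three pretend activities: 0 = standing, 1 = walking, 2 = running.
X = rng.normal(size=(300, 5)) + np.repeat(np.arange(3), 100)[:, None]
y = np.repeat(np.arange(3), 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)          # training off-device, as in the abstract
acc = clf.score(X_te, y_te)  # the trained model is then applied on-device
```

In the paper's setting the trained network would be exported to the mobile device for inference only; here everything runs in one process for brevity.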

18 pages, 2793 KiB  
Article
A Low-Cost Cognitive Assistant
by Angelo Costa, Jaime A. Rincon, Vicente Julian, Paulo Novais and Carlos Carrascosa
Electronics 2020, 9(2), 310; https://doi.org/10.3390/electronics9020310 - 11 Feb 2020
Cited by 2 | Viewed by 2291
Abstract
In this paper, we present in depth the hardware components of a low-cost cognitive assistant. The aim is to detect the performance and the emotional state that elderly people present when performing exercises. Physical and cognitive exercises are a proven way of keeping elderly people active, healthy, and happy. Our goal is to bring to people that are at their homes (or in unsupervised places) an assistant that motivates them to perform exercises and, concurrently, monitor them, observing their physical and emotional responses. We focus on the hardware parts and the deep learning models so that they can be reproduced by others. The platform is being tested at an elderly people care facility, and validation is in process.
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)

16 pages, 255 KiB  
Article
Activities of Daily Living and Environment Recognition Using Mobile Devices: A Comparative Study
by José M. Ferreira, Ivan Miguel Pires, Gonçalo Marques, Nuno M. García, Eftim Zdravevski, Petre Lameski, Francisco Flórez-Revuelta, Susanna Spinsante and Lina Xu
Electronics 2020, 9(1), 180; https://doi.org/10.3390/electronics9010180 - 18 Jan 2020
Cited by 12 | Viewed by 3962
Abstract
The recognition of Activities of Daily Living (ADL) with high accuracy using the sensors available in off-the-shelf mobile devices is significant for the development of their framework. Previously, a framework comprising data acquisition, data processing, data cleaning, feature extraction, data fusion, and data classification was proposed. However, the results may be improved with the implementation of other methods. Similar to the initial proposal of the framework, this paper proposes the recognition of eight ADL, i.e., walking, running, standing, going upstairs, going downstairs, driving, sleeping, and watching television, and nine environments, i.e., bar, hall, kitchen, library, street, bedroom, living room, gym, and classroom, but using the Instance-Based k-nearest neighbour (IBk) and AdaBoost methods as well. The primary purpose of this paper is to find the best machine learning method for ADL and environment recognition. The results obtained show that IBk and AdaBoost reported better results with complex data than the deep neural network methods.
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)
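Weka's IBk is an instance-based k-nearest-neighbour classifier, so the comparison the abstract describes can be mimicked with scikit-learn equivalents. A minimal sketch on placeholder data (the features, classes, and hyperparameters are assumptions, not the study's setup):

```python
# Hedged sketch of an IBk-vs-AdaBoost comparison. KNeighborsClassifier
# stands in for Weka's IBk; the data are random placeholders.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8)) + np.repeat([0, 2], 100)[:, None]
y = np.repeat([0, 1], 100)  # two pretend activity classes

scores = {
    "IBk (k-NN)": cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y, cv=5).mean(),
    "AdaBoost": cross_val_score(AdaBoostClassifier(random_state=1), X, y, cv=5).mean(),
}
```

Cross-validated accuracy, as used here, is one reasonable way to rank the two methods; the paper's actual evaluation protocol may differ.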

20 pages, 784 KiB  
Article
Recognition of Activities of Daily Living and Environments Using Acoustic Sensors Embedded on Mobile Devices
by Ivan Miguel Pires, Gonçalo Marques, Nuno M. Garcia, Nuno Pombo, Francisco Flórez-Revuelta, Susanna Spinsante, Maria Canavarro Teixeira and Eftim Zdravevski
Electronics 2019, 8(12), 1499; https://doi.org/10.3390/electronics8121499 - 07 Dec 2019
Cited by 23 | Viewed by 3748
Abstract
The identification of Activities of Daily Living (ADL) is intrinsically linked to the recognition of the user's environment. This detection can be executed through standard sensors present in everyday mobile devices. On the one hand, the main proposal is to recognize users' environments and standing activities. On the other hand, these features are included in a framework for ADL and environment identification. Therefore, this paper is divided into two parts: firstly, acoustic sensors are used to collect data for the recognition of the environment and, secondly, the information about the recognized environment is fused with the information gathered by motion and magnetic sensors. The environment and ADL recognition are performed by pattern recognition techniques that aim at the development of a system including data collection, processing, fusion, and classification procedures. These classification techniques include distinct types of Artificial Neural Networks (ANN); various ANN implementations were analyzed and the most suitable was chosen for inclusion in the subsequent stages of the developed system. The results present 85.89% accuracy using Deep Neural Networks (DNN) with normalized data for ADL recognition and 86.50% accuracy using Feedforward Neural Networks (FNN) with non-normalized data for environment recognition. Furthermore, the tests conducted present 100% accuracy for standing-activity recognition using DNN with normalized data, which is the most suited for the intended purpose.
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)

16 pages, 5482 KiB  
Article
Robot Motion Control via an EEG-Based Brain–Computer Interface by Using Neural Networks and Alpha Brainwaves
by Nikolaos Korovesis, Dionisis Kandris, Grigorios Koulouras and Alex Alexandridis
Electronics 2019, 8(12), 1387; https://doi.org/10.3390/electronics8121387 - 21 Nov 2019
Cited by 38 | Viewed by 11307
Abstract
Modern achievements in both cognitive neuroscience and human–machine interaction technologies have enhanced the ability to control devices with the human brain through Brain–Computer Interface systems. In particular, the development of brain-controlled mobile robots is very important because systems of this kind can assist people suffering from devastating neuromuscular disorders to move and thus improve their quality of life. The research work presented in this paper concerns the development of a system that controls the motion of a mobile robot in accordance with the eye blinking of a human operator via a synchronous and endogenous Electroencephalography-based Brain–Computer Interface that uses alpha brain waveforms. The received signals are filtered in order to extract suitable features. These features are fed as inputs to a neural network, which is trained to guide the robotic vehicle. Experimental tests executed on 12 healthy subjects of various genders and ages proved that the developed system is able to move the robotic vehicle, under control, in the forward, left, backward, and right directions according to the alpha brainwaves of its operator, with an overall accuracy of 92.1%.
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)
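The filtering stage mentioned above can be illustrated with a standard band-pass filter that isolates the alpha band (roughly 8–13 Hz) from a raw EEG channel. The sampling rate, filter order, and band edges below are illustrative assumptions, not the paper's settings:

```python
# Minimal sketch: isolating the alpha band from a (synthetic) EEG channel
# with a zero-phase Butterworth band-pass filter.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 256.0  # assumed sampling rate in Hz

def alpha_bandpass(signal, fs, low=8.0, high=13.0, order=4):
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)  # filtfilt avoids phase distortion

t = np.arange(0, 2.0, 1.0 / fs)
# Synthetic mix: a 10 Hz "alpha" component plus 2 Hz drift and 40 Hz noise.
raw = (np.sin(2 * np.pi * 10 * t)
       + np.sin(2 * np.pi * 2 * t)
       + 0.5 * np.sin(2 * np.pi * 40 * t))
alpha = alpha_bandpass(raw, fs)  # ~only the 10 Hz component survives
```

Features (e.g. band power) would then be computed on the filtered signal before being fed to the network.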

20 pages, 36563 KiB  
Article
Fallen People Detection Capabilities Using Assistive Robot
by Saturnino Maldonado-Bascón, Cristian Iglesias-Iglesias, Pilar Martín-Martín and Sergio Lafuente-Arroyo
Electronics 2019, 8(9), 915; https://doi.org/10.3390/electronics8090915 - 21 Aug 2019
Cited by 31 | Viewed by 6487
Abstract
One of the main problems for the elderly population and for people with functional disabilities is falling when they are not supervised. Therefore, there is a need for monitoring systems with fall detection functionality. Mobile robots are a good solution for keeping the person in sight compared to static-view sensors: a mobile patrol robot can cover a group of people, and a vision-based system is less intrusive than wearable ones. In this paper, we propose a novel vision-based solution for fall detection based on a mobile patrol robot that can correct its position in case of doubt. The overall approach can be formulated as an end-to-end solution based on two stages: person detection and fall classification. Deep-learning-based computer vision is used for person detection, and fall classification is done using a learning-based Support Vector Machine (SVM) classifier. This approach fulfills the following design requirements: simple to apply, adaptable, high performance, independent of person size, clothing, or environment, low cost, and real-time computation. Importantly, it is able to distinguish between a simple resting position and a real fall scene. One of the main contributions of this paper is the input feature vector for the SVM-based classifier. We evaluated the robustness of the approach using a realistic public dataset proposed in this paper, called the Fallen Person Dataset (FPDS), with 2062 images and 1072 falls. The results obtained from different experiments indicate that the system has a high success rate in fall classification (precision of 100% and recall of 99.74%). Training the algorithm with our Fallen Person Dataset (FPDS) and testing it with other datasets showed that the algorithm is independent of the camera setup.
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)
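The second stage (an SVM deciding "fallen" vs "standing" from person detections) can be sketched as below. The features used here, bounding-box aspect ratio and relative height, are illustrative stand-ins: the paper's actual input feature vector is one of its contributions and is not reproduced here.

```python
# Hedged sketch of SVM-based fall classification on two made-up
# bounding-box features: [aspect_ratio, relative_height].
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n = 100
# Standing people: tall, narrow boxes; fallen people: wide, short boxes.
standing = np.column_stack([rng.normal(0.4, 0.1, n), rng.normal(0.8, 0.1, n)])
fallen = np.column_stack([rng.normal(2.0, 0.3, n), rng.normal(0.3, 0.1, n)])

X = np.vstack([standing, fallen])
y = np.array([0] * n + [1] * n)  # 0 = standing, 1 = fallen

clf = SVC(kernel="rbf").fit(X, y)
pred = clf.predict([[2.2, 0.25], [0.35, 0.85]])  # wide/short, then tall/narrow
```

In practice the detections would come from the deep person detector in the first stage; distinguishing a resting position from a real fall requires richer features than these two.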

20 pages, 30715 KiB  
Article
Online Learned Siamese Network with Auto-Encoding Constraints for Robust Multi-Object Tracking
by Peixin Liu, Xiaofeng Li, Han Liu and Zhizhong Fu
Electronics 2019, 8(6), 595; https://doi.org/10.3390/electronics8060595 - 28 May 2019
Cited by 13 | Viewed by 2816
Abstract
Multi-object tracking aims to estimate the complete trajectories of objects in a scene. Distinguishing among objects efficiently and correctly in complex environments is a challenging problem. In this paper, a Siamese network with an auto-encoding constraint is proposed to extract discriminative features from detection responses in a tracking-by-detection framework. Different from recent deep learning methods, the simple two-layer stacked auto-encoder structure enables the Siamese network to operate efficiently with only small-scale online sample data. The auto-encoding constraint reduces the possibility of overfitting during small-scale sample training. The proposed Siamese network is then extended to extract a previous-appearance-next vector from each tracklet for better association. The new feature integrates the appearance and the previous- and next-stage motions of an element in a tracklet. With the new features, an online incrementally learned tracking framework is established. It contains reliable tracklet generation, data association to generate complete object trajectories, and tracklet growth to deal with missing detections and to enhance the new feature for tracklets. Benefiting from the discriminative features, the final trajectories of objects can be achieved by an efficient iterative greedy algorithm. Feature experiments show that the proposed Siamese network has advantages in terms of both discrimination and correctness. The system experiments show the improved tracking performance of the proposed method.
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)
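The core Siamese idea, two inputs passing through the same encoder and being compared by distance in the embedded space, can be shown in a few lines. The random, untrained linear encoder below is only a stand-in for the paper's learned two-layer auto-encoder:

```python
# Toy sketch of a Siamese comparison: both branches share ONE encoder,
# so similar detections land close together in the embedding.
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=(16, 64))  # shared encoder weights (untrained stand-in)

def encode(x):
    return np.tanh(W @ x)  # identical weights for both branches

a = rng.normal(size=64)              # appearance vector of one detection
b = a + 0.01 * rng.normal(size=64)   # near-duplicate of the same object
c = rng.normal(size=64)              # a different object

same_dist = np.linalg.norm(encode(a) - encode(b))
diff_dist = np.linalg.norm(encode(a) - encode(c))
```

In the paper the encoder is trained online with an auto-encoding (reconstruction) loss added to the Siamese objective, which is what keeps the small-sample training from overfitting.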

17 pages, 944 KiB  
Article
Automatic Scene Recognition through Acoustic Classification for Behavioral Robotics
by Sumair Aziz, Muhammad Awais, Tallha Akram, Umar Khan, Musaed Alhussein and Khursheed Aurangzeb
Electronics 2019, 8(5), 483; https://doi.org/10.3390/electronics8050483 - 30 Apr 2019
Cited by 47 | Viewed by 4383
Abstract
The classification of complex acoustic scenes in real-time scenarios is an active domain that has lately engaged several researchers from the machine learning community. A variety of techniques have been proposed for acoustic pattern or scene classification, including natural soundscapes such as rain/thunder and urban soundscapes such as restaurants/streets. In this work, we present a framework for automatic acoustic classification for behavioral robotics. Motivated by several texture classification algorithms used in computer vision, a modified feature descriptor for sound is proposed that combines 1-D local ternary patterns (1D-LTP) with the baseline Mel-frequency cepstral coefficients (MFCC). The extracted feature vector is then classified using a multi-class support vector machine (SVM), which is selected as the base classifier. The proposed method is validated on two standard benchmark datasets, i.e., DCASE and RWCP, and achieves accuracies of 97.38% and 94.10%, respectively. A comparative analysis demonstrates that the proposed scheme performs exceptionally well compared to other feature descriptors.
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)
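A 1-D local ternary pattern encodes each sample's neighbourhood as +1/0/−1 against a threshold and histograms the resulting upper and lower binary patterns, analogous to LBP/LTP texture descriptors in vision. The window radius, threshold, and encoding below are assumptions for illustration, not the authors' exact implementation:

```python
# Illustrative 1-D local ternary pattern (1D-LTP) descriptor. The paper
# concatenates a descriptor like this with MFCCs before the SVM.
import numpy as np

def ltp_1d(signal, radius=2, t=0.1):
    """Encode each sample against its 2*radius neighbours as +1/0/-1,
    then histogram the upper (+1) and lower (-1) binary patterns."""
    codes_up, codes_lo = [], []
    weights = 2 ** np.arange(2 * radius)
    for i in range(radius, len(signal) - radius):
        centre = signal[i]
        nbrs = np.concatenate([signal[i - radius:i], signal[i + 1:i + radius + 1]])
        tern = np.where(nbrs > centre + t, 1, np.where(nbrs < centre - t, -1, 0))
        codes_up.append(int(np.sum((tern == 1) * weights)))
        codes_lo.append(int(np.sum((tern == -1) * weights)))
    n_bins = 2 ** (2 * radius)
    hist_up = np.bincount(codes_up, minlength=n_bins)
    hist_lo = np.bincount(codes_lo, minlength=n_bins)
    return np.concatenate([hist_up, hist_lo])  # feature vector for the SVM

feat = ltp_1d(np.sin(np.linspace(0, 6.28, 64)))
```

Splitting the ternary code into two binary histograms is the standard LTP trick: it keeps the descriptor compact while staying robust to small-amplitude noise around the centre sample.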

15 pages, 1523 KiB  
Article
Three-Stream Convolutional Neural Network with Squeeze-and-Excitation Block for Near-Infrared Facial Expression Recognition
by Ying Chen, Zhihao Zhang, Lei Zhong, Tong Chen, Juxiang Chen and Yeda Yu
Electronics 2019, 8(4), 385; https://doi.org/10.3390/electronics8040385 - 29 Mar 2019
Cited by 12 | Viewed by 3684
Abstract
Near-infrared (NIR) facial expression recognition is resistant to illumination change. In this paper, we propose a three-stream three-dimensional convolutional neural network with a squeeze-and-excitation (SE) block for NIR facial expression recognition. We fed each stream with a different local region, namely the eyes, nose, or mouth. By using an SE block, the network automatically allocates weights to different local features to further improve recognition accuracy. The experimental results on the Oulu-CASIA NIR facial expression database showed that the proposed method achieves a higher recognition rate than some state-of-the-art algorithms.
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)

Review

Jump to: Editorial, Research

16 pages, 3633 KiB  
Review
Socially Assistive Robots for Older Adults and People with Autism: An Overview
by Ester Martinez-Martin, Felix Escalona and Miguel Cazorla
Electronics 2020, 9(2), 367; https://doi.org/10.3390/electronics9020367 - 21 Feb 2020
Cited by 53 | Viewed by 9604
Abstract
Over one billion people in the world suffer from some form of disability. Nevertheless, according to the World Health Organization, people with disabilities are particularly vulnerable to deficiencies in services, such as health care, rehabilitation, support, and assistance. In this sense, recent technological developments can mitigate these deficiencies, offering less-expensive assistive systems to meet users' needs. This paper reviews and summarizes the research efforts toward the development of these kinds of systems, focusing on two social groups: older adults and children with autism.
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)

16 pages, 237 KiB  
Review
Identification of Daily Activities and Environments Based on the AdaBoost Method Using Mobile Device Data: A Systematic Review
by José M. Ferreira, Ivan Miguel Pires, Gonçalo Marques, Nuno M. Garcia, Eftim Zdravevski, Petre Lameski, Francisco Flórez-Revuelta and Susanna Spinsante
Electronics 2020, 9(1), 192; https://doi.org/10.3390/electronics9010192 - 20 Jan 2020
Cited by 10 | Viewed by 3204
Abstract
Using the AdaBoost method may increase the accuracy and reliability of a framework for daily activity and environment recognition. Mobile devices have several types of sensors, including motion, magnetic, and location sensors, that allow accurate identification of daily activities and environments. This paper reviews the studies that use the AdaBoost method with the sensors available in mobile devices. This research identified works written in English, published between 2012 and 2018, on the recognition of daily activities and environments using the AdaBoost method with data obtained from the sensors available in mobile devices. Thus, 13 studies were selected and analysed from 151 records identified in the searched databases. The results proved the reliability of the method for daily activity and environment recognition, highlighting the use of several features, including the mean, standard deviation, pitch, roll, azimuth, and median absolute deviation of the signal of motion sensors, and the mean of the signal of magnetic sensors. When reported, the analysed studies presented an accuracy higher than 80% in the recognition of daily activities and environments with the AdaBoost method.
(This article belongs to the Special Issue Machine Learning Techniques for Assistive Robotics)
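Most of the features the review highlights are simple window statistics. A sketch of computing some of them (mean, standard deviation, median absolute deviation) over one sensor window is below; pitch, roll, and azimuth would need full 3-axis orientation data and are omitted. The window values are placeholders:

```python
# Sketch of hand-crafted window features commonly fed to AdaBoost in the
# reviewed studies. The input is one window of a 1-D sensor signal.
import numpy as np

def window_features(window):
    window = np.asarray(window, dtype=float)
    mad = np.median(np.abs(window - np.median(window)))  # median absolute deviation
    return {"mean": window.mean(), "std": window.std(), "mad": mad}

feats = window_features([1.0, 2.0, 2.0, 3.0, 10.0])
```

Note how the MAD (1.0 here) shrugs off the outlier sample 10.0 that inflates the mean and standard deviation, which is why it appears so often alongside them.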
