Article

New Eldercare Robot with Path-Planning and Fall-Detection Capabilities

1 Mechanical Engineering Department, Arab Academy for Science and Technology and Maritime Transport, Smart-Village Branch, Cairo 11736, Egypt
2 Mechanical Engineering Department, Arab Academy for Science and Technology and Maritime Transport, Sheraton Branch, Cairo 11757, Egypt
* Author to whom correspondence should be addressed.
Appl. Sci. 2024, 14(6), 2374; https://doi.org/10.3390/app14062374
Submission received: 29 January 2024 / Revised: 22 February 2024 / Accepted: 29 February 2024 / Published: 12 March 2024
(This article belongs to the Special Issue Research and Development of Intelligent Robot)

Abstract

The rapid growth of the elderly population has led to an increased demand for effective and personalized eldercare solutions. In this paper, the design and development of an eldercare robot are presented. The robot is tailored to address two specific challenges faced by the elderly: continuous indoor tracking of the elder and fall detection. A comprehensive overview of the hardware and software components, as well as the control architecture of the robot, is presented. The hardware design incorporates a perception system comprising a 2D Lidar, an IMU, and a camera for environment mapping, localization, and fall detection. The software stack consists of layers for perception, mapping, and localization. The robot is tested experimentally to validate its path-planning capability using Hector SLAM and the RRT* technique. Experimental path planning has shown an average positioning accuracy of 93.8%. Elderly fall detection is achieved using the YOLOv7 algorithm with an accuracy of 96%. Experimental results are discussed and evaluated.

1. Introduction

The global population is rapidly aging, leading to an increased demand for quality care and support for the elderly. According to a recent United Nations report, the population aged over 60 is growing faster than all younger age groups and now accounts for about 16% of the world's population [1]. People in this age group often suffer from loneliness, social isolation, a lack of daily engagement, and a lack of physical and mental monitoring [2]. Such problems can lead to serious medical conditions such as heart disease, stroke, Type 2 diabetes, depression, anxiety, suicidal ideation, self-harm, and dementia.
As traditional care models face challenges in meeting these needs, emerging technologies offer promising solutions. One such technology is the use of domestic robots designed specifically for elderly care [3]. Eldercare robots have the potential to make a real difference in the lives of elderly people. They can provide companionship, help with daily tasks, offer verbal and social communication, monitor health conditions, and provide early warnings of serious medical conditions [4]. Hence, these robots can have a positive impact on the mental and emotional health of the elderly. Such needs were further amplified during the COVID-19 pandemic. As technology continues to develop, it is believed that eldercare robots will become more affordable and easier to use. This will make them a more viable option for families with elders and help to address the growing shortage of human caregivers [5].
Research on eldercare robots focuses on providing versatile robots that can track humans in indoor environments, monitor their vital signals, detect their postures, and detect falls and provide aid [6]. Hence, research examples show continuous development in robot design, sensing technologies, indoor path planning, fall-detection techniques, and human–robot interactions in real-world scenarios.
Examples of eldercare robots include the robot Matilda [7], which utilizes a fall-detection algorithm that analyzes sensor data from cameras, microphones, and depth sensors. The algorithm is designed to detect patterns indicative of a fall, such as sudden changes in body position, impacts, or abnormal behavior. Upon detecting a potential fall event, Matilda triggers immediate alerts for assistance.
Another research study was conducted on the robot Hobbit [8], in which advanced algorithms are applied to monitor the environment using cameras, temperature sensors, and RGB-D cameras. Several algorithms were used to analyze the sensor data to detect falls based on different criteria, such as changes in body posture, rapid movements followed by a sudden stop, or collisions with objects. Once a fall is detected, Hobbit promptly notifies caregivers or emergency services. Another fall-detection algorithm is used for the Nao humanoid robot [9] which utilizes its sensor suite, including motion sensors, cameras, and depth perception. This algorithm recognizes key fall-related patterns, such as sudden changes in orientation, significant acceleration or deceleration, or rapid descent.
Additionally, the robot Giraff provides remote communication and social interaction, and its fall-related monitoring is achieved using relatively cheap hardware [10]. While it does not have a dedicated fall-detection algorithm, Giraff's video and audio capabilities enable remote caregivers to visually monitor older adults and provide immediate assistance if a fall occurs. Another robot named PARO [11], a therapeutic robot resembling a baby harp seal, focuses on emotional support for older adults but does not have fall-detection capability. However, PARO's presence and interaction with older adults have shown positive effects on emotional well-being, potentially reducing the risk of falls caused by psychological factors.
Pepper [12], another humanoid robot, contributes to the emotional well-being of older adults. It can potentially reduce the fall risks associated with social isolation and depression, but it does not have a fall-detection feature. Moreover, Mabu [13], an AI-powered robot designed to provide personalized healthcare assistance, does not have specific fall-detection capabilities either. However, Mabu's continuous monitoring of older adults' health conditions and proactive intervention can contribute to the prevention of falls by addressing underlying health issues. The robot Zora [14,15] is also used in healthcare settings. It focuses on entertainment and therapy and has a dedicated fall-detection capability based on a pose-estimation technique. In addition, its interactive activities and exercises contribute to cognitive stimulation and physical well-being, potentially reducing the risk of falls through enhanced mobility and engagement.
Other examples of robots in healthcare settings are Care-O-bot [16] and ElliQ [17]. These robots provide social interaction, cognitive activities, and object-detection capabilities and can potentially reduce the risk of falls through enhanced mobility and engagement. However, neither of them has a fall-detection feature.
Another approach to eldercare robots is presented in [18], in which a modular robotic platform with a fixed exoskeleton arm is used for elders who suffer from disabilities such as stroke and multiple sclerosis. Such a design provides indoor mobility and assistance and eliminates the possibility of falls. However, such mountable robotic platforms are not the main concern of this research.
Summarizing the eldercare robots above, Matilda, Hobbit, Nao, and Giraff stand out with their dedicated fall-detection capabilities. These robots utilize different sensing technologies, including cameras, depth sensors, and motion sensors, combined with sophisticated algorithms, to accurately detect falls and provide prompt assistance. User feedback supports the effectiveness of these robots in fall detection and their positive impact on older adults' safety. In comparison, one potential limitation of PARO, Pepper, Mabu, Zora, Care-O-bot, and ElliQ is the lack of real-time and accurate detection of falls. These robots rely on slower algorithms or less advanced techniques for identifying fall events, which can result in delayed response times or false alarms.
In this research, a new eldercare robot is presented, with complete software and hardware frameworks. The software framework has two main features. The first is SLAM (simultaneous localization and mapping) and indoor path planning, so that the robot can follow the elder continuously while avoiding indoor obstacles; the implemented path-planning technique is RRT* [19]. The second is fall and posture detection using real-time image processing with the YOLOv7 algorithm [20]. This algorithm uses convolutional neural networks (CNNs) to provide real-time and precise detection of various objects, including falling and standing postures. By integrating the YOLOv7 and RRT* algorithms into the system, the robot controller can swiftly analyze the camera video feed, identify fall events with high precision, and track the human continuously. This allows for immediate notification and prompt assistance, minimizing the time between the occurrence of a fall and the initiation of aid.
Additionally, a hardware architecture is developed with high-level and low-level controllers in addition to the sensors, motor drivers, and power supply unit. A custom-designed electronic board, the Robot Control Unit (RCU), is developed to act as the low-level controller, with SPI, UART, I2C, and Bluetooth communication capabilities. To deploy the prediction model on the high-level controller, two optimization techniques are used to decrease the model size by 60%, enabling real-time execution of the model.
Furthermore, the new caregiver robot is based on a low-cost, modular mechanical design, which makes it easy to customize the robot's capabilities to meet the specific needs of the elderly person. By integrating these features, the intended robot can not only enhance the quality of life of the elderly but also provide crucial assistance in their day-to-day activities.

2. Mechanical Design and Kinematics

2.1. Design Specifications

The robot is intended to be modular and easy to manufacture, and it was produced using 3D printing technology. Since the appearance of eldercare robots is a key factor in eldercare effectiveness, the robot has been designed to prioritize comfort, with soft curves and rounded edges, aiming for a non-threatening and friendly appearance. Moreover, additional design features based on each elder's personality and preferences are possible with 3D printing.
The robot embodies a modular mechanical architecture, thoughtfully divided into two distinct components: the platform and the pod. The platform represents an autonomous foundation, serving as the core mobility unit that navigates and interacts within the environment. On the other hand, the pod is a versatile enclosure that offers adaptable openings, allowing for a range of functionalities tailored to the unique needs of the elderly individual. This pod can be seamlessly customized to accommodate various tasks, from managing medication regimens to facilitating the delivery of refreshments or meals. As a result, the design of the pod becomes inherently flexible, sculpted by the specific requirements and preferences of the elder it serves. The robot design stages from CAD design to a fully assembled and operational robot are shown in Figure 1.
The robot design and dimensions are shown in Figure 2. The robot dimensions are 850 × 500 × 500 mm, and its load-carrying capacity is 10 kg. The battery life is 160 min in active mode and 8 h in standby mode. A two-wheel differential drive mechanism is selected for robot motion. The robot wheel thrust torque is 4.054 N·m, its maximum speed is 1.5 m/s, and the robot rated power is 137 W.

2.2. Kinematic Calculations

To control the robot's movements and velocity, a differential drive is used. The difference in RPM between the two wheels makes the robot move forward or backward or rotate. The kinematic modeling equations of the robot with a differential mechanism are given in Equations (1)–(3), and its layout is shown in Figure 3. The robot velocities in the x-direction and y-direction and the robot rotational velocity are denoted $\dot{x}$, $\dot{y}$, and $\dot{\Theta}$, respectively. The terms $v$ and $\omega$ denote the robot's linear and angular velocities, respectively. $R_z$ is the rotation matrix of the robot, $r_R$ is the radius of the robot's right wheel, and $r_L$ is the radius of the left wheel. $\dot{\varphi}_R$ and $\dot{\varphi}_L$ are the right and left wheels' rotational velocities, respectively. Finally, $2b$ is the width of the robot base.
$$
\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\Theta} \end{bmatrix}
= R_z(\theta)\begin{bmatrix} v \\ 0 \\ \omega \end{bmatrix}
= \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} v \\ 0 \\ \omega \end{bmatrix} \tag{1}
$$

$$
\begin{bmatrix} v \\ \omega \end{bmatrix}
= \begin{bmatrix} \dfrac{r_R}{2} & \dfrac{r_L}{2} \\[4pt] \dfrac{r_R}{2b} & -\dfrac{r_L}{2b} \end{bmatrix}
\begin{bmatrix} \dot{\varphi}_R \\ \dot{\varphi}_L \end{bmatrix} \tag{2}
$$

By combining Equations (1) and (2),

$$
\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{\Theta} \end{bmatrix}
= \begin{bmatrix} \dfrac{r_R}{2}\cos\theta & \dfrac{r_L}{2}\cos\theta \\[4pt] \dfrac{r_R}{2}\sin\theta & \dfrac{r_L}{2}\sin\theta \\[4pt] \dfrac{r_R}{2b} & -\dfrac{r_L}{2b} \end{bmatrix}
\begin{bmatrix} \dot{\varphi}_R \\ \dot{\varphi}_L \end{bmatrix} \tag{3}
$$
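As a numerical illustration of Equations (1)–(3), the short Python sketch below evaluates the forward kinematics for given wheel speeds. The wheel radii and half-track value are placeholder numbers, not the robot's actual parameters.

```python
import numpy as np

def diff_drive_forward_kinematics(phi_dot_R, phi_dot_L, theta,
                                  r_R=0.1, r_L=0.1, b=0.25):
    """Compute (x_dot, y_dot, theta_dot) from wheel speeds, as in Equation (3).

    phi_dot_R, phi_dot_L : right/left wheel angular velocities [rad/s]
    theta                : current robot heading [rad]
    r_R, r_L             : wheel radii [m] (placeholder values)
    b                    : half the base width, i.e. 2b is the wheel track [m]
    """
    # Equation (2): body-frame linear and angular velocity
    v = (r_R * phi_dot_R + r_L * phi_dot_L) / 2.0
    omega = (r_R * phi_dot_R - r_L * phi_dot_L) / (2.0 * b)

    # Equation (1): rotate the body velocity into the world frame
    x_dot = v * np.cos(theta)
    y_dot = v * np.sin(theta)
    return x_dot, y_dot, omega

# Example: equal wheel speeds give pure translation along the heading direction.
# print(diff_drive_forward_kinematics(10.0, 10.0, 0.0))
```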

3. Control Architecture

3.1. Low-Level Control

For robot actuation, BLDC (brushless DC) motors are selected. These motors are chosen to provide sufficient torque for the robot to move and perform its tasks effectively. A capable motor controller is chosen to handle the required power and provide the necessary control interface. A specially designed printed circuit board is developed as the robot control unit (RCU), as shown in Figure 4. The RCU contains a low-level controller, the ATMEGA2560 microcontroller, which is used to control the motor drivers and motors, obtain sensor readings, and enable seamless communication with other controllers through the SPI and I2C protocols.
Additionally, the RCU contains a relay module to control lights or other high-power devices. Velocity feedback is obtained from Hall-effect-based incremental encoders with a resolution of 1024 CPR. Digital and analogue I/O pins are available for general-purpose use. The RCU is compatible with motor drivers operating at a switching frequency of 20 kHz, as well as lights, sensors, and other peripherals. Robot power is also monitored on the RCU board. The battery power in the active mode of the robot is 137 W, which is calculated by adding the power consumption of the motors, sensors, and electronics. The robot uses a lithium-ion battery with a voltage of 36 V and a capacity of 13 Ah. This battery can provide power to the robot for up to 4.5 h in the active mode. The RCU board is also expandable, with the option to add components or modules for future enhancements. Its firmware can be updated OTA (over-the-air), and additional connectors and pinouts are provided for convenient integration with other system components.
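As a simple illustration of how the 1024 CPR encoder feedback maps to wheel velocity, the following generic sketch converts an encoder count difference over a sampling interval into an angular velocity. It is written in Python for readability and is not the RCU's actual firmware.

```python
import math

COUNTS_PER_REV = 1024  # encoder resolution (CPR) stated for the RCU

def wheel_angular_velocity(delta_counts, delta_t):
    """Wheel angular velocity [rad/s] from the change in encoder counts over delta_t [s]."""
    revolutions = delta_counts / COUNTS_PER_REV
    return 2.0 * math.pi * revolutions / delta_t

# Example: 512 counts in 0.1 s is half a revolution per 0.1 s, i.e. about 31.4 rad/s.
# print(wheel_angular_velocity(512, 0.1))
```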

3.2. High-Level Control

A hardware stack architecture is developed with both high and low control levels, as shown in Figure 5. An Nvidia Jetson Nano is used as the high-level controller. Additional sensors are connected to this controller, namely the 2D Lidar, a 9-DOF IMU (MPU9250), and a Raspberry Pi 2 camera. This high-level controller board accommodates the high-processing hardware and software that will later be added to the system. Continuous voltage and current monitoring and power-failure protection are provided by the protection fuse unit. The final prototype of the robot is shown in Figure 6.
A system software stack was developed, as shown in Figure 7. It executes the algorithms and techniques used for data processing, mapping, localization, and environment perception. The developed software consists of several layers: the perception layer, the mapping and localization layer, the path-planning and navigation layer, the actuation layer, and the command execution layer.
This approach has several advantages. First, it allows the low-level sensors to be connected directly to the RCU, which minimizes latency and improves the accuracy of the data. Second, it allows the high-level sensors to be connected to a powerful computer, which allows for more sophisticated processing and analysis of the data. The details of these layers will be explained in the following subsections.

3.3. Motion Control and Visual Interface Layers

The low-level sensors, which measure physical quantities such as acceleration, velocity, and orientation, together with the encoders, are connected to the robot control unit (RCU). The high-level sensors, which perceive the environment around the robot, such as the camera, Lidar, and inertial measurement unit (IMU), are connected to a powerful computer, the Jetson Nano board. The RCU is responsible for collecting data from the low-level sensors and sending them to the Jetson Nano, while the Jetson Nano is responsible for processing the data from the high-level sensors and detecting falls. The complete control system is built using ROS Melodic with a visual user interface and continuous monitoring.

3.4. Perception Layers

For the perception system of the robot, the system incorporates a 2D Lidar sensor with a detection range of 12 m, a sampling rate of 4000 S/s, and a 360° scanning range. It is used for map building, obstacle detection, and avoidance. Additionally, an IMU sensor is added to capture orientation and motion data to enhance the robot’s navigation capabilities. These sensors work in synergy to provide a comprehensive perception system for the robot.

3.5. Indoor Navigation and SLAM Layers

The navigation stack and SLAM algorithms are commonly used together to enable autonomous navigation in robots. The navigation stack provides the infrastructure for the robot to navigate, while SLAM provides the map of the environment that the robot needs for navigation. Together, these two tools can be used to build accurate, robust, and efficient navigation systems. Two common SLAM algorithms were considered: G-mapping and Hector SLAM [21]. G-mapping is a 2D SLAM algorithm that uses a laser scanner to build a map of the environment. It performs well in relatively simple environments, but it is not suitable for complex environments with narrow passages, reflective surfaces, and other challenging indoor features. Hector SLAM is also a 2D laser-based SLAM algorithm, but it is more sophisticated and can handle complex and detailed features effectively. Hence, Hector SLAM is chosen for this system for environment navigation, where accuracy and robustness are important.
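For reference, when Hector SLAM (the hector_mapping ROS node) is running, other nodes can consume the estimated robot pose from the pose topic it publishes. The minimal rospy sketch below assumes the default hector_slam topic name (slam_out_pose); the topic remapping in the robot's actual launch files may differ.

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PoseStamped

def pose_callback(msg):
    # hector_mapping publishes the estimated robot pose in the map frame.
    x = msg.pose.position.x
    y = msg.pose.position.y
    rospy.loginfo("SLAM pose: x=%.3f m, y=%.3f m", x, y)

if __name__ == "__main__":
    rospy.init_node("slam_pose_listener")
    rospy.Subscriber("slam_out_pose", PoseStamped, pose_callback)
    rospy.spin()
```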

3.6. Fall Detection Capability

Specific algorithms and techniques for fall detection were explored, considering factors such as accuracy, false positive rate, real-time processing, sensitivity, and specificity. A fall-detection model was developed using a Raspberry Pi 2 camera and an NVIDIA Jetson Nano computing platform. This study employs a custom dataset of 10K images, compiled and annotated by the authors on the Roboflow platform [22], as shown in Figure 8. To facilitate robust model training and assessment, the dataset was divided into an 85.5%–9.5%–5% split for training, validation, and testing, respectively.
The fall-detection model was developed using images acquired from various sources on the internet and processed using an NVIDIA Jetson Nano computing platform. These images encompassed a wide range of scenarios and situations in which falls might occur.
The labelling technique involved manually annotating the collected images. Human annotators reviewed each image and marked the regions where a person was falling or not falling. This process generated ground truth labels for the training dataset. Figure 8 illustrates a subset of these labelled images to provide an overview of the dataset. This approach allowed for a diverse dataset that covered various fall scenarios.
Next, YOLOv7 was used as the object-detection algorithm. An object detector performs image-recognition tasks by taking an image as input and predicting bounding boxes and class probabilities for each object in the image. The YOLO algorithm family uses deep convolutional neural networks (CNNs) to extract features from the image and detect objects [23]. A block diagram of the main components of the YOLOv7 algorithm is shown in Figure 9. The "Backbone" block is responsible for creating image features; the "Neck" block is where a group of neural network layers combines and mixes these features; and the last block, the "Head", receives the features from the previous block and generates the predictions.
In this research, object detection directly pinpoints fallen individuals, bypassing complex pose analysis. This offers a more efficient and reliable solution for fall detection, prioritizing precision and minimizing false alarms.
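To illustrate how the detector output is turned into a fall decision, the sketch below post-processes a list of per-frame detections (bounding box, confidence, class) and checks for a confident "fall" detection. The class names and confidence threshold are assumptions for illustration and not the paper's exact pipeline.

```python
# Hypothetical post-processing of YOLO-style detections into a fall decision.
CLASS_NAMES = ["standing", "fall"]  # assumed label set for the custom dataset
CONF_THRESHOLD = 0.5                # assumed confidence cut-off

def fall_detected(detections):
    """detections: iterable of (x1, y1, x2, y2, confidence, class_id) for one frame."""
    for x1, y1, x2, y2, conf, cls_id in detections:
        if conf >= CONF_THRESHOLD and CLASS_NAMES[int(cls_id)] == "fall":
            return True, (x1, y1, x2, y2)   # fall found, return its bounding box
    return False, None                       # no confident fall detection in this frame
```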
To reduce the model size and improve its inference speed on the Jetson Nano, the model was quantized. The model size was reduced by 40% through quantization and by an additional 20% through structured pruning for the hardware implementation. Quantization is a common machine-learning technique for reducing model size [24]. Pruning, on the other hand, is a machine-learning technique that decreases the size of complex models by eliminating some of their parameters so that they run faster on hardware [25]. Structured pruning is convenient for structured models such as CNNs because it preserves their structure. The final model was able to detect falls at a rate of 30 frames per second.
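As an illustration of these two compression steps, the following generic PyTorch sketch applies channel-wise structured pruning to convolutional layers and dynamic quantization to linear layers. It is a rough example of the techniques named above, not the exact toolchain used for the deployed Jetson Nano model (which may rely on platform-specific tools such as TensorRT).

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def compress_model(model: nn.Module, prune_amount: float = 0.2) -> nn.Module:
    """Generic sketch: structured pruning of conv layers, then dynamic quantization."""
    # Structured (channel-wise) pruning: zero out a fraction of output channels
    # of each convolutional layer, selected by their L2 norm.
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.ln_structured(module, name="weight", amount=prune_amount, n=2, dim=0)
            prune.remove(module, "weight")  # make the pruning permanent

    # Dynamic quantization: store the weights of linear layers as 8-bit integers.
    quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)
    return quantized
```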

4. Experimental Testing

Field testing of the eldercare robot included indoor path planning and real human fall detection. An indoor testing map was prepared that includes static obstacles of different sizes, as shown in Figure 10. The robot and patient positions are also shown in the same figure. The map is roughly 3 m × 7.1 m, and the number of static obstacles is 16. The RRT* indoor path-planning technique was used for robot navigation.
RRT is a sampling-based path-planning technique used for single-query problems [19]. It grows a tree of feasible trajectories rooted at an initial vertex by iteratively adding random nodes and drawing edges to them, using a collision-detection method to verify the feasibility of each edge. The probability of finding a feasible path approaches 1 as the number of iterations tends to infinity. RRT* is an optimized version of RRT that takes the path cost into account when connecting nodes. The algorithm is summarized in Algorithm 1, and a short Python sketch of its expansion loop is given after the pseudocode:
Algorithm 1 Pseudocode for the path-planning algorithm RRT* [19]
Input: initial configuration q_I, number of nodes n, step size α
Output: tree T = (V, E)
initialize V = {q_I}, E = ∅
for i = 1 to n do
    q_i ← random sample from C
    q_near ← nearest neighbor of q_i in V
    q_new ← q_near + α (q_i − q_near) / ‖q_i − q_near‖
    if q_new ∈ C_free then
        V ← V ∪ {q_new}
        E ← E ∪ {(q_near, q_new)}
end for
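The compact Python sketch below follows the basic expansion loop of Algorithm 1 (random sample, nearest neighbor, steering by step size α, collision check). The sampling and collision-check routines are application-specific and passed in as functions, and the rewiring step that distinguishes RRT* from RRT is only noted in a comment.

```python
import math
import random

def rrt(q_start, n_iterations, alpha, sample_free, is_free):
    """Basic RRT expansion loop (Algorithm 1).

    q_start     : (x, y) start configuration
    sample_free : function returning a random configuration in C
    is_free     : function returning True if a configuration is collision-free
    Returns the tree as (vertices, edges).
    """
    V = [q_start]
    E = []
    for _ in range(n_iterations):
        q_i = sample_free()                               # random sample from C
        q_near = min(V, key=lambda q: math.dist(q, q_i))  # nearest neighbor in the tree
        d = math.dist(q_near, q_i)
        if d == 0.0:
            continue
        # steer from q_near toward q_i by step size alpha
        q_new = (q_near[0] + alpha * (q_i[0] - q_near[0]) / d,
                 q_near[1] + alpha * (q_i[1] - q_near[1]) / d)
        if is_free(q_new):                                # q_new in C_free
            V.append(q_new)
            E.append((q_near, q_new))
            # RRT* additionally rewires nearby vertices through q_new when this
            # lowers their path cost; that step is omitted here for brevity.
    return V, E
```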
The experimental robot path and the reconstructed 2D map obtained from the Lidar feedback are shown in Figure 11. Robot feedback and control were conducted according to the perception procedures previously described in Section 3. The actual robot path has a travel distance of 3.528 × 10³ mm, an average velocity of 0.784 m/s, a travel time of 4.5 s, and a maximum acceleration of 0.174 m/s². The final point deviates from the destination point with a mean absolute error of 0.187 m on the x-axis and 0.248 m on the y-axis.
Fall detection was simulated by humans, and the results were recorded and analyzed using the YOLOv7 classifier. The fall-detection results were evaluated using standard parameters such as accuracy, precision, recall, F-score, and true negative rate [26]. A true positive (TP) is obtained when an abnormal event is detected between the first and the last frame in which the abnormal action took place. A true negative (TN) is a normal action that is not detected as abnormal. False positives (FPs) are normal actions reported as abnormal, and false negatives (FNs) are abnormal behaviors not reported by the system. These parameters are calculated as follows:
$$
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} = \frac{5119 + 5153}{5119 + 5153 + 51 + 350} = 96\%
$$

$$
\mathrm{Precision} = \frac{TP}{TP + FP} = \frac{5119}{5119 + 51} = 99.1\%
$$

$$
\mathrm{Recall} = \frac{TP}{TP + FN} = \frac{5119}{5119 + 350} = 93.6\%
$$

$$
F\text{-}\mathrm{score} = \frac{2 \times \mathrm{Recall} \times \mathrm{Precision}}{\mathrm{Recall} + \mathrm{Precision}} = 96\%
$$

$$
\mathrm{True\ negative\ rate} = \frac{TN}{TN + FP} = \frac{5153}{5153 + 51} = 99\%
$$
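For completeness, the same metrics can be computed directly from the raw counts with a few lines of Python:

```python
TP, TN, FP, FN = 5119, 5153, 51, 350  # counts from the fall-detection experiments

accuracy = (TP + TN) / (TP + TN + FP + FN)               # ~0.962
precision = TP / (TP + FP)                               # ~0.990
recall = TP / (TP + FN)                                  # ~0.936
f_score = 2 * recall * precision / (recall + precision)  # ~0.962
true_negative_rate = TN / (TN + FP)                      # ~0.990

print(f"accuracy={accuracy:.3f}, precision={precision:.3f}, recall={recall:.3f}, "
      f"F-score={f_score:.3f}, TNR={true_negative_rate:.3f}")
```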
The experimental results of the prediction model are shown in Figure 12. The figure is divided into two rows: the first shows the training results together with the corresponding precision and recall, while the second shows the validation results together with their precision and recall. Three types of loss are shown in this figure: box loss, objectness loss, and classification loss. The box loss reflects the algorithm's ability to locate the object center and estimate its bounding box. The objectness metric quantifies how likely it is that an object is present in a given area; a high objectness score suggests that an object probably lies inside the visible region of the image. Classification loss, on the other hand, indicates the accuracy with which the algorithm determines the correct class of an object. All these results are obtained over 0–250 iterations. The accuracy, precision, recall, F-score, and true negative rate were found to be 96%, 99.1%, 93.5%, 96.2%, and 99%, respectively. The confusion matrix for the prediction model is shown in Figure 13, while real images from the experiment are shown in Figure 14.

5. Discussions

The developed robot performed both path planning and fall detection successfully, but several observations were made. For the path-planning implementation, it was found that the robot experiences some difficulty in reaching its destination when there are dynamic obstacles (other than the subject) in the map. Moreover, for fall detection to be optimal and achieve the highest accuracy, the robot must be situated at a distance of between 185 and 220 cm from the subject; this result is shown in Figure 15a. Hence, 200 cm is chosen as the default distance and is continuously monitored by the Lidar sensor. This observation also explains the need for the path-planning feature in the new eldercare robot. Additionally, the robot velocity is a contributing factor to the detection accuracy. Several experiments were performed to determine the fall-detection accuracy with respect to the expected subject speed. The fall-detection accuracy ranges from 96.3% to 95.1% for fall velocities from 0 to 1.1 m/s, respectively. Such a speed range is consistent with the expected walking speed of the targeted elderly age range (60 to 80 years old) [27]. Different robot velocities were also compared with respect to fall-detection accuracy, as can be seen in Figure 15b.
Based on the eldercare robot literature reviewed in Section 1, several further remarks can be made. Firstly, it is difficult to find a robot with both indoor navigation and fall-detection capabilities. Secondly, most of the robots in the literature use pose estimation instead of object detection for fall detection, which consumes more processing power and decreases the prediction accuracy [28]. The developed robot and its features are compared to other eldercare robots from the literature in Table 1.

6. Conclusions and Future Work

In this paper, an eldercare robot was developed to address the unique needs of the elderly population. The comprehensive approach included hardware design, motor calculations, system integration, and experimental testing. The robot’s control architecture accommodates both hardware and software layers, utilizing Hector SLAM and local path planners within the ROS framework. This enables the robot to navigate, detect obstacles, and provide fall detection, showcasing its potential for enhancing elderly well-being. RRT* was chosen as the path-planning technique in experimental testing. The robot was tested in a real indoor environment and achieved a positioning accuracy of 94.7% and 93% in the x- and y-axes, respectively. Additionally, a fall-detection model was developed using the YOLOv7 algorithm which achieved an accuracy of 96% and a precision of 99.1% after testing. The model size was reduced by 40% through quantization and an additional 20% through pruning to enable its deployment in the Jetson Nano controller.
However, this study acknowledges that this is an initial step, and further validation and real-world testing are needed across caregiving settings. Continuous refinement of both hardware and software components is crucial to address evolving elderly needs. Future exploration could involve enhancing human–robot interactions through incorporating natural language processing and emotion recognition. Also, advanced fall detection can be achieved through integrating more sensors for real-time and accurate results. Additional features can be used to classify falls and generate datasets that can be segmented according to different features, such as the human body, head pose, joint angles, gait, background, lighting, and any noise or distortion. Finally, an automatic docking station is planned to be added to the robot.

Author Contributions

Conceptualization, Y.E.-S.; methodology, A.A. and Y.E.-S.; software, A.E. and A.A.; formal analysis, A.A.; investigation, A.E. and Y.E.-S.; writing—original draft, A.A.; visualization, A.E.; supervision, A.A. and Y.E.-S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

A publicly available dataset was analyzed in this study. These data can be found here: https://www.kaggle.com/datasets/elwalyahmad/fall-detection (accessed on 28 February 2024).

Conflicts of Interest

The authors declare that there is no conflict of interest regarding this manuscript.

References

  1. United Nations Department of Economic and Social Affairs. World Social Report 2023: Leaving No One Behind in an Ageing World; United Nations Department of Economic and Social Affairs: New York, NY, USA, 2023; Available online: https://desapublications.un.org/publications/world-social-report-2023-leaving-no-one-behind-ageing-world (accessed on 29 January 2024).
  2. Asgharian, P.; Panchea, A.M.; Ferland, F. A Review on the Use of Mobile Service Robots in Elderly Care. Robotics 2022, 11, 127. [Google Scholar] [CrossRef]
  3. Sawik, B.; Tobis, S.; Baum, E.; Suwalska, A.; Kropińska, S.; Stachnik, K.; Pérez-Bernabeu, E.; Cildoz, M.; Agustin, A.; Wieczorowska-Tobis, K. Robots for Elderly Care: Review, Multi-Criteria Optimization Model and Qualitative Case Study. Healthcare 2023, 11, 1286. [Google Scholar] [CrossRef] [PubMed]
  4. Vandemeulebroucke, T.; de Casterlé, B.D.; Gastmans, C. How do older adults experience and perceive socially assistive robots in aged care: A systematic review of qualitative evidence. Aging Ment. Health 2017, 22, 149–167. [Google Scholar] [CrossRef] [PubMed]
  5. Nakamura, K.; Saga, N. Current Status and Consideration of Support/Care Robots for Stand-Up Motion. Appl. Sci. 2021, 11, 1711. [Google Scholar] [CrossRef]
  6. Vagnetti, R.; Camp, N.; Story, M.; Ait-Belaid, K.; Bamforth, J.; Zecca, M.; Magistro, D. Robot Companions and Sensors for Better Living: Defining Needs to Empower Low Socio-economic Older Adults at Home. In Proceedings of the International Conference on Social Robotics, Doha, Qatar, 3–7 December 2023; Ali, A.A., Cabibihan, J.-J., Meskin, N., Rossi, S., Jiang, W., He, H., Ge, S.S., Eds.; Lecture Notes in Computer Science. Springer: Singapore, 2024; Volume 14453. [Google Scholar] [CrossRef]
  7. Khosla, R.; Nguyen, K.; Chu, M.-T. Assistive robot enabled service architecture to support home-based dementia care. In Proceedings of the 2014 IEEE 7th International Conference on Service-Oriented Computing and Applications, Matsue, Japan, 17–19 November 2014; pp. 73–80. [Google Scholar]
  8. Fischinger, D.; Einramhof, P.; Papoutsakis, K.; Wohlkinger, W.; Mayer, P.; Panek, P.; Hofmann, S.; Koertner, T.; Weiss, A.; Argyros, A.; et al. Hobbit, a care robot supporting independent living at home: First prototype and lessons learned. Robot. Auton. Syst. 2016, 75, 60–78. [Google Scholar] [CrossRef]
  9. Zhang, T.; Zhang, W.; Qi, L.; Zhang, L. Falling detection of lonely elderly people based on NAO humanoid robot. In Proceedings of the 2016 IEEE International Conference on Information and Automation (ICIA), Ningbo, China, 1–3 August 2016; pp. 31–36. [Google Scholar] [CrossRef]
  10. Gonzalez-Jimenez, J.; Galindo, C.; Ruiz-Sarmiento, J. Technical improvements of the Giraff telepresence robot based on users’ evaluation. In Proceedings of the 2012 RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, 9–13 September 2012; pp. 827–832. [Google Scholar]
  11. Shibata, T.; Hung, L.; Petersen, S.; Darling, K.; Inoue, K.; Martyn, K.; Hori, Y.; Lane, G.; Park, D.; Mizoguchi, R.; et al. PARO as a Biofeedback Medical Device for Mental Health in the COVID-19 Era. Sustainability 2021, 13, 11502. [Google Scholar] [CrossRef]
  12. Vidovićová, L.; Menšíková, T. Materiality, Corporeality, and Relationality in Older Human–Robot Interaction (OHRI). Societies 2023, 13, 15. [Google Scholar] [CrossRef]
  13. Mahdi, H.; Akgun, S.A.; Saleh, S.; Dautenhahn, K. A survey on the design and evolution of social robots—Past, present and future. Robot. Auton. Syst. 2022, 156, 104193. [Google Scholar] [CrossRef]
  14. Huisman, C.; Kort, H. Two-Year Use of Care Robot Zora in Dutch Nursing Homes: An Evaluation Study. Healthcare 2019, 7, 31. [Google Scholar] [CrossRef] [PubMed]
  15. Melkas, H.; Hennala, L.; Pekkarinen, S.; Kyrki, V. Impacts of robot implementation on care personnel and clients in elderly-care institutions. Int. J. Med. Inform. 2020, 134, 104041. [Google Scholar] [CrossRef] [PubMed]
  16. Reiser, U.; Connette, C.; Fischer, J.; Kubacki, J.; Bubeck, A.; Weisshardt, F.; Jacobs, T.; Parlitz, C.; Hagele, M.; Verl, A. Care-O-bot® 3-creating a product vision for service robot applications by integrating design and technology. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 11–15 October 2009; pp. 1992–1998. [Google Scholar]
  17. Coghlan, S. Robots and the Possibility of Humanistic Care. Int. J. Soc. Robot. 2022, 14, 2095–2108. [Google Scholar] [CrossRef] [PubMed]
  18. Catalan, J.M.; Blanco, A.; Bertomeu-Motos, A.; Garcia-Perez, J.V.; Almonacid, M.; Puerto, R.; Garcia-Aracil, N. A Modular Mobile Robotic Platform to Assist People with Different Degrees of Disability. Appl. Sci. 2021, 11, 7130. [Google Scholar] [CrossRef]
  19. Massoud, M.M.; Abdellatif, A.; Atia, M.R.A. Different Path Planning Techniques for an Indoor Omni-Wheeled Mobile Robot: Experimental Implementation, Comparison and Optimization. Appl. Sci. 2022, 12, 12951. [Google Scholar] [CrossRef]
  20. Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 7464–7475. [Google Scholar] [CrossRef]
  21. Saat, S.; Rashid, W.A.; Tumari, M.; Saealal, M.S. HectorSLAM 2D Mapping for Simultaneous Localization and Mapping (SLAM). J. Phys. Conf. Ser. 2020, 1529, 042032. [Google Scholar] [CrossRef]
  22. Roboflow Universe Projects. Fall Detection Dataset. March 2023. Available online: https://universe.roboflow.com/roboflow-universe-projects/fall-detection-ca3o8 (accessed on 29 January 2024).
  23. Liu, H.; Luo, J. YES-SLAM: YOLOv7-Enhanced-Semantic Visual SLAM for Mobile Robots in Dynamic Scenes. Meas. Sci. Technol. 2023, 35, 035117. [Google Scholar] [CrossRef]
  24. Rybczak, M.; Popowniak, N.; Lazarowska, A. A Survey of Machine Learning Approaches for Mobile Robot Control. Robotics 2024, 13, 12. [Google Scholar] [CrossRef]
  25. Yang, C.; Liu, H. Channel pruning based on convolutional neural network sensitivity. Neurocomputing 2022, 507, 97–106. [Google Scholar] [CrossRef]
  26. Han, T.; Kang, W.; Choi, G. IR-UWB Sensor Based Fall Detection Method Using CNN Algorithm. Sensors 2020, 20, 5948. [Google Scholar] [CrossRef] [PubMed]
  27. Duim, E.; Lebrão, M.L.; Antunes, J.L.F. Walking speed of older people and pedestrian crossing time. J. Transp. Health 2017, 5, 70–76. [Google Scholar] [CrossRef]
  28. Oñoro-Rubio, D.; López-Sastre, R.J.; Redondo-Cabrera, C.; Gil-Jiménez, P. The challenge of simultaneous object detection and pose estimation: A comparative study. Image Vis. Comput. 2018, 79, 109–122. [Google Scholar] [CrossRef]
Figure 1. Manufacturing stages of the new eldercare robot, from CAD design to a fully assembled robot.
Figure 2. Mechanical design of the new eldercare robot (all dimensions in mm).
Figure 3. Kinematics of the developed eldercare robot.
Figure 4. Developed robot control unit (RCU) board.
Figure 5. Architecture of the control system for the robot.
Figure 6. The developed prototype of the eldercare robot.
Figure 7. The developed software stack for the eldercare robot.
Figure 8. The collected images uploaded and labeled on the Roboflow platform.
Figure 9. The architecture of the YOLOv7 neural network.
Figure 10. The prepared map for indoor field testing of the new eldercare robot (all dimensions in mm).
Figure 11. The experimental map generated by Lidar for the field testing of the new eldercare robot.
Figure 12. The experimental results of the prediction model for fall-detection classification.
Figure 13. The confusion matrix for the fall-detection classification.
Figure 14. Real images for fall-detection classification.
Figure 15. Experimental comparison showing the effect of distance and robot velocity on fall-detection accuracy: (a) experimental distance of the eldercare robot vs. fall-detection accuracy; (b) experimental robot velocity vs. fall-detection accuracy.
Table 1. A comparison between the new eldercare robot and other robots mentioned in the literature.

| Name | Mechanical Design | Sensors | Indoor Navigation Capability (Y/N) | Fall-Detection Technique (Y/N) |
|---|---|---|---|---|
| Matilda [7] | Differential | Ultrasonic, bumper, touch, sound, camera | Y (obstacle avoidance) | N |
| Hobbit [8] | Differential | Bumper, temperature, sound, camera | N | Y (pose estimation and object detection) |
| Giraff [10] | Differential | 2D Lidar, RGB camera, fisheye camera | Y (teleoperated with obstacle avoidance capability) | N |
| Mabu [13] | Stationary | Camera | N | N |
| Zora [14,15] | Legged humanoid robot | Bumper, IMU, sonars, camera | N | Y (pose estimation) |
| The new eldercare robot | Differential mobile robot | 2D Lidar, IMU, CSI camera | Y (Hector SLAM and RRT*) | Y (object detection using YOLOv7) |