
Deformable Object Manipulation in Caregiving Scenarios: A Review

School of Physics, Engineering and Technology, University of York, Heslington, York YO10 5DD, UK
Author to whom correspondence should be addressed.
Machines 2023, 11(11), 1013;
Submission received: 16 October 2023 / Revised: 30 October 2023 / Accepted: 31 October 2023 / Published: 7 November 2023
(This article belongs to the Special Issue New Trends in Robotics, Automation and Mechatronics)


This paper reviews the robotic manipulation of deformable objects in caregiving scenarios. Deformable objects like clothing, food, and medical supplies are ubiquitous in care tasks, yet pose modeling, control, and sensing challenges. This paper categorises caregiving deformable objects and analyses their distinct properties influencing manipulation. Key sections examine progress in simulation, perception, planning, control, and system designs for deformable object manipulation, along with end-to-end deep learning’s potential. Hybrid modelling that combines analytical and data-driven methods shows promise. While laboratory successes have been achieved, real-world caregiving applications lag behind. Enhancing safety, speed, generalisation, and human compatibility is crucial for adoption. The review synthesises critical technologies, capabilities, and limitations, while also pointing to open challenges in deformable object manipulation for robotic caregiving. It provides a comprehensive reference for researchers tackling this socially valuable domain. In conclusion, multi-disciplinary innovations combining analytical and data-driven methods are needed to advance real-world robot performance and safety in deformable object manipulation for patient care.

1. Introduction

The growing importance of deformable object manipulation (DOM) in caregiving settings can be attributed to various factors. One significant driving force is the aging population, which has led to increasing demand for caregiving services [1]. In addition to supporting the elderly, deformable object manipulation technologies can assist individuals with disabilities and special needs, improving their quality of life and promoting independence [2]. Another essential aspect of deformable object manipulation in caregiving scenarios is the potential to enhance precision and safety in medical procedures. These technologies can reduce the physical strain experienced by human caregivers, preventing injuries and enabling them to provide better patient care [2,3,4]. Despite the promising applications of deformable object manipulation in caregiving settings, numerous challenges must be addressed to harness its potential fully. Deformable objects exhibit complex and high-dimensional behaviour, making them difficult to model and control [5]. Additionally, their non-linear behaviour and interactions with the environment further complicate the development of effective manipulation strategies [6]. Furthermore, robustness and real-time performance are crucial in caregiving scenarios, as unpredictable situations and time-sensitive tasks demand prompt and reliable responses from robotic systems [7].

1.1. The Scope of the Review

This review delves deep into the mechanisms of manipulating deformable objects in caregiving contexts. It examines fundamental operations and the intricate simulation techniques used to model these objects accurately. Additionally, it explores the subtle strategies of perception, planning, and control for precise object manipulation and their application in purpose-built robots and assistive technologies across various caregiving scenarios.

1.2. The Outline of This Review

The outline of this article is shown in Figure 1.
Section 2 summarises the methodology, statistics, and trends in the literature reviewed. Key publication years, research areas, and technology evolutions are analysed.
Section 3 provides an outlook on deformable object manipulation in caregiving over time. Applications are classified and analysed based on utility, frequency, complexity, criticality, and maturity. A multi-factor prioritisation method is also proposed. Key challenges are discussed.
Section 4 examines core technical elements for deformable object manipulation in caregiving. This includes modeling and simulation methods, perception and sensing, planning and control strategies, end-to-end learning frameworks, and purpose-built manipulator and assistive device designs.
Section 5 discusses critical considerations in safety and the responsible integration of technologies into caregiving settings. Mechanical, control, perceptual, and human interaction factors for safe operation are reviewed. Social acceptance barriers and effects on human caregivers are also analysed.
Section 6 concludes by summarising key findings, limitations, and directions needed for continued progress in deformable object manipulation for robotic caregiving assistance.

1.3. Related Works

Previous surveys have examined various aspects of deformable object manipulation, including sensing and control [8], vision-based perception [9], tactile sensing [10], and dynamic manipulation [11]. Recent reviews have focused on learning-based manipulation [12], continuum manipulators [13], and applications in surgery [14]. However, there has been limited synthesis of advances specifically for caregiving contexts. This survey provides a comprehensive overview tailored to caregiving needs by analyzing suitable sensing modalities, control strategies, system designs, and human–robot interaction factors. It uniquely covers the categorisation of objects based on caregiving activities and the integration of manipulation capabilities into assistive robots. By thoroughly examining the latest technological and human-centric considerations, this review offers novel insights to guide further research on deformable object manipulation for improved caregiving.

2. Methodology and Statistics

This literature review was conducted according to the guidelines of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) 2020 statement [15].
An in-depth search was conducted using the Google Scholar search engine on 13 August 2023. The scope was restricted to articles and reviews, exclusively in the English language, using the following search terms: ((“deformable” OR “soft”) AND “object manipulation” AND “modeling” AND “robot”) OR (“caregiving robot” AND (“deformable” OR “soft”) AND “manipulation”) OR (“assistive robot” AND (“deformable” OR “soft”) AND “manipulation”). The retrieved time range was set from 1983 to 2023. This initial search identified 7130 records. After removing duplicates, 6821 records remained.
The study selection process is outlined in the PRISMA flow diagram in Figure 2. The titles and abstracts of the 6821 records were screened for relevance. A total of 5641 records were excluded during this screening, leaving 1180 potentially relevant abstracts. The 1180 abstracts underwent full-text screening, which led to the exclusion of an additional 808 abstracts. The most common reasons for exclusion were irrelevant focus, outdated technologies, lack of ongoing research, or weak relevance to caregiving scenarios. This screening process resulted in 372 eligible full-text articles for full-text review. After acquiring and assessing the full-text articles, 231 were excluded primarily due to lacking focus on deformable object manipulation in caregiving contexts. Ultimately, 141 studies met the inclusion criteria and were included in this review. The included studies comprised 128 articles and 13 systematic reviews. They were sourced from several databases integrated within Google Scholar, including Web of Science, Scopus, IEEE Xplore, and PubMed.
Statistical analysis, as depicted in Figure 3, revealed that the primary sources of references are journal articles followed by conference proceedings, with other types of sources being relatively rare, constituting less than 10 percent.
The references for this review span a significant historical range, from 1984 to 2023, with the majority concentrated in the period from 2018 to 2022, as shown in Figure 4. Scholarly journal articles make up the mainstay of these references, while conference papers, which capture emerging trends within the academic community, represent a substantial contingent. The review therefore covers nearly four decades of academic and technological development, with a heightened focus on the research breakthroughs of the past decade.
As shown in Figure 5, early work in the 1980s–1990s focused on mathematical modeling techniques like finite element methods [16] and mass-spring systems [17] to represent deformable object behavior. These allowed some simulation and analysis but were limited in real-world applications. In the 2000s–2010s, vision-based sensing and model-based control strategies started emerging as key areas of research for the perception and manipulation of deformable objects. However, these relied heavily on accurate models and could not handle uncertainties well. Over the last 5 years or so, data-driven techniques like deep learning have become prominent for DOM. Convolutional and recurrent neural networks have been applied for perception and control learning using large datasets. End-to-end learning frameworks combining deep neural networks with reinforcement learning have recently demonstrated promise for learning complex DOM policies directly from sensor inputs. Hybrid approaches are also gaining traction, integrating physics-based modeling with data-driven methods to get the best of both worlds. Advanced simulation environments that leverage differentiable physics have further accelerated DOM research. Overall, the trend has been towards more data-driven methods and learning-based techniques to overcome the challenges of mathematical modeling and achieve more adaptive policies for real-world DOM tasks and applications. The integration of deep neural networks, reinforcement learning, and differentiable physics simulations has been at the forefront of recent advances.

3. DOM in Caregiving: Outlook, Analysis, and Challenges

The increasing demand for caregiving robots over the past two decades has been driven primarily by the growing aging population, the need for cost-effective healthcare solutions, and technological advancements in robotics [1]. This section reviews the critical milestones in the development and deployment of caregiving robots, focusing on real-world cases and the evolving landscape of their applications.

3.1. An Outlook by Timeline

As shown in Figure 6, since the 1980s, deformable object manipulation in caregiving has progressed from early simulation work [16,17] to real-world assistive robot systems, enabled by advances in analytical modeling, computer vision [18], and AI techniques. Key applications that emerged over the decades include robotic dressing [19], bedding [20,21,22], feeding [23,24,25,26], personal hygiene [27], and medical care assistance [28]. While early research in the 2000s relied on model-based methods [29,30,31], recent breakthroughs leveraged data-driven deep learning [32,33] for more flexible policies, like robotic knot-tying in 2021 [34,35]. The timeline illustrates milestones from early vision sensing in the 2000s [18] to robotic meal assistance in 2020 [24] and sway adaptation for bandaging in 2023 [28]. The overarching trend is the translation of deformable object manipulation from simulation environments to real-world caregiving tasks, with increasing autonomy and intelligence. This promises continued improvement in robotic assistance for the elderly and disabled through handling real-world deformable objects.

3.2. Classification of Applications

Over the past decade, several cases of DOM in caregiving scenarios have been documented, showcasing advancements in robotic capabilities, methodologies, and applications in the caregiving domain. User perspectives have also been studied, such as factors influencing the adoption of caregiving robots by the elderly and attitudes towards assistive systems [36]. In this critical outlook, we classify the cases by task and function and calculate the proportion of each application among the references in this review, as shown in Figure 7.
  • Dressing assistance is an essential application of DOM in caregiving scenarios, particularly for users with limited mobility or dexterity. Researchers have developed robots capable of assisting users in putting on various clothing items, including T-shirts [19], pants [37], and footwear [38]. These studies have demonstrated the potential of DOM in addressing the challenges faced by individuals with disabilities and the elderly in performing daily dressing tasks.
  • Bedding and Cloth Management is another crucial application of DOM in caregiving scenarios, as it involves handling large, deformable objects and ensuring user comfort and hygiene. Researchers have developed robotic bed-making systems that grasp, tension, and smooth fitted sheets [20]. Robots capable of managing blankets [21] and pillows [22] have also been developed. The success of these works highlights the potential of DOM in addressing the challenges associated with bedsheet management and the need for continued research and development in this area. Cloth folding is also essential in caregiving settings, particularly for maintaining order and cleanliness. Researchers have developed robots capable of folding deformable items such as towels, shirts, and ropes [19,34,39,40,41,42,43].
  • Personal Hygiene Support is another critical application of DOM in caregiving scenarios. Researchers have developed robots that handle soft materials such as gauze [27] and diapers [44]. These studies have highlighted the importance of integrating various sensory modalities and control techniques for effective soft material handling in caregiving scenarios.
  • Meal Assistance is another important application of DOM in caregiving scenarios, particularly for users with limited mobility or dexterity. Researchers have developed robots capable of manipulating deformable objects such as food items and utensils [23,24,25,26]. These studies have demonstrated the potential of DOM in addressing the challenges faced by individuals with disabilities and the elderly in performing daily meal assistance tasks.
  • Daily Medical Care. In the context of bandaging, deformable object manipulation systems offer improved precision and control, enabling more effective and efficient wound-dressing procedures. These systems can adapt to the varying shapes and contours of the human body, as well as the patient’s involuntary swaying, ensuring proper bandage placement and tension for optimal healing [28]. Deformable object manipulation technologies assist patients during therapeutic exercises and activities in rehabilitation. They provide real-time feedback, support, and guidance, enhancing repair and promoting faster recovery [45,46,47].
  • Some other Applications. DOM has also been explored in other caregiving applications, such as housekeeping tasks like laundry management [48,49], window cleaning [50], and dishwashing [51].
The past decade has witnessed significant advancements in deformable object manipulation in caregiving scenarios. The increasing demand for caregiving robots highlights the importance of continued research and development to advance their capabilities in DOM and ensure their responsible deployment in various care settings.
Deformable objects are ubiquitous across the diverse caregiving applications discussed, highlighting the importance of continued advancements in deformable object manipulation capabilities. In the following section, we will delve deeper into categorising deformable objects based on operational modes relevant to caregiving tasks and analyse the unique challenges they present.

3.3. A New Method to Classify and Analyse

In past research, the classification of deformable objects primarily adopted methodologies such as Sanchez’s [27], which emphasised the physical properties and morphology of the objects. The main objective of these techniques was to facilitate precise modelling of the objects in question. However, a paradigm shift in recent research trends, particularly in Artificial Intelligence, has changed this approach considerably.
The introduction of learning methodologies such as imitation learning and reinforcement learning has expanded the capabilities of robotic manipulation. Consequently, the need to model each deformable object separately has become less pressing. This study presents a classification framework focusing on the distinct material behaviours and properties that influence manipulation strategies, as shown in Figure 8. The classification takes into account both the function of deformable items in caregiving scenarios and their physical characteristics, which facilitates further analysis of the objects a caregiving robot or robotic arm must handle.

3.3.1. Common Types of Deformable Objects in Caregiving Scenarios

  • Textiles: Covers all cloth and fabric objects like clothing, sheets, towels, etc. Key properties are flexibility, drape, and shear.
  • Elastomers: Includes stretchable/elastic materials like bandages, tubing, and exercise bands. Key properties are elongation and elasticity.
  • Fluids: Encompasses materials like water, shampoo, and creams that flow and conform to containers. Key behaviors are pourability and viscosity.
  • Aggregates/Granular: Covers aggregated materials like rice, beans, and tablets. Flows but maintains loose particulate structure.
  • Gels: Highly viscous/elastic fluids like food gels, slime, and putty. Resist flow due to cross-linked molecular structure.
  • Cellular/Porous: Materials with internal voids like sponges and soft foams. Compressible and exhibit springback.
  • Composite/Hybrid: Combinations of the above categories, like stuffed animals and packaged goods. Display complex interactions of properties.
This review summarises the proportions of research objects found in the reference literature, as depicted in Figure 9. Textiles stand out as the most studied deformable objects, whereas work on cellular/porous and aggregate/granular objects remains limited; this scarcity may indicate deficiencies in practical applications.

3.3.2. A Multi-Factor Analysis Method

While assigning arbitrary subjective scores lacks rigor, a multi-factor analysis can systematically prioritise deformable object manipulation research needs for caregiving. Several key factors should be considered:
  • Application Utility (U)—Potential to reduce caregiver burden by automating tasks
  • Object Frequency (F)—How often the object occurs in caregiving activities
  • Task Complexity (C)—Technical challenges posed by physical properties and handling difficulties
  • Safety Criticality (S)—Risks of injury or harm during object manipulation
  • Research Maturity (M)—Existing state of manipulation methods for the object
To apply this analysis, quantitative metrics for each factor are gathered via:
  • Surveys, interviews, and activity logging (for Application Utility)
  • Workflow observations and activity logging (for Object Frequency)
  • Material testing, caregiver surveys, and interviews (for Task Complexity)
  • Incident data and healthcare professional feedback (for Safety Criticality)
  • Literature review (for Research Maturity)
The metrics are then combined as follows:
  • Each metric is normalised on a 0–1 scale based on the maximum value observed. This transforms metrics to a common scale.
  • Criteria weights are assigned to each factor based on the caregiving context. For example, Safety Criticality may be weighted higher for hospital settings compared to home care.
  • Weighted sums are calculated by multiplying each normalised metric by the criteria weight.
  • The weighted sums are aggregated into an overall priority score P for each deformable object, where U, F, C, S, and M are the normalised metrics and w1 to w5 are the criteria weights:
    P = w1 × U + w2 × F + w3 × C + w4 × S − w5 × M
    Research Maturity enters negatively because objects that already have mature manipulation methods warrant less new research.
  • Objects are ranked by priority score P, identifying high-impact areas needing research innovations.
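As a sketch, the prioritisation steps above can be expressed in a few lines of Python. All metric values and criteria weights below are invented for illustration, and the Research Maturity term is subtracted on the assumption that mature areas need less new research:

```python
import numpy as np

# Hypothetical raw metrics for three deformable object categories
# (columns: U, F, C, S, M); the values are illustrative only.
objects = ["textiles", "fluids", "gels"]
raw = np.array([
    [8.0, 9.0, 6.0, 4.0, 7.0],   # textiles
    [5.0, 7.0, 8.0, 6.0, 3.0],   # fluids
    [3.0, 2.0, 9.0, 5.0, 1.0],   # gels
])

# Step 1: normalise each metric to [0, 1] by its maximum observed value.
norm = raw / raw.max(axis=0)

# Step 2: assumed criteria weights for the caregiving context; Research
# Maturity is subtracted, since mature areas need less new research.
w = np.array([0.25, 0.20, 0.20, 0.25, 0.10])
signs = np.array([1, 1, 1, 1, -1])

# Steps 3-4: weighted sum P = w1*U + w2*F + w3*C + w4*S - w5*M
P = norm @ (w * signs)

# Step 5: rank objects by priority score, highest first.
ranking = sorted(zip(objects, P), key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name}: {score:.3f}")
```

Re-weighting for a specific care setting (e.g., raising the Safety Criticality weight for hospitals) only changes `w`, leaving the rest of the pipeline intact.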
Tuning the criteria weights for specific caregiving applications allows us to systematically prioritise the most promising research directions via this data-driven analysis. Multi-factor analysis indicates general priorities, while subjective scoring adds nuanced use-case customisation. Using both techniques together provides rigor and versatility in determining the key deformable object manipulation research needs for a maximal real-world impact.
However, our exploration continues further. In the next part, we will delve into the many challenges of handling these deformable objects, particularly when considering their high-dimensional, non-linear, and time-varying behaviour. Whether it is the need for the real-time perception of the object’s shape and interactions or the development of novel control frameworks, we are faced with a complex task that requires a multi-disciplinary approach. This highlights the necessity of moving beyond traditional methodologies and stepping into the realm of data-driven, hybrid models that can more adequately cater to the demands of real-world caregiving scenarios.

3.4. Challenges of DOM in Caregiving

Accurately modelling deformable objects is challenging due to their high-dimensional, non-linear, and time-varying behaviour [16]. While mathematical models, such as the Finite Element Method (FEM) and Mass-Spring models, can capture some of these properties [17], they may not always be sufficient for real-world caregiving scenarios. This necessitates exploring data-driven and hybrid modelling approaches, such as deep learning-based models [41,43,52,53,54,55] and Gaussian process regression [56,57,58,59,60].
Deformable object manipulation often requires real-time, robust, and precise perception of the object’s shape, deformation, and environmental interactions [61]. This can be challenging due to occlusions, sensor noise, and varying environmental conditions. Multi-modal sensing techniques, such as vision and tactile sensing, have enhanced perception capabilities [62,63,64]. Recent advancements in soft tactile sensors and RGB-D cameras have further improved the accuracy and reliability of deformable object perception [65].
Developing effective control and planning algorithms for deformable object manipulation is challenging due to the non-linear dynamics, uncertainties, and constraints associated with these objects [66]. Traditional robotic control strategies may not be suitable, leading to the investigation of novel control frameworks, including learning-based, adaptive control methods [67] and impedance control [68].
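To make the impedance-control idea concrete, the following minimal one-dimensional sketch (not drawn from any cited system, with purely illustrative parameters) renders a virtual spring and damper between the end-effector and a desired position, so the robot yields compliantly when a soft object pushes back:

```python
# Minimal 1D impedance-control sketch: the controller applies
# F = K*(x_d - x) + D*(v_d - v), a virtual spring-damper toward the
# desired position x_d, while a soft object resists past 8 cm.
m = 1.0            # effective end-effector mass (kg), illustrative
K, D = 50.0, 14.0  # virtual stiffness (N/m) and damping (N*s/m)
x_d = 0.10         # desired position (m)

x, v, dt = 0.0, 0.0, 1e-3
for _ in range(5000):
    f_ctrl = K * (x_d - x) + D * (0.0 - v)   # impedance control force
    f_ext = -200.0 * max(x - 0.08, 0.0)      # soft-contact reaction force
    a = (f_ctrl + f_ext) / m
    v += a * dt                              # semi-implicit Euler step
    x += v * dt

# The system settles where the virtual spring balances the contact force:
# K*(x_d - x) = 200*(x - 0.08)  =>  x = (K*x_d + 200*0.08) / (K + 200)
print(f"steady-state position: {x:.4f} m")
```

Because the commanded behaviour is a compliance rather than a position, the same controller tolerates uncertainty in where the deformable object actually is, which is the property that makes impedance control attractive in caregiving contact tasks.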
In caregiving scenarios, ensuring the safety of patients and human caregivers during deformable object manipulation is of paramount importance. This requires the development of control strategies that can adapt to uncertainties and unforeseen events, as well as methods for effective communication and collaboration between robots and humans. Recent research on shared autonomy, where a human operator and a robotic system jointly control the manipulation process, has shown promise in achieving safe and efficient deformable object manipulation in caregiving scenarios [69,70,71].

4. Key Technologies for DOM in Caregiving

As mentioned above, addressing these challenges requires a multi-disciplinary approach, combining advancements in modelling, sensing, control, and human–robot interaction to develop practical solutions for deformable object manipulation in caregiving scenarios.
Having explored the categories and complexities of deformable objects commonly encountered in caregiving scenarios, we now shift our focus to developing solutions to manage these challenges. In the upcoming sections, we will examine key technologies and strategies to enhance the robotic manipulation of delicate and unpredictable materials for assistive applications.

4.1. Modelling and Simulation

As robots and assistive systems are increasingly employed in various care applications, they must interact with complex, delicate, and flexible materials, such as human tissues, garments, and assistive devices [72,73,74]. Developing accurate models and simulations for these deformable objects is essential to understanding their behaviour and dynamics [27,75]. This knowledge allows robotic systems to plan and execute precise manipulations, ensuring patients’ safety and comfort [4,76]. In this context, modelling and simulation serve as indispensable tools for designing, testing, and optimising the performance of robotic and assistive systems in the ever-evolving field of caregiving [77,78,79]. This section will discuss mathematical, data-driven, and hybrid modelling for manipulating deformable objects and present notable simulation environments.

4.1.1. Mathematical Models

Mathematical models are essential for understanding the complex behaviour of deformable objects in caregiving scenarios. Continuous models, such as those based on linear elasticity, nonlinear elasticity, and viscoelasticity, describe deformation using partial differential equations (PDEs) [75,80]. Discrete models, like the FEM and Mass-Spring models, use interconnected elements or particles to represent deformable objects, simplifying PDEs into ordinary differential equations (ODEs) or algebraic equations [81,82]. The appropriate mathematical model depends on accuracy, computational resources, and application requirements.
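As a minimal illustration of the discrete approach, the sketch below simulates a 1D mass-spring chain, point masses joined by damped springs integrated with semi-implicit Euler. All parameters are illustrative, not taken from any cited model:

```python
import numpy as np

# 1D mass-spring chain: the simplest discrete deformable-object model.
# One end is pinned (as if grasped) and a constant force pulls the other.
n = 5
m, k, c, rest = 0.1, 40.0, 2.0, 0.05   # mass (kg), stiffness (N/m),
                                        # damping (N*s/m), rest length (m)
x = np.arange(n) * rest                 # initial positions along a line
v = np.zeros(n)
dt = 1e-3

for _ in range(6000):
    f = np.zeros(n)
    for i in range(n - 1):              # spring + damper between neighbours
        stretch = (x[i + 1] - x[i]) - rest
        f_pair = k * stretch + c * (v[i + 1] - v[i])
        f[i] += f_pair
        f[i + 1] -= f_pair
    f[-1] += 0.5                        # constant 0.5 N pull on the free end
    v += (f / m) * dt                   # semi-implicit Euler step
    v[0] = 0.0                          # first mass pinned (the grasped end)
    x += v * dt

# At equilibrium each spring carries the full 0.5 N load, so each
# stretches by 0.5 / k and the chain extends by (n - 1) * 0.5 / k.
total_stretch = x[-1] - x[0] - (n - 1) * rest
print(f"total stretch: {total_stretch:.4f} m")
```

An FEM model would replace the hand-built spring forces with element stiffness matrices assembled from continuum equations, trading this simplicity for physical fidelity.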

4.1.2. Data-Driven Models

Data-driven models offer an alternative approach to deformable object modelling, leveraging machine learning techniques to learn object behaviour from the data instead of explicit mathematical equations. These models can adapt to varying object properties and environmental conditions, making them suitable for many caregiving applications.
Deep learning-based models, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and graph neural networks (GNNs), have shown great promise in modelling complex deformations and predicting the dynamics of deformable objects from visual and tactile data [32]. Researchers have used CNNs to learn the mapping between tactile sensor measurements and object deformation, enabling robots to manipulate soft objects with high precision [83].
Gaussian process regression (GPR), a non-parametric Bayesian method, has been employed to model the deformation of soft objects in robotic grasping tasks. GPR provides a probabilistic representation of the object’s state, allowing uncertainty quantification and robust decision-making [60].
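A toy version of this idea can be written directly in NumPy: a GP with an RBF kernel predicts deformation from applied force and reports a predictive variance for uncertainty quantification. The training data and hyperparameters below are synthetic and purely illustrative:

```python
import numpy as np

# Minimal Gaussian process regression sketch (RBF kernel, plain NumPy).
def rbf(a, b, length=1.0, variance=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length**2)

f_train = np.array([0.0, 1.0, 2.0, 3.0, 4.0])      # applied force (N)
d_train = 0.02 * f_train + 0.003 * f_train**2      # measured deformation (m)

# Gram matrix with a small jitter term for numerical stability.
K = rbf(f_train, f_train) + 1e-6 * np.eye(len(f_train))
K_inv = np.linalg.inv(K)

def predict(f_query):
    """Posterior mean and (clipped, non-negative) variance at the queries."""
    k_star = rbf(f_query, f_train)
    mean = k_star @ K_inv @ d_train
    cov = rbf(f_query, f_query) - k_star @ K_inv @ k_star.T
    return mean, np.clip(np.diag(cov), 0.0, None)

mean, var = predict(np.array([2.5]))
print(f"deformation at 2.5 N: {mean[0]:.4f} m (std {np.sqrt(var[0]):.4f})")
```

The predictive variance is what enables the robust decision-making mentioned above: a planner can, for example, slow down or request more sensing whenever the GP is uncertain about the object's state.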

4.1.3. Hybrid Models

Hybrid models are an emerging class of deformable object models that combine the strengths of both mathematical and data-driven models. By integrating mathematical models with data-driven techniques, hybrid models can achieve improved accuracy, adaptability, and generalisation capabilities compared to standalone models. One example of a hybrid model is the combination of the FEM with neural networks, where FEM provides a physics-based prior and the neural network refines the model parameters or captures the residuals between model predictions and the observed data [84].
Researchers have also explored using GPR in conjunction with mathematical models in order to better capture the uncertainty in deformable object manipulation tasks. By incorporating GPR into a physics-based model, these hybrid models can provide probabilistic representations of the object’s state, allowing for uncertainty quantification and more robust decision-making [17].
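In its simplest form, the residual idea behind such hybrids can be sketched as a linear-elastic physics prior corrected by a least-squares-fitted data-driven term; this stands in for the FEM-plus-network combination at toy scale, with all values synthetic:

```python
import numpy as np

# Hybrid-model sketch: physics prior (Hooke's law with an estimated
# stiffness) plus a quadratic residual fitted to "measured" data.
k_est = 45.0                                   # estimated stiffness (N/m)

def physics_prior(f):
    return f / k_est                           # predicted deformation (m)

f = np.linspace(0.0, 5.0, 20)                  # applied forces (N)
d_true = f / 40.0 + 0.002 * f**2               # synthetic nonlinear response

# Fit a quadratic correction to the prior's residual by least squares.
residual = d_true - physics_prior(f)
A = np.vander(f, 3)                            # columns: [f^2, f, 1]
coef, *_ = np.linalg.lstsq(A, residual, rcond=None)

def hybrid(fq):
    return physics_prior(fq) + np.vander(fq, 3) @ coef

err_prior = np.abs(physics_prior(f) - d_true).max()
err_hybrid = np.abs(hybrid(f) - d_true).max()
print(f"max error: prior {err_prior:.4f} m, hybrid {err_hybrid:.2e} m")
```

The physics prior keeps predictions sensible outside the training data, while the fitted residual absorbs the model mismatch, which is the division of labour hybrid models aim for.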

4.1.4. Simulation Tools and Environments

Simulation tools and environments are crucial in developing, testing, and validating deformable object manipulation strategies. These tools provide a controlled virtual environment for evaluating the performance of various algorithms and models and generating synthetic data for training and testing data-driven models.
A popular physics engine for deformable object simulations is the Bullet Physics Library [85], which supports both rigid body dynamics and soft body dynamics, including FEM-based deformable models. The SOFA (Simulation Open Framework Architecture) framework is another widely used tool for simulating deformable objects, offering a modular architecture and support for various modelling and simulation techniques, such as FEM, mass-spring, and continuum mechanics models [86].
In recent years, developing sophisticated simulation environments incorporating diverse learning algorithms and heightened intelligence has dramatically enhanced the functional progress of deformable object manipulation. These advancements have not only improved the development efficiency but also reduced the associated costs. This section will name a few of these cutting-edge simulation environments and provide a critical analysis of their respective strengths and weaknesses. The simulation tools to be introduced below are displayed on a timeline in Figure 10.
  • SoftGym [52] is a simulation environment focusing on soft-body manipulation tasks, providing researchers with a platform to develop and test algorithms for various applications such as robotic grasping and manipulation. While SoftGym may offer unique benefits and opportunities, it is essential to examine its limitations and potential areas for improvement critically. One possible drawback is that the simulation may not fully represent complex real-world conditions, which could lead to discrepancies when applying developed algorithms to actual tasks. Additionally, the simulation may not cover the full range of soft-body objects and scenarios, potentially limiting its applicability to a narrower range of cases. Further research and development in SoftGym may be required to address these limitations and ensure the platform’s continued relevance and effectiveness.
  • DeformableRavens [87] is an open-source simulated benchmark with 12 tasks manipulating 1D, 2D, and 3D deformable objects, intended to help accelerate research progress in the robotic manipulation of deformable materials. It provides an end-to-end goal-conditioned transporter network that learns vision-based multi-step operations for deformable 1D, 2D, and 3D structures. However, the current scope of DeformableRavens is limited to a set of predefined tasks and objects, which may hinder its adaptability to more diverse scenarios.
  • ReForm [88] is another simulation environment focusing on deformable objects like metal wires with elastic and plastic properties. While it addresses the limitations of SoftGym, more information about its usability, versatility, and performance in a broader range of applications would be needed to assess its overall effectiveness.
  • PlasticineLab [77] is a simulation environment focusing on soft-body manipulation, utilising differentiable physics to optimise control policies for robotic manipulation tasks. While PlasticineLab offers a novel approach to solving soft-body manipulation problems, it is crucial to consider potential limitations and areas for improvement. One concern could be the accuracy of the differentiable physics models, which might not fully capture the complex interactions between objects and their environment. Furthermore, the scalability of the simulation to more complex and diverse scenarios may be limited, which could affect its applicability to real-world situations.
  • DefGraspSim [78] is a simulation environment focusing on grasping 3D deformable objects like fruits, vegetables, and internal organs. While the simulation provides valuable insights into robotic grasping strategies, it is essential to consider potential limitations and areas for improvement. For instance, the simulation may not account for all possible variations in object shape, material properties, and environmental factors, which could affect the performance of developed algorithms in real-world applications. Additionally, the simulation’s efficiency may be limited by the processing power of the GPU used, potentially restricting the ability to test a wide range of objects and scenarios in a reasonable time frame.
  • RCareWorld [79] is a human-centric simulation environment designed to develop physical and social robotic caregiving. The simulation incorporates inputs from stakeholders such as care recipients, caregivers, occupational therapists, and roboticists. While RCareWorld offers a promising platform for developing robotic caregiving solutions, examining potential limitations and areas for improvement is essential. For example, the simulation may not fully capture the complexities of human–robot interaction in real-world caregiving settings, which could lead to discrepancies when applying developed algorithms to actual tasks. Additionally, the simulation might not cover all possible care scenarios and patient needs, potentially limiting its applicability. Ongoing research and development in RCareWorld will be crucial to address these limitations and ensure the platform’s continued relevance and effectiveness.
Combining hybrid models [84] and advanced simulation tools and environments [86] facilitates the development and evaluation of deformable object manipulation strategies in caregiving scenarios. By leveraging the strengths of both mathematical and data-driven models and using comprehensive simulation environments, researchers can create more effective and robust techniques for handling deformable objects in a wide range of caregiving applications. These advances will ultimately improve the performance and safety for caregivers and care recipients in various healthcare and home care settings.

4.2. Perception and Sensing

Perception and sensing in deformable object manipulation within caregiving scenarios [1] are critical, as they directly impact the quality of care provided. Robots and assistive systems must handle delicate, flexible, and unpredictable materials in these settings, making accurate assessment and interaction essential. Adequate perception and sensing techniques allow these systems to gather crucial information about the objects’ properties and behaviours, enabling improved decision-making and action planning. By prioritising the development and integration of innovative perception and sensing methods, deformable object manipulation systems can achieve enhanced precision, adaptability, and safety in caregiving contexts, ultimately leading to better patient outcomes [2,3,4]. This section will explore vision-based [5,6,7,8], tactile-based [61], and sensor fusion approaches [12] for perception and sensing in deformable object manipulation.

4.2.1. Vision-Based Techniques

Vision-based techniques enable robots to perceive and manipulate deformable objects in caregiving scenarios. These techniques primarily involve processing and analysing camera image data to extract information about the object’s shape, deformation, and pose. Various vision-based methods have been developed to address the unique challenges associated with deformable object manipulation, such as occlusions, varying appearances, and non-rigid transformations.
One popular approach uses RGB-D sensors, which provide colour and depth information, allowing for more accurate and robust object segmentation and pose estimation [18]. Researchers have developed methods for fusing colour and depth data to improve the analysis of object deformation and grasp planning [89,90]. Deep learning-based techniques, such as CNNs, RNNs, and GNNs, have been widely used for vision-based deformable object manipulation tasks, including object segmentation, pose estimation, and deformation prediction [43,83,91]. These techniques can learn complex, high-dimensional representations of deformable objects from large amounts of image data, enabling robots to handle various objects with different shapes and properties.
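As a concrete illustration of how colour and depth cues can be combined, the sketch below segments a deformable object (e.g. a cloth on a table) by requiring both a colour match and a depth test against the table plane. This is a minimal toy example in plain Python, not any published pipeline; the thresholds, the tiny frames, and the function name are illustrative assumptions.

```python
# Minimal sketch: fusing colour and depth cues to segment a deformable
# object lying on a table. Thresholds and frame sizes are illustrative
# assumptions, not values from any cited system.

def segment_rgbd(rgb, depth, colour_target, colour_tol, table_depth, depth_margin):
    """Return a binary mask: a pixel belongs to the object if its colour is
    close to the target AND it sits clearly above the table plane in depth."""
    mask = []
    for row_rgb, row_d in zip(rgb, depth):
        mask_row = []
        for (r, g, b), d in zip(row_rgb, row_d):
            colour_ok = (abs(r - colour_target[0]) < colour_tol and
                         abs(g - colour_target[1]) < colour_tol and
                         abs(b - colour_target[2]) < colour_tol)
            depth_ok = d < table_depth - depth_margin  # closer to camera than the table
            mask_row.append(1 if (colour_ok and depth_ok) else 0)
        mask.append(mask_row)
    return mask
```

Combining the two cues rejects pixels that match in colour but lie on the table surface, a failure mode of colour-only segmentation.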
Template matching and model-based methods are also employed to estimate the pose and deformation of known deformable objects [92,93]. These methods involve matching the observed image features to a pre-defined template or a 3D model, allowing for an accurate estimation of the object pose and deformation under varying conditions.
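The template matching idea above can be sketched minimally: slide the template over candidate offsets and keep the one minimising the sum of squared differences against observed feature points. The point sets and the discrete offset grid below are illustrative assumptions; real systems search continuous pose spaces and account for non-rigid deformation.

```python
# Minimal sketch of template matching for pose estimation: pick the
# translation that best aligns template feature points with observed
# ones. The search is over a hypothetical discrete offset grid.

def match_template(template_pts, observed_pts, offsets):
    """Return the offset (dx, dy) minimising the sum of squared differences."""
    def ssd(offset):
        dx, dy = offset
        return sum((tx + dx - ox) ** 2 + (ty + dy - oy) ** 2
                   for (tx, ty), (ox, oy) in zip(template_pts, observed_pts))
    return min(offsets, key=ssd)
```

A deformable-object variant would additionally warp the template, but the estimate-by-minimising-a-matching-cost structure is the same.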

4.2.2. Tactile Sensing

Tactile sensing is another essential aspect of deformable object manipulation in caregiving scenarios as it allows robots to perceive contact forces, object properties, and local deformations during manipulation tasks. Tactile sensors can complement vision-based techniques by providing additional information that may not be easily accessible via visual observations, such as the distribution of contact forces, friction, and local compliance of the object [94].
Various types of tactile sensors have been developed for robotic manipulation tasks, including capacitive, piezoresistive, and piezoelectric sensors [95]. Each sensor type offers unique advantages and trade-offs regarding sensitivity, resolution, and robustness, making them suitable for different applications and environments.
Researchers have explored the integration of tactile sensing into deformable object manipulation tasks, such as object grasping, manipulation, and deformation control. For instance, tactile feedback has improved grasp stability and adapted to object deformation during manipulation [96]. Tactile sensing can also be utilised for real-time deformation control, enabling robots to monitor and adjust the applied forces to prevent excessive deformation or damage to the object [97].
Machine learning techniques, including deep learning and reinforcement learning, have been applied to process and interpret tactile sensor data for deformable object manipulation tasks [98,99]. These methods can learn complex relationships between tactile sensor readings and object properties or manipulation outcomes, allowing robots to make more informed decisions during manipulation tasks.
Recent studies show that robots can better understand the object’s properties and behaviour by combining tactile sensing with vision-based techniques, improving performance and safety during manipulation tasks.

4.2.3. Sensor Fusion

Sensor fusion integrates multiple sensor data types, such as vision, tactile, and force sensing, and has emerged as a promising approach for deformable object manipulation in caregiving scenarios. By combining information from various sources, sensor fusion can enhance robotic systems’ accuracy, robustness, and adaptability, enabling them to better handle the complexities associated with deformable objects [100].
Vision-based sensing is widely used in robotic manipulation, providing rich information about the object’s shape, colour, and texture [101]. However, vision-based methods may struggle with occlusions, specular reflections, or poor lighting conditions. Tactile sensing, on the other hand, can complement vision by providing direct contact information, such as pressure distribution, contact location, and object compliance [102,103]. Force sensing further augments the system’s capabilities by measuring the forces exerted during manipulation, which can be essential for ensuring patient safety and comfort [95].
Recent studies have demonstrated the benefits of combining these sensing modalities in various caregiving scenarios. For instance, in surgical applications, integrating vision and force sensing has proven effective in providing more accurate feedback during delicate tissue manipulations [104]. In rehabilitation settings, fusing tactile and force information has improved the control of soft robotic exoskeletons, leading to better patient outcomes [72]. One example of sensor fusion in deformable object manipulation is the work of Luo et al. [105], which describes a framework that fuses vision and force feedback to control highly deformable objects. This approach demonstrated an improved performance compared to using vision or force sensing alone, highlighting the potential benefits of sensor fusion in handling deformable objects.
Another example can be found in the work of Choi et al. [106], who developed a multi-fingered robotic system for dexterously manipulating deformable objects using visual and tactile sensing. Their approach used a combination of visual features extracted from RGB-D images and tactile measurements from an array of pressure sensors to model the object’s deformation. This information was then used to control a robotic manipulator to achieve the desired deformations, demonstrating the effectiveness of sensor fusion in controlling deformable objects.
Moreover, researchers have explored deep learning techniques for sensor fusion in deformable object manipulation. For example, Yang et al. [32] proposed a deep learning-based framework for tactile and visual sensing fusion in robotic grasping. Their method combined CNNs to extract features from tactile and visual data, fusing them using a multimodal fusion layer. This approach led to an improved grasping performance, showing the potential of deep learning for sensor fusion in deformable object manipulation.
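The late-fusion pattern described above can be illustrated with a deliberately simple sketch: hand-rolled "features" from a depth patch and a fingertip pressure array are concatenated and scored by a linear layer, standing in for the CNN feature extractors and multimodal fusion layer of learned systems. All readings, weights, and feature choices are illustrative assumptions.

```python
# Minimal sketch of multimodal late fusion for grasp assessment:
# extract features per modality, concatenate, then apply a linear score.
# The feature definitions and weights are illustrative assumptions.

def visual_features(depth_patch):
    # e.g. mean depth and depth range of the grasp region
    flat = [d for row in depth_patch for d in row]
    return [sum(flat) / len(flat), max(flat) - min(flat)]

def tactile_features(pressures):
    # e.g. total load and peak pressure on the fingertip array
    return [sum(pressures), max(pressures)]

def grasp_stability_score(depth_patch, pressures, weights, bias):
    fused = visual_features(depth_patch) + tactile_features(pressures)  # concatenation = fusion
    return bias + sum(w * f for w, f in zip(weights, fused))
```

In a learned system, both the feature extractors and the fusion weights would be trained end-to-end; the structural point is that modalities meet in a shared representation before the final decision.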
There are also efforts to develop general frameworks for sensor fusion in robotic manipulation. One such example is the work of Inceoglu et al. [107], who proposed a deep multimodal framework for integrating multiple sensor modalities, including vision, sonar, microphone, and tactile sensor data. Their approach aimed to account for the uncertainties associated with each sensing modality and provided a principled way to fuse the information, improving object manipulation performance and making robots more aware of the unintended outcomes of their actions in order to ensure safety.

4.3. Planning and Control

The significance of planning and control in deformable object manipulation within caregiving scenarios cannot be overstated [72,105,106]. In these contexts, robots and assistive systems must interact with delicate and often unpredictable objects, such as human tissues, textiles, or prosthetics, making precise planning and control crucial for patient safety and comfort [32]. Efficient planning enables the system to navigate complex interactions with deformable objects by predicting their behaviour and adjusting actions accordingly [108]. This reduces the risk of unintended consequences and enhances the system’s adaptability and overall performance [18]. Furthermore, robust control techniques ensure that manipulations are executed accurately and smoothly, accounting for the inherent uncertainties and complexities associated with deformable objects [29]. By integrating advanced planning and control strategies, deformable object manipulation systems in caregiving scenarios can achieve a higher level of dexterity, responsiveness, and reliability, ultimately improving the quality of care and the overall patient experience [72]. This section examines model-based, model-free, and hybrid control strategies for manipulating deformable objects.

4.3.1. Model-Based Control

Model-based control strategies for deformable object manipulation rely on accurate models of the object’s geometry, material properties, and dynamics to predict its behaviour and plan appropriate control actions [29]. These approaches typically involve using mathematical models, such as finite element models, mass-spring systems, or continuum mechanics models, to represent the object’s deformations and dynamics [30,31]. Model-based control methods can offer high performance and accuracy, but they may require extensive prior knowledge of the object’s properties and can be computationally expensive for complex objects or real-time applications [16,17].
One advantage of model-based control strategies is that they allow us to explicitly consider constraints, such as object deformation limits, contact forces, or actuator limitations, during the planning and control process [11,29]. This can lead to safer and more effective manipulation strategies [30], particularly in caregiving scenarios where the object’s integrity and patient safety are paramount [109,110].
Various model-based control algorithms have been proposed for deformable object manipulation tasks, such as force control, impedance control, and model predictive control (MPC) [108]. These methods can adapt to varying object properties and environmental conditions, enabling a robust and reliable manipulation performance.
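Of the controllers listed above, impedance control admits a particularly compact sketch: the commanded force makes the end-effector behave like a virtual spring-damper pulled toward a desired state, keeping contact with a deformable object compliant. The 1D law and gains below are illustrative assumptions.

```python
# Minimal sketch of a 1D impedance control law:
#   F = K * (x_des - x) + D * (v_des - v)
# Gains K (stiffness) and D (damping) are illustrative assumptions;
# lowering K yields gentler contact with a delicate object.

def impedance_force(x, v, x_des, v_des, stiffness, damping):
    return stiffness * (x_des - x) + damping * (v_des - v)
```

In caregiving tasks, the gains can be scheduled online, e.g. reduced near the care recipient's body, which is one way the safety constraints discussed above enter the controller.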

4.3.2. Model-Free Control

Model-free control strategies for deformable object manipulation do not rely on explicit object models but instead learn control policies directly from the sensor data and interaction experiences [111,112]. These approaches can be particularly advantageous when the object’s properties are unknown, difficult to model, or subject to change over time. Model-free control methods often employ machine learning techniques, such as reinforcement learning or imitation learning, to learn control policies from the data.
Reinforcement learning algorithms enable robots to learn control policies by interacting with the environment and receiving feedback as rewards or penalties [113]. These methods can discover effective manipulation strategies via trial-and-error exploration without explicit object models or human supervision. Recent advances in deep reinforcement learning have shown promising results in learning complex control policies for deformable object manipulation tasks [35].
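The trial-and-error loop described above can be sketched with tabular Q-learning on a toy task: an agent must stretch a band to a target level without overstretching it. The toy MDP, reward values, and hyperparameters are illustrative assumptions; real deformable-object tasks require deep function approximation over high-dimensional states.

```python
import random

# Minimal sketch of trial-and-error policy learning: tabular Q-learning
# on a hypothetical 1D stretching task. States are stretch levels 0..4;
# action 1 pulls, action 0 releases; level 3 is the target (+1 reward)
# and level 4 is overstretched (-1 reward).

def train_q(episodes=2000, alpha=0.5, gamma=0.9, eps=0.3, seed=0):
    rng = random.Random(seed)
    states, actions = range(5), (0, 1)
    target = 3
    q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = 0
        for _ in range(10):
            # epsilon-greedy exploration
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda b: q[(s, b)])
            s2 = min(4, s + 1) if a == 1 else max(0, s - 1)
            r = 1.0 if s2 == target else (-1.0 if s2 == 4 else 0.0)
            # Q-learning temporal-difference update
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in actions) - q[(s, a)])
            s = s2
    return q
```

After training, the greedy policy pulls while below the target and releases at it, discovered purely from rewards, without any model of the band's elasticity.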
Imitation learning techniques allow robots to learn control policies by observing and mimicking human demonstrations or other expert behaviour. These methods can leverage human expertise to bootstrap the learning process and accelerate the acquisition of effective manipulation strategies. Imitation learning has been successfully applied to various deformable object manipulation tasks, such as knot tying, cloth folding, and surgical suturing [35,114,115].

4.3.3. Hybrid Control Strategies

Hybrid control strategies combine the advantages of both model-based and model-free approaches to achieve a more effective and robust deformable object manipulation performance [60,74,77,78]. By integrating explicit object models with learning-based methods, hybrid control strategies can leverage the benefits of prior knowledge while also adapting to uncertainties or changes in the object’s properties and environment [35,108,114].

One common approach in hybrid control strategies is using model-based controllers, such as MPC or impedance control, in conjunction with reinforcement learning or imitation learning algorithms [116,117,118]. The model-based controller provides an initial control policy based on the object’s known properties, while the learning-based algorithm refines and adapts it through interaction with the environment or observation of expert demonstrations [74,115].

The control techniques discussed enable the precise and adaptive manipulation of deformable objects. Building on these methods, the integration of end-to-end learning promises more dynamic and flexible solutions by learning complex mappings directly from the sensor data [33,119,120]. Next, we will discuss the application of end-to-end learning to deformable object manipulation.
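The hybrid model-based/learned scheme can be sketched minimally: a model-based term from the known object model supplies the baseline command, and a learned residual term corrects for unmodelled deformation. The tiny linear residual below stands in for an RL- or imitation-trained policy; all gains, features, and weights are illustrative assumptions.

```python
# Minimal sketch of a hybrid controller: model-based prior + learned
# residual correction. Parameter values are illustrative assumptions.

def model_based_command(x, x_des, k=50.0):
    # baseline command from the (assumed known) object model
    return k * (x_des - x)

def learned_residual(features, weights):
    # a linear model standing in for a trained policy's correction
    return sum(w * f for w, f in zip(weights, features))

def hybrid_command(x, x_des, features, weights):
    return model_based_command(x, x_des) + learned_residual(features, weights)
```

The prior keeps behaviour sensible before any data is seen, while the residual absorbs what the model misses, which is the core appeal of hybrid strategies.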

4.4. End-to-End Learning

End-to-end learning has emerged as a powerful approach for addressing complex robotics and artificial intelligence tasks. This paradigm aims to learn a direct mapping from raw sensory inputs to control outputs, bypassing the need for explicit intermediate representations or handcrafted features. In deformable object manipulation, end-to-end learning has shown great potential for enabling robots to perform tasks with high levels of complexity, adaptability, and robustness.
One of the critical advantages of end-to-end learning is its ability to leverage deep learning techniques, such as CNNs and RNNs, to extract relevant features and representations from the raw sensory data automatically. This allows robots to learn more effectively from large amounts of data and generalise to new situations. Several studies have demonstrated the effectiveness of deep learning-based approaches for deformable object manipulation, such as cloth folding, knot tying, and surgical suturing [33,119].
Reinforcement learning is another critical component of end-to-end learning in deformable object manipulation [111]. Reinforcement learning algorithms enable robots to learn control policies by interacting with their environment and receiving feedback in the form of rewards or penalties [113]. This trial-and-error learning process allows robots to discover effective manipulation strategies without requiring explicit object models or human supervision. Recent advances in deep reinforcement learning have shown promising results in learning complex control policies for deformable object manipulation tasks [112,120].
Imitation learning is another end-to-end learning technique successfully applied to deformable object manipulation. Robots can learn control policies that leverage human expertise by observing and mimicking human demonstrations or other expert behaviour [115]. This approach can accelerate learning and enable robots to perform challenging tasks more efficiently, such as cloth folding, knot tying, and surgical suturing.
One of the critical challenges in end-to-end learning for deformable object manipulation is the need for large amounts of training data, as deep learning techniques typically require a considerable amount of data to achieve a good generalisation performance [121]. Collecting this data can be time-consuming and labour-intensive, especially in real-world caregiving scenarios.
Researchers have explored various data augmentation techniques to address this challenge, such as domain randomisation and synthetic data generation, to generate more diverse and representative training data [122].
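The domain randomisation idea can be sketched directly: each simulated training episode samples physical and visual parameters from broad ranges so the learned policy sees enough variety to transfer to the real world. The parameter names and ranges below are illustrative assumptions, not values from any cited benchmark.

```python
import random

# Minimal sketch of domain randomisation for sim-to-real training.
# Parameter names and ranges are illustrative assumptions.

def randomise_domain(rng):
    return {
        "cloth_stiffness": rng.uniform(10.0, 200.0),   # N/m
        "friction":        rng.uniform(0.1, 1.0),
        "mass":            rng.uniform(0.05, 0.5),     # kg
        "light_intensity": rng.uniform(0.3, 1.5),
        "camera_jitter":   rng.uniform(-0.02, 0.02),   # m
    }

def make_training_domains(n, seed=0):
    rng = random.Random(seed)
    return [randomise_domain(rng) for _ in range(n)]
```

Training across such sampled domains pushes the policy toward behaviour that is robust to the parameters it cannot observe, trading some per-domain optimality for transferability.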
Another challenge in end-to-end learning is specifying appropriate reward functions for reinforcement learning algorithms. Defining a suitable reward function that accurately captures task objectives and constraints can be challenging in many deformable object manipulation tasks. One promising solution is to use inverse reinforcement learning techniques, which learn reward functions from demonstrations or expert behaviour. By combining inverse reinforcement learning with deep reinforcement learning, robots can learn more effective control policies for deformable object manipulation tasks [116,117,118,123].

4.5. Manipulator and Assistive Device Designs

The following sections focus on practical robotic manipulator and assistive device designs for handling deformable objects in caregiving scenarios, rather than simulation techniques [77,78,79]. They provide an overview of recent hardware innovations that can directly improve real-world caregiving tasks and quality of life [21,22,74]. The aim is to highlight advancements in robotic arms, grippers, and assistive technologies that address specific requirements and challenges in deformable object manipulation during service provision [8,73].
In service provision, caregiving robot actuator designs can be categorised into manipulator and assistive device designs [13,124]. This section examines the design characteristics of robots handling deformable objects within these two service modalities, highlighting the distinct design focus of each type.

4.5.1. Manipulator Design

Robotic arms and grippers are essential components in deformable object manipulation within caregiving scenarios. In recent years, there have been significant advancements in robotic arm and gripper designs, aiming to address the specific requirements and challenges of various caregiving tasks.
For instance, lightweight robotic arms with enhanced dexterity and adaptability have been developed, which are particularly suitable for tasks such as dressing assistance [74], food feeding [26], or personal hygiene assistance [109]. These robotic arms often incorporate advanced sensing and control technologies to ensure safe and efficient human–robot interaction. Collaborative robots (cobots) have also been explored for caregiving applications, emphasising human–robot collaboration while maintaining safety standards [125].
Regarding grippers, many recent designs have been proposed to handle deformable objects in caregiving tasks. Soft robotic grippers have gained significant attention due to their ability to conform to the shape of deformable objects and provide gentle yet firm grasping. Recent advancements in soft robotic grippers include adaptability, sensing capabilities, and control strategies [73].
Another noteworthy development in gripper technology is the use of AI-enabled multi-fingered hands, which are capable of learning and adapting grasping strategies based on sensory feedback, making them particularly suitable for handling deformable objects with complex geometries [126]. These grippers often employ advanced machine learning techniques, such as deep learning and reinforcement learning, to acquire efficient grasping strategies from the data [33].
In addition, researchers have investigated the use of underactuated grippers, which can adapt to the shape of deformable objects using fewer actuators than the number of degrees of freedom [127]. These grippers balance complexity, weight, and adaptability, making them suitable for specific caregiving tasks.

Recent research in robotic arms and grippers for deformable object manipulation in caregiving scenarios has focused on enhancing dexterity, adaptability, and human–robot interaction and incorporating advanced sensing and learning capabilities. This progress in robotics and assistive technology promises to improve the quality of life and independence for individuals requiring caregiving assistance.

4.5.2. Assistive Device Design

Assistive technologies are essential components of robotic systems for deformable object manipulation in caregiving as they provide the necessary support, guidance, and augmentation of the robot’s capabilities to perform complex tasks safely and effectively [128]. These technologies encompass many solutions, such as sensor-based guidance systems, haptic interfaces, teleoperation, and shared control [128].
Sensor-based guidance systems can enhance the robot’s perception and understanding of deformable objects and their environment, enabling a more accurate and reliable manipulation performance. These systems often integrate multiple sensing modalities, such as vision, tactile, and force sensing, to comprehensively represent the object’s shape, deformation, and interaction forces [129].
Haptic interfaces can provide intuitive and effective human–robot interaction in caregiving scenarios, allowing the human operator to feel and manipulate the deformable object via force feedback and tactile sensations. These interfaces can facilitate precise control of the robot’s actions, improve the operator’s situational awareness, and enable more natural and intuitive communication between humans and robots [130].
Teleoperation and shared control methods enable human operators to guide or supervise the robot’s actions during deformable object manipulation tasks, leveraging their expertise and cognitive capabilities [131]. Teleoperation allows the operator to control the robot’s motions and activities directly. In contrast, shared control methods distribute the control responsibilities between humans and robots based on their strengths and limitations.
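The shared control arbitration described above can be sketched as a weighted blend of the human teleoperation input and the robot's autonomous command, with the blending weight shifting toward the robot as its confidence rises. How that confidence is estimated is left open here and is an assumption of the sketch.

```python
# Minimal sketch of shared control: blend human and autonomous commands
# per axis. robot_confidence in [0, 1] sets the robot's share; the
# confidence source itself is a hypothetical upstream estimator.

def shared_control(human_cmd, robot_cmd, robot_confidence):
    alpha = max(0.0, min(1.0, robot_confidence))   # clamp to [0, 1]
    return [(1 - alpha) * h + alpha * r for h, r in zip(human_cmd, robot_cmd)]
```

At confidence 0 the operator teleoperates directly; at confidence 1 the robot acts autonomously; intermediate values distribute control between the two, matching the spectrum described in the text.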

5. Safety Assurance and Social Challenges

While advances in modeling, sensing, control, and learning show great promise for improving the robotic manipulation of deformable objects, ensuring these technologies are safely and responsibly integrated into caregiving contexts remains critically important. As robots and assistive devices interact with patients in sensitive healthcare and home environments, maintaining safety, trust, and acceptance by users must be prioritised. Seamless collaboration between humans and robots is needed to leverage their complementary strengths. Additionally, ethical considerations regarding privacy, autonomy, and effects on caregiver employment must be proactively addressed.
The following section examines key factors in ensuring safety during deformable object manipulation and overcoming societal barriers to enable patient-centered integration of robots in caregiving roles. A holistic approach considering mechanical design, adaptive control, robust perception, intuitive interfaces, and transparent data practices can lead to the responsible adoption of robotic assistance in domains like elderly care and medicine. By coupling technological innovation with human-centric practices, deformable object manipulation in caregiving can be advanced to improve care quality while mitigating risks.

5.1. Safety Assurance

Ensuring safety during the robotic manipulation of deformable objects in caregiving settings requires addressing risks across multiple areas: mechanical design, control strategies, perception, and human–robot interaction. By considering safety holistically across these domains, robotic systems can safely assist with care tasks involving deformable objects.
  • Mechanical design plays a key role in mitigating safety risks. Soft robotic systems built with compliant materials can conform to objects and distribute forces more evenly during manipulation. This reduces risks of damage or injury compared to traditional rigid robots [132]. Compliant joints and actuators also help absorb impacts from collisions.
  • Control strategies must adapt in real time to deformable objects’ changing shapes and material properties. Model-based methods like impedance control allow robotic stiffness and damping to be adjusted [133]. Incorporating adaptive, safety-aware algorithms enables robots to respond appropriately to variations in the environment and task [73]. This helps maintain stability and prevent unsafe interactions.
  • Accurate sensing and perception provide the feedback needed for safe control. Estimating object poses, deformations, and material properties enables better manipulation by detecting possible risks [124]. Vision, tactile sensors, and sensor fusion give comprehensive data to guide actions [134].
  • Effective human–robot communication establishes shared understanding between the robot and user, resolving uncertainties that may lead to unsafe conditions [135]. Intuitive interfaces, legible motion, and recognising human actions facilitate safe collaboration when assisting with care tasks [76].
By holistically addressing safety across these areas, robotic systems can reliably and responsibly manipulate deformable objects to assist with caregiving needs. The integration of compliant designs, adaptive control, robust perception, and human-aware interaction enables safe, effective assistance.

5.2. Social and Psychological Challenges

As the development and application of robotics in caregiving scenarios continue to grow, there is an increasing need to address the social and psychological challenges associated with integrating robots into the caregiving process. This section discusses several critical social and psychological issues, including user acceptance, human–robot interaction, privacy concerns, and the potential impact on the caregiver workforce [136].
User acceptance is crucial for successfully implementing robotic systems in caregiving environments [137]. Several factors can influence user acceptance, such as perceived usefulness, ease of use, and the degree to which the robot’s appearance and behaviour conform to user expectations and cultural norms. Studies have shown that the elderly and people with disabilities generally have positive attitudes toward assistive robots [138]. Still, concerns about the robot’s reliability, safety, and potential impact on personal autonomy and human contact may hinder acceptance [139]. Research confirms that perceived usefulness and ease of use strongly influence user acceptance of caregiving robots [36], making them priorities in design. To address these concerns, researchers and designers should actively involve end-users in the development process, conducting user studies and evaluations to ensure that robotic systems meet their needs and preferences.
Human–robot interaction (HRI) is a critical aspect of robotic caregiving systems, as effective communication and collaboration between humans and robots are necessary to complete caregiving tasks successfully. HRI research in the context of caregiving has focused on developing natural and intuitive interfaces, such as speech recognition, gesture recognition, and touch-based interaction, to facilitate seamless communication between humans and robots. Moreover, research on social robotics has explored the development of robots with socially appropriate behaviours, emotional intelligence, and the ability to recognise and respond to human emotions and social cues [140]. These advances can help build trust, rapport, and a sense of companionship between humans and robots, which is essential in caregiving scenarios where emotional support and empathy are crucial [141].
Privacy is a significant concern in deploying robotic systems in caregiving environments, as the collection and processing of sensitive personal data, such as health information and daily routines, could be misused or compromised [142]. Researchers have proposed various strategies to address these privacy concerns, including using privacy-preserving algorithms, secure data storage and transmission protocols, and transparent data management policies that respect user consent and preferences [143]. It is essential to balance the need for data to enable practical robotic assistance and the protection of user privacy.
Caregiver workforce. As robotic systems become more capable and autonomous, there are concerns that human caregivers may be replaced or marginalised, leading to job loss, deskilling, and reduced quality of care.
Some studies have also found that, although robots in nursing environments are intended to reduce the physical and mental demands on caregivers, in practice they can increase caregivers’ workloads [2]. However, by taking over repetitive, physically demanding, or time-consuming tasks, robots can free human caregivers to focus on more complex, emotionally engaging, and personalised aspects of care. To ensure that integrating robots into caregiving environments positively impacts caregivers and care recipients, it is crucial to develop collaborative approaches that leverage the complementary strengths of humans and robots.

6. Summary

This paper provides a comprehensive review of deformable object manipulation technologies for robotic caregiving applications. The growing demand for assistive robots highlights the need to advance robotic capabilities in handling real-world deformable objects.
Common materials like textiles, gels, and aggregates pose modeling, control, and sensing challenges due to their complex dynamics and properties. Representing their behaviours, planning manipulations, and perceiving their states accurately remain difficult, open problems.
The innovations covered include analytical modeling, data-driven learning, sensor fusion, and end-to-end policy learning. Hybrid analytical-learned approaches are promising, and tactile, visual, and depth sensing underpin key perception capabilities. Recent laboratory successes exhibit potential, but real-world deployment lags behind. Enhancing speed, safety, adaptability, and human compatibility is vital for adoption, while cost, reliability, and workflow integration also remain open challenges.
This review has some limitations. It focuses on technical approaches without deeply analysing ethical, legal, and social implications. It is also limited to published academic literature, excluding proprietary innovations. Additionally, it takes an engineering perspective without emphasising caregiver and healthcare professional perspectives.
Furthermore, Section 3.3.2 provides a qualitative multi-factor analysis for prioritising research directions. However, it lacks specific quantitative data and only provides analytical dimensions and theoretical methods. Future work should gather quantitative data via surveys or field visits to conduct more rigorous statistical analysis and draw informed, data-driven conclusions.
Further research should evaluate technologies with patients and caregivers to arrive at safe, ethical, human-compatible solutions. Collaboration between engineers, computer scientists, and healthcare researchers is critical for understanding the needs of all stakeholders. Studies into practical implementation factors like cost, reliability, and integration are also needed to drive real-world adoption and impact.
In conclusion, this review synthesises critical work and indicates areas needing further multi-disciplinary innovation. Analytical methods, data-driven learning, and human factors considerations should be combined moving forward. The aim is to guide the advancement of real-world capabilities for improved robotic caregiving.

Author Contributions

Conceptualization, L.W. and J.Z.; methodology, L.W. and J.Z.; software, L.W. and J.Z.; validation, L.W. and J.Z.; formal analysis, L.W. and J.Z.; investigation, L.W. and J.Z.; resources, L.W. and J.Z.; data curation, L.W. and J.Z.; writing—original draft preparation, L.W.; writing—review and editing, J.Z.; visualization, J.Z.; supervision, J.Z.; project administration, L.W. and J.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. WHO. Ageing and Health; World Health Organization: Geneva, Switzerland, 2022. [Google Scholar]
  2. Persson, M.; Redmalm, D.; Iversen, C. Caregivers’ use of robots and their effect on work environment—A scoping review. J. Technol. Hum. Serv. 2021, 40, 251–277. [Google Scholar] [CrossRef]
  3. Madan, R.; Jenamani, R.K.; Nguyen, V.T.; Moustafa, A.; Hu, X.; Dimitropoulou, K.; Bhattacharjee, T. SPARCS: Structuring Physically Assistive Robotics for Caregiving with Stakeholders-in-the-loop. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022. [Google Scholar]
  4. Pfadenhauer, M.; Dukat, C. Robot Caregiver or Robot-Supported Caregiving? Int. J. Soc. Robot. 2015, 7, 393–406. [Google Scholar] [CrossRef]
  5. Suomalainen, M.; Karayiannidis, Y.; Kyrki, V. A survey of robot manipulation in contact. Robot. Auton. Syst. 2022, 156, 104224. [Google Scholar] [CrossRef]
  6. Schulman, J.; Lee, A.; Ho, J.; Abbeel, P. Tracking deformable objects with point clouds. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 7–9 May 2013. [Google Scholar]
  7. Kesner, S.B.; Howe, R.D. Force control of flexible catheter robots for beating heart surgery. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011. [Google Scholar]
  8. Nadon, F.; Valencia, A.; Payeur, P. Multi-Modal Sensing and Robotic Manipulation of Non-Rigid Objects: A Survey. Robotics 2018, 7, 74. [Google Scholar] [CrossRef]
  9. Jiménez, P. Survey on model-based manipulation planning of deformable objects. Robot. Comput.-Integr. Manuf. 2012, 28, 154–163. [Google Scholar] [CrossRef]
  10. Zou, L.; Ge, C.; Wang, Z.; Cretu, E.; Li, X. Novel Tactile Sensor Technology and Smart Tactile Sensing Systems: A Review. Sensors 2017, 17, 2653. [Google Scholar] [CrossRef]
  11. Mason, M.T. Dynamic Manipulation. In Mechanics of Robotic Manipulation; The MIT Press: Cambridge, MA, USA, 2001. [Google Scholar] [CrossRef]
  12. Han, D.; Mulyana, B.; Stankovic, V.; Cheng, S. A Survey on Deep Reinforcement Learning Algorithms for Robotic Manipulation. Sensors 2023, 23, 3762. [Google Scholar] [CrossRef] [PubMed]
  13. Kolachalama, S.; Lakshmanan, S. Continuum Robots for Manipulation Applications: A Survey. J. Robot. 2020, 2020, 4187048. [Google Scholar] [CrossRef]
  14. Han, J.; Davids, J.; Ashrafian, H.; Darzi, A.; Elson, D.S.; Sodergren, M. A systematic review of robotic surgery: From supervised paradigms to fully autonomous robotic approaches. Int. J. Med. Robot. Comput. Assist. Surg. 2021, 18, e2358. [Google Scholar] [CrossRef]
  15. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
  16. Jalon, J.G.; Cuadrado, J.; Avello, A.; Jimenez, J.M. Kinematic and Dynamic Simulation of Rigid and Flexible Systems with Fully Cartesian Coordinates. In Computer-Aided Analysis of Rigid and Flexible Mechanical Systems; Springer: Dordrecht, The Netherlands, 1994; pp. 285–323. [Google Scholar]
  17. Spong, M.W.; Hutchinson, S.; Vidyasagar, M. Robot Modeling and Control; John Wiley and Sons: Hoboken, NJ, USA, 2020. [Google Scholar]
  18. Bohg, J.; Morales, A.; Asfour, T.; Kragic, D. Data-Driven Grasp Synthesis—A Survey. IEEE Trans. Robot. 2014, 30, 289–309. [Google Scholar] [CrossRef]
  19. Koganti, N.; Tamei, T.; Matsubara, T.; Shibata, T. Real-time estimation of Human-Cloth topological relationship using depth sensor for robotic clothing assistance. In Proceedings of the 23rd IEEE International Symposium on Robot and Human Interactive Communication, Edinburgh, UK, 25–29 August 2014. [Google Scholar]
  20. Goldberg, K. Deep Transfer Learning of Pick Points on Fabric for Robot Bed-Making. In Proceedings of the Robotics Research: The 19th International Symposium ISRR; Springer Nature: Cham, Switzerland, 2022; Volume 20, p. 275. [Google Scholar]
  21. Avigal, Y.; Berscheid, L.; Asfour, T.; Kroger, T.; Goldberg, K. SpeedFolding: Learning Efficient Bimanual Folding of Garments. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022. [Google Scholar]
  22. Yang, P.C.; Sasaki, K.; Suzuki, K.; Kase, K.; Sugano, S.; Ogata, T. Repeatable Folding Task by Humanoid Robot Worker Using Deep Learning. IEEE Robot. Autom. Lett. 2017, 2, 397–403. [Google Scholar] [CrossRef]
  23. Park, D.; Hoshi, Y.; Mahajan, H.P.; Kim, H.K.; Erickson, Z.; Rogers, W.A.; Kemp, C.C. Active robot-assisted feeding with a general-purpose mobile manipulator: Design, evaluation, and lessons learned. Robot. Auton. Syst. 2020, 124, 103344. [Google Scholar] [CrossRef]
  24. Bhattacharjee, T.; Lee, G.; Song, H.; Srinivasa, S.S. Towards Robotic Feeding: Role of Haptics in Fork-Based Food Manipulation. IEEE Robot. Autom. Lett. 2019, 4, 1485–1492. [Google Scholar] [CrossRef]
  25. Feng, R.; Kim, Y.; Lee, G.; Gordon, E.K.; Schmittle, M.; Kumar, S.; Bhattacharjee, T.; Srinivasa, S.S. Robot-Assisted Feeding: Generalizing Skewering Strategies Across Food Items on a Plate. In Springer Proceedings in Advanced Robotics; Springer International Publishing: Cham, Switzerland, 2022; pp. 427–442. [Google Scholar]
  26. Hai, N.D.X.; Thinh, N.T. Self-Feeding Robot for Elder People and Parkinson’s Patients in Meal Supporting. Int. J. Mech. Eng. Robot. Res. 2022, 11, 241–247. [Google Scholar] [CrossRef]
  27. Sanchez, J.; Corrales, J.A.; Bouzgarrou, B.C.; Mezouar, Y. Robotic manipulation and sensing of deformable objects in domestic and industrial applications: A survey. Int. J. Robot. Res. 2018, 37, 688–716. [Google Scholar] [CrossRef]
  28. Li, J.; Sun, W.; Gu, X.; Guo, J.; Ota, J.; Huang, Z.; Zhang, Y. A Method for a Compliant Robot Arm to Perform a Bandaging Task on a Swaying Arm: A Proposed Approach. IEEE Robot. Autom. Mag. 2023, 30, 50–61. [Google Scholar] [CrossRef]
  29. Park, F.; Bobrow, J.; Ploen, S. A Lie Group Formulation of Robot Dynamics. Int. J. Robot. Res. 1995, 14, 609–618. [Google Scholar] [CrossRef]
  30. Mergler, H. Introduction to robotics. IEEE J. Robot. Autom. 1985, 1, 215. [Google Scholar] [CrossRef]
  31. Lévesque, V.; Pasquero, J.; Hayward, V.; Legault, M. Display of virtual braille dots by lateral skin deformation: Feasibility study. ACM Trans. Appl. Percept. 2005, 2, 132–149. [Google Scholar] [CrossRef]
  32. Yang, Y.; Li, Y.; Fermuller, C.; Aloimonos, Y. Robot Learning Manipulation Action Plans by “Watching” Unconstrained Videos from the World Wide Web. Proc. AAAI Conf. Artif. Intell. 2015, 29, 3692–3696. [Google Scholar] [CrossRef]
  33. Li, S.; Ma, X.; Liang, H.; Gorner, M.; Ruppel, P.; Fang, B.; Sun, F.; Zhang, J. Vision-based Teleoperation of Shadow Dexterous Hand using End-to-End Deep Neural Network. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), IEEE, Montreal, QC, Canada, 20–24 May 2019. [Google Scholar]
  34. Wu, Y.; Yan, W.; Kurutach, T.; Pinto, L.; Abbeel, P. Learning to Manipulate Deformable Objects without Demonstrations. In Proceedings of the Robotics: Science and Systems XVI. Robotics: Science and Systems Foundation, Corvalis, OR, USA, 12–16 July 2020. [Google Scholar]
  35. Teng, Y.; Lu, H.; Li, Y.; Kamiya, T.; Nakatoh, Y.; Serikawa, S.; Gao, P. Multidimensional Deformable Object Manipulation Based on DN-Transporter Networks. IEEE Trans. Intell. Transp. Syst. 2023, 24, 4532–4540. [Google Scholar] [CrossRef]
  36. Bedaf, S.; Draper, H.; Gelderblom, G.J.; Sorell, T.; de Witte, L. Can a Service Robot Which Supports Independent Living of Older People Disobey a Command? The Views of Older People, Informal Carers and Professional Caregivers on the Acceptability of Robots. Int. J. Soc. Robot. 2016, 8, 409–420. [Google Scholar] [CrossRef]
  37. Yamazaki, K.; Oya, R.; Nagahama, K.; Okada, K.; Inaba, M. Bottom dressing by a life-sized humanoid robot provided failure detection and recovery functions. In Proceedings of the 2014 IEEE/SICE International Symposium on System Integration, Tokyo, Japan, 13–15 December 2014. [Google Scholar]
  38. Li, Y.; Xiao, A.; Feng, Q.; Zou, T.; Tian, C. Design of Service Robot for Wearing and Taking off Footwear. E3S Web Conf. 2020, 189, 03024. [Google Scholar] [CrossRef]
  39. Jia, B.; Pan, Z.; Hu, Z.; Pan, J.; Manocha, D. Cloth Manipulation Using Random-Forest-Based Imitation Learning. IEEE Robot. Autom. Lett. 2019, 4, 2086–2093. [Google Scholar] [CrossRef]
  40. Tsurumine, Y.; Matsubara, T. Goal-aware generative adversarial imitation learning from imperfect demonstration for robotic cloth manipulation. Robot. Auton. Syst. 2022, 158, 104264. [Google Scholar] [CrossRef]
  41. Verleysen, A.; Holvoet, T.; Proesmans, R.; Den Haese, C.; Wyffels, F. Simpler Learning of Robotic Manipulation of Clothing by Utilizing DIY Smart Textile Technology. Appl. Sci. 2020, 10, 4088. [Google Scholar] [CrossRef]
  42. Jia, B.; Hu, Z.; Pan, Z.; Manocha, D.; Pan, J. Learning-based feedback controller for deformable object manipulation. arXiv 2018, arXiv:1806.09618. [Google Scholar]
  43. Deng, Y.; Xia, C.; Wang, X.; Chen, L. Graph-Transporter: A Graph-based Learning Method for Goal-Conditioned Deformable Object Rearranging Task. In Proceedings of the 2022 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Prague, Czech Republic, 9–12 October 2022. [Google Scholar]
  44. Baek, J. Smart predictive analytics care monitoring model based on multi sensor IoT system: Management of diaper and attitude for the bedridden elderly. Sensors Int. 2023, 4, 100213. [Google Scholar] [CrossRef]
  45. Torrisi, M.; Maggio, M.G.; De Cola, M.C.; Zichittella, C.; Carmela, C.; Porcari, B.; la Rosa, G.; De Luca, R.; Naro, A.; Calabrò, R.S. Beyond motor recovery after stroke: The role of hand robotic rehabilitation plus virtual reality in improving cognitive function. J. Clin. Neurosci. 2021, 92, 11–16. [Google Scholar] [CrossRef]
  46. Vaida, C.; Birlescu, I.; Pisla, A.; Ulinici, I.M.; Tarnita, D.; Carbone, G.; Pisla, D. Systematic Design of a Parallel Robotic System for Lower Limb Rehabilitation. IEEE Access 2020, 8, 34522–34537. [Google Scholar] [CrossRef]
  47. Chockalingam, M.; Vasanthan, L.T.; Balasubramanian, S.; Sriram, V. Experiences of patients who had a stroke and rehabilitation professionals with upper limb rehabilitation robots: A qualitative systematic review protocol. BMJ Open 2022, 12, e065177. [Google Scholar] [CrossRef] [PubMed]
  48. Frei, J.; Ziltener, A.; Wüst, M.; Havelka, A.; Lohan, K. Iterative Development of a Service Robot for Laundry Transport in Nursing Homes. In Social Robotics; Springer Nature: Cham, Switzerland, 2022; pp. 359–370. [Google Scholar]
  49. Hussin, E.; Jie Jian, W.; Sahar, N.; Zakariya, A.; Ridzuan, A.; Suhana, C.; Mohamed Juhari, R.; Wei Hong, L. A Healthcare Laundry Management System using RFID System. Proc. Int. Conf. Artif. Life Robot. 2022, 27, 875–880. [Google Scholar] [CrossRef]
  50. Zhang, A.; Yao, Y.; Hu, Y. Analyzing the Design of Windows Cleaning Robots. In Proceedings of the 2022 3rd International Conference on Big Data, Artificial Intelligence and Internet of Things Engineering (ICBAIE), IEEE, Xi’an, China, 15–17 July 2022. [Google Scholar]
  51. Lo, W.S.; Yamamoto, C.; Pattar, S.P.; Tsukamoto, K.; Takahashi, S.; Sawanobori, T.; Mizuuchi, I. Developing a Collaborative Robotic Dishwasher Cell System for Restaurants. In Lecture Notes in Networks and Systems; Springer International Publishing: Cham, Switzerland, 2022; pp. 261–275. [Google Scholar]
  52. Lin, X.; Wang, Y.; Olkin, J.; Held, D. SoftGym: Benchmarking Deep Reinforcement Learning for Deformable Object Manipulation. Proc. Mach. Learn. Res. 2021, 155, 432–448. [Google Scholar]
  53. Scheikl, P.M.; Tagliabue, E.; Gyenes, B.; Wagner, M.; Dall’Alba, D.; Fiorini, P.; Mathis-Ullrich, F. Sim-to-Real Transfer for Visual Reinforcement Learning of Deformable Object Manipulation for Robot-Assisted Surgery. IEEE Robot. Autom. Lett. 2023, 8, 560–567. [Google Scholar] [CrossRef]
  54. Thach, B.; Kuntz, A.; Hermans, T. DeformerNet: A Deep Learning Approach to 3D Deformable Object Manipulation. arXiv 2021, arXiv:2107.08067v1. [Google Scholar]
  55. Yin, H.; Varava, A.; Kragic, D. Modeling, learning, perception, and control methods for deformable object manipulation. Sci. Robot. 2021, 6, eabd8803. [Google Scholar] [CrossRef] [PubMed]
  56. Delgado, A.; Corrales, J.; Mezouar, Y.; Lequievre, L.; Jara, C.; Torres, F. Tactile control based on Gaussian images and its application in bi-manual manipulation of deformable objects. Robot. Auton. Syst. 2017, 94, 148–161. [Google Scholar] [CrossRef]
  57. Frank, B.; Stachniss, C.; Abdo, N.; Burgard, W. Using Gaussian Process Regression for Efficient Motion Planning in Environments with Deformable Objects. In Proceedings of the AAAIWS’11-09: Proceedings of the 9th AAAI Conference on Automated Action Planning for Autonomous Mobile Robots, San Francisco, CA, USA, 7 August 2011. [Google Scholar]
  58. Hu, Z.; Sun, P.; Pan, J. Three-Dimensional Deformable Object Manipulation Using Fast Online Gaussian Process Regression. IEEE Robot. Autom. Lett. 2018, 3, 979–986. [Google Scholar] [CrossRef]
  59. Antonova, R.; Yang, J.; Sundaresan, P.; Fox, D.; Ramos, F.; Bohg, J. A Bayesian Treatment of Real-to-Sim for Deformable Object Manipulation. IEEE Robot. Autom. Lett. 2022, 7, 5819–5826. [Google Scholar] [CrossRef]
  60. Zheng, C.X.; Colomé, A.; Sentis, L.; Torras, C. Mixtures of Controlled Gaussian Processes for Dynamical Modeling of Deformable Objects. In Proceedings of the 4th Annual Learning for Dynamics and Control Conference, PMLR, Stanford, CA, USA, 23–24 June 2022; Volume 168, pp. 415–426. [Google Scholar]
  61. Li, R.; Platt, R.; Yuan, W.; ten Pas, A.; Roscup, N.; Srinivasan, M.A.; Adelson, E. Localization and manipulation of small parts using GelSight tactile sensing. In Proceedings of the 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA, 14–18 September 2014. [Google Scholar]
  62. Cui, S.; Wang, R.; Wei, J.; Li, F.; Wang, S. Grasp State Assessment of Deformable Objects Using Visual-Tactile Fusion Perception. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020. [Google Scholar]
  63. Khalil, F.F.; Payeur, P. Robotic Interaction with Deformable Objects under Vision and Tactile Guidance—A Review. In Proceedings of the 2007 International Workshop on Robotic and Sensors Environments, IEEE, Kyoto, Japan, 23–27 October 2007. [Google Scholar]
  64. Yamaguchi, A.; Atkeson, C.G. Tactile Behaviors with the Vision-Based Tactile Sensor FingerVision. Int. J. Humanoid Robot. 2019, 16, 1940002. [Google Scholar] [CrossRef]
  65. Liang, L.; Liu, M.; Martin, C.; Sun, W. A deep learning approach to estimate stress distribution: A fast and accurate surrogate of finite-element analysis. J. R. Soc. Interface 2018, 15, 20170844. [Google Scholar] [CrossRef]
  66. Kapitanyuk, Y.A.; Proskurnikov, A.V.; Cao, M. A Guiding Vector-Field Algorithm for Path-Following Control of Nonholonomic Mobile Robots. IEEE Trans. Control Syst. Technol. 2018, 26, 1372–1385. [Google Scholar] [CrossRef]
  67. Kalashnikov, D.; Irpan, A.; Pastor, P.; Ibarz, J.; Herzog, A.; Jang, E.; Quillen, D.; Holly, E.; Kalakrishnan, M.; Vanhoucke, V.; et al. QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation. arXiv 2018, arXiv:1806.10293. [Google Scholar]
  68. Hogan, N. Impedance Control: An Approach to Manipulation. In Proceedings of the 1984 American Control Conference, IEEE, San Diego, CA, USA, 6–8 June 1984. [Google Scholar]
  69. Edsinger, A.; Kemp, C.C. Human-Robot Interaction for Cooperative Manipulation: Handing Objects to One Another. In Proceedings of the RO-MAN 2007—The 16th IEEE International Symposium on Robot and Human Interactive Communication, Jeju, Republic of Korea, 26–29 August 2007. [Google Scholar]
  70. Kruse, D.; Radke, R.J.; Wen, J.T. Collaborative human-robot manipulation of highly deformable materials. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015. [Google Scholar]
  71. Sirintuna, D.; Giammarino, A.; Ajoudani, A. Human-Robot Collaborative Carrying of Objects with Unknown Deformation Characteristics. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022. [Google Scholar]
  72. Lotti, N.; Xiloyannis, M.; Durandau, G.; Galofaro, E.; Sanguineti, V.; Masia, L.; Sartori, M. Adaptive Model-Based Myoelectric Control for a Soft Wearable Arm Exosuit: A New Generation of Wearable Robot Control. IEEE Robot. Autom. Mag. 2020, 27, 43–53. [Google Scholar] [CrossRef]
  73. Shintake, J.; Cacucciolo, V.; Floreano, D.; Shea, H. Soft Robotic Grippers. Adv. Mater. 2018, 30, 1707035. [Google Scholar] [CrossRef] [PubMed]
  74. Zhu, J.; Gienger, M.; Franzese, G.; Kober, J. Do You Need a Hand?—A Bimanual Robotic Dressing Assistance Scheme. arXiv 2023, arXiv:2301.02749. [Google Scholar] [CrossRef]
  75. Yarin, A. Mathematical Modeling in Continuum Mechanics, R. Temam and A. Miranville. Cambridge University Press, Cambridge, 2000. Int. J. Multiph. Flow 2002, 28, 881–883. [Google Scholar] [CrossRef]
  76. ISO 13482:2014; Robots and Robotic Devices—Safety Requirements for Personal Care Robots. International Organization for Standardization: Geneva, Switzerland, 2014.
  77. Huang, Z.; Hu, Y.; Du, T.; Zhou, S.; Su, H.; Tenenbaum, J.B.; Gan, C. Plasticinelab: A soft-body manipulation benchmark with differentiable physics. arXiv 2021, arXiv:2104.03311. [Google Scholar]
  78. Huang, I.; Narang, Y.; Eppner, C.; Sundaralingam, B.; Macklin, M.; Bajcsy, R.; Hermans, T.; Fox, D. DefGraspSim: Physics-Based Simulation of Grasp Outcomes for 3D Deformable Objects. IEEE Robot. Autom. Lett. 2022, 7, 6274–6281. [Google Scholar] [CrossRef]
  79. Ye, R.; Xu, W.; Fu, H.; Jenamani, R.K.; Nguyen, V.; Lu, C.; Dimitropoulou, K.; Bhattacharjee, T. RCare World: A Human-centric Simulation World for Caregiving Robots. In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Kyoto, Japan, 23–27 October 2022. [Google Scholar]
  80. Roy, A.; Bera, R.K. Linear and Non-Linear Deformations of Elastic Solids; CRC Press: Boca Raton, FL, USA, 2019. [Google Scholar]
  81. Zienkiewicz, O.C.; Taylor, R.L. The Finite Element Method for Solid and Structural Mechanics; Butterworth-Heinemann: Oxford, UK, 2013. [Google Scholar]
  82. Georgii, J.; Westermann, R. Mass-spring systems on the GPU. Simul. Model. Pract. Theory 2005, 13, 693–702. [Google Scholar] [CrossRef]
  83. Tan, Q.; Pan, Z.; Gao, L.; Manocha, D. Realtime Simulation of Thin-Shell Deformable Materials Using CNN-Based Mesh Embedding. IEEE Robot. Autom. Lett. 2020, 5, 2325–2332. [Google Scholar] [CrossRef]
  84. Mitusch, S.K.; Funke, S.W.; Kuchta, M. Hybrid FEM-NN models: Combining artificial neural networks with the finite element method. J. Comput. Phys. 2021, 446, 110651. [Google Scholar] [CrossRef]
  85. Bullet Physics Library. Available online: (accessed on 15 October 2023).
  86. Faure, F.; Duriez, C.; Delingette, H.; Allard, J.; Gilles, B.; Marchesseau, S.; Talbot, H.; Courtecuisse, H.; Bousquet, G.; Peterlik, I.; et al. SOFA: A Multi-Model Framework for Interactive Physical Simulation. In Studies in Mechanobiology, Tissue Engineering and Biomaterials; Springer: Berlin/Heidelberg, Germany, 2012; pp. 283–321. [Google Scholar]
  87. Seita, D.; Florence, P.; Tompson, J.; Coumans, E.; Sindhwani, V.; Goldberg, K.; Zeng, A. Learning to Rearrange Deformable Cables, Fabrics, and Bags with Goal-Conditioned Transporter Networks. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021. [Google Scholar]
  88. Laezza, R.; Gieselmann, R.; Pokorny, F.T.; Karayiannidis, Y. ReForm: A Robot Learning Sandbox for Deformable Linear Object Manipulation. In Proceedings of the 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi’an, China, 30 May–5 June 2021. [Google Scholar]
  89. Ramisa, A.; Alenyà, G.; Moreno-Noguer, F.; Torras, C. Learning RGB-D descriptors of garment parts for informed robot grasping. Eng. Appl. Artif. Intell. 2014, 35, 246–258. [Google Scholar] [CrossRef]
  90. Song, Y.; Wen, J.; Liu, D.; Yu, C. Deep Robotic Grasping Prediction with Hierarchical RGB-D Fusion. Int. J. Control Autom. Syst. 2022, 20, 243–254. [Google Scholar] [CrossRef]
  91. Ma, X.; Hsu, D.; Lee, W.S. Learning Latent Graph Dynamics for Visual Manipulation of Deformable Objects. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), IEEE, Philadelphia, PA, USA, 23–27 May 2022. [Google Scholar]
  92. Rusinkiewicz, S.; Hall-Holt, O.; Levoy, M. Real-time 3D model acquisition. In Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, San Antonio, TX, USA, 23–26 July 2002. [Google Scholar]
  93. Del Bue, A.; Llad, X.; Agapito, L. Non-Rigid Metric Shape and Motion Recovery from Uncalibrated Images Using Priors. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition—Volume 1 (CVPR’06), New York, NY, USA, 17–22 June 2006. [Google Scholar]
  94. Dahiya, R.; Metta, G.; Valle, M.; Sandini, G. Tactile Sensing—From Humans to Humanoids. IEEE Trans. Robot. 2010, 26, 1–20. [Google Scholar] [CrossRef]
  95. Bicchi, A. Force distribution in multiple whole-limb manipulation. In Proceedings of the 1993 IEEE International Conference on Robotics and Automation, Atlanta, GA, USA, 2–6 May 1993. [Google Scholar]
  96. Kaboli, M.; Yao, K.; Feng, D.; Cheng, G. Tactile-based active object discrimination and target object search in an unknown workspace. Auton. Robot. 2018, 43, 123–152. [Google Scholar] [CrossRef]
  97. Kemp, C.; Edsinger, A.; Torres-Jara, E. Challenges for robot manipulation in human environments [Grand Challenges of Robotics]. IEEE Robot. Autom. Mag. 2007, 14, 20–29. [Google Scholar] [CrossRef]
  98. Bekiroglu, Y.; Laaksonen, J.; Jorgensen, J.A.; Kyrki, V.; Kragic, D. Assessing Grasp Stability Based on Learning and Haptic Data. IEEE Trans. Robot. 2011, 27, 616–629. [Google Scholar] [CrossRef]
  99. Calandra, R.; Owens, A.; Upadhyaya, M.; Yuan, W.; Lin, J.; Adelson, E.; Levine, S. The Feeling of Success: Does Touch Sensing Help Predict Grasp Outcomes? arXiv 2017, arXiv:1710.05512. [Google Scholar]
  100. Siciliano, B.; Khatib, O. Springer Handbook of Robotics; Springer: Cham, Switzerland, 2016. [Google Scholar]
  101. International Conference on Multisensor Fusion and Integration for Intelligent Systems [front matter]. In Proceedings of the 1996 IEEE/SICE/RSJ International Conference on Multisensor Fusion and Integration for Intelligent Systems (Cat. No.96TH8242), Washington, DC, USA, 8–11 December 1996; pp. i–xv. [CrossRef]
  102. O’Doherty, J.E.; Lebedev, M.A.; Ifft, P.J.; Zhuang, K.Z.; Shokur, S.; Bleuler, H.; Nicolelis, M.A.L. Active tactile exploration using a brain–machine–brain interface. Nature 2011, 479, 228–231. [Google Scholar] [CrossRef] [PubMed]
  103. Li, R.; Peng, B. Implementing Monocular Visual-Tactile Sensors for Robust Manipulation. Cyborg Bionic Syst. 2022, 2022, 9797562. [Google Scholar] [CrossRef]
  104. Uneri, A.; Balicki, M.A.; Handa, J.; Gehlbach, P.; Taylor, R.H.; Iordachita, I. New steady-hand Eye Robot with micro-force sensing for vitreoretinal surgery. In Proceedings of the 2010 3rd IEEE RAS and EMBS International Conference on Biomedical Robotics and Biomechatronics, Tokyo, Japan, 26–29 September 2010. [Google Scholar]
  105. Luo, Y.; Nelson, B.J. Fusing force and vision feedback for manipulating deformable objects. J. Robot. Syst. 2001, 18, 103–117. [Google Scholar] [CrossRef]
  106. Choi, S.H.; Tahara, K. Dexterous object manipulation by a multi-fingered robotic hand with visual-tactile fingertip sensors. ROBOMECH J. 2020, 7, 14. [Google Scholar] [CrossRef]
  107. Inceoglu, A.; Aksoy, E.E.; Cihan Ak, A.; Sariel, S. FINO-Net: A Deep Multimodal Sensor Fusion Framework for Manipulation Failure Detection. In Proceedings of the 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic, 27 September–1 October 2021. [Google Scholar]
  108. Wang, C.; Zhang, Y.; Zhang, X.; Wu, Z.; Zhu, X.; Jin, S.; Tang, T.; Tomizuka, M. Offline-Online Learning of Deformation Model for Cable Manipulation With Graph Neural Networks. IEEE Robot. Autom. Lett. 2022, 7, 5544–5551. [Google Scholar] [CrossRef]
  109. Zlatintsi, A.; Dometios, A.; Kardaris, N.; Rodomagoulakis, I.; Koutras, P.; Papageorgiou, X.; Maragos, P.; Tzafestas, C.; Vartholomeos, P.; Hauer, K.; et al. I-Support: A robotic platform of an assistive bathing robot for the elderly population. Robot. Auton. Syst. 2020, 126, 103451. [Google Scholar] [CrossRef]
  110. Bhattacharjee, T.; Gordon, E.K.; Scalise, R.; Cabrera, M.E.; Caspi, A.; Cakmak, M.; Srinivasa, S.S. Is More Autonomy Always Better? In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, Cambridge, UK, 23–26 March 2020. [Google Scholar]
  111. Kober, J.; Peters, J. Reinforcement Learning in Robotics: A Survey. In Adaptation, Learning, and Optimization; Springer: Berlin/Heidelberg, Germany, 2012; pp. 579–610. [Google Scholar]
  112. Nguyen, H.; La, H. Review of Deep Reinforcement Learning for Robot Manipulation. In Proceedings of the 2019 Third IEEE International Conference on Robotic Computing (IRC), Naples, Italy, 25–27 February 2019. [Google Scholar]
  113. Vieira, A.; Ribeiro, B. Reinforcement Learning and Robotics. In Introduction to Deep Learning Business Applications for Developers; Apress: Berkeley, CA, USA, 2018; pp. 137–168. [Google Scholar]
  114. Florence, P.; Manuelli, L.; Tedrake, R. Self-Supervised Correspondence in Visuomotor Policy Learning. IEEE Robot. Autom. Lett. 2020, 5, 492–499. [Google Scholar] [CrossRef]
  115. Qin, Y.; Wu, Y.H.; Liu, S.; Jiang, H.; Yang, R.; Fu, Y.; Wang, X. DexMV: Imitation Learning for Dexterous Manipulation from Human Videos. In Lecture Notes in Computer Science; Springer Nature: Cham, Switzerland, 2022; pp. 570–587. [Google Scholar]
  116. Zhang, X.; Sun, L.; Kuang, Z.; Tomizuka, M. Learning Variable Impedance Control via Inverse Reinforcement Learning for Force-Related Tasks. IEEE Robot. Autom. Lett. 2021, 6, 2225–2232. [Google Scholar] [CrossRef]
  117. Zakka, K.; Zeng, A.; Florence, P.; Tompson, J.; Bohg, J.; Dwibedi, D. XIRL: Cross-embodiment Inverse Reinforcement Learning. Proc. Mach. Learn. Res. 2021, 164, 537–546. [Google Scholar]
  118. Das, N.; Bechtle, S.; Davchev, T.; Jayaraman, D.; Rai, A.; Meier, F. Model-Based Inverse Reinforcement Learning from Visual Demonstrations. Proc. Mach. Learn. Res. 2021, 155, 1930–1942. [Google Scholar]
Figure 1. The outline diagram of this review.
Figure 2. Flow diagram of identification, screening, and inclusion of studies.
Figure 3. Distribution of reference types in this review.
Figure 4. Distribution of references by year for this review. As the year 2023 had not yet concluded, the chart excludes 2023 data; nevertheless, this review includes papers from 2023 available up to the time of writing.
Figure 5. Emerging technological trends in deformable object manipulation (DOM) over the past few decades, as covered in this review.
Figure 6. Key applications of deformable object manipulation in caregiving over time.
Figure 7. Proportion of literature on each type of caregiving application included in this review.
Figure 8. Classification of deformable objects in caregiving environments.
Figure 9. The proportion of literature on each type of deformable object included in this review.
Figure 10. Timeline of representative simulation tools for DOM included in this review.
Wang, L.; Zhu, J. Deformable Object Manipulation in Caregiving Scenarios: A Review. Machines 2023, 11, 1013.