Communication

Adaptive Interaction Control of Compliant Robots Using Impedance Learning

School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China
*
Author to whom correspondence should be addressed.
Sensors 2022, 22(24), 9740; https://doi.org/10.3390/s22249740
Submission received: 8 October 2022 / Revised: 5 December 2022 / Accepted: 6 December 2022 / Published: 12 December 2022
(This article belongs to the Topic Vehicle Dynamics and Control)

Abstract:
This paper presents an impedance learning-based adaptive control strategy for series elastic actuator (SEA)-driven compliant robots without the measurement of the robot–environment interaction force. The adaptive controller is designed based on the command filter-based adaptive backstepping approach, where a command filter is used to decrease computational complexity and avoid the requirement of high derivatives of the robot position. In the controller, environmental impedance profiles and robotic parameter uncertainties are estimated using adaptive learning laws. Through a Lyapunov-based theoretical analysis, the tracking error and estimation errors are proven to be semiglobally uniformly ultimately bounded. The control effectiveness is illustrated through simulations on a compliant robot arm.

1. Introduction

Safety in robot–environment interaction is of significant value and can be improved by passive compliant devices. As a popular compliant device, a series elastic actuator (SEA) introduces elastic elements between the motor and the load, bringing benefits that include low output impedance, tolerance to shocks, and energy efficiency [1,2,3]. Introducing SEAs into robots improves interaction compliance to some extent, but (1) it cannot resolve the conflict between high robot stiffness and the requirement of high compliance, and (2) the compliant actuators have poor adaptability and limited applications, since an SEA makes the robot exhibit only a fixed impedance. The compliance of SEA-driven robots should therefore be further improved by regulating the robot impedance through active compliance control.
As one of the most popular compliance control approaches, impedance control, proposed by Hogan in the 1980s [4], provides interaction compliance through a dynamic relationship between position and interaction force. To date, extensive impedance control strategies for rigid-link robots have been developed based on adaptive learning [5,6,7], sliding mode [8], neural networks [9,10,11], and so on. One significant problem in implementing impedance control is determining the desired robot impedance, which depends strongly on the environmental impedance. Although a variety of methods, including least-squares techniques and programming by demonstration [12,13,14], have been developed for impedance learning, the impedance controllers based on these methods were usually designed without stability guarantees. Recently, model-based impedance learning control strategies [15,16,17] were developed for robot–environment interaction and validated in repetitive tasks with stability guarantees. These approaches provide variable impedance regulation for robots without requiring interaction force sensing. However, the existing model-based impedance learning controllers mainly focus on rigid-link robots. Extending model-based impedance learning control to SEA-driven compliant robots is not straightforward, since the introduction of an SEA significantly increases the control design complexity and turns the control system from a second-order fully actuated system into a fourth-order underactuated system.
Based on the above analysis, designing model-based impedance learning control for SEA-driven robots can combine the advantages of passive compliant devices and active compliance control to improve robot–environment interaction performance; however, to date, no such results have been reported.
In this paper, stability-guaranteed adaptive control using model-based impedance learning is proposed for SEA-driven robots, whose dynamics form a fourth-order underactuated system. The impedance parameters of the interaction force and the model uncertainty parameters are estimated using differential adaptation laws driven by tracking errors. In the control design, the command filter-based adaptive backstepping approach is used to decrease computational complexity and avoid the requirement of high-order derivatives of the robot position in the backstepping control of SEA-driven robots. We prove the semiglobal stability of the closed-loop control system theoretically and illustrate the control effectiveness through simulations on an SEA-driven robot arm. The proposed control strategy can be applied to a range of robot–environment interaction tasks, including robot-assisted rehabilitation, exoskeletons, and polishing. Compared with related results, the contribution of this paper lies in the design of an adaptive impedance learning controller for SEA-driven compliant robots that achieves variable impedance regulation without interaction force sensing.

2. Robot Dynamics

The considered compliant robot has the following dynamics:
$$M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = K(\theta - q) + \tau_{en}, \qquad B\ddot{\theta} + K(\theta - q) = \tau \qquad (1)$$
where $q \in \mathbb{R}^n$ and $\theta \in \mathbb{R}^n$ denote the positions of the rigid-link robot and the SEA, respectively; $M(q)$ and $B$ denote the inertia matrices; $C(q,\dot{q})$ denotes the Coriolis and centrifugal matrix; $G(q)$ is the gravity torque; $K$ is the stiffness matrix of the SEA; $\tau_{en}$ denotes the interaction force between the robot and its environment; and $\tau$ is the system control input.
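To make the structure of (1) concrete, the sketch below evaluates the two accelerations for a single-link ($n = 1$) instance. All numerical values here are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

# Minimal single-link (n = 1) instance of the SEA dynamics in (1).
# All numerical values are illustrative assumptions:
# M = 1 (link inertia), B = 0.5 (motor inertia), K = 20 (spring stiffness),
# c = 1 (viscous Coriolis-like term), G(q) = 4.9*sin(q) (gravity torque).
def sea_accelerations(q, dq, th, dth, tau, tau_en,
                      M=1.0, B=0.5, K=20.0, c=1.0, g=4.9):
    """Return (q_ddot, theta_ddot) from the two equations in (1)."""
    G = g * np.sin(q)                                 # gravity torque G(q)
    ddq = (K * (th - q) + tau_en - c * dq - G) / M    # link side
    ddth = (tau - K * (th - q)) / B                   # motor side
    return ddq, ddth
```

With zero state and a unit motor torque, the spring is unloaded, so the link does not accelerate while the motor does; this reflects the underactuated structure noted in the Introduction.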
Property 1.
$M(q)$ and $B$ are symmetric positive definite matrices, and $M(q)$ satisfies
$$\sigma_1 I \leq M(q) \leq \sigma_2 I \qquad (2)$$
where $\sigma_1$ and $\sigma_2$ are positive constants.
Property 2.
$\dot{M}(q) - 2C(q,\dot{q})$ is skew symmetric, i.e.,
$$\xi^{T}(\dot{M}(q) - 2C(q,\dot{q}))\xi = 0, \quad \forall \xi \in \mathbb{R}^n. \qquad (3)$$
Property 3.
The robot dynamics admit the following parameterized form
$$M(q)\phi_1 + C(q,\dot{q})\phi_2 + G(q) = Y(\phi_1,\phi_2,q,\dot{q})W \qquad (4)$$
where $W$ is a constant vector containing the unknown parameters.
Remark 1.
The model in (1), derived by Spong [18], strikes a balance between complexity and physical validity by neglecting the inertial coupling between the link-side dynamics and the motor. The viability of this model has been demonstrated for compliant robots with SEAs [19].
Denote $q_d$ as the desired trajectory of the robot in the interaction. Define the tracking error $e_1$ as
$$e_1 = q_d - q. \qquad (5)$$
As proven and presented in [17], the robot–environment interaction force can be expanded as
$$\tau_{en} = -K_s e_1 - K_d \dot{e}_1 \qquad (6)$$
where $K_s = \mathrm{diag}\{K_{si}\}$ and $K_d = \mathrm{diag}\{K_{di}\}$ denote the stiffness and damping terms of the interaction, respectively. Denote $Q_e = [\mathrm{diag}\{e_1\}, \mathrm{diag}\{\dot{e}_1\}]$ and $V = [K_{s1}, \ldots, K_{sn}, K_{d1}, \ldots, K_{dn}]^T$. Then, the force $\tau_{en}$ can be expressed as
$$\tau_{en} = -Q_e V. \qquad (7)$$
The objective of this paper is to design model-based adaptive impedance learning control, using differential adaptation to estimate the impedance profiles in $V$, so that the tracking error $e_1$ and the impedance estimation errors are uniformly ultimately bounded (UUB) without measurement of the interaction force $\tau_{en}$.
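The regressor structure above can be checked numerically. The sketch below builds $Q_e$ and $V$ for a hypothetical two-joint ($n = 2$) case, with made-up stiffness and damping values, and confirms that $-Q_e V$ reproduces $-K_s e_1 - K_d \dot{e}_1$ elementwise.

```python
import numpy as np

# Hypothetical n = 2 example of the interaction-force parameterization
# in (6)-(7); stiffness/damping values are made up for illustration.
e1  = np.array([0.10, -0.05])          # position error q_d - q
de1 = np.array([0.02, 0.01])           # its time derivative
Ks  = np.array([8.0, 12.0])            # diagonal stiffness K_s
Kd  = np.array([1.5, 2.0])             # diagonal damping K_d

Qe = np.hstack([np.diag(e1), np.diag(de1)])   # n x 2n regressor Q_e
V  = np.concatenate([Ks, Kd])                 # stacked impedance vector V
tau_en = -Qe @ V                              # interaction force, (7)

print(tau_en)
```

Each component equals $-(K_{si} e_{1i} + K_{di} \dot{e}_{1i})$, so the unknown impedance enters linearly in $V$, which is what makes the differential adaptation below possible.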

3. Impedance Learning-Based Interaction Control

This section presents an impedance learning-based adaptive interaction control strategy for the considered compliant robot using the command filter-based adaptive backstepping (CFAB) approach. The control design proceeds as follows:
Step 1: Define the error $e_2$ as
$$e_2 = \dot{e}_1 + k_1 e_1 \qquad (8)$$
where $k_1$ is a positive parameter. Based on (1), the dynamics of $e_2$ can be stated as
$$M(q)\dot{e}_2 = -C(q,\dot{q})e_2 + M(q)(\ddot{q}_d + k_1\dot{e}_1) + C(q,\dot{q})(\dot{q} + e_2) + G(q) - K(\theta - q) - \tau_{en} = -C(q,\dot{q})e_2 + Y_e W + Q_e V - K(\theta - q) \qquad (9)$$
where $Y_e \triangleq Y(\ddot{q}_d + k_1\dot{e}_1, \dot{q} + e_2, q, \dot{q})$.
Design the virtual control $\alpha_1$ as
$$\alpha_1 = K^{-1}(Kq + k_2 e_2 + Y_e \hat{W} + Q_e \hat{V}) \qquad (10)$$
where $k_2$ is a positive control gain and $\hat{W}$ and $\hat{V}$ are the estimates of $W$ and $V$, respectively. The estimates are updated by
$$\dot{\hat{W}} = \gamma_1 Y_e^T e_2, \qquad \dot{\hat{V}} = \gamma_2 Q_e^T e_2 \qquad (11)$$
where $\gamma_1$ and $\gamma_2$ are positive learning rates.
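Numerically, the learning laws in (11) are gradient-like integrators driven by $e_2$. A forward-Euler discretization (one step shown; the regressor values and step size are illustrative, while the gains match the simulation section) looks as follows.

```python
import numpy as np

# One forward-Euler step of the adaptation laws in (11), n = 1 case.
# gamma1, gamma2 follow the simulation section; Ye, Qe, e2, dt are
# illustrative assumptions.
gamma1, gamma2, dt = 12.0, 10.0, 1e-3

Ye = np.array([[0.5, 0.1, 0.2]])   # 1 x 3 regressor Y_e
Qe = np.array([[0.7, -0.1]])       # 1 x 2 regressor Q_e
e2 = np.array([0.3])               # filtered tracking error

W_hat = np.zeros(3)
V_hat = np.zeros(2)
W_hat = W_hat + gamma1 * (Ye.T @ e2) * dt   # W_hat_dot = gamma1 * Ye^T e2
V_hat = V_hat + gamma2 * (Qe.T @ e2) * dt   # V_hat_dot = gamma2 * Qe^T e2
print(W_hat, V_hat)
```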
Pass $\alpha_1$ through the following command filter
$$\begin{bmatrix} \dot{\delta}_1 \\ \dot{\delta}_2 \end{bmatrix} = \begin{bmatrix} 0 & I \\ -\omega^2 I & -2\xi\omega I \end{bmatrix} \begin{bmatrix} \delta_1 \\ \delta_2 \end{bmatrix} + \begin{bmatrix} 0 \\ \omega^2 I \end{bmatrix} \alpha_1 \qquad (12)$$
where $\omega$ and $\xi \in \mathbb{R}$ are the filter frequency and damping ratio, respectively.
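The filter in (12) is a standard second-order low-pass filter with unity DC gain, so for a constant input its output $\delta_1$ settles at $\alpha_1$. A scalar forward-Euler sketch (the step size, duration, and constant input are assumptions; $\omega$ and $\xi$ follow the simulation section):

```python
import numpy as np

# Second-order command filter (12), scalar case, forward Euler.
omega, xi = 15.0, 0.8        # filter frequency and damping ratio
dt, T = 1e-4, 2.0            # integration step and duration (assumed)
alpha1 = 1.0                 # constant virtual control input (assumed)

d1, d2 = 0.0, 0.0            # filter states delta_1, delta_2
for _ in range(int(T / dt)):
    dd1 = d2
    dd2 = -omega**2 * (d1 - alpha1) - 2.0 * xi * omega * d2
    d1 += dt * dd1
    d2 += dt * dd2

print(d1)   # delta_1 = alpha_1c, the filtered virtual control
```

The filter also exposes $\dot{\alpha}_{1c} = \delta_2$ and $\dot{\delta}_2$ directly, which is exactly what lets the controller avoid differentiating $\alpha_1$ analytically.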
Define $\alpha_{1c} = \delta_1$, $\dot{\alpha}_{1c} = \delta_2$, and
$$\tilde{\alpha}_1 = \alpha_{1c} - \alpha_1. \qquad (13)$$
Substituting (10) and (13) into (9) yields
$$M(q)\dot{e}_2 = -C(q,\dot{q})e_2 - k_2 e_2 + K\tilde{\alpha}_1 + Y_e\tilde{W} + Q_e\tilde{V} \qquad (14)$$
where $\tilde{W} = W - \hat{W}$ and $\tilde{V} = V - \hat{V}$.
Step 2: For the SEA, define the errors $e_3$ and $e_4$ as
$$e_3 = \alpha_{1c} - \theta, \qquad e_4 = \dot{e}_3 + k_3 e_3. \qquad (15)$$
From (1), the dynamics of $e_4$ can be presented as
$$B\dot{e}_4 = K(\theta - q) - \tau + B(\dot{\delta}_2 + k_3\dot{e}_3). \qquad (16)$$
Design the control input $\tau$ as
$$\tau = K(\theta - q) + k_4 e_4 + B(\dot{\delta}_2 + k_3\dot{e}_3) \qquad (17)$$
where $k_4 > 0$. Then,
$$B\dot{e}_4 = -k_4 e_4. \qquad (18)$$
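As a sanity check, substituting (17) into (16) should cancel every term except $-k_4 e_4$. The sketch below verifies this cancellation numerically with arbitrary made-up scalar values ($B$, $K$, and the gains are assumptions).

```python
import numpy as np

rng = np.random.default_rng(0)
# Arbitrary scalar values; B, K and the gains are illustrative assumptions.
B, K, k3, k4 = 0.5, 20.0, 3.0, 6.0
q, th = rng.normal(), rng.normal()
e3, de3, dd2 = rng.normal(), rng.normal(), rng.normal()  # e3, e3_dot, delta2_dot
e4 = de3 + k3 * e3                                       # from (15)

# Control input (17)
tau = K * (th - q) + k4 * e4 + B * (dd2 + k3 * de3)
# Error dynamics (16): B*e4_dot = K*(theta - q) - tau + B*(delta2_dot + k3*e3_dot)
B_de4 = K * (th - q) - tau + B * (dd2 + k3 * de3)

print(B_de4, -k4 * e4)   # the two values should coincide, as in (18)
```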
Remark 2.
The command filter in (12) decreases computational complexity and avoids the need for high-order derivatives of the positions in conventional backstepping control of SEA-driven robots.
Lemma 1
([20]). Consider the command filter in (12) on $t \in [0,T)$, with $T$ a finite value. Given a small $\epsilon \in \mathbb{R}^{+}$, there exists a sufficiently large $\omega$ such that $\|\tilde{\alpha}_1\| \leq \epsilon$ on $t \in [0,T)$.
Theorem 1.
Design the impedance learning-based adaptive interaction controller in (17) with the learning laws in (11) for the compliant robot dynamics in (1). The tracking error $e_1$ and the estimation errors $\tilde{W}$ and $\tilde{V}$ are semiglobally uniformly ultimately bounded (SUUB).
Proof. 
Consider the following Lyapunov function candidate
$$L = \frac{1}{2}e_2^T M(q) e_2 + \frac{1}{2}e_4^T B e_4 + \frac{1}{2\gamma_1}\tilde{W}^T\tilde{W} + \frac{1}{2\gamma_2}\tilde{V}^T\tilde{V}. \qquad (19)$$
Taking the time derivative of $L$ and substituting (14) and (18), one obtains
$$\dot{L} = -k_2 e_2^T e_2 + \frac{1}{2}e_2^T(\dot{M}(q) - 2C(q,\dot{q}))e_2 + e_2^T K\tilde{\alpha}_1 + e_2^T Y_e\tilde{W} + e_2^T Q_e\tilde{V} - k_4 e_4^T e_4 - \frac{1}{\gamma_1}\tilde{W}^T\dot{\hat{W}} - \frac{1}{\gamma_2}\tilde{V}^T\dot{\hat{V}}. \qquad (20)$$
From Property 2 and the update laws in (11),
$$\dot{L} = -k_2 e_2^T e_2 - k_4 e_4^T e_4 + e_2^T K\tilde{\alpha}_1. \qquad (21)$$
According to Lemma 1, if $\omega$ is chosen sufficiently large, then $\|\tilde{\alpha}_1\| \leq \epsilon$ on $[0,T)$. Using Young's inequality,
$$e_2^T K\tilde{\alpha}_1 \leq \frac{k_2}{2}e_2^T e_2 + \frac{1}{2k_2}\tilde{\alpha}_1^T K^T K\tilde{\alpha}_1 \leq \frac{k_2}{2}e_2^T e_2 + \frac{\epsilon^2 k_d}{2k_2} \qquad (22)$$
where $k_d = \lambda_{max}(K^T K)$.
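The bound in (22) can be spot-checked numerically. The sketch below draws random $e_2$ and random $\tilde{\alpha}_1$ with $\|\tilde{\alpha}_1\| \leq \epsilon$ (scalar $K$; the value of $\epsilon$ is an assumption, the gains follow the simulation section) and records the worst violation of the inequality.

```python
import numpy as np

# Numerical spot-check of the Young's-inequality bound in (22), scalar K.
rng = np.random.default_rng(1)
K, k2, eps = 20.0, 5.0, 0.05
kd = K * K                                   # lambda_max(K^T K), scalar case

worst = -np.inf
for _ in range(1000):
    e2 = rng.normal(size=3)
    a = rng.normal(size=3)
    a_t = eps * rng.uniform() * a / np.linalg.norm(a)   # ||a_t|| <= eps
    lhs = e2 @ (K * a_t)
    rhs = 0.5 * k2 * (e2 @ e2) + eps**2 * kd / (2.0 * k2)
    worst = max(worst, lhs - rhs)

print(worst)   # should be <= 0: the bound (22) holds for every draw
```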
Based on (21) and (22), one obtains
$$\dot{L} \leq -\frac{k_2}{2}e_2^T e_2 - k_4 e_4^T e_4 + \frac{\epsilon^2 k_d}{2k_2}, \quad t \in [0,T) \qquad (23)$$
which implies
$$L(t) \leq L(0) + \frac{\epsilon^2 k_d}{2k_2}T, \quad t \in [0,T). \qquad (24)$$
Based on Lemma 1 and (24), we conclude that $\|\tilde{\alpha}_1\| \leq \epsilon$ holds for $t \in [0,\infty)$ if $\omega$ is chosen sufficiently large. Given the initial values of the closed-loop control system, the inequality in (23) then holds for $t \in [0,\infty)$ if the control parameters are properly chosen. Therefore, the proposed impedance learning-based adaptive controller renders the closed-loop control system SUUB.    □

4. Simulation Results

Simulations are implemented on the compliant robot arm in (1) with
$$Y_e = [\ddot{q}_d + k_1\dot{e}_1, \; \sin(q), \; \dot{q}], \qquad Q_e = [e_1, \; \dot{e}_1],$$
$$W = [1, 4.9, 1]^T, \qquad V = [10, 3]^T, \qquad K = 20.$$
The initial values are chosen as $q(0) = \dot{q}(0) = \theta(0) = \dot{\theta}(0) = 0$, and the control parameters of the proposed impedance learning-based adaptive controller in (17) are chosen as $k_1 = 2$, $k_2 = 5$, $k_3 = 3$, $k_4 = 6$, $\omega = 15$, $\xi = 0.8$, $\gamma_1 = 12$, and $\gamma_2 = 10$.
In the simulation, a regulation problem and a tracking problem are considered as two cases, with $q_d = 0.7$ rad in Case 1 and $q_d = 0.2 + 0.3\cos(\pi t/6)$ rad in Case 2. The simulation results for Case 1 and Case 2 are presented in Figures 1–3 and Figures 4–6, respectively.
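For reproducibility, the Case 1 setup can be sketched end to end. The motor-side inertia $B$ is not listed in the text, so $B = 0.5$ is an assumption here, as are the forward-Euler step and the simulation horizon; the remaining values follow the parameters above. With the stated gains the regulation error shrinks toward zero, mirroring Figure 1.

```python
import numpy as np

# Closed-loop sketch of Case 1 (q_d = 0.7 rad) under (10)-(12), (17), (11).
# B = 0.5 and the Euler step dt are assumptions; other values follow the text.
M, B, K, c, g = 1.0, 0.5, 20.0, 1.0, 4.9      # W = [M, g, c] = [1, 4.9, 1]
Ks, Kd = 10.0, 3.0                            # V = [10, 3]
k1, k2, k3, k4 = 2.0, 5.0, 3.0, 6.0
omega, xi, gamma1, gamma2 = 15.0, 0.8, 12.0, 10.0
qd, dqd, ddqd = 0.7, 0.0, 0.0                 # regulation target

dt, T = 1e-4, 20.0
q = dq = th = dth = 0.0                       # zero initial conditions
d1 = d2 = 0.0                                 # command-filter states
W_hat = np.zeros(3)
V_hat = np.zeros(2)

for _ in range(int(T / dt)):
    e1, de1 = qd - q, dqd - dq
    e2 = de1 + k1 * e1
    Ye = np.array([ddqd + k1 * de1, np.sin(q), dq])
    Qe = np.array([e1, de1])
    # Virtual control (10) and command filter (12)
    alpha1 = q + (k2 * e2 + Ye @ W_hat + Qe @ V_hat) / K
    dd1 = d2
    dd2 = -omega**2 * (d1 - alpha1) - 2 * xi * omega * d2
    # SEA-side errors (15) and control input (17)
    e3, de3 = d1 - th, d2 - dth
    e4 = de3 + k3 * e3
    tau = K * (th - q) + k4 * e4 + B * (dd2 + k3 * de3)
    # Interaction force (6) and plant (1)
    tau_en = -(Ks * e1 + Kd * de1)
    ddq = (K * (th - q) + tau_en - c * dq - g * np.sin(q)) / M
    ddth = (tau - K * (th - q)) / B
    # Adaptation laws (11), forward Euler
    W_hat += gamma1 * Ye * e2 * dt
    V_hat += gamma2 * Qe * e2 * dt
    # State update, forward Euler
    q, dq = q + dt * dq, dq + dt * ddq
    th, dth = th + dt * dth, dth + dt * ddth
    d1, d2 = d1 + dt * dd1, d2 + dt * dd2

print(abs(qd - q))   # final regulation error |e1|
```

Note that the adaptation acts as integral action: no equilibrium with $e_2 \neq 0$ exists, so the regulation error is driven toward zero even though $\hat{W}$ and $\hat{V}$ need not reach their true values.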
In Case 1, under the proposed controller (whose control input is shown in Figure 3), the regulation error $e_1$ in Figure 1 is very close to zero after 10 s. Figure 2 shows that, although $\tilde{W}$ and $\tilde{V}$ do not converge to zero, owing to insufficient excitation and the coupling between the robotic parameter uncertainties and the impedance uncertainties, both the robotic parameter estimation error $\tilde{W}$ and the impedance profile estimation error $\tilde{V}$ decrease significantly after 5 s, and the force estimation errors $Y_e\tilde{W}$ and $Q_e\tilde{V}$ are very close to zero after 10 s.
In Case 2, the proposed impedance learning controller (whose control input is shown in Figure 6) renders the tracking error $e_1$ in Figure 4 ultimately close to zero. Figure 5 shows that the robotic parameter estimation error $\tilde{W}$ and the impedance profile estimation error $\tilde{V}$ are greatly decreased, $Q_e\tilde{V}$ is close to zero, and $Y_e\tilde{W}$ is bounded but not close to zero. The reason is that $Q_e\tilde{V}$ plays a more important role in $e_1$ than $Y_e\tilde{W}$, and $\tilde{V}$ receives more excitation.
The above simulation results illustrate the effectiveness of the proposed impedance learning-based controller in (17) and of the adaptive impedance learning in (11). The proposed controller keeps the robot impedance closer to the human impedance than impedance control with constant impedance profiles does.

5. Conclusions

Variable impedance control can improve human–robot interaction performance by regulating the robot impedance to adjust the motions of human limbs; however, impedance variation affects the control stability of robots. Based on impedance learning, this paper has designed an adaptive controller for SEA-driven robots in human–robot interaction using the command filter-based adaptive backstepping approach. Adaptive estimators have been designed to approximate the robot modeling uncertainties and the impedance parameters of the interaction force. We have validated practical control stability through theoretical analysis and shown the control effectiveness through simulations. The designed impedance learning control provides variable robot impedance regulation without interaction force sensing. By exploiting the advantages of impedance learning control and compliant actuators, this paper improves the safety and compliance of robot–environment interactions. This paper only guarantees that the control system is SUUB; guaranteeing asymptotic stability of the impedance learning-based control and improving the impedance estimation performance are directions for future research.

Author Contributions

Conceptualization, J.Y.; Methodology, T.S.; Software, J.Y.; Formal analysis, T.S. and J.Y.; Writing—original draft, T.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported in part by the National Key Research and Development Project (No. 2019YFB1312500) and in part by the National Natural Science Foundation of China (Nos. 62073156, 62103280).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Pan, Y.; Wang, H.; Li, X.; Yu, H. Adaptive command-filtered backstepping control of robot arms with compliant actuators. IEEE Trans. Control Syst. Technol. 2018, 26, 1149–1156.
2. Li, X.; Pan, Y.; Chen, G.; Yu, H. Adaptive human–robot interaction control for robots driven by series elastic actuators. IEEE Trans. Robot. 2017, 33, 169–182.
3. Yu, H.; Huang, S.; Chen, G.; Pan, Y.; Guo, Z. Human–robot interaction control of rehabilitation robots with series elastic actuators. IEEE Trans. Robot. 2015, 31, 1089–1100.
4. Hogan, N. Impedance control: An approach to manipulation: Part I—Theory. J. Dyn. Syst. Meas. Control 1985, 107, 1–7.
5. He, W.; Dong, Y.; Sun, C.Y. Adaptive neural impedance control of a robotic manipulator with input saturation. IEEE Trans. Syst. Man Cybern. Syst. 2016, 46, 334–344.
6. Sun, T.R.; Peng, L.; Cheng, L.; Hou, Z.G.; Pan, Y.P. Composite learning enhanced robot impedance control. IEEE Trans. Neural Netw. Learn. Syst. 2020, 31, 1052–1059.
7. Sharifi, M.; Behzadipour, S.; Salarieh, H.; Tavakoli, M. Cooperative modalities in robotic tele-rehabilitation using nonlinear bilateral impedance control. Control Eng. Pract. 2017, 67, 52–63.
8. Chan, S.P.; Yao, B.; Gao, W.B.; Chen, M. Robust impedance control of robot manipulators. Int. J. Robot. Autom. 1991, 6, 220–227.
9. Jung, S.; Hsia, T.C. Neural network impedance force control of robot manipulator. IEEE Trans. Ind. Electron. 1998, 45, 451–461.
10. Sasaki, M.; Honda, N.; Njeri, W.; Matsushita, K.; Ngetha, H. Gain tuning using neural network for contact force control of flexible arm. J. Sustain. Res. Eng. 2020, 5, 138–148.
11. Njeri, W.; Sasaki, M.; Matsushita, K. Gain tuning for high-speed vibration control of a multilink flexible manipulator using artificial neural network. J. Vib. Acoust. 2019, 141, 041011.
12. Rozo, L.; Calinon, S.; Caldwell, D.G.; Jiménez, P.; Torras, C. Learning physical collaborative robot behaviors from human demonstrations. IEEE Trans. Robot. 2016, 32, 513–527.
13. Fong, J.; Tavakoli, M. Kinesthetic teaching of a therapist’s behavior to a rehabilitation robot. In Proceedings of the 2018 International Symposium on Medical Robotics, Atlanta, GA, USA, 1–3 March 2018; pp. 1–6.
14. Zeng, C.; Yang, C.; Cheng, H.; Li, Y.; Dai, S. Simultaneously encoding movement and sEMG-based stiffness for robotic skill learning. IEEE Trans. Ind. Inform. 2021, 17, 1244–1252.
15. Yang, C.; Ganesh, G.; Sami, H.; Sven, P.; Alin, A.-S.; Burdet, E. Human-like adaptation of force and impedance in stable and unstable interactions. IEEE Trans. Robot. 2011, 27, 918–930.
16. Li, Y.; Ganesh, G.; Jarrasse, N.; Haddadin, S.; Albu-Schaeffer, A.; Burdet, E. Force, impedance, and trajectory learning for contact tooling and haptic identification. IEEE Trans. Robot. 2018, 34, 1170–1182.
17. Sharifi, M.; Azimi, V.; Mushahwar, V.K.; Tavakoli, M. Impedance learning-based adaptive control for human–robot interaction. IEEE Trans. Control Syst. Technol. 2021.
18. Spong, M. Modeling and control of elastic joint robots. J. Dyn. Syst. Meas. Control 1987, 109, 310–319.
19. Braun, D.J.; Petit, F.; Huber, F.; Haddadin, S.; Van Der Smagt, P.; Albu-Schäffer, A.; Vijayakumar, S. Robots driven by compliant actuators: Optimal control under actuation constraints. IEEE Trans. Robot. 2013, 29, 1085–1101.
20. Hu, J.; Zhang, H. Immersion and invariance based command-filtered adaptive backstepping control of VTOL vehicles. Automatica 2013, 49, 2160–2167.
Figure 1. The performance of the tracking error $e_1$ in Case 1.
Figure 2. The estimation errors $\tilde{W}$, $Y_e\tilde{W}$, $\tilde{V}$, and $Q_e\tilde{V}$ in Case 1.
Figure 3. The control input of (17) in Case 1.
Figure 4. The performance of the tracking error $e_1$ in Case 2.
Figure 5. The estimation errors $\tilde{W}$, $Y_e\tilde{W}$, $\tilde{V}$, and $Q_e\tilde{V}$ in Case 2.
Figure 6. The control input of (17) in Case 2.

Citation
Sun, T.; Yang, J. Adaptive Interaction Control of Compliant Robots Using Impedance Learning. Sensors 2022, 22, 9740. https://doi.org/10.3390/s22249740
