Article

Min–Max Optimal Control of Robot Manipulators Affected by Sensor Faults

Faculty of Mechanical Engineering and Naval Architecture, University of Zagreb, HR-10000 Zagreb, Croatia
* Author to whom correspondence should be addressed.
Sensors 2023, 23(4), 1952; https://doi.org/10.3390/s23041952
Submission received: 5 December 2022 / Revised: 1 February 2023 / Accepted: 6 February 2023 / Published: 9 February 2023
(This article belongs to the Special Issue Feature Papers in Fault Diagnosis & Sensors Section 2022)

Abstract

This paper is concerned with the synthesis of a control law for robot manipulators which guarantees that the effect of sensor faults is kept below a permissible level and ensures the stability of the closed-loop system. Based on Lyapunov's stability analysis, conditions were derived that enable the application of the simple bisection method in the optimization procedure. A control law with properties that considerably simplify the construction of the Lyapunov function, and thus the determination of the stability conditions, was considered. Furthermore, the optimization problem was formulated as a class of problems in which minimization and maximization of the same performance criterion are carried out simultaneously. The algorithm proposed to solve the related zero-sum differential game is based on Newton's method with recursive matrix relations, in which the first- and second-order derivatives of the objective function are calculated using hyper-dual numbers. The results of this paper were evaluated in simulation on a robot manipulator with three degrees of freedom.

1. Introduction

In recent years, the control of robot manipulators in the presence of faults, and also, in general, the control of nonlinear dynamic systems in the presence of faults, has been a very active area of research. The approaches have been primarily focused on systems affected by sensor faults [1,2,3], actuator faults [4,5] and simultaneously both sensor and actuator faults [6,7,8,9]. Many well-known advanced control methods have been proposed as a way to cope with fault occurrences: these methods include—but are not limited to—sliding mode control [10,11,12,13], adaptive control [14], model predictive control [15,16], artificial neural network control [17,18,19], fuzzy control [20], and hybrid control [21,22,23,24].
Furthermore, optimal and robust control in the presence of sensor and actuator faults has been an important topic in the control of dynamical systems for the last decade. Many authors consider the L2-gain of a nonlinear system (often called nonlinear H∞ control because, according to [25,26], the L2-gain is equivalent to the H∞ norm of a linear system) to be a measure of the influence of the faults. A fixed L2-gain fault-tolerant controller for a class of Lipschitz nonlinear systems with actuator saturation was designed in [27]. In [28], a data-driven output-feedback approach to the fault-tolerant control (FTC) problem, considering L2-gain properties, was proposed. A robust H∞ FTC scheme to regulate the quadrotor system was adopted in [29]: the approach was based on linear matrix inequalities (LMI), so the overall dynamics of the quadrotor were linearized. In [30], backstepping and adaptive control methods based on L2-gain were combined to provide a passive fault-tolerant attitude controller for a nonlinear dynamic model of morphing aircraft.
All the above-mentioned works were based on the formulation of the problem in the form of LMI, or on the determination of the approximate solution of the associated Hamilton–Jacobi–Isaacs (HJI) equation. The application of LMI requires the linearization of system dynamics, and therefore robustness cannot be guaranteed in all operating points; on the other hand, solutions of the HJI equation can be complex, and therefore difficult to apply in real control tasks.
The difficulties described above motivated us to conduct the research presented here. We formulated the control problem of a robot manipulator affected by sensor faults, as a typical representative of nonlinear dynamic systems, in the form of optimal L 2 -gain control and the related min–max optimization, and we approached its solution without LMI formalism or the need to approximate the solution of the HJI equation.
The main idea of the approach presented in this paper was based on the algorithmic procedure described in the authors’ previous works [31,32]. The idea of using a Newton-like algorithm with recursive matrix relations to calculate exact gradients and Hessians was applied to the synthesis of a control system which aimed to overcome the sensor malfunctions whilst maintaining desirable stability and optimal L 2 -gain performance properties.
This paper considered the synthesis of the PID control law of a robot manipulator, which kept the influence of sensor faults below the permissible limits, and ensured the robust stability of the overall closed-loop system. The problem was presented as a two-player zero-sum differential game, with the objective function including a parameter in such a way that the control vector represented the “player” that minimized the objective function, while the vector containing the sensor faults represented the “player” that maximized the same objective function.
The algorithm proposed in this paper is, in one part, based on Lyapunov's stability analysis. The derived stability conditions were used in the optimization process for the appropriate initial setting of the algorithm parameters: these conditions were crucial for the application of the bisection method by which we determined the minimum L2-gain. In addition, by using the Lyapunov method, the stopping criterion of the algorithm was improved: inequalities dependent on the PID controller gains were derived that bound the distance of the current point from the saddle point, and thus add a bound on the number of iterations.
The main contributions of this work are summarized as follows:
  • A suitable mathematical tool and systematic algorithmic procedure for the synthesis of an optimal robust PID controller for robot manipulators affected by sensor faults was developed. The presented approach admitted a min–max formulation, and hence provided a guarantee of robustness, which was achieved by optimizing the worst-case performance. To the best of the authors’ knowledge, such an algorithmic PID controller synthesis procedure had not previously been investigated.
  • In many other similar optimal control-based algorithms that have been proposed in the literature (see, for example, [33,34] and references therein), the nonlinear system dynamics were treated as equality constraints, and were included in the optimization process, using the method of Lagrange multipliers: the results were HJI equations that were very difficult or almost impossible to solve. For this reason, many approximation methods have been developed (see, for example, [35,36,37,38] and references therein) in which actual computational complexity increased with the number of system states which needed to be estimated. In contrast to such approaches, the algorithm proposed in this paper had no high-dimensional structure: this followed from the fact that, instead of incorporating the robot dynamics in the closed loop with the controller directly into an objective function, and solving the corresponding HJI equation, the state variables and PID controller gains were coupled by recursive matrix relations, used to calculate the first- and second-order derivatives that appear in the Newton-like method.
Based on the contributions noted above, the importance of the research presented in this paper lies in the fact that the proposed control architecture does not rely directly on sensor fault estimation in order to achieve the desired positioning of the robot manipulator, and is closely related to nonlinear L2-gain robust control, where a PID-type controller, with its well-known simplicity of structure, is designed to be robust against sensor faults in the system. Such a control strategy has potential for practical applications due to its simple design, the short lag between sensor fault appearance and accommodation, and its modest computing requirements.
The rest of the paper is organized as follows. In Section 2, the robot manipulator dynamics and their main properties are presented. In Section 3, the min–max optimal control problem, of a robot manipulator affected by sensor faults and in a closed loop with a PID controller, is defined. The main results are presented in Section 4 and Section 5. Based on the theory of L 2 stability and Lyapunov’s approach, the stability conditions of the entire control system are established in Section 4. In Section 5, the bisection method, the Newton-like algorithm, the Adams discretization method and the recursive matrix calculation of gradients and Hessians are integrated into an efficient algorithmic procedure. The closed-loop response of the proposed control strategy and a robot manipulator with three degrees of freedom (3DOF) are evaluated in computer simulations, and the results are given and discussed in Section 6. Section 7 concludes the paper. There are three appendices included. Appendix A describes the notation used throughout the paper. In Appendix B, a detailed derivation of expressions for the kinetic and potential energy of the cylindrical robot manipulator is given. Appendix C contains the expressions for the evaluation of coefficients resulting from the mechanical properties of a cylindrical robot manipulator with 3DOF.

2. Dynamic Model and Properties

Consider a dynamic model of an N-DOF robot manipulator system of the form
$$M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) = u, \qquad (1)$$
where q ∈ R^N is the vector of generalized coordinates, u ∈ R^N is the vector of control forces/torques applied to the system, M(q) ∈ R^{N×N} is the inertia matrix, C(q,q̇)q̇ ∈ R^N is the vector of centrifugal and Coriolis forces/torques and g(q) ∈ R^N is the vector of gravitational forces/torques. It is well known that Equation (1) can be obtained by modeling a robot manipulator system with the Euler–Lagrange equations.
For this paper, we considered a robot manipulator that had both rotational and translational generalized coordinates, and hence had the following properties (see, for example, [39,40,41,42,43]):
Property 1.
The matrix
$$\dot{M}(q) - 2C(q,\dot{q}) \qquad (2)$$
is skew-symmetric for all q, q̇ ∈ R^N; this implies that
$$\dot{M}(q) = C(q,\dot{q}) + C(q,\dot{q})^T; \qquad (3)$$
Property 2.
The inertia matrix M ( q ) is a positive-definite symmetric matrix which satisfies
$$a_1\|\xi\|^2 \le \xi^T M(q)\,\xi \le \left(a_2 + c_2\|q\| + d_2\|q\|^2\right)\|\xi\|^2, \qquad (4)$$
for all ξ, q ∈ R^N, where a_1, a_2 > 0 and c_2, d_2 ≥ 0.
Property 3.
There exist constants, c 1 and d 1 , such that the Coriolis and centrifugal term, C ( q , q ˙ ) q ˙ , satisfies
$$\left\|C(q,\dot{q})\,\dot{q}\right\| \le \left(c_1 + d_1\|q\|\right)\|\dot{q}\|^2, \qquad (5)$$
for all q, q̇ ∈ R^N, where c_1 > 0, d_1 ≥ 0.
As is well-known, the vector of gravitational forces/torques is a gradient of the potential energy. The considered class of robot manipulator was such that its potential energy depended linearly on translational generalized coordinates, while rotational generalized coordinates appeared as a trigonometric function, with period 2 π . Hence, the following property was imposed:
Property 4.
There exist positive constants, k g 1 and k g 2 , such that the Jacobian of the gravity vector satisfies
$$\left\|\frac{\partial g(q)}{\partial q}\right\| \le k_{g1} + k_{g2}\|q\|, \qquad (6)$$
for all q ∈ R^N.
Remark 1.
If the system (1) has no translational generalized coordinates, then c_2 = d_2 = d_1 = k_{g2} = 0. See, for example, [44,45,46,47], and references therein.

3. Optimal Control Problem

As robot manipulators often work in extreme environments, their sensors are prone to faults during operation, for example due to environmental noise such as a magnetic field. Faults in robot manipulator sensors often occur due to vibrations that cause disruptions in communication between the control unit and the sensor, or even a short circuit. Furthermore, intermittent sensor connection, bias in sensor measurement and sensor gain drop can also present as faults.
For this paper, we were interested in designing a decentralized controller for system (1) that guaranteed stability around a desired constant vector of generalized coordinate positions q_d ∈ R^N, while the influence of sensor faults was kept under a permissible level. In this context, we considered the least upper bound (supremum) of the ratio of the L2-norm of the vector of output signals to be controlled to the L2-norm of the vector representing the sensor faults. It should be noted that considering the control problem in this way was actually equivalent to the L2-gain (or nonlinear H∞) optimal control defined in [25,26].
In a sensor fault situation, the internal robot dynamic properties are not affected, but the controller of the robot manipulator (1) receives generalized coordinates that are not exact, and are given as follows [2,8,48,49]:
$$\bar{q} = q + F_p\varphi_p, \qquad (7)$$
$$\dot{\bar{q}} = \dot{q} + F_v\varphi_v, \qquad (8)$$
where φ_p ∈ L2([t_0, t_f], R^S) and φ_v ∈ L2([t_0, t_f], R^S) are the vector functions representing sensor faults, and F_p and F_v are matrices of appropriate dimensions. The following assumptions about the sensor fault terms were employed:
Assumption 1.
There exist positive-definite time-dependent functions f p and f v , such that the sensor faults are bounded by
$$\|\varphi_p\| \le f_p, \qquad (9)$$
$$\|\varphi_v\| \le f_v. \qquad (10)$$
Remark 2.
In many other works—for example, in [2,7,8,50]—sensor fault bounds are considered as known positive real constants. In this work, we did not treat these bounds as known specific constants, but as unknown functions that needed to be determined in such a way that they maximized the objective function, so that the proposed control strategy might be available not only for robotic manipulators but also for a wide class of nonlinear dynamical systems.
A PID-type control law was proposed, with gravity vector compensation in the following form:
$$u = -K_P\tilde{q} - K_D\dot{\bar{q}} - K_I\upsilon + g(q), \qquad (11)$$
$$\dot{\upsilon} = \tilde{q}, \qquad (12)$$
where q̃ = q̄ − q_d was the generalized coordinate error vector, and K_P, K_D and K_I were N×N positive-definite diagonal matrices. Note that, as K_P, K_D and K_I were diagonal matrices, the proposed control law was formed in a fully decentralized fashion.
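To make the structure of the fault model (7) and (8) and the control law (11) and (12) concrete, the following minimal Python sketch (our illustration, not the authors' implementation; the gain values, identity fault-distribution matrices and the Euler integrator step are placeholder assumptions) evaluates the decentralized PID law on faulty measurements.

```python
import numpy as np

N = 3
K_P = np.diag([120.0, 100.0, 50.0])   # positive-definite diagonal gains (placeholder values)
K_D = np.diag([110.0, 50.0, 30.0])
K_I = np.diag([50.0, 50.0, 20.0])
F_p = np.eye(N)                       # fault distribution matrices (assumed identity here)
F_v = np.eye(N)

def pid_with_sensor_faults(q, dq, upsilon, q_d, phi_p, phi_v, g_q, dt):
    """Decentralized PID law (11)-(12) evaluated on the faulty measurements (7)-(8)."""
    q_bar  = q  + F_p @ phi_p         # faulty position measurement, Eq. (7)
    dq_bar = dq + F_v @ phi_v         # faulty velocity measurement, Eq. (8)
    q_tilde = q_bar - q_d             # measured position error
    u = -K_P @ q_tilde - K_D @ dq_bar - K_I @ upsilon + g_q   # Eq. (11)
    upsilon_next = upsilon + dt * q_tilde                     # Euler step of Eq. (12)
    return u, upsilon_next
```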
The robot manipulator system (1) with mixed rotational and translational generalized coordinates, affected by sensor faults (7) and (8) in a closed loop with a PID-type controller (11) and (12), can be represented by a block diagram, as shown in Figure 1. Each DOF of the robot manipulator was driven by servo motors with gears or pneumatic and hydraulic cylinders. The decentralized PID control law was implemented in the microcontroller, and point-to-point communication was established by electrical interface and physical layers. The received encoder data were used to close the feedback loop in the microcontroller.
Regarding the block diagram shown in Figure 1, in our approach the emphasis was not on the structure of the controller itself, but on the method for adjusting the gains of the controller. The PID controller structure was considered because it is the most common structure used in industrial applications.
Remark 3.
It is important to emphasize the main difference between a conventional PID controller and our proposed one. In conventional applications of a PID controller, the gain values are independently and freely chosen by the designer, often using a trial-and-error method; however, in our approach, a proportional gain matrix K P , a derivative gain matrix K D and an integral gain matrix K I were tuned automatically and correlatively by a numerical algorithm that will be presented in detail in the following sections.
Inserting (7) and (8) into (11) and (12) gave
$$u = -K_P\hat{q} - K_D\dot{q} - K_I\upsilon - K_P F_p\varphi_p - K_D F_v\varphi_v + g(q), \qquad (13)$$
$$\dot{\upsilon} = \hat{q} + F_p\varphi_p, \qquad (14)$$
where q̂ = q − q_d.
Finally, inserting (13) and (14) into (1) gave closed-loop equations in the following form:
$$M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} = -K_P\hat{q} - K_D\dot{q} - K_I\upsilon - K_P F_p\varphi_p - K_D F_v\varphi_v, \qquad (15)$$
$$\dot{\upsilon} = \hat{q} + F_p\varphi_p. \qquad (16)$$
Based on the definition of finite L 2 -gain and the definition of L 2 stability (see, for example, [25,51]), we defined the min–max optimal control problem of a robot manipulator affected by sensor faults, as follows:
Problem 1.
Determine the PID controller gains, which are the elements of the matrices K_P, K_D and K_I in (11) and (12), and determine the "worst case" of the sensor fault functions φ_p and φ_v in (7) and (8), such that the system (15) and (16) is stable around the desired constant generalized coordinate positions and the ratio of the L2-norm of the vector of output signals to be controlled to the L2-norm of the vector representing sensor faults is minimized. In other words, solve the following zero-sum differential game:
$$J_\mu^*(q_0) = \min_{K_P,K_D,K_I}\;\max_{\varphi}\;\left\{\|\hat{q}\|_{L_2}^2 + \|u\|_{L_2}^2 - \mu\|\varphi\|_{L_2}^2\right\}, \qquad (17)$$
where q̂, K_P, K_D, K_I and φ = [φ_p^T φ_v^T]^T are coupled via the closed-loop system dynamics (15) and (16), and μ > 0 is a finite L2-gain; furthermore, q_0 is an a priori known vector of initial states of the generalized coordinates.

4. L2 Stability Conditions

It is well known from dissipativity theory [52,53,54] that a nonlinear dynamic system is L2-stable if, for all initial conditions, all uncertainties w and all t_f ≥ t_0, there exists a storage function V, which is also a Lyapunov function, such that the following inequality holds:
$$\dot{V} \le \gamma^2 w^T w - z^T z, \qquad (18)$$
where z is the performance variable, and γ > 0 is the finite L2-gain.
Note that the right-hand side of the inequality (18) in our case actually corresponded to the argument of the min–max operator in (17). Based on this observation, we give the following proposition, by which we set the stability conditions of the control system proposed in this paper:
Proposition 1.
If the following conditions are satisfied:
(a)
$$\alpha\lambda_{\min}\{K_D\} - \frac{1}{\alpha}\lambda_{\max}\{K_I\} - \alpha^2 a_1 > 0, \qquad (19)$$
(b)
$$\alpha\left(a_1 + \left(c_1 + d_1\|q\|\right)\|\hat{q}\|\right) - \lambda_{\max}\{K_D\} - \mu \le 0, \qquad (20)$$
$$\alpha\lambda_{\min}\{K_I\} - \alpha\mu \le 0, \qquad (21)$$
$$1 - \lambda_{\max}\{K_D\}\,\sigma_{\max}\{F_v\} \le 0, \qquad (22)$$
$$\alpha - \alpha\lambda_{\max}\{K_D\}\,\sigma_{\max}\{F_v\} \le 0, \qquad (23)$$
for some α > 0 , where μ > 0 is a finite L 2 -gain, and where a 1 , c 1 and d 1 are defined in Properties 2 and 3, then the system (15) and (16) is locally L 2 -stable.
Proof. 
The first step was to transform the system (15) and (16) into a form with a zero steady state. The stationary state of the system (15) and (16) is q̇ = 0, q̂ = 0 (q = q_d), so the following was obtained:
$$K_I\upsilon^* + K_P F_p\varphi_p^* + K_D F_v\varphi_v^* = 0, \qquad (24)$$
$$\dot{\upsilon} = F_p\varphi_p^*, \qquad (25)$$
where υ * , φ p * and φ v * were steady state values.
Subtracting (16) and (25) gave q̂ = −F_p(φ_p − φ_p*); then, subtracting (15) and (24), we obtained the error equations in the following form:
$$M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + K_D\dot{q} + K_I\hat{\upsilon} + K_D F_v\hat{\varphi}_v = 0, \qquad (26)$$
where υ̂ = υ − υ* and φ̂_v = φ_v − φ_v*.
Following the methodology from [39,46], as the first step in the construction of the Lyapunov function, the error Equation (26) was multiplied on the left by (αq̂^T + q̇^T), with some α > 0:
$$\alpha\hat{q}^T M(q)\ddot{q} + \alpha\hat{q}^T C(q,\dot{q})\dot{q} + \alpha\hat{q}^T K_D\dot{q} + \alpha\hat{q}^T K_I\hat{\upsilon} + \alpha\hat{q}^T K_D F_v\hat{\varphi}_v + \dot{q}^T M(q)\ddot{q} + \dot{q}^T C(q,\dot{q})\dot{q} + \dot{q}^T K_D\dot{q} + \dot{q}^T K_I\hat{\upsilon} + \dot{q}^T K_D F_v\hat{\varphi}_v = 0. \qquad (27)$$
Some of the terms in the expression (27) can be written as follows:
$$\dot{q}^T M(q)\ddot{q} = \frac{d}{dt}\left(\tfrac{1}{2}\dot{q}^T M(q)\dot{q}\right) - \tfrac{1}{2}\dot{q}^T\dot{M}(q)\dot{q}, \qquad (28)$$
$$\alpha\hat{q}^T M(q)\ddot{q} = \frac{d}{dt}\left(\alpha\hat{q}^T M(q)\dot{q}\right) - \alpha\dot{q}^T M(q)\dot{q} - \alpha\hat{q}^T\dot{M}(q)\dot{q}, \qquad (29)$$
$$\alpha\hat{q}^T K_D\dot{q} = \frac{d}{dt}\left(\tfrac{1}{2}\alpha\hat{q}^T K_D\hat{q}\right), \qquad (30)$$
$$\alpha\hat{q}^T K_I\hat{\upsilon} = \frac{d}{dt}\left(\tfrac{1}{2}\alpha\hat{\upsilon}^T K_I\hat{\upsilon}\right), \qquad (31)$$
$$\dot{q}^T K_I\hat{\upsilon} = \frac{d}{dt}\left(\hat{q}^T K_I\hat{\upsilon}\right) - \hat{q}^T K_I\hat{q}. \qquad (32)$$
Inserting (28)–(32) in (27) and using the fact that Ṁ(q) − 2C(q,q̇) is skew-symmetric (see Property 1), which implies q̇^T(Ṁ(q) − 2C(q,q̇))q̇ = 0, we obtained:
$$\frac{d}{dt}\left(\tfrac{1}{2}\dot{q}^T M(q)\dot{q} + \alpha\hat{q}^T M(q)\dot{q} + \tfrac{1}{2}\alpha\hat{q}^T K_D\hat{q} + \tfrac{1}{2}\alpha\hat{\upsilon}^T K_I\hat{\upsilon} + \hat{q}^T K_I\hat{\upsilon}\right) = \alpha\dot{q}^T M(q)\dot{q} + \alpha\hat{q}^T C(q,\dot{q})^T\dot{q} - \dot{q}^T K_D\dot{q} + \alpha\hat{q}^T K_I\hat{q} - \dot{q}^T K_D F_v\hat{\varphi}_v - \alpha\hat{q}^T K_D F_v\hat{\varphi}_v. \qquad (33)$$
Based on the nonlinear differential form (33), it could be concluded that on the left-hand side we had the Lyapunov function candidate V ( q ˙ , q ^ , υ ^ ) , while on the right-hand side we had a candidate for the time derivative of the Lyapunov function V ˙ ( q ˙ , υ ^ , φ ^ v ) .
The Lyapunov function candidate could be rewritten as follows:
$$V(\dot{q},\hat{q},\hat{\upsilon}) = \tfrac{1}{2}\left(\alpha\hat{q} + \dot{q}\right)^T M(q)\left(\alpha\hat{q} + \dot{q}\right) - \tfrac{1}{2}\alpha^2\hat{q}^T M(q)\hat{q} + \tfrac{1}{2}\alpha\hat{q}^T K_D\hat{q} + \tfrac{1}{2}\left(\tfrac{1}{\sqrt{\alpha}}\hat{q} + \sqrt{\alpha}\,\hat{\upsilon}\right)^T K_I\left(\tfrac{1}{\sqrt{\alpha}}\hat{q} + \sqrt{\alpha}\,\hat{\upsilon}\right) - \tfrac{1}{2\alpha}\hat{q}^T K_I\hat{q}. \qquad (34)$$
In order for the above expression to be positive-definite, it was necessary to determine the conditions under which
$$-\tfrac{1}{2}\alpha^2\hat{q}^T M(q)\hat{q} + \tfrac{1}{2}\alpha\hat{q}^T K_D\hat{q} - \tfrac{1}{2\alpha}\hat{q}^T K_I\hat{q} > 0. \qquad (35)$$
Using definitions of vector and matrix norms, bound of quadratic forms (see Notation, Appendix A) and, finally, applying Property 2 we obtained the following:
$$\left(\tfrac{1}{2}\alpha\lambda_{\min}\{K_D\} - \tfrac{1}{2\alpha}\lambda_{\max}\{K_I\} - \tfrac{1}{2}\alpha^2 a_1\right)\|\hat{q}\|^2 > 0, \qquad (36)$$
which was positive-definite if
$$\alpha\lambda_{\min}\{K_D\} - \frac{1}{\alpha}\lambda_{\max}\{K_I\} - \alpha^2 a_1 > 0. \qquad (37)$$
Next, according to the theory of L 2 stability and the passivity properties of Euler–Lagrange systems, the following inequality needed to be satisfied:
$$\dot{V}(\dot{q},\hat{\upsilon},\hat{\varphi}_v) \le -\mu\left\|\alpha\hat{q} + \dot{q}\right\|^2 - \left(\alpha\hat{q} + \dot{q}\right)^T\hat{\varphi}_v, \qquad (38)$$
where μ > 0 was a finite L2-gain.
Comparing (38) to the right-hand side of (33), using the Cauchy–Schwarz inequality, the triangle inequality, the definitions of vector and matrix norms and the bounds on quadratic forms (see Notation, Appendix A) and, finally, applying Properties 2 and 3, we obtained the following inequality:
$$\left(\alpha\left(a_1 + \left(c_1 + d_1\|q\|\right)\|\hat{q}\|\right) - \lambda_{\max}\{K_D\} - \mu\right)\|\dot{q}\|^2 + \left(\alpha\lambda_{\min}\{K_I\} - \alpha\mu\right)\|\hat{q}\|^2 + \left(1 - \lambda_{\max}\{K_D\}\,\sigma_{\max}\{F_v\}\right)\|\dot{q}\|\,\|\hat{\varphi}_v\| + \left(\alpha - \alpha\lambda_{\max}\{K_D\}\,\sigma_{\max}\{F_v\}\right)\|\hat{q}\|\,\|\hat{\varphi}_v\| \le 0. \qquad (39)$$
The conditions that satisfied the above inequality were as follows:
$$\alpha\left(a_1 + \left(c_1 + d_1\|q\|\right)\|\hat{q}\|\right) - \lambda_{\max}\{K_D\} - \mu \le 0, \qquad (40)$$
$$\alpha\lambda_{\min}\{K_I\} - \alpha\mu \le 0, \qquad (41)$$
$$1 - \lambda_{\max}\{K_D\}\,\sigma_{\max}\{F_v\} \le 0, \qquad (42)$$
$$\alpha - \alpha\lambda_{\max}\{K_D\}\,\sigma_{\max}\{F_v\} \le 0. \qquad (43)$$
 □
Remark 4.
As the condition (40) depended on the generalized coordinates of the system, it was concluded that the proposed controller guaranteed only local stability. For a more detailed analysis of stability, it would be necessary to determine the domain of attraction that guarantees asymptotic stability; furthermore, in order to ensure global stability, it would be necessary to introduce a nonlinear term in the control law (see, for example, [39]). Determining the domain of attraction and deriving the global stability conditions were beyond the scope of the research presented in this paper. The stability conditions derived in this work were calculated numerically, and served only for the appropriate initialization of the algorithm, thereby ensuring satisfactory convergence properties.
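To illustrate how the stability conditions can serve for the initial setting of the algorithm parameters, the short sketch below (ours, not the authors' code) scans a grid of α and reports where condition (a), Eq. (19), holds for a given set of initial gains; the numerical value of the bound a₁ from Property 2 is only a placeholder assumption.

```python
import numpy as np

def condition_a(alpha, K_D, K_I, a1):
    """Left-hand side of condition (a), Eq. (19); positive means the condition holds."""
    lam_min_KD = np.min(np.linalg.eigvalsh(K_D))
    lam_max_KI = np.max(np.linalg.eigvalsh(K_I))
    return alpha * lam_min_KD - lam_max_KI / alpha - alpha**2 * a1

# Scan alpha > 0 for the initial gains later used in Section 6.
K_D0 = np.diag([110.0, 50.0, 30.0])
K_I0 = np.diag([50.0, 50.0, 20.0])
a1 = 0.5                                  # assumed (placeholder) value of the bound from Property 2
alphas = np.linspace(0.05, 5.0, 200)
feasible = [a for a in alphas if condition_a(a, K_D0, K_I0, a1) > 0]
print(f"condition (a) holds for alpha in [{feasible[0]:.2f}, {feasible[-1]:.2f}]"
      if feasible else "no feasible alpha found for these gains")
```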

5. Algorithm for the Controller Synthesis

In order to derive the algorithm for the controller synthesis, i.e., the algorithm to solve Problem 1, the PID control law (13) had to be written in parametrized form first. To do this, a vectorization operation was performed:
$$\begin{aligned}\mathrm{vec}\,u &= -\mathrm{vec}\left(K_P\hat{q}\right) - \mathrm{vec}\left(K_D\dot{q}\right) - \mathrm{vec}\left(K_I\upsilon\right) - \mathrm{vec}\left(K_P F_p\varphi_p\right) - \mathrm{vec}\left(K_D F_v\varphi_v\right)\\ &= -\left(\hat{q}^T\otimes I\right)\mathrm{vec}\,K_P - \left(\dot{q}^T\otimes I\right)\mathrm{vec}\,K_D - \left(\upsilon^T\otimes I\right)\mathrm{vec}\,K_I - \left(\varphi_p^T F_p^T\otimes I\right)\mathrm{vec}\,K_P - \left(\varphi_v^T F_v^T\otimes I\right)\mathrm{vec}\,K_D\\ &= -\begin{bmatrix}\left(\hat{q}^T + \varphi_p^T F_p^T\right)\otimes I & \left(\dot{q}^T + \varphi_v^T F_v^T\right)\otimes I & \upsilon^T\otimes I\end{bmatrix}\begin{bmatrix}\mathrm{vec}\,K_P\\ \mathrm{vec}\,K_D\\ \mathrm{vec}\,K_I\end{bmatrix}.\end{aligned} \qquad (44)$$
Then, by substituting x = [x_1^T x_2^T x_3^T]^T = [q̂^T q̇^T υ^T]^T, from (15) and (16) we got the following first-order nonlinear differential equations:
$$\dot{x} = f(x) + B(x,\varphi)\,k + F\varphi_p, \qquad (45)$$
where
$$f(x) = \begin{bmatrix}x_2\\ -M^{-1}(x_1)\,C(x_1,x_2)\,x_2\\ x_1\end{bmatrix},\quad B(x,\varphi) = \begin{bmatrix}0\\ -M^{-1}(x_1)\,R(x,\varphi)\\ 0\end{bmatrix},\quad R(x,\varphi) = \begin{bmatrix}\left(x_1^T + \varphi_p^T F_p^T\right)\otimes I & \left(x_2^T + \varphi_v^T F_v^T\right)\otimes I & x_3^T\otimes I\end{bmatrix},\quad k = \begin{bmatrix}\mathrm{vec}\,K_P\\ \mathrm{vec}\,K_D\\ \mathrm{vec}\,K_I\end{bmatrix},\quad \varphi = \begin{bmatrix}\varphi_p\\ \varphi_v\end{bmatrix},\quad F = \begin{bmatrix}0\\ 0\\ F_p\end{bmatrix}. \qquad (46)$$
Note that u = −R(x,φ)k.
In accordance with the previous substitution, expression (17) then became
$$J_\mu^*(x_0) = \min_{k}\;\max_{\varphi}\;\left\{\|x_1\|_{L_2}^2 + \|R(x,\varphi)\,k\|_{L_2}^2 - \mu\|\varphi\|_{L_2}^2\right\}, \qquad (47)$$
where x , k and φ were coupled via nonlinear differential Equations (45) and (46).

5.1. L2-Gain Minimization

The stability conditions from Proposition 1 suggested that the simple and well-known bisection algorithm [55] could be applied to minimize the L2-gain; the steps of this method were as follows (a compact code sketch is given after the list of steps).
  • Step initialization: Choose the initial elements of the vector k, i.e., the initial PID controller gains, such that, for the lower bound μ_l^0 and upper bound μ_u^0 of the L2-gain, the conditions in expression (19) are satisfied. Choose a small enough positive constant ϵ as the stopping criterion of the bisection method.
  • Step 1: Set k := 1.
    Step 2: Set
    $$\mu^k := \tfrac{1}{2}\left(\mu_l^{k-1} + \mu_u^{k-1}\right), \qquad (48)$$
    then calculate the value of function J μ k * ( x 0 ) by solving the zero-sum differential game (47).
  • Step 3: If |J*_{μ^k}(x_0)| < ϵ, then stop; otherwise, if J*_{μ^k}(x_0) ≤ 0, then μ_l^k := μ_l^{k−1} and μ_u^k := μ^k; else, μ_l^k := μ^k and μ_u^k := μ_u^{k−1}.
  • Step 4. Set k : = k + 1 and return to Step 2.
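The following minimal Python sketch mirrors the loop above (our illustration; `solve_game` stands for the Newton procedure of Section 5.2, and the comments on which bound moves reflect our reading of the update rule in Step 3).

```python
def minimize_l2_gain(mu_l, mu_u, solve_game, x0, eps=1e-3):
    """Bisection on the L2-gain mu, following the steps of Section 5.1.

    solve_game(mu, x0) is assumed to return J*_mu(x0), the saddle value of the
    zero-sum game (47) obtained by the Newton procedure of Section 5.2.
    """
    while True:
        mu = 0.5 * (mu_l + mu_u)          # Step 2, Eq. (48)
        J = solve_game(mu, x0)
        if abs(J) < eps:                  # Step 3: stopping criterion
            return mu
        if J <= 0:                        # shrink the upper bound
            mu_u = mu
        else:                             # raise the lower bound
            mu_l = mu
```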

5.2. Zero-Sum Differential Game Solution

In the second step of the algorithmic procedure described in the previous subsection, it was necessary to solve the problem defined by expression (47). To solve this zero-sum differential game, we implemented the Newton method, which is described in the following steps.
  • Step initialization: Choose a small enough positive constant, ε , as the stopping criterion of the Newton algorithm.
  • Step 1: Set j := 0.
  • Step 2: Determine the search direction vector s j by solving a system of linear equations using Cholesky factorization:
    $$\begin{bmatrix}\nabla_k^2 J & \nabla_{k,\varphi}^2 J\\ -\nabla_{\varphi,k}^2 J & -\nabla_\varphi^2 J\end{bmatrix} s^j = -\begin{bmatrix}\nabla_k^T J\\ -\nabla_\varphi^T J\end{bmatrix}, \qquad (49)$$
    where J is the argument of the min–max operator in (47), and ∇_k J, ∇_φ J, ∇²_k J, ∇²_φ J and ∇²_{k,φ} J are the gradients and Hessians with respect to the vectors k and φ, respectively. Note that the maximization with respect to φ is achieved simply by the minus sign in front of the gradient and Hessians.
  • Step 3: Use the line search strategy satisfying the Wolfe conditions [56] to compute the step-size η j > 0 .
  • Step 4: Calculate
    $$\begin{bmatrix}k^{\,j+1}\\ \varphi^{\,j+1}\end{bmatrix} = \begin{bmatrix}k^{\,j}\\ \varphi^{\,j}\end{bmatrix} + \eta^j s^j. \qquad (50)$$
  • Step 5: If
    $$\left\|\begin{bmatrix}\nabla_k J & \nabla_\varphi J\end{bmatrix}^T\right\| \le \varepsilon, \qquad (51)$$
    then stop; else, set j : = j + 1 and go to Step 2.
As is well known, Newton's method has locally quadratic convergence to the saddle point, assuming that the initial point is close enough to the saddle point. In the approach we propose here, the proximity to the saddle point is ensured by the L2 stability conditions set in Proposition 1: this further implies that ∇²_k J > 0 and −∇²_φ J > 0. As these Hessians are positive-definite submatrices on the main diagonal of the matrix on the left-hand side of (49), and the submatrices ∇²_{k,φ} J and ∇²_{φ,k} J are negative transpositions of each other, the matrix on the left-hand side of Equation (49) is nonsymmetric positive-definite and, therefore, the linear system (49) is well defined; therefore, the previously described algorithmic procedure, which is based on the Newton method, produces:
$$k^k \rightarrow \arg\min_{k}\left\{\|x_1\|_{L_2}^2 + \|R(x,\varphi)\,k\|_{L_2}^2 - \mu^k\|\varphi\|_{L_2}^2\right\}, \qquad (52)$$
$$\varphi^k \rightarrow \arg\max_{\varphi}\left\{\|x_1\|_{L_2}^2 + \|R(x,\varphi)\,k\|_{L_2}^2 - \mu^k\|\varphi\|_{L_2}^2\right\}, \qquad (53)$$
in the k-th iteration of the bisection algorithm proposed in Section 5.1.
To perform the steps of the proposed Newton-like algorithm, we needed expressions for the gradients and Hessians that appear in (49). A detailed procedure for deriving recursive matrix relations for computing these gradients and Hessians is given in the next subsection.
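As an illustration of Steps 2–4, the sketch below assembles the block system (49) and applies one update (50); this is our own minimal rendering under the stated block structure, it uses a generic linear solver instead of the Cholesky factorization mentioned in Step 2, and the `line_search` routine (assumed to enforce the Wolfe conditions) is left abstract.

```python
import numpy as np

def newton_saddle_step(k, phi, grads, hessians, line_search):
    """One iteration of the Newton scheme of Section 5.2 (illustrative sketch).

    grads    : (dJ_dk, dJ_dphi)           -- gradients of J w.r.t. k and phi
    hessians : (H_kk, H_kphi, H_phiphi)   -- second derivatives of J
    The phi-rows of the block system carry a minus sign, so phi ascends while k descends.
    """
    dJ_dk, dJ_dphi = grads
    H_kk, H_kphi, H_phiphi = hessians
    KKT = np.block([[ H_kk,       H_kphi],
                    [-H_kphi.T,  -H_phiphi]])          # block matrix of Eq. (49)
    rhs = np.concatenate([-dJ_dk, dJ_dphi])
    s = np.linalg.solve(KKT, rhs)                      # search direction s^j
    eta = line_search(k, phi, s)                       # step size (Wolfe conditions)
    nk = k.size
    return k + eta * s[:nk], phi + eta * s[nk:]        # update, Eq. (50)
```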

5.3. Matrix Relations for Recursive Calculation of Gradients and Hessians

In order to derive recursive relations for calculating gradients and Hessians, we used the fact that derivatives are a measure of the sensitivity of functions to small changes in their variables. This suggested that, for their calculation, we could perform a time discretization of the system dynamics (45). In the approach presented in this paper, the discretization of system dynamics using the fourth-order Adams approximation with a small time step was applied. In [32] it is shown that an explicit Adams method can be conveniently transformed into discrete-time state space.
The explicit Adams approximation of system (45) can be simply written in the following discrete-time state-space form:
$$\hat{x}(i+1) = \hat{\phi}\left(\hat{x}(i),\,k(i),\,\varphi(i)\right), \qquad \hat{x}(0) = \hat{x}_0, \qquad (54)$$
such that the time grid consists of the points t_i = t_0 + iτ for i = 0, 1, 2, …, N−1, where τ = (t_f − t_0)/N is the time step length, and x̂ is the extended 4n-dimensional state vector (n is the dimension of the state vector x in (45), and 4 is the order of the Adams method). More details on the Adams method can be found in [57].
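For completeness, a minimal sketch of one explicit fourth-order Adams–Bashforth step, the approximation underlying (54), is given below (our illustration; the bookkeeping of the extended state vector described in [32] is omitted).

```python
import numpy as np

def adams_bashforth4_step(f_hist, x, tau):
    """Explicit fourth-order Adams-Bashforth step (illustrative sketch of Eq. (54)).

    f_hist holds the last four evaluations of the right-hand side of (45),
    ordered from oldest to newest: [f(i-3), f(i-2), f(i-1), f(i)].
    """
    c = np.array([-9.0, 37.0, -59.0, 55.0]) / 24.0     # AB4 coefficients
    return x + tau * (c[3]*f_hist[3] + c[2]*f_hist[2] + c[1]*f_hist[1] + c[0]*f_hist[0])
```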
The discrete-time form of the objective function resulted in
$$J(x_0) = \tau\sum_{i=0}^{N-1} F\left(\hat{x}(i),\,k(i),\,\varphi(i)\right), \qquad (55)$$
where F was the sub-integral function of the argument of the min max operator in (47)
$$F(i) = \|x_1(i)\|^2 + \|R(x(i),\varphi(i))\,k(i)\|^2 - \mu\|\varphi(i)\|^2. \qquad (56)$$
The gradient of the objective function (55), with respect to the k in the j-th iteration of the Newton algorithm (Section 5.2), was given by
$$\nabla_{k(l)} J = \tau\sum_{i=0}^{N-1}\nabla_{k(l)} F(i) = \tau\left(\nabla_{k(l)} F(l) + \sum_{i=l+1}^{N-1}\nabla_{k(l)} F(i)\right), \qquad (57)$$
for l = 0, 1, 2, …, N−1. Note that, because of the causality principle, the terms for i < l were equal to zero.
As the terms under the sum on the right-hand side of (57) depended on k(l) implicitly through x̂(i) for i > l, this gave
$$\nabla_{k(l)} F(i) = \nabla_{k(l)}\hat{x}(i)\;\nabla_{\hat{x}(i)}^T F(i). \qquad (58)$$
Using the chain rule for ordered derivatives, from (54) it followed that
$$\nabla_{k(l)}\hat{x}(i) = \nabla_{k(l)}\hat{x}(i-1)\;\nabla_{\hat{x}(i-1)}\hat{\phi}(i-1), \qquad (59)$$
for i = l+2, …, N−1.
To simplify the following expressions, we introduced
$$\sigma(l) = \sum_{i=l+1}^{N-1}\nabla_{k(l)}^T F(i). \qquad (60)$$
Starting from l = N−2, i = N−1, it followed that
$$\sigma(N-2) = \nabla_{k(N-2)}^T F(N-1), \qquad (61)$$
then, substituting (58) in (61)
$$\sigma(N-2) = \nabla_{k(N-2)}\hat{x}(N-1)\;\nabla_{\hat{x}(N-1)}^T F(N-1), \qquad (62)$$
and substituting (59) in (62), and taking into account (54), we obtained
$$\sigma(N-2) = \nabla_{k(N-2)}\hat{\phi}(N-2)\;\nabla_{\hat{x}(N-1)}^T F(N-1). \qquad (63)$$
The above procedure could be further continued for l = N−3, i = N−1, N−2; l = N−4, i = N−3, N−2, N−1; etc., and the final recursive expressions for the calculation of the gradient ∇_k J had the following form:
$$\omega(N-1) = 0, \qquad (64)$$
$$\omega(l) = \nabla_{\hat{x}(l+1)}^T F(l+1) + \nabla_{\hat{x}(l+1)}\hat{\phi}(l+1)\cdot\omega(l+1), \qquad (65)$$
$$\sigma(l) = \nabla_{k(l)}\hat{\phi}(l)\cdot\omega(l), \qquad (66)$$
$$\nabla_{k(l)} J = \tau\left(\nabla_{k(l)} F(l) + \sigma^T(l)\right), \qquad (67)$$
for l = N−2, N−3, …, 0.
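A minimal sketch of this backward sweep is given below (ours; it assumes the partial derivatives of the discrete dynamics φ̂ and of the stage cost F are already available at each grid point, e.g., from the hyper-dual evaluation discussed in this subsection, and it stores them as plain NumPy arrays).

```python
import numpy as np

def gradient_wrt_gains(dF_dx, dF_dk, dphi_dx, dphi_dk, tau):
    """Backward recursion (64)-(67) for the gradient of J with respect to k(l).

    Assumed inputs, one entry per grid point l = 0..N-1:
      dF_dx[l]   : gradient of the stage cost F(l) w.r.t. the extended state (vector)
      dF_dk[l]   : gradient of F(l) w.r.t. the gains k(l) (vector)
      dphi_dx[l] : Jacobian of the discrete dynamics phi_hat(l) w.r.t. the state
      dphi_dk[l] : Jacobian of phi_hat(l) w.r.t. k(l)
    """
    N = len(dF_dx)
    grad = [None] * N
    omega = np.zeros_like(dF_dx[0])                      # omega(N-1) = 0, Eq. (64)
    grad[N - 1] = tau * dF_dk[N - 1]
    for l in range(N - 2, -1, -1):
        omega = dF_dx[l + 1] + dphi_dx[l + 1].T @ omega  # Eq. (65)
        sigma = dphi_dk[l].T @ omega                     # Eq. (66)
        grad[l] = tau * (dF_dk[l] + sigma)               # Eq. (67)
    return grad
```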
Before giving the recursive expressions for computing the Hessians, we adopted the usual convention that, for some function z = f(v), the Hessian is defined by ∇²_v z = ∇_v(vec ∇_v z). Then, in order to calculate ∇²_k J in the j-th iteration of the Newton algorithm (Section 5.2), the derivatives of Equations (64)–(67) with respect to the vectors x̂ and k were taken. We obtained the following recursive matrix relations:
$$W(N-1) = 0, \qquad (68)$$
$$W(l) = \nabla_{\hat{x}(l+1)}\left(\mathrm{vec}\,\nabla_{\hat{x}(l+1)}^T F(l+1)\right) + \nabla_{\hat{x}(l+1)}\left(\mathrm{vec}\,\nabla_{\hat{x}(l+1)}\hat{\phi}(l+1)\right)\cdot W(l+1), \qquad (69)$$
$$S(l) = \nabla_{k(l)}\left(\mathrm{vec}\,\nabla_{k(l)}\hat{\phi}(l)\right)\cdot W(l), \qquad (70)$$
$$\nabla_{k(l)}^2 J = \tau\left(\nabla_{k(l)}\left(\mathrm{vec}\,\nabla_{k(l)} F(l)\right) + S(l)\right), \qquad (71)$$
for l = N−2, N−3, …, 0.
In the previous expressions, (64)–(71), the derivatives of the system dynamics (54) and of the sub-integral function (56) with respect to the vectors x̂ and k,
$$\nabla_{\hat{x}(l+1)}\hat{\phi}(l+1),\quad \nabla_{\hat{x}(l+1)} F(l+1),\quad \nabla_{k(l)}\hat{\phi}(l),\quad \nabla_{k(l)} F(l),\quad \nabla_{\hat{x}(l+1)}\left(\mathrm{vec}\,\nabla_{\hat{x}(l+1)}\hat{\phi}(l+1)\right),\quad \nabla_{\hat{x}(l+1)}\left(\mathrm{vec}\,\nabla_{\hat{x}(l+1)}^T F(l+1)\right),\quad \nabla_{k(l)}\left(\mathrm{vec}\,\nabla_{k(l)}\hat{\phi}(l)\right),\quad \nabla_{k(l)}\left(\mathrm{vec}\,\nabla_{k(l)} F(l)\right), \qquad (72)$$
were calculated by the hyper-dual number method [58,59], which enabled us to obtain the first- and second-order derivatives in one step, accurate to machine epsilon, which cannot be achieved by the finite-difference method. Compared to the widely used and popular method of adjoint automatic differentiation, the hyper-dual number method is more robust and easier to apply.
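To make the hyper-dual evaluation concrete, the sketch below implements a minimal scalar hyper-dual type (our own illustration, not the implementation of [58,59]; only addition, multiplication and the sine are defined) and recovers the value, first derivative and second derivative of f(x) = x·sin(x) in a single evaluation.

```python
import math

class HyperDual:
    """Scalar hyper-dual number a + b*e1 + c*e2 + d*e1*e2 with e1**2 = e2**2 = 0.

    Evaluating f(HyperDual(x, 1.0, 1.0, 0.0)) yields f(x) in .f, f'(x) in .e1
    (and .e2) and f''(x) in .e12, exact up to machine precision.
    """
    def __init__(self, f, e1=0.0, e2=0.0, e12=0.0):
        self.f, self.e1, self.e2, self.e12 = f, e1, e2, e12

    def __add__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.f + o.f, self.e1 + o.e1, self.e2 + o.e2, self.e12 + o.e12)

    def __mul__(self, o):
        o = o if isinstance(o, HyperDual) else HyperDual(o)
        return HyperDual(self.f * o.f,
                         self.f * o.e1 + self.e1 * o.f,
                         self.f * o.e2 + self.e2 * o.f,
                         self.f * o.e12 + self.e1 * o.e2 + self.e2 * o.e1 + self.e12 * o.f)

    def sin(self):
        s, c = math.sin(self.f), math.cos(self.f)
        return HyperDual(s, c * self.e1, c * self.e2,
                         c * self.e12 - s * self.e1 * self.e2)

x = HyperDual(0.7, 1.0, 1.0, 0.0)
y = x * x.sin()                        # f(x) = x*sin(x)
print(y.f, y.e1, y.e12)                # value, first derivative, second derivative
```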
The preceding expressions (54)–(71) show the derivation of the recursive matrix relations for the calculation of the gradients and Hessians with respect to the vector k containing the gains of the PID controller. The matrix relations for calculating the gradients and Hessians with respect to the vector φ containing the sensor faults can be obtained by the same procedure, with obvious changes in notation, and there is no need to give them here.
A flowchart of the entire algorithmic procedure described in Section 5.1, Section 5.2 and Section 5.3 is shown in Figure 2.

6. Results and Discussions

6.1. Numerical Simulations

In this subsection, the simulation results for control of the robot manipulator, using the controller synthesis algorithm that is described in the previous sections, are presented.
Figure 3 shows the considered structure of a robot manipulator. The structure of a robot manipulator of this type was also the subject of consideration in [60,61]; however, for the sake of completeness, in this paper a more detailed derivation of the mathematical model is presented. Detailed expressions for kinetic and potential energy, and parameters that define the bounds of the robot’s dynamic properties, are given (see Appendix B and Appendix C).
The first joint is rotary, which enables the rotation of the robot around the vertical axis, while the second and third are prismatic joints, which enable movement in the vertical and horizontal directions, respectively. Such robot manipulators have a workspace in the shape of a cylinder. Robot manipulators with a cylindrical structure, due to their compact design, are most often used for simple assembly tasks, handling machine tools, applying coatings, etc. In addition, such robots can be fast, so a compromise should be sought, because speed implies problems with rotational inertia, which can affect repeatability if the entire system is not configured in accordance with its capabilities. It is usual for cylindrical robots that the up-and-down motion is achieved by pneumatic or hydraulic cylinders, while the rotation is usually achieved by electric motors and gears.
To model the kinematics and dynamics we denoted by q 1 (rad) the rotational coordinate of the first joint, and by q 2 (m) and q 3 (m) the translational coordinates of the second and third joints, respectively.
First, the forward kinematic equations for the cylindrical structure of a robot manipulator with 3DOF were developed using the Denavit–Hartenberg convention. The Denavit–Hartenberg parameters are given in Table 1, where θ = q1, d1 = 0, d2 = L1 + q2 and d3 = L2 + q3.
The homogeneous transformation matrices were as follows:
$$A_1 = \mathrm{Rot}(z, q_1) = \begin{bmatrix}\cos q_1 & -\sin q_1 & 0 & 0\\ \sin q_1 & \cos q_1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix}, \qquad (73)$$
$$A_2 = \mathrm{Tran}(0, 0, L_1 + q_2) = \begin{bmatrix}1 & 0 & 0 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 1 & L_1 + q_2\\ 0 & 0 & 0 & 1\end{bmatrix}, \qquad (74)$$
$$A_3 = \mathrm{Tran}(0, L_2 + q_3, 0) = \begin{bmatrix}1 & 0 & 0 & 0\\ 0 & 1 & 0 & L_2 + q_3\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix}. \qquad (75)$$
The transformation matrices were as follows:
$${}^0T_1 = A_1 = \begin{bmatrix}\cos q_1 & -\sin q_1 & 0 & 0\\ \sin q_1 & \cos q_1 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\end{bmatrix}, \qquad (76)$$
$${}^0T_2 = A_1 A_2 = \begin{bmatrix}\cos q_1 & -\sin q_1 & 0 & 0\\ \sin q_1 & \cos q_1 & 0 & 0\\ 0 & 0 & 1 & L_1 + q_2\\ 0 & 0 & 0 & 1\end{bmatrix}, \qquad (77)$$
$${}^0T_3 = A_1 A_2 A_3 = \begin{bmatrix}\cos q_1 & -\sin q_1 & 0 & -(L_2 + q_3)\sin q_1\\ \sin q_1 & \cos q_1 & 0 & (L_2 + q_3)\cos q_1\\ 0 & 0 & 1 & L_1 + q_2\\ 0 & 0 & 0 & 1\end{bmatrix}, \qquad (78)$$
where ⁰T₃ was the corresponding Denavit–Hartenberg matrix and, based on its last column, we had the position vector of the robot end-effector in Cartesian space, as follows:
$$\begin{bmatrix}p_x\\ p_y\\ p_z\end{bmatrix} = \begin{bmatrix}-(L_2 + q_3)\sin q_1\\ (L_2 + q_3)\cos q_1\\ L_1 + q_2\end{bmatrix}. \qquad (79)$$
The solution of the inverse kinematic problem could be obtained, based on the following equation:
$$A_1^{-1}\;{}^0T_3 = A_2 A_3, \qquad (80)$$
$$\begin{bmatrix}1 & 0 & 0 & p_x\cos q_1 + p_y\sin q_1\\ 0 & 1 & 0 & -p_x\sin q_1 + p_y\cos q_1\\ 0 & 0 & 1 & p_z\\ 0 & 0 & 0 & 1\end{bmatrix} = \begin{bmatrix}1 & 0 & 0 & 0\\ 0 & 1 & 0 & L_2 + q_3\\ 0 & 0 & 1 & L_1 + q_2\\ 0 & 0 & 0 & 1\end{bmatrix}. \qquad (81)$$
Based on the last columns of the previous matrices on the left and right sides, we obtained:
$$p_x\cos q_1 + p_y\sin q_1 = 0 \;\;\Rightarrow\;\; q_1 = \arctan\!\left(-\frac{p_x}{p_y}\right), \qquad (82)$$
$$p_z = L_1 + q_2 \;\;\Rightarrow\;\; q_2 = -L_1 + p_z, \qquad (83)$$
$$-p_x\sin q_1 + p_y\cos q_1 = L_2 + q_3 \;\;\Rightarrow\;\; q_3 = -L_2 - p_x\sin q_1 + p_y\cos q_1. \qquad (84)$$
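The kinematic relations above can be checked with a few lines of Python (our sketch; the link lengths are placeholder values, and arctan2 is used as a quadrant-safe variant of the arctangent in (82)).

```python
import numpy as np

def forward_kinematics(q, L1, L2):
    """End-effector position of the cylindrical 3-DOF arm, Eq. (79)."""
    q1, q2, q3 = q
    return np.array([-(L2 + q3) * np.sin(q1),
                      (L2 + q3) * np.cos(q1),
                       L1 + q2])

def inverse_kinematics(p, L1, L2):
    """Joint coordinates from the end-effector position, Eqs. (82)-(84)."""
    px, py, pz = p
    q1 = np.arctan2(-px, py)
    q2 = pz - L1
    q3 = -L2 - px * np.sin(q1) + py * np.cos(q1)
    return np.array([q1, q2, q3])

# quick round-trip consistency check (placeholder link lengths)
L1, L2 = 0.5, 0.3
q = np.array([0.8, 0.2, 0.1])
print(np.allclose(inverse_kinematics(forward_kinematics(q, L1, L2), L1, L2), q))
```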
In order to derive the corresponding dynamic equations of motion, the kinetic energy of the robot was first determined. The overall kinetic energy was calculated as the sum of kinetic energies from corresponding links, as follows:
$$T = T_1 + T_2 + T_3, \qquad (85)$$
where
$$T_1 = \tfrac{1}{2} I_1\dot{q}_1^2, \qquad (86)$$
$$T_2 = \tfrac{1}{6} m_2\left(L_A^2\dot{q}_1^2 + 3\dot{q}_2^2\right), \qquad (87)$$
$$T_3 = \frac{m_3}{2L_3}\left(\tfrac{1}{3}L_3^3\dot{q}_1^2 + L_3^2\left(L_2 + q_3\right)\dot{q}_1^2 + L_3\left(L_2 + q_3\right)^2\dot{q}_1^2 + L_3\dot{q}_2^2 + L_3\dot{q}_3^2\right). \qquad (88)$$
Next, the overall potential energy was calculated, as the sum of potential energies from corresponding links, as follows:
$$U = U_1 + U_2 + U_3, \qquad (89)$$
where
$$U_1 = \tfrac{1}{2} m_1 g H_1, \qquad (90)$$
$$U_2 = m_2 g\left(L_1 + q_2\right), \qquad (91)$$
$$U_3 = m_3 g\left(L_1 + q_2\right). \qquad (92)$$
The Euler–Lagrange equations of motion for a considered robot manipulator were given by
$$\tau_i = \frac{d}{dt}\left(\frac{\partial T}{\partial\dot{q}_i}\right) - \frac{\partial T}{\partial q_i} + \frac{\partial U}{\partial q_i}, \qquad i = 1, 2, 3, \qquad (93)$$
where T was the total kinetic energy defined by (85)–(88), U was the total potential energy defined by (89)–(92), and τ i was the torque/force applied to the i-th robot link. Energy dissipation in rotational joints, and viscous friction in translational joints were neglected.
Equation (93) can be written in the matrix form (1), such that q = [q1 q2 q3]^T, u = [τ1 τ2 τ3]^T and
$$M(q) = \begin{bmatrix}M_{11} & 0 & 0\\ 0 & m_2 + m_3 & 0\\ 0 & 0 & m_3\end{bmatrix}, \qquad (94)$$
$$M_{11} = I_1 + L_2 L_3 m_3 + L_2^2 m_3 + \tfrac{1}{3}\left(L_A^2 m_2 + L_3^2 m_3\right) + \left(L_3 m_3 + 2L_2 m_3\right)q_3 + m_3 q_3^2, \qquad (95)$$
$$C(q,\dot{q})\,\dot{q} = \begin{bmatrix}\left(L_3 m_3 + 2L_2 m_3 + 2m_3 q_3\right)\dot{q}_1\dot{q}_3\\ 0\\ -\tfrac{1}{2}\left(L_3 m_3 + 2L_2 m_3 + 2m_3 q_3\right)\dot{q}_1^2\end{bmatrix}, \qquad (96)$$
$$g(q) = \begin{bmatrix}0\\ g\left(m_2 + m_3\right)\\ 0\end{bmatrix}. \qquad (97)$$
A detailed derivation of expressions for the kinetic and potential energy of the considered robot manipulator is given in Appendix B.
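A minimal Python rendering of the model terms (94)–(97) is given below (our sketch; the parameter dictionary is assumed to carry the quantities listed in Table 2, and the inner function returns the product C(q,q̇)q̇ directly rather than the matrix C).

```python
import numpy as np

def dynamics_terms(q, params):
    """Inertia matrix, Coriolis/centrifugal vector and gravity vector of
    Eqs. (94)-(97) for the cylindrical 3-DOF arm (illustrative sketch)."""
    q1, q2, q3 = q
    I1, m2, m3, L2, L3, LA, g = (params[k] for k in
                                 ("I1", "m2", "m3", "L2", "L3", "LA", "g"))
    M11 = (I1 + L2*L3*m3 + L2**2*m3 + (LA**2*m2 + L3**2*m3)/3.0
           + (L3*m3 + 2*L2*m3)*q3 + m3*q3**2)
    M = np.diag([M11, m2 + m3, m3])                      # Eq. (94)-(95)

    def coriolis_times_dq(dq):
        dq1, dq2, dq3 = dq
        h = L3*m3 + 2*L2*m3 + 2*m3*q3
        return np.array([h*dq1*dq3, 0.0, -0.5*h*dq1**2]) # Eq. (96)

    g_vec = np.array([0.0, g*(m2 + m3), 0.0])            # Eq. (97)
    return M, coriolis_times_dq, g_vec
```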
The numerical values of the parameters relevant to the derivation of the dynamic model of the considered robot, using the Euler–Lagrange formalism, are given in Table 2.
Furthermore, for the appropriate initialization and numerical efficiency of the proposed algorithmic procedure for PID controller synthesis, we needed the constant values of the coefficients defined in properties (4)–(6). The numerical values of these parameters are given in Table 3, and the expressions for determining them for the considered cylindrical robot are given in Appendix C.
The parameters of the algorithm proposed in this paper, for this particular robot manipulator, were set as follows. The vector of initial conditions of the generalized coordinates was q_0 = 0. The initial values of the PID controller gains satisfying the stability conditions were chosen as
$$K_P^0 = \begin{bmatrix}120 & 0 & 0\\ 0 & 100 & 0\\ 0 & 0 & 50\end{bmatrix}, \qquad K_D^0 = \begin{bmatrix}110 & 0 & 0\\ 0 & 50 & 0\\ 0 & 0 & 30\end{bmatrix}, \qquad K_I^0 = \begin{bmatrix}50 & 0 & 0\\ 0 & 50 & 0\\ 0 & 0 & 20\end{bmatrix}. \qquad (98)$$
The stopping criterion of the bisection method was chosen as ϵ = 10⁻³. The criterion for stopping Newton's method was set as ε = 10⁻⁴. The final time was t_f = 2 s, and the number of optimization time intervals was N = 2000, so that the sampling interval was τ = 0.002 s.
The desired positions of the generalized coordinates were selected as follows: q d 1 = π / 2 ; q d 2 = 0.2 m; q d 3 = 0.1 m. By running the algorithm, we obtained the following PID controller gains:
$$K_P = \begin{bmatrix}126.6367 & 0 & 0\\ 0 & 103.1782 & 0\\ 0 & 0 & 10.9068\end{bmatrix}, \qquad K_D = \begin{bmatrix}50.3292 & 0 & 0\\ 0 & 4.0766 & 0\\ 0 & 0 & 42.3659\end{bmatrix}, \qquad K_I = \begin{bmatrix}45.9922 & 0 & 0\\ 0 & 79.7230 & 0\\ 0 & 0 & 33.0817\end{bmatrix}, \qquad (99)$$
and the minimum L2-gain, μ = 532.3096.
The time responses of the rotational coordinates of the first joint q 1 and the translational coordinates of the second and third joints, q 2 and q 3 , respectively, are presented in Figure 4. It can be seen from the figures that the transient response lasted approximately 0.8 s. The duration of the transient response could be further improved by adding weighting matrices into the objective function (47). When selecting these matrices, one should strive to reach a compromise between the time required to achieve the desired position, and the maximum value of the control torque/force, such that the effect of the sensor faults is kept under a permissible level, and ensures the stability of the closed-loop system.
Figure 5 illustrates the time dependence of the positioning errors. It can be seen that the errors from the desired positions were approximately 10⁻³. These errors could be further reduced by reducing the parameters ϵ and ε of the bisection and Newton methods. Reducing these parameters would mean an increase in the number of iterations and, thus, the execution time of the algorithm. Furthermore, the positioning errors could also be reduced by choosing a smaller sampling time τ of the Adams method; however, as is well known, there is a minimal value of τ that guarantees numerical stability.
Figure 6 shows the time dependence of the force and torques applied to the robot links. Furthermore, Figure 7 illustrates the simulation results for the time dependence of the sensor faults calculated by the algorithm proposed in Section 5.1 and Section 5.2. Note that f p 1 , f p 2 , f p 3 , f v 1 , f v 2 and f v 3 are elements of vector φ , which is incorporated in the objective function of the zero-sum differential game (17), i.e., (47) and, since φ is the maximizing player, the sensor fault functions shown in Figure 7 affected the robot manipulator system in the worst possible manner. The obtained results can be interpreted as a sensor failure due to a deviation belonging to the class of bounded L 2 functions.
It is obvious from the figures that the robot manipulator reached the desired positions at an acceptable settling time, with a negligible steady state error and with acceptable amounts of applied forces and torques to achieve the desired motion: therefore, it can be concluded that the proposed control strategy is efficient in the presence of sensor faults.

6.2. Discussion of Comparison with Other Methods

Here, we discuss the features of our approach, in comparison to other similar approaches.
In fault-tolerant control approaches based on LMI formalism (see, for example, [6,7,23,29] and references therein), the nonlinear dynamics of the robot manipulator must be linearized, which means that robustness cannot be guaranteed in all operating points. Then, the decentralized PID controller must be transformed into a state feedback controller or an output feedback controller, and the optimization problem is presented in the form of an LMI, in which it is necessary to determine the elements of the positive-definite Lyapunov matrix and the transformation matrix. In the case of a robot with 3DOF, the Lyapunov matrix is 6 × 6 and the transformation matrix is 3 × 6: this means that the problem has at least 21 + 9 optimization parameters. In our approach, the PID controller gains were directly optimized, which meant that we only had 9 optimization parameters. Furthermore, the Mehrotra-type predictor–corrector variant of interior-point method algorithms is commonly used to solve the LMI problem: in our approach, we use Newton's method, which—near the saddle point of the associated differential game—produces a sequence that quadratically converges to a desired solution, while the Mehrotra-type predictor–corrector has linear convergence. Thus, compared to methods based on solving LMI, the approach proposed in this paper has fewer optimization variables and requires fewer iterations.
Many of the approaches to fault-tolerant control (see, for example, [2,6,12] and references therein), in addition to the synthesis of the controller, also imply a certain detection or estimation of the faults that appear in the nonlinear dynamic system: this actually means additional computational requirements that increase with the number of state variables that need to be estimated due to the synthesis of an additional subsystem. Compared to these approaches, our approach does not require an additional subsystem to determine how to modify the controller structure and gains, but focuses on the L 2 -gain robust stability of the robot manipulator, considering the worst-case scenario of sensor faults rather than the desired performance for each fault occurrence scenario.
The algorithmic procedure presented in this paper is intended for the offline solution of the zero-sum differential game related to the L 2 -gain optimal control problem, with the explicit calculation of the PID controller gains: therefore, the computational complexity of our approach is not as much of a limitation as for approaches intended for online execution, such as model predictive control (see, for example, [15,16] and references therein).

7. Conclusions

In this paper, the tuning of PID-type controller gains for robot manipulators affected by sensor faults, using an algorithmic procedure that gives an explicit solution to the L 2 -gain optimality criterion and related zero-sum differential game, is presented. The main contribution of the paper includes the integration of the simple bisection method, the Newton method for solution of related zero-sum differential games without solving the HJI equation, the Adams method for time discretization, and hyper-dual numbers, to provide an effective method for control law synthesis. By applying Lyapunov stability analysis, and the dissipativity theory and passivity properties of systems described by Euler–Lagrange equations, we derived local L 2 stability conditions that we used for appropriate initialization of the bisection method, and for ensuring and controlling the convergence of Newton’s method. The simulation results for control of the 3DOF cylindrical robot manipulator showed that the proposed algorithmic procedure could efficiently calculate the controller gains, in order to position the system affected by sensor faults, as desired.
The extension of the proposed approach could be continued in the following directions:
  • Although the case of sensor faults is considered in this paper, the proposed algorithm for control law synthesis could easily be extended to the case of dynamic systems affected by actuator faults, without significant increase in its complexity.
  • Improvements in the control strategy proposed in this paper could also go towards the synthesis of a complete model-free control law, i.e., control law without gravity vector compensation: this would complicate the derivation of the stability conditions, and the controller gains should be presented as nonlinear functions of generalized coordinates.
  • Instead of the initial conditions being known in advance, one could consider a case where the initial conditions were treated as an unknown uncertainty, i.e., the variables of the min–max optimization problem that maximizes the objective function.
  • From the numerical optimization algorithms point of view, to perform a detailed analysis and comparison between a proposed algorithm and other existing methods, such as genetic algorithm or particle swarm optimization, in terms of convergence, accuracy and computational efficiency.
  • From the point of view of its application to a robot manipulator affected by sensor faults, to carry out a detailed experimental analysis and comparison between a proposed control strategy and other existing related fault-tolerant control approaches.

Author Contributions

Conceptualization, V.M. and J.K.; methodology, V.M. and J.K.; software, V.M.; validation, V.M., J.K. and M.L.; formal analysis, V.M.; investigation, V.M., J.K. and M.L.; resources, V.M.; data curation, V.M., J.K. and M.L.; writing—original draft preparation, V.M.; writing—review and editing, V.M., J.K. and M.L.; visualization, V.M., J.K. and M.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the European Regional Development Fund, Operational Program Competitiveness and Cohesion 2014–2020, grant number KK.01.1.1.04.0092.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
DOF   degrees of freedom
FTC   fault-tolerant control
HJI   Hamilton–Jacobi–Isaacs
LMI   linear matrix inequalities
PID   proportional–integral–derivative

Appendix A. Notation

Matrices and vectors are represented in bold upper and bold lower case, respectively. Scalars are represented in italic lower case. I is an identity matrix, and 0 is a null matrix. The dimensions of the matrices and vectors can generally be determined trivially from the context. The symbols ∇ and ∇² stand for the gradient and Hessian, respectively. The superscript T denotes transposition.
The vec(·) operator stacks the columns of a matrix one underneath the other. The Kronecker product of two matrices A (m × n) and B (p × q), denoted by A ⊗ B, is an mp × nq matrix defined by A ⊗ B = (a_{ij}B)_{ij}. The definitions of matrix differential calculus and the algebras related to Kronecker products can be found in [62,63].
The Euclidean norm of a vector is defined as ‖x‖ = √(xᵀx). λ_min{A} and λ_max{A} represent the smallest and the largest eigenvalues, respectively, of the symmetric positive-definite matrix A. The induced norm of a matrix A is defined as ‖A‖ = √(λ_max{AᵀA}) = σ_max{A}, where σ_max{A} represents the largest singular value of the matrix A. If A is a symmetric positive-definite matrix, this implies ‖A‖ = λ_max{A}; furthermore, for a symmetric positive-definite matrix, it holds that λ_min{A}‖x‖² ≤ xᵀAx ≤ λ_max{A}‖x‖².
L2(I, Rⁿ) stands for the standard Lebesgue space of vector-valued square-integrable and essentially bounded functions mapping an interval I ⊂ R to Rⁿ; this space is equipped with the L2 norm defined by ‖·‖_{L2} = √(∫_{t0}^{tf} ‖·‖² dt). We avoided explicitly showing the dependence of the variables on time when not needed.

Appendix B. Detailed Derivation of Expressions for the Kinetic and Potential Energy of the Cylindrical Robot

The movement of the first link of the robot can be viewed as a rotation around an axis, so the kinetic energy can be determined as follows:
$$T_1 = \tfrac{1}{2} I_1\dot{q}_1^2, \qquad (A1)$$
where I_1 is the dynamic moment of inertia, I_1 = ½ m_1 R². The potential energy of the first link is defined by
$$U_1 = -m_1\,\mathbf{g}^T p_1, \qquad (A2)$$
where p_1 = [0 0 H_1/2 1]^T is the vector of the position of the center of gravity of the first link in relation to the fixed coordinate system, and g = [0 0 −g 0]^T is the vector of gravitational acceleration. By inserting them into Equation (A2), we get
$$U_1 = \tfrac{1}{2} g m_1 H_1. \qquad (A3)$$
The kinetic energy of the second link is
$$T_2 = \tfrac{1}{2}\int_{(m)} v_2^2\, dm_2. \qquad (A4)$$
If we assume that the second link is homogeneous, then it follows:
$$\frac{dm_2}{m_2} = \frac{du_2}{L_A} \;\;\Rightarrow\;\; dm_2 = \frac{m_2}{L_A}\, du_2, \qquad (A5)$$
where d m 2 is infinitesimal mass and d u 2 is infinitesimal displacement. Substituting expression (A5) into (A4) and changing the bounds of integration, it follows:
$$T_2 = \frac{m_2}{2L_A}\int_0^{L_A} v_2^2\, du_2. \qquad (A6)$$
The position of the d m 2 in the robot coordinate system is determined by the position vector
$$R_2 = \begin{bmatrix}0 & u_2 & 0 & 1\end{bmatrix}^T. \qquad (A7)$$
The position vector of the mass d m 2 , with respect to the fixed coordinate system, is
$$p_2 = {}^0T_2\, R_2. \qquad (A8)$$
By inserting expressions (77) and (A7) into expression (A8), we get
$$p_2 = \begin{bmatrix}-u_2\sin q_1 & u_2\cos q_1 & L_1 + q_2 & 1\end{bmatrix}^T. \qquad (A9)$$
The velocity of the mass d m 2 is
$$v_2 = \frac{dp_2}{dt} = \begin{bmatrix}-u_2\dot{q}_1\cos q_1 & -u_2\dot{q}_1\sin q_1 & \dot{q}_2 & 0\end{bmatrix}^T, \qquad (A10)$$
and the square of the velocity is
$$v_2^2 = v_2\cdot v_2 = u_2^2\dot{q}_1^2\cos^2 q_1 + u_2^2\dot{q}_1^2\sin^2 q_1 + \dot{q}_2^2. \qquad (A11)$$
Inserting Equation (A11) into (A6), and integrating from 0 to L A , it follows:
$$T_2 = \frac{m_2}{2L_A}\left(\frac{L_A^3}{3}\dot{q}_1^2\cos^2 q_1 + \frac{L_A^3}{3}\dot{q}_1^2\sin^2 q_1 + L_A\dot{q}_2^2\right) = \frac{m_2}{2L_A}\left(\frac{L_A^3}{3}\dot{q}_1^2\left(\cos^2 q_1 + \sin^2 q_1\right) + L_A\dot{q}_2^2\right) = \tfrac{1}{6} m_2\left(L_A^2\dot{q}_1^2 + 3\dot{q}_2^2\right). \qquad (A12)$$
The potential energy of the second link is calculated as follows:
$$U_2 = -m_2\,\mathbf{g}^T p_2, \qquad (A13)$$
where the vector p 2 is determined by (A9) for u 2 = L A / 2 , so the potential energy is equal to
$$U_2 = -m_2\begin{bmatrix}0 & 0 & -g & 0\end{bmatrix}\begin{bmatrix}-\tfrac{1}{2}L_A\sin q_1\\ \tfrac{1}{2}L_A\cos q_1\\ L_1 + q_2\\ 1\end{bmatrix} = g m_2\left(L_1 + q_2\right). \qquad (A14)$$
The same assumptions about the homogeneity of the second link and the linearity of the mass distribution are applied to the calculation of the kinetic and potential energy of the third link.
The kinetic energy of the third link is
$$T_3 = \tfrac{1}{2}\int_{(m)} v_3^2\, dm_3, \qquad (A15)$$
and, with dm_3 = (m_3/L_3) du_3, it follows:
$$T_3 = \frac{m_3}{2L_3}\int_{-L_3}^{0} v_3^2\, du_3. \qquad (A16)$$
The position vector of the infinitesimal mass d m 3 in the robot coordinate system is
$$R_3 = \begin{bmatrix}0 & -u_3 & 0 & 1\end{bmatrix}^T. \qquad (A17)$$
The position vector of the mass d m 3 , with respect to the fixed coordinate system, is equal to
$$p_3 = {}^0T_3\, R_3 = \begin{bmatrix}u_3\sin q_1 - \left(L_2 + q_3\right)\sin q_1\\ \left(L_2 + q_3\right)\cos q_1 - u_3\cos q_1\\ L_1 + q_2\\ 1\end{bmatrix}, \qquad (A18)$$
where the transfer matrix from the third coordinate system to the fixed coordinate system is defined by (78). The velocity of the mass d m 3 is
$$v_3 = \frac{dp_3}{dt} = \begin{bmatrix}u_3\dot{q}_1\cos q_1 - \left(L_2 + q_3\right)\dot{q}_1\cos q_1 - \dot{q}_3\sin q_1\\ u_3\dot{q}_1\sin q_1 - \left(L_2 + q_3\right)\dot{q}_1\sin q_1 + \dot{q}_3\cos q_1\\ \dot{q}_2\\ 0\end{bmatrix}, \qquad (A19)$$
while the square of the velocity is
$$v_3^2 = v_3\cdot v_3 = \dot{q}_1^2\left(L_2 - u_3 + q_3\right)^2 + \dot{q}_2^2 + \dot{q}_3^2. \qquad (A20)$$
By inserting Equation (A20) into (A16), and after integrating within the limits from −L_3 to 0, we get the kinetic energy of the third link as follows:
$$T_3 = \frac{m_3}{2L_3}\left(\tfrac{1}{3}L_3^3\dot{q}_1^2 + L_3^2\left(L_2 + q_3\right)\dot{q}_1^2 + L_3\left(L_2 + q_3\right)^2\dot{q}_1^2 + L_3\dot{q}_2^2 + L_3\dot{q}_3^2\right). \qquad (A21)$$
The vector of the position of the center of gravity of the third link with respect to the fixed coordinate system is obtained by inserting u_3 = −L_3/2 into (A18):
$$p_3 = \begin{bmatrix}-\tfrac{1}{2}L_3\sin q_1 - \left(L_2 + q_3\right)\sin q_1\\ \left(L_2 + q_3\right)\cos q_1 + \tfrac{1}{2}L_3\cos q_1\\ L_1 + q_2\\ 1\end{bmatrix}. \qquad (A22)$$
The potential energy of the third link is calculated as follows:
$$U_3 = -m_3\,\mathbf{g}^T p_3, \qquad (A23)$$
according to which, we get
$$U_3 = -m_3\begin{bmatrix}0 & 0 & -g & 0\end{bmatrix}\begin{bmatrix}-\tfrac{1}{2}L_3\sin q_1 - \left(L_2 + q_3\right)\sin q_1\\ \left(L_2 + q_3\right)\cos q_1 + \tfrac{1}{2}L_3\cos q_1\\ L_1 + q_2\\ 1\end{bmatrix} = g m_3\left(L_1 + q_2\right). \qquad (A24)$$

Appendix C. Properties of the Dynamic Model of the Cylindrical Robot

Based on the inertia matrix defined by (94) and (95), using expression (4) with q = [q1 q2 q3]^T and ξ = [ξ1 ξ2 ξ3]^T, the parameters a1, a2, c2 and d2 are determined as follows.
On the left-hand side of expression (4), we have
$$\left(I_1 + L_2 L_3 m_3 + \tfrac{1}{3}\left(L_A^2 m_2 + L_3^2 m_3\right) + L_3 m_3 q_3 + m_3\left(L_2 + q_3\right)^2\right)\xi_1^2 + \left(m_2 + m_3\right)\xi_2^2 + m_3\xi_3^2 \ge -\frac{\left(L_3 m_3 + 2L_2 m_3\right)^2 - 4 m_3 p}{4 m_3}\,\xi_1^2 + \left(m_2 + m_3\right)\xi_2^2 + m_3\xi_3^2 \ge \min\left\{-\frac{\left(L_3 m_3 + 2L_2 m_3\right)^2 - 4 m_3 p}{4 m_3},\; m_2 + m_3,\; m_3\right\}\left(\xi_1^2 + \xi_2^2 + \xi_3^2\right), \qquad (A25)$$
where
$$p = I_1 + L_2 L_3 m_3 + L_2^2 m_3 + \tfrac{1}{3}\left(L_A^2 m_2 + L_3^2 m_3\right), \qquad (A26)$$
from which it follows:
$$a_1 = \min\left\{-\frac{\left(L_3 m_3 + 2L_2 m_3\right)^2 - 4 m_3 p}{4 m_3},\; m_2 + m_3,\; m_3\right\} \;\;\Rightarrow\;\; a_1 = \min\left\{I_1 + \tfrac{1}{12}L_3^2 m_3 + \tfrac{1}{3}L_A^2 m_2,\; m_3\right\}. \qquad (A27)$$
Furthermore, on the right-hand side of expression (4), we have
$$\underbrace{\left(I_1 + L_2 L_3 m_3 + L_2^2 m_3 + \tfrac{1}{3}\left(L_A^2 m_2 + L_3^2 m_3\right)\right)}_{p}\,\xi_1^2 + \left(\left(L_3 m_3 + 2L_2 m_3\right)q_3 + m_3 q_3^2\right)\xi_1^2 + \left(m_2 + m_3\right)\xi_2^2 + m_3\xi_3^2 \le \left(a_2 + c_2\sqrt{q_1^2 + q_2^2 + q_3^2} + d_2\left(q_1^2 + q_2^2 + q_3^2\right)\right)\left(\xi_1^2 + \xi_2^2 + \xi_3^2\right). \qquad (A28)$$
We can write the above expression as follows:
$$\left(a_2 - p + c_2\sqrt{q_1^2 + q_2^2 + q_3^2} - \left(L_3 m_3 + 2L_2 m_3\right)q_3 + d_2\left(q_1^2 + q_2^2 + q_3^2\right) - m_3 q_3^2\right)\xi_1^2 + \left(a_2 - m_2 - m_3\right)\xi_2^2 + \left(a_2 - m_3\right)\xi_3^2 \ge 0, \qquad (A29)$$
from which it follows:
$$a_2 \ge p, \qquad a_2 \ge m_2 + m_3, \qquad a_2 \ge m_3, \qquad c_2 \ge L_3 m_3 + 2L_2 m_3, \qquad d_2 \ge m_3, \qquad (A30)$$
that is, we have
$$a_2 = \min\left\{p,\; m_3\right\}, \qquad c_2 = L_3 m_3 + 2L_2 m_3, \qquad d_2 = m_3. \qquad (A31)$$
Based on the Coriolis vector (96), using expression (5), the parameters c 1 and d 1 are determined as follows.
First, we will square the expression (5):
$$\left\|C(q,\dot{q})\,\dot{q}\right\|^2 \le \left(c_1 + d_1\|q\|\right)^2\|\dot{q}\|^4. \qquad (A32)$$
On the left side of (A32) we have
$$\left\|C(q,\dot{q})\,\dot{q}\right\|^2 = \left(C(q,\dot{q})\,\dot{q}\right)^T C(q,\dot{q})\,\dot{q} = h^2\left(\dot{q}_3^2 + \tfrac{1}{4}\dot{q}_1^2\right)\dot{q}_1^2, \qquad (A33)$$
where
$$h = L_3 m_3 + 2L_2 m_3 + 2 m_3 q_3. \qquad (A34)$$
Inserting (A33) into (A32) gives the following inequality:
$$h^2\left(\dot{q}_3^2 + \tfrac{1}{4}\dot{q}_1^2\right)\dot{q}_1^2 \le b\left(\dot{q}_1^2 + \dot{q}_2^2 + \dot{q}_3^2\right)^2, \qquad (A35)$$
where
$$b = \left(c_1 + d_1\sqrt{q_1^2 + q_2^2 + q_3^2}\right)^2. \qquad (A36)$$
Furthermore, from (A35), it follows
$$\left(b - \tfrac{1}{4}h^2\right)\dot{q}_1^4 + 2\left(b - \tfrac{1}{2}h^2\right)\dot{q}_1^2\dot{q}_3^2 + b\left(\dot{q}_2^2 + \dot{q}_3^2\right)^2 + 2b\,\dot{q}_1^2\dot{q}_2^2 \ge 0, \qquad (A37)$$
which will be satisfied if
$$b \ge \tfrac{1}{4}h^2, \qquad c_1 + d_1\sqrt{q_1^2 + q_2^2 + q_3^2} \ge \frac{\sqrt{2}}{2}\left(L_3 m_3 + 2L_2 m_3 + 2 m_3 q_3\right), \qquad (A38)$$
so that, in the end, we get
$$c_1 = \frac{\sqrt{2}}{2}\left(L_3 m_3 + 2L_2 m_3\right), \qquad d_1 = \sqrt{2}\, m_3. \qquad (A39)$$
The parameter k_{g2} from (6) is equal to zero, because the considered cylindrical robot structure has no translational degree of freedom whose inclination with respect to the gravitational field changes. As the gravitational vector defined by (97) is not a function of the controlled coordinates, it also follows from (6) that the parameter k_{g1} is equal to zero.

Figure 1. Block diagram of closed-loop system, with algorithm for controller gains calculation.
Figure 2. Flowchart of algorithmic procedure for calculation of PID controller gains and determination of the “worst case” of sensor fault functions.
Figure 3. Computer model and schematic representation of a robot manipulator with three degrees of freedom.
Figure 4. Time responses of the rotational coordinates of the first joint, and of the translational coordinates of the second and third joints.
Figure 5. Time dependence of the positioning errors.
Figure 6. Time dependence of the force and torque applied to the robot links.
Figure 7. Time dependence of the “worst case” sensor faults obtained by proposed algorithms.
Table 1. Denavit–Hartenberg parameters of cylindrical manipulator.

Parameter    θ_i      d_i      a_i     α_i
Joint 1      θ_1      d_1      0       0
Joint 2      0        d_2      0       π/2
Joint 3      0        d_3      0       0
Unit         rad      m        m       rad
Table 2. Numerical values of the robot’s constant parameters.

Parameter    L_2     L_3     L_A     m_2     m_3     I_1      g
Value        0.5     0.3     0.4     2.5     1       0.1      9.81
Unit         m       m       m       kg      kg      kg·m²    m/s²
Table 3. Numerical values of the coefficients defined in properties (4)–(6).

Parameter    a_1      a_2      c_2     d_2     c_1      d_1
Value        0.2408   0.6633   1.3     1.0     0.9192   1.4142
Unit         kg·m²    kg·m²    kg·m    kg      kg·m     kg