Article

Observer-Based Model Reference Tracking Control of the Markov Jump System with Partly Unknown Transition Rates

School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, 516 Jungong Road, Yangpu District, Shanghai 200082, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(2), 914; https://doi.org/10.3390/app13020914
Submission received: 6 December 2022 / Revised: 29 December 2022 / Accepted: 2 January 2023 / Published: 9 January 2023

Abstract

This paper deals with the observer-based model reference tracking control problem for Markov jump systems with partly unknown transition rates. The main contributions are as follows. First, we design a descriptor observer for the given model via a matrix transformation. Then, a tracking control law composed of a feedforward compensator and a feedback control law is designed, based on the designed observer, by a variational argument. The feedback part stabilizes the system; the feedforward part is a complete parametric feedforward tracking compensator, and the two parts can be solved separately. A controller that stochastically stabilizes the system under partly unknown transition rates is derived through Lyapunov stability theory, and the parametric solution of the feedforward part is given by the generalized Sylvester equation. The algorithm and criteria are demonstrated by several examples and compared with existing results.

1. Introduction

Markov jump systems are composed of a series of continuous-time or discrete-time subsystems, and the jumps between the modes are governed by a Markov chain. Many practical systems with stochastic abrupt changes can be modeled as Markov jump systems. Thus, jump systems have been extensively studied and many useful results have been obtained. The stochastic stability of Markov jump systems was introduced by Ji and Chizeck [1]. Many researchers have addressed stochastic stability and stochastic admissibility for Markov jump systems, such as stability for discrete-time systems [2,3,4]. For continuous-time systems, stability and stabilization problems were solved in [5]; the stability of singular hybrid systems was discussed in [6], and nonlinear Markov systems were discussed in [7,8,9]. On the other hand, model reference adaptive control (MRAC) constructs a reference model to characterize the desired control performance of a closed-loop system [10] and has been widely used both in industry and in control theory. The authors of [11] designed a tracking controller for switched systems based on a reference model. MRAC has been applied in various fields, such as economics [12], vehicles [13,14], sleep monitoring [15], polysolenoid linear motors [16], and visual servo systems [17]. In [18], the model reference control problem for Markov systems was first discussed. However, model reference control for Markov jump systems with circular or elliptical orbits, such as satellites, did not appear until 2017 [19], and that work does not take the incompleteness of the state or unknown transition rates (TRs) into consideration. In practical engineering, the state of the system is often difficult to obtain.
Thus, observers that reconstruct the system state from the output are also widely used in the control field for traditional systems, e.g., the classical Luenberger observer [20,21,22,23] or robust observers [24,25]. In [26], a novel robust observer was designed; moreover, some sliding-mode observers were reviewed in [27,28]. For Markov systems, several observers have been designed, such as a proportional–integral observer [29], whose performance is excellent but which does not fit high-order systems. Wu studied the state estimation and sliding mode control of Markov jump systems [30] and a mode-dependent observer [31] for discrete Markov systems. Yang designed an observer for Markov jump systems [32] that can directly obtain the state estimate without any supplementary design. In this paper, the state is obtained by a series of transformations.
Many studies are based on the assumption that the transition matrix is completely known. In engineering, however, some entries of the transition matrix are hard to obtain, and some are completely inaccessible. Thus, the unknown part of the transition matrix has come into view; e.g., the information is incomplete in Markov chains or games [33,34,35], and the TRs are partly unknown [36,37,38,39,40]. In [38], the authors assumed that the lower bound of the unknown self-transition rate was known and successfully solved the stability problem. In practical applications, however, this condition is difficult to meet. In 2014, Kao attempted to solve the problem of a completely unknown self-transition rate in [41], but the proof was flawed. In [42], Park proposed necessary and sufficient conditions for solvability with partly unknown transition rates; however, the newly constructed function there still yields only a sufficient condition. In 2021, the author of [43] solved the H∞ filtering of a discrete Markov jump system with unknown transition rates, where the self-transition rate could be completely unknown. However, the self-transition probability of a discrete jump system is positive, while the self-transition rate of a continuous jump system is negative, so that method does not apply to continuous systems.
Although there are many studies on the control of Markov jump systems, very few have investigated the model reference tracking control of Markov jump systems. Because different reference models correspond to completely different algorithms, it is difficult to guarantee the tracking performance across different models, and the model reference tracking problem is much more complicated than the analysis or stabilization of Markov jump systems alone. Partly unknown transition rates and state incompleteness are still open issues. This motivated us to address the problem subject to partly unknown TRs based on an observer.
In this paper, we study observer-based model reference tracking control with partly unknown transition rates. First, an observer is designed for the given system by a descriptor augmentation transformation. Then the controller is designed by using the properties of the observer and a variational argument; it consists of two parts: a feedback controller and a feedforward compensator. We propose sufficient conditions for the stochastic stability of the system under the condition that the self-transition rate can be completely unknown; moreover, the parametric solution of the feedback controller is given using this condition. Finally, the parametric solution of the feedforward compensator is given using the generalized Sylvester equation. The feasibility of the theorems and algorithm is demonstrated via several numerical examples in the last part.
The notation adopted in this paper is standard. R^n is the n-dimensional real Euclidean space, and R^{m×n} is the normed linear space of all m-by-n real matrices. ‖x‖ represents the Euclidean norm on R^n or the induced operator norm on R^{m×n}. A^T is the transpose of a matrix (or vector) A. A < (≤) 0 means that A is a negative (semi-)definite symmetric matrix. In symmetric block matrices, we use “*” as an ellipsis for the terms induced by symmetry, diag{…} for a block-diagonal matrix, and sym(X) = X + X^T.

2. Problem Description and Preliminaries

Consider the continuous-time Markov jump system. The model of the system is defined by the following differential equations:
ẋ(t) = A_{γ_t} x(t) + B_{γ_t} u(t),   y(t) = C_{γ_t} x(t),
where A γ t , B γ t and C γ t are the parameter matrices of the system. x ( t ) R n , u ( t ) R m , y ( t ) R d are the state vector, input vector, and output vector, respectively. γ t , t > 0 is the Markov process that represents the jump mode of system (1), which takes values in a finite set U = { 1 , 2 , , N } . Moreover, the transition probability matrix Π is given by:
Pr{ γ_{t+Δ} = j | γ_t = i } = π_ij Δ + o(Δ) for i ≠ j, and 1 + π_ii Δ + o(Δ) for i = j,
where Δ > 0, lim_{Δ→0} o(Δ)/Δ = 0, and π_ij ≥ 0 (i ≠ j) denotes the transition rate from mode i to mode j, with π_ii = −∑_{j≠i} π_ij for every i. When the transition rates are partly unknown, the transition matrix can be described as follows:
Π = [ π_11, ?, …, π_1N ; ?, π_22, …, π_2N ; ⋮, ⋮, ⋱, ⋮ ; π_N1, ?, …, π_NN ],
where “?” denotes an unknown transition rate. To facilitate the discussion, for every i ∈ U, the index set U(i) is split as U(i) = U_k^(i) ∪ U_uk^(i), with
U_k^(i) = { j ∈ U | π_ij is known },
U_uk^(i) = { j ∈ U | π_ij is unknown }.
Moreover, if U_k^(i) ≠ ∅, it is further described as U_k^(i) = { k_1^i, k_2^i, …, k_m^i }, where k_j^i denotes the column index of the jth known element in the ith row of Π. We write A_i, B_i, and C_i for A_{γ_t}, B_{γ_t}, and C_{γ_t} when γ_t = i.
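The index-set bookkeeping above can be sketched numerically; the 3-mode transition-rate matrix below is hypothetical, with unknown ("?") entries encoded as NaN:

```python
import numpy as np

# Hypothetical 3-mode transition-rate matrix; NaN marks the "?" entries.
Pi = np.array([
    [-1.1,   np.nan,  0.6],
    [ 0.5,  -0.8,     0.3],
    [np.nan, np.nan, -0.8],
])

def index_sets(Pi):
    """Return (U_k, U_uk): known / unknown column indices for each row of Pi."""
    U_k, U_uk = [], []
    for row in Pi:
        known = [j for j, v in enumerate(row) if not np.isnan(v)]
        U_k.append(known)
        U_uk.append([j for j in range(len(row)) if j not in known])
    return U_k, U_uk

U_k, U_uk = index_sets(Pi)
print(U_k)   # [[0, 2], [0, 1, 2], [2]]
print(U_uk)  # [[1], [], [0, 1]]
```

Rows with an empty unknown set correspond to modes whose rates are fully known; only those rows can be checked for the zero row-sum property directly.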
Definition 1
([44]). The closed-loop system ẋ(t) = A_i x(t) + B_i u(t) with u(t) = 0 is stochastically stable (SS) if, for every initial condition x(0) = x_0 and γ(0) = γ_0, the following holds:
E{ ∫_{t_0}^{∞} ‖x(t)‖² dt | x_0, γ_0 } < ∞.
Here, E represents the mathematical expectation operator. This paper studies the model reference tracking control problem for continuous-time systems with a circular-like orbit. Without loss of generality, the reference signal can be generated by the following reference model:
ẋ_m(t) = A_{mγ_t} x_m(t),   y_m(t) = C_{mγ_t} x_m(t),
where x_m(t) ∈ R^{n_1} and y_m(t) ∈ R^{n_2} are the state vector and output vector of the reference model, and A_m and C_m are known matrices with appropriate dimensions. The purpose is to design a controller that makes the output y(t) track the reference output y_m(t) in the mean square sense:
E{ ∫_{t_0}^{∞} ‖y(t) − y_m(t)‖² dt | x_0, γ_0 } < ∞
for arbitrary initial values x_0, γ_0, and x_m0. To achieve this, for systems (1) and (7), a controller is designed in the form of
u ( t ) = K i x ( t ) + K m i x m ( t ) .
With this control law, the closed-loop system is stochastically stable and y(t) tracks y_m(t) in the mean square sense.
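As an illustration of the mean-square criteria above, the following minimal sketch simulates a Markov jump linear system with hypothetical 2-mode data, using an Euler discretization of both the dynamics and the mode-switching probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical Hurwitz modes and a generator matrix whose rows sum to zero.
A = [np.array([[-1.0, 0.5], [0.0, -2.0]]),
     np.array([[-3.0, 0.0], [1.0, -1.5]])]
Pi = np.array([[-0.6, 0.6], [0.8, -0.8]])

def simulate(x0, mode0, T=10.0, dt=1e-3):
    """Euler simulation of x'(t) = A_{gamma_t} x(t) with Markov mode switching,
    returning the sample-path cost  int_0^T ||x(t)||^2 dt."""
    x, mode = np.array(x0, dtype=float), mode0
    cost = 0.0
    for _ in range(int(T / dt)):
        # Leave the current mode with probability -pi_ii * dt (two modes here).
        if rng.random() < -Pi[mode, mode] * dt:
            mode = 1 - mode
        x = x + dt * (A[mode] @ x)
        cost += dt * float(x @ x)
    return cost

# Averaging over sample paths approximates E{ int ||x(t)||^2 dt | x0, gamma0 };
# a finite value is consistent with stochastic stability.
costs = [simulate([1.0, 2.0], 0) for _ in range(10)]
print(np.mean(costs))
```

This is only a plausibility check on sample paths, not a proof of stochastic stability; the LMI conditions later in the paper provide the rigorous test.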

3. Observer and Control Law Design

3.1. Observer Design

In order to apply a generalized augmented transformation to the system, we first define the following variables and augmented matrices:
x̄(t) = [ x(t) ; 0_p ],  Ā_i = [ A_i, 0_{n×p} ; 0_{p×n}, −I_p ],  Ē = [ I_n, 0_{n×p} ; 0_{p×n}, 0_p ],  B̄_i = [ B_i ; 0 ],  C̄_i = [ C_i, I_p ].
Based on the definitions above, the descriptor system can be constructed as:
E ¯ x ¯ ˙ ( t ) = A ¯ i x ¯ ( t ) + B ¯ i u ( t ) , y ( t ) = C ¯ i x ¯ ( t ) ,
where E ¯ R ( n + p ) × ( n + p ) , A ¯ i R ( n + p ) × ( n + p ) , B ¯ i R ( n + p ) × m , C ¯ i R p × ( n + p ) .
Let C̄_i^⊥ be a matrix whose columns form a basis of the orthogonal complement of the row space of C̄_i, which means C̄_i C̄_i^⊥ = 0. We define a coordinate transformation matrix
T̄_i = [ (C̄_i^⊥)^T ; C̄_i ] = [ T̄_i1, T̄_i2 ; T̄_i3, T̄_i4 ] ∈ R^{(n+p)×(n+p)},
where T ¯ i 1 R n × n , T ¯ i 2 R n × p , T ¯ i 3 R p × n , T ¯ i 4 R p × p .
T̄_i^{−1} = X̄_i = [ C̄_i^⊥ ((C̄_i^⊥)^T C̄_i^⊥)^{−1},  C̄_i^T (C̄_i C̄_i^T)^{−1} ].
We can easily obtain C̄_i T̄_i^{−1} = [ 0_{p×n}, I_p ]. For convenience, we denote T̄_i^{−1} as
T ¯ i 1 = R ¯ i 1 R ¯ i 2 R ¯ i 3 R ¯ i 4 R ( n + p ) × ( n + p ) ,
where R̄_i1 ∈ R^{n×n}, R̄_i2 ∈ R^{n×p}, R̄_i3 ∈ R^{p×n}, and R̄_i4 ∈ R^{p×p}. With the coordinate change x̄^(1)(t) = T̄_i x̄(t), we obtain
E ¯ i ( 1 ) x ¯ ˙ ( 1 ) ( t ) = A ¯ i ( 1 ) x ¯ ( 1 ) ( t ) + B ¯ i u ( t ) , y ( t ) = C i ¯ ( 1 ) x ( t ) = x ¯ 2 ( 1 ) ( t ) ,
where x̄^(1)(t) = [ x̄_1^(1)(t)^T, x̄_2^(1)(t)^T ]^T, Ā_i^(1) = Ā_i T̄_i^{−1}, Ē_i^(1) = Ē T̄_i^{−1}, and C̄_i^(1) = C̄_i T̄_i^{−1} = [ 0_{p×n}, I_p ]. Further, writing M̄_i = [ R̄_i1, R̄_i2 ], we obviously have Ē_i^(1) = [ M̄_i ; 0_{p×(n+p)} ].
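The coordinate transformation and its blockwise inverse can be checked numerically. The sketch below uses hypothetical dimensions (n = 3, p = 2) and an SVD-based null-space basis in the role of C̄_i^⊥:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 3, 2
Cbar = rng.standard_normal((p, n + p))   # full row rank almost surely

# Columns of Cperp span the null space of Cbar (so Cbar @ Cperp = 0),
# playing the role of the orthogonal complement in the text.
_, _, Vt = np.linalg.svd(Cbar)
Cperp = Vt[p:].T                          # (n+p) x n

# T = [Cperp^T; Cbar]; its inverse assembled blockwise as in the text.
T = np.vstack([Cperp.T, Cbar])
Tinv = np.hstack([Cperp @ np.linalg.inv(Cperp.T @ Cperp),
                  Cbar.T @ np.linalg.inv(Cbar @ Cbar.T)])

print(np.allclose(T @ Tinv, np.eye(n + p)))                                # True
print(np.allclose(Cbar @ Tinv, np.hstack([np.zeros((p, n)), np.eye(p)])))  # True
```

The second check reproduces the identity C̄_i T̄_i^{−1} = [ 0, I_p ] stated in the text.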
We define a new matrix as follows:
H̄_i^{−1} = [ R̄_i1, R̄_i2 ; 0_{p×n}, I_p ].
Since rank(H̄_i^{−1}) = n + p, H̄_i is well defined. Multiplying both sides of (14) by H̄_i gives:
E ¯ i ( 2 ) x ¯ ˙ ( 1 ) ( t ) = A ¯ i ( 2 ) x ¯ ( 1 ) ( t ) + B ¯ i ( 2 ) u ( t ) , y ( t ) = x ¯ 2 ( 1 ) ( t ) .
It can be seen that Ē_i^(2) = H̄_i Ē_i^(1), Ā_i^(2) = H̄_i Ā_i^(1), and B̄_i^(2) = H̄_i B̄_i. We further partition H̄_i as:
H ¯ i = H ¯ i 1 H ¯ i 2 ,
where H̄_i1 ∈ R^{(n+p)×n} and H̄_i2 ∈ R^{(n+p)×p}. From H̄_i H̄_i^{−1} = I_{n+p} and H̄_i^{−1} = [ M̄_i ; C̄_i^(1) ], we can further derive H̄_i1 M̄_i + H̄_i2 C̄_i^(1) = I_{n+p}; thus we have:
H ¯ i 1 M ¯ i = I n + p H ¯ i 2 C ¯ i ( 1 ) .
By partitioning H̄_i2 = [ H̄_i21 ; H̄_i22 ], Equation (17) can be rewritten as:
H̄_i1 M̄_i = [ I_n, −H̄_i21 ; 0, I_p − H̄_i22 ].
Then we derive:
Ē_i^(2) = H̄_i Ē_i^(1) = [ H̄_i1, H̄_i2 ] [ M̄_i ; 0_{p×(n+p)} ] = H̄_i1 M̄_i = [ I_n, −H̄_i21 ; 0, I_p − H̄_i22 ].
Similarly, partitioning Ā_i^(2) = H̄_i Ā_i^(1) = [ Ā_i11^(2), Ā_i12^(2) ; Ā_i21^(2), Ā_i22^(2) ] and B̄_i^(2) = H̄_i B̄_i = [ B̄_i1^(2) ; B̄_i2^(2) ], we obtain the following equation from the first block row:
ẋ̄_1^(1)(t) − H̄_i21 ẋ̄_2^(1)(t) = Ā_i11^(2) x̄_1^(1)(t) + Ā_i12^(2) x̄_2^(1)(t) + B̄_i1^(2) u(t).
Based on the above, the reduced-order linear observer is designed as follows:
x̄̂_1^(1)(t) = z(t) + H̄_i21 y(t),
ż(t) = Ā_i11^(2) z(t) + B̄_i1^(2) u(t) + ( Ā_i11^(2) H̄_i21 + Ā_i12^(2) ) y(t),
x̄̂(t) = T̄_i^{−1} [ x̄̂_1^(1)(t) ; y(t) ],
x̂(t) = R̄_i1 x̄̂_1^(1)(t) + R̄_i2 y(t).
In the above equations, x̄̂(t) ∈ R^{n+p} and x̄̂_1^(1)(t) ∈ R^n are the estimates of x̄(t) and x̄_1^(1)(t), respectively, and z(t) is an intermediate variable.
Lemma 1.
By the process above, we have the following equations:
(1) A_i R̄_i1 = R̄_i1 Ā_i11^(2) + R̄_i2 Ā_i21^(2);  (2) Ā_i21^(2) = −R̄_i3;
(3) R̄_i1 Ā_i12^(2) − R̄_i2 R̄_i4 − A_i R̄_i2 = 0;  (4) R̄_i2 + R̄_i1 H̄_i21 = 0;
(5) R̄_i1 B̄_i1^(2) = B_i;  (6) R̄_i3 x̄̂_1^(1)(t) + R̄_i4 y(t) = 0.
The proof of Lemma 1 is shown in Appendix A.

3.2. Control Law Design

According to Section 2, in order to design the feedback controller and the feedforward compensator, the existence conditions of the controllers should be determined first. The following theorem gives these existence conditions.
Theorem 1.
The problem has a solution if the system is SS and there exist matrices G_i ∈ R^{n×n_1} and H_i ∈ R^{m×n_2} satisfying the following equations:
Ġ_i = A_i G_i + B_i H_i − G_i A_m,   0 = C_i G_i − C_m.
Proof of Theorem 1. 
Let
δx̂(t) = x̂(t) − G_i x_m(t),  δu(t) = u(t) − H_i x_m(t),  δy(t) = y(t) − y_m(t).
Taking derivatives on both sides of the first expression, we obtain the following equation:
δx̂̇(t) = x̂̇(t) − G_i ẋ_m(t) − Ġ_i x_m(t).
Substituting the expression of x̂̇(t) from the observer and ẋ_m(t) = A_m x_m(t) yields:
= R̄_i1 x̄̂̇_1^(1)(t) + R̄_i2 ẏ(t) − G_i A_m x_m(t) − Ġ_i x_m(t).
We substitute x̄̂̇_1^(1)(t) by (22) in the above equation:
= R̄_i1 ( ż(t) + H̄_i21 ẏ(t) ) + R̄_i2 ẏ(t) − G_i A_m x_m(t) − Ġ_i x_m(t).
Similarly, substituting ż(t) from (22) and then adding and subtracting R̄_i1 B̄_i1^(2) H_i x_m(t), we obtain:
= R̄_i1 [ Ā_i11^(2) z(t) + B̄_i1^(2) u(t) + ( Ā_i12^(2) + Ā_i11^(2) H̄_i21 ) y(t) ] + R̄_i1 H̄_i21 ẏ(t) + R̄_i2 ẏ(t) − G_i A_m x_m(t) − Ġ_i x_m(t)
= R̄_i1 Ā_i11^(2) z(t) + ( R̄_i1 Ā_i12^(2) + R̄_i1 Ā_i11^(2) H̄_i21 ) y(t) + R̄_i1 B̄_i1^(2) ( u(t) − H_i x_m(t) ) + ( R̄_i1 H̄_i21 + R̄_i2 ) ẏ(t) − G_i A_m x_m(t) − Ġ_i x_m(t) + R̄_i1 B̄_i1^(2) H_i x_m(t).
By (22), we can conclude that z(t) = x̄̂_1^(1)(t) − H̄_i21 y(t), which means the above equation becomes:
= R̄_i1 Ā_i11^(2) ( x̄̂_1^(1)(t) − H̄_i21 y(t) ) + ( R̄_i1 Ā_i12^(2) + R̄_i1 Ā_i11^(2) H̄_i21 ) y(t) + ( R̄_i2 + R̄_i1 H̄_i21 ) ẏ(t) − ( G_i A_m + Ġ_i − B_i H_i ) x_m(t) + R̄_i1 B̄_i1^(2) ( u(t) − H_i x_m(t) ).
By (4) and (5) in Lemma 1 and the definitions in (23), we have:
= R̄_i1 Ā_i11^(2) x̄̂_1^(1)(t) + R̄_i1 Ā_i12^(2) y(t) + B_i δu(t) − ( G_i A_m + Ġ_i − B_i H_i ) x_m(t).
By (1) of Lemma 1, R̄_i1 Ā_i11^(2) = A_i R̄_i1 − R̄_i2 Ā_i21^(2); combined with (2) in Lemma 1, R̄_i1 Ā_i11^(2) = A_i R̄_i1 + R̄_i2 R̄_i3, which we plug into the above equation:
= ( A_i R̄_i1 + R̄_i2 R̄_i3 ) x̄̂_1^(1)(t) + R̄_i1 Ā_i12^(2) y(t) + B_i δu(t) − ( G_i A_m + Ġ_i − B_i H_i ) x_m(t)
= A_i δx̂(t) + R̄_i2 R̄_i3 x̄̂_1^(1)(t) + ( R̄_i1 Ā_i12^(2) − A_i R̄_i2 ) y(t) + B_i δu(t) − ( G_i A_m + Ġ_i − A_i G_i − B_i H_i ) x_m(t)
= A_i δx̂(t) + R̄_i2 ( R̄_i3 x̄̂_1^(1)(t) + R̄_i4 y(t) ) + ( R̄_i1 Ā_i12^(2) − A_i R̄_i2 − R̄_i2 R̄_i4 ) y(t) + B_i δu(t) − ( G_i A_m + Ġ_i − A_i G_i − B_i H_i ) x_m(t).
By Theorem 1 and Equations (3) and (6) in Lemma 1, the remaining extra terms vanish, and we are able to show that:
δx̂̇(t) = A_i δx̂(t) + B_i δu(t),   δy(t) = C_i δx̂(t) + ( C_i G_i − C_m ) x_m(t) = C_i δx̂(t).
Clearly, system (23) and system (1) have the same structure; hence, the state feedback control law u(t) = K_i x(t), which stabilizes system (1), yields a feedback δu(t) = K_i δx̂(t) that also stabilizes system (21). That is, the system
δẋ(t) = ( A_i + B_i K_i ) δx(t),   δy(t) = C_i δx(t)
is stochastically stable; thus, (6) holds. Then, combining (22) and (24), the controller can be rewritten as
u(t) = K_i x(t) + ( H_i − K_i G_i ) x_m(t),
from which we obtain K_mi = H_i − K_i G_i. Next, we design the state feedback controller and the feedforward tracking compensator.

4. Parameter Solutions

The above theorem gives the existence conditions of the controller. In this section, we will solve the feedback controller and the feedforward compensator, respectively.

4.1. State Feedback Control Law Design

From (25) and (26), the state feedback gain K_i must render the system SS. In the following theorem, based on stability theory and the LMI method, we propose a sufficient condition for the SS of the system when the transition rates are partly unknown; the parameter solution of the controller can then be derived from this condition.
Theorem 2.
System (1) with u(t) = K_i x(t) and partly unknown TRs is stochastically stable if there exist a set of symmetric positive-definite matrices X_i and a set of matrices Y_i, i ∈ U, satisfying:
[ ε_1k^i, Ω_1k^i ; *, Δ_1k^i ] < 0,  i ∈ U_k^(i),
[ ε_2k^i, Ω_2k^i ; *, Δ_2k^i ] < 0,  i ∈ U_uk^(i),
[ A_i X_i + X_i A_i^T, X_i ; *, −X_j ] ≤ 0,  j ∈ U_uk^(i), j ≠ i,
A_i X_i + X_i A_i^T + X_i ≤ 0,  i ∈ U_uk^(i).
Moreover, if the above inequalities are solvable, the controller gain can be computed from the relation K_i = Y_i X_i^{−1}, where
ε_1k^i = ( 1 + ∑_{j∈U_k^(i)} π_ij ) ( A_i X_i + X_i A_i^T ) + π_ii X_i + Y_i^T B_i^T + B_i Y_i,
Ω_1k^i = [ √π_{i,l_1^i} X_i, √π_{i,l_2^i} X_i, …, √π_{i,l_{m−1}^i} X_i ],
Δ_1k^i = −diag{ X_{l_1^i}, X_{l_2^i}, …, X_{l_{m−1}^i} },
ε_2k^i = ( 1 + ∑_{j∈U_k^(i)} π_ij ) ( A_i X_i + X_i A_i^T ) + Y_i^T B_i^T + B_i Y_i,
Ω_2k^i = [ √π_{i,k_1^i} X_i, √π_{i,k_2^i} X_i, …, √π_{i,k_m^i} X_i ],
Δ_2k^i = −diag{ X_{k_1^i}, X_{k_2^i}, …, X_{k_m^i} },  { l_1^i, l_2^i, …, l_{m−1}^i } = { k_1^i, k_2^i, …, k_m^i } \ { i }.
To prove Theorem 2, we first introduce the following lemma.
Lemma 2
([44]). The unforced system (1) (u(t) = 0) is SS if and only if there exists a set of symmetric positive-definite matrices P_i, i ∈ U, satisfying:
A_i^T P_i + P_i A_i + ∑_{j=1}^{N} π_ij P_j < 0.
Lemma 2 characterizes the SS of Markov jump systems. System (1) with u(t) = K_i x(t) can be rewritten as ẋ(t) = ( A_i + B_i K_i ) x(t). It follows from Lemma 2 that the SS condition of the system with the feedback controller is obtained simply by replacing A_i with A_i + B_i K_i.
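For candidate matrices, the coupled condition of Lemma 2 is straightforward to verify numerically; the sketch below uses hypothetical 2-mode data:

```python
import numpy as np

def ss_condition_holds(A_list, P_list, TR, tol=1e-9):
    """Check the Lemma-2-style condition A_i^T P_i + P_i A_i + sum_j pi_ij P_j < 0."""
    for i, (Ai, Pi) in enumerate(zip(A_list, P_list)):
        M = Ai.T @ Pi + Pi @ Ai
        for j, Pj in enumerate(P_list):
            M = M + TR[i, j] * Pj
        # M is symmetric; require its largest eigenvalue to be negative.
        if np.max(np.linalg.eigvalsh((M + M.T) / 2)) >= -tol:
            return False
    return True

# Hypothetical 2-mode data: both modes Hurwitz, rows of TR sum to zero.
A_list = [np.array([[-2.0, 1.0], [0.0, -3.0]]),
          np.array([[-1.5, 0.0], [0.5, -2.5]])]
TR = np.array([[-0.6, 0.6], [0.8, -0.8]])
# With identical P_i the coupling term cancels (each TR row sums to zero),
# so the check reduces to A_i + A_i^T < 0, which holds here.
P_list = [np.eye(2), np.eye(2)]
print(ss_condition_holds(A_list, P_list, TR))  # True
```

In general the P_i differ across modes and must be found by an LMI solver; this function only certifies a given candidate set.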
Proof of Theorem 2. 
Consider two cases: i U k ( i ) and i U u k ( i ) .
Case 1: i U k ( i ) .
Since ∑_{j=1}^{N} π_ij = 0, the SS inequality of the system with u(t) = K_i x(t) can be rewritten as:
( A_i + B_i K_i )^T P_i + P_i ( A_i + B_i K_i ) + ∑_{j=1}^{N} π_ij P_j + ∑_{j=1}^{N} π_ij ( A_i^T P_i + P_i A_i ) < 0.
Denote
θ_i = A_i^T P_i + K_i^T B_i^T P_i + P_i B_i K_i + P_i A_i + π_ii P_i + ∑_{j∈U_k^(i), j≠i} π_ij P_j + ∑_{j∈U_uk^(i)} π_ij P_j + ∑_{j=1}^{N} π_ij ( A_i^T P_i + P_i A_i )
= ( 1 + ∑_{j∈U_k^(i)} π_ij ) ( A_i^T P_i + P_i A_i ) + π_ii P_i + ∑_{j∈U_k^(i), j≠i} π_ij P_j + ∑_{j∈U_uk^(i)} π_ij ( P_j + A_i^T P_i + P_i A_i ) + K_i^T B_i^T P_i + P_i B_i K_i.
Thus, θ_i < 0 can be guaranteed by
( 1 + ∑_{j∈U_k^(i)} π_ij ) ( A_i^T P_i + P_i A_i ) + K_i^T B_i^T P_i + P_i B_i K_i + π_ii P_i + ∑_{j∈U_k^(i), j≠i} π_ij P_j < 0,
P_j + A_i^T P_i + P_i A_i ≤ 0,  j ∈ U_uk^(i).
Performing the congruence transformation with X_i = P_i^{−1} on both sides of (36), the resulting conditions hold by using (28) and (30) through the Schur complement.
Case 2: i ∈ U_uk^(i). Similarly, we denote
φ_i = A_i^T P_i + P_i A_i + K_i^T B_i^T P_i + P_i B_i K_i + π_ii P_i + ∑_{j∈U_k^(i)} π_ij P_j + ∑_{j∈U_uk^(i), j≠i} π_ij P_j + ∑_{j=1}^{N} π_ij ( A_i^T P_i + P_i A_i )
= ( 1 + ∑_{j∈U_k^(i)} π_ij ) ( A_i^T P_i + P_i A_i ) + π_ii ( A_i^T P_i + P_i A_i + P_i ) + ∑_{j∈U_k^(i)} π_ij P_j + ∑_{j∈U_uk^(i), j≠i} π_ij ( P_j + A_i^T P_i + P_i A_i ) + K_i^T B_i^T P_i + P_i B_i K_i.
Thus, φ_i < 0 can be guaranteed by
( 1 + ∑_{j∈U_k^(i)} π_ij ) ( A_i^T P_i + P_i A_i ) + ∑_{j∈U_k^(i)} π_ij P_j + K_i^T B_i^T P_i + P_i B_i K_i < 0,
A_i^T P_i + P_i A_i + P_i ≤ 0,
P_j + A_i^T P_i + P_i A_i ≤ 0,  j ∈ U_uk^(i), j ≠ i.
Performing the congruence transformation with X_i = P_i^{−1} on both sides of (39), φ_i < 0 can be guaranteed by (29)–(31) through the Schur complement. □
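The Schur complement equivalence used in the proof can be illustrated on a small hypothetical example: a symmetric block matrix is negative definite exactly when one diagonal block and the corresponding Schur complement are.

```python
import numpy as np

def neg_def(M, tol=1e-9):
    """Negative definiteness via the largest eigenvalue of the symmetric part."""
    return np.max(np.linalg.eigvalsh((M + M.T) / 2)) < -tol

# Hypothetical blocks: Q (2x2), S (2x1), R (1x1).
Q = np.array([[-3.0, 0.5], [0.5, -2.0]])
S = np.array([[1.0], [0.0]])
R = np.array([[-1.0]])

block = np.block([[Q, S], [S.T, R]])

# [Q S; S^T R] < 0  iff  R < 0 and Q - S R^{-1} S^T < 0 (Schur complement).
lhs = neg_def(block)
rhs = neg_def(R) and neg_def(Q - S @ np.linalg.inv(R) @ S.T)
print(lhs, rhs)  # the two sides agree
```

This is exactly the manipulation that turns the quadratic terms X_i X_j^{-1} X_i appearing after the congruence transformation into the linear block conditions of Theorem 2.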
Note that the controller derived from Theorem 2 can stochastically stabilize system (1).

4.2. Feedforward Control Law Design

Since K_mi = H_i − K_i G_i and K_i can be obtained from Section 4.1, the key to solving the feedforward compensator is to find matrices G_i and H_i satisfying (23). According to the theory of the generalized Sylvester equation, equations of the form (23) have a complete parametric solution, which can be obtained by the following lemmas.
Lemma 3
([45]). If the matrix pair ( sI − A_i ( I − C_i^− C_i ), B_i ) is controllable, there exists a unimodular matrix V_i(s) ∈ R^{(n+r)×(n+r)} satisfying
[ sI − A_i ( I − C_i^− C_i ),  −B_i ] V_i(s) = [ Δ, 0 ],
where C_i^− is a generalized inverse of C_i such that C_i C_i^− C_m = C_m, and Δ is a diagonal polynomial matrix satisfying det(Δ) ≢ 0. We partition the unimodular matrix V_i(s) as:
V i ( s ) = U i ( s ) L i ( s ) Q i ( s ) D i ( s ) .
U i ( s ) , L i ( s ) , Q i ( s ) , D i ( s ) can be written in the following form:
U_i(s) = ∑_{j=0}^{α} U_j s^j, U_j ∈ R^{n×n};  L_i(s) = ∑_{j=0}^{α} L_j s^j, L_j ∈ R^{n×r};  Q_i(s) = ∑_{j=0}^{α} Q_j s^j, Q_j ∈ R^{r×n};  D_i(s) = ∑_{j=0}^{α} D_j s^j, D_j ∈ R^{r×r}.
Lemma 4
([19]). Let the four polynomial matrices U_j, L_j, Q_j, and D_j be as above. If the matrix pair ( sI − A_i ( I − C_i^− C_i ), B_i ) is controllable, the parametric solutions of G_i and H_i can be obtained as follows:
G_i = ( I − C_i^− C_i ) ∑_{j=0}^{α} [ L_ij Z_i + U_ij ( A_i C_i^− C_m − C_i^− C_m A_m ) ] A_m^j + C_i^− C_m,
H_i = ∑_{j=0}^{α} [ D_ij Z_i + Q_ij ( A_i C_i^− C_m − C_i^− C_m A_m ) ] A_m^j,
where Z_i is an arbitrary parameter matrix with appropriate dimensions, representing the degrees of freedom of the solution. In the calculation, we first judge whether the matrix pair ( sI − A_i ( I − C_i^− C_i ), B_i ) is controllable. If it is controllable, we decompose the matrix according to Lemma 3 and then solve G_i and H_i through Lemma 4.

4.3. Algorithm for Solving the Controller

Given the reference model coefficient matrices A_m and C_m, the system matrices A_i, B_i, and C_i for every i = 1, 2, …, N, and the transition probability matrix Π, the following algorithm can be presented:
  • According to Theorem 2, compute the state feedback gain matrix.
  • Judge whether the matrix pair ( sI − A_i ( I − C_i^− C_i ), B_i ) is controllable. If it is controllable, solve Lemma 3 and go on to the next step; otherwise, the feedforward compensator does not exist.
  • Compute G i and H i based on Lemma 4, then compute the gain matrix of the feedforward compensator.
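As an alternative to the unimodular-decomposition route, the steady-state (Ġ_i = 0) version of the G_i, H_i equations can be solved directly by vectorization; the sketch below uses hypothetical single-mode data (a double integrator tracking a constant reference model):

```python
import numpy as np

def solve_regulator(A, B, C, Am, Cm):
    """Least-squares solve of  A G + B H - G Am = 0  and  C G = Cm
    via column-major vectorization (vec(AXB) = kron(B^T, A) vec(X))."""
    n, m = B.shape
    n1 = Am.shape[0]
    p = C.shape[0]
    I1, In = np.eye(n1), np.eye(n)
    # Unknown z = [vec(G); vec(H)], stacked column-major (order='F').
    top = np.hstack([np.kron(I1, A) - np.kron(Am.T, In), np.kron(I1, B)])
    bot = np.hstack([np.kron(I1, C), np.zeros((p * n1, m * n1))])
    M = np.vstack([top, bot])
    rhs = np.concatenate([np.zeros(n * n1), Cm.flatten(order='F')])
    z, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    G = z[:n * n1].reshape(n, n1, order='F')
    H = z[n * n1:].reshape(m, n1, order='F')
    return G, H

# Hypothetical data: double integrator, scalar constant reference (Am = 0).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Am = np.array([[0.0]])
Cm = np.array([[1.0]])
G, H = solve_regulator(A, B, C, Am, Cm)
print(np.allclose(A @ G + B @ H, G @ Am), np.allclose(C @ G, Cm))
```

Unlike Lemma 4, this direct solve does not expose the free parameter Z_i; it simply returns one least-squares solution when the equations are consistent.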

5. Numerical Example

Several examples are presented to explain the feasibility of the algorithm.
Example 1.
Consider plant (1) with the system parameters given in the following form to illustrate the effectiveness of our results.
A 1 = 9.9477 0.7476 0.2632 5.0337 52.1659 2.7452 5.5532 24.4221 26.0922 2.6363 4.1975 19.2774 0 0 1 0 , B 1 = 0 0 0 1 1 0 0 0 ,
A 2 = 4.5010 0.7102 0.3011 5.0005 52 2.9021 5.5504 24 20.0028 2.6 4.1905 17.2203 0 0 1 2 , B 2 = 0.2 0.15 3.5 7.6 6 4.5 0 0 ,
C_1 = [ 0.01, 0, 0, 0 ; 0, 0.01, 0, 0 ; 0, 0, 0, 0.01 ], C_2 = [ 0.01, 0, 0, 0 ; 0, 0.01, 0, 0 ; 0, 0, 0, 0 ]. The control input is u(t) = [ cos(t); cos(t) ]^T. The transition rate matrix is chosen as Π = [ −0.6, 0.6 ; 0.8, −0.8 ], and the mode trajectory is shown in Figure 1. Following the observer design process, the simulation results are presented in Figure 2 with the initial value x(0) = [ 1; 2; 1; 0 ]^T. Figure 2a–d show the trajectories of the system states and their estimates, and it can be seen that the achieved observer performance is ideal: from the error plot, after about 1.6 s, the error between the estimated value and the actual value is very small. However, the matrix pair ( sI − A_2 ( I − C_2^− C_2 ), B_2 ) does not have full rank, so this model cannot complete the model reference tracking control; whatever the reference model is, the controller does not exist. This example shows that the observer performs well, but the algorithm is not suitable for all systems.
Example 2.
Consider the following satellite tracking system with three modes.
A 1 = 0 0 0 0.94786 0 0 0 0 0 0 1 0 0 0 0 0 0 1 1.6829793 × 10 8 0 0 0 1.45842 × 10 4 0 0 0 0 1.45842 × 10 4 0 0 0 0 5.31747 × 10 9 0 0 0 ,
A 2 = 0 0 0 1.24649664327 0 0 0 0 0 0 0.886083509114 0 0 0 0 0 0 0.1 1.2797796 × 10 8 0 0 0 1.45842 × 10 4 0 0 0 0 1.45842 × 10 4 0 0 0 0 5.31747 × 10 9 0 0 0 ,
A 3 = 0 0 0 1.015383053256841 0 0 0 0 0 0 1.1889863481182 0 0 0 0 0 0 0.1 1.5710731 × 10 8 0 0 0 1.45842 × 10 4 0 0 0 0 1.45842 × 10 4 0 0 0 0 5.31747 × 10 9 0 0 0 ,
B_1 = [ 0, 0, 0 ; 0, 0, 0 ; 0, 0, 0 ; 0.8, 0, 0 ; 0, 1, 0 ; 0, 0, 1 ],  B_2 = [ 0, 0, 0 ; 0, 0, 0 ; 0, 0, 0 ; 1, 0, 0 ; 0, 0.8, 0 ; 0, 0, 1 ],  B_3 = [ 0, 0, 0 ; 0, 0, 0 ; 0, 0, 0 ; 1, 0, 0 ; 0, 1, 0 ; 0, 0, 0.8 ],  C_1 = C_2 = C_3 = [ I_3, 0 ], where the transition rate matrix is Π = [ ?, ?, 0.6 ; 0.5, −0.8, 0.3 ; ?, ?, −0.8 ]. The matrices A_m and C_m in the reference model are designed as A_m = [ 0, ω_1, 0 ; −ω_1, 0, 0 ; 0, 0, −2 ], C_m = I_3, with ω_1 = 0.008π and x_m0 = [ 100/2 ; 100/2 ; 0 ]. If the output of the satellite model can track the circular signal generated by the reference model, the orbiting motion can be completed. The simulation time is set to 5000 s.
After computations with the LMI Toolbox in MATLAB, we obtain:
K 1 = 0.0045365 0.0007 0.0004 0.0306 0.0122 0.0081 0.002321 0.0034 0.001 0.0169 0.0646 0.0124 0.0015825 0.001 0.0026 0.0112 0.0124 0.0542 .
K 2 = 0.003449668335 0.00101570562 0.006 0.039 0.0104 0.0073 0.001524272055 0.00372425394 0.008 0.0213 0.0623 0.0079 0.001203372675 0.00124141798 0.0026 0.0127 0.0143 0.055 .
K 3 = 0.004234855 0.00067284204 0.005 0.0331 0.012 0.0074 0.00216667 0.00285957867 0.011 0.0198 0.0648 0.014 0.001280305 0.00067284204 0.0025 0.0152 0.0075 0.0522 .
K m 1 = 0.0004 0.0001 0.0154 0.0039 0.0023 0.0238 0.0018 0.0007 3.8942 . K m 2 = 0.0033 0 0.0135 0.0035 0.0022 0.015 0.0019 0.0008 3.8927 .
K m 3 = 0.004 0 0.014 0.0039 0.0017 0.0269 0.0015 0.0004 3.8981 .
The trajectory of the system under the initial condition x_0 = [ 400; 300; 100; 0; 0; 0 ] is shown in Figure 3, and the trajectory of the system together with the position error between the system and the model is shown in Figure 4. As can be seen from Figure 4, the error between the system and the reference model is about 0.5 m after 400 s. In addition, it can be seen from Figure 3 that the system trajectory tracks the reference model almost perfectly; the residual error arises because the system lags the reference in phase to a certain extent. The controller in [5] is selected as a contrast test. Since that control law is based on completely known transition rates, we set the transition rate matrix as fully known to obtain the results shown in Figure 5. The error there is about 10 times that of this paper after almost 500 s; thus, although the transition rates are partly unknown in this paper, the results are better. Figure 6 shows the trajectory of the system based on the observer; the observer-based tracking control also meets the requirements. The trajectory without a feedforward compensator is shown in Figure 7, which demonstrates the superiority of including the feedforward compensator. This example verifies the tracking performance of the algorithm, and the effects are also ideal in the observer-based case.

6. Conclusions

In this paper, the observer-based model reference tracking control problem with partly unknown TRs was studied. For a given system, a descriptor observer was designed by matrix transformation, which solved the state estimation problem. For the given reference model, a tracking controller was designed, composed of a state feedback controller and a feedforward of the model state in a complete parametric form. The parametric solution of the feedback controller was given by analyzing the stochastic stability of the system with partly unknown transition rates, where even the self-transition rate can be unknown. A parametric method was established for the feedforward part of the tracking problem based on the theory of generalized Sylvester equations. The performance of the observer, the stability of the control system, and the tracking effect were verified by several numerical examples, showing smaller errors compared with existing results.

Author Contributions

Methodology, W.S.; Software, W.S.; Validation, W.S.; Formal analysis, W.S.; Investigation, W.S.; Resources, A.J.; Data curation, A.J.; Writing—original draft, W.S.; Writing—review & editing, W.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Most of the data is given in the text.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Proof of Lemma 1. 
We first establish the following identities:
(1) A_i R̄_i1 = R̄_i1 Ā_i11^(2) + R̄_i2 Ā_i21^(2);  (2) A_i R̄_i2 = R̄_i1 Ā_i12^(2) + R̄_i2 Ā_i22^(2);
(3) Ā_i21^(2) = −R̄_i3;  (4) Ā_i22^(2) = −R̄_i4;
(5) R̄_i1 Ā_i12^(2) − R̄_i2 R̄_i4 − A_i R̄_i2 = 0;  (6) R̄_i2 + R̄_i1 H̄_i21 = 0;
(7) R̄_i1 B̄_i1^(2) = B_i;  (8) R̄_i3 x̄̂_1^(1)(t) + R̄_i4 y(t) = 0.
Moreover, (1)–(4) can be obtained by the equivalence of corresponding terms of the following equation:
Ā_i^(1) = Ā_i T̄_i^{−1} = [ A_i R̄_i1, A_i R̄_i2 ; −R̄_i3, −R̄_i4 ] = H̄_i^{−1} Ā_i^(2) = [ R̄_i1, R̄_i2 ; 0_{p×n}, I_p ] [ Ā_i11^(2), Ā_i12^(2) ; Ā_i21^(2), Ā_i22^(2) ] = [ R̄_i1 Ā_i11^(2) + R̄_i2 Ā_i21^(2), R̄_i1 Ā_i12^(2) + R̄_i2 Ā_i22^(2) ; Ā_i21^(2), Ā_i22^(2) ].
Thus, (1)–(4) hold. Using (2) and (4), we have:
R̄_i1 Ā_i12^(2) − R̄_i2 R̄_i4 − A_i R̄_i2 = R̄_i1 Ā_i12^(2) − R̄_i2 R̄_i4 − ( R̄_i1 Ā_i12^(2) + R̄_i2 Ā_i22^(2) ) = R̄_i1 Ā_i12^(2) − R̄_i2 R̄_i4 − R̄_i1 Ā_i12^(2) + R̄_i2 R̄_i4 = 0.
Thus, (5) holds, then
H̄_i^{−1} H̄_i = [ R̄_i1, R̄_i2 ; 0_{p×n}, I_p ] [ H̄_i11, H̄_i21 ; H̄_i12, H̄_i22 ] = [ R̄_i1 H̄_i11 + R̄_i2 H̄_i12, R̄_i1 H̄_i21 + R̄_i2 H̄_i22 ; H̄_i12, H̄_i22 ] = [ I_n, 0_{n×p} ; 0_{p×n}, I_p ].
By the equivalence of corresponding blocks, H̄_i22 = I_p and R̄_i1 H̄_i21 + R̄_i2 H̄_i22 = R̄_i1 H̄_i21 + R̄_i2 = 0, so (6) holds. Next,
B ¯ i = B i 0 = H ¯ i 1 B ¯ i 2 = R ¯ i 1 R ¯ i 2 0 p × n I p B ¯ i 1 2 B ¯ i 2 2 = R ¯ i 1 B ¯ i 1 2 + R ¯ i 2 B ¯ i 2 2 B ¯ i 2 2 .
Thus, $\bar{B}_{i2}^{(2)} = 0$ and $\bar{R}_{i1} \bar{B}_{i1}^{(2)} + \bar{R}_{i2} \bar{B}_{i2}^{(2)} = \bar{R}_{i1} \bar{B}_{i1}^{(2)} = B_i$, so (7) holds. Finally,
$$\hat{\bar{x}}(t) = \begin{bmatrix} \hat{x}(t) \\ 0 \end{bmatrix} = \begin{bmatrix} \bar{R}_{i1} & \bar{R}_{i2} \\ \bar{R}_{i3} & \bar{R}_{i4} \end{bmatrix} \begin{bmatrix} \hat{\bar{x}}_1^{(1)}(t) \\ y(t) \end{bmatrix} = \begin{bmatrix} \bar{R}_{i1} \hat{\bar{x}}_1^{(1)}(t) + \bar{R}_{i2} y(t) \\ \bar{R}_{i3} \hat{\bar{x}}_1^{(1)}(t) + \bar{R}_{i4} y(t) \end{bmatrix}.$$
Thus, (8) holds. The above equations establish all of the identities in Lemma 1, which completes the proof. □
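The block-matrix identities above can be checked numerically. The following sketch (an illustration, not part of the paper: the dimensions n = 3, p = 2, the random data, and names such as `H11` and `R1` are arbitrary choices) builds a matrix with the paper's assumed bottom block row $[0_{p \times n} \;\; I_p]$, extracts $\bar{R}_{i1}$ and $\bar{R}_{i2}$ from its inverse, and verifies identities (5) and (6):

```python
import numpy as np

# Numerical sanity check of identities (5) and (6) with random data.
# Block labels follow the paper's indexing: the top-right block of H̄_i
# is H̄_{i21}, and the bottom block row is [0_{p×n}  I_p].
rng = np.random.default_rng(0)
n, p = 3, 2

H11 = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned H̄_{i11}
H21 = rng.standard_normal((n, p))                  # H̄_{i21}
H = np.block([[H11, H21],
              [np.zeros((p, n)), np.eye(p)]])

# The inverse then has the form [[R̄_{i1}, R̄_{i2}], [0_{p×n}, I_p]].
Hinv = np.linalg.inv(H)
R1, R2 = Hinv[:n, :n], Hinv[:n, n:]

# Bottom block row of the inverse is again [0, I], as the proof uses.
assert np.allclose(Hinv[n:, :n], 0) and np.allclose(Hinv[n:, n:], np.eye(p))

# Identity (6): R̄_{i2} + R̄_{i1} H̄_{i21} = 0 (using H̄_{i22} = I_p).
assert np.allclose(R2 + R1 @ H21, 0)

# Identity (5) as a consequence of (2) and (4): pick random Ā_{i12}^{(2)}
# and Ā_{i22}^{(2)}, define A_i R̄_{i2} via (2) and R̄_{i4} via (4).
A12 = rng.standard_normal((n, p))
A22 = rng.standard_normal((p, p))
AiR2 = R1 @ A12 + R2 @ A22   # eq. (2)
R4 = A22                     # eq. (4)
assert np.allclose(R1 @ A12 + R2 @ R4 - AiR2, 0)
print("identities (5) and (6) verified numerically")
```

Such a check does not replace the proof, but it is a quick way to confirm that the sign conventions in (5) and (6) are consistent with the block decompositions used above.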

References

  1. Ji, Y.; Chizeck, H.J. Controllability, stabilizability, and continuous-time Markovian jump linear quadratic control. IEEE Trans. Autom. Control 1990, 35, 777–788. [Google Scholar] [CrossRef]
  2. Hou, T.; Liu, Y.; Deng, F. Stability for discrete-time uncertain systems with infinite Markov jump and time-delay. Sci. China Inf. Sci. 2021, 64, 152202. [Google Scholar] [CrossRef]
  3. Wang, P.; Wang, W.; Su, H.; Feng, J. Stability of stochastic discrete-time piecewise homogeneous Markov jump systems with time delay and impulsive effects. Nonlinear Anal. Hybrid Syst. 2020, 38, 100916. [Google Scholar] [CrossRef]
  4. Rakkiyappan, R.; Chandrasekar, A.; Lakshmanan, S.; Park, J. Exponential stability for markovian jumping stochastic BAM neural networks with mode-dependent probabilistic time-varying delays and impulse control. Complexity 2015, 20, 39–65. [Google Scholar] [CrossRef]
  5. Zhang, L.; Boukas, E.K. Stability and stabilization of Markovian jump linear systems with partly unknown transition probabilities. Automatica 2009, 45, 463–468. [Google Scholar] [CrossRef]
  6. Xia, Y.; Boukas, E.K.; Shi, P.; Zhang, J. Stability and stabilization of continuous-time singular hybrid systems. Automatica 2009, 45, 1504–1509. [Google Scholar] [CrossRef]
  7. Zhang, X.; Baron, L.; Liu, Q.; Boukas, E.K. Design of Stabilizing Controllers With a Dynamic Gain for Feedforward Nonlinear Time-Delay Systems. IEEE Trans. Autom. Control 2011, 56, 692–697. [Google Scholar] [CrossRef]
  8. Boukas, E.K. Stabilization of stochastic singular nonlinear hybrid systems. Nonlinear-Anal.-Theory Methods Appl. 2006, 64, 217–228. [Google Scholar] [CrossRef]
  9. Zhang, X.; Liu, Q.; Baron, L.; Boukas, E.K. Feedback stabilization for high order feedforward nonlinear time-delay systems. Automatica 2011, 47, 962–967. [Google Scholar] [CrossRef]
  10. Ji, H.; Cui, B.; Liu, X. Adaptive control of Markov jump distributed parameter systems via model reference. Fuzzy Sets Syst. 2020, 392, 115–135. [Google Scholar]
  11. Tan, M.; Mai, J.; Song, Z. Exponential H∞ output tracking control for coupled switched systems with states of different dimensions. Int. J. Control 2021, 94, 190–201. [Google Scholar] [CrossRef]
  12. Carravetta, F.; Sorge, M.M. Model reference adaptive expectations in Markov-switching economies. Econ. Model. 2013, 32, 551–559. [Google Scholar] [CrossRef]
  13. Chen, Z.; Hu, H.; Wu, Y.; Zhang, Y.; Li, G.; Liu, Y. Stochastic model predictive control for energy management of power-split plug-in hybrid electric vehicles based on reinforcement learning. Energy 2020, 211, 118931. [Google Scholar] [CrossRef]
  14. Zhao, X.; Guo, G. Model Reference Adaptive Control of Vehicle Slip Ratio Based on Speed Tracking. Appl. Sci. 2020, 10, 3459. [Google Scholar] [CrossRef]
  15. Peng, L.; Yin, A.; Song, W.; Yao, W.; Ren, H.; Yang, L. Sleep Monitoring With Hidden Markov Model for Physical Conditions Tracking. IEEE Sens. J. 2021, 21, 14232–14239. [Google Scholar] [CrossRef]
  16. Nguyen, H.Q. Observer-Based Tracking Control for Polysolenoid Linear Motor with Unknown Disturbance Load. Actuators 2020, 9, 23. [Google Scholar] [CrossRef] [Green Version]
  17. Lala, T.; Chirla, D.; Radac, M. Model Reference Tracking Control Solutions for a Visual Servo System Based on a Virtual State from Unknown Dynamics. Energies 2021, 15, 267. [Google Scholar] [CrossRef]
  18. Boukas, E.K. On reference model tracking for Markov jump systems. Int. J. Syst. Sci. 2009, 40, 393–401. [Google Scholar] [CrossRef]
  19. Fu, Y.; Lu, Y.; Zhang, M. Model reference tracking control of continuous-time periodic linear systems with actuator jumping fault and its applications in orbit maneuvering. Int. J. Control. Autom. Syst. 2017, 15, 2182–2192. [Google Scholar] [CrossRef]
  20. Afri, C.; Andrieu, V.; Bako, L.; Dufour, P. State and Parameter Estimation: A Nonlinear Luenberger Observer Approach. IEEE Trans. Autom. Control 2017, 62, 973–980. [Google Scholar] [CrossRef] [Green Version]
  21. Castro Rego, F.; Pu, Y.; Alessandretti, A.; Aguiar, A.; Pascoal, A.M.; Jones, C.N. A Distributed Luenberger Observer for Linear State Feedback Systems With Quantized and Rate-Limited Communications. IEEE Trans. Autom. Control 2021, 66, 3922–3937. [Google Scholar]
  22. Yin, Z.; Bai, C.; Du, N.; Du, C.; Liu, J. Research on Internal Model Control of Induction Motors Based on Luenberger Disturbance Observer. IEEE Trans. Power Electron. 2021, 36, 8155–8170. [Google Scholar] [CrossRef]
  23. Bejarano, F.J.; Mera, M. Robust Luenberger-like observer for control of linear switched systems under arbitrary unknown switched function. Asian J. Control. 2020, 23, 2527–2536. [Google Scholar] [CrossRef]
  24. Rincón, A.; Restrepo, G.M.; Velasco, F.E. A Robust Observer-Based Adaptive Control of Second-Order Systems with Input Saturation via Dead-Zone Lyapunov Functions. Computation 2021, 5, 82. [Google Scholar] [CrossRef]
  25. Al-Gabalawy, M.; Mahmoud, K.; Darwish, M.M.; Dawson, J.A.; Lehtonen, M.; Hosny, N.S. Reliable and Robust Observer for Simultaneously Estimating State-of-Charge and State-of-Health of LiFePO4 Batteries. Appl. Sci. 2021, 11, 3609. [Google Scholar] [CrossRef]
  26. Hua, H.; Fang, Y.; Zhang, X.; Lu, B. A Novel Robust Observer-Based Nonlinear Trajectory Tracking Control Strategy for Quadrotors. IEEE Trans. Control. Syst. Technol. 2021, 29, 1952–1963. [Google Scholar] [CrossRef]
  27. Yao, D.; Lu, R.; Xu, Y.; Ren, H. Observer-based sliding mode control of Markov jump systems with random sensor delays and partly unknown transition rates. Int. J. Syst. Sci. 2017, 48, 2985–2996. [Google Scholar] [CrossRef]
  28. Jiang, B.; Shi, P.; Mao, Z. Sliding Mode Observer-Based Fault Estimation for Nonlinear Networked Control Systems. Circuits, Syst. Signal Process. 2011, 30, 1–16. [Google Scholar] [CrossRef]
  29. Vijayakumar, M.; Sakthivel, R.; Mohammadzadeh, A.; Karthick, S.A.; Anthoni, S.M. Proportional integral observer based tracking control design for Markov jump systems. Appl. Math. Comput. 2021, 410, 126467. [Google Scholar] [CrossRef]
  30. Wu, L.; Shi, P.; Gao, H. State Estimation and Sliding-Mode Control of Markovian Jump Singular Systems. IEEE Trans. Autom. Control 2010, 55, 1213–1219. [Google Scholar]
  31. Huo, S.; Ge, L.; Li, F. Robust H∞ Consensus for Markov Jump Multiagent Systems Under Mode-Dependent Observer and Quantizer. IEEE Syst. J. 2020, 15, 2443–2450. [Google Scholar] [CrossRef]
  32. Yang, H.; Yin, S. Descriptor Observers Design for Markov Jump Systems with Simultaneous Sensor and Actuator Faults. IEEE Trans. Autom. Control 2019, 64, 3370–3377. [Google Scholar] [CrossRef]
  33. Lei, C.; Zhang, H.; Wang, L.; Liu, L.; Ma, D. Incomplete information Markov game theoretic approach to strategy generation for moving target defense. Comput. Commun. 2018, 116, 184–199. [Google Scholar] [CrossRef]
  34. Ashkenazi-Golan, G.; Rainer, C.; Solan, E. Solving two-state Markov games with incomplete information on one side. Games Econ. Behav. 2020, 122, 83–104. [Google Scholar] [CrossRef] [Green Version]
  35. Cui, D.; Wang, Y.; Su, H.; Xu, Z.; Que, H. Fuzzy-model-based tracking control of Markov jump nonlinear systems with incomplete mode information. J. Frankl. Inst. 2021, 358, 3633–3650. [Google Scholar] [CrossRef]
  36. Fang, H.; Zhu, G.; Stojanovic, V.; Nie, R.; He, S.; Luan, X.; Liu, F. Adaptive optimization algorithm for nonlinear Markov jump systems with partial unknown dynamics. Int. J. Robust Nonlinear Control 2021, 31, 2126–2140. [Google Scholar] [CrossRef]
  37. Li, F.; Xu, S.; Shen, H.; Ma, Q. Passivity-Based Control for Hidden Markov Jump Systems With Singular Perturbations and Partially Unknown Probabilities. IEEE Trans. Autom. Control 2020, 65, 3701–3706. [Google Scholar] [CrossRef]
  38. Zhang, L.; Lam, J. Necessary and Sufficient Conditions for Analysis and Synthesis of Markov Jump Linear Systems With Incomplete Transition Descriptions. IEEE Trans. Autom. Control 2010, 55, 1695–1701. [Google Scholar] [CrossRef] [Green Version]
  39. Li, L.; Zhang, Q. Finite-time H∞ control for singular Markovian jump systems with partly unknown transition rates. Appl. Math. Model. 2016, 40, 302–314. [Google Scholar] [CrossRef]
  40. Shen, M.; Park, J.; Ye, D. A Separated Approach to Control of Markov Jump Nonlinear Systems with General Transition Probabilities. IEEE Trans. Cybern. 2016, 46, 2010–2018. [Google Scholar] [CrossRef]
  41. Kao, Y.; Xie, J.; Wang, C. Stabilization of Singular Markovian Jump Systems With Generally Uncertain Transition Rates. IEEE Trans. Autom. Control 2014, 59, 2604–2610. [Google Scholar] [CrossRef]
  42. Park, C.; Kwon, N.K.; Park, I.S.; Park, P. H∞ filtering for singular Markovian jump systems with partly unknown transition rates. Automatica 2019, 109, 108528. [Google Scholar] [CrossRef]
  43. Shen, A.; Li, L.; Li, C. H∞ Filtering for Discrete-Time Singular Markovian Jump Systems with Generally Uncertain Transition Rates. Circuits Syst. Signal Process. 2021, 40, 3204–3226. [Google Scholar] [CrossRef]
  44. Boukas, E.K. Stochastic Switching Systems: Analysis and Design; Birkhäuser: Basel, Switzerland; Berlin, Germany, 2005. [Google Scholar]
  45. Horn, R.A.; Johnson, C.R. Matrix Analysis; Cambridge University Press: Cambridge, UK, 2018. [Google Scholar]
Figure 1. Transition rates.
Figure 2. Results of the observer under initial conditions [ 1 , 2 , 1 , 0 ] .
Figure 3. Trajectory.
Figure 4. Error between the system and model.
Figure 5. Contrast test.
Figure 6. Trajectory based on the observer.
Figure 7. Trajectory without the feedforward compensator.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Song, W.; Jin, A. Observer-Based Model Reference Tracking Control of the Markov Jump System with Partly Unknown Transition Rates. Appl. Sci. 2023, 13, 914. https://doi.org/10.3390/app13020914