Article

From Nonlinear Dominant System to Linear Dominant System: Virtual Equivalent System Approach for Multiple Variable Self-Tuning Control System Analysis

School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
*
Author to whom correspondence should be addressed.
Entropy 2023, 25(1), 173; https://doi.org/10.3390/e25010173
Submission received: 1 December 2022 / Revised: 30 December 2022 / Accepted: 13 January 2023 / Published: 15 January 2023
(This article belongs to the Special Issue Nonlinear Control Systems with Recent Advances and Applications)

Abstract

The stability and convergence analysis of multivariable stochastic self-tuning control (STC) systems is very difficult because of their highly nonlinear structure. In this paper, based on the virtual equivalent system method, a structurally nonlinear (or nonlinearity-dominated) multivariable self-tuning system is transformed into a structurally linear (or linearity-dominated) system, which simplifies the stability and convergence analysis of multivariable STC systems. The control of a multivariable stochastic STC system requires parameter estimation, for which three cases may arise: the estimates converge to a non-true value, converge to the true value, or diverge. For these three cases, this paper provides four theorems and two corollaries. From these theorems and corollaries, it follows directly that the convergence of the parameter estimates is a sufficient, but not necessary, condition for the stability and convergence of stochastic STC systems; moreover, the four theorems and two corollaries are independent of the specific controller design strategy and the specific parameter estimation algorithm. The virtual equivalent system theory proposed in this paper does not rely on a specific control strategy or parameter estimation algorithm but only on properties of the system itself; it can judge the stability and convergence of a self-tuning system and relaxes the dependence of the stability and convergence criteria on structural information about the system. The virtual equivalent system method is shown to be effective whether the parameter estimates converge, converge to the true value, or diverge.

1. Introduction

The stability and convergence analysis of stochastic self-tuning control systems is more difficult than that of deterministic self-tuning control systems. The theory is hard to analyze and understand, which makes it very difficult for engineers and technicians to assess the stability and convergence of such systems in practice.
References [1,2,3,4] studied the stability and convergence of the self-tuning control system consisting of the minimum variance control strategy and the stochastic gradient parameter estimation algorithm. References [5,6] provided stability and convergence results for the self-tuning control system consisting of the minimum variance control strategy and the least squares parameter estimation algorithm. References [7,8] presented stability and convergence results for the self-tuning control system consisting of the pole-placement control strategy and the weighted least squares parameter estimation algorithm. In summary, the results for the minimum variance control strategy do not require convergence of the parameter estimates, whereas the results for other control strategies, such as pole placement, do require it [9,10].
The above results all concern minimum phase plants, and their common feature is that convergence of the parameter estimates is not required. If the controlled plant is of non-minimum phase, self-tuning algorithms of the minimum variance type cannot be used; pole placement and other control strategies are needed instead, and the corresponding stability and convergence analysis is more difficult than for adaptive control systems of the minimum variance type. The existing results are basically obtained under the premise that the parameter estimates converge (whether to the true value or not); there are also some results that do not require convergence of the parameter estimates, but these can only guarantee the stability and robustness of the system, not its convergence.
For deterministic non-minimum phase plants, refs. [11,12] proved the stability and convergence of pole-placement self-tuning control by introducing additional excitation signals. The stability and convergence of pole-placement self-tuning control can also be obtained by modifying the estimation model without introducing an additional excitation signal [13]. References [1,14] analyzed the stability and convergence of the pole-placement algorithm with a "key technical lemma". For non-minimum phase stochastic plants, refs. [15,16] adopted the method of adding an "attenuating excitation signal" to ensure the stability and convergence of stochastic pole-placement self-tuning control, while [7,8] showed that no external excitation signal is needed if a self-convergent weighted least squares parameter estimation algorithm is used. References [17,18,19,20] analyzed the stability and convergence of adaptive decoupling control systems, and [21] analyzed, based on a Lyapunov function, the stability and convergence of generalized minimum variance self-tuning control for minimum phase plants and some non-minimum phase plants. Reference [22] proposed the theory of virtual equivalent systems, but it mainly focused on single-variable systems and did not study multivariable systems.
The disadvantage of the minimum variance self-tuning control method is that it is not suitable for non-minimum phase plants; the main reason is that the unstable poles of the regulator cannot be exactly canceled by the zeros of the plant, which makes the system unstable. In addition, even when a generalized minimum variance self-tuning controller is used, the control weighting factor is usually determined by trial and error to ensure closed-loop stability, which is a significant inconvenience in specific applications. The computational load of the stochastic gradient algorithm is much smaller than that of the least squares algorithm, but its convergence is very slow. Under strong, persistent excitation, the parameter estimation error of a system using the stochastic gradient algorithm converges to zero uniformly, but under other conditions, proving convergence of the stochastic gradient algorithm is very difficult. The least squares estimation method is algorithmically simple and easy to implement, and it does not require statistical information about the measurement error, but its accuracy is hard to improve. Its limitations show in two respects: first, it can only estimate a deterministic constant vector and cannot estimate the time evolution of a random vector; second, it only minimizes the mean square measurement error and does not guarantee the best estimation error of the estimator, so its accuracy is not high. Stochastic self-tuning systems using the pole placement method require a highly accurate model, suffer from modeling error, and require convergence of the parameter estimates.
In view of the characteristics and shortcomings of the above control algorithms, this paper, based on the theory of virtual equivalent systems, weakens the conditions required by the stability and convergence criteria of stochastic self-tuning systems: it removes the direct dependence on the order information of the controlled plant, reduces the requirements on the parameter estimation error, and eliminates the dependence of the pole-placement self-tuning control strategy on the convergence of the parameter estimates. The difficulty of the analysis is thereby transferred from the system structure to the compensation signal, which reduces the difficulty of the original problem.
New self-tuning control schemes are still emerging, such as the robust multi-model adaptive control system, the fuzzy parameter self-tuning PID method, intelligent AC contactor self-tuning control technology, the self-tuning control method for simulation turntables based on accurate identification of model parameters, and the sliding mode adaptive control method [23,24,25,26,27]; due to the lack of a general theory of adaptive control, the traditional approach cannot prove their stability and convergence. In general, one expects stability and convergence analysis methods for stochastic STC systems that are independent of the specific control strategy and parameter estimation method. Some scholars have made efforts to develop such a general theory [28,29,30,31], but the results are not yet satisfactory.
There are many related results, too numerous to enumerate one by one. In recent years, many adaptive control schemes related to stability and convergence have achieved good results in practical applications [32], but theoretical analysis results are lacking.
New adaptive control schemes keep emerging, and it is impractical to analyze the stability and convergence of each adaptive control system one by one. For this reason, a unified stability and convergence analysis method has long been sought [23,33,34,35,36]. However, despite some sporadic results [32,34,37,38], the expected unified analysis method and theory still remain to be found. The concept of the virtual equivalent system and its corresponding analysis methods arose against this background [25,39,40,41]. Weicun Zhang, one of the authors of this paper, proposed the concept of the virtual equivalent system and then analyzed the stability and convergence of various self-tuning control systems in a unified framework, converting the nonlinear system into an equivalent linear system with an infinitesimal nonlinear compensation signal.
In this paper, we consider three cases of parameter estimation: (1) the parameter estimates converge to the true value; (2) the parameter estimates converge to a non-true value; and (3) the parameter estimates may not converge. The second and third cases do not require structural information about the plant. Considering the particularity of minimum variance self-tuning control, a criterion with an intuitive explanation is given for this kind of STC system.
It is worth pointing out that for a general multiple-input multiple-output stochastic self-tuning control system, and not only for the minimum variance control system, the convergence of the parameter estimates is not a necessary condition for the stability and convergence of the STC system.
The theoretical and experimental research in this paper leads to the following conclusion: for a self-tuning control system with a nonlinear controlled plant (deterministic or stochastic, minimum phase or non-minimum phase), to ensure stability and convergence it suffices that the parameter estimates are bounded and slowly time-varying, that the estimated model approximates the plant output well (i.e., the parameter estimation error is relatively infinitesimal), and that the control strategy meets the stability and tracking requirements under the certainty equivalence principle.

2. Virtual Equivalent System of Stochastic Self-Tuning Control System

For convenience of description, we first consider the following multivariable stochastic system $\Sigma_P$ with known structural information but unknown parameters (for the discussion of general stochastic systems containing colored noise, see Section 4).

$\Sigma_P:\ A(q^{-1})y(k) = q^{-d}B(q^{-1})u(k) + \omega(k)$ (1)
where $y(k)$, $u(k)$, and $\omega(k)$ are the output signal, input signal, and noise signal of the plant to be controlled, respectively, each of appropriate dimension.
Assume that

$y(k) = 0,\ u(k) = 0,\ \omega(k) = 0,\quad k < 0,$

$\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\|\omega(i)\|^2 = R < \infty,\quad \text{a.s.},$

$A(q^{-1}) = I + A_1 q^{-1} + \cdots + A_n q^{-n},\quad n \ge 1,$
$B(q^{-1}) = B_0 + B_1 q^{-1} + \cdots + B_m q^{-m},\quad m \ge 1.$
Introducing the notation

$\theta^T = [A_1, \ldots, A_n, B_0, \ldots, B_m],$

$\phi^T(k-d) = [y(k-1), \ldots, y(k-n), u(k-d), \ldots, u(k-d-m)],$
Then, we have
$y(k) = \phi^T(k-d)\theta + \omega(k).$
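The regression form above can be simulated directly. The following sketch is a hypothetical 2-output instance of plant (1) with $n = m = d = 1$ and made-up coefficient matrices (not taken from the paper), driven by a bounded open-loop input:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-output instance of plant (1) with n = m = d = 1, i.e.
#   y(k) = -A1 y(k-1) + B0 u(k-1) + B1 u(k-2) + w(k)
A1 = np.array([[0.5, 0.1],
               [0.0, 0.3]])
B0 = np.eye(2)
B1 = 0.2 * np.eye(2)

N, p = 300, 2
y = np.zeros((N, p))
u = np.zeros((N, p))
for k in range(N):
    u[k] = np.array([1.0, -1.0])                   # bounded open-loop input
    y_1 = y[k - 1] if k >= 1 else np.zeros(p)      # zero initial conditions, as in (2)
    u_1 = u[k - 1] if k >= 1 else np.zeros(p)
    u_2 = u[k - 2] if k >= 2 else np.zeros(p)
    y[k] = -A1 @ y_1 + B0 @ u_1 + B1 @ u_2 + 0.01 * rng.standard_normal(p)

# A(q^-1) is stable for these coefficients, so the time-averaged
# output power stays bounded
avg_power = np.mean(np.sum(y ** 2, axis=1))
print(avg_power < 100.0)
```

Since $A(q^{-1})$ is stable here, the sample average of $\|y(k)\|^2$ settles to a finite value, in line with the time-averaged quantities used throughout this section.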
The estimated model is denoted by $\Sigma_{P_m}(k)$, and its parameter matrix is

$\Sigma_{P_m}(k):\ \hat\theta^T(k) = [\hat A_1, \ldots, \hat A_n, \hat B_0, \ldots, \hat B_m].$
The performance of the parameter estimation can be expressed by the model output error $e(k)$ and the (posterior) error $e(k) - \omega(k)$:

$e(k) = y(k) - \phi^T(k-d)\hat\theta(k),\quad e(k) - \omega(k) = \phi^T(k-d)\theta - \phi^T(k-d)\hat\theta(k).$ (5)
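The theorems below are independent of the particular estimation algorithm; as one possible choice, the following sketch runs recursive least squares (RLS) on a scalar instance of the plant with assumed values ($a_1 = 0.5$, $b_0 = 1$, small white noise), driving the model output error down:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scalar instance of (1): y(k) = -a1 y(k-1) + b0 u(k-1) + w(k),
# with true parameter vector theta = [-a1, b0] = [-0.5, 1.0].
theta_true = np.array([-0.5, 1.0])
theta_hat = np.zeros(2)
P = 1e3 * np.eye(2)                      # RLS covariance, large initial value
y_prev = 0.0
for k in range(2000):
    u = rng.standard_normal()            # persistently exciting input
    phi = np.array([y_prev, u])          # regressor phi(k-1)
    y = phi @ theta_true + 0.01 * rng.standard_normal()
    e = y - phi @ theta_hat              # model output error e(k)
    K = P @ phi / (1.0 + phi @ P @ phi)  # RLS gain
    theta_hat = theta_hat + K * e
    P = P - np.outer(K, phi @ P)
    y_prev = y

# with persistent excitation the estimate approaches theta_true
print(np.max(np.abs(theta_hat - theta_true)) < 0.05)
```

Under persistent excitation this corresponds to the first case studied below (estimates converge to the true value); without it, RLS may converge to a non-true value, which is exactly the second case the virtual equivalent system approach covers.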
The self-tuning controller is denoted by $\Sigma_C(k)$ and can also be treated as a matrix $\theta_c(k)$. The controller can be obtained by various design methods, such as pole placement. Different control strategies represent different mappings, i.e.,

$f:\ \hat\theta(k) \to \theta_c(k),\quad \text{or} \quad \theta_c(k) = f(\hat\theta(k)).$
Additionally, the control law is generally given by

$u(k) = \phi_c^T(k)\theta_c(k),$

where

$\phi_c^T(k) = [y_r(k), y_r(k-1), \ldots, y(k), y(k-1), \ldots, u(k-1), \ldots],$

and $y_r(k)$ is a known bounded reference signal.
The above-described self-tuning control system is shown in Figure 1 and is abbreviated as $(\Sigma_C(k), \Sigma_P)$.
Accordingly, the real plant corresponds to an 'ideal' controller, i.e.,

$\theta_c = f(\theta),\quad u_0(k) = \phi_c^T(k)\theta_c.$
This constant control system is abbreviated as $(\Sigma_C, \Sigma_P)$, as shown in Figure 2. On the basis of $(\Sigma_C, \Sigma_P)$, we can artificially construct a system that is equivalent, in the input–output sense, to the self-tuning control system. It consists of the constant control system of Figure 2 and a compensation signal $\Delta u(k)$, abbreviated as $(\Sigma_C, \Sigma_P, \Delta u(k))$, as shown in Figure 3.

$\Delta u(k) = u(k) - u_0(k) = \phi_c^T(k)\theta_c(k) - \phi_c^T(k)\theta_c.$ (6)
Since $\theta_c$ is unknown, $\Delta u(k)$ cannot be calculated exactly, but it can be estimated (see the analysis below for details).
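A minimal numerical sketch of the compensation signal: with a hypothetical fixed regressor and controller parameters $\theta_c(k)$ that approach the ideal $\theta_c$ (all values assumed, for illustration only), $\Delta u(k)$ vanishes:

```python
import numpy as np

theta_c_ideal = np.array([1.2, -0.4])      # hypothetical ideal controller parameters
phi_c = np.array([1.0, 0.5])               # a fixed controller regressor, for illustration

du = []
for k in range(1, 101):
    theta_c_k = theta_c_ideal + np.array([1.0, 1.0]) / k   # theta_c(k) -> theta_c
    # Delta u(k) = phi_c^T theta_c(k) - phi_c^T theta_c
    du.append(phi_c @ theta_c_k - phi_c @ theta_c_ideal)

print(abs(du[0]), abs(du[-1]))   # the compensation signal decays toward zero
```

This is the mechanism the analysis below exploits: when $\theta_c(k) \to \theta_c$, the compensation signal becomes relatively infinitesimal, and the virtual equivalent system behaves like the constant control system of Figure 2.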
The system constructed above is called the "virtual equivalent system" of the self-tuning control system. It is called "virtual" because it exists but is unknown. One merit of the virtual equivalent system is that it quantitatively reflects the difference between a self-tuning control system and the corresponding constant control system. The definition of the convergence of the self-tuning control system is based on the constant system shown in Figure 2.
The stability of the self-tuning control system is defined by

$\limsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\left(\|y(i)\|^2 + \|u(i)\|^2\right) < \infty.$

The convergence of the self-tuning control system is defined by

$\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\|y(i) - y_r(i)\|^2 = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\|y_1(i) - y_r(i)\|^2,$

where $y_1(i)$ is the output of the constant control system of Figure 2.
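Both definitions are time averages and can be checked numerically on recorded trajectories. The following sketch uses synthetic signals (an assumed decaying transient plus noise around the reference; not data from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def time_avg(x):
    """Running average of squared norms, i.e. (1/n) * sum_{i<=n} ||x(i)||^2."""
    sq = np.sum(np.asarray(x) ** 2, axis=-1)
    return np.cumsum(sq) / np.arange(1, len(sq) + 1)

# Synthetic trajectories: y tracks y_r up to a decaying transient plus noise
N = 5000
y_r = np.ones((N, 1))
y = y_r + np.exp(-0.01 * np.arange(N))[:, None] + 0.05 * rng.standard_normal((N, 1))

stab = time_avg(y)            # surrogate for the stability average
conv = time_avg(y - y_r)      # surrogate for the tracking average
print(np.isfinite(stab[-1]), conv[-1] < 0.05)
```

Note that convergence in this averaged sense tolerates decaying transients and persistent noise: the tracking average settles at the noise floor rather than at zero, which is exactly what the stochastic definitions above capture.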

3. Main Results

3.1. Parameter Estimation Converges to the True Value

Considering a self-tuning control system based on an arbitrary control strategy and an arbitrary parameter estimation algorithm, the following results are obtained.
Theorem 1.
For the self-tuning control system of plant (1), suppose the following conditions are met:
(1) The parameter estimates converge to the true value;
(2) The control strategy satisfies the stability requirements for the plant with known parameters, i.e., the closed-loop system composed of $\Sigma_P$ and $\Sigma_C$ is stable;
(3) The mapping $f$ is continuous at $\hat\theta(k) = \theta$.
Then, the self-tuning control system is stable and convergent.
Proof. 
The stability of the system is proved by the method of contradiction, and then the convergence is proved.
Since the virtual equivalent system shown in Figure 3 is a linear constant structure (its time-varying nonlinear features are transferred to Δ u ( k ) ), it can be decomposed into two subsystems, one is the constant control system shown in Figure 2, and the other is the system shown in Figure 4.
$y(k) = y_1(k) + y_2(k),\quad u(k) = u_1(k) + u_2(k).$
By the superposition principle, the two subsystems can be considered separately. The system shown in Figure 2 is a conventional stochastic control system, and the second condition of Theorem 1 ensures that it is closed-loop stable, so

$\limsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\left(\|y_1(i)\|^2 + \|u_1(i)\|^2\right) < \infty,$

$\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\|y_1(i) - y_r(i)\|^2 < \infty.$ (10)
For the system shown in Figure 4, there is no influence of noise, and since the closed-loop system is stable, we have [42]

$\sum_{k=1}^{n}\|y_2(k)\|^2 \le M_1\sum_{k=1}^{n}\|\Delta u(k)\|^2 + M_2,\quad 0 < M_1 < \infty,\ 0 \le M_2 < \infty,$

$\sum_{k=1}^{n}\|u_2(k)\|^2 \le M_3\sum_{k=1}^{n}\|\Delta u(k)\|^2 + M_4,\quad 0 < M_3 < \infty,\ 0 \le M_4 < \infty.$

That is,

$\sum_{k=1}^{n}\|y_2(k)\|^2 = O\left(\sum_{k=1}^{n}\|\Delta u(k)\|^2\right) + M_2,\quad \sum_{k=1}^{n}\|u_2(k)\|^2 = O\left(\sum_{k=1}^{n}\|\Delta u(k)\|^2\right) + M_4.$
By conditions (1) and (3) of Theorem 1, we have $\theta_c(k) \to \theta_c$ and $\Delta u(k) = o(\|\phi_c(k)\|)$.
From the composition of $\phi_c(k)$, we know that $\|\phi_c(k)\| = O(\|\phi(k-d)\|) + M$, where $M$ is a bounded constant.
Furthermore, by the convergence of the parameter estimates, we have

$\frac{1}{n}\sum_{k=1}^{n}\|\Delta u(k)\|^2 = o\left(\frac{1}{n}\sum_{k=1}^{n}\|\phi(k-d)\|^2\right).$
Thus,
$\frac{1}{n}\sum_{k=1}^{n}\|y_2(k)\|^2 = o\left(\frac{1}{n}\sum_{k=1}^{n}\|\phi(k-d)\|^2\right),$ (13)

$\frac{1}{n}\sum_{k=1}^{n}\|u_2(k)\|^2 = o\left(\frac{1}{n}\sum_{k=1}^{n}\|\phi(k-d)\|^2\right).$ (14)
Then, to prove

$\limsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\left(\|y(i)\|^2 + \|u(i)\|^2\right) < \infty,$

it suffices to prove that

$\limsup_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}\|\phi(k-d)\|^2 < \infty.$
We can construct $\phi_1(k-d)$ (corresponding to the system of Figure 2) and $\phi_2(k-d)$ (corresponding to the system of Figure 4) such that

$\phi(k-d) = \phi_1(k-d) + \phi_2(k-d).$
It can be seen from Equations (13) and (14) that

$\frac{1}{n}\sum_{k=1}^{n}\|\phi_2(k-d)\|^2 = o\left(\frac{1}{n}\sum_{k=1}^{n}\|\phi(k-d)\|^2\right).$ (15)
By the triangle inequality (using $\|a+b\|^2 \le 2\|a\|^2 + 2\|b\|^2$), we have

$\frac{1}{n}\sum_{k=1}^{n}\|\phi(k-d)\|^2 = \frac{1}{n}\sum_{k=1}^{n}\|\phi_1(k-d) + \phi_2(k-d)\|^2 \le \frac{2}{n}\sum_{k=1}^{n}\|\phi_1(k-d)\|^2 + \frac{2}{n}\sum_{k=1}^{n}\|\phi_2(k-d)\|^2 = \frac{2}{n}\sum_{k=1}^{n}\|\phi_1(k-d)\|^2 + o\left(\frac{1}{n}\sum_{k=1}^{n}\|\phi(k-d)\|^2\right).$
Furthermore, considering

$\limsup_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\left(\|y_1(i)\|^2 + \|u_1(i)\|^2\right) < \infty,$

we obtain

$\limsup_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}\|\phi_1(k-d)\|^2 < \infty.$
Taking (15) into consideration, we obtain

$\limsup_{n\to\infty}\frac{1}{n}\sum_{k=1}^{n}\|\phi(k-d)\|^2 < \infty.$
Thus, combining the above formula with (13) and (14), it follows that

$\frac{1}{n}\sum_{k=1}^{n}\|y_2(k)\|^2 = o(1),$ (16)

$\frac{1}{n}\sum_{k=1}^{n}\|u_2(k)\|^2 = o(1).$
Next, we prove

$\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\|y(i) - y_r(i)\|^2 = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\|y_1(i) - y_r(i)\|^2.$
Combining the Cauchy–Schwarz inequality with (10) and (16), we have

$0 \le \left(\frac{1}{n}\sum_{i=1}^{n}\|y_1(i) - y_r(i)\|\,\|y_2(i)\|\right)^2 \le \left(\frac{1}{n}\sum_{i=1}^{n}\|y_1(i) - y_r(i)\|^2\right)\left(\frac{1}{n}\sum_{i=1}^{n}\|y_2(i)\|^2\right) \to 0.$
By the squeeze theorem, we obtain

$\lim_{n\to\infty}\left(\frac{1}{n}\sum_{i=1}^{n}\|y_1(i) - y_r(i)\|\,\|y_2(i)\|\right)^2 = 0.$

It follows that

$\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\|y_1(i) - y_r(i)\|\,\|y_2(i)\| = 0.$ (18)
Finally, let us consider

$\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\|y(i) - y_r(i)\|^2 = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\|(y_1(i) - y_r(i)) + y_2(i)\|^2.$
According to the norm triangle inequality and (16) and (18), we have

$\frac{1}{n}\sum_{i=1}^{n}\|(y_1(i) - y_r(i)) + y_2(i)\|^2 \le \frac{1}{n}\sum_{i=1}^{n}\|y_1(i) - y_r(i)\|^2 + \frac{1}{n}\sum_{i=1}^{n}\|y_2(i)\|^2 + \frac{2}{n}\sum_{i=1}^{n}\|y_1(i) - y_r(i)\|\,\|y_2(i)\|,$

where the last two terms on the right-hand side tend to zero. Similarly,

$\frac{1}{n}\sum_{i=1}^{n}\|(y_1(i) - y_r(i)) + y_2(i)\|^2 \ge \frac{1}{n}\sum_{i=1}^{n}\|y_1(i) - y_r(i)\|^2 + \frac{1}{n}\sum_{i=1}^{n}\|y_2(i)\|^2 - \frac{2}{n}\sum_{i=1}^{n}\|y_1(i) - y_r(i)\|\,\|y_2(i)\|.$
According to the squeeze theorem, we have

$\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\|(y_1(i) - y_r(i)) + y_2(i)\|^2 = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\|y_1(i) - y_r(i)\|^2,$

i.e.,

$\lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\|y(i) - y_r(i)\|^2 = \lim_{n\to\infty}\frac{1}{n}\sum_{i=1}^{n}\|y_1(i) - y_r(i)\|^2.$
That completes the proof of Theorem 1. □

3.2. Parameter Estimation Converges to Non-True Value

Since the structural information of the controlled plant is unknown, the order of the estimated model can be lower than that of the real plant, which is often the case in practical engineering applications.
Theorem 2.
For the self-tuning control system of plant (1), suppose that:
(1) The parameter estimates converge to $\theta_0$, the estimated model $\Sigma_{P_m}(k)$ is uniformly controllable, and
$\sum_{k=1}^{n}\|y(k) - \phi^T(k-d)\hat\theta(k) - \omega(k)\|^2 = o\left(1 + \sum_{k=1}^{n}\|\phi(k-d)\|^2\right);$
(2) The control strategy satisfies the stability requirements for the plant with known parameters;
(3) The mapping $f(\cdot)$ is continuous at $\hat\theta(k) = \theta_0$.
Then, the self-tuning control system is stable and convergent.
Proof. 
The stability of the system is proved by the method of contradiction, and then the convergence is proved.
In order to prove Theorem 2, we need to build another virtual equivalent system, as shown in Figure 5.
$\Sigma_{P_0}$ represents the model corresponding to the convergence value $\theta_0$ of the parameter estimates, and $\Sigma_{C_0}$ represents the controller corresponding to $\Sigma_{P_0}$.
The virtual equivalent system shown in Figure 5 differs from that shown in Figure 3. First, $\Sigma_{C_0}$ and $\Sigma_{P_0}$ differ from $\Sigma_C$ and $\Sigma_P$, respectively. Second, the definitions of the model error and compensation signal differ from (5) and (6), respectively. In Figure 5,

$e'(k) = y(k) - \phi^T(k-d)\theta_0 = y(k) - \phi^T(k-d)\hat\theta(k) + \phi^T(k-d)\hat\theta(k) - \phi^T(k-d)\theta_0,$

that is,

$e'(k) = e(k) + \phi^T(k-d)\hat\theta(k) - \phi^T(k-d)\theta_0,$

and

$\Delta u'(k) = u(k) - u_0(k) = \phi_c^T(k)\theta_c(k) - \phi_c^T(k)\theta_c^0.$
From condition (1) of Theorem 2, we know that

$\sum_{k=1}^{n}\|e(k) - \omega(k)\|^2 = o\left(1 + \sum_{k=1}^{n}\|\phi(k-d)\|^2\right).$

Then, we have

$e'(k) - \omega(k) = e(k) - \omega(k) + \phi^T(k-d)\hat\theta(k) - \phi^T(k-d)\theta_0.$

It is also known from condition (1) that $\hat\theta(k) \to \theta_0$, so we have

$\sum_{k=1}^{n}\|e'(k) - \omega(k)\|^2 = o\left(1 + \sum_{k=1}^{n}\|\phi(k-d)\|^2\right).$

Further, from condition (3), we have

$\Delta u'(k) = o\left(\beta + \|\phi(k-d)\|\right).$

Thus, $\Delta u'(k)$ has the same property as $\Delta u(k)$ in the proof of Theorem 1, i.e.,

$\frac{1}{n}\sum_{k=1}^{n}\|\Delta u'(k)\|^2 = o\left(\frac{1}{n}\sum_{k=1}^{n}\|\phi(k-d)\|^2\right).$
Decomposing the system shown in Figure 5 into three subsystems (shown in Figure 6, Figure 7, and Figure 8, respectively), it is known from condition (2) that the subsystem shown in Figure 6 is stable; the rest of the proof is similar to that of Theorem 1 (details are omitted to save space). □

3.3. Parameter Estimation May Not Converge

This section presents two theorems for STC systems consisting of the minimum variance control strategy or an arbitrary control strategy. As described in Section 3.2, the structural information of the estimated model may be inconsistent with that of the real plant.
First, let us consider the minimum variance control strategy and explain why this particular type of self-tuning control system does not require the convergence of parameter estimation.
Theorem 3.
For the minimum variance type self-tuning control system of plant (1), any feasible parameter estimation algorithm can be used if the following conditions are met:
(1) $B(q^{-1})$ is a Hurwitz (stable) polynomial, and $B_0 \ne 0$;
(2) The control law $u(k)$ exists;
(3) The parameter estimation satisfies
$\sum_{k=1}^{n}\|e(k) - \omega(k)\|^2 = o\left(\alpha + \sum_{k=1}^{n}\|\phi(k-d)\|^2\right),$
where $\alpha$ is a non-zero constant.
The self-tuning control system is then stable and convergent.
Proof. 
Using the virtual equivalent system shown in Figure 3, and provided conditions (2) and (3) are satisfied, it can be shown that minimum variance self-tuning control has the following special property [22]:

$\Delta u(k) = B_0^{-1}\left[\phi^T(k-1)\left(\theta - \hat\theta(k)\right)\right].$

Further,

$\Delta u(k) = B_0^{-1}\left[e(k) - \omega(k)\right].$

Therefore, $\Delta u(k)$ has the property

$\frac{1}{n}\sum_{k=1}^{n}\|\Delta u(k)\|^2 = o\left(\frac{1}{n}\sum_{k=1}^{n}\|\phi(k-d)\|^2\right).$
Decompose the virtual equivalent system of the minimum variance self-tuning control system into two subsystems, as shown in Figure 3 and Figure 4. The rest of the proof process is similar to that of Theorem 1, and the details are omitted.
In fact, the key step in the proofs of the three theorems is to establish the property of $\Delta u(k)$ or $\Delta u'(k)$ in the virtual equivalent system. In Theorems 1 and 2, an arbitrary (linear) control strategy is considered, so the mapping between the estimated parameters and the controller parameters is complicated, and convergence of the parameter estimates is required to ensure the property of $\Delta u(k)$. In the minimum variance control strategy, the controller parameters can be represented directly by the estimated parameters, and we have $\Delta u(k) = B_0^{-1}[e(k) - \omega(k)]$; therefore, convergence of the parameter estimates is not required. Only

$\sum_{k=1}^{n}\|e(k) - \omega(k)\|^2 = o\left(\alpha + \sum_{k=1}^{n}\|\phi(k-d)\|^2\right)$

is needed to ensure the stability and convergence of the minimum variance controller. □
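To make this special property concrete, the following scalar simulation sketch (a simplified single-variable illustration with assumed first-order plant and noise level, not the paper's multivariable setting) runs a certainty-equivalence minimum variance loop with RLS estimation and checks that the time-averaged tracking error settles near the noise floor, even though the estimates are not required to converge to the true parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed scalar plant (d = 1): y(k+1) = a y(k) + b u(k) + w(k+1)
# Minimum variance control: u(k) = (y_r - a_hat y(k)) / b_hat
a, b, y_r = 0.8, 1.0, 1.0
N = 3000
theta_hat = np.array([0.0, 0.5])       # [a_hat, b_hat], deliberately wrong start
P = 100.0 * np.eye(2)
y = 0.0
track_err = []
for k in range(N):
    a_hat, b_hat = theta_hat
    u = (y_r - a_hat * y) / b_hat      # certainty-equivalence MV control
    y_next = a * y + b * u + 0.05 * rng.standard_normal()
    phi = np.array([y, u])
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta_hat = theta_hat + K * (y_next - phi @ theta_hat)
    theta_hat[1] = max(theta_hat[1], 0.1)   # crude projection keeps b_hat away from 0
    P = P - np.outer(K, phi @ P)
    track_err.append((y_next - y_r) ** 2)
    y = y_next

# the time-averaged tracking error approaches the noise floor (~0.0025)
print(np.mean(track_err[-1000:]) < 0.05)
```

The simple projection on `b_hat` is an assumed safeguard (cf. Remark 1 below on keeping estimates bounded); the point illustrated is that tracking is achieved through the property $\Delta u(k) = B_0^{-1}[e(k) - \omega(k)]$, not through parameter convergence.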
Corollary 1.
Consider a minimum variance type self-tuning control system using any feasible parameter estimation algorithm. If
(1) $B(q^{-1})$ is a Hurwitz (stable) polynomial, and $B_0 \ne 0$;
(2) The control law $u(k)$ exists;
(3) The parameter estimation error is bounded, i.e., $\frac{1}{n}\sum_{k=1}^{n}\|e(k) - \omega(k)\|^2 \le M < \infty$;
then the self-tuning control system is stable.
Next, we consider the stability and convergence of a self-tuning control system with an arbitrary control strategy when the parameter estimates may not converge.
Theorem 4.
For the self-tuning control system of plant (1), suppose that:
(1) $\|\hat\theta(k)\| \le M < \infty$ and $\hat\theta(k) - \hat\theta(k-l) \to 0$, where $l$ is a finite value;
(2) $\sum_{k=1}^{n}\|e(k) - \omega(k)\|^2 = o\left(\alpha + \sum_{k=1}^{n}\|\phi(k-d)\|^2\right)$, where $\alpha$ is a non-zero constant;
(3) The control strategy satisfies the stability requirements for the plant with known parameters and tracks the reference signal;
(4) The controller parameters are a continuous function of the parameter estimates; that is, $\theta_c(k)$ is a continuous function of $\hat\theta(k)$.
If the above conditions are met, the self-tuning control system is stable and convergent.
Remark 1.
To ensure that the parameter estimates are bounded, a projection approach can be used; see references [43,44,45,46].
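A minimal sketch of such a projection (Euclidean projection onto a ball of assumed radius $M$; the cited references use more refined schemes tailored to the estimator):

```python
import numpy as np

def project_onto_ball(theta, M):
    """Project theta onto the convex set {x : ||x|| <= M}."""
    nrm = np.linalg.norm(theta)
    return theta if nrm <= M else theta * (M / nrm)

theta = np.array([3.0, 4.0])               # ||theta|| = 5, outside the ball
theta_p = project_onto_ball(theta, M=2.0)
print(np.linalg.norm(theta_p))             # lands on the boundary of the ball
```

Applied after each estimator update, such a step enforces condition (1) of Theorem 4 ($\|\hat\theta(k)\| \le M$) without affecting the other conditions.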
Proof. 
Consider another virtual equivalent system, shown in Figure 9, where $P_m(k)$ and $C(k)$ correspond to $\Sigma_{P_m}(k)$ and $\Sigma_C(k)$, respectively.
The system shown in Figure 9 is further converted into the virtual equivalent system shown in Figure 10. By conditions (1) and (4) of Theorem 4, the interval between $t_k$ and $t_{k-1}$ can be chosen sufficiently large to maintain the required property of $\Delta u(k)$. Therefore, the system shown in Figure 10 is a "slow switching" system.
Next, the virtual equivalent system shown in Figure 10 is decomposed into three subsystems, shown in Figure 11, Figure 12, and Figure 13, respectively. Figure 11 is a stochastic system; Figure 12 and Figure 13 are deterministic systems. By conditions (1) and (2), we have

$\sum_{k=1}^{n}\|e_i(k) - \omega(k)\|^2 = o\left(\alpha + \sum_{k=1}^{n}\|\phi(k-d)\|^2\right).$

Based on the results on "slow switching" stochastic systems [43,44,45,46] and conditions (1) and (3), the system shown in Figure 11 is stable and tracking. The rest of the proof is similar to that of Theorem 1. □
Further, considering the low-order modeling situation, we have the following result.
Corollary 2.
Consider a self-tuning control system consisting of any feasible parameter estimation algorithm and control strategy. If the following conditions hold:
(1) $\|\hat\theta(k)\| \le M < \infty$ and $\hat\theta(k) - \hat\theta(k-l) \to 0$, where $l$ is a finite value;
(2) $\frac{1}{n}\sum_{k=1}^{n}\|e(k) - \omega(k)\|^2 \le M < \infty$;
(3) The control strategy satisfies the stability requirements for the plant with known parameters and tracks the reference;
(4) The controller parameters are a continuous function of the parameter estimates; that is, $\theta_c(k)$ is a continuous function of $\hat\theta(k)$.
Then, the self-tuning control system is stable and convergent.

4. Extended Results

The above results can be extended to the colored noise case. The difficulty is that the noise must be estimated along with the parameters.
Consider a general multivariable stochastic system $\Sigma_P$:

$A(q^{-1})y(k) = q^{-d}B(q^{-1})u(k) + C(q^{-1})\omega(k),$ (25)
where $y(k)$, $u(k)$, and $\omega(k)$ have the same meanings as in (1), and

$A(q^{-1}) = I + A_1 q^{-1} + \cdots + A_n q^{-n},\quad n \ge 1,$
$B(q^{-1}) = B_0 + B_1 q^{-1} + \cdots + B_m q^{-m},\quad m \ge 1,$
$C(q^{-1}) = I + C_1 q^{-1} + \cdots + C_r q^{-r},\quad r \ge 1.$
Introducing the notation

$\theta^T = [A_1, \ldots, A_n, B_0, \ldots, B_m, C_1, \ldots, C_r],$
$\phi_0^T(k-d) = [y(k-1), \ldots, y(k-n), u(k-d), \ldots, u(k-d-m), \omega(k-1), \ldots, \omega(k-r)],$

we have

$y(k) = \phi_0^T(k-d)\theta + \omega(k).$
The elements of the parameter matrix become

$\hat\theta^T(k) = [\hat A_1, \ldots, \hat A_n, \hat B_0, \ldots, \hat B_m, \hat C_1, \ldots, \hat C_r].$
Since $\phi_0^T(k-d)$ contains unknown noise terms, the noise must be estimated along with the parameters, so the regression matrix (vector) for parameter estimation is

$\phi^T(k-d) = [y(k-1), \ldots, y(k-n), u(k-d), \ldots, u(k-d-m), \hat\omega(k-1), \ldots, \hat\omega(k-r)],$

where

$\hat\omega(k) = y(k) - \phi^T(k-d)\hat\theta(k).$
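A sketch of this noise-estimation idea in the scalar case (extended least squares with assumed coefficients; a simplified single-variable version of the multivariable development above). The unknown $\omega(k-1)$ in the regressor is replaced by the residual $\hat\omega(k-1)$:

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed scalar colored-noise model:
#   y(k) = -a1 y(k-1) + b0 u(k-1) + w(k) + c1 w(k-1)
a1, b0, c1 = 0.5, 1.0, 0.3
N = 5000
theta_hat = np.zeros(3)                  # estimates of [-a1, b0, c1]
P = 1e3 * np.eye(3)
y_prev = u_prev = w_prev = w_hat_prev = 0.0
for k in range(N):
    u = rng.standard_normal()
    w = 0.1 * rng.standard_normal()
    y = -a1 * y_prev + b0 * u_prev + w + c1 * w_prev
    phi = np.array([y_prev, u_prev, w_hat_prev])   # w(k-1) replaced by w_hat(k-1)
    K = P @ phi / (1.0 + phi @ P @ phi)
    theta_hat = theta_hat + K * (y - phi @ theta_hat)
    P = P - np.outer(K, phi @ P)
    w_hat = y - phi @ theta_hat          # posterior residual, used as w_hat(k)
    y_prev, u_prev, w_prev, w_hat_prev = y, u, w, w_hat

print(np.round(theta_hat, 2))            # approaches [-0.5, 1.0, 0.3]
```

For this choice $|c_1| < 1$, so the usual positive-realness condition on $C(q^{-1})$ for extended least squares holds and the joint parameter/noise estimates converge.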
The other symbols are the same as before. The self-tuning controller is denoted by $\Sigma_C(k)$ and can also be regarded as a matrix $\theta_c(k)$, which can be obtained by various control design methods. The self-tuning control system is abbreviated as $(\Sigma_C(k), \Sigma_P)$, and the corresponding non-adaptive control system as $(\Sigma_C, \Sigma_P)$. The virtual equivalent system of the self-tuning control system is abbreviated as $(\Sigma_C, \Sigma_P, \Delta u(k))$, which can still be illustrated by Figure 3.
If the calculation of the control law does not require an estimate of the noise, then the calculation of $u(k)$, $\Delta u(k)$, and $\phi_c(k)$ raises no noise estimation problem.
If the calculation of the control law does require a noise estimate, the following problem arises: the regression matrices (vectors) used to calculate $u(k)$ and $u_0(k)$ differ, that is,

$u(k) = \phi_c^T(k)\theta_c(k),\quad u_0(k) = \phi_{c0}^T(k)\theta_c,$

where

$\phi_c^T(k) = [y_r(k), y_r(k-1), \ldots, y(k), y(k-1), \ldots, u(k-1), \ldots, \hat\omega(k-1), \ldots],$
$\phi_{c0}^T(k) = [y_r(k), y_r(k-1), \ldots, y(k), y(k-1), \ldots, u(k-1), \ldots, \omega(k-1), \ldots].$
However, by condition (3) of Theorem 3,

$\sum_{k=1}^{n}\|e(k) - \omega(k)\|^2 = o\left(\alpha + \sum_{k=1}^{n}\|\phi(k-d)\|^2\right),$

which is equivalent (since, by definition, $\hat\omega(k)$ is $e(k)$) to

$\sum_{k=1}^{n}\|\hat\omega(k) - \omega(k)\|^2 = o\left(\alpha + \sum_{k=1}^{n}\|\phi(k-d)\|^2\right).$

Thus, the difference between $\phi_c(k)$ and $\phi_{c0}(k)$ can be merged into $\Delta u(k)$ without affecting its property. Therefore, the above results (for white noise) still hold for the general stochastic system (25).
Remark 2.
For simulation verification, see reference [47].

5. Conclusions

Based on the virtual equivalent system concept, a unified analysis of multivariable stochastic self-tuning control (STC) systems has been presented. By means of the virtual equivalent system, the difficulty of analyzing the stability and convergence of a stochastic self-tuning control system is transferred from the system structure to the compensation signal, which reduces the difficulty of the original problem and makes the analysis more intuitive and easier to understand. We investigated three situations: the parameter estimates converge to the true value, converge to a non-true value, or may not converge.

Author Contributions

Conceptualization, J.P.; methodology, J.P., W.Z. and K.P.; software, W.Z.; validation, J.P.; formal analysis, K.P.; investigation, J.P. and K.P.; resources, W.Z. and K.P.; data curation, J.P. and W.Z.; writing—original draft preparation, J.P.; writing—review and editing, J.P. and K.P.; visualization, K.P. and W.Z.; supervision, W.Z.; project administration, W.Z.; funding acquisition, W.Z. and K.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key R&D Program of China, grant number 2019YFB2005804.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are contained within the article. The data presented in this study can be requested from the authors.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Goodwin, G.C.; Sin, K.S. Adaptive Filtering, Prediction and Control; Prentice Hall: Englewood Cliffs, NJ, USA, 1984. [Google Scholar]
  2. Goodwin, G.C.; Ramadge, P.J.; Caines, P.E. Discrete time adaptive control. SIAM J. Control Optim. 1981, 19, 829–853. [Google Scholar] [CrossRef]
  3. Bidikli, B.; Bayrak, A. A self-tuning robust full-state feedback control design for the magnetic levitation system. Control Eng. Pract. 2018, 78, 175–185. [Google Scholar] [CrossRef]
  4. Zou, Z.; Zhao, D.; Liu, X.; Guo, Y.; Guan, C.; Feng, W.; Guo, N. Pole-placement self-tuning control of nonlinear Hammerstein system and its application to pH process control. Chin. J. Chem. Eng. 2015, 23, 1364–1368. [Google Scholar] [CrossRef]
  5. Guo, L.; Chen, H.F. Stability and Optimality of Self-tuning Regulator. Sci. China (Ser. A) 1991, 9, 905–913. [Google Scholar]
  6. Guo, L.; Chen, H.F. The Astrom-Wittenmark self-tuning regulator revisited and ELS-based adaptive trackers. IEEE Trans. Autom. Control 1991, 36, 802–812. [Google Scholar]
  7. Guo, L. Self-convergence of weighted least-squares with applications to stochastic adaptive control. IEEE Trans. Autom. Control 1996, 41, 79–89. [Google Scholar]
  8. Nassiri-Toussi, K.; Ren, W. Indirect adaptive pole-placement control of MIMO stochastic systems: Self-tuning results. IEEE Trans. Autom. Control 1997, 42, 38–52. [Google Scholar] [CrossRef]
  9. Yagmur, O. Clonal selection algorithm based control for two-wheeled self-balancing mobile robot. Simul. Model. Pract. Theory 2022, 118, 102552. [Google Scholar]
  10. Dario, R. Pole-zero assignment by the receptance method: Multi-input active vibration control. Mech. Syst. Signal Process. 2022, 172, 108976. [Google Scholar]
  11. Anderson, B.D.O.; Johnstone, R.M.G. Global adaptive pole positioning. IEEE Trans. Autom. Control 1985, 30, 11–22. [Google Scholar] [CrossRef]
  12. Elliott, H.; Cristi, R.; Das, M. Global stability of adaptive pole placement algorithms. IEEE Trans. Autom. Control 1985, 30, 348–356. [Google Scholar] [CrossRef]
  13. Lozano, R.; Zhao, X.H. Adaptive pole placement without excitation probing signals. IEEE Trans. Autom. Control 1994, 39, 47–58. [Google Scholar] [CrossRef]
  14. Chan, C.Y.; Sirisena, H.R. Convergence of adaptive pole-zero placement controller for stable non-minimum phase systems. Int. J. Control 1989, 50, 743–754. [Google Scholar] [CrossRef]
  15. Lai, T.L.; Wei, C.Z. Extended least squares and their applications to adaptive control and prediction in linear systems. IEEE Trans. Autom. Control 1986, 31, 898–906. [Google Scholar]
  16. Chen, H.F.; Guo, L. Asymptotically optimal adaptive control with consistent parameter estimates. SIAM J. Control. Optim. 1987, 25, 558–575. [Google Scholar] [CrossRef]
  17. Wittenmark, B.; Middleton, R.H.; Goodwin, G.C. Adaptive decoupling of multivariable systems. Int. J. Control 1987, 46, 1993–2009. [Google Scholar] [CrossRef]
  18. Chai, T.Y. The global convergence analysis of a multivariable decoupling self-tuning controller. Acta Autom. Sin. 1989, 15, 432–436. [Google Scholar]
  19. Chai, T.Y. Direct adaptive decoupling control for general stochastic multivariable systems. Int. J. Control 1990, 51, 885–909. [Google Scholar] [CrossRef]
  20. Chai, T.Y.; Wang, G. Globally convergent multivariable adaptive decoupling controller and its application to a binary distillation column. Int. J. Control 1992, 55, 415–429. [Google Scholar] [CrossRef]
  21. Patete, A.; Furuta, K.; Tomizuka, M. Stability of self-tuning control based on Lyapunov function. Int. J. Adapt. Control. Signal Process. 2008, 22, 795–810. [Google Scholar] [CrossRef]
  22. Zhang, W. On the stability and convergence of self-tuning control–virtual equivalent system approach. Int. J. Control 2010, 83, 879–896. [Google Scholar] [CrossRef]
  23. Fekri, S.; Athans, M.; Pascoal, A. Issues, progress and new results in robust adaptive control. Int. J. Adapt. Control. Signal Process. 2006, 20, 519–579. [Google Scholar] [CrossRef]
  24. Wang, Y.; Li, P.; Tang, J. MPPT control of photovoltaic power generation system based on fuzzy parameter self-tuning PID method. Electr. Power Autom. Equip. 2008, 28, 4. [Google Scholar]
  25. Tang, L.; Xu, Z. The Control Technology of Self-correction for Intelligent AC Contactors. Proc. CSEE 2015, 35, 1516–1523. [Google Scholar]
  26. Chen, S.; Wu, J.; Yang, B.; Ma, J. A Self-Tuning Control Method for Simulation Turntable Based on Precise Identification of Model Parameters. CN Patent 201710271289.4, 18 February 2020. [Google Scholar]
  27. Shao, K.; Zheng, J.; Wang, H.; Wang, X.; Lu, R.; Man, Z. Tracking Control of a Linear Motor Positioner Based on Barrier Function Adaptive Sliding Mode. IEEE Trans. Ind. Inform. 2021, 17, 7479–7488. [Google Scholar] [CrossRef]
  28. Nassiri-Toussi, K.; Ren, W. A unified analysis of stochastic adaptive control: Potential self-tuning. In Proceedings of the American Control Conference, Seattle, WA, USA, 21–23 June 1995. [Google Scholar]
  29. Nassiri-Toussi, K.; Ren, W. A unified analysis of stochastic adaptive control: Asymptotic self-tuning. In Proceedings of the 34th IEEE Conference on Decision and Control, New Orleans, LA, USA, 13–15 December 1995; p. 3. [Google Scholar]
  30. Morse, A.S. Towards a unified theory of parameter adaptive control: Tunability. IEEE Trans. Autom. Control 1990, 35, 1002–1012. [Google Scholar] [CrossRef]
  31. Morse, A.S. Towards a unified theory of parameter adaptive control-part II: Certainty equivalence and implicit tuning. IEEE Trans. Autom. Control 1992, 37, 15–29. [Google Scholar] [CrossRef]
  32. Katayama, T.; McKelvey, T.; Sano, A.; Cassandras, C.G.; Campi, M.C. Trends in systems and signals. Annu. Rev. Control 2006, 30, 5–17. [Google Scholar] [CrossRef]
  33. Li, Q.Q. Adaptive control. Comput. Autom. Meas. Control 1999, 7, 56–60. [Google Scholar]
  34. Li, Q.Q. Adaptive Control System Theory, Design and Application; Science Press: Beijing, China, 1990. [Google Scholar]
  35. Aström, K.J.; Wittenmark, B. Adaptive Control; Addison-Wesley: Upper Saddle River, NJ, USA, 1995. [Google Scholar]
  36. Ioannou, P.A.; Sun, J. Robust Adaptive Control; Prentice-Hall: Englewood Cliffs, NJ, USA, 1996. [Google Scholar]
  37. Kumar, P.R. Convergence of adaptive control schemes using least-squares parameter estimates. IEEE Trans. Autom. Control 1990, 35, 416–424. [Google Scholar] [CrossRef]
  38. van Schuppen, J.H. Tuning of Gaussian stochastic control systems. IEEE Trans. Autom. Control 1994, 39, 2178–2190. [Google Scholar] [CrossRef]
  39. Zhang, W.C. The convergence of parameter estimates is not necessary for a general self-tuning control system-stochastic plant. In Proceedings of the 48th IEEE Conference on Decision and Control, Shanghai, China, 15–18 December 2009. [Google Scholar]
  40. Zhang, W.C.; Li, X.L.; Choi, J.Y. A unified analysis of switching multiple model adaptive control—Virtual equivalent system approach. In Proceedings of the 17th IFAC World Congress, Seoul, Republic of Korea, 6–11 July 2008; Volume 41, pp. 14403–14408. [Google Scholar]
  41. Zhang, W.C. Virtual equivalent system theory for self-tuning control. J. Harbin Inst. Technol. 2014, 46, 107–112. [Google Scholar]
  42. Feng, C.; Shi, W. Adaptive Control; Publishing House of Electronics Industry: Beijing, China, 1986. [Google Scholar]
  43. Chatterjee, D.; Liberzon, D. Stability analysis of deterministic and stochastic switched systems via a comparison principle and multiple Lyapunov functions. SIAM J. Control. Optim. 2006, 45, 174–206. [Google Scholar] [CrossRef]
  44. Chatterjee, D.; Liberzon, D. On stability of stochastic switched systems. In Proceedings of the 43rd Conference on Decision and Control, Nassau, Bahamas, 14–17 December 2004; Volume 4, pp. 4125–4127. [Google Scholar]
  45. Prandini, M. Switching control of stochastic linear systems: Stability and performance results. In Proceedings of the 6th Congress of SIMAI, Chia Laguna, Cagliari, Italy, 27–31 May 2002. [Google Scholar]
  46. Prandini, M.; Campi, M.C. Logic-based switching for the stabilization of stochastic systems in presence of unmodeled dynamics. In Proceedings of the 40th IEEE Conference on Decision and Control, Orlando, FL, USA, 4–7 December 2001. [Google Scholar]
  47. Zhang, W.C.; Wei, W. Virtual equivalent system theory for adaptive control and simulation verification. Sci. Sin. Inf. 2018, 48, 947–962. [Google Scholar] [CrossRef]
Figure 1. Stochastic self-tuning control system.
Figure 2. Constant control system corresponding to the self-tuning control system.
Figure 3. Virtual equivalent system of the stochastic self-tuning control system.
Figure 4. Decomposition subsystem of virtual equivalent system 2.
Figure 5. Virtual equivalent system when parameter estimates converge to non-true values.
Figure 6. Decomposition system of virtual equivalent system 1.
Figure 7. Decomposition system of virtual equivalent system 2.
Figure 8. Decomposition system of virtual equivalent system 3.
Figure 9. Virtual equivalent system I when parameter estimation may not converge.
Figure 10. Virtual equivalent system II when parameter estimation may not converge.
Figure 11. Decomposed subsystem 1.
Figure 12. Decomposed subsystem 2.
Figure 13. Decomposed subsystem 3.

Share and Cite

Pan, J.; Peng, K.; Zhang, W. From Nonlinear Dominant System to Linear Dominant System: Virtual Equivalent System Approach for Multiple Variable Self-Tuning Control System Analysis. Entropy 2023, 25, 173. https://doi.org/10.3390/e25010173
