Article

Parameter-Optimal-Gain-Arguable Iterative Learning Control for Linear Time-Invariant Systems with Quantized Error

Yan Liu 1 and Xiaoe Ruan 2
1 School of Mathematics and Information Sciences, North Minzu University, Yinchuan 750021, China
2 Department of Applied Mathematics, School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an 710049, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(17), 9551; https://doi.org/10.3390/app13179551
Submission received: 25 May 2023 / Revised: 13 August 2023 / Accepted: 17 August 2023 / Published: 23 August 2023

Abstract: In this paper, a parameter-optimal gain-arguable iterative learning control algorithm is proposed for a class of linear discrete-time systems with quantized error. Based on the lifted model description for ILC systems, the iteration time-variable derivative learning gain in the algorithm is optimized by solving a minimization problem regarding the tracking error energy and the learning effort amplified by a weighting factor. Further, the tracking error converges monotonically to zero when the convergence condition is guaranteed, and the rate of convergence can be adjusted by scaling the weighting factor of the optimization problem. Compared with existing iterative learning control algorithms for quantized systems, the innovations of this algorithm are as follows: (i) the optimization-based strategy for selecting learning gains can improve the active learning ability of the control mechanism and avoid the passivity of existing learning-gain selections; (ii) the POGAILC algorithm with data quantization can improve the convergence performance of the tracking error and reduce the negative effects of the logarithmic quantizer's data quantization on control performance; and (iii) we provide a rigorous convergence analysis by deriving the existence of a unique solution for the optimal learning-gain vector under both singular and nonsingular tracking-error diagonalized matrices. Finally, numerical simulations are used to demonstrate the effectiveness of the algorithm.

1. Introduction

Iterative learning control (ILC) is a technique that leverages historical tracking information to refine current control inputs. It can successfully drive the system to track the desired trajectory precisely over a fixed time interval. The roots of ILC trace back to robotics, in which a robotic manipulator attempts to repetitively track a predetermined ideal trajectory [1]. The originality of ILC is that it creates a servomechanism that executes the tracking task by virtue of experience or knowledge. The multi-operation feature enables the ILC mechanism to utilize the observed tracking discrepancy of the current operation to correct the control commands and generate an upgraded control command for subsequent operations. Classical ILC algorithms comprise a range of different control schemes, including derivative-type (D-type), proportional-type (P-type), proportional–derivative-type (PD-type), and proportional–integral–derivative-type (PID-type) laws, all of which are designed to improve the performance of dynamic systems [2,3,4]. Notably, ILC can deal with complicated systems with minimal prior knowledge, thanks to its simple algorithmic structure and its independence from an accurate model of the system dynamics. In the past few decades, the tracking performance of various ILC algorithms has been recognized through theoretical developments. Benefiting from constructive algorithmic innovations, the ILC technique has been utilized in numerous applications [5,6,7,8,9,10]. In particular, with the development of computer science, existing efficacious ILC schemes have been realized in digital form. This transition necessitates an effective way of handling the discrete-time ILC system's description and analysis. In this regard, benefiting from the finite sampling instants of the ILC system, the lifting technique has proven to be an effective approach. In addition, the 2D method has been acknowledged as a powerful tool [11].
With the development of ILC technology, ILC-based advanced control technologies have attracted a great deal of attention, yielding a variety of new algorithms. Amann and Owens et al. first introduced the concept of a norm-optimal ILC (NOILC) algorithm based on the lifted model description for a class of linear time-varying systems [12,13,14,15,16]. The NOILC algorithm minimizes a quadratic objective function that contains the additive norms of the tracking error and adjacent iterative control input increments. Its control input for the next operation induces an optimized ILC updating law in recursive form. However, the NOILC requires solving a Riccati equation for the compensation gain, which is computationally challenging. To avoid this difficulty, Owens, Feng, and Hätönen et al. proposed parameter-optimal ILC (POILC), which uses a lower-dimensional optimization for linear discrete time-invariant systems while retaining the most important properties of the NOILC. The performance index is formulated using the linear quadratic norms of the tracking error and the learning gain, and an iteration-wise scalar gain of the D-type ILC scheme is assigned in [17]. Additional POILC research has investigated optimization techniques for improving the convergence rate and robustness [18,19,20,21,22]. Liu and Ruan et al. argued the existence and uniqueness of the optimized iteration time-variable learning-gain vector by sequentially minimizing the sum of the tracking error energy and the learning effort intensity, amplified by an iteration-wise tuning factor. In addition, they analyzed the strictly unconditional monotone convergence of the optimized first-order and higher-order iterative learning control schemes [23].
Recently, with the rapid advancement of internet services and communications technologies, networked control systems (NCSs) have been widely applied in smart grids, water/gas distribution, and modern public transportation systems. NCSs, coupled with novel analytical techniques and controller design schemes, have been used to solve problems in dynamic control systems and networks [24,25,26]. The combination of networked control and iterative learning control (NILC) has received a great deal of attention, owing to its advantages including simple installation, decreased system wiring, and simple system maintenance and diagnosis, among others [27]. However, NILC's potential has been curtailed by the physical characteristics of wired or wireless network communication devices, compounded by several factors that may degrade the tracking performance of ILC systems. For example, the quantized error, temporal oscillation of the network, time delays, and packet dropout can all contribute to such deterioration. Numerous studies have been conducted to overcome the limitations of NILC. The common method for dealing with communication delays is to replace the delayed data with the most recently captured data when the delay is smaller than the sampling step length [28,29]. The solution to the packet dropout issue is to replace the dropped data with the latest captured data [30,31]. In order to alleviate the transmission burden, a quantizer converts continuous signals into piecewise-constant signals taking values in a finite set; that is, quantization is a form of data compression that reduces the resolution of a signal by rounding its values to the nearest allowable level. However, quantization may result in chaotic vibrations [32,33]. The stability of ILC for linear and nonlinear systems with data quantization has been studied by Bu and Shen et al. [34]. The convergence conditions related to the quantization density are investigated using a sector-bound approach for the logarithmic quantizer and the contraction mapping approach of the ILC, but the tracking error of the system with the quantized output signal only converges to a small bound. Later, Xu and Shen improved these theoretical results by proposing an ILC law using quantized error information for linear and nonlinear systems; this scheme achieves zero-error convergence of the tracking error [35]. Two new multi-lagged-input-based quantized iterative learning control methods using output quantization and error quantization are proposed in [36]. Subsequently, using the lifting representation, convergence conditions are given for different forms of data quantization in [37,38,39], but when those convergence conditions are guaranteed, the tracking error of ILC with quantized system output and quantized control input converges only to a bound. Collectively, these studies underscore the significance of employing appropriate algorithms in digitally controlled systems with quantization.
In the existing convergence analyses for discrete-time systems with data quantization, if a static logarithmic quantizer is adopted so that the quantization density is determined, a convergence condition is provided to guide the selection of the learning gain. However, the learning gain of an iterative learning control scheme with data quantization is chosen by the "trial and error" method, according to experience, within the convergence conditions. The learning mechanism is therefore passive and blind, and the tracking error of the system with data quantization cannot converge to zero. Some cases have shown that the iterative learning control law exhibits only asymptotic convergence, with a slow convergence rate. Consequently, the selection of a suitable learning gain for accurate tracking is an important issue in ILC systems with data quantization.
We are motivated by the preceding discussions and inspired by the concept of the iteration time-variable derivative learning-gain vector obtained from a minimization problem in our prior work [23]. Our aim in this paper is to propose an optimal algorithm that uses an appropriate learning gain, and hence to improve the convergence characteristics of ILC for discrete-time systems with data quantization.
In this paper, based on the lifted model description for ILC systems with quantized error, an optimal method for selecting the learning gain is presented on the basis of the POILC algorithm. The optimal learning gain is the solution to a minimization problem involving the tracking error energy and the learning effort amplified by an iteration-wise weighting factor. By discussing the convergence condition of the parameter-optimal gain-arguable ILC with quantized error (POGAILCQe) law, it is found that the optimal iteration time-varying learning gain exists, and that the convergence factors are related to the model parameters, the quantization density, and the weighting factor. The POGAILCQe algorithm can improve the convergence performance of the tracking error and reduce the negative effects of the logarithmic quantizer's data quantization on control performance, and we provide a rigorous convergence analysis based on the existence of this optimization-based selection of learning gains, which can improve the active learning ability of the control mechanism and avoid the passivity of existing learning-gain selections. The paper is arranged as follows. In Section 2, the ILC law with quantized error is constructed and its convergence condition is analyzed. The optimal ILC law with data quantization is constructed and its convergence conditions are given in Section 3. Section 4 exhibits the numerical simulations, and the last section concludes the paper.

2. Problem Formulation and the Convergence of the Quantized ILC Law

Suppose that a repetitive single-input–single-output (SISO) linear time-invariant discrete system is described as follows:
$$ x_k(n+1) = A x_k(n) + B u_k(n), \quad y_k(n+1) = C x_k(n+1), \quad x_k(0) = 0, \quad n \in S, \tag{1} $$
where $n$ is the discrete sampling time, $S = \{0, 1, 2, \ldots, N-1\}$ stands for the collection of sampling instants, and $N$ represents the total number of samples, while the subscript $k$ denotes the operation index or repetition, named the iteration. $x_k(n)$ and $y_k(n)$ are the $P$-dimensional state vector and the output for the $k$-th iteration, and $u_k(n)$ is the scalar input. $A$, $B$, and $C$ are matrices with appropriate dimensions. In this paper, we investigate the case in which $CB \neq 0$, so that the relative degree of the system (1) is unity.
For any input sequence $u_k = [u_k(0), u_k(1), \ldots, u_k(N-1)]^T$, the output sequence of the system (1) is expressed as
$$ y_k(n+1) = \sum_{l=0}^{n} C A^{n-l} B\, u_k(l), \quad n = 0, 1, \ldots, N-1. \tag{2} $$
Let
$$ H = \begin{bmatrix} CB & 0 & 0 & \cdots & 0 \\ CAB & CB & 0 & \cdots & 0 \\ CA^2B & CAB & CB & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ CA^{N-1}B & CA^{N-2}B & CA^{N-3}B & \cdots & CB \end{bmatrix}, $$
and
$$ y_k = [y_k(1), y_k(2), \ldots, y_k(N)]^T, \quad u_k = [u_k(0), u_k(1), \ldots, u_k(N-1)]^T. $$
Then, the system (1) or (2) becomes
$$ y_k = H u_k. \tag{3} $$
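To make the lifting concrete, the following minimal Python sketch (our own illustration, not code from the paper) builds $H$ from given state-space matrices and checks it against the convolution sum (2); the function name and the toy matrices are assumptions for demonstration only.

```python
import numpy as np

def lift_markov_matrix(A, B, C, N):
    """Assemble the N-by-N lower-triangular Toeplitz matrix H whose (i, j)
    entry is the Markov parameter C A^(i-j) B, so that y_k = H u_k."""
    H = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1):
            H[i, j] = (C @ np.linalg.matrix_power(A, i - j) @ B).item()
    return H

# quick self-check against the convolution sum (2) for a random input
A = np.array([[0.8, -0.22], [1.0, 0.0]])   # illustrative toy matrices
B = np.array([[0.5], [1.0]])
C = np.array([[1.0, 0.5]])
N = 8
H = lift_markov_matrix(A, B, C, N)
u = np.random.randn(N)
y = np.array([sum((C @ np.linalg.matrix_power(A, n - l) @ B).item() * u[l]
                  for l in range(n + 1)) for n in range(N)])
assert np.allclose(H @ u, y)
```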
While the system (1) attempts to track a desired trajectory, an iterative learning control scheme is adopted. Suppose that $y_d(n)$ is the desired trajectory and $e_k(n) = y_d(n) - y_k(n)$ is the output tracking error. The mechanism of the ILC is to compensate the control command $u_k(n)$ by $e_k(n)$ so that $u_{k+1}(n)$ may drive the system toward better tracking performance.
For practical execution, the output signal is quantized before transmission to the controller. To achieve this, we construct the updating law of the iterative learning control with quantized error (ILCQe) as follows:
$$ u_1(n): \text{given arbitrarily}; $$
$$ u_{k+1}(n) = u_k(n) + \Gamma\, Q(e_k(n+1)), \quad n \in S. \tag{4} $$
Here, the parameter $\Gamma$ is a derivative learning gain; it is a concrete number chosen by the "trial and error" method, according to experience, within the convergence conditions, and $Q(e_k(n+1))$ is a scalar function determined by a quantizer. In the parts that follow, we discuss the quantizer.
This study utilizes the logarithmic type of quantizer described in [40]; the set of quantization levels is represented by
$$ U = \left\{ \pm s_i : s_i = \eta^i s_0, \ i = 0, \pm 1, \pm 2, \ldots \right\} \cup \{0\}, \quad 0 < \eta < 1, \ s_0 > 0, $$
where $s_0$ denotes the initial quantization level, $i$ indexes the quantization series, and $s_i$ corresponds to the $i$-th segment of the quantizer, which maps the data in this segment to the appropriate quantization level. The parameter $\eta$ is considered a measure of the quantization density of the logarithmic quantizer. The quantization levels cover a range; when the quantization density is lower, there are fewer quantization levels, so the quantizer can be said to be coarse.
The associated quantizer $Q(\cdot)$ is defined as
$$ Q(v) = \begin{cases} s_i, & \dfrac{1}{1+\sigma}\, s_i < v \le \dfrac{1}{1-\sigma}\, s_i, \\[4pt] 0, & v = 0, \\[4pt] -Q(-v), & v < 0, \end{cases} \tag{5} $$
where $\sigma = \frac{1-\eta}{1+\eta}$ is called the sector bound. This sector-bound method has been used to deal with data quantization in [41]. A sector-bound expression can be written as $Q(v) = (1 + \Delta(v))\,v$ for a given signal $v$ and the quantizer $Q(\cdot)$, where $\Delta(v)$ is a scalar function satisfying $|\Delta(v)| \le \sigma$.
Since $\sigma_e$ can be calculated once the quantization density $\eta_e$ is known, the sector-bound expression in the ILCQe law (4) can be written as
$$ Q(e_k(n+1)) = \big(1 + \Delta_{e_k}(n+1)\big)\, e_k(n+1), \tag{6} $$
where $|\Delta_{e_k}(n)| \le \sigma_e$ and $\sigma_e = \frac{1-\eta_e}{1+\eta_e}$.
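For illustration, a minimal sketch of the logarithmic quantizer and its sector-bound property is given below (our own helper, assuming the defaults $s_0 = 2$ and $\eta = 0.85$ used later in Example 1; boundary and overflow cases are ignored).

```python
import numpy as np

def log_quantize(v, s0=2.0, eta=0.85):
    """Logarithmic quantizer: return the level s_i = eta**i * s0 whose sector
    s_i/(1+sigma) < |v| <= s_i/(1-sigma) contains |v|, with the sign of v."""
    if v == 0.0:
        return 0.0
    sigma = (1.0 - eta) / (1.0 + eta)          # sector bound
    mag = abs(v)
    # |v| <= s_i/(1-sigma) gives i <= log(|v|(1-sigma)/s0) / log(eta)
    i = np.floor(np.log(mag * (1.0 - sigma) / s0) / np.log(eta))
    return np.sign(v) * s0 * eta ** i

# sector-bound check: |Q(v) - v| <= sigma*|v|, i.e. Q(v) = (1 + Delta(v)) v
sigma = (1.0 - 0.85) / (1.0 + 0.85)
for v in (0.03, 0.37, 1.0, -2.4, 15.0):
    assert abs(log_quantize(v) - v) <= sigma * abs(v) + 1e-12
```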
Thus, the quantized ILC law (4) becomes
$$ u_{k+1}(n) = u_k(n) + \Gamma\,\big(1 + \Delta_{e_k}(n+1)\big)\, e_k(n+1). \tag{7} $$
For the sake of conciseness, a group of lifted vectors and matrices is denoted as
$$ e_k = y_d - y_k = [e_k(1), e_k(2), \ldots, e_k(N)]^T, \quad \Delta_{e_k} = \mathrm{diag}\big(\Delta_{e_k}(1), \Delta_{e_k}(2), \ldots, \Delta_{e_k}(N)\big), $$
$$ y_d = [y_d(1), y_d(2), \ldots, y_d(N)]^T, \quad \bar{E}_k = (I + \Delta_{e_k})\, e_k. $$
Then, the ILCQe law (7) with data quantization is lifted as
$$ u_{k+1} = u_k + \Gamma\, \bar{E}_k. \tag{8} $$
When the learning gain $\Gamma$ is chosen properly, the output tracking error will be improved. The convergence of the output tracking error is discussed in the following.
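As a pointer to how law (8) runs in practice, here is a minimal iteration loop (our own sketch, reusing the lift_markov_matrix and log_quantize helpers sketched above; the plant, horizon, and trajectory are illustrative placeholders, not the paper's examples).

```python
import numpy as np

# minimal ILCQe loop for law (8): u_{k+1} = u_k + Gamma * Q(e_k)
A = np.array([[0.8, -0.22], [1.0, 0.0]])
B = np.array([[0.5], [1.0]])
C = np.array([[1.0, 0.5]])
N = 50
H = lift_markov_matrix(A, B, C, N)
y_d = np.sin(4 * np.arange(1, N + 1) / 25) + np.sin(2 * np.arange(1, N + 1) / 25)

Gamma = 0.65                        # must satisfy rho_1 < 1 (Theorem 1 below)
u = np.zeros(N)                     # u_1: given arbitrarily
for k in range(30):
    e = y_d - H @ u                 # lifted tracking error e_k
    u = u + Gamma * np.array([log_quantize(v) for v in e])
    print(k, np.linalg.norm(e))     # the error norm contracts per Theorem 1
```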
Theorem 1.
Assume that the ILCQe law (8) is applied to the system (1) and that the matrices $B$ and $C$, as well as the learning gain $\Gamma$, satisfy the condition $\rho_1 = |1 - \Gamma CB| + \sigma_e |\Gamma CB| < 1$, where $\sigma_e = \frac{1-\eta_e}{1+\eta_e}$. Then the system's tracking error converges to zero as $k \to \infty$.
Proof of Theorem 1.
Taking Equations (7) and (8) into account, then
$$ e_{k+1} = y_d - y_{k+1} = y_d - y_k - (y_{k+1} - y_k) = e_k - (y_{k+1} - y_k) = e_k - \Gamma H \bar{E}_k = \big(I - \Gamma H - \Gamma H \Delta_{e_k}\big)\, e_k. \tag{9} $$
Here,
$$ I - \Gamma H - \Gamma H \Delta_{e_k} = \begin{bmatrix} 1 - \Gamma CB(1+\Delta_{e_k}(1)) & 0 & \cdots & 0 \\ -\Gamma CAB(1+\Delta_{e_k}(1)) & 1 - \Gamma CB(1+\Delta_{e_k}(2)) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ -\Gamma CA^{N-1}B(1+\Delta_{e_k}(1)) & -\Gamma CA^{N-2}B(1+\Delta_{e_k}(2)) & \cdots & 1 - \Gamma CB(1+\Delta_{e_k}(N)) \end{bmatrix}. \tag{10} $$
Computing the norm on both sides of Equation (9), we find
$$ \|e_{k+1}\| \le \max_{1 \le n \le N} \big|1 - \Gamma CB(1+\Delta_{e_k}(n))\big| \cdot \|e_k\|. $$
Since $|\Delta_{e_k}(n)| \le \sigma_e$ for all $n$, then
$$ \|e_{k+1}\| \le \big(|1 - \Gamma CB| + \sigma_e |\Gamma CB|\big)\, \|e_k\|. $$
Let
$$ \rho_1 = |1 - \Gamma CB| + \sigma_e |\Gamma CB|. $$
Then
$$ \|e_{k+1}\| \le \rho_1\, \|e_k\|. $$
If $\rho_1 < 1$ is satisfied, we obtain
$$ \lim_{k \to \infty} \|e_{k+1}\| = 0. $$
This completes the proof. □
Remark 1.
From the derivation of Theorem 1, for the given system (1) or (2) with $CB \neq 0$, the convergence condition $\rho_1 < 1$ depends upon the system's input matrix $B$, output matrix $C$, the derivative learning gain $\Gamma$, and the quantization density $\eta_e$. When the quantization density $\eta_e$ equals one, the quantization error bound $\sigma_e$ decreases to zero. This situation is consistent with the system having no data quantization, and the algorithm is the derivative-type ILC. Then we get $\rho_1 = |1 - \Gamma g_1(1)| = |1 - \Gamma CB| < 1$, so $\lim_{k \to \infty} \|e_{k+1}\| = 0$. This coincides with the D-type ILC algorithm.
Remark 2.
Since the matrix (10) is lower triangular, as shown in Theorem 1, its spectral radius satisfies the condition when $0 < \rho_1 = |1 - \Gamma CB| + \sigma_e |\Gamma CB| < 1$. Benefiting from this property of the matrix, the convergence is stated directly in terms of the tracking error vector. This indicates that the tracking error measured by any type of norm is convergent as the iteration number tends to infinity.
Remark 3.
In the above-mentioned convergence analysis, the system's information, the learning gain, and the quantization density are all factors in the convergence condition of the ILC law with quantized signals. Therefore, the tracking behavior is affected by the selection of an appropriate learning gain, but the learning gain of the control mechanism above is given by experience. Thus, the learning mechanism needs to be improved. Additionally, even though the tracking error converges as the number of iterations increases, we still want an effective optimization strategy to accelerate the convergence. As a result, the optimization of the learning gain is worthwhile research for systems with data quantization.
Remark 4.
The paper considers the ILC law (7) and the derivation for the SISO system (3). Benefiting from this framework, by adopting an appropriate super-vector and super-matrix representation, it is not difficult to generalize the results to multiple-input–multiple-output systems; in this circumstance, the derivation would be more complex.

3. Parameter-Optimal Iterative Learning Control with Quantized Information

Define two functions as
$$ f(s) = a_0 + a_1 s + \cdots + a_m s^m, \quad s \in [0, +\infty), $$
$$ g(s) = a_{-i} s^{-i} + \cdots + a_{-1} s^{-1} + a_0 + a_1 s + \cdots + a_j s^j, \quad s \in (0, +\infty), $$
where $i$ and $j$ are both positive integers and $a_{-i}, \ldots, a_{-1}, a_0, a_1, \ldots, a_j$ are constant coefficients, respectively.
Lemma 1.
Let $A$ be a square matrix and $\lambda_A$ an eigenvalue of $A$; that is, there exist a number $\lambda_A$ and a vector $\xi$ such that $A\xi = \lambda_A \xi$. Then $f(A)\xi = f(\lambda_A)\xi$ holds. In particular, if the matrix $A$ is invertible, that is $\lambda_A \neq 0$, then $g(A)\xi = g(\lambda_A)\xi$.
In this context, we denote $\lambda_{\min}(A) = \min \lambda_A$ and $\lambda_{\max}(A) = \max \lambda_A$, respectively.
Lemma 2.
Let $F$ and $S$ be invertible matrices of identical dimensions. Under this assumption, the identity $F - S = F(S^{-1} - F^{-1})S$ holds, since $F(S^{-1} - F^{-1})S = FS^{-1}S - FF^{-1}S = F - S$.
The POGAILC with quantized error (POGAILCQe) updating law is constructed as follows:
$$ u_1(n): \text{given arbitrarily}; $$
$$ u_{k+1}(n) = u_k(n) + \Upsilon_k(n+1)\, Q(e_k(n+1)), \quad n \in S, \ k = 1, 2, \ldots, \tag{11} $$
where the subscript $k$ denotes the repetition index, and the parameter $\Upsilon_k(n+1)$ is an iteration time-variable learning gain with data quantization.
Substituting expression (6) into (11), the POGAILCQe law (11) becomes
$$ u_{k+1}(n) = u_k(n) + \Upsilon_k(n+1)\,\big(1 + \Delta_{e_k}(n+1)\big)\, e_k(n+1). \tag{12} $$
Denote
$$ M_k = \mathrm{diag}\big((1+\Delta_{e_k}(1))e_k(1),\ (1+\Delta_{e_k}(2))e_k(2),\ \ldots,\ (1+\Delta_{e_k}(N))e_k(N)\big), \quad \Upsilon_k = [\Upsilon_k(1), \Upsilon_k(2), \ldots, \Upsilon_k(N)]^T. $$
Thus, the lifted form of the law (12) becomes
$$ u_{k+1} = u_k + M_k \Upsilon_k. \tag{13} $$
Here, $\Upsilon_k$ is termed the iteration time-variable derivative learning-gain vector with a quantizer, and it is the solution to an optimization problem. In order to make the tracking error as small as possible as the iterations proceed, the objective function is defined as
$$ \min_{\Upsilon_k} J(\Upsilon_k) = \|e_{k+1}\|^2 + w_k \|\Upsilon_k\|^2, \tag{14} $$
where $w_k > 0$ is an appropriate weighting factor and $\Upsilon_k$ is the solution to the optimization problem (14). In the formulation (14), $w_k$ is assigned as an iteration-wise tuning factor, which denotes the ratio of relative importance of the compensation cost to the tracking error energy $\|e_{k+1}\|^2$. The minimization problem means that the tracking error energy and the learning effort intensity arguing the derivative iteration time-varying learning-gain vector are considered from a trade-off point of view.
Theorem 2.
There exist a tuning factor $w_k$ ($w_k > 0$) and a unique learning-gain vector $\Upsilon_k$ of POGAILCQe such that, when the law (13) drives the system (3), the output trajectory converges uniformly to the desired trajectory, satisfying $\|e_{k+1}\| \le \rho \|e_k\|$ with $0 < \rho < 1$; the constant $\rho$ is termed the monotone convergence rate.
Proof of Theorem 2.
Substituting Equation (13) into the system (3), we obtain
$$ e_{k+1} = e_k - H(u_{k+1} - u_k) = e_k - H M_k \Upsilon_k. \tag{15} $$
Thus,
$$ \|e_{k+1}\|^2 = e_{k+1}^T e_{k+1} = (e_k - H M_k \Upsilon_k)^T (e_k - H M_k \Upsilon_k) = \|e_k\|^2 - e_k^T H M_k \Upsilon_k - \Upsilon_k^T M_k H^T e_k + \Upsilon_k^T M_k H^T H M_k \Upsilon_k. \tag{16} $$
Substituting equality (16) into (14), we have
$$ J(\Upsilon_k) = \Upsilon_k^T M_k H^T H M_k \Upsilon_k - 2\, e_k^T H M_k \Upsilon_k + e_k^T e_k + w_k \Upsilon_k^T \Upsilon_k. $$
Setting $\partial J(\Upsilon_k)/\partial \Upsilon_k = 0$, we have
$$ \big(w_k I + M_k H^T H M_k\big)\, \Upsilon_k = M_k H^T e_k. \tag{17} $$
Since the matrix $M_k H^T H M_k$ is nonnegative definite, the matrix $w_k I + M_k H^T H M_k$ is positive definite, and thus nonsingular, for $w_k > 0$. Therefore, there exists a unique solution $\Upsilon_k$ of Equation (17), and it is the parameter-optimal iteration time-varying learning-gain vector. The optimal solution is derived as
$$ \Upsilon_k = \big(w_k I + M_k H^T H M_k\big)^{-1} M_k H^T e_k. \tag{18} $$
Be aware that the tracking error vector $e_k$ is related to the matrix $M_k$. This indicates that the matrix $M_k$ is very likely to be singular when some components of $e_k$ are zero. The manipulation of Equation (17) used below is indeterminate in that situation, necessitating some changes for its solution. Under these circumstances, the solution of Equation (17) must be analyzed for the cases where the matrix $M_k$ is nonsingular and singular, respectively.
Case 1.
The matrix $M_k$ is nonsingular.
Given the presumption $CB \neq 0$, which ensures that the Markov parameter matrix $H$ is nonsingular, it is certain that the matrix $M_k H^T H M_k$ is positive definite.
Substituting Equation (18) into (15), we obtain
$$ e_{k+1} = e_k - H M_k \big(w_k I + M_k H^T H M_k\big)^{-1} M_k H^T e_k = \Big(I - \big(I + w_k H^{-T} M_k^{-1} M_k^{-1} H^{-1}\big)^{-1}\Big)\, e_k. \tag{19} $$
Here, $H^{-T} = (H^T)^{-1}$.
Computing the inner product on both sides of Equation (19) results in
$$ \|e_{k+1}\|^2 = e_k^T \Big(I - \big(I + w_k H^{-T} M_k^{-1} M_k^{-1} H^{-1}\big)^{-1}\Big)^2 e_k. \tag{20} $$
Based on matrix theory, the quadratic form in (20) is bounded as
$$ e_k^T \Big(I - \big(I + w_k H^{-T} M_k^{-1} M_k^{-1} H^{-1}\big)^{-1}\Big)^2 e_k \le \lambda_{\max}\Big(\Big(I - \big(I + w_k H^{-T} M_k^{-1} M_k^{-1} H^{-1}\big)^{-1}\Big)^2\Big)\, \|e_k\|^2. \tag{21} $$
Applying Lemma 2 to the two invertible matrices $I$ and $I + w_k H^{-T} M_k^{-1} M_k^{-1} H^{-1}$ of identical dimensions, we have
$$ I - \big(I + w_k H^{-T} M_k^{-1} M_k^{-1} H^{-1}\big)^{-1} = w_k H^{-T} M_k^{-1} M_k^{-1} H^{-1} \big(I + w_k H^{-T} M_k^{-1} M_k^{-1} H^{-1}\big)^{-1}. \tag{22} $$
Based on Lemma 1, Equation (22) yields
$$ \lambda_{\max}\Big(\Big(I - \big(I + w_k H^{-T} M_k^{-1} M_k^{-1} H^{-1}\big)^{-1}\Big)^2\Big) = \left( \frac{w_k \lambda_{\max}\big(H^{-T} M_k^{-1} M_k^{-1} H^{-1}\big)}{1 + w_k \lambda_{\max}\big(H^{-T} M_k^{-1} M_k^{-1} H^{-1}\big)} \right)^2. \tag{23} $$
Let
$$ \tilde{\rho} = \left( \frac{w_k \lambda_{\max}\big(H^{-T} M_k^{-1} M_k^{-1} H^{-1}\big)}{1 + w_k \lambda_{\max}\big(H^{-T} M_k^{-1} M_k^{-1} H^{-1}\big)} \right)^2. \tag{24} $$
Under the assumption $CB \neq 0$, the Markov parameter matrix $H$ is nonsingular, so the matrix $H^{-T} M_k^{-1} M_k^{-1} H^{-1}$ is positive definite. Therefore, $\lambda_{\max}(H^{-T} M_k^{-1} M_k^{-1} H^{-1}) > 0$, and Equation (24) indicates that $0 < \tilde{\rho} < 1$. Expressions (20)–(23) then deliver
$$ \|e_{k+1}\|^2 \le \tilde{\rho}\, \|e_k\|^2, \tag{25} $$
and the tracking error monotonically converges to zero.
Case 2.
The matrix $M_k$ is singular.
In this case, $\big(I + w_k H^{-T} M_k^{-1} M_k^{-1} H^{-1}\big)^{-1}$ cannot be obtained from the expression $(w_k I + M_k H^T H M_k)\Upsilon_k = M_k H^T e_k$ in (17). The following row- and column-exchanging transformations can then be used to evaluate the monotonicity of the tracking error energy in an indirect way.
Suppose that $\mathrm{Rank}(M_k) = r_k < N$, with $e_k(i_q) \neq 0$ for $q = 1, \ldots, r_k$ and $e_k(l_m) = 0$ for $m = 1, 2, \ldots, N - r_k$. Here,
$$ i_1 < i_2 < \cdots < i_{r_k}, \quad l_1 < l_2 < \cdots < l_{N-r_k}. $$
Denote
$$ R_k = \big[\, \varepsilon_{i_1} \,|\, \varepsilon_{i_2} \,|\, \cdots \,|\, \varepsilon_{i_{r_k}} \,|\, \varepsilon_{l_1} \,|\, \varepsilon_{l_2} \,|\, \cdots \,|\, \varepsilon_{l_{N-r_k}} \big], $$
where $\varepsilon_i$, for $i = i_1, i_2, \ldots, i_{r_k}, l_1, \ldots, l_{N-r_k}$, is the unit vector whose $i$-th element is $1$ and whose other elements are $0$. Obviously, the matrix $R_k$ is orthogonal, so that $R_k^{-1} = R_k^T$.
If a matrix is multiplied on the right by the matrix $R_k$, then its $i_1$-th up to $i_{r_k}$-th and $l_1$-th up to $l_{N-r_k}$-th columns are exchanged into the first up to the $N$-th columns, based on the structure of the matrix $R_k$. Similarly, the $i_1$-th up to $i_{r_k}$-th and $l_1$-th up to $l_{N-r_k}$-th rows of a matrix are exchanged into the first up to the $N$-th rows by pre-multiplying $R_k^T$. Thus, the matrices $R_k$ and $R_k^T$ are called the column-exchanging and row-exchanging transformation matrices [42]. We have
$$ R_k^T H R_k = \begin{bmatrix} CB & 0 & \cdots & 0 & * & \cdots & * \\ CA^{i_2-i_1}B & CB & \cdots & 0 & * & \cdots & * \\ \vdots & \vdots & \ddots & \vdots & \vdots & & \vdots \\ CA^{i_{r_k}-i_1}B & CA^{i_{r_k}-i_2}B & \cdots & CB & * & \cdots & * \\ * & * & \cdots & * & * & \cdots & * \\ \vdots & \vdots & & \vdots & \vdots & & \vdots \\ * & * & \cdots & * & * & \cdots & * \end{bmatrix}, $$
where the entries "$*$" are some elements of the Markov parameter matrix $H$.
Let
$$ R_k^T H R_k = \begin{bmatrix} \tilde{H}_k & \bar{H}_k \\ \hat{H}_k & H_k \end{bmatrix}. \tag{26} $$
Here,
$$ \tilde{H}_k = \begin{bmatrix} CB & 0 & \cdots & 0 \\ CA^{i_2-i_1}B & CB & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ CA^{i_{r_k}-i_1}B & CA^{i_{r_k}-i_2}B & \cdots & CB \end{bmatrix} \tag{27} $$
is an $r_k \times r_k$-dimensional lower triangular matrix, and $\hat{H}_k$, $\bar{H}_k$, $H_k$ are matrices with appropriate dimensions.
Specifically,
$$ R_k^T e_k = \begin{bmatrix} \tilde{e}_k \\ \hat{e}_k \end{bmatrix}, \tag{28} $$
$$ R_k^T \Upsilon_k = \begin{bmatrix} \tilde{\Upsilon}_k \\ \hat{\Upsilon}_k \end{bmatrix}, \tag{29} $$
$$ R_k^T M_k R_k = \begin{bmatrix} \tilde{M}_k & 0 \\ 0 & \hat{M}_k \end{bmatrix}. \tag{30} $$
Here,
$$ \tilde{M}_k = \mathrm{diag}\big((1+\Delta_{e_k}(i_1))e_k(i_1),\ (1+\Delta_{e_k}(i_2))e_k(i_2),\ \ldots,\ (1+\Delta_{e_k}(i_{r_k}))e_k(i_{r_k})\big), \tag{31} $$
$$ \hat{M}_k = \mathrm{diag}\big((1+\Delta_{e_k}(l_1))e_k(l_1),\ (1+\Delta_{e_k}(l_2))e_k(l_2),\ \ldots,\ (1+\Delta_{e_k}(l_{N-r_k}))e_k(l_{N-r_k})\big), \tag{32} $$
$$ \tilde{\Upsilon}_k = [\Upsilon_k(i_1), \Upsilon_k(i_2), \ldots, \Upsilon_k(i_{r_k})]^T, \tag{33} $$
$$ \hat{\Upsilon}_k = [\Upsilon_k(l_1), \Upsilon_k(l_2), \ldots, \Upsilon_k(l_{N-r_k})]^T. \tag{34} $$
Then $[e_k(l_1), e_k(l_2), \ldots, e_k(l_{N-r_k})]^T = 0$, so $\hat{M}_k = 0$ and $\tilde{M}_k$ is invertible.
Pre-multiplying Equation (17) by $R_k^T$, we get
$$ \Big(w_k I + R_k^T M_k R_k \big(R_k^T H R_k\big)^T R_k^T H R_k\, R_k^T M_k R_k\Big)\, R_k^T \Upsilon_k = R_k^T M_k R_k \big(R_k^T H R_k\big)^T \big(R_k^T e_k\big). \tag{35} $$
Substituting denotations (26), (28), (29) and (30) into the above equality (35) gives
$$ \left( w_k I + \begin{bmatrix} \tilde{M}_k & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \tilde{H}_k^T & \hat{H}_k^T \\ \bar{H}_k^T & H_k^T \end{bmatrix} \begin{bmatrix} \tilde{H}_k & \bar{H}_k \\ \hat{H}_k & H_k \end{bmatrix} \begin{bmatrix} \tilde{M}_k & 0 \\ 0 & 0 \end{bmatrix} \right) \begin{bmatrix} \tilde{\Upsilon}_k \\ \hat{\Upsilon}_k \end{bmatrix} = \begin{bmatrix} \tilde{M}_k & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \tilde{H}_k^T & \hat{H}_k^T \\ \bar{H}_k^T & H_k^T \end{bmatrix} \begin{bmatrix} \tilde{e}_k \\ \hat{e}_k \end{bmatrix}. \tag{36} $$
Equation (36) is further collated as
$$ \begin{bmatrix} w_k I_{r_k} + \tilde{M}_k \big(\tilde{H}_k^T \tilde{H}_k + \hat{H}_k^T \hat{H}_k\big) \tilde{M}_k & 0 \\ 0 & w_k I_{N-r_k} \end{bmatrix} \begin{bmatrix} \tilde{\Upsilon}_k \\ \hat{\Upsilon}_k \end{bmatrix} = \begin{bmatrix} \tilde{M}_k \tilde{H}_k^T & \tilde{M}_k \hat{H}_k^T \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \tilde{e}_k \\ \hat{e}_k \end{bmatrix}. \tag{37} $$
Therefore,
$$ \Big(w_k I_{r_k} + \tilde{M}_k \big(\tilde{H}_k^T \tilde{H}_k + \hat{H}_k^T \hat{H}_k\big) \tilde{M}_k\Big)\, \tilde{\Upsilon}_k = \tilde{M}_k \tilde{H}_k^T \tilde{e}_k + \tilde{M}_k \hat{H}_k^T \hat{e}_k, \tag{38} $$
$$ w_k I_{N-r_k}\, \hat{\Upsilon}_k = 0. \tag{39} $$
As $w_k \neq 0$, Equation (39) results in
$$ \hat{\Upsilon}_k = 0. \tag{40} $$
Because the matrix $w_k I_{r_k} + \tilde{M}_k (\tilde{H}_k^T \tilde{H}_k + \hat{H}_k^T \hat{H}_k) \tilde{M}_k$ is positive definite, and thus invertible, Equation (38) has the unique solution
$$ \tilde{\Upsilon}_k = \Big(w_k I_{r_k} + \tilde{M}_k \big(\tilde{H}_k^T \tilde{H}_k + \hat{H}_k^T \hat{H}_k\big) \tilde{M}_k\Big)^{-1} \big(\tilde{M}_k \tilde{H}_k^T \tilde{e}_k + \tilde{M}_k \hat{H}_k^T \hat{e}_k\big) = \tilde{L}_k \big(\tilde{M}_k \tilde{H}_k^T \tilde{e}_k + \tilde{M}_k \hat{H}_k^T \hat{e}_k\big), \tag{41} $$
where
$$ \tilde{L}_k = \Big(w_k I_{r_k} + \tilde{M}_k \big(\tilde{H}_k^T \tilde{H}_k + \hat{H}_k^T \hat{H}_k\big) \tilde{M}_k\Big)^{-1}. \tag{42} $$
For simplicity, denote
$$ \tilde{\Pi}_k = \tilde{M}_k \big(\tilde{H}_k^T \tilde{H}_k + \hat{H}_k^T \hat{H}_k\big) \tilde{M}_k. \tag{43} $$
Then,
$$ \tilde{L}_k = \big(w_k I_{r_k} + \tilde{\Pi}_k\big)^{-1}. \tag{44} $$
Therefore, Equations (29), (40), and (41) lead to
$$ \Upsilon_k = R_k \begin{bmatrix} \tilde{\Upsilon}_k \\ \hat{\Upsilon}_k \end{bmatrix} = R_k \begin{bmatrix} \tilde{\Upsilon}_k \\ 0 \end{bmatrix} = R_k \begin{bmatrix} \tilde{L}_k \tilde{M}_k \tilde{H}_k^T & \tilde{L}_k \tilde{M}_k \hat{H}_k^T \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \tilde{e}_k \\ \hat{e}_k \end{bmatrix}. $$
Pre-multiplying Equation (15) by $R_k^T$ results in
$$ R_k^T e_{k+1} = \begin{bmatrix} \tilde{e}_{k+1} \\ \hat{e}_{k+1} \end{bmatrix} = R_k^T e_k - R_k^T H R_k\, R_k^T M_k R_k\, R_k^T \Upsilon_k = \begin{bmatrix} \tilde{e}_k \\ \hat{e}_k \end{bmatrix} - \begin{bmatrix} \tilde{H}_k & \bar{H}_k \\ \hat{H}_k & H_k \end{bmatrix} \begin{bmatrix} \tilde{M}_k & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \tilde{L}_k \tilde{M}_k \tilde{H}_k^T & \tilde{L}_k \tilde{M}_k \hat{H}_k^T \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \tilde{e}_k \\ \hat{e}_k \end{bmatrix} = \begin{bmatrix} I - \tilde{H}_k \tilde{M}_k \tilde{L}_k \tilde{M}_k \tilde{H}_k^T & -\tilde{H}_k \tilde{M}_k \tilde{L}_k \tilde{M}_k \hat{H}_k^T \\ -\hat{H}_k \tilde{M}_k \tilde{L}_k \tilde{M}_k \tilde{H}_k^T & I - \hat{H}_k \tilde{M}_k \tilde{L}_k \tilde{M}_k \hat{H}_k^T \end{bmatrix} \begin{bmatrix} \tilde{e}_k \\ \hat{e}_k \end{bmatrix}. \tag{45} $$
For simplicity, denote
$$ \tilde{Q}_k = \tilde{H}_k \tilde{M}_k \tilde{L}_k \tilde{M}_k \tilde{H}_k^T, \tag{46} $$
$$ \hat{Q}_k = \hat{H}_k \tilde{M}_k \tilde{L}_k \tilde{M}_k \hat{H}_k^T, \tag{47} $$
$$ \tilde{\Omega}_k = \tilde{H}_k \tilde{M}_k \tilde{L}_k \tilde{M}_k \hat{H}_k^T, \tag{48} $$
$$ \hat{\Omega}_k = \hat{H}_k \tilde{M}_k \tilde{L}_k \tilde{M}_k \tilde{H}_k^T. \tag{49} $$
From denotations (36), (43) and (45), expression (46) takes the form
$$ \tilde{Q}_k = \tilde{H}_k \tilde{M}_k \Big(w_k I_{r_k} + \tilde{M}_k \big(\tilde{H}_k^T \tilde{H}_k + \hat{H}_k^T \hat{H}_k\big) \tilde{M}_k\Big)^{-1} \tilde{M}_k \tilde{H}_k^T = \Big(I_{r_k} + w_k \tilde{H}_k^{-T} \tilde{M}_k^{-1} \tilde{M}_k^{-1} \tilde{H}_k^{-1} + \tilde{H}_k^{-T} \hat{H}_k^T \hat{H}_k \tilde{H}_k^{-1}\Big)^{-1} = \big(I_{r_k} + w_k \tilde{F}_k + \Theta_k\big)^{-1}. \tag{50} $$
Here,
$$ \tilde{F}_k = \tilde{H}_k^{-T} \tilde{M}_k^{-1} \tilde{M}_k^{-1} \tilde{H}_k^{-1}, \tag{51} $$
$$ \Theta_k = \tilde{H}_k^{-T} \hat{H}_k^T \hat{H}_k \tilde{H}_k^{-1}. \tag{52} $$
Then,
$$ \hat{Q}_k = \hat{H}_k \tilde{H}_k^{-1} \tilde{Q}_k \tilde{H}_k^{-T} \hat{H}_k^T, \tag{53} $$
$$ \tilde{\Omega}_k = \tilde{Q}_k \tilde{H}_k^{-T} \hat{H}_k^T, \tag{54} $$
$$ \hat{\Omega}_k = \hat{H}_k \tilde{H}_k^{-1} \tilde{Q}_k. \tag{55} $$
Substituting Equations (51)–(55) into (45), it follows that
$$ R_k^T e_{k+1} = \begin{bmatrix} I_{r_k} - \tilde{Q}_k & -\tilde{Q}_k \tilde{H}_k^{-T} \hat{H}_k^T \\ -\hat{H}_k \tilde{H}_k^{-1} \tilde{Q}_k & I_{N-r_k} - \hat{H}_k \tilde{H}_k^{-1} \tilde{Q}_k \tilde{H}_k^{-T} \hat{H}_k^T \end{bmatrix} \begin{bmatrix} \tilde{e}_k \\ \hat{e}_k \end{bmatrix} = \begin{bmatrix} I_{r_k} - \tilde{Q}_k & -\tilde{\Omega}_k \\ -\hat{\Omega}_k & I_{N-r_k} - \hat{Q}_k \end{bmatrix} \begin{bmatrix} \tilde{e}_k \\ \hat{e}_k \end{bmatrix}. \tag{56} $$
Taking the 2-norm of Equation (56), we have
$$ \begin{aligned} \|R_k^T e_{k+1}\|^2 = \|e_{k+1}\|^2 &= \tilde{e}_{k+1}^T \tilde{e}_{k+1} + \hat{e}_{k+1}^T \hat{e}_{k+1} \\ &= \tilde{e}_k^T (I_{r_k} - \tilde{Q}_k)^T (I_{r_k} - \tilde{Q}_k) \tilde{e}_k + \tilde{e}_k^T \hat{\Omega}_k^T \hat{\Omega}_k \tilde{e}_k + \hat{e}_k^T (I_{N-r_k} - \hat{Q}_k)^T (I_{N-r_k} - \hat{Q}_k) \hat{e}_k + \hat{e}_k^T \tilde{\Omega}_k^T \tilde{\Omega}_k \hat{e}_k \\ &\quad - \tilde{e}_k^T (I_{r_k} - \tilde{Q}_k)^T \tilde{\Omega}_k \hat{e}_k - \hat{e}_k^T \tilde{\Omega}_k^T (I_{r_k} - \tilde{Q}_k) \tilde{e}_k - \hat{e}_k^T (I_{N-r_k} - \hat{Q}_k)^T \hat{\Omega}_k \tilde{e}_k - \tilde{e}_k^T \hat{\Omega}_k^T (I_{N-r_k} - \hat{Q}_k) \hat{e}_k. \end{aligned} \tag{57} $$
Based on denotations (50) and (51), the first groups of terms on the right side of expression (57) satisfy
$$ \tilde{e}_k^T (I_{r_k} - \tilde{Q}_k)^T (I_{r_k} - \tilde{Q}_k) \tilde{e}_k + \tilde{e}_k^T \hat{\Omega}_k^T \hat{\Omega}_k \tilde{e}_k = \tilde{e}_k^T (I_{r_k} - \tilde{Q}_k) \tilde{e}_k - w_k\, \tilde{e}_k^T \tilde{Q}_k \tilde{F}_k \tilde{Q}_k \tilde{e}_k, \tag{58} $$
$$ \hat{e}_k^T (I_{N-r_k} - \hat{Q}_k)^T (I_{N-r_k} - \hat{Q}_k) \hat{e}_k + \hat{e}_k^T \tilde{\Omega}_k^T \tilde{\Omega}_k \hat{e}_k = \hat{e}_k^T (I_{N-r_k} - \hat{Q}_k) \hat{e}_k - w_k\, \hat{e}_k^T \hat{Q}_k \tilde{F}_k \hat{Q}_k \hat{e}_k. \tag{59} $$
Substituting Equations (58) and (59) into (57), and noting that the cross terms vanish since $\hat{e}_k = 0$, we have
$$ \|e_{k+1}\|^2 = \tilde{e}_k^T (I_{r_k} - \tilde{Q}_k) \tilde{e}_k - w_k\, \tilde{e}_k^T \tilde{Q}_k \tilde{F}_k \tilde{Q}_k \tilde{e}_k + \hat{e}_k^T (I_{N-r_k} - \hat{Q}_k) \hat{e}_k - w_k\, \hat{e}_k^T \hat{Q}_k \tilde{F}_k \hat{Q}_k \hat{e}_k. \tag{60} $$
According to matrix theory, the equality (60) is estimated as
$$ \|e_{k+1}\|^2 \le \tilde{\theta} \|\tilde{e}_k\|^2 - w_k \tilde{\varepsilon} \|\tilde{e}_k\|^2 + \hat{\theta} \|\hat{e}_k\|^2 - w_k \hat{\varepsilon} \|\hat{e}_k\|^2 \le \tilde{\theta} \|\tilde{e}_k\|^2 + \hat{\theta} \|\hat{e}_k\|^2, \tag{61} $$
where
$$ \tilde{\theta} = \lambda_{\max}\big(I_{r_k} - \tilde{Q}_k\big), \tag{62} $$
$$ \tilde{\varepsilon} = \lambda_{\min}\big(\tilde{Q}_k \tilde{F}_k \tilde{Q}_k\big), \tag{63} $$
$$ \hat{\theta} = \lambda_{\max}\big(I_{N-r_k} - \hat{Q}_k\big), \tag{64} $$
$$ \hat{\varepsilon} = \lambda_{\min}\big(\hat{Q}_k \tilde{F}_k \hat{Q}_k\big). \tag{65} $$
Substituting (50)–(52), it follows that
$$ I_{r_k} - \tilde{Q}_k = I_{r_k} - \big(I_{r_k} + w_k \tilde{F}_k + \Theta_k\big)^{-1} = \big(w_k \tilde{F}_k + \Theta_k\big)\big(I_{r_k} + w_k \tilde{F}_k + \Theta_k\big)^{-1}. \tag{66} $$
Substituting (52) and (53), it follows that
$$ I_{N-r_k} - \hat{Q}_k = I_{N-r_k} - \big(I_{N-r_k} + w_k \hat{F}_k + \Theta_k^T\big)^{-1} = \big(w_k \hat{F}_k + \Theta_k^T\big)\big(I_{N-r_k} + w_k \hat{F}_k + \Theta_k^T\big)^{-1}, \tag{67} $$
where $\hat{F}_k$ denotes the quantity analogous to $\tilde{F}_k$ for the block $\hat{H}_k$.
On the basis of Lemmas 1 and 2, we have
$$ \tilde{\theta} = \lambda_{\max}\Big(\big(w_k \tilde{F}_k + \Theta_k\big)\big(I_{r_k} + w_k \tilde{F}_k + \Theta_k\big)^{-1}\Big) = \frac{\lambda_{\max}\big(w_k \tilde{F}_k + \Theta_k\big)}{1 + \lambda_{\max}\big(w_k \tilde{F}_k + \Theta_k\big)}, \tag{68} $$
$$ \hat{\theta} = \lambda_{\max}\Big(\big(w_k \hat{F}_k + \Theta_k^T\big)\big(I_{N-r_k} + w_k \hat{F}_k + \Theta_k^T\big)^{-1}\Big) = \frac{\lambda_{\max}\big(w_k \hat{F}_k + \Theta_k^T\big)}{1 + \lambda_{\max}\big(w_k \hat{F}_k + \Theta_k^T\big)}. \tag{69} $$
Let
$$ \hat{\rho} = \tilde{\theta} + \hat{\theta}. \tag{70} $$
Because $\hat{e}_k = 0$, the term $\hat{\theta}\|\hat{e}_k\|^2$ vanishes, so that effectively $\hat{\rho} = \tilde{\theta}$. Noticing that $0 < \hat{\rho} < 1$, we have $\|e_{k+1}\|^2 \le \hat{\rho}\, \|e_k\|^2$.
Let $\rho = \max\{\tilde{\rho}, \hat{\rho}\}$; then $0 < \rho < 1$. From (25) and (61), we conclude that
$$ \|e_{k+1}\|^2 \le \rho\, \|e_k\|^2. \tag{71} $$
This completes the proof. □
Remark 5.
According to the expressions for the convergence rate in (24) and (69), a smaller convergence rate $\rho$ is obtained with a smaller tuning factor $w_k$, and the POGAILCQe law then has a faster convergence rate. In the performance index (14), choosing a smaller tuning factor also results in a lower compensation cost. Therefore, the choice of tuning factor has a uniform influence on the tracking performance and the objective function.
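As an implementation sketch of the POGAILCQe update, the snippet below solves (17) directly for the gain vector and applies law (13); since $w_k I + M_k H^T H M_k$ is positive definite for any $w_k > 0$, the same linear solve covers both the nonsingular and the singular $M_k$ cases analyzed in Theorem 2. This is our own illustrative code, reusing the log_quantize helper from Section 2; note that (18) involves the error vector $e_k$, so a networked implementation that only receives quantized signals would have to substitute them, which is an assumption beyond the analysis above.

```python
import numpy as np

def pogailcqe_step(u, e, H, w):
    """One POGAILCQe update: u_{k+1} = u_k + M_k * Upsilon_k (law (13)),
    with M_k = diag(Q(e_k(1)), ..., Q(e_k(N))) built from the quantized
    error (6) and Upsilon_k the unique solution of (17)/(18)."""
    q = np.array([log_quantize(v) for v in e])   # (1 + Delta) e_k, Eq. (6)
    M = np.diag(q)
    G = w * np.eye(len(u)) + M @ H.T @ H @ M     # positive definite for w > 0
    upsilon = np.linalg.solve(G, M @ H.T @ e)    # optimal gain vector, Eq. (18)
    return u + M @ upsilon

# iteration-wise tuning factor, e.g. w_k = 0.01 * 0.5**k as in Section 4:
# u = pogailcqe_step(u, y_d - H @ u, H, 0.01 * 0.5 ** k)
```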

4. Numerical Example

Example 1. Consider the SISO linear discrete time-invariant (LDTI) system from [34,35]:
$$ x_k(n+1) = \begin{bmatrix} 0.8 & -0.22 \\ 1 & 0 \end{bmatrix} x_k(n) + \begin{bmatrix} 0.5 \\ 1 \end{bmatrix} u_k(n), \quad y_k(n+1) = \begin{bmatrix} 1 & 0.5 \end{bmatrix} x_k(n+1), \quad n \in S, \tag{72} $$
where the operation time sampling is set as $n \in \{0, 1, \ldots, 100\}$ and the initial state as $[x_1(0), x_2(0)]^T = 0$. The desired trajectory is chosen as $y_d(n) = \sin(4n/25) + \sin(2n/25)$, $n \in [0, 100]$.
The parameters of the quantizer are given as $s_0 = 2$ and $\eta_e = 0.85$, so $\sigma_e = 0.08$. Setting the learning gain $\Gamma$ as a concrete number, numerical calculation yields $\rho_1 = |1 - \Gamma CB| + \sigma_e|\Gamma CB| = 0.402 < 1$ when $\Gamma = 0.65$ and $\rho_1 = 0.218 < 1$ when $\Gamma = 0.85$, which satisfies the convergence condition of the ILCQe law (4) in Theorem 1. The weighting factor of the POGAILCQe law in Theorem 2 is selected as $w_k = 0.01 \times 0.5^k$ and $w_k = 0.05 \times 0.5^k$.
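The quoted values of $\rho_1$ can be checked with a few lines (our own verification, using $\sigma_e = 0.08$ as rounded in the text; here $CB = 1$ for $C = [1, 0.5]$ and $B = [0.5, 1]^T$).

```python
sigma_e = 0.08                    # (1 - 0.85) / (1 + 0.85) ~ 0.081, rounded
CB = 1.0 * 0.5 + 0.5 * 1.0        # C = [1, 0.5], B = [0.5, 1]^T  ->  CB = 1
for gamma in (0.65, 0.85):
    rho1 = abs(1.0 - gamma * CB) + sigma_e * abs(gamma * CB)
    print(f"Gamma = {gamma}: rho_1 = {rho1:.3f}")   # 0.402 and 0.218
```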
Figure 1 exhibits the tracking behavior comparison of the ILCQe law and the POGAILCQe law at the second iteration. The solid curve represents the desired trajectory, whereas the dashed and dash-dotted curves stand for the outputs with $\Gamma = 0.65$ and $\Gamma = 0.85$, respectively. It is observed that the tracking behavior can be improved by fine-tuning the value of $\Gamma$ in the ILCQe law. However, the POGAILCQe law still performs better than the ILCQe law for the same number of iterations.
The tracking error convergence curves are shown in Figure 2, with the tracking error of the POGAILCQe law plotted as the solid curve and that of the ILCQe law as the dotted curve. The tracking error of the POGAILCQe law with $w_k = 0.001 \times 0.1^k$ converges faster than that of the ILCQe law. Figure 3 displays comparable tracking errors, where the dashed, dotted, and solid curves are the tracking error tendencies of POGAILCQe with the tuning factor sequence $w_k = 0.01 \times 0.5^k$ for the different quantization densities $\eta_e = 0.35$, $\eta_e = 0.65$, and $\eta_e = 0.85$, respectively. From Figure 1, Figure 2 and Figure 3, it is observed that an optimal algorithm with an appropriate weighting factor can ensure zero-error convergence of the ILC for discrete-time systems with error data quantization.
We simulate the POGAILCQe method with various tuning factor sequences under the same quantization density to discuss the impact of the tuning factors on the convergence speed. In Figure 4, the parameters of the quantizer are given as $s_0 = 2$ and $\eta_e = 0.65$, so $\sigma_e = 0.21$. The dashed tracking error curve for the smaller tuning factor sequence $w_k = 0.01 \times 0.5^k$ achieves faster convergence than that for the larger tuning factor sequence $w_k = 0.05 \times 0.5^k$. Therefore, the simulation demonstrates that the convergence performance can be improved by choosing a smaller tuning factor sequence for the POGAILCQe law. This result is consistent with the theoretical analysis of Theorem 2.
Example 2. A joint-motion position control system of a robot manipulator is taken into consideration to demonstrate the validity and efficacy of the proposed POGAILCQe; the engineering example is derived from reference [42]. Figure 5 displays the structural diagram of the control system in the s-domain.
In Figure 5, $K_P$, $K_I$ and $K_D$ are the proportional, integral, and derivative gains of the conventional proportional–integral–derivative (PID) controller $G_c(s) = K_P + \frac{K_I}{s} + K_D s$, respectively, and $G_p(s) = \frac{1}{K_1 s^2 + K_2 s + K_3}$ is the identified robot manipulator model; here, $K_1$, $K_2$, and $K_3$ are the robot manipulator's parameters. The transfer function of the system shown in Figure 5 is calculated to be
$$ G(s) = \frac{Y(s)}{U(s)} = \frac{K_D K_1 s^2 + K_P K_1 s + K_I}{s^3 + K_1(K_2 + K_D)s^2 + K_g(K_3 + K_P)s + K_I K_g}. \tag{73} $$
The transfer function (73) of the PID-controller-tuned closed-loop system in Figure 5 is converted into a time-domain canonical form described by system (1).
Select the parameters $K_P = 0.2$, $K_I = 2$, $K_D = 0.6$, $K_1 = 10$, $K_2 = 0.4$, and $K_3 = 0.8$, correspondingly. The desired motion trajectory of the robot manipulator's elbow joint is set to $y_d(n+1) = 0.0012(n+1)^2\big(1 - 0.01(n+1)\big)$, $n \in D$. The sampling set is selected as $D = \{0, 1, \ldots, 99\}$. By setting the sampling step to $\Delta t = 0.05$, the state, input, and output matrices become
$$ A = \begin{bmatrix} 1 & 0.05 & 0 \\ 0 & 1 & 0.05 \\ -1 & -0.5 & 0.5 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 0 \\ 0.05 \end{bmatrix}, \quad C = \begin{bmatrix} 10 & 2 & 6 \end{bmatrix}. \tag{74} $$
So the relative degree of the system is unity because $CB = 0.3 \neq 0$.
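The matrices in (74) are consistent with a forward-Euler discretization of the closed-loop dynamics; the sketch below reconstructs them under the assumption that the continuous-time denominator is $s^3 + 10s^2 + 10s + 20$ in controllable canonical form, which is our reading of (73) with the selected gains rather than code from the paper.

```python
import numpy as np

# assumed continuous-time controllable canonical form for (73) with
# K_P=0.2, K_I=2, K_D=0.6, K_1=10, K_2=0.4, K_3=0.8
Ac = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0],
               [-20.0, -10.0, -10.0]])
Bc = np.array([[0.0], [0.0], [1.0]])
C = np.array([[10.0, 2.0, 6.0]])

dt = 0.05
A = np.eye(3) + dt * Ac    # forward Euler -> matches A in (74)
B = dt * Bc                # -> [0, 0, 0.05]^T
print(A)
print((C @ B).item())      # CB = 0.3 != 0, so the relative degree is unity
```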
In Figure 6, the solid curve is the desired trajectory; the dashed and dash-dotted ones are the outputs of the ILCQe law and the POGAILCQe law with $w_k = 0.01 \times 0.5^k$ at the third iteration, respectively. According to the tracking behavior comparison, we obtain the same result: the POGAILCQe law still performs better than the ILCQe law. The tracking error convergence curves are shown in Figure 7, with the tracking error of the POGAILCQe law plotted as the solid curve and that of the ILCQe law as the dotted curve. Figure 7 shows the same result: the tracking error of the POGAILCQe law with $w_k = 0.01 \times 0.5^k$ converges faster than that of the ILCQe law. Thus, the comparison of the simulations in Figure 6 and Figure 7 further supports the validity and engineering practicability of the proposed scheme.

5. Conclusions

The paper establishes the POGAILCQe scheme for a class of linear discrete time-invariant systems with quantized error signals using the lifted vector–matrix technique. To achieve optimal performance, the iteration time-variable learning-gain vector is argued, and the optimization problem is defined as additive quadratic forms of the tracking error and the learning-gain vector. The tuning factor is introduced as a straightforward mechanism for balancing the relative contributions of the tracking error and the learning effort intensity in the optimization function. The convergence of the POGAILCQe has been analyzed using the Markov parameters available in the system and the parameters of the quantizer. It is shown that the tracking error strictly monotonically converges to zero under the assured convergence conditions. Additionally, the rate of convergence can be adjusted by scaling the weighting factor when the quantization is fixed. The POGAILCQe algorithm with data quantization can improve the tracking performance, and the negative effects of the logarithmic quantizer's data quantization can be mitigated via the algorithm. Numerical simulations have demonstrated the effectiveness of the POGAILCQe scheme. In future works, we shall study the robustness of the law against systematic parameter uncertainty for systems with quantized signals.

Author Contributions

Data curation, Y.L.; formal analysis, Y.L. and X.R.; writing—original draft preparation, Y.L.; writing—review and editing, Y.L.; supervision, X.R. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Natural Science Foundation of Ningxia, China (2022AAC03269), and the Special Fund of Basic Scientific Research Funds for Colleges and Universities of North Minzu University (2021KYQD13).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Arimoto, S.; Kawamura, S.; Miyazaki, F. Bettering operation of robots by learning. J. Robot. Syst. 1984, 1, 123–140.
2. Liu, J.; Ruan, X.E.; Zheng, Y.S.; Yi, Y.; Wang, C. Learning-ability of discrete-time iterative learning control systems with feedforward. SIAM J. Contr. Optim. 2023, 61, 543–559.
3. Tao, H.F.; Zhou, L.H.; Hao, S.; Paszke, W.; Yang, H.Z. Output feedback based PD-type robust iterative learning control for uncertain spatially interconnected systems. Int. J. Robust Nonlinear Control 2021, 31, 5962–5983.
4. Memon, F.; Shao, C. Robust optimal PID-type ILC for linear batch process. Int. J. Control Autom. Syst. 2020, 19, 777–787.
5. Saab, S.A. A stochastic iterative learning control algorithm with application to an induction motor. Int. J. Control 2004, 77, 144–163.
6. Chen, Y.Q.; Moore, K.L. A practical iterative learning path-following control of an omni-directional vehicle. Asian J. Control 2002, 4, 90–98.
7. Wang, X.; Luo, Y.; Qin, B.; Guo, L. Power dynamic allocation strategy for urban rail hybrid energy storage system based on iterative learning control. Energy 2022, 245, 123263.
8. Saab, S.S.; Shen, D.; Orabi, M.; Kors, D.; Jaafar, R.H. Iterative learning control: Practical implementation and automation. IEEE Trans. Ind. Electron. 2021, 69, 1858–1866.
9. Adlakha, R.; Zheng, M.H. A two-step optimization-based iterative learning control for quadrotor unmanned aerial vehicles. J. Dyn. Syst. Meas. Control 2021, 143, 8.
10. Shen, D.; Wang, Y. Survey on stochastic iterative learning control. J. Process Control 2014, 24, 64–77.
11. Shi, J.; Gao, F.; Wu, T.J. Integrated design and structure analysis of robust iterative learning control system based on a two-dimensional model. Ind. Eng. Chem. Res. 2005, 44, 8095–8105.
12. Amann, N.; Owens, D.H.; Rogers, E. Iterative learning control for discrete-time systems with exponential rate of convergence. IEE Proc.-Control Theory Appl. 1996, 143, 217–224.
13. Ruan, X.; Liu, Y. Monotone convergence rate of norm-optimal-gain-arguable iterative learning control for LDTI systems. Asian J. Control 2022, 24, 920–941.
14. Barton, K.L.; Alleyne, A.G. A norm optimal approach to time-varying ILC with application to a multi-axis robotic testbed. IEEE Trans. Control Syst. Technol. 2010, 19, 166–180.
15. Chen, B.; Chu, B. Distributed norm optimal iterative learning control for point-to-point consensus tracking. IFAC-PapersOnLine 2019, 52, 292–297.
16. Owens, D.H. Iterative Learning Control: An Optimization Paradigm; Springer: London, UK; New York, NY, USA, 2016; pp. 233–319.
17. Owens, D.H.; Feng, K. Parameter optimization in iterative learning control. Int. J. Control 2003, 76, 1059–1069.
18. Gunnarsson, S.; Norrlöf, M. On the design of ILC algorithms using optimization. Automatica 2001, 37, 2011–2016.
19. Owens, D.H.; Chu, B.; Songjun, M. Parameter-optimal iterative learning control using polynomial representations of the inverse plant. Int. J. Control 2012, 85, 533–544.
20. Jin, X.; Xu, J.X. A barrier composite energy function approach for robot manipulators under alignment condition with position constraints. Int. J. Robust Nonlinear Control 2014, 24, 2840–2851.
21. Chi, R.H.; Hou, Z.S.; Huang, B.; Jin, S.T. A unified data-driven design framework of optimality-based general iterative learning control. Comput. Chem. Eng. 2015, 77, 10–23.
22. Hätönen, J.J.; Owens, D.H.; Moore, K.L. An algebraic approach to iterative learning control. Int. J. Control 2004, 77, 45–54.
23. Liu, Y.; Ruan, X.E.; Li, X.H. Optimized iterative learning control for linear discrete-time-invariant systems. IEEE Access 2019, 7, 75378–75388.
24. Zhang, X.M.; Han, Q.L.; Ge, X.H.; Ding, D.; Ding, L.; Yue, D.; Peng, C. Networked control systems: A survey of trends and techniques. IEEE/CAA J. Autom. Sin. 2020, 7, 1–17.
25. Jiang, T.; Zhang, Y.; Zhong, S.; Shi, K.; Cai, X. Finite-time analysis for network predictive control systems with induced time delays and data packet dropouts. Phys. A Stat. Mech. Its Appl. 2021, 581, 126209.
26. Chen, J.; Sun, J.; Wang, G. From unmanned systems to autonomous intelligent systems. Engineering 2022, 12, 16–19.
27. Hespanha, J.P.; Naghshtabrizi, P.; Xu, Y. A survey of recent results in networked control systems. Proc. IEEE 2007, 95, 138–162.
28. Bu, X.H.; Yu, F.S.; Hou, Z.S.; Wang, F.Z. Iterative learning control for a class of nonlinear systems with random packet losses. Nonlinear Anal. Real World Appl. 2013, 14, 567–580.
29. Liu, C.P.; Xu, J.X.; Wu, J. Iterative learning control for remote control systems with communication delay and data dropout. Math. Probl. Eng. 2012, 2012, 131–152.
30. Wu, J.; Chen, T.W. Design of networked control systems with packet dropouts. IEEE Trans. Autom. Control 2007, 52, 1314–1319.
31. Wang, D.; Wang, J.L.; Wang, W. H-infinity controller design of networked control systems with Markov packet dropouts. IEEE Trans. Syst. Man Cybern. Syst. 2013, 43, 689–697.
32. Delchamps, D.F. Stabilizing a linear system with quantized state feedback. IEEE Trans. Autom. Control 1990, 35, 916–924.
33. Csernak, G.; Stepan, G. Life expectancy of transient microchaotic behaviour. J. Nonlinear Sci. 2005, 15, 63–91.
34. Bu, X.; Wang, T.; Hou, Z. Iterative learning control for discrete-time systems with quantized measurements. IET Control Theory Appl. 2015, 9, 1455–1460.
35. Xu, Y.; Shen, D. Zero-error convergence of iterative learning control using quantized error information. IMA J. Math. Control Inf. 2017, 34, 1061–1077.
36. Zhang, H.; Chi, R. Multi-lagged-input information enhancing quantized iterative learning control. Trans. Inst. Meas. Control 2021, 43, 313–324.
37. Xu, P.P.; Bu, X.H.; Hou, Z.S. Convergence analysis of quantized iterative learning control using lifting representation. In Proceedings of the 35th Chinese Control Conference, Chengdu, China, 27–29 July 2016; pp. 3137–3141.
38. Bu, X.H.; Hou, Z.S. Stability analysis of quantized iterative learning control systems using lifting representation. Int. J. Adapt. Control Signal Process. 2017, 31, 1327–1336.
39. Xiong, W.; Yu, X.; Chen, Y.; Gao, J. Quantized iterative learning consensus tracking of digital networks with limited information communication. IEEE Trans. Neural Netw. Learn. Syst. 2017, 28, 1473–1480.
40. Fu, M.Y.; Xie, L.H. The sector bound approach to quantized feedback control. IEEE Trans. Autom. Control 2005, 50, 1698–1711.
41. Yang, S.P.; Xu, J.X. Optimal iterative learning control design for multi-agent systems consensus tracking. Syst. Control Lett. 2014, 69, 80–89.
42. Liu, Y.; Ruan, X.E. Linearly monotonic convergence of nonlinear parameter-optimal iterative learning control to linear discrete-time-invariant systems. Int. J. Robust Nonlinear Control 2021, 31, 3955–3981.
Figure 1. Comparison of the outputs.
Figure 2. Comparison of tracking error tendency.
Figure 3. Tracking error tendency of POGAILCQe.
Figure 4. Tracking error tendency of POGAILCQe.
Figure 5. The feedback control diagram of a robot manipulator.
Figure 6. Comparison of the outputs.
Figure 7. Comparison of tracking error tendency.

