Article

Partial Singular Value Assignment for Large-Scale Systems

School of Science, Hunan University of Technology, Zhuzhou 412007, China
*
Author to whom correspondence should be addressed.
Axioms 2023, 12(11), 1012; https://doi.org/10.3390/axioms12111012
Submission received: 15 September 2023 / Revised: 18 October 2023 / Accepted: 24 October 2023 / Published: 27 October 2023
(This article belongs to the Special Issue Control Theory and Control Systems: Algorithms and Methods)

Abstract
The partial singular value assignment problem stems from the development of observers for discrete-time descriptor systems and the resolution of ordinary differential equations. Conventional techniques mostly utilize singular value decomposition, which is unfeasible for large-scale systems owing to their relatively high complexity. By calculating the sparse basis of the null space associated with some orthogonal projections, the existence of the matrix in partial singular value assignment is proven and an algorithm is subsequently proposed for implementation, effectively avoiding the full singular value decomposition of the existing methods. Numerical examples exhibit the efficiency of the presented method.

1. Introduction

Partial singular value assignment (PSVA) is a powerful technique with applications in a variety of fields, including control theory, optimization, signal processing and many more [1,2,3,4,5,6,7,8,9,10]. PSVA selectively modifies some singular values of a matrix while leaving the others unchanged. This enables engineers, researchers and practitioners to address various problems in a nuanced and efficient manner. Notably, PSVA plays a pivotal role in minimizing errors when designing observers for the following discrete-time descriptor systems
$$E z_{k+1} = A z_k + B u_k + w_k, \qquad y_k = C z_k + r_k,$$
where the coefficient matrices E, A ∈ R^{n×n}, B ∈ R^{n×l} and C ∈ R^{m×n}; w_k ∈ R^{n×1} and r_k ∈ R^{m×1} are the system errors; z_k ∈ R^{n×1}, u_k ∈ R^{l×1} and y_k ∈ R^{m×1} are the state vector, input vector and output vector, respectively. This system, initially introduced by Luenberger in 1977 [11], has been extensively explored, particularly in cases involving a singular matrix E. Contemporary research has yielded a wealth of theoretical insights from control theorists and numerical analysts, also covering continuous-time descriptor systems. These investigations encompass a wide array of subjects, including controllability and observability [12,13], pole assignment [14,15], eigenstructure assignment [16], Kronecker's canonical form [17], solution algorithms [18] and the inequality method [19]. It is worth noting that many of the findings from these studies apply to discrete-time systems as well.
In the context of the discrete-time descriptor system, the observer can be described as follows
$$E x_{k+1} + H (y_{k+1} - C x_{k+1}) = A x_k + G (y_k - C x_k) + B u_k,$$
where H, G ∈ R^{n×m}. The resulting error vector
$$e_k = x_k - z_k$$
satisfies the equality
$$(E - HC) e_{k+1} = (A - GC) e_k - H r_{k+1} + G r_k - w_k.$$
A key objective in PSVA is to find a matrix H such that the smallest singular values (which might be zeros) of (E − HC) are sufficiently large or, equivalently, the norm of (E − HC)^{−1} is sufficiently small; this greatly reduces the impact of unmodeled error terms [20,21]. For further insights into the error reduction achieved by PSVA in other optimal state-space controller designs, such as H_∞ and H_2 systems, please refer to [22,23] for detailed discussions.
PSVA has another important application in numerical methods for solving ordinary differential equations (ODEs) [24]. It is widely recognized that Euler's method plays a crucial role in the context of ODEs. Despite its rapidly growing error as the calculation progresses, Euler's method remains one of the few options for generating the initial values needed by higher-order methods. When the explicit Euler method is applied to a differential equation with initial values, the resulting numerical solution tends to underestimate the true solution; conversely, the implicit Euler method tends to overestimate it. This suggests that a combination of the two methods could lead to better results. However, straightforward averaging has its limitations. An intriguing alternative is to "randomly switch" between the two methods: if we average all possible outcomes generated by the random switching, the resulting method can be tens or even hundreds of times more accurate than either the explicit or the implicit Euler method. For example, consider a linear numerical method for solving the ODE
$$\dot{x} = \lambda x$$
with λ being a constant; the numerical solution can be achieved by closing a feedback loop, following the principles of linear control theory [24].
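The under/overestimation of the two Euler schemes, and the gain from averaging the switching outcomes, can be checked on the scalar equation above. The following Python sketch uses illustrative values (λ = −1, step size 0.1, equal switching probability per step), which are assumptions for the demonstration rather than settings from the paper:

```python
import numpy as np

lam, h, x0, steps = -1.0, 0.1, 1.0, 10
exact = x0 * np.exp(lam * h * steps)

# Explicit Euler: x_{k+1} = (1 + h*lam) x_k -- underestimates the decaying solution
explicit = x0 * (1 + h * lam) ** steps
# Implicit Euler: x_{k+1} = x_k / (1 - h*lam) -- overestimates it
implicit = x0 / (1 - h * lam) ** steps

# Averaging over all "random switching" outcomes with probability 1/2 per step:
# the expected one-step map is the mean of the two amplification factors
switched = x0 * (0.5 * (1 + h * lam) + 0.5 / (1 - h * lam)) ** steps

print(explicit < exact < implicit)                    # the two methods bracket the truth
print(abs(switched - exact) < abs(explicit - exact))  # the average is far more accurate
```

With these values the averaged one-step factor matches exp(hλ) through second order in hλ, which is the source of the accuracy gain over either plain method.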
In our discussion, we will leverage this control to modify the method so that singular values can be assigned. To illustrate this, we consider two linear methods
$$x_{k+1} = A x_k + b u_k, \qquad y_k = c x_k$$
and
$$x_{k+1} = F x_k + g u_k, \qquad y_k = h x_k$$
of solving the differential Equation (1), where b, g, c and h are vectors of compatible dimensions with the systems above. The matrices A and F, both belonging to R n × n , possess properties determined by the chosen numerical method. However, they share the common attribute of having 1 as an eigenvalue with a multiplicity of 1. If there are other eigenvalues located on the unit circle, they also have a multiplicity of 1, while all remaining eigenvalues lie within the unit circle.
Similar to the improvements achieved with the Euler methods, a natural enhancement of the linear method can be obtained by randomly switching between these two linear methods. In practice, the feedback loop in control theory yields the following results:
$$x_{k+1} = (A - \lambda b c) x_k$$
and
$$x_{k+1} = (F - \lambda g h) x_k.$$
Let δ i be a random variable with values in { 0 , 1 } . The “random switching” iteration
$$x_{k+1} = \prod_{i=0}^{k} (A - \lambda b c)^{\delta_i} (F - \lambda g h)^{1 - \delta_i} x_0$$
depends heavily on the product
$$T_k = \prod_{i=0}^{k} (A - \lambda b c)^{\delta_i} (F - \lambda g h)^{1 - \delta_i}.$$
It has been shown in [25] that the "random switching" iteration converges if and only if the 2-norm of T_k is less than 1. Since the 2-norm of T_k equals its largest singular value, this raises the question of when the singular values of a linear system can be assigned through state feedback.
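A small simulation illustrates the role of ‖T_k‖_2. The two closed-loop matrices below are hypothetical stand-ins for A − λbc and F − λgh, chosen only so that each factor is a contraction; they are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical closed-loop matrices (both with 2-norm below 1)
M0 = np.array([[0.9, 0.1], [0.0, 0.5]])
M1 = np.array([[0.6, 0.0], [0.2, 0.8]])

# T_k is the running product of randomly switched factors; the iteration
# x_{k+1} = T_k x_0 converges when the 2-norm (largest singular value)
# of T_k decays below 1
T = np.eye(2)
norms = []
for _ in range(60):
    T = (M1 if rng.random() < 0.5 else M0) @ T
    norms.append(np.linalg.norm(T, 2))
print(norms[-1] < 1.0)
```

Because each factor here has 2-norm below 1, the product norm shrinks geometrically regardless of the switching sequence; assigning the singular values of the closed-loop matrices is exactly what guarantees this.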
The above applications indicate that the PSVA problem involves assigning several desired singular values and determining how the remaining singular values behave. In mathematics, PSVA can be expressed explicitly as follows.
  • PSVA: Given matrices A, B and the desired (non-zero) singular values { θ_1, …, θ_p }, find a matrix F such that
$$A + B F^\top$$
    has the p desired singular values.
This problem was initially addressed in [20] for small and medium-scale systems. Typically, one needs a full QR decomposition of the matrix B, i.e.,
$$[Q_B, \bar{Q}_B]^\top B = \begin{bmatrix} R_B \\ 0 \end{bmatrix},$$
and a full singular value decomposition (SVD) of \bar{Q}_B^\top A to construct the matrix F. If all singular values need to be reassigned, the interlacing theory on the singular values of A + BF^⊤ and an SVD of the orthogonal projection of A onto the null space of B^⊤ are required for the formation of F [26]. However, for large-scale systems, methods relying on the full SVD of a large matrix are not feasible due to the computational complexity of O(n³) and memory limitations. Similar issues were encountered in pole assignment [14,15,27,28]. Readers can find details in [29,30,31,32] and their references.
In this paper, we introduce an efficient algorithm to conduct PSVA on large-scale systems. The primary contributions and advantages of our algorithm are listed below.
  • The core innovation of our devised approach is rooted in the construction of a sparse basis within the null space of the orthogonal projection of matrix A. This strategy offers a notable advantage by obviating the necessity for the full SVD of a large-scale matrix.
  • Our algorithm brings to the forefront the possibility of computational savings, especially when dealing with matrices of significant scale and sparse structure (i.e., the number of non-zeros in each column of the matrix is bounded by a constant much smaller than n). Then, the proposed algorithm strives to assign the desired singular values with exceptional computational efficiency, hopefully scaling with the computational complexity of O ( n ) floating-point operations.
  • To validate the practical utility of our proposed algorithm, we conduct a series of numerical experiments. These experiments not only confirm the feasibility of our approach but also demonstrate its effectiveness in real-world scenarios.
Throughout this paper, we denote the i-th singular value, the 2-norm and the condition number of A ∈ R^{n×n} by σ_i(A), ‖A‖_2 = σ_1(A) (the maximal singular value) and κ_2(A) = σ_1(A)/σ_n(A) (the ratio of the maximal and the minimal singular values), respectively. The determinant of the matrix A is represented by det(A). The equivalence of two pairs of matrices is defined as follows.
Definition 1 
([33]). Two pairs of matrices ( A , B ) and ( C , D ) are equivalent if there are orthogonal matrices U and V, a matrix E and a nonsingular matrix R with compatible sizes such that
$$C = U (A + B E) V, \qquad D = U B R.$$
The equivalence of matrix pairs is attractive as it preserves the same assignability of singular values. Thus, one can use the simplest representation to describe a class of equivalent pairs and study the existence of PSVA.

2. PSVA for Single Singular Value

We first delve into the single PSVA. This problem can be simplified to the following.
  • Single PSVA: Given a matrix A ∈ R^{n×n}, a vector b ∈ R^{n×1} and a desired (non-zero) singular value θ, find a vector f ∈ R^{n×1} such that
$$A + b f^\top$$
    includes the desired singular value θ.
To solve the single PSVA, we initiate the process by ensuring that the smallest singular value of a matrix associated with A is in proximity to zero, or even equal to zero. Once this condition is met, we can proceed with constructing the orthogonal projection of matrix A onto the null space of vector b by using
$$f_1 = -\left( (b^\top b)^{-1} b^\top A \right)^\top,$$
which effectively enforces a minimal singular value of zero. In fact, by performing the QR decomposition of b by
$$b = Q r = [q_1, Q_2] \begin{bmatrix} r \\ 0 \end{bmatrix} = q_1 r,$$
where q_1 ∈ R^{n×1} and Q_2 ∈ R^{n×(n−1)}, we have the smallest singular value of
$$D = A + b f_1^\top = \left( I - b (b^\top b)^{-1} b^\top \right) A = (I - q_1 q_1^\top) A = (Q_2 Q_2^\top) A$$
forced to be zero. Let the SVD of D be given by
$$U^\top D V = \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix},$$
where U = [U_1, u_2] and V = [V_1, v_2] are orthogonal matrices with U_1, V_1 ∈ R^{n×(n−1)} and u_2, v_2 ∈ R^{n×1}, and Σ_1 = diag{σ_n, …, σ_2} ∈ R^{(n−1)×(n−1)} is a diagonal matrix whose n−1 non-zero diagonal elements are the non-zero singular values of D. We can demonstrate the existence of the single PSVA through the following theorem.
Theorem 1. 
For a given singular value θ > 0, there exists a vector f ∈ R^{n×1} such that the singular values of A + b f^⊤ are { σ_n, …, σ_2, θ }.
Proof. 
By using the SVD of D in (4), a constant c ≠ 0 can be found such that
$$U^\top b \, c = [0, \dots, 0, 1]^\top.$$
We can then define the vectors f_2 = V ω^⊤ with ω = [ω_1, …, ω_n] ∈ R^{1×n} and f = f_1 + f_2. As a result, we have
$$A + b f^\top = U \Sigma V^\top$$
with
$$\Sigma = \begin{bmatrix} \sigma_n & & & 0 \\ & \ddots & & \vdots \\ & & \sigma_2 & 0 \\ z_n & \cdots & z_2 & z_1 \end{bmatrix}$$
and z_i = ω_i / c for i = 1, …, n. The assignment of the singular value θ in A + b f^⊤ is now reduced to the assignment in Σ according to Definition 1.
Now, we consider the characteristic polynomial of the matrix δI − ΣΣ^⊤. It follows that
$$
\begin{aligned}
p(\delta) &= \det(\delta I - \Sigma \Sigma^\top)
= \det \begin{bmatrix}
\delta - \sigma_n^2 & & & -\sigma_n z_n \\
& \ddots & & \vdots \\
& & \delta - \sigma_2^2 & -\sigma_2 z_2 \\
-\sigma_n z_n & \cdots & -\sigma_2 z_2 & \delta - \sum_{i=1}^{n} z_i^2
\end{bmatrix} \\
&= \Big( \delta - \sum_{i=1}^{n} z_i^2 \Big) \prod_{i=2}^{n} (\delta - \sigma_i^2)
- z_2^2 \sigma_2^2 \prod_{\substack{j=2 \\ j \neq 2}}^{n} (\delta - \sigma_j^2) - \cdots
- z_n^2 \sigma_n^2 \prod_{\substack{j=2 \\ j \neq n}}^{n} (\delta - \sigma_j^2) \\
&= \delta \prod_{i=2}^{n} (\delta - \sigma_i^2) - \sum_{i=1}^{n} z_i^2 \prod_{\substack{j=1 \\ j \neq i}}^{n} (\delta - \sigma_j^2)
= \delta \prod_{i=2}^{n} (\delta - \sigma_i^2) \left( 1 - \frac{z_1^2}{\delta} - \sum_{i=2}^{n} \frac{z_i^2}{\delta - \sigma_i^2} \right),
\end{aligned}
$$
with the convention σ_1 = 0.
By letting
$$p(\theta^2) = 0$$
with 0 < θ < σ_2, we see that there are multiple solutions to Equation (5). One simple choice is to take z_1 = θ and z_2 = z_3 = ⋯ = z_n = 0. By setting f_2 = V [0, …, 0, θ]^⊤, we obtain the desired vector f = f_1 + f_2, which completes the PSVA for a single singular value.    □
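The effect of this choice can be verified numerically: bordering the diagonal of Σ with the last row z = [0, …, 0, θ] leaves the original singular values intact and appends θ. A small NumPy check with illustrative values of σ_i (not taken from the paper):

```python
import numpy as np

sigmas = np.array([5.0, 3.0, 2.0, 1.5])  # illustrative sigma_n, ..., sigma_2
theta = 0.7                              # desired smallest singular value

n = len(sigmas) + 1
S = np.zeros((n, n))
S[:n - 1, :n - 1] = np.diag(sigmas)
S[n - 1, n - 1] = theta                  # last row z = [0, ..., 0, theta]

sv = np.linalg.svd(S, compute_uv=False)
expected = np.sort(np.append(sigmas, theta))[::-1]
print(np.allclose(sv, expected))         # True: singular values are {sigmas} U {theta}
```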
Remark 1. 
Although there are multiple solutions to Equation (5), e.g., the vector z could also be taken as
$$z = \left[ \frac{\theta}{\sqrt{n}}, \sqrt{\frac{\theta^2 - \sigma_2^2}{n}}, \dots, \sqrt{\frac{\theta^2 - \sigma_n^2}{n}} \right],$$
the choice z = [0, …, 0, θ] in Theorem 1 is the most valid, as it ensures that each of the other σ_i² (i ≥ 2) is also a root of p(δ) = 0.
The above theorem provides the motivation for constructing the vector f for the single PSVA. In fact, by equating the two sides of the SVD of D in (4), it follows that
$$D V_1 = U_1 \Sigma_1, \qquad D v_2 = 0.$$
By taking u_2 = q_1 and f_2 = θ v_2, then, for a singular value θ > 0 to be assigned, we have
$$U^\top (D + q_1 f_2^\top) V = \begin{bmatrix} \Sigma_1 + U_1^\top q_1 f_2^\top V_1 & U_1^\top q_1 f_2^\top v_2 \\ u_2^\top q_1 f_2^\top V_1 & u_2^\top q_1 f_2^\top v_2 \end{bmatrix} = \begin{bmatrix} \Sigma_1 & 0 \\ 0 & \theta \end{bmatrix},$$
and the vector
$$f = f_1 + \frac{\theta v_2}{r}$$
completes the single PSVA. Algorithm 1 outlines the concrete steps involved in the single PSVA.
Algorithm 1 PSVA for the Single Singular Value
     Input: A large-scale sparse matrix A ∈ R^{n×n}, b ∈ R^{n×1} and the desired singular value θ.
     Output: A vector f ∈ R^{n×1} such that the minimal singular value of A + b f^⊤ is θ.
1. Compute f_1 = −((b^⊤ b)^{−1} b^⊤ A)^⊤;
2. Compute the economic QR decomposition of b, i.e., b = q_1 r;
3. Compute the null-space vector v_2 of the matrix D = A + b f_1^⊤;
4. Form the vector f = f_1 + θ v_2 / r such that the smallest singular value of A + b f^⊤ is θ.
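As an illustration, the four steps above can be sketched in NumPy on a small dense instance. The random data, the size n = 6 and the use of a full SVD in step 3 (in place of the sparse null-space computation the paper advocates) are simplifications for demonstration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
theta = 0.5                              # desired singular value

# Step 1: f1 = -((b^T b)^{-1} b^T A)^T
f1 = -(A.T @ b) / (b @ b)

# Step 2: the economic QR of a single vector b is just a normalization, b = q1 * r
r = np.linalg.norm(b)
q1 = b / r

# Step 3: D = A + b f1^T = (I - q1 q1^T) A has a zero singular value;
# a right null vector v2 is the last right singular vector of D
D = A + np.outer(b, f1)
U, s, Vt = np.linalg.svd(D)
v2 = Vt[-1]

# Step 4: f = f1 + theta * v2 / r
f = f1 + theta * v2 / r

# The singular values of A + b f^T are those of D with the zero replaced by theta
sv = np.linalg.svd(A + np.outer(b, f), compute_uv=False)
expected = np.sort(np.concatenate([s[:-1], [theta]]))[::-1]
print(np.allclose(np.sort(sv)[::-1], expected))
```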
Remark 2. 
(1) The QR decomposition of b requires about 2(n − 1/3) flops when the Householder transformation is used [34].
(2) The vector v_2 in the null space of
$$D = (I - q_1 q_1^\top) A$$
should be as sparse as possible. Two methods have been tested in our experiments: one computes the null space of D via the QR decomposition, and the other via the LU factorization. If each column of A is sparse, with the number of non-zeros bounded by a constant a (much less than n), then the computational complexity may hopefully be O(an).
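When A itself is nonsingular, the null vector can even be obtained from a single sparse solve: D v = (I − q_1 q_1^⊤) A v = 0 forces A v ∈ span(b), so v is proportional to A^{−1} b. The sketch below illustrates this observation on a hypothetical well-conditioned tridiagonal A; it is not the paper's exact procedure, and in the PSVA setting A may be nearly singular, in which case this solve is ill-conditioned:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Hypothetical sparse example: a diagonally dominant tridiagonal A (nonsingular)
n = 2000
main = 4.0 + np.arange(n) % 3
A = sp.diags([np.ones(n - 1), main, np.ones(n - 1)], [-1, 0, 1], format="csc")
b = np.zeros(n)
b[0], b[-1] = 1.0, 1.0

# D v = 0  <=>  A v in span(b)  <=>  v proportional to A^{-1} b:
# one sparse solve replaces any SVD-based null-space computation
x = spla.spsolve(A, b)
v2 = x / np.linalg.norm(x)

# Verify that v2 lies (numerically) in the null space of D = (I - q1 q1^T) A
q1 = b / np.linalg.norm(b)
Dv = A @ v2 - q1 * (q1 @ (A @ v2))
print(np.linalg.norm(Dv) < 1e-10)
```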

3. PSVA for p Singular Values

The single PSVA can be readily generalized to the p singular value assignment with p > 1 .
  • p-PSVA: Given a large-scale sparse matrix A ∈ R^{n×n} whose p smallest singular values { σ_1, …, σ_p } are close to (or equal to) zero, and a matrix B ∈ R^{n×p}, find a matrix F ∈ R^{n×p} such that the p smallest singular values of
$$A + B F^\top$$
    are the given { θ_p, …, θ_1 }.
Let us construct the matrix
$$F_1 = -\left( (B^\top B)^{-1} B^\top A \right)^\top,$$
which yields the orthogonal projection of A onto the null space of B^⊤. By using the QR decomposition of B
$$\left( \text{i.e.,}\; B = Q R = [Q_1, Q_2] \begin{bmatrix} R \\ 0 \end{bmatrix} = Q_1 R \right)$$
with Q_1 ∈ R^{n×p}, the matrix
$$D = A + B F_1^\top = (I - Q_1 Q_1^\top) A$$
will have p zero singular values. Let the SVD of D be
$$U^\top D V = \begin{bmatrix} \Sigma_1 & 0 \\ 0 & 0 \end{bmatrix}$$
with the diagonal matrix Σ_1 ∈ R^{(n−p)×(n−p)} having the diagonal elements { σ_n, …, σ_{p+1} }. The following theorem provides the existence of the PSVA.
Theorem 2. 
For given singular values { θ_1, …, θ_p } to be assigned, there exists a matrix F ∈ R^{n×p} such that the singular values of A + B F^⊤ are { σ_n, …, σ_{p+1}, θ_p, …, θ_1 }.
Proof. 
Set
$$\Sigma^{(i)} = \begin{bmatrix}
 & \Sigma_1 & & \\
z_{pn} & \cdots & z_{pp} & \cdots & z_{p1} \\
\vdots & & \vdots & & \vdots \\
z_{in} & \cdots & z_{ip} & \cdots & z_{i1} \\
0 & \cdots & 0 & \cdots & 0 \\
\vdots & & \vdots & & \vdots \\
0 & \cdots & 0 & \cdots & 0
\end{bmatrix}_{n \times n}$$
for 1 ≤ i ≤ p. According to Definition 1, the entries z_{ij} must be determined such that the singular values of A + B F^⊤ are those of Σ^{(i)}. Firstly, consider i = p. By setting z_p = [z_{pn}, …, z_{pp}, …, z_{p1}] with z_{pp} = θ_p and all other entries being zeros, Theorem 1 indicates that there exists a vector f_p such that the non-zero singular values of A + B (F_1 + [f_p, 0, …, 0])^⊤ are { σ_n, …, σ_{p+1}, θ_p }. By using induction and applying Theorem 1 repeatedly, it can be shown that there is a matrix F_2 = [f_p, f_{p−1}, …, f_1] ∈ R^{n×p} such that the singular values of A + B F^⊤ (F = F_1 + F_2) are { σ_n, …, σ_{p+1}, θ_p, …, θ_1 }.    □
We can further elaborate on the form of F in terms of Theorem 2. In fact, let the orthogonal matrices U and V in (4) be given as
$$U = [U_1, U_2] \quad \text{and} \quad V = [V_1, V_2]$$
with U_2, V_2 ∈ R^{n×p} and U_1, V_1 ∈ R^{n×(n−p)}, respectively. Then, by setting U_2 = Q_1 and F_2 = (R^{−1} Σ_2 V_2^⊤)^⊤, we have
$$U^\top (A + B F^\top) V = \begin{bmatrix} \Sigma_1 + U_1^\top Q_1 \Sigma_2 V_2^\top V_1 & U_1^\top Q_1 \Sigma_2 V_2^\top V_2 \\ U_2^\top Q_1 \Sigma_2 V_2^\top V_1 & U_2^\top Q_1 \Sigma_2 V_2^\top V_2 \end{bmatrix} = \begin{bmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{bmatrix},$$
where Σ_2 represents a diagonal matrix whose diagonal elements are the desired non-zero singular values θ_1, …, θ_p. Furthermore, the matrix F can be expressed as
$$F = -\left( R^{-1} (Q_1^\top A - \Sigma_2 V_2^\top) \right)^\top.$$
The steps involved in the PSVA for p singular values are outlined in Algorithm 2.
A crucial aspect of Algorithm 2 is computing a basis of the null space of the matrix D that is as sparse as possible. While the heuristic method suggested in [20] is an option, empirical testing has revealed that the QR decomposition or LU factorization in MATLAB can produce adequately sparse vectors, particularly when the matrix A possesses a significant degree of sparsity.
Remark 3. 
As indicated by Remark 2, the computational complexity of Algorithm 2 is advantageous, with a time complexity of O(apn) when the matrix A exhibits a degree of sparsity (characterized by values of a and p that are small in comparison to the matrix size n). In this case, Algorithm 2 offers a substantial advantage over methods relying on the full SVD, which typically entail a much higher time complexity of O(n³). This stark contrast positions Algorithm 2 as a clear choice for handling large-scale systems, where its efficiency and computational speed are obviously superior. Moreover, when the matrix A reaches a considerable order (scaling up to the thousands), methods that rely on the full SVD tend to become prohibitively time-consuming, particularly when executed on standard personal computers. This temporal limitation restricts their applicability and practicality, and it is also the reason for exclusively showing the performance of our algorithm through numerical experiments on large-scale problems.
Algorithm 2 PSVA for p Singular Values
    Input: A large-scale sparse matrix A ∈ R^{n×n}, B ∈ R^{n×p} and the desired singular values θ_1, …, θ_p.
    Output: A matrix F ∈ R^{n×p} such that the p smallest singular values of A + B F^⊤ are { θ_1, …, θ_p }.
1. Compute the economic QR decomposition of B, i.e., B = Q_1 R with Q_1 = [Q_{11}, …, Q_{1p}] and
$$R = \begin{bmatrix} r_{11} & \cdots & r_{1p} \\ & \ddots & \vdots \\ & & r_{pp} \end{bmatrix}.$$
2. Compute the null space of D, i.e., V_2 = [V_{21}, …, V_{2p}] with D V_2 = (I − Q_1 Q_1^⊤) A V_2 = 0.
3. Compute f_p by Algorithm 1 with the available A, Q_{1p} and θ_p.
4. For i = p−1 : −1 : 1
$$f_i = \frac{1}{r_{ii}} \left( \theta_i V_{2i} - A^\top Q_{1i} - \sum_{j=i+1}^{p} r_{ij} f_j \right)$$
End.
5. Form the matrix F = [f_1, …, f_p] such that the p smallest singular values of A + B F^⊤ are { θ_1, …, θ_p }.
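For a small dense instance, the closed form F = −(R^{−1}(Q_1^⊤A − Σ_2V_2^⊤))^⊤ condenses the steps of Algorithm 2 into a few NumPy lines. The random data and the dense SVD used to obtain V_2 are illustrative simplifications, not the sparse procedure used for large-scale problems:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 8, 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, p))
thetas = np.array([0.3, 0.6, 0.9])       # desired singular values to assign

# Step 1: economic QR decomposition of B
Q1, R = np.linalg.qr(B)

# Step 2: D = (I - Q1 Q1^T) A has p zero singular values;
# the last p right singular vectors span its null space
D = A - Q1 @ (Q1.T @ A)
U, s, Vt = np.linalg.svd(D)
V2 = Vt[-p:].T

# Steps 3-5 in closed form: F = -(R^{-1}(Q1^T A - Sigma2 V2^T))^T
Sigma2 = np.diag(thetas)
F = -np.linalg.solve(R, Q1.T @ A - Sigma2 @ V2.T).T

# The singular values of A + B F^T are those of D with the zeros replaced by thetas
sv = np.linalg.svd(A + B @ F.T, compute_uv=False)
expected = np.sort(np.concatenate([s[:-p], thetas]))[::-1]
print(np.allclose(np.sort(sv)[::-1], expected))
```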
Another consideration in PSVA is to keep the condition number of the assigned matrix A + BF relatively low. This can be achieved through optimization methods based on heuristic rules [29,30]. For instance, Shields [21] optimized a matrix function over a set of assigned matrices with a fixed condition number. However, this approach becomes unfeasible for large-scale problems due to the full SVD of a large matrix. On the other hand, if the assigned singular values are well separated from those in Σ_1, the following theorem shows that the condition number of A + BF is bounded below by that of Σ_1 and can attain this bound.
Theorem 3 
([21]). For any matrix
$$P = \begin{bmatrix} P_1 \\ P_2 \end{bmatrix}$$
with P_2 being nonsingular and
$$\sigma(A + B F^\top) = \sigma \left( \begin{bmatrix} \Sigma_1 & P_1 \\ 0 & P_2 \end{bmatrix} \right),$$
the matrix F in the PSVA satisfies
$$\kappa_2(A + B F^\top) \geq \kappa_2(\Sigma_1).$$
Furthermore, there exists a non-unique matrix F to make the equality hold.
Remark 4. 
From Theorem 3, it follows that A + BF will have a poor condition number if the gap between the maximal and the minimal singular values of Σ_1 is large. Conversely, if suitable singular values θ_1, …, θ_p and a proper F are chosen to implement the PSVA, the condition number of A + BF can be kept as close to that of Σ_1 as possible. This fact is used to assess the effectiveness of Algorithm 2 in the next section.

4. Numerical Examples

In this section, we examine the efficacy of Algorithm 2 in addressing large-scale PSVA problems. The algorithm was coded in MATLAB 2014 and all examples were executed on a desktop equipped with an Intel i5 3.4 GHz processor and 16 GB RAM. We denote the desired singular value by θ and the computed singular value by θ̂. The relative error is computed as
$$\mathrm{ERR} = \frac{|\theta - \hat{\theta}|}{\theta}$$
when the matrix F is at hand.
We opted not to carry out a direct comparison between Algorithm 2 and the methods based on the full SVD. The rationale behind this choice is grounded in the fact that, as stated in Remark 3, these conventional approaches can become excessively time-consuming, particularly when executed on typical personal desktop setups.
Example 1. 
For i = 1, …, n, we set a_i = 10 + 90 ξ_i and b_i = η_i, where ξ_i and η_i are random numbers drawn from a uniform distribution over the interval [0, 1] (generated by the MATLAB command "rand(n,1)"). We define the matrices
$$A = \begin{bmatrix} a_1 & b_1 & & \\ & a_2 & \ddots & \\ & & \ddots & b_{n-1} \\ & & & a_n \end{bmatrix}$$
and
B = 1 1 0 0 0 0 1 1 0 1 0 1 1 0 1 1 1 0 1 0 0 1 1 0 1 0 1 1 1 0 1 1 0 1 1 0 1 1 0 1 .
The singular values to be assigned are [ θ 1 , , θ 5 ] = [ 2.2 , 4.3 , 5.5 , 7.6 , 8.2 ] .
Algorithm 2 was utilized to carry out the PSVA, and 50 random experiments were conducted for n = 10,000. The resulting errors are depicted in Figure 1, with the vertical axis representing the error level and the horizontal axis reflecting the number of experiments. The assigned singular values are [Del(1), …, Del(5)], corresponding to [θ_1, …, θ_5]. It is evident from Figure 1 that all errors are below 3 × 10^{−14}. Furthermore, it is noteworthy that, among the 50 experiments, the errors of the assigned singular values Del(1) (represented by the star line) and Del(2) (represented by the diamond line) fluctuate more dramatically than the other errors, while the errors of Del(5) (represented by the cross line) fluctuate the least.
In Figure 2, we also list the computed CPU time and the condition numbers of the matrices D (CondS) and A + BF (CondPSVA), with all data on the vertical axis recorded in log_{10} to minimize the gaps between the various lines. It can be seen that the CPU time of all tests is approximately 0.7 s, and the range of CondPSVA lies between 48.1 and 51.5, compared to that of CondS, which lies between 8.81 and 13.43. This highlights the efficiency of Algorithm 2 in carrying out PSVA. It is worth mentioning that the condition number of A + BF is closely linked to the assigned singular values. In our experiments, we observed that CondPSVA approached CondS more closely when the minimal singular values to be assigned were larger.
Example 2. 
This example is taken from a random perturbation of the PDE problem. We set n = 10,000 and define matrices A and B as
$$A = \begin{bmatrix}
734+\delta_{0,1} & 171+\delta_{1,1} & & & 196+\delta_{7,1} & & \\
-9+\delta_{-1,1} & 734+\delta_{0,2} & 171+\delta_{1,2} & & & \ddots & \\
& -9+\delta_{-1,2} & 734+\delta_{0,3} & \ddots & & & 196+\delta_{7,n-7} \\
196+\delta_{7,1} & & \ddots & \ddots & \ddots & & \\
& \ddots & & \ddots & \ddots & 734+\delta_{0,n-1} & 171+\delta_{1,n-1} \\
& & 196+\delta_{7,n-7} & & & -9+\delta_{-1,n-1} & 734+\delta_{0,n}
\end{bmatrix},$$
B = 1 1 0 0 0 0 0 0 0 1 1 0 1 0 1 1 1 0 1 0 1 1 1 0 1 0 1 1 1 0 1 1 0 1 0 1 1 0 1 1 1 0 1 1 0 1 1 0 1 0 0 1 1 0 1 ,
where the δ_{i,j} in A are random numbers drawn from the normal distribution, and the numbers in the middle columns of the matrix B are 4999, 5000 and 5001. As in Example 1, we conduct 50 random experiments and assign the singular values [θ_1, …, θ_5] = [1, 2, 3, 4, 5].
Figure 3 presents the results from 50 tests, with a specific focus on the obtained errors. Within this figure, the vector [Del(1), …, Del(5)] represents the assigned singular values corresponding to [θ_1, …, θ_5]. A notable observation from the data displayed in Figure 3 is that all of the errors fall well below the threshold of 2 × 10^{−13}, suggesting a high degree of accuracy in the assignment of singular values. Furthermore, the range of fluctuation among these errors is relatively small. In other words, the variations between the assigned singular values and the desired values are quite minor, reinforcing the precision and consistency of the assignments.
We also present the consumed CPU time in Figure 4, which shows that the range varies from 0.40 to 0.46 s. The obtained CondS and CondPSVA of 50 random experiments fall within the interval [4.47, 9.11] and [366.53, 435.61], respectively (with the exception of the value 160.74 in the 35th experiment). Analogously, the value of CondPSVA can be reduced further by assigning relatively larger singular values in Σ 1 . In fact, if the assigned singular values are increased to 160, 162, 164, 166 and 168, the condition number of A + B F in 50 random experiments will fall within the interval [8.05, 12.30].
Example 3. 
In this example, we seek to affirm the robustness and efficiency of Algorithm 2 by subjecting it to real-world testing. The examination involves applying the algorithm to nine large-scale sparse matrices drawn from practical applications. These matrices are conveniently available in the SuiteSparse Matrix Collection (formerly known as the University of Florida Sparse Matrix Collection), https://sparse.tamu.edu/, accessed on 10 September 2023.
The matrices under scrutiny bear the following names: ‘dubcova1’, ‘gridgena’, ‘onetone2’, ‘wathen100’, ’poli_large’, ‘msc10848’, ‘bodyy4’, ‘bodyy5’ and ‘bodyy6’. Each of these matrices varies in size, catering to the demands of diverse applications. The dimensions of these matrices are as follows: 16,129 × 16,129, 48,962 × 48,962, 36,057 × 36,057, 30,401 × 30,401, 15,575 × 15,575, 10,848 × 10,848, 17,546 × 17,546, 18,589 × 18,589 and 19,366 × 19,366. Their sparse structures are plotted in Figure 5, where ‘nz’ indicates the non-zero elements.
In our experiments, the matrix
B = 1 1 0 0 0 0 1 1 0 1 1 1 1 0 1 1
remains invariant for all different problems, but the two assigned singular values vary for each matrix.
We ran Algorithm 2 in dealing with the nine large-scale matrices and the obtained results are exhibited in Table 1. The CPU column in the table provides information regarding the elapsed time, measured in seconds, required to perform the PSVA. The DesSV and AsiSV columns, respectively, denote the two sets of singular values, one representing the desired values (DesSV) and the other representing the values that were actually assigned (AsiSV). These columns allow us to assess how closely the assigned singular values align with the desired values. They serve as a crucial indicator of the accuracy and precision of the PSVA process. The next column, labeled RelErr, offers insights into the relative errors encountered during the PSVA computation. A smaller relative error indicates the more accurate assignment of singular values. The final two columns, CondS and CondPSVA, provide information about the condition numbers of two distinct entities. CondS represents the condition number of Σ 1 , while CondPSVA represents the condition number of the matrix A + B F . These condition numbers help to assess the numerical stability and quality of the PSVA solution. Smaller condition numbers are indicative of more favorable and robust solutions.
It can be seen that Algorithm 2 is efficient in assigning the required two singular values in a relatively short CPU time for all matrices. The RelErr column indicates that the relative error is very small, except for the two matrices 'gridgena' and 'msc10848', with respective levels of 10^{−7} and 10^{−9}. The CondS column lists the condition numbers of Σ_1 for the different matrices, where a larger order means a wider gap between the maximal and the minimal singular values. The CondPSVA column records the condition number of A + BF after the assignment, showing a relatively modest increase over CondS. This again reflects that Algorithm 2 is efficient in implementing PSVA, as stated in Remark 4, and provides valuable insights into the algorithm's capacity to handle complex, large-scale, real-world systems.

5. Conclusions

In the context of large-scale discrete-time systems, a novel algorithm has been introduced to facilitate partial singular value assignment (PSVA), particularly when the matrices A and B exhibit a significant degree of sparsity. This method leverages the capabilities of a sparse solver and incorporates the orthogonal projection of matrix A onto the null space of matrix B as a fundamental component. Crucially, we have established the existence of a matrix F that plays a pivotal role in both single and multiple PSVA problems. This implies that it can be applied effectively to address a range of singular value assignment scenarios, offering flexibility and versatility in practical applications.
The proposed approach is validated through a series of numerical experiments. The derived results affirm the algorithm’s efficacy in achieving the partial assignment of desired singular values, thereby underscoring its real-world applicability for large-scale discrete-time systems. For future research, we will explore the possibility to tackle more complex and practical systems in various fields, from control theory to areas of signal processing.

Author Contributions

Conceptualization, B.Y.; methodology, B.Y.; software, Y.H.; validation, Y.H.; and formal analysis, Q.T. All authors have read and agreed to the final version of this manuscript.

Funding

This work was supported partly by the NSF of Hunan Province (2021JJ50032, 2023JJ50164, 2023JJ50165), the NSF of China (12305016), Degree and Postgraduate Education Reform Project of Hunan University of Technology and Hunan Province (JG2315, 2023JGYB210).

Acknowledgments

We are grateful to the anonymous referees for their useful comments and suggestions, which significantly enhanced the quality of the original paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bangle, H.E. Feature extraction of cable partial discharge signal based on DT_CWT_Hankel_SVD. In Proceedings of the 2019 6th International Conference on Information Science and Control Engineering (ICISCE), Shanghai, China, 20–22 December 2019; pp. 1063–1067.
  2. Cooke, S.A.; Minei, A.J. The pure rotational spectrum of 1,1,2,2,3-pentafluorocyclobutane and applications of singular value decomposition signal processing. J. Mol. Spectrosc. 2014, 306, 37–41.
  3. Datta, B.N. Linear and numerical linear algebra in control theory: Some research problems. Linear Algebra Appl. 1994, 197, 755–790.
  4. Hou, Z.; Jin, S. Model Free Adaptive Control: Theory and Applications; CRC Press: Boca Raton, FL, USA, 2013.
  5. Sontag, E.D. Mathematical Control Theory: Deterministic Finite Dimensional Systems; Springer Science & Business Media: New York, NY, USA, 2013; Volume 6.
  6. Sogaard-Andersen, P.; Trostmann, E.; Conrad, F. A singular value sensitivity approach to robust eigenstructure assignment. In Proceedings of the 1986 25th IEEE Conference on Decision and Control, Athens, Greece, 10–12 December 1986; pp. 121–126.
  7. Van Dooren, P. Some numerical challenges in control theory. In Linear Algebra for Control Theory; Springer: New York, NY, USA, 1994; pp. 177–189.
  8. Weinmann, A. Uncertain Models and Robust Control; Springer Science & Business Media: Vienna, Austria, 2012.
  9. Zimmerman, D.C.; Kaouk, M. Eigenstructure assignment approach for structural damage detection. AIAA J. 1992, 30, 1848–1855.
  10. Saraf, P.; Balasubramaniam, K.; Hadidi, R.; Makram, E. Partial right eigenstructure assignment based design of robust damping controller. In Proceedings of the 2016 IEEE Power and Energy Society General Meeting (PESGM), Boston, MA, USA, 17–21 July 2016; pp. 1–5.
  11. Luenberger, D. Dynamic equations in descriptor form. IEEE Trans. Autom. Control 1977, 22, 312–321.
  12. Ishihara, J.Y.; Terra, M.H. Impulse controllability and observability of rectangular descriptor systems. IEEE Trans. Autom. Control 2001, 46, 991–994.
  13. Feng, Y.; Yagoubi, M. Robust Control of Linear Descriptor Systems; Springer: Singapore, 2017.
  14. Chu, E.K. A pole-assignment algorithm for linear state feedback. Syst. Control Lett. 1986, 7, 289–299.
  15. Chu, E.K. A pole-assignment problem for 2-dimensional linear discrete systems. Int. J. Control 1986, 43, 957–964.
  16. Duan, G.R. Eigenstructure assignment in descriptor systems via output feedback: A new complete parametric approach. Int. J. Control 1999, 72, 345–364.
  17. Kumar, A.; Kumar, A.; Pait, I.M.; Gupta, M.K. Analysis of impulsive modes in Kronecker canonical form for rectangular descriptor systems. In Proceedings of the IEEE 8th Indian Control Conference (ICC), Chennai, India, 14–16 December 2022; pp. 349–354.
  18. Shahzad, A.; Jones, B.L.; Kerrigan, E.C.; Constantinides, G.A. An efficient algorithm for the solution of a coupled Sylvester equation appearing in descriptor systems. Automatica 2011, 47, 244–248.
  19. Masubuchi, I.; Kamitane, Y.; Ohara, A.; Suda, N. H∞ control for descriptor systems: A matrix inequalities approach. Automatica 1997, 33, 669–673.
  20. Pearson, D.W.; Chapman, M.J.; Shields, D.N. Partial singular-value assignment in the design of robust observers for discrete-time descriptor systems. IMA J. Math. Control Inform. 1988, 5, 203–213.
  21. Shields, D.N. Singular-value assignment in the design of observers for discrete-time systems. IMA J. Math. Control Inform. 1991, 8, 151–164.
  22. Hague, T.N. An Application of Robust H2/H∞ Control Synthesis to Launch Vehicle Ascent. Ph.D. Thesis, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA, USA, 2000.
  23. Sun, X.D.; Clarke, T. Application of hybrid μ/H∞ control to modern helicopters. In Proceedings of the International Conference on Control (Control’94, IET), Coventry, UK, 21–24 March 1994; Volume 2, pp. 1532–1537.
  24. Holder, D.; Luo, S.; Martin, C. The control of error in numerical methods. In Modeling, Estimation and Control; Lecture Notes in Control and Information Sciences; Springer: Berlin/Heidelberg, Germany, 2007; pp. 183–192.
  25. Elsner, L.; Friedland, S. Norm conditions for the convergence of infinite products. Linear Algebra Appl. 1997, 250, 133–142.
  26. Martin, C.F.; Wang, X.A. Singular value assignment. SIAM J. Control Optim. 2009, 48, 2388–2406.
  27. Kautsky, J.; Nichols, N.K.; Van Dooren, P. Robust pole assignment in linear state feedback. Int. J. Control 1985, 41, 1129–1155.
  28. Nichols, N.K. Robustness in partial pole placement. IEEE Trans. Autom. Control 1987, 32, 728–732.
  29. Chu, E.K. Optimisation and pole assignment in control system design. Int. J. Appl. Math. Comput. Sci. 2001, 11, 1035–1053.
  30. Datta, S.; Chakraborty, D.; Chaudhuri, B. Partial pole placement with controller optimization. IEEE Trans. Autom. Control 2012, 57, 1051–1056.
  31. Datta, S.; Chaudhuri, B.; Chakraborty, D. Partial pole placement with minimum norm controller. In Proceedings of the 49th IEEE Conference on Decision and Control, Atlanta, GA, USA, 15–17 December 2010; pp. 5001–5006.
  32. Saad, Y. Projection and deflation methods for partial pole assignment in linear state feedback. IEEE Trans. Autom. Control 1988, 33, 290–297.
  33. Horn, R.A.; Johnson, C.R. Matrix Analysis; Cambridge University Press: Cambridge, UK, 1985.
  34. Golub, G.H.; Van Loan, C.F. Matrix Computations; Johns Hopkins University Press: Baltimore, MD, USA, 1996.
Figure 1. Errors of 50 random tests in Example 1.
Figure 2. Condition number and time of 50 random tests in Example 1.
Figure 3. Error in 50 random tests in Example 2.
Figure 4. Condition number and time of 50 random tests in Example 2.
Figure 5. Sparse structures of nine matrices in Example 3.
Table 1. Assigned results for nine large-scale matrices in Example 3. (CPU: time in seconds; DesSV: desired singular values; AsiSV: assigned singular values; RelErr: relative errors.)

| Problem    | CPU  | DesSV        | AsiSV                | RelErr                 | CondS     | CondPSVA  |
|------------|------|--------------|----------------------|------------------------|-----------|-----------|
| dubcova1   | 1.29 | 0.016; 0.015 | 1.59e−2; 1.49e−2     | 1.21e−13; 3.34e−13     | 2.74e2    | 3.19e2    |
| gridgena   | 9.77 | 0.16; 0.15   | 1.59e−1; 1.49e−1     | 1.08e−6; 2.52e−7       | 1.55e5    | 1.84e5    |
| onetone2   | 2.48 | 0.001; 0.005 | 1.00e−3; 5.00e−3     | 9.39e−15; 8.76e−15     | 3.08e6    | 3.92e6    |
| wathen100  | 4.29 | 0.10; 0.05   | 1.00e−1; 5.00e−2     | 9.99e−15; 3.99e−14     | 5.81e3    | 7.39e3    |
| poli_large | 0.06 | 0.010; 0.005 | 1.00e−2; 5.00e−3     | 5.63e−15; 6.99e−15     | 2.51e3    | 3.75e3    |
| msc10848   | 4.57 | 80.0; 81.0   | 7.99e1; 8.09e1       | 8.06e−9; 7.34e−9       | 7.60e9    | 7.86e9    |
| bodyy4     | 0.95 | 1.0; 0.9     | 9.99e−1; 8.99e−1     | 3.99e−15; 3.99e−1      | 8.06e2    | 9.24e2    |
| bodyy5     | 1.21 | 1.0; 0.9     | 1.00e0; 8.99e−1      | 2.66e−14; 8.99e−14     | 7.87e3    | 8.93e3    |
| bodyy6     | 1.11 | 1.0; 0.9     | 9.99e−1; 8.99e−1     | 1.29e−14; 9.99e−15     | 7.69e4    | 8.68e4    |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Huang, Y.; Tang, Q.; Yu, B. Partial Singular Value Assignment for Large-Scale Systems. Axioms 2023, 12, 1012. https://doi.org/10.3390/axioms12111012
