Article

Random Sampling Many-Dimensional Sets Arising in Control †

1 Institute of Control Sciences, Russian Academy of Science, 117997 Moscow, Russia
2 Department of Biomedical Engineering, Huazhong University of Science and Technology, Wuhan 430074, China
* Author to whom correspondence should be addressed.
This paper is an extended version of the paper published in the Proceedings of the MTNS 2010, the 19th International Symposium on Mathematical Theory of Networks and Systems, Budapest, Hungary, 5–9 July 2010.
Mathematics 2021, 9(5), 580; https://doi.org/10.3390/math9050580
Submission received: 26 January 2021 / Revised: 4 March 2021 / Accepted: 5 March 2021 / Published: 9 March 2021
(This article belongs to the Special Issue Machine Learning and Data Mining in Pattern Recognition)

Abstract

Various Monte Carlo techniques for random point generation over sets of interest are widely used in many areas of computational mathematics, optimization, data processing, etc. Whereas for regularly shaped sets such sampling is straightforward to arrange, for nontrivial, implicitly specified domains these techniques are not easy to implement. We consider the so-called Hit-and-Run algorithm, a representative of the class of Markov chain Monte Carlo methods, which has become popular in recent years. To perform random sampling over a set, this method requires only the knowledge of the intersection of a line through a point inside the set with the boundary of this set. This component of the Hit-and-Run procedure, known as the boundary oracle, has to be performed quickly when applied to economical point representations of many-dimensional sets within the randomized approach to data mining, image reconstruction, control, optimization, etc. In this paper, we consider several vector and matrix sets typically encountered in control and specified by linear matrix inequalities. Closed-form solutions are proposed for finding the respective points of intersection, leading to efficient boundary oracles; these are generalized to robust formulations, where the system matrices contain norm-bounded uncertainty.

1. Introduction

One of the first issues in data mining and pattern recognition is an economical representation of implicitly specified massive data arrays, with the subsequent extraction of specific features and classification [1,2,3]. Every element of the data set can be associated with a vector whose components are the numerical values of certain properties of the phenomenon of interest under one or another operating condition. As a rule, a complete description of the data is not available due to the high dimension and cardinality, and the very absence of an explicit “generating” mechanism. Instead, a sufficiently representative sample from the data set may be considered, leading to a point representation of the respective continuum domain, and the desired characteristics can then be measured just at these sample points.
There exist many ways to organize such samples. The most obvious one is to arrange a deterministic rectangular grid over the respective many-dimensional domain. However, this approach suffers from several drawbacks. First, for high dimensions, the density of the grid needed for a reasonable representation of the set is very large; second, it can be efficiently implemented only for box-shaped sets; finally, sets that differ from rectangles may be embedded into larger boxes, but then the so-called rejection rate (the fraction of “idle” grid points lying outside the domain of interest) may grow very quickly with the dimension. A more efficient technique relies on the generation of quasi-random points in the feasible domain, also referred to as LPτ sequences, introduced in [4]; however, this mechanism heavily exploits the box shape of the set. For a discussion of these questions, see [5] and the references therein.
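As a quick numerical illustration of the rejection-rate effect (a sketch, not taken from the paper; the cube, ball, and sample size are illustrative choices), the fraction of uniform points from the cube [−1, 1]^n that land inside the inscribed unit ball collapses rapidly with the dimension n:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fraction of uniform samples from the cube [-1, 1]^n that fall into the
# inscribed unit ball; the complement of this fraction is the rejection rate.
for n in (2, 5, 10, 20):
    pts = rng.uniform(-1.0, 1.0, size=(100_000, n))
    inside = np.linalg.norm(pts, axis=1) <= 1.0
    print(f"n = {n:2d}: acceptance rate ~ {inside.mean():.2e}")
```

Already for n = 20 the true acceptance rate is of order 10^{-8}, so a sample of this size typically contains no points of the ball at all.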
Recently, a purely randomized approach to the solution of various problems in the above-mentioned fields has become very popular [6]. The overall idea is to embed a deterministic formulation of the problem into a stochastic setup and to obtain an approximate solution, which is “close” to that of the original problem with a certain probability. Often, such a reformulation considerably diminishes the computational effort; moreover, the associated probabilities can be efficiently estimated. This is particularly important for many-dimensional problems involving massive data arrays and containing various uncertainties and inaccuracies.
As compared to deterministic techniques, straightforward random sampling can be applied to a much wider class of sets; still, its application is limited to sets that are “regular” in a sense, for example, those bounded in some vector or matrix norm [6]. However, it often happens that the only information about the set is available in the form of the so-called boundary oracle, which is formulated as follows. Given a point inside the set and an arbitrary line through it, the oracle returns the intersection points of this line and the boundary of the set. The term “boundary oracle,” presumably introduced in [7], originates in optimization theory, where various assumptions on the available information about the function under minimization and the constraint set lead to estimates of the complexity of the corresponding optimization methods [8,9]. For instance, search methods assume just the knowledge of the function value at any point; in first-order methods, the derivative is also available, i.e., we possess a gradient oracle; in second-order methods, on top of that, we also have a Hessian oracle. The same goes for the constraint set, for which various assumptions (such as a membership oracle or a separation oracle) are adopted [9].
One of the efficient sampling procedures based on the availability of boundary oracles is the Hit-and-Run (HR) algorithm, which was proposed in [10] with the primary goal of facilitating the numerical calculation of multi-dimensional integrals over convex domains. It became popular after the publication of [11], and later, various modifications and ramifications of this algorithm were proposed; for example, see [12,13]. In particular, a promising Markov chain sampler, the billiard walk, exploits “reflections” of a moving particle from the boundary of a convex set. It was analyzed in depth in [14] and proved rather efficient; however, it requires not only the availability of a boundary oracle but also information about the curvature of the boundary, and it is more time-consuming to implement.
The basic HR procedure is very simple to implement and requires little information about the problem. It was proved to have a polynomial mixing time; i.e., the number of iterations required to “approximately” reach the desired theoretical distribution grows polynomially in the dimension [12]. In applications, it exhibits quite fast practical convergence for so-called isotropic sets. It can also be generalized to deal with non-convex sets and to produce limiting distributions that differ from the uniform one. At the same time, the algorithm is not free of drawbacks. First, the theoretically guaranteed convergence is very slow. Also, the method tends to get stuck at the corners of “thin,” non-isotropic sets; on the other hand, there exist modifications of the HR algorithm that are capable of escaping the corners [15].
Overall, Hit-and-Run is widely considered to be one of the preferred practical and theoretical tools for uniform sampling inside large-dimensional convex bodies; for example, see [14,16,17]. Moreover, in [17], a comparison tool was proposed to quantify the performance of several commonly used Markov chain sampling techniques, and it turned out that the basic HR procedure outperforms the other approaches and is less computationally intensive.
Those interested in the history of the HR algorithm are referred to [10,11,12]; later developments can be found in [18] and also in [19,20], where a promising modification using barrier functions was proposed; of the most recent papers we mention [17,21,22].
The HR algorithm has found applications in random sampling inside various sets (non-convex and not connected as well), approximate computation of the volume of convex sets [23] and, most efficiently, in the probabilistic approach to numerical optimization. For the latter, see, for example, [24,25], where the authors proposed new versions of the cutting plane scheme for use in efficient methods of optimization. Namely, the HR points were generated with the aim of approximating the center of gravity of the support set of the optimized function. Other applications include random walks in graph spaces [26], machine learning [16,27], etc. In particular, for a specific determinantal point process problem, a version of the HR sampler based on the zonotopic structure of the problem was proposed in [16]. This approach, though being faster than the conventional HR, requires solving additional linear programs and is not easy to implement.
However, in spite of the wide range of applications of the Hit-and-Run sampler, to the best of our knowledge, almost no research has been devoted to its use in control. Perhaps the only paper in this direction is [28], where this method was applied to the design of stabilizing controllers in the simplest form.
In this paper we do not deal with applications of the HR to specific control problems; this is the subject for future research. Nor do we analyze the convergence and optimization of the performance of the HR algorithm. Instead, we limit ourselves to the considerations related to its BO component, and propose several numerical schemes for the implementation of this component.
In principle, instead of having a BO, the availability of a weaker membership oracle might be assumed, and a straightforward line search can then be used. However, this search has to be performed many times, so that it is important to provide a “closed-form” solution to this operation in order to improve the performance of the algorithm. By closed-form solution we also mean various numerical procedures such as finding the roots of a polynomial or the eigenvalues of a matrix, solving semidefinite programs (SDPs) of moderate size and the like. These operations are indeed very fast and numerically efficient in the standard Matlab implementation.
In this paper, we propose efficient computations of several BOs for various vector and matrix sets defined by linear matrix inequalities (LMIs).
The sets that we analyze in the paper appear in control problems in a natural way, and we consider sets specified by LMIs in the canonical form, the classical matrix Lyapunov inequality, and algebraic matrix Riccati inequalities. We also pay attention to various robust versions of these inequalities, where the matrix coefficients contain additive uncertainty.
We focus on the sets defined by LMIs because of the generality and flexibility of this technique and its widespread use in systems theory; see the classical book [29] and the recent survey paper [30]. Overall, LMI sets describe a vast variety of convex sets. Notably, because of convexity, only two intersection points are to be found.
One of the few works in this research direction is the conference paper [31], where first attempts were made at the implementation of such boundary oracles. Since then, the possible applications of the HR algorithm have become broader, and some of them motivated us to return to this subject and consider it in a different context. For instance, one of the novel applications of this technique might be the processing of huge data arrays obtained from ultrasound computed tomography in medical diagnostics, such as in [32]. This may lead to fast image reconstruction algorithms and improve the quality of the image.
Several major changes have been made in the current paper as compared to [31]. The introduction is completely rewritten to better substantiate the importance of the research; for the same purpose, the order of exposition and the overall wording are changed. The bibliography is considerably extended to account for newly available developments; some proofs are added, and others are corrected and made more transparent. Several mathematical inaccuracies in the exposition are also corrected, new notation is introduced to facilitate the understanding of the material, and typos are removed.
The notation used in the paper is standard. Namely, $\mathbb{R}^n$ and $\mathbb{R}^{k\times m}$ denote the spaces of real column vectors of length $n$ and of real $k\times m$ matrices; $^\top$ is the transposition sign; $\oplus$ and $\otimes$ stand for the Kronecker sum and the Kronecker product of two matrices; the signs $\prec$, $\succ$ ($\preceq$, $\succeq$) denote the negative (positive) definiteness (semidefiniteness) of a matrix; $\|\cdot\|$ is the Euclidean vector norm or the spectral or Frobenius matrix norm (this will be clear from the context); $W^{1/2}$ is the matrix square root of the positive-definite matrix $W$.
We use the following acronyms:
  • BO—Boundary Oracle;
  • HR—Hit-and-Run;
  • LMI—Linear Matrix Inequality;
  • LQR—Linear Quadratic Regulation;
  • SDP—SemiDefinite Programming.

2. Background and an Illustration

2.1. The Hit-and-Run Procedure

We briefly formulate the main components of the HR procedure as applied to convex sets and uniform distributions.
Denote by $x_0 \in \mathbb{R}^n$ an initial point in the interior of a closed convex bounded domain $\mathcal{D} \subset \mathbb{R}^n$, and let $x_j$ denote the point generated at the $j$th step of the procedure. The next point is generated as follows. First, a random unit-length vector $y \in \mathbb{R}^n$ is generated uniformly on the sphere in $\mathbb{R}^n$, and the line $x_j + \lambda y$ through this point in the direction $y$ is considered. Denote by $\underline{x}_j$ and $\bar{x}_j$ the points of intersection of this line and the boundary of $\mathcal{D}$; they are assumed to be obtained with the use of the BO. Then the next point $x_{j+1}$ is generated uniformly at random on the segment $[\underline{x}_j, \bar{x}_j]$, a new random direction is generated, etc. The scheme of the HR algorithm is presented in Figure 1 for $n = 2$.
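A minimal sketch of this loop in Python (not the authors' implementation; the function and variable names are ours) may help fix the mechanics. Here `boundary_oracle(x, y)` is assumed to return the two scalars $\underline{\lambda} \le 0 \le \bar{\lambda}$ such that $x + \lambda y \in \mathcal{D}$ exactly for $\lambda \in [\underline{\lambda}, \bar{\lambda}]$:

```python
import numpy as np

def hit_and_run(x0, boundary_oracle, num_samples, rng=None):
    """Generate a sequence of points in a convex body D by the Hit-and-Run
    walk, starting from an interior point x0.  boundary_oracle(x, y) must
    return (lam_lo, lam_hi) such that x + lam * y lies in D exactly for
    lam in [lam_lo, lam_hi]."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(num_samples):
        y = rng.standard_normal(x.shape)
        y /= np.linalg.norm(y)                   # uniform direction on the sphere
        lam_lo, lam_hi = boundary_oracle(x, y)   # intersection with the boundary
        x = x + rng.uniform(lam_lo, lam_hi) * y  # uniform point on the chord
        samples.append(x.copy())
        # (memorizing x + lam_lo*y and x + lam_hi*y here would give the
        #  "side-product" boundary points mentioned below)
    return np.array(samples)

# Toy test: uniform sampling in the Euclidean unit ball, whose oracle is explicit.
def ball_oracle(x, y):
    # Roots of ||x + lam*y||^2 = 1 with ||y|| = 1.
    b, c = x @ y, x @ x - 1.0
    d = np.sqrt(b * b - c)
    return -b - d, -b + d

points = hit_and_run(np.zeros(3), ball_oracle, 1000)
```

The explicit ball oracle at the end is only for testing; the boundary oracles developed in the subsequent sections play the same role for LMI-defined sets.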
In the HR-related literature, it is shown that, under mild conditions on the set $\mathcal{D}$ and the choice of the initial point $x_0$, the resulting sequence of random points tends to the uniform distribution over $\mathcal{D}$ as the number of points grows.
It is interesting to note that the HR method and, particularly, its BO component can be used not only for generating points inside sets, but also for an approximate point description of the boundary of implicitly specified sets. This description is obtained at no extra cost by simply memorizing the “side-product” endpoints $\underline{x}_j$ and $\bar{x}_j$. We note that there also exist specialized methods that generate points exactly on the boundary of the sets of interest [33].

2.2. Illustration: Static Output Feedback Stabilization

Let us consider a linear time invariant system in the state-space description
$$\dot{x} = Ax + Bu, \qquad y = Cx. \qquad (1)$$
Here the system matrices $A \in \mathbb{R}^{n\times n}$, $B \in \mathbb{R}^{n\times m}$, $C \in \mathbb{R}^{k\times n}$ are fixed and known. Assume that there exists a static output control $u = Ky$ that makes the closed-loop matrix $A + BKC$ Hurwitz stable, and let the set $\mathcal{D} \subset \mathbb{R}^{m\times k}$ of all such stabilizing gain matrices $K$ be bounded. A typical control problem is to optimize certain “engineering” performance indices, such as the overshoot, oscillation, or damping time, over the set of these matrix gains.
First, it is well known that the very existence of a stabilizing output feedback is hard to establish [34], not to mention optimizing such a control. Second, the engineering performance indices mentioned above are not easy to formalize and optimize even in the much simpler situation where state feedback is considered.
As an alternative to the existing techniques, we can instead arrange random sampling over the domain $\mathcal{D}$, compute the value of the performance index for each sample, and pick the best one. This approach to optimal controller design assumes the availability of a sufficiently representative sample set in $\mathcal{D}$, and such a set can be obtained by using the HR algorithm. In its simplest form, this approach was considered in [35,36], where only low-order controllers were considered and the domain $\mathcal{D}$ was two-dimensional. No random sampling was used; instead, the authors suggested picking a point in $\mathcal{D}$ with a mouse on the computer screen and computing the respective performance index. Such a “manual,” semi-heuristic approach, being very simple, often leads to controllers that outperform those obtained by the methods available in the literature.
To arrange the HR algorithm in the general many-dimensional setting, assume that we possess a feasible $K \in \mathcal{D}$, denote by $L \in \mathbb{R}^{m\times k}$ a matrix direction, take $K + \lambda L$ instead of $K$, and consider the $\lambda$-dependent $n\times n$ matrix $A + B(K + \lambda L)C$, where $\lambda \in \mathbb{R}$. Then the goal of the BO is to find the minimum value $\underline{\lambda}$ and the maximum value $\bar{\lambda}$ of $\lambda \in \mathbb{R}$ that guarantee the stability of $A + B(K + \lambda L)C$ over the whole segment $[\underline{\lambda}, \bar{\lambda}]$.
To further simplify notation, denote $A_0 := A + BKC$ and $A_1 := BLC$; then we have $A + B(K + \lambda L)C = A_0 + \lambda A_1$, where $A_0$ is stable. Hence, the boundary oracle reduces to finding the so-called unidirectional stability segment of maximum length [37].
For the clarity of the overall exposition, we present the corresponding result.
Lemma 1
([37]). Let $A_0, A_1 \in \mathbb{R}^{n\times n}$, and let $A_0$ be Hurwitz stable. The minimal and the maximal values of the parameter $\lambda \in \mathbb{R}$ that guarantee the stability of $A_0 + \lambda A_1$ are given by
$$\underline{\lambda} = \frac{1}{\min_{\lambda_i < 0} \lambda_i} \;\; (\text{or } -\infty \text{ if all } \lambda_i > 0); \qquad \bar{\lambda} = \frac{1}{\max_{\lambda_i > 0} \lambda_i} \;\; (\text{or } +\infty \text{ if all } \lambda_i < 0),$$
where the $\lambda_i$ are the real eigenvalues of the matrix $-(A_0 \oplus A_0)^{-1}(A_1 \oplus A_1)$, and $\oplus$ denotes the Kronecker sum.
We recall that the Kronecker sum of two square matrices $A$ and $B$ is defined as $A \oplus B = A \otimes I_a + I_b \otimes B$, where $I_a$, $I_b$ are identity matrices of appropriate dimensions and $\otimes$ stands for the Kronecker product of matrices.
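The following sketch (ours, in Python with NumPy/SciPy; not the authors' code) computes the stability segment by locating the real values of $\lambda$ at which the Kronecker-sum pencil $(A_0 \oplus A_0) + \lambda (A_1 \oplus A_1)$ becomes singular, which is the characterization behind Lemma 1; sign conventions may differ from the lemma's wording, but the returned interval is the same.

```python
import numpy as np
from scipy.linalg import eigvals

def kron_sum(a, b):
    """Kronecker sum of two square matrices."""
    return np.kron(a, np.eye(b.shape[0])) + np.kron(np.eye(a.shape[0]), b)

def stability_segment(a0, a1):
    """Maximal interval [lam_lo, lam_hi] around 0 on which A0 + lam*A1 stays
    Hurwitz stable, A0 being Hurwitz stable.  Stability is lost exactly when
    the Kronecker-sum pencil (A0 (+) A0) + lam*(A1 (+) A1) becomes singular."""
    k0, k1 = kron_sum(a0, a0), kron_sum(a1, a1)
    # Generalized eigenvalues mu with (k0 + mu*k1) v = 0, i.e. eig of (k0, -k1).
    mu = eigvals(k0, -k1)
    mu = mu[np.isfinite(mu)]
    mu = mu[np.abs(mu.imag) < 1e-9].real
    neg, pos = mu[mu < 0], mu[mu > 0]
    return (neg.max() if neg.size else -np.inf,
            pos.min() if pos.size else np.inf)
```

Here `eigvals(k0, -k1)` returns the generalized eigenvalues $\mu$ with $(K_0 + \mu K_1)v = 0$, i.e., exactly the critical values of $\lambda$.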
Returning to our problem, we see that the matrix segment $[K + \underline{\lambda}L,\; K + \bar{\lambda}L]$ obtained with the use of the lemma above belongs to the set $\mathcal{D}$ of output-stabilizing gain matrices. Therefore, the next matrix (i.e., the next HR point) is generated randomly on this segment.
Unfortunately, in Lemma 1 one has to compute the eigenvalues of $n^2 \times n^2$ matrices, which makes the computations slow for high dimensions. For instance, already for $n = 25$, the computations take about a second on a standard laptop, which is rather slow, keeping in mind the multiple executions of this procedure required in the HR algorithm. On the other hand, (i) typically, problems arising in control are of lower dimension, and (ii) the availability of a boundary oracle can be relaxed to the assumption of having a membership oracle, so that a boundary oracle can always be constructed by means of a line search. However, for a wide range of problems, a boundary oracle can be easily formulated in closed form, and below we present particular boundary oracles for matrix sets specified by linear matrix inequalities.

3. LMI Sets in Canonical Form

We turn to the construction of boundary oracles related to sets defined by linear matrix inequalities and consider LMIs in the canonical form.

3.1. The Uncertainty-Free Setup

Introduce the following matrix-valued function of a vector variable:
$$F(x) := F_0 + \sum_{i=1}^{n} x_i F_i,$$
where $x = (x_1, \dots, x_n) \in \mathbb{R}^n$ is the variable and $F_i = F_i^\top \in \mathbb{R}^{m\times m}$, $i = 0, \dots, n$, are fixed matrix coefficients.
Consider the convex domain defined as
$$\mathcal{D} = \{x \in \mathbb{R}^n : F(x) \preceq 0\},$$
where the sign $\preceq$ denotes the negative semidefiniteness of a matrix, and we assume that $\mathcal{D}$ is nonempty.
Let $x, y \in \mathbb{R}^n$ be, respectively, a point in the interior of $\mathcal{D}$ and a direction. To formulate a boundary oracle for this domain, we need to compute the minimal and the maximal values $\underline{\lambda}, \bar{\lambda}$ of the scalar parameter $\lambda \in \mathbb{R}$ that guarantee the sign-definiteness of $F(x + \lambda y)$ over the interval $[\underline{\lambda}, \bar{\lambda}]$. We have
$$F(x + \lambda y) = F_0 + \sum_{i=1}^{n} (x_i + \lambda y_i) F_i =: A_0 + \lambda A_1,$$
where we denoted $A_0 := F(x)$ and $A_1 := \sum_{i=1}^{n} y_i F_i$.
It is seen that $A_0 \prec 0$ and $A_1 = A_1^\top$; hence, similarly to the above, the goal is to compute the minimum and the maximum values of $\lambda \in \mathbb{R}$ that guarantee the sign-definiteness of the affine matrix function $A_0 + \lambda A_1$. The lemma below is a slight modification of the result in [7]; it shows that this can be done by computing the generalized eigenvalues of the matrix pair $(A_0, A_1)$.
Lemma 2.
Let $A_0 \prec 0$ and $A_1 = A_1^\top$; then the critical values of the scalar $\lambda$ that guarantee the negative semidefiniteness of the matrix function $A_0 + \lambda A_1$ are obtained as
$$\underline{\lambda} = \max_{\lambda_i < 0} \lambda_i \;\; (\text{or } -\infty \text{ if all } \lambda_i > 0); \qquad \bar{\lambda} = \min_{\lambda_i > 0} \lambda_i \;\; (\text{or } +\infty \text{ if all } \lambda_i < 0),$$
where the $\lambda_i$ are the real generalized eigenvalues of the matrix pair $(A_0, A_1)$, i.e., the values of $\lambda$ for which $(A_0 + \lambda A_1)e = 0$ for some nonzero $e$.
Proof. 
The proof is nearly trivial: The critical value of $\lambda$ that makes the matrix $A_0 + \lambda A_1$ sign indefinite is also responsible for the loss of nonsingularity; by definition, this means that there exists a nonzero $e \in \mathbb{R}^m$ such that $(A_0 + \lambda A_1)e = 0$.    □
The result of the lemma above applies to symmetric matrices; i.e., it is a special case of Lemma 1, since a symmetric matrix is Hurwitz stable if and only if it is negative definite.
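A compact Python sketch of this boundary oracle (ours; the list `F = [F0, F1, ..., Fn]` of symmetric NumPy matrices is an assumed input format) reads:

```python
import numpy as np
from scipy.linalg import eigvals

def lmi_boundary_oracle(x, y, F):
    """Boundary oracle for D = {x : F(x) = F[0] + x[0]*F[1] + ... + x[n-1]*F[n] <= 0}
    (negative semidefinite), with F a list of symmetric matrices.  For an
    interior point x and a direction y, return the maximal interval
    [lam_lo, lam_hi] on which F(x + lam*y) remains negative semidefinite."""
    a0 = F[0] + sum(xi * Fi for xi, Fi in zip(x, F[1:]))   # F(x), negative definite
    a1 = sum(yi * Fi for yi, Fi in zip(y, F[1:]))
    # Critical lam solve (a0 + lam*a1) e = 0: generalized eigenvalues of (a0, -a1).
    mu = eigvals(a0, -a1)
    mu = mu[np.isfinite(mu)]
    mu = mu[np.abs(mu.imag) < 1e-9].real
    neg, pos = mu[mu < 0], mu[mu > 0]
    return (neg.max() if neg.size else -np.inf,
            pos.min() if pos.size else np.inf)
```

A callable of this form can be passed directly to the `hit_and_run` sketch of Section 2.1.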
As far as the initial point $x_0 \in \operatorname{int} \mathcal{D}$ is concerned, it can be found as a solution $\{\hat{x}, \hat{\eta}\}$ of the following SDP:
$$\min \eta \quad \text{subject to} \quad F(x) \preceq \eta I.$$
If $\hat{\eta} > 0$, the set $\mathcal{D}$ (5) is empty; otherwise we adopt $x_0 = \hat{x}$.
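A possible implementation of this initialization step with CVXPY (an illustration under our naming; the paper's own experiments use the Matlab package cvx instead):

```python
import cvxpy as cp
import numpy as np

def lmi_initial_point(F):
    """Look for an interior point of D = {x : F[0] + x[0]*F[1] + ... <= 0}
    by solving  min eta  s.t.  F(x) <= eta*I.  A strictly negative optimal eta
    certifies a nonempty interior, and the minimizer is taken as x0."""
    m, n = F[0].shape[0], len(F) - 1
    x, eta = cp.Variable(n), cp.Variable()
    Fx = F[0] + sum(x[i] * F[i + 1] for i in range(n))
    Fx = (Fx + Fx.T) / 2                 # enforce symbolic symmetry of the LMI
    prob = cp.Problem(cp.Minimize(eta), [Fx << eta * np.eye(m)])
    prob.solve()
    return x.value, eta.value
```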

3.2. A Robust Formulation

We now consider the situation where the matrix coefficients $F_i$ in (4) contain uncertainties. Specifically, they have the form $F_i + \Delta_i$, where the $\Delta_i \in \mathbb{R}^{m\times m}$ are bounded in the spectral norm, $\|\Delta_i\| \le \varepsilon_i$, $i = 0, \dots, n$, and mutually independent, and the $\varepsilon_i \ge 0$ are given levels of uncertainty. To retain the symmetric structure of the matrices $F_i + \Delta_i$, the $\Delta_i$'s are also assumed to be symmetric. We call such uncertainties admissible.
As a result, we obtain the uncertain linear matrix-valued function
$$F(x, \Delta) = (F_0 + \Delta_0) + \sum_{i=1}^{n} x_i (F_i + \Delta_i)$$
of the vector variable x R n and the respective (convex) robustly feasible domain
$$\mathcal{D}_{\mathrm{rob}} = \{x \in \mathbb{R}^n : F(x, \Delta) \preceq 0 \ \text{for all admissible } \Delta_i\}.$$
As in the uncertainty-free setup, having an $x \in \operatorname{int} \mathcal{D}_{\mathrm{rob}}$ and a directional vector $y \in \mathbb{R}^n$, we aim to compute the minimal value $\underline{\lambda}_{\mathrm{rob}}$ and the maximal value $\bar{\lambda}_{\mathrm{rob}}$ of the parameter $\lambda$ that guarantee that the uncertain LMI $F(x + \lambda y, \Delta) \preceq 0$ holds for all admissible uncertainties $\Delta_i$ over the segment $[\underline{\lambda}_{\mathrm{rob}}, \bar{\lambda}_{\mathrm{rob}}]$. This procedure will be referred to as the robust boundary oracle.
The construction of the feasible and the associated robustly feasible domains is depicted in Figure 2 for $x \in \mathbb{R}^2$, the same symmetric matrices $F_0, F_1, F_2 \in \mathbb{R}^{3\times 3}$ that were used in the example of Figure 1, and certain numerical values of $\varepsilon_i$.
Here, $\underline{\lambda}$ and $\bar{\lambda}$ denote the critical values of the parameter $\lambda$ obtained via formulae (6) and (7) for the corresponding uncertainty-free formulation in Lemma 2.
The lemma below, borrowed from [7], presents the robust BO for the uncertain LMI $F(x, \Delta) \preceq 0$; it amounts to solving certain nonlinear equations in one scalar variable.
Lemma 3.
Let $x \in \operatorname{int} \mathcal{D}_{\mathrm{rob}}$ and $y \in \mathbb{R}^n$ be given. Consider the two functions of the variable $\lambda \in \mathbb{R}$:
$$\varphi(\lambda) = \Big\| \Big( F_0 + \sum_{i=1}^{n} (x_i + \lambda y_i) F_i \Big)^{-1} \Big\|,$$
$$\varepsilon(\lambda) = \frac{1}{\varepsilon_0 + \sum_{i=1}^{n} |x_i + \lambda y_i|\, \varepsilon_i}.$$
The critical values $\underline{\lambda}_{\mathrm{rob}}$ and $\bar{\lambda}_{\mathrm{rob}}$ of the parameter $\lambda$, which guarantee that the matrix $F(x + \lambda y, \Delta)$ is negative definite for all admissible $\Delta$, are found as the two solutions of the equation $\varphi(\lambda) = \varepsilon(\lambda)$ on the segment $[\underline{\lambda}, \bar{\lambda}]$ obtained in (6) and (7).
Clearly, the robust BO is more laborious than its uncertainty-free analog: not only does a specific nonlinear equation have to be solved numerically, but the nonrobust bounds $\underline{\lambda}$ and $\bar{\lambda}$ have to be computed as well.
The proof of Lemma 3 uses the same basic reasoning: The loss of robust sign-definiteness of the matrix function $F(x + \lambda y, \Delta)$ coincides with the loss of nonsingularity for some admissible $\Delta_i$'s. The proof heavily exploits the independence of the $\Delta_i$'s and uses the symmetric analog of Theorem 3 in [38] on the radius of matrix nonsingularity.
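Numerically, the two roots can be bracketed on the nonrobust segment and found by any scalar root finder. The sketch below (ours, not from the paper) uses Brent's method and assumes the nonrobust segment is finite, at least one $\varepsilon_i$ is positive, and $x$ lies strictly inside $\mathcal{D}_{\mathrm{rob}}$, so that $\varphi(0) < \varepsilon(0)$:

```python
import numpy as np
from scipy.optimize import brentq

def robust_lmi_oracle(x, y, F, eps, lam_lo, lam_hi):
    """Robust boundary oracle of Lemma 3: the two roots of phi(lam) = eps(lam)
    on the nonrobust segment [lam_lo, lam_hi].  F = [F0, ..., Fn] are the
    nominal symmetric coefficients, eps = [eps0, ..., epsn] the uncertainty levels."""
    def phi(lam):
        z = x + lam * y
        Fz = F[0] + sum(zi * Fi for zi, Fi in zip(z, F[1:]))
        return np.linalg.norm(np.linalg.inv(Fz), 2)     # ||F(x + lam*y)^{-1}||

    def gap(lam):
        z = x + lam * y
        return phi(lam) - 1.0 / (eps[0] + sum(abs(zi) * ei for zi, ei in zip(z, eps[1:])))

    # gap(0) < 0 since x is strictly robustly feasible, while gap -> +infinity
    # at the nonrobust endpoints, where F(x + lam*y) becomes singular; shrink
    # the brackets slightly to stay away from the singularity.
    shrink = 1e-9 * (lam_hi - lam_lo)
    return (brentq(gap, lam_lo + shrink, 0.0),
            brentq(gap, 0.0, lam_hi - shrink))
```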
In Figure 3, the plots of the functions $\varphi(\lambda)$ and $\varepsilon(\lambda)$ are depicted for an uncertain LMI with $x \in \mathbb{R}^4$, and the desired segments for $\lambda$ are shown.

4. Quadratic Stabilization

From this section onward we switch our attention from “abstract” LMIs in vector variables to those encountered in control problems; namely, to specific LMIs in matrix variables.

4.1. The Lyapunov Boundary Oracle

First of all we consider the Lyapunov inequality
$$AX + XA^\top + G \preceq 0,$$
where $G = G^\top$ and $A$ are given and $X = X^\top$ is the $n\times n$ matrix variable.
This inequality naturally appears in the design of stabilizing state feedback controllers for continuous-time systems. Specifically, consider the system $\dot{x} = Ax + Bu$ (1) with a controllable pair $(A, B)$; then the gain matrix $K$ of the stabilizing control $u = Kx$ may be found as $K = -B^\top X^{-1}$, where $X \succ 0$ is a solution of the LMI (9) with $G = -2BB^\top$, and $V(x) = x^\top X^{-1} x$ is a quadratic Lyapunov function for the closed-loop system [29].
Therefore, the convex closed set of matrices
$$\mathcal{D} = \{X \in \mathbb{R}^{n\times n} : AX + XA^\top + G \preceq 0,\; X \succ 0\}$$
specified by the LMI (9) is of obvious interest in control design, and we formulate its boundary oracle.
Such an oracle can be computed with the use of Lemma 2. Given an $X \in \operatorname{int} \mathcal{D}$ and a matrix direction $Y = Y^\top$, we obtain
$$A(X + \lambda Y) + (X + \lambda Y)A^\top + G \preceq 0.$$
Denoting $A_0 := AX + XA^\top + G$ and $A_1 := AY + YA^\top$ and keeping in mind that $A_0 \prec 0$ and $A_1 = A_1^\top$, we find ourselves in the conditions of Lemma 2, which gives us the interval $\Lambda_A$ of sign-definiteness of the $\lambda$-dependent function $A_0 + \lambda A_1$. Since the matrix $X + \lambda Y$ is to be positive definite (as the matrix of a Lyapunov function), we again use Lemma 2 to calculate the corresponding range $\Lambda_X$ of $\lambda$ retaining the sign-definiteness of $X + \lambda Y$. We then adopt $\Lambda = \Lambda_A \cap \Lambda_X$ as the desired maximal interval of $\lambda$ for which the matrix $X + \lambda Y$ belongs to $\mathcal{D}$ (10).
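In code, this amounts to two generalized-eigenvalue computations and an intersection of intervals; a sketch (ours, in Python, with the same pencil routine as after Lemma 2):

```python
import numpy as np
from scipy.linalg import eigvals

def pencil_segment(a0, a1):
    """Maximal interval around 0 on which the symmetric pencil a0 + lam*a1
    keeps the (strict) sign of a0; cf. Lemma 2."""
    mu = eigvals(a0, -a1)
    mu = mu[np.isfinite(mu)]
    mu = mu[np.abs(mu.imag) < 1e-9].real
    neg, pos = mu[mu < 0], mu[mu > 0]
    return (neg.max() if neg.size else -np.inf,
            pos.min() if pos.size else np.inf)

def lyapunov_boundary_oracle(A, G, X, Y):
    """Boundary oracle for D = {X > 0 : A X + X A' + G <= 0}: the maximal
    interval [lam_lo, lam_hi] such that X + lam*Y stays in D."""
    a0 = A @ X + X @ A.T + G              # negative definite for X in int(D)
    a1 = A @ Y + Y @ A.T
    lo1, hi1 = pencil_segment(a0, a1)     # keeps the Lyapunov LMI satisfied
    lo2, hi2 = pencil_segment(-X, -Y)     # keeps X + lam*Y positive definite
    return max(lo1, lo2), min(hi1, hi2)
```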

4.2. A Robust Formulation

The Lyapunov BO of the previous subsection can be generalized to the situation where the matrix A in (9) contains uncertainty. We consider the model of structured uncertainty, which often appears in control applications:
$$A(\Delta) = A + M\Delta N.$$
Here the matrix perturbation $\Delta \in \mathbb{R}^{p\times q}$ is only known to be bounded in the spectral norm, $\|\Delta\| \le 1$, and the matrices $M \in \mathbb{R}^{n\times p}$, $N \in \mathbb{R}^{q\times n}$ are given.
In such a setup, the uncertain Lyapunov inequality
$$A(\Delta)X + XA^\top(\Delta) + G \preceq 0$$
is used for the construction of a common quadratic Lyapunov function with matrix X for the uncertain system and yields robustly stabilizing controllers.
This gives rise to the robust Lyapunov set
$$\mathcal{D}_{\mathrm{rob}} = \{X \succ 0 : \text{inequality (12) holds for all } \|\Delta\| \le 1\},$$
and the goal is to devise a boundary oracle for it.
The main technical trick for operating with uncertainty (11) is the so-called Petersen lemma proposed in [39]; it provides necessary and sufficient conditions for an uncertain matrix to be sign-definite.
Lemma 4.
Assume that $R = R^\top \in \mathbb{R}^{n\times n}$, $P \in \mathbb{R}^{n\times p}$, $Q \in \mathbb{R}^{q\times n}$. Then, for the inequality
$$R + P\Delta Q + (P\Delta Q)^\top \prec 0$$
to hold for all $\Delta \in \mathbb{R}^{p\times q}$, $\|\Delta\| \le 1$, it is necessary and sufficient that there exist $\varepsilon > 0$ such that
$$R + \varepsilon PP^\top + \frac{1}{\varepsilon}\, Q^\top Q \prec 0.$$
We stress that, with the lemma above, checking the robust sign-definiteness of a continuum matrix family reduces to a convex problem in one scalar variable. To see this, using the Schur complement, represent inequality (15) in the equivalent form
$$\begin{pmatrix} R + \varepsilon PP^\top & Q^\top \\ Q & -\varepsilon I \end{pmatrix} \prec 0,$$
which is an LMI in the scalar variable $\varepsilon$.
Moreover, with this machinery, the maximal admissible span for the uncertainty Δ can be found:
$$\gamma_{\max} = \sup\{\gamma : \text{inequality (14) holds for all } \|\Delta\| \le \gamma\}.$$
It is sometimes called the radius of robust sign-definiteness of the uncertain matrix $R + P\Delta Q + (P\Delta Q)^\top$; computing this quantity reduces to the solution of a semidefinite program in two scalar variables with a “scaled” LMI constraint of the form (16); see [40].
With the Petersen lemma in mind, we are ready to formulate the boundary oracle for the set (13); it will be referred to as the robust Lyapunov BO.
Theorem 1.
Assume that $X \in \mathcal{D}_{\mathrm{rob}}$ (11)–(13) and $Y = Y^\top \in \mathbb{R}^{n\times n}$ are given. Denote $\Lambda_A = [\underline{\lambda}, \bar{\lambda}]$, where $\underline{\lambda}$ and $\bar{\lambda}$ are found as the solutions of the two semidefinite programs
$$\min / \max \ \lambda \quad \text{subject to}$$
$$\begin{pmatrix} AX + XA^\top + G + \lambda(AY + YA^\top) + \varepsilon MM^\top & (X + \lambda Y)N^\top \\ N(X + \lambda Y) & -\varepsilon I \end{pmatrix} \preceq 0, \qquad X + \lambda Y \succ 0$$
in the scalar variables $\lambda, \varepsilon$.
The maximum interval of $\lambda \in \mathbb{R}$ that guarantees $X + \lambda Y \in \mathcal{D}_{\mathrm{rob}}$ is given by $\Lambda = \Lambda_A \cap \Lambda_X$, where $\Lambda_X \subset \mathbb{R}$ denotes the maximal interval of sign-definiteness of the pencil $X + \lambda Y$.
Proof. 
Having a feasible $X \in \mathcal{D}_{\mathrm{rob}}$ defined by (11)–(13) and a matrix direction $Y$, we aim to find the minimum and the maximum values of $\lambda$ for which the inequality
$$(A + M\Delta N)(X + \lambda Y) + (X + \lambda Y)(A + M\Delta N)^\top + G \preceq 0$$
is satisfied for all $\|\Delta\| \le 1$.
To simplify notation, let us denote
$$R(\lambda) = AX + XA^\top + G + \lambda(AY + YA^\top)$$
and
$$P = M; \qquad Q(\lambda) = N(X + \lambda Y);$$
then (18) takes the compact form
$$R(\lambda) + P\Delta Q(\lambda) + \big(P\Delta Q(\lambda)\big)^\top \preceq 0.$$
By the Petersen lemma, this inequality holds for all admissible $\|\Delta\| \le 1$ and for a given $\lambda$ if and only if there exists $\varepsilon > 0$ such that
$$\begin{pmatrix} R(\lambda) + \varepsilon PP^\top & Q^\top(\lambda) \\ Q(\lambda) & -\varepsilon I \end{pmatrix} \preceq 0.$$
Since the functions R ( λ ) and Q ( λ ) are affine in  λ , the last inequality is seen to be an LMI in the two scalar variables  λ and  ε . Therefore, finding the critical values of  λ reduces to the minimization and maximization of the variable  λ subject to the LMI constraint above. Finally, the maximum interval of the sign-definiteness of the matrix X + λ Y is found using Lemma 2.    □
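The two optimization problems of Theorem 1 are ordinary SDPs in the scalar variables $(\lambda, \varepsilon)$; a CVXPY sketch is given below (ours, for illustration only; the paper's experiments use cvx in Matlab, and the strict constraint $X + \lambda Y \succ 0$ is replaced by a small margin):

```python
import cvxpy as cp
import numpy as np

def robust_lyapunov_oracle(A, G, M, N, X, Y, margin=1e-8):
    """Robust Lyapunov boundary oracle of Theorem 1 (sketch): minimize and
    maximize lam over the LMI in the scalar variables (lam, eps); the strict
    constraint X + lam*Y > 0 is approximated by X + lam*Y >= margin*I."""
    n, q = A.shape[0], N.shape[0]
    lam, eps = cp.Variable(), cp.Variable(nonneg=True)
    R = A @ X + X @ A.T + G + lam * (A @ Y + Y @ A.T) + eps * (M @ M.T)
    Q = N @ X + lam * (N @ Y)                   # Q(lam) = N (X + lam*Y)
    lmi = cp.bmat([[R, Q.T], [Q, -eps * np.eye(q)]])
    lmi = (lmi + lmi.T) / 2                     # enforce symbolic symmetry
    xy = X + lam * Y
    constraints = [lmi << 0, (xy + xy.T) / 2 >> margin * np.eye(n)]
    bounds = []
    for objective in (cp.Minimize(lam), cp.Maximize(lam)):
        cp.Problem(objective, constraints).solve()
        bounds.append(lam.value)
    return tuple(bounds)
```

The robust Riccati oracle of Theorem 2 below can be implemented in exactly the same way, with the larger block matrices defined there.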
We make two comments at this point.
First, an initial $X \in \operatorname{int} \mathcal{D}_{\mathrm{rob}}$ can be obtained from the solution $\{\hat{X}, \hat{\eta}, \hat{\varepsilon}\}$ of the following SDP in the matrix variable $X$ and two scalar variables $\eta, \varepsilon$:
$$\min \eta \quad \text{subject to}$$
$$\begin{pmatrix} AX + XA^\top + G + \varepsilon MM^\top & XN^\top \\ NX & -\varepsilon I \end{pmatrix} \preceq \eta I, \qquad X \succ 0.$$
In complete analogy with (8), if $\hat{\eta} < 0$, adopt $X = \hat{X}$; otherwise the set $\mathcal{D}_{\mathrm{rob}}$ (13) is empty.
This observation leads to the second comment. If the LMIs (19) are infeasible, we decrease the level of uncertainty and re-solve the SDP to test the nonemptiness of  D rob . On the other hand, by solving a similar semidefinite program, the maximal allowed level
$$\gamma_{\mathrm{rob}} = \max\{\gamma : \exists X \succ 0 \ \text{such that inequality (12) holds for all } \|\Delta\| \le \gamma\},$$
the radius of robust quadratic stabilizability can be calculated in advance; cf. (17) and see [30,40] for the details.

5. Optimal Control

Assume now that, on top of stabilizing system (1) by means of linear control u = K x , we want to optimize a certain performance index.
In one of the classical optimal control problems, the linear quadratic regulation (LQR) problem, the quadratic cost functional has the form
$$J(K) = \int_0^\infty \big(x^\top W_x x + u^\top W_u u\big)\, dt,$$
where $W_x \succ 0$ and $W_u \succ 0$ are weight matrices. The minimization of this cost leads to the algebraic Riccati equation
$$XW_xX + AX + XA^\top - BW_u^{-1}B^\top = 0$$
in the matrix variable $X$; under certain conditions on $A, B$, its unique positive-definite solution defines the optimal gain $K = -W_u^{-1}B^\top X^{-1}$.
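Writing $P = X^{-1}$ turns this into the familiar form $A^\top P + PA - PBW_u^{-1}B^\top P + W_x = 0$, which standard solvers handle directly; a quick cross-check in Python (ours, with illustrative random data):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Cross-check of the LQR connection on illustrative random data (our sketch).
# SciPy solves  A'P + P A - P B Wu^{-1} B' P + Wx = 0;  with X = P^{-1} this is
# the Riccati equation above, and the optimal gain is K = -Wu^{-1} B' X^{-1} = -Wu^{-1} B' P.
rng = np.random.default_rng(1)
n, m = 4, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
Wx, Wu = np.eye(n), np.eye(m)

P = solve_continuous_are(A, B, Wx, Wu)
X = np.linalg.inv(P)
K = -np.linalg.solve(Wu, B.T) @ P

# X satisfies the Riccati equation in the form used in the text ...
residual = X @ Wx @ X + A @ X + X @ A.T - B @ np.linalg.solve(Wu, B.T)
assert np.abs(residual).max() < 1e-6
# ... and the closed-loop matrix A + B K is Hurwitz stable.
assert np.linalg.eigvals(A + B @ K).real.max() < 0
```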

5.1. The Riccati Boundary Oracle

In a slightly more general formulation, consider the set
$$\mathcal{D} = \{X \in \mathbb{R}^{n\times n} : XW_xX + AX + XA^\top - G \preceq 0,\; X \succ 0\},$$
where $G \succeq 0$ is a given real $n\times n$ matrix. This set accumulates all possible solutions $X \succ 0$ of the algebraic Riccati inequality, and the boundary oracle for this set is formulated in much the same way as in Section 4.1, with Lemma 2 as the main tool.
Indeed, let us first take the Schur complement and represent the quadratic inequality in the LMI form:
$$\begin{pmatrix} AX + XA^\top - G & XW_x^{1/2} \\ W_x^{1/2}X & -I \end{pmatrix} \preceq 0.$$
Next, considering X + λ Y instead of X, we arrive at
$$A_0 + \lambda A_1 \preceq 0,$$
with
$$A_0 = \begin{pmatrix} AX + XA^\top - G & XW_x^{1/2} \\ W_x^{1/2}X & -I \end{pmatrix} \prec 0$$
and
$$A_1 = \begin{pmatrix} AY + YA^\top & YW_x^{1/2} \\ W_x^{1/2}Y & 0 \end{pmatrix} = A_1^\top.$$
Hence, we are again in the conditions of Lemma 2, which gives us the associated segment $\Lambda_A$.
Next, accounting for the sign-definiteness of the matrix $X + \lambda Y$ and using Lemma 2 once again, we obtain the corresponding feasible segment $\Lambda_X$ and adopt $\Lambda_A \cap \Lambda_X$ as the final answer.
Overall, the computation of the boundary oracle for the set defined by the quadratic inequality reduces to the Lyapunov BO described in Section 4.1; the only essential difference is that now the size of the constraint matrices is twice as large as before.
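A Python sketch of this oracle (ours), following the block decomposition above:

```python
import numpy as np
from scipy.linalg import eigvals, sqrtm

def pencil_segment(a0, a1):
    # Same generalized-eigenvalue routine as in the Lyapunov oracle (Lemma 2).
    mu = eigvals(a0, -a1)
    mu = mu[np.isfinite(mu)]
    mu = mu[np.abs(mu.imag) < 1e-9].real
    neg, pos = mu[mu < 0], mu[mu > 0]
    return (neg.max() if neg.size else -np.inf,
            pos.min() if pos.size else np.inf)

def riccati_boundary_oracle(A, G, Wx, X, Y):
    """Boundary oracle for D = {X > 0 : X Wx X + A X + X A' - G <= 0}, using
    the 2n x 2n Schur-complement form with the blocks A0, A1 described above."""
    n = A.shape[0]
    W12 = np.real(sqrtm(Wx))                  # Wx^{1/2}  (Wx > 0 assumed)
    a0 = np.block([[A @ X + X @ A.T - G, X @ W12],
                   [W12 @ X,             -np.eye(n)]])
    a1 = np.block([[A @ Y + Y @ A.T,     Y @ W12],
                   [W12 @ Y,             np.zeros((n, n))]])
    lo1, hi1 = pencil_segment(a0, a1)         # keeps the Riccati LMI satisfied
    lo2, hi2 = pencil_segment(-X, -Y)         # keeps X + lam*Y positive definite
    return max(lo1, lo2), min(hi1, hi2)
```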

5.2. A Robust Formulation

Let now the matrix $A$ be subject to uncertainty of the same structured form (11). The corresponding robust counterpart of the LQR problem can be formulated and solved in several ways; for example, see [41]. However, following the logic of the current paper, we consider the set
$$\mathcal{D}_{\mathrm{rob}} = \{X \in \mathbb{R}^{n\times n} : XW_xX + A(\Delta)X + XA^\top(\Delta) - G \preceq 0 \ \text{for all } \|\Delta\| \le 1,\; X \succ 0\}$$
defined by the uncertain Riccati inequality and propose a boundary oracle for it.
From the exposition of the two previous subsections it is seen that the robust version of the Riccati boundary oracle can be formulated by exploiting the Schur complement (to transform quadratic inequalities into linear ones) and Petersen lemma (to handle the uncertainty). We omit the obvious and not quite insightful derivations and formulate the result in the theorem below.
Theorem 2.
Let $X \in \operatorname{int} \mathcal{D}_{\mathrm{rob}}$ (21), (11), and let $Y = Y^\top \in \mathbb{R}^{n\times n}$. The maximal and the minimal values of $\lambda$ that guarantee that the matrix segment $X + \lambda Y$ belongs to the set $\mathcal{D}_{\mathrm{rob}}$ (21) are obtained from the solutions of the two semidefinite programs
$$\min / \max \ \lambda \quad \text{subject to}$$
$$\begin{pmatrix} A_1(\varepsilon) + \lambda A_2 & \lambda Z \\ \lambda Z^\top & -I \end{pmatrix} \preceq 0, \qquad X + \lambda Y \succ 0,$$
where
$$A_1(\varepsilon) = \begin{pmatrix} XW_xX + AX + XA^\top - G + \varepsilon MM^\top & XN^\top \\ NX & -\varepsilon I \end{pmatrix};$$
$$A_2 = \begin{pmatrix} XW_xY + YW_xX + AY + YA^\top & YN^\top \\ NY & 0 \end{pmatrix};$$
$$Z = \begin{pmatrix} YW_x^{1/2} \\ 0 \end{pmatrix},$$
and the optimization is performed in the two scalar variables λ and ε.

6. Implementation Issues

We briefly discuss some technical implementation issues and report on the processor time required for the computation of the boundary oracles proposed above.
First, to generate a vector uniformly distributed on the surface of the unit ball in $\mathbb{R}^n$, we use the Matlab routine randn to obtain a random vector $y$ with the standard Gaussian distribution and then normalize it to obtain $y/\|y\|$, which is distributed uniformly on the unit sphere. As far as the generation of the matrix directions $Y$ in Section 4 and Section 5 is concerned, we generate them uniformly at random on the surface of the unit ball in the Frobenius norm (i.e., on the surface of the ball in $\mathbb{R}^{n^2}$), with subsequent symmetrization.
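The same recipes in Python (our sketch; NumPy's generator plays the role of the Matlab routine randn):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_vector_direction(n):
    """Uniform direction on the unit sphere in R^n (Gaussian vector, normalized)."""
    y = rng.standard_normal(n)
    return y / np.linalg.norm(y)

def random_matrix_direction(n):
    """Matrix direction as described in the text: a Gaussian n x n matrix,
    normalized in the Frobenius norm and then symmetrized (one may renormalize
    once more if a direction of exactly unit norm is required)."""
    Y = rng.standard_normal((n, n))
    Y /= np.linalg.norm(Y, 'fro')
    return (Y + Y.T) / 2
```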
The main tool for computing the Lyapunov and Riccati BOs, Lemma 2, requires finding the generalized eigenvalues of a pair of matrices, and it can be efficiently applied to matrices of rather high dimensions.
The robust boundary oracles in Section 4.2 and Section 5.2 require solving semidefinite programs in two scalar variables with LMI constraints having dimension 2 n × 2 n (Theorem 1) or 4 n × 4 n (Theorem 2). To solve these problems, we used the Matlab-compatible software cvx [42].
The following illustrative experiment was performed. We generated randomly N = 1000 samples of the data required in Lemma 2 and Theorems 1 and 2. For each of these samples, we measured the execution time for the four boundary oracles proposed in the paper, and averaged over N samples. The results obtained on a standard laptop are presented in Table 1.
From the first two rows of the table we see that the nonrobust oracles are indeed very fast, since they exploit just an eigenvalue computation, and this operation is “perfectly” implemented in various available toolboxes. In contrast, the robust boundary oracles are based on solving a specific optimization problem, which is more time-consuming. The last row of the table clearly shows that up to $n = 30$, the robust Riccati oracle is fast enough, whereas for $n = 50$ the required execution time is approximately one and a half seconds, which may be considered slow. Note, however, that in the latter case, the matrices in the LMI constraints are of the rather large size $200 \times 200$.
In the experiment, we considered “full” matrices and used the standard freeware cvx for solving semidefinite programs. To speed up the computations in higher dimensions, one may exploit a sparse structure of the system matrices, which is typically the case for control problems, or try to use alternative optimization methods and software. On the other hand, practical control problems have lower dimensions, usually not exceeding n = 15 ; see [43], a popular collection of control-related test problems. In other words, in practice, all the proposed BOs seem to be fast enough to be used in sampling the respective sets.
Another observation is that the execution time grows nonlinearly with increase of problem dimension n. This is because the coefficient matrices in the respective LMIs contain  n 2 elements and because of a specific representation of the LMI constraints in the internal language of cvx solvers. It is also seen that, for a given n, the execution time of the robust Riccati oracle is not exactly the same as that of the robust Lyapunov oracle for the doubled dimension (recall the relative sizes of the respective LMI constraints). This is because the structure of these constraints in the robust Riccati oracle is slightly more complicated.

7. Concluding Remarks and Directions for Future Research

In this paper, we focused on boundary oracles, one of the key components of Hit-and-Run, a popular approach to random generation of points inside bounded sets. We proposed simple computational schemes that implement boundary oracles for several typical control-related matrix sets defined by linear matrix inequalities. At the nucleus of these oracles is the computation of the generalized eigenvalues of a pair of matrices or solving low-dimensional semidefinite programs in scalar variables. These operations are fast as implemented in the Matlab environment, which makes the proposed BOs efficient for use in various random walk methods.
Future research will be targeted at the implementation of boundary oracles for broader classes of sets (e.g., the sets of Schur stable polynomials, positive polynomials, uncertain affine matrix families), which account for the presence of exogenous unknown-but-bounded disturbances, and speeding up the numerical implementation of the HR procedure in these specific problems for higher dimensions. We also note that the results in Section 4 and Section 5 can be easily extended to discrete-time systems by using the discrete-time Lyapunov and Riccati inequalities.
Of the most interest is the application of the machinery presented here to problems of practical interest, in particular, to the design of stabilizing controllers subject to several engineering specifications as mentioned at the beginning of Section 2.2.

Author Contributions

Conceptualization, P.S.; methodology, P.S.; software, M.Y. and P.S.; validation, M.Y.; formal analysis, P.S. and M.D.; investigation, P.S., M.D. and M.Y.; resources, M.D.; data curation, M.D. and M.Y.; writing—original draft preparation, P.S.; writing—review and editing, P.S., M.D. and M.Y.; visualization, M.Y.; supervision, P.S. and M.D.; project administration, M.D.; funding acquisition, M.D., M.Y., and P.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded in part by the National Key Research and Development Program of China under Grant 2016YFE0203900, and in part by the National Natural Science Foundation of China under Grant 82072089. The research of P. Shcherbakov in Theorems 1 and 2, and Section 6 was supported by the Russian Science Foundation (project No. 21-71-30005).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Aggarwal, C.C. Data Mining: The Textbook; Springer: Cham, Switzerland, 2015. [Google Scholar]
  2. Bruni, R.; Bianchi, G. Effective classification using a small training set based on discretization and statistical analysis. IEEE Trans. Knowl. Data Eng. 2015, 9, 2349–2361. [Google Scholar] [CrossRef]
  3. Blum, A.; Hopcroft, J.; Kannan, R. Foundations of Data Science. Available online: https://www.cs.cornell.edu/jeh/book.pdf (accessed on 8 March 2021).
  4. Sobol, I.M. On the distribution of points in a cube and the approximate evaluation of integrals. USSR Comput. Math. Math. Phys. 1967, 4, 86–112. [Google Scholar] [CrossRef]
  5. Polyak, B.; Shcherbakov, P. Why does Monte Carlo fail to work properly in high-dimensional optimization problems? J. Optim. Theory Appl. 2017, 2, 612–627. [Google Scholar] [CrossRef] [Green Version]
  6. Tempo, R.; Calafiore, G.; Dabbene, F. Randomized Algorithms for Analysis and Control of Uncertain Systems, with Applications; Springer: London, UK, 2013. [Google Scholar]
  7. Polyak, B.T.; Shcherbakov, P.S. The D-decomposition technique for solving linear matrix inequalities. Autom. Remote Control 2006, 11, 1847–1861. [Google Scholar] [CrossRef]
  8. Nesterov, Y. Lectures on Convex Optimization; Springer: Cham, Switzerland, 2018. [Google Scholar]
  9. Lee, Y.T.; Sidford, A.; Vempala, S.S. Efficient convex optimization with membership oracles. arXiv 2017, arXiv:1706.07357v1. [Google Scholar]
  10. Turchin, V.F. On calculation of multi-dimensional integrals via the Monte Carlo method. Theory Probab. Appl. 1971, 4, 720–724. [Google Scholar] [CrossRef]
  11. Smith, R.L. Efficient Monte Carlo procedures for generating points uniformly distributed over bounded regions. Oper. Res. 1984, 6, 1296–1308. [Google Scholar] [CrossRef]
  12. Lovász, L. Hit-and-run mixes fast. Math. Program. 1999, 86, 443–461. [Google Scholar] [CrossRef]
  13. Lovász, L.; Vempala, S. Hit-and-Run Is Fast and Fun. Technical Report MSR-TR-2003-05. 2003. Available online: https://web.cs.elte.hu/~lovasz/logcon-hitrun.pdf (accessed on 8 March 2021).
  14. Gryazina, E.N.; Polyak, B.T. Random sampling: Billiard Walk algorithm. Eur. J. Oper. Res. 2014, 2, 497–504. [Google Scholar] [CrossRef] [Green Version]
  15. Lovász, L.; Vempala, S. Hit-and-run from a corner. SIAM J. Comput. 2006, 4, 985–1005. [Google Scholar] [CrossRef]
  16. Gautier, G.; Bardenet, R.; Valko, M. Zonotope Hit-and-run for efficient sampling from projection DPPs. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; pp. 1223–1232. [Google Scholar]
  17. Rudolf, D.; Ullrich, M. Comparison of hit-and-run, slice sampling and random walk Metropolis. J. Appl. Probab. 2018, 4, 1186–1202. [Google Scholar] [CrossRef] [Green Version]
  18. Brooks, S.; Gelman, A.; Jones, G.; Li, M.X. (Eds.) Handbook of Markov Chain Monte Carlo; Chapman & Hall/CRC: Hoboken, NJ, USA, 2011. [Google Scholar]
  19. Kannan, R.; Narayanan, H. Random walks on polytopes and an affine interior point method for linear programming. Math. Oper. Res. 2012, 37, 1–20. [Google Scholar] [CrossRef]
  20. Polyak, B.T.; Gryazina, E.N. Markov chain Monte Carlo method exploiting barrier functions with applications to control and optimization. In Proceedings of the 2010 IEEE International Symposium on Computer-Aided Control System Design (CACSD), Yokohama, Japan, 8–10 September 2010. [Google Scholar]
  21. Chen, Y.; Dwivedi, R.; Wainwright, M.J.; Yu, B. Fast MCMC sampling algorithms on polytopes. J. Mach. Learn. Res. 2018, 1, 2146–2231. [Google Scholar]
  22. Speagle, J.S. A conceptual introduction to Markov Chain Monte Carlo methods. arXiv 2020, arXiv:1909.12313v2. [Google Scholar]
  23. Cousins, B.; Vempala, S. A practical volume algorithm. Math. Program. Comput. 2016, 2, 133–160. [Google Scholar] [CrossRef]
  24. Bertsimas, D.; Vempala, S. Solving convex programs by random walks. J. ACM 2004, 51, 540–556. [Google Scholar] [CrossRef]
  25. Dabbene, F.; Shcherbakov, P.; Polyak, B.T. A randomized cutting plane method with probabilistic geometric convergence. SIAM J. Optim. 2010, 6, 3185–3207. [Google Scholar] [CrossRef] [Green Version]
  26. Alyami, S.; Azad, A.K.M.; Keith, J. A new version of the hit-and-run algorithm to sample graph spaces. In Proceedings of the 39th Australasian Conference on Combinatorial Mathematics and Combinatorial Computing, Brisbane, Australia, 7–11 December 2015. [Google Scholar]
  27. Gartrell, M.; Paquet, U.; Koenigstein, N. Low-rank factorization of determinantal point processes for recommendation. In Proceedings of the 31st AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; pp. 1912–1918. [Google Scholar]
  28. Polyak, B.T.; Gryazina, E.N. Hit-and-Run: New design technique for stabilization, robustness and optimization of linear systems. IFAC Proc. Vol. 2008, 4, 376–380. [Google Scholar] [CrossRef] [Green Version]
  29. Boyd, S.; El Ghaoui, L.; Feron, E.; Balakrishnan, V. Linear Matrix Inequalities in System and Control Theory; SIAM: Philadelphia, PA, USA, 1994. [Google Scholar]
  30. Polyak, B.T.; Khlebnikov, M.V.; Shcherbakov, P.S. Linear matrix inequalities in control systems subject to uncertainty. Autom. Remote Control 2021, 1, 1–40. [Google Scholar] [CrossRef]
  31. Shcherbakov, P. Boundary oracles for control-related matrix sets. In Proceedings of the 19th International Symposium Mathematical Theory of Networks and Systems, Budapest, Hungary, 5–9 July 2010; pp. 665–670. [Google Scholar]
  32. Fang, X.; Song, J.; Liu, K.; Wu, Y.; Zhang, Q.; Ding, M.; Yuchi, M. A prior-information-based combination solution for picking the difference of time-of-flight in ultrasound computed tomography. J. Med. Imaging Health Inform. 2020, 3, 763–768. [Google Scholar] [CrossRef]
  33. Boender, C.G.E.; Caron, R.J.; McDonald, J.F.; Rinnooy Kan, A.H.G.; Romeijn, H.E.; Smith, R.L.; Telgen, J.; Vorst, A.C.F. Shake-and-bake algorithms for generating uniform points on the boundary of bounded polyhedra. Oper. Res. 1991, 6, 945–954. [Google Scholar] [CrossRef]
  34. Blondel, V.L.; Tsitsiklis, J.N. A survey of computational complexity results in systems and control. Automatica 2000, 9, 1249–1274. [Google Scholar] [CrossRef]
  35. Kiselev, O.N.; Polyak, B.T. Design of low-order controllers with H or maximal robustness performance index. Autom. Remote Control 1999, 3, 393–402. [Google Scholar]
  36. Shcherbakov, P.S. Fixed order controller design subject to engineering specifications. Autom. Remote Control 2010, 6, 1217–1229. [Google Scholar] [CrossRef]
  37. Fu, M.; Barmish, B.R. Maximal unidirectional perturbation bounds for stability of polynomials and matrices. Syst. Control Lett. 1988, 11, 173–179. [Google Scholar] [CrossRef]
  38. Polyak, B.T. Robust linear algebra and robust aperiodicity. In Directions in Mathematical Systems Theory and Optimization; Rantzer, A., Byrnes, C.I., Eds.; Springer: Berlin, Germany, 2003; pp. 249–260. [Google Scholar]
  39. Petersen, I. A stabilization algorithm for a class of uncertain systems. Syst. Control Lett. 1987, 8, 351–357. [Google Scholar] [CrossRef]
  40. Shcherbakov, P.S.; Topunov, M.V. Petersen’s lemma on matrix uncertainty and its generalizations. Autom. Remote Control 2008, 11, 1932–1945. [Google Scholar]
  41. Khlebnikov, M.V.; Shcherbakov, P.S. Linear quadratic regulator II. Robust formulations. Autom. Remote Control 2019, 10, 1847–1860. [Google Scholar] [CrossRef]
  42. Grant, M.; Boyd, S. CVX: Matlab Software for Disciplined Convex Programming, Version 2.2, January 2020. Available online: http://cvxr.com/cvx (accessed on 22 February 2021).
  43. Leibfritz, F. COMPleib: Constraint Matrix-Optimization Problem Library—A Collection of Test Examples for Nonlinear Semidefinite Programs, Control System Design and Related Problems; Technical Report; Department of Mathematics, University of Trier: Trier, Germany, 2004; Available online: http://www.complib.de (accessed on 22 February 2021).
Figure 1. The Hit-and-Run scheme.
Figure 2. Feasible and robustly feasible linear matrix inequality (LMI) domains.
Figure 3. The robust boundary oracle.
Table 1. Execution time (milliseconds) of the four boundary oracles for various dimensions n.

Oracle / Dim       n = 5      n = 10     n = 15     n = 20      n = 30      n = 50
Lyapunov           0.0156     0.0374     0.0883     0.1249      0.3030      0.7279
Riccati            0.0383     0.1241     0.3041     0.4831      1.0686      3.4392
robust Lyapunov    16.0069    22.3414    29.0850    38.2422     62.3214     217.0089
robust Riccati     25.1414    41.5722    67.4951    139.0988    332.3392    1385.8729