1. Introduction
In numerical linear algebra, the design of accurate and efficient numerical algorithms for classes of structured matrices has remained an active research topic in recent years. Among these structured matrices, a particularly interesting class is that of totally positive matrices. Several classes of highly structured matrices are studied in [1,2,3,4]. The literature in the listed articles covers many aspects of both theory and application, but it does not address accurate and efficient numerical computation with such structured matrices. More details on the computational aspects of structured totally positive matrices can be found in [5,6,7].
We consider a special class of totally positive structured matrices that is deeply studied and analyzed in [8] in the context of solving systems of linear equations: Bernstein–Vandermonde matrices. This class of matrices has been used to solve and analyze least squares fitting problems in the Bernstein basis [9]. Bernstein–Vandermonde structured matrices are a straightforward generalization of Vandermonde structured matrices, obtained when the Bernstein basis, rather than the monomial basis, is chosen for the space spanned by algebraic polynomials of degree at most n. Bernstein polynomials were originally introduced about a hundred years ago by Sergei Natanovich Bernstein in order to give his famous proof of the Weierstrass approximation theorem. The work of Bézier and de Casteljau introduced Bernstein polynomials into computer-aided geometric design; see [10] for more details.
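Although the paper contains no code, the Bernstein–Vandermonde matrix described above is straightforward to assemble numerically. The following NumPy sketch is an illustration (the node set and the indexing convention are our assumptions, not taken from the paper): entry (i, j) is the j-th Bernstein basis polynomial of degree n − 1 evaluated at the i-th node.

```python
import numpy as np
from math import comb

def bernstein_vandermonde(x, n):
    """Square Bernstein-Vandermonde matrix of order n (one common convention).

    Entry (i, j) is binom(n-1, j) * x_i^j * (1 - x_i)^(n-1-j).
    """
    x = np.asarray(x, dtype=float)
    B = np.empty((len(x), n))
    for j in range(n):
        B[:, j] = comb(n - 1, j) * x**j * (1.0 - x) ** (n - 1 - j)
    return B

# Increasing nodes in (0, 1); for such nodes the matrix is totally positive.
x = np.linspace(0.1, 0.9, 5)
B = bernstein_vandermonde(x, 5)

# Each row sums to 1 because the Bernstein basis is a partition of unity.
print(np.allclose(B.sum(axis=1), 1.0))  # True
```

The row-sum check is a quick sanity test of the construction: it follows directly from the binomial theorem applied to (x + (1 − x))^{n−1}.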
Numerical methods based upon Bézier curves are popular in computer-aided geometric design (CAGD); see [11,12,13,14,15]. Bézier curves are parameterized in terms of the Bernstein polynomial basis.
The theoretical basis for designing fast and accurate algorithms to compute the greatest common divisor of real polynomials $p\left(a\right)$ and $q\left(a\right)$ of degree at most n, expressed in the Bernstein polynomial basis $\{{\alpha}_{0}^{\left(n\right)},\cdots ,{\alpha}_{n}^{\left(n\right)}\}$, where ${\alpha}_{i}^{\left(n\right)}\left(a\right)=\binom{n}{i}{(1-a)}^{n-i}{a}^{i}$, $0\le i\le n$, is studied in [16]. Fast $O\left({n}^{2}\right)$ algorithms to determine the required power form of the polynomials $p\left(a\right)$ and $q\left(a\right)$ are studied in [17,18], and matrix counterparts are given in [19] for the evaluation of the GCD.
In [16], the Bezoutian matrix $B=\left({b}_{i,j}\right)\in {\mathbb{R}}^{n\times n}$ for the polynomials $p\left(a\right)$ and $q\left(a\right)$ is defined by an expression of the following form.
Furthermore, Bezoutian matrices with respect to different polynomial bases have been studied by various authors; see [20,21,22,23,24].
Structured matrices, particularly Vandermonde and Cauchy matrices, appear in vast areas of computation; see [25,26]. Cauchy–Vandermonde structured matrices are useful tools in the numerical approximation of solutions of singular integral equations; for more detail, we refer to [27]. Such structured matrices also occur in connection with the numerical approximation of solutions of quadrature problems [28]. In fact, Cauchy–Vandermonde matrices are ill-conditioned. High accuracy of the numerical approximation has been achieved for this class of structured matrices by carefully exploiting their specific structural properties [1,29,30,31,32,33,34,35,36].
Vandermonde matrices appear in the study of interpolation problems that exploit the monomial basis [25]. Polynomial-Vandermonde matrices appear when a polynomial basis is considered rather than the monomial basis, and such matrices are useful in many applications, including approximation, interpolation, and Gaussian quadrature [37,38,39,40,41,42].
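A small numerical illustration of why the basis matters (a hedged sketch; the nodes and size are our choices, not taken from the paper): compare the condition number of a monomial-basis Vandermonde matrix with that of a Chebyshev-basis polynomial-Vandermonde matrix built on the same nodes.

```python
import numpy as np

n = 12
x = np.linspace(-1.0, 1.0, n)  # equispaced nodes on [-1, 1] (illustrative choice)

V_mono = np.vander(x, n, increasing=True)              # monomial basis 1, x, ..., x^(n-1)
V_cheb = np.polynomial.chebyshev.chebvander(x, n - 1)  # Chebyshev basis T_0, ..., T_(n-1)

cond_mono = np.linalg.cond(V_mono)
cond_cheb = np.linalg.cond(V_cheb)

# The monomial Vandermonde matrix is far worse conditioned than the
# polynomial-Vandermonde matrix in the Chebyshev basis at the same nodes.
print(cond_mono > cond_cheb)  # True
```

The gap grows rapidly with n, which is one motivation for working with structure-exploiting algorithms rather than generic solvers on these matrices.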
An extensive amount of research has been done on high-accuracy numerical approximations for many classes of matrices having specific structures. This includes totally positive and totally negative matrices [6,43], totally nonpositive matrices [44,45], matrices having a rank-revealing decomposition [46,47], rank-structured matrices [48,49], diagonally dominant structured M-matrices [50,51], and structured sign regular matrices [7,52]. The numerical approximation of the eigenvalues of structured quasi-rational Bernstein–Vandermonde matrices to high relative accuracy is studied in much greater detail in [53,54].
An extensive amount of work has also been done on necessary and sufficient criteria for asymptotic swarm stability. The system under consideration achieves asymptotic stability if and only if there exist Hermitian matrices satisfying a complex Lyapunov inequality for all of the system vertex matrices [55].
The symmetry and asymmetry properties of orthogonal polynomials play a key role in solving the systems of differential equations that appear in the mathematical modeling of real-world problems. The classical orthogonal polynomials, for instance, the Hermite, Legendre, and Laguerre polynomials, and the discrete orthogonal polynomials, including the Krawtchouk and Chebyshev polynomials, have numerous widespread applications across many important branches of science and engineering. In [56], Chebyshev polynomials are used to discuss the simulation of a two-dimensional mass transfer equation subject to Robin and Neumann boundary conditions.
The Boolean complexity of multiplying a structured matrix by a vector and of solving nonsingular linear systems of equations with such matrices is studied in [57]. The main focus is on the four basic and most popular classes, that is, Toeplitz, Hankel, Cauchy, and Vandermonde matrices, for which the cited computational problems are equivalent to polynomial multiplication and division and to polynomial and rational multipoint evaluation and interpolation.
In this article, we present the spectral properties of a class of structured matrices. We study the behavior of the eigenvalues, singular values, and structured singular values of totally positive Bernstein–Vandermonde matrices, Bernstein–Bezoutian structured matrices, Cauchy–polynomial-Vandermonde structured matrices, and quasi-rational Bernstein–Vandermonde structured matrices. Furthermore, we also present the numerical approximation of the condition numbers for the structured matrices considered in the current study. Our approach differs from the methodology of [58], where a low-rank ODE-based technique was developed for the stability and instability analysis of linear time-invariant systems appearing in control and was mainly based on a two-level (inner–outer) algorithm. The key and novel contribution of this paper is the study of the spectral properties, particularly the computation of the structured singular values, of totally positive Bernstein–Vandermonde matrices, Bernstein–Bezoutian matrices, Cauchy–polynomial-Vandermonde matrices, and structured quasi-rational Bernstein–Vandermonde matrices.
In Section 2 of this article, we present the definitions of structured totally positive Bernstein–Vandermonde matrices, Bernstein–Bezoutian matrices, Cauchy–polynomial-Vandermonde matrices, and quasi-rational Bernstein–Vandermonde matrices. We give a brief and concise introduction to the numerical approximation of the structured singular values in Section 3. Section 4 contains the main results on the numerical computation of the largest and the smallest singular values; furthermore, the exact behavior of the largest and smallest singular values is also discussed. In Section 5, we present numerical experiments for Bernstein–Vandermonde and Bernstein–Bezoutian matrices; the numerical approximation of the eigenvalues, singular values, and structured singular values is also analyzed and presented. The numerical experiments on the spectral quantities of Cauchy–polynomial-Vandermonde structured matrices and quasi-rational Bernstein–Vandermonde structured matrices are presented in Section 6 and Section 7, respectively. Section 8 contains the numerical tests comparing the approximated lower bounds of the structured singular values for a class of higher-dimensional structured Bernstein–Vandermonde matrices. Finally, in the last section, we present concluding remarks.
4. Main Results
In this section, we present our main results concerning the numerical approximation of the largest singular value ${\sigma}_{max}$ of $A\in {\mathbb{R}}^{n\times n}$. Furthermore, we also discuss the increasing behavior of ${\sigma}_{max}$ and the decreasing behavior of ${\sigma}_{min}$. Theorem 1 below allows the computation of ${\sigma}_{max}$.
Definition 10. For a given matrix $A\in {\mathbb{R}}^{n\times n}$, the scalars ${\lambda}_{i}$ are called the eigenvalues of A if $det(A-{\lambda}_{i}I)=0,$ where I denotes the identity matrix of the same dimension as A.
Definition 11. For a given matrix $A\in {\mathbb{R}}^{n\times n}$, the nonnegative numbers ${\sigma}_{i}$ are known as the singular values of A if A can be decomposed as $A=U\Sigma {V}^{t}$, where U and V are orthogonal matrices and Σ is a diagonal matrix with the ${\sigma}_{i}$ on its main diagonal.
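Definitions 10 and 11 can be checked directly in floating point. The NumPy sketch below is our illustration on a random test matrix (the paper itself works with MATLAB's eig and svd): each computed eigenvalue annihilates the characteristic determinant, and the SVD factors reproduce A with orthogonal U, V and nonnegative singular values.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

# Definition 10: every eigenvalue lambda_i satisfies det(A - lambda_i I) = 0.
eigvals = np.linalg.eigvals(A)
residuals = [abs(np.linalg.det(A - lam * np.eye(4))) for lam in eigvals]

# Definition 11: A = U Sigma V^t with orthogonal U, V and nonnegative Sigma.
U, s, Vt = np.linalg.svd(A)

print(max(residuals) < 1e-8)                # True (up to rounding)
print(np.allclose(U @ np.diag(s) @ Vt, A))  # True
print((s >= 0).all())                       # True
```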
Lemma 1. Let $A:\mathbb{R}\to {\mathbb{C}}^{n\times n}$ be a smooth matrix family, and let $\lambda \left(t\right)$ denote the eigenvalue of $A\left(t\right)$, $t\in \mathbb{R}$, that converges to a simple eigenvalue ${\lambda}_{0}$ of ${A}_{0}$ as $t\to 0$. Then, the continuous branch of eigenvalues $\lambda \left(t\right)$ is analytic near $t=0$, with $\dot{\lambda}\left(0\right)=\frac{{y}_{0}^{\ast}{A}_{1}{x}_{0}}{{y}_{0}^{\ast}{x}_{0}}$. Here, ${A}_{1}=\dot{A}\left(0\right)$, and ${x}_{0}$ and ${y}_{0}$ denote the right and left eigenvectors of ${A}_{0}$ corresponding to ${\lambda}_{0}$.
Theorem 1. For a given $A\in {\mathbb{R}}^{n\times n}$, the largest singular value ${\sigma}_{max}\left(A\right)$ is obtained as ${\sigma}_{max}\left(A\right)=\alpha \beta$, where $\alpha =\frac{{\prod}_{i}{\lambda}_{i}^{\frac{1}{2n}}\left(D\right){\left(\frac{1}{2}\right)}^{\frac{n-1}{n-2}}{\left(\frac{1}{n-1}\right)}^{\frac{n-1}{2(n-2)}}{\sum}_{i}{\left({\sigma}_{i}^{2}\right)}^{\frac{n-1}{n-2}}\left(D\right)}{{n}^{\frac{1}{2}}{\sum}_{i}{\sigma}_{i}^{2}\left(D\right)}$ and $\beta$ is given by a similar quotient constructed in the proof.
Proof. First, we show that the above expressions for $\alpha$ and $\beta$ are valid. We consider the following factorization of A, for which we refer interested readers to [62]. Here, D denotes a nonnegative diagonal matrix, while E is of full rank.
For the quantity ${\prod}_{i}{\lambda}_{i}^{\frac{1}{2n}}\left(D\right){\left(\frac{1}{2}\right)}^{\frac{n-1}{n-2}}{\left(\frac{1}{n-1}\right)}^{\frac{n-1}{2(n-2)}}{\sum}_{i}{\left({\sigma}_{i}^{2}\right)}^{\frac{n-1}{n-2}}\left(D\right)$, which is the numerator of $\alpha$, we make use of the arithmetic–geometric mean inequality, which allows us to write the following inequality for the singular values of D.
Next, we make use of the arithmetic–geometric mean inequality on the quantity ${\sigma}_{1}^{2n-4}\left(D\right){\prod}_{i}{\sigma}_{i}\left(D\right)$, which yields
Equations (12) and (13) allow us to write
Finally, from Equation (14), we have
For the quantity ${n}^{\frac{1}{2}}{\sum}_{i}{\sigma}_{i}^{2}\left(D\right)$, which is the denominator of $\alpha$, we make use of the arithmetic–geometric mean inequality for the singular values of D as follows.
In addition,
The inequalities in Equations (15) and (16) yield
Because the matrix 2-norm of the matrix D can be written as
Equations (17) and (18) therefore imply that
or
In a similar way, we can obtain the expressions for the numerator and denominator of $\beta$. Now, we aim to prove that ${\sigma}_{max}\left(A\right)=\alpha \beta$. Since $D={D}^{t}$ and ${\lambda}_{i}\left(D\right)\ge 0,\forall i$, the matrix D takes the form $D={\left({D}^{\frac{1}{2}}\right)}^{2}$ and ${D}^{-\frac{1}{2}}={\left({D}^{\frac{1}{2}}\right)}^{-1}$. Making use of the singular value decomposition of ${D}^{\frac{1}{2}}E$ yields
From [63], ${\sigma}_{max}\left({D}^{\frac{1}{2}}E\right)$ can be written as
with ${\sigma}_{min}\left(D\right)={n}^{\frac{1}{2}}{\sum}_{i}{\sigma}_{i}^{2}\left(D\right)\ne 0.$ In a similar way, from [63], ${\sigma}_{min}\left({D}^{\frac{1}{2}}E\right)$ takes the following form, that is,
with ${\sigma}_{max}\left(D\right)={\prod}_{i}{\lambda}_{i}^{\frac{1}{2n}}\left(D\right){\left(\frac{1}{2}\right)}^{\frac{n-1}{2(n-2)}}{\left(\frac{1}{n-1}\right)}^{\frac{n-1}{2(n-2)}}{\sum}_{i}{\left({\sigma}_{i}^{2}\right)}^{\frac{n-1}{n-2}}\left(D\right)\ne 0$.
The singular value decomposition of ${\left({E}^{t}{D}^{-1}E\right)}^{-1}$ yields ${\left({E}^{t}{D}^{-1}E\right)}^{-1}=V{\Sigma}_{1}^{-2}{V}^{t}$, where ${\Sigma}_{1}=\left[\begin{array}{cccc}{\sigma}_{1}& 0& \cdots & 0\\ 0& {\sigma}_{2}& \cdots & 0\\ \vdots & & & \vdots \\ 0& 0& \cdots & {\sigma}_{n}\end{array}\right].$ Combining the above expressions yields ${\sigma}_{max}\left(A\right)=\alpha \beta$, which completes the proof. □
The increasing behavior of ${\sigma}_{max}$ for $A\in {\mathbb{R}}^{n\times n}$ is given in Theorem 2. Furthermore, ${A}_{i}={A}_{i}\left(t\right)$ and ${\sigma}_{i}={\sigma}_{i}\left(t\right)$; for simplicity, we omit the dependency of ${A}_{i}$ and ${\sigma}_{i}$ on t in Theorem 2.
Theorem 2. Let ${A}_{i}\in {\mathbb{R}}^{{n}_{i}\times {n}_{i}}$, $\forall i\le j$, be submatrices of ${A}_{j+1}\in {\mathbb{R}}^{{n}_{j+1}\times {n}_{j+1}}$, and let ${\sigma}_{i}\left({A}_{i}\right)$ and ${\sigma}_{j+1}\left({A}_{j+1}\right)$ denote the largest singular values of ${A}_{i}$ and ${A}_{j+1}$, respectively. Then the largest singular value ${\sigma}_{j+1}\left({A}_{j+1}\right)$ satisfies the following inequality. Proof. For $i\le j$, the submatrices ${A}_{i}$ and the matrices ${A}_{j+1}$ can be written as
and
In Equations (19) and (20), ${a}_{k}$ denotes the k components of the submatrix ${A}_{i}$, and ${r}_{j+1}$ denotes the components of ${A}_{j+1}$ for $j\ge i$. Let ${u}_{i}$ and ${v}_{i}$ denote the left and right singular vectors of ${A}_{i}^{\ast}{A}_{i}$; then
From Equation (21), we have
In Equation (22), ${\widehat{u}}_{j+1}$ and ${\widehat{v}}_{j+1}$ denote the right and left singular vectors corresponding to the family of matrices ${A}_{j+1}^{\ast}{A}_{j+1}$, respectively. From Equations (21) and (22), we have
□
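The monotonicity in Theorem 2 can be illustrated numerically: the spectral norm of any submatrix is bounded by that of the enclosing matrix, so the largest singular values of nested leading principal submatrices form a nondecreasing sequence. A short hedged sketch on a random test matrix of our choosing:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 8))

# Largest singular value of each nested leading principal submatrix A_1, ..., A_8.
sigmas = [np.linalg.svd(A[:k, :k], compute_uv=False)[0] for k in range(1, 9)]

# The sequence is nondecreasing, as asserted in Theorem 2.
print(all(s1 <= s2 + 1e-12 for s1, s2 in zip(sigmas, sigmas[1:])))  # True
```

The small tolerance only guards against floating-point rounding; the inequality itself is exact.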
The decreasing behavior of ${\sigma}_{min}$ for $A\in {\mathbb{R}}^{n\times n}$ is given in Theorem 3.
Theorem 3. Let ${A}_{i}\in {\mathbb{R}}^{{n}_{i}\times {n}_{i}}$, $\forall i\le j$, be submatrices of ${A}_{j+1}\in {\mathbb{R}}^{{n}_{j+1}\times {n}_{j+1}}$, and let ${\widehat{\sigma}}_{i}\left({A}_{i}\right)$ and ${\widehat{\sigma}}_{j+1}\left({A}_{j+1}\right)$ denote the smallest singular values of ${A}_{i}$ and ${A}_{j+1}$, respectively. Then the smallest singular value ${\widehat{\sigma}}_{j+1}\left({A}_{j+1}\right)$ satisfies the following inequality. Proof. For $i\le j$, the matrices ${A}_{i}$ and ${A}_{j+1}$ can be written as
and
Let ${\widehat{u}}_{i}$ and ${\widehat{v}}_{i}$ be the left and right singular vectors of ${A}_{i}^{\ast}{A}_{i}$; then
Now,
Since ${\widehat{\sigma}}_{i}$ and ${\widehat{\sigma}}_{j+1}$ are the smallest singular values of ${A}_{i}^{\ast}{A}_{i}$ and ${A}_{j+1}^{\ast}{A}_{j+1}$ for $i\le j$, Equations (24) and (25) yield
□
Next, we aim to fix the largest singular value ${\sigma}_{max}\left(A\right)$ for $A\in {\mathbb{R}}^{n\times n}$ such that ${\sigma}_{max}=1$. For this purpose, we make use of an inner–outer algorithm. The main objective is to formulate and then solve an optimization problem; in turn, this optimization problem yields a system of ordinary differential equations (ODEs). In the outer algorithm, the main aim is to adjust the perturbation level $\epsilon$ via a fast Newton iteration. For more details, we refer to [58].
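The inner–outer idea can be sketched on a deliberately simple surrogate (this is our toy illustration, not the ODE-based inner solver of [58]): if the inner step returns the optimal unstructured rank-one perturbation Delta = v1 u1^t built from the leading singular vectors of A, then the smallest eigenvalue of I - eps*A*Delta is xi(eps) = 1 - eps*sigma_max(A), and the outer Newton iteration on xi(eps) = 0 recovers the critical level eps* = 1/sigma_max(A), i.e., the reciprocal of the mu-value for full complex uncertainty.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))

# "Inner" step surrogate: optimal rank-one perturbation from the leading
# singular vectors (in place of the ODE-based inner solver of [58]).
U, s, Vt = np.linalg.svd(A)
Delta = np.outer(Vt[0], U[:, 0])  # Delta = v1 u1^t, so A @ Delta = s1 * u1 u1^t

def xi(eps):
    """Smallest real part among the eigenvalues of I - eps * A * Delta."""
    return np.linalg.eigvals(np.eye(5) - eps * A @ Delta).real.min()

# "Outer" step: Newton iteration on xi(eps) = 0 with a finite-difference slope.
eps, h = 0.1, 1e-7
for _ in range(30):
    slope = (xi(eps + h) - xi(eps)) / h
    step = xi(eps) / slope
    eps -= step
    if abs(step) < 1e-12:
        break

print(np.isclose(eps, 1.0 / s[0]))  # True: critical level 1/sigma_max(A)
```

Because xi is linear in eps for this surrogate, Newton converges in essentially one step; the real algorithm repeats inner and outer steps because the structured perturbation changes with eps.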
7. Quasi-Rational Bernstein–Vandermonde Matrices
The following result from [60] gives the computation of the determinant of quasi-rational Bernstein–Vandermonde structured matrices.
Theorem 7 ([60]).
Let ${M}_{b}=\left({m}_{ij}\right)\in {\mathbb{R}}^{n\times n}$ be a quasi-rational Bernstein–Vandermonde matrix. Then, the determinant $d(\cdot )$ is computed as $d\left({M}_{b}(i-j+1:i,1:j)\right)=\frac{\binom{n-1}{0}\cdots \binom{n-1}{j-1}}{{\prod}_{k=i-j+1}^{i}W\left({x}_{k}\right)}{\prod}_{k=0}^{j-1}{w}_{k}{\prod}_{k=i-j+1}^{i}{(1-{x}_{k})}^{n-j}{\prod}_{i-j+1\le t<s\le i}({x}_{s}-{x}_{t}),\forall i\ge j,\ j\ne n.$
In addition,
$d\left({M}_{b}(1:i,j-i+1:j)\right)=\frac{\binom{n-1}{j-i}\cdots \binom{n-1}{j-1}}{{\prod}_{k=1}^{i}W\left({x}_{k}\right)}{\prod}_{k=1}^{i}{x}_{k}^{j-i}{\prod}_{k=j-i}^{j-1}{w}_{k}{\prod}_{k=1}^{i}{(1-{x}_{k})}^{n-j}{\prod}_{1\le t<s\le i}({x}_{s}-{x}_{t}),\forall i<j$
and
$\left\{\begin{array}{l}det\left({M}_{b}\right)=\left(\frac{{w}_{n-1}}{W\left({x}_{n}\right)}{\prod}_{k=1}^{n-1}({x}_{n}-{x}_{k})-w{\prod}_{k=1}^{n-1}(1-{x}_{k})\right)\frac{\binom{n-1}{0}\cdots \binom{n-1}{n-2}}{{\prod}_{k=1}^{n-1}W\left({x}_{k}\right)}{\prod}_{k=0}^{n-2}{w}_{k}{\prod}_{1\le t<s\le n-1}({x}_{s}-{x}_{t}),\\ det\left({M}_{b}(2:n)\right)=\left(\frac{{w}_{n-1}{x}_{n}}{W\left({x}_{n}\right)}{\prod}_{k=2}^{n-1}({x}_{n}-{x}_{k})-w{\prod}_{k=2}^{n-1}(1-{x}_{k})\right)\frac{\binom{n-1}{1}\cdots \binom{n-1}{n-2}}{{\prod}_{k=2}^{n-1}W\left({x}_{k}\right)}{\prod}_{k=1}^{n-2}{w}_{k}{\prod}_{k=2}^{n-1}{x}_{k}{\prod}_{2\le t<s\le n-1}({x}_{s}-{x}_{t}).\end{array}\right.$
The parametric matrix ${P}_{r}M\left({M}_{b}\right)$ for a quasi-rational Bernstein–Vandermonde matrix is given by the following theorem.
Theorem 8. Let ${M}_{b}=\left({m}_{ij}\right)\in {\mathbb{R}}^{n\times n}$ be a nonsingular quasi-rational Bernstein–Vandermonde matrix. The parametric matrix ${P}_{r}M\left({M}_{b}\right)\in {\mathbb{R}}^{n\times n}$ is given by
${P}_{r}M\left({M}_{b}\right)=\left\{\begin{array}{ll}{d}_{ii}=\frac{\binom{n-1}{i-1}{w}_{i-1}{(1-{x}_{i})}^{n-i}}{W\left({x}_{i}\right)}{\prod}_{k=1}^{i-1}\frac{{x}_{i}-{x}_{k}}{1-{x}_{k}},& 1\le i\le n-2\\ {d}_{n-1,n-1}=\frac{\binom{n-1}{n-2}{w}_{n-2}(1-{x}_{n})}{W\left({x}_{n}\right)}{\prod}_{k=2}^{n-1}\frac{{x}_{n}-{x}_{k}}{1-{x}_{k}},& i=n-1\\ {d}_{n,n}=\left(w{\prod}_{k=1}^{n-1}(1-{x}_{k})-\frac{{w}_{n-1}}{W\left({x}_{n}\right)}{\prod}_{k=1}^{n-1}({x}_{n}-{x}_{k})\right)\frac{W\left({x}_{n}\right)(1-{x}_{n-1})}{W\left({x}_{n-1}\right)(1-{x}_{1})(1-{x}_{n}){\prod}_{k=2}^{n-1}({x}_{n}-{x}_{k})}{\prod}_{k=1}^{n-2}\frac{{x}_{n-1}-{x}_{k}}{1-{x}_{k}},& i=n,\end{array}\right.$ for $1\le i\le n$,
${P}_{r}M{\left({m}_{b}\right)}_{ij}=\left\{\begin{array}{ll}{x}_{ij}=\frac{W\left({x}_{i-1}\right){(1-{x}_{i})}^{n-j}(1-{x}_{i-j})({x}_{i}-{x}_{i-1})}{W\left({x}_{i}\right){(1-{x}_{i-1})}^{n-j+1}({x}_{i-1}-{x}_{i-j})}{\prod}_{k=i-j+1}^{i-2}\frac{{x}_{i}-{x}_{k}}{{x}_{i-1}-{x}_{k}},& (i,j)\ne (n,n-1)\\ {x}_{n,n-1}=\frac{W\left({x}_{n}\right){(1-{x}_{n})}^{2}({x}_{n-1}-{x}_{1})}{W\left({x}_{n-1}\right)(1-{x}_{n})(1-{x}_{1})({x}_{n}-{x}_{n-1})}{\prod}_{k=2}^{n-2}\frac{{x}_{n-1}-{x}_{k}}{{x}_{n}-{x}_{k}},& (i,j)=(n,n-1),\end{array}\right.$ for $1\le j\le i\le n$, and
${P}_{r}M{\left({m}_{b}\right)}_{ij}=\left\{\begin{array}{ll}{B}_{ij}=\frac{(n-j+1){w}_{j-1}{x}_{i}}{(j-1){w}_{j-2}(1-{x}_{i})},& 1\le i\le n-2\\ \delta =\left(\frac{{w}_{n-1}{x}_{n}}{W\left({x}_{n}\right)}{\prod}_{k=2}^{n-1}({x}_{n}-{x}_{k})-w{\prod}_{k=2}^{n-1}(1-{x}_{k})\right)\frac{\binom{n-1}{1}\cdots \binom{n-1}{n-2}{\prod}_{k=1}^{n-2}{w}_{k}{\prod}_{k=2}^{n-1}{x}_{k}{\prod}_{2\le t<s\le n-1}({x}_{s}-{x}_{t})}{{d}_{n-1,n-1}{\prod}_{t=3}^{n}{d}_{t-2,t-2}{\beta}_{t-2,t-1}{\gamma}_{t-1,t-2}{\prod}_{k=2}^{n-1}W\left({x}_{k}\right)},& \text{otherwise},\end{array}\right.$ for $1\le i\le j\le n$.
Spectral Properties of Quasi-Rational Bernstein–Vandermonde Matrices
In this subsection, we present important and meaningful spectral properties of quasi-rational Bernstein–Vandermonde matrices. The matrices are taken from [60] for the numerical approximation of the structured singular values.
We make use of the well-known MATLAB functions $eig(\cdot )$ and $svd(\cdot )$ to numerically approximate both the eigenvalues and the singular values. Our main objective is to numerically approximate the lower bounds of the structured singular value, or $\mu$-value, which is a straightforward generalization of the singular values for constant structured matrices. Furthermore, we make use of the MATLAB function mussv to numerically compute both the lower and upper bounds of the structured singular values, or $\mu$-values, for constant structured matrices.
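The bracketing that these computations rely on can be reproduced with any numerical stack (a hedged NumPy sketch on a random stand-in matrix; mussv itself is MATLAB-only): the spectral radius is always a lower bound for the mu-value and the largest singular value is always an upper bound, so the plotted curves must nest.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 6))

rho = np.abs(np.linalg.eigvals(A)).max()           # spectral radius
sigma_max = np.linalg.svd(A, compute_uv=False)[0]  # largest singular value

# Classical bracket: rho(A) <= mu_Delta(A) <= sigma_max(A) for any
# uncertainty structure Delta between the two extremes.
print(rho <= sigma_max)  # True
```

Structure-exploiting algorithms such as mussv or the method of [58] tighten this bracket from both sides for a given block structure.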
Example 4. Consider a $6\times 6$ quasi-rational Bernstein–Vandermonde matrix ${M}_{4}$.
The computed eigenvalues and singular values are $1.0\times {10}^{5}\times \left\{2.9639,0.0642,0.0030,0.0002,0,0\right\}$ and $1.0\times {10}^{5}\times \left\{3.6568,0.0555,0.0030,0.0002,0,0\right\}$, respectively. The first and second columns of Table 5 represent the numerically approximated upper and lower bounds of the structured singular values, or $\mu$-values, obtained via the MATLAB function mussv. The numerically approximated lower bounds of the structured singular values, or $\mu$-values, obtained with the algorithm of [58] are represented in the very last column of Table 5.
In Figure 3, the left-hand subfigure represents the plots of the eigenvalues, singular values, and numerically approximated lower and upper bounds of the structured singular values, or $\mu$-values, against the time t. The blue dotted line starting from the point $\left(1.0,0.3\right)$ at the bottom of the left subfigure denotes the spectrum, that is, the eigenvalues of ${M}_{4}$. Because singular values are nonnegative numbers, the red dotted line starting from the point $\left(1.0,0.37\right)$ indicates that the numerically approximated eigenvalues are bounded from above by the singular values. The golden dotted line starting from the point $\left(1.0,0.37\right)$ shows that all quantities, that is, the eigenvalues, the singular values, the lower bounds of the $\mu$-values approximated by the MATLAB function mussv (represented by the purple dotted line starting from the point $\left(1.0,0.37\right)$), and the numerically approximated lower bounds of the $\mu$-values obtained via the algorithm of [58] (represented by the turquoise dotted line starting from the point $\left(1.0,0.37\right)$), are strictly bounded by the numerically computed upper bounds of the structured singular values obtained with the MATLAB function mussv.
In Figure 3, the right-hand subfigure represents the plots of the condition numbers vs. time. The behaviour of the spectral condition numbers ${k}_{1}\left({M}_{4}\right)$, with end point $\left(3.0,12{,}000\right)$, ${k}_{2}\left({M}_{4}\right)$, with end point $\left(3.0,6000\right)$, and ${k}_{\infty}\left({M}_{4}\right)$, with end point $\left(3.0,4000\right)$, is shown there.
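The three condition numbers plotted in Figure 3 are the standard induced-norm quantities; outside MATLAB they can be obtained with numpy.linalg.cond. A hedged sketch on a random stand-in (the entries of ${M}_{4}$ are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(4)
M = rng.standard_normal((6, 6))  # stand-in matrix; the paper's M_4 is not reproduced here

k1 = np.linalg.cond(M, 1)         # k_1: 1-norm condition number
k2 = np.linalg.cond(M, 2)         # k_2: spectral condition number
kinf = np.linalg.cond(M, np.inf)  # k_inf: infinity-norm condition number

# The spectral condition number equals the ratio of extreme singular values.
s = np.linalg.svd(M, compute_uv=False)
print(np.isclose(k2, s[0] / s[-1]))  # True
```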
8. Numerical Testing for Matrices in Higher Dimensions
In this section, we present a comparison of the numerically approximated bounds of the structured singular values for Bernstein–Vandermonde structured matrices in higher dimensions. For the numerical testing, we choose Bernstein–Vandermonde matrices of sizes $10,15,20,25,30$, respectively.
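To give a feel for why accurate bounds become harder as the size grows, the sketch below assembles Bernstein–Vandermonde matrices of the sizes used in this comparison and reports their 2-norm condition numbers (the equispaced nodes in (0, 1) are our illustrative choice; the paper's test matrices may use different nodes):

```python
import numpy as np
from math import comb

def bernstein_vandermonde(x, n):
    """Order-n Bernstein-Vandermonde matrix at nodes x (one common convention)."""
    x = np.asarray(x, dtype=float)
    return np.column_stack(
        [comb(n - 1, j) * x**j * (1.0 - x) ** (n - 1 - j) for j in range(n)]
    )

conds = {}
for n in (10, 15, 20, 25, 30):
    nodes = np.linspace(0.05, 0.95, n)  # illustrative equispaced nodes in (0, 1)
    conds[n] = np.linalg.cond(bernstein_vandermonde(nodes, n))
    print(n, f"{conds[n]:.3e}")
```

The condition numbers deteriorate rapidly with the matrix size, which is consistent with the growing gap between the bound estimators observed below.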
The first column of Table 6 denotes the size of the square Bernstein–Vandermonde structured matrices under consideration in this article. The second and third columns indicate the numerical approximations of the upper and lower bounds of the structured singular values, or $\mu$-values, obtained with the help of the MATLAB function mussv, respectively. The fourth and last column of Table 6 shows the numerical approximation of the lower bounds of the structured singular value, or $\mu$-value, computed via the algorithm of [58]. For size 10, the lower bound approximated numerically via the MATLAB function mussv is much better. However, for sizes $15,20,25,30$, the lower bounds approximated by [58] are significantly better than the lower bounds approximated by the MATLAB function mussv.
Algorithm 1 Approximate perturbation level.
procedure Given $A$, $\mathbb{B}$, the tolerance $tol>0$, ${\epsilon}^{\left(0\right)}$ (initial perturbation level), ${\epsilon}_{l}$ (bound from below), ${\epsilon}_{u}$ (given upper bound), ${i}_{max}$ (number of initial eigenvalues)
  for $i\leftarrow 1$ to ${i}_{max}$ do
    Determine the solution of the system of ODEs (4.10) in [58] for all cases, beginning with the initial choice of the value ${\Delta}_{i}\left(0\right)$. Let ${\Delta}_{i}$ denote the stationary solution and ${\xi}_{i}$ the smallest eigenvalue of the perturbed structured matrix $I-{\epsilon}^{\left(0\right)}A{\Delta}_{i}$
  Take ${i}_{\ast}=\mathrm{argmin}_{i}\left|{\xi}_{i}\right|$
  Take ${\Delta}^{\left(0\right)}={\Delta}_{{i}_{\ast}}$, ${\xi}^{\left(0\right)}={\xi}_{{i}_{\ast}}$, and ${x}^{\left(0\right)}$, ${y}^{\left(0\right)}$ the computed eigenvectors
  Determine ${\epsilon}^{\left(1\right)}$ via a single step of the fast Newton iteration
  Set $k=1$
  while $\left|{\epsilon}^{\left(k\right)}-{\epsilon}^{(k-1)}\right|>tol$ do
    Determine the solution of the ODEs (4.10) in [58] with $\epsilon={\epsilon}^{\left(k\right)}$, starting from $\Delta \left(0\right)={\Delta}^{(k-1)}$
    Let ${\Delta}^{\left(k\right)}$ be the stationary solution of (4.10) in [58]
    Let ${\xi}^{\left(k\right)}$ be the smallest eigenvalue of the perturbed structured matrix $I-{\epsilon}^{\left(k\right)}A{\Delta}^{\left(k\right)}$
    if $\left|{\xi}^{\left(k\right)}\right|>tol$ then
      Set the perturbation level ${\epsilon}_{l}={\epsilon}^{\left(k\right)}$
    Determine a suitable value of ${\epsilon}^{(k+1)}$ with one step of the fast Newton iteration
end procedure