# Estimating the Quadratic Form x^{T}A^{−m}x for Symmetric Matrices: Further Progress and Numerical Computations


## Abstract


## 1. Introduction

- Statistics: The inverse of the covariance matrix, referred to as the precision matrix, appears frequently in statistics. The covariance matrix reveals marginal correlations between variables, whereas the precision matrix represents the conditional correlations between two variables given the remaining ones [2]. The diagonal of the inverse of a covariance matrix provides information about the quality of data in uncertainty quantification [3].
- Network analysis: Determining the importance of the nodes of a graph is a major issue in network analysis. This information can be extracted by evaluating the diagonal elements of the matrix ${({I}_{n}-aA)}^{-1}$, where A is the adjacency matrix of the network, $0<a<\frac{1}{\rho \left(A\right)}$, and $\rho \left(A\right)$ is the spectral radius of A. This matrix is referred to as the resolvent matrix; see, for example, [4] and the references therein.
- Numerical analysis: Quadratic forms arise naturally in the computation of the regularization parameter in Tikhonov regularization for solving ill-posed problems. In this case, the matrix has the form $A{A}^{T}+\lambda {I}_{n}$, $\lambda >0$. Many methods have been proposed in the literature for selecting the regularization parameter $\lambda $, such as the discrepancy principle, cross-validation, generalized cross-validation (GCV), and the L-curve; see, for example, [5] (Chapter 15) and the references therein. These methods involve quadratic forms of the type ${x}^{T}{(A{A}^{T}+\lambda {I}_{n})}^{-m}x$, with $m=1,2,3$.

- Finding an $\alpha $ such that$$(x,{A}^{-m}x)\approx \alpha {\parallel x\parallel}^{2}.$$
- Assessing the absolute error of the above estimate, i.e., determining a bound for the quantity$$\left|\alpha {\parallel x\parallel}^{2}-(x,{A}^{-m}x)\right|.$$
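For orientation, the quantity being estimated can be computed exactly for moderate n by m successive linear solves. A minimal NumPy sketch (the matrix, sizes, and parameters here are illustrative, not taken from the paper), using a synthetic matrix of the Tikhonov type $A{A}^{T}+\lambda {I}_{n}$ mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, m = 30, 1e-2, 2

# Synthetic SPD matrix of the Tikhonov type B B^T + lambda I_n.
B = rng.standard_normal((n, n))
M = B @ B.T + lam * np.eye(n)
x = rng.standard_normal(n)

# Reference value x^T M^{-m} x via m successive linear solves;
# the inverse matrix is never formed explicitly.
y = x.copy()
for _ in range(m):
    y = np.linalg.solve(M, y)
quad = x @ y
```

The estimation methods discussed below aim to approximate `quad` by $\alpha {\parallel x\parallel}^{2}$ without performing these solves.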

## 2. Bounds on the Error

**Proposition 1.**

- UB1.
- $\frac{{\parallel x\parallel}^{2}\parallel b\parallel}{2\parallel {A}^{m}x\parallel}\left({\kappa}^{m}+\frac{1}{{\kappa}^{m}}\right)$
- UB2.
- $\frac{\parallel x\parallel \cdot {\parallel b\parallel}^{2}}{2\parallel {A}^{m}b\parallel}\left({\kappa}^{m}+\frac{1}{{\kappa}^{m}}\right)$
- UB3.
- $\frac{{\parallel x\parallel}^{2}{\parallel b\parallel}^{2}}{4\sqrt{{x}^{T}{A}^{m}x}\cdot \sqrt{{b}^{T}{A}^{m}b}}{\left({\kappa}^{m/2}+\frac{1}{{\kappa}^{m/2}}\right)}^{2}$
- UB4.
- $\frac{\parallel x\parallel \cdot \parallel b\parallel}{{\lambda}_{min}^{m}}$
- UB5.
- For estimates satisfying $\alpha {\parallel x\parallel}^{2}\le (x,{A}^{-m}x)$, we also have the family of error bounds$$\frac{{\parallel x\parallel}^{2}}{2\parallel {A}^{m}x\parallel \cdot \parallel {A}^{p}x\parallel}\left({\kappa}^{m}+\frac{1}{{\kappa}^{m}}\right)\sqrt{{\parallel {A}^{p}x\parallel}^{2}{\parallel b\parallel}^{2}-{({A}^{p}x,b)}^{2}},$$where $p\ge 0$ can be chosen as any integer such that $\frac{(x,{A}^{p}x)}{({A}^{m}x,{A}^{p}x)}<\alpha $.
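The bounds UB1–UB4 can be checked numerically on a small synthetic SPD matrix. In this sketch, the choice of $\alpha $ and the interpretation of b as the residual $b=x-\alpha {A}^{m}x$ are assumptions consistent with the form of UB4, not a prescription taken from the text:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 40, 2

# SPD test matrix with known spectrum, so kappa = lambda_max / lambda_min is exact.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
eigs = rng.uniform(1.0, 4.0, size=n)
A = Q @ np.diag(eigs) @ Q.T
x = rng.standard_normal(n)
lam_min, kappa = eigs.min(), eigs.max() / eigs.min()

Am = np.linalg.matrix_power(A, m)
Amx = Am @ x
exact = x @ np.linalg.solve(Am, x)

# A rough estimate alpha (here the projection choice alpha = ||x||^2 / (x^T A^m x))
# and the residual b = x - alpha A^m x (assumed interpretation of b).
alpha = (x @ x) / (x @ Amx)
b = x - alpha * Amx
E_abs = abs(alpha * (x @ x) - exact)

nx, nb = np.linalg.norm(x), np.linalg.norm(b)
UB1 = nx**2 * nb / (2 * np.linalg.norm(Amx)) * (kappa**m + kappa**-m)
UB2 = nx * nb**2 / (2 * np.linalg.norm(Am @ b)) * (kappa**m + kappa**-m)
UB3 = (nx**2 * nb**2 / (4 * np.sqrt(x @ Amx) * np.sqrt(b @ (Am @ b)))
       * (kappa**(m / 2) + kappa**(-m / 2))**2)
UB4 = nx * nb / lam_min**m
```

All four bounds dominate the actual error $E_{abs}$; UB1–UB3 follow from Cauchy–Schwarz combined with a Kantorovich-type inequality, while UB4 only needs $\parallel {A}^{-m}b\parallel \le \parallel b\parallel /{\lambda}_{min}^{m}$.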

**Proof.**

- UB1.

- UB2.

- UB3.

- UB4.

- UB5.

## 3. Estimate of ${\mathit{x}}^{\mathit{T}}{\mathit{A}}^{-\mathit{m}}\mathit{x}$ by the Projection Method

**Remark 1.**

- Observe that upper bounds UB1 and UB4 from Proposition 1 are minimal for $k=m$. In this case, we have $b\perp {A}^{m}x$; thus, b has the smallest possible norm. Therefore, from the point of view of minimizing the upper bound on the error (more precisely, minimizing upper bounds UB1 and UB4), a convenient choice is $k=m$.
- However, if the goal is fast estimation, we can take $k=0$ for even m and $k=1$ for odd m, as these two choices provide $est_{proj(0)}=\frac{{\parallel x\parallel}^{4}}{{\parallel {A}^{m/2}x\parallel}^{2}}$ and $est_{proj(1)}=\frac{{\parallel x\parallel}^{2}(x,Ax)}{{\parallel {A}^{(m+1)/2}x\parallel}^{2}}$, respectively, which are both easy to evaluate.
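The two fast choices above can be sketched as follows (the well-conditioned test matrix is synthetic, and `power_matvec` is a hypothetical helper introduced here for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 40

# Well-conditioned SPD matrix (eigenvalues in [1, 2]) for a quick sanity check.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(rng.uniform(1.0, 2.0, size=n)) @ Q.T
x = rng.standard_normal(n)

def power_matvec(A, x, k):
    """Compute A^k x with k matrix-vector products."""
    y = x.copy()
    for _ in range(k):
        y = A @ y
    return y

m_even, m_odd = 4, 3
# est_proj(0) for even m and est_proj(1) for odd m, as in the remark.
est0 = np.linalg.norm(x)**4 / np.linalg.norm(power_matvec(A, x, m_even // 2))**2
est1 = (np.linalg.norm(x)**2 * (x @ A @ x)
        / np.linalg.norm(power_matvec(A, x, (m_odd + 1) // 2))**2)

exact_even = x @ np.linalg.solve(np.linalg.matrix_power(A, m_even), x)
exact_odd = x @ np.linalg.solve(np.linalg.matrix_power(A, m_odd), x)
```

Note that by the Cauchy–Schwarz inequality ${\parallel x\parallel}^{4}\le (x,{A}^{m}x)\,(x,{A}^{-m}x)$, so for even m the estimate $est_{proj(0)}$ never exceeds the true value.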

## 4. Estimate of ${\mathit{x}}^{\mathit{T}}{\mathit{A}}^{-\mathit{m}}\mathit{x}$ Using the Minimization Method

## 5. The Heuristic Approach

**Lemma 1.**

**Proof.**

## 6. A Comparison with Other Methods

#### 6.1. The Extrapolation Method

**Lemma 2.**

**Remark 2.**

- For $\nu =-1$, $est_{extrap(-1)}\equiv est_{h1}$.
- For $\nu =0$, $est_{extrap(0)}\equiv est_{proj(0)}$.
- For $\nu =1$, $est_{extrap(1)}\equiv est_{proj(1)}$.

#### 6.2. Gaussian Techniques

## 7. Application in Estimating ${\mathit{x}}^{\mathit{T}}{({\mathit{AA}}^{\mathit{T}}+\mathit{\lambda}{\mathit{I}}_{\mathit{n}})}^{-\mathit{m}}\mathit{x}$

## 8. Numerical Examples

**Example 1.**

**Example 2.**

**Example 3.**

**Example 4.**

**Example 5.**

## 9. Conclusions

- The projection method improves the results of the extrapolation procedure by providing bounds on the absolute error.
- Although the estimates based on Gauss quadrature are accurate, they require more time and more matrix–vector products (mvps) than the proposed approaches as the number of Lanczos iterations increases. The methods presented in this paper are thus especially convenient when a fast estimate of moderate accuracy is sought.

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Acknowledgments

## Conflicts of Interest

## References

- Fika, P.; Mitrouli, M.; Turek, O. On the estimation of x^{T}A^{−1}x for symmetric matrices. 2020; submitted.
- Fan, J.; Liao, Y.; Liu, H. An overview on the estimation of large covariance and precision matrices. Econom. J. **2016**, 19, C1–C32.
- Tang, J.; Saad, Y. A probing method for computing the diagonal of a matrix inverse. Numer. Linear Algebra Appl. **2012**, 19, 485–501.
- Benzi, M.; Klymko, C. Total communicability as a centrality measure. J. Complex Netw. **2013**, 1, 124–149.
- Golub, G.H.; Meurant, G. Matrices, Moments and Quadrature with Applications; Princeton University Press: Princeton, NJ, USA, 2010.
- Bai, Z.; Fahey, M.; Golub, G. Some large-scale matrix computation problems. J. Comput. Appl. Math. **1996**, 74, 71–89.
- Fika, P.; Mitrouli, M.; Roupa, P. Estimates for the bilinear form x^{T}A^{−1}y with applications to linear algebra problems. Electron. Trans. Numer. Anal. **2014**, 43, 70–89.
- Fika, P.; Mitrouli, M. Estimation of the bilinear form y^{*}f(A)x for Hermitian matrices. Linear Algebra Appl. **2016**, 502, 140–158.
- Bekas, C.; Curioni, A.; Fedulova, I. Low-cost data uncertainty quantification. Concurr. Comput. Pract. Exp. **2012**, 24, 908–920.
- Taylor, A.; Higham, D.J. CONTEST: Toolbox Files and Documentation. Available online: http://www.mathstat.strath.ac.uk/research/groups/numerical_analysis/contest/toolbox (accessed on 15 April 2021).
- Reichel, L.; Rodriguez, G.; Seatzu, S. Error estimates for large-scale ill-posed problems. Numer. Algorithms **2009**, 51, 341–361.
- Hansen, P.C. Regularization Tools Version 4.0 for MATLAB 7.3. Numer. Algorithms **2007**, 46, 189–194.

**Figure 1.** Solution of the Shaw test problem via an estimation of GCV (**left**) and the exact GCV (**right**).

**Figure 2.** Solution of the Tomo test problem via an estimation of GCV (**left**) and the exact GCV (**right**).

**Figure 3.** Solution of the Baart test problem via an estimation of GCV (**left**) and the exact GCV (**right**).

| Estimate | Estimated Value | UB1 | UB2 | UB3 | UB4 | UB5 |
|---|---|---|---|---|---|---|
| $est_{proj(0)}$ | 0.0103 | 0.0541 | 0.1909 | 0.0690 | 0.1080 | 0.0540 |
| $est_{proj(2)}$ | 0.0103 | 0.0540 | 0.1926 | 0.0692 | 0.1079 | 0.0540 |
| $est_{min1}$ | 0.0106 | 0.0731 | 0.1029 | 0.0499 | 0.1460 | 0.0538 |
| $est_{min2}$ | 0.0105 | 0.0701 | 0.1032 | 0.0497 | 0.1401 | 0.0538 |
| $est_{h1}$ | 0.0103 | 0.0541 | 0.1872 | 0.0684 | 0.1082 | 0.0540 |
| $est_{h2}$ | 0.0103 | 0.0543 | 0.1828 | 0.0677 | 0.1084 | 0.0540 |

Columns UB1–UB5 are the upper bounds on ${E}_{abs}$ from Proposition 1.

| $est_{proj(0)}$ | $est_{proj(2)}$ | $est_{min1}$ | $est_{min2}$ | $est_{h1}$ | $est_{h2}$ |
|---|---|---|---|---|---|
| 1.0176 | 0.8636 | 1.0268 | 0.9910 | 1.1990 | 1.2335 |

| $est_{proj(0)}$ | $est_{proj(3)}$ | $est_{min1}$ | $est_{min2}$ | $est_{h1}$ | $est_{h2}$ |
|---|---|---|---|---|---|
| 296.6203 | 296.5306 | 299.8469 | 297.7640 | 296.7100 | 296.7562 |

**Table 4.**Mean relative errors and execution times for estimating the diagonal of the covariance matrices of order n with $(\alpha ,\beta )=(3,1)$.

| n | Estimate | MRE | Time |
|---|---|---|---|
| 1000 | $est_{proj(0)}\equiv est_{extrap(0)}$ | 1.2688 × ${10}^{-4}$ | 5.3683 × ${10}^{-4}$ |
| | $est_{proj(1)}\equiv est_{extrap(1)}$ | 4.3539 × ${10}^{-4}$ | 5.4723 × ${10}^{-4}$ |
| | $est_{min1}$ | 2.9994 × ${10}^{-4}$ | 2.3557 × ${10}^{-1}$ |
| | $est_{min2}$ | 3.0020 × ${10}^{-4}$ | 2.1121 × ${10}^{-1}$ |
| | $est_{h1}\equiv est_{extrap(-1)}$ | 3.5996 × ${10}^{-4}$ | 6.5678 × ${10}^{-4}$ |
| | $est_{h2}$ | 3.8761 × ${10}^{-3}$ | 5.9529 × ${10}^{-2}$ |
| | $est_{Gauss}$ | 1.2687 × ${10}^{-4}$ | 1.7068 |
| 3000 | $est_{proj(0)}\equiv est_{extrap(0)}$ | 4.2294 × ${10}^{-5}$ | 2.2339 × ${10}^{-3}$ |
| | $est_{proj(1)}\equiv est_{extrap(1)}$ | 1.4516 × ${10}^{-4}$ | 2.2521 × ${10}^{-3}$ |
| | $est_{min1}$ | 1.0508 × ${10}^{-4}$ | 1.2698 |
| | $est_{min2}$ | 1.0528 × ${10}^{-4}$ | 1.0726 |
| | $est_{h1}\equiv est_{extrap(-1)}$ | 1.2004 × ${10}^{-4}$ | 2.5384 × ${10}^{-3}$ |
| | $est_{h2}$ | 1.6973 × ${10}^{-3}$ | 5.1289 × ${10}^{-1}$ |
| | $est_{Gauss}$ | 4.2294 × ${10}^{-5}$ | 1.1647 × ${10}^{1}$ |
| 5000 | $est_{proj(0)}\equiv est_{extrap(0)}$ | 2.5377 × ${10}^{-5}$ | 1.4881 × ${10}^{-2}$ |
| | $est_{proj(1)}\equiv est_{extrap(1)}$ | 8.7099 × ${10}^{-5}$ | 1.4502 × ${10}^{-2}$ |
| | $est_{min1}$ | 6.6113 × ${10}^{-5}$ | 1.2790 × ${10}^{1}$ |
| | $est_{min2}$ | 6.6256 × ${10}^{-5}$ | 8.3479 |
| | $est_{h1}\equiv est_{extrap(-1)}$ | 7.2027 × ${10}^{-5}$ | 1.7101 × ${10}^{-2}$ |
| | $est_{h2}$ | 1.1532 × ${10}^{-3}$ | 6.4850 |
| | $est_{Gauss}$ | 2.5377 × ${10}^{-5}$ | 2.0130 × ${10}^{2}$ |

**Table 5.**Mean relative errors and execution times (seconds) for estimating the diagonal of the resolvent matrix.

| Network | $est_{proj(0)}$ | $est_{proj(1)}$ | $est_{min1}$ | $est_{min2}$ | $est_{h1}$ | $est_{h2}$ |
|---|---|---|---|---|---|---|
| pref | 8.770 × ${10}^{-3}$ | 1.646 × ${10}^{-2}$ | 3.008 × ${10}^{-3}$ | 1.240 × ${10}^{-2}$ | 9.218 × ${10}^{-4}$ | 6.500 × ${10}^{-4}$ |
| | [2.723 × ${10}^{-4}$] | [3.447 × ${10}^{-4}$] | [5.091] | [4.105] | [3.747 × ${10}^{-4}$] | [9.471 × ${10}^{-2}$] |
| lock and key | 3.590 × ${10}^{-2}$ | 6.700 × ${10}^{-2}$ | 1.540 × ${10}^{-2}$ | 4.313 × ${10}^{-2}$ | 3.620 × ${10}^{-3}$ | 3.170 × ${10}^{-4}$ |
| | [3.927 × ${10}^{-4}$] | [4.429 × ${10}^{-4}$] | [6.754] | [4.884] | [4.946 × ${10}^{-4}$] | [8.387 × ${10}^{-1}$] |
| renga | 7.173 × ${10}^{-2}$ | 1.014 × ${10}^{-1}$ | 2.875 × ${10}^{-2}$ | 5.516 × ${10}^{-2}$ | 4.110 × ${10}^{-2}$ | 2.936 × ${10}^{-2}$ |
| | [4.153 × ${10}^{-4}$] | [4.724 × ${10}^{-4}$] | [4.597] | [4.059] | [5.103 × ${10}^{-4}$] | [6.477 × ${10}^{-2}$] |

For each network, the first row gives the mean relative error and the bracketed row below it the execution time.

| Test Problem (n, $\sigma$) | Method | $\Vert x-{x}_{\lambda}\Vert $ |
|---|---|---|
| Shaw (200, ${10}^{-7}$) | estimation | 2.1885 × ${10}^{-1}$ |
| | exact GCV | 1.9049 × ${10}^{-1}$ |
| Tomo (100, ${10}^{-5}$) | estimation | 1.9188 × ${10}^{-2}$ |
| | exact GCV | 7.0236 × ${10}^{-2}$ |
| Baart (100, ${10}^{-7}$) | estimation | 5.9189 × ${10}^{-2}$ |
| | exact GCV | 5.9958 × ${10}^{-2}$ |


© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Mitrouli, M.; Polychronou, A.; Roupa, P.; Turek, O.
Estimating the Quadratic Form *x*^{T}*A*^{−m}*x* for Symmetric Matrices: Further Progress and Numerical Computations. *Mathematics* **2021**, *9*, 1432.
https://doi.org/10.3390/math9121432
