Article

The Nearest Zero Eigenvector of a Weakly Symmetric Tensor from a Given Point

Department of Mathematics and Statistics, Murray State University, Murray, KY 42071, USA
*
Author to whom correspondence should be addressed.
Mathematics 2024, 12(5), 705; https://doi.org/10.3390/math12050705
Submission received: 13 December 2023 / Revised: 12 February 2024 / Accepted: 26 February 2024 / Published: 28 February 2024

Abstract

We begin with a degree $m$ real homogeneous polynomial in $n$ indeterminates and bound the distance from a given $n$-dimensional real vector to the real vanishing of the polynomial. We then apply these bounds to the real homogeneous polynomial associated with a nonzero $m$-order $n$-dimensional weakly symmetric tensor which has zero as an eigenvalue. We provide “nested spheres” conditions to bound the distance from a given $n$-dimensional real vector to the nearest zero eigenvector.
MSC:
15A18; 15A69

1. Introduction

Let $\mathbb{R}$ be the real field. We consider an $m$-order $n$-dimensional tensor $\mathcal{A}$ consisting of $n^m$ entries in $\mathbb{R}$:
$$\mathcal{A} = (a_{i_1 \cdots i_m}), \qquad a_{i_1 \cdots i_m} \in \mathbb{R}, \quad 1 \le i_1, \ldots, i_m \le n.$$
We denote the space of all $m$-order $n$-dimensional real tensors by $\mathbb{R}^{[m,n]}$.
For an $n$-vector $x = (x_1, \ldots, x_n)$, real or complex, we define the $n$-vector
$$\mathcal{A}x^{m-1} := \left( \sum_{i_2, \ldots, i_m = 1}^{n} a_{i i_2 \cdots i_m} x_{i_2} \cdots x_{i_m} \right)_{1 \le i \le n}.$$
We also denote the $n$-vector $x^{[m-1]} := (x_1^{m-1}, \ldots, x_n^{m-1})$.
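To make the contraction concrete, here is a minimal numerical sketch (our own illustration; the dense array layout, the helper name `tensor_contract`, and the sample entries are hypothetical and not taken from the references):

```python
import numpy as np

def tensor_contract(A, x):
    """Contract the last m-1 indices of an m-order tensor A with the vector x,
    returning the n-vector A x^{m-1}."""
    v = A
    for _ in range(A.ndim - 1):
        v = v @ x          # each matmul contracts the current last index with x
    return v               # shape (n,)

# Small check with m = 3, n = 2:
A = np.zeros((2, 2, 2))
A[0, 0, 0], A[1, 1, 1] = 1.0, -2.0
x = np.array([1.0, 3.0])
print(tensor_contract(A, x))   # the vector A x^{m-1}
print(x ** 2)                  # the componentwise power x^{[m-1]}
```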
The following were first introduced and studied by Qi and Lim [1,2,3,4].
Definition 1.
Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$. A pair $(\lambda, x) \in \mathbb{C} \times (\mathbb{C}^n \setminus \{0\})$ is called an eigenvalue–eigenvector pair (or simply eigenpair) of $\mathcal{A}$ if it satisfies the equation
$$\mathcal{A}x^{m-1} = \lambda x^{[m-1]}.$$
We call $(\lambda, x)$ an H-eigenpair if both are real.
Definition 2.
Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$. A pair $(\lambda, x) \in \mathbb{C} \times (\mathbb{C}^n \setminus \{0\})$ is called an E-eigenvalue and E-eigenvector pair (or simply E-eigenpair) of $\mathcal{A}$ if it satisfies the equations
$$\mathcal{A}x^{m-1} = \lambda x, \qquad x^{\top} x = 1.$$
We call $(\lambda, x)$ a Z-eigenpair if both are real.
The notion of weakly symmetric tensors was first introduced in [5].
Definition 3.
$\mathcal{A} \in \mathbb{R}^{[m,n]}$ is called weakly symmetric if the associated homogeneous polynomial
$$f_{\mathcal{A}}(x) := \sum_{i_1, i_2, \ldots, i_m = 1}^{n} a_{i_1 i_2 \cdots i_m}\, x_{i_1} x_{i_2} \cdots x_{i_m}$$
satisfies $\nabla f_{\mathcal{A}}(x) = m\, \mathcal{A}x^{m-1}$. In the tensor notation, according to [2], the homogeneous polynomial $f_{\mathcal{A}}(x)$ is also denoted by $\mathcal{A}x^m$.
Although this definition is not as intuitive as symmetric tensors, it nevertheless provides the same desired variational (extremal) property as symmetric tensors. It should also be noted that, for m = 2 , symmetric matrices and weakly symmetric matrices coincide. However, it is shown in [5] that a symmetric tensor is necessarily weakly symmetric for m > 2 , but the converse is not true in general. Furthermore, if A R [ m , n ] is weakly symmetric, by homogeneity, it satisfies the familiar Euler’s identity:
$$\mathcal{A}x^m = f_{\mathcal{A}}(x) = \frac{1}{m}\,\langle \nabla f_{\mathcal{A}}(x), x \rangle = \langle \mathcal{A}x^{m-1}, x \rangle,$$
where $\langle \cdot, \cdot \rangle$ denotes the standard inner product on $\mathbb{R}^n$.
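As a quick numerical sanity check of weak symmetry and Euler's identity, one can compare a finite-difference gradient of $f_{\mathcal{A}}$ with $m\,\mathcal{A}x^{m-1}$. The sketch below is our own illustration, assuming the tensor is stored as a dense numpy array (here a fully symmetric, hence weakly symmetric, example):

```python
import numpy as np

def f_A(A, x):
    """Evaluate f_A(x) = A x^m by contracting every index of A with x."""
    v = A
    for _ in range(A.ndim):
        v = v @ x
    return float(v)

def grad_fd(A, x, h=1e-6):
    """Central finite-difference approximation of the gradient of f_A at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f_A(A, x + e) - f_A(A, x - e)) / (2 * h)
    return g

# f_A(x) = x1^3 - 3 x1 x2^2, stored as a symmetric 3rd-order tensor.
A = np.zeros((2, 2, 2))
A[0, 0, 0] = 1.0
A[0, 1, 1] = A[1, 0, 1] = A[1, 1, 0] = -1.0
x = np.array([0.7, -0.3])
m = A.ndim
Axm1 = A @ x @ x                                          # A x^{m-1}
print(np.allclose(grad_fd(A, x), m * Axm1, atol=1e-5))    # weak symmetry
print(np.isclose(f_A(A, x), Axm1 @ x))                    # Euler's identity
```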
Both H-eigenvalues and Z-eigenvalues of a given tensor have found numerous applications in numerical multilinear algebra, image processing, higher order Markov chains, and spectral hypergraph theory. In particular, it is a well-known fact (e.g., [2]) that the extremal Z-eigenvalues correspond to the constrained extremal values of $f_{\mathcal{A}}(x)$ on the unit sphere $S^{n-1}$. However, most theoretical and numerical developments [6,7,8,9,10,11] have been devoted to finding the extremal eigenvalues and eigenvectors, and very little attention has been given to the zero eigenvalue and its eigenvectors. It is nevertheless important not to overlook the zero eigenvectors, since a positive semi-definite (PSD) tensor that is not positive definite must have zero eigenpairs. From a practical standpoint, for large values of $m$ or $n$, finding real solutions to a high-degree multivariate polynomial system may not be feasible. In particular, as the degree $m$ increases, even solving a single multivariate polynomial equation becomes both time-consuming and costly. With this in mind, we endeavor to provide reasonable upper and lower bounds on the distance from a given initial point $e \in \mathbb{R}^n \setminus \{0\}$ with $f_{\mathcal{A}}(e) \neq 0$ to the nearest zero eigenvector without relying on heavy computer software.
Throughout the paper, we shall always assume our tensor is nonzero and weakly symmetric. Our paper is organized as follows. In Section 2, we begin by considering a more general problem concerning the real vanishing $V_{\mathbb{R}}(f) = f^{-1}(0)$ of a degree $m$ real homogeneous polynomial $f$ in $n$ indeterminates. We provide a lower bound on the distance from a given point $e$ outside $V_{\mathbb{R}}(f)$ to $V_{\mathbb{R}}(f)$. This lower bound is completely determined by the combinatorial nature of the coefficients of $f$ itself. In Section 3, we establish an upper bound, based on the analytic and algebraic nature of $f$, on the distance from a given initial point $e$ outside $V_{\mathbb{R}}(f)$ to $V_{\mathbb{R}}(f)$. In Section 4, we establish the connection between the real zeros of the associated homogeneous polynomial $f_{\mathcal{A}}$ and the zero eigenvectors of a nonzero $m$-order $n$-dimensional weakly symmetric tensor $\mathcal{A}$. We first examine the basic topological structure of $V_{\mathbb{R}}(f_{\mathcal{A}})$ as well as the critical point set $Z(f_{\mathcal{A}})$. We then provide both upper and lower bounds on the distance from a given initial point $e$ with $f_{\mathcal{A}}(e) \neq 0$ to the nearest zero Z-eigenvector. In Section 5, we give a variety of examples to demonstrate how the upper and lower bounds work.

2. Lower Bound for the Distance to the Real Vanishing

For simplicity, we shall only work with real homogeneous polynomials. We first establish some notational conventions, which will be used throughout the rest of this paper. We denote the standard Euclidean norm on $\mathbb{R}^n$ by $\|\cdot\|_2$, the standard unit ball in $\mathbb{R}^n$ by $D^n$, and the standard unit sphere by $S^{n-1}$, i.e.,
$$D^n := \{x \in \mathbb{R}^n : \|x\|_2 \le 1\} \quad \text{and} \quad S^{n-1} := \{x \in \mathbb{R}^n : \|x\|_2 = 1\}.$$
For a positive integer $d \ge 1$, we denote by $\mathbb{R}[x_1, \ldots, x_n]_d$ the set of all real homogeneous polynomials of degree $d$ in the indeterminates $x = (x_1, \ldots, x_n)$. Let $f \in \mathbb{R}[x_1, \ldots, x_n]_d$. Since $f : \mathbb{R}^n \to \mathbb{R}$ is continuous, we may define the uniform norm of $f$ on $S^{n-1}$ by $\|f\|_\infty := \max_{x \in S^{n-1}} |f(x)|$. Furthermore, we denote by
$$V_{\mathbb{R}}(f) := f^{-1}(0) = \{x \in \mathbb{R}^n : f(x) = 0\}$$
the real vanishing of $f$, which is always a closed subset of $\mathbb{R}^n$. The goal of this section is to bound the distance from a point outside $V_{\mathbb{R}}(f)$ to $V_{\mathbb{R}}(f)$ from below.
Lemma 1.
Let $f(x) \in \mathbb{R}[x_1, \ldots, x_n]_d$ with $d \ge 1$. Then, there exists a constant $N(f) > 0$ such that
$$|f(x)| \le \binom{d+n-1}{n-1} \cdot N(f), \qquad \forall\, x \in D^n.$$
Namely, $\|f\|_\infty \le \binom{d+n-1}{n-1} \cdot N(f)$.
Proof. 
Let $\nu = (\nu_1, \ldots, \nu_n)$ with $0 \le \nu_i \le d$ be a multi-index such that $|\nu| = \nu_1 + \cdots + \nu_n = d$. We write
$$f(x) = \sum_{\nu = (\nu_1, \ldots, \nu_n),\ |\nu| = d} A_\nu\, x_1^{\nu_1} \cdots x_n^{\nu_n}$$
in terms of distinct monomials. Let $N(f) := \max_{\nu,\ |\nu| = d} |A_\nu|$ be the largest monomial coefficient in absolute value. Since there are at most $\binom{d+n-1}{n-1}$ distinct monomials of degree $d$ in $f(x)$, and each satisfies $|x_1^{\nu_1}\cdots x_n^{\nu_n}| \le 1$ on $D^n$, our assertion follows. □
Lemma 2.
Let $f(x) \in \mathbb{R}[x_1, \ldots, x_n]_d$ with $d \ge 1$. Then, there exists a constant $C_{d,n}(f) > 0$ such that for all $x, y \in D^n$,
$$|f(x) - f(y)| \le C_{d,n}(f) \cdot \|x - y\|_2.$$
Proof. 
Let $x, y \in D^n$. Since $D^n$ is convex, by the mean value theorem, there exists a point $c$ along the line segment $tx + (1-t)y$, $0 \le t \le 1$, joining $x$ and $y$ such that
$$f(x) - f(y) = \langle \nabla f(c), x - y \rangle.$$
By the Cauchy–Schwarz inequality, we have:
$$|f(x) - f(y)| \le \|\nabla f(c)\|_2 \cdot \|x - y\|_2.$$
We now compute, for $x \in D^n$:
$$\left| \frac{\partial}{\partial x_i} f(x) \right| = \left| \sum_{\nu,\ |\nu| = d} A_\nu\, \frac{\partial}{\partial x_i}\!\left( x_1^{\nu_1} \cdots x_n^{\nu_n} \right) \right| \le \sum_{\nu,\ |\nu| = d} d \cdot N(f) \le d \cdot \binom{d+n-1}{n-1} \cdot N(f).$$
Setting $C_{d,n}(f) := d \cdot \sqrt{n} \cdot \binom{d+n-1}{n-1} \cdot N(f)$, it follows that $\|\nabla f(x)\|_2 \le C_{d,n}(f)$ for all $x \in D^n$, as required. Our assertion now follows. □
Let $f(x) \in \mathbb{R}[x_1, \ldots, x_n]_d$ and let $e \in S^{n-1}$ be such that $f(e) \neq 0$. Suppose $V_{\mathbb{R}}(f) \neq \{0\}$; then, for any $y \in S^{n-1} \cap V_{\mathbb{R}}(f)$, Lemma 2 yields that
$$\|e - y\|_2 \ge \frac{|f(e)|}{C_{d,n}(f)} = \frac{|f(e)|}{d \cdot \sqrt{n} \cdot \binom{d+n-1}{n-1} \cdot N(f)}.$$
Since $S^{n-1} \cap V_{\mathbb{R}}(f)$ is closed and bounded, it is compact; hence, we can define
$$d(e, V_{\mathbb{R}}(f)) := \min_{y \in S^{n-1} \cap V_{\mathbb{R}}(f)} \|e - y\|_2$$
as the Euclidean distance from $e$ to $V_{\mathbb{R}}(f)$ on the unit sphere, and set $d_2(e) := \frac{|f(e)|}{d \cdot \sqrt{n} \cdot \binom{d+n-1}{n-1} \cdot N(f)}$; then, we have:
Theorem 1.
Let $f(x) \in \mathbb{R}[x_1, \ldots, x_n]_d$ and let $e \in S^{n-1}$ be such that $f(e) \neq 0$. Assume that $V_{\mathbb{R}}(f) \neq \{0\}$; then $d(e, V_{\mathbb{R}}(f)) \ge d_2(e)$.
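For instance, for the quartic $f(x_1, x_2) = (x_1^2 - x_2^2)^2$ treated in Example 8 below, the quantities $N(f)$, $C_{d,n}(f)$, and $d_2(e_1)$ can be evaluated directly. The following is a minimal sketch (our own, with the coefficients entered by hand) that reproduces $d_2(e_1) = \frac{1}{40\sqrt{2}} \approx 0.0177$:

```python
from math import comb, sqrt

# f(x1, x2) = x1^4 - 2 x1^2 x2^2 + x2^4, stored as {exponent tuple: coefficient}
coeffs = {(4, 0): 1.0, (2, 2): -2.0, (0, 4): 1.0}
d, n = 4, 2

N_f = max(abs(c) for c in coeffs.values())            # largest |coefficient|
C_dn = d * sqrt(n) * comb(d + n - 1, n - 1) * N_f     # Lipschitz constant of Lemma 2

def f(x):
    return sum(c * x[0] ** a * x[1] ** b for (a, b), c in coeffs.items())

e = (1.0, 0.0)                                        # initial point on S^{n-1}
d2 = abs(f(e)) / C_dn                                 # lower bound of Theorem 1
print(N_f, C_dn, d2)                                  # 2.0, 40*sqrt(2), ~0.0177
```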

3. Upper Bound for the Distance to the Real Vanishing

In this section, we endeavor to establish a nontrivial upper bound for the distance to $V_{\mathbb{R}}(f)$ from a given point. Fix $e \in S^{n-1} \setminus V_{\mathbb{R}}(f)$ and let $x \in V_{\mathbb{R}}(f) \cap S^{n-1}$. Provided $x \neq \pm e$, there exists a unique geodesic (an arc of a great circle $S^1$) on $S^{n-1}$ joining $e$ and $x$. From differential geometry, we know a geodesic is distance minimizing from a given point $e$ until reaching its conjugate point, which in this case is the antipodal point $-e \neq x$. This means that the arc length of the great circle joining $x$ to $e$ is the spherical distance between them. Consequently, the two distinct lines $[e]$ and $[x]$ can be at most $\pi/2$-spherical distance apart, so projectively speaking, the Euclidean distance between $[e]$ and $[x]$ is at most $\sqrt{2}$, which is a trivial upper bound.
Surprisingly, a nontrivial upper bound is a much more challenging task. We will need additional tools from a special class of polynomials, known as hyperbolic polynomials. The existing literature on both real stable and hyperbolic polynomials is vast. For a more in-depth reading on this topic, we refer the interested reader to [12,13]. However, to be more self-contained, we introduce the following definitions.
Definition 4.
A nonzero polynomial $p(x) \in \mathbb{R}[x_1, \ldots, x_n]$ is called real stable if it has no zeros in $\mathcal{H}^n := \{z \in \mathbb{C} : \operatorname{Im}(z) > 0\}^n$, i.e., $\operatorname{Im}(x_i) > 0$ for all $i$ implies $p(x_1, \ldots, x_n) \neq 0$. It is a well-known fact that a nonzero polynomial $p(x) \in \mathbb{R}[x_1, \ldots, x_n]$ is real stable if and only if, for all $x \in \mathbb{R}^n$ and $e \in \mathbb{R}^n_{>0}$, the polynomial $p(x + te) \in \mathbb{R}[t]$ is real rooted.
Definition 5.
A degree $d$ homogeneous polynomial $p(x) \in \mathbb{R}[x_1, \ldots, x_n]_d$ is called hyperbolic in direction $e \in \mathbb{R}^n$ if $p(e) \neq 0$ and, for every $x \in \mathbb{R}^n$, the univariate polynomial $t \mapsto p(x + te) \in \mathbb{R}[t]$ is real rooted, i.e., it has only real zeros.
The study of hyperbolic polynomials dates back to Gårding and Hurwitz and has since played a vital role in hyperbolic programming [12,13]. To help visualize the notion of hyperbolicity, consider the restriction $p(x + te)$ of $p(x)$ to the line through $x$ parallel to the fixed direction $e$; we insist that $p(x + te) \in \mathbb{R}[t]$ be real rooted.
Some of the most noteworthy examples of hyperbolic polynomials are as follows:
Example 1.
The Lorentzian quadratic $p(x_1, x_2, \ldots, x_n) := x_1^2 - x_2^2 - \cdots - x_n^2$ is hyperbolic in direction $e = (1, 0, \ldots, 0)$.
Example 2.
Let $1 \le k \le n$. The degree $k$ elementary symmetric polynomial $\sigma_k(x_1, \ldots, x_n) := \sum_{1 \le i_1 < \cdots < i_k \le n} x_{i_1} \cdots x_{i_k}$ is hyperbolic in direction $e = (1, \ldots, 1)$.
Example 3.
Let $\ell_i(x) \in \mathbb{R}[x_1, \ldots, x_n]_1$ for $1 \le i \le d$ be linear forms; then their product $p(x) = \ell_1(x) \cdots \ell_d(x)$ is hyperbolic in direction $e$ as long as $\ell_i(e) \neq 0$ for every $i$.
Example 4.
Let $\operatorname{Sym}_n(\mathbb{R})$ denote the real vector space of all $n \times n$ real symmetric matrices. The determinant function $\det : \operatorname{Sym}_n(\mathbb{R}) \to \mathbb{R}$ is hyperbolic in direction $e = I_n$, the $n \times n$ identity matrix.
Example 5.
Let $G = (V, E)$ be a finite graph; then the matching polynomial of $G$ is real stable, and hence hyperbolic in any direction $e > 0$.
A very important property of hyperbolic polynomials states that, if $p, q \in \mathbb{R}[x_1, \ldots, x_n]$ are both hyperbolic in direction $e$, then so is their product $p \cdot q$. Moreover, a polynomial-time algorithm, based on Newton's identities, can be used to check the real rootedness of a given polynomial, owing to the following result:
Theorem (Hermite–Sylvester). A polynomial $p(t) = \prod_{k=1}^{n} (t - \lambda_k)$ is real rooted if and only if the $n \times n$ Hermitian matrix $H$ with $H_{ij} = \sum_{k=1}^{n} \lambda_k^{\,i+j-2}$ is a positive semi-definite (PSD) matrix.
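A rough sketch of this test (our own illustration, not an implementation from [12,13]): Newton's identities produce the power sums $s_k = \sum_i \lambda_i^k$ directly from the coefficients of a monic $p(t)$, without computing the roots, and the Hankel matrix $H_{ij} = s_{i+j-2}$ is then checked for positive semi-definiteness.

```python
import numpy as np

def power_sums(coeffs, K):
    """Power sums s_1..s_K of the roots of the monic polynomial
    t^n + coeffs[0] t^(n-1) + ... + coeffs[n-1], via Newton's identities."""
    n = len(coeffs)
    e = [0.0] * (K + 1)                    # signed elementary symmetric polynomials
    for i in range(1, min(n, K) + 1):
        e[i] = (-1) ** i * coeffs[i - 1]
    s = [0.0] * (K + 1)
    for k in range(1, K + 1):
        acc = (-1) ** (k - 1) * k * e[k] if k <= n else 0.0
        for i in range(1, min(k - 1, n) + 1):
            acc += (-1) ** (i - 1) * e[i] * s[k - i]
        s[k] = acc
    return s

def is_real_rooted(coeffs, tol=1e-9):
    """Hermite-Sylvester test: p is real rooted iff H_ij = s_{i+j-2} is PSD."""
    n = len(coeffs)
    s = power_sums(coeffs, 2 * n - 2)
    s[0] = float(n)                        # s_0 equals the number of roots
    H = np.array([[s[i + j] for j in range(n)] for i in range(n)])
    return bool(np.all(np.linalg.eigvalsh(H) >= -tol))

print(is_real_rooted([0.0, -1.0]))         # t^2 - 1: True
print(is_real_rooted([0.0, 1.0]))          # t^2 + 1: False
```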
We now return to the upper bound estimate. In [14], a similar upper bound was found by M. Shub for complex homogeneous polynomials. However, since we are only concerned with the real zeros of a homogeneous polynomial, the original argument must be accordingly modified to suit the needs of real solutions.
Theorem 2.
Let $f(x) \in \mathbb{R}[x_1, \ldots, x_n]_d$. Assume $V_{\mathbb{R}}(f) \neq \{0\}$. Let $e \in S^{n-1}$ be such that $f(e) \neq 0$. If $f(x)$ is hyperbolic in the direction $e$, then the Euclidean distance from the nearest zero of $f$ on the unit sphere to $e$ is at most $d^*(e)$, where
$$d^*(e) := \sqrt{\,2 - 2\sqrt{1 - \left( \frac{|f(e)|}{\|f\|_\infty} \right)^{2/d}}\,}.$$
Proof. 
Without loss of generality, by a rotation if necessary, we may assume $e = e_1 = (1, 0, \ldots, 0) \in S^{n-1}$ and $|f(e)| > 0$. For any $x \in \mathbb{R}^n = \mathbb{R} \times \mathbb{R}^{n-1}$, we write $x = (x_1, y)$ with $x_1 \in \mathbb{R}$ and $y \in \mathbb{R}^{n-1}$. Then, the homogeneous polynomial $f(x)$ takes the form
$$f(x) = f(x_1, y) = H_0\, x_1^d + \sum_{i=1}^{d} H_i(y)\, x_1^{d-i},$$
where $H_i(y) \in \mathbb{R}[x_2, \ldots, x_n]_i$ is homogeneous of degree $i$, $0 \le i \le d$, in the remaining indeterminates. Clearly, $f(e) = f(1, 0) = H_0 \neq 0$, so we may instead consider the monic polynomial
$$F(x_1, y) = \frac{f(x)}{f(e)} = x_1^d + \sum_{i=1}^{d} \hat{H}_i(y)\, x_1^{d-i},$$
where $\hat{H}_i(y) \in \mathbb{R}[x_2, \ldots, x_n]_i$ for $1 \le i \le d$.
Let $z_0 \in V_{\mathbb{R}}(f) \cap S^{n-1}$ be the nearest zero of $f$ to $e$, and let $s$ be the arc length of the geodesic (great circle) joining $e$ and $z_0$. Then, $F(x_1, y)$ has no real zeros inside the double cone $K_e$ whose central symmetry axis is in the direction $e$ and whose radius in the hyperplane $x_1 = 1$ is $\tan s$.
Fix $0 \neq y \in D^{n-1}$ and let $x = (x_1, y) \in D^1 \times D^{n-1} \supseteq D^n$. We now study the univariate polynomial
$$g(x_1) := F(x_1, y), \qquad x_1 \in D^1 = [-1, 1].$$
By assumption, since $f(x)$ is hyperbolic in direction $e$, $g(x_1) = F(x_1, y)$ is real rooted, with all real zeros inside the double cone of radius $r = (\cot s) \cdot \|(0, y)\|_2$ whose central symmetry axis is in the direction $(0, y)$. According to Vieta's formulas, the coefficient $\hat{H}_i(y)$ is, up to sign, precisely the $i$th elementary symmetric function of the roots; since all roots are real and lie inside the cone of radius $r$, we have
$$|\hat{H}_i(y)| \le \binom{d}{i} r^i.$$
It follows that
$$|F(x)| = \frac{|f(x)|}{|f(e)|} \le |x_1|^d + \sum_{i=1}^{d} \binom{d}{i} r^i |x_1|^{d-i} = (|x_1| + r)^d = \left( |x_1| + (\cot s) \cdot \|(0, y)\|_2 \right)^d.$$
If $x = (x_1, y) \in S^{n-1}$, then
$$|F(x)| \le \left( |x_1| + (\cot s) \sqrt{1 - x_1^2} \right)^d.$$
It is a straightforward calculus exercise (or an application of the Cauchy–Schwarz inequality to the vectors $(1, \cot s)$ and $(|x_1|, \sqrt{1 - x_1^2})$) to see that
$$\max_{|x_1| \le 1} \left( |x_1| + (\cot s) \sqrt{1 - x_1^2} \right) = \sqrt{1 + \cot^2 s} = \csc s, \qquad 0 < s \le \frac{\pi}{2}.$$
This implies
$$\|f\|_\infty \le |f(e)|\, (\csc s)^d \quad \text{or} \quad \sin s \le \left( \frac{|f(e)|}{\|f\|_\infty} \right)^{1/d}.$$
We now return to the Euclidean distance $\|e - z_0\|_2$ between $e$ and $z_0$. Note that $\|e - z_0\|_2$ is precisely the length of the chord connecting $e$ and $z_0$ subtending the arc of length $s$, and it is therefore easy to see that
$$\|e - z_0\|_2^2 = 4 \sin^2 \frac{s}{2} = 2 (1 - \cos s),$$
which implies
$$\left( 1 - \frac{1}{2} \|e - z_0\|_2^2 \right)^2 = \cos^2 s = 1 - \sin^2 s \ge 1 - \left( \frac{|f(e)|}{\|f\|_\infty} \right)^{2/d},$$
or equivalently,
$$\|e - z_0\|_2 \le d^*(e) = \sqrt{\,2 - 2\sqrt{1 - \left( \frac{|f(e)|}{\|f\|_\infty} \right)^{2/d}}\,}.$$
This completes the proof. □
Although $\|f\|_\infty$ is not computed directly from the coefficients of $f$ itself, it is not difficult to obtain by available constrained optimization methods, for instance those of De Lathauwer et al. [15] and Kofidis and Regalia [16], or by using the MaxValue and MinValue commands provided directly by Mathematica [17].
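As a cruder alternative, $\|f\|_\infty$ can be estimated by random sampling on the unit sphere; the sketch below is our own illustration for the polynomial of Example 9. Sampling can only underestimate $\|f\|_\infty$, which, by monotonicity of the formula, can only inflate $d^*(e)$, so the resulting radius is still a valid (if slightly weaker) upper bound.

```python
import numpy as np

def f(x):   # Example 9: f(x1, x2) = (x1 + 2 x2)^2 (x1^2 - x2^2)
    return (x[0] + 2 * x[1]) ** 2 * (x[0] ** 2 - x[1] ** 2)

def sup_norm_estimate(f, n, samples=200_000, seed=0):
    """Monte Carlo lower estimate of ||f||_inf = max over the unit sphere of |f|."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(samples, n))
    X /= np.linalg.norm(X, axis=1, keepdims=True)    # project samples onto the sphere
    return max(abs(f(x)) for x in X)

def d_star(f_e, f_sup, d):
    """Upper bound of Theorem 2."""
    return np.sqrt(2 - 2 * np.sqrt(1 - (abs(f_e) / f_sup) ** (2 / d)))

e = np.array([1.0, 0.0])
f_sup = max(sup_norm_estimate(f, n=2), abs(f(e)))    # ~4.32 for this polynomial
print(f_sup, d_star(f(e), f_sup, d=4))               # d*(e1) ~ 0.75
```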
Combining the results of Theorems 1 and 2, we have the following “nested spheres” estimate:
Corollary 1.
Let $f(x) \in \mathbb{R}[x_1, \ldots, x_n]_d$. Assume $V_{\mathbb{R}}(f) \neq \{0\}$ and let $e \in S^{n-1}$ be such that $f(e) \neq 0$. Then:
  • $d_2(e) \le d(e, V_{\mathbb{R}}(f))$.
  • If, in addition, $f(x)$ is hyperbolic in direction $e$, then $d(e, V_{\mathbb{R}}(f)) \le d^*(e)$.

4. The Zero Eigenvectors of a Nonzero Weakly Symmetric Tensor

In this section, we turn our attention to the problem of locating the nearest zero Z-eigenvector of a nonzero weakly symmetric tensor $\mathcal{A} = (a_{i_1 \cdots i_m}) \in \mathbb{R}^{[m,n]}$ from a given point. We shall henceforth assume that $0 \neq \mathcal{A} = (a_{i_1 \cdots i_m}) \in \mathbb{R}^{[m,n]}$ is weakly symmetric and has zero as a Z-eigenvalue.
Given an initial point $e \in \mathbb{R}^n \setminus \{0\}$, and assuming $\mathcal{A}$ has zero eigenvectors, we would like to provide both lower and upper bounds on the Euclidean distance from $e$ to the nearest zero Z-eigenvector of $\mathcal{A}$.
In order to make an easier transition from real homogeneous polynomials to real weakly symmetric tensors, we begin by analyzing the critical points of a homogeneous polynomial $f \in \mathbb{R}[x_1, \ldots, x_n]_d$. We denote by
$$Z(f) := \{x \in \mathbb{R}^n : \nabla f(x) = 0\}$$
the set of critical points of $f$. It is worth noting that both $V_{\mathbb{R}}(f)$ and $Z(f)$ are closed and path-connected, with $0 \in Z(f) \subseteq V_{\mathbb{R}}(f)$. To see that $V_{\mathbb{R}}(f)$ is path-connected, note that clearly $0 \in V_{\mathbb{R}}(f)$, and observe that for any $0 \neq x \in V_{\mathbb{R}}(f)$, we have $f(tx) = t^d f(x) = 0$ for all $t \in \mathbb{R}$; thus, the whole line $[x] := \operatorname{span}\{x\}$ lies in $V_{\mathbb{R}}(f)$. Similarly, since
$$0 \in Z(f) = \bigcap_{i=1}^{n} V_{\mathbb{R}}(f_i), \qquad \text{where } f_i(x) = \frac{\partial f}{\partial x_i}(x),$$
we have that $Z(f)$ is also closed and path-connected. That $Z(f) \subseteq V_{\mathbb{R}}(f)$ follows from the fact that $f(x) = \frac{1}{d} \langle \nabla f(x), x \rangle$.
In multivariate calculus, given a differentiable function $f : \mathbb{R}^n \to \mathbb{R}$, a point $p \in \mathbb{R}^n$ is said to be a critical point of $f$ if $\nabla f(p) = 0$. Furthermore, $p$ is said to be a non-degenerate critical point of $f$ if the Hessian matrix of $f$ at $p$ is nonsingular. By the famous Morse Lemma, all non-degenerate critical points are isolated; that is, if $p$ is a non-degenerate critical point of $f$, then there exists a neighborhood of $p$ which contains no other critical points of $f$. Furthermore, a non-degenerate critical point is a local maximum, a local minimum, or a saddle point of the function $f$.
Given a nonzero weakly symmetric tensor $\mathcal{A} \in \mathbb{R}^{[m,n]}$, let $f_{\mathcal{A}}(x) \in \mathbb{R}[x_1, \ldots, x_n]_m$ be its associated homogeneous polynomial. Since $0$ is always a critical point of $f_{\mathcal{A}}$, if $0$ is non-degenerate, then $0$ must be an isolated point in $Z(f_{\mathcal{A}})$. Since $Z(f_{\mathcal{A}})$ is path-connected, we then have $Z(f_{\mathcal{A}}) = \{0\}$. This implies that $\nabla f_{\mathcal{A}}(x) = 0$ has only the trivial solution; hence, $0$ cannot be a Z- (or H-)eigenvalue of $\mathcal{A}$. This observation directs our attention to the case where $0$ is a degenerate critical point of $f_{\mathcal{A}}$.
Example 6.
The classical example of the “monkey saddle”, defined by $f(x, y) = x^3 - 3xy^2$, has its only critical point at $0 = (0, 0)$, which happens to be a degenerate critical point. However, since $0$ is the only critical point, $\nabla f(x, y) = 0$ has only the trivial solution.
Example 6 exhibits a degenerate, yet isolated, critical point, which therefore still fails to produce zero eigenvectors. Suppose instead that $0$ is a non-isolated critical point of $f_{\mathcal{A}}$ (and therefore necessarily degenerate); then there exists $x_0 \in Z(f_{\mathcal{A}}) \setminus \{0\}$, i.e., $\nabla f_{\mathcal{A}}(x_0) = 0$, so $x_0$ is a Z- (or H-)eigenvector associated with the eigenvalue $0$. Hence, $0$ must be a Z- (or H-)eigenvalue of $\mathcal{A}$.
Putting these observations together, we reach the following conclusion:
Proposition 1.
Let $\mathcal{A} \in \mathbb{R}^{[m,n]}$ be a weakly symmetric tensor with associated homogeneous polynomial $f_{\mathcal{A}}(x) \in \mathbb{R}[x_1, \ldots, x_n]_m$. The following are equivalent:
  • $0$ is a Z- (or H-)eigenvalue of $\mathcal{A}$.
  • $0$ is a non-isolated critical point of $f_{\mathcal{A}}$.
  • $\dim(Z(f_{\mathcal{A}})) > 0$.
We note that, when $0$ is a non-isolated critical point of $f_{\mathcal{A}}$, it is still possible to have $Z(f_{\mathcal{A}}) \subsetneq V_{\mathbb{R}}(f_{\mathcal{A}})$, as the following example shows.
Example 7.
Consider
$$f(x_1, x_2) := x_1^2 x_2^2 (x_1^2 - x_2^2) \in \mathbb{R}[x_1, x_2]_6.$$
It is easy to see that $V_{\mathbb{R}}(f)$ consists of four lines: $x_1 = 0$, $x_2 = 0$, and $x_1 = \pm x_2$. However, $Z(f)$ consists of only the coordinate axes $x_1 = 0$ and $x_2 = 0$.
Within the framework of tensors, we have the following alternative lower bounds on the distance from a given point $e \in \mathbb{R}^n$ with $f_{\mathcal{A}}(e) \neq 0$ to the nearest zero Z-eigenvector of $\mathcal{A}$.
Lemma 3.
Let $0 \neq \mathcal{A} = (a_{i_1 \cdots i_m}) \in \mathbb{R}^{[m,n]}$ be weakly symmetric with associated homogeneous polynomial $f_{\mathcal{A}}(x) \in \mathbb{R}[x_1, \ldots, x_n]_m$. Then, there exists a constant $M(\mathcal{A}) > 0$ such that
$$|f_{\mathcal{A}}(x)| \le n^m \cdot M(\mathcal{A}), \qquad \forall\, x \in D^n.$$
Namely, $\|f_{\mathcal{A}}\|_\infty \le n^m \cdot M(\mathcal{A})$.
Proof. 
By definition,
$$f_{\mathcal{A}}(x) = \sum_{i_1, i_2, \ldots, i_m = 1}^{n} a_{i_1 i_2 \cdots i_m}\, x_{i_1} x_{i_2} \cdots x_{i_m}.$$
We set
$$M(\mathcal{A}) := \max_{1 \le i_1, i_2, \ldots, i_m \le n} |a_{i_1 i_2 \cdots i_m}|,$$
the largest entry of $\mathcal{A}$ in absolute value. Since $|x_{i_1} \cdots x_{i_m}| \le 1$ for $x \in D^n$ and the sum has $n^m$ terms, for all $x \in D^n$ we have
$$|f_{\mathcal{A}}(x)| \le n^m \cdot M(\mathcal{A}). \quad \square$$
Lemma 4.
Let $0 \neq \mathcal{A} = (a_{i_1 \cdots i_m}) \in \mathbb{R}^{[m,n]}$ be weakly symmetric with associated homogeneous polynomial $f_{\mathcal{A}}(x) \in \mathbb{R}[x_1, \ldots, x_n]_m$. Then, there exists a constant $C(\mathcal{A}) > 0$ such that for all $x, y \in D^n$,
$$|f_{\mathcal{A}}(x) - f_{\mathcal{A}}(y)| \le C(\mathcal{A}) \cdot \|x - y\|_2.$$
Proof. 
The proof is similar to that of Lemma 2. We have the following alternative form:
$$f_{\mathcal{A}}(x) = \sum_{|\alpha| = m,\ 1 \le j_1 < \cdots < j_r \le n} \binom{m}{\alpha}\, a_{j_1^{\alpha_1} \cdots j_r^{\alpha_r}}\, x_{j_1}^{\alpha_1} \cdots x_{j_r}^{\alpha_r},$$
where $\binom{m}{\alpha} = \frac{m!}{\alpha_1! \cdots \alpha_r!}$ and $a_{j_1^{\alpha_1} \cdots j_r^{\alpha_r}}$ denotes the entry whose index contains $j_k$ repeated $\alpha_k$ times. Let $x, y \in D^n$. Since $D^n$ is convex, by the mean value theorem, there exists a point $c$ along the line segment $tx + (1-t)y$, $0 \le t \le 1$, joining $x$ and $y$ such that
$$f_{\mathcal{A}}(x) - f_{\mathcal{A}}(y) = \langle \nabla f_{\mathcal{A}}(c), x - y \rangle.$$
By the Cauchy–Schwarz inequality, we have:
$$|f_{\mathcal{A}}(x) - f_{\mathcal{A}}(y)| \le \|\nabla f_{\mathcal{A}}(c)\|_2 \cdot \|x - y\|_2.$$
Since
$$\frac{\partial}{\partial x_i}\left( x_{j_1}^{\alpha_1} \cdots x_{j_s}^{\alpha_s} \cdots x_{j_r}^{\alpha_r} \right) = \begin{cases} \alpha_s\, x_{j_1}^{\alpha_1} \cdots x_{j_s}^{\alpha_s - 1} \cdots x_{j_r}^{\alpha_r}, & j_s = i, \\ 0, & \text{if no } j_s \text{ equals } i, \end{cases}$$
we have:
$$\frac{\partial}{\partial x_i} f_{\mathcal{A}}(x) = \sum_{|\alpha| = m,\ 1 \le j_1 < \cdots < j_r \le n} \binom{m}{\alpha}\, a_{j_1^{\alpha_1} \cdots j_r^{\alpha_r}}\, \frac{\partial}{\partial x_i}\left( x_{j_1}^{\alpha_1} \cdots x_{j_s}^{\alpha_s} \cdots x_{j_r}^{\alpha_r} \right).$$
It follows that, for $x \in D^n$,
$$\left| \frac{\partial}{\partial x_i} f_{\mathcal{A}}(x) \right| \le \sum_{|\alpha| = m,\ 1 \le j_1 < \cdots < j_r \le n} \binom{m}{\alpha} \left| a_{j_1^{\alpha_1} \cdots j_r^{\alpha_r}} \right| (\alpha_1 + \cdots + \alpha_r) \le \sum_{|\alpha| = m,\ 1 \le j_1 < \cdots < j_r \le n} \binom{m}{\alpha}\, m \cdot M(\mathcal{A}) = m \cdot n^m \cdot M(\mathcal{A}).$$
We set $C(\mathcal{A}) := m \cdot n^{m + 1/2} \cdot M(\mathcal{A})$, and it follows that:
$$\|\nabla f_{\mathcal{A}}(x)\|_2^2 = \sum_{i=1}^{n} \left| \frac{\partial}{\partial x_i} f_{\mathcal{A}}(x) \right|^2 \le n^{2m+1} \left( m \cdot M(\mathcal{A}) \right)^2.$$
Thus,
$$\|\nabla f_{\mathcal{A}}(x)\|_2 \le C(\mathcal{A}),$$
which completes the proof. □
Then we immediately have the following consequence.
Corollary 2.
Let $0 \neq \mathcal{A} = (a_{i_1 \cdots i_m}) \in \mathbb{R}^{[m,n]}$ be weakly symmetric with associated homogeneous polynomial $f_{\mathcal{A}}(x) \in \mathbb{R}[x_1, \ldots, x_n]_m$. Let $e \in S^{n-1} \setminus V_{\mathbb{R}}(f_{\mathcal{A}})$ and assume $V_{\mathbb{R}}(f_{\mathcal{A}}) \neq \{0\}$. Then, for any $y \in S^{n-1} \cap V_{\mathbb{R}}(f_{\mathcal{A}})$,
$$\|e - y\|_2 \ge \frac{|f_{\mathcal{A}}(e)|}{C(\mathcal{A})}.$$
In conjunction with Theorem 1, we also have:
Corollary 3.
Let $0 \neq \mathcal{A} = (a_{i_1 \cdots i_m}) \in \mathbb{R}^{[m,n]}$ be weakly symmetric with associated homogeneous polynomial $f_{\mathcal{A}}(x) \in \mathbb{R}[x_1, \ldots, x_n]_m$. Let $e \in S^{n-1} \setminus V_{\mathbb{R}}(f_{\mathcal{A}})$ and assume that $V_{\mathbb{R}}(f_{\mathcal{A}}) \neq \{0\}$; then $d(e, V_{\mathbb{R}}(f_{\mathcal{A}})) \ge d_2(e)$.
Since $V_{\mathbb{R}}(f_{\mathcal{A}}) \cap S^{n-1}$ is closed and bounded, it is compact; hence, there must be a point $y_0 \in S^{n-1} \cap V_{\mathbb{R}}(f_{\mathcal{A}})$ such that $\|e - y_0\|_2 = \min_{y \in S^{n-1} \cap V_{\mathbb{R}}(f_{\mathcal{A}})} \|e - y\|_2$, which is the nearest zero of $f_{\mathcal{A}}$ on the unit sphere to $e$. Projectively speaking, the distance from the line $[e]$ to $V_{\mathbb{R}}(f_{\mathcal{A}})$ in $\mathbb{RP}^{n-1}$ is at least $\max\left\{ d_2(e),\ \frac{|f_{\mathcal{A}}(e)|}{C(\mathcal{A})} \right\}$.
We end this section by improving upon this lower bound. First, we introduce another constant U ( f A ) as follows.
Let $f_{\mathcal{A}}(x) \in \mathbb{R}[x_1, \ldots, x_n]_m$ be given as above. Since each partial derivative $\frac{\partial}{\partial x_i} f_{\mathcal{A}}(x)$, $1 \le i \le n$, is a degree $(m-1)$ homogeneous polynomial in $x_1, \ldots, x_n$, using Lemma 1 we can define in the same fashion the constants $N\!\left( \frac{\partial f_{\mathcal{A}}}{\partial x_i} \right) > 0$ for $1 \le i \le n$. Now, we set
$$U(f_{\mathcal{A}}) := \max_{1 \le i \le n} N\!\left( \frac{\partial f_{\mathcal{A}}}{\partial x_i} \right).$$
Theorem 3.
Let $0 \neq \mathcal{A} = (a_{i_1 \cdots i_m}) \in \mathbb{R}^{[m,n]}$ be weakly symmetric with associated homogeneous polynomial $f_{\mathcal{A}}(x) \in \mathbb{R}[x_1, \ldots, x_n]_m$. Assume that $0$ is a non-isolated critical point of $f_{\mathcal{A}}$. Let $e \in S^{n-1} \setminus V_{\mathbb{R}}(f_{\mathcal{A}})$; then the Euclidean distance from the nearest zero Z-eigenvector of $\mathcal{A}$ on the unit sphere to $e$ is at least $\hat{d}_2(e)$, where
$$\hat{d}_2(e) := \frac{m\, |f_{\mathcal{A}}(e)|}{(m-1) \cdot n \cdot \binom{m+n-2}{n-1} \cdot U(f_{\mathcal{A}})}.$$
Proof. 
For any $e, y \in S^{n-1}$, we have:
$$\|\nabla f_{\mathcal{A}}(e) - \nabla f_{\mathcal{A}}(y)\|_2^2 = \sum_{i=1}^{n} \left| \frac{\partial f_{\mathcal{A}}}{\partial x_i}(e) - \frac{\partial f_{\mathcal{A}}}{\partial x_i}(y) \right|^2.$$
It follows from Lemma 2 that, for $1 \le i \le n$,
$$\left| \frac{\partial f_{\mathcal{A}}}{\partial x_i}(e) - \frac{\partial f_{\mathcal{A}}}{\partial x_i}(y) \right| \le (m-1) \cdot \sqrt{n}\, \binom{m+n-2}{n-1} \cdot N\!\left( \frac{\partial f_{\mathcal{A}}}{\partial x_i} \right) \|e - y\|_2.$$
Since $U(f_{\mathcal{A}}) := \max_{1 \le i \le n} N\!\left( \frac{\partial f_{\mathcal{A}}}{\partial x_i} \right)$, we have:
$$\left| \frac{\partial f_{\mathcal{A}}}{\partial x_i}(e) - \frac{\partial f_{\mathcal{A}}}{\partial x_i}(y) \right|^2 \le (m-1)^2 \cdot n \cdot \binom{m+n-2}{n-1}^2 \cdot U(f_{\mathcal{A}})^2\, \|e - y\|_2^2.$$
Hence,
$$\|\nabla f_{\mathcal{A}}(e) - \nabla f_{\mathcal{A}}(y)\|_2^2 \le \sum_{i=1}^{n} (m-1)^2 \cdot n \cdot \binom{m+n-2}{n-1}^2 \cdot U(f_{\mathcal{A}})^2\, \|e - y\|_2^2 = (m-1)^2 \cdot n^2 \cdot \binom{m+n-2}{n-1}^2 \cdot U(f_{\mathcal{A}})^2\, \|e - y\|_2^2,$$
i.e.,
$$\|\nabla f_{\mathcal{A}}(e) - \nabla f_{\mathcal{A}}(y)\|_2 \le (m-1) \cdot n \cdot \binom{m+n-2}{n-1} \cdot U(f_{\mathcal{A}}) \cdot \|e - y\|_2.$$
Suppose $y \in Z(f_{\mathcal{A}}) \cap S^{n-1}$. Then, $\nabla f_{\mathcal{A}}(y) = 0$. This implies
$$\|\nabla f_{\mathcal{A}}(e)\|_2 \le (m-1) \cdot n \cdot \binom{m+n-2}{n-1} \cdot U(f_{\mathcal{A}}) \cdot \|e - y\|_2.$$
On the other hand, since $\langle \nabla f_{\mathcal{A}}(e), e \rangle = m\, f_{\mathcal{A}}(e)$, we obtain by the Cauchy–Schwarz inequality that
$$m\, |f_{\mathcal{A}}(e)| = |\langle \nabla f_{\mathcal{A}}(e), e \rangle| \le \|\nabla f_{\mathcal{A}}(e)\|_2 \cdot \|e\|_2 = \|\nabla f_{\mathcal{A}}(e)\|_2 \le (m-1) \cdot n \cdot \binom{m+n-2}{n-1} \cdot U(f_{\mathcal{A}}) \cdot \|e - y\|_2.$$
Consequently,
$$\|e - y\|_2 \ge \frac{m\, |f_{\mathcal{A}}(e)|}{(m-1) \cdot n \cdot \binom{m+n-2}{n-1} \cdot U(f_{\mathcal{A}})} = \hat{d}_2(e),$$
as required. Lastly, since $Z(f_{\mathcal{A}}) \cap S^{n-1}$ is compact, the Euclidean distance from the nearest zero Z-eigenvector of $\mathcal{A}$ on the unit sphere to $e$ is attained at some $y_0 \in Z(f_{\mathcal{A}}) \cap S^{n-1}$ with $\|e - y_0\|_2 \ge \hat{d}_2(e)$. □
Remark 1.
The main difference between Corollary 3 and Theorem 3 is that Corollary 3 gives a lower bound $d_2(e)$ for the distance from $e$ to the nearest unit-length real zero of $f_{\mathcal{A}}$, whereas Theorem 3 gives a lower bound $\hat{d}_2(e)$ for the distance from $e$ to the nearest unit-length zero Z-eigenvector of $\mathcal{A}$. It turns out, as seen in the various examples in Section 5, that the lower bound $\hat{d}_2(e)$ of Theorem 3 tends to be sharper than the bound $d_2(e)$ of Corollary 3.
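To illustrate the constant $U(f_{\mathcal{A}})$ and the bound $\hat{d}_2(e)$, the following sketch (our own, with the partial derivatives of Example 8's polynomial entered by hand) reproduces $\hat{d}_2(e_1) = \frac{1}{24}$:

```python
from math import comb

# Example 8: f_A(x1, x2) = x1^4 - 2 x1^2 x2^2 + x2^4, so m = 4 and n = 2.
m, n = 4, 2
# Coefficients of the partial derivatives (degree m-1 homogeneous polynomials):
#   df/dx1 = 4 x1^3 - 4 x1 x2^2,    df/dx2 = -4 x1^2 x2 + 4 x2^3
partial_coeffs = [
    {(3, 0): 4.0, (1, 2): -4.0},
    {(2, 1): -4.0, (0, 3): 4.0},
]
U = max(max(abs(c) for c in p.values()) for p in partial_coeffs)   # U(f_A) = 4

f_e = 1.0                                     # f_A(e_1) = 1
d2_hat = m * abs(f_e) / ((m - 1) * n * comb(m + n - 2, n - 1) * U)
print(U, d2_hat)                              # 4.0 and 1/24 ~ 0.0417
```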
We now rephrase Theorem 2 as follows:
Theorem 4.
Let $0 \neq \mathcal{A} = (a_{i_1 \cdots i_m}) \in \mathbb{R}^{[m,n]}$ be weakly symmetric with associated homogeneous polynomial $f_{\mathcal{A}}(x) \in \mathbb{R}[x_1, \ldots, x_n]_m$. Assume that $V_{\mathbb{R}}(f_{\mathcal{A}}) \neq \{0\}$ and let $e \in S^{n-1} \setminus V_{\mathbb{R}}(f_{\mathcal{A}})$. If $f_{\mathcal{A}}(x)$ is hyperbolic in direction $e$, then the Euclidean distance from the nearest zero of $f_{\mathcal{A}}$ on the unit sphere to $e$ is at most $d^*(e)$, where
$$d^*(e) := \sqrt{\,2 - 2\sqrt{1 - \left( \frac{|f_{\mathcal{A}}(e)|}{\|f_{\mathcal{A}}\|_\infty} \right)^{2/m}}\,}.$$
Similarly to Corollary 1, we now have:
Corollary 4.
Let $0 \neq \mathcal{A} = (a_{i_1 \cdots i_m}) \in \mathbb{R}^{[m,n]}$ be weakly symmetric with associated homogeneous polynomial $f_{\mathcal{A}}(x) \in \mathbb{R}[x_1, \ldots, x_n]_m$. Assume that $0$ is a non-isolated critical point of $f_{\mathcal{A}}$ and let $e \in S^{n-1} \setminus V_{\mathbb{R}}(f_{\mathcal{A}})$. Then:
  • $\hat{d}_2(e) \le d(e, Z(f_{\mathcal{A}}))$.
  • If, in addition, $f_{\mathcal{A}}(x)$ is hyperbolic in direction $e$, then $d(e, V_{\mathbb{R}}(f_{\mathcal{A}})) \le d^*(e)$.
Remark 2.
In fact, a similar strategy is frequently adopted in single-variable calculus. In order to find the inflection points of a smooth real-valued function $f$, we first find the real zeros of $f''(x)$ and then test each of them (for instance, by checking for a sign change of $f''$) to determine whether it is truly an inflection point.

5. Some Examples

In this section, we will examine the upper and lower bounds obtained in previous sections via a collection of examples of distinct nature.
Example 8.
Let $\mathcal{A} \in \mathbb{R}^{[4,2]}$ be the weakly symmetric tensor whose associated homogeneous polynomial is
$$f_{\mathcal{A}}(x_1, x_2) = (x_1^2 - x_2^2)^2 = x_1^4 - 2x_1^2 x_2^2 + x_2^4.$$
Clearly, $f_{\mathcal{A}}(e_1) = 1$. Furthermore, since $f_{\mathcal{A}}$ is a product of hyperbolic polynomials in direction $e_1$, it is itself a hyperbolic polynomial in direction $e_1$. Additionally, since it is a perfect square, $\mathcal{A}$ is automatically a PSD tensor. By direct calculation, we see that:
$$f_{\mathcal{A}}(e_1) = 1, \quad \|f_{\mathcal{A}}\|_\infty = 1, \quad d_2(e_1) = \frac{1}{40\sqrt{2}} \approx 0.0177, \quad \hat{d}_2(e_1) = \frac{1}{24} \approx 0.0417, \quad \text{and} \quad d^*(e_1) = \sqrt{2}.$$
The zero Z-eigenvectors are
$$z_1 = \left( \tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}} \right) \quad \text{and} \quad z_2 = -z_1.$$
We see that
$$\|e_1 - z_1\|_2 \approx 0.7654.$$
It is also clear, by symmetry, that f A ( e 2 ) = 1 and f A is hyperbolic in the direction e 2 ; hence, the exact same conclusion holds for the initial point e 2 .
Example 9.
Let $\mathcal{A} \in \mathbb{R}^{[4,2]}$ be the weakly symmetric tensor whose associated homogeneous polynomial is
$$f_{\mathcal{A}}(x_1, x_2) = x_1^4 + 4x_1^3 x_2 + 3x_1^2 x_2^2 - 4x_1 x_2^3 - 4x_2^4 = (x_1 + 2x_2)^2 (x_1^2 - x_2^2).$$
Clearly, f A ( e 1 ) = 1 . Furthermore, since f A is a product of hyperbolic polynomials in direction e 1 , it is itself a hyperbolic polynomial in direction e 1 , but it is not PSD.
On the other hand, it is easy to check f A is also hyperbolic in direction e 2 :
$$p(t) = f_{\mathcal{A}}(t e_2 + (x_1, x_2)) = x_1^4 + 4x_1^3 (t + x_2) + 3x_1^2 (t + x_2)^2 - 4x_1 (t + x_2)^3 - 4(t + x_2)^4,$$
which has only real roots, namely $t = x_1 - x_2$, $t = -(x_1 + x_2)$, and $t = -\tfrac{1}{2}(x_1 + 2x_2)$. We compute to see that
$$\|f_{\mathcal{A}}\|_\infty = \left| \min_{x \in S^1} f_{\mathcal{A}}(x) \right| \approx 4.3223, \quad d_2(e_1) = \frac{1}{80\sqrt{2}} \approx 0.0088, \quad \hat{d}_2(e_1) = \frac{1}{96} \approx 0.0104, \quad d^*(e_1) \approx 0.7478;$$
$$d_2(e_2) = \frac{1}{20\sqrt{2}} \approx 0.0354, \quad \hat{d}_2(e_2) = \frac{1}{24} \approx 0.0417, \quad \text{and} \quad d^*(e_2) \approx 1.2689.$$
Using Wolfram’s software Mathematica 10.2 [17], we find that the zero Z-eigenvectors are
$$z_1 \approx (0.8944, -0.4472) \quad \text{and} \quad z_2 = -z_1.$$
We now compare these to the actual distances
$$\|e_1 - z_1\|_2 \approx 0.4595 \quad \text{and} \quad \|e_2 - z_2\|_2 \approx 1.0514.$$
This example will be referenced later in this section.
Example 10.
Let $\mathcal{A} \in \mathbb{R}^{[3,3]}$ be the weakly symmetric tensor whose associated homogeneous polynomial is
$$f_{\mathcal{A}}(x_1, x_2, x_3) = x_1^3 + x_1^2 x_2 - x_1 x_2^2 - x_2^3 + x_1^2 x_3 - x_2^2 x_3 - x_1 x_3^2 - x_2 x_3^2 - x_3^3 = (x_1^2 - x_2^2 - x_3^2)(x_1 + x_2 + x_3).$$
In order to check whether f A is hyperbolic in direction e 1 , we compute:
$$p(t) = f_{\mathcal{A}}(t e_1 + (x_1, x_2, x_3)) = \left[ (t + x_1)^2 - x_2^2 - x_3^2 \right] (t + x_1 + x_2 + x_3).$$
So $p(t)$ has only real roots, namely $t = -(x_1 + x_2 + x_3)$ and $t = -x_1 \pm \sqrt{x_2^2 + x_3^2}$. It is clear that $f_{\mathcal{A}}(e_1) = 1$. We use Mathematica [17] to find $\|f_{\mathcal{A}}\|_\infty = \left| \min_{x \in S^2} f_{\mathcal{A}}(x) \right| \approx 1.4804$. We also have:
$$d^*(e_1) \approx 1.0201, \quad d_2(e_1) = \frac{1}{30\sqrt{3}} \approx 0.0192, \quad \text{and} \quad \hat{d}_2(e_1) = \frac{1}{36} \approx 0.0278.$$
A direct elimination shows the zero Z-eigenvectors are
$$z_1 = \left( \tfrac{1}{\sqrt{2}}, -\tfrac{1}{\sqrt{2}}, 0 \right) \quad \text{and} \quad z_2 = -z_1,$$
whose distance from $e_1$ is $\|e_1 - z_1\|_2 \approx 0.7654$.
In contrast to the previous examples, the current tensor has order 3, so it cannot be PSD. It is also easy to check that $f_{\mathcal{A}}$ is not hyperbolic in either of the remaining coordinate directions $e_2$ and $e_3$.
Example 11.
Let
$$A = \begin{pmatrix} 1 & \tfrac{1}{2} & 0 & 0 \\ \tfrac{1}{2} & 1 & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 2 \end{pmatrix} \quad \text{and} \quad B = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$
Then, $A, B \in \operatorname{Sym}_4(\mathbb{R})$, and for all $x, y \in \mathbb{R}$, $xA + yB \in \operatorname{Sym}_4(\mathbb{R})$ satisfies
$$\det(xA + yB) = \det \begin{pmatrix} x & \tfrac{1}{2}x & 0 & 0 \\ \tfrac{1}{2}x & x & 0 & 0 \\ 0 & 0 & 2x + y & 0 \\ 0 & 0 & 0 & 2x + y \end{pmatrix} = \frac{3}{4} x^2 (2x + y)^2 = 3x^4 + 3x^3 y + \frac{3}{4} x^2 y^2 \in \mathbb{R}[x, y]_4.$$
Let $\mathcal{A} \in \mathbb{R}^{[4,2]}$ be the weakly symmetric tensor whose associated homogeneous polynomial is $f_{\mathcal{A}}(x, y) = \det(xA + yB)$. It is straightforward to see that
$$f_{\mathcal{A}}(e_1) = 3 \quad \text{and} \quad p(t) := f_{\mathcal{A}}(t e_1 + (x, y)) = \frac{3}{4} (t + x)^2 (2t + 2x + y)^2,$$
which is real rooted; hence $f_{\mathcal{A}}$ is hyperbolic in the direction $e_1$. In addition, by Mathematica [17], we find that $\|f_{\mathcal{A}}\|_\infty \approx 3.3646$. We also have:
$$d^*(e_1) \approx 1.2361, \quad d_2(e_1) = \frac{1}{20\sqrt{2}} \approx 0.0354, \quad \hat{d}_2(e_1) = \frac{1}{24} \approx 0.0417,$$
and the zero Z-eigenvectors are
$$z_1 = (0, 1), \quad z_2 = -z_1, \quad z_3 \approx (0.4472, -0.8944), \quad \text{and} \quad z_4 = -z_3.$$
It is evident that the nearest zero Z-eigenvector to $e_1$ is $z_3$, with $\|e_1 - z_3\|_2 \approx 1.0514$.
Example 12.
Let $\mathcal{A} \in \mathbb{R}^{[4,3]}$ be the weakly symmetric tensor whose associated homogeneous polynomial is
$$f_{\mathcal{A}}(x_1, x_2, x_3) = x_1^2 x_2 x_3 - 2x_1 x_2^2 x_3 + 3x_1 x_2 x_3^2 = x_1 x_2 x_3 (x_1 - 2x_2 + 3x_3).$$
This is not a PSD tensor. However, if we choose $e = \left( \tfrac{1}{\sqrt{3}}, \tfrac{1}{\sqrt{3}}, \tfrac{1}{\sqrt{3}} \right)$, then it is easy to see that $f_{\mathcal{A}}(e) = \tfrac{2}{9}$. We use Mathematica [17] to find that $\|f_{\mathcal{A}}\|_\infty = \left| \min_{x \in S^2} f_{\mathcal{A}}(x) \right| \approx 0.6745$. Since $f_{\mathcal{A}}$ is a product of hyperbolic polynomials in direction $e$, it is also hyperbolic in direction $e$. This immediately gives
$$d^*(e) \approx 0.8334, \quad d_2(e) = \frac{1}{810\sqrt{3}} \approx 7.1278 \cdot 10^{-4}, \quad \hat{d}_2(e) = \frac{2}{1215} \approx 1.6461 \cdot 10^{-3}.$$
Using Mathematica [17], we find the zero Z-eigenvectors to be:
$$z_1 = (1, 0, 0), \quad z_2 = -z_1, \quad z_3 = (0, 1, 0), \quad z_4 = -z_3, \quad z_5 = (0, 0, 1), \quad z_6 = -z_5, \quad z_7 = \left( \tfrac{2}{\sqrt{5}}, \tfrac{1}{\sqrt{5}}, 0 \right), \quad z_8 = -z_7,$$
$$z_9 = \left( \tfrac{3}{\sqrt{10}}, \tfrac{1}{\sqrt{10}}, 0 \right), \quad z_{10} = -z_9, \quad z_{11} = \left( 0, \tfrac{3}{\sqrt{13}}, \tfrac{2}{\sqrt{13}} \right), \quad \text{and} \quad z_{12} = -z_{11}.$$
It turns out
$$\|e - z_{11}\|_2 \approx 0.6314, \quad \|e - z_7\|_2 \approx 0.6714, \quad \text{and} \quad \|e - z_9\|_2 \approx 0.7344.$$
From this, we can see that the upper bound $d^*(e) \approx 0.8334$ in fact encloses all the points $z_i$ for $7 \le i \le 12$ projectively, with $z_{11}$ the nearest to $e$.
The following example shows that, even though f A may not be hyperbolic in any direction e, the upper bound provided by Theorem 4 may still remain valid.
Example 13.
Let $\mathcal{A} \in \mathbb{R}^{[4,2]}$ be the weakly symmetric tensor whose associated homogeneous polynomial is
$$f_{\mathcal{A}}(x, y) = x^4 - 3x^3 y + x^2 y^2 + 4y^4.$$
It is clear
$$f_{\mathcal{A}}(e_1) = 1 \quad \text{and} \quad \hat{d}_2(e_1) = \frac{1}{96} \approx 0.0104.$$
We now show that $f_{\mathcal{A}}$ is not hyperbolic in any direction $e = (a, b) \neq (0, 0)$. We compute:
$$p(t) = f_{\mathcal{A}}(t e + (x, y)) = (at + x)^4 - 3(at + x)^3 (bt + y) + (at + x)^2 (bt + y)^2 + 4(bt + y)^4.$$
Solving for $t$, Mathematica yields:
$$t = \frac{2y - x}{a - 2b} \quad \text{and} \quad t = \frac{-2ax - bx - ay - 2by \pm \sqrt{3\left( -b^2 x^2 + 2abxy - a^2 y^2 \right)}}{2(a^2 + ab + b^2)}.$$
However, $-b^2 x^2 + 2abxy - a^2 y^2 = -(bx - ay)^2 \le 0$. Thus, $p(t)$ is real rooted for every $(x, y)$ if and only if $a = b = 0$, which is impossible.
On the other hand, since $\|f_{\mathcal{A}}\|_\infty = 4$, we have $d^*(e_1) = \sqrt{2 - \sqrt{2}} \approx 0.7654$. Furthermore, the zero Z-eigenvectors are
$$z_1 = \left( \tfrac{2}{\sqrt{5}}, \tfrac{1}{\sqrt{5}} \right) \quad \text{and} \quad z_2 = -z_1.$$
Thus, $\|e_1 - z_1\|_2 \approx 0.4595 < d^*(e_1)$.
We experimented with several other examples where f A is not hyperbolic in any obvious direction e; however, the upper bound still held. For this reason, we end this section by proposing the following procedure which may lead to promising outcomes.
[Procedure] Let a nonzero $m$-order $n$-dimensional weakly symmetric tensor $\mathcal{A} = (a_{i_1 \cdots i_m})$ with associated homogeneous polynomial $f_{\mathcal{A}}(x) \in \mathbb{R}[x_1, \ldots, x_n]_m$ be given. Define the index set
$$\Delta(\mathcal{A}) := \{1 \le j \le n : a_{jj \cdots j} \neq 0\}.$$
Case 1.
Suppose $\Delta(\mathcal{A}) \neq \emptyset$.
  • Step 1.1. Let $j_0 := \min(\Delta(\mathcal{A}))$ be the least index. It is clear that $f_{\mathcal{A}}(e_{j_0}) \neq 0$, where $e_{j_0} = (0, \ldots, 1, \ldots, 0)$ with the only 1 in the $j_0$-th coordinate. We then compute $\hat{d}_2(e_{j_0})$ using Theorem 3.
  • Step 1.2. Check whether $f_{\mathcal{A}}(x)$ is hyperbolic in direction $e_{j_0}$ (a rough numerical test is sketched after this procedure). If it is, we then compute $d^*(e_{j_0})$ using Theorem 4. Consequently, the nearest possible zero Z-eigenvector is nested in between the two spheres centered at $e_{j_0}$ of radii $\hat{d}_2(e_{j_0})$ and $d^*(e_{j_0})$, respectively.
  • Step 1.3. Repeat Step 1.2 with any other direction $e_j$ for all subsequent indices $j \in \Delta(\mathcal{A})$. If $f_{\mathcal{A}}(x)$ is also hyperbolic in direction $e_j$, we then compute $d^*(e_j)$ for each such $j$ using Theorem 4. Consequently, the nearest possible zero Z-eigenvector is nested in between the two spheres centered at $e_j$ of radii $d_2(e_j)$ and $d^*(e_j)$, respectively. Putting these spheres of different sizes together, we have projectively located many, if not all, of the zero Z-eigenvectors. We refer to Example 9 for a detailed demonstration.
  • Step 1.4. If $f_{\mathcal{A}}(x)$ is not hyperbolic in direction $e_j$ for any $j \in \Delta(\mathcal{A})$, the test is inconclusive. However, we can still compute $d^*(e_j)$, but should only use it as a possible upper bound, with caution, as demonstrated in Examples 13 and 14.
Case 2.
Suppose $\Delta(\mathcal{A}) = \emptyset$.
  • Step 2.1. If there is an obvious choice $e \neq 0$ such that $f_{\mathcal{A}}(e) \neq 0$, then we can normalize $e$ if necessary, use it as our initial point, and follow Steps 1.1 and 1.2 as outlined above. Otherwise, we move to the following step.
  • Step 2.2. Applying the shifted symmetric higher-order power method (SS-HOPM) of Kolda and Mayo [9], we choose a parameter $\alpha > 0$ large enough that
    $$\hat{f}_{\mathcal{A}}(x) := f_{\mathcal{A}}(x) + \alpha \|x\|_2^m$$
    becomes either convex or concave. According to [9], it usually suffices to have
    $$\alpha > (m-1) \cdot \max_{x \in S^{n-1}} \rho(\mathcal{A}x^{m-2}),$$
    where $m(m-1)\mathcal{A}x^{m-2}$ denotes the Hessian matrix of $f_{\mathcal{A}}(x)$ and $\rho(\mathcal{A}x^{m-2})$ denotes its spectral radius. A common conservative choice of $\alpha$ is
    $$\alpha = (m-1) \sum_{1 \le i_1, \ldots, i_m \le n} |a_{i_1 \cdots i_m}|.$$
  • Step 2.3. Starting with the initial point $e_1 = (1, 0, \ldots, 0) \in \mathbb{R}^n$, the SS-HOPM will converge to some $x_0 \in S^{n-1}$ such that $\nabla \hat{f}_{\mathcal{A}}(x_0) = 0$. As a consequence, $0 = f_{\mathcal{A}}(x_0) + \alpha$; hence, $x_0 \in S^{n-1} \setminus V_{\mathbb{R}}(f_{\mathcal{A}})$.
  • Step 2.4. We now use $x_0$ in place of $e_{j_0}$ as in Step 1.1 and proceed in a similar fashion to find $\hat{d}_2(x_0)$ as well as $d^*(x_0)$.
  • Step 2.5. We choose $x_1 \in \{x_0\}^{\perp} \cap S^{n-1}$, analogous to $e_j$ in Step 1.3, and proceed in a similar fashion to find $\hat{d}_2(x_1)$ as well as $d^*(x_1)$. We continue this process until no further orthogonal vector is left, at which point we will have projectively located many, if not all, of the zero Z-eigenvectors.
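The hyperbolicity check in Step 1.2 can be prototyped numerically. The sketch below is only our own heuristic, not the authors' implementation: it tests real-rootedness of $t \mapsto f_{\mathcal{A}}(x + te)$ at finitely many random points $x$, so a passing result is merely evidence of hyperbolicity, while a single failure certifies that $f_{\mathcal{A}}$ is not hyperbolic in that direction. The radii $\hat{d}_2(e)$ and $d^*(e)$ would then be computed as in the earlier sketches.

```python
import numpy as np

def looks_hyperbolic(f, e, n, deg, trials=200, tol=1e-3, seed=0):
    """Empirically test whether f (a callable evaluating a degree-`deg`
    homogeneous polynomial) is hyperbolic in direction e, by sampling x and
    checking that t -> f(x + t e) has (numerically) only real roots."""
    rng = np.random.default_rng(seed)
    e = np.asarray(e, dtype=float)
    ts = np.linspace(-3.0, 3.0, deg + 1)            # interpolation nodes in t
    for _ in range(trials):
        x = rng.normal(size=n)
        vals = [f(x + t * e) for t in ts]
        c = np.polyfit(ts, vals, deg)               # coefficients of f(x + t e) in t
        roots = np.roots(c)
        # loose tolerance: repeated real roots may acquire tiny imaginary parts
        if np.any(np.abs(roots.imag) > tol * (1 + np.abs(roots))):
            return False
    return True

# Example 9: f(x1, x2) = (x1 + 2 x2)^2 (x1^2 - x2^2) is hyperbolic in e1 and e2.
f = lambda x: (x[0] + 2 * x[1]) ** 2 * (x[0] ** 2 - x[1] ** 2)
print(looks_hyperbolic(f, e=(1.0, 0.0), n=2, deg=4))   # expected: True
print(looks_hyperbolic(f, e=(0.0, 1.0), n=2, deg=4))   # expected: True
```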
We demonstrate the above procedure as follows.
Example 14.
Let $\mathcal{A} \in \mathbb{R}^{[6,2]}$ be the weakly symmetric tensor whose associated homogeneous polynomial is
$$f_{\mathcal{A}}(x, y) = 4x^4 y^2 - x^2 y^4 = x^2 y^2 (4x^2 - y^2).$$
Clearly, $\Delta(\mathcal{A}) = \emptyset$ and $f_{\mathcal{A}}(e_1) = f_{\mathcal{A}}(e_2) = 0$. However, choosing $\hat{e} = (1, 1)$ gives $f_{\mathcal{A}}(\hat{e}) = 3$. It is easy to check that
$$p(t) = f_{\mathcal{A}}(t \hat{e} + (x, y)) = 4(t + x)^4 (t + y)^2 - (t + x)^2 (t + y)^4$$
has only real roots: $t = -x$, $t = -y$, $t = y - 2x$, and $t = -\tfrac{1}{3}(2x + y)$. Hence, $f_{\mathcal{A}}$ is hyperbolic in direction $\hat{e}$. We normalize $\hat{e}$ to the unit vector $e = \left( \tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}} \right)$; then $f_{\mathcal{A}}(e) = \tfrac{3}{8}$. Next, we calculate to see that
$$U(f_{\mathcal{A}}) = 16, \quad \hat{d}_2(e) = \frac{1}{256} \approx 0.0039, \quad \|f_{\mathcal{A}}\|_\infty \approx 0.5251, \quad \text{and} \quad d^*(e) = \sqrt{2 - 2\left( 1 - \left( \frac{3/8}{0.5251} \right)^{1/2} \right)^{1/2}} \approx 1.1013.$$
It is not difficult to see that the zero Z-eigenvectors are
$$z_1 = (1, 0), \quad z_2 = -z_1, \quad z_3 = (0, 1), \quad \text{and} \quad z_4 = -z_3.$$
The nearest zero Z-eigenvector to $e$ is $z_1 = e_1$, which satisfies $\|e - z_1\|_2 \approx 0.7654$. Next, if we choose $e' = \left( -\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}} \right)$, then
$$f_{\mathcal{A}}(e') = \frac{3}{8}, \quad \hat{d}_2(e') \approx 0.0039, \quad \text{and} \quad d^*(e') \approx 1.1013.$$
The nearest zero Z-eigenvector to $e'$ is $z_2$, which satisfies $\|e' - z_2\|_2 \approx 0.7654$. Hence, the nested spheres of inner radius $0.0039$ and outer radius $1.1013$ centered at $e$ and $e'$ projectively encompass all of the zero Z-eigenvectors.

Author Contributions

Conceptualization, K.P. and T.Z.; methodology, K.P. and T.Z.; writing—original draft preparation, K.P. and T.Z.; writing—review and editing, K.P. and T.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lim, L.H. Singular values and eigenvalues of tensors: A variational approach. In Proceedings of the 1st IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing, Le Gosier, France, 13–15 December 2005; pp. 129–132.
  2. Qi, L. Eigenvalues of a real supersymmetric tensor. J. Symbolic Comput. 2005, 40, 1302–1324.
  3. Qi, L. Eigenvalues and invariants of tensors. J. Math. Anal. Appl. 2007, 325, 1363–1377.
  4. Qi, L.; Sun, W.; Wang, Y. Numerical multilinear algebra and its applications. Front. Math. 2007, 2, 501–526.
  5. Chang, K.; Pearson, K.; Zhang, T. On eigenvalue problems of real symmetric tensors. J. Math. Anal. Appl. 2009, 350, 416–422.
  6. Chang, K.; Pearson, K.; Zhang, T. Some variational principles of the Z-eigenvalues for nonnegative tensors. Linear Algebra Appl. 2013, 438, 4166–4182.
  7. Chang, K.C.; Qi, L.; Zhang, T. A survey on the spectral theory of nonnegative tensors. Numer. Linear Algebra Appl. 2013, 20, 891–912.
  8. Hu, S.; Qi, L. Convergence of a second order Markov chain. Appl. Math. Comput. 2014, 241, 183–192.
  9. Kolda, T.; Mayo, J. Shifted power method for computing tensor eigenpairs. SIAM J. Matrix Anal. Appl. 2011, 34, 1095–1124.
  10. Liu, Y.; Zhou, G.; Ibrahim, N.F. An always convergent algorithm for the largest eigenvalue of an irreducible nonnegative tensor. J. Comput. Appl. Math. 2010, 235, 286–292.
  11. Ng, M.; Qi, L.; Zhou, G. Finding the largest eigenvalue of a nonnegative tensor. SIAM J. Matrix Anal. Appl. 2010, 31, 1090–1099.
  12. Bauschke, H.H.; Güler, O.; Lewis, A.S.; Sendov, H.S. Hyperbolic polynomials and convex analysis. Can. J. Math. 2001, 53, 470–488.
  13. Pemantle, R. Hyperbolicity and Stable Polynomials in Combinatorics and Probability. Available online: https://www2.math.upenn.edu/~pemantle/papers/hyperbolic.pdf (accessed on 1 July 2023).
  14. Shub, M. On the distance to the zero set of a homogeneous polynomial. J. Complexity 1989, 5, 303–305.
  15. De Lathauwer, L.; De Moor, B.; Vandewalle, J. On the best rank-1 and rank-(R1, R2, ⋯, RN) approximation of higher-order tensors. SIAM J. Matrix Anal. Appl. 2000, 21, 1324–1342.
  16. Kofidis, E.; Regalia, P.A. On the best rank-1 approximation of higher-order supersymmetric tensors. SIAM J. Matrix Anal. Appl. 2002, 23, 863–884.
  17. Wolfram Research, Inc. Mathematica Online; Wolfram Research, Inc.: Champaign, IL, USA, 2023. Available online: https://www.wolfram.com/ (accessed on 1 July 2023).