Article

A Non-Iterative Method for the Difference of Means on the Lie Group of Symmetric Positive-Definite Matrices

1 School of Science, Dalian Jiaotong University, Dalian 116028, China
2 Beijing Key Laboratory on MCAACI, Beijing Institute of Technology, Beijing 100081, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(2), 255; https://doi.org/10.3390/math10020255
Submission received: 10 December 2021 / Revised: 12 January 2022 / Accepted: 12 January 2022 / Published: 14 January 2022
(This article belongs to the Section Computational and Applied Mathematics)

Abstract
A non-iterative method for the difference of means is presented to calculate the log-Euclidean distance between a symmetric positive-definite matrix and the mean matrix on the Lie group of symmetric positive-definite matrices. Although affine-invariant Riemannian metrics have a perfect theoretical framework and avoid the drawbacks of the Euclidean inner product, their complex formulas also lead to sophisticated and time-consuming algorithms. To make up for this limitation, log-Euclidean metrics, with simpler formulas and faster calculations, are employed in this manuscript. Our new approach is to transform a symmetric positive-definite matrix into a symmetric matrix via the logarithmic map, and then to transform the results back to the Lie group through the exponential map. Moreover, the present method does not need to compute the mean matrix and retains the usual Euclidean operations in the domain of matrix logarithms. In addition, for randomly generated positive-definite matrices, the method is compared experimentally with the one induced by the classical affine-invariant Riemannian metric. Finally, our proposed method is applied to denoise point clouds with high-density noise via the K-means clustering algorithm.

1. Introduction

The difference of means method is usually employed in detection problems. Its procedure can be described as in Figure 1: for the detected element R_0, the distance between R_0 and the mean of the reference elements R_{-N}, …, R_{-1}, R_1, …, R_N is computed. If the distance is bigger than a preset value, we can conclude that R_0 can be distinguished from the reference elements. For example, the difference of means method is applied to detect abnormal conditions in clustered enterprise middleware systems [1,2,3]. In [4], an algorithm based on the difference of means method in the special Euclidean group is proposed to detect the change-points of rigid body motions. Based on the difference of means method, a geometric approach was developed for high-resolution Doppler processing, which shows the existence of targets in fixed directions of some elements [5].
In the field of coastal radars, detecting small targets and stealth targets in a non-uniform clutter environment is a vital issue. Recently, a high-resolution Doppler detector using the difference of means method was proposed in Ref. [6], and X-band radar experiments at Paris Airport showed that the performance of Doppler imaging and Doppler detection is improved. There, the purpose of target detection is to judge whether there are targets in some cells according to the observed values. Based on the affine-invariant Riemannian metric, an iterative algorithm was developed for the Riemannian mean of symmetric positive-definite Hermitian matrices.
In this paper, we propose a non-iterative method to compute the distance between a symmetric positive-definite matrix and the mean matrix on the Lie group of symmetric positive-definite matrices. Although affine-invariant Riemannian metrics, which avoid the drawbacks of the Euclidean inner product, have a perfect theoretical framework, their complex formulas also lead to sophisticated and time-consuming algorithms [7,8]. Therefore, we use log-Euclidean metrics to obtain simpler formulas and faster calculations [9,10,11]. As the Lie algebra of this Lie group is the space of symmetric matrices, we transform the Riemannian computations on the Lie group into Euclidean computations on its Lie algebra. Finally, our proposed method is applied to denoise point clouds with high-density noise via the K-means clustering algorithm.
Our first contribution is a non-iterative method for calculating the log-Euclidean distance between a symmetric positive-definite matrix and the mean matrix, in which the mean matrix does not need to be calculated. Our second contribution is to apply the proposed method to denoising point clouds with high-density noise via the K-means clustering algorithm; the denoising effect of the method is evaluated by three criteria: Accuracy, Recall and Precision.
The remainder of this paper is organized as follows: Section 2 reviews the geometry of the space of symmetric positive-definite matrices, including metrics, geodesics and geodesic distances. In Section 3, a non-iterative method is presented to compute the distance between a matrix and the mean matrix on the Lie group of symmetric positive-definite matrices, and the method is compared with the classical algorithm using simulations. Section 4 shows the effectiveness of our method in denoising point clouds with high-density noise.

2. Preliminaries

In this section, we introduce some concepts, including Euclidean metrics, affine-invariant Riemannian metrics and log-Euclidean metrics on the space of symmetric positive-definite matrices. More details are discussed in Refs. [12,13,14,15].

2.1. Exponential Map and Logarithmic Map

The set of all invertible n × n real matrices is known as the general linear group GL(n, R), and the set of all n × n real matrices is denoted by M_{n×n}; GL(n, R) is a differentiable manifold, and M_{n×n} is its Lie algebra gl(n, R). Exponential maps and logarithmic maps carry information between GL(n, R) and M_{n×n}. The exponential map exp : M_{n×n} → GL(n, R) of a matrix V ∈ M_{n×n} is expressed by the convergent series
$$\exp(V) = \sum_{k=0}^{\infty} \frac{V^k}{k!}. \tag{1}$$
On the other hand, the logarithmic map, denoted by log, is the local inverse of the exponential map; the logarithm of a matrix A ∈ GL(n, R) is a solution of the equation exp(V) = A. When A has no eigenvalues on the closed negative real axis, there exists a unique real logarithm whose spectrum lies in the strip {z ∈ C : −π < Im(z) < π} of the complex plane [16]. Furthermore, if A is in a neighborhood of the identity matrix I, then the following series converges to log A:
$$\log(A) = \sum_{k=1}^{\infty} (-1)^{k+1} \frac{(A-I)^k}{k}. \tag{2}$$
Let us denote the set of all the n × n symmetric matrices by
$$S(n) = \{ V \in M_{n\times n} \mid V^T = V \} \tag{3}$$
and the set of all the n × n symmetric positive-definite matrices by
$$P(n) = \{ A \in S(n) \mid A > 0 \}, \tag{4}$$
where A > 0 means that the quadratic form x^T A x > 0 for all non-zero n-dimensional vectors x. For a general invertible matrix, the existence and uniqueness of the logarithm are not guaranteed, but the logarithm of a symmetric positive-definite matrix is well defined and is a symmetric matrix. Moreover, since the exponential map from S(n) to P(n) is one-to-one and onto, the exponential of any symmetric matrix is symmetric positive-definite.
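To make this bijection concrete, the following minimal NumPy/SciPy sketch (ours, not part of the original paper) maps a random symmetric matrix to P(n) and back; scipy.linalg.expm and scipy.linalg.logm compute the matrix exponential and logarithm.

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)
n = 4

# A random symmetric matrix V in S(n).
W = rng.standard_normal((n, n))
V = (W + W.T) / 2

# exp maps S(n) one-to-one onto P(n); log inverts it.
A = expm(V)
assert np.all(np.linalg.eigvalsh(A) > 0)   # A is symmetric positive-definite
assert np.allclose(logm(A), V)             # the round trip recovers V
```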

2.2. Frobenius Inner Product

On P ( n ) , the Frobenius inner product is
$$\langle A, B \rangle_F = \mathrm{tr}(A^T B), \tag{5}$$
where tr(·) denotes the trace; hence, P(n) equipped with the metric (5) is regarded as a manifold. The tangent space at a point A ∈ P(n) is written as T_A P(n). Since P(n) is an open subset of S(n), the tangent space T_A P(n) can be identified with S(n). The geodesic at A in the direction V ∈ S(n) with respect to the metric (5) is the straight line
$$\gamma_F(t) = A + tV, \quad 0 \le t < \delta, \tag{6}$$
with δ > 0, where t is small enough that the geodesic (6) remains in P(n). The associated norm ‖A‖_F = ⟨A, A⟩_F^{1/2} defines the Euclidean distance on P(n) as follows:
$$d_F(A, B) = \| A - B \|_F. \tag{7}$$

2.3. Affine-Invariant Riemannian Metric

The affine-invariant Riemannian metric at a point A ∈ P(n) is defined by
$$g_A(V_1, V_2) = g_I(A^{-1}V_1, A^{-1}V_2) = \mathrm{tr}(A^{-1}V_1 A^{-1}V_2) \tag{8}$$
with V_1, V_2 ∈ T_A P(n); P(n) equipped with the metric (8) is a Riemannian manifold.
At a point A ∈ P(n), the geodesic in the direction V ∈ T_A P(n) is given by
$$\gamma_{\mathrm{Aff}}(t) = A^{1/2} \exp\!\left(t A^{-1/2} V A^{-1/2}\right) A^{1/2}, \tag{9}$$
which is entirely contained in P(n). Since P(n) with the metric (8) is geodesically complete [17], a geodesic curve γ_Aff(t) can be found for any given matrices A, B ∈ P(n), satisfying γ_Aff(0) = A, γ_Aff(1) = B and the initial velocity γ̇_Aff(0) = A^{1/2} log(A^{-1/2} B A^{-1/2}) A^{1/2} ∈ T_A P(n). The Riemannian distance between A and B is represented as
$$d_{\mathrm{Aff}}(A, B) = \left\| \log\!\left(A^{-1/2} B A^{-1/2}\right) \right\|_F. \tag{10}$$
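As an illustration, here is a short sketch of the distance (10) using NumPy/SciPy; the function name and the eigendecomposition-based inverse square root are our own choices.

```python
import numpy as np
from scipy.linalg import logm

def dist_affine_invariant(A, B):
    """Affine-invariant distance (10): || log(A^(-1/2) B A^(-1/2)) ||_F."""
    w, Q = np.linalg.eigh(A)                 # A = Q diag(w) Q^T with w > 0
    A_inv_sqrt = (Q / np.sqrt(w)) @ Q.T      # A^(-1/2)
    return np.linalg.norm(logm(A_inv_sqrt @ B @ A_inv_sqrt), 'fro')
```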
The action act(U) of a matrix U ∈ GL(n, R) on P(n) is defined by
$$\mathrm{act}(U)A = U A U^T, \quad A \in P(n). \tag{11}$$
This group action describes the influence of a general affine change of coordinates on a matrix A ∈ P(n). The action extends naturally to tangent vectors in the same way: if ι(t) = A + tV + O(t²) is a curve passing through A with tangent vector V, then the curve act(U)ι(t) = U A U^T + t U V U^T + O(t²) passes through act(U)A with tangent vector act(U)V. Moreover, act(U) : P(n) → P(n) is an isometry for any invertible matrix U, namely
$$d_{\mathrm{Aff}}(A, B) = d_{\mathrm{Aff}}(\mathrm{act}(U)A, \mathrm{act}(U)B), \tag{12}$$
so the Riemannian metric (8) is affine-invariant.

2.4. Log-Euclidean Riemannian Metric

Here, P ( n ) becomes a Lie group by defining a logarithmic multiplication ⊙ as follows
$$A \odot B := \exp(\log A + \log B). \tag{13}$$
The log-Euclidean metric between tangent vectors V_1 and V_2 at a point A ∈ P(n) is given by
$$g_A(V_1, V_2) = g_I\!\left(D_A\log(V_1),\; D_A\log(V_2)\right), \tag{14}$$
where D_A log denotes the differential of the matrix logarithm at the point A. Then, the geodesic γ_LE(t) between A and B is given by
$$\gamma_{LE}(t) = \exp\!\left((1-t)\log A + t \log B\right), \quad 0 \le t \le 1, \tag{15}$$
and it can be seen that γ_LE(t) is a straight line in the domain of matrix logarithms. From (14), the log-Euclidean distance between A and B is written as
$$d_{LE}(A, B) = \| \log A - \log B \|_F. \tag{16}$$
Formula (16) is exactly the Euclidean distance in the logarithmic domain. From the above formulas, we can see that the framework of the log-Euclidean metric is simpler than that of the affine-invariant metric (8).
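A sketch of the distance (16) in the same setting follows (again our own helper, not the authors' code):

```python
import numpy as np
from scipy.linalg import logm

def dist_log_euclidean(A, B):
    """Log-Euclidean distance (16): || log A - log B ||_F."""
    return np.linalg.norm(logm(A) - logm(B), 'fro')
```

In a batch setting, the logarithms can be precomputed once per matrix and reused, whereas the affine-invariant distance (10) couples its two arguments; this is one source of the speed advantage.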
Moreover, the log-Euclidean metrics satisfy the following invariance property
$$d_{LE}(A^{-1}, B^{-1}) = d_{LE}(A, B), \tag{17}$$
with A, B ∈ P(n). Although the log-Euclidean metrics do not entirely satisfy the affine-invariance condition, they are invariant under similarity transformations by scaled orthogonal matrices:
$$d_{LE}\!\left(\mathrm{act}(a\tilde{U})A,\; \mathrm{act}(a\tilde{U})B\right) = d_{LE}(A, B), \tag{18}$$
where U ˜ is an orthogonal matrix and a > 0 is a scaling factor.
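The invariance (18) can be checked numerically; in the sketch below (our own), Ũ is drawn as the orthogonal factor of a QR decomposition and a = 2.5 is an arbitrary scaling factor.

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(1)
n, a = 4, 2.5

def random_spd(n):
    W = rng.standard_normal((n, n))
    return expm((W + W.T) / 2)               # a random point of P(n)

d_LE = lambda A, B: np.linalg.norm(logm(A) - logm(B), 'fro')
act = lambda M, X: M @ X @ M.T               # the group action (11)

A, B = random_spd(n), random_spd(n)
U, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal matrix

# d_LE is unchanged under act(a U), verifying (18).
assert np.isclose(d_LE(act(a * U, A), act(a * U, B)), d_LE(A, B))
```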

3. Non-Iterative Method for the Difference of Means on P(n)

In this section, for 2N + 1 given matrices on P(n), we present a non-iterative method to calculate the distance between a matrix and the mean of the remaining 2N matrices. As an introduction, we first review the concept of the mean matrix associated with different metrics.

3.1. Mean Matrices Associated with Different Metrics

For a given metric, the associated mean of m matrices A_1, …, A_m on P(n) is defined as the matrix M(A_1, …, A_m) minimizing the cost function
$$M(A_1, \dots, A_m) = \arg\min_{A \in P(n)} \sum_{i=1}^{m} d^2(A, A_i), \tag{19}$$
where d(·, ·) corresponds to the chosen metric.
First, the mean matrix induced by the Frobenius inner product (5) is represented as
$$M_F(A_1, \dots, A_m) = \frac{1}{m} \sum_{i=1}^{m} A_i, \tag{20}$$
which is the usual arithmetic mean. By the convexity of P(n), the Frobenius inner product can still be used, but P(n) with this flat metric does not result in a geodesically complete space. Because of this, it may not be appropriate to adopt the arithmetic mean on this manifold (see Ref. [6]).
The Riemannian mean associated with the affine-invariant metric (8), denoted by M_Aff(A_i), is not explicit. It can only be computed by the following iterative algorithm [6]:
$$A^{(k+1)} = \left(A^{(k)}\right)^{1/2} \exp\!\left(\eta \sum_{i=1}^{m} \log\!\left(\left(A^{(k)}\right)^{-1/2} A_i \left(A^{(k)}\right)^{-1/2}\right)\right) \left(A^{(k)}\right)^{1/2}, \tag{21}$$
where A^{(k)} denotes the k-th iterate of M_Aff(A_i) and η is the learning rate.
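A hedged sketch of the iteration (21) follows; the initial guess (the arithmetic mean) and the default learning rate η = 1/m are our own choices. The stopping test uses the identity d_Aff(A^{(k)}, A^{(k+1)}) = η‖S‖_F, where S is the sum of logarithms appearing in (21).

```python
import numpy as np
from scipy.linalg import expm, logm

def riemannian_mean_affine(mats, eta=None, tol=1e-15, max_iter=500):
    """Fixed-point iteration (21) for the affine-invariant Riemannian mean."""
    m = len(mats)
    eta = 1.0 / m if eta is None else eta
    A = sum(mats) / m                        # start from the arithmetic mean
    for _ in range(max_iter):
        w, Q = np.linalg.eigh(A)
        A_s = (Q * np.sqrt(w)) @ Q.T         # A^(1/2)
        A_is = (Q / np.sqrt(w)) @ Q.T        # A^(-1/2)
        S = sum(logm(A_is @ Ai @ A_is) for Ai in mats)
        A = A_s @ expm(eta * S) @ A_s
        if eta * np.linalg.norm(S, 'fro') < tol:   # equals d_Aff(A^(k), A^(k+1))
            break
    return A
```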
Next, the Riemannian mean in the log-Euclidean case is given. The following lemmas are useful for calculating the Riemannian mean [12,18].
Lemma 1.
Let X, Y denote matrix-valued functions and A be a constant matrix. We have
$$dA = 0, \tag{22}$$
$$d(X \pm Y) = dX \pm dY, \tag{23}$$
$$d(XY) = (dX)Y + X(dY), \tag{24}$$
$$d\,\mathrm{tr}(X) = \mathrm{tr}(dX). \tag{25}$$
Moreover, if the matrix X is invertible and all its eigenvalues are positive, then
$$\mathrm{tr}(d \log X) = \mathrm{tr}(X^{-1} dX), \tag{26}$$
$$\mathrm{tr}(d \log^2 X) = 2\,\mathrm{tr}(\log X \, X^{-1} dX). \tag{27}$$
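As a sanity check (ours, not from the paper), the identity (26) can be verified with a symmetric finite difference, since tr(log X) is a scalar function of X:

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(2)
n, eps = 4, 1e-6

W = rng.standard_normal((n, n))
X = expm((W + W.T) / 2)          # SPD, so log X is well defined
H = rng.standard_normal((n, n))
H = (H + H.T) / 2                # symmetric perturbation direction dX

# Directional derivative of tr(log X) along H versus tr(X^{-1} H), Formula (26).
lhs = (np.trace(logm(X + eps * H)) - np.trace(logm(X - eps * H))) / (2 * eps)
rhs = np.trace(np.linalg.solve(X, H))
assert np.isclose(lhs, rhs, atol=1e-5)
```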
Lemma 2.
Let F(X) denote a scalar function of the n × n matrix X. We have the following property:
$$dF(X) = \mathrm{tr}(W dX) \tag{28}$$
is equivalent to
$$\frac{\partial F(X)}{\partial X} = W^T, \tag{29}$$
where ∂F(X)/∂X stands for the gradient of F(X) at X.
By the above lemmas, we get the Riemannian mean with the log-Euclidean metric as follows.
Lemma 3.
For m given matrices A_i on P(n), the Riemannian mean associated with the log-Euclidean metric (14) is obtained by
$$M_{LE}(A_1, \dots, A_m) = \exp\!\left(\frac{1}{m} \sum_{i=1}^{m} \log A_i\right). \tag{30}$$
Proof. 
According to (16) and (19), the Riemannian mean of A_1, …, A_m minimizes the following cost function:
$$J(A) = \sum_{i=1}^{m} \| \log A - \log A_i \|_F^2 = \sum_{i=1}^{m} \mathrm{tr}(\log A - \log A_i)^2. \tag{31}$$
Using Lemma 1, the differential of the cost function J(A) at A is
$$dJ(A) = 2 \sum_{i=1}^{m} \mathrm{tr}\!\left((\log A - \log A_i) A^{-1} dA\right) = 2\,\mathrm{tr}\!\left(\left(m \log A - \sum_{i=1}^{m} \log A_i\right) A^{-1} dA\right). \tag{32}$$
By means of Lemma 2 and the symmetry of matrices on P(n), we obtain the gradient of J(A) at A as
$$\frac{\partial J(A)}{\partial A} = A^{-1}\left(m \log A - \sum_{i=1}^{m} \log A_i\right). \tag{33}$$
The necessary and sufficient condition for A to be the minimizer of J(A) is
$$\frac{\partial J(A)}{\partial A} = A^{-1}\left(m \log A - \sum_{i=1}^{m} \log A_i\right) = 0, \tag{34}$$
and the explicit expression of A is given by
$$A = \exp\!\left(\frac{1}{m} \sum_{i=1}^{m} \log A_i\right). \tag{35}$$
With this, the proof is completed.   □
It can be seen that the log-Euclidean mean (30) is simpler than the affine-invariant one with the iterative Formula (21), and that the mean in Lemma 3 is simply the arithmetic mean in the logarithmic domain.
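In code, the closed form (30) is a couple of lines (a sketch with SciPy; the helper name is ours), in contrast with the iteration (21):

```python
import numpy as np
from scipy.linalg import expm, logm

def log_euclidean_mean(mats):
    """Closed-form log-Euclidean mean (30): exp of the averaged logarithms."""
    return expm(sum(logm(A) for A in mats) / len(mats))
```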

3.2. Non-Iterative Method for the Difference

With the above work, a non-iterative method is proposed to calculate the Riemannian distance between a matrix and the mean of some matrices.
For 2N + 1 given matrices R_{-N}, …, R_{-1}, R_0, R_1, …, R_N on P(n), R_0 plays the role of the detected matrix in the algorithms proposed in Refs. [1,5,6].
Since the affine-invariant Riemannian framework with the iterative Formula (21) involves matrix exponential, logarithm, square root and inverse operations, it is time-consuming in practical applications. Although several other scholars have proposed gradient descent algorithms for the Riemannian mean [6,19,20], all of these algorithms are iterative. Therefore, the log-Euclidean Riemannian framework is adopted in our method, so that computations on the Lie group P(n), a Riemannian manifold, are transformed into calculations on S(n), a Euclidean space, in the domain of matrix logarithms.
Initially, we transform all the matrices into symmetric matrices by the matrix logarithm
$$\log R_i := X_i, \tag{36}$$
where −N ≤ i ≤ N. Then, let X_{i+1} − X_i := τ_i and
$$\sum_{k=0}^{N-1} \frac{N-k}{2N} \tau_k := \mathrm{Right}(X_0), \qquad \sum_{k=0}^{N-1} \frac{k+1}{2N} \tau_{k-N} := \mathrm{Left}(X_0). \tag{37}$$
Right(X_0) stands for the right weighted mean of X_1, X_2, …, X_N after X_0, and Left(X_0) stands for the left weighted mean of X_{-N}, …, X_{-1} before X_0. Therefore, we can represent the log-Euclidean distance between R_0 and M_LE(R_i \ R_0) by the following theorem, where R_i \ R_0 stands for R_{-N}, …, R_{-1}, R_1, …, R_N.
Theorem 1.
For 2N + 1 given matrices R_{-N}, …, R_{-1}, R_0, R_1, …, R_N on P(n), the log-Euclidean distance between R_0 and the mean of the remaining 2N matrices R_{-N}, …, R_{-1}, R_1, …, R_N is obtained by
$$d_{LE}\!\left(R_0, M_{LE}(R_i \backslash R_0)\right) = \left\| \mathrm{Right}(X_0) - \mathrm{Left}(X_0) \right\|_F. \tag{38}$$
Proof. 
From Lemma 3 and (16), the log-Euclidean distance between R_0 and the mean of the remaining 2N matrices R_{-N}, …, R_{-1}, R_1, …, R_N is given by
$$d_{LE}\!\left(R_0, M_{LE}(R_i \backslash R_0)\right) = \left\| \frac{1}{2N} \sum_{k=1}^{N} (\log R_k + \log R_{-k}) - \log R_0 \right\|_F. \tag{39}$$
We can rewrite Formula (39) as follows:
$$d_{LE}\!\left(R_0, M_{LE}(R_i \backslash R_0)\right) = \left\| \frac{1}{2N} \sum_{k=1}^{N} (X_k + X_{-k}) - X_0 \right\|_F = \left\| \sum_{k=0}^{N-1} \frac{N-k}{2N} \tau_k - \sum_{k=0}^{N-1} \frac{k+1}{2N} \tau_{k-N} \right\|_F. \tag{40}$$
This means that d_LE(R_0, M_LE(R_i \ R_0)) can be expressed by the difference of the right mean Right(X_0) and the left mean Left(X_0).
This completes the proof of Theorem 1.  □
Obviously, Formula (38) for calculating the log-Euclidean distance between R_0 and the mean of the remaining 2N matrices R_{-N}, …, R_{-1}, R_1, …, R_N is non-iterative, and the Riemannian mean M_LE(R_i \ R_0) does not need to be computed. P(n) with the log-Euclidean metric (14) is isomorphic and isometric to the corresponding Euclidean space S(n) [15]; therefore, the calculation in the log-Euclidean framework should be time-efficient.
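The whole procedure of Theorem 1 can be sketched as follows (our own rendering, with hypothetical function names); the final assertion checks Formula (38) against the direct expression (39).

```python
import numpy as np
from scipy.linalg import expm, logm

def diff_of_means_distance(R_neg, R0, R_pos):
    """Non-iterative d_LE(R_0, M_LE(R_i \\ R_0)) of Theorem 1.

    R_neg = [R_{-N}, ..., R_{-1}] and R_pos = [R_1, ..., R_N].
    """
    N = len(R_pos)
    logs = [logm(R) for R in R_neg] + [logm(R0)] + [logm(R) for R in R_pos]
    X = dict(zip(range(-N, N + 1), logs))                   # X_i = log R_i, (36)
    tau = {i: X[i + 1] - X[i] for i in range(-N, N)}        # increments tau_i
    right = sum((N - k) / (2 * N) * tau[k] for k in range(N))      # Right(X_0)
    left = sum((k + 1) / (2 * N) * tau[k - N] for k in range(N))   # Left(X_0)
    return np.linalg.norm(right - left, 'fro')              # Formula (38)

# Consistency check against the direct expression (39).
rng = np.random.default_rng(3)
def random_spd(n):
    W = rng.standard_normal((n, n))
    return expm((W + W.T) / 2)

n, N = 3, 4
R_neg = [random_spd(n) for _ in range(N)]
R_pos = [random_spd(n) for _ in range(N)]
R0 = random_spd(n)

mean_log = sum(logm(R) for R in R_neg + R_pos) / (2 * N)
assert np.isclose(diff_of_means_distance(R_neg, R0, R_pos),
                  np.linalg.norm(mean_log - logm(R0), 'fro'))
```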

3.3. Numerical Examples

Three examples are provided in this subsection. The first illustrates the difference between the arithmetic mean, the affine-invariant mean and the log-Euclidean mean on P(n). The other two demonstrate the numerical behavior of our proposed method for the calculation of d_LE(R_0, M_LE(R_i \ R_0)) and compare its performance with the algorithm proposed by Ref. [6].
In the following simulations, the error criterion of the iterative Formula (21) is
$$d_{\mathrm{Aff}}\!\left(A^{(k+1)}, A^{(k)}\right) < 10^{-15}, \tag{41}$$
and all samples on P(n) are generated by the rule
$$\exp\!\left(\frac{V + V^T}{2}\right), \tag{42}$$
where V ∈ R^{n×n} is a random matrix, so that (V + V^T)/2 ∈ S(n).
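In code, the sampling rule (42) reads as follows (a sketch; the helper name is ours):

```python
import numpy as np
from scipy.linalg import expm

def random_spd(n, rng):
    """One random sample on P(n) by the rule (42)."""
    V = rng.standard_normal((n, n))          # random V in R^{n x n}
    return expm((V + V.T) / 2)               # exp((V + V^T)/2) lies in P(n)

rng = np.random.default_rng(4)
samples = [random_spd(5, rng) for _ in range(51)]   # e.g., the 51 samples of Example 2
```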
Example 1.
First, we consider the 3-dimensional manifold P(2) in order to display symmetric positive-definite matrices in the XYZ coordinate system. Figure 2 shows the arithmetic mean (diamond), the affine-invariant mean (circle) and the log-Euclidean mean (cross) of 50 randomly generated matrices (asterisks). It can be seen that the location of the log-Euclidean mean is very close to that of the affine-invariant mean.
Then, for 10 randomly generated samples A_1, …, A_10 on P(4), we use 50 experiments to compare the cost functions Σ_{i=1}^{10} d²(A, A_i) of the arithmetic mean, the affine-invariant mean and the log-Euclidean mean. Figure 3 shows that the cost function of the log-Euclidean mean is the smallest of the three, and that the cost function of the arithmetic mean is much larger than the others. The cost function of the log-Euclidean mean is very close to that of the Riemannian mean; the difference between them is less than 1.3 for this set of experimental data.
Therefore, it is shown that the log-Euclidean mean and the Riemannian mean are more suitable than the arithmetic mean on P(n).
Example 2.
Let n = 5 and N = 25 in Theorem 1; 51 randomly generated samples on P(5) are then employed to verify the effectiveness of our method. Figure 4 shows that the log-Euclidean distance between R_0 and the mean M_LE(R_i \ R_0) is 1.1141 (straight line), and the bars denote the log-Euclidean distance from the log-Euclidean mean to each sample.
Then, we use 100 experiments to compare our method with that developed by Ref. [6]. Figure 5 shows that the absolute difference |d_LE(R_0, M_LE(R_i \ R_0)) − d_Aff(R_0, M_Aff(R_i \ R_0))| < 0.17. Moreover, our method is non-iterative, and the log-Euclidean framework is simpler than that of Ref. [6], so our approach is clearly more time-efficient; we therefore do not compare the computing times of the two methods in the above simulations.
In conclusion, our non-iterative method in the log-Euclidean framework is effective, and the calculation results are very close to those of the classical method proposed by Ref. [6].
Example 3.
In this example, the difference of means method is used to judge whether there are target matrices among the detected matrices.
First, let n = 10 and N = 20; we generate 100 random detected matrices R_0^1, …, R_0^{100} and 40 random reference matrices R_{-20}, …, R_{-1}, R_1, …, R_{20} on P(10), where R_0^5, R_0^{16}, R_0^{35}, R_0^{64} are obtained by adding bI (2 < b < 3) to random symmetric positive-definite matrices. For each detected matrix R_0^j, the distance between R_0^j and the mean of the reference matrices is calculated by our non-iterative method in Theorem 1 and by the algorithm proposed by Ref. [6], respectively. Figure 6 shows that the distance curves produced by the two methods are in good agreement and that R_0^5, R_0^{16}, R_0^{35}, R_0^{64} can be clearly distinguished from the reference matrices. Then, similarly to the above process, we set n = 2 and display 100 randomly detected matrices (circles) and 40 random reference matrices (blue asterisks) in the XYZ coordinate system. In Figure 7, it is seen that R_0^5, R_0^{16}, R_0^{35}, R_0^{64} (red circles) are far from the reference matrices. Consequently, our difference of means method is valid for seeking the target matrices and is more efficient than the algorithm developed by Ref. [6].

4. Application to Point Cloud Denoising

In this section, the method for the difference of means is applied to denoise point clouds with high-density noise via the K-means clustering algorithm. When there is a considerable distance between data points within a cluster, traditional clustering algorithms suffer from difficulties, making it hard to distinguish random noise from effective data [21]. In Refs. [22,23], clustering is carried out according to the neighborhood density of each data point. It is noted that the local statistical structures of noise and effective data are different, so data with similar local statistics should be grouped into the same cluster. To this end, all points are projected into a family of parameter distributions to obtain the parameter point cloud, which contains the local information of the original point cloud. The original point cloud is then classified according to the clustering results of the parameter point cloud.

4.1. Algorithm for Point Cloud Denoising

First, the original data points are mapped to the manifold of normal distribution families, whose expectation and covariance matrix are calculated from the k-nearest neighbors of each point, such that the parameter point cloud is formed. In R^n, we denote the point cloud of scale l as
$$C := \{ p_i \in \mathbb{R}^n \mid i = 1, \dots, l \}. \tag{43}$$
For any p ∈ C, the k-nearest neighbor method is used to select the neighborhood N(p, k), abbreviated as N. Then, the expectation μ(N) := E(N) and the covariance matrix Σ(N) := Cov(N) of the data points in N are calculated. Finally, the data points are represented as parameter points on the n-ary normal distribution family manifold N^n, where
$$\mathcal{N}^n = \left\{ P \;\middle|\; P(x; \mu, \Sigma) = \frac{1}{\sqrt{(2\pi)^n \det(\Sigma)}} \exp\!\left\{ -\frac{(x-\mu)^T \Sigma^{-1} (x-\mu)}{2} \right\} \right\}. \tag{44}$$
Thus, the local statistical mapping is defined by
$$\Psi : C \to \mathcal{N}^n, \tag{45}$$
which satisfies
$$\Psi(p) = P(x; \mu(N), \Sigma(N)) = \frac{1}{\sqrt{(2\pi)^n \det(\Sigma(N))}} \exp\!\left\{ -\frac{(x-\mu(N))^T \Sigma(N)^{-1} (x-\mu(N))}{2} \right\}. \tag{46}$$
Under the local statistical mapping Ψ, the image C̃ := Ψ(C) ⊂ N^n of the point cloud C is called the parameter point cloud [24]. The barycenter of the parameter point cloud C̃ is defined as follows:
$$c(\tilde{C}) = \arg\min_{(\mu, \Sigma) \in \mathcal{N}^n} \sum_{i=1}^{l} d^2\!\left((\mu_i, \Sigma_i), (\mu, \Sigma)\right). \tag{47}$$
Moreover, the barycenter c(C̃), that is, the geometric mean, depends on the choice of distance function on N^n. To avoid complex iterative calculation, the Euclidean distance (7) and the log-Euclidean distance (16) are used, respectively, in the following. Because N^n and R^n × P(n) are topologically homeomorphic [25], the geometric structure on N^n can be induced by defining metrics on R^n × P(n). In one case, R^n × P(n) is embedded into R^n × R^{n×n}, and the distance function on N^n is obtained from the Euclidean metric:
$$d_F\!\left((\mu_1, \Sigma_1), (\mu_2, \Sigma_2)\right) = \|\mu_1 - \mu_2\|_F + \|\Sigma_1 - \Sigma_2\|_F. \tag{48}$$
From (20), the geometric mean c(C̃) is the arithmetic mean
$$c(\tilde{C}) = \left( \frac{1}{l} \sum_{i=1}^{l} \mu_i,\; \frac{1}{l} \sum_{i=1}^{l} \Sigma_i \right). \tag{49}$$
The other case is to define the log-Euclidean metric (14) on P(n); then, the distance function on N^n is expressed as
$$d_{LE}\!\left((\mu_1, \Sigma_1), (\mu_2, \Sigma_2)\right) = \|\mu_1 - \mu_2\|_F + \|\log(\Sigma_1) - \log(\Sigma_2)\|_F. \tag{50}$$
According to Lemma 3, the geometric mean c(C̃) is the arithmetic mean in the logarithmic domain, as follows:
$$c(\tilde{C}) = \left( \frac{1}{l} \sum_{i=1}^{l} \mu_i,\; \exp\!\left\{ \frac{1}{l} \sum_{i=1}^{l} \log(\Sigma_i) \right\} \right). \tag{51}$$
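A sketch of the two barycenters (49) and (51) on the parameter point cloud follows (function names are ours):

```python
import numpy as np
from scipy.linalg import expm, logm

def barycenter_euclidean(mus, sigmas):
    """Geometric mean (49): arithmetic mean in both components."""
    return sum(mus) / len(mus), sum(sigmas) / len(sigmas)

def barycenter_log_euclidean(mus, sigmas):
    """Geometric mean (51): arithmetic in mu, log-Euclidean in Sigma."""
    return sum(mus) / len(mus), expm(sum(logm(S) for S in sigmas) / len(sigmas))
```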
Note that the local statistical structure of the signal is very different from that of random noise, so we cluster C̃ into two categories: signal and noise. Due to the statistical properties of the n-ary normal distribution family, the efficiency of the algorithm depends on the choice of distance function on N^n. The algorithm for clustering signal and noise is given as Algorithm 1.
In the process of clustering, Theorem 1 is used to calculate the log-Euclidean distance between the geometric mean of a category and each data point in the parameter point cloud C̃. In addition, when clustering with the distance function obtained from the log-Euclidean metric, the difference between the old and new barycenters is compared in the logarithmic domain to avoid calculating the exponential map.

4.2. Numerical Examples

This subsection verifies the effectiveness of Algorithm 1 in removing high-density noise and compares the denoising effect of the distance function obtained from the Euclidean metric with that of the log-Euclidean metric. The experimental data are the 3-dimensional Teapot point cloud, and the background noise is uniformly distributed, with a signal-to-noise ratio (SNR) of 4148:1000, as shown in Figure 8.
Algorithm 1 Algorithm for clustering signal and noise.
Input: Point cloud C, value of k and convergence threshold ε.
Output: Two categories of the point cloud C.
Main loop:
  1. Map the point cloud C into a parameter point cloud C̃.
  2. K-means step (K = 2): let the two barycenters of the current division be c_1^{(i)} and c_2^{(i)}, and use the K-means algorithm based on the distance function (48) or (50) to cluster the parameter point cloud C̃. Then, according to the clustering results, calculate the geometric mean of each category by (49) or (51), and update the two barycenters of the current division to the geometric means c_1^{(i+1)} and c_2^{(i+1)}.
  3. If d(c_1^{(i)}, c_1^{(i+1)}) < ε and d(c_2^{(i)}, c_2^{(i+1)}) < ε, output the two categories of the original point cloud C according to the current division of the parameter point cloud C̃. Otherwise, return to step 2.
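For concreteness, a compact sketch of Algorithm 1 with the log-Euclidean distance (50) follows. It is our own simplified rendering, not the authors' implementation: the k-nearest-neighbor search is brute force, the initial division alternates labels, a small ridge keeps each covariance positive-definite, and label stability stands in for the ε test on barycenters.

```python
import numpy as np
from scipy.linalg import logm

def local_statistics(C, k):
    """Map each point to (mu, log Sigma) of its k nearest neighbors, cf. (46)."""
    params = []
    for p in C:
        idx = np.argsort(np.linalg.norm(C - p, axis=1))[:k]
        nbrs = C[idx]
        Sigma = np.cov(nbrs, rowvar=False) + 1e-8 * np.eye(C.shape[1])
        params.append((nbrs.mean(axis=0), logm(Sigma)))
    return params

def cluster_signal_noise(C, k=5, max_iter=100):
    """K-means (K = 2) on the parameter point cloud with the distance (50)."""
    P = local_statistics(np.asarray(C, dtype=float), k)
    labels = np.arange(len(P)) % 2                 # arbitrary initial division
    for _ in range(max_iter):
        cents = []
        for c in (0, 1):                           # barycenters via (51), log domain
            members = [P[i] for i in range(len(P)) if labels[i] == c] or [P[0]]
            cents.append((sum(mu for mu, _ in members) / len(members),
                          sum(L for _, L in members) / len(members)))
        new = np.array([np.argmin([np.linalg.norm(mu - cm)
                                   + np.linalg.norm(L - cL, 'fro')
                                   for cm, cL in cents])
                        for mu, L in P])
        if np.all(new == labels):                  # stand-in for the eps criterion
            break
        labels = new
    return labels
```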
Taking k = 5, K = 2 and ε = 0.01, Figure 9 shows the images of the original point cloud denoised by Algorithm 1, based on the Euclidean metric and the log-Euclidean metric, respectively. Obviously, Algorithm 1 based on the log-Euclidean metric is more effective.
The efficiency and convergence of Algorithm 1 are shown in Figure 10, which also demonstrates that Algorithm 1 using the log-Euclidean metric converges faster. To compare the denoising effects of the different metrics, we adopt three criteria: Accuracy, Recall and Precision. For different SNRs, the experimental results are shown in Table 1. In conclusion, Algorithm 1 based on the distance obtained from the log-Euclidean metric has an obvious denoising effect for high-density noise, which is a successful application of our proposed method (Theorem 1). Furthermore, it confirms once more that the Euclidean metric cannot describe the geometric structure of the manifold well.

5. Conclusions

On the Lie group of symmetric positive-definite matrices, we have presented a non-iterative method to calculate the distance between a matrix and the mean matrix. Based on log-Euclidean metrics, we transform computations on the Lie group into computations on the space of symmetric matrices in the domain of matrix logarithms, and conclude that this distance can be represented by the Euclidean distance between the right weighted mean and the left weighted mean. Our proposed method is simpler and more efficient than the classical iterative approach induced by affine-invariant Riemannian metrics in Ref. [6]. In addition, one example shows that the log-Euclidean mean and the Riemannian mean are more suitable than the arithmetic mean on the space of symmetric positive-definite matrices, and two further examples illustrate that our method in Theorem 1 is more effective than the approach developed by Ref. [6]. Finally, the method for the difference of means is applied to denoise point clouds with high-density noise via the K-means clustering algorithm, and the experimental results show that Algorithm 1, based on the distance induced by the log-Euclidean metric, has an obvious denoising effect.

Author Contributions

Investigation, X.D.; Methodology, X.J. and H.S.; Software, H.G.; Writing—original draft, X.D.; Writing—review and editing, X.J., X.D., H.S. and H.G. All authors have read and agreed to the published version of the manuscript.

Funding

The present research is supported by the National Natural Science Foundation of China (No. 61401058, No. 61179031).

Acknowledgments

The authors would like to thank the anonymous reviewers as well as Xinyu Zhao for their detailed and careful comments, which helped improve the quality of presentation and the simulations for Section 4.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Agarwal, M.; Gupta, M.; Mann, V.; Sachindran, N.; Anerousis, N.; Mummert, L. Problem determination in enterprise middleware systems using change point correlation of time series data. In Proceedings of the 2006 IEEE/IFIP Network Operations and Management Symposium (NOMS 2006), Vancouver, BC, Canada, 3–7 April 2006; pp. 471–482.
2. Cheng, Y.; Wang, X.; Morelande, M.; Moran, B. Information geometry of target tracking sensor networks. Inf. Fusion 2013, 14, 311–326.
3. Li, X.; Cheng, Y.; Wang, H.; Qin, Y. Information Geometry Method of Radar Signal Processing; Science Press: Beijing, China, 2014.
4. Merckel, L.; Nishida, T. Change-point detection on the Lie group SE(3). In Proceedings of the International Conference on Computer Vision, Imaging and Computer Graphics, Angers, France, 17–21 May 2010; Springer: Berlin, Germany, 2010; pp. 230–245.
5. Arnaudon, M.; Barbaresco, F.; Yang, L. Riemannian medians and means with applications to radar signal processing. IEEE J. Sel. Top. Signal Process. 2013, 7, 595–604.
6. Barbaresco, F. Interactions between symmetric cone and information geometries: Bruhat-Tits and Siegel spaces models for high resolution autoregressive Doppler imagery. In Emerging Trends in Visual Computing; Springer: Berlin, Germany, 2008.
7. Duan, X.; Zhao, X.; Shi, C. An extended Hamiltonian algorithm for the general linear matrix equation. J. Math. Anal. Appl. 2016, 441, 1–10.
8. Luo, Z.; Sun, H. Extended Hamiltonian algorithm for the solution of discrete algebraic Lyapunov equations. Appl. Math. Comput. 2014, 234, 245–252.
9. Chen, W.; Chena, S. Application of the non-local log-Euclidean mean to radar target detection in nonhomogeneous sea clutter. IEEE Access 2019, 99, 36043–36054.
10. Wang, B.; Hu, Y.; Gao, J.; Ali, M.; Tien, D.; Sun, Y.; Yin, B. Low rank representation on SPD matrices with log-Euclidean metric. Pattern Recognit. 2016, 76, 623–634.
11. Liu, Q.; Shao, G.; Wang, Y.; Gao, J.; Leung, H. Log-Euclidean metrics for contrast preserving decolorization. IEEE Trans. Image Process. 2017, 26, 5772–5783.
12. Moakher, M. A differential geometric approach to the geometric mean of symmetric positive-definite matrices. SIAM J. Matrix Anal. Appl. 2005, 26, 735–747.
13. Moakher, M. On the averaging of symmetric positive-definite tensors. J. Elast. 2006, 82, 273–296.
14. Batchelor, P.G.; Moakher, M.; Atkinson, D.; Calamante, F.; Connelly, A. A rigorous framework for diffusion tensor calculus. Magn. Reson. Med. 2005, 53, 221–225.
15. Arsigny, V.; Fillard, P.; Pennec, X.; Ayache, N. Log-Euclidean metrics for fast and simple calculus on diffusion tensors. Magn. Reson. Med. 2006, 56, 411–421.
16. Curtis, M.L. Matrix Groups; Springer: Berlin, Germany, 1979.
17. Jost, J. Riemannian Geometry and Geometric Analysis, 3rd ed.; Springer: Berlin, Germany, 2002.
18. Jorgen, W. Matrix differential calculus with applications in statistics and econometrics. De Economist 2000, 148, 130–131.
19. Fiori, S. Learning the Fréchet mean over the manifold of symmetric positive-definite matrices. Cogn. Comput. 2009, 1, 279–291.
20. Duan, X.; Sun, H.; Peng, L. Application of gradient descent algorithms based on geodesic distances. Sci. China Inf. Sci. 2020, 63, 152201:1–152201:11.
21. Jain, A.K. Data clustering: 50 years beyond K-means. Pattern Recognit. Lett. 2010, 31, 651–666.
22. Zhu, Y.; Ting, K.M.; Carman, M.J. Density-ratio based clustering for discovering clusters with varying densities. Pattern Recognit. 2016, 60, 983–997.
23. Rodriguez, A.; Laio, A. Clustering by fast search and find of density peaks. Science 2014, 344, 1492–1496.
24. Sun, H.; Song, Y.; Luo, Y.; Sun, F. A clustering algorithm based on statistical manifold. Trans. Beijing Inst. Technol. 2021, 41, 226–230.
25. Amari, S. Information Geometry and Its Applications; Springer: Tokyo, Japan, 2016.
Figure 1. The difference of means method.
Figure 2. Mean of 50 samples.
Figure 3. Comparison of cost functions.
Figure 4. Log-Euclidean distance to samples.
Figure 5. Absolute value of the difference obtained by the two methods.
Figure 6. Distance from the detected matrices to the mean.
Figure 7. Target matrices.
Figure 8. Original data cloud.
Figure 9. Point cloud after denoising based on the Euclidean metric (left) and the log-Euclidean metric (right).
Figure 10. Denoising effects of different metrics (log scale).
Table 1. Comparison of denoising effects.

Metric               | SNR 4148:1000                  | SNR 4148:2000
                     | Precision | Recall | Accuracy  | Precision | Recall | Accuracy
log-Euclidean metric | 91.53%    | 100%   | 92.54%    | 85.35%    | 99.45% | 88.11%
Euclidean metric     | 85.41%    | 99.88% | 86.15%    | 74.40%    | 93.66% | 73.98%
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
