Article

Unitary Diagonalization of the Generalized Complementary Covariance Quaternion Matrices with Application in Signal Processing

1 Department of Mathematics, Shanghai University, Shanghai 200444, China
2 School of Finance, Shanghai University of International Business and Economics, Shanghai 201620, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Mathematics 2023, 11(23), 4840; https://doi.org/10.3390/math11234840
Submission received: 5 October 2023 / Revised: 21 November 2023 / Accepted: 29 November 2023 / Published: 1 December 2023
(This article belongs to the Special Issue Infinite Matrices and Their Applications)

Abstract: Let $\mathbb{H}$ denote the quaternion algebra. This paper investigates the generalized complementary covariance, which is a $\phi$-Hermitian quaternion matrix. We give the properties of the generalized complementary covariance matrices. In addition, we explore the unitary diagonalization of the covariance and the generalized complementary covariance. Moreover, we give the generalized quaternion unitary transform algorithm and test its performance by numerical simulation.

1. Introduction

A covariance matrix describes the relationship between different dimensions, and it can be applied in stochastic modeling and principal component analysis (PCA) (e.g., [1,2,3,4,5,6]). The central idea of principal component analysis is the diagonalization of the covariance matrix. Moreover, the diagonalization of covariance matrices plays an important role in many statistical signal processing algorithms (e.g., [7,8,9]). The covariance matrix $C = E\{xx^H\}$ and the pseudo-covariance matrix $P = E\{xx^T\}$ together constitute the complete second-order statistics in the complex field [10]. Diagonalization of $C$ and $P$ is performed using eigen-decomposition and the Takagi factorization, respectively. These two decompositions can decorrelate the data in its general form [11]. De Lathauwer and De Moor [12] as well as Eriksson and Koivunen [13] introduced the strong uncorrelating transformation (SUT). Ollila and Koivunen [14] presented its extension, namely, the generalized uncorrelating transformation (GUT). Cheong Took et al. [15] researched the approximate uncorrelating transform (AUT) to decorrelate mixed signals. SUT, GUT, and AUT can be used in blind separation of non-circular complex sources [16], separation of signal and noise components in harmonic signal subbands [17], and so on.
Quaternion algebra is an associative and non-commutative division algebra over the real number field. The applications of quaternion matrices involve many fields, such as computer science, orbital mechanics, statistics, and so on (e.g., [1,18,19,20,21]). Especially in the field of signal processing, C.C. Took and D.P. Mandic [22] proposed the quaternion widely linear (QWL) model for quaternion-valued mean square error (MSE) estimation. C.C. Took et al. [23] studied the augmented second-order statistics of quaternion random signals, and in [15] they investigated the approximate diagonalization of correlation matrices in widely linear signal processing. Quaternions can be used to process high-dimensional data. As such, the quaternion matrix is a new and effective tool in signal processing. Research on quaternion signal processing has involved Fourier transforms [24], neural networks (e.g., [25,26]), independent component analysis (ICA) [27], and so on. Due to these widespread applications, the structure and simultaneous diagonalization of covariance matrices over the quaternion field have received great attention in widely linear signal processing (e.g., [21,23,28]).
Because quaternion multiplication is not commutative, the related theory in the complex field $\mathbb{C}$ cannot be directly extended to the quaternion field $\mathbb{H}$. In the quaternion field, Cheong Took et al. [23] presented the standard covariance matrix $C_x = E\{xx^H\}$, which is a Hermitian quaternion matrix, and the complementary covariance matrices $C_x^\eta = E\{xx^{\eta H}\}$, which are $\eta$-Hermitian quaternion matrices, where $\eta \in \{i, j, k\}$, $i$, $j$, $k$ are unit imaginary numbers, and $(\cdot)^H$ represents the Hermitian (conjugate) transpose. In recent years, some scholars have studied $\eta$-(skew-)Hermitian matrices (e.g., [2,29]). The matrix $C_x$ can be diagonalized via eigen-decomposition and $C_x^\eta$ via singular value decomposition [21]. As for their joint diagonalization, Enshaeifar et al. [30] presented the quaternion uncorrelating transform (QUT), which is the generalization of SUT in the quaternion field. Moreover, Min et al. [31] introduced the quaternion approximate uncorrelating transform (QAUT), which simultaneously diagonalizes all four covariance matrices associated with improper quaternion signals.
Rodman [32] presented the definition of the $\phi$-Hermitian quaternion matrix $A = A^\phi$, where $\phi$ is a non-standard involution, i.e., $\phi(a) = -\delta a^H \delta$, $a^H$ is the conjugate of the quaternion $a$, and $\delta = u_1 i + u_2 j + u_3 k \in \mathbb{H}$ is unit and purely imaginary. Obviously, when $\delta \in \{i, j, k\}$, $\phi$-Hermitian coincides with $\eta$-Hermitian, so $\phi$-Hermitian is a more general case of $\eta$-Hermitian. In recent years, some scholars have researched $\phi$-Hermitian quaternion matrices in many fields. For example, Aghamollaei et al. [33] studied the numerical ranges with respect to non-standard involutions $\phi$ on the quaternion field. He [34] presented some quaternion matrix equations involving $\phi$-Hermicity.
In this paper, we investigate the generalized complementary covariance quaternion matrices $C_x^\phi = E\{xx^{\delta H}\}$, where $\delta = u_1 i + u_2 j + u_3 k \in \mathbb{H}$ is unit and purely imaginary. It is obvious that $C_x^\phi$ is a $\phi$-Hermitian quaternion matrix. Furthermore, we study the simultaneous diagonalization of the standard covariance matrix and the generalized complementary covariance quaternion matrices. Moreover, we give the joint diagonalization algorithm and a numerical simulation. The main contribution of this paper is to generalize the complementary covariance quaternion matrices from $\eta$-Hermitian to $\phi$-Hermitian matrices; by comparing the separation time of mixed signals, we show that the performance improves after this generalization.
The remainder of this paper is organized as follows. In Section 2, we review the definitions of involution, complex representation of quaternion, the ϕ -Hermitian quaternion matrix, and quaternion improperness. In Section 3, we introduce the structure of the generalized complementary covariance quaternion matrices. In Section 4, we give the conditions of unitary diagonalization of standard covariance and generalized complementary covariance quaternion matrices. In Section 5, we present the generalized quaternion unitary transform algorithm and test the performance by numerical simulation.

2. Preliminaries

2.1. Quaternion Algebra

Let R , C , H represent the real number field, complex field, and quaternion field, respectively. We know that the complex field is composed of two unit bases { 1 , i } , and the quaternion field is composed of four bases { 1 , i , j , k } . A quaternion x includes one real part and three imaginary parts, whose general expression [35] is
$$x = a_0 + a_1 i + a_2 j + a_3 k,$$
where $a_0, a_1, a_2, a_3 \in \mathbb{R}$ are real numbers, and $i, j, k$ satisfy
$$i^2 = j^2 = k^2 = ijk = -1,$$
$$ij = -ji = k, \quad jk = -kj = i, \quad ki = -ik = j.$$
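The following minimal MATLAB sketch checks these defining relations numerically; quaternions are stored as 4-vectors $[a_0; a_1; a_2; a_3]$, and the handle qmul (a Hamilton-product helper) is an assumption of this sketch, not code from the paper.
% Hamilton product of two quaternions stored as 4-vectors [a0; a1; a2; a3]
qmul = @(p,q)[ p(1)*q(1) - p(2)*q(2) - p(3)*q(3) - p(4)*q(4); ...
               p(1)*q(2) + p(2)*q(1) + p(3)*q(4) - p(4)*q(3); ...
               p(1)*q(3) - p(2)*q(4) + p(3)*q(1) + p(4)*q(2); ...
               p(1)*q(4) + p(2)*q(3) - p(3)*q(2) + p(4)*q(1) ];
e = [1;0;0;0];  qi = [0;1;0;0];  qj = [0;0;1;0];  qk = [0;0;0;1];
% ij = k, ji = -k, jk = i, ki = j, and ijk = -1: every column below is zero
disp([ qmul(qi,qj)-qk, qmul(qj,qi)+qk, qmul(qj,qk)-qi, qmul(qk,qi)-qj, qmul(qmul(qi,qj),qk)+e ])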
The real part and imaginary part of quaternion x are expressed as Re { x } = a 0 and Im { x } = a 1 i + a 2 j + a 3 k , respectively. In particular, when a 0 = 0 , it is called a pure quaternion. The conjugation of x is defined as
$$\bar{x} = x^{*} = a_0 - a_1 i - a_2 j - a_3 k,$$
and the norm is defined as
$$|x| = \sqrt{x\bar{x}} = \sqrt{a_0^2 + a_1^2 + a_2^2 + a_3^2}.$$
When | x | = 1 , the quaternion x is called a unit quaternion.
Next, we review the definition of quaternion involution. On the quaternion field, Rodman [32] provided the definition of involution.
Definition 1 
(involution [32]). A map ϕ: H H is called an antiendomorphism if ϕ ( x y ) = ϕ ( y ) ϕ ( x ) and ϕ ( x + y ) = ϕ ( x ) + ϕ ( y ) for all x, y H . An antiendomorphism ϕ is called an involution if ϕ ( ϕ ( x ) ) = x for every x H .
Quaternion involutions are divided into standard and non-standard involutions [32]. In this paper, we only consider the non-standard involution, where $\phi$ must satisfy $\phi(a) = a^{\delta H} = -\delta a^H \delta$, with $\delta = u_1 i + u_2 j + u_3 k \in \mathbb{H}$ unit and purely imaginary, i.e., $u_1, u_2, u_3 \in \mathbb{R}$ and $u_1^2 + u_2^2 + u_3^2 = 1$.
The non-standard involution of a quaternion $x$ is defined as $x^{\delta H} = -\delta x^H \delta$, where $\delta = u_1 i + u_2 j + u_3 k$. In particular, when $\delta \in \{i, j, k\}$ [21], the non-standard involution is
$$x^{iH} = (x^i)^H = (x^H)^i = -i x^H i = a_0 - a_1 i + a_2 j + a_3 k, \quad \text{if } \delta = i;$$
$$x^{jH} = (x^j)^H = (x^H)^j = -j x^H j = a_0 + a_1 i - a_2 j + a_3 k, \quad \text{if } \delta = j;$$
$$x^{kH} = (x^k)^H = (x^H)^k = -k x^H k = a_0 + a_1 i + a_2 j - a_3 k, \quad \text{if } \delta = k.$$
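A minimal MATLAB check of the first of these formulas under the same 4-vector convention; the handles qmul and qconj are assumptions of this sketch rather than the authors' code.
qmul  = @(p,q)[ p(1)*q(1) - p(2)*q(2) - p(3)*q(3) - p(4)*q(4); ...
                p(1)*q(2) + p(2)*q(1) + p(3)*q(4) - p(4)*q(3); ...
                p(1)*q(3) - p(2)*q(4) + p(3)*q(1) + p(4)*q(2); ...
                p(1)*q(4) + p(2)*q(3) - p(3)*q(2) + p(4)*q(1) ];
qconj = @(q)[ q(1); -q(2); -q(3); -q(4) ];   % x^H for a scalar quaternion
x   = [2; -1; 3; 0.5];                       % x = 2 - i + 3j + 0.5k
di  = [0; 1; 0; 0];                          % delta = i
xiH = -qmul(qmul(di, qconj(x)), di);         % x^{iH} = -i * x^H * i
disp([xiH, [2; 1; 3; 0.5]])                  % both columns equal a0 - a1 i + a2 j + a3 k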
Furthermore, Rodman [32] presented the definition of ϕ -Hermitian matrices as follows.
Definition 2 
( ϕ -Hermitian [32]). A H n × n is said to be ϕ-Hermitian if A = A ϕ , where ϕ is a non-standard involution.

2.2. Complex Representation of Quaternion and Quaternion Matrix

In this section, we review the complex representation of a quaternion and quaternion matrix [36]. A quaternion x can be represented as
$$x = a_0 + a_1 i + a_2 j + a_3 k = (a_0 + a_1 i) + (a_2 + a_3 i) j.$$
Let $x_a = a_0 + a_1 i$ and $x_b = a_2 + a_3 i$; they are both complex numbers. Then the quaternion $x$ can be expressed as
$$x = x_a + x_b j,$$
which is the complex representation of quaternion x . Its matrix representation [35] is as follows:
$$\hat{x} = \begin{pmatrix} x_a & x_b \\ -\bar{x}_b & \bar{x}_a \end{pmatrix},$$
where $\hat{x}$ is a $2 \times 2$ complex matrix.
Similarly, a quaternion matrix $A \in \mathbb{H}^{n \times n}$ can be expressed as
$$A = A_0 + A_1 i + A_2 j + A_3 k = (A_0 + A_1 i) + (A_2 + A_3 i) j = A_a + A_b j,$$
where $A_0, A_1, A_2, A_3 \in \mathbb{R}^{n \times n}$, and $A_a = A_0 + A_1 i$, $A_b = A_2 + A_3 i$ are complex matrices. The above formula (5) is the complex representation of the quaternion matrix, whose matrix expression [35] is as follows:
$$\hat{A} = \begin{pmatrix} A_a & A_b \\ -\bar{A}_b & \bar{A}_a \end{pmatrix},$$
where $\hat{A}$ is a $2n \times 2n$ complex matrix.
Using complex representation, any quaternion matrix can be transformed from the quaternion field to the complex field; thus, many quaternion problems can be solved.
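The following minimal MATLAB sketch illustrates the scalar case ($n = 1$): it maps quaternions to their $2 \times 2$ complex representation and checks that quaternion multiplication corresponds to complex matrix multiplication. The handles qmul and hat are assumptions of this sketch.
qmul = @(p,q)[ p(1)*q(1) - p(2)*q(2) - p(3)*q(3) - p(4)*q(4); ...
               p(1)*q(2) + p(2)*q(1) + p(3)*q(4) - p(4)*q(3); ...
               p(1)*q(3) - p(2)*q(4) + p(3)*q(1) + p(4)*q(2); ...
               p(1)*q(4) + p(2)*q(3) - p(3)*q(2) + p(4)*q(1) ];
% complex representation x_hat = [x_a, x_b; -conj(x_b), conj(x_a)], x_a = a0 + a1*1i, x_b = a2 + a3*1i
hat = @(q)[  q(1)+1i*q(2),        q(3)+1i*q(4); ...
            -conj(q(3)+1i*q(4)),  conj(q(1)+1i*q(2)) ];
x = [0.5; -1; 2; 3];   y = [1; 2; -0.5; 0.7];
disp(norm( hat(qmul(x,y)) - hat(x)*hat(y) ))   % ~0: the representation preserves products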

2.3. Quaternion Improperness

In the quaternion field, properness is characterized by the degree of uncorrelatedness between the real part and the imaginary parts. The properness of a quaternion can be divided into two types: $\mathbb{H}$-properness and $C_\alpha$-properness [37]. The definitions are given as follows.
Definition 3 
($\mathbb{H}$-properness [37]). A quaternion random vector $x$ is $\mathbb{H}$-proper if it is uncorrelated with its vector involutions $x^i$, $x^j$, and $x^k$, so that
$$C_x^i = E\{x x^{iH}\} = 0, \quad C_x^j = E\{x x^{jH}\} = 0, \quad C_x^k = E\{x x^{kH}\} = 0.$$
Definition 4 
( C α -properness [37]). A quaternion random vector x is C α -improper with respect to α = i , j , or k if it is correlated only with the involution x α , so that all the complementary covariances except for C x α vanish.
According to the complex representation of a quaternion, two proper complex random vectors can produce a $C_\alpha$-proper quaternion vector. According to the Cayley–Dickson construction, for different $\alpha, \beta \in \{i, j, k\}$, a quaternion can be represented as $x = z_1 + z_2\beta$, where $z_1$ and $z_2$ are complex numbers generated by $\{1, \alpha\}$. Any quaternion vector $x$ is $C_\alpha$-improper if and only if the following two conditions are both satisfied [37] (a minimal numerical sketch is given after this list):
(1)
$z_1$ and $z_2$ are proper complex vectors, which is achieved when their real and imaginary parts are uncorrelated and have the same variance.
(2)
$z_1$ and $z_2$ have different variances.
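A minimal MATLAB sketch of this construction for the illustrative choice $\alpha = i$, $\beta = j$ (the variable names and the qmul helper are assumptions of this sketch): two independent proper complex signals with different variances are combined as $x = z_1 + z_2 j$, and the three scalar complementary covariances are estimated; only the $i$-complementary covariance is far from zero.
N = 1e5;
sig1 = 1.0;  sig2 = 2.0;                                   % different variances
z1 = sig1/sqrt(2) * (randn(1,N) + 1i*randn(1,N));          % proper complex signal
z2 = sig2/sqrt(2) * (randn(1,N) + 1i*randn(1,N));          % proper complex signal
x  = [real(z1); imag(z1); real(z2); imag(z2)];             % x = z1 + z2*j as a 4 x N array
% Hamilton product applied column-wise to 4 x N arrays
qmul = @(p,q)[ p(1,:).*q(1,:) - p(2,:).*q(2,:) - p(3,:).*q(3,:) - p(4,:).*q(4,:); ...
               p(1,:).*q(2,:) + p(2,:).*q(1,:) + p(3,:).*q(4,:) - p(4,:).*q(3,:); ...
               p(1,:).*q(3,:) - p(2,:).*q(4,:) + p(3,:).*q(1,:) + p(4,:).*q(2,:); ...
               p(1,:).*q(4,:) + p(2,:).*q(3,:) - p(3,:).*q(2,:) + p(4,:).*q(1,:) ];
ci = mean(qmul(x, [x(1,:); -x(2,:);  x(3,:);  x(4,:)]), 2);   % E{x x^{iH}}
cj = mean(qmul(x, [x(1,:);  x(2,:); -x(3,:);  x(4,:)]), 2);   % E{x x^{jH}}
ck = mean(qmul(x, [x(1,:);  x(2,:);  x(3,:); -x(4,:)]), 2);   % E{x x^{kH}}
disp([norm(ci), norm(cj), norm(ck)])   % roughly [|sig1^2 - sig2^2|, 0, 0]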

3. The Structure of Quaternion Covariance Matrices

3.1. Three Kinds of Structure of Quaternion Covariance Matrices

There are three kinds of quaternion covariance matrices: the pseudo-covariance matrix, the standard covariance matrix, and the complementary covariance matrix.
In recent years, the standard covariance matrix $C_X = E\{XX^H\}$ and the pseudo-covariance matrix $P_X = E\{XX^T\}$, which have simple expressions and clear physical meaning, have been used successfully in complex augmented statistics [31]. The pseudo-covariance matrix can explain the improperness of complex variables. Moreover, in the complex domain, the pseudo-covariance matrix is a symmetric matrix, which admits the Takagi factorization $P_X = Q\Sigma Q^T$, where $Q$ is a complex unitary matrix and $\Sigma$ is a real diagonal matrix whose diagonal elements are the singular values of $P_X$ [38]. Because quaternion multiplication is not commutative, the quaternion pseudo-covariance matrix is not symmetric. Given a random quaternion vector $X = [x_1, x_2, \ldots, x_n]^T$, the expression of the pseudo-covariance matrix is as follows:
$$P_X = E\{XX^T\} = \begin{pmatrix} E\{x_1 x_1\} & E\{x_1 x_2\} & \cdots & E\{x_1 x_n\} \\ E\{x_2 x_1\} & E\{x_2 x_2\} & \cdots & E\{x_2 x_n\} \\ \vdots & \vdots & \ddots & \vdots \\ E\{x_n x_1\} & E\{x_n x_2\} & \cdots & E\{x_n x_n\} \end{pmatrix}.$$
Quaternion involution plays an important role in widely linear processing (e.g., [22,39,40]), and it provides a useful basis for second-order statistics in the quaternion domain. Complete second-order statistics need to consider both the standard covariance matrix and the complementary covariance matrices. Their respective expressions are as follows [23].
Given a random quaternion vector X = [ x 1 , x 2 , , x n ] T , the expression of standard covariance matrix is as follows:
$$C_X = E\{XX^H\} = \begin{pmatrix} E\{x_1 x_1^*\} & E\{x_1 x_2^*\} & \cdots & E\{x_1 x_n^*\} \\ E\{x_2 x_1^*\} & E\{x_2 x_2^*\} & \cdots & E\{x_2 x_n^*\} \\ \vdots & \vdots & \ddots & \vdots \\ E\{x_n x_1^*\} & E\{x_n x_2^*\} & \cdots & E\{x_n x_n^*\} \end{pmatrix}.$$
The expression of complementary covariance matrix is as follows:
$$C_X^\eta = E\{XX^{\eta H}\} = \begin{pmatrix} E\{x_1 x_1^{\eta H}\} & E\{x_1 x_2^{\eta H}\} & \cdots & E\{x_1 x_n^{\eta H}\} \\ E\{x_2 x_1^{\eta H}\} & E\{x_2 x_2^{\eta H}\} & \cdots & E\{x_2 x_n^{\eta H}\} \\ \vdots & \vdots & \ddots & \vdots \\ E\{x_n x_1^{\eta H}\} & E\{x_n x_2^{\eta H}\} & \cdots & E\{x_n x_n^{\eta H}\} \end{pmatrix},$$
where $\eta \in \{i, j, k\}$, $x_n^{\eta H}$ denotes the conjugate of the involution $x_n^\eta$, and the off-diagonal elements satisfy $C_X^\eta(m,n) = (C_X^\eta(n,m))^{\eta H}$; that is, the $(m,n)$ entry of $C_X^\eta$ equals the $\eta$-conjugate of the $(n,m)$ entry.
It is worth noting that the standard covariance matrix $C_X$ is a Hermitian quaternion matrix, i.e., $C_X = C_X^H$, and the complementary covariance matrix is an $\eta$-Hermitian quaternion matrix, i.e., $C_X^\eta = (C_X^\eta)^{\eta H}$. In addition, the relationship among the three kinds of covariance matrices is as follows:
$$P_X = \frac{1}{2}\left(C_X^i + C_X^j + C_X^k - C_X\right).$$
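A short scalar computation behind this identity (a worked check added here, not quoted from the paper): summing the three involutions of a quaternion isolates its conjugate, and taking conjugates turns this into a statement about $x^{\eta H}$.
\begin{align*}
x^{i} + x^{j} + x^{k} &= (a_0 + a_1 i - a_2 j - a_3 k) + (a_0 - a_1 i + a_2 j - a_3 k) + (a_0 - a_1 i - a_2 j + a_3 k) \\
&= 3a_0 - a_1 i - a_2 j - a_3 k = x + 2\bar{x}, \\
\text{so}\quad x^{iH} + x^{jH} + x^{kH} - x^{H} &= \overline{x^{i} + x^{j} + x^{k} - x} = \overline{2\bar{x}} = 2x .
\end{align*}
Applying this entrywise to $X^H$ inside the expectations gives $C_X^i + C_X^j + C_X^k - C_X = E\{X \cdot 2X^T\} = 2P_X$.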

3.2. The Generalized Complementary Covariance Quaternion Matrix

The quaternion involution basis occupies an important place in second-order statistics. We have already given the complementary covariance matrix representation (8), where $\eta \in \{i, j, k\}$. According to the definition of non-standard involution given above, in this section we give the generalized complementary covariance quaternion matrix, whose representation is as follows.
Definition 5 
(Generalized complementary covariance quaternion matrix). The expression of the generalized complementary covariance quaternion matrix is
$$C_X^\phi = E\{XX^{\delta H}\} = \begin{pmatrix} E\{x_1 x_1^{\delta H}\} & E\{x_1 x_2^{\delta H}\} & \cdots & E\{x_1 x_n^{\delta H}\} \\ E\{x_2 x_1^{\delta H}\} & E\{x_2 x_2^{\delta H}\} & \cdots & E\{x_2 x_n^{\delta H}\} \\ \vdots & \vdots & \ddots & \vdots \\ E\{x_n x_1^{\delta H}\} & E\{x_n x_2^{\delta H}\} & \cdots & E\{x_n x_n^{\delta H}\} \end{pmatrix},$$
where $\delta = u_1 i + u_2 j + u_3 k \in \mathbb{H}$ is unit and purely imaginary.
Note that $C_X^\phi$ is a $\phi$-Hermitian matrix. It is obvious that when $\delta \in \{i, j, k\}$, $C_X^\phi = C_X^\eta$. Thus, the complementary covariance matrix is a special case of the generalized complementary covariance quaternion matrix.
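A minimal MATLAB check of this $\phi$-Hermitian property in the scalar case, with the arbitrary non-axis choice $\delta = \frac{\sqrt{3}}{3}(i+j+k)$; the 4-vector convention and the helper handles are assumptions of this sketch, not the authors' code.
qmul  = @(p,q)[ p(1)*q(1) - p(2)*q(2) - p(3)*q(3) - p(4)*q(4); ...
                p(1)*q(2) + p(2)*q(1) + p(3)*q(4) - p(4)*q(3); ...
                p(1)*q(3) - p(2)*q(4) + p(3)*q(1) + p(4)*q(2); ...
                p(1)*q(4) + p(2)*q(3) - p(3)*q(2) + p(4)*q(1) ];
qconj = @(q)[ q(1); -q(2); -q(3); -q(4) ];
d     = [0; 1; 1; 1] / sqrt(3);                  % delta, unit and purely imaginary
phiH  = @(q) -qmul(qmul(d, qconj(q)), d);        % q^{delta H} = -delta * q^H * delta
N = 2000;  X = randn(4, N);                      % N random scalar quaternion samples
c = zeros(4, 1);
for t = 1:N
    c = c + qmul(X(:,t), phiH(X(:,t)));          % accumulate x * x^{delta H}
end
c = c / N;                                       % sample generalized complementary covariance
disp(norm(c - phiH(c)))                          % ~0 up to round-off: c is phi-Hermitian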

4. Unitary Diagonalization of Standard Covariance and Generalized Complementary Covariance Quaternion Matrices

In this section, we consider three cases in which a Hermitian quaternion matrix and a $\phi$-Hermitian quaternion matrix are simultaneously unitarily diagonalized: both matrices are Hermitian, both are $\phi$-Hermitian, and one is Hermitian while the other is $\phi$-Hermitian.
First, we give some lemmas.
Lemma 1 
([30]). The standard covariance quaternion matrix ( C X ) is a Hermitian quaternion matrix. Its eigen-decomposition is C X = Q H Λ X Q , where Q is the unitary quaternion matrix, Λ X is a real-valued diagonal matrix, and the elements on the diagonal are the eigenvalues of C X .
Lemma 2. 
The generalized complementary covariance quaternion matrix ( C X ϕ ) is a ϕ-Hermitian quaternion matrix. Its eigen-decomposition is C X ϕ = U Λ U δ H , where U is the unitary quaternion matrix, Λ is a real-valued non-negative diagonal matrix, and the diagonal elements are singular values of C X ϕ . δ = u 1 i + u 2 j + u 3 k is unit and purely imaginary.
Proof. 
The singular value decomposition of the $\phi$-Hermitian quaternion matrix $C_X^\phi$ is [32] $C_X^\phi = USV^H$; thus $C_X^\phi (C_X^\phi)^H$ can be represented as
$$C_X^\phi (C_X^\phi)^H = US^2U^H.$$
Using the $\phi$-Hermitian property $C_X^\phi = (C_X^\phi)^{\delta H}$, we rewrite the matrix product $C_X^\phi (C_X^\phi)^H$ as
$$C_X^\phi (C_X^\phi)^H = V^\delta S^2 V^{\delta H}.$$
On the basis of Lemma 2.1 in [21], we can assume $U = V^\delta D$, where $D$ is a diagonal quaternion matrix. Therefore, we can obtain
$$C_X^\phi = USV^H = USV^HU^\delta U^{\delta H} = US(V^HVD^\delta)U^{\delta H} = U(SD^\delta)U^{\delta H} = U\Lambda U^{\delta H}.$$
   □
Lemma 3. 
If $A, B \in \mathbb{H}^{n \times n}$ are $\phi$-Hermitian matrices and $A$ is non-singular, then $A$ and $B$ are simultaneously diagonalizable if and only if $D = A^{-1}B$ is normal.
Proof. 
Assume there exists a unitary matrix $M$ such that
$$\Lambda_a = M^{\delta H}AM, \quad \Lambda_b = M^{\delta H}BM$$
are both diagonal, i.e., $A$ and $B$ are diagonalized simultaneously. We have
$$A^{-1} = M\Lambda_a^{-1}M^{\delta H}, \quad B = M^\delta \Lambda_b M^H.$$
Therefore,
$$A^{-1}B = M(\Lambda_a^{-1}\Lambda_b)M^H$$
is unitarily diagonalized. In other words, $D = A^{-1}B$ is normal.    □
Lemma 4 
([30]). If $A = \begin{pmatrix} B & C \\ 0 & 0 \end{pmatrix} \in \mathbb{H}^{n \times n}$, then $A$ is normal if and only if $B$ is normal and $C = 0$.
According to Lemmas 1–4, we give the conditions for the simultaneous diagonalization of the covariance matrices in the following theorem.
Theorem 1. 
Given matrices $A, B \in \mathbb{H}^{n \times n}$:
(a) 
If $A$ and $B$ are both Hermitian quaternion matrices, then there exists a unitary matrix $R \in \mathbb{H}^{n \times n}$ that makes $R^HAR$ and $R^HBR$ diagonal simultaneously if and only if $AB$ is a Hermitian quaternion matrix, i.e., $AB = BA$.
(b) 
If $A$ and $B$ are both $\phi$-Hermitian quaternion matrices, then there exists a unitary matrix $R \in \mathbb{H}^{n \times n}$ that makes $R^{\delta H}AR$ and $R^{\delta H}BR$ diagonal simultaneously if and only if $AB^\delta$ is normal, i.e., $AB^\delta B^{\delta H}A^H = B^{\delta H}A^HAB^\delta$, where $\delta = u_1 i + u_2 j + u_3 k$ is unit and purely imaginary.
(c) 
If $A$ is a Hermitian quaternion matrix and $B$ is a $\phi$-Hermitian quaternion matrix, then there exists a unitary matrix $R \in \mathbb{H}^{n \times n}$ that makes $R^HAR$ and $R^{\delta H}BR$ diagonal simultaneously if and only if $BA$ is a $\phi$-Hermitian quaternion matrix, i.e., $BA = (BA)^{\delta H} = A^{\delta H}B^{\delta H} = A^\delta B$, where $\delta = u_1 i + u_2 j + u_3 k$ is unit and purely imaginary.
Proof. 
$(a)$. Because $A$ and $B$ are both Hermitian quaternion matrices, matrix $A$ can be decomposed as $A = US_aU^H$. Let $D = S_a^{-1/2}U^H$; therefore
$$DAD^H = I, \quad DBD^H = W\Lambda_bW^H.$$
We consider $R = W^HD$; then
$$RAR^H = W^HDAD^HW = I = \Lambda_a,$$
$$RBR^H = W^HDBD^HW = \Lambda_b.$$
Therefore, there exists a unitary quaternion matrix R that makes R H AR and R H BR able to be diagonalized simultaneously. Next, we will prove the sufficiency and necessity of diagonalization, respectively.
First, we prove the sufficiency:
Assuming $R^HAR = \Lambda_a$ and $R^HBR = \Lambda_b$, then $A = R\Lambda_aR^H$ and $B = R\Lambda_bR^H$. Hence, we have
$$AB = R\Lambda_aR^HR\Lambda_bR^H = R\Lambda_bR^HR\Lambda_aR^H = BA.$$
Then, we prove the necessity:
Because $A$ and $B$ are both Hermitian quaternion matrices, i.e., $A = A^H$ and $B = B^H$, according to Lemma 1 there exist unitary quaternion matrices $U$ and $V$ such that $A = U^H\Lambda_aU$ and $B = V^H\Lambda_bV$. On the basis of $AB = BA$, we can obtain
$$AB = A^HB^H = (BA)^H = BA. \qquad (11)$$
Substituting $A = U^H\Lambda_aU$ and $B = V^H\Lambda_bV$ into (11), we have
$$BA = V^H\Lambda_bVU^H\Lambda_aU = U^H\Lambda_aUV^H\Lambda_bV = (BA)^H.$$
Let $U = V := R$; then we obtain
$$\Lambda_a = R^HAR, \quad \Lambda_b = R^HBR.$$
Therefore, there exists a unitary matrix that makes A and B able to be diagonalized simultaneously.
$(b)$. If $A$ and $B$ are both $\phi$-Hermitian quaternion matrices and $AB^\delta$ is normal, the singular value decomposition of matrix $A$ is $A = USV^H$. $A$ can also be decomposed as $A = QSQ^{\delta H}$, where $Q = U(D^\delta)^{1/2}$ and $U = V^\delta D$. Therefore, there exists $R = Q^H$ that makes $A$ and $B$ able to be diagonalized simultaneously. Next, we prove the sufficiency and necessity of diagonalization, respectively.
First, we prove the sufficiency:
Assuming $R^{\delta H}AR = \Lambda_a$ and $R^{\delta H}BR = \Lambda_b$, then $A = R^\delta\Lambda_aR^H$ and $B = R^\delta\Lambda_bR^H$. Hence, we have
$$AB^\delta = R^\delta\Lambda_aR^HR\Lambda_bR^{\delta H} = R^\delta\Lambda_a\Lambda_bR^{\delta H},$$
so it is easy to see that $AB^\delta$ is normal.
Then, we prove the necessity:
We consider the following two cases:
$(i)$ Assume $AB^\delta$ is a normal matrix and $A$ is non-singular. Since $AB^\delta = (A^{-1})^{-1}B^\delta$ is normal, on the basis of Lemma 3, $A^{-1}$ and $B^\delta$ can be diagonalized simultaneously.
Because $A$ and $B$ are both $\phi$-Hermitian quaternion matrices, we have $A^{-1} = R\Lambda_a^{-1}R^{\delta H}$ and $B^\delta = R\Lambda_bR^{\delta H}$. Hence,
$$A = R^\delta\Lambda_aR^H = R^\delta\Lambda_a(R^\delta)^{\delta H},$$
$$B = R^\delta\Lambda_bR^H = R^\delta\Lambda_b(R^\delta)^{\delta H}.$$
In other words, A and B can be diagonalized simultaneously.
$(ii)$ Assume $AB^\delta$ is a normal matrix and $A$ is singular. Then there exists a unitary matrix $R \in \mathbb{H}^{n \times n}$ that makes $R^{\delta H}AR$ a diagonal matrix, and the columns of $R$ can be rearranged so that
$$R^{\delta H}AR = \begin{pmatrix} \Sigma & 0 \\ 0 & 0 \end{pmatrix}, \quad R^{\delta H}BR = \begin{pmatrix} B_{11} & B_{12} \\ B_{12}^{\delta H} & B_{22} \end{pmatrix},$$
where $\Sigma$ is a block diagonal matrix and the diagonal elements of $\Sigma$, $B_{11}$, and $B_{22}$ have modulus 1. We can obtain
$$R^{\delta H}AR\,(R^{\delta H}BR)^\delta = R^{\delta H}AB^\delta R^\delta = \begin{pmatrix} \Sigma B_{11}^\delta & \Sigma B_{12}^\delta \\ 0 & 0 \end{pmatrix}.$$
Because $AB^\delta$ is a normal matrix, according to Lemma 4, we have $\Sigma B_{12}^\delta = 0$. Since $\Sigma$ is non-singular, $B_{12} = 0$, i.e.,
$$R^{\delta H}AR = \begin{pmatrix} \Sigma & 0 \\ 0 & 0 \end{pmatrix}, \quad R^{\delta H}BR = \begin{pmatrix} B_{11} & 0 \\ 0 & B_{22} \end{pmatrix}.$$
Therefore, the unitary matrix $R$ that diagonalizes $A$ also diagonalizes $B$.
$(c)$. Because $A = US_aU^H$ is a Hermitian quaternion matrix and $B$ is a $\phi$-Hermitian quaternion matrix, let $D = S_a^{-1/2}U^H$; we have
$$DAD^H = I, \quad DBD^{\delta H} = W\Lambda_bW^{\delta H}.$$
We consider $R = W^HD$, so
$$RAR^H = W^HDAD^HW = I,$$
$$RBR^{\delta H} = W^HDBD^{\delta H}W^\delta = \Lambda_b.$$
Hence, there exists a unitary matrix $R \in \mathbb{H}^{n \times n}$ that makes $R^HAR$ and $R^{\delta H}BR$ able to be diagonalized simultaneously. Next, we prove the sufficiency and necessity of diagonalization, respectively.
First, we prove the sufficiency:
If $R^HAR = \Lambda_a$ and $R^{\delta H}BR = \Lambda_b$, we obtain $A = R\Lambda_aR^H$ and $B = R^\delta\Lambda_bR^H$. Therefore,
$$BA = R^\delta\Lambda_bR^HR\Lambda_aR^H = R^\delta\Lambda_aR^{\delta H}R^\delta\Lambda_bR^H = A^\delta B,$$
where $\delta = u_1 i + u_2 j + u_3 k \in \mathbb{H}$ is unit and purely imaginary.
Then, we prove the necessity:
Because $A$ and $B$ are both $\phi$-Hermitian matrices, i.e., $A = A^\phi$ and $B = B^\phi$, according to Lemma 2 there exist unitary matrices $U$ and $V$ such that $A = U^{\delta H}\Lambda_aU$ and $B = V^{\delta H}\Lambda_bV$. We have
$$AB = A^{\delta H}B^{\delta H} = (BA)^{\delta H} = BA. \qquad (12)$$
Substituting $A = U^{\delta H}\Lambda_aU$ and $B = V^{\delta H}\Lambda_bV$ into (12), we can obtain
$$BA = V^{\delta H}\Lambda_bVU^{\delta H}\Lambda_aU = U^{\delta H}\Lambda_aUV^{\delta H}\Lambda_bV = (BA)^{\delta H}.$$
Let $U = V := R$; we can obtain
$$\Lambda_a = R^{\delta H}AR, \quad \Lambda_b = R^{\delta H}BR,$$
where δ = u 1 i + u 2 j + u 3 k is unit and purely imaginary. Therefore, the matrices A and B can be diagonalized simultaneously.    □
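As a small numerical illustration of condition (a), the following sketch works in the complex special case (quaternions whose $j$ and $k$ parts vanish), where Hermitian and unitary have their usual complex meanings; it builds a pair of commuting Hermitian matrices and checks that the eigenvector matrix of one diagonalizes both. This is an added illustration under that simplifying assumption, not the authors' experiment.
n = 4;
Q = orth(randn(n) + 1i*randn(n));              % random unitary matrix
A = Q*diag(randn(n,1))*Q';  A = (A + A')/2;    % Hermitian, almost surely distinct eigenvalues
B = A^2 + 3*A;                                 % Hermitian and AB = BA
[R, ~] = eig(A);                               % unitary eigenvector matrix of the Hermitian A
offdiag = @(M) norm(M - diag(diag(M)));
disp([offdiag(R'*A*R), offdiag(R'*B*R)])       % both ~0: simultaneous diagonalization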

5. Generalized Quaternion Unitary Transform

5.1. The Algorithm of Generalized Quaternion Unitary Transform

Based on Lemma 1, the Hermitian quaternion matrix $C_X$ can be decomposed as $C_X = U\Lambda_XU^H$, where $U$ is a unitary quaternion matrix and $\Lambda_X$ is a real diagonal matrix. We define the whitening transformation $D = \Lambda_X^{-1/2}U^H$ and let $s = DX$ denote the whitened data. We can obtain the covariance matrix of $s$:
$$C_s = DC_XD^H = \Lambda_X^{-1/2}U^H\,U\Lambda_XU^H\,U\Lambda_X^{-1/2} = I.$$
As a consequence, the generalized complementary covariance of $s$ can be decomposed as $C_s^\phi = W\Lambda_\delta W^{\delta H}$, where $W$ is a unitary quaternion matrix and $\Lambda_\delta$ is a real diagonal matrix. We define the non-singular uncorrelating transformation $Q = W^HD$ and let $y = QX$. Then, $C_y$ and $C_y^\phi$ can be diagonalized simultaneously:
$$C_y = W^HC_sW = W^HIW = I,$$
$$C_y^\phi = W^HC_s^\phi W^\delta = W^HW\Lambda_\delta W^{\delta H}W^\delta = \Lambda_\delta.$$
We call the transformation $Q$ the generalized quaternion uncorrelated transformation.
Using the theory of simultaneous diagonalization of covariance matrices from Section 4, we can obtain the steps for solving the generalized quaternion uncorrelated transformation matrix. First, the covariance matrix is eigen-decomposed as $C_x = E\{xx^H\} = U\Lambda_xU^H$. Then, we calculate the whitening matrix $D = \Lambda_x^{-1/2}U^H$ and obtain the whitened data $s = DX$. Next, we calculate the generalized complementary covariance matrix of $s$, i.e., $C_s^\phi = E\{ss^{\delta H}\}$. It is obvious that $C_s^\phi$ is a $\phi$-Hermitian quaternion matrix, and we can calculate the eigen-decomposition of $C_s^\phi$: $C_s^\phi = W\Lambda_\delta W^{\delta H}$, where $W$ is a unitary quaternion matrix and $\Lambda_\delta$ is a real diagonal matrix. Finally, we obtain the generalized quaternion uncorrelated transformation $Q = W^HD$.
The following Algorithm 1 shows the specific steps of the MATLAB implementation of the generalized quaternion unitary transform algorithm for the standard covariance matrix and generalized complementary covariance matrix.
Algorithm 1: The specific steps of the MATLAB implementation of the generalized quaternion unitary transform algorithm for the standard covariance matrix and the generalized complementary covariance matrix
function [R, t] = GQUT(s, delta)
% Inputs:  s     - quaternion data matrix (signals in rows, samples in columns)
%          delta - unit purely imaginary quaternion, delta = u1*i + u2*j + u3*k
if size(s,1) > size(s,2)
    s = s.';
end
n = length(s);
% Generate the covariance matrix of s
Cs = (s*s')/n;
% Use singular value decomposition to factorize the standard covariance matrix
[U, V] = svd(Cs);
% Obtain the whitening matrix D and the whitened data q
D = diag(diag(V).^(-1/2))*U';
q = D*s;
% Calculate the phi complementary covariance matrix of the whitened data q
% The 'invijk' function computes the delta-involution, with delta = u1*i + u2*j + u3*k
Cphi = (q*invijk(q, delta)')/n;
% Analyze the complementary covariance matrix
% Select the unit imaginary number with the greatest correlation
c = norm(Cphi - diag(diag(Cphi)));
[~, g] = max(c(:));
% For the phi complementary covariance matrix, we use the Takagi-type factorization
% Cphi = W*Lambda*invijk(W, delta)'
[U2, S2, V2] = svd(Cphi);
P = invijk(V2, delta)'*U2;
W = U2*diag(sqrt(diag(invijk(P, delta))));
% Calculate the generalized quaternion unitary matrix R and the decorrelated data t
R = W'*D;
t = R*s;
end
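Algorithm 1 calls a helper invijk that is not listed in the paper. A minimal sketch of one possible implementation is given below, assuming a quaternion matrix type that supports scalar multiplication (for example, the Quaternion Toolbox for MATLAB); it applies the entrywise $\delta$-involution $X^\delta = -\delta X \delta$.
function Y = invijk(X, delta)
% Entrywise delta-involution of a quaternion matrix X:
% each entry x is mapped to x^delta = -delta * x * delta,
% where delta is a unit purely imaginary quaternion.
Y = -delta * X * delta;
end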

5.2. Testing the Performance of the Generalized Quaternion Unitary Transform by Numerical Simulation

In this section, we test the performance of the generalized quaternion unitary transform for decorrelating improper quaternion signals. First, we use the complex representation of a quaternion to generate a $C_k$-improper quaternion signal $s$. Then, we randomly generate a $3 \times 3$ mixing matrix $A$ whose entries obey the standard normal distribution, and form the mixed $C_k$-improper signal $x = As$. Each row of $x$ is denoted $x_1$, $x_2$, $x_3$. The 3D scatter diagram in Figure 1 represents the degree of correlation of the mixed signals; it can be seen that the signals are highly correlated.
We use the generalized quaternion unitary transform to decorrelate the mixed signals. We know that a $\phi$-Hermitian quaternion matrix $A$ satisfies $A = -\delta A^H\delta$, where $\delta = u_1 i + u_2 j + u_3 k$. In this section, we consider three cases: $\delta \in \{i, j, k\}$, $\delta \in \{\frac{\sqrt{2}}{2}(i+j), \frac{\sqrt{2}}{2}(j+k), \frac{\sqrt{2}}{2}(i+k)\}$, and $\delta = \frac{\sqrt{3}}{3}(i+j+k)$.
We define the non-standard involution $\phi(a) = -\delta a^H\delta$, where $a$ is any quaternion. For the first case, the quaternion matrix $A$ is an $\eta$-Hermitian quaternion matrix. In the second case, $\delta \in \{\frac{\sqrt{2}}{2}(i+j), \frac{\sqrt{2}}{2}(j+k), \frac{\sqrt{2}}{2}(i+k)\}$ [41], we can obtain:
$$a^{\frac{\sqrt{2}}{2}(i+j)} = a_0 - a_2 i - a_1 j + a_3 k, \quad \text{if } \delta = \tfrac{\sqrt{2}}{2}(i+j);$$
$$a^{\frac{\sqrt{2}}{2}(i+k)} = a_0 - a_3 i + a_2 j - a_1 k, \quad \text{if } \delta = \tfrac{\sqrt{2}}{2}(i+k);$$
$$a^{\frac{\sqrt{2}}{2}(j+k)} = a_0 + a_1 i - a_3 j - a_2 k, \quad \text{if } \delta = \tfrac{\sqrt{2}}{2}(j+k).$$
In the third case, $\delta = \frac{\sqrt{3}}{3}(i+j+k)$, we can obtain
$$a^{\frac{\sqrt{3}}{3}(i+j+k)} = a_0 - \tfrac{1}{3}(-a_1 + 2a_2 + 2a_3)i - \tfrac{1}{3}(2a_1 - a_2 + 2a_3)j - \tfrac{1}{3}(2a_1 + 2a_2 - a_3)k.$$
We use the generalized quaternion unitary transform algorithm to de-mix the signal. For the above three cases, we obtain the 3D scatter diagrams after decorrelation, which are shown in Figure 2, Figure 3 and Figure 4. Compared with Figure 1, it can be clearly seen that the signal has been de-mixed and the degree of correlation has decreased.
Next, we compare the CPU time for the above cases. We take 150, 300, 500, 650, 800, and 1000 samples, respectively, calculate the CPU time, and draw the scatter diagram shown in Figure 5. It can be seen from the figure that case 3, $\delta = \frac{\sqrt{3}}{3}(i+j+k)$, requires the least CPU time. Case 1, $\delta \in \{i, j, k\}$, requires more time than the second case, $\delta \in \{\frac{\sqrt{2}}{2}(i+j), \frac{\sqrt{2}}{2}(j+k), \frac{\sqrt{2}}{2}(i+k)\}$.

6. Conclusions

We investigated the generalized complementary covariance quaternion matrix. Afterwards, we presented the conditions of unitary diagonalization of standard covariance and generalized complementary covariance quaternion matrices. Furthermore, we investigated the generalized quaternion unitary transform algorithm and tested its performance under three different conditions by numerical simulation. Finally, we compared the CPU time required by the algorithm in three cases. In the future, we intend to generalize this result to the field of quaternion tensors for the separation of high-dimensional signals.

Author Contributions

Conceptualization, Z.-H.H. and X.-N.Z.; methodology, X.-N.Z.; software, X.-N.Z.; validation, X.-N.Z.; formal analysis, Z.-H.H. and X.-N.Z.; investigation, X.C.; resources, Z.-H.H.; data curation, Z.-H.H. and X.-N.Z.; writing—original draft preparation, Z.-H.H. and X.-N.Z.; writing—review and editing, Z.-H.H. and X.-N.Z.; visualization, Z.-H.H. and X.-N.Z.; supervision, Z.-H.H. and X.-N.Z.; project administration, Z.-H.H. and X.-N.Z.; funding acquisition, Z.-H.H. and X.-N.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the National Natural Science Foundation of China (Grant no. 12271338 and 12371023).

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bihan, N.L.; Sangwine, S.J. Quaternion principal component analysis of color images. In Proceedings of the 2003 International Conference on Image Processing (Cat. No.03CH37429), Barcelona, Spain, 14–17 September 2003. [Google Scholar]
2. Kyrchei, I. Cramer’s rules of η-(skew-)Hermitian solutions to the quaternion Sylvester-type matrix equations. Adv. Appl. Clifford Algebr. 2019, 29, 56. [Google Scholar] [CrossRef]
  3. Kyrchei, I. Cramer’s rules for Sylvester quaternion matrix equation and its special cases. Adv. Appl. Clifford Algebras 2018, 28, 90. [Google Scholar] [CrossRef]
  4. Kyrchei, I. Explicit representation formulas for the minimum norm least squares solutions of some quaternion matrix equations. Linear Algebra Appl. 2018, 438, 136–152. [Google Scholar] [CrossRef]
  5. Rui, Z.; Wu, J.S.; Shao, Z.H.; Chen, Y.; Chen, B.J.; Senhadji, L.; Shu, H.Z. Color image classification via quaternion principal component analysis network. Neurocomputing 2016, 216, 416–428. [Google Scholar]
  6. Xiao, X.L.; Zhou, Y.C. Two-dimensional quaternion pca and sparse pca. IEEE Trans. Neur. Net. Lear. 2018, 30, 2028–2042. [Google Scholar] [CrossRef] [PubMed]
  7. Schreier, P.J.; Scharf, L.L. Statistical Signal Processing of Complex-Valued Data: The Theory of Improper and Noncircular Signals; Cambridge University Press: Cambridge, UK, 2010. [Google Scholar]
  8. Tao, J.; Chang, W. Adaptive beamforming based on complex quaternion processes. Math. Probl. Eng. 2014, 2014, 291249. [Google Scholar] [CrossRef]
  9. Yeredor, A. Performance analysis of the strong uncorrelating transformation in blind separation of complex-valued sources. IEEE Trans. Signal Process. 2012, 60, 478–483. [Google Scholar] [CrossRef]
10. Mandic, D.P.; Goh, V.S.L. Complex Valued Nonlinear Adaptive Filters: Noncircularity, Widely Linear and Neural Models; Wiley-Blackwell: Hoboken, NJ, USA, 2009. [Google Scholar]
  11. Horn, R.A.; Johnson, C.R. Matrix Analysis; Cambridge University Press: Cambridge, UK, 1990. [Google Scholar]
  12. De Lathauwer, L.; Moor, B.D. On the blind separation of non-circular sources. In Proceedings of the European Signal Processing Conference, Toulouse, France, 3–6 September 2002; pp. 1–4. [Google Scholar]
13. Eriksson, J.; Koivunen, V. Complex random vectors and ICA models: Identifiability, uniqueness, and separability. IEEE Trans. Inf. Theory 2006, 52, 1017–1029. [Google Scholar] [CrossRef]
  14. Ollila, E.; Koivunen, V. Complex ICA using generalized uncorrelating transform. Signal Process. 2009, 89, 365–377. [Google Scholar] [CrossRef]
  15. Took, C.C.; Douglas, S.; Mandic, D.P. On approximate diagonalization of correlation matrices in widely linear signal processing. IEEE Trans. Signal Process. 2012, 60, 1469–1473. [Google Scholar] [CrossRef]
  16. Shen, H.; Kleinsteuber, M. Complex Blind Source Separation via Simultaneous Strong Uncorrelating Transform; Latent Variable Analysis & Signal Separation; Springer: Berlin/Heidelberg, Germany, 2010; pp. 287–294. [Google Scholar]
  17. Okopal, G.; Wisdom, S.; Atlas, L. Speech analysis with the strong uncorrelating transform. IEEE/ACM Trans. Audio Speech Lang. Process. 2015, 23, 1858–1868. [Google Scholar] [CrossRef]
  18. Chen, B.; Shu, H.; Chen, G.; Sun, X.; Coatrieux, J.L. Color image analysis by quaternion-type moments. J. Math. Imaging Vis. 2015, 51, 124–144. [Google Scholar] [CrossRef]
  19. Jia, Z.G.; Ng, M.K.; Song, G.J. Robust quaternion matrix completion with applications to image inpainting. Numer. Linear Algebra Appl. 2019, 26, e2245. [Google Scholar] [CrossRef]
  20. Ling, S.T.; Li, Y.D.; Yang, B.; Jia, Z.G. Joint diagonalization for a pair of hermitian quaternion matrices and applications to color face recognition. Signal Process. 2022, 198, 108560. [Google Scholar] [CrossRef]
  21. Took, C.C.; Mandic, D.P.; Zhang, F. On the unitary diagonalisation of a special class of quaternion matrices. Appl. Math. Lett. 2011, 24, 1806–1809. [Google Scholar]
  22. Took, C.C.; Mandic, D.P. A quaternion widely linear adaptive filter. IEEE Trans. Signal Process. 2010, 58, 4427–4431. [Google Scholar] [CrossRef]
  23. Took, C.C.; Mandic, D.P. Augmented second-order statistics of quaternion random signals. Signal Process. 2011, 91, 214–224. [Google Scholar] [CrossRef]
  24. Ell, T.A.; Bihan, N.L.; Sangwine, S.J. Quaternion Fourier Transforms for Signal and Image Processing; John Wiley & Sons: Hoboken, NJ, USA, 2014. [Google Scholar]
25. Minemoto, T.; Isokawa, T.; Nishimura, H.; Matsui, N. Feed forward neural network with random quaternionic neurons. Signal Process. 2017, 136, 59–68. [Google Scholar] [CrossRef]
  26. Xia, Y.; Jahanchahi, C.; Mandic, D.P. Quaternion-valued echo state networks. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 663–673. [Google Scholar]
  27. Via, J.; Palomar, D.; Vielva, L.; Santamaria, I. Quaternion ICA from second-order statistics. IEEE Trans. Signal Process. 2011, 59, 1586–1600. [Google Scholar] [CrossRef]
  28. Chen, B.; Liu, Q.; Li, X.; Shu, H. Removing gaussian noise for color images by quaternion representation and optimisation of weights in non-local means filter. IET Image Process. 2014, 8, 591–600. [Google Scholar] [CrossRef]
  29. Abdur, R.; Kyrchei, I.; Ilyas, A.; Muhammad, A.; Abdul, S. Explicit formulas and determinantal representation for η-skew-Hermitian solution to a system of quaternion matrix equations. Filomat 2020, 34, 2601–2627. [Google Scholar]
  30. Enshaeifar, S.; Took, C.C.; Sanei, S.; Mandic, D.P. Novel quaternion matrix factorisations. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016; pp. 3946–3950. [Google Scholar]
  31. Min, X.; Enshaeifar, S.; Stott, A.E.; Took, C.C.; Xia, Y.; Kanna, S.; Mandic, D.P. Simultaneous diagonalisation of the covariance and complementary covariance matrices in quaternion widely linear signal processing. Signal Process. 2018, 148, 193–204. [Google Scholar]
32. Rodman, L. Topics in Quaternion Linear Algebra; Princeton University Press: Princeton, NJ, USA, 2014. [Google Scholar]
  33. Aghamollaei, G.; Rahjoo, M. On quaternionic numerical ranges with respect to nonstandard involutions. Linear Algebra Appl. 2017, 540, 11–25. [Google Scholar] [CrossRef]
  34. He, Z.H. Some quaternion matrix equations involving ϕ-Hermicity. Filomat 2019, 33, 5097–5112. [Google Scholar] [CrossRef]
  35. Zhang, F. Quaternions and matrices of quaternions. Linear Algebra Appl. 1997, 251, 21–57. [Google Scholar] [CrossRef]
  36. Jacobson, N. Basic Algebra I; W.H Freeman: New York, NY, USA, 1974. [Google Scholar]
37. Amblard, P.O.; Bihan, N.L. On properness of quaternion valued random variables. In Proceedings of the IMA International Conference on Mathematics in Signal Processing, Cirencester, UK, 14–16 December 2004; pp. 23–26. [Google Scholar]
  38. Horn, R.A.; Zhang, F.Z. A generalization of the complex Autonne-Takagi factorization to quaternion matrices. Linear Multilinear Algebra 2012, 60, 1239–1244. [Google Scholar] [CrossRef]
  39. Xia, Y.; Jahanchahi, C.; Nitta, T.; Mandic, D.P. Performance bounds of quaternion estimators. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 3287–3292. [Google Scholar]
  40. Via, J.; Ramirez, D.; Santamaria, I. Properness and widely linear processing of quaternion random vectors. IEEE Trans. Inform. Theory 2010, 56, 3502–3515. [Google Scholar] [CrossRef]
  41. He, Z.H.; Liu, J.; Tam, T.Y. The general ϕ-Hermitian solution to mixed pairs of quaternion matrix Sylvester equations. Electron. J. Linear Algebra 2017, 32, 475–499. [Google Scholar] [CrossRef]
Figure 1. 3D scatter diagram of the original signal.
Figure 2. Generalized quaternion unitary transform algorithm to de-mix the signal ($\delta \in \{i, j, k\}$).
Figure 3. Generalized quaternion unitary transform algorithm to de-mix the signal ($\delta \in \{\frac{\sqrt{2}}{2}(i+j), \frac{\sqrt{2}}{2}(j+k), \frac{\sqrt{2}}{2}(i+k)\}$).
Figure 4. Generalized quaternion unitary transform algorithm to de-mix the signal ($\delta = \frac{\sqrt{3}}{3}(i+j+k)$).
Figure 5. CPU time.