Article

A New Method of Solving Special Solutions of Quaternion Generalized Lyapunov Matrix Equation

Zhihong Liu, Ying Li, Xueling Fan and Wenxv Ding
1 College of Mathematical Sciences, Liaocheng University, Liaocheng 252000, China
2 Research Center of Semi-Tensor Product of Matrices: Theory and Applications, Liaocheng 252000, China
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(6), 1120; https://doi.org/10.3390/sym14061120
Submission received: 20 April 2022 / Revised: 21 May 2022 / Accepted: 26 May 2022 / Published: 29 May 2022
(This article belongs to the Special Issue Tensors and Matrices in Symmetry with Applications)

Abstract: In this paper, we study the bisymmetric and skew bisymmetric solutions of the quaternion generalized Lyapunov equation. With the help of the semi-tensor product of matrices, some new conclusions on the row and column expansion rules of the matrix product over quaternion matrices are proposed and applied to the calculation of the quaternion matrix equation. Using the H-representation method, the independent elements are extracted according to the structural characteristics of bisymmetric and skew bisymmetric matrices, so as to simplify the computation. Finally, the method is compared with the real vector representation method for quaternion matrix equations to illustrate its effectiveness and superiority.

1. Introduction

The notations and definitions used in this paper are summarized as follows. Let $\mathbb{R}$, $\mathbb{C}$ and $\mathbb{Q}$ denote the sets of real numbers, complex numbers and quaternions, respectively. $\mathbb{R}^n$ denotes the set of all real column vectors of order $n$. $\mathbb{R}^{m\times n}$, $\mathbb{C}^{m\times n}$ and $\mathbb{Q}^{m\times n}$ denote the sets of all $m\times n$ real, complex and quaternion matrices, respectively. $\mathrm{BR}^{n\times n}$, $\mathrm{SBR}^{n\times n}$, $\mathrm{BQ}^{n\times n}$ and $\mathrm{SBQ}^{n\times n}$ denote the sets of all $n\times n$ real bisymmetric, real skew bisymmetric, quaternion bisymmetric and quaternion skew bisymmetric matrices, respectively. $A^T$, $A^H$ and $A^{\dagger}$ denote the transpose, conjugate transpose and Moore–Penrose inverse of a matrix $A$, respectively. $I_n$ denotes the identity matrix of order $n$. For $A = (a_{ij}) \in \mathbb{Q}^{m\times n}$, $V_c(A)$ is the column stacking form of $A$, i.e., $V_c(A) = (a_{11}, \ldots, a_{m1}, a_{12}, \ldots, a_{m2}, \ldots, a_{mn})^T$; $V_r(A)$ is the row stacking form of $A$, i.e., $V_r(A) = (a_{11}, \ldots, a_{1n}, a_{21}, \ldots, a_{2n}, \ldots, a_{mn})^T$. $\otimes$ denotes the Kronecker product of matrices, and $\|\cdot\|$ denotes the Frobenius norm of a matrix or the Euclidean norm of a vector.
With the development of science and technology, the Lyapunov equation has been widely used in engineering, and its study has therefore attracted increasing attention. The equation
$$AX + XA^T + \sum_{i=1}^{k} C_i X C_i^T = B \qquad (1)$$
is called the generalized Lyapunov equation, which plays an important role in studying the mean square stability, exact observability and $H_2/H_\infty$ control of linear stochastic systems [1,2,3,4]. In particular, Equation (1) is closely related to the linear Itô-type system
$$dx(t) = Ax(t)\,dt + \sum_{i=1}^{k} C_i x(t)\, d\omega_i(t), \qquad x(0) = x_0 \in \mathbb{R}^n,$$
where $A, C_1, \ldots, C_k$ are real constant matrices of suitable dimensions, $x \in \mathbb{R}^n$ is the system state, $x_0 \in \mathbb{R}^n$ is a deterministic initial state, and $\omega_i(t)$, $i = 1, \ldots, k$, are independent standard one-dimensional Wiener processes defined on a filtered probability space [5].
Matrix theory is recognized as a branch of mathematics originating in China. Katz, a professor at the University of the District of Columbia, once said, “the idea of a matrix has a long history, dating at least from its use by Chinese scholars of the Han period for solving systems of linear equations”. However, traditional matrix theory is constrained by the requirement that the dimensions of the factors match, which greatly limits the applicability of matrix methods. The semi-tensor product of matrices is a development of traditional matrix theory that overcomes this dimensional limitation, and it is therefore also called the matrix theory across dimensions. It is now widely used in biological systems and life sciences [6,7,8], game theory [9,10,11] and image encryption [12,13]. In addition, some scholars have applied the semi-tensor product of matrices to the solution of matrix equations. For example, Li studied the least squares solutions of matrix equations under the semi-tensor product of matrices [14], Ding studied the triangular Toeplitz solutions of complex linear systems using the semi-tensor product of matrices [15], and Wang studied the least squares Hermitian solutions of a quaternion matrix equation using the semi-tensor product of matrices [16].
The matrix equation is one of the important research topics in numerical algebra, and research on quaternion matrix equations has also received wide attention. For example, Kyrchei studied the least-norm general solution to a system of quaternion matrix equations and its determinantal representations [17], as well as Cramer's rules for the Sylvester quaternion matrix equation and its special cases [18]. Liu and Wang studied the solvability conditions and the formula of the general solution to a Sylvester-like quaternion matrix equation [19]. Mehany and Wang investigated the solvability conditions and the general solution of three symmetrical systems of coupled Sylvester-like quaternion matrix equations [20]. Jiang and Ling studied closed-form solutions of the quaternion matrix equation $A\widetilde{X} - XB = C$ in explicit form [21]. Wang studied the bisymmetric and centrosymmetric solutions of quaternion matrix equations [22], and Zhang studied the least squares bihermitian and skew bihermitian solutions of quaternion matrix equations [23].
Because special matrices have particular structures, their structural characteristics can be exploited to simplify the computation when solving equations. The H-representation, proposed by Zhang [5], is a systematic method for extracting the independent elements of a special matrix. With its help, the number of elements involved in the computation can be reduced, thereby simplifying the solution process. At present, the H-representation has found preliminary applications in the field of systems and control [24,25]. For example, Zhao studied the moment stability of nonlinear discrete time-delay stochastic systems based on the H-representation [26], and Sheng studied the observability of time-varying stochastic Markov jump systems based on the H-representation [27]. This paper uses the H-representation method to simplify linear matrix equation problems; whether it can also be applied to linear matrix inequality problems is a question worth considering, for instance, to the almost sure consensus of multi-agent systems [28] or to event-triggered $L_2$-$L_\infty$ filtering for network-based neutral systems with time-varying delays via the T-S fuzzy method [29]. In what follows, we study the bisymmetric and skew bisymmetric solutions of the quaternion matrix equation by using the expansion rules of the quaternion matrix product, the H-representation of matrices and the semi-tensor product of matrices.
Problem 1.
Let $A, B, C_i \in \mathbb{Q}^{n\times n}$, $i = 1, \ldots, n$, and
$$T_b = \left\{ X \,\middle|\, X \in \mathrm{BQ}^{n\times n},\ AX + XA^T + \sum_{i=1}^{n} C_i X C_i^T = B \right\}.$$
Find $X_b \in T_b$ such that
$$\|X_b\| = \min_{X \in T_b} \|X\|.$$
$X_b$ is called the minimal norm bisymmetric solution of (1).
Problem 2.
Let $A, B, C_i \in \mathbb{Q}^{n\times n}$, $i = 1, \ldots, n$, and
$$T_{sb} = \left\{ X \,\middle|\, X \in \mathrm{SBQ}^{n\times n},\ AX + XA^T + \sum_{i=1}^{n} C_i X C_i^T = B \right\}.$$
Find $X_{sb} \in T_{sb}$ such that
$$\|X_{sb}\| = \min_{X \in T_{sb}} \|X\|.$$
$X_{sb}$ is called the minimal norm skew bisymmetric solution of (1).
The main contributions of this paper are as follows: (i) By using the semi-tensor product of matrices, new conclusions on the row and column expansion rules of the matrix product over the quaternion skew field are proposed, which allow a quaternion matrix equation to be transformed into quaternion linear equations for solving. (ii) The H-representation method provides a systematic way to extract the independent elements of special matrices. This paper applies the method to the solution of quaternion matrix equations and provides a simple and feasible way to simplify their solution. (iii) The proposed method is compared with the real vector representation method in [16] to demonstrate its advantages in computational time and computable dimension.
This article is structured as follows. In Section 2, we introduce some basic knowledge of quaternions and the semi-tensor product of matrices. In Section 3, we introduce the H-representation, bisymmetric matrices and skew bisymmetric matrices, and give the H-representation of these two kinds of special matrices. In Section 4, we study the minimal norm bisymmetric solution and the minimal norm skew bisymmetric solution of Equation (1), and give necessary and sufficient conditions for the existence of solutions together with expressions for the general solution. In Section 5, the effectiveness of the algorithms is verified by numerical examples, and the superiority of the method is illustrated by comparison with the real vector representation method. In Section 6, a brief conclusion is given.

2. Preliminaries

In this section, we review some basic knowledge of quaternions and the semi-tensor product of matrices.
Definition 1
([30]). The set of quaternions can be regarded as a four-dimensional algebra, that is,
$$\mathbb{Q} = \{ q = q_1 + q_2 i + q_3 j + q_4 k \mid i^2 = j^2 = k^2 = -1,\ ijk = -1,\ q_1, q_2, q_3, q_4 \in \mathbb{R} \}.$$
A quaternion $q$ can be uniquely represented as $q = b_1 + b_2 j$, where $b_1 = q_1 + q_2 i$, $b_2 = q_3 + q_4 i$.
The conjugate of the quaternion $q$ is defined as $\bar{q} = q_1 - q_2 i - q_3 j - q_4 k = \bar{b}_1 - b_2 j$. The norm of a quaternion $q$ is
$$\|q\| = \sqrt{q_1^2 + q_2^2 + q_3^2 + q_4^2} = \sqrt{q\bar{q}}.$$
Definition 2
([30]). Note that any quaternion matrix $A \in \mathbb{Q}^{m\times n}$ can be represented as
$$A = A_{11} + A_{12} i + A_{13} j + A_{14} k = A_1 + A_2 j,$$
in which $A_{11}, A_{12}, A_{13}, A_{14} \in \mathbb{R}^{m\times n}$, and $A_1 = A_{11} + A_{12} i$, $A_2 = A_{13} + A_{14} i$.
The conjugate of $A$ is defined as $\bar{A} = A_{11} - A_{12} i - A_{13} j - A_{14} k = \bar{A}_1 - A_2 j$. The Frobenius norm of the quaternion matrix $A$ is
$$\|A\| = \sqrt{\|A_{11}\|^2 + \|A_{12}\|^2 + \|A_{13}\|^2 + \|A_{14}\|^2}.$$
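To make the component form concrete, the following minimal sketch (ours, not code from the paper; the class name `QMatrix` is a hypothetical helper) stores a quaternion matrix as its four real parts and implements the conjugate, the Frobenius norm and the component-wise product used throughout the paper.

```python
import numpy as np

class QMatrix:
    """Quaternion matrix A = A1 + A2*i + A3*j + A4*k stored as four real arrays."""
    def __init__(self, A1, A2, A3, A4):
        self.p = [np.asarray(A1, float), np.asarray(A2, float),
                  np.asarray(A3, float), np.asarray(A4, float)]

    def conj(self):
        A1, A2, A3, A4 = self.p
        return QMatrix(A1, -A2, -A3, -A4)

    def T(self):
        # transpose without conjugation (A^T, not A^H)
        return QMatrix(*[Ai.T for Ai in self.p])

    def norm(self):
        # Frobenius norm: sqrt of the sum of squared Frobenius norms of the four parts
        return np.sqrt(sum(np.linalg.norm(Ai, 'fro')**2 for Ai in self.p))

    def __matmul__(self, other):
        # quaternion product rules: i*j = k, j*k = i, k*i = j, i^2 = j^2 = k^2 = -1
        a1, a2, a3, a4 = self.p
        b1, b2, b3, b4 = other.p
        return QMatrix(a1@b1 - a2@b2 - a3@b3 - a4@b4,
                       a1@b2 + a2@b1 + a3@b4 - a4@b3,
                       a1@b3 - a2@b4 + a3@b1 + a4@b2,
                       a1@b4 + a2@b3 - a3@b2 + a4@b1)

    def __add__(self, other):
        return QMatrix(*[x + y for x, y in zip(self.p, other.p)])

# Example: conjugation does not change the Frobenius norm.
A = QMatrix(*np.random.rand(4, 3, 3))
assert np.isclose(A.norm(), A.conj().norm())
```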
Lemma 1
([30]). Let $a, b \in \mathbb{Q}$, $A \in \mathbb{Q}^{m\times n}$, $B \in \mathbb{Q}^{n\times p}$; then
$$\overline{ab} = \bar{b}\,\bar{a}, \qquad (\bar{A})^T = \overline{(A^T)}, \qquad \overline{V_c(AB)} = V_c(\overline{AB}).$$
Definition 3
([31]). Let $A \in \mathbb{R}^{m\times n}$, $B \in \mathbb{R}^{p\times q}$, and let $t = \mathrm{lcm}(n, p)$ be the least common multiple of $n$ and $p$; then the semi-tensor product of $A$ and $B$ is defined as
$$A \ltimes B = (A \otimes I_{t/n})(B \otimes I_{t/p}).$$
From the definition we see that when $n = p$ the semi-tensor product reduces to the traditional matrix product, so the semi-tensor product of matrices is a generalization of traditional matrix multiplication.
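As a concrete illustration (our own minimal sketch; the function name `stp` is not from the paper), the semi-tensor product of Definition 3 can be computed directly from Kronecker products:

```python
import numpy as np
from math import lcm   # Python 3.9+

def stp(A, B):
    """Semi-tensor product A |x| B = (A kron I_{t/n})(B kron I_{t/p}), t = lcm(n, p)."""
    n, p = A.shape[1], B.shape[0]
    t = lcm(n, p)
    return np.kron(A, np.eye(t // n)) @ np.kron(B, np.eye(t // p))

# When the column dimension of A equals the row dimension of B,
# the semi-tensor product reduces to the ordinary matrix product.
A, B = np.random.rand(3, 4), np.random.rand(4, 2)
assert np.allclose(stp(A, B), A @ B)
```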
Because the semi-tensor product allows the dimensions of the factors to differ, the conversion between the row and column stacking forms of a matrix can be realized by means of the swap matrix.
Definition 4
([31]). The $mn \times mn$ swap matrix is defined as
$$W_{[m,n]} = \big[\, I_n \otimes \delta_m^1,\ I_n \otimes \delta_m^2,\ \ldots,\ I_n \otimes \delta_m^m \,\big],$$
where $\delta_m^i$ is the $i$th column of $I_m$.
Lemma 2
([31]). Let $A \in \mathbb{R}^{m\times n}$; then
$$W_{[m,n]} V_r(A) = V_c(A), \qquad W_{[n,m]} V_c(A) = V_r(A).$$
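The following sketch (again our own illustration, with hypothetical helper names `swap_matrix`, `vr`, `vc`) builds $W_{[m,n]}$ from Definition 4 and checks Lemma 2 numerically:

```python
import numpy as np

def vr(A):
    """Row stacking V_r(A): rows of A concatenated into one column vector."""
    return A.reshape(-1, 1)                      # C order walks row by row

def vc(A):
    """Column stacking V_c(A): columns of A concatenated into one column vector."""
    return A.reshape(-1, 1, order='F')           # Fortran order walks column by column

def swap_matrix(m, n):
    """W_[m,n] = [I_n kron d_m^1, ..., I_n kron d_m^m], d_m^i the i-th column of I_m."""
    I_m, I_n = np.eye(m), np.eye(n)
    return np.hstack([np.kron(I_n, I_m[:, [i]]) for i in range(m)])

m, n = 3, 5
A = np.random.rand(m, n)
assert np.allclose(swap_matrix(m, n) @ vr(A), vc(A))   # Lemma 2, first identity
assert np.allclose(swap_matrix(n, m) @ vc(A), vr(A))   # Lemma 2, second identity
```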
With the help of semi-tensor product of matrices, we present some new conclusions on the expansion rules of quaternion matrix product.
Theorem 1.
Suppose $B \in \mathbb{Q}^{m\times n}$, $X \in \mathbb{Q}^{n\times p}$, $Y \in \mathbb{Q}^{q\times m}$; then
(1) $V_r(BX) = B \ltimes V_r(X)$, $\quad V_c(BX) = (I_p \otimes B)\,V_c(X)$;
(2) $V_c(\overline{YB}) = B^H \ltimes V_c(\bar{Y})$, $\quad V_r(\overline{YB}) = (I_q \otimes B^H)\,V_r(\bar{Y})$.
Proof. 
(1) Let $B = (b_{ij}) \in \mathbb{Q}^{m\times n}$, $X = (x_{ij}) \in \mathbb{Q}^{n\times p}$, and let $B_i$ denote the $i$th row of $B$. Then the $i$th block of $B \ltimes V_r(X)$ is
$$B_i \ltimes V_r(X) = (B_i \otimes I_p)V_r(X) = \Big( \sum_{k=1}^{n} b_{ik}x_{k1}, \ \ldots, \ \sum_{k=1}^{n} b_{ik}x_{kp} \Big)^T = (C_i)^T,$$
where $C_i$ is the $i$th row of $C = BX$. So $V_r(BX) = V_r(C) = B \ltimes V_r(X)$. From Lemma 2,
$$V_c(BX) = W_{[m,p]}V_r(BX) = W_{[m,p]} \ltimes B \ltimes V_r(X) = W_{[m,p]} \ltimes B \ltimes W_{[p,n]}V_c(X) = (I_p \otimes B)V_c(X).$$
(2) Write $B = (b_1, \ldots, b_n)$ with $b_i \in \mathbb{Q}^m$ $(i = 1, \ldots, n)$ and $Y = (y_1, \ldots, y_m)$ with $y_j \in \mathbb{Q}^q$ $(j = 1, \ldots, m)$; then
$$V_c(\overline{YB}) = V_c\big(\overline{Yb_1}, \ldots, \overline{Yb_n}\big) = \begin{pmatrix} \overline{Yb_1} \\ \vdots \\ \overline{Yb_n} \end{pmatrix}.$$
By Lemma 1,
$$\overline{Yb_i} = \overline{y_1 b_{1i} + y_2 b_{2i} + \cdots + y_m b_{mi}} = \overline{b_{1i}}\,\overline{y_1} + \overline{b_{2i}}\,\overline{y_2} + \cdots + \overline{b_{mi}}\,\overline{y_m} = \big(\overline{b_{1i}}I_q, \ldots, \overline{b_{mi}}I_q\big) V_c(\bar{Y}).$$
So
$$V_c(\overline{YB}) = \begin{pmatrix} \overline{b_{11}}I_q & \overline{b_{21}}I_q & \cdots & \overline{b_{m1}}I_q \\ \overline{b_{12}}I_q & \overline{b_{22}}I_q & \cdots & \overline{b_{m2}}I_q \\ \vdots & \vdots & & \vdots \\ \overline{b_{1n}}I_q & \overline{b_{2n}}I_q & \cdots & \overline{b_{mn}}I_q \end{pmatrix} V_c(\bar{Y}) = B^H \ltimes V_c(\bar{Y}),$$
and
$$V_r(\overline{YB}) = W_{[n,q]}V_c(\overline{YB}) = W_{[n,q]} \ltimes B^H \ltimes V_c(\bar{Y}) = W_{[n,q]} \ltimes B^H \ltimes W_{[q,m]}V_r(\bar{Y}) = (I_q \otimes B^H)V_r(\bar{Y}).$$ □
Lemma 3
([32]). The linear system of equations $Ax = b$, with $A \in \mathbb{R}^{m\times n}$ and $b \in \mathbb{R}^m$, has a solution $x \in \mathbb{R}^n$ if and only if $AA^{\dagger}b = b$. In this case, the general solution is
$$x = A^{\dagger}b + (I - A^{\dagger}A)y,$$
where $y \in \mathbb{R}^n$ is an arbitrary vector. The minimal norm solution of the linear system $Ax = b$ is $A^{\dagger}b$.
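Numerically, Lemma 3 corresponds to the Moore–Penrose pseudoinverse. A brief sketch (ours, not from the paper) of the solvability test and the minimal-norm solution:

```python
import numpy as np

A = np.random.rand(6, 4) @ np.random.rand(4, 8)   # a rank-deficient 6x8 matrix
x_true = np.random.rand(8)
b = A @ x_true                                    # consistent right-hand side

A_pinv = np.linalg.pinv(A)
assert np.allclose(A @ A_pinv @ b, b)             # solvability test: A A^+ b = b

x_min = A_pinv @ b                                # minimal-norm solution
y = np.random.rand(8)
x_gen = x_min + (np.eye(8) - A_pinv @ A) @ y      # another solution, from the general formula
assert np.allclose(A @ x_gen, b)
assert np.linalg.norm(x_min) <= np.linalg.norm(x_gen) + 1e-12
```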

3. The H-Representation of Bisymmetric Matrix and Skew Bisymmetric Matrix

This section describes the H-representation of matrices and related properties.
Definition 5
([22]). For a matrix $A = (a_{ij}) \in \mathbb{Q}^{n\times n}$, if $a_{ij} = a_{n-i+1,\,n-j+1} = \overline{a_{ji}}$, then $A$ is called a bisymmetric matrix.
Definition 6
([33]). For a matrix $A = (a_{ij}) \in \mathbb{Q}^{n\times n}$, if $a_{ij} = a_{n-i+1,\,n-j+1} = -\overline{a_{ji}}$, then $A$ is called a skew bisymmetric matrix.
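Component-wise, Definition 5 amounts to saying that the real part of $A$ is real bisymmetric while the three imaginary parts are real skew bisymmetric, and Definition 6 swaps the two roles; this is also what the block structures $H_1$ and $H_2$ in Section 4 below encode. A small self-contained check (helper names are ours, assuming the four real parts are given as NumPy arrays):

```python
import numpy as np

def is_bisym_real(M, tol=1e-12):
    """Real bisymmetric: symmetric and centrosymmetric (equal to its 180-degree rotation)."""
    return np.allclose(M, M.T, atol=tol) and np.allclose(M, np.rot90(M, 2), atol=tol)

def is_skew_bisym_real(M, tol=1e-12):
    """Real skew bisymmetric: skew-symmetric and centrosymmetric."""
    return np.allclose(M, -M.T, atol=tol) and np.allclose(M, np.rot90(M, 2), atol=tol)

def is_bisym_quat(A1, A2, A3, A4):
    """Quaternion bisymmetric (Definition 5)."""
    return is_bisym_real(A1) and all(is_skew_bisym_real(M) for M in (A2, A3, A4))

def is_skew_bisym_quat(A1, A2, A3, A4):
    """Quaternion skew bisymmetric (Definition 6)."""
    return is_skew_bisym_real(A1) and all(is_bisym_real(M) for M in (A2, A3, A4))
```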
Next, a brief introduction to the H-representation is given.
Definition 7
([5]). Consider a $p$-dimensional complex matrix subspace $\mathcal{X} \subseteq \mathbb{C}^{n\times n}$ over the field $\mathbb{C}$. For each matrix $X = (x_{ij})_{n\times n} \in \mathcal{X}$ there is a map $\psi: X \mapsto V_c(X)$. Suppose $\dim(\mathcal{X}) = p$ and $e_1, e_2, \ldots, e_p$ $(p \le n^2)$ form a basis of $\mathcal{X}$, and define $H = [V_c(e_1), V_c(e_2), \ldots, V_c(e_p)]$; then there exist $x_1, x_2, \ldots, x_p \in \mathbb{C}$ such that $X = \sum_{i=1}^{p} x_i e_i$. Therefore, for each $X \in \mathcal{X}$, if we express $\psi(X) = V_c(X)$ in the form
$$\psi(X) = V_c(X) = H\tilde{X},$$
where $\tilde{X} = [x_1, x_2, \ldots, x_p]^T$ is an ordered arrangement of the independent elements of $X$, then $H\tilde{X}$ is called an H-representation of $\psi(X)$, and $H$ is called an H-representation matrix of $\psi(X)$.
In the complex matrix subspace $\mathcal{X}$, the H-representation of a matrix depends on the choice of basis; once the basis is fixed, the H-representation is unique.
Based on the above definitions, the following examples are given.
Example 1.
Let $\mathcal{X} = \mathrm{BR}^{4\times 4}$ and $X = (x_{ij})_{4\times 4} \in \mathcal{X}$; then $\dim(\mathcal{X}) = 6$. Select a basis of $\mathcal{X}$ as
$$e_1 = \begin{pmatrix} 1&0&0&0\\0&0&0&0\\0&0&0&0\\0&0&0&1 \end{pmatrix},\quad e_2 = \begin{pmatrix} 0&1&0&0\\1&0&0&0\\0&0&0&1\\0&0&1&0 \end{pmatrix},\quad e_3 = \begin{pmatrix} 0&0&1&0\\0&0&0&1\\1&0&0&0\\0&1&0&0 \end{pmatrix},$$
$$e_4 = \begin{pmatrix} 0&0&0&1\\0&0&0&0\\0&0&0&0\\1&0&0&0 \end{pmatrix},\quad e_5 = \begin{pmatrix} 0&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&0 \end{pmatrix},\quad e_6 = \begin{pmatrix} 0&0&0&0\\0&0&1&0\\0&1&0&0\\0&0&0&0 \end{pmatrix}.$$
It is easy to compute
$$\psi(X) = V_c(X) = [x_{11}, x_{21}, x_{31}, x_{41},\ x_{21}, x_{22}, x_{32}, x_{31},\ x_{31}, x_{32}, x_{22}, x_{21},\ x_{41}, x_{31}, x_{21}, x_{11}]^T,$$
$$H = \begin{pmatrix}
1&0&0&0&0&0\\ 0&1&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\
0&1&0&0&0&0\\ 0&0&0&0&1&0\\ 0&0&0&0&0&1\\ 0&0&1&0&0&0\\
0&0&1&0&0&0\\ 0&0&0&0&0&1\\ 0&0&0&0&1&0\\ 0&1&0&0&0&0\\
0&0&0&1&0&0\\ 0&0&1&0&0&0\\ 0&1&0&0&0&0\\ 1&0&0&0&0&0
\end{pmatrix},\qquad \tilde{X} = [x_{11}, x_{21}, x_{31}, x_{41}, x_{22}, x_{32}]^T.$$
Example 2.
Let $\mathcal{X} = \mathrm{SBR}^{4\times 4}$ and $X = (x_{ij})_{4\times 4} \in \mathcal{X}$; then $\dim(\mathcal{X}) = 2$. Select a basis of $\mathcal{X}$ as
$$e_1 = \begin{pmatrix} 0&-1&0&0\\1&0&0&0\\0&0&0&1\\0&0&-1&0 \end{pmatrix},\qquad e_2 = \begin{pmatrix} 0&0&-1&0\\0&0&0&1\\1&0&0&0\\0&-1&0&0 \end{pmatrix}.$$
It is easy to compute
$$\psi(X) = V_c(X) = [0, x_{21}, x_{31}, 0,\ -x_{21}, 0, 0, -x_{31},\ -x_{31}, 0, 0, -x_{21},\ 0, x_{31}, x_{21}, 0]^T,$$
$$H = \begin{pmatrix}
0&0\\ 1&0\\ 0&1\\ 0&0\\ -1&0\\ 0&0\\ 0&0\\ 0&-1\\
0&-1\\ 0&0\\ 0&0\\ -1&0\\ 0&0\\ 0&1\\ 1&0\\ 0&0
\end{pmatrix},\qquad \tilde{X} = [x_{21}, x_{31}]^T.$$
In general, we now give the H-representation for $\mathcal{X} = \mathrm{BR}^{n\times n}$ and $\mathcal{X} = \mathrm{SBR}^{n\times n}$. First, we select standard bases of the bisymmetric and skew bisymmetric matrix subspaces.
For $\mathcal{X} = \mathrm{BR}^{n\times n}$, when $n$ is odd we select a set of standard basis matrices
$$E_{11}, \ldots, E_{n1},\ E_{22}, \ldots, E_{n-1,2},\ \ldots,\ E_{\frac{n+1}{2},\frac{n+1}{2}},$$
where $E_{ij} = (e_{kl})_{n\times n}$ with $e_{kl} = e_{n-k+1,\,n-l+1} = e_{lk} = 1$ and all other elements equal to $0$. In this case,
$$\tilde{X}_b = \big(x_{11}, \ldots, x_{n1},\ x_{22}, \ldots, x_{n-1,2},\ \ldots,\ x_{\frac{n+1}{2},\frac{n+1}{2}}\big)^T.$$
When $n$ is even, we select a set of standard basis matrices
$$E_{11}, \ldots, E_{n1},\ E_{22}, \ldots, E_{n-1,2},\ \ldots,\ E_{\frac{n}{2},\frac{n}{2}},\ E_{\frac{n}{2}+1,\frac{n}{2}},$$
where $E_{ij} = (e_{kl})_{n\times n}$ with $e_{kl} = e_{n-k+1,\,n-l+1} = e_{lk} = 1$ and all other elements equal to $0$. In this case,
$$\tilde{X}_b = \big(x_{11}, \ldots, x_{n1},\ x_{22}, \ldots, x_{n-1,2},\ \ldots,\ x_{\frac{n}{2},\frac{n}{2}},\ x_{\frac{n}{2}+1,\frac{n}{2}}\big)^T.$$
Similarly, for $\mathcal{X} = \mathrm{SBR}^{n\times n}$, when $n$ is odd we select a set of standard basis matrices
$$F_{21}, \ldots, F_{n-1,1},\ F_{32}, \ldots, F_{n-2,2},\ \ldots,\ F_{\frac{n+1}{2},\frac{n-1}{2}},$$
where $F_{ij} = (f_{pq})_{n\times n}$ with $f_{pq} = f_{n-p+1,\,n-q+1} = -f_{qp} = 1$ and all other elements equal to $0$. In this case,
$$\tilde{X}_{sb} = \big(x_{21}, \ldots, x_{n-1,1},\ x_{32}, \ldots, x_{n-2,2},\ \ldots,\ x_{\frac{n+1}{2},\frac{n-1}{2}}\big)^T.$$
When $n$ is even, we select a set of standard basis matrices
$$F_{21}, \ldots, F_{n-1,1},\ F_{32}, \ldots, F_{n-2,2},\ \ldots,\ F_{\frac{n}{2},\frac{n}{2}-1},\ F_{\frac{n}{2}+1,\frac{n}{2}-1},$$
where $F_{ij} = (f_{pq})_{n\times n}$ with $f_{pq} = f_{n-p+1,\,n-q+1} = -f_{qp} = 1$ and all other elements equal to $0$. In this case,
$$\tilde{X}_{sb} = \big(x_{21}, \ldots, x_{n-1,1},\ x_{32}, \ldots, x_{n-2,2},\ \ldots,\ x_{\frac{n}{2},\frac{n}{2}-1},\ x_{\frac{n}{2}+1,\frac{n}{2}-1}\big)^T.$$
Remark 1.
Note that $\psi(X_b)$ is a column vector formed by all elements of $X_b$, while $\tilde{X}_b$ and $\tilde{X}_{sb}$ are column vectors formed by the distinct nonzero elements of $X_b$ and $X_{sb}$, respectively. For clarity, we denote the H-matrix in the H-representation corresponding to $\mathcal{X} = \mathrm{BR}^{n\times n}$ by $H_b$, and the H-matrix corresponding to $\mathcal{X} = \mathrm{SBR}^{n\times n}$ by $H_{sb}$.
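As a computational companion (our own sketch; the function names and the 0-based indexing are ours, and the column ordering simply follows the construction above), the matrices $H_b$ and $H_{sb}$ can be assembled column by column from the standard basis matrices and checked against a randomly generated bisymmetric matrix:

```python
import numpy as np

def vc(M):
    """Column stacking V_c(M)."""
    return M.reshape(-1, 1, order='F')

def E_bisym(n, i, j):
    """Standard basis matrix of BR^{n x n}: symmetric + centrosymmetric (0-based i, j)."""
    M = np.zeros((n, n))
    for r, c in {(i, j), (j, i), (n - 1 - i, n - 1 - j), (n - 1 - j, n - 1 - i)}:
        M[r, c] = 1.0
    return M

def F_skew_bisym(n, i, j):
    """Standard basis matrix of SBR^{n x n}: skew-symmetric + centrosymmetric."""
    M = np.zeros((n, n))
    M[i, j] = M[n - 1 - i, n - 1 - j] = 1.0
    M[j, i] = M[n - 1 - j, n - 1 - i] = -1.0
    return M

def H_bisym(n):
    """H_b: columns are V_c of the basis matrices, ordered column by column as in the text."""
    cols = [vc(E_bisym(n, i, j)) for j in range((n + 1) // 2) for i in range(j, n - j)]
    return np.hstack(cols)

def H_skew_bisym(n):
    """H_sb for SBR^{n x n}."""
    cols = [vc(F_skew_bisym(n, i, j)) for j in range((n - 1) // 2) for i in range(j + 1, n - 1 - j)]
    return np.hstack(cols)

# n = 4 reproduces the sizes of Examples 1 and 2: H_b is 16 x 6, H_sb is 16 x 2.
assert H_bisym(4).shape == (16, 6) and H_skew_bisym(4).shape == (16, 2)

# Check V_c(X) = H_b * X_tilde for a bisymmetric X assembled from the same basis.
n = 5
Hb = H_bisym(n)
x_tilde = np.random.rand(Hb.shape[1], 1)
X = (Hb @ x_tilde).reshape(n, n, order='F')
assert np.allclose(X, X.T) and np.allclose(X, np.rot90(X, 2))   # X is indeed bisymmetric
assert np.allclose(vc(X), Hb @ x_tilde)
```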

4. The Solutions of Problems 1 and 2

In this section, by using the properties of semi-tensor product of matrices and the H-representation, we study Problems 1 and 2.
For convenience, we denote $B = B_{11} + B_{12}i + B_{13}j + B_{14}k = B_1 + B_2j$ and
$$\mathcal{B} = \begin{pmatrix} V_c(B_{11})\\ V_c(B_{13})\\ V_c(B_{12})\\ V_c(B_{14}) \end{pmatrix},\qquad
I_B = \begin{pmatrix}
I_{n^2} & 0 & 0 & 0\\
0 & 0 & I_{n^2} & 0\\
I_{n^2} & 0 & 0 & 0\\
0 & 0 & I_{n^2} & 0\\
0 & I_{n^2} & 0 & 0\\
0 & 0 & 0 & I_{n^2}\\
0 & -I_{n^2} & 0 & 0\\
0 & 0 & 0 & -I_{n^2}
\end{pmatrix},$$
$$D = I_n \otimes A = D_1 + D_2j,\qquad E = \bar{A}\otimes I_n = E_1 + E_2j,$$
$$F_i = I_n \otimes C_i = F_{1i} + F_{2i}j,\qquad G_i = \bar{C}_i \otimes I_n = G_{1i} + G_{2i}j,$$
$$M_{11} = D_1 + \bar{E}_1 + \sum_{i=1}^{n} F_{1i}\bar{G}_{1i},\qquad M_{21} = -E_2 - \sum_{i=1}^{n} F_{1i}G_{2i},$$
$$M_{12} = \bar{E}_2 + \sum_{i=1}^{n} F_{1i}\bar{G}_{2i},\qquad M_{22} = D_1 + E_1 + \sum_{i=1}^{n} F_{1i}G_{1i},$$
$$M_{13} = \sum_{i=1}^{n} F_{2i}\bar{G}_{2i},\qquad M_{23} = D_2 + \sum_{i=1}^{n} F_{2i}G_{1i},$$
$$M_{14} = -D_2 - \sum_{i=1}^{n} F_{2i}\bar{G}_{1i},\qquad M_{24} = \sum_{i=1}^{n} F_{2i}G_{2i}.$$
Then
$$M = \begin{pmatrix} M_{11} & M_{12} & M_{13} & M_{14}\\ M_{21} & M_{22} & M_{23} & M_{24} \end{pmatrix} = M_1 + M_2 i.$$
Theorem 2.
Suppose $A, B, C_i \in \mathbb{Q}^{n\times n}$ $(i = 1, \ldots, n)$, and denote
$$H_1 = \operatorname{diag}(H_b, H_{sb}, H_{sb}, H_{sb}), \qquad L_1 = \begin{pmatrix} M_1 & -M_2 \\ M_2 & M_1 \end{pmatrix} I_B H_1.$$
Then (1) has a bisymmetric solution if and only if
$$\left(L_1L_1^{\dagger} - I_{4n^2}\right)\mathcal{B} = 0. \qquad (6)$$
Moreover, if (6) holds, the solution set of (1) can be represented as
$$\mathcal{H}_b = \left\{ X = X_{11} + X_{12}i + X_{13}j + X_{14}k \;\middle|\;
\begin{pmatrix}\tilde{X}_{11}\\ \tilde{X}_{12}\\ \tilde{X}_{13}\\ \tilde{X}_{14}\end{pmatrix} =
\begin{cases}
L_1^{\dagger}\mathcal{B} + \left(I_{n^2-n+1} - L_1^{\dagger}L_1\right)y,\ y \in \mathbb{R}^{n^2-n+1}, & \text{if } n \text{ is odd},\\[4pt]
L_1^{\dagger}\mathcal{B} + \left(I_{n^2-n} - L_1^{\dagger}L_1\right)y,\ y \in \mathbb{R}^{n^2-n}, & \text{if } n \text{ is even},
\end{cases} \right\} \qquad (7)$$
where $\tilde{X}_{1p}$ is the vector of independent elements of $X_{1p}$, $p = 1, 2, 3, 4$.
Then the minimal norm bisymmetric solution $\hat{X}_b$ satisfies
$$\begin{pmatrix}\tilde{X}_{11}\\ \tilde{X}_{12}\\ \tilde{X}_{13}\\ \tilde{X}_{14}\end{pmatrix} = L_1^{\dagger}\mathcal{B}.$$
Proof. 
For $X = X_1 + X_2j = X_{11} + X_{12}i + X_{13}j + X_{14}k \in \mathrm{BQ}^{n\times n}$, using Theorem 1 and the H-representation of bisymmetric matrices, we obtain
$$\begin{aligned}
&\left\| AX + XA^T + \sum_{i=1}^{n} C_iXC_i^T - B \right\|
= \left\| V_c\!\left( AX + XA^T + \sum_{i=1}^{n} C_iXC_i^T - B \right) \right\| \\
&\quad= \left\| V_c(AX) + V_c(XA^T) + V_c\!\left(\sum_{i=1}^{n} C_iXC_i^T\right) - V_c(B) \right\| \\
&\quad= \left\| (I_n\otimes A)V_c(X) + \overline{(\bar{A}\otimes I_n)V_c(\bar{X})} + \sum_{i=1}^{n} (I_n\otimes C_i)\,\overline{(\bar{C}_i\otimes I_n)V_c(\bar{X})} - V_c(B) \right\| \\
&\quad= \Big\| (D_1+D_2j)V_c(X_1+X_2j) + \overline{(E_1+E_2j)V_c(\bar{X}_1-X_2j)} \\
&\qquad\quad + \sum_{i=1}^{n}(F_{1i}+F_{2i}j)\,\overline{(G_{1i}+G_{2i}j)V_c(\bar{X}_1-X_2j)} - V_c(B_1+B_2j) \Big\| \\
&\quad= \Big\| D_1V_c(X_1) - D_2V_c(\bar{X}_2) + D_1V_c(X_2)j + D_2V_c(\bar{X}_1)j \\
&\qquad\quad + \bar{E}_1V_c(X_1) + \bar{E}_2V_c(X_2) + E_1V_c(X_2)j - E_2V_c(X_1)j \\
&\qquad\quad + \sum_{i=1}^{n} F_{1i}\bar{G}_{1i}V_c(X_1) + \sum_{i=1}^{n} F_{1i}\bar{G}_{2i}V_c(X_2) - \sum_{i=1}^{n} F_{2i}\bar{G}_{1i}V_c(\bar{X}_2) + \sum_{i=1}^{n} F_{2i}\bar{G}_{2i}V_c(\bar{X}_1) \\
&\qquad\quad + \sum_{i=1}^{n} F_{1i}G_{1i}V_c(X_2)j - \sum_{i=1}^{n} F_{1i}G_{2i}V_c(X_1)j + \sum_{i=1}^{n} F_{2i}G_{1i}V_c(\bar{X}_1)j + \sum_{i=1}^{n} F_{2i}G_{2i}V_c(\bar{X}_2)j \\
&\qquad\quad - V_c(B_1) - V_c(B_2)j \Big\| \\
&\quad= \Big\| M_{11}V_c(X_1) + M_{12}V_c(X_2) + M_{13}V_c(\bar{X}_1) + M_{14}V_c(\bar{X}_2) - V_c(B_1) \\
&\qquad\quad + \big[ M_{21}V_c(X_1) + M_{22}V_c(X_2) + M_{23}V_c(\bar{X}_1) + M_{24}V_c(\bar{X}_2) - V_c(B_2) \big] j \Big\| \\
&\quad= \left\| \begin{pmatrix} M_{11} & M_{12} & M_{13} & M_{14}\\ M_{21} & M_{22} & M_{23} & M_{24} \end{pmatrix} \begin{pmatrix} V_c(X_1)\\ V_c(X_2)\\ V_c(\bar{X}_1)\\ V_c(\bar{X}_2) \end{pmatrix} - \begin{pmatrix} V_c(B_1)\\ V_c(B_2) \end{pmatrix} \right\| \\
&\quad= \left\| (M_1+M_2i) \begin{pmatrix} V_c(X_{11}+X_{12}i)\\ V_c(X_{13}+X_{14}i)\\ V_c(X_{11}-X_{12}i)\\ V_c(X_{13}-X_{14}i) \end{pmatrix} - \begin{pmatrix} V_c(B_{11}+B_{12}i)\\ V_c(B_{13}+B_{14}i) \end{pmatrix} \right\| \\
&\quad= \left\| (M_1+M_2i)\left[ \begin{pmatrix} V_c(X_{11})\\ V_c(X_{13})\\ V_c(X_{11})\\ V_c(X_{13}) \end{pmatrix} + \begin{pmatrix} V_c(X_{12})\\ V_c(X_{14})\\ -V_c(X_{12})\\ -V_c(X_{14}) \end{pmatrix} i \right] - \left[ \begin{pmatrix} V_c(B_{11})\\ V_c(B_{13}) \end{pmatrix} + \begin{pmatrix} V_c(B_{12})\\ V_c(B_{14}) \end{pmatrix} i \right] \right\| \\
&\quad= \left\| \begin{pmatrix} M_1 & -M_2\\ M_2 & M_1 \end{pmatrix} \begin{pmatrix} V_c(X_{11})\\ V_c(X_{13})\\ V_c(X_{11})\\ V_c(X_{13})\\ V_c(X_{12})\\ V_c(X_{14})\\ -V_c(X_{12})\\ -V_c(X_{14}) \end{pmatrix} - \begin{pmatrix} V_c(B_{11})\\ V_c(B_{13})\\ V_c(B_{12})\\ V_c(B_{14}) \end{pmatrix} \right\| \\
&\quad= \left\| \begin{pmatrix} M_1 & -M_2\\ M_2 & M_1 \end{pmatrix} I_B \begin{pmatrix} H_b & & & \\ & H_{sb} & & \\ & & H_{sb} & \\ & & & H_{sb} \end{pmatrix} \begin{pmatrix} \tilde{X}_{11}\\ \tilde{X}_{12}\\ \tilde{X}_{13}\\ \tilde{X}_{14} \end{pmatrix} - \begin{pmatrix} V_c(B_{11})\\ V_c(B_{13})\\ V_c(B_{12})\\ V_c(B_{14}) \end{pmatrix} \right\|.
\end{aligned}$$
By means of the properties of the Moore–Penrose inverse, we get  
$$\left\| AX + XA^T + \sum_{i=1}^{n} C_iXC_i^T - B \right\|
= \left\| \begin{pmatrix} M_1 & -M_2\\ M_2 & M_1 \end{pmatrix} I_B H_1 \begin{pmatrix}\tilde{X}_{11}\\ \tilde{X}_{12}\\ \tilde{X}_{13}\\ \tilde{X}_{14}\end{pmatrix} - \begin{pmatrix} V_c(B_{11})\\ V_c(B_{13})\\ V_c(B_{12})\\ V_c(B_{14}) \end{pmatrix} \right\|
= \left\| L_1 \begin{pmatrix}\tilde{X}_{11}\\ \tilde{X}_{12}\\ \tilde{X}_{13}\\ \tilde{X}_{14}\end{pmatrix} - \mathcal{B} \right\|
= \left\| L_1L_1^{\dagger}L_1 \begin{pmatrix}\tilde{X}_{11}\\ \tilde{X}_{12}\\ \tilde{X}_{13}\\ \tilde{X}_{14}\end{pmatrix} - \mathcal{B} \right\|
= \left\| L_1L_1^{\dagger}\mathcal{B} - \mathcal{B} \right\|
= \left\| \left(L_1L_1^{\dagger} - I_{4n^2}\right)\mathcal{B} \right\|.$$
Therefore, for $X \in \mathrm{BQ}^{n\times n}$, we obtain
$$\left\| AX + XA^T + \sum_{i=1}^{n} C_iXC_i^T - B \right\| = 0 \iff \left\| \left(L_1L_1^{\dagger} - I_{4n^2}\right)\mathcal{B} \right\| = 0 \iff \left(L_1L_1^{\dagger} - I_{4n^2}\right)\mathcal{B} = 0.$$
When (1) is solvable, its bisymmetric solution $X \in \mathrm{BQ}^{n\times n}$ satisfies
$$L_1 \begin{pmatrix}\tilde{X}_{11}\\ \tilde{X}_{12}\\ \tilde{X}_{13}\\ \tilde{X}_{14}\end{pmatrix} = \mathcal{B}.$$
Moreover, by Lemma 3, the bisymmetric solution $\hat{X}_b$ satisfies
$$\begin{pmatrix}\tilde{X}_{11}\\ \tilde{X}_{12}\\ \tilde{X}_{13}\\ \tilde{X}_{14}\end{pmatrix} = L_1^{\dagger}\mathcal{B} + \left(I_p - L_1^{\dagger}L_1\right)y, \qquad y \in \mathbb{R}^p,$$
where $p = n^2 - n + 1$ when $n$ is odd and $p = n^2 - n$ when $n$ is even. □
Similarly, we can obtain the minimum norm skew bisymmetric solution of Problem 2.
Theorem 3.
Suppose $A, B, C_i \in \mathbb{Q}^{n\times n}$ $(i = 1, \ldots, n)$, and denote
$$H_2 = \operatorname{diag}(H_{sb}, H_b, H_b, H_b), \qquad L_2 = \begin{pmatrix} M_1 & -M_2 \\ M_2 & M_1 \end{pmatrix} I_B H_2.$$
Then (1) has a skew bisymmetric solution if and only if
$$\left(L_2L_2^{\dagger} - I_{4n^2}\right)\mathcal{B} = 0. \qquad (8)$$
Moreover, if (8) holds, the solution set of (1) can be represented as
$$\mathcal{H}_{sb} = \left\{ X = X_{11} + X_{12}i + X_{13}j + X_{14}k \;\middle|\;
\begin{pmatrix}\tilde{X}_{11}\\ \tilde{X}_{12}\\ \tilde{X}_{13}\\ \tilde{X}_{14}\end{pmatrix} =
\begin{cases}
L_2^{\dagger}\mathcal{B} + \left(I_{n^2+n+1} - L_2^{\dagger}L_2\right)y,\ y \in \mathbb{R}^{n^2+n+1}, & \text{if } n \text{ is odd},\\[4pt]
L_2^{\dagger}\mathcal{B} + \left(I_{n^2+n} - L_2^{\dagger}L_2\right)y,\ y \in \mathbb{R}^{n^2+n}, & \text{if } n \text{ is even},
\end{cases} \right\} \qquad (9)$$
where $\tilde{X}_{1q}$ is the vector of independent elements of $X_{1q}$, $q = 1, 2, 3, 4$.
Then the minimal norm skew bisymmetric solution $\hat{X}_{sb}$ satisfies
$$\begin{pmatrix}\tilde{X}_{11}\\ \tilde{X}_{12}\\ \tilde{X}_{13}\\ \tilde{X}_{14}\end{pmatrix} = L_2^{\dagger}\mathcal{B}.$$

5. Algorithms and Numerical Examples

This section provides algorithms and numerical examples of the H-representation method for computing the bisymmetric and skew bisymmetric solutions of the quaternion generalized Lyapunov equation. For convenience, we take the case of a single term, $i = 1$.
Next, numerical experiments are used to verify the effectiveness of the above algorithms.
Example 3.
Let $A, C \in \mathbb{Q}^{n\times n}$ be generated randomly for $n = 3, 4, \ldots, 50$. Denote
$$X_b = X_{b1} + X_{b2}i + X_{b3}j + X_{b4}k \in \mathrm{BQ}^{n\times n},\qquad X_{sb} = X_{sb1} + X_{sb2}i + X_{sb3}j + X_{sb4}k \in \mathrm{SBQ}^{n\times n}.$$
Compute $B_b = AX_b + X_bA^T + CX_bC^T$ and $B_{sb} = AX_{sb} + X_{sb}A^T + CX_{sb}C^T$. For $AX + XA^T + CXC^T = B_b$ and $AX + XA^T + CXC^T = B_{sb}$, using Algorithms 1 and 2 we obtain the computed solutions $\hat{X}_b$ and $\hat{X}_{sb}$, respectively. Denote $\varepsilon_1 = \log_{10}\|X_b - \hat{X}_b\|$ and $\varepsilon_2 = \log_{10}\|X_{sb} - \hat{X}_{sb}\|$. As the dimension changes, the errors $\varepsilon_i$ $(i = 1, 2)$ are shown in Figure 1.
Algorithm 1: (Problem 1)
  Step 1: Input $A, B, C \in \mathbb{Q}^{n\times n}$; output $M_1$, $M_2$, $\mathcal{B}$.
  Step 2: Input $H_b$, $H_{sb}$; output $H_1$, $L_1$.
  Step 3: According to (7), output the minimal norm bisymmetric solution $\hat{X}_b$ of (1).
Algorithm 2: (Problem 2)
  Step 1: Input $A, B, C \in \mathbb{Q}^{n\times n}$; output $M_1$, $M_2$, $\mathcal{B}$.
  Step 2: Input $H_b$, $H_{sb}$; output $H_2$, $L_2$.
  Step 3: According to (9), output the minimal norm skew bisymmetric solution $\hat{X}_{sb}$ of (1).
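For readers who want to experiment, the following self-contained sketch mirrors the output of Algorithm 1 in the simplest possible way: instead of assembling $M_1$, $M_2$, $I_B$ and $H_1$ explicitly (which is what gives the method of Section 4 its speed advantage), it builds the same real linear system, up to a fixed permutation of the rows, column by column by applying the Lyapunov operator to each basis matrix of $\mathrm{BQ}^{n\times n}$, and then takes the minimal-norm least squares solution. All names are ours; it is a verification aid for small $n$, not the paper's construction.

```python
import numpy as np

def qmul(A, B):
    """Product of quaternion matrices given as 4-tuples (A1, A2, A3, A4) of real arrays."""
    a1, a2, a3, a4 = A; b1, b2, b3, b4 = B
    return (a1@b1 - a2@b2 - a3@b3 - a4@b4,
            a1@b2 + a2@b1 + a3@b4 - a4@b3,
            a1@b3 - a2@b4 + a3@b1 + a4@b2,
            a1@b4 + a2@b3 - a3@b2 + a4@b1)

def qadd(A, B):  return tuple(x + y for x, y in zip(A, B))
def qT(A):       return tuple(x.T for x in A)
def qvec(A):     return np.concatenate([x.reshape(-1, order='F') for x in A])

def lyap_op(A, C, X):
    """AX + XA^T + CXC^T for quaternion matrices."""
    return qadd(qadd(qmul(A, X), qmul(X, qT(A))), qmul(C, qmul(X, qT(C))))

def bisym_real_basis(n):
    out = []
    for j in range((n + 1) // 2):
        for i in range(j, n - j):
            M = np.zeros((n, n))
            for r, c in {(i, j), (j, i), (n-1-i, n-1-j), (n-1-j, n-1-i)}:
                M[r, c] = 1.0
            out.append(M)
    return out

def skew_bisym_real_basis(n):
    out = []
    for j in range((n - 1) // 2):
        for i in range(j + 1, n - 1 - j):
            M = np.zeros((n, n))
            M[i, j] = M[n-1-i, n-1-j] = 1.0
            M[j, i] = M[n-1-j, n-1-i] = -1.0
            out.append(M)
    return out

def bisym_quat_basis(n):
    """Basis of BQ^{n x n}: real part bisymmetric, i/j/k parts skew bisymmetric."""
    Z = np.zeros((n, n))
    basis = [(E, Z, Z, Z) for E in bisym_real_basis(n)]
    for comp in (1, 2, 3):
        for F in skew_bisym_real_basis(n):
            parts = [Z, Z, Z, Z]; parts[comp] = F
            basis.append(tuple(parts))
    return basis

def min_norm_bisym_solution(A, C, B):
    """Minimal-norm bisymmetric solution of AX + XA^T + CXC^T = B (brute-force version)."""
    n = A[0].shape[0]
    basis = bisym_quat_basis(n)
    L = np.column_stack([qvec(lyap_op(A, C, E)) for E in basis])  # coefficient matrix
    coeff = np.linalg.pinv(L) @ qvec(B)                           # minimal-norm coefficients
    return tuple(sum(c * E[p] for c, E in zip(coeff, basis)) for p in range(4))

# Quick check in the spirit of Example 3: build B from a known bisymmetric X, then solve.
n = 4
rand = lambda: np.random.rand(n, n)
A = (rand(), rand(), rand(), rand()); C = (rand(), rand(), rand(), rand())
basis = bisym_quat_basis(n)
coeff_true = np.random.randn(len(basis))
X_true = tuple(sum(c * E[p] for c, E in zip(coeff_true, basis)) for p in range(4))
B = lyap_op(A, C, X_true)
X_hat = min_norm_bisym_solution(A, C, B)
R = qadd(lyap_op(A, C, X_hat), tuple(-b for b in B))
print(max(np.max(np.abs(r)) for r in R))            # equation residual, should be ~1e-12
# X_hat equals X_true when the solution is unique (generically the case for random data);
# otherwise X_hat is the minimal-norm solution and may differ from X_true.
```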
Next, the proposed method is compared with the real vector representation method in Reference [16]. We first briefly recall the real vector representation of a quaternion.
Let $q = q_1 + q_2i + q_3j + q_4k \in \mathbb{Q}$ and denote
$$\vec{q} = \begin{pmatrix} q_1\\ q_2\\ q_3\\ q_4 \end{pmatrix};$$
$\vec{q}$ is called the real vector representation of $q$.
Let $x = [x_1, x_2, \ldots, x_n]$ and $y = [y_1, y_2, \ldots, y_n]^T$ be quaternion vectors, and denote
$$\vec{x} = \begin{pmatrix} \vec{x}_1\\ \vdots\\ \vec{x}_n \end{pmatrix},\qquad \vec{y} = \begin{pmatrix} \vec{y}_1\\ \vdots\\ \vec{y}_n \end{pmatrix};$$
$\vec{x}$ and $\vec{y}$ are called the real vector representations of the quaternion vectors $x$ and $y$, respectively.
For $A = (A_{ij}) \in \mathbb{Q}^{m\times n}$, denote
$$A^c = \begin{pmatrix} \overrightarrow{\mathrm{Col}_1(A)}\\ \overrightarrow{\mathrm{Col}_2(A)}\\ \vdots\\ \overrightarrow{\mathrm{Col}_n(A)} \end{pmatrix} = \begin{pmatrix} \vec{A}_{11}\\ \vdots\\ \vec{A}_{m1}\\ \vdots\\ \vec{A}_{1n}\\ \vdots\\ \vec{A}_{mn} \end{pmatrix},\qquad
A^r = \begin{pmatrix} \overrightarrow{\mathrm{Row}_1(A)}\\ \overrightarrow{\mathrm{Row}_2(A)}\\ \vdots\\ \overrightarrow{\mathrm{Row}_m(A)} \end{pmatrix} = \begin{pmatrix} \vec{A}_{11}\\ \vdots\\ \vec{A}_{1n}\\ \vdots\\ \vec{A}_{m1}\\ \vdots\\ \vec{A}_{mn} \end{pmatrix},$$
where $\mathrm{Col}_k(A)$ is the $k$th column of $A$ $(k = 1, \ldots, n)$ and $\mathrm{Row}_l(A)$ is the $l$th row of $A$ $(l = 1, \ldots, m)$. $A^c$ and $A^r$ are called the real column stacking form and the real row stacking form of $A$, respectively; together they are referred to as the real vector representation of $A$.
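Following the definition of $A^c$ above, a minimal sketch (ours; `real_col_stack` is a hypothetical helper, and the four real parts are passed separately) that interleaves the four real components of every entry:

```python
import numpy as np

def real_col_stack(A1, A2, A3, A4):
    """A^c: walk the entries of A column by column and expand each entry
    a = a1 + a2*i + a3*j + a4*k into the real 4-vector (a1, a2, a3, a4)^T."""
    m, n = A1.shape
    out = []
    for k in range(n):            # columns first
        for l in range(m):        # then rows within the column
            out.extend([A1[l, k], A2[l, k], A3[l, k], A4[l, k]])
    return np.array(out).reshape(-1, 1)

# For an m x n quaternion matrix, A^c has length 4*m*n.
```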
The superiority of the proposed method is illustrated by comparing the computer running time and the computable dimension. Taking the bisymmetric case as an example, the results for odd and even $n$ are shown in Figure 2.
Since the real vector representation method takes too long for large dimensions, it is only run up to $n = 18$. The comparison shows that, for the same dimension, the proposed method requires less time, and this advantage becomes more pronounced as the dimension increases. In terms of computable dimension, because the matrices involved are smaller and less data needs to be stored, the proposed method can handle larger dimensions.

6. Conclusions

With the help of the semi-tensor product of matrices, this paper puts forward some new conclusions about the expansion rules of the quaternion matrix product and applies them to the calculation of matrix equations. For solving quaternion matrix equations with structured solutions, the H-representation is used to extract the independent elements of the matrix that participate in the computation. By comparison with the real vector representation of quaternions, we illustrate the effectiveness and superiority of the method. In addition, applying the proposed method to event-triggered $L_2$-$L_\infty$ filtering for network-based neutral systems with time-varying delays via the T-S fuzzy approach is a promising direction for future study.

Author Contributions

Methodology, Z.L., Y.L. and X.F.; software, X.F. and W.D.; writing—original draft preparation, Y.L. and W.D.; writing—review and editing, Y.L., Z.L. and X.F.; supervision, Y.L.; project administration, Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under Grant 62176112 and the Natural Science Foundation of Shandong Province under Grant ZR2020MA053.

Institutional Review Board Statement

Not Applicable.

Informed Consent Statement

Not Applicable.

Data Availability Statement

Not Applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Rami, M.A.; Zhou, X.Y. Linear matrix inequalities, Riccati equations, and indefinite stochastic linear quadratic controls. IEEE Trans. Autom. Control 2000, 45, 1131–1143.
2. Zhang, W.H.; Huang, Y.L.; Zhang, H.S. Stochastic H2/H∞ control for discrete-time systems with state and disturbance dependent noise. Automatica 2007, 43, 513–521.
3. Feng, Y.T.; Anderson, B.D.O. An iterative algorithm to solve state-perturbed stochastic algebraic Riccati equations in LQ zero-sum games. Syst. Control Lett. 2010, 59, 50–56.
4. Zhang, W.H.; Zhang, H.S.; Chen, B.S. Generalized Lyapunov equation approach to state-dependent stochastic stabilization/detectability criterion. IEEE Trans. Autom. Control 2008, 53, 1630–1642.
5. Zhang, W.H.; Chen, B.S. H-representation and applications to generalized Lyapunov equations and linear stochastic systems. IEEE Trans. Autom. Control 2012, 57, 3009–3022.
6. Cheng, D.Z.; Zhao, Y.; Xu, T.T. Receding horizon based feedback optimization for mix-valued logical networks. IEEE Trans. Autom. Control 2015, 60, 3362–3366.
7. Li, R.; Yang, M.; Chu, T.G. State feedback stabilization for Boolean control networks. IEEE Trans. Autom. Control 2013, 58, 1853–1857.
8. Li, H.T.; Wang, Y.Z. Output feedback stabilization control design for Boolean control networks. Automatica 2013, 49, 3641–3645.
9. Cheng, D.Z.; Xu, T.T.; Qi, H.S. Evolutionarily stable strategy of networked evolutionary games. IEEE Trans. Neural Netw. Learn. Syst. 2014, 25, 1335–1345.
10. Cheng, D.Z. On finite potential games. Automatica 2014, 50, 1793–1801.
11. Cheng, D.Z.; He, F.H.; Qi, H.S.; Xu, T. Modeling, analysis and control of networked evolutionary games. IEEE Trans. Autom. Control 2015, 60, 2402–2415.
12. Wang, J.M.; Wang, J.; Jiang, Y.J. Image encryption algorithm based on the semi-tensor product. J. Image Graph. 2016, 21, 282–296.
13. Wen, W.Y.; Hong, Y.K.; Fang, Y.M.; Zhang, Y.S.; Wan, Z. Semi-tensor product compression sensing integrated to verifiable image encryption method. J. Image Graph. 2022, 27, 215–225.
14. Li, T.; Zhou, X.L.; Li, J.F. Least-squares solutions of matrix equations AX = B, XC = D with semi-tensor product. J. Math UK 2018, 38, 525–538.
15. Ding, W.X.; Li, Y.; Wang, D. Special least squares solutions of the reduced biquaternion matrix equation with applications. Comput. Appl. Math. 2021, 40, 1–15.
16. Wang, D.; Li, Y.; Ding, W.X. Several kinds of special least squares solutions to quaternion matrix equation AXB = C. J. Appl. Math. Comput. 2021, 68, 1881–1899.
17. Rehman, A.; Kyrchei, I.; Akram, M.; Ali, I.; Shakoor, A. Least-norm of the general solution to some system of quaternion matrix equations and its determinantal representations. In Abstract and Applied Analysis; Hindawi: London, UK, 2019.
18. Kyrchei, I. Cramer's rules for Sylvester quaternion matrix equation and its special cases. Adv. Appl. Clifford Algebr. 2018, 28, 90.
19. Liu, L.S.; Wang, Q.W.; Chen, J.F.; Xie, Y.Z. An exact solution to a quaternion matrix equation with an application. Symmetry 2022, 14, 375.
20. Mehany, M.S.; Wang, Q.W. Three symmetrical systems of coupled Sylvester-like quaternion matrix equations. Symmetry 2022, 14, 550.
21. Jiang, T.S.; Ling, S.T. On a solution of the quaternion matrix equation AX̃ − XB = C and its applications. Adv. Appl. Clifford Algebr. 2013, 23, 689–699.
22. Wang, Q.W. Bisymmetric and centrosymmetric solutions to systems of real quaternion matrix equations. Comput. Math. Appl. 2005, 49, 641–650.
23. Zhang, Y.Z.; Li, Y.; Zhao, H.; Zhao, J.; Wang, G. Least-squares bihermitian and skew bihermitian solutions of the quaternion matrix equation AXB = C. Linear Multilinear Algebra 2022, 70, 1081–1095.
24. Gao, M.; Sheng, L.; Zhang, W.H. Properties of solutions to generalized Lyapunov equations for discrete stochastic Markov jump systems. Control Decis. 2014, 29, 1693–1697.
25. Wang, W.; Zhang, Y.H.; Liang, X.Q. Exact observability of Markov jump stochastic systems with multiplicative noise. J. Shandong Univ. Sci. Technol. 2016, 35, 99–105.
26. Zhao, X.Y.; Deng, F.Q. Moment stability of nonlinear discrete stochastic systems with time-delays based on H-representation technique. Automatica 2014, 50, 530–536.
27. Sheng, L.; Gao, M.; Zhang, W.H. Observability of time-varying stochastic Markov jump systems based on H-representation. Control Decis. 2015, 30, 181–184.
28. Wu, Y.; Wang, Y.; Gunasekaran, N.; Vadivel, R. Almost sure consensus of multi-agent systems: An intermittent noise. IEEE Trans. Circuits Syst. II 2022.
29. Vadivel, R.; Suresh, R.; Hammachukiattikul, P.; Unyong, B.; Gunasekaran, N. Event-triggered L2–L∞ filtering for network-based neutral systems with time-varying delays via T–S fuzzy approach. IEEE Access 2021, 9, 145133–145147.
30. Yuan, S.F.; Liao, A.P.; Lei, Y. Least squares Hermitian solution of the matrix equation (AXB, CXD) = (E, F) with the least norm over the skew field of quaternions. Math. Comput. Model. 2008, 48, 91–100.
31. Cheng, D.Z.; Qi, H.S. Lecture Notes in Semi-Tensor Product of Matrices: Basic Theory and Multilinear Operation; Science Press: Beijing, China, 2020.
32. Dai, H. The Theory of Matrices; Science Press: Beijing, China, 2001. (In Chinese)
33. Yuan, S.F.; Cao, H.D. Least squares skew bisymmetric solution for a kind of quaternion matrix equation. Appl. Mech. Mater. 2011, 50–51, 190–194.
Figure 1. Errors in different dimensions.
Figure 2. Comparison of running time of different methods.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
