
On Relationships between a Linear Matrix Equation and Its Four Reduced Equations

1 College of Mathematics and Information Science, Shandong Technology and Business University, Yantai 264005, China
2 Shanghai Business School, College of Business and Economics, Shanghai 201400, China
* Author to whom correspondence should be addressed.
Axioms 2022, 11(9), 440; https://doi.org/10.3390/axioms11090440
Submission received: 8 June 2022 / Revised: 19 August 2022 / Accepted: 24 August 2022 / Published: 30 August 2022
(This article belongs to the Special Issue Linear Algebra: Matrix Theory, Graph Theory and Applications)

Abstract: Given the linear matrix equation AXB = C, we partition it into the form A1X11B1 + A1X12B2 + A2X21B1 + A2X22B2 = C, and then pre- and post-multiply both sides of the equation by the four orthogonal projectors generated from the coefficient matrices A1, A2, B1, and B2 to obtain four reduced linear matrix equations. Each of the four reduced equations involves just one of the four unknown submatrices X11, X12, X21, and X22. In this paper, we study the relationships between the general solution of AXB = C and the general solutions of the four reduced equations using some highly selective matrix analysis tools in relation to ranks, ranges, and generalized inverses of matrices.

1. Introduction

Throughout this article, let ℂ^{m×n} denote the collection of all m×n matrices over the field of complex numbers; A* denote the conjugate transpose of A ∈ ℂ^{m×n}; r(A) denote the rank of A, i.e., the maximum order of the invertible submatrices of A; R(A) = {Ax | x ∈ ℂ^n} and N(A) = {x ∈ ℂ^n | Ax = 0} denote the range and the null space of a matrix A ∈ ℂ^{m×n}, respectively; and I_m denote the identity matrix of order m. We let [A, B] denote a columnwise partitioned matrix consisting of the two submatrices A and B, and [A; C] denote a rowwise partitioned (stacked) matrix consisting of the two submatrices A and C; larger block matrices are written analogously, with semicolons separating block rows. For an A ∈ ℂ^{m×n}, the Moore–Penrose generalized inverse of A, denoted by A†, is defined to be the unique matrix X ∈ ℂ^{n×m} satisfying the four Penrose equations
(1) AXA = A,  (2) XAX = X,  (3) (AX)* = AX,  (4) (XA)* = XA.
The Moore–Penrose inverse of a matrix A was specially studied and recognized because AA†, A†A, I_m − AA†, and I_n − A†A are four orthogonal projectors onto the ranges and kernels of A and A*, respectively, so that we can utilize it to describe and optimize a number of algebraic properties and performances of many matrix computations. Additionally, it can be used to represent other generalized inverses by means of certain algebraic operations of A and A†. In brief, we let P_A = AA†, E_A = I_m − AA†, and F_A = I_n − A†A denote the three orthogonal projectors induced from A, which help in briefly denoting calculation processes related to generalized inverses of matrices. For more detailed information about generalized inverses of matrices, we refer to [1,2,3,4,5].
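As an illustrative aside (not part of the original development), the four Penrose equations and the induced projectors P_A, E_A, and F_A are easy to verify numerically; the sketch below assumes Python with NumPy, whose np.linalg.pinv computes the Moore–Penrose inverse.

```python
import numpy as np

rng = np.random.default_rng(0)
# A rank-deficient 5x4 matrix (rank 3), so that A is not invertible.
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))

Apinv = np.linalg.pinv(A)  # the Moore-Penrose inverse A^dagger

# The four Penrose equations (1)-(4).
assert np.allclose(A @ Apinv @ A, A)                 # (1) AXA = A
assert np.allclose(Apinv @ A @ Apinv, Apinv)         # (2) XAX = X
assert np.allclose((A @ Apinv).conj().T, A @ Apinv)  # (3) (AX)* = AX
assert np.allclose((Apinv @ A).conj().T, Apinv @ A)  # (4) (XA)* = XA

# The three orthogonal projectors induced from A.
m, n = A.shape
P_A = A @ Apinv               # projector onto R(A)
E_A = np.eye(m) - A @ Apinv   # projector onto R(A)^perp = N(A*)
F_A = np.eye(n) - Apinv @ A   # projector onto N(A)

assert np.allclose(E_A @ A, 0)  # E_A annihilates A from the left
assert np.allclose(A @ F_A, 0)  # F_A annihilates A from the right
```

Idempotence and Hermitian symmetry of P_A, E_A, and F_A follow directly from (1)–(4).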
In this paper, we consider the following linear matrix equation
AXB = C,  (1)
where A ∈ ℂ^{m×n}, B ∈ ℂ^{p×q}, and C ∈ ℂ^{m×q} are given matrices. The subject of matrix equations remains a vital and active part of matrix theory, and (1) is one of the most famous and classical algebraic equations with a single unknown matrix in mathematics and its applications. The equation was first proposed and studied by Penrose in his seminal paper [6], in which he obtained a group of exact and analytical results concerning the solvability condition and the general solution of the equation using ranks, ranges, and generalized inverses of the given matrices. These results were recognized as fundamental and classic material in the research field of linear matrix equations in the 1950s, and they prompted many in-depth investigations of linear matrix equations from theoretical and computational points of view over the past several decades; see [6,7,8,9,10,11,12,13,14] for several earlier and recent papers on this subject.
Below, we describe our motivation for the study of (1). We first partition the two coefficient matrices A and B and the unknown matrix X into the forms
A = [A1, A2],  B = [B1; B2],  X = [X11, X12; X21, X22],  (2)
where Ai ∈ ℂ^{m×n_i} and Bi ∈ ℂ^{p_i×q} for i = 1, 2, so that Xij ∈ ℂ^{n_i×p_j}. Correspondingly, (1) can be represented in the partitioned form
A1X11B1 + A1X12B2 + A2X21B1 + A2X22B2 = C.  (3)
Based on the partitioned representation in (3), we are able to study the unknown submatrices in (1) and their properties separately, such as the nonsingularity and nullity of the submatrices, the independence of the submatrices in the general solution of (1), etc. In fact, many explicit results about the expressions and properties of the submatrices Xij in (3) have been established in the literature; in particular, a family of exact and analytical formulas for calculating the maximum and minimum ranks of the Xij, i, j = 1, 2, was given in [11].
In order to reveal the algebraic properties of the four unknown matrices in (3) more deeply, we pre- and post-multiply both sides of (3) by E_{A_i} and F_{B_i}, respectively, and note that E_{A_i}A_i = 0 and B_iF_{B_i} = 0 for i = 1, 2 to obtain the following four small linear matrix equations (transformed equations):
E_{A2}A1X11B1F_{B2} = E_{A2}CF_{B2},  (4)
E_{A2}A1X12B2F_{B1} = E_{A2}CF_{B1},  (5)
E_{A1}A2X21B1F_{B2} = E_{A1}CF_{B2},  (6)
E_{A1}A2X22B2F_{B1} = E_{A1}CF_{B1}.  (7)
Apparently, each of these four matrix equations involves only one of the four unknown submatrices in (2); equations of this kind, constructed with matrix partition and transformation methods, are often called reduced equations of (3).
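The reduction mechanism can be checked numerically: multiplying by E_{A2} on the left and F_{B2} on the right annihilates every term of (3) except the X11 term. The following NumPy sketch (an illustration we add, with arbitrarily chosen dimensions) confirms this for equation (4):

```python
import numpy as np

rng = np.random.default_rng(1)
pinv = np.linalg.pinv
def E(M):  # E_M = I - M M^dagger
    return np.eye(M.shape[0]) - M @ pinv(M)
def F(M):  # F_M = I - M^dagger M
    return np.eye(M.shape[1]) - pinv(M) @ M

m, n1, n2, p1, p2, q = 6, 2, 3, 2, 3, 5
A1, A2 = rng.standard_normal((m, n1)), rng.standard_normal((m, n2))
B1, B2 = rng.standard_normal((p1, q)), rng.standard_normal((p2, q))
X11, X12 = rng.standard_normal((n1, p1)), rng.standard_normal((n1, p2))
X21, X22 = rng.standard_normal((n2, p1)), rng.standard_normal((n2, p2))

C = A1 @ X11 @ B1 + A1 @ X12 @ B2 + A2 @ X21 @ B1 + A2 @ X22 @ B2

# E_{A2} kills the A2-terms and F_{B2} kills the B2-terms,
# so only the X11 term survives on the left-hand side of (4):
assert np.allclose(E(A2) @ A1 @ X11 @ B1 @ F(B2), E(A2) @ C @ F(B2))
```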
The purpose of this article is to provide a meaningful study of the intrinsic connections between the general solution of the full equation in (3) and the general solutions of its four reduced equations in (4)–(7). The rest of the article is organized as follows. In Section 2, we introduce some known results regarding matrix rank formulas and matrix equations that we shall use later. In Section 3, we first divide (1) into a partitioned form and present the general solutions of the four reduced matrix equations associated with (1). We then discuss the relationships between the general solutions of (1) and of the reduced equations using the matrix analysis methods and techniques mentioned above. As an application, we discuss the relationships among generalized inverses of a block matrix and its four submatrices.

2. Preliminary Results

As we know, block matrices, ranks of matrices, and matrix equations are basic concepts and objects in linear algebra, while the block matrix representation method (for short, BMRM), the matrix rank method (for short, MRM), and the matrix equation method (for short, MEM) are three fundamental and traditional analytic methods that are widely used in matrix theory and applications because they give us the ability to construct and analyze various complicated matrix expressions and matrix equalities in a subtle and computationally tractable way. On the other hand, it has been realized since the 1960s that generalized inverses of matrices can be employed to construct many precise and analytical expansion formulas for calculating the ranks of various partitioned matrices. These matrix rank formulas can be used to deal with a wide variety of theoretical and computational problems in matrix theory and applications; see the seminal paper [15].
In this section, we summarize some relevant formulas and facts on ranks and ranges of matrices, as well as matrix equations and matrix functions. We first introduce a group of well-known useful formulas for calculating ranks of partitioned matrices and their consequences, which are easily understandable in the discipline of linear algebra and generalized inverses of matrices, and can be used as fundamental tools in dealing with various concrete problems in relation to ranks of matrices.
Lemma 1
([15]). Let A ∈ ℂ^{m×n}, B ∈ ℂ^{m×k}, and C ∈ ℂ^{l×n}. Then,
r[A, B] = r(A) + r(E_A B) = r(B) + r(E_B A),  (8)
r[A; C] = r(A) + r(C F_A) = r(C) + r(A F_C),  (9)
r[A, B; C, 0] = r(B) + r(C) + r(E_B A F_C).  (10)
In particular, the following results hold.
(a) r[A, B] = r(A) ⟺ R(B) ⊆ R(A) ⟺ AA†B = B ⟺ E_A B = 0.
(b) r[A; C] = r(A) ⟺ R(C*) ⊆ R(A*) ⟺ CA†A = C ⟺ CF_A = 0.
In the following, we present a group of known results that are represented in terms of generalized inverses and ranks of matrices on the solvability condition and the general solution of (1).
Lemma 2
([6]). Let AXB = C be as given in (1). Then, the following four statements are equivalent:
(i) Equation (1) is solvable for X.
(ii) R(C) ⊆ R(A) and R(C*) ⊆ R(B*).
(iii) r[A, C] = r(A) and r[B; C] = r(B).
(iv) AA†CB†B = C.
In this case, the general solution of (1) can be written in the parametric form
X = A†CB† + F_A U + V E_B,
where U, V ∈ ℂ^{n×p} are two arbitrary matrices.
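Lemma 2 can be illustrated numerically as follows (an added NumPy sketch; the equation is made solvable by construction, so condition (iv) holds and the parametric formula must solve it for any choice of U and V):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, p, q = 5, 4, 3, 6
# Rank-deficient coefficients, so that neither A nor B is invertible.
A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))
B = rng.standard_normal((p, 2)) @ rng.standard_normal((2, q))
X0 = rng.standard_normal((n, p))
C = A @ X0 @ B  # C is chosen so that AXB = C is solvable

Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(B)

# Solvability test (iv): A A^dagger C B^dagger B = C.
assert np.allclose(A @ Ap @ C @ Bp @ B, C)

# General solution X = A^dagger C B^dagger + F_A U + V E_B.
F_A = np.eye(n) - Ap @ A
E_B = np.eye(p) - B @ Bp
U, V = rng.standard_normal((n, p)), rng.standard_normal((n, p))
X = Ap @ C @ Bp + F_A @ U + V @ E_B
assert np.allclose(A @ X @ B, C)  # X solves the equation for any U, V
```

The U and V terms drop out of AXB because AF_A = 0 and E_B B = 0, which is exactly why the formula parameterizes the whole solution set.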
Lemma 3
([16,17,18]). Let
A1X1B1 + A2X2B2 = C  (11)
be a given linear matrix equation, where A1 ∈ ℂ^{m×p1}, A2 ∈ ℂ^{m×p2}, B1 ∈ ℂ^{q1×n}, B2 ∈ ℂ^{q2×n}, and C ∈ ℂ^{m×n} are known matrices. Then, the following results hold.
(a) Equation (11) is solvable for X1 ∈ ℂ^{p1×q1} and X2 ∈ ℂ^{p2×q2} if and only if the following four conditions hold:
E_A C = 0,  E_{A1}CF_{B2} = 0,  E_{A2}CF_{B1} = 0,  CF_B = 0,
where A = [A1, A2] and B = [B1; B2].
(b) Equation (11) holds for all matrices X1 ∈ ℂ^{p1×q1} and X2 ∈ ℂ^{p2×q2} if and only if any one of the following four block matrix equalities holds:
(i) [C, A1, A2] = 0,  (ii) [C, A1; B2, 0] = 0,  (iii) [C, A2; B1, 0] = 0,  (iv) [C; B1; B2] = 0.
(c) Under the assumptions that R(A1) ⊆ R(A2) and R(B2*) ⊆ R(B1*), (11) holds for all matrices X1 ∈ ℂ^{p1×q1} and X2 ∈ ℂ^{p2×q2} if and only if any one of the following three block matrix equalities holds:
(i) [C, A2] = 0,  (ii) [C; B1] = 0,  (iii) [C, A1; B2, 0] = 0.
Lemma 4
([19]). Let A1 + B1X1 + Y1C1 and A2 + B2X2 + Y2C2 be two linear matrix expressions, where A1, A2 ∈ ℂ^{m×n}, B1 ∈ ℂ^{m×p1}, B2 ∈ ℂ^{m×p2}, C1 ∈ ℂ^{q1×n}, and C2 ∈ ℂ^{q2×n} are given, and X1 ∈ ℂ^{p1×n}, Y1 ∈ ℂ^{m×q1}, X2 ∈ ℂ^{p2×n}, and Y2 ∈ ℂ^{m×q2} are four variable matrices. Additionally, let
G1 = {A1 + B1X1 + Y1C1 | X1 ∈ ℂ^{p1×n}, Y1 ∈ ℂ^{m×q1}},  G2 = {A2 + B2X2 + Y2C2 | X2 ∈ ℂ^{p2×n}, Y2 ∈ ℂ^{m×q2}}
be the two matrix sets generated from the linear matrix expressions, and let N = [A1 − A2, B1, B2; C1, 0, 0; C2, 0, 0]. Then, the following results hold.
(a) G1 ∩ G2 ≠ ∅ if and only if r(N) = r[C1; C2] + r[B1, B2].
(b) G1 ⊆ G2 if and only if any one of the three conditions (i) r(B2) = m, (ii) r(C2) = n, and (iii) r(N) = r(B2) + r(C2) holds.
(c) G1 = G2 if and only if any one of the three conditions (i) r(B1) = m, (ii) r(C1) = n, (iii) r(N) = r(B1) + r(C1) and any one of the three conditions (iv) r(B2) = m, (v) r(C2) = n, (vi) r(N) = r(B2) + r(C2) hold.
In the following, we derive the solvability conditions and the general solutions of the four reduced equations in (4)–(7).
Lemma 5.
Let AXB = C be as given in (1), and let its four reduced linear matrix equations be given as in (4)–(7). Then, the following results hold.
(a) The following five statements are equivalent:
(i) Equation (4) is solvable for X11.
(ii) R(E_{A2}CF_{B2}) ⊆ R(E_{A2}A1) and R((E_{A2}CF_{B2})*) ⊆ R((B1F_{B2})*).
(iii) r[E_{A2}A1, E_{A2}CF_{B2}] = r(E_{A2}A1) and r[B1F_{B2}; E_{A2}CF_{B2}] = r(B1F_{B2}).
(iv) r[C, A1, A2; B2, 0, 0] = r[A1, A2] + r(B2) and r[C, A2; B1, 0; B2, 0] = r[B1; B2] + r(A2).
(v) (E_{A2}A1)(E_{A2}A1)†E_{A2}CF_{B2}(B1F_{B2})†(B1F_{B2}) = E_{A2}CF_{B2}.
In this case, the general solution of (4) can be written in the parametric form
X11 = (E_{A2}A1)†E_{A2}CF_{B2}(B1F_{B2})† + (I_{n1} − (E_{A2}A1)†(E_{A2}A1))S11 + T11(I_{p1} − (B1F_{B2})(B1F_{B2})†),  (12)
where S11, T11 ∈ ℂ^{n1×p1} are two arbitrary matrices.
(b) The following five statements are equivalent:
(i) Equation (5) is solvable for X12.
(ii) R(E_{A2}CF_{B1}) ⊆ R(E_{A2}A1) and R((E_{A2}CF_{B1})*) ⊆ R((B2F_{B1})*).
(iii) r[E_{A2}A1, E_{A2}CF_{B1}] = r(E_{A2}A1) and r[B2F_{B1}; E_{A2}CF_{B1}] = r(B2F_{B1}).
(iv) r[C, A1, A2; B1, 0, 0] = r[A1, A2] + r(B1) and r[C, A2; B1, 0; B2, 0] = r[B1; B2] + r(A2).
(v) (E_{A2}A1)(E_{A2}A1)†E_{A2}CF_{B1}(B2F_{B1})†(B2F_{B1}) = E_{A2}CF_{B1}.
In this case, the general solution of (5) can be written in the parametric form
X12 = (E_{A2}A1)†E_{A2}CF_{B1}(B2F_{B1})† + (I_{n1} − (E_{A2}A1)†(E_{A2}A1))S12 + T12(I_{p2} − (B2F_{B1})(B2F_{B1})†),  (13)
where S12, T12 ∈ ℂ^{n1×p2} are two arbitrary matrices.
(c) The following five statements are equivalent:
(i) Equation (6) is solvable for X21.
(ii) R(E_{A1}CF_{B2}) ⊆ R(E_{A1}A2) and R((E_{A1}CF_{B2})*) ⊆ R((B1F_{B2})*).
(iii) r[E_{A1}A2, E_{A1}CF_{B2}] = r(E_{A1}A2) and r[B1F_{B2}; E_{A1}CF_{B2}] = r(B1F_{B2}).
(iv) r[C, A1, A2; B2, 0, 0] = r[A1, A2] + r(B2) and r[C, A1; B1, 0; B2, 0] = r[B1; B2] + r(A1).
(v) (E_{A1}A2)(E_{A1}A2)†E_{A1}CF_{B2}(B1F_{B2})†(B1F_{B2}) = E_{A1}CF_{B2}.
In this case, the general solution of (6) can be written in the parametric form
X21 = (E_{A1}A2)†E_{A1}CF_{B2}(B1F_{B2})† + (I_{n2} − (E_{A1}A2)†(E_{A1}A2))S21 + T21(I_{p1} − (B1F_{B2})(B1F_{B2})†),  (14)
where S21, T21 ∈ ℂ^{n2×p1} are two arbitrary matrices.
(d) The following five statements are equivalent:
(i) Equation (7) is solvable for X22.
(ii) R(E_{A1}CF_{B1}) ⊆ R(E_{A1}A2) and R((E_{A1}CF_{B1})*) ⊆ R((B2F_{B1})*).
(iii) r[E_{A1}A2, E_{A1}CF_{B1}] = r(E_{A1}A2) and r[B2F_{B1}; E_{A1}CF_{B1}] = r(B2F_{B1}).
(iv) r[C, A1, A2; B1, 0, 0] = r[A1, A2] + r(B1) and r[C, A1; B1, 0; B2, 0] = r[B1; B2] + r(A1).
(v) (E_{A1}A2)(E_{A1}A2)†E_{A1}CF_{B1}(B2F_{B1})†(B2F_{B1}) = E_{A1}CF_{B1}.
In this case, the general solution of (7) can be written in the parametric form
X22 = (E_{A1}A2)†E_{A1}CF_{B1}(B2F_{B1})† + (I_{n2} − (E_{A1}A2)†(E_{A1}A2))S22 + T22(I_{p2} − (B2F_{B1})(B2F_{B1})†),  (15)
where S22, T22 ∈ ℂ^{n2×p2} are two arbitrary matrices.
Proof. 
Note that (4)–(7) are special cases of the linear matrix equation in (1) with different coefficient matrices and constant matrices. In this situation, applying the results in Lemma 2, we obtain the four groups of concrete facts and formulas in (a)–(d) on the solvability conditions and the general solutions of (4)–(7). □
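As a numerical sanity check of Lemma 5(a) (an added NumPy illustration with arbitrarily chosen dimensions), the sketch below builds a solvable full equation, forms the reduced equation (4), and verifies that the parametric formula (12) solves it for arbitrary S11 and T11:

```python
import numpy as np

rng = np.random.default_rng(4)
pinv = np.linalg.pinv
def E(M): return np.eye(M.shape[0]) - M @ pinv(M)  # E_M = I - M M^dagger
def F(M): return np.eye(M.shape[1]) - pinv(M) @ M  # F_M = I - M^dagger M

m, n1, n2, p1, p2, q = 6, 2, 3, 2, 3, 5
A1, A2 = rng.standard_normal((m, n1)), rng.standard_normal((m, n2))
B1, B2 = rng.standard_normal((p1, q)), rng.standard_normal((p2, q))
# Build C from a full solution, so that (1), and hence (4), is solvable.
C = np.hstack([A1, A2]) @ rng.standard_normal((n1 + n2, p1 + p2)) @ np.vstack([B1, B2])

G, H = E(A2) @ A1, B1 @ F(B2)  # coefficients of the reduced equation (4)
D = E(A2) @ C @ F(B2)          # right-hand side of (4)

# The parametric formula (12) with arbitrary S11, T11.
S11, T11 = rng.standard_normal((n1, p1)), rng.standard_normal((n1, p1))
X11 = (pinv(G) @ D @ pinv(H)
       + (np.eye(n1) - pinv(G) @ G) @ S11
       + T11 @ (np.eye(p1) - H @ pinv(H)))
assert np.allclose(G @ X11 @ H, D)  # X11 solves the reduced equation (4)
```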

3. Main Results

It is easy to see that if (1) is solvable for X, then the four equations in (4)–(7) are all solvable for X i j as well. In this situation, we denote the collections of all submatrices in the solutions of (1) and (4)–(7) as
Gij = {Xij | A1X11B1 + A1X12B2 + A2X21B1 + A2X22B2 = C}, i, j = 1, 2,  (16)
H11 = {X11 | E_{A2}A1X11B1F_{B2} = E_{A2}CF_{B2}},  (17)
H12 = {X12 | E_{A2}A1X12B2F_{B1} = E_{A2}CF_{B1}},  (18)
H21 = {X21 | E_{A1}A2X21B1F_{B2} = E_{A1}CF_{B2}},  (19)
H22 = {X22 | E_{A1}A2X22B2F_{B1} = E_{A1}CF_{B1}},  (20)
and denote the collection of all solutions of (1) and the collection of the block matrices composed of all solutions of (4)–(7) as
G = {[X11, X12; X21, X22] | A1X11B1 + A1X12B2 + A2X21B1 + A2X22B2 = C},  (21)
H = {[X11, X12; X21, X22] | E_{A2}A1X11B1F_{B2} = E_{A2}CF_{B2}, E_{A2}A1X12B2F_{B1} = E_{A2}CF_{B1}, E_{A1}A2X21B1F_{B2} = E_{A1}CF_{B2}, E_{A1}A2X22B2F_{B1} = E_{A1}CF_{B1}}.  (22)
In view of this notation, we obtain the following results and facts on the relationships between the solution sets in (16)–(20), as well as (21) and (22).
Theorem 1.
Assume that the matrix equation in (1) is solvable for X, and let Gij, Hij, G, and H be as given in (16)–(22). Then, we have the following results.
(a) Gij = Hij always holds, i, j = 1, 2.
(b) G ⊆ H always holds.
(c) G = H if and only if [A, C] = 0, or [C; B] = 0, or both R(A1) ∩ R(A2) = {0} and R(B1*) ∩ R(B2*) = {0}.
Proof. 
The matrix set inclusions Gij ⊆ Hij follow directly from the constructions of the Hij, i, j = 1, 2. We next show that Hij ⊆ Gij holds for i, j = 1, 2. By (1), the general expression of the submatrix X11 in (3) can be written as
X11 = P1A†CB†Q1 + P1F_AU1 + V1E_BQ1,
where P1 = [I_{n1}, 0], Q1 = [I_{p1}; 0], and U1 ∈ ℂ^{n×p1} and V1 ∈ ℂ^{n1×p} are two arbitrary matrices. Thus, H11 ⊆ G11 is equivalent to the matrix set inclusion
{(E_{A2}A1)†E_{A2}CF_{B2}(B1F_{B2})† + (I_{n1} − (E_{A2}A1)†(E_{A2}A1))S11 + T11(I_{p1} − (B1F_{B2})(B1F_{B2})†)} ⊆ {P1A†CB†Q1 + P1F_AU1 + V1E_BQ1}.  (23)
By Lemma 4(b), the matrix set inclusion in (23) holds if and only if any one of the following three conditions holds:
r(P1F_A) = n1,  (24)
r(E_BQ1) = p1,  (25)
r[(E_{A2}A1)†E_{A2}CF_{B2}(B1F_{B2})† − P1A†CB†Q1, I_{n1} − (E_{A2}A1)†(E_{A2}A1), P1F_A; I_{p1} − (B1F_{B2})(B1F_{B2})†, 0, 0; E_BQ1, 0, 0] = r(P1F_A) + r(E_BQ1).  (26)
By (8)–(10) and elementary block matrix operations, we obtain the rank equalities
r(P1F_A) = r[P1; A] − r(A) = r[I_{n1}, 0; A1, A2] − r(A) = n1 + r(A2) − r(A),  (27)
r(E_BQ1) = r[Q1, B] − r(B) = r[I_{p1}, B1; 0, B2] − r(B) = p1 + r(B2) − r(B),  (28)
and, after the Moore–Penrose inverse terms are eliminated by (8)–(10) and the resulting blocks are reduced by elementary block matrix operations, the rank of the block matrix on the left-hand side of (26) evaluates to
r(A2) + r(B2) − r(A) − r(B) + n1 + p1.  (29)
Combining (27)–(29), we see that (26) is an identity for the ranks of matrices; hence, (24) and (25) are not needed in the description of (23), and the set inclusion in (23) always holds. The set inclusions H12 ⊆ G12, H21 ⊆ G21, and H22 ⊆ G22 can be shown in a similar way, so that the four matrix set equalities in (a) hold. Result (b) is obvious from the construction of H.
Substituting (12)–(15) into (3) and collecting the parameter terms gives the matrix equation
[A1 − A1(E_{A2}A1)†(E_{A2}A1), A2 − A2(E_{A1}A2)†(E_{A1}A2)] [S11, S12; S21, S22] B + A [T11, T12; T21, T22] [B1 − (B1F_{B2})(B1F_{B2})†B1; B2 − (B2F_{B1})(B2F_{B1})†B2] = M,  (30)
where
M = C − A1(E_{A2}A1)†E_{A2}CF_{B2}(B1F_{B2})†B1 − A1(E_{A2}A1)†E_{A2}CF_{B1}(B2F_{B1})†B2 − A2(E_{A1}A2)†E_{A1}CF_{B2}(B1F_{B2})†B1 − A2(E_{A1}A2)†E_{A1}CF_{B1}(B2F_{B1})†B2.
Thus, H ⊆ G, and hence G = H, if and only if (30) holds for all Sij and Tij, i, j = 1, 2. By Lemma 3(c), we see that (30) holds for all matrices Sij and Tij if and only if any one of the following three conditions holds:
[A, M] = 0,  (31)
[M; B] = 0,  (32)
[M, A1 − A1(E_{A2}A1)†(E_{A2}A1), A2 − A2(E_{A1}A2)†(E_{A1}A2); B1 − (B1F_{B2})(B1F_{B2})†B1, 0, 0; B2 − (B2F_{B1})(B2F_{B1})†B2, 0, 0] = 0.  (33)
In this situation, it is easy to verify that
[A, M] = 0 ⟺ [A, C] = 0,  (34)
[M; B] = 0 ⟺ [C; B] = 0,  (35)
and, by (8)–(10) and elementary block matrix operations, that the rank of the block matrix in (33) equals
r(A1) + r(A2) + r(B1) + r(B2) − r(A) − r(B),  (36)
so that (33) holds if and only if r(A1) + r(A2) = r(A) and r(B1) + r(B2) = r(B), i.e., if and only if R(A1) ∩ R(A2) = {0} and R(B1*) ∩ R(B2*) = {0}. Substituting (34)–(36) into (31)–(33) and then simplifying yields the results in (c). □
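The inclusion G ⊆ H in Theorem 1(b) is easy to confirm numerically: the blocks of any solution of (1), partitioned conformally with (2), solve the four reduced equations (4)–(7). An added NumPy sketch with arbitrarily chosen dimensions:

```python
import numpy as np

rng = np.random.default_rng(5)
pinv = np.linalg.pinv
def E(M): return np.eye(M.shape[0]) - M @ pinv(M)  # E_M = I - M M^dagger
def F(M): return np.eye(M.shape[1]) - pinv(M) @ M  # F_M = I - M^dagger M

m, n1, n2, p1, p2, q = 6, 2, 3, 2, 3, 5
A1, A2 = rng.standard_normal((m, n1)), rng.standard_normal((m, n2))
B1, B2 = rng.standard_normal((p1, q)), rng.standard_normal((p2, q))
A, B = np.hstack([A1, A2]), np.vstack([B1, B2])
C = A @ rng.standard_normal((n1 + n2, p1 + p2)) @ B  # solvable by construction

X = pinv(A) @ C @ pinv(B)  # one particular solution of A X B = C
X11, X12 = X[:n1, :p1], X[:n1, p1:]
X21, X22 = X[n1:, :p1], X[n1:, p1:]

# Each block of a solution of (1) solves the matching reduced equation:
assert np.allclose(E(A2) @ A1 @ X11 @ B1 @ F(B2), E(A2) @ C @ F(B2))  # (4)
assert np.allclose(E(A2) @ A1 @ X12 @ B2 @ F(B1), E(A2) @ C @ F(B1))  # (5)
assert np.allclose(E(A1) @ A2 @ X21 @ B1 @ F(B2), E(A1) @ C @ F(B2))  # (6)
assert np.allclose(E(A1) @ A2 @ X22 @ B2 @ F(B1), E(A1) @ C @ F(B1))  # (7)
```

The reverse inclusion is exactly what the rank analysis in the proof above characterizes.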
One of the fundamental research topics in the domain of generalized inverses is to characterize relationships between generalized inverses of two matrices. As one example in this regard, we let A ∈ ℂ^{m×n}, and partition A and its generalized inverse A⁻ (a solution X of AXA = A) as
A = [P1, P2] = [Q1; Q2],  A⁻ = X = [X11, X12; X21, X22],  Pi ∈ ℂ^{m×n_i}, Qi ∈ ℂ^{m_i×n},
for i = 1, 2. Additionally, denote
Gij = {Xij | AXA = A}, i, j = 1, 2,  (37)
H11 = {X11 | E_{P2}P1X11Q1F_{Q2} = E_{P2}AF_{Q2}},  (38)
H12 = {X12 | E_{P2}P1X12Q2F_{Q1} = E_{P2}AF_{Q1}},  (39)
H21 = {X21 | E_{P1}P2X21Q1F_{Q2} = E_{P1}AF_{Q2}},  (40)
H22 = {X22 | E_{P1}P2X22Q2F_{Q1} = E_{P1}AF_{Q1}},  (41)
and
G = {X | AXA = A},  (42)
H = {[X11, X12; X21, X22] | E_{P2}P1X11Q1F_{Q2} = E_{P2}AF_{Q2}, E_{P2}P1X12Q2F_{Q1} = E_{P2}AF_{Q1}, E_{P1}P2X21Q1F_{Q2} = E_{P1}AF_{Q2}, E_{P1}P2X22Q2F_{Q1} = E_{P1}AF_{Q1}}.  (43)
Referring to Theorem 1, we obtain the following result.
Corollary 1.
Let A C m × n with A 0 , and let G i j ,   H i j ,   G , and H be as given in (37)–(43). Then, we have the following results.
(a) Gij = Hij always holds, i, j = 1, 2.
(b) G ⊆ H always holds.
(c) G = H if and only if R(P1) ∩ R(P2) = {0} and R(Q1*) ∩ R(Q2*) = {0}.
Proof. 
Setting A = B = C in Lemma 5 and Theorem 1 and then simplifying lead to the results in (a), (b), and (c). □

4. Concluding Remarks

In the preceding sections, we described and studied a number of theoretical problems regarding the relationships between the full matrix equation in (1) and its four reduced equations in (4)–(7) through the well-organized employment of various well-known or newly established formulas and facts in relation to ranks, ranges, and generalized inverses of matrices. The obtained results provide some profound insights into the construction of the general solution of (1), and therefore they can be viewed as original theoretical contributions with direct or potential value in applications. We believe that this study enables us to use the reduced equations instead of the full equation under certain assumptions, and we hope in turn that this can improve computational efficiency in many issues related to matrix equations.
In addition to (4)–(7), we can pre-multiply the matrix equation in (3) by A1(E_{A2}A1)† and A2(E_{A1}A2)† and post-multiply it by (B1F_{B2})†B1 and (B2F_{B1})†B2, respectively, and note that (E_{A_i}A_j)†A_i = 0 and B_i(B_jF_{B_i})† = 0 for i ≠ j, i, j = 1, 2, to obtain the following group of new reduced linear matrix equations:
A1(E_{A2}A1)†A1X11B1(B1F_{B2})†B1 = A1(E_{A2}A1)†C(B1F_{B2})†B1,  (44)
A1(E_{A2}A1)†A1X12B2(B2F_{B1})†B2 = A1(E_{A2}A1)†C(B2F_{B1})†B2,  (45)
A2(E_{A1}A2)†A2X21B1(B1F_{B2})†B1 = A2(E_{A1}A2)†C(B1F_{B2})†B1,  (46)
A2(E_{A1}A2)†A2X22B2(B2F_{B1})†B2 = A2(E_{A1}A2)†C(B2F_{B1})†B2.  (47)
In comparison, the constructions of these four reduced equations are different from those of the four equations in (4)–(7); hence, the two groups of reduced equations are not necessarily equivalent. In this situation, it would be of interest to describe the relationships between the general solution of the matrix equation in (3) and the general solutions of the four reduced equations in (44)–(47) by means of the powerful and effective matrix rank methodology.
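The annihilation identities quoted above, (E_{A_i}A_j)†A_i = 0 and B_i(B_jF_{B_i})† = 0 for i ≠ j, and the resulting equation (44) can be checked numerically; the following is an added NumPy sketch with arbitrarily chosen dimensions:

```python
import numpy as np

rng = np.random.default_rng(6)
pinv = np.linalg.pinv
def E(M): return np.eye(M.shape[0]) - M @ pinv(M)  # E_M = I - M M^dagger
def F(M): return np.eye(M.shape[1]) - pinv(M) @ M  # F_M = I - M^dagger M

m, n1, n2, p1, p2, q = 6, 2, 3, 2, 3, 5
A1, A2 = rng.standard_normal((m, n1)), rng.standard_normal((m, n2))
B1, B2 = rng.standard_normal((p1, q)), rng.standard_normal((p2, q))

# The annihilation identities behind the new reduced equations:
assert np.allclose(pinv(E(A2) @ A1) @ A2, 0)  # (E_{A2}A1)^dagger A2 = 0
assert np.allclose(B2 @ pinv(B1 @ F(B2)), 0)  # B2 (B1 F_{B2})^dagger = 0

# They isolate X11 in (44): build C from arbitrary blocks and compare sides.
X11, X12 = rng.standard_normal((n1, p1)), rng.standard_normal((n1, p2))
X21, X22 = rng.standard_normal((n2, p1)), rng.standard_normal((n2, p2))
C = A1 @ X11 @ B1 + A1 @ X12 @ B2 + A2 @ X21 @ B1 + A2 @ X22 @ B2

L = A1 @ pinv(E(A2) @ A1)  # left multiplier
R = pinv(B1 @ F(B2)) @ B1  # right multiplier
assert np.allclose(L @ A1 @ X11 @ B1 @ R, L @ C @ R)  # equation (44)
```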
Moreover, we are able to decompose the matrix equation in (1) into the form
[A1, A2, …, Ak] [X11, X12, …, X1s; X21, X22, …, X2s; …; Xk1, Xk2, …, Xks] [B1; B2; …; Bs] = C,  (48)
and then to construct a family of reduced linear matrix equations by the same multiplication transformation method. In this light, it is necessary to explore the relationships between the general solution of (48) and the general solutions of these reduced linear matrix equations by means of the methods and techniques employed in this article, although a mass of complicated matrix calculations will be involved in such a study.
Finally, we remark that, prompted by the comparison problem described in this article, there are many similar topics that can be proposed and examined concerning the connections of solutions of other kinds of matrix equations and their reduced equations.

Author Contributions

Conceptualization, Y.T.; methodology, B.J. and Y.T.; validation, B.J., Y.T. and R.Y.; formal analysis, B.J., Y.T. and R.Y.; investigation, B.J., Y.T. and R.Y.; resources, Y.T.; writing—original draft preparation, B.J. and Y.T.; writing—review and editing, B.J. and Y.T.; supervision, Y.T.; project administration, B.J. and Y.T.; funding acquisition, B.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the Shandong Provincial Natural Science Foundation #ZR2019MA065.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors wish to thank anonymous referees for their helpful comments on an earlier version of this article.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ben-Israel, A.; Greville, T.N.E. Generalized Inverses: Theory and Applications, 2nd ed.; Springer: New York, NY, USA, 2003.
2. Bernstein, D.S. Scalar, Vector, and Matrix Mathematics: Theory, Facts, and Formulas, Revised and Expanded Edition, 3rd ed.; Princeton University Press: Princeton, NJ, USA, 2018.
3. Campbell, S.L.; Meyer, C.D., Jr. Generalized Inverses of Linear Transformations; SIAM: Philadelphia, PA, USA, 2009.
4. Puntanen, S.; Styan, G.P.H.; Isotalo, J. Matrix Tricks for Linear Statistical Models: Our Personal Top Twenty; Springer: Berlin/Heidelberg, Germany, 2011.
5. Rao, C.R.; Mitra, S.K. Generalized Inverse of Matrices and Its Applications; Wiley: New York, NY, USA, 1971.
6. Penrose, R. A generalized inverse for matrices. Proc. Camb. Phil. Soc. 1955, 51, 406–413.
7. Arias, M.L.; Gonzalez, M.C. Positive solutions to operator equations AXB = C. Linear Algebra Appl. 2010, 433, 1194–1202.
8. Cvetković-Ilić, D.S. Re-nnd solutions of the matrix equation AXB = C. J. Austral. Math. Soc. 2008, 84, 63–72.
9. Liu, Y. Ranks of least squares solutions of the matrix equation AXB = C. Comput. Math. Appl. 2008, 55, 1270–1278.
10. Peng, Z. New matrix iterative methods for constraint solutions of the matrix equation AXB = C. J. Comput. Appl. Math. 2015, 230, 726–735.
11. Tian, Y. Some properties of submatrices in a solution to the matrix equation AXB = C with applications. J. Franklin Inst. 2009, 346, 557–569.
12. Tian, Z.; Li, X.; Dong, Y.; Liu, Z. Some relaxed iteration methods for solving matrix equation AXB = C. Appl. Math. Comput. 2021, 403, 126189.
13. Xu, J.; Zhang, H.; Liu, L.; Zhang, H.; Yuan, Y. A unified treatment for the restricted solutions of the matrix equation AXB = C. AIMS Math. 2020, 5, 6594–6608.
14. Zhang, F.; Li, Y.; Guo, W.; Zhao, J. Least squares solutions with special structure to the linear matrix equation AXB = C. Appl. Math. Comput. 2011, 217, 10049–10057.
15. Marsaglia, G.; Styan, G.P.H. Equalities and inequalities for ranks of matrices. Linear Multilinear Algebra 1974, 2, 269–292.
16. Özgüler, A.B. The matrix equation AXB + CYD = E over a principal ideal domain. SIAM J. Matrix Anal. Appl. 1991, 12, 581–591.
17. Jiang, B.; Tian, Y. Necessary and sufficient conditions for nonlinear matrix identities to always hold. Aequat. Math. 2019, 93, 587–600.
18. Tian, Y. Upper and lower bounds for ranks of matrix expressions using generalized inverses. Linear Algebra Appl. 2002, 355, 187–214.
19. Tian, Y. Relations between matrix sets generated from linear matrix expressions and their applications. Comput. Math. Appl. 2011, 61, 1493–1501.

Jiang, B.; Tian, Y.; Yuan, R. On Relationships between a Linear Matrix Equation and Its Four Reduced Equations. Axioms 2022, 11, 440. https://doi.org/10.3390/axioms11090440
