Article

Minimal Rank Properties of Outer Inverses with Prescribed Range and Null Space

by
Dijana Mosić
1,
Predrag S. Stanimirović
1,2,* and
Spyridon D. Mourtas
2,3
1
Faculty of Sciences and Mathematics, University of Niš, Višegradska 33, 18000 Niš, Serbia
2
Laboratory “Hybrid Methods of Modelling and Optimization in Complex Systems”, Siberian Federal University, Prosp. Svobodny 79, 660041 Krasnoyarsk, Russia
3
Department of Economics, Mathematics-Informatics and Statistics-Econometrics, National and Kapodistrian University of Athens, Sofokleous 1 Street, 10559 Athens, Greece
*
Author to whom correspondence should be addressed.
Mathematics 2023, 11(7), 1732; https://doi.org/10.3390/math11071732
Submission received: 21 February 2023 / Revised: 28 March 2023 / Accepted: 4 April 2023 / Published: 5 April 2023

Abstract:
The purpose of this paper is to investigate the solvability of systems of constrained matrix equations in the form of constrained minimization problems. The main novelty of this paper is the unification of solutions of the considered matrix equations with corresponding minimization problems. In a particular case, we extend some well-known results and give several new results for the weak Drazin inverse. The main characterizations of the Drazin inverse, the group inverse and the Moore–Penrose inverse are obtained as consequences.
MSC:
15A09; 15A24; 15A23; 65F20

1. Introduction

The set of m × n matrices over the complex numbers C will be denoted by C m × n . As usual, A * , rk ( A ) , R ( A ) and N ( A ) will represent the conjugate transpose, rank, range (column space) and kernel (null space), respectively. Furthermore, C r m × n = { X ∈ C m × n | rk ( X ) = r } .
Generalized inverses are very powerful tools in many branches of mathematics, technology and engineering. The most frequent application of generalized inverses is in finding solutions of many matrix equations and systems of linear equations. There are many other mathematical and technical disciplines in which generalized inverses play an important role. Some of them are estimation theory (regression), computing the polar decomposition, electrical circuit (network) theory, automatic control theory, filtering, difference equations, pattern recognition and image restoration. Since 1955, thousands of papers have been published discussing various theoretical and computational features of generalized inverses and their applications. For the sake of completeness, we survey the definitions of generalized inverses related to our research.
For arbitrary A ∈ C m × n , the Moore–Penrose inverse of A is the unique matrix X ∈ C n × m (denoted by A † ) satisfying [1]:
( 1 ) A X A = A , ( 2 ) X A X = X , ( 3 ) ( A X ) * = A X , ( 4 ) ( X A ) * = X A .
The symbol A { ρ } stands for the set of all matrices that satisfy the equations indexed by ρ ⊆ { 1 , 2 , 3 , 4 } . A ρ-inverse of A, denoted by A ( ρ ) , is any matrix from A { ρ } . Notice that A { 1 , 2 , 3 , 4 } = { A † } .
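As a numerical aside (not part of the original text), the four Penrose equations (1)–(4) can be checked for the pseudoinverse returned by NumPy; the matrix A below is an arbitrary illustration.

```python
import numpy as np

# Numerical check of the four Penrose equations (1)-(4) for the
# Moore-Penrose inverse computed by numpy.linalg.pinv; A is arbitrary.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])
X = np.linalg.pinv(A)                         # candidate for A^dagger

ok1 = np.allclose(A @ X @ A, A)               # (1) A X A = A
ok2 = np.allclose(X @ A @ X, X)               # (2) X A X = X
ok3 = np.allclose((A @ X).conj().T, A @ X)    # (3) (A X)* = A X
ok4 = np.allclose((X @ A).conj().T, X @ A)    # (4) (X A)* = X A
print(ok1, ok2, ok3, ok4)
```

All four conditions hold up to floating-point tolerance.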
The class consisting of outer generalized inverses ( { 2 } -inverses) is defined for arbitrary A C m × n by
A { 2 } = { X ∈ C n × m | X A X = X } .
Immediately from the definition, it can be concluded that rk ( A ( 2 ) ) ≤ rk ( A ) . Furthermore, it is known that an arbitrary X ∈ A { 1 , 2 } satisfies rk ( X ) = rk ( A ) . The outer inverses have many applications in statistics [2,3], in iterative schemes for solving nonlinear equations [4], in stable approximations of ill-posed problems and in linear and nonlinear problems involving rank-deficient generalized inverses [5].
Consider A ∈ C m × n , B ∈ C n × k and C ∈ C l × m . An outer inverse of A with predefined range R ( B ) (denoted by A R ( B ) , * ( 2 ) ) is a solution to the following constrained equation:
X A X = X , R ( X ) = R ( B ) .
The class of outer inverses with the predefined range R ( B ) is denoted by A { 2 } R ( B ) , * . Furthermore, an outer inverse of A with given kernel N ( C ) (denoted by A * , N ( C ) ( 2 ) ) is a solution to the following constrained equation:
X A X = X , N ( X ) = N ( C ) .
The symbol A { 2 } * , N ( C ) will stand for the class of outer inverses with the predefined kernel N ( C ) . Finally, an outer inverse of A with given image R ( B ) and kernel N ( C ) (denoted by A R ( B ) , N ( C ) ( 2 ) ) is the unique solution of the constrained equation
X A X = X , R ( X ) = R ( B ) , N ( X ) = N ( C ) .
The key characterizations, representations and computational procedures for outer inverses with prescribed range and/or kernel were discovered in [6,7,8,9,10] and other research articles cited in these references. More details can be found in the monographs [4,11,12]. Full rank representations of outer inverses are given in [13,14]. Characterizations, representations and computational procedures based on appropriate matrix equations and ranks of involved matrices are proposed in [15,16,17]. Iterative computational algorithms were developed in [18,19,20,21,22,23].
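A minimal numerical sketch (not from the paper) of the classical Urquhart-type full-rank construction X = B (C A B)^† C, which, under the rank condition rk ( C A B ) = rk ( B ) = rk ( C ) , produces the outer inverse with range R ( B ) and kernel N ( C ) ; the matrices below are illustrative choices.

```python
import numpy as np

# Sketch of the construction X = B (C A B)^+ C; under the rank condition
# rk(CAB) = rk(B) = rk(C) this yields the outer inverse A^(2)_{R(B),N(C)}.
A = np.diag([1.0, 2.0, 3.0, 0.0])
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0],
              [0.0, 0.0]])                    # prescribes the range R(B)
C = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])          # prescribes the kernel N(C)
X = B @ np.linalg.pinv(C @ A @ B) @ C

is_outer = np.allclose(X @ A @ X, X)          # X A X = X
rank_ok = np.linalg.matrix_rank(X) == np.linalg.matrix_rank(B)
print(is_outer, rank_ok)
```

Both checks succeed for these data, illustrating that the constructed X is an outer inverse of the prescribed rank.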
Recall that
A † = A R ( A * ) , N ( A * ) ( 2 ) .
For A ∈ C n × n , the Drazin inverse A D of A is the unique matrix X ∈ C n × n with the following properties:
A k + 1 X = A k , X A X = X , A X = X A ,
where k = ind ( A ) denotes the index of A, that is, the smallest nonnegative integer satisfying rk ( A k ) = rk ( A k + 1 ) . When ind ( A ) = 1 , the Drazin inverse reduces to the group inverse: A D = A # . Notice that
A D = A R ( A k ) , N ( A k ) ( 2 ) and A # = A R ( A ) , N ( A ) ( 2 ) .
The Drazin inverse proved to be useful in the investigation of finite Markov chains, in the analysis of singular linear difference equations and differential equations [24], in cryptography [25] and elsewhere.
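As a hedged computational sketch (not from the paper), the Drazin inverse can be obtained from the classical formula A D = A p ( A 2 p + 1 ) † A p , valid for any p ≥ ind ( A ) ; taking p = n is always safe.

```python
import numpy as np

# Sketch: Drazin inverse via A^D = A^p (A^(2p+1))^+ A^p for p >= ind(A).
def drazin(A):
    n = A.shape[0]
    Ap = np.linalg.matrix_power(A, n)         # p = n >= ind(A) always works
    return Ap @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1)) @ Ap

A = np.array([[1.0, 1.0],
              [0.0, 0.0]])                    # ind(A) = 1, so A^D = A^#
X = drazin(A)
commutes = np.allclose(A @ X, X @ A)          # A X = X A
outer = np.allclose(X @ A @ X, X)             # X A X = X
drazin_eq = np.allclose(A @ A @ X, A)         # A^(k+1) X = A^k with k = 1
print(commutes, outer, drazin_eq)
```

The three defining properties of the Drazin inverse all hold for the computed X.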
It is important to mention that some popular generalized inverses are outer inverses with a predefined range and kernel. One of the most popular is the core-EP inverse of square matrices, introduced in [26]. For a square matrix A of index k = ind ( A ) , its core-EP inverse is the unique matrix W defined by
W A W = W , R ( W ) = R ( W * ) = R ( A k ) .
In the case ind ( A ) = 1 , the core-EP inverse reduces to the core inverse [27]. The DMP inverse A D , † = A D A A † is defined in [28] as the unique outer inverse satisfying A k X = A k A † and X A = A D A . For an arbitrary positive integer m, the m-weak group inverse (m-WGI) of a square matrix A is defined as the unique solution to A X = W m A m and A X 2 = X [29], where W is the core-EP inverse of A, and it can be given by W m + 1 A m . For m = 1 , the m-WGI becomes the weak group inverse, proposed in [30]. For m = 2 , the m-WGI reduces to the generalized group inverse, proposed in [31].
The definition of the weak Drazin inverse was presented in [32] as a weakened form of the Drazin inverse. Although a weak Drazin inverse lacks some properties of the Drazin inverse, such as uniqueness, it is easier to find a weak Drazin inverse than the Drazin inverse. Furthermore, the weak Drazin inverse may be applied instead of the Drazin inverse, for example, in investigating differential equations or Markov chains, as well as in applications of its own.
Consider a square matrix A ∈ C n × n of index k = ind ( A ) . Then, a matrix X ∈ C n × n represents [32]
  • A weak Drazin inverse of A when
    X A k + 1 = A k ;
  • A minimal rank weak Drazin inverse of A when
    X A k + 1 = A k and rk ( X ) = rk ( A D ) ;
  • A commuting weak Drazin inverse of A when
    X A k + 1 = A k and A X = X A .
Recall that, by [32], the Drazin inverse is the unique minimal rank commuting weak Drazin inverse. Important characterizations of the minimal rank weak Drazin inverse were given in [33]. Furthermore, it was proven in [33] that many recently defined generalized inverses are special cases of the minimal rank weak Drazin inverse.
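The distinction between a weak Drazin inverse and a minimal rank one can be illustrated numerically (an example of our own, not from the cited works): for the idempotent matrix A below, ind ( A ) = 1 and the condition X A 2 = A admits solutions of different ranks.

```python
import numpy as np

# For this idempotent A, A^2 = A, so ind(A) = 1 and A^D = A^# = A.
A = np.array([[1.0, 1.0],
              [0.0, 0.0]])
I = np.eye(2)

weak = np.allclose(I @ A @ A, A)              # X = I satisfies X A^2 = A
not_minimal = np.linalg.matrix_rank(I) > np.linalg.matrix_rank(A)  # rank 2 > 1
minimal = np.allclose(A @ A @ A, A) and np.linalg.matrix_rank(A) == 1
print(weak, not_minimal, minimal)             # X = A is the minimal rank choice
```

So X = I is a weak Drazin inverse of A but not a minimal rank one, while X = A attains the minimal rank rk ( A D ) = 1 .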
The conditions for solvability of matrix equations and studying their explicit solutions were applied in physics, mechanics, control theory and many different areas [4,11]. Motivated by theoretical and applied importance of studies involving the solvability of systems of equations and forms of their solutions, we continue to study this topic.
The aim of this paper is to investigate the solvability of systems of matrix equations which are weaker than the systems considered in [32,33], and to solve some constrained minimization problems. The main novelty of this paper is the unification of solutions of the considered matrix equations with corresponding minimization problems. Consequently, we extend some well-known results and provide several new results for the weak Drazin inverse. Furthermore, some characterizations of the Drazin inverse, the group inverse and the Moore–Penrose inverse are obtained as consequences.

2. Motivation and Research Highlights

The detailed explanations of our research goals follow in this section.
(1)
For X ∈ C n × m , A ∈ C m × n and B ∈ C n × k , the first problem we consider is to find equivalent conditions for solvability of the constrained system
X A B = B and rk ( X ) = rk ( B ) .
We will prove that X is a solution to (5) if and only if (iff) X ∈ A { 2 } R ( B ) , * .
(2)
In the case that system (5) is consistent, we solve the minimization model
min rk ( X ) subject to X A B = B .
(3)
We investigate the solvability of system (5) under additional assumptions. Precisely, we add the constraint rk ( X ) = rk ( B ) = rk ( A ) , or B A X = B , or A X = X A . A minimal rank outer inverse X with prescribed range R ( B ) which commutes with A will be called a commuting minimal rank outer inverse with prescribed range R ( B ) .
(4)
Suppose that A ∈ C m × n , X ∈ C n × m and C ∈ C l × m . We study the solvability of the system
C A X = C and rk ( X ) = rk ( C ) .
Since we will show that X is a solution to (7) iff X ∈ A { 2 } * , N ( C ) , a solution X to (7) is called a minimal rank outer inverse with prescribed kernel N ( C ) .
(5)
If the system (7) is consistent, the minimization problem
min rk ( X ) subject to C A X = C
can be solved.
(6)
Special cases of the system (7) will be the topic of this research. A minimal rank outer inverse X with prescribed kernel N ( C ) which commutes with A, will be called a commuting minimal rank outer inverse with prescribed kernel N ( C ) .
(7)
Characterizations of the Drazin inverse, the group inverse and the Moore–Penrose inverse are obtained by applying our results.
(8)
The solvability of the system which contains equalities from both systems (5) and (7) is considered. Precisely, in the case that A ∈ C m × n , X ∈ C n × m , B ∈ C n × k and C ∈ C l × m , we study the system
X A B = B , C A X = C and rk ( X ) = rk ( B ) = rk ( C ) .
We will observe that X is a solution to (9) iff X = A R ( B ) , N ( C ) ( 2 ) , and a solution X to (9) is called a minimal rank outer inverse with predefined range R ( B ) and kernel N ( C ) . Furthermore, we investigate the solvability of the system (9) under additional conditions.
The following is the organization of this paper. Preliminary information and the motivation of our research are presented in Section 2. Section 3 contains investigations related to the solvability of the system (5) and the minimization problem (6), as well as the solvability of special cases of the system (5). As consequences, we also present characterizations for the Drazin inverse, the group inverse and the Moore–Penrose inverse. The system (7) and the minimization problem (8) are considered in Section 4. Section 5 involves the solvability of the system (9) and its particular cases. Concluding remarks are part of Section 6.

3. Minimal Rank Outer Inverses with Prescribed Range

The main goals of this section are to consider the solvability of the system (5) and the minimization problem (6). In the first theorem, we will observe that X is a solution to (5) iff X is an outer inverse of A with the predefined range R ( B ) . Furthermore, we give some systems of matrix equations which are equivalent to (5).
Lemma 1.
(a)  If A ∈ C m × n and B ∈ C n × k , it follows
there exists X ∈ C n × m such that X A B = B ⟺ rk ( A B ) = rk ( B ) .
(b) For A ∈ C m × n and C ∈ C l × m , it follows
there exists X ∈ C n × m such that C A X = C ⟺ rk ( C A ) = rk ( C ) .
Proof. 
(a) The equality X A B = B gives rk ( B ) ≤ rk ( A B ) ≤ rk ( B ) , i.e., rk ( B ) = rk ( A B ) .
On the other hand, rk ( B ) = rk ( A B ) implies B ( A B ) ( 1 ) A B = B (see, for example, [11] (p. 33)), so X A B = B holds in the case X = B ( A B ) ( 1 ) .
(b) This statement can be verified using the conjugate transpose matrices in part (a). □
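The construction used in part (a) can be sketched numerically (an illustration of our own): when rk ( A B ) = rk ( B ) , the choice X = B ( A B ) ( 1 ) solves X A B = B , and the Moore–Penrose inverse serves as a concrete { 1 } -inverse.

```python
import numpy as np

# Lemma 1 (a): if rk(AB) = rk(B), then X = B (AB)^(1) solves X A B = B;
# the pseudoinverse (AB)^+ is one admissible {1}-inverse of AB.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
B = np.array([[1.0],
              [0.0],
              [0.0]])
AB = A @ B
solvable = np.linalg.matrix_rank(AB) == np.linalg.matrix_rank(B)
X = B @ np.linalg.pinv(AB)                    # X = B (AB)^+, a 3x2 matrix
solves = np.allclose(X @ A @ B, B)
print(solvable, solves)
```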
Theorem 1.
Suppose that A ∈ C m × n , X ∈ C n × m and B ∈ C n × k .
(a) 
The subsequent statements are mutually equivalent:
(i)
X A B = B and rk ( X ) = rk ( B ) ;
(ii)
X A B = B and R ( X ) = R ( B ) ;
(iii)
X is a solution to (2), i.e., X ∈ A { 2 } R ( B ) , * ;
(iv)
X = B B † X and X A B = B ;
(v)
X A X = X , X = B B † X and X A B = B .
(b) 
Additionally,
min { rk ( X ) | X A B = B } = rk ( B ) , { rk ( X ) | X A B = B } ⊆ [ rk ( B ) , min { n , m } ] , { rk ( X ) | X ∈ A { 2 } , X A B = B } ⊆ [ rk ( B ) , rk ( A ) ]
and the following set identities are valid:
A { 2 } R ( B ) , * = { X ∈ C n × m | X A B = B , rk ( X ) = rk ( B ) }
A { 2 } R ( B ) , * = { X = B ( A B ) † + Y ( I − ( A B ) ( A B ) † ) | Y ∈ C n × m , X A B = B , rk ( X ) = rk ( B ) } .
Proof. 
(a) (i) ⇒ (ii): From X A B = B , it follows R ( B ) ⊆ R ( X ) . Furthermore, rk ( X ) = rk ( B ) gives R ( X ) = R ( B ) .
(ii) ⇒ (iii): The assumption R ( X ) = R ( B ) implies X = B W 1 for some W 1 ∈ C k × m . Then X A X = X A B W 1 = B W 1 = X .
(iii) ⇔ (iv) ⇔ (v): It follows by (Theorem 2.3 [34]).
(v) ⇒ (i): From X = B B † X and X A B = B , it follows rk ( X ) = rk ( B ) . Furthermore, X A B = B B † X A B = B B † B = B .
(b) It is straightforward that X A X = X implies rk ( X ) ≤ rk ( A ) . On the other hand, X A B = B implies rk ( X ) ≥ rk ( B ) . So, (12) holds.
The set identity (13) follows from (i) ⟺ (iii). Finally, the set identities (14) follow from the general solution to the matrix equation X A B = B [4,12] and the conditions (i)–(v). □
Note that the assumptions X = B B † X and X A B = B , exploited in Theorem 1, can be replaced by some of the equivalent requirements presented in (Corollary 2.4 [34]).
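The parametrization in the set identity (14) can be sketched numerically (an illustration of our own with generic random data): every matrix X = B ( A B ) † + Y ( I − ( A B ) ( A B ) † ) with arbitrary Y solves X A B = B , and restricting to rk ( X ) = rk ( B ) selects the outer inverses with range R ( B ) .

```python
import numpy as np

# Every X = B (AB)^+ + Y (I - (AB)(AB)^+) solves X A B = B, for any Y.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 2))               # generic data: rk(AB) = rk(B)
AB = A @ B
P = AB @ np.linalg.pinv(AB)                   # orthogonal projector onto R(AB)
Y = rng.standard_normal((3, 4))               # arbitrary free parameter
X = B @ np.linalg.pinv(AB) + Y @ (np.eye(4) - P)
solves = np.allclose(X @ A @ B, B)            # holds for every choice of Y
print(solves)
```

The term Y ( I − ( A B ) ( A B ) † ) is annihilated by A B , which is why X A B = B holds regardless of Y.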
Proposition 1.
If A ∈ C m × n and B ∈ C n × k , it follows
there exists X ∈ C n × m satisfying X A B = B and rk ( X ) = rk ( B ) ⟺ rk ( A B ) = rk ( B ) .
Proof. 
If there exists X satisfying X A B = B and rk ( X ) = rk ( B ) , by Lemma 1, we conclude rk ( A B ) = rk ( B ) .
In addition, the assumption rk ( A B ) = rk ( B ) and (Theorem 3 [15]) imply the existence of X ∈ A { 2 } R ( B ) , * . By Theorem 1, it follows X A B = B and rk ( X ) = rk ( B ) . □
Because of (12), a solution X to (5) is called a minimal rank outer inverse with prescribed range R ( B ) . Note that a weak Drazin inverse is a specific solution to (5) for m = n , B = A k and k = ind ( A ) . So, we study solvability of a more general system than the system whose solution is the weak Drazin inverse.
For the particular settings B = A k , k = ind ( A ) in Theorem 1, we obtain the next result which involves characterizations of the minimal rank weak Drazin inverse.
Corollary 1 generalizes results from [33], since the statements (i)–(iii) of Corollary 1 are proposed in [33].
Corollary 1.
For A , X ∈ C n × n and k ∈ N , the next assertions are equivalent:
(i)
X A k + 1 = A k and rk ( X ) = rk ( A k ) ;
(ii)
X A k + 1 = A k and R ( X ) = R ( A k ) ;
(iii)
X ∈ A { 2 } R ( A k ) , * ;
(iv)
X = A k ( A k ) † X and X A k + 1 = A k ;
(v)
X A X = X , X = A k ( A k ) † X and X A k + 1 = A k ;
(vi)
X is a minimal rank weak Drazin inverse of A.
The assumption rk ( X ) = rk ( B ) = rk ( A ) in the system (5) reduces the results of Theorem 1 to the smaller class of reflexive ( { 1 , 2 } -) inverses A { 1 , 2 } R ( B ) , * .
Theorem 2.
Suppose that A ∈ C m × n , X ∈ C n × m and B ∈ C n × k .
(a) 
The subsequent statements are mutually equivalent:
(i)
X A B = B and rk ( X ) = rk ( B ) = rk ( A ) ;
(ii)
X A X = X , R ( X ) = R ( B ) and R ( A B ) = R ( A ) ;
(iii)
X A X = X , R ( X ) = R ( B ) and R ( A ) ⊆ R ( A B ) ;
(iv)
X A X = X , R ( X ) = R ( B ) and A = A B ( A B ) † A ;
(v)
X A X = X , A X A = A and R ( X ) = R ( B ) , i.e., X A { 1 , 2 } R ( B ) , * .
(b) 
In addition,
{ X ∈ C n × m | X A B = B , rk ( X ) = rk ( B ) = rk ( A ) } = A { 1 , 2 } R ( B ) , * .
Proof. 
(a) (i) ⇒ (ii): According to Theorem 1, X A X = X and R ( X ) = R ( B ) . Using (Theorem 3 [15]), rk ( A B ) = rk ( B ) = rk ( A ) . Therefore, the fact R ( A B ) ⊆ R ( A ) gives R ( A B ) = R ( A ) .
(ii) ⇔ (iii) ⇔ (iv): These equivalences are clear.
(ii) ⇒ (v): It is clear, by Theorem 1, that X A B = B . For some V ∈ C k × n , the assumption R ( A B ) = R ( A ) implies
A = A B V = A X ( A B V ) = A X A .
(v) ⇒ (i): From the equalities X A X = X and A X A = A , we deduce that rk ( X ) = rk ( A ) . The hypothesis R ( X ) = R ( B ) yields rk ( X ) = rk ( B ) and
B = X T = X A ( X T ) = X A B ,
for some T ∈ C m × k .
The proof of part (b) follows from the results of part (a) of this theorem. The matrices X satisfying X A B = B and rk ( X ) = rk ( B ) are outer inverses of rank rk ( X ) = rk ( B ) ≤ rk ( A ) . In the case rk ( X ) = rk ( B ) = rk ( A ) , outer inverses become { 1 , 2 } -inverses [15]. Consequently, the matrices X satisfying (15) are { 1 , 2 } -inverses of rank rk ( X ) = rk ( B ) = rk ( A ) . □
Proposition 2.
If A ∈ C m × n and B ∈ C n × k , it follows
there exists X ∈ C n × m that fulfills X A B = B and rk ( X ) = rk ( B ) = rk ( A ) ⟺ rk ( A B ) = rk ( B ) = rk ( A ) .
When we add the assumption A X = X A in the system (5), we obtain the following characterizations for a commuting minimal rank outer inverse with prescribed range R ( B ) .
Theorem 3.
For A , X , B ∈ C n × n , the subsequent statements are mutually equivalent:
(i)
X A B = B , rk ( X ) = rk ( B ) and A X = X A ;
(ii)
X A X = X , R ( X ) = R ( B ) and A X = X A ;
(iii)
X 2 A = A X 2 = X and R ( X ) = R ( B ) ;
(iv)
X 2 A = A X 2 = X , X = B B † X and X A B = B .
Proof. 
(i) ⇔ (ii): It follows by Theorem 1.
(ii) ⇒ (iii): This implication is evident.
(iii) ⇒ (ii): Using X 2 A = A X 2 = X , we get A X = A X 2 A = X A . Hence, X = X 2 A = X A X .
(iv) ⇔ (iii): Applying Theorem 1, one can verify this implication. □
By Theorem 3, we get the next consequence which contains several characterizations for the Drazin inverse. For A C n × n with k = ind ( A ) , recall that by (Corollary 2.3 [33]), X is a minimal rank weak Drazin inverse of A and A X = X A iff X = A D .
Corollary 2.
Let A , X ∈ C n × n and k ∈ N . The subsequent statements are mutually equivalent:
(i)
X A k + 1 = A k , rk ( X ) = rk ( A k ) and A X = X A ;
(ii)
X A X = X , R ( X ) = R ( A k ) and A X = X A ;
(iii)
X 2 A = A X 2 = X and R ( X ) = R ( A k ) ;
(iv)
X 2 A = A X 2 = X , X = A k ( A k ) † X and X A k + 1 = A k ;
(v)
X = A D .
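Condition (iii) of Corollary 2 can be checked numerically for X = A D (an illustration of our own, using the classical formula A D = A n ( A 2 n + 1 ) † A n , valid since n ≥ ind ( A ) ).

```python
import numpy as np

# Check of X^2 A = A X^2 = X for X = A^D, computed via
# A^D = A^n (A^(2n+1))^+ A^n with n >= ind(A).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])               # singular with ind(A) = 1
n = A.shape[0]
An = np.linalg.matrix_power(A, n)
X = An @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * n + 1)) @ An

c1 = np.allclose(X @ X @ A, X)                # X^2 A = X
c2 = np.allclose(A @ X @ X, X)                # A X^2 = X
print(c1, c2)
```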
In the case that the hypothesis B A X = B is added to the system (5), we present necessary and sufficient requirements for the solvability of the novel system. The system X A B = B A X = B was considered in [35], but in conjunction with additional assumptions different from our conditions in Theorem 4.
Theorem 4.
The subsequent statements are mutually equivalent for A , X , B ∈ C n × n :
(i)
X A B = B A X = B and rk ( X ) = rk ( B ) ;
(ii)
X A B = B , R ( X ) = R ( B ) and N ( X ) = N ( B ) ;
(iii)
X A B = B , R ( X ) = R ( B ) and N ( X ) ⊆ N ( B ) ;
(iv)
X A B = B , R ( X ) = R ( B ) and N ( B ) ⊆ N ( X ) ;
(v)
X A B = B and N ( B ) ⊆ N ( X ) ;
(vi)
X A X = X , B A X = B and R ( X ) = R ( B ) ;
(vii)
X A X = X , R ( X ) = R ( B ) and N ( X ) = N ( B ) , i.e., X = A R ( B ) , N ( B ) ( 2 ) ;
(viii)
X A X = X , R ( X ) = R ( B ) and N ( X ) ⊆ N ( B ) ;
(ix)
X A X = X , R ( X ) = R ( B ) and N ( B ) ⊆ N ( X ) .
Proof. 
(i) ⇒ (ii): Firstly, B A X = B gives N ( X ) ⊆ N ( B ) . Since rk ( X ) = rk ( B ) , then dim N ( X ) = n − rk ( X ) = n − rk ( B ) = dim N ( B ) . So, N ( X ) = N ( B ) .
(ii) ⇒ (iii) and (iv): It is evident.
(iii) ⇒ (i): Theorem 1 and the assumptions X A B = B and R ( X ) = R ( B ) imply X A X = X and rk ( X ) = rk ( B ) . The condition N ( X ) ⊆ N ( B ) yields, for some V ∈ C n × n ,
B = V X = ( V X ) A X = B A X .
(iv) ⇒ (v): This implication is evident.
(v) ⇒ (ii): From X A B = B , we conclude that R ( B ) ⊆ R ( X ) and rk ( B ) ≤ rk ( X ) . Because N ( B ) ⊆ N ( X ) , we have X = S B for some S ∈ C n × n , and so rk ( X ) ≤ rk ( B ) . Hence, rk ( X ) = rk ( B ) , which implies N ( X ) = N ( B ) and R ( B ) = R ( X ) .
The rest follows by Theorem 1. □
As a consequence of Theorem 4, we get the following result which involves characterizations of the Drazin inverse.
Corollary 3.
Let A , X ∈ C n × n and k ∈ N . The subsequent statements are mutually equivalent:
(i)
X A k + 1 = A k + 1 X = A k and rk ( X ) = rk ( A k ) ;
(ii)
X A k + 1 = A k , R ( X ) = R ( A k ) and N ( X ) = N ( A k ) ;
(iii)
X A k + 1 = A k , R ( X ) = R ( A k ) and N ( X ) ⊆ N ( A k ) ;
(iv)
X A k + 1 = A k , R ( X ) = R ( A k ) and N ( A k ) ⊆ N ( X ) ;
(v)
X A k + 1 = A k and N ( A k ) ⊆ N ( X ) ;
(vi)
X A X = X , A k + 1 X = A k and R ( X ) = R ( A k ) ;
(vii)
X A X = X , R ( X ) = R ( A k ) , N ( X ) = N ( A k ) , i.e., X = A R ( A k ) , N ( A k ) ( 2 ) = A D ;
(viii)
X A X = X , R ( X ) = R ( A k ) and N ( X ) ⊆ N ( A k ) ;
(ix)
X A X = X , R ( X ) = R ( A k ) and N ( A k ) ⊆ N ( X ) .
For k = 1 in Corollary 3, we obtain characterizations for the group inverse.
Corollary 4.
The subsequent statements are equivalent for A , X ∈ C n × n :
(i)
X A 2 = A 2 X = A and rk ( X ) = rk ( A ) ;
(ii)
X A 2 = A , R ( X ) = R ( A ) and N ( X ) = N ( A ) ;
(iii)
X A 2 = A , R ( X ) = R ( A ) and N ( X ) ⊆ N ( A ) ;
(iv)
X A 2 = A , R ( X ) = R ( A ) and N ( A ) ⊆ N ( X ) ;
(v)
X A 2 = A and N ( A ) ⊆ N ( X ) ;
(vi)
X A X = X , A 2 X = A and R ( X ) = R ( A ) ;
(vii)
X A X = X , R ( X ) = R ( A ) , N ( X ) = N ( A ) , i.e., X = A R ( A ) , N ( A ) ( 2 ) = A # ;
(viii)
X A X = X , R ( X ) = R ( A ) and N ( X ) ⊆ N ( A ) ;
(ix)
X A X = X , R ( X ) = R ( A ) and N ( A ) ⊆ N ( X ) .
Theorem 4 also implies new characterizations for the Moore–Penrose inverse.
Corollary 5.
The next assertions are mutually equivalent for A , X ∈ C n × n :
(i)
X A A * = A * A X = A * and rk ( X ) = rk ( A * ) ;
(ii)
X A A * = A * , R ( X ) = R ( A * ) and N ( X ) = N ( A * ) ;
(iii)
X A A * = A * , R ( X ) = R ( A * ) and N ( X ) ⊆ N ( A * ) ;
(iv)
X A A * = A * , R ( X ) = R ( A * ) and N ( A * ) ⊆ N ( X ) ;
(v)
X A A * = A * and N ( A * ) ⊆ N ( X ) ;
(vi)
X A X = X , A * A X = A * and R ( X ) = R ( A * ) ;
(vii)
X A X = X , R ( X ) = R ( A * ) and N ( X ) = N ( A * ) , i.e.,
X = A R ( A * ) , N ( A * ) ( 2 ) = A † ;
(viii)
X A X = X , R ( X ) = R ( A * ) and N ( X ) ⊆ N ( A * ) ;
(ix)
X A X = X , R ( X ) = R ( A * ) and N ( A * ) ⊆ N ( X ) .
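Condition (i) of Corollary 5 can be confirmed numerically for X = A † (an illustration of our own; the singular square matrix A below is arbitrary).

```python
import numpy as np

# Check of X A A* = A* and A* A X = A* for X = A^+ (numpy.linalg.pinv).
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) @ np.diag([1.0, 1.0, 1.0, 0.0])  # rank 3
X = np.linalg.pinv(A)
As = A.conj().T

c1 = np.allclose(X @ A @ As, As)              # X A A* = A*
c2 = np.allclose(As @ A @ X, As)              # A* A X = A*
c3 = np.linalg.matrix_rank(X) == np.linalg.matrix_rank(As)
print(c1, c2, c3)
```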
Example 1.
Consider the matrices
A = ϵ + 1 ϵ ϵ ϵ ϵ + 1 ϵ ϵ 1 ϵ ϵ ϵ ϵ ϵ ϵ + 1 ϵ ϵ ϵ ϵ ϵ ϵ 1 ϵ ϵ + 1 ϵ ϵ ϵ ϵ + 1
and
B = 2 ϵ + 1 ϵ ϵ ϵ 2 ϵ 1 ϵ ϵ ϵ 2 ϵ + 1 ϵ ϵ ϵ 3 ϵ ϵ ϵ .
Let us generate the candidate solutions X in the generic form
X = x 1 , 1 x 1 , 2 x 1 , 3 x 1 , 4 x 1 , 5 x 2 , 1 x 2 , 2 x 2 , 3 x 2 , 4 x 2 , 5 x 3 , 1 x 3 , 2 x 3 , 3 x 3 , 4 x 3 , 5 x 4 , 1 x 4 , 2 x 4 , 3 x 4 , 4 x 4 , 5 x 5 , 1 x 5 , 2 x 5 , 3 x 5 , 4 x 5 , 5 ,
where  x i , j i , j = 1 , , 5  are unevaluated symbols. The general solution X to  X A B = B  is the matrix
x 1 , 1 2 ϵ 3 + ϵ 2 2 ϵ + 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 x 1 , 1 + 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 x 1 , 5 1 2 ( ϵ 1 ) ϵ ( 3 ϵ + 2 ) 2 ϵ + ( 6 ϵ + 3 ) x 1 , 1 + ( 6 ϵ + 3 ) x 1 , 5 3 6 ϵ + 4 x 2 , 1 7 ϵ 3 + 3 ϵ + 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 x 2 , 1 + 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 x 2 , 5 2 ϵ 3 ϵ 2 ϵ 2 ϵ + ( 6 ϵ + 3 ) x 2 , 1 + ( 6 ϵ + 3 ) x 2 , 5 6 ϵ + 4 x 3 , 1 ϵ ( ϵ + 1 ) 2 + 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 x 3 , 1 + 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 x 3 , 5 2 ϵ 3 ϵ 2 ϵ 2 5 ϵ + ( 6 ϵ + 3 ) x 3 , 1 + ( 6 ϵ + 3 ) x 3 , 5 + 4 6 ϵ + 4 x 4 , 1 ϵ ( ϵ + 1 ) 2 + 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 x 4 , 1 + 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 x 4 , 5 2 ϵ 3 ϵ 2 ϵ 2 ϵ + ( 6 ϵ + 3 ) x 4 , 1 + ( 6 ϵ + 3 ) x 4 , 5 6 ϵ + 4 x 5 , 1 ϵ 5 ϵ 2 2 ϵ 3 + 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 x 5 , 1 + 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 x 5 , 5 2 ( ϵ 1 ) ϵ ( 3 ϵ + 2 ) 5 ϵ + ( 6 ϵ + 3 ) x 5 , 1 + ( 6 ϵ + 3 ) x 5 , 5 6 ϵ + 4 4 ϵ 3 ϵ 2 2 ϵ + 12 ϵ 4 8 ϵ 3 + 5 ϵ 2 + 6 ϵ + 1 x 1 , 1 + 12 ϵ 4 8 ϵ 3 + 5 ϵ 2 + 6 ϵ + 1 x 1 , 5 1 4 ( ϵ 1 ) ϵ 2 ( 3 ϵ + 2 ) x 1 , 5 ϵ 12 ϵ 3 + 3 ϵ 2 6 ϵ 1 + 12 ϵ 4 8 ϵ 3 + 5 ϵ 2 + 6 ϵ + 1 x 2 , 1 + 12 ϵ 4 8 ϵ 3 + 5 ϵ 2 + 6 ϵ + 1 x 2 , 5 4 ( ϵ 1 ) ϵ 2 ( 3 ϵ + 2 ) x 2 , 5 12 ϵ 4 3 ϵ 3 + 6 ϵ 2 + ϵ + 12 ϵ 4 8 ϵ 3 + 5 ϵ 2 + 6 ϵ + 1 x 3 , 1 + 12 ϵ 4 8 ϵ 3 + 5 ϵ 2 + 6 ϵ + 1 x 3 , 5 4 ( ϵ 1 ) ϵ 2 ( 3 ϵ + 2 ) x 3 , 5 ϵ 7 ϵ 2 + 2 ϵ 1 + 12 ϵ 4 8 ϵ 3 + 5 ϵ 2 + 6 ϵ + 1 x 4 , 1 + 12 ϵ 4 8 ϵ 3 + 5 ϵ 2 + 6 ϵ + 1 x 4 , 5 4 ( ϵ 1 ) ϵ 2 ( 3 ϵ + 2 ) x 4 , 5 ϵ ϵ 2 + 2 ϵ 3 + 12 ϵ 4 8 ϵ 3 + 5 ϵ 2 + 6 ϵ + 1 x 5 , 1 + 12 ϵ 4 8 ϵ 3 + 5 ϵ 2 + 6 ϵ + 1 x 5 , 5 4 ( ϵ 1 ) ϵ 2 ( 3 ϵ + 2 ) x 5 , 5
which satisfies  X A B = B  but does not satisfy  X A X = X . Ranks of relevant matrices are equal to
rk ( B ) = rk ( A B ) = 3 < rk ( A ) = 4 < rk ( X ) = 5 .
The matrix Z obtained by the replacement  x 1 , 1 = x 2 , 1 = x 3 , 1 = x 4 , 1 = x 5 , 1 = 0  in X is equal to
Z = 0 2 ϵ 3 + ϵ 2 2 ϵ + 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 x 1 , 5 1 2 ( ϵ 1 ) ϵ ( 3 ϵ + 2 ) 2 ϵ + ( 6 ϵ + 3 ) x 1 , 5 3 6 ϵ + 4 4 ϵ 3 ϵ 2 2 ϵ + 12 ϵ 4 8 ϵ 3 + 5 ϵ 2 + 6 ϵ + 1 x 1 , 5 1 4 ( ϵ 1 ) ϵ 2 ( 3 ϵ + 2 ) x 1 , 5 0 7 ϵ 3 + 3 ϵ + 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 x 2 , 5 2 ϵ 3 ϵ 2 ϵ 2 ϵ + ( 6 ϵ + 3 ) x 2 , 5 6 ϵ + 4 ϵ 12 ϵ 3 + 3 ϵ 2 6 ϵ 1 + 12 ϵ 4 8 ϵ 3 + 5 ϵ 2 + 6 ϵ + 1 x 2 , 5 4 ( ϵ 1 ) ϵ 2 ( 3 ϵ + 2 ) x 2 , 5 0 ϵ ( ϵ + 1 ) 2 + 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 x 3 , 5 2 ϵ 3 ϵ 2 ϵ 2 5 ϵ + ( 6 ϵ + 3 ) x 3 , 5 + 4 6 ϵ + 4 12 ϵ 4 3 ϵ 3 + 6 ϵ 2 + ϵ + 12 ϵ 4 8 ϵ 3 + 5 ϵ 2 + 6 ϵ + 1 x 3 , 5 4 ( ϵ 1 ) ϵ 2 ( 3 ϵ + 2 ) x 3 , 5 0 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 x 4 , 5 ϵ ( ϵ + 1 ) 2 2 ϵ 3 ϵ 2 ϵ 2 ϵ + ( 6 ϵ + 3 ) x 4 , 5 6 ϵ + 4 ϵ 7 ϵ 2 + 2 ϵ 1 + 12 ϵ 4 8 ϵ 3 + 5 ϵ 2 + 6 ϵ + 1 x 4 , 5 4 ( ϵ 1 ) ϵ 2 ( 3 ϵ + 2 ) x 4 , 5 0 ϵ 5 ϵ 2 2 ϵ 3 + 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 x 5 , 5 2 ( ϵ 1 ) ϵ ( 3 ϵ + 2 ) ( 6 ϵ + 3 ) x 5 , 5 5 ϵ 6 ϵ + 4 ϵ ϵ 2 + 2 ϵ 3 + 12 ϵ 4 8 ϵ 3 + 5 ϵ 2 + 6 ϵ + 1 x 5 , 5 4 ( ϵ 1 ) ϵ 2 ( 3 ϵ + 2 ) x 5 , 5
and satisfies  rk ( Z ) = 4 > rk ( B ) . Then the matrix equation  Z A B = B  holds, but  Z A Z = Z  does not hold.
Finally, consider the matrix Q obtained by the replacement  x 1 , 5 = x 2 , 5 = x 3 , 5 = x 4 , 5 = x 5 , 5 = 0  in the matrix Z:
Q = 0 2 ϵ 3 + ϵ 2 2 ϵ 1 2 ( ϵ 1 ) ϵ ( 3 ϵ + 2 ) 2 ϵ 3 6 ϵ + 4 4 ϵ 3 ϵ 2 2 ϵ 1 4 ( ϵ 1 ) ϵ 2 ( 3 ϵ + 2 ) 0 0 3 ϵ 7 ϵ 3 2 ϵ 3 ϵ 2 ϵ 2 ϵ 6 ϵ + 4 12 ϵ 3 + 3 ϵ 2 6 ϵ 1 4 ( ϵ 1 ) ϵ ( 3 ϵ + 2 ) 0 0 ( ϵ + 1 ) 2 2 3 ϵ 2 ϵ 2 5 ϵ + 4 6 ϵ + 4 12 ϵ 4 3 ϵ 3 + 6 ϵ 2 + ϵ 4 ( ϵ 1 ) ϵ 2 ( 3 ϵ + 2 ) 0 0 ( ϵ + 1 ) 2 2 3 ϵ 2 ϵ 2 ϵ 6 ϵ + 4 7 ϵ 2 + 2 ϵ 1 4 ( ϵ 1 ) ϵ ( 3 ϵ + 2 ) 0 0 5 ϵ 2 2 ϵ 3 2 ( ϵ 1 ) ( 3 ϵ + 2 ) 5 ϵ 6 ϵ + 4 ϵ 2 + 2 ϵ 3 4 ( ϵ 1 ) ϵ ( 3 ϵ + 2 ) 0 .
The matrix Q satisfies  rk ( Q ) = 3 = rk ( B ) . Then both the matrix equations  Q A B = B  and  Q A Q = Q  are satisfied, which is in accordance with the results presented in Theorem 1.
Now, let us calculate the matrix  X = B U , where  U ∈ C 3 × 5  is in the generic form
U = u 1 , 1 u 1 , 2 u 1 , 3 u 1 , 4 u 1 , 5 u 2 , 1 u 2 , 2 u 2 , 3 u 2 , 4 u 2 , 5 u 3 , 1 u 3 , 2 u 3 , 3 u 3 , 4 u 3 , 5 .
The set of solutions to  B U A B = B  with respect to U is given by
u 1 , 1 u 1 , 2 3 ϵ 2 ϵ 2 + ϵ + 1 u 1 , 2 + 1 6 ϵ 3 3 ϵ 2 6 ϵ 1 u 2 , 1 u 2 , 2 3 ϵ ( 2 ϵ + 1 ) ( ϵ 1 ) u 2 , 2 + 1 6 ϵ 3 3 ϵ 2 6 ϵ 1 u 3 , 1 u 3 , 2 6 ϵ 2 + 3 2 ϵ 2 + ϵ + 1 u 3 , 2 ϵ 3 ϵ 1 6 ϵ 3 3 ϵ 2 6 ϵ 1 12 ϵ 4 + 8 ϵ 3 5 ϵ 2 6 ϵ 1 u 1 , 2 ϵ 6 ϵ 2 + 9 ϵ + 1 2 ϵ 6 ϵ 3 3 ϵ 2 6 ϵ 1 3 ϵ 2 + 2 3 ϵ 2 + ϵ + 2 u 1 , 2 ϵ + 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 u 1 , 1 1 6 ϵ 3 3 ϵ 2 6 ϵ 1 24 ϵ 3 + 26 ϵ 2 + 9 ϵ + 12 ϵ 4 + 8 ϵ 3 5 ϵ 2 6 ϵ 1 u 2 , 2 + 1 2 ϵ 6 ϵ 3 3 ϵ 2 6 ϵ 1 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 u 2 , 1 2 ϵ ( 3 ϵ + 2 ) ( ϵ 1 ) u 2 , 2 + 1 6 ϵ 3 3 ϵ 2 6 ϵ 1 12 ϵ 4 + 8 ϵ 3 5 ϵ 2 6 ϵ 1 u 3 , 2 4 ϵ 2 ( 4 ϵ + 1 ) 2 ϵ 6 ϵ 3 3 ϵ 2 6 ϵ 1 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 u 3 , 1 + 2 ϵ ϵ + 3 ϵ 2 + ϵ + 2 u 3 , 2 + 1 6 ϵ 3 3 ϵ 2 6 ϵ 1 .
Then the set  A { 2 } R ( B ) , *  coincides with the set of matrices  Y = B U , which is given in Appendix A.
The rank identity  rk ( Y ) = rk ( B )  is satisfied.

4. Minimal Rank Outer Inverses with Prescribed Kernel

This section is devoted to the solvability of the system (7) as well as the minimization problem (8). Besides some systems of matrix equations which are equivalent to the system (7), we show in Theorem 5 that X is a solution to the system (7) iff X is an outer inverse of A with the given kernel N ( C ) .
Theorem 5.
Let A ∈ C m × n , X ∈ C n × m and C ∈ C l × m .
(a) 
The subsequent statements are mutually equivalent:
(i)
C A X = C and rk ( X ) = rk ( C ) ;
(ii)
C A X = C and N ( X ) = N ( C ) ;
(iii)
X is a solution to (3), i.e., X ∈ A { 2 } * , N ( C ) ;
(iv)
X = X C † C and C A X = C ;
(v)
X A X = X , X = X C † C and C A X = C .
(b) 
In addition,
min { rk ( X ) | C A X = C } = rk ( C ) , { rk ( X ) | C A X = C } ⊆ [ rk ( C ) , min { n , m } ] , { rk ( X ) | X ∈ A { 2 } , C A X = C } ⊆ [ rk ( C ) , rk ( A ) ]
and the following set identities are valid:
A { 2 } * , N ( C ) = { X ∈ C n × m | C A X = C , rk ( X ) = rk ( C ) } .
A { 2 } * , N ( C ) = { X = ( C A ) † C + ( I − ( C A ) † C A ) Y | Y ∈ C n × m , C A X = C , rk ( X ) = rk ( C ) } .
Proof. 
(i) ⇒ (ii): The hypothesis C A X = C implies N ( X ) ⊆ N ( C ) . Since rk ( X ) = rk ( C ) , we deduce that N ( X ) = N ( C ) .
(ii) ⇒ (iii): From N ( X ) = N ( C ) , it follows X = W 2 C for some W 2 ∈ C n × l . Then X A X = W 2 C A X = W 2 C = X .
(iii) ⇔ (iv) ⇔ (v): These equivalences are clear by (Theorem 2.6 [34]).
(v) ⇒ (i): The assumptions X = X C † C and C A X = C give rk ( X ) = rk ( C ) . Now, C A X = C A X C † C = C C † C = C .
The rest of the proof is analogous as the proof of Theorem 1. □
In order to provide new systems of matrix equations, we can replace the conditions X = X C † C and C A X = C of Theorem 5 with some of the equivalent conditions presented in (Remark 2.7 [34]).
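The dual construction can be sketched numerically (an illustration of our own with generic random data): X = ( C A ) † C solves C A X = C whenever rk ( C A ) = rk ( C ) , and it is then an outer inverse of A with kernel N ( C ) .

```python
import numpy as np

# X = (C A)^+ C solves C A X = C under the rank condition rk(CA) = rk(C),
# and the resulting X belongs to A{2} with N(X) = N(C).
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))
C = rng.standard_normal((2, 3))               # generically rk(CA) = rk(C) = 2
X = np.linalg.pinv(C @ A) @ C                 # a 4x3 matrix

c1 = np.allclose(C @ A @ X, C)                # C A X = C
c2 = np.allclose(X @ A @ X, X)                # hence X is an outer inverse of A
print(c1, c2)
```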
Proposition 3.
If A ∈ C m × n and C ∈ C l × m , it follows
there exists X ∈ C n × m satisfying C A X = C and rk ( X ) = rk ( C ) ⟺ rk ( C A ) = rk ( C ) .
Because of (17), a solution X to (7) is called a minimal rank outer inverse with prescribed kernel N ( C ) .
Theorem 5 implies the following result.
Corollary 6.
The next statements are mutually equivalent for A , X ∈ C n × n and k ∈ N :
(i)
A k + 1 X = A k and rk ( X ) = rk ( A k ) ;
(ii)
A k + 1 X = A k and N ( X ) = N ( A k ) ;
(iii)
X ∈ A { 2 } * , N ( A k ) ;
(iv)
X = X ( A k ) † A k and A k + 1 X = A k ;
(v)
X A X = X , X = X ( A k ) † A k and A k + 1 X = A k ;
(vi)
X is a minimal rank weak Drazin inverse of A.
We now consider the solvability of particular cases of the system (7). Firstly, we assume that rk ( X ) = rk ( C ) = rk ( A ) holds in the system (7). Notice that the following result can be proven analogously to the corresponding results of the previous section.
Theorem 6.
Consider A ∈ C m × n , X ∈ C n × m and C ∈ C l × m .
(a) 
The subsequent statements are mutually equivalent:
(i)
C A X = C and rk ( X ) = rk ( C ) = rk ( A ) ;
(ii)
X A X = X , N ( X ) = N ( C ) and N ( A ) = N ( C A ) ;
(iii)
X A X = X , N ( X ) = N ( C ) and N ( C A ) ⊆ N ( A ) ;
(iv)
X A X = X , N ( X ) = N ( C ) and A = A ( C A ) † C A ;
(v)
X A X = X , A X A = A and N ( X ) = N ( C ) , i.e., X A { 1 , 2 } * , N ( C ) .
(b) 
In addition,
{ X ∈ C n × m | C A X = C , rk ( X ) = rk ( C ) = rk ( A ) } = A { 1 , 2 } * , N ( C ) .
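A small numerical illustration of Theorem 6 (our own example): when rk ( C ) = rk ( A ) , a solution of C A X = C with rk ( X ) = rk ( C ) is a { 1 , 2 } -inverse of A, so A X A = A also holds.

```python
import numpy as np

# When rk(C) = rk(A), the outer inverse X = (C A)^+ C with kernel N(C)
# is in fact a {1,2}-inverse of A, in line with item (v) of Theorem 6.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0]])               # rk(A) = 2
C = np.eye(2)                                 # rk(C) = 2 = rk(A)
X = np.linalg.pinv(C @ A) @ C

c1 = np.allclose(C @ A @ X, C)                # C A X = C
c2 = np.allclose(X @ A @ X, X)                # the {2}-condition
c3 = np.allclose(A @ X @ A, A)                # the {1}-condition
print(c1, c2, c3)
```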
Proposition 4. 
If A ∈ C m × n and C ∈ C l × m , it follows
there exists X ∈ C n × m satisfying C A X = C and rk ( X ) = rk ( C ) = rk ( A ) ⟺ rk ( C A ) = rk ( C ) = rk ( A ) .
Several characterizations of a commuting minimal rank outer inverse with prescribed kernel N ( C ) are proposed in Theorem 7.
Theorem 7. 
Let A , X , C ∈ C n × n . The subsequent statements are mutually equivalent:
(i)
C A X = C , rk ( X ) = rk ( C ) and A X = X A ;
(ii)
X A X = X , N ( X ) = N ( C ) and A X = X A ;
(iii)
X 2 A = A X 2 = X and N ( X ) = N ( C ) ;
(iv)
X 2 A = A X 2 = X , X = X C † C and C A X = C .
Theorem 7 gives the next result, which contains characterizations of the Drazin inverse.
Corollary 7. 
The subsequent statements are equivalent for A , X ∈ C n × n and k ∈ N :
(i)
A k + 1 X = A k , rk ( X ) = rk ( A k ) and A X = X A ;
(ii)
X A X = X , N ( X ) = N ( A k ) and A X = X A ;
(iii)
X 2 A = A X 2 = X and N ( X ) = N ( A k ) ;
(iv)
X 2 A = A X 2 = X , X = X ( A k ) † A k and A k + 1 X = A k ;
(v)
X = A D .
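The commuting conditions of Corollary 7 can be verified on a small, arbitrary index-2 example (a sketch only): A = [2] ⊕ J_2(0), whose Drazin inverse inverts the invertible block and annihilates the nilpotent one.

```python
import numpy as np

# Arbitrary index-2 example: A = [2] (+) J_2(0), so A^D = [1/2] (+) 0.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
X = np.array([[0.5, 0.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
k = 2

ok_comm = np.allclose(A @ X, X @ A)                          # A X = X A
ok_square = np.allclose(X @ X @ A, X) and np.allclose(A @ X @ X, X)
ok_rank = np.linalg.matrix_rank(X) == np.linalg.matrix_rank(
    np.linalg.matrix_power(A, k))                            # rk(X) = rk(A^k)
```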
Adding the equation X A C = C to the system (7), we establish necessary and sufficient conditions for a matrix X to be a solution to the resulting system.
Theorem 8. 
Let A , X , C ∈ C n × n . The subsequent statements are mutually equivalent:
(i)
C A X = X A C = C and rk ( X ) = rk ( C ) ;
(ii)
C A X = C , N ( X ) = N ( C ) and R ( X ) = R ( C ) ;
(iii)
C A X = C , N ( X ) = N ( C ) and R ( X ) ⊆ R ( C ) ;
(iv)
C A X = C , N ( X ) = N ( C ) and R ( C ) ⊆ R ( X ) ;
(v)
C A X = C and R ( X ) ⊆ R ( C ) ;
(vi)
X A X = X , X A C = C and N ( X ) = N ( C ) ;
(vii)
X A X = X , N ( X ) = N ( C ) and R ( X ) = R ( C ) , i.e., X = A R ( C ) , N ( C ) ( 2 ) ;
(viii)
X A X = X , N ( X ) = N ( C ) and R ( X ) ⊆ R ( C ) ;
(ix)
N ( X ) = N ( C ) and R ( C ) ⊆ R ( X ) .
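A quick numerical check of Theorem 8 for square matrices (a sketch only): X = C (CAC)^+ C is a natural candidate, an assumption of this illustration rather than a construction from the paper; generically rk(CAC) = rk(C), so the pseudoinverse projections fix C on both sides.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)              # generically invertible
C = rng.standard_normal((3, 2)) @ rng.standard_normal((2, 3))  # rank 2

# Candidate (our choice for the sketch): X = C (CAC)^+ C.
X = C @ np.linalg.pinv(C @ A @ C) @ C

ok_i = np.allclose(C @ A @ X, C) and np.allclose(X @ A @ C, C)  # condition (i)
ok_rk = np.linalg.matrix_rank(X) == np.linalg.matrix_rank(C)
ok_outer = np.allclose(X @ A @ X, X)                            # outer inverse
```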
Consequently, by Theorem 8, we derive the following characterizations for the Drazin inverse.
Corollary 8. 
The next statements are equivalent for A , X ∈ C n × n and k ∈ N :
(i)
A k + 1 X = A k , N ( X ) = N ( A k ) and R ( X ) = R ( A k ) ;
(ii)
A k + 1 X = A k , N ( X ) = N ( A k ) and R ( X ) ⊆ R ( A k ) ;
(iii)
A k + 1 X = A k , N ( X ) = N ( A k ) and R ( A k ) ⊆ R ( X ) ;
(iv)
A k + 1 X = A k and R ( X ) ⊆ R ( A k ) ;
(v)
X A X = X , X A k + 1 = A k and N ( X ) = N ( A k ) ;
(vi)
X A X = X , N ( X ) = N ( A k ) , R ( X ) = R ( A k ) , i.e., X = A R ( A k ) , N ( A k ) ( 2 ) = A D ;
(vii)
X A X = X , N ( X ) = N ( A k ) and R ( X ) ⊆ R ( A k ) ;
(viii)
X A X = X , N ( X ) = N ( A k ) and R ( A k ) ⊆ R ( X ) .
By Corollary 8, we characterize the group inverse.
Corollary 9. 
The subsequent constrained equations are equivalent for A , X ∈ C n × n :
(i)
A 2 X = A , N ( X ) = N ( A ) and R ( X ) = R ( A ) ;
(ii)
A 2 X = A , N ( X ) = N ( A ) and R ( X ) ⊆ R ( A ) ;
(iii)
A 2 X = A , N ( X ) = N ( A ) and R ( A ) ⊆ R ( X ) ;
(iv)
A 2 X = A and R ( X ) ⊆ R ( A ) ;
(v)
X A X = X , X A 2 = A and N ( X ) = N ( A ) ;
(vi)
X A X = X , N ( X ) = N ( A ) and R ( X ) = R ( A ) , i.e., X = A R ( A ) , N ( A ) ( 2 ) = A # ;
(vii)
X A X = X , N ( X ) = N ( A ) and R ( X ) ⊆ R ( A ) ;
(viii)
X A X = X , N ( X ) = N ( A ) and R ( A ) ⊆ R ( X ) .
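Corollary 9 can be illustrated on an arbitrary rank-1, index-1 example (a sketch only): here A satisfies A^2 = 2A, and one checks directly that X = A/4 fulfills the defining equations of the group inverse, so A # = A/4.

```python
import numpy as np

# Arbitrary index-1 example: A^2 = 2A, and X = A/4 satisfies
# A X A = A, X A X = X, A X = X A, so A# = A/4.
A = np.array([[2.0, 1.0],
              [0.0, 0.0]])
X = A / 4.0

ok_i = np.allclose(A @ A @ X, A)                                # A^2 X = A
ok_v = np.allclose(X @ A @ X, X) and np.allclose(X @ A @ A, A)  # condition (v)
ok_group = np.allclose(A @ X @ A, A) and np.allclose(A @ X, X @ A)
```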
According to Theorem 8, we have more characterizations of the Moore–Penrose inverse.
Corollary 10. 
The subsequent constrained equations are equivalent for A , X ∈ C n × n :
(i)
A * A X = A * , N ( X ) = N ( A * ) and R ( X ) = R ( A * ) ;
(ii)
A * A X = A * , N ( X ) = N ( A * ) and R ( X ) ⊆ R ( A * ) ;
(iii)
A * A X = A * , N ( X ) = N ( A * ) and R ( A * ) ⊆ R ( X ) ;
(iv)
A * A X = A * and R ( X ) ⊆ R ( A * ) ;
(v)
X A X = X , X A A * = A * and N ( X ) = N ( A * ) ;
(vi)
X A X = X , N ( X ) = N ( A * ) and R ( X ) = R ( A * ) , i.e., X = A R ( A * ) , N ( A * ) ( 2 ) = A † ;
(vii)
X A X = X , N ( X ) = N ( A * ) and R ( X ) ⊆ R ( A * ) ;
(viii)
X A X = X , N ( X ) = N ( A * ) and R ( A * ) ⊆ R ( X ) .
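For the Moore-Penrose inverse, the conditions of Corollary 10 can be checked with numpy.linalg.pinv on an arbitrary singular square matrix (an illustration only).

```python
import numpy as np

rng = np.random.default_rng(2)
# Singular square matrix of rank 2 (arbitrary choice), with X = A^+.
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 4))
X = np.linalg.pinv(A)

ok_i = np.allclose(A.conj().T @ A @ X, A.conj().T)   # A*AX = A*, condition (i)
ok_outer = np.allclose(X @ A @ X, X)                 # outer-inverse part of (vi)
ok_rk = np.linalg.matrix_rank(X) == np.linalg.matrix_rank(A.conj().T) == 2
```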
Example 2. 
Consider the matrix A from Example 1 and the matrix C of rank 3 defined by
C = 2 1 1 1 2 1 0 1 1 1 1 1 2 1 1
Let us generate the candidate solutions X in the generic form (16). The general solution X to  C A X = C  is equal to
x 1 , 1 x 1 , 2 x 1 , 3 2 ϵ + ( 9 ϵ + 2 ) x 3 , 1 9 ϵ 2 5 ϵ ( 9 ϵ + 2 ) x 3 , 2 + 2 9 ϵ 2 5 ϵ ( 9 ϵ + 2 ) x 3 , 3 + 2 9 ϵ 2 x 3 , 1 x 3 , 2 x 3 , 3 3 ϵ + ( 9 ϵ + 4 ) x 3 , 1 9 ϵ 2 6 ϵ ( 9 ϵ + 4 ) x 3 , 2 9 ϵ 2 3 ϵ ( 9 ϵ + 4 ) x 3 , 3 + 4 9 ϵ 2 5 ϵ + ( 2 9 ϵ ) x 1 , 1 + ( 9 ϵ 1 ) x 3 , 1 1 9 ϵ 2 ϵ + ( 2 9 ϵ ) x 1 , 2 + ( 9 ϵ 1 ) x 3 , 2 9 ϵ 2 8 ϵ + ( 2 9 ϵ ) x 1 , 3 + ( 9 ϵ 1 ) x 3 , 3 + 1 9 ϵ 2 x 1 , 4 x 1 , 5 2 ϵ + ( 9 ϵ + 2 ) x 3 , 1 9 ϵ 2 4 ϵ ( 9 ϵ + 2 ) x 3 , 4 9 ϵ 2 2 ϵ + ( 9 ϵ + 2 ) x 3 , 5 9 ϵ 2 x 3 , 4 x 3 , 5 3 ϵ ( 9 ϵ + 4 ) x 3 , 4 + 2 9 ϵ 2 3 ϵ + ( 9 ϵ + 4 ) x 3 , 5 9 ϵ 2 ϵ + ( 2 9 ϵ ) x 1 , 4 + ( 9 ϵ 1 ) x 3 , 4 9 ϵ 2 5 ϵ + ( 2 9 ϵ ) x 1 , 5 + ( 9 ϵ 1 ) x 3 , 5 1 9 ϵ 2 .
The matrix X satisfies  C A X = C  but does not satisfy  X A X = X . Ranks of relevant matrices are equal to
rk ( C ) = rk ( C A ) = 3 < rk ( A ) = 4 < rk ( X ) = 5 .
The matrix Z obtained by the replacement  x 1 , 1 = x 1 , 2 = x 1 , 3 = x 1 , 4 = x 1 , 5 = 0  in X satisfies  rk ( Z ) = 4 > rk ( C ) . Then the matrix equation  C A Z = C  holds, but  Z A Z = Z  does not hold.
Finally, consider the matrix Q obtained by the replacement  x 3 , 1 = x 3 , 2 = x 3 , 3 = x 3 , 4 = x 3 , 5 = 0  in Z: 
Q = 0 0 0 0 0 2 ϵ 9 ϵ 2 2 5 ϵ 9 ϵ 2 5 ϵ + 2 9 ϵ 2 4 ϵ 9 ϵ 2 2 ϵ 9 ϵ 2 0 0 0 0 0 3 ϵ 9 ϵ 2 6 ϵ 9 ϵ 2 3 ϵ + 4 9 ϵ 2 2 3 ϵ 9 ϵ 2 3 ϵ 9 ϵ 2 5 ϵ 1 9 ϵ 2 ϵ 9 ϵ 2 1 8 ϵ 9 ϵ 2 ϵ 9 ϵ 2 5 ϵ 1 9 ϵ 2 .
The matrix Q satisfies  rk ( Q ) = 3 = rk ( C ) . Then both the matrix equations  C A Q = C  and  Q A Q = Q  are satisfied, which is in accordance with the results presented in Theorem 5.
Now, let us calculate the matrix  X = U C , where  U ∈ C 5 × 3  is in the generic form
U = u 1 , 1 u 1 , 2 u 1 , 3 u 2 , 1 u 2 , 2 u 2 , 3 u 3 , 1 u 3 , 2 u 3 , 3 u 4 , 1 u 4 , 2 u 4 , 3 u 5 , 1 u 5 , 2 u 5 , 3 .
The set of solutions to  C A U C = C  with respect to U is given by
u 1 , 1 u 1 , 2 u 1 , 3 u 2 , 1 1 ( 9 ϵ + 2 ) u 3 , 2 9 ϵ 2 u 2 , 3 ( 2 9 ϵ ) u 2 , 1 6 ϵ 9 ϵ + 2 u 3 , 2 ϵ + ( 2 9 ϵ ) u 2 , 3 + 2 9 ϵ + 2 6 ϵ + ( 9 ϵ + 4 ) u 2 , 1 + 2 9 ϵ + 2 ( 9 ϵ + 4 ) u 3 , 2 9 ϵ 2 1 5 ϵ + ( 9 ϵ + 4 ) u 2 , 3 + 2 9 ϵ + 2 ( 9 ϵ + 2 ) u 1 , 1 + ( 1 9 ϵ ) u 2 , 1 + 1 9 ϵ + 2 ( 9 ϵ 1 ) u 3 , 2 9 ϵ 2 u 1 , 2 6 ϵ ( 9 ϵ + 2 ) u 1 , 3 + ( 1 9 ϵ ) u 2 , 3 9 ϵ + 2 .
Then the set  A { 2 } * , N ( C )  coincides with the set of matrices  Y = U C , which is given in Appendix B. The rank identity  rk ( Y ) = rk ( C )  is satisfied.

5. Minimal Rank Outer Inverses with Prescribed Range and Kernel

Applying the results of Section 3 and Section 4, we are able to characterize the solvability of the system (9). In particular, by Theorem 1 and Theorem 5, the system (9) has a solution X if and only if X is an outer inverse of A with the prescribed range R ( B ) and kernel N ( C ) .
Corollary 11. 
Consider A ∈ C m × n , X ∈ C n × m , B ∈ C n × k and C ∈ C l × m .
(a) 
The subsequent constrained matrix equations are mutually equivalent:
(i)
X A B = B , C A X = C and rk ( X ) = rk ( B ) = rk ( C ) ;
(ii)
X A B = B , C A X = C , R ( X ) = R ( B ) and N ( X ) = N ( C ) ;
(iii)
X is a solution to (4), i.e., X = A R ( B ) , N ( C ) ( 2 ) ;
(iv)
X = B B † X = X C † C , X A B = B and C A X = C ;
(v)
X A X = X , X = B B † X = X C † C , X A B = B and C A X = C .
(b) 
In addition, the system (9) has the unique solution X = A R ( B ) , N ( C ) ( 2 ) .
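The unique solution of the system (9) can be produced numerically via the Urquhart representation [36], X = B (CAB)^+ C; the matrices below are arbitrary choices for this sketch, with CAB a 2 × 2 matrix that is generically invertible.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.standard_normal((n, n)) + float(n) * np.eye(n)  # generically invertible
B = rng.standard_normal((n, 2))                         # full column rank
C = rng.standard_normal((2, n))                         # full row rank

# Urquhart-type representation: X = B (CAB)^+ C. Here CAB is 2x2 and
# generically invertible, so (CAB)^+ = (CAB)^{-1}.
X = B @ np.linalg.pinv(C @ A @ B) @ C

ok_sys = np.allclose(X @ A @ B, B) and np.allclose(C @ A @ X, C)  # system (9)
ok_outer = np.allclose(X @ A @ X, X)
ok_rk = np.linalg.matrix_rank(X) == 2 == np.linalg.matrix_rank(B)
```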
Theorem 2 and Theorem 6 imply the next characterizations of the solution to the special case of the system (9) with rk ( X ) = rk ( B ) = rk ( C ) = rk ( A ) .
Corollary 12. 
(a) The subsequent constrained equations are equivalent for A ∈ C m × n , X ∈ C n × m , B ∈ C n × k and C ∈ C l × m :
(i)
X A B = B , C A X = C and rk ( X ) = rk ( B ) = rk ( C ) = rk ( A ) ;
(ii)
X A X = X , R ( X ) = R ( B ) , N ( X ) = N ( C ) , R ( A ) = R ( A B ) and N ( A ) = N ( C A ) ;
(iii)
X A X = X , R ( X ) = R ( B ) , N ( X ) = N ( C ) , R ( A ) ⊆ R ( A B ) and N ( C A ) ⊆ N ( A ) ;
(iv)
X A X = X , R ( X ) = R ( B ) , N ( X ) = N ( C ) and A = A B ( A B ) † A = A ( C A ) † C A ;
(v)
X A X = X , A X A = A , R ( X ) = R ( B ) and N ( X ) = N ( C ) , i.e., X ∈ A { 1 , 2 } R ( B ) , N ( C ) .
(b) 
In addition, the constrained system in (i) has the unique solution  X = A R ( B ) , N ( C ) ( 1 , 2 ) .
Using Theorem 3 and Theorem 7, we characterize the solvability of a new system obtained from the system (9) by adding the extra condition A X = X A .
Corollary 13. 
The subsequent constrained equations are equivalent for A , X , B , C ∈ C n × n :
(i)
X A B = B , C A X = C , rk ( X ) = rk ( B ) = rk ( C ) and A X = X A ;
(ii)
X A X = X , R ( X ) = R ( B ) , N ( X ) = N ( C ) and A X = X A ;
(iii)
X 2 A = A X 2 = X , R ( X ) = R ( B ) and N ( X ) = N ( C ) ;
(iv)
X 2 A = A X 2 = X , X = B B † X = X C † C , X A B = B and C A X = C .
Example 3. 
Consider
A = 1 ϵ θ 0 0 1 θ 0 0 0 , B = 0 0 1 1 0 ϵ 3 , C = 1 0 1 1 1 1 .
Let us generate the possible solutions Q in the generic form
Q = q 1 , 1 q 1 , 2 q 1 , 3 q 2 , 1 q 2 , 2 q 2 , 3 q 3 , 1 q 3 , 2 q 3 , 3 ,
where  q i , j , i , j = 1 , … , 3  are unevaluated symbols. The general solution Q to the system of matrix equations  Q A B = B , C A Q = C  is equal to
Q = 0 0 ϵ ϵ θ q 2 , 3 1 θ 0 q 2 , 3 1 θ 2 1 θ q 2 , 3 θ .
Ranks of relevant matrices are equal to
rk ( B ) = rk ( A B ) = rk ( C ) = rk ( C A ) = rk ( A ) = 2 < rk ( Q ) = 3 .
Consequently, the system of matrix equations  Q A B = B , C A Q = C  holds, but
Q A Q = 0 0 0 1 θ 0 1 θ 1 θ 2 1 θ 1 θ 2 ≠ Q .
The important requirement in Corollary 11 is  rk ( B ) = rk ( C ) = rk ( A ) = rk ( X ) . To reduce  rk ( Q )  to  rk ( A ) , we use the matrix X obtained by the replacement  q 2 , 3 → 1 / θ  in Q, which gives
X = 0 0 0 1 θ 0 1 θ 1 θ 2 1 θ 1 θ 2 .
All requirements in Corollary 11 are satisfied, and the matrix equations  X A X = X ,  X = B B † X = X C † C ,  X A B = B  and  C A X = C  are all fulfilled. Furthermore, the matrix equation  A X A = A  is satisfied, which means  X = A R ( B ) , N ( C ) ( 1 , 2 ) .
It is important to mention that  B ( C A B ) † C  coincides with X, which is in accordance with the Urquhart representation [36] and its generalizations from [16].

6. Conclusions

The aim of this paper is to investigate the solvability of systems of constrained matrix equations. The main novelty of this paper is the establishment of correlations between solutions of certain constrained matrix equations and corresponding minimization problems. Some well-known results and several new results for the weak Drazin inverse are obtained in particular cases. Certain characterizations of the Drazin inverse, group inverse and Moore–Penrose inverse are obtained as corollaries.
Implementation of the stated research highlights can be summarized as follows.
-
Conditions (i)–(vi) in Theorem 1 characterize the solutions to (5), while (6) is solved in (12) and (13).
-
Conditions (i)–(vi) in Theorem 5 characterize the solutions to (7), while (8) is solved in (17) and (18).
-
The unique solution to (9) is X = A R ( B ) , N ( C ) ( 2 ) , and conditions (i)–(vi) in Corollary 11 give conditions for the solvability of (9).

Author Contributions

D.M.: writing—original draft, conceptualization, methodology, validation, formal analysis, writing—review & editing. P.S.S.: conceptualization, methodology, validation, formal analysis, investigation, writing—original draft, writing—review & editing. S.D.M.: data curation, validation, investigation, formal analysis, writing-review & editing. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Ministry of Science and Higher Education of the Russian Federation (Grant No. 075-15-2022-1121).

Data Availability Statement

Not applicable.

Acknowledgments

Dijana Mosić and Predrag Stanimirović are supported from the Ministry of Education, Science and Technological Development, Republic of Serbia, Grants 451-03-47/2023-01/200124. Predrag Stanimirović is supported by the Science Fund of the Republic of Serbia, (No. 7750185, Quantitative Automata Models: Fundamental Problems and Applications—QUAM).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

( 2 ϵ + 1 ) u 1 , 1 + ϵ u 2 , 1 + u 3 , 1 ( 2 ϵ + 1 ) u 1 , 2 + ϵ u 2 , 2 + u 3 , 2 ϵ u 1 , 1 + ( 2 ϵ 1 ) u 2 , 1 + ϵ u 3 , 1 ϵ u 1 , 2 + ( 2 ϵ 1 ) u 2 , 2 + ϵ u 3 , 2 ϵ u 1 , 1 + ϵ u 2 , 1 + ( 2 ϵ + 1 ) u 3 , 1 ϵ u 1 , 2 + ϵ u 2 , 2 + ( 2 ϵ + 1 ) u 3 , 2 ϵ u 1 , 1 + u 2 , 1 + u 3 , 1 ϵ u 1 , 2 + u 2 , 2 + u 3 , 2 ϵ 3 u 1 , 1 + u 2 , 1 + u 3 , 1 ϵ 3 u 1 , 2 + u 2 , 2 + u 3 , 2 ϵ 6 u 3 , 2 ϵ 3 + 3 u 3 , 2 ϵ 2 + 3 2 ϵ 2 + ϵ + 1 u 2 , 2 ϵ + 3 u 3 , 2 ϵ + 12 ϵ 3 + 9 ϵ + 3 u 1 , 2 + 2 6 ϵ 3 3 ϵ 2 6 ϵ 1 ϵ 6 u 3 , 2 ϵ 3 + 3 u 3 , 2 ϵ 2 6 ϵ 2 + 3 2 ϵ 2 + ϵ + 1 u 1 , 2 ϵ + 3 u 3 , 2 ϵ 3 4 ϵ 3 4 ϵ 2 ϵ + 1 u 2 , 2 + 2 6 ϵ 3 3 ϵ 2 6 ϵ 1 ( ϵ 1 ) 12 u 3 , 2 ϵ 3 + 3 ( 2 ϵ + 1 ) u 1 , 2 ϵ 2 + 3 ( 2 ϵ + 1 ) u 2 , 2 ϵ 2 + 12 u 3 , 2 ϵ 2 6 ϵ 2 + 3 u 3 , 2 ϵ 6 ϵ 1 6 ϵ 3 3 ϵ 2 6 ϵ 1 ϵ 6 u 3 , 2 ϵ 3 + 3 u 3 , 2 ϵ 2 + 3 2 ϵ 2 + ϵ + 1 u 1 , 2 ϵ + 3 2 ϵ 2 + ϵ + 1 u 2 , 2 ϵ + 3 u 3 , 2 ϵ 3 ϵ 1 6 ϵ 3 3 ϵ 2 6 ϵ 1 ϵ 6 u 3 , 2 ϵ 3 + 3 u 3 , 2 ϵ 2 + 9 2 ϵ 2 + ϵ + 1 u 1 , 2 ϵ + 3 2 ϵ 2 + ϵ + 1 u 2 , 2 ϵ + 3 u 3 , 2 ϵ + 3 ϵ 1 6 ϵ 3 3 ϵ 2 6 ϵ 1 24 ϵ 5 + 28 ϵ 4 2 ϵ 3 17 ϵ 2 8 ϵ 1 u 1 , 2 + ϵ 2 ϵ 2 ϵ 2 + ϵ + 1 + 12 ϵ 4 + 8 ϵ 3 5 ϵ 2 6 ϵ 1 u 2 , 2 + 12 ϵ 4 + 8 ϵ 3 5 ϵ 2 6 ϵ 1 u 3 , 2 2 ϵ 6 ϵ 3 3 ϵ 2 6 ϵ 1 12 u 3 , 2 ϵ 5 + 8 u 3 , 2 ϵ 4 + 26 ϵ 4 5 u 3 , 2 ϵ 3 + 15 ϵ 3 6 u 3 , 2 ϵ 2 9 ϵ 2 + 12 ϵ 4 + 8 ϵ 3 5 ϵ 2 6 ϵ 1 u 1 , 2 ϵ u 3 , 2 ϵ 7 ϵ + 24 ϵ 5 + 4 ϵ 4 18 ϵ 3 7 ϵ 2 + 4 ϵ + 1 u 2 , 2 1 2 ϵ 6 ϵ 3 3 ϵ 2 6 ϵ 1 24 u 3 , 2 ϵ 5 + 28 u 3 , 2 ϵ 4 14 ϵ 4 2 u 3 , 2 ϵ 3 7 ϵ 3 17 u 3 , 2 ϵ 2 + 4 ϵ 2 + 12 ϵ 4 + 8 ϵ 3 5 ϵ 2 6 ϵ 1 u 1 , 2 ϵ + 12 ϵ 4 + 8 ϵ 3 5 ϵ 2 6 ϵ 1 u 2 , 2 ϵ 8 u 3 , 2 ϵ + ϵ u 3 , 2 2 ϵ 6 ϵ 3 3 ϵ 2 6 ϵ 1 12 u 3 , 2 ϵ 4 + 8 u 3 , 2 ϵ 3 + 2 ϵ 3 5 u 3 , 2 ϵ 2 + 13 ϵ 2 6 u 3 , 2 ϵ + 8 ϵ + 12 ϵ 4 + 8 ϵ 3 5 ϵ 2 6 ϵ 1 u 1 , 2 + 12 ϵ 4 + 8 ϵ 3 5 ϵ 2 6 ϵ 1 u 2 , 2 u 3 , 2 + 1 2 6 ϵ 3 3 ϵ 2 6 ϵ 1 12 u 3 , 2 ϵ 4 + 8 u 3 , 2 ϵ 3 10 ϵ 3 5 u 3 , 2 ϵ 2 5 ϵ 2 6 u 3 , 2 ϵ + 6 ϵ + 3 12 ϵ 4 + 8 ϵ 3 5 ϵ 2 6 ϵ 1 u 1 , 2 + 12 ϵ 4 + 8 ϵ 3 5 ϵ 2 6 ϵ 1 u 2 , 2 u 3 , 2 + 1 2 6 ϵ 3 3 ϵ 2 6 ϵ 1 ( 2 ϵ + 1 ) 3 ϵ 2 + 2 3 ϵ 2 
+ ϵ + 2 u 1 , 2 ϵ + 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 u 1 , 1 1 + ϵ 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 u 2 , 1 2 ϵ ( 3 ϵ + 2 ) ( ϵ 1 ) u 2 , 2 + 1 + ϵ 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 u 3 , 1 + 2 ϵ ϵ + 3 ϵ 2 + ϵ + 2 u 3 , 2 + 1 6 ϵ 3 3 ϵ 2 6 ϵ 1 ϵ 3 ϵ 2 + 2 3 ϵ 2 + ϵ + 2 u 1 , 2 ϵ + 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 u 1 , 1 1 + ( 2 ϵ 1 ) 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 u 2 , 1 2 ϵ ( 3 ϵ + 2 ) ( ϵ 1 ) u 2 , 2 + 1 + ϵ 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 u 3 , 1 + 2 ϵ ϵ + 3 ϵ 2 + ϵ + 2 u 3 , 2 + 1 6 ϵ 3 3 ϵ 2 6 ϵ 1 ϵ 3 ϵ 2 + 2 3 ϵ 2 + ϵ + 2 u 1 , 2 ϵ + 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 u 1 , 1 1 + ϵ 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 u 2 , 1 2 ϵ ( 3 ϵ + 2 ) ( ϵ 1 ) u 2 , 2 + 1 + ( 2 ϵ + 1 ) 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 u 3 , 1 + 2 ϵ ϵ + 3 ϵ 2 + ϵ + 2 u 3 , 2 + 1 6 ϵ 3 3 ϵ 2 6 ϵ 1 ϵ 6 u 2 , 1 ϵ 3 6 u 2 , 2 ϵ 3 6 u 3 , 1 ϵ 3 6 u 3 , 2 ϵ 3 + 3 u 2 , 1 ϵ 2 + 2 u 2 , 2 ϵ 2 + 3 u 3 , 1 ϵ 2 + 2 u 3 , 2 ϵ 2 ϵ 2 + 2 3 ϵ 2 + ϵ + 2 u 1 , 2 ϵ + 6 u 2 , 1 ϵ + 4 u 2 , 2 ϵ + 6 u 3 , 1 ϵ + 4 u 3 , 2 ϵ 2 ϵ + 6 ϵ 3 + 3 ϵ 2 + 6 ϵ + 1 u 1 , 1 + u 2 , 1 + u 3 , 1 1 6 ϵ 3 3 ϵ 2 6 ϵ 1 ϵ 6 u 2 , 1 ϵ 3 6 u 2 , 2 ϵ 3 6 u 3 , 1 ϵ 3 6 u 3 , 2 ϵ 3 + 3 u 2 , 1 ϵ 2 + 2 u 2 , 2 ϵ 2 + 3 u 3 , 1 ϵ 2 + 2 u 3 , 2 ϵ 2 + 5 ϵ 2 + 6 3 ϵ 2 + ϵ + 2 u 1 , 2 ϵ + 6 u 2 , 1 ϵ + 4 u 2 , 2 ϵ + 6 u 3 , 1 ϵ + 4 u 3 , 2 ϵ 2 ϵ + 18 ϵ 3 + 9 ϵ 2 + 18 ϵ + 3 u 1 , 1 + u 2 , 1 + u 3 , 1 3 6 ϵ 3 3 ϵ 2 6 ϵ 1 .

Appendix B

2 u 1 , 1 + u 1 , 2 + u 1 , 3 u 1 , 1 + u 1 , 3 2 u 2 , 1 + u 2 , 3 ( 9 ϵ + 2 ) u 3 , 2 9 ϵ 2 + 1 u 2 , 1 + u 2 , 3 9 u 3 , 2 ϵ 11 ϵ + ( 4 18 ϵ ) u 2 , 1 + ( 2 9 ϵ ) u 2 , 3 + 2 u 3 , 2 + 2 9 ϵ + 2 5 ϵ + ( 2 9 ϵ ) u 2 , 1 + ( 2 9 ϵ ) u 2 , 3 + 2 9 ϵ + 2 2 6 ϵ + ( 9 ϵ + 4 ) u 2 , 1 + 2 9 ϵ + 2 + 5 ϵ + ( 9 ϵ + 4 ) u 2 , 3 + 2 9 ϵ + 2 ( 9 ϵ + 4 ) u 3 , 2 9 ϵ 2 1 11 ϵ + ( 9 ϵ + 4 ) u 2 , 1 + ( 9 ϵ + 4 ) u 2 , 3 + 4 9 ϵ + 2 u 1 , 2 + 2 ( 9 ϵ + 2 ) u 1 , 1 + ( 1 9 ϵ ) u 2 , 1 + 1 9 ϵ + 2 + 6 ϵ ( 9 ϵ + 2 ) u 1 , 3 + ( 1 9 ϵ ) u 2 , 3 9 ϵ + 2 + ( 9 ϵ 1 ) u 3 , 2 9 ϵ 2 9 u 2 , 1 ϵ 9 u 2 , 3 ϵ 6 ϵ ( 9 ϵ + 2 ) u 1 , 1 ( 9 ϵ + 2 ) u 1 , 3 + u 2 , 1 + u 2 , 3 + 1 9 ϵ + 2 u 1 , 1 + u 1 , 2 + 2 u 1 , 3 u 1 , 1 + u 1 , 2 + u 1 , 3 u 2 , 1 + 2 u 2 , 3 ( 9 ϵ + 2 ) u 3 , 2 9 ϵ 2 + 1 u 2 , 1 + u 2 , 3 ( 9 ϵ + 2 ) u 3 , 2 9 ϵ 2 + 1 9 u 3 , 2 ϵ 4 ϵ + ( 2 9 ϵ ) u 2 , 1 + ( 4 18 ϵ ) u 2 , 3 + 2 u 3 , 2 + 4 9 ϵ + 2 9 u 3 , 2 ϵ 5 ϵ + ( 2 9 ϵ ) u 2 , 1 + ( 2 9 ϵ ) u 2 , 3 + 2 u 3 , 2 + 2 9 ϵ + 2 6 ϵ + ( 9 ϵ + 4 ) u 2 , 1 + 2 9 ϵ + 2 + 2 5 ϵ + ( 9 ϵ + 4 ) u 2 , 3 + 2 9 ϵ + 2 ( 9 ϵ + 4 ) u 3 , 2 9 ϵ 2 1 6 ϵ + ( 9 ϵ + 4 ) u 2 , 1 + 2 9 ϵ + 2 + 5 ϵ + ( 9 ϵ + 4 ) u 2 , 3 + 2 9 ϵ + 2 ( 9 ϵ + 4 ) u 3 , 2 9 ϵ 2 1 u 1 , 2 + ( 9 ϵ + 2 ) u 1 , 1 + ( 1 9 ϵ ) u 2 , 1 + 1 9 ϵ + 2 2 6 ϵ + ( 9 ϵ + 2 ) u 1 , 3 + ( 9 ϵ 1 ) u 2 , 3 9 ϵ + 2 + ( 9 ϵ 1 ) u 3 , 2 9 ϵ 2 u 1 , 2 + ( 9 ϵ + 2 ) u 1 , 1 + ( 1 9 ϵ ) u 2 , 1 + 1 9 ϵ + 2 + 6 ϵ ( 9 ϵ + 2 ) u 1 , 3 + ( 1 9 ϵ ) u 2 , 3 9 ϵ + 2 + ( 9 ϵ 1 ) u 3 , 2 9 ϵ 2 2 u 1 , 1 + u 1 , 2 + u 1 , 3 2 u 2 , 1 + u 2 , 3 ( 9 ϵ + 2 ) u 3 , 2 9 ϵ 2 + 1 9 u 3 , 2 ϵ 11 ϵ + ( 4 18 ϵ ) u 2 , 1 + ( 2 9 ϵ ) u 2 , 3 + 2 u 3 , 2 + 2 9 ϵ + 2 2 6 ϵ + ( 9 ϵ + 4 ) u 2 , 1 + 2 9 ϵ + 2 + 5 ϵ + ( 9 ϵ + 4 ) u 2 , 3 + 2 9 ϵ + 2 ( 9 ϵ + 4 ) u 3 , 2 9 ϵ 2 1 u 1 , 2 + 2 ( 9 ϵ + 2 ) u 1 , 1 + ( 1 9 ϵ ) u 2 , 1 + 1 9 ϵ + 2 + 6 ϵ ( 9 ϵ + 2 ) u 1 , 3 + ( 1 9 ϵ ) u 2 , 3 9 ϵ + 2 + ( 9 ϵ 1 ) u 3 , 2 9 ϵ 2 .

References

  1. Penrose, R. A generalized inverse for matrices. Proc. Cambridge Philos. Soc. 1955, 51, 406–413. [Google Scholar] [CrossRef] [Green Version]
  2. Getson, A.J.; Hsuan, F.C. {2}-Inverses and Their Statistical Applications; Lecture Notes in Statistics 47; Springer: Berlin/Heidelberg, Germany, 1988. [Google Scholar]
  3. Rao, C.R. A note on a generalized inverse of a matrix with applications to problems in mathematical statistics. J. R. Soc. Ser. B 1962, 24, 152–158. [Google Scholar] [CrossRef]
  4. Ben-Israel, A.; Greville, T.N.E. Generalized Inverses: Theory and Applications, 2nd ed.; Springer: New York, NY, USA, 2003. [Google Scholar]
  5. Nashed, M.Z. Generalized Inverse and Applications; Academic Press: New York, NY, USA, 1976. [Google Scholar]
  6. Wei, Y. A characterization and representation of the generalized inverse A T , S ( 2 ) and its applications. Linear Algebra Appl. 1998, 280, 87–96. [Google Scholar] [CrossRef] [Green Version]
  7. Wei, Y.; Wu, H. The representation and approximation for the generalized inverse A T , S ( 2 ) . Appl. Math. Comput. 2003, 135, 263–276. [Google Scholar] [CrossRef]
  8. Yang, H.; Liu, D. The representation of generalized inverse A T , S ( 2 ) and its applications. J. Comput. Appl. Math. 2009, 224, 204–209. [Google Scholar] [CrossRef] [Green Version]
  9. Zheng, B.; Wang, G. Representation and approximation for generalized inverse A T , S ( 2 ) : Revisited. J. Appl. Math. Comput. 2006, 22, 225–240. [Google Scholar]
  10. Cao, C.G.; Zhang, X. The generalized inverse A T , * ( 2 ) and its applications. J. Appl. Math. Comput. 2003, 11, 155–164. [Google Scholar] [CrossRef]
  11. Wang, G.R.; Wei, Y.; Qiao, S. Generalized Inverses: Theory and Computations; Science Press: Beijing, China; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  12. Wei, Y.; Stanimirović, P.S.; Petković, M. Numerical and Symbolic Computations of Generalized Inverses; World Scientific: Singapore, 2018. [Google Scholar]
  13. Sheng, X.; Chen, G. Full-rank representation of generalized inverse A T , S ( 2 ) and its applications. Comput. Math. Appl. 2007, 54, 1422–1430. [Google Scholar] [CrossRef] [Green Version]
  14. Sheng, X.; Chen, G.L.; Gong, Y. The representation and computation of generalized inverse A T , S ( 2 ) . J. Comput. Appl. Math. 2008, 213, 248–257. [Google Scholar] [CrossRef] [Green Version]
  15. Stanimirović, P.S.; Ćirić, M.; Stojanović, I.; Gerontitis, D. Conditions for existence, representations and computation of matrix generalized inverses. Complexity 2017, 2017, 6429725. [Google Scholar] [CrossRef] [Green Version]
  16. Stanimirović, P.S.; Ćirić, M.; Lastra, A.; Sendra, J.R.; Sendra, J. Representations and symbolic computation of generalized inverses over fields. Appl. Math. Comput. 2021, 406, 126287. [Google Scholar] [CrossRef]
  17. Stanimirović, P.S.; Ćirić, M.; Lastra, A.; Sendra, J.R.; Sendra, J. Representations and geometrical properties of generalized inverses over fields. Linear Multilinear Algebra. [CrossRef]
  18. Stanimirović, P.S.; Soleymani, F.; Haghani, F.K. Computing outer inverses by scaled matrix iterations. J. Comput. Appl. Math. 2016, 296, 89–101. [Google Scholar] [CrossRef]
  19. Ma, X.; Nashine, H.K.; Shi, S.; Soleymani, F. Exploiting higher computational efficiency index for computing outer generalized inverses. Appl. Numer. Math. 2022, 175, 18–28. [Google Scholar] [CrossRef]
  20. Kansal, M.; Kumar, S.; Kaur, M. An efficient matrix iteration family for finding the generalized outer inverse. Appl. Math. Comput. 2022, 430, 127292. [Google Scholar] [CrossRef]
  21. Petković, M.; Krstić, M.A.; Rajković, K.P. Rapid generalized Schultz iterative methods for the computation of outer inverses. J. Comput. Appl. Math. 2018, 344, 572–584. [Google Scholar] [CrossRef]
  22. Cordero, A.; Soto-Quiros, P.; Torregrosa, J.R. A general class of arbitrary order iterative methods for computing generalized inverses. Appl. Math. Comput. 2021, 409, 126381. [Google Scholar] [CrossRef]
  23. Dehghan, M.; Shirilord, A. A fast computational algorithm for computing outer pseudo-inverses with numerical experiments. J. Comput. Appl. Math. 2022, 408, 114128. [Google Scholar] [CrossRef]
  24. Campbell, S.L.; Meyer, C.D., Jr. Generalized Inverses of Linear Transformations; Dover Publications, Inc.: New York, NY, USA, 1991; Corrected Reprint of the 1979 Original; SIAM: Philadelphia, PA, USA, 2008. [Google Scholar]
  25. Levine, J.; Hartwig, R.E. Applications of Drazin inverse to the Hill cryptographic systems. Cryptologia 1980, 4, 71–85. [Google Scholar] [CrossRef]
  26. Prasad, K.M.; Mohana, K.S. Core-EP inverse. Linear Multilinear Algebra 2014, 62, 792–802. [Google Scholar] [CrossRef]
  27. Baksalary, O.M.; Trenkler, G. Core inverse of matrices. Linear Multilinear Algebra 2010, 58, 681–697. [Google Scholar] [CrossRef]
  28. Malik, S.B.; Thome, N. On a new generalized inverse for matrices of an arbitrary index. Appl. Math. Comput. 2014, 226, 575–580. [Google Scholar] [CrossRef]
  29. Zhou, Y.; Chen, J.; Zhou, M. m-weak group inverses in a ring with involution. RACSAM 2021, 115, 2. [Google Scholar] [CrossRef]
  30. Wang, H.; Chen, J. Weak group inverse. Open Math. 2018, 16, 1218–1232. [Google Scholar] [CrossRef]
  31. Ferreyra, D.E.; Malik, S.B. A generalization of the group inverse. Quaest. Math 2023. [Google Scholar] [CrossRef]
  32. Campbell, S.L.; Meyer, C.D. Weak Drazin inverses. Linear Algebra Appl. 1978, 20, 167–178. [Google Scholar] [CrossRef] [Green Version]
  33. Wu, C.; Chen, J. Minimal rank weak Drazin inverses: A class of outer inverses with prescribed range. Electron. Linear Algebra 2023, 39, 1–16. [Google Scholar] [CrossRef]
  34. Mosić, D.; Stanimirović, P.S. Existence and Representation of Solutions to Some Constrained Systems of Matrix Equations. In Matrix and Operator Equations and Applications; Book Series: Mathematics Online First Collections; Moslehian, M.S., Ed.; Springer: Cham, Switzerland, 2023; Available online: https://link.springer.com/book/9783031253850 (accessed on 1 January 2023).
  35. Deng, C. On the solutions of operator equation CAX = C = XAC. J. Math. Anal. Appl. 2013, 398, 664–670. [Google Scholar] [CrossRef]
  36. Urquhart, N.S. Computation of generalized inverse matrices which satisfy specified conditions. SIAM Rev. 1968, 10, 216–218. [Google Scholar] [CrossRef]