
Riemann–Hilbert Problems and Soliton Solutions of Type ($\lambda^*$, $-\lambda^*$) Reduced Nonlocal Integrable mKdV Hierarchies

1 Department of Mathematics, Zhejiang Normal University, Jinhua 321004, China
2 Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
3 Department of Mathematics and Statistics, University of South Florida, Tampa, FL 33620, USA
4 School of Mathematical and Statistical Sciences, North-West University, Mafikeng Campus, Private Bag X2046, Mmabatho 2735, South Africa
Mathematics 2022, 10(6), 870; https://doi.org/10.3390/math10060870
Submission received: 6 February 2022 / Revised: 28 February 2022 / Accepted: 4 March 2022 / Published: 9 March 2022

Abstract: Reduced nonlocal matrix integrable modified Korteweg–de Vries (mKdV) hierarchies are presented via taking two transpose-type group reductions in the matrix Ablowitz–Kaup–Newell–Segur (AKNS) spectral problems. One reduction is local, which replaces the spectral parameter $\lambda$ with its complex conjugate $\lambda^*$, and the other one is nonlocal, which replaces the spectral parameter $\lambda$ with its negative complex conjugate $-\lambda^*$. Riemann–Hilbert problems and thus inverse scattering transforms are formulated from the reduced matrix spectral problems. In view of the specific distribution of eigenvalues and adjoint eigenvalues, soliton solutions are constructed from the reflectionless Riemann–Hilbert problems.

1. Introduction

Starting from matrix spectral problems, one can generate integrable hierarchies of equations, based on the corresponding zero curvature equations. Among typical examples are the nonlinear Schrödinger (NLS) hierarchy and the modified Korteweg–de Vries (mKdV) hierarchy. Specific group reductions on spectral matrices can yield reduced integrable hierarchies. In soliton theory, there are a few effective methods to solve integrable equations, which include the inverse scattering transforms [1,2], the Darboux transformation [3], and the Hirota bilinear method [4]. A kind of multiple wave solution, called soliton solutions, can be presented explicitly by the Hirota bilinear method [5,6,7]. Riemann–Hilbert problems, formulated from the associated given matrix spectral problems, also provide a powerful technique that allows us to solve integrable equations, particularly to present soliton solutions [8].
Let us consider the (1+1)-dimensional case. Let $x$ and $t$ be two independent variables, $\lambda$ a spectral parameter, and $u = u(x,t)$ a column vector of dependent variables. Take two square matrices, $U = U(u,\lambda)$ and $V = V(u,\lambda)$, from a loop algebra to form a Lax pair consisting of spatial and temporal matrix spectral problems:
$$ -\mathrm{i}\phi_x = U\phi = U(u,\lambda)\,\phi, \qquad -\mathrm{i}\phi_t = V\phi = V(u,\lambda)\,\phi, \qquad (1) $$
where $\phi$ is a square matrix eigenfunction and $\mathrm{i}$ is the unit imaginary number. We assume that the compatibility condition of the above two matrix spectral problems, namely the zero curvature equation
$$ U_t - V_x + \mathrm{i}\,[U, V] = 0, \qquad (2) $$
where $[\cdot,\cdot]$ denotes the matrix commutator, gives us an integrable equation:
$$ u_t = K(u). \qquad (3) $$
For such integrable equations, Lie algebraic structures behind matrix spectral problems have been explored to generate their infinitely many symmetries [9]. The adjoint Lax pair of the matrix spectral problems in (1) is defined by:
$$ \mathrm{i}\tilde\phi_x = \tilde\phi U = \tilde\phi\, U(u,\lambda), \qquad \mathrm{i}\tilde\phi_t = \tilde\phi V = \tilde\phi\, V(u,\lambda), \qquad (4) $$
where ϕ ˜ is a square matrix eigenfunction, too. Their compatibility condition leads to the same zero curvature equation as (2), and so, it does not bring any additional equations. Both the Lax pair and the adjoint Lax pair lay the basis for the subsequent analyses in the formulation of Riemann–Hilbert problems and soliton solutions.
We state the standard procedure for establishing Riemann–Hilbert problems as follows. It begins with a pair of matrix spectral problems in (1) with:
$$ U(u,\lambda) = A(\lambda) + P(u,\lambda), \qquad V(u,\lambda) = B(\lambda) + Q(u,\lambda), \qquad (5) $$
where $A, B$ are commuting constant square matrices, and $P, Q$ are trace-less square matrices satisfying $\deg_\lambda(P) < \deg_\lambda(A)$ and $\deg_\lambda(Q) < \deg_\lambda(B)$. In order to formulate a Riemann–Hilbert problem for the corresponding integrable Equation (3), we adopt an equivalent Lax pair of matrix spectral problems:
$$ \psi_x = \mathrm{i}\,[A(\lambda), \psi] + \check P(u,\lambda)\,\psi, \qquad \psi_t = \mathrm{i}\,[B(\lambda), \psi] + \check Q(u,\lambda)\,\psi, \qquad (6) $$
where $\check P = \mathrm{i}P$, $\check Q = \mathrm{i}Q$, and an equivalent adjoint Lax pair consisting of the following matrix spectral problems:
$$ \mathrm{i}\tilde\psi_x = [\tilde\psi, A(\lambda)] + \tilde\psi\, P(u,\lambda), \qquad \mathrm{i}\tilde\psi_t = [\tilde\psi, B(\lambda)] + \tilde\psi\, Q(u,\lambda), \qquad (7) $$
where $\psi$ and $\tilde\psi$ also denote square matrix eigenfunctions. The equivalence between the matrix spectral problems in (1) and the matrix spectral problems in (6) is a consequence of the commutativity of $A$ and $B$. From $\operatorname{tr} P = \operatorname{tr} Q = 0$, we obtain the properties $(\det\psi)_x = (\det\psi)_t = 0$. Obviously, there are the relations $\tilde\phi = \phi^{-1}$ and $\tilde\psi = \psi^{-1}$. There also exists a direct connection between the matrix spectral problems in (1) and the matrix spectral problems in (6):
$$ \phi = \psi E_g, \qquad E_g = \mathrm{e}^{\,\mathrm{i}A(\lambda)x + \mathrm{i}B(\lambda)t}. \qquad (8) $$
It is crucial to note that for the pair and the adjoint pair of matrix spectral problems in (6) and (7), we can require the asymptotic conditions:
$$ \psi^\pm,\ \tilde\psi^\pm \to I, \quad \text{when } x \text{ or } t \to \pm\infty, $$
where $I$ stands for the identity matrix. Then, based on those matrix eigenfunctions $\psi^\pm$ and $\tilde\psi^\pm$, we can pick the entries to form two generalized matrix Jost solutions $T^\pm(x,t,\lambda)$, which are analytic in the upper and lower half-planes, $\mathbb{C}^+$ and $\mathbb{C}^-$, and continuous in the closed upper and lower half-planes, $\bar{\mathbb{C}}^+$ and $\bar{\mathbb{C}}^-$, respectively, and present a Riemann–Hilbert problem with a jump on the real line:
$$ G^+(x,t,\lambda) = G^-(x,t,\lambda)\,G_0(x,t,\lambda), \qquad \lambda \in \mathbb{R}. \qquad (9) $$
The two unimodular generalized matrix Jost solutions, $G^+$ and $G^-$, and the jump matrix, $G_0$, are all generated from the generalized Jost solutions $T^+$ and $T^-$, and $G^+$ and $G^-$ have the same analyticity properties as $T^+$ and $T^-$, respectively. Moreover, the jump matrix, $G_0$, carries all essential scattering data, generated from the scattering matrix $S_g(\lambda)$ of the associated matrix spectral problems, which is defined via:
$$ \psi^- E_g = \psi^+ E_g\, S_g(\lambda). $$
Exact solutions to the resulting Riemann–Hilbert problem (9) present the required generalized matrix Jost solutions to recover the potential of the matrix spectral problems, and thus, solutions to the corresponding integrable Equation (3). Such solutions, $G^+$ and $G^-$, can be determined through an application of the Sokhotski–Plemelj formula to the difference of $G^+$ and $G^-$. Observing the asymptotic behaviors of the generalized matrix Jost solutions $G^\pm$ as $\lambda \to \infty$ leads to a recovery of the potential. The whole procedure also generates the corresponding inverse scattering transforms. Soliton solutions correspond to the reflectionless case, and they are constructed by solving the reflectionless Riemann–Hilbert problems or computing the corresponding reflectionless inverse scattering transforms.
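The Sokhotski–Plemelj formula invoked above can be illustrated numerically. The following sketch (Python/NumPy; the density f, the window, and all parameter values are our own illustrative assumptions, not taken from the paper) checks that the boundary value of a Cauchy integral from the upper half-plane splits into a principal value plus $\mathrm{i}\pi$ times the density at the evaluation point:

```python
import numpy as np

# Sokhotski-Plemelj: lim_{eps->0+} \int f(y)/(y - x0 - i*eps) dy
#                    = P.V. \int f(y)/(y - x0) dy + i*pi*f(x0).
def trap(y, x):  # simple trapezoid rule
    return np.sum((y[1:] + y[:-1]) / 2 * np.diff(x))

x = np.linspace(-40.0, 40.0, 400001)
f = np.exp(-x**2) * (1.0 + 0.5 * x)      # assumed smooth, decaying density
x0, eps = 0.30001, 1e-2                  # x0 chosen off the grid
f0 = np.exp(-x0**2) * (1.0 + 0.5 * x0)

# boundary value approached from the upper half-plane (small eps > 0)
boundary = trap(f / (x - x0 - 1j * eps), x)

# principal value on the finite window: subtract f0 and add the exact log term
pv = trap((f - f0) / (x - x0), x) + f0 * np.log((40.0 - x0) / (x0 + 40.0))

print(abs(boundary.real - pv), abs(boundary.imag - np.pi * f0))
```

Both printed deviations shrink as eps decreases (up to quadrature error); this plus-side boundary behavior is exactly what recovers $G^+$ from the jump data in the reflectionless constructions later.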
It is also known that we can generate reduced integrable equations under group reductions of matrix spectral problems, both local (see, e.g., [10]) and nonlocal (see, e.g., [11,12,13,14,15]). One class of local group reductions is defined by:
$$ U^\dagger(x,t,\lambda^*) = \big(U(x,t,\lambda^*)\big)^\dagger = \Sigma\, U(x,t,\lambda)\,\Sigma^{-1}, \qquad (10) $$
and one class of nonlocal group reductions reads:
$$ U^\dagger(-x,-t,-\lambda^*) = \big(U(-x,-t,-\lambda^*)\big)^\dagger = -\Delta\, U(x,t,\lambda)\,\Delta^{-1}, \qquad (11) $$
where $\dagger$ stands for the Hermitian transpose, $\Sigma, \Delta$ are two constant invertible Hermitian matrices, and $\lambda^*$ is the complex conjugate of $\lambda$. The first class of reductions in (10) works for both the NLS equations and the mKdV equations, but the second class of reductions in (11) works only for the mKdV equations. In those reductions, the crucial point is to replace the spectral parameter $\lambda$ with its complex conjugate, $\lambda^*$, and its negative complex conjugate, $-\lambda^*$, respectively. Each of them can yield reduced integrable equations from zero curvature equations.
In this paper, we would like to consider the above two classes of group reductions (10) and (11) for the matrix Ablowitz–Kaup–Newell–Segur (AKNS) spectral problems simultaneously, to generate reduced nonlocal matrix integrable mKdV hierarchies and to establish their Riemann–Hilbert problems and inverse scattering transforms. The starting point is a kind of arbitrary-order matrix AKNS spectral problems. The corresponding reflectionless Riemann–Hilbert problems are used to construct soliton solutions, by taking advantage of the specific distribution of eigenvalues and adjoint eigenvalues. The last section gives concluding remarks.

2. Reduced Nonlocal Matrix Integrable mKdV Hierarchies

2.1. The Matrix AKNS Integrable Hierarchies Revisited

To present reduced nonlocal matrix integrable mKdV hierarchies, let us recall the construction of the integrable hierarchies of matrix AKNS equations (see, e.g., [16]).
Assume that $m, n \ge 1$ are two given integers, and $p, q$ are two matrix potentials:
$$ p = p(x,t) = (p_{jk})_{m\times n}, \qquad q = q(x,t) = (q_{kj})_{n\times m}, \qquad (12) $$
$\lambda$ is a spectral parameter, $I_s$ denotes the identity matrix of size $s > 0$, and $\alpha_1, \alpha_2$ and $\beta_1, \beta_2$ are two arbitrary pairs of distinct real constants. Each of the matrix AKNS integrable hierarchies is constructed from the matrix AKNS spectral problems with matrix potentials:
$$ -\mathrm{i}\phi_x = U\phi = U(u,\lambda)\,\phi, \qquad -\mathrm{i}\phi_t = V^{[r]}\phi = V^{[r]}(u,\lambda)\,\phi, \quad r\ge0, \qquad (13) $$
where the Lax pair of spectral matrices is given by:
$$ U = \lambda\Lambda + P, \qquad V^{[r]} = \lambda^r\Omega + Q^{[r]}. \qquad (14) $$
In this pair of spectral matrices, $\Lambda$ and $\Omega$ are two constant square matrices:
$$ \Lambda = \operatorname{diag}(\alpha_1 I_m,\ \alpha_2 I_n), \qquad \Omega = \operatorname{diag}(\beta_1 I_m,\ \beta_2 I_n), \qquad (15) $$
and the other two involved square matrices are defined by:
$$ P = P(u) = \begin{pmatrix} 0 & p \\ q & 0 \end{pmatrix}, \qquad (16) $$
which is called the potential matrix, and:
$$ Q^{[r]} = Q^{[r]}(u,\lambda) = \sum_{s=0}^{r-1}\lambda^s \begin{pmatrix} a^{[r-s]} & b^{[r-s]} \\ c^{[r-s]} & d^{[r-s]} \end{pmatrix}, \qquad (17) $$
where $a^{[s]}$, $b^{[s]}$, $c^{[s]}$, and $d^{[s]}$ will be defined recursively later.
Evidently, when $m = 1$, the matrix spectral problems in (13) are reduced to the multicomponent case, and if there is only a pair of nonzero potentials (for example, $p_{jk}$ and $q_{kj}$), the matrix spectral problems in (13) are reduced to the standard AKNS case [17].
As normal, to generate an associated matrix AKNS integrable hierarchy, let us first solve the stationary zero curvature equation:
$$ W_x = \mathrm{i}\,[U, W], \qquad (18) $$
for a given spectral matrix $U$ defined as in (14). We look for a solution $W$ of the form:
$$ W = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad (19) $$
where $a, b, c, d$ are $m\times m$, $m\times n$, $n\times m$, and $n\times n$ matrices, respectively. Obviously, the stationary zero curvature Equation (18) precisely presents:
$$ a_x = \mathrm{i}(pc - bq), \quad b_x = \mathrm{i}(\alpha\lambda b + pd - ap), \quad c_x = \mathrm{i}(-\alpha\lambda c + qa - dq), \quad d_x = \mathrm{i}(qb - cp), \qquad (20) $$
where $\alpha = \alpha_1 - \alpha_2$. Let us take $W$ as a formal Laurent series:
$$ W = \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \sum_{s=0}^{\infty} W_s\,\lambda^{-s}, \qquad W_s = W_s(p,q) = \begin{pmatrix} a^{[s]} & b^{[s]} \\ c^{[s]} & d^{[s]} \end{pmatrix}, \quad s\ge0, \qquad (21) $$
and then, the system (20) leads equivalently to the recursion relations:
$$ b^{[0]} = 0, \quad c^{[0]} = 0, \quad a^{[0]}_x = 0, \quad d^{[0]}_x = 0, \qquad (22) $$
$$ b^{[s+1]} = \frac{1}{\alpha}\big(-\mathrm{i}\,b^{[s]}_x - p\,d^{[s]} + a^{[s]}p\big), \quad s\ge0, \qquad (23) $$
$$ c^{[s+1]} = \frac{1}{\alpha}\big(\mathrm{i}\,c^{[s]}_x + q\,a^{[s]} - d^{[s]}q\big), \quad s\ge0, \qquad (24) $$
and
$$ a^{[s]}_x = \mathrm{i}\big(p\,c^{[s]} - b^{[s]}q\big), \qquad d^{[s]}_x = \mathrm{i}\big(q\,b^{[s]} - c^{[s]}p\big), \quad s\ge1. \qquad (25) $$
Now, let us take the initial values for $a^{[0]}$ and $d^{[0]}$:
$$ a^{[0]} = \beta_1 I_m, \qquad d^{[0]} = \beta_2 I_n, \qquad (26) $$
and select zero constants of integration in (25), which means that we impose:
$$ W_s\big|_{p,q=0} = 0, \quad s\ge1. \qquad (27) $$
In this way, with $a^{[0]}$ and $d^{[0]}$ given by (26), we see that:
$$ V^{[r]} = (\lambda^r W)_+ := \sum_{s=0}^{r}\lambda^{r-s}\,W_s, \quad r\ge0; \qquad (28) $$
and can uniquely determine all matrices $W_s$, $s\ge1$, defined recursively. For instance, one can work out that:
$$ b^{[1]} = \frac{\beta}{\alpha}\,p, \quad c^{[1]} = \frac{\beta}{\alpha}\,q, \quad a^{[1]} = 0, \quad d^{[1]} = 0; \qquad (29) $$
$$ b^{[2]} = -\frac{\beta}{\alpha^2}\,\mathrm{i}\,p_x, \quad c^{[2]} = \frac{\beta}{\alpha^2}\,\mathrm{i}\,q_x, \quad a^{[2]} = -\frac{\beta}{\alpha^2}\,pq, \quad d^{[2]} = \frac{\beta}{\alpha^2}\,qp; \qquad (30) $$
$$ b^{[3]} = -\frac{\beta}{\alpha^3}\,(p_{xx} + 2pqp), \quad c^{[3]} = -\frac{\beta}{\alpha^3}\,(q_{xx} + 2qpq), \quad a^{[3]} = -\frac{\beta}{\alpha^3}\,\mathrm{i}\,(pq_x - p_xq), \quad d^{[3]} = -\frac{\beta}{\alpha^3}\,\mathrm{i}\,(qp_x - q_xp); \qquad (31) $$
and
$$ b^{[4]} = \frac{\beta}{\alpha^4}\,\mathrm{i}\,(p_{xxx} + 3pqp_x + 3p_xqp), \quad c^{[4]} = -\frac{\beta}{\alpha^4}\,\mathrm{i}\,(q_{xxx} + 3q_xpq + 3qpq_x), \quad a^{[4]} = \frac{\beta}{\alpha^4}\,\big[\,3(pq)^2 + pq_{xx} - p_xq_x + p_{xx}q\,\big], \quad d^{[4]} = -\frac{\beta}{\alpha^4}\,\big[\,3(qp)^2 + qp_{xx} - q_xp_x + q_{xx}p\,\big]; \qquad (32) $$
where $\beta = \beta_1 - \beta_2$. Particularly, we can obtain:
$$ Q^{[1]} = \frac{\beta}{\alpha}\begin{pmatrix} 0 & p \\ q & 0 \end{pmatrix} = \frac{\beta}{\alpha}\,P, \qquad (33) $$
$$ Q^{[2]} = \frac{\beta}{\alpha}\,\lambda\begin{pmatrix} 0 & p \\ q & 0 \end{pmatrix} - \frac{\beta}{\alpha^2}\begin{pmatrix} pq & \mathrm{i}\,p_x \\ -\mathrm{i}\,q_x & -qp \end{pmatrix} = \frac{\beta}{\alpha}\,\lambda P - \frac{\beta}{\alpha^2}\,I_{m,n}\,(P^2 + \mathrm{i}P_x), \qquad (34) $$
and:
$$ Q^{[3]} = \frac{\beta}{\alpha}\,\lambda^2\begin{pmatrix} 0 & p \\ q & 0 \end{pmatrix} - \frac{\beta}{\alpha^2}\,\lambda\begin{pmatrix} pq & \mathrm{i}\,p_x \\ -\mathrm{i}\,q_x & -qp \end{pmatrix} - \frac{\beta}{\alpha^3}\begin{pmatrix} \mathrm{i}\,(pq_x - p_xq) & p_{xx} + 2pqp \\ q_{xx} + 2qpq & \mathrm{i}\,(qp_x - q_xp) \end{pmatrix} = \frac{\beta}{\alpha}\,\lambda^2 P - \frac{\beta}{\alpha^2}\,\lambda\, I_{m,n}\,(P^2 + \mathrm{i}P_x) - \frac{\beta}{\alpha^3}\,\big(\mathrm{i}\,[P, P_x] + P_{xx} + 2P^3\big), \qquad (35) $$
in which $I_{m,n} = \operatorname{diag}(I_m, -I_n)$. Based on (25), we can easily derive, from (23) and (24), a recursion relation for determining $b^{[s]}$ and $c^{[s]}$:
$$ \begin{pmatrix} c^{[s+1]} \\ b^{[s+1]} \end{pmatrix} = \Psi \begin{pmatrix} c^{[s]} \\ b^{[s]} \end{pmatrix}, \quad s\ge1, \qquad (36) $$
where the matrix operator $\Psi$ reads:
$$ \Psi = \frac{\mathrm{i}}{\alpha}\begin{pmatrix} \partial_x + q\,\partial_x^{-1}(p\,\cdot) + [\partial_x^{-1}(\cdot\,p)]\,q & \ -\,q\,\partial_x^{-1}(\cdot\,q) - [\partial_x^{-1}(q\,\cdot)]\,q \\ p\,\partial_x^{-1}(\cdot\,p) + [\partial_x^{-1}(p\,\cdot)]\,p & \ -\,\partial_x - p\,\partial_x^{-1}(q\,\cdot) - [\partial_x^{-1}(\cdot\,q)]\,p \end{pmatrix}. \qquad (37) $$
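In the scalar reduction $m = n = 1$, the recursion relations (23)–(25) and the closed forms listed above can be cross-checked symbolically. The following sketch (Python/SymPy; the symbol names are ours, and only the first few steps are checked) verifies that the listed $b^{[s]}, c^{[s]}, a^{[s]}, d^{[s]}$ satisfy the recursion:

```python
import sympy as sp

# Scalar case m = n = 1: verify the recursion of the stationary zero curvature equation.
x, al, be = sp.symbols('x alpha beta')
p = sp.Function('p')(x)
q = sp.Function('q')(x)
I = sp.I

b = {1: be/al*p, 2: -be/al**2*I*p.diff(x), 3: -be/al**3*(p.diff(x, 2) + 2*p**2*q)}
c = {1: be/al*q, 2: be/al**2*I*q.diff(x), 3: -be/al**3*(q.diff(x, 2) + 2*q**2*p)}
a = {1: 0, 2: -be/al**2*p*q, 3: -be/al**3*I*(p*q.diff(x) - p.diff(x)*q)}
d = {1: 0, 2: be/al**2*p*q, 3: -be/al**3*I*(q*p.diff(x) - q.diff(x)*p)}

# a_x^[s] = i(p c^[s] - b^[s] q),  d_x^[s] = i(q b^[s] - c^[s] p)
for s in (2, 3):
    assert sp.simplify(sp.diff(a[s], x) - I*(p*c[s] - b[s]*q)) == 0
    assert sp.simplify(sp.diff(d[s], x) - I*(q*b[s] - c[s]*p)) == 0

# b^[s+1] = (1/alpha)(-i b_x^[s] - p d^[s] + a^[s] p)
# c^[s+1] = (1/alpha)( i c_x^[s] + q a^[s] - d^[s] q)
for s in (1, 2):
    assert sp.simplify(b[s+1] - (-I*b[s].diff(x) - p*d[s] + a[s]*p)/al) == 0
    assert sp.simplify(c[s+1] - ( I*c[s].diff(x) + q*a[s] - d[s]*q)/al) == 0
print("recursion verified for s = 1, 2, 3")
```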
Finally, we see that the compatibility conditions of the two matrix spectral problems in (13), i.e., the zero curvature equations
$$ U_t - V^{[r]}_x + \mathrm{i}\,[U, V^{[r]}] = 0, \quad r\ge0, \qquad (38) $$
engender one so-called matrix AKNS integrable hierarchy:
$$ p_t = \mathrm{i}\alpha\, b^{[r+1]}, \qquad q_t = -\mathrm{i}\alpha\, c^{[r+1]}, \quad r\ge0. \qquad (39) $$
The first two nonlinear integrable equations in this hierarchy give us the AKNS matrix NLS equations:
$$ p_t = -\frac{\beta}{\alpha^2}\,\mathrm{i}\,(p_{xx} + 2pqp), \qquad q_t = \frac{\beta}{\alpha^2}\,\mathrm{i}\,(q_{xx} + 2qpq), \qquad (40) $$
and the AKNS matrix mKdV equations:
$$ p_t = -\frac{\beta}{\alpha^3}\,(p_{xxx} + 3pqp_x + 3p_xqp), \qquad q_t = -\frac{\beta}{\alpha^3}\,(q_{xxx} + 3q_xpq + 3qpq_x), \qquad (41) $$
where the two matrix potentials, $p$ and $q$, are defined by (12).
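As an independent consistency check, one can verify symbolically, for the scalar case $m = n = 1$ and $r = 2$, that the matrix NLS equations (40) are exactly the zero curvature equation with $V^{[2]} = \lambda^2\Omega + Q^{[2]}$. A sketch (Python/SymPy; the notation is ours):

```python
import sympy as sp

# Check U_t - V_x + i[U, V] = 0 for r = 2, m = n = 1, with the NLS flow for p, q.
x, t, lam = sp.symbols('x t lambda')
a1, a2, b1, b2 = sp.symbols('alpha_1 alpha_2 beta_1 beta_2')
al, be = a1 - a2, b1 - b2
p = sp.Function('p')(x, t)
q = sp.Function('q')(x, t)
I = sp.I

Lam, Om = sp.diag(a1, a2), sp.diag(b1, b2)
P = sp.Matrix([[0, p], [q, 0]])
U = lam*Lam + P
Imn = sp.diag(1, -1)
Q2 = be/al*lam*P - be/al**2*Imn*(P**2 + I*P.diff(x))
V = lam**2*Om + Q2

pt = -be/al**2*I*(p.diff(x, 2) + 2*p**2*q)   # NLS flow for p
qt =  be/al**2*I*(q.diff(x, 2) + 2*q**2*p)   # NLS flow for q
ZC = U.diff(t) - V.diff(x) + I*(U*V - V*U)
ZC = ZC.subs({sp.Derivative(p, t): pt, sp.Derivative(q, t): qt})
assert sp.simplify(ZC) == sp.zeros(2, 2)
print("zero curvature equation holds for the NLS flow")
```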
When m = 1 and n = 2 , the matrix NLS Equation (40) can be reduced to the Manakov system [18], under a group reduction of type (10).
By a theory of Lax operator algebras [9], we can directly show that (39) defines a hierarchy of commuting flows, which implies that each equation in the hierarchy (39) possesses infinitely many symmetries. Moreover, an application of the trace identity [19] can show that every nonlinear equation in (39) possesses a bi-Hamiltonian structure and thus infinitely many conservation laws, which commute under both Poisson brackets associated with the bi-Hamiltonian structure.

2.2. Reduced Nonlocal Matrix Integrable mKdV Hierarchies

Let us now construct a kind of reduced nonlocal integrable mKdV hierarchies by two group reductions of the matrix AKNS spectral problems in (13), of which one is local and the other is nonlocal.
We take two pairs of constant invertible Hermitian matrices $\Sigma_1, \Sigma_2$ and $\Delta_1, \Delta_2$, and consider two classes of group reductions for the spectral matrix $U$ defined as in (14):
$$ U^\dagger(x,t,\lambda^*) = \big(U(x,t,\lambda^*)\big)^\dagger = \Sigma\, U(x,t,\lambda)\,\Sigma^{-1}, \qquad (42) $$
and:
$$ U^\dagger(-x,-t,-\lambda^*) = \big(U(-x,-t,-\lambda^*)\big)^\dagger = -\Delta\, U(x,t,\lambda)\,\Delta^{-1}, \qquad (43) $$
where $\Sigma, \Delta$ are two constant invertible Hermitian matrices given by:
$$ \Sigma = \begin{pmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{pmatrix}, \qquad \Sigma_j^\dagger = \Sigma_j, \quad j = 1, 2, \qquad (44) $$
and:
$$ \Delta = \begin{pmatrix} \Delta_1 & 0 \\ 0 & \Delta_2 \end{pmatrix}, \qquad \Delta_j^\dagger = \Delta_j, \quad j = 1, 2. \qquad (45) $$
These two classes of reductions precisely require the local potential reduction:
$$ P^\dagger(x,t) = \Sigma\, P(x,t)\,\Sigma^{-1}, \qquad (46) $$
and the nonlocal potential reduction:
$$ P^\dagger(-x,-t) = -\Delta\, P(x,t)\,\Delta^{-1}, \qquad (47) $$
which allow us to make the local and nonlocal reductions for the matrix potentials:
$$ q(x,t) = \Sigma_2^{-1}\, p^\dagger(x,t)\,\Sigma_1, \qquad (48) $$
and:
$$ q(x,t) = -\Delta_2^{-1}\, p^\dagger(-x,-t)\,\Delta_1, \qquad (49) $$
respectively. It then follows that both classes of reductions for the spectral matrix $U$ need an additional constraint for the matrix potential $p$:
$$ \Sigma_2^{-1}\, p^\dagger(x,t)\,\Sigma_1 = -\Delta_2^{-1}\, p^\dagger(-x,-t)\,\Delta_1. \qquad (50) $$
Further, noting that the group reductions in (42) and (43) ensure that
$$ W^\dagger(x,t,\lambda^*) = \big(W(x,t,\lambda^*)\big)^\dagger = \Sigma\, W(x,t,\lambda)\,\Sigma^{-1}, \qquad W^\dagger(-x,-t,-\lambda^*) = \big(W(-x,-t,-\lambda^*)\big)^\dagger = \Delta\, W(x,t,\lambda)\,\Delta^{-1}, \qquad (51) $$
we can see that
$$ V^{[2s+1]\dagger}(x,t,\lambda^*) = \big(V^{[2s+1]}(x,t,\lambda^*)\big)^\dagger = \Sigma\, V^{[2s+1]}(x,t,\lambda)\,\Sigma^{-1}, \qquad V^{[2s+1]\dagger}(-x,-t,-\lambda^*) = \big(V^{[2s+1]}(-x,-t,-\lambda^*)\big)^\dagger = -\Delta\, V^{[2s+1]}(x,t,\lambda)\,\Delta^{-1}, \qquad (52) $$
and
$$ Q^{[2s+1]\dagger}(x,t,\lambda^*) = \big(Q^{[2s+1]}(x,t,\lambda^*)\big)^\dagger = \Sigma\, Q^{[2s+1]}(x,t,\lambda)\,\Sigma^{-1}, \qquad Q^{[2s+1]\dagger}(-x,-t,-\lambda^*) = \big(Q^{[2s+1]}(-x,-t,-\lambda^*)\big)^\dagger = -\Delta\, Q^{[2s+1]}(x,t,\lambda)\,\Delta^{-1}, \qquad (53) $$
where $s\ge0$, $V^{[2s+1]}$ is defined as in (14) and $Q^{[2s+1]}$ is defined by (17). Therefore, under the group reductions (48) and (49), the integrable matrix AKNS equations in (39) with $r = 2s+1$, $s\ge0$, are reduced to a hierarchy of reduced nonlocal matrix integrable mKdV type equations:
$$ p_t = \mathrm{i}\alpha\, b^{[2s+2]}\Big|_{\,q\,=\,\Sigma_2^{-1}p^\dagger(x,t)\Sigma_1\,=\,-\Delta_2^{-1}p^\dagger(-x,-t)\Delta_1}, \quad s\ge0, \qquad (54) $$
where $p = (p_{jl})_{m\times n}$ satisfies (50), and $\Sigma_1, \Delta_1$ and $\Sigma_2, \Delta_2$ are two pairs of arbitrarily given invertible Hermitian matrices of sizes $m$ and $n$, respectively. All equations in the hierarchy (54) possess Lax pairs of the reduced spatial and temporal matrix spectral problems in (13) with $r = 2s+1$, $s\ge0$, and infinitely many symmetries and conservation laws reduced from those for the integrable matrix AKNS equations in (39) with $r = 2s+1$, $s\ge0$.
Let us fix $s = 1$, i.e., $r = 3$. Then, the reduced nonlocal matrix integrable mKdV type equations in (54) present a class of reduced nonlocal matrix integrable mKdV equations:
$$ p_t = -\frac{\beta}{\alpha^3}\,\big(p_{xxx} + 3p\,\Sigma_2^{-1}p^\dagger(x,t)\Sigma_1\,p_x + 3p_x\,\Sigma_2^{-1}p^\dagger(x,t)\Sigma_1\,p\big) = -\frac{\beta}{\alpha^3}\,\big(p_{xxx} - 3p\,\Delta_2^{-1}p^\dagger(-x,-t)\Delta_1\,p_x - 3p_x\,\Delta_2^{-1}p^\dagger(-x,-t)\Delta_1\,p\big), \qquad (55) $$
where $p$ is an $m\times n$ matrix potential satisfying (50).
In what follows, we would like to present a few examples of these novel nonlocal matrix integrable mKdV equations, by taking different values for $m, n$ and different choices for $\Sigma, \Delta$. Let us first consider $m = 1$ and $n = 2$, and take
$$ \Sigma_1 = 1, \qquad \Sigma_2^{-1} = \begin{pmatrix} \sigma & 0 \\ 0 & \sigma \end{pmatrix}, \qquad \Delta_1 = 1, \qquad \Delta_2^{-1} = \begin{pmatrix} 0 & \delta \\ \delta & 0 \end{pmatrix}, \qquad (56) $$
where $\sigma$ and $\delta$ are real constants and satisfy $\sigma^2 = \delta^2 = 1$. Then, the potential constraint (50) tells:
$$ p_2 = -\sigma\delta\, p_1(-x,-t), \qquad (57) $$
where $p = (p_1, p_2)$, and so the corresponding potential matrix $P$ reads:
$$ P = \begin{pmatrix} 0 & p_1 & -\sigma\delta\, p_1(-x,-t) \\ \sigma\, p_1^* & 0 & 0 \\ -\delta\, p_1^*(-x,-t) & 0 & 0 \end{pmatrix}. \qquad (58) $$
Then, the corresponding novel nonlocal integrable mKdV equation becomes:
$$ p_{1,t} = -\frac{\beta}{\alpha^3}\,\big[\,p_{1,xxx} + 6\sigma\,|p_1|^2\,p_{1,x} + 3\sigma\, p_1^*(-x,-t)\,\big(p_1\, p_1(-x,-t)\big)_x\,\big], \qquad (59) $$
where $\sigma = \pm1$, $|z|$ is the absolute value of $z$, and $z^*$ is the complex conjugate of $z$. This nonlocal integrable mKdV equation, which has two nonlinear terms and two reverse-spacetime factors, is very different from the ones studied in [20,21,22], which have only one nonlinear term and one reverse-spacetime factor. The equation needs more restrictions while formulating its soliton solutions, as will be seen later.
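The compatibility of the two potential reductions in this example can be checked numerically. The following sketch (Python/NumPy; the profile p1 and all parameter values are our own illustrative assumptions) confirms that, with $\Sigma_2^{-1} = \sigma I_2$ and an antidiagonal $\Delta_2^{-1}$, the local and nonlocal reductions of $q$ agree exactly when $p_2(x,t) = -\sigma\delta\,p_1(-x,-t)$:

```python
import numpy as np

# Check: Sigma_2^{-1} p^dagger(x,t) Sigma_1  ==  -Delta_2^{-1} p^dagger(-x,-t) Delta_1
# (with Sigma_1 = Delta_1 = 1) once p_2 is tied to p_1 by the reverse-spacetime relation.
sigma, delta = 1.0, -1.0
S2inv = sigma * np.eye(2)
D2inv = delta * np.array([[0.0, 1.0], [1.0, 0.0]])

def p1(x, t):  # arbitrary smooth complex profile (an assumption)
    return (1.0 + 2.0j) * np.exp(-(x - 0.4 * t)**2) * np.exp(1j * x)

def p(x, t):   # row vector (p1, p2) with the reduced second component
    return np.array([p1(x, t), -sigma * delta * p1(-x, -t)])

rng = np.random.default_rng(0)
for x, t in rng.uniform(-2, 2, size=(20, 2)):
    q_local = S2inv @ np.conj(p(x, t))          # Sigma_2^{-1} p^dagger(x,t)
    q_nonlocal = -D2inv @ np.conj(p(-x, -t))    # -Delta_2^{-1} p^dagger(-x,-t)
    assert np.allclose(q_local, q_nonlocal)
print("local and nonlocal reductions consistent")
```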
Let us second consider $m = 1$ and $n = 4$, and take:
$$ \Sigma_1 = 1, \qquad \Sigma_2^{-1} = \begin{pmatrix} \sigma_1 & 0 & 0 & 0 \\ 0 & \sigma_1 & 0 & 0 \\ 0 & 0 & \sigma_2 & 0 \\ 0 & 0 & 0 & \sigma_2 \end{pmatrix}, \qquad \Delta_1 = 1, \qquad \Delta_2^{-1} = \begin{pmatrix} 0 & \delta_1 & 0 & 0 \\ \delta_1 & 0 & 0 & 0 \\ 0 & 0 & 0 & \delta_2 \\ 0 & 0 & \delta_2 & 0 \end{pmatrix}, \qquad (60) $$
where $\sigma_j$ and $\delta_j$ are real constants and satisfy $\sigma_j^2 = \delta_j^2 = 1$, $j = 1, 2$. Then, the potential constraint (50) generates:
$$ p_2 = -\sigma_1\delta_1\, p_1(-x,-t), \qquad p_4 = -\sigma_2\delta_2\, p_3(-x,-t), \qquad (61) $$
where $p = (p_1, p_2, p_3, p_4)$, and so the corresponding potential matrix $P$ reads:
$$ P = \begin{pmatrix} 0 & p_1 & -\sigma_1\delta_1\, p_1(-x,-t) & p_3 & -\sigma_2\delta_2\, p_3(-x,-t) \\ \sigma_1\, p_1^* & 0 & 0 & 0 & 0 \\ -\delta_1\, p_1^*(-x,-t) & 0 & 0 & 0 & 0 \\ \sigma_2\, p_3^* & 0 & 0 & 0 & 0 \\ -\delta_2\, p_3^*(-x,-t) & 0 & 0 & 0 & 0 \end{pmatrix}. \qquad (62) $$
This enables us to obtain a class of two-component reduced nonlocal integrable mKdV equations:
$$ p_{1,t} = -\frac{\beta}{\alpha^3}\,\big[\,p_{1,xxx} + 6\sigma_1|p_1|^2 p_{1,x} + 3\sigma_1\, p_1^*(-x,-t)\big(p_1\,p_1(-x,-t)\big)_x + 3\sigma_2\, p_3^*\,(p_1 p_3)_x + 3\sigma_2\, p_3^*(-x,-t)\big(p_1\,p_3(-x,-t)\big)_x\,\big], \qquad p_{3,t} = -\frac{\beta}{\alpha^3}\,\big[\,p_{3,xxx} + 3\sigma_1\, p_1^*\,(p_1 p_3)_x + 3\sigma_1\, p_1^*(-x,-t)\big(p_1(-x,-t)\,p_3\big)_x + 6\sigma_2|p_3|^2 p_{3,x} + 3\sigma_2\, p_3^*(-x,-t)\big(p_3\,p_3(-x,-t)\big)_x\,\big], \qquad (63) $$
where $\sigma_j$ are real constants and satisfy $\sigma_j^2 = 1$, $j = 1, 2$. Similarly, we can also generate multi-component reduced nonlocal integrable mKdV equations.

3. Riemann–Hilbert Problems

3.1. Properties of Eigenvalues and Eigenfunctions

Note that the reduction in (42) (or (43)) guarantees that $\lambda$ is an eigenvalue of the matrix spectral problems in (13) if and only if $\hat\lambda = \lambda^*$ (or $\hat\lambda = -\lambda^*$) is an adjoint eigenvalue, i.e., it satisfies the adjoint matrix spectral problems:
$$ \mathrm{i}\tilde\phi_x = \tilde\phi U = \tilde\phi\, U(u,\hat\lambda), \qquad \mathrm{i}\tilde\phi_t = \tilde\phi V^{[r]} = \tilde\phi\, V^{[r]}(u,\hat\lambda), \qquad (64) $$
where $r = 2s+1$, $s\ge0$. Consequently, we can assume to have the eigenvalues $\lambda$: $\mu$, $-\mu^*$, and $\mathrm{i}\nu$, and the adjoint eigenvalues $\hat\lambda$: $\mu^*$, $-\mu$, and $-\mathrm{i}\nu$, where $\mu\notin\mathrm{i}\mathbb{R}$ and $\nu\in\mathbb{R}$.
Suppose that all the potentials are in $L^2(\mathbb{R}^2)$. For the matrix spectral problems in (13) with $r = 2s+1$, $s\ge0$, we can impose the asymptotic behavior: $\phi \sim \mathrm{e}^{\,\mathrm{i}\lambda\Lambda x + \mathrm{i}\lambda^{2s+1}\Omega t}$, when $x, t\to\pm\infty$. Consequently, if we take the transformation
$$ \phi = \psi E_g, \qquad E_g = \mathrm{e}^{\,\mathrm{i}\lambda\Lambda x + \mathrm{i}\lambda^{2s+1}\Omega t}, \qquad (65) $$
then we can achieve the canonical asymptotic conditions $\psi \to I_{m+n}$, when $x$ or $t$ goes to $-\infty$ or $+\infty$. The equivalent pair of matrix spectral problems to (13) with $r = 2s+1$, $s\ge0$, is defined by:
$$ \psi_x = \mathrm{i}\lambda\,[\Lambda, \psi] + \check P\,\psi, \qquad \check P = \mathrm{i}P, \qquad (66) $$
and
$$ \psi_t = \mathrm{i}\lambda^{2s+1}\,[\Omega, \psi] + \check Q^{[2s+1]}\,\psi, \qquad \check Q^{[2s+1]} = \mathrm{i}Q^{[2s+1]}. \qquad (67) $$
Upon applying a generalized Liouville’s formula, one can obtain:
det ψ = 1 ,
because ( det ψ ) x = 0 due to tr P ˇ = tr Q ˇ [ 2 s + 1 ] = 0 .
Recall that the adjoint equation of the x-part of (13) and the adjoint equation of (66) are:
i ϕ ˜ x = ϕ ˜ U ,
and
i ψ ˜ x = λ [ ψ ˜ , Λ ] + ψ ˜ P ,
respectively. Obviously, the pair of adjoint matrix spectral problems or equivalent adjoint matrix spectral problems does not create any additional condition.
Let $\psi(\lambda)$ be a matrix eigenfunction of the spatial spectral problem (66) associated with an eigenvalue $\lambda$. Then, $\Sigma\,\psi^{-1}(\lambda)$ and $\Delta\,\psi^{-1}(\lambda)$ are two matrix adjoint eigenfunctions associated with the same eigenvalue $\lambda$. With the group reduction in (43), one can have:
$$ \mathrm{i}\,\big[\psi^\dagger(-x,-t,-\lambda^*)\,\Delta\big]_x = -\mathrm{i}\,\big[(\psi_x)^\dagger(-x,-t,-\lambda^*)\,\Delta\big] = -\mathrm{i}\,\big\{\mathrm{i}\lambda\,[\psi^\dagger(-x,-t,-\lambda^*), \Lambda] - \mathrm{i}\,\psi^\dagger(-x,-t,-\lambda^*)\,P^\dagger(-x,-t)\big\}\,\Delta = \lambda\,[\psi^\dagger(-x,-t,-\lambda^*), \Lambda]\,\Delta - \psi^\dagger(-x,-t,-\lambda^*)\,\Delta\,\big[\Delta^{-1} P^\dagger(-x,-t)\,\Delta\big] = \lambda\,[\psi^\dagger(-x,-t,-\lambda^*)\,\Delta, \Lambda] + \psi^\dagger(-x,-t,-\lambda^*)\,\Delta\, P. \qquad (71) $$
This implies that the matrix
$$ \tilde\psi(\lambda) := \psi^\dagger(-x,-t,-\lambda^*)\,\Delta, $$
gives rise to another matrix adjoint eigenfunction associated with the same original eigenvalue $\lambda$. Equivalently, $\psi^\dagger(-x,-t,-\lambda^*)\,\Delta$ solves the adjoint spectral problem (70). Thus, upon observing the asymptotic conditions of the matrix eigenfunction $\psi$, it follows from the uniqueness of solutions that $\psi(\lambda)$ satisfies:
$$ \psi^\dagger(-x,-t,-\lambda^*) = \Delta\,\psi^{-1}(x,t,\lambda)\,\Delta^{-1}, \qquad (72) $$
when $\psi \to I_{m+n}$, as $x$ or $t \to -\infty$ or $+\infty$.
Similarly, based on the group reduction in (42), we can find that:
$$ \tilde\psi(\lambda) = \psi^\dagger(x,t,\lambda^*)\,\Sigma \qquad (73) $$
presents a new matrix adjoint eigenfunction associated with $\lambda$ and satisfies:
$$ \psi^\dagger(x,t,\lambda^*) = \Sigma\,\psi^{-1}(x,t,\lambda)\,\Sigma^{-1}. \qquad (74) $$

3.2. Riemann–Hilbert Problems

We begin to present a class of associated Riemann–Hilbert problems with the space variable $x$. To formulate the problems explicitly, we make the following assumptions:
$$ \alpha = \alpha_1 - \alpha_2 < 0, \qquad \beta = \beta_1 - \beta_2 < 0. \qquad (75) $$
While considering the scattering problem, let us first take the two matrix eigenfunctions $\psi^\pm(x,\lambda)$ of (66) with the canonical asymptotic conditions:
$$ \psi^\pm \to I_{m+n}, \quad \text{when } x\to\pm\infty, \qquad (76) $$
respectively. Based on (68), we then see that $\det\psi^\pm = 1$ for all values of $x\in\mathbb{R}$. Since
$$ \phi^\pm = \psi^\pm E, \qquad E = \mathrm{e}^{\,\mathrm{i}\lambda\Lambda x}, \qquad (77) $$
are both matrix eigenfunctions of the x-part of the matrix spectral problems (13), they have to be linearly dependent, and therefore, we have:
$$ \psi^- E = \psi^+ E\, S(\lambda), \qquad \lambda\in\mathbb{R}. \qquad (78) $$
Here, $S(\lambda)$ is the so-called scattering matrix, and it is clear that $\det S(\lambda) = 1$, due to $\det\psi^\pm = 1$.
As usual, by the method of variation in parameters, one can transform the x-part of the matrix spectral problems (13) into the following Volterra integral equations for $\psi^\pm$ [8]:
$$ \psi^\pm(\lambda,x) = I_{m+n} + \int_{\pm\infty}^{x} \mathrm{e}^{\,\mathrm{i}\lambda\Lambda(x-y)}\,\check P(y)\,\psi^\pm(\lambda,y)\,\mathrm{e}^{\,\mathrm{i}\lambda\Lambda(y-x)}\,\mathrm{d}y, \qquad (79) $$
where the canonical asymptotic conditions (76) have been applied. Further, by the Neumann series [23] in the theory of Volterra integral equations, one can prove the existence of the eigenfunctions $\psi^\pm$, which allow analytic continuations off the real axis $\lambda\in\mathbb{R}$ as long as the integrals on their right-hand sides converge (see, e.g., [24]). Based on the diagonal form of $\Lambda$ and the first assumption in (75), one can show that the integral equation for the first $m$ columns of $\psi^-$ contains only the exponential factor $\mathrm{e}^{-\mathrm{i}\alpha\lambda(x-y)}$, and the integral equation for the last $n$ columns of $\psi^+$ contains only the exponential factor $\mathrm{e}^{\,\mathrm{i}\alpha\lambda(x-y)}$. Note that the factor $\mathrm{e}^{-\mathrm{i}\alpha\lambda(x-y)}$ decays because of $y < x$ in the integral, when $\lambda$ takes values in the upper half-plane $\mathbb{C}^+$, and the factor $\mathrm{e}^{\,\mathrm{i}\alpha\lambda(x-y)}$ also decays because of $y > x$ in the integral, when $\lambda$ takes values in the upper half-plane $\mathbb{C}^+$. Accordingly, one knows that those $m+n$ columns are analytic in the upper half-plane $\mathbb{C}^+$ and continuous in the closed upper half-plane $\bar{\mathbb{C}}^+$. In a similar manner, one can show that the first $m$ columns of $\psi^+$ and the last $n$ columns of $\psi^-$ are analytic in the lower half-plane $\mathbb{C}^-$ and continuous in the closed lower half-plane $\bar{\mathbb{C}}^-$.
In what follows, we show how to prove the above statements. Let us split
$$ \psi^\pm = (\psi^\pm_1, \psi^\pm_2, \ldots, \psi^\pm_{m+n}), $$
namely, $\psi^\pm_j$ denotes the $j$th column of $\psi^\pm$ ($1\le j\le m+n$). We would like to show that the $m+n$ column eigenfunctions, $\psi^-_j$, $1\le j\le m$, and $\psi^+_j$, $m+1\le j\le m+n$, are analytic with respect to $\lambda$ in $\mathbb{C}^+$ and continuous with respect to $\lambda$ in $\bar{\mathbb{C}}^+$; and the $m+n$ column eigenfunctions, $\psi^+_j$, $1\le j\le m$, and $\psi^-_j$, $m+1\le j\le m+n$, are analytic with respect to $\lambda$ in $\mathbb{C}^-$ and continuous with respect to $\lambda$ in $\bar{\mathbb{C}}^-$. Below, we only seek to prove the result for $\psi^+_j$, $m+1\le j\le m+n$, and the proofs for the other column eigenfunctions follow analogously.
From the Volterra integral Equation (79), we know that
$$ \psi^+_j(\lambda,x) = e_j + \int_{+\infty}^{x} R_1(\lambda,x,y)\,\psi^+_j(\lambda,y)\,\mathrm{d}y, \quad 1\le j\le m, \qquad (80) $$
and
$$ \psi^+_j(\lambda,x) = e_j + \int_{+\infty}^{x} R_2(\lambda,x,y)\,\psi^+_j(\lambda,y)\,\mathrm{d}y, \quad m+1\le j\le m+n, \qquad (81) $$
where $e_j$, $1\le j\le m+n$, are the standard basis vectors of $\mathbb{R}^{m+n}$ and the square matrices $R_1$ and $R_2$ are given by
$$ R_1(\lambda,x,y) = \mathrm{i}\begin{pmatrix} 0 & p(y) \\ \mathrm{e}^{-\mathrm{i}\alpha\lambda(x-y)}\,q(y) & 0 \end{pmatrix}, \qquad (82) $$
and
$$ R_2(\lambda,x,y) = \mathrm{i}\begin{pmatrix} 0 & \mathrm{e}^{\,\mathrm{i}\alpha\lambda(x-y)}\,p(y) \\ q(y) & 0 \end{pmatrix}. \qquad (83) $$
Let us first prove that for each $m+1\le j\le m+n$, the Neumann series
$$ \sum_{k=0}^{\infty}\psi^+_{j,k}(\lambda,x), $$
whose terms are defined recursively by
$$ \psi^+_{j,0}(\lambda,x) = e_j, \qquad \psi^+_{j,k+1}(\lambda,x) = \int_{+\infty}^{x} R_2(\lambda,x,y)\,\psi^+_{j,k}(\lambda,y)\,\mathrm{d}y, \quad k\ge0, \qquad (84) $$
will determine the solution to (81). This statement will be true, if we can show that the Neumann series converges uniformly for both $x\in\mathbb{R}$ and $\lambda\in\bar{\mathbb{C}}^+$. Based on (84), an application of mathematical induction yields
$$ \big|\psi^+_{j,k}(\lambda,x)\big| \le \frac{1}{k!}\Big(\int_x^{\infty}\|P(y)\|\,\mathrm{d}y\Big)^{k}, \quad m+1\le j\le m+n, \ k\ge0, $$
for both $x\in\mathbb{R}$ and $\lambda\in\bar{\mathbb{C}}^+$, where $|\cdot|$ stands for the Euclidean norm for vectors and $\|\cdot\|$ denotes the Frobenius norm for square matrices. By using the Weierstrass M-test, it follows from this estimation that
$$ \psi^+_j(\lambda,x) = \sum_{k=0}^{\infty}\psi^+_{j,k}(\lambda,x), \quad m+1\le j\le m+n, $$
uniformly converges for both $\lambda\in\bar{\mathbb{C}}^+$ and $x\in\mathbb{R}$, and all $\psi^+_j(\lambda,x)$, $m+1\le j\le m+n$, are continuous with respect to $\lambda$ in $\bar{\mathbb{C}}^+$, because so are all $\psi^+_{j,k}(\lambda,x)$, $m+1\le j\le m+n$, $k\ge0$.
Next, we would like to consider the differentiability of $\psi^+_j(\lambda,x)$, $m+1\le j\le m+n$, with respect to $\lambda$ in $\mathbb{C}^+$ (similarly, we can show the differentiability with respect to $x$ in $\mathbb{R}$). Let us fix an integer $m+1\le j\le m+n$. For a complex number $\mu$ in $\mathbb{C}^+$, take a disk $B_\rho(\mu) = \{\lambda\in\mathbb{C}\ \big|\ |\lambda-\mu|\le\rho\}$ with a radius $\rho>0$ such that $B_\rho(\mu)\subset\mathbb{C}^+$. Then, there is a constant $C(\rho)>0$ such that $|\alpha x\,\mathrm{e}^{\,\mathrm{i}\alpha\lambda x}|\le C(\rho)$ for $\lambda\in B_\rho(\mu)$ and $x\le0$. We consider the following Neumann series:
$$ \sum_{k=0}^{\infty}\psi^+_{j,\lambda,k}(\lambda,x), $$
where $\psi^+_{j,\lambda,0} = 0$ and $\psi^+_{j,\lambda,k}$, $k\ge1$, are defined recursively by
$$ \psi^+_{j,\lambda,k+1}(\lambda,x) = \int_{+\infty}^{x} R_{2,\lambda}(\lambda,x,y)\,\psi^+_{j,k}(\lambda,y)\,\mathrm{d}y + \int_{+\infty}^{x} R_2(\lambda,x,y)\,\psi^+_{j,\lambda,k}(\lambda,y)\,\mathrm{d}y, \quad k\ge0, $$
with $\psi^+_{j,k}$, $k\ge0$, being defined by (84) and $R_{2,\lambda}$ being given by
$$ R_{2,\lambda}(\lambda,x,y) = \frac{\partial}{\partial\lambda}R_2(\lambda,x,y) = \begin{pmatrix} 0 & -\alpha(x-y)\,\mathrm{e}^{\,\mathrm{i}\alpha\lambda(x-y)}\,p(y) \\ 0 & 0 \end{pmatrix}. $$
It can be easily shown by applying mathematical induction that
$$ \big|\psi^+_{j,\lambda,k}(\lambda,x)\big| \le \frac{1}{k!}\Big[\big(C(\rho)+1\big)\int_x^{\infty}\|P(y)\|\,\mathrm{d}y\Big]^{k}, \quad k\ge0, $$
for both $x\in\mathbb{R}$ and $\lambda\in B_\rho(\mu)$. Now, based on the Weierstrass M-test, the above Neumann series converges uniformly for both $x\in\mathbb{R}$ and $\lambda\in B_\rho(\mu)$, and through the term-by-term differentiability theorem, it converges to the derivative of $\psi^+_j$ with respect to $\lambda$, since $\psi^+_{j,\lambda,k} = \partial_\lambda\psi^+_{j,k}$, $k\ge0$. It follows that $\psi^+_j$ is analytic at any point $\lambda\in B_\rho(\mu)$, and thus, particularly at the point $\mu$. This tells us that all $\psi^+_j$, $m+1\le j\le m+n$, are analytic with respect to $\lambda$ in $\mathbb{C}^+$, indeed. Therefore, the required proof is finished.
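The Neumann/Picard iteration in this proof can also be run numerically. The sketch below (Python/NumPy; the potentials, the grid, and the parameter values are our own illustrative assumptions for the scalar case m = n = 1 with a real spectral parameter) iterates the Volterra integral equation for $\psi^+$ and confirms the unimodularity $\det\psi^+ = 1$ used in the text:

```python
import numpy as np

# Scalar AKNS case (m = n = 1): Lambda = diag(a1, a2), P = [[0, p], [q, 0]], P_check = i*P.
# Picard/Neumann iteration for psi^+ with boundary value I at x -> +infinity.
a1, a2 = 1.0, -1.0
lam = 0.7                        # real spectral parameter
L, N = 12.0, 4001
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
p = 0.3 * np.exp(-x**2)          # assumed sample potentials
q = -0.3 * np.exp(-(x - 1.0)**2)

Pcheck = 1j * np.stack([np.stack([np.zeros(N), p], -1),
                        np.stack([q, np.zeros(N)], -1)], -2)  # shape (N, 2, 2)
D = np.array([a1, a2])[:, None] - np.array([a1, a2])[None, :] # d_jk = Lambda_j - Lambda_k

psi = np.tile(np.eye(2, dtype=complex), (N, 1, 1))
for _ in range(40):
    K = Pcheck @ psi                                          # P_check(y) psi(y)
    G = np.exp(-1j * lam * D[None] * x[:, None, None]) * K    # e^{-i lam d y} K(y)
    F = np.zeros_like(G)                                      # F(x) = int_x^L G(y) dy
    F[:-1] = np.cumsum((G[1:] + G[:-1])[::-1], axis=0)[::-1] * dx / 2
    # psi(x) = I + int_{+inf}^x e^{i lam d (x-y)} K(y) dy = I - e^{i lam d x} F(x)
    psi = np.eye(2, dtype=complex) - np.exp(1j * lam * D[None] * x[:, None, None]) * F

dets = np.linalg.det(psi)
print(np.max(np.abs(dets - 1)))   # generalized Liouville formula: det psi = 1
```

The printed deviation is at the level of the quadrature error, matching the unimodularity property along the whole line.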
Now, on the basis of these analyses, we can define the generalized matrix Jost solution T + as
T + = T + ( x , λ ) = ( ψ 1 , , ψ m , ψ m + 1 + , , ψ m + n + ) = ψ H 1 + ψ + H 2 ,
where H 1 and H 2 are given by
H 1 = diag ( I m , 0 , , 0 n ) , H 2 = diag ( 0 , , 0 m , I n ) ,
and know that T + is analytic with respect to λ in the upper half-plane C + and continuous with respect to λ in the closed upper half-plane C ¯ + . Additionally, the generalized matrix Jost solution
( ψ 1 + , , ψ m + , ψ m + 1 , , ψ m + n ) = ψ + H 1 + ψ H 2
is analytic with respect to λ in the lower half-plane C and continuous with respect to λ in the closed lower half-plane C ¯ .
To determine the other generalized matrix Jost solution T , we adopt the analytic counterpart of T + in the lower half-plane C , which can be generated from the adjoint counterparts of the matrix spectral problems. Recall that the inverse matrices ( ϕ ± ) 1 and ( ψ ± ) 1 provide solutions to the two corresponding adjoint matrix spectral problems. Thus, upon splitting ψ ˜ ± into rows,
ψ ˜ ± = ψ ˜ ± , 1 ψ ˜ ± , 2 ψ ˜ ± , m + n ,
namely, ψ ˜ ± , j stands for the jth row of ψ ˜ ± ( 1 j m + n ), one can show by similar arguments that one can define the generalized matrix Jost solution T as the adjoint matrix solution of (70), i.e.,
T = ψ ˜ , 1 ψ ˜ , m ψ ˜ + , m + 1 ψ ˜ + , m + n = H 1 ψ ˜ + H 2 ψ ˜ + = H 1 ( ψ ) 1 + H 2 ( ψ + ) 1 .
This is analytic at λ in the lower half-plane C and continuous at λ in the closed lower half-plane C ¯ . Additionally, the other generalized matrix Jost solution of (70),
ψ ˜ + , 1 ψ ˜ + , m ψ ˜ , m + 1 ψ ˜ , m + n = H 1 ψ ˜ + + H 2 ψ ˜ = H 1 ( ψ + ) 1 + H 2 ( ψ ) 1 ,
is analytic at λ in the upper half-plane C + and continuous at λ in the upper half-plane C ¯ + .
Furthermore, based on det ψ ± = 1 and the scattering relation (78) for ψ + and ψ , one immediately obtains
lim x T + ( x , λ ) = S 11 ( λ ) 0 0 I n , λ C ¯ + , lim x T ( x , λ ) = S ^ 11 ( λ ) 0 0 I n , λ C ¯ ,
and thus,
det T + ( x , λ ) = det S 11 ( λ ) , det T ( x , λ ) = det S ^ 11 ( λ ) ,
where we split S ( λ ) and S 1 ( λ ) into block matrices as follows:
S ( λ ) = S 11 ( λ ) S 12 ( λ ) S 21 ( λ ) S 22 ( λ ) , S 1 ( λ ) = ( S ( λ ) ) 1 = S ^ 11 ( λ ) S ^ 12 ( λ ) S ^ 21 ( λ ) S ^ 22 ( λ ) .
From (94), we know that S 11 , S ^ 11 are m × m matrices; and so, S 12 , S ^ 12 are m × n matrices, S 21 , S ^ 21 are n × m matrices, and S 22 , S ^ 22 are n × n matrices, since S ( λ ) is a square matrix of size m + n . Also, it follows from the uniform convergence of the Neumann series, defined previously, that S 11 ( λ ) and S ^ 11 ( λ ) are analytic at λ C + and λ C , respectively.
Now, one can define the two unimodular generalized matrix Jost solutions as follows:
G + ( x , λ ) = T + ( x , λ ) S 11 1 ( λ ) 0 0 I n , λ C ¯ + ,
and
( G ) 1 ( x , λ ) = S ^ 11 1 ( λ ) 0 0 I n T ( x , λ ) , λ C ¯ .
These two generalized matrix Jost solutions allow us to establish the required matrix Riemann–Hilbert problems on the real line:
G + ( x , λ ) = G ( x , λ ) G 0 ( x , λ ) , λ R ,
for the reduced nonlocal matrix integrable mKdV type equations (54). Here, the jump matrix G 0 is given by
G 0 ( x , λ ) = E S ^ 11 1 ( λ ) 0 0 I n S ˜ ( λ ) S 11 1 ( λ ) 0 0 I n E 1 ,
which is a consequence of (78). The matrix S ˜ ( λ ) has the following factorization:
$$\tilde S(\lambda) = (H_1 + H_2 S(\lambda))(H_1 + S^{-1}(\lambda) H_2),$$
which can be shown to be
$$\tilde S(\lambda) = \begin{pmatrix} I_m & \hat S_{12}\\ S_{21} & I_n\end{pmatrix}.$$
Note that for the presented Riemann–Hilbert problems, the canonical normalization conditions
$$G^\pm(x,\lambda) \to I_{m+n}, \quad \text{as } \lambda\to\infty \text{ in } \overline{\mathbb{C}}^\pm,$$
come from the Volterra integral equations in (79). Moreover, from the properties of eigenfunctions in (72) and (74), we have
$$(G^+)^\dagger(\lambda^*) = \Sigma\,(G^-)^{-1}(\lambda)\,\Sigma^{-1},$$
and
$$(G^+)^\dagger(-x,-t,-\lambda^*) = \Delta\,(G^-)^{-1}(x,t,\lambda)\,\Delta^{-1}.$$
It therefore follows that the jump matrix $G_0$ satisfies the following involution properties:
$$G_0^\dagger(\lambda^*) = \Sigma\,G_0(\lambda)\,\Sigma^{-1}, \qquad G_0^\dagger(-x,-t,-\lambda^*) = \Delta\,G_0(x,t,\lambda)\,\Delta^{-1}, \quad \lambda\in\mathbb{R}.$$

3.3. Evolution of the Scattering Data

For the completeness of the required direct scattering transforms, we differentiate the eigenfunction relation (78) with respect to the time $t$ and apply the following temporal matrix spectral problems:
$$\psi^\pm_t = i\lambda^{2s+1}[\Omega, \psi^\pm] + iQ^{[2s+1]}\psi^\pm,$$
where $s\ge 0$ is fixed. Then, one finds that the scattering matrix $S$ satisfies the evolution equation:
$$S_t = i\lambda^{2s+1}[\Omega, S].$$
This leads to the time evolution
$$S_{12}(t,\lambda) = S_{12}(0,\lambda)\,e^{i\beta\lambda^{2s+1}t}, \qquad S_{21}(t,\lambda) = S_{21}(0,\lambda)\,e^{-i\beta\lambda^{2s+1}t},$$
of the time-dependent scattering coefficients, and shows that all the remaining scattering coefficients are independent of the time variable $t$.
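The commutator structure behind these exponential factors can be checked directly: with $\Omega = \mathrm{diag}(\beta_1 I_m, \beta_2 I_n)$ and $\beta = \beta_1 - \beta_2$, the operator $[\Omega,\cdot]$ scales the off-diagonal blocks by $\pm\beta$ and annihilates the diagonal blocks. The following sketch verifies the block-wise solution by finite differences; all sizes and parameter values are illustrative choices, not taken from the paper.

```python
import numpy as np

# Verify that the block-wise evolution solves S_t = i λ^{2s+1} [Ω, S]:
# S12 picks up e^{+iβλ^{2s+1}t}, S21 picks up e^{-iβλ^{2s+1}t},
# and the diagonal blocks stay constant.  Illustrative sizes/values only.

m, n, s = 2, 3, 1
beta1, beta2, lam, t = 0.7, -0.4, 1.3, 0.9
beta = beta1 - beta2
Omega = np.diag([beta1] * m + [beta2] * n).astype(complex)

rng = np.random.default_rng(0)
S0 = rng.standard_normal((m + n, m + n)) + 1j * rng.standard_normal((m + n, m + n))

def S_of_t(t):
    # Block-wise solution of the linear evolution equation.
    S = S0.copy()
    phase = np.exp(1j * beta * lam ** (2 * s + 1) * t)
    S[:m, m:] = S0[:m, m:] * phase       # S12 block
    S[m:, :m] = S0[m:, :m] / phase       # S21 block
    return S

# Finite-difference check of the ODE at time t.
h = 1e-6
dS = (S_of_t(t + h) - S_of_t(t - h)) / (2 * h)
rhs = 1j * lam ** (2 * s + 1) * (Omega @ S_of_t(t) - S_of_t(t) @ Omega)
assert np.max(np.abs(dS - rhs)) < 1e-5
```

Only the off-diagonal blocks move because $[\Omega, S]$ vanishes on the diagonal blocks of $S$.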

3.4. Gelfand–Levitan–Marchenko Type Equations

To determine the generalized matrix Jost solutions, we derive equivalent Gelfand–Levitan–Marchenko type integral equations. To this end, we transform the associated Riemann–Hilbert problems in (99) into the following problems:
$$G^+ - G^- = G^- v, \quad v = G_0 - I_{m+n}, \ \text{on } \mathbb{R}; \qquad G^\pm \to I_{m+n} \ \text{as } \lambda\to\infty \text{ in } \overline{\mathbb{C}}^\pm,$$
where each jump matrix G 0 is defined by (100) and (102).
Define $G(\lambda) = G^\pm(\lambda)$ for $\lambda\in\mathbb{C}^\pm$. To avoid spectral singularities, we suppose that $G$ has only simple poles off $\mathbb{R}$: $\{\xi_j\}_{j=1}^R$, where $R\ge 1$ is an arbitrarily given integer. Further, define
$$\tilde G^\pm(\lambda) = G^\pm(\lambda) - \sum_{j=1}^{R}\frac{G_j}{\lambda-\xi_j}, \ \lambda\in\overline{\mathbb{C}}^\pm; \qquad \tilde G(\lambda) = \tilde G^\pm(\lambda), \ \lambda\in\mathbb{C}^\pm,$$
where G j denotes the residue of G at λ = ξ j , namely,
$$G_j = \operatorname{res}(G(\lambda), \xi_j) = \lim_{\lambda\to\xi_j}(\lambda-\xi_j)\,G(\lambda).$$
Evidently, one has:
$$\tilde G^+ - \tilde G^- = G^+ - G^- = G^- v \ \text{on } \mathbb{R}; \qquad \tilde G^\pm \to I_{m+n} \ \text{as } \lambda\to\infty \text{ in } \overline{\mathbb{C}}^\pm.$$
Then, upon applying the Sokhotski–Plemelj formula [25], one obtains the solution to each problem in (113):
$$\tilde G(\lambda) = I_{m+n} + \frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{(G^- v)(\xi)}{\xi-\lambda}\,d\xi.$$
Taking the limit $\lambda\to\xi_l$ yields:
$$\text{LHS} = \lim_{\lambda\to\xi_l}\tilde G = F_l - \sum_{j\ne l}^{R}\frac{G_j}{\xi_l-\xi_j}, \qquad \text{RHS} = I_{m+n} + \frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{(G^- v)(\xi)}{\xi-\xi_l}\,d\xi,$$
where:
$$F_l = \lim_{\lambda\to\xi_l}\frac{(\lambda-\xi_l)\,G(\lambda) - G_l}{\lambda-\xi_l}, \quad 1\le l\le R,$$
and consequently, we arrive at:
$$I_{m+n} - F_l + \sum_{j\ne l}^{R}\frac{G_j}{\xi_l-\xi_j} + \frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{(G^- v)(\xi)}{\xi-\xi_l}\,d\xi = 0, \quad 1\le l\le R,$$
which define the required Gelfand–Levitan–Marchenko type integral equations.
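For intuition about the Sokhotski–Plemelj formula invoked above, the following numerical sketch confirms that the boundary values of a Cauchy integral jump by the density across the real line. The density $f(\xi) = 1/(\xi^2+1)$ and all numerical parameters are arbitrary illustrative choices.

```python
import numpy as np

# Sokhotski–Plemelj jump relation: for C(λ) = (1/2πi) ∫ f(ξ)/(ξ − λ) dξ over R,
# the boundary values satisfy C(x + i0) − C(x − i0) = f(x).
# We approximate the boundary values at λ = x ± iε for small ε.

def f(xi):
    return 1.0 / (xi ** 2 + 1.0)

def cauchy(lam):
    # Plain Riemann-sum quadrature; valid since λ is kept off the real line.
    xi = np.linspace(-200.0, 200.0, 400001)
    y = f(xi) / (xi - lam)
    return np.sum(y) * (xi[1] - xi[0]) / (2j * np.pi)

x, eps = 0.3, 1e-3
jump = cauchy(x + 1j * eps) - cauchy(x - 1j * eps)
# The jump across the real line recovers the density f(x).
assert abs(jump - f(x)) < 1e-2
```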
All these equivalent integral equations completely determine solutions to the resulting Riemann–Hilbert problems and thus the required generalized matrix Jost solutions. Little is known, however, about the existence and uniqueness of solutions in general. In the reflectionless case, a formulation of solutions will be given for the reduced nonlocal reverse spacetime matrix integrable mKdV type equations in the next section.

3.5. Recovering the Potential Matrix

To obtain the potential matrix P from the unimodular generalized matrix Jost solutions, let us consider an asymptotic expansion:
$$G^+(x,t,\lambda) = I_{m+n} + \frac{1}{\lambda}G_1^+(x,t) + O\!\Big(\frac{1}{\lambda^2}\Big), \quad \lambda\to\infty.$$
Upon plugging the above asymptotic expansion into the matrix spectral problem (66) and making a comparison of constant terms, one obtains
$$P = -[\Lambda, G_1^+] = \lim_{\lambda\to\infty}\lambda\,[G^+(\lambda), \Lambda].$$
Consequently, the potential matrix is given by
$$P = \begin{pmatrix} 0 & -\alpha G^+_{1,12}\\ \alpha G^+_{1,21} & 0\end{pmatrix},$$
where the matrix $G_1^+$ has been partitioned into a block matrix in the same way:
$$G_1^+ = \begin{pmatrix} G^+_{1,11} & G^+_{1,12}\\ G^+_{1,21} & G^+_{1,22}\end{pmatrix} = \begin{pmatrix} (G^+_{1,11})_{m\times m} & (G^+_{1,12})_{m\times n}\\ (G^+_{1,21})_{n\times m} & (G^+_{1,22})_{n\times n}\end{pmatrix}.$$
Therefore, the solutions to the matrix AKNS equations (39) are given by:
$$p = -\alpha G^+_{1,12}, \qquad q = \alpha G^+_{1,21}.$$
When the reduction conditions in (46) and (47) are satisfied, the reduced matrix potential p solves the reduced nonlocal matrix integrable mKdV type Equation (54).
To sum up, this provides a Riemann–Hilbert problem formulation of the inverse scattering transform for computing solutions to the reduced nonlocal matrix integrable mKdV type equations (54): one starts from the scattering data in $S(\lambda)$, computes the jump matrix $G_0(\lambda)$, and finally recovers the potential matrix $P$ from the solution $\{G^+(\lambda), G^-(\lambda)\}$ of the associated Riemann–Hilbert problems.
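The block mechanics behind recovering $P$ from $G_1^+$ can be illustrated numerically: the commutator with the block-diagonal matrix $\Lambda = \mathrm{diag}(\alpha_1 I_m, \alpha_2 I_n)$ scales the off-diagonal blocks by $\pm\alpha$ with $\alpha = \alpha_1 - \alpha_2$ and annihilates the diagonal blocks, so the recovered potential matrix is automatically off-block-diagonal. This holds for either overall sign convention of the commutator; the sizes and values below are illustrative assumptions.

```python
import numpy as np

# Block action of the commutator [Λ, G] with Λ = diag(α1 I_m, α2 I_n):
# the diagonal blocks vanish and the off-diagonal blocks are scaled by ±α.

m, n = 2, 3
alpha1, alpha2 = 2.0, 0.5
alpha = alpha1 - alpha2
Lam = np.diag([alpha1] * m + [alpha2] * n)

rng = np.random.default_rng(1)
G1 = rng.standard_normal((m + n, m + n))

C = Lam @ G1 - G1 @ Lam                                          # [Λ, G1]
assert np.allclose(C[:m, :m], 0) and np.allclose(C[m:, m:], 0)   # diagonal blocks vanish
assert np.allclose(C[:m, m:], alpha * G1[:m, m:])                # 12-block scaled by +α
assert np.allclose(C[m:, :m], -alpha * G1[m:, :m])               # 21-block scaled by -α
```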

4. Soliton Solutions

4.1. General Formulation

Let $N\ge 1$ be another given integer. Assume that the function $\det S_{11}(\lambda)$ has $N$ zeros, $\lambda_k\in\mathbb{C}$, $1\le k\le N$, and the function $\det\hat S_{11}(\lambda)$ also has $N$ zeros, $\hat\lambda_k\in\mathbb{C}$, $1\le k\le N$.
In order to compute soliton solutions explicitly, we additionally assume that each of these zeros, $\lambda_k$ and $\hat\lambda_k$, $1\le k\le N$, is geometrically simple. Thus, each $\ker T^+(\lambda_k)$ contains only a single basis column vector and each $\ker T^-(\hat\lambda_k)$ only a single basis row vector, $1\le k\le N$. We take $v_k\in\ker T^+(\lambda_k)$, $v_k\ne 0$, and $\hat v_k\in\ker T^-(\hat\lambda_k)$, $\hat v_k\ne 0$, for $1\le k\le N$. In this way, we have:
$$T^+(\lambda_k)\,v_k = 0, \qquad \hat v_k\, T^-(\hat\lambda_k) = 0, \quad 1\le k\le N.$$
It is known that soliton solutions correspond to the situation where $G_0 = I_{m+n}$ in each Riemann–Hilbert problem in (99). This occurs when $S_{21} = \hat S_{12} = 0$, namely, when all reflection coefficients in the scattering problem vanish.
Riemann–Hilbert problems of this specific kind, which possess the canonical normalization conditions in (103) and the zero structures given in (123), are solvable [8,26] in the local case of
$$\{\lambda_k\,|\,1\le k\le N\}\subset\mathbb{C}^+, \qquad \{\hat\lambda_k\,|\,1\le k\le N\}\subset\mathbb{C}^-,$$
and therefore, we can present the potential matrix P exactly, which generates soliton solutions.
In the nonlocal case, we cannot keep the condition (124). Therefore, to present a general formulation of solutions to reflectionless Riemann–Hilbert problems in the nonlocal case, we assume that, for $N = 2N_1 + N_2$ with integers $N_1, N_2\ge 0$, the eigenvalues $\lambda_k$, $1\le k\le N$, and the adjoint eigenvalues $\hat\lambda_k$, $1\le k\le N$, can be rearranged as
$$\bar\lambda_k,\ 1\le k\le N:\quad \lambda_1,\dots,\lambda_{N_1},\ \hat\lambda_{N_1+1},\dots,\hat\lambda_{2N_1},\ \lambda_{2N_1+1},\dots,\lambda_N\ \in\mathbb{C}^+,$$
and
$$\hat{\bar\lambda}_k,\ 1\le k\le N:\quad \hat\lambda_1,\dots,\hat\lambda_{N_1},\ \lambda_{N_1+1},\dots,\lambda_{2N_1},\ \hat\lambda_{2N_1+1},\dots,\hat\lambda_N\ \in\mathbb{C}^-,$$
with the corresponding rearrangements of eigenfunctions and adjoint eigenfunctions:
$$\bar v_k,\ 1\le k\le N:\quad v_1,\dots,v_{N_1},\ \hat v_{N_1+1}^T,\dots,\hat v_{2N_1}^T,\ v_{2N_1+1},\dots,v_N,$$
and
$$\hat{\bar v}_k,\ 1\le k\le N:\quad \hat v_1,\dots,\hat v_{N_1},\ v_{N_1+1}^T,\dots,v_{2N_1}^T,\ \hat v_{2N_1+1},\dots,\hat v_N.$$
Then, we introduce
$$G^+(\lambda) = I_{m+n} - \sum_{k,l=1}^{N}\frac{\bar v_k\,(\bar M^{-1})_{kl}\,\hat{\bar v}_l}{\lambda - \hat{\bar\lambda}_l}, \qquad (G^-)^{-1}(\lambda) = I_{m+n} + \sum_{k,l=1}^{N}\frac{\bar v_k\,(\bar M^{-1})_{kl}\,\hat{\bar v}_l}{\lambda - \bar\lambda_k},$$
where $\bar M = (\bar m_{kl})_{N\times N}$ is a square matrix with its entries determined by:
$$\bar m_{kl} = \frac{\hat{\bar v}_k\,\bar v_l}{\bar\lambda_l - \hat{\bar\lambda}_k}, \quad 1\le k, l\le N.$$
Therefore, $G^+(\lambda)$ and $G^-(\lambda)$ are analytic in $\mathbb{C}^+$ and $\mathbb{C}^-$, respectively. By an argument analogous to the one in [12], one can prove that $G^+(\lambda)$ and $G^-(\lambda)$ solve the corresponding reflectionless Riemann–Hilbert problem:
$$(G^-)^{-1}(\lambda)\,G^+(\lambda) = I_{m+n}, \quad \lambda\in\mathbb{R}.$$
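The claim that the matrices introduced above solve the reflectionless Riemann–Hilbert problem can be tested numerically with random data, since the identity holds for generic eigenvalues and kernel vectors; all values below are illustrative choices, and the involution constraints are not imposed.

```python
import numpy as np

# Numerical sketch: with M̄_{kl} = v̄̂_k v̄_l / (λ̄_l − λ̄̂_k), the product
# (G^-)^{-1}(λ) G^+(λ) equals the identity for every real λ.

rng = np.random.default_rng(2)
mn, N = 3, 2                          # matrix size m + n and number of eigenvalues
lam_bar = np.array([0.5 + 1.0j, -0.3 + 0.8j])      # rearranged eigenvalues in C^+
lam_hat = np.array([0.2 - 0.6j, -0.7 - 1.1j])      # rearranged adjoint eigenvalues in C^-
v = rng.standard_normal((N, mn)) + 1j * rng.standard_normal((N, mn))   # column vectors
vh = rng.standard_normal((N, mn)) + 1j * rng.standard_normal((N, mn))  # row vectors

M = np.array([[vh[k] @ v[l] / (lam_bar[l] - lam_hat[k]) for l in range(N)]
              for k in range(N)])
Minv = np.linalg.inv(M)

def G_plus(lam):
    return np.eye(mn) - sum(np.outer(v[k], vh[l]) * Minv[k, l] / (lam - lam_hat[l])
                            for k in range(N) for l in range(N))

def G_minus_inv(lam):
    return np.eye(mn) + sum(np.outer(v[k], vh[l]) * Minv[k, l] / (lam - lam_bar[k])
                            for k in range(N) for l in range(N))

for lam in (0.37, -1.2, 2.5):         # sample points on the real line
    assert np.allclose(G_minus_inv(lam) @ G_plus(lam), np.eye(mn), atol=1e-8)
```

The product is a rational function of $\lambda$ tending to the identity at infinity; the choice of $\bar M$ makes all residues at $\bar\lambda_k$ and $\hat{\bar\lambda}_l$ vanish, which is why the identity holds exactly.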
Since the zeros $\lambda_k$ and $\hat\lambda_k$ do not depend on the space and time variables, one can readily determine the spatial and temporal evolutions of the kernel vectors $v_k(x,t)$ and $\hat v_k(x,t)$, $1\le k\le N$. For example, one can compute $v_k(x,t)$, $1\le k\le N$, as follows. Taking the $x$-derivative of both sides of the first set of equations in (123), and applying (66) and then again the first set of equations in (123), one obtains:
$$T^+(x,\lambda_k)\left(\frac{dv_k}{dx} - i\lambda_k\Lambda v_k\right) = 0, \quad 1\le k\le N.$$
Consequently, for each $1\le k\le N$, $\frac{dv_k}{dx} - i\lambda_k\Lambda v_k$ is a kernel vector of $T^+(x,\lambda_k)$, and hence a constant multiple of $v_k$, because $\ker T^+(\lambda_k)$ is one-dimensional. Therefore, without loss of generality, one can assume:
$$\frac{dv_k}{dx} = i\lambda_k\Lambda v_k, \quad 1\le k\le N.$$
The time dependence of $v_k$,
$$\frac{dv_k}{dt} = i\lambda_k^{2s+1}\Omega\, v_k, \quad 1\le k\le N,$$
can be obtained in a similar manner via applying the associated temporal matrix spectral problem, i.e., (67). As a consequence of these differential equations, we get:
$$v_k(x,t) = e^{i\lambda_k\Lambda x + i\lambda_k^{2s+1}\Omega t}\, w_k, \quad 1\le k\le N,$$
and, completely similarly, we can obtain:
$$\hat v_k(x,t) = \hat w_k\, e^{-i\hat\lambda_k\Lambda x - i\hat\lambda_k^{2s+1}\Omega t}, \quad 1\le k\le N,$$
where $w_k$ and $\hat w_k$, $1\le k\le N$, are constant column and row vectors, respectively.
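These exponential formulas for the kernel vectors can be checked against their defining differential equations by finite differences. Since $\Lambda$ and $\Omega$ are diagonal and commute, the matrix exponential acts entrywise; all numerical values below are illustrative.

```python
import numpy as np

# Check that v_k(x,t) = exp(iλΛx + iλ^{2s+1}Ωt) w satisfies
# dv/dx = iλΛv and dv/dt = iλ^{2s+1}Ωv, for one illustrative eigenvalue λ.

m, n, s = 1, 2, 1
alpha1, alpha2, beta1, beta2 = 2.0, 0.5, 0.7, -0.4
Lam = np.diag([alpha1] * m + [alpha2] * n).astype(complex)
Om = np.diag([beta1] * m + [beta2] * n).astype(complex)
lam = 0.3 + 0.9j                      # an illustrative eigenvalue
w = np.array([1.0, 0.5, -0.5], dtype=complex)

def v(x, t):
    # The exponent is diagonal, so np.exp acts entrywise on its diagonal.
    expo = 1j * lam * np.diag(Lam) * x + 1j * lam ** (2 * s + 1) * np.diag(Om) * t
    return np.exp(expo) * w

h, x0, t0 = 1e-6, 0.4, -0.2
dvdx = (v(x0 + h, t0) - v(x0 - h, t0)) / (2 * h)
dvdt = (v(x0, t0 + h) - v(x0, t0 - h)) / (2 * h)
assert np.allclose(dvdx, 1j * lam * Lam @ v(x0, t0), atol=1e-5)
assert np.allclose(dvdt, 1j * lam ** (2 * s + 1) * Om @ v(x0, t0), atol=1e-5)
```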
Now, based on the solutions in (129), one obtains:
$$G_1^+ = -\sum_{k,l=1}^{N}\bar v_k\,(\bar M^{-1})_{kl}\,\hat{\bar v}_l,$$
and further, the presentations in (122) give rise to the N-soliton solutions to the matrix AKNS Equation (39):
$$p = \alpha\sum_{k,l=1}^{N}\bar v_k^1\,(\bar M^{-1})_{kl}\,\hat{\bar v}_l^2, \qquad q = -\alpha\sum_{k,l=1}^{N}\bar v_k^2\,(\bar M^{-1})_{kl}\,\hat{\bar v}_l^1.$$
Here, for each $1\le k\le N$, we have made the splittings $\bar v_k = ((\bar v_k^1)^T, (\bar v_k^2)^T)^T$ and $\hat{\bar v}_k = (\hat{\bar v}_k^1, \hat{\bar v}_k^2)$, where $\bar v_k^1$ and $\hat{\bar v}_k^1$ are column and row vectors of dimension $m$, respectively, while $\bar v_k^2$ and $\hat{\bar v}_k^2$ are column and row vectors of dimension $n$, respectively.
To present N-soliton solutions for the reduced nonlocal matrix integrable mKdV type Equation (54), one needs to check if G 1 + determined by (136) possesses the involution properties:
$$(G_1^+)^\dagger = \Sigma\,G_1^+\,\Sigma^{-1}, \qquad (G_1^+)^\dagger(-x,-t) = \Delta\,G_1^+(x,t)\,\Delta^{-1}.$$
These guarantee that the resulting potential matrix $P$ determined by (120) satisfies the group reduction conditions in (46) and (47). In this way, the above $N$-soliton solutions to the matrix AKNS Equation (39) reduce to the following $N$-soliton solutions:
$$p = \alpha\sum_{k,l=1}^{N}\bar v_k^1\,(\bar M^{-1})_{kl}\,\hat{\bar v}_l^2,$$
to the reduced nonlocal matrix integrable mKdV type Equation (54).

4.2. Realization

Let us now check how to realize the involution properties in (138).
First, we take $N$ distinct zeros of $\det T^+(\lambda)$ (i.e., eigenvalues of the matrix spectral problems with the zero potential):
$$\{\lambda_k\,|\,1\le k\le N\} = \{\mu_1,\dots,\mu_{N_1},\ \mu_1^*,\dots,\mu_{N_1}^*,\ i\nu_1,\dots,i\nu_{N_2}\},$$
and $N$ zeros of $\det T^-(\lambda)$ (i.e., eigenvalues of the adjoint matrix spectral problems with the zero potential):
$$\{\hat\lambda_k\,|\,1\le k\le N\} = \{\mu_1^*,\dots,\mu_{N_1}^*,\ \mu_1,\dots,\mu_{N_1},\ -i\nu_1,\dots,-i\nu_{N_2}\},$$
where $\mu_k\in\mathbb{C}^+$, $\mu_k\notin i\mathbb{R}$, and $\nu_k\in\mathbb{R}^+$. It is easy to see that all the kernels $\ker T^+(\lambda_k)$, $1\le k\le N$, are linearly spanned by
$$v_k = v_k(x,t,\lambda_k) = e^{i\lambda_k\Lambda x + i\lambda_k^{2s+1}\Omega t}\, w_k, \quad 1\le k\le N,$$
respectively, where each $w_k$ ($1\le k\le N$) is a constant column vector. These column vectors in (142) are eigenfunctions of the matrix spectral problems with the zero potential, associated with the eigenvalues $\lambda_k$, $1\le k\le N$. Furthermore, following the preceding analysis in Section 3.1, the kernels $\ker T^-(\hat\lambda_k)$, $1\le k\le N$, are linearly spanned by
$$\hat v_k = \hat v_k(x,t,\hat\lambda_k) = v_k^\dagger(x,t,\lambda_k)\,\Sigma = v_{N_1+k}^T(-x,-t,\lambda_{N_1+k})\,\Delta, \quad 1\le k\le N_1,$$
$$\hat v_{N_1+k} = \hat v_{N_1+k}(x,t,\hat\lambda_{N_1+k}) = v_{N_1+k}^\dagger(x,t,\lambda_{N_1+k})\,\Sigma = v_k^T(-x,-t,\lambda_k)\,\Delta, \quad 1\le k\le N_1,$$
and
$$\hat v_k = \hat v_k(x,t,\hat\lambda_k) = v_k^\dagger(x,t,\lambda_k)\,\Sigma = v_k^T(-x,-t,\lambda_k)\,\Delta, \quad 2N_1+1\le k\le N,$$
respectively. These row vectors v ^ k , 1 k N , are eigenfunctions of the adjoint spectral problems with the zero potential associated with the adjoint eigenvalues λ ^ k , 1 k N , respectively. It is direct to see that the choices in (143)–(145) yield the selections on w k , 1 k N :
w k ( Δ Σ 1 Σ Δ 1 ) = 0 , 1 k N 1 , w k = Δ 1 Σ w k N 1 , N 1 + 1 k 2 N 1 , w k Σ = w k Δ , 2 N 1 + 1 k N ,
where * denotes the complex conjugate of a matrix. We emphasize that all these selections aim to satisfy the reduction conditions in (46) and (47).
Now, note that when the solutions to the special Riemann–Hilbert problems defined by (129) and (130) possess the involution properties in (104) and (105), the corresponding matrix $G_1^+$ satisfies the involution properties in (138), which are consequences of the group reductions in (42) and (43). Therefore, once the selections in (146) are made, the Formula (139), together with (129), (130), and (142)–(145), gives rise to $N$-soliton solutions to the reduced nonlocal matrix integrable mKdV type equations (54).
When $m = 1$, $n = 2$, and $N = 1$, let us fix $\alpha = \alpha_1 - \alpha_2 = 1$, take $\lambda_1 = i\nu$ and $\hat\lambda_1 = -i\nu$ with $\nu\in\mathbb{R}$, $\nu\ne 0$, and, due to the last requirement in (146), choose
$$w_1 = (w_{1,1},\ w_{1,2},\ \sigma w_{1,2})^T,$$
where w 1 , 1 , w 1 , 2 are real and σ 2 = 1 . This leads to a class of one-soliton solutions to the reduced nonlocal integrable mKdV Equation (59):
$$p_1 = \frac{2i\sigma\nu\, w_{1,1}\, w_{1,2}}{w_{1,1}^2\, e^{\nu x + \nu^3(\beta_1-\beta_2)t} + 2\sigma\, w_{1,2}^2\, e^{-\nu x - \nu^3(\beta_1-\beta_2)t}},$$
where $\nu\in\mathbb{R}$ is arbitrary, but $w_{1,1}, w_{1,2}\in\mathbb{R}$ need to satisfy
$$w_{1,1}^2 - 2w_{1,2}^2 = 0,$$
which comes from the involution properties in (138).
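Under this constraint and with $\sigma = 1$, the denominator of the one-soliton solution above becomes $4w_{1,2}^2\cosh\theta$ with $\theta = \nu x + \nu^3(\beta_1-\beta_2)t$, so $|p_1| = (|\nu|/\sqrt{2})\,\mathrm{sech}\,\theta$ is a bell-shaped wave moving at speed $-\nu^2(\beta_1-\beta_2)$. A brief numerical sketch, using illustrative parameter values:

```python
import numpy as np

# One-soliton profile under the constraint w11² = 2 w12², with σ = 1.
# |p1| = (|ν|/√2) sech(νx + ν³(β1 − β2)t), peaking along x = −ν²(β1 − β2)t.

nu, beta1, beta2, sigma = 1.0, 0.7, -0.4, 1.0
w12 = 1.0
w11 = np.sqrt(2.0) * w12              # enforce the constraint w11² − 2 w12² = 0

def p1(x, t):
    theta = nu * x + nu ** 3 * (beta1 - beta2) * t
    return 2j * sigma * nu * w11 * w12 / (w11 ** 2 * np.exp(theta)
                                          + 2 * sigma * w12 ** 2 * np.exp(-theta))

x_peak = -nu ** 2 * (beta1 - beta2) * 0.5          # crest position at t = 0.5
assert abs(abs(p1(x_peak, 0.5)) - abs(nu) / np.sqrt(2.0)) < 1e-12
assert abs(p1(x_peak + 1.0, 0.5)) < abs(p1(x_peak, 0.5))
```

For $\sigma = -1$, the denominator develops zeros along a line in the $(x,t)$-plane, so the corresponding solutions blow up in finite time, a typical feature of nonlocal reductions.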

5. Concluding Remarks

We have proposed type $(\lambda^*, -\lambda^*)$ reduced nonlocal matrix integrable mKdV hierarchies of equations, by taking advantage of two group reductions of the matrix AKNS spectral problems of arbitrary order, and formulated Riemann–Hilbert problems for the resulting matrix integrable mKdV type equations, by use of the Lax pairs and adjoint Lax pairs of matrix spectral problems. The reflectionless Riemann–Hilbert problems have been applied to construct soliton solutions of the proposed reduced matrix integrable mKdV type equations.
The key step in our construction is to use two group reductions simultaneously to generate reduced integrable equations, of which one is local and the other is nonlocal. In our analyses of Riemann–Hilbert problems, we have reformulated solutions to the corresponding reflectionless Riemann–Hilbert problems, based on the distribution of eigenvalues and adjoint eigenvalues. Such a treatment for Riemann–Hilbert problems is vital to the presentation of soliton solutions in the nonlocal case. It should also be interesting to apply the idea of adopting a pair of group reductions to other matrix spectral problems to generate reduced nonlocal integrable equations.
Indeed, the Riemann–Hilbert approach is very effective in presenting soliton solutions (see also, e.g., [27,28,29]), and the technique has been generalized to solve various initial boundary value problems of nonlinear integrable equations on the half-line or the finite interval [30,31]. There exist many other powerful approaches to soliton solutions, which include the Hirota direct method [4], the Wronskian technique [32,33], the generalized bilinear technique [34,35], the Bell polynomial approach [36,37], and the Darboux transformation [3,38]. It would be of significant importance to search for connections among different methods to exhibit dynamical behaviors of soliton solutions. It is another interesting topic for future study to establish Riemann–Hilbert problems to solve generalized integrable counterparts—for example, integrable couplings, super-symmetric integrable equations, and fractional spacetime analogous equations. We would also like to emphasize that it would be particularly interesting to construct diverse exact solutions other than solitons to nonlinear integrable equations—for instance, positon solutions [39], or more generally, complexiton solutions [40], rogue wave and lump solutions [41,42,43,44], solitonless solutions [45], and algebro-geometric solutions [46] from the perspective of Riemann–Hilbert problems.

Funding

This research was funded in part by the National Natural Science Foundation of China (11975145, 11972291, and 51771083), the Ministry of Science and Technology of China (G2021016032L), and the Natural Science Foundation for Colleges and Universities in Jiangsu Province (17 KJB 110020).

Data Availability Statement

All data generated or analyzed during this study are included in this published article.

Acknowledgments

The author would like to thank Alle Adjiri, Ahmed Ahmed, Li Cheng, Jingwei He, Morgan McAnally, Solomon Manukure, Fudong Wang, and Yi Zhang for their helpful discussions.

Conflicts of Interest

The author declares that there is no conflict of interest.

References

  1. Ablowitz, M.J.; Segur, H. Solitons and the Inverse Scattering Transform; SIAM: Philadelphia, PA, USA, 1981. [Google Scholar]
  2. Calogero, F.; Degasperis, A. Solitons and Spectral Transform I; North-Holland: Amsterdam, The Netherlands, 1982. [Google Scholar]
  3. Matveev, V.B.; Salle, M.A. Darboux Transformations and Solitons; Springer: Berlin/Heidelberg, Germany, 1991. [Google Scholar]
  4. Hirota, R. The Direct Method in Soliton Theory; Cambridge University Press: New York, NY, USA, 2004. [Google Scholar]
  5. Ma, W.X. N-soliton solution and the Hirota condition of a (2+1)-dimensional combined equation. Math. Comput. Simul. 2021, 190, 270–279. [Google Scholar] [CrossRef]
  6. Ma, W.X.; Yong, X.L.; Lü, X. Soliton solutions to the B-type Kadomtsev-Petviashvili equation under general dispersion relations. Wave Motion 2021, 103, 102719. [Google Scholar] [CrossRef]
  7. Ma, W.X. N-soliton solutions and the Hirota conditions in (1+1)-dimensions. Int. J. Nonlinear Sci. Numer. Simul. 2022, 23, 123–133. [Google Scholar] [CrossRef]
  8. Novikov, S.P.; Manakov, S.V.; Pitaevskii, L.P.; Zakharov, V.E. Theory of Solitons: The Inverse Scattering Method; Consultants Bureau: New York, NY, USA, 1984. [Google Scholar]
  9. Ma, W.X. The algebraic structure of zero curvature representations and application to coupled KdV systems. J. Phys. A Math. Gen. 1993, 26, 2573–2582. [Google Scholar] [CrossRef]
  10. Ma, W.X. Riemann-Hilbert problems and soliton solutions of a multicomponent mKdV system and its reduction. Math. Meth. Appl. Sci. 2019, 42, 1099–1113. [Google Scholar] [CrossRef]
  11. Ma, W.X.; Huang, Y.H.; Wang, F.D. Inverse scattering transforms and soliton solutions of nonlocal reverse-space nonlinear Schrödinger hierarchies. Stud. Appl. Math. 2020, 145, 563–585. [Google Scholar] [CrossRef]
  12. Ma, W.X. Inverse scattering and soliton solutions for nonlocal reverse-spacetime nonlinear Schrödinger equations. Proc. Am. Math. Soc. 2021, 149, 251–263. [Google Scholar] [CrossRef]
  13. Ma, W.X. Nonlocal PT-symmetric integrable equations and related Riemann-Hilbert problems. Partial. Differ. Equ. Appl. Math. 2021, 4, 100190. [Google Scholar] [CrossRef]
  14. Ma, W.X. Integrable nonlocal nonlinear Schrödinger equations associated with so(3, R ). Proc. Am. Math. Soc. Ser. B 2022, 9, 1–11. [Google Scholar] [CrossRef]
  15. Ma, W.X. Riemann-Hilbert problems and soliton solutions of nonlocal reverse-time NLS hierarchies. Acta Math. Sci. 2022, 42B, 127–140. [Google Scholar] [CrossRef]
  16. Ma, W.X. Riemann-Hilbert problems and inverse scattering of nonlocal real reverse-spacetime matrix AKNS hierarchies. Physica D 2022, 430, 133078. [Google Scholar] [CrossRef]
  17. Ablowitz, M.J.; Kaup, D.J.; Newell, A.C.; Segur, H. The inverse scattering transform-Fourier analysis for nonlinear problems. Stud. Appl. Math. 1974, 53, 249–315. [Google Scholar] [CrossRef]
  18. Manakov, S.V. On the theory of two-dimensional stationary self-focusing of electromagnetic waves. Sov. Phys. JETP 1974, 38, 248–253. [Google Scholar]
  19. Tu, G.Z. On Liouville integrability of zero-curvature equations and the Yang hierarchy. J. Phys. A Math. Gen. 1989, 22, 2375–2392. [Google Scholar]
  20. Ablowitz, M.J.; Musslimani, Z.H. Integrable nonlocal nonlinear equations. Stud. Appl. Math. 2016, 139, 7–59. [Google Scholar] [CrossRef] [Green Version]
  21. Ji, J.L.; Zhu, Z.N. On a nonlocal modified Korteweg-de Vries equation: Integrability, Darboux transformation and soliton solutions. Commun. Nonlinear Sci. Numer. Simul. 2017, 42, 699–708. [Google Scholar] [CrossRef]
  22. Gürses, M.; Pekcan, A. Nonlocal modified KdV equations and their soliton solutions by Hirota method. Commun. Nonlinear Sci. Numer. Simul. 2019, 67, 427–448. [Google Scholar] [CrossRef]
  23. Hildebrand, F.B. Methods of Applied Mathematics; Dover: Mineola, New York, NY, USA, 1992. [Google Scholar]
  24. Ablowitz, M.J.; Prinari, B.; Trubatch, A.D. Discrete and Continuous Nonlinear Schrödinger Systems; Cambridge University Press: New York, NY, USA, 2004. [Google Scholar]
  25. Gakhov, F.D. Boundary Value Problems; Elsevier Science: London, UK, 2014. [Google Scholar]
  26. Kawata, T. Riemann spectral method for the nonlinear evolution equation. In Advances in Nonlinear Waves; Pitman Advanced Pub.: Boston, MA, USA, 1984; Volume 1, pp. 210–225. [Google Scholar]
  27. Yang, J. Nonlinear Waves in Integrable and Nonintegrable Systems; SIAM: Philadelphia, PA, USA, 2010. [Google Scholar]
  28. Wang, D.S.; Zhang, D.J.; Yang, J. Integrable properties of the general coupled nonlinear Schrödinger equations. J. Math. Phys. 2010, 51, 023510. [Google Scholar] [CrossRef] [Green Version]
  29. Geng, X.G.; Wu, J.P. Riemann-Hilbert approach and N-soliton solutions for a generalized Sasa-Satsuma equation. Wave Motion 2016, 60, 62–72. [Google Scholar] [CrossRef]
  30. Fokas, A.S.; Lenells, J. The unified method: I. Nonlinearizable problems on the half-line. J. Phys. A Math. Theor. 2012, 45, 195201. [Google Scholar] [CrossRef] [Green Version]
  31. Lenells, J.; Fokas, A.S. The unified method: III. Nonlinearizable problems on the interval. J. Phys. A Math. Theor. 2012, 45, 195203. [Google Scholar] [CrossRef]
  32. Freeman, N.C.; Nimmo, J.J.C. Soliton solutions of the Korteweg-de Vries and Kadomtsev-Petviashvili equations: The Wronskian technique. Phys. Lett. A 1983, 95, 1–3. [Google Scholar] [CrossRef]
  33. Ma, W.X.; You, Y. Solving the Korteweg-de Vries equation by its bilinear form: Wronskian solutions. Trans. Am. Math. Soc. 2005, 357, 1753–1778. [Google Scholar] [CrossRef] [Green Version]
  34. Ma, W.X. Generalized bilinear differential equations. Stud. Nonlinear Sci. 2011, 2, 140–144. [Google Scholar]
  35. Batwa, S.; Ma, W.X. A study of lump-type and interaction solutions to a (3+1)-dimensional Jimbo-Miwa-like equation. Comput. Math. Appl. 2018, 76, 1576–1582. [Google Scholar] [CrossRef]
  36. Lambert, F.; Loris, I.; Springael, J.; Willox, R. On a direct bilinearization method: Kaup’s higher-order water wave equation as a modified nonlocal Boussinesq equation. J. Phys. A Math. Gen. 1994, 27, 5325–5334. [Google Scholar] [CrossRef]
  37. Fan, E.G. The integrability of nonisospectral and variable-coefficient KdV equation with binary Bell polynomials. Phys. Lett. A 2011, 375, 493–497. [Google Scholar] [CrossRef]
  38. Ma, W.X.; Zhang, Y.J. Darboux transformations of integrable couplings and applications. Rev. Math. Phys. 2018, 30, 1850003. [Google Scholar] [CrossRef]
  39. Matveev, V.B. Generalized Wronskian formula for solutions of the KdV equations: First applications. Phys. Lett. A 1992, 166, 205–208. [Google Scholar] [CrossRef]
  40. Ma, W.X. Complexiton solutions to the Korteweg-de Vries equation. Phys. Lett. A 2002, 301, 35–44. [Google Scholar] [CrossRef]
  41. Ma, W.X.; Zhou, Y. Lump solutions to nonlinear partial differential equations via Hirota bilinear forms. J. Differ. Equ. 2018, 264, 2633–2659. [Google Scholar] [CrossRef] [Green Version]
  42. Ma, W.X.; Bai, Y.S.; Adjiri, A. Nonlinearity-managed lump waves in a spatial symmetric HSI model. Eur. Phys. J. Plus 2021, 136, 240. [Google Scholar] [CrossRef]
  43. Ma, W.X. Linear superposition of Wronskian rational solutions to the KdV equation. Commun. Theor. Phys. 2021, 73, 065001. [Google Scholar] [CrossRef]
  44. Ma, W.X. A polynomial conjecture connected with rogue waves in the KdV equation. Partial Differ. Equ. Appl. Math. 2021, 3, 100023. [Google Scholar] [CrossRef]
  45. Rybalko, Y.; Shepelsky, D. Long-time asymptotics for the integrable nonlocal nonlinear Schrödinger equation. J. Math. Phys. 2019, 60, 031504. [Google Scholar] [CrossRef] [Green Version]
  46. Belokolos, E.D.; Bobenko, A.I.; Enol’skii, V.Z.; Its, A.R.; Matveev, V.B. Algebro-Geometric Approach to Nonlinear Integrable Equations; Springer: Berlin/Heidelberg, Germany, 1994. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Ma, W.-X. Riemann–Hilbert Problems and Soliton Solutions of Type (λ, λ) Reduced Nonlocal Integrable mKdV Hierarchies. Mathematics 2022, 10, 870. https://doi.org/10.3390/math10060870
