Article

Riemann–Hilbert Problems, Polynomial Lax Pairs, Integrable Equations and Their Soliton Solutions †

by Vladimir Stefanov Gerdjikov 1,2,* and Aleksander Aleksiev Stefanov 1,3
1 Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Acad. Georgi Bonchev Str., Block 8, 1113 Sofia, Bulgaria
2 Institute for Advanced Physical Studies, 111 Tsarigradsko Chaussee, 1784 Sofia, Bulgaria
3 Faculty of Mathematics and Informatics, Sofia University “St. Kliment Ohridski”, 5 James Bourchier Blvd., 1164 Sofia, Bulgaria
* Author to whom correspondence should be addressed.
To the memory of V. E. Zakharov and A. B. Shabat.
Symmetry 2023, 15(10), 1933; https://doi.org/10.3390/sym15101933
Submission received: 11 September 2023 / Revised: 5 October 2023 / Accepted: 7 October 2023 / Published: 18 October 2023

Abstract:
The standard approach to integrable nonlinear evolution equations (NLEE) usually uses the following steps: (1) Lax representation $[L, M] = 0$; (2) construction of fundamental analytic solutions (FAS); (3) reducing the inverse scattering problem (ISP) to a Riemann–Hilbert problem (RHP) $\xi^+(x,t,\lambda) = \xi^-(x,t,\lambda)\,G(x,t,\lambda)$ on a contour $\Gamma$ with sewing function $G(x,t,\lambda)$; (4) soliton solutions and possible applications. Step 1 involves several assumptions: the choice of the Lie algebra $\mathfrak{g}$ underlying $L$, as well as its dependence on the spectral parameter, typically linear or quadratic in $\lambda$. In the present paper, we propose another approach that substantially extends the classes of integrable NLEE. Its first advantage is that one can effectively use any polynomial dependence in both $L$ and $M$. We use the following steps: (A) Start with a canonically normalized RHP with predefined contour $\Gamma$; (B) Specify the $x$ and $t$ dependence of the sewing function defined on $\Gamma$; (C) Introduce a convenient parametrization for the solutions $\xi^\pm(x,t,\lambda)$ of the RHP and formulate the Lax pair and the corresponding NLEE; (D) Use the Zakharov–Shabat dressing method to derive their soliton solutions. This requires correctly taking into account the symmetries of the RHP. (E) Define the resolvent of the Lax operator and use it to analyze its spectral properties.

1. Introduction

In 1968 one of the great discoveries in mathematical physics took place. Its authors: P. Lax, C. S. Gardner, J. M. Greene, M. D. Kruskal, R. M. Miura and N. J. Zabusky after several years of analysis proved that the KdV equation can be exactly solved by the inverse scattering method (ISM). This was the first and for some time, the only NLEE that could be solved exactly. Soon after, it was demonstrated that the KdV is a completely integrable infinite-dimensional Hamiltonian system; its action-angle variables were found by Zakharov and Faddeev [1]. The whole story is well described by N. J. Zabusky in his review paper [2].
The second big step in this direction followed in 1971 by the seminal paper of Zakharov and Shabat who discovered the second equation integrable by the ISM: the nonlinear Schrödinger (NLS) equation [3]; in 1973 the same authors demonstrated that the NLS equation is integrable also under non-vanishing boundary conditions [4]. Both versions of NLS equations described interesting and important physical applications in nonlinear optics, plasma physics, hydrodynamics and others. This inspired many scientists, mathematicians and physicists alike, to join the scientific community interested in the study of soliton equations. As a result, new soliton equations started to appear one after another. Notable mentions being the modified KdV (mKdV) equation [5], the N-wave equations [6], the Manakov system known also as the vector NLS equation [7] and many others. Many of them have already been included in monographs: see, e.g., [8,9,10,11,12] and the numerous references therein.
The first few NLEE were related to the algebra $sl(2)$, so the corresponding inverse scattering problem could be solved using the famous Gelfand–Levitan–Marchenko (GLM) equation. For the Manakov system, it was necessary to use a $2\times2$ block matrix Lax operator, so the GLM equation was naturally generalized to that case as well. However, the ISM for the $N$-wave system turned out to be substantially more difficult. Indeed, for $2\times2$ (block) matrix Lax operators the Jost solutions possess analyticity properties, which are basic for the GLM equation. However, the Lax pair for the $N$-wave system is the generalized $sl(n)$ Zakharov–Shabat system (GZS):
$$L\psi \equiv i\psi_x + \big([J, Q(x,t)] - \lambda J\big)\,\psi(x,t,\lambda) = 0, \qquad M\psi \equiv i\psi_t + \big([K, Q(x,t)] - \lambda K\big)\,\psi(x,t,\lambda) = 0, \qquad J = \mathrm{diag}\,(a_1, a_2, \dots, a_n), \quad K = \mathrm{diag}\,(b_1, b_2, \dots, b_n),$$
where $[X,Y] = XY - YX$ is the commutator of $X$ and $Y$, and $a_k$, $b_k$ are real constants such that $\mathrm{tr}\,J = 0$, $\mathrm{tr}\,K = 0$. Without restrictions, we can assume that $a_1 > a_2 > \dots > a_n$. In this case, only the first and the last columns of the corresponding Jost solutions allow analytic extension in the spectral parameter $\lambda$. This, however, was not enough to derive the GLM equation. It was Shabat who discovered the way out of this difficulty [13,14]. He was able to modify the integral equations for the Jost solutions into integral equations that provide the fundamental analytic solutions (FAS) $\chi^+(x,t,\lambda)$ and $\chi^-(x,t,\lambda)$ of $L$, which allow analytic extensions for $\mathrm{Im}\,\lambda > 0$ and $\mathrm{Im}\,\lambda < 0$, respectively. As a result, the interrelation between the FAS and the sewing function $G_0(t,\lambda)$:
$$\chi^+(x,t,\lambda) = \chi^-(x,t,\lambda)\,G_0(t,\lambda), \qquad \mathrm{Im}\,\lambda = 0,$$
can be reformulated as the Riemann–Hilbert problem (RHP). Now, we can solve the ISP for L by using the RHP with canonical normalization:
$$\xi^+(x,t,\lambda) = \xi^-(x,t,\lambda)\,G(x,t,\lambda), \quad \mathrm{Im}\,\lambda = 0, \qquad i\frac{\partial G_0}{\partial x} = \lambda\,[J, G_0(x,t,\lambda)], \quad i\frac{\partial G_0}{\partial t} = \lambda\,[K, G_0(x,t,\lambda)],$$
$$\lim_{\lambda\to\infty}\xi^\pm(x,t,\lambda) = \mathbb{1}.$$
The normalization condition (4) ensures that the RHP has a unique regular solution [12]. It also allowed Zakharov and Shabat to develop their dressing method, which enables one to calculate the $N$-soliton solutions not only for the $N$-wave system but also for the whole hierarchy of NLEE related to $L$ (1); see [12,15,16,17]. In short, the Zakharov–Shabat dressing method emerged as one of the most effective tools for (a) constructing soliton solutions, and (b) understanding that the dressed Lax operator acquires additional discrete eigenvalues, as well as giving the explicit form of the dressed FAS.
The GZS system has natural reductions. One commonly used reduction requires that $Q(x,t) = Q^\dagger(x,t)$ and that $J$ and $K$ are constant diagonal matrices with real eigenvalues. Other reductions require that $Q(x,t)$, $J$ and $K$ belong either to the $sp(2r)$ or the $so(n)$ Lie algebras. Mikhailov in [18] introduced the notion of the reduction group, which substantially enlarged the classes of integrable NLEE. Some of these reductions require that $J$ has complex-valued eigenvalues. Constructing FAS for such systems poses additional difficulties, which were overcome by Beals and Coifman [19] for the GZS systems related to the $sl(n)$ algebra. Later, the results of [19] were extended first to the systems related to the $so(n)$ or $sp(2r)$ algebras (see [11]), as well as to Mikhailov's reductions [11,20].
The FAS play an important role in soliton theory. Indeed, they can be used to introduce:
1. 
Scattering data. The minimal sets of scattering data are determined by the asymptotics of
$$\lim_{x\to-\infty}\chi^\pm(x,t,\lambda)\,e^{iJ\lambda x} = S^\pm(\lambda,t) \qquad\text{or}\qquad \lim_{x\to\infty}\chi^\pm(x,t,\lambda)\,e^{iJ\lambda x} = T^\pm(\lambda,t).$$
Here, $S^\pm(\lambda,t)$ and $T^\pm(\lambda,t)$ are the factors of the Gauss decompositions of the scattering matrix $T(\lambda,t)$.
2. 
Resolvent. The FAS determine the kernel of the resolvent $R^\pm(x,y,t,\lambda)$ of $L$. Applying the contour integration method to $R^\pm(x,y,t,\lambda)$, one can derive the spectral expansions for $L$, i.e., the completeness relation of the FAS.
3. 
Dressing method. The Zakharov–Shabat dressing method is a very effective and convenient way to construct the class of reflectionless potentials of $L$ and to derive the soliton solutions of the NLEE. The simplest dressing factor $u(x,t,\lambda)$ has pole singularities at $\lambda_1^\pm$, which determine the new discrete eigenvalues that are added to the spectrum of the initial Lax operator.
4. 
Generalized Fourier transforms (GFTs). Here, we start with a GZS system related to a simple Lie algebra $\mathfrak{g}$ with Cartan–Weyl basis $H_j$, $E_\alpha$ [21] and construct the so-called ‘squared solutions’
$$e_\alpha^\pm(x,t,\lambda) = \pi_{0J}\big(\chi^\pm E_\alpha (\chi^\pm)^{-1}\big)(x,t,\lambda) \qquad\text{and}\qquad h_j^\pm(x,t,\lambda) = \pi_{0J}\big(\chi^\pm H_j (\chi^\pm)^{-1}\big)(x,t,\lambda),$$
where $\pi_{0J}$ is the projector onto the image of the operator $\mathrm{ad}_J$. It is known that the ‘squared solutions’ form a complete set of functions in the space of allowed potentials [22]. In particular, if we expand the potential $Q(x,t)$ over the ‘squared solutions’, the expansion coefficients provide the minimal set of scattering data. Similarly, the expansion coefficients of $\mathrm{ad}_J^{-1}\,\delta Q$ are the variations of the minimal set of scattering data. Therefore, the ‘squared solutions’ can be viewed as FAS in the adjoint representation of $\mathfrak{g}$, see [11,22,23,24,25,26,27,28,29,30,31,32,33,34].
5. 
Hierarchies of Hamiltonian structures. The GFTs described above allow one to prove that each of the NLEE related to $L$ allows a hierarchy of Hamiltonian structures. More precisely, each NLEE allows a hierarchy of Hamiltonians $H^{(n)}$ and a hierarchy of symplectic forms $\Omega^{(n)}$ (or a hierarchy of Poisson brackets) such that for any $n$ they produce the relevant NLEE [22,35,36].
6. 
Complete integrability and action-angle variables. Starting from the famous paper by Zakharov and Faddeev [1], it is known that some of the NLEE allow action-angle variables. The difficulty here is that these NLEE are Hamiltonian systems with infinitely many degrees of freedom. Therefore, a rigorous proof must be based on the completeness relation for the ‘squared solutions’. In fact, V. Gerdjikov and E. Khristov [27,28] (see also [30]) proposed the so-called ‘symplectic basis’ of squared solutions, which maps the variation of the potential $\delta Q(x,t)$ of the AKNS system onto the variations of the action variables. Unfortunately, for many multi-component systems such bases are not yet known.
The above arguments lead us to the hypothesis that a more effective approach to the integrable NLEE is one that starts from the RHP rather than from a specific Lax operator.
In Section 2 we recall Shabat's idea for constructing the FAS $\chi^\pm(x,t,\lambda)$ of the GZS system, which are analytic for $\lambda\in\mathbb{C}_\pm$, respectively; see Figure 1.
In Section 3 we first demonstrate that the well known methods for the analysis of NLEE can be generalized also to Lax operators quadratic in $\lambda$, see Equation (32) below. At this level we meet for the first time the purely algebraic problem of constructing two commuting quadratic pencils. For polynomial pencils of higher order those problems become more and more difficult to solve. In particular, we outline the construction of the FAS $\chi^+(x,t,\lambda)$ and $\chi^-(x,t,\lambda)$, which are analytic in $Q_1\cup Q_3$ and $Q_2\cup Q_4$, respectively, see Figure 2 below. As a result, the continuous spectrum of these Lax operators fills up the union of the real and imaginary axes of the complex $\lambda$-plane. As a consequence, the contour of the corresponding RHP is $\mathbb{R}\cup i\mathbb{R}$, unless an additional factor complicates the picture. Thus we see that the symmetries of the NLEE, or of its Lax pair, determine the contour of the RHP.
In Section 4 we review the notion of Mikhailov's reduction group and briefly outline the characteristic contours of the relevant RHP. We also demonstrate that the Zakharov–Shabat theorem is valid for a larger class of Lax operators than proved before. It is important that the RHP is canonically normalized. This ensures that the RHP has a unique regular solution, which is important for the application of the Zakharov–Shabat dressing method.
Another important factor in formulating the RHP is Mikhailov's reduction group. In Section 5 we outline some of the obvious effects that the reduction group may have on the contour of the RHP. Therefore, it is not only the order of the polynomial in $\lambda$, but also the symmetry (the reduction group) that determines the contour of the RHP. For example, if we add an additional Mikhailov symmetry that maps $\lambda\to 1/\lambda$, then the corresponding Lax operator will be polynomial in $\lambda$ and $1/\lambda$, which in turn will require adjustments in the techniques for deriving the dressing factors and soliton solutions.
In Section 6 we propose a parametrization of the solutions $\xi(x,t,\lambda)$ of the RHP for the class of RHP related to homogeneous spaces, see Equation (76). Here we require that the coefficients $Q_s(x,t)$ provide local coordinates of the corresponding homogeneous space. Thus we are able to derive new systems of $N$-wave equations, see also [37,38,39,40,41]. We also demonstrate that the Zakharov–Shabat dressing method [16,17,42,43,44] can be naturally extended to derive the soliton solutions of these new $N$-wave equations. At the same time, the structure of the dressing factors depends substantially on the symmetries of the Lax operators. Thus even for the one-soliton solutions we need to solve linear block-matrix equations. The situation when we have two involutions, the Hermitian one $(\chi^+(x,t,\lambda^*))^\dagger = (\chi^-(x,t,\lambda))^{-1}$ and the $\lambda\to-\lambda$ symmetry $C_0\,\chi^\pm(x,t,-\lambda)\,C_0^{-1} = \chi^\pm(x,t,\lambda)$, is typical for Lax operators $L$ related to the algebras $sl(n,\mathbb{C})$. However, if we request in addition that $L$ is related to a symplectic or orthogonal Lie algebra, then we have to deal with three involutions, and the corresponding linear equations become more involved. That is why we focus first on the one-soliton solutions. The derivation of the $N$-soliton solutions is discussed later.
Section 7 is devoted to the MNLS equations, which require the use of symmetric spaces, see Refs. [7,21,45,46,47,48,49,50,51]. We start again with the parametrization of the RHP, which now must be compatible with the structure of the symmetric spaces. It was natural for us to limit ourselves to the four classes of Hermitian symmetric spaces related to the non-exceptional Lie algebras, see [21]. Again we parametrize the coefficients $Q_s(x,t)$ as local coordinates of the corresponding symmetric spaces. In fact, $Q(x,t,\lambda)$ must have the same grading as the symmetric space, but we were able to apply additional reductions requiring $Q_{2s} = 0$, see Equation (177) below. Thus we formulate the typical MNLS equations related to the four classes of symmetric spaces.
In Section 8 we derive the one-soliton solutions of the MNLS. Again, as in Section 6, we treat separately the MNLS related to A.III-type symmetric spaces, because the corresponding FAS have only two involutions. The MNLS related to C.I and D.III symmetric spaces possess three involutions; the corresponding linear equations are similar to the ones for the class of $N$-wave equations, but the solutions are different. The symmetric spaces of BD.I class are treated separately, because their typical representation is provided by $3\times3$ block matrices, so many of the calculations are indeed different.
In Section 9 we show how the dressing method can be naturally generalized for the calculation of $N$-soliton solutions. The problem again reduces to solving systems of linear equations for the polarization vectors $|N_j\rangle$ and $|M_j\rangle$, which determine the residues of the poles of the dressing factors. The additional difficulties as compared to the GZS system are illustrated by Equation (269). In order to calculate the two-soliton solutions we need to invert a $4\times4$ block matrix.
In Section 10 we introduce the resolvent of the Lax operators in terms of the FAS. The diagonal of the resolvent, after a regularization, can be expressed in terms of the solution of the RHP by $\xi^\pm(x,t,\lambda)\,J\,\hat\xi^\pm(x,t,\lambda)$; here the ‘hat’ denotes the inverse matrix. It can be viewed as a generating functional of the integrals of motion.
In Section 11 we outline how our ideas can be extended to Lax pairs in which both operators are cubic pencils.
We end the paper with a discussion and conclusions. Some technical aspects in the calculations such as the structure of the symmetric spaces, and the root systems of the simple Lie algebras as well as the Gauss decompositions of the elements of the simple Lie groups are given in the Appendix A and Appendix B.

2. From the Lax Representation to the RHP

2.1. N-Waves According to Manakov and Zakharov

The N-wave equations were discovered by Manakov and Zakharov in 1974 [6]. The Lax operator is the classical Zakharov–Shabat system:
$$L_{\rm ZS}\,\psi \equiv i\psi_x + \big([J, Q(x,t)] - \lambda J\big)\,\psi(x,t,\lambda) = 0,$$
with real-valued diagonal $J = \mathrm{diag}\,(a_1, a_2, \dots, a_n)$. Here and below, by $[X,Y] = XY - YX$ we denote the commutator of the matrices $X$ and $Y$, also known as the Lie bracket. The second operator in the Lax representation is also linear in $\lambda$:
$$M_{\rm ZS}\,\psi \equiv i\psi_t + \big([K, Q(x,t)] - \lambda K\big)\,\psi(x,t,\lambda) = 0,$$
with real-valued diagonal $K = \mathrm{diag}\,(b_1, b_2, \dots, b_n)$. Then, the $N$-wave equations, which are the compatibility condition $[L, M] = 0$, take the form:
$$\Big[J, \frac{\partial Q}{\partial t}\Big] - \Big[K, \frac{\partial Q}{\partial x}\Big] + \big[[J, Q(x,t)], [K, Q(x,t)]\big] = 0.$$
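As a small illustration of why this system closes, the following sketch (not from the paper; the matrices are arbitrary) checks numerically that, in the $\lambda$-expansion of $[U,V]$ with $U = [J,Q] - \lambda J$ and $V = [K,Q] - \lambda K$, the $\lambda^2$ coefficient $[J,K]$ and the $\lambda^1$ coefficient vanish identically for diagonal $J$, $K$ (the latter by the Jacobi identity), so only the $\lambda^0$ term $[[J,Q],[K,Q]]$ survives and combines with the derivative terms into the $N$-wave equation above.

```python
# Illustrative check with randomly chosen matrices (an assumption, not the paper's data).
import numpy as np

rng = np.random.default_rng(0)
n = 4
J = np.diag(rng.standard_normal(n))          # real diagonal J
K = np.diag(rng.standard_normal(n))          # real diagonal K
Q = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def comm(A, B):
    return A @ B - B @ A

lam2_term = comm(J, K)                                   # lambda^2 coefficient of [U, V]
lam1_term = comm(J, comm(K, Q)) - comm(K, comm(J, Q))    # lambda^1 coefficient (up to sign)
print(np.max(np.abs(lam2_term)), np.max(np.abs(lam1_term)))   # both vanish up to round-off
```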
For some time, solving the ISP for the Lax operator L ZS was an open problem. However, in 1974, Shabat [13,14] proved that L ZS has fundamental solutions χ + ( x , t , λ ) and χ ( x , t , λ ) , which are analytic in the upper- and lower-half of C .
Let us briefly outline the construction of those FAS, for more details see also [12,15]. For simplicity and if there is no risk of confusion we will not write explicitly the t-dependence of the potential. We will assume that Q ( x ) is defined for all x and satisfies the following conditions:
C.1 
By $Q(x)\in\mathcal{M}_S$ we mean that $Q(x)$ possesses smooth derivatives of all orders and falls off to zero faster than any power of $x$ as $|x|\to\infty$:
$$\lim_{x\to\pm\infty}|x|^k\,Q(x) = 0, \qquad k = 0, 1, 2, \dots$$
C.2 
Q ( x ) is such that the corresponding operator L has only a finite number of simple discrete eigenvalues.
We will also impose the following reduction on the Lax operator (GZS with a $\mathbb{Z}_2$-reduction):
$$U^\dagger(x,t,\epsilon\lambda^*) = U(x,t,\lambda), \qquad \epsilon = \pm 1.$$
Next we introduce the Jost solutions of L by:
$$\lim_{x\to\infty}\psi(x,t,\lambda)\,e^{i\lambda Jx} = \mathbb{1}, \qquad \lim_{x\to-\infty}\phi(x,t,\lambda)\,e^{i\lambda Jx} = \mathbb{1},$$
and the scattering matrix:
$$T(t,\lambda) = \psi^{-1}(x,t,\lambda)\,\phi(x,t,\lambda).$$
By $\mathbb{1}$ here and below we denote the unit matrix, whose dimension is clear from the context. The Jost solutions satisfy the following Volterra-type integral equations:
$$\begin{aligned}
\tilde\psi_{ij}(x,\lambda) &= \delta_{ij} + i\int_{\infty}^{x} dy\; e^{-i\lambda(a_i-a_j)(x-y)}\sum_{p=1}^{n}(a_i-a_p)\,Q_{ip}(y)\,\tilde\psi_{pj}(y,\lambda),\\
\tilde\phi_{ij}(x,\lambda) &= \delta_{ij} + i\int_{-\infty}^{x} dy\; e^{-i\lambda(a_i-a_j)(x-y)}\sum_{p=1}^{n}(a_i-a_p)\,Q_{ip}(y)\,\tilde\phi_{pj}(y,\lambda),
\end{aligned}$$
where $\tilde\psi(x,\lambda) = \psi(x,\lambda)\,e^{iJ\lambda x}$ and $\tilde\phi(x,\lambda) = \phi(x,\lambda)\,e^{iJ\lambda x}$. It is well known that Volterra-type equations possess solutions provided the integrals in the equations are convergent. In our case, this is true for real $\lambda$. Indeed, in this case the exponential factors in Equation (11) are bounded, and the convergence of the integrals is ensured by the fact that $Q(x)$ satisfies condition C.1. However, for complex $\lambda$ the exponential factors do not grow only for the first and the last columns of the Jost solutions.
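The following minimal numerical sketch (an illustration with assumed data, not the paper's computation) integrates the two Jost solutions of a $2\times2$ Zakharov–Shabat system for a rapidly decaying potential and verifies that $T(\lambda) = \psi^{-1}\phi$ in Equation (10) does not depend on the point $x$ at which it is evaluated.

```python
# Sketch: Jost solutions of L psi = i psi_x + ([J,Q(x)] - lambda J) psi = 0, lambda real.
# The potential, lambda, box size and observation points are arbitrary choices.
import numpy as np

J = np.diag([1.0, -1.0])
lam = 0.7                                  # real spectral parameter (continuous spectrum)

def Q(x):                                  # Schwartz-type off-diagonal potential
    q = 0.8 * np.exp(-x**2)
    return np.array([[0.0, q], [q, 0.0]], dtype=complex)

def rhs(x, Y):                             # psi_x = -i (lambda J - [J, Q(x)]) psi
    comm = J @ Q(x) - Q(x) @ J
    return -1j * (lam * J - comm) @ Y

def rk4(Y, x0, x1, n=4000):                # fixed-step Runge-Kutta integrator
    h = (x1 - x0) / n
    x = x0
    for _ in range(n):
        k1 = rhs(x, Y); k2 = rhs(x + h/2, Y + h/2*k1)
        k3 = rhs(x + h/2, Y + h/2*k2); k4 = rhs(x + h, Y + h*k3)
        Y = Y + h/6*(k1 + 2*k2 + 2*k3 + k4); x += h
    return Y

def free(x):                               # asymptotic plane-wave solution exp(-i lambda J x)
    return np.diag(np.exp(-1j * lam * np.diag(J) * x))

L_box = 12.0                               # Q(x) is negligible beyond |x| = L_box
for x_obs in (-1.0, 0.0, 2.0):
    psi = rk4(free(L_box),  L_box, x_obs)  # Jost solution normalized at x -> +infinity
    phi = rk4(free(-L_box), -L_box, x_obs) # Jost solution normalized at x -> -infinity
    print(x_obs, np.round(np.linalg.inv(psi) @ phi, 4))   # same matrix at every x_obs
```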
Shabat introduced the FAS χ ± ( x , λ ) of L by modifying the integral equations for them as follows:
$$\begin{aligned}
\xi^+_{ij}(x,\lambda) &= \delta_{ij} + i\int_{-\infty}^{x} dy\; e^{-i\lambda(a_i-a_j)(x-y)}\sum_{p=1}^{n}(a_i-a_p)\,Q_{ip}(y)\,\xi^+_{pj}(y,\lambda), && i\ge j;\\
\xi^+_{ij}(x,\lambda) &= i\int_{\infty}^{x} dy\; e^{-i\lambda(a_i-a_j)(x-y)}\sum_{p=1}^{n}(a_i-a_p)\,Q_{ip}(y)\,\xi^+_{pj}(y,\lambda), && i< j;
\end{aligned}$$
$$\begin{aligned}
\xi^-_{ij}(x,\lambda) &= i\int_{\infty}^{x} dy\; e^{-i\lambda(a_i-a_j)(x-y)}\sum_{p=1}^{n}(a_i-a_p)\,Q_{ip}(y)\,\xi^-_{pj}(y,\lambda), && i> j;\\
\xi^-_{ij}(x,\lambda) &= \delta_{ij} + i\int_{-\infty}^{x} dy\; e^{-i\lambda(a_i-a_j)(x-y)}\sum_{p=1}^{n}(a_i-a_p)\,Q_{ip}(y)\,\xi^-_{pj}(y,\lambda), && i\le j;
\end{aligned}$$
where
$$\xi^\pm(x,\lambda) = \chi^\pm(x,\lambda)\,e^{i\lambda Jx}.$$
Now, it is not difficult to check that all exponential factors in the integrands of (12) decay for $\lambda\in\mathbb{C}_+$, while the exponential factors in the integrands of (13) decay for $\lambda\in\mathbb{C}_-$. In other words, $\xi^+(x,t,\lambda)$ allows analytic extension for $\lambda$ in the upper complex half-plane, while $\xi^-(x,t,\lambda)$ is analytic for $\lambda$ in the lower complex half-plane. Obviously, $\xi^+(x,\lambda)$ and $\xi^-(x,\lambda)$ are FAS of a slightly modified Lax operator:
$$\tilde L\,\xi^\pm \equiv i\frac{\partial\xi^\pm}{\partial x} + [J, Q(x,t)]\,\xi^\pm(x,\lambda) - \lambda\,[J, \xi^\pm(x,\lambda)] = 0.$$
Theorem 1. 
Let $Q(x)\in\mathcal{M}_S$ satisfy conditions (C.1), (C.2) and let the matrix elements of $J$ be ordered: $a_1 > a_2 > \dots > a_n$. Then, the solution $\xi^+(x,\lambda)$ of Equation (12) (resp. $\xi^-(x,\lambda)$ of Equation (13)) exists and allows analytic extension for $\lambda\in\mathbb{C}_+$ (resp. for $\lambda\in\mathbb{C}_-$).
Remark 1. 
Due to the fact that in Equation (12) we have both $\infty$ and $-\infty$ as lower limits, the equations are of Fredholm rather than of Volterra type. Therefore, we must also consider the Fredholm alternative, i.e., there may exist a finite number of values $\lambda = \lambda_k^\pm\in\mathbb{C}_\pm$ for which the solutions $\xi^\pm(x,\lambda)$ have zeroes and pole singularities in $\lambda$. The points $\lambda_k^\pm$ are in fact the discrete eigenvalues of $L(\lambda)$ in $\mathbb{C}_\pm$.
The reduction condition (8) with $\epsilon = 1$ means that the FAS and the scattering matrix $T(\lambda)$ satisfy:
$$\big(\chi^+(x,t,\lambda^*)\big)^\dagger = \big(\chi^-(x,t,\lambda)\big)^{-1}, \qquad T^\dagger(t,\lambda^*) = T^{-1}(t,\lambda).$$
Each fundamental solution of the Lax operator is uniquely determined by its asymptotics for $x\to\infty$ or $x\to-\infty$. Therefore, in order to determine the linear relations between the FAS and the Jost solutions for $\lambda\in\mathbb{R}$, we need to calculate the asymptotics of the FAS for $x\to\pm\infty$. Taking the limits in the right-hand sides of the integral Equations (12) and (13) we obtain:
$$\lim_{x\to-\infty}\xi^+_{ij}(x,\lambda) = \delta_{ij}, \ \ \text{for } i\ge j; \qquad \lim_{x\to\infty}\xi^+_{ij}(x,\lambda) = 0, \ \ \text{for } i< j;$$
$$\lim_{x\to-\infty}\xi^-_{ij}(x,\lambda) = \delta_{ij}, \ \ \text{for } i\le j; \qquad \lim_{x\to\infty}\xi^-_{ij}(x,\lambda) = 0, \ \ \text{for } i> j.$$
This can be written in compact form using (14):
$$\chi^\pm(x,\lambda) = \phi(x,\lambda)\,S^\pm(\lambda) = \psi(x,\lambda)\,T^\mp(\lambda)\,D^\pm(\lambda),$$
where the matrices S ± ( λ ) , D ± ( λ ) and T ± ( λ ) are of the form:
$$S^+(\lambda) = \begin{pmatrix} 1 & S^+_{12} & \cdots & S^+_{1n}\\ 0 & 1 & \cdots & S^+_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & 1 \end{pmatrix}, \qquad T^+(\lambda) = \begin{pmatrix} 1 & T^+_{12} & \cdots & T^+_{1n}\\ 0 & 1 & \cdots & T^+_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & 1 \end{pmatrix},$$
$$D^+(\lambda) = \mathrm{diag}\,(D_1^+, D_2^+, \dots, D_n^+), \qquad D^-(\lambda) = \mathrm{diag}\,(D_1^-, D_2^-, \dots, D_n^-),$$
$$S^-(\lambda) = \begin{pmatrix} 1 & 0 & \cdots & 0\\ S^-_{21} & 1 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ S^-_{n1} & S^-_{n2} & \cdots & 1 \end{pmatrix}, \qquad T^-(\lambda) = \begin{pmatrix} 1 & 0 & \cdots & 0\\ T^-_{21} & 1 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ T^-_{n1} & T^-_{n2} & \cdots & 1 \end{pmatrix}.$$
Let us now relate the factors T ± ( λ ) , S ± ( λ ) and D ± ( λ ) to the scattering matrix T ( λ ) . Comparing (18) with (10) we find
$$T(\lambda) = T^-(\lambda)\,D^+(\lambda)\,\hat S^+(\lambda) = T^+(\lambda)\,D^-(\lambda)\,\hat S^-(\lambda),$$
i.e., T ± ( λ ) , S ± ( λ ) and D ± ( λ ) are the factors in the Gauss decomposition of T ( λ ) .
It is well known that given T ( λ ) one can construct explicitly its Gauss decomposition, see the Appendix B. Here, we need only the expressions for D ± ( λ ) :
$$D_j^+(\lambda) = \frac{m_j^+(\lambda)}{m_{j-1}^+(\lambda)}, \qquad D_j^-(\lambda) = \frac{m_{n-j+1}^-(\lambda)}{m_{n-j}^-(\lambda)},$$
where m j ± are the principal upper and lower minors of T ( λ ) of order j.
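A standalone check (an assumed example, not the paper's code) of the linear-algebra fact behind Equation (21): in the decomposition $T = L\,D\,U$ with $L$ unit lower-triangular, $U$ unit upper-triangular and $D$ diagonal, one has $D_j = m_j/m_{j-1}$, where $m_j$ is the leading principal minor of order $j$ (with $m_0 = 1$).

```python
# Gaussian elimination without pivoting on a well-conditioned random matrix.
import numpy as np

rng = np.random.default_rng(1)
n = 5
T = rng.standard_normal((n, n)) + np.eye(n) * n    # strong diagonal => all leading minors nonzero

A = T.copy()
Lmat = np.eye(n)
for k in range(n - 1):
    for i in range(k + 1, n):
        Lmat[i, k] = A[i, k] / A[k, k]
        A[i, :] -= Lmat[i, k] * A[k, :]            # A becomes the upper-triangular factor

assert np.allclose(Lmat @ A, T)                    # T = L * (D U)
D_from_LDU = np.diag(A)                            # the pivots, i.e. the diagonal of D
minors = [np.linalg.det(T[:j, :j]) for j in range(1, n + 1)]
D_from_minors = [minors[0]] + [minors[j] / minors[j - 1] for j in range(1, n)]
print(np.allclose(D_from_LDU, D_from_minors))      # True
```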
Remark 2. 
The Gauss decomposition of $T(t,\lambda)\in SL(n)$ is well known, see Appendix B and [15]. The derivation requires knowledge of the fundamental representations of the Lie algebra $sl(n)$. For $T(t,\lambda)$ belonging to an orthogonal or symplectic group, see [22].
Corollary 1. 
The upper (resp. lower) principal minors $m_j^+(\lambda)$ (resp. $m_j^-(\lambda)$) of $T(\lambda)$ are analytic functions of $\lambda$ for $\lambda\in\mathbb{C}_+$ (resp. for $\lambda\in\mathbb{C}_-$).
Proof. 
Follows directly from Theorem 1, considering the limits:
$$\lim_{x\to\infty}\xi^+_{jj}(x,\lambda) = D_j^+(\lambda), \qquad \lim_{x\to\infty}\xi^-_{jj}(x,\lambda) = D_j^-(\lambda),$$
and from (19b) and (21). □
This means that the solution of the ISP for L ZS can be reduced to an RHP:
$$\chi^+(x,t,\lambda) = \chi^-(x,t,\lambda)\,G_0(t,\lambda), \qquad \lambda\in\mathbb{R},$$
on the real axis of $\mathbb{C}$. The solutions $\xi^\pm(x,t,\lambda)$ defined by (14) satisfy an alternative RHP:
$$\xi^+(x,t,\lambda) = \xi^-(x,t,\lambda)\,G(x,t,\lambda), \quad \lambda\in\mathbb{R}, \qquad i\frac{\partial G}{\partial x} - \lambda\,[J, G(x,t,\lambda)] = 0, \quad i\frac{\partial G}{\partial t} - \lambda\,[K, G(x,t,\lambda)] = 0,$$
which is canonically normalized, i.e.,
$$\lim_{\lambda\to\infty}\xi^\pm(x,t,\lambda) = \mathbb{1}.$$

2.2. MNLS Equations According to Manakov, Fordy and Kulish

The first MNLS:
$$i\frac{\partial\mathbf{q}}{\partial t} + \frac{\partial^2\mathbf{q}}{\partial x^2} + 2\,(\mathbf{q}^\dagger, \mathbf{q})\,\mathbf{q}(x,t) = 0,$$
where q is a 2-component vector, was proposed by Manakov [7] in 1974. Later, in their seminal paper [45], Fordy and Kulish demonstrated the deep relations between the MNLS equations and symmetric spaces [21]. As a result the Lax representation for the MNLS takes the form:
$$L_{\rm MNLS}\,\psi \equiv i\psi_x + \big(Q(x,t) - \lambda J\big)\,\psi(x,t,\lambda) = 0, \qquad M_{\rm MNLS}\,\psi \equiv i\psi_t + \big(V_2(x,t) + \lambda Q(x,t) - \lambda^2 J\big)\,\psi(x,t,\lambda) = 0.$$
For the Manakov model (26), $J = \mathrm{diag}\,(1,-1,-1)$ and $Q(x,t) = \begin{pmatrix} 0 & \mathbf{q}^T\\ \mathbf{q}^* & 0\end{pmatrix}$. A special role here is played by the Cartan element $J$ of the corresponding simple Lie algebra. It determines the Cartan involution, which fixes the symmetric space obtained from the corresponding Lie algebra. The Manakov model is related to the symmetric space $SU(3)/S(U(1)\times U(2))$. The symmetric spaces were classified about a century ago by E. Cartan, see, e.g., [21].
In 1976, Kaup and Newell [35] derived the generalization of the NLS corresponding to a Lax operator quadratic in $\lambda$. This generalization of the NLS is now known as the derivative NLS (DNLS), because its nonlinear terms depend on the $x$-derivative of $q$. Another form of the DNLS equation has been known since 1978 as the Gerdjikov–Ivanov (GI) equation [52,53], see also [54,55,56,57] and the references therein. It is gauge equivalent to DNLS and, besides the terms with the $x$-derivative, also contains non-linearities of 5th order. In Section 6 below, we describe the multi-component generalizations of the GI equations.

2.3. Generic Lax Representation

We start with the idea of constructing more general polynomial Lax pairs:
$$\begin{aligned}
& L_0\,\psi_0 \equiv i\frac{\partial\psi_0}{\partial x} + \big(U(x,t,\lambda) - \lambda^{N_1}J\big)\,\psi_0(x,t,\lambda) = 0, \qquad M_0\,\psi_0 \equiv i\frac{\partial\psi_0}{\partial t} + \big(V(x,t,\lambda) - \lambda^{N_2}K\big)\,\psi_0(x,t,\lambda) = \psi_0(x,t,\lambda)\,C(\lambda),\\
& U(x,t,\lambda) = \sum_{s=1}^{N_1} U_s\,\lambda^{N_1-s}, \qquad V(x,t,\lambda) = \sum_{s=1}^{N_2} V_s\,\lambda^{N_2-s}.
\end{aligned}$$
The compatibility condition [ L 0 , M 0 ] = 0 of the pair (28) holds for any C ( λ ) and has the form:
$$i\frac{\partial V}{\partial x} - i\frac{\partial U}{\partial t} + [U(x,t,\lambda), V(x,t,\lambda)] = 0.$$
Here, we assume that both $N_1\ge 2$ and $N_2\ge 2$. In addition, we fix the gauge by requiring that
$$U_{N_1}(x,t) = J, \qquad V_{N_2}(x,t) = K,$$
where J and K are constant diagonal matrices.
Equation (29) must hold identically with respect to the spectral parameter $\lambda$. Here comes the first technical difficulty, related to the parametrization of $L$ and $M$. Indeed, let us consider the example where $N_1 = 3$ and $N_2 = 3$. The left-hand side of (29) is a polynomial of order 6 with respect to $\lambda$, whose highest four coefficients are given by
$$\begin{aligned}
\lambda^6:&\quad [K, J] = 0, &\qquad \lambda^5:&\quad [V_2, J] + [K, U_2] = 0,\\
\lambda^4:&\quad [V_1, J] + [V_2, U_2] + [K, U_1] = 0, &\qquad \lambda^3:&\quad [V_2, J] + [V_1, U_2] + [V_2, U_1] + [K, U_2] = 0.
\end{aligned}$$
The coefficient at $\lambda^6$ vanishes because $J$ and $K$ are diagonal. The vanishing of the coefficient at $\lambda^5$ means that $V_2$ is expressed through $U_2$. Indeed, if we put $J = \mathrm{diag}\,(a_1, a_2, \dots, a_n)$ and $K = \mathrm{diag}\,(b_1, b_2, \dots, b_n)$, then
$$(V_2)_{jk} = \frac{b_j - b_k}{a_j - a_k}\,(U_2)_{jk}.$$
The next two relations, coming from $\lambda^4$ and $\lambda^3$, are not so easy to satisfy.
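A quick consistency check of Equation (32): if $V_2$ is defined elementwise by the ratio above, then $[J, V_2] = [K, U_2]$, which is equivalent to the $\lambda^5$ condition in (31) (sign conventions may differ). The snippet below uses arbitrary illustrative data.

```python
# Illustrative check of the elementwise solution of [J, V2] = [K, U2].
import numpy as np

rng = np.random.default_rng(2)
n = 4
a = np.array([3.0, 2.0, 1.0, -6.0])        # distinct, trace-free entries of J (assumed values)
b = np.array([1.5, -0.5, 2.0, -3.0])       # entries of K (assumed values)
J, K = np.diag(a), np.diag(b)

U2 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
np.fill_diagonal(U2, 0.0)                  # only off-diagonal entries enter the commutators

ratio = (b[:, None] - b[None, :]) / np.where(a[:, None] == a[None, :], 1.0,
                                             a[:, None] - a[None, :])
V2 = ratio * U2
print(np.allclose(J @ V2 - V2 @ J, K @ U2 - U2 @ K))   # True
```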
Gel’fand and Dickey [58,59,60,61,62] provide a very effective solution to the problem. They suggest a general construction for the Lax pairs of the form (28) using the fractional powers of the Lax operator. Below, we outline, along with the effective parametrization of the RHP, an equivalently effective method for generic Lax representations.
Another serious difficulty in treating Lax pairs like (28) is in solving the inverse scattering problem. For Lax operators that are linear in λ like (1), this difficulty was overcome by Shabat who constructed the FAS for such operators. Generalizing his results to Lax operators of the form (28) is much more difficult. That is why we decided to start with RHP which ensures that our Lax pairs possess FAS.
In short, we start with an RHP with canonical normalization and specify the $x$ and $t$ dependence of the sewing function (see Equation (58) below). Then, introducing a proper parametrization of the RHP, we obtain an appropriate Lax pair that possesses FAS and a solution of the ISP.
Another important point here is to specify the contour of the RHP. Of course, it must be compatible with the additional symmetries (Mikhailov's reduction group [18]) imposed on $L$ and $M$.
In what follows, our main attention will be focused on Lax operators (28) with N 1 = 2 . For this case, we can generalize Shabat’s method and construct the FAS, see the next Section. However, the results can be generalized also for N 1 > 2 .

3. Jost Solutions and FAS of L

Here, we start with a generic Lax operator, quadratic in $\lambda$, with vanishing boundary conditions and canonical gauge. In our case, this is:
$$L\psi \equiv i\psi_x + \big(U_2(x,t) + \lambda U_1(x,t) - \lambda^2 J\big)\,\psi(x,t,\lambda) = 0, \qquad J = \mathrm{diag}\,(a_1, a_2, \dots, a_n), \quad a_1 > a_2 > \dots > a_n, \quad \mathrm{tr}\,J = 0.$$
Remark 3. 
For the potentials $U_2(x,t)$ and $U_1(x,t)$, we assume that they are $n\times n$ matrices which are smooth functions of $x$ for all values of $t$, tending to 0 fast enough for $x\to\pm\infty$; for simplicity, we could take them to be Schwartz-type functions.

3.1. Jost Solutions and Scattering Matrix

The Jost solutions to (32) are defined as follows:
$$\lim_{x\to\infty}\psi(x,t,\lambda)\,e^{i\lambda^2Jx} = \mathbb{1}, \qquad \lim_{x\to-\infty}\phi(x,t,\lambda)\,e^{i\lambda^2Jx} = \mathbb{1}.$$
Both Jost solutions are fundamental solutions: indeed they are non-degenerate n × n matrix-valued functions. In what follows, we choose the potentials U 2 and U 1 to take values in a given simple Lie algebra g . Then, the fundamental solutions ψ and ϕ belong to the corresponding simple Lie group G .
It is also well known that any two fundamental solutions are linearly related. In other words:
$$\phi(x,t,\lambda) = \psi(x,t,\lambda)\,T(t,\lambda).$$
The matrix T ( t , λ ) belongs to the Lie group G .
The integral equations for the Jost solutions take the form:
$$\begin{aligned}
\tilde\psi(x,t,\lambda) &= \mathbb{1} + i\int_{\infty}^{x} dy\; e^{-i\lambda^2J(x-y)}\,\big(U_2(y,t) + \lambda U_1(y,t)\big)\,\tilde\psi(y,t,\lambda)\,e^{i\lambda^2J(x-y)},\\
\tilde\phi(x,t,\lambda) &= \mathbb{1} + i\int_{-\infty}^{x} dy\; e^{-i\lambda^2J(x-y)}\,\big(U_2(y,t) + \lambda U_1(y,t)\big)\,\tilde\phi(y,t,\lambda)\,e^{i\lambda^2J(x-y)},
\end{aligned}$$
where $\tilde\psi(x,t,\lambda) = \psi(x,t,\lambda)\,e^{i\lambda^2Jx}$ and $\tilde\phi(x,t,\lambda) = \phi(x,t,\lambda)\,e^{i\lambda^2Jx}$, and $\mathbb{1}$ is the unit $n\times n$ matrix. In components we have:
$$\begin{aligned}
\tilde\psi_{jk}(x,t,\lambda) &= \delta_{jk} + i\int_{\infty}^{x} dy\; e^{-i\lambda^2(a_j-a_k)(x-y)}\Big(\big(U_2(y,t) + \lambda U_1(y,t)\big)\,\tilde\psi(y,t,\lambda)\Big)_{jk},\\
\tilde\phi_{jk}(x,t,\lambda) &= \delta_{jk} + i\int_{-\infty}^{x} dy\; e^{-i\lambda^2(a_j-a_k)(x-y)}\Big(\big(U_2(y,t) + \lambda U_1(y,t)\big)\,\tilde\phi(y,t,\lambda)\Big)_{jk}.
\end{aligned}$$
Note that both integral equations are Volterra type equations.
The boundary conditions on the potentials $U_2(x,t)$ and $U_1(x,t)$ in Remark 3 ensure that Equations (35) always have solutions, provided the exponentials $e^{-i\lambda^2(a_j-a_k)(x-y)}$ do not grow for $x\to\infty$ or $x\to-\infty$. Obviously, this holds true for $\mathrm{Im}\,\lambda^2 = 0$, i.e., for $\lambda\in\mathbb{R}\cup i\mathbb{R}$. As we will see below, this is the continuous spectrum of $L$ (32).
However, Equation (35) allows for important exceptions. These are easier to see if we use Equation (36). Let us, for example, consider the equations for the first (resp. the last) columns of the Jost solutions. So, we have to consider Equation (36) for $k = 1$ (resp. for $k = n$). Let us assume also that $\mathrm{Im}\,\lambda^2 < 0$, i.e., that $\lambda$ is in the second or fourth quadrant of the complex $\lambda$-plane. Then, it is easy to check that the exponential factors in all the equations for $\tilde\psi_{j1}$, $j = 1,\dots,n$, decay for $x, y\to\pm\infty$. The same holds true for the equations for $\tilde\phi_{jn}$, $j = 1,\dots,n$. Therefore, we find that the first column of $\psi$ and the last column of $\phi$ allow analytic extensions to the second and fourth quadrants of $\mathbb{C}$. Similarly, we find that the first column of $\phi$ and the last column of $\psi$ allow analytic extensions for $\mathrm{Im}\,\lambda^2 > 0$, i.e., to the first and third quadrants of $\mathbb{C}$. The other columns of $\phi$ and $\psi$ are defined only on the continuous spectrum of $L$. Indeed, for the corresponding set of Equations (36), some of the exponential factors decay but others grow.
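The decay argument above can be made concrete with a few sample values (all numbers below are illustrative assumptions): the factor $e^{-i\lambda^2(a_j-a_k)(x-y)}$ stays bounded exactly when $\mathrm{Im}(\lambda^2)\,(a_j-a_k)(x-y)\le 0$, which is why the continuous spectrum sits on $\mathbb{R}\cup i\mathbb{R}$, where $\mathrm{Im}\,\lambda^2 = 0$.

```python
# Modulus of the exponential factor for a first-column entry of psi-tilde:
# a_j - a_1 < 0 and x - y < 0 (the integral runs from +infinity).
import numpy as np

a_j_minus_a_k, x_minus_y = -2.0, -5.0
for lam in (0.5 + 0.0j,          # real axis: |factor| = 1
            0.5j,                # imaginary axis: |factor| = 1
            0.6 - 0.4j,          # fourth quadrant: Im(lambda^2) < 0, factor decays
            0.6 + 0.4j):         # first quadrant: Im(lambda^2) > 0, factor grows
    factor = np.exp(-1j * lam**2 * a_j_minus_a_k * x_minus_y)
    print(lam, np.round(abs(factor), 3))
```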

3.2. Construction of the FAS

Nevertheless, our aim is to demonstrate that one can construct fundamental analytic solutions (FAS) for $L$. These will be $n\times n$ matrix solutions, one of which, $\chi^+(x,t,\lambda)$, allows analytic extension for $\mathrm{Im}\,\lambda^2 > 0$, while the other one, $\chi^-(x,t,\lambda)$, allows analytic extension for $\mathrm{Im}\,\lambda^2 < 0$. This can be achieved using Shabat's method [13,14], based on a proper modification of the integral Equations (36). So, let $\chi^\pm(x,t,\lambda)$ be fundamental solutions of $L$, i.e., $L\chi^\pm(x,t,\lambda) = 0$, and let us introduce $\tilde\chi^\pm(x,t,\lambda) = \chi^\pm(x,t,\lambda)\,e^{i\lambda^2Jx}$. These solutions will be different from the Jost solutions because, as we will see, their behavior for $x\to\pm\infty$ is different.
Following Shabat's idea, we define $\tilde\chi^+(x,t,\lambda)$ as the solution of the following set of integral equations:
$$\begin{aligned}
\tilde\chi^+_{jk}(x,t,\lambda) &= \delta_{jk} + i\int_{-\infty}^{x} dy\; e^{-i\lambda^2(a_j-a_k)(x-y)}\Big(\big(U_2(y,t) + \lambda U_1(y,t)\big)\,\tilde\chi^+(y,t,\lambda)\Big)_{jk}, && j\ge k,\\
\tilde\chi^+_{jk}(x,t,\lambda) &= i\int_{\infty}^{x} dy\; e^{-i\lambda^2(a_j-a_k)(x-y)}\Big(\big(U_2(y,t) + \lambda U_1(y,t)\big)\,\tilde\chi^+(y,t,\lambda)\Big)_{jk}, && j< k.
\end{aligned}$$
Likewise, $\tilde\chi^-(x,t,\lambda)$ is defined as the solution of the following set of integral equations:
$$\begin{aligned}
\tilde\chi^-_{jk}(x,t,\lambda) &= \delta_{jk} + i\int_{-\infty}^{x} dy\; e^{-i\lambda^2(a_j-a_k)(x-y)}\Big(\big(U_2(y,t) + \lambda U_1(y,t)\big)\,\tilde\chi^-(y,t,\lambda)\Big)_{jk}, && j\le k,\\
\tilde\chi^-_{jk}(x,t,\lambda) &= i\int_{\infty}^{x} dy\; e^{-i\lambda^2(a_j-a_k)(x-y)}\Big(\big(U_2(y,t) + \lambda U_1(y,t)\big)\,\tilde\chi^-(y,t,\lambda)\Big)_{jk}, && j> k.
\end{aligned}$$
Note that the only difference from Equation (36) is in the index inequalities in the right-hand sides; this controls the signs of the factors $a_j - a_k$.
There is an alternative possibility to introduce FAS, with a minor change of the integral Equations (37) and (38):
$$\begin{aligned}
\tilde\chi^{+,\prime}_{jk}(x,t,\lambda) &= i\int_{-\infty}^{x} dy\; e^{-i\lambda^2(a_j-a_k)(x-y)}\Big(\big(U_2(y,t) + \lambda U_1(y,t)\big)\,\tilde\chi^{+,\prime}(y,t,\lambda)\Big)_{jk}, && j> k,\\
\tilde\chi^{+,\prime}_{jk}(x,t,\lambda) &= \delta_{jk} + i\int_{\infty}^{x} dy\; e^{-i\lambda^2(a_j-a_k)(x-y)}\Big(\big(U_2(y,t) + \lambda U_1(y,t)\big)\,\tilde\chi^{+,\prime}(y,t,\lambda)\Big)_{jk}, && j\le k.
\end{aligned}$$
Likewise, $\tilde\chi^{-,\prime}(x,t,\lambda)$ is defined as the solution of the following set of integral equations:
$$\begin{aligned}
\tilde\chi^{-,\prime}_{jk}(x,t,\lambda) &= i\int_{-\infty}^{x} dy\; e^{-i\lambda^2(a_j-a_k)(x-y)}\Big(\big(U_2(y,t) + \lambda U_1(y,t)\big)\,\tilde\chi^{-,\prime}(y,t,\lambda)\Big)_{jk}, && j< k,\\
\tilde\chi^{-,\prime}_{jk}(x,t,\lambda) &= \delta_{jk} + i\int_{\infty}^{x} dy\; e^{-i\lambda^2(a_j-a_k)(x-y)}\Big(\big(U_2(y,t) + \lambda U_1(y,t)\big)\,\tilde\chi^{-,\prime}(y,t,\lambda)\Big)_{jk}, && j\ge k.
\end{aligned}$$
Now, we have to establish the linear relations between the FAS and the Jost solutions.
The first consequence of Equations (37)–(40) concerns the limits of their diagonal matrix elements for $x\to\pm\infty$, namely:
$$\lim_{x\to\infty}\mathrm{diag}\,\big(\tilde\chi^\pm(x,t,\lambda)\big) = D^\pm(\lambda) = \mathrm{diag}\,\big(D_1^\pm(\lambda), D_2^\pm(\lambda), \dots, D_n^\pm(\lambda)\big), \qquad \lim_{x\to-\infty}\mathrm{diag}\,\big(\tilde\chi^{\pm,\prime}(x,t,\lambda)\big) = D^{\pm,\prime}(\lambda) = \mathrm{diag}\,\big(D_1^{\pm,\prime}(\lambda), D_2^{\pm,\prime}(\lambda), \dots, D_n^{\pm,\prime}(\lambda)\big).$$
Obviously, the matrices $D^+(\lambda)$ and $D^{+,\prime}(\lambda)$ are $x$- and $t$-independent and, in addition, they are analytic for $\mathrm{Im}\,\lambda^2 > 0$; analogously, $D^-(\lambda)$ and $D^{-,\prime}(\lambda)$ are analytic for $\mathrm{Im}\,\lambda^2 < 0$.
Next, we find that:
$$\begin{aligned}
&\lim_{x\to-\infty} e^{i\lambda^2Jx}\,\chi^\pm(x,t,\lambda) = S^\pm(t,\lambda), &\qquad &\lim_{x\to\infty} e^{i\lambda^2Jx}\,\chi^\pm(x,t,\lambda) = T^\mp(t,\lambda)\,D^\pm(\lambda),\\
&\lim_{x\to\infty} e^{i\lambda^2Jx}\,\chi^{\pm,\prime}(x,t,\lambda) = T^\mp(t,\lambda), &\qquad &\lim_{x\to-\infty} e^{i\lambda^2Jx}\,\chi^{\pm,\prime}(x,t,\lambda) = S^\pm(t,\lambda)\,\hat D^\pm(\lambda),
\end{aligned}$$
where $S^+(t,\lambda)$ and $T^+(t,\lambda)$ (resp. $S^-(t,\lambda)$ and $T^-(t,\lambda)$) are upper-triangular (resp. lower-triangular) matrices with 1 on the diagonal. Since the Jost solutions and the FAS belong to the Lie group $\mathcal{G}$, all the limits in (42) must also belong to $\mathcal{G}$.
From (42), it follows that the FAS χ j k ± ( x , t , λ ) are related to the Jost solutions as follows:
$$\chi^\pm(x,t,\lambda) = \phi(x,t,\lambda)\,S^\pm(t,\lambda), \qquad \chi^\pm(x,t,\lambda) = \psi(x,t,\lambda)\,T^\mp(t,\lambda)\,D^\pm(\lambda), \qquad \chi^{\pm,\prime}(x,t,\lambda) = \phi(x,t,\lambda)\,S^\pm(t,\lambda)\,\hat D^\pm(\lambda), \qquad \chi^{\pm,\prime}(x,t,\lambda) = \psi(x,t,\lambda)\,T^\mp(t,\lambda),$$
More detailed analysis shows that these triangular and diagonal matrices are in fact the factors in the Gauss decompositions of the scattering matrix T ( t , λ ) :
$$T(t,\lambda) = T^-(t,\lambda)\,D^+(\lambda)\,\hat S^+(t,\lambda) = T^+(t,\lambda)\,D^-(\lambda)\,\hat S^-(t,\lambda),$$
see Appendix B.
Remark 4. 
Let us consider cubic pencils in $\lambda$, assuming that the leading terms $\lambda^3J$ and $\lambda^3K$ of $L$ and $M$, respectively, are such that $J$ and $K$ have distinct real eigenvalues. Their FAS can be constructed quite analogously to the quadratic pencils above. The substantial difference between the cubic and the quadratic cases is in the regions of analyticity. The solutions $\chi^+(x,t,\lambda)$ (resp. $\chi^-(x,t,\lambda)$) for the cubic pencils are analytic for $\mathrm{Im}\,\lambda^3 > 0$ (resp. $\mathrm{Im}\,\lambda^3 < 0$), see sectors $\Omega_0\cup\Omega_2\cup\Omega_4$ (resp. sectors $\Omega_1\cup\Omega_3\cup\Omega_5$) in Figure 3.

3.3. The Time-Dependence of T ( t , λ )

The Lax representation of a given NLEE requires the commutativity of two ordinary differential operators:
[ L , M ] = 0
where L is given by (32). We assume that M has the same form as L:
$$M\psi \equiv i\frac{\partial\psi}{\partial t} + \big(V_2(x,t) + \lambda V_1(x,t) - \lambda^2 K\big)\,\psi(x,t,\lambda) = \psi(x,t,\lambda)\,C(\lambda).$$
Below, we treat M-operators that are higher order polynomials of λ . The first remark is that the commutativity condition (45) must hold identically with respect to λ . Note also that (45) holds true for any C ( λ ) in (46). We use this fact to determine the t-dependence of the scattering matrix T ( t , λ ) .
Indeed, let us consider $M\phi(x,t,\lambda) = \phi(x,t,\lambda)\,C(\lambda)$, where $\phi(x,t,\lambda)$ is the Jost solution in (33), and let us take the limit $x\to-\infty$ in (46). Due to the vanishing boundary conditions we obtain:
$$\left(i\frac{\partial}{\partial t} - \lambda^2 K\right)e^{-i\lambda^2Jx} = e^{-i\lambda^2Jx}\,C(\lambda),$$
i.e., $C(\lambda) = -\lambda^2 K$. Thus, we determined the function $C(\lambda)$ for this choice of the $M$-operator. Let us now take the limit $x\to\infty$ in (46). This gives:
$$\left(i\frac{\partial}{\partial t} - \lambda^2 K\right)e^{-i\lambda^2Jx}\,T(t,\lambda) = e^{-i\lambda^2Jx}\,T(t,\lambda)\,C(\lambda),$$
i.e.,
$$i\frac{\partial T}{\partial t} - \lambda^2\,[K, T(t,\lambda)] = 0.$$
From Equation (49), there follow also the time-dependence of the Gauss factors:
$$i\frac{\partial S^\pm}{\partial t} - \lambda^2\,[K, S^\pm(t,\lambda)] = 0, \qquad i\frac{\partial T^\pm}{\partial t} - \lambda^2\,[K, T^\pm(t,\lambda)] = 0.$$
For the diagonal factors we obtain:
$$i\frac{\partial D^\pm(\lambda)}{\partial t} = 0.$$
In other words, we have two sets of functions of $\lambda$ that are $t$-independent. Note also that from the canonical normalization of the FAS it follows that $\lim_{\lambda\to\infty}D_k^\pm = 1$. Obviously, they generate integrals of motion. Indeed, let us consider their asymptotic expansions:
$$\ln D_k^\pm = \sum_{s=1}^{\infty}\lambda^{-s}\,I_{k,s}, \qquad k = 1, \dots, n;$$
Obviously, $dI_{k,s}/dt = 0$. As we shall see below, each of the integrals $I_{k,s}$ can be expressed as an integral of the potentials $U_2$ and $U_1$. The advantage of the choice (51) is that the integrands of $I_{k,s}$ are local, i.e., they depend only on $U_2$ and $U_1$ and their $x$-derivatives.
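A short check (with arbitrary illustrative data, and assuming the constant-coefficient form of the evolution) that $T(t,\lambda) = e^{-i\lambda^2Kt}\,T(0,\lambda)\,e^{i\lambda^2Kt}$ solves Equation (49), i.e., $i\,\partial_tT - \lambda^2[K,T] = 0$; its diagonal Gauss factors are then automatically $t$-independent.

```python
# Finite-difference verification of the linear evolution equation for T(t, lambda).
import numpy as np

rng = np.random.default_rng(3)
n, lam = 3, 0.8
K = np.diag(rng.standard_normal(n))
T0 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def T(t):
    ph = np.exp(-1j * lam**2 * np.diag(K) * t)     # exp(-i lambda^2 K t) for diagonal K
    return np.diag(ph) @ T0 @ np.diag(1.0 / ph)

t, h = 0.37, 1e-6
dTdt = (T(t + h) - T(t - h)) / (2 * h)             # central finite difference
residual = 1j * dTdt - lam**2 * (K @ T(t) - T(t) @ K)
print(np.max(np.abs(residual)))                    # small: only discretization error remains
```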
The above applies to the time dependence for the $N$-wave equations related to the Lax operators (32). In general, each of these operators generates a hierarchy of NLEE. Indeed, the NLEE is determined by fixing, along with $L$, also the second operator $M$ in the Lax pair. For example, if we want to treat DNLS-type equations, we need to specify two things. First, the structure of $L$ must be compatible with a Hermitian symmetric space [45], and second, the potential of $M$ must be polynomial of order 4 in $\lambda$ with leading term $\lambda^4J$. Then, $C(\lambda) = -\lambda^4J$ and the time dependence of the scattering data will be given by:
$$i\frac{\partial T}{\partial t} - \lambda^4\,[J, T(t,\lambda)] = 0.$$
Then, from Equation (53) there follow also the time-dependence of the Gauss factors:
$$i\frac{\partial S^\pm}{\partial t} - \lambda^4\,[J, S^\pm(t,\lambda)] = 0, \qquad i\frac{\partial T^\pm}{\partial t} - \lambda^4\,[J, T^\pm(t,\lambda)] = 0.$$
More details on this subject will be given in Section 7.

4. RHP and Integrable NLEE

The simplest nontrivial Lax operators (i.e., the ones that are linear in $\lambda$) have been studied in great detail, see [10,12,30] and the numerous references therein. That is why in this paper we concentrate mostly on Lax operators that are quadratic in $\lambda$, like (32). In the previous section, we constructed the FAS of these operators for the case of vanishing boundary conditions and the result was as follows. The continuous spectrum of $L$ fills up the union of the real and purely imaginary axes $\mathbb{R}\cup i\mathbb{R}$, see Figure 2; the analyticity regions for the FAS of (32) are $Q_1\cup Q_3$ for $\chi^+$ and $Q_2\cup Q_4$ for $\chi^-$, where by $Q_k$, $k = 1,2,3,4$ we denote the quadrants of the complex $\lambda$-plane.

4.1. Uniqueness of the Regular Solution of RHP

The FAS derived above are also linearly related to each other. From Equation (43), it easily follows that:
$$\chi^+(x,t,\lambda) = \chi^-(x,t,\lambda)\,G_0(t,\lambda), \quad G_0 = \hat S^-\,S^+(t,\lambda), \qquad \chi^{+,\prime}(x,t,\lambda) = \chi^{-,\prime}(x,t,\lambda)\,G'_0(t,\lambda), \quad G'_0 = \hat S^{-,\prime}\,S^{+,\prime}(t,\lambda), \qquad \lambda\in\mathbb{R}\cup i\mathbb{R}.$$
These relations allow us to relate the Lax pair to an RHP. Indeed, our FAS are not very suitable because for large $x\to\pm\infty$ they oscillate strongly (like $\exp(i\lambda^2Jx)$). To avoid this, we introduce $\xi^\pm(x,t,\lambda) = \chi^\pm(x,t,\lambda)\exp(i\lambda^2Jx)$. Then, the RHP can be formulated as:
$$\xi^+(x,t,\lambda) = \xi^-(x,t,\lambda)\,G(x,t,\lambda), \quad G(x,t,\lambda) = e^{-i\lambda^2Jx}\,G_0(t,\lambda)\,e^{i\lambda^2Jx}, \qquad \xi^{+,\prime}(x,t,\lambda) = \xi^{-,\prime}(x,t,\lambda)\,G'(x,t,\lambda), \quad G'(x,t,\lambda) = e^{-i\lambda^2Jx}\,G'_0(t,\lambda)\,e^{i\lambda^2Jx}, \qquad \lambda\in\mathbb{R}\cup i\mathbb{R},$$
which allows canonical normalization:
$$\lim_{\lambda\to\infty}\xi^+(x,t,\lambda) = \lim_{\lambda\to\infty}\xi^-(x,t,\lambda) = \mathbb{1}.$$
In the previous subsections, we outlined how, starting from the Lax representation, one can derive the RHP. Here, following Zakharov and Shabat, we demonstrate how, starting from the RHP, one can derive the relevant Lax representation. We use a slightly modified RHP with canonical normalization:
$$\xi^+(x,t,\lambda) = \xi^-(x,t,\lambda)\,G(x,t,\lambda), \quad \lambda\in\mathbb{R}\cup i\mathbb{R}, \qquad i\frac{\partial G}{\partial x} - \lambda^2\,[J, G(x,t,\lambda)] = 0, \quad i\frac{\partial G}{\partial t} - \lambda^2\,[K, G(x,t,\lambda)] = 0, \qquad \lim_{\lambda\to\infty}\xi^\pm(x,t,\lambda) = \mathbb{1}.$$
We will say that the solution ξ 0 ± ( x , t , λ ) is regular if ξ 0 + ( x , t , λ ) and ξ 0 ( x , t , λ ) have neither zeroes nor singularities in their regions of analyticity.
Corollary 2. 
The RHP (58) with canonical normalization has a unique regular solution.
Proof. 
Let us assume that ξ 1 ± ( x , t , λ ) is another regular solution to the RHP (58). Consider:
$$g_0^\pm(x,t,\lambda) = \xi_0^\pm(x,t,\lambda)\,\hat\xi_1^\pm(x,t,\lambda);$$
recall that $\hat\xi\equiv\xi^{-1}$. Using (58) we easily find that:
$$g_0^+(x,t,\lambda) = \xi_0^+(x,t,\lambda)\,\hat\xi_1^+(x,t,\lambda) = \xi_0^-(x,t,\lambda)\,G(x,t,\lambda)\,\hat G(x,t,\lambda)\,\hat\xi_1^-(x,t,\lambda) = g_0^-(x,t,\lambda),$$
i.e., $g_0^+(x,t,\lambda)$ is analytic in the whole complex $\lambda$-plane. In addition, it tends to $\mathbb{1}$ for $\lambda\to\infty$. Then, according to the Liouville theorem, $g_0^+(x,t,\lambda) = g_0^-(x,t,\lambda) = \mathbb{1}$. □
Remark 5. 
The RHP (58) obviously allows the trivial regular solution $\xi_0^+ = \xi_0^- = \mathbb{1}$. In this case, the corresponding FAS of $L$ take the form:
$$\chi_0^+(x,t,\lambda) = \chi_0^-(x,t,\lambda) = e^{-i\lambda^2Jx - i\lambda^2Kt}.$$

4.2. Zakharov–Shabat Theorem

Theorem 2 
(Zakharov–Shabat [17]). Let $\xi^\pm(x,t,\lambda)$ satisfy the RHP (58). Then, $\chi^\pm(x,t,\lambda) = \xi^\pm(x,t,\lambda)\,e^{-i\lambda^2Jx}$ are FAS of $L$ (32) and $M$ (46).
Proof. 
Let us introduce the functions
$$g_1^\pm(x,t,\lambda) = i\frac{\partial\xi^\pm}{\partial x}\,\hat\xi^\pm(x,t,\lambda) + \lambda^2\,\xi^\pm(x,t,\lambda)\,J\,\hat\xi^\pm(x,t,\lambda), \qquad g_2^\pm(x,t,\lambda) = i\frac{\partial\xi^\pm}{\partial t}\,\hat\xi^\pm(x,t,\lambda) + \lambda^2\,\xi^\pm(x,t,\lambda)\,K\,\hat\xi^\pm(x,t,\lambda).$$
Then, from Equation (58) we find:
$$g_1^+(x,t,\lambda) = i\frac{\partial(\xi^-G)}{\partial x}\,\hat G\,\hat\xi^-(x,t,\lambda) + \lambda^2\,\xi^-(x,t,\lambda)\,G\,J\,\hat G\,\hat\xi^-(x,t,\lambda) = i\frac{\partial\xi^-}{\partial x}\,\hat\xi^-(x,t,\lambda) + \xi^-(x,t,\lambda)\left(i\frac{\partial G}{\partial x}\,\hat G(x,t,\lambda) + \lambda^2\,G\,J\,\hat G(x,t,\lambda)\right)\hat\xi^-(x,t,\lambda) = i\frac{\partial\xi^-}{\partial x}\,\hat\xi^-(x,t,\lambda) + \lambda^2\,\xi^-(x,t,\lambda)\,J\,\hat\xi^-(x,t,\lambda) = g_1^-(x,t,\lambda).$$
Thus, $g_1^\pm(x,t,\lambda)$ are analytic in the whole complex plane $\mathbb{C}$. The canonical normalization of the RHP means, however, that $g_1^\pm(x,t,\lambda)$ are singular for $\lambda\to\infty$. More specifically, $g_1^\pm(x,t,\lambda) - \lambda^2J$ is a linear function of $\lambda$. Using again the great Liouville theorem, we conclude that there must exist a function $-U_2(x,t) - \lambda U_1(x,t)$, linear in $\lambda$, such that:
$$g_1^\pm(x,t,\lambda) - \lambda^2J = -U_2(x,t) - \lambda U_1(x,t),$$
i.e.,
$$g_1^\pm(x,t,\lambda) + U_2(x,t) + \lambda U_1(x,t) - \lambda^2J = 0.$$
Multiplying both sides of (65) by ξ ± ( x , t , λ ) we find that the solutions to the RHP (58) must satisfy the ODE:
$$i\frac{\partial\xi^\pm}{\partial x} + \big(U_2(x,t) + \lambda U_1(x,t)\big)\,\xi^\pm(x,t,\lambda) - \lambda^2\,[J, \xi^\pm(x,t,\lambda)] = 0.$$
It remains to insert $\chi^\pm(x,t,\lambda) = \xi^\pm(x,t,\lambda)\,e^{-i\lambda^2Jx}$ into Equation (66) to obtain that $\chi^\pm(x,t,\lambda)$ are FAS of Equation (32).
Applying the same considerations on the second function g 2 ± ( x , t , λ ) we find that it is analytic on the whole complex λ -plane and has the form:
$$g_2^\pm(x,t,\lambda) + V_2(x,t) + \lambda V_1(x,t) - \lambda^2K = 0.$$
This means that ξ ± ( x , t , λ ) must also satisfy
$$i\frac{\partial\xi^\pm}{\partial t} + \big(V_2(x,t) + \lambda V_1(x,t)\big)\,\xi^\pm(x,t,\lambda) - \lambda^2\,[K, \xi^\pm(x,t,\lambda)] = 0,$$
and, therefore, $\chi^\pm(x,t,\lambda) = \xi^\pm(x,t,\lambda)\,e^{-i\lambda^2Jx}$ are FAS of the $M$-operator (46).
Remark 6. 
The Zakharov–Shabat theorem was initially proven for Lax operators with linear dependence on the spectral parameter $\lambda$. The above proof is an elementary generalization of the original idea. Obviously, it can easily be extended to any polynomial Lax pair, see, e.g., (28). The details of these generalizations are left to the reader.

5. Mikhailov’s Reduction Groups and the Contours of RHP

5.1. General Theory

The reduction groups introduced in [18] are a powerful tool for deriving new integrable equations admitting a Lax representation. The method has been substantially developed since its discovery, see [63,64,65,66,67,68,69,70]. Mikhailov's reductions were used in the seminal paper of Drinfeld and Sokolov [71] as an important tool to analyze the gradings of simple Lie algebras and their consequences for the integrable equations. They also stimulated the development of infinite-dimensional Lie algebras and Kac–Moody algebras [21,71,72].
A reduction group $G_R$ is a finite group acting on the solution set of (28) which preserves the Lax representation [18], i.e., it ensures that the reduction constraints are automatically compatible with the evolution. $G_R$ must have two realizations: (i) $G_R\subset\mathrm{Aut}\,\mathfrak{g}$ and (ii) $G_R\subset\mathrm{Conf}\,\mathbb{C}$, i.e., as conformal mappings of the complex $\lambda$-plane. To each $g_k\in G_R$, we relate a reduction condition for the Lax pair as follows [18]:
$$C_k\big(L(\Gamma_k(\lambda))\big) = \eta_k\,L(\lambda), \qquad C_k\big(M(\Gamma_k(\lambda))\big) = \eta_k\,M(\lambda),$$
where $C_k\in\mathrm{Aut}\,\mathfrak{g}$ and $\Gamma_k(\lambda)\in\mathrm{Conf}\,\mathbb{C}$ are the images of $g_k$, and $\eta_k = 1$ or $-1$ depending on the choice of $C_k$. Since $G_R$ is a finite group, for each $g_k$ there exists an integer $N_k$ such that $g_k^{N_k} = \mathbb{1}$.
The finite subgroups of Conf ( C ) were classified by Klein, see [73]. They consist of two infinite series: (i) Z h —cyclic group of order h; (ii) D h —dihedral group of order 2 h ; and the groups related to the Platonic solids: tetrahedron, cube, octahedron, dodecahedron and icosahedron. In what follows, we restrict our attention mostly to Z h and D h ; although examples of systems with Platonic solids as group of reductions are also known [67,68,74,75].
It is important to note that the form of the equations depends not only on the chosen reduction group but also on its realization.
It is well known that every finite group can be embedded as a subgroup of some finite symmetric group S m ; this is the group of permutations of the numbers { 1 , 2 , , m } . The group Z h consists of all cyclic permutations of the numbers 1 , 2 , , h .
The alternating group $A_h$ is the commutator subgroup of the symmetric group $S_h$ with index 2 and has, therefore, $h!/2$ elements. It is the kernel of the signature group homomorphism $\mathrm{sgn}: S_h\to\{1,-1\}$. It is isomorphic to $D_h$.
Generically, each of these groups is rigorously defined by their genetic codes (presentations). In other words, one introduces one or more generating elements of the group and defines the relations they must satisfy. For the two types of groups, we have:
$$S_2:\ s_1^2 = \mathbb{1}, \qquad S_h:\ s_1^h = \mathbb{1}, \qquad D_h:\ s_1^2 = \mathbb{1},\ s_2^2 = \mathbb{1},\ (s_1s_2)^h = \mathbb{1}.$$
These are the formal definitions of these groups. Below, we outline their realizations as subgroups of the group of automorphisms Aut g of the algebra g , as well as subgroups of the conformal group. These realizations are specific both for the explicit form of the corresponding NLEE as well as for the spectral properties of their Lax operators.
Other important facts are the orbits and the fundamental domains of the groups. Each element $\Gamma_k$ of $G_R$ is of finite order, i.e., there exists an integer $n_k$ such that $\Gamma_k^{n_k} = \mathbb{1}$. Acting on a point, say $\lambda_0$, it produces an orbit in the complex $\lambda$-plane consisting of the points $\{\Gamma_k^s\lambda_0,\ s = 1,\dots,n_k\}$. Then, by the fundamental domain of $G_R$ we mean the manifold $\mathcal{A}_{G_R}$ which contains only one point of each orbit. The orbits depend not only on the element $\Gamma_k$, but also on the specific realization of $G_R$. Below, we specify the fundamental domains for each of the realizations of the reduction groups.
We start with Z 2 -reductions, (or involutions) for two of the best known classes of NLEE:

5.2. Involutive Reductions

We will start with typical $\mathbb{Z}_2$ reductions (involutions) of the Lax representations (28) [18]:
$$\begin{aligned}
&(1)\quad C_1\big(U^\dagger(\kappa_1(\lambda))\big) = U(\lambda), &\qquad &C_1\big(V^\dagger(\kappa_1(\lambda))\big) = V(\lambda),\\
&(2)\quad C_2\big(U^T(\kappa_2(\lambda))\big) = -U(\lambda), &\qquad &C_2\big(V^T(\kappa_2(\lambda))\big) = -V(\lambda),\\
&(3)\quad C_3\big(U^*(\kappa_1(\lambda))\big) = -U(\lambda), &\qquad &C_3\big(V^*(\kappa_1(\lambda))\big) = -V(\lambda),\\
&(4)\quad C_4\big(U(\kappa_2(\lambda))\big) = U(\lambda), &\qquad &C_4\big(V(\kappa_2(\lambda))\big) = V(\lambda),
\end{aligned}$$
By $C_j$ above we denoted involutive automorphisms, $C_j^2 = \mathbb{1}$, of the simple Lie algebra $\mathfrak{g}$ in which $U(x,t,\lambda)$ and $V(x,t,\lambda)$ take values.
Working with Lax operators which are quadratic pencils, it is natural to impose two basic symmetries: (i) the Hermitian symmetry (1) in (71) with $C_1 = \mathbb{1}$ and $\kappa_1(\lambda) = \lambda^*$; and (ii) the symmetry (2) in (71) mapping $\lambda\to-\lambda$. A third involution may appear if we request in addition that $U$ and $V$ belong to a simple Lie algebra of the series $B_r$, $C_r$ or $D_r$. For orthogonal and symplectic algebras we use slightly modified (but equivalent) definitions, which are convenient because the Cartan subalgebra $\mathfrak{h}$ can be represented by diagonal matrices:
$$U(x,t,\lambda) = -S_a\,U^T(x,t,\lambda)\,S_a^{-1}, \qquad V(x,t,\lambda) = -S_a\,V^T(x,t,\lambda)\,S_a^{-1},$$
where the matrices S a are introduced in the Appendix A.
Note that Equation (71) in fact takes into account all the typical external automorphisms of the simple Lie algebras. Therefore, we need to consider only those realizations of C j , which are elements of the Weyl group W g of g , or belong to the Cartan subgroup of g . This means that C j may have the form:
$$C_j \simeq S_j \equiv S_{\beta_1}S_{\beta_2}\cdots S_{\beta_k}, \qquad\text{or}\qquad C_j \simeq h_j = \exp\big(\pi i\,H_{b_j}\big),$$
where the roots $\beta_1, \beta_2, \dots, \beta_k$ are all mutually orthogonal. This ensures that $S_j$ is an involution: $S_j^2 = \mathbb{1}$. Similarly, the vector $b_j$ must be such that $b_j = \sum_{s=1}^{r}k_{j,s}\,\omega_s$, where $k_{j,s}$ are non-negative integers and $\omega_s$ are the fundamental weights of $\mathfrak{g}$. Then, the similarity transformations with $h_j$ also have the property $h_j^2 = \mathbb{1}$.
Another important factor for the correct realization of the reduction groups is $N_1$, the leading power of $\lambda$ in the Lax operator (28). In the majority of cases, people consider generalized Zakharov–Shabat systems, i.e., $N_1 = 1$. Let us combine this choice with real-valued eigenvalues of $J$ and vanishing boundary conditions on $Q(x,t)$. Then, the analysis of the corresponding integral equations for the Jost solutions shows that the FAS $\chi_0^+(x,t,\lambda)$ (resp. $\chi_0^-(x,t,\lambda)$) of $L_0$ are analytic for $\mathrm{Im}\,\lambda > 0$ (resp. $\mathrm{Im}\,\lambda < 0$). This means that the contour of the corresponding RHP coincides with the real axis $\mathbb{R}$. Then, the spectrum of $L_0$ consists of a continuous part filling up $\mathbb{R}$ and pairs of complex-valued discrete eigenvalues $\lambda_j^\pm\in\mathbb{C}_\pm$, see Figure 4. As a fundamental domain of this group, we can choose the upper half-plane $\mathbb{C}_+$ of the complex $\lambda$-plane.
In what follows, we concentrate on Lax operators with $N_1 > 1$; most often, we have $N_1 = 2$. In Section 3, we proved that $\chi^+(x,t,\lambda)$ (resp. $\chi^-(x,t,\lambda)$) of $L$ (32) are analytic for $\mathrm{Im}\,\lambda^2 > 0$ (resp. $\mathrm{Im}\,\lambda^2 < 0$). This means that the contour of the corresponding RHP coincides with the union of the real and purely imaginary axes $\mathbb{R}\cup i\mathbb{R}$. Then, the spectrum of $L$ consists of a continuous part filling up $\mathbb{R}\cup i\mathbb{R}$. Typically, such Lax operators have the additional symmetry $\lambda\to-\lambda$, so the discrete eigenvalues come in quadruplets $\pm\lambda_j^+\in Q_1\cup Q_3$ and $\pm\lambda_j^-\in Q_2\cup Q_4$, see Figure 4. Effectively, in these cases, we are dealing with a $\mathbb{Z}_2\times\mathbb{Z}_2$ reduction. Then, as a fundamental domain of this realization of the group we can choose the first quadrant $Q_1$ of the complex $\lambda$-plane.

5.3. Z h Reduction Groups

We will consider here the cyclic groups $\mathbb{Z}_h$ of order $h > 2$. These groups have only one generating element:
$$\mathbb{Z}_h:\quad s^h = \mathbb{1}.$$
The cyclic group has $h$ elements: $\mathbb{1}$, $s^k$, $k = 1,\dots,h-1$; typically its realization on the complex $\lambda$-plane is given by $s(\lambda) = \lambda\omega$, where $\omega = \exp(2\pi i/h)$.
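A tiny illustration (with arbitrary sample values) of the orbits and fundamental domain mentioned above: acting repeatedly with $s(\lambda) = \lambda\omega$ on a point $\lambda_0$ produces the $h$ points of its orbit, and a sector of angle $2\pi/h$ contains exactly one of them.

```python
# Orbit of a point under the Z_h realization s(lambda) = lambda * omega.
import numpy as np

h = 6
omega = np.exp(2j * np.pi / h)
lam0 = 1.3 * np.exp(0.4j)                         # an arbitrary starting point
orbit = [lam0 * omega**k for k in range(h)]
in_sector = [0 <= np.angle(z) % (2 * np.pi) < 2 * np.pi / h for z in orbit]
print(np.round(orbit, 3))
print(in_sector.count(True))                      # exactly one orbit point lies in the sector
```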

5.4. D h Reduction Groups

We will consider here the dihedral groups D h of order h > 2 . These groups have two generating elements:
$$D_h:\quad r^2 = s^h = \mathbb{1}, \qquad r\,s\,r^{-1} = s^{-1}.$$
The dihedral group D h has 2 h elements: { s k , r s k , k = 1 , , h } and allows several inequivalent realization on the complex λ -plane. Some of them are:
( i ) s ( λ ) = λ ω , r ( λ ) = ϵ λ , ( i i ) s ( λ ) = λ ω , r ( λ ) = ϵ λ , ( iii ) s ( λ ) = λ ω , r ( λ ) = ϵ λ , ( iv ) s ( λ ) = λ ω , r ( λ ) = ϵ λ ,
where again ω = exp ( 2 π i / h ) and ϵ = ± 1 . An important realization in the case of a D 2 reduction group is given by (see Figure 3, right panel):
( v ) s ( λ ) = λ , r ( λ ) = ϵ λ .

6. Parametrizing the RHP with Canonical Normalization

An important tool in our investigation is the theory of simple Lie algebras and the methods of their gradings. The reason for this is that we need to have a unique solution to the inverse spectral problem of the Lax operator. The mapping between the potential and the scattering matrix for generic operators linear in $\lambda$ has been studied using the Wronskian relations [9,22,30]. They require the existence of a non-degenerate metric. A metric characteristic of Lie algebras is the famous Killing form, which is non-degenerate for semi-simple Lie algebras, i.e., direct sums of simple Lie algebras. Therefore, it is sufficient to restrict our attention to simple Lie algebras.
We will also limit ourselves to two families of NLEE. The first family is known as the $N$-wave equations, discovered by Zakharov and Manakov [6], see Section 2.1 above. Typically, they contain first order derivatives in both $x$ and $t$ and quadratic non-linearities. In this section, we describe a new class of $N$-wave equations whose Lax operators are both quadratic in $\lambda$ [37,38]. We will see that they have higher-order non-linearities.
The second family of NLEE we will focus on are the multi-component NLS (MNLS) equations. It is well known that they are related to the symmetric spaces [45]. Their Lax operators will also be quadratic in $\lambda$, so they will be multicomponent generalizations of the derivative NLS equation [35] and the GI equations [52,53,55,56].

6.1. Generic Parametrization of the RHP with Canonical Normalization

We can introduce a parametrization for ξ ± ( x , t , λ ) using its asymptotic expansion:
$$\xi^\pm(x,t,\lambda) = \exp\big(Q(x,t,\lambda)\big), \qquad Q(x,t,\lambda) = \sum_{s=1}^{\infty}\lambda^{-s}\,Q_s(x,t).$$
Obviously, if we want that ξ ± ( x , t , λ ) be elements of a simple Lie group G , then the coefficients Q s ( x , t ) must be elements of the corresponding simple Lie algebra g . In addition, we request that Q s provides local coordinates of the corresponding homogeneous space. Furthermore, the solution ξ ± ( x , t , λ ) is canonically normalized, because
$$\lim_{\lambda\to\infty}\xi^\pm(x,t,\lambda) = \mathbb{1}.$$
The most general parametrization of ξ ± ( x , t , λ ) requires that Q s ( x , t ) are generic elements of the algebra g . However, such an approach has a disadvantage: the corresponding NLEE involve too many independent functions. There are two ways to avoid it: first, we can fix up the gauge of the Lax operators; second, we can and will impose reductions of Mikhailov type. Typically, we will fix the gauge by requesting that the leading terms in the Lax operators are chosen as diagonal constant matrices, i.e., constant elements of the Cartan subalgebra h . Another important issue is to explain how, using Q s ( x , t ) from Equation (76) we can parameterize any generic Lax pair related to that RHP.
Let us choose L and M, following the ideas of Gel’fand and Dickey
L ψ ψ x U ( x , t , λ ) ψ ( x , t , λ ) = 0 , U ( x , t , λ ) = λ N 1 ξ J ξ 1 ( x , t , λ ) + , M ψ ψ t V ( x , t , λ ) ψ ( x , t , λ ) = 0 , V ( x , t , λ ) = λ N 2 ξ K ξ 1 ( x , t , λ ) + ,
where the subscript + means that we retain only the non-negative powers of λ in the right-hand sides of (78). Let us explain how one can calculate U ( x , t , λ ) and V ( x , t , λ ) . First, we note that since J , K h and ξ ± ( x , t , λ ) G , then both U ( x , t , λ ) , V ( x , t , λ ) g . From the general theory of Lie algebras, we know that
ξ ± J ξ ^ ± ( x , t , λ ) = J + k = 1 λ k k ! ad Q k J = J k = 1 λ k V k .
where ad Q Y [ Q , Y ] , ad Q 2 [ Q , [ Q , Y ] ] etc. The first few coefficients in these expansions take the form:
V 1 = ad J Q 1 , V 2 = ad J Q 2 1 2 ad Q 1 2 J , V 3 = ad J Q 3 1 2 ad Q 1 ad Q 2 + ad Q 2 ad Q 1 J 1 6 ad Q 1 3 J , V 4 = ad J Q 4 1 2 ad Q 1 ad Q 3 + ad Q 2 2 + ad Q 3 ad Q 1 J 1 6 ad Q 1 2 ad Q 2 + ad Q 1 ad Q 2 ad Q 1 + ad Q 2 ad Q 1 2 J 1 24 ad Q 1 4 J .
Thus, we see that U ( x , t , λ ) and V ( x , t , λ ) are parameterized by the first few coefficients Q s .
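This expansion is straightforward to verify numerically. The following short Python/NumPy sketch (our own illustration with randomly chosen matrices; the minus signs, which are not always visible in the displayed formulas, are restored as we read them) checks that exp ( Q ) J exp ( − Q ) agrees with J − V 1 / λ − V 2 / λ 2 − V 3 / λ 3 up to terms of order λ − 4 :
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4
ad = lambda A: (lambda B: A @ B - B @ A)          # ad_A B = [A, B]
J = np.diag(rng.standard_normal(n))               # constant diagonal (Cartan) element
Q1, Q2, Q3 = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)) for _ in range(3))

adJ, ad1, ad2 = ad(J), ad(Q1), ad(Q2)
V1 = adJ(Q1)
V2 = adJ(Q2) - 0.5 * ad1(ad1(J))
V3 = adJ(Q3) - 0.5 * (ad1(ad2(J)) + ad2(ad1(J))) - ad1(ad1(ad1(J))) / 6

lam = 40.0
Q = Q1 / lam + Q2 / lam**2 + Q3 / lam**3
lhs = expm(Q) @ J @ expm(-Q)
rhs = J - V1 / lam - V2 / lam**2 - V3 / lam**3
print(np.linalg.norm(lhs - rhs))                  # O(lam**-4); shrinks ~16 times if lam is doubled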
Another formula from the general theory of Lie algebras which we will need is:
i ξ ± x ξ ^ ± ( x , t , λ ) = i Q x + i k = 1 1 ( k + 1 ) ! ad Q k Q x = i k = 1 X k λ k ,
In general:
i X k = i Q k x + i s = 1 k 1 ( s + 1 ) ! j 1 + j 2 + + j s = k ad Q j 1 ad Q j 2 ad Q j s 1 Q j s x .
The first few coefficients are:
X 1 = Q 1 x , X 2 = Q 2 x + 1 2 ad Q 1 Q 1 x , X 3 = Q 3 x + 1 2 ad Q 1 Q 2 x + ad Q 2 Q 1 x + 1 6 ad Q 1 2 Q 1 x , X 4 = Q 4 x + 1 2 ad Q 1 Q 3 x + ad Q 2 Q 2 x + ad Q 3 Q 1 x + 1 6 ad Q 1 2 Q 2 x + ad Q 1 ad Q 2 Q 1 x + ad Q 2 ad Q 1 Q 1 x + 1 24 ad Q 1 3 Q 1 x ,
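The first two of these coefficients can be cross-checked against a finite-difference evaluation of i ( ∂ x e Q ) e − Q . The sketch below does this for illustrative x -dependent matrices of our own choosing; it is not taken from the paper:
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n, h, lam, x0 = 3, 1e-5, 30.0, 0.4
A1, A2 = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)) for _ in range(2))
Q1  = lambda x: np.sin(1.3 * x) * A1              # illustrative x-dependent coefficients Q_1(x), Q_2(x)
Q2  = lambda x: np.cos(0.7 * x) * A2
Q1x = lambda x: 1.3 * np.cos(1.3 * x) * A1        # their exact x-derivatives
Q2x = lambda x: -0.7 * np.sin(0.7 * x) * A2
Q   = lambda x: Q1(x) / lam + Q2(x) / lam**2

com = lambda A, B: A @ B - B @ A
lhs = 1j * (expm(Q(x0 + h)) - expm(Q(x0 - h))) / (2 * h) @ expm(-Q(x0))   # i (d/dx e^Q) e^{-Q}
X1 = Q1x(x0)
X2 = Q2x(x0) + 0.5 * com(Q1(x0), Q1x(x0))
print(np.linalg.norm(lhs - 1j * (X1 / lam + X2 / lam**2)))                # O(lam**-3), small for lam = 30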
The effectiveness of the general form of the Lax pair (78) follows from the relation, which is easy to check:
λ N 1 ξ J ξ 1 ( x , t , λ ) , λ N 2 ξ K ξ 1 ( x , t , λ ) = λ N 1 + N 2 ξ [ J , K ] ξ 1 ( x , t , λ ) = 0 ,
because the matrices K and J are diagonal. Therefore, the commutator [ U ( x , t , λ ) , V ( x , t , λ ) ] must contain only negative powers of λ .
In addition, we may impose on Q Mikhailov type reductions. Each of them uses a finite order automorphism, which introduces a grading in the algebra g . Below, we use several types of Z 2 -reductions based on automorphisms of order 2 of the Lie algebra:
( 1 ) Q ( x , t , κ 1 ( λ ) ) = Q ( x , t , λ ) , κ 1 ( λ ) = λ , ( 2 ) Q T ( x , t , λ ) = S a Q ( x , t , λ ) S a 1 , κ 2 ( λ ) = ± λ , ( 3 ) Q ( x , t , κ 3 ( λ ) ) = Q ( x , t , λ ) , κ 3 ( λ ) = λ , ( 4 ) C 0 Q ( x , t , λ ) C 0 1 = Q ( x , t , λ ) , C 0 2 = 1 1 , C 0 h ,
compare with (71). The last reduction C 0 is typical for Lax operators which are quadratic in λ .
Another important Z 2 reduction is provided by the Cartan involutions J , which determines the hermitian symmetric spaces [21] and acts on Q ( x , t , λ ) as follows:
Q ( x , t , λ ) = J Q ( x , t , λ ) J .

6.2. The Family of N-Wave Equations with Cubic Non-linearities

In this section, we consider RHP whose structure is compatible with the structure of homogeneous spaces [21]. We also assume that the x and t dependence of the sewing function G ( x , t , λ ) is given by Equation (58).
Consider the Lax pair:
L Nw ψ i ψ x + U ( x , t , λ ) ψ ( x , t , λ ) = 0 , M Nw ψ i ψ t + V ( x , t , λ ) ψ ( x , t , λ ) = 0 ,
First, we will derive the N-wave equations in general form; then we will illustrate them by a couple of examples. Using the generic parametrization (76) for quadratic pencils we obtain:
U ( x , t , λ ) = U 2 + λ U 1 λ 2 J , U 2 = [ J , Q 2 ] + 1 2 [ Q 1 , U 1 ] , U 1 = [ J , Q 1 ] , V ( x , t , λ ) = V 2 + λ V 1 λ 2 K , V 2 = [ K , Q 2 ] + 1 2 [ Q 1 , V 1 ] , V 1 = [ K , Q 1 ] ,
where Q 1 , Q 2 again belong to a simple Lie algebra g , J and K are constant elements of h .
Below, we will impose two types of Mikhailov reductions:
C 0 U ( x , t , λ ) C 0 1 = U ( x , t , λ ) , C 0 V ( x , t , λ ) C 0 1 = V ( x , t , λ ) , U ( x , t , λ ) = U ( x , t , λ ) , V ( x , t , λ ) = V ( x , t , λ ) ,
where C 0 Aut g , C 0 2 = 1 1 . In particular, for the N-wave equations (see Equations (5)–(7)), we obtain Q 1 = Q 1 , Q 2 = Q 2 and J and K must be real. For the FAS and the scattering matrix, these reductions give:
χ ± ( x , t , λ ) = ( χ ) ( x , t , λ ) , T Nw ( λ , t ) = T Nw ( λ , t ) , C 0 χ ± ( x , t , λ ) C 0 1 = χ ± ( x , t , λ ) , C 0 T Nw ( λ , t ) C 0 1 = T Nw ( λ , t ) ,
see [37,38].
The compatibility condition in this case is:
i x V 2 + λ V 1 i t U 2 + λ U 1 + [ U 2 + λ U 1 λ 2 J , V 2 + λ V 1 λ 2 K ] = 0 .
It must hold identically with respect to λ . It is easy to check that the coefficients at λ 4 and λ 3 vanish. Somewhat more effort is needed to check that the coefficient at λ 2 :
[ J , V 2 ] + [ U 1 , V 1 ] [ U 2 , K ] = 0
also vanishes identically due to the proper parametrization of U and V. Collecting the coefficients of the compatibility condition at each power of λ , the first three relations:
λ 4 : [ J , K ] = 0 , λ 3 : [ K , U 1 ] [ J , V 1 ] = 0 , λ 2 : [ K , U 2 ] [ J , V 2 ] + [ U 1 , V 1 ] = 0 ,
are satisfied identically due to the correct parametrization of ξ ± ( x , t , λ ) . In more detail:
U 1 = [ J , Q 1 ] , U 2 = 1 2 [ U 1 , Q 1 ] + [ J , Q 2 ] , V 1 = [ K , Q 1 ] , V 2 = 1 2 [ V 1 , Q 1 ] + [ K , Q 2 ] .
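As a sanity check, the following short numerical sketch (with random matrices, purely illustrative) confirms that with this parametrization the coefficients at λ 3 and λ 2 of the compatibility condition indeed vanish:
import numpy as np

rng = np.random.default_rng(2)
n = 4
com = lambda A, B: A @ B - B @ A
J = np.diag(rng.standard_normal(n))               # constant diagonal elements of the Cartan subalgebra
K = np.diag(rng.standard_normal(n))
Q1 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Q2 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

U1, V1 = com(J, Q1), com(K, Q1)                   # U_1 = [J, Q_1], V_1 = [K, Q_1]
U2 = com(J, Q2) + 0.5 * com(Q1, U1)               # U_2 = [J, Q_2] + (1/2)[Q_1, U_1], as in the parametrization above
V2 = com(K, Q2) + 0.5 * com(Q1, V1)

print(np.linalg.norm(com(K, U1) - com(J, V1)))                    # lambda^3 coefficient: ~ 0
print(np.linalg.norm(com(K, U2) - com(J, V2) + com(U1, V1)))      # lambda^2 coefficient: ~ 0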
The last two coefficients at λ 1 and λ 0 vanish provided Q 1 ( x , t ) and Q 2 ( x , t ) satisfy the following N-wave type equations:
λ 1 : i U 1 t + i V 1 x + [ U 2 , V 1 ] + [ U 1 , V 2 ] = 0 , λ 0 : i U 2 t + i V 2 x + [ U 2 , V 2 ] = 0 .
Note that while U 1 and V 1 are linear in Q 1 , U 2 and V 2 are quadratic in Q 1 . Therefore, the non-linearities in these N-wave equations are cubic in Q 1 .
We assume that the root system of g is split into Δ = Δ 0 Δ 1 , such that
C 0 E α C 0 1 = E α , α Δ 0 , C 0 E α C 0 1 = E α , α Δ 1 .
We also denote positive and negative roots by a plus or minus superscript. Then, considering (88) we must have:
C 0 Q 1 ( x , t ) C 0 1 = Q 1 ( x , t ) , C 0 Q 2 ( x , t ) C 0 1 = Q 2 ( x , t ) , Q 2 s 1 ( x , t ) = α Δ 1 + ( q 2 s 1 , α E α q 2 s 1 , α E α ) , Q 2 s ( x , t ) = α Δ 0 + ( q 2 s , α E α q 2 s , α E α ) ,
It is easy to check that this choice of Q s is compatible with the following two involutions of the RHP
( a ) Q ( x , t , λ ) = Q ( x , t , λ ) = s = 1 Q s ( x , t ) λ s , Q s ( x , t ) = Q s ( x , t ) , ( b ) Q ( x , t , λ ) = C 0 Q ( x , t , λ ) C 0 1 , C 0 Q 2 s 1 C 0 1 = Q 2 s 1 , C 0 Q 2 s C 0 1 = Q 2 s ,
which means that
( a ) ( ξ ± ) ( x , t , λ ) = ( ξ ) 1 ( x , t , λ ) , ( b ) C 0 ξ ± ( x , t , λ ) C 0 1 = ξ ± ( x , t , λ ) .
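Reading the reduction (b) as C 0 Q 2 s − 1 C 0 − 1 = − Q 2 s − 1 , C 0 Q 2 s C 0 − 1 = Q 2 s , the compatibility with C 0 ξ ± ( x , t , − λ ) C 0 − 1 = ξ ± ( x , t , λ ) can be illustrated numerically; the matrices below are arbitrary test data, not taken from the examples of the paper:
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
n = 4
C0 = np.diag([1.0, -1.0, -1.0, 1.0])              # C0^2 = 1, so C0^{-1} = C0
proj = lambda X, s: 0.5 * (X + s * C0 @ X @ C0)   # projector onto the (+/-1) eigenspace of Ad_{C0}
rand = lambda: rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Q_1, Q_3 anticommute with C0 (odd index), Q_2, Q_4 commute with C0 (even index)
Q = {s: proj(rand(), +1 if s % 2 == 0 else -1) for s in range(1, 5)}
xi = lambda lam: expm(sum(Q[s] / lam**s for s in Q))

lam = 0.9 + 0.4j
print(np.linalg.norm(C0 @ xi(-lam) @ C0 - xi(lam)))    # ~ 1e-15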
Explicit examples of N-wave-type equations will be given below; here, we just note that they contain first order derivatives with respect to x and t and cubic (not quadratic) non-linearities with respect to Q j k .
Example 1 
(6-wave type equations: s l ( 4 ) c a s e ). The involution is given by
C 0 = exp ( π i H e 2 ) = d i a g ( 1 , 1 , 1 , 1 ) .
The potentials are:
Q 1 = 0 q 1 q 2 0 q 1 0 0 q 3 q 2 0 0 q 4 0 q 3 q 4 0 , Q 2 = 0 0 0 q 6 0 0 q 5 0 0 q 5 0 0 q 6 0 0 0 , J = d i a g ( a 1 , a 2 , a 2 , a 1 ) , K = d i a g ( b 1 , b 2 , b 2 , b 1 ) .
The corresponding NLEE express the vanishing of the coefficients at λ 1 and λ 0 . They read:
i ( a 1 a 2 ) q 1 t + i ( b 1 b 2 ) q 1 x + κ ( q 1 ( | q 3 | 2 | q 2 | 2 ) + 2 ( q 2 q 5 + q 6 q 3 ) ) = 0 , i ( a 2 + a 1 ) q 2 t + i ( b 2 + b 1 ) q 2 x + κ ( q 2 ( | q 1 | 2 | q 4 | 2 ) + 2 ( q 1 q 5 q 4 q 6 ) ) = 0 , i ( a 2 + a 1 ) q 3 t + i ( b 2 + b 1 ) q 3 x κ ( q 3 ( | q 1 | 2 | q 4 | 2 ) 2 ( q 1 q 6 q 4 q 5 ) ) = 0 , i ( a 1 a 2 ) q 4 t + i ( b 1 b 2 ) q 4 x κ ( q 4 ( | q 3 | 2 | q 2 | 2 ) + 2 ( q 2 q 6 + q 3 q 5 ) ) = 0
where κ = a 1 b 2 a 2 b 1 and
2 i a 2 q 5 t + 2 i b 2 q 5 x i a 1 ( q 3 q 4 q 1 q 2 ) t + i b 1 ( q 3 q 4 q 1 q 2 ) x + κ ( q 1 q 2 q 3 q 4 ) ( | q 1 | 2 + | q 2 | 2 + | q 3 | 2 + | q 4 | 2 ) 2 κ q 5 ( | q 1 | 2 | q 2 | 2 | q 3 | 2 + | q 4 | 2 ) = 0 , 2 i a 1 q 6 t + 2 i b 1 q 6 x i a 2 ( q 1 q 3 q 2 q 4 ) t + i b 2 ( q 1 q 3 q 2 q 4 ) x + κ ( q 1 q 3 q 2 q 4 ) ( | q 1 | 2 + | q 2 | 2 + | q 3 | 2 + | q 4 | 2 ) + 2 κ q 6 ( | q 1 | 2 | q 2 | 2 | q 3 | 2 + | q 4 | 2 ) = 0 .
Example 2 
(4-wave type equations: s o ( 5 ) c a s e ). The involution is given by
C 0 = exp ( π i H e 1 ) = d i a g ( 1 , 1 , 1 , 1 , 1 ) .
We first choose the potentials Q 1 , Q 2 , J, K and the involution C 0 as follows:
Q 1 ( x , t ) = 0 q 1 q 2 q 3 0 q 1 0 0 0 q 3 q 2 0 0 0 q 2 q 3 0 0 0 q 1 0 q 3 q 2 q 1 0 , Q 2 ( x , t ) = 0 0 0 0 0 0 0 q 4 0 0 0 q 4 0 q 4 0 0 0 q 4 0 0 0 0 0 0 0 , J = d i a g ( a 1 , a 2 , 0 , a 2 , a 1 ) , K = d i a g ( b 1 , b 2 , 0 , b 2 , b 1 ) .
It is easy to check that C 0 Q 1 C 0 1 = Q 1 and C 0 Q 2 C 0 1 = Q 2 , and consequently the FAS of L Nw (86) satisfies C 0 χ ± ( x , t , λ ) C 0 1 = χ ± ( x , t , λ ) . The corresponding Equations (94) become:
2 i ( a 1 a 2 ) q 1 t + 2 i ( b 1 b 2 ) q 1 x + κ ( q 1 | q 2 | 2 + q 2 2 q 3 2 q 2 q 4 ) = 0 , 2 i a 1 q 2 t + 2 i b 1 q 2 x κ ( q 2 ( | q 2 | 2 | q 3 | 2 ) + 2 q 3 q 4 2 q 1 q 4 ) = 0 , 2 i ( a 1 + a 2 ) q 3 t + 2 i ( b 1 + b 2 ) q 3 x κ ( q 3 | q 2 | 2 + q 2 2 q 1 2 q 2 q 4 ) = 0 ,
and
2 i a 2 q 4 t + 2 i b 2 q 4 x i ( 2 a 1 a 2 ) ( q 2 q 1 ) t + i ( 2 b 1 b 2 ) ( q 2 q 1 ) x i ( 2 a 1 + a 2 ) ( q 3 q 2 ) t + i ( 2 b 1 + b 2 ) ( q 3 q 2 ) x + κ | q 1 | 2 ( 3 q 3 q 2 + q 2 q 1 ) + | q 3 | 2 ( 3 q 2 q 1 + q 3 q 2 ) + 2 q 4 ( | q 1 | 2 | q 3 | 2 ) = 0 ,

6.3. The Main Idea of the Dressing Method

In this section, we generalize the Zakharov–Shabat dressing method [16,17] to quadratic pencils. We start with the simplest possible form of the dressing factor which generates the one-soliton solutions. We do this for two reasons. The first one is that, due to the additional involutions inherent in the quadratic pencils, the dressing factors for the one-soliton solutions require solving block-matrix linear equations. The other reason is that we will be able to calculate the asymptotics of the one-soliton dressing factors, which will allow us to study the soliton interactions for the corresponding NLEE. The N-soliton solutions can be derived either by repeating the one-soliton dressing N times or by considering dressing factors whose pole singularities are determined by λ k ± , k = 1 , 2 , , N . In this case, one has to solve much more complicated block-matrix linear equations.
In order to avoid unnecessary repetition of formulae, we introduce notations for the FAS of the ‘naked‘ Lax pairs and of the Lax pairs corresponding to the one-soliton solutions.
L 0 χ 0 ± ( x , t , λ ) = 0 , M 0 χ 0 ± ( x , t , λ ) = 0 , L 1 s χ 1 ± ( x , t , λ ) = 0 , M 1 s χ 1 ± ( x , t , λ ) = 0 ,
where L 0 and M 0 are the Lax pair whose potentials U k and V k vanish. By L 1 s and M 1 s we denote the Lax pair whose potentials are provided by the one-soliton solutions of the corresponding NLEE. Each time, it will be clear from the context which specific Lax pair we are considering.
For the N-wave systems, the ‘naked‘ FAS are given by:
χ 0 ± ( x , t , λ ) = exp i λ 2 ( J x + K t ) , χ 1 ± ( x , t , λ ) = u ( x , t , λ ) χ 0 ± ( x , t , λ ) ,
while for the NLS-type equations
χ 0 ± ( x , t , λ ) = exp i J ( λ 2 x + λ 4 t ) , χ 1 ± ( x , t , λ ) = u ( x , t , λ ) χ 0 ± ( x , t , λ ) ,
where the dressing factor u ( x , t , λ ) will be calculated below for each of the relevant cases. The specific form of J and K in (108) depends on the specific choice of the corresponding homogeneous space. Likewise, the specific form of J in (109) is determined by the choice of the relevant symmetric space.
Each dressing factor is a fractional linear function of the spectral parameter λ . As such we will use:
c 1 ( λ ) = λ λ 1 + λ λ 1 , c 1 ( λ ) = λ + λ 1 + λ + λ 1 , λ 1 ± = μ 1 ± ν 1 , c 1 ( λ ) = 1 c 1 ( λ ) .
Indeed, c 1 ( λ ) comes up naturally due to the symmetry λ λ . By λ 1 ± , we denote constants such that Im ( λ 1 ± ) 2 0 ; i.e., μ 1 ν 1 0 . As we shall see below, λ 1 ± , λ 1 ± and their hermitian conjugate determine the discrete eigenvalues of L 1 s .
The generic form of the dressing factors is the same for both types of NLEE considered above. Suppose that we impose only two types of symmetries on L and M, namely:
U ( x , t , λ ) = U ( x , t , λ ) , C 0 U ( x , t , λ ) C 0 1 = U ( x , t , λ ) ,
together with similar relations for V ( x , t , λ ) . Here, C 0 is a constant diagonal matrix such that C 0 2 = 1 1 . Then, u ( x , t , λ ) must satisfy:
u ( x , t , λ ) = u 1 ( x , t , λ ) , C 0 u ( x , t , λ ) C 0 1 = u ( x , t , λ ) ,
and u ( x , t , λ ) and its inverse take the form
u ( x , t , λ ) = 1 1 + ( c 1 ( λ ) 1 ) | N 1 m 1 | + ( c 1 ( λ ) 1 ) C 0 | N 1 m 1 | C 0 1 , u 1 ( x , t , λ ) = 1 1 + 1 c 1 ( λ ) 1 | n 1 M 1 | + 1 c 1 ( λ ) 1 C 0 | n 1 M 1 | C 0 1 .
Remark 7. 
Here and below, for the polarization vectors m 1 | , M 1 | and | n 1 , | N 1 we use Dirac’s bra- and ket-notations. They determine the residues of u and u 1 at λ = ± λ 1 + and λ = ± λ 1 . The polarization vectors whose index contains an additional ‘0‘ are constant vectors. For more details, see the examples in (154) and (162).

6.4. Dressing of N-Wave Equations: Two Involutions

We start with the N-wave type on homogeneous spaces with two involutions. Using the Equation (107), we derive the following equation for the dressing factor:
i u x + ( U 2 ; 1 s + λ U 1 ; 1 s λ 2 J ) u ( x , t , λ ) u ( x , t , λ ) ( U 20 + λ U 10 λ 2 J ) = 0 ,
which also must hold identically with respect to λ . This can be verified by taking the residues of the left-hand sides of (114) for λ = λ 1 and equating them to 0. This gives:
i | N 1 x + ( U 2 ; 1 s + λ 1 U 1 ; 1 s ( λ 1 ) 2 J ) | N 1 m 1 |                             + | N 1 i m 1 | x m 1 | ( U 20 + λ 1 U 10 ( λ 1 ) 2 J = 0 ,
from which one easily finds, see Equation (108):
| N 1 = χ 1 ( x , t , λ 1 ) | N 10 , m 1 | = m 10 | χ ^ 0 ( x , t , λ 1 ) .
Similarly, we can use the equation satisfied by u ^ u 1 ( x , t , λ ) which reads:
i u ^ x + ( U 20 + λ U 10 λ 2 J ) u ^ ( x , t , λ ) u ^ ( x , t , λ ) ( U 2 ; 1 s + λ U 1 ; 1 s λ 2 J ) = 0 .
Putting the residue of (117) at λ 1 + to 0 we obtain:
i | n 1 x + ( U 20 + λ 1 + U 10 ( λ 1 + ) 2 J ) | n 1 M 1 |                           + | n 1 i M 1 | x m 1 | ( U 2 ; 1 s + λ 1 + U 1 ; 1 s ( λ 1 + ) 2 J = 0 ,
The result is, see Equation (108):
| n 1 = χ 0 + ( x , t , λ 1 + ) | n 10 , M 1 | = M 10 | χ ^ 1 + ( x , t , λ 1 + ) .
Thus, if we know the regular solutions χ 0 ± ( x , t , λ ) then we have derived explicitly the x and t dependence of the vectors | n 1 and m 1 | . In addition we know that u u ^ = 1 1 also must hold identically with respect to λ . That means that the residues:
Res λ = λ 1 u ( x , t , λ ) u 1 ( x , t , λ ) = 0 , Res λ = λ 1 + u ( x , t , λ ) u 1 ( x , t , λ ) = 0 .
must vanish. Inserting u and u ^ from Equation (113) we obtain the equations:
M 1 | = m 1 | m 1 | n 1 1 1 + λ 1 + λ 1 λ 1 + + λ 1 m 1 | C 0 | n 1 C 0 1 , | N 1 = m 1 | n 1 1 1 λ 1 + λ 1 λ 1 + + λ 1 m 1 | C 0 | n 1 C 0 1 | n 1 .
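These relations can be tested directly. The sketch below (with an arbitrary diagonal C 0 , eigenvalues λ 1 ± and polarization vectors, and reading the second pole term of u − 1 in (113) as involving c 1 ( − λ ) ) builds u ( x , t , λ ) and its stated inverse at a fixed x , t and confirms that their product is the identity for a generic value of λ :
import numpy as np

rng = np.random.default_rng(4)
n = 4
C0 = np.diag([1.0, -1.0, -1.0, 1.0]).astype(complex)       # C0^2 = 1
lam_p, lam_m = 1.3 + 0.7j, 1.3 - 0.7j                      # lambda_1^{+}, lambda_1^{-}
kappa = (lam_p - lam_m) / (lam_p + lam_m)                  # = i nu_1 / mu_1

m1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # <m_1|  at the chosen (x, t)
n1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # |n_1>  at the chosen (x, t)

A = (m1 @ n1) * np.eye(n) + kappa * (m1 @ C0 @ n1) * C0
B = (m1 @ n1) * np.eye(n) - kappa * (m1 @ C0 @ n1) * C0
M1 = m1 @ np.linalg.inv(A)                                 # <M_1| from (121)
N1 = np.linalg.inv(B) @ n1                                 # |N_1> from (121)

c  = lambda lam: (lam - lam_p) / (lam - lam_m)             # c_1(lambda)
ct = lambda lam: (lam + lam_p) / (lam + lam_m)             # c_1(-lambda)
u     = lambda lam: np.eye(n) + (c(lam) - 1) * np.outer(N1, m1) + (ct(lam) - 1) * C0 @ np.outer(N1, m1) @ C0
u_inv = lambda lam: np.eye(n) + (1/c(lam) - 1) * np.outer(n1, M1) + (1/ct(lam) - 1) * C0 @ np.outer(n1, M1) @ C0

lam = 0.37 - 2.1j                                          # arbitrary test point
print(np.linalg.norm(u(lam) @ u_inv(lam) - np.eye(n)))     # ~ 1e-15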
In the specific calculations below, we will use more convenient notations, namely:
E 01 ± = exp i ( λ 1 ± ) 2 ( J x + K t ) = d i a g e ± z 1 i ϕ 1 , , e ± z n i ϕ n , λ 1 ± = μ 1 ± i ν 1 , z k = 2 μ 1 ν 1 ( J k x + K k t ) , ϕ k = ( μ 1 2 ν 1 2 ) ( J k x + K k t ) , κ 1 = λ 1 + λ 1 λ 1 + + λ 1 = i ν 1 μ 1 , | n 1 = E 01 + | n 10 , m 1 | = m 10 | E ^ 01 .
where μ 1 > 0 and ν 1 > 0 . The functions z k ( x , t ) and ϕ k ( x , t ) are linear functions of x and t; for each specific example, they will be given explicitly.
The last step is to determine the corresponding singular potentials U 1 and U 2 . To this end, we come back to Equation (114) for the dressing factor and study its limit for λ → ∞ . Its left-hand side is a quadratic polynomial in λ . Skipping the details, we obtain:
U 1 ; 1 s U 10 = ( λ 1 + λ 1 ) [ J , W + ] , W ± = | N 1 m 1 | + C 0 | N 1 m 1 | C 0 , U 2 ; 1 s U 20 = ( λ 1 + λ 1 ) U 1 W W U 20 + λ 1 [ J , W ]
Putting U 10 = U 20 = 0 , we obtain simplified expressions for the one-soliton solution:
U 1 ; 1 s = ( λ 1 + λ 1 ) [ J , W ] , U 2 ; 1 s = ( λ 1 + λ 1 ) U 1 ; 1 s W + + λ 1 [ J , W ] .
More explicit expressions for U 1 ; 1 s and U 2 ; 1 s in terms of hyperbolic functions will be given below for each of the examples.
Example 3 
(One soliton solutions, s l ( 4 ) case). The Lax representation in that case is provided by the operators (86) where J, K, Q 1 and Q 2 are given by (100). The ‘naked‘ polarization vectors m 1 | (116) and | n 1 (119) become:
m 1 | = m 10 , 1 e z 1 + i ϕ 1 , m 10 , 2 e z 2 + i ϕ 2 , m 10 , 3 e z 2 i ϕ 2 , m 10 , 2 e z 1 i ϕ 1 , | n 1 = n 10 , 1 e z 1 i ϕ 1 n 10 , 2 e z 2 i ϕ 2 n 10 , 3 e z 2 + i ϕ 2 n 10 , 4 e z 1 + i ϕ 1 .
The typical hermitian reduction of L and M requires that m 1 | = ( | n 1 ) and m 10 | = ( | n 10 ) . The dressed polarization vectors defined by (121) are equal to:
M 1 | = m 1 ; 1 R 1 + , m 1 ; 2 R 1 , m 1 ; 3 R 1 , m 1 ; 4 R 1 + , | N 1 = ( R 1 ) 1 0 0 0 0 ( R 1 + ) 1 0 0 0 0 ( R 1 + ) 1 0 0 0 0 ( R 1 ) 1 | n 1 , R 1 ± = m 1 | n 1 ± i ν 1 μ 1 m 1 | C 0 | n 1 , m 1 | n 1 = 2 η 01 cosh ( 2 z 1 + ζ 01 ) + 2 η 02 cosh ( 2 z 2 + ζ 02 ) , η 01 = | n 10 , 1 n 10 , 4 | , η 02 = | n 10 , 2 n 10 , 3 | , m 1 | C 0 | n 1 = 2 η 01 cosh ( 2 z 1 + ζ 01 ) 2 η 02 cosh ( 2 z 2 + ζ 02 ) , ζ 01 = ln | n 10 , 1 | | n 10 , 4 | , ζ 02 = ln | n 10 , 2 | | n 10 , 3 | ,
They also satisfy M 1 | = ( | N 1 ) . Therefore,
W + = 2 0 n 1 , 1 m 1 , 2 n 1 , 1 m 1 , 3 0 n 2 , 1 m 1 , 1 0 0 n 1 , 2 m 1 , 4 n 1 , 3 m 1 , 1 0 0 n 1 , 3 m 1 , 4 0 n 1 , 4 m 1 , 2 n 1 , 4 m 1 , 3 0 , W = 2 n 1 , 1 m 1 , 1 0 0 n 1 , 1 m 1 , 4 0 n 1 , 2 m 1 , 2 n 1 , 2 m 1 , 3 0 0 n 1 , 3 m 1 , 2 n 1 , 3 m 1 , 3 0 n 1 , 4 m 1 , 1 0 0 n 1 , 4 m 1 , 4 .
q 1 ( x , t ) = 4 i ν 1 ( a 1 a 2 ) m 1 , 2 n 1 , 1 R 1 + , q 2 ( x , t ) = 4 i ν 1 ( a 1 + a 2 ) m 1 , 3 n 1 , 1 R 1 + q 3 ( x , t ) = 4 i ν 1 ( a 1 + a 2 ) m 1 , 4 n 1 , 2 R 1 , q 4 ( x , t ) = 4 i ν 1 ( a 1 a 2 ) m 1 , 4 n 1 , 3 R 1 q 5 ( x , t ) = 8 ν 1 n 1 , 1 m 1 , 4 ( a 1 a 2 ) n 1 , 2 m 1 , 2 + ( a 1 + a 2 ) n 1 , 3 m 1 , 3 i λ 1 a 1 R 1 R 1 + R 1 , q 6 ( x , t ) = 8 ν 1 n 1 , 2 m 1 , 3 ( a 1 a 2 ) n 1 , 1 m 1 , 1 ( a 1 + a 2 ) n 1 , 4 m 1 , 4 i λ 1 + a 2 R 1 + R 1 + R 1 ,
q 1 ( x , t ) = 4 i ν 1 μ 1 ( a 1 a 2 ) | n 10 , 1 n 10 , 2 | e z 1 + z 2 + i ( ϕ ˜ 2 ϕ ˜ 1 ) λ 1 + η 01 cosh ( z ˜ 1 ) + λ 1 η 02 cosh ( z ˜ 2 ) ( μ 1 2 + ν 1 2 ) ( η 01 2 cosh 2 ( z ˜ 1 ) + η 02 2 cosh 2 ( z ˜ 2 ) ) + 2 η 10 η 20 ( μ 1 2 ν 1 2 ) cosh ( z ˜ 1 ) cosh ( z ˜ 2 ) , q 2 ( x , t ) = 4 i ν 1 μ 1 ( a 1 + a 2 ) | n 10 , 3 n 10 , 2 | e z 1 z 2 i ( ϕ ˜ 1 + ϕ ˜ 2 ) λ 1 + η 01 cosh ( z ˜ 1 ) + λ 1 η 02 cosh ( z ˜ 2 ) ( μ 1 2 + ν 1 2 ) ( η 01 2 cosh 2 ( z ˜ 1 ) + η 02 2 cosh 2 ( z ˜ 2 ) ) + 2 η 10 η 20 ( μ 1 2 ν 1 2 ) cosh ( z ˜ 1 ) cosh ( z ˜ 2 ) , q 3 ( x , t ) = 4 i ν 1 μ 1 ( a 1 + a 2 ) | n 10 , 4 n 10 , 2 | e z 1 + z 2 i ( ϕ ˜ 1 + ϕ ˜ 2 ) λ 1 η 01 cosh ( z ˜ 1 ) + λ 1 + η 02 cosh ( z ˜ 2 ) ( μ 1 2 + ν 1 2 ) ( η 01 2 cosh 2 ( z ˜ 1 ) + η 02 2 cosh 2 ( z ˜ 2 ) ) + 2 η 10 η 20 ( μ 1 2 ν 1 2 ) cosh ( z ˜ 1 ) cosh ( z ˜ 2 ) , q 4 ( x , t ) = 4 i ν 1 μ 1 ( a 1 a 2 ) | n 10 , 1 n 10 , 2 | e z 1 z 2 + i ( ϕ ˜ 2 ϕ ˜ 1 ) λ 1 η 01 cosh ( z ˜ 1 ) + λ 1 + η 02 cosh ( z ˜ 2 ) ( μ 1 2 + ν 1 2 ) ( η 01 2 cosh 2 ( z ˜ 1 ) + η 02 2 cosh 2 ( z ˜ 2 ) ) + 2 η 10 η 20 ( μ 1 2 ν 1 2 ) cosh ( z ˜ 1 ) cosh ( z ˜ 2 ) ,
where z ˜ k = 2 z k + ζ 0 k and ϕ ˜ k = ϕ k + arg n 10 , k . In addition
q 5 ( x , t ) = 16 ν 1 | n 10 , 1 | | n 10 , 4 | e 2 i ϕ ˜ 1 × × ν 1 μ 1 a 1 η 01 cosh ( z ˜ 2 ) a 2 η 02 sinh ( z ˜ 2 ) ) i λ 1 a 1 ( λ 1 η 01 cosh ( z ˜ 1 ) + λ 1 + η 02 cosh ( z ˜ 2 ) ) μ 1 ( μ 1 2 + ν 1 2 ) ( η 01 2 cosh 2 ( z ˜ 1 ) + η 02 2 cosh 2 ( z ˜ 2 ) ) + 2 η 10 η 20 ( μ 1 2 ν 1 2 ) cosh ( z ˜ 1 ) cosh ( z ˜ 2 ) , q 6 ( x , t ) = 16 ν 1 | n 10 , 2 | | n 10 , 3 | e 2 i ϕ ˜ 2 × × ν 1 μ 1 a 1 η 01 sinh ( z ˜ 1 ) a 2 η 02 cosh ( z ˜ 2 ) ) i λ 1 + a 2 ( λ 1 η 01 cosh ( z ˜ 1 ) + λ 1 + η 02 cosh ( z ˜ 2 ) ) μ 1 ( μ 1 2 + ν 1 2 ) ( η 01 2 cosh 2 ( z ˜ 1 ) + η 02 2 cosh 2 ( z ˜ 2 ) ) + 2 η 10 η 20 ( μ 1 2 ν 1 2 ) cosh ( z ˜ 1 ) cosh ( z ˜ 2 ) .
The squared moduli | q k ( x , t ) | 2 , k = 1 , , 6 are plotted in Figure 5.

6.5. Dressing of N-Wave Equations: Three Involutions

Here, we consider only the cases when the two involutions above are combined with the condition that u ( x , t , λ ) belongs to S O ( N ) or to S P ( 2 N ) . Then, the dressing factors take the form:
u ( x , t , λ ) = 1 1 + ( c 1 ( λ ) 1 ) | N 1 m 1 | + ( c 1 ( λ ) 1 ) C 0 | N 1 m 1 | C 0 1 + 1 c 1 ( λ ) 1 S a 1 | M 1 n 1 | S a + 1 c 1 ( λ ) 1 S a 1 C 0 | M 1 n 1 | C 0 1 S a , u 1 ( x , t , λ ) = 1 1 + 1 c 1 ( λ ) 1 | n 1 M 1 | + 1 c 1 ( λ ) 1 C 0 1 | n 1 M 1 | C 0 + c 1 ( λ ) 1 S a | m 1 N 1 | S a 1 + c 1 ( λ ) 1 S a C 0 1 | m 1 N 1 | C 0 S a 1 ,
where c 1 ( λ ) = ( λ λ 1 + ) / ( λ λ 1 ) . Inserting c 1 ( λ ) into (131) we obtain:
u ( x , t , λ ) = 1 1 λ 1 + λ 1 λ λ 1 | N 1 m 1 | + λ 1 + λ 1 λ + λ 1 C 0 | N 1 m 1 | C 0 1 + λ 1 + λ 1 λ λ 1 + S a 1 | M 1 n 1 | S a λ 1 + λ 1 λ + λ 1 + S a 1 C 0 | M 1 n 1 | C 0 1 S a , u 1 ( x , t , λ ) = 1 1 + λ 1 + λ 1 λ λ 1 + | n 1 M 1 | λ 1 + λ 1 λ + λ 1 + C 0 1 | n 1 M 1 | C 0 λ 1 + λ 1 λ λ 1 S a | m 1 N 1 | S a 1 + λ 1 + λ 1 λ + λ 1 S a C 0 1 | m 1 N 1 | C 0 S a 1 .
Since the dressing factor maps a regular solution of the RHP into a singular solution of the same problem, it must satisfy the reductions (98), i.e.,
u ( x , t , λ ) = u 1 ( x , t , λ ) .
In addition one can check that the conditions:
C 0 u ( x , t , λ ) C 0 1 = u ( x , t , λ ) , u 1 ( x , t , λ ) = S a u T ( x , t , λ ) S a 1 , a = 0 , 1 ;
are identically satisfied. The second condition in (134) ensures that u ( x , t , λ ) belongs to the orthogonal group if S = S 0 ; for S = S 1 u ( x , t , λ ) belongs to the symplectic group.
Note that the new dressing factors and their inverse satisfy the same differential Equations (114) and (117), respectively. These equations must be satisfied identically with respect to λ . This means that all the residues of these equations at the poles of u and u ^ must vanish.
Thus, we obtain the generalizations of Equations (119) and (116) for the three involutions case:
| n 1 = χ 0 + ( x , t , λ 1 + ) | n 10 , | N 1 = χ 1 + ( x , t , λ 1 + ) | N 10 , m 1 | = m 10 | χ ^ 0 ( x , t , λ 1 ) , M 1 | = M 10 | χ ^ 1 ( x , t , λ 1 ) ,
where | n 10 , | N 10 , m 10 | and M 10 | are constant polarization vectors. We assume that we know χ 0 ± ( x , t , λ ) , which are related to the regular solutions of the RHP. Typically, they correspond to vanishing potentials U 10 = 0 and U 20 = 0 ; thus, we must have:
χ 0 ± ( x , t , λ ) = e i λ 2 ( J x + K t ) .
In addition, we need to ensure that the relations u u 1 = 1 1 and u 1 u = 1 1 hold identically with respect to λ . The presence of the third involution makes this problem more difficult, because these products have second order poles at the points λ 1 ± . It is easy to see that the corresponding residues vanish provided:
m 1 | S a | m 1 = 0 , n 1 | S a | n 1 = 0 , m 1 | C 0 S a C 0 | m 1 = 0 , n 1 | C 0 S a C 0 | n 1 = 0 , N 1 | S a | N 1 = 0 , M 1 | S a | M 1 = 0 , N 1 | C 0 S a C 0 | N 1 = 0 , M 1 | C 0 S a C 0 | N 1 = 0 ,
Remember that these vectors depend on x and t and the conditions (137) must be identities. But we also know that these polarization vectors must satisfy (135). Therefore, we have:
n 1 | S a | n 1 = n 10 | ( χ 0 + ( x , λ 1 + ) ) T S a χ 0 + ( x , λ 1 + ) | n 10 = n 10 | S a | n 10 , N 1 | S a | N 1 = N 10 | ( χ ( x , λ 1 ) ) T S a χ ( x , λ 1 ) | N 10 = N 10 | S a | N 10 , N 1 | C 0 S a C 0 | N 1 = N 10 | ( χ ( x , λ 1 ) ) T C 0 S a C 0 χ ( x , λ 1 ) | N 10 = N 10 | C 0 ( χ ( x , λ 1 ) ) T S a χ ( x , λ 1 ) C 0 | N 10 = N 10 | C 0 S a C 0 | N 10 .
The proof that all other scalar products in (137) are x and t independent is performed similarly. Thus, the conditions (137) in fact impose restrictions only on the initial polarization vectors:
m 10 | S a | m 10 = 0 , n 10 | S a | n 10 = 0 , m 10 | C 0 S a C 0 | m 10 = 0 , n 10 | C 0 S a C 0 | n 10 = 0 , N 10 | S a | N 10 = 0 , M 10 | S a | M 10 = 0 , N 10 | C 0 S a C 0 | N 10 = 0 , M 10 | C 0 S a C 0 | N 10 = 0 ,
The last condition that is imposed on the polarization vectors comes from Equation (133) and reads
| n 10 = ( m 10 | ) , | N 10 = ( M 10 | ) , C 0 S a = S a C 0 .
Taking the residues at λ 1 , λ 1 + leads to
m 1 | 1 1 | n 1 M 1 | λ 1 + λ 1 λ 1 + + λ 1 C 0 | n 1 M 1 | C 0 + λ 1 + λ 1 2 λ 1 S C 0 | m 1 N 1 | C 0 S 1 = 0 , 1 1 | N 1 m 1 | + λ 1 + λ 1 λ 1 + + λ 1 C 0 | N 1 m 1 | C 0 λ 1 + λ 1 2 λ 1 + S 1 C 0 | M 1 n 1 | C 0 S | n 1 = 0 .
Transposing the first of the above equations, we find:
| m 1 = m 1 | n 1 1 1 + λ 1 + λ 1 λ 1 + + λ 1 C 0 m 1 | C 0 | n 1 | M 1 λ 1 + λ 1 2 λ 1 m 1 | C 0 S | m 1 C 0 S | N 1 , | n 1 = m 1 | n 1 1 1 λ 1 + λ 1 λ 1 + + λ 1 m 1 | C 0 | n 1 C 0 | N 1 + λ 1 + λ 1 2 λ 1 + n 1 | C 0 S | n 1 S 1 C 0 | M 1
Recall that λ 1 ± = μ 1 ± i ν 1 , so that
λ 1 + λ 1 λ 1 + + λ 1 = i ν 1 μ 1 .
Thus, Equation (142) is rewritten as:
| m 1 | n 1 = X 1 X 2 Y 1 Y 2 | M 1 | N 1
where the block matrices X 1 , 2 and Y 1 , 2 are given by:
X 1 = m 1 | n 1 1 1 + i ν 1 μ 1 C 0 m 1 | C 0 | n 1 , X 2 = i ν 1 λ 1 m 1 | C 0 S | m 1 C 0 S , Y 1 = i ν 1 λ 1 + n 1 | C 0 S | n 1 S 1 C 0 , Y 2 = m 1 | n 1 1 1 i ν 1 μ 1 m 1 | C 0 | n 1 C 0 ,
The inverse of the matrix in the right-hand side of Equation (143) is given by:
X 1 X 2 Y 1 Y 2 1 = ( X 1 X 2 Y 2 1 Y 1 ) 1 X 1 1 X 2 ( Y 2 Y 1 X 1 1 X 2 ) 1 Y 2 1 Y 1 ( X 1 X 2 Y 2 1 Y 1 ) 1 ( Y 2 Y 1 X 1 1 X 2 ) 1
and
X 1 1 = μ 1 2 m 1 | n 1 i μ 1 ν 1 m 1 | C 0 | n 1 C 0 μ 1 2 m 1 | n 1 2 + ν 1 2 m 1 | C 0 | n 1 2 , Y 2 1 = μ 1 2 m 1 | n 1 + i μ 1 ν 1 m 1 | C 0 | n 1 C 0 μ 1 2 m 1 | n 1 2 + ν 1 2 m 1 | C 0 | n 1 2 ,
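This is the standard Schur-complement block inversion; a short numerical check with random blocks (purely illustrative) reads:
import numpy as np

rng = np.random.default_rng(5)
k = 3
X1, X2, Y1, Y2 = (rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k)) for _ in range(4))
M = np.block([[X1, X2], [Y1, Y2]])

S1 = np.linalg.inv(X1 - X2 @ np.linalg.inv(Y2) @ Y1)       # (X1 - X2 Y2^{-1} Y1)^{-1}
S2 = np.linalg.inv(Y2 - Y1 @ np.linalg.inv(X1) @ X2)       # (Y2 - Y1 X1^{-1} X2)^{-1}
M_inv = np.block([[S1, -np.linalg.inv(X1) @ X2 @ S2],
                  [-np.linalg.inv(Y2) @ Y1 @ S1, S2]])
print(np.linalg.norm(M @ M_inv - np.eye(2 * k)))           # ~ 1e-14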
i.e., we introduce the relations and the notations
X 1 1 = g 1 Y 2 , Y 2 1 = g 1 X 1 , g 1 = μ 1 2 μ 1 2 m 1 | n 1 + ν 1 2 m 1 | C 0 | n 1 , β 1 = m 1 | S C 0 | m 1 , β ¯ 1 = n 1 | S C 0 | n 1 .
Using the fact that S and C 0 commute we can simplify the matrix in (145) into:
X 1 X 2 Y 1 Y 2 1 = g 1 w 1 Y 2 i ν 1 λ 1 β 1 C 0 S i ν 1 λ 1 + β ¯ 1 C 0 S 1 X 1
where
g 1 w 1 = μ 1 2 ( μ 1 2 + ν 1 2 ) ( μ 1 2 + ν 1 2 ) ( μ 1 2 m 1 | n 1 2 + ν 1 2 m 1 | C 0 | n 1 2 ) μ 1 2 ν 1 2 β 1 β ¯ 1 .
Thus, we find the following expressions for | M 1 and | N 1 in terms of | m 1 and | n 1 :
| M 1 = g 1 w 1 m 1 | n 1 + i ν 1 μ 1 m 1 | C 0 | n 1 C 0 | m 1 + i ν 1 λ 1 β 1 C 0 S | n 1 , | N 1 = g 1 w 1 m 1 | n 1 i ν 1 μ 1 m 1 | C 0 | n 1 C 0 | n 1 i ν 1 λ 1 + β ¯ 1 C 0 S | m 1 ,
It remains to calculate the potentials. To this end, we rewrite Equation (114) in the form:
i u x u ^ + U 2 , 1 s + λ U 1 , 1 s = λ 2 ( J u J u ^ ( x , t , λ ) ) ,
where we have assumed U 20 = 0 and U 1 , 0 = 0 . Taking the limit λ → ∞ of (151), we obtain:
U 1 , 1 s = 2 i ν 1 ( J A ^ + A J ) , U 2 , 1 s = 4 ν 1 2 ( A J B ^ + B J A ^ ) 2 i ν 1 ( B J + J B ^ ) ,
where
A = | N 1 m 1 | + C 0 | N 1 m 1 | C 0 + S a 1 | M 1 n 1 | S a S a 1 C 0 | M 1 n 1 | C 0 S a , A ^ = | n 1 M 1 | C 0 | n 1 M 1 | C 0 S a | m 1 N 1 | S a 1 + S a C 0 | m 1 N 1 | C 0 S a 1 , B = λ 1 ( | N 1 m 1 | + C 0 | N 1 m 1 | C 0 ) λ 1 + ( S a 1 | M 1 n 1 | S a + S a 1 C 0 | M 1 n 1 | C 0 S a ) , B ^ = λ 1 + ( | n 1 M 1 | + C 0 | n 1 M 1 | C 0 ) λ 1 ( S a | m 1 N 1 | S a 1 + S a C 0 | m 1 N 1 | C 0 S a 1 ) .
Example 4 
(One soliton solutions, s o ( 5 ) case). We start with the ‘naked’ polarization vectors:
m 1 | = m 10 | e i λ 1 , 2 ( J x + K t ) = n 10 , 1 e z 1 + i ϕ 1 , n 10 , 2 e z 2 + i ϕ 2 , n 10 , 3 , n 10 , 4 e z 2 i ϕ 2 , n 10 , 5 e z 1 i ϕ 1 , | n 1 = e i λ 1 + , 2 ( J x + K t ) | n 10 = e z 1 i ϕ 1 n 10 , 1 e z 2 i ϕ 2 n 10 , 2 n 10.3 e z 2 + i ϕ 2 n 10 , 4 e z 1 + i ϕ 1 n 10 , 5 , z k = 2 μ 1 ν 1 s k , ϕ k = ( μ 1 2 ν 1 2 ) s k , z k = 2 μ 1 ν 1 ( a k x + b k t + ζ 1 , k ) , ϕ k = ( μ 1 2 ν 1 2 ) ( a k x + b k t + ϕ 0 , k ) , C 0 = d i a g ( 1 , 1 , 1 , 1 , 1 ) .
where s k = ( a k x + b k t ) . In addition, the constant polarization vectors must satisfy Equation (138). For convenience, we choose the following parametrization of | n 10 , | N 10 , m 10 | and M 10 | :
n 10 , k = r 10 , k e i ϕ 0 k , n 10 , 6 k = s 10 , 6 k e i ϕ 0 k , k = 1 , 2 ; m 10 , k = r 10 , k e i ϕ 0 k , m 10 , 6 k = s 10 , 6 k e i ϕ 0 k , k = 1 , 2 ; N 10 , k = R 10 , k e i Φ 0 k , N 10 , 6 k = S 10 , 6 k e i Φ 0 k , k = 1 , 2 ; M 10 , k = R 10 , k e i Φ 0 k , M 10 , 6 k = S 10 , 6 k e i Φ 0 k , k = 1 , 2 ;
where the real constants r 10 , k , s 10 , k , R 10 , k and S 10 , k satisfy the relations:
n 10 , 3 2 = 2 r 10 , 1 s 10 , 5 + 2 r 10 , 2 s 10 , 4 , N 10 , 3 2 = 2 R 10 , 1 S 10 , 5 + 2 R 10 , 2 S 10 , 4 , m 10 , 3 2 = 2 r 10 , 1 s 10 , 5 + 2 r 10 , 2 s 10 , 4 , M 10 , 3 2 = 2 R 10 , 1 S 10 , 5 + 2 R 10 , 2 S 10 , 4 .
With these polarization constants, the relations (138) are automatically satisfied; in addition, the relations (140) also hold. With these notations, for the scalar products of the ‘naked‘ polarization vectors we obtain:
m 1 | n 1 = | n 10 , 1 | 2 e 2 z 1 + | n 10 , 2 | 2 e 2 z 2 + | n 10 , 3 | 2 + | n 10 , 4 | 2 e 2 z 2 + | n 10 , 5 | 2 e 2 z 1 = 4 η 10 sinh 2 ( z ˜ 1 ) + 4 η 20 cosh 2 ( z ˜ 2 ) , z ˜ k = z k + ζ 1 , k , η k 0 = r 10 , k s 10 , 6 k , = 2 | n 10 , 1 n 10 , 5 | cosh ( 2 z 1 ) + 2 | n 10 , 2 n 10 , 4 | cosh ( 2 z 2 ) + | n 10 , 3 | 2 , m 1 | C 0 | n 1 = 4 η 10 cosh 2 ( z ˜ 1 ) + 4 η 20 cosh 2 ( z ˜ 2 ) , ζ 1 , k = 1 2 ln r 10 , k s 10 , 6 k = 2 | n 10 , 1 n 10 , 5 | cosh ( 2 z 1 ) 2 | n 10 , 2 n 10 , 4 | cosh ( 2 z 2 ) + | n 10 , 3 | 2 .
The ‘dressed’ polarization vectors are determined by:
| M 1 | N 1 = X 1 X 2 Y 1 Y 2 1 | m 1 | n 1 ,
where
X 1 = m 1 | n 1 + i ν 1 μ 1 m 1 | C 0 | n 1 C 0 , X 2 = i ν 1 λ 1 β 1 S C 0 , Y 1 = i ν 1 λ 1 + β ¯ 1 S 1 C 0 , Y 2 = m 1 | n 1 i ν 1 μ 1 m 1 | C 0 | n 1 C 0 ,
Note that β 1 = β ¯ 1 = 4 η 10 are real constants. Skipping the details we find:
X 1 X 2 Y 1 Y 2 1 = g 1 w 1 Y 2 X 2 Y 1 X 1 ,
where g 1 w 1 , provided in general by Equation (149) now takes the form:
g 1 w 1 = μ 1 2 ( μ 1 2 + ν 1 2 ) ( μ 1 2 + ν 1 2 ) ( μ 1 2 ν 1 2 ) 4 η 10 cosh 2 ( z ˜ 1 ) + ( μ 1 2 + ν 1 2 ) 4 η 20 cosh 2 ( z ˜ 2 ) 4 μ 1 2 η 10 16 μ 1 2 ν 1 2 η 10 2
Thus, we obtain the dressed polarization vectors as follows:
| M 1 = g 1 w 1 S 1 + 0 0 0 0 0 S 1 0 0 0 0 0 S 1 0 0 0 0 0 S 1 0 0 0 0 0 S 1 + | m 1 + i λ 1 + ν 1 β 1 μ 1 2 ν 1 2 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 | n 1 , | N 1 = g 1 w 1 S 1 0 0 0 0 0 S 1 + 0 0 0 0 0 S 1 + 0 0 0 0 0 S 1 + 0 0 0 0 0 S 1 | n 1 i λ 1 ν 1 β ¯ 1 μ 1 2 ν 1 2 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 | m 1 ,
where
S 1 + = 4 μ 1 λ 1 + η 10 cosh 2 ( z ˜ 1 ) + λ 1 η 20 cosh 2 ( z ˜ 2 ) μ 1 η 10 , S 1 = 4 μ 1 λ 1 η 10 cosh 2 ( z ˜ 1 ) + λ 1 + η 20 cosh 2 ( z ˜ 2 ) μ 1 η 10 .
It remains to use Equation (152) and calculate the corresponding one-soliton solutions of the s o ( 5 ) 4-wave NLEE (105) and (106). The result is:
q 1 ( x , t ) = 2 i ν 1 a 1 ( N 1 , 4 m 1 , 5 + M 1 , 2 n 1 , 1 a 2 ( N 1 , 1 m 1 , 2 + M 1 , 5 n 1 , 4 ) , q 2 ( x , t ) = 4 i ν 1 a 1 ( N 1 , 3 m 1 , 5 M 1 , 3 n 1 , 1 ) , q 3 ( x , t ) = 4 i ν 1 a 1 ( N 1 , 2 m 1 , 5 + M 1 , 4 n 1 , 1 ) + a 2 ( N 1 , 1 m 1 , 4 + M 1 , 5 n 1 , 2 ) ,
and
q 4 = 4 i ν 1 a 2 ( λ 1 N 1 , 3 m 1 , 4 + λ 1 + M 1 , 3 n 1 , 2 ) + 16 ν 1 2 a 1 ( N 1 , 2 M 1 , 3 + M 1 , 4 N 1 , 3 ) ( n 1 , 1 m 1 , 1 m 1 , 5 n 1 , 5 ) .

7. MNLS Family and Symmetric Spaces

In order to analyze the MNLS family of NLEE, we need to make two changes to the RHP. First, following Fordy and Kulish [45], we need to introduce in it the structure of the symmetric space, and second, we need to modify the x and t dependence of the sewing function G ( x , t , λ ) to:
i G x λ 2 [ J , G ( x , t , λ ) ] = 0 , i G t λ 4 [ J , G ( x , t , λ ) ] = 0 .
This means that the M operator in the Lax pair must now be a fourth-order polynomial in λ .
We already pointed out the importance of the seminal paper of Fordy and Kulish [45] for the MNLS equations. Each symmetric space is generated by a Cartan involution [21], which determines the principal symmetry for the Lax pair and for the solution of the RHP. For this family of NLEE, it will be more convenient to use another parametrization of ξ ( x , t , λ ) :
ξ ( x , t , λ ) = exp ( Q ( x , t , λ ) ) , Q ( x , t , λ ) = s = 1 Q s λ s ,
This is similar to the parametrization in (76); however, now each of the coefficients Q s ( x , t ) provides local coordinates for the relevant symmetric space. The typical reduction here is to assume that Q ( x , t , λ ) is anti-hermitian:
Q ( x , t , λ ) = Q ( x , t , λ ) , i . e . , Q s ( x , t ) = Q s ( x , t ) , and ( ξ ± ( x , t , λ ) ) 1 = ξ ( x , t , λ ) .
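A short numerical check of this reduction for a single exponential ξ = exp ( Q ) with randomly chosen anti-hermitian coefficients (an illustration only; the distinction between ξ + and ξ − plays no role at this level) reads:
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)
n = 3
rand = lambda: rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Qs = [0.5 * (B - B.conj().T) for B in (rand() for _ in range(3))]      # anti-hermitian Q_1, Q_2, Q_3

xi = lambda lam: expm(sum(Q / lam**(s + 1) for s, Q in enumerate(Qs)))
lam = 0.7 + 1.1j
print(np.linalg.norm(xi(np.conj(lam)).conj().T - np.linalg.inv(xi(lam))))   # ~ 1e-15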
In Appendix A we briefly describe the root systems of the simple Lie algebras, as well as the Cartan involutions that generate the corresponding Hermitian symmetric spaces. We would like to remind the reader that the Cartan involution is determined by a special element of the Cartan subalgebra J h and that by h we denote the vector in root space which is dual to J h . Using h , we can split the positive roots Δ + of g into two subsets:
Δ + = Δ 0 + Δ 1 + , Δ 0 + { α Δ + , ( h , α ) = 0 } , Δ 1 + { β Δ 1 + , ( h , β ) > 0 } ,
Below, we specify the Cartan involutions for the four classes of Hermitian symmetric spaces and their realizations as factor groups:
A . III J = j = 1 m E j j m r + 1 j = 1 r + 1 E j j , S U ( m + n ) / S ( U ( m ) U ( n ) ) , BD . I J = H e 1 = E 11 E 2 r + 1 , 2 r + 1 , S O ( 2 r + 1 ) / S ( O ( 2 r 1 ) O ( 2 ) ) , C . I J = j = 1 r H e j = j = 1 r ( E j j E 2 r + 1 j , 2 r + 1 j ) , S P ( 2 r ) / U ( r ) , D . I J = j = 1 r H e j = j = 1 r ( E j j E 2 r + 1 j , 2 r + 1 j ) , S O ( 2 r ) / U ( r ) ,
Note that for the A.III case m + n = r + 1 , the corresponding vectors h and the subsets of roots Δ 1 + are given by:
A . III h = j = 1 m e j m r + 1 ϵ , Δ 1 + = { e j e k , m j < k r + 1 } , BD . I h = e 1 , Δ 1 + = { e 1 e j , e 1 , e 1 + e j , 2 j r , } , C . I h = j = 1 r e j , Δ 1 + = { e 1 + e j , 2 e j , 1 i < j r , } , D . III h = j = 1 r e j , Δ 1 + = { e 1 + e j , 1 i < j r , } ,
where ϵ = j = 1 r + 1 e j .
In all these cases, the local coordinates are parametrized by coefficients of the form: Q s ( x , t ) = α Δ 1 + ( q α , s ( x , t ) E α q α , s ( x , t ) E α ) .
The Cartan involution splits the root system Δ of the relevant simple Lie algebra into two subsets:
Δ = Δ 0 Δ 1 , Δ 0 { α Δ , α ( J ) = 0 } , Δ 1 { α Δ , α ( J ) 0 } .
Here, we just note that for the A.III, C.I and D.III symmetric spaces we can introduce local coordinates, which in the typical representation are given by 2 × 2 block matrices:
Q s ( x , t ) = α Δ 1 + ( q s , α E α q s , α E α ) = 0 q s q s 0 .
Indeed, for the A.III type symmetric spaces the blocks q and p may be arbitrary, apart from the fact that they must be hermitian conjugate to each other. For the symmetric spaces of type C.I and D.III, these blocks must be constrained in such a way that the corresponding matrix Q 1 ( x , t ) must be an element of the s p ( 2 r ) or s o ( 2 r ) algebra.
The fourth Hermitian symmetric space, S O ( m + 2 ) / S ( O ( m ) × O ( 2 ) ) , has a different block-matrix structure. In this case, we are dealing with the root system of B r algebras and the Cartan involution is provided by J which is dual to e 1 . As a result, we have:
J = H e 1 , Δ 0 { ± ( e k ± e j ) , 1 < k < j r } { ± e k , 1 < k r } , Δ 1 { ± ( e 1 ± e j ) , 1 < j r } { ± e 1 } .
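For a concrete picture, the snippet below enumerates the root system of B r for r = 3 (an arbitrary illustrative choice) and splits it according to whether the e 1 -component of the root vanishes, reproducing the counting of Δ 0 and Δ 1 above:
from itertools import combinations

r = 3                                                     # so(7), chosen only as an example
roots = []
for i, j in combinations(range(r), 2):                    # long roots +-e_i +- e_j, i < j
    for si in (1, -1):
        for sj in (1, -1):
            v = [0] * r; v[i] = si; v[j] = sj; roots.append(tuple(v))
for i in range(r):                                        # short roots +-e_i
    for s in (1, -1):
        v = [0] * r; v[i] = s; roots.append(tuple(v))

Delta0 = [a for a in roots if a[0] == 0]                  # alpha(J) = 0 for J = H_{e_1}
Delta1 = [a for a in roots if a[0] != 0]                  # alpha(J) != 0
print(len(roots), len(Delta0), len(Delta1))               # 18 8 10 for r = 3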
As a result, in the typical representation the local coordinates are provided by 3 × 3 block matrices Q s ( x , t ) with the following structure:
Q s ( x , t ) = α Δ 1 + ( q s , α E α q s , α E α ) = 0 q s T 0 q s 0 s 0 q 1 0 q s s 0 0 .
For these reasons, we will treat these two types of symmetric spaces separately.
Here, we assume that the exponent Q ( x , t , λ ) provides the local coordinates of the corresponding hermitian symmetric space; we also assume that it is an odd function of λ . In short:
Q ( x , t , λ ) = s = 1 λ 2 s + 1 Q 2 s 1 ( x , t ) , ξ ± ( x , t , λ ) = exp ( Q ( x , t , λ ) ) .
Note that if we evaluate the exponential in (177) we will have two types of terms. The even powers of Q ( x , t , λ ) will be block-diagonal matrices which are even functions of λ ; the odd powers of Q ( x , t , λ ) will be block-off-diagonal matrices which are odd functions of λ . As we shall see below, this gives us a more ‘economical’ way of deriving the MNLS equations. Indeed, the equations that we will derive will be parametrized solely by the coefficients of Q 1 ( x , t ) . In addition, we have an extra symmetry due to the fact that J Q 1 J = Q 1 and J Q ( x , t , λ ) J = Q ( x , t , λ ) . This means that
C 0 ξ ± ( x , t , λ ) C 0 1 = ξ ± ( x , t , λ ) , C 0 = exp ( π i J ) .
Let us now consider Lax operators, which are quadratic in λ , e.g., (32). The construction of their FAS is outlined in Section 3.2 above. Obviously, the solutions χ + ( x , t , λ ) and χ − ( x , t , λ ) of the Equation (37) are analytic for Im λ 2 > 0 and Im λ 2 < 0 , respectively. This means that now χ + ( x , t , λ ) is analytic in the first and third quadrants of the complex λ -plane, while χ − ( x , t , λ ) is analytic in the second and the fourth quadrants, see Figure 2.

7.1. Lax Pairs on Symmetric Spaces. Generic Case

We start with the realizations of the Cartan involution which are convenient for Lax pairs. Here, we assume that the reader is familiar with the theory of simple Lie algebras g and their root systems [21,76]. We already mentioned above that the Cartan involution is determined by the Cartan subalgebra element J which provides the leading terms of the operators in the Lax pair. Below, we will assume that L is quadratic in λ ; then M must be a quartic polynomial in λ :
L MNLS ψ i ψ x + ( U 2 ( x , t ) + λ U 1 ( x , t ) λ 2 J ) ψ ( x , t , λ ) = 0 , M MNLS ψ i ψ t + ( V 4 ( x , t ) + λ V 3 ( x , t ) + λ 2 V 2 ( x , t ) + λ 3 V 1 ( x , t ) λ 4 J ) ψ ( x , t , λ ) = 0
This Lax pair has the form of (78), only now J = K . Therefore, U k ( x , t ) = V k ( x , t ) for k = 1 , 2 . Then, the compatibility condition takes the form:
i V x i U t + [ U ( x , t , λ ) , V ( x , t , λ ) ] = 0 .
and we equate to zero the terms with the different powers of λ . It is easy to check that the terms at λ 6 , λ 5 and λ 4 vanish identically. The rest of the terms are given by:
λ 3 : i Q x = [ J , V 3 ] , λ 2 : i V 2 x + [ Q , V 3 ] = [ J , V 4 ] , λ 1 : i V 3 x i Q t + [ V 2 , V 3 ] + [ Q , V 4 ] = 0 , λ 0 : i V 4 x V 2 t + [ V 2 , V 4 ] = 0 .
where we used the traditional notation Q ( x , t ) ≡ V 1 ( x , t ) . The first two of the above equations allow us to express V 3 and V 4 in terms of Q and Q x .
The Cartan involution introduces a Z 2 -grading in g , i.e.,
g = g ( 0 ) g ( 1 ) .
Let us assume that X 1 , X 2 g ( 0 ) and Y 1 , Y 2 g ( 1 ) . Then:
[ X 1 , X 2 ] g ( 0 ) , [ Y 1 , Y 2 ] g ( 0 ) , [ X 1 , Y 2 ] g ( 1 ) .
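These grading relations are easy to confirm numerically; the following short sketch uses an arbitrary BD.I-type choice of J and random matrices, purely as an illustration:
import numpy as np

rng = np.random.default_rng(6)
n = 5
J = np.diag([1.0, 0.0, 0.0, 0.0, -1.0])                    # illustrative Cartan element
C0 = np.diag(np.exp(1j * np.pi * np.diag(J)))              # C0 = exp(i pi J), C0^2 = 1, C0^{-1} = C0
proj = lambda X, s: 0.5 * (X + s * C0 @ X @ C0)
com = lambda A, B: A @ B - B @ A
rand = lambda: rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

X1, X2 = proj(rand(), +1), proj(rand(), +1)                # elements of g^(0): commute with C0
Y1, Y2 = proj(rand(), -1), proj(rand(), -1)                # elements of g^(1): anticommute with C0
lies_in = lambda Z, s: np.allclose(C0 @ Z @ C0, s * Z)
print(lies_in(com(X1, X2), +1), lies_in(com(Y1, Y2), +1), lies_in(com(X1, Y2), -1))   # True True True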
In particular, this means that the Cartan involution splits the roots system Δ into two subsets, which are determined by the choice of J:
Δ Δ 0 Δ 1 , α Δ 0 iff α ( J ) = 0 ; β Δ 1 iff β ( J ) = 1 mod ( 2 ) ;
In particular, this means that the generic elements X g ( 0 ) and Y g ( 1 ) take the form:
X = j = 1 r x j H j + α Δ 0 x α E α , Y = β Δ 1 y β E β ,
The Cartan involution can be realized as a similarity transformation by the Cartan subgroup element J = exp ( π i J ) which acts on X and Y above by:
J 1 X J = X , J 1 Y J = Y .
Therefore, the imposed reduction can be written as:
Q ( x , t , λ ) = Q ( x , t , λ ) g ( 1 ) , i . e . , ( ξ ± ) ( x , t , λ ) = ( ξ ) 1 ( x , t , λ ) .
Note that this is in agreement with the chosen parametrization (76).
Remark 8. 
Note that the mapping λ λ maps Q 1 Q 3 and Q 2 Q 4 ; as a result it preserves the analyticity regions of both ξ + and ξ . The mapping λ λ maps Q 1 Q 2 and Q 3 Q 4 , i.e., it maps the analyticity region of ξ + into the analyticity region of ξ .

7.2. NLEE on Symmetric Spaces: A.III

We will limit ourselves to the Hermitian symmetric spaces, see [21,48].
Here, we consider multicomponent derivative NLS-type (DMNLS) equations [35,77] and multicomponent GI (MGI) equations [52,53]. Note that the RHP for the DMNLS equations is not canonically normalized which requires slight modifications in applying the dressing method. However, DMNLS and MGI equations are gauge equivalent.
For L linear in λ we obtain the well-known multicomponent NLS equations, see [45]. For the symmetric spaces of the types A.III, C.I and D.III the Cartan involution is fixed by the choice of the matrix J , which takes the form [21]:
J = 1 1 0 0 1 1 .
The coefficients of the Lax operators are:
Q ( x , t ) = 0 q 1 q 1 0 , V 2 ( x , t ) = 1 2 q 1 q 1 0 0 q 1 q 1 , V 3 ( x , t ) = 0 q 3 1 6 q 1 q 1 q 1 q 3 1 6 q 1 q 1 q 1 0 , V 4 ( x , t ) = 1 2 q 1 q 3 + q 3 q 1 1 12 q 1 q 1 q 1 q 1 0 0 q 1 q 3 q 3 q 1 + 1 12 q 1 q 1 q 1 q 1 , V 5 ( x , t ) = 0 V 5 ; 12 V 5 ; 21 0 , V 5 ; 12 = q 5 1 6 ( q 1 q 1 q 3 + q 1 q 3 q 1 + q 3 q 1 q 1 ) + 1 120 q 1 q 1 q 1 q 1 q 1 V 5 ; 21 = q 5 1 6 ( q 1 q 1 q 3 + q 1 q 3 q 1 + q 3 q 1 q 1 ) + 1 120 q 1 q 1 q 1 q 1 q 1 .
The first of the equations in (181) is satisfied if
V 3 ( x , t ) = i a d J 1 Q x = i 2 0 q 1 , x q 1 , x 0 .
Comparing this expression for V 3 with the one from (189) we find that
Q 3 1 2 0 q 3 q 3 0 = 1 4 0 i q 1 , x 1 3 q 1 q 1 q 1 i q 1 , x + 1 3 q 1 q 1 q 1 .
Inserting Q 3 into the expression for V 4 in (189) we find:
V 4 = 1 4 i ( q 1 q 1 , x q 1 , x q 1 ) + 1 2 q 1 q 1 q 1 q 1 0 0 i ( q 1 q 1 , x q 1 , x q 1 ) 1 2 q 1 q 1 q 1 q 1
The corresponding MNLS is obtained from the third of Equations (181). Taking into account that
[ V 3 , V 2 ] = i 4 0 q 1 , x q 1 q 1 + q 1 q 1 q 1 , x q 1 , x q 1 q 1 + q 1 q 1 q 1 , x 0 ,
and
[ V 4 , Q ] = 1 4 0 Y 4 ; 12 Y 4 ; 21 0 , Y 4 ; 12 = 2 i q 1 q 1 , x q 1 i q 1 , x q 1 q 1 i q 1 q 1 q 1 , x + q 1 q 1 q 1 q 1 q 1 Y 4 , 21 = 2 i q 1 q 1 , x q 1 i q 1 , x q 1 q 1 i q 1 q 1 q 1 , x q 1 q 1 q 1 q 1 q 1 ,
we obtain the MNLS in the form:
i q 1 , t + 1 2 q 1 , x x + i 2 q 1 q 1 , x q 1 + 1 4 q 1 q 1 q 1 q 1 q 1 = 0 ,
If we put τ = 2 t , the equation takes the more familiar form of the vector GI equation:
i q 1 , τ + q 1 , x x + i q 1 q 1 , x q 1 + 1 2 q 1 q 1 q 1 q 1 q 1 = 0 ,
It remains to analyze the last of Equations (181). To this end, we need:
[ V 4 , V 2 ] = i 8 W 4 ; 11 0 W 4 ; 22 . W 4 , 11 = q 1 ( q 1 q 1 ) x q 1 q 1 , x q 1 q 1 q 1 q 1 q 1 q 1 q 1 , x , W 4 ; 22 = q 1 ( q 1 q 1 ) x q 1 + q 1 , x q 1 q 1 q 1 + q 1 q 1 q 1 q 1 , x .
Then, the last of Equation (181) becomes:
i ( q 1 q 1 ) t + 1 2 ( q 1 q 1 , x x + q 1 , x x q 1 ) i 2 q 1 ( q 1 q 1 ) x q 1 = 0 .
It is easy to check that Equation (195) directly follows from (193).
Up to now in this subsection, we have treated the matrix q 1 as a generic k × p rectangular matrix. Below, however, we would like to outline a special case: the vector NLS, also known as the Manakov model.
Remark 9. 
Manakov proposed the first vector NLS equation with only 2 components [7]. The reason was that he wanted to treat a special case of the VNLS, which describes the propagation of birefringent optical pulses in optical fibers. Since our Lax operator above is a quadratic pencil, the Equation (196) we derived is a vector generalization of the GI equation. Of course, the method of solution of the VNLS is easily generalized to p-component vectors, p 2 .
i q τ + 2 q x 2 + i q x , q q + 1 2 q , q 2 q ( x , t ) = 0 .
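To illustrate how such a multicomponent equation can be explored numerically, the following sketch advances a p -component field under the GI-type flow (196) using a pseudo-spectral right-hand side and an explicit Runge–Kutta step. The grid, time step and initial profile are arbitrary choices, and the inner products are read as Hermitian, ( q x , q ) → q † q x , which is one consistent interpretation of (196); the sketch is a rough numerical experiment, not a definitive solver:
import numpy as np

p, Nx, L = 2, 256, 40.0                                   # components, grid points, box length
x = np.linspace(-L / 2, L / 2, Nx, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(Nx, d=L / Nx)

q = np.vstack([0.8 * np.exp(-x**2), 0.5 * np.exp(-(x - 2.0)**2)]).astype(complex)   # p x Nx initial data

def dx(f):                                                # spectral x-derivative, applied row-wise
    return np.fft.ifft(1j * k * np.fft.fft(f, axis=1), axis=1)

def rhs(q):                                               # q_tau = i q_xx - (q^+ q_x) q + (i/2) |q|^4 q
    qx = dx(q)
    dens = np.sum(np.conj(q) * q, axis=0)                 # (q^+ q)   = |q|^2
    cross = np.sum(np.conj(q) * qx, axis=0)               # (q^+ q_x)
    return 1j * dx(qx) - cross * q + 0.5j * dens**2 * q

dt, nsteps = 1e-3, 200
mass0 = (np.abs(q)**2).sum() * (L / Nx)
for _ in range(nsteps):                                   # classical 4th-order Runge-Kutta
    k1 = rhs(q); k2 = rhs(q + 0.5 * dt * k1)
    k3 = rhs(q + 0.5 * dt * k2); k4 = rhs(q + dt * k3)
    q = q + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
print(mass0, (np.abs(q)**2).sum() * (L / Nx))             # the L^2 norm should stay approximately conserved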

7.3. MNLS Equations Related to D.III and C.I Symmetric Spaces

Equation (194) provides the NLEE related to the A.III type symmetric spaces. Similar considerations may be applied to the other two classes of symmetric spaces: D.III and C.I. Indeed, the block structure of the Lax pair for these two classes of symmetric spaces is the same as the one of (189) and J has the form of (188) where the unit matrices 1 1 have equal dimensions N. The substantial difference between these symmetric spaces is that they are subject to an additional reduction. Indeed, for D.III we must require that Q s s o ( 2 N ) , which means that they must satisfy in addition (see [71]):
Q s + S 0 Q s T S 0 1 = 0 , S 0 = 0 s 0 s 0 0 , s 0 = p = 1 N ( 1 ) p + 1 E p , N p , s 0 s 0 = 1 1 N .
Therefore, S 0 2 = 1 1 2 N and
Q s = 0 q s q s 0 , q s = s 0 q s T s 0 , q s = s 0 q s T s 0 .
For C.I type symmetric spaces we must have Q s s p ( 2 N ) which means that
Q s + S 1 Q s T S 1 1 = 0 , S 1 = 0 s 1 s 1 0 , s 1 = p = 1 N ( 1 ) p + 1 E p , N p + 1 , s 1 s 1 = 1 1 N .
Therefore, S 1 2 = 1 1 2 N and
Q s = 0 q s q s 0 , q s = s 1 q s T s 1 , q s = s 1 q s T s 1 .
The reason for such a choice of the definitions of the algebras s o ( 2 N ) and s p ( 2 N ) is that their Cartan subalgebras are represented by diagonal matrices. The explicit forms of S 0 and S 1 are given in [71], see also Appendix A.
For example, the NLEE related to the symmetric spaces C.I and D.III of s p ( 6 ) and s o ( 8 ) , respectively, are obtained by inserting into the equation below:
i q 1 , τ + q 1 , x x + i q 1 q 1 , x q 1 + 1 2 q 1 q 1 q 1 q 1 q 1 = 0 ,
the following matrices for q ( x , t ) :
q C . I = q 1 q 3 q 4 q 2 q 5 q 3 q 6 q 2 q 1 , q D . III = q 1 q 2 q 3 0 q 5 q 4 0 q 3 q 6 0 q 4 q 2 0 q 6 q 5 q 1 .
In both cases, such substitutions into (194) can be viewed as special reductions to 6-component MNLS. Of course, these MNLS cannot be equivalent since they are related to non-isomorphic symmetric spaces. The parametrizations in Equation (202) have been obtained using the Cartan-Weyl basis given in Appendix A.

7.4. MNLS Related to BD.I-Type Symmetric Spaces

The basic parametrization for BD.I, which is isomorphic to S O ( 2 r + 1 ) / S O ( 2 r 1 ) × S O ( 2 ) , has a block-matrix form different from the one for A.III, namely:
ξ ( x , t , λ ) = exp ( Q ( x , t , λ ) ) , Q ( x , t , λ ) = s = 1 λ 2 s + 1 Q 2 s 1 ( x , t ) = Q ( x , t , λ ) , ( ξ ± ) ( x , t , λ ) = ( ξ ) 1 ( x , t , λ ) , Q 2 s 1 ( x , t ) = 0 q 2 s 1 T 0 q 2 s 1 0 s 0 q 2 s 1 0 q 2 s 1 s 0 0 , J = 1 0 0 0 0 0 0 0 1 .
Here, the matrices are ( 2 r + 1 ) × ( 2 r + 1 ) , while q 2 s 1 are 2 r 1 -component vectors; this fixes the block structure of the matrices in this subsection. We also introduce
ξ J ξ 1 ( x , t , λ ) = J s = 1 λ s V s ( x , t ) ,
where
V 1 Q ( x , t ) = [ J , Q 1 ( x , t ) ] , V 2 = 1 2 [ Q 1 , Q ] , V 3 = [ J , Q 3 ] 1 6 ad Q 1 3 J ,
V 1 = 0 q 1 T 0 q 1 0 s 0 q 1 0 q 1 s 0 0 , V 2 = ( q 1 , q 1 ) 0 0 0 q 1 q 1 T + s 0 q 1 q 1 s 0 0 0 0 ( q 1 , q 1 ) , V 3 = 0 q 3 T + w 3 T 0 q 3 + u 3 0 s 0 ( q 3 + w 3 ) 0 ( q 3 + u 3 T ) s 0 0 , V 4 = V 4 ; 11 0 0 0 V 4 ; 22 0 0 0 V 4 , 11 .
We have used the following notations above:
w 3 = 1 3 2 ( q 1 , q 1 ) q 1 + ( q 1 s 0 q 1 ) s 0 q 1 , u 3 = 1 3 2 ( q 1 , q 1 ) q 1 + ( q 1 s 0 q 1 ) s 0 q 1 , V 4 ; 11 = ( q 1 , q 3 ) + ( q 3 , q 1 ) 1 3 ( q 1 , q 1 ) 2 + 1 6 ( q 1 s 0 q 1 ) ( q 1 s 0 q 1 ) , V 4 ; 22 = 1 3 ( q 1 q 1 T s 0 q 1 q 1 s 0 ) ( q 1 , q 1 ) q 1 q 3 T + s 0 q 3 q 1 s 0 q 3 q 1 T + s 0 q 1 q 3 s 0 .
The potentials are:
V 3 ( x , t ) = 0 V 3 ; 2 T 0 V 3 ; 1 0 s 0 V 3 ; 2 0 V 3 ; 1 T s 0 0 , V 3 ; 1 = w + 1 3 ( p T s 0 p ) s 0 q 2 3 ( q , p ) p , V 3 ; 2 = s 0 v + 1 3 ( q T s 0 q ) p 2 3 ( q , p ) s 0 q .
V 4 = V 4 ; 11 0 0 0 V 4 ; 22 0 0 0 V 4 ; 33 , Q 3 ( x , t ) = 0 v T 0 w 0 s 0 v 0 w T s 0 0 , V 4 ; 11 = V 4 ; 33 = i ( q x , p ) ( q , p x ) + ( q , p ) 2 1 2 ( q T s 0 q ) ( p T s 0 p ) , V 4 ; 22 = i p x q T p q x T + s 0 ( q x p T q p x T ) s 0 ( q . p ) p q T s 0 q p T s 0 .
Since U 2 = V 2 , the first of Equations (181) gives:
V 3 ( x , t ) = a d J 1 i Q x = i 0 q x T 0 p x 0 s 0 q x 0 p x T s 0 0 ,
which in components give:
v = i q x + 2 3 ( p , q ) q 1 3 ( q T s 0 q ) s 0 p , w = i p x + 2 3 ( p , q ) p 1 3 ( p T s 0 p ) s 0 q .
The second equation in (181) is identically satisfied as a consequence of (211).
Finally, the equations of motion are:
λ : i V 3 x i Q t + [ Q , V 4 ] + [ U 2 , V 3 ] = 0 , λ 0 : i V 4 x i U 2 t + [ U 2 , V 4 ] = 0 .
If, in addition, we put p = ϵ q , we obtain [78]:
i q t + 2 q x 2 + i ϵ s 0 ( q s 0 q ) q x + s 0 ( q , q ) ( q s 0 q ) s 0 q 1 2 | ( q s 0 q ) | 2 + 2 i ϵ ( q , q ) q = 0 .
One can check that the second equation in (212) holds identically as a consequence of (213).

8. Soliton Solutions of the MNLS Equations

8.1. Dressing for NLEE on Symmetric Spaces: A.III Case

In this case, the dressing factor satisfies two involutions:
u ( x , t , λ ) = u 1 ( x , t , λ ) , J u ( x , t , λ ) J 1 = u ( x , t , λ ) ,
Then, u ( x , t , λ ) and its inverse take the form
u ( x , t , λ ) = 1 1 + ( c 1 ( λ ) 1 ) | N 1 m 1 | + ( c 1 ( λ ) 1 ) J | N 1 m 1 | J 1 , u 1 ( x , t , λ ) = 1 1 + 1 c 1 ( λ ) 1 | n 1 M 1 | + 1 c 1 ( λ ) 1 J | n 1 M 1 | J 1 .
Here, the ‘polarization’ vectors | N 1 , m 1 | , | n 1 and M 1 | determine the residues of u and u 1 . They must satisfy Equation (114), which must be valid identically with respect to λ . Repeating the arguments of Section 6.4, we find the x and t dependence of the polarization vectors:
| n 1 = χ 0 + ( x , t , λ 1 + ) | n 10 , | N 1 = χ 1 + ( x , t , λ 1 + ) | N 10 , m 1 | = m 10 | χ ^ 0 ( x , t , λ 1 ) , M 1 | = M 10 | χ ^ 1 ( x , t , λ 1 ) ,
where | n 10 , | N 10 , m 10 | and M 10 | are constant polarization vectors. We assume that we know χ 0 ± ( x , t , λ ) , which are related to the regular solutions of the RHP. Typically they correspond to vanishing potentials U 10 = 0 and U 20 = 0 ; thus, we must have:
χ 0 ± ( x , t , λ ) = e i ( λ 2 x + λ 4 t ) J .
We will also need to impose proper normalization conditions on the polarization vectors so that
m 1 | χ ^ 0 ( x , t , λ 1 ) = m ˜ 10 | e ( z 1 + i ϕ 1 ) J . | n 1 = e ( z 1 i ϕ 1 ) J | n ˜ 10 , m ˜ 10 ; k | n ˜ 10 ; k = 1 , s = 1 m arg n ˜ 10 ; k = 0 , s = 1 m arg m ˜ 10 ; k = 0 , s = m + 1 n arg n ˜ 10 ; k = 0 , s = m + 1 n arg m ˜ 10 ; k = 0 , z 1 = 2 μ 1 ν 1 x + 4 μ 1 ν 1 ( μ 1 2 ν 1 2 ) t + x 01 , ϕ 1 = ( μ 1 2 ν 1 2 ) x + ( μ 1 4 6 μ 1 2 ν 1 2 + ν 1 4 ) t + ϕ 10 ,
Thus, we find the following expressions for | M 1 and | N 1 in terms of | m 1 and | n 1 :
| M 1 = m 1 | n 1 + i ν 1 μ 1 m 1 | J | n 1 J 1 | m 1 , | N 1 = m 1 | n 1 + i ν 1 μ 1 m 1 | J | n 1 J 1 | n 1 ,
It remains to calculate the potentials. To this end, we must consider the limit of Equation (114) for λ → ∞ , which takes the form:
U 2 , 1 s + λ U 1 , 1 s = λ 2 ( J u J u ^ ( x , t , λ ) ) ,
where we have assumed U 20 = 0 and U 1 , 0 = 0 . Taking the limit λ → ∞ of (220), we obtain:
U 1 , 1 s = 2 i ν 1 [ J , | N 1 m 1 | + J | N 1 m 1 | J ] , U 2 , 1 s = 2 i ν 1 U 1 , 1 s ( | N 1 m 1 | + J | N 1 m 1 | J ) .
Skipping the details we find:
U 1 ( x , t ) = 0 q , 1 s q , 1 s 0 , q , 1 s = 4 ν 1 r 01 | n ˜ 10 , 2 m ˜ 10 , 1 | e 2 i ϕ 1 cosh ( z 1 ) i ν 1 μ 1 sinh ( z 1 ) , U 2 ( x , t ) = 8 ν 1 2 r 01 2 1 cosh 2 ( z 1 ) + ν 1 2 μ 1 2 sinh 2 ( z 1 ) | n ˜ 10 , 1 m ˜ 10 , 1 | 0 0 | n ˜ 10 , 2 m ˜ 10 , 2 |
Example 5 
(A.III type symmetric spaces, generic case). Let us specify the form of the naked polarization vectors:
| n 1 ; 0 = | n 1 ; 01 | n 1 ; 02 , m 1 ; 0 | = n 1 ; 01 | , n 1 ; 02 | , | n 1 = e z ˜ 1 i ϕ ˜ 1 | n 1 ; 01 e z ˜ 1 + i ϕ ˜ 1 | n 1 ; 02 , m 1 | = n 1 ; 01 | e z ˜ 1 + i ϕ ˜ 1 , n 1 ; 02 | e z ˜ 1 i ϕ ˜ 1 ,
Therefore, the scalar products are given by:
m 1 | n 1 = 2 r 10 cosh ( 2 z 1 ) , m 1 | J | n 1 = 2 r 10 sinh ( 2 z 1 ) , z 1 = z ˜ 1 + x 01 , r 01 = m 1 ; 01 | n 1 ; 01 m 1 ; 02 | n 1 ; 02 , x 01 = 1 2 ln m 1 ; 01 | n 1 ; 01 m 1 ; 02 | n 1 ; 02 .
As a result:
| N 1 = μ 1 2 μ 1 2 cosh 2 ( 2 z 1 ) + ν 1 2 sinh 2 ( 2 z 1 ) ( cosh ( 2 z 1 ) i ν 1 μ 1 sinh ( 2 z 1 ) ) e z ˜ 1 i ϕ ˜ 1 | n 1 ; 01 ( cosh ( 2 z 1 ) + i ν 1 μ 1 sinh ( 2 z 1 ) ) e z ˜ 1 + i ϕ ˜ 1 | n 1 ; 02
The corresponding U 1 ( x , t ) will be given by:
U 1 ( x , t ) = 2 i ν 1 J , | N 1 m 1 | + J | N 1 m 1 | J = 0 q 1 s p 1 s 0 , q 1 s = 4 i ν 1 μ 1 2 | n 1 ; 01 m 1 ; 02 | μ 1 2 cosh 2 ( 2 z 1 ) + ν 1 2 sinh 2 ( 2 z 1 ) cosh ( 2 z 1 ) i ν 1 μ 1 sinh ( 2 z 1 ) e 2 i ϕ ˜ 1 , p 1 s = q 1 s .
The coefficient U 2 ( x , t ) is expressed through the matrix elements of U 1 ( x , t ) by:
U 2 = 2 q 1 s q 1 s 0 0 q 1 s q 1 s .
Example 6 
(A.III type symmetric spaces, VNLS case). Now, the naked polarization vectors:
| n 1 ; 0 = n 10 ; 1 n 10 ; 2 , m 1 ; 0 | = n 10 ; 1 , n 10 ; 2 | , | n 1 = e z ˜ 1 i ϕ ˜ 1 n 10 ; 1 e z ˜ 1 + i ϕ ˜ 1 n 10 ; 2 , m 1 | = n 10 ; 1 e z ˜ 1 + i ϕ ˜ 1 , n 10 ; 2 | e z ˜ 1 i ϕ ˜ 1 ,
Therefore, the scalar products are given by:
m 1 | n 1 = 2 r 10 cosh ( 2 z 1 ) , m 1 | J | n 1 = 2 r 10 sinh ( 2 z 1 ) , z 1 = z ˜ 1 + x 01 , r 01 = | n 10 ; 1 | ( n 10 ; 2 , n 10 ; 2 ) , x 01 = 1 2 ln | n 10 ; 1 | 2 ( n 10 ; 2 , n 10 ; 2 ) ,
and
| N 1 = μ 1 2 μ 1 2 cosh 2 ( 2 z 1 ) + ν 1 2 sinh 2 ( 2 z 1 ) ( cosh ( 2 z 1 ) i ν 1 μ 1 sinh ( 2 z 1 ) ) e z ˜ 1 i ϕ ˜ 1 n 10 ; 1 ( cosh ( 2 z 1 ) + i ν 1 μ 1 sinh ( 2 z 1 ) ) e z ˜ 1 + i ϕ ˜ 1 n 10 ; 2
The corresponding U 1 ( x , t ) will be given by:
U 1 ( x , t ) = 2 i ν 1 J , N 1 m 1 T + J N 1 m 1 T J = 0 q 1 s p 1 s 0 , q 1 s = 4 i ν 1 μ 1 2 n 10 ; 1 m 10 ; 2 | μ 1 2 cosh 2 ( 2 z 1 ) + ν 1 2 sinh 2 ( 2 z 1 ) cosh ( 2 z 1 ) i ν 1 μ 1 sinh ( 2 z 1 ) e 2 i ϕ ˜ 1 , p 1 s = q 1 s .
The coefficient U 2 ( x , t ) is expressed through the matrix elements of U 1 ( x , t ) by:
U 2 = 2 ( q 1 s , q 1 s ) 0 0 q 1 s q 1 s .
Figure 6 shows that the one-soliton solution of (194) has two maxima. Another important difference compared with the scalar case is that the phase of the A.III type soliton is not linear in x.

8.2. Dressing for NLEE on Symmetric Spaces: C.I and D.III Cases

In this case, the dressing factors are subject to three involutions, which determine their structure. The first two involutions are the same as in Equation (189). The third involution comes from the requirement that the FAS and the dressing factors belong to the corresponding Lie group ( S P ( 2 r ) or S O ( 2 r ) ), see Appendix A.
The dressing factors that satisfy the above constraints take the form
u ( x , t , λ ) = 1 1 λ 1 + λ 1 λ λ 1 | N 1 m 1 | + λ 1 + λ 1 λ + λ 1 J | N 1 m 1 | J 1 + λ 1 + λ 1 λ λ 1 + S a 1 | M 1 n 1 | S a λ 1 + λ 1 λ + λ 1 + S a 1 J | M 1 n 1 | J 1 S a , u 1 ( x , t , λ ) = 1 1 + λ 1 + λ 1 λ λ 1 + | n 1 M 1 | λ 1 + λ 1 λ + λ 1 + J 1 | n 1 M 1 | J λ 1 + λ 1 λ λ 1 S a | m 1 N 1 | S a 1 + λ 1 + λ 1 λ + λ 1 S a J 1 | m 1 N 1 | J S a 1 .
First, we need to ensure that u ( x , t , λ ) and u 1 ( x , t , λ ) in (233) map a FAS of L 0 into a FAS of L 1 and vice versa. This will be true if they satisfy the equations:
i u x + ( U 2 + λ U 1 λ 2 ) u ( x , t , λ ) u ( x , t , λ ) ( U 20 + λ U 10 λ 2 ) = 0 , i u ^ x + ( U 20 + λ U 10 λ 2 ) u ^ ( x , t , λ ) u ^ ( x , t , λ ) ( U 2 + λ U 1 λ 2 ) = 0 ,
identically with respect to λ . Since we have chosen a fractional-linear dependence on λ , it is easy to see that Equation (234) will hold true if all the residues of the left-hand sides of (234) vanish. For example, taking the residues at λ = λ 1 ± we obtain:
i | N 1 m 1 | x + ( U 2 + λ 1 U 1 λ 1 , 2 ) | N 1 m 1 | | N 1 m 1 | ( U 20 + λ 1 U 10 λ 1 , 2 ) = 0 , i | n 1 M 1 | x + ( U 20 + λ 1 + U 10 λ 1 + , 2 ) | n 1 M 1 | | n 1 M 1 | ( U 2 + λ 1 + U 1 λ 1 + , 2 ) = 0 ,
Thus, it is easy to check, that the left-hand sides of Equation (235) vanish if:
| N 1 = χ 1 ( x , t , λ 1 ) | N 10 , m 1 | = m 10 | χ ^ 0 ( x , t , λ 1 ) , M 1 | = M 10 | χ ^ 1 + ( x , t , λ 1 + ) , | n 1 = χ 0 + ( x , t , λ 1 + ) | n 10 ,
where | n 10 , | N 10 , m 10 | and M 10 | are constant polarization vectors. Analogously, using the symmetries, we find that all the other residues of Equation (234) vanish.
Next, we note that u ( x , t , λ ) and u 1 ( x , t , λ ) obviously satisfy conditions (b) and (c). However, it is far from obvious that they belong to the corresponding group. Since the expression for u 1 ( x , t , λ ) is obtained using the group properties of u ( x , t , λ ) , it is enough to check that u u 1 1 1 . Note that the product u u 1 has poles of second order. It is easy to see that the corresponding residues vanish provided:
m 1 | S a | m 1 = 0 , n 1 | S a | n 1 = 0 , m 1 | J S a J | m 1 = 0 , n 1 | J S a J | n 1 = 0 , N 1 | S a | N 1 = 0 , M 1 | S a | M 1 = 0 , N 1 | J S a J | N 1 = 0 , M 1 | J S a J | N 1 = 0 ,
Remember, that these vectors depend on x and t and the conditions (237) must be identities. But we also know that these polarization vectors must satisfy (116) and (119). Therefore, we have:
n 1 | S a | n 1 = n 10 | ( χ 0 + ( x , λ 1 + ) ) T S a χ 0 + ( x , λ 1 + ) | n 10 = n 10 | S a | n 10 , N 1 | S a | N 1 = N 10 | ( χ ( x , λ 1 ) ) T S a χ ( x , λ 1 ) | N 10 = N 10 | S a | N 10 , N 1 | J S a J | N 1 = N 10 | ( χ ( x , λ 1 ) ) T J S a J χ ( x , λ 1 ) | N 10 = N 10 | J ( χ ( x , λ 1 ) ) T S a χ ( x , λ 1 ) J | N 10 = N 10 | J S a J | N 10 .
The proof that all other scalar products in (237) are x and t independent is performed similarly. Thus, the conditions (237) in fact impose restrictions only on the initial polarization vectors:
m 10 | S a | m 10 = 0 , n 10 | S a | n 10 = 0 , m 10 | J S a J | m 10 = 0 , n 10 | J S a J | n 10 = 0 , N 10 | S a | N 10 = 0 , M 10 | S a | M 10 = 0 , N 10 | J S a J | N 10 = 0 , M 10 | J S a J | N 10 = 0 ,
The last condition that is imposed on the polarization vectors comes from Equation (133) and reads
| n 10 = ( m 10 | ) , | N 10 = ( M 10 | ) .
Assuming that the regular solution of the Lax operator corresponds to vanishing potentials U 20 = U 10 = 0 , which means that
χ ± ( x , t , λ ) = exp ( i λ 2 ( x + λ 2 t ) ) ,
Using the parametrization λ_1^± = μ_1 ± i ν_1 for the eigenvalues and the parametrization above for the polarization vectors, we obtain
⟨ m_1 | = ⟨ m̃_{10} | e^{( z_1 + i ϕ_1 ) J} , | n_1 ⟩ = e^{( z_1 − i ϕ_1 ) J} | ñ_{10} ⟩ , z_1 = 2 μ_1 ν_1 ( x + 2 ( μ_1² + ν_1² ) t ) + r_{10} , ϕ_1 = ( μ_1² − ν_1² ) x + ( μ_1⁴ − 6 ν_1² μ_1² + ν_1⁴ ) t + ϕ_{10} ,
where the notations and normalization conditions on the polarization vectors are the same as in (218). Taking the residues at λ_1^- , λ_1^+ leads to
| m 1 = m 1 | n 1 1 1 + λ 1 + λ 1 λ 1 + + λ 1 J m 1 | J | n 1 | M 1 + λ 1 + λ 1 2 λ 1 m 1 | J S | m 1 J S | N 1 , | n 1 = m 1 | n 1 1 1 λ 1 + λ 1 λ 1 + + λ 1 m 1 | J | n 1 J | N 1 λ 1 + λ 1 2 λ 1 + n 1 | J S | n 1 S 1 J | M 1
which can be written down as the following sets of linear equations:
| m 1 | n 1 = X 1 X 2 Y 1 Y 2 | M 1 | N 1
Here, the block matrices X_{1,2} and Y_{1,2} are given by:
X 1 = m 1 | n 1 1 1 + i ν 1 μ 1 J m 1 | J | n 1 , X 2 = i ν 1 λ 1 η a β 1 J S a , Y 1 = i ν 1 λ 1 + η a β ¯ 1 S a J , Y 2 = m 1 | n 1 1 1 i ν 1 μ 1 m 1 | J | n 1 J , β 1 = m 1 | S a J | m 1 = m 10 | S a J | m 10 , β ¯ 1 = n 1 | J S a | n 1 = n 10 | J S a | n 10 .
where S_a² = η_a 1 1 , η_a = ± 1 . This system is solved similarly to Equations (147) and (160). Using the fact that S J = J S, we can simplify the result to:
X 1 X 2 Y 1 Y 2 1 = g 2 w 2 Y 2 X 2 Y 1 X 1
where
g 2 w 2 = μ 1 2 ( μ 1 2 + ν 1 2 ) ( μ 1 2 + ν 1 2 ) ( μ 1 2 m 1 | n 1 + ν 1 2 m 1 | J | n 1 ) + μ 1 2 ν 1 2 η a β 1 β ¯ 1 = μ 1 2 ( μ 1 2 + ν 1 2 ) 4 r 01 2 ( μ 1 2 + ν 1 2 ) ( μ 1 2 cosh 2 ( 2 z 1 ) + ν 1 2 sinh 2 ( 2 z 1 ) ) + μ 1 2 ν 1 2 η a β 1 β ¯ 1 ,
Thus, we find the following expressions for | M 1 and | N 1 in terms of | m 1 and | n 1 :
| M 1 = g 2 w 2 m 1 | n 1 i ν 1 μ 1 m 1 | J | n 1 J | m 1 i ν 1 η a λ 1 β 1 S a J | n 1 , | N 1 = g 2 w 2 m 1 | n 1 i ν 1 μ 1 m 1 | J | n 1 J | n 1 + i ν 1 η a λ 1 + β ¯ 1 S a J | m 1 ,
It remains to calculate the potentials. To this end, we use the first equation in (234) in the limit λ → ∞, assuming U_{2,0} = 0 and U_{1,0} = 0 . In this limit the term i u_x vanishes and the rest of the equation is linear in λ. Thus, we obtain:
U 1 , 1 s = 2 i ν 1 [ J , R 1 ( x , t ) ] , U 2 , 1 s = 2 i ν 1 U 1 , 1 s R 1 ( x , t ) = 2 i ν 1 [ J , R 2 ( x , t ) ] ,
where
R 1 ( x , t ) = | N 1 m 1 | + J | N 1 m 1 | J + S a 1 | M 1 n 1 | S a S a 1 J | M 1 n 1 | J S a , R 2 ( x , t ) = λ 1 ( | N 1 m 1 | + J | N 1 m 1 | J ) + λ 1 + ( S a 1 | M 1 n 1 | S a + S a 1 J | M 1 n 1 | J S a ) .
After some calculations we obtain:
U 1 , 1 s = 0 q 1 , 1 s q 1 , 1 s 0 , U 2 , 1 s = q 2 , 1 s 0 0 q 2 , 1 s , q 1 , 1 s = 8 i ν 1 | N 1 , 1 m 1 , 2 | σ 1 | M 1 , 2 n 1 , 1 | s 1 , q 2 , 1 s = 32 ν 1 2 | N 1 , 1 m 1 , 2 | σ 1 | M 1 , 2 n 1 , 1 | s 1 | N 1 , 2 m 1 , 1 | σ 2 | M 1 , 1 n 1 , 2 | s 2 = 1 2 q 1 , 1 s q 1 , 1 s .
The expression for U 2 , 1 s coincides with V 2 from Equation (189).
For the case of sp(6), q_{1,1s} is a 3 × 3 matrix containing 6 independent functions, see q_{C.I} in Equation (202). Their moduli are plotted in Figure 7.
For the case of so(8), q_{1,1s} is a 4 × 4 matrix containing 6 independent functions, see q_{D.III} in Equation (202). Their moduli are plotted in Figures 8 and 9.

8.3. Dressing for NLEE on Symmetric Spaces: BD.I Cases

The Cartan involution is given by a matrix C 0 , with
C_0 = e^{iπJ} = diag ( −1 , 1 1 , −1 ) , J = diag ( 1 , 0 , … , 0 , −1 ) , and S_0 is the block anti-diagonal matrix with block rows ( 0 , 0 , 1 ) , ( 0 , s_0 , 0 ) , ( 1 , 0 , 0 ) .
The dressing factors have the same form as in (233):
u ( x , t , λ ) = 1 1 λ 1 + λ 1 λ λ 1 | N 1 m 1 | + λ 1 + λ 1 λ + λ 1 J | N 1 m 1 | J 1 + λ 1 + λ 1 λ λ 1 + S a 1 | M 1 n 1 | S a λ 1 + λ 1 λ + λ 1 + S a 1 J | M 1 n 1 | J 1 S a , u 1 ( x , t , λ ) = 1 1 + λ 1 + λ 1 λ λ 1 + | n 1 M 1 | λ 1 + λ 1 λ + λ 1 + J 1 | n 1 M 1 | J λ 1 + λ 1 λ λ 1 S a | m 1 N 1 | S a 1 + λ 1 + λ 1 λ + λ 1 S a J 1 | m 1 N 1 | J S a 1 .
where the polarization vectors | n 10 , m 10 | , | N 10 and M 10 | have n + 1 components. The explicit form of χ 0 ± ( x , t , λ ) is:
χ_0^± ( x , t , λ_1^± ) = exp ( − i ( λ_1^{±,2} x + λ_1^{±,4} t ) ) = exp ( ± z_1 ( x , t ) ) exp ( − i ϕ_1 ( x , t ) ) , z_1 ( x , t ) = 2 μ_1 ν_1 ( x + 2 ( μ_1² − ν_1² ) t ) , ϕ_1 ( x , t ) = ( μ_1² − ν_1² ) x + ( μ_1⁴ + ν_1⁴ ) t .
The involutions can then be written as
S u T ( x , t , λ ) S 1 = u 1 ( x , t , λ ) , u ( x , t , λ ) = u 1 ( x , t , λ ) , C 0 u ( x , t , λ ) C 0 = u ( x , t , λ ) , C 0 2 = 1 1 .
Note also that [ C 0 , S ] = 0 .
The dressing factors subject to the three involutions are presented in (233). Applying the same technique as above, we first derive the constraints on the polarization vectors that annihilate the second-order poles in u ( x , t , λ ) u ^ ( x , t , λ ).

9. Multi-Soliton Solutions

As is evident from the above, even the derivation of the one-soliton solutions is not trivial. The N-soliton solutions, however, can be derived using one of the following approaches.
The first is to repeat the dressing method starting from the one-soliton solution, rather than from the 'naked' solution χ_0 ( x , t , λ ). Thus:
χ 2 ( x , t , λ ) = u 2 ( x , t , λ ) χ 1 ( x , t , λ )
where u 2 ( x , t , λ ) has the same form as (233) but now:
| n 1 = χ 1 + ( x , t , λ 1 + ) | n 10 , | N 1 = χ 2 + ( x , t , λ 1 + ) | N 10 , m 1 | = m 10 | χ ^ 1 ( x , t , λ 1 ) , M 1 | = M 10 | χ ^ 1 ( x , t , λ 1 ) .
Solving the algebraic equations, we can use the same formulae as above; only now the expressions for ⟨ m_1 | n_1 ⟩ , ⟨ m_1 | J | n_1 ⟩ and ⟨ m_1 | C_0 | n_1 ⟩ will be more complicated.
Another way to derive the N-soliton solutions is to use a more complicated dressing factor. Below, we outline the dressing method for the N-soliton solutions of the MNLS in the case of three involutions.
The dressing factors for N solitons in that case take the form:
u Ns ( x , t , λ ) = 1 1 + j = 1 N u j ( x , t , λ ) , u Ns 1 ( x , t , λ ) = 1 1 + j = 1 N u ^ j ( x , t , λ ) , u ^ j ( x , t , λ ) = S a u j T ( x , t , λ ) S a 1 ,
where
u j ( x , t , λ ) = λ j + λ j λ λ j | N j m j | + λ j + λ j λ + λ j J | N j m j | J + λ j + λ j λ λ j + S a | M j n j | S a 1 λ j + λ j λ + λ j + S a J | M j n j | J S a 1 , u ^ j ( x , t , λ ) = λ j + λ j λ λ j S a | m j N j | S a 1 + λ j + λ j λ + λ j S a J | m j N j | J S a 1 + λ j + λ j λ λ j + | n j M j | λ j + λ j λ + λ j + J | n j M j | J .
The N-soliton dressing factor must satisfy Equation (234), which must hold identically with respect to λ. That means that all residues at λ = λ_j^± must vanish. Repeating the analysis in Section 7.2, we obtain the analogues of Equation (236):
| N_j ⟩ = χ_Ns^- ( x , λ_j^- ) | N_{j,0} ⟩ , ⟨ m_j | = ⟨ m_{j0} | χ̂_0^- ( x , λ_j^- ) , | n_j ⟩ = χ_0^+ ( x , λ_j^+ ) | n_{j,0} ⟩ , ⟨ M_j | = ⟨ M_{j0} | χ̂_Ns^+ ( x , λ_j^+ ) .
Next, we must ensure that u_Ns û_Ns = 1 1 . It is obvious that here we encounter residues of second order. The vanishing of these residues requires a set of conditions generalizing Equation (237):
m j | S a | m j = 0 , n j | S a | n j = 0 , m j | J S a J | m j = 0 , n j | J S a J | n j = 0 , M j | S a | M j = 0 , N j | S a | N j = 0 , M j | J S a J | M j = 0 , N j | J S a J | N j = 0 ,
for all j = 1 , , N . Using the fact that the solutions χ 0 ± ( x , λ j ± ) and χ Ns ± ( x , λ j ± ) are elements of the orthogonal or symplectic groups one can check that the constraints (261) in fact affect only the ‘naked’ polarization vectors:
m j 0 | S a | m j 0 = 0 , n j 0 | S a | n j 0 = 0 , M j 0 | S a | M j 0 = 0 , N j 0 | S a | N j 0 = 0 ,
Analyzing the residues of first order and requiring them to vanish, we derive a set of linear equations which allows us to find explicit expressions for the 'dressed' polarization vectors | N_j ⟩ and | M_j ⟩ in terms of | m_j ⟩ and | n_j ⟩. Below, we derive these equations for the simplest nontrivial case N = 2:
| m 1 = m 1 | n 1 + i ν 1 μ 1 m 1 | J | n 1 J | M 1 + λ 2 + λ 2 λ 2 + λ 1 m 1 | n 2 + λ 2 + λ 2 λ 2 + + λ 1 m 1 | J | n 2 J | M 2 + i ν 1 λ 1 m 1 | S a J | m 1 S a J | N 1 + λ 2 + λ 2 λ 1 λ 2 m 1 | S a | m 2 S a + λ 2 + λ 2 λ 2 + λ 1 m 1 | S a J | m 2 S a J | N 2 , | m 2 = λ 1 + λ 1 λ 1 + λ 2 m 2 | n 1 + λ 1 + λ 1 λ 1 + + λ 2 m 2 | J | n 1 J | M 1 + m 2 | n 2 + i ν 2 μ 2 m 2 | J | n 2 J | M 2 + λ 1 + λ 1 λ 2 λ 1 m 2 | S a | m 1 S a + λ 1 + λ 1 λ 2 + λ 1 m 2 | S a J | m 1 S a J | N 1 + i ν 2 λ 2 m 2 | S a J | m 2 S a J | N 2 .
and
| n 1 = i ν 1 λ 1 + n 1 | S a 1 J | n 1 S a 1 J | M 1 λ 2 + λ 2 λ 1 + λ 2 + n 1 | S a 1 | n 2 S a 1 λ 2 + λ 2 λ 2 + + λ 1 + n 1 | S a 1 J | n 2 S a 1 J | M 2 + n 1 | m 1 i ν 1 μ 1 n 1 | J | m 1 J | N 1 + λ 2 + λ 2 λ 1 + λ 2 n 1 | m 2 λ 2 + λ 2 λ 1 + + λ 2 n 1 | J | m 2 J | N 2 , | n 2 = λ 1 + λ 1 λ 2 + λ 1 + n 2 | S a 1 | n 1 S a 1 + λ 1 + λ 1 λ 2 + + λ 1 + n 2 | S a 1 J | n 1 S a 1 J | M 1 + i ν 2 λ 2 + n 2 | S a 1 J | n 2 S a 1 J | M 2 λ 1 + λ 1 λ 2 + λ 1 n 2 | m 1 λ 1 + λ 1 λ 2 + + λ 1 n 2 | J | m 1 J | N 1 + n 2 | m 2 i ν 2 μ 2 n 2 | J | m 2 J | N 2 .
| m 1 | m 2 | n 1 | n 2 = A 11 A 12 B 11 B 12 A 21 A 22 B 21 B 22 C 11 C 12 D 11 D 12 C 21 C 22 D 21 D 22 | M 1 | M 2 | N 1 | N 2 ,
where
A 11 = m 1 | n 1 + i ν 1 μ 1 m 1 | J | n 1 J , A 12 = λ 2 + λ 2 λ 2 + λ 1 m 1 | n 2 + λ 2 + λ 2 λ 2 + + λ 1 m 1 | J | n 2 J , B 11 = i ν 1 λ 1 m 1 | S a J | m 1 S a J , B 12 = λ 2 + λ 2 λ 1 λ 2 m 1 | S a | m 2 S a + λ 2 + λ 2 λ 2 + λ 1 m 1 | S a J | m 2 S a J , A 21 = λ 1 + λ 1 λ 1 + λ 2 m 2 | n 1 + λ 1 + λ 1 λ 1 + + λ 2 m 2 | J | n 1 J , A 22 = m 2 | n 2 + i ν 2 μ 2 m 2 | J | n 2 J , B 21 = λ 1 + λ 1 λ 2 λ 1 m 2 | S a | m 1 S a + λ 1 + λ 1 λ 2 + λ 1 m 2 | S a J | m 1 S a J , B 22 = i ν 2 λ 2 m 2 | S a J | m 2 S a J , C 11 = i ν 1 λ 1 + n 1 | S a 1 J | n 1 S a 1 J , C 12 = λ 2 + λ 2 λ 1 + λ 2 + n 1 | S a 1 | n 2 S a 1 λ 2 + λ 2 λ 2 + + λ 1 + n 1 | S a 1 J | n 2 S a 1 J , D 11 = n 1 | m 1 i ν 1 μ 1 n 1 | J | m 1 J , D 12 = λ 2 + λ 2 λ 1 + λ 2 n 1 | m 2 λ 2 + λ 2 λ 1 + + λ 2 n 1 | J | m 2 J , C 21 = λ 1 + λ 1 λ 2 + λ 1 + n 2 | S a 1 | n 1 S a 1 + λ 1 + λ 1 λ 2 + + λ 1 + n 2 | S a 1 J | n 1 S a 1 J , C 22 = i ν 2 λ 2 + n 2 | S a 1 J | n 2 S a 1 J , D 21 = λ 1 + λ 1 λ 2 + λ 1 n 2 | m 1 λ 1 + λ 1 λ 2 + + λ 1 n 2 | J | m 1 J , D 22 = n 2 | m 2 i ν 2 μ 2 n 2 | J | m 2 J
The explicit form of the scalar products depends on the chosen algebra and on the definition of C_0. For example, assume that we consider the so(2N_0) or sp(2N_0) algebras and that
C 0 = k = 0 N 0 ϵ k H e k , ϵ k = ± 1 .
Then
m 1 | n 1 = s = 1 2 N 0 2 | n 10 , k | 2 e z 1 ( J k x + K k t ) = s = 1 N 0 2 | n 10 , k | | n 10 , k ¯ | cosh z 1 , k + x 0 , k , x 0 , k = ln | n 10 , k | | n 10 , k ¯ |
In order to find the 2-soliton solution we need to solve Equation (265) and invert the 4 × 4 block matrix; then
| M 1 | M 2 | N 1 | N 2 = A 11 A 12 B 11 B 12 A 21 A 22 B 21 B 22 C 11 C 12 D 11 D 12 C 21 C 22 D 21 D 22 1 | m 1 | m 2 | n 1 | n 2 .
The explicit calculation of the right-hand side of Equation (269) will be published elsewhere.
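Once the blocks of the linear system (265) have been evaluated at a fixed (x, t), the remaining step is pure linear algebra. The following Python fragment is a minimal numerical sketch of that last step only (it is not the authors' code): it assembles the 4 × 4 block matrix and solves for the 'dressed' polarization vectors as in (269). All block values and vectors below are random placeholders standing in for the explicit expressions given above.

```python
import numpy as np

# Sketch of the linear-algebra step behind Equation (269); all data are placeholders.
n = 4                                    # dimension of the polarization vectors (hypothetical)
rng = np.random.default_rng(0)
blk = lambda: rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
vec = lambda: rng.normal(size=n) + 1j * rng.normal(size=n)

A = [[blk(), blk()], [blk(), blk()]]     # placeholder blocks A_{jk}
B = [[blk(), blk()], [blk(), blk()]]     # placeholder blocks B_{jk}
C = [[blk(), blk()], [blk(), blk()]]     # placeholder blocks C_{jk}
D = [[blk(), blk()], [blk(), blk()]]     # placeholder blocks D_{jk}

# Assemble the full 4n x 4n block matrix of the system (265)
M = np.block([[A[0][0], A[0][1], B[0][0], B[0][1]],
              [A[1][0], A[1][1], B[1][0], B[1][1]],
              [C[0][0], C[0][1], D[0][0], D[0][1]],
              [C[1][0], C[1][1], D[1][0], D[1][1]]])

# Right-hand side: the 'naked' vectors |m_1>, |m_2>, |n_1>, |n_2>
rhs = np.concatenate([vec() for _ in range(4)])

# Dressed vectors |M_1>, |M_2>, |N_1>, |N_2>, cf. Equation (269)
M1, M2, N1, N2 = np.split(np.linalg.solve(M, rhs), 4)
```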

10. The Resolvent and Spectral Properties of Lax Operators

In this Section, following [15], we briefly address the spectral properties of the Lax operators. We intend to demonstrate that the notion of the resolvent introduced in [15] for the generalized Zakharov–Shabat systems can be naturally extended to the class of quadratic pencils that we have studied above.
Below, for simplicity, we assume that the function Q ( x , t , λ ) (76) which introduces the parametrization of the RHP is a smooth matrix-valued function of x satisfying the following conditions:
C.1 
Q ( x , t , λ ) possesses smooth derivatives of all orders with respect to x and falls off to zero for | x | → ∞ faster than any power of x:
lim_{x → ±∞} | x |^k Q ( x , t , λ ) = 0 for all λ and k = 0 , 1 , 2 , … ;
C.2 
Q ( x , t , λ ) is such that the corresponding RHP has finite index. For the class of RHP that we have been dealing with, this means that the solution of the RHP must have a finite number of simple zeroes and pole singularities.
Remark 10. 
Let us assume that the zeroes and the pole singularities of ξ^+ ( x , t , λ ) (resp. ξ^- ( x , t , λ )) are located at the points λ_j^+ ∈ Q_1 ∪ Q_3 (resp. at λ_j^- ∈ Q_2 ∪ Q_4). Taking into account the symmetries of the FAS (89), we conclude that:
 1. 
if λ_j^+ is a zero or pole of ξ^+ ( x , t , λ ), then there must exist λ_p^+ = − λ_j^+, which is also a zero or pole of ξ^+ ( x , t , λ );
 2. 
if λ_j^- is a zero or pole of ξ^- ( x , t , λ ), then there must exist λ_p^- = − λ_j^-, which is also a zero or pole of ξ^- ( x , t , λ ).
 3. 
if λ_j^+ is a zero or pole of ξ^+ ( x , t , λ ), then there must exist λ_p^- = ( λ_j^+ )^*, which is a zero or pole of ξ^- ( x , t , λ ).
Choosing a proper enumeration of the zeroes and poles, we can arrange them so that λ_j^+ ∈ Q_1, j = 1, …, N; λ_{j+N}^+ = − λ_j^+ ∈ Q_3, j = 1, …, N; λ_j^- = ( λ_j^+ )^* ∈ Q_4, j = 1, …, N; and λ_{j+N}^- = ( λ_{j+N}^+ )^* ∈ Q_2, j = 1, …, N.
The FAS of L ( λ ), which are related to the solution of the RHP by χ^± ( x , λ ) = ξ^± ( x , λ ) e^{− i λ² J x}, allow one to construct the resolvent of the operator L and then to investigate its spectral properties. By the resolvent of L ( λ ) we understand the integral operator R ( λ ) with kernel R ( x , y , λ ), which satisfies
L ( λ ) ( R ( λ ) f ) ( x ) = f ( x ) ,
where f ( x ) is an n-component vector function in C^n with bounded norm, i.e., ∫_{−∞}^{∞} dy ( f^† ( y ) f ( y ) ) < ∞.
From the general theory of linear operators [79,80] we know that the point λ in the complex λ -plane is a regular point if R ( λ ) is the kernel of a bounded integral operator. In each connected subset of regular points, R ( λ ) must be analytic in λ .
The points λ which are not regular constitute the spectrum of L ( λ ). Roughly speaking, the spectrum of L ( λ ) consists of two types of points:
  • (i) the continuous spectrum of L ( λ ) consists of all points λ for which R ( λ ) is an unbounded integral operator;
  • (ii) the discrete spectrum of L ( λ ) consists of all points λ for which R ( λ ) develops pole singularities.
Let us now show how the resolvent R ( λ ) can be expressed through the FAS of L ( λ ). Here and below, we limit ourselves to Lax operators L which are quadratic pencils in λ and have the form (78) (i.e., N_1 = 2). Assuming that the eigenvalues of J are real and pairwise different, we can always order them so that
J = diag ( a_1 , a_2 , … , a_n ) , a_1 > a_2 > ⋯ > a_k > 0 > a_{k+1} > ⋯ > a_n .
In Section 2 above, we constructed the FAS of such operators. Using them we can write down R ( λ ) in the form:
( R ( λ ) f ) ( x ) = ∫_{−∞}^{∞} dy R ( x , y , λ ) f ( y ) ,
the kernel R ( x , y , λ ) of the resolvent is given by:
R ( x , y , λ ) = R^+ ( x , y , λ ) for λ ∈ Q_1 ∪ Q_3 , R ( x , y , λ ) = R^- ( x , y , λ ) for λ ∈ Q_2 ∪ Q_4 ,
where
R^± ( x , y , λ ) = ± i χ^± ( x , λ ) Θ^± ( x − y ) χ̂^± ( y , λ ) , Θ^± ( z ) = θ ( ∓ z ) Π_0 − θ ( ± z ) ( 1 1 − Π_0 ) , Π_0 = ∑_{s=1}^{k_0} E_{ss} ,
where k_0 is the number of positive eigenvalues of J, see (271). Due to the condition tr J = ∑_{s=1}^{n} a_s = 0, k_0 is fixed uniquely.
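As a simple illustration of the construction (274), the short Python sketch below (ours, not part of the original derivation) evaluates R^+(x, y, λ) for the 'free' FAS χ_0(x, λ) = exp(−iλ²Jx), i.e., for vanishing potentials, and shows the exponential decay in |x − y| when Im λ² > 0. The concrete eigenvalues of J are arbitrary test values.

```python
import numpy as np

# Free resolvent kernel R^+(x, y, lam) of (274) with chi_0(x, lam) = exp(-i lam^2 J x).
# The eigenvalues of J below are arbitrary test values, ordered as in (271).
a = np.array([2.0, 1.0, -1.0, -2.0])
k0 = int(np.sum(a > 0))                               # number of positive eigenvalues
Pi0 = np.diag((np.arange(a.size) < k0).astype(float))
Id = np.eye(a.size)

def chi0(x, lam):
    return np.diag(np.exp(-1j * lam**2 * a * x))

def Theta_plus(z):
    # theta(-z) Pi_0 - theta(z) (1 - Pi_0), following our reading of (274)
    return np.heaviside(-z, 0.5) * Pi0 - np.heaviside(z, 0.5) * (Id - Pi0)

def R_plus(x, y, lam):
    return 1j * chi0(x, lam) @ Theta_plus(x - y) @ np.linalg.inv(chi0(y, lam))

lam = 1.0 + 0.3j                                       # Im(lam^2) > 0
print(np.abs(R_plus(5.0, 0.0, lam)).max())             # kernel decays ...
print(np.abs(R_plus(25.0, 0.0, lam)).max())            # ... much smaller for larger |x - y|
```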
The next theorem establishes that R ( x , y , λ ) is indeed the kernel of the resolvent of L ( λ ) .
Theorem 3. 
Let Q ( x , t , λ ) satisfy conditions (C.1) and (C.2) and let λ_j^± be the simple zeroes of the solutions ξ^± ( x , t , λ ) of the RHP. Let χ^± ( x , λ ) = ξ^± ( x , λ ) exp ( − i λ² J x ) and let R^± ( x , y , λ ) be defined as in Equation (274). Then:
 1. 
χ ± ( x , λ ) will be FAS of a quadratic pencil of the form (86) whose coefficients will be expressed through Q s ( x , t ) as in (87).
 2. 
R^± ( x , y , λ ) is a kernel of a bounded integral operator for ± Im λ² > 0;
 3. 
R ( x , y , λ ) is a uniformly bounded function for λ² ∈ R and provides the kernel of an unbounded integral operator;
 4. 
R ± ( x , y , λ ) satisfy the equation:
L ( λ ) R^± ( x , y , λ ) = 1 1 δ ( x − y ) .
Idea of the proof. 
1.
is obvious from the fact that χ^± ( x , λ ) are the FAS of L ( λ ) (86). It is also easy to see that if Q ( x , t , λ ) satisfies conditions (C.1) and (C.2), then U_2 ( x , t ) and U_1 ( x , t ) will also satisfy condition (C.1). In addition, χ^± ( x , λ ) will obviously satisfy condition (C.2) and will have poles and zeroes at the points λ_j^±, see Remark 10.
2.
Assume that Im λ 2 > 0 and consider the asymptotic behavior of R + ( x , y , λ ) for x , y . From Equation (86) we find that
R^+_{ij} ( x , y , λ ) = ∑_{p=1}^{n} ξ^+_{ip} ( x , λ ) e^{− i λ² a_p ( x − y )} Θ^+_{pp} ( x − y ) ξ̂^+_{pj} ( y , λ ) .
We use the fact that χ^+ ( x , λ ) has triangular asymptotics for x → ∞ and λ² ∈ C_+ (see Equation (42)). With the choice of Θ^+ ( x − y ) in (274), we check that the right-hand side of (276) falls off exponentially for x → ∞ and an arbitrary choice of y. All other possibilities are treated analogously.
3.
For λ ∈ R ∪ iR the arguments of item 2 cannot be applied, because for Im λ² = 0 the exponentials in the right-hand side of (276) only oscillate. Thus, we conclude that for λ ∈ R ∪ iR the kernel R^± ( x , y , λ ) is only a bounded function of x, and thus the corresponding operator R ( λ ) is an unbounded integral operator.
4.
The proof of Equation (275) follows from the fact that L ( λ ) χ + ( x , λ ) = 0 and
d Θ^± ( x − y ) / d x = 1 1 δ ( x − y ) .
If the algebra g ≃ sl(n), then χ^+ ( x , λ ) and its inverse χ̂^+ ( x , λ ) do not have common poles. In this case, condition (C.2) is valid also for R^+ ( x , y , λ ). However, this is not true for the orthogonal and symplectic algebras; in these cases R^+ ( x , y , λ ) may have poles of second order, which require additional care.
Now, we can derive the completeness relation for the eigenfunctions of the Lax operator L by applying the contour integration method (see, e.g., [23,81,82]) to the integral:
J ( x , y ) = (1 / 2πi) ∮_{γ_1 ∪ γ_3} dλ λ R^+ ( x , y , λ ) − (1 / 2πi) ∮_{γ_2 ∪ γ_4} dλ λ R^- ( x , y , λ ) ,
where the contours γ_j are shown in Figure 10. The idea is to calculate J ( x , y ) first using the Cauchy residue theorem. Taking into account that the contours γ_1 and γ_3 are positively oriented, while γ_2 and γ_4 are negatively oriented, we obtain:
J ( x , y ) = ∑_{j=1}^{N} ( Res_{λ = λ_j^+} λ R^+ ( x , y , λ ) + Res_{λ = −λ_j^+} λ R^+ ( x , y , λ ) − Res_{λ = λ_j^-} λ R^- ( x , y , λ ) − Res_{λ = −λ_j^-} λ R^- ( x , y , λ ) ) .
We can also evaluate J ( x , y ) by integrating along the contours. The integration along the real and purely imaginary axes provides the contribution coming from the continuous spectrum of L. The integration along the infinite arcs of the contours can be evaluated explicitly. To this end, using the fact that ξ^± ( x , λ ) are canonically normalized, we find:
J + ( x , y ) = 1 2 π i γ 1 , γ 3 , d λ λ R + ( x , y , λ ) = 1 4 π i γ 1 , γ 3 , d λ 2 e i λ 2 J ( x y ) Θ + ( x y ) = 1 4 π i d λ 2 e i λ 2 J ( x y ) Θ + ( x y )
J ( x , y ) = 1 2 π i γ 2 , γ 4 , d λ λ R ( x , y , λ ) = 1 4 π i γ 2 , γ 4 , d λ 2 e i λ 2 J ( x y ) Θ ( x y ) = 1 4 π i d λ 2 e i λ 2 J ( x y ) Θ ( x y ) .
Adding these two answers and using the definitions of Θ ± ( x y ) we find
J + ( x , y ) + J ( x , y ) = 1 4 π i d λ 2 e i λ 2 ( x y ) 1 1 = 1 2 δ ( x y ) s = 1 n 1 a s E s s .
Equating the two answers for J ( x , y ) we find the following completeness relations for the FAS of L:
δ ( x y ) 1 2 s = 1 n 1 a s E s s = 1 2 π d λ 2 s = 1 k 0 | χ [ s ] + ( x , λ ) χ ^ [ s ] + ( y , λ ) | s = k 0 + 1 n | χ [ s ] ( x , λ ) χ ^ [ s ] ( y , λ ) | + j = 1 N Res λ = λ j + λ R + ( x , y , λ ) + Res λ = λ j + λ R + ( x , y , λ ) Res λ = λ j λ R ( x , y , λ ) Res λ = λ j λ R ( x , y , λ ) .
The relation (283) allows one to expand any vector function | z ( x ) ⟩ ∈ C^n over the eigenfunctions of the system (86). If we multiply (283) on the right by J z ( y ) and integrate over y, we obtain:
z ( x ) = i 2 π d λ s = 1 k 0 | χ [ s ] + ( x , λ ) · ζ s + ( λ ) s = k 0 + 1 n χ [ s ] ( x , λ ) · ζ s ( λ ) + j = 1 N ν j n j + ( x ) ζ j + n j ( x ) ζ j ,
where the expansion coefficients are of the form:
ζ s ± ( λ ) = i d x χ ^ [ s ] ± ( x , λ ) | J | z ( x ) , ζ j ± ( λ ) = i d x m j ± J z ( x ) .
Corollary 3. 
The discrete spectrum of the Lax operator (86) consists of the poles of the resolvent R ± ( x , y , λ ) , which are described in Remark 10.
Remark 11. 
Here, we used also the fact that all eigenvalues of J are non-vanishing. In the case when one (or several) of them vanishes, we can prove the completeness of the eigenfunctions only on the image of a d J , which is a subspace of C n .
Remark 12. 
The authors are aware that these types of derivations need additional arguments to be made rigorous. One of the real difficulties is to find explicit conditions on the potentials U_2 ( x ) and U_1 ( x ) that are equivalent to condition (C.2). Nevertheless, there are situations (e.g., the reflectionless potentials) when all these conditions are fulfilled and all eigenfunctions of L ( λ ) can be explicitly calculated. Another advantage of this approach is the possibility to apply it to Lax operators with more general dependence on λ, e.g., polynomial and rational.

11. Natural Generalizations

We outline possible natural generalizations of the above method and demonstrate that it can treat a number of new classes of Lax pairs. Treating the quadratic pencil, we started by constructing the FAS in order to demonstrate that FAS exist also for pencils of higher order. In fact, since we are dealing with an RHP, the FAS of the Lax pair are directly related to the solution of the RHP, see Equation (290) below.

11.1. Cubic Pencils as Lax Operators

Assume that
L ψ ≡ i ψ_x + ( U_3 + λ U_2 + λ² U_1 − λ³ J ) ψ ( x , t , λ ) = 0 , M ψ ≡ i ψ_t + ( V_3 + λ V_2 + λ² V_1 − λ³ K ) ψ ( x , t , λ ) = 0 ,
where J and K are diagonal matrices with real eigenvalues. If we assume U_k and V_k to be generic elements of the Lie algebra g, the corresponding NLEE will be a system of 3 N_g equations, where N_g = dim g. In order to avoid this difficulty, we introduce an automorphism C of the algebra of order three, i.e., C³ = 1 1. For simplicity, we can take it in the form:
C ( X ) ≡ C^{-1} X C , C = diag ( 1 1 , ω² 1 1 , ω 1 1 ) , ω = exp ( 2 π i / 3 ) .
Using it we will introduce a grading of g
g = g ( 0 ) g ( 1 ) g ( 2 ) , X ( k ) g ( k ) , X ( 0 ) = X ( 0 ; 1 ) 0 0 0 X ( 0 ; 2 ) 0 0 0 X ( 0 ; 3 ) , X ( 1 ) = 0 0 X ( 1 ; 1 ) X ( 1 ; 2 ) 0 0 0 X ( 1 ; 3 ) 0 , X ( 2 ) = 0 X ( 2 ; 1 ) 0 0 0 X ( 2 ; 2 ) X ( 2 ; 3 ) 0 0 .
Of course U 3 , J and K are elements of g ( 0 ) .
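The grading (288) is easy to realize concretely. The following Python fragment (our illustration only, for 3 × 3 matrices, i.e., blocks of size one) extracts the grade-k component of a matrix by averaging over the Z_3 action generated by the automorphism (287), on which conjugation by C acts as multiplication by ω^k.

```python
import numpy as np

# Grading (288) induced by the order-three automorphism C(X) = C^{-1} X C of (287),
# realised here for 3x3 blocks of size one.
omega = np.exp(2j * np.pi / 3)
C = np.diag([1.0, omega**2, omega])          # C^3 = 1

def grade_component(X, k):
    """Project X onto g^(k): the part on which conjugation by C acts as omega^k."""
    out = np.zeros_like(X, dtype=complex)
    for s in range(3):
        Cs = np.linalg.matrix_power(C, s)
        out += omega**(-k * s) * np.linalg.inv(Cs) @ X @ Cs
    return out / 3

X = np.arange(9, dtype=complex).reshape(3, 3) + 1
X0, X1, X2 = (grade_component(X, k) for k in range(3))
# X0 is diagonal, X1 has entries at (1,3), (2,1), (3,2) and X2 at (1,2), (2,3), (3,1),
# reproducing the block pattern of (288); the three parts sum back to X.
assert np.allclose(X0 + X1 + X2, X)
```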
The RHP is formulated as:
ξ^+ ( x , t , λ ) = ξ^- ( x , t , λ ) G_k ( x , t , λ ) , λ ∈ l_k , i ∂_x G_k = λ³ [ J , G_k ( x , t , λ ) ] , i ∂_t G_k = λ³ [ K , G_k ( x , t , λ ) ] .
ξ^+ ( x , t , λ ) is analytic in the sectors Ω_0 ∪ Ω_2 ∪ Ω_4, while ξ^- ( x , t , λ ) is analytic in the sectors Ω_1 ∪ Ω_3 ∪ Ω_5; see the left panel of Figure 3. The FAS of the Lax pair (286) are given by:
χ^± ( x , t , λ ) = ξ^± ( x , t , λ ) exp ( − i λ³ ( J x + K t ) ) .
The parametrization of the RHP will be given by (compare with Equations (167) and (174)):
ξ^± ( x , t , λ ) = exp ( Q ( x , t , λ ) ) , Q ( x , t , λ ) = ∑_{s=1}^{∞} λ^{-s} Q_s ( x , t ) , C^{-1} Q ( x , t , λ ) C = Q ( x , t , ω λ ) , i.e., C^{-1} Q_s ( x , t ) C = ω^{-s} Q_s ( x , t ) ∈ g^{( 3 − k(s) )} ,
where k ( s ) = s ( mod 3 ). The contour of the corresponding RHP consists of the rays l_k , k = 0 , … , 5 (see the left panel of Figure 3).
Using Equations (78)–(80) we obtain the following parametrization of the Lax pair (286):
U 1 = ad J Q 1 , U 2 = ad J Q 2 + 1 2 ad Q 1 U 1 , U 3 = 1 2 ad Q 1 ad J Q 2 + ad Q 2 ad J Q 1 + 1 6 ad Q 1 2 U 1 + ad J Q 3 , V 1 = ad K Q 1 , V 2 = ad K Q 2 + 1 2 ad Q 1 V 1 , V 3 = 1 2 ad Q 1 ad K Q 2 + ad Q 2 ad K Q 1 + 1 6 ad Q 1 2 V 1 + ad K Q 3 .
The rest of the calculations follow the same ideas as developed above.
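For completeness, here is a small Python sketch (ours) of how the first relations of the parametrization (292) can be evaluated once Q_1 and Q_2 are known. The matrices J, K, Q_1, Q_2 below are placeholder test data and, for brevity, the grading constraints on Q_s are not enforced.

```python
import numpy as np

# Evaluate U_1, U_2 and V_1, V_2 from (292) for given Q_1, Q_2; ad_A(B) = [A, B].
def ad(A):
    return lambda B: A @ B - B @ A

rng = np.random.default_rng(1)
J = np.diag([2.0, 0.0, -2.0])                 # placeholder diagonal J
K = np.diag([1.0, 1.0, -2.0])                 # placeholder diagonal K
Q1, Q2 = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))

U1 = ad(J)(Q1)
U2 = ad(J)(Q2) + 0.5 * ad(Q1)(U1)
V1 = ad(K)(Q1)
V2 = ad(K)(Q2) + 0.5 * ad(Q1)(V1)

assert np.allclose(np.diag(U1), 0)            # ad_J annihilates the diagonal part
```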

11.2. Cubic Pencils with Dihedral Reduction Group

Let us now demonstrate another class of cubic pencils possessing additional symmetry. To this end, we start with the pencil (286) and apply to it an additional Z_2-reduction such that (see [49]):
L ψ ≡ i ψ_x + ( U ( x , t , λ ) + R_0 U ( x , t , 1/λ ) R_0^{-1} ) ψ ( x , t , λ ) = 0 , M ψ ≡ i ψ_t + ( V ( x , t , λ ) + R_0 V ( x , t , 1/λ ) R_0^{-1} ) ψ ( x , t , λ ) = 0 , U ( x , t , λ ) = U_3 + λ U_2 + λ² U_1 − λ³ J , V ( x , t , λ ) = V_3 + λ V_2 + λ² V_1 − λ³ K ,
where R_0 must be a constant matrix satisfying R_0² = 1 1. With this additional reduction, the grading of the algebra becomes
g = g ( 0 ) g ( 1 ) g ( 2 ) g ^ ( 0 ) g ^ ( 1 ) g ^ ( 2 ) ,
where the elements X^{(k)} ∈ g^{(k)} were described above in (288), while ĝ^{(k)} consists of the elements R_0 X^{(k)} R_0^{-1}.
The RHP is now formulated on a different contour Γ ≡ ∪_{k=0}^{5} l_k ∪ S^1, where S^1 is the unit circle; see the right panel of Figure 3. For the formulation of the RHP we need to introduce, besides the rays l_k, also the arcs l_α of the unit circle:
l_k ≡ { arg λ = k π / 3 , | λ | ≥ 1 } ; l_{k+6} ≡ { arg λ = k π / 3 , | λ | ≤ 1 } ; l_α ≡ { | λ | = 1 , arg λ ∈ ( ( α − 1 ) π / 3 , α π / 3 ) } .
Then, we have:
ξ_k^+ ( x , t , λ ) = ξ_k^- ( x , t , λ ) G_k ( x , t , λ ) , λ ∈ l_k , ξ_{k+6}^+ ( x , t , λ ) = ξ_{k+6}^- ( x , t , λ ) R_0 G_k ( x , t , 1/λ ) R_0^{-1} , λ ∈ l_{k+6} , ξ_{α+6}^± ( x , t , λ ) = ξ_α^± ( x , t , λ ) G_α ( x , t , λ ) , λ ∈ l_α , i ∂_x G_k = λ³ [ J , G_k ( x , t , λ ) ] , i ∂_t G_k = λ³ [ K , G_k ( x , t , λ ) ] .
The FAS of the Lax pair (293) in each of the sectors Ω_k, k = 0, …, 11 (see the right panel of Figure 3) are related to the solutions of the RHP (296) as follows:
χ_k^± ( x , t , λ ) = ξ_k^± ( x , t , λ ) exp ( − i λ³ ( J x + K t ) ) .
For the RHP (296), we have to use different parametrizations inside and outside of the unit circle S^1. Outside the unit circle we use the parametrization (291); inside it we must have:
ξ^± ( x , t , λ ) = exp ( Q ( x , t , λ ) ) , Q ( x , t , λ ) = ∑_{s=1}^{∞} λ^{-s} Q_s ( x , t ) , | λ | ≥ 1 ; ξ^± ( x , t , λ ) = R_0 ξ ( x , t , 1/λ ) R_0^{-1} = exp ( R_0 Q ( x , t , 1/λ ) R_0^{-1} ) , R_0 Q ( x , t , 1/λ ) R_0^{-1} = ∑_{s=1}^{∞} λ^{s} R_0 Q_s ( x , t ) R_0^{-1} , | λ | ≤ 1 .
Then, the parametrization of L and M is given by (293).
This allows one to derive new types of integrable NLEE. The derivation of their soliton solutions consists in a straightforward application of the dressing method.

12. Discussion and Conclusions

Our intention was to review some of the old papers which were milestones in the past but, for various reasons, tend to be forgotten. Another motivation for this paper was our intention to generalize these results to quadratic pencils and pencils of higher degree.
Of course, many of the results we mentioned here were first introduced for the GZS system. One of the important results is the resolvent of the Lax operator L [15,83,84] expressed in terms of the FAS. Combined with the contour integration method, it allows one to construct the spectral decompositions of L. It also allows one to prove that the poles and zeroes of the FAS provide the discrete eigenvalues of L. Rigorous proofs would hardly be possible with these methods, but their importance lies in the fact that they can be applied to a wide class of non-self-adjoint Lax operators.
In the Introduction, we already listed a number of results on the interpretation of the ISM as a GFT and on the ISM for the GZS systems. Our comment is that this idea can also be generalized to the quadratic pencils considered as Lax operators above; we intend to pursue it in future publications. Of course, there will be specifics for the homogeneous and symmetric spaces.
The initial ideas that we have extended above to quadratic pencils were developed earlier by other influential scientists; see some of their reviews and monographs [10,12,15,30,84,85,86,87,88,89]. It is important to note that these ideas are also applicable to discrete evolution equations such as the Ablowitz–Ladik system [90,91].
Another important approach to integrable NLEE has been developed by Shabat, Mikhailov, Yamilov, Svinolupov et al., see [88,92,93,94]. This allows one to derive systems of equations that possess higher symmetries, or higher integrals of motion. Lately, this method has been used by Zhao [94], who covered all integrable systems in two independent functions. Some of these equations were shown to possess Lax representations related to the twisted Kac-Moody algebras [95].
Special attention must be paid to the NLEE with deep Mikhailov reduction groups, like Z_h and D_h. Typically, by h we denote the Coxeter number of the corresponding simple Lie algebra. Here, we include the families of KdV and mKdV equations and the 2-dimensional Toda field theories [96,97,98,99,100,101,102,103,104,105], as well as some Camassa–Holm type equations [106,107,108,109,110,111,112].
The Kulish–Sklyanin system, i.e., the 3-component NLS related to the BD.I symmetric space, has important applications to the spin-1 Bose–Einstein condensate, see [25,78,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127]. Using the dressing method, in [115,128] we calculated the limit of the dressing factor for t → ±∞. This allowed us to describe the soliton interactions, i.e., to calculate the center-of-mass and phase shifts in the soliton interactions.
In short, we illustrated how our method works on the class of quadratic pencils. However, we could apply it to pencils of higher order; see Section 11, where we outlined how one could approach cubic pencils. The straightforward generalization to pencils of higher order meets a difficulty: the corresponding NLEE would involve too many independent variables. We avoided this difficulty by our choice of the parametrization of the RHP: a generic parametrization like (76) would involve a generic choice of Q_s(x,t), whereas we imposed on Q(x,t,λ) a Mikhailov reduction. In particular, for the case of symmetric spaces we chose Q_s(x,t) ∈ g^{(1)}, see Equation (187). Similar ideas can be extended to cubic and higher-order pencils.
Finally, in Section 11, we demonstrated how our approach can be combined with additional Mikhailov reductions, which may modify the RHP. Applying such reductions changes the contour of the RHP, which in turn requires us to use different parametrizations outside and inside the unit circle, see Equation (298).
In our final remarks, we mention the generalizations to 2+1 dimensions [29,129,130,131], the MNLS related to spectral curves [106,132,133,134,135,136,137] and the hierarchies of Hamiltonian structures. It is known that one can introduce Poisson brackets for the quadratic pencils which, along with the integrals of motion, allow one to interpret the NLEE in the hierarchy as Hamiltonian equations of motion. Such Poisson brackets were introduced for the sl(2) polynomial pencils by Kulish and Reyman [36]; of course, they allow generalization to any semi-simple Lie algebra. We already mentioned that for the generalized Zakharov–Shabat systems this has already been performed: the spectral theory of the recursion operators, which generate all NLEE of the hierarchy together with their Hamiltonian formulation, has been constructed. For the quadratic pencils the recursion operators have not yet been constructed; we hope that this problem will be solved soon.

Author Contributions

V.S.G. and A.A.S. contributed equally to the manuscript and typed, read, and approved the final manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research has been supported by the Bulgarian National Science Foundation, contract KP-06N42-2.

Data Availability Statement

Any data related to this paper will be made available upon reasonable request.

Acknowledgments

The authors are grateful to R. I. Ivanov and G. G. Grahovski for careful reading of the manuscript and for many useful suggestions. We are also grateful to the anonymous referee for careful reading of the manuscript and for useful suggestions and comments. The financial support of the Bulgarian National Science Foundation under contract KP-06N42-2 is acknowledged.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Root Systems of Simple Lie Algebras

This is assumed to be well-known material, which we display without proofs and in a manner suitable for pedestrians in Lie algebras; for proofs and more details, see [21,72]. The fundamental ideas of Cartan and Weyl are based on:
  • The Cartan subalgebras h of all simple Lie algebras can be represented by diagonal matrices;
  • There is a one-to-one mapping between the elements of h and the vectors in r-dimensional Euclidean space E r ;
  • The Weyl generators E_α are defined as eigenvectors of all the elements of h , i.e.,
    [ H , E α ] = α ( H ) E α = 2 ( α , h ) ( α , α ) E α ,
    where α ( H ) = ( α , h ) . Here, h E r is the vector corresponding to H, and α Δ is a root of the algebra g belonging to its system of roots Δ .
Skipping the details, we describe the root systems Δ for all classical series A r , B r , C r and D r of simple Lie algebras.
  • Root systems
  • Dynkin diagrams
  • Cartan-Weyl basis
  • Automorphisms of finite order
Basic properties of the root systems.
  • If α ∈ Δ then −α is also a root, −α ∈ Δ.
  • Each root system is split into positive and negative roots:
Δ = Δ_+ ∪ Δ_- , if α ∈ Δ_+ then −α ∈ Δ_- .
  • In each root system, one can introduce a basis known as the system of simple roots. By definition, α_j, j = 1, …, r are simple roots if: (i) they are linearly independent and form a basis in E^r; (ii) they are positive roots α_j ∈ Δ_+ such that α_j − α_k ∉ Δ;
  • Each positive root β ∈ Δ_+ can be expressed as a sum of simple roots β = ∑_{j=1}^{r} n_j α_j, where all n_j ≥ 0 are integers;
  • There is a maximal (resp., minimal) root α_max (resp., α_min) such that α_max + β (resp., α_min − β) is not a root;
  • Symmetry properties of Δ and Weyl group. Introduce the Weyl reflections S β by:
S_β x = x − 2 ( β , x ) / ( β , β ) β .
The Weyl reflections form a finite group which preserves Δ, i.e., S_β Δ = Δ; a numerical check for A_2 is sketched below.
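The following minimal Python check (ours) verifies this last property for the root system of A_2, whose roots e_j − e_k are constructed explicitly in Appendix A.1.

```python
import numpy as np

# Check that the Weyl reflections S_beta map the root system of A_2 into itself.
e = np.eye(3)
roots = [e[j] - e[k] for j in range(3) for k in range(3) if j != k]

def S(beta, x):                        # Weyl reflection S_beta applied to x
    return x - 2 * np.dot(beta, x) / np.dot(beta, beta) * beta

for beta in roots:
    for alpha in roots:
        assert any(np.allclose(S(beta, alpha), gamma) for gamma in roots)
```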

Appendix A.1. The Root System of A r s l ( r + 1 ) Algebras

The algebras A_r can be represented by ( r + 1 ) × ( r + 1 ) matrices whose trace is equal to 0. Their Cartan subalgebras are provided by diagonal matrices H = diag ( h_1 , h_2 , … , h_{r+1} ) with vanishing trace, tr H = ∑_{j=1}^{r+1} h_j = 0. Therefore, the vector h = ( h_1 , h_2 , … , h_{r+1} )^T dual to H belongs to the subspace of the ( r + 1 )-dimensional Euclidean space E^{r+1} orthogonal to the vector ϵ = e_1 + e_2 + ⋯ + e_{r+1}, where the e_j form the orthonormal basis in E^{r+1}. The Weyl generators are provided by the matrices E_{jk}, which have only one non-vanishing entry, equal to 1, at position ( j , k ), i.e.,
( E_{jk} )_{mn} = δ_{jm} δ_{kn} .
It is easy to check that the commutation relations between the Cartan elements H and the Weyl generators E j k can be written down as:
[ H , E_{jk} ] = ( h_j − h_k ) E_{jk} = ( h , e_j − e_k ) E_{jk} ,
This coincides with (A1), with the root α = e_j − e_k. Thus, we find that the root system of A_r consists of the vectors:
Δ_{A_r} ≡ { e_j − e_k , j ≠ k } .
Obviously, each root e_j − e_k ∈ E^{r+1} is orthogonal to ϵ. We will also need to split the root system Δ = Δ_+ ∪ Δ_- into positive and negative roots:
Δ_+ = { e_j − e_k , j < k } , Δ_- = { e_j − e_k , j > k } ,
i.e., to the positive (resp., negative) roots there correspond upper (resp., lower) triangular matrices. Note that all roots β = e_j − e_k are orthogonal to the vector ϵ, so they span an r-dimensional subspace of E^{r+1}.
The simple roots of A_r are provided by α_j = e_j − e_{j+1}, j = 1, …, r; the corresponding Dynkin diagram is shown in the figure.
The basis of the Cartan subalgebra consists of H_j = E_{jj} − E_{j+1,j+1}, j = 1, …, r. The Weyl generators E_α were given above in (A2).
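These relations are easy to verify directly. The Python fragment below (ours) checks the commutation relation [H, E_{jk}] = (h_j − h_k) E_{jk} for A_3 with an arbitrary traceless Cartan element H.

```python
import numpy as np

# Verify [H, E_{jk}] = (h_j - h_k) E_{jk} for A_3 = sl(4); E(j, k) is the matrix unit (A2).
n = 4
h = np.array([1.5, 0.5, -0.5, -1.5])         # arbitrary traceless Cartan element
H = np.diag(h)

def E(j, k):                                  # 1-based indices, as in the text
    m = np.zeros((n, n)); m[j - 1, k - 1] = 1.0; return m

for j in range(1, n + 1):
    for k in range(1, n + 1):
        if j != k:
            comm = H @ E(j, k) - E(j, k) @ H
            assert np.allclose(comm, (h[j - 1] - h[k - 1]) * E(j, k))
```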
Remark A1. 
For the orthogonal and symplectic Lie algebras, we use slightly different but equivalent definition. Typically the orthogonal algebras s o ( P ) are represented as the algebras of P × P skew-symmetric matrices, i.e.,
X ∈ so ( P ) iff X + X^T = 0 .
In what follows, we use a slightly different but equivalent definition, namely:
X ∈ so ( P ) iff X + S_0 X^T S_0 = 0 .
where the matrix S 0 is given by:
S_0 = ∑_{k=1}^{2r+1} ( −1 )^{k+1} E_{k, 2r+2−k} for P = 2 r + 1 ; S_0 = ∑_{k=1}^{r} ( −1 )^{k+1} ( E_{k, 2r+1−k} + E_{2r+1−k, k} ) for P = 2 r .
Note that S 0 2 = 1 1 . For the symplectic algebras, we have:
X ∈ sp ( 2 r ) iff X + S_1 X^T S_1^{-1} = 0 .
where
S_1 = ∑_{k=1}^{r} ( −1 )^{k+1} ( E_{k, 2r+1−k} − E_{2r+1−k, k} ) .
Now, S_1² = −1 1. The advantage of these definitions is that now the Cartan subalgebras of both so ( P ) and sp ( 2 r ) are represented by diagonal matrices.
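A few lines of Python (ours) reproduce the matrices S_0 and S_1 of Remark A1 and confirm S_0² = 1 and S_1² = −1.

```python
import numpy as np

# S_0 (orthogonal case, P = 2r or 2r + 1) and S_1 (symplectic case) from Remark A1.
def E(j, k, P):                              # 1-based matrix unit E_{jk}
    m = np.zeros((P, P)); m[j - 1, k - 1] = 1.0; return m

def S0(P):
    if P % 2:                                # P = 2r + 1
        return sum((-1)**(k + 1) * E(k, P + 1 - k, P) for k in range(1, P + 1))
    r = P // 2                               # P = 2r
    return sum((-1)**(k + 1) * (E(k, P + 1 - k, P) + E(P + 1 - k, k, P))
               for k in range(1, r + 1))

def S1(r):                                   # sp(2r)
    P = 2 * r
    return sum((-1)**(k + 1) * (E(k, P + 1 - k, P) - E(P + 1 - k, k, P))
               for k in range(1, r + 1))

assert np.allclose(S0(5) @ S0(5), np.eye(5))     # S_0^2 =  1  (P = 2r + 1)
assert np.allclose(S0(6) @ S0(6), np.eye(6))     # S_0^2 =  1  (P = 2r)
assert np.allclose(S1(3) @ S1(3), -np.eye(6))    # S_1^2 = -1
```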

Appendix A.2. The Root System of B r s o ( 2 r + 1 ) Algebras

Using Remark A1, we find that with the new definition of s o ( P ) the antisymmetry is with respect to the second diagonal. The Cartan subalgebra elements are given by the following ( 2 r + 1 ) × ( 2 r + 1 ) matrices:
H_j = E_{jj} − E_{2r+2−j, 2r+2−j} , H = ∑_{j=1}^{r} h_j H_j .
The Cartan–Weyl basis in the typical ( 2 r + 1 ) × ( 2 r + 1 ) representation basis is given by:
E e j e k = E j k ( 1 ) k + j E k ¯ , j ¯ , E e j + e k = E j k ¯ ( 1 ) k + j E k , j ¯ , E e k = E k , r + 1 ( 1 ) k E r + 1 , k , H e k = E k , k E k ¯ , k ¯ .
where k ¯ = 2 r + 2 k .
The commutation relations (A1) show that the root system of B r is provided by:
Δ_+ = { α = e_k − e_j , β = e_k + e_j , γ_k = e_k } , 1 ≤ k < j ≤ r .
The simple roots are given by:
α_k = e_k − e_{k+1} , k = 1 , … , r − 1 , α_r = e_r .
The Dynkin diagram of B_r is shown in the figure.

Appendix A.3. The Root System of C r s p ( 2 r ) Algebras

The Cartan subalgebra elements of s p ( 2 r ) are given by the following ( 2 r ) × ( 2 r ) matrices:
H_j = E_{jj} − E_{2r+1−j, 2r+1−j} , H = ∑_{j=1}^{r} h_j H_j .
The Cartan–Weyl basis in the typical 2 r × 2 r representation basis is given by:
E e j e k = E j k ( 1 ) k + j E k ¯ , j ¯ , E e j + e k = E j k ¯ + ( 1 ) k + j E k , j ¯ , E 2 e k = 2 E k , k ¯ , H e k = E k , k E k ¯ , k ¯ .
where k ¯ = 2 r + 1 k .
The commutation relations (A1) show that the root system of C_r is provided by:
Δ_+ = { α = e_k − e_j , β = e_k + e_j , γ_k = 2 e_k } , 1 ≤ k < j ≤ r .
The simple roots are given by:
α_k = e_k − e_{k+1} , k = 1 , … , r − 1 , α_r = 2 e_r .
The Dynkin diagram of C_r is shown in the figure.

Appendix A.4. The Root System of D r s o ( 2 r ) Algebras

Using Remark A1, we find that with the new definition of s o ( P ) the antisymmetry is with respect to the second diagonal. The Cartan subalgebra elements are given by the following ( 2 r ) × ( 2 r ) matrices:
H_j = E_{jj} − E_{2r+1−j, 2r+1−j} , H = ∑_{j=1}^{r} h_j H_j .
The Cartan-Weyl basis in the typical 2 r × 2 r representation basis is given by:
E e j e k = E j k ( 1 ) k + j E k ¯ , j ¯ , E e j + e k = E j k ¯ ( 1 ) k + j E k , j ¯ , H e k = E k , k E k ¯ , k ¯ ,
where k ¯ = 2 r + 1 k .
The commutation relations (A1) show that the root system of D_r is provided by:
Δ_+ = { α = e_k − e_j , β = e_k + e_j } , 1 ≤ k < j ≤ r .
The simple roots are given by:
α_k = e_k − e_{k+1} , k = 1 , … , r − 1 , α_r = e_{r−1} + e_r .
The Dynkin diagram of D_r is shown in the figure.

Appendix B. Gauss Decompositions 

The Gauss decompositions have a natural group-theoretical interpretation and can be generalized to any semi-simple Lie algebra:
T ( λ ) = T ( λ ) D + ( λ ) S ^ + ( λ ) = T + ( λ ) D ( λ ) S ^ ( λ ) ,
i.e., T^± ( λ ), S^± ( λ ) and D^± ( λ ) are the factors in the Gauss decompositions of T ( λ ). It is well known that if a given group element allows a Gauss decomposition, then its factors are uniquely determined. Below, we write down the explicit expressions for the matrix elements of T^± ( λ ), S^± ( λ ), D^± ( λ ) through the matrix elements of T ( λ ):
T p j ( λ ) = 1 m j + ( λ ) 1 , 2 , , j 1 , p 1 , 2 , , j 1 , j T ( λ ) ( j ) ,
T ^ j p ( λ ) = ( 1 ) j + p m j 1 + ( λ ) 1 , 2 , , p ˇ , , j 1 , 2 , , p , , j 1 T ( λ ) ( j 1 ) ,
S p j + ( λ ) = ( 1 ) p + j m j 1 + ( λ ) 1 , 2 , , p , , j 1 1 , 2 , , p ˇ , , j T ( λ ) ( j 1 ) ,
S ^ j p + ( λ ) = 1 m j + ( λ ) 1 , 2 , , j 1 , j 1 , 2 , , j 1 , p T ( λ ) ( j ) ,
T p j + ( λ ) = 1 m n j + 1 ( λ ) p , j + 1 , , n 1 , n j , j + 1 , , n 1 , n T ( λ ) ( n j + 1 ) ,
T ^ j p + ( λ ) = ( 1 ) p + j m n j ( λ ) j , j + 1 , , p ˇ , , n j + 1 , j + 2 , , p , , n T ( λ ) ( n j ) ,
S p j ( λ ) = ( 1 ) p + j m n j ( λ ) j + 1 , j + 2 , , p , , n j , j + 1 , , p ˇ , , n T ( λ ) ( n j ) ,
S ^ j p ( λ ) = 1 m n j + 1 ( λ ) j , j + 1 , , n 1 , n p , j + 1 , , n 1 , n T ( λ ) ( n j + 1 ) ,
where
{ i_1 , i_2 , … , i_k ; j_1 , j_2 , … , j_k }_{T(λ)}^{(k)} = det ( T_{i_1 j_1} T_{i_1 j_2} ⋯ T_{i_1 j_k} ; T_{i_2 j_1} T_{i_2 j_2} ⋯ T_{i_2 j_k} ; ⋯ ; T_{i_k j_1} T_{i_k j_2} ⋯ T_{i_k j_k} )
is the minor of order k of T ( λ ) formed by the rows i 1 , i 2 , …, i k and the columns j 1 , j 2 , …, j k ; by p ˇ we mean that p is missing.
From the formulae above, we arrive at the following
Corollary A1. 
In order that the group element T ( λ ) ∈ SL ( n , C ) allows the first (resp., the second) Gauss decomposition (A17), it is necessary and sufficient that all its upper (resp., lower) principal minors m_k^+ ( λ ) (resp., m_k^- ( λ )) do not vanish.
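Corollary A1 and the minor formulae above are easy to test numerically. The Python sketch below (ours) computes the factors of the first decomposition in (A17) by Gaussian elimination without pivoting, identifying T^-(λ) with the lower- and Ŝ^+(λ) with the upper-unitriangular factor (our reading of (A17)), and checks that the diagonal factor is built from ratios of the upper principal minors.

```python
import numpy as np

# LDU form of the first Gauss decomposition in (A17) for a generic complex matrix.
def gauss_ldu(T):
    n = T.shape[0]
    U = T.astype(complex).copy()
    L = np.eye(n, dtype=complex)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # requires m_k^+ != 0 (Corollary A1)
            U[i, :] -= L[i, k] * U[k, :]
    D = np.diag(np.diag(U))                  # diagonal factor
    Shat = np.linalg.inv(D) @ U              # unit upper-triangular factor
    return L, D, Shat

rng = np.random.default_rng(3)
T = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
L, D, Shat = gauss_ldu(T)
assert np.allclose(L @ D @ Shat, T)

m = [np.linalg.det(T[:j, :j]) for j in range(1, 5)]              # principal minors m_j^+
assert np.allclose(np.diag(D), [m[0]] + [m[j] / m[j - 1] for j in range(1, 4)])
```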
These formulae hold true also if we need to construct the Gauss decomposition of an element of the orthogonal S O ( n ) group. Here, we just note that if T ( λ ) S O ( n ) then
S 0 ( T ( λ ) ) T S 0 1 = T 1 ( λ ) ,
where
S_0 = ∑_{k=1}^{n_0} ( −1 )^{k+1} ( E_{k, n+1−k} + E_{n+1−k, k} ) , if n = 2 n_0 ; S_0 = ∑_{k=1}^{n_0} ( −1 )^{k+1} ( E_{k, n+1−k} + E_{n+1−k, k} ) + ( −1 )^{n_0} E_{n_0+1, n_0+1} , if n = 2 n_0 + 1 .
One can check that if T ( λ ) satisfies (A27), then each of the factors T^± ( λ ), S^± ( λ ) and D^± ( λ ) also satisfies (A27) and thus belongs to the same group G. In addition, we have the following interrelations between the principal minors of T ( λ ):
m j ± ( λ ) = m n j ± ( λ ) , for S O ( n ) , m j ± ( λ ) = m n j ± ( λ ) , for S P ( n ) ,
etc.

References

  1. Zakharov, V.E.; Faddeev, L.D. Korteweg-de Vries equation: A completely integrable Hamiltonian system. Funct. Anal. Appl. 1971, 5, 280–287. [Google Scholar] [CrossRef]
  2. Zabusky, N.J. Fermi–Pasta–Ulam, solitons and the fabric of nonlinear and computational science: History, synergetics, and visiometrics. Chaos 2005, 15, 015102. [Google Scholar] [CrossRef]
  3. Zakharov, V.E.; Shabat, A.B. Exact theory of two-dimensional self-focusing and one-dimensional self-modulation of waves in nonlinear media. Sov. Phys. JETP 1972, 34, 62–69. [Google Scholar]
  4. Zakharov, V.E.; Shabat, A.B. Interaction between solitons in a stable medium. Sov. Phys. JETP 1973, 37, 823–828. [Google Scholar]
  5. Wadati, M. The exact solution of the modified Korteweg de Vries equation. J. Phys. Soc. Jpn. 1972, 32, 1681. [Google Scholar] [CrossRef]
  6. Zakharov, V.E.; Manakov, S.V. On the theory of resonance interactions of wave packets in nonlinear media. Zh. Exp. Teor. Fiz 1975, 69, 1654–1673. [Google Scholar]
  7. Manakov, S.V. On the theory of two-dimensional stationary self-focusing of electromagnetic waves. Sov. Phys. JETP 1974, 38, 248. [Google Scholar]
  8. Ablowitz, A.; Clarkson, A. Solitons, Nonlinear Evolution Equations and Inverse Scattering; Cambridge University Press: Cambridge, UK, 1991. [Google Scholar]
  9. Calogero, F.; Degasperis, A. Spectral transform and solitons. I. In Studies in Mathematics and its Applications; Elsevier: Haarlem, The Netherlands, 1982; Volume 13. [Google Scholar]
  10. Faddeev, L.D.; Takhtadjan, L.A. Hamiltonian Methods in the Theory of Solitons; Springer: Berlin/Heidelberg, Germany, 1987. [Google Scholar]
  11. Gerdjikov, V.S.; Yanovski, A.B. Completeness of the eigenfunctions for the Caudrey–Beals–Coifman system. J. Math. Phys. 1994, 35, 3687–3725. [Google Scholar] [CrossRef]
  12. Novikov, S.; Manakov, S.; Pitaevskii, L.; Zakharov, V. Theory of Solitons: The Inverse Scattering Method; Plenum, Consultants Bureau: New York, NY, USA, 1984. [Google Scholar]
  13. Shabat, A.B. A one-dimensional scattering theory. I. Differ. Uravn. 1972, 8, 164–178. [Google Scholar]
  14. Shabat, A.B. The inverse scattering problem for a system of differential equations. Funkts. Anal. Prilozhen 1975, 9, 75–78. [Google Scholar] [CrossRef]
  15. Gerdjikov, V.S. Algebraic and analytic aspects of n-wave type equations. Contemp. Math. 2002, 301, 35–68. [Google Scholar]
  16. Zakharov, V.E.; Shabat, A.B. A scheme for integrating nonlinear evolution equations of mathematical physics by the inverse scattering method. I. Funkts. Anal. Prilozhen 1974, 8, 43–53. [Google Scholar]
  17. Zakharov, V.E.; Shabat, A.B. Integration of the nonlinear equations of mathematical physics by the inverse scattering method. Funkts. Anal. Prilozhen 1979, 13, 13–22. [Google Scholar] [CrossRef]
  18. Mikhailov, A.V. The reduction problem and the inverse scattering problem. Phys. D 1981, 3D, 73–117. [Google Scholar] [CrossRef]
  19. Beals, R.; Coifman, R.R. Scattering and inverse scattering for first order systems. Commun. Pure Appl. Math. 1984, 37, 39. [Google Scholar] [CrossRef]
  20. Gerdjikov, V.S.; Yanovski, A.B. CBC systems with Mikhailov reductions by Coxeter automorphism. I. Spectral theory of the recursion operators. Stud. Appl. Math. 2015, 134, 145–180. [Google Scholar] [CrossRef]
  21. Helgasson, S. Differential Geometry, Lie Groups and Symmetric Spaces; Graduate Studies in Mathematics; AMS: Providence, RI, USA, 2012; Volume 34. [Google Scholar]
  22. Gerdjikov, V.S. Generalised Fourier transforms for the soliton equations. gauge covariant formulation. Inverse Probl. 1986, 2, 51–74. [Google Scholar] [CrossRef]
  23. Ablowitz, M.J.; Kaup, D.J.; Newell, A.C.; Segur, H. The inverse scattering transform—Fourier analysis for nonlinear problems. Stud. Appl. Math. 1974, 53, 249–315. [Google Scholar] [CrossRef]
  24. Constantin, A.; Gerdjikov, V.S.; Ivanov, R.I. Generalized Fourier transform for the Camassa–Holm equation. Inverse Probl. 2007, 23, 1565–1597. [Google Scholar] [CrossRef]
  25. Gaiarin, S.; Perego, A.M.; da Silva, E.P.; Da Ros, F.; Zibar, D. Dual-polarization nonlinear Fourier transform-based optical communication system. Optica 2018, 5, 263–270. [Google Scholar] [CrossRef]
  26. Gerdjikov, V.S. On nonlocal models of Kulish–Sklyanin type and generalized Fourier transforms. In Advanced Computing in Industrial Mathematics. Studies in Computational Intelligence; Georgiev, K., Todorov, M., Georgiev, I., Eds.; Springer: Berlin/Heidelberg, Germany, 2017; Volume 681, pp. 37–52. [Google Scholar]
  27. Gerdjikov, V.S.; Khristov, E.K. On the evolution equations solvable through the inverse scattering problem. I. The spectral theory. Bulg. J. Phys. 1979, 7, 28–41. [Google Scholar]
  28. Gerdjikov, V.S.; Khristov, E.K. On the evolution equations solvable through the inverse scattering problem. II. Hamiltonian structures and Backlund transformations. Bulg. J. Phys. 1979, 7, 119–133. [Google Scholar]
  29. Gerdjikov, V.S.; Smirnov, A.O.; Matveev, V.B. From generalized Fourier transforms to spectral curves for the Manakov hierarchy. I. Generalized Fourier transforms. Eur. Phys. J. Plus 2020, 135, 659. [Google Scholar] [CrossRef]
  30. Gerdjikov, V.S.; Vilasi, G.; Yanovski, A.B. Integrable Hamiltonian Hierarchies: Spectral and Geometric Methods; Lecture Notes in Physics; Springer: Berlin/Heidelberg, Germany, 2008; Volume 748. [Google Scholar]
  31. Goossens, J.V.; Yousefi, M.; Jaou, Y.; Haffermann, H. Polarization-division multiplexing based on the nonlinear Fourier transform. Opt. Express 2017, 25, 437–462. [Google Scholar] [CrossRef]
  32. Kaup, D.J. Closure of the squared Zakharov–Shabat eigenstates. J. Math. Annal. Appl. 1976, 54, 849–864. [Google Scholar] [CrossRef]
  33. Smirnov, A.O.; Gerdjikov, V.S.; Matveev, V.B. From generalized fourier transforms to spectral curves for the Manakov hierarchy. II. Spectral curves for the Manakov hierarchy. Eur. Phys. J. Plus 2020, 135, 561. [Google Scholar] [CrossRef]
  34. Gerdzhikov, V.S.; Ivanov, M.I.; Kulish, P.P. Quadratic bundle and nonlinear equations. Theor. Math. Phys. 1980, 44, 784–795. [Google Scholar] [CrossRef]
  35. Kaup, D.J.; Newell, A.C. An exact solution for a derivative nonlinear Schrödinger equation. J. Math. Phys. 1978, 19, 798–801. [Google Scholar] [CrossRef]
  36. Kulish, P.P.; Reiman, A.G. Hamiltonian structure of polynomial bundles. J. Sov. Math. 1985, 28, 505–513. [Google Scholar] [CrossRef]
  37. Gerdjikov, V.S. Riemann–Hilbert problems with canonical normalization and families of commuting operators. Pliska Stud. Math. Bulgar. 2012, 21, 201–216. [Google Scholar]
  38. Gerdjikov, V.S.; Stefanov, A.A. New types of two component NLS-type equations. Pliska Stud. Math. 2016, 26, 53–66. [Google Scholar]
  39. Gerdjikov, V.S.; Stefanov, A.A. On an example of derivative nonlinear Schrödinger equation with D2 reduction. Pliska Stud. Math. 2019, 30, 99–108. [Google Scholar]
  40. Gerdjikov, V.S.; Yanovski, A.B. Riemann–Hilbert problems, families of commuting operators and soliton equations. J. Phys. Conf. Ser. 2014, 482, 012017. [Google Scholar] [CrossRef]
  41. Ivanov, R. On the dressing method for the generalised Zakharov–Shabat system. Nucl. Phys. B 2004, 694, 509–524. [Google Scholar] [CrossRef]
  42. Mikhailov, A.V.; Zakharov, V.E. On the integrability of classical spinor models in two-dimensional space–time. Commun. Math. Phys. 1980, 74, 21–40. [Google Scholar]
  43. Zakharov, V.E. The inverse scattering method. In Solitons; Bullough, R.K., Caudrey, P.J., Eds.; Springer: Berlin/Heidelberg, Germany, 1980; pp. 243–286. [Google Scholar]
  44. Zakharov, V.E. Exact solutions of the problem of parametric interaction of wave packets. Dokl. Akad. Nauk SSSR 1976, 228, 1314–1316. [Google Scholar]
  45. Fordy, A.P.; Kulish, P.P. Nonlinear Schrödinger equations and simple lie algebras. Commun. Math. Phys. 1983, 89, 427–443. [Google Scholar] [CrossRef]
  46. Gerdjikov, V.S.; Grahovski, G.G. Multi-component NLS models on symmetric spaces: Spectral properties versus representations theory. Symmetry Integr. Geom. Methods Appl. 2010, 6, 44–73. [Google Scholar] [CrossRef]
  47. Matveev, V.B.; Smirnov, A.O. AKNS hierarchy, MRW solutions, Pn breathers, and beyond. J. Math. Phys. 2018, 59, 091419. [Google Scholar] [CrossRef]
  48. Gerdjikov, R.I.V.; Grahovski, G. On integrable wave interactions and Lax pairs on symmetric spaces. Wave Motion 2017, 71, 53–70. [Google Scholar] [CrossRef]
  49. Gerdjikov, V.S.; Ivanov, R.I.; Stefanov, A.A. Riemann–Hilbert problem, integrability and reductions. J. Geom. Mech. 2019, 11, 167–185. [Google Scholar] [CrossRef]
  50. Valchev, T.I. On certain reductions of integrable equations on symmetric spaces. AIP Conf. Proc. 2011, 1340, 154–164. [Google Scholar]
  51. Warren, O.H.; Elgin, J.N. The vector nonlinear Schrödinger hierarchy. Phys. D 2007, 228, 166–171. [Google Scholar] [CrossRef]
  52. Gerdjikov, V.S.; Ivanov, M.I. The quadratic bundle of general form and the nonlinear evolution equations. I. Expansions over the “squared” solutions are generalized Fourier transforms. Bulg. J. Phys. 1983, 10, 13–26. [Google Scholar]
  53. Gerdjikov, V.S.; Ivanov, M.I. The quadratic bundle of general form and the nonlinear evolution equations. II. Hierarchies of Hamiltonian structures. Bulg. J. Phys. 1983, 10, 130–143. [Google Scholar]
  54. Dai, H.H.; Fan, E.G. Variable separation and algebro-geometric solutions of the Gerdjikov–Ivanov equation. Chaos Solitons Fractals 2004, 22, 93–101. [Google Scholar] [CrossRef]
  55. Fan, E.G. Darboux transformation and solion-like solutions for the Gerdjikov–Ivanov equation. J. Phys. A 2000, 33, 6925–6933. [Google Scholar] [CrossRef]
  56. Fan, E.G. A family of completely integrable multi-hamiltonian systems explicitly related to some celebrated equations. J. Math. Phys. 2001, 42, 4327–4344. [Google Scholar] [CrossRef]
  57. Luo, J.; Fan, E. ¯ -dressing method for the coupled Gerdjikov–Ivanov equation. Appl. Math. Lett. 2020, 110, 106589. [Google Scholar] [CrossRef]
  58. Dickey, L.A. Soliton Equations and Hamiltonian Systems; World Scientific: Singapore, 2003. [Google Scholar]
  59. Gel'fand, I.M.; Dikii, L.A. Fractional powers of operators and Hamiltonian systems. Funct. Anal. Appl. 1976, 10, 259–273. [Google Scholar]
  60. Gel'fand, I.M.; Dikii, L.A. The resolvent and Hamiltonian systems. Funct. Anal. Appl. 1977, 11, 93–105. [Google Scholar]
  61. Gel'fand, I.M.; Dikii, L.A. The calculus of jets and nonlinear Hamiltonian systems. Funct. Anal. Appl. 1978, 12, 81–94. [Google Scholar]
  62. Gel'fand, I.M.; Dikii, L.A. Integrable nonlinear equations and the Liouville theorem. Funct. Anal. Appl. 1979, 13, 6–15. [Google Scholar]
  63. Bury, R.T. Automorphic Lie Algebras, Corresponding Integrable Systems and Their Soliton Solutions. Ph.D. Thesis, University of Leeds, Leeds, UK, 2010. [Google Scholar]
  64. Berkeley, G.; Mikhailov, A.V.; Xenitidis, P. Darboux transformations with tetrahedral reduction group and related integrable systems. J. Math. Phys. 2016, 57, 092701. [Google Scholar] [CrossRef]
  65. Gerdjikov, V.S. zn—reductions and new integrable versions of derivative nonlinear Schrödinger equations. In Nonlinear Evolution Equations: Integrability and Spectral Methods; Lakshmanan, M., Fordy, A.P., Degasperis, A., Eds.; Manchester University Press: Manchester, UK, 1991; pp. 367–379. [Google Scholar]
  66. Gerdjikov, V.S. Derivative nonlinear Schrödinger equations with Zn and Dn–reductions. Rom. J. Phys. 2013, 58, 573–582. [Google Scholar]
  67. Lombardo, S.; Mikhailov, A.V. Reductions of integrable equations: Dihedral group. J. Phys. A 2004, 37, 7727–7742. [Google Scholar] [CrossRef]
  68. Lombardo, S.; Mikhailov, A.V. Reduction groups and automorphic Lie algebras. Commun. Math. Phys. 2005, 258, 179–202. [Google Scholar] [CrossRef]
  69. Gerdjikov, V.S.; Grahovski, G.G.; Mikhailov, A.V.; Valchev, T.I. On soliton interactions for the hierarchy of a generalised Heisenberg ferromagnetic model on SU(3)/S(U(1)× U(2)) symmetric space. J. Geom. Symmetry Phys. 2012, 25, 23–55. [Google Scholar]
  70. Yanovski, A.B.; Valchev, T.I. Pseudo-hermitian reduction of a generalized Heisenberg ferromagnet equation. I. Auxiliary system and fundamental properties. J. Nonlinear Math. Phys. 2018, 25, 324–350. [Google Scholar] [CrossRef]
  71. Drinfel'd, V.G.; Sokolov, V.V. Lie algebras and equations of Korteweg–de Vries type. Sov. J. Math. 1985, 30, 1975–2036. [Google Scholar] [CrossRef]
  72. Bourbaki, N. Elements of Mathematics. Lie Groups and Lie Algebras; Springer: Berlin/Heidelberg, Germany, 2002; Chapters 4–6. [Google Scholar]
  73. Coxeter, H.; Moser, W. Generators and Relations for Discrete Groups, 3rd ed.; Springer: Berlin/Heidelberg, Germany, 1972. [Google Scholar]
  74. Knibbeler, V.; Lombardo, S.; Sanders, J.A. Higher-dimensional automorphic Lie algebras. Found. Comput. Math. 2017, 17, 987–1035. [Google Scholar] [CrossRef]
  75. Mikhailov, A.V. Reductions in integrable systems. The reduction group. JETP Lett. 1980, 32, 187–192. [Google Scholar]
  76. Leznov, A.N.; Saveliev, M.V. Two-dimensional nonlinear equations of the string type and their complete integration. Theor. Math. Phys. 1983, 54, 323–337. [Google Scholar] [CrossRef]
  77. Valchev, T.I. Dressing method and quadratic bundles related to symmetric spaces. Vanishing boundary conditions. J. Math. Phys. 2016, 57, 021508. [Google Scholar] [CrossRef]
  78. Gerdjikov, V.S. Kulish-Sklyanin type models: Integrability and reductions. Theor. Math. Phys. 2017, 192, 1097–1114. [Google Scholar] [CrossRef]
  79. Akhiezer, N.I.; Glazman, I.M. Theory of Linear Operators in Hilbert Space; Dover Publications: New York, NY, USA, 1963. [Google Scholar]
  80. Dunford, N.; Schwartz, J.T. Spectral Theory: Self-Adjoint Operators in Hilbert Space; Linear Operators; Interscience Publishers, Inc.: New York, NY, USA, 1963; Volume 2. [Google Scholar]
  81. Gerdjikov, V.S.; Kulish, P.P. Complete integrable Hamiltonian systems related to the non–self–adjoint Dirac operator. Bulg. J. Phys. 1978, 5, 337–349. [Google Scholar]
  82. Gerdjikov, V.S.; Kulish, P.P. The generating operator for the n × n linear system. Phys. D 1981, 3D, 549–564. [Google Scholar] [CrossRef]
  83. Gerdjikov, V.S. On the spectral theory of the integro-differential operator λ, generating nonlinear evolution equations. Lett. Math. Phys. 1982, 6, 315–324. [Google Scholar] [CrossRef]
  84. Gerdjikov, V.S. Basic aspects of soliton theory. In Geometry, Integrability and Quantization; Hirshfeld, A.C., Mladenov, I.M., Eds.; Sortex: Sofia, Bulgaria, 2005; pp. 78–125. [Google Scholar]
  85. Holm, D.D. Geometric Mechanics Part I: Dynamics and Symmetry; Imperial College Press: London, UK, 2011. [Google Scholar]
  86. Holm, D.D. Geometric Mechanics Part II: Rotating, Translating and Rolling; Imperial College Press: London, UK, 2011. [Google Scholar]
  87. Ablowitz, M.J.; Prinari, B.; Trubatch, A.D. Discrete and Continuous Nonlinear Schrödinger Systems; London Mathematical Society Lecture Note Series; Cambridge University Press: Cambridge, UK, 2004. [Google Scholar]
  88. Mikhailov, A.V.; Shabat, A.B.; Sokolov, V.V. The symmetry approach to classification of integrable equations. In What is Integrability? Springer Series in Nonlinear Dynamics; Zakharov, V.E., Ed.; Springer: Berlin/Heidelberg, Germany, 1991; pp. 115–184. [Google Scholar]
  89. Zakharov, V.E. Integrable systems in multidimensional spaces. In Mathematical Problems in Theoretical Physics; Lecture Notes in Physics; Springer: Berlin/Heidelberg, Germany, 1982; Volume 153, pp. 190–216. [Google Scholar]
  90. Ablowitz, M.J.; Ladik, J.F. Nonlinear differential–difference equations and Fourier analysis. J. Math. Phys. 1976, 17, 1011–1018. [Google Scholar] [CrossRef]
  91. Gerdjikov, V.S.; Ivanov, M.I.; Kulish, P.P. Expansions over the “squared” solutions and difference evolution equations. J. Math. Phys. 1984, 25, 25–34. [Google Scholar] [CrossRef]
  92. Mikhailov, A.V.; Shabat, A.B.; Yamilov, R.I. The symmetry approach to the classification of non-linear equations. Complete lists of integrable systems. Russ. Math. Surv. 1987, 42, 3–53. [Google Scholar] [CrossRef]
93. Mikhailov, A.V.; Shabat, A.B.; Yamilov, R.I. Extension of the module of invertible transformations. Classification of integrable systems. Commun. Math. Phys. 1988, 115, 1–19. [Google Scholar] [CrossRef]
  94. Zhao, G. Integrability of Two-Component Systems of Partial Differential Equations. Ph.D. Thesis, Loughborough University, Loughborough, UK, 2020. [Google Scholar]
  95. Gerdjikov, V.S.; Stefanov, A.A.; Iliev, I.D.; Boyadjiev, G.P.; Smirnov, A.O.; Pavlov, M.V. Recursion operators and the hierarchies of MKdV equations related to D 4 ( 1 ) , D 4 ( 2 ) and D 4 ( 3 ) Kac-Moody algebras. Theor. Math. Phys. 2020, 204, 1110–1129. [Google Scholar] [CrossRef]
  96. Adler, M. On a trace functional for pseudo-differential operators and the symplectic structure of the Korteweg-de Vries equation. Invent. Math. 1979, 50, 219–248. [Google Scholar] [CrossRef]
97. Gerdjikov, V.S.; Mladenov, D.M.; Stefanov, A.A.; Varbev, S.K. MKdV-type of equations related to B 2 ( 1 ) and A 4 ( 2 ) . In Nonlinear Mathematical Physics and Natural Hazards; Springer Proceedings in Physics; Aneva, B., Kouteva-Guentcheva, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2015; Volume 163, pp. 59–69. [Google Scholar]
98. Gerdjikov, V.S.; Mladenov, D.M.; Stefanov, A.A.; Varbev, S.K. On mKdV equations related to the affine Kac-Moody algebra A 5 ( 2 ) . J. Geom. Symmetry Phys. 2015, 39, 17–31. [Google Scholar] [CrossRef]
  99. Gerdjikov, V.S.; Mladenov, D.M.; Stefanov, A.A.; Varbev, S.K. Soliton equations related to the affine Kac-Moody algebra D 4 ( 1 ) . Eur. Phys. J. Plus 2015, 130, 106–123. [Google Scholar] [CrossRef]
  100. Gerdjikov, V.S.; Yanovski, A.B. On soliton equations with Z h and D h reductions: Conservation laws and generating operators. J. Geom. Symmetry Phys. 2013, 31, 57–92. [Google Scholar]
  101. Kaup, D.J. The three-wave interaction—A nondispersive phenomenon. Stud. Appl. Math. 1976, 55, 9–44. [Google Scholar] [CrossRef]
102. Kaup, D.J. On the inverse scattering problem for cubic eigenvalue problems of the class ψxxx + 6Qψx + 6Rψ = λψ. Stud. Appl. Math. 1980, 62, 189–216. [Google Scholar] [CrossRef]
103. Mikhailov, A.V.; Olshanetsky, M.A.; Perelomov, A.M. Two-dimensional generalized Toda lattice. Commun. Math. Phys. 1981, 79, 473–488. [Google Scholar] [CrossRef]
104. Babalic, C.N.; Constantinescu, R.; Gerdjikov, V.S. On the solutions of a family of Tzitzeica equations. J. Geom. Symmetry Phys. 2015, 37, 1–24. [Google Scholar]
105. Babalic, C.N.; Constantinescu, R.; Gerdjikov, V.S. On Tzitzeica equation and spectral properties of related Lax operators. Balk. J. Geom. Appl. 2014, 19, 11–22. [Google Scholar]
106. Qu, C.; Song, J.; Yao, R. Multi-component integrable systems and invariant curve flows in certain geometries. Symmetry Integr. Geom. Methods Appl. 2013, 9, 1–20. [Google Scholar]
  107. Gerdjikov, V.S.; Ivanov, R.I. Multicomponent Fokas-Lenells equations on Hermitian symmetric spaces. Nonlinearity 2021, 34, 939. [Google Scholar] [CrossRef]
  108. Haberlin, J.; Lyons, T. Solitons of shallow-water models from energy-dependent spectral problems. Eur. Phys. J. Plus 2018, 133, 16. [Google Scholar] [CrossRef]
  109. Holm, D.; Ivanov, R. Smooth and peaked solitons of the CH equation. J. Phys. A Math. Theor. 2010, 43, 434003. [Google Scholar] [CrossRef]
  110. Holm, D.; Ivanov, R. Two-component CH system: Inverse scattering, peakons and geometry. Inverse Probl. 2011, 27, 045013–045032. [Google Scholar] [CrossRef]
111. Ivanov, R.; Lyons, T. Integrable models for shallow water with energy dependent spectral problems. J. Nonlinear Math. Phys. 2012, 19, 1240008. [Google Scholar] [CrossRef]
112. Ivanov, R.I. NLS-type equations from quadratic pencil of Lax operators: Negative flows. Chaos Solitons Fractals 2022, 161, 112299. [Google Scholar] [CrossRef]
  113. Gaiarin, S.; Perego, A.M.; da Silva, E.P.; Da Ros, F.; Zibar, D. Experimental demonstration of dual polarization nonlinear frequency division multiplexed optical transmission system. In Proceedings of the 2017 European Conference on Optical Communication (ECOC), Gothenburg, Sweden, 17–21 September 2017; pp. 1–3. [Google Scholar]
  114. Gerdjikov, V.S. Bose-Einstein condensates and spectral properties of multicomponent nonlinear Schrödinger equations. Discret. Contin. Dyn. Syst. 2011, 4, 1181–1197. [Google Scholar] [CrossRef]
  115. Gerdjikov, V.S.; Grahovski, G.G. Two soliton interactions of BD.I multicomponent NLS equations and their gauge equivalent. AIP Conf. Proc. 2010, 1301, 561–572. [Google Scholar]
  116. Gerdjikov, V.S.; Kostov, N.A.; Valchev, T.I. Bose-Einstein condensates with F = 1 and F = 2. Reductions and soliton interactions of multi-component NLS models. In Proceedings of the Ultrafast Nonlinear Optics 2009, Bourgas, Bulgaria, 14–18 September 2009; Volume 7501. [Google Scholar]
  117. Gerdjikov, V.S.; Kostov, N.A.; Valchev, T.I. Solutions of multi-component NLS models and spinor Bose-Einstein condensates. Phys. D 2009, 238, 1306–1310. [Google Scholar] [CrossRef]
  118. Ieda, J.; Miyakawa, T.; Wadati, M. Exact analysis of soliton dynamics in spinor Bose-Einstein condensates. Phys. Rev. Lett. 2004, 93, 194102. [Google Scholar] [CrossRef] [PubMed]
  119. Ieda, J.; Miyakawa, T.; Wadati, M. Matter-wave solitons in an F = 1 spinor Bose-Einstein condensate. J. Phys. Soc. Jpn. 2004, 73, 2996. [Google Scholar] [CrossRef]
  120. Kostov, N.A.; Atanasov, V.A.; Gerdjikov, V.S.; Grahovski, G.G. On the soliton solutions of the spinor Bose-Einstein condensate. In Proceedings of the 14th International School on Quantum Electronics: Laser Physics and Applications, Sunny Beach, Bulgaria, 18–22 September 2006. [Google Scholar]
121. Kulish, P.P.; Sklyanin, E.K. O(N)-invariant nonlinear Schrödinger equation—A new completely integrable system. Phys. Lett. A 1981, 84, 349–352. [Google Scholar] [CrossRef]
  122. Ohmi, T.; Machida, K. Bose–Einstein condensation with internal degrees of freedom in alkali atom gases. J. Phys. Soc. Jpn. 1998, 67, 1822. [Google Scholar] [CrossRef]
  123. Streche-Pauna, A.; Florian, A.; Gerdjikov, V.S. On generalized Kulish-Sklyanin models. Phys. AUC 2020, 30, 175–195. [Google Scholar]
  124. Uchiyama, M.; Ieda, J.; Wadati, M. Dark solitons in F = 1 spinor Bose–Einstein condensate. J. Phys. Soc. Jpn. 2006, 75, 064002. [Google Scholar] [CrossRef]
125. Uchiyama, M.; Ieda, J.; Wadati, M. Multicomponent bright solitons in F = 2 spinor Bose–Einstein condensates. J. Phys. Soc. Jpn. 2007, 76, 074005. [Google Scholar] [CrossRef]
  126. Ueda, M.; Koashi, M. Theory of spin-2 Bose–Einstein condensates: Spin correlations, magnetic response, and excitation spectra. Phys. Rev. A 2002, 65, 063602. [Google Scholar] [CrossRef]
127. Atanasov, V.A.; Gerdjikov, V.S.; Grahovski, G.G.; Kostov, N.A. Fordy-Kulish models and spinor Bose–Einstein condensates. J. Nonlinear Math. Phys. 2008, 15, 291–298. [Google Scholar] [CrossRef]
  128. Gerdjikov, V.S. On soliton interactions of vector nonlinear Schrödinger equations. AIP Conf. Proc. 2011, 1404, 57–67. [Google Scholar]
  129. Gerdjikov, V.S.; Li, N.; Matveev, V.B.; Smirnov, A.O. On soliton solutions and soliton interactions of Kulish–Sklyanin and Hirota–Ohta systems. Theor. Math. Phys. 2022, 213, 1331–1347. [Google Scholar] [CrossRef]
  130. Konopelchenko, B.G. Solitons in Multidimensions. Inverse Spectral Transform Method; World Scientific: Singapore, 1993. [Google Scholar]
  131. Manakov, S.V.; Zakharov, V.E. Soliton Theory. In Soviet Scientific Reviews A; Khalatnikov, I.M., Ed.; IMSc Library: London, UK, 1979; Volume 1, pp. 133–190. [Google Scholar]
  132. Dubrovin, B.A. Matrix finite-zone operators. J. Sov. Math. 1985, 28, 20–50. [Google Scholar] [CrossRef]
133. Enol'skii, V.Z.; Kostov, N.A. Quasiperiodic and periodic solutions for vector nonlinear Schrödinger equations. J. Math. Phys. 2000, 41, 8236. [Google Scholar]
134. Smirnov, A.O. Spectral curves for the derivative nonlinear Schrödinger equations. Symmetry 2021, 13, 1203. [Google Scholar] [CrossRef]
  135. Smirnov, A.O.; Frolov, E.A.; Gerdjikov, V.S. Spectral curves for the multi-phase solutions of Manakov system. IOP Conf. Ser. Mater. Sci. Eng. 2020, 862, 052041. [Google Scholar] [CrossRef]
  136. Smirnov, A.O.; Gerdjikov, V.S.; Aman, E.E. The Kulish–Sklyanin type hierarchy and spectral curves. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1047, 012114. [Google Scholar] [CrossRef]
  137. Smirnov, A.O.; Kolesnikov, A.S. Dubrovin’s method and Ablowitz-Kaup-Newell-Segur hierarchy. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1181, 012028. [Google Scholar] [CrossRef]
Figure 1. Examples of involutions for Lax operators linear in λ . The continuous spectrum fills up the real axis; the FAS χ ± ( x , t , λ ) are analytic for λ ∈ C ± , respectively. Left panel: for reductions of types C 1 and C 3 the discrete eigenvalues come in complex-conjugate pairs, shown by × and ∘; Right panel: for reductions of types C 2 and C 3 there are two types of eigenvalues: pairs of purely imaginary ones ± i λ 0 , shown by ∘, and quadruplets ± μ , ± μ * , shown by ×.
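To make the discrete-spectrum symmetries sketched in Figure 1 concrete, the following minimal Python snippet plots a purely imaginary pair ± i λ 0 and a quadruplet ± μ , ± μ * in the complex λ -plane, with the continuous spectrum on the real axis. The numerical values of lam0 and mu are arbitrary illustrative choices and are not taken from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative sketch of the discrete-spectrum configurations described in Figure 1.
# lam0 and mu are arbitrary sample values, not parameters used in the paper.
lam0 = 1.5          # purely imaginary pair: +i*lam0 and -i*lam0 (marked by 'o')
mu = 1.2 + 0.8j     # generic eigenvalue generating the quadruplet {mu, -mu, mu*, -mu*} ('x')

pair = np.array([1j * lam0, -1j * lam0])
quadruplet = np.array([mu, -mu, np.conj(mu), -np.conj(mu)])

fig, ax = plt.subplots()
ax.axhline(0.0, color="k", lw=0.5)   # continuous spectrum: the real axis
ax.scatter(pair.real, pair.imag, marker="o", label="pair ±iλ0")
ax.scatter(quadruplet.real, quadruplet.imag, marker="x", label="quadruplet ±μ, ±μ*")
ax.set(xlabel="Re λ", ylabel="Im λ")
ax.legend()
plt.show()
```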
Figure 2. The continuous spectrum of the Lax operator (32) and the contour of the related Riemann–Hilbert problem R ∪ i R .
Figure 3. Left panel: contour of a RHP with Z 3 symmetry. The FAS χ + ( x , λ ) is analytic in the sectors Ω 0 ∪ Ω 2 ∪ Ω 4 , while χ − ( x , λ ) is analytic in the sectors Ω 1 ∪ Ω 3 ∪ Ω 5 ; Right panel: contour of a RHP with Z 3 symmetry with additional involution mapping λ → 1 / λ .
Figure 4. Examples of involutions for Lax operators quadratic in λ ; the continuous spectrum fills up R ∪ i R . Left panel: for reductions of types C 1 and C 3 the discrete eigenvalues come in complex-conjugate pairs, shown by ×; for reductions of type C 2 they come in pairs shown by ∘. In both cases, the contour of the RHP is the real axis. Right panel: the first involution maps λ → 1 / λ * or to λ → − 1 / λ * ; the second one maps μ → 1 / μ * or to μ → − 1 / μ * . In both cases, the contour of the RHP is given by the unit circle | λ | 2 = 1 [49].
Figure 5. One-soliton solution (the vertical axis is the modulus squared, the horizontal axis is x) for the 6-wave equations: q 1 (red), q 2 (green), q 3 (yellow) and q 4 (blue) on the left panel; q 5 (red) and q 6 (green) on the right panel. All functions are evaluated for t = 0 and are plotted for x ∈ [ − 0.5 , 0.5 ] . The values of the parameters are | n 10 ⟩ = ( 2 , 2 , 1 , 2 ) T , ⟨ m 10 | = ( 3 , 3 , 2 , 3 ) , a 1 = 2 , a 2 = 1 , b 1 = 3 , b 2 = 1 , μ 1 = 2 , ν 1 = 1 .
Figure 6. The one-soliton solution for the A.III type vector NLS (194). From Equation (231) one finds that all components of q 1 s are proportional to the same function, whose modulus squared is plotted on the left panel in green. For comparison, we have also plotted the modulus squared of the scalar NLS soliton (red). On the right panel, we have plotted the phases of both solutions: in green the phase of (194), and in red the phase of the scalar NLS. The plots are performed for μ 1 = 2 and ν 1 = 1 .
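The scalar-NLS comparison curve in Figure 6 can be reproduced, up to normalization, from the textbook one-soliton of the focusing scalar NLS equation i q t + q xx + 2 | q | 2 q = 0 associated with a Zakharov–Shabat eigenvalue λ 1 = μ 1 + i ν 1 . The sketch below is only illustrative: it uses the standard formula rather than Equations (194) and (231) of the paper, whose sign and scaling conventions may differ; mu1 and nu1 mirror the values quoted in the caption.

```python
import numpy as np
import matplotlib.pyplot as plt

# Minimal sketch, assuming the textbook one-soliton of the scalar focusing NLS
#     i q_t + q_xx + 2 |q|^2 q = 0,
# with Zakharov-Shabat eigenvalue lambda_1 = mu1 + i*nu1.
# Signs and scaling may differ from Eqs. (194) and (231) of the paper.
mu1, nu1 = 2.0, 1.0   # values quoted in the caption of Figure 6

def q_one_soliton(x, t):
    envelope = 2.0 * nu1 / np.cosh(2.0 * nu1 * (x + 4.0 * mu1 * t))
    phase = -2.0 * mu1 * x - 4.0 * (mu1**2 - nu1**2) * t
    return envelope * np.exp(1j * phase)

x = np.linspace(-0.5, 0.5, 400)
q = q_one_soliton(x, 0.0)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))
ax1.plot(x, np.abs(q)**2, "r")              # modulus squared, cf. left panel of Figure 6
ax2.plot(x, np.unwrap(np.angle(q)), "r")    # phase, cf. right panel of Figure 6
ax1.set(xlabel="x", ylabel="|q|^2")
ax2.set(xlabel="x", ylabel="arg q")
plt.tight_layout()
plt.show()
```

In this convention, at t = 0 the modulus squared reduces to 4 ν 1 2 sech 2 ( 2 ν 1 x ), a bell-shaped curve independent of μ 1 , while the phase is linear in x with slope − 2 μ 1 .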
Figure 7. The one-soliton solution for the C.I type NLS (194). The plots are performed for μ 1 = 1 and ν 1 = 3 . Left panel: q 2 , 1 s – red, q 3 , 1 s – green, q 4 , 1 s – orange; Right panel: q 1 , 1 s – red, q 5 , 1 s – green, q 6 , 1 s – orange. In both cases μ 1 = 3 , ν 1 = 1 and n 10 , 1 = 1 + i , n 10 , 2 = 2 i , n 10 , 3 = 3 ( 1 + i ) , n 10 , 4 = 3 ( 1 − i ) , n 10 , 5 = 2 + i , n 10 , 6 = 1 − i .
Figure 8. The one-soliton solution for the D.III type NLS (194). The plots are performed for μ 1 = 3 and ν 1 = 1 . Left panel: q 1 , 1 s – red, q 3 , 1 s – green, q 4 , 1 s – orange, q 6 , 1 s – blue; Right panel: q 2 , 1 s – red, q 5 , 1 s – green. In both cases μ 1 = 3 , ν 1 = 1 and n 10 , 1 = 2 , n 10 , 2 = 3 ( 1 + i ) , n 10 , 3 = 1 + i , n 10 , 4 = 2 ( 1 + i ) , n 10 , 5 = 2 ( 1 − i ) , n 10 , 6 = 3 ( 1 − i ) , n 10 , 7 = 1 − i , n 10 , 8 = 4 .
Figure 9. The same as in Figure 8 but now μ 1 = 2 and ν 1 = 1 .
Figure 10. The contours γ j of the related Riemann–Hilbert problem R ∪ i R .