Article

An Approach to Solving Direct and Inverse Scattering Problems for Non-Selfadjoint Schrödinger Operators on a Half-Line

by Vladislav V. Kravchenko and Lady Estefania Murcia-Lozano *
Department of Mathematics, Cinvestav, Campus Querétaro, Libramiento Norponiente #2000, Fracc. Real de Juriquilla, Querétaro 76230, Mexico
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(16), 3544; https://doi.org/10.3390/math11163544
Submission received: 19 July 2023 / Revised: 6 August 2023 / Accepted: 7 August 2023 / Published: 16 August 2023

Abstract:
In this paper, an approach to solving direct and inverse scattering problems on the half-line for a one-dimensional Schrödinger equation with a complex-valued potential that is exponentially decreasing at infinity is developed. It is based on a power series representation of the Jost solution in a unit disk of a complex variable related to the spectral parameter by a Möbius transformation. This representation leads to an efficient method of solving the corresponding direct scattering problem for a given potential, while the solution to the inverse problem is reduced to the computation of the first coefficient of the power series from a system of linear algebraic equations. The approach to solving these direct and inverse scattering problems is illustrated by several explicit examples and numerical testing.
MSC:
34A55; 34L05; 34L16; 34L25; 34L40; 65L09; 65L15

1. Introduction

Consider the one-dimensional Schrödinger equation
$$ly := -y'' + q(x)\,y = \lambda y, \quad x\in(0,\infty), \qquad (1)$$
with $\lambda\in\mathbb{C}$ and a complex-valued potential $q(x)$ satisfying the condition
$$\int_0^\infty e^{\varepsilon x}\left|q(x)\right|dx < \infty \qquad (2)$$
for some $\varepsilon > 0$. By $\rho$, we denote the square root of $\lambda$ such that $\rho\in\overline{\mathbb{C}^+} := \left\{w\in\mathbb{C} : \operatorname{Im}(w)\ge 0\right\}$. In the present work, an approach to solving direct and inverse scattering problems for (1) under Condition (2) is developed.
Complex-valued potentials arise when studying parity time (PT)-symmetric potentials [1] (Chapter 1), [2], quasi-exactly solvable (QES) potentials [3,4], hydrodynamics, and magnetohydrodynamics [5]; see also [6,7,8].
Studying a Zakharov–Shabat system, even with a real-valued potential, naturally leads to a couple of equations of the form (1) with complex-valued potentials; see [9]. Indeed, consider the Zakharov–Shabat system
$$\upsilon_x(x) = \begin{pmatrix} v_1(x) \\ v_2(x) \end{pmatrix}_x = \begin{pmatrix} -i\rho & u(x) \\ -u(x) & i\rho \end{pmatrix}\upsilon(x), \quad 0 < x < \infty, \qquad (3)$$
where ρ is a complex spectral parameter and u ( x ) is a real-valued potential.
The further transformation of υ ( x ) is as follows:
$$y_1(x) = v_2(x) - i\,v_1(x), \qquad y_2(x) = v_2(x) + i\,v_1(x).$$
This leads to a pair of Schrödinger equations with complex-valued potentials
$$-y_1''(x) + \left(-iu'(x) - u^2(x)\right)y_1(x) = \rho^2 y_1(x), \qquad (4)$$
$$-y_2''(x) + \left(iu'(x) - u^2(x)\right)y_2(x) = \rho^2 y_2(x). \qquad (5)$$
Thus, the results of the present work are applicable to direct and inverse scattering problems for a Zakharov–Shabat system.
A direct scattering problem for (1) with a complex-valued potential was studied in a number of publications ([10,11,12,13]). Equation (1) under Condition (2) was considered in [12] (p. 292), [14,15,16,17,18,19,20] (p. 353), and [21,22].
It is well-known (see, e.g., [12] (p. 443), [18]) that (1) admits a unique solution, which we denote by e ( ρ , x ) , satisfying the asymptotic equality
$$e(\rho,x) = e^{i\rho x}\left(1 + o(1)\right), \quad x\to\infty.$$
This solution is called the Jost solution of (1). It admits the Levin integral representation [12] (see also [18,23,24])
$$e(\rho,x) = e^{i\rho x} + \int_x^\infty A(x,t)\,e^{i\rho t}\,dt, \quad \operatorname{Im}\rho > -\frac{\varepsilon}{2},\ x\ge 0, \qquad (6)$$
where for every fixed x, the kernel A ( x , t ) belongs to L 2 ( x , ) . In [25] (see also [26]) a Fourier–Laguerre series representation for A ( x , t ) was proposed in the form
$$A(x,t) = \sum_{n=0}^{\infty}a_n(x)\,L_n(t-x)\,e^{\frac{x-t}{2}}, \qquad (7)$$
where L n ( τ ) stands for the Laguerre polynomial of order n. A recurrent integration procedure was developed in [27] to calculate the coefficients a n ( x ) . The substitution of (7) into (6) was found to lead to a series representation for the Jost solution [25,26]
$$e(\rho,x) = e^{i\rho x}\left(1 + (z+1)\sum_{n=0}^{\infty}(-1)^n z^n a_n(x)\right), \quad x\ge 0,\ \rho\in\overline{\mathbb{C}^+}, \qquad (8)$$
where
$$z = z(\rho) = \frac{\frac12 + i\rho}{\frac12 - i\rho}. \qquad (9)$$
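The Möbius transformation above is elementary to implement; the following Python sketch (our own illustration — the paper's numerical work is performed in Matlab, and the function names here are ours) checks that $z(\rho)$ maps the closed upper half-plane $\operatorname{Im}\rho \ge 0$ into the closed unit disk, sends $\rho = i/2$ to the center $z = 0$, and that the inverse Möbius transformation recovers $\rho$.

```python
import numpy as np

def z_of_rho(rho):
    # Moebius transformation: z = (1/2 + i*rho) / (1/2 - i*rho)
    return (0.5 + 1j * rho) / (0.5 - 1j * rho)

def rho_of_z(z):
    # inverse map: i*rho = (z - 1) / (2*(z + 1))
    return -1j * (z - 1) / (2 * (z + 1))

# rho = i/2 is mapped to the center of the unit disk
assert abs(z_of_rho(0.5j)) < 1e-14

# points with Im(rho) >= 0 land in the closed unit disk; the map round-trips
rng = np.random.default_rng(0)
for _ in range(100):
    rho = rng.normal() + 1j * abs(rng.normal())
    z = z_of_rho(rho)
    assert abs(z) <= 1.0 + 1e-12
    assert abs(rho_of_z(z) - rho) < 1e-10
```

Real $\rho$ is carried to the boundary $|z| = 1$, which is why the series representation of the Jost solution converges inside the unit disk for $\operatorname{Im}\rho > 0$.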
In the present work, we consider the direct and inverse scattering problems for (1) subject to the homogeneous Dirichlet condition
$$y(0) = 0, \qquad (10)$$
however, the approach developed here is also applicable in the case of other boundary conditions, such as
$$y'(0) - h\,y(0) = 0$$
with $h\in\mathbb{C}$.
The problem (1) and (10) under Condition (2) possesses a continuous spectrum coinciding with the positive semi-axis λ > 0 , and may have a point spectrum that coincides with the squares of the non-real roots of the Jost function
$$e(\rho) := e(\rho, 0),$$
if such roots exist. Let us denote them as $\rho_1,\ldots,\rho_\alpha$. Their multiplicity may be greater than one. In this case, instead of norming constants associated with the eigenvalues, the corresponding normalization polynomials $X_k(x)$ naturally arise (see Section 3.3 below).
As a component of the scattering data for (1), the scattering function
$$s(\rho) := \frac{e(-\rho)}{e(\rho)}$$
is considered in the strip $\left|\operatorname{Im}(\rho)\right| < \varepsilon_0$, where $\varepsilon_0$ is sufficiently small (see Section 3.2 below).
The direct scattering problem for (1) and (10) consists of obtaining the set of the scattering data
$$\left\{\left\{\rho_k,\ m_k,\ X_k(x)\right\}_{k=1}^{\alpha},\ s(\rho)\right\}. \qquad (11)$$
The overall approach developed in the present work to solve this problem is based on the representation (8). Indeed, the calculation of $\left\{\rho_k\right\}_{k=1}^{\alpha}$ is easily realizable with the aid of the argument principle applied to find the zeros of (8) in the unit disc. To the best of our knowledge, there has been no practical way of calculating the normalization polynomials. We propose a simple procedure for computing their coefficients by solving a finite system of linear algebraic equations. For this, an auxiliary result for the derivatives $\frac{\partial^m}{\partial z^m}e(\rho(z),x)$ is obtained.
The calculation of the scattering function $s(\rho)$ requires an analytic extension of the Jost function $e(\rho)$, obtained from (8), onto the strip $-\varepsilon_0 < \operatorname{Im}(\rho) < 0$. We explore different possibilities for such an extension, including the Padé approximants (see [28,29]) and the power series analytic continuation [30] (p. 150), [31]. This results in an efficient numerical method for solving the direct scattering problem.
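Both continuation techniques admit compact implementations. As a minimal generic sketch of the Padé step (a toy illustration under our own conventions, not the code used in the paper), the following Python function builds an [L/M] Padé approximant from Taylor coefficients and evaluates it outside the disk of convergence of the truncated series:

```python
import numpy as np

def pade(c, L, M):
    """[L/M] Pade approximant from Taylor coefficients c[0..L+M].
    Returns numerator and denominator coefficient arrays (b[0] = 1)."""
    c = np.asarray(c, dtype=complex)
    # solve sum_{j=1..M} b_j * c[L+k-j] = -c[L+k] for k = 1..M
    C = np.zeros((M, M), dtype=complex)
    for k in range(1, M + 1):
        for j in range(1, M + 1):
            if L + k - j >= 0:
                C[k - 1, j - 1] = c[L + k - j]
    b = np.concatenate(([1.0], np.linalg.solve(C, -c[L + 1:L + M + 1])))
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, M) + 1))
                  for i in range(L + 1)])
    return a, b

# toy check: f(z) = 1/(1 - z/2) has Taylor coefficients (1/2)^n (radius of
# convergence 2), while its [0/1] Pade approximant reproduces f exactly,
# e.g. at z = 3, where f(3) = -2
c = [0.5 ** n for n in range(2)]
a, b = pade(c, 0, 1)
val = np.polyval(a[::-1], 3.0) / np.polyval(b[::-1], 3.0)
assert abs(val - (-2.0)) < 1e-12
```

In the scattering setting, the same construction is applied to the coefficients of the power series representing the Jost function.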
The inverse scattering problem consists of recovering the potential q ( x ) from the set of the scattering data. A general theory of this inverse problem can be found in [12,13,20] (p. 353), [24,32,33,34,35]. Here, we use the representation (7) for the numerical solution of the problem, thus extending the approach developed in [25,26,36,37,38] to the non-selfadjoint situation. The inverse Sturm–Liouville problem is reduced to an infinite system of linear algebraic equations. The potential q ( x ) is recovered from the first component of the solution vector, which coincides with a 0 ( x ) in (7).
The reduction to the infinite system of linear algebraic equations is based on the substitution of the series representation (7) for the kernel A ( x , t ) into the Gelfand–Levitan equation (see [39]),
$$A(x,t) = \int_x^\infty A(x,u)\,f(u+t)\,du + f(x+t), \quad 0\le x\le t<\infty,$$
where the function f can be computed from the set of scattering data (11):
$$f(x) := \frac{1}{2\pi}\int_{-\infty+i\eta}^{+\infty+i\eta}\left(s(\rho)-1\right)e^{i x\rho}\,d\rho - \sum_{k=1}^{\alpha}X_k(x)\,e^{i\rho_k x}, \quad 0<\eta<\varepsilon_0.$$
To approximate the complex-valued function a 0 ( x ) , we consider the truncated system of linear algebraic equations, for which the existence, uniqueness and stability of the solution is proved.
Finally, we illustrate the proposed approach by numerical calculations performed in Matlab2021a.
We discuss the details of the numerical implementation of the method: its convergence, stability and accuracy. In a couple of examples, we show the “in-out” performance of the approach, i.e., we solve the direct problem numerically and use the results of our computation as the input data to solve the inverse problem.
The approach based on the representations (7) and (8) leads to efficient numerical methods for solving both direct and inverse scattering problems.
In Section 2, we recall the series representations for the kernel A ( x , t ) and for the Jost solution, then prove additional results related to these representations. In Section 3, we recall the set of scattering data and put forward an algorithm for solving the direct scattering problem. Additionally, we present analytical examples. In Section 4, the approach for solving the inverse scattering problem is developed. Analytical examples from Section 2 are considered in order to illustrate the approach. In Section 5, we discuss the numerical implementation of the algorithms proposed for solving the direct and inverse scattering problems. Section 6 contains some concluding remarks.

2. Series Representations for the Transmutation Operator Kernel and Jost Solution

Consider the one-dimensional Schrödinger equation on the half-line (1), where $\lambda = \rho^2\in\mathbb{C}$ is the spectral parameter. The potential $q(x)$ is a complex-valued function satisfying Condition (2) for some $\varepsilon > 0$.
Equation (1) is considered on the class of functions $D(l) = \left\{y\in W_2^2(0,\infty) : ly\in L_2(0,\infty)\right\}$.

Series Representation for Solutions of the One-Dimensional Schrödinger Equation

Equation (1) possesses the unique so-called Jost solution $e(\rho,x)$ (see, e.g., [12] (p. 443), [18]), which for all $x\ge0$ is a holomorphic function of $\rho$ in the half-plane $\operatorname{Im}\rho\ge0$ and satisfies the asymptotic relation
$$e(\rho,x) = e^{i\rho x}\left(1 + o(1)\right) \quad \text{when } x\to\infty \text{ and } \operatorname{Im}\rho\ge0. \qquad (13)$$
The function e ( ρ ) : = e ( ρ , 0 ) is called the Jost function.
Remark 1. 
Under the assumption $q(x)\in L_1(0,\infty)$ instead of (2), for every $x\ge0$ the solution $e(\rho,x)$ is continuous with respect to $\rho$ for $\rho\in\overline{\mathbb{C}^+}\setminus\{0\}$ and holomorphic with respect to $\rho$ for $\rho\in\mathbb{C}^+$. If, in addition, $(1+x)q(x)\in L_1(0,\infty)$, the functions $e^{(\nu)}(\rho,x)$, $\nu = 0,1$, are continuous for $\rho\in\overline{\mathbb{C}^+}$, $x\ge0$ (see [24] (p. 105)).
Remark 2. 
Under Condition (2), the Jost solution satisfies the asymptotic relations
$$\frac{\partial^j}{\partial x^j}e(\rho,x) = (i\rho)^j\,e^{i\rho x} + o\!\left(e^{-\frac{\varepsilon}{2}x}\right), \quad j = 0, 1, \text{ when } x\to\infty,$$
provided these derivatives exist; see [21].
The solution e ( ρ , x ) admits the Levin integral representation [12]
$$e(\rho,x) = e^{i\rho x} + \int_x^\infty A(x,t)\,e^{i\rho t}\,dt, \quad \operatorname{Im}\rho\ge0,\ x\ge0, \qquad (14)$$
where $A(x,t)$ is a complex-valued continuous function for $0\le x\le t<\infty$. Denote $Q(x) := \|q\|_{L_1(x,\infty)}$. The kernel $A(x,t)$ admits the bound [24] (p. 108)
$$\left|A(x,t)\right| \le \frac12\,Q\!\left(\frac{x+t}{2}\right)\exp\!\left(\|Q\|_{L_1(x,\infty)} - \|Q\|_{L_1\left(\frac{x+t}{2},\infty\right)}\right). \qquad (15)$$
Under Condition (2), the Jost solution is extensible onto the half-plane $\operatorname{Im}\rho > -\frac{\varepsilon}{2}$ through the Levin representation (14). The extension satisfies (13) for $\operatorname{Im}\rho > -\frac{\varepsilon}{2}$.
Proposition 1. 
Under Condition (2), the kernel A ( x , t ) admits the bound
$$\left|A(x,t)\right| \le \frac12\,e^{-\varepsilon\frac{x+t}{2}}\int_{\frac{x+t}{2}}^{\infty}e^{\varepsilon\tau}\left|q(\tau)\right|d\tau\;\exp\!\left(\frac{C_\varepsilon}{\varepsilon}\left(e^{-\varepsilon x} - e^{-\varepsilon\frac{x+t}{2}}\right)\right), \quad 0\le x\le t<\infty, \qquad (16)$$
where $C_\varepsilon = \int_0^\infty e^{\varepsilon t}\left|q(t)\right|dt$.
Proof. 
Under Condition (2), the potential q ( x ) satisfies the inequality
$$Q(x) \le e^{-\varepsilon x}\int_x^\infty e^{\varepsilon t}\left|q(t)\right|dt \le C_\varepsilon\,e^{-\varepsilon x}. \qquad (17)$$
Moreover, for any fixed x [ 0 , ) we have [12] (p. 317)
$$\|Q\|_{L_1(x,\infty)} \le \frac{C_\varepsilon}{\varepsilon}\,e^{-\varepsilon x}. \qquad (18)$$
Thus, substitution of (17) and (18) into (15) gives us (16). □
Additionally, the kernel A ( x , t ) has first continuous derivatives that satisfy the inequalities [12] (p. 305)
$$\left|A_x(x,t)\right|,\ \left|A_t(x,t)\right| \le \frac14\left|q\!\left(\frac{x+t}{2}\right)\right| + C_\varepsilon\exp\!\left(-\frac{\varepsilon}{2}\left(3x+t\right)\right), \qquad (19)$$
and the equality [12] (p. 328)
$$A(x,x) = \frac12\int_x^\infty q(t)\,dt. \qquad (20)$$
As was pointed out in [25], since $A(x,\cdot)\in L_2(x,\infty)$, the function
$$a(x,t) := e^{t/2}A(x,\,t+x) \qquad (21)$$
belongs to $L_2\left([0,\infty);\,e^{-t}\right)$ and hence admits the series representation
$$a(x,t) = \sum_{n=0}^{\infty}a_n(x)\,L_n(t), \qquad (22)$$
where $L_n(t)$ stands for the Laguerre polynomial of order $n$ and $a_n(x)$ are complex-valued functions such that $\left\{a_n(x)\right\}_{n=0}^{\infty}\in l_2$ for any $x\ge0$. For all $x\ge0$, the series (22) converges in the norm of $L_2\left([0,\infty);\,e^{-t}\right)$. Thus,
$$A(x,t) = \sum_{n=0}^{\infty}a_n(x)\,L_n(t-x)\,e^{\frac{x-t}{2}} \qquad (23)$$
and
$$\sum_{n=0}^{\infty}a_n(x) = A(x,x) = \frac12\int_x^\infty q(t)\,dt. \qquad (24)$$
This series representation was obtained in [25] for real-valued q ( x ) . However, (23) remains true in the non-selfadjoint case as well.
Proposition 2. 
For any fixed $x\ge0$, the series
$$a(x,t) = \sum_{n=0}^{\infty}a_n(x)\,L_n(t), \quad t\in[0,\infty), \qquad (25)$$
converges pointwise.
Proof. 
We use [40] (Theorem 6.5), and thus need to verify that the following assertions are true.
1. $a(x,\cdot)$ is of class $L_1\left([0,\infty);\,e^{-t}\right)$.
2. $a(x,\cdot)$ is $\gamma$-Hölder continuous, i.e., there exists $0<\gamma\le1$ such that
$$\left|a(x,t_0) - a(x,t)\right| \le M\left|t_0 - t\right|^{\gamma},$$
for some constant M > 0 and arbitrary t, t 0 [ 0 , ) .
3. The integrals
$$\int_0^1 t^{-3/4}\left|a(x,t)\right|dt, \qquad \int_1^\infty e^{-t/2}\left|a(x,t)\right|dt \qquad (26)$$
exist.
To prove the first assertion, it is enough to consider estimate (15). Indeed,
$$\int_0^\infty e^{-t}\left|a(x,t)\right|dt \le \int_0^\infty\left|A(x,x+t)\right|dt \le \frac12\int_0^\infty Q\!\left(\frac{2x+t}{2}\right)\exp\!\left(\|Q\|_{L_1(x,\infty)} - \|Q\|_{L_1\left(\frac{2x+t}{2},\infty\right)}\right)dt$$
$$\le \exp\!\left(\|Q\|_{L_1(x,\infty)}\right)\,\frac12\int_0^\infty Q\!\left(\frac{2x+t}{2}\right)\exp\!\left(-\|Q\|_{L_1\left(\frac{2x+t}{2},\infty\right)}\right)dt = \exp\!\left(\|Q\|_{L_1(x,\infty)}\right)\int_x^\infty Q(\tau)\exp\!\left(-\|Q\|_{L_1(\tau,\infty)}\right)d\tau. \qquad (27)$$
Note that $\frac{d}{d\tau}\|Q\|_{L_1(\tau,\infty)} = -Q(\tau)$, and therefore
$$\int_x^\infty Q(\tau)\exp\!\left(-\|Q\|_{L_1(\tau,\infty)}\right)d\tau = 1 - \exp\!\left(-\|Q\|_{L_1(x,\infty)}\right).$$
Thus,
$$\int_0^\infty e^{-t}\left|a(x,t)\right|dt \le \exp\!\left(\|Q\|_{L_1(x,\infty)}\right) - 1 < \infty.$$
The second assertion follows from the inclusion $A(x,\cdot)\in C^1([x,\infty))$.
The existence of the first integral in (26) follows from the continuity of a ( x , · ) . Finally, for the second integral we have
$$\int_1^\infty e^{-t/2}\left|a(x,t)\right|dt = \int_1^\infty\left|A(x,x+t)\right|dt \le \int_0^\infty\left|A(x,x+t)\right|dt,$$
and thus, from the proof of the first assertion, we obtain $\int_1^\infty e^{-t/2}\left|a(x,t)\right|dt < \infty$.
Now, the application of Theorem 6.5 from [40] completes the proof. □
Following [25] (see also [26] (p. 63)), the substitution of (23) into (14) and termwise integration lead to the series representation (8) for the Jost solution.
The series (8) is convergent in the open unit disk of the complex $z$-plane, $D := \left\{z\in\mathbb{C} : |z| < 1\right\}$, and for every $x$, the function $e(\rho,x)\,e^{-i\rho x}$ belongs to the Hardy space $H^2(D)$ as a function of $z$ [26].
Proposition 3. 
Let $(1+x)\,q(x)\in L_1(0,\infty)$. Then, the kernel $A(x,t)$ admits the representation (23), where for any fixed $x$ the series converges in the norm of $L_2(x,\infty)$, and the complex-valued coefficients $a_n(x)$ satisfy the system of equations
$$la_0 + a_0' = -q, \qquad (28)$$
$$la_n + a_n' = la_{n-1} - a_{n-1}', \quad n = 1, 2, \ldots, \qquad (29)$$
as well as the inequality
$$\left|a_n(x)\right| \le \exp\!\left(\|Q\|_{L_1(x,\infty)}\right) - 1, \quad n = 0, 1, 2, \ldots \qquad (30)$$
Proof. 
The proof of (28) and (29) from [26] (Theorem 10.1, p. 66) given for the case of a real-valued q remains valid in this more general situation as well.
Note that
$$a_n(x) = \int_0^\infty a(x,t)\,L_n(t)\,e^{-t}\,dt. \qquad (31)$$
From estimate (15) and the inequality $\left|L_n(t)\right| \le e^{t/2}$, $t\ge0$ ([41] (p. 164)), we have
$$\left|a_n(x)\right| \le \int_0^\infty\left|A(x,x+t)\right|\left|L_n(t)\right|e^{-t/2}\,dt \le \frac12\int_0^\infty Q\!\left(\frac{2x+t}{2}\right)\exp\!\left(\|Q\|_{L_1(x,\infty)} - \|Q\|_{L_1\left(\frac{2x+t}{2},\infty\right)}\right)dt = \exp\!\left(\|Q\|_{L_1(x,\infty)}\right) - 1$$
(see (27)). □
Corollary 1. 
Under Condition (2), the coefficients a n satisfy the inequality
$$\left|a_n(x)\right| \le \exp\!\left(\frac{C_\varepsilon}{\varepsilon}\,e^{-\varepsilon x}\right) - 1. \qquad (32)$$
Proof. 
Substitution of (17) and (18) into (31) yields (32). □
Remark 3. 
Under the assumption that the functions $a^{(\nu)}(x,t)$ (the derivatives with respect to $t$), $\nu = 0, 1, 2$, are absolutely continuous with respect to $t$ in $[0,\infty)$, the convergence of the power series in (8) for $z\in\overline{D}$ can be proved with the aid of a result from [42], which states that
$$\left|a_n(x)\right| \le \frac{V}{\sqrt{n(n-1)(n-2)}},$$
provided that
$$\lim_{t\to+\infty}e^{-t/2}\,t^{1+j}\,a^{(j)}(x,t) = 0, \quad j = 0, 1, 2, \qquad (33)$$
and
$$V = \left(\int_0^\infty t^3\,e^{-t}\left|a^{(3)}(x,t)\right|^2dt\right)^{1/2} < \infty. \qquad (34)$$
Moreover,
$$\left\|a(x,\cdot) - \sum_{n=0}^{N}a_n(x)\,L_n\right\|_{L_2\left(0,\infty;\,e^{-t}\right)} \le \frac{V}{\sqrt{N(N-1)(N-2)(N-3)}}.$$
To ensure Condition (33) for j = 0 , notice that from (16) we have
$$\left|a(x,t)\right| = e^{t/2}\left|A(x,t+x)\right| \le C\,e^{t/2}\,e^{-\varepsilon\frac{2x+t}{2}}.$$
For j = 1 , Condition (33) holds due to (19). However, the fulfillment of (33) for j = 2 as well as that of (34) requires the additional regularity of q ( x ) , ensuring the possibility of the differentiation of the integral equation for the kernel
$$A(x,t) = \frac12\int_{\frac{x+t}{2}}^{\infty}q(\xi)\,d\xi + \frac12\int_x^{\frac{x+t}{2}}q(\xi)\int_{t+x-\xi}^{t+\xi-x}A(\xi,\eta)\,d\eta\,d\xi + \frac12\int_{\frac{t+x}{2}}^{\infty}q(\xi)\int_{\xi}^{t+\xi-x}A(\xi,\eta)\,d\eta\,d\xi, \quad 0\le x\le t<\infty,$$
at least three times [12] (p. 296).
Remark 4. 
Denote
$$e_N(\rho,x) = e^{i\rho x}\left(1 + (z+1)\sum_{n=0}^{N}(-1)^n z^n a_n(x)\right), \quad \rho\in\mathbb{C}^+.$$
In [27], the following statements were proved in the case of a real-valued potential.
1. If Im ρ > 0 , then
$$\left|e(\rho,x) - e_N(\rho,x)\right| \le \varepsilon_N(x)\,\frac{e^{-\operatorname{Im}\rho\,x}}{\sqrt{2\operatorname{Im}\rho}},$$
where
$$\varepsilon_N(x) := \left(\sum_{n=N+1}^{\infty}\left|a_n(x)\right|^2\right)^{1/2} = \left(\int_0^\infty e^{-t}\left|a(x,t) - \sum_{n=0}^{N}a_n(x)L_n(t)\right|^2dt\right)^{1/2}.$$
2. If ρ R , then
$$\left\|e(\cdot,x) - e_N(\cdot,x)\right\|_{L_2(-\infty,\infty)} = \sqrt{2\pi}\,\varepsilon_N(x).$$
These results remain valid in the case of a complex-valued potential. Moreover, under the assumptions of Remark 3, we obtain the inequality
$$\varepsilon_N(x) \le \frac{V}{\sqrt{N(N-1)(N-2)(N-3)}}.$$
Remark 5. 
The substitution of $\rho = \frac{i}{2}$ into (8) leads to the equality $a_0(x) = e\!\left(\frac{i}{2},\,x\right)e^{x/2} - 1$.
Moreover, note that we have
$$q(x) = \frac{a_0''(x) - a_0'(x)}{a_0(x) + 1}.$$
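Remark 5 can be tested on an explicitly solvable potential. The following Python sketch (the paper's computations are performed in Matlab; the helper names here are ours) takes the closed-form Jost solution of Example 3 below (with $a = 1$, $b = -1-i$), forms $a_0(x) = e(i/2,x)e^{x/2} - 1$, and recovers the potential from $a_0$ via $q = (a_0'' - a_0')/(a_0 + 1)$ approximated by central finite differences:

```python
import numpy as np

b = -1 - 1j
q = lambda x: -2 / np.cosh(x + b) ** 2           # soliton potential, a = 1
e = lambda rho, x: (rho + 1j * np.tanh(x + b)) / (rho + 1j) * np.exp(1j * rho * x)

h = 1e-3
x = np.arange(0.0, 3.0 + h, h)
a0 = e(0.5j, x) * np.exp(x / 2) - 1              # a0 = e(i/2, x) e^{x/2} - 1

a0p = (a0[2:] - a0[:-2]) / (2 * h)               # a0'
a0pp = (a0[2:] - 2 * a0[1:-1] + a0[:-2]) / h**2  # a0''
q_rec = (a0pp - a0p) / (a0[1:-1] + 1)            # q = (a0'' - a0')/(a0 + 1)

err = np.max(np.abs(q_rec - q(x[1:-1])))
assert err < 1e-4
```

The agreement (to the accuracy of the finite differences) confirms that the first coefficient $a_0$ alone determines the potential, which is the key fact behind the inverse scheme of Section 4.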
By ω ( ρ , x ) , we denote the solution of (1), satisfying the initial conditions
$$\omega(\rho,0) = 0, \qquad \frac{d}{dx}\omega(\rho,x)\Big|_{x=0} = 1. \qquad (39)$$
We also need the solution
$$\Omega(\rho,x) = \frac{2i\rho\,\omega(\rho,x)}{e(\rho)}. \qquad (40)$$

3. Direct Problem

3.1. Spectrum of (1) and (10)

Consider the problem (1) and (10) under Condition (2). Let us recall some definitions and facts from [12] (p. 452) (see also [18]). The continuous spectrum fills the entire semi-axis λ > 0 .
Definition 1. 
We call the roots of $e(\rho)$ that lie in $\overline{\mathbb{C}^+}\setminus\{0\}$ the singular numbers of the problem (1) and (10).
If they exist, their number is finite. Let us denote the non-real singular numbers by $\rho_1,\ldots,\rho_\alpha$. The numbers $\lambda_k = \rho_k^2$ constitute the point spectrum of the problem, and the multiplicities of the zeros $\rho_k$, $k = 1,\ldots,\alpha$, are called the multiplicities of the singular numbers and are denoted by $m_k$, respectively.
Thus, we are interested in the zeros z k of the Jost function
$$e(\rho) = 1 + (z+1)\sum_{n=0}^{\infty}(-1)^n z^n a_n(0)$$
to obtain the eigenvalues from $\lambda_k = -\left(\dfrac{z_k - 1}{2(z_k + 1)}\right)^2$.
For an estimate of the number of the eigenvalues, we refer to [43].
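In practice, the zeros $z_k$ are localized by applying the argument principle to the truncated series in the unit disk. The following Python sketch illustrates only the counting step on a toy analytic function with known zeros (our illustration, not the truncated Jost function itself): the number of zeros inside $|z| < r$ equals the winding number of $g$ along the circle $|z| = r$.

```python
import numpy as np

def zeros_in_disk(g, r=1.0, n=4096):
    """Number of zeros of an analytic function g inside |z| < r, computed
    by the argument principle as the winding number of g over |z| = r."""
    th = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    vals = g(r * np.exp(1j * th))
    phases = np.unwrap(np.angle(np.append(vals, vals[0])))
    return int(round((phases[-1] - phases[0]) / (2 * np.pi)))

# toy test function: two zeros inside the unit disk, one outside
g = lambda z: (z - (0.3 + 0.2j)) * (z + 0.4) * (z - 2.0)
assert zeros_in_disk(g) == 2
```

Once the count is known, the individual $z_k$ can be refined, e.g., by subdividing the disk or by a standard root finder started inside each subregion.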

3.2. Scattering Function s ( ρ )

Let us introduce $\varepsilon_1$ as the distance from the real axis to the non-real roots of the function $e(\rho)$. Let $\varepsilon_0 = \min\left\{\varepsilon_1, \frac{\varepsilon}{2}\right\}$ when $\varepsilon_1 \ne 0$ ($\varepsilon_1 = 0$ means that there are no non-real roots), or $\varepsilon_0 = \frac{\varepsilon}{2}$ otherwise.
The scattering function s ( ρ ) is defined by
$$s(\rho) := \frac{e(-\rho)}{e(\rho)}, \quad \left|\operatorname{Im}(\rho)\right| < \varepsilon_0.$$
Let us recall some properties of the Jost function e ( ρ ) and scattering function s ( ρ ) (see, e.g., [39]).
1. $e(\rho)$ is holomorphic for $\operatorname{Im}\rho > -\varepsilon_0$, and for every $0<\eta<\varepsilon_0$ it satisfies the asymptotic relation
$$e(\rho) = 1 + O\!\left(\frac{1}{\rho}\right), \quad \text{as } |\rho|\to\infty,$$
uniformly in the half-plane $\operatorname{Im}\rho\ge-\eta$.
2. $s(\rho)$ is meromorphic in the strip $\left|\operatorname{Im}\rho\right| < \varepsilon_0$, and for every $0<\eta<\varepsilon_0$:
$$s(\rho) = 1 + O\!\left(\frac{1}{\rho}\right), \quad \text{as } |\rho|\to\infty,$$
uniformly in the strip $\left|\operatorname{Im}\rho\right|\le\eta$.
3. $s(\rho)$ has no non-real poles in the strip $\left|\operatorname{Im}\rho\right| < \varepsilon_0$.
4. $s(\rho)\,s(-\rho) = 1$.
5. $s(0) = \pm1$.
A function satisfying properties 2–5 is said to be of S-type in the strip $\left|\operatorname{Im}\rho\right| < \varepsilon_0$. The following examples illustrate some of the above definitions.
Example 1
([44,45]). Consider the potential
$$q_1(x) := 10\,i\,e^{-x}, \quad x\ge0,$$
with 0 < ε < 1 in (2). With the aid of Wolfram Mathematica v.12 the Jost solution can be obtained in a closed form,
$$e_1(\rho,x) = \left(\sqrt{-10i}\right)^{2i\rho}J_{-2i\rho}\!\left((2-2i)\sqrt{5}\;e^{-x/2}\right)\Gamma\!\left(1-2i\rho\right), \quad x\ge0,\ \operatorname{Im}(\rho) > -\frac12,$$
where J ν ( z ) stands for the Bessel function of the first kind of order ν.
Hence,
$$e_1(\rho) = \left(\sqrt{-10i}\right)^{2i\rho}J_{-2i\rho}\!\left((2-2i)\sqrt{5}\right)\Gamma\!\left(1-2i\rho\right),$$
and the eigenvalues are the squares of the values $\rho\in\mathbb{C}^+$ such that
$$J_{-2i\rho}\!\left((2-2i)\sqrt{5}\right) = 0.$$
From here, we obtain the only singular number
$$\rho_0 \approx 1.784065847527427576134879232 + 0.6087886812067186310240220034\,i.$$
The scattering function has the form
$$s_1(\rho) = (1-i)^{-4i\rho}\;5^{-2i\rho}\;\frac{J_{2i\rho}\!\left((2-2i)\sqrt{5}\right)\Gamma\!\left(1+2i\rho\right)}{J_{-2i\rho}\!\left((2-2i)\sqrt{5}\right)\Gamma\!\left(1-2i\rho\right)}.$$
It is well-defined in the domain
$$D_{s_1} = \left\{\rho\in\mathbb{C} : J_{-2i\rho}\!\left((2-2i)\sqrt{5}\right)\ne0\right\}\cap\left(\left\{\operatorname{Im}(\rho) > -\tfrac12\right\}\cup\left\{-2i\rho\in\mathbb{Z}\right\}\right)\cap\left(\left\{\operatorname{Im}(\rho) < \tfrac12\right\}\cup\left\{2i\rho\in\mathbb{Z}\right\}\right)$$
and is an S-type function in the strip $\left|\operatorname{Im}(\rho)\right| < \frac12$.
Example 2. 
Consider the potential
$$q_2(x) := -4i\operatorname{sech}(2x)\tanh(2x) - 4\operatorname{sech}^2(2x), \quad x\ge0,$$
which satisfies Condition (2) for $0 < \varepsilon < 2$. The Schrödinger equation with this potential comes from a Zakharov–Shabat system (3) with the potential $u(x) = 2\operatorname{sech}(2x)$ and its reduction to Equation (5).
The corresponding Jost solution e 2 ( ρ , x ) is obtained from the Jost solution of a Zakharov–Shabat system (see [46]) with the potential u ( x ) ,
$$e_2(\rho,x) = \frac{\rho + i\tanh(2x) + \operatorname{sech}(2x)}{\rho + i}\;e^{i\rho x}, \quad x\ge0,\ \operatorname{Im}(\rho) > -\frac12.$$
Thus, the Jost function is
$$e_2(\rho) = \frac{\rho + 1}{\rho + i}, \quad \operatorname{Im}(\rho) > -\frac12.$$
It has one root, $\rho_* = -1$, which corresponds to the spectral singularity $\lambda_* = \rho_*^2 = 1$.
The scattering function is given by
$$s_2(\rho) = \frac{1-\rho}{i-\rho}\cdot\frac{\rho+i}{\rho+1},$$
which is an S-type function in the strip $\left|\operatorname{Im}\rho\right| < \frac12$.
Example 3
([21]). Consider the potentials of the form
$$q(x) = -2a^2\operatorname{sech}^2(ax+b), \quad x\ge0,\ b\in\mathbb{C},\ a>0, \qquad (49)$$
satisfying Condition (2) for 0 < ε < 2 a . The Jost solution has the form
$$e(\rho,x) = \frac{\rho + ia\tanh(b+ax)}{\rho + ia}\;e^{i\rho x}, \quad x\ge0,\ \operatorname{Im}(\rho) > -a,$$
from which the Jost function is obtained
$$e(\rho) = \frac{\rho + ia\tanh(b)}{\rho + ia}, \quad \operatorname{Im}(\rho) > -a,$$
with the single root $\rho = -ia\tanh(b)$.
The square of this ρ represents the discrete spectrum of the problem. The potential (49) is complex-valued when b is not purely imaginary. The scattering function has the form
$$s(\rho) = \frac{\left(\rho - ia\tanh(b)\right)\left(\rho + ia\right)}{\left(\rho + ia\tanh(b)\right)\left(\rho - ia\right)},$$
which is an S-type function in $\left|\operatorname{Im}(\rho)\right| < \min\left\{a,\ \left|\operatorname{Im}\left(ia\tanh(b)\right)\right|\right\}$ in the case of a complex-valued potential. In the case of a real-valued potential, $s(\rho)$ is an S-type function in $\left|\operatorname{Im}(\rho)\right| < a$.
To present an explicit example, we fix $a = 1$ and $b = -1 - i$ in (49). Then,
$$q_3(x) = -2\operatorname{sech}^2(x - 1 - i), \quad x\ge0,$$
with 0 < ε < 2 in Condition (2) and the Jost solution is
$$e_3(\rho,x) = \frac{\rho + i\tanh(x - 1 - i)}{\rho + i}\;e^{i\rho x}, \quad x\ge0,\ \operatorname{Im}(\rho) > -1.$$
Thus, the Jost function has the form
$$e_3(\rho) = \frac{\rho - i\tanh(1+i)}{\rho + i}, \quad \operatorname{Im}(\rho) > -1,$$
and one eigenvalue exists: $\lambda = -\tanh^2(1+i)$.
The scattering function
$$s_3(\rho) = \frac{\left(\rho + i\tanh(1+i)\right)\left(\rho + i\right)}{\left(\rho - i\tanh(1+i)\right)\left(\rho - i\right)}$$
is an S-type function in the strip $\left|\operatorname{Im}(\rho)\right| < 1$.
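The assertions of Example 3 can be verified numerically. The following Python sketch (our own check; the variable names are ours) confirms that $\rho = i\tanh(1+i)$ annihilates the Jost function $e_3$, that $s_3$ coincides with $e_3(-\rho)/e_3(\rho)$, and that the S-type identity $s(\rho)s(-\rho) = 1$ holds at random points of the strip:

```python
import numpy as np

T = np.tanh(1 + 1j)
e3 = lambda rho: (rho - 1j * T) / (rho + 1j)     # Jost function of Example 3
s3 = lambda rho: ((rho + 1j * T) * (rho + 1j)) / ((rho - 1j * T) * (rho - 1j))

# the singular number rho = i*tanh(1+i) is a root of the Jost function
assert abs(e3(1j * T)) < 1e-14

rng = np.random.default_rng(1)
for _ in range(50):
    rho = rng.normal() + 1j * 0.9 * (rng.random() - 0.5)  # |Im(rho)| < 0.45
    assert abs(s3(rho) - e3(-rho) / e3(rho)) < 1e-10      # definition of s
    assert abs(s3(rho) * s3(-rho) - 1) < 1e-10            # S-type property
```

Both identities hold exactly at the level of the rational expressions, so the tolerances only absorb floating-point roundoff.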

3.3. Normalization Polynomials

The normalization polynomial $X_k(x)$ of degree $m_k - 1$, associated with the eigenvalue $\rho_k^2$ ($m_k$ being the algebraic multiplicity of $\rho_k$ as a zero of $e(\rho)$), is defined by the equation [18]
$$-i\operatorname{Res}\left(\Omega(\rho,x);\rho_k\right) = e^{i\rho_k x}X_k(x) + \int_x^\infty A(x,t)\,X_k(t)\,e^{i\rho_k t}\,dt, \qquad (51)$$
where Ω ( ρ , x ) is defined by (40). Using the series representation (23) of the kernel A ( x , t ) , we can obtain a method to compute the coefficients of X k ( x ) .
Remark 6. 
Note that the series (8) can be written as
$$e(\rho,x) = e^{i\rho x}\left(1 + (z+1)\sum_{n=0}^{\infty}(-1)^n a_n(x)\,P_n^{(-n,0)}(1+2z)\right), \quad \rho\in\mathbb{C}^+\ (z\in D), \qquad (52)$$
in terms of the Jacobi polynomials P n ( α , β ) ( τ ) .
Let us write Equation (51) in terms of the Jost solution and Jacobi polynomials, as follows.
Proposition 4. 
Let $\lambda_k = \rho_k^2$, $k\in\{1,\ldots,\alpha\}$, be an eigenvalue of the problem (1) and (10), and let $m_k$ be its multiplicity. Then for the normalization polynomial $X_k(x)$, the following equality holds:
$$-i\operatorname{Res}\left(\Omega(\rho,x);\rho_k\right) = X_k(x)\,e(\rho_k,x) + e^{i\rho_k x}\sum_{n=0}^{\infty}(-1)^n a_n(x)\sum_{j=1}^{m_k-1}\frac{d^jX_k(x)}{dx^j}\,(z_k+1)^{j+1}P_n^{(j-n,0)}(1+2z_k), \quad x\ge0. \qquad (53)$$
Proof. 
The substitution of (23) into (51) yields
$$-i\operatorname{Res}\left(\Omega(\rho,x);\rho_k\right) = e^{i\rho_k x}\left(X_k(x) + \int_0^\infty\sum_{n=0}^{\infty}a_n(x)\,L_n(s)\,e^{-\left(\frac12 - i\rho_k\right)s}\,X_k(x+s)\,ds\right).$$
Here, we change the order of summation and integration due to Parseval’s identity [47] (p. 16) and additionally use the equality
$$X_k(x+s) = \sum_{j=0}^{m_k-1}\frac{s^j}{j!}\,\frac{d^jX_k(x)}{dx^j}.$$
Thus,
$$-i\operatorname{Res}\left(\Omega(\rho,x);\rho_k\right) = e^{i\rho_k x}\left(X_k(x) + \sum_{n=0}^{\infty}a_n(x)\sum_{j=0}^{m_k-1}\frac{1}{j!}\,\frac{d^jX_k(x)}{dx^j}\int_0^\infty L_n(s)\,e^{-\left(\frac12 - i\rho_k\right)s}\,s^j\,ds\right).$$
The last integral can be explicitly evaluated [48] (Formula 7.414 (7))
$$\int_0^\infty L_n(s)\,e^{-\left(\frac12 - i\rho_k\right)s}\,s^j\,ds = j!\,(z_k+1)^{j+1}F(-n,\,j+1,\,1;\,z_k+1) = j!\,(z_k+1)^{j+1}(-1)^n P_n^{(j-n,0)}(1+2z_k),$$
where F ( a , b , c ; z ) stands for the hypergeometric function [49] (p. 56). Thus, we have the equation
$$-i\operatorname{Res}\left(\Omega(\rho,x);\rho_k\right) = e^{i\rho_k x}\left(X_k(x) + \sum_{n=0}^{\infty}(-1)^n a_n(x)\sum_{j=0}^{m_k-1}(z_k+1)^{j+1}\,\frac{d^jX_k(x)}{dx^j}\,P_n^{(j-n,0)}(1+2z_k)\right),$$
and due to Remark 6, we obtain (53). □
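The Laplace-transform evaluation used in the proof can be checked numerically. The sketch below (our own Python check; the terminating hypergeometric sum is implemented directly) compares a quadrature of $\int_0^\infty L_n(s)e^{-(\frac12 - i\rho)s}s^j\,ds$ with $j!\,(z+1)^{j+1}F(-n, j+1, 1; z+1)$ at a sample point $\rho$ with $\operatorname{Re}\left(\frac12 - i\rho\right) > 0$:

```python
import math
import numpy as np
from numpy.polynomial.laguerre import lagval

def F(n, b, w):
    # terminating hypergeometric series F(-n, b, 1; w)
    term, total = 1.0 + 0j, 1.0 + 0j
    for k in range(n):
        term *= (-n + k) * (b + k) * w / ((k + 1) ** 2)
        total += term
    return total

rho = 0.2 + 0.3j
p = 0.5 - 1j * rho                       # Re(p) = 0.8 > 0
z = (0.5 + 1j * rho) / (0.5 - 1j * rho)

s = np.linspace(0.0, 80.0, 800001)       # e^{-0.8 s} makes the tail negligible
h = s[1] - s[0]
for n in (0, 1, 3):
    for j in (0, 1, 2):
        Ln = lagval(s, np.eye(n + 1)[n])                 # Laguerre L_n(s)
        f = Ln * np.exp(-p * s) * s ** j
        lhs = h * (f.sum() - 0.5 * (f[0] + f[-1]))       # trapezoid rule
        rhs = math.factorial(j) * (z + 1) ** (j + 1) * F(n, j + 1, z + 1)
        assert abs(lhs - rhs) < 1e-5
```

Note that for $j = 0$ the identity reduces to $\int_0^\infty L_n(s)e^{-ps}\,ds = (z+1)(-z)^n$, the computation behind the series representation (8).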
Hereinafter, $C_k^n = \binom{n}{k}$ denotes the binomial coefficient.
Lemma 1. 
The m-th derivative of the Jost solution e ( ρ , x ) with respect to the variable z admits the representation
$$\frac{\partial^m}{\partial z^m}e(\rho,x) = e(\rho,x)\sum_{j=0}^{m-1}(-1)^jC_j^{m-1}\,\frac{m!}{(m-j)!}\,x^{m-j}(z+1)^{j-2m} + e^{i\rho x}\sum_{n=0}^{\infty}(-1)^n a_n(x)\sum_{j=2}^{m+1}P_n^{(j-n-1,0)}(1+2z)\sum_{s=j-1}^{m}(-1)^{s+j+1}C_{s-j+1}^{m-1}\,\frac{m!}{(m-s)!}\,x^{m-s}(z+1)^{s+1-2m}, \qquad (54)$$
where $\rho\in\mathbb{C}^+$ ($z\in D$) and $x\ge0$.
Proof. 
We use the identity [50] (p. 3)
$$(z+1)\,F_z(-n,j,1,z+1) = j\,F(-n,j+1,1,z+1) - j\,F(-n,j,1,z+1), \qquad (55)$$
where F z means the derivative with respect to z, and j, n are integers.
Let us prove the lemma by induction. For m = 1 , from (52), we have
$$\frac{\partial}{\partial z}e(\rho,x) = e^{i\rho x}\,\frac{x}{(z+1)^2}\left(1 + (z+1)\sum_{n=0}^{\infty}(-1)^n a_n(x)\,P_n^{(-n,0)}(1+2z)\right) + e^{i\rho x}\left(\sum_{n=0}^{\infty}a_n(x)\,F(-n,1,1,z+1) + (z+1)\sum_{n=0}^{\infty}a_n(x)\,F_z(-n,1,1,z+1)\right).$$
The application of (55) gives
$$\frac{\partial}{\partial z}e(\rho,x) = \frac{x\,e(\rho,x)}{(z+1)^2} + e^{i\rho x}\sum_{n=0}^{\infty}(-1)^n a_n(x)\,P_n^{(1-n,0)}(1+2z). \qquad (56)$$
Consider Formula (54) as the induction hypothesis for m = k . The idea is to prove the equation
$$\frac{\partial}{\partial z}\left(e(\rho,x)\sum_{j=0}^{k-1}(-1)^jC_j^{k-1}\,\frac{k!}{(k-j)!}\,x^{k-j}(z+1)^{j-2k}\right) - e^{i\rho x}\sum_{n=0}^{\infty}a_n(x)\,F(-n,2,1,z+1)\sum_{j=0}^{k-1}(-1)^jC_j^{k-1}\,\frac{k!}{(k-j)!}\,x^{k-j}(z+1)^{j-2k} = e(\rho,x)\sum_{j=0}^{k}(-1)^jC_j^{k}\,\frac{(k+1)!}{(k+1-j)!}\,x^{k+1-j}(z+1)^{j-2k-2} \qquad (57)$$
and the equality
$$\frac{\partial}{\partial z}\left(e^{i\rho x}\sum_{n=0}^{\infty}a_n(x)\sum_{j=2}^{k+1}F(-n,j,1,z+1)\sum_{m=j-1}^{k}(-1)^{m+j+1}C_{m-j+1}^{k-1}\,\frac{k!}{(k-m)!}\,x^{k-m}(z+1)^{m+1-2k}\right) + e^{i\rho x}\sum_{n=0}^{\infty}a_n(x)\,F(-n,2,1,z+1)\sum_{j=0}^{k-1}(-1)^jC_j^{k-1}\,\frac{k!}{(k-j)!}\,x^{k-j}(z+1)^{j-2k}$$
$$= e^{i\rho x}\sum_{n=0}^{\infty}a_n(x)\sum_{j=2}^{k+2}F(-n,j,1,z+1)\sum_{m=j-1}^{k+1}(-1)^{m+j+1}C_{m-j+1}^{k}\,\frac{(k+1)!}{(k-m+1)!}\,x^{k-m+1}(z+1)^{m-2k-1}. \qquad (58)$$
Then, noting that the second terms on the left-hand side of (57) and (58) coincide up to the sign, the desired result is obtained by summing up both equations.
The proof of Equations (57) and (58) is presented in Appendix A, which completes the proof of the Lemma. □
Where no confusion can arise, we consider a fixed $\rho = \rho_k$ with multiplicity $m = m_k$ and the corresponding normalization polynomial $X(x) = X_k(x)$; thus, the index $k$ is omitted in the following two statements.
Lemma 2. 
The coefficients $b_j$ of a normalization polynomial $X(x)$ of degree $m-1$,
$$X(x) = \sum_{j=0}^{m-1}b_j\,x^j, \qquad (59)$$
satisfy the equation
$$-i\operatorname{Res}\left(\Omega(\rho,x);\rho_k\right) = b_0\,e(\rho_k,x) + \sum_{n=1}^{m-1}\sum_{r=0}^{n-1}b_n\,\frac{n!}{(r+1)!}\,C_r^{n-1}\left.\frac{\partial^{r+1}}{\partial z^{r+1}}e(\rho,x)\right|_{z=z_k}(z_k+1)^{n+r+1}. \qquad (60)$$
Proof. 
Comparing (53) with (60) we see that, in fact, we need to prove the equality
$$X(x)\,e(\rho,x) + e^{i\rho x}\sum_{n=0}^{\infty}(-1)^n a_n(x)\sum_{j=1}^{m-1}\frac{d^jX(x)}{dx^j}\,(z+1)^{j+1}P_n^{(j-n,0)}(1+2z) = b_0\,e(\rho,x) + \sum_{n=1}^{m-1}\sum_{r=0}^{n-1}b_n\,\frac{n!}{(r+1)!}\,C_r^{n-1}\,\frac{\partial^{r+1}}{\partial z^{r+1}}e(\rho,x)\,(z+1)^{n+r+1}. \qquad (61)$$
Note that
$$X(x)\,e(\rho,x) + e^{i\rho x}\sum_{n=0}^{\infty}(-1)^n a_n(x)\sum_{j=1}^{m-1}\frac{d^jX(x)}{dx^j}\,(z+1)^{j+1}P_n^{(j-n,0)}(1+2z) = \sum_{s=0}^{m-1}b_s x^s\,e(\rho,x) + e^{i\rho x}\sum_{n=0}^{\infty}(-1)^n a_n(x)\sum_{j=1}^{m-1}\frac{d^j}{dx^j}\!\left(\sum_{s=0}^{m-1}b_s x^s\right)(z+1)^{j+1}P_n^{(j-n,0)}(1+2z)$$
$$= \sum_{s=0}^{m-1}b_s x^s\,e(\rho,x) + e^{i\rho x}\sum_{n=0}^{\infty}(-1)^n a_n(x)\sum_{s=1}^{m-1}\sum_{j=1}^{s}\frac{s!}{(s-j)!}\,b_s\,x^{s-j}(z+1)^{j+1}P_n^{(j-n,0)}(1+2z)$$
$$= b_0\,e(\rho,x) + \sum_{s=1}^{m-1}b_s\left(e(\rho,x)\,x^s + e^{i\rho x}\sum_{n=0}^{\infty}(-1)^n a_n(x)\sum_{j=1}^{s}\frac{s!}{(s-j)!}\,x^{s-j}(z+1)^{j+1}P_n^{(j-n,0)}(1+2z)\right). \qquad (62)$$
Then, upon comparison of (61) with (62), it can be observed that proving (61) is equivalent to proving the equality
$$\sum_{r=0}^{s-1}\frac{s!}{(r+1)!}\,C_r^{s-1}\,\frac{\partial^{r+1}}{\partial z^{r+1}}e(\rho,x)\,(z+1)^{s+r+1} \qquad (63)$$
$$= e(\rho,x)\,x^s + e^{i\rho x}\sum_{n=0}^{\infty}\sum_{j=1}^{s}(-1)^n a_n(x)\,\frac{s!}{(s-j)!}\,x^{s-j}(z+1)^{j+1}P_n^{(j-n,0)}(1+2z) \qquad (64)$$
for some natural number s m 1 . Thus, we are going to prove (64). The substitution of the term with the derivative in (63) by Formula (54) for m = r + 1 is enough to obtain (64) as follows
$$\sum_{r=0}^{s-1}\frac{s!}{(r+1)!}\,C_r^{s-1}\,\frac{\partial^{r+1}}{\partial z^{r+1}}e(\rho,x)\,(z+1)^{s+r+1} = \sum_{r=0}^{s-1}\frac{s!}{(r+1)!}\,C_r^{s-1}\Bigg(e(\rho,x)\sum_{j=0}^{r}(-1)^jC_j^{r}\,\frac{(r+1)!}{(r+1-j)!}\,x^{r+1-j}(z+1)^{j-2r-2} + e^{i\rho x}\sum_{n=0}^{\infty}a_n(x)\sum_{j=2}^{r+2}F(-n,j,1,z+1)\sum_{l=j-1}^{r+1}(-1)^{l+j+1}C_{l-j+1}^{r}\,\frac{(r+1)!}{(r-l+1)!}\,x^{r-l+1}(z+1)^{l-2r-1}\Bigg)(z+1)^{s+r+1}$$
$$= \sum_{r=0}^{s-1}\frac{s!}{(r+1)!}\,C_r^{s-1}\Bigg(e(\rho,x)\sum_{j=0}^{r}(-1)^jC_j^{r}\,\frac{(r+1)!}{(r+1-j)!}\,x^{r+1-j}(z+1)^{j-2r-2} + e^{i\rho x}\sum_{n=0}^{\infty}a_n(x)\sum_{j=1}^{r+1}F(-n,j+1,1,z+1)\sum_{l=j}^{r+1}(-1)^{l+j}C_{l-j}^{r}\,\frac{(r+1)!}{(r-l+1)!}\,x^{r-l+1}(z+1)^{l-2r-1}\Bigg)(z+1)^{s+r+1} = e(\rho,x)\,x^s + e^{i\rho x}\sum_{n=0}^{\infty}\sum_{j=1}^{s}a_n(x)\,\frac{s!}{(s-j)!}\,x^{s-j}(z+1)^{j+1}F(-n,j+1,1;z+1).$$
This completes the proof of the Lemma. □
Equation (60) provides us with a simple method for computing the coefficients b j in (59), and consequently for calculating the normalization polynomials.
Theorem 1. 
The coefficients $b_j$ of a normalization polynomial $X_k(x) = \sum_{j=0}^{m_k-1}b_j x^j$ corresponding to a complex singular number $\rho_k$ satisfy the system of linear algebraic equations
$$\mathbf{A}\cdot B = D, \qquad (65)$$
where $\mathbf{A}$ is an $m\times m_k$ matrix with entries defined by
$$A_{jn} = \begin{cases} e(\rho_k,\,x_j), & n = 1,\\[4pt] \displaystyle\sum_{r=0}^{n-2}\frac{(n-1)!}{(r+1)!}\,C_r^{n-2}\left.\frac{\partial^{r+1}}{\partial z^{r+1}}e(\rho,x_j)\right|_{z=z_k}(z_k+1)^{n+r}, & 1 < n \le m_k. \end{cases} \qquad (66)$$
Here, $x_j\ge0$ are distinct points, $j = 1,\ldots,m$ ($m\ge m_k$); $B$ is the $m_k$-dimensional vector whose entries are the coefficients of the normalization polynomial, $B_n = b_{n-1}$, $n = 1,\ldots,m_k$; and $D$ is the $m$-dimensional vector defined by
$$D_j = -i\operatorname{Res}\left(\Omega(\rho,x_j);\rho_k\right).$$
Proof. 
The proof consists of observing that each row in (65) is just Formula (60) corresponding to a point x j . The number of rows must be at least m k ; otherwise, the system (65) is underdetermined. □
Thus, the coefficients of the normalization polynomial are obtained from the system (65).
Definition 2. 
A set
$$J = \left\{\left\{\rho_k,\ m_k,\ X_k(x)\right\}_{k=1}^{\alpha},\ s(\rho)\right\}$$
is called the scattering data set of problem (1) and (10).
Here, $\rho_k$ are the non-real singular numbers, $m_k$ are their multiplicities, $X_k(x)$ are the corresponding normalization polynomials, and $s(\rho)$ is the scattering function (an S-type function in the strip $\left|\operatorname{Im}\rho\right| < \varepsilon_0$).
In order to recall a result on the characterization of the scattering data, we need the following definition [39].
Definition 3. 
Let $s(\rho)$ be an S-type function in the strip $\left|\operatorname{Im}\rho\right| < \varepsilon_0$, and let $\mathcal{L}$ be a curve lying in the strip and running from $-\infty$ to $+\infty$, such that all roots (poles) of $s(\rho)$ are situated above (below) $\mathcal{L}$. The increment, divided by $2\pi$, of a continuous branch of $\operatorname{Arg}s(\rho)$, when $\rho$ runs along $\mathcal{L}$ from $-\infty$ to $+\infty$, is called the index of $s(\rho)$ and is denoted by $\operatorname{Ind}s$.
Let us assume that a set J as in Definition 2 is given. A necessary and sufficient condition (obtained in [18]) to ensure that this set represents the scattering data for a problem (1) and (10) with Condition (2) is the following relation
Ind s + 2 m + ϰ = 0
where
$$m = m_1 + \cdots + m_\alpha, \qquad \varkappa = \frac12\left(1 - s(0)\right) = \begin{cases} 0 & \text{for } e(0)\ne0,\\ 1 & \text{for } e(0) = 0. \end{cases}$$
In the case when m k = 1 , the notion of the Birkhoff solution is useful for computing the corresponding norming constants.
Remark 7. 
Let E ( ρ , x ) denote the Birkhoff solution of Equation (1) (see [24] (p. 113)), i.e., a solution satisfying the asymptotic relation
E^{(ν)}(ρ, x) = (−iρ)^ν e^{−iρx} (1 + o(1)), x → ∞, ν = 0, 1,
uniformly for | ρ | δ , for each δ > 0 . For Im ρ > 0 , this solution is not unique. Indeed, if E 0 ( ρ , x ) is a Birkhoff solution, then E ( ρ , x ) = E 0 ( ρ , x ) + c e ( ρ , x ) is also a Birkhoff solution of (1) for any constant c C . Note that for ρ = ρ k (a singular number of the problem), the values of all Birkhoff solutions at the origin coincide. We have E ( ρ k ) : = E ( ρ k , 0 ) = E 0 ( ρ k , 0 ) , because e ( ρ k ) = 0 . Moreover,
E(ρ_k) = 2iρ_k / e′(ρ_k, 0), where ′ denotes the derivative with respect to x,
which can be observed by considering the Wronskian W [ e ( ρ , x ) , E ( ρ , x ) ] = 2 i ρ .
Remark 8. 
The solution ω ( ρ , x ) satisfying Conditions (39) has the form
ω(ρ, x) = [ E(ρ) e(ρ, x) − e(ρ) E(ρ, x) ] / (2iρ), ρ ∈ C̄₊.
Note that ρ k is a pole of Ω ( ρ , x ) in the upper half-plane of the complex variable ρ if and only if it is a root of the Jost function e ( ρ ) (see (40)). Thus, in case of a simple pole ρ k in Equation (67), the residue can be computed as follows
Res(Ω(ρ, x); ρ_k) = Res( 2iρ ω(ρ, x)/e(ρ); ρ_k ) = Res( [E(ρ) e(ρ, x) − e(ρ) E(ρ, x)]/e(ρ); ρ_k ) = [E(ρ_k) e(ρ_k, x) − e(ρ_k) E(ρ_k, x)] / ė(ρ_k) = E(ρ_k) e(ρ_k, x) / ė(ρ_k),
where e ˙ ( ρ ) : = d d ρ e ( ρ ) , and the corresponding normalization polynomial (in fact normalization constant) is given by
c_k = Res(Ω(ρ, x); ρ_k) / (i e(ρ_k, x)) = E(ρ_k) / (i ė(ρ_k)).
Moreover, due to (70), we have
c_k = 2ρ_k / (ė(ρ_k) e′(ρ_k, 0)).
Similarly to the case of a real-valued potential [51] (p. 95), one can see that
c_k = 1 / ∫₀^∞ e²(ρ_k, x) dx = 1 / ( e′(ρ_k, 0)² ∫₀^∞ ω²(ρ_k, x) dx ).
If |q(x)| ≤ c₁ exp(−c₂|x|^γ), γ > 1 (for some constants c₁, c₂ > 0), then e(ρ) is an entire function of ρ (see [51] (p. 95)). In this case, as a Birkhoff solution E(ρ, x), one can consider the Jost solution e(−ρ, x), Im(ρ) > 0, and hence from (72) we obtain
c_k = e(−ρ_k) / (i ė(ρ_k)).
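As a self-contained numerical illustration of the last formula, one can take a made-up entire function with a simple zero in place of the Jost function and approximate the ρ-derivative by a central difference. (The function e below and its zero ρ_k = 2i are hypothetical; the convention c_k = e(−ρ_k)/(i ė(ρ_k)) is assumed.)

```python
# Hypothetical entire "Jost function" with a simple zero at rho_k = 2i
e = lambda rho: (rho - 2j) * (rho + 0.5j)
rho_k = 2j

h = 1e-6                                    # central-difference step
e_dot = (e(rho_k + h) - e(rho_k - h)) / (2 * h)

c_num = e(-rho_k) / (1j * e_dot)            # norming constant, assumed convention
c_ref = e(-rho_k) / (1j * (rho_k + 0.5j))   # same, using the exact derivative
```

The central difference is exact for this quadratic example up to rounding; for the truncated series e_N(ρ), the differentiation procedure of [27] is preferable.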
Example 4. 
According to Remark 8, the normalization constant associated with the unique eigenvalue of the operator from Example 3 is
c₁ = e(−ia tanh(b)) / (i ė(ia tanh(b))) = 2a tanh(b) (1 + tanh(b)) / (1 − tanh(b)),
and, in particular, for q 3 ( x ) , we have
c₁ = 2(1 + tanh(1 + i)) tanh(1 + i) / (1 − tanh(1 + i)).
Example 5. 
With the aid of Remark 8, an approximate value of the normalization constant for the eigenvalue λ 0 from Example 1 is obtained
c 0 16.339391035537 + 40.670169841396 i .

3.4. Numerical Algorithm

The direct problem can be solved approximately via the following steps.
  • Compute the Jost function using (41) and the recurrent integration procedure from [27], for Im ( ρ ) 0 .
  • Extend the Jost function e ( ρ ) to ε 0 < Im ( ρ ) < 0 using any convenient technique, such as the classic analytic continuation, Padé approximants [28] or some other approach [52,53].
  • Obtain the scattering function s ( ρ ) for 0 Im ( ρ ) < ε 0 by Formula (42).
  • To locate the eigenvalues, find the non-real poles of the function Ω ( ρ , x ) , which is equivalent to finding zeros of the function e ( ρ ) in the unit disk in terms of z. This can be achieved with the aid of the argument principle. In particular, in the present work, we compute the change in the argument along rectangular contours γ . If the change in the argument along γ is zero, consider another contour; otherwise, subdivide the region within the contour until the desired accuracy is attained. Note that for a sufficiently large N, the zeros of e_N(ρ) approximate the square roots of the eigenvalues of the problem arbitrarily closely. The proof is analogous to that in [54] and is based on the Rouché theorem from complex analysis.
  • Obtain the normalization polynomials.
    5.1
    For simple poles, use Remark 8 to obtain the normalization constants.
    5.2
    Otherwise, for higher multiplicities, solve the linear system of Equation (65) for the coefficients b_{n_k}, n_k = 0, 1, …, m_k − 1, computing A_{j,n} and D_j defined in Equations (66) and (67) for several values of x_j.
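Step 4 (zero localization by the argument principle with contour subdivision) can be sketched generically as follows; the function whose zero is sought is an arbitrary analytic example here, not the Jost function itself:

```python
import numpy as np

def winding(f, x0, x1, y0, y1, n=400):
    """Change of Arg f / (2*pi) along the rectangle boundary, counterclockwise;
    equals the number of zeros inside for f analytic and nonzero on the boundary."""
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    z = np.concatenate([
        x0 + t * (x1 - x0) + 1j * y0,        # bottom edge
        x1 + 1j * (y0 + t * (y1 - y0)),      # right edge
        x1 + t * (x0 - x1) + 1j * y1,        # top edge
        x0 + 1j * (y1 + t * (y0 - y1)),      # left edge
    ])
    z = np.append(z, z[0])                   # close the contour
    phase = np.unwrap(np.angle(f(z)))
    return int(round((phase[-1] - phase[0]) / (2 * np.pi)))

def locate_zeros(f, x0, x1, y0, y1, tol=1e-3):
    """Recursively subdivide rectangles whose winding number is nonzero."""
    if winding(f, x0, x1, y0, y1) == 0:
        return []
    if max(x1 - x0, y1 - y0) < tol:
        return [complex((x0 + x1) / 2, (y0 + y1) / 2)]
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    roots = []
    for a, b in ((x0, xm), (xm, x1)):
        for c, d in ((y0, ym), (ym, y1)):
            roots += locate_zeros(f, a, b, c, d, tol)
    return roots

# analytic test function with a single zero at 0.7 + 1.3i inside the box
roots = locate_zeros(lambda z: z**2 - (0.7 + 1.3j)**2, 0.0, 2.0, 0.1, 2.1)
```

In practice, the located approximations can then be polished by a few Newton iterations, as mentioned later in connection with [54].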

4. Inverse Problem

In order to reconstruct the potential in (1) from the scattering data, it is convenient to introduce the function [39]
ϕ_s(x) = (1/2π) ∫_{−∞+iη}^{+∞+iη} (s(ρ) − 1) e^{iρx} dρ,
where η is a number satisfying the inequalities 0 < η < ε₀ (ε₀ is defined in Section 3.2), and the function
f(x) = ϕ_s(x) − ∑_{k=1}^α X_k(x) e^{iρ_k x}, x ≥ 0.
Remark 9. 
Hereinafter, we use the notation
∫_{L_η} = ∫_{−∞+iη}^{+∞+iη}
for 0 < η < ε₀, where L_η represents the line parallel to the real axis passing through iη.
The kernel A ( x , t ) and the function f ( x ) satisfy the following Gel’fand–Levitan (G-L) equation [39] (Theorem 10.1)
A(x, t) = ∫_x^∞ A(x, u) f(u + t) du + f(x + t), 0 ≤ x ≤ t < ∞.

4.1. Infinite Linear Algebraic System for Coefficients a n ( x )

Following [38], from the G-L Equation (78), we deduce the following system of linear algebraic equations for the coefficients a n ( x ) from the series representation (23).
Theorem 2. 
The complex-valued functions a n ( x ) satisfy the equations
a_m(x) − ∑_{n=0}^∞ a_n(x) A_{mn}(x) = f_m(2x), m = 0, 1, …,
where
f_m(x) := ∫₀^∞ f(s + x) L_m(s) e^{−s/2} ds,   A_{mn}(x) := ∫₀^∞ f_n(2x + s) L_m(s) e^{−s/2} ds.
Proof. 
Substitution of the series representation (23) into (78) leads to the equalities
f(x + t) = ∑_{n=0}^∞ a_n(x) L_n(t − x) e^{−(t−x)/2} − ∑_{n=0}^∞ a_n(x) ∫_x^∞ L_n(u − x) e^{−(u−x)/2} f(u + t) du = ∑_{n=0}^∞ a_n(x) L_n(t − x) e^{−(t−x)/2} − ∑_{n=0}^∞ a_n(x) ∫₀^∞ L_n(y) e^{−y/2} f(x + y + t) dy,
where the change in the order of summation and integration is justified by the general Parseval identity [47] (p. 16).
We have
∫_x^∞ A(x, u) f(u + t) du = ⟨A(x, x + u), f̄(u + x + t)⟩_{L²(0,∞)} = ∑_{n=0}^∞ ⟨A(x, x + u), e^{−u/2} L_n(u)⟩_{L²(0,∞)} ⟨e^{−u/2} L_n(u), f̄(u + x + t)⟩_{L²(0,∞)} = ∑_{n=0}^∞ a_n(x) ∫₀^∞ e^{−u/2} L_n(u) f(u + x + t) du.
Denote s = t x . Equation (81) is equivalent to
f(s + 2x) = ∑_{n=0}^∞ a_n(x) L_n(s) e^{−s/2} − ∑_{n=0}^∞ a_n(x) ∫₀^∞ L_n(y) e^{−y/2} f(s + 2x + y) dy.
Multiplying the last equation by L m ( s ) e s 2 and integrating this, we obtain
∫₀^∞ f(s + 2x) L_m(s) e^{−s/2} ds = ∑_{n=0}^∞ a_n(x) ∫₀^∞ L_n(s) L_m(s) e^{−s} ds − ∑_{n=0}^∞ a_n(x) ∫₀^∞ L_m(s) e^{−s/2} ∫₀^∞ f(s + 2x + y) L_n(y) e^{−y/2} dy ds.
Note that
∫₀^∞ L_n(s) L_m(s) e^{−s} ds = δ_{mn},
and
∫₀^∞ f(s + 2x + y) L_n(y) e^{−y/2} dy = f_n(2x + s).
Thus, from (83) we obtain (79). □

4.2. Expressions for f m ( x ) and A m n ( x )

It is convenient to regard the functions f m ( x ) and A m n ( x ) as a sum of the components corresponding to the continuous f m , c ( x ) , A m n , c ( x ) and discrete spectra f m , d ( x ) , A m n , d ( x ) , and simplify these expressions with the aid of the formula ([48], Formula 7.414 (6))
∫₀^∞ L_m(s) e^{(iρ − 1/2)s} ds = (−1)^m (1/2 + iρ)^m / (1/2 − iρ)^{m+1}.
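This integral identity can be checked numerically with a fine trapezoidal rule on a truncated interval; the values of m and ρ below are arbitrary test choices (for this ρ, the integrand decays like e^{−0.7s}):

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

m, rho = 3, 0.4 + 0.2j
s = np.linspace(0.0, 80.0, 400001)
h = s[1] - s[0]
Lm = lagval(s, np.eye(m + 1)[m])            # Laguerre polynomial L_m(s)
f = Lm * np.exp((1j * rho - 0.5) * s)       # integrand L_m(s) e^{(i rho - 1/2) s}

numeric = (f[0] / 2 + f[1:-1].sum() + f[-1] / 2) * h   # trapezoidal rule
closed = (-1) ** m * (0.5 + 1j * rho) ** m / (0.5 - 1j * rho) ** (m + 1)
```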
The continuous and discrete components for the function f m ( x ) have the form
f_{m,c}(x) := ∫₀^∞ ϕ_s(s + x) L_m(s) e^{−s/2} ds = (1/2π) ∫_{L_η} (s(ρ) − 1) e^{iρx} ∫₀^∞ L_m(s) e^{(iρ − 1/2)s} ds dρ = ((−1)^m / 2π) ∫_{L_η} (s(ρ) − 1) (1/2 + iρ)^m / (1/2 − iρ)^{m+1} e^{iρx} dρ,
and
f m , d ( x ) : = k = 1 α 0 X k ( s + x ) e i ( s + x ) ρ k L m ( s ) e s 2 d s , = k = 1 α e i ρ k x 0 j = 0 m k 1 1 j ! d j X k ( x ) d x j s j e i ρ k s L m ( s ) e s 2 d s , = k = 1 α e i ρ k x j = 0 m k 1 1 j ! d j X k ( x ) d x j 0 s j L m ( s ) e 1 2 i ρ k s d s , = ( 1 ) m + 1 k = 1 α j = 0 m k 1 e i ρ k x d j X k ( x ) d x j ( z k + 1 ) j + 1 P m ( j m , 0 ) ( 1 + 2 z k ) .
For the function A m n , c ( x ) , we have
A_{mn,c}(x) := ∫₀^∞ L_m(s) f_{n,c}(2x + s) e^{−s/2} ds = ((−1)^n / 2π) ∫_{L_η} (s(ρ) − 1) (1/2 + iρ)^n / (1/2 − iρ)^{n+1} ∫₀^∞ L_m(s) e^{(iρ − 1/2)s} ds e^{2iρx} dρ = ((−1)^{n+m} / 2π) ∫_{L_η} (s(ρ) − 1) (1/2 + iρ)^{n+m} / (1/2 − iρ)^{n+m+2} e^{2iρx} dρ,
and for A m n , d ( x ) , we use (84) to obtain
A m n , d ( x ) = 0 L m ( s ) f n , d ( 2 x + s ) e s 2 d s = 0 L m ( s ) k = 1 α j = 0 m k 1 d j d x j X k ( 2 x + s ) ( z k + 1 ) j + 1 F ( n , j + 1 , 1 ; z k + 1 ) e 2 i ρ k x e 1 2 i ρ k s d s = k = 1 α j = 0 m k 1 ( z k + 1 ) j + 1 F ( n , j + 1 , 1 ; z k + 1 ) e 2 i ρ k x 0 d j d x j p = 0 m k 1 s p 2 p p ! d p d x p X k ( 2 x ) L m ( s ) e 1 2 i ρ k s d s = k = 1 α j = 0 m k 1 p = 0 m k 1 1 2 p p ! d p + j d x p + j X k ( 2 x ) ( z k + 1 ) j + 1 F ( n , j + 1 , 1 ; z k + 1 ) e 2 i ρ k x 0 s p L m ( s ) e 1 2 i ρ k s d s = 1 m + n + 1 k = 1 α j = 0 m k 1 p = 0 m k 1 j 1 2 p d p + j d x p + j X k ( 2 x ) ( z k + 1 ) p + j + 2 P n ( j n , 0 ) ( 1 + 2 z k ) P m ( p m , 0 ) ( 1 + 2 z k ) e 2 i ρ k x .
Remark 10. 
When an eigenvalue ρ k 2 is simple and the corresponding normalization polynomial X k ( x ) is just a normalization constant c k , expressions (86) and (88) can be written in the form
f_{m,d}(x) = −∑_{k=1}^α c_k e^{iρ_k x} (−z_k)^m (z_k + 1),
A_{mn,d}(x) = −∑_{k=1}^α c_k e^{2iρ_k x} (−z_k)^{m+n} (z_k + 1)².
We illustrate the calculation of the functions (85)–(88) with some examples.
Example 6. 
Consider the scattering function obtained in Example 2:
s 2 ( ρ ) = 1 ρ i ρ ρ + i ρ + 1
in the strip 0 Im ( ρ ) < 1 , with no discrete spectrum and thus no normalization polynomials. Let us compute the function ϕ s ( x ) defined by (75), where the line L η lies in the strip 0 < Im ( ρ ) < 1 . Since the function s 2 ( ρ ) is analytic in the strip 0 < Im ( ρ ) < 1 , the value of the integral is independent of the choice of 0 < η < 1 . Using Jordan’s lemma to calculate the integral in (75), we obtain
f(x) = ϕ_s(x) = 2i e^{−x}.
Now, computing the functions f m ( x ) and A m n ( x ) from Formula (85) and (87) and using the residue theorem, we obtain
f_m(x) = 4i · 3^{−(m+1)} e^{−x},   A_{nm}(x) = 8i · 3^{−(n+m+2)} e^{−2x}.
Thus, in the case of the potential q 2 ( x ) , the system of Equation (79) can be written explicitly.
Example 7. 
Consider the scattering function s 3 ( ρ ) from Example 3. It has two poles in the upper half-plane: at i and i tanh ( 1 + i ) . Hence, using the residue theorem, we find that
ϕ_s(x) = 2(1 + tanh(1 + i)) [ e^{−x} − tanh(1 + i) e^{−x tanh(1+i)} ] / (tanh(1 + i) − 1),  f(x) = −2 e^{2 − x + 2i},  f_m(x) = −4 · 3^{−(m+1)} e^{2 − x + 2i},  A_{mn}(x) = −8 · 3^{−(m+n+2)} e^{2 − 2x + 2i}.
Again, the corresponding system of Equation (79) can be written explicitly.
Example 8. 
Consider s₁(ρ) from Example 1. To compute ϕ_s(x), we consider the singularities of s₁(ρ) in the upper half-plane. From the set D(s₁) (see Example 1), we have that s₁(ρ) has an infinite number of isolated singularities at the points ρ_k = ik/2 with k ∈ ℕ and a singular number ρ₀; see (46). Using properties of the gamma function, we obtain
Res ρ = ρ k k = 1 , 2 , ( s 1 ( ρ ) 1 ) = ( 1 i ) 4 i ρ k 5 2 i ρ k J 2 i ρ k ( ( 2 2 i ) 5 ) J 2 i ρ k ( ( 2 2 i ) 5 ) Γ ( 1 2 i ρ k ) Res ρ = ρ k k = 1 , 2 , Γ 1 + 2 i ρ = ( 1 i ) 2 k 5 k J k ( ( 2 2 i ) 5 ) J k ( ( 2 2 i ) 5 ) Γ ( 1 + k ) ( 1 ) k i 2 ( k 1 ) ! = 5 ( 1 i ) 2 k 2 k ! ( k 1 ) ! i ,
and
Res_{ρ=ρ₀} (s₁(ρ) − 1) = e(−ρ₀)/ė(ρ₀) = i c₀,
where c 0 is the normalization constant obtained in Example 5. Therefore, for x > 0 , we have
ϕ s ( x ) = i e i x ρ 0 Res ρ = ρ 0 ( s 1 ( ρ ) 1 ) + i k = 1 e x k 2 Res ρ = ρ k ( s 1 ( ρ ) 1 ) = c 0 e i x ρ 0 1 2 k = 1 e x 2 k ( k 1 ) ! = c 0 e i x ρ 0 + e e x / 2 x / 2 2 .
Hence, the function f ( x ) has the form
f ( x ) = c 0 e i x ρ 0 + e e x / 2 x / 2 2 c 0 e i x ρ 0 = e e x / 2 x / 2 2 ,
and we obtain the functions f m , c ( x ) and f m , d ( x ) in terms of z 0 = 1 2 + i ρ 0 1 2 i ρ 0 (see (9)) as follows
f n , c ( x ) = c 0 e i x ρ 0 ( 1 ) n z 0 n ( z 0 + 1 ) + ( 1 ) n k = 1 e x 2 k ( k 1 ) ! 1 k n 1 + k n + 1 ,
and
f n , d ( x ) = ( 1 ) n + 1 c 0 e i x ρ 0 ( z 0 + 1 ) z 0 n .
Thus, from Equations (92) and (93) we obtain
f n ( x ) = ( 1 ) n k = 1 e x 2 k ( k 1 ) ! 1 k n 1 + k n + 1 .
Likewise, applying the residue theorem, we have
A m n ( x ) = 2 ( 1 ) n + m k = 1 e x k ( k 1 ) ! 1 k n + m 1 + k n + m + 2 .
Thus, as in the previous two examples, the system of Equation (79) can be written explicitly.
The cancellation of terms when summing up (92) with (93) is not incidental and is generalized below in Remark 12.
To calculate the integrals in the functions f_m and A_{mn} in the case when the scattering function is given explicitly, we apply Jordan's lemma and the residue theorem, taking into account the asymptotics (44). However, often the function s(ρ) is not given in a closed form but as a table of data; then, the following techniques can be useful for computing the integrals. First, we recall a widely used technique for the quadrature of highly oscillatory integrals through approximations of the Fourier sine and cosine transforms. This is illustrated below in Example 16. A second option is a transformation of the integrals in f_m and A_{mn} into integrals over a finite interval, providing a certain advantage for numerical implementation. This is illustrated below in Example 21.
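The first technique can be sanity-checked on an integrand with a known Fourier transform; a sketch with the illustrative choice g(σ) = e^{−σ²/2}, whose full-line transform is √(2π) e^{−x²/2} (this g is unrelated to any actual scattering data):

```python
import numpy as np

g = lambda sigma: np.exp(-sigma**2 / 2)       # even, so the sine part vanishes
psi = lambda sigma: g(sigma) + g(-sigma)      # even part (equals 2 g here)

x, h, N = 1.5, 0.01, 2000                     # step h, truncation at N*h = 20
k = np.arange(N + 1)
approx = h * np.sum(psi((k + 0.5) * h) * np.cos(x * (k + 0.5) * h))

exact = np.sqrt(2 * np.pi) * np.exp(-x**2 / 2)   # known Fourier integral
```

For a rapidly decaying, smooth integrand, the midpoint sum converges extremely quickly in h; for slowly decaying data, the truncation N h must be taken correspondingly large.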
Remark 11. 
We mainly discuss the calculation of the functions f m . The calculation of A m n is analogous.
1.
Suppose s ( ρ ) is given in a closed form. By Pol we denote the set of its poles in the open upper half-plane. Since s ( ρ ) satisfies the asymptotics (44), the integral in (85) can be computed with the aid of Jordan’s lemma and the residue theorem as follows
f_{m,c}(x) = (−1)^m i ∑_{ρ_j ∈ Pol} Res_{ρ=ρ_j} [ (z + 1) z^m (s(ρ) − 1) e^{iρx} ],
provided the series on the right-hand side is convergent; see [55] (p. 459).
If Pol contains only simple poles, we obtain
f_{m,c}(x) = (−1)^m i ∑_{ρ_j ∈ Pol} (z_j + 1) z_j^m e^{iρ_j x} Res_{ρ=ρ_j} (s(ρ) − 1).
2.
Consider the integral in (85)
f_{m,c}(x) = ((−1)^m e^{−ηx} / 2π) ∫_{−∞}^{+∞} (s(σ + iη) − 1) (1/2 + i(σ + iη))^m / (1/2 − i(σ + iη))^{m+1} e^{iσx} dσ
for some 0 < η < ε₀. Following the approach from [56] (p. 236), denote
g(σ) := (s(σ + iη) − 1) (1/2 + i(σ + iη))^m / (1/2 − i(σ + iη))^{m+1},
and set ψ(σ) = g(σ) + g(−σ), ϕ(σ) = g(σ) − g(−σ). Then
∫_{−∞}^{+∞} (s(σ + iη) − 1) (1/2 + i(σ + iη))^m / (1/2 − i(σ + iη))^{m+1} e^{iσx} dσ = ∫₀^∞ ψ(σ) cos(σx) dσ + i ∫₀^∞ ϕ(σ) sin(σx) dσ.
The integrals on the right hand side (the Fourier cosine and sine transforms) are approximated by the corresponding sums
h ∑_{k=0}^N ψ((k + 1/2)h) cos(x(k + 1/2)h)   and   h ∑_{k=0}^N ϕ(kh) sin(xkh),
where h and N are chosen to be sufficiently small and large, respectively.
3.
Transform the line L_η into a circle centered at −2η/(1 + 2η) of radius 1/(1 + 2η) with the aid of the formulas
ρ = i(1 + 4η − exp(iθ)) / (2(1 + exp(iθ))),  dρ = (1 + 2η) exp(iθ) / (1 + exp(iθ))² dθ.
This enables us to consider the integral in (85) in the form
f_{m,c}(x) = ((−1)^m / 2π) ∫₀^{2π} [ s( i(1 + 4η − exp(iθ)) / (2(1 + exp(iθ))) ) − 1 ] exp( −x (1 + 4η − exp(iθ)) / (2(1 + exp(iθ))) ) · [ 1/2 − (1 + 4η − exp(iθ)) / (2(1 + exp(iθ))) ]^m / [ 1/2 + (1 + 4η − exp(iθ)) / (2(1 + exp(iθ))) ]^{m+1} · (1 + 2η) exp(iθ) / (1 + exp(iθ))² dθ,
reducing the integration to a finite interval.
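The parametrization can be verified numerically: for every θ (away from the pole at θ = π), the point ρ(θ) = i(1 + 4η − e^{iθ})/(2(1 + e^{iθ})) lies on the line Im ρ = η. (The formula for ρ(θ) is the substitution as reconstructed here.)

```python
import numpy as np

eta = 0.3
# sample theta in (0, 2*pi), avoiding the pole of rho(theta) at theta = pi
theta = np.concatenate([np.linspace(0.1, np.pi - 0.1, 400),
                        np.linspace(np.pi + 0.1, 2 * np.pi - 0.1, 400)])
w = np.exp(1j * theta)
rho = 1j * (1 + 4 * eta - w) / (2 * (1 + w))

# every sample lies on the horizontal line Im(rho) = eta
max_dev = np.max(np.abs(rho.imag - eta))
```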
Remark 12. 
Suppose that the eigenvalues are simple and Formula (74) is applicable. Denote by K = {ρ₁, …, ρ_α} the set of non-real singular numbers. From (89), (90) and (95), we have that the functions f_m(x) and A_{mn}(x) can be computed as
f m ( x ) = ( 1 ) m i ρ j Pol K 1 e ˙ ( ρ j ) ( z j ) m ( z j + 1 ) e i ρ j x Res e ( ρ ) ; ρ = ρ j , A m n ( x ) = ( 1 ) n + m i ρ j Pol K 1 e ˙ ( ρ j ) ( z j ) n + m ( z j + 1 ) 2 e 2 i ρ j x Res e ( ρ ) ; ρ = ρ j .

4.3. Stability of the System and Its Solution

Consider the truncated system (79):
a_m(x) − ∑_{n=0}^M a_n(x) A_{mn}(x) = f_m(2x), m = 0, …, M.
Denote its solution by U_M = (a_m^M(x))_{m=0}^M. In the following two theorems, we prove the unique solvability of (100) and the convergence of its solution to the exact one, as well as its stability.
Theorem 3. 
Let x ≥ 0 be fixed. Consider the system (100) truncated to M + 1 equations. Then, for a sufficiently large M, the truncated system is uniquely solvable, and
a_m^M(x) → a_m(x), M → ∞, m = 0, 1, …
Proof. 
Since (f_m(2x))_{m=0}^∞ ∈ ℓ², (A_{m,n}(x))_{m,n=0}^∞ ∈ ℓ² ⊗ ℓ², and we look for (a_m(x))_{m=0}^∞ ∈ ℓ², the assertion of the theorem for the truncated system follows directly from the general theory presented in [57] (Chapter 14, §3). □
Theorem 4. 
The approximate solution a m M ( x ) m = 0 M of the system is stable.
Proof. 
Note that the truncated system (100) coincides with that obtained by applying the Bubnov–Galerkin procedure to the G-L Equation (78) with the orthonormal system of Laguerre polynomials in L²(0, ∞; e^{−x}); see [58] (§14). Let I_M denote the (M + 1) × (M + 1) identity matrix, L_M = [A_{m,n}(x)]_{m,n=0}^M be the coefficient matrix of the truncated system and R_M = (f_m(2x))_{m=0}^M the right-hand side of (100). Following [58] (§9), consider the inexact system
(I_M + L_M + Γ_M) V_M = R_M + δ_M,
where Γ_M is an (M + 1) × (M + 1) matrix representing errors in the coefficients A_{m,n}, and δ_M is the column vector representing errors in the coefficients f_m. Let V_M be a solution of the inexact system. The solution of the Bubnov–Galerkin procedure is said to be stable if there exist constants c₁, c₂ > 0 such that for ‖Γ_M‖ ≤ r and arbitrary δ_M the inexact system is solvable, and the following inequality holds
‖U_M − V_M‖ ≤ c₁‖Γ_M‖ + c₂‖δ_M‖.
Now, since in the case under consideration, the inequality (102) is true (see [58] (Theorems 14.1 and 14.2)), the approximate solution is stable. □
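The convergence and stability stated above can be observed on a synthetic example. For the made-up choice f(x) = e^{−x} (an illustrative stand-in, not actual scattering data), the integrals (80) evaluate in closed form via (84) to f_m(x) = 2 · 3^{−(m+1)} e^{−x} and A_{mn}(x) = 4 · 3^{−(m+n+2)} e^{−2x}, and the infinite system (79) then admits the exact solution a_m(x) = c(x) 3^{−(m+1)} with c(x) = 2e^{−2x}/(1 − e^{−2x}/2). A sketch comparing the truncated solve (100) with this closed form:

```python
import numpy as np

def solve_truncated(x, M):
    """Truncated system (100) for the synthetic closed-form data above:
    a_m - sum_{n=0}^M A_{mn} a_n = f_m(2x), m = 0, ..., M."""
    m = np.arange(M + 1)
    E = np.exp(-2 * x)
    f = 2.0 * 3.0 ** -(m + 1) * E                        # f_m(2x)
    A = 4.0 * 3.0 ** -(m[:, None] + m[None, :] + 2) * E  # A_{mn}(x)
    return np.linalg.solve(np.eye(M + 1) - A, f)

x = 0.3
a = solve_truncated(x, 16)
c = 2 * np.exp(-2 * x) / (1 - np.exp(-2 * x) / 2)        # exact c(x)
```

Since the coefficients decay geometrically, the truncation error at M = 16 is already far below rounding level, in line with the fast convergence reported in the numerical examples below.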

4.4. Algorithm to Recover the Potential

Given a scattering data set J as in Definition 2, the algorithm to recover q ( x ) consists of the following steps.
  • Compute the functions f m ( x ) and A m n ( x ) with the aid of (85)–(88).
  • Solve the truncated system of linear algebraic Equation (100) to obtain the coefficient a 0 ( x ) .
  • Recover the potential q ( x ) from (38).

5. Numerical Examples

We implemented the algorithms proposed in Section 3.4 and Section 4.4 to solve the direct and inverse problems, respectively, with machine precision and with the aid of Matlab2021. Several examples are discussed, some of which have been introduced in previous sections.

5.1. Direct Problem

In this subsection, we discuss the computation of the scattering data, based on the series representation of the Jost solution (8). We deal with the approximate solution obtained by truncating the series (36).
The computation of the coefficients a n ( x ) is performed with the aid of the recurrent integration procedure from [27].
First of all, we discuss the choice of the number N in (36). Below, we show that a satisfactory accuracy is attained for a relatively small N (from several units to several dozens), and a reliable indicator
ε_N = max_x | ∑_{n=0}^N a_n(x) − (1/2) ∫_x^∞ q(t) dt |
can be used to choose an appropriate N.
In the case of simple singular numbers ρ k , the norming constants can be computed with the aid of (73):
c_k ≈ 2ρ_k / (ė_N(ρ_k) e′_N(ρ_k, 0)).
Another possibility consists of using (74) in the form
c_k ≈ [m, n]e_N(−ρ_k) / (i ė_N(ρ_k)),
where [m, n]e_N(−ρ_k) stands for the Padé approximant of e_N, evaluated at ρ = −ρ_k. This can be used when the accuracy of this rational approximation in the upper half-plane is satisfactory, i.e., when one has a suitably small value of max |[m, n]e_N(ρ) − e_N(ρ)| in a sufficiently large region in the upper half-plane of the complex variable ρ.
A reliable algorithm to compute derivatives of (36) in (104) is proposed in [27].
To obtain the scattering function (42) in the strip |Im(ρ)| < ε₀, we consider two options, depending on how the computation of the Jost function is performed for ρ in the lower half-plane. The first one uses
s(ρ) ≈ [m, n]e_N(−ρ) / e_N(ρ),
provided [m, n]e_N extends e_N analytically onto a certain strip in the lower half ρ-plane. A second option for the computation of s(ρ) is
s(ρ) ≈ e_N(−ρ) / e_N(ρ),
where the expression (36) is calculated at points ρ of a parallel line sufficiently close to the real axis and contained in the lower half ρ -plane.
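The Padé-based continuation used in (105) can be illustrated on a toy Taylor series (a generic example, not the Jost function): the [1,1] approximant built with scipy from the first three Taylor coefficients of 1/(1 − w) reproduces the function exactly, in particular beyond the disk of convergence of the truncated series:

```python
from scipy.interpolate import pade

# Taylor coefficients of 1/(1 - w): 1 + w + w^2 + ...
p, q = pade([1.0, 1.0, 1.0], 1)    # [1,1] approximant: numerator p, denominator q

w = 2.0                            # outside the convergence disk |w| < 1
continued = p(w) / q(w)            # Pade value: 1/(1 - 2) = -1
truncated = 1 + w + w**2           # the raw truncated series gives 7
```

Spurious pole–zero pairs (Froissart doublets) cannot arise in this tiny exact example, but they motivate the low-order choice of approximant discussed in Remark 14 below.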
Remark 13. 
The notation for the approximate Jost solution (Jost function) may contain two indices, k and N: e k , N ρ , x ( e k , N ( ρ ) ), where k denotes the solution associated with the Schrödinger equation with the potential q k ( x ) and N is the parameter from (36).
Example 9. 
Consider the potential q 2 ( x ) from Example 2. We present the indicator ε N in Table 1 for different values of N in (103).
Figure 1 shows the real and imaginary parts of the approximate and exact Jost solution computed from (36) at a sample point ρ = 1 + i / 3 with N = 30 , i.e., e 2 , 30 ( 1 + i / 3 , x ) . The maximum absolute error of the computed Jost solution for x in the interval [ 0 , 12 ] is 2.14 × 10 13 .
Table 2 presents the maximum absolute and relative errors of the approximate Jost function e 2 , N ( ρ ( z ) ) for z D ¯ for different values of N.
Figure 2 shows the function e_{2,30}(ρ(z)) for z ∈ D̄. Here, we illustrate the existence of a unique singular number. Indeed, this singular number ρ = 1 corresponds to z = −0.6 + 0.8i, and the value e_{2,30}(1) is 5.55 × 10^{−15} + 2.66 × 10^{−15} i.
The distribution of the absolute and relative errors of the approximate Jost function is presented in Figure 3 and Figure 4 (respectively), where the maximum absolute error is 1.98 × 10 14 and the maximum relative error is 3.17 × 10 13 .
Furthermore, a good approximation of the derivative of the Jost function becomes essential for the argument principle algorithm performance. This is necessary to obtain the eigenvalues as the squares of non-real zeros of the approximate Jost function. In Figure 5, we illustrate d e 2 , 30 ( ρ ( z ) ) d z , and Figure 6 and Figure 7 depict the distribution of the absolute and relative errors, respectively. The maximum absolute error is 9.6 × 10 13 and the maximum relative error is 7.21 × 10 13 .
To find the singular numbers, we consider the circle {z ∈ C : |z| = 1} (the real axis in ρ) and a cubic spline interpolation of the approximate Jost function (N = 30). For the spline interpolation, we use the Matlab routine csapi. To locate the zeros of the spline, we use slmsolve from the Shape Language Modeling (SLM) toolbox, version 1.14 by John D'Errico [59], available for Matlab2021a. The value ρ₁ = 1.000000000000003 was obtained with an absolute error of 3.11 × 10^{−15}. Additionally, the argument principle algorithm applied to e_{2,30}(ρ(z)) in D ruled out the existence of eigenvalues of the problem (non-real zeros ρ).
The second step of the algorithm from Section 3.4 requires computing the Jost function in the strip −ε₀ < Im(ρ) < 0 (ε₀ = ε₂ = 1). In this example, we extend e_{2,30}(ρ(z)) analytically via Padé approximation. The Padé approximant [m, n]e_{2,30} was computed in Matlab2021a using the routine pade.
In Table 3, we computed the maximum absolute and relative errors of the Padé approximant 1 , 1 e 2 , N ( ρ ) of e 2 , N ( ρ ) with N = 3 , 5 , 20 , 30 , 40 , 50 and 180 for 0 Im ρ < 1 . These values indicate the possibility of dealing with this Padé approximant when computing the set of scattering data. Additionally, from Table 4, we confirm that this approximant satisfactorily extends the Jost function to a desirable strip in the lower half-plane (the strip is related to the one needed for the calculation of the scattering function s 2 ( ρ ) ).
To obtain s 2 ρ numerically on the strip 0 < Im ρ < ε 0 = 1 , we use the truncated series e 2 , 30 ( ρ ) and the Padé approximant 1 , 1 e 2 , 30 ( ρ ) :
s₂(ρ) ≈ [1, 1]e_{2,30}(−ρ) / e_{2,30}(ρ).
The maximum absolute error inside the region R = [−30, 30] × [10^{−2} i, (ε₀ − 10^{−2}) i] is 9.64 × 10^{−10}.
Remark 14. 
The order of the Padé approximant used for the Jost function is not arbitrary. Although the maximum absolute errors inside the region R of other approximations of s₂(ρ), using [2, 2]e_{2,30}(−ρ) (9.35 × 10^{−11}), [3, 3]e_{2,30}(−ρ) (1.34 × 10^{−11}), [4, 4]e_{2,30}(−ρ) (1.5 × 10^{−11}) and [7, 7]e_{2,30}(−ρ) (6.37 × 10^{−12}), are better in comparison with [1, 1]e_{2,30}(−ρ), we choose [1, 1]e_{2,30}(−ρ) as the most suitable option to avoid the appearance of Froissart doublets. Indeed, the use of Padé approximants when no information is available about the smoothness of the function to be approximated is challenging. Some publications propose modified algorithms [60], even using the Toeplitz matrix theory, with many numerical implementations in Maple, Wolfram Mathematica (see [61]) or Matlab (see [62]). For the purposes of this paper, it is sufficient to use only the information obtained from the truncated series e_N(ρ) and the argument principle algorithm to construct the approximant. Take the number K of zeros, counting multiplicities, of the approximate Jost function e_N(ρ) (singular numbers calculated using the argument principle algorithm) located inside D as the degree of the polynomial in the numerator of the Padé approximant. Recalling that, in most cases, an accurate Padé approximation of an analytic function is obtained with diagonal approximants, it is reasonable to choose the Padé approximant as [K, K]e_N(−ρ).
Example 10. 
Consider the potential q 3 ( x ) from Example 3. The approximate Jost function e 3 , N ( ρ ) is computed in the strip 0 Im ( ρ ) < ε 2 = 1 for several values of N. In Table 5, the maximum absolute error of the approximate Jost function is presented.
Similarly to the previous example, a search for real singular numbers was performed; however, none were detected. Subsequently, the argument principle algorithm located a non-real singular number in D, with the value z₁ ≈ −0.386709149322063 − 0.105221869864471 i (ρ₁ ≈ −0.271752585319512 + 1.083923327338694 i). Its absolute error is 8 × 10^{−15}. The contour refinement is not a concern, since the algorithm from [54] is based on the argument principle followed by several Newton iterations.
Additionally, the Jost function was extended to the strip Im ( ρ ) < ε 0 = 1 through the Padé approximant
1 , 1 e 3 , 30 ( ρ ) = ρ 4.69 × 10 15 + 4.94 × 10 13 i + 1.94 × 10 15 4.92 × 10 15 i ρ 4.69 × 10 15 + 6.12 × 10 14 i 6.11 × 10 14 + 4.69 × 10 15 i .
The corresponding maximum absolute error of [1, 1]e_{3,30}(−ρ) inside the rectangle R₁ := [−20, 20] × [0i, (ε₀ − 10^{−2}) i] in the complex ρ-plane is 8.43 × 10^{−11}. Inside R₂ := [−20, 20] × [(−ε₀ + 10^{−2}) i, (ε₀ − 10^{−2}) i], the maximum absolute error is 8.42 × 10^{−11}.
Next, an approximate value of the normalization constant corresponding to ρ 1 was computed
c₁ ≈ [1, 1]e_{3,30}(−ρ₁) / (i ė_{3,30}(ρ₁)) ≈ 10.317711295453737 + 12.894194226972697 i
with an absolute error of 2.8 × 10 9 .
Finally, we calculate the scattering function by
s₃(ρ) ≈ [1, 1]e_{3,30}(−ρ) / e_{3,30}(ρ).
The maximum absolute error of the approximation of s 3 ( ρ ) in R 1 is 1 × 10 9 (see Figure 8).
Example 11. 
Consider the potential q 1 ( x ) from Example 1. Table 6 shows the parameter ε N for some values of N .
Note that the approximation of the Jost function in this example requires more terms in the series representation than in previous examples. To control the accuracy of the approximation, in addition to the parameter ε N , one can use the asymptotic relation for the Jost function from [24] (p. 105),
e(ρ) = 1 + ω(0)/(iρ) − q(0)/(2iρ)² + ω²(0)/(2iρ)² + o(1/ρ²), ρ → ∞, ρ ∈ C̄₊,
where ω(x) = (1/2) ∫_x^∞ q(s) ds. This relation is valid for q(x) with first and second summable derivatives.
Figure 9 depicts the Jost function computed with N = 98 and the singular number ρ₀ ≈ 1.784065846059995 + 0.608788673578742 i. Figure 10 shows the fulfillment of the asymptotic relation (108), namely the graph of e_{1,98}(ρ) − ω(0)/(iρ) + q(0)/(2iρ)² − ω²(0)/(2iρ)², which tends to 1 as ρ → ∞.
The eigenvalue is computed numerically as a zero of the exact Jost function with the aid of Wolfram Mathematica v.12 (Wolfram Research, Inc., Champaign, IL, USA): λ₀ ≈ λ₀* := 2.8122672899483 + 2.1722381890043 i. This “exact” eigenvalue is compared with the approximation 2.812267289948449 + 2.172238189004328 i obtained as the square of the approximate ρ₀. The absolute error is 1.52 × 10^{−13}.
For the numerical calculation of the analytic extension of e_{1,98}(ρ) onto the strip −1/2 < Im(ρ) < 0, it is not possible to use the Padé approximant [1, 1]e_{1,98}, which does not approximate e_{1,98}(ρ) accurately even in the upper half-plane of the complex variable ρ. Using the Padé approximant [7, 7]e_{1,98}, the absolute error was 0.17.
Instead of using Formula (105) to compute the normalization constant c 0 , Formula (104) is applied to obtain the approximation c 0 16.339391965970112 + 40.670169715260290 i with absolute error 3.05 × 10 12 .
To compute the scattering function s₁(ρ) on a line parallel to the real axis contained in the strip |Im(ρ)| < ε₀ = Im(ρ₀) ≈ 0.608788673578742, Formula (106) was used. The function e₁(−ρ) is represented by (36) for ρ on a line in the lower half ρ-plane, parallel and sufficiently close to the real axis. Having calculated these series representations for the functions involved in s₁(ρ), we compute
s₁(ρ) ≈ e_{1,98}(−ρ) / e_{1,98}(ρ)
with a maximum absolute error 1.45 × 10 7 along the line L η = 0.1 (see (77)).
In this example, we obtain a satisfactory accuracy in the calculation of the scattering data set using the expression (36) alone, together with the derivatives required by (104).
Example 12
([44]). Consider the potential
q₄(x) := Ri sin(x) e^{−x},
with R being a constant (Reynolds number). When R > 0 is sufficiently large, eigenvalues may exist. For example, for R = 10, there is one eigenvalue in the box B := [1.604391244, 1.604391258] + [1.797884967, 1.797884981] i [44] (see also [45]).
The Jost solution is not available in a closed form. In order to check the validity of the numerical calculation of the coefficients a_n(x) for e_{4,N}(ρ, x), we consider the indicator ε_N (Table 7).
Figure 11 depicts the Jost function computed with N = 137 and the approximation of the singular number ρ₁ ≈ 1.416695330664399 + 0.634534798062634 i, with its square belonging to the box B. Additionally, Figure 12 shows the fulfillment of the asymptotic relation (108).
The normalization constant c 1 is calculated using (104),
c 1 0.423317609673475 + 10.608764849282464 i .
Finally, the scattering function is approximated by e_{4,137}(−ρ)/e_{4,137}(ρ).
Now, take R = 30 in the potential q 4 ( x ) . In this case, two boxes localizing the only two eigenvalues λ 1 and λ 2 were obtained in [44],
B₁ := [2.55564161419, 2.55564161435] + [7.68818701803, 7.68818701819] i,   B₂ := [6.3746512, 6.3746591] + [2.469946, 2.469955] i.
Table 8 provides the values of the indicator ε N for several values of N.
Approximate eigenvalues computed from e 4 , N ( ρ ) for different values of N, are presented in Table 9 and Table 10.
Note that λ̃₁ ∈ B₁ and λ̃₂ ∈ B₂ for N = 200. Finally, the normalization constants are calculated using e_{4,200}(ρ) in (104),
c 1 1.669128547357084 × 10 2 1.694940279771396 × 10 2 i c 2 54.578951306154920 + 45.276710620944780 i .
Although in this example more powers in the series representation of the Jost function were used, the method proved to be applicable to obtaining the scattering data set without any additional information. The good accuracy achieved is confirmed by the ability to use the obtained scattering data as input data for solving the inverse scattering problem to recover the potential q₄(x) with R = 30 below in Example 22.

5.2. Inverse Problem

In the present section, we discuss the accuracy, convergence and stability of the proposed method for solving the inverse scattering problem.
Remark 15. 
By q k , M ( x ) , we denote the approximation of the potential q k ( x ) ( k = 1 , 2 , 3 , 4 , 5 ) obtained by solving the truncated system (100) with the sum up to M, i.e., with M + 1 equations.

5.2.1. Convergence and Accuracy

Example 13. 
Consider the scattering data calculated in Example 3:
$$J = \left\{\, \rho_1 = i\tanh(1+i), \quad m_1 = 1, \quad c_1 = \frac{2\left(i + i\tanh(1+i)\right)\tanh(1+i)}{i - i\tanh(1+i)} \,\right\},$$
$$s_3(\rho) = \frac{\left(\rho + i\tanh(1+i)\right)\left(\rho + i\right)}{\left(\rho - i\tanh(1+i)\right)\left(\rho - i\right)},$$
where $s_3(\rho)$ is an S-type function in the strip $0 \le \operatorname{Im}(\rho) < \varepsilon_0 = 1$.
We shall recover the potential $q_3(x) = -2\,\operatorname{sech}^2(x - 1 - i)$. The system (100) of linear algebraic equations for this example is obtained in closed form (see Example 7). For different numbers of equations in the truncated system, we obtain the solution symbolically using the Matlab routine solve. The potential $q_3(x)$ is recovered from (38). Figure 13 presents the recovered potential in each case.
The corresponding absolute and relative errors are presented in Figure 14 and Figure 15, respectively. Note that a high accuracy is attained even in the case of a very reduced number of equations in the truncated system. Moreover, a very fast convergence of the method can be appreciated.
Example 14. 
Consider the scattering data J = s 2 ( ρ ) from Example 2. As was shown above (Example 6), the system (100) for this example can be written explicitly. Again, when solving the corresponding truncated system for different values of M we observe a fast convergence and remarkable accuracy even for small values of M (see Table 11 and Figure 16).
Example 15. 
Consider the closed form of the scattering function s 1 ( ρ ) from Example 1. We compute functions f m , c ( x ) and A m n , c ( x ) using the first option from Remark 11. Some poles and residues are given in Table 12 (computed with the aid of the package Numerical Calculus of Mathematica v.12).
Note that the absolute value of the residues decreases considerably as the poles move away from the origin on the imaginary axis. This allows us to use a small number of poles for the calculation of the functions f m , c ( x ) and A m n , c ( x ) .
The convergence of the method in this case turns out to be slower (see Figure 17), although a satisfactory accuracy is attained for M = 9.

5.2.2. Stability of the System

Since the stability of the method was proved in Theorem 4, we are able to work efficiently with noisy scattering data. First, we consider the natural noise arising from the numerical implementations of the last two procedures in Remark 11, i.e., calculation of the approximate matrix in (100) from the scattering function s ( ρ ) given in a closed form. Another situation considered in this subsection is the recovery of the potential from a uniformly noisy scattering function.
Remark 16. 
Henceforth, denote by f ˜ m ( x ) , f ˜ m , c ( x ) , A ˜ m n ( x ) and A ˜ m n , c ( x ) the numerical approximation of f m ( x ) , f m , c ( x ) , A m n ( x ) and A m n , c ( x ) .
Remark 17. 
In the last step of the algorithm from Section 4.4, for recovering q with the aid of (38), the coefficient $a_0$ needs to be differentiated twice. This was performed by interpolating $a_0(x)$ with a quintic spline through the Matlab routine spapi and subsequent differentiation with the Matlab command fnder.
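In Python, the same two-step procedure (quintic interpolation followed by differentiation of the spline) can be sketched with SciPy in place of spapi/fnder; the grid and the test function below are illustrative:

```python
import numpy as np
from scipy.interpolate import InterpolatedUnivariateSpline

# Interpolate sampled values with a quintic spline (k = 5, the analogue of
# spapi) and differentiate the spline twice (the analogue of fnder).
def second_derivative(x, y):
    spline = InterpolatedUnivariateSpline(x, y, k=5)
    return spline.derivative(2)

# Illustrative check on a function with a known second derivative:
# (sin x)'' = -sin x, evaluated away from the grid endpoints.
x = np.linspace(0.0, 10.0, 201)
d2 = second_derivative(x, np.sin(x))
max_err = np.max(np.abs(d2(x[10:-10]) + np.sin(x[10:-10])))
```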
Example 16. 
Let us consider the scattering data from Example 2. The recovery of the potential q 2 ( x ) from the exact scattering function s 2 ( ρ ) , obtained by using approximate functions f ˜ m ( x ) and A ˜ m n ( x ) in the truncated system (100), is presented. The computation of functions f m ( x ) and A m n ( x ) requires numerical integration along the line L η = 0.5 (see (77)). For this purpose, the last two procedures in Remark 11 were applied.
Method 1. The second option in Remark 11 is implemented. With the scattering function (91) evaluated at the points $\rho = \sigma + 0.5i$, $\sigma = (k + 1/2)h$ for $k = 0, 1, \ldots, N(x)$, where $N(x) = 55000/x$ and $h = 0.145454545$, the Fourier transforms in (98) are calculated. In Table 13, the maximum absolute error of $\tilde{f}_m(x)$ is presented for four values of the parameter m.
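The quadrature just described is a midpoint rule for an oscillatory integral, sampling the integrand at the half-integer nodes $\sigma = (k+1/2)h$. A Python sketch of the same idea; the test integrand and step parameters are illustrative, not those of the example:

```python
import numpy as np

# Midpoint-rule approximation of int_0^{(N+1)h} g(sigma) e^{i sigma t} dsigma,
# sampling the integrand at sigma = (k + 1/2) h, k = 0, ..., N.
def fourier_midpoint(g, t, h, N):
    sigma = (np.arange(N + 1) + 0.5) * h
    return h * np.sum(g(sigma) * np.exp(1j * sigma * t))

# Check against the closed form
# int_0^infty e^{-sigma} e^{i sigma t} dsigma = 1 / (1 - i t).
approx = fourier_midpoint(lambda s: np.exp(-s), 0.7, 0.01, 4000)
exact = 1.0 / (1.0 - 0.7j)
```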
Now, we compute A ˜ m n , c ( x ) using the same numerical integration method with parameters N ( x ) = 5500 / x and h = 0.127272727 .
Table 14 shows the maximum absolute error of A ˜ m n ( x ) for parameters m , n = 0 , 1 , 2 , 3 .
The system (100) constructed with f ˜ m ( x ) and A ˜ m n ( x ) is solved numerically in Matlab for several values of M. Maximum absolute and relative errors of the approximation of the potential q 2 , M are shown in Table 15.
Figure 18 presents the absolute value of the potential $q_2$ recovered from four equations in (100).
Method 2. Now, we compute the approximate functions f ˜ m ( x ) (see Table 16) and A ˜ m n ( x ) (see Table 17) following the third procedure in Remark 11.
In Table 18, the absolute error of the recovered potential for some values of M in (100) is presented.
Both methods (procedures 2 and 3 from Remark 11) illustrated in the above example have proven suitable for calculating the functions $f_m$ and $A_{mn}$ from a table of values of $s_2(\rho)$. It is worth mentioning, however, that although the first method (procedure 2) produced slightly more accurate results, it may be sensitive to the choice of the parameters $N(x)$ and h, whereas the second method (procedure 3) only requires the application of trapz, the Matlab integration routine, on a dense set of points in the interval $(0, 2\pi)$. Hence, for the purposes of this paper, procedure 3 from Remark 11 is used in the following examples, as it provides satisfactory approximations of $f_m$ and $A_{mn}$.
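Procedure 3 thus reduces to trapezoidal integration of a smooth periodic integrand over a dense grid in $(0, 2\pi)$, a setting in which the trapezoidal rule converges spectrally fast. A Python analogue of the trapz step, with an illustrative integrand whose integral is known in closed form:

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import i0

# Trapezoidal rule on a dense grid in (0, 2*pi), the role played by trapz
# in the Matlab implementation. For smooth periodic integrands the composite
# trapezoidal rule is spectrally accurate.
theta = np.linspace(0.0, 2.0 * np.pi, 2001)
integral = trapezoid(np.exp(np.cos(theta)), theta)

# Closed form: int_0^{2 pi} e^{cos(theta)} d(theta) = 2 * pi * I_0(1).
exact = 2.0 * np.pi * i0(1.0)
```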
As expected from the results of Example 14 for this potential, the numerical method for recovering the potential $q_2(x)$ converges very fast. Indeed, an acceptable approximation of $q_2(x)$ is achieved with only four equations in this case, even though an inexact matrix in the linear system (100) is used. In fact, the approximate and exact potentials presented in Figure 18 are visually indistinguishable.
In the following examples, a noisy scattering function with uniformly distributed noise $\varepsilon(\rho)$ generated with the Matlab routine rand is considered.
Example 17. 
Consider the scattering function $s_2(\rho)$ and denote the noisy scattering function by $\hat{s}_2(\rho) := s_2(\rho) + \varepsilon(\rho)$. Here, $\varepsilon(\rho)$ is $\pm 5\%$ uniformly distributed complex-valued noise (the percentage of the noise is applied pointwise to the modulus and argument of the value of $s_2(\rho)$). The maximum absolute error of $\hat{s}_2$ on the line $L_{\eta = 0.5}$ is $2.46 \times 10^{-1}$. The potential was recovered using five equations with a maximum absolute error of $5.2 \times 10^{-1}$. The real and imaginary parts of the potential and the absolute error of its recovery are shown in Figure 19.
Despite the noise that s ^ 2 ( ρ ) produces in the matrix of the system (100), the method recovers the shape of the potential q 2 with reasonable accuracy.
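The noise model described above, relative perturbations applied pointwise to the modulus and the argument, can be sketched as follows. The exact construction is an assumption about the implementation, and the sample values of s are placeholders:

```python
import numpy as np

# Assumed noise model: perturb the modulus and the argument of each sample of
# s(rho) by independent uniform relative errors of size delta
# (delta = 0.05 for +-5% noise).
def add_noise(s_values, delta, rng):
    u1 = rng.uniform(-1.0, 1.0, size=s_values.shape)
    u2 = rng.uniform(-1.0, 1.0, size=s_values.shape)
    return (np.abs(s_values) * (1.0 + delta * u1)
            * np.exp(1j * np.angle(s_values) * (1.0 + delta * u2)))

rng = np.random.default_rng(0)
s = np.exp(1j * np.linspace(0.0, 1.0, 100))  # placeholder samples, |s| = 1
s_noisy = add_noise(s, 0.05, rng)
```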
Example 18. 
Consider the scattering function $s_3(\rho)$ and define $\hat{s}_3(\rho) := s_3(\rho) + \varepsilon(\rho)$, where $\varepsilon(\rho)$ is $\pm 10\%$ uniformly distributed complex-valued noise (considered as in the previous example). The maximum absolute error of $\hat{s}_3$ on the line $L_{\eta = 0.5}$ is 1.75. The potential was recovered using eight equations with a maximum absolute error of $8.6 \times 10^{-1}$. The real and imaginary parts of the potential as well as the absolute error of its recovery are shown in Figure 20.
Although, in this case, the absolute error of s ^ 3 ( ρ ) is larger, the shape of the recovered potential is still quite close to that of the exact one.

5.2.3. In-Out

In this subsection, we consider the results obtained in Section 5.1 as input data for the inverse problem.
Example 19. 
We use the approximate scattering function $s_3(\rho)$ from Example 10 calculated by (107). The particular form in which it is given allows us to approximate the functions $\tilde{f}_{m,c}(x)$ and $\tilde{A}_{mn,c}(x)$ with the aid of the numerical calculus of residues, i.e., the first procedure in Remark 11 (see Table 19 and Table 20).
The potential $q_3(x)$ was recovered with an absolute error of $1.8 \times 10^{-5}$ in the interval $(0, 15)$ using eight equations.
Example 20. 
Consider the approximate scattering function $s_1(\rho)$ from Example 11. The coefficient $a_0(x)$ was recovered using 14 equations with a maximum absolute error of $4.29 \times 10^{-3}$, from which the potential was recovered with a maximum absolute error of 0.23 (Figure 21).
Example 21. 
Consider the approximate scattering data obtained in Example 9. The approximate functions f ˜ m ( x ) (see Table 21) and A ˜ m n ( x ) (see Table 22) were obtained accurately enough to recover the potential (see Table 23).
Figure 22 illustrates the stability and convergence of the method, with the absolute error stabilizing at $5.3 \times 10^{-2}$.
Example 22. 
Consider the potential $q_4(x) = 30i \sin(x) \exp(-x)$ introduced in Example 12. Using the results of the solution of the direct scattering problem from Example 12, we recover $q_4(x)$ using 20 equations with a maximum absolute error of $8.67 \times 10^{-1}$ (Figure 23).
It is worth mentioning that the coefficient $a_0(x)$ is recovered with an absolute error of $2.28 \times 10^{-2}$ (Figure 24). The error is calculated by comparison with the solution of the Cauchy problem
$$a_0''(x) - a_0'(x) = q_4(x)\left(a_0(x) + 1\right), \qquad a_0(b) = 0, \quad a_0'(b) = \frac{1}{2},$$
for a sufficiently large value of b > 0, obtained using the ode45 routine of Matlab R2021a.
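A Python analogue of this reference computation, with solve_ivp in place of ode45 for the backward integration from x = b, can be sketched as follows. Note that the signs in the ODE and the terminal slope are reconstructed from the text and should be treated as assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate the Cauchy problem for a0 backward from a large x = b down to 0.
# Assumed form: a0'' - a0' = q4(x) (a0 + 1), a0(b) = 0, a0'(b) = 1/2
# (signs reconstructed from the text, not verified).
def q4(x):
    return 30j * np.sin(x) * np.exp(-x)

def rhs(x, y):
    a0, da0 = y
    return [da0, da0 + q4(x) * (a0 + 1.0)]

b = 40.0
# Complex initial data require a complex dtype; a decreasing t_span performs
# the backward integration.
sol = solve_ivp(rhs, (b, 0.0), np.array([0.0, 0.5], dtype=complex),
                rtol=1e-10, atol=1e-12, dense_output=True)
a0_at_zero = sol.sol(0.0)[0]
```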
This is a case where closed formulas for the scattering data set are unavailable. Therefore, the In–Out procedure confirms a satisfactory accuracy in the solution of both the direct and inverse scattering problems.
Example 23. 
Consider the singular potential
$$q_5(x) = \frac{\exp(-2.5x)}{\left|x - \frac{\pi}{2}\right|^{1/3}}.$$
In Table 24, we present the parameter ε N for different values of N in (103).
Using data from Table 24, we computed the scattering data with N = 45 . No eigenvalue was detected, so the scattering data set consists of the scattering function approximated by the expression
$$s_5(\rho) \approx \frac{e_{5,45}(-\rho)}{e_{5,45}(\rho)}, \qquad \rho \in \mathbb{R}.$$
Using this scattering data set to solve the inverse problem, we obtained the coefficient $a_0(x)$ shown in Figure 25. The maximum absolute error was $1.9 \times 10^{-4}$.
The potential is recovered as shown in Figure 26, and the corresponding absolute error is presented in Figure 27; the maximum absolute error is $9.82 \times 10^{-2}$.
This example shows the applicability of the proposed algorithms to both the solution of the direct and inverse scattering problems in the case of non-smooth potentials.

6. Conclusions

An approach to solving the direct and inverse scattering problems on the half-line for the one-dimensional Schrödinger equation with an exponentially decreasing complex-valued potential is developed. It is based on a series representation of the Jost solution from [25], which is shown in the present work to remain valid in the non-selfadjoint case.
When solving the direct problem, this representation is used to calculate the scattering data set through a simple and efficient procedure, which includes a proposed algorithm for computing normalization polynomials (which are part of the scattering data set) by solving a finite system of linear algebraic equations for its coefficients.
When solving the inverse problem, the use of the series representation combined with the Gel’fand–Levitan equation reduces the problem to a system of linear algebraic equations for the series coefficients, and the knowledge of the first coefficient is sufficient to recover the potential.
The numerical results illustrate the remarkable accuracy of the proposed algorithms in solving both the direct and inverse scattering problems.

Author Contributions

Conceptualization, formal analysis, methodology, funding acquisition, investigation, project administration, software, supervision, writing—review and editing, V.V.K.; formal analysis, investigation, software, writing—original draft, writing—review and editing, L.E.M.-L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by CONAHCYT, Mexico, via project 284470.

Data Availability Statement

The data that support the findings of this study are available upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Proofs of Auxiliary Equalities for Lemma 1

Let us prove Equation (57). Consider
$$
\begin{aligned}
&\frac{\partial}{\partial z}\left( e(\rho,x)\sum_{j=0}^{k-1}(-1)^{j}C_{k-1}^{j}\frac{k!}{(k-j)!}x^{k-j}(z+1)^{j-2k} - e^{i\rho x}\sum_{n=0}^{\infty}a_{n}(x)P_{n}^{(0,1-n)}\sum_{j=0}^{k-1}(-1)^{j}C_{k-1}^{j}\frac{k!}{(k-j)!}x^{k-j}(z+1)^{j-2k}\right)\\
&\quad= e_{z}(\rho,x)\sum_{j=0}^{k-1}(-1)^{j}C_{k-1}^{j}\frac{k!}{(k-j)!}x^{k-j}(z+1)^{j-2k} + e(\rho,x)\sum_{j=0}^{k-1}(-1)^{j}C_{k-1}^{j}\frac{k!}{(k-j)!}x^{k-j}(-2k+j)(z+1)^{j-2k-1}\\
&\qquad- \frac{\partial}{\partial z}\left(e^{i\rho x}\sum_{n=0}^{\infty}a_{n}(x)P_{n}^{(0,1-n)}\sum_{j=0}^{k-1}(-1)^{j}C_{k-1}^{j}\frac{k!}{(k-j)!}x^{k-j}(z+1)^{j-2k}\right).
\end{aligned}
$$
Using the representation (52) for the Jost solution, the last expression can be written as follows
$$
\begin{aligned}
&\frac{x\,e(\rho,x)}{(z+1)^{2}}\sum_{j=0}^{k-1}(-1)^{j}C_{k-1}^{j}\frac{k!}{(k-j)!}x^{k-j}(z+1)^{j-2k} + e(\rho,x)\sum_{j=0}^{k-1}(-1)^{j}C_{k-1}^{j}\frac{k!}{(k-j)!}x^{k-j}(j-2k)(z+1)^{j-2k-1}\\
&\quad= e(\rho,x)\sum_{j=0}^{k-1}(-1)^{j}C_{k-1}^{j}\frac{k!}{(k-j)!}x^{k-j+1}(z+1)^{j-2k-2}\left(1+(j-2k)x^{-1}(z+1)\right)\\
&\quad= e(\rho,x)\sum_{j=0}^{k-1}(-1)^{j}C_{k}^{j}\frac{(k+1)!}{(k+1-j)!}x^{k-j+1}(z+1)^{j-2k-2} + e(\rho,x)(-1)^{k}C_{k}^{k}\frac{(k+1)!}{(k+1-k)!}\,x\,(z+1)^{-k-2}\\
&\quad= e(\rho,x)\sum_{j=0}^{k}(-1)^{j}C_{k}^{j}\frac{(k+1)!}{(k+1-j)!}x^{k-j+1}(z+1)^{j-2k-2}.
\end{aligned}
$$
Now, let us prove equality (58). Consider
$$
\begin{aligned}
&\frac{\partial}{\partial z}\left(e^{i\rho x}\sum_{n=0}^{\infty}a_{n}(x)\sum_{j=2}^{k+1}F(n,j,1,z+1)\sum_{m=j-1}^{k}(-1)^{j-m+1}C_{k-1}^{m-j+1}\frac{k!}{(k-m)!}x^{k-m}(z+1)^{m-2k+1}\right)\\
&\qquad+ e^{i\rho x}\sum_{n=0}^{\infty}a_{n}(x)F(n,2,1,z+1)\sum_{j=0}^{k-1}(-1)^{j}C_{k-1}^{j}\frac{k!}{(k-j)!}x^{k-j}(z+1)^{j-2k}\\
&= \frac{x\,e^{i\rho x}}{(z+1)^{2}}\sum_{n=0}^{\infty}a_{n}(x)\sum_{j=2}^{k+1}F(n,j,1,z+1)\sum_{m=j-1}^{k}(-1)^{j-m+1}C_{k-1}^{m-j+1}\frac{k!}{(k-m)!}x^{k-m}(z+1)^{m-2k+1}\\
&\qquad+ e^{i\rho x}\sum_{n=0}^{\infty}a_{n}(x)\Biggl(\sum_{j=2}^{k+1}F_{z}(n,j,1,z+1)\sum_{m=j-1}^{k}(-1)^{j-m+1}C_{k-1}^{m-j+1}\frac{k!}{(k-m)!}x^{k-m}(z+1)^{m-2k+1}\\
&\qquad\qquad+ \sum_{j=2}^{k+1}F(n,j,1,z+1)\sum_{m=j-1}^{k}(-1)^{j-m+1}C_{k-1}^{m-j+1}\frac{k!}{(k-m)!}x^{k-m}(m-2k+1)(z+1)^{m-2k}\Biggr)\\
&\qquad+ e^{i\rho x}\sum_{n=0}^{\infty}a_{n}(x)F(n,2,1,z+1)\sum_{j=0}^{k-1}(-1)^{j}C_{k-1}^{j}\frac{k!}{(k-j)!}x^{k-j}(z+1)^{j-2k}.
\end{aligned}
$$
Note that the last expression can be written as $e^{i\rho x}\sum_{n=0}^{\infty}a_{n}(x)F_{n}(z)$, where
$$
\begin{aligned}
F_{n}(z) &= \frac{x}{(z+1)^{2}}\sum_{j=2}^{k+1}F(n,j,1,z+1)\sum_{m=j-1}^{k}(-1)^{j-m+1}C_{k-1}^{m-j+1}\frac{k!}{(k-m)!}x^{k-m}(z+1)^{m-2k+1}\\
&\quad+ \sum_{j=2}^{k+1}F_{z}(n,j,1,z+1)\sum_{m=j-1}^{k}(-1)^{j-m+1}C_{k-1}^{m-j+1}\frac{k!}{(k-m)!}x^{k-m}(z+1)^{m-2k+1}\\
&\quad+ \sum_{j=2}^{k+1}F(n,j,1,z+1)\sum_{m=j-1}^{k}(-1)^{j-m+1}C_{k-1}^{m-j+1}\frac{k!}{(k-m)!}x^{k-m}(m-2k+1)(z+1)^{m-2k}\\
&\quad+ F(n,2,1,z+1)\sum_{j=0}^{k-1}(-1)^{j}C_{k-1}^{j}\frac{k!}{(k-j)!}x^{k-j}(z+1)^{j-2k}.
\end{aligned}
$$
Associating terms in the expression for F n ( z ) , we obtain
$$
\begin{aligned}
F_{n}(z) &= \sum_{j=2}^{k+1}F(n,j,1,z+1)\sum_{m=j-1}^{k}(-1)^{j-m+1}C_{k-1}^{m-j+1}\frac{k!}{(k-m)!}x^{k-m}(z+1)^{m-2k+1}\left(\frac{x}{(z+1)^{2}}-\frac{2k-m-1}{z+1}\right)\\
&\quad+ \sum_{j=2}^{k+1}F_{z}(n,j,1,z+1)\sum_{m=j-1}^{k}(-1)^{j-m+1}C_{k-1}^{m-j+1}\frac{k!}{(k-m)!}x^{k-m}(z+1)^{m-2k+1}\\
&\quad+ F(n,2,1,z+1)\sum_{j=0}^{k-1}(-1)^{j}C_{k-1}^{j}\frac{k!}{(k-j)!}x^{k-j}(z+1)^{j-2k}.
\end{aligned}
$$
Simplification of the last expression results in
$$
\begin{aligned}
F_{n}(z) &= \sum_{j=2}^{k+1}F(n,j,1,z+1)\sum_{m=j-1}^{k}(-1)^{j-m+1}C_{k-1}^{m-j+1}\frac{k!}{(k-m)!}x^{k-m}(z+1)^{m-2k+1}\left(\frac{x}{(z+1)^{2}}-\frac{2k-1-m}{z+1}\right)\\
&\quad+ \sum_{j=2}^{k+1}\left(jF(n,j+1,1,z+1)-jF(n,j,1,z+1)\right)\sum_{m=j-1}^{k}(-1)^{j-m+1}C_{k-1}^{m-j+1}\frac{k!}{(k-m)!}x^{k-m}(z+1)^{m-2k}\\
&\quad+ F(n,2,1,z+1)\sum_{j=0}^{k-1}(-1)^{j}C_{k-1}^{j}\frac{k!}{(k-j)!}x^{k-j}(z+1)^{j-2k},
\end{aligned}
$$
where we applied Formula (55). Thus,
$$
\begin{aligned}
F_{n}(z) &= \sum_{j=2}^{k+1}F(n,j,1,z+1)\sum_{m=j-1}^{k}(-1)^{j-m+1}C_{k-1}^{m-j+1}\frac{k!}{(k-m)!}x^{k-m}(z+1)^{m-2k+1}\left(\frac{x}{(z+1)^{2}}-\frac{2k-1-m}{z+1}-\frac{j}{z+1}\right)\\
&\quad+ \sum_{j=2}^{k+1}jF(n,j+1,1,z+1)\sum_{m=j-1}^{k}(-1)^{j-m+1}C_{k-1}^{m-j+1}\frac{k!}{(k-m)!}x^{k-m}(z+1)^{m-2k}\\
&\quad+ F(n,2,1,z+1)\sum_{j=0}^{k-1}(-1)^{j}C_{k-1}^{j}\frac{k!}{(k-j)!}x^{k-j}(z+1)^{j-2k}.
\end{aligned}
$$