Article

Reconstruction of Higher-Order Differential Operators by Their Spectral Data

by Natalia P. Bondarenko 1,2
1 Department of Applied Mathematics and Physics, Samara National Research University, Moskovskoye Shosse 34, Samara 443086, Russia
2 Department of Mechanics and Mathematics, Saratov State University, Astrakhanskaya 83, Saratov 410012, Russia
Mathematics 2022, 10(20), 3882; https://doi.org/10.3390/math10203882
Submission received: 13 September 2022 / Revised: 14 October 2022 / Accepted: 17 October 2022 / Published: 19 October 2022

Abstract: This paper is concerned with inverse spectral problems for higher-order ($n > 2$) ordinary differential operators. We develop an approach to the reconstruction from the spectral data for a wide range of differential operators with either regular or distribution coefficients. Our approach is based on the reduction of an inverse problem to a linear equation in the Banach space of bounded infinite sequences. This equation is derived in a general form that can be applied to various classes of differential operators. The unique solvability of the linear main equation is also proved. By using the solution of the main equation, we derive reconstruction formulas for the differential expression coefficients in the form of series and prove the convergence of these series for several classes of operators. The results of this paper can be used for the constructive solution of inverse spectral problems and for the investigation of their solvability and stability.

1. Introduction

This paper is concerned with the inverse spectral theory for operators generated by the differential expression
\ell_n(y) := y^{(n)} + \sum_{k=0}^{\lfloor n/2 \rfloor - 1} \bigl(\tau_{2k}(x)\, y^{(k)}\bigr)^{(k)} + \sum_{k=0}^{\lfloor (n-1)/2 \rfloor - 1} \Bigl[ \bigl(\tau_{2k+1}(x)\, y^{(k)}\bigr)^{(k+1)} + \bigl(\tau_{2k+1}(x)\, y^{(k+1)}\bigr)^{(k)} \Bigr], \quad x \in (0,1),    (1)

where $\lfloor a \rfloor$ denotes rounding down, and the functions $\{\tau_\nu\}_{\nu=0}^{n-2}$ can be either integrable or distributional. Various aspects of spectral theory for such operators and related issues have been intensively studied in recent years (see, e.g., [1,2,3,4,5,6,7,8,9]). However, the general theory of inverse spectral problems for (1) with arbitrary $n > 2$ has not been created yet. This paper aims to develop an approach to the reconstruction of the coefficients $\{\tau_\nu\}_{\nu=0}^{n-2}$ from the spectral data for a wide class of differential operators.

1.1. Historical Background

Inverse problems of spectral analysis consist in the recovery of differential operators from their spectral information. Such problems arise in practice when one needs to determine certain physical parameters of a system from some measured data or to construct a model with desired properties. The majority of physical applications are concerned with linear differential operators of form (1) with n = 2 , 3 , 4 .
For $n = 2$, expression (1) turns into the Sturm–Liouville (Schrödinger) operator

\ell_2(y) = -y'' + q(x)\, y,    (2)
which models string vibrations in classical mechanics, electron motion in quantum mechanics, and is widely used in other branches of science and engineering. The third-order linear differential operators arise in the inverse problem method for integration of the nonlinear Boussinesq equation (see [10,11]), in mechanical problems of modeling the thin membrane flow of viscous liquid and elastic beam vibrations (see [12] and references therein). Inverse spectral problems for the fourth-order linear differential operators attract much attention from scholars because of their applications in mechanics and geophysics (see [13,14,15,16,17,18,19,20] and references therein).
The classical results of the inverse problem theory were obtained for the Sturm–Liouville operator (2) with integrable potential q ( x ) in the 1950s by Marchenko, Levitan, and their followers (see [21,22]). They developed the transformation operator method, which reduces the nonlinear inverse Sturm–Liouville spectral problem to the linear Fredholm integral equation of the second kind. However, the transformation operator method appeared to be ineffective for the higher-order differential operators
y^{(n)} + \sum_{k=0}^{n-2} p_k(x)\, y^{(k)}, \quad n > 2.    (3)

Note that the differential expression (1) can be transformed into (3) in the case of sufficiently smooth coefficients $\{\tau_\nu\}_{\nu=0}^{n-2}$.
Thus, the development of inverse spectral theory for the higher-order operators (3) required new approaches. Relying on the ideas of Leibenson [23,24], Yurko created the method of spectral mappings. This method allowed him to construct inverse problem solutions for the higher-order differential operators (3) with regular (integrable) coefficients on the half-line $x > 0$ and on a finite interval $x \in (0,T)$ (see [25,26]). The case of Bessel-type singularities was also considered [27,28]. Later on, the ideas of the method of spectral mappings were applied to a wide range of inverse spectral problems, e.g., to inverse problems for first-order differential systems [29], for differential operators on graphs [30], and for quadratic differential pencils [31]. This method is based on the theory of analytic functions, mainly on contour integration in the complex plane of the spectral parameter. The method of spectral mappings reduces a nonlinear inverse problem to a linear equation in a suitable Banach space. This space is constructed in different ways for different operator classes. In particular, for differential operators on a finite interval, the main equation is usually derived in the space $m$ of infinite bounded sequences. It is also worth mentioning that an approach to inverse scattering problems for higher-order differential operators (3) on the full line was developed by Beals et al. [32,33].
During the last 20 years, inverse problems have been actively investigated for the second-order differential operators with distributional potentials (see, e.g., [34,35,36,37,38,39,40,41,42,43]). In particular, Hryniv and Mykytyuk [34,35,36] transferred the transformation operator method to the Sturm–Liouville operators (2) with potential $q(x)$ of class $W_2^{-1}(0,1)$ and so generalized the basic results of inverse problem theory to this class of operators. Note that the space $W_2^{-1}$ contains the Dirac $\delta$-function and the Coulomb potential $\frac{1}{x}$, which are used for modeling particle interactions in quantum mechanics [44]. The method of spectral mappings has been extended to the Sturm–Liouville operators with potentials of $W_2^{-1}$ in [37,43,45]. This opens the possibility of constructing the inverse spectral theory for higher-order differential operators with distribution coefficients. However, till now, only the first steps have been taken in this direction. In [9,46], the uniqueness of recovering the higher-order differential operators with distribution coefficients on a finite interval and on the half-line has been studied. The goals of this paper are to derive the linear main equation of the inverse problem, to prove its unique solvability, and to obtain reconstruction formulas for the coefficients $\{\tau_\nu\}_{\nu=0}^{n-2}$ of various classes.

1.2. Problem Statement and Methods

Our treatment of the differential expression (1) is based on the regularization approach. Namely, we assume that the differential equation

\ell_n(y) = \lambda y, \quad x \in (0,1),    (4)

where $\lambda$ is the spectral parameter, can be equivalently transformed into the first-order system

Y'(x) = (F(x) + \Lambda)\, Y(x), \quad x \in (0,1),    (5)

where $Y(x)$ is a column vector function of size $n$, $\Lambda$ is the $(n \times n)$ matrix whose entry at the position $(n,1)$ equals $\lambda$ and all the other entries are zero, and $F(x) = [f_{k,j}(x)]_{k,j=1}^n$ is a matrix function with the following properties:

f_{k,j}(x) \equiv 0, \;\; k+1 < j; \qquad f_{k,k+1}(x) \equiv 1, \;\; k = \overline{1,n-1}; \qquad f_{k,k} \in L_2(0,1), \;\; k = \overline{1,n}; \qquad f_{k,j} \in L_1(0,1), \;\; k > j; \qquad \operatorname{trace}(F(x)) = 0.    (6)

We denote the class of $(n \times n)$ matrix functions satisfying (6) by $\mathfrak{F}_n$.
By using any matrix $F \in \mathfrak{F}_n$, one can define the quasi-derivatives

y^{[0]} := y, \quad y^{[k]} = (y^{[k-1]})' - \sum_{j=1}^{k} f_{k,j}\, y^{[j-1]}, \quad k = \overline{1,n},    (7)

and the domain

\mathcal{D}_F = \{ y \colon y^{[k]} \in AC[0,1], \; k = \overline{0,n-1} \}.
Definition 1.
A matrix function $F(x) \in \mathfrak{F}_n$ is called an associated matrix of the differential expression $\ell_n(y)$ if $\ell_n(y) = y^{[n]}$ for any $y \in \mathcal{D}_F$. We call a function $y$ a solution of Equation (4) if $y \in \mathcal{D}_F$ and $y^{[n]} = \lambda y$, $x \in (0,1)$.

For a function $y \in \mathcal{D}_F$, introduce the notation $\vec{y}(x) = \operatorname{col}\bigl(y^{[0]}(x), y^{[1]}(x), \ldots, y^{[n-1]}(x)\bigr)$. Obviously, $y$ is a solution of Equation (4) if and only if $Y = \vec{y}$ satisfies (5).
The associated matrices for various classes of differential expressions $\ell_n(y)$ have been constructed, e.g., in [1,3,46,47,48] (see also Section 4.3, Section 4.4 and Section 4.5 of this paper). For example, for the differential expression $\ell_2(y) = y'' - \tau_0 y$, $\tau_0 \in W_2^{-1}(0,1)$, that is, $\tau_0 = \sigma_0'$, $\sigma_0 \in L_2(0,1)$, the associated matrix has the form (see [49]):

F(x) = \begin{pmatrix} \sigma_0(x) & 1 \\ -\sigma_0^2(x) & -\sigma_0(x) \end{pmatrix}.
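This regularization can be verified symbolically. The following sympy sketch takes a hypothetical smooth choice $\sigma_0(x) = \sin x$ (chosen only so that everything can be expanded by hand) and checks that the quasi-derivative $y^{[2]}$ built from this matrix via (7) reproduces $\ell_2(y) = y'' - \tau_0 y$, with all occurrences of $\sigma_0$ itself canceling:

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# hypothetical smooth antiderivative sigma0, chosen only for this check
sigma0 = sp.sin(x)            # then tau0 = sigma0' = cos(x)
tau0 = sp.diff(sigma0, x)

# entries of the associated matrix F = [[sigma0, 1], [-sigma0^2, -sigma0]]
f11, f21, f22 = sigma0, -sigma0**2, -sigma0

# quasi-derivatives (7): y^[1] = y' - f11*y,  y^[2] = (y^[1])' - f21*y - f22*y^[1]
y1 = sp.diff(y(x), x) - f11 * y(x)
y2 = sp.diff(y1, x) - f21 * y(x) - f22 * y1

# y^[2] reproduces l_2(y) = y'' - tau0*y
assert sp.simplify(y2 - (sp.diff(y(x), x, 2) - tau0 * y(x))) == 0
```

Here the distributional structure is invisible because $\sigma_0$ is smooth; the point of the associated matrix is precisely that the quasi-derivatives in (7) still make sense when $\sigma_0$ is merely of class $L_2(0,1)$.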
For the regular case $\tau_\nu \in L_1(0,1)$, $\nu = \overline{0,n-2}$, the construction of the associated matrix $F(x)$ is well known (see [50] and Section 4.4 of this paper). The regularization of even-order ($n = 2m$) differential operators (1) with distribution coefficients $\tau_{2k+j} \in W_2^{-(m-k-j)}(0,1)$, $k = \overline{0,m-1}$, $j = 0,1$, has been obtained by Mirzoev and Shkalikov [1]. Later on, the case of odd order $n$ was considered in [47]. Vladimirov [51] suggested a more general construction which, in particular, includes both cases [1,47]. It is worth mentioning that, in [1,47,51], differential expressions of a more general form than (1) were studied, with the coefficients of $y^{(n)}$ and $y^{(n-1)}$ not necessarily equal to 1 and 0, respectively. However, in this paper, we confine ourselves to the form (1), which is natural for studying the inverse problems [9,46].
In this paper, we assume that $\ell_n(y)$ is any differential expression that has an associated matrix in the sense of Definition 1. We do not impose any additional restrictions on $\{\tau_\nu\}_{\nu=0}^{n-2}$, since we are interested in formulating abstract results that can be applied to various classes of differential operators. Certain restrictions on $\{\tau_\nu\}_{\nu=0}^{n-2}$ are imposed below when necessary.
Let us proceed to the inverse problem formulation. Suppose that we have a differential expression of form (1) and an associated matrix $F(x) = [f_{k,j}]_{k,j=1}^n$. By using the corresponding quasi-derivatives (7), define the linear forms

U_{s,a}(y) := y^{[p_{s,a}]}(a) + \sum_{j=1}^{p_{s,a}} u_{s,j,a}\, y^{[j-1]}(a), \quad s = \overline{1,n}, \;\; a = 0, 1,    (8)

where $p_{s,a} \in \{0, \ldots, n-1\}$, $p_{s,a} \ne p_{k,a}$ for $s \ne k$, and $u_{s,j,a}$ are some complex numbers. In addition, introduce the matrices $U_a = [u_{s,j,a}]_{s,j=1}^n$, $u_{s,j,a} := \delta_{j,p_{s,a}+1}$ for $j > p_{s,a}$, $a = 0,1$. Here and below, $\delta_{j,k}$ is the Kronecker delta. The triple $(F(x), U_0, U_1)$ is called the problem $L$. Below, we introduce various characteristics related to the problem $L$.
Denote by $\{C_k(x,\lambda)\}_{k=1}^n$ the solutions of Equation (4) satisfying the initial conditions

U_{s,0}(C_k) = \delta_{s,k}, \quad s = \overline{1,n}.    (9)

Equivalently, the $(n \times n)$ matrix function $C(x,\lambda) := [\vec{C}_k(x,\lambda)]_{k=1}^n$ is the solution of the system (5) with the initial condition $C(0,\lambda) = U_0^{-1}$. Therefore, the solutions $\{C_k(x,\lambda)\}_{k=1}^n$ are uniquely defined. Moreover, their quasi-derivatives $C_k^{[j]}(x,\lambda)$ are entire in $\lambda$ for each fixed $x \in [0,1]$, $k = \overline{1,n}$, $j = \overline{0,n-1}$.
It has been proved in ([9], Section 4) that, for all $\lambda \in \mathbb{C}$ except for a countable set, Equation (4) has the so-called Weyl solutions $\{\Phi_k(x,\lambda)\}_{k=1}^n$ satisfying the boundary conditions

U_{s,0}(\Phi_k) = \delta_{s,k}, \quad s = \overline{1,k}; \qquad U_{s,1}(\Phi_k) = 0, \quad s = \overline{k+1,n}.    (10)
Define the matrix function $\Phi(x,\lambda) = [\vec{\Phi}_k(x,\lambda)]_{k=1}^n$. The columns of the matrices $C(x,\lambda)$ and $\Phi(x,\lambda)$ form fundamental solution systems of (5). Consequently, the following relation holds:

\Phi(x,\lambda) = C(x,\lambda)\, M(\lambda),    (11)
where the matrix function M ( λ ) is called the Weyl matrix of the problem L (see [9]).
The notion of the Weyl matrix generalizes the notion of the Weyl function for the second-order operators (see [21,26]). Weyl functions and their generalizations play an important role in the inverse spectral theory for various classes of differential operators. In particular, Yurko [25,26,27,28] has used the Weyl matrix as the main spectral characteristic for the reconstruction of the higher-order differential operators (3) with regular coefficients. The analogous inverse problem for the differential expression of form (1) can be formulated as follows.
Problem 1.
Given the Weyl matrix $M(\lambda)$, find the coefficients $\{\tau_\nu\}_{\nu=0}^{n-2}$.
The uniqueness of Problem 1's solution has been proved in [9] for the Mirzoev–Shkalikov case: $n = 2m$, $\tau_{2k+j} \in W_2^{-(m-k-j)}(0,1)$ and $n = 2m+1$, $\tau_{2k+j} \in W_1^{-(m-k-j)}(0,1)$, $j = 0,1$. In [46], the uniqueness of recovering the boundary condition coefficients from the Weyl matrix has been studied.
It has been shown in ([9], Section 4) that the Weyl matrix $M(\lambda) = [M_{j,k}(\lambda)]_{j,k=1}^n$ is unit lower-triangular, and its nontrivial entries have the form

M_{j,k}(\lambda) = \frac{\Delta_{j,k}(\lambda)}{\Delta_{k,k}(\lambda)}, \quad 1 \le k < j \le n,    (12)

where $\Delta_{k,k}(\lambda) := \det\,[U_{s,1}(C_r)]_{s,r=k+1}^n$ and $\Delta_{j,k}(\lambda)$ is obtained from $\Delta_{k,k}(\lambda)$ by the replacement of $C_j$ by $C_k$. The functions $C_r^{[s]}(1,\lambda)$, $r = \overline{1,n}$, $s = \overline{0,n-1}$, are entire in $\lambda$, and so are the functions $\Delta_{j,k}(\lambda)$, $1 \le k \le j \le n$. Hence, $M(\lambda)$ is meromorphic in $\lambda$, and the poles of the $k$-th column of $M(\lambda)$ coincide with the zeros of $\Delta_{k,k}(\lambda)$. At the same time, the zeros of the entire functions $\Delta_{j,k}(\lambda)$, $1 \le k \le j \le n$, coincide with the eigenvalues of some boundary value problems for Equation (4), and the inverse problem by the Weyl matrix (Problem 1) is related to the inverse problem by $\frac{n(n+1)}{2}$ spectra (see [9] for details).
We say that the problem $L$ belongs to the class $W$ if all the zeros of $\Delta_{k,k}(\lambda)$ are simple for $k = \overline{1,n-1}$. Then, in view of (12), the poles of $M(\lambda)$ are simple. In general, the function $\Delta_{k,k}(\lambda)$ can have at most a finite number of multiple zeros. The latter case can be treated by developing the methods of Buterin et al. [52,53], who considered the non-self-adjoint Sturm–Liouville operators ($n = 2$) with regular potentials. However, the case of multiple zeros is much more technically complicated, so, in this paper, we always assume that $L \in W$.
Denote by $\Lambda$ the set of the Weyl matrix poles. Consider the Laurent series

M(\lambda) = \frac{M_{-1}(\lambda_0)}{\lambda - \lambda_0} + M_0(\lambda_0) + M_1(\lambda_0)(\lambda - \lambda_0) + \ldots, \quad \lambda_0 \in \Lambda.    (13)

Denote

N(\lambda_0) := [M_0(\lambda_0)]^{-1} M_{-1}(\lambda_0), \quad \lambda_0 \in \Lambda.

We call the collection $\{\lambda_0, N(\lambda_0)\}_{\lambda_0 \in \Lambda}$ the spectral data of the problem $L$. Obviously, the spectral data are uniquely specified by the Weyl matrix $M(\lambda)$, so Problem 1 can be reduced to the following problem:
Problem 2.
Given the spectral data $\{\lambda_0, N(\lambda_0)\}_{\lambda_0 \in \Lambda}$, find the coefficients $\{\tau_\nu\}_{\nu=0}^{n-2}$.

It is more convenient to study the reconstruction question for Problem 2. It is worth mentioning that, in fact, the Weyl matrix and the spectral data can be constructed according to the above definitions for any matrix function $F(x)$ of class $\mathfrak{F}_n$, not necessarily associated with any differential expression of form (1). However, in general, the matrix $F(x)$ is not uniquely specified by the Weyl matrix (see Example 4.5 in [46]). Therefore, in this paper, the solution of Problem 2 is divided into two steps:

\{\lambda_0, N(\lambda_0)\}_{\lambda_0 \in \Lambda} \;\xrightarrow{(1)}\; \{\Phi_k(x,\lambda)\}_{k=1}^n \;\xrightarrow{(2)}\; \{\tau_\nu\}_{\nu=0}^{n-2}.
The recovery of the Weyl solutions $\{\Phi_k(x,\lambda)\}_{k=1}^n$ from the spectral data is studied for a matrix $F(x)$ of general form, and then reconstruction formulas are derived for $\{\tau_\nu\}_{\nu=0}^{n-2}$ of certain classes.

For a fixed $F \in \mathfrak{F}_n$, we define the quasi-derivatives (7), the expression $\ell_n(y) := y^{[n]}$, the problem $L = (F(x), U_0, U_1)$, and its spectral data $\{\lambda_0, N(\lambda_0)\}_{\lambda_0 \in \Lambda}$ as above, and focus on the following auxiliary problem.
Problem 3.
Given the spectral data $\{\lambda_0, N(\lambda_0)\}_{\lambda_0 \in \Lambda}$, find the Weyl solutions $\{\Phi_k(x,\lambda)\}_{k=1}^n$.
Let us briefly describe the method of solution. Along with $L$, we consider another problem $\tilde{L} = (\tilde{F}(x), \tilde{U}_0, \tilde{U}_1)$ of the same form but with different coefficients. Similarly to $\Phi(x,\lambda)$, define $\tilde{\Phi}(x,\lambda)$ for $\tilde{L}$. An important role in our analysis is played by the matrix of spectral mappings:

P(x,\lambda) = \Phi(x,\lambda)\, [\tilde{\Phi}(x,\lambda)]^{-1}.

For each fixed $x \in [0,1]$, the matrix function $P(x,\lambda)$ is meromorphic in $\lambda$ with poles at the eigenvalues of $\Lambda \cup \tilde{\Lambda}$. The method is based on the integration of certain functions over a special family of contours enclosing these eigenvalues. Applying the residue theorem, we derive an infinite system of linear equations. Furthermore, that system is transformed into a linear equation in the Banach space $m$ of infinite bounded sequences. The main equation of the inverse problem has the form

(I - \tilde{R}(x))\, \psi(x) = \tilde{\psi}(x), \quad x \in [0,1],

where, for each fixed $x \in [0,1]$, $\psi(x)$ and $\tilde{\psi}(x)$ are elements of $m$, $\tilde{R}(x)$ is a linear compact operator in $m$, and $I$ is the identity operator. The element $\tilde{\psi}(x)$ and the operator $\tilde{R}(x)$ are constructed from the model problem $\tilde{L}$ and from the spectral data $\{\lambda_0, N(\lambda_0)\}_{\lambda_0 \in \Lambda}$, $\{\tilde{\lambda}_0, \tilde{N}(\tilde{\lambda}_0)\}_{\tilde{\lambda}_0 \in \tilde{\Lambda}}$ of the two problems $L$, $\tilde{L}$, respectively, while the unknown element $\psi(x)$ is related to the desired functions $\{\Phi_k(x,\lambda)\}_{k=1}^n$. We prove that the operator $(I - \tilde{R}(x))$ has a bounded inverse, and so the main equation is uniquely solvable (see Theorem 1). This implies the uniqueness of the solution of Problem 3. Using the main equation, we obtain a constructive procedure for solving Problem 3 (see Algorithm 1). These results can be applied to a wide range of differential operators (1) with associated matrices of class $\mathfrak{F}_n$.
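The infinite-dimensional main equation requires the constructions of Section 3, but the mechanism behind its solvability can be illustrated on a finite truncation: when a truncated operator has norm less than one, $I - \tilde{R}$ is boundedly invertible, and $\psi$ can be recovered from $\tilde{\psi}$, e.g., by the Neumann series. The following numpy sketch uses a random contractive matrix as a stand-in for a truncation of $\tilde{R}(x)$ at a fixed $x$ (all data here are hypothetical, not the actual operator of the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                                      # truncation size (toy stand-in for the space m)
R = rng.standard_normal((N, N))
R *= 0.5 / np.linalg.norm(R, ord=np.inf)     # scale so ||R||_inf = 0.5 < 1
psi_model = rng.standard_normal(N)           # stand-in for psi~(x) at a fixed x

# direct solve of (I - R) psi = psi~
psi = np.linalg.solve(np.eye(N) - R, psi_model)

# Neumann series psi = sum_k R^k psi~ converges since ||R|| < 1
approx = np.zeros(N)
term = psi_model.copy()
for _ in range(60):
    approx += term
    term = R @ term

assert np.allclose(psi, approx, atol=1e-8)
```

In the paper itself, $\tilde{R}(x)$ is merely compact rather than contractive, so invertibility of $I - \tilde{R}(x)$ is proved via the Fredholm alternative (Theorem 1), not via a norm bound; the sketch only shows why unique solvability of the linear equation yields a constructive recovery of $\psi(x)$.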
Furthermore, by using the solution of the main equation, we derive reconstruction formulas for $\{\tau_\nu\}_{\nu=0}^{n-2}$. We describe the general idea and then apply it to certain classes of operators:

(i) $n = 3$, $\tau_1 \in L_2(0,1)$, $\tau_0 \in W_2^{-1}(0,1)$.

(ii) $n$ is even, $\tau_\nu \in L_2(0,1)$, $\nu = \overline{0,n-2}$.

(iii) $n$ is even, $\tau_\nu \in W_2^{-1}(0,1)$, $\nu = \overline{0,n-2}$.

We obtain uniqueness theorems and constructive algorithms for solving Problem 2 in the cases (i)–(iii). Note that, although the functions $\tau_\nu$ in case (ii) are regular, this case has less smoothness than the one considered by Yurko [26].
The reconstruction formulas have the form of series, and the main difficulties in our analysis are related to studying the convergence of those series. These difficulties increase in the case of nonsmooth and/or distribution coefficients. In order to prove the series convergence, we use the Birkhoff-type solutions constructed by Savchuk and Shkalikov [2] and the precise asymptotic formulas for the spectral data obtained in [54]. For the cases (ii) and (iii), we reconstruct the functions $\tau_\nu$ step by step for $\nu = n-2, n-3, \ldots, 1, 0$. A similar approach can be used in the case of odd $n$, which requires technical modifications.
By using the reconstruction formulas, one can develop numerical methods for solving inverse spectral problems (see [55] for the second-order case). However, this issue requires additional work. In this paper, we obtain theoretical algorithms, which in the future can be used for the investigation of the existence and stability of the inverse problem solution.
It is worth mentioning that our method of inverse problem solution is the first one for higher-order differential operators with distribution coefficients. The obtained main equation and reconstruction formulas generalize the results of [45] for the Sturm–Liouville operators with distribution potential. The other methods, which were applied to the second-order operators (see, e.g., [34,39]), appear, to the best of the author's knowledge, to be ineffective for higher orders.
The paper is organized as follows. In Section 2, we provide preliminaries and study the properties of the spectral data. Section 3 is devoted to the contour integration and to the derivation of the main equation of the inverse problem in a Banach space. The unique solvability of the main equation is also proved. As a result, an algorithm for solving the auxiliary Problem 3 is obtained for arbitrary $F \in \mathfrak{F}_n$. In Section 4, we derive the reconstruction formulas for the coefficients $\{\tau_\nu\}_{\nu=0}^{n-2}$ and study the convergence of the obtained series. Section 5 contains a brief summary of the main results.

2. Preliminaries

Throughout the paper, we use the following notations.
  • $I$ is the $(n \times n)$ unit matrix, $e_k$ is the $k$-th column of $I$, $k = \overline{1,n}$.
  • The superscript $T$ denotes the matrix transpose.
  • $\delta_{k,j} = \begin{cases} 1, & k = j, \\ 0, & k \ne j. \end{cases}$
  • $J := [(-1)^{k+1} \delta_{k,n-j+1}]_{k,j=1}^n$, $J_a := [(-1)^{p^\star_{k,a}} \delta_{k,n-j+1}]_{k,j=1}^n$, where $p^\star_{k,a} := n - 1 - p_{k,a}$, $a = 0, 1$.
  • If, for $\lambda \to \lambda_0$,
    A(\lambda) = \sum_{k=-q}^{p} a_k (\lambda - \lambda_0)^k + o((\lambda - \lambda_0)^p),
    then we write $A_k(\lambda_0) := a_k$ for the corresponding Laurent coefficients.
  • The notations $\lfloor x \rfloor$ and $\lceil x \rceil$ are used for rounding a real number $x$ down and up, respectively.
  • The binomial coefficients are denoted by $C_n^k = \frac{n!}{k!\,(n-k)!}$.
  • Along with $L$, we consider the problems $\tilde{L}$, $L^\star$, $\tilde{L}^\star$ of the same form but with different coefficients. We agree that, if a symbol $\gamma$ denotes an object related to $L$, then the symbols $\tilde{\gamma}$, $\gamma^\star$, $\tilde{\gamma}^\star$ denote the analogous objects related to $\tilde{L}$, $L^\star$, $\tilde{L}^\star$, respectively. Note that the quasi-derivatives for the problems $\tilde{L}$, $L^\star$, $\tilde{L}^\star$ are defined by using the matrices $\tilde{F}(x)$, $F^\star(x)$, $\tilde{F}^\star(x)$, respectively, which may be different from $F(x)$.
  • The notation $y^{[k]}$ is used for the quasi-derivatives defined by (7) (or analogously by using the entries of $\tilde{F}(x)$, $F^\star(x)$, or $\tilde{F}^\star(x)$). The notation $\vec{y}(x)$ is used for the column vector of the quasi-derivatives $y^{[0]}(x), y^{[1]}(x), \ldots, y^{[n-1]}(x)$.
  • In estimates, the symbol $C$ is used for various positive constants independent of $x$, $l$, $k$, etc.
  • $a \underset{\text{if}\,(condition)}{\times} b = \begin{cases} ab, & \text{if } (condition) \text{ holds}, \\ a, & \text{otherwise}. \end{cases}$
In Section 2.1, we define an auxiliary problem $L^\star = (F^\star(x), U_0^\star, U_1^\star)$ and study its properties. In Section 2.2, the properties of the spectral data $\{\lambda_0, N(\lambda_0)\}_{\lambda_0 \in \Lambda}$ are investigated.

2.1. Problems $L$ and $L^\star$

For a matrix $F \in \mathfrak{F}_n$, define the matrix $F^\star(x) = [f^\star_{k,j}(x)]_{k,j=1}^n$ as follows:

f^\star_{k,j}(x) := (-1)^{k+j+1} f_{n-j+1,\,n-k+1}(x).    (14)
Obviously, $F^\star \in \mathfrak{F}_n$.
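This claim can be checked numerically. The following sketch, with a random lower-triangular matrix standing in for the values of $F(x)$ at a fixed point, verifies that the star operation (14) preserves the structure required in (6) (unit superdiagonal, zero trace, zeros above the superdiagonal) and is an involution, $F^{\star\star} = F$:

```python
import numpy as np

def star(F):
    """f*_{k,j} = (-1)^(k+j+1) * f_{n-j+1, n-k+1}, with 1-based indices as in (14)."""
    n = F.shape[0]
    Fs = np.empty_like(F)
    for k in range(1, n + 1):
        for j in range(1, n + 1):
            Fs[k - 1, j - 1] = (-1) ** (k + j + 1) * F[n - j, n - k]
    return Fs

n = 4
rng = np.random.default_rng(1)
F = np.tril(rng.standard_normal((n, n)))   # arbitrary lower-triangular part
F -= np.trace(F) / n * np.eye(n)           # make it trace-free, as (6) requires
for k in range(n - 1):
    F[k, k + 1] = 1.0                      # superdiagonal identically 1

Fs = star(F)
assert np.allclose(star(Fs), F)            # the star operation is an involution
assert abs(np.trace(Fs)) < 1e-12           # trace-free is preserved
assert np.allclose(np.diag(Fs, 1), 1.0)    # superdiagonal of F* is again 1
```

Of course, the analytic conditions in (6) (the $L_2$ diagonal and $L_1$ subdiagonal entries) are pointwise-invisible here; they are preserved because the star operation only permutes entries and flips signs.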
Let $F(x)$ be a fixed matrix function of class $\mathfrak{F}_n$. Suppose that $y \in \mathcal{D}_F$ and $z \in \mathcal{D}_{F^\star}$; the quasi-derivatives for $y$ are defined via (7) by using the elements of $F(x)$, and the quasi-derivatives for $z$ are defined as

z^{[0]} := z, \quad z^{[k]} = (z^{[k-1]})' - \sum_{j=1}^{k} f^\star_{k,j}\, z^{[j-1]}, \quad k = \overline{1,n},    (15)

and

\mathcal{D}_{F^\star} := \{ z \colon z^{[k]} \in AC[0,1], \; k = \overline{0,n-1} \}.

Define

\ell_n(y) := y^{[n]}, \quad \ell_n^\star(z) := (-1)^n z^{[n]}, \quad \langle z, y \rangle := \sum_{j=0}^{n-1} (-1)^j z^{[j]}\, y^{[n-j-1]}.
Lemma 1.
The following relation holds:

\frac{d}{dx} \langle z, y \rangle = z\, \ell_n(y) - y\, \ell_n^\star(z).    (16)
Proof. 
Differentiation implies

\frac{d}{dx} \langle z, y \rangle = \sum_{j=0}^{n-1} (-1)^j (z^{[j]})'\, y^{[n-j-1]} + \sum_{j=0}^{n-1} (-1)^j z^{[j]}\, (y^{[n-j-1]})'.    (17)

From (7) and (15), we obtain

(z^{[j]})' = z^{[j+1]} + \sum_{s=1}^{j+1} f^\star_{j+1,s}\, z^{[s-1]}, \qquad (y^{[n-j-1]})' = y^{[n-j]} + \sum_{s=1}^{n-j} f_{n-j,s}\, y^{[s-1]}.

Substituting the latter relations into (17), we obtain

\frac{d}{dx} \langle z, y \rangle = \sum_{j=0}^{n-1} (-1)^j y^{[n-j]} z^{[j]} + \sum_{j=0}^{n-1} (-1)^j \sum_{s=1}^{n-j} f_{n-j,s}\, y^{[s-1]} z^{[j]} + \sum_{j=0}^{n-1} (-1)^j y^{[n-j-1]} z^{[j+1]} + \sum_{j=0}^{n-1} (-1)^j \sum_{s=1}^{j+1} f^\star_{j+1,s}\, y^{[n-j-1]} z^{[s-1]}.

Note that

\sum_{j=0}^{n-1} (-1)^j y^{[n-j]} z^{[j]} + \sum_{j=0}^{n-1} (-1)^j y^{[n-j-1]} z^{[j+1]} = y^{[n]} z + (-1)^{n-1} y z^{[n]},
\sum_{j=0}^{n-1} (-1)^j \sum_{s=1}^{n-j} f_{n-j,s}\, y^{[s-1]} z^{[j]} = \sum_{1 \le s \le j \le n} (-1)^{s+1} f_{n-s+1,\,n-j+1}\, y^{[n-j]} z^{[s-1]},
\sum_{j=0}^{n-1} (-1)^j \sum_{s=1}^{j+1} f^\star_{j+1,s}\, y^{[n-j-1]} z^{[s-1]} = \sum_{1 \le s \le j \le n} (-1)^{j+1} f^\star_{j,s}\, y^{[n-j]} z^{[s-1]}.

Taking (14) into account, we arrive at (16).    □
If $y$ and $z$ satisfy the relations $\ell_n(y) = \lambda y$ and $\ell_n^\star(z) = \mu z$, respectively, then (16) readily implies

\frac{d}{dx} \langle z, y \rangle = (\lambda - \mu)\, y z.    (18)

Define $\vec{y}(x) = \operatorname{col}(y^{[0]}(x), y^{[1]}(x), \ldots, y^{[n-1]}(x))$ and $\vec{z}(x) = \operatorname{col}(z^{[0]}(x), z^{[1]}(x), \ldots, z^{[n-1]}(x))$ by using the corresponding quasi-derivatives (7) and (15), and recall that $J := [(-1)^{k+1} \delta_{k,n-j+1}]_{k,j=1}^n$. Then,

\langle z, y \rangle \big|_{x=a} = [\vec{z}(a)]^T J\, \vec{y}(a).    (19)
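Relation (19) is a purely algebraic identity in the quasi-derivative values at $x = a$, so it can be checked on arbitrary vectors. In the following sketch, random vectors stand in for $(y^{[0]}(a), \ldots, y^{[n-1]}(a))$ and $(z^{[0]}(a), \ldots, z^{[n-1]}(a))$:

```python
import numpy as np

n = 5
rng = np.random.default_rng(2)
yv = rng.standard_normal(n)    # stand-in for (y^[0](a), ..., y^[n-1](a))
zv = rng.standard_normal(n)    # stand-in for (z^[0](a), ..., z^[n-1](a))

# <z, y> = sum_{j=0}^{n-1} (-1)^j z^[j] y^[n-j-1]
form = sum((-1) ** j * zv[j] * yv[n - j - 1] for j in range(n))

# J with entries J_{k,j} = (-1)^(k+1) delta_{k, n-j+1} (1-based indices)
J = np.zeros((n, n))
for k in range(1, n + 1):
    J[k - 1, n - k] = (-1) ** (k + 1)

assert np.isclose(form, zv @ J @ yv)
```

The matrix $J$ is the signed antidiagonal matrix from the notation list of Section 2; the identity holds for every $n$ because the $(k, n-k+1)$ entry of $J$ pairs $z^{[k-1]}$ with $y^{[n-k]}$ exactly as in the sum defining $\langle z, y \rangle$.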
For $a = 0, 1$, let $U_a = [u_{s,j,a}]_{s,j=1}^n$ be an $(n \times n)$ matrix such that $u_{s,j,a} = \delta_{j,p_{s,a}+1}$ for $j > p_{s,a}$, where $p_{s,a} \in \{0, \ldots, n-1\}$ and $p_{s,a} \ne p_{k,a}$ for $s \ne k$. The matrices $U_a$ define the linear forms $U_{s,a}$ via (8).

Along with $U_a$, consider the matrices

U_a^\star := [J_a^{-1} U_a^{-1} J]^T, \quad a = 0, 1,    (20)

where $J_a = [(-1)^{p^\star_{k,a}} \delta_{k,n-j+1}]_{k,j=1}^n$, $p^\star_{k,a} := n - 1 - p_{n-k+1,a}$. The matrices $U_a^\star$, $a = 0, 1$, generate the linear forms

U^\star_{s,a}(z) = z^{[p^\star_{s,a}]}(a) + \sum_{j=1}^{p^\star_{s,a}} u^\star_{s,j,a}\, z^{[j-1]}(a), \quad s = \overline{1,n}, \;\; a = 0, 1.

The matrices $U_a^\star$ are chosen in such a way that the following relation holds:

\langle z, y \rangle \big|_{x=a} = \sum_{s=1}^{n} (-1)^{p^\star_{s,a}}\, U^\star_{s,a}(z)\, U_{n-s+1,a}(y)    (21)

for any $y \in \mathcal{D}_F$, $z \in \mathcal{D}_{F^\star}$. Indeed, the right-hand side of (21) can be represented in the matrix form

[U_a^\star \vec{z}(a)]^T J_a\, U_a \vec{y}(a).

Taking (19) and (20) into account, we arrive at (21).
Consider the problems $L = (F(x), U_0, U_1)$ and $L^\star = (F^\star(x), U_0^\star, U_1^\star)$. For $L$, the matrix functions $C(x,\lambda)$, $\Phi(x,\lambda)$, and $M(\lambda)$ were defined in the Introduction. For $L^\star$, similarly denote by $\{C_k^\star(x,\lambda)\}_{k=1}^n$ and $\{\Phi_k^\star(x,\lambda)\}_{k=1}^n$ the solutions of the equation $\ell_n^\star(z) = \lambda z$, $x \in (0,1)$, satisfying the conditions

U^\star_{s,0}(C_k^\star) = \delta_{s,k}, \quad s = \overline{1,n}; \qquad U^\star_{s,0}(\Phi_k^\star) = \delta_{s,k}, \quad s = \overline{1,k}; \qquad U^\star_{s,1}(\Phi_k^\star) = 0, \quad s = \overline{k+1,n}.    (22)

Put $C^\star(x,\lambda) := [\vec{C}_k^\star(x,\lambda)]_{k=1}^n$, $\Phi^\star(x,\lambda) := [\vec{\Phi}_k^\star(x,\lambda)]_{k=1}^n$. Then, the relation

\Phi^\star(x,\lambda) = C^\star(x,\lambda)\, M^\star(\lambda)    (23)

holds, where $M^\star(\lambda)$ is the Weyl matrix of the problem $L^\star$.
Lemma 2.
The following relations hold:

[M^\star(\lambda)]^T J_0\, M(\lambda) = J_0,    (24)

[\Phi^\star(x,\lambda)]^T J\, \Phi(x,\lambda) = J_0.    (25)
Proof. 
The initial conditions (9) are equivalent to $U_0 C(0,\lambda) = I$. Using (11), we obtain $M(\lambda) = U_0 \Phi(0,\lambda)$. Similarly, $M^\star(\lambda) = U_0^\star \Phi^\star(0,\lambda)$. Hence,

A(\lambda) := [M^\star(\lambda)]^T J_0\, M(\lambda) = [U_0^\star \Phi^\star(0,\lambda)]^T J_0\, U_0 \Phi(0,\lambda), \quad A(\lambda) = [A_{k,j}(\lambda)]_{k,j=1}^n,
A_{k,j}(\lambda) = [U_0^\star \vec{\Phi}_k^\star(0,\lambda)]^T J_0\, U_0 \vec{\Phi}_j(0,\lambda) = \sum_{s=1}^{n} (-1)^{p^\star_{s,0}}\, U^\star_{s,0}(\Phi_k^\star)\, U_{n-s+1,0}(\Phi_j).    (26)

On the one hand, using (10), (22), and (26), we obtain $A_{k,j}(\lambda) = 0$ if $k + j > n + 1$ and $A_{k,j}(\lambda) = (-1)^{p^\star_{k,0}}$ if $k + j = n + 1$. On the other hand, (21) and (26) imply $A_{k,j}(\lambda) = \langle \Phi_k^\star, \Phi_j \rangle|_{x=0}$. It follows from (18) that $\langle \Phi_k^\star, \Phi_j \rangle$ does not depend on $x$. Consequently,

\langle \Phi_k^\star, \Phi_j \rangle \big|_{x=0} = \langle \Phi_k^\star, \Phi_j \rangle \big|_{x=1} = \sum_{s=1}^{n} (-1)^{p^\star_{s,1}}\, U^\star_{s,1}(\Phi_k^\star)\, U_{n-s+1,1}(\Phi_j).

Using the boundary conditions (10) and (22) at $x = 1$, we conclude that $A_{k,j}(\lambda) = 0$ if $k + j < n + 1$. Thus, $A(\lambda) = J_0$, and (24) is proved.

Using the relation $A_{k,j}(\lambda) = \langle \Phi_k^\star, \Phi_j \rangle$ for $k, j = \overline{1,n}$ and (19), we obtain

A(\lambda) = [\Phi^\star(x,\lambda)]^T J\, \Phi(x,\lambda).

This implies (25).    □

2.2. Spectral Data

Consider the Weyl matrix $M(\lambda)$ of the problem $L = (F(x), U_0, U_1)$, where $F \in \mathfrak{F}_n$. Recall that the poles of the $k$-th column of $M(\lambda)$ coincide with the zeros of $\Delta_{k,k}(\lambda) = \det\,[U_{s,1}(C_r)]_{s,r=k+1}^n$. One can easily show that the zeros of $\Delta_{k,k}(\lambda)$ coincide with the eigenvalues of the following boundary value problem $L_k$:

\ell_n(y) = \lambda y, \quad x \in (0,1); \qquad U_{s,0}(y) = 0, \quad s = \overline{1,k}; \qquad U_{s,1}(y) = 0, \quad s = \overline{k+1,n}.
By virtue of Theorem 1.1 in [54], the spectrum of $L_k$ is a countable set of eigenvalues $\Lambda_k := \{\lambda_{l,k}\}_{l \ge 1}$ having the following asymptotics (counting with multiplicities):

\lambda_{l,k} = (-1)^{n-k} \left( \frac{\pi}{\sin \frac{\pi k}{n}} \right)^{n} (l + \chi_k + \varkappa_{l,k})^n,    (27)

where $\{\varkappa_{l,k}\} \in l_2$ and the constants $\chi_k$ depend only on $n$, $k$, and $\{p_{s,a}\}$. Hence, for a fixed $k \in \{1, \ldots, n-1\}$ and sufficiently large $l$, the eigenvalues $\lambda_{l,k}$ are simple.
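A quick sanity check of the leading term in (27): dropping the unknown constants $\chi_k$ and the $l_2$-remainder $\varkappa_{l,k}$ (they are not computed here, so the values below are only leading-order approximations), the eigenvalues of $L_k$ grow like $l^n$ along rays whose sign alternates with $k$:

```python
import numpy as np

def lam_leading(n, k, l, chi_k=0.0):
    """Leading-order approximation of lambda_{l,k} per (27); chi_k is an unknown
    constant depending on n, k, {p_{s,a}} -- set to 0 here purely for illustration."""
    return (-1) ** (n - k) * (np.pi / np.sin(np.pi * k / n)) ** n * (l + chi_k) ** n

n = 4
# the spectra of adjacent problems L_k lie on rays whose sign alternates with k
vals = [lam_leading(n, k, 10) for k in range(1, n)]
assert vals[0] * vals[1] < 0 and vals[0] * vals[2] > 0
# growth in l is of order l^n: doubling l multiplies the leading term by 2^n
assert np.isclose(abs(lam_leading(n, 1, 20) / lam_leading(n, 1, 10)), 2.0 ** n)
```

This alternation of signs is what separates the sets $\Lambda_k$ and $\Lambda_{k+1}$ for large indices and underlies Corollary 1 below about the structure of $N(\lambda_0)$.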
Assume that $L \in W$, that is, all the zeros of $\Delta_{k,k}(\lambda)$ are simple for $k = \overline{1,n-1}$. Then, in view of (12) and (24), the poles of $M(\lambda)$ and $M^\star(\lambda)$ are simple. It follows from (11) and (23) that the matrix functions $\Phi(x,\lambda)$ and $\Phi^\star(x,\lambda)$ for each fixed $x \in [0,1]$ also have only simple poles.
Denote $\Lambda := \bigcup_{k=1}^{n-1} \Lambda_k$. Similarly to $N(\lambda_0)$, denote

N^\star(\lambda_0) := [M_0^\star(\lambda_0)]^{-1} M_{-1}^\star(\lambda_0), \quad \lambda_0 \in \Lambda.    (28)

For $\lambda_0 \notin \Lambda$, we set $N(\lambda_0) = N^\star(\lambda_0) = 0$.
Let us study some properties of the matrices $N(\lambda_0)$ and $N^\star(\lambda_0)$. Denote by $\phi(x,\lambda)$ the first row of the matrix function $\Phi(x,\lambda)$: $\phi(x,\lambda) = e_1^T \Phi(x,\lambda) = [\Phi_k(x,\lambda)]_{k=1}^n$.
Lemma 3.
The following relations hold for each $\lambda_0 \in \Lambda$: $N^2(\lambda_0) = 0$,

[N^\star(\lambda_0)]^T = -J_0\, N(\lambda_0)\, J_0^{-1},    (29)

\Phi_{-1}(x,\lambda_0) = \Phi_0(x,\lambda_0)\, N(\lambda_0), \qquad \Phi^\star_{-1}(x,\lambda_0) = \Phi^\star_0(x,\lambda_0)\, N^\star(\lambda_0),    (30)

\ell_n(\phi_0(x,\lambda_0)) = \lambda_0\, \phi_0(x,\lambda_0) + \phi_0(x,\lambda_0)\, N(\lambda_0).    (31)
Proof. 
The relation (24) implies

[M(\lambda)]^{-1} = J_0^{-1} [M^\star(\lambda)]^T J_0,    (32)

M(\lambda)\, J_0^{-1} [M^\star(\lambda)]^T = J_0^{-1}.    (33)

It follows from (33) that

M_{-1}(\lambda_0)\, J_0^{-1} [M^\star_{-1}(\lambda_0)]^T = 0,    (34)

M_0(\lambda_0)\, J_0^{-1} [M^\star_{-1}(\lambda_0)]^T + M_{-1}(\lambda_0)\, J_0^{-1} [M^\star_0(\lambda_0)]^T = 0.    (35)

Using (13), (28), and (35), we obtain (29). Multiplying (29) by $N(\lambda_0)$ and using (34), we derive

N(\lambda_0)\, J_0^{-1} [N^\star(\lambda_0)]^T = -N^2(\lambda_0)\, J_0^{-1} = 0.

Hence, $N^2(\lambda_0) = 0$.

Using (11) and (32), we obtain

C(x,\lambda) = \Phi(x,\lambda)\, [M(\lambda)]^{-1} = \Phi(x,\lambda)\, J_0^{-1} [M^\star(\lambda)]^T J_0.

Since $C(x,\lambda)$ is entire in $\lambda$ for each fixed $x \in [0,1]$, we obtain

\Phi_0(x,\lambda_0)\, J_0^{-1} [M^\star_{-1}(\lambda_0)]^T J_0 + \Phi_{-1}(x,\lambda_0)\, J_0^{-1} [M^\star_0(\lambda_0)]^T J_0 = 0, \quad \lambda_0 \in \Lambda.    (36)

Using (36) and (28), we derive

\Phi_0(x,\lambda_0)\, J_0^{-1} [N^\star(\lambda_0)]^T J_0 + \Phi_{-1}(x,\lambda_0) = 0.

Taking (29) into account, we arrive at the first relation in (30). The second one is proved similarly.

It follows from the relation $\ell_n(\phi(x,\lambda)) = \lambda\, \phi(x,\lambda)$ that

\ell_n(\phi_{-1}(x,\lambda_0)) = \lambda_0\, \phi_{-1}(x,\lambda_0), \qquad \ell_n(\phi_0(x,\lambda_0)) = \lambda_0\, \phi_0(x,\lambda_0) + \phi_{-1}(x,\lambda_0).

Using (30), we arrive at (31).    □
Consider the entries of the matrix $N(\lambda_0) = [N_{k,j}(\lambda_0)]_{k,j=1}^n$. Since $M(\lambda)$ is unit lower-triangular, we have $N_{k,j}(\lambda_0) = 0$ for all $k \le j$, $\lambda_0 \in \Lambda$. The structural properties of $N(\lambda_0)$ are described by the following lemma.
Lemma 4.
(i) If $\lambda_0 \notin \Lambda_k$, then $N_{s,j}(\lambda_0) = 0$, $s = \overline{k+1,n}$, $j = \overline{1,k}$.

(ii) If $\lambda_0 \in \Lambda_s$ for $s = \overline{\nu+1,k-1}$, $\lambda_0 \notin \Lambda_\nu$, $\lambda_0 \notin \Lambda_k$, $1 \le \nu + 1 < k \le n$, then $N_{k,\nu+1}(\lambda_0) \ne 0$. (Here, $\Lambda_0 = \Lambda_n = \varnothing$.)
Proof. 
This lemma is proved similarly to Lemma 2.3.1 in [26], so we outline the proof briefly. If λ 0 Λ k , then Φ k , 1 ( x , λ 0 ) = 0 . On the other hand, it follows from (30) that
Φ k , 1 ( x , λ 0 ) = s = k + 1 n N s , k ( λ 0 ) Φ s , 0 ( x , λ 0 ) .
Applying the linear forms U s , 0 to this relation for s = k + 1 , n ¯ , we conclude that N s , k ( λ 0 ) = 0 , s = k + 1 , n ¯ . Thus, the assertion (i) is proved for j = k . The proof for j = k 1 , , 2 , 1 can be obtained by induction.
In order to prove (ii), we suppose that Δ ν , ν ( λ 0 ) ≠ 0 and Δ s , s ( λ 0 ) = 0 for s = ν + 1 , k − 1 ¯ . Then, it can be shown that U s , 1 ( Φ s , 0 ( x , λ 0 ) ) ≠ 0 , s = ν + 2 , k − 1 ¯ , and Φ ν + 1 , 1 ( x , λ 0 ) ≢ 0 . Suppose that N k , ν + 1 ( λ 0 ) = 0 . Then, (30) implies
Φ ν + 1 , 1 ( x , λ 0 ) = s = ν + 2 k 1 N s , ν + 1 ( λ 0 ) Φ s , 0 ( x , λ 0 ) .
Applying the linear forms U s , 1 for s = ν + 2 , k − 1 ¯ , we conclude that N s , ν + 1 ( λ 0 ) = 0 , s = ν + 2 , k − 1 ¯ , and so Φ ν + 1 , 1 ( x , λ 0 ) ≡ 0 . This contradiction yields (ii).    □
In view of the asymptotics (27), we have λ l , k ≠ λ r , k + 1 for sufficiently large l and r. Therefore, Lemma 4 implies the following corollary.
Corollary 1.
For sufficiently large | λ 0 | , λ 0 ∈ Λ , all the entries of N ( λ 0 ) equal zero except N k + 1 , k ( λ 0 ) , k = 1 , n − 1 ¯ .
Define the weight numbers  β l , k : = N k + 1 , k ( λ l , k ) . It is worth considering β l , k only for sufficiently large l. It follows from (13) and (12) that
β l , k = M k + 1 , k , 1 ( λ l , k ) = Δ k + 1 , k ( λ l , k ) / ( d d λ Δ k , k ( λ l , k ) ) .
Consequently, Theorem 6.2 from [54] yields the asymptotics
β l , k = l n − 1 + p k + 1 , 0 − p k , 0 ( β k 0 + ϰ l , k 0 ) , { ϰ l , k 0 } ∈ l 2 , k = 1 , n − 1 ¯ ,
where the constants β k 0 depend only on n, k, and { p s , a } .

3. Main Equation

This section is devoted to the constructive solution of the auxiliary Problem 3, that is, to the recovery of the Weyl solutions { Φ k ( x , λ ) } k = 1 n from the spectral data { λ 0 , N ( λ 0 ) } λ 0 ∈ Λ . We consider this problem for L = ( F ( x ) , U 0 , U 1 ) ∈ W with an arbitrary F ∈ F n . Thus, the results of this section can be applied to a wide class of differential expressions (1) with associated matrices in F n .
Along with L , we consider another problem L ˜ = ( F ˜ ( x ) , U ˜ 0 , U ˜ 1 ) of the same form but with different coefficients. Assume that F ˜ ∈ F n , p s , a = p ˜ s , a , s = 1 , n ¯ , a = 0 , 1 . The quasi-derivatives for L ˜ are defined by the matrix F ˜ ( x ) , so they differ from the quasi-derivatives of the problem L . The problem L ˜ 🟉 is defined similarly to L 🟉 . For simplicity, we assume that L ˜ ∈ W . The case L ˜ ∉ W requires technical modifications (see Remark 1). Denote I : = Λ ∪ Λ ˜ .
In Section 3.1, we reduce the studied problem to the infinite system (68) of linear equations with respect to some entries of ϕ 0 ( x , λ 0 ) , λ 0 I . Our technique is based on the contour integration in the λ -plane and on the Residue theorem. In Section 3.2, the system (68) is transformed into the main Equation (80) in the Banach space m of infinite bounded sequences. The unique solvability of the main equation is proved. Finally, we arrive at the constructive Algorithm 1 for finding { Φ k ( x , λ ) } k = 1 n by the spectral data. This algorithm is used in the next section for solving the inverse spectral problem.

3.1. Contour Integration

In order to formulate and prove the main lemma of this subsection (Lemma 6), we first need some preliminaries. Introduce the notations
D ( x , μ , λ ) : = ( λ μ ) 1 [ Φ ( x , μ ) ] 1 Φ ( x , λ ) , D ˜ ( x , μ , λ ) : = ( λ μ ) 1 [ Φ ˜ ( x , μ ) ] 1 Φ ˜ ( x , λ ) ,
D α ( x , λ 0 , λ ) : = [ D ( x , μ , λ ) ] | μ = λ 0 α , α ∈ Z ,
and similarly define D ˜ α ( x , λ 0 , λ ) .
Lemma 5.
The following relations hold:
D 1 ( x , λ 0 , λ ) = N ( λ 0 ) D 0 ( x , λ 0 , λ ) ,
[ D ( x , μ , λ ) ] | λ = λ 0 1 = [ D ( x , μ , λ ) ] | λ = λ 0 0 N ( λ 0 ) ,
[ ( λ λ 0 ) I + N ( λ 0 ) ] D 0 ( x , λ 0 , λ ) = J 0 1 [ ϕ 0 🟉 ( x , λ 0 ) ] T , ϕ ( x , λ ) ,
D ′ ( x , μ , λ ) = J 0 1 [ ϕ 🟉 ( x , μ ) ] T ϕ ( x , λ ) .
Proof. 
Using (25) and (38), we obtain
D ( x , μ , λ ) = ( λ μ ) 1 J 0 1 [ Φ 🟉 ( x , μ ) ] T J Φ ( x , λ ) .
It follows from (44) and (39) that
D 1 ( x , λ 0 , λ ) = ( λ λ 0 ) 1 J 0 1 [ Φ 1 🟉 ( x , λ 0 ) ] T J Φ ( x , λ ) ,
D 0 ( x , λ 0 , λ ) = ( λ λ 0 ) 1 J 0 1 [ Φ 0 🟉 ( x , λ 0 ) ] T J Φ ( x , λ ) + ( λ λ 0 ) 2 J 0 1 [ Φ 1 🟉 ( x , λ 0 ) ] T J Φ ( x , λ ) .
Using (45) and (46) together with Lemma 3, we derive (40). The relation (41) is proved similarly.
It follows from (19) that
[ Φ 🟉 ( x , μ ) ] T J Φ ( x , λ ) = [ ϕ 🟉 ( x , μ ) ] T , ϕ ( x , λ ) .
Using (45), (46), and (47), we obtain
( λ λ 0 ) D 0 ( x , λ 0 , λ ) = J 0 1 [ ϕ 0 🟉 ( x , μ ) ] T , ϕ ( x , λ ) + D 1 ( x , λ 0 , λ ) .
Taking (40) into account, we arrive at (42).
In order to prove (43), we combine (44), (47), and (18):
D ′ ( x , μ , λ ) = ( λ − μ ) − 1 J 0 1 d d x [ ϕ 🟉 ( x , μ ) ] T , ϕ ( x , λ ) = J 0 1 [ ϕ 🟉 ( x , μ ) ] T ϕ ( x , λ ) .
   □
Put N ^ ( λ 0 ) : = N ( λ 0 ) − N ˜ ( λ 0 ) . Below, in this section, we suppose that x ∈ [ 0 , 1 ] is fixed.
Lemma 6.
The following relations hold:
ϕ ( x , λ ) = ϕ ˜ ( x , λ ) + ∑ λ 0 ∈ I ϕ 0 ( x , λ 0 ) N ^ ( λ 0 ) D ˜ 0 ( x , λ 0 , λ ) ,
D ( x , μ , λ ) − D ˜ ( x , μ , λ ) = ∑ λ 0 ∈ I [ D ( x , μ , ξ ) ] ξ = λ 0 0 N ^ ( λ 0 ) D ˜ 0 ( x , λ 0 , λ ) ,
where the series converge in the sense
∑ λ 0 ∈ I = lim R → ∞ ∑ λ 0 ∈ I R , I R : = { λ ∈ I : | λ | < R } ,
uniformly with respect to λ and μ on compact sets of ( C \ I ) .
Proof. 
In this proof, a crucial role is played by the matrix of spectral mappings
P ( x , λ ) = Φ ( x , λ ) [ Φ ˜ ( x , λ ) ] 1 .
It follows from (25) and (50) that
P ( x , λ ) = Φ ( x , λ ) J 0 1 [ Φ ˜ 🟉 ( x , λ ) ] T J .
The proof consists of three steps.
Step 1. Regions and contours. Choose a disk C * : = { λ ∈ C : | λ | < λ * } of sufficiently large radius λ * . Fix the branch of the root λ 1 / n so that arg ( λ 1 / n ) ∈ [ π / ( 2 n ) , 3 π / ( 2 n ) ) . Then, it follows from the asymptotics (27) that the roots ρ 0 : = λ 0 1 / n of the eigenvalues λ 0 ∈ ( I \ C * ) lie in the two strips
S j : = { ρ : R e ( ϵ j ρ ) > 0 , | I m ( ϵ j ρ ) | < c } , ϵ j : = exp ( 2 π i j / n ) , j = 0 , 1 ,
for an appropriate choice of the constant c. More precisely, λ l , k 1 / n ∈ S 0 if ( n − k ) is even and λ l , k 1 / n ∈ S 1 otherwise. For j = 0 , 1 , denote by Ξ j the image of S j in the λ -plane under the mapping λ = ρ n . Put Ξ : = Ξ 0 ∪ Ξ 1 ∪ C * . Clearly, I ⊂ Ξ .
Furthermore, fix a sufficiently small δ > 0 and define the regions
S j , δ : = { ρ ∈ S j : ∃ ρ 0 ∈ S j s . t . ρ 0 n ∈ I and | ρ − ρ 0 | < δ } , j = 0 , 1 .
For j = 0 , 1 , denote by Ξ j , δ the image of S j , δ in the λ -plane under the mapping λ = ρ n . Put
H δ : = C \ ( Ξ 0 , δ ∪ Ξ 1 , δ ∪ C * ) .
Let λ = ρ n , Θ ( ρ ) : = diag { 1 , ρ , , ρ n 1 } . It can be shown in the standard way (see, e.g., the relation (2.1.37) in [26], and the proof of Theorem 2 in [9]) that
P ( x , λ ) = Θ ( ρ ) ( I + o ( 1 ) ) [ Θ ( ρ ) ] 1 , | λ | ,
uniformly with respect to λ H δ .
For sufficiently large values of R > 0 , define the regions (see Figure 1):
Ξ R : = { λ Ξ : | λ | < R } , Ξ R ± : = { λ : | λ | < R , λ Ξ , ± I m λ > 0 } ,
and their boundaries γ R : = ∂ Ξ R , γ R ± : = ∂ Ξ R ± with the counter-clockwise circuit. Below, we consider only such radii R that γ R ⊂ H δ .
Step 2. Contour integration. In view of (51), the matrix function P ( x , λ ) is meromorphic in λ with poles in I . Hence, P ( x , λ ) is analytic in Ξ R ± . Let P 1 ( x , λ ) be the first row of P ( x , λ ) . The Cauchy formula implies
P 1 ( x , λ ) e 1 T = 1 2 π i γ R ± P 1 ( x , ξ ) e 1 T λ ξ d ξ , λ Ξ R ± , P ( x , λ ) P ( x , μ ) λ μ = 1 2 π i γ R ± P ( x , ξ ) ( λ ξ ) ( ξ μ ) d ξ , λ , μ Ξ R ± .
Consequently,
P 1 ( x , λ ) = e 1 T + 1 2 π i γ R P 1 ( x , ξ ) λ ξ d ξ 1 2 π i | ξ | = R P 1 ( x , ξ ) e 1 T λ ξ d ξ ,
P ( x , λ ) P ( x , μ ) λ μ = 1 2 π i γ R P ( x , ξ ) ( λ ξ ) ( ξ μ ) d ξ 1 2 π i | ξ | = R P ( x , ξ ) ( λ ξ ) ( ξ μ ) d ξ .
Using (38), (50), (54), and (55), we derive
ϕ ( x , λ ) = P 1 ( x , λ ) Φ ˜ ( x , λ ) = ϕ ˜ ( x , λ ) + 1 2 π i γ R P 1 ( x , ξ ) Φ ˜ ( x , λ ) λ ξ d ξ + ε R 1 ( x , λ ) , D ( x , μ , λ ) D ˜ ( x , μ , λ ) = [ Φ ( x , μ ) ] 1 ( P ( x , λ ) P ( x , μ ) ) Φ ˜ ( x , λ ) λ μ = 1 2 π i γ R [ Φ ( x , μ ) ] 1 Φ ( x , ξ ) ξ μ [ Φ ˜ ( x , ξ ) ] 1 Φ ˜ ( x , λ ) λ ξ d ξ + ε R 2 ( x , μ , λ )
= 1 2 π i γ R D ( x , μ , ξ ) D ˜ ( x , ξ , λ ) d ξ + ε R 2 ( x , μ , λ ) ,
where
ε R 1 ( x , λ ) : = 1 2 π i | ξ | = R ( P 1 ( x , ξ ) e 1 T ) Φ ˜ ( x , λ ) λ ξ d ξ , ε R 2 ( x , μ , λ ) : = 1 2 π i | ξ | = R [ Φ ( x , μ ) ] 1 P ( x , ξ ) Φ ˜ ( x , λ ) ( λ ξ ) ( ξ μ ) d ξ .
It follows from (53) that
lim R γ R H δ ε R 1 ( x , λ ) = 0 , lim R γ R H δ ε R 2 ( x , μ , λ ) = 0 .
Step 3. Residues. Using the first row of (51):
P 1 ( x , λ ) = ϕ ( x , λ ) J 0 1 [ Φ ˜ 🟉 ( x , λ ) ] T J
and the Residue theorem, we obtain
1 2 π i ∫ γ R P 1 ( x , ξ ) Φ ˜ ( x , λ ) λ − ξ d ξ = ∑ λ 0 ∈ I R Res ξ = λ 0 ϕ ( x , ξ ) D ˜ ( x , ξ , λ ) .
Using (56), (58), and (59), we obtain
ϕ ( x , λ ) = ϕ ˜ ( x , λ ) + λ 0 I ( ϕ 1 ( x , λ 0 ) D ˜ 0 ( x , λ 0 , λ ) + ϕ 0 ( x , λ 0 ) D ˜ 1 ( x , λ 0 , λ ) ) .
It follows from (30) that
ϕ 1 ( x , λ 0 ) = ϕ 0 ( x , λ 0 ) N ( λ 0 ) .
Substituting (40) for D ˜ 1 ( x , λ 0 , λ ) and (61) into (60), we derive the relation (48).
It remains to prove (49). Using Lemma 5, we derive
Res ξ = λ 0 D ( x , μ , ξ ) D ˜ ( x , ξ , λ ) = [ D ( x , μ , ξ ) ] | ξ = λ 0 1 D ˜ 0 ( x , λ 0 , λ ) + [ D ( x , μ , ξ ) ] | ξ = λ 0 0 D ˜ 1 ( x , λ 0 , λ ) = [ D ( x , μ , ξ ) ] | ξ = λ 0 0 N ^ ( λ 0 ) D ˜ 0 ( x , λ 0 , λ ) .
Combining (57), (58), and (62) all together and applying the Residue theorem, we arrive at (49).
Now, (48) and (49) are proved only for λ , μ ∈ ( C \ Ξ ) . Using analytic continuation, we conclude that these relations hold for λ , μ ∈ ( C \ I ) .    □
Our next goal is to obtain an infinite system of linear equations with respect to some entries of ϕ 0 ( λ 0 ) , λ 0 I . Introduce the ordered set
V : = { ( l , k , ε ) : l ≥ 1 , k ∈ { 1 , … , n − 1 } , ε ∈ { 0 , 1 } } .
For v = ( l , k , ε ) and v 0 = ( l 0 , k 0 , ε 0 ) with v , v 0 ∈ V , we write v < v 0 if l < l 0 , or ( l = l 0 and k < k 0 ) , or ( l = l 0 , k = k 0 , and ε < ε 0 ) . Denote
λ l , k , 0 : = λ l , k , λ l , k , 1 : = λ ˜ l , k , N 0 ( λ 0 ) : = N ( λ 0 ) , N 1 ( λ 0 ) : = N ˜ ( λ 0 ) ,
φ l , k , ε ( x ) : = Φ k + 1 , 0 ( x , λ l , k , ε ) , φ ˜ l , k , ε ( x ) : = Φ ˜ k + 1 , 0 ( x , λ l , k , ε ) ,
P ˜ l , k , ε ( x , λ ) : = e k + 1 T N ε ( λ l , k , ε ) D ˜ 0 ( x , λ l , k , ε , λ ) ,
G ˜ ( l , k , ε ) , ( l 0 , k 0 , ε 0 ) ( x ) : = [ P ˜ l , k , ε ( x , λ ) ] λ = λ l 0 , k 0 , ε 0 0 e k 0 + 1 ,
and similarly define P l , k , ε ( x , λ ) , G ( l , k , ε ) , ( l 0 , k 0 , ε 0 ) ( x ) . Using these notations, we obtain the following corollary of Lemma 6.
Corollary 2.
The following relations hold:
ϕ ( x , λ ) = ϕ ˜ ( x , λ ) + ∑ ( l , k , ε ) ∈ V ( − 1 ) ε φ l , k , ε ( x ) P ˜ l , k , ε ( x , λ ) ,
φ l 0 , k 0 , ε 0 ( x ) = φ ˜ l 0 , k 0 , ε 0 ( x ) + ∑ ( l , k , ε ) ∈ V ( − 1 ) ε φ l , k , ε ( x ) G ˜ ( l , k , ε ) , ( l 0 , k 0 , ε 0 ) ( x ) ,
G ( l 0 , k 0 , ε 0 ) , ( l 1 , k 1 , ε 1 ) ( x ) − G ˜ ( l 0 , k 0 , ε 0 ) , ( l 1 , k 1 , ε 1 ) ( x ) = ∑ ( l , k , ε ) ∈ V ( − 1 ) ε G ( l 0 , k 0 , ε 0 ) , ( l , k , ε ) ( x ) G ˜ ( l , k , ε ) , ( l 1 , k 1 , ε 1 ) ( x ) ,
where x ∈ [ 0 , 1 ] , ( l 0 , k 0 , ε 0 ) , ( l 1 , k 1 , ε 1 ) ∈ V .
Proof. 
Taking Lemma 4 on the structure of N ( λ 0 ) and N ˜ ( λ 0 ) into account, we rewrite (48) in the form
ϕ ( x , λ ) = ϕ ˜ ( x , λ ) + ( l , k , ε ) V ( 1 ) ε Φ k + 1 , 0 ( x , λ l , k , ε ) e k + 1 T N ε ( λ l , k , ε ) D ˜ 0 ( x , λ l , k , ε , λ ) .
Using (64) and (65), we arrive at (67). Taking the ( k 0 + 1 ) -th entry in the relation (67), putting λ = λ l 0 , k 0 , ε 0 , and using (64) and (66), we readily obtain (68).
Analogously, we represent (49) as follows:
D ( x , μ , λ ) D ˜ ( x , μ , λ ) = ( l , k , ε ) V ( 1 ) ε [ D ( x , μ , ξ ) ] ξ = λ l , k , ε 0 e k + 1 e k + 1 T N ε ( λ l , k , ε ) D ˜ 0 ( x , λ l , k , ε , λ ) .
Passing from D ( x , μ , λ ) and D ˜ ( x , μ , λ ) to P l 0 , k 0 , ε 0 ( x , λ ) and P ˜ l 0 , k 0 , ε 0 ( x , λ ) , respectively, we derive
P l 0 , k 0 , ε 0 ( x , λ ) P ˜ l 0 , k 0 , ε 0 ( x , λ ) = ( l , k , ε ) V ( 1 ) ε [ P l 0 , k 0 , ε 0 ( x , ξ ) ] ξ = λ l , k , ε 0 e k + 1 P ˜ l , k , ε ( x , λ ) .
Using (66) and the analogous relation for G ( l , k , ε ) , ( l 0 , k 0 , ε 0 ) ( x ) , we finally arrive at (69).    □
The relations (68) can be considered as an infinite linear system with respect to φ l , k , ε ( x ) , ( l , k , ε ) V . However, it is inconvenient to use (68) as the main equation system for the inverse problem, because the series in (68) converges only “with brackets”:
∑ ( l , k , ε ) ∈ V ( … ) = ∑ ( l , k ) ∑ ε = 0 , 1 ( … ) .
Therefore, in the next subsection, we transform the system (68) into a linear equation in a suitable Banach space. The relation (69) is used to prove the unique solvability of the main equation.
Remark 1.
If L ˜ ∉ W , that is, if the poles of M ˜ ( λ ) are not necessarily simple, then this influences the calculation of the residues in (59). Consequently, we obtain the following relation instead of (48):
ϕ ( x , λ ) = ϕ ˜ ( x , λ ) + λ 0 I [ ϕ 0 ( x , λ 0 ) ( N ( λ 0 ) D ˜ 0 ( x , λ 0 , λ ) + D ˜ 1 ( x , λ 0 , λ ) ) + k = 1 m λ 0 1 ϕ k ( x , λ 0 ) D ˜ ( k + 1 ) ( x , λ 0 , λ ) ] ,
where m λ 0 is the multiplicity of the pole λ 0 ∈ Λ ˜ . Using (70), one can derive an infinite system analogous to (68), containing not only the entries of the vectors ϕ 0 ( x , λ 0 ) but also those of ϕ k ( x , λ 0 ) for k = 1 , m λ 0 − 1 ¯ .

3.2. Linear Equation in a Banach Space

Define the numbers { ξ l } , which characterize “the difference” of the two spectral data sets { λ 0 , N ( λ 0 ) } λ 0 Λ and { λ ˜ 0 , N ˜ ( λ ˜ 0 ) } λ ˜ 0 Λ ˜ :
ξ l : = ∑ k = 1 n − 1 ( | λ l , k − λ ˜ l , k | + ∑ j = k + 1 n | N j , k ( λ l , k ) − N ˜ j , k ( λ ˜ l , k ) | l p k , 0 − p k + 1 , 0 l 1 − n ) , l ≥ 1 .
Taking Corollary 1 into account, we reduce (71) to the following form for all sufficiently large values of l:
ξ l = ∑ k = 1 n − 1 ( | λ l , k − λ ˜ l , k | + | β l , k − β ˜ l , k | l p k , 0 − p k + 1 , 0 l 1 − n ) .
Relation (72), together with the asymptotics (27) and (37), implies { ξ l } ∈ l 2 .
Lemma 7.
The following estimates hold for ( l , k , ε ) , ( l 0 , k 0 , ε 0 ) V :
| φ l , k , ε ( x ) | ≤ C w l , k ( x ) , | φ l , k , 0 ( x ) − φ l , k , 1 ( x ) | ≤ C w l , k ( x ) ξ l , | G ( l , k , ε ) , ( l 0 , k 0 , ε 0 ) ( x ) | ≤ C / ( | l − l 0 | + 1 ) · w l 0 , k 0 ( x ) / w l , k ( x ) , | G ( l , k , 0 ) , ( l 0 , k 0 , ε 0 ) ( x ) − G ( l , k , 1 ) , ( l 0 , k 0 , ε 0 ) ( x ) | ≤ C ξ l / ( | l − l 0 | + 1 ) · w l 0 , k 0 ( x ) / w l , k ( x ) , | G ( l , k , ε ) , ( l 0 , k 0 , 0 ) ( x ) − G ( l , k , ε ) , ( l 0 , k 0 , 1 ) ( x ) | ≤ C ξ l 0 / ( | l − l 0 | + 1 ) · w l 0 , k 0 ( x ) / w l , k ( x ) , | G ( l , k , 0 ) , ( l 0 , k 0 , 0 ) ( x ) − G ( l , k , 0 ) , ( l 0 , k 0 , 1 ) ( x ) − G ( l , k , 1 ) , ( l 0 , k 0 , 0 ) ( x ) + G ( l , k , 1 ) , ( l 0 , k 0 , 1 ) ( x ) | ≤ C ξ l ξ l 0 / ( | l − l 0 | + 1 ) · w l 0 , k 0 ( x ) / w l , k ( x ) ,
where
w l , k ( x ) : = l p k + 1 , 0 exp ( x l cot ( k π / n ) ) ,
and the constant C does not depend on x , l , ε , k , l 0 , ε 0 , k 0 .
The proof of Lemma 7 repeats the technique of ([26], Section 2.3.3), so we omit it. The similar estimates are valid for φ ˜ l , k , ε ( x ) and G ˜ ( l 0 , k 0 , ε 0 ) , ( l , k , ε ) ( x ) .
Put θ l : = 1 / ξ l if ξ l ≠ 0 and θ l : = 0 otherwise. Introduce the notations
[ ψ l , k , 0 ( x ) ; ψ l , k , 1 ( x ) ] : = w l , k − 1 ( x ) [ θ l − θ l ; 0 1 ] [ φ l , k , 0 ( x ) ; φ l , k , 1 ( x ) ] ,
[ R ( l 0 , k 0 , 0 ) , ( l , k , 0 ) ( x ) R ( l 0 , k 0 , 0 ) , ( l , k , 1 ) ( x ) ; R ( l 0 , k 0 , 1 ) , ( l , k , 0 ) ( x ) R ( l 0 , k 0 , 1 ) , ( l , k , 1 ) ( x ) ] : = w l , k ( x ) / w l 0 , k 0 ( x ) · [ θ l 0 − θ l 0 ; 0 1 ] [ G ( l , k , 0 ) , ( l 0 , k 0 , 0 ) ( x ) G ( l , k , 1 ) , ( l 0 , k 0 , 0 ) ( x ) ; G ( l , k , 0 ) , ( l 0 , k 0 , 1 ) ( x ) G ( l , k , 1 ) , ( l 0 , k 0 , 1 ) ( x ) ] [ ξ l 1 ; 0 1 ] ,
where the matrices are written row by row, with rows separated by semicolons.
For brevity, put ψ v ( x ) : = ψ l , k , ε ( x ) , R v 0 , v ( x ) : = R ( l 0 , k 0 , ε 0 ) , ( l , k , ε ) ( x ) , v = ( l , k , ε ) , v 0 = ( l 0 , k 0 , ε 0 ) , v , v 0 V . The functions ψ ˜ v ( x ) and R ˜ v 0 , v ( x ) are defined analogously.
Using (68) and (69), and the above notations, we obtain
ψ v 0 ( x ) = ψ ˜ v 0 ( x ) + ∑ v ∈ V R ˜ v 0 , v ( x ) ψ v ( x ) , v 0 ∈ V ,
R v 1 , v 0 ( x ) − R ˜ v 1 , v 0 ( x ) = ∑ v ∈ V R ˜ v 1 , v ( x ) R v , v 0 ( x ) , v 1 , v 0 ∈ V .
Lemma 7 yields the estimates
| ψ v ( x ) | ≤ C , | R v 0 , v ( x ) | ≤ C ξ l / ( | l − l 0 | + 1 ) , v , v 0 ∈ V ,
and the similar estimates hold for ψ ˜ v ( x ) and R ˜ v 0 , v ( x ) . Consequently, the Cauchy-Bunyakovsky-Schwarz inequality
∑ l ξ l / ( | l − l 0 | + 1 ) ≤ ( ∑ l ξ l 2 ) 1 / 2 ( ∑ l 1 / ( | l − l 0 | + 1 ) 2 ) 1 / 2 < ∞ ,
implies the absolute convergence of the series in (75) and (76).
Consider the Banach space m of bounded infinite sequences α = [ α v ] v ∈ V with the norm ∥ α ∥ m = sup v ∈ V | α v | . Obviously, ψ ( x ) , ψ ˜ ( x ) ∈ m for each fixed x ∈ [ 0 , 1 ] . Define the linear operator R ( x ) = [ R v 0 , v ( x ) ] v 0 , v ∈ V acting on an element α = [ α v ] v ∈ V of m by the following rule:
[ R ( x ) α ] v 0 = ∑ v ∈ V R v 0 , v ( x ) α v , v 0 ∈ V .
The operator R ˜ ( x ) = [ R ˜ v 0 , v ( x ) ] v 0 , v ∈ V is defined similarly. It follows from (77) and (78) that the operators R ( x ) and R ˜ ( x ) are bounded from m to m for each fixed x ∈ [ 0 , 1 ] . Denote by I the unit operator in m.
Using the introduced notations, we obtain the following theorem on the main equation and its unique solvability.
Theorem 1.
For each fixed x ∈ [ 0 , 1 ] , the linear operator R ( x ) is compact in m and can be approximated by finite-rank operators: R ( x ) = lim N → ∞ R N ( x ) . The same properties are valid for R ˜ ( x ) . Furthermore, the following relation holds:
( I − R ˜ ( x ) ) ψ ( x ) = ψ ˜ ( x ) , x ∈ [ 0 , 1 ] ,
which is called the main equation of the inverse problem. The operator ( I − R ˜ ( x ) ) has a bounded inverse of the form
( I − R ˜ ( x ) ) − 1 = I + R ( x ) .
Thus, the main Equation (80) is uniquely solvable in m for each fixed x [ 0 , 1 ] .
Proof. 
For N ∈ N , define the index set V N : = { v = ( l , k , ε ) ∈ V : l ≤ N } and the finite-rank operator R N ( x ) :
[ R N ( x ) α ] v 0 = ∑ v ∈ V N R v 0 , v ( x ) α v .
Using (77)–(82), we show that
∥ R ( x ) − R N ( x ) ∥ m → m = sup v 0 ∈ V ∑ v ∈ V \ V N | R v 0 , v ( x ) | ≤ sup l 0 ∑ l ≥ N C ξ l / ( | l − l 0 | + 1 ) → 0 , N → ∞ .
Hence, the operator R ( x ) is compact.
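The displayed tail estimate can also be traced numerically; the sketch below uses a placeholder summable sequence ξ l (an assumption for illustration, not data from the text) and probes the supremum over a finite range of l 0 :

```python
# Tail sup_{l0} sum_{l >= N} C*xi_l/(|l - l0| + 1) for a placeholder sequence.
L = 3000
C = 1.0
xi = [1.0 / (l * l) for l in range(1, L + 1)]  # illustrative choice of xi_l

def tail_norm(N):
    best = 0.0
    for l0 in range(1, 100):  # finite probe range for the supremum
        s = sum(C * xi[l - 1] / (abs(l - l0) + 1) for l in range(N, L + 1))
        best = max(best, s)
    return best

norms = [tail_norm(N) for N in (10, 100, 1000)]
assert norms[0] > norms[1] > norms[2]  # the tail decreases as N grows
```

This monotone decay of the tail is what makes the finite-rank approximation, and hence the compactness argument, work.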
According to our notations, the relations (75) and (76) take the form (80) and
R ( x ) − R ˜ ( x ) = R ˜ ( x ) R ( x ) ,
respectively. The latter relation implies (81), which completes the proof.    □
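For the reader's convenience, the step from the displayed relation to (81) can be spelled out; the following is one way to fill in the one-line argument (with the same operator names as above):

```latex
\bigl(I - \tilde{R}(x)\bigr)\bigl(I + R(x)\bigr)
  = I + R(x) - \tilde{R}(x) - \tilde{R}(x)R(x) = I,
```

so I + R ( x ) is a bounded right inverse of I − R ˜ ( x ) ; since R ˜ ( x ) is compact, the operator I − R ˜ ( x ) is Fredholm of index zero, and the one-sided bounded inverse is therefore two-sided.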
Thus, we arrive at the following algorithm for solving Problem 3.
Algorithm 1: Suppose that the spectral data { λ 0 , N ( λ 0 ) } λ 0 Λ of the problem L W are given. We have to find the Weyl solutions { Φ k ( x , λ ) } k = 1 n .
  • Choose an arbitrary model problem L ˜ ∈ W with p ˜ s , a = p s , a , s = 1 , n ¯ , a = 0 , 1 . In particular, one can take F ˜ ( x ) = [ δ k + 1 , j ] k , j = 1 n , U ˜ a = [ δ j , p s , a + 1 ] s , j = 1 n .
  • For the problem L ˜ , find the matrix function Φ ˜ ( x , λ ) and then D ˜ ( x , μ , λ ) by (38).
  • Using Φ ˜ ( x , λ ) , D ˜ ( x , μ , λ ) , the spectral data { λ 0 , N ( λ 0 ) } λ 0 Λ , { λ ˜ 0 , N ˜ ( λ ˜ 0 ) } λ ˜ 0 Λ ˜ ,
    and the notations (63), find φ ˜ l , k , ε ( x ) , P ˜ l , k , ε ( x , λ ) , and G ˜ ( l , k , ε ) , ( l 0 , k 0 , ε 0 ) for ( l , k , ε ) , ( l 0 , k 0 , ε 0 ) V via (64), (65), and (66), respectively.
  • Construct the infinite sequence ψ ˜ ( x ) and the operator R ˜ ( x ) by using (73) and (74) (with tilde), respectively.
  • Find ψ ( x ) by solving the main Equation (80).
  • Find { φ l , k , ε ( x ) } ( l , k , ε ) V from (73):
    [ φ l , k , 0 ( x ) ; φ l , k , 1 ( x ) ] = w l , k ( x ) [ ξ l 1 ; 0 1 ] [ ψ l , k , 0 ( x ) ; ψ l , k , 1 ( x ) ] ,
    where the matrix is written row by row, with rows separated by semicolons.
  • Construct ϕ ( x , λ ) = [ Φ k ( x , λ ) ] k = 1 n by (67).

4. Reconstruction Formulas

In this section, we use the solution ψ ( x ) of the main Equation (80) to obtain the solution of Problem 2 for some classes of differential operators. We derive the reconstruction formulas in the form of series for the coefficients { τ ν } ν = 0 n 2 of the differential expression (1).
In Section 4.1, the general approach to obtaining reconstruction formulas is described. However, for certain classes of the coefficients { τ ν } ν = 0 n 2 , the convergence of the obtained series has to be studied in the corresponding spaces. Therefore, in Section 4.2, we prove an auxiliary lemma on the series convergence. In Section 4.3, Section 4.4 and Section 4.5, we study the three classes of operators:
(i)
n = 3 , τ 0 ∈ W 2 − 1 ( 0 , 1 ) , τ 1 ∈ L 2 ( 0 , 1 ) ;
(ii)
n is even, τ ν ∈ L 2 ( 0 , 1 ) , ν = 0 , n − 2 ¯ ;
(iii)
n is even, τ ν ∈ W 2 − 1 ( 0 , 1 ) , ν = 0 , n − 2 ¯ .
For each case, we provide the uniqueness theorem for the inverse problem solution in an appropriate statement, obtain reconstruction formulas, and prove the convergence of the corresponding series, thus obtaining constructive algorithms for solving Problem 2. For the cases (ii) and (iii), we recover the coefficients τ n − 2 , τ n − 3 , …, τ 1 , τ 0 one-by-one in order to achieve the convergence estimates for the corresponding series. The even order in (ii) and (iii) is considered for definiteness. Similar ideas can be applied to odd-order differential operators. For simplicity, in all three cases, we choose boundary conditions whose coefficients cannot be uniquely recovered from the spectral data, and so we do not consider their reconstruction. However, for other types of boundary conditions, the recovery of their coefficients can also be studied similarly to the regular case (see Lemma 2.3.7 in [26]).
Let us introduce some notations used throughout this section. Note that the collection { λ l , k , ε } ( l , k , ε ) ∈ V may contain multiple eigenvalues for a fixed ε ∈ { 0 , 1 } : λ l , k , ε = λ l 0 , k 0 , ε , ( l , k ) ≠ ( l 0 , k 0 ) . In order to exclude such values, we define the set
V ′ : = { ( l , k , ε ) ∈ V : ¬ ∃ ( l 0 , k 0 , ε ) ∈ V s . t . ( l 0 , k 0 ) < ( l , k ) and λ l 0 , k 0 , ε = λ l , k , ε } .
In this section, we use the following notations for an index v = ( l , k , ε ) ∈ V ′ :
λ v : = λ l , k , ε , ϕ v ( x ) : = ϕ 0 ( x , λ v ) , P ˜ v ( x , λ ) : = ( − 1 ) ε N ε ( λ v ) D ˜ 0 ( x , λ v , λ ) ,
c v : = ( − 1 ) ε N ε ( λ v ) J 0 1 , g ˜ v ( x ) : = [ ϕ ˜ 0 🟉 ( x , λ v ) ] T .
Additionally, define the scalar functions
η ˜ l , k , ε ( x ) : = ( − 1 ) ε e k + 1 T N ε ( λ l , k , ε ) J 0 1 [ ϕ ˜ 0 🟉 ( x , λ l , k , ε ) ] T , ( l , k , ε ) ∈ V .

4.1. General Approach

In terms of the notations (83), the relation (48) can be rewritten as
ϕ ( x , λ ) = ϕ ˜ ( x , λ ) + ∑ v ϕ v ( x ) P ˜ v ( x , λ ) .
Formal calculations show that
ℓ n ( ϕ ( x , λ ) ) = ℓ n ( ϕ ˜ ( x , λ ) ) + ∑ v ℓ n ( ϕ v ( x ) P ˜ v ( x , λ ) ) .
Recall that
ℓ n ( ϕ ( x , λ ) ) = λ ϕ ( x , λ ) , ℓ ˜ n ( ϕ ˜ ( x , λ ) ) = λ ϕ ˜ ( x , λ ) ,
and, by virtue of (31),
ℓ n ( ϕ v ( x ) ) = λ v ϕ v ( x ) + ϕ v ( x ) N 0 ( λ v ) .
Define ℓ ^ n ( y ) : = ℓ n ( y ) − ℓ ˜ n ( y ) . Consequently,
λ ( ϕ ( x , λ ) − ϕ ˜ ( x , λ ) ) − ∑ v ℓ n ( ϕ v ( x ) ) P ˜ v ( x , λ ) = ∑ v ϕ v ( x ) [ ( λ − λ v ) I − N 0 ( λ v ) ] P ˜ v ( x , λ ) = ℓ ^ n ( ϕ ˜ ( x , λ ) ) + ∑ v ℓ n ( ϕ v ( x ) P ˜ v ( x , λ ) ) − ∑ v ℓ n ( ϕ v ( x ) ) P ˜ v ( x , λ ) .
Using (83) and (42), we derive
[ ( λ λ v ) N 0 ( λ v ) ] P ˜ v ( x , λ ) = ( 1 ) ε N ε ( λ v ) J 0 1 [ ϕ ˜ v 🟉 ( x ) ] T , ϕ ˜ ( x , λ ) + ( 1 ) ε + 1 [ N ε ( λ v ) N 1 ( λ v ) + N 0 ( λ v ) N ε ( λ v ) ] D ˜ 0 ( x , λ 0 , λ ) .
The summation yields
∑ v ϕ v ( x ) [ ( λ − λ v ) I − N 0 ( λ v ) ] P ˜ v ( x , λ ) = ∑ v ϕ v ( x ) c v g ˜ v ( x ) , ϕ ˜ ( x , λ ) ,
where c v and g ˜ v ( x ) are defined by (84). Combining (86) and (87) together, we obtain
∑ v ϕ v ( x ) c v g ˜ v ( x ) , ϕ ˜ ( x , λ ) = ℓ ^ n ( ϕ ˜ ( x , λ ) ) + ∑ v ℓ n ( ϕ v ( x ) P ˜ v ( x , λ ) ) − ∑ v ℓ n ( ϕ v ( x ) ) P ˜ v ( x , λ ) .
Suppose that the differential expression y [ n ] = ℓ n ( y ) has the form (1). Then, ℓ n ( y ) can be formally represented as
ℓ n ( y ) = y ( n ) + ∑ s = 0 n − 2 p s ( x ) y ( s ) ,
where
p s = k = s / 2 min { s , n / 2 1 } C k s k [ τ 2 k ( 2 k s ) + τ 2 k + 1 ( 2 k s + 1 ) ] + k = ( s 1 ) / 2 min { s , ( n 1 ) / 2 } 1 2 C k s k 1 τ 2 k + 1 ( 2 k + 1 s ) .
(We assume that τ n − 1 ( x ) ≡ 0 .) Suppose that ℓ ˜ n ( y ) has a form similar to (89) with the coefficients p ˜ s ( x ) , so
ℓ ^ n ( y ) : = ∑ s = 0 n − 2 p ^ s ( x ) y ( s ) , p ^ s : = p s − p ˜ s .
Using (89), we derive
∑ v ℓ n ( ϕ v P ˜ v ) = ∑ v ℓ n ( ϕ v ) P ˜ v + ∑ k = 1 n C n k ∑ v ϕ v ( n − k ) P ˜ v ( k ) + ∑ k = 1 n − 2 p k ∑ r = 1 k C k r ∑ v ϕ v ( k − r ) P ˜ v ( r ) .
The relations (43) and (83) imply
P ˜ v ′ ( x , λ ) = c v g ˜ v ( x ) ϕ ˜ ( x , λ ) .
Substituting (92) into (93) and grouping the terms at ϕ ˜ ( s ) ( x , λ ) , we obtain
∑ v ( ℓ n ( ϕ v P ˜ v ) − ℓ n ( ϕ v ) P ˜ v ) = ∑ s = 0 n − 1 t n , s ϕ ˜ ( s ) + ∑ s = 0 n − 3 ∑ k = s + 1 n − 2 p k t k , s ϕ ˜ ( s ) ,
where
t k , s ( x ) : = ∑ r = s k − 1 C k r + 1 C r s T k − r − 1 , r − s ( x ) , T j 1 , j 2 ( x ) : = ∑ v ϕ v ( j 1 ) ( x ) c v g ˜ v ( j 2 ) ( x ) .
Combining (88), (91), and (94) all together, we arrive at the relation
∑ v ϕ v ( x ) c v g ˜ v ( x ) , ϕ ˜ ( x , λ ) = ∑ s = 0 n − 2 p ^ s ( x ) ϕ ˜ ( s ) ( x , λ ) + ∑ s = 0 n − 1 t n , s ( x ) ϕ ˜ ( s ) ( x , λ ) + ∑ s = 0 n − 3 ∑ k = s + 1 n − 2 p k ( x ) t k , s ( x ) ϕ ˜ ( s ) ( x , λ ) .
For definiteness, suppose that p ˜ s ( x ) = 0 , s = 0 , n − 2 ¯ . Then, y [ s ] = y ( s ) , s = 0 , n ¯ , for the problem L ˜ , and so
g ˜ v ( x ) , ϕ ˜ ( x , λ ) = ∑ s = 0 n − 1 ( − 1 ) n − s − 1 g ˜ v ( n − s − 1 ) ( x ) ϕ ˜ ( s ) ( x , λ ) .
Therefore, comparing the coefficients of ϕ ˜ ( s ) ( x , λ ) , we obtain the formulas for finding the coefficients
p s = ( − 1 ) n − s − 1 ∑ v ϕ v ( x ) c v g ˜ v ( n − s − 1 ) ( x ) − t n , s ( x ) − ∑ k = s + 1 n − 2 p k ( x ) t k , s ( x ) ,
where s = n − 2 , n − 3 , … , 1 , 0 . These formulas coincide with the ones for the regular case (see [26], Lemma 2.3.7).
Using the relations (90) and (97), one can find τ ν for ν = n − 2 , n − 3 , … , 1 , 0 . However, the Formulas (97) have been obtained by formal calculations. They can be used for reconstruction if the coefficients { τ ν } ν = 0 n − 2 are so smooth that the series in (95) and (97) converge. If the coefficients { τ ν } ν = 0 n − 2 are nonsmooth or even distributional, then the convergence of these series is a nontrivial question, which should be investigated separately for different classes of operators. For some classes, this question is considered in Section 4.3, Section 4.4 and Section 4.5.

4.2. Series Convergence

In this subsection, we prove the following auxiliary lemma.
Lemma 8.
Suppose that j 1 , j 2 ∈ { 0 , 1 , … , n − 1 } and { l j 1 + j 2 ξ l } ∈ l 2 . Then, there exist constants { A v } v ∈ V such that the series
∑ v ( ϕ v [ j 1 ] ( x ) c v g ˜ v [ j 2 ] ( x ) − A v )
converges in L 2 ( 0 , 1 ) . Moreover, if { l j 1 + j 2 ξ l } ∈ l 1 , then the series
∑ v ϕ v [ j 1 ] ( x ) c v g ˜ v [ j 2 ] ( x )
converges absolutely and uniformly on [ 0 , 1 ] .
Here and below, the quasi-derivatives for ϕ v ( x ) are generated by the matrix F ( x ) , and those for g ˜ v ( x ) by F ˜ 🟉 ( x ) . In order to prove Lemma 8, we need to formulate preliminary propositions.
Consider the sector Γ 1 : = { ρ ∈ C : 0 < arg ρ < π / n } . Denote by { ω k } k = 1 n the roots of the equation ω n = 1 , numbered so that
R e ( ρ ω 1 ) < R e ( ρ ω 2 ) < ⋯ < R e ( ρ ω n ) , ρ ∈ Γ 1 .
In addition, define the extended sector
Γ 1 , h : = { ρ ∈ C : ρ + h exp ( i π / ( 2 n ) ) ∈ Γ 1 } , h > 0 .
In the proof of Lemma 8, we need the following proposition on the Birkhoff-type solutions of Equation (4) with certain asymptotic behavior as | ρ | .
Proposition 1
([2]). For some ρ * > 0 , Equation (4) has a fundamental system of solutions { y k ( x , ρ ) } k = 1 n whose quasi-derivatives y k [ j ] ( x , ρ ) , k = 1 , n ¯ , j = 0 , n − 1 ¯ , are continuous for x ∈ [ 0 , 1 ] , ρ ∈ Γ ¯ 1 , h , | ρ | ≥ ρ * , analytic in ρ ∈ Γ 1 , h , | ρ | > ρ * for each fixed x ∈ [ 0 , 1 ] , and satisfy the relation
y k [ j ] ( x , ρ ) = ( ρ ω k ) j exp ( ρ ω k x ) ( 1 + ζ j k ( x , ρ ) ) ,
where
max j , k , x | ζ j k ( x , ρ ) | ≤ C ( Υ ( ρ ) + 1 / | ρ | ) , ρ ∈ Γ ¯ 1 , h , | ρ | ≥ ρ * ,
and Υ ( ρ ) fulfills the condition { Υ ( ρ l ) } ∈ l 2 for any noncondensing sequence { ρ l } ⊂ Γ 1 , h .
Consider the strip S 0 defined by (52). Clearly, for a suitable choice of h and c, we have S 0 ⊂ Γ 1 , h and λ l , k , ε = ρ l , k , ε n with ρ l , k , ε ∈ S 0 for even ( n − k ) and sufficiently large l. Furthermore, in this section, we confine ourselves to considering even ( n − k ) , since the case of odd ( n − k ) is similar.
Proposition 2.
Suppose that k ∈ { 1 , 2 , … , n − 1 } and ( n − k ) is even. Then, the Weyl solution can be expanded as
Φ k + 1 ( x , λ ) = ∑ s = 1 n b s , k + 1 ( ρ ) y s ( x , ρ ) , λ = ρ n , ρ ∈ S 0 ,
where the coefficients b s , k + 1 ( ρ ) are analytic in ρ ∈ S 0 , | ρ | ≥ ρ * , and fulfill the estimate
b s , k + 1 ( ρ ) = O ( ρ p k + 1 , 0 ) exp ( ρ ( ω k + 1 − ω s ) ) for s > k + 1 , and b s , k + 1 ( ρ ) = O ( ρ p k + 1 , 0 ) otherwise.
Proof. 
The properties of the coefficients b s , k + 1 ( ρ ) follow from the explicit formulas for these coefficients obtained in the proof of Lemma 3 in [9].    □
Proposition 3.
Let z be a nonzero complex number with R e z ≥ 0 , and let { ϰ l } l ≥ 1 ∈ l 2 . Then, the series ∑ l ≥ 1 ϰ l exp ( − z l x ) converges in L 2 ( 0 , 1 ) .
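A quick numerical illustration of Proposition 3 (a sketch only: the values of z and ϰ l are arbitrary choices, the series is truncated, and the L 2 ( 0 , 1 ) norm is approximated on a uniform grid):

```python
import cmath

# Tail of the series sum_l kappa_l * exp(-z*l*x) in an approximate L2(0,1) norm.
z = 1.0 + 2.0j                              # Re z >= 0, as in the proposition
kappa = [1.0 / l for l in range(1, 401)]    # an l2 (but not l1) sequence, truncated

xs = [(i + 0.5) / 200 for i in range(200)]  # midpoint grid on (0, 1)

def tail_l2(N):
    vals = [sum(k * cmath.exp(-z * l * x)
                for l, k in enumerate(kappa[N - 1:], start=N))
            for x in xs]
    return (sum(abs(v) ** 2 for v in vals) / len(xs)) ** 0.5

assert tail_l2(200) < tail_l2(50) < tail_l2(1)  # the tails shrink in L2
```

Note that the sequence 1 / l belongs to l 2 but not to l 1 , so the series does not converge absolutely at x = 0 , while its tails still shrink in the L 2 ( 0 , 1 ) norm.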
Proof of Lemma 8. 
Let j 1 , j 2 ∈ { 0 , 1 , … , n − 1 } be fixed. In order to prove the convergence of the series (97) and (98), it is sufficient to consider their terms for v = ( l , k , ε ) with sufficiently large l. For technical simplicity, let us assume that λ l 1 , k 1 , ε ≠ λ l 2 , k 2 , ε for any sufficiently large l 1 , l 2 such that l 1 ≠ l 2 . In view of Corollary 1, we have
v : l is fixed ϕ v [ j 1 ] ( x ) c v g ˜ v [ j 2 ] ( x ) = k = 1 n 1 ( 1 ) n 1 p k , 0 Z l , k ( x ) , Z l , k ( x ) : = ε = 0 , 1 ( 1 ) ε β l , k , ε φ l , k , ε [ j 1 ] ( x ) φ ˜ l , n k , ε 🟉 [ j 2 ] ( x ) ,
where
φ l , k , ε [ j 1 ] ( x ) = Φ k + 1 [ j 1 ] ( x , λ l , k , ε ) , φ ˜ l , n k , ε 🟉 [ j 2 ] ( x ) = Φ ˜ n k + 1 🟉 [ j 2 ] ( x , λ l , k , ε ) , β l , k , 0 : = β l , k , β l , k , 1 : = β ˜ l , k ,
Fix k ∈ { 1 , 2 , … , n − 1 } such that ( n − k ) is even. Then, by Proposition 2, we have
Φ k + 1 [ j 1 ] ( x , λ l , k , ε ) = s 1 = 1 n b s 1 , k + 1 ( ρ l , k , ε ) y s 1 [ j 1 ] ( x , ρ l , k , ε ) , Φ ˜ n k + 1 🟉 [ j 2 ] ( x , λ l , k , ε ) = s 2 = 1 n b ˜ n s 2 + 1 , n k + 1 🟉 ( ρ l , k , ε ) y ˜ n s 2 + 1 🟉 [ j 2 ] ( x , ρ l , k , ε ) .
Using the above relations and Proposition 1, we obtain
Z l , k ( x ) = s 1 = 1 n s 2 = 1 n Z l , k , s 1 , s 2 ( x ) , Z l , k , s 1 , s 2 ( x ) = ε = 0 , 1 α l , k , s 1 , s 2 , ε exp ( ρ l , k , ε ( ω s 1 ω s 2 ) x ) ( 1 + ζ s 1 , j 1 ( x , ρ l , k , ε ) ) ( 1 + ζ ˜ n s 2 + 1 , j 2 🟉 ( x , ρ l , k , ε ) ) , α l , k , s 1 , s 2 , ε : = β l , k , ε b s 1 , k + 1 ( ρ l , k , ε ) b n s 2 + 1 , n k + 1 🟉 ( ρ l , k , ε ) ( ω s 1 ) j 1 ( ω s 2 ) j 2 ρ l , k , ε j 1 + j 2 .
Consider the sums
Z l , k , s 1 , s 2 ( x ) = Z l , k , s 1 , s 2 1 ( x ) + Z l , k , s 1 , s 2 2 ( x ) + Z l , k , s 1 , s 2 3 ( x ) + Z l , k , s 1 , s 2 4 ( x ) , Z l , k , s 1 , s 2 1 ( x ) : = ε = 0 , 1 α l , k , s 1 , s 2 , ε exp ( ρ l , k , ε ( ω s 1 ω s 2 ) x ) , Z l , k , s 1 , s 2 2 ( x ) : = ε = 0 , 1 α l , k , s 1 , s 2 , ε exp ( ρ l , k , ε ( ω s 1 ω s 2 ) x ) ζ s 1 , j 1 ( x , ρ l , k , ε ) , Z l , k , s 1 , s 2 3 ( x ) : = ε = 0 , 1 α l , k , s 1 , s 2 , ε exp ( ρ l , k , ε ( ω s 1 ω s 2 ) x ) ζ ˜ n s 2 + 1 , j 2 🟉 ( x , ρ l , k , ε ) , Z l , k , s 1 , s 2 4 ( x ) : = ε = 0 , 1 α l , k , s 1 , s 2 , ε exp ( ρ l , k , ε ( ω s 1 ω s 2 ) x ) ζ s 1 , j 1 ( x , ρ l , k , ε ) ζ ˜ n s 2 + 1 , j 2 🟉 ( x , ρ l , k , ε ) .
Thus, it is sufficient to study the convergence of the series l l 0 Z l , k , s 1 , s 2 ν ( x ) for fixed k, s 1 , s 2 , and ν = 1 , 4 ¯ .
The asymptotics (27) and (37) imply
| ρ l , k , ε | ≤ C l , | β l , k , ε | ≤ C l n − 1 + p k + 1 , 0 − p k , 0 .
Using (103) together with the estimates (100) and (104), we obtain
| α l , k , s 1 , s 2 , ε | ≤ C l j 1 + j 2 , multiplied by exp ( R e ( ω k + 1 − ω s 1 ) r k l ) if s 1 > k + 1 and by exp ( R e ( ω s 2 − ω k ) r k l ) if s 2 < k ,
where r k : = π sin ( π k / n ) . The relation (72) yields
| ρ l , k , 0 − ρ l , k , 1 | ≤ C ξ l , | β l , k , 0 − β l , k , 1 | ≤ C ξ l l n − 1 + p k + 1 , 0 − p k , 0 .
Since the functions b s , k + 1 ( ρ ) are analytic and satisfy (100), we obtain
| b s , k + 1 ( ρ l , k , 0 ) − b s , k + 1 ( ρ l , k , 1 ) | ≤ C ξ l l p k + 1 , 0 , multiplied by exp ( R e ( ω k + 1 − ω s ) r k l ) if s > k + 1 .
It follows from (103) that
α l , k , s 1 , s 2 , 0 α l , k , s 1 , s 2 , 1 = ( β l , k , 0 β l , k , 1 ) b s 1 , k + 1 ( ρ l , k , 0 ) b n s 2 + 1 , n k + 1 🟉 ( ρ l , k , 0 ) ( ω s 1 ) j 1 ( ω s 2 ) j 2 ρ l , k , 0 j 1 + j 2 + β l , k , 1 ( b s 1 , k + 1 ( ρ l , k , 0 ) b s 1 , k + 1 ( ρ l , k , 1 ) ) b n s 2 + 1 , n k + 1 🟉 ( ρ l , k , 0 ) ( ω s 1 ) j 1 ( ω s 2 ) j 2 ρ l , k , 0 j 1 + j 2 + β l , k , 1 b s 1 , k + 1 ( ρ l , k , 1 ) ( b n s 2 + 1 , n k + 1 🟉 ( ρ l , k , 0 ) b n s 2 + 1 , n k + 1 🟉 ( ρ l , k , 1 ) ) ( ω s 1 ) j 1 ( ω s 2 ) j 2 ρ l , k , 0 j 1 + j 2 + β l , k , 1 b s 1 , k + 1 ( ρ l , k , 1 ) b n s 2 + 1 , n k + 1 🟉 ( ρ l , k , 1 ) ( ω s 1 ) j 1 ( ω s 2 ) j 2 ( ρ l , k , 0 j 1 + j 2 ρ l , k , 1 j 1 + j 2 ) .
Consequently, we estimate
$$|\alpha_{l,k,s_1,s_2,0}-\alpha_{l,k,s_1,s_2,1}| \le C l^{\,j_1+j_2}\xi_l \cdot \begin{cases} \exp(\operatorname{Re}(\omega_{k+1}-\omega_{s_1}) r_k l), & s_1 > k+1, \\ 1, & s_1 \le k+1, \end{cases} \cdot \begin{cases} \exp(\operatorname{Re}(\omega_{s_2}-\omega_{k}) r_k l), & s_2 < k, \\ 1, & s_2 \ge k. \end{cases}$$
Suppose that $\{l^{\,j_1+j_2}\xi_l\} \in l_2$. Consider the following cases:
  • If $s_1 = s_2 \notin \{k, k+1\}$, then the terms of the series $\sum_{l \ge l_0} Z^{1}_{l,k,s_1,s_2}(x)$ decay exponentially, so the series converges absolutely.
  • If $s_1 = s_2 \in \{k, k+1\}$, then the series $\sum_{l \ge l_0}(\alpha_{l,k,s_1,s_2,0} - \alpha_{l,k,s_1,s_2,1})$ does not necessarily converge.
  • If $s_1 \ne s_2$, then
    $$Z^{1}_{l,k,s_1,s_2}(x) = \Bigl((\alpha_{l,k,s_1,s_2,0}-\alpha_{l,k,s_1,s_2,1}) + \alpha_{l,k,s_1,s_2,1}\bigl[(\rho_{l,k,0}-\rho_{l,k,1})(\omega_{s_1}-\omega_{s_2})x + O(\xi_l^2)\bigr]\Bigr)\exp(\rho_{l,k,0}(\omega_{s_1}-\omega_{s_2})x).$$
    Consequently, the series $\sum_{l \ge l_0} Z^{1}_{l,k,s_1,s_2}(x)$ converges in $L_2(0,1)$ by virtue of Proposition 3.
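The first-order expansion used in the third case can be checked numerically: for two close exponents the difference of exponentials is captured by the linear term up to $O(\xi_l^2)$. A minimal sketch (all numerical values below are illustrative and not taken from the paper):

```python
import numpy as np

# Illustrative values: two close "spectral parameters" rho0, rho1 with
# |rho0 - rho1| = xi, and a fixed direction d playing the role of
# omega_{s1} - omega_{s2} (hypothetical numbers).
rho0 = 10.0 + 0.5j
rho1 = rho0 + 1e-3
d = 0.3 - 0.8j
a0, a1 = 2.0 + 1.0j, 2.1 + 0.9j   # stand-ins for alpha_{...,0}, alpha_{...,1}
x = np.linspace(0, 1, 201)

exact = a0 * np.exp(rho0 * d * x) - a1 * np.exp(rho1 * d * x)
# Expansion: ((a0 - a1) + a1 * (rho0 - rho1) * d * x) * exp(rho0 * d * x)
approx = ((a0 - a1) + a1 * (rho0 - rho1) * d * x) * np.exp(rho0 * d * x)

xi = abs(rho0 - rho1)
err = np.max(np.abs(exact - approx))
print(err, xi**2)  # err is of order xi^2
```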
Using Proposition 1, we show that
$$|\zeta_{s_1,j_1}(x,\rho_{l,k,\varepsilon})| \le C\bigl(\Upsilon(\rho_{l,k,\varepsilon}) + l^{-1}\bigr), \qquad |\zeta_{s_1,j_1}(x,\rho_{l,k,0})-\zeta_{s_1,j_1}(x,\rho_{l,k,1})| \le C\xi_l\bigl(\Upsilon(\rho^{*}_{l,k,0}) + l^{-1}\bigr),$$
where $\Upsilon(\rho^{*}_{l,k,0}) = \max_{|\rho-\rho_{l,k,0}|\le\delta}\Upsilon(\rho)$. Note that $\{\Upsilon(\rho^{*}_{l,k,0})\} \in l_2$. Consequently, the series $\sum_{l \ge l_0} Z^{2}_{l,k,s_1,s_2}(x)$ converges absolutely and uniformly on $[0,1]$. The proof for $Z^3$ and $Z^4$ is analogous. Thus, the regularized series $\sum_{l \ge l_0}(Z_{l,k}(x) - A_{l,k})$ converges in $L_2(0,1)$ with the constants
$$A_{l,k} = \sum_{s=k,k+1}(\alpha_{l,k,s,s,0}-\alpha_{l,k,s,s,1}).$$
Using the arguments above, we obtain the estimate
$$|Z_{l,k}(x)| \le C l^{\,j_1+j_2}\xi_l.$$
Hence, in the case $\{l^{\,j_1+j_2}\xi_l\} \in l_1$, the series $\sum_{l \ge l_0} Z_{l,k}(x)$ converges absolutely and uniformly with respect to $x \in [0,1]$. Taking (101) into account, we arrive at the assertion of the lemma.    □
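The dichotomy exploited in the lemma — $l_2$-coefficients yield $L_2$-convergence of an oscillating series, while $l_1$-coefficients yield absolute and uniform convergence — can be illustrated with a toy Fourier series (the coefficients below are illustrative stand-ins, not the actual weighted sequences of the paper):

```python
import numpy as np

# Coefficients {c_l} belonging to l_2 but not to l_1.
N = 10_000
c = 1.0 / np.arange(1, N + 1)

# By Parseval, the L2(0,1)-norm of the tail sum_{l>s} c_l exp(2 pi i l x)
# equals the l_2-norm of the tail coefficients, which tends to zero,
# so the partial sums are a Cauchy sequence in L2(0,1):
tail_l2 = [np.sqrt(np.sum(c[s:] ** 2)) for s in (100, 1000, 9000)]
print(tail_l2)  # decreasing toward 0

# Absolute/uniform convergence would require {c_l} in l_1, which fails:
print(c.sum())  # partial sums of |c_l| grow like log N
```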

4.3. Case n = 3

Consider the differential expression
$$\ell_3(y) = y''' + (\tau_1(x) y)' + \tau_1(x) y' + \tau_0(x) y, \quad x \in (0,1),$$
where $\tau_1 \in L_2(0,1)$ and $\tau_0 \in W_2^{-1}(0,1)$, that is, $\tau_0 = \sigma_0'$, $\sigma_0 \in L_2(0,1)$. The associated matrix has the form (see, e.g., [47]):
$$F(x) = \begin{pmatrix} 0 & 1 & 0 \\ -(\sigma_0+\tau_1) & 0 & 1 \\ 0 & \sigma_0-\tau_1 & 0 \end{pmatrix}, \tag{105}$$
so $y^{[1]} = y'$, $y^{[2]} = y'' + (\sigma_0+\tau_1)y$, $y^{[3]} = \ell_3(y)$.
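The quasi-derivative relations can be verified symbolically: with $y^{[1]} = y'$ and $y^{[2]} = y'' + (\sigma_0+\tau_1)y$, differentiating $y^{[2]}$ and subtracting $(\sigma_0-\tau_1)y^{[1]}$ must reproduce $\ell_3(y)$. A quick check (assuming smooth $\sigma_0$, $\tau_1$ so that the symbolic computation makes sense):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)
tau1 = sp.Function('tau1')(x)
sigma0 = sp.Function('sigma0')(x)   # tau_0 = sigma_0'

# Quasi-derivatives for the regularization of the third-order expression:
y1 = sp.diff(y, x)                            # y^[1]
y2 = sp.diff(y, x, 2) + (sigma0 + tau1) * y   # y^[2]

# y^[3] := (y^[2])' - (sigma0 - tau1) * y^[1] should equal l_3(y)
y3 = sp.diff(y2, x) - (sigma0 - tau1) * y1
l3 = (sp.diff(y, x, 3) + sp.diff(tau1 * y, x)
      + tau1 * sp.diff(y, x) + sp.diff(sigma0, x) * y)

print(sp.simplify(y3 - l3))  # 0
```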
Suppose that $p_{s,0} = s-1$, $p_{s,1} = 3-s$, $s = \overline{1,3}$, in the linear forms (8). Using the technique of [54], we obtain the eigenvalue asymptotics
$$\lambda_{l,k} = (-1)^{k+1}\left(\frac{2\pi}{\sqrt{3}}\left(l + \frac{1}{6} + \frac{(-1)^{k}}{\pi^{2} l}\int_0^1 \tau_1(t)\,dt + \frac{\varkappa_{l,k}}{l}\right)\right)^{3}, \quad \{\varkappa_{l,k}\} \in l_2, \quad l \ge 1, \; k = 1,2. \tag{106}$$
Assume that $L \in W$. It can be easily shown that, if $\Lambda_1 \cap \Lambda_2 = \varnothing$, then the spectral data $\{\lambda_0, N(\lambda_0)\}_{\lambda_0 \in \Lambda}$ do not depend on the boundary condition coefficients $u_{s,j,a}$. Therefore, let us assume that $U_0 = I$, $U_1 = [\delta_{k,4-j}]_{k,j=1}^{3}$. Consider the following inverse problem.
Consider the problems $L = (F(x), U_0, U_1) \in W$ and $\tilde L = (\tilde F(x), U_0, U_1) \in W$, where $\tilde F(x)$ is the matrix function associated with the differential expression $\tilde\ell_3(y)$ having the coefficients $\tilde\tau_1 \in L_2(0,1)$ and $\tilde\tau_0 = \tilde\sigma_0' \in W_2^{-1}(0,1)$. Under the above assumptions, the following uniqueness theorem for the solution of Problem 2 is valid.
Theorem 2.
If $\Lambda = \tilde\Lambda$ and $N(\lambda_0) = \tilde N(\lambda_0)$ for all $\lambda_0 \in \Lambda$, then $\tau_1(x) = \tilde\tau_1(x)$ and $\sigma_0(x) = \tilde\sigma_0(x) + \mathrm{const}$ a.e. on $(0,1)$. Thus, the spectral data $\{\lambda_0, N(\lambda_0)\}_{\lambda_0 \in \Lambda}$ uniquely specify $\tau_1 \in L_2(0,1)$ and $\tau_0 \in W_2^{-1}(0,1)$.
In order to prove Theorem 2, we need the following auxiliary lemma, which is valid for n not necessarily equal to 3.
Lemma 9.
If $L, \tilde L \in W$, $\Lambda = \tilde\Lambda$, and $N(\lambda_0) = \tilde N(\lambda_0)$ for all $\lambda_0 \in \Lambda$, then the matrix of spectral mappings $P(x,\lambda)$ defined by (50) does not depend on $\lambda$.
Proof. 
It follows from (25) and (50) that
$$P(x,\lambda) = \Phi(x,\lambda)\, J_0^{-1}\, [\tilde\Phi^{\star}(x,\lambda)]^{T} J.$$
Using (29) and (30), we derive for $\lambda_0 \in \Lambda$:
$$\begin{aligned} P_2(x,\lambda_0) &= \Phi_1(x,\lambda_0)\, J_0^{-1}\, [\tilde\Phi_1^{\star}(x,\lambda_0)]^{T} = \Phi_0(x,\lambda_0)\, N(\lambda_0)\, J_0^{-1}\, [N^{\star}(\lambda_0)]^{T}\, [\tilde\Phi_0^{\star}(x,\lambda_0)]^{T} = 0, \\ P_1(x,\lambda_0) &= \Phi_1(x,\lambda_0)\, J_0^{-1}\, [\tilde\Phi_0^{\star}(x,\lambda_0)]^{T} + \Phi_0(x,\lambda_0)\, J_0^{-1}\, [\tilde\Phi_1^{\star}(x,\lambda_0)]^{T} = \Phi_0(x,\lambda_0)\bigl(N(\lambda_0) J_0^{-1} + J_0^{-1}[N^{\star}(\lambda_0)]^{T}\bigr)[\tilde\Phi_0^{\star}(x,\lambda_0)]^{T} = 0, \end{aligned}$$
where $P_2(x,\lambda_0)$ and $P_1(x,\lambda_0)$ denote the coefficients of $(\lambda-\lambda_0)^{-2}$ and $(\lambda-\lambda_0)^{-1}$, respectively, in the Laurent expansion of $P(x,\lambda)$ at $\lambda = \lambda_0$.
Hence, $P(x,\lambda)$ is entire in $\lambda$. Using the asymptotics (53) and Liouville's theorem, we conclude that $P(x,\lambda) \equiv P(x)$, $x \in [0,1]$.    □
Proof of Theorem 2. 
This proof is similar to the proof of Theorem 2 in [9], so we outline it briefly. By Lemma 9, $P(x,\lambda) \equiv P(x)$. Furthermore, $P(x)$ is a unit lower-triangular matrix. One can easily show that
$$P'(x) + P(x)\tilde F(x) = F(x) P(x), \quad x \in (0,1), \tag{107}$$
where the matrix functions $F(x)$ and $\tilde F(x)$ have the form (105). In the element-wise form, (107) implies $P_{2,1} = P_{3,2} = P_{3,1}' = 0$ and $P_{3,1} = \hat\sigma_0 \pm \hat\tau_1$. Hence, $\hat\tau_1 = 0$ and $\hat\sigma_0 = \mathrm{const}$ in $L_2(0,1)$, which concludes the proof.    □
Now, suppose that the spectral data $\{\lambda_0, N(\lambda_0)\}_{\lambda_0 \in \Lambda}$ of the problem $L = (F(x), U_0, U_1)$ are given. Using the asymptotics (106), one can find the number $\tilde\tau_1 := \int_0^1 \tau_1(t)\,dt$. Put
$$\tilde F(x) = \begin{pmatrix} 0 & 1 & 0 \\ -\tilde\tau_1 & 0 & 1 \\ 0 & -\tilde\tau_1 & 0 \end{pmatrix}, \tag{108}$$
and $\tilde L := (\tilde F(x), U_0, U_1)$. Clearly, $\tilde F^{\star}(x) = \tilde F(x)$. Consequently, in our case,
$$\langle \tilde g_v, \tilde\phi \rangle = \tilde g_v''\tilde\phi - \tilde g_v'\tilde\phi' + \tilde g_v\tilde\phi'' + 2\tilde\tau_1 \tilde g_v\tilde\phi.$$
Hence, the relation (88) takes the form
$$T_{0,0}\tilde\phi'' - T_{0,1}\tilde\phi' + (T_{0,2} + 2\tilde\tau_1 T_{0,0})\tilde\phi = 2\hat\tau_1\tilde\phi' + (\hat\tau_1' + \hat\tau_0)\tilde\phi + T_{0,0}\tilde\phi'' + (3T_{1,0} + 2T_{0,1})\tilde\phi' + (3T_{2,0} + 3T_{1,1} + T_{0,2} + 2\tau_1 T_{0,0})\tilde\phi,$$
where the functions $T_{j_1,j_2}$ were defined in (95). Grouping the terms at $\tilde\phi'(x,\lambda)$ and $\tilde\phi(x,\lambda)$, we derive the formulas
$$\tau_1 = \tilde\tau_1 - \frac{3}{2}\sum_{v\in V}\bigl(\phi_v' c_v \tilde g_v + \phi_v c_v \tilde g_v'\bigr), \qquad \tau_0 = -\hat\tau_1' - 3\frac{d}{dx}\sum_{v\in V}\phi_v' c_v \tilde g_v - 2\hat\tau_1\sum_{v\in V}\phi_v c_v \tilde g_v.$$
By virtue of Corollary 1.3 and Theorem 6.4 from [54] and (72), we have $\{l\xi_l\} \in l_2$. Applying Lemma 8 to prove the series convergence in suitable spaces and using the notations (85), we arrive at the following reconstruction formulas for $\tau_1$ and $\tau_0$.
Theorem 3.
Let L and L ˜ be the problems defined above in this section. The following relations hold:
$$\tau_1 = \tilde\tau_1 - \frac{3}{2}\sum_{(l,k,\varepsilon)\in V}\bigl(\varphi_{l,k,\varepsilon}'\tilde\eta_{l,k,\varepsilon} + \varphi_{l,k,\varepsilon}\tilde\eta_{l,k,\varepsilon}'\bigr), \tag{109}$$
$$\tau_0 = -\hat\tau_1' - 3\frac{d}{dx}\sum_{(l,k,\varepsilon)\in V}\varphi_{l,k,\varepsilon}'\tilde\eta_{l,k,\varepsilon} - 2\hat\tau_1\sum_{(l,k,\varepsilon)\in V}\varphi_{l,k,\varepsilon}\tilde\eta_{l,k,\varepsilon}. \tag{110}$$
The series in (109) converges in $L_2(0,1)$. In (110), the series under the differentiation sign converges in $L_2(0,1)$ with regularization, and the second series converges absolutely and uniformly with respect to $x \in [0,1]$, so the right-hand side of (110) belongs to $W_2^{-1}(0,1)$.
Following the proof of Lemma 8, one can easily show that the regularization constants $A_v$ for the series in (109) equal zero. The regularization constants in (110) are omitted because of the differentiation. Finally, we arrive at the following Algorithm 2 for solving Problem 2.
Algorithm 2: Suppose that the spectral data $\{\lambda_0, N(\lambda_0)\}_{\lambda_0 \in \Lambda}$ of the problem $L = L(F(x), U_0, U_1) \in W$ are given. Here, $F(x)$ is defined by (105), $U_0 = I$, $U_1 = [\delta_{k,4-j}]_{k,j=1}^{3}$. We have to find $\tau_1$ and $\tau_0$.
  • Find $\tilde\tau_1 = \int_0^1 \tau_1(x)\,dx$ from the eigenvalue asymptotics (106).
  • Take the model problem $\tilde L = L(\tilde F(x), U_0, U_1)$, where $\tilde F(x)$ is defined by (108).
  • Implement the steps 2–6 of Algorithm 1 to obtain $\{\varphi_{l,k,\varepsilon}(x)\}_{(l,k,\varepsilon)\in V}$.
  • Using the problem $\tilde L$ and the spectral data $\{\lambda_0, N(\lambda_0)\}_{\lambda_0 \in \Lambda}$, $\{\tilde\lambda_0, \tilde N(\tilde\lambda_0)\}_{\tilde\lambda_0 \in \tilde\Lambda}$, construct the functions $\{\tilde\eta_{l,k,\varepsilon}(x)\}_{(l,k,\varepsilon)\in V}$ by (85).
  • Construct $\tau_1(x)$ and $\tau_0(x)$ by (109) and (110), respectively.
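The control flow of Algorithm 2 can be sketched as follows. All numerical ingredients (the extraction of the mean of $\tau_1$, the main-equation solver of Algorithm 1, and the functions $\varphi_{l,k,\varepsilon}$, $\tilde\eta_{l,k,\varepsilon}$) are represented by hypothetical stubs — this is a structural sketch only, not an implementation of the paper's method:

```python
import numpy as np

x = np.linspace(0, 1, 501)

def mean_tau1_from_asymptotics(eigenvalues):
    # Stub: in Algorithm 2, int_0^1 tau_1(t) dt is extracted from the
    # eigenvalue asymptotics (106). Placeholder value here.
    return 0.0

def solve_main_equation(spectral_data, model):
    # Stub for steps 2-6 of Algorithm 1: should return the pairs
    # (phi_{l,k,eps}, eta~_{l,k,eps}) on the grid. Dummy decaying pairs here.
    return [(np.sin((j + 1) * np.pi * x),
             np.cos((j + 1) * np.pi * x) / (j + 1) ** 2)
            for j in range(5)]

def reconstruct_tau1(spectral_data):
    tau1_mean = mean_tau1_from_asymptotics(spectral_data)
    pairs = solve_main_equation(spectral_data, model=tau1_mean)
    # Shape of formula (109): tau_1 = tau1_mean - (3/2) * sum(phi'*eta + phi*eta')
    acc = np.zeros_like(x)
    for phi, eta in pairs:
        acc += np.gradient(phi, x) * eta + phi * np.gradient(eta, x)
    return tau1_mean - 1.5 * acc

tau1 = reconstruct_tau1(spectral_data=None)
print(tau1.shape)
```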

4.4. Case of Even n, $\tau_\nu \in L_2(0,1)$

Consider the differential expression (1) with even $n$ and $\tau_\nu \in L_2(0,1)$, $\nu = \overline{0,n-2}$. The associated matrix $F(x) = [f_{k,j}(x)]_{k,j=1}^{n}$ is given by the relations
$$f_{n-k,k+1} = \tau_{2k}, \quad k = \overline{0, n/2-1}, \qquad f_{n-k-1,k+1} = f_{n-k,k+2} = \tau_{2k+1}, \quad k = \overline{0, n/2-2},$$
and all the other elements are defined by $f_{k,j} = \delta_{k,j-1}$. For instance,
$$\ell_6(y) = y^{(6)} + (\tau_4 y'')'' + \bigl[(\tau_3 y')'' + (\tau_3 y'')'\bigr] + (\tau_2 y')' + \bigl[(\tau_1 y)' + \tau_1 y'\bigr] + \tau_0 y,$$
    and the corresponding associated matrix is
$$F(x) = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & \tau_3 & \tau_4 & 0 & 1 & 0 \\ \tau_1 & \tau_2 & \tau_3 & 0 & 0 & 1 \\ \tau_0 & \tau_1 & 0 & 0 & 0 & 0 \end{pmatrix}.$$
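The filling rules are easy to mechanize. The sketch below builds the associated matrix for even $n$ from the rules $f_{n-k,k+1} = \tau_{2k}$, $f_{n-k-1,k+1} = f_{n-k,k+2} = \tau_{2k+1}$, $f_{k,j} = \delta_{k,j-1}$, using symbols for the coefficients, and reproduces the $n = 6$ pattern:

```python
import sympy as sp

def associated_matrix(n):
    # 1-based entry rules mapped to 0-based sympy indices.
    tau = sp.symbols(f'tau0:{n - 1}')        # tau_0, ..., tau_{n-2}
    F = sp.zeros(n, n)
    for k in range(1, n):
        F[k - 1, k] = 1                      # f_{k,k+1} = 1  (delta_{k,j-1})
    for k in range(n // 2):                  # k = 0, ..., n/2 - 1
        F[n - k - 1, k] = tau[2 * k]         # f_{n-k, k+1} = tau_{2k}
    for k in range(n // 2 - 1):              # k = 0, ..., n/2 - 2
        F[n - k - 2, k] = tau[2 * k + 1]     # f_{n-k-1, k+1} = tau_{2k+1}
        F[n - k - 1, k + 1] = tau[2 * k + 1] # f_{n-k, k+2}   = tau_{2k+1}
    return F, tau

F, tau = associated_matrix(6)
print(F)
```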
Suppose that $U_0 = I$, $U_1 = [\delta_{k,n-j+1}]_{k,j=1}^{n}$, $L = (F(x), U_0, U_1) \in W$, and $\tilde L = (\tilde F(x), U_0, U_1)$, where $\tilde F(x)$ is constructed in the same way as $F(x)$ by different coefficients $\tilde\tau_\nu \in L_2(0,1)$, $\nu = \overline{0,n-2}$. The following uniqueness theorem is proved similarly to Theorem 2.
    Theorem 4.
If $\Lambda = \tilde\Lambda$ and $N(\lambda_0) = \tilde N(\lambda_0)$ for all $\lambda_0 \in \Lambda$, then $\tau_\nu(x) = \tilde\tau_\nu(x)$ a.e. on $(0,1)$, $\nu = \overline{0,n-2}$. Thus, the spectral data $\{\lambda_0, N(\lambda_0)\}_{\lambda_0 \in \Lambda}$ uniquely specify $\tau_\nu \in L_2(0,1)$, $\nu = \overline{0,n-2}$.
Furthermore, we need the following proposition, which is an immediate corollary of Theorems 1.2 and 6.4 from [54] for the problems $L$, $\tilde L$ defined above in this subsection and the sequence $\{\xi_l\}$ defined by (71) (see also Example 5.2 in [54]).
    Proposition 4
([54]). Suppose that $\nu_0 \in \{1, 2, \ldots, n-1\}$, $\tau_\nu(x) = \tilde\tau_\nu(x)$ a.e. on $(0,1)$ for $\nu = \overline{\nu_0, n-2}$, and $\int_0^1 \hat\tau_{\nu_0-1}(x)\,dx = 0$. Then, $\{l^{\,n-\nu_0}\xi_l\} \in l_2$.
    We construct the solution of Problem 2 step-by-step.
Step 1. Take the model problem $\tilde L = \tilde L^{(1)} := (\tilde F^{(1)}(x), U_0, U_1)$, where $\tilde F^{(1)}(x)$ is the associated matrix for the differential expression $\tilde\ell_n^{(1)}(y)$ with the coefficients $\tilde\tau_{n-2} := \int_0^1 \tau_{n-2}(x)\,dx$, $\tilde\tau_\nu := 0$, $\nu = \overline{0,n-3}$. The coefficient $\int_0^1 \tau_{n-2}(x)\,dx$ can be found from the eigenvalue asymptotics similarly to the case in Section 4.3. Using the terms of (88) at $\tilde\phi^{(n-2)}(x,\lambda)$, we derive the reconstruction formula
$$\tau_{n-2} = \tilde\tau_{n-2} - \bigl(t_{n,n-2} + T_{0,1}\bigr) = \tilde\tau_{n-2} - n\sum_{v\in V}\bigl(\phi_v' c_v \tilde g_v + \phi_v c_v \tilde g_v'\bigr).$$
By virtue of Proposition 4, $\{l\xi_l\} \in l_2$. Therefore, Lemma 8 implies that the obtained series converges in $L_2(0,1)$ with the regularization constants $A_v = 0$.
Step 2. Take the model problem $\tilde L = \tilde L^{(2)} := (\tilde F^{(2)}(x), U_0, U_1)$, where $\tilde F^{(2)}(x)$ is the associated matrix for the differential expression $\tilde\ell_n^{(2)}(y)$ with the coefficients $\tilde\tau_{n-2} := \tau_{n-2}$, $\tilde\tau_{n-3} := \int_0^1 \tau_{n-3}(x)\,dx$, $\tilde\tau_\nu := 0$, $\nu = \overline{0,n-4}$. The coefficient $\int_0^1 \tau_{n-3}(x)\,dx$ can be found from the eigenvalue asymptotics. Using the terms of (88) at $\tilde\phi^{(n-2)}(x,\lambda)$, we show that $T_{0,0}'(x) = 0$. One can easily show that $T_{0,0}(0) = 0$, so $T_{0,0}(x) \equiv 0$. Consequently, grouping the terms of (88) at $\tilde\phi^{(n-3)}(x,\lambda)$, we obtain
$$2\tau_{n-3} = 2\tilde\tau_{n-3} - t_{n,n-3} + T_{0,2} = 2\tilde\tau_{n-3} - \sum_{v\in V}\left(\frac{n(n-1)}{2}\phi_v'' c_v \tilde g_v + n(n-2)\phi_v' c_v \tilde g_v' + \left[\frac{(n-1)(n-2)}{2} - 1\right]\phi_v c_v \tilde g_v''\right).$$
By virtue of Proposition 4, $\{l^2\xi_l\} \in l_2$. Lemma 8 implies that the series converges in $L_2(0,1)$ with the zero regularization constants.
Step s. Take the model problem $\tilde L = \tilde L^{(s)} := (\tilde F^{(s)}(x), U_0, U_1)$, where $\tilde F^{(s)}(x)$ is the associated matrix for the differential expression $\tilde\ell_n^{(s)}(y)$ with
$$\tilde\tau_\nu := \tau_\nu, \quad \nu = \overline{n-s, n-2}, \qquad \tilde\tau_{n-s-1} := \int_0^1 \tau_{n-s-1}(x)\,dx, \qquad \tilde\tau_\nu := 0, \quad \nu = \overline{0, n-s-2}. \tag{111}$$
For this model problem, we have $T_{j_1,j_2}(x) \equiv 0$ for all $j_1 + j_2 \le s - 2$. Grouping the terms of (88) at $\tilde\phi^{(n-s-1)}(x,\lambda)$, we obtain
$$\begin{aligned} \tau_{n-s-1} &= \tilde\tau_{n-s-1} - \bigl(t_{n,n-s-1} + (-1)^{s+1} T_{0,s}\bigr)\cdot\begin{cases}\tfrac{1}{2}, & s \text{ even},\\ 1, & s \text{ odd},\end{cases} \\ &= \tilde\tau_{n-s-1} - \sum_{v\in V}\Bigl(\sum_{r=n-s}^{n} C_n^{r} C_{r-1}^{n-s-1}\,\phi_v^{(n-r)} c_v \tilde g_v^{(r-n+s)} + (-1)^{s+1}\phi_v c_v \tilde g_v^{(s)}\Bigr)\cdot\begin{cases}\tfrac{1}{2}, & s \text{ even},\\ 1, & s \text{ odd},\end{cases} \\ &= \tilde\tau_{n-s-1} - \sum_{v\in V}\Bigl(\sum_{r=n-s}^{n} C_n^{r} C_{r-1}^{n-s-1}\,\phi_v^{[n-r]} c_v \tilde g_v^{[r-n+s]} + (-1)^{s+1}\phi_v c_v \tilde g_v^{[s]}\Bigr)\cdot\begin{cases}\tfrac{1}{2}, & s \text{ even},\\ 1, & s \text{ odd}.\end{cases} \end{aligned} \tag{112}$$
Proposition 4 implies that $\{l^s\xi_l\} \in l_2$. Therefore, it follows from Lemma 8 that the series in (112) converges in $L_2(0,1)$. The regularization constants equal zero because
$$\sum_{r=n-s}^{n} C_n^{r} C_{r-1}^{n-s-1}(-1)^{r} + (-1)^{s+1} = 0.$$
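The vanishing of the regularization constants rests on the combinatorial identity above, which holds for even $n$ (the standing assumption of this subsection). It can be checked directly:

```python
from math import comb

def identity_value(n, s):
    # sum_{r=n-s}^{n} C(n,r) * C(r-1, n-s-1) * (-1)^r + (-1)^(s+1)
    total = sum(comb(n, r) * comb(r - 1, n - s - 1) * (-1) ** r
                for r in range(n - s, n + 1))
    return total + (-1) ** (s + 1)

# Zero for all even n and s = 1, ..., n-1:
print([identity_value(n, s) for n in (4, 6, 8) for s in range(1, n)])
```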
Note that all the functions $\{\tau_\nu\}$ necessary for the computation of the quasi-derivatives $\phi_v^{[n-r]}$ in (112) are computed at the previous steps, so the formula (112) can be used for finding $\tau_{n-s-1}$. In terms of the notations (85), the relation (112) can be written as follows:
$$\tau_{n-s-1} = \tilde\tau_{n-s-1} - \sum_{(l,k,\varepsilon)\in V}\Bigl(\sum_{r=n-s}^{n} C_n^{r} C_{r-1}^{n-s-1}\,\varphi_{l,k,\varepsilon}^{[n-r]}\tilde\eta_{l,k,\varepsilon}^{[r-n+s]} + (-1)^{s+1}\varphi_{l,k,\varepsilon}\tilde\eta_{l,k,\varepsilon}^{[s]}\Bigr)\cdot\begin{cases}\tfrac{1}{2}, & s \text{ even},\\ 1, & s \text{ odd}.\end{cases} \tag{113}$$
    Thus, we obtain the following Algorithm 3 for solving Problem 2 in the considered case.
Algorithm 3: Suppose that the spectral data $\{\lambda_0, N(\lambda_0)\}_{\lambda_0 \in \Lambda}$ of the problem $L = (F(x), U_0, U_1) \in W$ are given. Here, $F(x)$ is the matrix associated with the differential expression $\ell_n(y)$, $n$ is even, $\tau_\nu \in L_2(0,1)$, $\nu = \overline{0,n-2}$, $U_0 = I$, and $U_1 = [\delta_{k,n-j+1}]_{k,j=1}^{n}$. We have to find $\{\tau_\nu\}_{\nu=0}^{n-2}$. For simplicity, assume that the values $\int_0^1 \tau_\nu(x)\,dx$ are known. In fact, they can be found from the eigenvalue asymptotics.
For $s = 1, 2, \ldots, n-1$, we find $\tau_{n-s-1}$ by implementing the following steps:
  • Take the model problem $\tilde L = \tilde L^{(s)} = (\tilde F^{(s)}, U_0, U_1)$ induced by the differential expression $\tilde\ell_n^{(s)}(y)$ with the coefficients (111).
  • Implement steps 2–6 of Algorithm 1 to find $\{\varphi_{l,k,\varepsilon}(x)\}_{(l,k,\varepsilon)\in V}$.
  • Using the problem $\tilde L$ and the spectral data $\{\lambda_0, N(\lambda_0)\}_{\lambda_0 \in \Lambda}$, $\{\tilde\lambda_0, \tilde N(\tilde\lambda_0)\}_{\tilde\lambda_0 \in \tilde\Lambda}$, construct the functions $\{\tilde\eta_{l,k,\varepsilon}(x)\}_{(l,k,\varepsilon)\in V}$ by (85).
  • Construct $\tau_{n-s-1}(x)$ by (113).

4.5. Case of Even n, $\tau_\nu \in W_2^{-1}(0,1)$

Suppose that $n$ is even and $\tau_\nu \in W_2^{-1}(0,1)$ in (1) for $\nu = \overline{0,n-2}$, that is, $\tau_\nu = \sigma_\nu'$, where $\sigma_\nu \in L_2(0,1)$, and the derivative is understood in the sense of distributions. Put $m := n/2$, and define the matrix function
$$Q(x) = [q_{r,j}(x)]_{r,j=0}^{m} := \sum_{\nu=0}^{n-2}(-1)^{\lfloor (\nu-1)/2 \rfloor}\chi_\nu\, \sigma_\nu(x),$$
where $\chi_\nu := [\chi_{\nu;r,j}]_{r,j=0}^{m}$,
$$\chi_{2k;k,k+1} = -\chi_{2k;k+1,k} = 1, \qquad \chi_{2k+1;k,k+2} = -\chi_{2k+1;k+2,k} = 1,$$
and all the other entries $\chi_{\nu;r,j}$ equal zero. The associated matrix $F(x) = [f_{k,j}(x)]_{k,j=1}^{n}$ for $\ell_n(y)$ is defined as follows (see [46] for details):
$$f_{m,j} := (-1)^{m+1} q_{j-1,m}, \quad j = \overline{1,m}, \qquad f_{k,m+1} := (-1)^{k+1} q_{m,2m-k}, \quad k = \overline{m+1,2m},$$
$$f_{k,j} := (-1)^{k+1} q_{j-1,2m-k} + (-1)^{m+k} q_{j-1,m}\, q_{m,2m-k}, \quad k = \overline{m+1,2m}, \; j = \overline{1,m},$$
and $f_{k,j} = \delta_{k,j-1}$ for all the other indices. Clearly, $F \in \mathfrak{F}_n$. For example, if $n = 4$, then
$$Q(x) = \begin{pmatrix} 0 & -\sigma_0 & \sigma_1 \\ \sigma_0 & 0 & \sigma_2 \\ -\sigma_1 & -\sigma_2 & 0 \end{pmatrix}, \qquad F(x) = \begin{pmatrix} 0 & 1 & 0 & 0 \\ -\sigma_1 & -\sigma_2 & 1 & 0 \\ -\sigma_0 + \sigma_1\sigma_2 & \sigma_2^2 & -\sigma_2 & 1 \\ -\sigma_1^2 & -\sigma_0 - \sigma_1\sigma_2 & \sigma_1 & 0 \end{pmatrix}.$$
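For $n = 4$, the entries of $F(x)$ can be recomputed from the general rules for $f_{m,j}$, $f_{k,m+1}$, and $f_{k,j}$ applied to an antisymmetric matrix $Q(x)$ built from $\sigma_0, \sigma_1, \sigma_2$. The sign conventions below are a reconstruction from context, so treat this as an illustrative sketch rather than the paper's exact normalization:

```python
import sympy as sp

s0, s1, s2 = sp.symbols('sigma0 sigma1 sigma2')
m = 2  # n = 4

# Q = [q_{r,j}], r, j = 0..m (antisymmetric; signs assumed here)
Q = sp.Matrix([[0, -s0, s1],
               [s0,  0, s2],
               [-s1, -s2, 0]])
q = lambda r, j: Q[r, j]

n = 2 * m
F = sp.zeros(n, n)
for k in range(1, n):
    F[k - 1, k] = 1                              # f_{k,j} = delta_{k,j-1}
for j in range(1, m + 1):                        # f_{m,j} = (-1)^(m+1) q_{j-1,m}
    F[m - 1, j - 1] = (-1) ** (m + 1) * q(j - 1, m)
for k in range(m + 1, 2 * m + 1):
    F[k - 1, m] = (-1) ** (k + 1) * q(m, 2 * m - k)   # f_{k,m+1}
    for j in range(1, m + 1):                    # f_{k,j}, j = 1..m
        F[k - 1, j - 1] = ((-1) ** (k + 1) * q(j - 1, 2 * m - k)
                           + (-1) ** (m + k) * q(j - 1, m) * q(m, 2 * m - k))
print(F)
```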
Consider Problem 2 for $L = (F(x), U_0, U_1)$, $U_0 = I$, $U_1 = [\delta_{k,n-j+1}]_{k,j=1}^{n}$. Let $\tilde L = (\tilde F(x), U_0, U_1)$, where $\tilde F(x)$ is the associated matrix for the differential expression $\tilde\ell_n(y)$ with the coefficients $\tilde\tau_\nu = \tilde\sigma_\nu' \in W_2^{-1}(0,1)$, $\nu = \overline{0,n-2}$. The following uniqueness theorem is proved analogously to Theorem 2.
    Theorem 5.
If $\Lambda = \tilde\Lambda$ and $N(\lambda_0) = \tilde N(\lambda_0)$ for all $\lambda_0 \in \Lambda$, then $\sigma_\nu(x) = \tilde\sigma_\nu(x) + \mathrm{const}$ a.e. on $(0,1)$ for $\nu = \overline{0,n-2}$. Thus, the spectral data $\{\lambda_0, N(\lambda_0)\}_{\lambda_0 \in \Lambda}$ uniquely specify $\tau_\nu \in W_2^{-1}(0,1)$, $\nu = \overline{0,n-2}$.
The functions {