Next Article in Journal
Physics-Informed Neural Networks with Periodic Activation Functions for Solute Transport in Heterogeneous Porous Media
Next Article in Special Issue
An Approach to Multidimensional Discrete Generating Series
Previous Article in Journal
Multiplicity Results of Solutions to the Double Phase Problems of Schrödinger–Kirchhoff Type with Concave–Convex Nonlinearities
Previous Article in Special Issue
Almost Automorphic Solutions to Nonlinear Difference Equations
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

Necessary and Sufficient Conditions for Solvability of an Inverse Problem for Higher-Order Differential Operators

by
Natalia P. Bondarenko
1,2,3
1
Department of Mechanics and Mathematics, Saratov State University, Astrakhanskaya 83, Saratov 410012, Russia
2
Department of Applied Mathematics and Physics, Samara National Research University, Moskovskoye Shosse 34, Samara 443086, Russia
3
S.M. Nikolskii Mathematical Institute, Peoples’ Friendship University of Russia (RUDN University), 6 Miklukho-Maklaya Street, Moscow 117198, Russia
Mathematics 2024, 12(1), 61; https://doi.org/10.3390/math12010061
Submission received: 12 November 2023 / Revised: 13 December 2023 / Accepted: 21 December 2023 / Published: 24 December 2023

Abstract

:
We consider an inverse spectral problem that consists in the recovery of the differential expression coefficients for higher-order operators with separate boundary conditions from the spectral data (eigenvalues and weight numbers). This paper is focused on the principal issue of inverse spectral theory, namely, on the necessary and sufficient conditions for the solvability of the inverse problem. In the framework of the method of the spectral mappings, we consider the linear main equation of the inverse problem and prove the unique solvability of this equation in the self-adjoint case. The main result is obtained for the first-order system of the general form, which can be applied to higher-order differential operators with regular and distribution coefficients. From the theorem on the main equation’s solvability, we deduce the necessary and sufficient conditions for the spectral data for a class of arbitrary order differential operators with distribution coefficients. As a corollary of our general results, we obtain the characterization of the spectral data for the fourth-order differential equation in terms of asymptotics and simple structural properties.

1. Introduction

This paper is concerned with inverse spectral problems for differential equations of the form
n ( y ) : = y ( n ) + k = 0 n / 2 1 ( τ 2 k ( x ) y ( k ) ) ( k ) + k = 0 ( n 1 ) / 2 1 ( τ 2 k + 1 ( x ) y ( k ) ) ( k + 1 ) + ( τ 2 k + 1 ( x ) y ( k + 1 ) ) ( k ) = λ y , x ( 0 , 1 ) ,
where n 2 , the notation a means rounding a real number a down, the coefficients { τ ν } ν = 0 n 2 in general can be generalized functions (distributions), the functions i n + ν τ ν are assumed to be real-valued, and λ is the spectral parameter.
We investigate the recovery of the coefficients { τ ν } ν = 0 n 2 from the eigenvalues { λ l , k } l 1 and the weight numbers { β l , k } l 1 of the boundary value problems L k , k = 1 , , n 1 , for Equation (1) with the separated boundary conditions
y [ j 1 ] ( 0 ) = 0 , j = 1 , , k , y [ s 1 ] ( 1 ) = 0 , s = 1 , , n k .
Thus, the problem L k has k boundary conditions at x = 0 and ( n k ) boundary conditions at x = 1 . The quasi-derivatives y [ j ] and weight numbers β l , k are rigorously defined in Section 2.
Our spectral data { λ l , k , β l , k } l 1 , k = 1 , , n 1 generalize the spectral data { λ l , β l } l 1 of the classical Sturm–Liouville problem
y + q ( x ) y = λ y , x ( 0 , 1 ) ,
y ( 0 ) = y ( 1 ) = 0 .
Here, { λ l } l 1 are the eigenvalues of the boundary value problem (3) and (4) and β l : = 0 1 y l 2 ( x ) d x 1 , where { y l ( x ) } l 1 are the eigenfunctions normalized by the condition y l ( 0 ) = 1 . The inverse problems for the second-order Sturm–Liouville Equation (3) have been studied fairly completely (see the monographs [1,2,3,4,5] and references therein). In particular, it is well known that the potential q ( x ) can be uniquely reconstructed from the spectral data { λ l , β l } l 1 following the method of Gelfand and Levitan [6]. Nevertheless, this method appears to be ineffective for higher-order differential equations
y ( n ) + s = 0 n 2 p s ( x ) y ( s ) = λ y , x ( 0 , 1 ) .
The inverse problem by the spectral data { λ l , k , β l , k } l 1 , k = 1 , , n 1 for Equation (5) with the coefficients p s C s [ 0 , 1 ] , s = 0 , , n 2 , was introduced by Leibenzon [7], who proved the uniqueness of its solution. In [8,9], Leibenzon developed a constructive method for finding the solution and obtained the necessary and sufficient conditions for the solvability of the inverse problem. However, Yurko [10] showed that Leibenzon’s spectral data uniquely specify the coefficients { p s } s = 0 n 2 only under the so-called separation condition: the eigenvalues of any two neighboring problems L k and L k + 1 must be distinct. Furthermore, Yurko introduced another spectral characteristic, which is now called the Weyl–Yurko matrix. It generalizes the spectral data { λ l , k , β l , k } l 1 , k = 1 , , n 1 of Leibenzon and uniquely determines the higher-order differential operators in the general case, without any restrictions on their spectra. As a result, Yurko created the inverse problem theory for Equation (5), with p s W 2 s + ν , s = 0 , , n 2 , ν 0 , on a finite interval and on the half-line [10,11,12]. The inverse scattering problem on the line requires a different approach (see [13,14]). Inverse spectral problems for higher-order differential operators in other statements have been considered in [15,16,17,18,19,20,21,22,23,24] and other studies.
Generalizations of the inverse spectral problems using the Weyl–Yurko matrix and the spectral data { λ l , k , β l , k } l 1 , k = 1 , , n 1 to the higher-order differential operators with distribution coefficients have been investigated by Bondarenko (see [25,26,27,28,29] and other studies by the author). In particular, two papers [25,27] mostly focused on uniqueness theorems. In [26], nonlinear inverse problems were reduced to the linear equation
( I R ˜ ( x ) ) ψ ( x ) = ψ ˜ ( x ) ,
in the Banach space m of bounded infinite sequences. Equation (6) is considered for each fixed x [ 0 , 1 ] . Here, ψ ( x ) , ψ ˜ ( x ) m , R ˜ ( x ) is a compact operator, and I is the identity operator in m. The vector ψ ˜ ( x ) and the operator R ˜ ( x ) are constructed by using the given spectral data, while the unknown vector ψ ( x ) is related to the coefficients in Equation (1). Further details regarding the main Equation (6) can be found in Section 3.
For existence of the inverse problem solution, the solvability of Equation (6) is crucial. However, this issue is very difficult to investigate. Leibenzon [9] and Yurko [10,12] imposed the requirement of the existence of a bounded inverse operator ( I R ˜ ( x ) ) 1 . But, in general, it is difficult to verify this requirement. The only relatively simple situation is the case of small R ( x ) . This case of local solvability was considered in [10,12] for regular coefficients and in [28] for distributions. For n = 2 , the unique solvability of the main Equation (6) can be proved in the self-adjoint case. Recently, it was proved for n = 3 (see [29]). Nevertheless, to the best of our knowledge, there are no such results for n > 3 even in the case of regular coefficients. This paper aims to study the solvability of the main Equation (1) in the self-adjoint case for arbitrary even and odd orders n and to obtain necessary and sufficient conditions on the spectral data { λ l , k , β l , k } l 1 k = 1 , , n 1 for Equation (1) with distribution coefficients.
It is worth mentioning that boundary value problems for linear differential equations of form (1) and their inverse spectral problems appear in various applications. The majority of applications deal with n = 2 . Direct and inverse Sturm–Liouville problems arise in classical and quantum mechanics, geophysics, material science, acoustics, and other branches of science and engineering (see, e.g., the references in [4,5]). Third-order differential equations have been applied to describing elastic beam vibrations [30] and thin membrane flow of viscous liquid [31]. Linear differential operators of orders n = 4 and n = 6 arise in geophysics [15] and vibration theory [21,32]. Although the author has not found specific physical models leading to Equation (1) for n > 4 in the literature, the characterization of its spectral data in the general case is of fundamental significance from the mathematical viewpoint.
In order to deal with Equation (1), we apply the regularization approach, which was developed in [33] and a number of subsequent studies. Namely, we reduce Equation (1) to the first-order system
Y ( x ) = ( F ( x ) + Λ ) Y ( x ) , x ( 0 , 1 ) ,
where Y ( x ) is a column-vector function related to y ( x ) , Λ is the ( n × n ) matrix whose entry at position ( n , 1 ) equals λ and all the other entries equal zero, and the matrix function F ( x ) = [ f k , j ( x ) ] k , j = 1 n with integrable entries is the so-called associated matrix, which is constructed by the coefficients { τ ν } ν = 0 n 2 of the differential expression n ( y ) in a special way.
Constructions of associated matrices for various classes of differential operators were obtained in [33,34,35,36,37,38,39] and other studies. As an example, consider Equation (1) for n = 2 :
y + τ 0 y = λ y .
Suppose that τ 0 W 2 1 [ 0 , 1 ] , that is, τ 0 = σ , where σ is some function of L 2 [ 0 , 1 ] . Then, Equation (8) can be reduced to the ( 2 × 2 ) system (7) with the following associated matrix (see [40]):
F ( x ) = σ ( x ) 1 σ 2 ( x ) σ ( x ) .
In this paper, we consider the main Equation (6) constructed for the first-order system (7) in the general form. Namely, we suppose that F ( x ) = [ f k , j ( x ) ] k , j = 1 n belongs to the class F n of ( n × n ) matrix functions satisfying the conditions
f k , j = δ k + 1 , j , k < j , f k , k L 2 [ 0 , 1 ] , f k , j L 1 [ 0 , 1 ] , k j , trace ( F ( x ) ) = 0 .
Here and below, δ k , j is the Kronecker delta. Thus, the structure of F F n can be symbolically presented as follows:
L 2 1 0   0     0 L 1 L 2 1   0     0 L 1 L 1 L 2   0     0 ............. L 1 L 1 L 1 L 2 1 L 1 L 1 L 1 L 1 L 2 .
For the spectral data { λ l , k , β l , k } l 1 , k = 1 , , n 1 of the system (7) with any F F n , the following asymptotic relations hold (see [26]):
λ l , k = ( 1 ) n k π sin π k n ( l + χ k + ϰ l , k ) n , { ϰ l , k } l 2 ,
β l , k = n λ l , k ( 1 + η l , k ) , { η l , k } l 2 ,
where { χ k } k = 1 n 1 are constants that do not depend on the matrix F ( x ) . Hence, { χ k } k = 1 n 1 can be determined by the eigenvalues { λ l , k 0 } l 1 of the matrix function F 0 ( x ) with the zero entries f s , j 0 ( x ) 0 for s j :
χ k = lim l 1 π sin ( π k n ) ( 1 ) n k λ l , k 0 n l , k = 1 , , n 1 .
Let F n , s i m p denote a class of matrix functions F ( x ) of F n such that the corresponding eigenvalues { λ l , k } l 1 , k = 1 , , n 1 fulfill the following simplifying assumptions:
(A-1) For each fixed k { 1 , , n 1 } , the eigenvalues { λ l , k } l 1 are simple.
(A-2)  { λ l , k } l 1 { λ l , k + 1 } l 1 = for k = 1 , , n 2 .
Note that in view of the asymptotics (10), the conditions (A-1) and (A-2) hold for all sufficiently large indices l. Assumptions (A-1) and (A-2) have been imposed in a number of previous studies [7,8,9,10,12,28]. Under these assumptions, the coefficients of the differential expression (1) are typically uniquely specified by the spectral data { λ l , k , β l , k } l 1 , k = 1 , , n 1 . Uniqueness theorems of this kind have been proven in [7,10,12,28] for various classes of regular and distributional coefficients.
Let F n + and F n , s i m p + denote the subclasses of matrix functions of F n and F n , s i m p , respectively, satisfying the additional condition
f k , j ( x ) = ( 1 ) k + j + 1 f n j + 1 , n k + 1 ( x ) ¯ ,
where the bar denotes the complex conjugate. The condition (12) is a kind of self-adjointness. In particular, the matrix (9) belongs to F 2 + if σ ( x ) is real-valued.
The first main result of this study is the following theorem, which provides sufficient conditions for the unique solvability of the main Equation (6).
Theorem 1. 
Suppose that F ˜ F n , s i m p + and complex numbers { λ l , k , β l , k } l 1 , k = 1 , , n satisfy assumptions (A-1) and (A-2), the asymptotics (10) and (11), the self-adjointness conditions
λ l , k = ( 1 ) n λ l , n k ¯ , β l , k = ( 1 ) n β l , n k ¯ , l 1 , k = 1 , , n 1 ,
the additional requirements
i f n = 2 p : ( 1 ) p + 1 β l , p > 0 , l 1 , i f n = 2 p + 1 : ( 1 ) p + 1 R e λ l , p > 0 , l 1 ,
and β l , k 0 for l 1 , k = 1 , , n 1 . Then, the linear operator ( I R ˜ ( x ) ) , which is constructed according to Section 3, has a bounded inverse operator in the Banach space m for each fixed x [ 0 , 1 ] .
The proof of Theorem 1 is based on construction and contour integration of some functions, which are meromorphic in the complex plane of the spectral parameter. The obtained contour integrals, on the one hand, can be estimated as the radii of the contours tend to infinity. On the other hand, they can be calculated using residue theorem. Although the idea of the proof arises from the cases n = 2 and n = 3 , which were considered in [4,10,41] and [29], respectively, the solvability of the main equation for n > 3 is a fundamentally novel result, and the proofs require several new constructions.
Since Theorem 1 is obtained for system (7) in the general form, it can be applied to different classes of differential operators with distribution as well as integrable coefficients. Nevertheless, it is important to note that, in general, the matrix function F ( x ) cannot be uniquely recovered from the spectral data (see Example 1 in [27]). Therefore, in the next part of this paper, we consider the system (7) associated with Equation (1), in which τ ν W 2 ν 1 [ 0 , 1 ] , i n + ν τ ν are real-valued functions for ν = 0 , , n 2 , and the assumptions (A-1) and (A-2) are fulfilled. We denote this class of coefficients { τ ν } ν = 0 n 2 by W s i m p + . For the considered class, in [28], the coefficients of the differential expression n ( y ) were reconstructed as some series using the solution ψ ( x ) of the main Equation (6). Moreover, the convergence of those series were proved in the corresponding functional spaces, including the space W 2 1 [ 0 , 1 ] of generalized functions. Theorem 1 together with the results of [28] imply the following solvability conditions of the inverse spectral problem:
Theorem 2. 
Let complex numbers { λ l , k , β l , k } l 1 , k = 1 , , n 1 satisfy (A-1), (A-2), (13), (14), and β l , k 0 for all l , k . Suppose that there exists a model problem with coefficients { τ ˜ ν } ν = 0 n 2 W s i m p + such that { l n 2 ξ l } l 1 l 2 , where
ξ l : = k = 1 n 1 l ( n 1 ) | λ l , k λ ˜ l , k | + l n | β l , k β ˜ l , k | , l 1 .
Then, there exist coefficients { τ ν } ν = 0 n 2 with the spectral data { λ l , k , β l , k } l 1 , k = 1 , , n 1 . Moreover, { τ ν } ν = 0 n 2 W s i m p + .
For even n, the conditions of Theorem 2 are not only necessary but also sufficient. For odd n, the only “gap” between the necessary and sufficient conditions is the requirement (14) (see Remark 2). For n = 2 , 3 , our conditions coincide with the previously known results (see [29,42]). For n = 4 , we obtain a novel theorem (Theorem 3), which completely characterizes the corresponding spectral data in terms of the asymptotics recently derived in [43] and structural properties.
This paper is organized as follows: In Section 2, we define the spectral data { λ l , k , β l , k } l 1 , k = 1 , , n 1 for the system (7) and provide other preliminaries. In Section 3, we construct the main Equation (6) based on the technique in [26] and discuss some useful properties of the functions that participate in the main equation. Section 4 contains the proof of Theorem 1. In Section 5, we apply Theorem 1 to the inverse problem for Equation (1) with τ ν W 2 ν 1 [ 0 , 1 ] and so prove Theorem 2. Section 6 contains examples for n = 2 , 3 , 4 . In Section 7, we summarize our main results and discuss their advantages over those of previous studies.
Throughout the paper, we use the following notations:
  • In estimates, the same symbol C is used for various positive constants that do not depend on x, λ , l, etc.
  • F 0 ( x ) [ δ k + 1 , j ] k , j = 1 n .
  • Along with F ( x ) , we consider matrix functions F ˜ ( x ) , F ( x ) , F ˜ ( x ) , abd F 0 ( x ) . If a symbol γ denotes an object related to F ( x ) , the notations γ ˜ , γ , γ ˜ , and γ 0 are used for similar objects related to F ˜ ( x ) , F ( x ) , F ˜ ( x ) , and F 0 ( x ) , respectively.
  • The notation a k ( λ 0 ) is used for the kth coefficient of the Laurent series for a function a ( λ ) at a point λ = λ 0 :
    a ( λ ) = k = q a k ( λ 0 ) ( λ λ 0 ) k .

2. Preliminaries

In this section, we consider system (7) with an arbitrary matrix F F n and define the corresponding spectral data. This section is mainly based on the results of [25,26,28].

2.1. Eigenvalues

Let F F n . Using the entries of F ( x ) , define the quasi-derivatives
y [ 0 ] : = y , y [ k ] : = ( y [ k 1 ] ) j = 1 k f k , j y [ j 1 ] , k = 1 , , n ,
and the domain
D F : = { y : y [ k ] A C [ 0 , 1 ] , k = 0 , , n 1 } .
For y D F , define the column vector function y ( x ) = [ y [ j 1 ] ( x ) ] j = 1 n . Obviously, if y D F , then y [ n ] L 1 [ 0 , 1 ] , and the equation
y [ n ] = λ y , x ( 0 , 1 ) ,
is equivalent to system (7) with Y ( x ) = y ( x ) . Indeed, the first ( n 1 ) rows of (7) correspond to the definitions (15) of the quasi-derivatives and the nth row corresponds to Equation (17). Below, we consider solutions of (17) belonging to domain D F . Note that domain D F is nonempty. In particular, the first component y ( x ) : = Y 1 ( x ) of any solution Y ( x ) of the first-order system (7) belongs to D F and satisfies (17).
For k = 1 , , n , let C k ( x , λ ) denote the solution of Equation (17), satisfying the initial conditions
C k [ j 1 ] ( 0 , λ ) = δ k , j , j = 1 , , n .
Clearly, the matrix function C ( x , λ ) : = [ C k [ j 1 ] ( x , λ ) ] j , k = 1 n is a fundamental solution of the first-order system (7). Therefore, the solutions { C k ( x , λ ) } k = 1 n exist and are unique. Moreover, their quasi-derivatives C k [ j 1 ] ( x , λ ) are entire functions of λ for each fixed x [ 0 , 1 ] and k , j = 1 , , n .
For k = 1 , , n 1 , L k denotes the boundary value problem for Equation (17) with the boundary conditions (2). It can be found in the standard way that, for each k = 1 , , n 1 , the problem L k has a countable set of eigenvalues { λ l , k } l 1 , which coincide with the zeros of the entire characteristic function
Δ k , k ( λ ) : = det [ C r [ n j ] ( 1 , λ ) ] j , r = k + 1 n .

2.2. Weyl–Yurko Matrix, Weight Matrices, and Weight Numbers

For k = 1 , , n , Φ k ( x , λ ) denotes the solution of Equation (17) satisfying the boundary conditions
Φ k [ j 1 ] ( 0 , λ ) = δ k , j , j = 1 , , k , Φ k [ s 1 ] ( 1 , λ ) = 0 , s = 1 , , n k .
The functions { Φ k ( x , λ ) } k = 1 n are called the Weyl solutions of Equation (17). Let us summarize the properties of Weyl solutions, which were discussed in [10] for the case of higher-order differential operators with regular coefficients and in [25,26] for the system (7) in more detail. For each fixed x [ 0 , 1 ] and k = 1 , , n , the quasi-derivatives Φ k [ j ] ( x , λ ) , j = 0 , , n 1 , are meromorphic in the λ -plane and have poles at the eigenvalues { λ l , k } l 1 . Furthermore, the Weyl solutions are ranked by their growth as | λ | . In order to estimate them, λ = ρ n , and the ρ plane is divided into sectors
Γ s : = ρ C : π ( s 1 ) n < arg ρ < π s n , s = 1 , , 2 n .
In each fixed sector Γ s , { ω k } k = 1 n denotes the roots of ω n = 1 , numbered so that
Re ( ρ ω 1 ) < Re ( ρ ω 2 ) < < Re ( ρ ω n ) , ρ Γ s .
Consider the matrix F 0 ( x ) = [ δ k + 1 , j ] k , j = 1 n and the corresponding eigenvalues { λ l , k 0 } l 1 of the boundary value problems L k 0 , k = 1 , , n 1 . For a fixed sector Γ s , let ρ l , k 0 = λ l , k 0 n Γ ¯ s . It can be shown that for sufficiently large l, the eigenvalues λ l , k 0 are real (see Lemma 3), so ρ l , k 0 lie on the boundary of sector Γ s . Introduce the region
Γ s , ρ * , δ : = { ρ Γ s : | ρ | > ρ * , | ρ ρ l , k 0 | > δ , l 1 , k = 1 , , n 1 } ,
for some positive numbers ρ * and δ .
Proposition 1 
([25,26]). The following estimate holds:
| Φ k [ j 1 ] ( x , ρ n ) | C | ρ | j k | exp ( ρ ω k x ) | , k , j = 1 , , n , x [ 0 , 1 ] , ρ Γ ¯ s , ρ * , δ
for each fixed s = 1 , , 2 n , a sufficiently small δ > 0 , and some ρ * > 0 . The numbers { ω k } k = 1 n are supposed to be numbered in the order (20) associated with sector Γ s .
Clearly, the matrix function Φ ( x , λ ) : = [ Φ k [ j 1 ] ( x , λ ) ] j , k = 1 n is a solution of system (7). Therefore, Φ ( x , λ ) is related to the fundamental solution C ( x , λ ) as follows:
Φ ( x , λ ) = C ( x , λ ) M ( λ ) ,
where M ( λ ) = [ M j , k ( λ ) ] j , k = 1 n is some matrix function called the Weyl–Yurko matrix. The Weyl–Yurko matrix for the first time was introduced by Yurko [10,11,12] for the investigation of inverse spectral problems for higher-order differential operators with regular coefficients.
It follows from (18), (19) and (22) that M ( λ ) is a unit lower-triangular matrix:
M ( λ ) = 1 0 0 M 2 , 1 ( λ ) 1 0 ............. M n , 1 ( λ ) M n , 2 ( λ ) 1 .
Furthermore, the entries M j , k ( λ ) for j > k are meromorphic in λ with poles at the eigenvalues { λ l , k } l 1 . In other words, the poles of the kth column coincide with the zeros of the corresponding characteristic function Δ k , k ( λ ) .
Now, suppose that F F n , s i m p ; that is, the assumptions (A-1) and (A-2) hold for the corresponding eigenvalues { λ l , k } l 1 , k = 1 , , n 1 . In terms of the Weyl–Yurko matrix, assumptions (A-1) and (A-2) mean that all the poles of M ( λ ) are simple and neighboring columns do not have common poles, respectively. Hence, under assumption (A-1), the Laurent series of M ( λ ) at each pole λ = λ l , k has the form
M ( λ ) = M 1 ( λ l , k ) λ λ l , k + M 0 ( λ l , k ) + M 1 ( λ l , k ) ( λ λ l , k ) + ,
where M j ( λ l , k ) are ( n × n ) matrix coefficients. Define the weight matrices
N ( λ l , k ) : = ( M 0 ( λ l , k ) ) 1 M 1 ( λ l , k ) , N ( λ l , k ) = [ N j , r ( λ l , k ) ] j , r = 1 n .
Due to Lemma 4 in [26], under assumption (A-2), the weight matrices have the following structure:
N j , r ( λ l , k ) 0 j = r + 1 , Δ r , r ( λ l , k ) = 0 .
Thus, in this case, the weight matrices { N ( λ l , k ) } l 1 , k = 1 , , n 1 are uniquely specified by the weight numbers
β l , k : = N k + 1 , k ( λ l , k ) = Res λ = λ l , k M k + 1 , k ( λ ) , l 1 , k = 1 , , n 1 .
In particular, (24) implies that β l , k 0 .
Throughout this paper, we use the collection { λ l , k , β l , k } l 1 , k = 1 , , n 1 as the spectral data of system (7) or of Equation (1).

2.3. Matrix F ( x )

Along with F ( x ) F n , consider the matrix function
F ( x ) = [ f k , j ( x ) ] k , j = 1 n , f k , j ( x ) = ( 1 ) k + j + 1 f n j + 1 , n k + 1 ( x ) .
F ( x ) also belongs to F n . Using the entries of F ( x ) , define the quasi-derivatives
z [ 0 ] : = z , z [ k ] : = ( z [ k 1 ] ) j = 1 k f k , j z [ j 1 ] , k = 1 , , n ,
the domain
D F : = { z : z [ k ] A C [ 0 , 1 ] , k = 0 , , n 1 } ,
and consider the following equation
( 1 ) n z [ n ] = μ z , x ( 0 , 1 ) ,
analogous to (17). Equation (28) is equivalent to the first-order system
d d x z ( x ) = ( F ( x ) + ( 1 ) n Λ ) z ( x ) , x ( 0 , 1 ) ,
where z = [ z [ j 1 ] ] j = 1 n is a column vector, and the quasi-derivatives are understood in the sense (27). We agree that for y D F , the quasi-derivatives are defined by (15) and, for z D F , by (27). The solutions of (17) and (28) are considered in domains D F and D F , respectively.
For y D F and z D F , define the Lagrange bracket:
z , y = k = 0 n 1 ( 1 ) k z [ k ] y [ n k 1 ] .
If y and z satisfy Equations (17) and (28), respectively, then
d d x z , y = ( λ μ ) z y ,
see Section 2.1 in [26].
Analogous to { C k ( x , λ ) } k = 1 n and { Φ k ( x , λ ) } k = 1 n , we define solutions { C k ( x , λ ) } k = 1 n and { Φ k ( x , λ ) } k = 1 n of Equation (28) satisfying the initial conditions (18) and the boundary conditions (19), respectively. Furthermore, put C ( x , λ ) : = [ C k [ j 1 ] ( x , λ ) ] j , k = 1 n , Φ ( x , λ ) : = [ Φ k [ j 1 ] ( x , λ ) ] j , k = 1 n , and define the Weyl–Yurko matrix M ( λ ) = [ M j , k ( λ ) ] j , k = 1 n by the relation
Φ ( x , λ ) = C ( x , λ ) M ( λ ) .
Proposition 2 
([26]). The Weyl–Yurko matrices M ( λ ) and M ( λ ) are related as follows:
( M ( λ ) ) T = J ( M ( λ ) ) 1 J 1 ,
where J = [ ( 1 ) j δ j , n k + 1 ] j , k = 1 n and T denotes the matrix transpose.
Consider the boundary value problems L k , k = 1 , , n 1 , for Equation (28) with boundary conditions (2), where the quasi-derivatives are defined by (27). Define the spectral data { λ l , k , β l , k } l 1 , k = 1 , , n 1 analogously to { λ l , k , β l , k } l 1 , k = 1 , , n 1 .
Lemma 1. 
Suppose that F F n , s i m p , then F F n , s i m p and
λ l , k = λ l , n k , β l , k = β l , n k , l 1 , k = 1 , , n 1 .
Proof. 
Relation (32) implies
M k + 1 , k ( λ ) = M n k + 1 , n k ( λ ) , k = 1 , , n 1 .
Let assumptions (A-1) and (A-2) hold for { λ l , k } l 1 , k = 1 , , n 1 . Taking (25) and β l , k 0 into account, we conclude that all the poles of M k + 1 , k ( λ ) are simple and coincide with { λ l , n k } l 1 . On the other hand, the poles of the kth column of M ( λ ) belong to the set { λ l , k } l 1 . The set { λ l , k } l 1 { λ l , n k } l 1 is empty because of the asymptotics (10) for λ l , k and λ l , n k . Hence, λ l , k = λ l , n k for l 1 , k = 1 , , n 1 . This implies (A-1) and (A-2) for { λ l , k } l 1 , k = 1 , , n 1 . Thus, F F n , s i m p . The relation β l , k = β l , n k follows from (25) and (33). □

2.4. Self-Adjoint Case

In this subsection, we study the properties of the spectral data for F F n , s i m p + .
If F F n + , then F ( x ) = [ f k , j ( x ) ¯ ] k , j = 1 n . Consequently,
C k ( x , λ ) = C k ( x , ( 1 ) n λ ¯ ) ¯ , Φ k ( x , λ ) = Φ k ( x , ( 1 ) n λ ¯ ) ¯ , M j , k ( λ ) = M j , k ( ( 1 ) n λ ¯ ) ¯ , j , k = 1 , , n .
In view of Lemma 1, for F F n , s i m p + , the following relations (13) hold:
λ l , k = ( 1 ) n λ l , n k ¯ , β l , k = ( 1 ) n β l , n k ¯ , l 1 , k = 1 , , n 1 .
In particular, if n = 2 p , p N , then the boundary value problem L p is self-adjoint and λ l , p , β l , p are real for all l 1 . Moreover, the following lemma holds.
Lemma 2. 
Suppose that n = 2 p , p N , and F F n , s i m p + . Then, ( 1 ) p + 1 β l , p > 0 for all l 1 .
Proof. 
Fix l N . From Lemma 3 in [26], we have the following relation
Φ 1 ( x , λ l , p ) = Φ 0 ( x , λ l , p ) N ( λ l , p ) .
In view of (24) and (25), this implies that
Φ p , 1 ( x , λ l , p ) = Φ p + 1 ( x , λ l , p ) β l , p .
Note that Φ p + 1 ( x , λ l , p ) is the eigenfunction of problem L p corresponding to the real eigenvalue λ l , p . The identity (30) implies that
Φ p + 1 ( x , λ l , p ) , Φ p ( x , λ ) | 0 1 = ( λ λ l , p ) 0 1 Φ p + 1 ( x , λ l , p ) Φ p ( x , λ ) d x .
Using (29) and (19), we calculate
Φ p + 1 ( x , λ l , p ) , Φ p ( x , λ ) = ( 1 ) p , x = 0 , 0 , x = 1 .
Consequently, it follows from (35) that
( 1 ) p + 1 = lim λ λ l , p ( λ λ l , p ) 0 1 Φ p + 1 ( x , λ l , p ) Φ p ( x , λ ) d x = 0 1 Φ p + 1 ( x , λ l , p ) Φ p , 1 ( x , λ l , p ) d x .
Taking the relations (34), Φ p + 1 ( x , λ l , p ) = Φ p + 1 ( x , λ l , p ¯ ) ¯ , and λ l , p R into account, we conclude that
( 1 ) p + 1 = β l , p 0 1 | Φ p + 1 ( x , λ l , p ) | 2 d x .
Since the integral is positive, this yields the claim. □
In addition, we obtain the following lemma for the eigenvalues corresponding to matrix F 0 ( x ) = [ δ k + 1 , j ] k , j = 1 n .
Lemma 3. 
The eigenvalues { λ l , k 0 } l 1 , k = 1 , , n 1 are real for all sufficiently large indices l.
Proof. 
On the one hand, ( F 0 ) = F 0 , so λ l , k = ( 1 ) n λ l , k . Using Lemma 1, we conclude that λ l , k = ( 1 ) n λ l , n k . On the other hand, F 0 F n + , so λ l , k = ( 1 ) n λ l , n k ¯ . Consequently, the spectrum { λ l , k } l 1 of each problem L k is symmetric with respect to the real axis. Furthermore, according to the asymptotics (10), the eigenvalues { λ l , k } l 1 are simple for all sufficiently large indices l, so they are real. □

3. Main Equation

In this section, we construct the main Equation (6) of the method of spectral mappings for the system (7) basing on the results of [26]. First, we need some additional notations.
Consider two matrix functions F ( x ) and F ˜ ( x ) of the class F n , s i m p . We agree that if a symbol γ denotes an object related to F ( x ) , the symbol γ ˜ with tilde denotes the analogous object related to F ˜ ( x ) . In addition, define the matrix F ˜ ( x ) similarly to (26). Note that for solutions related to the matrix functions F ˜ ( x ) and F ˜ ( x ) , the quasi-derivatives are defined similarly to (15) and (27) with f ˜ k , j and f ˜ k , j instead of f k , j and f k , j , respectively. For technical simplicity, assume that
{ λ l , k } l 1 , k = 1 , , n 1 { λ ˜ l , k } l 1 , k = 1 , , n 1 = .
The opposite case requires minor changes (see Remark 1).
For convenience, introduce the notations
V : = { ( l , k , ε ) : l N , k = 1 , , n 1 , ε = 0 , 1 } , λ l , k , 0 : = λ l , k , λ l , k , 1 : = λ ˜ l , k , β l , k , 0 : = β l , k , β l , k , 1 : = β ˜ l , k , φ l , k , ε ( x ) : = Φ k + 1 ( x , λ l , k , ε ) , φ ˜ l , k , ε ( x ) : = Φ ˜ k + 1 ( x , λ l , k , ε ) , ( l , k , ε ) V .
Thus, indices 0 and 1 are used for the values related to F ( x ) and F ˜ ( x ) , respectively. Note that the Weyl solution Φ k + 1 ( x , λ ) and Φ ˜ k + 1 ( x , λ ) have poles { λ l , k + 1 , 0 } and { λ l , k + 1 , 1 } , respectively. Therefore, under assumptions (A-2) and (36), the numbers { λ l , k , ε } are regular points of Φ k + 1 ( x , λ ) and Φ ˜ k + 1 ( x , λ ) , so (37) correctly defines functions φ l , k , ε ( x ) and φ ˜ l , k , ε ( x ) .
Introduce auxiliary functions
D ˜ k , k 0 ( x , μ , λ ) : = Φ ˜ k ( x , μ ) , Φ ˜ k 0 ( x , λ ) λ μ , k , k 0 = 1 , , n ,
where the Lagrange bracket is defined in (29), and the quasi-derivatives for Φ ˜ k ( x , μ ) and Φ ˜ k 0 ( x , λ ) are generated by matrices F ˜ ( x ) and F ˜ ( x ) , respectively.
Lemma 4. 
The function D ˜ k , k 0 ( x , μ , λ ) has singularities at μ = λ l , n k , 1 if k < n , at λ = λ l , k 0 , 1 if k 0 < n , and at λ = μ if k + k 0 = n + 1 .
Proof. 
Using (30) and (38), we obtain
d d x D ˜ k 0 , k ( x , μ , λ ) = Φ ˜ k ( x , μ ) Φ ˜ k 0 ( x , λ ) .
The boundary conditions (19) for Φ ˜ k ( x , μ ) and Φ ˜ k 0 ( x , λ ) , together with (29), imply
Φ ˜ k ( x , μ ) , Φ ˜ k 0 ( x , λ ) | x = 0 = 0 , k + k 0 > n + 1 , ( 1 ) k + 1 , k + k 0 = n + 1 , , Φ ˜ k ( x , μ ) , Φ ˜ k 0 ( x , λ ) | x = 1 = 0 , k + k 0 < n + 1 .
Hence,
D ˜ k 0 , k ( x , μ , λ ) = 0 x Φ ˜ k ( t , μ ) Φ ˜ k 0 ( t , λ ) d t , k + k 0 > n + 1 , ( 1 ) k + 1 λ μ + 0 x Φ ˜ k ( t , μ ) Φ ˜ k 0 ( t , λ ) d t , k + k 0 = n + 1 , x 1 Φ ˜ k ( t , μ ) Φ ˜ k 0 ( t , λ ) d t , k + k 0 < n + 1 .
Consequently, the singularities of D ˜ k 0 , k ( x , λ ) coincide with the poles { λ ˜ l , k } l 1 of Φ ˜ k ( t , μ ) for k < n and with the poles { λ ˜ l , k 0 } l 1 of Φ ˜ k 0 ( t , λ ) for k 0 < n . In addition, λ = μ is a pole in the case k + k 0 = n + 1 . From Lemma 1, λ ˜ l , k = λ ˜ l , n k , which concludes the proof. □
For ( l , k , ε ) , ( l 0 , k 0 , ε 0 ) V , denote
G ˜ ( l , k , ε ) , ( l 0 , k 0 , ε 0 ) ( x ) : = ( 1 ) n k β l , k , ε D ˜ n k + 1 , k 0 + 1 ( x , λ l , k , ε , λ l 0 , k 0 , ε 0 ) .
From Lemma 4, ( μ , λ ) = ( λ l , k , ε , λ l 0 , k 0 , ε 0 ) is a regular point of D ˜ n k + 1 , k 0 + 1 ( x , μ , λ ) , so definition (39) is correct.
Proposition 3 
([26]). The following relations hold:
φ l 0 , k 0 , ε 0 ( x ) = φ ˜ l 0 , k 0 , ε 0 ( x ) + ( l , k , ε ) V ( 1 ) ε φ l , k , ε ( x ) G ˜ ( l , k , ε ) , ( l 0 , k 0 , ε 0 ) ( x ) , ( l 0 , k 0 , ε 0 ) V ,
Φ k 0 ( x , λ ) = Φ ˜ k 0 ( x , λ ) + ( l , k , ε ) V ( 1 ) ε + n k β l , k , ε φ ˜ l , k , ε ( x ) D ˜ n k + 1 , k 0 ( x , λ l , k , ε , λ ) , k 0 = 1 , , n .
Relation (40) can be treated as an infinite system of linear equations with respect to { φ l , k , ε ( x ) } ( l , k , ε ) V for each fixed x [ 0 , 1 ] . This system plays an important role in solving inverse spectral problems (see [26,28]). Relation (41) can be used for finding Weyl solutions { Φ k ( x , λ ) } k = 1 n from solution { φ l , k , ε ( x ) } ( l , k , ε ) V of system (40).
Remark 1. 
If the assumption (36) is violated, in other words, if there exist indices ( l , k ) such that λ l , k , 0 = λ l , k , 1 , then the corresponding triples ( l , k , 0 ) and ( l , k , 1 ) have to be excluded from set V and from the summation in (40) and (41). Thus, the relations of Proposition 3 are simplified. For example, if λ l , k , 0 = λ l , k , 1 for all ( l , k ) except for a finite set, then (40) becomes a finite linear system, which can be used for the numerical solution of inverse spectral problems.
In order to study the solvability of system (40), we reduce it to a linear equation in a suitable Banach space following the method of [10,26]. Define the numbers
ξ l : = k = 1 n 1 l ( n 1 ) | λ l , k λ ˜ l , k | + l n | β l , k β ˜ l , k | , l 1 ,
and the functions
w l , k ( x ) : = l k exp ( x l cot ( k π / n ) ) .
The numbers { ξ l } l 1 characterize the difference among the spectral data { λ l , k , β l , k } l 1 , k = 1 , , n 1 , and { λ ˜ l , k , β ˜ l , k } l 1 , k = 1 , , n 1 . Functions w l , k ( x ) are related to the growth of functions φ l , k , ε ( x ) : | φ l , k , ε ( x ) | C w l , k ( x ) . The latter estimate can be easily deduced from Proposition 1 and the asymptotics (10).
Apply the following transform to the functions in the system (40):
ψ l , k , 0 ( x ) ψ l , k , 1 ( x ) : = w l , k 1 ( x ) ξ l 1 ξ l 1 0 1 φ l , k , 0 ( x ) φ l , k , 1 ( x ) ,
R ˜ ( l 0 , k 0 , 0 ) , ( l , k , 0 ) ( x ) R ˜ ( l 0 , k 0 , 0 ) , ( l , k , 1 ) ( x ) R ˜ ( l 0 , k 0 , 1 ) , ( l , k , 0 ) ( x ) R ˜ ( l 0 , k 0 , 1 ) , ( l , k , 1 ) ( x ) : =                                     w l , k ( x ) w l 0 , k 0 ( x ) ξ l 0 1 ξ l 0 1 0 1 G ˜ ( l , k , 0 ) , ( l 0 , k 0 , 0 ) ( x ) G ˜ ( l , k , 1 ) , ( l 0 , k 0 , 0 ) ( x ) G ˜ ( l , k , 0 ) , ( l 0 , k 0 , 1 ) ( x ) G ˜ ( l , k , 1 ) , ( l 0 , k 0 , 1 ) ( x ) ξ l 1 0 1 .
Analogously to ψ l , k , ε ( x ) , define ψ ˜ l , k , ε ( x ) .
For brevity, denote v = ( l , k , ε ) , v 0 = ( l 0 , k 0 , ε 0 ) , v , v 0 V . Define the vectors ψ ( x ) : = [ ψ v ( x ) ] v V and ψ ˜ ( x ) = [ ψ ˜ v ( x ) ] v V .
Consider the Banach space m of bounded infinite sequences α = [ α v ] v V with norm α m = sup v V | α v | . Define the linear operator R ˜ ( x ) = [ R ˜ v 0 , v ( x ) ] v 0 , v V acting on an element α = [ α v ] v V of m using the following rule:
( R ˜ ( x ) α ) v 0 = v V R ˜ v 0 , v ( x ) α v .
Proposition 4 
([26]). For each fixed x [ 0 , 1 ] , the vectors ψ ( x ) and ψ ˜ ( x ) belong to m, and R ˜ ( x ) is a bounded operator from m to m. Moreover, the operator R ˜ ( x ) can be approximated using finite-dimensional operators with respect to the operator norm . m m , so R ˜ ( x ) is compact.
Proposition 5 
([26]). Suppose that F , F ˜ F n , s i m p . Let ψ ( x ) , ψ ˜ ( x ) , and R ˜ ( x ) be constructed by using the matrix functions F ( x ) , F ˜ ( x ) and their spectral data as described above. Then, for each fixed x [ 0 , 1 ] , the relation (6) is fulfilled in the Banach space m. Furthermore, for each fixed x [ 0 , 1 ] , the operator ( I R ˜ ( x ) ) has a bounded inverse, so Equation (6) is uniquely solvable with respect to ψ ( x ) .
The relation (6) is called the main equation of the inverse problem. Obviously, (6) is deduced from system (40) using the notations (44) and (45). It is worth mentioning that in [26], the results of Propositions 4 and 5 were obtained for the general case without the separation assumption (A-2). However, this assumption simplifies the form of the functions G ˜ ( l , k , ε ) , ( l 0 , k 0 , ε 0 ) ( x ) and R ˜ v 0 , v ( x ) , which is used in the proofs. Furthermore, it is important to note that the unique solvability of the main Equation (6) was proved in [26] under the assumption that { λ l , k , β l , k } are the spectral data of some problems L k , k = 1 , , n 1 . In this case, the inverse operator ( I R ˜ ( x ) ) 1 can be found explicitly (see [26], Theorem 1). But, in this paper, we consider the main Equation (6) constructed using numbers { λ l , k , β l , k } that are not necessarily related to some matrix function F ( x ) and obtain sufficient conditions for the invertibility of operator ( I R ˜ ( x ) ) .

4. Main Equation Solvability

This section contains the proof of Theorem 1 on the unique solvability of the main Equation (6) under some simple conditions on the given data { λ l , k , β l , k } l 1 k = 1 , , n 1 . We emphasize that the numbers { λ l , k , β l , k } l 1 k = 1 , , n 1 are not assumed to be the spectral data of system (7). The proof relies on several auxiliary lemmas. The reader can skip their proofs to obtain the main idea. The central role in the proofs is played by meromorphic functions B j ( x , λ ) , j = 1 , , n , defined by (72). On the one hand, these functions are estimated as | λ | , and it is shown that their integrals over large contours tend to zero. On the other hand, those integrals are calculated using te residue theorem. This idea arises from the proofs for n = 2 (see Lemma 1.3.6 in [10] and Lemma 5.2 in [41]) and n = 3 (see Lemma 6.1 in [29]). However, the generalization to the case of arbitrary integer n requires considerable technical work.
Proof of Theorem 1. 
Fix x [ 0 , 1 ] . Consider the operator ( I R ˜ ( x ) ) satisfying the hypotheses of the theorem. By virtue of Proposition 4, the operator R ˜ ( x ) possesses the approximation property, so the Fredholm theorem can be applied. Therefore, it is sufficient to prove that the homogeneous equation
( I R ˜ ( x ) ) ζ ( x ) = 0 ,
has the only solution ζ ( x ) = 0 in m.
Let ζ ( x ) = [ ζ v ( x ) ] v V m be a solution of (46). This means that
ζ v 0 ( x ) = v V R ˜ v 0 , v ( x ) ζ v ( x ) , v 0 V .
Apply the following transform
z l , k , 0 ( x ) z l , k , 1 ( x ) : = w l , k ( x ) ξ l 1 0 1 ζ l , k , 0 ( x ) ζ l , k , 1 ( x ) ,
which is inverse to the transform in (44). Using (47) and (45), we obtain the infinite system
z l 0 , k 0 , ε 0 ( x ) = ( l , k , ε ) V ( 1 ) ε z l , k , ε ( x ) G ˜ ( l , k , ε ) , ( l 0 , k 0 , ε 0 ) ( x ) , ( l 0 , k 0 , ε 0 ) V ,
which is the homogeneous analog of (40). Since ζ ( x ) m , | ζ l , k , ε ( x ) | C . So,
| z l , k , ε ( x ) | C w l , k ( x ) , | z l , k , 0 ( x ) z l , k , 1 ( x ) | C ξ l w l , k ( x ) , ( l , k , ε ) V ,
where ξ l and w l , k ( x ) are defined in (42) and (43), respectively.
Introduce the functions
Z k 0 ( x , λ ) : = ( l , k , ε ) V ( 1 ) ε + n k β l , k , ε z l , k , ε ( x ) D ˜ n k + 1 , k 0 ( x , λ l , k , ε , λ ) , k 0 = 1 , , n ,
analogously to (41). It follows from (48) and (50) that
Z k + 1 ( x , λ l , k , ε ) = z l , k , ε ( x ) , ( l , k , ε ) V .
The following lemma shows that the functions { Z k ( x , λ ) } k = 1 n have the same growth for | λ | as the corresponding Weyl solutions { Φ k ( x , λ ) } k = 1 n (see Proposition 1).
Lemma 5. 
For k 0 = 1 , , n , the function Z k 0 ( x , λ ) satisfies the estimate
| Z k 0 ( x , ρ n ) | l = 1 k = 1 n 1 C ξ l | ρ | ( k 0 1 ) | exp ( ρ ω k 0 x ) | | ρ c k l | + 1 ,
C | ρ | ( k 0 1 ) | exp ( ρ ω k 0 x ) | , x [ 0 , 1 ] , ρ Γ ¯ s , ρ * , δ ,
for each fixed s = 1 , 2 n ¯ , a sufficiently small δ > 0 , and some ρ * > 0 , where the region Γ s , ρ * , δ was defined in (21). The roots { ω k } k = 1 n are numbered in the order (20) associated with the sector Γ s , and { c k } k = 1 n 1 are constants such that ρ l , k 0 c k l as l . (In view of the asymptotics (10), c k = π sin π k n ϵ k , where | ϵ k | = 1 , and arg ϵ k depends on Γ s ).
Proof. 
Let us estimate the series in (50), which can be represented as follows:
Z k 0 ( x ) = l = 1 k = 1 n 1 ( 1 ) n k ( ( β l , k , 0 β l , k , 1 ) z l , k , 0 ( x ) D ˜ n k + 1 , k 0 ( x , λ l , k , 0 , λ ) + β l , k , 1 ( z l , k , 0 ( x ) z l , k , 1 ( x ) ) D ˜ n k + 1 , k 0 ( x , λ l , k , 0 , λ ) + β l , k , 1 z l , k , 1 ( x ) ( D ˜ n k + 1 , k 0 ( x , λ l , k , 0 , λ ) D ˜ n k + 1 , k 0 ( x , λ l , k , 1 , λ ) ) .
We begin with the functions D ˜ n k + 1 , k 0 ( x , λ l , k , ε , λ ) . Due to (38) and (29), we have
D ˜ n k + 1 , k 0 ( x , λ l , k , ε , λ ) = Φ ˜ n k + 1 ( x , λ l , k , ε ) , Φ ˜ k 0 ( x , λ ) λ λ l , k , ε = 1 λ λ l , k , ε j = 0 n 1 ( 1 ) j Φ ˜ n k + 1 [ j ] ( x , λ l , k , ε ) Φ ˜ k 0 [ n j 1 ] ( x , λ ) .
The estimate of Proposition 1 is valid for Φ ˜ n k + 1 ( x , λ ) . Taking the asymptotics (10) for λ l , k , ε and the definition (43) into account, we obtain
| Φ ˜ n k + 1 [ j ] ( x , λ l , k , ε ) | C | ρ l , k , ε | j ( n k ) | exp ( ρ l , k , ε ω n k + 1 x ) | C l j n w l , k 1 ( x ) ,
where ρ l , k , ε = λ l , k , ε n .
Using (10), (55), (56), and Proposition 1, we have
| D ˜ n k + 1 , k 0 ( x , λ l , k , ε , λ ) | C j = 0 n 1 l j n w l , k 1 ( x ) | ρ | n j k 0 | exp ( ρ ω k 0 x ) | | ρ ρ l , k , ε | j = 0 n 1 | ρ | n j 1 l j C | ρ | ( k 0 1 ) l n w l , k 1 ( x ) | exp ( ρ ω k 0 x ) | | ρ ρ l , k 0 | , ρ Γ ¯ s , ρ * , δ , λ = ρ n .
It follows from (42) that | ρ l , k , 0 ρ l , k , 1 | C ξ l , where we choose the same branch of the root λ l , k , ε n for ε = 0 , 1 . Consequently, using the standard approach based on Schwarz’s Lemma (see [10], Lemmas 1.3.1 and 1.3.2), we estimate the difference as
| D ˜ n k + 1 , k 0 ( x , λ l , k , 0 , λ ) D ˜ n k + 1 , k 0 ( x , λ l , k , 1 , λ ) | C | ρ | ( k 0 1 ) ξ l l n w l , k 1 ( x ) | exp ( ρ ω k 0 x ) | | ρ ρ l , k 0 |
for ρ Γ ¯ s , ρ * , δ . In view of (11) and (42), we have
| β l , k , ε | C l n , | β l , k , 0 β l , k , 1 | C l n ξ l .
Using estimates (49), (57), (58), and (59), together with (54), we arrive at (52).
According to the asymptotics (10) and (11) for data { λ l , k , β l , k } l 1 , k = 1 , , n 1 and { λ ˜ l , k , β ˜ l , k } l 1 , k = 1 , , n 1 , we have { ξ l } l 2 . Hence, the Cauchy–Bunyakovsky–Schwartz inequality implies
l , k ξ l | ρ c k l | + 1 l ξ l 2 l , k 1 ( | ρ c k l | + 1 ) 2 C ,
which proves estimate (53). □
Our next goal is to study analytic properties of functions Z k ( x , λ ) , k = 1 , , n . For this purpose, we consider auxiliary functions
E ˜ k , k 0 ( x , μ , λ ) : = Φ ˜ k ( x , μ ) , C ˜ k 0 ( x , λ ) λ μ ,
z k 0 ( x , λ ) : = ( l , k , ε ) V ( 1 ) ε + n k β l , k , ε z l , k , ε ( x ) E ˜ n k + 1 , k 0 ( x , λ l , k , ε , λ ) .
Thus, functions z k 0 ( x , λ ) are defined analogously to Z k 0 ( x , λ ) by replacing Φ ˜ k 0 ( x , λ ) with C ˜ k 0 ( x , λ ) . It is easier to consider the functions z k 0 ( x , λ ) than Z k 0 ( x , λ ) because the functions { C k ˜ ( x , λ ) } k = 1 n are entire in λ . Without loss of generality, we assume that λ l , k , ε λ l 0 , k 0 , ε for l l 0 .
Lemma 6. 
For k 0 = 1 , , n and each fixed x [ 0 , 1 ] , function z k 0 ( x , λ ) is analytic in the λ-plane except for the simple poles { λ l , k , ε } l 1 , k = k 0 , n 1 ¯ . In particular, z n ( x , λ ) is entire in λ. Moreover,
Res λ = λ l , k , ε z k 0 ( x , λ ) = i = 1 s ( 1 ) ε + k 0 k i β l , k i , ε z l , k i , ε ( x ) M ˜ n k 0 + 1 , n k i + 1 ( λ l , k i , ε ) , ( l , k , ε ) V , k k 0 ,
where, for a fixed triple ( l , k , ε ) V and a fixed k 0 , { k i } i = 1 s is the set of all the indices such that λ l , k , ε = λ l , k i , ε and k i k 0 .
Proof. 
Using (30) and (60), we obtain
E ˜ n k + 1 , k 0 ( x , λ l , k , ε , λ ) = Φ ˜ n k + 1 ( x , λ l , k , ε ) , C ˜ k 0 ( x , λ ) | x = 0 λ λ l , k , ε + 0 x Φ ˜ n k + 1 ( t , λ l , k , ε ) C ˜ k 0 ( t , λ ) d t ,
where the integral is entire in λ . Using (29) and (18), we deduce
Φ ˜ n k + 1 ( x , λ l , k , ε ) , C ˜ k 0 ( x , λ ) | x = 0 = j = 0 n 1 ( 1 ) j Φ ˜ n k + 1 [ j ] ( 0 , λ l , k , ε ) C ˜ k 0 [ n j 1 ] ( 0 , λ ) = ( 1 ) n k 0 Φ ˜ n k + 1 [ n k 0 ] ( 0 , λ l , k , ε ) .
The relation (31) in the element-wise form implies that
Φ ˜ r ( x , λ ) = C ˜ r ( x , λ ) + j = r + 1 n M ˜ j , r ( λ ) C ˜ j ( x , λ ) , r = 1 , , n .
Taking the initial conditions (18) for C ˜ r ( x , λ ) into account, we conclude that
Φ ˜ n k + 1 [ n k 0 ] ( 0 , λ l , k , ε ) = M ˜ n k 0 + 1 , n k + 1 ( λ l , k , ε ) .
Note that this value equals zero for k < k 0 . Consequently, the function E ˜ n k + 1 , k 0 ( x , λ l , k , ε , λ ) is analytic in λ except for the simple pole λ l , k , ε if k k 0 and
Res λ = λ l , k , ε E ˜ n k + 1 , k 0 ( x , λ l , k , ε , λ ) = ( 1 ) n k 0 M ˜ n k 0 + 1 , n k + 1 ( λ l , k , ε ) .
Combining (61) and (63), we arrive at (62). □
Let us apply Lemma 6 to study the analytic properties of Z k ( x , λ ) .
Lemma 7. 
For each fixed k { 1 , 2 , , n 1 } and x [ 0 , 1 ] , the function Z k ( x , λ ) is analytic in λ except for the simple poles { λ l , k , 0 } l 1 . Moreover,
Res λ = λ l , k , 0 Z k ( x , λ ) = β l , k , 0 z l , k , 0 ( x ) , l 1 .
The function Z n ( x , λ ) is entire in λ.
Proof. 
It follows from Lemma 5 that the series (50) converges absolutely and uniformly for ρ on compact sets in Γ ¯ s , ρ * , δ and λ = ρ n . Consequently, the functions Z k 0 ( x , λ ) , k 0 = 1 , , n , are analytic for such values of λ . Moreover, these functions can be analytically continued inside the circles that are cut out in Γ ¯ s , ρ * , δ with a possible exception for the values of { λ l , k , ε } . Therefore, it remains to compute the residues of Z k 0 ( x , λ ) at these points.
Using relation (22) and Proposition 2, we obtain C ˜ ( x , λ ) = Φ ˜ ( x , λ ) ( M ˜ ( λ ) ) 1 and so
C ˜ k 0 ( x , λ ) = Φ ˜ k 0 ( x , λ ) + j = k 0 + 1 n ( 1 ) j k 0 M ˜ n k 0 + 1 , n j + 1 ( λ ) Φ ˜ j ( x , λ ) , k 0 = 1 , , n .
Substituting (65) into (50), we derive
Z k 0 ( x , λ ) = z k 0 ( x , λ ) j = k 0 + 1 n ( 1 ) j k 0 M ˜ n k 0 + 1 , n j + 1 ( λ ) Z j ( x , λ ) , k 0 = 1 , , n .
Let us prove the assertion of the lemma by induction for k 0 = n , n 1 , , 2 , 1 . For k 0 = n , function Z n ( x , λ ) z n ( x , λ ) is entire in λ by virtue of Lemma 6. Next, suppose that the assertion is already proved for Z k 0 + 1 ( x , λ ) , …, Z n ( x , λ ) . Let us prove it for Z k 0 ( x , λ ) . Fix ( l , k , ε ) V and let { k i } i = 1 s denote the set of all the indices such that λ l , k , ε = λ l , k i , ε , k i k 0 , as in the statement in Lemma 6. We consider two cases.
Case 1:  ε = 0 . In view of (36), λ l , k , 0 is a regular point of M ˜ n k 0 + 1 , n j + 1 ( λ ) . Using (62), (66) and the induction hypothesis, we obtain
Z k 0 , 1 ( x , λ l , k , 0 ) = i = 1 s ( 1 ) k 0 k i β l , k i , 0 z l , k i , 0 ( x ) M ˜ n k 0 + 1 , n k i + 1 ( λ l , k i , 0 ) i = 1 k i k 0 s ( 1 ) k i k 0 M ˜ n k 0 + 1 , n k i + 1 ( λ l , k i , 0 ) Z k i , 1 ( x , λ l , k i , 0 ) .
Thus, we obtain the formula (64) for λ l , k , 0 = λ l , k 0 , 0 , and Z k 0 , 1 ( x , λ l , k , 0 ) = 0 otherwise.
Case 2:  ε = 1 . From the induction hypothesis, the functions { Z j ( x , λ ) } j = k 0 + 1 n are analytic at λ l , k , 1 . The functions M ˜ n k 0 + 1 , n j + 1 ( λ ) have a pole λ l , k , 1 if j = k i + 1 , i = 1 , , s . Therefore, using (66) and (62), we obtain
Z k 0 , 1 ( x , λ l , k , 1 ) = i = 1 s ( 1 ) k 0 k i + 1 β l , k i , 1 z l , k i , 1 ( x ) M ˜ n k 0 + 1 , n k i + 1 ( λ l , k i , 1 ) i = 1 s ( 1 ) k i k 0 + 1 M ˜ n k 0 + 1 , n k i , 1 ( λ l , k i , 1 ) Z k i + 1 ( x , λ l , k i , 1 ) .
By Lemma 1
λ l , k i , 1 = λ ˜ l , n k i , β l , k i , 1 = β ˜ l , n k i .
Substituting (51) and (68) into (67), we arrive at the relation
Z k 0 , 1 ( x , λ l , k , 1 ) = i = 1 s ( 1 ) k 0 k i + 1 z l , k i , 1 ( x ) × β ˜ l , n k i M ˜ n k i + 1 , n k i , 0 ( λ ˜ l , n k i ) M ˜ n k 0 + 1 , n k i ( λ ˜ l , n k i ) .
It follows from (23) that
M ˜ 0 ( λ ˜ l , n k i ) N ˜ ( λ ˜ l , n k i ) = M ˜ 1 ( λ ˜ l , n k ) .
By virtue of Lemma 1, F ˜ F n , s i m p , so matrices N ˜ ( λ ˜ l , n k i ) have a special structure (24). Therefore, the relation (70) implies that the expression in the brackets in (69) vanishes. Hence, λ l , k , 1 is a regular point of Z k 0 ( x , λ ) . By induction, this concludes the proof. □
We need to show that z l , k , ε ( x ) = 0 for all ( l , k , ε ) V . Fio this purpose, we use the following two lemmas.
Lemma 8. 
If z l , n p , 0 ( x ) = 0 for all l 1 , then z l , k , ε ( x ) = 0 for ( l , k , ε ) V , k = n p , , n 1 .
Proof. 
Suppose that z l , k , 0 ( x ) = 0 for some k n p and all l 1 . In view of (66) and Lemma 7, function Z k + 1 ( x , λ ) has zeros { λ l , k , 0 } l 1 and poles { λ l , k + 1 , 0 } l 1 . Denote
d j ( λ ) : = l = 1 1 λ λ l , j , 0 , j = 1 , , n 1 , d n ( λ ) : = 1 .
For simplicity, we assume that λ l , j , 0 0 . The opposite case requires minor changes. Thus, the function
G k + 1 ( x , λ ) : = Z k + 1 ( x , λ ) d k + 1 ( λ ) d k ( λ )
is entire in λ .
Since { λ l , j , 0 } satisfies the asymptotics (10), then the asymptotic properties of the function d j ( λ ) are analogous to those of Δ j , j ( λ ) , which is the characteristic function of the boundary value problem L j with boundary conditions (2). Consequently, one can show that
d j ( ρ n ) ρ s j n ( n 1 ) 2 exp ( ρ ( ω j + 1 + ω j + 2 + + ω n ) ) , ρ Γ ¯ s , ρ * , δ ,
where the notation f ( ρ ) g ( ρ ) means
C 1 | g ( ρ ) | | f ( ρ ) | C 2 | g ( ρ ) | , C 1 , C 2 > 0 ,
and s j is the sum of all the orders in the boundary conditions (2):
s j : = j ( j 1 ) 2 + ( n j ) ( n j 1 ) 2 .
Using the estimates (53) and (71), we have
| G k + 1 ( x , λ ) | C | ρ | ( n k ) | exp ( ρ ω k + 1 ( x 1 ) ) | , λ = ρ n , ρ Γ ¯ s , | ρ | ρ * .
Since k n p , then exp ( ρ ω k + 1 ( x 1 ) ) is bounded as | ρ | . Hence, G k + 1 ( x , λ ) 0 as | λ | . From Liouville’s theorem, G k + 1 ( x , λ ) 0 , so Z k + 1 ( x , λ ) 0 . Consequently, it follows from (64), (66), and the assumption β l , k + 1 0 that
z l , k , 1 ( x ) = Z k + 1 ( x , λ l , k , 1 ) = 0 , k < n , z l , k + 1 , 0 ( x ) = 1 β l , k + 1 , 0 Z k + 1 , 1 ( x , λ l , k + 1 , 0 ) = 0 , k < n 1 .
Through induction, this implies the assertion of the lemma. □
Lemma 9. 
If z l , p , 0 ( x ) = 0 for all l 1 , then z l , k , ε ( x ) = 0 for ( l , k , ε ) V , k = 1 , , p 1 .
Proof. 
Suppose that z l , k , 0 ( x ) = 0 for some k { 2 , 3 , , p } and all l 1 . Then, it follows from Lemma 7 that Z k ( x , λ ) is entire. On the other hand, the estimate (53) implies that Z k ( x , λ ) is bounded in the whole λ -plane. Hence,  Z k ( x , λ ) 0 , so the relation (66) implies that z l , k 1 , ε ( x ) = 0 , l 1 , ε { 0 , 1 } . Induction yields the assertion of the lemma. □
Introduce the following auxiliary functions
B j ( x , λ ) : = Z j ( x , λ ) Z n j + 1 ( x , ( 1 ) n λ ¯ ) ¯ , j = 1 , , n .
Lemma 10. 
There exists a sequence of circles λ C : | λ | = Θ v with radii Θ v such that
lim v | λ | = Θ v B j ( x , λ ) d λ = 0 , j = 1 , , n .
Proof. 
Fix j { 1 , 2 , , n } . The estimate (52) implies
| Z j ( x , ρ n ) Z n j + 1 ( x , ( 1 ) n ρ n ¯ ) ¯ | C | ρ | ( n 1 ) l , k ξ l | ρ c k l | + 1 2 , ρ Γ ¯ s , ρ * , δ .
Choose radii θ r such that
{ ρ C : | ρ | = θ r } Γ ¯ s , ρ * , δ , θ r + 1 θ r > 1 , r 1 ,
for s = 1 , 2 , ρ * , and δ , for which the estimate (74) holds. Denote by n r , k the closest integer to θ r | c k | . Then
| B j ( x , ρ n ) | C | ρ | ( n 1 ) l , k ξ l | n r , k l | + 1 2 , | ρ | = θ r .
For simplicity, suppose that { ξ l } l 1 . Then,
| B j ( x , ρ n ) | C | ρ | ( n 1 ) l ( ξ l ) 2 l , k ( ξ l ) 2 ( | n r , k l | + 1 ) 2 C | ρ | ( n 1 ) k = 1 n 1 g r , k , | ρ | = θ r ,
where
g r , k : = l = 1 ξ l ( | n r , k l | + 1 ) 2 .
Clearly,
r = 1 g r , k l = 1 ξ l r = 1 1 ( | n r l | + 1 ) 2 C u = 1 1 u 2 < , k = 1 , , n 1 .
Hence, for each fixed k { 1 , 2 , , n 1 } , we have { g r , k } r 1 l 1 . Therefore, one can choose a subsequence { r v } v 1 such that g r v , k = o ( r v 1 ) as v . This implies that g r v , k = o ( θ r v 1 ) , v . Put Θ v : = θ r v n . Then, B j ( x , λ ) = o ( | λ | 1 ) for | λ | = Θ v , v . This yields the claim for the case { ξ l } l 1 .
If { ξ l } l 1 , then one can apply the technique of [41] to show that
{ z l , k , ε ( x ) w l , k 1 ( x ) } l 2 , { ( z l , k , 0 ( x ) z l , k , 1 ( x ) ) w l , k 1 ( x ) } l 1 .
Using these relations, one can derive the estimate
| B j ( x , ρ n ) | C | ρ | ( n 1 ) l , k κ l | n r , k l | + 1 2 , | ρ | = θ r ,
with some sequence { κ l } l 1 . Then, the proof of the lemma can be completed analogously to that for case { ξ l } l 1 . □
Lemma 11. 
The following relation holds:
Res λ = λ l , j , 0 B j ( x , λ ) = Res λ = λ l , j , 0 B j + 1 ( x , λ ) = β l , j , 0 z l , j , 0 ( x ) z l , n j , 0 ( x ) ¯ , j = 1 , , n 1 .
At all the other points, the functions B j ( x , λ ) are analytical in λ for each fixed x [ 0 , 1 ] .
Proof. 
The assertion of the lemma immediately follows from (13), (66), (72) and Lemma 7. □
Proceed to the proof of Theorem 1. We consider two cases.
Case  n = 2 p . Introduce the following function
B ( x , λ ) : = j = 1 p ( 1 ) j B j ( x , λ ) .
From Lemma 10,
lim v | λ | = Θ v B ( x , λ ) d λ = 0 .
Lemma 11 implies that B ( x , λ ) has the only poles { λ l , p , 0 } l 1 and
Res λ = λ l , p , 0 B ( x , λ ) = ( 1 ) p β l , p , 0 z l , p , 0 ( x ) z l , p , 0 ( x ) ¯ .
Therefore, calculating the integrals in (75) using residue theorem, we obtain
l = 1 β l , p , 0 | z l , p , 0 ( x ) | 2 = 0 .
By the hypothesis in Theorem 1, we have ( 1 ) p + 1 β l , p , 0 > 0 . This implies that z l , p , 0 ( x ) = 0 for all l 1 . Applying Lemmas 8 and 9, we conclude that z l , k , ε ( x ) = 0 for all ( l , k , ε ) V .
Case  n = 2 p + 1 . Calculating the integral in (73) using residue theorem and using Lemma 11, we obtain, via induction for j = 1 , , p , that
l = 1 Res λ = λ l , j , 0 B j ( x , λ ) = l = 1 β l , j , 0 z l , j , 0 ( x ) z l , n j , 0 ( x ) ¯ = 0 .
Consider the radii Θ v , v 1 , from Lemma 10. Let Υ v denote the boundary of the half-circle
λ C : | λ | < Θ v , ( 1 ) p + 1 R e λ > 0 .
By virtue of Lemma 11, function B p + 1 ( x , λ ) has poles { λ l , p , 0 } and { λ l , p + 1 , 0 } . By the hypotheses in Theorem 1, we have λ l , p + 1 , 0 = λ l , p , 0 ¯ and ( 1 ) p + 1 R e λ l , p , 0 > 0 , so ( 1 ) p + 1 R e λ l , p + 1 , 0 < 0 . Therefore, using residue theorem, Lemma 11, and (76), we obtain
lim v 1 2 π i Υ v B p + 1 ( x , λ ) d λ = lim v | λ l , p , 0 | < Θ v Res λ = λ l , p , 0 B p + 1 ( x , λ ) = l = 1 β l , p , 0 z l , p , 0 ( x ) z l , p + 1 , 0 ( x ) ¯ = 0 .
On the other hand, Υ v = [ i Θ v , i Θ v ] Υ v + , where Υ v + is the arc { | λ | = Θ v , 0 ( 1 ) p + 1 arg λ π } , and
lim v 1 2 π i Υ v + B p + 1 ( x , λ ) d λ = 0 .
Consequently,
i i B p + 1 ( x , λ ) d λ = 0 .
Using (72) and setting λ = i τ , we arrive at the following relation
| Z p + 1 ( x , i τ ) | 2 d τ = 0 ,
which implies Z p + 1 ( x , λ ) 0 . The relations (66) and (64) imply that z l , p , ε ( x ) = 0 , ε = 0 , 1 , and z l , p + 1 , 0 ( x ) = 0 , respectively, for all l 1 . Using Lemmas 8 and 9, we conclude that z l , k , ε ( x ) = 0 for all ( l , k , ε ) V .
Thus, in both cases n = 2 p and n = 2 p + 1 , we obtain ζ ( x ) = 0 , which finishes the proof of Theorem 1.

5. Solution of the Inverse Spectral Problem

In this section, we apply Theorem 1 to obtain necessary and sufficient conditions for the solvability of an inverse spectral problem for Equation (1) with the coefficients τ ν W 2 ν 1 [ 0 , 1 ] , ν = 0 , , n 2 . In other words,
  • τ 0 belongs to the space of generalized functions W 2 1 [ 0 , 1 ] , whose antiderivatives belong to L 2 [ 0 , 1 ] .
  • τ 1 belongs to W 2 0 [ 0 , 1 ] = L 2 [ 0 , 1 ] .
  • For k 1 , τ k + 1 belongs to the Sobolev space W 2 k [ 0 , 1 ] of functions f ( x ) such that f ( k ) L 2 [ 0 , 1 ] .
On the one hand, we can reduce Equation (1) to the form (5), where
p s = k = s / 2 min { s , n / 2 1 } C k s k τ 2 k ( 2 k s ) + τ 2 k + 1 ( 2 k s + 1 ) + k = ( s 1 ) / 2 min { s , ( n 1 ) / 2 } 1 2 C k s k 1 τ 2 k + 1 ( 2 k + 1 s ) ,
C k j : = k ! j ! ( k j ) ! are the binomial coefficients; the notations a and a mean the rounding of a down and up, respectively; and τ n 1 = 0 . Clearly, p s W 2 s 1 [ 0 , 1 ] , s = 0 , , n 2 .
On the other hand, Equation (1) can be transformed into the first-order system (7). For this purpose, we apply the results of [27], where associated matrices $F(x)$ were constructed for differential expressions of the following general form with various singularity orders $\{i_{\nu}\}_{\nu=0}^{n-2}$:
$$y^{(n)} + \sum_{k=0}^{\lfloor n/2 \rfloor - 1} (-1)^{i_{2k}+k} \left( \sigma_{2k}^{(i_{2k})}(x)\, y^{(k)} \right)^{(k)} + \sum_{k=0}^{\lceil (n-1)/2 \rceil - 1} (-1)^{i_{2k+1}+k+1} \left[ \left( \sigma_{2k+1}^{(i_{2k+1})}(x)\, y^{(k)} \right)^{(k+1)} + \left( \sigma_{2k+1}^{(i_{2k+1})}(x)\, y^{(k+1)} \right)^{(k)} \right],$$
where $\{\sigma_{\nu}(x)\}_{\nu=0}^{n-2}$ are regular functions on $(0,1)$. For the differential expression $\ell_{n}(y)$ in (1) with $\tau_{\nu} \in W_{2}^{\nu-1}[0,1]$, one can set $i_{0} := 1$, $\sigma_{0} := -\tau_{0}^{(-1)}$, $i_{\nu} := 0$, and $\sigma_{\nu} := (-1)^{\lfloor \nu/2 \rfloor + \nu} \tau_{\nu}$ for $\nu \ge 1$. Then, according to the results of [27] (Section 2), the associated matrix $F(x)$ can be obtained as follows. Define the matrix function $Q(x) = [q_{k,j}(x)]_{k,j=0}^{p}$, $p = \lfloor n/2 \rfloor$, by the relations
$$q_{0,1} := \sigma_{0} + \sigma_{1}, \quad q_{1,0} := \sigma_{0} - \sigma_{1}, \quad q_{k,k} := \sigma_{2k}, \ k = 1, \dots, p-1, \quad q_{k,k+1} := \sigma_{2k+1}, \quad q_{k+1,k} := -\sigma_{2k+1}, \ k = 1, \dots, n-p-2.$$
For $n \ge 3$, construct $F(x) = [f_{k,j}(x)]_{k,j=1}^{n}$ by the formulas
$$f_{k,j} := (-1)^{k+n+1}\, q_{j-1,\, n-k}, \quad k = p+1, \dots, n, \ \ j = 1, \dots, n-p, \qquad f_{k,k+1} := 1, \quad k = 1, \dots, n-1.$$
All the other entries of Q ( x ) and F ( x ) are assumed to be zero. For example,
$$n = 4\colon \quad Q(x) = \begin{pmatrix} 0 & \sigma_{0}+\sigma_{1} & 0 \\ \sigma_{0}-\sigma_{1} & \sigma_{2} & 0 \\ 0 & 0 & 0 \end{pmatrix}, \qquad F(x) = \begin{pmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ \sigma_{0}+\sigma_{1} & \sigma_{2} & 0 & 1 \\ 0 & -(\sigma_{0}-\sigma_{1}) & 0 & 0 \end{pmatrix},$$
$$n = 5\colon \quad Q(x) = \begin{pmatrix} 0 & \sigma_{0}+\sigma_{1} & 0 \\ \sigma_{0}-\sigma_{1} & \sigma_{2} & \sigma_{3} \\ 0 & -\sigma_{3} & 0 \end{pmatrix}, \qquad F(x) = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & -\sigma_{3} & 0 & 1 & 0 \\ \sigma_{0}+\sigma_{1} & \sigma_{2} & -\sigma_{3} & 0 & 1 \\ 0 & -(\sigma_{0}-\sigma_{1}) & 0 & 0 & 0 \end{pmatrix}.$$
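As a further illustration of formulas (79) (this case is not displayed above and is included here only for comparison), for $n = 3$ the same construction gives
$$Q(x) = \begin{pmatrix} 0 & \sigma_{0}+\sigma_{1} \\ \sigma_{0}-\sigma_{1} & 0 \end{pmatrix}, \qquad F(x) = \begin{pmatrix} 0 & 1 & 0 \\ \sigma_{0}+\sigma_{1} & 0 & 1 \\ 0 & -(\sigma_{0}-\sigma_{1}) & 0 \end{pmatrix}.$$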
The construction for n = 2 is different, see (9).
Since $\sigma_{\nu} \in L_{2}[0,1]$ for $\nu \ge 0$, we have $F \in F_{n}$. Define the quasi-derivatives $y^{[k]}$ and the domain $D_{F}$ using (15) and (16), respectively. Then, by Theorem 2.2 in [27], the relation $\ell_{n}(y) = y^{[n]}$ holds for any $y \in D_{F}$, which provides the regularization of the differential expression $\ell_{n}(y)$. Thus, Equation (1) can be represented as (17) or as the first-order system (7). Note that there are different ways to choose an associated matrix $F(x)$ for the regularization of $\ell_{n}(y)$. In particular, one can use the regularization of Mirzoev and Shkalikov [33,35] or choose other singularity orders $i_{0} \ge 1$ and $i_{\nu} \ge 0$, $\nu \ge 1$, to represent $\ell_{n}(y)$ in the form (78). For definiteness, we use the associated matrix constructed by formulas (79).
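The following minimal symbolic sketch (with illustrative names, not taken from [27]) checks the relation $\ell_{4}(y) = y^{[4]}$ for $n = 4$ and smooth coefficients. It assumes the usual convention that the rows of $F(x)$ define the quasi-derivatives by $y^{[k]} = (y^{[k-1]})' - \sum_{j} f_{k,j}\, y^{[j-1]}$ (the superdiagonal unit entry producing $y^{[k]}$ itself), and it denotes by $T_0$ an antiderivative of $\tau_0$.

```python
# A sketch only: verify symbolically that, for n = 4 and smooth tau_0, tau_1, tau_2,
# the quasi-derivatives generated by the matrix F(x) above satisfy y^[4] = l_4(y).
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')(x)
tau0 = sp.Function('tau0')(x)
tau1 = sp.Function('tau1')(x)
tau2 = sp.Function('tau2')(x)
T0 = sp.Function('T0')(x)                     # plays the role of tau_0^(-1): T0' = tau0

sigma0, sigma1, sigma2 = -T0, -tau1, -tau2    # sigma's as defined above

# quasi-derivatives built row by row from F(x) for n = 4
y0 = y
y1 = sp.diff(y0, x)
y2 = sp.diff(y1, x)
y3 = sp.diff(y2, x) - (sigma0 + sigma1)*y0 - sigma2*y1
y4 = sp.diff(y3, x) + (sigma0 - sigma1)*y1    # row 4 of F contains the entry -(sigma0 - sigma1)

# the differential expression (1) for n = 4
l4 = sp.diff(y, x, 4) + sp.diff(tau2*sp.diff(y, x), x) \
     + sp.diff(tau1*y, x) + tau1*sp.diff(y, x) + tau0*y

residual = sp.expand(y4 - l4).subs(sp.Derivative(T0, x), tau0)
print(sp.simplify(residual))                  # expected output: 0
```

Under these assumptions, the printed residual vanishes identically, which is exactly the relation $\ell_{4}(y) = y^{[4]}$ specialized to regular coefficients.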
Consider the boundary value problems $L_{k}$, $k = 1, \dots, n-1$, for Equation (1) with the boundary conditions (2). We write $\{\tau_{\nu}\}_{\nu=0}^{n-2} \in W_{\mathrm{simp}}$ if $\tau_{\nu} \in W_{2}^{\nu-1}[0,1]$, $\nu = 0, \dots, n-2$, and the corresponding eigenvalues $\{\lambda_{l,k}\}_{l \ge 1}$, $k = 1, \dots, n-1$, satisfy (A-1) and (A-2). Consider the following inverse spectral problem.
Inverse Problem 1. Given the spectral data $\{\lambda_{l,k}, \beta_{l,k}\}_{l \ge 1}$, $k = 1, \dots, n-1$, find the coefficients $\{\tau_{\nu}\}_{\nu=0}^{n-2} \in W_{\mathrm{simp}}$.
The results of [26,28], together with the reduction to the form (5), lead to the following uniqueness proposition for Inverse Problem 1.
Proposition 6. 
The spectral data $\{\lambda_{l,k}, \beta_{l,k}\}_{l \ge 1}$, $k = 1, \dots, n-1$, uniquely determine the coefficients $\{\tau_{\nu}\}_{\nu=0}^{n-2} \in W_{\mathrm{simp}}$.
Proof. 
It has been proved in [26,28] that, under assumptions (A-1) and (A-2), the spectral data $\{\lambda_{l,k}, \beta_{l,k}\}_{l \ge 1}$, $k = 1, \dots, n-1$, for Equation (5) uniquely specify the coefficients $p_{s} \in W_{2}^{s-1}[0,1]$, $s = 0, \dots, n-2$. Furthermore, the relation (77) implies a bijection between $\{p_{s}\}_{s=0}^{n-2}$ and $\{\tau_{\nu}\}_{\nu=0}^{n-2}$ in the corresponding functional spaces, which yields the claim. □
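For illustration, in the simplest higher-order case $n = 4$, the map $\{\tau_{\nu}\} \mapsto \{p_{s}\}$ defined by (77) is triangular and can be inverted step by step:
$$\tau_{2} = p_{2}, \qquad \tau_{1} = \tfrac{1}{2}\left( p_{1} - p_{2}' \right), \qquad \tau_{0} = p_{0} - \tau_{1}',$$
and the regularity $p_{2} \in W_{2}^{1}[0,1]$, $p_{1} \in L_{2}[0,1]$, $p_{0} \in W_{2}^{-1}[0,1]$ translates back into $\tau_{\nu} \in W_{2}^{\nu-1}[0,1]$.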
Moreover, Theorem 2 in [28] implies the following sufficient conditions for the existence of a solution of Inverse Problem 1.
Proposition 7. 
Let complex numbers $\{\lambda_{l,k}, \beta_{l,k}\}_{l \ge 1}$, $k = 1, \dots, n-1$, satisfy (A-1), (A-2), and $\beta_{l,k} \ne 0$ for all $l, k$. Suppose that there exists a model problem with coefficients $\{\tilde{\tau}_{\nu}\}_{\nu=0}^{n-2} \in W_{\mathrm{simp}}$ such that:
1. 
$\{l^{n-2} \xi_{l}\}_{l \ge 1} \in l_{2}$, where the numbers $\xi_{l}$ were defined in (42).
2. 
The operator $(I - \tilde{R}(x))$, which is constructed using $\{\lambda_{l,k}, \beta_{l,k}\}_{l \ge 1}$, $k = 1, \dots, n-1$, and the model problem according to Section 3, has a bounded inverse operator for each fixed $x \in [0,1]$.
Then, there exists a unique solution $\{\tau_{\nu}\}_{\nu=0}^{n-2} \in W_{\mathrm{simp}}$ of Inverse Problem 1 for the data $\{\lambda_{l,k}, \beta_{l,k}\}_{l \ge 1}$, $k = 1, \dots, n-1$.
A disadvantage of Proposition 7 is the requirement of the existence of the bounded operator $(I - \tilde{R}(x))^{-1}$. In general, it is difficult to verify this condition. However, in the self-adjoint case, we can apply Theorem 1 for this purpose.
Let $W_{\mathrm{simp}}^{+}$ denote the class of coefficients $\{\tau_{\nu}\}_{\nu=0}^{n-2} \in W_{\mathrm{simp}}$ such that the functions $i^{n+\nu} \tau_{\nu}$ are real-valued for $\nu = \overline{0, n-2}$. Then, the associated matrix $F(x)$, which is constructed by formulas (79), belongs to $F_{n,\mathrm{simp}}^{+}$. Therefore, combining Theorem 1 and Proposition 7, we immediately arrive at Theorem 2.
Remark 2. 
For even n, the conditions of Theorem 2 are necessary and sufficient. Indeed, concerning the necessity, condition (14) for even n holds by virtue of Lemma 2, and a model problem can be chosen as $\tilde{\tau}_{\nu} := \tau_{\nu}$, $\nu = 0, \dots, n-2$. For odd n, the only “gap” between the necessary and sufficient conditions is the requirement $(-1)^{p+1} \operatorname{Re} \lambda_{l,p} > 0$, which plays an important role in the proof of Theorem 1.
Note that the assumption $\{l^{n-2} \xi_{l}\} \in l_{2}$ implies the asymptotics (10) and (11) for $\{\lambda_{l,k}, \beta_{l,k}\}_{l \ge 1}$, $k = 1, \dots, n-1$, because similar asymptotics hold for $\{\tilde{\lambda}_{l,k}, \tilde{\beta}_{l,k}\}_{l \ge 1}$, $k = 1, \dots, n-1$. However, the condition $\{l^{n-2} \xi_{l}\} \in l_{2}$ is stronger. In order to achieve it, one has to find a model problem with coefficients $\tilde{c}_{j,k} = c_{j,k}$ and $\tilde{d}_{j,k} = d_{j,k}$ in the sharp asymptotics
$$\lambda_{l,k} = l^{n} \left( c_{0,k} + c_{1,k} l^{-1} + c_{2,k} l^{-2} + \dots + c_{n-1,k} l^{-(n-1)} + l^{-(n-1)} \varkappa_{l,k} \right), \qquad \beta_{l,k} = n \lambda_{l,k} \left( 1 + d_{1,k} l^{-1} + d_{2,k} l^{-2} + \dots + d_{n-2,k} l^{-(n-2)} + l^{-(n-2)} \eta_{l,k} \right),$$
where $\{\varkappa_{l,k}\}, \{\eta_{l,k}\} \in l_{2}$. This task is solved explicitly for $n = 2, 3, 4$ in the next section; however, for higher orders, it becomes technically complicated.

6. Examples

In this section, we consider Inverse Problem 1 for $n = 2, 3, 4$ and $\{\tau_{\nu}\}_{\nu=0}^{n-2} \in W_{\mathrm{simp}}^{+}$. We obtain corollaries of Theorem 2 on the characterization of the spectral data for these cases. For $n = 2$ and $n = 3$, our results coincide with the results of [42] and [29], respectively. For $n = 4$, our result (Theorem 3) is novel.

6.1. Second Order

For $n = 2$, Equation (1) turns into the Sturm–Liouville equation
$$y'' + \tau_{0} y = \lambda y, \quad x \in (0,1),$$
where $\tau_{0}$ is a real-valued potential of the class $W_{2}^{-1}[0,1]$. Then, we only have the problem $L_{1}$ with the Dirichlet boundary conditions
$$y(0) = y(1) = 0.$$
It is well known (see, e.g., [42]) that the corresponding eigenvalues $\lambda_{l,1} =: \lambda_{l}$ are real and simple. Furthermore, $\beta_{l} := \beta_{l,1} = \left( \int_{0}^{1} y_{l}^{2}(x)\, dx \right)^{-1}$, where $\{y_{l}(x)\}_{l \ge 1}$ are the eigenfunctions of the problem $L_{1}$, normalized by the condition $y_{l}^{[1]}(0) = 1$. The asymptotics (10) and (11) take the form
$$\lambda_{l} = -(\pi l + \varkappa_{l})^{2}, \qquad \beta_{l} = 2 (\pi l)^{2} (1 + \eta_{l}), \qquad l \ge 1, \quad \{\varkappa_{l}\}, \{\eta_{l}\} \in l_{2}.$$
Therefore, choosing any real-valued model potential $\tilde{\tau}_{0} \in W_{2}^{-1}[0,1]$, we obtain $\{\tilde{\tau}_{0}\} \in W_{\mathrm{simp}}^{+}$ and $\{\xi_{l}\} \in l_{2}$. Hence, Theorem 2 implies the following corollary, which is equivalent to the spectral data characterization in [42].
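For illustration, one may take the zero model potential: for $\tilde{\tau}_{0} \equiv 0$ and the normalization $\tilde{y}_{l}'(0) = 1$, a direct computation gives
$$\tilde{y}_{l}(x) = \frac{\sin \pi l x}{\pi l}, \qquad \tilde{\lambda}_{l} = -(\pi l)^{2}, \qquad \tilde{\beta}_{l} = \left( \int_{0}^{1} \tilde{y}_{l}^{2}(x)\, dx \right)^{-1} = 2 (\pi l)^{2},$$
so, by (82), the differences $\lambda_{l} - \tilde{\lambda}_{l}$ and $\beta_{l} - \tilde{\beta}_{l}$ have the $l_{2}$-type smallness required for $\{\xi_{l}\} \in l_{2}$.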
Corollary 1. 
For numbers $\{\lambda_{l}, \beta_{l}\}_{l \ge 1}$ to be the spectral data of the Sturm–Liouville problem (80) and (81) with a real-valued potential $\tau_{0} \in W_{2}^{-1}[0,1]$, it is necessary and sufficient that they satisfy the asymptotics (82) and the conditions
$$\lambda_{l} \in \mathbb{R}, \qquad \lambda_{l} \ne \lambda_{l_{0}} \ (l \ne l_{0}), \qquad \beta_{l} > 0, \quad l \ge 1.$$

6.2. Third Order

For $n = 3$, Equation (1) takes the form
$$y''' + (\tau_{1} y)' + \tau_{1} y' + \tau_{0} y = \lambda y, \quad x \in (0,1),$$
where the functions $i \tau_{0}(x)$ and $\tau_{1}(x)$ are real-valued, $\tau_{0} \in W_{2}^{-1}[0,1]$, and $\tau_{1} \in L_{2}[0,1]$. Then, we have two boundary value problems:
$$L_{1}\colon \ y(0) = 0, \quad y(1) = y'(1) = 0; \qquad L_{2}\colon \ y(0) = y'(0) = 0, \quad y(1) = 0,$$
and the corresponding spectral data satisfy the following asymptotics (see [29]):
$$\lambda_{l,k} = (-1)^{k+1} \left( \frac{2\pi}{\sqrt{3}} \left( l + \frac{1}{6} \right) - \frac{\theta}{2 \pi^{2}\, l} + \frac{\varkappa_{l,k}}{l} \right)^{3}, \qquad \beta_{l,k} = 3 \lambda_{l,k} \left( 1 + \frac{\eta_{l,k}}{l} \right),$$
where $l \ge 1$, $k = 1, 2$, $\theta = \int_{0}^{1} \tau_{1}(x)\, dx$, and $\{\varkappa_{l,k}\}, \{\eta_{l,k}\} \in l_{2}$. The coefficient θ can be found from the eigenvalue asymptotics. By choosing a model problem $\{\tilde{\tau}_{0}, \tilde{\tau}_{1}\} \in W_{\mathrm{simp}}^{+}$ with $\int_{0}^{1} \tilde{\tau}_{1}(x)\, dx = \theta$, we achieve $\{l\, \xi_{l}\}_{l \ge 1} \in l_{2}$. Consequently, Theorem 2 implies the following corollary, which is a special case of Theorem 2.5 in [29].
Corollary 2. 
Let complex numbers $\{\lambda_{l,k}, \beta_{l,k}\}_{l \ge 1}$, $k = 1, 2$, satisfy the assumptions (A-1), (A-2), $\lambda_{l,1} = -\overline{\lambda_{l,2}}$, $\beta_{l,1} = -\overline{\beta_{l,2}}$, $\operatorname{Re} \lambda_{l,1} > 0$, $\beta_{l,1} \ne 0$ for $l \ge 1$, and the asymptotics (83) with a real coefficient θ. Then, there exists a unique solution $\{\tau_{0}, \tau_{1}\}$ of Inverse Problem 1 with the spectral data $\{\lambda_{l,k}, \beta_{l,k}\}_{l \ge 1}$, $k = 1, 2$.
Note that, in [29], the inverse problem was investigated in a more general setting, in which assumption (A-2) can be violated.

6.3. Fourth Order

Consider Equation (1) for $n = 4$:
$$y^{(4)} + (\tau_{2}(x) y')' + (\tau_{1}(x) y)' + \tau_{1}(x) y' + \tau_{0}(x) y = \lambda y, \quad x \in (0,1),$$
where $\{\tau_{0}, \tau_{1}, \tau_{2}\} \in W_{\mathrm{simp}}^{+}$. This means that $\tau_{0} \in W_{2}^{-1}[0,1]$, $\tau_{1} \in L_{2}[0,1]$, $\tau_{2} \in W_{2}^{1}[0,1]$, and the functions $\tau_{0}$, $i \tau_{1}$, and $\tau_{2}$ are real-valued. The spectral data $\{\lambda_{l,k}, \beta_{l,k}\}_{l \ge 1}$, $k = 1, 2, 3$, are associated with the boundary value problems $L_{k}$, $k = 1, 2, 3$, for Equation (84) with the following boundary conditions:
$$L_{1}\colon \ y(0) = 0, \quad y(1) = y'(1) = y''(1) = 0; \qquad L_{2}\colon \ y(0) = y'(0) = 0, \quad y(1) = y'(1) = 0; \qquad L_{3}\colon \ y(0) = y'(0) = y''(0) = 0, \quad y(1) = 0.$$
Theorem 2 and the results of [43] together imply the following theorem on the spectral data characterization for the fourth-order Equation (84).
Theorem 3. 
For complex numbers $\{\lambda_{l,k}, \beta_{l,k}\}_{l \ge 1}$, $k = 1, 2, 3$, to be the spectral data corresponding to coefficients $\{\tau_{0}, \tau_{1}, \tau_{2}\} \in W_{\mathrm{simp}}^{+}$, it is necessary and sufficient that they fulfill the conditions (A-1), (A-2), (13), $\beta_{l,2} < 0$, and $\beta_{l,2 \pm 1} \ne 0$ for all $l \ge 1$, together with the asymptotic relations
$$\lambda_{l,2 \pm 1} = -\left( \left( \frac{2\pi l + \frac{\pi}{2}}{\sqrt{2}} \right)^{4} - \theta \left( \frac{2\pi l + \frac{\pi}{2}}{\sqrt{2}} \right)^{2} + \frac{t_{0} + t_{1} \pm 4\sigma}{\sqrt{2}} \cdot \frac{2\pi l + \frac{\pi}{2}}{\sqrt{2}} + l\, \varkappa_{l,2 \pm 1} \right),$$
$$\lambda_{l,2} = \left( \pi l + \frac{\pi}{2} \right)^{4} - \theta \left( \pi l + \frac{\pi}{2} \right)^{2} + (t_{0} + t_{1}) \left( \pi l + \frac{\pi}{2} \right) + l\, \varkappa_{l,2},$$
$$\beta_{l,2 \pm 1} = 4 \lambda_{l,2 \pm 1} \left( 1 + \frac{t_{0} + \theta}{8 (\pi l)^{2}} + \frac{\eta_{l,2 \pm 1}}{l^{2}} \right), \qquad \beta_{l,2} = -4 \lambda_{l,2} \left( 1 + \frac{t_{0} + 2\theta}{4 (\pi l)^{2}} + \frac{\eta_{l,2}}{l^{2}} \right),$$
where $\{\varkappa_{l,k}\}, \{\eta_{l,k}\} \in l_{2}$ and
$$\theta = \int_{0}^{1} \tau_{2}(x)\, dx, \qquad t_{0} = \tau_{2}(0), \qquad t_{1} = \tau_{2}(1), \qquad \sigma = \int_{0}^{1} \tau_{1}(x)\, dx.$$
Proof. 
Concerning the necessity, the asymptotic relations (85)–(87) were obtained in [43], while the conditions (13), $\beta_{l,2} < 0$, and $\beta_{l,2 \pm 1} \ne 0$ follow from Lemma 1, Lemma 2, and the structure of the weight matrices (24), respectively.
Concerning the sufficiency, the asymptotics (85)–(87) allow us to construct a model problem $\{\tilde{\tau}_{0}, \tilde{\tau}_{1}, \tilde{\tau}_{2}\}$ such that $\{l^{2} \xi_{l}\} \in l_{2}$. Indeed, one can successively find the constants
$$\theta := \lim_{l \to \infty} \left( \left( \pi l + \frac{\pi}{2} \right)^{2} - \lambda_{l,2} \left( \pi l + \frac{\pi}{2} \right)^{-2} \right), \qquad t_{0} := \lim_{l \to \infty} 4 (\pi l)^{-2} \left( \beta_{l,2} + 4 \lambda_{l,2} \right) - 2\theta,$$
$$t_{1} := \lim_{l \to \infty} \left( \lambda_{l,2} \left( \pi l + \frac{\pi}{2} \right)^{-1} - \left( \pi l + \frac{\pi}{2} \right)^{3} + \theta \left( \pi l + \frac{\pi}{2} \right) \right) - t_{0}, \qquad \sigma := \lim_{l \to \infty} \frac{\lambda_{l,1} - \lambda_{l,3}}{8 \left( \pi l + \frac{\pi}{4} \right)},$$
and construct functions $\{\tilde{\tau}_{0}, \tilde{\tau}_{1}, \tilde{\tau}_{2}\}$ such that $\tilde{\theta} = \theta$, $\tilde{t}_{0} = t_{0}$, $\tilde{t}_{1} = t_{1}$, and $\tilde{\sigma} = \sigma$. For example, put
$$\tilde{\tau}_{0}(x) \equiv 0, \qquad \tilde{\tau}_{1}(x) \equiv \sigma, \qquad \tilde{\tau}_{2}(x) = (3 t_{0} + 3 t_{1} - 6\theta) x^{2} - (4 t_{0} + 2 t_{1} - 6\theta) x + t_{0}.$$
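Indeed, a direct check confirms that this explicit choice matches the required constants:
$$\tilde{\tau}_{2}(0) = t_{0}, \qquad \tilde{\tau}_{2}(1) = (3 t_{0} + 3 t_{1} - 6\theta) - (4 t_{0} + 2 t_{1} - 6\theta) + t_{0} = t_{1}, \qquad \int_{0}^{1} \tilde{\tau}_{2}(x)\, dx = (t_{0} + t_{1} - 2\theta) - (2 t_{0} + t_{1} - 3\theta) + t_{0} = \theta,$$
while $\tilde{\sigma} = \int_{0}^{1} \tilde{\tau}_{1}(x)\, dx = \sigma$.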
Then, the spectral data $\{\tilde{\lambda}_{l,k}, \tilde{\beta}_{l,k}\}_{l \ge 1}$, $k = 1, 2, 3$, satisfy asymptotics with the same main parts as (85)–(87). Hence, the sequences $\{l^{-1} |\lambda_{l,k} - \tilde{\lambda}_{l,k}|\}$ and $\{l^{-2} |\beta_{l,k} - \tilde{\beta}_{l,k}|\}$ belong to $l_{2}$. According to (42), this immediately implies that $\{l^{2} \xi_{l}\} \in l_{2}$. If $\{\tilde{\tau}_{0}, \tilde{\tau}_{1}, \tilde{\tau}_{2}\} \notin W_{\mathrm{simp}}$, then one can perturb a finite number of the eigenvalues $\{\tilde{\lambda}_{l,k}\}$ to achieve (A-1) and (A-2), and such a perturbation does not influence the asymptotics. Thus, for any data $\{\lambda_{l,k}, \beta_{l,k}\}_{l \ge 1}$, $k = 1, 2, 3$, satisfying the conditions of Theorem 3, the hypothesis of Theorem 2 is valid, so there exist $\{\tau_{0}, \tau_{1}, \tau_{2}\} \in W_{\mathrm{simp}}^{+}$ with the spectral data $\{\lambda_{l,k}, \beta_{l,k}\}_{l \ge 1}$, $k = 1, 2, 3$. □

7. Conclusions

In this paper, we considered the spectral data $\{\lambda_{l,k}, \beta_{l,k}\}_{l \ge 1}$, $k = 1, \dots, n-1$, of the general first-order system (7). The corresponding main Equation (6) was constructed in the framework of the method of spectral mappings. The main result of the study is Theorem 1 on the unique solvability of the main Equation (6) in the self-adjoint case under some natural requirements on the data $\{\lambda_{l,k}, \beta_{l,k}\}_{l \ge 1}$, $k = 1, \dots, n-1$. Furthermore, we applied Theorem 1 to obtain necessary and sufficient conditions for the solvability of the inverse spectral problem for the higher-order differential Equation (1) with coefficients $\tau_{\nu} \in W_{2}^{\nu-1}[0,1]$, $\nu = 0, \dots, n-2$. As a corollary, we obtained Theorem 3 on the spectral data characterization for the fourth-order differential Equation (84).
Our results have the following advantages over those of the previous studies:
  • The majority of studies on inverse spectral problems have focused on equations of order n = 2, and there are some papers for n = 3 and n = 4, whereas the results of this paper are valid for an arbitrary integer n ≥ 2.
  • This paper answers the principal question of inverse problem theory about the necessary and sufficient conditions for the solvability of an inverse problem.
  • Theorem 1, on the unique solvability of the main equation, is proved for the general system (7) and so can be applied to various classes of higher-order differential operators with regular and distribution coefficients.
  • Theorem 1 is novel even for the case of regular coefficients. The solvability of the main equation was previously proven for n = 2 , 3 only.
  • Theorem 2, on the necessary and sufficient conditions, does not include the solvability of the main equation among its assumptions.
  • Theorem 3 provides the spectral data characterization for n = 4 in a very simple form. Only asymptotics and simple structural properties of the spectral data are required.
In the future, the results and methods of this paper can be applied to the investigation of inverse spectral problems for other classes of differential operators. In particular, differential operators with distribution coefficients of higher singularity orders can be considered (see, e.g., [27,33,38]), and methods for recovering operators from other types of spectral data can be developed (see, e.g., [15,18]).

Funding

This work was supported by grant 21-71-10001 of the Russian Science Foundation, https://rscf.ru/en/project/21-71-10001/ (accessed on 12 November 2023).

Data Availability Statement

Data are contained within the article.

Acknowledgments

The author is grateful to anonymous referees for their valuable remarks.

Conflicts of Interest

The author declares that this paper has no conflicts of interest.

References

  1. Marchenko, V.A. Sturm-Liouville Operators and Their Applications; Birkhäuser: Basel, Switzerland, 1986.
  2. Levitan, B.M. Inverse Sturm-Liouville Problems; VNU Science Press: Utrecht, The Netherlands, 1987.
  3. Pöschel, J.; Trubowitz, E. Inverse Spectral Theory; Academic Press: New York, NY, USA, 1987.
  4. Freiling, G.; Yurko, V. Inverse Sturm-Liouville Problems and Their Applications; Nova Science Publishers: Huntington, NY, USA, 2001.
  5. Kravchenko, V.V. Direct and Inverse Sturm-Liouville Problems; Birkhäuser: Cham, Switzerland, 2020.
  6. Gel’fand, I.M.; Levitan, B.M. On the determination of a differential equation from its spectral function. Izv. Akad. Nauk SSSR Ser. Mat. 1951, 15, 309–360. (In Russian)
  7. Leibenson, Z.L. The inverse problem of spectral analysis for higher-order ordinary differential operators. Trans. Moscow Math. Soc. 1966, 15, 78–163.
  8. Leibenson, Z.L. Spectral expansions of transformations of systems of boundary value problems. Trudy Moskov. Mat. Obshch. 1971, 25, 15–58. (In Russian)
  9. Leibenzon, Z.L. Algebraic-differential transformations of linear differential operators of arbitrary order and their spectral properties applicable to the inverse problem. Math. USSR-Sb. 1972, 18, 425–471.
  10. Yurko, V.A. Method of Spectral Mappings in the Inverse Problem Theory; Inverse and Ill-Posed Problems Series; VNU Science: Utrecht, The Netherlands, 2002.
  11. Yurko, V.A. Recovery of nonselfadjoint differential operators on the half-line from the Weyl matrix. Math. USSR-Sb. 1992, 72, 413–438.
  12. Yurko, V.A. Inverse problems of spectral analysis for differential operators and their applications. J. Math. Sci. 2000, 98, 319–426.
  13. Beals, R. The inverse problem for ordinary differential operators on the line. Am. J. Math. 1985, 107, 281–366.
  14. Beals, R.; Deift, P.; Tomei, C. Direct and Inverse Scattering on the Line; Mathematical Surveys and Monographs; AMS: Providence, RI, USA, 1988; Volume 28.
  15. Barcilon, V. On the uniqueness of inverse eigenvalue problems. Geophys. J. Intern. 1974, 38, 287–298.
  16. Khachatryan, I.G. Reconstruction of a differential equation from the spectrum. Funct. Anal. Appl. 1976, 10, 83–84.
  17. McKean, H. Boussinesq’s equation on the circle. Comm. Pure Appl. Math. 1981, 34, 599–691.
  18. McLaughlin, J.R. Analytical methods for recovering coefficients in differential equations from spectral data. SIAM Rev. 1986, 28, 53–72.
  19. Papanicolaou, V.G.; Kravvaritisz, D. An inverse spectral problem for the Euler-Bernoulli equation for the vibrating beam. Inverse Probl. 1997, 13, 1083–1092.
  20. Caudill, L.F.; Perry, P.A.; Schueller, A.W. Isospectral sets for fourth-order ordinary differential operators. SIAM J. Math. Anal. 1998, 29, 935–966.
  21. Gladwell, G.M.L. Inverse Problems in Vibration, 2nd ed.; Solid Mechanics and Its Applications; Springer: Dordrecht, The Netherlands, 2005; Volume 119.
  22. Badanin, A.; Korotyaev, E. Inverse problems and sharp eigenvalue asymptotics for Euler-Bernoulli operators. Inverse Probl. 2015, 31, 055004.
  23. Badanin, A.; Korotyaev, E.L. Third-order operators with three-point conditions associated with Boussinesq’s equation. Appl. Anal. 2021, 100, 527–560.
  24. Perera, U.; Böckmann, C. Solutions of Sturm-Liouville problems. Mathematics 2020, 8, 2074.
  25. Bondarenko, N.P. Inverse spectral problems for arbitrary-order differential operators with distribution coefficients. Mathematics 2021, 9, 2989.
  26. Bondarenko, N.P. Reconstruction of higher-order differential operators by their spectral data. Mathematics 2022, 10, 3882.
  27. Bondarenko, N.P. Linear differential operators with distribution coefficients of various singularity orders. Math. Meth. Appl. Sci. 2023, 46, 6639–6659.
  28. Bondarenko, N.P. Local solvability and stability of an inverse spectral problem for higher-order differential operators. Mathematics 2023, 11, 3818.
  29. Bondarenko, N.P. Inverse spectral problem for the third-order differential equation. Results Math. 2023, 78, 179.
  30. Greguš, M. Third Order Linear Differential Equations; Springer: Dordrecht, The Netherlands, 1987.
  31. Bernis, F.; Peletier, L.A. Two problems from draining flows involving third-order ordinary differential equations. SIAM J. Math. Anal. 1996, 27, 515–527.
  32. Möller, M.; Zinsou, B. Sixth order differential operators with eigenvalue dependent boundary conditions. Appl. Anal. Disc. Math. 2013, 7, 378–389.
  33. Mirzoev, K.A.; Shkalikov, A.A. Differential operators of even order with distribution coefficients. Math. Notes 2016, 99, 779–784.
  34. Everitt, W.N.; Marcus, L. Boundary Value Problems and Symplectic Algebra for Ordinary Differential and Quasi-Differential Operators; Mathematical Surveys and Monographs; AMS: Providence, RI, USA, 1999; Volume 61.
  35. Mirzoev, K.A.; Shkalikov, A.A. Ordinary differential operators of odd order with distribution coefficients. arXiv 2019, arXiv:1912.03660.
  36. Vladimirov, A.A. On the convergence of sequences of ordinary differential equations. Math. Notes 2004, 75, 877–880.
  37. Vladimirov, A.A. On one approach to definition of singular differential operators. arXiv 2017, arXiv:1701.08017.
  38. Valeev, N.F.; Nazirova, E.A.; Sultanaev, Y.T. On a method for studying the asymptotics of solutions of odd-order differential equations with oscillating coefficients. Math. Notes 2021, 109, 980–985.
  39. Konechnaja, N.N.; Mirzoev, K.A.; Shkalikov, A.A. Asymptotics of solutions of two-term differential equations. Math. Notes 2023, 113, 228–242.
  40. Savchuk, A.M.; Shkalikov, A.A. Sturm-Liouville operators with distribution potentials. Transl. Moscow Math. Soc. 2003, 64, 143–192.
  41. Bondarenko, N.P. Solving an inverse problem for the Sturm-Liouville operator with singular potential by Yurko’s method. Tamkang J. Math. 2021, 52, 125–154.
  42. Hryniv, R.O.; Mykytyuk, Y.V. Inverse spectral problems for Sturm-Liouville operators with singular potentials. Inverse Probl. 2003, 19, 665–684.
  43. Bondarenko, N.P. Spectral data asymptotics for fourth-order boundary value problems. arXiv 2023, arXiv:2310.13964.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
