Article

The Landweber Iterative Regularization Method for Solving the Cauchy Problem of the Modified Helmholtz Equation

1 School of Science, China University of Petroleum, Qingdao 266580, China
2 School of Science, Lanzhou University of Technology, Lanzhou 730050, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2022, 14(6), 1209; https://doi.org/10.3390/sym14061209
Submission received: 29 March 2022 / Revised: 1 June 2022 / Accepted: 5 June 2022 / Published: 11 June 2022
(This article belongs to the Topic Engineering Mathematics)

Abstract: In this manuscript, the Cauchy problem of the modified Helmholtz equation is studied. This inverse problem is severely ill-posed. The classical Landweber iterative regularization method is designed to find a regularized solution of this inverse problem. Error estimates between the exact solution and the regularized solution are obtained under both the a priori and the a posteriori regularization parameter choice rules. The Landweber iterative regularization method can also be applied to solve the Cauchy problem of the modified Helmholtz equation on spherically symmetric and cylindrically symmetric regions.

1. Introduction

The modified Helmholtz equation $\Delta u(x,y) - k^2 u(x,y) = f(x,y)$, also known as the Yukawa equation, was first proposed in [1]. It has very important applications in practical problems, such as the Debye–Hückel theory, the linearized Poisson–Boltzmann equation and implicit marching schemes for the heat equation. In nuclear physics, the free-space Green's function is usually obtained by solving the Yukawa potential equation. In physics, chemistry and biology, when Coulomb forces are damped by screening effects, the Green's function is also known as the screened Coulomb potential. The boundary value problems of the modified Helmholtz equation have many fields of application, especially in microstretch elastic materials [2] and in the thermoelastodynamics of microstretch bodies [3,4]. In the past few years, there have been many studies on inverse problems for the modified Helmholtz equation. If the right-hand side of the modified Helmholtz equation is unknown, we need additional data to identify it; this is called the inverse problem of identifying the unknown source. Information about unknown source identification can be found in [5,6,7,8]. On the other hand, in practical problems we often use Cauchy data on part of the boundary of the modified Helmholtz equation to determine the field inside the object, or boundary values that are not easy to measure elsewhere. For example, in the nondestructive testing of conductive materials, we cannot directly measure the potential inside the material, but we can recover the potential and the boundary shape inside the object by measuring the potential $u(x,y)$ and the normal derivative $\frac{\partial u}{\partial \nu}$ on the surface of the material. This is called the Cauchy problem of the modified Helmholtz equation.
The Cauchy problem for the modified Helmholtz equation has been studied by using different numerical methods, such as the Landweber method combined with the boundary element method and the conjugate gradient method [9], the method of fundamental solutions (MFS) [10,11], an iterative regularization method [12], Tikhonov-type regularization [13], quasi-reversibility and truncation methods [14,15,16], the Fourier truncation method [17], a mollification regularization method [18] and so on. In most of these references, however, only one kind of Cauchy data is considered; for other Cauchy data there are few results for the modified Helmholtz equation. Therefore, in this paper, the following Cauchy problem of the modified Helmholtz equation is studied:
$$\begin{cases}
\Delta u(x,y) - k^2 u(x,y) = 0, & x \in (0,1),\ y \in \mathbb{R},\\
u(0,y) = \varphi_1(y), & y \in \mathbb{R},\\
u_x(0,y) = \varphi_2(y), & y \in \mathbb{R}.
\end{cases} \tag{1}$$
We will use the Cauchy data to recover the solution $u(x,y)$ for $0 < x \le 1$. The measured Cauchy data are $\varphi_1^{\delta}(y)$ and $\varphi_2^{\delta}(y)$, which satisfy
$$\|\varphi_1^{\delta}(\cdot) - \varphi_1(\cdot)\| \le \delta \tag{2}$$
and
$$\|\varphi_2^{\delta}(\cdot) - \varphi_2(\cdot)\| \le \delta, \tag{3}$$
where $\delta > 0$ represents the measurement error level and $\|\cdot\|$ denotes the $L^2(\mathbb{R})$-norm.
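In numerical experiments, data satisfying (2) and (3) are typically produced by perturbing the exact data with random noise rescaled to a prescribed level. The sketch below is only an illustration under stated assumptions: the function and variable names are hypothetical, and the continuous $L^2(\mathbb{R})$-norm is approximated by the discrete norm $\sqrt{\Delta y}\,\|\cdot\|_{\ell^2}$ on a uniform grid.

```python
import numpy as np

def add_noise(phi, dy, delta, rng=None):
    """Perturb exact Cauchy data so that the discrete L2 norm of the
    perturbation equals the prescribed noise level delta."""
    rng = np.random.default_rng(rng)
    noise = rng.standard_normal(phi.shape)
    # Rescale the noise so that sqrt(dy) * ||noise||_{l2} = delta.
    noise *= delta / (np.sqrt(dy) * np.linalg.norm(noise))
    return phi + noise

y = np.linspace(-10.0, 10.0, 1024)
dy = y[1] - y[0]
phi1 = np.exp(-y**2)                      # hypothetical exact data
phi1_delta = add_noise(phi1, dy, delta=1e-3, rng=0)
err = np.sqrt(dy) * np.linalg.norm(phi1_delta - phi1)
```

By construction, `err` equals the noise level $\delta$ up to floating-point rounding.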
Owing to linearity, we can divide (1) into two Cauchy problems:
$$\begin{cases}
\Delta f(x,y) - k^2 f(x,y) = 0, & x \in (0,1),\ y \in \mathbb{R},\\
f(0,y) = \varphi_1(y), & y \in \mathbb{R},\\
f_x(0,y) = 0, & y \in \mathbb{R},
\end{cases} \tag{4}$$
and
$$\begin{cases}
\Delta g(x,y) - k^2 g(x,y) = 0, & x \in (0,1),\ y \in \mathbb{R},\\
g(0,y) = 0, & y \in \mathbb{R},\\
g_x(0,y) = \varphi_2(y), & y \in \mathbb{R}.
\end{cases} \tag{5}$$
Because $u = f + g$, we only need to study (4) and (5) separately. Problems (4) and (5) are both ill-posed, and the Landweber iterative regularization method is applied to solve them. The method can also be applied to solve the Cauchy problem of the modified Helmholtz equation on spherically symmetric and cylindrically symmetric regions.
This article is organized as follows. Section 2 gives some necessary knowledge. Section 3 constructs the regularization methods, and the a priori and the a posteriori error estimates between the regularization and the exact solutions are given, respectively, by choosing appropriate regularization parameters. Section 4 gives a simple conclusion.

2. Auxiliary Results

Applying the Fourier transform with respect to the variable $y$, the exact solutions of (4) and (5) in the frequency domain can be formulated as follows:
$$\hat{f}(x,\omega) = \cosh\big(\sqrt{\omega^2+k^2}\,x\big)\,\hat{\varphi}_1(\omega), \tag{6}$$
$$\hat{g}(x,\omega) = \frac{\sinh\big(\sqrt{\omega^2+k^2}\,x\big)}{\sqrt{\omega^2+k^2}}\,\hat{\varphi}_2(\omega). \tag{7}$$
Applying the inverse Fourier transform, we obtain the exact solutions of (4) and (5) as follows:
$$f(x,y) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}\cosh\big(\sqrt{\omega^2+k^2}\,x\big)\,\hat{\varphi}_1(\omega)\,e^{i\omega y}\,d\omega, \tag{8}$$
$$g(x,y) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}\frac{\sinh\big(\sqrt{\omega^2+k^2}\,x\big)}{\sqrt{\omega^2+k^2}}\,\hat{\varphi}_2(\omega)\,e^{i\omega y}\,d\omega. \tag{9}$$
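Formulas (8) and (9) can be evaluated numerically with the discrete Fourier transform. Below is a minimal sketch for (8), assuming NumPy's FFT conventions on a uniform grid; the function and grid names are illustrative, not from the paper.

```python
import numpy as np

def solve_f(phi1, y, x, k):
    """Evaluate formula (8) numerically: multiply the FFT of phi1 by
    cosh(sqrt(omega^2 + k^2) * x) and transform back."""
    n = len(y)
    dy = y[1] - y[0]
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dy)  # angular frequency grid
    phi1_hat = np.fft.fft(phi1)
    f_hat = np.cosh(np.sqrt(omega**2 + k**2) * x) * phi1_hat
    return np.real(np.fft.ifft(f_hat))

y = np.linspace(-20.0, 20.0, 2048)
f0 = solve_f(np.exp(-y**2), y, x=0.0, k=1.0)  # at x = 0 this returns phi1
```

The exponential growth of the multiplier for $x > 0$ is exactly the instability discussed in Section 3: any high-frequency noise in `phi1` is amplified by $\cosh(\sqrt{\omega^2+k^2}\,x)$.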
Now, we suppose that the solutions of (4) and (5) satisfy the following a priori bounds:
$$\|\hat{f}(1,\omega)\|_{L^2(\mathbb{R})} = \big\|\cosh\big(\sqrt{\omega^2+k^2}\big)\hat{\varphi}_1(\omega)\big\|_{L^2(\mathbb{R})} \le E_1, \tag{10}$$
$$\|\hat{g}(1,\omega)\|_{L^2(\mathbb{R})} = \Big\|\frac{\sinh\big(\sqrt{\omega^2+k^2}\big)}{\sqrt{\omega^2+k^2}}\hat{\varphi}_2(\omega)\Big\|_{L^2(\mathbb{R})} \le E_2. \tag{11}$$
Lemma 1.
For $0 < h \le 1$, $0 < \alpha \le 1$ and $n \ge 1$, the following inequality holds:
$$(1-h)^n h^{\alpha} \le (n+1)^{-\alpha}.$$
Proof. 
Denote $\rho(h) = (1-h)^n h$; then $\rho'(h) = -n(1-h)^{n-1}h + (1-h)^n$. Setting $\rho'(h) = 0$, we obtain $h = \frac{1}{n+1}$. Since $\rho(0) = \rho(1) = 0$, $\rho(h)$ attains its unique maximum at $h = \frac{1}{n+1}$. Therefore,
$$\rho(h) = (1-h)^n h \le \rho\Big(\tfrac{1}{n+1}\Big) \le \frac{1}{n+1}.$$
Since $0 < \alpha \le 1$ and $(1-h)^n \le 1$, we can obtain
$$(1-h)^n h^{\alpha} \le \big[(1-h)^n h\big]^{\alpha} \le (n+1)^{-\alpha}. \qquad \Box$$
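The inequality of Lemma 1 is easy to spot-check numerically on a grid of parameter values; the short sketch below records the largest observed ratio between the left-hand side and the bound, which the lemma says never exceeds 1.

```python
import numpy as np

# Numerical spot-check of Lemma 1: (1 - h)^n * h^alpha <= (n + 1)^(-alpha)
# for h in (0, 1], alpha in (0, 1], n >= 1.
hs = np.linspace(1e-6, 1.0, 400)
worst = 0.0  # largest observed ratio lhs / bound; Lemma 1 says it stays <= 1
for n in (1, 2, 5, 50):
    for alpha in np.linspace(0.05, 1.0, 20):
        lhs = (1.0 - hs) ** n * hs ** alpha
        worst = max(worst, float(lhs.max() * (n + 1.0) ** alpha))
```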
Lemma 2.
For $\alpha \ge 0$ and $0 < x \le 1$, with $\sinh(\alpha) = \frac{e^{\alpha}-e^{-\alpha}}{2}$, $\sinh(\alpha x) = \frac{e^{\alpha x}-e^{-\alpha x}}{2}$ and $\cosh(\alpha) = \frac{e^{\alpha}+e^{-\alpha}}{2}$, the following inequalities hold:
1. $\frac{e^{\alpha}}{2} \le \cosh(\alpha) \le e^{\alpha}$;
2. $\sinh(\alpha x) \le \alpha e^{\alpha x}$, $\sinh(\alpha) \le \alpha e^{\alpha}$;
3. $\frac{\sinh(\alpha x)}{\sinh(\alpha)} \le e^{\alpha(x-1)}$.
Proof. 
The proof is simple, so we omit it. □

3. Landweber Iterative Regularization Method

From (8) and (9), as $|\omega| \to \infty$, the factors $\cosh\big(\sqrt{\omega^2+k^2}\,x\big)$ and $\frac{\sinh(\sqrt{\omega^2+k^2}\,x)}{\sqrt{\omega^2+k^2}}$ grow exponentially. Consequently, the high-frequency components of the measurement noise are amplified exponentially, and computing the solution directly from the data produces results far from the exact solution. In other words, problems (4) and (5) are ill-posed, and a regularization method is needed to restore the stability of the solutions. In this manuscript, the Landweber iterative regularization method is applied to obtain the regularized solutions of (4) and (5). The Landweber iterative method has been applied to many ill-posed problems, as one can see in [19,20,21,22,23,24].
Due to (6), we obtain
$$\frac{1}{\cosh\big(\sqrt{\omega^2+k^2}\,x\big)}\hat{f}(x,\omega) = \hat{\varphi}_1(\omega).$$
We define the multiplication operator $\hat{K}: \hat{f} \mapsto \hat{\varphi}_1$, so that (6) can be rewritten as the operator equation
$$\hat{K}\hat{f} = \hat{\varphi}_1.$$
Since the kernel satisfies $\hat{K} = \hat{K}^* = \frac{1}{\cosh(\sqrt{\omega^2+k^2}\,x)}$, $\hat{K}$ is a self-adjoint operator. The Landweber regularization method is used to find the regularized solution of $\hat{K}\hat{f} = \hat{\varphi}_1$: we replace the operator equation $\hat{K}\hat{f}(x,\omega) = \hat{\varphi}_1(\omega)$ with the equivalent fixed-point equation $\hat{f}(x,\omega) = (I - a\hat{K}^*\hat{K})\hat{f}(x,\omega) + a\hat{K}^*\hat{\varphi}_1(\omega)$ and obtain the following iterative scheme:
$$\hat{f}^{0}(x,\omega) = 0, \qquad \hat{f}^{m}(x,\omega) = (I - a\hat{K}^*\hat{K})\hat{f}^{m-1}(x,\omega) + a\hat{K}^*\hat{\varphi}_1(\omega),$$
where $a$ is the relaxation factor and satisfies $0 < a < \frac{1}{\|\hat{K}\|^2}$.
We define the operator $R_m: L^2(\mathbb{R}) \to L^2(\mathbb{R})$ as follows:
$$R_m = a\sum_{n=0}^{m-1}(I - a\hat{K}^*\hat{K})^n\hat{K}^* = \frac{1-(1-a\hat{K}^2)^m}{\hat{K}}, \quad m = 1,2,3,\ldots \tag{16}$$
Then the Landweber iterative solution with the measured data $\hat{\varphi}_1^{\delta}(\omega)$ is
$$\hat{f}^{m,\delta}(x,\omega) = R_m\hat{\varphi}_1^{\delta}(\omega) = a\sum_{n=0}^{m-1}(I - a\hat{K}^*\hat{K})^n\hat{K}^*\hat{\varphi}_1^{\delta}(\omega).$$
The following expression can be obtained by induction:
$$\hat{f}^{m,\delta}(x,\omega) = \cosh\big(\sqrt{\omega^2+k^2}\,x\big)\Big[1-\Big(1-\frac{a}{\cosh^2(\sqrt{\omega^2+k^2}\,x)}\Big)^m\Big]\hat{\varphi}_1^{\delta}(\omega), \quad 0 < x \le 1.$$
Applying the inverse Fourier transform, we obtain the Landweber iterative regularized solution of problem (4) as follows:
$$f^{m,\delta}(x,y) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}\cosh\big(\sqrt{\omega^2+k^2}\,x\big)\Big[1-\Big(1-\frac{a}{\cosh^2(\sqrt{\omega^2+k^2}\,x)}\Big)^m\Big]\hat{\varphi}_1^{\delta}(\omega)e^{i\omega y}\,d\omega, \quad 0 < x \le 1. \tag{18}$$
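In the frequency domain, the regularized solution (18) amounts to multiplying the data spectrum by the unbounded multiplier damped by the Landweber spectral filter. A minimal NumPy sketch of this filter (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def landweber_f(phi1_delta, y, x, k, a, m):
    """Landweber regularized solution (18) for problem (4): apply the
    spectral filter [1 - (1 - a/cosh^2(sqrt(omega^2+k^2) x))^m] before
    the unbounded multiplier cosh(sqrt(omega^2+k^2) x)."""
    n = len(y)
    dy = y[1] - y[0]
    omega = 2.0 * np.pi * np.fft.fftfreq(n, d=dy)
    c = np.cosh(np.sqrt(omega**2 + k**2) * x)
    khat = 1.0 / c                   # multiplier of the operator K
    # a must satisfy 0 < a < 1/||K||^2; here ||K|| <= 1, so any a in (0,1) works.
    filt = 1.0 - (1.0 - a * khat**2) ** m
    return np.real(np.fft.ifft(c * filt * np.fft.fft(phi1_delta)))
```

At $x = 0$ the multiplier is 1, so for large $m$ the scheme simply reproduces the data; for $x > 0$ the filter suppresses the high frequencies that $\cosh(\sqrt{\omega^2+k^2}\,x)$ would otherwise amplify.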
We define the multiplication operator $\hat{H}: \hat{g} \mapsto \hat{\varphi}_2$, so that (7) can be rewritten as the operator equation
$$\hat{H}\hat{g} = \hat{\varphi}_2.$$
Since the kernel satisfies $\hat{H} = \hat{H}^* = \frac{\sqrt{\omega^2+k^2}}{\sinh(\sqrt{\omega^2+k^2}\,x)}$, $\hat{H}$ is a self-adjoint operator.
We define the operator $H_m: L^2(\mathbb{R}) \to L^2(\mathbb{R})$ as follows:
$$H_m = b\sum_{n=0}^{m-1}(I - b\hat{H}^*\hat{H})^n\hat{H}^* = \frac{1-(1-b\hat{H}^2)^m}{\hat{H}}, \quad m = 1,2,3,\ldots, \tag{20}$$
where $b$ is the relaxation factor and satisfies $0 < b < \frac{1}{\|\hat{H}\|^2}$. Then
$$\hat{g}^{m,\delta}(x,\omega) = H_m\hat{\varphi}_2^{\delta}(\omega) = b\sum_{n=0}^{m-1}(I - b\hat{H}^*\hat{H})^n\hat{H}^*\hat{\varphi}_2^{\delta}(\omega).$$
In the same way, we obtain the Landweber iterative regularized solution of problem (5) in the frequency domain:
$$\hat{g}^{m,\delta}(x,\omega) = \frac{\sinh\big(\sqrt{\omega^2+k^2}\,x\big)}{\sqrt{\omega^2+k^2}}\bigg[1-\Big(1-b\Big(\frac{\sqrt{\omega^2+k^2}}{\sinh(\sqrt{\omega^2+k^2}\,x)}\Big)^2\Big)^m\bigg]\hat{\varphi}_2^{\delta}(\omega), \quad 0 < x \le 1,$$
and, applying the inverse Fourier transform,
$$g^{m,\delta}(x,y) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{+\infty}\frac{\sinh\big(\sqrt{\omega^2+k^2}\,x\big)}{\sqrt{\omega^2+k^2}}\bigg[1-\Big(1-b\Big(\frac{\sqrt{\omega^2+k^2}}{\sinh(\sqrt{\omega^2+k^2}\,x)}\Big)^2\Big)^m\bigg]\hat{\varphi}_2^{\delta}(\omega)e^{i\omega y}\,d\omega, \quad 0 < x \le 1. \tag{23}$$
Now, under the a priori and the a posteriori parameter choice rules, we present the error estimates for problems (4) and (5).

3.1. The A Priori Error Estimate for Problem (4)

Theorem 1.
Let $f(x,y)$ in (8) be the exact solution of problem (4) and let $f^{m,\delta}(x,y)$ in (18) be its regularized solution. Suppose (2) and (10) hold. If we choose $m = [\theta(x)]$ as the regularization parameter, where
$$\theta(x) = \Big(\frac{E_1}{\delta}\Big)^{2x},$$
then for $0 < x < 1$ we obtain the following error estimate:
$$\|f^{m,\delta}(x,\cdot) - f(x,\cdot)\| \le C_1 E_1^{x}\delta^{1-x};$$
here $[\theta(x)]$ is the largest integer less than or equal to $\theta(x)$, and $C_1 = \sqrt{a} + \big(\frac{1-x}{a}\big)^{\frac{1-x}{2x}}$.
Proof. 
Using the Parseval formula and the triangle inequality, we know
$$\|f^{m,\delta}(x,y) - f(x,y)\| = \|\hat{f}^{m,\delta}(x,\omega) - \hat{f}(x,\omega)\| \le \|\hat{f}^{m,\delta}(x,\omega) - \hat{f}^{m}(x,\omega)\| + \|\hat{f}^{m}(x,\omega) - \hat{f}(x,\omega)\|.$$
Now, we first compute
$$\begin{aligned}
\|\hat{f}^{m,\delta}(x,\omega) - \hat{f}^{m}(x,\omega)\| &= \Big\|\cosh\big(\sqrt{\omega^2+k^2}\,x\big)\Big[1-\Big(1-\tfrac{a}{\cosh^2(\sqrt{\omega^2+k^2}\,x)}\Big)^m\Big]\big(\hat{\varphi}_1^{\delta}(\omega)-\hat{\varphi}_1(\omega)\big)\Big\|\\
&\le \sup_{\omega\in\mathbb{R}}\Big|\cosh\big(\sqrt{\omega^2+k^2}\,x\big)\Big[1-\Big(1-\tfrac{a}{\cosh^2(\sqrt{\omega^2+k^2}\,x)}\Big)^m\Big]\Big|\,\delta.
\end{aligned}$$
Applying Bernoulli's inequality together with the trivial bound $1-(1-h)^m \le 1$, we obtain
$$1-\Big(1-\frac{a}{\cosh^2(\sqrt{\omega^2+k^2}\,x)}\Big)^m \le \frac{\sqrt{am}}{\cosh(\sqrt{\omega^2+k^2}\,x)}.$$
Thus, we obtain
$$\|\hat{f}^{m,\delta}(x,\omega) - \hat{f}^{m}(x,\omega)\| \le \sqrt{am}\,\delta. \tag{26}$$
Now, we compute
$$\begin{aligned}
\|\hat{f}^{m}(x,\omega) - \hat{f}(x,\omega)\| &= \Big\|\cosh\big(\sqrt{\omega^2+k^2}\,x\big)\Big(1-\tfrac{a}{\cosh^2(\sqrt{\omega^2+k^2}\,x)}\Big)^m\hat{\varphi}_1(\omega)\Big\|\\
&= \Big\|\frac{\cosh(\sqrt{\omega^2+k^2}\,x)}{\cosh(\sqrt{\omega^2+k^2})}\Big(1-\tfrac{a}{\cosh^2(\sqrt{\omega^2+k^2}\,x)}\Big)^m\hat{f}(1,\omega)\Big\|\\
&\le \sup_{\omega\in\mathbb{R}}\Big|\frac{\cosh(\sqrt{\omega^2+k^2}\,x)}{\cosh(\sqrt{\omega^2+k^2})}\Big(1-\tfrac{a}{\cosh^2(\sqrt{\omega^2+k^2}\,x)}\Big)^m\Big|\,E_1.
\end{aligned}$$
Using Lemma 2, we obtain
$$\|\hat{f}^{m}(x,\omega) - \hat{f}(x,\omega)\| \le \sup_{\omega\in\mathbb{R}}\Big|e^{\sqrt{\omega^2+k^2}\,(x-1)}\big(1-ae^{-2\sqrt{\omega^2+k^2}\,x}\big)^m\Big|\,E_1.$$
Let $\alpha = \sqrt{\omega^2+k^2}$ and $A(\alpha) = e^{\alpha(x-1)}\big(1-ae^{-2\alpha x}\big)^m$. Setting $A'(\alpha_0) = 0$, we obtain $\alpha_0 = \frac{1}{2x}\ln\frac{a[(2m-1)x+1]}{1-x}$. Therefore,
$$A(\alpha) \le A(\alpha_0) = \Big[\frac{a[(2m-1)x+1]}{1-x}\Big]^{\frac{x-1}{2x}}\Big[1-\frac{1-x}{(2m-1)x+1}\Big]^m \le \Big(\frac{1-x}{a}\Big)^{\frac{1-x}{2x}}\Big(\frac{1}{m+1}\Big)^{\frac{1-x}{2x}}.$$
Thus, we obtain
$$\|\hat{f}^{m}(x,\omega) - \hat{f}(x,\omega)\| \le \Big(\frac{1-x}{a}\Big)^{\frac{1-x}{2x}}\Big(\frac{1}{m+1}\Big)^{\frac{1-x}{2x}}E_1. \tag{27}$$
Combining (26) with (27) and $m = \big[(E_1/\delta)^{2x}\big]$, we obtain
$$\|f^{m,\delta}(x,y) - f(x,y)\| \le \sqrt{am}\,\delta + \Big(\frac{1-x}{a}\Big)^{\frac{1-x}{2x}}\Big(\frac{1}{m+1}\Big)^{\frac{1-x}{2x}}E_1 \le \Big(\sqrt{a} + \Big(\frac{1-x}{a}\Big)^{\frac{1-x}{2x}}\Big)E_1^{x}\delta^{1-x}. \qquad \Box$$
Note: The above gives the error estimate only for $0 < x < 1$ and does not cover the endpoint $x = 1$. At $x = 1$, the estimate in Theorem 1 only shows boundedness rather than convergence. To obtain an error estimate between the exact solution and the regularized solution at $x = 1$, a stronger a priori assumption must be introduced:
$$\|\hat{f}(1,\omega)\|_{H^p} = \big\|e^{p\sqrt{\omega^2+k^2}}\cosh\big(\sqrt{\omega^2+k^2}\big)\hat{\varphi}_1(\omega)\big\| \le E_3, \tag{28}$$
where $\|\hat{f}(1,\omega)\|_{H^p}$ is the Sobolev $H^p$-norm, and $p = 0$ corresponds to the $L^2$-norm.
Theorem 2.
Let $f^{m,\delta}(x,y)$ in (18) be the regularized solution of problem (4) at $x = 1$, and let the a priori condition (28) hold for $p > 0$. If we choose $m = [\theta]$ as the regularization parameter, where
$$\theta = \Big(\frac{E_3}{\delta}\Big)^{\frac{2}{1+p}},$$
then at $x = 1$ we have the following error estimate:
$$\|f^{m,\delta}(1,\cdot) - f(1,\cdot)\| \le C_2 E_3^{\frac{1}{1+p}}\delta^{\frac{p}{1+p}},$$
where $[\theta]$ is the largest integer less than or equal to $\theta$, and $C_2 = \sqrt{a} + \big(\frac{p}{a}\big)^{\frac{p}{2}}$.
Proof.
Using the Parseval formula, the triangle inequality, (6) and (14), we obtain
$$\|f^{m,\delta}(1,y) - f(1,y)\| = \|\hat{f}^{m,\delta}(1,\omega) - \hat{f}(1,\omega)\| \le \|\hat{f}^{m,\delta}(1,\omega) - \hat{f}^{m}(1,\omega)\| + \|\hat{f}^{m}(1,\omega) - \hat{f}(1,\omega)\|.$$
As in the proof of Theorem 1, Bernoulli's inequality gives
$$\|\hat{f}^{m,\delta}(1,\omega) - \hat{f}^{m}(1,\omega)\| \le \sup_{\omega\in\mathbb{R}}\Big|\cosh\big(\sqrt{\omega^2+k^2}\big)\Big[1-\Big(1-\tfrac{a}{\cosh^2(\sqrt{\omega^2+k^2})}\Big)^m\Big]\Big|\,\delta \le \sqrt{am}\,\delta. \tag{31}$$
Now, we compute
$$\begin{aligned}
\|\hat{f}^{m}(1,\omega) - \hat{f}(1,\omega)\| &= \Big\|\cosh\big(\sqrt{\omega^2+k^2}\big)\Big(1-\tfrac{a}{\cosh^2(\sqrt{\omega^2+k^2})}\Big)^m\hat{\varphi}_1(\omega)\Big\|\\
&= \Big\|\Big(1-\tfrac{a}{\cosh^2(\sqrt{\omega^2+k^2})}\Big)^m e^{-p\sqrt{\omega^2+k^2}}\,e^{p\sqrt{\omega^2+k^2}}\hat{f}(1,\omega)\Big\|\\
&\le \sup_{\omega\in\mathbb{R}}\Big|\Big(1-\tfrac{a}{\cosh^2(\sqrt{\omega^2+k^2})}\Big)^m e^{-p\sqrt{\omega^2+k^2}}\Big|\,E_3\\
&\le \sup_{\omega\in\mathbb{R}}\Big|\big(1-ae^{-2\sqrt{\omega^2+k^2}}\big)^m e^{-p\sqrt{\omega^2+k^2}}\Big|\,E_3.
\end{aligned}$$
Let $\alpha = \sqrt{\omega^2+k^2}$ and $B(\alpha) = \big(1-ae^{-2\alpha}\big)^m e^{-p\alpha}$. Setting $B'(\alpha_1^*) = 0$, we obtain $\alpha_1^* = \frac{1}{2}\ln\frac{a(2m+p)}{p}$, and
$$B(\alpha_1^*) = \Big(\frac{2m}{2m+p}\Big)^m\Big(\frac{p}{2am+ap}\Big)^{\frac{p}{2}} \le \Big(\frac{p}{2am+ap}\Big)^{\frac{p}{2}} \le \Big(\frac{p}{a}\Big)^{\frac{p}{2}}(m+1)^{-\frac{p}{2}}.$$
Thus,
$$\|\hat{f}^{m}(1,\omega) - \hat{f}(1,\omega)\| \le \Big(\frac{p}{a}\Big)^{\frac{p}{2}}(m+1)^{-\frac{p}{2}}E_3. \tag{32}$$
Combining (31) with (32), and $m \le \big(\frac{E_3}{\delta}\big)^{\frac{2}{1+p}} \le m+1$, we obtain
$$\|f^{m,\delta}(1,y) - f(1,y)\| \le \sqrt{am}\,\delta + \Big(\frac{p}{a}\Big)^{\frac{p}{2}}(m+1)^{-\frac{p}{2}}E_3 \le \Big(\sqrt{a} + \Big(\frac{p}{a}\Big)^{\frac{p}{2}}\Big)E_3^{\frac{1}{1+p}}\delta^{\frac{p}{1+p}}. \qquad \Box$$

3.2. The A Posteriori Error Estimate for Problem (4)

This section gives the convergence error estimate under the a posteriori parameter choice rule. Let $\tau > 1$ be a fixed constant, and stop the iteration at the first $m = m(\delta) \in \mathbb{N}_0$ such that
$$\|\hat{K}\hat{f}^{m,\delta}(x,\omega) - \hat{\varphi}_1^{\delta}(\omega)\| \le \tau\delta, \quad 0 < x \le 1, \tag{33}$$
where $\|\hat{\varphi}_1^{\delta}\| \ge \tau\delta$.
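In computations, the stopping index defined by this discrepancy rule can be found by monitoring the frequency-domain residual, since $\hat{K}\hat{f}^{m,\delta} - \hat{\varphi}_1^{\delta} = -(1-a\hat{K}^2)^m\hat{\varphi}_1^{\delta}$. A minimal sketch, assuming a discrete $\ell^2$ norm and hypothetical helper names:

```python
import numpy as np

def stop_index(phi1_delta_hat, khat, a, tau, delta, m_max=10_000):
    """A posteriori stopping rule (33): return the first m with
    ||(1 - a*khat**2)**m * phi1_delta_hat|| <= tau * delta, using the
    identity K f_hat^{m,delta} - phi1_delta_hat = -(1 - a*khat**2)**m phi1_delta_hat."""
    residual_factor = np.ones_like(phi1_delta_hat)
    for m in range(1, m_max + 1):
        residual_factor = residual_factor * (1.0 - a * khat**2)
        if np.linalg.norm(residual_factor * phi1_delta_hat) <= tau * delta:
            return m
    return m_max
```

Because the residual norm is strictly decreasing in $m$ (Lemma 3 below), the first crossing of the threshold $\tau\delta$ is well defined.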
Lemma 3.
Let $\beta(m) = \|\hat{K}\hat{f}^{m,\delta}(x,\omega) - \hat{\varphi}_1^{\delta}(\omega)\|$; then the following conclusions hold:
1. $\lim_{m\to 0}\beta(m) = \|\hat{\varphi}_1^{\delta}(\omega)\|$;
2. $\lim_{m\to +\infty}\beta(m) = 0$;
3. $\beta(m)$ is a continuous function;
4. $\beta(m)$ is strictly monotonically decreasing for $m \in (0,+\infty)$.
Proof.
These claims follow directly from
$$\beta(m) = \|\hat{K}\hat{f}^{m,\delta}(x,\omega) - \hat{\varphi}_1^{\delta}(\omega)\| = \Big\|\Big(1-\tfrac{a}{\cosh^2(\sqrt{\omega^2+k^2}\,x)}\Big)^m\hat{\varphi}_1^{\delta}(\omega)\Big\|. \qquad \Box$$
Lemma 4.
For any $x \in (0,1)$, the regularization parameter $m = m(\delta)$ chosen by (33) satisfies
$$m \le \frac{2^{2x}}{ax}\Big(\frac{E_1}{(\tau-1)\delta}\Big)^{2x}.$$
Proof.
According to (16),
$$R_m\hat{\varphi}_1(\omega) = \frac{1-(1-a\hat{K}^2)^m}{\hat{K}}\hat{\varphi}_1(\omega),$$
and hence
$$\hat{K}R_m\hat{\varphi}_1(\omega) - \hat{\varphi}_1(\omega) = -(1-a\hat{K}^2)^m\hat{\varphi}_1(\omega).$$
Since $|1-a\hat{K}^2| < 1$, we have $\|\hat{K}R_{m-1} - I\| \le 1$. Due to (33), $m$ is the first index at which the discrepancy drops below $\tau\delta$, so
$$\tau\delta \le \|\hat{K}\hat{f}^{m-1,\delta}(x,\omega) - \hat{\varphi}_1^{\delta}(\omega)\|.$$
Then we can obtain
$$\begin{aligned}
\|\hat{K}R_{m-1}\hat{\varphi}_1(\omega) - \hat{\varphi}_1(\omega)\| &\ge \|\hat{K}R_{m-1}\hat{\varphi}_1^{\delta}(\omega) - \hat{\varphi}_1^{\delta}(\omega)\| - \|(\hat{K}R_{m-1} - I)(\hat{\varphi}_1(\omega) - \hat{\varphi}_1^{\delta}(\omega))\|\\
&\ge \tau\delta - \|\hat{K}R_{m-1} - I\|\,\delta \ge (\tau-1)\delta.
\end{aligned}$$
Thus,
$$\|\hat{K}R_{m-1}\hat{\varphi}_1(\omega) - \hat{\varphi}_1(\omega)\| \ge (\tau-1)\delta. \tag{36}$$
In addition,
$$\begin{aligned}
\|\hat{K}R_{m-1}\hat{\varphi}_1(\omega) - \hat{\varphi}_1(\omega)\| &= \big\|(1-a\hat{K}^2)^{m-1}\hat{\varphi}_1(\omega)\big\| = \Big\|(1-a\hat{K}^2)^{m-1}\frac{\hat{f}(1,\omega)}{\cosh(\sqrt{\omega^2+k^2})}\Big\|\\
&\le \sup_{\omega\in\mathbb{R}}\Big|\Big(1-\tfrac{a}{\cosh^2(\sqrt{\omega^2+k^2}\,x)}\Big)^{m-1}\frac{1}{\cosh(\sqrt{\omega^2+k^2})}\Big|\,E_1\\
&\le \sup_{\omega\in\mathbb{R}}\Big|2\big(1-ae^{-2\sqrt{\omega^2+k^2}\,x}\big)^{m-1}e^{-\sqrt{\omega^2+k^2}}\Big|\,E_1 = \sup_{\omega\in\mathbb{R}}|C(\omega)|\,E_1,
\end{aligned}$$
where $C(\omega) = 2\big(1-ae^{-2\sqrt{\omega^2+k^2}\,x}\big)^{m-1}e^{-\sqrt{\omega^2+k^2}}$. Let $\alpha = \sqrt{\omega^2+k^2}$, so that $C(\omega)$ can be rewritten as $C(\alpha) = 2e^{-\alpha}\big(1-ae^{-2\alpha x}\big)^{m-1}$. Setting $C'(\alpha_3^*) = 0$, we obtain $\alpha_3^* = \frac{1}{2x}\ln\big(2ax(m-1)+a\big)$, and
$$C(\alpha_3^*) = 2\big(2ax(m-1)+a\big)^{-\frac{1}{2x}}\Big(\frac{2x(m-1)}{2x(m-1)+1}\Big)^{m-1} \le 2\Big(\frac{1}{ax}\Big)^{\frac{1}{2x}}(m-1)^{-\frac{1}{2x}}.$$
Therefore,
$$\|\hat{K}R_{m-1}\hat{\varphi}_1(\omega) - \hat{\varphi}_1(\omega)\| \le 2\Big(\frac{1}{ax}\Big)^{\frac{1}{2x}}(m-1)^{-\frac{1}{2x}}E_1. \tag{37}$$
Combining (36) with (37), we obtain
$$(\tau-1)\delta \le 2\Big(\frac{1}{ax}\Big)^{\frac{1}{2x}}(m-1)^{-\frac{1}{2x}}E_1,$$
and thus
$$m \le \frac{2^{2x}}{ax}\Big(\frac{E_1}{(\tau-1)\delta}\Big)^{2x}. \qquad \Box$$
Theorem 3.
Let $f^{m,\delta}(x,y)$ be the regularized solution of problem (4). Assume (2) and (10) hold, and let the regularization parameter be chosen by (33). Then for any $0 < x < 1$, we can obtain the following error estimate:
$$\|f^{m,\delta}(x,\cdot) - f(x,\cdot)\| \le C_3 E_1^{x}\delta^{1-x},$$
where $C_3 = \big(\frac{2}{\tau-1}\big)^{x}x^{-\frac12} + (2\tau^2+2)^{\frac{1-x}{2}}$ is a constant.
Proof.
Using the Parseval formula, the triangle inequality, (26) and Lemma 4, we obtain
$$\begin{aligned}
\|f^{m,\delta}(x,y) - f(x,y)\| &= \|\hat{f}^{m,\delta}(x,\omega) - \hat{f}(x,\omega)\| \le \|\hat{f}^{m,\delta}(x,\omega) - \hat{f}^{m}(x,\omega)\| + \|\hat{f}^{m}(x,\omega) - \hat{f}(x,\omega)\|\\
&\le \sqrt{am}\,\delta + \|\hat{f}^{m}(x,\omega) - \hat{f}(x,\omega)\| \le \Big(\frac{2}{\tau-1}\Big)^{x}x^{-\frac12}E_1^{x}\delta^{1-x} + \|\hat{f}^{m}(x,\omega) - \hat{f}(x,\omega)\|. \tag{40}
\end{aligned}$$
According to the Hölder inequality, and noting that $\cosh\big(\sqrt{\omega^2+k^2}\,x\big) \le \cosh^{x}\big(\sqrt{\omega^2+k^2}\big)$ (since $\ln\cosh$ is convex and vanishes at $0$), we obtain
$$\begin{aligned}
\|\hat{f}^{m}(x,\omega) - \hat{f}(x,\omega)\|^2 &= \int_{-\infty}^{+\infty}\Big|(1-a\hat{K}^2)^m\cosh\big(\sqrt{\omega^2+k^2}\,x\big)\Big|^2|\hat{\varphi}_1(\omega)|^{2(1-x)}|\hat{\varphi}_1(\omega)|^{2x}\,d\omega\\
&\le \int_{-\infty}^{+\infty}\big|(1-a\hat{K}^2)^m\big|^2|\hat{\varphi}_1(\omega)|^{2(1-x)}|\hat{f}(1,\omega)|^{2x}\,d\omega\\
&\le \Big(\int_{-\infty}^{+\infty}(1-a\hat{K}^2)^{\frac{2m}{1-x}}|\hat{\varphi}_1(\omega)|^{2}\,d\omega\Big)^{1-x}\Big(\int_{-\infty}^{+\infty}|\hat{f}(1,\omega)|^{2}\,d\omega\Big)^{x}\\
&\le \Big(\int_{-\infty}^{+\infty}(1-a\hat{K}^2)^{\frac{2m}{1-x}}\big(|\hat{\varphi}_1(\omega)-\hat{\varphi}_1^{\delta}(\omega)|+|\hat{\varphi}_1^{\delta}(\omega)|\big)^{2}\,d\omega\Big)^{1-x}E_1^{2x}\\
&\le 2^{1-x}\Big(\big\|(1-a\hat{K}^2)^{m}(\hat{\varphi}_1-\hat{\varphi}_1^{\delta})\big\|^{2}+\big\|(1-a\hat{K}^2)^{m}\hat{\varphi}_1^{\delta}\big\|^{2}\Big)^{1-x}E_1^{2x}\\
&\le 2^{1-x}\big(\delta^{2}+\tau^{2}\delta^{2}\big)^{1-x}E_1^{2x} = (2\tau^2+2)^{1-x}\delta^{2(1-x)}E_1^{2x}.
\end{aligned}$$
Then,
$$\|\hat{f}^{m}(x,\omega) - \hat{f}(x,\omega)\| \le (2\tau^2+2)^{\frac{1-x}{2}}\delta^{1-x}E_1^{x}. \tag{41}$$
Combining (40) with (41), we obtain
$$\|f^{m,\delta}(x,y) - f(x,y)\| \le \Big[\Big(\frac{2}{\tau-1}\Big)^{x}x^{-\frac12} + (2\tau^2+2)^{\frac{1-x}{2}}\Big]E_1^{x}\delta^{1-x}. \qquad \Box$$
Note: Theorem 3 only covers the error estimate in the interval $0 < x < 1$; at the endpoint $x = 1$ it only shows boundedness rather than convergence. The error estimate at $x = 1$ is given below.
Lemma 5.
Assume (2) and the a priori condition (28) hold. If $m$ is chosen by (33) at $x = 1$, then the regularization parameter $m$ satisfies
$$m \le \frac{p+1}{a}\Big(\frac{2E_3}{(\tau-1)\delta}\Big)^{\frac{2}{p+1}}.$$
Proof.
Let $\hat{K}_0 = \hat{K}_0^* = \frac{1}{\cosh(\sqrt{\omega^2+k^2})}$ and $R_m^0 = a\sum_{k=0}^{m-1}(I-a\hat{K}_0^*\hat{K}_0)^k\hat{K}_0^*$. As in the proof of Lemma 4,
$$\|\hat{K}_0R_{m-1}^0\hat{\varphi}_1(\omega) - \hat{\varphi}_1(\omega)\| \ge \|\hat{K}_0R_{m-1}^0\hat{\varphi}_1^{\delta}(\omega) - \hat{\varphi}_1^{\delta}(\omega)\| - \|(\hat{K}_0R_{m-1}^0 - I)(\hat{\varphi}_1 - \hat{\varphi}_1^{\delta})\| \ge \tau\delta - \delta = (\tau-1)\delta. \tag{44}$$
On the other hand, we compute
$$\begin{aligned}
\|\hat{K}_0R_{m-1}^0\hat{\varphi}_1(\omega) - \hat{\varphi}_1(\omega)\| &= \big\|(1-a\hat{K}_0^2)^{m-1}\hat{\varphi}_1(\omega)\big\|\\
&\le \sup_{\omega\in\mathbb{R}}\Big|\Big(1-\tfrac{a}{\cosh^2(\sqrt{\omega^2+k^2})}\Big)^{m-1}\frac{e^{-p\sqrt{\omega^2+k^2}}}{\cosh(\sqrt{\omega^2+k^2})}\Big|\,E_3 = \sup_{\omega\in\mathbb{R}}|D(\omega)|\,E_3.
\end{aligned}$$
If we take $\alpha = \sqrt{\omega^2+k^2}$, then $D(\omega)$ can be rewritten as $D(\alpha) = 2\big(1-ae^{-2\alpha}\big)^{m-1}e^{-(p+1)\alpha}$. Setting $D'(\alpha_4^*) = 0$, we obtain $\alpha_4^* = \frac{1}{2}\ln\frac{2a(m-1)+a(p+1)}{p+1}$, and
$$D(\alpha_4^*) = 2\Big(\frac{2(m-1)}{2(m-1)+p+1}\Big)^{m-1}\Big(\frac{p+1}{2a(m-1)+a(p+1)}\Big)^{\frac{p+1}{2}} \le 2\Big(\frac{p+1}{a}\Big)^{\frac{p+1}{2}}m^{-\frac{p+1}{2}}.$$
Then,
$$\|\hat{K}_0R_{m-1}^0\hat{\varphi}_1(\omega) - \hat{\varphi}_1(\omega)\| \le 2\Big(\frac{p+1}{a}\Big)^{\frac{p+1}{2}}m^{-\frac{p+1}{2}}E_3. \tag{45}$$
Combining (44) with (45), we obtain
$$(\tau-1)\delta \le 2\Big(\frac{p+1}{a}\Big)^{\frac{p+1}{2}}m^{-\frac{p+1}{2}}E_3,$$
and therefore
$$m \le \frac{p+1}{a}\Big(\frac{2E_3}{(\tau-1)\delta}\Big)^{\frac{2}{p+1}}. \qquad \Box$$
Theorem 4.
Let $f^{m,\delta}(x,y)$ in (18) be the regularized solution of problem (4) at $x = 1$. Assume (2) and the a priori condition (28) hold for $p > 0$. If $m$ is chosen by (33) at $x = 1$, then we have the following error estimate:
$$\|f^{m,\delta}(1,\cdot) - f(1,\cdot)\| \le C_4\,\delta^{\frac{p}{p+1}}E_3^{\frac{1}{p+1}},$$
where $C_4 = (p+1)^{\frac12}\big(\frac{2}{\tau-1}\big)^{\frac{1}{p+1}} + (2\tau^2+2)^{\frac{p}{2(p+1)}}$ is a positive constant.
Proof.
Due to the triangle inequality, the Parseval formula and Lemma 5, we know
$$\begin{aligned}
\|f^{m,\delta}(1,y) - f(1,y)\| &= \|\hat{f}^{m,\delta}(1,\omega) - \hat{f}(1,\omega)\| \le \|\hat{f}^{m,\delta}(1,\omega) - \hat{f}^{m}(1,\omega)\| + \|\hat{f}^{m}(1,\omega) - \hat{f}(1,\omega)\|\\
&\le \sqrt{am}\,\delta + \|\hat{f}^{m}(1,\omega) - \hat{f}(1,\omega)\| \le (p+1)^{\frac12}\Big(\frac{2}{\tau-1}\Big)^{\frac{1}{p+1}}\delta^{\frac{p}{p+1}}E_3^{\frac{1}{p+1}} + \|\hat{f}^{m}(1,\omega) - \hat{f}(1,\omega)\|.
\end{aligned}$$
According to the Hölder inequality, and using $|\hat{f}(1,\omega)|^2 = |\hat{\varphi}_1(\omega)|^{\frac{2p}{p+1}}\big|\cosh^{p+1}\big(\sqrt{\omega^2+k^2}\big)\hat{\varphi}_1(\omega)\big|^{\frac{2}{p+1}}$, we obtain
$$\begin{aligned}
\|\hat{f}^{m}(1,\omega) - \hat{f}(1,\omega)\|^2 &= \big\|(1-a\hat{K}_0^2)^m\hat{f}(1,\omega)\big\|^2\\
&\le \Big(\int_{-\infty}^{+\infty}(1-a\hat{K}_0^2)^{2m}|\hat{\varphi}_1(\omega)|^{2}\,d\omega\Big)^{\frac{p}{p+1}}\Big(\int_{-\infty}^{+\infty}\cosh^{2p}\big(\sqrt{\omega^2+k^2}\big)|\hat{f}(1,\omega)|^{2}\,d\omega\Big)^{\frac{1}{p+1}}\\
&\le \Big(\int_{-\infty}^{+\infty}(1-a\hat{K}_0^2)^{2m}\big(|\hat{\varphi}_1(\omega)-\hat{\varphi}_1^{\delta}(\omega)|+|\hat{\varphi}_1^{\delta}(\omega)|\big)^{2}\,d\omega\Big)^{\frac{p}{p+1}}\Big(\int_{-\infty}^{+\infty}e^{2p\sqrt{\omega^2+k^2}}|\hat{f}(1,\omega)|^{2}\,d\omega\Big)^{\frac{1}{p+1}}\\
&\le 2^{\frac{p}{p+1}}\big(\delta^{2}+\tau^{2}\delta^{2}\big)^{\frac{p}{p+1}}E_3^{\frac{2}{p+1}} = (2+2\tau^2)^{\frac{p}{p+1}}\delta^{\frac{2p}{p+1}}E_3^{\frac{2}{p+1}}.
\end{aligned}$$
Therefore,
$$\|\hat{f}^{m}(1,\omega) - \hat{f}(1,\omega)\| \le (2+2\tau^2)^{\frac{p}{2(p+1)}}\delta^{\frac{p}{p+1}}E_3^{\frac{1}{p+1}}.$$
Thus,
$$\|f^{m,\delta}(1,y) - f(1,y)\| \le C_4\,\delta^{\frac{p}{p+1}}E_3^{\frac{1}{p+1}},$$
where $C_4 = (p+1)^{\frac12}\big(\frac{2}{\tau-1}\big)^{\frac{1}{p+1}} + (2\tau^2+2)^{\frac{p}{2(p+1)}}$ is a constant. □

3.3. The A Priori Error Estimate for Problem (5)

Theorem 5.
Let $g^{m,\delta}(x,y)$ in (23) be the regularized solution of problem (5), and suppose (3) and the a priori condition (11) hold. If we choose $m = [\theta(x)]$ as the regularization parameter, where
$$\theta(x) = \Big(\frac{E_2}{\delta}\Big)^{2x},$$
then, for any $0 < x < 1$, we obtain the error estimate as follows:
$$\|g^{m,\delta}(x,\cdot) - g(x,\cdot)\| \le C_5 E_2^{x}\delta^{1-x},$$
where $[\theta(x)]$ is the largest integer less than or equal to $\theta(x)$, and $C_5 = \sqrt{b} + \frac{2}{1-e^{-2k}}\big(\frac{1}{bk^2}\big)^{\frac{1-x}{2x}}$ is a constant.
Proof.
Due to the triangle inequality and the Parseval formula, we obtain
$$\|g^{m,\delta}(x,y) - g(x,y)\| = \|\hat{g}^{m,\delta}(x,\omega) - \hat{g}(x,\omega)\| \le \|\hat{g}^{m,\delta}(x,\omega) - \hat{g}^{m}(x,\omega)\| + \|\hat{g}^{m}(x,\omega) - \hat{g}(x,\omega)\|.$$
We first estimate the first term on the right-hand side. Using Bernoulli's inequality as in (26), we obtain
$$\|\hat{g}^{m,\delta}(x,\omega) - \hat{g}^{m}(x,\omega)\| = \Big\|\frac{1-(1-b\hat{H}^2)^m}{\hat{H}}\big(\hat{\varphi}_2^{\delta}(\omega) - \hat{\varphi}_2(\omega)\big)\Big\| \le \sup_{\omega\in\mathbb{R}}\Big|\frac{1-(1-b\hat{H}^2)^m}{\hat{H}}\Big|\,\delta \le \sqrt{bm}\,\delta. \tag{52}$$
Setting $x = 1$ in (7), we can obtain
$$\hat{g}(1,\omega) = \frac{\sinh\big(\sqrt{\omega^2+k^2}\big)}{\sqrt{\omega^2+k^2}}\hat{\varphi}_2(\omega). \tag{53}$$
Using (53), we have
$$\begin{aligned}
\|\hat{g}^{m}(x,\omega) - \hat{g}(x,\omega)\| &= \Big\|(1-b\hat{H}^2)^m\frac{\hat{\varphi}_2(\omega)}{\hat{H}}\Big\| = \Big\|\Big(1-\tfrac{b(\omega^2+k^2)}{\sinh^2(\sqrt{\omega^2+k^2}\,x)}\Big)^m\frac{\sinh(\sqrt{\omega^2+k^2}\,x)}{\sinh(\sqrt{\omega^2+k^2})}\,\hat{g}(1,\omega)\Big\|\\
&\le \sup_{\omega\in\mathbb{R}}\Big|\Big(1-\tfrac{b(\omega^2+k^2)}{\sinh^2(\sqrt{\omega^2+k^2}\,x)}\Big)^m\frac{\sinh(\sqrt{\omega^2+k^2}\,x)}{\sinh(\sqrt{\omega^2+k^2})}\Big|\,E_2 = \sup_{\omega\in\mathbb{R}}|F(\omega)|\,E_2,
\end{aligned}$$
where $F(\omega) := \big(1-\tfrac{b(\omega^2+k^2)}{\sinh^2(\sqrt{\omega^2+k^2}\,x)}\big)^m\frac{\sinh(\sqrt{\omega^2+k^2}\,x)}{\sinh(\sqrt{\omega^2+k^2})}$. Let $\alpha = \sqrt{\omega^2+k^2}$, so that $F(\omega)$ can be rewritten as $J(\alpha)$. Using Lemma 1 with exponent $\frac{1-x}{2x}$ and Lemma 2, we obtain
$$\begin{aligned}
J(\alpha) &= \Big(1-\frac{b\alpha^2}{\sinh^2(\alpha x)}\Big)^m\Big(\frac{b\alpha^2}{\sinh^2(\alpha x)}\Big)^{\frac{1-x}{2x}}\Big(\frac{\sinh^2(\alpha x)}{b\alpha^2}\Big)^{\frac{1-x}{2x}}\frac{\sinh(\alpha x)}{\sinh(\alpha)}\\
&\le \Big(\frac{1}{m+1}\Big)^{\frac{1-x}{2x}}\Big(\frac{1}{b\alpha^2}\Big)^{\frac{1-x}{2x}}\frac{\sinh^{\frac1x}(\alpha x)}{\sinh(\alpha)} \le \frac{2}{1-e^{-2k}}\Big(\frac{1}{bk^2}\Big)^{\frac{1-x}{2x}}\Big(\frac{1}{m+1}\Big)^{\frac{1-x}{2x}},
\end{aligned}$$
where we used $\sinh^{\frac1x}(\alpha x) \le e^{\alpha}$, $\sinh(\alpha) \ge \frac{e^{\alpha}(1-e^{-2\alpha})}{2}$ and $\alpha \ge k$. Thus, we obtain
$$\|\hat{g}^{m}(x,\omega) - \hat{g}(x,\omega)\| \le \frac{2}{1-e^{-2k}}\Big(\frac{1}{bk^2}\Big)^{\frac{1-x}{2x}}\Big(\frac{1}{m+1}\Big)^{\frac{1-x}{2x}}E_2. \tag{54}$$
Combining (52) with (54) and $m = \big[(E_2/\delta)^{2x}\big]$, we obtain
$$\|g^{m,\delta}(x,y) - g(x,y)\| \le \sqrt{b}\,\delta^{1-x}E_2^{x} + \frac{2}{1-e^{-2k}}\Big(\frac{1}{bk^2}\Big)^{\frac{1-x}{2x}}\delta^{1-x}E_2^{x} = \Big(\sqrt{b} + \frac{2}{1-e^{-2k}}\Big(\frac{1}{bk^2}\Big)^{\frac{1-x}{2x}}\Big)E_2^{x}\delta^{1-x}. \qquad \Box$$
Note: The above gives the error estimate only for $0 < x < 1$ and does not cover the endpoint $x = 1$. At $x = 1$, the estimate in Theorem 5 only shows boundedness rather than convergence. To obtain error estimates between the exact solution and the regularized solution at $x = 1$, a stronger a priori assumption must be introduced:
$$\|\hat{g}(1,\omega)\|_{H^p} = \Big\|e^{p\sqrt{\omega^2+k^2}}\frac{\sinh(\sqrt{\omega^2+k^2})}{\sqrt{\omega^2+k^2}}\hat{\varphi}_2(\omega)\Big\| \le E_4, \tag{55}$$
where $\|\hat{g}(1,\omega)\|_{H^p}$ is the Sobolev $H^p$-norm, and $p = 0$ corresponds to the $L^2$-norm.
Now, we give the error estimate between the regularized solution and the exact solution at $x = 1$. Let $\hat{H}_0 = \frac{\sqrt{\omega^2+k^2}}{\sinh(\sqrt{\omega^2+k^2})}$.
Theorem 6.
Let $g^{m,\delta}(x,y)$ in (23) be the regularized solution of problem (5) at $x = 1$. Suppose (3) and the a priori condition (55) hold for $p > 0$. Choosing $m = [\theta]$ as the regularization parameter, where
$$\theta = \Big(\frac{E_4}{\delta}\Big)^{\frac{2}{1+p}},$$
at $x = 1$ we have the following error estimate:
$$\|g^{m,\delta}(1,\cdot) - g(1,\cdot)\| \le C_6 E_4^{\frac{1}{1+p}}\delta^{\frac{p}{1+p}},$$
where $[\theta]$ is the largest integer less than or equal to $\theta$, and $C_6 = \sqrt{b} + \big(\frac{p}{b}\big)^{\frac{p}{2}}$.
Proof.
Using the Parseval formula and the triangle inequality, we obtain
$$\|g^{m,\delta}(1,y) - g(1,y)\| = \|\hat{g}^{m,\delta}(1,\omega) - \hat{g}(1,\omega)\| \le \|\hat{g}^{m,\delta}(1,\omega) - \hat{g}^{m}(1,\omega)\| + \|\hat{g}^{m}(1,\omega) - \hat{g}(1,\omega)\|.$$
Using Bernoulli's inequality, we obtain
$$\|\hat{g}^{m,\delta}(1,\omega) - \hat{g}^{m}(1,\omega)\| = \Big\|\frac{1-(1-b\hat{H}_0^2)^m}{\hat{H}_0}\big(\hat{\varphi}_2^{\delta}(\omega) - \hat{\varphi}_2(\omega)\big)\Big\| \le \sup_{\omega\in\mathbb{R}}\Big|\frac{1-(1-b\hat{H}_0^2)^m}{\hat{H}_0}\Big|\,\delta \le \sqrt{bm}\,\delta. \tag{58}$$
For the second term,
$$\begin{aligned}
\|\hat{g}^{m}(1,\omega) - \hat{g}(1,\omega)\| &= \Big\|(1-b\hat{H}_0^2)^m\frac{\hat{\varphi}_2(\omega)}{\hat{H}_0}\Big\| = \Big\|\big(1-b\hat{H}_0^2\big)^m\frac{\sinh(\sqrt{\omega^2+k^2})}{\sqrt{\omega^2+k^2}}\hat{\varphi}_2(\omega)\,e^{-p\sqrt{\omega^2+k^2}}e^{p\sqrt{\omega^2+k^2}}\Big\|\\
&\le \sup_{\omega\in\mathbb{R}}\Big|\Big(1-\tfrac{b(\omega^2+k^2)}{\sinh^2(\sqrt{\omega^2+k^2})}\Big)^m e^{-p\sqrt{\omega^2+k^2}}\Big|\,E_4 \le \sup_{\omega\in\mathbb{R}}\Big|\big(1-be^{-2\sqrt{\omega^2+k^2}}\big)^m e^{-p\sqrt{\omega^2+k^2}}\Big|\,E_4 = \sup_{\omega\in\mathbb{R}}|B(\alpha)|\,E_4.
\end{aligned}$$
Take $B(\alpha) = \big(1-be^{-2\alpha}\big)^m e^{-p\alpha}$ with $\alpha = \sqrt{\omega^2+k^2}$, and let $B'(\alpha_1^*) = 0$. Then $\alpha_1^* = \frac{1}{2}\ln\frac{b(2m+p)}{p}$ and
$$B(\alpha_1^*) = \Big(\frac{2m}{2m+p}\Big)^m\Big(\frac{p}{2bm+bp}\Big)^{\frac{p}{2}} \le \Big(\frac{p}{2bm+bp}\Big)^{\frac{p}{2}} \le \Big(\frac{p}{b}\Big)^{\frac{p}{2}}(m+1)^{-\frac{p}{2}}.$$
Therefore, we obtain
$$\|\hat{g}^{m}(1,\omega) - \hat{g}(1,\omega)\| \le \Big(\frac{p}{b}\Big)^{\frac{p}{2}}(m+1)^{-\frac{p}{2}}E_4. \tag{59}$$
Combining (58) with (59) and $m = \big[(E_4/\delta)^{\frac{2}{1+p}}\big]$, we obtain
$$\|g^{m,\delta}(1,y) - g(1,y)\| \le \sqrt{bm}\,\delta + \Big(\frac{p}{b}\Big)^{\frac{p}{2}}(m+1)^{-\frac{p}{2}}E_4 \le \Big(\sqrt{b} + \Big(\frac{p}{b}\Big)^{\frac{p}{2}}\Big)E_4^{\frac{1}{1+p}}\delta^{\frac{p}{1+p}}. \qquad \Box$$

3.4. The A Posteriori Error Estimate for Problem (5)

This section gives the convergence error estimate under the a posteriori parameter choice rule. Let $\tau > 1$ be a fixed constant, and stop the iteration at the first $m = m(\delta) \in \mathbb{N}_0$ such that
$$\|\hat{H}\hat{g}^{m,\delta}(x,\omega) - \hat{\varphi}_2^{\delta}(\omega)\| \le \tau\delta, \quad 0 < x \le 1, \tag{60}$$
where $\|\hat{\varphi}_2^{\delta}\| \ge \tau\delta$.
Lemma 6.
Let $\gamma(m) = \|\hat{H}\hat{g}^{m,\delta}(x,\omega) - \hat{\varphi}_2^{\delta}(\omega)\|$; then the following conclusions hold:
1. $\lim_{m\to 0}\gamma(m) = \|\hat{\varphi}_2^{\delta}(\omega)\|$;
2. $\lim_{m\to +\infty}\gamma(m) = 0$;
3. $\gamma(m)$ is a continuous function;
4. $\gamma(m)$ is strictly monotonically decreasing for $m \in (0,+\infty)$.
Proof.
Since $\hat{g}^{m,\delta} = H_m\hat{\varphi}_2^{\delta}$, we have
$$\gamma(m) = \Big\|\hat{H}\,\frac{1-(1-b\hat{H}^2)^m}{\hat{H}}\hat{\varphi}_2^{\delta}(\omega) - \hat{\varphi}_2^{\delta}(\omega)\Big\| = \big\|(1-b\hat{H}^2)^m\hat{\varphi}_2^{\delta}(\omega)\big\| = \Big\|\Big(1-\tfrac{b(\omega^2+k^2)}{\sinh^2(\sqrt{\omega^2+k^2}\,x)}\Big)^m\hat{\varphi}_2^{\delta}(\omega)\Big\|.$$
Obviously, $\gamma(m)$ satisfies the above four conditions. □
Lemma 7.
For any $x \in (0,1)$, the regularization parameter $m = m(\delta)$ chosen by (60) satisfies
$$m \le \frac{2^{2x}k^{2x-2}}{b(1-e^{-2k})^{2x}}\Big(\frac{E_2}{(\tau-1)\delta}\Big)^{2x}.$$
Proof.
According to (20),
$$H_m\hat{\varphi}_2(\omega) = \frac{1-(1-b\hat{H}^2)^m}{\hat{H}}\hat{\varphi}_2(\omega).$$
Since $|1-b\hat{H}^2| < 1$, we have $\|\hat{H}H_{m-1} - I\| \le 1$, and by (60), $m$ is the first index at which the discrepancy drops below $\tau\delta$, so
$$\tau\delta \le \|\hat{H}\hat{g}^{m-1,\delta}(x,\omega) - \hat{\varphi}_2^{\delta}(\omega)\| = \|\hat{H}H_{m-1}\hat{\varphi}_2^{\delta}(\omega) - \hat{\varphi}_2^{\delta}(\omega)\|.$$
Then we can obtain
$$\begin{aligned}
\|\hat{H}H_{m-1}\hat{\varphi}_2(\omega) - \hat{\varphi}_2(\omega)\| &\ge \|\hat{H}H_{m-1}\hat{\varphi}_2^{\delta}(\omega) - \hat{\varphi}_2^{\delta}(\omega)\| - \|(\hat{H}H_{m-1} - I)(\hat{\varphi}_2 - \hat{\varphi}_2^{\delta})\|\\
&\ge \tau\delta - \|\hat{H}H_{m-1} - I\|\,\delta \ge (\tau-1)\delta. \tag{63}
\end{aligned}$$
In addition,
$$\begin{aligned}
\|\hat{H}H_{m-1}\hat{\varphi}_2(\omega) - \hat{\varphi}_2(\omega)\| &= \big\|(1-b\hat{H}^2)^{m-1}\hat{\varphi}_2(\omega)\big\| = \Big\|\Big(1-\tfrac{b(\omega^2+k^2)}{\sinh^2(\sqrt{\omega^2+k^2}\,x)}\Big)^{m-1}\frac{\sqrt{\omega^2+k^2}}{\sinh(\sqrt{\omega^2+k^2})}\hat{g}(1,\omega)\Big\|\\
&\le \sup_{\omega\in\mathbb{R}}\Big|\Big(1-\tfrac{b(\omega^2+k^2)}{\sinh^2(\sqrt{\omega^2+k^2}\,x)}\Big)^{m-1}\frac{\sqrt{\omega^2+k^2}}{\sinh(\sqrt{\omega^2+k^2})}\Big|\,E_2 = \sup_{\omega\in\mathbb{R}}|J(\alpha)|\,E_2,
\end{aligned}$$
where $J(\alpha) = \big(1-\frac{b\alpha^2}{\sinh^2(\alpha x)}\big)^{m-1}\frac{\alpha}{\sinh(\alpha)}$ and $\alpha = \sqrt{\omega^2+k^2}$.
Using Lemma 1 with exponent $\frac{1}{2x}$ and Lemma 2, we obtain
$$\begin{aligned}
J(\alpha) &= \Big(1-\frac{b\alpha^2}{\sinh^2(\alpha x)}\Big)^{m-1}\Big(\frac{b\alpha^2}{\sinh^2(\alpha x)}\Big)^{\frac{1}{2x}}\Big(\frac{\sinh^2(\alpha x)}{b\alpha^2}\Big)^{\frac{1}{2x}}\frac{\alpha}{\sinh(\alpha)}\\
&\le m^{-\frac{1}{2x}}\Big(\frac{1}{b}\Big)^{\frac{1}{2x}}\frac{\sinh^{\frac{1}{x}}(\alpha x)}{\sinh(\alpha)}\,\alpha^{1-\frac{1}{x}} \le m^{-\frac{1}{2x}}\,\frac{2}{1-e^{-2k}}\Big(\frac{1}{b}\Big)^{\frac{1}{2x}}k^{1-\frac{1}{x}}.
\end{aligned}$$
Therefore,
$$\|\hat{H}H_{m-1}\hat{\varphi}_2(\omega) - \hat{\varphi}_2(\omega)\| \le m^{-\frac{1}{2x}}\,\frac{2}{1-e^{-2k}}\Big(\frac{1}{b}\Big)^{\frac{1}{2x}}k^{1-\frac{1}{x}}E_2. \tag{64}$$
Combining (63) with (64), we obtain
$$m \le \frac{2^{2x}k^{2x-2}}{b(1-e^{-2k})^{2x}}\Big(\frac{E_2}{(\tau-1)\delta}\Big)^{2x}. \qquad \Box$$
Now, under the a posteriori regularization parameter choice rule, we give the error estimate between the regularized solution $g^{m,\delta}(x,y)$ and the exact solution $g(x,y)$.
Theorem 7.
Let $g^{m,\delta}(x,y)$ be the regularized solution of problem (5). Suppose (3) and the a priori condition (11) hold. If $m$ is chosen by (60), then for any $0 < x < 1$ we can obtain the following error estimate:
$$\|g^{m,\delta}(x,\cdot) - g(x,\cdot)\| \le C_7 E_2^{x}\delta^{1-x},$$
where $C_7 = \Big(\frac{2}{(1-e^{-2k})(\tau-1)}\Big)^{x}k^{x-1} + (2\tau^2+2)^{\frac{1-x}{2}}$ is a constant.
Proof. 
Using the Parseval formula, the triangle inequality and (58), we obtain
g ^ m , δ x , y g x , y = g ^ m , δ x , ω g ^ x , ω g ^ m , δ x , ω g ^ m x , ω + g ^ m x , ω g ^ x , ω b m δ + g ^ m x , ω g ^ x , ω .
According to the H o ¨ l d e r inequality, Lemma 2 and (45), we obtain
g ^ m x , ω g ^ x , ω 2 = 1 b H ^ 2 m H ^ φ ^ 2 ω 2 = 1 b H ^ 2 m H ^ φ ^ 2 1 x ω φ ^ 2 x ω 2 = 1 b H ^ 2 m sin h ω 2 + k 2 x ω 2 + k 2 φ ^ 2 1 x ω 2 + k 2 sin h ω 2 + k 2 g ^ 1 , ω x 2 = + 1 b H ^ 2 m sin h ω 2 + k 2 x sin h x ω 2 + k 2 ω 2 + k 2 x 1 φ ^ 2 1 x ω 2 g ^ 1 , ω 2 x d ω + 1 b H ^ 2 m sin h ω 2 + k 2 x sin h x ω 2 + k 2 ω 2 + k 2 x 1 φ ^ 2 1 x ω 2 1 x d ω 1 x + g ^ 2 x 1 , ω 1 x d ω x + 1 b H ^ 2 m sin h ω 2 + k 2 x sin h x ω 2 + k 2 ω 2 + k 2 x 1 φ ^ 2 ω φ ^ 2 δ ω + φ 2 δ ω 1 x 2 1 x d ω 1 x E 2 2 x + 1 b H ^ 2 m e ω 2 + k 2 x 1 x e ω 2 + k 2 x 1 x φ ^ 2 ω φ ^ 2 δ ω + φ 2 δ ω 1 x 2 1 x d ω 1 x E 2 2 x 2 1 x + 1 b H ^ 2 2 m 1 x φ ^ 2 ω φ ^ 2 δ ω 2 d ω + + 1 b H ^ 2 2 m 1 x φ 2 δ ω d ω 1 x E 2 2 x 2 1 x 1 b H ^ 2 m 1 x φ ^ 2 ω φ ^ 2 δ ω 2 + 1 b H ^ 2 m 1 x φ ^ 2 δ ω 2 1 x E 2 2 x 2 1 x φ ^ 2 ω φ ^ 2 δ ω 2 + 1 b H ^ 2 m φ ^ 2 δ ω 2 1 x E 2 2 x 2 1 x δ 2 + τ 2 δ 2 1 x E 3 2 x = 2 τ 2 + 2 1 x δ 2 1 x E 2 2 x .
Thus,
\[
\|\hat g_{m}(x,\omega)-\hat g(x,\omega)\| \le (2\tau^{2}+2)^{\frac{1-x}{2}}\,\delta^{\,1-x}\,E_{2}^{\,x}.
\]
Combining (62) and (67), we obtain
\[
\begin{aligned}
\|g_{m,\delta}(x,\cdot)-g(x,\cdot)\|
&\le \sqrt{bm}\,\delta + \|\hat g_{m}(x,\omega)-\hat g(x,\omega)\|
\le \sqrt{bm}\,\delta + (2\tau^{2}+2)^{\frac{1-x}{2}}\,\delta^{\,1-x}\,E_{2}^{\,x} \\
&\le \bigg[2^{x}k^{x-1}\Big(\frac{1}{1-e^{-2k}}\Big)^{x}\Big(\frac{1}{\tau-1}\Big)^{x}+(2\tau^{2}+2)^{\frac{1-x}{2}}\bigg]E_{2}^{\,x}\,\delta^{\,1-x}. \qquad \square
\end{aligned}
\]
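In the last step, the first term \(\sqrt{bm}\,\delta\) is bounded by inserting the estimate for \(m\) from Lemma 7; written out,

```latex
\sqrt{bm}\,\delta
\le \sqrt{\,b\cdot 2^{2x}\, k^{2(x-1)}\, \frac{1}{b}\left(\frac{1}{1-e^{-2k}}\right)^{2x}\left(\frac{E_2}{(\tau-1)\delta}\right)^{2x}}\;\delta
= 2^{x}\, k^{x-1}\left(\frac{1}{1-e^{-2k}}\right)^{x}\left(\frac{1}{\tau-1}\right)^{x} E_2^{\,x}\,\delta^{\,1-x}.
```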
Note: Theorem 7 only gives the error estimate on the interval \(x\in(0,1)\); it does not cover the endpoint \(x=1\), where the estimate above shows only boundedness rather than convergence. The error estimate at \(x=1\) is given below.
Lemma 8.
Suppose (3) and the a priori condition (55) hold for \(p>0\). If we take the solution of Equation (45) as the regularization parameter at \(x=1\), then the regularization parameter \(m=m(\delta)\) satisfies:
\[
m \le \frac{1}{b}\,k^{-\frac{2p}{p+1}}\Big(\frac{E_{4}}{(\tau-1)\delta}\Big)^{\frac{2}{p+1}}.
\]
Proof. 
Let \(\hat H^{0}=(\hat H^{0})^{*}=\frac{\sqrt{\omega^{2}+k^{2}}}{\sinh(\sqrt{\omega^{2}+k^{2}})}\) and \(H_{m}^{0}=b\sum_{j=0}^{m-1}\big(I-b(\hat H^{0})^{*}\hat H^{0}\big)^{j}(\hat H^{0})^{*}\). Then
\[
\big\|\hat H^{0}H_{m-1}^{0}\hat\varphi_{2}(\omega)-\hat\varphi_{2}(\omega)\big\|
\ge \big\|\hat H^{0}H_{m-1}^{0}\hat\varphi_{2}^{\delta}(\omega)-\hat\varphi_{2}^{\delta}(\omega)\big\|
- \big\|\big(\hat H^{0}H_{m-1}^{0}-I\big)\big(\hat\varphi_{2}(\omega)-\hat\varphi_{2}^{\delta}(\omega)\big)\big\|
\ge \tau\delta - \big\|\hat H^{0}H_{m-1}^{0}-I\big\|\,\delta
\ge (\tau-1)\delta.
\]
Therefore,
\[
\big\|\hat H^{0}H_{m-1}^{0}\hat\varphi_{2}(\omega)-\hat\varphi_{2}(\omega)\big\| \ge (\tau-1)\delta.
\]
On the other hand, using Lemma 1 we obtain
\[
\begin{aligned}
\big\|\hat H^{0}H_{m-1}^{0}\hat\varphi_{2}(\omega)-\hat\varphi_{2}(\omega)\big\|
&= \big\|\big(1-b(\hat H^{0})^{2}\big)^{m-1}\hat\varphi_{2}(\omega)\big\| \\
&= \bigg\|\big(1-b(\hat H^{0})^{2}\big)^{m-1}\,\frac{\sqrt{\omega^{2}+k^{2}}}{\sinh(\sqrt{\omega^{2}+k^{2}})}\,e^{-p\sqrt{\omega^{2}+k^{2}}}\cdot e^{p\sqrt{\omega^{2}+k^{2}}}\,\hat g(1,\omega)\bigg\| \\
&\le \sup_{\omega\in\mathbb{R}}\bigg[\Big(1-\frac{b(\omega^{2}+k^{2})}{\sinh^{2}(\sqrt{\omega^{2}+k^{2}})}\Big)^{m-1}\frac{\sqrt{\omega^{2}+k^{2}}}{\sinh(\sqrt{\omega^{2}+k^{2}})}\,e^{-p\sqrt{\omega^{2}+k^{2}}}\bigg]\,E_{4}
= \sup_{\omega\in\mathbb{R}}L(\omega)\,E_{4},
\end{aligned}
\]
where \(L(\omega)=\Big(1-\frac{b(\omega^{2}+k^{2})}{\sinh^{2}(\sqrt{\omega^{2}+k^{2}})}\Big)^{m-1}\frac{\sqrt{\omega^{2}+k^{2}}}{\sinh(\sqrt{\omega^{2}+k^{2}})}e^{-p\sqrt{\omega^{2}+k^{2}}}\). Let \(\alpha=\sqrt{\omega^{2}+k^{2}}\); then \(L(\omega)\) can be rewritten as \(M(\alpha)=\big(1-\frac{b\alpha^{2}}{\sinh^{2}\alpha}\big)^{m-1}\frac{\alpha}{\sinh\alpha}e^{-p\alpha}\). Then, using Lemma 1, we obtain
\[
\begin{aligned}
M(\alpha) &= \Big(1-\frac{b\alpha^{2}}{\sinh^{2}\alpha}\Big)^{m-1}\frac{\alpha}{\sinh\alpha}\,e^{-p\alpha}
= \Big(1-\frac{b\alpha^{2}}{\sinh^{2}\alpha}\Big)^{m-1}\Big(\frac{b\alpha^{2}}{\sinh^{2}\alpha}\Big)^{\frac{p+1}{2}}\Big(\frac{b\alpha^{2}}{\sinh^{2}\alpha}\Big)^{-\frac{p+1}{2}}\frac{\alpha}{\sinh\alpha}\,e^{-p\alpha} \\
&\le m^{-\frac{p+1}{2}}\Big(\frac{\sinh^{2}\alpha}{b\alpha^{2}}\Big)^{\frac{p+1}{2}}\frac{\alpha}{\sinh\alpha}\,e^{-p\alpha}
= m^{-\frac{p+1}{2}}\,\frac{\sinh^{p}\alpha}{\alpha^{p}\,b^{\frac{p+1}{2}}}\,e^{-p\alpha}
\le m^{-\frac{p+1}{2}}\,\frac{e^{p\alpha}}{\alpha^{p}\,b^{\frac{p+1}{2}}}\,e^{-p\alpha}
\le m^{-\frac{p+1}{2}}\,k^{-p}\,b^{-\frac{p+1}{2}}.
\end{aligned}
\]
Thus,
\[
\big\|\hat H^{0}H_{m-1}^{0}\hat\varphi_{2}(\omega)-\hat\varphi_{2}(\omega)\big\| \le m^{-\frac{p+1}{2}}\,k^{-p}\,b^{-\frac{p+1}{2}}\,E_{4}.
\]
Combining (69) with (70), we obtain
\[
(\tau-1)\delta \le m^{-\frac{p+1}{2}}\,k^{-p}\,b^{-\frac{p+1}{2}}\,E_{4}.
\]
Therefore, we obtain
\[
m \le \frac{1}{b}\,k^{-\frac{2p}{p+1}}\Big(\frac{E_{4}}{(\tau-1)\delta}\Big)^{\frac{2}{p+1}}. \qquad \square
\]
Theorem 8.
Let \(g_{m,\delta}(x,y)\) in (23) be the regularization solution of problem (5). Suppose (3) and the a priori condition (55) hold. If we take the solution of Equation (45) as the regularization parameter at \(x=1\), then the following error estimate holds:
\[
\|g_{m,\delta}(1,\cdot)-g(1,\cdot)\| \le C_{8}\,E_{4}^{\frac{1}{p+1}}\,\delta^{\frac{p}{p+1}},
\]
where \(C_{8}=\big(\frac{1}{k^{p}(\tau-1)}\big)^{\frac{1}{p+1}}+(2\tau^{2}+2)^{\frac{p}{2(p+1)}}\,k^{-\frac{p}{p+1}}\) is a constant.
Proof. 
Due to the triangle inequality, the Parseval formula and Lemma 8, we obtain
\[
\begin{aligned}
\|g_{m,\delta}(1,\cdot)-g(1,\cdot)\| &= \|\hat g_{m,\delta}(1,\omega)-\hat g(1,\omega)\|
\le \|\hat g_{m,\delta}(1,\omega)-\hat g_{m}(1,\omega)\| + \|\hat g_{m}(1,\omega)-\hat g(1,\omega)\| \\
&\le \sqrt{bm}\,\delta + \|\hat g_{m}(1,\omega)-\hat g(1,\omega)\|
\le \Big(\frac{1}{k^{p}(\tau-1)}\Big)^{\frac{1}{p+1}}E_{4}^{\frac{1}{p+1}}\,\delta^{\frac{p}{p+1}} + \|\hat g_{m}(1,\omega)-\hat g(1,\omega)\|.
\end{aligned}
\]
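The bound on the first term follows by substituting the estimate for \(m\) from Lemma 8; explicitly,

```latex
\sqrt{bm}\,\delta
\le \sqrt{\,b\cdot \frac{1}{b}\,k^{-\frac{2p}{p+1}}\left(\frac{E_4}{(\tau-1)\delta}\right)^{\frac{2}{p+1}}}\;\delta
= k^{-\frac{p}{p+1}}\left(\frac{E_4}{(\tau-1)\delta}\right)^{\frac{1}{p+1}}\delta
= \left(\frac{1}{k^{p}(\tau-1)}\right)^{\frac{1}{p+1}} E_4^{\frac{1}{p+1}}\,\delta^{\frac{p}{p+1}}.
```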
According to the Hölder inequality, Lemma 2 and (45), we obtain
\[
\begin{aligned}
\|\hat g_{m}(1,\omega)-\hat g(1,\omega)\|^{2}
&= \Big\|\big(1-b(\hat H^{0})^{2}\big)^{m}\frac{1}{\hat H^{0}}\,\hat\varphi_{2}(\omega)\Big\|^{2}
= \big\|\big(1-b(\hat H^{0})^{2}\big)^{m}\hat g(1,\omega)\big\|^{2} \\
&= \int_{-\infty}^{+\infty}\Big[\big(1-b(\hat H^{0})^{2}\big)^{2m}\hat\varphi_{2}^{\,2}(\omega)\Big]^{\frac{p}{p+1}}\Big[\big(1-b(\hat H^{0})^{2}\big)^{2m}\frac{\sinh^{2p}(\sqrt{\omega^{2}+k^{2}})}{(\omega^{2}+k^{2})^{p}}\,\hat g^{\,2}(1,\omega)\Big]^{\frac{1}{p+1}}d\omega \\
&\le \bigg(\int_{-\infty}^{+\infty}\big(1-b(\hat H^{0})^{2}\big)^{2m}\hat\varphi_{2}^{\,2}(\omega)\,d\omega\bigg)^{\frac{p}{p+1}}
\bigg(\int_{-\infty}^{+\infty}\big(1-b(\hat H^{0})^{2}\big)^{2m}\frac{\sinh^{2p}(\sqrt{\omega^{2}+k^{2}})}{(\omega^{2}+k^{2})^{p}}\,\hat g^{\,2}(1,\omega)\,d\omega\bigg)^{\frac{1}{p+1}} \\
&\le \bigg(\int_{-\infty}^{+\infty}\big(1-b(\hat H^{0})^{2}\big)^{2m}\hat\varphi_{2}^{\,2}(\omega)\,d\omega\bigg)^{\frac{p}{p+1}}
\bigg(\int_{-\infty}^{+\infty}\frac{1}{k^{2p}}\,e^{2p\sqrt{\omega^{2}+k^{2}}}\,\hat g^{\,2}(1,\omega)\,d\omega\bigg)^{\frac{1}{p+1}} \\
&\le \bigg(\int_{-\infty}^{+\infty}\big(1-b(\hat H^{0})^{2}\big)^{2m}\hat\varphi_{2}^{\,2}(\omega)\,d\omega\bigg)^{\frac{p}{p+1}}\,k^{-\frac{2p}{p+1}}\,E_{4}^{\frac{2}{p+1}} \\
&\le 2^{\frac{p}{p+1}}\Big(\big\|(1-b(\hat H^{0})^{2})^{m}\big(\hat\varphi_{2}(\omega)-\hat\varphi_{2}^{\delta}(\omega)\big)\big\|^{2}+\big\|(1-b(\hat H^{0})^{2})^{m}\hat\varphi_{2}^{\delta}(\omega)\big\|^{2}\Big)^{\frac{p}{p+1}}\,k^{-\frac{2p}{p+1}}\,E_{4}^{\frac{2}{p+1}} \\
&\le 2^{\frac{p}{p+1}}\big(\delta^{2}+\tau^{2}\delta^{2}\big)^{\frac{p}{p+1}}\,k^{-\frac{2p}{p+1}}\,E_{4}^{\frac{2}{p+1}}
= (2\tau^{2}+2)^{\frac{p}{p+1}}\,k^{-\frac{2p}{p+1}}\,\delta^{\frac{2p}{p+1}}\,E_{4}^{\frac{2}{p+1}}.
\end{aligned}
\]
Therefore,
\[
\|\hat g_{m}(1,\omega)-\hat g(1,\omega)\| \le (2\tau^{2}+2)^{\frac{p}{2(p+1)}}\,k^{-\frac{p}{p+1}}\,\delta^{\frac{p}{p+1}}\,E_{4}^{\frac{1}{p+1}}.
\]
Thus,
\[
\|g_{m,\delta}(1,\cdot)-g(1,\cdot)\| \le C_{8}\,E_{4}^{\frac{1}{p+1}}\,\delta^{\frac{p}{p+1}},
\]
where \(C_{8}=\big(\frac{1}{k^{p}(\tau-1)}\big)^{\frac{1}{p+1}}+(2\tau^{2}+2)^{\frac{p}{2(p+1)}}\,k^{-\frac{p}{p+1}}\) is a constant. □
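The iteration and stopping rule analyzed in this section can be illustrated numerically. The sketch below runs the Landweber iteration in the Fourier domain with the a posteriori (discrepancy) stopping rule; the grid, wave number \(k=1\), interior point \(x=0.5\), step size, Gaussian test data, noise model and \(\tau=1.1\) are all illustrative assumptions of this sketch, not values taken from the paper.

```python
import numpy as np

# Discrete Fourier grid (illustrative choices, not from the paper).
N, L = 256, 20.0
y = np.linspace(-L / 2, L / 2, N, endpoint=False)
omega = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)
k, x, tau = 1.0, 0.5, 1.1                  # wave number, interior point, tau > 1
alpha = np.sqrt(omega**2 + k**2)

H = alpha / np.sinh(x * alpha)             # Fourier symbol of the operator at x
b = 1.0 / (2.0 * np.max(H) ** 2)           # Landweber step with 0 < b < 1/||H||^2

def l2(v):
    """Parseval-consistent discrete L2 norm of a Fourier-side vector."""
    return np.linalg.norm(v) / np.sqrt(N)

# Exact data phi_2 generated from a smooth g(1, y), plus noise of size delta.
g1_hat = np.fft.fft(np.exp(-(y**2)))
phi_hat = alpha / np.sinh(alpha) * g1_hat
noise = np.fft.fft(np.random.default_rng(0).standard_normal(N))
noise *= 1e-3 / l2(noise)                  # scale noise to the target level
phi_hat_delta = phi_hat + noise
delta = l2(noise)

# Landweber iteration g_m = g_{m-1} + b H (phi^delta - H g_{m-1}), stopped at
# the first m with ||H g_m - phi^delta|| <= tau * delta (discrepancy principle).
g_hat = np.zeros(N, dtype=complex)
m = 0
while l2(phi_hat_delta - H * g_hat) > tau * delta and m < 5000:
    g_hat = g_hat + b * H * (phi_hat_delta - H * g_hat)   # H is real, so H* = H
    m += 1

g_exact_hat = np.sinh(x * alpha) / alpha * phi_hat        # exact g(x, .)
err = l2(g_hat - g_exact_hat)
print(m, err)
```

With exact data replaced by its noisy version, the iteration stops after a moderate number of steps and the reconstruction error is of the order predicted by Theorem 7 for this noise level; iterating far beyond the discrepancy stopping index would let the noise dominate (semi-convergence).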

4. Conclusions

In this paper, the Cauchy problem of the modified Helmholtz equation is studied. The exact solution of the problem is obtained by the Fourier transform, and the ill-posed problem is solved by the Landweber iterative regularization method. Finally, given an appropriate a priori bound, the corresponding error estimates are obtained under the a priori and the a posteriori regularization parameter selection rules, respectively.

Author Contributions

The main idea of the article was proposed by Y.-G.C., F.Y. and Q.D., and all authors confirmed the steps of the article. All authors have read and agreed to the published version of the manuscript.

Funding

The project is supported by the National Natural Science Foundation of China (No. 11961044), the Doctor Fund of Lanzhou University of Technology, and the Natural Science Foundation of Gansu Province (No. 21JR7RA214).

Data Availability Statement

Data sharing is not applicable to this paper.

Conflicts of Interest

The authors declare that they have no conflict of interest.

References

  1. Cheng, H.W.; Huang, J.F.; Leiterman, T.J. An adaptive fast solver for the modified Helmholtz equation in two dimensions. J. Comput. Phys. 2006, 211, 616–637.
  2. Marin, M. A domain of influence theorem for microstretch elastic materials. Nonlinear Anal. RWA 2010, 11, 3446–3452.
  3. Marin, M.I.; Agarwal, R.P.; Abbas, I.A. Effect of intrinsic rotations, microstructural expansion and contractions in initial boundary value problem of thermoelastic bodies. Bound. Value Probl. 2014, 129, 1–16.
  4. Marin, M.; Lupu, M. On harmonic vibrations in thermoelasticity of micropolar bodies. J. Vib. Control 1998, 4, 507–518.
  5. Gao, J.H.; Wang, D.H.; Peng, J.G. A Tikhonov-type regularization method for identifying the unknown source in the modified Helmholtz equation. Math. Probl. Eng. 2012, 2012, 1–13.
  6. Zhao, J.J. A comparison of regularization methods for identifying unknown source problem for the modified Helmholtz equation. J. Inverse Ill-Posed Probl. 2014, 22, 277–296.
  7. Li, X.X.; Yang, F.; Liu, J.; Wang, L. The quasireversibility regularization method for identifying the unknown source for the modified Helmholtz equation. J. Appl. Math. 2013, 2013, 245963.
  8. You, L.; Li, Z.; Huang, J.; Du, A. The Tikhonov regularization method in Hilbert scales for determining the unknown source for the modified Helmholtz equation. J. Math. Phys. 2016, 4, 140–148.
  9. Marin, L.; Elliott, L.; Heggs, P.J.; Ingham, D.B.; Lesnic, D.; Wen, X. Conjugate gradient-boundary element solution to the Cauchy problem for Helmholtz-type equations. Comput. Mech. 2003, 31, 367–377.
  10. Marin, L.; Elliott, L.; Heggs, P.; Ingham, D.B.; Lesnic, D.; Wen, X. BEM solution for the Cauchy problem associated with Helmholtz-type equations by the Landweber method. Eng. Anal. Bound. Elem. 2004, 28, 1025–1034.
  11. Wei, T.; Hon, Y.; Ling, L. Method of fundamental solutions with regularization techniques for Cauchy problems of elliptic operators. Eng. Anal. Bound. Elem. 2007, 31, 373–385.
  12. Cheng, H.; Zhu, P.; Gao, J. A regularization method for the Cauchy problem of the modified Helmholtz equation. Math. Meth. Appl. Sci. 2015, 38, 3711–3719.
  13. Qin, H.H.; Wen, D.W. Tikhonov type regularization method for the Cauchy problem of the modified Helmholtz equation. Appl. Math. Comput. 2008, 203, 617–628.
  14. Qin, H.H.; Wei, T. Quasi-reversibility and truncation methods to solve a Cauchy problem of the modified Helmholtz equation. Math. Comput. Simulat. 2009, 80, 352–366.
  15. Shi, R.; Wei, T.; Qin, H.H. A fourth-order modified method for the Cauchy problem of the modified Helmholtz equation. Numer. Math. Theory Meth. Appl. 2009, 2, 326–340.
  16. Xiong, X.T.; Shi, W.X.; Fan, X.Y. Two numerical methods for a Cauchy problem for modified Helmholtz equation. Appl. Math. Model. 2011, 35, 4951–4964.
  17. Yang, F.; Fan, P.; Li, X.X. Fourier truncation regularization method for a three-dimensional Cauchy problem of the modified Helmholtz equation with perturbed wave number. Mathematics 2019, 7, 705.
  18. He, S.Q.; Feng, X.F. A regularization method to solve a Cauchy problem for the two-dimensional modified Helmholtz equation. Mathematics 2019, 7, 4.
  19. Xiao, C.; Deng, Y. A new Newton-Landweber iteration for nonlinear inverse problems. J. Appl. Math. Comput. 2011, 36, 489–505.
  20. Jose, J.; Rajan, M.P. A simplified Landweber iteration for solving nonlinear ill-posed problems. Int. J. Appl. Comput. Math. 2017, 3, 1001–1018.
  21. Yang, F.; Zhang, Y.; Li, X.X. Landweber iterative method for identifying the initial value problem of the time-space fractional diffusion-wave equation. Numer. Algorithms 2020, 83, 1509–1530.
  22. Yang, F.; Wang, N.; Li, X.X. Landweber iterative method for an inverse source problem of time-fractional diffusion-wave equation on spherically symmetric domain. J. Appl. Anal. Comput. 2020, 10, 514–529.
  23. Yang, F.; Pu, Q.; Li, X.X. The fractional Landweber method for identifying the space source term problem for time-space fractional diffusion equation. Numer. Algorithms 2021, 87, 1229–1255.
  24. Yang, F.; Wang, Q.C.; Pu, Q.; Li, X.X. A fractional Landweber iterative regularization method for stable analytic continuation. AIMS Math. 2021, 6, 404–419.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Chen, Y.-G.; Yang, F.; Ding, Q. The Landweber Iterative Regularization Method for Solving the Cauchy Problem of the Modified Helmholtz Equation. Symmetry 2022, 14, 1209. https://doi.org/10.3390/sym14061209

