
Convergence Rate of Runge-Kutta-Type Regularization for Nonlinear Ill-Posed Problems under Logarithmic Source Condition

by Pornsarp Pornsawad 1,2, Elena Resmerita 3,* and Christine Böckmann 4,5

1 Department of Mathematics, Faculty of Science, Silpakorn University, 6 Rachamakka Nai Rd., Nakhon Pathom 73000, Thailand
2 Centre of Excellence in Mathematics, Mahidol University, Rama 6 Rd., Bangkok 10400, Thailand
3 Institute of Mathematics, Alpen-Adria University of Klagenfurt, Universitätsstr. 65-67, A-9020 Klagenfurt, Austria
4 Institute of Mathematics, University of Potsdam, Karl-Liebknecht-Str. 24-25, 14476 Potsdam, Germany
5 Helmholtz Centre for Polar and Marine Research, Alfred Wegener Institute, Telegrafenberg A45, 14473 Potsdam, Germany
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(9), 1042; https://doi.org/10.3390/math9091042
Submission received: 12 March 2021 / Revised: 29 April 2021 / Accepted: 1 May 2021 / Published: 4 May 2021
(This article belongs to the Special Issue Numerical Analysis: Inverse Problems – Theory and Applications)

Abstract: We prove the logarithmic convergence rate of the families of usual and modified iterative Runge-Kutta methods for nonlinear ill-posed problems between Hilbert spaces under the logarithmic source condition, and we verify the obtained results numerically. The iterative regularization is terminated by the a posteriori discrepancy principle.

1. Introduction

Let X and Y be infinite-dimensional real Hilbert spaces with inner products $\langle\cdot,\cdot\rangle$ and norms $\|\cdot\|$. We consider a nonlinear ill-posed operator equation
$$F(w) = g, \qquad (1)$$
where $F : D(F) \subseteq X \to Y$ is a nonlinear operator between the Hilbert spaces X and Y. We assume that (1) has a solution $w^+$ for exact data g (which need not be unique). Only approximate data $g^\varepsilon$ are available, with
$$\|g^\varepsilon - g\| \le \varepsilon, \quad \varepsilon > 0. \qquad (2)$$
Besides the classical Tikhonov–Phillips regularization, a plethora of interesting variational and iterative approaches for ill-posed problems can be found, e.g., in Morozov [1], Tikhonov and Arsenin [2], Bakushinsky and Kokurin [3], and Kaltenbacher et al. [4]. We focus here on iterative methods, as they are also very popular and effective to use in applications. The simplest iterative regularization is the Landweber method—see, e.g., Hanke et al. [5], where the analysis for convergence rates is done under Hölder-type source condition. A more effective method often used in applications is the Levenberg–Marquardt method
$$w_{k+1}^\varepsilon = w_k^\varepsilon + \big(\alpha_k I + F'(w_k^\varepsilon)^* F'(w_k^\varepsilon)\big)^{-1} F'(w_k^\varepsilon)^* \big(g^\varepsilon - F(w_k^\varepsilon)\big). \qquad (3)$$
This was investigated in [6,7,8] under the Hölder-type source condition (HSC) and a posteriori discrepancy principle (DP). Jin [7] proved optimal convergence rates for an a priori chosen geometric step size sequence α k , whereas Hochbruck and Hönig [6] showed convergence with the optimal rate for quite general step size sequences including the geometric sequence. Later, Hanke [8] avoided any constraints on the rate of decay of the regularization parameter to show the optimal convergence rate.
Tautenhahn [9] proved that asymptotic regularization, i.e., the approximation of problem (1) by a solution of the Showalter differential equation (SDE)
$$\frac{d}{dt}\, w^\varepsilon(t) = F'(w^\varepsilon(t))^* \big[g^\varepsilon - F(w^\varepsilon(t))\big], \quad 0 < t \le T, \qquad w^\varepsilon(0) = \bar{w}, \qquad (4)$$
where the regularization parameter T is chosen according to the DP under the HSC, is a stable method to solve nonlinear ill-posed problems.
Solving SDE by the family of Runge-Kutta (RK) methods delivers a family of RK-type iterative regularization methods
$$w_{k+1}^\varepsilon = w_k^\varepsilon + \tau_k\, b^T \big(\delta + \tau_k A\, F'(w_k^\varepsilon)^* F'(w_k^\varepsilon)\big)^{-1} \mathbb{1}\, F'(w_k^\varepsilon)^* \big(g^\varepsilon - F(w_k^\varepsilon)\big), \quad k \in \mathbb{N}_0, \qquad (5)$$
where $\mathbb{1}$ denotes the $(s \times 1)$ vector of identity operators and $\delta$ is the $(s \times s)$ diagonal matrix of bounded linear operators with the identity operator on the entire main diagonal and the zero operator outside of it, with respect to the appropriate spaces. The parameter $\tau_k = 1/\alpha_k$ in (5) is the step length, also called the relaxation parameter. The $(s \times s)$ matrix A and the $(s \times 1)$ vector b are the given parameters that correspond to the specific RK method, building the so-called Butcher tableau (succession of stages). Different choices of the RK parameters generate various iterative methods.
Böckmann and Pornsawad [10] showed convergence for the whole RK-type family (including the well-known Landweber and Levenberg–Marquardt methods). That paper also emphasized the advantages of some procedures from this family, e.g., those with implicit A-stable Butcher tableaux: the Landweber method requires many iteration steps, while those implicit methods need only a few, thus minimizing rounding errors.
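To make the role of the Butcher tableau concrete, the following sketch implements one step of the RK-type iteration (5) for a linear model F(w) = Kw in finite dimensions. The function and parameter names are our own, and the two tableaux shown reduce the step to the Landweber and Levenberg–Marquardt updates; this is an illustration under simplifying assumptions, not the implementation used in the paper.

```python
import numpy as np

def rk_step(w, K, g_eps, tau, A, b):
    """One step of the RK-type iteration (5) for the linear model F(w) = K @ w.

    A (s x s) and b (s,) form the Butcher tableau; tau is the step length
    (relaxation parameter).  Illustrative sketch only.
    """
    s, n = len(b), len(w)
    r = K.T @ (g_eps - K @ w)                    # F'(w)^T (g_eps - F(w))
    # Block system (delta + tau * A (x) K^T K) acting on the s stage copies of r.
    M = np.eye(s * n) + tau * np.kron(A, K.T @ K)
    stages = np.linalg.solve(M, np.tile(r, s))
    return w + tau * (np.kron(b, np.eye(n)) @ stages)   # collapse stages via b^T

# Butcher tableaux: explicit Euler (Landweber) and implicit Euler
# (Levenberg-Marquardt with alpha_k = 1/tau_k).
A_expl, b_expl = np.array([[0.0]]), np.array([1.0])
A_impl, b_impl = np.array([[1.0]]), np.array([1.0])
```

With A = 0, b = 1 the solve is trivial and the step is w + τK^T(g^ε − Kw); with A = b = 1 it becomes w + τ(I + τK^TK)^{-1}K^T(g^ε − Kw), i.e., a Levenberg–Marquardt step with α = 1/τ, matching the remark above.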
Later, Pornsawad and Böckmann [11] filled in the missing results on optimal convergence rates, but only for particular first-stage methods under HSC and using DP.
Our current study considers further the unifying RK-framework described above, as well as a modified version presented below, showing optimality of the RK-regularization schemes under logarithmic source conditions.
An additional term $\alpha_k(w_k^\varepsilon - \xi)$, as in the iteratively regularized Gauss–Newton method (see, e.g., [4]),
$$w_{k+1}^\varepsilon = w_k^\varepsilon - \big(F'(w_k^\varepsilon)^* F'(w_k^\varepsilon) + \alpha_k I\big)^{-1} \big(F'(w_k^\varepsilon)^* (F(w_k^\varepsilon) - g^\varepsilon) + \alpha_k (w_k^\varepsilon - \xi)\big),$$
was added to a modified Landweber method. Thus, Scherzer [12] proved a convergence rate result under HSC without particular assumptions on the nonlinearity of the operator F. Moreover, in Pornsawad and Böckmann [13], the additional term was included in the whole family of iterative RK-type methods (which contains the modified Landweber iteration),
$$w_{k+1}^\varepsilon = w_k^\varepsilon + \tau_k\, b^T \Pi^{-1} \mathbb{1}\, F'(w_k^\varepsilon)^* \big(g^\varepsilon - F(w_k^\varepsilon)\big) - \tau_k^{-1}(w_k^\varepsilon - \zeta), \qquad (6)$$
where $\zeta \in X$ and $\Pi = \delta + \tau_k A\, F'(w_k^\varepsilon)^* F'(w_k^\varepsilon)$. Using a priori and a posteriori stopping rules, convergence rate results for the RK-type family are obtained under HSC if the Fréchet derivative is properly scaled.
Due to the minimal assumptions for the convergence analysis of the modified iterative RK-type methods, an additional term was added to SDE
$$\frac{d}{dt}\, w^\varepsilon(t) = F'(w^\varepsilon(t))^* \big[g^\varepsilon - F(w^\varepsilon(t))\big] - (w^\varepsilon(t) - \bar{w}), \quad 0 < t \le T, \qquad w^\varepsilon(0) = \bar{w}. \qquad (7)$$
Pornsawad et al. [14] investigated this continuous version of the modified iterative RK-type methods for nonlinear inverse ill-posed problems. The convergence analysis yields the optimal rate of convergence under a modified DP and an exponential source condition.
Recently, a second-order asymptotic regularization for the linear problem $\hat{A}x = y$ was investigated in [15],
$$\ddot{x}(t) + \mu \dot{x}(t) + \hat{A}^* \hat{A}\, x(t) = \hat{A}^* y^\delta, \qquad x(0) = \bar{x}, \quad \dot{x}(0) = \dot{\bar{x}},$$
under HSC using DP.
Define
$$\varphi = \varphi_p, \qquad \varphi_p(\lambda) := \begin{cases} \big(\ln \frac{e}{\lambda}\big)^{-p} & \text{for } 0 < \lambda \le 1, \\ 0 & \text{for } \lambda = 0, \end{cases} \qquad (8)$$
with $p > 0$, and the usual logarithmic sourcewise representation
$$w^+ - w_0 = \varphi\big(F'(w^+)^* F'(w^+)\big) v, \quad v \in X, \qquad (9)$$
where $\|v\|$ is sufficiently small and $w_0 \in D(F)$ is an initial guess that may incorporate a priori knowledge of the solution.
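For intuition, the source function in (8) can be evaluated directly. The small illustration below (function name is our own) shows that $\varphi_p$ vanishes at zero but decays far more slowly than any power of λ, which is what makes the logarithmic source condition much weaker than the Hölder one.

```python
import numpy as np

def phi(lam, p):
    """Logarithmic source function from (8): phi_p(lam) = (ln(e/lam))^{-p}
    for 0 < lam <= 1 and phi_p(0) = 0.  Illustrative implementation."""
    lam = np.asarray(lam, dtype=float)
    out = np.zeros_like(lam)
    pos = lam > 0
    out[pos] = np.log(np.e / lam[pos]) ** (-p)
    return out

# phi_p is monotone on (0, 1] with phi_p(1) = 1; even at lam = 1e-6 the
# value is still of order 1/ln(1/lam), not of order lam^nu.
print(phi(np.array([1e-6, 1e-2, 1.0]), 1.0))
```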
In numerous applications—e.g., heat conduction, scattering theory, which are severely ill-posed problems—the Hölder source condition is far too strong. Therefore, Hohage [16] proved convergence and logarithmic convergence rates for the iteratively regularized Gauss–Newton method in a Hilbert space setting, provided a logarithmic source condition (9) is satisfied and DP is used as the stopping rule. Deuflhard et al. [17] showed some convergence rate result for the Landweber iteration using DP and (9) under a Newton–Mysovskii condition on the nonlinear operator. Sufficient conditions for the convergence rate, which is logarithmic in the data noise level, are given.
In Hohage [18], a systematic study of convergence rates for regularization methods under (9) including the case of operator approximations for a priori and a posteriori stopping rules is provided. A logarithmic source condition is considered by Pereverzyev et al. [19] for a derivative-free method, by Mahale and Nair [20] for a simplified generalized Gauss–Newton method, and by Böckmann et al. [21] for the Levenberg–Marquardt method using DP as stopping rule. Pornsawad et al. [22] solved the inverse potential problem, which is exponentially ill-posed, employing the modified Landweber method and proved convergence rate under the logarithmic source condition via DP for this method.
To the best of our knowledge, this is the first time that convergence rates are established both for the whole family of RK methods and for its modified version when applied to severely ill-posed problems, i.e., under the logarithmic source condition.
The structure of this article is as follows. Section 2 provides assumptions and technical estimations. We derive the convergence rate of the RK-type method (5) in Section 3 and of the modified RK-type method (6) in Section 4 under the logarithmic source conditions (8) and (9). In Section 5, the performed numerical experiments confirm the theoretical results.

2. Preliminary Results

Lemma 1.
Let K be a linear operator with $\|K\| \le 1$. For $k \in \mathbb{N}$ with $k \ge 1$ and $e_0 := \varphi(K^*K)v$ with $\varphi$ given by (8) and $p > 0$, there exist positive constants $c_1$ and $c_2$ such that
$$\|(I - K^*K)^k e_0\| \le c_1 (\ln(k+e))^{-p} \|v\|$$
and
$$\|K (I - K^*K)^k e_0\| \le c_2 (k+1)^{-1/2} (\ln(k+e))^{-p} \|v\|.$$
Proof.
By spectral theory, (8), and (A1) and (A2) in [22], we have
$$\|(I - K^*K)^k e_0\| \le \|(I - K^*K)^k \varphi(K^*K) v\| \le \sup_{\lambda \in (0,1]} \big|(1-\lambda)^k (1 - \ln \lambda)^{-p}\big|\, \|v\| \le c_1 (\ln(k+e))^{-p} \|v\|$$
for some constant $c_1 > 0$. Similarly, spectral theory, (8), and (A3) and (A4) in [22] provide
$$\|K (I - K^*K)^k e_0\| \le \|(I - K^*K)^k (K^*K)^{1/2} \varphi(K^*K) v\| \le \sup_{\lambda \in (0,1]} \big|(1-\lambda)^k \lambda^{1/2} (1 - \ln \lambda)^{-p}\big|\, \|v\| \le c_2 (k+1)^{-1/2} (\ln(k+e))^{-p} \|v\|$$
for some constant $c_2 > 0$. □
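The two sup bounds in the proof can be probed numerically on a grid of spectral values. The script below (our illustration with p = 1, not part of the paper) tracks the candidate constants $c_1$ and $c_2$ over growing k and shows they remain of moderate size.

```python
import numpy as np

# Numerical check of the sup bounds in Lemma 1 (illustrative, p = 1).
p = 1.0
lam = np.logspace(-12, 0, 2000)          # grid on (0, 1]
for k in [1, 10, 100, 1000]:
    s1 = np.max((1 - lam) ** k * (1 - np.log(lam)) ** (-p))
    s2 = np.max((1 - lam) ** k * np.sqrt(lam) * (1 - np.log(lam)) ** (-p))
    c1 = s1 * np.log(k + np.e) ** p                    # should stay bounded
    c2 = s2 * np.sqrt(k + 1) * np.log(k + np.e) ** p   # should stay bounded
    print(k, round(c1, 3), round(c2, 3))
```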
Assumption 1.
There exist positive constants $c_L$, $c_r$, and $\hat{c}_R$ and a bounded linear operator $R_w : Y \to Y$ such that for $w \in B_\rho(w_0)$ the following conditions hold:
$$F'(w) = R_w F'(w^+), \qquad (14)$$
$$\|R_w - I\| \le c_L \|w - w^+\|, \qquad (15)$$
$$\|R_w\| \le c_r, \qquad (16)$$
$$\hat{c}_R \le \|R_w - I\|, \qquad (17)$$
where $w^+$ is the exact solution of (1).
Let $e_k := w^+ - w_k^\varepsilon$ denote the error of the kth iterate $w_k^\varepsilon$, and set $S_k := F'(w_k^\varepsilon)$ and $S := F'(w^+)$.
Proposition 1.
Let conditions (14) and (15) in Assumption 1 hold. Then,
$$\|F(w_k^\varepsilon) - F(w^+) - F'(w^+)(w_k^\varepsilon - w^+)\| \le \tfrac{1}{2}\, c_L \|e_k\|\, \|S e_k\| \qquad (18)$$
for $w \in B_\rho(w_0)$.
The proof is given in [22] using the mean value theorem.
Assumption 2.
Let K be a linear operator and τ a positive number. There exist positive constants $D_1$ and $D_2$ such that
$$\|I - \tau f(\tau K K^*)\| \le D_1 \qquad (19)$$
and
$$\|I + \tau f(\tau K K^*)\| \le D_2, \qquad (20)$$
with $f(t) := b^T (I + A t)^{-1} \mathbb{1}$.
We note that the explicit Euler method provides $\|I - \tau f(\tau K K^*)\| \le |1 - \tau|$ and $\|I + \tau f(\tau K K^*)\| \le |1 + \tau|$. Thus, conditions (19) and (20) hold if τ is bounded. For the implicit Euler method, we have
$$\|I - \tau f(\tau K K^*)\| \le \sup_{0 < \lambda \le \lambda_0} \big|1 - \tau (1 + \tau \lambda)^{-1}\big|$$
and
$$\|I + \tau f(\tau K K^*)\| \le \sup_{0 < \lambda \le \lambda_0} \big|1 + \tau (1 + \tau \lambda)^{-1}\big|$$
for some positive number $\lambda_0$. We observe from Figure 1 that conditions (19) and (20) hold for $\tau > 0$.
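The suprema behind conditions (19) and (20) for the implicit Euler method are one-dimensional and easy to tabulate. The snippet below (our illustration of the quantities plotted in Figure 1, with $\lambda_0 = 1$) evaluates them on a grid, confirming that finite constants $D_1$ and $D_2$ exist for each fixed τ.

```python
import numpy as np

# Grid evaluation of |1 -+ tau (1 + tau*lambda)^{-1}| over 0 < lambda <= 1,
# i.e., the two implicit Euler suprema displayed above (illustrative sketch).
lam = np.linspace(1e-6, 1.0, 2000)
for tau in [0.5, 1.0, 10.0, 500.0]:
    D1 = np.max(np.abs(1 - tau / (1 + tau * lam)))
    D2 = np.max(np.abs(1 + tau / (1 + tau * lam)))
    print(tau, round(D1, 3), round(D2, 3))
```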
Finally, we need a technical result for the next two sections.
Lemma 2.
Let Assumptions 1 and 2 hold for the operator $S := F'(w^+)$. Then, there exists a positive number $c_R$ such that
$$\text{(i)} \quad \|I - \tau_k R_{w_k^\varepsilon}^* f(\tau_k S_k S_k^*)\| \le c_R \|w^+ - w_k^\varepsilon\| \qquad (21)$$
and
$$\text{(ii)} \quad \|(1 - \alpha_k) I - \tau_k R_{w_k^\varepsilon}^* f(\tau_k S_k S_k^*)\| \le c_R \|w^+ - w_k^\varepsilon\|. \qquad (22)$$
Proof.
(i) Following the proof technique of Theorem 1 in [22] and using (17), we have
$$1 \le \hat{c}_R^{-1} \|R_w - I\| \qquad (24)$$
and
$$\|I + R_w^*\| \le \hat{c}_R^{-1} \|I - R_w^*\|\, \|I + R_w^*\|. \qquad (25)$$
Using Assumption 2, the estimates (15), (16), (19), (20), and (24), and the triangle inequality, we obtain
$$\begin{aligned} \|I - \tau_k R_{w_k^\varepsilon}^* f(\tau_k S_k S_k^*)\| &= \big\| \tfrac{1}{2} (I + R_{w_k^\varepsilon}^*)(I - \tau_k f(\tau_k S_k S_k^*)) + \tfrac{1}{2} (I - R_{w_k^\varepsilon}^*)(I + \tau_k f(\tau_k S_k S_k^*)) \big\| \\ &\le \tfrac{1}{2} \|I + R_{w_k^\varepsilon}^*\|\, \|I - \tau_k f(\tau_k S_k S_k^*)\| + \tfrac{1}{2} \|I - R_{w_k^\varepsilon}^*\|\, \|I + \tau_k f(\tau_k S_k S_k^*)\| \\ &\le \tfrac{1}{2} \big[ \hat{c}_R^{-1} D_1 \|I + R_{w_k^\varepsilon}^*\| + D_2 \big] \|I - R_{w_k^\varepsilon}^*\| \le c_R \|w^+ - w_k^\varepsilon\|, \end{aligned}$$
with the positive number $c_R = \tfrac{1}{2} \big[\hat{c}_R^{-1} D_1 (1 + c_r) + D_2\big] c_L$.
(ii) Denote $A_k = \tau_k R_{w_k^\varepsilon}^* f(\tau_k S_k S_k^*)$. We have
$$\|(1 - \alpha_k) I - A_k\| = \tfrac{1}{2} \big\| [1 - (1 + \alpha_k)](I + A_k) + [1 + (1 - \alpha_k)](I - A_k) \big\| \le \tfrac{|\alpha_k|}{2} \|I + A_k\| + \tfrac{|2 - \alpha_k|}{2} \|I - A_k\|. \qquad (26)$$
Part (i) ensures an upper bound for the second term of the last formula. Hence, it remains to determine a similar upper bound for $\|I + A_k\|$. To this end, we use the identity $I + PQ = \tfrac{1}{2}[(I - P)(I - Q) + (I + P)(I + Q)]$ applied to $P = R_{w_k^\varepsilon}^*$ and $Q = \tau_k f(\tau_k S_k S_k^*)$. Thus, by using (19) and (20), we obtain
$$\|I + A_k\| \le \tfrac{1}{2} \big[ \|(I - R_{w_k^\varepsilon}^*)(I - \tau_k f(\tau_k S_k S_k^*))\| + \|(I + R_{w_k^\varepsilon}^*)(I + \tau_k f(\tau_k S_k S_k^*))\| \big] \le \tfrac{D_1}{2} \|I - R_{w_k^\varepsilon}^*\| + \tfrac{D_2}{2} \|I + R_{w_k^\varepsilon}^*\| \le c \|w^+ - w_k^\varepsilon\|$$
for some positive c, where the last inequality follows as in (24) and (25). Now, (26) combined with part (i) and the last inequality yields (22). □

3. Convergence Rate for the Iterative RK-Type Regularization

To investigate the convergence rate of the RK-type regularization method (5) under the logarithmic source condition, the nonlinear operator F has to satisfy the local property
$$\|F(w) - F(\tilde{w}) - F'(w)(w - \tilde{w})\| \le \eta\, \|F(w) - F(\tilde{w})\|, \quad \eta < \tfrac{1}{2}, \qquad (27)$$
for $w, \tilde{w} \in B_\rho(w_0) \subset D(F)$, an open ball of radius ρ around $w_0$. In addition, the regularization parameter $k_*$ is chosen according to the generalized discrepancy principle, i.e., the iteration is stopped after $k_*$ steps with
$$\|g^\varepsilon - F(w_{k_*}^\varepsilon)\| \le \gamma \varepsilon < \|g^\varepsilon - F(w_k^\varepsilon)\|, \quad 0 \le k < k_*, \qquad (28)$$
where $\gamma > 2\,\frac{1+\eta}{1-\eta}$ is a positive number. Note that the triangle inequality yields
$$\frac{1}{1+\eta}\, \|F'(w)(w - \tilde{w})\| \le \|F(w) - F(\tilde{w})\| \le \frac{1}{1-\eta}\, \|F'(w)(w - \tilde{w})\|. \qquad (29)$$
In the sequel, we establish an error estimate that will be useful in deriving the logarithmic convergence rate.
Theorem 1.
Let Assumptions 1 and 2 be valid. Assume that problem (1) has a solution $w^+$ in $B_{\rho/2}(w_0)$ and that $g^\varepsilon$ fulfills (2). Furthermore, assume that the Fréchet derivative of F is scaled such that $\|F'(w)\| \le 1$ for $w \in B_{\rho/2}(w_0)$ and that the source conditions (8) and (9) are fulfilled. Thereby, the iterative RK-type regularization is stopped according to the discrepancy principle (28). If $\|v\|$ is sufficiently small, then there exists a constant c depending only on p and $\|v\|$ such that, for any $0 \le k < k_*$,
$$\|w^+ - w_k^\varepsilon\| \le c\, (\ln k)^{-p} \qquad (30)$$
and
$$\|g^\varepsilon - F(w_k^\varepsilon)\| \le 4c\, (k+1)^{-1/2} (\ln k)^{-p}.$$
Proof. 
Using (5), we can show that
$$\begin{aligned} e_{k+1} &= w^+ - w_k^\varepsilon + \tau_k f(\tau_k S_k^* S_k) F'(w_k^\varepsilon)^* (F(w_k^\varepsilon) - g^\varepsilon) \\ &= (I - S^*S) e_k + S^*S e_k + \tau_k f(\tau_k S_k^* S_k) F'(w_k^\varepsilon)^* (F(w_k^\varepsilon) - g^\varepsilon) \\ &= (I - S^*S) e_k + S^* [F(w_k^\varepsilon) - F(w^+) - S(w_k^\varepsilon - w^+)] + S^* [F(w^+) - g^\varepsilon + g^\varepsilon - F(w_k^\varepsilon)] + \tau_k f(\tau_k S_k^* S_k) F'(w_k^\varepsilon)^* (F(w_k^\varepsilon) - g^\varepsilon) \\ &= (I - S^*S) e_k + S^* [F(w_k^\varepsilon) - F(w^+) - S(w_k^\varepsilon - w^+)] + S^*(g - g^\varepsilon) + [S^* - \tau_k f(\tau_k S_k^* S_k) F'(w_k^\varepsilon)^*] (g^\varepsilon - F(w_k^\varepsilon)). \end{aligned} \qquad (31)$$
Using spectral theory and (14), we have
$$f(\tau_k S_k^* S_k) F'(w_k^\varepsilon)^* = F'(w_k^\varepsilon)^* f(\tau_k S_k S_k^*) = S^* R_{w_k^\varepsilon}^* f(\tau_k S_k S_k^*). \qquad (32)$$
Consequently, Equation (31) can be rewritten as
$$e_{k+1} = (I - S^*S) e_k + S^* [F(w_k^\varepsilon) - F(w^+) - S(w_k^\varepsilon - w^+)] + S^*(g - g^\varepsilon) + S^* [I - \tau_k R_{w_k^\varepsilon}^* f(\tau_k S_k S_k^*)](g^\varepsilon - F(w_k^\varepsilon)) = (I - S^*S) e_k + S^*(g - g^\varepsilon) + S^* z_k, \qquad (33)$$
where
$$z_k = F(w_k^\varepsilon) - F(w^+) - S(w_k^\varepsilon - w^+) + [I - \tau_k R_{w_k^\varepsilon}^* f(\tau_k S_k S_k^*)](g^\varepsilon - F(w_k^\varepsilon)). \qquad (34)$$
By recurrence, Equation (33) yields
$$e_k = (I - S^*S)^k e_0 + \sum_{j=1}^{k} (I - S^*S)^{j-1} S^* (g - g^\varepsilon) + \sum_{j=1}^{k} (I - S^*S)^{j-1} S^* z_{k-j}. \qquad (35)$$
Moreover, it holds that
$$S e_k = (I - SS^*)^k S e_0 + \sum_{j=1}^{k} (I - SS^*)^{j-1} SS^* (g - g^\varepsilon) + \sum_{j=1}^{k} (I - SS^*)^{j-1} SS^* z_{k-j}. \qquad (36)$$
We will prove by mathematical induction that
$$\|e_k\| \le c\, (\ln(k+e))^{-p} \qquad (37)$$
and
$$\|S e_k\| \le c\, (k+1)^{-1/2} (\ln(k+e))^{-p} \qquad (38)$$
hold for all $0 \le k < k_*$ with a positive constant c independent of k. Using the discrepancy principle (28), the triangle inequality, and $\gamma > 2\frac{1+\eta}{1-\eta}$, we can show that
$$\|g^\varepsilon - F(w_k^\varepsilon)\| \le 2 \|g^\varepsilon - F(w_k^\varepsilon)\| - \gamma \varepsilon \le 2 \|g - F(w_k^\varepsilon)\| \le \frac{2}{1-\eta}\, \|S e_k\|. \qquad (39)$$
Using Proposition 1, Lemma 2, (34), and (39), it follows that
$$\|z_k\| \le \|F(w_k^\varepsilon) - F(w^+) - S(w_k^\varepsilon - w^+)\| + \|I - \tau_k R_{w_k^\varepsilon}^* f(\tau_k S_k S_k^*)\|\, \|g^\varepsilon - F(w_k^\varepsilon)\| \le \tfrac{1}{2} c_L \|e_k\| \|S e_k\| + \frac{2 c_R}{1-\eta} \|e_k\| \|S e_k\| \le \hat{c}_1 \|e_k\| \|S e_k\|, \qquad (40)$$
with $\hat{c}_1 = \tfrac{1}{2} c_L + \frac{2 c_R}{1-\eta}$.
By the assumption $\|S\| \le 1$ (see Vainikko and Veterennikov [23], as cited in Hanke et al. [5]), we have
$$\Big\| \sum_{j=0}^{k-1} (I - S^*S)^j S^* \Big\| \le \sqrt{k}$$
and
$$\|(I - S^*S)^j S^*\| \le (j+1)^{-1/2}, \quad j \ge 1.$$
Therefore,
$$\Big\| \sum_{j=1}^{k} (I - S^*S)^{j-1} S^* (g - g^\varepsilon) \Big\| = \Big\| \sum_{j=0}^{k-1} (I - S^*S)^{j} S^* (g - g^\varepsilon) \Big\| \le \sqrt{k}\, \varepsilon \qquad (43)$$
and
$$\Big\| \sum_{j=1}^{k} (I - S^*S)^{j-1} S^* z_{k-j} \Big\| = \Big\| \sum_{j=0}^{k-1} (I - S^*S)^{j} S^* z_{k-j-1} \Big\| \le \sum_{j=0}^{k-1} (j+1)^{-1/2} \|z_{k-j-1}\|. \qquad (44)$$
Using Lemma 1, (40), (43), and (44), Equation (35) becomes
$$\|e_k\| \le c_1 (\ln(k+e))^{-p} \|v\| + \sqrt{k}\, \varepsilon + \sum_{j=0}^{k-1} (j+1)^{-1/2} \|z_{k-j-1}\| \le c_1 (\ln(k+e))^{-p} \|v\| + \sqrt{k}\, \varepsilon + \hat{c}_1 \sum_{j=0}^{k-1} (j+1)^{-1/2} \|e_{k-j-1}\|\, \|S e_{k-j-1}\|. \qquad (45)$$
Employing the induction hypotheses (37) and (38) in the third term of (45), we obtain
$$\begin{aligned} \sum_{j=0}^{k-1} (j+1)^{-1/2} \|e_{k-j-1}\|\, \|S e_{k-j-1}\| &\le c^2 \sum_{j=0}^{k-1} (j+1)^{-1/2} (k-j)^{-1/2} (\ln(k-j-1+e))^{-2p} \\ &= c^2 (\ln(k+e))^{-p} \sum_{j=0}^{k-1} \Big(\frac{j+1}{k+1}\Big)^{-1/2} \Big(\frac{k-j}{k+1}\Big)^{-1/2} \frac{1}{k+1} \Big(\frac{\ln(k+e)}{\ln(k-j-1+e)}\Big)^{2p} (\ln(k+e))^{-p}. \end{aligned} \qquad (46)$$
Similarly to Equation (45) in [22], we have
$$\frac{\ln(k+e)}{\ln(k-j-1+e)} \le E \Big(1 + \ln \frac{k+1}{k-j-1+e}\Big) \qquad (47)$$
with a generic constant $E < 2$ that does not depend on $k \ge 1$. Using (47), we can estimate (46) as
$$\begin{aligned} \sum_{j=0}^{k-1} (j+1)^{-1/2} \|e_{k-j-1}\|\, \|S e_{k-j-1}\| &\le c^2 E^{2p} (\ln(k+e))^{-p} \sum_{j=0}^{k-1} \Big(\frac{j+1}{k+1}\Big)^{-1/2} \Big(\frac{k-j}{k+1}\Big)^{-1/2} \frac{1}{k+1} \Big(1 + \ln \frac{k+1}{k-j-1+e}\Big)^{2p} \\ &\le c^2 E^{2p} (\ln(k+e))^{-p} \sum_{j=0}^{k-1} \Big(\frac{j+1}{k+1}\Big)^{-1/2} \Big(\frac{k-j}{k+1}\Big)^{-1/2} \frac{1}{k+1} \Big(1 - \ln \frac{k-j}{k+1}\Big)^{2p}. \end{aligned}$$
The sum $\sum_{j=0}^{k-1}(\cdot)$ is bounded because the integral
$$\int_{s}^{1-s} x^{-1/2} (1-x)^{-1/2} (1 - \ln(1-x))^{2p}\, dx,$$
with $s := \frac{1}{2(k+1)}$, is bounded from above by a positive constant $E_p$ independent of k. Thus, (45) becomes
$$\|e_k\| \le c_1 (\ln(k+e))^{-p} \|v\| + \sqrt{k}\, \varepsilon + \hat{c}_1 c^2 E^{2p} E_p (\ln(k+e))^{-p} = \big[c_1 \|v\| + c_p c^2\big] (\ln(k+e))^{-p} + \sqrt{k}\, \varepsilon,$$
with $c_p = \hat{c}_1 E^{2p} E_p$.
By the assumption $\|S\| \le 1$ (see Vainikko and Veterennikov [23], as cited in Hanke et al. [5]), we have
$$\Big\| \sum_{j=0}^{k-1} (I - SS^*)^j SS^* \Big\| = \|I - (I - SS^*)^k\| \le 1$$
and
$$\|(I - SS^*)^j SS^*\| \le (j+1)^{-1}.$$
Thus,
$$\Big\| \sum_{j=1}^{k} (I - SS^*)^{j-1} SS^* (g - g^\varepsilon) \Big\| = \Big\| \sum_{j=0}^{k-1} (I - SS^*)^{j} SS^* (g - g^\varepsilon) \Big\| \le \varepsilon \qquad (52)$$
and
$$\Big\| \sum_{j=1}^{k} (I - SS^*)^{j-1} SS^* z_{k-j} \Big\| = \Big\| \sum_{j=0}^{k-1} (I - SS^*)^{j} SS^* z_{k-j-1} \Big\| \le \sum_{j=0}^{k-1} (j+1)^{-1} \|z_{k-j-1}\|. \qquad (53)$$
Using Lemma 1, (40), (52), and (53), Equation (36) can be estimated as
$$\|S e_k\| \le c_2 (k+1)^{-1/2} (\ln(k+e))^{-p} \|v\| + \varepsilon + \hat{c}_1 \sum_{j=0}^{k-1} (j+1)^{-1} \|e_{k-j-1}\|\, \|S e_{k-j-1}\|. \qquad (54)$$
Using (47) and the induction hypotheses (37) and (38) in the third term of (54), we obtain
$$\begin{aligned} \sum_{j=0}^{k-1} (j+1)^{-1} \|e_{k-j-1}\|\, \|S e_{k-j-1}\| &\le c^2 \sum_{j=0}^{k-1} (j+1)^{-1} (k-j)^{-1/2} (\ln(k-j-1+e))^{-2p} \\ &= c^2 (k+1)^{-1/2} (\ln(k+e))^{-p} \sum_{j=0}^{k-1} \Big(\frac{j+1}{k+1}\Big)^{-1} \Big(\frac{k-j}{k+1}\Big)^{-1/2} \Big(\frac{\ln(k+e)}{\ln(k-j-1+e)}\Big)^{2p} (\ln(k+e))^{-p} \frac{1}{k+1} \\ &\le c^2 E^{2p} (k+1)^{-1/2} (\ln(k+e))^{-p} \sum_{j=0}^{k-1} \Big(\frac{j+1}{k+1}\Big)^{-1} \Big(\frac{k-j}{k+1}\Big)^{-1/2} \Big(1 - \ln \frac{k-j}{k+1}\Big)^{2p} \frac{1}{k+1}. \end{aligned} \qquad (55)$$
The summation in (55) is bounded because, with $s := \frac{1}{2(k+1)}$, the integral satisfies
$$\int_{s}^{1-s} x^{-1} (1-x)^{-1/2} (1 - \ln(1-x))^{2p}\, dx \le \tilde{E}_p$$
for some positive constant $\tilde{E}_p$ independent of k. Thus, (54) becomes
$$\|S e_k\| \le c_2 (k+1)^{-1/2} (\ln(k+e))^{-p} \|v\| + \varepsilon + \hat{c}_1 c^2 E^{2p} \tilde{E}_p (k+1)^{-1/2} (\ln(k+e))^{-p} = \big[c_2 \|v\| + \tilde{c}_p c^2\big] (k+1)^{-1/2} (\ln(k+e))^{-p} + \varepsilon,$$
with $\tilde{c}_p = \hat{c}_1 E^{2p} \tilde{E}_p$. Setting $c^* = \max\{c_1, c_2\}$, we have
$$\|e_k\| \le \big[c^* \|v\| + c_p c^2\big] (\ln(k+e))^{-p} + \sqrt{k}\, \varepsilon \qquad (57)$$
and
$$\|S e_k\| \le \big[c^* \|v\| + \tilde{c}_p c^2\big] (k+1)^{-1/2} (\ln(k+e))^{-p} + \varepsilon. \qquad (58)$$
The discrepancy principles (28) and (29) provide
$$\gamma \varepsilon \le \|g^\varepsilon - F(w_k^\varepsilon)\| \le \varepsilon + \frac{1}{1-\eta}\, \|S e_k\|, \quad 0 \le k < k_*.$$
Using (58), we obtain
$$(1-\eta)(\gamma - 1)\, \varepsilon \le \|S e_k\| \le \big[c^* \|v\| + \tilde{c}_p c^2\big] (k+1)^{-1/2} (\ln(k+e))^{-p} + \varepsilon, \quad 0 \le k < k_*. \qquad (59)$$
Setting $\omega = (1-\eta)(\gamma - 1) - 1 > 0$, (59) leads to
$$\varepsilon \le \frac{1}{\omega} \big[c^* \|v\| + \tilde{c}_p c^2\big] (k+1)^{-1/2} (\ln(k+e))^{-p}, \quad 0 \le k < k_*. \qquad (60)$$
Applying (60) to (57), we obtain
$$\|e_k\| \le \Big(1 + \frac{1}{\omega}\Big) \big[c^* \|v\| + \hat{c}_p c^2\big] (\ln(k+e))^{-p}, \quad 0 \le k < k_*,$$
with $\hat{c}_p = \max\{c_p, \tilde{c}_p\}$.
Applying (60) to (58), we obtain
$$\|S e_k\| \le \Big(1 + \frac{1}{\omega}\Big) \big[c^* \|v\| + \hat{c}_p c^2\big] (k+1)^{-1/2} (\ln(k+e))^{-p}, \quad 0 \le k < k_*.$$
We choose $\|v\|$ sufficiently small such that $\big(1 + \frac{1}{\omega}\big) \big[c^* \|v\| + \hat{c}_p c^2\big] \le c$. Thus, the induction is complete. Using (37), we can show that
$$\|e_k\| \le c \Big(\frac{\ln k}{\ln(k+e)}\Big)^{p} (\ln k)^{-p} \le c\, (\ln k)^{-p}.$$
The second assertion is obtained by using (39) as follows:
$$\|g^\varepsilon - F(w_k^\varepsilon)\| \le \frac{2}{1-\eta}\, c\, (k+1)^{-1/2} \Big(\frac{\ln k}{\ln(k+e)}\Big)^{p} (\ln k)^{-p} \le 4c\, (k+1)^{-1/2} (\ln k)^{-p}. \qquad \square$$
We are now in a position to show the logarithmic convergence rate for the iterative RK-type regularization under a logarithmic source condition, when the iteration is stopped according to the discrepancy principle (28).
Theorem 2.
Under the assumptions of Theorem 1 and for $1 \le p \le 2$, one has
$$k_*^{1/2} (\ln k_*)^{p} = O(1/\varepsilon), \qquad (65)$$
$$\|e_{k_*}\| = O\big((-\ln \varepsilon)^{-p}\big).$$
Proof.
From (60), it follows that
$$\varepsilon \le c\, (k+1)^{-1/2} (\ln(k+e))^{-p} \le c\, (k+1)^{-1/2} (\ln(k+1))^{-p}, \quad 0 \le k < k_*,$$
for some positive constant c. By taking $k = k_* - 1$, one obtains (65). Furthermore, Lemma A4 in [22] applied to (65) yields $k_* = O\big((\ln \varepsilon)^{2p} \varepsilon^{-2}\big)$. To show the second assertion, we use $e_0 = \varphi(S^*S) v$ in (30) and proceed as in the proof of Theorem 2 in [22]. □
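The asymptotics above can be sanity-checked numerically: taking ε equal to the critical size $(k+1)^{-1/2}(\ln(k+1))^{-p}$ (with the constant c set to 1 for illustration), the smallest index at which this size is reached behaves like $(\ln\varepsilon)^{2p}\varepsilon^{-2}$. The script below is our illustration, not part of the paper; it reports the ratio of the index to the predicted order, which stays bounded as ε decreases.

```python
import numpy as np

# For each noise level, find the smallest k with (k+1)^{-1/2} (ln(k+1))^{-p} <= eps
# and compare it with the predicted order (ln eps)^{2p} / eps^2 (p = 1, c = 1).
p = 1.0
for eps in [1e-2, 1e-3, 1e-4]:
    k = 1
    while (k + 1) ** -0.5 * np.log(k + 1.0) ** -p > eps:
        k += 1
    ratio = k / (np.log(eps) ** (2 * p) / eps ** 2)
    print(eps, k, round(ratio, 5))
```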

4. Convergence Rate for the Modified Version of the Iterative RK-Type Regularization

The paper [13] contains a study of the modified iterative Runge-Kutta regularization method
$$w_{k+1}^\varepsilon = w_k^\varepsilon + \tau_k\, b^T \Pi^{-1} \mathbb{1}\, F'(w_k^\varepsilon)^* \big(g^\varepsilon - F(w_k^\varepsilon)\big) - \tau_k^{-1} (w_k^\varepsilon - \zeta), \qquad (68)$$
where $\zeta \in D(F)$ and $\Pi = \delta + \tau_k A\, F'(w_k^\varepsilon)^* F'(w_k^\varepsilon)$. More precisely, it presents a detailed convergence analysis and derives Hölder-type convergence rates.
The aim of this section is to show convergence rates for (68) with the natural choice $\zeta = w_0$. We consider here the logarithmic source condition (9) with φ defined by (8), where $\|v\|$ is small enough. That is, we deal with the following method:
$$w_{k+1}^\varepsilon = w_k^\varepsilon + \tau_k\, b^T \Pi^{-1} \mathbb{1}\, F'(w_k^\varepsilon)^* \big(g^\varepsilon - F(w_k^\varepsilon)\big) - \tau_k^{-1} (w_k^\varepsilon - w_0). \qquad (69)$$
We work further under assumptions (27) and (28) with an appropriately chosen constant γ —compare inequality (2.10) in [13]. For the sake of completeness, we recall below the convergence result adapted to the choice ζ = w 0 (compare to Proposition 2.1 and Theorem 2.1 in [13]).
Theorem 3.
Let $w^+$ be a solution of (1) in $B_{\rho/8}(w_0)$ with $w_0 = w_0^\varepsilon$, and assume that $g^\varepsilon$ fulfills (2). If the parameters $\alpha_k$, $k \in \mathbb{N}_0$, with $\sum_{k=0}^{\infty} \alpha_k < \infty$ are small enough and if the termination index $k_*$ is defined by (28), then $w_{k_*}^\varepsilon \to w^+$ as $\varepsilon \to 0$.
We state below a result on essential upper bounds for the errors in (69).
Theorem 4.
Let Assumptions 1 and 2 hold for the operator $S := F'(w^+)$. Assume that problem (1) has a solution $w^+$ in $B_{\rho/8}(w_0)$ and that $g^\varepsilon$ fulfills (2). Assume that the Fréchet derivative of F is scaled such that $\|F'(w)\| \le 1$ for $w \in B_{\rho/8}(w_0)$ and that the parameters $\alpha_k$, $k \in \mathbb{N}_0$, with $\sum_{k=0}^{\infty} \alpha_k < \infty$ are small enough. Furthermore, assume that the source condition (9) is fulfilled and that the modified RK-type regularization method (69) is stopped according to (28). If $\|v\|$ is sufficiently small, then there exists a constant c depending only on p and $\|v\|$ such that, for any $0 \le k < k_*$,
$$\|w^+ - w_k^\varepsilon\| \le c\, (\ln k)^{-p} \qquad (70)$$
and
$$\|g^\varepsilon - F(w_k^\varepsilon)\| \le 4c\, (k+1)^{-1/2} (\ln k)^{-p}. \qquad (71)$$
Proof. 
First, we deduce an explicit formula for $e_k = w^+ - w_k^\varepsilon$. We proceed similarly to the proof of Theorem 1, but this time we need to take into account the additional term $\alpha_k (w_k^\varepsilon - w_0)$. Thus, the proof steps are as follows.
I. We establish an explicit formula for the error $e_k$:
$$\begin{aligned} e_{k+1} &= e_k + \tau_k f(\tau_k S_k^* S_k) F'(w_k^\varepsilon)^* (F(w_k^\varepsilon) - g^\varepsilon) + \alpha_k (w_k^\varepsilon - w_0) \\ &= (1 - \alpha_k) e_k + \tau_k f(\tau_k S_k^* S_k) F'(w_k^\varepsilon)^* (F(w_k^\varepsilon) - g^\varepsilon) + \alpha_k (w^+ - w_0) \\ &= (1 - \alpha_k)(I - S^*S) e_k + (1 - \alpha_k) S^* [F(w_k^\varepsilon) - F(w^+) - S(w_k^\varepsilon - w^+)] + (1 - \alpha_k) S^* (g - g^\varepsilon) \\ &\quad + (1 - \alpha_k) S^* (g^\varepsilon - F(w_k^\varepsilon)) + \alpha_k (w^+ - w_0) - \tau_k f(\tau_k S_k^* S_k) F'(w_k^\varepsilon)^* (g^\varepsilon - F(w_k^\varepsilon)) \\ &= (1 - \alpha_k)(I - S^*S) e_k + (1 - \alpha_k) S^* q_k + (1 - \alpha_k) S^* (g - g^\varepsilon) \\ &\quad + S^* \big(I - \tau_k R_{w_k^\varepsilon}^* f(\tau_k S_k S_k^*)\big)(g^\varepsilon - F(w_k^\varepsilon)) + \alpha_k S^* (F(w_k^\varepsilon) - g^\varepsilon) + \alpha_k (w^+ - w_0), \end{aligned} \qquad (72)$$
where the last equality follows from (32). We denoted $q_k = F(w_k^\varepsilon) - F(w^+) - S(w_k^\varepsilon - w^+)$.
Thus, (72) can be shortly rewritten as
$$e_{k+1} = (1 - \alpha_k)(I - S^*S) e_k + (1 - \alpha_k) S^* (g - g^\varepsilon) + S^* z_k + \alpha_k (w^+ - w_0) \qquad (73)$$
with
$$z_k = (1 - \alpha_k) q_k + \big[(1 - \alpha_k) I - \tau_k R_{w_k^\varepsilon}^* f(\tau_k S_k S_k^*)\big](g^\varepsilon - F(w_k^\varepsilon)). \qquad (74)$$
Therefore, we obtain the following closed formula:
$$e_k = \Big[\prod_{j=0}^{k-1} (1 - \alpha_j)\, (I - S^*S)^k + \sum_{j=0}^{k-1} \alpha_{k-j-1} (I - S^*S)^j \prod_{l=1}^{j} (1 - \alpha_{k-l})\Big] e_0 + \sum_{j=1}^{k} (I - S^*S)^{j-1} \prod_{l=1}^{j} (1 - \alpha_{k-l})\, S^* (g - g^\varepsilon) + \sum_{j=0}^{k-1} \prod_{l=k-j}^{k-1} (1 - \alpha_l)\, (I - S^*S)^j S^* z_{k-j-1}. \qquad (75)$$
From Proposition 1, (34), Lemma 2, (39), and (74), it follows that
$$\|z_k\| \le \hat{c}\, \|e_k\|\, \|S e_k\| \qquad (76)$$
for any $0 \le k \le k_*$, where $\hat{c}$ is a positive constant.
II. Following the technical steps of the proof of Theorem 1 in [22], one can similarly show by induction that there is a positive number c such that the following inequalities hold for any $0 \le k \le k_*$:
$$\|e_k\| \le c\, (\ln(k+e))^{-p}, \qquad (77)$$
$$\|S e_k\| \le c\, (k+1)^{-1/2} (\ln(k+e))^{-p}. \qquad (78)$$
Then, one can eventually obtain (70) and (71) as in the mentioned proof. Note that Theorem 1 in [22] is based on Proposition 1 in [22], which requires small enough parameters $\alpha_k$ such that $\sum_{k=0}^{\infty} \alpha_k < \infty$ and $\alpha_{n-k+1} \le \big(\frac{\ln(k+e)}{\ln(k+1+e)}\big)^p$ for all $n > k \ge 1$ (compare (16) on page 4 in [22]). Since the smallest value of $\frac{\ln(k+e)}{\ln(k+1+e)}$ is about 0.84 (attained at k = 1), one can clearly find $\alpha_k \in (0,1)$ small enough to satisfy the imposed inequalities, e.g., a harmonic-type sequence such as $\alpha_k = 1/(k+2)^r$ for some $r > 1$. □
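The conditions on the parameters α_k quoted above are easy to verify for the suggested harmonic-type sequence. The snippet below checks them numerically for the assumed values r = 1.5 and p = 1 (our illustrative choices).

```python
import numpy as np

# Check for alpha_k = 1/(k+2)^r, r = 1.5: the series sums finitely and
# alpha_{n-k+1} <= (ln(k+e)/ln(k+1+e))^p for all n > k >= 1, i.e.,
# alpha_m <= min_k (ratio)^p for all m = n - k + 1 >= 2.
r, p = 1.5, 1.0
alpha = lambda k: 1.0 / (k + 2) ** r
partial = sum(alpha(k) for k in range(10**5))        # truncated sum, r > 1
bound = min((np.log(k + np.e) / np.log(k + 1 + np.e)) ** p
            for k in range(1, 1000))                 # smallest ratio, at k = 1
ok = all(alpha(m) <= bound for m in range(2, 10**4))
print(round(partial, 3), round(bound, 3), ok)
```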
One can show convergence rates for the modified Runge-Kutta regularization method, as done in the previous section for the unmodified version.
Theorem 5.
Under the assumptions of Theorem 4 and for $1 \le p \le 2$, one has
$$k_*^{1/2} (\ln k_*)^{p} = O(1/\varepsilon),$$
$$\|e_{k_*}\| = O\big((-\ln \varepsilon)^{-p}\big).$$

5. Numerical Example

The purpose of the following numerical example is to verify the error estimates shown above. Define the nonlinear operator $F : L^2[0,1] \to L^2[0,1]$ as
$$[F(w)](s) = \exp\Big(\int_0^1 k(s,t)\, w(t)\, dt\Big),$$
with the kernel function
$$k(s,t) = \begin{cases} s(1-t) & \text{if } s < t, \\ t(1-s) & \text{if } t \le s. \end{cases}$$
The noisy data are given by $g^\varepsilon(s) = \exp(\sin(\pi s)/\pi^2) + \varepsilon \cos(100 s)$, $s \in [0,1]$, and the exact solution is $w^*(t) = \sin(\pi t)$. In order to demonstrate the results in Theorems 1 and 4, we consider the Landweber, Levenberg–Marquardt (LM), Lobatto IIIC, and Radau IIA methods; see Table 1 for the Butcher tableaux.
The implementation in this section is the same as the one reported in [10,13]. The number of basis functions is 65 and the number of equidistant grid points is 150, while the parameter $\tau_k$ is the harmonic sequence term $(k+1)^{1.1}$. As expected, the results in Figure 2 show that the curve of $\ln \|w^+ - w_k^\varepsilon\|$ lies below a straight line with slope $-p$, as suggested by (30) in Theorem 1 and (70) in Theorem 4.
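For readers who want to reproduce the flavor of this experiment, the sketch below discretizes the operator above and runs the Landweber member of the family with the discrepancy principle (28). It uses a fixed step length, the initial guess $w_0 = 0$, and γ = 2, which are our simplifications of the paper's setup (harmonic τ_k, 65 basis functions, $w_0 = \varepsilon w^*$), so the numbers it produces are only indicative.

```python
import numpy as np

# Landweber iteration (explicit Euler member of the RK family) for
# [F(w)](s) = exp(int_0^1 k(s,t) w(t) dt) with discrepancy-principle stopping.
n = 150
s = np.linspace(0, 1, n)
h = 1.0 / (n - 1)
K = np.where(s[:, None] < s[None, :],
             s[:, None] * (1 - s[None, :]),
             s[None, :] * (1 - s[:, None])) * h          # kernel k(s,t) * dt

F = lambda w: np.exp(K @ w)                              # [F(w)](s)
eps = 1e-3
g_eps = np.exp(np.sin(np.pi * s) / np.pi ** 2) + eps * np.cos(100 * s)
w_true = np.sin(np.pi * s)

w, gamma, tau = np.zeros(n), 2.0, 50.0                   # illustrative choices
for k in range(20000):
    r = g_eps - F(w)
    if np.linalg.norm(r) * np.sqrt(h) <= gamma * eps:    # discrepancy principle
        break
    J = np.exp(K @ w)[:, None] * K                       # Frechet derivative
    w = w + tau * (J.T @ r)
print(k, np.linalg.norm(w - w_true) * np.sqrt(h))
```

The iteration stops once the weighted residual drops below γε; the printed output is the stopping index and the weighted error $\|w - w^*\|$.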

6. Summary and Outlook

Up to now, the logarithmic convergence rate under logarithmic source condition has only been investigated for particular examples, namely, the Levenberg–Marquardt method (Böckmann et al. [21]) and the modified Landweber method (Pornsawad et al. [22]). Here, we extended the results to the whole family of Runge-Kutta-type methods with and without modification. For the future, it is still open to prove the optimal convergence rate under Hölder source condition for the whole family without modification.

Author Contributions

The authors P.P., E.R., and C.B. carried out jointly this research work and drafted the manuscript together. All the authors validated the article and read the final version. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors are grateful to the referees for the interesting comments and suggestions that helped to improve the quality of the manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Morozov, V.A. On the solution of functional equations by the method of regularization. Sov. Math. Dokl. 1966, 7, 414–417.
2. Tikhonov, A.N.; Arsenin, V.Y. Solutions of Ill-Posed Problems; V. H. Winston and Sons: New York, NY, USA, 1977.
3. Bakushinsky, A.B.; Kokurin, M.Y. Iterative Methods for Approximate Solution of Inverse Problems; Springer: Berlin/Heidelberg, Germany, 2004.
4. Kaltenbacher, B.; Neubauer, A.; Scherzer, O. Iterative Regularization Methods for Nonlinear Ill-Posed Problems; Walter de Gruyter: Berlin, Germany, 2008.
5. Hanke, M.; Neubauer, A.; Scherzer, O. A convergence analysis of the Landweber iteration for nonlinear ill-posed problems. Numer. Math. 1995, 72, 21–37.
6. Hochbruck, M.; Hönig, M. On the convergence of a regularizing Levenberg-Marquardt scheme for nonlinear ill-posed problems. Numer. Math. 2010, 115, 71–79.
7. Jin, Q. On a regularized Levenberg-Marquardt method for solving nonlinear inverse problems. Numer. Math. 2010, 115, 229–259.
8. Hanke, M. The regularizing Levenberg-Marquardt scheme is of optimal order. J. Integral Equ. Appl. 2010, 22, 259–283.
9. Tautenhahn, U. On the asymptotical regularization of nonlinear ill-posed problems. Inverse Probl. 1994, 10, 1405–1418.
10. Böckmann, C.; Pornsawad, P. Iterative Runge-Kutta-type methods for nonlinear ill-posed problems. Inverse Probl. 2008, 24, 025002.
11. Pornsawad, P.; Böckmann, C. Convergence rate analysis of the first-stage Runge-Kutta-type regularizations. Inverse Probl. 2010, 26, 035005.
12. Scherzer, O. A modified Landweber iteration for solving parameter estimation problems. Appl. Math. Optim. 1998, 38, 45–68.
13. Pornsawad, P.; Böckmann, C. Modified iterative Runge-Kutta-type methods for nonlinear ill-posed problems. Numer. Funct. Anal. Optim. 2016, 37, 1562–1589.
14. Pornsawad, P.; Sapsakul, N.; Böckmann, C. A modified asymptotical regularization of nonlinear ill-posed problems. Mathematics 2019, 7, 419.
15. Zhang, Y.; Hofmann, B. On the second order asymptotical regularization of linear ill-posed inverse problems. Appl. Anal. 2018, 1–26.
16. Hohage, T. Logarithmic convergence rates of the iteratively regularized Gauss-Newton method for an inverse potential and an inverse scattering problem. Inverse Probl. 1997, 13, 1279–1299.
17. Deuflhard, P.; Engl, W.; Scherzer, O. A convergence analysis of iterative methods for the solution of nonlinear ill-posed problems under affinely invariant conditions. Inverse Probl. 1998, 14, 1081–1106.
18. Hohage, T. Regularization of exponentially ill-posed problems. Numer. Funct. Anal. Optimiz. 2000, 21, 439–464.
19. Pereverzyev, S.S.; Pinnau, R.; Siedow, N. Regularized fixed-point iterations for nonlinear inverse problems. Inverse Probl. 2005, 22, 1–22.
20. Mahale, P.; Nair, M.T. A simplified generalized Gauss-Newton method for nonlinear ill-posed problems. Math. Comput. 2009, 78, 171–184.
21. Böckmann, C.; Kammanee, A.; Braunß, A. Logarithmic convergence rate of Levenberg–Marquardt method with application to an inverse potential problem. J. Inverse Ill-Posed Probl. 2011, 19, 345–367.
22. Pornsawad, P.; Sungcharoen, P.; Böckmann, C. Convergence rate of the modified Landweber method for solving inverse potential problems. Mathematics 2020, 8, 608.
23. Vainikko, G.; Veterennikov, A.Y. Iteration Procedures in Ill-Posed Problems; Nauka: Moscow, Russia, 1986.
Figure 1. Plots of (a) $z = |1 - \tau(1 + \tau\lambda)^{-1}|$ and (b) $z = |1 + \tau(1 + \tau\lambda)^{-1}|$ for $0 < \lambda \le 1$ and $0 < \tau \le 500$.
Figure 2. The plot of $\ln \|w^+ - w_k^\varepsilon\|$ versus $\ln(\ln(k))$ for (a) one-step RK methods and (b) two-step methods with $\varepsilon = 10^{-3}$; (c,d) results with $\varepsilon = 10^{-4}$. The parameters γ are (a) 1.1, (b) 25, (c) 1.1, and (d) 100. For (a-d), the harmonic sequence $\tau_k = (k+1)^{1.1}$ and $w_0 = \varepsilon w^*$ are used.
Table 1. Butcher tableaux for (a) the explicit Euler (Landweber), (b) the implicit Euler (Levenberg–Marquardt), (c) the Lobatto IIIC, and (d) the Radau IIA methods.

(a) explicit Euler     (b) implicit Euler     (c) Lobatto IIIC          (d) Radau IIA
0 | 0                  1 | 1                  0 | 1/2  -1/2             1/3 | 5/12  -1/12
--+---                 --+---                 1 | 1/2   1/2             1   | 3/4    1/4
  | 1                    | 1                  --+-----------            ----+------------
                                                | 1/2   1/2                 | 3/4    1/4