Article

Solving Inverse Problem of Distributed-Order Time-Fractional Diffusion Equations Using Boundary Observations and L2 Regularization

1 School of Mathematical Sciences, Liaocheng University, Liaocheng 252000, China
2 School of Mathematical Sciences, Zhejiang University, Hangzhou 310058, China
3 College of Sciences, China Jiliang University, Hangzhou 310018, China
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(14), 3101; https://doi.org/10.3390/math11143101
Submission received: 3 June 2023 / Revised: 23 June 2023 / Accepted: 10 July 2023 / Published: 13 July 2023

Abstract:
This article investigates the inverse problem of estimating the weight function in a distributed-order time-fractional diffusion equation from boundary observations. We propose a method based on $L^2$ regularization to convert the inverse problem into a regularized minimization problem, which we solve with the conjugate gradient algorithm. The minimization functional only requires the weight to have $L^2$ regularity. We prove the weak closedness of the inverse operator, which ensures the existence, stability, and convergence of the regularized solution for the weight in $L^2(0,1)$. We propose a weak source condition for the weight in $C[0,1]$ and, based on it, prove a convergence rate for the regularized solution. In the conjugate gradient algorithm, the gradient of the objective functional is derived through the adjoint technique. The effectiveness of the proposed method and the convergence rate are demonstrated by two numerical examples in two dimensions.

1. Introduction

Recently, owing to its ability to effectively model complex multiscale phenomena, the distributed-order fractional derivative has received considerable attention in a variety of disciplines, including viscoelastic materials [1], transport phenomena [2,3,4,5,6], and control theory [7,8]; we refer to [9,10] for more detailed information on applications of distributed-order fractional (DOF) calculus. Some transport phenomena cannot be effectively modeled by fractional differential equations with single-term [11] or even multi-term [12] fractional derivatives; in such cases, the fractional derivative must be replaced by a weighted integral of the derivative over a range of orders. The resulting distributed-order time-fractional diffusion equation (DOTFDE) can accurately characterize forms of ultraslow anomalous diffusion that exhibit logarithmic-growth-type mean square displacement [4,5,6,13,14,15]. In addition, DOTFDE has numerous applications in various fields, such as the dynamics of quenched random force fields [4,13], polymer physics [5,14], motion in families of iterated maps [6], and motion in non-periodic environments [15].
We denote an open bounded domain by $\Omega \subset \mathbb{R}^d$, let $\partial\Omega$ be its smooth boundary, and let $T \in (0,\infty]$. With suitable initial and boundary conditions, DOTFDEs are formulated as
$$ {}_0D_t^{(\mu)} u = -\mathcal{A}u + f, $$
where $(x,t) \in \Omega \times (0,T]$ and $\mathcal{A}$ is an elliptic differential operator with respect to $x$. The left DOF differential operator of Caputo type is defined by
$$ {}_0D_t^{(\mu)} u(t) = \int_0^1 \mu(\alpha)\, {}_0D_t^{\alpha} u(t)\, d\alpha, $$
where
$$ {}_0D_t^{\alpha} u(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{u'(\tau)}{(t-\tau)^{\alpha}}\, d\tau, \qquad 0 \le \alpha < 1. $$
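To make the two definitions above concrete, here is a small numerical sketch (our own illustration, not code from the paper): the Caputo derivative is approximated with the standard L1 finite-difference scheme, and the distributed-order derivative is obtained by a midpoint quadrature in the order variable $\alpha$, mirroring the order step $h_\alpha$ used later in the experiments.

```python
import math
import numpy as np

def caputo_l1(u_vals, t, alpha):
    """L1 finite-difference approximation of the Caputo derivative 0D_t^alpha u
    on a uniform grid t (error O(h^(2-alpha)) for smooth u)."""
    n = len(t) - 1
    ht = t[1] - t[0]
    du = np.diff(u_vals) / ht                 # u'(tau) on each subinterval
    d = np.zeros(n + 1)
    for k in range(1, n + 1):
        j = np.arange(k)
        w = ((k - j) ** (1 - alpha) - (k - j - 1) ** (1 - alpha)) * ht ** (1 - alpha)
        d[k] = np.sum(w * du[:k]) / math.gamma(2 - alpha)
    return d

def distributed_caputo(u_vals, t, mu, n_alpha=32):
    """0D_t^(mu) u = int_0^1 mu(alpha) 0D_t^alpha u dalpha, midpoint rule in alpha."""
    ha = 1.0 / n_alpha
    mids = (np.arange(n_alpha) + 0.5) * ha
    out = np.zeros_like(t, dtype=float)
    for a in mids:
        out += mu(a) * caputo_l1(u_vals, t, a) * ha
    return out
```

For $u(t) = t^2$, the Caputo derivative is $2t^{2-\alpha}/\Gamma(3-\alpha)$, which the L1 scheme reproduces to a few digits already on a modest grid.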
DOTFDEs have attracted considerable theoretical interest in recent years [16,17,18,19,20,21,22]. In [17], by virtue of the Fourier series solution of DOTFDE and some asymptotic results for Cauchy problems, the authors analyzed the asymptotic behavior of the solution of DOTFDE. The authors of [19] studied strong solutions of DOTFDEs on bounded domains. Using a maximum principle and the formal Fourier solution of DOTFDE, the author of [18] obtained the uniqueness and existence of strong solutions. The authors of [20,21,22] also proved existence, uniqueness, and regularity properties for weak solutions of DOTFDEs. In addition, research on the numerical solution of DOTFDE can be found in [23,24,25,26,27,28,29,30].
Several publications have addressed inverse problems for DOTFDEs in which the weight function $\mu$ is unknown and must be recovered. The authors of [31] recovered $\mu$ from Dirichlet (or Neumann) observational data at a point within the region; by virtue of the representation of the solution to the forward problem, they established the uniqueness of the solution. The authors of [22,32] obtained the uniqueness of inverting $\mu$ from measurements at an interior point, relying on the analyticity of solutions of DOTFDEs. The authors of [33] considered the numerical recovery of the weight from observation at one interior point by minimizing a regularized functional with the conjugate gradient (CG) method. Jin and Kian [34] determined the support of a weight function from the measurement at one boundary point in an unknown medium; they also reconstructed a weight from boundary data in a known medium by minimizing a functional without a penalty term, using a CG-based algorithm with an early stopping rule under the discrepancy principle. For other types of inverse problems for DOTFDEs (such as the estimation of diffusion or potential coefficients, source terms, initial and boundary conditions, and domain geometry), we refer to [35,36,37,38].
Despite these contributions, few papers have thoroughly studied the convergence properties of regularization methods for the inverse problems of DOTFDEs. Our main theoretical results are the weak closedness of the inverse operator and the convergence rate of the regularized solution: the weak closedness ensures the existence, stability, and convergence of the regularized solution, and the convergence rate is derived under a weak source condition that we propose. In this paper, we also focus on the numerical determination of the weight function $\mu$ over a finite interval for DOTFDEs with Neumann boundary conditions. Building upon the methodology introduced in [34], we use the $L^2$ regularization method to transform the inverse problem into a minimization problem, which we solve with the CG method. Note that we use an $L^2$-norm regularization term instead of the $H^1$-norm term of [33]. This imposes a weaker smoothness requirement on the weight function (for comparison, [34] requires $\mu \in C[0,1]$ and [33] requires $\mu \in H^1[0,1]$), and the $L^2$-norm regularization term also makes the gradient in the CG algorithm easier to compute.
The structure of this paper is as follows. Section 2 offers a concise overview of the forward problem. Section 3 explores the inverse problem, starting with the transformation of the problem into a functional minimization using L 2 regularization. Stability analysis for the regularized solution is then performed, and a convergence rate is given under a weak source condition. In Section 4, we introduce a CG algorithm to solve the inverse problem. The convergence rate and the algorithm’s effectiveness are demonstrated through two two-dimensional numerical examples. Finally, in Section 5, we conclude the paper.

2. The Forward Problem

Consider a two-dimensional DOTFDE:
$$ \begin{cases} {}_0D_t^{(\mu)} u + \mathcal{A}u = f & \text{in } \Omega \times (0,T], \\ u|_{t=0} = u_0 & \text{in } \Omega, \\ \dfrac{\partial u}{\partial \nu_{\mathcal{A}}} = 0 & \text{on } \partial\Omega \times [0,T]. \end{cases} \tag{2} $$
The operator $\mathcal{A}$, which is symmetric and uniformly elliptic, is defined on $H_0^1(\Omega) \cap H^2(\Omega)$ by
$$ \mathcal{A}u(x,t) = c(x)\, u(x,t) - \sum_{i=1}^d \sum_{j=1}^d \frac{\partial}{\partial x_j} \Big( a_{ij}(x)\, \frac{\partial u(x,t)}{\partial x_i} \Big), $$
where the coefficients satisfy
$$ a_{ij}(x) = a_{ji}(x), \quad a_{ij}(x) \in C^1(\overline{\Omega}); $$
$$ \exists\, \gamma_1 > 0,\ \gamma_2 > 0: \quad \gamma_1 |\xi|^2 \le \sum_{i=1}^d \sum_{j=1}^d a_{ij}(x)\, \xi_i \xi_j \le \gamma_2 |\xi|^2, \quad \forall\, \xi = (\xi_1, \dots, \xi_d)^T \in \mathbb{R}^d,\ x \in \overline{\Omega}; $$
$$ c(x) \ge 0, \quad c(x) \in C(\overline{\Omega}). $$
Let $\nu(x)$ be the outward unit normal vector of $\partial\Omega$ and $\nu_i$ its $i$-th component. The boundary condition in (2) takes the form
$$ \frac{\partial u}{\partial \nu_{\mathcal{A}}} = \sum_{i=1}^d \sum_{j=1}^d a_{ij}\, \frac{\partial u}{\partial x_j}\, \nu_i. $$
For notational convenience, we abbreviate the left fractional integration operator ${}_0I_t^{1-\alpha}$ as $I^{1-\alpha}$, i.e.,
$$ I^{1-\alpha} v(t) := {}_0I_t^{1-\alpha} v(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t (t-s)^{-\alpha}\, v(s)\, ds. $$
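The operator $I^{1-\alpha}$ can be checked numerically; the sketch below (our illustration, not the paper's code) uses a midpoint rule, whose interior nodes avoid the weakly singular endpoint $s = t$, and compares against the closed form $I^{1-\alpha} v(t) = t^{2-\alpha}/\Gamma(3-\alpha)$ for $v(s) = s$.

```python
import math
import numpy as np

def rl_integral(v, t, alpha, n=4000):
    """Riemann-Liouville integral I^(1-alpha) v(t)
    = 1/Gamma(1-alpha) * int_0^t (t-s)^(-alpha) v(s) ds,
    midpoint rule (interior nodes avoid the singular endpoint s = t)."""
    h = t / n
    s = (np.arange(n) + 0.5) * h
    return float(np.sum((t - s) ** (-alpha) * v(s)) * h / math.gamma(1 - alpha))
```

The midpoint rule converges slowly near the singularity, so the check below is only accurate to a few percent; graded meshes or product quadrature would do better.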
Furthermore, we denote by $V := {}_0H^1(0,T;H^{-1}(\Omega))$ the space of all functions from $H^1(0,T;H^{-1}(\Omega))$ whose trace vanishes at $t=0$. Referring to [20], the following results follow by a standard derivation.
Let $K = \{\mu(\alpha) \in L^2(0,1) \mid \mu(\alpha) \ge 0 \text{ for } \alpha \in (0,1) \text{ and } C_1 \le \|\mu(\alpha)\|_{L^2(0,1)} \le C_2\}$. Assuming that $\mu \in K$, $u_0(x) \in L^2(\Omega)$, and $f(x,t) \in L^2(0,T;L^2(\Omega))$, problem (2) has a unique weak solution $u \in L^2(0,T;H^1(\Omega))$ with
$$ \int_0^1 \mu(\alpha)\, I^{1-\alpha}(u - u_0)\, d\alpha \in V. \tag{3} $$
Furthermore, for all $\varphi \in H^1(\Omega)$ and a.e. $t \in (0,T]$,
$$ \frac{d}{dt} \int_0^1 \int_\Omega \varphi(x)\, \mu(\alpha)\, I^{1-\alpha}\big[u(x,t) - u_0(x)\big]\, dx\, d\alpha + \sum_{i=1}^d \sum_{j=1}^d \int_\Omega D_i \varphi(x)\, a_{ij}(x)\, D_j u(x,t)\, dx = -\int_\Omega \varphi(x)\, c(x)\, u(x,t)\, dx + \int_\Omega \varphi(x)\, f(x,t)\, dx, $$
and
$$ \Big\| \int_0^1 \mu(\alpha)\, I^{1-\alpha}(u - u_0)\, d\alpha \Big\|_{H^1(0,T;H^{-1}(\Omega))} + \|u\|_{L^2(0,T;H^1(\Omega))} \le M, \tag{4} $$
with the positive constant $M$ depending only on $C_1$, $C_2$, $f$, $u_0$, $\gamma_1$, $\gamma_2$, $c(x)$, $\Omega$, and $T$.

3. The Inverse Problem

3.1. The Identification Problem

Typically, measurements are only available on (part of) the boundary $\partial\Omega$ rather than inside the whole domain $\Omega$. Thus, our goal is to identify the exact weight $\mu^\dagger$ in (2) from the extra boundary measurement
$$ u^\dagger(x,t) = \phi(x,t) \quad \text{on } \Gamma_0 \subseteq \partial\Omega, $$
where $u^\dagger$ is the solution of (2) with $\mu = \mu^\dagger$. In practical applications, the exact data $\phi(x,t)$ are often unavailable due to measurement errors. Instead, one obtains a noisy measurement $\phi^\delta(x,t)$ with known noise level:
$$ \|\phi^\delta - \phi\|_{L^2(0,T;L^2(\Gamma_0))} \le \delta. \tag{5} $$
It is widely recognized that parameter identification problems frequently exhibit ill-posedness and usually require regularization. In this paper, $L^2$ regularization is adopted and the inverse problem is formulated as
$$ \mu^{\epsilon,\delta} = \operatorname*{argmin}_{\mu \in K} J_\epsilon(\mu), \tag{6} $$
where
$$ J_\epsilon(\mu) = \frac{1}{2} \int_0^T\!\!\int_{\Gamma_0} \big| F(\mu) - \phi^\delta \big|^2\, dx\, dt + \frac{\epsilon}{2}\, \|\mu - \mu^*\|_{L^2(0,1)}^2, \tag{7} $$
$F(\mu^\dagger) = \phi$, $\epsilon > 0$, and $\mu^* \in K$ (see Chapter 10 in [39]). The choice of $\mu^*$ is an open issue and is crucial for the efficacy of regularization approaches; the convergence rate results in Section 3.3 depend heavily on $\mu^*$. Generally, $\mu^*$ should encode a priori information about the exact parameter $\mu^\dagger$ and is regarded as a prior guess.
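Discretized, $J_\epsilon$ in (7) is just a weighted residual sum plus a weighted penalty. A minimal sketch (the callable `F` is a hypothetical stand-in for the PDE solution operator, which in the paper requires a forward solve; `ht` and `ha` are quadrature weights in time and order):

```python
import numpy as np

def J_eps(F, mu, phi_delta, eps, mu_prior, ht, ha):
    """Discrete analogue of J_eps(mu) = 1/2 ||F(mu) - phi^delta||^2
    + eps/2 ||mu - mu*||^2, with simple quadrature weights ht, ha."""
    residual = F(mu) - phi_delta
    data_term = 0.5 * np.sum(residual ** 2) * ht
    penalty = 0.5 * eps * np.sum((mu - mu_prior) ** 2) * ha
    return data_term + penalty
```

By construction, $J_\epsilon$ vanishes exactly when the forward output matches the data and the weight equals the prior guess.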

3.2. Existence, Stability, and Convergence of the Regularized Solutions

We first give three properties of the regularized solutions in (6).
(i) Existence: there exists a minimizer $\mu^{\epsilon,\delta}$ for any data $\phi^\delta \in L^2(0,T;L^2(\Gamma_0))$.
(ii) Stability: for a given regularization parameter $\epsilon$, the minimizers of (7) depend continuously on $\phi^\delta$.
(iii) Convergence: as the noise level $\delta$ and the regularization parameter $\epsilon$ (chosen by an a priori rule) both tend to zero, the regularized solutions $\mu^{\epsilon,\delta}$ converge to the exact parameter $\mu^\dagger$.
These properties establish the well-posedness of the minimization problem and the reliability of the regularized solutions. Once the weak closedness of the mapping $F : \mu \mapsto u_\mu(x,t)|_{\Gamma_0}$ is established, the proof of (i)–(iii) is standard (see [39]).
Proposition 1 (weak closedness). If $\mu_n \rightharpoonup \mu \in K$ in $L^2(0,1)$ and $F(\mu_n) \rightharpoonup y \in L^2(0,T;L^2(\Gamma_0))$, then
$$ F(\mu) = y. $$
Proof. 
From (4), $\{u_{\mu_n}\}$ and $\big\{ \int_0^1 \mu_n(\alpha)\, I^{1-\alpha}(u_{\mu_n} - u_0)\, d\alpha \big\}$ are bounded in $L^2(0,T;H^1(\Omega))$ and $H^1(0,T;H^{-1}(\Omega))$, respectively. Hence, there exist a subsequence $\{u_{\mu_{n_k}}\}$, $u \in L^2(0,T;H^1(\Omega))$, and $v \in L^2(0,T;H^{-1}(\Omega))$ such that
$$ u_{\mu_{n_k}} \rightharpoonup u \tag{8} $$
and
$$ \frac{d}{dt} \int_0^1 \mu_{n_k}(\alpha)\, I^{1-\alpha}(u_{\mu_{n_k}} - u_0)\, d\alpha \rightharpoonup v. $$
Applying the compact embedding of $L^2(0,T;H^1(\Omega))$ into $L^2(0,T;L^2(\Omega))$, we can show that
$$ u_{\mu_{n_k}} \to u \quad \text{in } L^2(0,T;L^2(\Omega)). \tag{9} $$
Now, let us verify that $v = \frac{d}{dt} \int_0^1 \mu(\alpha)\, I^{1-\alpha}(u - u_0)\, d\alpha$. For any given $\psi(x,t) \in C_c^1(0,T;H^1(\Omega))$, the triangle inequality gives
$$ \Big| \int_0^T\!\!\int_\Omega \frac{d}{dt} \int_0^1 \mu_{n_k}(\alpha)\, I^{1-\alpha}(u_{\mu_{n_k}} - u_0)\, d\alpha\, \psi\, dx\, dt - \int_0^T\!\!\int_\Omega \frac{d}{dt} \int_0^1 \mu(\alpha)\, I^{1-\alpha}(u - u_0)\, d\alpha\, \psi\, dx\, dt \Big| \le \Big| \int_0^T\!\!\int_\Omega \frac{d}{dt} \int_0^1 \mu_{n_k}(\alpha)\, I^{1-\alpha}(u_{\mu_{n_k}} - u)\, d\alpha\, \psi\, dx\, dt \Big| + \Big| \int_0^T\!\!\int_\Omega \frac{d}{dt} \int_0^1 \big[ \mu_{n_k}(\alpha) - \mu(\alpha) \big] I^{1-\alpha}(u - u_0)\, d\alpha\, \psi\, dx\, dt \Big| =: M_1 + M_2. $$
For $M_1$, we have
$$ M_1 = \Big| \int_\Omega \int_0^T \psi(x,t)\, \frac{d}{dt} \int_0^t \Big( \int_0^1 \frac{\mu_{n_k}(\alpha)}{\Gamma(1-\alpha)} (t-\tau)^{-\alpha}\, d\alpha \Big) \big( u_{\mu_{n_k}}(x,\tau) - u(x,\tau) \big)\, d\tau\, dt\, dx \Big| = \Big| \int_\Omega \int_0^T \frac{\partial \psi(x,t)}{\partial t} \Big[ \Big( \int_0^1 \frac{\mu_{n_k}(\alpha)}{\Gamma(1-\alpha)}\, t^{-\alpha}\, d\alpha \Big) * \big( u_{\mu_{n_k}}(x,t) - u(x,t) \big) \Big] dt\, dx \Big|, $$
where $*$ denotes convolution with respect to $t$ and the integration by parts in $t$ uses $\psi \in C_c^1$. Then, utilizing Young's convolution inequality, it follows that
$$ M_1 \le \Big\| \frac{\partial\psi}{\partial t} \Big\|_{L^2(0,T;\Omega)} \Big\| \int_0^1 \frac{\mu_{n_k}(\alpha)}{\Gamma(1-\alpha)}\, t^{-\alpha}\, d\alpha \Big\|_{L^1(0,T)} \big\| u_{\mu_{n_k}} - u \big\|_{L^2(0,T;\Omega)}. $$
Since $\mu_{n_k}(\alpha) \in K$ is nonnegative and
$$ \Big\| \int_0^1 \frac{\mu_{n_k}(\alpha)}{\Gamma(1-\alpha)}\, t^{-\alpha}\, d\alpha \Big\|_{L^1(0,T)} = \int_0^1 \frac{\mu_{n_k}(\alpha)}{\Gamma(1-\alpha)} \int_0^T t^{-\alpha}\, dt\, d\alpha = \int_0^1 \frac{\mu_{n_k}(\alpha)}{\Gamma(2-\alpha)}\, T^{1-\alpha}\, d\alpha \le 2 C_2 \max\{T,1\}, $$
we obtain
$$ M_1 \le 2 C_2 \max\{T,1\} \Big\| \frac{\partial\psi}{\partial t} \Big\|_{L^2(0,T;\Omega)} \big\| u_{\mu_{n_k}} - u \big\|_{L^2(0,T;\Omega)}. $$
From (9), $M_1 \to 0$ as $k \to \infty$.
For $M_2$, we can deduce
$$ M_2 = \Big| \int_0^T\!\!\int_\Omega \frac{\partial \psi(x,t)}{\partial t} \int_0^1 \big( \mu_{n_k}(\alpha) - \mu(\alpha) \big)\, I^{1-\alpha}(u - u_0)\, d\alpha\, dx\, dt \Big| = \Big| \int_0^1 \big( \mu_{n_k}(\alpha) - \mu(\alpha) \big) \int_0^T\!\!\int_\Omega \frac{\partial \psi(x,t)}{\partial t} \Big[ \frac{t^{-\alpha}}{\Gamma(1-\alpha)} * (u - u_0) \Big] dx\, dt\, d\alpha \Big|. $$
Again, by Young's convolution inequality, it is straightforward to verify that
$$ \int_0^T\!\!\int_\Omega \frac{\partial \psi(x,t)}{\partial t} \Big[ \frac{t^{-\alpha}}{\Gamma(1-\alpha)} * (u - u_0) \Big] dx\, dt \in L^2(0,1). $$
In fact,
$$ \Big\| \int_0^T\!\!\int_\Omega \frac{\partial \psi}{\partial t} \Big[ \frac{t^{-\alpha}}{\Gamma(1-\alpha)} * (u - u_0) \Big] dx\, dt \Big\|_{L^2(0,1)} \le \Big\| \frac{\partial\psi}{\partial t} \Big\|_{L^2(0,T;\Omega)} \| u - u_0 \|_{L^2(0,T;\Omega)} \Big( \int_0^1 \Big\| \frac{t^{-\alpha}}{\Gamma(1-\alpha)} \Big\|_{L^1(0,T)}^2 d\alpha \Big)^{1/2} = \Big\| \frac{\partial\psi}{\partial t} \Big\|_{L^2(0,T;\Omega)} \| u - u_0 \|_{L^2(0,T;\Omega)} \Big( \int_0^1 \frac{T^{2-2\alpha}}{(\Gamma(2-\alpha))^2}\, d\alpha \Big)^{1/2} \le 2 \max\{T,1\} \Big\| \frac{\partial\psi}{\partial t} \Big\|_{L^2(0,T;\Omega)} \| u - u_0 \|_{L^2(0,T;\Omega)}. $$
Since $\mu_{n_k} \rightharpoonup \mu$ in $L^2(0,1)$, we obtain
$$ M_2 \to 0 \quad \text{as } k \to \infty. $$
Then, by density, for any $\psi(x,t) \in L^2(0,T;H^1(\Omega))$,
$$ \int_0^T\!\!\int_\Omega \frac{d}{dt} \int_0^1 \Big[ \mu_{n_k}(\alpha)\, I^{1-\alpha}(u_{\mu_{n_k}} - u_0) - \mu(\alpha)\, I^{1-\alpha}(u - u_0) \Big] d\alpha\, \psi\, dx\, dt \to 0 \quad \text{as } k \to \infty. \tag{10} $$
By the weak convergence (8), for all $\psi(x,t) \in L^2(0,T;H^1(\Omega))$, as $k \to \infty$,
$$ \int_0^T \sum_{i=1}^d \sum_{j=1}^d \int_\Omega a_{ij}\, D_j u_{\mu_{n_k}}\, D_i\psi\, dx\, dt \to \int_0^T \sum_{i=1}^d \sum_{j=1}^d \int_\Omega a_{ij}\, D_j u\, D_i\psi\, dx\, dt, \qquad \int_0^T\!\!\int_\Omega c\, u_{\mu_{n_k}}\, \psi\, dx\, dt \to \int_0^T\!\!\int_\Omega c\, u\, \psi\, dx\, dt. \tag{11} $$
Combining (10) and (11) with the weak formulation satisfied by each $u_{\mu_{n_k}}$, we obtain, for any given $\psi(x,t) \in L^2(0,T;H^1(\Omega))$,
$$ \int_0^T \frac{d}{dt} \int_0^1 \int_\Omega I^{1-\alpha}(u - u_0)\, \psi\, dx\; \mu(\alpha)\, d\alpha\, dt + \int_0^T \sum_{i=1}^d \sum_{j=1}^d \int_\Omega a_{ij}\, D_j u\, D_i\psi\, dx\, dt = -\int_0^T\!\!\int_\Omega c\, u\, \psi\, dx\, dt + \int_0^T\!\!\int_\Omega f(x,t)\, \psi(x,t)\, dx\, dt. $$
Then, for all $\psi(x) \in H^1(\Omega)$ and a.e. $t$,
$$ \frac{d}{dt} \int_0^1 \int_\Omega I^{1-\alpha}(u - u_0)\, \psi\, dx\; \mu(\alpha)\, d\alpha + \sum_{i=1}^d \sum_{j=1}^d \int_\Omega a_{ij}\, D_j u\, D_i\psi\, dx = -\int_\Omega c\, u\, \psi\, dx + \int_\Omega f(x,t)\, \psi(x)\, dx. $$
Clearly, $u$ is the weak solution of (2) for $\mu$, and $u = u_\mu$ by the uniqueness of weak solutions. Since every subsequence of $\{u_{\mu_n}\}$ has a further subsequence converging weakly to the same limit $u_\mu$, the whole sequence converges:
$$ u_{\mu_n} \rightharpoonup u_\mu \quad \text{in } L^2(0,T;H^1(\Omega)). $$
By the continuity of the trace operator, $F(\mu_n) = u_{\mu_n}(x,t)|_{\Gamma_0} \rightharpoonup F(\mu) = u_\mu(x,t)|_{\Gamma_0}$ in $L^2(0,T;L^2(\Gamma_0))$. Since also $F(\mu_n) \rightharpoonup y$ in $L^2(0,T;L^2(\Gamma_0))$, the uniqueness of the weak limit yields $F(\mu) = y$. □

3.3. Convergence Rates

Throughout this subsection, we assume that weight functions are continuous on $[0,1]$ and accordingly take $K = \{\mu(\alpha) \in C[0,1] \mid \mu(\alpha) \ge 0 \text{ for } \alpha \in (0,1) \text{ and } C_1 \le \|\mu(\alpha)\|_{L^2(0,1)} \le C_2\}$. We derive the convergence rates of regularized solutions under a weak source condition. To establish this source condition, stated in Theorem 1, we first prove three lemmas.
Lemma 1. Let $\omega(\alpha)$ be continuous on $[0,1]$ and suppose that, for all $t \in (0,T]$ ($T > 0$),
$$ \int_0^1 \omega(\alpha)\, t^\alpha\, d\alpha = 0. \tag{12} $$
Then $\omega(\alpha) = 0$ for all $\alpha \in [0,1]$.
Proof. Define $g(t) = \int_0^1 \omega(\alpha)\, t^\alpha\, d\alpha$. Differentiating $g(t) = 0$ with respect to $t$ repeatedly (and multiplying by $t$ after each differentiation) gives, for $n = 0, 1, 2, \dots$,
$$ \int_0^1 \omega(\alpha)\, \alpha^n\, t^\alpha\, d\alpha = 0. $$
Hence,
$$ \int_0^1 p(\alpha)\, \omega(\alpha)\, t^\alpha\, d\alpha = 0 $$
for any polynomial $p(\alpha)$. By the Weierstrass approximation theorem, there exists a sequence of polynomials $\{p_n(\alpha)\}_{n \ge 0}$ that converges to the continuous function $\omega(\alpha)$ uniformly on $[0,1]$. Taking the limit $n \to \infty$ in $\int_0^1 \omega(\alpha)\, p_n(\alpha)\, t^\alpha\, d\alpha = 0$ yields, for any $t \in (0,T]$,
$$ \int_0^1 \omega(\alpha)^2\, t^\alpha\, d\alpha = 0. $$
Since the integrand is nonnegative and continuous, $\omega \equiv 0$ on $[0,1]$, which completes the proof. □
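The Weierstrass step can be illustrated numerically with Bernstein polynomials, one constructive choice of the approximating sequence $\{p_n\}$ (an illustration of ours; the proof does not depend on this particular choice):

```python
import math
import numpy as np

def bernstein_approx(f, n, x):
    """Degree-n Bernstein polynomial B_n f(x) = sum_k f(k/n) C(n,k) x^k (1-x)^(n-k),
    which converges to f uniformly on [0,1] for continuous f."""
    k = np.arange(n + 1)
    coeff = np.array([math.comb(n, i) for i in k], dtype=float)
    x = np.asarray(x, dtype=float)[:, None]
    basis = coeff * x ** k * (1.0 - x) ** (n - k)   # shape (len(x), n+1)
    return basis @ f(k / n)

xs = np.linspace(0.0, 1.0, 401)
omega = lambda a: np.sin(np.pi * a)   # a sample continuous direction omega(alpha)
err = [np.max(np.abs(bernstein_approx(omega, n, xs) - omega(xs))) for n in (4, 16, 64)]
```

The uniform error `err` decreases as the degree grows, at the slow but guaranteed $O(1/n)$ rate typical of Bernstein approximation.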
Lemma 2. Let $\omega(\alpha)$ be continuous on $[0,1]$ and $\int_0^T |u(t) - u(0)|\, dt \ne 0$. Then $\int_0^1 \omega(\alpha)\, {}_0D_t^\alpha u(t)\, d\alpha = 0$ for all $t \in (0,T)$ if and only if $\omega(\alpha) = 0$.
Proof. Utilizing the definition of the fractional derivative, we observe that, for $t \in (0,T)$,
$$ 0 = \int_0^1 \omega(\alpha)\, {}_0D_t^\alpha u(t)\, d\alpha = \int_0^1 \frac{\omega(\alpha)}{\Gamma(1-\alpha)}\, \frac{d}{dt} \int_0^t (t-\tau)^{-\alpha} \big( u(\tau) - u(0) \big)\, d\tau\, d\alpha = \int_0^1 \frac{\omega(\alpha)}{\Gamma(1-\alpha)}\, \frac{d}{dt} \Big[ t^{-\alpha} * \big( u(t) - u(0) \big) \Big]\, d\alpha. \tag{13} $$
Subsequently, taking the Laplace transform of (13), we can write
$$ 0 = \int_0^1 \frac{\omega(\alpha)}{\Gamma(1-\alpha)}\, s\, \Gamma(1-\alpha)\, s^{\alpha-1} \Big( \mathcal{L}\{u\}(s) - \frac{u(0)}{s} \Big)\, d\alpha = \Big( \int_0^1 \omega(\alpha)\, s^\alpha\, d\alpha \Big) \Big( \mathcal{L}\{u\}(s) - \frac{u(0)}{s} \Big). $$
Since $\int_0^T |u(t) - u(0)|\, dt \ne 0$, we have $\mathcal{L}\{u\}(s) - u(0)/s \not\equiv 0$, which implies $\int_0^1 \omega(\alpha)\, s^\alpha\, d\alpha = 0$. By Lemma 1, we deduce that $\omega(\alpha) = 0$. □
Let us consider a perturbation of $\mu$, denoted by $\tilde\mu := \mu + \tau\omega$, where $\tau \ne 0$ is a real parameter, and let $\tilde u$ denote the solution to the corresponding forward problem (2). Further, define
$$ W_\omega(x,t;\mu) = \lim_{\tau \to 0} \frac{\tilde u(x,t) - u(x,t)}{\tau} $$
to be the solution of the sensitivity problem
$$ \begin{cases} {}_0D_t^{(\mu)} W_\omega(x,t;\mu) = -\mathcal{A} W_\omega(x,t;\mu) + f_\omega(x,t;\mu) & \text{in } \Omega \times (0,T], \\ W_\omega(x,0;\mu) = 0 & \text{in } \Omega, \\ \dfrac{\partial W_\omega(x,t;\mu)}{\partial \nu_{\mathcal{A}}} = 0 & \text{on } \partial\Omega \times [0,T], \end{cases} \tag{14} $$
where $f_\omega(x,t;\mu) = -\int_0^1 \omega(\alpha)\, {}_0D_t^\alpha u_\mu\, d\alpha$.
Lemma 3. Assume that $\omega(\alpha)$ is continuous on $[0,1]$ and $\int_0^T |u_\mu(x,t) - u_0(x)|\, dt \ne 0$. Then $W_\omega(x,t;\mu) = 0$ for $(x,t) \in \Gamma_0 \times (0,T]$ if and only if $\omega(\alpha) = 0$ for $\alpha \in [0,1]$.
Proof. We introduce the ordinary DOF differential equations
$$ {}_0D_t^{(\mu)} W_n(t) = -\lambda_n W_n(t), \quad W_n(0) = 1, \quad t \in (0,T). \tag{15} $$
Here, $\lambda_n$ and $\psi_n(x)$ denote the eigenvalues and eigenfunctions of the operator $\mathcal{A}$ with the homogeneous Neumann boundary condition, and the solutions $W_n(t)$, $n = 1, 2, \dots$, are linearly independent functions. From Corollary 3.1 in [31], we obtain
$$ W_\omega(x,t;\mu) = \sum_{n=1}^\infty \left( \int_0^t \int_\Omega \frac{\partial f_\omega(x,\tau;\mu)}{\partial \tau}\, \psi_n(x)\, dx\; I^{(\mu)} W_n(t-\tau)\, d\tau \right) \psi_n(x). \tag{16} $$
Here, the distributed fractional integral operator is
$$ I^{(\mu)} v(t) = \int_0^t K(t-\tau)\, v(\tau)\, d\tau, \qquad \mathcal{L}\{K\}(s) = \left( \int_0^1 \mu(\alpha)\, s^\alpha\, d\alpha \right)^{-1}. $$
Additionally, taking the Laplace transform of (16) with respect to the variable $t$, we have
$$ \mathcal{L}\{W_\omega\}(x,s;\mu) = \sum_{n=1}^\infty \frac{ s \int_\Omega \mathcal{L}\{f_\omega\}(x,s;\mu)\, \psi_n(x)\, dx \; \mathcal{L}\{W_n\}(s) }{ \int_0^1 \mu(\alpha)\, s^\alpha\, d\alpha }\, \psi_n(x). $$
If $W_\omega(x,t;\mu) = 0$ for $(x,t) \in \Gamma_0 \times (0,T)$, we have
$$ \int_\Omega \mathcal{L}\{f_\omega\}(x,s;\mu)\, \psi_n(x)\, dx\; \psi_n(x) = 0, \quad x \in \Gamma_0,\ n = 1, 2, \dots. $$
If some $\psi_n$ vanished identically on $\Gamma_0$, then, since $(\mathcal{A} - \lambda_n)\psi_n(x) = 0$ in $\Omega$ and $\psi_n = \partial\psi_n/\partial\nu_{\mathcal{A}} = 0$ on $\Gamma_0$, the uniqueness of the Cauchy problem for elliptic equations (Theorem 3.3.1 in [40]) would imply $\psi_n(x) = 0$ in $\Omega$, contradicting the fact that the $\psi_n(x)$ are eigenfunctions of $\mathcal{A}$ with the homogeneous Neumann boundary condition on $\partial\Omega$. Thus, for every $n \in \mathbb{N}^+$, there exists $x_n \in \Gamma_0$ such that $\psi_n(x_n) \ne 0$. Consequently,
$$ \int_\Omega \mathcal{L}\{f_\omega\}(x,s;\mu)\, \psi_n(x)\, dx = 0, \quad n = 1, 2, \dots, $$
and, since the eigenfunctions $\psi_n(x)$ form a complete system in $L^2(\Omega)$, we deduce that
$$ f_\omega(x,t;\mu) = 0, \quad t \in (0,T). $$
By virtue of Lemma 2, $\omega(\alpha) = 0$ on $[0,1]$. □
Theorem 1 (source condition). If $\int_0^T\!\int_\Omega |u(x,t) - u_0(x)|\, dx\, dt \ne 0$, there exists $\xi(x,t) \in L^2(0,T;L^2(\Gamma_0))$ such that, for any $\omega \in C[0,1]$,
$$ (\mu^\dagger - \mu^*, \omega)_{L^2(0,1)} = \int_0^T\!\!\int_{\Gamma_0} W_\omega(x,t;\mu^\dagger)\, \xi(x,t)\, dx\, dt. \tag{17} $$
Proof. We define a functional $H(\omega,\xi)$ by
$$ H(\omega,\xi) = (\mu^\dagger - \mu^*, \omega)_{L^2(0,1)} - \int_0^T\!\!\int_{\Gamma_0} W_\omega(x,t;\mu^\dagger)\, \xi(x,t)\, dx\, dt. $$
In order to prove (17), it suffices to show that there exists $\xi(x,t) \in L^2(0,T;L^2(\Gamma_0))$ such that
$$ H(\omega,\xi) = 0, \quad \forall\, \omega \in C[0,1]. $$
To begin, we demonstrate the existence of $\xi(x,t) \in L^2(0,T;L^2(\Gamma_0))$ satisfying
$$ \frac{\partial}{\partial\omega} H(\omega,\xi) = 0. \tag{18} $$
By the definition of $H(\omega,\xi)$, we obtain
$$ \frac{\partial}{\partial\omega} H(\omega,\xi) = (\mu^\dagger - \mu^*, 1)_{L^2(0,1)} - \int_0^T\!\!\int_{\Gamma_0} \frac{d}{d\omega} W_\omega(x,t;\mu^\dagger)\, \xi(x,t)\, dx\, dt. \tag{19} $$
Using (14) and the fact that $\frac{\partial}{\partial\omega} f_\omega(x,t;\mu^\dagger) = -\int_0^1 {}_0D_t^\alpha u_{\mu^\dagger}\, d\alpha = f_1(x,t;\mu^\dagger)$, we derive that
$$ \frac{\partial}{\partial\omega} W_\omega(x,t;\mu^\dagger) = W_1(x,t;\mu^\dagger), $$
where $W_1(x,t;\mu^\dagger)$ is the solution of (14) with $\omega(\alpha) \equiv 1$. Thus, (19) can be expressed as
$$ \frac{\partial}{\partial\omega} H(\omega,\xi) = (\mu^\dagger - \mu^*, 1)_{L^2(0,1)} - \int_0^T\!\!\int_{\Gamma_0} W_1(x,t;\mu^\dagger)\, \xi(x,t)\, dx\, dt. $$
Lemma 3 implies that $W_1(x,t;\mu^\dagger) \not\equiv 0$ on $\Gamma_0 \times (0,T)$. Accordingly, there exists $\xi(x,t) \in L^2(0,T;L^2(\Gamma_0))$ satisfying (18): it suffices to choose
$$ \xi(x,t) = \frac{(\mu^\dagger - \mu^*, 1)_{L^2(0,1)}}{\| W_1(\cdot,\cdot;\mu^\dagger) \|_{L^2(0,T;L^2(\Gamma_0))}^2}\, W_1(x,t;\mu^\dagger), $$
which also covers the case $(\mu^\dagger - \mu^*, 1)_{L^2(0,1)} = 0$ with $\xi \equiv 0$. Thus, $\xi(x,t) \in L^2(0,T;L^2(\Gamma_0))$, and by (18) there exists a constant $c$ such that $H(\omega,\xi) = c$ for all $\omega \in C[0,1]$. Moreover, since $\omega(\alpha) \equiv 0$ gives $H(\omega,\xi) = 0$, it follows that $c = 0$. □
Theorem 2. Let $\|\phi^\delta - \phi\|_{L^2(0,T;L^2(\Gamma_0))} \le \delta$ and let $\mu^{\epsilon,\delta}$ be the minimizer of (7). Suppose that $\int_0^T\!\int_\Omega |u(x,t) - u_0(x)|\, dx\, dt \ne 0$ and that the following conditions hold:
1. the solution $W_\omega(x,t;\mu)$ of (14) with $\mu \in K$ exists for any $\omega(\alpha) \in C[0,1]$;
2. there exists $r > 0$ such that
$$ \| W_\omega(x,t;\mu) - W_\omega(x,t;\mu^\dagger) \|_{L^2(0,T;\Gamma_0)} \le r\, \|\omega\|_{L^2(0,1)}\, \|\mu - \mu^\dagger\|_{L^2(0,1)} $$
in a sufficiently large ball around $\mu^\dagger$;
3. the function $\xi$ found in Theorem 1 satisfies $r\, \|\xi\|_{L^2(0,T;\Gamma_0)} < 1$.
Then, for $\epsilon \sim \delta$, we have
$$ \| F(\mu^{\epsilon,\delta}) - \phi^\delta \|_{L^2(0,T;\Gamma_0)} = O(\delta) $$
and
$$ \| \mu^{\epsilon,\delta} - \mu^\dagger \|_{L^2(0,1)} = O(\sqrt{\delta}). $$
Proof. To simplify the notation, write $U = L^2(0,T;\Gamma_0)$ and omit the subscripts in the norm $\|\cdot\|_{L^2(0,1)}$ and the inner product $(\cdot,\cdot)_{L^2(0,1)}$.
As $\mu^{\epsilon,\delta}$ minimizes (7), we have $J_\epsilon(\mu^{\epsilon,\delta}) \le J_\epsilon(\mu^\dagger)$, which implies
$$ \| F(\mu^{\epsilon,\delta}) - \phi^\delta \|_U^2 + \epsilon\, \| \mu^{\epsilon,\delta} - \mu^* \|^2 \le \delta^2 + \epsilon\, \| \mu^\dagger - \mu^* \|^2. $$
Thus,
$$ \| F(\mu^{\epsilon,\delta}) - \phi^\delta \|_U^2 + \epsilon\, \| \mu^{\epsilon,\delta} - \mu^\dagger \|^2 \le \delta^2 + \epsilon\, \| \mu^\dagger - \mu^* \|^2 - \epsilon\, \| \mu^{\epsilon,\delta} - \mu^* \|^2 + \epsilon\, \| \mu^{\epsilon,\delta} - \mu^\dagger \|^2 = \delta^2 + 2\epsilon\, \big( \mu^\dagger - \mu^{\epsilon,\delta},\ \mu^\dagger - \mu^* \big). \tag{20} $$
Denote $I_1 = 2\epsilon\, ( \mu^\dagger - \mu^{\epsilon,\delta},\ \mu^\dagger - \mu^* )$. Choosing $\omega(\alpha) = \mu^{\epsilon,\delta}(\alpha) - \mu^\dagger(\alpha)$ in the source condition (17) of Theorem 1, we obtain
$$ I_1 = -2\epsilon\, ( \mu^\dagger - \mu^*, \omega ) = -2\epsilon \int_0^T\!\!\int_{\Gamma_0} W_\omega(x,t;\mu^\dagger)\, \xi(x,t)\, dx\, dt. $$
Using condition 2, we deduce that
$$ u^{\epsilon,\delta} = u^\dagger + W_\omega(x,t;\mu^\dagger) + r^{\epsilon,\delta}, \qquad \| r^{\epsilon,\delta} \|_U \le r\, \|\omega\|\, \| \mu^{\epsilon,\delta} - \mu^\dagger \| = r\, \| \mu^{\epsilon,\delta} - \mu^\dagger \|^2. $$
By utilizing the Cauchy–Schwarz inequality and Young's inequality, we derive
$$ |I_1| = \Big| 2\epsilon \int_0^T\!\!\int_{\Gamma_0} \big( u^{\epsilon,\delta} - u^\dagger - r^{\epsilon,\delta} \big)\, \xi(x,t)\, dx\, dt \Big| \le 2\epsilon\, \|\xi\|_U\, \| u^{\epsilon,\delta} - u^\dagger \|_U + r\epsilon\, \|\xi\|_U\, \| \mu^{\epsilon,\delta} - \mu^\dagger \|^2 \le 2\epsilon\, \|\xi\|_U \big( \| u^{\epsilon,\delta} - \phi^\delta \|_U + \| \phi^\delta - u^\dagger \|_U \big) + r\epsilon\, \|\xi\|_U\, \| \mu^{\epsilon,\delta} - \mu^\dagger \|^2 \le 2\epsilon\, \|\xi\|_U\, \| F(\mu^{\epsilon,\delta}) - \phi^\delta \|_U + 2\epsilon\delta\, \|\xi\|_U + r\epsilon\, \|\xi\|_U\, \| \mu^{\epsilon,\delta} - \mu^\dagger \|^2 \le \frac{\epsilon^2 \|\xi\|_U^2}{\theta} + \theta\, \| F(\mu^{\epsilon,\delta}) - \phi^\delta \|_U^2 + 2\epsilon\delta\, \|\xi\|_U + r\epsilon\, \|\xi\|_U\, \| \mu^{\epsilon,\delta} - \mu^\dagger \|^2. $$
Inserting this estimate of $I_1$ into (20), we see that
$$ (1-\theta)\, \| F(\mu^{\epsilon,\delta}) - \phi^\delta \|_U^2 + \big( 1 - r\, \|\xi\|_U \big)\, \epsilon\, \| \mu^{\epsilon,\delta} - \mu^\dagger \|^2 \le \delta^2 + \frac{\epsilon^2 \|\xi\|_U^2}{\theta} + 2\epsilon\delta\, \|\xi\|_U. $$
Given $0 < \theta < 1$ and condition 3, for $\epsilon \sim \delta$ the right-hand side is $O(\delta^2)$, and we conclude that
$$ \| F(\mu^{\epsilon,\delta}) - \phi^\delta \|_U = O(\delta) $$
(thus, by (5), $\| F(\mu^{\epsilon,\delta}) - \phi \|_U = O(\delta)$) and
$$ \| \mu^{\epsilon,\delta} - \mu^\dagger \| = O(\sqrt{\delta}). \qquad \Box $$

4. Numerical Computation

4.1. Computation of the Gradient for the Regularization Functional

In this subsection, we aim to find the minimizer of $J_\epsilon(\mu)$ in (7) by employing the CG method, where the gradients are determined through an adjoint technique and the sensitivity problem (14). Specifically, the adjoint equation of the forward Equation (2) takes the form
$$ \begin{cases} {}_tD_T^{(\mu)} v = -\mathcal{A}v & \text{in } \Omega \times [0,T), \\ \dfrac{\partial v}{\partial \nu_{\mathcal{A}}} = u - \phi^\delta & \text{on } \Gamma_0 \times [0,T], \\ \dfrac{\partial v}{\partial \nu_{\mathcal{A}}} = 0 & \text{on } (\partial\Omega \setminus \Gamma_0) \times [0,T], \\ v(x,T) = 0 & \text{in } \Omega, \end{cases} \tag{21} $$
where $\phi^\delta$ is the observation data on $\Gamma_0 \times [0,T]$,
$$ {}_tD_T^{(\mu)} v(t) = \int_0^1 \mu(\alpha)\, {}_tD_T^\alpha v(t)\, d\alpha, $$
and
$$ {}_tD_T^\alpha v(t) = \begin{cases} \dfrac{-1}{\Gamma(1-\alpha)} \displaystyle\int_t^T \frac{v'(\tau)}{(\tau-t)^\alpha}\, d\tau, & 0 \le \alpha < 1, \\[1ex] -v'(t), & \alpha = 1. \end{cases} $$
Lemma 4. Let $u(t)$ and $v(t)$ belong to $AC[0,T]$. Then
$$ \int_0^T v(t)\, {}_0D_t^{(\mu)} u(t)\, dt = -u(0)\, \big( {}_tL_T^{(\mu)} v \big)(0) + v(T)\, \big( {}_0L_t^{(\mu)} u \big)(T) + \int_0^T u(t)\, {}_tD_T^{(\mu)} v(t)\, dt, \tag{22} $$
where
$$ \big( {}_0L_t^{(\mu)} v \big)(t) = \int_0^1 \mu(\alpha)\, {}_0I_t^{1-\alpha} v(t)\, d\alpha, \qquad \big( {}_tL_T^{(\mu)} v \big)(t) = \int_0^1 \mu(\alpha)\, {}_tI_T^{1-\alpha} v(t)\, d\alpha, $$
and
$$ {}_tI_T^{1-\alpha} v(t) = \frac{1}{\Gamma(1-\alpha)} \int_t^T (\tau - t)^{-\alpha}\, v(\tau)\, d\tau. $$
The proof of Lemma 4 follows from Lemma 2.1 in [41]. (As a consistency check, for $\mu$ concentrated at $\alpha = 1$, Formula (22) reduces to the classical integration by parts $\int_0^T v u'\, dt = -u(0)v(0) + u(T)v(T) - \int_0^T u v'\, dt$.)
Theorem 3. The gradient of $J_\epsilon(\mu)$ at $\mu(\alpha)$ along the direction $\omega(\alpha)$ is given by
$$ J_\epsilon'(\mu)[\omega] = -\int_0^T\!\!\int_\Omega \Big( \int_0^1 \omega(\alpha)\, {}_0D_t^\alpha u\, d\alpha \Big)\, v\, dx\, dt + \epsilon\, (\mu - \mu^*, \omega)_{L^2(0,1)}, \tag{23} $$
where $v(x,t)$ is the solution of the adjoint Equation (21).
Proof. Consider a perturbation $\tilde\mu$ of $\mu$, and let $u$ and $\tilde u$, respectively, denote the solutions of (2) under the weights $\mu$ and $\tilde\mu$. For convenience, write $W$ for the solution $W_\omega(x,t;\mu)$ of (14). By (7) and (14), we can write
$$ J_\epsilon'(\mu)[\omega] = \lim_{\tau \to 0} \frac{J_\epsilon(\tilde\mu) - J_\epsilon(\mu)}{\tau} = \lim_{\tau \to 0} \Big[ \frac{1}{2\tau} \int_0^T\!\!\int_{\Gamma_0} \big( (\tilde u - \phi^\delta)^2 - (u - \phi^\delta)^2 \big)\, dx\, dt + \frac{\epsilon}{2\tau} \big( (\mu + \tau\omega - \mu^*,\ \mu + \tau\omega - \mu^*)_{L^2(0,1)} - (\mu - \mu^*,\ \mu - \mu^*)_{L^2(0,1)} \big) \Big] = \lim_{\tau \to 0} \Big[ \frac{1}{2\tau} \int_0^T\!\!\int_{\Gamma_0} (\tilde u - u)(\tilde u + u - 2\phi^\delta)\, dx\, dt + \frac{\epsilon}{2\tau} (\tau\omega, \tau\omega)_{L^2(0,1)} + \frac{\epsilon}{\tau} (\mu - \mu^*, \tau\omega)_{L^2(0,1)} \Big] = \int_0^T\!\!\int_{\Gamma_0} W\, (u - \phi^\delta)\, dx\, dt + \epsilon\, (\mu - \mu^*, \omega)_{L^2(0,1)} = \int_0^T\!\!\int_{\Gamma_0} \frac{\partial v}{\partial \nu_{\mathcal{A}}}\, W\, dx\, dt + \epsilon\, (\mu - \mu^*, \omega)_{L^2(0,1)}. \tag{24} $$
In order to prove (23), we multiply the first equation of (14) by $v(x,t)$ and integrate over $\Omega \times (0,T)$:
$$ \int_0^T\!\!\int_\Omega {}_0D_t^{(\mu)} W\, v\, dx\, dt = \int_0^T\!\!\int_\Omega \big( -\mathcal{A}W + f_\omega \big)\, v\, dx\, dt. $$
By applying Lemma 4 and Green's identity, we obtain
$$ 0 = -\int_\Omega W(x,0)\, \big( {}_tL_T^{(\mu)} v \big)(x,0)\, dx + \int_\Omega \big( {}_0L_t^{(\mu)} W \big)(x,T)\, v(x,T)\, dx + \int_0^T\!\!\int_\Omega W \big( {}_tD_T^{(\mu)} v + \mathcal{A}v \big)\, dx\, dt + \int_0^T\!\!\int_{\partial\Omega} \frac{\partial v}{\partial \nu_{\mathcal{A}}}\, W\, dx\, dt - \int_0^T\!\!\int_\Omega f_\omega\, v\, dx\, dt - \int_0^T\!\!\int_{\partial\Omega} \frac{\partial W}{\partial \nu_{\mathcal{A}}}\, v\, dx\, dt. $$
Next, using the initial and boundary conditions in (14) and (21), all terms vanish except the $\Gamma_0$ boundary term and the source term, and we obtain
$$ \int_0^T\!\!\int_{\Gamma_0} \frac{\partial v}{\partial \nu_{\mathcal{A}}}\, W\, dx\, dt = \int_0^T\!\!\int_\Omega f_\omega\, v\, dx\, dt = -\int_0^T\!\!\int_\Omega \Big( \int_0^1 \omega(\alpha)\, {}_0D_t^\alpha u\, d\alpha \Big)\, v\, dx\, dt. $$
Finally, substituting this result into (24) completes the proof. □

4.2. Transformation of the Adjoint Problem

Let $\tilde v(x,\tau) := v(x, T-\tau)$. Then ${}_tD_T^{(\mu)} v(x,t) = {}_0D_\tau^{(\mu)} \tilde v(x,\tau)$ with $\tau = T - t$, which follows from
$$ {}_tD_T^\alpha v(x,t) = \frac{-1}{\Gamma(1-\alpha)} \int_t^T \frac{v_s(x,s)}{(s-t)^\alpha}\, ds = \frac{-1}{\Gamma(1-\alpha)} \int_0^{T-t} \frac{v_s(x, T-s)}{(T-t-s)^\alpha}\, ds = \frac{1}{\Gamma(1-\alpha)} \int_0^{T-t} \frac{\tilde v_s(x,s)}{(T-t-s)^\alpha}\, ds \overset{\tau = T-t}{=} \frac{1}{\Gamma(1-\alpha)} \int_0^\tau \frac{\tilde v_s(x,s)}{(\tau-s)^\alpha}\, ds = {}_0D_\tau^\alpha \tilde v(x,\tau), $$
where we substituted $s \mapsto T-s$ and used $\tilde v_s(x,s) = -v_s(x, T-s)$. Applying this equivalence, we transform the adjoint problem (21) into
$$ \begin{cases} {}_0D_\tau^{(\mu)} \tilde v(x,\tau) = -\mathcal{A}\tilde v(x,\tau), & x \in \Omega,\ \tau \in (0,T], \\ \dfrac{\partial \tilde v(x,\tau)}{\partial \nu_{\mathcal{A}}} = u(x, T-\tau) - \phi^\delta(x, T-\tau), & x \in \Gamma_0,\ \tau \in [0,T], \\ \dfrac{\partial \tilde v(x,\tau)}{\partial \nu_{\mathcal{A}}} = 0, & x \in \partial\Omega \setminus \Gamma_0,\ \tau \in [0,T], \\ \tilde v(x,0) = 0, & x \in \Omega. \end{cases} \tag{25} $$

4.3. Solving the Minimization Problem via the CG Algorithm

For the determination of the unknown weight function $\mu$, we propose the iterative scheme
$$ \mu_{k+1} := \mu_k + \tau_k\, \omega_k, \quad k = 0, 1, 2, \dots, \tag{26} $$
where the descent direction $\omega_k$ is updated according to
$$ \omega_k := -J_\epsilon'(\mu_k) + \lambda_k\, \omega_{k-1} \tag{27} $$
with the Fletcher–Reeves conjugate coefficient
$$ \lambda_k := \frac{\| J_\epsilon'(\mu_k) \|_{L^2(0,1)}^2}{\| J_\epsilon'(\mu_{k-1}) \|_{L^2(0,1)}^2}, \qquad \lambda_0 := 0. \tag{28} $$
For the step size $\tau_k$ in (26), we give the following calculation. From (7), we obtain
$$ J_\epsilon(\mu_k + \tau_k \omega_k) \approx \frac{1}{2} \big\| u_{\mu_k} + \tau_k W_{\omega_k}(x,t,\mu_k) - \phi^\delta \big\|_{L^2(0,T;\Gamma_0)}^2 + \frac{\epsilon}{2} \big\| \mu_k + \tau_k \omega_k - \mu^* \big\|_{L^2(0,1)}^2. $$
Setting
$$ \frac{d J_\epsilon(\mu_k + \tau_k \omega_k)}{d \tau_k} \approx \big( W_{\omega_k}(x,t,\mu_k),\ u_{\mu_k} - \phi^\delta + \tau_k W_{\omega_k}(x,t,\mu_k) \big)_{L^2(0,T;\Gamma_0)} + \epsilon\, \big( \omega_k,\ \mu_k + \tau_k \omega_k - \mu^* \big)_{L^2(0,1)} = 0, $$
we deduce that
$$ \tau_k = -\frac{ \big( W_{\omega_k}(x,t,\mu_k),\ u_{\mu_k} - \phi^\delta \big)_{L^2(0,T;\Gamma_0)} + \epsilon\, \big( \omega_k,\ \mu_k - \mu^* \big)_{L^2(0,1)} }{ \big( W_{\omega_k}(x,t,\mu_k),\ W_{\omega_k}(x,t,\mu_k) \big)_{L^2(0,T;\Gamma_0)} + \epsilon\, \big( \omega_k,\ \omega_k \big)_{L^2(0,1)} }. \tag{29} $$
To guarantee $\mu \in K$, we adopt the pointwise projection
$$ P_{(C_1,C_2)}\, \mu = \max\{ C_1,\ \min\{ C_2,\ \mu \} \}. $$
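Numerically, this projection is a pointwise clip. A one-line sketch (the default bounds match the values $C_1 = 0.1$, $C_2 = 10$ used in Section 4.4):

```python
import numpy as np

def project(mu, c1=0.1, c2=10.0):
    """Pointwise projection P_(C1,C2) mu = max{C1, min{C2, mu}} onto [C1, C2]."""
    return np.clip(mu, c1, c2)
```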
To minimize the functional (7), the whole procedure of the CG method is as follows.

Algorithm 1: The CG method for the minimization problem (6)
1. Select an initial guess $\mu_0$ and set $k := 0$;
2. solve the forward problem (2) with $\mu = \mu_k$; compute the residual $E_k = \| u_{\mu_k} - \phi^\delta \|_{L^2(0,T;\Gamma_0)}$ and the functional value $J_k = J_\epsilon(\mu_k)$ in (7);
3. solve the adjoint problem (25) with $\mu = \mu_k$ and determine $J_\epsilon'(\mu_k)$ via (23);
4. calculate the conjugate coefficient $\lambda_k$ by (28) and the descent direction $\omega_k$ by (27);
5. solve the sensitivity problem (14) with $\omega = \omega_k$ and $\mu = \mu_k$;
6. calculate the step size $\tau_k$ by (29);
7. update the weight by (26);
8. project the update onto $K$ via $\mu_{k+1} = P_{(C_1,C_2)}\, \mu_{k+1}$;
9. increment $k$ by one and return to step 2; iterate until the specified termination condition is met.
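Algorithm 1 can be sketched on a toy discrete analogue. The code below is not the PDE solver: a small matrix `F` stands in for the forward map (so steps 2 and 5 collapse to matrix products, and the sensitivity in direction $\omega_k$ is simply $F\omega_k$), but the CG bookkeeping (Fletcher–Reeves coefficient (28), descent direction (27), exact step size (29), and projection) follows the listing above.

```python
import numpy as np

def cg_tikhonov(F, phi_delta, mu0, mu_prior, eps, c1=0.1, c2=10.0, iters=60):
    """Projected Fletcher-Reeves CG for J(mu) = 1/2||F mu - phi||^2
    + eps/2||mu - mu*||^2, with F a matrix standing in for the PDE forward map."""
    mu = mu0.astype(float).copy()
    gg_prev, omega = None, None
    for _ in range(iters):
        residual = F @ mu - phi_delta                    # step 2: residual
        grad = F.T @ residual + eps * (mu - mu_prior)    # step 3: gradient, cf. (23)
        gg = grad @ grad
        if gg < 1e-20:                                   # converged; avoid 0/0 below
            break
        if omega is None:
            omega = -grad                                # lambda_0 = 0
        else:
            omega = -grad + (gg / gg_prev) * omega       # step 4: (28) and (27)
        W = F @ omega                                    # step 5: sensitivity
        tau = -(W @ residual + eps * (omega @ (mu - mu_prior))) \
              / (W @ W + eps * (omega @ omega))          # step 6: (29)
        mu = np.clip(mu + tau * omega, c1, c2)           # steps 7-8: (26) + projection
        gg_prev = gg
    return mu
```

On a well-conditioned toy problem with noise-free data and a tiny $\epsilon$, the iteration recovers the exact weight vector to high accuracy.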

4.4. Numerical Results

Two examples are chosen to demonstrate the effectiveness of the CG algorithm (Algorithm 1 in Section 4.3). For both examples, we set $\Omega = (0,1) \times (0,1)$, $a_{ij}(x) = \delta_{ij}$ ($1 \le i,j \le 2$), $c(x) = 0$, and $T = 8$. Let $h_x = h_y = \frac{1}{16}$ be the spatial step sizes, $h_t = \frac{1}{32}$ the temporal step size, and $h_\alpha = \frac{1}{64}$ the step size in the order variable. We add a random perturbation $\delta_0\, (2 \times \mathrm{rand(size(data))} - 1)$ to the data and define the noise level $\delta$ as
$$ \delta = \delta_0\, \|\phi\|, $$
where $\delta_0 > 0$. We use the finite element method for the spatial discretization and the L1 method in time to solve the forward problem (2) [30].
In the real world, we often lack an analytic expression for the forward problem's solution, as in Example 1. To verify the correctness of the forward solver, Example 2 is constructed with an analytic expression for the forward solution. For both examples, in Algorithm 1 we set $C_1 = 0.1$ and $C_2 = 10$. The stopping criteria are $|J_{k+1} - J_k| < 1 \times 10^{-10}$ for Example 1 and $|J_{k+1} - J_k| < 1 \times 10^{-7}$ for Example 2. Moreover, we assess the efficiency of the proposed algorithm by computing the $L^2$ error
$$ er(\delta_0) = \| \mu^{\epsilon,\delta} - \mu^\dagger \|, $$
the relative error
$$ Rer(\delta_0) = \| \mu^{\epsilon,\delta} - \mu^\dagger \| / \| \mu^\dagger \|, $$
and the convergence order
$$ \mathrm{Corder} = \log_2 \frac{er(\delta_0)}{er(\delta_0/2)}. $$
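The noise model and the error diagnostics reduce to a few lines; a sketch consistent with the formulas above (the `rand`-style uniform perturbation and the halved-noise convergence order):

```python
import numpy as np

def add_noise(data, delta0, rng):
    """Perturb data by delta0*(2*rand(size)-1), as in the experiments."""
    return data + delta0 * (2.0 * rng.random(data.shape) - 1.0)

def convergence_order(errors):
    """Corder = log2( er(d0) / er(d0/2) ) for a list of errors at
    successively halved noise levels d0, d0/2, d0/4, ..."""
    errors = np.asarray(errors, dtype=float)
    return np.log2(errors[:-1] / errors[1:])
```

For instance, errors that halve with each halving of $\delta_0$ give order $1$, while errors that shrink by $\sqrt{2}$ give the theoretical order $0.5$.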
Example 1. Take $u_0(x,y) = x(x-1)e^x + y(y-1)e^y$, $\mu^\dagger(\alpha) = -2\alpha^2 + 2\alpha + 1$, $f(x,y,t) = 0$, and $\Gamma_0 = \{0 \le x \le 1\} \times \{y = 1\}$. We initialize $\mu_0 = 1$ and set $\mu^* = \mu_0$, both of which are far from $\mu^\dagger(\alpha)$. Our goal is to recover $\mu^\dagger(\alpha)$ from a sequence of noisy data
$$ \phi^\delta \quad \text{on } \Gamma_0 \times [0,T] $$
with $\delta_0 = 0.004, 0.008, 0.016, 0.032, 0.064$, respectively.
For comparison with the exact solution, Figure 1 displays numerical solutions for different δ 0 = 0.004 , 0.016 , 0.064 . It should be noted that the numerical solution approximates the exact solution more accurately the lower the noise level is.
The L 2 error μ k μ of the regularized solution and the residual E k are shown in Figure 2. The error consistently decreases over approximately 100 iterations and then stabilizes between the 100th and 150th iterations, indicating that Algorithm 1 can terminate at this stage. The numerical errors and the convergence orders under various δ 0 are shown in Table 1. The results show that as the noise level decreases, the numerical error also decreases, and the convergence order slightly exceeds the value of 0.5 specified in Theorem 2. It is hypothesized that with additional assumptions, (such as the assumption of the weight function distribution), a higher convergence rate can be achieved.
To illustrate the influence of the initial guess, we choose two different initial guesses, μ0 = 1.3 and μ0 = 1.2(α − 0.5)² + 1.4, for δ0 = 0.008 in Example 1. Figure 3 shows the reconstructions of μ(α) for these two initial guesses. The L² errors ‖μ_k − μ‖ are 0.0093 and 0.0074, respectively. All results of Example 1 indicate that the algorithm is not very sensitive to the choice of the initial guess.
Example 2.
Consider the following example, in which u(x, y, t) = t² cos(πx) cos(πy) is the exact solution of the forward problem. The exact weight function is μ(α) = Γ(3 − α). From the forward Equation (2), we deduce the source term f(x, y, t) = 2( t(t − 1)/ln t + π² t² ) cos(πx) cos(πy). We initialize μ0 = 1.5 and set μ* = μ0 (the a priori guess in the regularization term); both are far from μ(α). Let Γ0 = {0 ≤ x ≤ 1} × {y = 1} and T = 8. Our goal is to recover μ(α) from a sequence of noisy data
φ^δ on Γ0 × [0, T]
with δ0 = 0.004, 0.008, 0.016, 0.032, 0.064, 0.128, respectively.
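As a consistency check (our own sketch, not code from the paper), one can verify the source term numerically: the Caputo derivative of t² is 2t^{2−α}/Γ(3 − α), so the weight μ(α) = Γ(3 − α) cancels the Gamma factor and the distributed-order derivative reduces to ∫₀¹ 2 t^{2−α} dα = 2 t(t − 1)/ln t, exactly the first term of f above.

```python
import numpy as np
from math import log

def dist_order_deriv_t2(t, n=200_000):
    """Distributed-order derivative of u(t) = t^2 with weight Gamma(3 - alpha).

    The weight cancels the Gamma factor in the Caputo derivative, leaving the
    integral of 2 * t**(2 - alpha) over alpha in (0, 1), evaluated here by the
    composite trapezoidal rule.
    """
    a = np.linspace(0.0, 1.0, n + 1)
    vals = 2.0 * t ** (2.0 - a)
    return float(np.sum((vals[:-1] + vals[1:]) / 2.0) * (a[1] - a[0]))

t = 3.7  # any t > 0 with t != 1 (ln t vanishes at t = 1)
closed_form = 2.0 * t * (t - 1.0) / log(t)
print(abs(dist_order_deriv_t2(t) - closed_form))  # numerically ~ 0
```

Adding the Laplacian contribution 2π²t² cos(πx)cos(πy) then reproduces the stated f(x, y, t).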
Figure 4 presents the exact and corresponding numerical solutions for δ0 = 0.004, 0.032, 0.128. Additionally, Figure 5 illustrates the L² error ‖μ_k − μ‖ and the residual E_k = ‖u_{μ_k} − φ^δ‖_{L²(0,T;Γ0)}. The convergence orders and numerical errors for various δ0 are listed in Table 2. The observed convergence order of almost 0.5 is compatible with the estimate in Theorem 2. Furthermore, Figure 6 shows the recovered weight function for δ0 = 0.008 with two different initial guesses, μ0 = 1 and μ0 = 0.5α² − α + 1.5. The corresponding L² errors ‖μ_k − μ‖ are 0.0210 and 0.0215, respectively. The numerical results for Example 2 show that the proposed algorithm is not very sensitive to the choice of the initial guess.

5. Conclusions

This paper focuses on the estimation of the weight function in the DO Caputo derivative for TFDEs. To address this nonlinear inverse problem, we formulate it as a minimization problem with L 2 regularization and derive the convergence rate of the regularized weight function. Additionally, a CG method is utilized to solve the related minimization problem. Furthermore, we present numerical examples to demonstrate the robustness of the proposed algorithm against noise and its effectiveness in accurately recovering smooth solutions for two-dimensional DOTFDEs.
One of the main theoretical results is the proof of the weak closedness of the mapping F : μ ↦ u_μ(x, t)|_{Γ0}, which ensures the existence, stability, and convergence of the regularized solution. We propose a weak source condition and, based on it, obtain the convergence rate of the regularized solution, which is another important theoretical result of this paper. To the best of our knowledge, no previous study has addressed the convergence of the regularized solution for this inverse weight problem. However, we choose a regularization parameter ε ∼ δ without providing a specific strategy for the selection of ε. Furthermore, a standard a posteriori convergence analysis requires the monotonicity of ‖F(μ^{ε,δ}) − φ^δ‖ with respect to ε, but for this inverse problem we are currently unable to establish this condition, so we have not provided an a posteriori convergence analysis. In the future, we will therefore consider an a posteriori convergence analysis of the regularized solution under an a posteriori regularization parameter selection strategy. Note that the convergence rate in Theorem 2 requires μ ∈ C[0, 1]. The convergence rate and the corresponding numerical experiments for more general μ ∈ L²(0, 1), or even discontinuous weights, remain problems for future work.

Author Contributions

Conceptualization, L.Y.; writing—original draft, L.Y.; writing—review & editing, K.L.; visualization, H.W. All authors have read and agreed to the published version of the manuscript.

Funding

The research is supported by the Shandong Province Natural Science Foundation (Grant No. K21LB24), the Research Fund for the Doctoral Program of Liaocheng University (Grant No. 318052118), and the “Guangyue Young Scholar Innovation Team” of Liaocheng University under Grant LCUGYTD2022-01.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Li, M.; Pu, H.; Cao, L.; Sha, Z.; Yu, H.; Zhang, J.; Zhang, L. Damage Creep Model of Viscoelastic Rock Based on the Distributed Order Calculus. Appl. Sci. 2023, 13, 4404.
  2. Mainardi, F.; Mura, A.; Pagnini, G.; Gorenflo, R. Time-fractional diffusion of distributed order. J. Vib. Control 2008, 14, 1267–1290.
  3. Meerschaert, M.M.; Scheffler, H.P. Stochastic model for ultraslow diffusion. Stoch. Process. Their Appl. 2006, 116, 1215–1235.
  4. Sokolov, I.; Chechkin, A.; Klafter, J. Distributed-order fractional kinetics. arXiv 2004, arXiv:cond-mat/0401146.
  5. Chechkin, A.V.; Gorenflo, R.; Sokolov, I.M.; Gonchar, V.Y. Distributed order time fractional diffusion equation. Fract. Calc. Appl. Anal. 2003, 6, 259–280.
  6. Dräger, J.; Klafter, J. Strong anomaly in diffusion generated by iterated maps. Phys. Rev. Lett. 2000, 84, 5998.
  7. Zhou, F.; Zhao, Y.; Li, Y.; Chen, Y. Design, implementation and application of distributed order PI control. ISA Trans. 2013, 52, 429–437.
  8. Mahmoud, G.M.; Farghaly, A.A.; Abed-Elhameed, T.M.; Aly, S.A.; Arafa, A.A. Dynamics of distributed-order hyperchaotic complex van der Pol oscillators and their synchronization and control. Eur. Phys. J. Plus 2020, 135, 32.
  9. Ding, W.; Patnaik, S.; Sidhardh, S.; Semperlotti, F. Applications of distributed-order fractional operators: A review. Entropy 2021, 23, 110.
  10. Debnath, P.; Srivastava, H.; Kumam, P.; Hazarika, B. Fixed Point Theory and Fractional Calculus: Recent Advances and Applications; Forum for Interdisciplinary Mathematics; Springer Nature: Singapore, 2022.
  11. Benson, D.A.; Wheatcraft, S.W.; Meerschaert, M.M. Application of a fractional advection-dispersion equation. Water Resour. Res. 2000, 36, 1403–1412.
  12. Li, Z.; Liu, Y.; Yamamoto, M. Initial-boundary value problems for multi-term time-fractional diffusion equations with positive constant coefficients. Appl. Math. Comput. 2015, 257, 381–397.
  13. Sinai, Y.G. The limiting behavior of a one-dimensional random walk in a random medium. Theory Probab. Appl. 1983, 27, 256–268.
  14. Schiessel, H.; Sokolov, I.; Blumen, A. Dynamics of a polyampholyte hooked around an obstacle. Phys. Rev. E 1997, 56, R2390.
  15. Iglói, F.; Turban, L.; Rieger, H. Anomalous diffusion in aperiodic environments. Phys. Rev. E 1999, 59, 1465.
  16. Kochubei, A.N. Distributed order calculus and equations of ultraslow diffusion. J. Math. Anal. Appl. 2008, 340, 252–281.
  17. Li, Z.; Luchko, Y.; Yamamoto, M. Asymptotic estimates of solutions to initial-boundary-value problems for distributed order time-fractional diffusion equations. Fract. Calc. Appl. Anal. 2014, 17, 1114–1136.
  18. Luchko, Y. Boundary value problems for the generalized time-fractional diffusion equation of distributed order. Fract. Calc. Appl. Anal. 2009, 12, 409–422.
  19. Meerschaert, M.M.; Nane, E.; Vellaisamy, P. Distributed-order fractional diffusions on bounded domains. J. Math. Anal. Appl. 2011, 379, 216–228.
  20. Kubica, A.; Ryszewska, K. Fractional diffusion equation with distributed-order Caputo derivative. J. Integral Equ. Appl. 2019, 31, 195–243.
  21. Li, Z.; Kian, Y.; Soccorsi, E. Initial-boundary value problem for distributed order time-fractional diffusion equations. Asymptot. Anal. 2019, 115, 95–126.
  22. Li, Z.; Luchko, Y.; Yamamoto, M. Analyticity of solutions to a distributed order time-fractional diffusion equation and its application to an inverse problem. Comput. Math. Appl. 2017, 73, 1041–1052.
  23. Gao, G.; Sun, Z. Two alternating direction implicit difference schemes with the extrapolation method for the two-dimensional distributed-order differential equations. Comput. Math. Appl. 2015, 69, 926–948.
  24. Gao, G.; Sun, H.; Sun, Z. Some high-order difference schemes for the distributed-order differential equations. J. Comput. Phys. 2015, 298, 337–359.
  25. Gao, G.; Sun, Z. Two alternating direction implicit difference schemes for two-dimensional distributed-order fractional diffusion equations. J. Sci. Comput. 2016, 66, 1281–1312.
  26. Ford, N.J.; Morgado, M.L.; Rebelo, M. An implicit finite difference approximation for the solution of the diffusion equation with distributed order in time. Electron. Trans. Numer. Anal. 2015, 44, 289–305.
  27. Gao, G.; Alikhanov, A.A.; Sun, Z. The temporal second order difference schemes based on the interpolation approximation for solving the time multi-term and distributed-order fractional sub-diffusion equations. J. Sci. Comput. 2017, 73, 93–121.
  28. Gao, G.; Sun, Z. Two unconditionally stable and convergent difference schemes with the extrapolation method for the one-dimensional distributed-order differential equations. Numer. Methods Partial Differ. Equ. 2016, 32, 591–615.
  29. Morgado, M.L.; Rebelo, M. Numerical approximation of distributed order reaction–diffusion equations. J. Comput. Appl. Math. 2015, 275, 216–227.
  30. Bu, W.; Xiao, A.; Zeng, W. Finite difference/finite element methods for distributed-order time fractional diffusion equations. J. Sci. Comput. 2017, 72, 422–441.
  31. Rundell, W.; Zhang, Z. Fractional diffusion: Recovering the distributed fractional derivative from overposed data. Inverse Probl. 2017, 33, 035008.
  32. Li, Z.; Fujishiro, K.; Li, G. Uniqueness in the inversion of distributed orders in ultraslow diffusion equations. J. Comput. Appl. Math. 2020, 369, 112564.
  33. Liu, J.; Sun, C.; Yamamoto, M. Recovering the weight function in distributed order fractional equation from interior measurement. Appl. Numer. Math. 2021, 168, 84–103.
  34. Jin, B.; Kian, Y. Recovery of a Distributed Order Fractional Derivative in an Unknown Medium. arXiv 2022, arXiv:2207.12929.
  35. Bazhlekova, E. Estimates for a general fractional relaxation equation and application to an inverse source problem. Math. Methods Appl. Sci. 2018, 41, 9018–9026.
  36. Cheng, X.; Yuan, L.; Liang, K. Inverse source problem for a distributed-order time fractional diffusion equation. J. Inverse Ill-Posed Probl. 2020, 28, 17–32.
  37. Yuan, L.; Cheng, X.; Liang, K. Solving a backward problem for a distributed-order time fractional diffusion equation by a new adjoint technique. J. Inverse Ill-Posed Probl. 2020, 28, 471–488.
  38. Hai, D.N.D. Identifying a space-dependent source term in distributed order time-fractional diffusion equations. Math. Control Relat. Fields 2023, 13, 1008–1022.
  39. Engl, H.W.; Hanke, M.; Neubauer, A. Regularization of Inverse Problems; Springer Science & Business Media: Berlin/Heidelberg, Germany, 1996; Volume 375.
  40. Isakov, V. Inverse Problems for Partial Differential Equations, 3rd ed.; Springer: Berlin/Heidelberg, Germany, 2017.
  41. Wei, T.; Wang, J. Determination of Robin coefficient in a fractional diffusion problem. Appl. Math. Model. 2016, 40, 7948–7961.
Figure 1. Reconstruction of μ(α) with δ0 = 0.004, 0.016, 0.064 for Example 1.
Figure 2. Iteration error for Example 1. (a) The L² error ‖μ_k − μ‖. (b) The residual E_k.
Figure 3. Reconstruction of μ(α) for Example 1. (a) Reconstruction with initial guess μ0 = 1.3. (b) Reconstruction with initial guess μ0 = 1.2(α − 0.5)² + 1.4.
Figure 4. Reconstruction of μ(α) with δ0 = 0.004, 0.032, 0.128 for Example 2.
Figure 5. Iteration error for Example 2. (a) The L² error ‖μ_k − μ‖. (b) The residual E_k.
Figure 6. Reconstruction of μ(α) for Example 2. (a) Reconstruction with initial guess μ0 = 1. (b) Reconstruction with initial guess μ0 = 0.5α² − α + 1.5.
Table 1. For Example 1, comparison results for different δ0.

δ0         0.004    0.008    0.016    0.032    0.064
er(δ0)     0.0069   0.0087   0.0149   0.0266   0.0487
Rer(δ0)    0.0052   0.0065   0.0110   0.0197   0.0362
Corder     —        0.3256   0.7712   0.8387   0.8756
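The Corder row of Table 1 can be reproduced from the er(δ0) row: each entry is log₂ of the ratio of consecutive errors; small deviations from the tabulated values stem from the four-digit rounding of er(δ0). For instance:

```python
import math

# er(delta0) values from Table 1, for delta0 = 0.004, 0.008, 0.016, 0.032, 0.064
er = [0.0069, 0.0087, 0.0149, 0.0266, 0.0487]

# Corder = log2( er(delta0) / er(delta0 / 2) ), one value per consecutive pair
corder = [math.log2(er[i + 1] / er[i]) for i in range(len(er) - 1)]
print([round(c, 2) for c in corder])  # ≈ [0.33, 0.78, 0.84, 0.87]
```

These agree with the tabulated 0.3256, 0.7712, 0.8387, 0.8756 up to the rounding of the er values.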
Table 2. For Example 2, comparison results for different δ0.

δ0         0.004    0.008    0.016    0.032    0.064    0.128
er(δ0)     0.0173   0.0232   0.0320   0.0466   0.0571   0.0706
Rer(δ0)    0.0121   0.0163   0.0224   0.0326   0.0400   0.0495
Corder     —        0.4226   0.4634   0.5416   0.2938   0.3059

Yuan, L.; Liang, K.; Wang, H. Solving Inverse Problem of Distributed-Order Time-Fractional Diffusion Equations Using Boundary Observations and L2 Regularization. Mathematics 2023, 11, 3101. https://doi.org/10.3390/math11143101