Article

Reduced Basis Approximation for a Spatial Lotka-Volterra Model

Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, ul. Akademik Georgi Bonchev, Blok 8, 1113 Sofia, Bulgaria
Mathematics 2022, 10(12), 1983; https://doi.org/10.3390/math10121983
Submission received: 4 May 2022 / Revised: 2 June 2022 / Accepted: 7 June 2022 / Published: 8 June 2022

Abstract

We construct a reduced basis approximation for the solution to a system of nonlinear partial differential equations describing the temporal evolution of two populations following the Lotka-Volterra law. The first population’s carrying capacity contains a free parameter varying in a compact set. The reduced basis is constructed by two approaches: a proper orthogonal decomposition of a collection of solution snapshots and a greedy algorithm using an a posteriori error estimator.

1. Introduction

The reduced basis method finds applications in the construction of approximate solutions to parametrised partial differential equations using a carefully chosen low-dimensional projection of the solution manifold associated with the compact set in which the parameter varies. Whenever the problem needs to be solved in a multi-query context (for instance, within an optimisation loop) or for many parameter values, a direct approach based on repeated solutions can incur a very high computational cost. The goal of the method is to approximate the solution by computing its coefficients in an orthonormal basis that is low-dimensional compared to the high-dimensional Galerkin finite element basis used in a standard discretisation scheme. Thus, the solutions for different parameter values, as well as the approximation error, can be computed from a problem of much lower dimension.
Recent applications of the method include problems from physics and engineering such as fluid dynamics [1,2], the viscous Burgers equation [3], the Navier–Stokes equations [4,5], and linear parabolic problems [6,7,8]. Achieving such a reduced basis approximation with reasonable accuracy and reduced computational effort depends on the solution manifold being of low dimension as well as on the problem’s regularity and affine dependence on the free parameter.
This method has not yet been applied to evolution problems stemming from biological applications, which often contain nonlinear terms and coupling of multiple variables, and which require appropriate stable numerical schemes for time integration. Partial differential equations of reaction-diffusion type are employed, for example, in modelling the immune response to infections [9,10] and tumour growth and onco-immune interactions [11,12].
We study the performance of the reduced-basis method for a problem motivated by a biomedical application (tumour growth under chemotherapy). In particular, we take as an example the parametrised Lotka-Volterra reaction-diffusion model over a bounded convex domain $\Omega \subset \mathbb{R}^2$ with Lipschitz boundary, where the reaction part contains a free parameter. This model describes the evolution of the unknown $u = (u_1, u_2)$, where $u_i(x,y,t) \ge 0$, $i = 1, 2$, represent the densities of two competing populations at $(x,y) \in \Omega$ at time $t \in [0, T_{end}]$. The model reads
$$\partial_t u_i = d_i\,\Delta u_i + f_i(u;\mu), \qquad i = 1, 2, \qquad (1)$$
where the diffusion rate for the i-th population is $d_i > 0$, and the growth rate of the populations is given by a nonlinear vector function f:
$$f(u;\mu) = \big(u_1(a_1 - \mu - u_1 - c_1 u_2),\; u_2(a_2 - u_2 - c_2 u_1)\big)^T. \qquad (2)$$
Here, $a_i, c_i > 0$ denote the carrying capacity and strength of competition, and the free parameter is $\mu \in M \subset \mathbb{R}_+$, with M being a compact set. A growth rate like (2) (without diffusion in space) has been used in [13] to model changes in the tumour composition. In particular, (1) shall model the interactions between populations of chemotherapy-sensitive cancer cells $u_1$ and chemotherapy-resistant cancer cells $u_2$. The death (or loss) of sensitive cells $u_1$ due to chemotherapy with dose $\mu$ is encoded in the term $-\mu u_1$. The system (1) is complemented by initial conditions $u(\cdot,0) \ge 0$ and the Dirichlet boundary condition $u(\cdot,t) = 0$ on $\partial\Omega$.
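For concreteness, the reaction term (2) and its Jacobian (which reappears in the Newton linearisation of Section 3.1) can be evaluated pointwise. The following NumPy sketch is purely illustrative and not part of the implementation used in the paper; the default parameter values are those of Experiment I in Table 1.

```python
import numpy as np

def reaction(u1, u2, mu, a1=1.5, a2=1.0, c1=0.05, c2=0.03):
    """Lotka-Volterra reaction term (2), evaluated componentwise."""
    f1 = u1 * (a1 - mu - u1 - c1 * u2)   # sensitive cells: logistic growth minus chemotherapy loss
    f2 = u2 * (a2 - u2 - c2 * u1)        # resistant cells: logistic growth and competition only
    return np.array([f1, f2])

def reaction_jacobian(u1, u2, mu, a1=1.5, a2=1.0, c1=0.05, c2=0.03):
    """Pointwise 2x2 Jacobian of the reaction term, used later in the Newton iteration."""
    return np.array([[a1 - mu - 2.0 * u1 - c1 * u2, -c1 * u1],
                     [-c2 * u2, a2 - c2 * u1 - 2.0 * u2]])
```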

2. Problem Statement

In this work, we construct a reduced basis approximation of the solutions to (1) based on a Galerkin (finite element) approximation. This reduced basis is low-dimensional compared to the dimension of the finite element approximation space and is used to compute approximate solutions to (1) efficiently for different values of $\mu$. For the construction of the reduced basis, we use two approaches, a proper orthogonal decomposition (POD) and a POD-greedy algorithm based on an a posteriori error estimator for the approximation error, and compare their performance.
We denote by $L^1(\Omega)$, $L^2(\Omega)$, $L^\infty(\Omega)$ the spaces of measurable functions on $\Omega$ that are respectively Lebesgue-integrable, square-integrable or have a bounded essential supremum, and by $H_0^1(\Omega)$ the Sobolev space of functions $w \in L^2(\Omega)$ vanishing on $\partial\Omega$ with $\int_\Omega \nabla w \cdot \nabla w < \infty$. Here, $\nabla w$ denotes the gradient $(\partial_x w, \partial_y w)$. Tensor spaces such as $L^p \times L^p$ inherit the respective norms from $L^p$, etc.
Given an initial value $u(\cdot,0) \in H_0^1 \times H_0^1$, (1) has unique solutions in time and, under certain conditions on the diffusion rates $d_i$ and the parameters (e.g., $c_i < 1$), admits nontrivial, nonnegative solutions that are stationary in time [14]. We assume such parameter values in this work.
A Galerkin finite element method applied to the Laplace operator in (1) makes the resulting semi-discrete system stiff. Since the problem is a pure reaction-diffusion problem, an implicit Euler scheme is a reasonable choice for time integration. The approximation of the time derivative $\partial_t u$ is based on the solutions at consecutive time layers $u^k = u(\cdot, k\tau)$ for an appropriately chosen time step $\tau > 0$ that guarantees stability of the scheme. We consider henceforth the scheme:
$$\frac{1}{\tau}\langle u^k, \phi\rangle_2 + \alpha(u^k, \phi) - \langle f(u^k;\mu), \phi\rangle_2 = \frac{1}{\tau}\langle u^{k-1}, \phi\rangle_2. \qquad (3)$$
In (3), we denote the inner product
$$\langle u, v\rangle_2 \overset{\text{def}}{=} \int_\Omega (u_1 v_1 + u_2 v_2)\,dx, \qquad u, v \in L^2 \times L^2,$$
and the bilinear form
$$\alpha(u,v) \overset{\text{def}}{=} \int_\Omega d_1\,\nabla u_1\cdot\nabla v_1 + d_2\,\nabla u_2\cdot\nabla v_2\,dx, \qquad u, v \in H_0^1 \times H_0^1. \qquad (4)$$
Equation (4) defines the energy norm $\|\cdot\|_\alpha$ via $\|w\|_\alpha^2 \overset{\text{def}}{=} \alpha(w,w)$ for $w \in H_0^1 \times H_0^1$. Friedrichs' inequality [15] implies that the energy norm $\|\cdot\|_\alpha$ and the $H_0^1$-induced norm are equivalent on $H_0^1 \times H_0^1$.
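To make the algebraic structure of (3) explicit, expand $u^k$ in a basis $\{\varphi_m\}$ of a Galerkin approximation space (the finite element space is specified in Section 3), $u^k = \sum_m U_m^k\varphi_m$, and test with $\phi = \varphi_n$. The scheme then amounts to the nonlinear algebraic system (the matrix notation $M_h$, $K_h$, $F_h$ is introduced here only for illustration and is not used elsewhere in the paper)
$$\frac{1}{\tau}M_h\,U^k + K_h\,U^k - F_h(U^k;\mu) = \frac{1}{\tau}M_h\,U^{k-1}, \qquad (M_h)_{nm} = \langle\varphi_m,\varphi_n\rangle_2, \quad (K_h)_{nm} = \alpha(\varphi_m,\varphi_n), \quad (F_h(U;\mu))_n = \big\langle f\big(\textstyle\sum_m U_m\varphi_m;\mu\big),\varphi_n\big\rangle_2,$$
which is solved for $U^k$ at every time layer.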

3. Reduced Basis

Let $\mathcal{T}_h$ be a triangulation of $\Omega$. We use a Galerkin approximation with a finite element space $V_h \overset{\text{def}}{=} W_h \times W_h$, where $W_h \subset H_0^1(\Omega)$ is spanned by finite element functions whose restriction to each element of $\mathcal{T}_h$ is a polynomial of a fixed degree. In other words, the pair $(\mathcal{T}_h, W_h)$ is assumed to satisfy classical assumptions on regularity, affine equivalence and compact support of the finite element functions ([15], p. 132). At this stage, we assume that $\mathcal{T}_h$ and $W_h$ approximate the solution to (3) with sufficient accuracy; further details on $W_h$ shall be provided later.
The space $V_h$ shall be referred to as the truth space or high-fidelity approximation space [16,17]. For a given $\mu \in M$, let the solution snapshot of (3) in the truth space be given by the sequence of functions $U_h(\mu) = \{u_h^k(\mu) \in V_h : k = 0, 1, \ldots, k_{max}\}$ that contains the values of $u_h^k(\mu)$ on the time layers $t = k\tau$, $k = 0, 1, \ldots, k_{max}$, with $k_{max}\tau = T_{end}$.
A reduced basis serving to approximate the solutions of (3) in the truth space for all $\mu \in M$ is given by functions $\{\xi_i\}_{i=1}^N \subset V_h$ defining an orthonormal basis for a subspace of dimension $N \ll \dim V_h$. As long as a reduced basis approximation is possible, the basis elements are constructed so that the approximation error (in some norm to be defined later) between $U_h(\mu)$ and the reduced basis solution $U_{rb}(\mu) \subset V_{rb}^N \overset{\text{def}}{=} \mathrm{span}\{\xi_i\}_{i=1}^N$ stays within a prescribed tolerance $\varepsilon$. Furthermore, the computational cost of the solution $U_{rb}(\mu)$ for every different parameter value $\mu$ shall remain independent of $\dim V_h$.

3.1. Numerical Scheme for Integration in Time

Problem (3) is a nonlinear equation for $u_h^k(\mu)$, which can be solved using Newton's method at each time layer k. Denote the linear functional
$$G(\psi, \phi; u_h^{k-1}(\mu)) \overset{\text{def}}{=} \frac{1}{\tau}\langle\psi,\phi\rangle_2 + \alpha(\psi,\phi) - \langle f(\psi;\mu),\phi\rangle_2 - \frac{1}{\tau}\langle u_h^{k-1}(\mu),\phi\rangle_2, \qquad \psi,\phi \in V_h,$$
and the bilinear form
$$DG(\psi,\phi;\mu,\psi^s) \overset{\text{def}}{=} \frac{1}{\tau}\langle\psi,\phi\rangle_2 + \alpha(\psi,\phi) - \langle J(\psi^s;\mu)\,\psi, \phi\rangle_2, \qquad \psi,\phi \in V_h, \qquad (5)$$
where J is the Jacobian of f at u:
$$J(u;\mu) = \begin{pmatrix} a_1 - \mu - 2u_1 - c_1 u_2 & -c_1 u_1 \\ -c_2 u_2 & a_2 - c_2 u_1 - 2u_2 \end{pmatrix}.$$
Note that $J$ is bounded for $u \in V_h$. We solve for $u_h^k = \psi$ from $G(\psi,\phi;u_h^{k-1}(\mu)) = 0$; $\psi$ can be approximated by successive Newton iterations (Algorithm 1):
Algorithm 1 Newton iteration.
Require: $\psi^0$, $u_h^{k-1}(\mu)$, $\varepsilon_{Newton}$
  while $\|G(\psi^s, \cdot\,; u_h^{k-1}(\mu))\| > \varepsilon_{Newton}$ do
   solve
$$DG(\delta^s, \phi; \mu, \psi^s) = -G(\psi^s, \phi; u_h^{k-1}(\mu)) \qquad (6)$$
   set $\psi^{s+1} = \psi^s + \delta^s$, $s = s+1$
end while
Return: $\psi = \psi^s$
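In terms of the assembled truth mass and stiffness matrices, one time layer of Algorithm 1 is a short loop of linear solves. The sketch below is a minimal NumPy illustration, not the FreeFem++ implementation used later; `M`, `K` and the callables `F`, `JF` (returning the discrete load vector of f and the matrix of its Jacobian contribution) are assumed to be provided by the discretisation.

```python
import numpy as np

def implicit_euler_newton_step(u_prev, M, K, F, JF, tau, mu,
                               eps_newton=1e-6, max_iter=30):
    """Solve one layer of the scheme (3) by the Newton iteration of Algorithm 1."""
    psi = u_prev.copy()                       # initial guess psi^0: previous time layer
    for _ in range(max_iter):
        # discrete residual of (3): G(psi) = (M/tau + K) psi - F(psi; mu) - (1/tau) M u_prev
        G = (M @ psi) / tau + K @ psi - F(psi, mu) - (M @ u_prev) / tau
        if np.linalg.norm(G) < eps_newton:
            break
        DG = M / tau + K - JF(psi, mu)        # discrete counterpart of the linearisation (5)
        psi = psi + np.linalg.solve(DG, -G)   # Newton update, cf. (6)
    return psi
```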
Lemma 1 together with the Lax–Milgram lemma [18] ensures that (6) has a unique solution in $V_h$.
Lemma 1.
For all $\mu \in M$, the form $DG(\cdot,\cdot;\mu,y)$ in (5) is continuous on $V_h$ and coercive on $V_h$ for $\tau < (\max_i a_i)^{-1}$.
Proof. 
We need to estimate the coercivity and continuity factors for $DG(\cdot,\cdot;\mu,y)$ (these depend on $\mu$).
For the continuity factor, we use Hölder's inequality and Poincaré's inequality for $v, w \in V_h$. Below, $C_\Omega$ denotes the constant from Poincaré's inequality [15] for $W_h$:
$$|DG(v,w;\mu,u)| \le |\alpha(v,w)| + \big|\langle(\tfrac{1}{\tau} - J(u;\mu))v, w\rangle_2\big| \le \gamma(u,\mu)\,\|v\|_\alpha\,\|w\|_\alpha$$
with
$$\gamma(u,\mu) = 2 + C_\Omega\,\frac{1}{\tau} + \sup_\Omega\rho(J(u;\mu)),$$
where $\rho(J(u;\mu))$ denotes the spectral radius of $J(u;\mu)$.
For the coercivity, we obtain $DG(v,v;\mu,u) = \alpha(v,v) + \langle(\tfrac{1}{\tau} - J(u;\mu))v, v\rangle_2$ and
$$\begin{aligned}\big\langle(\tfrac{1}{\tau} - J(u;\mu))v, v\big\rangle_2 &= \int_\Omega \begin{pmatrix}(\tfrac{1}{\tau} - a_1 + \mu)\,v_1\\ (\tfrac{1}{\tau} - a_2)\,v_2\end{pmatrix}\cdot\begin{pmatrix}v_1\\ v_2\end{pmatrix} + \begin{pmatrix}(2u_1 + c_1u_2)\,v_1 + c_1u_1\,v_2\\ c_2u_2\,v_1 + (c_2u_1 + 2u_2)\,v_2\end{pmatrix}\cdot\begin{pmatrix}v_1\\ v_2\end{pmatrix}\\ &= \int_\Omega(\tfrac{1}{\tau} - a_1 + \mu)\,v_1^2 + (\tfrac{1}{\tau} - a_2)\,v_2^2 + \int_\Omega(2u_1 + c_1u_2)\,v_1^2 + (c_1u_1 + c_2u_2)\,v_1v_2 + (c_2u_1 + 2u_2)\,v_2^2.\end{aligned}$$
By the assumption $c_1, c_2 < 1$, the last form is positive semidefinite for $u_1, u_2 \ge 0$ because
$$(c_1u_1 + c_2u_2)^2 - 4(2u_1 + c_1u_2)(c_2u_1 + 2u_2) = -2c_1c_2\,u_1u_2 - 16\,u_1u_2 + (c_1^2 - 8c_2)\,u_1^2 + (c_2^2 - 8c_1)\,u_2^2 \le 0.$$
Hence, $\langle(\tfrac{1}{\tau} - J(u;\mu))v, v\rangle_2 \ge \min\{\tfrac{1}{\tau} - a_1 + \mu,\; \tfrac{1}{\tau} - a_2\}\,\|v\|_2^2$, implying that, for the chosen $\tau$, $DG(v,v;\mu,u) \ge \|v\|_\alpha^2$. □

3.2. Offline and Online Phases

The chosen time integration scheme is used to construct the reduced basis elements $\xi_i$ during the offline stage. Snapshots $U_h(\mu)$ for carefully sampled values $\mu \in M$ are used to generate the reduced basis elements via a POD or a POD-greedy algorithm [6,7,16,17]. The training set $\Xi \subset M$ for sampling must be chosen sufficiently rich to provide a good approximation property of the resulting reduced basis. During this stage, all parameter-independent objects (stiffness matrices and load vectors) required for the discrete solution in $V_{rb}^N$ are computed and stored.
Assume that the reduced basis space $V_{rb}^N \subset V_h$ with $N \ll \dim V_h$ has already been found. Let $u_{rb}^k(\mu)$ be the approximation in $V_{rb}^N$ to the solution $u_h^k(\mu)$ in the truth space at time layer $t = k\tau$ for a given $\mu$. During the online stage, the computation of $U_{rb}(\mu)$ for a new parameter value $\mu$ shall involve the assembly of stiffness matrices and load vectors for the low-dimensional problem from those obtained during the offline phase.
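The POD compression used in the offline stage can be realised, for example, through a thin singular value decomposition of the snapshot matrix. The sketch below uses the Euclidean inner product for simplicity (an assumption made only for this illustration; in practice the compression is performed with respect to a problem-adapted inner product):

```python
import numpy as np

def pod(snapshots, n_modes):
    """Compress snapshot column vectors into n_modes orthonormal POD modes.

    snapshots: array of shape (n_dof, n_snapshots), columns are truth solutions u_h^k(mu).
    Returns the leading left singular vectors (the POD modes) and the singular values.
    """
    left, sing_vals, _ = np.linalg.svd(snapshots, full_matrices=False)
    return left[:, :n_modes], sing_vals[:n_modes]
```

The decay of the returned singular values indicates how many modes are needed to reach a prescribed compression accuracy.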

3.3. Solving the Problem in the Reduced Basis

The expansion coefficients $u_k^\mu = \{u_{k,i}^\mu\}_{i=1}^N$ in the reduced basis will be computed by solving a problem in N dimensions. Let
$$u_{rb}^k(\mu) = \sum_{i=1}^N u_{k,i}^\mu\,\xi_i. \qquad (7)$$
Using scheme (3), we solve for the reduced basis solution $u_{rb}^k(\mu)$. We let $\phi = \xi_j$, $1 \le j \le N$, in (6) and obtain a linear system for $u_k^\mu$. For the Newton iteration step, we set
$$x^s = \sum_{i=1}^N x_i^s\,\xi_i, \qquad \delta = \sum_{i=1}^N \delta_i\,\xi_i,$$
which transforms (6) into an N-dimensional system. Then, we follow an analogous Algorithm 2:
Algorithm 2 Newton iteration in the reduced basis.
Require: $x^0$, $u_{k-1}^\mu$, $\varepsilon_{Newton}$
  while $\big\|G\big(\sum_{i=1}^N x_i^s\xi_i, \xi_j, \mu; u_{k-1,i}^\mu\big)\big\|_2 > \varepsilon_{Newton}$ do
  solve
$$DG(\xi_i, \xi_j, \mu; x^s)\,\delta = -G\Big(\sum_{i=1}^N x_i^s\xi_i, \xi_j, \mu; u_{k-1,i}^\mu\Big) \qquad (8)$$
  set $x^{s+1} = x^s + \delta$, $s = s+1$
  end while
  Return: $u_k^\mu = x^s$
Note that (8) admits a unique solution as a consequence of the well-posedness of the truth scheme (6) and the inclusion $V_{rb}^N \subset V_h$ ([17], Lemma 3.1).
We now present the matrix formulation for the offline stage of (8) and rewrite the matrix $DG$ and the functional $G$ in terms of the reduced basis elements $\xi_i$. Denote the matrices
$$\mathbb{A}_N = \begin{pmatrix} a_1 I_N & 0 \\ 0 & a_2 I_N\end{pmatrix}, \qquad \mathbb{C}_N = \begin{pmatrix} 0 & c_1 I_N \\ c_2 I_N & 0\end{pmatrix}, \qquad \mathbb{I}_N = \begin{pmatrix} I_N & 0 \\ 0 & 0\end{pmatrix},$$
where $I_m$, $m \in \mathbb{N}$, is the $m \times m$ identity matrix. In fact, we have (in matrix–vector notation, with products of vector-valued quantities understood componentwise): $f(w;\mu) = (\mathbb{A}_N - \mu\,\mathbb{I}_N)w - w^2 - w^T\cdot\mathbb{C}_N w$.
Define the matrices $M$, $A$, $B_1$, $B_2$ via
$$(M)_{ij} \overset{\text{def}}{=} \langle\xi_i,\xi_j\rangle_2, \quad (A)_{ij} \overset{\text{def}}{=} \alpha(\xi_i,\xi_j), \quad (B_1)_{ij} \overset{\text{def}}{=} \langle\mathbb{A}_N\xi_i,\xi_j\rangle_2, \quad (B_2)_{ij} \overset{\text{def}}{=} \langle\mathbb{I}_N\xi_i,\xi_j\rangle_2, \qquad i,j = 1,\ldots,N.$$
Denote the following trilinear forms:
$$\beta_0(\xi_i,\xi_j,\xi_l) \overset{\text{def}}{=} \int_\Omega \xi_i\,\xi_j\,\xi_l, \qquad \beta_1(\xi_i,\xi_j,\xi_l) \overset{\text{def}}{=} \int_\Omega (\mathbb{C}_N\xi_i)\,\xi_j\,\xi_l, \qquad \beta_2(\xi_i,\xi_j,\xi_l) \overset{\text{def}}{=} \int_\Omega \xi_i\,(\mathbb{C}_N\xi_j)\,\xi_l,$$
which are used to define the matrix
$$L(y): \quad (L)_{ij}(y) \overset{\text{def}}{=} \sum_{l=1}^N y_l \sum_{m=0}^{2} \beta_m(\xi_l,\xi_i,\xi_j), \qquad y \in \mathbb{R}^N,$$
and the arrays of matrices $P_j$, $Q_j$, $j = 1,\ldots,N$, defined by
$$(P_j)_{i_1 i_2} \overset{\text{def}}{=} \beta_0(\xi_{i_1},\xi_{i_2},\xi_j), \qquad (Q_j)_{i_1 i_2} \overset{\text{def}}{=} \beta_2(\xi_{i_1},\xi_{i_2},\xi_j)$$
with $i_1, i_2 = 1,\ldots,N$.
To evaluate the nonlinear terms inside (8) in the reduced basis setting, we have to compute the following vectors in $\mathbb{R}^N$: $P(y) \overset{\text{def}}{=} \{y^T P_j y\}_{j=1}^N$, $Q(y) \overset{\text{def}}{=} \{y^T Q_j y\}_{j=1}^N$ for appropriate $y \in \mathbb{R}^N$.
In this matrix notation, we rewrite the linear problem (8) as:
$$DG(\mu, x^s)\,\delta = -G(\mu, x^s, u_{k-1}^\mu), \quad \text{where}$$
$$DG(\mu, x^s) = \frac{1}{\tau}M + A - B_1 + \mu\,B_2 + L(x^s),$$
$$G(\mu, x^s, u_{k-1}^\mu) = \frac{1}{\tau}M(x^s - u_{k-1}^\mu) + A\,x^s - B_1 x^s + \mu\,B_2 x^s + P(x^s) + Q(x^s),$$
where the objects $M$, $A$, $B_1$, $B_2$, $P_i$, $Q_i$, $i = 1,\ldots,N$, are matrices or arrays of matrices that are independent of $\mu$. Finally, the solution resulting from the reduced basis approximation is recovered from $u_k^\mu$ via (7).
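During the online stage, all objects entering (8) have size N or N×N (or N×N×N for the quadratic terms), so a Newton sweep costs on the order of $N^3$ operations per iteration, independently of $\dim V_h$. A minimal sketch of this reduced Newton solve, with the precomputed objects stored as NumPy arrays (`P`, `Q` and `L_tensor` of shape (N, N, N); the names are assumptions made for this illustration):

```python
import numpy as np

def reduced_newton_step(u_prev, M, A, B1, B2, L_tensor, P, Q, tau, mu,
                        eps_newton=1e-6, max_iter=30):
    """One implicit Euler layer in the reduced basis: matrix form of (8)."""
    x = u_prev.copy()                                  # initial guess: previous reduced coefficients
    for _ in range(max_iter):
        Px = np.einsum('jab,a,b->j', P, x, x)          # P(x) = {x^T P_j x}
        Qx = np.einsum('jab,a,b->j', Q, x, x)          # Q(x) = {x^T Q_j x}
        G = (M @ (x - u_prev)) / tau + A @ x - B1 @ x + mu * (B2 @ x) + Px + Qx
        if np.linalg.norm(G) < eps_newton:
            break
        Lx = np.einsum('lij,l->ij', L_tensor, x)       # L(x)_ij = sum_l x_l sum_m beta_m(xi_l, xi_i, xi_j)
        DG = M / tau + A - B1 + mu * B2 + Lx
        x = x + np.linalg.solve(DG, -G)                # Newton update of the reduced coefficients
    return x
```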

3.4. Reduced Basis Construction via the POD-Greedy Algorithm

The greedy algorithm is used for the construction of reduced basis approximations in the context of elliptic problems (see [16,17] for references). At every iteration, it enriches the reduced basis by adding an additional snapshot associated with the parameter value $\mu^*$ that maximises the approximation error (in some norm) between the snapshot $U_h(\mu)$ and its approximation $U_{rb}(\mu)$ in the space spanned by the reduced basis constructed thus far, in other words $\mu^* = \arg\max_{\mu\in M}\|U_h(\mu) - U_{rb}(\mu)\|$. An estimate of this error should be easily computable for various values of $\mu$. The performance of the greedy algorithm relies upon a sufficiently dense training subset $\Xi = \{\mu_j,\; j = 1,\ldots,m\} \subset M$, which is used to find a good value for $\mu^*$.
In our context, the greedy algorithm is complemented by a proper orthogonal decomposition step (Algorithm 3). The motivation for this lies in (3), whose solutions converge to a solution that is stationary in time. The POD step minimises redundant storage of basis elements and prevents possible stalling of the algorithm.
Furthermore, the approximation error $\|U_h(\mu) - U_{rb}(\mu)\|$ should be estimated by an a posteriori error estimator $\Delta_{rb}(\mu)$, $\mu \in M$, which is computationally inexpensive, so that it can serve as a stopping criterion in the POD-greedy algorithm. When the prescribed accuracy $\varepsilon_{tol}$ is reached, the algorithm returns the N-dimensional reduced basis $\{\xi_i\}_{i=1}^N$.
Algorithm 3 POD-greedy algorithm.
Require: $\varepsilon_{tol}$, $\Xi$, $n_1, n_2 \in \mathbb{N}$, $n_2 < n_1$, $\mu^1$
Ensure: $N = 0$, $\ell = 1$, $\Delta_{rb}(\mu^1) = 2\varepsilon_{tol}$, $Z = \emptyset$
  while $\Delta_{rb}(\mu^\ell) > \varepsilon_{tol}$ do
  compute snapshot $U_h(\mu^\ell)$
  compress $U_h(\mu^\ell)$ using POD, retain $n_1$ principal modes $\{\zeta_j\}_{j=1}^{n_1}$
  set $Z = Z \cup \{\zeta_j\}_{j=1}^{n_1}$
if $\ell = 1$ then
     $N = n_1$
else
     $N = N + n_2$
    compress $Z$ using POD, retain $N$ principal modes $\{\xi_j\}_{j=1}^N$
     $Z = \{\xi_j\}_{j=1}^N$
end if
  compute the error estimator $\Delta_{rb}(\mu)$, $\mu \in \Xi$, using $Z$ as basis
  set $\mu^{\ell+1} = \arg\max_{\mu\in\Xi}\Delta_{rb}(\mu)$, $\Xi = \Xi \setminus \{\mu^{\ell+1}\}$, $\ell = \ell + 1$
end while
Return: $Z$, $N$
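The control flow of Algorithm 3 can be summarised in a few lines; the sketch below is only a schematic rendering, and `compute_snapshot`, `pod`, and `error_estimator` stand for the building blocks discussed in this section (truth solver, POD compression, and the estimator of Sections 3.5 and 3.6), not for concrete routines of the paper's code.

```python
import numpy as np

def pod_greedy(train_set, mu_start, eps_tol, n1, n2,
               compute_snapshot, pod, error_estimator):
    """Schematic POD-greedy construction of the reduced basis (Algorithm 3)."""
    Xi = [m for m in train_set]
    mu, Z, N, first = mu_start, None, 0, True
    while True:
        snaps = compute_snapshot(mu)                    # truth trajectory U_h(mu)
        zeta, _ = pod(snaps, n1)                        # keep n1 principal modes of the snapshot
        Z = zeta if first else np.hstack([Z, zeta])
        N = n1 if first else N + n2
        if not first:
            Z, _ = pod(Z, N)                            # re-compress the accumulated basis to N modes
        first = False
        deltas = {m: error_estimator(Z, m) for m in Xi} # a posteriori estimator over the training set
        mu = max(deltas, key=deltas.get)                # greedy choice of the next parameter value
        if deltas[mu] <= eps_tol:                       # stopping criterion of the while loop
            return Z, N
        Xi.remove(mu)                                   # do not revisit the selected parameter
```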

3.5. A Posteriori Error Analysis

In line with the time integration scheme, we derive an a posteriori estimate for the approximation error between the solution in the truth space $u_h^k$ and in the reduced basis $u_{rb}^k$ at the k-th time layer, $e^k(\mu) \overset{\text{def}}{=} u_h^k(\mu) - u_{rb}^k(\mu) \in V_h$. Note that $f(\cdot;\mu)$ is Lipschitz continuous on $[0,a_1]\times[0,a_2]$, so
$$|f(z;\mu) - f(z';\mu)| \le f_{\sup}\,|z - z'|. \qquad (10)$$
Let the residual associated with $u_{rb}^k$ be
$$r^k(\phi;\mu) = \langle f(u_{rb}^k;\mu), \phi\rangle_2 - \frac{1}{\tau}\langle u_{rb}^k - u_{rb}^{k-1}, \phi\rangle_2 - \alpha(u_{rb}^k,\phi), \qquad \phi \in V_h. \qquad (11)$$
$r^k$ is a linear functional on $V_h$, whose norm in the dual space $V_h'$ is
$$\|r^k(\cdot;\mu)\|_\alpha \overset{\text{def}}{=} \sup_{\phi\in V_h}\frac{|r^k(\phi;\mu)|}{\|\phi\|_\alpha}. \qquad (12)$$
The Riesz representation theorem implies that there exists a unique $\tilde r^k \in V_h$ such that
$$\alpha(\tilde r^k(\mu), \phi) = r^k(\phi;\mu), \qquad \phi \in V_h,$$
and $\|r^k(\cdot;\mu)\|_\alpha = \|\tilde r^k(\mu)\|_\alpha$.
Because $r^k$ has an affine dependence on $\mu$, its norm can be efficiently computed. We have the following evolution equation for the error $e^k$ based on (3):
$$\frac{1}{\tau}\langle e^k - e^{k-1}, \phi\rangle_2 + \alpha(e^k,\phi) = \langle f(u_h^k;\mu) - f(u_{rb}^k;\mu), \phi\rangle_2 + r^k(\phi;\mu), \qquad k \ge 1,\; \phi \in V_h. \qquad (13)$$
The following result gives an estimate for the approximation error $e^k$.
Lemma 2.
Let f be a Lipschitz-continuous function with Lipschitz constant $f_{\sup}$ as in (10). Consider the implicit Euler scheme (3). Let $r^k(\cdot;\mu)$ be the residual from (11), with norm $\|r^k(\mu)\|_\alpha$ defined in (12). Then, the following estimate for the approximation error $e^k(\mu)$ between the solutions in the truth and the reduced basis space holds for $\tau < \frac{1}{2 f_{\sup}}$:
$$\|e^k\|_2^2 \le \frac{\|e^0\|_2^2}{(1 - 2\tau f_{\sup})^k} + \frac{\tau}{\alpha_{\min}}\sum_{j=1}^k \frac{\|r^j(\mu)\|_\alpha^2}{(1 - 2\tau f_{\sup})^{k+1-j}}. \qquad (14)$$
Proof. 
Letting $\phi = e^k$ in (13), using the coercivity of the bilinear form $\alpha$, and performing algebraic transformations, we arrive at
$$\frac{1}{\tau}\langle e^k - e^{k-1}, e^k\rangle_2 + \alpha(e^k,e^k) \ge \frac{1}{\tau}\|e^k\|_2^2 - \frac{1}{\tau}\langle e^k, e^{k-1}\rangle_2 + \alpha_{\min}\|e^k\|_\alpha^2.$$
The local Lipschitz continuity of f (10), Hölder's inequality, and the embedding $H_0^1(\Omega)\hookrightarrow L^2(\Omega)$ give
$$\langle f(u_h^k;\mu) - f(u_{rb}^k;\mu), e^k\rangle_2 \le \int_\Omega |f(u_h^k;\mu) - f(u_{rb}^k;\mu)|\,|e^k|\,dx \le f_{\sup}\,\|e^k\|_2^2.$$
Hence, the right-hand side of (13) with $\phi = e^k$ can be bounded by
$$\langle f(u_h^k;\mu) - f(u_{rb}^k;\mu), e^k\rangle_2 + r^k(e^k;\mu) \le f_{\sup}\|e^k\|_2^2 + \|r^k(\mu)\|_\alpha\cdot\|e^k\|_\alpha.$$
Next, using Young's inequality, we have
$$\langle e^k, e^{k-1}\rangle_2 \le \frac{1}{2}\big(\|e^k\|_2^2 + \|e^{k-1}\|_2^2\big), \qquad \|r^k(\mu)\|_\alpha\cdot\|e^k\|_\alpha \le \frac{\|r^k(\mu)\|_\alpha^2}{2\alpha_{\min}} + \frac{\alpha_{\min}}{2}\|e^k\|_\alpha^2.$$
Combining these inequalities gives
$$\frac{\|e^k\|_2^2}{\tau} + \alpha_{\min}\|e^k\|_\alpha^2 \le \frac{\|e^k\|_2^2 + \|e^{k-1}\|_2^2}{2\tau} + f_{\sup}\|e^k\|_2^2 + \frac{\|r^k(\mu)\|_\alpha^2}{2\alpha_{\min}} + \frac{\alpha_{\min}}{2}\|e^k\|_\alpha^2,$$
and, if we choose $\tau$ such that $\frac{1}{2\tau} > f_{\sup}$, this implies
$$\Big(\frac{1}{2\tau} - f_{\sup}\Big)\|e^k\|_2^2 \le \frac{\|e^{k-1}\|_2^2}{2\tau} + \frac{\|r^k(\mu)\|_\alpha^2}{2\alpha_{\min}},$$
giving
$$\|e^k\|_2^2 \le \frac{\|e^{k-1}\|_2^2}{1 - 2\tau f_{\sup}} + \frac{\tau\,\|r^k(\mu)\|_\alpha^2}{\alpha_{\min}\,(1 - 2\tau f_{\sup})}.$$
Applying a telescopic sum and induction in k, we arrive at (14).  □
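For instance, applying the last one-step bound twice already displays the pattern behind (14):
$$\|e^k\|_2^2 \le \frac{\|e^{k-2}\|_2^2}{(1 - 2\tau f_{\sup})^2} + \frac{\tau}{\alpha_{\min}}\left(\frac{\|r^{k-1}(\mu)\|_\alpha^2}{(1 - 2\tau f_{\sup})^2} + \frac{\|r^{k}(\mu)\|_\alpha^2}{1 - 2\tau f_{\sup}}\right),$$
so that after k steps the initial error is divided by $(1 - 2\tau f_{\sup})^k$, while the residual at layer j appears with the factor $(1 - 2\tau f_{\sup})^{-(k+1-j)}$.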

3.6. Computing the a Posteriori Error Estimator

Lemma 2 provides a bound for the approximation error $e^k(\mu)$ between the solutions at the k-th time layer in the truth space and the reduced basis space. The expression on the right-hand side of (14), which we denote by $\Delta^k(\mu)$, can be used as an a posteriori error estimator in the POD-greedy algorithm, because for a given solution trajectory $U_h(\mu)$ the quantity $\Delta^k(\mu)$ attains its maximum at $k = k_{max}$ [7]. Setting $\Delta_{rb}^N(\mu) \overset{\text{def}}{=} \Delta^{k_{max}}(\mu)$ as the a posteriori error estimator for a reduced basis with N elements, we need an efficient way to compute $\Delta_{rb}^N(\mu)$. Recall from (12) that $\|r^k(\mu)\|_\alpha = \|\tilde r^k(\mu)\|_\alpha$. The affine dependence in (3) allows us to decompose the norm of $r^k$ into summands that are computed efficiently during the online stage. Using the reduced basis expansion (7), we rewrite the residual as
$$\begin{aligned} r^k(\phi;\mu) &= \langle f(u_{rb}^k;\mu), \phi\rangle_2 - \frac{1}{\tau}\langle u_{rb}^k - u_{rb}^{k-1}, \phi\rangle_2 - \alpha(u_{rb}^k,\phi) \\ &= \sum_{i=1}^N u_{k,i}^\mu\,\langle\mathbb{A}_N\xi_i, \phi\rangle_2 - \mu\sum_{i=1}^N u_{k,i}^\mu\,\langle\mathbb{I}_N\xi_i,\phi\rangle_2 - \sum_{i_1,i_2=1}^N u_{k,i_1}^\mu u_{k,i_2}^\mu\,\langle\xi_{i_1}(I_{2N} + \mathbb{C}_N)\xi_{i_2}, \phi\rangle_2 \\ &\quad - \frac{1}{\tau}\sum_{i=1}^N (u_{k,i}^\mu - u_{k-1,i}^\mu)\,\langle\xi_i,\phi\rangle_2 - \sum_{i=1}^N u_{k,i}^\mu\,\alpha(\xi_i,\phi), \qquad \phi\in V_h.\end{aligned}$$
Following ([16], Chapter 4.2), we introduce the coefficient vector $\mathbf{r}^k(\mu) \in \mathbb{R}^{N^2+4N}$,
$$\mathbf{r}^k(\mu) \overset{\text{def}}{=} \Big(u_k^\mu,\; -\mu\,u_k^\mu,\; -(u_k^\mu)^T\otimes u_k^\mu,\; -\frac{1}{\tau}(u_k^\mu - u_{k-1}^\mu),\; -u_k^\mu\Big)^T,$$
where $\big((u_k^\mu)^T\otimes u_k^\mu\big)_{N(i_1-1)+i_2} \overset{\text{def}}{=} u_{k,i_1}^\mu u_{k,i_2}^\mu$. Consider the vector of $N^2+4N$ linear functionals
$$\mathbf{R} \overset{\text{def}}{=} \Big(\{\langle\mathbb{A}_N\xi_i,\cdot\rangle_2\}_{i=1}^N,\; \{\langle\mathbb{I}_N\xi_i,\cdot\rangle_2\}_{i=1}^N,\; \{\langle\xi_{i_1}(I_{2N}+\mathbb{C}_N)\xi_{i_2},\cdot\rangle_2\}_{i_1,i_2=1}^N,\; \{\langle\xi_i,\cdot\rangle_2\}_{i=1}^N,\; \{\alpha(\xi_i,\cdot)\}_{i=1}^N\Big),$$
such that $R_j : V_{rb}^N \to \mathbb{R}$, i.e., each element $R_j \in (V_{rb}^N)'$, the dual space of $V_{rb}^N$. Using $\mathbf{r}^k(\mu)$ and $\mathbf{R}$, we obtain the following representation of the residual $r^k(\phi;\mu)$:
$$r^k(\phi;\mu) = \sum_{j=1}^{N^2+4N} \mathbf{r}^k_j(\mu)\,R_j(\phi), \qquad \phi \in V_{rb}^N,$$
which can be computed efficiently. Let $\hat r_j$ denote the Riesz representation of the j-th element of $\mathbf{R}$, so that $\alpha(\hat r_j, \phi) = R_j(\phi)$ for every j (note that $\hat r_j$ is independent of the time layer k). To find $\hat r_j$, we solve the linear elliptic problem $\alpha(\hat r_j,\phi) = R_j(\phi)$ during the offline stage. The Riesz representation of $r^k$ and its norm can be expressed as
$$\tilde r^k(\mu) = \sum_{j=1}^{N^2+4N}\mathbf{r}^k_j(\mu)\,\hat r_j, \qquad \|\tilde r^k(\mu)\|_\alpha^2 = \sum_{j=1}^{N^2+4N}\sum_{j'=1}^{N^2+4N}\mathbf{r}^k_j(\mu)\,\mathbf{r}^k_{j'}(\mu)\,\alpha(\hat r_j, \hat r_{j'}).$$
The inner products $\alpha(\hat r_j, \hat r_{j'})$ can be computed during the offline stage because they are independent of $\mu$. In this manner, the a posteriori error estimator for every new parameter value $\mu \in M$ can be computed efficiently during the online stage.
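Once the Gram matrix of the Riesz representatives, $\alpha(\hat r_j, \hat r_{j'})$, has been stored offline, evaluating $\|\tilde r^k(\mu)\|_\alpha^2$ and the bound (14) reduces to a few small dense operations per time layer. A schematic NumPy illustration (the function and variable names are ours, chosen for this sketch only):

```python
import numpy as np

def residual_norm_sq(r_coeff, riesz_gram):
    """||r~^k(mu)||_alpha^2 from the coefficient vector r^k(mu) and the offline Gram matrix."""
    # riesz_gram[j, jp] = alpha(r_hat_j, r_hat_jp), shape (N**2 + 4*N, N**2 + 4*N)
    return float(r_coeff @ riesz_gram @ r_coeff)

def a_posteriori_estimator(residual_norms_sq, e0_norm_sq, tau, f_sup, alpha_min):
    """Evaluate the right-hand side of (14) at the final time layer k = k_max."""
    q = 1.0 - 2.0 * tau * f_sup                              # requires tau < 1 / (2 f_sup)
    k = len(residual_norms_sq)
    bound = e0_norm_sq / q ** k
    for j, r_sq in enumerate(residual_norms_sq, start=1):    # j = 1, ..., k
        bound += tau / alpha_min * r_sq / q ** (k + 1 - j)
    return bound
```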

4. Numerical Experiment

The offline stage in the reduced basis construction for the scheme (3) is implemented in the finite element library FreeFem++ [19]. To test the performance of the constructed basis, several computations for various values of the free parameter are run during the online stage. The error between the reduced basis approximation and the truth solutions is calculated, as well as the computational effort.
The domain of definition is $\Omega = [0,10]^2$. The finite element approximation space $W_h$ consists of Lagrange finite elements of degree 2 on $\Omega$ with 6561 degrees of freedom. The time range is $T_{end} = 3.99$, and the tolerance for the Newton iteration is $\varepsilon_{Newton} = 10^{-6}$. The diffusion parameters are $d_1 = d_2 = 1$, and the range of the parameter $\mu$ is $M = [0, 0.16]$. The initial condition for the Lotka-Volterra model is $u_1(0,x,y) = u_2(0,x,y) = \sin\pi x\,\sin\pi y$. Two experiments with different parameter values in (3) and time step $\tau$ for the integration are presented (values in Table 1).
In Figure 1 and Figure 2, we plot the evolution of the a posteriori error estimator during the POD-greedy algorithm in the offline stage. The dimension of the reduced basis N increases until the error estimator $\Delta_{rb}^N$ reaches the desired level of accuracy $\varepsilon_{tol}$. The training set for the algorithm is $\Xi = \{0, 0.02, 0.04, 0.16\}$.
In the experiment, we use an algorithm that intertwines a POD step with a greedy step [17]. As an alternative offline stage, we also use a simple POD-based construction of the reduced basis via computation and compression of $U_h(\mu)$ by sequential sampling of $\mu \in \Xi$. In this method, we skip the while condition and simply loop over $\ell$ in Algorithm 3.
We compare the performance of the POD-greedy algorithm and the sequential sampling POD during the online stage. The results of the two approaches for the offline stage are shown in Table 2 and Table 3 for Experiments I and II. We define the CPU time gain factor as the ratio between the CPU time for the truth solution and the CPU time for its reduced basis approximation, averaged over 10 runs. Increasing the dimension of the reduced basis N decreases the CPU time gain factor.
The $L^2$ error between the truth solution and its reduced basis approximation at $T_{end}$ is below the a posteriori error estimator. However, in Experiment I, the estimator exceeds the actual error by a factor of the order of $10^2$ to $10^3$, meaning that the error estimator derived in Lemma 2 is not very sharp. The sharpness of the a posteriori estimator $\Delta_{rb}^N$ could be improved by reducing the time step $\tau$, but that would offset the benefits of the implicit integration scheme, requiring more time layers at which the Newton iteration must be performed.

5. Conclusions

A reduced basis for approximating the solutions to a spatial Lotka-Volterra model (1), based on the implicit Euler scheme for a Galerkin finite element approximation (3), is constructed by two approaches, a POD-greedy algorithm and a POD with sequential sampling, and their performance is compared. Due to the offline/online decomposition and the low dimension of the constructed reduced basis space, the method leads to significant computational savings.
An a posteriori estimator for the approximation error used for the POD-greedy algorithm is derived based on the chosen scheme for the time integration. The development of a posteriori error estimators for the reduced basis approximation is closely linked to the problem at hand, as has been noted elsewhere in the literature [16,17]. However, its performance in the case of this system of nonlinear reaction-diffusion equations reveals a challenge in terms of the trade-off of sharpness of the estimate and the time integration step.
The presented work attempts to bring forward the question of the generalisability and performance of the reduced basis method for nonlinear evolution models motivated by biological or biomedical applications. The development of appropriate methods that produce near-real-time approximations of the solutions to such models at low computational effort could serve contexts such as cancer therapy and increase the applicability of mathematical models in emerging fields such as personalised treatment.

Funding

This research was funded by the Bulgarian National Science Fund within the National Scientific Program Petar Beron i NIE of the Bulgarian Ministry of Education (contract number KP-06-DB-5).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
dim    dimension of vector space
POD    proper orthogonal decomposition

References

1. Pascarella, G.; Kokkinakis, I.; Fossati, M. Analysis of transition for a flow in a channel via reduced basis methods. Fluids 2019, 4, 202.
2. Nonino, M.; Ballarin, F.; Rozza, G. A monolithic and a partitioned, reduced basis method for fluid-structure interaction problems. Fluids 2021, 6, 229.
3. Veroy, K.; Prud’homme, C.; Patera, A.T. Reduced-basis approximation of the viscous Burgers equation: Rigorous a posteriori error bounds. C. R. Acad. Sci. Paris Ser. I 2003, 337, 619–624.
4. Veroy, K.; Patera, A.T. Certified real-time solution of the parametrized steady incompressible Navier–Stokes equations: Rigorous reduced-basis a posteriori error bounds. Int. J. Numer. Methods Fluids 2005, 47, 773–788.
5. Akkari, N.; Casenave, F.; Moureau, V. Time stable reduced order modeling by an enhanced reduced order basis of the turbulent and incompressible 3D Navier-Stokes equations. Math. Comput. Appl. 2019, 24, 45.
6. Grepl, M.A.; Patera, A.T. A posteriori error bounds for reduced-basis approximations of parametrized parabolic partial differential equations. Math. Model. Numer. Anal. 2005, 39, 157–181.
7. Haasdonk, B.; Ohlberger, M. Reduced basis method for finite volume approximations of parametrized linear evolution equations. Math. Model. Numer. Anal. 2008, 42, 277–302.
8. Haasdonk, B.; Ohlberger, M.; Rozza, G. A reduced basis method for evolution schemes with parameter-dependent explicit operators. Electron. Trans. Numer. Anal. 2008, 32, 145–168.
9. Andrew, S.M.; Baker, C.T.; Bocharov, G.A. Rival approaches to mathematical modelling in immunology. J. Comput. Appl. Math. 2007, 205, 669–686.
10. Bocharov, G.; Volpert, V.; Tasevich, A. Reaction-diffusion equations in immunology. Comput. Math. Math. Phys. 2018, 58, 1967–1976.
11. Eladdadi, A.; Kim, P.; Mallet, D. (Eds.) Mathematical Models of Tumor-Immune System Dynamics; Springer: New York, NY, USA, 2014.
12. Enderling, H.; Chaplain, M.A. Mathematical modeling of tumor growth and treatment. Curr. Pharm. Des. 2014, 20, 4934–4940.
13. Carrère, C. Optimization of an in vitro chemotherapy to avoid resistant tumours. J. Theor. Biol. 2017, 413, 24–33.
14. Cantrell, R.; Cosner, C. On the uniqueness and stability of positive solutions in the Lotka-Volterra competition model with diffusion. Houst. J. Math. 1989, 15, 341–361.
15. Ciarlet, P.G. The Finite Element Method for Elliptic Problems. In Classics in Applied Mathematics; SIAM: Philadelphia, PA, USA, 2002.
16. Hesthaven, J.S.; Rozza, G.; Stamm, B. Certified Reduced Basis Methods for Parametrized Partial Differential Equations; Springer International Publishing: Cham, Switzerland, 2016.
17. Quarteroni, A.; Manzoni, A.; Negri, F. Reduced Basis Methods for Partial Differential Equations: An Introduction; Springer International Publishing: Cham, Switzerland, 2016.
18. Quarteroni, A.; Valli, A. Numerical Approximation of Partial Differential Equations; Springer: Berlin/Heidelberg, Germany, 2008.
19. Hecht, F. New development in FreeFem++. J. Numer. Math. 2012, 20, 251–265.
Figure 1. Plot of the value of the a posteriori error estimator $\max\Delta_{rb}^N = \max_{\mu\in\Xi}\Delta_{rb}^N(\mu)$ during the offline stage computed by the POD-greedy algorithm. The dimension of the resulting reduced basis is $N = 19$ for $\varepsilon_{tol} = 1$ (Experiment I).
Figure 2. Plot of the value of the a posteriori error estimator $\max\Delta_{rb}^N = \max_{\mu\in\Xi}\Delta_{rb}^N(\mu)$ during the offline stage computed by the POD-greedy algorithm. The dimension of the resulting reduced basis is $N = 20$ for $\varepsilon_{tol} = 1$ (Experiment II).
Table 1. Parameter values for the numerical experiment.

Experiment    $a_1$    $a_2$    $c_1$    $c_2$    $\tau$
I             1.5      1.0      0.05     0.03     0.03
II            1.5      1.0      0.07     0.15     0.03
Table 2. CPU time gain factor and approximation error between the truth and the reduced basis solution at $t = T_{end}$ with a reduced basis of dimension N. The reduced basis is computed with a POD-greedy algorithm (Experiment I).

$\mu$    CPU Time Gain Factor    $L^2$-Error              A Posteriori Error
POD-greedy algorithm ($N = 19$)
0.04     15.99                   $4.02 \times 10^{-4}$    $5.45 \times 10^{-1}$
0.07     17.13                   $3.91 \times 10^{-4}$    $3.03 \times 10^{-1}$
0.11     16.96                   $3.78 \times 10^{-4}$    $1.83 \times 10^{-1}$
POD with sequential sampling ($N = 24$)
0.04     8.80                    $8.84 \times 10^{-5}$    $1.92 \times 10^{-1}$
0.07     8.87                    $8.35 \times 10^{-5}$    $3.81 \times 10^{-1}$
0.11     8.73                    $7.73 \times 10^{-5}$    $5.99 \times 10^{-1}$
Table 3. CPU time gain factor and approximation error between the truth and the reduced basis solution at $t = T_{end}$ for a reduced basis of dimension N (Experiment II).

$\mu$    CPU Time Gain Factor    $L^2$-Error              A Posteriori Error
POD-greedy algorithm ($N = 20$)
0.04     12.3880                 $3.03 \times 10^{-4}$    $5.81 \times 10^{-2}$
0.07     12.3402                 $2.69 \times 10^{-4}$    $4.81 \times 10^{-2}$
0.11     13.5057                 $2.47 \times 10^{-4}$    $3.76 \times 10^{-2}$
POD with sequential sampling ($N = 24$)
0.04     7.6595                  $2.25 \times 10^{-4}$    $3.13 \times 10^{-2}$
0.07     7.0407                  $2.18 \times 10^{-4}$    $2.89 \times 10^{-2}$
0.11     7.6818                  $2.09 \times 10^{-4}$    $2.99 \times 10^{-2}$