
Stability of Traveling Fronts in a Neural Field Model

1 Department of Neurosurgery, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA 19104, USA
2 Department of Mathematics, College of Arts and Sciences, Drexel University, Philadelphia, PA 19104, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2023, 11(9), 2202; https://doi.org/10.3390/math11092202
Submission received: 29 March 2023 / Revised: 26 April 2023 / Accepted: 4 May 2023 / Published: 7 May 2023

Abstract:
We investigate the stability of traveling front solutions in the neural field model. This model has been studied intensively with regard to propagating patterns, using a saturating Heaviside gain for the neuron firing activity. Previous work has shown the existence of traveling fronts in the neural field model in a more complex setting, using a nonsaturating piecewise linear gain. We aimed to study the stability of these traveling fronts utilizing the Evans function. We obtained the Evans function of the traveling fronts by combining analytical derivations with a computational approach, for the neural field model with a previously uninvestigated piecewise linear gain. Using this approach, we are able to identify both stable and unstable traveling fronts in the neural field model.

1. Introduction

There is extensive biological evidence supporting the existence of traveling front phenomena in the brain [1,2,3,4,5,6,7]. The coarse-grained average activity of a neural network can be explained with the network equation first presented by Amari [8]:
τ ∂u(x, t)/∂t = −u + ∫_{−∞}^{∞} w(x − y) f[u(y)] dy.  (1)
Without loss of generality, we can set the synaptic decay time τ = 1. Here, u(x, t) is the synaptic input to neurons located at position x ∈ (−∞, ∞) at time t ≥ 0, and it represents the level of excitation or amount of input into a neural element. The coupling function w(x) determines the connections between neurons. The nonnegative, monotonically nondecreasing gain function f[u] denotes the firing rate at x at time t.
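For intuition, the dynamics of (1) can be explored with a direct discretization. The sketch below evolves the field with a forward-Euler step, using a Mexican-hat style coupling and a piecewise linear gain of the kind studied later in this paper; all parameter values, grid sizes, and the function name `simulate` are illustrative choices, not taken from the paper.

```python
import numpy as np

def simulate(a=2.0, alpha=0.1, h=0.3, L=20.0, n=801, steps=100, dt=0.02):
    """Forward-Euler discretization of the field equation (1) with tau = 1.
    Parameter values here are illustrative only."""
    A = 1.5 * a                                   # normalizes W0 = int w dx to 1
    x = np.linspace(-L, L, n)
    dx = x[1] - x[0]
    w = A * np.exp(-a * np.abs(x)) - np.exp(-np.abs(x))      # Mexican-hat kernel
    f = lambda u: (alpha * (u - h) + 1.0) * (u > h)          # piecewise linear gain
    u = np.where(x < 0, 0.0, (1 - alpha * h) / (1 - alpha))  # front-like initial data
    for _ in range(steps):
        conv = dx * np.convolve(w, f(u), mode="same")        # integral term of (1)
        u = u + dt * (-u + conv)
    return x, u
```

On this grid, the field relaxes toward a front-shaped profile joining the rest state 0 to an upper constant state; truncation of the kernel affects the values near the domain edges.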
Ermentrout and McLeod [9] considered a single network of excitatory neurons distributed on the real one-dimensional line. They showed that there is a unique and asymptotically stable monotonic traveling front solution. This front joins the stable constant solutions u1 and u2. They also explicitly solved the integral equation for the limiting case when f(u) is the Heaviside function.
Zhang investigated the existence and stability of traveling front solutions of (1) with the Heaviside gain function and a coupling function that is even, nonnegative, and piecewise smooth [10]. He derived the integral Evans function E(λ) for the eigenvalue problem resulting from linearizing (1) around the front u(ξ). Using the Evans function E(λ), he further proved that the real part of the eigenvalue λ cannot be nonnegative, except for the simple eigenvalue λ = 0, which is associated with translational invariance. Therefore, he concluded that the traveling fronts of (1) are linearly stable.
Guo showed the existence of nonmonotonic traveling front solutions of (1) with a “Mexican hat” type of lateral inhibition coupling and nonsaturating piecewise linear gain [11]. She also established an ODE formulation, to compute the point spectrum on the right half of the complex plane for traveling front solutions of (1) with a “Mexican hat” type of lateral inhibition coupling function and Heaviside gain function.
In 2004, Coombes and Owen showed how to construct Evans functions for several integral neural field equations [12]. In this work, the analysis was performed with a Heaviside firing rate function, which allowed them to construct several explicit forms of Evans function for several types of integral neural field equations. We will be able to expand on this work by investigating Evans functions with a non-Heaviside firing rate function.
There have been various studies on traveling fronts for Amari’s neural field model in the recent past, as well as in current work. For example, in 2012, Coombes, Schmidt, and Bojak analyzed a two-dimensional neural field model with Heaviside gain [13]. They investigated the stability of stationary fronts and determined the conditions for which stability is achieved. In 2016, Coombes and Liang investigated traveling fronts in a stochastic neural field model using Heaviside gain with exponentially decaying spatial interactions and threshold noise, and they were able to find several stability branches for the fronts [14]. In 2017, Coombes, Avitabile, and Gökçe considered a two-dimensional neural field model with Heaviside gain and imposed Dirichlet boundary conditions and conducted a stability analysis [15]. They were also able to verify their theoretical findings with computational results. More recently, in 2022, Cook, Peterson, Woldman, and Terry analyzed and derived the relationships and formulations of several relevant neural field models, including Amari’s with Heaviside gain [16]. They reported several modern formulations of neural field modeling, as well as its important connection to clinical applications. Also in 2022, Qin, Fu, Jin, and Peng used the neural field model with arctangent gain to compare with experimental data obtained clinically [17]. They were able to determine that the neural field model was able to effectively model real-world data more robustly than some other respected representations. Earlier in 2023, González-Ramírez investigated the existence of traveling fronts in fractional-order formulations of the neural field model with Heaviside gain [18]. Evidently, we can see that Amari’s work is still highly relevant in contemporary times.
We aim to extend the previous studies of Ermentrout and McLeod, Zhang, and Guo by analyzing the problem with a nonsaturating gain. This involves linearizing around the traveling front solution to set up an eigenvalue problem. We then derive the ODE equivalent to the eigenfunction equation, analyzing both real and imaginary components, to set up a system of ODEs. Finally, we compute the Evans function for the system and conclude, through computation with the Evans function, the stability of the fronts. All numerical and symbolic calculations in this paper were performed using Mathematica, and all visualizations were carried out using Python.

2. Materials and Methods

In this section of the paper, we introduce some of the functions that will be utilized in the neural field model. We then linearize the neural field model for a perturbed solution and derive ordinary differential equations equivalent to the linearized stability integrodifferential equation. Next, we derive the matching conditions for where the front crosses the threshold. We conclude by identifying possible forms for the resulting eigenfunction and then deriving the Evans function that will be used for the stability analysis.

2.1. Coupling Function, Gain Function, and Front Solutions Considered

We begin by discussing some of the functional parameters in our model. We will use a coupling function similar to the one previously used by Guo, the so-called Mexican hat function [11]. This function can be constructed using a variety of exponential functions. For our demonstration, we will consider a function of the form
w(x) = A e^{−a|x|} − e^{−|x|},  (2)
where we require that  a > 1  and  A > 1 , in order to satisfy the properties of the coupling function used in these networks, in addition to the other following conditions:
  • w(x) > 0 on an interval (−x0, x0), and w(−x0) = w(x0) = 0,
  • w(x) is decreasing on (0, xm],
  • w(x) < 0 on (−∞, −x0) ∪ (x0, ∞),
  • w(x) is continuous on ℝ, and w(x) is integrable on ℝ,
  • w(x) has a unique minimum xm on ℝ⁺ such that xm > x0, and w(x) is strictly increasing on (xm, ∞).
This function resembles a Mexican hat, except that it has a cusp at its maximum rather than a smooth peak [12]. Other lateral inhibition coupling functions in the literature include the Gaussian type [8], as well as the oscillatory types presented in [19,20]. Here, x0 = ln(A)/(a − 1) and xm = ln(aA)/(a − 1). The area above the x-axis represents the net excitation in the network, while the area below the x-axis represents the net inhibition. If A > a, then excitation dominates the network, and if A < a, inhibition dominates. An example of this type of function is shown in Figure 1.
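The stated properties of the coupling function are easy to confirm numerically. The sketch below (with an illustrative choice a = 2, and A = 1.5a so that the total connectivity W0 = ∫ w(x) dx is normalized to 1, as in the text) checks the zero crossing x0, the minimum xm, and the closed form W0 = 2A/a − 2:

```python
import numpy as np

# Illustrative parameters; A = 1.5a makes W0 = \int w dx equal to 1.
a = 2.0
A = 1.5 * a

def w(x):
    return A * np.exp(-a * np.abs(x)) - np.exp(-np.abs(x))

x0 = np.log(A) / (a - 1)        # zero crossing: w(x0) = 0
xm = np.log(a * A) / (a - 1)    # unique minimum on the positive axis

# Closed form for the total connectivity: W0 = 2A/a - 2.
W0 = 2 * A / a - 2
```

A direct quadrature of w over a wide interval agrees with the closed-form value of W0.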
The next function we must consider is the gain function. Here, we consider a gain function  f [ u ( y ) ]  of the following type:
f[u(y)] = [α(u − u_T) + β] Θ(u − u_T),  (3)
where Θ(u − u_T) is the Heaviside gain function:
Θ(u − u_T) = 1 if u > u_T, and 0 otherwise.
Here, u_T represents the threshold (denoted h in the traveling front analysis below). The gain function (3) does not saturate and has positive slope α. Without loss of generality, we set β = 1. Note that the gain function (3) reduces to the Heaviside function when α = 0. Stability was investigated for this case in [11] and others. We extend this work by examining the stability when α ≠ 0. A graph of the gain function is seen in Figure 2.
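The gain (3) and its Heaviside limit can be written down directly; in this sketch the threshold is written h, and the reduction at α = 0 is checked explicitly (function names are illustrative):

```python
import numpy as np

def gain(u, alpha, h, beta=1.0):
    """Nonsaturating piecewise linear gain (3), threshold written as h."""
    return (alpha * (u - h) + beta) * (u > h)

def heaviside_gain(u, h):
    """Heaviside limit of (3), obtained when alpha = 0 and beta = 1."""
    return 1.0 * (u > h)
```

For u above threshold the gain grows linearly with slope α instead of saturating at 1.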
The traveling front solution can be described by first rewriting the neural field equation in the traveling coordinate ξ = x − ct, where c is the traveling velocity:
−c u′(ξ) = −u(ξ) + ∫_{−∞}^{∞} w(ξ − η) f(u(η)) dη.  (4)
We are interested in finding traveling front solutions that connect the two constant solutions of (4), which are u1 = 0 and u2 = (1 − αh)W0 / (1 − αW0), where W0 = ∫_{−∞}^{∞} w(x) dx. In the lateral inhibition network, we set A = 1.5a in order to normalize W0 = 1. In other words, we then want a traveling front that connects u1 = 0 and u2 = (1 − αh)/(1 − α).
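The upper constant state can be verified directly: a constant solution u > h of (4) must satisfy u = W0·[α(u − h) + 1], whose solution is the u2 given above. A quick check with illustrative numbers:

```python
# Illustrative values only.
alpha, h = 0.2, 0.4
a = 2.0
A = 1.5 * a
W0 = 2 * A / a - 2          # equals 1 when A = 1.5a

u2 = (1 - alpha * h) * W0 / (1 - alpha * W0)

# u2 is a fixed point of u = W0 * (alpha * (u - h) + 1), and lies above h.
residual = u2 - W0 * (alpha * (u2 - h) + 1)
```

With W0 = 1 this reduces to u2 = (1 − αh)/(1 − α), as stated in the text.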
Since the network given by (4) is translation invariant, the traveling front can cross the threshold h at any finite value of ξ. We will, without loss of generality, assume that u(0) = h, u < h on (−∞, 0), and u > h on (0, ∞). Therefore, we define the traveling front solution as follows:
u(ξ) > h for ξ ∈ (0, ∞);  u(ξ) = h for ξ = 0;  u(ξ) < h for ξ ∈ (−∞, 0),  (5)
where u, u′, and u″ are bounded and continuous on ℝ. The nth order (n ≥ 3) derivatives of u are continuous everywhere except at ξ = 0. The solution u(ξ) also satisfies
lim_{ξ→∞} u(ξ) = u2 = (1 − αh)/(1 − α),  and  lim_{ξ→−∞} u(ξ) = 0,
and also
lim_{ξ→±∞} u′(ξ) = lim_{ξ→±∞} u″(ξ) = lim_{ξ→±∞} u^(n)(ξ) = 0,  n ≥ 3.
The existence of this front solution and its forms were proven in [11]. We find that there are six possible types of front for which we will analyze the stability. In the following cases, the λi represent the roots of the characteristic equation associated with the ordinary differential equations that represent the traveling fronts.
Positive Velocity c > 0: In the first three cases, u(ξ) = c1 e^{(1/c)ξ} + c2 e^{aξ} + c3 e^{ξ} on ξ ∈ (−∞, 0). On ξ ∈ [0, ∞), u(ξ) has the following forms:
Case L1: All five λi ∈ ℝ with λ1, λ2 < 0 and λ3, λ4, λ5 > 0; then
u(ξ) = d1 e^{λ1ξ} + d2 e^{λ2ξ} + d0,  ξ > 0,
where d0 = 2(1 − αh)(A − a) / (a + 2α(a − A)) is a constant, and since A = 1.5a, we find that d0 = (1 − αh)/(1 − α).
Case L2: Three λi ∈ ℝ with λ1, λ2 < 0 and λ3 > 0, and λ4,5 = l ± ir with l > 0. Then,
u(ξ) = d1 e^{λ1ξ} + d2 e^{λ2ξ} + d0,  ξ > 0.
Case L3: We have λ3 ∈ ℝ with λ3 > 0, and four complex λ such that λ1,2 = p ± iq with p < 0 and λ4,5 = l ± ir with l > 0. Then,
u(ξ) = d1 e^{pξ} cos(qξ) + d2 e^{pξ} sin(qξ) + d0,  ξ > 0.
Negative Velocity c < 0: In the next three cases, u(ξ) = c1 e^{aξ} + c2 e^{ξ} on ξ ∈ (−∞, 0). On ξ ∈ [0, ∞), u(ξ) has the following forms:
Case L4: All five λi ∈ ℝ with λ1, λ2, λ3 < 0 and λ4, λ5 > 0; then
u(ξ) = d1 e^{λ1ξ} + d2 e^{λ2ξ} + d3 e^{λ3ξ} + d0,  ξ > 0.
Case L5: Three λi ∈ ℝ with λ3 < 0 and λ4, λ5 > 0, and λ1,2 = l ± ir with l < 0. Then,
u(ξ) = d1 e^{lξ} cos(rξ) + d2 e^{lξ} sin(rξ) + d3 e^{λ3ξ} + d0,  ξ > 0.
Case L6: We have λ3 ∈ ℝ with λ3 < 0, and four complex λ such that λ1,2 = p ± iq with p < 0 and λ4,5 = l ± ir with l > 0. Then,
u(ξ) = d1 e^{pξ} cos(qξ) + d2 e^{pξ} sin(qξ) + d3 e^{λ3ξ} + d0,  ξ > 0.
Each of these cases allows for different domains for the parameters h and  α . We refer the interested reader to [11] for the details on how these may be obtained, as well as the derivation of these six cases.
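The reduction of the constant d0 claimed in Case L1 can be checked directly; the parameter values below are illustrative:

```python
# Illustrative values; A = 1.5a as in the normalized network.
alpha, h, a = 0.2, 0.4, 2.0
A = 1.5 * a

# d0 from Case L1, which should reduce to (1 - alpha*h)/(1 - alpha).
d0 = 2 * (1 - alpha * h) * (A - a) / (a + 2 * alpha * (a - A))
```

With A = 1.5a, both numerator and denominator pick up a common factor of a/2, giving the simplified form quoted in the text.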

2.2. Formation of the 5th Order ODE

In order to investigate the stability of the traveling front solution, we introduce a perturbation. We perturb the traveling front u0(ξ) by εv(ξ, t) with small ε > 0, where v(ξ, t) = e^{γt}φ(ξ) and γ = x + iy ∈ ℂ; i.e., u(ξ, t) → u0(ξ) + εv(ξ, t).
The function φ belongs to the set BC¹(ℝ, ℂ): both φ and φ′ are bounded, continuous, integrable, complex-valued functions defined on (−∞, ∞). After perturbing the front solution, we now employ the method of linearization. By taking
∂u(ξ, t)/∂t − c ∂u(ξ, t)/∂ξ = −u(ξ, t) + ∫_{−∞}^{∞} w(ξ − η)[α(u(η, t) − h) + 1]Θ(u(η, t) − h) dη,
we can linearize around the front solution by introducing the perturbed solution and Taylor expanding to order ε.
The resulting eigenvalue equation is given by
(γ + 1)φ(ξ) = cφ′(ξ) + w(ξ)φ(0)/u0′(0) + α ∫_{−∞}^{∞} w(ξ − η)Θ(u0(η) − h)φ(η) dη.  (6)
In the derivations needed for the stability analysis, we will study the eigenvalue Equation (6). In this equation, we note a Heaviside function in the integral term. As a result, when  ξ < 0 , we lose the integral term, and the equation becomes
(γ + 1)φ(ξ) = cφ′(ξ) + w(ξ)φ(0)/u0′(0).
This necessitates studying the relevant details in separate domains, one being  ( , 0 )  and the other being  [ 0 , ) .
Theorem 1.
The equivalent fifth order linear ODE for Equation (6) on (−∞, 0) is
(γ + 1)[φ⁗ − (a² + 1)φ″ + a²φ] = c[φ^(v) − (a² + 1)φ‴ + a²φ′],  (7)
or equivalently,
0 = cφ^(v) − (γ + 1)φ⁗ − c(a² + 1)φ‴ + (γ + 1)(a² + 1)φ″ + ca²φ′ − a²(γ + 1)φ.
Proof. 
On (−∞, 0), the integral term in (6) disappears, and w(ξ) = Ae^{aξ} − e^{ξ}. Now, note that
a⁴Ae^{aξ} − e^{ξ} − (a² + 1)(a²Ae^{aξ} − e^{ξ}) + a²(Ae^{aξ} − e^{ξ}) = 0.  (8)
By rearranging and differentiating (6) repeatedly, we find that
cφ′ = (γ + 1)φ − [φ(0)/u0′(0)](Ae^{aξ} − e^{ξ}),  (9)
cφ‴ = (γ + 1)φ″ − [φ(0)/u0′(0)](a²Ae^{aξ} − e^{ξ}),  (10)
and
cφ^(v) = (γ + 1)φ⁗ − [φ(0)/u0′(0)](a⁴Ae^{aξ} − e^{ξ}).  (11)
Now, by taking (11) − (a² + 1)·(10) + a²·(9) and applying (8), the result (7) follows. □
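The cancellation identity (8), which drives this reduction, is easy to verify numerically on the below-threshold domain (the values of a and A below are illustrative):

```python
import numpy as np

a, A = 2.0, 3.0
xi = np.linspace(-5.0, 0.0, 101)              # below-threshold domain, xi < 0

w = A * np.exp(a * xi) - np.exp(xi)           # w(xi) on xi < 0
w2 = a**2 * A * np.exp(a * xi) - np.exp(xi)   # its second derivative
w4 = a**4 * A * np.exp(a * xi) - np.exp(xi)   # its fourth derivative

# Identity (8): w'''' - (a^2 + 1) w'' + a^2 w vanishes identically.
residual = w4 - (a**2 + 1) * w2 + a**2 * w
```

The coefficient of Ae^{aξ} is a⁴ − (a² + 1)a² + a² = 0, and the coefficient of e^{ξ} is −1 + (a² + 1) − a² = 0, so the residual vanishes identically.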
For above-threshold values (ξ > 0), we must incorporate the integral term of (6).
Theorem 2.
The equivalent fifth order linear ODE for Equation (6) on [0, ∞) is
(γ + 1)[φ⁗ − (a² + 1)φ″ + a²φ] = c[φ^(v) − (a² + 1)φ‴ + a²φ′] − 2α(aA − 1)φ″ − 2αa(a − A)φ,  (12)
or equivalently,
0 = cφ^(v) − (γ + 1)φ⁗ − c(a² + 1)φ‴ + [(γ + 1)(a² + 1) − 2α(aA − 1)]φ″ + ca²φ′ − [a²(γ + 1) + 2αa(a − A)]φ.
Proof. 
Note that, for the above-threshold case, we must consider the integral term in (6):
αI = α ∫_0^∞ w(ξ − η)φ(η) dη.
By differentiating (6) repeatedly, we find that
cφ′ = (γ + 1)φ − [φ(0)/u0′(0)](Ae^{−aξ} − e^{−ξ}) − αI,  (13)
cφ‴ = (γ + 1)φ″ − [φ(0)/u0′(0)](a²Ae^{−aξ} − e^{−ξ}) − αI″,  (14)
and
cφ^(v) = (γ + 1)φ⁗ − [φ(0)/u0′(0)](a⁴Ae^{−aξ} − e^{−ξ}) − αI⁗,  (15)
where, with the Leibniz integral rule, we have the following:
I = ∫_0^∞ (Ae^{−a|ξ−η|} − e^{−|ξ−η|})φ(η) dη,
I″ = 2(1 − aA)φ + ∫_0^∞ (a²Ae^{−a|ξ−η|} − e^{−|ξ−η|})φ(η) dη,
and
I⁗ = 2(1 − aA)φ″ + 2(1 − a³A)φ + ∫_0^∞ (a⁴Ae^{−a|ξ−η|} − e^{−|ξ−η|})φ(η) dη.
Now, by taking (15) − (a² + 1)·(14) + a²·(13) and again applying (8), the result (12) follows. □

2.3. Matching Conditions for the ODE

Before deriving the matching conditions for the ODE, we present a lemma on the integral term that appears in the derivation, to simplify the presentation of the matching conditions. This will allow us to closely analyze the integral term where the threshold and resulting discontinuity occur.
Lemma 1.
When analyzing the matching conditions for the equivalent ODEs of the eigenvalue Equation (6), the integral term
I(ξ) = ∫_{−∞}^{∞} w(ξ − η)Θ(u0(η) − h)φ(η) dη
has the following matching conditions at ξ = 0:
I(0⁺) − I(0⁻) = 0,  (16a)
I′(0⁺) − I′(0⁻) = 0,  (16b)
I″(0⁺) − I″(0⁻) = 2(1 − aA)φ(0⁺),  (16c)
I‴(0⁺) − I‴(0⁻) = 2(1 − aA)φ′(0⁺).  (16d)
Proof. 
First, we observe that I is necessarily continuous, as its integrand is Lebesgue integrable, which gives us the equality (16a).
To see (16b), we first proceed by writing I as
J(ξ) = ∫_{−∞}^{ξ} (Ae^{−a(ξ−η)} − e^{−(ξ−η)})Θ(u0(η) − h)φ(η) dη + ∫_{ξ}^{∞} (Ae^{a(ξ−η)} − e^{(ξ−η)})Θ(u0(η) − h)φ(η) dη = J1(ξ) + J2(ξ).
In addition, we let
F1(ξ, η) = (Ae^{−a(ξ−η)} − e^{−(ξ−η)})Θ(u0(η) − h)φ(η)
and
F2(ξ, η) = (Ae^{a(ξ−η)} − e^{(ξ−η)})Θ(u0(η) − h)φ(η).
Next, we take the derivative using the Leibniz integral rule. Since F1(ξ, ξ) = F2(ξ, ξ), the boundary terms cancel, and we observe that
dJ/dξ = dJ1/dξ + dJ2/dξ = ∫_{−∞}^{ξ} ∂F1/∂ξ dη + ∫_{ξ}^{∞} ∂F2/∂ξ dη.
Since the integrands are once again continuous, we see that
dJ/dξ(0⁺) − dJ/dξ(0⁻) = I′(0⁺) − I′(0⁻) = 0.
The proofs for (16c) and (16d) are similar, but much more cumbersome in detail, and we present these details in Appendix A. □
We are now prepared to derive the matching conditions to join the eigenfunction  ϕ ( ξ )  at  ξ = 0 .
Theorem 3.
The matching conditions for the eigenfunction  ϕ ( ξ )  at  ξ = 0  are as follows:
φ(0⁺) − φ(0⁻) = 0,  (17)
φ′(0⁺) − φ′(0⁻) = 0,  (18)
φ″(0⁺) − φ″(0⁻) = 2(aA − 1)φ(0) / (c u0′(0)),  (19)
φ‴(0⁺) − φ‴(0⁻) = 2(γ + 1)(aA − 1)φ(0) / (c² u0′(0)) − (2α/c)(1 − aA)φ(0),  (20)
φ⁗(0⁺) − φ⁗(0⁻) = [2(γ + 1)²(aA − 1)/(c³ u0′(0)) + 2(a³A − 1)/(c u0′(0)) − 2α(γ + 1)(1 − aA)/c²]φ(0) − (2α/c)(1 − aA)φ′(0).  (21)
Proof. 
Matching conditions (17) and (18) are due to the fact that φ ∈ BC¹(ℝ, ℂ). In particular, we note that φ(0⁺) − φ(0⁻) = 0 implies φ(0⁺) = φ(0⁻) = φ(0), with a similar result for φ′(0), so that we may make the substitutions φ(0⁺) = φ(0) and φ′(0⁺) = φ′(0) in the previous lemma.
To see matching condition (19), we differentiate (6) with respect to ξ and rearrange to obtain the following:
cφ″(ξ) = (γ + 1)φ′(ξ) − w′(ξ)φ(0)/u0′(0) − αI′,  (22)
where
I′ = I′(ξ) = ∫_0^∞ w′(ξ − η)φ(η) dη.
Now, since
w(ξ − η) = Ae^{−a(ξ−η)} − e^{−(ξ−η)} for ξ ≥ η, and w(ξ − η) = Ae^{a(ξ−η)} − e^{(ξ−η)} for ξ < η,
we note that
w′(ξ − η) = −aAe^{−a(ξ−η)} + e^{−(ξ−η)} for ξ ≥ η, and w′(ξ − η) = aAe^{a(ξ−η)} − e^{(ξ−η)} for ξ < η.
Now, we determine that
cφ″(0⁺) = (γ + 1)φ′(0) − (1 − aA)φ(0)/u0′(0) − αI′(0⁺),  (23)
and
cφ″(0⁻) = (γ + 1)φ′(0) − (aA − 1)φ(0)/u0′(0) − αI′(0⁻).  (24)
Using the previous lemma, we have that I′(ξ) is continuous at ξ = 0, so taking ((23) − (24))/c gives (19) as needed.
To see (20), we differentiate (22) to arrive at:
cφ‴(ξ) = (γ + 1)φ″(ξ) − w″(ξ)φ(0)/u0′(0) − αI″(ξ).  (25)
In contrast to before, we note that
w″(ξ − η) = a²Ae^{−a(ξ−η)} − e^{−(ξ−η)} for ξ ≥ η, and w″(ξ − η) = a²Ae^{a(ξ−η)} − e^{(ξ−η)} for ξ < η.
Thus, in this case, w″ is continuous at ξ − η = 0. We can then calculate
cφ‴(0⁺) = (γ + 1)φ″(0⁺) − w″(0)φ(0)/u0′(0) − αI″(0⁺),  (26)
and
cφ‴(0⁻) = (γ + 1)φ″(0⁻) − w″(0)φ(0)/u0′(0) − αI″(0⁻).  (27)
The previous lemma gives us
I″(0⁺) − I″(0⁻) = 2(1 − aA)φ(0).
Now, taking ((26) − (27))/c gives
φ‴(0⁺) − φ‴(0⁻) = [(γ + 1)/c](φ″(0⁺) − φ″(0⁻)) − (2α/c)(1 − aA)φ(0),  (28)
which, after substituting (19), gives us (20).
The same technique can be used to show (21). □

2.4. Forms for Eigenfunction  ϕ

Next, we must discuss the forms for our eigenfunction φ, the solution to the ODEs given by (7) and (12). These will be very similar to the forms for the traveling front itself, as seen in [11].
The characteristic values for (7) are (γ + 1)/c, ±1, and ±a. To have a solution that converges to 0 as ξ → −∞, the eigenfunction must take one of the following forms:
φ(ξ) = s1 e^{((γ+1)/c)ξ} + s2 e^{aξ} + s3 e^{ξ},  ξ < 0,  (γ + 1)/c > 0,
φ(ξ) = s1 e^{aξ} + s2 ξe^{aξ} + s3 e^{ξ},  ξ < 0,  (γ + 1)/c = a,
φ(ξ) = s1 e^{aξ} + s2 e^{ξ} + s3 ξe^{ξ},  ξ < 0,  (γ + 1)/c = 1,
φ(ξ) = s4 e^{aξ} + s5 e^{ξ},  ξ < 0,  (γ + 1)/c < 0,
in the case where y = 0 (recall γ = x + iy).
If y ≠ 0, then we cannot have (γ + 1)/c = a or (γ + 1)/c = 1, since a, 1 ∈ ℝ. First, we analyze the e^{((γ+1)/c)ξ} term using Euler's formula:
e^{((γ+1)/c)ξ} = e^{((x+1)/c)ξ + i(y/c)ξ} = e^{((x+1)/c)ξ} e^{i(y/c)ξ} = e^{((x+1)/c)ξ}(cos((y/c)ξ) + i sin((y/c)ξ)).
We must consider the fact that coefficients of complex terms may have a real and an imaginary component, i.e., sj = Rj + Ij·i with Rj, Ij ∈ ℝ. As a result, we find that if (x + 1)/c > 0, then
s1 e^{((γ+1)/c)ξ} = (R1 + I1·i)e^{((x+1)/c)ξ}(cos((y/c)ξ) + i sin((y/c)ξ)) = e^{((x+1)/c)ξ}(R1 cos((y/c)ξ) − I1 sin((y/c)ξ)) + i·e^{((x+1)/c)ξ}(R1 sin((y/c)ξ) + I1 cos((y/c)ξ)).  (29)
In conclusion, we find that when ξ < 0, the eigenfunction φ(ξ) is given by
φ(ξ) = (R1 + I1·i)e^{((x+1)/c)ξ}(cos((y/c)ξ) + i sin((y/c)ξ)) + s2 e^{aξ} + s3 e^{ξ},  (x + 1)/c > 0,
φ(ξ) = s4 e^{aξ} + s5 e^{ξ},  (x + 1)/c < 0.
When  ξ > 0 , we must determine the form of  ϕ  by first referencing (12), which depends on the roots of the following characteristic equation that is a polynomial of degree five:
0 = cμ⁵ − (γ + 1)μ⁴ − c(a² + 1)μ³ + [(a² + 1)(γ + 1) + 2α(1 − aA)]μ² + ca²μ − [a²(γ + 1) + 2αa(a − A)].  (32)
Unfortunately, the roots of the polynomial (32) now depend on the value of  γ . We cannot solve the equation without knowing  γ , and there is no explicit form for the roots of a fifth degree polynomial. However, we can use the discriminant,  Δ , to determine the explicit solution form of the eigenfunction.
We can find the discriminant as follows: first, we form the resultant of the characteristic equation (32) and its derivative, given by
5cμ⁴ − 4(γ + 1)μ³ − 3c(a² + 1)μ² + 2[(a² + 1)(γ + 1) + 2α(1 − aA)]μ + ca²,  (33)
as the determinant of the Sylvester matrix below. This determinant is then divided by c, the leading coefficient, to give the discriminant [21].
In this case, the matrix is given by
S =
[ a5   a4   a3   a2   a1   a0   0    0    0  ]
[ 0    a5   a4   a3   a2   a1   a0   0    0  ]
[ 0    0    a5   a4   a3   a2   a1   a0   0  ]
[ 0    0    0    a5   a4   a3   a2   a1   a0 ]
[ 5a5  4a4  3a3  2a2  a1   0    0    0    0  ]
[ 0    5a5  4a4  3a3  2a2  a1   0    0    0  ]
[ 0    0    5a5  4a4  3a3  2a2  a1   0    0  ]
[ 0    0    0    5a5  4a4  3a3  2a2  a1   0  ]
[ 0    0    0    0    5a5  4a4  3a3  2a2  a1 ],  (34)
where a5 = c, a4 = −(γ + 1), a3 = −c(a² + 1), a2 = (a² + 1)(γ + 1) + 2α(1 − aA), a1 = ca², and a0 = −(a²(γ + 1) + 2αa(a − A)), which are, of course, the coefficients in (32) and (33). Again, by calculating det(S)/c, we find the discriminant Δ of (32).
In the case where  y = 0 , there are either five real characteristic values or four complex values with one real characteristic value when the discriminant  Δ > 0 . There are two complex and three real characteristic values when  Δ < 0 . Finally, there are repeated roots when  Δ = 0 .
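This classification can be reproduced numerically. The sketch below builds the 9×9 Sylvester matrix of a general quintic and its derivative and returns det(S)/a5; the test polynomials in the usage note are generic illustrations, not the characteristic equation itself:

```python
import numpy as np

def discriminant5(coeffs):
    """Discriminant of a quintic a5*x^5 + ... + a0 via the Sylvester matrix
    of p and p'.  For n = 5 the sign factor (-1)^{n(n-1)/2} equals +1, so
    Delta = det(S)/a5 directly."""
    a5, a4, a3, a2, a1, a0 = coeffs
    p = [a5, a4, a3, a2, a1, a0]
    dp = [5 * a5, 4 * a4, 3 * a3, 2 * a2, a1]
    S = np.zeros((9, 9))
    for i in range(4):               # deg p' = 4 shifted copies of p
        S[i, i:i + 6] = p
    for i in range(5):               # deg p  = 5 shifted copies of p'
        S[4 + i, i:i + 5] = dp
    return np.linalg.det(S) / a5
```

For example, (μ−1)(μ−2)(μ−3)(μ−4)(μ−5) has five distinct real roots and Δ > 0; (μ−1)(μ²+1)(μ²+4) has one real root and two complex pairs, again with Δ > 0; while (μ−1)(μ−2)(μ−3)(μ²+1) has three real roots and one complex pair, with Δ < 0, matching the classification above.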
If y ≠ 0, we will obtain five complex μi = ρi + iτi, which, in general, do not form conjugate pairs. An interesting observation about the roots of the characteristic equations is given by the following lemma:
Lemma 2.
For the characteristic equations of (7) or (12):
0 = cμ⁵ − (γ + 1)μ⁴ − c(a² + 1)μ³ + (a² + 1)(γ + 1)μ² + ca²μ − a²(γ + 1),  ξ < 0,
0 = cμ⁵ − (γ + 1)μ⁴ − c(a² + 1)μ³ + [(a² + 1)(γ + 1) + 2α(1 − aA)]μ² + ca²μ − [a²(γ + 1) + 2αa(a − A)],  ξ > 0,
if γ0 = x + iy and μ0 = ρ + iτ is a characteristic value of one of the characteristic equations above, then when γ = γ̄0 = x − iy, μ̄0 = ρ − iτ is a characteristic value of the resulting characteristic equation for this value of γ.
This lemma provides some insight into the structure of the eigenfunction when  γ C . We can now give the eigenfunction structure. We show the eigenfunction forms both for when  γ R  and  γ C . Due to the nature of the roots of real and complex polynomials, we can more explicitly represent the eigenfunction when  γ R . When  γ C , we cannot explicitly write out the eigenfunction, as it will depend on complex roots of the complex characteristic polynomial (12) when  ξ > 0 . We describe the structure for both of these cases in the following two sections.
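Lemma 2 holds because conjugating γ conjugates every coefficient of the characteristic polynomial (the remaining parameters are real), which conjugates the roots. A numerical illustration, with arbitrary parameter values:

```python
import numpy as np

def char_coeffs(gamma, c=1.0, a=2.0, alpha=0.2, A=3.0):
    """Coefficients of the xi > 0 characteristic equation; the parameter
    values are illustrative."""
    return [c,
            -(gamma + 1),
            -c * (a**2 + 1),
            (a**2 + 1) * (gamma + 1) + 2 * alpha * (1 - a * A),
            c * a**2,
            -(a**2 * (gamma + 1) + 2 * alpha * a * (a - A))]

g = 0.5 + 0.7j
mu = np.roots(char_coeffs(g))            # roots for gamma = g
mu_bar = np.roots(char_coeffs(np.conj(g)))  # roots for gamma = conj(g)
```

Sorting both root sets shows that the second set consists exactly of the conjugates of the first.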

2.4.1. Eigenfunction  ϕ  Forms When  γ R  ( y = 0 )

When  γ R , ( y = 0 ), we obtain the following structure:
(γ + 1)/c > 0: In the first four forms, φ(ξ) = s1 e^{((γ+1)/c)ξ} + s2 e^{aξ} + s3 e^{ξ} on ξ ∈ (−∞, 0). On ξ ∈ [0, ∞), φ(ξ) has one of the following forms:
Form 1: All five μi ∈ ℝ with μ1, μ2 < 0 and μ3, μ4, μ5 > 0; then
φ(ξ) = s4 e^{μ1ξ} + s5 e^{μ2ξ},  ξ > 0.
Form 2: Three μi ∈ ℝ with μ1, μ2 < 0 and μ3 > 0, and μ4,5 = l ± ir with l > 0. Then,
φ(ξ) = s4 e^{μ1ξ} + s5 e^{μ2ξ},  ξ > 0.
Form 3: Three μi ∈ ℝ with μ3 < 0 and μ4, μ5 > 0, and μ1,2 = l ± ir with l < 0. Then,
φ(ξ) = s4 e^{lξ} cos(rξ) + s5 e^{lξ} sin(rξ),  ξ > 0.
Form 4: We have μ3 ∈ ℝ with μ3 > 0, and four complex μ such that μ1,2 = p ± iq with p < 0 and μ4,5 = l ± ir with l > 0. Then,
φ(ξ) = s4 e^{pξ} cos(qξ) + s5 e^{pξ} sin(qξ),  ξ > 0.
(γ + 1)/c < 0: In the next four forms, φ(ξ) = s4 e^{aξ} + s5 e^{ξ} on ξ ∈ (−∞, 0). On ξ ∈ [0, ∞), φ(ξ) has one of the following forms:
Form 5: All five μi ∈ ℝ with μ1, μ2, μ3 < 0 and μ4, μ5 > 0; then
φ(ξ) = s1 e^{μ1ξ} + s2 e^{μ2ξ} + s3 e^{μ3ξ},  ξ > 0.
Form 6: Three μi ∈ ℝ with μ1, μ2, μ3 < 0, and μ4,5 = l ± ir with l > 0. Then,
φ(ξ) = s1 e^{μ1ξ} + s2 e^{μ2ξ} + s3 e^{μ3ξ},  ξ > 0.
Form 7: Three μi ∈ ℝ with μ3 < 0 and μ4, μ5 > 0, and μ1,2 = l ± ir with l < 0. Then,
φ(ξ) = s1 e^{lξ} cos(rξ) + s2 e^{lξ} sin(rξ) + s3 e^{μ3ξ},  ξ > 0.
Form 8: We have μ3 ∈ ℝ with μ3 < 0, and four complex μ such that μ1,2 = p ± iq with p < 0 and μ4,5 = l ± ir with l > 0. Then,
φ(ξ) = s1 e^{pξ} cos(qξ) + s2 e^{pξ} sin(qξ) + s3 e^{μ3ξ},  ξ > 0.
Form 9: When Δ = 0, there are repeated roots of (32). This happens only at a few isolated values of α, and it may occur for both c > 0 and c < 0.
  • If c > 0, then Δ = 0 is the transition between either forms 1 and 2, forms 2 and 4, or forms 3 and 4.
  • For the first two transitions mentioned above, the repeated roots are μ1 = μ2 < 0. The solution form is
    φ(ξ) = s4 e^{μ1ξ} + s5 ξe^{μ1ξ}, ξ > 0;  φ(ξ) = s1 e^{((γ+1)/c)ξ} + s2 e^{aξ} + s3 e^{ξ}, ξ < 0.
  • If the transition is between form 3 and form 4, the solution form is the same as form 3.
  • If c < 0, then Δ = 0 is the transition between either forms 5 and 6, forms 6 and 8, or forms 7 and 8.
  • For the transition from 5 to 6 or 6 to 8, the repeated roots are μ4 = μ5 > 0. The solution form is the same as form 6.
  • For the transition from 7 to 8, the solution form is the same as form 7.
Finally, when  c = 0 , there is a stationary front, where we use the following ODEs:
0 = (γ + 1)φ⁗ − (γ + 1)(a² + 1)φ″ + a²(γ + 1)φ,  ξ < 0,
0 = (γ + 1)φ⁗ − [(γ + 1)(a² + 1) − 2α(aA − 1)]φ″ + [a²(γ + 1) + 2αa(a − A)]φ,  ξ > 0.
This will occur for some  α  and h values in the range that we consider.

2.4.2. Eigenfunction  ϕ  Forms When  γ C  ( y 0 )

If γ ∈ ℂ (y ≠ 0), the form of the eigenfunction will resemble Forms 3 and 4 of the γ ∈ ℝ (y = 0) case. Again, we recall that sj = Rj + Ij·i, and the analysis is similar to (29). We find that, when ξ > 0, the eigenfunction takes the form given by
φ(ξ) = (R4 + I4·i)e^{ρ1ξ}(cos(τ1ξ) + i sin(τ1ξ)) + (R5 + I5·i)e^{ρ2ξ}(cos(τ2ξ) + i sin(τ2ξ)),  (x + 1)/c > 0,
φ(ξ) = (R1 + I1·i)e^{ρ1ξ}(cos(τ1ξ) + i sin(τ1ξ)) + (R2 + I2·i)e^{ρ2ξ}(cos(τ2ξ) + i sin(τ2ξ)) + (R3 + I3·i)e^{ρ3ξ}(cos(τ3ξ) + i sin(τ3ξ)),  (x + 1)/c < 0,
where we leave each term in its factored form for the sake of space.
We note that all of these solution form cases are possible within each of the stability investigations for the front cases of L1, L2, L3, L4, L5, and L6, as outlined in [11]. All of the relevant cases are obtained by finding the appropriate eigenvalues of the characteristic equation for each type of front in Mathematica.

2.5. Evans Function for Stability Analysis

In the stability analysis for α ≠ 0, we need to proceed differently from the previous approaches of Zhang, Guo, and Coombes and Owen. However, similarly to their analyses, we claim that the determinant of the coefficient matrix of the system formed by our matching conditions represents the Evans function. One of the main differences here is that we will not be able to represent the Evans function explicitly, as in [12]. Instead, we will rely on a computational approach. Another important consideration is that the Evans function is specific to each traveling front. We first determine the traveling front equation. This allows us to obtain the required parameter values for the stability eigenfunction. By varying γ across a discretized domain, we can calculate the values of the Evans function. This allows us to find the necessary eigenvalues and determine the stability of the traveling front.
First, we will formally state the necessary conditions for the Evans function.
Theorem 4.
Theorem of Evans: The Evans function is a complex analytic function, and it is real-valued if the eigenvalue parameter λ is real. The complex number λ is an eigenvalue of the operator if, and only if,  E ( λ ) = 0 . Moreover, the algebraic multiplicity of an eigenvalue is exactly equal to the order of the zero of the Evans function. (from [22,23,24,25]).
The discrete spectrum of an operator is defined as all γ such that the resolvent operator Rγ = (A − γI)⁻¹ does not exist or, equivalently, such that (A − γI)ψ(ξ) = 0 for some nonzero ψ ∈ B. It should be noted that, in order to confirm that the discrete spectrum, and thus our Evans function, is the only source of possible instability, the essential spectrum has previously been shown to be the line −1 + ics, s ∈ ℝ, for all traveling fronts we consider [10,26]. As a result, only the discrete spectrum, the zeroes of the Evans function E(γ), will contribute to the stability analysis for each front.
To begin, we need to define our operator, which we will denote as  A . First, recall the eigenvalue Equation (6)
(γ + 1)φ(ξ) = cφ′(ξ) + w(ξ)φ(0)/u0′(0) + α ∫_{−∞}^{∞} w(ξ − η)Θ(u0(η) − h)φ(η) dη.
We follow the previous analysis by [10,11,26] and others. By letting
Aφ(ξ) = cφ′(ξ) − φ(ξ) + w(ξ)φ(0)/u0′(0) + α ∫_{−∞}^{∞} w(ξ − η)Θ(u0(η) − h)φ(η) dη,
our eigenvalue equation becomes
γ ϕ ( ξ ) = A ϕ ( ξ ) .
Here,  A ϕ ( ξ )  is a linear operator defined on a Banach space  B  of functions that are continuous, bounded, and decay exponentially as  ξ ± .
In this case, the function is complex-valued, and will, of course, lead to a complex-valued Evans function.
First, recall the matching conditions:
φ(0⁺) − φ(0⁻) = 0,
φ′(0⁺) − φ′(0⁻) = 0,
φ″(0⁺) − φ″(0⁻) = 2(aA − 1)φ(0) / (c u0′(0)),
φ‴(0⁺) − φ‴(0⁻) = 2(γ + 1)(aA − 1)φ(0) / (c² u0′(0)) − (2α/c)(1 − aA)φ(0),
φ⁗(0⁺) − φ⁗(0⁻) = [2(γ + 1)²(aA − 1)/(c³ u0′(0)) + 2(a³A − 1)/(c u0′(0)) − 2α(γ + 1)(1 − aA)/c²]φ(0) − (2α/c)(1 − aA)φ′(0).
Now, this system, along with the corresponding form of the eigenfunction (which varies as  γ  is varied), forms a linear system in the coefficients  s 1  through  s 5 , since  ϕ ( 0 )  and  ϕ ′ ( 0 )  are expressed in terms of the  s i  (presented in the previous section). We can represent this as a matrix equation:
\[
M(\gamma)\,\mathbf{s} = \mathbf{0}, \tag{38}
\]
where  M ( γ )  is the coefficient matrix and  s  is the column vector of  s j ’s with  s j = R j + I j · i  ( j = 1 , , 5 ). We describe the procedure for obtaining  M ( γ ) , as well as provide some examples of  M ( γ )  in Appendix B. We also include an explicit form for when  α = 0  in Appendix C. Unfortunately, no explicit solution for  M ( γ )  exists when  α 0 . Note that we can use either the above or below threshold equations to calculate  ϕ ( 0 )  in the matching conditions, since they are defined as equal at this point. There exists a nontrivial solution to (38) if, and only if, the matrix  M ( γ )  has  det ( M ( γ ) ) = | M ( γ ) | = 0 ,  where  det ( M ( γ ) )  represents the determinant of the matrix  M ( γ ) . Therefore, we let  E ( γ ) = | M ( γ ) | . We note that the Evans function will have a real and imaginary component, and the zeroes of this function will occur when
\[
\operatorname{Re}\left( |M(\gamma)| \right) = \operatorname{Im}\left( |M(\gamma)| \right) = 0. \tag{39}
\]
We can locate the values for  γ  that satisfy (39) using the intersections of the zero-level contours of the real and imaginary components of  E ( γ ) . We must be careful in our analysis, since the eigenfunction undergoes structural changes as we vary  γ . Its form depends on solutions to the characteristic equation for (6), which varies depending on the choice of  α . We stress once more that the Evans function cannot be represented explicitly, and we can only numerically compute  E ( γ ) . Furthermore, we can only determine the values of  γ  that solve (39) numerically. We go through some examples of this analysis in the next section.
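The contour-intersection search described above is straightforward to sketch numerically. The snippet below is a minimal illustration rather than the code used for this paper: it evaluates a stand-in analytic function with known zeroes at γ = 0 and γ = −1 on a rectangular mesh (using the mesh sizes quoted in Section 3) and flags the mesh points where the real and imaginary parts vanish simultaneously; in practice, E(γ) = det M(γ) would be evaluated at each mesh point in the same way.

```python
import numpy as np

def zero_candidates(E, xs, ys, tol=1e-3):
    """Flag mesh points where Re(E) and Im(E) both (nearly) vanish."""
    X, Y = np.meshgrid(xs, ys)
    vals = E(X + 1j * Y)
    mask = (np.abs(vals.real) < tol) & (np.abs(vals.imag) < tol)
    return X[mask] + 1j * Y[mask]

# Stand-in for the Evans function, with known zeroes at gamma = 0 and -1.
E = lambda g: g * (g + 1)

xs = np.arange(-2.0, 1.0, 0.0025)   # mesh sizes as in Section 3
ys = np.arange(-0.5, 0.5, 0.005)
candidates = zero_candidates(E, xs, ys)
```

Each flagged point approximates an intersection of the two zero-level contours; a root finder (for instance, Newton's method applied to the pair (Re E, Im E)) could then refine it.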

3. Results

In this section, we examine the computational results of the stability analysis for each of the six cases presented in Section 2. We find that the positive-speed fronts we analyze are stable, that a small-magnitude negative-speed front is stable, and that two additional negative-speed fronts are unstable. All calculations were performed in Mathematica, and all plots were created using Python's matplotlib package. The mesh size typically used was  Δx = 0.0025  and  Δy = 0.005 , although, depending on the desired accuracy and performance, coarser meshes were sometimes used in regions of little activity, and finer meshes in regions of high activity, to capture all relevant details. We now examine the individual cases.

3.1. Case L1 Analysis

First, we analyze two examples of the L1 case, with parameters given in the caption of each figure. Recall that this case indicates that we have a traveling front with all real eigenvalues and a positive traveling speed c. We consider two fronts, one with  α = 0  and one with  α = 0.04 . Unless otherwise stated, we assume that  A = 2.25  and  a = 1.5  in the coupling function  w  for all cases.
We first investigate the case  α = 0  and  h = 0.6 . After finding the front solution, we can find the zeroes of the Evans function, which are the zeroes of the determinant of the matching-conditions matrix given by (38). We find through computation that the zeroes are approximately  γ ≈ −1.14 ,  γ ≈ −1.0975 ,  γ ≈ −0.9075 ,  γ ≈ −0.8575 , and  γ = 0 . Since every value of  γ  with  | M ( γ ) | = 0  is negative, apart from  γ = 0  (the translation eigenvalue, which is always present), we conclude that this particular front is linearly stable. We show this below in Figure 3.
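The stability verdict itself is a mechanical check on the computed spectrum. As a small sketch (with the zeroes above hard-coded, and γ = 0 treated as the translation eigenvalue that is always present):

```python
# Approximate zeroes of the Evans function, L1 case (alpha = 0, h = 0.6)
zeros_L1 = [-1.14, -1.0975, -0.9075, -0.8575, 0.0]
# Zeroes for the unstable L5 case of Section 3.5, for comparison
zeros_L5 = [10.8125, -1.6125, -1.4075, -0.8125, 0.0]

def linearly_stable(zeros):
    """Stable iff every zero other than the translation eigenvalue
    gamma = 0 lies strictly in the left half-plane."""
    return all(g < 0 for g in zeros if g != 0.0)

stable_L1 = linearly_stable(zeros_L1)   # True
stable_L5 = linearly_stable(zeros_L5)   # False
```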
Next, we analyze the case  α = 0.04  and  h = 0.6 . We find through computation that the zeroes are approximately  γ ≈ −1.2725 ,  γ ≈ −0.9125 ,  γ ≈ −0.8675 ,  γ ≈ −0.5925 , and  γ = 0 . Since every nonzero value of  γ  with  | M ( γ ) | = 0  is negative, we conclude that this particular front is linearly stable. We show this below in Figure 4.

3.2. Case L2 Analysis

Next, we analyze an example for the L2 case. Recall that this case indicates that we have a traveling front with one conjugate pair of complex eigenvalues and three real eigenvalues and a positive traveling speed c. In this example, we take  α = 0.08  and  h = 0.6 .
We find through computation that the zeroes are approximately  γ ≈ −1.30275 ,  γ ≈ −0.91825 ,  γ ≈ −0.87725 ,  γ ≈ −0.06825 , and  γ = 0 . Since every nonzero value of  γ  with  | M ( γ ) | = 0  is negative, we conclude that this front is linearly stable. We show this below in Figure 5.

3.3. Case L3 Analysis

Next, we analyze an example in the L3 case. Recall that this case indicates that we have a traveling front with two conjugate pairs of complex eigenvalues and one real eigenvalue and a positive traveling speed c. In this example, we will use  α = 0.3  and  h = 0.6 .
We find through computation that the zeroes are approximately  γ ≈ −1.2875 ,  γ ≈ −0.9525 ,  γ ≈ −0.9325 ,  γ ≈ −0.9025 , and  γ = 0 . Since every nonzero value of  γ  with  | M ( γ ) | = 0  is negative, we conclude that this particular front is linearly stable. We show this below in Figure 6.

3.4. Case L4 Analysis

Next, we analyze an example of the L4 case. Recall that this case indicates that we have a traveling front with all real eigenvalues and a negative traveling speed c. Here, we use  α = 0.01  and  h = 0.3 .
In this case, we verify that the zeroes are approximately  γ ≈ −1.32225 ,  γ ≈ −1.21475 ,  γ ≈ −0.86525 ,  γ ≈ −0.55375 , and  γ = 0 . Since every nonzero value of  γ  with  | M ( γ ) | = 0  is negative, we conclude that this particular front is linearly stable. We show this below in Figure 7.

3.5. Case L5 Analysis

Next, we analyze an example of the L5 case. Recall that this case indicates that we have a traveling front with one conjugate pair of complex eigenvalues and three real eigenvalues and a negative traveling speed c. We take  α = 0.8  and  h = 0.2  here.
We find through computation that the zeroes are approximately  γ ≈ 10.8125 ,  γ ≈ −1.6125 ,  γ ≈ −1.4075 ,  γ ≈ −0.8125 , and  γ = 0 . In this case, we find a single positive eigenvalue,  γ ≈ 10.8125 , showing that this particular traveling front is unstable. We show this below in Figure 8.

3.6. Case L6 Analysis

Finally, we analyze an example of the L6 case. Recall that this case indicates that we have a traveling front with two conjugate pairs of complex eigenvalues, one real eigenvalue, and a negative traveling speed c. We use the values  h = 0.55  and  α = 0.65 . On a region lying entirely in the left half of the complex plane, we determine that the eigenfunction is unbounded and therefore does not exist there. However, we also find a positive eigenvalue, which makes the analysis in the left half of the complex plane unnecessary.
We compute the values  γ = 0  and  γ ≈ 7.4325  as zeroes of the Evans function. Since we find a single positive eigenvalue, we conclude that this particular traveling front is unstable. We show this below in Figure 9.

4. Discussion

In this paper, we investigated the stability of the traveling front solutions of (4), allowing the eigenvalue parameter  γ  to range over  ℂ . We began by deriving the eigenvalue problem for the stability analysis via linearization about the front. We then obtained the equivalent ODEs and matching conditions using a differentiation approach.
We continued by investigating the possible forms for the eigenfunction by analyzing the characteristic equations of the ODEs. For the below threshold eigenfunction forms, we were able to obtain an explicit form. In the above threshold case, we had to rely on computations of the eigenvalues of the characteristic equation. Now, having the eigenfunction forms, we then showed that the determinant of the coefficient matrix, formed by the system of the eigenfunction and matching conditions, could serve as the Evans function.
At this point, we used Mathematica to calculate the zero-level curves of the real and imaginary components of the determinant  | M ( γ ) | , in order to find the values of  γ  such that  | M ( γ ) | = 0 . These values of  γ  make up the discrete spectrum, and they indicate the stability of the traveling fronts. The plots were created using Python's matplotlib package. This allowed us to conclude that the examples we investigated in the L1 through L4 cases were linearly stable, while those in the L5 and L6 cases were unstable.
We did not observe any complex-valued  γ  such that  | M ( γ ) | = 0  in any of the cases that we studied in this paper. An interesting future investigation would be to prove that all zeroes of the Evans function are real, as seemed to be suggested here.

Author Contributions

Methodology, Y.G.; validation, Y.G.; formal analysis, D.M. and Y.G.; investigation, D.M. and Y.G.; writing—original draft, D.M.; writing—review & editing, D.M. and Y.G.; visualization, D.M. and Y.G.; supervision, Y.G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the reviewers of this paper for their valuable feedback.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Details of (16c), (16d) in Lemma (1)

We restate the full lemma, before proving (A3) and (A4).
Lemma A1.
(Lemma 1). When analyzing the matching conditions for the equivalent ODEs of the eigenvalue Equation (6), the integral term
\[
I(\xi) = \int_{-\infty}^{\infty} w(\xi - \eta)\,\Theta(u_0(\eta) - h)\,\phi(\eta)\, d\eta ,
\]
has the following matching conditions at  ξ = 0 :
\[
I(0^+) - I(0^-) = 0, \tag{A1}
\]
\[
I'(0^+) - I'(0^-) = 0, \tag{A2}
\]
\[
I''(0^+) - I''(0^-) = 2(1 - aA)\,\phi(0^+), \tag{A3}
\]
\[
I'''(0^+) - I'''(0^-) = 2(1 - aA)\,\phi'(0^+). \tag{A4}
\]
To see (A3), we differentiate  ∂J(ξ, η)/∂ξ  once more, again utilizing the Leibniz integral rule. We find that
\[
\begin{aligned}
\frac{\partial}{\partial \xi}\frac{\partial}{\partial \xi} J(\xi, \eta) &= \frac{\partial^2}{\partial \xi^2} J(\xi, \eta) = \frac{\partial^2}{\partial \xi^2} J_1(\xi, \eta) + \frac{\partial^2}{\partial \xi^2} J_2(\xi, \eta) \\
&= \frac{\partial}{\partial \xi} \int_{-\infty}^{\xi} \frac{\partial}{\partial \xi} F_1(\xi, \eta)\, d\eta + \frac{\partial}{\partial \xi} \int_{\xi}^{\infty} \frac{\partial}{\partial \xi} F_2(\xi, \eta)\, d\eta \\
&= 2(1 - aA)\,\Theta(u_0(\xi) - h)\,\phi(\xi) + \int_{-\infty}^{\xi} \left( a^2 A e^{-a(\xi - \eta)} - e^{-(\xi - \eta)} \right) \Theta(u_0(\eta) - h)\,\phi(\eta)\, d\eta \\
&\quad + \int_{\xi}^{\infty} \left( a^2 A e^{a(\xi - \eta)} - e^{\xi - \eta} \right) \Theta(u_0(\eta) - h)\,\phi(\eta)\, d\eta \\
&= 2(1 - aA)\,\Theta(u_0(\xi) - h)\,\phi(\xi) + \int_{-\infty}^{\xi} G_1(\xi, \eta)\, d\eta + \int_{\xi}^{\infty} G_2(\xi, \eta)\, d\eta .
\end{aligned}
\]
Now, the integral terms will again be continuous for all  ξ ; however, for the term  2(1 − aA)Θ(u₀(ξ) − h)ϕ(ξ) , with the threshold occurring at  ξ = 0 , we have
\[
\lim_{\xi \to 0^-} 2(1 - aA)\,\Theta(u_0(\xi) - h)\,\phi(\xi) = 0, \qquad \lim_{\xi \to 0^+} 2(1 - aA)\,\Theta(u_0(\xi) - h)\,\phi(\xi) = 2(1 - aA)\,\phi(0^+).
\]
As a result, we see that
\[
\frac{\partial^2}{\partial \xi^2} J(0^+, \eta) - \frac{\partial^2}{\partial \xi^2} J(0^-, \eta) = I''(0^+) - I''(0^-) = 2(1 - aA)\,\phi(0^+).
\]
Finally, we show (A4). We differentiate one last time, using both the Leibniz integral rule and the product rule, obtaining
\[
\begin{aligned}
\frac{\partial}{\partial \xi}\frac{\partial^2}{\partial \xi^2} J(\xi, \eta) &= \frac{\partial^3}{\partial \xi^3} J(\xi, \eta) = \frac{\partial^3}{\partial \xi^3} J_1(\xi, \eta) + \frac{\partial^3}{\partial \xi^3} J_2(\xi, \eta) \\
&= \frac{\partial}{\partial \xi}\left[ 2(1 - aA)\,\Theta(u_0(\xi) - h)\,\phi(\xi) \right] + \frac{\partial}{\partial \xi} \int_{-\infty}^{\xi} G_1(\xi, \eta)\, d\eta + \frac{\partial}{\partial \xi} \int_{\xi}^{\infty} G_2(\xi, \eta)\, d\eta \\
&= \frac{\partial}{\partial \xi}\left[ 2(1 - aA)\,\Theta(u_0(\xi) - h)\,\phi(\xi) \right] + G_1(\xi, \xi) + \int_{-\infty}^{\xi} \frac{\partial}{\partial \xi} G_1(\xi, \eta)\, d\eta - G_2(\xi, \xi) + \int_{\xi}^{\infty} \frac{\partial}{\partial \xi} G_2(\xi, \eta)\, d\eta \\
&= 2(1 - aA)\left[ \Theta(u_0(\xi) - h)\,\phi'(\xi) + \delta(u_0(\xi) - h)\, u_0'(\xi)\,\phi(\xi) \right] \\
&\quad + \int_{-\infty}^{\xi} \left( -a^3 A e^{-a(\xi - \eta)} + e^{-(\xi - \eta)} \right) \Theta(u_0(\eta) - h)\,\phi(\eta)\, d\eta + \int_{\xi}^{\infty} \left( a^3 A e^{a(\xi - \eta)} - e^{\xi - \eta} \right) \Theta(u_0(\eta) - h)\,\phi(\eta)\, d\eta ,
\end{aligned}
\]
where the boundary terms  \(G_1(\xi, \xi) = G_2(\xi, \xi) = (a^2 A - 1)\,\Theta(u_0(\xi) - h)\,\phi(\xi)\)  cancel.
After again noting that the integral terms will be continuous, we turn our attention to the term given by
\[
\frac{\partial}{\partial \xi}\left[ 2(1 - aA)\,\Theta(u_0(\xi) - h)\,\phi(\xi) \right] = 2(1 - aA)\,\Theta(u_0(\xi) - h)\,\phi'(\xi) + 2(1 - aA)\,\delta(u_0(\xi) - h)\, u_0'(\xi)\,\phi(\xi).
\]
By observing the limits given by
\[
\begin{aligned}
&\lim_{\xi \to 0^-} 2(1 - aA)\,\Theta(u_0(\xi) - h)\,\phi'(\xi) = 0, \qquad &&\lim_{\xi \to 0^+} 2(1 - aA)\,\Theta(u_0(\xi) - h)\,\phi'(\xi) = 2(1 - aA)\,\phi'(0^+), \\
&\lim_{\xi \to 0^-} 2(1 - aA)\,\delta(u_0(\xi) - h)\, u_0'(\xi)\,\phi(\xi) = 0, \qquad &&\lim_{\xi \to 0^+} 2(1 - aA)\,\delta(u_0(\xi) - h)\, u_0'(\xi)\,\phi(\xi) = 0,
\end{aligned}
\]
we see that
\[
\frac{\partial^3}{\partial \xi^3} J(0^+, \eta) - \frac{\partial^3}{\partial \xi^3} J(0^-, \eta) = I'''(0^+) - I'''(0^-) = 2(1 - aA)\,\phi'(0^+).
\]

Appendix B. Examples of M(γ) Used in the Evans Function in Equation (38)

Due to the complexity and constant recalculation of the matrix  M ( γ )  used in the Evans function from Equation (38), there is not a practical way to give a general example of what this matrix looks like. We will outline and show examples of the code and procedures within Mathematica used to calculate the matrix  M ( γ )  and the relevant parameters needed in the stability analysis.
After clearing any stored variables within the software, we set the values for A, a, h, and  α  based on the criteria of the desired front discussed in [11]. We show an example of this code in Figure A1.
Figure A1. A sample of Mathematica code used to set parameter values.
This allows us to solve a system of equations for which we obtain the front speed c, constant parameters, and eigenvalues. We show an example of this code in Figure A2.
Figure A2. Sample of Mathematica code for solving the system of equations used to obtain traveling front parameters.
From these values, we can form the front solution, separately for above and below threshold. We also assign a variable name for  u ( 0 )  for later use. We show an example of this code in Figure A3.
Figure A3. Example of Mathematica code used for setting the above and below threshold equations for the traveling front  u ( ξ ) .
We show an example of a traveling front in Figure A4:
Figure A4. Example of a traveling front  u ( ξ )  obtained by solving the system of equations in Figure A2. Here, we solve this system in the L1 case where  A = 2.25 a = 1.5 h = 0.6 , and  α = 0.04 .
Next, after setting a desired mesh size and granularity for the calculations in the complex plane, we use a For loop to repeat the necessary calculations at each point of the complex-plane mesh. After finding the roots of the appropriate characteristic equation, (32), for the chosen value of  γ , we assign variable names to the real part, imaginary part, and magnitude of each of the roots. We show an example of this code in Figure A5.
Figure A5. Sample of Mathematica code for solving the characteristic Equation (32).
With a carefully designed series of If-Then commands, we find the expressions for each term in both the above and below threshold eigenfunctions. Depending on the current location of  γ  in the desired mesh, the roots of (32) will change in structure. The If-Then commands allowed for these changes to be accounted for in the formation of the eigenfunction, without the need to know at what precise values these changes took place. We show an example of this code in Figure A6.
Figure A6. Sample of Mathematica code for setting the above threshold and below threshold terms of the eigenfunction when  y > 0 .
Another advantage of this setup is that we can seamlessly integrate our more explicit knowledge of the case when  γ ∈ R . We show an example of this code in Figure A7.
Figure A7. Sample of Mathematica code for setting the above threshold and below threshold terms of the eigenfunction when  y = 0 .
The last section of the If-Then command handles the lower half of the complex plane. We show an example of this code in Figure A8.
Figure A8. Sample of Mathematica code for setting the above threshold and below threshold terms of the eigenfunction when  y < 0 .
Next, we place the above and below threshold terms into the proper eigenfunction equation. Using these, we can assign variable names to the expressions that we will use to equate the matching conditions. We show an example of this code in Figure A9.
Figure A9. Sample of Mathematica code for setting the eigenfunction and matching conditions.
Finally, we can extract the coefficients of the matching conditions equations to give us our coefficient matrix  M ( γ ) . We show an example of this code in Figure A10.
Figure A10. Sample of Mathematica code used to define the coefficient matrix M.
This allows us to conclude each step of the loop by calculating the determinant, with a real and imaginary component, of the matrix at each value of  γ = x + i y  on the chosen mesh. This gives us a computation of the Evans function on the complex plane, allowing us to investigate the stability of the traveling fronts.
The final part of the entire procedure, after the loop is completed for our chosen mesh, is to upload the data into Python. Using the matplotlib package, we were able to plot the zero-level curves of the real and imaginary components of  det ( M ( γ ) )  using a contour plot. We show an example of this code in Figure A11.
Figure A11. Sample of Python code used to generate zero-level curves of  det ( M ( γ ) ) .
Mathematics 11 02202 g0a11
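Independent of the exported figure, the final plotting step takes only a few lines of matplotlib. The following stand-alone sketch substitutes a fabricated determinant array for the values imported from Mathematica (the function used to fill the array is purely illustrative); the two contour calls with `levels=[0]` produce the zero-level curves whose intersections are read off in Section 3.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

# Mesh over gamma = x + i y; in practice the det(M) values are imported
# from Mathematica.  A stand-in function fills the array here.
xs = np.linspace(-1.5, 0.5, 400)
ys = np.linspace(-0.5, 0.5, 400)
X, Y = np.meshgrid(xs, ys)
detM = (X + 1j * Y) * (X + 1j * Y + 1)  # hypothetical determinant values

fig, ax = plt.subplots()
ax.contour(X, Y, detM.real, levels=[0], colors="blue")
ax.contour(X, Y, detM.imag, levels=[0], colors="red")
ax.set_xlabel("Re(gamma)")
ax.set_ylabel("Im(gamma)")
fig.savefig("evans_zero_level.png", dpi=150)
```

Points where a blue curve (Re = 0) crosses a red curve (Im = 0) mark zeroes of the determinant.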
We conclude by presenting a few examples of the matrix  M ( γ )  at specific values of x and y.
L1 Case,  α = 0.04 ,  x = 1 ,  y = −0.5 :
\[
M(1 - 0.5i) = \begin{pmatrix}
1 - i & 1 - i & 1 - i & 1 & 1 \\
1.32114 + 1.65214i & 1.06001 + 0.911133i & 5.7236 - 5.60023i & 1 & 1.5 \\
47.9751 + 46.9909i & 48.5532 + 48.8466i & 81.0207 + 82.4177i & 1 & 2.25 \\
276.92 + 287.549i & 277.833 + 284.017i & 466.364 + 458.802i & 1 & 3.375 \\
1470.96 - 1451.61i & 1469.22 - 1443.98i & 2460.21 - 2501.06i & 1 & 5.0625
\end{pmatrix}
\]
L1 Case,  α = 0.04 ,  x = 0.5 ,  y = 0.25 :
\[
M(0.5 + 0.25i) = \begin{pmatrix}
1 + i & 1 + i & 1 + i & 1 & 1 \\
1.43294 + 1.45203i & 1.03306 - 1.02289i & 14.1517 - 19.8124i & 1 & 1.5 \\
47.6126 - 47.5575i & 48.5986 - 48.6195i & 184.249 - 376.509i & 1 & 2.25 \\
707.947 - 989.208i & 706.108 - 987.217i & 2063.28 - 6915.38i & 1 & 3.375 \\
9311.89 - 18872.5i & 9315.82 - 18876.8i & 15465.8 - 123277i & 1 & 5.0625
\end{pmatrix}
\]
L4 Case,  α = 0.01 ,  x = 0.5 ,  y = −0.25 :
\[
M(0.5 - 0.25i) = \begin{pmatrix}
1 - i & 1 - i & 1 - i & 1 & 1 \\
5.80977 + 8.13926i & 1.52179 + 1.51423i & 0.991408 + 0.994117i & 1 & 1.5 \\
56.3488 - 88.843i & 27.6245 - 27.6016i & 26.2916 - 26.2969i & 1 & 2.25 \\
289.898 + 685.56i & 150.934 + 209.757i & 148.384 + 207.268i & 1 & 3.375 \\
1295.92 - 5191.84i & 864.781 - 1686.96i & 860.267 - 1682.56i & 1 & 5.0625
\end{pmatrix}
\]
L4 Case,  α = 0.01 ,  x = 1.5 ,  y = 0.25 :
\[
M(1.5 + 0.25i) = \begin{pmatrix}
1 + i & 1 + i & 1 - i & 1 & 1 \\
1.48242 - 1.466i & 1.00823 - 1.01806i & 3.48945 - 1.16315i & 1 & 1.5 \\
27.5061 + 27.4577i & 26.3252 + 26.3451i & 9.47041 + 1.35292i & 1 & 2.25 \\
84.8352 + 26.0667i & 87.0675 + 28.1618i & 20.4573 + 14.1628i & 1 & 3.375 \\
314.332 + 40.7085i & 310.433 + 37.0662i & 31.1165 + 56.7418i & 1 & 5.0625
\end{pmatrix}
\]

Appendix C. Explicit Matrix: L1 Case, α = 0, γ ∈ R

A closed-form expression for the matrix  M ( γ )  when  α = 0  can be found, although it is quite messy. All calculations were performed by hand and then verified in Mathematica.
Our first step involves analyzing the simplified ODEs that result from setting  α = 0 . As a result, (7) (which remains unchanged) and (12) both become
\[
0 = c\,\phi^{(v)} - (\gamma + 1)\,\phi^{(iv)} - c(a^2 + 1)\,\phi''' + (\gamma + 1)(a^2 + 1)\,\phi'' + c a^2\,\phi' - a^2(\gamma + 1)\,\phi ,
\]
which means that the ODE is the same on both  ( −∞, 0 )  and  ( 0, ∞ ) . In addition, due to the matching condition  ϕ ( 0⁺ ) − ϕ ( 0⁻ ) = 0 , this holds on all of  ξ ∈ R . We find that the characteristic values of this ODE are  ±1 ,  ±a , and  ( γ + 1 ) / c . In order to have a solution that converges to zero at  ±∞ , our solution on  ( −∞, 0 )  must have one of the following forms:
\[
\phi_-(\xi) = s_1 e^{\frac{\gamma+1}{c}\xi} + s_2 e^{a\xi} + s_3 e^{\xi}, \qquad \xi < 0, \quad \tfrac{\gamma+1}{c} > 0,
\]
\[
\phi_-(\xi) = s_1 e^{a\xi} + s_2\, \xi e^{a\xi} + s_3 e^{\xi}, \qquad \xi < 0, \quad \tfrac{\gamma+1}{c} = a,
\]
\[
\phi_-(\xi) = s_1 e^{a\xi} + s_2 e^{\xi} + s_3\, \xi e^{\xi}, \qquad \xi < 0, \quad \tfrac{\gamma+1}{c} = 1,
\]
\[
\phi_-(\xi) = s_4 e^{a\xi} + s_5 e^{\xi}, \qquad \xi < 0, \quad \tfrac{\gamma+1}{c} < 0,
\]
and on  ( 0, ∞ ) , it must have one of the following forms:
\[
\phi_+(\xi) = s_1 e^{\frac{\gamma+1}{c}\xi} + s_2 e^{-a\xi} + s_3 e^{-\xi}, \qquad \xi > 0, \quad \tfrac{\gamma+1}{c} < 0,
\]
\[
\phi_+(\xi) = s_1 e^{-a\xi} + s_2\, \xi e^{-a\xi} + s_3 e^{-\xi}, \qquad \xi > 0, \quad \tfrac{\gamma+1}{c} = -a,
\]
\[
\phi_+(\xi) = s_1 e^{-a\xi} + s_2 e^{-\xi} + s_3\, \xi e^{-\xi}, \qquad \xi > 0, \quad \tfrac{\gamma+1}{c} = -1,
\]
\[
\phi_+(\xi) = s_4 e^{-a\xi} + s_5 e^{-\xi}, \qquad \xi > 0, \quad \tfrac{\gamma+1}{c} > 0 .
\]
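The characteristic values quoted above can be checked numerically. In the sketch below we use illustrative parameter values (c = 1, a = 1.5, γ = 0.2, chosen only so that all five roots are distinct; they do not correspond to a computed front), form the characteristic polynomial of the ODE, and recover the roots ±1, ±a, and (γ + 1)/c:

```python
import numpy as np

# Illustrative values only (not from a computed front)
c, a, gamma = 1.0, 1.5, 0.2

# Coefficients of the characteristic polynomial, degree 5 down to 0:
# c r^5 - (gamma+1) r^4 - c(a^2+1) r^3 + (gamma+1)(a^2+1) r^2
#   + c a^2 r - a^2 (gamma+1)
coeffs = [c,
          -(gamma + 1),
          -c * (a**2 + 1),
          (gamma + 1) * (a**2 + 1),
          c * a**2,
          -a**2 * (gamma + 1)]

roots = np.sort(np.roots(coeffs).real)
# Expected: -a, -1, 1, (gamma + 1)/c, a  ->  [-1.5, -1.0, 1.0, 1.2, 1.5]
```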
The only term that could potentially be complex-valued is the one containing  e^{((γ+1)/c)ξ} . If  γ ∈ R , we can leave the term as is; if  γ = x + iy ∈ ℂ  with  y ≠ 0 , then we must use the previous analysis to write
\[
\begin{aligned}
s_1 e^{\frac{\gamma+1}{c}\xi} &= (R_1 + I_1 i)\, e^{\frac{x+1}{c}\xi} \left( \cos\!\left(\tfrac{y}{c}\xi\right) + i \sin\!\left(\tfrac{y}{c}\xi\right) \right) \\
&= e^{\frac{x+1}{c}\xi} \left( R_1 \cos\!\left(\tfrac{y}{c}\xi\right) - I_1 \sin\!\left(\tfrac{y}{c}\xi\right) \right) + i\, e^{\frac{x+1}{c}\xi} \left( R_1 \sin\!\left(\tfrac{y}{c}\xi\right) + I_1 \cos\!\left(\tfrac{y}{c}\xi\right) \right),
\end{aligned}
\]
again, with  s 1 = R 1 + I 1 i . We will omit the case when  γ ∈ ℂ  in this paper. In addition, we will omit the details of the cases when  ( γ + 1 ) / c = ±1, ±a . In practice, it is extremely rare to select a value of  γ  that satisfies one of these equalities, and the analysis would be similar in scope to that in the following subsections. We also recall the matching conditions, which will be needed in the analysis:
\[
\begin{aligned}
\phi(0^+) - \phi(0^-) &= 0, \\
\phi'(0^+) - \phi'(0^-) &= 0, \\
\phi''(0^+) - \phi''(0^-) &= \frac{2(aA - 1)\,\phi(0)}{c\, u_0'(0)}, \\
\phi'''(0^+) - \phi'''(0^-) &= \frac{2(\gamma + 1)(aA - 1)\,\phi(0)}{c^2\, u_0'(0)} - \frac{2\alpha}{c}(1 - aA)\,\phi(0), \\
\phi^{iv}(0^+) - \phi^{iv}(0^-) &= \left[ \frac{2(\gamma + 1)^2 (aA - 1)}{c^3\, u_0'(0)} + \frac{2(a^3 A - 1)}{c\, u_0'(0)} - \frac{2\alpha(\gamma + 1)(1 - aA)}{c^2} \right]\phi(0) - \frac{2\alpha}{c}(1 - aA)\,\phi'(0).
\end{aligned}
\]
If  γ ∈ R  and  ( γ + 1 ) / c > 0 , then analyzing the matching condition at zero provides us with
\[
\begin{aligned}
0 = \phi_+(0^+) - \phi_-(0^-) &= \lim_{\xi \to 0^+} \left[ s_4 e^{-a\xi} + s_5 e^{-\xi} \right] - \lim_{\xi \to 0^-} \left[ s_1 e^{\frac{\gamma+1}{c}\xi} + s_2 e^{a\xi} + s_3 e^{\xi} \right] \\
&= -s_1 - s_2 - s_3 + s_4 + s_5 ,
\end{aligned}
\]
and if  ( γ + 1 ) / c < 0 , then we get
\[
\begin{aligned}
0 = \phi_+(0^+) - \phi_-(0^-) &= \lim_{\xi \to 0^+} \left[ s_1 e^{\frac{\gamma+1}{c}\xi} + s_2 e^{-a\xi} + s_3 e^{-\xi} \right] - \lim_{\xi \to 0^-} \left[ s_4 e^{a\xi} + s_5 e^{\xi} \right] \\
&= s_1 + s_2 + s_3 - s_4 - s_5 .
\end{aligned}
\]
Analyzing the matching condition at zero for the first derivative provides us with, for  ( γ + 1 ) / c > 0 ,
\[
\begin{aligned}
0 = \phi_+'(0^+) - \phi_-'(0^-) &= \lim_{\xi \to 0^+} \left[ -a s_4 e^{-a\xi} - s_5 e^{-\xi} \right] - \lim_{\xi \to 0^-} \left[ \tfrac{\gamma+1}{c} s_1 e^{\frac{\gamma+1}{c}\xi} + a s_2 e^{a\xi} + s_3 e^{\xi} \right] \\
&= -\tfrac{\gamma+1}{c} s_1 - a s_2 - s_3 - a s_4 - s_5 ,
\end{aligned}
\]
and if  ( γ + 1 ) / c < 0 , then we get
\[
\begin{aligned}
0 = \phi_+'(0^+) - \phi_-'(0^-) &= \lim_{\xi \to 0^+} \left[ \tfrac{\gamma+1}{c} s_1 e^{\frac{\gamma+1}{c}\xi} - a s_2 e^{-a\xi} - s_3 e^{-\xi} \right] - \lim_{\xi \to 0^-} \left[ a s_4 e^{a\xi} + s_5 e^{\xi} \right] \\
&= \tfrac{\gamma+1}{c} s_1 - a s_2 - s_3 - a s_4 - s_5 .
\end{aligned}
\]
For the remaining matching conditions, we must incorporate nonzero jump values in order to obtain our system. In addition, we note that, since  ϕ ( 0⁻ ) = ϕ ( 0⁺ ) , we may use either  ϕ₋  or  ϕ₊  to evaluate this quantity. For consistency, we will always use  ϕ ( 0⁺ ) = ϕ₊ ( 0 ) = s₄ + s₅  when  ( γ + 1 ) / c > 0  and  ϕ ( 0⁻ ) = ϕ₋ ( 0 ) = s₄ + s₅  when  ( γ + 1 ) / c < 0 .
Analyzing the matching condition at zero for the second derivative provides us with, for  ( γ + 1 ) / c > 0 ,
\[
\begin{aligned}
\frac{2(aA-1)\,\phi(0)}{c\,u_0'(0)} &= \frac{2(aA-1)}{c\,u_0'(0)} s_4 + \frac{2(aA-1)}{c\,u_0'(0)} s_5 = \phi_+''(0^+) - \phi_-''(0^-) \\
&= \lim_{\xi \to 0^+} \left[ a^2 s_4 e^{-a\xi} + s_5 e^{-\xi} \right] - \lim_{\xi \to 0^-} \left[ \left(\tfrac{\gamma+1}{c}\right)^2 s_1 e^{\frac{\gamma+1}{c}\xi} + a^2 s_2 e^{a\xi} + s_3 e^{\xi} \right] \\
&= -\left(\tfrac{\gamma+1}{c}\right)^2 s_1 - a^2 s_2 - s_3 + a^2 s_4 + s_5 \\
\Longrightarrow \quad 0 &= -\left(\tfrac{\gamma+1}{c}\right)^2 s_1 - a^2 s_2 - s_3 + \left( a^2 - \frac{2(aA-1)}{c\,u_0'(0)} \right) s_4 + \left( 1 - \frac{2(aA-1)}{c\,u_0'(0)} \right) s_5 ,
\end{aligned}
\]
and if  ( γ + 1 ) / c < 0 , then we get
\[
\begin{aligned}
\frac{2(aA-1)\,\phi(0)}{c\,u_0'(0)} &= \frac{2(aA-1)}{c\,u_0'(0)} s_4 + \frac{2(aA-1)}{c\,u_0'(0)} s_5 = \phi_+''(0^+) - \phi_-''(0^-) \\
&= \lim_{\xi \to 0^+} \left[ \left(\tfrac{\gamma+1}{c}\right)^2 s_1 e^{\frac{\gamma+1}{c}\xi} + a^2 s_2 e^{-a\xi} + s_3 e^{-\xi} \right] - \lim_{\xi \to 0^-} \left[ a^2 s_4 e^{a\xi} + s_5 e^{\xi} \right] \\
&= \left(\tfrac{\gamma+1}{c}\right)^2 s_1 + a^2 s_2 + s_3 - a^2 s_4 - s_5 \\
\Longrightarrow \quad 0 &= \left(\tfrac{\gamma+1}{c}\right)^2 s_1 + a^2 s_2 + s_3 - \left( a^2 + \frac{2(aA-1)}{c\,u_0'(0)} \right) s_4 - \left( 1 + \frac{2(aA-1)}{c\,u_0'(0)} \right) s_5 .
\end{aligned}
\]
Analyzing the matching condition at zero for the third derivative provides us with, for  ( γ + 1 ) / c > 0 ,
\[
\begin{aligned}
&\frac{2(\gamma+1)(aA-1)\,\phi(0)}{c^2\,u_0'(0)} - \frac{2\alpha}{c}(1-aA)\,\phi(0) \\
&\quad = \left[ \frac{2(\gamma+1)(aA-1)}{c^2\,u_0'(0)} - \frac{2\alpha}{c}(1-aA) \right] s_4 + \left[ \frac{2(\gamma+1)(aA-1)}{c^2\,u_0'(0)} - \frac{2\alpha}{c}(1-aA) \right] s_5 = \phi_+'''(0^+) - \phi_-'''(0^-) \\
&\quad = \lim_{\xi \to 0^+} \left[ -a^3 s_4 e^{-a\xi} - s_5 e^{-\xi} \right] - \lim_{\xi \to 0^-} \left[ \left(\tfrac{\gamma+1}{c}\right)^3 s_1 e^{\frac{\gamma+1}{c}\xi} + a^3 s_2 e^{a\xi} + s_3 e^{\xi} \right] \\
&\quad = -\left(\tfrac{\gamma+1}{c}\right)^3 s_1 - a^3 s_2 - s_3 - a^3 s_4 - s_5 \\
&\Longrightarrow \quad 0 = -\left(\tfrac{\gamma+1}{c}\right)^3 s_1 - a^3 s_2 - s_3 - \left( a^3 + \frac{2(\gamma+1)(aA-1)}{c^2\,u_0'(0)} - \frac{2\alpha}{c}(1-aA) \right) s_4 - \left( 1 + \frac{2(\gamma+1)(aA-1)}{c^2\,u_0'(0)} - \frac{2\alpha}{c}(1-aA) \right) s_5 ,
\end{aligned}
\]
and if  ( γ + 1 ) / c < 0 , then we obtain
\[
\begin{aligned}
&\frac{2(\gamma+1)(aA-1)\,\phi(0)}{c^2\,u_0'(0)} - \frac{2\alpha}{c}(1-aA)\,\phi(0) \\
&\quad = \left[ \frac{2(\gamma+1)(aA-1)}{c^2\,u_0'(0)} - \frac{2\alpha}{c}(1-aA) \right] s_4 + \left[ \frac{2(\gamma+1)(aA-1)}{c^2\,u_0'(0)} - \frac{2\alpha}{c}(1-aA) \right] s_5 = \phi_+'''(0^+) - \phi_-'''(0^-) \\
&\quad = \lim_{\xi \to 0^+} \left[ \left(\tfrac{\gamma+1}{c}\right)^3 s_1 e^{\frac{\gamma+1}{c}\xi} - a^3 s_2 e^{-a\xi} - s_3 e^{-\xi} \right] - \lim_{\xi \to 0^-} \left[ a^3 s_4 e^{a\xi} + s_5 e^{\xi} \right] \\
&\quad = \left(\tfrac{\gamma+1}{c}\right)^3 s_1 - a^3 s_2 - s_3 - a^3 s_4 - s_5 \\
&\Longrightarrow \quad 0 = \left(\tfrac{\gamma+1}{c}\right)^3 s_1 - a^3 s_2 - s_3 - \left( a^3 + \frac{2(\gamma+1)(aA-1)}{c^2\,u_0'(0)} - \frac{2\alpha}{c}(1-aA) \right) s_4 - \left( 1 + \frac{2(\gamma+1)(aA-1)}{c^2\,u_0'(0)} - \frac{2\alpha}{c}(1-aA) \right) s_5 .
\end{aligned}
\]
Finally, we note the additional fact that, since  ϕ ′ ( 0⁺ ) = ϕ ′ ( 0⁻ ) , we will take  ϕ ′ ( 0⁺ ) = ϕ₊ ′ ( 0 ) = −a s₄ − s₅  when  ( γ + 1 ) / c > 0  and  ϕ ′ ( 0⁻ ) = ϕ₋ ′ ( 0 ) = a s₄ + s₅  when  ( γ + 1 ) / c < 0 .
Analyzing the matching condition at zero for the fourth derivative provides us with, for  ( γ + 1 ) / c > 0 ,
\[
\begin{aligned}
&\left[ \frac{2(\gamma+1)^2(aA-1)}{c^3\,u_0'(0)} + \frac{2(a^3 A - 1)}{c\,u_0'(0)} - \frac{2\alpha(\gamma+1)(1-aA)}{c^2} \right] \phi(0) - \frac{2\alpha}{c}(1-aA)\,\phi'(0) \\
&\quad = \left[ \frac{2(\gamma+1)^2(aA-1)}{c^3\,u_0'(0)} + \frac{2(a^3 A - 1)}{c\,u_0'(0)} - \frac{2\alpha(\gamma+1)(1-aA)}{c^2} + \frac{2a\alpha}{c}(1-aA) \right] s_4 \\
&\qquad + \left[ \frac{2(\gamma+1)^2(aA-1)}{c^3\,u_0'(0)} + \frac{2(a^3 A - 1)}{c\,u_0'(0)} - \frac{2\alpha(\gamma+1)(1-aA)}{c^2} + \frac{2\alpha}{c}(1-aA) \right] s_5 = \phi_+^{iv}(0^+) - \phi_-^{iv}(0^-) \\
&\quad = \lim_{\xi \to 0^+} \left[ a^4 s_4 e^{-a\xi} + s_5 e^{-\xi} \right] - \lim_{\xi \to 0^-} \left[ \left(\tfrac{\gamma+1}{c}\right)^4 s_1 e^{\frac{\gamma+1}{c}\xi} + a^4 s_2 e^{a\xi} + s_3 e^{\xi} \right] \\
&\quad = -\left(\tfrac{\gamma+1}{c}\right)^4 s_1 - a^4 s_2 - s_3 + a^4 s_4 + s_5 \\
&\Longrightarrow \quad 0 = -\left(\tfrac{\gamma+1}{c}\right)^4 s_1 - a^4 s_2 - s_3 \\
&\qquad + \left[ a^4 - \frac{2(\gamma+1)^2(aA-1)}{c^3\,u_0'(0)} - \frac{2(a^3 A - 1)}{c\,u_0'(0)} + \frac{2\alpha(\gamma+1)(1-aA)}{c^2} - \frac{2a\alpha}{c}(1-aA) \right] s_4 \\
&\qquad + \left[ 1 - \frac{2(\gamma+1)^2(aA-1)}{c^3\,u_0'(0)} - \frac{2(a^3 A - 1)}{c\,u_0'(0)} + \frac{2\alpha(\gamma+1)(1-aA)}{c^2} - \frac{2\alpha}{c}(1-aA) \right] s_5 ,
\end{aligned}
\]
and if  ( γ + 1 ) / c < 0 , then we have
\[
\begin{aligned}
&\left[ \frac{2(\gamma+1)^2(aA-1)}{c^3\,u_0'(0)} + \frac{2(a^3 A - 1)}{c\,u_0'(0)} - \frac{2\alpha(\gamma+1)(1-aA)}{c^2} \right] \phi(0) - \frac{2\alpha}{c}(1-aA)\,\phi'(0) \\
&\quad = \left[ \frac{2(\gamma+1)^2(aA-1)}{c^3\,u_0'(0)} + \frac{2(a^3 A - 1)}{c\,u_0'(0)} - \frac{2\alpha(\gamma+1)(1-aA)}{c^2} - \frac{2a\alpha}{c}(1-aA) \right] s_4 \\
&\qquad + \left[ \frac{2(\gamma+1)^2(aA-1)}{c^3\,u_0'(0)} + \frac{2(a^3 A - 1)}{c\,u_0'(0)} - \frac{2\alpha(\gamma+1)(1-aA)}{c^2} - \frac{2\alpha}{c}(1-aA) \right] s_5 = \phi_+^{iv}(0^+) - \phi_-^{iv}(0^-) \\
&\quad = \lim_{\xi \to 0^+} \left[ \left(\tfrac{\gamma+1}{c}\right)^4 s_1 e^{\frac{\gamma+1}{c}\xi} + a^4 s_2 e^{-a\xi} + s_3 e^{-\xi} \right] - \lim_{\xi \to 0^-} \left[ a^4 s_4 e^{a\xi} + s_5 e^{\xi} \right] \\
&\quad = \left(\tfrac{\gamma+1}{c}\right)^4 s_1 + a^4 s_2 + s_3 - a^4 s_4 - s_5 \\
&\Longrightarrow \quad 0 = \left(\tfrac{\gamma+1}{c}\right)^4 s_1 + a^4 s_2 + s_3 \\
&\qquad - \left[ a^4 + \frac{2(\gamma+1)^2(aA-1)}{c^3\,u_0'(0)} + \frac{2(a^3 A - 1)}{c\,u_0'(0)} - \frac{2\alpha(\gamma+1)(1-aA)}{c^2} - \frac{2a\alpha}{c}(1-aA) \right] s_4 \\
&\qquad - \left[ 1 + \frac{2(\gamma+1)^2(aA-1)}{c^3\,u_0'(0)} + \frac{2(a^3 A - 1)}{c\,u_0'(0)} - \frac{2\alpha(\gamma+1)(1-aA)}{c^2} - \frac{2\alpha}{c}(1-aA) \right] s_5 .
\end{aligned}
\]
At this point, we have the following system when  ( γ + 1 ) / c > 0 :
\[
\begin{aligned}
0 &= -s_1 - s_2 - s_3 + s_4 + s_5, \\
0 &= -\tfrac{\gamma+1}{c} s_1 - a s_2 - s_3 - a s_4 - s_5, \\
0 &= -\left(\tfrac{\gamma+1}{c}\right)^2 s_1 - a^2 s_2 - s_3 + \left( a^2 - \frac{2(aA-1)}{c\,u_0'(0)} \right) s_4 + \left( 1 - \frac{2(aA-1)}{c\,u_0'(0)} \right) s_5, \\
0 &= -\left(\tfrac{\gamma+1}{c}\right)^3 s_1 - a^3 s_2 - s_3 - \left( a^3 + \frac{2(\gamma+1)(aA-1)}{c^2\,u_0'(0)} - \frac{2\alpha}{c}(1-aA) \right) s_4 - \left( 1 + \frac{2(\gamma+1)(aA-1)}{c^2\,u_0'(0)} - \frac{2\alpha}{c}(1-aA) \right) s_5, \\
0 &= -\left(\tfrac{\gamma+1}{c}\right)^4 s_1 - a^4 s_2 - s_3 + \left[ a^4 - \frac{2(\gamma+1)^2(aA-1)}{c^3\,u_0'(0)} - \frac{2(a^3 A - 1)}{c\,u_0'(0)} + \frac{2\alpha(\gamma+1)(1-aA)}{c^2} - \frac{2a\alpha}{c}(1-aA) \right] s_4 \\
&\qquad + \left[ 1 - \frac{2(\gamma+1)^2(aA-1)}{c^3\,u_0'(0)} - \frac{2(a^3 A - 1)}{c\,u_0'(0)} + \frac{2\alpha(\gamma+1)(1-aA)}{c^2} - \frac{2\alpha}{c}(1-aA) \right] s_5,
\end{aligned} \tag{A22}
\]
and when  ( γ + 1 ) / c < 0 :
\[
\begin{aligned}
0 &= s_1 + s_2 + s_3 - s_4 - s_5, \\
0 &= \tfrac{\gamma+1}{c} s_1 - a s_2 - s_3 - a s_4 - s_5, \\
0 &= \left(\tfrac{\gamma+1}{c}\right)^2 s_1 + a^2 s_2 + s_3 - \left( a^2 + \frac{2(aA-1)}{c\,u_0'(0)} \right) s_4 - \left( 1 + \frac{2(aA-1)}{c\,u_0'(0)} \right) s_5, \\
0 &= \left(\tfrac{\gamma+1}{c}\right)^3 s_1 - a^3 s_2 - s_3 - \left( a^3 + \frac{2(\gamma+1)(aA-1)}{c^2\,u_0'(0)} - \frac{2\alpha}{c}(1-aA) \right) s_4 - \left( 1 + \frac{2(\gamma+1)(aA-1)}{c^2\,u_0'(0)} - \frac{2\alpha}{c}(1-aA) \right) s_5, \\
0 &= \left(\tfrac{\gamma+1}{c}\right)^4 s_1 + a^4 s_2 + s_3 - \left[ a^4 + \frac{2(\gamma+1)^2(aA-1)}{c^3\,u_0'(0)} + \frac{2(a^3 A - 1)}{c\,u_0'(0)} - \frac{2\alpha(\gamma+1)(1-aA)}{c^2} - \frac{2a\alpha}{c}(1-aA) \right] s_4 \\
&\qquad - \left[ 1 + \frac{2(\gamma+1)^2(aA-1)}{c^3\,u_0'(0)} + \frac{2(a^3 A - 1)}{c\,u_0'(0)} - \frac{2\alpha(\gamma+1)(1-aA)}{c^2} - \frac{2\alpha}{c}(1-aA) \right] s_5 .
\end{aligned} \tag{A23}
\]
We can now write a matrix equation of the form given in (38), specifically:
M ( γ ) s = 0 .
As demonstrated above, we can write  M ( γ )  for (A22) as
\[
M(\gamma) = \begin{pmatrix}
-1 & -1 & -1 & 1 & 1 \\
-\tfrac{\gamma+1}{c} & -a & -1 & -a & -1 \\
-\left(\tfrac{\gamma+1}{c}\right)^2 & -a^2 & -1 & a^2 - \tfrac{2(aA-1)}{c\,u_0'(0)} & 1 - \tfrac{2(aA-1)}{c\,u_0'(0)} \\
-\left(\tfrac{\gamma+1}{c}\right)^3 & -a^3 & -1 & -a^3 - \tfrac{2(\gamma+1)(aA-1)}{c^2\,u_0'(0)} + \tfrac{2\alpha}{c}(1-aA) & -1 - \tfrac{2(\gamma+1)(aA-1)}{c^2\,u_0'(0)} + \tfrac{2\alpha}{c}(1-aA) \\
-\left(\tfrac{\gamma+1}{c}\right)^4 & -a^4 & -1 & M_{5,4} & M_{5,5}
\end{pmatrix},
\]
where
\[
M_{5,4} = a^4 - \frac{2(\gamma+1)^2(aA-1)}{c^3\,u_0'(0)} - \frac{2(a^3 A - 1)}{c\,u_0'(0)} + \frac{2\alpha(\gamma+1)(1-aA)}{c^2} - \frac{2a\alpha}{c}(1-aA),
\]
and
\[
M_{5,5} = 1 - \frac{2(\gamma+1)^2(aA-1)}{c^3\,u_0'(0)} - \frac{2(a^3 A - 1)}{c\,u_0'(0)} + \frac{2\alpha(\gamma+1)(1-aA)}{c^2} - \frac{2\alpha}{c}(1-aA).
\]
We can write  M ( γ )  for (A23) as
\[
M(\gamma) = \begin{pmatrix}
1 & 1 & 1 & -1 & -1 \\
\tfrac{\gamma+1}{c} & -a & -1 & -a & -1 \\
\left(\tfrac{\gamma+1}{c}\right)^2 & a^2 & 1 & -a^2 - \tfrac{2(aA-1)}{c\,u_0'(0)} & -1 - \tfrac{2(aA-1)}{c\,u_0'(0)} \\
\left(\tfrac{\gamma+1}{c}\right)^3 & -a^3 & -1 & -a^3 - \tfrac{2(\gamma+1)(aA-1)}{c^2\,u_0'(0)} + \tfrac{2\alpha}{c}(1-aA) & -1 - \tfrac{2(\gamma+1)(aA-1)}{c^2\,u_0'(0)} + \tfrac{2\alpha}{c}(1-aA) \\
\left(\tfrac{\gamma+1}{c}\right)^4 & a^4 & 1 & M_{5,4} & M_{5,5}
\end{pmatrix},
\]
where
\[
M_{5,4} = -\left( a^4 + \frac{2(\gamma+1)^2(aA-1)}{c^3\,u_0'(0)} + \frac{2(a^3 A - 1)}{c\,u_0'(0)} - \frac{2\alpha(\gamma+1)(1-aA)}{c^2} - \frac{2a\alpha}{c}(1-aA) \right),
\]
and
\[
M_{5,5} = -\left( 1 + \frac{2(\gamma+1)^2(aA-1)}{c^3\,u_0'(0)} + \frac{2(a^3 A - 1)}{c\,u_0'(0)} - \frac{2\alpha(\gamma+1)(1-aA)}{c^2} - \frac{2\alpha}{c}(1-aA) \right).
\]
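For concreteness, the (γ + 1)/c > 0 branch of this matrix is easy to assemble and evaluate numerically. In the sketch below, the values of c and u₀′(0) are illustrative placeholders (in practice they come from the computed front, which we do not reproduce here), and the symbol groupings B, C, D, E, F, G are shorthand introduced in this sketch only, not notation from the text:

```python
import numpy as np

def M_matrix(gamma, c, a, A, alpha, u0p0):
    """Coefficient matrix for the (gamma+1)/c > 0 branch (Appendix C)."""
    k = (gamma + 1) / c
    B = 2 * (a * A - 1) / (c * u0p0)                    # phi'' jump term
    C = 2 * (gamma + 1) * (a * A - 1) / (c**2 * u0p0)   # phi''' jump term
    D = 2 * alpha / c * (1 - a * A)
    E = 2 * (gamma + 1)**2 * (a * A - 1) / (c**3 * u0p0)
    F = 2 * (a**3 * A - 1) / (c * u0p0)
    G = 2 * alpha * (gamma + 1) * (1 - a * A) / c**2
    return np.array([
        [-1,    -1,    -1, 1,                          1],
        [-k,    -a,    -1, -a,                         -1],
        [-k**2, -a**2, -1, a**2 - B,                   1 - B],
        [-k**3, -a**3, -1, -(a**3 + C - D),            -(1 + C - D)],
        [-k**4, -a**4, -1, a**4 - E - F + G - a * D,   1 - E - F + G - D],
    ])

# Illustrative parameter values only; c and u0'(0) would come from the front.
detM = np.linalg.det(M_matrix(gamma=0.5, c=1.0, a=1.5, A=2.25,
                              alpha=0.0, u0p0=0.4))
```

Scanning γ over a mesh and recording det M(γ) at each point reproduces the Evans-function computation described in Appendix B.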
The analysis for when  γ ∈ ℂ  with  y ≠ 0  follows similarly, although it becomes even messier in detail. We have chosen to omit this derivation from the paper, due to its added length, despite the similar approach we would use.

References

  1. Bai, L.; Huang, X.; Yang, Q.; Wu, J.Y. Spatiotemporal patterns of an evoked network oscillation in neocortical slices: Coupled local oscillators. J. Neurophysiol. 2006, 96, 2528–2538. [Google Scholar] [CrossRef] [PubMed]
  2. Connors, B.W.; Amitai, Y. Generation of epileptiform discharge by local circuits of neocortex. In Epilepsy: Models, Mechanisms, and Concepts; Schwartkroin, P.A., Ed.; Cambridge University Press: Cambridge, UK, 1993; pp. 388–423. [Google Scholar]
  3. Chervin, R.D.; Pierce, P.A.; Connors, B.W. Periodicity and directionality in the propagation of epileptiform discharges across neocortex. J. Neurophysiol. 1988, 60, 1695–1713. [Google Scholar] [CrossRef] [PubMed]
  4. Guo, Y.; Chow, C. Existence and stability of standing pulses in neural networks: I. Existence. SIAM J. Appl. Dyn. Syst. 2005, 4, 217–248. [Google Scholar] [CrossRef] [Green Version]
  5. Pfurtscheller, G.; Graimann, B.; Huggins, J.E.; Levine, S.P.; Schuh, L.A. Spatiotemporal patterns of beta desynchronization and gamma synchronization in corticographic data during self-paced movement. Clin. Neurophysiol. 2003, 114, 1226–1236. [Google Scholar] [CrossRef]
  6. Pfurtscheller, G.; Neuper, C.; Brunner, C.; da Silva, F.L. Beta rebound after different types of motor imagery in man. Neurosci. Lett. 2005, 378, 156–159. [Google Scholar] [CrossRef]
  7. Sandstede, B. Evans functions and nonlinear stability of travelling waves in neuronal network models. Int. J. Bifurc. Chaos 2007, 17, 2693–2704. [Google Scholar] [CrossRef]
  8. Amari, S. Dynamics of Pattern Formation in Lateral-Inhibition Type Neural Fields. Biol. Cybern. 1977, 27, 77–87. [Google Scholar] [CrossRef]
  9. Ermentrout, G.B.; McLeod, B.J. Existence and uniqueness of traveling waves for a neural network. Proc. R. Soc. Edinb. Sect. A Math. 1993, 123A, 461–478. [Google Scholar] [CrossRef]
  10. Zhang, L. On stability of traveling wave solutions in synaptically coupled neuronal networks. Differ. Integral Equ. 2003, 16, 513–536. [Google Scholar] [CrossRef]
  11. Guo, Y. Existence and Stability of Traveling Fronts in a Lateral Inhibition Neural Network. SIAM J. Appl. Dyn. Syst. 2012, 11, 1543–1582. [Google Scholar] [CrossRef] [Green Version]
  12. Coombes, S.; Owen, M.R. Evans functions for integral neural field equations with Heaviside firing rate function. SIAM J. Appl. Dyn. Syst. 2004, 3, 574–600. [Google Scholar] [CrossRef] [Green Version]
  13. Coombes, S.; Schmidt, H.; Bojak, I. Interface dynamics in planar neural field models. J. Math. Neurosci. 2012, 2, 1–27. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Coombes, S.; Thul, R.; Laing, C. Neural Field Models with Threshold Noise. J. Math. Neurosci. 2016, 6, 3. [Google Scholar]
  15. Coombes, S.; Avitabile, D.; Gokce, A. The Dynamics of Neural Fields on Bounded Domains: An Interface Approach for Dirichlet Boundary Conditions. J. Math. Neurosci. 2017, 7, 12. [Google Scholar]
  16. Cook, B.; Peterson, A.; Woldman, W.; Terry, J. Neural Field Models: A mathematical overview and unifying framework. Math Neuro. Appl. 2022, 2, 1–67. [Google Scholar] [CrossRef]
  17. Qin, Z.; Fu, Q.; Jin, D.; Peng, J. A Looming Perception Model Based on Dynamic Neural Field. 2022. Available online: https://ssrn.com/abstract=4248213 (accessed on 28 January 2023).
  18. González-Ramírez, L.R. On the existence of traveling fronts in the fractional-order Amari neural field model. Commun. Nonlinear Sci. Numer. Simul. 2023, 116, 106790. [Google Scholar] [CrossRef]
19. Laing, C.R.; Troy, W.C.; Gutkin, B.; Ermentrout, G.B. Multiple Bumps in a Neuronal Model of Working Memory. SIAM J. Appl. Math. 2002, 63, 62–97. [Google Scholar] [CrossRef]
  20. Laing, C.R.; Troy, W.C. Two-bump Solutions of Amari-type Models of Neuronal Pattern Formation. Phys. D Nonlinear Phenom. 2003, 178, 190–218. [Google Scholar] [CrossRef]
  21. Janson, S. Resultant and Discriminant of Polynomials. 2010. Available online: http://www2.math.uu.se/~svante/papers/sjN5 (accessed on 19 March 2020).
  22. Evans, J.W. Nerve axon equations, I: Linear approximations. Indiana Univ. Math. J. 1972, 21, 877–955. [Google Scholar] [CrossRef]
  23. Evans, J.W. Nerve axon equations, II: Stability at rest. Indiana Univ. Math. J. 1972, 22, 75–90. [Google Scholar] [CrossRef]
  24. Evans, J.W. Nerve axon equations, III: Stability of the nerve impulse. Indiana Univ. Math. J. 1972, 22, 577–594. [Google Scholar] [CrossRef]
  25. Evans, J.W. Nerve axon equations, IV: The stable and unstable impulse. Indiana Univ. Math. J. 1975, 24, 1169–1190. [Google Scholar] [CrossRef]
  26. Zhang, L. Existence, uniqueness and exponential stability of traveling wave solutions of some integral differential equations arising from neuronal networks. J. Differ. Equ. 2004, 197, 162–196. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Coupling function example.
Figure 2. Gain function example.
Figure 3. L1 Case—plot of complex Evans function zero-level contours. Traveling front has all real eigenvalues and traveling speed c. Relevant parameters are c ≈ 0.09462739380500351, h = 0.6, and α = 0.
Figure 4. L1 Case—plot of complex Evans function zero-level contours. Traveling front has all real eigenvalues and traveling speed c. Relevant parameters are c ≈ 0.08832860463431712, h = 0.6, and α = 0.04.
Figure 5. L2 Case—plot of complex Evans function zero-level contours. Traveling front has all real eigenvalues and traveling speed c. Relevant parameters are c ≈ 0.08197134618854690, h = 0.6, and α = 0.08.
Figure 6. L3 Case—plot of complex Evans function zero-level contours. Traveling front has all real eigenvalues and traveling speed c. Relevant parameters are c ≈ 0.04572629352086657, h = 0.6, and α = 0.3.
Figure 7. L4 Case—plot of complex Evans function zero-level contours. Traveling front has all real eigenvalues and a traveling speed c. Relevant parameters are c ≈ 0.2149337789404190, h = 0.3, and α = 0.01.
Figure 8. L5 Case—plot of complex Evans function zero-level contours. Traveling front has all real eigenvalues and traveling speed c. Relevant parameters are c ≈ 0.4098916313370098, h = 0.2, and α = 0.8.
Figure 9. L6 Case—plot of complex Evans function zero-level contours. Traveling front has all real eigenvalues and traveling speed c. Relevant parameters are c ≈ 0.4098916313370098, h = 0.55, and α = 0.65.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Macaluso, D.; Guo, Y. Stability of Traveling Fronts in a Neural Field Model. Mathematics 2023, 11, 2202. https://doi.org/10.3390/math11092202

