1. Introduction
Let $Q : \mathbb{R}^n \to \mathbb{R}^n$ be a homogeneous form of degree two. An autonomous polynomial system of ODEs $\dot{v} = Q(v)$, where the vector function v is defined on some real interval, will be referred to as a quadratic system. In the special case when $n = 2$, we can write such a system in the form
$$\dot{x} = a_1 x^2 + b_1 x y + c_1 y^2, \qquad \dot{y} = a_2 x^2 + b_2 x y + c_2 y^2, \qquad (1)$$
where $a_i$, $b_i$, $c_i$ are real constants. The origin of $\mathbb{R}^n$ is always a critical point of a quadratic system.
A (real) Markus algebra associated to a quadratic form $Q$, which will be denoted by $A(Q)$, is the space $\mathbb{R}^n$ equipped with a (nonassociative in the general case) product $\circ$ defined by the polarization
$$x \circ y = \tfrac{1}{2}\big(Q(x + y) - Q(x) - Q(y)\big),$$
so that $x \circ x = Q(x)$.
This product is obviously commutative. The idea of studying quadratic ODEs via their real algebras has been considered by many authors. In [1,2,3], Boujemaa et al. considered unboundedness of the solutions of quadratic systems and stated a reduction theorem based on the existence of an ideal generated by an idempotent element. Burdujan [4,5,6,7] considered quadratic systems with derivations, automorphisms, and nilpotents of order three, and the application in Lie triple system theory. Krasnov et al. [8,9] considered the connections between algebras, integral (quadratic) systems, and partial differential equations. Kinyon and Sagle [10,11,12] considered many general relations between commutative algebras, quadratic systems of ODEs, and quadratic maps (for this paper, the most important result is the one on blow-up solutions [10]). Kutnjak [13,14] considered the relation between commutative algebras and quadratic maps in connection with chaotic dynamics in quadratic homogeneous difference systems. Some partial results in $\mathbb{R}^3$ are known for the case when the system contains a plane of singular points (for details, see [15]).
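The product defined above can be made concrete in code. The following is a minimal sketch, assuming the standard polarization definition $x \circ y = \tfrac{1}{2}(Q(x+y) - Q(x) - Q(y))$; the quadratic map Q used here is a hypothetical example (the complex-squaring system), not one taken from the paper.

```python
# Minimal sketch: the Markus product obtained by polarizing a homogeneous
# quadratic map Q, assuming the standard definition
#   x ∘ y = (Q(x + y) - Q(x) - Q(y)) / 2,
# which reduces to x ∘ x = Q(x) on the diagonal.

def markus_product(Q, x, y):
    """Commutative bilinear product associated to the quadratic map Q."""
    s = [xi + yi for xi, yi in zip(x, y)]
    Qs, Qx, Qy = Q(s), Q(x), Q(y)
    return [(a - b - c) / 2 for a, b, c in zip(Qs, Qx, Qy)]

# Hypothetical example: Q(x, y) = (x^2 - y^2, 2xy), i.e. the system z' = z^2.
def Q(v):
    x, y = v
    return [x * x - y * y, 2 * x * y]

e1, e2 = [1.0, 0.0], [0.0, 1.0]
print(markus_product(Q, e1, e1))  # e1 ∘ e1 = Q(e1) = [1.0, 0.0]
print(markus_product(Q, e1, e2))  # mixed term: [0.0, 1.0]
print(markus_product(Q, e2, e2))  # [-1.0, 0.0]
```

By construction the product is symmetric in its two arguments, which matches the commutativity noted in the text.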
It is easy to verify that the Markus algebra of a planar quadratic system of the form (1) has the multiplication rules
$$e_1 \circ e_1 = (a_1, a_2), \qquad e_1 \circ e_2 = \tfrac{1}{2}(b_1, b_2), \qquad e_2 \circ e_2 = (c_1, c_2), \qquad (2)$$
where the vectors $e_1$ and $e_2$ denote the standard basis of $\mathbb{R}^2$.
First applications of this ring-theoretic approach to the study of quadratic ODEs were provided by Markus in [16]. The standard monograph on this topic is [17].
The methods using Markus algebra techniques are useful in the study of quadratic systems because there exist many connections between the properties of quadratic systems and their algebras. Some of these connections are (see [10,11,17] for proofs):
The quadratic system has ray solutions if and only if there exists a nonzero idempotent in its Markus algebra, i.e., an element $p \neq 0$ such that $p \circ p = p$. Any ray solution implies unstable dynamics near the origin. The solutions lying on a line through the idempotent are called blow-up solutions. Note that this implication holds in any dimension.
The quadratic system has a line of critical points if and only if there exists a nonzero nilpotent of index two in its Markus algebra, i.e., an element $n \neq 0$ such that $n \circ n = 0$.
The quadratic system has an invariant r-dimensional linear subspace if and only if its Markus algebra has an r-dimensional subalgebra [16]. Note that the invariance of a subspace means that for any initial condition in it, the flow remains within the subspace for any time and any initial time.
The quadratic system can be solved by reduction if and only if its Markus algebra contains a nontrivial ideal.
If two quadratic systems are defined on vector spaces, and their Markus algebras are the corresponding associated algebras, then a linear map between the underlying spaces is a solution-preserving map between the two systems if and only if it is an algebra homomorphism between the corresponding Markus algebras. The two systems are equivalent if and only if their Markus algebras are isomorphic.
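The first connection above, between idempotents and blow-up solutions, can be checked numerically. The sketch below uses a hypothetical planar system $\dot{x} = x^2 - y^2$, $\dot{y} = 2xy$ (not taken from the paper), whose algebra has the idempotent $p = (1, 0)$; by degree-two homogeneity, $x(t) = p/(1-t)$ is then a ray solution blowing up at $t = 1$.

```python
# Sketch: a nonzero idempotent p (p∘p = Q(p) = p) forces the blow-up ray
# solution x(t) = p/(1 - t), since homogeneity of degree two gives
#   x'(t) = p/(1-t)^2 = Q(p)/(1-t)^2 = Q(p/(1-t)) = Q(x(t)).
# Hypothetical system: x' = x^2 - y^2, y' = 2xy, with idempotent p = (1, 0).

def Q(v):
    x, y = v
    return [x * x - y * y, 2 * x * y]

p = [1.0, 0.0]
assert Q(p) == p  # p is idempotent: p∘p = Q(p) = p

def ray_solution(t):
    s = 1.0 / (1.0 - t)  # blows up as t -> 1
    return [s * pi for pi in p]

# Finite-difference check that x'(t) = Q(x(t)) along the ray at t = 0.5:
h = 1e-6
x_mid = ray_solution(0.5)
deriv = [(a - b) / (2 * h)
         for a, b in zip(ray_solution(0.5 + h), ray_solution(0.5 - h))]
print(max(abs(d - q) for d, q in zip(deriv, Q(x_mid))) < 1e-4)  # True
```

The ray starts arbitrarily close to the origin (scale p by a small factor and rescale time), which is exactly why an idempotent rules out stability.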
The last statement is especially important, since it means that we can attempt to fully classify possible behaviour of quadratic systems of a certain type if we develop the classification theory for some class of nonassociative algebras and treat only those explicit quadratic systems that emerge from such classification.
In the sequel, we will use the terms idempotent and nilpotent in the restricted sense, i.e., they will refer only to nonzero elements.
The starting point for our first result is the above remarks and the following lemma, which proves that locally the trajectories of the scaled linear system and the corresponding linear system coincide (up to time scaling) in the half-planes determined by the common factor of the quadratic system.
Lemma 1. A quadratic system (3) with a common factor can be treated in terms of the linear system (4). The common factor of (3) represents a line of singular points and splits the plane into two half-planes: on one half-plane, the solutions of system (3) have the same orientation as the solutions of (4), while on the other half-plane, the solutions of the quadratic system have reversed time compared to the linear one. Proof. Let us consider the two ODEs corresponding to (3) and (4), respectively.
Let us denote the solutions of (5) and (6) with the same initial condition. Obviously, the trajectories of (3) and (4) with these initial conditions lie on the same curves in the phase plane. We only need to find out the time orientation of the trajectories. Let one parametrisation denote the solution of (3) and the other the solution of (4). The relation between the rescaled time and t follows from the sign of the common factor:
on the half-plane where the common factor is positive, the time rescaling is always positive, implying that the two solutions have the same orientation;
on the half-plane where the common factor is negative, the time rescaling is always negative, implying that the two solutions have the opposite orientation.
□
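Lemma 1 can be illustrated numerically. Since the displayed systems (3) and (4) are not reproduced in this copy, the sketch below assumes a representative factorized field $\ell(v)\,A v$ with common factor $\ell(x, y) = y$ and a hypothetical matrix A: the quadratic and linear fields are parallel at every point, with matching orientation exactly on the half-plane where $\ell > 0$.

```python
# Hedged illustration of Lemma 1 with an assumed common factor ℓ(x, y) = y
# and an assumed matrix A (a linear centre): the quadratic field ℓ(v)·A·v
# is parallel to the linear field A·v everywhere, with the same orientation
# where ℓ > 0 and reversed orientation where ℓ < 0.

A = [[0.0, -1.0], [1.0, 0.0]]  # linear centre

def lin(v):
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

def ell(v):          # common factor; ℓ = 0 is the line of singular points
    return v[1]

def quad(v):         # the factorized quadratic field ℓ(v)·A·v
    f = ell(v)
    return [f * c for c in lin(v)]

for v in ([1.0, 2.0], [1.0, -2.0]):          # one point per half-plane
    dot = sum(q * l for q, l in zip(quad(v), lin(v)))
    print(ell(v) > 0, dot > 0)  # orientation agrees exactly when ℓ(v) > 0
```

The dot product of the two velocity fields has the sign of $\ell(v)$, which is precisely the time-orientation statement of the lemma.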
It is of obvious interest whether the origin is a (Lyapunov) stable critical point or not. In the planar case, the analysis is rather simple. In Theorem 1 we observe that the result can be nicely expressed using a suitable matrix.
Theorem 1. A planar quadratic system has a stable origin if and only if it can be factorized in the form (7), where β is nonzero. Proof. The result follows from Lemma 1, the one-to-one relation between systems and algebras [16], the result [18] of Kaplan and Yorke on nilpotents and idempotents, and the result due to Kinyon and Sagle on blow-up solutions [11].
According to the Kaplan–Yorke result, any real finite-dimensional algebra contains at least one nonzero idempotent or a nonzero nilpotent of rank two. By the result of Kinyon and Sagle, the existence of an idempotent implies unbounded trajectories starting arbitrarily close to the origin, which implies instability of the origin. In dimension two, this directly implies that (1) must be of the form (3). Note that the line of singular points represents the nilpotent in the corresponding algebra (2). The rest of the proof follows by Lemma 1 and the well-known theory of planar linear systems; see, for example, ([19], Section 4) for details. According to Lemma 1, only the phase portraits with bounded trajectories (i.e., foci and centres) assure the stability of the origin in (3), which directly yields that (1) must be of the form (7) and concludes the proof. □
The main purpose of this paper is to show that the matrix characterisation of stability also has an alternative formulation which is ring-theoretic in nature.
To explain our new result, we must also consider the obvious complexification of the real Markus algebra. This complexification is an involutive complex algebra modeled on the space $\mathbb{C}^2$. Its multiplication by a complex number and its involution are defined in the natural way, for all elements of the algebra and all complex scalars. We can identify the original algebra with a real subalgebra of this complexification. The concept of an idempotent, i.e., an element u satisfying $u \circ u = u$, makes sense in an arbitrary ring. The purpose of our paper is to formulate an analogue of Theorem 1 in a purely ring-theoretic framework and to offer a possible path toward the generalization to the three-dimensional real stability problem.
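The complexification can also be sketched in code. The representation below is an assumption consistent with the text (elements as complex coordinate vectors, product extended bilinearly from a basis multiplication table, involution by componentwise conjugation, real elements as the self-adjoint ones); the multiplication table is that of the hypothetical system $\dot{x} = x^2 - y^2$, $\dot{y} = 2xy$, not one taken from the paper.

```python
# Hedged sketch of the complexification of a real Markus algebra: elements
# are complex coordinate vectors, the product extends the real one
# bilinearly, and the involution is assumed to be componentwise conjugation,
# so the real algebra sits inside as the self-adjoint elements.

def product(u, v, table):
    """Bilinear product from a basis multiplication table e_i ∘ e_j."""
    n = len(u)
    out = [0j] * n
    for i in range(n):
        for j in range(n):
            for k in range(n):
                out[k] += u[i] * v[j] * table[i][j][k]
    return out

def star(u):
    """Involution: componentwise complex conjugation."""
    return [c.conjugate() for c in u]

# Hypothetical table (x' = x^2 - y^2, y' = 2xy):
#   e1∘e1 = e1,  e1∘e2 = e2,  e2∘e2 = -e1.
table = [[(1, 0), (0, 1)], [(0, 1), (-1, 0)]]

u = [1 + 1j, 2 - 1j]
lhs = star(product(u, u, table))
rhs = product(star(u), star(u), table)
print(lhs == rhs)  # True: (u∘u)* = u*∘u*, since the table is real
```

Because the structure constants are real, the involution is automatically multiplicative, which is what makes an idempotent's adjoint again an idempotent in the arguments below.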
2. Main Result
In this section, we prove our main result.
Theorem 2. A planar quadratic system, different from the trivial one, has a stable origin if and only if its associated complex Markus algebra is spanned by (two) idempotents, while the only idempotent in its associated real Markus algebra is the zero element.
We refer to the excluded system as the trivial system. We will divide our arguments into two separate statements.
Proposition 1. Let Q be one of the nontrivial planar systems from Theorem 1. Then the only idempotent of its real Markus algebra is the zero element. The associated complex Markus algebra contains precisely two nonzero idempotents, which are linearly independent over $\mathbb{C}$ and therefore span it.
Proof. Systems from Theorem 1 can be rewritten in a common form, while the corresponding (real) Markus algebra is given by explicit multiplication rules. The complex Markus algebra is given by the same multiplication rules if we additionally allow complex coordinates. We can therefore solve the idempotent equation for both algebras simultaneously if we use complex arithmetic.
The condition $u \circ u = u$ leads, if we expand the left-hand side according to the multiplication rules above, to a system of two quadratic equations. Its solutions, apart from the obvious zero solution, can be computed by elementary means. Those solutions are well-defined, since a vanishing denominator would contradict an explicit assumption of Theorem 1. It is obvious from the explicit form of the two nonzero solutions that they cannot both be real: if we assume that all solutions are real, the resulting conditions lead to a contradiction with the assumptions of Theorem 1.
To prove that the two nonzero idempotents are linearly independent, suppose for a moment that they were dependent. Since both are nonzero, one would be a nonzero scalar multiple of the other; the idempotent equation then forces the scalar to be 1, so the two idempotents would have to be equal. The resulting condition clearly coincides with system (8), which leads to a contradiction. □
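The simultaneous treatment of the real and complex idempotent equations can be illustrated in code. Since the rules (8) themselves are not reproduced in this copy, the sketch below uses a hypothetical planar table, that of $\dot{x} = x^2 - y^2$, $\dot{y} = 2xy$, purely to show the computation; for that table the real algebra does contain a nonzero idempotent, so by Theorem 2 that particular system is unstable.

```python
# Illustration of solving u∘u = u over the complex field, for a hypothetical
# multiplication table (the paper's rules (8) are not reproduced here):
#   e1∘e1 = e1,  e1∘e2 = e2,  e2∘e2 = -e1.
# In coordinates u = u1*e1 + u2*e2, the equation u∘u = u reads
#   u1^2 - u2^2 = u1,   2*u1*u2 = u2.

def prod(u, v):
    return (u[0] * v[0] - u[1] * v[1], u[0] * v[1] + u[1] * v[0])

# Solving by cases: u2 = 0 gives u1 in {0, 1}; u2 != 0 forces u1 = 1/2 and
# then u2^2 = u1^2 - u1 = -1/4, i.e. u2 = ±i/2.
candidates = [(0j, 0j), (1 + 0j, 0j), (0.5 + 0j, 0.5j), (0.5 + 0j, -0.5j)]
for u in candidates:
    assert prod(u, u) == u  # each candidate really is idempotent

# The two complex idempotents (1/2, ±i/2) are linearly independent and span
# C^2, but the real idempotent (1, 0) survives as well, so this particular
# system is unstable (it is the blow-up system z' = z^2).
```

Working over $\mathbb{C}$ from the start, as in the proof, lets one read off both the complex idempotents and, by inspecting which solutions are real, the idempotents of the real algebra.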
Proposition 2. Let Q be a quadratic form such that the only idempotent of its real Markus algebra is zero, while its complex Markus algebra is spanned by idempotents. Then there exists a linear transformation of coordinates such that the quadratic system is equivalent to one of the systems from Theorem 1.
Proof. Step 1. Let p be a nonzero idempotent of the complex Markus algebra. Since the involution is multiplicative, it follows that $p^*$ is also an idempotent. Since the self-adjoint part of the complexification is isomorphic to the real Markus algebra, p cannot be self-adjoint. Let us assume that p and $p^*$ are linearly dependent over $\mathbb{C}$. Since both are nonzero, there would exist a scalar λ such that $p^* = \lambda p$. In the proof of Proposition 1, we saw that λ must be 1, i.e., $p^* = p$, which contradicts our assumption about p.
Step 2. Since the complex Markus algebra is two-dimensional as a complex space, $\{p, p^*\}$ is one of its bases. This means that the product $p \circ p^*$ must be a linear combination of those two elements, i.e., there exist complex numbers α and β such that $p \circ p^* = \alpha p + \beta p^*$. As the element $p \circ p^*$ is self-adjoint, $\beta = \overline{\alpha}$ follows. If λ is any complex number, the element $q = \lambda p + \overline{\lambda} p^*$ is self-adjoint, and thus corresponds to an element of the real algebra. By assumption, such an element q cannot be a nonzero idempotent. Therefore the equation $q \circ q = q$ must have $q = 0$ as its only solution. From this equation and the linear independence of p and $p^*$, we infer two equivalent equations of the form (9).
If the leading coefficient is nonzero, the simplified equation must be unsolvable in nonzero λ. If we conjugate this equation and multiply the two together, we obtain a system whose solution can be written down explicitly. Conversely, it is easy to check that in the complex algebra with the resulting multiplication, the element built from this solution is nonzero, self-adjoint and idempotent. Our assumption on the nonexistence of such elements now implies that only the degenerate possibility remains. In this case, system (9) reduces to a single equation, which has (infinitely many) solutions only under a condition that must also be excluded.
Step 3. We can decompose the idempotent p as $p = a + ib$, where a and b belong to the real algebra. Since p is not self-adjoint, the element b must be nonzero. If a were zero, then $p = ib$ together with $p \circ p = p$ would imply $-b \circ b = ib$. The left-hand side is an element of the real algebra, while the right-hand side is a purely imaginary element. This would imply $b \circ b = 0$ and consequently $b = 0$, which contradicts the assumption.
If a and b were linearly dependent (we know they must both be nonzero elements), then there would exist a nonzero real number λ such that $a = \lambda b$ would hold. From the idempotency of p we could derive, in the second component, a quadratic relation for b. Rescaling b accordingly, we would obtain a nonzero element of the real algebra satisfying the idempotent equation, which would contradict the assumption we made for this Proposition. Hence, a and b cannot be linearly dependent.
Step 4. Since the real space is two-dimensional, $\{a, b\}$ is one of its bases. From the multiplication rules of the complex algebra, we can easily compute the multiplication rules for this basis; the corresponding quadratic system takes the form (12) for some value of the parameter.
Step 5. Assume first the special value of the parameter. Then system (12) takes a form which can be written in the form of (7) with an appropriate choice of the coefficients.
Otherwise, system (12) is linearly equivalent to (13), where the transformation of the coordinates is given explicitly. Next, note that a further change of coordinates transforms (13) into (7).
The correspondence between the parameter k from (13) and the parameters of (7) can then be read off directly.
□
Remark 1. Note that the stated coordinate transformation takes system (12) into system (13). To verify this, quite tedious computations must be performed; one has to use a suitable matrix, the relation between the parameters, and several standard trigonometric identities.
Proof of Theorem 2. If a planar quadratic system Q has a stable origin, it is linearly equivalent to one of the systems (7). According to ([16], Theorem 1), its real Markus algebra is isomorphic to one of the real Markus algebras corresponding to (7). It is easy to see that the derived complex Markus algebras are then also isomorphic. By Proposition 1, the real algebra of (7) has only zero as an idempotent, while its complexification is spanned by idempotents. This clearly implies that the zero element is the only idempotent of the real Markus algebra of Q, while the complex Markus algebra of Q is spanned by idempotents.
Conversely, assume that the quadratic system Q is such that its real Markus algebra contains only the trivial idempotent, while its complex Markus algebra is spanned by idempotents. According to Proposition 2 and Theorem 1, Q is linearly equivalent to some quadratic system with a stable origin. Since this linear equivalence is clearly a bounded mapping, the system Q also has a stable origin. □
3. Three-Dimensional Case
In this section, we prove that an immediate generalisation of Theorem 2 is not true in $\mathbb{R}^3$. Such a conjecture would take the form
Statement 1. Let Q be a nonzero quadratic map. The system of ODEs $\dot{v} = Q(v)$, different from the trivial one, (A) has a stable origin if and only if (B) its associated complex Markus algebra is spanned by three idempotents, while the only idempotent in its associated real Markus algebra is the zero element.
To this end, we consider two (counter)examples which prove that neither of the two implications in Statement 1 is true.
The first example contradicts the sufficiency of the conditions: the origin will be shown to be unstable, while the corresponding algebra contains enough complex idempotents and no nontrivial real idempotent.
The second example contradicts the necessity of the conditions in the above attempt at a generalization of Theorem 2: the system has a stable origin but not enough complex idempotents.
Example 1.
Let us consider the system (14) with the corresponding multiplication rules. The idempotents are determined by the solutions of the idempotent equation. Obviously, any nontrivial solution must be nonzero in all three components. Therefore, substituting one equation into another yields (after canceling by y) an equation proving that all four solutions to (14) are complex. A straightforward computation yields exactly four nontrivial idempotents. Obviously, condition (B) is fulfilled.
To prove (¬A), let us search for a particular solution which starts arbitrarily close to the origin and tends to infinity when t is large enough. Dividing one equation by another yields a relation proving that the solutions lie on cylinders.
Since the relevant derivative is positive, it is obvious that the corresponding component of any such solution is strictly increasing, yielding instability for the chosen initial condition. Since this initial condition can be taken arbitrarily close to the origin, this yields instability of the origin.
More precisely, the solution with the chosen initial condition can be written explicitly on the cylinder; the third equation then reduces to a scalar ODE whose series solution clearly diverges as t grows. This clearly proves the instability of the origin.
Example 2.
Let us consider the system (15) with the corresponding multiplication rules. The idempotents are determined by the solutions of the idempotent equation. Obviously, any nontrivial solution must satisfy an additional relation; substituting it into the equations yields (after canceling by z) an equation proving that both nontrivial solutions of (15) are complex. A straightforward computation shows that there exist just two nontrivial, linearly independent solutions, i.e., just two nontrivial complex idempotents in this case. Let us prove that the origin of (15) is stable. According to the (Lyapunov) definition of stability, for any $\varepsilon > 0$, there must exist $\delta > 0$ such that any solution starting within δ of the origin remains within ε of the origin for all later times.
Obviously, for any ε one can choose a suitable δ in order to prove the Lyapunov stability of the origin.
Both examples clearly show that the algebraic characterization of stability properties of quadratic systems is far from simple even in $\mathbb{R}^3$, let alone in $\mathbb{R}^n$ for $n > 3$. One attempt was to consider the relation between nilpotents and complex idempotents and the spectral properties of the corresponding left multiplication by nilpotents (see [15]), but in the light of Example 1, this is clearly not the proper way towards an adequate generalisation. It is well known that the existence of a subalgebra of the corresponding algebra yields an invariant subspace for the flow of the dynamical system (cf. [11]). To successfully tackle the problem of stability in $\mathbb{R}^3$, one should have a classification of all three-dimensional real algebras with a stable two-dimensional subalgebra. In [20], such two-dimensional algebras were successfully classified in terms of complex idempotents. However, the classification (up to algebra isomorphism or up to linear equivalence) is, at least for now, not feasible because of the complexity of the computations. The above examples and some numerical experiments lead us to believe that one of the keys to solving this difficult problem is connected with an additional condition: the (non)existence of a two-dimensional subalgebra.
4. Possible Directions for Further Research
In the sequel, we will use the abbreviation CMA for a complex Markus algebra corresponding to a quadratic system of real ODEs. The most important problem is
Problem 1. Classify all three-dimensional systems with a stable origin. In other words, describe necessary and sufficient conditions on the coefficients of a three-dimensional quadratic system for the origin to be a stable singular point.
In the sequel, we will use the abbreviation SSO for a system with a stable origin. The idea of the CMA as presented here is an attempt towards the final solution of the above-mentioned problem.
This problem is not trivial, but we hope the full apparatus of complex analysis and the complex spectral theory of matrices can be fruitful. Direct calculations in $\mathbb{R}^3$ involve 18 coefficients and do not seem to be the best possible approach. This is the reason why we propose the introduction of CMA methods. Note also that the multiplication rules defined in (10) involve only one real parameter.
The first obvious observation is that every invariant plane of an SSO generates a two-dimensional SSO in a natural way. If we translate this obvious remark into the language of CMAs, any two-dimensional subalgebra of a three-dimensional CMA corresponding to an SSO must itself be the CMA of a two-dimensional SSO. Precisely those algebras were classified in Theorem 2.
More precisely, if a three-dimensional CMA contains a two-dimensional subalgebra which does not contain two complex idempotents with the properties defined in (10), the original quadratic system is not an SSO. Hence, to classify all three-dimensional SSOs, we propose to first solve
Problem 2. Classify all three-dimensional complex involutive algebras with at least one two-dimensional subalgebra, whose two-dimensional subalgebras all satisfy the properties in the formulation of Theorem 2.
To fully solve Problem 1, our numerical experiments suggest that the following result may be true.
Conjecture 1. If a three-dimensional CMA has no subalgebras of dimension 2, the original quadratic system is not a SSO.
The simplest open problem which we intend to solve with the CMA method is
Problem 3. Let us consider a family of three-dimensional systems whose coefficients are some real numbers. After a change of time, the corresponding CMA takes an explicit form. The elements p and $p^*$ generate a two-dimensional subalgebra which is isomorphic to one of the algebras from (11) for appropriate parameter values. Since the third dimension in this new basis is represented by a nilpotent of rank two, we can deduce that the corresponding system (depending on the parameters) has a potentially stable origin. The problem is to describe precisely for which parameter values the origin is stable. We are currently working on its solution. The main idea is to find, for most parameter values, one suitable two-dimensional subalgebra which is not isomorphic to any of the algebras described in Theorem 2, and to study the remaining cases.