Article

On Numerical Approximations of the Koopman Operator

Igor Mezić
Mechanical Engineering and Mathematics, University of California, Santa Barbara, CA 93106, USA
Mathematics 2022, 10(7), 1180; https://doi.org/10.3390/math10071180
Submission received: 5 January 2022 / Revised: 20 March 2022 / Accepted: 22 March 2022 / Published: 5 April 2022
(This article belongs to the Special Issue Dynamical Systems and Operator Theory)

Abstract:
We study numerical approaches to the computation of spectral properties of composition operators. We provide a characterization of Koopman modes in Banach spaces using Generalized Laplace Analysis. We cast Dynamic Mode Decomposition-type methods in the context of the finite section theory of infinite-dimensional operators and provide an example of a mixing map for which the finite section method fails. Under assumptions on the underlying dynamics, we provide the first result on the rate of convergence of the finite-section approximation as the sample size increases. We study the error in the Krylov subspace version of the finite section method and prove convergence, in the pseudospectral sense, for operators with pure point spectrum. Since Krylov sequence-based approximations can mitigate the curse of dimensionality, this result indicates that they may also achieve low spectral error without an exponential-in-dimension increase in the number of functions needed.

1. Introduction

Spectral theory of dynamical systems shifts the focus of investigation of dynamical systems behavior away from trajectories in the state space and towards spectral features of an associated infinite-dimensional linear operator. Of particular interest is the composition operator—in a measure-preserving setting called the Koopman operator [1,2,3,4,5]. Its spectral triple—eigenvalues, eigenfunctions and eigenmodes—can be used in a variety of contexts, from model reduction [5] to stability and control [6]. In practice, we only have access to finite-dimensional data from observations or outputs of numerical simulations. Thus, it is important to study approximation properties of finite-dimensional numerical algorithms devised to compute spectral objects [7]. Compactness is the property that imbues infinite-dimensional operators with quasi-finite-dimensional properties. Self-adjointness also helps in proving the approximation results. However, the composition operators under study here are rarely compact or self-adjoint. In addition, in the classical, measure-preserving case, the setting is that of unitary operators (and essentially self-adjoint generators for the continuous-time setting [8]), but in the general, dissipative case, composition operators are neither.
There are three main approaches to finding spectral objects of the Koopman operator:
1.
The first, suggested already in [9], is based on long-time weighted averages over trajectories, rooted in the ergodic theory of measure-preserving dynamical systems. An extension of that work that captures properties of the continuous spectrum was presented in [10]. This approach was named Generalized Laplace Analysis (GLA) in [11], where concepts pertaining to dissipative systems were also discussed in terms of weighted averages along trajectories. In that sense, the ideas in this context provide an extension of ergodic theory for capturing transient (off-attractor) properties of systems. For on-attractor evolution, the properties of the method acting on $L^2$ functions were studied in [4]. The off-attractor case was pursued in [4,12], where Fourier averages (which are Laplace averages for the case when the eigenvalue considered is on the imaginary axis) were used to compute the eigenfunction whose level sets are isochrons, and in [13], in which analysis for general eigenvalue distributions was pursued in Hardy-type spaces. This study was continued in [14] to construct dynamics-adapted Hilbert spaces. The advantage of the method is that it does not require an approximation of the operator itself, as it constructs eigenfunctions and eigenmodes directly from the data. In this sense, it is close (and in fact related) to the power method for approximating the spectrum of a matrix from data on iteration of a vector. In fact, the methodology extends the power method to the case when eigenvalues can be of magnitude 1. It requires a separate computation to first determine the spectrum of the operator, which is also done without constructing it. This can potentially be hard to do because of issues such as spectral pollution—see the remarks at the end of Section 3; note also that the general long-standing problems of spectral pollution and of computing the full spectrum of Schrödinger operators on a lattice were recently solved in [15]. The recent work [16] enables computation of full spectral measures using a combination of resolvent operator techniques (used for the first time in the Koopman operator context in [17]) and ResDMD—an extension of the Dynamic Mode Decomposition technique (introduced next) that incorporates computation of residues from data snapshots (computation of residues was considered earlier in [18]).
2.
The second approach requires construction of an approximate operator acting on a finite-dimensional function subspace, i.e., a finite section—a problem that is also of concern in the more general context of approximating infinite-dimensional operators [7,19,20]. The best-known such method is the Dynamic Mode Decomposition (DMD), invented in [21] and connected to the Koopman operator in [22]. It has a number of extensions (many of which are summarized in [23]), for example, Exact DMD [24]; Bayesian/subspace DMD [25]; Optimized DMD [26,27]; Recursive DMD [28]; Variational DMD [29]; DMD with control [30,31]; sparsity-promoting DMD [32]; and DMD for noisy systems [33,34,35]. The original DMD algorithm featured state observables. The Extended Dynamic Mode Decomposition [36] recognizes that nonlinear functions of the state might be necessary to describe a finite-dimensional invariant subspace of the Koopman operator and provides an algorithm for the finite-section approximation of the Koopman operator. A study of convergence of such approximations is provided in [37], but the convergence was established only along subsequences, and the rate of convergence was not addressed. Here, we provide the first result on the rate of convergence of the finite section approximation under assumptions on the nature of the underlying dynamics. In addition, spectral convergence along subsequences is proven in [37] under the assumption that the weak limit of the eigenfunction approximations is not zero. This condition is hard to verify in practice. Instead, in Section 5.2, we prove a result that obviates the weak convergence assumption using some additional information on the underlying dynamics. It was observed already in [9] that, instead of an arbitrary set of observables forming a basis, one can use observables generated by the dynamics—namely, time delays of a single observable filling a Krylov subspace—to study spectral properties of the Koopman operator. In the DMD context, the methods developed in this direction are known under the name Hankel-DMD [38,39]. It is worth noting that the Hankel matrix approach of [38] is in fact based on the Prony approximation and requires a different sample structure than the Dynamic Mode Decomposition. Computation of residues was considered in [18] to address the problem of spectral pollution, where discretization introduces spurious eigenvalues. As mentioned before, the recent work [16] provides another method to resolve the spectral pollution problem, introducing ResDMD—an extension of Dynamic Mode Decomposition that incorporates computation of residues from data snapshots. The relationship between GLA and finite section methods was studied in [40].
3.
The third approach is based on the kernel integral operator combined with the Krylov subspace methodology [41], enabling approximation of continuous spectrum. While GLA and EDMD techniques have been extended to dissipative systems, the kernel integral operator technique is currently available only for measure-preserving (on-attractor) systems.
In this paper, we continue the development of ergodic theory-rooted ideas for understanding and numerically computing the spectral triple—eigenvalues, eigenfunctions and modes—of the Koopman operator. After some preliminaries, we start in Section 3 by discussing properties of algorithms of Generalized Laplace Analysis type in Banach spaces. Such results have previously been obtained in Hardy-type spaces [13]; here, we introduce a Gel'fand-formula-based technique that allows us to extend them to general Banach spaces. We continue in Section 4 by setting the finite-section approximation of the Koopman operator in the ergodic theory context. An explicit relationship of the finite section coefficients to the dual basis is established. Under assumptions on the underlying dynamics, we provide the first result on the rate of convergence of the finite-section approximation as the sample size increases. The error in the finite section approximation is analyzed. In Section 5, we study finite section approximations of the Koopman operator based on Krylov sequences of time delays of observables, and prove that, under certain conditions, the approximation error decreases as the number of samples is increased, without dependence on the dimension of the problem. Namely, the Krylov subspace (Hankel-DMD) methodology has the advantage of convergence in the number of iterates and does not require a basis exponentially large in the number of dimensions. This also resolves the problem of the choice of observables, since the dynamics selects the basis by itself. In Section 6, we discuss an alternative point of view on DMD approximations, related not to finite sections but to samples of continuous functions on finite subsets of the state space. The concept of weak eigenfunctions is discussed, continuing the analysis in [37]. We conclude in Section 7.

2. Preliminaries

For a Lipschitz-continuous (ensuring global existence and uniqueness of solutions) dynamical system
$$\dot{x} = F(x), \tag{1}$$
defined on a manifold $M \subset \mathbb{R}^m$ (i.e., $x \in M$, where by slight abuse of notation we identify a point in the manifold $M$ with its vector representation $x$ in $\mathbb{R}^m$), where $x$ is a vector and $F$ is a possibly nonlinear, smooth, vector-valued function of the same dimension as its argument $x$, denote by $S^t(x_0)$ the position at time $t$ of the trajectory of (1) that starts at time $0$ at the point $x_0$. We call the family of functions $S^t$ the flow.
Denote by $f$ an arbitrary, vector-valued observable $f : M \to \mathbb{R}^k$. The value $f(t, x_0)$ of the observable $f$ that the system trajectory starting from $x_0$ at time $0$ sees at time $t$ is
$$f(t, x_0) = f(S^t(x_0)). \tag{2}$$
Note that the space of all observables $f$ is a linear vector space. The family of operators $U^t$, acting on the space of observables and parameterized by time $t$, is defined by
$$U^t f(x_0) = f(S^t(x_0)). \tag{3}$$
Thus, for a fixed time $\tau$, $U^\tau$ maps the vector-valued observable $f(x_0)$ to $f(\tau, x_0)$. We will call the family of operators $U^t$, indexed by time $t$, the Koopman operator of the continuous-time system (1). This family was defined for the first time in [1], for Hamiltonian systems. In operator theory, such operators, when defined for general dynamical systems, are often called composition operators, since $U^t$ acts on observables by composing them with the mapping $S^t$ [3]. Discretization of $S^t$ at times $\tau, 2\tau, \ldots, n\tau, \ldots$ leads to the $\tau$-map $T = S^\tau : M \to M$ with the discrete dynamics
$$x' = T x, \tag{4}$$
and the associated Koopman operator $U$ defined by
$$U f = f \circ T. \tag{5}$$
Let $\mathcal{F}$ be a space of observables and $U : \mathcal{F} \to \mathcal{F}$ the Koopman operator associated with a map $T$ (note this means that $f \circ T \in \mathcal{F}$ whenever $f \in \mathcal{F}$). Appropriate (dynamics-adapted) spaces are discussed in [14]. A function $\phi_\lambda \in \mathcal{F}$ is an eigenfunction of $U$ associated with the eigenvalue $\lambda$ provided
$$U \phi_\lambda = \lambda \phi_\lambda. \tag{6}$$
Let $\sigma(U) \subset \mathbb{C}$ be the spectrum of $U$. The operator $U$ is called scalar [42] on $\mathcal{F}$ provided
$$U = \int_{\sigma(U)} \beta \, dE(\beta), \tag{7}$$
where $E$ is a family of spectral projections forming a resolution of the identity, and the integral is over $\beta \in \sigma(U) \subset \mathbb{C}$. Further, the operator $U$ is called spectral provided
$$U = S + N, \tag{8}$$
where $S$ is scalar and $N$ is quasi-nilpotent. Examples of function spaces in which Koopman operators are scalar and spectral are given in [14]. Let $\mathbf{f} \in \mathcal{F}$ be a vector of observables. For a scalar operator $U$, the Koopman mode $s_\lambda$ of $\mathbf{f}$ associated with an eigenvalue $\lambda$ of algebraic multiplicity 1 is given by
$$s_\lambda \phi_\lambda = \mathbf{f}_\lambda, \tag{9}$$
where $\phi_\lambda$ is the unit-norm eigenfunction associated with $\lambda$, and
$$\mathbf{f}_\lambda = \mathbf{f} - \int_{\sigma(U) \setminus \{\lambda\}} \beta \, dE(\beta)\, \mathbf{f} = \int_{\{\lambda\}} \beta \, dE(\beta)\, \mathbf{f}. \tag{10}$$
Note that, denoting by $E_\lambda$ the projection on the eigenspace associated with the eigenvalue $\lambda$, we have
$$E_\lambda \mathbf{f}_\lambda = E_\lambda \mathbf{f}, \tag{11}$$
since $E_\lambda \mathbf{f}_\lambda = \mathbf{f}_\lambda$ and $E_\lambda \int_{\sigma(U) \setminus \{\lambda\}} \beta \, dE(\beta)\, \mathbf{f} = 0$, by one of the key properties of the spectral resolution [42]. Now,
$$E_\lambda \mathbf{f} = c\, \phi_\lambda, \tag{12}$$
for some constant $c$, proving that $s_\lambda$ is well-defined and independent of $x$.
Remark 1. 
Note that, in the more general case with algebraic multiplicities of eigenvalues larger than 1, an analogous definition of the Koopman mode can be obtained. For example, if the algebraic and geometric multiplicities are both 2 and there are two linearly independent eigenfunctions $\phi_\lambda^1$ and $\phi_\lambda^2$ associated with the eigenvalue $\lambda$ of multiplicity 2, and we are computing $s_\lambda^1$, then (10) contains an additional term on the RHS, $s_\lambda^2 \phi_\lambda^2$, and similarly for $s_\lambda^2$, forming 2 equations. In the case of spectral operators, one works similarly, but the added complexity is in the use of generalized eigenfunctions [14].
We assume that the dynamical system $T$ has a Milnor attractor $\mathcal{A}$ such that, for every continuous function $g$ and almost every $x \in M$ with respect to an a priori measure $\nu$ on $M$ (without loss of generality, as we can replace $M$ with the basin of attraction of $\mathcal{A}$), the limit
$$g^*(x) = \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} U^i g(x) \tag{13}$$
exists. This is the case, e.g., for smooth systems on subsets of $\mathbb{R}^n$ with Sinai–Bowen–Ruelle measures, where $\nu$ is the Lebesgue measure [43]. For such systems, Hilbert spaces on which the Koopman operator is spectral have been constructed in [14].

3. Generalized Laplace Analysis

An example of what we call Generalized Laplace Analysis (GLA) is the computation of the eigenspace at $0$ (namely, invariants) of dynamical systems using time averages: recall that
$$h^*(x) = \lim_{t \to \infty} \frac{1}{t} \int_0^t h(S^\tau(x)) \, d\tau \tag{14}$$
is the time average at the initial condition $x$ of the function $h$ under the dynamics of $S^t$. For fixed point attractors,
$$U^t h^*(x) = 1 \cdot h^*(x). \tag{15}$$
As shown previously, this is valid in a much larger context: limit cycle attractors, toroidal attractors, Milnor attractors, and measure-preserving systems.
We generalize the idea that averages along trajectories produce eigenfunctions by introducing weights:
$$h_a^*(x) = \lim_{t \to \infty} \frac{1}{t} \int_0^t a(\tau)\, h(S^\tau(x)) \, d\tau \approx \frac{1}{n_j} \sum_{j=0}^{n_j - 1} a(j \Delta\tau)\, h(S^{j \Delta\tau}(x)), \tag{16}$$
where $a(t)$ is a function of time—typically a (possibly complex) exponential—and $\Delta\tau$ is a sampling time interval. If we have a vectorized set of initial conditions $x_k$, $k = 1, \ldots, n_k$, then we can generate a data matrix
$$H_{kj} = h(S^{j \Delta\tau}(x_k)). \tag{17}$$
Vectorizing $(\mathbf{a})_j = a(j \Delta\tau)$, we get
$$\mathbf{h}_a^* = H \mathbf{a}. \tag{18}$$
$H$ is the data matrix. For $a(t) = 1 = e^{0 \cdot t}$, we get $\mathbf{h}_a^* = H \mathbf{1}$, where $\mathbf{1}$ is a vector of $1$'s with $n_j$ components. To obtain eigenfunctions using Fourier averages, as developed in [12], we set $a(t) = e^{-i\omega t}$, to obtain
$$h_{e^{-i\omega t}}^*(x) = \lim_{t \to \infty} \frac{1}{t} \int_0^t e^{-i\omega\tau} h(S^\tau(x)) \, d\tau \approx \frac{1}{n_j} \sum_{j=0}^{n_j - 1} e^{-i\omega j \Delta\tau} h(S^{j \Delta\tau}(x)). \tag{19}$$
Both of the above examples were for the case $|a(t)| = 1$, corresponding to the eigenvalues $0$ and $i\omega$, both on the imaginary axis. In the next subsection, we provide a general theorem that deals with eigenvalues distributed arbitrarily in the complex plane.
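To make the weighted averages concrete, here is a minimal sketch in Python, under the assumption of a simple illustrative system—an irrational circle rotation; the map, the observable, and all parameter values are hypothetical choices for this illustration, not constructions from the paper.

```python
import numpy as np

omega = 2 * np.pi * np.sqrt(2) / 10            # assumed eigenfrequency
T = lambda th: (th + omega) % (2 * np.pi)      # illustrative map: circle rotation

def fourier_average(h, x0, w, n):
    """Discrete Fourier (Laplace) average (1/n) sum_j e^{-i w j} h(T^j x0)."""
    total, x = 0.0 + 0.0j, x0
    for k in range(n):
        total += np.exp(-1j * w * k) * h(x)
        x = T(x)
    return total / n

h = lambda th: np.cos(th) + 0.3 * np.cos(2 * th)
# Nonzero at the true eigenfrequency (an eigenfunction sample), ~0 elsewhere:
print(abs(fourier_average(h, 0.7, omega, 20000)))   # ~0.5
print(abs(fourier_average(h, 0.7, 1.234, 20000)))   # ~0
```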

GLA for Fields of Observables

Many problems of interest in applications feature a distributed field of observables. For example, the time evolution of temperature in a linear rod described by the coordinate $z \in [0, 1]$ is $T(t, T_0, z)$, where $T_0(z)$ is the initial condition, belonging to the state space of all possible temperature distributions satisfying the boundary conditions, and $t$ is time. We set our analysis up with this example in mind—namely, we consider a field of observables $f(x, z)$, where $x$ is in the state space and $z$ ranges over an indexing set—and consider the time evolution of such observables starting from an initial condition $x$.
Let $f(x, z) : M \times A \to \mathbb{R}^m$ be a bounded field of observables, continuous in $x$, where the observables are indexed over elements $z$ of a set $A$, and $M$ is a compact metric space. We will occasionally drop the dependence on the state-space variable $x$ and denote $f(x, z) = f(z)$, and the iterates of $f$ by $f(T^i x, z) = f^i(z)$. Let $U$ be the Koopman operator associated with a map $T : M \to M$. We assume that $U$ is bounded and acts in a closed manner on a Banach space of continuous functions $\mathcal{C}$ (this does not have to be the space of all continuous functions on $M$; see the remark after the theorem).
Theorem 1. 
(Generalized Laplace Analysis). Let $\lambda_0, \ldots, \lambda_K$ be simple eigenvalues of $U$ such that $|\lambda_0| \geq |\lambda_1| \geq \cdots \geq |\lambda_K| > 0$, and there are no other points $\lambda$ in the spectrum of $U$ with $|\lambda| \geq |\lambda_K|$. Let $\phi_k$ be the eigenfunction of $U$ associated with $\lambda_k$, $k \in \{0, \ldots, K\}$. Then, the Koopman mode associated with $\lambda_k$ is obtained by computing
$$f_k = \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} \lambda_k^{-i} \left( f(T^i x, z) - \sum_{j=0}^{k-1} \lambda_j^i\, \phi_j(x)\, s_j(z) \right) = \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} \lambda_k^{-i} \left( f^i(z) - \sum_{j=0}^{k-1} \lambda_j^i f_j \right), \tag{20}$$
where $f_k = \phi_k(x)\, s_k(z)$, $\phi_k$ is an eigenfunction of $U$ with $\|\phi_k\| = 1$, and $s_k$ is the $k$-th Koopman mode.
Proof. 
We introduce the operator
$$U_{\lambda_0} = \lambda_0^{-1} U. \tag{21}$$
Then, for some function $g(x)$, consider
$$\begin{aligned} U \left( \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} U_{\lambda_0}^i g(x) \right) &= \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} \lambda_0^{-i} U^i g(T x) = \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} \lambda_0^{-i} U^{i+1} g(x) \\ &= \lambda_0 \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} \lambda_0^{-(i+1)} U^{i+1} g(x) \\ &= \lambda_0 \lim_{n \to \infty} \frac{1}{n} \left( \lambda_0^{-n} g(T^n x) - g(x) + \sum_{i=0}^{n-1} U_{\lambda_0}^i g(x) \right) \\ &= \lambda_0 \lim_{n \to \infty} \frac{1}{n} \left( \lambda_0^{-n} g(T^n x) + \sum_{i=0}^{n-1} U_{\lambda_0}^i g(x) \right), \end{aligned} \tag{22}$$
where the last line is obtained by boundedness of $g$. Due to the boundedness of $U$ and continuity of $g$, we have
$$\lim_{n \to \infty} |\lambda_0^{-n} U^n g| \leq |g|. \tag{23}$$
This is obtained as a consequence of the so-called Gel'fand formula, which states that, for a bounded operator $V$ on a Banach space $X$, $\lim_{n \to \infty} |V^n|^{1/n} = \rho$, where $\rho$ is the spectral radius of $V$ [44] (note that in our case $\rho = |\lambda_0|$). Thus, the first term in (22) vanishes in the limit. Denote
$$g_{\lambda_0}^*(x) = \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} U_{\lambda_0}^i g(x), \tag{24}$$
where the convergence is again obtained from the Gel'fand formula, utilizing the assumption on convergence of time averages and (23). Thus, we obtain
$$U g_{\lambda_0}^*(x) = \lambda_0\, g_{\lambda_0}^*(x), \tag{25}$$
and, thus, $g_{\lambda_0}^*(x)$ is an eigenfunction of $U$ at the eigenvalue $\lambda_0$ (note that $g_{\lambda_0}^*(x) \in \mathcal{C}$ by the fact that the partial sums form a Cauchy sequence). If we have a field of observables $f(x, z)$, parameterized by $z$, we get
$$f_{\lambda_0}^*(x, z) = \phi_0(x)\, s_0(z), \tag{26}$$
since $f_{\lambda_0}^*(x, z)$ is an eigenfunction of $U$ at the eigenvalue $\lambda_0$, so for every $z$, it is just a constant (depending on $z$) multiple of the eigenfunction $\phi_0(x)$ of norm 1. If we denote
$$P_{\lambda_0} = \lim_{n \to \infty} \frac{1}{n} \sum_{i=0}^{n-1} U_{\lambda_0}^i \tag{27}$$
(note $P_{\lambda_0}$ is a bounded projection operator), we can split the space of functions $\mathcal{C}$ into the direct sum $P_{\lambda_0} \mathcal{C} \oplus (I - P_{\lambda_0}) \mathcal{C}$.
Now, let $0 < k < K$. Consider the space of observables
$$(I - P_{\lambda_0, \lambda_1, \ldots, \lambda_{k-1}}) \mathcal{C} = \left(I - \sum_{j=0}^{k-1} P_{\lambda_j}\right) \mathcal{C}, \tag{28}$$
complementary to the subspace $\Phi$ spanned by $\phi_j$, $0 \leq j < k$. The operator $U|_\Phi$, the restriction of $U$ to $\Phi$, has eigenvalues $\lambda_0, \ldots, \lambda_{k-1}$. Since
$$g_k = g - P_{\lambda_0, \lambda_1, \ldots, \lambda_{k-1}} g \tag{29}$$
does not have a component in $\Phi$, we can reduce the space of observables to $(I - P_{\lambda_0, \lambda_1, \ldots, \lambda_{k-1}}) \mathcal{C}$, on which $U_{\lambda_k}$ satisfies the assumptions of the theorem, and obtain
$$U (g_k)_{\lambda_k}^*(x) = \lambda_k (g_k)_{\lambda_k}^*(x). \tag{30}$$
If we have a field of observables $f(x, z)$, then
$$f_k(x, z) = f(x, z) - P_{\lambda_0, \lambda_1, \ldots, \lambda_{k-1}} f, \tag{31}$$
and, thus,
$$f_k(x, z) = \phi_k(x)\, s_k(z). \tag{32}$$
In other words, $f_k$ is the skew-projection of the field of observables $f(x, z)$ onto the eigenspace of the Koopman operator associated with the eigenvalue $\lambda_k$. □
Remark 2.
The assumptions on eigenvalues in the above theorem are not satisfied for dynamical systems whose eigenvalues are dense on the unit circle (e.g., a map whose attractor is the unit circle in the complex plane, on which the dynamics is given by $z' = e^{i\omega} z$, where $\omega$ is irrational w.r.t. $\pi$). However, in such a case, the space of functions can be restricted to the span of the functions $e^{ik\theta}$, $k = 1, \ldots, N$, $\theta \in [0, 2\pi)$, and the requirements of the theorem are then satisfied. This amounts to restricting the observables to a set with finite resolution, which is standard in data analysis.
Remark 3.
Function spaces in which Koopman operators are spectral are typically special tensor products of on-attractor Hilbert spaces—for example, $L^2(\mu)$, where $\mu$ is the physical invariant measure—and off-attractor spaces of functions that are continuous or possess additional smoothness [14]. Provided we do not restrict the on-attractor part to a finite-dimensional subset as we did in the previous remark, the above theorem applies to the off-attractor subset (which is an ideal of functions that vanish a.e. on the attractor). However, the on-attractor Koopman modes can be obtained a.e. using the same procedure as above, together with results relying on Birkhoff's Ergodic Theorem, valid in $L^2(\mu)$, as in [4,5,45,46].
In principle, one can find the full spectrum of the Koopman operator by performing Generalized Laplace Analysis, where Theorem 1 is used on some function $g(x)$ starting from the unit circle, successively subtracting parts of the signal corresponding to eigenvalues with decreasing $|\lambda|$. In practice, such computation can be unstable, since at large $t$, it involves multiplication of a very large number with a very small one. In addition, the eigenvalues are typically not known a priori. A large class of dynamical systems have eigenvalues on and inside the unit circle (or, in the continuous-time case, in the left half of the complex plane, inclusive of the imaginary axis) [14]. The eigenvalues on the unit circle can be found using the Fast Fourier Transform (FFT). Once the contributions to the dynamics from those eigenvalues are subtracted, the next largest set of eigenvalues has magnitude less than 1. Thus, the power method enables finding the magnitude $|\lambda_1|$ of the resulting largest eigenvalue. Scaling the operator (restricted to the space of functions not containing components from eigenspaces corresponding to eigenvalues of magnitude 1) by that magnitude, the FFT can be performed again to identify the arguments of the eigenvalues of magnitude $|\lambda_1|$. Alternatively, one can use the finite section method, described in the next section, in which the operator is represented in a basis, and a finite-dimensional truncation of the resulting infinite matrix—a finite section—is used to approximate its spectral properties. Under some conditions [37], increasing the dimension of the finite section and the number of sample points, eigenvalues of the operator can be obtained.
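As a small illustration of the FFT step just described—a sketch under assumed data (an irrational rotation), not a prescription from the paper—the argument of a unit-modulus eigenvalue can be read off from the peak of the FFT of an observable sampled along a trajectory:

```python
import numpy as np

omega = 2 * np.pi * 0.123456                   # assumed eigenfrequency
n = 4096
theta = (0.3 + omega * np.arange(n)) % (2 * np.pi)
signal = np.cos(theta)                         # observable sampled along a trajectory

amp = np.abs(np.fft.fft(signal)) / n
args = 2 * np.pi * np.fft.fftfreq(n)           # candidate eigenvalue arguments
k = np.argmax(amp)                             # dominant spectral peak
print(f"detected argument ~ {abs(args[k]):.5f}, true value {omega:.5f}")
```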

4. The Finite Section Method

The GLA method for approximating eigenfunctions (and thus modes) of the Koopman operator, analyzed in the previous section, was proposed initially in [4,5,9] in the context of on-attractor (measure-preserving) dynamics, and extended to off-attractor dynamics in [11,12,13,39,47]. It is predicated on the knowledge of (approximate) eigenvalues, since the eigenvalues need to be known a priori in order to perform the weighted trajectory sums in (20). The eigenvalue 1 is always known, and the trajectory sums in that case lead to invariants of the dynamics [45,46]. Other eigenvalues of modulus 1 can be approximated using signal processing methods (see, e.g., [39]). Importantly, the GLA does not require knowledge of an approximation to the Koopman operator and is in effect a sampling method, which avoids the curse of dimensionality. In contrast, DMD-type methods, invented initially in [21] without the Koopman operator background and connected to the Koopman operator setting in [22], produce a matrix approximation to the Koopman operator. There are many forms of the DMD methodology, but all of them require a choice of a finite set of observables that span a subspace. In this section, we analyze such methods in the context of the finite section of the operator and explore connections to the dual basis.

4.1. Finite Section and the Dual Basis

Consider the Koopman operator acting on an observable space $\mathcal{F}$ of functions on the state space $M$, equipped with the complex inner product $\langle \cdot, \cdot \rangle$ (note that we use the inner product linear in the first argument here; the physics literature typically employs the so-called Dirac notation, where the inner product is linear in its second argument), and let $\{f_j\}$, $j \in \mathbb{N}$, be an orthonormal basis of $\mathcal{F}$, such that, for any function $f \in \mathcal{F}$, we have
$$f = \sum_{j \in \mathbb{N}} c_j f_j. \tag{33}$$
Let
$$u_{kj} = \langle U f_j, f_k \rangle. \tag{34}$$
Then,
$$(U f)_k = \langle U f, f_k \rangle = \sum_{j \in \mathbb{N}} c_j \langle U f_j, f_k \rangle = \sum_{j \in \mathbb{N}} u_{kj} c_j. \tag{35}$$
Consider now a (not necessarily orthogonal) unconditional basis $\{f_j\}$. The action of $U$ on an individual basis function $f_j$ is given by
$$U f_j = \sum_{k \in \mathbb{N}} u_{kj} f_k, \tag{36}$$
where the $u_{kj}$ are now just the coefficients of $U f_j$ in the basis. We obtain
$$U f = \sum_{j \in \mathbb{N}} c_j U f_j = \sum_{j \in \mathbb{N}} c_j \sum_{k \in \mathbb{N}} u_{kj} f_k = \sum_{k \in \mathbb{N}} \left( \sum_{j \in \mathbb{N}} u_{kj} c_j \right) f_k, \tag{37}$$
and we again have
$$(U f)_k = \sum_{j \in \mathbb{N}} u_{kj} c_j. \tag{38}$$
As in the previous section, associated with any closed linear subspace $\mathcal{G}$ of $\mathcal{F}$, there is a projection onto it, denoted $P = P^2$, that we can think of as a projection "along" the space $(I - P)\mathcal{F}$, since, for any $f \in \mathcal{F}$, we have
$$P (I - P) f = (P - P^2) f = 0, \tag{39}$$
and, thus, any element of $(I - P)\mathcal{F}$ has projection $0$. We denote by $\tilde{U}$ the infinite-dimensional matrix with elements $u_{kj}$, $k, j \in \mathbb{N}$. Thus, the finite-dimensional section of the matrix,
$$\tilde{U}_n = \begin{pmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ u_{21} & u_{22} & \cdots & u_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ u_{n1} & u_{n2} & \cdots & u_{nn} \end{pmatrix}, \tag{40}$$
is the so-called compression of $\tilde{U}$ that satisfies
$$\tilde{U}_n = P_n \tilde{U} P_n, \tag{41}$$
where $P_n$ is the projection "along" $(I - P_n)\mathcal{F}$ onto the span of the first $n$ basis functions, $\mathrm{span}(f_1, \ldots, f_n)$.
The key question now is: how are the eigenvalues of $\tilde{U}_n$ related to the spectrum of the infinite-dimensional operator $U$? This was first addressed in [37].
Example 1.
Consider the translation $T$ on the circle $S^1$ given by
$$z' = e^{i\omega} z, \quad z \in S^1. \tag{42}$$
Let $f_j = e^{i j \theta}$, $\theta \in [0, 2\pi)$. Then,
$$U f_j = f_j \circ T = e^{i j \omega} e^{i j \theta}. \tag{43}$$
Thus, from (34), $u_{kj} = \delta_{kj} e^{i j \omega}$, where $\delta_{kj} = 1$ for $k = j$ and zero otherwise (the Kronecker delta), and $\tilde{U}$ is a diagonal matrix. In this case, the finite section method provides us with a subset of the exact eigenvalues of the Koopman operator.
The following example shows how careful we need to be with the finite-section method when the underlying dynamical system has chaotic behavior:
Example 2.
Consider the map $T$ on the circle $S^1$ given by
$$z' = z^2, \quad z \in S^1. \tag{44}$$
This is a mixing map whose Koopman operator has no eigenvalues on $L^2(S^1)$ except for the (trivial) $1$, while its spectrum is the whole unit circle [48]. Let $f_j = e^{i j \theta}$, $\theta \in [0, 2\pi)$. Then,
$$U f_j = f_j \circ T = e^{i j 2 \theta}. \tag{45}$$
Let
$$f(\theta) = \sum_{j \in \mathbb{Z}} c_j e^{i j \theta}. \tag{46}$$
Then,
$$U f(\theta) = \sum_{j \in \mathbb{Z}} c_j e^{i 2 j \theta}. \tag{47}$$
Thus, $\tilde{U}_n$ is given by
$$\tilde{U}_n = \begin{pmatrix} 0 & 0 & 0 & \cdots & 0 \\ 1 & 0 & 0 & \cdots & 0 \\ 0 & 0 & 0 & \cdots & 0 \\ 0 & 1 & 0 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \end{pmatrix}, \tag{48}$$
with a $1$ in row $2j$ of column $j$ whenever $2j \leq n$, i.e., $u_{kj} = \delta_{k, 2j}$. In this case, the finite section method fails, as $\tilde{U}_n$ is nilpotent and has the eigenvalue $0$ of multiplicity $n$. This example illustrates why the condition in [37]—that the weak convergence of a subsequence of eigenfunctions of $\tilde{U}_N$ to a function $\phi$ must be accompanied by the requirement $\|\phi\| \neq 0$—is needed in order for the limit of the associated subsequence of eigenvalues to converge to a true eigenvalue of the Koopman operator. In particular, no subsequence of eigenvalues in this case converges to a true eigenvalue of the Koopman operator, since the map is measure-preserving, and thus its eigenvalues lie on the unit circle. The example shows the peril of applying the finite section method to find eigenvalues of the Koopman operator when the underlying dynamical system has a continuous part of the spectrum [5] (in this case, Lebesgue spectrum [48]). Continuous spectrum is effectively dealt with in [10,49] using harmonic analysis and periodic approximation methods, respectively.
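The failure in Example 2 can be reproduced in a few lines; the following sketch (an illustration we add here, not code from the paper) assembles the finite section in the Fourier basis and confirms that its spectrum consists only of the spurious eigenvalue $0$:

```python
import numpy as np

# Finite section of the doubling map z -> z^2 in the basis f_j = e^{ij theta},
# j = 1, ..., n: since U f_j = f_{2j}, the entries are u_{kj} = delta_{k,2j}.
n = 7
U_n = np.zeros((n, n))
for j in range(1, n + 1):
    if 2 * j <= n:
        U_n[2 * j - 1, j - 1] = 1.0   # 1 in row k = 2j, column j (0-based indices)
print(np.linalg.eigvals(U_n))         # all zeros: no trace of the unit-circle spectrum
```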
To apply the finite-section methodology for approximation of the Koopman operator, we need to estimate the coefficients $u_{kj}$ from data. If we have access to measurements of $N$ orthogonal functions $f_1, \ldots, f_N$ at $m$ points in the state space, as indicated in [37], assuming ergodicity, this becomes possible:
Theorem 2.
Let $\{f_1, \ldots, f_N\}$ be an orthogonal set of functions in $L^2(M, \mu)$ and let $T$ be ergodic on $M$ with respect to an invariant measure $\mu$. Let $x_l$, $l \in \mathbb{N}$, be a trajectory on $M$. Then, for almost any $x_1 \in M$,
$$u_{kj} = \lim_{m \to \infty} \frac{1}{m} \sum_{l=1}^{m} f_k^c(x_l)\, f_j \circ T(x_l) = \lim_{m \to \infty} \frac{1}{m} \sum_{l=1}^{m} f_k^c(x_l)\, f_j(x_{l+1}). \tag{49}$$
Proof. 
This is a simple consequence of the Birkhoff Ergodic Theorem [50]. Recall that
$$u_{kj} = \langle U f_j, f_k \rangle = \int_M U f_j\, f_k^c \, d\mu, \tag{50}$$
and the last expression is equal to
$$\lim_{m \to \infty} \frac{1}{m} \sum_{l=1}^{m} f_k^c(x_l)\, f_j \circ T(x_l), \tag{51}$$
by the Birkhoff Ergodic Theorem applied to the function $U f_j\, f_k^c$. □
In the case of a non-orthonormal Riesz basis, denote by $\hat{f}_k$ the dual basis vectors, such that
$$\langle f_j, \hat{f}_k \rangle = \delta_{jk}, \tag{52}$$
where $\delta_{jj} = 1$ for any $j$, and $\delta_{jk} = 0$ if $j \neq k$. For the infinite-dimensional Koopman matrix coefficients, we get
$$u_{kj} = \langle U f_j, \hat{f}_k \rangle. \tag{53}$$
Let us consider the finite set of independent functions $\tilde{f} = \{f_1, \ldots, f_N\}$ and the associated dual set $\{\hat{g}_1, \ldots, \hat{g}_N\}$ in the span $\tilde{\mathcal{F}}$ of $\tilde{f}$, satisfying
$$\langle f_j, \hat{g}_k \rangle = \delta_{jk}. \tag{54}$$
Note that the functions $\hat{g}_k$ are unique, since each is orthonormal to $N - 1$ vectors in $\tilde{\mathcal{F}}$. Let
$$\mathcal{F} = \tilde{\mathcal{F}} + \tilde{\mathcal{F}}^\perp, \tag{55}$$
and let $P_{\tilde{\mathcal{F}}}$ be the orthogonal projection onto $\tilde{\mathcal{F}}$ (this in effect assumes all the remaining basis functions are orthogonal to $\tilde{\mathcal{F}}$). Then,
$$\hat{g}_k = P_{\tilde{\mathcal{F}}} \hat{f}_k, \tag{56}$$
since, by the self-adjointness of orthogonal projections and $P_{\tilde{\mathcal{F}}} \hat{f}_k \in \tilde{\mathcal{F}}$,
$$\langle f_j, P_{\tilde{\mathcal{F}}} \hat{f}_k \rangle = \langle P_{\tilde{\mathcal{F}}} f_j, \hat{f}_k \rangle = \langle f_j, \hat{f}_k \rangle = \delta_{jk}. \tag{57}$$
Now, we have
$$\tilde{u}_{kj} = \langle U f_j, \hat{g}_k \rangle = \langle U f_j, P_{\tilde{\mathcal{F}}} \hat{f}_k \rangle = \langle P_{\tilde{\mathcal{F}}} U f_j, \hat{f}_k \rangle, \tag{58}$$
and thus, since $f_j \in \tilde{\mathcal{F}}$, the coefficients $\tilde{u}_{kj}$ are the elements of the finite section $P_{\tilde{\mathcal{F}}} U P_{\tilde{\mathcal{F}}}$ in the basis $\tilde{f}$. It is again possible to obtain $\tilde{u}_{kj}$ from data:
Theorem 3. 
Let $\{f_1, \ldots, f_N\}$ be a non-orthogonal set of functions in $L^2(M, \mu)$ and let $T$ be ergodic on $M$ with respect to an invariant measure $\mu$. Let $x_l$, $l \in \mathbb{N}$, be a trajectory on $M$. Then, for almost any $x_1 \in M$,
$$\tilde{u}_{kj} = \lim_{m \to \infty} \frac{1}{m} \sum_{l=1}^{m} f_j \circ T(x_l)\, \hat{g}_k^c(x_l) = \lim_{m \to \infty} \frac{1}{m} \sum_{l=1}^{m} f_j(x_{l+1})\, \hat{g}_k^c(x_l), \tag{59}$$
where, for any finite $m$, the values $\hat{g}_k^c(x_l)/m$, $l = 1, \ldots, m$, are obtained as the rows of the matrix $(F^* F)^{-1} F^*$, where
$$F = \begin{pmatrix} f_1(X) & f_2(X) & \cdots & f_N(X) \end{pmatrix}, \tag{60}$$
$F^* = (F^c)^T$ is the conjugate (Hermitian) transpose of $F$, and $f_j(X)$ is the column vector $(f_j(x_1)\ \cdots\ f_j(x_m))^T$.
Proof. 
The fact that $\hat{g}_k^c(x_l)/m$, $l = 1, \ldots, m$, are obtained as the rows of the matrix $(F^* F)^{-1} F^*$ follows from
$$(F^* F)^{-1} F^* F = I_N, \tag{61}$$
where $I_N$ is the $N \times N$ identity matrix. The rest of the proof is analogous to the proof of Theorem 2. □
Remark 4.
The key idea in the above results—Theorems 2 and 3—is that we sample the functions $f_j$, $j = 1, \ldots, N$, and the dual basis $\hat{g}_k$, $k = 1, \ldots, N$, at $m$ points in the state space, and then take the limit $m \to \infty$. Thus, besides approximating the action of $U$ using the finite section $\tilde{U}_N$, we also approximate the individual functions $f_j, \hat{g}_k$ by their samples at $m$ points. The corollary of the theorems is that the finite sample approximation $\tilde{U}_{N,m}$, obtained by setting the coefficients to
$$\tilde{u}_{kj,m} = \frac{1}{m} \sum_{l=1}^{m} f_j \circ T(x_l)\, \hat{g}_k^c(x_l), \tag{62}$$
converges to $\tilde{U}_N$ as $m \to \infty$. This result was obtained in [51], without the use of the dual basis, relying on the Moore–Penrose pseudoinverse—a connection we discuss next.
We call $F$ the data matrix. Note that the matrix $F^+ = (F^* F)^{-1} F^*$ is the so-called Moore–Penrose pseudoinverse of $F$. Using matrix notation, from (59), the approximation of the finite section can be written as
$$\tilde{U}_N^a = F^+ F(T(X)) = F^+ F', \tag{63}$$
where $X = (x_1, \ldots, x_m)^T$,
$$F' = F(T(X)) = \begin{pmatrix} f_1(T X) & f_2(T X) & \cdots & f_N(T X) \end{pmatrix}, \tag{64}$$
and $f_k(T X)$ is the column vector
$$\begin{pmatrix} f_k(T x_1) \\ \vdots \\ f_k(T x_m) \end{pmatrix}. \tag{65}$$
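For illustration, the following minimal sketch computes the finite-section approximation (63) from data along a trajectory; the system (an irrational circle rotation) and the Fourier observables are assumptions made here for the example, not the paper's code:

```python
import numpy as np

omega, m, N = 2 * np.pi * np.sqrt(2) / 10, 5000, 5
theta = (0.2 + omega * np.arange(m + 1)) % (2 * np.pi)   # trajectory x_1, ..., x_{m+1}

j = np.arange(1, N + 1)
F  = np.exp(1j * np.outer(theta[:-1], j))   # F[l, k]  = f_k(x_l), with f_k = e^{ik theta}
Fp = np.exp(1j * np.outer(theta[1:],  j))   # Fp[l, k] = f_k(T x_l)

U_Na = np.linalg.pinv(F) @ Fp               # finite section, Eq. (63)
lam = np.linalg.eigvals(U_Na)
print(np.round(np.abs(lam), 6))             # ~1: the true spectrum lies on the unit circle
print(np.round(np.angle(lam), 4))           # ~ k * omega (mod 2 pi), k = 1, ..., N
```

Since the span of these observables is an invariant subspace for the rotation, the computed eigenvalues match the true Koopman eigenvalues, in line with the discussion that follows.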
If we now assume that there is an eigenfunction–eigenvalue pair $(\lambda, \phi)$ of $U$ such that $\phi \in \tilde{\mathcal{F}}$, then
$$P_{\tilde{\mathcal{F}}} U P_{\tilde{\mathcal{F}}}\, \phi = P_{\tilde{\mathcal{F}}} U \phi = U \phi = \lambda \phi. \tag{66}$$
Thus, the eigenvalue $\lambda$ will be in the spectrum of $\tilde{U}_N$. More generally, it is known that an operator $U$ and a projection $P_{\tilde{\mathcal{F}}}$ commute if and only if $\tilde{\mathcal{F}}$ is an invariant subspace of $U$. Thus, the spectrum of the finite-section operator $\tilde{U}_N$ is a subset of the spectrum of $U$ when $\tilde{\mathcal{F}}$ is an invariant subspace.
If an eigenfunction $\phi$ of $U$ is in $\tilde{\mathcal{F}}$, it can be obtained from an eigenvector $a$ of the finite section $\tilde{U}_N$ as
$$\phi = a \cdot \tilde{f} = \sum_{k=1}^{N} a_k f_k, \tag{67}$$
where $a = (a_1, \ldots, a_N)$ satisfies $\tilde{U}_N a = \lambda a$, since, for such $\phi$,
$$U \phi = a \cdot U \tilde{f} = \lambda \phi = \lambda\, a \cdot \tilde{f} = \tilde{U}_N a \cdot \tilde{f}. \tag{68}$$
We have introduced above the dot notation, which produces a function in $\mathcal{F}$ from an $N$-vector $a$ and a set of functions $\tilde{f}$.
Remark 5. 
Theorems 2 and 3 are convenient in their use of sampling along a trajectory and an invariant measure, thus enabling construction of finite section representations of the Koopman operator from a single trajectory. However, the associated space of functions $L^2(\mu)$ is restricted, since the resulting spectrum is on the unit circle. Choosing a more general measure $\nu$ with support in the basin of attraction is possible. Namely, when we construct the finite section, we then use a sequence $x_l$, $l = 1, \ldots, m$, of points that converges weakly to the measure $\nu$, and their images under $T$, $y_l = T(x_l)$. This is the approach in [51]. The potential issue with this approach is the choice of space—typically, $L^2(\nu)$ will have a very large spectrum, for example, filling the entire unit disk of the complex plane [52]. In contrast, Hilbert spaces adapted to the dynamics of a dissipative system can be constructed [14], starting from the ideal of continuous functions that vanish on the attractor, enabling a natural setting for computation of spectral objects for dissipative systems.
A Koopman mode is the projection of a field of observables onto an eigenfunction of $U$. Approximations of Koopman modes can also be obtained using a finite section. Let $\tilde{U}_N$ be a finite section of $\tilde{U}$. Let $h : M \to \mathbb{C}^K$ be a vector observable (thus, a field of observables indexed over a discrete set). Then, the Koopman mode $s_\lambda(h)$ associated with the eigenvalue $\lambda$ of $U$ is obtained as
$$s_\lambda(h) = \langle h, \hat{\phi} \rangle\, \phi, \tag{69}$$
where $\phi, \hat{\phi}$ are the eigenfunction and the dual eigenfunction associated with the eigenvalue $\lambda$. Let $a_j$, $j = 1, \ldots, N$, be the eigenvectors of $\tilde{U}_N$; thus, the associated eigenfunctions of the finite section are
$$\phi_j = a_j \cdot \tilde{f}, \quad j = 1, \ldots, N, \tag{70}$$
where $a_j = (a_{j1}, \ldots, a_{jN})$. Then, we get the dual basis
$$\hat{\phi}_j = \hat{a}_j \cdot \hat{g}, \quad j = 1, \ldots, N, \tag{71}$$
where
$$\langle \hat{a}_j, a_k \rangle = \delta_{jk}. \tag{72}$$
This is easily checked by expanding:
$$\langle \phi_j, \hat{\phi}_i \rangle = \left\langle \sum_{k=1}^{N} a_{jk} f_k,\ \sum_{l=1}^{N} \hat{a}_{il}\, \hat{g}_l \right\rangle = \sum_{k=1}^{N} \sum_{l=1}^{N} a_{jk}\, \hat{a}_{il}^c \langle f_k, \hat{g}_l \rangle = \sum_{k=1}^{N} a_{jk}\, \hat{a}_{ik}^c = \delta_{ji}. \tag{73}$$
Thus, the approximation $\tilde{s}_j(h)$ to the Koopman mode $s_j(h)$ associated with the eigenvalue $\lambda_j$ of the finite section reads
$$\tilde{s}_j(h) = \langle h, \hat{\phi}_j \rangle\, \phi_j = \sum_{k=1}^{N} \hat{a}_{jk} \langle h, \hat{g}_k \rangle\, \phi_j. \tag{74}$$
Now, assume that $h = \tilde{f}$; then
$$\langle \tilde{f}, \hat{\phi}_j \rangle\, \phi_j = \sum_{k=1}^{N} (\hat{a}_j)_k \langle \tilde{f}, \hat{g}_k \rangle\, \phi_j = \hat{a}_j\, \phi_j. \tag{75}$$
Thus, the Koopman modes associated with the data vector of observables $\tilde{f}$ are obtained from the left eigenvectors $\hat{a}_j$ of the finite section $\tilde{U}_N$ of the Koopman operator, scaled by $\phi_j$.
Assuming that the approximation of the finite section, the $N \times N$ matrix $\tilde{U}_N^a$, has distinct eigenvalues $\lambda_1^a, \ldots, \lambda_N^a$, we write the spectral decomposition
$$\tilde{U}_N^a = A \Lambda A^{-1}, \tag{76}$$
where $\Lambda$ is the diagonal eigenvalue matrix and
$$A = \begin{pmatrix} a_1 & a_2 & \cdots & a_N \end{pmatrix} \tag{77}$$
is the column eigenvector matrix. From
$$\tilde{U}_N^a = (F^* F)^{-1} F^* F' = A \Lambda A^{-1}, \tag{78}$$
we get that the data can be reconstructed by first observing that
$$F^* F' = F^* F A \Lambda A^{-1}. \tag{79}$$
This represents $N$ equations with $m$ unknowns for each column of $F'$. Assuming $m > N$, it is an underdetermined set of equations that can have many solutions for the columns of $F'$. Then,
$$F'_p = F (F^* F)^{-1} F^* F' = F A \Lambda A^{-1} \tag{80}$$
is the projection of all these solutions onto the subspace spanned by the columns of $F$. If $m < N$, (79) is overdetermined, and the solution $F'_p$ is the closest—in the least squares sense—to $F'$ in the span of the columns of $F$.
Note that $A^{-1}$ is the matrix whose rows are the Koopman modes $\hat{a}_k$:
$$A^{-1} = \begin{pmatrix} \hat{a}_1 \\ \vdots \\ \hat{a}_N \end{pmatrix}, \tag{81}$$
and, thus,
$$\Lambda A^{-1} = \begin{pmatrix} \lambda_1 \hat{a}_1 \\ \vdots \\ \lambda_N \hat{a}_N \end{pmatrix}. \tag{82}$$
Using (68), we get
$$F A = \begin{pmatrix} \tilde{f}(x_1) \cdot a_1 & \cdots & \tilde{f}(x_1) \cdot a_N \\ \tilde{f}(x_2) \cdot a_1 & \cdots & \tilde{f}(x_2) \cdot a_N \\ \vdots & & \vdots \\ \tilde{f}(x_m) \cdot a_1 & \cdots & \tilde{f}(x_m) \cdot a_N \end{pmatrix} = \begin{pmatrix} \tilde{\phi}_1(x_1) & \cdots & \tilde{\phi}_N(x_1) \\ \tilde{\phi}_1(x_2) & \cdots & \tilde{\phi}_N(x_2) \\ \vdots & & \vdots \\ \tilde{\phi}_1(x_m) & \cdots & \tilde{\phi}_N(x_m) \end{pmatrix}, \tag{83}$$
where $\tilde{\phi}_j$ is an eigenfunction of the finite section and the $a_j$'s are the columns of $A$. Note that, along a trajectory, $\tilde{\phi}_k(x_l) = \tilde{\lambda}_k^{l-1} \tilde{\phi}_k(x_1)$. Using (80), we get
$$F'_p = F A \Lambda A^{-1} = \begin{pmatrix} \sum_{k=1}^{N} \tilde{\lambda}_k\, \tilde{\phi}_k(x_1)\, \hat{a}_k \\ \sum_{k=1}^{N} \tilde{\lambda}_k^2\, \tilde{\phi}_k(x_1)\, \hat{a}_k \\ \vdots \\ \sum_{k=1}^{N} \tilde{\lambda}_k^m\, \tilde{\phi}_k(x_1)\, \hat{a}_k \end{pmatrix}. \tag{84}$$
Remark 6.
The novelty in this section is the explicit treatment of the finite section approximation in terms of the dual basis, which enables the error estimates in the next subsection. The finite section approach is also known under the name Galerkin projection [36]. The relationship between GLA and finite section methods was studied in [40].

4.2. Convergence of the Finite Sample Approximation to the Finite Section

The time averages in (59) converge due to the Birkhoff Ergodic Theorem [50]. In the case when a dynamical system is globally stable to an attractor with a physical invariant measure, the rate of convergence depends on the type of asymptotic dynamics that the system exhibits. Namely, the Koopman operator $U$, when restricted to measure-preserving, on-attractor dynamics, is unitary. Its spectrum can in that case be written as $\sigma_p(U) \cup \sigma_c(U)$, where $\sigma_p$ denotes the point spectrum corresponding to eigenvalues of $U$ and $\sigma_c$ the continuous spectrum [53]. The next theorem describes convergence of the finite sample approximation to $\tilde{U}_N$ when the asymptotic dynamics has only point spectrum—e.g., when the attractor dynamics is that of a fixed point, a limit cycle, or an ergodic rotation on a higher-dimensional torus:
Theorem 4.
Let $T : M \to M$ be a $C^\infty$ dynamical system with an attractor $\mathcal{A}$ and an invariant measure $\mu$ supported on the attractor. Let $U$ be the Koopman operator on $L^2(\mu)$, with a pure point spectrum that is either a non-dense set on the unit circle, or generated by a set of eigenvalues whose imaginary parts $\omega = (\omega_1, \ldots, \omega_m)$ satisfy the Diophantine conditions $|k \cdot \omega - k_0| \geq c_0 |k|^{-\mu}$, $\mu > m + 1$, $k \in \mathbb{Z}^m$, $k_0 \in \mathbb{Z}$. Let $f_j, \hat{g}_k$ be $C^\infty$ for all $j, k$. Note that the coefficients in the finite section matrices depend on the initial condition $x$ of the trajectory that was used to generate the finite section, leading to the notation $\tilde{U}_{N,m}(x)$, $\tilde{U}_N(x)$. Then, for almost all initial conditions $x \in M$,
$$\|\tilde{U}_{N,m}(x) - \tilde{U}_N(x)\|_2 \leq \frac{c(N)}{m}, \tag{85}$$
where $\|\cdot\|_2$ is the Frobenius norm.
Proof. 
We suppress the dependence on $x$ in the notation. The entries $\tilde{u}_{kj,m} = \frac{1}{m} \sum_{l=1}^{m} f_j \circ T(x_l)\, \hat{g}_k^c(x_l)$ of $\tilde{U}_{N,m}$ (see (62)) converge a.e. w.r.t. $\mu$. Since $T$ is conjugate to a rotation on an Abelian group [54], which is either discrete or the dynamics is uniformly ergodic (in which case, by assumption, the Diophantine condition is satisfied), for sufficiently smooth $T$ and $f_j, \hat{g}_k$ [55,56,57], we have
$$|\tilde{u}_{kj,m} - \tilde{u}_{kj}| \leq \frac{c(f_j, \hat{g}_k)}{m}, \tag{86}$$
and the statement follows by setting $c(N) = N^2 \max_{j,k} c(f_j, \hat{g}_k)$. □
Remark 7.
The smoothness of $T, f_j, \hat{g}_k$ and the Diophantine condition are required in order for the solution of the homological equation to exist [55]. Only finite smoothness is required [55], but we have assumed $C^\infty$ for simplicity here.
The above means that $\tilde{U}_{N,m}(x)$ converges to $\tilde{U}_N(x)$ spectrally:
Corollary 1.
Let $\lambda_m$ be an eigenvalue of $\tilde{U}_{N,m}(x)$ with multiplicity $h$. Then, for arbitrary $\epsilon > 0$ and sufficiently large $m > M$, there is a set of eigenvalues $\lambda$ of $\tilde{U}_N(x)$ whose multiplicities sum to $h$ such that $|\lambda_m - \lambda| \leq \epsilon$.
Proof. 
This follows from the continuity of eigenvalues [58] under continuous perturbations (established by Theorem 4). □
Remark 8.
If the $f \circ T^n$ are independent, the convergence estimate above deteriorates to $O(1/\sqrt{m})$. The presence of continuous spectrum without the strong mixing property can lead to convergence estimates $O(1/m^\alpha)$ with $0 < \alpha < 1/2$ [56].
Remark 9.
Spectral convergence in the infinite-dimensional setting is a more difficult question (see [37], in which only convergence along subsequences was established, under certain assumptions). Even if such a result could be obtained, the practical question is the convergence in $m$ and $N$. To address it further, we start with a formula for the error in the finite section.

4.3. The Error in the Finite Section

It is of interest to quantify the error made in the finite section approximations discussed above. We have the following result.
Proposition 1.
Let $\tilde{\phi} = \tilde{e} \cdot \tilde{f}$ be an eigenfunction of the finite section associated with the eigenvalue $\tilde{\lambda}$ and eigenvector $\tilde{e}$. Then,
$$U \tilde{\phi} - \tilde{\lambda} \tilde{\phi} = \tilde{e} \cdot (U \tilde{f} - P_{\tilde{\mathcal{F}}} U \tilde{f}). \tag{87}$$
Proof. 
The first term on the right side of (87) follows from the definition of $\tilde{\phi}$. We then need to show that
$$\tilde{\lambda} \tilde{\phi} = \tilde{e} \cdot P_{\tilde{\mathcal{F}}} U \tilde{f}. \tag{88}$$
However, the left side is just $\tilde{U}_N \tilde{\phi}$, and since $\tilde{f} \in \tilde{\mathcal{F}}$, $\tilde{U}_N \tilde{f} = P_{\tilde{\mathcal{F}}} U P_{\tilde{\mathcal{F}}} \tilde{f} = P_{\tilde{\mathcal{F}}} U \tilde{f}$, which proves the claim. □

5. Krylov Subspace Methods

A particularly useful feature of dynamical systems theory based on Koopman operator methods is that properties of the system can be surmised from data. Indeed, in the previous section, we showed how a finite section of the matrix representation of the Koopman operator can be found from data. However, the discussion was based on the existence of a basis, which typically might come from taking products of basis elements on 1-dimensional subspaces—for example, a Fourier basis on an interval subset of $\mathbb{R}$. Such constructions lead to an exponential growth in the number of basis elements, and the so-called curse of dimensionality. In this section, we study finite section numerical methods that are based on the dynamical evolution of a single observable or many observables—functions on the state space—that span the so-called Krylov subspace. The idea is that one might start with a single observable and, through its evolution, span an invariant subspace of the Koopman operator (note the connection of such methods with the Takens embedding theorem [4,39]). Since the number of basis elements is in this case equal to the number of dynamical evolution steps, in any dimension, Krylov subspace-based methods do not suffer from the curse of dimensionality.

5.1. Single Observable Krylov Subspace Methods

Let $T$ be a discrete-time dynamical system on a compact metric space $M$ equipped with a measure $\mu$ on the Borel $\sigma$-algebra. Let $\mathcal{F}$ be a Hilbert space of functions on $M$ (for suitable spaces, see [14]). For a finite-time evolution of an initial function $f(x) \in \mathcal{F}$ under $T$, we get a (Krylov) sequence
$$(f(x), f \circ T(x), \ldots, f \circ T^N(x)) = (f(x), U f(x), \ldots, U^N f(x)), \tag{89}$$
where $U$ is the Koopman operator associated with $T$. Let $f_i = f \circ T^{i-1}(x)$. Then, clearly, $f_{i+1} = U f_i$ for $i = 1, \ldots, N$. If $f_{N+1}$ were in the space spanned by $f_1, \ldots, f_N$, and these were linearly independent functions, we would have
$$f_{N+1} = \sum_{i=1}^{N} c_i f_i, \tag{90}$$
for some constants $c_i$, $i = 1, \ldots, N$. In that case, the operator $U$ would have a finite-dimensional approximation $\tilde{U}_N$ on $\mathrm{span}(f_1, \ldots, f_N)$, given by the companion matrix
$$\tilde{U} = C = \begin{pmatrix} 0 & 0 & \cdots & 0 & c_1 \\ 1 & 0 & \cdots & 0 & c_2 \\ 0 & 1 & \cdots & 0 & c_3 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & c_N \end{pmatrix}. \tag{91}$$
The above is, in the terminology of the previous section, the finite section representation of $U$.
Example 3.
Let $V$ be a subspace of $L^2(M)$ spanned by eigenfunctions $e_1, \ldots, e_N$ that satisfy
$$U e_j = e^{i 2\pi \omega_j} e_j. \tag{92}$$
Let
$$g = \sum_{j=1}^{N} a_j e_j. \tag{93}$$
Then,
$$U^k g = \sum_{j=1}^{N} a_j U^k e_j = \sum_{j=1}^{N} a_j e^{i 2\pi k \omega_j} e_j, \tag{94}$$
and
$$U^N g = \sum_{j=1}^{N} d_j e_j = \sum_{j=1}^{N} a_j e^{i 2\pi N \omega_j} e_j = \sum_{k=1}^{N} c_k \sum_{j=1}^{N} a_j e^{i 2\pi (k-1) \omega_j} e_j = \sum_{j=1}^{N} \left( \sum_{k=1}^{N} c_k e^{i 2\pi (k-1) \omega_j} \right) a_j e_j. \tag{95}$$
Thus, the numbers $c_k$, $k = 1, \ldots, N$, in the companion matrix are determined by the $N$ equations with $N$ unknowns
$$\sum_{k=1}^{N} c_k e^{i 2\pi (k-1) \omega_j} a_j = d_j, \quad j = 1, \ldots, N. \tag{96}$$
Now, let $a_1 = 1$, $a_j = 0$, $j = 2, \ldots, N$. We get $U^N g = d_1 e_1 = e^{i 2\pi N \omega_1} e_1$ and, thus,
$$c_1 = e^{i 2\pi N \omega_1}. \tag{97}$$
It is clear that $c_j = 0$, $j = 2, \ldots, N$. Note that, if $\omega_1 = j/N$ for some integer $j$, we get $c_1 = 1$ and, thus, the companion matrix becomes the circulant shift matrix
$$\tilde{U} = \begin{pmatrix} 0 & 0 & \cdots & 0 & 1 \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix}. \tag{98}$$
Consider now the case when $U^N f$ is not in the span of $f_1, \ldots, f_N$. We have the projection formula (58),
$$\tilde{u}_{kj} = \langle U f_j, \hat{g}_k \rangle. \tag{99}$$
Since
$$\langle f_j, \hat{g}_k \rangle = \delta_{k,j}, \tag{100}$$
for $j = 1, \ldots, N - 1$, $k = 1, \ldots, N$, we have
$$\tilde{u}_{kj} = \langle U f_j, \hat{g}_k \rangle = \langle f_{j+1}, \hat{g}_k \rangle = \delta_{k, j+1}, \tag{101}$$
which produces zeros in all columns of row $k$ except in the column $j = k - 1$, where we have a $1$. There is no column $j = k - 1$ for row 1, so there we get all zeros up to the last column. Now, for the last column, we have
$$\tilde{u}_{kN} = \langle U f_N, \hat{g}_k \rangle = \langle P_{\tilde{\mathcal{F}}} U f_N, \hat{g}_k \rangle, \tag{102}$$
and, thus, $c_k$ in the matrix (91) is the $k$-th coefficient of the orthogonal projection of $U f_N$ onto $\tilde{\mathcal{F}}$ in the basis $\tilde{f}$, which here consists of the Krylov sequence of independent observables
$$(f, U f, \ldots, U^{N-1} f) \equiv (f_1, \ldots, f_N), \tag{103}$$
where we defined $f_1, \ldots, f_N$ by the last relationship.

5.2. Error in the Companion Matrix Representation

Let $\tilde{e} = (e_1, \ldots, e_N)^T$ be an eigenvector of $\tilde{U}$ satisfying
$$\tilde{U} \tilde{e} = \tilde{\lambda} \tilde{e}, \tag{104}$$
and let $\tilde{f} = (f_1, f_2, \ldots, f_N)$. The action of $U$ on $\tilde{e} \cdot \tilde{f}$ is given by
$$U\, \tilde{e} \cdot \tilde{f} = \tilde{e} \cdot \tilde{f} \circ T = \sum_{i=1}^{N} e_i\, f_i \circ T = \sum_{i=1}^{N} e_i f_{i+1}. \tag{105}$$
Now, we also have
$$\tilde{U} \tilde{e} = \begin{pmatrix} 0 & 0 & \cdots & 0 & c_1 \\ 1 & 0 & \cdots & 0 & c_2 \\ 0 & 1 & \cdots & 0 & c_3 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & c_N \end{pmatrix} \begin{pmatrix} e_1 \\ e_2 \\ e_3 \\ \vdots \\ e_N \end{pmatrix} = \begin{pmatrix} c_1 e_N \\ e_1 + c_2 e_N \\ e_2 + c_3 e_N \\ \vdots \\ e_{N-1} + c_N e_N \end{pmatrix} = \begin{pmatrix} 0 \\ e_1 \\ e_2 \\ \vdots \\ e_{N-1} \end{pmatrix} + e_N \begin{pmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_N \end{pmatrix} = \tilde{\lambda} \tilde{e}. \tag{106}$$
Using this in (105), and denoting $\tilde{c} = (c_1, \ldots, c_N)$, we obtain
$$U\, \tilde{e} \cdot \tilde{f} = \tilde{\lambda}\, \tilde{e} \cdot \tilde{f} - e_N\, \tilde{c} \cdot \tilde{f} + e_N\, f \circ T^N = \tilde{\lambda}\, \tilde{e} \cdot \tilde{f} + e_N (f \circ T^N - \tilde{c} \cdot \tilde{f}). \tag{107}$$
This formula also follows directly from (87) by observing that $U \tilde{f} = (f_2, \ldots, f_N, f_{N+1})$, where $f_{N+1} = f \circ T^N$, the fact that $P (f_2, \ldots, f_N) = (f_2, \ldots, f_N)$, and $P_{\tilde{\mathcal{F}}}\, f \circ T^N = \tilde{c} \cdot \tilde{f}$. Thus,
$$\tilde{e} \cdot (U \tilde{f} - P_{\tilde{\mathcal{F}}} U \tilde{f}) = e_N (f \circ T^N - \tilde{c} \cdot \tilde{f}). \tag{108}$$
We have the following simple consequence:
Lemma 1.
If $f \circ T^N$ is in $\mathrm{span}(f_1, \ldots, f_N)$, then $\tilde{\phi} = \tilde{e} \cdot \tilde{f}$ is an eigenfunction of $U$ associated with the eigenvalue $\tilde{\lambda}$.
If the assumption that $f \circ T^N$ is in $\mathrm{span}(f_1, \ldots, f_N)$ is relaxed, and $\tilde{c} \cdot \tilde{f}$ is the orthogonal projection of $f \circ T^N$ onto $\tilde{\mathcal{F}}$, then $\tilde{e} \cdot \tilde{f}$ is an approximation to an eigenfunction of $U$ with approximate eigenvalue $\tilde{\lambda}$, with the error
$$e_N (f \circ T^N - \tilde{c} \cdot \tilde{f}) = e_N\, r, \tag{109}$$
where $r = f \circ T^N - \tilde{c} \cdot \tilde{f}$ is called the residual. Note that Equation (107) can be written as
$$|U\, \tilde{e} \cdot \tilde{f} - \tilde{\lambda}\, \tilde{e} \cdot \tilde{f}| = |e_N\, r|, \tag{110}$$
which means that $\tilde{e} \cdot \tilde{f}$ is in the $(\tilde{\lambda}, \epsilon)$-pseudospectrum of $U$ for $\epsilon = |e_N\, r|$ (see [59]).
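The residual bound (110) is easy to evaluate from data. The sketch below (with an assumed quasi-periodic example; all names and parameters are ours, for illustration only) builds the companion-matrix finite section from a Krylov sequence of time delays and prints, for each eigenvalue, the pseudospectral error bound $|e_N|\,\|r\|$:

```python
import numpy as np

# Assumed example: a circle rotation sampled along one trajectory; the observable
# spans 4 Koopman eigenfunctions, so a length-3 Krylov basis leaves a residual.
omega, m, N = 2 * np.pi * np.sqrt(2) / 10, 3000, 3
theta = (0.1 + omega * np.arange(m + N)) % (2 * np.pi)
f = np.cos(theta) + 0.5 * np.cos(2 * theta)

K = np.column_stack([f[i:i + m] for i in range(N)])  # samples of f, Uf, ..., U^{N-1}f
target = f[N:N + m]                                  # samples of f o T^N
c = np.linalg.lstsq(K, target, rcond=None)[0]        # c~: projection coefficients
r = target - K @ c                                   # samples of r = f o T^N - c~ . f~

C = np.zeros((N, N))                                 # companion matrix (91)
C[1:, :N - 1] = np.eye(N - 1)
C[:, -1] = c
lam, E = np.linalg.eig(C)
r_norm = np.linalg.norm(r) / np.sqrt(m)              # empirical L2 norm of the residual
for k in range(N):
    e = E[:, k] / np.linalg.norm(E[:, k])            # unit-norm eigenvector e~
    print(abs(lam[k]), abs(e[-1]) * r_norm)          # eigenvalue, bound |e_N| ||r||
```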
The calculations above, first presented in [39], allow us to show how the finite section spectrum approximates the spectrum of the Koopman operator when the number of functions N in the Krylov sequence goes to infinity. The specific sense of approximation here is pseudospectral, and for a class of systems that satisfy a convergence requirement on the Krylov sequence, convergence in the pseudospectral sense can be proven:
Lemma 2.
Let the Krylov sequence satisfy
$$\lim_{N \to \infty} \| f \circ T^N - \tilde{c} \cdot \tilde{f} \| = 0. \tag{111}$$
Then, for any $\epsilon > 0$ and large enough $N$, an eigenfunction of the finite section $\tilde{\phi}$ is in the $\epsilon$-pseudospectrum of $U$.
Proof. 
Without loss of generality, we assume $|\tilde{e}| = 1$ and, thus, $|e_N| \leq 1$. From (109), taking $N$ large enough, we get
$$|U\, \tilde{e} \cdot \tilde{f} - \tilde{\lambda}\, \tilde{e} \cdot \tilde{f}| = |e_N (f \circ T^N - \tilde{c} \cdot \tilde{f})| \leq \| f \circ T^N - \tilde{c} \cdot \tilde{f} \| < \epsilon, \tag{112}$$
which proves the claim. □
Theorem 5.  
Assume that, for any function $g$ in the space of observables $\mathcal{F}$ equipped with a norm $\|\cdot\|$, we have
$$g = \sum_{k=1}^{\infty} c_k \phi_k, \tag{113}$$
where the $\phi_k$ are normalized ($\|\phi_k\| = 1$) eigenfunctions of the Koopman operator associated with eigenvalues $|\lambda_k| \leq 1$; i.e., $U$ has a pure point spectrum in $\mathcal{F}$. Let $\tilde{f}$ be the Krylov sequence generated by $f$ and $\mathcal{F}_f$ the cyclic invariant subspace of $U$ generated by $f$. Let $P_{\tilde{\mathcal{F}}_n}$ be the orthogonal projection onto the subspace of $\mathcal{F}$ generated by the first $n$ elements of the Krylov sequence. Let $\tilde{\lambda}, \tilde{\phi}$ be an eigenvalue–eigenfunction pair of the finite section. Then, for any $\epsilon > 0$, there is an $N$ such that $n \geq N$ implies
$$\| U \tilde{\phi} - \tilde{\lambda} \tilde{\phi} \| < \epsilon. \tag{114}$$
Proof. 
Due to Lemma 2, we only need to prove that, under the assumption on the spectrum,
$$\lim_{N \to \infty} \| f \circ T^N - \tilde{c} \cdot \tilde{f} \| = 0. \tag{115}$$
Due to the assumption in Equation (113), we have
$$f = \sum_{k=1}^{\infty} c_k \phi_k, \tag{116}$$
and, thus,
$$f^N = f \circ T^N = \sum_{k=1}^{\infty} c_k \lambda_k^N \phi_k. \tag{117}$$
We split the spectrum of $U$ in $\mathcal{F}$ as $\sigma(U) = \sigma(U)|_{S^1} + \sigma(U)|_D$, where $D$ is the interior of the unit disk in the complex plane. Then,
$$f \circ T^N = f_{S^1}^N + f_D^N = \sum_{\lambda_k \in \sigma(U)|_{S^1}} c_k \lambda_k^N \phi_k + \sum_{\lambda_j \in \sigma(U)|_D} c_j \lambda_j^N \phi_j. \tag{118}$$
For sufficiently large $N$, for any $\epsilon/2$,
$$\Big\| \sum_{\lambda_j \in \sigma(U)|_D} c_j \lambda_j^N \phi_j \Big\| \leq \epsilon/2. \tag{119}$$
In addition,
$$\sum_{\lambda_k \in \sigma(U)|_{S^1}} c_k \lambda_k^N \phi_k \tag{120}$$
is an almost periodic function and, thus, for a sufficiently large $M > N$, we have
$$\| f_{S^1}^M - f_{S^1}^N \| \leq \epsilon/2. \tag{121}$$
Combining (119) and (121) proves the claim, since $f^M$ is $\epsilon$-close to an element $f_j$ of $\mathrm{span}(f, \ldots, f^{M-1})$, and $\| f \circ T^M - \tilde{c} \cdot \tilde{f} \|$ is the minimal distance of $f^M$ to the subspace $\mathrm{span}(f, \ldots, f^{M-1})$ that contains $f_j$. □
Remark 10.
The above construction only requires the Krylov sequence, and shows that the finite section approximation reveals the pseudospectrum of the Koopman operator. Thus, methods relying on Krylov sequences are “sampling” the high-dimensional space and can approximate the part of the spectrum contained in their invariant subspace irrespective of the dimension of the problem.
The use of Krylov sequences is also of interest because they span the smallest invariant subspace that the observable $f$ belongs to:
Theorem 6. 
Let $f$ be an observable. Then, $\mathrm{span}(f, f \circ T, \ldots, f \circ T^n, \ldots)$ is the smallest forward-invariant subspace of $U$ that contains $f$.
Proof. 
Assume not. Then, there is a proper subspace $A \subsetneq \mathrm{span}(f, f \circ T, \ldots, f \circ T^n, \ldots)$ that contains $f$, meaning that there is an $f \circ T^j$, for some integer $j$, that is not in $A$. However, then $A$ is not invariant, since it contains $f$ while $U^j f$ is not in $A$. □
Remark 11. 
The assumptions in Theorem 5 are satisfied by any dynamical system with a quasi-periodic attractor, with the space of observables being an appropriately constructed Hilbert space [14]. However, they exclude systems with mixed or purely continuous spectrum, as evidenced by Example 2.

5.3. Krylov Sequences from Data

If $M$ is not a finite discrete set, numerically, we do not have $\tilde{f}$ on the whole state space. Instead, we might be able to sample the function $f$ on a discrete subset of points $X = \{x_1, \ldots, x_m\} \subset M$. We can think of each sampled observable as a column vector and again form the $m \times N$ data matrix
$$F = \begin{pmatrix} f_1(X) & f_2(X) & \cdots & f_N(X) \end{pmatrix} \tag{122}$$
and its first iterate
$$F' = \begin{pmatrix} f_2(X) & f_3(X) & \cdots & f_{N+1}(X) \end{pmatrix} = \begin{pmatrix} f_1(T X) & f_2(T X) & \cdots & f_N(T X) \end{pmatrix} = \begin{pmatrix} f_1(Y) & f_2(Y) & \cdots & f_N(Y) \end{pmatrix}, \tag{123}$$
where $Y = T X$. We have
$$F' = F C, \tag{124}$$
or
$$C = F^+ F', \tag{125}$$
as could be surmised from (63), and the following corollary of Lemma 1 holds:
Corollary 2.
Let $f \circ T^N$ be in $\mathrm{span}(f_1, \ldots, f_N)$, and let $\mathrm{rank}\, F = N$. Let $\tilde{\lambda}, \tilde{e}$ be an eigenvalue and the corresponding eigenvector of the companion matrix $\tilde{U}$. Then, the eigenvalue $\tilde{\lambda}$ of $\tilde{U}$ is an eigenvalue of $U$, and $\tilde{f}(X) \cdot \tilde{e}$ is a sample of the corresponding eigenfunction of $U$ on $X$.
Proof. 
As soon as we know $N$ samples of the function $f$, the vector $\tilde{c}$ in the companion matrix is fixed, and thus the residual is zero. □
When $x_{k+1} = T x_k$, $k = 1, \ldots, m + n - 1$, i.e., the sampling points are on a single trajectory, the matrix $F$ becomes the Hankel–Takens matrix
$$H = \begin{pmatrix} f(x) & f(T x) & \cdots & f(T^{n-1} x) \\ f(T x) & f(T^2 x) & \cdots & f(T^n x) \\ \vdots & \vdots & & \vdots \\ f(T^m x) & f(T^{m+1} x) & \cdots & f(T^{m+n-1} x) \end{pmatrix}. \tag{126}$$
The reason for calling $H$ the Hankel–Takens matrix is that, besides the usual property of Hankel matrices of having constant skew-diagonal terms—in this case, $H_{i,j} = f(T^k x)$, where $k = i + j - 2$—it also satisfies $H_{i,j+1} = H_{i,j} \circ T = H_{i+1,j}$, a property related to the Takens embedding [4,60].
Let $C$ have distinct eigenvalues. We diagonalize it as
$$C = A \Lambda A^{-1}. \tag{127}$$
The companion matrix is diagonalized by the so-called Vandermonde matrix
$$A^{-1} = \begin{pmatrix} 1 & \lambda_1 & \lambda_1^2 & \cdots & \lambda_1^{N-1} \\ 1 & \lambda_2 & \lambda_2^2 & \cdots & \lambda_2^{N-1} \\ 1 & \lambda_3 & \lambda_3^2 & \cdots & \lambda_3^{N-1} \\ \vdots & & & & \vdots \\ 1 & \lambda_N & \lambda_N^2 & \cdots & \lambda_N^{N-1} \end{pmatrix}. \tag{128}$$
Thus, the Koopman modes of the vector of observables $\tilde{f}$ composed of time delays are precisely the rows of the Vandermonde matrix, while the right eigenvectors of $C$ are the columns of the inverse of the Vandermonde matrix.
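As a concrete illustration of this subsection, the following minimal sketch (an assumed quasi-periodic example, not code from the paper) forms the Hankel–Takens matrix from a single scalar trajectory, computes the companion matrix via the pseudoinverse as in (125), and recovers the eigenvalue arguments; since the observable here spans a four-dimensional invariant subspace, four delays reproduce the eigenvalues exactly, in line with Corollary 2.

```python
import numpy as np

omega = 2 * np.pi * np.sqrt(2) / 10
n, m = 4, 2000                                # n delays, m rows of the Hankel-Takens matrix
theta = (0.4 + omega * np.arange(m + n)) % (2 * np.pi)
s = np.cos(theta) + 0.5 * np.cos(2 * theta)   # scalar observable along a single trajectory

H  = np.column_stack([s[j:j + m] for j in range(n)])          # Hankel-Takens matrix (126)
Hp = np.column_stack([s[j + 1:j + 1 + m] for j in range(n)])  # its one-step shift
C  = np.linalg.pinv(H) @ Hp                   # companion matrix, C = H^+ H' as in (125)
lam = np.linalg.eigvals(C)
print(np.sort(np.angle(lam)))                 # ~ {-2w, -w, w, 2w}: recovered frequencies
```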

5.4. Schmid’s Dynamical Mode Decomposition as a Finite Section Method

The key numerical issue with Krylov subspace-based algorithms is that the procedure requires inversion of the Vandermonde matrix (128), and the condition number $\|A\|\, \|A^{-1}\|$ (where $\|\cdot\|$ is the induced matrix norm) of the Vandermonde matrix scales exponentially in its size provided $\lambda_k \neq e^{i\omega}$ for some $k$ (i.e., some node is off the unit circle), even if $|\lambda_k| \approx 1$ [61]. There are a variety of ways to resolve this issue, and the first one that appeared [21] is the following version of the Koopman operator approximation, based on the singular value decomposition.
Let
$$F = G \Sigma V^* \tag{129}$$
be the "thin" singular value decomposition of the $m \times N$ "data matrix" $F$, whose columns are samples of the functions $f_1, \ldots, f_N$. The $m \times N$ matrix $G$ has orthonormal columns, the $N \times N$ matrix $V$ is unitary, $V^*$ is the conjugate transpose of $V$, and $\Sigma$ is an $N \times N$ diagonal matrix. Note that
$$F V = G \Sigma, \tag{130}$$
and, thus,
$$F v_j = \sigma_j u_j, \quad j = 1, \ldots, N, \tag{131}$$
where $v_j$ is the $j$-th column of $V$ and $u_j$ is the $j$-th column of $G$. Clearly, then, the $u_j$ are linear combinations of the vectors $f_1(X), f_2(X), \ldots, f_N(X)$, and for $m \geq N$, there are $N$ such linear combinations. We can consider each of these combinations as a sample of a function,
$$\tilde{u}_j = \sigma_j^{-1}\, v_j \cdot \tilde{f}, \tag{132}$$
where $\tilde{f} = (f_1, \ldots, f_N)$ is the vector of independent functions. In other words,
$$\tilde{u} = (\tilde{u}_1, \ldots, \tilde{u}_N) \tag{133}$$
spans $\tilde{\mathcal{F}}$ and is an orthogonal basis for it. Now, $G$ is in fact the data matrix whose columns are the $u_j$'s:
$$G = \begin{pmatrix} u_1 & u_2 & \cdots & u_N \end{pmatrix} = \begin{pmatrix} \tilde{u}_1(X) & \tilde{u}_2(X) & \cdots & \tilde{u}_N(X) \end{pmatrix}. \tag{134}$$
Then, the finite section in the basis $\tilde{u}$ is
$$\tilde{U}_N^S = G^+ G' = (G^* G)^{-1} G^* G' = G^* F' V \Sigma^{-1}, \tag{135}$$
where $G' = F' V \Sigma^{-1}$ is the matrix of samples of the iterates $\tilde{u}_j \circ T$. Now, since
$$G^* = (F V \Sigma^{-1})^* = \Sigma^{-1} V^* F^*, \tag{136}$$
$$G^* G = \Sigma^{-1} V^* F^* F V \Sigma^{-1}, \tag{137}$$
and, thus,
$$(G^* G)^{-1} G^* = \Sigma V^* (F^* F)^{-1} V \Sigma\, \Sigma^{-1} V^* F^* = \Sigma V^* (F^* F)^{-1} F^*, \tag{138}$$
we have
$$\tilde{U}_N^S = \Sigma V^* (F^* F)^{-1} F^* F' V \Sigma^{-1} = \Sigma V^* F^+ F' V \Sigma^{-1} = \Sigma V^* \tilde{U}_N^a V \Sigma^{-1}. \tag{139}$$
Therefore, $\tilde{U}_N^S$ and $\tilde{U}_N^a$ are similar matrices and thus have the same spectrum. If $a_j$ is an eigenvector of $\tilde{U}_N^S$, then $V \Sigma^{-1} a_j$ is an eigenvector of $\tilde{U}_N^a$, and, according to (67),
$$\tilde{\phi}_j^N = G a_j \tag{140}$$
is a finite section approximation to an eigenfunction of the Koopman operator.
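The following minimal sketch (assuming the same illustrative rotation and Fourier observables as in the earlier examples; none of the names come from the paper) implements the SVD-based finite section (135) and checks that its spectrum agrees with that of $F^+ F'$:

```python
import numpy as np

rng = np.random.default_rng(0)
omega, m, N = 2 * np.pi * np.sqrt(2) / 10, 2000, 5
theta = rng.uniform(0, 2 * np.pi, m)          # sample points X on the circle
j = np.arange(1, N + 1)
F  = np.exp(1j * np.outer(theta, j))                          # f_j = e^{ij theta} on X
Fp = np.exp(1j * np.outer((theta + omega) % (2 * np.pi), j))  # f_j on T(X)

G, sig, Vh = np.linalg.svd(F, full_matrices=False)  # thin SVD, F = G Sigma V*
U_S = G.conj().T @ Fp @ Vh.conj().T @ np.diag(1.0 / sig)      # finite section (135)
lam, A = np.linalg.eig(U_S)
print(np.round(np.sort(np.angle(lam)), 4))    # same spectrum as F^+ F': j*omega (mod 2pi)
phi = G @ A                                   # eigenfunction samples, as in (140)
```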

6. Weak Eigenfunctions from Data

In the sections above, we presented finite section approximations of the Koopman operator, starting from the idea that bounded infinite-dimensional operators are, given a basis, represented by infinite matrices, which we then truncated. In this section, we present an alternative point of view that provides additional insight into the relationship between the finite-dimensional approximation and the operator. As a consequence of this approach, we show how the concept of a weak eigenfunction, first discussed in [37], arises.
We start again with a vector of observables, f ˜ = ( f 1 , , f N ) . Except when we can consider this problem analytically, we know the values of observables only on a finite set of points in state space, X = { x 1 , , x m } . Assume also that we know the value of f ˜ at Y = { y k } = { T ( x k ) } . We can think of f j ( X ) = ( f j ( x 1 ) , , f j ( x m ) ) , j { 1 , , N } as a sample of the observable f j on X M .
Consider the case x k + 1 = T x k , k = 1 , , m 1 . There are many m × m matrices A such that
f j ( Y ) T = A f j ( X ) T
One of them is the transpose of the companion matrix (91)
U ˜ T = 0 1 0 0 0 0 1 0 0 0 0 0 c j 1 c j 2 c j 3 c j m ,
but there are many values that the $c_{jk}$, $k = 1, \ldots, m$, can assume, since the only requirement on them is
$\sum_{k=1}^m c_{jk} f_j(x_k) = f_j(y_m),$
i.e., there are m unknowns and only one equation to determine them. However, the coefficients $c_{jk}$ need not depend on j, since the operator that maps the vectors $f_j(X)^T$ to $f_j(Y)^T$ does not depend on j. Clearly, if there are m observables, then we get
k = 1 m c k f j ( x k ) = f j ( y m ) , j { 1 , , m } ,
and, thus, we can determine c = ( c 1 , , c m ) uniquely.
If the number of observables N is larger than m, then f j k = f j ( x k ) are elements of an N × m matrix F (note that this data matrix is precisely the transpose of the one we have used before, in (122)) and, thus, there are not enough components in c to solve
F c = f ˜ ( y m ) T .
This system is overdetermined and so, in general, does not have a solution. The Dynamic Mode Decomposition method then solves for c using the following procedure: let P be the orthogonal projection onto the span of the columns of F. Then,
$F c_{MP} = P \tilde{f}(y_m)^T,$
has a solution, provided F has rank m: $P \tilde{f}(y_m)^T$ is an N-dimensional vector in the span of the columns of F and thus can be written as a linear combination of those vectors. In fact, we can write
$c_{MP} = F^+ \tilde{f}(y_m)^T.$
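Computationally, this is a single least-squares solve with the Moore–Penrose pseudoinverse. A minimal sketch (our function and variable names), assuming F is the N × m data matrix above and fy the vector $\tilde{f}(y_m)^T$:

```python
import numpy as np

def companion_from_data(F, fy):
    """Transposed companion matrix with bottom row c = F^+ fy.

    F : (N, m) array with F[j, k] = f_j(x_k); fy : (N,) array with fy[j] = f_j(y_m).
    """
    c, *_ = np.linalg.lstsq(F, fy, rcond=None)    # least-squares solution c_MP
    m = len(c)
    C_T = np.zeros((m, m), dtype=complex)
    C_T[np.arange(m - 1), np.arange(1, m)] = 1.0  # shift part: x_k -> x_{k+1}
    C_T[-1, :] = c
    return C_T
```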
We now discuss the nature of the approximation of the Koopman operator U by the companion matrix (91)
$\tilde{U}^T = C^T = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ c_1 & c_2 & c_3 & \cdots & c_m \end{pmatrix},$
where $c = (c_1, \ldots, c_m) = c_{MP}$ is obtained from Equation (147).
Let $S = \{x_1, \ldots, x_m\}$ be an invariant set for $T : M \to M$, where M is a measure space with measure μ. Consider the space $C|_S$ of continuous functions in $L^2(\mu)$ restricted to $S$. This is an m-dimensional vector space. The restriction $U|_S$ of the Koopman operator to $C|_S$ is then a finite-dimensional linear operator that can be represented in a basis by an m × m matrix. An explicit example is obtained when $x_j$, $j = 1, \ldots, m$, are successive points on a periodic trajectory; the resulting matrix representation in the standard basis is the m × m cyclic permutation matrix
$$\Pi = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 1 & 0 & 0 & \cdots & 0 \end{pmatrix}.$$
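The eigenvalues of $\Pi$ are the m-th roots of unity, i.e., the Koopman eigenvalues associated with a period-m orbit. A minimal numerical check (ours; m = 5 is an arbitrary test size):

```python
import numpy as np

m = 5
Pi = np.roll(np.eye(m), 1, axis=1)  # ones on the superdiagonal, one in the corner
lams = np.linalg.eigvals(Pi)
assert np.allclose(lams**m, 1.0)    # eigenvalues are m-th roots of unity
```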
If $S$ is not an invariant set, an m × m approximation of the reduced Koopman operator can still be provided. Namely, if we know the restrictions $(f_j)|_S$, $j = 1, \ldots, m$, of m independent functions in $C|_S$, and we also know $f_j(T x_k)$, $j, k \in \{1, \ldots, m\}$, we can provide a matrix representation of $U|_S$. However, while in the case where $S$ is an invariant set the iterate of any function in $C|_S$ can be obtained in terms of the iterates of the m independent functions, this is not necessarily so when $S$ is not invariant: the fact that $S$ is not invariant means that functions in $C|_S$ do not necessarily experience linear dynamics under $U|_S$. However, one can take N observables $f_j$, $j = 1, \ldots, N$, where $N > m$, and approximate the nonlinear dynamics using linear regression on $\tilde{f}(X) \equiv (f(x_1), \ldots, f(x_m))$, where $f(\cdot) = (f_1(\cdot), \ldots, f_N(\cdot))^T$, i.e., by finding the m × m matrix C that gives the best approximation of the data in the Frobenius norm,
$$C^T = \arg\min_{B \in \mathbb{C}^{m \times m}} \| f(Tx) - f(x) B \|_F \equiv \arg\min_{B \in \mathbb{C}^{m \times m}} \left\| \big(f_j(Tx_k)\big)_{j,k=1}^{N,m} - \big(f_j(x_k)\big)_{j,k=1}^{N,m}\, B \right\|_F.$$
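This regression is again a single least-squares problem, solved column by column. A minimal sketch (our names), with FX and FY holding the N × m arrays $(f_j(x_k))$ and $(f_j(Tx_k))$:

```python
import numpy as np

def regression_matrix(FX, FY):
    """Return C^T = argmin_B ||FY - FX @ B||_F for (N, m) data arrays, N > m."""
    CT, *_ = np.linalg.lstsq(FX, FY, rcond=None)  # CT = FX^+ FY
    return CT
```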
We have the following:
Theorem 7.
Let $T : M \to M$ be a measure μ-preserving transformation on a metric space M, and let $S_m = \{x_j\}$, $j = 1, \ldots, m$, be a trajectory such that, as $m \to \infty$, $S_m$ becomes dense in a compact invariant set $A \subset M$. Then, for any N-vector of observables $f \in C|_{S_m}$, $N \geq m$, we have
$\lim_{m \to \infty} \big|\, U|_{S_m} f - C f \,\big| = 0.$
Proof. 
By density of $S$, for sufficiently large M, $m \geq M$ implies $|x_m - x_j| < \epsilon_M$ for some $x_j \in \{x_1, \ldots, x_{m-1}\}$. By continuity of the observables,
$\big|\, U|_{S_m} f - C f \,\big| \leq D \epsilon_M$
for some constant D. Taking M sufficiently large makes $\epsilon_M \to 0$. □
Consider an m-dimensional eigenvector $\tilde{e} = (\tilde{e}_1, \ldots, \tilde{e}_m)$ of $\tilde{U}^T$, associated with the eigenvalue λ. Since the eigenvector satisfies
$\tilde{U}^T \tilde{e} = \lambda \tilde{e},$
we have
$\tilde{e}_{k+1} = \lambda \tilde{e}_k, \quad k = 1, \ldots, m-1.$
Thus, $\tilde{e}$ can be considered as an eigenfunction on the finite set $\{x_1, \ldots, x_{m-1}\}$. On the last point of the sample, $x_m$, we have
$\sum_{j=1}^m c_j \tilde{e}_j = \lambda \tilde{e}_m.$
Let us now consider the concept of a weak eigenfunction, or eigendistribution. Let ν be some prior measure of interest on M. Let ϕ be a bounded function that satisfies $\phi \circ T = \lambda \phi$. We construct the functional L on $C(M)$ by defining
$L(h) = \int_M h \phi \, d\nu.$
Setting $U L(h) = \int_M h(x)\, \phi(Tx) \, d\nu(x)$, we get
$U L(h) = \int_M h(x)\, \phi(Tx) \, d\nu(x) = \lambda \int_M h \phi \, d\nu = \lambda L(h).$
Clearly, this is satisfied if ϕ is a continuous eigenfunction of U at the eigenvalue λ. However, Equation (157) is applicable in cases with much less regularity. Namely, if μ is a measure and
$L(f) = \int f(x) \, d\mu(x)$
is the associated linear functional, then we can define the action of U on L by
$U L(f) = \int f(x) \, d\mu(Tx).$
Consider, for example, a set of points $x_k$, $k \in \mathbb{N}$, and assume that for every continuous h there exists the limit
$L(h) = \lim_{K \to \infty} \frac{1}{K} \sum_{k=1}^K h(x_k) \, \tilde{e}(x_k).$
Then, by the Riesz representation theorem, there is a measure μ such that
L ( h ) = M h d μ .
Definition 1.
Let a measure μ be such that the associated linear functional L satisfies
$U L = \lambda L,$
for some $\lambda \in \mathbb{C}$. Then, μ is called a weak eigenfunction of U.
Now, we have
$U L(h) = \lim_{K \to \infty} \frac{1}{K} \sum_{k=1}^K h(x_k) \, \tilde{e}(x_{k+1}) = \lambda \lim_{K \to \infty} \frac{1}{K} \sum_{k=1}^K h(x_k) \, \tilde{e}(x_k) = \lambda L(h),$
proving the following theorem:
Theorem 8.
Consider a set of points $x_k$, $k \in \mathbb{N}$, on a trajectory of T, and assume that for every continuous h there exists the limit
$L(h) = \lim_{K \to \infty} \frac{1}{K} \sum_{k=1}^K h(x_k) \, \tilde{e}(x_k),$
where
$\tilde{e}(x_k) = \lambda \, \tilde{e}(x_{k-1}).$
Then, the measure μ associated with L by
$L(h) = \int_M h \, d\mu$
is a weak eigenfunction of U associated with the eigenvalue λ.
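As an illustration (our example, not one from the text), consider the irrational circle rotation $Tx = x + \omega \mod 1$. The sequence $\tilde{e}(x_k) = \lambda^k \tilde{e}(x_0)$, with $\lambda = e^{2\pi i \omega}$, satisfies the recurrence of Theorem 8, and by unique ergodicity the averages $L(h)$ converge to integration of h against the eigenfunction $e^{2\pi i x}$ with respect to Lebesgue measure:

```python
import numpy as np

omega = (np.sqrt(5) - 1) / 2          # irrational rotation number
K = 200_000
x = (0.1 + omega * np.arange(K)) % 1.0
lam = np.exp(2j * np.pi * omega)
e = np.exp(2j * np.pi * 0.1) * lam ** np.arange(K)   # e(x_k) = exp(2 pi i x_k)

def L(h):
    return np.mean(h(x) * e)          # (1/K) sum_k h(x_k) e(x_k)

print(L(lambda t: np.exp(-2j * np.pi * t)))  # ~ 1  (h = conjugate eigenfunction)
print(L(lambda t: np.exp(2j * np.pi * t)))   # ~ 0  (orthogonal observable)
```

The weak eigenfunction here is the absolutely continuous measure $e^{2\pi i x}\, dx$; the averages reproduce its action on observables without any regularity assumptions on the construction beyond continuity of h.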
From the above, it follows that the left eigenvectors of $\tilde{U}^T$ are approximations of the associated (possibly weak) Koopman modes. Assume that $\ell$ is such a left eigenvector, associated with the eigenvalue $\lambda_j$,
$\ell \, \tilde{U}^T = \lambda_j \ell.$
Then,
$\langle \ell, f_j(X) \rangle$
is the component of $f_j(X)$ along the eigenspace spanned by the eigenvector $e_j$. Moreover, since the components of $\ell$ satisfy the recurrence
$\ell_k = \lambda_j \ell_{k+1} - c_{k+1} \ell_m, \quad k = 1, \ldots, m-1,$
the statement can be obtained in the limit $K \to \infty$ by the so-called Generalized Laplace Analysis (GLA) that we described in Section 3.
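A GLA-type weighted average of this kind is easy to demonstrate numerically. The sketch below (a toy, two-harmonic observable of our choosing) shows the average $\frac{1}{K}\sum_k \lambda^{-k} f(x_k)$ isolating the component of f in the eigenspace at λ:

```python
import numpy as np

omega = np.sqrt(2) - 1                 # irrational rotation number
lam = np.exp(2j * np.pi * omega)       # Koopman eigenvalue of interest
K = 100_000
x = (0.3 + omega * np.arange(K)) % 1.0

# Observable with components in the eigenspaces at lam and lam^2.
f = 2.0 * np.exp(2j * np.pi * x) + 5.0 * np.exp(4j * np.pi * x)

mode = np.mean(lam ** (-np.arange(K)) * f)
print(mode)   # ~ 2 * exp(2j*pi*0.3): the component of f at eigenvalue lam
```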
Remark 12.
The standard interpretation of the Dynamic Mode Decomposition (e.g., on Wikipedia) is in some sense a transpose of the one presented here: the observables $f_1^T, \ldots, f_m^T$ (interpreted as column vectors) are assumed to be related by a matrix A: $f_{j+1} = A f_j$. Instead, in the nonlinear, Koopman operator interpretation, each row is mapped to its image, and this allows an interpretation on the space of observables. This is particularly important in the context of evolution equations, for example fluid flows, where the field of observables (the field of velocity vectors at different spatial points) does not evolve linearly.

7. Conclusions

In this paper, we pursued the analysis of two of the major approaches to computation of the Koopman operator spectrum: Generalized Laplace Analysis and the finite section method. We derived approximation results and reinterpreted the finite section method as one acting on samples of continuous functions on the state space. The example of a chaotic system with continuous spectrum shows how the finite section method can fail for that class of systems. The question of the choice of observables is often raised in the context of finite-section approximations such as EDMD. Specifically, the number of basis functions (e.g., a Fourier basis on a box in d-dimensional space) selected as observables can increase exponentially with the dimension d. The pseudospectral result proven here shows that choosing time-delayed observables avoids this issue, making time-delayed observations a natural choice. However, it is clear from the example we gave that the finite section method can fail to converge spectrally for systems with continuous spectrum.
One can understand the Krylov subspace approach as sampling by dynamics in the observables space. The weak eigenfunction approach is based on sampling in the state space. Thus, both techniques avoid the curse of dimensionality that methods such as EDMD potentially introduce.
There are a number of directions for future research based on the work presented here. Generalized Laplace Analysis methods could use results on the numerical approximation of Laplace transforms [62] to remedy some of the difficulties arising in computation. They could also be coupled with power methods from numerical linear algebra for the computation of eigenvalues, eigenfunctions and modes. There is a vast literature on Krylov subspace methods that can be used to refine computations using finite section methodologies. Finally, recent results on the computation of Koopman operator approximations might provide a direction for obtaining pseudospectrum convergence results for dynamical systems with (partially) continuous spectrum.

Funding

This research was supported in part by the DARPA contract HR0011-16-C-0116, ARO grants W911NF-11-1-0511 and W911NF-14-1-0359, and AFOSR contract FA9550-17-C-0012.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

I am thankful to Hassan Arbabi and Mathias Wanner for carefully reading the paper and useful comments.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Koopman, B.O. Hamiltonian systems and transformation in Hilbert space. Proc. Natl. Acad. Sci. USA 1931, 17, 315.
2. Lasota, A.; Mackey, M.C. Chaos, Fractals and Noise; Springer: New York, NY, USA, 1994.
3. Singh, R.K.; Manhas, J.S. Composition Operators on Function Spaces; Elsevier: Amsterdam, The Netherlands, 1993; Volume 179.
4. Mezić, I.; Banaszuk, A. Comparison of systems with complex behavior. Phys. D Nonlinear Phenom. 2004, 197, 101–133.
5. Mezić, I. Spectral properties of dynamical systems, model reduction and decompositions. Nonlinear Dyn. 2005, 41, 309–325.
6. Mauroy, A.; Mezić, I.; Susuki, Y. Koopman Operator in Systems and Control; Springer: Berlin/Heidelberg, Germany, 2020.
7. Hansen, A.C. Infinite-dimensional numerical linear algebra: Theory and applications. Proc. R. Soc. A Math. Phys. Eng. Sci. 2010, 466, 3539–3559.
8. Tao, T. The Spectral Theorem and Its Converses for Unbounded Symmetric Operators. 2009. Available online: https://terrytao.wordpress.com/2011/12/20/the-spectral-theorem-and-its-conversesfor-unbounded-symmetric-operators/ (accessed on 3 February 2014).
9. Mezić, I.; Banaszuk, A. Comparison of systems with complex behavior: Spectral methods. In Proceedings of the 39th IEEE Conference on Decision and Control, Sydney, Australia, 12–15 December 2000; Volume 2, pp. 1224–1231.
10. Korda, M.; Putinar, M.; Mezić, I. Data-driven spectral analysis of the Koopman operator. Appl. Comput. Harmon. Anal. 2020, 48, 599–629.
11. Mezić, I. Analysis of fluid flows via spectral properties of the Koopman operator. Annu. Rev. Fluid Mech. 2013, 45, 357–378.
12. Mauroy, A.; Mezić, I. On the use of Fourier averages to compute the global isochrons of (quasi)periodic dynamics. Chaos Interdiscip. J. Nonlinear Sci. 2012, 22, 033112.
13. Mohr, R.; Mezić, I. Construction of eigenfunctions for scalar-type operators via Laplace averages with connections to the Koopman operator. arXiv 2014, arXiv:1403.6559.
14. Mezić, I. Spectrum of the Koopman operator, spectral expansions in functional spaces, and state-space geometry. J. Nonlinear Sci. 2020, 30, 2091–2145.
15. Colbrook, M.J.; Roman, B.; Hansen, A.C. How to compute spectra with error control. Phys. Rev. Lett. 2019, 122, 250201.
16. Colbrook, M.J.; Townsend, A. Rigorous data-driven computation of spectral properties of Koopman operators for dynamical systems. arXiv 2021, arXiv:2111.14889.
17. Susuki, Y.; Mauroy, A.; Mezić, I. Koopman resolvent: A Laplace-domain analysis of nonlinear autonomous dynamical systems. SIAM J. Appl. Dyn. Syst. 2021, 20, 2013–2036.
18. Drmač, Z.; Mezić, I.; Mohr, R. Data driven modal decompositions: Analysis and enhancements. SIAM J. Sci. Comput. 2018, 40, A2253–A2285.
19. Böttcher, A.; Silbermann, B. The finite section method for Toeplitz operators on the quarter-plane with piecewise continuous symbols. Math. Nachrichten 1983, 110, 279–291.
20. Lewin, M.; Séré, É. Spectral pollution and how to avoid it. Proc. Lond. Math. Soc. 2010, 100, 864–900.
21. Schmid, P.J. Dynamic mode decomposition of numerical and experimental data. J. Fluid Mech. 2010, 656, 5–28.
22. Rowley, C.W.; Mezić, I.; Bagheri, S.; Schlatter, P.; Henningson, D.S. Spectral analysis of nonlinear flows. J. Fluid Mech. 2009, 641, 115–127.
23. Kutz, J.N.; Brunton, S.L.; Brunton, B.W.; Proctor, J.L. Dynamic Mode Decomposition: Data-Driven Modeling of Complex Systems; SIAM: Philadelphia, PA, USA, 2016.
24. Tu, J.H. Dynamic Mode Decomposition: Theory and Applications. Ph.D. Thesis, Princeton University, Princeton, NJ, USA, 2013.
25. Takeishi, N.; Kawahara, Y.; Tabei, Y.; Yairi, T. Bayesian dynamic mode decomposition. In Proceedings of the IJCAI, Melbourne, Australia, 19–25 August 2017; pp. 2814–2821.
26. Chen, K.K.; Tu, J.H.; Rowley, C.W. Variants of dynamic mode decomposition: Boundary condition, Koopman, and Fourier analyses. J. Nonlinear Sci. 2012, 22, 887–915.
27. Askham, T.; Kutz, J.N. Variable projection methods for an optimized dynamic mode decomposition. SIAM J. Appl. Dyn. Syst. 2018, 17, 380–416.
28. Noack, B.R.; Stankiewicz, W.; Morzyński, M.; Schmid, P.J. Recursive dynamic mode decomposition of transient and post-transient wake flows. J. Fluid Mech. 2016, 809, 843–872.
29. Azencot, O.; Yin, W.; Bertozzi, A. Consistent dynamic mode decomposition. SIAM J. Appl. Dyn. Syst. 2019, 18, 1565–1585.
30. Proctor, J.L.; Brunton, S.L.; Kutz, J.N. Dynamic mode decomposition with control. SIAM J. Appl. Dyn. Syst. 2016, 15, 142–161.
31. Korda, M.; Mezić, I. Linear predictors for nonlinear dynamical systems: Koopman operator meets model predictive control. Automatica 2018, 93, 149–160.
32. Jovanović, M.R.; Schmid, P.J.; Nichols, J.W. Sparsity-promoting dynamic mode decomposition. Phys. Fluids 2014, 26, 024103.
33. Bagheri, S. Effects of weak noise on oscillating flows: Linking quality factor, Floquet modes, and Koopman spectrum. Phys. Fluids 2014, 26, 094104.
34. Hemati, M.S.; Rowley, C.W.; Deem, E.A.; Cattafesta, L.N. De-biasing the dynamic mode decomposition for applied Koopman spectral analysis of noisy datasets. Theor. Comput. Fluid Dyn. 2017, 31, 349–368.
35. Dawson, S.T.M.; Hemati, M.S.; Williams, M.O.; Rowley, C.W. Characterizing and correcting for the effect of sensor noise in the dynamic mode decomposition. Exp. Fluids 2016, 57, 42.
36. Williams, M.O.; Kevrekidis, I.G.; Rowley, C.W. A data-driven approximation of the Koopman operator: Extending dynamic mode decomposition. J. Nonlinear Sci. 2015, 25, 1307–1346.
37. Korda, M.; Mezić, I. On convergence of extended dynamic mode decomposition to the Koopman operator. J. Nonlinear Sci. 2018, 28, 687–710.
38. Susuki, Y.; Mezić, I. A Prony approximation of Koopman mode decomposition. In Proceedings of the 2015 54th IEEE Conference on Decision and Control (CDC), Osaka, Japan, 15–18 December 2015; pp. 7022–7027.
39. Arbabi, H.; Mezić, I. Ergodic theory, dynamic mode decomposition, and computation of spectral properties of the Koopman operator. SIAM J. Appl. Dyn. Syst. 2017, 16, 2096–2126.
40. Mezić, I.; Arbabi, H. On the computation of isostables, isochrons and other spectral objects of the Koopman operator using the dynamic mode decomposition. IEICE Proc. Ser. 2017, 29, 1–4.
41. Das, S.; Giannakis, D. Delay-coordinate maps and the spectra of Koopman operators. J. Stat. Phys. 2019, 175, 1107–1145.
42. Dunford, N. Spectral operators. Pac. J. Math. 1954, 4, 321–354.
43. Hunt, F.Y. Unique ergodicity and the approximation of attractors and their invariant measures using Ulam’s method. Nonlinearity 1998, 11, 307.
44. Megginson, R.E. An Introduction to Banach Space Theory; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2012; Volume 183.
45. Mezić, I. On Geometrical and Statistical Properties of Dynamical Systems: Theory and Applications. Ph.D. Thesis, California Institute of Technology, Pasadena, CA, USA, 1994.
46. Mezić, I.; Wiggins, S. A method for visualization of invariant sets of dynamical systems based on the ergodic partition. Chaos 1999, 9, 213–218.
47. Mauroy, A.; Mezić, I.; Moehlis, J. Isostables, isochrons, and Koopman spectrum for the action–angle representation of stable fixed point dynamics. Phys. D Nonlinear Phenom. 2013, 261, 19–30.
48. Arnold, V.I.; Avez, A. Ergodic Problems of Classical Mechanics; Benjamin: New York, NY, USA, 1968.
49. Govindarajan, N.; Mohr, R.; Chandrasekaran, S.; Mezić, I. On the approximation of Koopman spectra for measure preserving transformations. SIAM J. Appl. Dyn. Syst. 2019, 18, 1454–1497.
50. Petersen, K. Ergodic Theory; Cambridge University Press: Cambridge, UK, 1995.
51. Klus, S. On the Numerical Approximation of the Perron–Frobenius and Koopman Operator. arXiv 2015, arXiv:1512.05997.
52. Ridge, W.C. Spectrum of a composition operator. Proc. Am. Math. Soc. 1973, 37, 121–127.
53. Nadkarni, M.G. Spectral Theory of Dynamical Systems; Springer: Berlin/Heidelberg, Germany, 1998.
54. Neumann, J.V. Zur Operatorenmethode in der klassischen Mechanik. Ann. Math. 1932, 587–642.
55. Mezić, I. DSample: A deterministic algorithm for sampling with o(1/n) error. Preprint 2009.
56. Kachurovskii, A.G. The rate of convergence in ergodic theorems. Russ. Math. Surv. 1996, 51, 653.
57. Jakšić, V.; Molchanov, S. A note on the regularity of solutions of linear homological equations. Appl. Anal. 2000, 75, 371–377.
58. Texier, B. Basic Matrix Perturbation Theory. Expository Note, 2017. Available online: www.math.jussieu.fr/~texier (accessed on 4 January 2022).
59. Trefethen, L.N.; Embree, M. Spectra and Pseudospectra: The Behavior of Nonnormal Matrices and Operators; Princeton University Press: Princeton, NJ, USA, 2005.
60. Takens, F. Detecting strange attractors in turbulence. In Dynamical Systems and Turbulence, Warwick 1980; Springer: Berlin/Heidelberg, Germany, 1981; pp. 366–381.
61. Pan, V.Y. How bad are Vandermonde matrices? SIAM J. Matrix Anal. Appl. 2016, 37, 676–694.
62. Rokhlin, V. A fast algorithm for the discrete Laplace transformation. J. Complex. 1988, 4, 12–32.