This is a short overview of the Proper Orthogonal and Dynamic/Koopman Mode Decompositions, which are commonly used in the analysis of velocity fields of fluid flows. While I have worked with the theoretical side of Koopman modes, I never implemented the numerical code myself; I wrote these notes up as I was teaching myself the basics of the numerics of these decompositions, and subsequently used them for two lectures. The notes are based on the References listed at the end of the post. Caveat lector: the notes may contain gross oversimplifications; the emphasis was on understanding, not on precision.

I welcome your corrections and comments below. (You can always stop by my Van Vleck office if you’re in Madison to discuss any part of this).

UPDATE: I have now posted my own implementation of several algorithms for Koopman mode decompositions. [GitHub]

#### 1. Decompositions in general

Let’s assume that we are recording the evolution of a dynamical quantity $u(x,t)$ associated with a planar fluid flow. Commonly, $u$ contains (one or more) components of the velocity field. An objective of a *decomposition* is to decouple the spatial and temporal variation of the flow, and write the entire evolution as a superposition of spatial *modes* $\phi_k(x)$ whose change in time is governed by scalar *coefficients* $a_k(t)$:

$$u(x,t) \approx \sum_{k=1}^{r} a_k(t)\,\phi_k(x).$$

Depending on the dynamics, we might be able to perform such a decomposition exactly using finitely many modes; in real and realistic flows, this is highly unlikely. Instead, we aim to achieve the decomposition only approximately, by retaining a relatively small number of modes $r$.

As the fluid evolves, we measure $u$ at discrete time steps $t_1, \dots, t_N$ and at finitely many spatial locations $x_1, \dots, x_M$. As a result, our measurements can be stored in an $M \times N$ matrix $D$, in which *columns are time-snapshots* of the entire field, while *rows are time series* of the evolution of $u$ at a particular location $x_m$.

The snapshot at time $t_n$, stored in the column $D_n$, can be written as (per the aforementioned decomposition)

$$D_n = \sum_{k=1}^{r} a_k(t_n)\,\phi_k,$$

where we can further store the coefficients in the matrix $A$, $A_{nk} = a_k(t_n)$, and the modes in the matrix $\Phi = [\phi_1, \dots, \phi_r]$. Therefore, we seek the decomposition of the data matrix

$$D \approx \Phi A^*,$$

where $*$ denotes the conjugate transpose. The goal is to make $r$ as small as possible; in particular, much smaller than either $M$ or $N$.

Notice now that the columns of $\Phi A^*$ are all linear combinations of the columns of $\Phi$. Therefore, we are approximating a high-rank matrix $D$ by a rank-$r$ matrix $\Phi A^*$. The manner in which this approximation is achieved is what distinguishes POD and DMD.
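To make the dimensions concrete, here is a minimal sketch (in Python/NumPy; the sizes $M$, $N$, $r$ and the random modes are my own illustrative choices, not anything from the references) that builds a data matrix out of a handful of modes:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, r = 200, 50, 3                 # spatial points, snapshots, modes

Phi = rng.standard_normal((M, r))    # columns are spatial modes phi_k
A = rng.standard_normal((N, r))      # A[n, k] = a_k(t_n)

D = Phi @ A.conj().T                 # data matrix, D = Phi A*
print(np.linalg.matrix_rank(D))      # 3: every column is a combination of r modes
```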

#### 2. Proper Orthogonal Decomposition

Given the desired number of modes $r$, Proper Orthogonal Decomposition finds the matrix $D_r = \Phi A^*$ of rank $r$ which attains the shortest distance $\|D - D_r\|$. (The norm is either the Frobenius norm or the 2-norm.)

One explanation comes from statistics, where we imagine that our “true” dynamics really is composed of a small number of modes, and that the remaining complexity in the data is just due to Gaussian random noise. If this is truly the case, then the approximation that minimizes the 2-norm of the error, as above, recovers the “deterministic” modes and “filters out” the noise. In statistics, this type of approximation is known as the Karhunen–Loève decomposition.

In fluid dynamics, the 2-norm of the fluid flow is associated with the kinetic energy stored in it. In this sense, Proper Orthogonal Decomposition finds the $r$ modes that store the largest possible amount of the energy present in $D$.

Computation of the best lower-rank approximation in a vector space with a scalar product is a well-known problem. How do we do this on vectors? Lower-rank approximations are just those vectors that live inside one of the axial planes. If our vector is already written in terms of the axial coordinate system, then the approximation is achieved by erasing enough coordinates. How do we know which coordinates to erase (and which axial plane to project onto)? We don’t: we just erase the smallest coordinates until we achieve the target rank.
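As a toy illustration of “erasing the smallest coordinates” (a sketch with an arbitrary example vector of my own choosing):

```python
import numpy as np

v = np.array([0.1, -3.0, 0.02, 2.5])   # coordinates in some orthonormal basis
k = 2                                  # how many coordinates survive

w = v.copy()
smallest = np.argsort(np.abs(v))[:-k]  # indices of all but the k largest entries
w[smallest] = 0.0                      # project onto the axial plane they span
print(w)                               # [ 0.  -3.   0.   2.5]
```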

Now, our problem is more complicated than the simple image above, as, in addition to coordinates, we are looking for the appropriate orthogonal basis in which to perform the “coordinate deletion” on all columns of $D$ simultaneously. The problem is solved by computing the Singular Value Decomposition of $D$,

$$D = U \Sigma V^*,$$

where both $U$ and $V$ have orthonormal columns, and $\Sigma$ is a diagonal matrix containing the singular values $\sigma_1 \geq \sigma_2 \geq \dots \geq 0$ in decreasing order.

Calculating the SVD matrices as eigenvectors and eigenvalues of the symmetrized matrices $DD^*$ and $D^*D$ is not advised numerically: forming the products squares the condition number of the problem, so small singular values and their vectors come out notoriously inaccurate. Nonetheless, standard textbooks on numerical linear algebra, e.g., by Trefethen and Bau, or Golub and Van Loan, contain detailed accounts of the numerical algorithm. In a nutshell, $D$ is first reduced to a bidiagonal matrix using Householder reflections (in finitely many steps). SVDs of such matrices can then be computed iteratively in a numerically stable fashion, which is achieved by, e.g., the Golub–Kahan algorithm. These are not the only two steps that can be employed, but they are the most common, and are implemented in, e.g., the GNU Scientific Library.

The rank of the matrix $D$ is equal to the number of nonzero elements on the diagonal of $\Sigma$. Notice that the matrix $U$ suits our needs for the matrix $\Phi$ in the POD decomposition. To reduce the rank, we just compute $\Sigma_r$ by erasing everything but the $r$ largest diagonal elements in $\Sigma$ and obtain the POD decomposition:

$$D \approx U \Sigma_r V^*, \qquad \Phi = U, \quad A^* = \Sigma_r V^*,$$

of which only the first $r$ columns of $U$ matter, since the remaining ones are multiplied by zeros.

How do we choose the truncation order $r$? Singular values can be used to compute the magnitude of the error between $D$ and its POD approximation $D_r$:

$$\|D - D_r\|_2 = \sigma_{r+1}, \qquad \|D - D_r\|_F = \sqrt{\sigma_{r+1}^2 + \sigma_{r+2}^2 + \cdots}.$$

Since all singular values are known, we can find the point of “diminishing returns” (loosely speaking), where increasing $r$ stops reaping big returns in capturing the energy contained in $D$.

In terms of the singular values $\sigma_k$, the 2-norm of a matrix is given by the largest singular value, while the Frobenius norm is given by the root of the sum of squares of the singular values; in approximations formed as above, the error formulas follow by applying this to $D - D_r$, whose singular values are the truncated ones. Regardless of how the error is measured, the Eckart–Young theorem states that POD approximations really are the best reduced-rank approximations to $D$ in either the 2- or the Frobenius norm.
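Putting the pieces together, here is a minimal POD sketch via NumPy’s SVD (a stand-in random matrix plays the role of the data; the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.standard_normal((200, 50))         # stand-in (M x N) data matrix
r = 10                                     # truncation order

U, s, Vh = np.linalg.svd(D, full_matrices=False)
Phi = U[:, :r]                             # POD modes (orthonormal columns)
A = (np.diag(s[:r]) @ Vh[:r, :]).conj().T  # coefficients, so that D_r = Phi A*
D_r = Phi @ A.conj().T                     # best rank-r approximation

# the errors agree with the singular-value formulas above
print(np.linalg.norm(D - D_r, 2), s[r])
print(np.linalg.norm(D - D_r, 'fro'), np.sqrt((s[r:] ** 2).sum()))
```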

Now that we have the matrix $\Phi$, what is it good for? Well, first, visualization. Remember, each column of the matrix $\Phi$ is one independent component of our measured data, which only changes in magnitude, i.e., its shape stays the same (or coherent!).

Second, we might want to write the evolution equation for $u$ on a component-by-component basis. If $u$ is a velocity field of a fluid, its dynamics are typically described by a PDE $\partial_t u = L(u)$. However, if we establish that $u = \sum_k a_k(t)\,\phi_k$, then $\sum_k \dot a_k(t)\,\phi_k = L\bigl(\sum_k a_k(t)\,\phi_k\bigr)$. If, furthermore, the modes are invariants of the right-hand side, $L(\phi_k) = \lambda_k \phi_k$, then the entire evolution of the dynamics reduces to just $r$ scalar ODEs, $\dot a_k = \lambda_k a_k$.
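Here is a sketch of that reduction, assuming (purely for illustration) that the discretized right-hand side is a known linear matrix `L_mat`; the Galerkin step $\dot a = \Phi^* L \Phi\, a$ is standard, but the specific matrices below are made up:

```python
import numpy as np

rng = np.random.default_rng(2)
M, r = 100, 5
L_mat = rng.standard_normal((M, M))                 # stand-in linear RHS
Phi = np.linalg.qr(rng.standard_normal((M, r)))[0]  # orthonormal modes

# substitute u = Phi a into du/dt = L u and project onto the modes:
L_red = Phi.conj().T @ L_mat @ Phi                  # r x r: da/dt = L_red a

# if the modes were eigenvectors of L_mat, L_red would be diagonal and the
# dynamics would decouple into r scalar ODEs  da_k/dt = lambda_k a_k
print(L_red.shape)                                  # (5, 5)
```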

Unfortunately, the PDEs governing fluid motion are rarely linear, and therefore there is little hope that POD modes will lead to a dynamical reduction to ODEs. Note, however, that if the PDEs possess particular symmetries, partial success is possible, and these kinds of problems generated a lot of activity in fluid dynamics throughout the 1990s. For more, see Ref. [1].

#### 3. Dynamic Mode/Koopman Decomposition

In fluid dynamics, the velocity field $u$ is treated both as a measured quantity and as a state quantity, i.e., the one governing the evolution. In this approach, the evolution of $u$ is nonlinear. An alternative approach allows for treating $u$ as a sequence of projections of an *infinite*-dimensional state evolved by a *linear* operator.

Let the “true” dynamics be described by a state $z_n$, evolving under some unknown map $T$:

$$z_{n+1} = T(z_n).$$

Our measurements, e.g., the velocity field, are just static evaluations of some measurement function of the state. For example, let $g_x$ return the velocity field at the point $x$, i.e., $g_x(z_n) = u(x, t_n)$. The velocity field at the same point, but a time step later, is given by

$$u(x, t_{n+1}) = g_x(z_{n+1}) = g_x(T(z_n)) = (g_x \circ T)(z_n).$$

The operator $\mathcal{K}$ that maps $g \mapsto g \circ T$ is the Koopman operator; it is linear, since composition is linear in the “outer” function, $(\alpha f + \beta g) \circ T = \alpha (f \circ T) + \beta (g \circ T)$, but it is infinite-dimensional, since the space of functions over the states is infinite-dimensional, *even when the state space is finite-dimensional*.

Let’s see how that affects our analysis. Assume that $(\lambda_k, \varphi_k)$ are eigenpairs of the Koopman operator, $\mathcal{K}\varphi_k = \lambda_k \varphi_k$. Remember, $\varphi_k$ are of the same class of objects the Koopman operator works on, so they are not vectors, but particular kinds of measurement functions taking states as inputs. If the measurement functions we care about lie in the span of these eigenfunctions, they can be written as $g_x = \sum_k \phi_k(x)\,\varphi_k$, where the $\phi_k(x)$ are combination coefficients. The evolution of $g_x$ is then given by $\mathcal{K} g_x = \sum_k \lambda_k \phi_k(x)\,\varphi_k$. If we stack the eigenfunctions in a row-vector $\boldsymbol\varphi = [\varphi_1, \dots, \varphi_r]$ and the coefficients in a column-vector $\phi(x)$, we can write

$$g_x = \boldsymbol\varphi\,\phi(x), \qquad \mathcal{K}^n g_x = \boldsymbol\varphi\,\Lambda^n \phi(x), \qquad \Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_r).$$

The data matrix can be consequently written using the Vandermonde matrix of the eigenvalues,

$$S = \begin{bmatrix} 1 & \lambda_1 & \lambda_1^2 & \cdots & \lambda_1^{N-1} \\ \vdots & \vdots & \vdots & & \vdots \\ 1 & \lambda_r & \lambda_r^2 & \cdots & \lambda_r^{N-1} \end{bmatrix},$$

as $D = \Phi S$ and $A^* = S$, where the entries of $\Phi$ are $\Phi_{mk} = \varphi_k(z_1)\,\phi_k(x_m)$, i.e., the Koopman eigenfunctions evaluated at the initial state, scaled by the combination coefficients.

In this Koopman-mode interpretation of our data matrix, the modes of importance are the column-vectors of $\Phi$. They provide the axes along which the state-measurement eigenfunctions affect our measured data.

As we can only experience the action of the Koopman operator through its transformation of the matrix $D$, for our purposes the Koopman operator acts as a large, inaccessible matrix $K$. Given the data matrix as before, the Koopman operator acts by shifting its columns by one:

$$K\,[D_1, D_2, \dots, D_{N-1}] = [D_2, D_3, \dots, D_N].$$

The only new column on the right-hand side is the last one: if it can be written as a linear combination of the “old” ones, at least approximately, $D_N \approx c_1 D_1 + c_2 D_2 + \dots + c_{N-1} D_{N-1}$, then this evolution can be written as

$$K D^{(1)} \approx D^{(1)} C, \qquad D^{(1)} := [D_1, D_2, \dots, D_{N-1}],$$

where $C$ is a matrix in companion form:

$$C = \begin{bmatrix} 0 & 0 & \cdots & 0 & c_1 \\ 1 & 0 & \cdots & 0 & c_2 \\ 0 & 1 & \cdots & 0 & c_3 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & c_{N-1} \end{bmatrix}.$$

If this decomposition is exact, then advancing time just iterates $C$:

$$K^p D^{(1)} = D^{(1)} C^p.$$
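A minimal sketch of assembling $C$ from data (a least-squares fit of the last snapshot, plus the shift structure in the remaining columns; the function and variable names are mine):

```python
import numpy as np

def companion_from_data(D):
    """Fit D[:, -1] as a combination of the earlier snapshots and place
    the coefficients in the last column of the companion matrix C."""
    D1 = D[:, :-1]                              # the "old" columns
    c, *_ = np.linalg.lstsq(D1, D[:, -1], rcond=None)
    n = D1.shape[1]
    C = np.zeros((n, n), dtype=c.dtype)
    C[1:, :-1] = np.eye(n - 1)                  # subdiagonal of ones: the shift
    C[:, -1] = c                                # fitted coefficients c_1..c_{N-1}
    return C
```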

We want to compute the matrix $\Phi$ from an approximate equality obtained by putting the eigendecomposition derived earlier (restricted to the first $N-1$ snapshots, $D^{(1)} = \Phi S^{(1)}$, where $S^{(1)}$ collects the first $N-1$ columns of $S$) and the companion matrix together:

$$K D^{(1)} = \Phi\,\Lambda S^{(1)} \approx \Phi\,S^{(1)} C.$$

If $C$ can be diagonalized, it can be written as $C = W^{-1} \Lambda_C W$; then the equality becomes

$$\Phi\,\Lambda S^{(1)} \approx \Phi\,S^{(1)} W^{-1} \Lambda_C W,$$

where both $\Lambda$ (eigenvalues of $K$) and $\Lambda_C$ (eigenvalues of $C$) are diagonal. But can they be made equal? The answer is yes!

For an eigenpair $(\mu, w)$ of $C$,

$$K (D^{(1)} w) \approx D^{(1)} C w = \mu\,(D^{(1)} w),$$

therefore $(\mu, D^{(1)} w)$ is (approximately) an eigenpair of $K$. It follows that $\Lambda$ and $\Lambda_C$ can be arranged to be the same matrix:

$$\Lambda = \Lambda_C.$$

We can therefore set $S^{(1)} = W$ (companion matrices are diagonalized by the Vandermonde matrices of their eigenvalues) and $\Phi = D^{(1)} W^{-1}$ to obtain our decomposition.
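Continuing the previous sketch, the eigendecomposition of $C$ then hands us eigenvalues and modes directly (recall $C W^{-1} = W^{-1} \Lambda$, so a numerical eigensolver returns the columns of $W^{-1}$, up to scaling):

```python
import numpy as np

rng = np.random.default_rng(3)
D = rng.standard_normal((200, 50))   # stand-in data matrix
D1 = D[:, :-1]

C = companion_from_data(D)           # from the sketch above

evals, Winv = np.linalg.eig(C)       # eigenvalues mu_k and eigenvectors of C
Phi = D1 @ Winv                      # Koopman/DMD modes, Phi = D1 W^{-1}
```

This route, though faithful to the derivation, is known to be ill-conditioned in practice; the Arnoldi-based route below is the one to prefer.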

Notice that, unlike in POD, the matrix of modes $\Phi$ is not orthonormal.

The description above is OK for conceptual purposes, but it is not the way we will want to compute $\Phi$. Instead, we interpret the matrix $D$ as a sequence of Krylov vectors that lie in the output space of the (inaccessible) matrix $K$.

The Arnoldi iteration, applied to $K$ (using only its action on the data, i.e., the column shift), progressively provides us with matrices $Q$ (orthonormal columns) and $H$ (upper Hessenberg), which ideally satisfy

$$K Q = Q H, \qquad Q^* Q = I.$$

The diagonalization of the Hessenberg matrix is $H = Y \Lambda Y^{-1}$; substituting it into the derived relationships involving the companion matrix (with $D^{(1)} = QR$ the factorization built up by the process), we can obtain that the eigenvector matrix of $C$ is

$$W^{-1} = R^{-1} Y,$$

which is useful only as an intermediate result, since the Arnoldi process never explicitly computes $R$. However, $R$ is lost in the final expression

$$\Phi = D^{(1)} W^{-1} = Q R\,R^{-1} Y = Q Y.$$
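A compact sketch of this better-conditioned route (a QR factorization plus a small eigensolve, under the stated assumption $K D^{(1)} \approx D^{(2)}$; the function name is mine):

```python
import numpy as np

def dmd_modes_qr(D):
    """Koopman/DMD eigenvalues and modes via a QR factorization of the
    first N-1 snapshots, avoiding the companion matrix altogether."""
    D1, D2 = D[:, :-1], D[:, 1:]
    Q, R = np.linalg.qr(D1)                 # D1 = Q R
    # K Q = K D1 R^{-1} ~= D2 R^{-1},  hence  H = Q* K Q ~= Q* D2 R^{-1}
    H = Q.conj().T @ D2 @ np.linalg.inv(R)
    evals, Y = np.linalg.eig(H)             # H = Y Lambda Y^{-1}
    Phi = Q @ Y                             # modes: Phi = Q Y
    return evals, Phi
```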

#### 4. Discussion

There are several differences between POD and DMD modes:

- POD modes are orthogonal; DMD modes are not
- DMD modes are dynamically invariant; POD modes are not, in general
- POD modes are computed using the SVD; common implementations of DMD (additionally) use Krylov decompositions
- The magnitude of the characteristic value of a POD mode (its singular value) relates to the fraction of L2 energy/variance explained by the mode; the characteristic value of a DMD mode (its eigenvalue) determines its dynamics (rate of growth/decay and frequency of oscillation)

In these notes I tried to give a brief introduction to how POD and DMD modes are computed. Perhaps in one of the future posts I will actually post some computations. In the meantime, Ref. [3] compares POD and DMD modes on a fabricated data set, as well as on some experimentally recorded von Kármán vortex streets.

#### 5. References

[1] Smith, Troy R., Jeff Moehlis, and Philip Holmes. “Low-Dimensional Modelling of Turbulence Using the Proper Orthogonal Decomposition: A Tutorial.” Nonlinear Dynamics 41, no. 1–3 (August 1, 2005): 275–307. doi:10.1007/s11071-005-2823-y.

[2] Rowley, Clarence W, Igor Mezić, Shervin Bagheri, Philipp Schlatter, and Dan S Henningson. “Spectral Analysis of Nonlinear Flows.” Journal of Fluid Mechanics 641 (2009): 115–27. doi:10.1017/S0022112009992059.

[3] Zhang, Qingshan, Yingzheng Liu, and Shaofei Wang. “The Identification of Coherent Structures Using Proper Orthogonal Decomposition and Dynamic Mode Decomposition.” Journal of Fluids and Structures 49 (August 2014): 53–72. doi:10.1016/j.jfluidstructs.2014.04.002.
