# A deeper look into the Eigensystem Realisation Algorithm

## 2019/10/01

For my quadcopter control project, I am currently in the process of system identification, trying to find a model describing the dynamics of my motor-propeller combination. Finding the theoretical model structure is quite simple, but then you’re left with finding the parameters – and not all of them are directly available from the motor vendors. So I have no choice but to determine these parameters from measurements.

In my endeavour, I came across the series of videos on data-driven control with machine learning by Professor Steve Brunton of the University of Washington, Seattle. There, the basic principles of modern methods for system identification are explained very well. If you are new to control systems, I can also recommend his Control Bootcamp.

In one part of the series, Professor Brunton explains the Eigensystem Realisation Algorithm (ERA) (Juang & Pappa, 1985). The ERA is a procedure by which a reduced-order model for a linear, time-invariant (LTI) system can be derived from data of an impulse-response measurement.

However, while the videos do a pretty solid job of explaining the concepts and give the final formulae, I wanted to know more about their derivation. For this, I went to the original paper where the approaches were developed, and tried to reproduce the steps that led to the final results.

I follow this approach quite a lot when acquiring new skills. It not only helps with understanding – and memorizing – the final solution, but I also tend to pick up neat tricks along the way that often prove helpful elsewhere, even completely outside the domain of the original paper. Here is what this particular deep dive has to offer:

• When doing system identification, we mostly work with discrete-time systems, and in some aspects they are much easier to handle than continuous-time systems.
• The discrete-time impulse response, which uniquely characterises the behaviour of LTI systems, can be easily expressed in closed form using the system matrices.
• By reorganising our data, we can express seemingly non-linear problems in linear form.
• The singular value decomposition allows us to examine the composition of a matrix and simplify it.
• There are a few tricks for transforming matrix power expressions that may reduce the order of our problem.

## Our Example

Juang and Pappa worked on large, flexible space structures such as the Galileo spacecraft (not to be confused with the European Galileo satellite navigation system), and aimed to identify the oscillation modes of such structures, so that these could be taken into account in the control of the spacecraft.

We’ll look at something similar, although much simpler: a spring-loaded, damped horizontal pendulum. This pendulum consists of a mass $m$ connected to a spring and a linear damper. The mass can move horizontally, and we measure the position $x$ of its centre of gravity, with $x=0$ being the position when the system is at rest. This system can be modelled as a Single-Input, Single-Output (SISO) system: we have a single input $u=F$ and a single output $y=x$. Normally, we could easily provide at least the structure of a model for this system from first principles – namely, the conservation of momentum:

\begin{equation} m \ddot{x} + d \dot{x} + k x = F \end{equation}

Introducing state variables $x_1 = x$ and $x_2 = \dot{x}$, we can define the dynamics of this system in state space notation:

\begin{align} \frac{d}{dt} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} &= \begin{bmatrix} 0 & 1 \\ -\frac{k}{m} & -\frac{d}{m} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{1}{m} \end{bmatrix} F \\ y &= \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \end{align}

Instead of actually measuring our system, we will be using the continuous-time dynamics to simulate the system and thereby perform “measurements”. We will add white noise to emulate measurement noise.
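To make this "measurement" setup concrete, here is a minimal numpy sketch. The parameter values $m$, $d$, $k$, the sampling interval and the noise level are all made up for illustration, and simple forward-Euler integration stands in for a proper ODE solver:

```python
import numpy as np

# Hypothetical parameter values -- the post does not fix m, d or k numerically.
m, d, k = 1.0, 0.5, 20.0   # mass [kg], damping [N*s/m], stiffness [N/m]
dt = 0.01                  # sampling interval [s]
N = 500                    # number of samples

# Continuous-time state-space matrices from the equations above
A = np.array([[0.0, 1.0],
              [-k / m, -d / m]])
b = np.array([0.0, 1.0 / m])
c = np.array([1.0, 0.0])

rng = np.random.default_rng(42)
noise_std = 1e-4           # emulated measurement noise

# Response to a force of 1 N applied for a single tick, integrated with
# simple forward-Euler steps as a crude stand-in for a proper ODE solver.
x = np.zeros(2)
y = np.empty(N)
for i in range(N):
    y[i] = c @ x + noise_std * rng.standard_normal()
    u = 1.0 if i == 0 else 0.0
    x = x + dt * (A @ x + b * u)
```

The array `y` then plays the role of the measured impulse response in everything that follows.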

## Impulse-Response Measurement

Now, what happens if we whack this system with a hammer? What if we give it a short kick and then measure the position of the mass? Let’s find out!

The figure shows the position of the mass over time. We see that the system oscillates around the neutral position, but the amplitude of the oscillation decays exponentially with time, until measurement noise exceeds the movement of the system. In this example, we whacked the mass with a short force of 1 Newton and then let go, so our input would look like this:

This function is called the Kronecker delta function, and when used in the analysis of signals and systems, it is referred to as the (discrete-time) unit impulse. It is one for exactly one tick of our (discrete) time base and zero everywhere else. When given as an input to a discrete-time system, the output observed afterwards is called the discrete-time impulse response. LTI systems in particular are completely characterised by their impulse response.

As we are working on data that we would acquire by measurement, considering the continuous-time formulation is not useful, as we cannot get continuous-time measurement data. Instead, we’ll have some kind of quantisation in both time and value. This actually simplifies the maths of our problem a bit, as we will see in a moment.

We now want to have a look at how we can mathematically and generally describe the impulse response. This will be the basis of our model to which we will then fit our measurement data. In discrete time, we describe the dynamics of an LTI system using a recurrence relation in state space:

\begin{equation} \label{eq:state_recurrence} \mathbf{x}_{k+1} = \mathbf{A} \mathbf{x}_k + \mathbf{b} u_k \end{equation}

Let us assume that the initial state is $\mathbf{x}_0=0$. This is certainly true for damped structures if we wait long enough for all oscillations to die down before we whack the structure. Now, if our input is $1$ for time step $k=0$ and $0$ for all other time steps, we can trace the progression of our state by repeatedly applying Equation \ref{eq:state_recurrence}:

| Time Step $k$ | State $\mathbf{x}_k$ |
|---|---|
| $0$ | $0$ |
| $1$ | $\mathbf{A} \underbrace{\mathbf{x}_0}_{=0} + \mathbf{b} \underbrace{u_0}_{=1}=\mathbf{b}$ |
| $2$ | $\mathbf{A} \underbrace{\mathbf{x}_1}_{=\mathbf{b}} + \mathbf{b} \underbrace{u_1}_{=0} = \mathbf{A}\mathbf{b}$ |
| $3$ | $\mathbf{A} \underbrace{\mathbf{x}_2}_{=\mathbf{A} \mathbf{b}} + \mathbf{b} \underbrace{u_2}_{=0} = \mathbf{A}^2\mathbf{b}$ |
| $\vdots$ | $\vdots$ |
| $n$ | $\mathbf{A}^{n-1}\mathbf{b}$ |

So, in general we have $\mathbf{x}_k = \mathbf{A}^{k-1} \mathbf{b}$ for $k>0$, and with the output relation

\begin{equation} \label{eq:measurement} y_k = \mathbf{c}^T \mathbf{x}_k \end{equation}

we get

\begin{equation} \label{eq:measurement_closed} y_k = \mathbf{c}^T \mathbf{A}^{k-1} \mathbf{b} \end{equation}

Note that this impulse response is completely independent of what our state variables $\mathbf{x}_k$ represent. You can try it yourself: substitute $\mathbf{x}_k = \mathbf{T} \tilde{\mathbf{x}}_k$ with some invertible square matrix $\mathbf{T}$ and determine the impulse response of that system – it comes out unchanged. So the realisation is not unique, but this actually works in our favour, as it allows us to arbitrarily select at least part of the parameters later on.

In general, the terms $\mathbf{c}^T \mathbf{A}^{k-1} \mathbf{b}$ – or, in their Multiple-Input, Multiple-Output form $\mathbf{C} \mathbf{A}^{k-1} \mathbf{B}$ – are called the Markov Parameters of the system.
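As a quick numerical sanity check, here is a small numpy sketch with a made-up stable 2nd-order discrete-time system: simulating the state recurrence with a unit impulse input yields exactly the Markov parameters $\mathbf{c}^T \mathbf{A}^{k-1} \mathbf{b}$:

```python
import numpy as np

# A made-up stable 2nd-order discrete-time system
A = np.array([[0.0, 1.0],
              [-0.2, 0.9]])
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])

N = 20

# Simulate x_{k+1} = A x_k + b u_k with a unit impulse u_0 = 1, x_0 = 0
x = np.zeros(2)
u = np.zeros(N + 1)
u[0] = 1.0
y_sim = np.empty(N + 1)
for k in range(N + 1):
    y_sim[k] = c @ x
    x = A @ x + b * u[k]

# Markov parameters: y_0 = 0 and y_k = c^T A^(k-1) b for k > 0
y_markov = np.array([0.0] + [c @ np.linalg.matrix_power(A, k - 1) @ b
                             for k in range(1, N + 1)])

assert np.allclose(y_sim, y_markov)
```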

## Finding an Exact Model of Order n

What Juang and Pappa aim to do is to find parameter matrices $\mathbf{A}$, $\mathbf{c}^T$ and $\mathbf{b}$ so that the impulse response so described fits the measured data exactly. One way of doing so would be to explicitly write down the parametrised impulse-response – as we did in the last chapter – and try to solve this for the parameters directly.

However, that approach is rather ugly, and looking at Equation \ref{eq:measurement_closed}, this is a non-linear problem due to the powers of $\mathbf{A}$ occurring. Instead, Juang and Pappa use quite a nice trick: They restructure the data in such a way that the problem becomes linear. This allows us to use our toolkit from linear algebra to find a solution.

In a first step, Juang and Pappa construct the vectors $\bar{\mathbf{y}}_k$, which are the column vectors of $n$ observations starting at time step $k$:

\begin{equation} \bar{\mathbf{y}}_k := \begin{bmatrix} y_k \\ y_{k+1} \\ \vdots \\ y_{k+n-1} \end{bmatrix} \end{equation}

We will see later that $n$ is the order of the model we will build. For the time being, we shall assume that we have an arbitrary number of measurements, so there are no limits on $n$. From Equation \ref{eq:measurement_closed}, we can see that

\begin{equation} \bar{\mathbf{y}}_k = \mathcal{O} \mathbf{A}^{k-1} \mathbf{b} \end{equation}

with the observability matrix

\begin{equation} \mathcal{O} = \begin{bmatrix} \mathbf{c}^T \\ \mathbf{c}^T \mathbf{A} \\ \mathbf{c}^T \mathbf{A}^2 \\ \vdots \\ \mathbf{c}^T \mathbf{A}^{n-1} \end{bmatrix} \end{equation}

Now, remember the function of the observability matrix: if the state has dimension $n$ and the observability matrix has full rank $n$, we can use it to uniquely reconstruct the state from the outputs at time steps $k,\ldots,k+n-1$. Thus, if our system is observable, we can use the inverse of the observability matrix $\mathcal{O}$ to find the state $\mathbf{x}_k$:

\begin{equation} \mathcal{O}^{-1} \bar{\mathbf{y}}_k = \mathbf{A}^{k-1} \mathbf{b} = \mathbf{x}_k \end{equation}
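In numbers, with a made-up observable 2nd-order system, reconstructing the state from a window of $n$ outputs via $\mathcal{O}^{-1}$ looks like this:

```python
import numpy as np

# A made-up observable 2nd-order discrete-time system
A = np.array([[0.0, 1.0],
              [-0.2, 0.9]])
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.5])
n = 2

# Observability matrix O = [c^T; c^T A]
O = np.vstack([c @ np.linalg.matrix_power(A, i) for i in range(n)])

# Impulse response: after the unit impulse at k = 0, we have x_1 = b
x = b.copy()
states, ys = [], []
for k in range(1, 10):
    states.append(x)
    ys.append(c @ x)
    x = A @ x
ys = np.array(ys)

# Reconstruct x_k from the outputs y_k, ..., y_{k+n-1}
k = 3
y_bar = ys[k - 1 : k - 1 + n]          # [y_3, y_4]
x_rec = np.linalg.solve(O, y_bar)      # O^{-1} y_bar
assert np.allclose(x_rec, states[k - 1])
```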

From that, we can construct the state at any subsequent time, due to:

\begin{equation} \mathbf{x}_{k+1}=\mathbf{A} \mathbf{x}_k = \mathbf{A}\mathcal{O}^{-1} \bar{\mathbf{y}}_k \end{equation}

Applying this to $\bar{\mathbf{y}}_k$, we get the recurrence

\begin{equation} \label{eq:measurement_step} \bar{\mathbf{y}}_{k+1} = \mathcal{O} \mathbf{x}_{k+1} = \mathcal{O} \mathbf{A}\mathcal{O}^{-1} \bar{\mathbf{y}}_k \end{equation}

Now, let’s look at what we have done: we are able to describe the output at time steps $k+1,\ldots,k+n$ from the output at time steps $k,\ldots,k+n-1$ using a linear operator! How do we find $\mathcal{O} \mathbf{A}\mathcal{O}^{-1}$? Equation \ref{eq:measurement_step} alone does not uniquely determine it. To increase the rank of the equation system, Juang and Pappa proceed to build the Hankel-matrices:

\begin{align} \mathbf{H}_k &= \begin{bmatrix} \bar{\mathbf{y}}_k & \bar{\mathbf{y}}_{k+1} & \cdots &\bar{\mathbf{y}}_{k+n-1} \end{bmatrix} \\ &= \begin{bmatrix} y_k & y_{k+1} & \cdots & y_{k+n-1} \\ y_{k+1} & y_{k+2} & \cdots & y_{k+n} \\ \vdots & \vdots & \ddots & \vdots \\ y_{k+n-1} & y_{k+n} & \cdots & y_{k+2n-2} \end{bmatrix} \end{align}

These matrices are constructed by listing $n$ measurements starting at time step $k$ in the first column, then $n$ measurements starting at time step $k+1$ in the second column, and so on. Using Equation \ref{eq:measurement_step}, we can find the following recurrence for $\mathbf{H}_{k+1}$:

\begin{equation} \label{eq:hankel_step} \mathbf{H}_{k+1} = \mathcal{O} \mathbf{A} \mathcal{O}^{-1} \mathbf{H}_k \end{equation}
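A sketch of how such Hankel-matrices can be built with numpy. The toy sequence $y_k = 2^k$ (the impulse response of a hypothetical first-order system) is made up purely to demonstrate the shift recurrence:

```python
import numpy as np

def hankel(y, k, n):
    """n x n Hankel matrix H_k with entries (H_k)_{ij} = y_{k+i+j}."""
    return np.array([[y[k + i + j] for j in range(n)] for i in range(n)])

# Toy data: y_k = 2^k, the impulse response of a first-order system
y = 2.0 ** np.arange(10)
H1 = hankel(y, 1, 3)
H2 = hankel(y, 2, 3)

# For this first-order system, O A O^{-1} reduces to the scalar 2,
# so the recurrence becomes simply H_2 = 2 H_1
assert np.allclose(H2, 2 * H1)
```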

Thus, if we have any matrices $\mathbf{H}_k$ and $\mathbf{H}_{k+1}$, with the former being a regular, invertible matrix, we can find

\begin{equation} \mathcal{O} \mathbf{A} \mathcal{O}^{-1} = \mathbf{H}_{k+1} {\mathbf{H}_k}^{-1} \end{equation}

With this, we can express $y_k$ using

\begin{align} \label{eq:output-exact} y_k &= \mathbf{e}^T \mathbf{H}_k \mathbf{e} \\ &= \mathbf{e}^T \left(\mathcal{O} \mathbf{A} \mathcal{O}^{-1}\right)^{k-1} \mathbf{H}_1 \mathbf{e} \\ &= \underbrace{\mathbf{e}^T}_{=:\hat{\mathbf{c}}^T} {\underbrace{\left(\mathbf{H}_{2} {\mathbf{H}_1}^{-1}\right)}_{=:\hat{\mathbf{A}}}}^{k-1} \underbrace{\mathbf{H}_1 \mathbf{e}}_{=:\hat{\mathbf{b}}} \\ &= \hat{\mathbf{c}}^T \hat{\mathbf{A}}^{k-1} \hat{\mathbf{b}} \end{align}

where $\mathbf{e}^T=\begin{bmatrix} 1 & 0 & \ldots & 0 \end{bmatrix}$. This exactly describes our measured system response:

## The Problem of Noise
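Before examining the noise issue, a quick numpy sanity check of the exact realisation: with noise-free Markov parameters of a made-up 2nd-order system, $\hat{\mathbf{A}} = \mathbf{H}_2 {\mathbf{H}_1}^{-1}$, $\hat{\mathbf{b}} = \mathbf{H}_1 \mathbf{e}$ and $\hat{\mathbf{c}}^T = \mathbf{e}^T$ indeed reproduce the measured response exactly:

```python
import numpy as np

# Noise-free Markov parameters of a made-up 2nd-order system
A = np.array([[0.0, 1.0],
              [-0.2, 0.9]])
b = np.array([0.0, 1.0])
c = np.array([1.0, 0.0])

def y_true(k):
    """Markov parameter y_k = c^T A^(k-1) b for k >= 1."""
    return c @ np.linalg.matrix_power(A, k - 1) @ b

# 2 x 2 Hankel-matrices H_1 and H_2
H1 = np.array([[y_true(1), y_true(2)],
               [y_true(2), y_true(3)]])
H2 = np.array([[y_true(2), y_true(3)],
               [y_true(3), y_true(4)]])

e = np.array([1.0, 0.0])
A_hat = H2 @ np.linalg.inv(H1)
b_hat = H1 @ e
c_hat = e

# The realisation reproduces the measured impulse response exactly
for k in range(1, 12):
    assert np.isclose(c_hat @ np.linalg.matrix_power(A_hat, k - 1) @ b_hat,
                      y_true(k))
```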

However, we also see that our estimated system exactly fits all the noise from our measurement. In our case we know from first-principles considerations that the exact model would only be of order $n=2$, while the model we developed here is of much larger order. Besides being quite a waste of resources, this may also lead to considerable estimation errors, as we can see when plotting the step responses: our estimated system clearly deviates from the system we measured. This is no wonder, as the estimated system incorporates all the noise we measured.

Further, our Hankel-matrices are pretty badly conditioned. This means that small rounding errors during calculations may become large errors in the result. We know that – in the absence of measurement noise – the rank of the Hankel-matrices cannot be larger than the order of the underlying system. If they are regular at all, it is only due to measurement noise, which should ideally be much smaller in magnitude than the actual data. Thus, simple inversion is prone to large numerical errors.

In their original paper, Juang and Pappa describe a method of using only a subset of the measurements to reduce the size of the Hankel-matrices, and thereby to improve the conditioning of the matrices. However, today, the method of ERA is almost universally presented (e.g. on Wikipedia) as being based on Singular Value Decomposition (SVD), using the whole set of measurement data.

## Singular Value Decomposition

Any matrix $\mathbf{M} \in \mathbb{R}^{m \times n}$ can be represented in the form of its Singular Value Decomposition (SVD):

\begin{equation} \mathbf{M} = \mathbf{P} \mathbf{D} \mathbf{Q}^{T} \end{equation}

The matrices all have special forms:

• The matrices $\mathbf{P} \in \mathbb{R}^{m \times m}$ and $\mathbf{Q} \in \mathbb{R}^{n \times n}$ are square, orthogonal matrices, i.e. $\mathbf{P}\mathbf{P}^{T}=\mathbf{I}_m$ and $\mathbf{Q}\mathbf{Q}^{T}=\mathbf{I}_n$. Note that the identity matrices may have different dimensions if $\mathbf{M}$ is not square.
• The matrix $\mathbf{D} \in \mathbb{R}^{m \times n}$ is a diagonal matrix, with all diagonal elements being non-negative. As a common convention, the diagonal elements $\sigma_k$ of $\mathbf{D}$ – called the singular values of $\mathbf{M}$ – are listed in decreasing order.

Essentially, the SVD expresses the contents of the matrix as a sum of rank-one matrices of decreasing impact:

\begin{equation} \mathbf{M}_{ij} = \sum_{r=1}^{\min\left\{m,n\right\}} \sigma_r \mathbf{P}_{ir} \mathbf{Q}^T_{rj} \end{equation}

As the matrices $\mathbf{P}$ and $\mathbf{Q}$ are orthogonal, the impact of the individual element is represented by $\sigma_r$. If we look at the Pareto plot of the singular values, we see that there are some dominant elements in there:

The plot shows the individual singular values in orange, ordered in decreasing order from left to right, and the cumulative proportion of the singular values, added up from left to right. We can see that the first two singular values are the most prominent, accounting for about 55% of the total data values, while the remaining singular values are pretty small in comparison – although they still add up to 45% of the data. We also see that our matrix is awfully close to being singular, which it would be if any of the singular values were zero.

This is somewhat consistent with our original considerations, where we found out that our system is actually of second order. Thus, we may assume that the data represented by the major two singular values is our actual data, while the remaining singular values represent noise.

Now, what do we do with that information? We can use it to remove what we consider to be noise from our data. In essence, the vectors of $\mathbf{P}$ and $\mathbf{Q}$ form bases of vector spaces, and the singular values determine the magnitude by which the basis vectors are represented in the data in the matrix. Thus, by removing some singular values and basis vectors, we can project the data in the matrix onto the directions represented by the remaining basis vectors, and thereby approximate the original matrix:

\begin{align} \mathbf{M} &= \mathbf{P} \mathbf{D} \mathbf{Q}^{T} \\ &= \begin{bmatrix} \tilde{\mathbf{P}} & \mathbf{P}_R \end{bmatrix} \begin{bmatrix} \tilde{\mathbf{D}} & \mathbf{0} \\ \mathbf{0} & \mathbf{D}_R \end{bmatrix} \begin{bmatrix} \tilde{\mathbf{Q}}^T \\ {\mathbf{Q}_R}^T \end{bmatrix} \\ &\approx \tilde{\mathbf{P}} \tilde{\mathbf{D}} \tilde{\mathbf{Q}}^{T} \\ &=: \tilde{\mathbf{M}} \end{align}

By appropriately choosing how many singular values to keep in $\tilde{\mathbf{D}}$, we can approximate $\mathbf{M}$ arbitrarily well. Usually, one would determine this number from first-principles considerations, or by defining a cut-off value for the relative error.

With that, we can determine the pseudo-inverse of $\tilde{\mathbf{M}}$:

\begin{equation} \tilde{\mathbf{M}}^{*} = \tilde{\mathbf{Q}} \tilde{\mathbf{D}}^{-1} \tilde{\mathbf{P}}^{T} \end{equation}

I encourage you to personally verify that $\tilde{\mathbf{M}}^{*}$ satisfies the defining properties of a pseudo-inverse, $\tilde{\mathbf{M}} \tilde{\mathbf{M}}^{*} \tilde{\mathbf{M}} = \tilde{\mathbf{M}}$ and $\tilde{\mathbf{M}}^{*} \tilde{\mathbf{M}} \tilde{\mathbf{M}}^{*} = \tilde{\mathbf{M}}^{*}$. Note that $\tilde{\mathbf{M}} \tilde{\mathbf{M}}^{*} = \tilde{\mathbf{P}}\tilde{\mathbf{P}}^{T}$ is in general only an orthogonal projection, not an identity matrix, since $\tilde{\mathbf{M}}$ is not of full rank.
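A small numpy check of these relations, using a made-up low-rank matrix in place of a Hankel-matrix (the dimensions and rank are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
# A 6 x 4 matrix of rank 2 plus tiny noise, as a stand-in for H_1
M = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))
M += 1e-8 * rng.standard_normal((6, 4))

P, s, QT = np.linalg.svd(M)
r = 2                                # keep the two dominant singular values
Pr, Dr, Qr = P[:, :r], s[:r], QT[:r, :].T

M_tilde = Pr @ np.diag(Dr) @ Qr.T    # rank-r approximation of M
M_pinv = Qr @ np.diag(1.0 / Dr) @ Pr.T

# The Moore-Penrose conditions hold for the truncated matrix ...
assert np.allclose(M_tilde @ M_pinv @ M_tilde, M_tilde)
assert np.allclose(M_pinv @ M_tilde @ M_pinv, M_pinv)
# ... but M_tilde @ M_pinv is only the projection P_r P_r^T, not the identity
assert np.allclose(M_tilde @ M_pinv, Pr @ Pr.T)
```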

## Simplified Model

With the simplified Hankel-Matrix

\begin{equation} \tilde{\mathbf{H}}_1 = \tilde{\mathbf{P}} \tilde{\mathbf{D}} \tilde{\mathbf{Q}}^{T} \end{equation}

and its pseudo-inverse ${\tilde{\mathbf{H}}_1}^{*}$, we can build a simplified, cleaned-up model:

\begin{align} \label{eq:output-simplified} \tilde{y}_k &= \underbrace{\mathbf{e}^T}_{=:\tilde{\mathbf{c}}^T} {\underbrace{\left(\mathbf{H}_{2} {\tilde{\mathbf{H}}_1}^{*}\right)}_{=:\tilde{\mathbf{A}}}}^{k-1} \underbrace{\tilde{\mathbf{H}}_1 \mathbf{e}}_{=:\tilde{\mathbf{b}}} \\ &:= \tilde{\mathbf{c}}^T \tilde{\mathbf{A}}^{k-1} \tilde{\mathbf{b}} \end{align}

Determining the impulse response of that model, we get a pretty clean fit, and it seems that the noise is actually being ignored. Also, our step response fits much better:

## Reduced-Order Model

But still, our matrices $\tilde{\mathbf{c}}^T$, $\tilde{\mathbf{A}}$ and $\tilde{\mathbf{b}}$ are of order $n$, while they should be of second order for our system. To reduce that order, we’ll have to make use of some trickery. We’ll re-use Equation \ref{eq:output-simplified} and insert an identity matrix:

\begin{align} \tilde{y}_k &= \tilde{\mathbf{c}}^T \tilde{\mathbf{A}}^{k-1} \tilde{\mathbf{b}} \\ &= \underbrace{\tilde{\mathbf{c}}^T}_{=:\mathbf{e}^T} \underbrace{\mathbf{I}_n}_{=\tilde{\mathbf{H}}_1 {\tilde{\mathbf{H}}_1}^{*}} {\underbrace{\tilde{\mathbf{A}}}_{=\mathbf{H}_{2} {\tilde{\mathbf{H}}_1}^{*}}}^{k-1} \underbrace{\tilde{\mathbf{b}}}_{=:\tilde{\mathbf{H}}_1 \mathbf{e}} \\ &= \mathbf{e}^T \tilde{\mathbf{H}}_1 {\tilde{\mathbf{H}}_1}^{*} \left(\mathbf{H}_{2} {\tilde{\mathbf{H}}_1}^{*} \right)^{k-1} \tilde{\mathbf{H}}_1 \mathbf{e} \\ &= \mathbf{e}^T \tilde{\mathbf{P}} \tilde{\mathbf{D}} \tilde{\mathbf{Q}}^{T} \tilde{\mathbf{Q}} \tilde{\mathbf{D}}^{-1} \tilde{\mathbf{P}}^{T} \left(\mathbf{H}_{2} \tilde{\mathbf{Q}} \tilde{\mathbf{D}}^{-1} \tilde{\mathbf{P}}^{T} \right)^{k-1} \tilde{\mathbf{P}} \tilde{\mathbf{D}} \tilde{\mathbf{Q}}^{T} \mathbf{e} \end{align}

Keep in mind that the values in $\tilde{\mathbf{D}}$ are non-negative and all non-zero values are on the diagonal, so that we can take the square-root of both $\tilde{\mathbf{D}}$ and its inverse. We’ll use that to regroup a bit:

\begin{align} \tilde{y}_k &= \mathbf{e}^T \tilde{\mathbf{P}} \tilde{\mathbf{D}} \underbrace{\tilde{\mathbf{Q}}^{T} \tilde{\mathbf{Q}}}_{\mathbf{I}_n} \tilde{\mathbf{D}}^{-1} \tilde{\mathbf{P}}^{T} \left(\mathbf{H}_{2} \tilde{\mathbf{Q}} \tilde{\mathbf{D}}^{-1} \tilde{\mathbf{P}}^{T} \right)^{k-1} \tilde{\mathbf{P}} \tilde{\mathbf{D}} \tilde{\mathbf{Q}}^{T} \mathbf{e} \\ &= \mathbf{e}^T \tilde{\mathbf{P}} \tilde{\mathbf{D}} \tilde{\mathbf{D}}^{-1} \left(\tilde{\mathbf{P}}^{T} \mathbf{H}_{2} \tilde{\mathbf{Q}} \tilde{\mathbf{D}}^{-1} \right)^{k-1} \underbrace{\tilde{\mathbf{P}}^{T} \tilde{\mathbf{P}}}_{=\mathbf{I}_n} \tilde{\mathbf{D}} \tilde{\mathbf{Q}}^{T} \mathbf{e} \\ &= \mathbf{e}^T \tilde{\mathbf{P}} \underbrace{\tilde{\mathbf{D}} \tilde{\mathbf{D}}^{-\frac{1}{2}}}_{=\tilde{\mathbf{D}}^{\frac{1}{2}}} \left(\tilde{\mathbf{D}}^{-\frac{1}{2}} \tilde{\mathbf{P}}^{T} \mathbf{H}_{2} \tilde{\mathbf{Q}} \tilde{\mathbf{D}}^{-\frac{1}{2}} \right)^{k-1} \underbrace{\tilde{\mathbf{D}}^{-\frac{1}{2}} \tilde{\mathbf{D}}}_{=\tilde{\mathbf{D}}^{\frac{1}{2}}} \tilde{\mathbf{Q}}^{T} \mathbf{e} \\ &= \underbrace{\mathbf{e}^T \tilde{\mathbf{P}} \tilde{\mathbf{D}}^{\frac{1}{2}}}_{=:\hat{\mathbf{c}}^T}{\underbrace{\left(\tilde{\mathbf{D}}^{-\frac{1}{2}} \tilde{\mathbf{P}}^{T} \mathbf{H}_{2} \tilde{\mathbf{Q}} \tilde{\mathbf{D}}^{-\frac{1}{2}} \right)}_{=:\hat{\mathbf{A}}}}^{k-1} \underbrace{\tilde{\mathbf{D}}^{\frac{1}{2}} \tilde{\mathbf{Q}}^{T} \mathbf{e}}_{=:\hat{\mathbf{b}}} \\ &= \hat{\mathbf{c}}^T \hat{\mathbf{A}}^{k-1} \hat{\mathbf{b}} \end{align}

Notice that in the first step, we have pushed $\tilde{\mathbf{P}}^{T}$ from the left into the parenthesised, inner expression, and out the right side again. This does not change the value of the expression. We have then done the same with $\tilde{\mathbf{D}}^{-\frac{1}{2}}$, and then simplified the remaining expression. Now, if we check the dimensions of $\hat{\mathbf{c}}^T$, $\hat{\mathbf{A}}$ and $\hat{\mathbf{b}}$, they indeed match the reduced order of our system.

In fact, nothing should have changed except for the order of the system, so looking at our impulse response, we should not see any difference. Similarly, our step response should still look the same:

## Conclusion

Our expedition into the details of the ERA was quite fruitful: We have learned a few techniques for restructuring, analysis and reduction of measurement data.

Many of these will come in handy when we examine the Observer/Kalman Filter Identification algorithm (OKID). The OKID is used to extract the impulse response from measurement data that was acquired using arbitrary control inputs.

Further, some of these ideas also will allow us to identify non-linear systems – or at least approximate them. As the engine/propeller combination includes possibly non-linear elements due to aerodynamic drag, this is specifically relevant for the identification of multicopter propulsion systems.

## Bibliography

Juang, J.-N. & Pappa, R. S. (1985). An Eigensystem Realization Algorithm for Modal Parameter Identification and Model Reduction. *Journal of Guidance, Control, and Dynamics*, 8(5). https://doi.org/10.2514/3.20031