A deeper look into the Eigensystem Realisation Algorithm

Ralf Gerlich

2019/10/01

For my quadcopter control project, I am currently in the process of system identification, trying to find a model describing the dynamics of my motor-propeller combination. Finding the theoretical model is quite simple, but then you’re left with finding the parameters – and not all of them are directly available from the motor vendors. So I have no choice but to determine these parameters from measurements.

In my endeavour I came across this series of videos on data-driven control with machine learning by Professor Steve Brunton of the University of Washington, Seattle. There, the basic principles of modern methods for system identification are explained very well. If you are new to control systems, I can also recommend his Control Bootcamp.

In part of the series, Professor Brunton explains the Eigensystem Realisation Algorithm (ERA) (Juang & Pappa, 1985). The ERA is a procedure by which a reduced-order model of a linear, time-invariant (LTI) system can be derived from impulse-response measurement data.

However, while the videos do a pretty solid job of explaining the concepts and give the final formulae, I wanted to know more about their derivation. For this, I went to the original paper where the approaches were developed, and tried to reproduce the steps that led to the final results.

I follow this approach quite a lot when acquiring new skills. Not only does it help me better understand – and memorize – the final solution, but I also tend to pick up neat tricks along the way that often prove helpful elsewhere – even completely outside the domain of the original paper.

There are quite a few interesting ideas you may pick up by reading this article.

Our Example

Juang and Pappa worked on large, flexible space structures such as the Galileo spacecraft (not to be confused with the European Galileo satellite navigation system), and aimed to identify the oscillation modes of such structures, so that these could be taken into account in the control of the spacecraft.

We’ll look at something similar, although much simpler: a spring-loaded, damped horizontal pendulum. This pendulum consists of a mass [latex]m[/latex] connected to a spring and a linear damper. The mass can move horizontally, and we measure the position [latex]x[/latex] of its centre of gravity, with [latex]x=0[/latex] being the position when the system is at rest.

Spring Pendulum

This system can be modelled as a Single-Input, Single-Output system: We have a single input [latex]u=F[/latex] and a single output [latex]y=x[/latex].

Schematic of a Single-Input, Single-Output (SISO) System

Normally, we could easily provide at least the structure of a model for this system from first principles – namely, from the balance of momentum (Newton’s second law):

\begin{equation} m \ddot{x} + d \dot{x} + k x = F \end{equation}

Introducing state variables [latex]x_1 = x[/latex] and [latex]x_2 = \dot{x}[/latex], we can define the dynamics of this system in state space notation:

\begin{align} \frac{d}{dt} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} &= \begin{bmatrix} 0 & 1 \\ -\frac{k}{m} & -\frac{d}{m} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{1}{m} \end{bmatrix} F \\ y &= \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \end{align}

Instead of actually measuring our system, we will be using the continuous-time dynamics to simulate the system and thereby perform “measurements”. We will add white noise to emulate measurement noise.
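To make this concrete, here is a minimal simulation sketch in Python. The parameter values, sampling time, noise level and random seed are illustrative assumptions of mine, not necessarily the values behind the plots below:

```python
import numpy as np
from scipy.signal import cont2discrete

# Illustrative parameters (assumptions, not the values used for the plots)
m, d, k = 1.0, 0.2, 4.0      # mass [kg], damping [N*s/m], stiffness [N/m]
dt = 0.1                     # sampling time [s]
noise_std = 1e-3             # measurement noise standard deviation [m]

# Continuous-time state-space model from the equations above
A_c = np.array([[0.0, 1.0],
                [-k / m, -d / m]])
b_c = np.array([[0.0], [1.0 / m]])
c = np.array([[1.0, 0.0]])

# Zero-order-hold discretisation
A, b, c_d, _, _ = cont2discrete((A_c, b_c, c, np.zeros((1, 1))), dt)

# Simulate the noisy discrete-time impulse response
rng = np.random.default_rng(0)
N = 500
u = np.zeros(N)
u[0] = 1.0                   # discrete-time unit impulse (see below)
x = np.zeros((2, 1))
y = np.zeros(N)
for i in range(N):
    y[i] = (c_d @ x)[0, 0] + rng.normal(0.0, noise_std)
    x = A @ x + b * u[i]
```

The later snippets in this article re-use the arrays A, b, c_d and y defined here.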

Impulse-Response Measurement

Now, what happens if we whack this system with a hammer? What if we give it a short kick and then measure the position of the mass? Let’s find out!

Measured Impulse Response of a Damped Spring-Mass System

The figure shows the position of the mass over time. We see that the system oscillates around the neutral position, but the amplitude of the oscillation falls exponentially with time, until measurement noise exceeds the movement of the system. In this example, we whacked the mass with a short force of 1 Newton, and then let go. So, our input would look like this:

Discrete-Time Impulse Input - Spring Pendulum

This function is called the Kronecker Delta Function, and when used in analysis of signals and systems, it is referred to as the (discrete-time) unit impulse. It is one for exactly one tick of our (discrete) time-base and zero everywhere else. When given as an input to a discrete-time system, the output observed afterwards is called the discrete-time impulse response. Specifically, LTI systems are completely characterised by their impulse response.

As we are working on data that we would acquire by measurement, considering the continuous-time formulation is not useful, as we cannot get continuous-time measurement data. Instead, we’ll have some kind of quantisation in both time and value. This actually simplifies the maths of our problem a bit, as we will see in a moment.

We now want to have a look at how we can mathematically and generally describe the impulse response. This will be the basis of our model to which we will then fit our measurement data. In discrete time, we describe the dynamics of an LTI system using a recurrence relation in state space:

\begin{equation} \label{eq:state_recurrence} \mathbf{x}_{k+1} = \mathbf{A} \mathbf{x}_k + \mathbf{b} u_k \end{equation}

Let us assume that [latex]\mathbf{x}\left(0\right)=0[/latex] holds for our initial state. This is certainly true for damped structures if we wait long enough for all oscillations to die down before we whack the structure. Now, if our input is [latex]1[/latex] for time step [latex]k=0[/latex] and 0 for all other time steps, we can see the progression of our state by repeatedly applying Equation \ref{eq:state_recurrence}:

Time Step [latex]k[/latex] | State [latex]\mathbf{x}\left(k\right)[/latex]
0 | [latex]\mathbf{0}[/latex]
1 | [latex]\mathbf{A} \underbrace{\mathbf{x}\left(0\right)}_{=0} + \mathbf{b} \underbrace{u\left(0\right)}_{=1}=\mathbf{b}[/latex]
2 | [latex]\mathbf{A} \underbrace{\mathbf{x}\left(1\right)}_{=\mathbf{b}} + \mathbf{b} \underbrace{u\left(1\right)}_{=0} = \mathbf{A}\mathbf{b}[/latex]
3 | [latex]\mathbf{A} \underbrace{\mathbf{x}\left(2\right)}_{=\mathbf{A} \mathbf{b}} + \mathbf{b} \underbrace{u\left(2\right)}_{=0} = \mathbf{A}^2\mathbf{b}[/latex]
[latex]n[/latex] | [latex]\mathbf{A}^{n-1}\mathbf{b}[/latex]

So, in general we have [latex]\mathbf{x}_k = \mathbf{A}^{k-1} \mathbf{b}[/latex] for [latex]k>0[/latex], and with the output relation

\begin{equation} \label{eq:measurement} y_k = \mathbf{c}^T \mathbf{x}_k \end{equation}

we get

\begin{equation} \label{eq:measurement_closed} y_k = \mathbf{c}^T \mathbf{A}^{k-1} \mathbf{b} \end{equation}

Note that this impulse response is completely independent of what our state variables [latex]\mathbf{x}_k[/latex] represent. You can try it yourself: Substitute [latex]\mathbf{x}_k = \mathbf{T} \tilde{\mathbf{x}}_k[/latex] with some invertible square matrix [latex]\mathbf{T}[/latex] and determine the impulse response of that system. This ambiguity actually works in our favour, as it allows us to arbitrarily select at least part of the parameters later on.

In general, the terms [latex]\mathbf{c}^T \mathbf{A}^{k-1} \mathbf{b}[/latex] – or, in their Multiple-Input, Multiple-Output form [latex]\mathbf{C} \mathbf{A}^{k-1} \mathbf{B}[/latex] – are called the Markov Parameters of the system.
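As a small sketch, the Markov parameters can be computed by iterating the state instead of forming matrix powers, and we can check numerically that they are indeed invariant under a state transformation [latex]\mathbf{x}_k = \mathbf{T} \tilde{\mathbf{x}}_k[/latex] (re-using A, b and c_d from the simulation above):

```python
def markov_parameters(A, b, c, n_steps):
    """Return [c b, c A b, c A^2 b, ...] by iterating v <- A v."""
    h, v = [], b.copy()
    for _ in range(n_steps):
        h.append((c @ v)[0, 0])
        v = A @ v
    return np.array(h)

h = markov_parameters(A, b, c_d, 100)

# Invariance under x = T x~ for an arbitrary invertible T
T = np.array([[2.0, 1.0], [0.0, 1.0]])
T_inv = np.linalg.inv(T)
h_t = markov_parameters(T_inv @ A @ T, T_inv @ b, c_d @ T, 100)
assert np.allclose(h, h_t)
```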

Finding an Exact Model of Order n

What Juang and Pappa aim to do is to find parameter matrices [latex]\mathbf{A}[/latex], [latex]\mathbf{c}^T[/latex] and [latex]\mathbf{b}[/latex] such that the impulse response described by them fits the measured data exactly. One way of doing so would be to explicitly write down the parametrised impulse response – as we did in the last section – and try to solve it for the parameters directly.

However, that approach is rather ugly, and as Equation \ref{eq:measurement_closed} shows, the problem is non-linear due to the powers of [latex]\mathbf{A}[/latex]. Instead, Juang and Pappa use quite a nice trick: They restructure the data in such a way that the problem becomes linear. This allows us to use our toolkit from linear algebra to find a solution.

In a first step, Juang and Pappa construct the vectors [latex]\bar{\mathbf{y}}_k[/latex], which are the column vectors of [latex]n[/latex] observations starting at time step [latex]k[/latex]:

\begin{equation} \bar{\mathbf{y}}_k := \begin{bmatrix} y_k \\ y_{k+1} \\ \vdots \\ y_{k+n-1} \end{bmatrix} \end{equation}

We will see later that [latex]n[/latex] is the order of the system we will build. For the time being, we shall assume that we have an arbitrary number of measurements, so there are no limits on [latex]n[/latex]. From Equation \ref{eq:measurement_closed}, we can see that

\begin{equation} \bar{\mathbf{y}}_k = \mathcal{O} \mathbf{A}^{k-1} \mathbf{b} \end{equation}

with the observability matrix

\begin{equation} \mathcal{O} = \begin{bmatrix} \mathbf{c}^T \\ \mathbf{c}^T \mathbf{A} \\ \mathbf{c}^T \mathbf{A}^2 \\ \vdots \\ \mathbf{c}^T \mathbf{A}^{n-1} \end{bmatrix} \end{equation}
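As a quick aside, a minimal sketch of this matrix in code (again assuming A and c_d from the simulation above):

```python
def observability_matrix(A, c, n):
    """Stack the rows c, c A, ..., c A^(n-1)."""
    return np.vstack([c @ np.linalg.matrix_power(A, i) for i in range(n)])

O = observability_matrix(A, c_d, 2)
print(np.linalg.matrix_rank(O))  # 2, i.e. full rank, if the system is observable
```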

Now, remember the function of the observability matrix: If the state has dimension [latex]n[/latex] and the observability matrix has full rank [latex]n[/latex], then we can use it to uniquely reconstruct the state from the outputs at time steps [latex]k,\ldots,k+n-1[/latex]. Thus, if our system is observable, we can use the inverse of the observability matrix [latex]\mathcal{O}[/latex] to find the state [latex]\mathbf{x}_k[/latex]:

\begin{equation} \mathcal{O}^{-1} \bar{\mathbf{y}}_k = \mathbf{A}^{k-1} \mathbf{b} = \mathbf{x}_k \end{equation}

From that, we can construct the state at any later time due to:

\begin{equation} \mathbf{x}_{k+1}=\mathbf{A} \mathbf{x}_k = \mathbf{A}\mathcal{O}^{-1} \bar{\mathbf{y}}_k \end{equation}

Applying this to [latex]\bar{\mathbf{y}}_k[/latex], we get the recurrence

\begin{equation} \label{eq:measurement_step} \bar{\mathbf{y}}_{k+1} = \mathcal{O} \mathbf{x}_{k+1} = \mathcal{O} \mathbf{A}\mathcal{O}^{-1} \bar{\mathbf{y}}_k \end{equation}

Now, let’s look at what we have done: We are able to describe the output at time steps [latex]k+1,\ldots,k+n[/latex] from the output at time steps [latex]k,\ldots,k+n-1[/latex] using a linear operator! How do we find [latex]\mathcal{O} \mathbf{A}\mathcal{O}^{-1}[/latex]? Equation \ref{eq:measurement_step} alone does not sufficiently specify its value. To increase the rank of the equation system, Juang and Pappa proceed to build the Hankel matrices:

\begin{align} \mathbf{H}_k &= \begin{bmatrix} \bar{\mathbf{y}}_k & \bar{\mathbf{y}}_{k+1} & \cdots &\bar{\mathbf{y}}_{k+n-1} \end{bmatrix} \\ &= \begin{bmatrix} y_k & y_{k+1} & \cdots & y_{k+n-1} \\ y_{k+1} & y_{k+2} & \cdots & y_{k+n} \\ \vdots & \vdots & \ddots & \vdots \\ y_{k+n-1} & y_{k+n} & \cdots & y_{k+2n-2} \end{bmatrix} \end{align}

These matrices are constructed by listing [latex]n[/latex] measurements starting at time step [latex]k[/latex] in the first column, then listing [latex]n[/latex] measurements starting at time step [latex]k+1[/latex] in the second column, and so on. Using Equation \ref{eq:measurement_step}, we can find the following recurrence for [latex]\mathbf{H}_{k+1}[/latex]:

\begin{equation} \label{eq:hankel_step} \mathbf{H}_{k+1} = \mathcal{O} \mathbf{A} \mathcal{O}^{-1} \mathbf{H}_k \end{equation}

Thus, if we have any matrices [latex]\mathbf{H}_k[/latex] and [latex]\mathbf{H}_{k+1}[/latex], with the former being a regular (invertible) matrix, we can find

\begin{equation} \mathcal{O} \mathbf{A} \mathcal{O}^{-1} = \mathbf{H}_{k+1} {\mathbf{H}_k}^{-1} \end{equation}

This allows us to express [latex]y_k[/latex] as

\begin{align} \label{eq:output-exact} y_k &= \mathbf{e}^T \mathbf{H}_k \mathbf{e} \\ &= \mathbf{e}^T \left(\mathcal{O} \mathbf{A} \mathcal{O}^{-1}\right)^{k-1} \mathbf{H}_1 \mathbf{e} \\ &= \underbrace{\mathbf{e}^T}_{=:\hat{\mathbf{c}}^T} {\underbrace{\left(\mathbf{H}_{2} {\mathbf{H}_1}^{-1}\right)}_{=:\hat{\mathbf{A}}}}^{k-1} \underbrace{\mathbf{H}_1 \mathbf{e}}_{=:\hat{\mathbf{b}}} \\ &= \hat{\mathbf{c}}^T \hat{\mathbf{A}}^{k-1} \hat{\mathbf{b}} \end{align}

where [latex]\mathbf{e}^T=\begin{bmatrix} 1 & 0 & \ldots & 0 \end{bmatrix}[/latex]. This exactly describes our measured system response:

Comparison of Impulse Responses: Measured vs. Estimated from Full Order System
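Here is a sketch of this construction in code, re-using the noisy measurements y and the helper markov_parameters from the earlier snippets; the order n = 100 is an arbitrary illustrative choice:

```python
def hankel_matrix(y, k, n):
    """n x n Hankel matrix whose (i, j) entry is y_(k+i+j)."""
    return np.array([[y[k + i + j] for j in range(n)] for i in range(n)])

n = 100                         # arbitrary choice; requires len(y) >= 2 * n + 1
H1 = hankel_matrix(y, 1, n)     # top-left entry is y_1 = c^T b
H2 = hankel_matrix(y, 2, n)

e = np.zeros((n, 1))
e[0, 0] = 1.0
A_hat = H2 @ np.linalg.inv(H1)  # plays the role of O A O^(-1)
b_hat = H1 @ e
c_hat = e.T

# Reconstruct the impulse response, noise included; the error is near zero,
# up to the (bad) conditioning of H1 discussed below
y_hat = markov_parameters(A_hat, b_hat, c_hat, n)
print(np.max(np.abs(y_hat - y[1:n + 1])))
```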

The Problem of Noise

However, we also see that our estimated system exactly fits all the noise from our measurement. In our case we know from first-principles considerations that the exact model would only be of order [latex]n=2[/latex], while the model we developed here is of much larger order. Besides being quite a waste of resources, this may also lead to considerable estimation errors, as we can see when plotting the step responses:

Comparison of Step Responses: Measured vs. Estimated from Full Order System

Our estimated system clearly deviates from the system we measured. This is no wonder, as the estimated system incorporates all the noise we measured.

Further, our Hankel matrices are pretty badly conditioned. This means that small rounding errors during calculations may become large errors in the result. We know that – in the absence of measurement error – the rank of the Hankel matrices cannot be larger than the order of the underlying system. If they are nevertheless regular, that is only due to measurement noise, which should ideally be much smaller in magnitude than the actual data. Thus, simple inversion is prone to large numerical errors.

In their original paper, Juang and Pappa describe a method of using only a subset of the measurements to reduce the size of the Hankel matrices, and thereby to improve their conditioning. However, today, the method of ERA is almost universally presented (e.g. on Wikipedia) as being based on Singular Value Decomposition (SVD), using the whole set of measurement data.

Singular Value Decomposition

Any matrix [latex]\mathbf{M} \in \mathbb{R}^{m \times n}[/latex] can be represented in the form of its Singular Value Decomposition (SVD):

\begin{equation} \mathbf{M} = \mathbf{P} \mathbf{D} \mathbf{Q}^{T} \end{equation}

The matrices all have special forms: [latex]\mathbf{P} \in \mathbb{R}^{m \times m}[/latex] and [latex]\mathbf{Q} \in \mathbb{R}^{n \times n}[/latex] are orthogonal, and [latex]\mathbf{D} \in \mathbb{R}^{m \times n}[/latex] is a diagonal matrix carrying the non-negative singular values [latex]\sigma_1 \geq \sigma_2 \geq \ldots \geq 0[/latex] on its diagonal, in decreasing order.

Essentially, the SVD expresses the contents of the matrix as a sum of rank-one matrices of decreasing impact:

\begin{equation} \mathbf{M}_{ij} = \sum_{r=1}^{\min\left\{m,n\right\}} \sigma_r \mathbf{P}_{ir} \mathbf{Q}^T_{rj} \end{equation}

As the matrices [latex]\mathbf{P}[/latex] and [latex]\mathbf{Q}[/latex] are orthogonal, the impact of each individual term is given by [latex]\sigma_r[/latex]. If we look at the Pareto plot of the singular values, we see that there are some dominant elements in there:

Pareto Plot of Singular Values of the Measurement Hankel Matrix

The plot shows the individual singular values in orange, ordered in decreasing order from left to right, and the cumulative proportion of the singular values, added up from left to right. We can see that the first two singular values are the most prominent, accounting for about 55% of the sum of all singular values, while the remaining singular values are pretty small in comparison – although they still add up to 45% of the total. We also see that our matrix is awfully close to being singular, which it would be if any of the singular values were zero.

This is somewhat consistent with our original considerations, where we found out that our system is actually of second order. Thus, we may assume that the data represented by the major two singular values is our actual data, while the remaining singular values represent noise.
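To reproduce this kind of analysis, we can inspect the singular value spectrum of the Hankel matrix directly; a sketch using H1 from the previous snippet:

```python
sigma = np.linalg.svd(H1, compute_uv=False)     # singular values, decreasing
proportion = np.cumsum(sigma) / np.sum(sigma)   # cumulative share, cf. the Pareto plot
print(sigma[:4])        # the first two values should dominate
print(proportion[1])    # share covered by the two dominant singular values
```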

Now, what do we do with that information? We can use it to remove what we consider to be noise from our data. In essence, the columns of [latex]\mathbf{P}[/latex] and [latex]\mathbf{Q}[/latex] form bases of vector spaces, and the singular values determine the magnitude by which the basis vectors are represented in the data in the matrix. Thus, by removing some singular values and basis vectors, we can project the data in the matrix onto the directions represented by the remaining basis vectors, and thereby approximate the original matrix:

\begin{align} \mathbf{M} &= \mathbf{P} \mathbf{D} \mathbf{Q}^{T} \\ &= \begin{bmatrix} \tilde{\mathbf{P}} & \mathbf{P}_R \end{bmatrix} \begin{bmatrix} \tilde{\mathbf{D}} & \mathbf{0} \\ \mathbf{0} & \mathbf{D}_R \end{bmatrix} \begin{bmatrix} \tilde{\mathbf{Q}}^T \\ {\mathbf{Q}_R}^T \end{bmatrix} \\ &\approx \tilde{\mathbf{P}} \tilde{\mathbf{D}} \tilde{\mathbf{Q}}^{T} \\ &=: \tilde{\mathbf{M}} \end{align}

By appropriately selecting [latex]\tilde{\mathbf{D}}[/latex], we can approximate [latex]\mathbf{M}[/latex] arbitrarily closely. Usually, one would determine the size of [latex]\tilde{\mathbf{D}}[/latex] from first-principles considerations, or by defining a cut-off value for the relative error.

With that, we can determine the pseudo-inverse of [latex]\tilde{\mathbf{M}}[/latex]:

\begin{equation} \tilde{\mathbf{M}}^{*} = \tilde{\mathbf{Q}} \tilde{\mathbf{D}}^{-1} \tilde{\mathbf{P}}^{T} \end{equation}

I encourage you to personally verify that [latex]\tilde{\mathbf{M}} \tilde{\mathbf{M}}^{*} \tilde{\mathbf{M}} = \tilde{\mathbf{M}}[/latex] and [latex]\tilde{\mathbf{M}}^{*} \tilde{\mathbf{M}} \tilde{\mathbf{M}}^{*} = \tilde{\mathbf{M}}^{*}[/latex] hold – the defining properties of a pseudo-inverse. Note that [latex]\tilde{\mathbf{M}} \tilde{\mathbf{M}}^{*} = \tilde{\mathbf{P}} \tilde{\mathbf{P}}^{T}[/latex] and [latex]\tilde{\mathbf{M}}^{*} \tilde{\mathbf{M}} = \tilde{\mathbf{Q}} \tilde{\mathbf{Q}}^{T}[/latex] are orthogonal projectors, not full identity matrices: they act as the identity only on the subspaces spanned by the retained singular vectors.
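A small sketch to check this numerically, truncating the SVD of H1 to [latex]r = 2[/latex] – the order we expect from first principles:

```python
r = 2
P, s, Qt = np.linalg.svd(H1)                # H1 = P @ diag(s) @ Qt
P_t = P[:, :r]                              # P~
D_t = np.diag(s[:r])                        # D~
Q_t = Qt[:r, :].T                           # Q~

H1_tilde = P_t @ D_t @ Q_t.T                # rank-r approximation of H1
H1_pinv = Q_t @ np.linalg.inv(D_t) @ P_t.T  # its pseudo-inverse

# Orthogonal projectors, not full identity matrices:
assert np.allclose(H1_tilde @ H1_pinv, P_t @ P_t.T)
assert np.allclose(H1_pinv @ H1_tilde, Q_t @ Q_t.T)
```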

Simplified Model

With the simplified Hankel matrix

\begin{equation} \tilde{\mathbf{H}}_1 = \tilde{\mathbf{P}} \tilde{\mathbf{D}} \tilde{\mathbf{Q}}^{T} \end{equation}

and its pseudo-inverse [latex]{\tilde{\mathbf{H}}_1}^{*}[/latex], we can build a simplified, cleaned-up model:

\begin{align} \label{eq:output-simplified} \tilde{y}_k &= \underbrace{\mathbf{e}^T}_{=:\tilde{\mathbf{c}}^T} {\underbrace{\left(\mathbf{H}_{2} {\tilde{\mathbf{H}}_1}^{*}\right)}_{=:\tilde{\mathbf{A}}}}^{k-1} \underbrace{\tilde{\mathbf{H}}_1 \mathbf{e}}_{=:\tilde{\mathbf{b}}} \\ &:= \tilde{\mathbf{c}}^T \tilde{\mathbf{A}}^{k-1} \tilde{\mathbf{b}} \end{align}
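In code, this is only a small change from the exact model: the pseudo-inverse of the truncated Hankel matrix replaces the exact inverse (a sketch re-using H2, H1_tilde, H1_pinv, e, n and markov_parameters from the earlier snippets):

```python
A_tilde = H2 @ H1_pinv           # instead of H2 @ np.linalg.inv(H1)
b_tilde = H1_tilde @ e
c_tilde = e.T

y_tilde = markov_parameters(A_tilde, b_tilde, c_tilde, n)
```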

Determining the impulse response of that model, we get a pretty clean fit, and it seems that the noise is actually being ignored.

Comparison of Impulse Responses: Measured vs. Estimated from Simplified, Full Order System

Also, our step response fits much better:

Comparison of Step Responses: Measured vs. Estimated from Full Order, Simplified System

Reduced-Order Model

But still, our matrices [latex]\tilde{\mathbf{c}}^T[/latex], [latex]\tilde{\mathbf{A}}[/latex] and [latex]\tilde{\mathbf{b}}[/latex] are of order [latex]n[/latex], while a model of our system should only need order two. To reduce that order, we’ll have to make use of some trickery. We’ll re-use Equation \ref{eq:output-simplified} and insert an identity matrix:

\begin{align} \tilde{y}_k &= \tilde{\mathbf{c}}^T \tilde{\mathbf{A}}^{k-1} \tilde{\mathbf{b}} \\ &= \underbrace{\tilde{\mathbf{c}}^T}_{=:\mathbf{e}^T} \underbrace{\mathbf{I}_n}_{=\tilde{\mathbf{H}}_1 {\tilde{\mathbf{H}}_1}^{*}} {\underbrace{\tilde{\mathbf{A}}}_{=\mathbf{H}_{2} {\tilde{\mathbf{H}}_1}^{*}}}^{k-1} \underbrace{\tilde{\mathbf{b}}}_{=:\tilde{\mathbf{H}}_1 \mathbf{e}} \\ &= \mathbf{e}^T \tilde{\mathbf{H}}_1 {\tilde{\mathbf{H}}_1}^{*} \left(\mathbf{H}_{2} {\tilde{\mathbf{H}}_1}^{*} \right)^{k-1} \tilde{\mathbf{H}}_1 \mathbf{e} \\ &= \mathbf{e}^T \tilde{\mathbf{P}} \tilde{\mathbf{D}} \tilde{\mathbf{Q}}^{T} \tilde{\mathbf{Q}} \tilde{\mathbf{D}}^{-1} \tilde{\mathbf{P}}^{T} \left(\mathbf{H}_{2} \tilde{\mathbf{Q}} \tilde{\mathbf{D}}^{-1} \tilde{\mathbf{P}}^{T} \right)^{k-1} \tilde{\mathbf{P}} \tilde{\mathbf{D}} \tilde{\mathbf{Q}}^{T} \mathbf{e} \end{align}

Keep in mind that the values in [latex]\tilde{\mathbf{D}}[/latex] are non-negative and all non-zero values are on the diagonal, so that we can take the square-root of both [latex]\tilde{\mathbf{D}}[/latex] and its inverse. We’ll use that to regroup a bit:

\begin{align} \tilde{y}_k &= \mathbf{e}^T \tilde{\mathbf{P}} \tilde{\mathbf{D}} \underbrace{\tilde{\mathbf{Q}}^{T} \tilde{\mathbf{Q}}}_{\mathbf{I}_n} \tilde{\mathbf{D}}^{-1} \tilde{\mathbf{P}}^{T} \left(\mathbf{H}_{2} \tilde{\mathbf{Q}} \tilde{\mathbf{D}}^{-1} \tilde{\mathbf{P}}^{T} \right)^{k-1} \tilde{\mathbf{P}} \tilde{\mathbf{D}} \tilde{\mathbf{Q}}^{T} \mathbf{e} \\ &= \mathbf{e}^T \tilde{\mathbf{P}} \tilde{\mathbf{D}} \tilde{\mathbf{D}}^{-1} \left(\tilde{\mathbf{P}}^{T} \mathbf{H}_{2} \tilde{\mathbf{Q}} \tilde{\mathbf{D}}^{-1} \right)^{k-1} \underbrace{\tilde{\mathbf{P}}^{T} \tilde{\mathbf{P}}}_{=\mathbf{I}_n} \tilde{\mathbf{D}} \tilde{\mathbf{Q}}^{T} \mathbf{e} \\ &= \mathbf{e}^T \tilde{\mathbf{P}} \underbrace{\tilde{\mathbf{D}} \tilde{\mathbf{D}}^{-\frac{1}{2}}}_{=\tilde{\mathbf{D}}^{\frac{1}{2}}} \left(\tilde{\mathbf{D}}^{-\frac{1}{2}} \tilde{\mathbf{P}}^{T} \mathbf{H}_{2} \tilde{\mathbf{Q}} \tilde{\mathbf{D}}^{-\frac{1}{2}} \right)^{k-1} \underbrace{\tilde{\mathbf{D}}^{-\frac{1}{2}} \tilde{\mathbf{D}}}_{=\tilde{\mathbf{D}}^{\frac{1}{2}}} \tilde{\mathbf{Q}}^{T} \mathbf{e} \\ &= \underbrace{\mathbf{e}^T \tilde{\mathbf{P}} \tilde{\mathbf{D}}^{\frac{1}{2}}}_{=:\hat{\mathbf{c}}^T}{\underbrace{\left(\tilde{\mathbf{D}}^{-\frac{1}{2}} \tilde{\mathbf{P}}^{T} \mathbf{H}_{2} \tilde{\mathbf{Q}} \tilde{\mathbf{D}}^{-\frac{1}{2}} \right)}_{=:\hat{\mathbf{A}}}}^{k-1} \underbrace{\tilde{\mathbf{D}}^{\frac{1}{2}} \tilde{\mathbf{Q}}^{T} \mathbf{e}}_{=:\hat{\mathbf{b}}} \\ &= \hat{\mathbf{c}}^T \hat{\mathbf{A}}^{k-1} \hat{\mathbf{b}} \end{align}

Notice that in the first step, we have pushed [latex]\tilde{\mathbf{P}}^{T}[/latex] from the left into the parenthesised, inner expression, and out the right side again. This does not change the value of the expression. We have then done the same with [latex]\tilde{\mathbf{D}}^{-\frac{1}{2}}[/latex], and then simplified the remaining expression. Now, if we check the dimensions of [latex]\hat{\mathbf{c}}^T[/latex], [latex]\hat{\mathbf{A}}[/latex] and [latex] \hat{\mathbf{b}}[/latex], they indeed match the reduced order of our system.
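Putting the final formulae into code – a sketch re-using P_t, s, Q_t, H2, e and n from the previous snippets – yields [latex]r \times r[/latex] matrices in place of the [latex]n \times n[/latex] ones:

```python
D_sqrt = np.diag(np.sqrt(s[:r]))             # D~^(1/2)
D_sqrt_inv = np.diag(1.0 / np.sqrt(s[:r]))   # D~^(-1/2)

A_era = D_sqrt_inv @ P_t.T @ H2 @ Q_t @ D_sqrt_inv  # r x r
b_era = D_sqrt @ Q_t.T @ e                          # r x 1
c_era = e.T @ P_t @ D_sqrt                          # 1 x r

# Same impulse response as the simplified full-order model
y_era = markov_parameters(A_era, b_era, c_era, n)
print(np.max(np.abs(y_era - y_tilde)))
```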

In fact, nothing should have changed except for the order of the system, so looking at our impulse response, we should not see any difference:

Comparison of Impulse Responses: Measured vs. Estimated from Reduced Order System

Similarly, our step response should still look the same:

Comparison of Step Responses: Measured vs. Estimated from Reduced Order System

Conclusion

Our expedition into the details of the ERA was quite fruitful: We have learned a few techniques for restructuring, analysis and reduction of measurement data.

Many of these will come in handy when we examine the Observer/Kalman Filter Identification (OKID) algorithm. OKID is used to extract the impulse response from measurement data that was acquired using arbitrary control inputs.

Further, some of these ideas will also allow us to identify non-linear systems – or at least approximate them. As the motor-propeller combination may include non-linear elements due to aerodynamic drag, this is particularly relevant for the identification of multicopter propulsion systems.

Bibliography

Juang, J.-N. & Pappa, R. S. (1985). An Eigensystem Realization Algorithm for Modal Parameter Identification and Model Reduction. Journal of Guidance, Control, and Dynamics, 8(5), 620–627. https://doi.org/10.2514/3.20031