# Lesson 0.5: Modelling the system (dynamics)


We continue from the previous lesson, where we learned how to represent a stochastic variable mathematically. In particular, we now finish the model of our system. We already have the state vector $x = \begin{bmatrix}p & v & b \end{bmatrix}^T$. Now we have to establish how this state evolves over time. One of the simplest ways to model such an evolution is to consider discrete events, where the word discrete means that the states of the system are updated only at regular time stamps. For example, from elementary physics we have the following discrete equations

$\begin{cases}p(k+1) =& p(k) + \Delta T v(k) + \frac{\Delta T^2}{2}a(k)\\ v(k+1) =& v(k) + \Delta T a(k),\end{cases} \quad (1)$

where $k\in\mathbb{N}$ is the time stamp index, $\Delta T\in\mathbb{R}$ is the time in seconds between two consecutive time stamps, and $a\in\mathbb{R}$ is a perfect acceleration, i.e. not yet the accelerometer model that we derived in the previous lesson.
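As a quick check, the discrete equations (1) can be iterated in a few lines of code. The following Python sketch uses arbitrary values for $\Delta T$ and a constant acceleration; since (1) is exact for piecewise-constant acceleration, it reproduces the closed forms $p = a t^2/2$ and $v = a t$.

```python
# Iterating the discrete kinematics (1) with a perfect (noise-free)
# acceleration. dt (= Delta T) and a are arbitrary example values.
dt = 0.01          # seconds between two consecutive time stamps
a = 2.0            # constant acceleration in m/s^2
p, v = 0.0, 0.0    # initial position and velocity

for k in range(100):                      # simulate 1 second
    p = p + dt * v + 0.5 * dt**2 * a      # first equation of (1)
    v = v + dt * a                        # second equation of (1)

# After t = 1 s: p = a t^2 / 2 = 1.0 m and v = a t = 2.0 m/s
# (up to floating-point rounding).
print(p, v)
```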

From the previous lesson we know that it does not make sense to talk about $p$ directly, but rather about $\hat p$ and $\sigma_p$, so we have to write dynamics for both of them. We first address the dynamics of the mean values $\hat p, \hat v$ and $\hat b$, following the perfect readings in (1). We know that the reading of our accelerometer $a_r$ is biased; this is precisely why we introduced the state $b$, to compensate such a bias, namely

$\begin{cases}\hat p(k+1) =& \hat p(k) + \Delta T \hat v(k) + \frac{\Delta T^2}{2} ( a_r(k) - \hat b(k) )\\ \hat v(k+1) =& \hat v(k) + \Delta T (a_r(k) - \hat b(k) ) \\ \hat b(k+1) =& \hat b(k).\end{cases} \quad (2)$

Let us explain the system dynamics (2). The first equation describes how the mean of the position evolves. Note that $\hat b$ is employed to compensate the bias of the accelerometer. We consider the bias of the accelerometer to be constant, i.e. the third equation in (2). While this is not completely true, it is a mathematical way of saying that the bias varies very slowly. Also note that we employ the actual reading from the accelerometer $a_r$ and not $\hat a$. Why is that? If you look carefully at the left hand side of (2), you will not see any $\hat a(k+1)$, i.e. we have not written down any dynamics for the acceleration. Instead, we consider the acceleration as an input to our dynamics (2). Why? Because we are not interested in how the acceleration evolves, nor do we have a sensor to give us such information. A second reason is that we do not need more information to determine the expected position $\hat p$ of the IMU, which is usually one of the important states for a drone. In fact, you might wonder what would happen if, instead of an accelerometer, we had a sensor giving us information about the velocity. In such a case, the dynamics (2) can be made shorter, for example

$\begin{cases}\hat p(k+1) =& \hat p(k) + \Delta T (v_r(k) - \hat b(k))\\ \hat b(k+1) =& \hat b(k),\end{cases} \quad (3)$

where $v_r = v + f_v$ is modelled in a similar way as the accelerometer in the previous lesson, and $\hat b$ now compensates a possible bias in the velocity sensor. Unfortunately, current IMUs cannot measure the velocity with respect to an inertial frame, and GPS systems cannot provide such a measurement at a high rate. This is why it is common to employ (2) and not (3) for the dynamics of our system. Note how we use the same strategy in (2) and (3), with $\hat b$ compensating the bias of a given sensor.
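The mean dynamics (2) can be sketched as a single prediction function. A minimal Python sketch; the time step and all numeric values here are made up for illustration.

```python
# One step of the mean dynamics (2): propagate (p_hat, v_hat, b_hat)
# given an accelerometer reading a_r. Example values only.
dt = 0.01  # Delta T in seconds

def predict_mean(p_hat, v_hat, b_hat, a_r):
    """Apply eq. (2): the bias estimate b_hat compensates the reading a_r."""
    a = a_r - b_hat                               # bias-compensated acceleration
    p_next = p_hat + dt * v_hat + 0.5 * dt**2 * a
    v_next = v_hat + dt * a
    b_next = b_hat                                # bias modelled as constant
    return p_next, v_next, b_next

# A biased reading a_r = 1.5 with b_hat = 0.5 behaves like a true a = 1.0.
print(predict_mean(0.0, 0.0, 0.5, 1.5))
```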

We now proceed to write the dynamics (2) in a more compact form. The following (compact) expression will look more familiar

$\hat x(k+1) = F\hat x(k) + Gu(k). \quad (4)$

If we consider the input $u(k) = a_r(k)$, then by comparing the dynamics (4) and (2) one can identify the matrices

$F = \begin{bmatrix}1 & \Delta T & -\frac{\Delta T^2}{2} \\ 0 & 1 & -\Delta T \\ 0 & 0 & 1\end{bmatrix}, \quad G=\begin{bmatrix}\frac{\Delta T^2}{2} \\ \Delta T \\ 0\end{bmatrix}. \quad (5)$
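The identification (5) can be verified numerically: building $F$ and $G$ and computing $F\hat x + Gu$ gives exactly the component-wise step of (2). A sketch with arbitrary example values, where $G$ is taken as a 3×1 column so that $Gu(k)$ is a 3-vector:

```python
import numpy as np

# Numerical check that eq. (4) with the matrices in (5) reproduces eq. (2).
dt = 0.01  # Delta T, arbitrary example value

F = np.array([[1.0, dt, -0.5 * dt**2],
              [0.0, 1.0, -dt],
              [0.0, 0.0, 1.0]])
G = np.array([[0.5 * dt**2],
              [dt],
              [0.0]])

x = np.array([[1.0], [0.2], [0.5]])   # [p_hat, v_hat, b_hat], example values
a_r = 1.5                             # accelerometer reading, u(k) = a_r(k)

x_next = F @ x + G * a_r              # compact form, eq. (4)

# The same step written out component-wise as in eq. (2).
p, v, b = x.ravel()
a = a_r - b                           # bias-compensated acceleration
expected = np.array([[p + dt * v + 0.5 * dt**2 * a],
                     [v + dt * a],
                     [b]])
assert np.allclose(x_next, expected)
```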

The structure of (4) is the reason for the name Discrete linear Kalman filter: the states are updated at discrete events of time, and the relation between two consecutive states in time is linear. In the following lesson we will study the Discrete linear Kalman filter in detail, but first we need to derive the dynamics of $\sigma_p, \sigma_v$ and $\sigma_b$. Let us rewrite the equation that describes the Gaussian position

$f(p) = \frac{1}{\sqrt{2\pi \sigma_p^2}} e^{-\left(\frac{p-\hat p}{\sqrt{2}\sigma_p}\right)^2}. \quad (6)$

By inspecting (6) carefully, one realizes that what matters is not $\sigma_p$ but $\sigma^2_p$. The square of the standard deviation is also called the variance of the state. How to compute the variance $\sigma_p^2$ from an arbitrary $f_p$ is not really relevant for this guide, but it is closely related to the calculation of the mean value from $f_p$. If one is curious about how to compute them from an arbitrary $f_p$, I recommend the Wikipedia entries on variance and expected value. Nevertheless, we enumerate here a series of properties of the variance in order to understand its role in the Kalman filter.
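Both the mean and the variance are easy to estimate numerically from samples, which is a handy way to build intuition. A sketch drawing Gaussian samples, with arbitrary parameter values:

```python
import numpy as np

# Estimate p_hat and sigma_p^2 from samples of a Gaussian stochastic variable.
rng = np.random.default_rng(0)
p_hat, sigma_p = 2.0, 0.3                 # arbitrary example values
samples = rng.normal(p_hat, sigma_p, size=200_000)

mean_est = samples.mean()                 # estimate of p_hat
var_est = samples.var()                   # estimate of sigma_p^2, the variance

print(mean_est, var_est)  # close to 2.0 and 0.09
```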

• It seems obvious that for a unidimensional state the variance $\sigma_p^2$ is a scalar number. However, what is the variance of the vector $x$, namely $\sigma_x^2 := P$? It is called the covariance matrix and it is a square matrix. If one checks (4), one realizes that the states are coupled, i.e. they are correlated. That means that the uncertainty in guessing one state is also related to the uncertainty of another state. The following definition of the covariance matrix of our state vector can be checked in the Wikipedia entry on variance.

$P := \begin{bmatrix}\sigma_p^2 & \operatorname{cov}(p,v) & \operatorname{cov}(p,b)\\ \operatorname{cov}(v,p) & \sigma_v^2 & \operatorname{cov}(v,b) \\ \operatorname{cov}(b,p) & \operatorname{cov}(b,v) & \sigma_b^2 \end{bmatrix}. \quad (7)$

• The variance is always positive. This is obvious for a scalar number, but what does it mean for a matrix? It means that $P$ as in (7) is positive definite, i.e. it is symmetric and $x^TPx > 0$ for every nonzero vector $x$ of the appropriate dimensions. If a matrix is positive definite, then all its eigenvalues are real and positive. Note that none of the elements on the diagonal of our $P$ can be zero, because that would mean we know the value of a state precisely. We are dealing with a system with noise, so that is physically impossible. We can also note that a non-positive definite $P$ does not make any sense. This will be a good sanity check later, in order to verify that we have implemented our Kalman filter algorithm correctly.
• The variance of a vector (consisting of stochastic variables) that is multiplied by a matrix, e.g. $Fx$, satisfies the following equality

$\sigma_{Fx}^2 = FPF^T. \quad (8)$
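Property (8) can be checked numerically by sampling: draw many realizations of $x$ with covariance $P$, map them through $F$, and compare the empirical covariance with $FPF^T$. A Python sketch with arbitrary $F$ and $P$:

```python
import numpy as np

# Monte Carlo check of property (8): the covariance of F x equals F P F^T.
rng = np.random.default_rng(1)
F = np.array([[1.0, 0.1],
              [0.0, 1.0]])
P = np.array([[0.5, 0.1],
              [0.1, 0.2]])           # a valid (positive definite) covariance

x = rng.multivariate_normal(mean=[0.0, 0.0], cov=P, size=500_000)
y = x @ F.T                          # each row is F x for one sample

cov_y = np.cov(y, rowvar=False)      # empirical covariance of F x
assert np.allclose(cov_y, F @ P @ F.T, atol=1e-2)
```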

Therefore, to compute the variance of $x(k+1)$, we can employ the dynamics (4) as follows

$\begin{aligned}P(k+1) &= \sigma_{x(k+1)}^2 \\ &= \sigma_{Fx(k) + Gu(k)}^2 \\ &= \sigma_{Fx(k)}^2 + \sigma_{Gu(k)}^2 \\ &= FP(k)F^T + G\sigma_{u(k)}^2G^T, \end{aligned} \quad (9)$

where the third step is possible because the accelerometer (the input) is uncorrelated with the rest of the states, i.e. we have an open-loop system where the input is completely independent of the states. For example, when one flies a drone in manual mode, the accelerations are commanded by the stick of the radio transmitter. The pilot is totally free to operate this stick, regardless of the position or velocity of the drone. So the acceleration is uncorrelated with, or independent of, the position and the velocity in our system model (2).

In our example we have $u(k) = a_r(k)$, and therefore $\sigma_{u(k)}^2 = \sigma_a^2$. It is also common in the literature to call $Q := G\sigma_{u(k)}^2G^T$ the process noise. Personally I do not like this name, since in our example the process is considered perfect and what actually has noise is the accelerometer sensor. It is important to highlight the role of the matrix $G$ in $Q$. If one inspects (2) and (4), one realizes that $G$ is indeed performing a change of coordinates. For example, in the particular case of the first equation of (2), $G$ transforms the units (in fact it comes from the physics relation $p(0+\Delta T) - p(0) = \int_0^{\Delta T} \left(\int_0^{t} a(0)\, ds \right) dt$) from $m/s^2$ to $m$. So there are physical and mathematical grounds for how to compute and choose the matrix $Q$.
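Putting the pieces together, one prediction step propagates the mean by (4) and the covariance by (9). A minimal sketch, assuming made-up values for $\Delta T$, $\sigma_a$ and the initial state; it also applies the positive definiteness sanity check discussed above:

```python
import numpy as np

# Prediction step: mean by eq. (4), covariance by eq. (9).
dt = 0.01
sigma_a = 0.1                          # accelerometer noise std, example value

F = np.array([[1.0, dt, -0.5 * dt**2],
              [0.0, 1.0, -dt],
              [0.0, 0.0, 1.0]])
G = np.array([[0.5 * dt**2],
              [dt],
              [0.0]])
Q = G @ G.T * sigma_a**2               # process noise Q = G sigma_u^2 G^T

def predict(x_hat, P, a_r):
    """One prediction: eq. (4) for the mean, eq. (9) for the covariance."""
    return F @ x_hat + G * a_r, F @ P @ F.T + Q

x_hat = np.zeros((3, 1))               # [p_hat, v_hat, b_hat]
P = np.eye(3) * 0.01                   # initial covariance, example value

x_hat, P = predict(x_hat, P, a_r=1.5)

# Sanity check: the propagated P must stay symmetric positive definite.
assert np.all(np.linalg.eigvalsh(P) > 0)
```

Note that after the step, $\sigma_p^2$ and $\sigma_v^2$ grow (prediction only adds uncertainty), while $\sigma_b^2$ is unchanged since the bias is modelled as constant and $Q$ has a zero in that entry.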

I have seen that choosing $Q$ is sometimes treated as a black magic moment, since there are people who choose $Q$ arbitrarily. We have already seen that this is not the case: there is a mathematical expression for $Q$, so it can be calculated. If you are not happy with the results of your filter with the calculated $Q$, it is usually a sign that your model is not accurate or realistic enough for your system.

The set of equations (4) and (9) determines how our Gaussian states evolve over time. With these two derivations we conclude this lesson, and we are ready to study the Discrete linear Kalman filter.