Course index:

- Home: Artificial Intelligence for mobile robots.
- Lesson 0: Modelling a robot/agent.
- Lesson 1: Flocking.

Throughout the course we will refer to the robot as an *agent*. This is because the tools explained here can also be applied to entities other than robots, for example, an object in a video game.

The following diagram shows the different layers, or blocks, involved in the control of our agent.

Here is an explanation of each block:

- The *Desired behaviour* is the highest level of *intelligence*: this block determines the task to be accomplished, for example, *I want to reach point A before three o'clock*. The output of this block is usually based on the estimation of the current state of the agent or the environment, for example, *today is Monday*, and my desired behaviour on Monday morning is to go to point A.
- The *Guidance* takes the high-level command and decides what the states of the agent should be in order to achieve the mission given by the desired behaviour. For example, *I am far away from point A and it is almost three o'clock, so I should achieve a fast velocity towards point A.* Technically, we would say that the Guidance sets the heading angle (towards A) and the speed (fast enough) of the agent.
- The *Control* takes the set points given by the Guidance, and its mission is to track them so that the error between the actual and desired states is zero. For example, *my actual speed is low and the Guidance tells me to go faster, so the Control will push the gas* (the agent's actuator).
- The *Physical System* is composed of the agent and the environment. After the control actuates over the agent, its states and its relation to the environment usually change. Following the previous example, after pushing the gas, the position and velocity of the agent will change, and so will its position with respect to point A. These states and relations are measured by what we call *sensors*.
- The *Navigation* figures out the situation of the agent (its states, its relations with the environment, etc.) from the measurements coming from the sensors. This is the information available to our agent, and based on it the agent makes its decisions at the different levels of *Desired behaviour*, *Guidance* and *Control*.

As I introduced, AI involves many different disciplines. For instance, tools like machine learning or deep learning mostly focus on the highest-level block, the *Desired behaviour*. The *Guidance*, *Navigation* and *Control* blocks mainly involve what we call control engineering. Finally, the physical system involves applied physics, mechanical engineering and other more *practice-oriented* fields. In the first part of this course I will mainly focus on *Guidance*, *Navigation* and *Control*. Once we know how to control an agent, I will proceed to explain how to design the *Desired behaviour*.

We need to understand that the previous block diagram is not 100% accurate. In practice, some blocks overlap each other, and in the end how to design a system depends highly on the kind of problem being addressed. In fact, there is no unique (systematic) way to tackle these kinds of problems.

#### Dynamical Models

The dynamical model of the agents plays a fundamental role in their control and in how they achieve a desired mission. For example, once you take realistic physics into account, you should not expect an airplane to fly backwards.

In this course we will mainly use three kinds of dynamical models for the agents: the first-order dynamics, the second-order dynamics and the unicycle dynamics.

##### 1st order dynamics

A simple approach is to actuate directly over the velocities of the agents or, equivalently, to model them employing the first-order dynamics

$$\dot{p}_i(t) = u_i(t), \qquad (1)$$

where $p_i$ is the position of the agent $i$, either in 2D or 3D, and $u_i$ is the control action over it.

Depending on the particular problem, this differential equation might represent the commanded velocity to a more complex system, e.g. a quadrotor or a legged robot. Then the design of $u_i$ would correspond to the output of the *Guidance* block. On the other hand, if you consider your agent as an object without realistic physics in a video game, you can see the previous equation as the control action from the *Control* block actuating directly over the agent.

##### 2nd order dynamics

A more realistic dynamical model for actual vehicles is the so-called second-order dynamics given by

$$\dot{p}_i(t) = v_i(t), \qquad \dot{v}_i(t) = u_i(t), \qquad (2)$$

where $v_i$ is the agent’s velocity. It is more realistic in the sense that the control input $u_i$ in (2) actuates over the accelerations of the agent, i.e. we consider Newtonian dynamics where the actuators of the system produce a desired force or torque.

##### Unicycle dynamics

In the previous two models we do not restrict the motion of the agent in any direction; the agents are just points that can move freely in space. We now introduce constraints. Constraints that depend only on the states (and not on their time derivatives, a.k.a. velocities) are called *holonomic*. For example, consider two points linked by a rigid rod of length 1 m: these two points cannot move independently and freely anymore, since they will always be separated by exactly 1 m. This is the case of what we call in engineering *the unicycle*. A popular instance is the differential wheeled robot

where two wheels are linked by a rigid rod. The wheels can rotate in both directions. If both rotate with the same angular speed and in the same direction (*common mode*), the unicycle travels straight. On the contrary, if the wheels rotate with the same angular speed but in opposite directions (*differential mode*), then a pure rotation about the centroid of the rod occurs. All the possible velocities of a unicycle can be constructed by combining the common and the differential modes, i.e. we can command the translational and the rotational speeds independently. The unicycle cannot drift or slide sideways; note that this no-slip condition is a constraint on the velocities, so strictly speaking it is a *nonholonomic* constraint. In the previous picture it is clear that the unicycle cannot travel parallel to the linking rigid rod.

We can model these dynamics as follows

$$\dot{x}(t) = v(t)\cos\theta(t), \qquad \dot{y}(t) = v(t)\sin\theta(t), \qquad \dot{v}(t) = u_v(t), \qquad \dot{\theta}(t) = u_\theta(t), \qquad (3)$$

where $v$ is the translational speed, $\theta$ is the heading (with $0$ pointing to the positive X-axis and a positive angular velocity considered clockwise), $x$ and $y$ are the positions of the centroid, and $u_v$ and $u_\theta$ are the corresponding control actions.

In many cases we consider that the unicycle is travelling with a constant $v$, therefore $u_v = 0$. For example, vehicles such as airplanes or boats travelling in cruise mode.

##### Simulation examples

In the following animation you can see examples of the previously explained agents’ dynamics. The **source code** for the simulation can be found here.

- The red triangle is a unicycle with control inputs $u_v = 0$ and a constant $u_\theta$, i.e. it describes a closed orbit (a circumference).
- The green circle is ruled by the second-order dynamics with $u_i = 0$, so it keeps its initial velocity constant.
- The white circles are ruled by the first-order dynamics with an arbitrary and constant $u_i$.

##### Brief explanation of the code

The code is mainly self-explanatory and commented. If you have any doubts or questions, please post them below in the comments section. Regarding the graphics, I just followed the good beginner’s tutorial of SFML. Nevertheless, I have also commented that part of the code.

The file “lesson000.cpp” contains the main loop, where the agents are created and then their states are updated in an infinite loop.

Let me briefly explain the class Agent in the files “agent.hpp” and “agent.cpp”. In this class I have codified the three explained dynamics, namely

```cpp
// For the agent's dynamics we can choose among three different models
void update1stDyn(Eigen::Vector2f);  // kinematical point
void update2ndDyn(Eigen::Vector2f);  // Newtonian point
void updateUnicycle(float, float);   // unicycle
```

The theoretical explanation above was done in continuous time, i.e. for the position signal $p_i(t)$ the time $t$ is a real number. While this is true in a physical system, it is not the case in a numerical simulation, where time advances in discrete steps. For example, in the case of the first-order dynamics, the word *update* means that we are going to calculate the value $p_i(t + \Delta t)$ based on the value of $p_i(t)$, where $t$ is a time instant and $\Delta t$ is a fixed time step employed in the following numerical integration

$$p_i(t + \Delta t) = p_i(t) + \Delta t \, u_i(t), \qquad (5)$$

which is called Euler integration. There are other numerical algorithms for integrating (1), but usually they are computationally more expensive and beyond the scope of this course. In the class Agent, $\Delta t$ is defined by *dt*. Here is an example of how we create an agent in the code

```cpp
Agent agent(i, init_pos, 30*init_vel);    // i is the tag i in "p_i" in (1)
agent.setIntegrationTime(dt*1e-6);        // In seconds, this is \Delta t
agent.setPositionLimits(wwidth, wheight); // Limits of the rendering window
agent.setColorShape(sf::Color::Red);      // Color of the shape to draw
```

and this is, for example, how we update the state of a unicycle. We can check the new position by just drawing it on the rendering window

```cpp
agent.updateUnicycle(0, 30*M_PI/180); // angular velocity of 30 degrees/sec
agent.draw(&window, Shape::triangle); // SFML rendering window; we choose between "triangle" and "circle"
```

and finally an example of the integration of (5) for the 1st order dynamics

```cpp
void Agent::update1stDyn(Eigen::Vector2f u)
{
    // Update of the World Coordinates
    velocity_ = u;
    position_ += u*dt_;

    // For really small velocities we do not update theta.
    // In fact, theta should not be needed for 1st order dynamics,
    // but we keep it to be consistent with the rest of the code.
    speed_ = velocity_.norm();
    if (speed_ > 1e-5)
        theta_ = atan2(velocity_(1), velocity_(0));

    // Update of the UV Coordinates
    setPositionGraphics_();
    setOrientationGraphics_();
}
```

Note that in the code (it is explained there) the World Coordinates have the origin at the bottom left corner, with X positive to the right and Y positive upwards. SFML follows the UV convention, where the origin is at the top left corner, with X positive to the right and Y positive downwards. SFML can manage different coordinate systems, but I have preferred to do the conversion myself in order to be more explicit.

The next lesson will be about how to create flocks, i.e. we will focus on the agents’ *Desired Behaviour* block in a scenario involving hundreds of agents!

- Next Lesson: Flocking.