# 1: Flocking

This post introduces one of the very first algorithms for swarms of agents. In particular, the algorithm induces in the agents the behavior of fish schools or flocks of birds. The algorithm was introduced in 1986 by Craig Reynolds; you can check it out at his webpage. In this post I will discuss the algorithm in more detail, with small modifications, and extend the code from post 0.

First of all, we take the unicycle dynamics introduced in post 0 as the agents' dynamics. Consider that we have an arbitrary number $n$ of agents. Let $p_i$ be the position of agent $i$ and $\mathcal{N}_i$ be the set of neighboring agents of $i$, for example, the collection of agents within a radius of 1 meter w.r.t. $p_i$. We also denote the position of a neighbor of $i$ by $p_k, k\in\mathcal{N}_i$. The algorithm introduced by Craig Reynolds is quite simple, and when it was presented it lacked a proper mathematical analysis (I believe one is still missing). Therefore, it is difficult to predict precisely the behavior of the whole system of agents. Indeed, as we will see, we will need to set a collection of gains that can make our system unstable or exhibit undesired behaviors. Nevertheless, the physical meaning of the algorithm is pretty straightforward, and I would say it fits nicely in this introductory course.
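Since the flocking rule will only act on the heading, it helps to keep the unicycle dynamics from post 0 in mind. Here is a minimal sketch of one Euler integration step; the names (`Unicycle`, `step`) are illustrative and not taken from the course code:

```cpp
#include <cmath>

// Hypothetical minimal state for one unicycle agent.
struct Unicycle {
    double x = 0.0, y = 0.0;   // position p_i
    double theta = 0.0;        // heading theta_i
    double speed = 1.0;        // constant forward speed
};

// One Euler step of the unicycle dynamics:
//   x' = v cos(theta), y' = v sin(theta), theta' = u_theta
void step(Unicycle &a, double u_theta, double dt) {
    a.x += a.speed * std::cos(a.theta) * dt;
    a.y += a.speed * std::sin(a.theta) * dt;
    a.theta += u_theta * dt;
}
```

Note that the only control input we will use for flocking is `u_theta`: the forward speed stays constant and the algorithm just steers the heading.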

The algorithm consists of three concepts:

Separation: The agent $i$ tries to avoid collisions with its neighbors, i.e., for each neighbor $k$, the agent $i$ computes the relative position $z_k = p_k - p_i$ and takes the opposite direction. The separation vector $s_i$ is then computed by

$$s_i = -\sum_{k\in\mathcal{N}_i} z_k.$$

Alignment: The agent $i$ tries to follow the average velocity of its neighboring agents and itself. This step is slightly different from the original algorithm, where the agents try to follow the average of the headings. The alignment vector $a_i$ is then computed by

$$a_i = \frac{\dot p_i + \sum_{k\in\mathcal{N}_i} \dot p_k}{1 + |\mathcal{N}_i|}.$$

Cohesion: The group of neighboring agents tries to stick together. This is done by steering the agent $i$ towards the centroid computed from its neighboring agents and itself. The cohesion vector $c_i$ is then computed by

$$c_i = \frac{p_i + \sum_{k\in\mathcal{N}_i} p_k}{1 + |\mathcal{N}_i|} - p_i.$$

In order to compare the three introduced vectors more easily, we normalize them, and we denote by $\hat x$ the unit vector with the same direction as $x$. For computing the desired heading to be followed by the agent $i$ we first calculate the vector

$$d_i = k_s \hat s_i + k_a \hat a_i + k_c \hat c_i,$$

where $k_s, k_a$ and $k_c \in \mathbb{R}$ are constant gains that set the priority of the three rules for the agent $i$. We finally take the desired heading $\theta_{d_i}$ as the heading described by $d_i$, i.e., $\theta_{d_i} = \mathrm{atan2}(d_{i_y}, d_{i_x})$.
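The combination of the three normalized vectors into $d_i$ and the desired heading $\theta_{d_i}$ can be sketched as follows; the `Vec2` type and the function names are illustrative, not the course API:

```cpp
#include <array>
#include <cmath>

using Vec2 = std::array<double, 2>;

// Normalize a vector; the zero vector is returned unchanged.
Vec2 normalized(Vec2 v) {
    double n = std::hypot(v[0], v[1]);
    if (n > 0.0) { v[0] /= n; v[1] /= n; }
    return v;
}

// Desired heading from the separation, alignment and cohesion vectors:
//   d_i = ks*s_hat + ka*a_hat + kc*c_hat,  theta_d = atan2(d_y, d_x)
double desiredHeading(Vec2 s, Vec2 a, Vec2 c,
                      double ks, double ka, double kc) {
    Vec2 sh = normalized(s), ah = normalized(a), ch = normalized(c);
    double dx = ks * sh[0] + ka * ah[0] + kc * ch[0];
    double dy = ks * sh[1] + ka * ah[1] + kc * ch[1];
    return std::atan2(dy, dx);
}
```

Because the three vectors are normalized before being weighted, only the gains decide which rule dominates, regardless of how far away the neighbors are.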

Since the actual heading $\theta_i$ of agent $i$ is in general different from $\theta_{d_i}$, we define the heading error as $e_{\theta_i} = \theta_{d_i} - \theta_i$. We will use this error signal to steer our agent $i$. Recalling from the unicycle dynamics that $\dot \theta_i = u_{\theta_i}$, we design the control input as $u_{\theta_i} = k_e e_{\theta_i},$

also known as a proportional controller, i.e., we command the agent to steer to the right/left if the desired heading is to its right/left. How fast the agent turns is determined by the constant $k_e$ and the error $e_{\theta_i}$ itself.
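A minimal sketch of this proportional steering law follows. Wrapping the error to $(-\pi, \pi]$ is a practical detail I am adding here so the agent always turns the short way around; the function names are illustrative:

```cpp
#include <cmath>

// Wrap an angle to (-pi, pi].
double wrapAngle(double a) {
    return std::atan2(std::sin(a), std::cos(a));
}

// Proportional steering: u_theta = ke * (theta_d - theta), followed by
// one Euler step of theta' = u_theta. The dt and ke values are
// illustrative choices.
double steerStep(double theta, double theta_d, double ke, double dt) {
    double e = wrapAngle(theta_d - theta);
    return theta + ke * e * dt;
}
```

Iterating this step drives $\theta_i$ towards $\theta_{d_i}$; without the wrap, a desired heading just across the $\pm\pi$ discontinuity would make the agent turn the long way around.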

##### Simulation Example

In the following animation you can check out the algorithm explained above. The source code for the simulation can be found here. There are 60 agents, all of them with arbitrary velocities with speeds between 25 and 35 units per second. The gains $k_s, k_a, k_c$ and $k_e$ are chosen as $1, 0.3, 1$ and $1.3$ respectively, but at each time instant we multiply them by a random value between $0.9$ and $1.2$. This randomness in the gains makes the agents feel more alive. If you want them to look more robotic, just choose constant gains throughout the whole experiment. As I mentioned before, this algorithm lacks a proper mathematical analysis, so there is no selection of gains that guarantees that the whole system is stable; e.g., the agents may not follow their neighbors, they may spin about themselves, or they may exhibit other ill behaviors. Therefore, a trial and error process (together with some experience) is required in order to tune the gains of the algorithm.
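The per-step gain randomization described above can be sketched like this; the function name and the choice of generator are illustrative assumptions on my part:

```cpp
#include <random>

// Scale a gain by a uniform random factor in [0.9, 1.2], as described
// in the text, so the agents look less robotic.
double jitterGain(double gain, std::mt19937 &rng) {
    std::uniform_real_distribution<double> dist(0.9, 1.2);
    return gain * dist(rng);
}
```

At every simulation step each of $k_s, k_a, k_c, k_e$ would be passed through this function before being used in the control law.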

##### Brief explanation of the code

The flocking algorithm has been implemented in the Agent class. For the simulation, once an agent goes out of the screen, it appears again at a random position within the screen. I have not mentioned it before, but obviously, if an agent does not have any neighbors, then it does not modify its velocity at all. The main code can be found in lesson001.cpp.

```cpp
void Agent::flocking(std::vector<Agent> *agents, float radius,
                     float kva, float ks, float kc, float ke)
{
    int neighbor_count = 0;
    Eigen::Vector2f velAvg = Eigen::Vector2f::Zero();
    Eigen::Vector2f centroid = Eigen::Vector2f::Zero();
    Eigen::Vector2f separation = Eigen::Vector2f::Zero();
    Eigen::Vector2f desired_velocity = Eigen::Vector2f::Zero();

    // We check all the agents on the screen.
    // Any agent closer than radius units is a neighbor. Note that the
    // agent itself is always counted, consistently with the formulas above.
    for (std::vector<Agent>::iterator it = agents->begin();
         it != agents->end(); ++it) {

        Eigen::Vector2f neighbor = it->getPosition();
        Eigen::Vector2f relativePosition = neighbor - position_;

        if (relativePosition.norm() <= radius) {
            // We have found a neighbor
            neighbor_count++;

            // We add all the positions
            centroid += it->getPosition();

            // We add all the velocities
            velAvg += it->getVelocity();

            // Vector pointing in the opposite direction w.r.t. the
            // neighbor
            separation -= relativePosition;
        }
    }

    centroid /= neighbor_count; // All the positions over the number of neighbors
    velAvg /= neighbor_count;   // All the velocities over the number of neighbors

    // Relative position of the centroid w.r.t. the agent
    Eigen::Vector2f cohesion = centroid - position_;

    // In order to compare the following vectors we normalize all of them,
    // so they have the same magnitude. Later on, with the gains
    // kva, ks and kc, we assign which vectors are more important.
    velAvg.normalize();
    cohesion.normalize();
    separation.normalize();

    // If the only "neighbor" is the agent itself, keep the current velocity
    if (neighbor_count == 1)
        desired_velocity = velocity_;
    else
        desired_velocity = kva*velAvg + ks*separation + kc*cohesion;

    float error_theta = atan2(desired_velocity(1), desired_velocity(0))
        - theta_;

    // Wrap the error to (-pi, pi] so the agent turns the short way around
    error_theta = atan2(sin(error_theta), cos(error_theta));

    updateUnicycle(0, ke*error_theta);

    // If the agent leaves the screen, respawn it at a random position
    // uniformly distributed within the screen
    if (position_(0) < 0 || position_(0) > limitX_
        || position_(1) < 0 || position_(1) > limitY_)
    {
        position_ = 0.5f * (Eigen::Vector2f::Random() + Eigen::Vector2f::Ones())
                        .cwiseProduct(Eigen::Vector2f(limitX_, limitY_));
        velocity_ = Eigen::Vector2f::Random();
        theta_ = atan2(velocity_(1), velocity_(0));
    }
}
```