Course index:

- Home: Artificial intelligence for mobile robots.
- Lesson 0: Modelling a robot/agent.
- Lesson 1: Flocking.

This lesson introduces one of the very first algorithms for swarms of agents. In particular, the algorithm induces in the agents the behavior of fish schools or flocks of birds. The algorithm was introduced in 1986 by Craig Reynolds; you can check it out at his webpage. In this lesson I will discuss the algorithm in more detail, with small modifications, and extend the code from Lesson 0.

First of all, we adopt the unicycle dynamics introduced in Lesson 0 for the agents' dynamics. Consider that we have an arbitrary number of agents; let $p_i \in \mathbb{R}^2$ be the position of agent $i$, and let $\mathcal{N}_i$ be the set of neighboring agents of $i$, for example, the collection of agents within a radius of 1 meter w.r.t. $p_i$. We also denote the position of a neighbor $j \in \mathcal{N}_i$ of agent $i$ as $p_j$.
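As a concrete illustration of the neighbor set $\mathcal{N}_i$, a brute-force radius test can be sketched as follows. The `Vec2` type and the function names here are illustrative only, not part of the lesson's `Agent` class:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal 2-D point type, for illustration only.
struct Vec2 { float x, y; };

static float distance(const Vec2 &a, const Vec2 &b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Returns the indices of the agents within `radius` of agent i,
// excluding agent i itself.
std::vector<std::size_t> neighborsOf(std::size_t i,
                                     const std::vector<Vec2> &positions,
                                     float radius) {
    std::vector<std::size_t> result;
    for (std::size_t j = 0; j < positions.size(); ++j)
        if (j != i && distance(positions[i], positions[j]) < radius)
            result.push_back(j);
    return result;
}
```

Note that in the simulation code later on, the agent itself also passes the radius test; here it is excluded for clarity of the definition.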

The algorithm introduced by Craig Reynolds is quite simple, and when it was presented it lacked a proper mathematical analysis (I believe one is still missing). Therefore, it is difficult to predict precisely the behavior of the whole system of agents. Indeed, as we will see, we will need to set a collection of gains, and a poor choice can make our system unstable or produce undesired behaviors. Nevertheless, the physical meaning of the algorithm is pretty straightforward, and I would say it fits nicely in this introductory course.

The algorithm consists of three concepts:

**Separation**: The agent tries to avoid collisions with its neighbors, i.e., for each neighbor $j \in \mathcal{N}_i$, the agent computes the relative position $p_j - p_i$ and takes the opposite direction. The separation vector is then computed by

$$s_i = -\sum_{j \in \mathcal{N}_i} (p_j - p_i).$$
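A minimal sketch of this separation step, using an illustrative `Vec2` type rather than the lesson's Eigen vectors:

```cpp
#include <vector>

// Minimal 2-D point type, for illustration only.
struct Vec2 { float x, y; };

// Separation: accumulate -(p_j - p_i) over the neighbors' positions,
// i.e., a vector pushing the agent away from each neighbor.
Vec2 separation(const Vec2 &p, const std::vector<Vec2> &neighborPositions) {
    Vec2 s{0.0f, 0.0f};
    for (const Vec2 &q : neighborPositions) {
        s.x -= q.x - p.x;  // opposite direction of the relative position
        s.y -= q.y - p.y;
    }
    return s;
}
```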

**Alignment**: The agent tries to follow the average velocity given by its neighboring agents and itself. This step is slightly different from the original algorithm, where the agents try to follow the average of the headings. The alignment vector is then computed by

$$a_i = \frac{1}{|\mathcal{N}_i| + 1}\Big(v_i + \sum_{j \in \mathcal{N}_i} v_j\Big),$$

where $v_i$ denotes the velocity of agent $i$.
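The alignment step can be sketched like this; again, `Vec2` and the function name are illustrative:

```cpp
#include <vector>

// Minimal 2-D point type, for illustration only.
struct Vec2 { float x, y; };

// Alignment: average of the agent's own velocity and its
// neighbors' velocities.
Vec2 alignment(const Vec2 &ownVelocity,
               const std::vector<Vec2> &neighborVelocities) {
    Vec2 a = ownVelocity;  // the agent itself enters the average
    for (const Vec2 &w : neighborVelocities) {
        a.x += w.x;
        a.y += w.y;
    }
    const float count = 1.0f + static_cast<float>(neighborVelocities.size());
    return {a.x / count, a.y / count};
}
```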

**Cohesion**: The group of neighboring agents tries to stick together. This is done by steering the agent towards the centroid computed from its neighboring agents and itself. The cohesion vector is then computed by

$$c_i = \frac{1}{|\mathcal{N}_i| + 1}\Big(p_i + \sum_{j \in \mathcal{N}_i} p_j\Big) - p_i.$$
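And the cohesion step, sketched with the same illustrative `Vec2` type:

```cpp
#include <vector>

// Minimal 2-D point type, for illustration only.
struct Vec2 { float x, y; };

// Cohesion: vector from the agent towards the centroid of itself
// and its neighbors.
Vec2 cohesion(const Vec2 &p, const std::vector<Vec2> &neighborPositions) {
    Vec2 c = p;  // the agent's own position enters the centroid
    for (const Vec2 &q : neighborPositions) {
        c.x += q.x;
        c.y += q.y;
    }
    const float count = 1.0f + static_cast<float>(neighborPositions.size());
    return {c.x / count - p.x, c.y / count - p.y};
}
```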

In order to compare the three introduced vectors more easily, we proceed to normalize them, and we denote by $\hat{x}$ the unit vector with the same direction as $x$. For computing the desired heading to be followed by agent $i$ we first need to calculate the following vector

$$d_i = k_a \hat{a}_i + k_s \hat{s}_i + k_c \hat{c}_i,$$

where $k_a$, $k_s$ and $k_c$ are positive constants that sort the priority of the three rules for agent $i$. We finally take the desired heading $\theta_i^d$ as the heading described by $d_i$, i.e., $\theta_i^d = \operatorname{atan2}(d_{i_y}, d_{i_x})$.
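Putting the normalization and the weighted sum together, a sketch of the desired-heading computation (illustrative names, not the lesson's code):

```cpp
#include <cmath>

// Minimal 2-D point type, for illustration only.
struct Vec2 { float x, y; };

// Unit vector with the same direction as v (a zero vector is
// returned unchanged).
Vec2 normalized(const Vec2 &v) {
    const float n = std::hypot(v.x, v.y);
    if (n == 0.0f) return v;
    return {v.x / n, v.y / n};
}

// Weighted sum of the three normalized rule vectors, and the
// heading the resulting vector points to.
float desiredHeading(const Vec2 &align, const Vec2 &sep, const Vec2 &coh,
                     float ka, float ks, float kc) {
    const Vec2 ua = normalized(align), us = normalized(sep), uc = normalized(coh);
    const Vec2 d{ka * ua.x + ks * us.x + kc * uc.x,
                 ka * ua.y + ks * us.y + kc * uc.y};
    return std::atan2(d.y, d.x);
}
```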

Since for agent $i$ its actual heading $\theta_i$ is in general different from $\theta_i^d$, we define the heading error as $e_{\theta_i} = \theta_i^d - \theta_i$. We will use this error signal to steer our agent $i$. Recalling from the unicycle dynamics that $\dot{\theta}_i = \omega_i$, we design such control input as

$$\omega_i = k_e e_{\theta_i},$$

also known as a proportional controller, i.e., we command the agent to steer to the right/left if the desired heading is to its right/left. How fast we turn is determined by the constant $k_e > 0$ and the error itself.
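One detail worth watching in an implementation: the raw difference $\theta_i^d - \theta_i$ can have magnitude larger than $\pi$, in which case the agent turns the long way round. Wrapping the error to $(-\pi, \pi]$ avoids this; the following sketch (function names are my own, and the wrapping is an assumption, not something the lesson's code does explicitly) shows one common way:

```cpp
#include <cmath>

// Wrap an angle to (-pi, pi] via atan2, so the proportional
// controller always commands the shorter turn.
float wrapToPi(float angle) {
    return std::atan2(std::sin(angle), std::cos(angle));
}

// Proportional heading controller: omega = ke * (theta_d - theta),
// with the error wrapped first.
float headingControl(float theta_d, float theta, float ke) {
    return ke * wrapToPi(theta_d - theta);
}
```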

##### Simulation Example

In the following animation you can check out the algorithm explained above. The **source code** for such a simulation can be found here.

There are 60 agents, all of them with arbitrary initial velocities, with speeds between 25 and 35 units per second. The gains $k_a$, $k_s$, $k_c$ and $k_e$ are fixed constants, but at each time instant we multiply them by a random value. This randomness in the gains gives the feeling of having *more alive* agents. If you want them to look more *robotic*, just choose constant gains throughout the whole experiment. As I have mentioned before, this algorithm lacks a proper mathematical analysis; therefore there is no guaranteed selection of gains that makes the whole system stable, e.g., the agents may not follow their neighbors, they may spin about themselves, or they may show other kinds of ill behaviors. So a trial-and-error process (together with some experience) is required in order to tune the gains of the algorithm.

##### Brief explanation of the code

The flocking algorithm has been implemented in the agent class. For the simulation, once an agent goes out of the screen, it reappears at a random position within the screen. I have not mentioned it before, but obviously, if an agent does not have any neighbors, then it does not modify its velocity at all. The main code can be found in *lesson001.cpp*.

```cpp
void Agent::flocking(std::vector<Agent> *agents, float radius,
                     float kva, float ks, float kc, float ke)
{
    int neighbor_count = 0;
    Eigen::Vector2f velAvg = Eigen::Vector2f::Zero();
    Eigen::Vector2f centroid = Eigen::Vector2f::Zero();
    Eigen::Vector2f separation = Eigen::Vector2f::Zero();
    Eigen::Vector2f desired_velocity = Eigen::Vector2f::Zero();

    // We check all the agents on the screen.
    // Any agent closer than radius units is a neighbor; note that
    // the agent itself always passes this test, so neighbor_count >= 1.
    for (std::vector<Agent>::iterator it = agents->begin();
         it != agents->end(); ++it) {
        Eigen::Vector2f neighbor = it->getPosition();
        Eigen::Vector2f relativePosition = neighbor - position_;

        if (relativePosition.norm() < radius) { // We have found a neighbor
            neighbor_count++;
            centroid += it->getPosition(); // We add all the positions
            velAvg += it->getVelocity();   // We add all the velocities
            // Vector pointing in the opposite direction w.r.t. the neighbor
            separation -= relativePosition;
        }
    }

    centroid /= neighbor_count; // All the positions over the number of neighbors
    velAvg /= neighbor_count;   // All the velocities over the number of neighbors

    // Relative position of the agent w.r.t. the centroid
    Eigen::Vector2f cohesion = centroid - position_;

    // In order to compare the following vectors we normalize all of them,
    // so they have the same magnitude. Later on, with the gains
    // kva, ks and kc, we assign which vectors are more important.
    velAvg.normalize();
    cohesion.normalize();
    separation.normalize();

    if (neighbor_count == 1) // The only neighbor is the agent itself
        desired_velocity = velocity_;
    else
        desired_velocity = kva*velAvg + ks*separation + kc*cohesion;

    float error_theta = atan2(desired_velocity(1), desired_velocity(0)) - theta_;
    updateUnicycle(0, ke*error_theta);

    // If the agent leaves the screen, it reappears at a random
    // position within the screen with a random velocity.
    if (position_(0) < 0 || position_(0) > limitX_ ||
        position_(1) < 0 || position_(1) > limitY_) {
        position_ = limitX_ / 2 * Eigen::Vector2f::Ones()
                  + limitX_ / 2 * Eigen::Vector2f::Random();
        velocity_ = Eigen::Vector2f::Random();
        theta_ = atan2(velocity_(1), velocity_(0));
    }
}
```

- Previous lesson: Modelling a robot/agent.