The flock behavior: from scratch till now

Introduction

The flock behavior is one of the last things we dealt with, simply because of the amount of work that came before it, but it is one of the most important features of the project and we spent a lot of time implementing, coding and testing it to bring it close to our expectations. This article is the most important one concerning the flock behavior: it may revisit points that have already been mentioned before, but only to give more details and further explanations.

Our supervisor let us know that a former student had already worked on mobile robots moving in formation1. It was really interesting to see how the same project can be approached in different ways: instead of giving most of the control to the robots in the flock, we decided to put the leader in charge of almost everything. This decision made the programming and implementation different, but we still ended up with a very autonomous system at the end of the project.

This article is structured the same way the flock behavior was designed: in progressive layers. We integrated one behavior at a time, tested it over and over, then moved on to the next one. For every step, we describe what we wanted to do, why we wanted to do it, what we expected and what the results were.

 

The different steps of the flock

The basic flock

Expectations

This was our very first approach. Basically, the image analysis returns two coordinates: one for the leader and one for the following robot. The follower was simply supposed to stay “below” the leader (we take the coordinates of the leader and subtract a value called “desiredSpace” that defines the spacing between neighboring units in the formation). With such an implementation, we expected the follower to roughly mimic the leader, with a small delay that we were ready to tackle if it turned out to be too large.

Results

We posted videos in the previous article (under “Very first working test”) showing that the behavior was working but wasn’t fluid at all. The robot was properly changing its position according to the leader’s movements; we just needed to tune our parameters to make it more reactive.

Improvements

In order to understand how we improved the fluidity, we have to explain how the flock is monitored. The code itself is quite long and it would be too much to paste it all here; instead, a basic representation should do the trick and expose the inner mechanics of the algorithm.


[Figure 1] Cycle of the flock handling process

 

As you might have pictured it, the big loop is indeed an infinite loop in our program. Every time a cycle completes, the system pauses for a configurable amount of time so as not to send too many commands to the bricks. If we didn’t pause, the program would still work, but a lot of useless commands would be sent to the bricks without ever being executed, since they would be overwritten by the next ones (if a robot is executing a command and receives a new one, it aborts the previous one and executes the most recent).
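To give you a rough idea (the method names below are placeholders, not our actual code), the cycle could be sketched like this:

// Minimal sketch of the cycle from Figure 1; the method names are placeholders
public void runFlockCycle() throws InterruptedException
{
	long pauseMillis = 250; // configurable pause between two cycles
	while (true)
	{
		analyseImage();                  // update the positions of the leader and the units
		computeDesiredPositions();       // build the target formation
		assignAndSendMovementCommands(); // give each brick its movement command
		Thread.sleep(pauseMillis);       // wait so as not to flood the bricks with commands
	}
}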

All of these explanations are here to help you understand our system and to show where we tuned our parameters to fix the delay problem. First, we changed the pause in the loop from 1 second to 0.25 seconds, which improved the response time. Secondly, we changed a layer in “Assign and send movement commands”. This minor change was actually made on the brick itself: the robot is asked to move with a speed inversely proportional to the distance it is asked to move. Nevertheless, an overshoot problem remained and we had to tune another parameter to fix it: around each desired location we define a threshold, a circle of tolerance that decides whether the robot is considered in position or not. We changed it from 10 cm to 30 cm and that did it.

At this point, tuning a few parameters of our system and adding a proportional correction turned our jerky system into a reliable and fluid one.

 

A dynamic and oriented flock

Expectations

At this point, one robot was properly following the other one. But the “Compute desired positions” layer was quite simple, as we mentioned before: it only translated the position of the leader downwards and gave it to the other robot. We wanted to allow an arbitrary number of robots in the flock and have the flock adapt itself in real time to this number. Besides, we wanted to add an “orientation” feature: instead of staying below the leader, the flock should stay behind it (that is to say, in the direction opposite to the one the leader is facing). Why such a choice? If our project were only meant to work in straight lines we wouldn’t mind, but we really wanted to implement something coherent for direction changes. This feature would give the flock the ability to actually follow the leader instead of naively copying the leader’s gestures.

How we implemented it

The algorithm is once more quite detailed; instead of explaining the code line by line, we split its functioning into its three biggest parts in order to show you the mechanics.

[Figure 2] The algorithm detects the number of units present (4 in our case). [Figure 3] It draws a polygon with n sides (n being the number of detected units), with the leader at the center… [Figure 4] …and then translates the polygon so that its first vertex coincides with the leader. All the other vertices are the desired positions for the flock.
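To make those three steps more concrete, here is a minimal sketch of such a “Compute desired positions” layer. It is only an illustration, not our actual code: we assume java.awt.Point, that n counts the leader among the detected units, and that the radius of the circle is given (see the Improvements below for how we choose it).

// Illustrative sketch of Figures 2-4: a regular polygon centered on the leader,
// then translated so that its first vertex coincides with the leader
public List<Point> computeDesiredPositions(int n, Point leaderPos, double leaderHeading, double radius)
{
	List<Point> positions = new ArrayList<Point>();
	// offset of the first vertex, pointing in the direction the leader is facing
	double firstX = radius * Math.cos(leaderHeading);
	double firstY = radius * Math.sin(leaderHeading);
	for (int i = 0; i < n; i++)
	{
		double angle = leaderHeading + 2 * Math.PI * i / n;
		// vertex of the centered polygon, shifted so that vertex 0 lands exactly on the leader;
		// the remaining vertices are the desired positions for the flock, behind the leader
		int x = (int) Math.round(leaderPos.x + radius * Math.cos(angle) - firstX);
		int y = (int) Math.round(leaderPos.y + radius * Math.sin(angle) - firstY);
		positions.add(new Point(x, y));
	}
	return positions;
}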

 

After the algorithm has run, we have a set of robot positions and a set of desired positions. The next challenge was to find the best position to assign to each robot while keeping the result optimal for the whole flock. For this matter, several solutions were considered:

  • look at all the possible assignments and take the best one ~ O(n!), which would barely be acceptable even though we work with four units at most (we rejected this solution because our algorithm is supposed to work with n robots, and who could accept an algorithm with such a complexity…)
  • look at the best solution for each individual ~ O(n²)
The second solution tends to be really close to the first one (which is optimal in terms of distance but not in computation time), especially when there are not that many robots. This is why we chose it; and instead of processing the units in the same order at each iteration, we randomize the order every time (this avoids some blocking situations).
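Here is a minimal sketch of this greedy assignment, assuming a Brick class exposing getPosition() as in the code shown further down (the variable names are ours):

// Illustrative O(n²) greedy assignment with a randomized processing order
public Map<Brick, Point> assignPositions(List<Brick> bricks, List<Point> desiredPositions)
{
	Map<Brick, Point> assignment = new HashMap<Brick, Point>();
	List<Point> remaining = new ArrayList<Point>(desiredPositions); // both lists have the same length
	List<Brick> order = new ArrayList<Brick>(bricks);
	Collections.shuffle(order); // randomize the order at every iteration to avoid blocking situations
	for (Brick b : order)
	{
		Point best = null;
		double bestDistance = Double.MAX_VALUE;
		for (Point candidate : remaining) // keep the closest still-free desired position
		{
			double d = b.getPosition().distance(candidate);
			if (d < bestDistance)
			{
				bestDistance = d;
				best = candidate;
			}
		}
		remaining.remove(best);
		assignment.put(b, best);
	}
	return assignment;
}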

Results

[Video 1] Looking at the flock self-adjusting in real-time

 

We got exactly what we wanted for this feature: we can see how the different polygons take shape as robots come onto the field, and this works whatever the direction of the leader. Nevertheless, the robots are not moving in this video because they could intersect each other’s paths and therefore ruin the flock. This is why the next step was crucial: we had to deal with avoidance. [Video 1]

Improvements

The first version of the algorithm that computed the positions for the robots only needed a radius for the circle containing the polygon of units. The problem: the more units in the flock, the closer together they would be. To fix that, the radius of this circle has to change according to the number of units present in the flock. Using basic trigonometry, we came up with a formula for this radius that keeps the distance between neighboring units constant.
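For the record, the relation comes from the chord length of a regular polygon: two neighboring vertices on a circle of radius r are 2·r·sin(π/n) apart, so keeping a constant spacing desiredSpace between neighbors gives something like (variable names are ours):

// radius of the formation circle that keeps neighboring vertices desiredSpace apart
double radius = desiredSpace / (2 * Math.sin(Math.PI / n));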

This arrangement works fine with a small number of robots, but the bigger the flock gets, the bigger the polygon becomes. This can be annoying, especially if we want to use as little floor space as possible; the unused area grows with the square of the number of units and the distance between them (2.8 m² for 10 units, for instance). To solve that, we could change the “Compute desired positions” layer: instead of making one polygon of robots, we could nest several polygons inside each other and save all that unused space. Figure 5 shows some of the ideas we came up with.

[Figure 5] Templates we could use for improved flock positions (with numerous robots). On the left template, we double the number of units on the outer belt; on the right template, we add one more unit to the outer belt. These are only examples; we don’t have a precise idea of which one is or should be the best, it is just food for thought.

We could even consider dispatching the units all around the leader: the leader would be the center of all those nested polygons. It could be a real advantage with a lot of units in the flock: they would all be in an optimal layout to stay inside the FOV. Nevertheless, it would cause a lot of problems with the avoidance that we are going to develop in the next section.

Implementing the movements of the flock

Expectations

At this point, we had everything working in theory. We say in theory because every time a robot is asked to go to a position, it will go where it is asked to go without asking any further questions. Basically, this section describes how the “Assign and send movement commands to bricks” layer [Figure 1] is designed. In order to have a stable system, we had to find a way to solve the following problems:

  • The robots in the flock should not bump into each other;
  • The robots should try to stay in the FOV of the camera in order to stay alive in the flock (if the robot is not detected for a while, it will be deleted from the flock);
  • The robots in the flock should do their best in order to avoid the leader when maneuvering;
  • Prioritize all those behaviors so that the overall system stays coherent and functional.

How we implemented it

How the robots avoid each other

As soon as the “Compute desired positions” layer [Figure 1] has been executed, we have two things: a list of bricks in the flock with their positions, and a set of desired positions (both should have exactly the same length). We take a robot at random and assign it to a position. This is the point where the following algorithm is triggered:

public void giveBrickDirection(Brick b, Point p)
{
	// as long as the brick is not already close enough to its target...
	while( distancePositions(b.getPosition(), p) > proximityThreshold )
	{
		if(movementPossible(b, p))
		{
			// ...send it there if the path is clear
			b.setGoToPostion(p);
			return;
		}
		else
		{
			// ...otherwise pull the target back toward the brick by a third of the remaining distance
			p.translate((-p.x+b.getPosition().x)/3, (-p.y+b.getPosition().y)/3);
		}
	}
}

The mechanics are that simple: as long as the distance between the robot and its goal position is larger than the proximity threshold (the “error distance” we define to establish when a robot is close enough to its target), we check whether the movement to the goal position is possible. If it is, we send the command that moves the robot to the desired position. If not, we shorten the commanded move by 33% and try again. The loop always terminates: either the robot is allowed to make the (possibly shortened) movement within a few iterations, or the remaining distance eventually drops below the threshold and the robot stands still, waiting for the next command.

The black box here is the method movementPossible(Brick b, Point p). In short, the method goes through the list of all the bricks and checks that the movement of the considered brick b will not cross any other brick’s location or movement. To make it even clearer, take a look at Figure 6.

[Figure 6] How the “collision area” is defined. As long as nothing is inside or enters the green area, the red robot will be allowed to make its move.

 

If the red brick has to move, we compare it against every other brick (the blue one here, for instance) to make sure there is no conflict. If the blue brick is not moving (as in our example), we check that its position is not inside the “no collision area”; if it isn’t, the movement is allowed. If the blue brick were moving, we would do two more checks: one to verify that the blue robot’s desired location is not inside the “no collision area”, and one to verify, with basic geometry rules, that the two paths do not intersect.
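As an illustration only (our real implementation differs in its details), such a check could look like the sketch below, where the “no collision area” is approximated by a distance-to-segment test; the margin value and the accessors isMoving() and getGoToPosition() are assumptions.

// Illustrative version of movementPossible, using java.awt.geom.Line2D
public boolean movementPossible(Brick b, Point p)
{
	final double margin = 30.0; // assumed half-width of the "no collision area"
	Line2D path = new Line2D.Double(b.getPosition(), p);
	for (Brick other : getConsideredBricks())
	{
		if (other == b)
			continue;
		// the other brick itself must stay out of the collision area around our path
		if (path.ptSegDist(other.getPosition()) < margin)
			return false;
		if (other.isMoving())
		{
			// same check on its destination, plus a crossing test between the two paths
			if (path.ptSegDist(other.getGoToPosition()) < margin)
				return false;
			if (path.intersectsLine(new Line2D.Double(other.getPosition(), other.getGoToPosition())))
				return false;
		}
	}
	return true;
}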

How the robots survive (from being excluded from the FOV)

This behavior was really simple to design and implement; here is some code to illustrate it:

// if the unit is about to leave the FOV, send it to the midpoint between itself and the leader (brickInControl)
if (  getConsideredBricks().get(i).isOnEdge() )
{
	giveBrickDirection(currentBrick,
			new Point((brickInControl.getPosition().x-brickPosition.x)/2+brickPosition.x,
					(brickInControl.getPosition().y-brickPosition.y)/2+brickPosition.y));
}

Basically, every brick has a boolean attribute “isOnEdge” given by the drone. If the robot gets too close to the limit of the FOV, “isOnEdge” switches to true and we simply ask the robot to get closer to the leader. Since the drone is supposed to stay on top of the leader, a robot getting close to the edge will therefore move back toward the leader, i.e. toward the center of the FOV.

How the robots avoid the leader

This feature works exactly the same way as the previous one. We check whether the robot is too close to the leader. If not, the robot proceeds with its normal routine (mentioned previously); if so, the robot has to “escape” from the leader. Why make things complicated when a simple solution works like a charm?
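As a sketch only (the threshold name and the one-half factor are assumptions, not our actual values), the “escape” can simply push the unit further along the leader-to-unit direction:

// if the unit is too close to the leader, step away along the leader->unit direction
if ( distancePositions(currentBrick.getPosition(), brickInControl.getPosition()) < leaderProximityThreshold )
{
	Point unitPos = currentBrick.getPosition();
	Point leaderPos = brickInControl.getPosition();
	giveBrickDirection(currentBrick,
			new Point(unitPos.x + (unitPos.x - leaderPos.x) / 2,
					unitPos.y + (unitPos.y - leaderPos.y) / 2));
}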

How we merge all those behaviors together

The order of those behaviors has an important impact on the global behavior of the system. If we take a closer look, we can see that some commands contradict each other (survival and leader avoidance, for instance): this is why we took a long time to prioritize each behavior according to the way we wanted the system to respond [Figure 7].

[Figure 7] The prioritization of the movement behavior

 

Our first goal was to keep as many units as possible in the flock, which is why the survival behavior (staying inside the FOV) has the highest priority. Leader avoidance is also crucial for us, because we know how annoying it is to be blocked by another unit while trying to move around: this is why it was given the second priority. Finally, if none of the previous behaviors has been triggered, the normal movement behavior runs. In each of these cases, when a robot is asked to go to a position, the giveBrickDirection method is called and we check whether the robot is allowed to move (or we try to make it perform part of the movement).
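Putting everything together, the dispatch below reflects the order of Figure 7 (the helper methods are only illustrative shorthands for the pieces of code shown earlier):

// 1. survival first: stay inside the FOV by heading back toward the leader
if (currentBrick.isOnEdge())
{
	giveBrickDirection(currentBrick, halfwayToLeader(currentBrick));
}
// 2. then leader avoidance: escape if the leader comes too close
else if (tooCloseToLeader(currentBrick))
{
	giveBrickDirection(currentBrick, escapePointFromLeader(currentBrick));
}
// 3. finally the normal behavior: join the desired position in the formation
else
{
	giveBrickDirection(currentBrick, desiredPositionFor(currentBrick));
}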

Results

These videos are the ones we took at the end of our project. Everything works as we expected and described above, whether using a webcam on the ceiling or using the drone. Nonetheless, if we had more time, there would be a lot more to do to improve this project, and we will present a few ideas in the next section. In the meantime, here are all the videos relevant to what has been described above. [Videos 2-5]

 

[Video 2] From above: the green robot is the leader for as long as it appears on screen; as soon as it goes out of the FOV, the yellow one takes the lead. It is a great video to see how the leader avoidance works: every time the leader changes direction and heads toward a unit, that unit escapes and lets the leader pass. [Video 3] From above: green has the lead. This time, we can see the red robot go out of the screen (the edge limits were set very low for this video) and, as soon as it has disappeared, you can see the blue robot changing the formation.
[Video 4] From the field: blue is the leader and we use the drone. You will see a lot of errors in this video: the yellow color is not always well detected, the avoidance is not that well tuned (we had to increase the proximity threshold) and the orientation of the robots was not that well handled. The yellow unit died quite often even though we kept putting it back in the flock. [Video 5] From the field: blue is the leader and we use the drone once more. You can see the dynamic flock (at the beginning, when inserting units one at a time) and all the avoidance behaviors (all the units try not to bump into the leader, and you can watch at the end, before the crash, how the red and the green units slow down so as not to cross each other’s path).

 

Improvements

First of all, we needed the robots to keep a fixed orientation. It could have been done with a compass sensor, and we actually did it, but we couldn’t cope with all the magnetic fields in the room we were working in, and this hampered our project to some extent (using a PID correction to hold the orientation almost solved the problem). This is the reason why, in some videos, you may see us manually setting the robots straight on the floor. So, one major improvement: restore the thread running on the robots that keeps them facing the same direction as the drone.
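For reference, the orientation correction was a classic PID on the heading error; here is a minimal sketch of such a correction (the gains, the names and the degree convention are assumptions, not our actual values):

// Illustrative heading-hold PID; measuredHeading would come from the compass sensor
double kp = 1.2, ki = 0.0, kd = 0.3; // assumed gains
double integral = 0.0, previousError = 0.0;

public double headingCorrection(double targetHeading, double measuredHeading, double dt)
{
	double error = normalizeAngle(targetHeading - measuredHeading);
	integral += error * dt;
	double derivative = (error - previousError) / dt;
	previousError = error;
	// the result is applied as a rotation speed offset on the wheels
	return kp * error + ki * integral + kd * derivative;
}

private double normalizeAngle(double a) // wrap the error into [-180, 180] degrees
{
	while (a > 180) a -= 360;
	while (a < -180) a += 360;
	return a;
}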

Second and biggest improvement: adding behaviors to the robots. They could, for instance, have an obstacle avoidance behavior of their own, to give them more responsibilities. They could try to find their way back into the FOV when lost (and the rest of the flock could wait for them, or even go looking for them). Giving the robots more autonomous behavior would without a doubt improve the project, but we should always keep in mind that the leader is the hive-mind of the system, and the robots have to give it the highest priority.

 
References

  1. Jakob Fredslund, Simplicity Applied in Projects Involving Embodied, Autonomous Robots, PhD dissertation, pp. 67-124.

