
The flock behavior: from scratch till now

Introduction

The flock behavior is one of the last things we had to deal with, because of the amount of work that came before it, but it is one of the most important features of the project, and we spent a lot of time implementing, coding and testing it to bring it close to our expectations. This article is the most important one concerning the flock behavior: it may revisit points that have already been mentioned before, but only to give more details and further explanations.

Our supervisor let us know that a former student had already worked on mobile robots moving in formation [1]. It was really interesting to see how the same project can be approached in different ways: instead of giving most of the control to the robots in the flock, we decided to put the leader in charge of almost everything. This decision made the programming and implementation different, but we ended up with a very autonomous system at the end of the project.

This article is structured the same way the flock behavior was designed: in progressive layers. We integrated one behavior at a time, tested it over and over, then moved on to the next one. For every step, we will explain what we wanted to do, why we wanted to do it, what we expected and what the results were.

 

The different steps of the flock

The basic flock

Expectations

This was the very first approach we took. Basically, the image analysis returns two coordinates: one for the leader and one for the following robot. The following robot was only supposed to stay “below” the leader (we take the coordinates of the leader and subtract a value called “desiredSpace” that defines the space between neighboring units in the formation). With such an implementation, we expected the following robot to act roughly like the leader, with a small delay that we were ready to tackle if it became too large.

Results

We posted videos in the previous article (under “Very first working test”) showing that the behavior worked but was not fluid at all. The robot was properly changing its position according to the leader's movements; we just needed to tune our parameters to make it more reactive and better suited to our needs.

Improvements

In order to understand how we improved the fluidity, we have to explain how the flock is monitored. The code itself is really long and would be too much to present in full here; instead, a basic representation should do the trick and expose the inner mechanics of the algorithm.


[Figure 1] Cycle of the flock handling process

 

As you might have pictured, the big loop is indeed an infinite loop in our program. Every time a cycle has completed, the system pauses for a certain time (which we can set) so as not to send too many commands to the bricks. If we did not do this, the program would still work, but a lot of useless commands would be sent to the bricks without ever being executed, because they would be overwritten by the next ones (if a robot is executing a command and receives a new one, it aborts the previous one and executes the most recent).
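As a rough sketch, the cycle of Figure 1 can be pictured as follows (simplified, with placeholder method names rather than our actual code):

public void runFlock() throws InterruptedException
{
	while (true)
	{
		updateBrickPositions();           // read the positions returned by the image analysis
		computeDesiredPositions();        // the "Compute desired positions" layer
		assignAndSendMovementCommands();  // the "Assign and send movement commands to bricks" layer
		Thread.sleep(pauseBetweenCycles); // pause so we do not flood the bricks with commands
	}
}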

All of these explanations are here to make our system understandable and to show where we tuned our parameters to fix the delay problem. First, we changed the pause in the loop from 1 second to 0.25 seconds, which improved the response time. Secondly, we changed a layer in “Assign and send movement commands”. This minor change was actually made on the brick itself: the robot is asked to move with a speed inversely proportional to the distance it is asked to travel. Nevertheless, an overshoot problem remained and we had to tune another parameter to fix it: around each desired location we define a threshold, a circle of tolerance that decides whether the robot is considered in position or not. We changed it from 10 cm to 30 cm and it worked.

At this point, tuning a few parameters of our system and adding a proportional correction transformed our jerky system into a reliable and fluid one.

 

A dynamic and oriented flock

Expectations

At this point, one robot was properly following the other one. But the “Compute desired positions” layer was quite simple, as we mentioned before: it only translated the position of the leader downwards and gave it to the other robot. We wanted to support an arbitrary number of robots in the flock, and we wanted the flock to adapt itself in real time according to this number. Besides, we wanted to add an “orientation” feature: instead of staying below the leader, the flock should stay behind it (that is to say, in the direction opposite to the one the leader is facing). Why such a choice? If our project were designed to work only in straight lines, it would not matter, but we really wanted to implement something coherent for direction changes. This feature gives the flock the ability to follow the leader instead of naively copying the leader's movements.

How we implemented it

The algorithm is, once more, quite detailed; instead of explaining the code line by line, we split its functioning into its three biggest parts in order to show you the mechanics.

[Figure 2] The algorithm counts the number of detected units (4 in our case).
[Figure 3] It draws a polygon of n sides (n being the number of detected units) with the leader at its center…
[Figure 4] …and then translates the polygon so that the first summit coincides with the leader. All the other summits are the desired positions for the flock.
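As an illustration, this “Compute desired positions” layer could be sketched as follows (a simplified version with names of our choosing; it assumes the leader position, its heading in radians and the radius of the circle containing the polygon are known):

import java.awt.Point;
import java.util.ArrayList;
import java.util.List;

// Sketch: desired positions for the (n - 1) followers, given the leader position,
// the leader heading and the radius of the circle containing the polygon.
public static List<Point> computeDesiredPositions(Point leader, double heading, double radius, int n)
{
	List<Point> desired = new ArrayList<Point>();
	// First summit of the regular n-sided polygon, placed in the direction the leader is facing.
	double firstX = radius * Math.cos(heading);
	double firstY = radius * Math.sin(heading);
	for (int i = 1; i < n; i++) // i = 0 would be the leader itself
	{
		double angle = heading + 2 * Math.PI * i / n;
		// Summit of the polygon centered on the leader...
		double vx = radius * Math.cos(angle);
		double vy = radius * Math.sin(angle);
		// ...translated so that the first summit coincides with the leader.
		desired.add(new Point((int) Math.round(leader.x + vx - firstX),
				(int) Math.round(leader.y + vy - firstY)));
	}
	return desired;
}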

 

After the algorithm has run, we have a set of robot positions and a set of desired positions. The next challenge was to assign each robot the best position to go to, in a way that is optimal for the whole flock. Several solutions were considered:

  • look at all the possible assignments and take the best one ~ O(n!), which is barely acceptable even though we work with four units at most (we rejected this solution because our algorithm is supposed to work with n robots, and who could accept an algorithm with such a complexity…);
  • look at the best solution for each individual ~ O(n²)
The second solution tends to be really close to the first one (which is the best in terms of distance, but not in time), especially when there are not many robots. This is why we chose it; and instead of taking the units in the same order at each iteration, we randomize the order every time (which avoids some blocking problems).
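A sketch of this greedy assignment could look like the following (hypothetical names; each brick, taken in a random order, simply receives the closest remaining desired position):

import java.awt.Point;
import java.util.Collections;
import java.util.List;

// Sketch: greedy O(n²) assignment, with the bricks shuffled at every iteration.
public void assignPositions(List<Brick> bricks, List<Point> desired)
{
	Collections.shuffle(bricks); // randomize the order every time to avoid blocking situations
	for (Brick b : bricks)
	{
		Point best = null;
		double bestDistance = Double.MAX_VALUE;
		for (Point p : desired)
		{
			double d = b.getPosition().distance(p);
			if (d < bestDistance)
			{
				bestDistance = d;
				best = p;
			}
		}
		if (best != null)
		{
			desired.remove(best);        // a desired position can only be taken once
			giveBrickDirection(b, best); // see the avoidance algorithm below
		}
	}
}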

Results

[Video 1] Looking at the flock self-adjusting in real-time

 

We got exactly what we wanted for this feature: we can see how the different polygons take shape as robots enter the field, and this works whatever the direction of the leader. Nevertheless, the robots are not moving in this video, because they could cross each other's paths and therefore ruin the flock. This is why the next step was crucial: we had to deal with avoidance. [Video 1]

Improvements

The first version of the algorithm that computed the robot positions only needed a radius for the circle containing the polygon of units. The problem was that the more units we had in the flock, the closer together they would be. To fix that, the radius of this circle has to change according to the number of units present in the flock. Using basic trigonometry, we came up with a formula for this radius that keeps the distance between neighboring units constant.
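For a regular polygon of n summits inscribed in a circle of radius r, two neighboring summits are separated by the chord length 2 · r · sin(π/n); keeping this distance equal to the desired spacing d therefore leads to a radius of the form:

r(n) = d / (2 · sin(π/n))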

This arrangement is fine with a small number of robots, but the bigger the flock gets, the bigger the polygon becomes. This can be annoying, especially if we want to use as little floor space as possible; the unused area is proportional to the square of the number of units and to the distance between them (2.8 m² for 10 units, for instance). To solve this, we could change the “Compute desired positions” layer and, instead of making one polygon of robots, make several nested polygons and thereby save the unused space. Figure 5 shows some of the ideas we came up with.

[Figure 5] Templates we could use for improved flock positions (with numerous robots). On the left template, we double the number of units on the outer belt; on the right template, we add one more unit to the outer belt. These are only examples; we do not have a precise idea of which one is or should be the best, it is just food for thought.

We could even think of dispatching the units all around the leader: the leader would be the center of all those nested polygons. It could be a real advantage if we had a lot of units in the flock: they would all be in an optimal layout to stay in the FOV. Nevertheless, it would cause a lot of problems with the avoidance, which we develop in the next section.

Implementing the movements of the flock

Expectations

At this point, everything was working in theory. We say in theory because every time a robot is asked to go to a position, it goes where it is asked to go without asking any further questions. Basically, this section describes how the “Assign and send movement commands to bricks” layer [Figure 1] is designed. In order to have a stable system, we had to solve the following problems:

  • The robots in the flock should not bump into each other;
  • The robots should try to stay in the FOV of the camera in order to stay alive in the flock (if the robot is not detected for a while, it will be deleted from the flock);
  • The robots in the flock should do their best in order to avoid the leader when maneuvering;
  • Each of these behaviors should be given a priority so that the whole system stays coherent and functional.

How we implemented it

How the robots avoid each other

As soon as the “Compute desired positions” layer [Figure 1] has been executed, we have two things: a list of bricks in the flock with their positions, and a set of desired positions (both of the exact same length). We take a robot at random and assign it to a position. This is the point where the following algorithm is triggered:

public void giveBrickDirection(Brick b, Point p)
{
	// Keep trying as long as the target is further away than the proximity threshold.
	while( distancePositions(b.getPosition(), p) > proximityThreshold )
	{
		if(movementPossible(b, p))
		{
			// The path is clear: send the command and stop here.
			b.setGoToPostion(p);
			return;
		}
		else
		{
			// The path is blocked: bring the target one third closer to the robot and try again.
			p.translate((-p.x+b.getPosition().x)/3, (-p.y+b.getPosition().y)/3);
		}
	}
}

The mechanics are that simple: as long as the distance between the robot and its goal position is greater than the proximity threshold (the “error distance” we define to establish when a robot is close enough to its target), we check whether the goal position is within reach. If it is reachable, we send the command that sends the robot to the desired position. If not, we reduce the distance of the command by 33% and try again. The loop always terminates: either the robot is allowed to make the movement within a few iterations, or at some point the distance gets smaller than the threshold and the robot stands still, waiting for the next command.

The black box here is the method movementPossible(Brick b, Point p). In short, the method goes through the list of all the bricks and checks that the movement of the considered brick b will not cross any other brick's location or movement. To make it even simpler, take a look at Figure 6.

[Figure 6] How the “collision area” is defined. As long as nothing is inside or enters the green area, the red robot is allowed to make its move.

 

If the red brick has to move, we check against every other brick (the blue one here, for instance) that there is no conflict. If the blue brick is not moving (as in our example), we check that its position is not inside the “no collision area”; if it is not, the movement is allowed. If the blue brick were moving, we would do two more checks: one to verify that the blue robot's desired location is not inside the “no collision area”, and one to verify, with basic geometry rules, that the two paths do not intersect.
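A simplified sketch of what movementPossible(Brick b, Point p) has to check could look like this (hypothetical helper names such as isMoving() and getGoToPosition(); the “no collision area” of Figure 6 is approximated here by a minimum distance around the planned path):

import java.awt.Point;
import java.awt.geom.Line2D;
import java.util.List;

// Sketch: is the movement of brick b towards goal free of conflicts with the other bricks?
public boolean movementPossible(Brick b, Point goal, List<Brick> allBricks, double safeDistance)
{
	Line2D move = new Line2D.Double(b.getPosition(), goal);
	for (Brick other : allBricks)
	{
		if (other == b)
		{
			continue;
		}
		// A standing brick must stay outside the collision area around our planned path.
		if (move.ptSegDist(other.getPosition()) < safeDistance)
		{
			return false;
		}
		// A moving brick: its destination must be outside the area too, and the two paths must not cross.
		if (other.isMoving())
		{
			Line2D otherMove = new Line2D.Double(other.getPosition(), other.getGoToPosition());
			if (move.ptSegDist(other.getGoToPosition()) < safeDistance || move.intersectsLine(otherMove))
			{
				return false;
			}
		}
	}
	return true;
}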

How the robots survive (from being excluded from the FOV)

This behavior was really simple to design and implement; here is some code to illustrate it:

// If the i-th brick is too close to the edge of the FOV...
if (  getConsideredBricks().get(i).isOnEdge() )
{
	// ...send it halfway back towards the leader (and thus towards the center of the FOV).
	giveBrickDirection(currentBrick,
			new Point((brickInControl.getPosition().x-brickPosition.x)/2+brickPosition.x,
					(brickInControl.getPosition().y-brickPosition.y)/2+brickPosition.y));
}

Basically, every brick has a boolean attribute “isOnEdge” given by the drone. If the robot is too close to the limit of the FOV, “isOnEdge” switches to true and we simply ask the robot to get closer to the leader. Since the drone is supposed to stay on top of the leader, a robot getting too close to the edge will therefore move back towards the leader, i.e. towards the center of the FOV.

How the robots avoid the leader

This feature works exactly the same way as the previous one. We check whether the robot is too close to the leader. If not, the robot proceeds with its normal routine (mentioned previously); if so, the robot has to “escape” from the leader. Why make it complicated when a simple solution works like a charm?
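In practice, this can be as short as the following sketch (with a hypothetical leaderThreshold distance), moving the robot further away along the leader-to-robot direction:

// If the robot is too close to the leader, move it away along the leader-to-robot direction.
if ( currentBrick.getPosition().distance(brickInControl.getPosition()) < leaderThreshold )
{
	Point brickPosition = currentBrick.getPosition();
	Point leaderPosition = brickInControl.getPosition();
	giveBrickDirection(currentBrick,
			new Point(brickPosition.x + (brickPosition.x - leaderPosition.x) / 2,
					brickPosition.y + (brickPosition.y - leaderPosition.y) / 2));
}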

How we merge all those behaviors together

The order of these behaviors has an important impact on the global behavior of the system. If we take a closer look, we can see that some commands contradict each other (survival and leader avoidance, for instance): this is why we took a long time to prioritize each behavior according to the way we wanted the system to respond [Figure 7].

[Figure 7] The prioritization of the movement behavior

 

Our first goal was to keep as many units as possible in the flock, which is why the survival behavior (staying inside the FOV) has the highest priority. Leader avoidance is crucial for us, because we know how annoying it is to be blocked by another unit while trying to move around: this is why it was given the second priority. Finally, if none of the previous behaviors has been activated, the normal movement behavior is triggered. In every case, when a robot is asked to go to a position, the giveBrickDirection method is called and we check whether the robot is allowed to move (or we try to make it perform part of the movement).
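Put together, the prioritization of Figure 7 boils down to something like this for each brick of the flock (a sketch; halfwayToLeader(), tooCloseToLeader(), awayFromLeader() and getDesiredPosition() are hypothetical helpers):

for (Brick brick : getConsideredBricks())
{
	if (brick.isOnEdge())
	{
		// 1. Survival: get back towards the leader, i.e. towards the center of the FOV.
		giveBrickDirection(brick, halfwayToLeader(brick));
	}
	else if (tooCloseToLeader(brick))
	{
		// 2. Leader avoidance: escape from the leader's path.
		giveBrickDirection(brick, awayFromLeader(brick));
	}
	else
	{
		// 3. Normal behavior: go to the desired position in the formation.
		giveBrickDirection(brick, brick.getDesiredPosition());
	}
}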

Results

These videos are the ones we took at the end of our project. Everything works as we expected and described above, whether using a webcam on the ceiling or using the drone. Nonetheless, if we had more time, there would be a lot more to do to improve this project, and we present a few ideas in the next section. In the meantime, here are all the videos relevant to what has been mentioned above. [Videos 2-5]

 

[Video 2] From above: the green robot is the leader as long as it appears on screen; as soon as it goes out of the FOV, the yellow one takes the lead. It is a great video if you want to see how the leader avoidance works: every time the leader changes direction and heads towards a unit, that unit escapes and lets the leader pass.
[Video 3] From above: green has the lead. This time, the red robot goes out of the screen (the edge threshold was set very low for this video) and, as soon as it has disappeared, you can see the blue robot changing the formation.
[Video 4] From the field: blue is the leader and we use the drone. You will see a lot of errors in this video: the yellow color is not always well detected, the avoidance is not well tuned (we had to increase the proximity threshold) and the orientation of the robots was not well handled. The yellow unit died quite often, even though we kept putting it back in the flock.
[Video 5] From the field: blue is the leader and we use the drone once more. You can see the dynamic flock (at the beginning, when units are inserted one at a time) and all the avoidance behaviors (all the units try not to bump into the leader and, at the end, just before the crash, you can watch the red and the green unit slow down in order not to cross each other's path).

 

Improvements

First of all, we needed the robots to keep a fixed orientation. It could have been done with a compass sensor, and we actually tried it, but we could not handle all the magnetic fields in the room we were working in, and this paralyzed our project to some extent (the use of a PID correction to keep the orientation almost solved the problem). This is the reason why, in some videos, you might see us straightening the robots on the floor by hand. So, one major improvement: restore the thread on the robots that makes them face the same direction as the drone.

The second and biggest improvement: adding behaviors to the robots. They could, for instance, have an obstacle avoidance behavior, which would give them more responsibilities. They could try to find their way back into the FOV when lost (and the rest of the flock could wait for them or even go looking for them). Giving the robots a more autonomous behavior would without a doubt improve the project, but we should always keep in mind that the leader is the hive mind of the system and the robots have to give it the highest priority.

 
References

  1. Jakob Fredslund, Simplicity Applied in Projects Involving Embodied, Autonomous Robots, PhD dissertation, pp. 67-124.

Tracking algorithm: considering the inclination of the drone

Setting down the problem

Our PID controller has proven to work, but without achieving near-perfect stability, even when it comes to staying on top of a motionless roundel. A hypothesis was then made to explain our difficulty in reaching that goal, apart from having to correctly tune the gain parameters: so far, we have not taken into account the fact that the drone tilts a little while it moves. Yet an inclination on one or two axes also moves the vertical camera, which in turn changes the roundel position returned by our algorithm.

Indeed, if the drone is located on top of the very spot where the roundel lies, the coordinates returned will vary more or less depending on the tilt angle: the greater the tilt, the bigger the offset. And a PID controller cannot behave well if the quantity it measures and has to correct changes in an unexpected way as a result of the PID correction itself.

Situation modeling

Geometric representation

The figures below illustrate the problem that occurs while the drone is moving. First, Figure 1 pictures the ideal situation, where the camera stays perfectly vertical at all times. The field of view (FOV) of the camera is represented by a 1000×1000 matrix whose size does not change with the altitude. The coordinates returned by the detection algorithm are therefore given without units (in blue in the picture) and only specify relative distances. To apply our own correction, we need to work in SI units. This is possible by using the altitude value that the drone navigation data provides at any time, and the FOV angle, which is equal to 64 degrees.

 

Figure 2 shows how tilting the vertical camera distorts the coordinate system on the ground. Furthermore, from the drone's viewpoint the roundel is clearly no longer at the same location, even though the drone and the roundel are still over the same spot.

 

 

In Figure 3, we can see that the inclination angle and the position of the roundel may affect the representation of the situation: the subsequent angles are not calculated as in the previous case. All these figures are obviously symmetrical, and what happens on one side of the x-axis happens the same way on the other side.

 

The goal is now to analyse all these possible cases and find a corrective function that can be applied to the coordinates the algorithm receives, no matter what they are.

Mathematical analysis

First, we need a function that returns a converting factor that will be used to transform a value into millimeters from a measurement given in arbitrary units (as returned by the embedded algorithm on the drone).

Then, converting a value read by the camera into millimeters at a given altitude simply amounts to multiplying it by this factor.
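With the 1000-unit-wide field of view and the 64-degree FOV angle mentioned above, the converting factor presumably takes a form equivalent to the following, where h is the altitude returned by the navigation data (expressed in millimeters):

factor(h) = 2 · h · tan(32°) / 1000,    so that    xRead_mm = xRead · factor(h)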

To keep our explanation simple, we only take two dimensions into account: the height and a length along the x-axis. The reasoning and calculus are exactly the same for the y-axis, apart from one minus sign.

Let us now consider a tilted camera that makes an angle φ with the vertical line perpendicular to the ground. Figure 2.b illustrates the problem we have to solve: even if neither the roundel nor the drone has moved (except for the tilting), the coordinates returned by the tracking algorithm will be quite different from what is expected (Figure 1.b).

The value xRead returned by the camera does not actually correspond to the real distance as seen on the ground, since the scale of the field of view projected on the ground is now distorted by the tilting. To keep an orthonormal coordinate system with evenly scaled values, we have to consider a plane perpendicular to the line that goes straight into the camera lens. Then, no matter where this plane is located along that line, every single point belonging to this newly enclosed space keeps the same relative distance to the origin.

We define a new angle α, as shown in Figure 2.a.

We also define xReal as the actual position of the roundel on the x-axis in a situation where the camera is perfectly vertical.

The quantity xOut_1||2 is the equivalent of xRead distorted on the ground (one cannot speak of a “projection”, since no perpendicular angle is involved here). The value of xOut_1||2 actually differs depending on the camera inclination and the roundel location (Figure 2.a involves xOut_1, Figure 3.a shows xOut_2). Keep in mind that xRead, xReal, xIn and xOut can be negative depending on the tilt angle φ. Besides, the value of φ returned by the drone navigation data is positive when the drone is in a situation like Figure 2.a, and negative when the tilt goes in the opposite direction.

Using the law of sines [1], which states that the ratio of the length of a side to the sine of its opposite angle is constant, we can express xOut_1 from the green triangle in Figure 2.a and deduce xReal from it. The same goes for xOut_2, except that the angles are different (cf. Figure 3.a). Hence the result that applies in a case similar to Figure 2.a; when we generalize the calculus and consider every possible situation, we obtain the corrective function we were looking for.

Experiments and performance results

Protocol

To test our model in a real-world setup, we filled a data log in real time during different test flights to keep track of both the raw values returned by the detection algorithm and the corrected values. We also saved the angles made by the drone on both axes. Again, to keep the results readable, we chose to display data referring only to the x-axis, so that it can be compared with our previous data. We performed the same experiments on the y-axis, with the same performance.

Both graphs below report these data on the same timeline, during one of our test runs. Basically, we took the drone, activated our recognition algorithm, and did the following, in this order:

  1. The drone is placed on top of the roundel, at a steady altitude. We then rotate it around one axis at a regular pace, from one side to the other (no more than 60 degrees on each side), in order to register different lateral angles.
  2. The drone is then placed far to the right of the roundel, without changing the altitude or the y-axis position. Rotations are then applied as before.
  3. Step 2 is repeated, except that it is done far to the left of the roundel.

Experiments

The experiment results are reported in the graphs below. Please note that values are only registered when a roundel is detected; that is why the range of angles varies for each step, even though the drone is moved the same way each time. The results achieved for these steps are:

  1. As expected, the more the drone is tilted (in green on the graph), the further from the origin the roundel is detected (in blue on the graph). The corrected value (xReal) stays really close to zero, which is what we wanted to achieve.
  2. The corrected value stays close to the real one, within a range of 50 cm at most, which is much better than the 2-meter range of the raw values.
  3. Observed results are symmetrical to those of step 2.

Closing comments

What can also be noted is that the sensors perform really well: they return accurate and consistent values at all times. This is especially true for the altitude; the tilt sensors, on the other hand, seem to lose accuracy when they are shaken too fast or when the angle is too big, which accounts for the larger errors in the correction. Overall, they all refresh fast enough to be consistent with each other at any given time, and the communication delay does not really interfere with this process.

As for the drone itself, once the correction was applied to the PID controller, we clearly noted that a lot of steadiness was gained, with a reduced settling time and a less erratic behavior. This confirmed the relevance of our study and the efficiency provided by the sensors and by our algorithm. We will soon provide new results about our tracking controller, with some further investigation into other solutions.

Approximating our model

Simplification of the situation

The high mathematical precision obtained with the previous model is not required, because the sensors do not allow for such precision. A simplification is therefore welcome, if only for the sake of clarity in the explanation. Besides, it avoids relying more than once on values returned by the sensors: if a sensor value is slightly off, it is better to use it once and for all in our equations, rather than propagating the error several times and amplifying its effect on the result (especially here with the altitude and xReadmm, which were each used three times before because of the α angle). Figure 4.a shows how the model can be simplified.

 

We therefore have only one function to compute xReal, with a broader domain of definition, because neither the tilting angle φ nor the sign of xReadmm changes the model anymore.
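If the simplification of Figure 4.a amounts to composing the tilt angle with the angle under which the roundel is seen by the camera, one plausible form of this single function (a reconstruction under that assumption, not necessarily our exact formula) is:

xReal = h · tan( φ + arctan( xRead_mm / h ) )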

Performance

This alternative model performed surprisingly well, insofar as we got an average shift of about 2 millimeters between the two models. It even appears, on average, more accurate for positions further from the roundel, which are the critical ones since the angle is greater there and the sensor accuracy worse. This is explained by the fact that the new equation is less sensitive to small variations of its parameters.

As a conclusion, we plan on keeping this last implementation because of its really good performance, both in terms of simplicity and accuracy.

 

  
References

  1. Law of sines, explained on Wikipedia: http://en.wikipedia.org/wiki/Law_of_sines

Drone: new PID with polar coordinates, and how to improve reactivity and accuracy

Handling polar coordinates for the PID

Defining a new error

Previously [1], we saw how to manage fair tracking with a PID control loop using a traditional Cartesian coordinate system. Picturing its idea seemed, however, rather less intuitive than with polar coordinates.

We are indeed considering a central point and an offset between it and the position of the roundel, and the goal of the PID is to make them coincide. Therefore, we can simply consider that the distance between the middle of our plane and the roundel is a radius, and that an angle is formed by the abscissa axis and this radius (look at the figure below to picture the situation).

Using the radius as the only error parameter, a PID controller can be implemented. In such a representation of the system, what really matters is for the radius to be as close to zero as possible. The value of the angle makes no difference in measuring the error: it is as wrong to be at a 3Pi/4 angle as at a -Pi/2 angle (as long as the radius is the same in both cases). The correction applied to the motors will be the same in intensity, and the power applied is what is really at stake in this kind of system. The angle only serves to tell the motors in which direction they have to rotate in order to move the drone the right way; no PID is necessary for that. Our PID is rather here to tell how fast the drone has to move in that direction.

Changing the code

It then appears more natural, and even easier, to handle one radius parameter instead of the old two error parameters, x and y, one for each axis. This change still required a few tweaks in the code that had to be tested independently:

  • the image analysis returns Cartesian coordinates for the roundel position, so a switch from Cartesian to polar coordinates has to be done. The maths behind this change are straightforward: r = sqrt(x² + y²) and θ = atan2(y, x);

  • before doing so, it is convenient to perform an axial symmetry about the x-axis, in order to get a more intuitive picture of the plane. Here is the call to the function changing the coordinates; the symmetry is done while passing the parameters:
convertToPolarCoordinates(xval - XMIDDLE, -(yval - YMIDDLE), &radius, &theta);

//XMIDDLE is the x-value for which the image is equally split in two parts (same goes with YMIDDLE and the y-axis)
  • creating a new function for the drone is necessary: it has to be possible to tell it to go in a given direction, at a given speed. Since the API can only handle orders on two Cartesian axes, pitch and roll (not mentioning yaw to turn and gaz to change altitude), some basic conversion (turning a movement along one direction into a combination of movements on two perpendicular axes) also has to be taken care of here; a possible decomposition is sketched just after this list.
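As a sketch of that conversion: assuming the angle θ is measured from the x-axis after the axial symmetry, that the positive y direction corresponds to the drone going forward (this depends on how the vertical camera is mounted, so it is an assumption on our part), and using the drone's sign conventions (negative pitch to go forward, positive roll to go right), a command of magnitude v in direction θ could be decomposed as:

roll = v · cos(θ)    and    pitch = -v · sin(θ)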

 

The core of the algorithm remains unchanged: we merely apply a PID control loop that takes the radius as the error parameter to be brought close to zero. The results were therefore as good as the previous ones (not better). A simpler Proportional Derivative (PD) controller is being considered, insofar as the main purpose of the integral term is to remove the small remaining error so as to be exactly on top of the target, which is not essential for us as long as the drone does not describe huge circles around it. We will come back to this precise matter later.

Responsiveness tests: which detection is really efficient?

A need for faster loops

We recently introduced image analysis to deal with tag detection on the PC side. This was done with the idea of taking advantage of greater computing power and of the possibility to choose the kind of tag we want to track, hence getting rid of the limitation induced by the drone firmware. We had observed that our detection roughly provided the same results as the drone's, even slightly better on average.

Well, this conclusion proved to be only partly right. We were indeed a little more efficient than the embedded program in terms of frames received and analyzed: for a given frame received by the computer, our OpenCV algorithm performs a little better than the one embedded on the drone. But since the PC sends orders to the drone only when a new frame is received, no matter whether we consider the OpenCV analysis or the embedded one, the PID results were almost the same.

The problem lies in the fact that the computer does not receive all the frames captured by the 60 fps vertical camera. Whether this is due to data loss over the WiFi communication, a bandwidth problem, or slow processing of new frames on the drone or computer side, we do not really know; since we have no access to the drone's firmware yet, we cannot do anything about it. In any case, our loop was therefore quite slow, running at an average of 62 ms per iteration, meaning fewer than 10 frames per second (without image analysis, which would slow it down even more), and thus just as few new orders per second sent to the drone. And this is without taking into account some big slowdowns on the computer side, sometimes entailing delays of more than a second. That is huge for such a reactive system: if the power applied to the motors at a given time is high due to a PID correction, a delay even as small as three quarters of a second can make the drone overshoot its target and lose it for good.

Experiments

To see how much useful data was lost and hence left unanalysed by our algorithm, we kept running our OpenCV image analysis and the embedded roundel recognition at the same time, comparing the number of matches. However, we removed the waiting time we used to have in each loop, so the program could run a new iteration even if the frame received was still the same. Because not getting any new frame does not mean not getting new navigation data, the program then had faster access to the navigation data sent by the drone; and among those navigation data are the coordinates of the tags detected by the algorithm embedded on the drone.

The next figure pictures the experiment process in a chart. Note: the results would have been even more striking if we had split the OpenCV analysis and the navdata handling into two different threads.

The results are listed in the graph below. On average, the embedded algorithm records 1.45 times more different coordinates of the ground tag than the OpenCV algorithm running on the computer.

Our recent findings about the speed of navigation data acquisition led us to test it without any video display on the computer, let alone video analysis. Our running loop then became about 300 times faster. Even if this does not multiply the navigation data by the same factor, we still receive more of them, and we are sure to get all of them, without losing any while the image analysis is being processed.

Conclusion

Gained responsiveness

The main comment to be made about these results is that the embedded detection is obviously much more efficient (about 45% more) once we consider all the useful frames. The reason is that the drone has more frames at its disposal to run the analysis on than the WiFi-connected computer. The drone's navigation data keep changing even while no new frame is received, which confirms that some frames are lost in the process (otherwise the coordinates sent by the drone would arrive at the same pace as those obtained by our OpenCV algorithm). Add to this the fact that, during the transmission process, the image is converted from raw data to an actual image that can be displayed on the computer's screen, and you start getting a better idea of the benefits of running algorithms on board rather than on a second device, no matter how powerful it is.

One path we could follow to get improved results without changing our way of doing things would be to use the newly released ARDrone firmware, which allegedly improves the video decoding time thanks to another codec. The problem is that this firmware does not seem stable enough at the moment, and it really messes things up with our code.

We could, however, implement the image analysis in a separate thread, without slowing down the PID algorithm. Since we will gain speed in receiving navigation data, as we saw, we might also want to avoid checking the same packet twice (i.e. filter the data), and therefore send a given order to the drone only once, so as to avoid jamming the bandwidth.

What to do with those performance conclusions?

One legitimate question is whether we really need all those frames to fulfil our tracking purpose. Our early tests showed a much more responsive and accurate drone that kept its target in sight longer. The PID (or at least PD) still needs to be tuned again, since the drone retains a tendency to wander around the roundel instead of hovering perfectly on top of it.

As for our goal of tracking a flock of robots, we may now have a major issue: we will need to do image analysis to detect the different robots while following them at the same time. Since we actually need to be quickly responsive for the task of following the leader, we contemplate doing the following:

  • use the really efficient embedded detection for hovering on top of the leader.
  • take advantage of our own OpenCV image analysis in a separate, slower thread, for reporting the coordinates and orientation of the other robots in the flock. We do not need a speed as high as for the hovering task, so it should be just fine for keeping the formation on the ground.
  • since we plan on being able to change our leader at any time, and the tags recognized by the drone are limited, we will have to use whatever is made available by the engineers at Parrot. For the moment, roundels, oriented roundels and stripes of different colors can be tracked; this should be just enough for our task.

This can all be summed up in the chart below.

 
References

  1. As stated in a former article: http://www.ludep.com/performing-simple-image-analysis-and-full-pid-controller-with-the-drone/

Some steps further with the omnidirectional robot

In this article, we focus on the driving behavior of the robot. We need it to be as accurate as possible in every one of its moves, and we need it to be able to go in any direction. So we explain here how the robot is supposed to move without turning, and how we implemented the movement system accordingly.

How can we get rid of the rotation?

Mathematical point of view, or how to get our hands dirty

Basically, the kiwi drive is just a point to which we apply three different rotations. The question is: how should we set the rotations (i.e. the power applied to the motors) so as to ensure a translation instead of a clumsy rotation?

If we use the complex notation for planar geometry, a rotation takes the form z′ = ω + e^(iθ)(z − ω), with z′ being the image of z by the rotation of center ω and angle θ.

If we compose two rotations (of angles θ and θ′, and centers ω and ω′), we get an equation of the form z′′ = e^(i(θ + θ′)) z + a, where a is a constant depending on the two centers and angles.

We wrote this equation in that particular form to show that if θ + θ′ = 0, the exponential factor is equal to one and we obtain z′′ = z + a, which is the formula of a translation in complex notation. So the composition of two rotations is a translation as long as the sum of the rotation angles is zero. This is quite intuitive: if we compose two opposite rotations we get a translation, as anybody who has ever driven a car may have noticed… The problem is that the kiwi drive composes three rotations: do we get the same result? Let's have a look.

The same computation with three rotations gives z′′′ = e^(i(θ + θ′ + θ′′)) z + b for some constant b, so once more θ + θ′ + θ′′ = 0 induces a translation. This could easily have been anticipated, knowing that planar rotations and translations form a group (mathematically speaking, the composition of two rotations is either a rotation or a translation), and therefore our result for two rotations extends to three. At this point we knew that keeping this sum equal to zero was essential to ensure the reliability of the kiwi drive, and that corrections such as a PID would be a major and powerful tool to use.

Physical point of view or how to prevent and correct errors

If physics stuck to the mathematical theory, we would not be forced to make obscure approximations and inconvenient assumptions; in our case, the robot would never turn on itself as long as we respected the condition “the sum of the rotations must be equal to zero”. Unfortunately, due to imperfections (robot construction not 100% rigid, friction, measurement errors, etc.), we have to face the fact that the robot might, and will, change its orientation over time. To counteract this issue, we came up with a simple system: we implemented a thread running all the time (which can still be disabled for testing or manual driving) that samples the motor positions with the method Motor.MOTOR_PORT.getTachoCount(); if the sum over all motors differs from the reference it should have, we command the robot to make a gentle counter-rotation.

Here is some code in which you can see how we counter the rotation using a PI correction (soon to be a PID or PD, depending on the results we get with our new omniwheels):

while(isInRegulation()){
    noRot = ref - getMotorTachoCountSum(); // Proportional term: current rotation error (ref is the initial tacho sum)
    noRotInt += noRot;                     // Integral term: accumulated error
    corRot = noRot*getCm().getFactorP() + noRotInt*getCm().getFactorI(); // PI correction
    getCm().setRotationPower(corRot);      // Apply counter rotation
    getCm().refreshMotors();               // Apply new powers to motors
}
public int getMotorTachoCountSum(){
    return Motor.A.getTachoCount() + Motor.B.getTachoCount() + Motor.C.getTachoCount();
}

How do we make the robot move?

The first thing we wanted the robot to do was to go in a given direction within a certain range. Then we implemented a Cartesian reference frame (more convenient for the flock behavior implementation later) simply by changing the basis. So for the implementation, we had to ask ourselves:

  • what power to give to each motor so as to respect the condition “the sum of all rotation angles must be equal to zero”;
  • when and how the robot should stop;
  • how to interpret several commands (stack behavior or “last command prevails” behavior).

 

To deal with the powers, we knew that for any movement each motor has to rotate a different number of degrees (in most cases) in the same amount of time. Thus the motors' powers had to be different, without making the robot rotate on itself. We therefore tried to apply powers to the motors whose sum equals zero, and the results were better than expected: it worked pretty well. We only had to take care of low powers (the robot was barely moving between 0 and 20) simply by applying a scale factor.
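One classic way to obtain such powers, sketched here under the assumption of three omniwheels whose rolling directions are 120 degrees apart, is to project the desired translation onto each wheel's rolling direction; the three powers then sum to zero by construction:

// Sketch: motor powers for a kiwi drive with three omniwheels 120 degrees apart.
// theta is the desired direction of translation (radians), speed its magnitude.
public static int[] computeMotorPowers(double theta, double speed)
{
	double[] wheelAngles = { 0.0, 2 * Math.PI / 3, 4 * Math.PI / 3 }; // rolling directions of A, B and C
	int[] powers = new int[3];
	for (int i = 0; i < 3; i++)
	{
		// Projection of the translation onto each rolling direction;
		// cos(theta) + cos(theta - 2Pi/3) + cos(theta - 4Pi/3) = 0, so the powers sum to zero.
		powers[i] = (int) Math.round(speed * Math.cos(theta - wheelAngles[i]));
	}
	return powers;
}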

To be functional, every motor has to rotate a certain number of degrees (say motor A has to rotate a degrees, B b degrees and C c degrees) while respecting a + b + c = 0 (we write “0” to be coherent with our reasoning, but in reality it is equal to the very first value sampled by getMotorTachoCountSum(), shown above). The point is that this equation has to hold at every moment of the movement. So we made a graph showing the percentage of progress of each motor, without correction, in order to picture how accurate the system could be.

Percentage of progress of each motor over a movement of 2 meters with no correction. The motors are however regulating themselves with the method Motor.MOTOR_PORT.regulateSpeed(true)

This graph was important in order to move on to the second question: how to stop the robot. The first and naive way would be to stop the movement as soon as all three motors have completed their rotation, i.e. when all three progress curves reach 100% (letting the first two keep rotating until the last one ends). The problem is that when one of the motors runs at very low speed, it may never complete its rotation, or complete it instantaneously. So waiting for three motors led to endless movements, waiting for one led to premature stops, and waiting for two was just right, because in any movement the robot is supposed to use two motors with reasonable (>20) powers.
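A sketch of that stopping criterion (hypothetical names and values):

// Sketch: the movement is considered finished once at least two of the three motors
// have reached their target angle, within some tolerance.
private boolean movementCompleted(int[] tachoCounts, int[] targets, int tolerance)
{
	int done = 0;
	for (int i = 0; i < 3; i++)
	{
		if (Math.abs(targets[i] - tachoCounts[i]) <= tolerance)
		{
			done++;
		}
	}
	return done >= 2;
}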

Concerning the command behavior, we had to adopt a mix of “last command prevails” and stack behavior, so as to be able to merge autonomous and manual movements.

Testing the robot

Here are two videos of the robot tracing a rectangle (a simple succession of four commands), without and with the PI correction.

Without correction / With PI correction

 

As you can see, the system is noticeably more accurate with the PI correction. Nonetheless, it is still not reliable enough and does not meet our project requirements. We have been trying to tune the PI (or PID) and obtained more or less satisfying results, but still not acceptable ones. However, we got a very nice surprise at the end of last week: we received the Rotacaster omniwheels and tried them. We do not want to spoil the next article, but accuracy should no longer be a major issue from now on. We still provide a picture of the error measurements, to give you an idea of the impact of the correction (knowing that the robot is supposed to make a square of side sqrt(2), so more than 7 meters).

 

Performing simple image analysis and full PID controller with the Drone

So far, we have been able to detect roundels on the floor with the drone by using an existing embedded function, and we even managed to build a basic proportional (P) controller on the drone to track the detected roundel. Now, we have added more flexibility by using our own detection algorithm so we can track our own patterns freely, on top of having greatly improved our tracking control loop with a full Proportional, Integral, Derivative (PID) controller. Altitude is also managed, by a simpler PI controller.

Detecting circles with OpenCV

Why image analysis?

The algorithm for roundel detection developed by the engineers at Parrot works well for its immediate purpose, but it was not satisfying for us in several ways:

  • We have no control over its implementation. Since it is embedded in the drone firmware and we do not have access to it, we cannot change even a small part of it. Moreover, this function may disappear at any time in a later firmware update.
  • We cannot decide which shape it recognizes. This is especially annoying for us, since we would like to detect different robots and identify them as such, preferably with an orientation (the pre-existing function can detect the direction of a striped roundel, but not its orientation).
  • Doing the image analysis on a separate computer provides us with more computing power than the drone alone, so we may theoretically achieve more efficient pattern recognition. We remain careful about this assumption, though, since the delay induced by the WiFi communication and the other navigation data transfers may counterbalance this positive effect. Plus, we are beginners with real-time OpenCV image analysis.

Starting with OpenCV

OpenCV seemed the natural way to do our own video analysis: it is open source, well supported by developers and big companies such as Intel, the library has more than 500 optimized algorithms that do not need to be reinvented, it has been primarily developed for use from C++, and so on.

To first get accustomed to OpenCV and video analysis, we decided to replicate what the algorithm we have used until now is able to do. And for that matter, since we work for the moment in an environment we know and that the drone and robots are probing, we kept things simple by just detecting circles in the video stream. Our floor is indeed perfectly flat and of a single color, without any pattern. An efficient algorithm to perform this is explained by Robert Laganière in one of his books [1].

Basically, for each new frame sent by the drone to the computer, we apply a Gaussian blur filter to smooth the image and avoid detecting false circles caused by noise. We then use a Canny edge detection [2] and a Hough transform [3], which are regrouped in one function call that needs a one-channel image as input. That is why we first convert the color frame into a grayscale image:

cv::GaussianBlur(bottomMat, bottomMat, cv::Size(5,5),1.5); //bottomMat is the matrix of the frame we analyse
cv::cvtColor(bottomMat, bottomMatDraw, CV_BGR2GRAY); // convert 3-channel image to 1-channel gray image
cv::equalizeHist(bottomMatDraw, bottomMatDraw); // Equalize the histogram on this matrix to improve contrast; 
//may not be necessary, depending on lighting conditions
cv::HoughCircles(bottomMatDraw,         // Frame to be analysed
            	circles, 		// Vector returned containing detected circles parameters
            	CV_HOUGH_GRADIENT,      // Two-pass circle detection method
            	2, 			// Accumulator resolution (image size / 2)
            	50, 			// Min. distance between two circles
            	200, 			// Canny high threshold (low thresh. = high / 2)
            	75, 			// Minimum number of votes to pass to consider a new candidate as valid
            	5, 75); 		// Min and max radius for circles to detect

 

Results

Example of the roundel detection performed on a streaming video. Bottom left: circles are drawn on top of the actual roundels. Top left: position of the roundel as detected by the Parrot embedded algorithm. Top right: front view of the drone, overlapped by the bottom view (disabled here). The video shows how the recognition performs in real time: the drone is voluntarily shaken to test the efficiency and the speed of the image analysis, which is actually fast and accurate enough, despite varying heights and non-steady rolling from side to side.

 

The algorithm eventually proved efficient: on average, we detect even more circles than the embedded algorithm. The good refresh rate of the vertical camera (60 frames per second) really helps such detection, even if we do not get all of the frames over the WiFi connection. Yet its biggest drawback lies in its poor resolution: we have to deal with a CMOS sensor with a QCIF resolution of 176×144 pixels. Our recent experiments show that this is going to be a major issue when dealing with more than one robot on the ground.

And we have another practical concern: it is quite possible to detect our robot and keep track of it at a height of around 180 cm, but we will certainly need more than that for many robots at the same time. Our ceiling (less than 3 meters high) will be an inconvenient limitation in this respect.

A next step will be to detect different kinds of oriented patterns at the same time. That will involve more elaborate algorithms and will require more computing resources.

PID control loop

Since we now have a fairly reliable detection that reports the coordinates of a roundel (and therefore of a robot), we can turn to implementing a proper tracking algorithm. A feedback loop using a PID seems appropriate for such a device: it is the controller used by the engineers at Parrot for managing many aspects of the drone, and it is the only kind we really know; besides, it is not too complex and can achieve really good performance.

How to give the quadricopter orders

The drone is controlled thanks to high-level commands that will rotate the rotors accordingly, and this is mainly done by changing four parameters:

  • pitch: to go forward (negative values) or backward (positive values)
  • roll: to go right (positive values) or left (negative values)
  • gaz: to go up (negative values) and down (positive values)
  • yaw: to turn right (positive values) and left (negative values)

Those values range from -25000 to +25000: the greater their absolute value, the faster the movement. To track an object on the ground, the drone can stay at the same altitude (handled by a simple proportional and derivative controller) and does not need to turn on itself, since it can move in any direction in the same 2D plane with pitch and roll alone (yaw is mainly a good way to point the horizontal camera in a specific direction). We therefore need to concentrate solely on the pitch and roll parameters.

Defining the PID error

When using a PID controller, one has to define what the error is. For that, we first describe the goal of our system: we want the drone to hover on top of the roundel detected by the vertical camera. We can therefore simplify the problem by looking at the rectangle formed by the field of view of the camera and saying that we want the roundel located at its very center at all times. The roundel is represented by a circle thanks to our image analysis, of which we really just need the center. Basically, our problem consists in keeping one given point of known coordinates centered in the middle of one rectangle.

The error can then be measured by looking, at the same time, at the differences between:

  • the x-axis coordinate of the roundel point and the x-axis coordinate of the center of the image. This is going to influence on the roll value.
  • the y-axis coordinate of the roundel point and the y-axis coordinate of the center of the image. This is going to influence on the pitch value.
This is how the error is calculated for the PID controller. We compute an error on each axis, apply the PID algorithm to both of them separately, and give the orders to activate the pitch and roll movements accordingly; hence a single movement that is the composition of two different actions. Note that the arrows mean that the roundel moves relative to the camera; in fact, the result of the PID control is that the camera moves towards the roundel.

This boils down to having two PID controllers that share the same gain parameters, since nothing differs between the two cases except the orientation (same motors, same sensor, symmetric goal). Each of them, however, has its own error tracking and its own proportional, integral and derivative values evolving through time. Once this has been established, the PID implementation is straightforward, as described by J. Sluka [4], who provides a very good description of its purpose, its tuning and an easy-to-understand pseudo-code. We do not reproduce our code here, since there are plenty of examples of what we did on the web, and the nearest one can be found on Sluka's webpage.

Tuning the PID and first tests

The hardest part of implementing a PID feedback loop is to correctly tune the three gain parameters: Kp, Ki and Kd. They greatly depend on the device used, and may even change between two robots with the same characteristics. A reasonably easy way to find a fair approximation of those gains is the Ziegler-Nichols heuristic method [5]. But even after that, some more tuning is still needed, and we do not have the hardware or software that could help us with that.

In order to ease the tuning, we kept track of all the variables at run time. Here, the gain parameters are still not perfect, yet they are good enough to get an estimation of the oscillation period Pc on both the x and y axes (movements represented in blue and red), which has to be measured to use the Ziegler-Nichols method. Reducing the oscillation is usually done by increasing the proportional term Kp. Note also that the PI control for the altitude (in green) already does a good job.

To tune it faster, we made it possible to change each gain parameter in real time with the Xbox controller. The results are pretty convincing so far, and we were able to follow our omniwheel robot across our office over 4 meters (we lack the necessary space to test it on greater distances). However, some conditions need to be respected in order to get a good result, for the moment at least:

  • no obstacles should be present around the field where both devices are moving. If a sharp difference in height is detected on the ground, the PI controller for the altitude will react quickly and the drone will suddenly go up or down to correct its altitude. This is efficient in a way, since it works as expected, but it remains a problem in an enclosed room like ours, since the UAV can end up stuck against the ceiling and then lose its target;
  • the robot must not move too fast, otherwise the drone will lose sight of it;
  • the environment has to be correctly lit, otherwise the image analysis may fail from time to time. We do not need bright or even uniform light over the whole area, but we do at least need some light shed from the ceiling, for instance (or the sun in an outdoor setting);
  • the floor must not carry any pattern (at least, no circular one), because of our own specific image analysis. This will be addressed later.
Our actual progress in action. This time, everything is performed by our own algorithms, for both the drone and the omniwheel robot. For the record, the gains here are Kp = 21, Ki = 1, Kd = 83. By the way, nobody was hurt in the making of this video: at the end, you can actually see me taking control of the drone just before performing a safe landing.

Going on…

Our next short-term goals are therefore to better tune the PID gains (Kp, Ki and Kd) in order to ensure more stability while hovering over the roundel. Furthermore, we are willing to switch to a polar coordinate representation of what is seen by the drone, which should add clarity and consistency to our PID algorithm.
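
As a hint of what this change would look like, here is a hypothetical conversion of the (x, y) error into a (distance, angle) pair; the names and conventions are ours and only meant to illustrate the idea.

    #include <cmath>

    // Hypothetical conversion of the error seen by the camera into polar form.
    struct PolarError { float distance; float angle; };

    PolarError toPolar(float errorX, float errorY) {
        PolarError p;
        p.distance = std::sqrt(errorX * errorX + errorY * errorY); // how far the roundel is from the image center
        p.angle    = std::atan2(errorY, errorX);                   // in which direction it lies, in radians
        return p;
    }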

 
References

  1. OpenCV 2 Computer Vision Application Programming Cookbook (Paperback) by Robert Laganiere, Packt Publishing Limited – ISBN-13: 9781849513241, ISBN-10: 1849513244
  2. Edge detection algorithm by John F. Canny, explained on Wikipedia: http://en.wikipedia.org/wiki/Canny_edge_detector
  3. Feature extraction technique, explained on Wikipedia: http://en.wikipedia.org/wiki/Hough_transform
  4. A PID Controller For Lego Mindstorms Robots, by J. Sluka: http://www.inpharmix.com/jps/PID_Controller_For_Lego_Mindstorms_Robots.html
  5. Ziegler-Nichols heuristic method, explained on Wikipedia: http://en.wikipedia.org/wiki/Ziegler%E2%80%93Nichols_method

Improving the omnidirectional robot design

It’s been two weeks now that we have been struggling with a major issue. After implementing the manual driving mode, we wanted the robot to move to specific positions and we encountered problems with accuracy. For instance, the same command could produce different, totally unpredictable behaviors. We knew this could be improved thanks to several corrections (like a PID that we’ve been working on recently), but still, we thought those errors couldn’t be due to the mechanics alone. Finally, we found that our error was tremendously amplified by the orientation of the omniwheels.

 

Indeed, instead of mounting the omniwheels with their axes parallel to the floor, we had tilted them at an angle for stability and design purposes. With some hindsight, we now understand that omniwheels are meant to be used completely perpendicular to the floor because of their physical properties. Take a normal wheel: whatever angle you put it on the floor at, the only translation it allows is along the line parallel to the floor and perpendicular to the axis of the wheel. Setting the omniwheel at a non-right angle breaks the rule just mentioned.

 

 

And this is the reason why we came up with the new design for our ground units that you can see just above. Maybe you noticed that the omniwheels are different too, and you would be right if you did. We had to change them as well because the old design didn’t allow a perpendicular setup. If you’re interested in this new design, it’s one of our own that you are free to use, and we’re providing some pictures below if you need a pattern.

 

 

Now we’re working on a new system of movement: assuming the robot is located at the point (0;0), it is supposed to move to any specific (x;y) position. The system is working according to our expectations, but we need to work on the correction to make it more reliable (because we’re losing significant accuracy over time and distance). We’ll try to show some videos with and without corrections in the next post.
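
For the curious, here is a rough sketch of how such a (x;y) move can be translated into wheel speeds for a three-omniwheel (kiwi) drive with the wheels spaced 120° apart and mounted perpendicular to the floor; the names and conventions are illustrative and do not come from the code running on the brick.

    #include <cmath>

    // Sketch of the inverse kinematics of a kiwi drive (three omniwheels
    // evenly spaced at 120 degrees, perpendicular to the floor).
    struct WheelSpeeds { float w1, w2, w3; };

    // vx, vy: desired velocity towards the (x;y) target, in the robot frame
    // omega:  desired rotation speed (0 to keep the current orientation)
    WheelSpeeds kiwiInverseKinematics(float vx, float vy, float omega) {
        const float PI = 3.14159265f;
        const float a1 = 0.0f;             // angular position of wheel 1
        const float a2 = 2.0f * PI / 3.0f; // wheel 2, 120 degrees further
        const float a3 = 4.0f * PI / 3.0f; // wheel 3, 240 degrees further
        WheelSpeeds s;
        s.w1 = -std::sin(a1) * vx + std::cos(a1) * vy + omega;
        s.w2 = -std::sin(a2) * vx + std::cos(a2) * vy + omega;
        s.w3 = -std::sin(a3) * vx + std::cos(a3) * vy + omega;
        return s;
    }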

Coding with the drone – Performing roundel tracking

Development setup

Our development environment for the drone is now properly set up, meaning that we can finally be efficient while programming: we write a few lines of code and test them on the drone seconds later. It wasn’t an easy task, between the drone motherboard that suddenly ceased to work properly (thanks to the warranty and the good customer service of Parrot, this issue was solved ten days later when we received a brand new motherboard), the wifi connection that has behaved randomly since the last Ubuntu update, and an existing API code base that is sometimes hard to follow.

Basically, we are now in the following configuration, as far as the drone is concerned:

  • Ubuntu 11.04 (natty) with GNOME on the computer side (Intel Core 2 Duo @2.20GHz)
  • Firmware 1.5.1 on the drone
  • ARDrone API 1.6
  • XBox 360 controller for manual inputs (keyboard mapping currently broken)
  • jEdit as a code editor

Software-wise, nothing else is needed. Obviously, some libraries are required, such as SDL, g++ for the compiler, and later OpenCV for the image analysis. All the code will indeed be done in C and C++; most of what already exists is written in C (i.e. the API), and our personal code shall mostly be written in C++.

Activating our own algorithm

We now have the possibility to switch between manual control of the drone (e.g. just after taking off and before landing) and automated control managed by our own custom algorithm, by merely pushing one button on our controller. Besides, all the other necessary commands are also here, coded by ourselves, like performing an emergency shutdown or a flat trim (calibration on the ground). A lot of this was achieved thanks to a helpful presentation found on the web1, on top of excerpts of code2.

Some early tests consisted in having the drone describe a square pattern on a horizontal plane, or circles of ever increasing radius. Everything responds well; the biggest remaining task to avoid collisions between the drone and its surroundings was to understand how to properly handle the power of the motors, whose range goes from -25000 to 25000 (what’s the difference, in numbers, between fast and really fast, for instance?). It has to be stated that the whole custom algorithm runs in real time on the computer, which constantly exchanges data and commands with the drone.
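
As an illustration of the kind of safeguard we mean, a tiny helper could map a speed fraction to that integer range while clamping it, so that an overly aggressive command never exceeds the allowed bounds (the function below is purely illustrative):

    #include <algorithm>

    // Illustrative mapping from a speed fraction in [-1, 1] to the integer
    // motor power range we observed (-25000 to 25000), with clamping.
    int toMotorCommand(float fraction) {
        fraction = std::max(-1.0f, std::min(1.0f, fraction));
        return static_cast<int>(fraction * 25000.0f);
    }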

Tracking a roundel

One of the other objectives we had in mind while taking the time to set up a neat development environment was to be able to soon integrate our own image analysis. This will be done in a specific part of our code using the OpenCV library.

But before moving on to this next step, which still has to be mastered, we wanted to use the roundel detection already provided by the latest AR.Drone firmware. Thanks to the API, we can get the coordinates of one (or many) roundels detected on the ground by the vertical camera. With this information, we quickly developed a really basic algorithm supposed to keep track of a roundel by hovering on top of it and hopefully following it. The drone basically uses a kind of proportional controller: the further it is from its goal (that is, having the roundel located in a square centered in its vertical camera video field), the faster it activates its rotors to correct the error. Our first rough results with this approach can be seen in the video below, and a small sketch of the controller follows the video note.

 

NB: the XBox controller’s only purpose is to ensure that the drone is located on top of a roundel before activating our algorithm with a button.
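
To give an idea of how little code this first controller represents, here is a minimal sketch of a proportional correction; the names are illustrative and not taken from our program.

    // Minimal sketch of the proportional controller described above: the
    // further the roundel is from the center of the vertical camera view,
    // the stronger the correction sent to the drone.
    struct Correction { float roll; float pitch; };

    // xc, yc: roundel coordinates reported by the firmware detection
    // cx, cy: center of the vertical camera view
    // kp:     proportional gain
    Correction proportionalCorrection(float xc, float yc,
                                      float cx, float cy, float kp) {
        Correction c;
        c.roll  = kp * (xc - cx);  // lateral error drives the roll command
        c.pitch = kp * (yc - cy);  // longitudinal error drives the pitch command
        return c;
    }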

 

Some obvious issues appeared after our first tests:

  • The drone is not stable enough: it oscillates a lot, which may be acceptable for tracking one single robot, but is certainly a problem when more are involved
  • The drone overshoots regularly while trying to correct its error, at the risk of losing track of the roundel
  • The altitude handling is also far from being smooth

All those remarks boil down to one: the controller is not satisfying enough, and more tuning of the constants won’t provide a dramatic improvement. This leads to the following conclusion: we need a better controller, and we may want to investigate a PID (Proportional, Integral, Derivative) one. We have already done some tests with it so far, and it proves to be much more promising in terms of steadiness and robustness. It will however be the topic of a future article.

  
References

  1. OpenCV/ARDrone Two Parts Presentation – PDF file, by Cooper Bills
  2. Robot learning, page by Cooper Bills – see at the bottom of the page, in the optional TA lectures

Improving the kiwi drive: tests with a compass sensor

After hours of testing (or “playing”, depending on how much you appreciate remote-controlled cars), we’ve been forced to acknowledge that our omnidirectional robot was not completely accurate: after several movements in random directions, the robot’s orientation had drifted compared to its internal reference. The robot is indeed omnidirectional in the sense that it can move in any direction, but we still need to define a “front” so as to have a reference when we want it to move in a desired direction.

So this was without a doubt a problem we had to focus on, even if we knew that the robot would never be 100% accurate over a full experiment. Thus, we came up with the idea of mounting a compass sensor on the robots. This way, we can set a “reference angle” for the robot and it will move according to this angle. For instance, if north is the reference and we want the robot to go north, it will move in the right direction even if it’s not facing it. The principle is very simple: we take the angle given by the controller (so 0° if it’s north) and, when the brick receives the command, it checks which direction it is facing and corrects the angle accordingly (so if it were facing south, it would adjust the angle by +180°).
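
A sketch of this correction, with illustrative names (the actual code runs on the brick), could look like this:

    #include <cmath>

    // commandAngleDeg:   direction requested by the controller, relative to
    //                    the reference (e.g. 0 degrees = north)
    // compassHeadingDeg: direction the robot is currently facing
    // Returns the angle to use in the robot's own frame, wrapped to [0, 360).
    float correctedAngle(float commandAngleDeg, float compassHeadingDeg) {
        // facing south (180) with a "go north" (0) command gives -180,
        // i.e. +180 once wrapped, as in the example above
        float angle = commandAngleDeg - compassHeadingDeg;
        angle = std::fmod(angle, 360.0f);
        if (angle < 0.0f) angle += 360.0f;
        return angle;
    }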

 

 

As you can see, we tried different spots for the compass sensor. In the left video, the compass is quite low, very close to the motors, and the magnetic fields induced by the motors alter the compass measurements, which leads to poor driving behavior. In the right video, we set up the sensor far from the motors and the robot’s behavior is much better, as you can observe. Nonetheless, we had another problem: the robot behaves randomly in certain spots. Even though the robot is asked to go forward all along the experiment, at one point it just turns and loops on itself.

 

 

This behavior is due to some magnetic perturbations in the room: we took a compass and checked it everywhere. After sampling every area, we found that north was actually pointing in different directions in several parts of the room. Still, our system is handy in that, no matter which direction the robot is facing, it will go in the direction you input with the controller. Besides, the errors accumulated over time (the orientation drift) would be completely solved, because the orientation wouldn’t matter any longer with such a system.

But the system only works in particular areas (with no magnetic disturbances), and this is something we will have to solve in the next articles. Ideas that came up include using an accelerometer and a gyroscope, using the drone to help the robots, or even implementing a PID.

Controlling the drone with our own program – roundel recognition

So far, what has been shown here about the drone was mostly made possible by third-party applications that we somehow managed to get working with our devices (PC or smartphone). This was good as an introduction, but here you will read how we controlled the drone with programs we compiled ourselves.

First of all, we are using the AR.Drone SDK 1.6, with the latest firmware to this day running on the drone, that is version 1.5.1. Lots of issues have been encountered with the latter by the developer community, among them:

  • Updating the firmware from version 1.3.3 directly to 1.5.1 while skipping 1.4 results in bricking the quadricopter. Parrot is currently working on a solution accessible to everybody1, 2.
  • Using the Windows SDK 1.6 with the newest firmware implies losing the video stream, on top of many timeout issues and non-working controls (well, we are not sure if this last one is directly linked to that combination).

So we tried the examples provided in the SDK, which enable basic control of the drone. However, before running them, one needs to compile and build the source code, making sure that all the libraries are correctly installed.

Windows SDK example

To have a good experience with the Windows SDK 1.6, it is recommended to stay with a firmware version no more recent than 1.4.7. This way, it is possible to display the camera feeds on the computer screen. The setup is quite tedious, since nothing works straight out of the zip archive. One has to follow instructions written by other developers3, on top of looking for help in the official API forums. It looks as though this Windows part of the SDK was not tested before its release.

We eventually had it working on our Windows 7 laptop. It was possible to move the UAV using the keyboard, but the program consumed a lot of processor time, which often resulted in laggy controls, timeout issues and an unresponsive drone. Still, commands were better once the video was shut down.

We then took the opportunity to play a little with the source code and do our own stuff, like launching LED animations, but it was overall not satisfying because of the bad response time.

 

Observe the frequent timeout issues resulting in unwanted moves. It was still nice to run this Windows example to get a first glance at the raw navigation data in real time, directly in our Windows terminal. The keyboard controls give a good feel for piloting the drone. Note how CPU demanding this program is.

 

Linux SDK example

So we had to move on to another platform that is better supported and has fewer issues. Linux with Ubuntu 10.10 proved its magic in this matter, and the instructions to follow in the official developer guide were quite comprehensive and sufficient to have a running project after a few command lines.

The icing on the cake is that the navigation example gets an improved GUI that displays tons of navigation data in real time. It is also CPU friendly, handles the video efficiently with the latest firmware, and works well with a game controller.

Excerpt of the Linux navigation screen. The yellow section shows that an oriented roundel has been found on the ground, and provides its orientation, its coordinates (xc, yc) in the camera view, its width, and its distance from the drone.

The other good news is that the code is more readable, and it was not a pain in the neck to manage to display some data about tag tracking. We fulfilled this goal by calling some of the drone’s embedded functions that enable roundel tracking on the ground or in the air.

Basically, we can now recognize one or more ground robots (up to five, actually, according to the documentation) carrying the same kind of roundels on their back, and report their coordinates along with their orientation (coordinates expressed in the system of the video capture; we still have to investigate its specifications). We did not manage to follow one specific robot by hovering over a roundel, since the pre-implemented function does not seem to work anymore (we are waiting for an answer from the developers on their forum). We guess we will have to do it ourselves, which is now our next objective.
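
To give an idea of what we read, here is a hypothetical view of the detection results carried by the navdata, based only on the fields mentioned above (count, coordinates, width, distance, orientation); this struct is illustrative and does not match the SDK declarations verbatim.

    #include <cstdio>

    // Hypothetical, simplified view of the roundel detection results.
    struct RoundelDetection {
        int   count;           // up to 5 tags, according to the documentation
        float xc[5], yc[5];    // coordinates in the vertical camera view
        float width[5];        // apparent width of each tag
        float dist[5];         // distance from the drone, as reported
        float orientation[5];  // orientation of each oriented roundel
    };

    void printDetections(const RoundelDetection& d) {
        for (int i = 0; i < d.count; ++i) {
            std::printf("tag %d: (%.0f, %.0f) width=%.0f dist=%.0f angle=%.0f\n",
                        i, d.xc[i], d.yc[i], d.width[i], d.dist[i], d.orientation[i]);
        }
    }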

As an aside, we discovered the resolution of the vertical camera: it is a QCIF resolution of 176×144 pixels4.
  
References

  1. Have a look at the official and public forums to be kept up to date: http://forum.parrot.com/ardrone/en/viewforum.php?id=7
  2. http://www.ardrone-flyers.com/ is a great website to get answers on how to save your drone by yourself. Previous firmware versions can also be found in its wiki section.
  3. Very clear instructions can be followed on this website: http://www.sparticusthegrand.com/ardrone/ardrone-compile-windows-sdk-1-5-in-visual-studios-2010/ (it is about SDK 1.5, but still relevant for 1.6)
  4. Wikipedia explanations on the CIF conventions

The day we crashed the drone

Sometimes, tragic accidents just happen. Well, we were not exactly surprised by the one that befell us last week, when we damaged our AR.Drone so badly it couldn’t fly anymore, since we had been playing with fire. Or rather with wind, to be precise. Besides, when you do numerous experiments with a really light device that can fly really high in the sky, you have to expect to perform some repairs, sooner or later.

Events were as follows. So far, our UAV had only been flying indoors, and we really wanted to try it outside, to watch its behavior in a different setting, to test the wireless range, its reactivity to navigation commands, its speed, its ability to hover despite the wind… We therefore took a laptop, an Xbox 360 controller and our drone to the courtyard next to our building, and started our first outdoor flights.

In spite of an overall mild weather, there was some wind regularly blowing and disturbing our controls from time to time. The quadricopter proved to be quite steady on the whole and able to respond well, even while being 30 meters away from us. Except with sudden gusts of wind and an obstacle nearby. We crashed it a few times: a window and a vertical pole were its very first close encounters. Those happened, however, at heights of no more than 2-3 meters, and did not really damage the drone; its security system is indeed really efficient when it comes to shutting down the rotors as soon as one of them is blocked. The last accident happened a little higher, around 5 meters; the AR.Drone then immediately stopped its rotors, started its emergency fall and proceeded to a harsh landing on the concrete ground, using one of its rotors as a shock absorber. Not the best scenario.

As a result, a gear connected to one of the four motors was destroyed, and, worse, the central cross that holds the whole body together was broken on one of its four arms. It was still possible to start the drone, but after around two seconds of spinning up the rotors (all four of them were still working once the faulty gear was removed), its internal sensors detected a problem and it automatically shut down (most probably because one motor wasn’t loaded enough).

The central cross after our accident. No wire was cut, so we simply tried a glue + duct tape solution to fix it!

A gear in really bad shape, of no use anymore. The drone hit the pavement right on this piece after its 5 m fall.

The good thing about this accident is that we learned quite a lot about the drone itself, since we had to open it completely to perform the “surgery”. We have seen that it is highly modular, and its functions are well split into distinct parts: propellers, gears, shafts, motors, central cross, motherboard, navigation board, horizontal camera, hull, and so on. Replacement parts can be ordered, and videos for each kind of repair are shown on Parrot’s official website, so the fix is easily made, whatever the problem is. It is even possible to rebuild a whole new drone only from replacement parts.

A bag of new gears. We also bought a new central cross, but we haven't used it yet; we will see how long our own repair lasts.

The finishing touch

After fixing it, the drone (seen from below) looks as good as new.

Even if we had to wait ten days to get the replacement parts, the good news is that our drone is fully functional again. We took the opportunity to make a video (see below) of us controlling the drone with an Xbox 360 controller plugged into a PC connected to the drone’s hotspot (WD ARDrone and Xpadder were running on the laptop to enable this kind of flight). Watch how steady it is, and the accuracy provided by the gamepad.