
Final words on the drone: merging previous work with new tasks

Last automatic control improvements

What is wrong with our Proportional-Integral-Derivative (PID) controller…

We have spent a good amount of time over the last months implementing, testing, tuning and tweaking our PID controller. Our understanding of its underlying theory (mathematical, physical and computational) improved along the way, and our results generally got better with time. Overall, we managed to get our drone to stay on top of a defined target, at a given altitude, without drifting much. Our main concern, however, was that after a few minutes of running time, it would start describing circles around the target that would grow bigger and bigger, or simply drift away. It became clear that this was not due to our detection algorithm, but rather to our PID.

And indeed, when we stepped back and went over all our results and data again, we confirmed our very first doubts. PID controllers are great when you have to stabilize a system with one degree of freedom; more precisely, they are efficient when you work with one error parameter that can be corrected by a set of actuators which have no influence on the other error parameters that also need to be regulated. With our drone, we face a situation with three degrees of freedom (i.e. pitch, roll, yaw) that we try to stabilize toward our goal, using the very same actuators (i.e. our four rotors). So far, we have not even touched yawing, since it was not strictly required for our project. Thus, every time our algorithm corrects one degree of freedom, it perturbs the correction of the others, which then need a stronger correction of their own, and so on toward more and more instability: an explosive system.
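For reference, the per-axis regulator we keep referring to can be sketched in a few lines (a minimal illustration, not our exact implementation; the gains kp, ki and kd are the parameters we spent so much time tuning):

```python
class PID:
    """Single-axis PID regulator: turns an error signal into a command."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        """Compute the command for the current error, dt seconds after the last call."""
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

The coupling problem appears because, on a quadricopter, the command computed by one such independent loop (say, for pitch) physically perturbs the axes regulated by the other loops, and none of them can anticipate it.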

This is the conclusion we eventually came to, and it is commonly shared by others working on this kind of system [1]. This difficulty has been researched, and a non-linear stabilization algorithm [2], adapted to the four-rotor design, seems to be the most appropriate solution. Such solutions have been designed, implemented and experimented with successfully on similar platforms [3]. This is a vast field of study in itself, and could still use more investigation. Unfortunately, it calls for a long dive into the mechanics of the quadricopter, and would require independent access to each of the motors (through the API, we can only control movements at a higher level, by choosing pitch, roll and yaw) and preferably to the embedded firmware, neither of which is currently made possible by Parrot, who designed our ARDrone.

Patching our PID controller with an auto-hovering threshold

Once again, we wanted to make our way toward our project’s goal by investigating our own solutions. As we saw during the first experiment we performed at the beginning of our project, the embedded hovering function is quite efficient at stabilizing the drone. And stabilization is precisely the feature we lack when our PID approaches its zero-error point, since this is when it starts to “explode” by correcting parameters it should not, due to small errors right on top of the target (our PID nevertheless remains best at going straight to the target, adjusting itself progressively, without much overshoot, when the drone is further away).

However, we cannot rely on this hovering feature alone to track one robot, let alone a whole flock of them. We therefore settled on the following compromise: auto-hovering (i.e. stabilized hovering performed by an embedded algorithm we do not control) is automatically activated when the drone finds itself in a circular area on top of our point of interest (the robot leader), whereas our PID controller takes control over the motors when outside of this restricted area (cf. Figure 1).

Figure 1: Illustration of the behavior of the drone. The green rectangle shows the 2D field of view (FOV) as it is projected on the ground for the vertical camera. As long as the error radius (distance between target and drone) is greater than a threshold, the drone is controlled by our PID regulator. Once it reaches this threshold, PID control is stopped and the embedded hovering algorithm takes over and tries to stabilize the quadricopter.
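In code, the resulting hybrid controller boils down to a radius test. The sketch below illustrates the idea under our assumptions: `drone` is a hypothetical handle wrapping the API’s hover and move commands, the threshold value is a placeholder, and the two regulators are instances like the PID class sketched earlier:

```python
import math

HOVER_RADIUS = 0.3  # switching threshold in meters (placeholder value)

def control_step(err_x, err_y, drone, pid_pitch, pid_roll, dt):
    """Hand control to the embedded hovering near the target, to our PID elsewhere."""
    error_radius = math.hypot(err_x, err_y)   # horizontal distance to the target
    if error_radius < HOVER_RADIUS:
        drone.hover()                         # embedded stabilization takes over
    else:
        pitch = pid_pitch.update(err_x, dt)   # regulators like the one sketched above
        roll = pid_roll.update(err_y, dt)
        drone.move(pitch=pitch, roll=roll)
```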

By combining the best of both designs, we achieved our best stabilization so far. Tracking a moving robot is also performed well, which, together with the flock coordinate-reporting process, completely fulfills the purpose of our drone. Yet we really lacked a bigger test room: our main constraint was the low ceiling, which prevents efficient long-term robot tracking, since it is really hard to keep even one robot in the camera’s field of view when you cannot fly comfortably higher than about 1.8 meters. Well, at least that put some stimulating additional challenge into our tasks.

Drone and ground units: how our UAV is fully controlled

We have accomplished many tasks with our ARDrone, and it might not be obvious how they all fit together in the scope of our project, to the point where one could wonder how we send it commands, and in response to which events. This part of the article is a quick recap of all the implemented methods that enable the drone to perform its required actions.

Auto landing and taking off

Landing and taking off are the only two stages of a flight over which we have no control beyond sending the command that starts either maneuver. Once the drone acknowledges the command, the embedded software takes over the controls and operates it. Our custom algorithm never triggers these operations: they are decided by the person in charge of the drone, who uses a specifically programmed controller (see the part below).

In addition, a safe landing may happen without any input on our side, in case of low battery.
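For reference, this is roughly what triggering those maneuvers looks like over the drone’s AT-command channel. The sketch assumes the factory-default IP and UDP port; the bit layout follows Parrot’s SDK documentation as we remember it, so treat the exact constants as an assumption to double-check:

```python
import socket

DRONE_IP = "192.168.1.1"  # factory-default ARDrone address
AT_PORT = 5556            # UDP port of the AT-command channel

# 32-bit REF argument: a constant base pattern, with bit 9 toggling take-off.
REF_LAND = 0b10001010101000000000000000000  # = 290717696
REF_TAKEOFF = REF_LAND | (1 << 9)           # = 290718208

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
seq = 1  # every AT command carries an increasing sequence number

def send_ref(value):
    """Send one AT*REF command; the embedded software then operates the maneuver."""
    global seq
    sock.sendto(f"AT*REF={seq},{value}\r".encode(), (DRONE_IP, AT_PORT))
    seq += 1

send_ref(REF_TAKEOFF)  # and send_ref(REF_LAND) to land
```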

Manual control

Even though most of the flying time is managed by our custom algorithm, which takes control over the drone’s commands, we still need to be able to provide user input at some point. The drone is first of all a rather flimsy and sometimes even dangerous flying object that could deal minor damage to itself or its surroundings if handled badly: it is therefore necessary to have at least the capacity to quickly stop it completely in an emergency, or better, to switch back to full human control in case of unwanted behavior. Moreover, since our algorithms involve a lot of parameter tuning, it is advisable to enable in-flight parameter tweaking. The most appropriate way appeared to be a game controller, mapped to take advantage of the sticks (great for moving the drone) and the various buttons (for more specific orders).

Figure 2 shows the Xbox 360 controller we chose and how we map its buttons to the different functions of our program. The following sums up our gamepad functionalities; a sketch of the corresponding dispatch loop follows the figure:

  • Start or stop the landing or take-off phase. This can be done at any time.
  • Emergency stop, enabled at any time, to suddenly cut all four motors. Sometimes also required before a new take-off, to reset the drone’s state.
  • Flat trim tells the drone that it is currently in a horizontal position: necessary for proper landing, take-off and hovering, and should be done before taking off.
  • Yaw, altitude, pitch and roll represent all the degrees of freedom of the drone, split across the two sticks. If our custom algorithm is disabled, it is possible to control all the drone’s movements in 3D space in real time.
  • Start or stop custom algorithm enables or disables our control algorithm, which tries to locate and track a leader on the ground while reporting the coordinates of all units in the flock to that leader.
  • Viewpoint changes the camera viewpoint displayed on the user screen (switching between the vertical camera, the horizontal one, or both). While the custom algorithm is enabled, you need the vertical camera.
  • Stop program stops pretty much everything, except manual control.
  • Start/stop hovering disables the custom algorithm and manual input to activate the embedded function that stabilizes the drone. It comes in very handy when you just need a stable hovering drone that does not move much.
  • Finally, you can select the PID parameters, one after the other, and change them on the fly.


Figure 2: This is how our controller is finally mapped. It gives us complete control over all the possible actions of the drone in real time, and makes it very easy to tune our parameters while the program runs. Note that this should work with any other generic controller (at least any remotely compatible with Linux).
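In practice, such a mapping is little more than a dispatch table from controller events to handler functions. Here is a minimal sketch using the pygame library; the button indices and action names are hypothetical, since the real ones depend on the pad and on our program:

```python
import pygame

# Hypothetical button-index -> action mapping; real indices depend on the pad.
BUTTON_ACTIONS = {
    0: "toggle_takeoff_land",
    1: "emergency_stop",
    2: "flat_trim",
    3: "toggle_custom_algorithm",
    4: "switch_viewpoint",
    5: "toggle_hovering",
    6: "stop_program",
}

pygame.init()
pad = pygame.joystick.Joystick(0)  # first controller found
pad.init()

while True:
    for event in pygame.event.get():
        if event.type == pygame.JOYBUTTONDOWN:
            action = BUTTON_ACTIONS.get(event.button)
            if action:
                print("dispatch:", action)    # call the matching handler here
        elif event.type == pygame.JOYAXISMOTION:
            value = pad.get_axis(event.axis)  # in [-1, 1]; feeds yaw/altitude
            # or pitch/roll depending on the stick, when manual control is active
```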


Algorithm control

With a simple button press, one can enable or disable our custom algorithm. But what exactly does it encompass?

Double PID

We have a double PID controller affecting three degrees of freedom, in order to stabilize two behaviors:

  • a constant altitude, which we want set (by default) around 1.8 meters. This is done through the gaz (vertical speed) command.
  • tracking the leader and hovering on top of it by detecting its tag. This is done through the pitch and roll commands.

More details are provided in the first part of this article and in previous ones: 1, 2.
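Put together, one iteration of this double regulation could look like the following sketch, where `pid_alt`, `pid_pitch` and `pid_roll` would be regulators like the one sketched earlier, and `drone.move` a hypothetical wrapper around the API’s movement command:

```python
TARGET_ALTITUDE = 1.8  # meters, our default setpoint

def regulate(drone, pid_alt, pid_pitch, pid_roll, tag_err_x, tag_err_y, altitude, dt):
    """One iteration of the double PID described above."""
    gaz = pid_alt.update(TARGET_ALTITUDE - altitude, dt)  # altitude, via gaz
    pitch = pid_pitch.update(tag_err_x, dt)               # tracking: forward/backward
    roll = pid_roll.update(tag_err_y, dt)                 # tracking: left/right
    drone.move(pitch=pitch, roll=roll, gaz=gaz, yaw=0.0)  # yaw is left untouched
```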

Tilt handling

Coordinates of the target to follow are corrected for the inclination and the altitude of the drone before being fed to the PID loop. This problem and its solution are thoroughly discussed in a former article.
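As a reminder of the idea (the full derivation is in that article), a first-order correction subtracts the apparent shift of roughly altitude × tan(angle) that the tilt induces. The sketch below assumes simplified sign and axis conventions:

```python
import math

def correct_for_tilt(x, y, pitch, roll, altitude):
    """Remove the apparent target displacement caused by the drone's tilt.

    When the drone pitches or rolls, the vertical camera no longer points
    straight down: a target right below it appears shifted by about
    altitude * tan(angle). Signs depend on camera and axis conventions.
    """
    corrected_x = x - altitude * math.tan(pitch)
    corrected_y = y - altitude * math.tan(roll)
    return corrected_x, corrected_y
```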

Detecting ground units and reporting their coordinates

Not only do we detect the tag that the drone has to track, but we also use our color-based object detection algorithm to report the coordinates of the whole set of robots to the leader of the flock.

This implies that the program also handles part of a network layer: it starts a thread that acts as a client connecting to the leader.
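A minimal version of such a client thread could look like this (a sketch: the leader’s address, the port and the message format are placeholders, not our actual protocol):

```python
import socket
import threading
import time

def report_coordinates(leader_host, leader_port, get_positions):
    """Continuously send the detected robot positions to the flock leader."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((leader_host, leader_port))
    try:
        while True:
            # get_positions() would return e.g. {robot_id: (x, y)} from the detector
            msg = ";".join(f"{rid}:{x:.2f},{y:.2f}"
                           for rid, (x, y) in get_positions().items())
            sock.sendall((msg + "\n").encode())
            time.sleep(0.1)  # report at ~10 Hz
    finally:
        sock.close()

# Started in the background next to the control loop, e.g.:
# threading.Thread(target=report_coordinates,
#                  args=("192.168.1.42", 7000, detector.positions),
#                  daemon=True).start()
```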

Change of tracked leader

The very last feature we needed in order to fulfill our main goals was to make sure we are tracking the right robot. Since all the robots are supposed to follow the leader, and the leader is controlled by a human user, it seems appropriate to track the unit targeted by the other units. This way, we make sure that the leader is always in sight, which enables us to establish the rest of the formation accordingly. Furthermore, if a random unit were to get lost (out of the camera’s FOV), the leader, assisted by a human, could go looking for it; once it gets close to the lost unit, the latter becomes part of the formation again, since it is finally back in the camera’s FOV.

We then had to establish a protocol to decide who is going to be the leader, how the drone gets this information and how it should respond to it:

  1. Leader number (LeaderNumber, i.e. the id of the robot chosen as a leader) is decided on the flock side, usually by the human who controls the leader itself.
  2. LeaderNumber is passed to the drone’s program through the same client/server socket connection that is used for sending the flock coordinates. The drone has a thread continuously listening for new events on this socket, and registers the current LeaderNumber.
  3. Once we get this LeaderNumber, the drone has to try to follow the corresponding unit on the ground. However, in some cases, the unit may not be in sight, or the LeaderNumber may not even be decided yet. The logical steps that help the drone take the right course of action are detailed in the decision tree in Figure 3, and in the sketch that follows it.


Figure 3: Decision tree to find the number (id) of the robot leader and the drone’s action that should ensue. Hover here simply means hovering on the current spot, with the embedded stabilization algorithm.
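In code form, the decision tree reduces to a few guards (a sketch; `detections` would map the robot ids found in the current frame to their positions):

```python
def decide_action(leader_number, detections):
    """Mirror the decision tree of Figure 3.

    Returns "hover" (embedded stabilization on the current spot) or a
    ("track", position) tuple handed to the PID loop.
    """
    if leader_number is None:            # no leader decided yet
        return "hover"
    if leader_number not in detections:  # leader outside the camera's FOV
        return "hover"
    return ("track", detections[leader_number])
```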

Basically, this means that we have added a new way to directly influence the behavior of the drone: an input coming from the robot leader, on top of the human’s direct input through the gamepad and the algorithmic PID control.

Testing everything together

Our very last tests included all the above-mentioned features, which appear to work quite smoothly together, even with our flock of four robots. No unexpected behaviors were observed during our final experiments, so everything was pretty much already discussed in previous articles. The last part that still needed testing was the leader-switching task. Since our robot detection, our communication protocol and our drone hovering were already performing well separately, we did not have much tuning to do. Video 1 below illustrates this performance.


Video 1: The ARDrone is tracking two robots from our flock. Our flock-control program switches the “leader” (done manually here, for illustration purposes), i.e. the robot that leads the flock and that the drone is supposed to follow. So here, the leader switches back and forth between red and blue, and the drone moves accordingly and tries to hover on top of it. Hovering is not 100% steady: this is largely due to a lack of altitude, as our ceiling is too low to get a good enough field of view! For filming and testing purposes, it was necessary to focus on no more than two still units. Note, however, that it works just as well with more, moving robots.

Videos and other, more complete experiments will be available in our next article, where we deal with the whole flock together with the drone.
References

  1. Dronolab, a quadrotor project involving mechanical, electrical and software engineering students. They moved from a PID controller to a more sophisticated one. http://dronolab.etsmtl.ca/uav/
  2. Daniel Tabak, on a general digitally controlled system: An Algorithm for Nonlinear Process Stabilization and Control, 1970.
  3. An interesting, thorough study on the design of an embedded control architecture for a four-rotor unmanned air vehicle performing autonomous hover flight: Escareno J., Salazar-Cruz S. and Lozano R., Embedded control of a four-rotor UAV, 2006.
