Coding with the drone – Performing roundel tracking

Development setup

Our development environment for the drone is now properly set up, meaning that we can finally be efficient while programming: we write a few lines of code and test them on the drone seconds later. It was not an easy task, between the drone motherboard that suddenly stopped working properly (thanks to the warranty and Parrot's good customer service, this was solved ten days later when we received a brand new motherboard), the WiFi connection that had behaved erratically since the last Ubuntu update, and an existing API codebase that is sometimes hard to follow.

Basically, we are now in the following configuration, as far as the drone is concerned:

  • Ubuntu 11.04 (natty) with GNOME on the computer side (Intel Core 2 Duo @2.20GHz)
  • Firmware 1.5.1 on the drone
  • ARDrone API 1.6
  • XBox 360 controller for manual inputs (keyboard mapping currently broken)
  • jEdit as a code editor

Software-wise, nothing else is needed. Obviously, some libraries are required, such as SDL, g++ for the compiler, and later OpenCV for the image analysis. All the code will be written in C and C++: most of what already exists is written in C (i.e. the API), and our personal code will mostly be written in C++.

Activating our own algorithm

We now have the possibility to switch between manual control of the drone (e.g. just after take-off and before landing) and automated control managed by our own custom algorithm, by merely pushing one button on the controller. Besides, all the other necessary commands are also there, coded by ourselves, such as performing an emergency shutdown or a flat trim (calibration on the ground). A lot of this was achieved thanks to a helpful presentation found on the web [1], on top of excerpts of code [2].
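To give an idea of how little code such a mode switch requires, here is a minimal sketch of the kind of toggle we use, written against SDL's joystick interface (which our project already depends on). The button indices, the mode names and the empty control branches are illustrative placeholders, not our actual code.

```cpp
// Sketch of a manual/automatic mode toggle driven by a gamepad button (SDL 1.2).
// Button indices and the control functions are illustrative placeholders.
#include <SDL/SDL.h>

enum ControlMode { MANUAL, AUTOMATIC };

int main()
{
    SDL_Init(SDL_INIT_JOYSTICK);
    SDL_Joystick* pad = SDL_JoystickOpen(0);      // first connected controller
    if (pad == NULL) return 1;                    // no controller plugged in

    ControlMode mode = MANUAL;
    bool wasPressed = false;
    bool running = true;

    while (running)
    {
        SDL_JoystickUpdate();
        bool pressed = SDL_JoystickGetButton(pad, 7) != 0;   // hypothetical "start" button

        // Toggle on the rising edge only, so holding the button does not flicker modes.
        if (pressed && !wasPressed)
            mode = (mode == MANUAL) ? AUTOMATIC : MANUAL;
        wasPressed = pressed;

        if (mode == MANUAL)
            ;   // forward gamepad axes to the drone (left out here)
        else
            ;   // run the custom tracking algorithm instead

        if (SDL_JoystickGetButton(pad, 6))        // hypothetical "back" button: quit
            running = false;

        SDL_Delay(20);                            // ~50 Hz polling
    }

    SDL_JoystickClose(pad);
    SDL_Quit();
    return 0;
}
```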

Some early tests consisted in having the drone describe a square pattern on a horizontal plane, or circles of ever increasing radius. Everything responds well; the biggest remaining task to avoid collisions between the drone and its surroundings was to understand how to properly handle the power of the motors, whose range goes from -25000 to 25000 (what is the difference, in numbers, between fast and really fast, for instance?). It has to be stated that the whole custom algorithm runs in real time on the computer, which constantly exchanges data and commands with the drone.

Tracking a roundel

One of the other objectives we had in mind while taking the time to set up a neat development environment was to be able to soon integrate our own image analysis. This will be done in a specific part of our code using the OpenCV library.

But before moving on to this next step, which still has to be mastered, we wanted to use the roundel detection already provided by the latest AR.Drone firmware. Thanks to the API, we can get the coordinates of one (or several) roundels detected on the ground using the vertical camera. With this information, we quickly developed a really basic algorithm supposed to keep track of a roundel by hovering on top of it and, hopefully, following it. The drone basically uses a kind of proportional controller: the farther it is from its goal (that is, having the roundel located in a square centered in its vertical camera field of view), the faster it activates its rotors to correct the error. Our first rough results with this approach can be seen in the video below (a rough sketch of this controller is given just after the video).

 

NB: the XBox controller's only purpose here is to ensure that the drone is located on top of a roundel before activating our algorithm with a button.
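For readers curious about what "a kind of proportional controller" means in code, here is a rough sketch of the idea rather than our exact implementation: the coordinate convention, the dead zone and the gain value are assumptions for illustration (the frame size matches the 176×144 resolution of the vertical camera discussed in an earlier article).

```cpp
// Sketch of the proportional hover controller described above.
// Coordinate convention and gains are assumptions; the real values were tuned by hand.
struct TagPosition { float xc, yc; };          // roundel centre in the vertical camera image (pixels assumed)

struct DroneCommand { float roll, pitch; };    // normalised commands in [-1, 1]

DroneCommand trackRoundel(const TagPosition& tag)
{
    const float centreX = 88.0f,  centreY = 72.0f;   // half of the 176x144 QCIF frame
    const float deadZone = 15.0f;                    // "goal square" around the centre, in pixels
    const float Kp = 0.005f;                         // proportional gain (hypothetical value)

    float errX = tag.xc - centreX;                   // positive: roundel to the right of the centre
    float errY = tag.yc - centreY;                   // positive: roundel below the centre of the image

    DroneCommand cmd = {0.0f, 0.0f};
    if (errX > deadZone || errX < -deadZone) cmd.roll  = Kp * errX;
    if (errY > deadZone || errY < -deadZone) cmd.pitch = Kp * errY;
    return cmd;                                      // the farther the roundel, the stronger the reaction
}
```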

 

Some obvious issues appeared after our first tests:

  • The drone is not stable enough: it oscillates a lot, which may be acceptable for tracking one unique robot, but is certainly a problem when more are involved
  • The drone regularly overshoots while trying to correct its error, risking losing track of the roundel
  • The altitude handling is also far from smooth

All those remarks boil down to one: the controller is not satisfying, and more tuning of the constants will not provide a dramatic improvement. This leads to the following conclusion: we need a better controller, and we may want to investigate a PID (Proportional, Integral, Derivative) one. We have already run some tests with it, and it proves to be much more convincing in terms of steadiness and robustness. It will however be the topic of a future article.
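As a teaser for that future article, here is the textbook form of the PID controller we are experimenting with, one instance per controlled axis; the gains shown are placeholders and the real tuning is still work in progress.

```cpp
// Minimal PID controller sketch, one instance per controlled axis (roll, pitch, gaz).
// Gains are placeholders; tuning them on the real drone is the whole point of the next article.
class Pid
{
public:
    Pid(float kp, float ki, float kd)
        : kp_(kp), ki_(ki), kd_(kd), integral_(0.0f), prevError_(0.0f) {}

    // error: distance to the goal, dt: time since the last update in seconds (must be > 0)
    float update(float error, float dt)
    {
        integral_ += error * dt;                         // accumulates steady-state error
        float derivative = (error - prevError_) / dt;    // damps the overshoot we observed
        prevError_ = error;
        return kp_ * error + ki_ * integral_ + kd_ * derivative;
    }

private:
    float kp_, ki_, kd_;
    float integral_, prevError_;
};
```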

  
References

  1. OpenCV/ARDrone Two Parts Presentation – PDF file, by Cooper Bills
  2. Robot learning, page by Cooper Bills – see the bottom of the page, in the optional TA lectures

Improving the kiwi drive: tests with a compass sensor

After hours of testing (or “playing”, depending on how you feel about remote-controlled cars), we have been forced to acknowledge that our omnidirectional robot is not completely accurate: after several movements in random directions, the robot's actual orientation has drifted from its internal one. The robot is indeed omnidirectional in the sense that it can move in any direction, but we still need to define a “front” so as to have a reference when we want it to move in a desired direction.

So this was without a doubt a problem we had to focus on, even if we knew that the robot will never be 100% accurate over a full experiment. Thus, we came up with the idea of mounting a compass sensor on the robots. This way, we can set a “reference angle” for the robot and it will move according to this angle. For instance, if north is the reference and we want the robot to go towards the north, it will move in the right direction even if it is not facing it. The principle is very simple: we take the angle given by the controller (0° if it is north) and, when the brick receives the command, it checks which direction it is facing and corrects the angle accordingly (so if it were facing south, it would adjust the angle by +180°).
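The correction itself is a one-liner; the sketch below illustrates it in C++ (the actual code runs on the brick under LeJOS, and the function names are ours, purely for illustration).

```cpp
// Sketch of the heading correction described above: the commanded direction is given
// relative to north, and the compass reading is used to express it in the robot's own frame.
// Function names and conventions are placeholders, not the actual brick code.
#include <cmath>

// Wrap any angle into [0, 360).
static float wrap360(float angle)
{
    angle = std::fmod(angle, 360.0f);
    return (angle < 0.0f) ? angle + 360.0f : angle;
}

// commandedAngle: desired travel direction, 0 = north (the "reference angle")
// compassHeading: direction the robot is currently facing, 0 = north
// returns: the direction to drive in, expressed in the robot's own frame
float correctedDriveAngle(float commandedAngle, float compassHeading)
{
    // If the robot faces south (180) and we ask for north (0), it must drive at 180 in its own frame.
    return wrap360(commandedAngle - compassHeading);
}
```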

 

 

As you can see, we tried different spots for the compass sensor. In the left video, the compass is mounted quite low, very close to the motors, and the magnetic fields induced by the motors alter its measurements, which leads to poor driving behavior. In the right video, we set the sensor up far from the motors and the robot's behavior is much more acceptable, as you can observe. Nonetheless, we had another problem: the robot behaves randomly in certain spots. Even though the robot is asked to go forward all along the experiment, at one point it just turns and loops on itself.

 

 

This behavior is due to magnetic disturbances in the room: we took a compass and checked it everywhere in the room. After sampling every area, we found that the needle was actually pointing in different directions in several parts of the room. Still, our system is handy in the sense that, no matter which direction the robot is facing, it will go in the direction you input with the controller. Besides, the error accumulated over time (the orientation drift) would be completely solved, because the orientation would no longer matter with such a system.

But the system only works in particular areas (with no magnetic disturbances), and this is something we will have to solve in the next articles. Ideas involving an accelerometer and a gyroscope came up, as well as using the drone to help the robots, or even implementing a PID.

Controlling the drone with our own program – roundel recognition

So far, what has been shown here about the drone was mostly made possible by third-party applications that we managed, in some way, to get working with our devices (PC or smartphone). This was good as an introduction, but here you will read how we controlled the drone with programs we compiled ourselves.

First of all, we are using the AR.Drone SDK 1.6, with the latest firmware to date running on the drone, that is version 1.5.1. Many issues have been encountered with the latter by the developer community, among them:

  • Updating the firmware from version 1.3.3 directly to 1.5.1 while skipping 1.4 results in bricking the quadricopter. Parrot is currently working on a solution accessible to everybody [1][2].
  • Using the Windows SDK 1.6 with the newest firmware implies losing the video stream, on top of many timeout issues and non-working controls (well, we are not sure this last one is directly linked to that combination).

So we tried the examples provided in the SDK, which enable basic control of the drone. However, before running them, one needs to compile and build the source code, making sure that all the libraries are correctly installed.

Windows SDK example

To get a reasonably good experience with the Windows SDK 1.6, it is recommended to stick to a firmware version no more recent than 1.4.7. This way, it is possible to display the camera feeds on the computer screen. The setup is rather tedious, since nothing works straight out of the zip archive. One has to follow instructions written by other developers [3], on top of looking for help in the official API forums. It looks as though this Windows part of the SDK was not tested before its release.

We eventually had it working on our Windows 7 laptop. It was possible to move the UAV using the keyboard, but the program was really CPU-hungry, which often resulted in laggy controls, timeout issues and an unresponsive drone. Commands did get better once the video was shut down.

We then took the opportunity to play a little with the source code and do our own stuff, like launching LED animations, but it was overall not satisfying because of the poor response time.

 

Observe the frequent timeout issues resulting in unwanted moves. It was still nice to run this Windows example to get a first glance at the raw navigation data in real time, directly in our Windows terminal. Keyboard controls give a good feel of drone piloting. Note how CPU-demanding this program is.

 

Linux SDK example

So we had to move to another platform that is better supported and has fewer issues. Linux with Ubuntu 10.10 proved its magic in this matter, and the instructions in the official developer guide were comprehensive enough to get a running project after a few command lines.

The icing on the cake is that the navigation example gets an improved GUI that displays tons of navigation data in real time. It is also CPU-friendly, handles the video correctly with the latest firmware, and works well with a game controller.

Excerpt of the Linux navigation screen. The yellow section shows that an oriented roundel has been found on the ground, and provides its orientation, its coordinates (xc, yc) in the camera view, its width and its distance from the drone.

The other good news is that the code is more readable, and it was not a pain in the neck to display some data about tag tracking. We fulfilled this goal by calling some of the drone's embedded functions that enable roundel tracking on the ground or in the air.

Basically, we can now recognize one or more ground robots (up to five, actually, according to the documentation) carrying the same kind of roundel on their back, and report their coordinates along with their orientation (coordinates expressed in the reference frame of the video capture; we still have to investigate its specifications). We did not manage to follow one specific robot by hovering over a roundel, since the pre-implemented function does not seem to work anymore (we are waiting for an answer from the developers on their forum). We expect that we will have to do it by ourselves, which is now our next objective.
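To give a flavour of what we get back from the drone, here is a sketch of how the detection results can be consumed and printed. The structure below only mirrors the fields we actually read (number of tags, coordinates, width, distance, orientation); its name and layout are ours for illustration, not the SDK's exact types.

```cpp
// Sketch of consuming the vision-detection results once the navdata arrives.
// The struct is a stand-in mirroring the fields described above, not the real SDK type.
#include <cstdio>

const unsigned int MAX_TAGS = 5;   // "up to five" according to the documentation

struct VisionDetect
{
    unsigned int nb_detected;
    float xc[MAX_TAGS], yc[MAX_TAGS];          // tag centre in the vertical camera view
    float width[MAX_TAGS];
    float dist[MAX_TAGS];                      // distance from the drone
    float orientation_angle[MAX_TAGS];         // orientation of the oriented roundel, in degrees
};

void reportGroundRobots(const VisionDetect& d)
{
    for (unsigned int i = 0; i < d.nb_detected && i < MAX_TAGS; ++i)
    {
        std::printf("robot %u: pos=(%.0f, %.0f) width=%.0f dist=%.0f orientation=%.1f\n",
                    i, d.xc[i], d.yc[i], d.width[i], d.dist[i], d.orientation_angle[i]);
    }
}
```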

As an aside, we discovered the resolution of the vertical camera: it is a QCIF resolution of 176×144 pixels [4].
  
References

  1. Have a look at the official and public forums to be kept up to date: http://forum.parrot.com/ardrone/en/viewforum.php?id=7
  2. http://www.ardrone-flyers.com/ is a great website for finding out how to save your drone by yourself. Previous firmware versions can also be found in its wiki section.
  3. Very clear instructions can be followed on this website: http://www.sparticusthegrand.com/ardrone/ardrone-compile-windows-sdk-1-5-in-visual-studios-2010/ (it is about SDK 1.5, but still applies to 1.6)
  4. Wikipedia explanations of the CIF conventions

Working out the omnidirectional behavior of the kiwi drive

It’s been a while we’ve been working on this omnidirectional robot. We spent a lot of time on the construction (so as to provide a solid and reliable structure), trying to figure out how to move it (just for pure experiments, regardless to the omnidirectional purposes) and now, the last step was to implement the kiwi drive ( i.e. how to move the robot in any direction without rotating it). In order to make it easy to understand and picture the movements of the robot, we’ll refer to the image below.

 

Omnidirectional robot with axes and conventions

 

Motor A | B | C lies along [OA) | [OB) | [OC) and moves the whole robot along the black arrow when activated (in the opposite direction if we apply a “negative” power), and the motors are 120° apart from each other. This was basically all the information we had about the structure of our robot, and we had to implement the kiwi drive with no prior knowledge of how it should work. We will explain, step by step, the way we programmed it.

First, we tried to make the robot go in “basic” directions. Indeed, if A gets the power |p| and B gets -|p|, the robot moves along [ON), translating itself. So if we input two opposite powers to two motors, the robot translates along one of the blue axes. This way, we knew how to go towards six directions in omnidirectional mode, using the parameters given in the table below.

 

Distribution of motor power according to the angle

 

From this point, we wanted to handle other values, starting with 30°. Our first strategy was to take the values for 0° and 60° and average them for every motor. We ended up with 0.5, -1 and 0.5. The result was better than expected, because it actually worked… So we applied the same strategy for any angle, that is, a linear interpolation (we came up with the diagram above). In order to make it easy to control and pleasant to play with (that is one of the big advantages of doing a master's thesis in robotics), we went through the trouble of integrating the XBox 360 controller into our system and making the control very smooth. We deal directly with the controller in our code and do not go through software like Xpadder [1], which we had used in other projects, so the control is very light and fast (we used the XBoxController class [2] instead).
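Here is a sketch of that interpolation. The six basic power triples come from our diagram; the exact signs depend on how the motors are wired, so the table below is one plausible assignment rather than a verbatim copy of our code.

```cpp
// Sketch of the linear interpolation described above: six "basic" power triples,
// one every 60 degrees, and a motor-by-motor blend for any angle in between.
#include <cmath>

struct MotorPowers { float a, b, c; };   // normalised powers in [-1, 1] for motors A, B, C

MotorPowers kiwiPowers(float angleDeg)
{
    // Power triples for the six basic directions 0, 60, ..., 300 degrees (assumed assignment).
    static const MotorPowers basic[6] = {
        { 1, -1,  0},   //   0 deg: A forward, B backward -> translation along [ON)
        { 0, -1,  1},   //  60 deg
        {-1,  0,  1},   // 120 deg
        {-1,  1,  0},   // 180 deg
        { 0,  1, -1},   // 240 deg
        { 1,  0, -1},   // 300 deg
    };

    // Normalise the requested angle into [0, 360) and find the two neighbouring basics.
    float a = std::fmod(angleDeg, 360.0f);
    if (a < 0.0f) a += 360.0f;
    int lower = static_cast<int>(a / 60.0f) % 6;
    int upper = (lower + 1) % 6;
    float t = (a - 60.0f * lower) / 60.0f;   // 0 at the lower basic, 1 at the upper one

    // Linear interpolation, motor by motor (t = 0.5 at 30 deg gives 0.5, -1, 0.5).
    MotorPowers p;
    p.a = (1.0f - t) * basic[lower].a + t * basic[upper].a;
    p.b = (1.0f - t) * basic[lower].b + t * basic[upper].b;
    p.c = (1.0f - t) * basic[lower].c + t * basic[upper].c;
    return p;
}
```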

You can see and enjoy the results in the video below (notice that the robot is fully omnidirectional, and we use the second joystick to make it turn on itself). Nevertheless, we are still aware that the robot makes some errors, especially at low speed, which make it turn a bit. We are working on a solution right now (for the moment, trying to integrate a compass sensor) and we are also looking to reduce the number of messages the computer sends to the brick, so as not to flood the robot with numerous and useless commands.

 

 
References

  1. http://www.xpadder.com/
  2. Everything you need to run it on Windows is available on this website: http://www.aplu.ch/home/apluhomex.jsp?site=36

Controlling the LEGO brick via computer: the beginnings of the kiwi drive

The only tests we’ve done so far with our omnidirectional robot was using some applications (found the android market) so as control the LEGO brick with the basic LEGO firmware. It was indeed working properly (cf our previous articles and videos) but those softwares weren’t designed for the kiwi drive1 we want to use in our project and the firmware (LeJOS) we’re using on the LEGO bricks. But, before going even deeper in this article, you may want to be told a little bit more about this notion. The kiwi drive (a.k.a. the Killough Platform) is just the structure of a omnidirectional robot with three omniwheels. According to the preferences of the developers, the number of omniwheels are either three or four in the majority of the construction (even if some fanciful but impressive constructions require an unusual number of owmniwheels…).

 

So the first step was to establish a connection between the brick and the computer via Bluetooth. After that, we enabled the robot and the computer to communicate by exchanging strings. The last step of this procedure was to code (on each side) a communication protocol based on string analysis and keyword matching; its gist is sketched after the videos below. Thus, you will see in the left video below how easily we can control every motor using simple commands on the computer, and some driving tests in the right one.

 

Testing the motors with command lines (left). Demo of the driving skills of the robot (right).
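The gist of the keyword matching looks like the sketch below (shown in C++ for readability; the real parser runs on the brick under LeJOS, and the command words are invented for the example).

```cpp
// Illustrative keyword-matching parser for the string protocol described above.
// The message format ("MOTOR A 75", "STOP") and the motor call are our invention.
#include <cstdio>
#include <sstream>
#include <string>

// Stand-in for the call that actually drives a motor on the brick.
void setMotorPower(char motor, int power)
{
    std::printf("motor %c -> power %d\n", motor, power);
}

// Returns true if the received line matched a known command.
bool handleCommand(const std::string& line)
{
    std::istringstream in(line);
    std::string keyword;
    in >> keyword;

    if (keyword == "MOTOR")
    {
        char motor; int power;
        if (in >> motor >> power)            // e.g. 'A' and 75
        {
            setMotorPower(motor, power);
            return true;
        }
    }
    else if (keyword == "STOP")
    {
        setMotorPower('A', 0);
        setMotorPower('B', 0);
        setMotorPower('C', 0);
        return true;
    }
    return false;                            // unknown or malformed command
}
```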

 

Nevertheless, the robot doesn’t have an omnidirectional behavior yet: indeed, we expect the robot to be able to move toward any direction without making any rotation. Our first tests on this feature seem to be promising, but there’s still a lot of development to do to make it work properly and smoothly. You’ll be soon informed about the outcome of our experiments…

 
References

  1. Omniwheel article on Wikipedia

The day we crashed the drone

Sometimes, tragic accidents just happen. Well, we were not exactly surprised by the one that hit us last week, when we damaged our AR.Drone so badly that it could not fly anymore, since we had been playing with fire. Or rather with wind, to be precise. Besides, when you do numerous experiments with a really light device that can fly really high in the sky, you have to expect to do some repairing sooner or later.

Events were as follows. So far, our UAV had only been flying indoors, and we really wanted to try it outside, to watch its behavior in a different setting, to test the wireless range, its reactivity to navigation commands, its speed, its ability to hover despite the wind… We therefore took a laptop, an Xbox 360 controller and our drone to the courtyard next to our building, and started our first outdoor flights.

In spite of an overall mild weather, some wind was blowing regularly and disturbing our controls from time to time. But the quadricopter proved quite steady on the whole and able to respond well, even when 30 meters away from us. Except with sudden gusts of wind and an obstacle nearby. We crashed it a few times: a window and a vertical pole were its very first close encounters. Those happened, however, at heights of no more than 2-3 meters, and did not really damage the drone; its safety system is indeed really efficient at shutting down the rotors as soon as one of them is blocked. The last accident happened a little higher, around 5 meters; the AR.Drone then immediately stopped to start its emergency descent and proceeded to a harsh landing on the concrete ground, using one of its rotors as a shock absorber. Not the best scenario.

As a result, a gear connected to one of the four motors was destroyed and, worse, the central cross that holds the whole body together was broken on one of its four arms. It was still possible to start the drone, but after around two seconds of spinning up the rotors (all four were still working once the faulty gear was removed), its internal sensors detected a problem and it automatically shut down (most probably because one motor was not loaded enough).

The central cross after our accident. No wire was cut, so we simply tried a glue and duct tape solution to fix it!

A gear in really bad shape, of no use anymore. The drone hit the pavement right on this piece after its 5 m fall.

The good thing about this accident is that we learned quite a lot about the drone itself, since we had to open it completely to perform the “surgery”. We saw that it is highly modular, and its functions are well split into distinct parts: propellers, gears, shafts, motors, central cross, motherboard, navigation board, horizontal camera, hull, and so on. Replacement parts can be ordered, and a video for each kind of fix is shown on Parrot's official website, so the repair is easily made, whatever the problem is. It is even possible to build a new drone entirely from replacement parts.

A bag of new gears. We also bought a new central cross, but we haven't used it yet; we will see how long our own repair lasts.

The finishing touch

After fixing it, the drone (seen from below) looks as good as new.

Even if we had to wait ten days to get the replacement parts, the good news is that our drone is fully functional again. We then took the opportunity to make a video (see below) of us controlling the drone with an Xbox 360 controller linked to a PC connected to the drone's hotspot (WD ARDrone and Xpadder were running on the laptop to enable this kind of flight). Watch how steady it is, and the accuracy provided by the gamepad.

 

Ground unit construction and structure ideas: implementing omniwheels

Thinking about the project itself, we focused on the ground units in the flock and their movements. In order to follow the commands it has been given, a ground unit must turn on itself and then move forward in the indicated direction (and possibly make a last rotation to be aligned with the leader). This sequence of three different steps can obviously be executed in one single move but, leaving the obstacle-avoidance part aside, the split approach seems to be a reliable first step. However, we wanted to remove one of those constraints so as to make the movements of the flock units much easier to establish. We therefore tried to get rid of the “rotation” part in order to save time: thus we came up with the idea of using omniwheels. It is something we had already thought about for our last project, and we agreed that it would be worth implementing for this one.

Now that you’re aware of the reasons that led us over here, it’s maybe about time to tell you some more about the omniwheels (the name in itself can give you clues about its function but as this isn’t a very common system, we’ll spend some time on giving explanations). Before giving any theory, a visual contact with the device should give you some hints and satisfy your curiosity.

 

Omniwheel design from www.holonomicwheel.com. TETRIX 3″ Omni Wheel found on www.legoeducation.us. Eccentric omniwheel design found in the LEGO Lab.

 

So, as you may have noticed, an omniwheel (or polywheel, or even holonomic wheel) is a wheel with little wheels mounted all around its circumference. The axes of those little wheels are tangent to the circumference of the big one and all lie in the same plane (thus, they are all orthogonal to the axis of the big wheel). This architecture gives the omniwheel the properties of a normal wheel but, on top of that, the omniwheel can slide along its axis of rotation (or along the projection of this axis onto the surface the wheel is used on).

Nevertheless, there are different architectures, depending on the creativity of the builders, as you can see in the last picture on the right above (the axes of the little wheels are not orthogonal to the axis of the omniwheel). An omniwheel is not omnidirectional in itself, but using several of them can turn a robot into an omnidirectional vehicle. And this was the point of our work…

 

 

The omniwheel above is the one we came up with. We only used LEGO pieces; it is composed of 16 little wheels and, as you can see, the structure is not that massive. After that we tackled the construction of the robot, knowing that we wanted to use three of those omniwheels. The two main problems were to work out the angles so as to respect the LEGO construction conventions (the axes of the omniwheels must cross at one point and their centers must describe an equilateral triangle) and to obtain a robust structure for the robot.

 

 

Those two videos show how the robot moves during our first tests, and from this point we had to find out what could and should be improved. Namely, if you watch closely when the robot stops, it tends to wobble; it does not fall here, but it could in other situations (different speed, different slope, etc.). Another notable point is that the robot “bends” when moving or turning: this is because the fixations and the structure are not strong enough (and this was the most annoying problem). So from this point, we tried to lower the robot and spread the motors wider apart so as to provide better stability, and tried to reinforce the structure without making it too heavy (even though stability cannot be neglected, even if it comes at the expense of the robot's weight).

 

 

This way, we crafted a new robot that is much more stable (and quite a bit heavier too), but balance and solidity are no longer problems. Something that needs to be mentioned is that we inclined the omniwheels instead of installing them perpendicular to the floor (according to our experiments, this difference does not seem to alter the robot's behaviour and movements compared with any other “regular” omnidirectional robot).

 

Testing the drone with third-party applications

Trying to connect to the AR.Drone

When you buy the AR.Drone, the first thing you may want to do before developing your own programs for it is to test the UAV itself, straight out of the box. The easiest way to proceed is to own an iDevice (i.e. an iPhone/iPad/iPod Touch), download the free AR FreeFlight application made by Parrot, connect to the drone and simply run the app. It lets you pilot the UAV while streaming the video feed from both cameras directly to the phone screen.

As far as we are concerned, we do not use such devices, and we only plan on piloting the drone with a computer or an Android phone. Unfortunately, a technical detail has so far prevented us from doing it as easily as it can be done with an iPhone. The AR.Drone requires an ad-hoc connection, which cannot be established with our Android phone, since ad-hoc mode is still not supported in software by Google. It is however possible to do it with the Parrot Software Development Kit, as long as you manage to circumvent the WiFi connection issue.

Many applications for the AR.Drone are already available on the Android market (see our Links and Downloads page); however, most of them require that you root your smartphone prior to using them. Rooting a smartphone usually voids your warranty (it is nevertheless possible to unroot it), and furthermore you need to apply a patch to enable ad-hoc connections. It is completely doable, and we even tried it. However, the process is tedious and not easily explainable to anyone. Since we want to develop a system that can be started by anybody as long as the hardware is there, this is not an option we would like to keep.

 

What works well so far

On an HTC Desire smartphone

Fortunately, Shellware developed a little PC program that reverses the connection process. Instead of having the phone connect to the hotspot generated by the drone, it makes the drone connect to a hotspot provided by the phone.

Achieving this is easy thanks to the AR Assist infrastructure WiFi program (running on a WiFi-enabled computer), which tells the drone which hotspot it has to connect to. Then, one only has to start the AR Pro Android application (running on an Android phone) to pilot the UAV as one would on an iPhone.

 

Screenshot of the AR.Pro interface running on an HTC Desire. The screen is mostly covered by the front camera view, while the vertical camera view is located in the top-left corner. Joysticks handle throttle, roll, pitch and yaw, while buttons take care of automatic maneuvers. Altitude, WiFi connection quality and battery information are displayed on the screen. Note that the vertical size of the camera view has been scaled to fit the screen resolution.

 

The last annoying thing with this solution is that you need a PC around and it takes around one minute to configure. Unless you install a patch on the drone (which can be done through AR Pro, even though we did not succeed in doing so), you have to follow those same steps every time you reboot the drone.

On a laptop

It is also possible to skip the phone layer and pilot the drone directly with the computer. You just need to connect your computer to the drone's hotspot through WiFi, and then run, for instance, the WD ARDrone program, which is built with Windev 16.

 

Screenshot of the WD ARDrone application running on Windows. This is what the UAV sees with its horizontal camera.

 

This program is really reliable and enables video recording and piloting with a specific controller, like a joystick or an XBox 360 controller, as long as you map your keyboard keys to your controller buttons. The interface provides the user with all the navigation data, which increases the impression of flying from inside a cockpit. Experimenting with piloting commands using a game controller proved more accurate and steady than with a smartphone.

The powerful Xpadder software has been used to do the key mapping with our game controller.

First experiments

Before going on a coding spree and working on the project itself, we wanted to run some tests just to see if the drone could track a robot on the floor and follow it. But we did not want to code anything to do this, so we used a subterfuge: instead of using a tracking feature of the drone, we exploited its stabilization process. Indeed, in order to stay in the same spot when it does not receive any instructions, the drone uses its camera pointing at the ground, analyzes the images and tries to stay above the pattern it is matching at that moment.

In our experiments, we used a floor with an almost uniform surface (this way, it would be hard for the drone to find a landmark to stabilize itself with), made the drone take off, and then placed it above a LEGO ground unit with a disk on it (the results seemed better with the disk). At this point, the ground unit becomes the drone's landmark and, according to our theory, if the drone does not receive any instructions and the ground unit moves, the drone should follow the car.

 

 

The results are better than what we were expecting, as you can see for yourself in the video above… Notice that the right smartphone remotely controls the drone (via WiFi), the left one remotely controls the ground unit (via Bluetooth), and that nobody touches the drone or its remote control during the whole “tracking” phase.

Same idea… different approach

While browsing the web looking for potential examples, demonstrations and applications of the main idea behind our project, we stumbled upon some good videos. It appears that, so far, few people have managed to have the AR.Drone follow even one moving object on the ground, let alone an autonomous robot vehicle. We still did not find anyone using the UAV to help robots maintain a formation, which certainly does not mean that it has not been done or tried, though.

Among the rare examples found, the one closest to our project was done at the NASA Langley Research Center. It features a remote-controlled car moving on a flat surface and followed in real time by the very same AR.Drone we use.

 

Demonstration of the autonomous tracking of a remote-controlled ground vehicle at the NASA Langley Research Center.

 

This is one of the tasks we have to implement in order to fulfill our project. However, as you may see in the video above, the way they got it working efficiently differs from the idea we wish to develop. There is image analysis, but it is not carried out by the drone alone; the system actually relies on multiple surrounding, stationary cameras that perform motion tracking of both the drone and the vehicle. It works thanks to a Vicon motion-capture system, whose powerful software enables such a smooth result.

LARSS (Langley Aeronautics Research Summer Scholars) students worked on this project at the research center last year, helped by NASA scientists. According to a comment, they chose to use many cameras instead of the one embedded on the drone because the latter was not good enough, resolution-wise, to provide the required accuracy. This may be because they were working with an early prototype, though. It will in any case be a good challenge for us to try to achieve what they could not and to find ways around these problems. We have more time than they had, and we can benefit from an ever larger community of developers using this platform.

It is anyway great to see that researchers at such well-known institutions are interested in this kind of project. It also shows that there is still work to be done in this field.

 

Learn more about it:

The Parrot AR.Drone was on display at the last NASA Aviation Unleashed Conference, featuring the above-mentioned project, while they also did other experiments involving autonomous movement and obstacle avoidance with the drone. An interesting article about the Aviation Unleashed Conference and forecasts about autonomous aviation may also be found here.
