Before I talk about the results of the first official autonomous quadcopter test, I want to state the project goals. First, here’s a video showing what we’ve accomplished…
We hope to build an autonomous drone control scheme and create an environment in which the drone can autonomously follow a red line on the ground. We will process a downward-looking camera feed and send the appropriate control commands to the drone, all within the LabVIEW environment. Thanks to Mike Mogenson for his help developing the platform and his support on this project.
In addition, we would like the drone to stop and hover at intermediate points on the line, marked with blue tape, for a predefined amount of time before continuing on the path. We need to build in a safety function so that the drone lands automatically if it loses contact with the line. We also need the drone to turn around at the end of the line and come back. Finally, we hope the line-following drone can be flown from an external, wired power supply as opposed to a 12V LiPo. We will do all of our autonomous navigation testing on batteries and worry about external power once navigation is ready for prime time.
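The mission behaviors above (follow the red line, hover at a blue marker, land if the line is lost) suggest a small state machine. Here is a minimal sketch of how the transitions could work; the state names, inputs, and `next_state` function are our own invention for illustration, not part of the LabVIEW implementation:

```python
from enum import Enum

class State(Enum):
    FOLLOW = "follow"  # tracking the red line
    HOVER = "hover"    # holding position over a blue marker
    LAND = "land"      # safety: line lost, land automatically

def next_state(state, red_visible, blue_visible, hover_done):
    """One step of the mission logic (hypothetical helper)."""
    if not red_visible:
        return State.LAND          # safety function: lost the line
    if state is State.HOVER and not hover_done:
        return State.HOVER         # keep hovering for the predefined time
    if state is State.FOLLOW and blue_visible:
        return State.HOVER         # reached an intermediate blue marker
    return State.FOLLOW
```

A real version would also need to mark each blue marker as visited so the drone does not re-enter the hover state on the same piece of tape.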
Here are some of the interesting findings from Test Day. First, the control cluster (which is basically four controls: pitch, roll, elevation, and yaw) sends values between -1 and 1 to the drone, and those values are the important parameters. We send commands to the drone every 20 ms and found there was a sweet spot between a command doing nothing (0.03 or -0.03) and a command doing too much, sending the drone into oscillation (0.1 or -0.1). By trial and error, we were able to get reasonable line tracking this way. For the next test cycle, however, we will apply more of a proportional control to pitch and roll. For instance, when the drone is way off the line, it will roll harder than when it is slightly off.
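The planned proportional roll could be sketched like this: scale the command with the line offset, drop anything inside the dead zone, and clamp at the threshold where oscillation set in. The function name, the gain, and the normalized offset input are our assumptions; the 0.03 and 0.1 limits come from the test:

```python
def roll_command(offset, gain=0.2, dead_zone=0.03, max_cmd=0.1):
    """Map a normalized line offset (-1..1 across the camera frame)
    to a roll command in [-1, 1].

    Commands below ~0.03 did nothing and above ~0.1 made the drone
    oscillate, so the proportional output is kept inside that band.
    """
    cmd = gain * offset
    if abs(cmd) < dead_zone:
        return 0.0  # inside the dead zone: sending it would do nothing
    return max(-max_cmd, min(max_cmd, cmd))  # clamp to avoid oscillation
```

With these numbers, small offsets produce no command, mid-range offsets roll proportionally, and large offsets saturate at the 0.1 limit instead of overshooting.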
Another change we need to make has to do with the take-off process. Because our control loop begins immediately when the LabVIEW program starts, and the ‘take-off’ command comes later, we ran into the issue of sending erroneous control commands (usually yaw) while the drone was still in the process of taking off. This is already an unstable moment for the drone, and it led to many cases where the drone lost contact with the line immediately. Unfortunately, there is no way for the drone to scout out the line after it leaves the camera’s field of view (not for us, at least). The solution, hopefully, is to suspend commands until the drone reaches a given height. We also want the drone to fly a little higher than the typical 1m (where it automatically settles after take-off). If we can fly the drone at 2m, the camera’s field of view will be larger and it will be harder for the drone to lose the line.
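Suspending commands until the drone reaches altitude could be a simple gate in front of the control loop's output. A minimal sketch, assuming we can read the drone's altitude each cycle and that commands travel as a dict of the four controls (all names here are hypothetical):

```python
NEUTRAL = {"pitch": 0.0, "roll": 0.0, "elevation": 0.0, "yaw": 0.0}

def gate_commands(altitude_m, cmd, min_alt_m=2.0):
    """Pass navigation commands through only once the drone has
    climbed past min_alt_m after take-off; before that, send
    neutral commands so we don't disturb the unstable take-off."""
    if altitude_m < min_alt_m:
        return dict(NEUTRAL)  # still climbing: suppress yaw/pitch/roll
    return cmd
```

The 2 m threshold matches the higher cruising altitude we want for the larger field of view.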
Another recommendation has to do with the turn-around process at the end of the line. In this iteration, the drone was programmed to yaw 180° when it lost the red line at the top of the field of view. The idea was that, after yawing, the drone would reconnect with the line and return along it using the same control structure. Unfortunately, we found that the drone had too much inertia for this to happen. The only way we can really counter the inertia is to give the drone the opposite command and try to minimize the oscillation. An easier solution for the turn-around will be to simply have the drone register the end of the line and start to fly itself backwards along the line. This will not be difficult to implement.
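The backwards-flight idea amounts to flipping the sign of the pitch command when the end of the line is registered, instead of yawing at all. A rough sketch, with names of our own choosing:

```python
class LineFollower:
    """Tracks travel direction along the line: +1 outbound, -1 returning."""

    def __init__(self, base_pitch=0.05):
        self.direction = 1
        self.base_pitch = base_pitch

    def end_of_line(self):
        # Reverse pitch instead of yawing 180°, so the drone's
        # inertia never has to be fought through a turn.
        self.direction *= -1

    def pitch_command(self):
        return self.direction * self.base_pitch
```

A real implementation would need the end-of-line detection to fire exactly once per line end (e.g. debounced over several frames), or the direction would flip every control cycle while the line is out of view.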
To conclude, as the video attests, we made some strides on the control but still have a ways to go to really make this thing autonomous.