
Intel Edison for Smart Drones
“Collision avoidance using Intel Edison” project
Authors: Pierre Collet, Paul Guermonprez @ Intel Paris
www.Intel-Software-Academic-Program.com
[email protected]
Mapping or measuring
Civilian drones are getting better at recording HD video and stabilizing their flight. You can even plan limited
automated flight routes from your tablet. But there's still a major problem: how do you stop small drones from
colliding with humans or buildings when they fly? Big drones have transponders and radars. They fly in regulated
spaces, far from obstacles. And they have pilots with remote cameras. But small civilian drones must stay close to
the ground, where obstacles abound. And pilots cannot always get the video feed, or they would like to stop
monitoring it. That's why we must develop collision avoidance systems.
Imagine you’re walking and want to avoid collisions. You have three choices:
1. Use only a simple short-distance detection system, like holding your arm in front of you when you walk in
the dark. Very simple, requires little energy to operate, and the information is very easy to analyze.
2. Use a sonar, like a bat. Efficient at shorter distances, requires a minimal amount of computational power to
analyze, and the result is available quickly. That's why you can count on it to avoid last-minute obstacles,
like a bat flying through branches. But it requires energy to emit a wave and get the results back. That's why
you can't detect long-distance obstacles.
3. Use your eyes. That's stereoscopic vision. It requires a lot of computational power (your brain) to
understand volumes from two images, or from two successive images. That's why there is a delay between
receiving the data and having the result available in your brain. But all you need is to receive light, no
need to emit anything. That's why you can see obstacles very far ahead.
For drones, that’s nearly the same thing.
1. You can use a single beam sonar. A simple drone will often have a vertical sonar to keep a stable height
when close to the ground. The sonar data is simple to understand: you get a measure of distance in
millimeters from the sensor (a small altitude-hold sketch follows right after this list).
Similar systems exist for big planes, but pointed horizontally, to detect a collision with something ahead. But
you can't fly with this information alone. Knowing whether or not something is right ahead is not enough.
It's like using your arm in the dark. Advanced drones have 6 horizontal sonars. It helps to avoid collisions
with objects on the sides, but that's all.
2. You can get an advanced sensor. We've seen that the sonar beam gives one piece of distance information,
like one pixel. With a horizontal radar, you get a full dimension of information: one distance for every
angle. Such radars can use radio waves or lasers. This kind of radar has existed since World War II, first on
the ground, then on planes. You can afford that kind of technology on big planes or helicopters, to detect
volumes kilometers ahead, but not on small drones.
3. An evolution of the 1D sensor is the 2D sensor. From this family of sensors you probably know the
Microsoft Kinect. At Intel, we have Intel® RealSense. What you get from this sensor is a 2D matrix of
distances. Each pixel is a distance. It looks like a black and white image, where white is close and black is
far. That's why the data is so easy for software developers to manipulate: if you see a big block of bright
pixels in one direction, you know there's an object there.
The range is limited, usually 2-3 meters for webcam-sized sensors and up to 5 m for larger sensors.
That's why they may be useful on drones to avoid obstacles at short distance (and low speed), but they
won't allow you to detect obstacles 100 m ahead. You can fly like a bat.
An example of this kind of drone was demonstrated at CES: https://www.youtube.com/watch?v=Gj5RNdUz3I
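Here is the altitude-hold sketch mentioned in point 1. It is purely hypothetical: readSonarMm() and setVerticalThrust() are invented placeholders for whatever your flight controller exposes, and the gain is an illustrative value, not something from our project. A real drone would need proper tuning and at least a PID controller.

#include <algorithm>

// Hypothetical placeholders for the flight controller interface.
extern int  readSonarMm();               // distance to ground, in millimeters
extern void setVerticalThrust(float t);  // -1.0 (descend) .. +1.0 (climb)

// Keep the drone at targetMm above the ground using the sonar alone.
void holdAltitude(int targetMm) {
    int error = targetMm - readSonarMm();        // positive means too low
    // Illustrative proportional gain; a bare P term is only a sketch.
    float thrust = std::clamp(error * 0.001f, -1.0f, 1.0f);
    setVerticalThrust(thrust);
}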
So, we've seen the single beam sonar is perfect for measuring distance to the ground at low height.
Sensors like RealSense are great for short distances. But what if you want to see and understand volumes
ahead? You need computer vision and artificial intelligence!
To detect volumes, you can detect differences between 2 images from 2 webcams placed close together
(stereoscopic vision), or you can use successive images taken during a movement.
Our Autonomous Drone project
In our project, we'll use an Intel-Creative Senz3D camera. It combines a 2D distance sensor (320x240 resolution), a
regular webcam, two microphones and an accelerometer. We'll use the distance sensor for short distances and the
webcam for long distances. http://us.creative.com/p/web-cameras/creative-senz3d (Note: we have a new sensor
called Intel RealSense, with 640x480 resolution, but we don't really need the added resolution for this project.)
In this specific white paper, we mainly focus on long-distance computer vision, not on the short-distance 2D depth
information. So you can run the same code with a cheap $5 Linux webcam instead of the full-featured Senz3D
sensor. But if you want both the computer vision and the depth information, the Senz3D is great.
We selected Intel Edison as our platform embedded on the drone. In the previous white paper we worked with a full
featured Android phone as the embedded platform, but Edison is smaller and cheaper.
Edison is not just a processor, it's a full featured Linux PC: dual core Intel Atom processor, storage, memory, wifi,
Bluetooth... and more. But no physical connectors. You select an extension board based on your I/O requirements
and plug the two together. In our case we want to connect the USB Senz3D, so we use the big extension board. But
there are tiny boards with only what you need, and you can easily create your own board. After all, it's just an
extension board, not a motherboard.
Installation
OS: We unbox Edison, upgrade the firmware and keep the Yocto Linux OS. Other Linux flavors with lots of
precompiled packages are available, but we just need to install a simple piece of software with few dependencies,
so Yocto won't be a problem in this case.
Software: We configure the wifi and access the board over ssh. We'll edit the source files and compile on the
board itself through ssh. We could also compile on a Linux PC and transfer the binaries. As long as you compile for
32-bit i686, it will run on Edison. All we need to do is install the gcc toolchain and our favorite source editor.
Camera: The first obstacle is to use the PerC camera on Linux. The camera was designed to be used on Windows.
Thankfully, the company that designed the original sensor maintains a driver for Linux: SoftKinetic DepthSense
325. http://www.softkinetic.com/products/depthsensecameras.aspx
It's a binary driver, but a version compiled for Intel processors is available. With Intel Edison, you can compile on
the board with gcc or the Intel compiler, but you can also get binaries compiled for Intel and deploy them without
modification. Intel compatibility is a big advantage in such situations. After solving a few dependency and linking
problems, the driver software is up and running. We are now able to get pictures from the cameras.
Sensor data: Depth information is very simple to get and analyze from the sensor. It's just a 320x240 matrix of
depth values. Depth is encoded as levels of grey: white is close, black is far. The visual part of the sensor
is a regular webcam. From the developer's point of view, the sensor exposes 2 webcams: one is black and white
and returns depth information, the other is in color for the regular visual information.
We'll use the visual information to detect obstacles from a distance. All you need is light, so there is no distance
limitation. The depth information will be used to detect and map obstacles very close to the drone, 2-3 meters
maximum.
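To give an idea of how simple that depth matrix is to use, here is a minimal sketch, assuming OpenCV is installed on the board and the depth frame has already been converted to an 8-bit cv::Mat where white (255) is close, as described above. The 200 grey-level cutoff and the 10% fill ratio are illustrative guesses, not values from our code.

#include <opencv2/opencv.hpp>

// Returns true if a big block of close (bright) pixels sits in the
// center of the 320x240 depth image, i.e. straight ahead.
bool obstacleAhead(const cv::Mat& depth8u) {
    // Look only at the central region: the direction of flight.
    cv::Rect ahead(depth8u.cols / 4, depth8u.rows / 4,
                   depth8u.cols / 2, depth8u.rows / 2);
    // Keep only the bright (close) pixels; 200 is an assumed cutoff.
    cv::Mat close;
    cv::threshold(depth8u(ahead), close, 200, 255, cv::THRESH_BINARY);
    // If enough of the region is close, something is in front of us.
    double ratio = cv::countNonZero(close) / (double)close.total();
    return ratio > 0.10;   // assumed 10% fill threshold
}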
Safety notice: You'll notice from the photos that we work in a lab, simulating the drone flight. The algorithm
proposed is far from being ready for production; that's why you should not try to fly with humans around you. First
because it's rightfully forbidden in several countries. But more importantly because it's truly dangerous! A lot of
drone videos you see online (outside or inside) are actually very dangerous. We also fly the drone outside, but with
a very different setup, to respect French law and prevent accidents.
The Code
So we have two sensors. One returns depth information. You don't need our help to process this data: white is
close, black is far. It's very precise (on the order of 1-3 millimeters) and it's very low latency. As explained before,
it's perfect at close range, but it doesn't reach very far.
The other sensor is a regular webcam. That's where we have a problem. How do you get volumetric information
from a webcam? In our case, the drone is moving along a rather straight path. So we can analyze 2 consecutive
images to detect differences.
We extract the important points of each image and try to match them between the 2 images; each match defines
a motion vector (a minimal code sketch follows below). Here's a result with the drone sitting on a chair and
moving slowly in the lab:
Small vectors are in green, long vectors are in red. It means the points moving quickly between 2 consecutive
images are in red, and the ones moving slowly are in green. Obviously, there are still a few errors, but that's good
enough.
It's like Star Trek. Remember when the ship goes faster than the speed of light? The close stars form long white
streaks. The stars far away form short vectors.
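Here is a minimal sketch of that matching step. It uses OpenCV feature tracking (goodFeaturesToTrack plus pyramidal Lucas-Kanade), which is one common way to implement it; we are not claiming this is the exact code of the project, and the point count, the 10-pixel green/red cutoff and the camera index are illustrative choices.

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

int main() {
    cv::VideoCapture cap(0);             // assumed webcam index
    cv::Mat frame, gray, prevGray;
    cap >> frame;
    if (frame.empty()) return 1;
    cv::cvtColor(frame, prevGray, cv::COLOR_BGR2GRAY);

    for (;;) {
        cap >> frame;
        if (frame.empty()) break;
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        // 1. Find the important points in the previous image.
        std::vector<cv::Point2f> prevPts, pts;
        cv::goodFeaturesToTrack(prevGray, prevPts, 200, 0.01, 10);

        if (!prevPts.empty()) {
            // 2. Match them in the current image (Lucas-Kanade tracking).
            std::vector<unsigned char> status;
            std::vector<float> err;
            cv::calcOpticalFlowPyrLK(prevGray, gray, prevPts, pts, status, err);

            // 3. Each matched pair is a motion vector: short ones in
            //    green, long ones in red, like in the photos.
            for (size_t i = 0; i < pts.size(); ++i) {
                if (!status[i]) continue;
                float dx = pts[i].x - prevPts[i].x;
                float dy = pts[i].y - prevPts[i].y;
                float len = std::sqrt(dx * dx + dy * dy);
                cv::Scalar color = (len < 10.0f) ? cv::Scalar(0, 255, 0)
                                                 : cv::Scalar(0, 0, 255);
                cv::line(frame, prevPts[i], pts[i], color, 2);
            }
        }
        // Visualization for a desktop PC; skip this on the headless Edison.
        cv::imshow("vectors", frame);
        if (cv::waitKey(1) == 27) break;  // Esc to quit
        std::swap(prevGray, gray);
    }
    return 0;
}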
Then we filter the vectors. If the vectors, big or small, are on the side: no risk of collision. In the test photo,
we detect the 2 black suitcases on the side, but we are flying straight between the masses. No risk.
If the vector is in front of you, it means you are on a collision path. A short vector means you have time. A long
vector means you don't.
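The filtering rule can be written as a single predicate per vector. In this hedged sketch, the "in front" margin and the "long vector" threshold are assumptions for illustration, not tuned values from our drone:

#include <opencv2/core.hpp>
#include <cmath>

// True if this motion vector signals a collision risk: it starts near
// the image center (our flight direction) AND it is long (approaching
// fast). Vectors on the side are ignored, big or small.
bool collisionRisk(const cv::Point2f& from, const cv::Point2f& to,
                   const cv::Size& frameSize) {
    cv::Point2f center(frameSize.width / 2.0f, frameSize.height / 2.0f);
    float offCenter = std::hypot(from.x - center.x, from.y - center.y);
    bool inFront    = offCenter < frameSize.width * 0.2f;  // assumed margin

    float length    = std::hypot(to.x - from.x, to.y - from.y);
    bool longVector = length > 15.0f;                      // assumed threshold

    return inFront && longVector;
}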
Example: in the previous photo, the suitcases were close but on the side, posing no risk. The objects in the
back were in front but far, posing no risk.
In this second shot, the objects are right in front of us with big vectors: there's a risk.
Available from: http://intel-software-academic-program.com/courses/diy/Intel_Academic_-_DIY__Drone/IntelAcademic_IoT_White_Paper__Intel_Edison_for_smart_drones.zip
Results
With this hardware setup and code, we demonstrated 4 points:
• Intel Edison is powerful enough to handle data from complex 2D sensors and USB webcams. You can even
perform computer vision on the board itself, and it's easy to work with since it's a Linux PC.
Yes, you could get better performance per watt with a dedicated image processing unit, but it would take
weeks or months to develop even simple software. Prototyping with Intel Edison is easy.
• Analyzing data from a single webcam can give you basic volume detection, roughly 10-20 times per
second on Intel Edison, without optimization. That's fast enough to detect volumes far ahead and adapt
the trajectory.
• The same hardware and software setup can also handle a 2D volume sensor, giving super low latency and
precise 3D mapping. Intel Edison and the Senz3D together can solve both your low speed-low distance and
high speed-long distance collision avoidance problems.
• The proposed setup is cheap, light and does not require a lot of power to operate. It is a practical
solution for small consumer drones and professional drones.
What’s next?
The drone project has three parts. The first part showed how to interact with the servo motors and the drone
stabilization card. We saw how easy it is to tell the drone to go up, left, right and the rest of the 8 possible
directions. That part focused on the interaction between real time processing (drone stabilization) and heavy
processing (artificial intelligence). At the time Edison was not available yet, so we used an Intel Android phone as
the embedded computer, but the concept is exactly the same.
With this second article, we see how to develop the beginning of an artificial intelligence: getting lots of data from
sensors and extracting valuable information. We could use a single laser beam and avoid collisions, but it could not
evolve to do more than that, and it would be too easy. That's why we want to use both the visual and the depth
matrix information. The potential is huge.
With the two articles combined, you can easily give your drone instructions, and get complex sensor data analyzed
locally. You have the full link between the three subcomponents: real time flight stabilization, volume detection
and artificial intelligence. The last part is the fun one: decide what to do with your smart drone and code the
remaining artificial intelligence.
References and Resources
[1] Intel Software Academic Program: http://intel-software-academic-program.com/pages/courses#drones
[2] Intel RealSense http://www.intel.com/content/www/us/en/architecture-and-technology/realsense-overview.html
[3] SoftKinetic DS325 http://www.softkinetic.com/products/depthsensecameras.aspx
Notices
INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED,
BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS
PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER
AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING
LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY
PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT.
UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY
APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR
DEATH MAY OCCUR.
Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the
absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future
definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The
information here is subject to change without notice. Do not finalize a design with this information.
The products described in this document may contain design defects or errors known as errata which may cause the product to
deviate from published specifications. Current characterized errata are available on request.
Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.
Copies of documents that have an order number and are referenced in this document or other Intel literature may be obtained
by calling 1-800-548-4725 or going to: http://www.intel.com/design/literature.htm
Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors.
Performance tests, such as SYSmark* and MobileMark*, are measured using specific computer systems, components, software,
operations, and functions. Any change to any of those factors may cause the results to vary. You should consult other
information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of
that product when combined with other products.
Any software source code reprinted in this document is furnished under a software license and may only be used or copied in
accordance with the terms of that license.
Intel, the Intel logo, and Atom are trademarks of Intel Corporation in the U.S. and/or other countries.
Copyright ©2014 Intel Corporation. All rights reserved.
*Other names and brands may be claimed as the property of others.