January 27, 2020

What is Robotic Autonomy?

by Jason Derenick, CTO

Autonomy is a defining characteristic of all robotic systems. It’s what makes robots so useful, enabling them to do the kinds of jobs that are too dull, dirty, dangerous, or difficult for humans.

But the term “autonomy” can mean many different things in a robotics context, encompassing systems that range in capability from the most sophisticated robots in the world to something as simple as your toaster. There isn’t always a direct correlation between how autonomous a robotic system seems and how autonomous it really is.

Consequently, many robotic systems that are called “autonomous” are, in reality, either constrained in what they can do on their own, or dependent on remote humans for help. In practice, “autonomy” usually means “autonomy, sometimes,” whether that’s having a human in the loop somewhere or limiting autonomy to just a few specific situations.

Since different use cases require different levels of autonomy, it’s important to understand which kinds of autonomy are possible (and practical) to apply to the task that you want a robot to do, as well as what the current state of the art in robotic autonomy actually is.

The Current State of Robotic Autonomy

Robotic autonomy can be somewhat counterintuitive in terms of what sorts of things it can and can’t handle. As Moravec’s paradox explains, robots can easily do some things that would be impossible for a human to do, while there are other tasks that humans find easy that are extraordinarily difficult for robots.

For example, while robots can assemble car interiors with a speed and precision that no human could ever match, it’s almost impossible for them to do things that would be trivial for most humans, like folding and sorting clothing.

Moravec's chart of All Thinks, Great and Small
Photo Credit: Hans Moravec

In general, robots lose out to humans when they have to deal with the unpredictable nature of most real-life environments. Any kind of unstructured environment, like a living room, a construction site, or a city street, represents the very edge of what most robots are capable of safely and reliably operating in today. 

Autonomous cars, which are perhaps the most recognizable category of autonomous robots at the moment, are no exception. They’re only able to manage as well as they do because most roads share critical features that an autonomous system can rely on, like being flat, generally clear of obstacles, and having consistent lane markings.

Things get more complicated when an autonomous car has to deal with cyclists, pedestrians, or anything else that doesn’t always behave predictably. That’s why autonomous cars are a good example of how simply calling a robotic system “autonomous” doesn’t effectively communicate what that system can and cannot do by itself.

For autonomous cars, the Society of Automotive Engineers (SAE) has defined six levels of driving automation, where Level 0 is the least amount of autonomy, covering basic features like automatic emergency braking. Tesla’s autonomy, which takes over some driving tasks for you on highways, is Level 2: the car will accelerate, brake, and steer, but the human in the driver’s seat is still in control. Waymo’s autonomy is Level 4, which means that its vans will drive themselves entirely, but only in specific places and under limited conditions. Level 5 autonomy would mean driving anywhere at any time, and we’re nowhere close to that level of competency (at least when it comes to cars).
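
To make the taxonomy concrete, here’s a minimal sketch of the six levels as a Python enum. The names and one-line summaries are paraphrases for illustration, not SAE’s official wording:

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """Paraphrased summary of the SAE J3016 levels -- see the
    standard itself for the authoritative definitions."""
    NO_AUTOMATION = 0           # human drives; features like AEB only intervene momentarily
    DRIVER_ASSISTANCE = 1       # steering OR speed support (e.g., adaptive cruise control)
    PARTIAL_AUTOMATION = 2      # steering AND speed support; human must supervise constantly
    CONDITIONAL_AUTOMATION = 3  # system drives, but the human must take over when asked
    HIGH_AUTOMATION = 4         # fully self-driving, but only in limited areas and conditions
    FULL_AUTOMATION = 5         # self-driving anywhere, anytime; not yet achieved

def requires_driver_supervision(level: SAELevel) -> bool:
    # Through Level 2, the human is still responsible for monitoring the road.
    return level <= SAELevel.PARTIAL_AUTOMATION
```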

SAE J3016 Levels of Driving Automation Graphic
Photo Credit: SAE

SAE’s well-defined levels of autonomy for vehicles make it easy to compare the capabilities of one system to another, even if both are called “autonomous.” But this is the exception, and for most other robotic systems, it’s not nearly as straightforward. 

Why Aerial Robotic Autonomy is Different

For drones, aerial autonomy is much more challenging than it is for cars. Not only is there no industry-wide definition of levels of autonomy for drones, but drones also can’t rely on any kind of consistent structure in the environments they operate in.

Drones fly in three dimensions rather than driving in just two, meaning they have to consider what’s above and below them, which is a much harder navigation problem. And while most vehicles with higher levels of autonomy can still rely on GPS and pre-existing maps to navigate, highly autonomous drones must detect and avoid obstacles and build maps as they go.
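
To see why the third dimension matters, consider a collision check against a voxel map. The sketch below is illustrative only (the grid, voxel size, and clearance values are assumptions, not Exyn’s implementation): where a 2D planner could check a single (x, y) cell, a drone also has to inspect the voxels above and below it.

```python
import numpy as np

# Hypothetical voxel grid: True = occupied, built online from onboard sensors.
VOXEL_SIZE = 0.25  # meters per voxel (illustrative value)

def world_to_index(position, origin):
    """Convert a world-frame (x, y, z) position to voxel-grid indices."""
    return tuple(((np.asarray(position) - origin) / VOXEL_SIZE).astype(int))

def is_free(grid: np.ndarray, position, origin, clearance_voxels: int = 2) -> bool:
    """Check the voxel at `position` plus its vertical neighbors.

    Unlike a 2D planner, we also inspect voxels above and below the
    query point, since a drone can collide with ceilings, wires, or
    overhangs. Horizontal bounds checks are omitted for brevity.
    """
    i, j, k = world_to_index(position, origin)
    lo = max(k - clearance_voxels, 0)
    hi = min(k + clearance_voxels + 1, grid.shape[2])
    return not grid[i, j, lo:hi].any()
```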

Exyn robot autonomously flying into underground mine stope
Exyn drone flying autonomously into an underground mine stope.

Adding to the complications are the severe restrictions that a flying platform places on resources like power and computing, which are essential for the dynamic navigation that drones need to fly autonomously. The more batteries and processors you add to improve performance, the shorter your drone’s flight time will be. This extends to sensors as well, and trying to compensate by increasing the overall size of the drone just makes it more difficult to transport and use.
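
A rough back-of-the-envelope model shows the diminishing returns. For a hovering multirotor, required power grows roughly with mass^1.5 (from momentum theory), so endurance doesn’t scale linearly with battery capacity. All the numbers below are made up for illustration:

```python
def hover_time_minutes(frame_kg: float, battery_kg: float,
                       energy_wh_per_kg: float = 150.0,
                       k: float = 100.0) -> float:
    """Estimated hover endurance: stored energy / hover power.

    k is a made-up constant folding in rotor size and efficiency;
    energy_wh_per_kg is a typical lithium battery energy density.
    """
    mass = frame_kg + battery_kg
    power_w = k * mass ** 1.5           # hover power grows superlinearly with mass
    energy_wh = energy_wh_per_kg * battery_kg
    return 60.0 * energy_wh / power_w

# Doubling the battery from 2 kg to 4 kg barely moves the needle:
for b in (0.5, 1.0, 2.0, 4.0):
    print(f"{b:.1f} kg battery -> {hover_time_minutes(1.5, b):.1f} min")
```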

Consumer vs Commercial Grade Autonomy

All of these constraints mean that consumer drones (and even some commercial drones) that are commonly described as autonomous are actually very limited in what they’re able to do by themselves. The vast majority are entirely reliant on GPS, don’t have any onboard obstacle avoidance, and depend on being able to execute flights in pre-scouted areas or high enough above the ground to make obstacles a non-issue. 

Even drones that are capable of obstacle avoidance can only handle relatively simple tasks autonomously. For example, one of the most popular consumer drones, the DJI Mavic 2, uses an omnidirectional camera array to help human pilots detect and avoid obstacles, and it allows a limited amount of obstacle avoidance when autonomously tracking a person in motion.

The Skydio 2 is a consumer drone that offers much more robust autonomy. Like the Mavic 2, it uses an omnidirectional camera array for obstacle detection, but it can identify and avoid most obstacles in most environments while autonomously flying at a useful speed, making its autonomy significantly more capable than the Mavic 2’s.

While the Skydio 2 likely represents the current state of the art in consumer drone autonomy, its reliance on cameras means that it only works when there’s enough light to see, and even then, low sun angles or scenes with both bright light and deep shadow can cause problems. Its capabilities are also limited to “follow-me” videos, which aren’t useful for typical commercial applications such as exploring and mapping unknown areas. As with most other autonomous robots, despite its impressive capabilities, the Skydio 2 is still restricted to being “autonomous, sometimes, for selfie videos.”

Next-Generation Commercial Aerial Autonomy

Exyn has developed true and full autonomy for drones. This means drones that can fly themselves from takeoff to landing in almost any environment without any human intervention at all. Our goal has been to handle almost any situation where a drone could be uniquely useful in gathering data, and it’s a goal that we’ve achieved on a commercial scale.

3D map identifying occupied and unoccupied space for robotic autonomy
How an Exyn robot "sees" occupied and unoccupied space for autonomous flight.

Exyn’s autonomy strategy is unique because its navigation is entirely infrastructure-free. Our robots don’t need GPS, they don’t need maps or any prior information about the environment they’ll be operating in, and once launched, they don’t even need to communicate with a base station. This level of autonomy means that the operator doesn’t need to be an expert (or even a pilot) to use our robots as an automated data collection tool for tasks such as automated site mapping.

LiDAR sensors enable both mapping and autonomy in our robots. An array of lasers rapidly generates a detailed and highly accurate three-dimensional map of the space that the robot is in, and the robot is smart enough to pilot itself to areas that it has yet to explore and then return to its launch point once the map is complete — you don’t even have to know the precise extent of the area you want to map. LiDAR is the reason that our drones can operate equally well indoors, outdoors, or even a kilometer underground in complete darkness. 
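
Exyn hasn’t published the details of its planner, but the classic building block for “pilot itself to areas it has yet to explore” behavior is frontier-based exploration: find the known-free cells that border unknown space and plan toward them. Here is a minimal sketch, assuming a simple voxel map with FREE/OCCUPIED/UNKNOWN labels; it is illustrative, not Exyn’s actual algorithm:

```python
import numpy as np

# Occupancy values in a minimal 3D map: the robot marks each voxel
# as FREE, OCCUPIED, or UNKNOWN as LiDAR returns come in.
FREE, OCCUPIED, UNKNOWN = 0, 1, 2

def find_frontiers(grid: np.ndarray) -> list:
    """Return voxels on the boundary between known-free and unknown space.

    This is the classic frontier-based exploration idea (Yamauchi, 1997),
    shown only as a sketch of the general technique.
    """
    frontiers = []
    for i, j, k in np.argwhere(grid == FREE):
        # A free voxel with at least one UNKNOWN neighbor is a frontier:
        # flying there will reveal new parts of the map.
        for di, dj, dk in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
            ni, nj, nk = i + di, j + dj, k + dk
            if (0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]
                    and 0 <= nk < grid.shape[2]
                    and grid[ni, nj, nk] == UNKNOWN):
                frontiers.append((i, j, k))
                break
    # When no frontiers remain, the map is complete and the robot
    # can return to its launch point.
    return frontiers
```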

The Future of Autonomous Robotics

With our comprehensive autonomy as a foundation, Exyn is actively working on making commercial drones smarter and more capable. We’re enhancing our system with semantic mapping capabilities, meaning that it will be able to understand some of the things it’s seeing as it builds maps in real time. This will enable the robot to answer questions like, “Where are all the people in the building?” or “Is there a doorway here that is closed?”
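
As a sketch of what such queries might look like, a semantic map can be modeled as labeled objects anchored in the metric map. The types and field names below are hypothetical, not Exyn’s API:

```python
from dataclasses import dataclass, field

@dataclass
class SemanticObject:
    label: str                    # e.g., "person", "door"
    position: tuple               # (x, y, z) in the map frame
    attributes: dict = field(default_factory=dict)  # e.g., {"state": "closed"}

def query(objects: list, label: str, **attrs) -> list:
    """Answer questions like 'where are all the people?' over the map."""
    return [o for o in objects
            if o.label == label
            and all(o.attributes.get(k) == v for k, v in attrs.items())]

# "Where are all the people in the building?"
#   people = query(semantic_map, "person")
# "Is there a closed door here?"
#   closed_doors = query(semantic_map, "door", state="closed")
```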

Trajectory flight path for autonomous robotic navigation
Trajectory paths generated autonomously by exynAI

Exyn focuses on robots you can trust to do the job that you want them to do safely, reliably, and efficiently. Our exceptional level of autonomy means that our drones can work in places where humans (and other drones) can’t, collecting the kind of data that simply cannot be acquired any other way. Exyn’s work represents the future of aerial robotics — systems that are versatile, adaptable, and fully autonomous in the real world, wherever and whenever they’re needed.
