For the past decade, autonomous vehicles (AVs) have been revolutionizing industries around the world, from daily transportation to large-scale manufacturing facilities. Each week, AVs drop more than 100,000 passengers at their destinations in cities such as San Francisco and Phoenix, which have been early adopters of autonomous taxi service.
AVs are also being used in manufacturing and warehouse facilities to reduce costs, improve safety, and increase productivity. However, many of the most sophisticated AVs still need training on some kind of map before they can perform safely at scale.
Part of that training involves feeding highly accurate 3D data into machine learning algorithms so the vehicle can better navigate a specific environment. This is where Nexys is most helpful: quickly and accurately capturing high-resolution 3D models that can be used to train semi-autonomous robotic platforms.
And if you're interested in moving into fully autonomous robotics, Nexys' modular capabilities mean it can power any integrated aerial- or ground-based robot with Level 4B Autonomy, meaning the robotic platform can intelligently explore an area of interest without input from a pilot or existing GPS while simultaneously identifying and overcoming dust as a perceived obstacle. ExynAI, the LiDAR-based SLAM technology stack that powers Nexys, is also available in beta as an API for further integration with any robotic platform.
AVs are expected to be able to navigate the world around them, even if unforeseen obstacles appear in their path. However, for that process to work, certain vehicles need verified external data to use as a baseline. That baseline is used to help the robot establish a ground truth that can match the actual environment relative to what the autonomous vehicle is detecting through its sensors.
This is how many of the autonomous robots appearing around us each day navigate their environments so cleverly. Take, for example, a robot inside a small restaurant that's programmed to deliver food to certain tables. It might look highly autonomous as it dodges people to reach a table, but look at the code and you'll find a robot navigating a stored map to a pre-determined table checkpoint. That doesn't make the delivery robot's feat any less impressive, but it doesn't reflect the level of autonomy possible on other platforms.
And as more autonomous systems come online we will need feature-rich 3D models to train them and help power them before higher levels of autonomy are unlocked. A modular platform like Nexys can be used in a variety of locations to capture these accurate maps which can all be georeferenced with existing coordinate frames for global accuracy. This will be crucial in creating any interconnected smart city or autonomous driving infrastructure.
There are countless different ways to capture data to be used in creating a map for an autonomous robot. Using Nexys, we'll be capturing a feature-rich 3D model that can be exported into a standard PLY, LAS, or XYZ file to be used in any downstream machine learning pipeline or training software.
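As a rough sketch of what that hand-off can look like: an XYZ export is simply plain-text coordinates, one point per line, so a few lines of Python are enough to pull a scan into a training pipeline (the file name here is hypothetical, and a small synthetic file stands in for a real export; binary LAS or PLY files would need a reader library):

```python
import numpy as np

# Synthesize a tiny XYZ file so the sketch is self-contained.
# A real export from a scan would have millions of such lines.
with open("scan.xyz", "w") as f:
    f.write("0.0 0.0 0.0\n1.0 0.0 0.0\n0.0 1.0 0.0\n0.0 0.0 1.0\n")

# An XYZ file is just whitespace-separated coordinates, so numpy
# loads it directly into an (N, 3) array ready for downstream ML.
points = np.loadtxt("scan.xyz")
print(points.shape)  # (4, 3)

# Typical preprocessing before feeding a model: center the cloud
# and scale it into the unit cube.
centered = points - points.mean(axis=0)
normalized = centered / np.abs(centered).max()
```

From here the array can be batched into whatever training framework the downstream pipeline uses.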
The first step is data collection, where Nexys does the heavy lifting. Using two cameras and LiDAR-based SLAM technology, Nexys accurately captures roadways, structures, and other features of any area where the robotic platform would need to operate safely. Like a terrestrial laser scanner (TLS), the final 3D model can be georeferenced with any pre-surveyed ground control points, but unlike a TLS you can quickly and continually move throughout your environment to capture every bit of detail.
Due to the flexibility and modularity of Nexys, the same unit can be configured for hand-held scanning, mounted to a vehicle, attached to a drone, or attached to a backpack. This allows AV trainers to capture data in a way that fits the environment best, all with the same single platform.
The 3D data captured by Nexys is then processed and formatted so it can be used by other software and the autonomous vehicle’s internal machine-learning algorithms. With Nexys, all captured 3D data is processed directly through a ruggedized tablet that ships with each platform, without the need to upload any data to cloud services. This speeds up the capture-to-data-analysis timeline and also ensures your team can verify the training data before leaving a site.
The post-processing software suite available through the ExynView software gives users the ability to subsample point clouds, clean any ghosting or moving objects through a non-static point filter, colorize a point cloud using RGB information captured via onboard cameras, or georeference a newly captured point cloud into an existing coordinate frame through pre-surveyed ground control points or via GNSS/RTK.
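As an illustration of one of those steps: subsampling a point cloud is commonly done with a voxel grid, keeping one representative point per fixed-size 3D cell. ExynView's own tooling is proprietary, but the general technique can be sketched in plain NumPy:

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Replace the points in each voxel_size^3 cell with their centroid."""
    # Integer cell index for every point.
    cells = np.floor(points / voxel_size).astype(np.int64)
    # Group points by cell, then average each group.
    uniq, inverse = np.unique(cells, axis=0, return_inverse=True)
    sums = np.zeros((len(uniq), 3))
    counts = np.zeros(len(uniq))
    np.add.at(sums, inverse, points)   # unbuffered accumulation per cell
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

# Dense synthetic cloud: 1000 random points inside a 2 m cube.
rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 2.0, size=(1000, 3))
thinned = voxel_downsample(cloud, voxel_size=0.5)

# A 2 m cube at 0.5 m resolution has at most 4*4*4 = 64 occupied cells.
print(len(thinned) <= 64)  # True
```

The same grouping idea extends to the other listed operations, e.g. a non-static filter discards cells whose occupancy changes between passes.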
With a 3D model captured and properly formatted, it can be loaded into any training model used for robotic exploration and matched closely against a ground truth. That map can help robots navigate an environment in real time, or feed a machine learning model that helps future systems predict better outcomes. Prior mapping with Nexys is most useful for smaller robots operating inside offices and buildings.
For larger autonomous systems like cars, more sophisticated maps and sensor arrays will need to work in unison. Existing GPS and wireless internet networks will let us take big strides in autonomous vehicles, but for them to be truly safe enough for mass adoption they will need robust, reliable edge computing like what Nexys' LiDAR-based SLAM autonomy provides.
Because maps useful for training today's autonomous robots can be captured so quickly and easily with Nexys, they can also be captured more often, giving autonomous vehicles a better understanding of how roadways change and evolve. We could eventually build a constantly evolving map, updated each time an autonomous platform begins a mapping run. The possibilities for robotic autonomy expand exponentially once you remove the confines of existing mapping infrastructure.
At Exyn, the philosophy at the core of ExynAI has always been a focused blend of accuracy, flexibility, and modularity. With that balance in place, the aim has been to offer ExynAI's proven autonomy as a SaaS option for customers beyond the Nexys mapping and autonomy ecosystem.
We understand that not every AV application has the budget, time, or resources to train from the ground up, so we're aiming to make ExynAI available as either a software module or through an API that can expand the capabilities of nearly any robotic platform. Now, your advanced robotic workflows can immediately gain access to our proven autonomous technology that has been used in thousands of locations around the world. You can get in touch with our team to learn more.
This means for many AV applications, you can skip the processing pipeline of capturing training data and go directly to autonomous exploration at scale. Our Nexys hardware has always strived to be the most flexible and modular 3D mapping product on the market. Today, we're offering our ExynAI technology stack with the same modularity and flexibility, opening up even more integration possibilities for the next generation of AVs.
AVs are advancing at an incredible rate, and teams need to adapt to changing demands and training methods.
With Nexys, you have the freedom to capture external training data whenever you need it, streamlining your processing pipeline. You can also be among the first to beta-test our proven ExynAI autonomy and mapping technology stack directly in your robotic workflows for immediate scalability and reliability.
Contact us today for a personalized demo of our Nexys hardware or ExynAI autonomy software to learn more about how Nexys is helping to train the next generation of autonomous vehicles.