September Update: Announcing NavAbility Mapper and New Features

What do you get when you cross a world-class robotics conference (ICRA2022) with a localization and mapping startup? A brand-new product!

ICRA2022 was a game-changer for NavAbility, one that took almost three months for us to digest. We want to thank the participants for their overwhelming support at our tutorial as well as their invaluable feedback on our product direction. 

We’ve listened – from construction leaders through to agricultural automators – and we have woven your needs into a new product that we will start releasing over the coming months. 

Announcing NavAbility Mapper

A key takeaway from the conference is that robotic mapping is an ongoing challenge, one that our software is uniquely positioned to solve. So we’re taking our toolset and designing ways to make your mapping problems simpler, faster, and easier to address. 

NavAbility Mapper is a cloud-based SLAM solver that allows you to build, refine, and update enterprise-scale maps for your automation. We’re excited about building living, breathing maps of your environment that give your robots trustworthy navigation in dynamic spaces. 

At the moment we’re focusing on providing examples in construction automation, warehouse automation, and marine applications. We’re also looking for users in the Agriculture 4.0 space who want to build the next generation of agricultural robotics.

More information on NavAbility Mapper can be found on our Products page. Follow us on LinkedIn to keep up to date as we release Mapper features!

Mapping a Construction Site with NavAbility Mapper

We’re going to use real-world data to demonstrate how we’re making it easier to build, manage, and update large-scale maps for autonomy. We’ll start with an open construction-site dataset from the Hilti SLAM Challenge, which can be found at HILTI SLAM CHALLENGE 2022.

In upcoming posts we will be documenting how NavAbility Mapper solves key challenges in producing and maintaining construction site maps.

In the meantime, here’s a sneak preview of the raw data and the maps we are producing (visualized using the new Foxglove integration):

Raw camera and LIDAR data available from the Hilti Challenge dataset
Preliminary 3D map from the NavAbility Mapper SLAM solution (using the new Foxglove integration to visualize it)

New Features in NavAbility Cloud

In addition, we’re adding features that you asked for, including new visualizations and new ways to build and solve your mapping challenges. We’ll dive in and highlight a few of these in later posts.

If you have any questions about these features please feel free to contact us on Slack!

Multiple Solvers

A frequent request was for the NavAbility cloud to produce SLAM solutions using traditional Gaussian solving in addition to the multimodal solutions. This is commonly called parametric, or unimodal, SLAM.

You can now produce both parametric and multimodal solutions by specifying the type when requesting a SLAM solve. This is available in v0.4.6 of the Python SDK.     
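To see why the distinction matters, here is a plain-Python sketch of the difference between a parametric (single-Gaussian) and a multimodal summary of a robot's belief. This is a conceptual illustration only, not the NavAbility SDK API; the two-corridor scenario and sample values are invented for the example:

```python
import statistics

# A robot's belief about its x-position after an ambiguous measurement:
# it could be in one of two corridors, near x=1 or near x=9.
# Represent the belief as samples drawn from that bimodal distribution.
samples = [1.0, 1.1, 0.9, 1.05, 9.0, 9.1, 8.9, 9.05]

# Parametric (unimodal / Gaussian) summary: a single mean and deviation.
mean = statistics.mean(samples)    # ~5.0 -- between the corridors,
stdev = statistics.stdev(samples)  # a place the robot almost surely isn't.

# Multimodal summary: keep one Gaussian per mode of the belief.
low_mode = [s for s in samples if s < 5.0]
high_mode = [s for s in samples if s >= 5.0]
modes = [
    (statistics.mean(low_mode), statistics.stdev(low_mode)),    # near x=1
    (statistics.mean(high_mode), statistics.stdev(high_mode)),  # near x=9
]
# The multimodal form preserves the ambiguity (both corridors) until
# later measurements can resolve it; the Gaussian form cannot.
```

Parametric solves are typically faster and are the right choice when the problem is genuinely unimodal, which is why offering both from the same graph is useful.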

Visualization with Foxglove

Do you want to see your data in all its 3D glory? In addition to the topological graphs and the 2D spatial graphs in the NavAbility App, we’re integrating Foxglove into the App so you can use its tools to examine your results.

We’ll write a post on how to do this in the coming weeks. 

Big Data in NavAbility Cloud

The marine surface vehicle example highlighted the need to allow users to tightly link their big data (e.g. camera images, radar data, and LIDAR scans) to the factor graph for efficient querying.

We now have endpoints to upload, link, and download big data related to variables in your graph. This allows you to upload all the raw sensor data, solve graphs, and later query the data efficiently for additional processing. This is currently available via the NavAbility App and in v0.4.7 of the Julia SDK. Let us know if you want us to prioritize adding it to the Python SDK.
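The idea of indexing raw sensor blobs by factor-graph variables can be sketched with a tiny in-memory example. This is purely illustrative (the variable labels, blob names, and `blobs_near` helper are invented for the sketch, not NavAbility endpoints), but it shows how linking data to pose variables enables queries like "find all images around a specific location":

```python
import math

# Each factor-graph variable (a pose) carries a solved position and
# links to the raw sensor blobs recorded at that pose.
graph = {
    "x0": {"position": (0.0, 0.0), "blobs": ["cam_000.png"]},
    "x1": {"position": (5.0, 0.0), "blobs": ["cam_001.png", "lidar_001.pcd"]},
    "x2": {"position": (5.0, 5.0), "blobs": ["cam_002.png"]},
}

def blobs_near(graph, point, radius):
    """Return all blob labels linked to variables within `radius` of `point`."""
    px, py = point
    found = []
    for var in graph.values():
        vx, vy = var["position"]
        if math.hypot(vx - px, vy - py) <= radius:
            found.extend(var["blobs"])
    return found

# "Find all data around a specific location":
print(blobs_near(graph, (5.0, 1.0), radius=2.0))
# → ['cam_001.png', 'lidar_001.pcd']
```

Because the blobs are keyed off graph variables rather than stored as a flat archive, every solve that refines the poses also refines the spatial index over the raw data.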

NavAbility App demonstrating the big data available across a user’s robots. This data is indexed by the factor graph and can be efficiently queried to, say, find all images around a specific location.