Dev Update: Preview of NavAbility Mapper, a 3D Construction Site Example

NavAbility Mapper is making it easier to build, manage, and update large-scale maps for autonomy. This post briefly showcases building a 3D map from an open construction-site dataset.

A key feature of NavAbility Mapper is that it strongly decouples the map building process so you can focus on the task at hand. For example, with NavAbility Mapper your team can efficiently split up the following map-building tasks:

    • Importing data into the cloud (e.g. ROS bags)
    • Inspecting and selecting data of importance
    • Map tuning using both parametric and non-Gaussian SLAM solvers
    • Resolving conflicts in areas with contradictory data
    • Manually refining results in ambiguous areas where automation needs a human touch
    • Exporting the map to your robots in various formats

In this post we’ll give a preview of a concrete case study we’re building in the construction space.

Sneak Preview: Mapping a Construction Site

Side-by-side view of the 3D world map (left) and first-person camera data (right). Notice how information is aggregated and improved in the 3D world map as more data is added.

Challenges in Mapping

Building and maintaining 3D spatial maps from fragmented sensor data in a construction environment presents a number of challenges, for example:

    • There is no single sensor that solves all problems, and the key to robust mapping is flexibly solving data fusion problems from multiple (heterogeneous) sensors.
    • Measurement data itself is not perfect, and each source has unique errors or ambiguities that are hard to model, predict, or reduce to a Gaussian-only error model. This is especially true when incorporating prior information (e.g. CAD models) or resolving contradictions in dynamic environments.
    • Verifying map accuracy in a dynamic environment (i.e. construction) is a delicate balance between automation and user input, and requires continuous validation as new data is added.
      • We jokingly say you’re doing your job correctly in construction only if the map keeps changing.
    • Maps need to be shared – between automation, human stakeholders, and ideally CAD/BIM software – and this requires a rich representation of maps, not just a networked filesystem.
    • Leveraging data collection from mobile equipment (possibly hand-held) provides more opportunities for collaborative robotic systems, but requires significantly more advanced data processing capabilities.

Diving into Mapper

We’re resolving these issues with NavAbility Mapper by building it to be sensor-flexible and suitable for enterprise use.

Flexible Sensor Types

First, let’s take a quick look at some of the sensor-specific aspects of NavAbility Mapper. No single sensing modality can do it all, so Mapper is designed from the ground up to combine disparate sensor types (“apples and oranges”) into a common “apples and apples” joint inference framework, for example:

    • LIDAR produces semi-dense point clouds, but its cost and size means it is not always available
    • Inertial sensors provide self-contained estimates, but they require complex calibration and tricky data processing considerations
    • Camera imagery is ubiquitous, but it also requires careful calibration and must contend with lighting variations and scene obstructions

In short, no sensor gives you a complete solution. We believe how you merge the sensor data is what makes (or breaks) a solution. NavAbility Mapper is designed to be flexible, incorporating a range of different sensor types out of the box, with the ability to extend it as needed.

In this post, we’ll look at the three sensor types available in the construction dataset.

3D LIDAR Scans

LIDAR scans are a popular sensor type for mapping and localization. One of the key operations is calibrating and aligning point clouds, also known as the registration problem. An example of a LIDAR alignment problem is shown in Figure 1 below.

A key feature of NavAbility Mapper is that it employs multiple methods to align point clouds. We integrate Gaussian techniques, non-Gaussian techniques, and supervisory human intervention to enable an efficient mapping process. Ideally, everything aligns automatically, but in cases where it doesn’t (the critical cases!) we use novel solvers and judicious human intervention to ensure robust autonomy.

Even in cases of high ambiguity (when the going gets really tough!), the non-Gaussian alignment correlations are used directly as measurements in a factor graph model for further joint processing with other sensor data.
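
To make the registration step concrete, here is a minimal alignment sketch using the open-source Open3D library. This is purely illustrative and not the NavAbility solver itself: the file names, distance threshold, and identity initialization are assumptions for the example.

```python
# Minimal point-cloud registration sketch using Open3D (illustrative only;
# NavAbility Mapper's own pipeline adds non-Gaussian and human-in-the-loop steps).
import numpy as np
import open3d as o3d

# Placeholder file names standing in for two LIDAR keyframes.
source = o3d.io.read_point_cloud("scan_keyframe_a.pcd")
target = o3d.io.read_point_cloud("scan_keyframe_b.pcd")

# Coarse initial guess (identity here); in practice odometry or an IMU provides it.
init = np.eye(4)

# Classic point-to-point ICP with a 0.5 m correspondence threshold.
result = o3d.pipelines.registration.registration_icp(
    source, target, 0.5, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

print("Estimated relative transform:\n", result.transformation)
print("Fitness / inlier RMSE:", result.fitness, result.inlier_rmse)
```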

Figure 1: Two point clouds from the Construction dataset before alignment.

IMU Data

Inertial measurement units – gyros and accelerometers – are not always available, but when present they provide a valuable data input for fully autonomous data processing. The figure below shows a short segment of gyro rate measurements between keyframes in the Construction dataset. This data clearly shows a mobile sensor platform rotating aggressively on three axes while collecting data!

NavAbility Mapper fuses this data with other sensors into a unified mapping solution.
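
As a rough illustration of what such gyro data can provide, the following sketch integrates synthetic three-axis rotation rates between two keyframes into a single relative rotation using SciPy. The sample rate and rate profiles are assumptions for illustration; this is not the preintegration scheme used inside Mapper.

```python
# Sketch: integrating gyro rate measurements between two keyframes into one
# relative rotation (synthetic data; illustrative only).
import numpy as np
from scipy.spatial.transform import Rotation

dt = 0.005                        # assumed 200 Hz gyro sample interval
t = np.arange(0.0, 1.0, dt)       # one second between keyframes
# Synthetic, aggressive three-axis rotation rates [rad/s], loosely like Figure 2.
omega = np.stack([2.0 * np.sin(5 * t),
                  1.5 * np.cos(3 * t),
                  1.0 * np.sin(7 * t)], axis=1)

# Accumulate small rotations: R_{k+1} = R_k * exp(omega_k * dt)
R = Rotation.identity()
for w in omega:
    R = R * Rotation.from_rotvec(w * dt)

print("Relative rotation between keyframes (deg, xyz Euler):",
      R.as_euler("xyz", degrees=True))
```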

Figure 2: A short three-axis rotation rate data segment, as measured by gyroscopes firmly mounted to the measurement platform.

Camera Data

Camera imagery is another popular (and ubiquitous) data source useful for mapping and localization. While camera data is easy to capture, numerous challenges in terms of lighting, obstruction, and dynamic scenery complicate its use.

In combination with other sensors, camera data is a valuable input for mapping and localization. We incorporate camera data into the factor graph, from which it can be extracted and used to improve the mapping result.

Stereo or structured-light cameras provide reasonable depth estimates through computer vision processing. In general, camera data processing can be done via brute-force matching, sparse feature extraction, or semi-dense methods; a small feature-matching sketch is shown below.
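
As one example of the sparse-feature route, the sketch below extracts and matches ORB features between two frames with OpenCV. The image paths and parameter values are placeholders, and Mapper’s own camera front end may differ.

```python
# Sketch: sparse feature extraction and matching with OpenCV ORB
# (illustrative only; not necessarily Mapper's front-end processing).
import cv2

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder paths
img2 = cv2.imread("frame_010.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching with cross-check for simple outlier rejection.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} putative correspondences between the two frames")
```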

More to follow on camera data!

Figure 3: Selection of camera angles captured under motion during data collection.

NavAbility Mapper for Enterprise Use

Multisensor Calibration

Naturally, the combination of multiple sensors requires calibration of each sensor individually (a.k.a. intrinsics) as well as of the inter-sensor transforms (a.k.a. extrinsics). Often, these calibration parameters are computed through optimization routines not unlike the underlying localization and mapping problem itself (i.e. simultaneous localization and mapping, or SLAM).

A feature of NavAbility Mapper is that calibration is treated similarly to localization and mapping, solving both problems at the same time.
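
To illustrate the idea of solving calibration and localization together, here is a toy least-squares sketch that jointly estimates platform positions and a single extrinsic offset from synthetic odometry and range measurements. It uses SciPy rather than the Mapper factor-graph solver, and every number in it is made up for the example.

```python
# Toy sketch: jointly estimating an extrinsic offset and platform positions
# with scipy.optimize.least_squares (synthetic data; illustrative only).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
true_b = 0.30                                   # true sensor offset on the platform [m]
true_x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])    # true platform positions [m]
landmark = 10.0                                 # known landmark position [m]

odom = np.diff(true_x) + rng.normal(0, 0.02, size=4)            # relative motion
ranges = landmark - (true_x + true_b) + rng.normal(0, 0.05, 5)  # sensor-to-landmark

def residuals(theta):
    x, b = theta[:5], theta[5]
    r_prior = [x[0] - 0.0]                           # anchor the first pose
    r_odom = (np.diff(x) - odom).tolist()            # odometry factors
    r_range = (landmark - (x + b) - ranges).tolist() # range factors involve the offset
    return np.array(r_prior + r_odom + r_range)

sol = least_squares(residuals, x0=np.zeros(6))
print("Estimated positions:", np.round(sol.x[:5], 2))
print("Estimated extrinsic offset b:", round(sol.x[5], 3))
```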

Gaussian and Non-Gaussian Algorithms

Robust mapping requires more than traditional parametric (Gaussian-only) processing. NavAbility develops both non-Gaussian and parametric algorithms that operate at both the measurement level and the joint factor-graph inference level for more robust computations. While non-Gaussian techniques are more computationally intensive, the higher robustness can dramatically improve overall mapping timelines.

NavAbility Mapper combines both techniques at the heart of the software (the factor graph) to ensure your map is always stable and reliable in enterprise applications.
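
A tiny numerical illustration of why this matters (synthetic numbers, not a NavAbility algorithm): when a measurement is ambiguous and its belief has two modes, the best single-Gaussian summary places its mean in a region the data never supports.

```python
# Illustration: a bimodal (non-Gaussian) belief vs. its best single-Gaussian summary
# (synthetic example only).
import numpy as np

rng = np.random.default_rng(1)
# An ambiguous measurement: the sensor could be ~2 m or ~8 m from the wall.
samples = np.concatenate([rng.normal(2.0, 0.2, 5000),
                          rng.normal(8.0, 0.2, 5000)])

mu, sigma = samples.mean(), samples.std()
print(f"Gaussian summary: mean={mu:.2f} m, std={sigma:.2f} m")
# The Gaussian mean (~5 m) falls between the two modes, a location the data never
# supports -- exactly the failure mode non-Gaussian inference avoids.
```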

Multi-Stakeholder Access to Maps and Privacy

Collecting, ingesting, organizing and then producing maps is only part of the overall mapping problem.  The goal is to produce a digital twin representation of ongoing operations, one that can be used for everything from automation to progress reports. 

In construction, the map is inherently dynamic, must be constantly updated, and must be available to a variety of stakeholders and end-users. NavAbility understands these stakeholders may be human or robotic, and we strongly believe in defining a common reference for human+machine collaboration through a shared understanding of the same spatial environment.

NavAbility maps are:

    • Built and persisted in the cloud for easy access
    • Optimized and indexed for efficient access whether by human or machine
    • Secured by state-of-the-art cloud security and user authorization to ensure your data is kept private

Figure 4: Screen capture of a 3D point cloud map from the NavAbility Mapper SLAM solution.

More details will follow in future posts, and we invite readers to reach out to NavAbility with questions or interest. Follow us on LinkedIn to keep up to date with new articles on how Mapper can empower customer and end-user products, services, and solutions.

September Update: Announcing NavAbility Mapper and New Features

What do you get when you cross a world-class robotics conference (ICRA2022) with a localization and mapping startup? A brand-new product!

ICRA2022 was a game-changer for NavAbility, one that took almost three months for us to digest. We want to thank the participants for their overwhelming support at our tutorial as well as their invaluable feedback on our product direction. 

We’ve listened – from construction leaders through to agricultural automators – and we have woven your needs into a new product that we will start releasing over the coming months.

Announcing NavAbility Mapper

A key takeaway from the conference is that robotic mapping is an ongoing challenge, one that our software is uniquely positioned to solve. So we’re taking our toolset and designing ways to make your mapping problems simpler, faster, and easier to address. 

NavAbility Mapper is a cloud-based SLAM solver that allows you to build, refine, and update enterprise-scale maps for your automation. We’re excited about building living, breathing maps of your environment that give your robots trustworthy navigation in dynamic spaces. 

At the moment we’re focusing on providing examples in construction automation, warehouse automation, and marine applications, but we’re also looking for users in the Agriculture 4.0 space who want to build the next generation of agricultural robotics.

More information on NavAbility Mapper can be found on our Products page. Follow us on LinkedIn to keep up to date as we release Mapper features!

Mapping a Construction Site with NavAbility Mapper

We’re going to use real construction data to demonstrate how we’re making it easier to build, manage, and update large-scale maps for autonomy. We’ll start with a Hilti Challenge dataset, an open construction-site dataset that can be found at HILTI SLAM CHALLENGE 2022.

In upcoming posts we will be documenting how NavAbility Mapper solves key challenges in producing and maintaining construction site maps.

In the meantime here’s a sneak preview of the raw data and the maps we are producing (visualized using the new FoxGlove integration):

Raw camera and LIDAR data available from the Hilti Challenge dataset
Preliminary 3D map from the NavAbility Mapper SLAM solution (using the new FoxGlove integration to visualize it)

New Features in NavAbility Cloud

In addition to this, we’re adding features that you asked for, including new visualizations and new ways to build and solve your mapping challenges. We’ll dive in and highlight a few of these in later posts.

If you have any questions about these features please feel free to contact us on Slack!

Multiple Solvers

A frequent request was to allow the NavAbility cloud to produce SLAM solver answers using traditional Gaussian solving in addition to the multimodal solutions. This is commonly called parametric SLAM, or unimodal SLAM. 

You can now produce both parametric and multimodal solutions by specifying the type when requesting a SLAM solve. This is available in v0.4.6 of the Python SDK.     
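
As a rough sketch of how that selection might look from Python, see below. The import paths, function names, and the option used to request a parametric solve are assumptions for illustration rather than the authoritative API; please check the Python SDK documentation for the exact v0.4.6 calls.

```python
# Hypothetical sketch of requesting both solver types via the Python SDK.
# All names and signatures below are illustrative assumptions, not the published API;
# consult the NavAbility Python SDK documentation for the exact calls.
from navability.entities import NavAbilityHttpsClient, Client   # assumed import paths
from navability.services import solveSession                    # assumed import path

nva_client = NavAbilityHttpsClient()
session = Client("myUser", "myRobot", "constructionSession01")

# Request the default multimodal (non-Gaussian) solve.
solveSession(nva_client, session)

# Request a traditional parametric (Gaussian/unimodal) solve by passing the solver
# type as an option (the argument name here is an assumption for illustration).
solveSession(nva_client, session, options={"useParametric": True})
```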

Visualization with FoxGlove

Do you want to see your data in all its 3D glory? In addition to the topological graphs and the 2D spatial graphs in the NavAbility App, we’re integrating FoxGlove into the App so you can use the FoxGlove tools to examine your results.

We’ll write a post on how to do this in the coming weeks. 

Big Data in NavAbility Cloud

The marine surface vehicle example highlighted the need to allow users to tightly link their big data (e.g. camera image, radar data, and LIDAR scans) to the factor graph for efficient querying.

We now have endpoints to upload, link, and download big data related to variables in your graph. This allows you to upload all the raw sensor data, solve graphs, and later query it efficiently for additional processing. This is currently available via the NavAbility App and in v0.4.7 of the Julia SDK. Let us know if you want us to prioritize adding it to the Python SDK.
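
A hypothetical sketch of the upload-link-query workflow is shown below. The function names are placeholders for illustration only (the real endpoints live in the NavAbility App and the Julia SDK v0.4.7); the point is the shape of the workflow, not the exact API.

```python
# Hypothetical workflow sketch for linking big data to factor graph variables.
# Every call on `client` below is a placeholder, not the published SDK API.

def upload_and_link(client, session, variable_label, image_bytes):
    """Upload a raw sensor blob, then link it to a variable in the factor graph."""
    blob_id = client.upload_blob(image_bytes, mime_type="image/png")   # placeholder call
    client.link_blob(session, variable_label, blob_id, label="keyframe_image")

def images_near(client, session, center_xy, radius_m):
    """Query the solved graph for variables near a location and fetch their images."""
    variables = client.variables_within(session, center_xy, radius_m)  # placeholder call
    return [client.download_blob(v.blob_ids["keyframe_image"]) for v in variables]
```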

NavAbility App demonstrating the big data available across a user’s robots. This data is indexed by the factor graph and can be efficiently queried to, say, find all images around a specific location.

ICRA Update: One Week to Go, Preparing Tutorials and SDKs

We’re looking forward to our workshop on May 27th at the premier robotics conference, IEEE’s International Conference on Robotics and Automation (ICRA), in Philadelphia!

We encourage anyone who is interested in building robotic systems to come meet us in Philadelphia during the ICRA week. If you can’t be in Philadelphia, sign up for complimentary virtual attendance and join us on Gather.Town (we will be supporting both in-person and virtual attendees). We will also be there for the full week, so feel free to reach out to us to set up a meeting. We would like to meet you, visit your booth, or share a coffee and discuss next-generation robotics.

Tutorials are Now Available!

Our workshop session is aimed at providing multiple levels of engagement – from brief overviews of how to solve complex real-world navigation problems, through to trying the tutorial code snippets for yourself. Everything is available on the NavAbility App page if you want to take an early peek!

Upgrading SDKs for Tutorials

We are working hard to provide “zero install” and local install options for visitors. We are also improving our SDKs for easier interfacing from different programming languages such as Python. Our ICRA tutorials will also show how our SDKs can readily be integrated into your existing software stack, making the features of our technology available with minimal effort.

Who Should Attend

Robotic system developers, integrators, OEM and sensor manufacturers, navigation system experts, and project leads alike will find this workshop insightful and constructive. We also encourage researchers in simultaneous localization and mapping (SLAM) to visit.

Stay Tuned

We will be updating our ICRA Landing Page.  See you in Philadelphia May 23rd – 27th!  Reach out, follow us, or subscribe to our feeds for more info!

Visit us at the NASA LSIC 2022 Spring Meeting!

The Lunar Surface Innovation Consortium, part of NASA’s Lunar Surface Innovation Initiative, is hosting its Spring meeting this week. NavAbility is presenting at the event and will be participating in breakout sessions. Come visit us at the event to learn how we are enabling more capable, distributed, and robust robotic technologies through hybrid open and platform software!

This event is a great opportunity to connect with a community of experts in a variety of advanced robotic technologies, and to learn about the ongoing innovation from industry, academia, private, and national lab groups. See you there on 4-5 May at Johns Hopkins APL, Laurel, MD, USA!

PDF, NavAbility Poster 2022 with Hyperlinks

New YouTube video: Easily using camera data for navigation and localization

You have a camera on your robot… but now what?

We’re starting the discussion on how to convert camera data into information that can be used for navigation and localization.

Next up: Loop closures and why that’s the crux of a robust navigation solution!

References:
* Edwin Olson’s AprilTags paper: https://april.eecs.umich.edu/media/pdfs/olson2011tags.pdf
* C/C++ library: https://github.com/AprilRobotics/apriltag
* A Python library: https://github.com/duckietown/lib-dt-apriltags
* Julia library: https://github.com/JuliaRobotics/AprilTags.jl
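
If you want to try this right away, here is a minimal detection sketch using the dt-apriltags Python wrapper listed above. The image path, camera parameters, and tag size are placeholders you would replace with your own calibration values.

```python
# Sketch: detecting AprilTags in a single grayscale image with the dt-apriltags
# Python wrapper (paths and camera parameters below are placeholders).
import cv2
from dt_apriltags import Detector

img = cv2.imread("robot_camera_frame.png", cv2.IMREAD_GRAYSCALE)

detector = Detector(families="tag36h11")
# camera_params = (fx, fy, cx, cy) from your calibration; tag_size in meters.
tags = detector.detect(img, estimate_tag_pose=True,
                       camera_params=(600.0, 600.0, 320.0, 240.0), tag_size=0.16)

for tag in tags:
    print(f"Tag id {tag.tag_id}: translation (m) = {tag.pose_t.ravel()}")
```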

We’re all learning here, so please feel free to comment about other good wrappers of the AprilTags library!

New YouTube video: Factor graphs and their importance in robotics

We promised to have a conversation on all things robotics, and a great place to start that conversation is factor graphs. This is the topic of our second video on the NavAbility YouTube channel, which is embedded below.

We also love communication, so if you have a topic in mind please comment on the videos or email us at info@navability.io.

Announcing our YouTube channel and Livestream on all things robotics!

We’re excited to announce our NavAbility YouTube channel on all things robotics!

We’ll dive into interesting topics about robots, sensors, navigation, and coordination – the “what to expect when you’re expecting a robot” for everyone from commercial users through to home hobbyists. Jim will also be doing a YouTube Live stream to discuss the last video, answer questions, and talk about industry news.

Subscribe to the NavAbility YouTube channel to follow us as we release these discussions. We also love communication, so if you have a topic in mind please comment on the videos or email us at info@navability.io.