Dev Update: Preview of NavAbility Mapper, a 3D Construction Site Example

NavAbility Mapper is making it easier to build, manage, and update large-scale maps in autonomy. This post will briefly showcase the building of a 3D map using an open construction-site dataset.

A key feature of NavAbility Mapper is that it strongly decouples the map building process so you can focus on the task at hand. For example, with NavAbility Mapper your team can efficiently split up the following map-building tasks:

    • Importing data into the cloud (e.g. ROS bags)
    • Inspecting and selecting data of importance
    • Map tuning using both parametric and non-Gaussian SLAM solvers
    • Resolving conflicts in areas with contradictory data
    • Manually refining results with human input in ambiguous areas where automation needs the human touch
    • Exporting the map to your robots in various formats

In this post we’ll give a preview of a concrete case study we’re building in the construction space.

Sneak Preview: Mapping a Construction Site

Side-by-side view of the 3D world map (left) and first-person camera data (right). Notice how information in the 3D world map is aggregated and improved as more data is injected.

Challenges in Mapping

Building and maintaining 3D spatial maps from fragmented sensor data in a construction environment presents a number of challenges, for example:

    • There is no single sensor that solves all problems, and the key to robust mapping is flexibly solving data fusion problems from multiple (heterogeneous) sensors.
    • Measurement data itself is not perfect, and each sensor has unique errors or ambiguities that are hard to model, predict, or reduce to a Gaussian-only error model. This is especially true when incorporating prior information (e.g. CAD models) or resolving contradictions in dynamic environments.
    • Verifying map accuracy in a dynamic environment (i.e. construction) is a delicate balance between automation and user input, and requires continuous validation as new data is added.
      • We jokingly say you’re doing your job correctly in construction only if the map keeps changing.
    • Maps need to be shared – between automation, human stakeholders, and ideally CAD/BIM software – and this requires a rich representation of maps, not just a networked filesystem.
    • Leveraging data collection from mobile equipment (possibly hand-held) provides more opportunities for collaborative robotic systems, but requires significantly more advanced data processing capabilities.

Diving into Mapper

We’re resolving these issues with NavAbility Mapper by building it to be sensor-flexible and suitable for enterprise use.

Flexible Sensor Types

Firstly, let’s take a quick look at some of the specific sensor aspects of NavAbility Mapper.  No single sensing modality can do it all, so Mapper is designed from the ground up to combine various sensor types (“apples and oranges”) into a common “apples and apples” joint inference framework, for example:

    • LIDAR produces semi-dense point clouds, but its cost and size means it is not always available
    • Inertial sensors provide self-contained estimates, but they require complex calibration and tricky data processing considerations
    • Camera imagery is ubiquitous, but it requires its own calibration and must contend with lighting variations and scene obstructions

In short, no sensor gives you a complete solution. We believe how you merge the sensor data is what makes (or breaks) a solution. NavAbility Mapper is designed to be flexible, incorporating a range of different sensor types out of the box, with the ability to extend it as needed.

In this post, we’ll look at the three sensor types available in the construction dataset.

3D LIDAR Scans

LIDAR scans are a popular data source for mapping and localization.  One of the key operations is calibrating and aligning point clouds, also known as the registration problem.  An example of a LIDAR alignment problem is shown in Figure 1 below.

A key feature of NavAbility Mapper is that it employs multiple methods to align point clouds.  We integrate Gaussian techniques, non-Gaussian techniques, and supervisory human intervention to enable an efficient mapping process. Ideally, everything aligns automatically, but in cases where it doesn’t (the critical cases!) we use novel solvers and judicious human intervention to ensure robust autonomy.

Even in cases of high ambiguity (when the going gets really tough!), the non-Gaussian alignment correlations are used directly as measurements in a factor graph model for further joint processing with other sensor data.

Figure 1: Two point clouds from the Construction dataset before alignment.
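
To make the automatic side of this concrete, here is a minimal sketch of a classical registration step using the open-source Open3D library. This is purely illustrative (it is not the NavAbility Mapper implementation), and the file paths and parameter values are placeholders:

```python
import numpy as np
import open3d as o3d  # open-source point cloud library, used here for illustration

def align_scans(source_path, target_path, voxel=0.2, max_dist=1.0):
    # Load two LIDAR scans and downsample them for speed
    # (paths and parameter values are placeholders)
    source = o3d.io.read_point_cloud(source_path).voxel_down_sample(voxel)
    target = o3d.io.read_point_cloud(target_path).voxel_down_sample(voxel)
    target.estimate_normals()  # required by point-to-plane ICP

    # Classical (Gaussian) point-to-plane ICP from an initial guess;
    # in practice the guess comes from odometry or a coarse global match
    result = o3d.pipelines.registration.registration_icp(
        source, target, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation, result.fitness
```

A poor fitness score from a step like this flags exactly the kind of ambiguous alignment where Mapper escalates to non-Gaussian solvers or human review.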

IMU Data

Inertial measurement units – gyros and accelerometers – may or may not be available in a given setup, but they provide a valuable data input for fully autonomous data processing.  The figure below shows a short gyro rate measurement segment between keyframes in the Construction dataset.  The data clearly shows a mobile sensor platform rotating aggressively about all three axes while collecting data!

NavAbility Mapper fuses this data with other sensors into a unified mapping solution.

Figure 2: A short three-axis rotation rate data segment, as measured by gyroscopes firmly mounted to the measurement platform.
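
To give a feel for what fusing gyro data involves, the relative rotation between two keyframes can be recovered by composing the small rotations implied by each rate sample. Below is a minimal sketch using SciPy; it is illustrative only, since a production pipeline would use IMU preintegration with bias and noise terms:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def integrate_gyro(rates, dts):
    """Compose body-frame angular-rate samples (rad/s) into the relative
    rotation between two keyframes. Simple rectangular integration that
    ignores gyro bias and noise, which a real solver must also estimate."""
    R = Rotation.identity()
    for omega, dt in zip(rates, dts):
        R = R * Rotation.from_rotvec(np.asarray(omega) * dt)
    return R

# e.g. 100 samples at 100 Hz between two keyframes:
# rel_rotation = integrate_gyro(rates=gyro_samples, dts=[0.01] * 100)
```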

Camera Data

Camera imagery is another popular (and ubiquitous) data source for mapping and localization.  While camera data is easy to capture, numerous challenges in terms of lighting, obstruction, and dynamic scenery complicate its use.

Camera data, in combination with other sensors, is a valuable data source for mapping and localization.  We incorporate camera measurements into the factor graph, where they are jointly processed with other sensor data to improve the mapping result.

Stereo or structured-light cameras provide reasonable depth estimates through computer vision processing.  In general, camera data processing can be done via brute-force matching, sparse feature extraction, or semi-dense methods.
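
For a flavor of the sparse-feature route, here is a minimal sketch using OpenCV’s ORB detector and a brute-force matcher. This is a generic textbook example, not the Mapper pipeline:

```python
import cv2

def match_frames(img1, img2, n_features=1000):
    # Detect ORB keypoints and binary descriptors in two grayscale frames
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching with cross-checking to reject outliers
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    return kp1, kp2, matches
```

Matched features like these, once triangulated, become relative-pose measurements that can be added to the factor graph alongside LIDAR and IMU data.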

More to follow on camera data!

Figure 3: A selection of camera angles captured under motion during data collection.

NavAbility Mapper for Enterprise Use

Multisensor Calibration

Naturally, the combination of multiple sensors requires calibration of each sensor individually (a.k.a. intrinsics) as well as the inter-sensor transforms (a.k.a. extrinsics).  Often, these calibration parameters are computed through optimization routines not unlike the underlying mapping or localization problem itself (sometimes referred to as simultaneous localization and mapping, or SLAM).

A feature of NavAbility Mapper is that calibration is treated similarly to localization and mapping, solving both problems at the same time.
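
As a small, self-contained example of calibration-as-optimization, the fixed rotation between two sensors can be estimated from matched 3D direction measurements (for example, angular rates observed by both an IMU and a camera) by solving Wahba’s problem in closed form. This sketch shows one classical building block, not the Mapper calibration pipeline:

```python
import numpy as np

def estimate_extrinsic_rotation(a, b):
    """Estimate the fixed rotation R such that b[i] ~= R @ a[i], given N
    matched 3-vectors a (sensor 1 frame) and b (sensor 2 frame), each of
    shape (N, 3). Closed-form Kabsch/Wahba solution via SVD."""
    H = a.T @ b                              # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```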

Gaussian and Non-Gaussian Algorithms

Robust mapping requires more than traditional parametric (Gaussian-only) processing.  NavAbility develops both non-Gaussian and parametric algorithms that operate at both the measurement and joint factor graph inference level for more robust computations.  While non-Gaussian techniques are more computationally intensive, the higher robustness can dramatically improve overall mapping process timelines. 

NavAbility Mapper combines both techniques at the heart of the software (the factor graph) to ensure your map is always stable and reliable in enterprise applications.
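
For readers new to factor graphs, the sketch below shows the parametric (Gaussian) half of the picture as a tiny 2D pose-graph problem, written with the open-source GTSAM library. GTSAM is used here purely for illustration; the non-Gaussian factors at the heart of the NavAbility solver are not captured by this example:

```python
import numpy as np
import gtsam

# Three 2D poses connected by odometry, plus a loop closure back to the start
graph = gtsam.NonlinearFactorGraph()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0, 0, 0), prior_noise))
graph.add(gtsam.BetweenFactorPose2(0, 1, gtsam.Pose2(2, 0, np.pi / 2), odom_noise))
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(2, 0, np.pi / 2), odom_noise))
# Loop closure: re-observing the first pose constrains accumulated drift
graph.add(gtsam.BetweenFactorPose2(2, 0, gtsam.Pose2(2, 2, np.pi), odom_noise))

# Rough initial guesses, refined by nonlinear least squares (Gaussian assumption)
initial = gtsam.Values()
initial.insert(0, gtsam.Pose2(0.1, -0.1, 0.05))
initial.insert(1, gtsam.Pose2(2.2, 0.1, 1.5))
initial.insert(2, gtsam.Pose2(1.9, 2.1, 3.0))
result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
```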

Multi-Stakeholder Access to Maps and Privacy

Collecting, ingesting, organizing and then producing maps is only part of the overall mapping problem.  The goal is to produce a digital twin representation of ongoing operations, one that can be used for everything from automation to progress reports. 

In construction, the map is inherently dynamic: it must be constantly updated, and it must be available to a variety of stakeholders and end-users.  NavAbility understands these stakeholders may be human or robotic, and we strongly believe in defining a common reference for human+machine collaboration through a shared understanding of the same spatial environment.

NavAbility maps are:

    • Built and persisted in the cloud for easy access
    • Optimized and indexed for efficient access whether by human or machine
    • Secured by state-of-the-art cloud security and user authorization to ensure your data is kept private

Figure 4: Screen capture of a 3D point cloud map from the NavAbility Mapper SLAM solution.

More details will follow in future posts, and we invite readers to reach out to NavAbility with questions or interest. Follow us on LinkedIn to keep up to date with new articles on how Mapper can empower customer and end-user products, services, and solutions.

Meaning of the NavAbility Logo

The NavAbility logo carries special meaning.  NavAbility has a firm belief that the future of AI and robotics will depend on the combination of People and Knowledge.  We believe that people develop and grow value through a virtuous cycle.  Similarly, ideas develop into knowledge through another virtuous cycle.  These two infinite virtuous cycles, people along the horizontal and knowledge along the vertical, combine to form the symbolic meaning of our logo.

People: A Virtuous Cycle between the Community and Industry

A virtuous cycle is a chain of complicated events that reinforces repetition of the cycle and, if balanced and well managed, can establish a steady engine of productivity.  NavAbility is working to bring Community and Industry together, through a process of managing risk, to the benefit of society.

The Community is home to everyone from those just starting out in robotics to seasoned experts, and includes folks who prioritize social interaction, new technology, problem solving, and expanding capabilities.  By investing in people, we can develop the means toward a sustainable and empowered future where automation, AI, and robotics raise the standard of living for everyone.

Industry is home to everything we need to survive in our modern lives, whether it is the construction of your new home, your job, where you buy your car, your phone, advances in agriculture, and so much more.  Survival means timeliness and developing opportunities to adapt and grow.

NavAbility is working to bridge the gap between Community and Industry, to solve important problems faster and more effectively, and to promote a future where everyone can benefit and contribute back in their own unique and talented ways.  This is the horizontal cycle of the NavAbility logo.

Knowledge: A Virtuous Cycle between Ideas and Science

The vertical cycle of the NavAbility logo is a second virtuous cycle involving the necessary Knowledge for developing robotic systems that help both society and our planet.  Knowledge not only empowers education, but inspires new ideas which are vital in driving the virtuous cycle of progress and growth.

Creativity is not something that anyone can own.  It’s something that happens when the conditions are right.  Creativity is fueled through inspiration from previous ideas, and is expressed through available technology and resources.  New ideas overcome barriers and solve vital problems faced by industry today, yet so rarely are the connections made.  Creativity thrives in openness and fairness.

Science is dictated by budgets.  Research proposals and intellectual property ownership are subject to finance.  Science is not creativity.  Science is the verification and cataloging of new ideas into a format that is reproducible, reliable, and the basis for investment.  Science is an art form by which new readers can more quickly understand the knowledge that exists, and go out to creatively explore new solutions.

NavAbility values knowledge and the process by which knowledge is created.  NavAbility is working to foster solutions for both Community and Industry that rely on knowledge to function in this high tech domain.  NavAbility accepts a hybrid approach where openness and commercial interests can work together to develop a more sustainable future and higher standard of living for all.

Our NavAbility Logo

ICRA Update: One Week to Go, Preparing Tutorials and SDKs

We’re looking forward to our workshop on May 27th at the premier robotics conference, IEEE’s International Conference on Robotics and Automation (ICRA), in Philadelphia!

We encourage anyone who is interested in building robotic systems to come meet us in Philadelphia during the ICRA week.  If you can’t be in Philadelphia, sign up for complimentary virtual attendance and join us on Gather.Town (we will be supporting both in-person and virtual attendees). We will also be there for the full week, so feel free to reach out to us to set up a meeting. We would like to meet you, visit your booth, or share a coffee and discuss next-generation robotics.

Tutorials are Now Available!

Our workshop session is aimed at providing multiple levels of engagement – from brief overviews of how to solve complex real-world navigation problems, through to trying the tutorial code snippets for yourself. Everything is available on the NavAbility App page if you want to take an early peek!

Upgrading SDKs for Tutorials

We are working hard to provide “zero install” and local install options for visitors.  We are also improving our SDKs for easier and better interfacing from different programming languages like Python.  Our ICRA tutorials will show how our SDKs can readily be integrated into your existing software stack, making the features of our technology available with the least amount of effort.
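
To give a taste of what that looks like, here is a hypothetical Python snippet in the spirit of the tutorials. The names below (NavAbilityClient, Pose2Variable, OdometryFactor, and so on) are illustrative placeholders, not the confirmed SDK surface – please see the tutorials themselves for the real API:

```python
# Hypothetical sketch only: all names here are illustrative placeholders,
# not the confirmed NavAbility SDK API. See the official tutorials for the
# real interface.
from navability_sdk import NavAbilityClient, Pose2Variable, OdometryFactor

client = NavAbilityClient(token="YOUR_TOKEN")        # authenticate against the cloud
session = client.session("myrobot", "icra-demo")     # a session groups one robot's data

# Add two pose variables and an odometry constraint between them
session.add(Pose2Variable("x0"))
session.add(Pose2Variable("x1"))
session.add(OdometryFactor("x0", "x1", dx=1.0, dy=0.0, dtheta=0.1))

session.solve()                                      # solving happens in the cloud
print(session.estimate("x1"))                        # fetch the latest estimate
```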

Who Should Attend

Robotic system developers, integrators, OEM and sensor manufacturers, navigation system experts, and project leads alike will find this workshop insightful and constructive.  We also encourage researchers in simultaneous localization and mapping (SLAM) to visit.

Stay Tuned

We will be updating our ICRA Landing Page.  See you in Philadelphia May 23rd – 27th!  Reach out, follow us, or subscribe to our feeds for more info!

How can we help?

Let us know if you have any specific questions about our technology, our company, or challenging problems you have encountered in navigation, localization, or mapping.  We want to help!

Contact us

Find out how we can help you in your navigation journey

Visit us at the NASA LSIC 2022 Spring Meeting!

The Lunar Surface Innovation Consortium, part of NASA’s Lunar Surface Innovation Initiative, is hosting its Spring meeting this week.  NavAbility is presenting at the event and will be engaging in the breakout sessions.  Come visit us at the event to learn how we are enabling more capable, distributed, and robust robotic technologies through hybrid open and platform software!

This event is a great opportunity to connect with a community of experts in a variety of advanced robotic technologies, and to learn about ongoing innovation from industry, academia, private, and national lab groups.  See you there on 4-5 May at Johns Hopkins APL, Laurel, MD, USA!

PDF, NavAbility Poster 2022 with Hyperlinks
