Map resolution

One of the most important things for a ground robot is its map. When the robot is navigating, it uses the map to figure out where it is; this is called localization. We want a map with fine enough resolution to capture room features reliably: the coarser the resolution, the more distorted the room will be, and the more the robot will struggle to localize. Sampling the room at too fine a resolution can overwhelm the localization algorithm, but it's always possible to downsample later without scanning the room again, so in general it's better to create maps with higher resolution than you think you need.
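If you'd rather script the downsampling than open an image editor, here's a minimal sketch of what it might look like, assuming a map exported by map_saver as my_map.pgm / my_map.yaml (placeholder names) and using Pillow for the resize. The one gotcha is that the YAML resolution has to scale along with the image, or the map changes size in the world frame.

```python
# Minimal sketch: downsample a saved map by an integer factor.
# Filenames are placeholders; requires Pillow and PyYAML.
import yaml
from PIL import Image

FACTOR = 2  # e.g. 0.02 m/px -> 0.04 m/px

img = Image.open("my_map.pgm")
small = img.resize((img.width // FACTOR, img.height // FACTOR), Image.NEAREST)
small.save("my_map_downsampled.pgm")

# The resolution in the map YAML is meters per pixel, so it has to grow
# by the same factor the image shrank.
with open("my_map.yaml") as f:
    meta = yaml.safe_load(f)
meta["image"] = "my_map_downsampled.pgm"
meta["resolution"] = meta["resolution"] * FACTOR
with open("my_map_downsampled.yaml", "w") as f:
    yaml.safe_dump(meta, f)
```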

Since maps are so important, we spent this month investigating how to improve ours. The default map resolution in all the Google Cartographer examples is 0.05 meters per pixel. We changed the resolution from 0.05 to 0.02 to create a higher-resolution map, figuring we could downsample as needed later on, and presto: RViz started crashing! As it turns out, a bug introduced when RViz was ported to ROS 2 prevented it from loading images larger than 510 x 510 pixels. This is now fixed in rviz PR #425.

Even after adjusting the resolution, the walls looked a bit… fuzzy. I had to adjust Cartographer’s trajectory builder grid resolution to match the map.
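For reference, here's roughly where those two resolutions live in a ROS 2 setup. This is a sketch rather than our actual launch file: the configuration directory and my_robot.lua are placeholders, and the Lua parameter path is the one from Cartographer's stock trajectory_builder_2d.lua, which may differ between versions.

```python
# Sketch of a ROS 2 launch file that keeps the published occupancy grid
# resolution in step with Cartographer's submap grid resolution.
from launch import LaunchDescription
from launch_ros.actions import Node

MAP_RESOLUTION = "0.02"  # meters per pixel

def generate_launch_description():
    return LaunchDescription([
        Node(
            package="cartographer_ros",
            executable="cartographer_node",
            # my_robot.lua should set the matching submap resolution, e.g.
            # TRAJECTORY_BUILDER_2D.submaps.grid_options_2d.resolution = 0.02
            arguments=[
                "-configuration_directory", "/path/to/config",
                "-configuration_basename", "my_robot.lua",
            ],
        ),
        Node(
            package="cartographer_ros",
            executable="cartographer_occupancy_grid_node",
            arguments=["-resolution", MAP_RESOLUTION],
        ),
    ])
```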

I’m still not sure if increasing the map resolution is a good thing to do for SLAM purposes. In theory, we’d like the most detailed maps possible, but in practice, I’m having trouble getting Cartographer to produce consistent maps (without erasing previously mapped areas) at high resolutions.

Lidar data quality

As the saying goes, "garbage in, garbage out," and that definitely applies to localization. If you don't have good lidar data, it doesn't matter how good your localization algorithm is; the result is going to be garbage. I'm using the Slamtec RPLIDAR A2 with the rplidar_ros node, and initially my data did not look so hot. The walls were all wiggly, even with RViz's "fixed frame" set to "lidar_link" (so all motion is due to the lidar data, not any uncertainty about the robot's location).

The issue is that, while the data coming out of the RPLidar arrives as angle/distance pairs with unequal spacing, ROS 2 LaserScan messages expect distances at evenly spaced angles. In converting a small number of unevenly spaced values into evenly spaced buckets, some distortion is introduced.
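Here's a toy illustration of that conversion (not the actual rplidar_ros code): unevenly spaced angle/distance pairs get snapped into a fixed number of evenly spaced bins, and with only a few hundred readings spread over the bins, neighboring scans can land in different bins, which is the wiggle we saw on straight walls.

```python
# Toy sketch of snapping uneven (angle, distance) pairs into the evenly
# spaced range bins a LaserScan message expects.
import numpy as np

def to_laserscan_ranges(angles_rad, distances_m, num_bins=360):
    """Snap (angle, distance) readings covering 0..2*pi into num_bins bins."""
    bin_width = 2.0 * np.pi / num_bins
    ranges = np.full(num_bins, float("inf"))  # empty bins read as "no return"
    for angle, dist in zip(angles_rad, distances_m):
        idx = int(round(angle / bin_width)) % num_bins
        # Each reading lands in the nearest bin; the snapping is where the
        # distortion comes from.
        ranges[idx] = min(ranges[idx], dist)
    return ranges
```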

Watch the corner in the top right of the animations; it should stay in a consistent position. Also note the not-very-reflective surface in the bottom left of the lidar images. The lidar has a little trouble seeing it, but more data points give us a better chance of picking it up.

There are two ways to improve this situation, both involving changes to the rplidar_node parameters (a launch sketch with both settings follows the list):

(1) Increase the number of data points to start with. The RPLIDAR A2 has several scan modes described in the Protocol Documentation (figure 3-3): Standard, Express, Boost, and Stability. Boost yields the most data points, so we use that.

(2) Fudge the data by introducing additional buckets for evenly spaced data (many of which will be empty) and putting each reading in the bucket nearest its actual angle. This is what the angle_compensate flag for rplidar_node does. Note that it introduces a visual "marching ants" artifact which, while weird looking, doesn't hurt the quality of the map.
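Put together, the rplidar_node settings look something like the sketch below. It's an approximation of our setup: the executable name differs between rplidar_ros versions (rplidar_node vs. rplidar_composition), and the serial port and frame name are whatever your robot uses.

```python
# Sketch of an rplidar_ros launch file with the two settings discussed above.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    return LaunchDescription([
        Node(
            package="rplidar_ros",
            executable="rplidar_composition",  # or "rplidar_node", depending on version
            parameters=[{
                "serial_port": "/dev/ttyUSB0",
                "frame_id": "lidar_link",
                "scan_mode": "Boost",        # (1) more points per revolution
                "angle_compensate": True,    # (2) fill evenly spaced buckets
            }],
        ),
    ])
```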

Map quantization problems

When Cartographer publishes the map, it does so as a nav_msgs/OccupancyGrid. This represents the data as a degree of occupancy (100 = cell is occupied, 0 = cell is free, -1 = cell is unknown), but map_saver writes it out in "trinary" format as one of three pixel values: 254 = free, 0 = occupied, 205 = unknown. The map itself may contain visual information that gets lost in the process (like walls that do a poor job of reflecting lidar) which we'd really like to keep. You can see some examples of this data being lost in the accompanying images.
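To make the loss concrete, here's a toy version of the trinary conversion (not map_saver's actual code); the 0.25/0.65 thresholds are the usual map_server defaults and are assumed here. Everything between the two thresholds collapses to "unknown," which is exactly where the faint, poorly-reflecting geometry ends up.

```python
# Toy illustration of where detail is lost in "trinary" mode, plus a generic
# "keep the detail" alternative (not the exact format map_saver uses).
def trinary_pixel(occupancy, free_thresh=0.25, occupied_thresh=0.65):
    """occupancy: -1 for unknown, otherwise 0..100 from nav_msgs/OccupancyGrid."""
    if occupancy < 0:
        return 205                      # unknown
    p = occupancy / 100.0
    if p > occupied_thresh:
        return 0                        # occupied (black)
    if p < free_thresh:
        return 254                      # free (white)
    return 205                          # everything in between is thrown away

def scale_pixel(occupancy):
    """Keep intermediate occupancy values instead of discarding them.

    Note: a real format needs some way to tell "unknown" apart from a
    mid-gray occupancy value; this toy just reuses 205.
    """
    if occupancy < 0:
        return 205
    return int(round(254 - (occupancy / 100.0) * 254))
```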

To this end, we submitted navigation2 PR #989 so map_saver can save these maps with the same detail it receives them in.

Bloopers and things that didn’t work

While trying to add additional map modes to the map_saver node, I sent some data that was not in the range [-1, 100] expected for a map cell. The result was some funky colors in RViz.

When troubleshooting the lidar data, one fix that didn't pan out was resampling the data at regularly spaced angles. Using fixed angles resulted in some starburst-patterned striping (not pictured here), as the resampled data kept tracing the same fixed angles and missing others. As it turns out, sample-to-sample variation in the lidar angles (seen above as "ants crawling along the outline") helps flesh out the generated map.

Very often when driving, the map data will lag behind the lidar data because of delays in transforming map data between frames.

We still don't have Cartographer tuned right: you can see where the map gets screwed up and starts redrawing over the same place, resulting in conflicting, fuzzy obstacles.

Get In Touch

Need help using ROS to build an autonomous solution? Contact us to discuss our consulting capabilities.
