This blog post is now outdated; check out our latest post here
About the authors: This post was written by Dan Rose and Nick Fragale from Rover Robotics (a robotics OEM) with help from Open Robotics, the AWS RoboMaker team, ADLINK Technology, and Intel. Our goal is to advance and proliferate ROS 2, and we would like to share our experience and advise people on whether and when they should start using ROS 2 or switch from ROS 1 to ROS 2. We will pin useful information to the top of the article. If you have feedback, please email email@example.com. Thanks for reading!
Who should switch to ROS 2 right now 📍
|Demographic||Description of user||Advice|
|Students||Those who are just learning to use ROS||Stick with ROS 1 for now. Many of the concepts in ROS 1 and ROS 2 are the same, so learning ROS 1 will help you learn ROS 2 later on.|
|Professors||Those teaching ROS||Keep teaching ROS 1 for now, but start thinking about curriculum for ROS 2. There are many entities interested in helping to develop curriculum for ROS 2, including Rover Robotics, so you don’t have to go it alone.|
|Researchers||Those using ROS to publish papers||Unless your paper specifically showcases ROS 2, our advice is to stick with ROS 1 for the time being.|
|Large Companies||Those who are in R&D groups funded by a large corporate entity||Strongly consider ROS 2 to reduce the amount of technical debt in the future. Put people with experience with ROS 1 on the project.|
|New Robotics Startups||Those who are thinking about starting a robotics company||Strongly consider ROS 2 to reduce the amount of technical debt in the future. Hire people with experience with ROS 1.|
|Existing Robotics Startups||Those working at a robotics startup that’s either using ROS 1 or not using ROS at all.||This is the hardest group to offer advice to. It really depends on where you are with your startup. Keep an ear to the ground on ROS 2; at some point you will want to switch, but it will be like ripping off a band-aid.|
|Robotics OEM||Those who make either robots, sensors for robots, or anything that needs a ROS driver||Now is a good time to switch. ROS 2 Dashing is the first LTS release, so it’s now safe for OEMs to start porting drivers without fear of new features breaking functionality. Additionally, we have seen large companies like Amazon, Intel, and Microsoft devote significant resources to ROS 2 development.|
Which DDS to use 📍
ROS 2 uses a DDS (Data Distribution Service) for publishing and subscribing. The DDS you choose can greatly affect how ROS 2 behaves. DDS is an open standard, and there are several DDS vendors, such as eProsima and ADLINK Technology, that provide both free and paid implementations. The current default for ROS 2 is FastRTPS from eProsima, but we had trouble with FastRTPS, so we started testing the others. Below are our findings.
|DDS implementation||Our experience|
|FastRTPS (eProsima)||Sometimes has trouble with topic discovery, so we don’t use it. We don’t fully understand this issue, but it manifests in things like publishing /odom and only sometimes seeing the data in RViz but not when using ros2 topic echo, or vice versa.|
|OpenSplice (ADLINK Technology)||Out of the box it was slow, but after changing the AllowMulticast setting to spdp it is fast enough for our needs, so this is the main DDS we use. It also has descriptive logging when something goes wrong. We couldn’t get some of the tools, like the osplconf GUI configurator, working under Linux.|
| ||Have not tried these yet since OpenSplice works well enough.|
| ||We tested it and got it working, with some hiccups building. It seems lightweight and fast, which is promising, but it is still maturing, so we are keeping an eye on it without using it yet.|
Update #1 – July 1st 2019
I first began looking into ROS 2 in January 2018. At the time I was working at a robotics startup trying to design a lawn care robot. I was eager to learn about the benefits of ROS 2 and how it could help with the issues my team was facing with ROS 1, like dealing with a crappy internet connection (either cellular or WiFi from the person’s house) and starting up nodes in a predetermined order to avoid certain nodes freaking out. I downloaded ROS 2 Crystal, tried to start up a ROS core, and realized that that was no longer a thing in ROS 2. I spent several hours researching the change in paradigm between ROS 1 and ROS 2 and eventually got a webcam feed working several days later. Then I shelved it, because it was clear that the time I would need to invest was more than the benefits.
Fast forward to June 2018 when I started Rover Robotics. My team and I decided to develop our driver in ROS 1 based on the difficulties that I had had with ROS 2.
In April 2019 we were contacted by the AWS RoboMaker team to work with them on developing demos for ROS 2. The AWS RoboMaker team is very invested in helping the ROS community make the switch from ROS 1 to ROS 2, because both AWS RoboMaker and ROS 2 target reliability and scale. For those of you who aren’t familiar with AWS RoboMaker, it is a set of cloud extensions to ROS that makes it easy to develop, test, and deploy intelligent robotics applications at scale. I’d recommend checking out their deployment tool if you are managing a fleet of robots.
Our experience with ROS 2 Dashing thus far
For the past two months we have been developing with ROS 2 Dashing. Our ultimate goal is autonomous, map-based navigation where a user can easily create and edit new maps. This is something we can currently do in ROS 1, so we thought it would be a good starting point for ROS 2.
The main reason we don’t yet recommend ROS 2 for all users is that we ran into performance issues caused by the DDS layer that significantly slowed our progress. On top of that, tools that are essential to us, like RViz and rqt_plot, either haven’t been ported or are much buggier than in ROS 1. We will keep a table of our opinions on the different DDS implementations, as well as a table of our experience with the tools that have been ported to ROS 2, pinned to the top of this post.
If we have one nugget of advice for people starting ROS 2 development, it is to use ADLINK Technology’s OpenSplice DDS and ROS 2 RMW (we have had problems with message discovery when using FastRTPS), and to tune it well; otherwise you will get all sorts of errors, because it can be really slow if not tuned properly. Some WiFi networks are notoriously bad at handling multicast traffic, and configuring DDS to use multicast only for discovery (SPDP) traffic can fix that. So in OpenSplice we set General/AllowMulticast to “spdp”, which made everything work 10x better in our environment:
```xml
<DDSI2Service name="ddsi2">
  <General>
    <AllowMulticast>spdp</AllowMulticast>
  </General>
</DDSI2Service>
```
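To make ROS 2 actually use OpenSplice with this tuned configuration, both the RMW implementation and the config file are selected through environment variables. A minimal sketch, assuming the XML above is saved as ~/ospl.xml and the OpenSplice RMW package for Dashing (rmw_opensplice_cpp) is installed:

```shell
# Select the OpenSplice RMW for all ROS 2 processes launched from this shell
export RMW_IMPLEMENTATION=rmw_opensplice_cpp
# Point OpenSplice at the tuned configuration file
export OSPL_URI="file://$HOME/ospl.xml"
```

Setting these in your shell profile keeps every terminal consistent; mixing RMWs across nodes is a common source of the discovery weirdness described above.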
We have indeed succeeded in getting rudimentary point-and-click navigation working, and as a precursor to showing that off, we have provided instructions on how to generate a map by teleoperating the robot around. We will release instructions for point-and-click navigation in next month’s update.
How to get the code
Ideally, all dependencies would be released and available in the ROS Package Index, but that’s not currently the case. Instead, some packages need to be built from source. The index to the source code is available in the file openrover-demo.repos.
How to build the demo
You must repeat this on both the robot and the workstation:
|command||what it does|
| ||Activates the underlay.|
| ||Ensures the workspace directory exists.|
| ||Changes directory to the workspace.|
|wget https://gist.githubusercontent.com/rotu/0f29b7df4eb6134d4df3f0dce6b38f7e/raw/||Downloads the file openrover-demo.repos.|
| ||Imports (i.e. clones) all the repos specified in openrover-demo.repos.|
| ||Searches the packages in the source subfolder for their dependencies and installs them.|
| ||Builds all packages from your workspace into the install directory.|
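The steps above follow a fairly standard colcon workspace workflow. A sketch of the layout, assuming ROS 2 Dashing and a workspace at ~/ros2_ws (the workspace path and the exact tool invocations are our assumptions, not from the table; the block uses a temp directory as a stand-in so it is safe to run anywhere):

```shell
# Create the workspace skeleton (temp dir stands in for ~/ros2_ws)
ws="$(mktemp -d)/ros2_ws"
mkdir -p "$ws/src"
cd "$ws"
# The remaining steps require the ROS 2 tooling to be installed:
#   source /opt/ros/dashing/setup.bash                # activate the underlay
#   wget https://gist.githubusercontent.com/rotu/0f29b7df4eb6134d4df3f0dce6b38f7e/raw/ -O openrover-demo.repos
#   vcs import src < openrover-demo.repos             # clone the listed repos
#   rosdep install --from-paths src --ignore-src -y   # install dependencies
#   colcon build                                      # build into install/
echo "workspace skeleton created at $ws"
```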
How to map the space
Except for launching drive.launch.py, all commands below are to be run on your workstation. Also, don’t forget to activate the workspace first!
|command||what it does|
|# on the robot ros2 launch openrover_demo drive.launch.py||Kicks off ROS 2 nodes to interface with the OpenRover, the LIDAR, and the IMU.|
|ros2 launch openrover_demo rviz.launch.py frame:=odom||Opens RViz, a GUI program for visualizing all kinds of data coming out of ROS. Most useful will be the robot’s location, the raw LIDAR data being fed into the SLAM algorithm, and the map as it is generated. At this point, you will not see a map, but you may see red dots, each of which represents a bit of raw LIDAR data.|
|ros2 launch openrover_demo slam.launch.py||Kicks off the Cartographer ROS SLAM node. This will take in all the LIDAR scan data and stitch it together into a map. It will also publish the relation between that map and the robot’s local position. At this point, you should start seeing a fragment of the map in RViz. Note that killing this node will cause you to lose the map, so don’t forget to save the map as below!|
|ros2 launch openrover_demo teleop.launch.py||Your robot probably can’t see the whole room from where it is, so this launches an interactive process to command the rover with the keyboard as it explores the room. Spacebar stops the robot, up/down arrows change the forward speed, and left/right arrows change the angular speed. Go slow and take your time: turning too fast can cause smearing artifacts on the map, and driving forward too fast can bruise your coworkers’ shins and break things. Also be aware that some obstacles may not be visible to the robot: transparent, reflective, or dark objects may not reflect the laser well enough, and anything below the LIDAR’s plane of sight will not show up on the map.|
|ros2 run nav2_map_server map_saver||When you have adequately explored the space and your map looks good, this commits the map to disk as map.pgm and map.yaml files.|
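For reference, the map.yaml written by map_saver pairs the occupancy image with its real-world scale and pose. A typical file looks like the following (the values here are illustrative defaults, not output from our robot):

```yaml
image: map.pgm               # occupancy image saved alongside this file
resolution: 0.05             # meters per pixel
origin: [-10.0, -10.0, 0.0]  # (x, y, yaw) of the lower-left pixel in the map frame
negate: 0                    # set to 1 to invert the occupancy interpretation
occupied_thresh: 0.65        # occupancy probability above which a cell is occupied
free_thresh: 0.196           # occupancy probability below which a cell is free
```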
The generated map serves two purposes: it can be used to determine where the robot is in the mapped space, and it can be used as an obstacle map for the robot to plan paths.
- Our custom teleop tools are clunky; we are working to port keyboard teleop to ROS 2.
- Using RViz requires a Linux computer. We are working on a web-based way to view the map being created.
- When creating the map, the robot’s LIDAR cannot see glass and very dark objects. A second map should be created that allows you to add areas the robot shouldn’t go.
- When creating a map, if you mess up you have to start over. We are working on a fix for this.