SLAM & Autonomous Navigation

In this project you will give a mobile robot the ability to explore an unknown environment, build a map, and navigate autonomously to any goal pose. You will implement three algorithm tiers:

| Tier         | Approach                  | Sensor                       | Key Package                   |
|--------------|---------------------------|------------------------------|-------------------------------|
| Traditional  | 2D laser SLAM (gmapping)  | 2D LiDAR                     | slam_gmapping                 |
| Intermediate | Cartographer / RTAB-Map   | 2D LiDAR / RGB-D             | cartographer_ros, rtabmap_ros |
| Modern       | Visual SLAM (ORB-SLAM3)   | Mono / stereo / RGB-D camera | orb_slam3_ros                 |

All three tiers are integrated with the ROS Navigation Stack (move_base, costmaps, global/local planners) so the robot can actually drive to a goal.


Learning Objectives

  • Understand the SLAM problem: simultaneous localization and mapping
  • Implement particle-filter-based 2D SLAM (gmapping)
  • Use graph-based SLAM (Cartographer, RTAB-Map) for robust mapping
  • Run feature-based visual SLAM (ORB-SLAM3) on camera data
  • Configure the ROS Navigation Stack for autonomous goal-reaching
  • Compare SLAM approaches on accuracy, robustness, and sensor requirements

1. The SLAM Problem

1.1 What Is SLAM?

SLAM (Simultaneous Localization and Mapping) is the problem of building a map of an unknown environment while simultaneously localizing the robot within that map. It is a chicken-and-egg problem: an accurate map is needed to localize the robot, yet an accurate pose estimate is needed to build the map:

 ┌─────────────────────────────────────────────────────┐
 │                    SLAM Pipeline                    │
 │                                                     │
 │   Sensors ──► Feature/Scan ──► Data ──────► Map     │
 │   (LiDAR,     Extraction       Association  Update  │
 │    Camera)        │                │          │     │
 │                   ▼                ▼          │     │
 │              Pose Estimate ◄── Pose Update    │     │
 │                   │                           │     │
 │                   ▼                           │     │
 │              Localization ◄───────────────────┘     │
 │              (where am I?)                          │
 └─────────────────────────────────────────────────────┘

Mathematical formulation: At each time step \(t\), the robot receives odometry \(u_t\) and observation \(z_t\). SLAM estimates:

\[ p(x_{1:t}, m \mid z_{1:t}, u_{1:t}) \]

where \(x_{1:t}\) is the trajectory, \(m\) is the map, \(z_{1:t}\) are observations, and \(u_{1:t}\) are control inputs.

1.2 Front-End vs Back-End

┌──────────────────────────────────────────────┐
│                 SLAM System                  │
│                                              │
│  ┌──────────────┐        ┌───────────────┐   │
│  │  Front-End   │        │   Back-End    │   │
│  │              │ nodes, │               │   │
│  │ • Scan match │ edges  │ • Pose graph  │   │
│  │ • Feature    │───────►│   optimization│   │
│  │   extraction │        │ • Loop        │   │
│  │ • Odometry   │        │   closure     │   │
│  └──────────────┘        └───────────────┘   │
│      sensors              optimized map      │
└──────────────────────────────────────────────┘
  • Front-End: processes raw sensor data into constraints (edges/nodes)
  • Back-End: optimizes the pose graph to produce a consistent map

2. Traditional: 2D Laser SLAM with gmapping

2.1 Algorithm — Rao-Blackwellized Particle Filter

gmapping uses a Rao-Blackwellized Particle Filter (RBPF) which represents the posterior as:

\[ p(x_{1:t}, m \mid z_{1:t}, u_{1:t}) = p(m \mid x_{1:t}, z_{1:t}) \cdot p(x_{1:t} \mid z_{1:t}, u_{1:t}) \]

Each particle \(i\) carries:

  • A robot trajectory hypothesis \(x_{1:t}^{(i)}\)
  • A map \(m^{(i)}\) conditioned on that trajectory

Key steps per iteration:

  1. Motion update — sample a new pose from the motion model:
\[ x_t^{(i)} \sim p(x_t \mid x_{t-1}^{(i)}, u_t) \]
  2. Measurement update — compute the importance weight using scan matching:
\[ w_t^{(i)} = w_{t-1}^{(i)} \cdot p(z_t \mid x_t^{(i)}, m_{t-1}^{(i)}) \]
  3. Map update — update the occupancy grid map for particle \(i\)
  4. Resampling — low-variance resampling to avoid particle deprivation
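Step 1 above can be sketched in pure Python. This samples from the standard odometry motion model (rot1–trans–rot2 decomposition); the mapping from the `alpha` noise parameters to gmapping's `srr`/`srt`/`str`/`stt` is approximate and purely illustrative.

```python
import math
import random

def sample_motion_model_odometry(pose, u, alpha=(0.01, 0.02, 0.01, 0.02)):
    """Sample x_t ~ p(x_t | x_{t-1}, u_t) for the odometry motion model.

    pose = (x, y, theta); u = (drot1, dtrans, drot2) from wheel odometry.
    alpha loosely plays the role of gmapping's srr/srt/str/stt (assumption).
    """
    drot1, dtrans, drot2 = u
    a1, a2, a3, a4 = alpha
    # Perturb each odometry component with zero-mean Gaussian noise
    r1 = drot1 - random.gauss(0.0, math.sqrt(a1 * drot1**2 + a2 * dtrans**2))
    tr = dtrans - random.gauss(0.0, math.sqrt(a3 * dtrans**2 + a4 * (drot1**2 + drot2**2)))
    r2 = drot2 - random.gauss(0.0, math.sqrt(a1 * drot2**2 + a2 * dtrans**2))
    x, y, th = pose
    x += tr * math.cos(th + r1)
    y += tr * math.sin(th + r1)
    th += r1 + r2
    return (x, y, th)
```

With all noise parameters set to zero the sample reduces to exact dead reckoning, which is a quick sanity check when tuning.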

Adaptive number of particles: gmapping adjusts \(N\) based on the estimated effective sample size \(N_{\text{eff}}\):

\[ N_{\text{eff}} = \frac{1}{\sum_{i=1}^{N} (w_t^{(i)})^2} \]

When \(N_{\text{eff}}\) drops below a threshold, resample.
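The \(N_{\text{eff}}\) formula and low-variance (systematic) resampling fit in a few lines of plain Python; this is a generic sketch of the mechanism, not gmapping's internal code:

```python
import random

def effective_sample_size(weights):
    """N_eff = 1 / sum(w_i^2) over normalized weights."""
    s = sum(weights)
    norm = [w / s for w in weights]
    return 1.0 / sum(w * w for w in norm)

def low_variance_resample(particles, weights):
    """Systematic resampling: one random offset, N evenly spaced pointers."""
    n = len(particles)
    s = sum(weights)
    norm = [w / s for w in weights]
    r = random.uniform(0.0, 1.0 / n)
    c = norm[0]
    i = 0
    out = []
    for m in range(n):
        u = r + m / n
        while u > c and i < n - 1:  # advance cumulative sum past the pointer
            i += 1
            c += norm[i]
        out.append(particles[i])
    return out
```

Uniform weights give \(N_{\text{eff}} = N\) (no resampling needed); a single dominant particle drives \(N_{\text{eff}}\) toward 1, triggering resampling.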

2.2 ROS Launch File — gmapping

<!-- launch/gmapping.launch -->
<launch>
  <!-- Arguments -->
  <arg name="base_frame"    default="base_link"/>
  <arg name="odom_frame"    default="odom"/>
  <arg name="map_frame"     default="map"/>
  <arg name="scan_topic"    default="/scan"/>

  <!-- gmapping node -->
  <node pkg="slam_gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
    <param name="base_frame"       value="$(arg base_frame)"/>
    <param name="odom_frame"       value="$(arg odom_frame)"/>
    <param name="map_frame"        value="$(arg map_frame)"/>
    <param name="delta"            value="0.05"/>        <!-- 5 cm grid resolution -->
    <param name="xmin"             value="-25.0"/>
    <param name="ymin"             value="-25.0"/>
    <param name="xmax"             value="25.0"/>
    <param name="ymax"             value="25.0"/>
    <param name="particles"        value="30"/>
    <param name="minimumScore"     value="200"/>
    <param name="linearUpdate"     value="1.0"/>         <!-- update every 1 m -->
    <param name="angularUpdate"    value="0.5"/>         <!-- update every 0.5 rad -->
    <param name="lsigma"           value="0.075"/>
    <param name="ogain"            value="3.0"/>
    <param name="srr"              value="0.01"/>        <!-- odometry noise -->
    <param name="srt"              value="0.02"/>
    <param name="str"              value="0.01"/>
    <param name="stt"              value="0.02"/>
    <param name="lstep"            value="0.05"/>        <!-- search step -->
    <param name="astep"            value="0.05"/>
    <param name="iterations"       value="5"/>
    <remap from="scan" to="$(arg scan_topic)"/>
  </node>
</launch>

2.3 Save the Map

After mapping, save the map using map_server:

# Save occupancy grid map
rosrun map_server map_saver -f ~/maps/my_map
# Produces: my_map.yaml + my_map.pgm

The YAML file:

# my_map.yaml
image: my_map.pgm
resolution: 0.050000
origin: [-25.000000, -25.000000, 0.000000]
negate: 0
occupied_thresh: 0.65
free_thresh: 0.196
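The thresholds above follow map_server's convention: pixel darkness is converted to an occupancy probability, then classified. A minimal sketch of that conversion (the 100/0/-1 cell values match the nav_msgs/OccupancyGrid convention):

```python
def pixel_to_occupancy(pixel, negate=0, occupied_thresh=0.65, free_thresh=0.196):
    """Classify one 8-bit PGM pixel the way map_server does.

    Returns 100 (occupied), 0 (free), or -1 (unknown), as in
    nav_msgs/OccupancyGrid.
    """
    # Darker pixels mean higher occupancy probability unless negate is set
    p = pixel / 255.0 if negate else (255 - pixel) / 255.0
    if p > occupied_thresh:
        return 100
    if p < free_thresh:
        return 0
    return -1
```

Black pixels (0) become occupied, near-white pixels become free, and the familiar gray (205) used for unexplored space falls between the thresholds and stays unknown.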

3. Intermediate: Cartographer & RTAB-Map

3.1 Google Cartographer

Cartographer uses submaps and pose graph optimization. The front-end builds local submaps via correlative scan matching; the back-end performs Ceres-based optimization when loop closures are detected.

Scan matching — Ceres cost function:

\[ \min_{\xi} \sum_{k=1}^{K} \left( 1 - M_{\text{smooth}}\!\left( T_\xi \, h_k \right) \right)^2 \]

where \(\xi \in \mathfrak{se}(2)\) is the pose, \(h_k\) is the \(k\)-th hit point, and \(M_{\text{smooth}}\) is the bilinear-interpolated submap value.
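The cost above can be sketched in pure Python: transform each hit by the pose, look up the submap value with bilinear interpolation, and sum the squared misses. This illustrates the idea only; Cartographer's real implementation uses Ceres with bicubic interpolation and gradients. The `grid[y][x]` indexing is an assumption of this sketch.

```python
import math

def bilinear(grid, x, y):
    """M_smooth: bilinear interpolation of grid probabilities at (x, y)."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    tx, ty = x - x0, y - y0
    v00, v10 = grid[y0][x0], grid[y0][x0 + 1]
    v01, v11 = grid[y0 + 1][x0], grid[y0 + 1][x0 + 1]
    return ((1 - tx) * (1 - ty) * v00 + tx * (1 - ty) * v10
            + (1 - tx) * ty * v01 + tx * ty * v11)

def scan_match_cost(grid, pose, hits):
    """sum_k (1 - M_smooth(T_xi h_k))^2 with pose = (tx, ty, theta)."""
    tx, ty, th = pose
    c, s = math.cos(th), math.sin(th)
    cost = 0.0
    for hx, hy in hits:
        wx = c * hx - s * hy + tx   # rotate then translate the hit point
        wy = s * hx + c * hy + ty
        cost += (1.0 - bilinear(grid, wx, wy)) ** 2
    return cost
```

A pose that lands every hit on a fully occupied cell (probability 1) has zero cost, which is exactly what the optimizer searches for.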

Pose graph — when a submap is finished, constraints (edges) are added between:

  • Scan nodes ↔ the submap they were inserted into (intra-submap)
  • Scan nodes ↔ other nearby finished submaps (inter-submap, i.e. loop closures)
  • Scan nodes ↔ landmark nodes (when landmarks are enabled)

Optimization minimizes:

\[ \min_{\mathbf{x}} \sum_{(i,j) \in \mathcal{E}} \left\| z_{ij} - \hat{z}_{ij}(\mathbf{x}_i, \mathbf{x}_j) \right\|^2_{\Omega_{ij}} \]

where \(\mathbf{x}\) are poses, \(z_{ij}\) are observed constraints, and \(\Omega_{ij}\) is the information matrix.
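To see what this optimization does, here is a deliberately tiny 1D version: poses on a line, relative-displacement constraints, identity information matrices, and plain gradient descent instead of the Gauss-Newton/Levenberg-Marquardt solvers real back-ends use. Everything here is illustrative.

```python
def optimize_pose_graph_1d(x, edges, anchor=0, iters=500, lr=0.1):
    """Minimize sum (z_ij - (x_j - x_i))^2 by gradient descent.

    x: initial pose guesses; edges: (i, j, z_ij) relative measurements;
    x[anchor] is held fixed to remove the gauge freedom.
    """
    x = list(x)
    for _ in range(iters):
        grad = [0.0] * len(x)
        for i, j, z in edges:
            r = z - (x[j] - x[i])   # residual of this constraint
            grad[i] += 2 * r        # dr/dx_i = +1
            grad[j] -= 2 * r        # dr/dx_j = -1
        grad[anchor] = 0.0          # keep the anchor pose fixed
        for k in range(len(x)):
            x[k] -= lr * grad[k]
    return x
```

With odometry edges 0→1 and 1→2 of length 1.0 plus a slightly inconsistent "loop closure" 0→2 of 2.1, the optimizer spreads the 0.1 m of error across all edges instead of dumping it on the last pose.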

Launch file — 2D Cartographer:

<!-- launch/cartographer_2d.launch -->
<launch>
  <arg name="configuration_directory" default="$(find cartographer_ros)/configuration_files"/>
  <arg name="configuration_basename"  default="backpack_2d.lua"/>

  <node pkg="cartographer_ros" type="cartographer_node" name="cartographer_node"
        args="-configuration_directory $(arg configuration_directory)
              -configuration_basename $(arg configuration_basename)"
        output="screen">
    <remap from="scan" to="/scan"/>
    <remap from="odom" to="/odom"/>
  </node>

  <!-- Occupancy grid node for Navigation Stack -->
  <node pkg="cartographer_ros" type="cartographer_occupancy_grid_node"
        name="cartographer_occupancy_grid_node" output="screen">
    <param name="resolution" value="0.05"/>
  </node>
</launch>

Lua configuration (backpack_2d.lua):

-- backpack_2d.lua
include "map_builder.lua"
include "trajectory_builder.lua"

options = {
  map_builder = MAP_BUILDER,
  trajectory_builder = TRAJECTORY_BUILDER,
  map_frame = "map",
  tracking_frame = "base_link",
  published_frame = "base_link",
  odom_frame = "odom",
  provide_odom_frame = true,
  publish_frame_projected_to_2d = true,
  use_odometry = true,
  use_nav_sat = false,
  use_landmarks = false,
  num_laser_scans = 1,
  num_multi_echo_laser_scans = 0,
  num_subdivisions_per_laser_scan = 1,
  num_point_clouds = 0,
}

MAP_BUILDER.use_trajectory_builder_2d = true

TRAJECTORY_BUILDER_2D.min_range = 0.1
TRAJECTORY_BUILDER_2D.max_range = 30.0
TRAJECTORY_BUILDER_2D.missing_data_ray_length = 5.0
TRAJECTORY_BUILDER_2D.use_imu_data = false
TRAJECTORY_BUILDER_2D.motion_filter.max_time_seconds = 5.0
TRAJECTORY_BUILDER_2D.motion_filter.max_distance_meters = 0.2
TRAJECTORY_BUILDER_2D.motion_filter.max_angle_radians = math.rad(1.0)

POSE_GRAPH.optimization_problem.huber_scale = 1e1
POSE_GRAPH.optimize_every_n_nodes = 90
POSE_GRAPH.constraint_builder.min_score = 0.65

return options

3.2 RTAB-Map (Real-Time Appearance-Based Mapping)

RTAB-Map fuses laser, RGB-D, and odometry data using a graph-based approach with memory management. It stores nodes in a working memory / long-term memory system to handle large-scale environments.

Key features:

  • Appearance-based loop closure detection using visual bag-of-words
  • Graph optimization via g2o / GTSAM
  • Multi-session mapping support
  • RGB-D or laser-only modes
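Appearance-based loop closure boils down to comparing bag-of-visual-words histograms between the current frame and stored locations. A minimal sketch of that comparison (cosine similarity over word counts; RTAB-Map's actual scoring uses tf-idf weighting and Bayesian filtering on top):

```python
import math
from collections import Counter

def bow_similarity(words_a, words_b):
    """Cosine similarity between two bag-of-visual-words histograms.

    words_a / words_b are lists of quantized visual word IDs.
    """
    a, b = Counter(words_a), Counter(words_b)
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

A high score between the current frame and an old node is a loop-closure hypothesis, which is then verified geometrically before an edge is added to the graph.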

Launch file — RTAB-Map with RGB-D camera:

<!-- launch/rtabmap_rgbd.launch -->
<launch>
  <!-- Arguments -->
  <arg name="frame_id"        default="base_link"/>
  <arg name="rgb_topic"       default="/camera/rgb/image_raw"/>
  <arg name="depth_topic"     default="/camera/depth/image_raw"/>
  <arg name="camera_info"     default="/camera/rgb/camera_info"/>
  <arg name="odom_topic"      default="/odom"/>
  <arg name="wait_for_transform" default="0.2"/>

  <!-- RTAB-Map node -->
  <node name="rtabmap" pkg="rtabmap_ros" type="rtabmap" output="screen" args="--delete_db_on_start">
    <param name="frame_id"            type="string" value="$(arg frame_id)"/>
    <param name="subscribe_depth"     type="bool"   value="true"/>
    <param name="subscribe_rgb"       type="bool"   value="true"/>
    <param name="subscribe_odom"      type="bool"   value="true"/>
    <param name="wait_for_transform_duration" type="double" value="$(arg wait_for_transform)"/>
    <param name="queue_size"          type="int"    value="10"/>
    <param name="RGBD/AngularUpdate"  type="double" value="0.05"/>
    <param name="RGBD/LinearUpdate"   type="double" value="0.05"/>
    <param name="RGBD/OptimizeFromGraphEnd" type="bool" value="false"/>
    <param name="Kp/MaxFeatures"      type="int"    value="500"/>
    <param name="Mem/RehearsalSimilarity" type="double" value="0.3"/>

    <remap from="rgb/image"       to="$(arg rgb_topic)"/>
    <remap from="depth/image"     to="$(arg depth_topic)"/>
    <remap from="rgb/camera_info" to="$(arg camera_info)"/>
    <remap from="odom"            to="$(arg odom_topic)"/>
  </node>

  <!-- Visualization -->
  <node name="rtabmapviz" pkg="rtabmap_ros" type="rtabmapviz" output="screen"/>
</launch>

4. Modern: ORB-SLAM3 — Visual SLAM

4.1 Architecture

ORB-SLAM3 is a feature-based visual SLAM system that supports monocular, stereo, and RGB-D cameras, with an optional IMU. It uses ORB features and a covisibility graph for tracking, mapping, and loop closure.

┌──────────────────────────────────────────────────────────────┐
│                      ORB-SLAM3 Pipeline                      │
│                                                              │
│  ┌───────────┐   ┌───────────┐   ┌──────────┐   ┌─────────┐  │
│  │ Tracking  │──►│ Local     │──►│ Loop     │──►│ Atlas   │  │
│  │           │   │ Mapping   │   │ Closing  │   │ (maps)  │  │
│  │ • ORB     │   │ • KeyFrame│   │ • DBoW2  │   │ • Active│  │
│  │   extract │   │   insert  │   │   detect │   │   map   │  │
│  │ • Motion  │   │ • BA      │   │ • Sim(3) │   │         │  │
│  │   model   │   │ • Points  │   │   verify │   │         │  │
│  │ • Frame   │   │   culling │   │ • Graph  │   │         │  │
│  │   match   │   │           │   │   fusion │   │         │  │
│  └───────────┘   └───────────┘   └──────────┘   └─────────┘  │
│        ▲                                             │       │
│        │           pose output (Tcw)                 │       │
│        └─────────────────────────────────────────────┘       │
└──────────────────────────────────────────────────────────────┘
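The covisibility graph mentioned above links keyframes that observe enough common map points; ORB-SLAM uses a threshold of roughly 15 shared points, and the edge weight is the number of shared observations. A minimal sketch with illustrative data structures (a dict from keyframe ID to the set of observed point IDs):

```python
def covisibility_edges(observations, min_shared=15):
    """Build covisibility edges between keyframes.

    observations: {keyframe_id: set of observed map-point IDs}.
    Returns {(kf_a, kf_b): weight} for pairs sharing >= min_shared points.
    """
    kfs = sorted(observations)
    edges = {}
    for idx, a in enumerate(kfs):
        for b in kfs[idx + 1:]:
            w = len(observations[a] & observations[b])  # shared map points
            if w >= min_shared:
                edges[(a, b)] = w
    return edges
```

Local mapping and loop closing both traverse this graph to decide which keyframes take part in local BA and which are loop-closure candidates.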

4.2 Bundle Adjustment (BA)

The back-end solves a sparse BA problem. Given \(n\) camera poses and \(m\) 3D map points:

\[ \min_{\mathbf{T}_i, \mathbf{P}_j} \sum_{i=1}^{n} \sum_{j \in \mathcal{V}(i)} \rho\!\left( \left\| \mathbf{u}_{ij} - \pi(\mathbf{T}_i, \mathbf{P}_j) \right\|^2_{\Sigma} \right) \]

where:

  • \(\mathbf{T}_i \in SE(3)\) — camera pose \(i\)
  • \(\mathbf{P}_j \in \mathbb{R}^3\) — 3D point \(j\)
  • \(\pi(\cdot)\) — projection function
  • \(\mathbf{u}_{ij}\) — observed 2D keypoint
  • \(\rho(\cdot)\) — Huber robust kernel
  • \(\Sigma\) — covariance of the measurement

Solved efficiently with Levenberg–Marquardt using the Schur complement trick.
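The residual inside the sum is easy to make concrete. The sketch below uses a pinhole projection with identity rotation (so the "pose" is just a translation) and the Huber kernel applied to the squared error; intrinsics reuse the TUM-style values from the settings file later in this section. This is an illustration of the cost, not ORB-SLAM3's g2o implementation.

```python
import math

def project(t, P, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """pi(T_i, P_j): pinhole projection; rotation assumed identity,
    so the camera pose reduces to a translation t (illustrative)."""
    X, Y, Z = P[0] + t[0], P[1] + t[1], P[2] + t[2]
    return (fx * X / Z + cx, fy * Y / Z + cy)

def huber(s, delta=math.sqrt(5.991)):
    """Huber kernel rho(s) on the squared residual s (quadratic near
    zero, linear in the residual norm for outliers)."""
    d2 = delta * delta
    return s if s <= d2 else 2.0 * delta * math.sqrt(s) - d2

def reprojection_cost(t, points, observations):
    """Robustified sum of squared reprojection errors for one camera."""
    cost = 0.0
    for P, (u, v) in zip(points, observations):
        pu, pv = project(t, P)
        cost += huber((u - pu) ** 2 + (v - pv) ** 2)
    return cost
```

BA minimizes this cost jointly over all poses and points; the Schur complement exploits the fact that points only couple to the poses that observe them.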

4.3 ROS Wrapper — ORB-SLAM3

<!-- launch/orb_slam3_stereo.launch -->
<launch>
  <arg name="vocabulary_path"
       default="$(find orb_slam3_ros)/vocabulary/ORBvoc.txt"/>
  <arg name="settings_path"
       default="$(find orb_slam3_ros)/config/stereo/TUM_RGBD.yaml"/>
  <arg name="world_frame_id" default="map"/>

  <node pkg="orb_slam3_ros" type="orb_slam3_ros_node" name="orb_slam3"
        output="screen">
    <param name="voc_path"      value="$(arg vocabulary_path)"/>
    <param name="settings_path" value="$(arg settings_path)"/>
    <param name="world_frame_id" value="$(arg world_frame_id)"/>
    <remap from="camera/left/image_raw"  to="/camera/left/image_raw"/>
    <remap from="camera/right/image_raw" to="/camera/right/image_raw"/>
  </node>
</launch>

Settings YAML (example):

# TUM_RGBD.yaml — stereo settings
Camera.type: "PinHole"
Camera.fx: 525.0
Camera.fy: 525.0
Camera.cx: 319.5
Camera.cy: 239.5
Camera.bf: 40.0        # baseline * fx
Camera.width: 640
Camera.height: 480
Camera.fps: 30.0
ThDepth: 40.0

ORBextractor.nFeatures: 1200
ORBextractor.scaleFactor: 1.2
ORBextractor.nLevels: 8
ORBextractor.iniThFAST: 20
ORBextractor.minThFAST: 7

Viewer.KeyFrameSize: 0.05
Viewer.KeyFrameLineWidth: 1
Viewer.GraphLineWidth: 0.9
Viewer.PointSize: 2

5. ROS Navigation Stack Integration

5.1 Architecture

┌───────────────────────────────────────────────────────┐
│                 ROS Navigation Stack                  │
│                                                       │
│  ┌─────────────┐     ┌─────────────────────────────┐  │
│  │  move_base  │     │          Costmaps           │  │
│  │             │     │  ┌──────────┐ ┌──────────┐  │  │
│  │ ┌─────────┐ │     │  │ Global   │ │ Local    │  │  │
│  │ │ Global  │ │◄────│  │ Costmap  │ │ Costmap  │  │  │
│  │ │ Planner │ │     │  │ (static, │ │ (rolling │  │  │
│  │ │ (NavFn, │ │     │  │ obstacle)│ │  window) │  │  │
│  │ │ A*,     │ │     │  └──────────┘ └──────────┘  │  │
│  │ │ Dijkstra│ │     └─────────────────────────────┘  │
│  │ └─────────┘ │     ┌─────────────────────────────┐  │
│  │ ┌─────────┐ │     │     Recovery Behaviors      │  │
│  │ │ Local   │ │     │ • Rotate recovery           │  │
│  │ │ Planner │ │     │ • Clear costmaps            │  │
│  │ │ (DWA,   │ │     │ • Conservative reset        │  │
│  │ │  TEB)   │ │     └─────────────────────────────┘  │
│  │ └─────────┘ │                                      │
│  └─────────────┘                                      │
│        │        goal       output: cmd_vel            │
│        ▼                                              │
│    Robot Hardware (base_controller)                   │
└───────────────────────────────────────────────────────┘

5.2 move_base Launch File

<!-- launch/move_base.launch -->
<launch>
  <arg name="map_file"    default="$(find my_robot)/maps/my_map.yaml"/>
  <arg name="cmd_vel"     default="/cmd_vel"/>

  <!-- Map server -->
  <node pkg="map_server" type="map_server" name="map_server"
        args="$(arg map_file)" output="screen"/>

  <!-- AMCL localization -->
  <node pkg="amcl" type="amcl" name="amcl" output="screen">
    <param name="odom_frame_id"     value="odom"/>
    <param name="base_frame_id"     value="base_link"/>
    <param name="global_frame_id"   value="map"/>
    <param name="use_map_topic"     value="true"/>
    <param name="min_particles"     value="500"/>
    <param name="max_particles"     value="5000"/>
    <param name="laser_max_beams"   value="30"/>
    <param name="laser_max_range"   value="12.0"/>
    <param name="update_min_d"      value="0.2"/>
    <param name="update_min_a"      value="0.5"/>
    <param name="odom_model_type"   value="diff-corrected"/>
    <param name="odom_alpha1"       value="0.2"/>
    <param name="odom_alpha2"       value="0.2"/>
    <param name="odom_alpha3"       value="0.2"/>
    <param name="odom_alpha4"       value="0.2"/>
    <remap from="scan" to="/scan"/>
  </node>

  <!-- move_base -->
  <node pkg="move_base" type="move_base" name="move_base" output="screen">
    <rosparam file="$(find my_robot)/config/costmap_common_params.yaml"
              command="load" ns="global_costmap"/>
    <rosparam file="$(find my_robot)/config/costmap_common_params.yaml"
              command="load" ns="local_costmap"/>
    <rosparam file="$(find my_robot)/config/local_costmap_params.yaml"
              command="load"/>
    <rosparam file="$(find my_robot)/config/global_costmap_params.yaml"
              command="load"/>
    <rosparam file="$(find my_robot)/config/base_local_planner_params.yaml"
              command="load"/>
    <param name="base_local_planner" value="dwa_local_planner/DWAPlannerROS"/>
    <remap from="cmd_vel" to="$(arg cmd_vel)"/>
  </node>
</launch>

5.3 Costmap Configuration

# config/costmap_common_params.yaml
robot_radius: 0.18
inflation_layer:
  inflation_radius: 0.50
  cost_scaling_factor: 10.0

obstacle_layer:
  observation_sources: laser_scan
  laser_scan:
    sensor_frame: laser_link
    data_type: LaserScan
    topic: /scan
    marking: true
    clearing: true
    max_obstacle_height: 2.0
    min_obstacle_height: 0.0

# config/local_costmap_params.yaml
local_costmap:
  global_frame: odom
  robot_base_frame: base_link
  update_frequency: 5.0
  publish_frequency: 2.0
  static_map: false
  rolling_window: true
  width: 4.0
  height: 4.0
  resolution: 0.05

# config/global_costmap_params.yaml
global_costmap:
  global_frame: map
  robot_base_frame: base_link
  update_frequency: 1.0
  publish_frequency: 0.5
  static_map: true
  rolling_window: false

# config/base_local_planner_params.yaml (DWA)
DWAPlannerROS:
  max_vel_x: 0.5
  min_vel_x: -0.1
  max_vel_y: 0.0        # non-holonomic
  min_vel_y: 0.0
  max_vel_trans: 0.5
  min_vel_trans: 0.1
  max_vel_theta: 1.5
  min_vel_theta: 0.3

  acc_lim_x: 2.5
  acc_lim_y: 0.0
  acc_lim_theta: 3.2

  xy_goal_tolerance: 0.15
  yaw_goal_tolerance: 0.15
  latch_xy_goal_tolerance: true

  sim_time: 2.0
  vx_samples: 10
  vy_samples: 1
  vtheta_samples: 20

  path_distance_bias: 32.0
  goal_distance_bias: 24.0
  occdist_scale: 0.1
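The DWA parameters above map directly onto the planner's inner loop: forward-simulate each sampled (v, w) pair for `sim_time`, then score the resulting trajectory with a weighted sum of path, goal, and obstacle terms. A minimal sketch (the real planner scores against costmap cell distances; here the three terms are abstract inputs):

```python
import math

def rollout(x, y, th, v, w, sim_time=2.0, dt=0.1):
    """Forward-simulate one (v, w) velocity sample with a unicycle model,
    as the DWA local planner does for each candidate."""
    traj = [(x, y, th)]
    for _ in range(int(round(sim_time / dt))):
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
        traj.append((x, y, th))
    return traj

def dwa_score(path_dist, goal_dist, occ_cost,
              path_distance_bias=32.0, goal_distance_bias=24.0,
              occdist_scale=0.1):
    """Lower is better: weighted sum matching the YAML gains above."""
    return (path_distance_bias * path_dist
            + goal_distance_bias * goal_dist
            + occdist_scale * occ_cost)
```

With `vx_samples: 10` and `vtheta_samples: 20`, up to 200 such rollouts are scored each control cycle and the best-scoring velocity pair is sent as cmd_vel.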

6. Comparison of SLAM Approaches

| Feature      | gmapping               | Cartographer         | RTAB-Map               | ORB-SLAM3                   |
|--------------|------------------------|----------------------|------------------------|-----------------------------|
| Type         | 2D occupancy grid      | 2D/3D submaps        | Graph-based (2D/3D)    | Sparse visual               |
| Sensor       | 2D LiDAR               | 2D/3D LiDAR          | LiDAR / RGB-D / stereo | Mono / stereo / RGB-D + IMU |
| Back-end     | Particle filter        | Ceres (graph opt.)   | g2o / GTSAM            | g2o (BA)                    |
| Loop closure | ❌ No                  | ✅ Correlation-based | ✅ Visual BoW          | ✅ DBoW2                    |
| 3D mapping   | ❌ No                  | ✅ Yes (3D mode)     | ✅ Yes                 | ✅ Yes (sparse)             |
| Real-time    | ✅ Yes                 | ✅ Yes               | ✅ Yes                 | ✅ Yes                      |
| Map output   | Occupancy grid         | Grid / point cloud   | Grid / point cloud     | Sparse 3D points + poses    |
| Memory       | Scales with particles  | Submap-based         | WM/LTM management      | Keyframe-based              |
| Robustness   | Low (no loop closure)  | High                 | High                   | High (needs texture)        |
| Best for     | Simple 2D mapping      | Production robots    | Multi-sensor fusion    | Visual-only / lightweight   |
| Complexity   | ⭐⭐                   | ⭐⭐⭐               | ⭐⭐⭐                 | ⭐⭐⭐⭐                    |

7. Complete System Launch — Full Stack

Combine SLAM + Navigation into a single launch file:

<!-- launch/slam_navigation_full.launch -->
<launch>
  <!-- 1. Robot description -->
  <include file="$(find my_robot)/launch/urdf.launch"/>

  <!-- 2. Sensor drivers -->
  <include file="$(find my_robot)/launch/sensors.launch"/>

  <!-- 3. SLAM (choose one) -->
  <!-- Option A: gmapping -->
  <include file="$(find my_robot)/launch/gmapping.launch">
    <arg name="scan_topic" value="/scan"/>
  </include>

  <!-- Option B: Cartographer (uncomment to use) -->
  <!--
  <include file="$(find my_robot)/launch/cartographer_2d.launch"/>
  -->

  <!-- 4. move_base (no map_server when SLAM is running) -->
  <node pkg="move_base" type="move_base" name="move_base" output="screen">
    <rosparam file="$(find my_robot)/config/costmap_common_params.yaml"
              command="load" ns="global_costmap"/>
    <rosparam file="$(find my_robot)/config/costmap_common_params.yaml"
              command="load" ns="local_costmap"/>
    <rosparam file="$(find my_robot)/config/local_costmap_params.yaml"
              command="load"/>
    <rosparam file="$(find my_robot)/config/global_costmap_params.yaml"
              command="load"/>
    <rosparam file="$(find my_robot)/config/base_local_planner_params.yaml"
              command="load"/>
    <param name="base_local_planner" value="dwa_local_planner/DWAPlannerROS"/>
    <remap from="cmd_vel" to="/cmd_vel"/>
  </node>

  <!-- 5. RViz -->
  <node pkg="rviz" type="rviz" name="rviz"
        args="-d $(find my_robot)/rviz/slam_nav.rviz"/>
</launch>

7.1 Testing the System

# Terminal 1: Launch Gazebo simulation
roslaunch my_robot gazebo.launch

# Terminal 2: Launch SLAM + Navigation
roslaunch my_robot slam_navigation_full.launch

# Terminal 3: Send a navigation goal (or use RViz "2D Nav Goal")
rostopic pub /move_base_simple/goal geometry_msgs/PoseStamped \
  "header:
    frame_id: 'map'
  pose:
    position: {x: 3.0, y: 2.0, z: 0.0}
    orientation: {w: 1.0}"
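The `orientation: {w: 1.0}` in the goal above is a quaternion for yaw = 0. When building goals programmatically (as in Exercise 4), you need the same conversion; since a planar robot only rotates about z, it reduces to two sine/cosine terms. A pure-Python helper (the dict mirrors geometry_msgs/PoseStamped; in a real node you would fill the actual message and publish it via rospy or an actionlib client):

```python
import math

def yaw_to_quaternion(yaw):
    """Quaternion (x, y, z, w) for a rotation of `yaw` about the z axis."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

def make_goal(x, y, yaw, frame_id="map"):
    """Build a dict shaped like geometry_msgs/PoseStamped for a nav goal."""
    qx, qy, qz, qw = yaw_to_quaternion(yaw)
    return {
        "header": {"frame_id": frame_id},
        "pose": {
            "position": {"x": x, "y": y, "z": 0.0},
            "orientation": {"x": qx, "y": qy, "z": qz, "w": qw},
        },
    }
```

Prefer the move_base action interface over `/move_base_simple/goal` in scripts: the action client reports success, failure, and preemption instead of fire-and-forget.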

8. Hands-On Exercises

Exercise 1: Map a Room with gmapping

  1. Launch Gazebo with your robot in a known world
  2. Use teleop_twist_keyboard to drive the robot around
  3. Observe the map building in RViz
  4. Save the map and reload it for navigation

Exercise 2: Compare SLAM Approaches

  1. Record a rosbag with LiDAR + camera data
  2. Run gmapping, Cartographer, and RTAB-Map on the same data
  3. Compare map quality, accuracy, and computation time

Exercise 3: Visual SLAM Challenge

  1. Run ORB-SLAM3 in RGB-D mode on TUM RGB-D dataset
  2. Evaluate ATE (Absolute Trajectory Error) using rpg_trajectory_evaluation
  3. Experiment with different feature counts and compare results
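For step 2, the core of ATE is simple once the estimated and ground-truth trajectories are timestamp-associated and aligned: the RMSE of per-pose translational differences. A minimal sketch that assumes association and alignment are already done (evaluation tools additionally do SE(3)/Sim(3) alignment before this step):

```python
import math

def ate_rmse(est, gt):
    """Absolute Trajectory Error (translational RMSE).

    est, gt: equal-length lists of (x, y, z) positions, already
    associated by timestamp and aligned to a common frame.
    """
    assert len(est) == len(gt) and est, "trajectories must match in length"
    se = 0.0
    for (ex, ey, ez), (gx, gy, gz) in zip(est, gt):
        se += (ex - gx) ** 2 + (ey - gy) ** 2 + (ez - gz) ** 2
    return math.sqrt(se / len(est))
```

Report ATE alongside the feature-count setting from step 3 so the accuracy/compute trade-off is visible in your comparison.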

Exercise 4: Full Navigation Pipeline

  1. Build a map with Cartographer
  2. Save the map, launch AMCL localization + move_base
  3. Send navigation goals programmatically with a Python node
  4. Add dynamic obstacle avoidance with a simulated person

References