## SLAM - Simultaneous Localization And Mapping

The problem of learning maps is one of the fundamental problems in mobile robotics. Models of the environment are needed for a series of applications such as transportation, cleaning, rescue, and various other service robotic tasks. Learning maps passively, i.e., by perceiving sensor data only, requires the solution of two tasks: mapping and localization. Mapping is the problem of integrating the information gathered with the robot's sensors into a given representation. It can intuitively be described by the question "What does the world look like?" Central aspects of mapping are the representation of the environment and the interpretation of sensor data. In contrast, localization is the problem of estimating the pose of the robot relative to a map. In other words, the robot has to answer the question "Where am I?" These two tasks cannot be solved independently of each other.
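To make the mapping side concrete, a common grid-based representation integrates sensor data by updating each cell's occupancy belief in log-odds form. The sketch below is illustrative only; the inverse sensor model probabilities (`p_hit`, `p_miss`) are assumed values, not parameters from our systems.

```python
import math

def update_cell(log_odds, hit, p_hit=0.7, p_miss=0.4):
    """Log-odds update of one occupancy grid cell.

    hit: True if a sensor beam ended in this cell, False if it passed through.
    p_hit / p_miss are illustrative inverse sensor model probabilities.
    """
    p = p_hit if hit else p_miss
    return log_odds + math.log(p / (1.0 - p))

# Observing the cell as occupied three times pushes the belief toward "occupied".
l = 0.0  # prior log-odds of 0 corresponds to p = 0.5 (unknown)
for _ in range(3):
    l = update_cell(l, hit=True)
prob = 1.0 - 1.0 / (1.0 + math.exp(l))  # convert log-odds back to probability
```

The log-odds form is convenient because each observation contributes an additive term, so integrating a new scan is a constant-time update per cell.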

Whenever robots operate in the real world, not only their observations but also their motion is affected by noise. Building accurate spatial models under these conditions is widely known as the simultaneous localization and mapping (SLAM) problem. In the last few years, we developed multiple solutions to the SLAM problem. One solution is based on a Rao-Blackwellized particle filter, a probabilistic state estimation technique that can deal with non-Gaussian noise and provides a joint posterior over the map of the environment and the trajectory taken by the robot. Our mapping approach uses an informed proposal distribution to create the next generation of particles. This allows us to draw samples only in those areas where the robot is likely to be located. As a result, our technique can construct grid maps from large datasets in indoor as well as in structured outdoor environments. Our approach makes no assumptions about the structure of the environment, such as parallel walls, and requires no pre-defined feature extractors.
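The particle filter cycle underlying this approach can be sketched in one dimension. This is a generic predict/weight/resample loop, not the GMapping implementation: the motion and sensor noise levels are assumed values, and the proposal here is the plain odometry motion model rather than the informed proposal described above.

```python
import math
import random

random.seed(1)

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def pf_step(particles, control, observation, motion_noise=0.2, obs_noise=0.5):
    # Predict: propagate each particle through the noisy motion model.
    particles = [p + control + random.gauss(0.0, motion_noise) for p in particles]
    # Weight: importance weight from the observation likelihood.
    weights = [gauss_pdf(observation, p, obs_noise) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample (low-variance / systematic): concentrate particles on likely poses.
    n = len(particles)
    r = random.uniform(0.0, 1.0 / n)
    c, i, out = weights[0], 0, []
    for m in range(n):
        u = r + m / n
        while u > c and i < n - 1:
            i += 1
            c += weights[i]
        out.append(particles[i])
    return out

# Toy run: the robot starts near 0 and moves +1 per step; the sensor
# measures absolute position with Gaussian noise.
particles = [random.uniform(-3.0, 3.0) for _ in range(200)]
true_pose = 0.0
for _ in range(5):
    true_pose += 1.0
    z = true_pose + random.gauss(0.0, 0.5)
    particles = pf_step(particles, 1.0, z)
est = sum(particles) / len(particles)
```

Rao-Blackwellization enters when each particle additionally carries its own map, computed analytically given the sampled trajectory, so only the trajectory itself needs to be sampled.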

The other solutions we developed use the so-called graph-based formulation of the SLAM problem, in which the poses of the robot are modeled by nodes in a graph. Constraints between poses resulting from observations or from odometry are encoded in the edges between the nodes. The goal of an algorithm designed to solve this problem is to find the configuration of the nodes that maximizes the observation likelihood encoded in the constraints. The approach TORO that we developed is an extension of Olson's algorithm and uses a variant of stochastic gradient descent to compute node configurations with low errors. Here, we parametrize the nodes in the graph using a tree, which yields several improvements over the traditional parametrization. Additionally, we are able to bound the complexity by the size of the environment rather than the length of the trajectory. In addition, we recently published our optimization back-end called HOG-Man. This highly efficient system exploits the fact that the underlying space is not Euclidean but a manifold. It furthermore builds up a hierarchical pose graph and can exploit this hierarchy to efficiently provide the topology of the environment, allowing for efficient data association. It is explicitly designed for online operation.
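The graph-based formulation can be illustrated with a toy one-dimensional pose graph. The sketch below minimizes the constraint error with plain gradient descent on scalar poses; TORO itself uses a tree parametrization, stochastic gradient descent, and full 2D/3D poses, so treat this only as a minimal instance of the error function being optimized. The edge values are invented for illustration.

```python
# Toy 1-D pose graph: nodes are scalar poses, and an edge (i, j, z) encodes
# the constraint that pose[j] - pose[i] should equal the measurement z.
def optimize(poses, edges, iters=200, lr=0.1):
    """Gradient descent on the sum of squared constraint errors.

    Node 0 is kept fixed as the anchor; otherwise the solution would be
    determined only up to a global shift.
    """
    poses = list(poses)
    for _ in range(iters):
        grad = [0.0] * len(poses)
        for i, j, z in edges:
            e = (poses[j] - poses[i]) - z  # residual of one constraint
            grad[j] += e
            grad[i] -= e
        for k in range(1, len(poses)):
            poses[k] -= lr * grad[k]
    return poses

# Odometry chain 0->1->2->3 of +1.0 each, plus a loop-closure edge saying
# node 3 is only 2.4 away from node 0: the odometry has drifted, and the
# optimizer distributes the error over the whole chain.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (0, 3, 2.4)]
opt = optimize([0.0, 1.0, 2.0, 3.0], edges)
```

After optimization the poses settle close to [0, 0.85, 1.7, 2.55], the least-squares compromise between the drifted odometry and the loop closure; this error distribution along the trajectory is exactly what pose-graph back-ends such as TORO and HOG-Man compute at scale.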

All three approaches (GMapping, TORO, HOG-Man) are available at www.OpenSLAM.org.