Publications

Bastian Steder.
Feature-Based 3D Perception for Mobile Robots.
PhD Thesis, Albert-Ludwigs-University of Freiburg, Department of Computer Science, April 2013.

Abstract

Perception is one of the key topics in robotics research. It is about the processing of external sensor data and its interpretation. Strong perceptual abilities are a basic requirement for a robot working in an environment that was not specifically designed for the robot. Such an environment might be completely unknown or may change over time, so that a model cannot be provided to the robot a priori. Most people would only judge a robot to be truly intelligent if it perceives its environment, understands what is happening around it, and acts accordingly. This is especially important in mobile robotics. A robot that moves through an environment and interacts with it has to know what is going on around it, where it is, where it can go, and where objects necessary for its task are located. The topic of this thesis is the interpretation of low-level sensor information and its application in high-level tasks, specifically in the area of mobile robotics. We mainly focus on 3D perception, meaning the analysis and interpretation of 3D range scans. This kind of data provides accurate spatial information and is typically not dependent on lighting conditions. Spatial information is especially important for navigation tasks, which, by nature, are elementary for mobile platforms. To solve different problems, we extract features from the sensor data, which can then be used efficiently to solve the task at hand. The term “feature” is very broad in this context and means that some useful information is derived from raw data, which is easier to interpret than the original input. At first, we discuss the benefits of point feature extraction from 3D range scans, meaning the detection of interesting areas and an efficient description of the local data. Such point features are typically employed for similarity measures between chunks of data.
We present an approach for point feature extraction and then build on it to create several systems that tackle highly relevant topics of modern robotics. These include the detection of known objects in a 3D scan, the unsupervised creation of object models from a collection of scans, the representation of an environment with a small number of surface primitives, and the ability to find the current position of a robot based on a single 3D scan and a database of already known places. All these problems require an algorithm that detects similar structures in homogeneous sensor data, i.e., that finds corresponding areas between 3D range scans. In addition to this, we discuss a system where finding correspondences between heterogeneous data types is relevant. Specifically, we search for corresponding areas in 3D range data and visual images to determine the position of a robot in an aerial image. Finally, we present a complete robotic system, designed to navigate like a pedestrian in an urban environment. Such a system is built up from a multitude of different modules, each of which must meet high robustness requirements to ensure the reliability of the entire system. Our core contribution to this system is a module that analyzes the low-level sensor data and provides high-level information to the other modules; specifically, it performs a traversability analysis and obstacle detection in 2D and 3D laser data. We present several innovative algorithms and techniques that advance the state of the art. We use them to build systems that enable us to address complex 3D perception problems, outperforming existing approaches. We evaluate our methods in challenging settings, focusing on realistic applications and using real-world data.

BibTeX entry:

@phdthesis{steder13phd,
  author = {Steder, Bastian},
  school = {Albert-Ludwigs-University of Freiburg, Department of Computer Science},
  title = {Feature-Based 3D Perception for Mobile Robots},
  month = apr,
  year = {2013}
}