
TORO - News and Updates

2008-01-26
A new version of TORO has been released that fixes some convergence problems in 3D datasets (the distribution of the rotation error in 3D has changed).

What is TORO - a Tree-based netwORk Optimizer


(logo artwork by Christian Plagemann)

Abstract

TORO is an optimization approach for constraint networks. It provides a highly efficient, gradient-descent-based error minimization procedure.
In 2006, Olson et al. presented a novel approach to solving the graph-based SLAM problem by applying stochastic gradient descent to minimize the error introduced by the constraints. TORO is an extension of Olson's algorithm. It applies a tree parameterization of the nodes in the graph that significantly improves the performance and enables a robot to cope with arbitrary network topologies. The latter allows the complexity of the algorithm to be bounded by the size of the mapped area rather than by the length of the trajectory.
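
To give an intuition of the underlying idea, the sketch below (our own simplified illustration, not code from the TORO source) runs stochastic gradient descent on a toy, translation-only 2D constraint network. The node values, constraints, and the even split of each correction between the two endpoints are assumptions made for brevity; TORO additionally handles orientations and information matrices and distributes the corrections along a tree parameterization of the graph, which is what makes large networks tractable.

#include <cstdio>
#include <vector>

// Toy example: nodes carry only a 2D position; real pose graphs also carry
// orientations and per-constraint information matrices.
struct Pose { double x, y; };
struct Constraint { int i, j; double dx, dy; };  // measured offset from node i to node j

int main() {
  // Hypothetical network: four nodes around a unit square with noisy initial guesses.
  std::vector<Pose> nodes = {{0.0, 0.0}, {1.1, 0.0}, {1.0, 1.2}, {0.1, 1.0}};
  std::vector<Constraint> constraints = {
      {0, 1, 1.0, 0.0}, {1, 2, 0.0, 1.0}, {2, 3, -1.0, 0.0}, {3, 0, 0.0, -1.0}};

  for (int it = 1; it <= 100; ++it) {
    double lr = 1.0 / it;  // decreasing learning rate, as in stochastic gradient descent
    for (const Constraint& c : constraints) {
      // Residual: how much the current estimate violates this constraint.
      double rx = (nodes[c.j].x - nodes[c.i].x) - c.dx;
      double ry = (nodes[c.j].y - nodes[c.i].y) - c.dy;
      // Distribute a damped correction to the two endpoints. TORO instead
      // spreads the correction over the path between i and j in the tree.
      nodes[c.i].x += 0.5 * lr * rx;  nodes[c.i].y += 0.5 * lr * ry;
      nodes[c.j].x -= 0.5 * lr * rx;  nodes[c.j].y -= 0.5 * lr * ry;
    }
  }
  for (const Pose& p : nodes) std::printf("%.3f %.3f\n", p.x, p.y);
  return 0;
}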

Papers

Giorgio Grisetti, Cyrill Stachniss, Slawomir Grzonka, and Wolfram Burgard
A Tree Parameterization for Efficiently Computing Maximum Likelihood Maps using Gradient Descent.

In Proc. of Robotics: Science and Systems (RSS), Atlanta, GA, USA, 2007.
paper (8 pg, pdf)
 
Giorgio Grisetti, Slawomir Grzonka, Cyrill Stachniss, Patrick Pfaff, and Wolfram Burgard
Efficient Estimation of Accurate Maximum Likelihood Maps in 3D.
In Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2007.
paper (7 pg, pdf) -- WARNING: The math in this paper can cause convergence problems in 3D datasets resulting from the distribution of the rotation error. The released implementation has been fixed so that this issue does not arise.


Get the Source Code!

The source code is available via svn:

svn co https://www.openslam.org/data/svn/toro

The software is written in C++ (GNU C++ compiler, developed under Linux) and has no dependencies other than the Standard Template Library. Doxygen documentation is available.

Logfile Format

The graph is represented by a set of simple text messages describing its nodes and edges. Example files are available in the data folder of the repository.

Format of the 2D graph files (an illustrative example follows the list):

Every line in the file specifies either one vertex or one edge

The vertices are specified as follows: VERTEX2 id x y orientation (A 2D node in the graph)

The edges are specified as follows: EDGE2 observed_vertex_id observing_vertex_id forward sideward rotate inf_ff inf_fs inf_ss inf_rr inf_fr inf_sr (A 2D edge in the graph. inf_xx are the information matrix entries of the constraint)

EQUIV id1 id2 (An equivalence constraint between nodes. It merges the nodes id1 and id2 with respect to the constraint between both vertices.)
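
For illustration only (all values below are made up), a minimal 2D graph file following the token order documented above could look like this:

VERTEX2 0 0.000 0.000 0.000
VERTEX2 1 1.020 0.015 0.010
VERTEX2 2 1.998 0.043 0.021
EDGE2 0 1 1.0 0.0 0.0 100.0 0.0 100.0 1000.0 0.0 0.0
EDGE2 1 2 1.0 0.0 0.0 100.0 0.0 100.0 1000.0 0.0 0.0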

Format of the 3D graph files:

Every line in the file specifies either one vertex or one edge

The vertices are specified as follows: VERTEX3 id x y z phi theta psi (A 3D node in the graph)

The edges are specified as follows: EDGE3 observed_vertex_id observing_vertex_id x y z roll pitch yaw inf_11 inf_12 .. inf_16 inf_22 .. inf_66 (A 3D edge in the graph. The information matrix is specified via its upper triangular part, i.e. 21 values, enumerated row by row; see the sketch below for expanding them into the full matrix.)
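
As a sketch (our own illustration, not code from the TORO source), the 21 upper-triangular entries of an EDGE3 information matrix could be expanded into the full symmetric 6x6 matrix like this:

#include <cstdio>

int main() {
  // inf_11 inf_12 ... inf_16 inf_22 ... inf_66: the upper triangle, row by row.
  double upper[21];
  for (int k = 0; k < 21; ++k) upper[k] = k + 1;  // dummy values for the demo

  double info[6][6];
  int k = 0;
  for (int r = 0; r < 6; ++r) {
    for (int c = r; c < 6; ++c) {
      info[r][c] = upper[k];
      info[c][r] = upper[k];  // mirror into the lower triangle
      ++k;
    }
  }

  for (int r = 0; r < 6; ++r) {
    for (int c = 0; c < 6; ++c) std::printf("%5.1f ", info[r][c]);
    std::printf("\n");
  }
  return 0;
}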

Datasets

Some 2D and 3D example files are available in the data folder of the repository.

Images and Video Material

Corrected network
Method: Our 3D approach (3D tree parameterization), no node reduction
Result after 100 iterations

Corrected network
Method: Our approach (tree parameterization), no node reduction
Size: 500,000 nodes / 2 million constraints
Noise: sigma-x/y = 0.05, sigma-theta = 0.02
Result after 100 iterations

Corrected network
Method: Our approach (tree parameterization), no node reduction
Size: 200,000 nodes / 1 million constraints
Noise: sigma-x/y = 0.01, sigma-theta = 0.005
Result after 100 iterations

Video of the individual iterations
Method: Our approach (tree parameterization), no node reduction
Size: 5,000 nodes / 30,000 constraints

Real-world dataset: Intel Research Lab
Video of the individual iterations
Constraints are obtained via pairwise scan matching

Comparison of the error per link per iteration
Method: Olson's algorithm and our approach (tree parameterization), with and without node reduction
Size: 5,000 nodes
Noise: sigma-x/y = 0.1, sigma-theta = 0.05
Confidence: 0.05
Note: With the node reduction technique, the error per iteration decreases more slowly than with the pure tree approach (note the log scale). Each iteration, however, can be executed significantly faster (see Fig. 6 in the RSS paper).

Average path length for random networks of different sizes
(operations per constraint and iteration)