View-based SLAM using Omnidirectional Images

D. Valiente, A. Gil, L. Fernández, O. Reinoso

2012

Abstract

In this paper we focus on the problem of Simultaneous Localization and Mapping (SLAM) using visual information obtained from the environment. In particular, we propose the use of a single omnidirectional camera to carry out this task. Many approaches to visual SLAM concentrate on estimating the position of a set of 3D points, commonly denoted visual landmarks, which are extracted from images acquired in the environment. Thus the complexity of the map computation grows as the number of visual landmarks in the map increases. In this paper we propose a different representation of the environment that presents a series of advantages compared to the aforementioned approaches, such as a simplified computation of the map and a more compact representation of the environment. Concretely, the map is represented by a set of views captured from particular places in the environment. Each view is composed of its position and orientation in the map and a set of 2D interest points expressed in the image reference frame. Thus, each view stores the relative orientation of a set of visual landmarks. During the map building stage, the robot captures an image and finds corresponding points between the current view and the views stored in the map. Assuming that a set of corresponding points is found, the transformation between both views can be computed, thus allowing us to build the map and estimate the pose of the robot. In the suggested framework, the problem of finding correspondences between views is troublesome. Consequently, with the aim of obtaining a more reliable approach, we propose a new method to find correspondences between two omnidirectional images in which the relative error between them is modeled by a Gaussian distribution correlated with the current error in the map. In order to validate the ideas presented here, we have carried out a series of experiments in a real environment using real data. Experimental results are presented to demonstrate the validity of the proposed solution.
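
The abstract describes a map made of views (each holding a pose and a set of 2D interest points) and an observation step that matches the current image against the stored views before estimating a relative transformation. The sketch below illustrates that structure only; the class and function names, the ratio-test matching, and the placeholder transformation step are illustrative assumptions and not the paper's implementation, which constrains matching with a Gaussian error model and recovers the transformation from the epipolar geometry of the omnidirectional views.

```python
from dataclasses import dataclass
import numpy as np


@dataclass
class View:
    """A stored view: its pose in the map plus 2D interest points
    expressed in the image reference frame (no 3D landmarks)."""
    pose: np.ndarray         # (x, y, theta) of the robot when the view was captured
    keypoints: np.ndarray    # N x 2 interest-point coordinates in the image
    descriptors: np.ndarray  # N x D feature descriptors


def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Brute-force nearest-neighbour matching with a ratio test.
    This is a simple stand-in for the paper's Gaussian-constrained
    correspondence search, not the method itself."""
    if len(desc_b) < 2:
        return []
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j, k = np.argsort(dists)[:2]
        if dists[j] < ratio * dists[k]:
            matches.append((i, j))
    return matches


class ViewBasedMap:
    """Map represented as a collection of views; view poses would be
    refined by the SLAM filter as new relative observations arrive."""

    def __init__(self):
        self.views = []

    def observe(self, keypoints, descriptors, odometry_pose, min_matches=20):
        """Match the current image against the stored views. With enough
        correspondences, a relative transformation between the views could
        be estimated (placeholder here) and passed to the filter; otherwise
        the current image is added to the map as a new view."""
        for view in self.views:
            matches = match_descriptors(descriptors, view.descriptors)
            if len(matches) >= min_matches:
                # The transformation computation from the matched points
                # (epipolar geometry of omnidirectional views) is omitted.
                return ("observation", view, matches)
        new_view = View(np.asarray(odometry_pose, float), keypoints, descriptors)
        self.views.append(new_view)
        return ("new_view", new_view, [])
```

The appeal of this representation, as the abstract argues, is that the map grows with the number of views rather than with the number of individual 3D landmarks, so revisited areas reuse an existing view instead of enlarging the state.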

Paper Citation


in Harvard Style

Valiente D., Gil A., Fernández L. and Reinoso O. (2012). View-based SLAM using Omnidirectional Images. In Proceedings of the 9th International Conference on Informatics in Control, Automation and Robotics - Volume 2: ICINCO, ISBN 978-989-8565-22-8, pages 48-57. DOI: 10.5220/0004031800480057

in Bibtex Style

@conference{icinco12,
author={D. Valiente and A. Gil and L. Fernández and O. Reinoso},
title={View-based SLAM using Omnidirectional Images},
booktitle={Proceedings of the 9th International Conference on Informatics in Control, Automation and Robotics - Volume 2: ICINCO},
year={2012},
pages={48-57},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0004031800480057},
isbn={978-989-8565-22-8},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 9th International Conference on Informatics in Control, Automation and Robotics - Volume 2: ICINCO
TI - View-based SLAM using Omnidirectional Images
SN - 978-989-8565-22-8
AU - Valiente D.
AU - Gil A.
AU - Fernández L.
AU - Reinoso O.
PY - 2012
SP - 48
EP - 57
DO - 10.5220/0004031800480057