Learning Visual Odometry with a Convolutional Network

Kishore Konda, Roland Memisevic

2015

Abstract

We present an approach to predicting velocity and direction changes from visual information ("visual odometry") using an end-to-end, deep learning-based architecture. The architecture uses a single type of computational module and learning rule to extract visual motion, depth, and finally odometry information from the raw data. Representations of depth and motion are extracted by detecting synchrony across time and stereo channels using network layers with multiplicative interactions. The extracted representations are turned into information about changes in velocity and direction using a convolutional neural network. Preliminary results show that the architecture is capable of learning the resulting mapping from video to egomotion.
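The synchrony-detection idea described in the abstract can be illustrated with a minimal sketch: filter two input channels (e.g. the left and right stereo frames, or two consecutive frames in time) and take the elementwise product of the filter responses, so a feature responds strongly only when both channels contain matching structure. The filter banks, dimensions, and function names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): flattened patch size
# and number of filters per channel.
patch_dim, n_filters = 64, 16

# Random filter banks standing in for learned filters over the two
# channels; in the paper these would be learned from data.
W_x = rng.standard_normal((patch_dim, n_filters))
W_y = rng.standard_normal((patch_dim, n_filters))

def synchrony_features(x, y):
    """Multiplicative-interaction layer: filter each channel separately,
    then multiply the responses elementwise. The product is large only
    when both channels respond to the same filter (synchrony)."""
    fx = x @ W_x
    fy = y @ W_y
    return fx * fy

# Example: features for a pair of random patches.
x = rng.standard_normal(patch_dim)
y = rng.standard_normal(patch_dim)
features = synchrony_features(x, y)
```

In the paper's pipeline, feature vectors of this kind (computed convolutionally over stereo video) form the input to a convolutional network that regresses changes in velocity and direction.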

Paper Citation


in Harvard Style

Konda K. and Memisevic R. (2015). Learning Visual Odometry with a Convolutional Network. In Proceedings of the 10th International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2015), ISBN 978-989-758-089-5, pages 486-490. DOI: 10.5220/0005299304860490

in Bibtex Style

@conference{visapp15,
author={Kishore Konda and Roland Memisevic},
title={Learning Visual Odometry with a Convolutional Network},
booktitle={Proceedings of the 10th International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2015)},
year={2015},
pages={486--490},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005299304860490},
isbn={978-989-758-089-5},
}


in EndNote Style

TY - CONF
JO - Proceedings of the 10th International Conference on Computer Vision Theory and Applications - Volume 1: VISAPP, (VISIGRAPP 2015)
TI - Learning Visual Odometry with a Convolutional Network
SN - 978-989-758-089-5
AU - Konda K.
AU - Memisevic R.
PY - 2015
SP - 486
EP - 490
DO - 10.5220/0005299304860490