Predictive monocular odometry (PMO): What is possible without RANSAC and multiframe bundle adjustment?

Authors:

Highlights:

Abstract:

Visual odometry using only a monocular camera faces more algorithmic challenges than stereo odometry. We present a robust monocular visual odometry framework for automotive applications. We propose an extended propagation-based tracking framework that yields highly accurate (unscaled) pose estimates. Scale is supplied by ground plane pose estimation, employing street-pixel labeling with a convolutional neural network (CNN). The proposed framework has been extensively tested on the KITTI dataset and achieves a higher rank than currently published state-of-the-art monocular methods in the KITTI odometry benchmark. Unlike other VO/SLAM methods, this result is achieved without a loop-closure mechanism, without RANSAC, and without multiframe bundle adjustment. Thus, we challenge the common belief that robust systems can only be built using iterative robustification tools like RANSAC.
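The abstract's scale-supply step rests on a standard idea: a monocular reconstruction is metric up to one unknown factor, which can be fixed by comparing the camera's known mounting height with its unscaled distance to an estimated ground plane. A minimal sketch of that computation follows; the function name, plane parameterization, and the 1.65 m mounting height are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def metric_scale_from_ground_plane(plane_normal, plane_offset, camera_height_m):
    """Recover the metric scale factor from an estimated ground plane.

    Assumes a plane fitted to unscaled triangulated street points,
    written as n . X = d in the camera frame. The unscaled camera-to-ground
    distance is |d| / ||n||; dividing the known mounting height by it
    yields the scale factor to apply to the unscaled translation.
    """
    unscaled_height = abs(plane_offset) / np.linalg.norm(plane_normal)
    return camera_height_m / unscaled_height

# Example: plane n = (0, 1, 0), offset d = 0.5 in arbitrary reconstruction
# units; camera mounted 1.65 m above the road (a typical KITTI-like setup).
s = metric_scale_from_ground_plane(np.array([0.0, 1.0, 0.0]), 0.5, 1.65)
# s == 1.65 / 0.5 == 3.3
```

In practice the plane would be fitted only to pixels the CNN labels as street, so that obstacles and other vehicles do not corrupt the height estimate.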

Keywords: Visual odometry, Monocular visual odometry, SLAM, Pose prediction, Joint epipolar tracking, Ground plane estimation, Driver assistance

Article history: Received 14 April 2016, Revised 12 July 2017, Accepted 20 August 2017, Available online 25 August 2017, Version of Record 30 November 2017.

DOI: https://doi.org/10.1016/j.imavis.2017.08.002