Photometric transfer for direct visual odometry

Abstract

Owing to its efficient use of photometric information, direct visual odometry (DVO) is widely used to simultaneously estimate the ego-motion of a moving camera and map the environment from video, especially in challenging weakly textured scenes. However, DVO suffers from brightness discrepancies because it registers frames directly on pixel intensity patterns to estimate camera pose. Most existing brightness transfer methods build a fixed transfer function, which is ill-suited to the successive and inconsistent brightness changes encountered in practice. To overcome this problem, we propose a Photometric Transfer Net (PTNet) that is trained to remove brightness discrepancies between two frames pixel-wise without destroying contextual information. Photometric consistency in DVO is obtained by adjusting the source frame to match the reference frame. Since no dataset is available for training the photometric transfer model, we augment the EuRoC dataset by generating, for each original frame, several frames at different brightness levels via a nonlinear transformation. Training data containing various brightness changes and scene motions, along with ground truth, can then be collected from the extended sequences. Evaluations on both real-world and synthetic datasets demonstrate the effectiveness of the proposed model, and assessment on an unseen dataset with model parameters fixed after training on another dataset shows its generalization ability. Furthermore, we embed the model into DVO to preprocess input data with brightness discrepancies. Experimental results show that PTNet-based DVO achieves more robust initialization and more accurate pose estimation than the original system.
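As a rough illustration of the augmentation strategy described above, the sketch below generates brightness-shifted variants of a frame with a gamma-style nonlinear intensity transform. The abstract does not specify the exact transformation or the number of variants per frame, so the gamma values and the helper name augment_brightness are illustrative assumptions, not the paper's method.

```python
# Minimal sketch of the brightness-level data augmentation described in the
# abstract: for each original frame, synthesize several variants at different
# brightness levels via a nonlinear (here: gamma-style) intensity transform.
# The gamma values and variant count below are illustrative assumptions.
import numpy as np

def augment_brightness(frame: np.ndarray, gammas=(0.5, 0.8, 1.25, 2.0)):
    """Return brightness-shifted copies of an 8-bit grayscale frame.

    frame:  HxW uint8 array with intensities in [0, 255].
    gammas: nonlinear exponents; gamma < 1 brightens, gamma > 1 darkens.
    """
    normalized = frame.astype(np.float32) / 255.0
    variants = []
    for gamma in gammas:
        shifted = np.power(normalized, gamma)  # nonlinear brightness change
        variants.append((shifted * 255.0).round().astype(np.uint8))
    return variants

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-in frame at the EuRoC camera resolution (752x480).
    frame = rng.integers(0, 256, size=(480, 752), dtype=np.uint8)
    for v in augment_brightness(frame):
        print(v.mean())  # mean intensity shifts with each gamma
```

In a setup like this, each shifted variant would be paired with its unmodified source frame, which serves as the ground-truth target for learning the photometric transfer.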

Keywords: Photometric transfer, Direct visual odometry, Data augmentation, Brightness discrepancy, Deep learning

Article history: Received 4 July 2020, Revised 23 October 2020, Accepted 9 December 2020, Available online 24 December 2020, Version of Record 26 December 2020.

DOI: https://doi.org/10.1016/j.knosys.2020.106671