A framework for estimating relative depth in video

Abstract

We present a method for efficiently generating dense, relative depth estimates from video without requiring any knowledge of the imaging system, either a priori or by estimating it during processing. Instead, we require only that the epipolar constraint between any two frames is satisfied and that the fundamental matrix can be estimated. By tracking sparse features across many frames and aggregating the multiple depth estimates together, we are able to improve the overall estimate for any given frame. Once the depth estimates are available, we treat the generation of the depth maps as a label propagation problem. This allows us to combine the automatically generated depth maps with any user corrections and modifications, if so desired.
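
The abstract rests on two standard geometric ingredients: tracking sparse features between a pair of frames, and estimating the fundamental matrix F that encodes their epipolar constraint (corresponding points satisfy x_b^T F x_a = 0). The sketch below illustrates that pairwise step with OpenCV; it is not the authors' implementation, and the function name, inputs, and parameter values are assumptions made for illustration only.

```python
# Minimal sketch (not the authors' implementation) of sparse feature tracking
# between two frames and robust fundamental-matrix estimation with OpenCV.
# Function name, inputs, and parameter values are illustrative assumptions.
import cv2


def track_and_estimate_F(frame_a, frame_b):
    """Track corners from frame_a to frame_b and fit F so that x_b^T F x_a ~ 0."""
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)

    # Detect sparse corner features in the first frame.
    pts_a = cv2.goodFeaturesToTrack(gray_a, maxCorners=2000,
                                    qualityLevel=0.01, minDistance=7)

    # Track them into the second frame with pyramidal Lucas-Kanade optical flow.
    pts_b, status, _err = cv2.calcOpticalFlowPyrLK(gray_a, gray_b, pts_a, None)
    ok = status.ravel() == 1
    pts_a = pts_a[ok].reshape(-1, 2)
    pts_b = pts_b[ok].reshape(-1, 2)

    # Robustly estimate the fundamental matrix; the RANSAC inliers are the
    # correspondences that satisfy the epipolar constraint to within ~1 px.
    F, mask = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC, 1.0, 0.99)
    if F is None:
        raise RuntimeError("fundamental matrix estimation failed")
    inliers = mask.ravel() == 1
    return F, pts_a[inliers], pts_b[inliers]
```

In the paper's pipeline, such pairwise estimates would then be aggregated across many frames and the sparse depths propagated to dense maps; the sketch covers only the pairwise geometric step described in the abstract.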

Article history: Received 29 July 2014, Accepted 4 January 2015, Available online 9 January 2015.

DOI: https://doi.org/10.1016/j.cviu.2015.01.001