From rendering to tracking point-based 3D models

Authors:

Highlights:

Abstract

This paper adds to the abundant visual tracking literature with two main contributions. First, we illustrate the value of using Graphics Processing Units (GPUs) to support efficient implementations of computer vision algorithms; second, we introduce the use of point-based 3D models as a shape prior for real-time 3D tracking with a monocular camera. The joint use of point-based 3D models and the GPU makes it possible to adapt and simplify an existing tracking algorithm originally designed for triangular meshes. Point-based models are of particular interest in this context because they are the direct output of most laser scanners. We show that state-of-the-art techniques developed for point-based rendering can be used to compute, in real time, intermediate values required for visual tracking. In particular, apparent motion predictors at each pixel are computed in parallel, and novel views of the tracked object are generated online to help wide-baseline matching. Both computations derive from the same general surface splatting technique, which we implement, along with other low-level vision tasks, on the GPU, leading to a real-time tracking algorithm.
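To make the surface-splatting idea behind the abstract concrete, here is a minimal CPU sketch (not the authors' GPU implementation) of the accumulation-and-normalization pass used in point-based rendering: each projected point deposits a Gaussian-weighted color into nearby pixels, and the buffer is normalized by the accumulated weights. All names and parameters are illustrative.

```python
import numpy as np

def splat(points_2d, colors, radius, h, w):
    """Render a point set by Gaussian splatting (illustrative sketch).

    points_2d: (N, 2) projected pixel coordinates
    colors:    (N, 3) per-point RGB colors
    radius:    Gaussian footprint (sigma) of each splat, in pixels
    """
    acc = np.zeros((h, w, 3))     # weighted color accumulation buffer
    wsum = np.zeros((h, w))       # per-pixel weight sum for normalization
    r = int(np.ceil(3 * radius))  # truncate each Gaussian at 3 sigma
    for (px, py), c in zip(points_2d, colors):
        x0, x1 = max(0, int(px) - r), min(w, int(px) + r + 1)
        y0, y1 = max(0, int(py) - r), min(h, int(py) + r + 1)
        ys, xs = np.mgrid[y0:y1, x0:x1]
        d2 = (xs - px) ** 2 + (ys - py) ** 2
        wgt = np.exp(-d2 / (2.0 * radius ** 2))
        acc[y0:y1, x0:x1] += wgt[..., None] * c
        wsum[y0:y1, x0:x1] += wgt
    covered = wsum > 1e-8          # avoid dividing by zero in empty pixels
    acc[covered] /= wsum[covered, None]
    return acc
```

In the paper's setting, the per-point loop is what the GPU parallelizes, and the same accumulation machinery that blends colors can carry other per-pixel quantities (such as the motion predictors mentioned above) instead of RGB values.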

Keywords: Visual tracking, Point-based model, Surface splatting, GPGPU

Article history: Received 10 September 2008, Revised 8 February 2010, Accepted 2 March 2010, Available online 15 March 2010.

DOI: https://doi.org/10.1016/j.imavis.2010.03.001