Leveraging observation uncertainty for robust visual tracking

Authors:

Highlights:

Abstract

In this paper, the accuracy of visual tracking is enhanced by leveraging a novel measure of observation quality. We measure observation quality with mutual information and consider the interval covered by that mutual information; the length of this interval is proposed as the observation uncertainty. The best observation is the one that both maximizes observation quality and minimizes observation uncertainty. We show that searching for the best observation in these terms amounts to preprocessing the image by subtracting the background, detecting salient regions, and rendering the image illumination-invariant. These preprocessing steps are very fast and can precede any existing tracker. Experiments show that the performance of several trackers is substantially boosted when they run on our preprocessed images rather than on the raw input for which they were designed. In all cases, the version with preprocessing significantly outperforms the original tracker.
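The abstract describes a tracker-agnostic preprocessing chain (background subtraction, salient-region detection, illumination normalization). The sketch below is a minimal, hypothetical illustration of such a pipeline using standard OpenCV components; it is not the authors' formulation, and the specific operators (MOG2, spectral-residual saliency, CLAHE) are assumptions chosen only to make the idea concrete.

```python
import cv2
import numpy as np

def preprocess_frame(frame, bg_subtractor, saliency):
    """Illustrative preprocessing: background subtraction, saliency
    detection, and illumination normalization (hypothetical pipeline,
    not the paper's exact method)."""
    # 1. Background subtraction: keep only foreground pixels.
    fg_mask = bg_subtractor.apply(frame)
    foreground = cv2.bitwise_and(frame, frame, mask=fg_mask)

    # 2. Salient-region detection (spectral-residual saliency as a stand-in).
    ok, saliency_map = saliency.computeSaliency(foreground)
    saliency_map = (saliency_map * 255).astype(np.uint8)

    # 3. Illumination normalization via CLAHE on the lightness channel.
    lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    lab[:, :, 0] = clahe.apply(lab[:, :, 0])
    normalized = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

    # Emphasize salient foreground regions in the normalized image.
    weight = cv2.merge([saliency_map] * 3).astype(np.float32) / 255.0
    return (normalized.astype(np.float32) * weight).astype(np.uint8)

# Usage: feed every video frame through preprocess_frame before any tracker.
# Requires opencv-contrib-python for the saliency module.
bg_subtractor = cv2.createBackgroundSubtractorMOG2()
saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
```

Because the preprocessing runs independently of the tracker, it can be dropped in front of any existing tracking algorithm without modifying the tracker itself, which is the design point the abstract emphasizes.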

Keywords:

Review history: Received 29 June 2016, Revised 7 December 2016, Accepted 5 February 2017, Available online 8 February 2017, Version of Record 17 April 2017.

Paper link: https://doi.org/10.1016/j.cviu.2017.02.003