Two motion models for improving video object tracking performance

Abstract:

Two motion models are proposed to enhance the performance of video object tracking (VOT) algorithms. The first is a random walk model that captures the randomness of motion patterns. The second is a data-adaptive vector auto-regressive (VAR) model that exploits more regular motion patterns. The performance of these models is evaluated empirically on real-world datasets. Three publicly available real-time visual object trackers, the normalized cross-correlation (NCC) tracker, the New Scale Adaptive with Multiple Features (NSAMF) tracker, and the correlation filter neural network (CFNet), are each modified with the two models, and their tracking performance is compared against the original formulations. Both motion models improve the performance of all three trackers, which validates the hypothesis that, when training videos are available, the prior information embodied in the motion models can improve tracking performance.
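
The abstract does not give the models' equations; the following is only a minimal sketch of what such motion priors could look like when the tracked state is the 2-D object centre. The step size, VAR lag, and least-squares fitting procedure below are illustrative assumptions, not the paper's actual formulation.

```python
# Illustrative sketch (not the authors' code) of a random-walk prior and a
# data-adaptive VAR(p) prior for predicting the next object centre.
import numpy as np

def random_walk_predict(prev_center, step_sigma=5.0, rng=None):
    """Random-walk prior: next centre = previous centre + zero-mean Gaussian step."""
    rng = rng or np.random.default_rng()
    return np.asarray(prev_center) + rng.normal(0.0, step_sigma, size=2)

def fit_var(trajectories, p=2):
    """Least-squares fit of x_t = A_1 x_{t-1} + ... + A_p x_{t-p} + e_t
    from training centre trajectories, each given as a (T, 2) array."""
    X, Y = [], []
    for traj in trajectories:
        for t in range(p, len(traj)):
            # Stack the p most recent centres, newest first.
            X.append(np.concatenate([traj[t - k] for k in range(1, p + 1)]))
            Y.append(traj[t])
    X, Y = np.asarray(X), np.asarray(Y)
    coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)  # shape (2p, 2)
    return coeffs

def var_predict(recent_centers, coeffs):
    """Predict the next centre from the p most recent centres (newest first)."""
    x = np.concatenate([np.asarray(c) for c in recent_centers])
    return x @ coeffs
```

In a tracker, the predicted centre from either prior could, for instance, be used to shift or weight the search region before the appearance model (e.g. the correlation filter) is applied; how the paper actually integrates the priors into NCC, NSAMF, and CFNet is described in the full text.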

Article history: Received 22 August 2019, Revised 1 March 2020, Accepted 13 March 2020, Available online 31 March 2020, Version of Record 2 April 2020.

DOI: https://doi.org/10.1016/j.cviu.2020.102951