Dynamic feature fusion with spatial-temporal context for robust object tracking

Authors:

Highlights:

• We propose a spatial-temporal context-based dynamic feature fusion method (STCDFF) within the correlation filter framework. STCDFF exploits spatial-temporal context to fuse features effectively and thereby improve tracking performance.

• Spatial context and temporal context are employed to evaluate the discriminative ability of different visual features in complex tracking scenes.

• Extensive experiments on OTB-2015, VOT2016, VOT2018, TC-128, UAV123 and LaSOT demonstrate that the proposed STCDFF tracker performs competitively against several popular trackers.
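The paper's exact STCDFF formulation is not reproduced in this listing. The sketch below only illustrates the general idea the highlights describe: per-feature fusion weights derived from a spatial quality measure of each correlation response and smoothed over time. The peak-to-sidelobe ratio (PSR) as the spatial measure and exponential smoothing as the temporal cue are hypothetical stand-ins, not the paper's method.

```python
import numpy as np

def peak_to_sidelobe_ratio(response):
    """PSR: a common (here hypothetical) proxy for how discriminative
    a feature's correlation response map is in the current frame."""
    peak = response.max()
    py, px = np.unravel_index(response.argmax(), response.shape)
    # Exclude an 11x11 window around the peak when estimating sidelobe stats.
    mask = np.ones_like(response, dtype=bool)
    mask[max(0, py - 5):py + 6, max(0, px - 5):px + 6] = False
    sidelobe = response[mask]
    return (peak - sidelobe.mean()) / (sidelobe.std() + 1e-8)

def fuse_responses(responses, prev_weights, lr=0.2):
    """Weight each feature's response by its PSR (spatial context),
    smooth the weights with the previous frame's values (temporal
    context), and return the weighted-sum response plus new weights."""
    scores = np.array([peak_to_sidelobe_ratio(r) for r in responses])
    weights = scores / scores.sum()
    weights = (1 - lr) * prev_weights + lr * weights  # temporal smoothing
    weights = weights / weights.sum()
    fused = sum(w * r for w, r in zip(weights, responses))
    return fused, weights
```

In this sketch a feature whose response map has a sharp, unambiguous peak receives a larger fusion weight than one with a noisy, flat response, and the temporal smoothing keeps the weights from oscillating between frames.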


Keywords: Object tracking, Dynamic feature fusion, Spatial-temporal context, Correlation filter framework

Article history: Received 2 December 2021, Revised 1 April 2022, Accepted 1 May 2022, Available online 3 May 2022, Version of Record 15 May 2022.

DOI: https://doi.org/10.1016/j.patcog.2022.108775