Short-term anchor linking and long-term self-guided attention for video object detection

Abstract

We present a new network architecture that exploits the spatio-temporal information available in videos to boost object detection precision. First, box features are associated and aggregated by linking proposals that originate from the same anchor box in nearby frames. Then, we design a new attention module that aggregates these short-term enhanced box features to exploit long-term spatio-temporal information; to the best of our knowledge, this is the first use of long-term geometric features in the video object detection domain. Finally, a spatio-temporal double head is fed with both spatial information from the reference frame and the aggregated information covering the short- and long-term temporal context. We have tested our proposal on five video object detection datasets with very different characteristics to prove its robustness in a wide range of scenarios. Non-parametric statistical tests show that our approach outperforms the state of the art. Our code is available at https://github.com/daniel-cores/SLTnet.
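To picture the short-term anchor-linking step, the following is a minimal sketch, assuming a PyTorch anchor-based detector in which every frame shares the same anchor grid, so that proposal n in one frame is linked to proposal n in a nearby frame. The function name `link_and_aggregate`, the tensor layout, and the mean-pooling aggregation are hypothetical stand-ins for illustration, not the paper's actual implementation.

```python
# Illustrative sketch (not the paper's code): proposals that originate from
# the same anchor box in nearby frames are treated as linked, and their box
# features are aggregated into a short-term enhanced feature per anchor.
import torch


def link_and_aggregate(box_feats: torch.Tensor) -> torch.Tensor:
    """Aggregate linked box features over a short temporal window.

    box_feats: (T, N, D) tensor, where T is the number of nearby frames,
    N the number of anchors (shared across frames, so anchor n in frame t
    links to anchor n in frame t'), and D the feature dimension.
    Returns an (N, D) tensor of short-term enhanced features for the
    reference frame.
    """
    # Because the anchor grid is identical in every frame, linking reduces
    # to aligning the anchor axis; mean pooling over time is one simple
    # (assumed) choice of aggregation.
    return box_feats.mean(dim=0)


# Toy usage: 3 nearby frames, 100 anchors, 256-d box features.
feats = torch.randn(3, 100, 256)
enhanced = link_and_aggregate(feats)
print(enhanced.shape)  # torch.Size([100, 256])
```

In the full architecture described above, these short-term enhanced features would then feed the long-term attention module and, together with the reference-frame features, the spatio-temporal double head.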

Keywords: Video object detection, Spatio-temporal features, Convolutional neural networks

Article history: Received 12 March 2021, Accepted 9 April 2021, Available online 18 April 2021, Version of Record 24 April 2021.

DOI: https://doi.org/10.1016/j.imavis.2021.104179