Dynamic scene understanding by improved sparse topical coding

Authors:

Highlights:

Abstract:

The explosive growth of cameras in public areas calls for fully automated surveillance and monitoring techniques. In this paper, we propose a novel unsupervised approach that automatically explores motion patterns occurring in dynamic scenes under an improved sparse topical coding (STC) framework. An input video is first segmented into a sequence of non-overlapping clips. Optical flow features are extracted from each pair of consecutive frames and quantized into discrete visual flow words. Each video clip is interpreted as a document, and the visual flow words as words within that document. The improved STC is then applied to discover latent patterns that represent the common motion distributions of the scene. Finally, each video clip is represented as a weighted sum of these patterns with only a few non-zero coefficients. The proposed approach is purely data-driven and scene-independent, which makes it suitable for a wide range of application scenarios, such as rule mining and abnormal event detection. Experimental results and comparisons on various public datasets demonstrate the promise of the proposed approach.
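The pipeline described in the abstract — quantizing optical flow into discrete words, treating each clip as a bag-of-words document, and encoding documents sparsely against a dictionary of motion patterns — can be sketched roughly as follows. This is a minimal illustration, not the paper's actual STC solver: the direction-bin quantization, the `quantize_flow`/`clip_to_document`/`sparse_codes` helpers, and the projected-gradient (ISTA-style) non-negative sparse coder are all simplifying assumptions standing in for the improved STC inference described in the paper.

```python
import numpy as np

def quantize_flow(flow, n_bins=8):
    """Quantize per-pixel optical-flow vectors (H x W x 2) into discrete
    direction words, one of n_bins angular bins. (Assumed codebook; the
    paper's quantization of visual flow words may differ.)"""
    angles = np.arctan2(flow[..., 1], flow[..., 0])          # in [-pi, pi]
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    return bins.ravel()

def clip_to_document(flow_fields, n_bins=8):
    """Bag-of-words histogram over all flow fields of one clip:
    the clip becomes a 'document' of visual flow word counts."""
    words = np.concatenate([quantize_flow(f, n_bins) for f in flow_fields])
    return np.bincount(words, minlength=n_bins).astype(float)

def sparse_codes(X, D, lam=0.01, n_iter=200):
    """Encode each document (row of X) as a sparse non-negative
    combination of dictionary patterns (rows of D), via projected
    gradient with an L1 penalty -- a stand-in for STC inference."""
    S = np.zeros((X.shape[0], D.shape[0]))
    step = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-8)
    for _ in range(n_iter):
        grad = (S @ D - X) @ D.T                 # gradient of 0.5*||SD - X||^2
        S = np.maximum(S - step * (grad + lam), 0.0)  # shrink and clip at 0
    return S
```

With a toy dictionary of two disjoint motion patterns, a document generated from one pattern receives a large coefficient on that pattern and a near-zero coefficient on the other, mirroring the "few non-zero coefficients" representation the abstract describes.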

Keywords: Motion patterns, Sparse topical coding, Scene understanding

Review history: Available online 27 November 2012.

DOI: https://doi.org/10.1016/j.patcog.2012.11.013