An efficient, chromatic clustering-based background model for embedded vision platforms

Authors:

Highlights:

Abstract

People naturally identify rapidly moving foreground objects and ignore the persistent background. Identifying background pixels that belong to stable, chromatically clustered objects is important for efficient scene processing. This paper presents a technique that exploits this facet of human perception to improve the performance and efficiency of background modeling on embedded vision platforms. Previous work on the Multimodal Mean (MMean) approach achieves high-quality foreground extraction (comparable to Mixture of Gaussians (MoG)) using fast integer computation and a compact memory representation. This paper introduces a more efficient hybrid technique that combines MMean with palette-based background matching driven by the chromatic distribution of the scene. The hybrid technique suppresses computationally expensive model update and adaptation, providing a 45% execution-time speedup over MMean and reducing model storage requirements by 58% compared with an MMean-only implementation. This background analysis enables higher-frame-rate, lower-cost embedded vision systems.
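The hybrid approach described in the abstract can be pictured as a two-stage per-pixel test: a cheap comparison against a small palette of the scene's dominant chromatic clusters, falling back to the integer-based MMean model (with its update step) only when the palette misses. The sketch below is an illustrative reconstruction under that reading; the structure names, thresholds, palette size, and the single-mode simplification of MMean are assumptions, not the paper's actual implementation.

```c
/*
 * Illustrative sketch of the hybrid palette + MMean test (assumed design).
 * A palette hit classifies the pixel as background and suppresses the
 * costlier model match-and-update; otherwise the pixel is tested against
 * a single MMean-style mode kept as integer running sums.
 */
#include <stdint.h>
#include <stdlib.h>
#include <stdbool.h>

#define PALETTE_SIZE   8      /* assumed number of palette colours        */
#define PALETTE_THRESH 30     /* assumed per-channel chromatic tolerance  */
#define MMEAN_THRESH   25     /* assumed per-channel tolerance for MMean  */

typedef struct { uint8_t r, g, b; } Rgb;

/* One background mode kept as running integer sums (MMean style);
 * the real model is multimodal, this keeps a single mode for brevity. */
typedef struct {
    uint32_t sum_r, sum_g, sum_b;  /* channel sums                    */
    uint32_t count;                /* frames accumulated in this mode */
} MMeanMode;

static Rgb palette[PALETTE_SIZE]; /* assumed to be filled from the scene's
                                     dominant chromatic clusters           */

static bool matches_palette(Rgb p)
{
    for (int i = 0; i < PALETTE_SIZE; ++i) {
        if (abs((int)p.r - palette[i].r) < PALETTE_THRESH &&
            abs((int)p.g - palette[i].g) < PALETTE_THRESH &&
            abs((int)p.b - palette[i].b) < PALETTE_THRESH)
            return true;           /* chromatically stable background */
    }
    return false;
}

/* Integer-only match-and-update against one MMean mode. */
static bool mmean_classify(MMeanMode *m, Rgb p)
{
    if (m->count == 0) {           /* bootstrap an empty mode */
        m->sum_r = p.r; m->sum_g = p.g; m->sum_b = p.b; m->count = 1;
        return true;
    }
    uint32_t mr = m->sum_r / m->count;
    uint32_t mg = m->sum_g / m->count;
    uint32_t mb = m->sum_b / m->count;
    bool bg = abs((int)p.r - (int)mr) < MMEAN_THRESH &&
              abs((int)p.g - (int)mg) < MMEAN_THRESH &&
              abs((int)p.b - (int)mb) < MMEAN_THRESH;
    if (bg) {                      /* adapt the mode only on a match */
        m->sum_r += p.r; m->sum_g += p.g; m->sum_b += p.b; m->count++;
    }
    return bg;
}

/* Hybrid per-pixel test: a palette hit skips the MMean update entirely. */
bool is_background(MMeanMode *mode, Rgb pixel)
{
    if (matches_palette(pixel))
        return true;               /* no model update needed */
    return mmean_classify(mode, pixel);
}
```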

Keywords:

Article history: Received 26 January 2009, Accepted 17 March 2010, Available online 4 May 2010.

DOI: https://doi.org/10.1016/j.cviu.2010.03.014