Memory-based cognitive modeling for robust object extraction and tracking

Authors: Yanjiang Wang, Yujuan Qi

Abstract

Inspired by the way humans perceive their environment, in this paper we propose a memory-based cognitive model for visual information processing that imitates several cognitive functions of the human brain, such as remembering, recall, forgetting, learning, classification, and recognition. The proposed model comprises five components: information granules, memory spaces, cognitive behaviors, rules for moving information among memory spaces, and decision-making processes. Three memory spaces are defined to separately store the current, temporary, and permanent information acquired: the ultra-short-term memory space (USTMS), the short-term memory space (STMS), and the long-term memory space (LTMS). Because the model can remember or forget what the scene once looked like, it adapts more quickly to scene variations. We apply the model to two central problems in computer vision: background modeling and object tracking, presenting a memory-based Gaussian mixture model (MGMM) for object segmentation and a memory-based template updating (MTU) model for object tracking with a particle filter (PF). Experimental results show that the proposed model handles sudden background and foreground changes more robustly when segmenting and tracking moving objects against complex backgrounds.
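The remember/forget/recall idea behind the MGMM can be sketched in code. The following is a minimal illustrative sketch, not the authors' actual MGMM: it extends a standard per-pixel Stauffer-Grimson-style Gaussian mixture update so that a component displaced from the mixture is remembered in a long-term store and can be recalled when the old background reappears. The class name, parameters, and thresholds are all assumptions for illustration.

```python
import numpy as np

class MemoryGMM:
    """Toy per-pixel Gaussian mixture background model with a long-term
    memory (LTM) of displaced components.  Loosely inspired by the paper's
    memory-based GMM; every name and constant here is illustrative."""

    def __init__(self, k=3, alpha=0.05, var0=15.0, t_bg=0.7, match_sigma=2.5):
        self.alpha, self.var0 = alpha, var0            # learning rate, initial variance
        self.t_bg, self.match_sigma = t_bg, match_sigma
        self.means = np.zeros(k)
        self.vars = np.full(k, var0)
        self.weights = np.full(k, 1.0 / k)
        self.ltm = []          # "forgotten" components kept for possible recall

    def _match(self, x):
        # Index of the first component within match_sigma std devs of x, else None.
        hits = np.where(np.abs(x - self.means)
                        < self.match_sigma * np.sqrt(self.vars))[0]
        return int(hits[0]) if hits.size else None

    def update(self, x):
        """Update the mixture with intensity x; return True if x is background."""
        m = self._match(x)
        if m is None:
            # Recall: does x match a component remembered in long-term memory?
            for i, (mu, var) in enumerate(self.ltm):
                if abs(x - mu) < self.match_sigma * np.sqrt(var):
                    self.ltm.pop(i)
                    j = int(np.argmin(self.weights))
                    self.ltm.append((self.means[j], self.vars[j]))
                    self.means[j], self.vars[j] = mu, var
                    m = j
                    break
        if m is None:
            # Forget: displace the weakest component, but remember it in LTM
            # instead of discarding it outright.
            j = int(np.argmin(self.weights))
            self.ltm.append((self.means[j], self.vars[j]))
            self.ltm = self.ltm[-5:]                   # bounded memory capacity
            self.means[j], self.vars[j] = x, self.var0
            self.weights[j] = 0.05
            is_bg = False
        else:
            # Standard running update of the matched component.
            delta = x - self.means[m]
            self.means[m] += self.alpha * delta
            self.vars[m] += self.alpha * (delta ** 2 - self.vars[m])
            self.weights *= (1.0 - self.alpha)
            self.weights[m] += self.alpha
            # Background = heaviest components whose cumulative weight covers t_bg.
            order = np.argsort(-self.weights)
            cum = np.cumsum(self.weights[order]) / self.weights.sum()
            n_bg = int(np.searchsorted(cum, self.t_bg)) + 1
            is_bg = m in set(order[:n_bg].tolist())
        self.weights /= self.weights.sum()
        return bool(is_bg)
```

A pixel that stays at a constant intensity is gradually absorbed into the background, a sudden new intensity is flagged as foreground, and when an old background value returns it can be re-adopted from the long-term store rather than relearned from scratch.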

Keywords: Cognitive modeling, Biologically inspired, Human brain memory, Background modeling, Gaussian mixture model, Foreground segmentation, Object tracking, Particle filter


Paper URL: https://doi.org/10.1007/s10489-013-0437-5