Dynamic visual attention model in image sequences

Authors:

Highlights:

Abstract

A new computational architecture of dynamic visual attention is introduced in this paper. Our approach defines a model for the generation of an active attention focus on a dynamic scene captured from a still or moving camera. The aim is to obtain the objects that hold the observer’s attention in accordance with a set of predefined features, including color, motion and shape. The proposed solution to the selective visual attention problem consists of decomposing each input image of an indefinite sequence into its moving objects, determining which of these elements are of interest to the user, and keeping attention on those elements over time. Thus, the three tasks involved in the attention model are introduced. The Feature-Extraction task obtains the features (color, motion and shape) necessary to perform object segmentation. The Attention-Capture task applies the criteria established by the user (values provided through parameters) to the extracted features and obtains the different parts of the objects of potential interest. Lastly, the Attention-Reinforcement task maintains attention on those elements (or objects) of the image sequence that are of real interest.
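The three tasks map naturally onto a per-frame processing pipeline. The sketch below is a minimal illustration of that pipeline under assumptions introduced here, not the paper's implementation: the frame-differencing motion cue, the compactness shape descriptor, the color and motion thresholds, and the nearest-centroid association are all illustrative choices.

```python
# Illustrative sketch of the three-task pipeline (Feature-Extraction,
# Attention-Capture, Attention-Reinforcement). All thresholds, descriptors
# and the association rule are assumptions, not the paper's method.
import numpy as np
from scipy import ndimage

def feature_extraction(frame, prev_frame, motion_thresh=25, min_area=20):
    """Task 1: segment moving regions and attach color, motion and shape features."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int)).mean(axis=2)
    mask = diff > motion_thresh                         # pixels that changed between frames
    labels, n = ndimage.label(mask)                     # connected moving regions
    regions = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        area = len(ys)
        if area < min_area:                             # discard tiny blobs (noise)
            continue
        bbox_area = (ys.max() - ys.min() + 1) * (xs.max() - xs.min() + 1)
        regions.append({
            "pixels": (ys, xs),
            "color": frame[ys, xs].mean(axis=0),        # mean RGB of the region
            "motion": diff[ys, xs].mean(),              # mean frame-difference magnitude
            "shape": area / bbox_area,                  # compactness in (0, 1]
            "saliency": 0.0,
        })
    return regions

def attention_capture(regions, target_color, color_tol=60.0, min_motion=30.0):
    """Task 2: keep only regions matching the user-supplied feature criteria."""
    return [r for r in regions
            if r["motion"] >= min_motion
            and np.linalg.norm(r["color"] - np.asarray(target_color, dtype=float)) <= color_tol]

def attention_reinforcement(candidates, tracked, decay=0.8, boost=1.0, max_dist=30.0):
    """Task 3: reinforce attention on elements that persist across frames."""
    for t in tracked:
        t["saliency"] *= decay                          # attention fades if not re-observed
    for c in candidates:
        cy, cx = c["pixels"][0].mean(), c["pixels"][1].mean()
        match = None
        for t in tracked:                               # nearest-centroid data association
            ty, tx = t["pixels"][0].mean(), t["pixels"][1].mean()
            if np.hypot(cy - ty, cx - tx) <= max_dist:
                match = t
                break
        if match is not None:
            match["saliency"] += boost                  # persistent object: reinforce it
            match["pixels"] = c["pixels"]
        else:
            c["saliency"] = boost                       # new object of potential interest
            tracked.append(c)
    return [t for t in tracked if t["saliency"] > 0.1]  # elements still holding attention
```

In a driving loop, `feature_extraction` and `attention_capture` would run on every new frame, and the list returned by `attention_reinforcement` would hold the objects currently keeping the observer's attention.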

Keywords: Dynamic visual attention, Motion, Segmentation, Feature extraction, Feature integration

Article history: Received 10 September 2004, Revised 2 May 2006, Accepted 16 May 2006, Available online 13 July 2006.

DOI: https://doi.org/10.1016/j.imavis.2006.05.004