Learning natural scene categories by selective multi-scale feature extraction


Abstract

Natural scene categorization from images is a very useful task for automatic image analysis systems. Several methods addressing this problem have been proposed in the literature, with excellent results. Typically, features of several types are clustered to generate a vocabulary able to describe the considered image collection in a multi-faceted way. This vocabulary is formed by a discrete set of visual codewords whose co-occurrence and/or composition allows the scene category to be classified. A common drawback of these methods is that features are usually extracted from the whole image, disregarding whether they actually derive from the natural scene to be classified or from foreground objects possibly present in it, which are not characteristic of the scene. As reported by perceptual studies, objects present in an image are not useful for natural scene categorization; rather, depending on their size, they introduce an important source of clutter. In this paper, a novel, multi-scale, statistical approach to image representation aimed at scene categorization is presented. The method is able to select, at different levels, sets of features that represent exclusively the scene, disregarding other non-characteristic, cluttering elements. The proposed procedure, based on a generative model, then produces a robust representation scheme useful for image classification. The obtained results are very convincing and demonstrate the effectiveness of the approach even when only simple features, such as local color image histograms, are considered.
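The codeword-vocabulary pipeline the abstract describes (local color histograms extracted at multiple scales, clustered into a discrete vocabulary, with images then described by codeword occurrences) can be sketched as follows. This is an illustrative sketch, not the paper's method: patch sizes, the number of histogram bins, the vocabulary size `k`, and the use of plain k-means are all assumptions.

```python
# Illustrative sketch of a bag-of-codewords representation built from
# multi-scale local color histograms. Patch sizes, bin count, and k are
# assumed values, not taken from the paper.
import numpy as np

def local_color_histograms(img, patch, bins=4):
    """Split an HxWx3 image into non-overlapping patch x patch blocks and
    return one concatenated per-channel color histogram per block."""
    H, W, _ = img.shape
    feats = []
    for y in range(0, H - patch + 1, patch):
        for x in range(0, W - patch + 1, patch):
            block = img[y:y + patch, x:x + patch]
            hist = [np.histogram(block[..., c], bins=bins, range=(0, 256))[0]
                    for c in range(3)]
            feats.append(np.concatenate(hist).astype(float))
    return np.array(feats)

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means standing in for the vocabulary-building step:
    the k cluster centers are the visual codewords."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None] - centers[None]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return centers, labels

# Toy usage: a random "image", features at two scales, one shared vocabulary.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64, 3))
feats = np.vstack([local_color_histograms(img, p) for p in (8, 16)])
vocab, labels = kmeans(feats, k=5)
bow = np.bincount(labels, minlength=5)  # codeword occurrence histogram
```

In a full system, the per-image codeword histogram `bow` would feed the classifier; the paper's contribution, per the abstract, is to select which features enter this representation so that clutter from foreground objects is excluded, a step this sketch does not attempt.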

Keywords: Image representation, Image classification, Generative modeling

Article history: Received 5 January 2009, Revised 16 November 2009, Accepted 19 November 2009, Available online 26 November 2009.

DOI: https://doi.org/10.1016/j.imavis.2009.11.007