Discriminative part model for visual recognition

Authors:

Highlights:

Abstract

The recent literature on visual recognition and image classification has focused mainly on Deep Convolutional Neural Networks (Deep CNNs) [A. Krizhevsky, I. Sutskever, G. E. Hinton, Imagenet classification with deep convolutional neural networks, in: Advances in Neural Information Processing Systems, 2012, pp. 1097–1105.] and their variants, which have significantly improved the performance of these algorithms. Building on these recent advances, this paper proposes to explicitly add translation and scale invariance to Deep CNN-based local representations by introducing a new image recognition algorithm that models image categories as collections of automatically discovered distinctive parts. These parts are matched across images while their visual models are learned, and are finally pooled to produce image signatures. The appearance model of the parts is learned from the training images so as to discriminate between the categories to be recognized. A key ingredient of the approach is a softassign-like matching algorithm that simultaneously learns the model of each part and automatically assigns image regions to the model's parts. Once the model of a category is trained, it can be used to classify new images by finding image regions similar to the learned parts and encoding them in a single compact signature. The experimental validation shows that the proposed approach outperforms the latest Deep Convolutional Neural Network approaches, providing state-of-the-art results on several publicly available datasets.
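The abstract describes an alternation between softly assigning image regions to parts and updating each part's appearance model, followed by pooling part responses into a compact image signature. The sketch below is a minimal illustration of that general idea only, not the authors' algorithm or code: the function names, the dot-product similarity, the softmax temperature, and the max-pooling encoder are all assumptions made for the example.

```python
import numpy as np

def softassign_parts(region_feats, n_parts=50, temperature=0.1, n_iters=20, seed=0):
    """Illustrative softassign-style alternation (hypothetical sketch):
    softly assign region descriptors to part prototypes, then update each
    prototype as the assignment-weighted mean of the regions."""
    rng = np.random.default_rng(seed)
    X = np.asarray(region_feats, dtype=np.float64)               # (n_regions, dim) CNN region descriptors
    parts = X[rng.choice(len(X), size=n_parts, replace=False)]   # initialise prototypes from the data

    for _ in range(n_iters):
        sims = X @ parts.T                                       # region-to-part similarities
        logits = sims / temperature                              # softassign-like relaxation
        logits -= logits.max(axis=1, keepdims=True)              # numerical stability
        A = np.exp(logits)
        A /= A.sum(axis=1, keepdims=True)                        # soft assignment matrix (rows sum to 1)
        parts = (A.T @ X) / (A.sum(axis=0)[:, None] + 1e-12)     # update each part model
    return parts, A

def image_signature(region_feats, parts):
    """Encode an image by max-pooling, over its regions, the similarity to each learned part."""
    sims = np.asarray(region_feats, dtype=np.float64) @ parts.T  # (n_regions, n_parts)
    return sims.max(axis=0)                                      # one compact signature per image
```

Under these assumptions, the resulting per-image signatures could be fed to any standard classifier (e.g. a linear SVM) to recognize the learned categories.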

Keywords:

Article history: Received 17 March 2015, Revised 16 June 2015, Accepted 3 August 2015, Available online 10 August 2015, Version of Record 1 November 2015.

DOI: https://doi.org/10.1016/j.cviu.2015.08.002