Illumination invariant feature extraction and mutual-information-based local matching for face recognition under illumination variation and occlusion

Authors:

Highlights:

Abstract:

An efficient face recognition method that is robust to illumination variations is proposed. The proposed method obtains illumination invariants based on the illumination-reflectance model and employs local matching for classification. Different filters are tested to extract the reflectance part of the image, which is illumination invariant, and the maximum filter is found to be the most effective for this purpose. A set of adaptively weighted classifiers votes on different sub-images of each input image, and a decision is made based on their votes. Image entropy and mutual information are used as weighting factors. The proposed method does not need any prior information about the face shape or illumination and can be applied to each image separately. Unlike most available methods, it does not require multiple training images to obtain the illumination invariants. Support vector machines and k-nearest neighbors are used as classifiers. Several experiments are performed on the Yale B, Extended Yale B and CMU-PIE databases. Recognition results show that the proposed method is suitable for efficient face recognition under illumination variations.
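
The following is a minimal sketch of the pipeline described in the abstract, assuming the illumination-reflectance model I = R · L, where the large-scale illumination L is estimated with a maximum filter and the reflectance R = I / L serves as the illumination-invariant feature. The grid size, filter size, and entropy-only weighting are illustrative assumptions (the paper also uses mutual information as a weight), not the authors' exact parameters.

```python
# Hedged sketch: illumination-invariant extraction via a maximum filter,
# followed by entropy-weighted local (per-block) nearest-neighbour voting.
import numpy as np
from scipy.ndimage import maximum_filter

def reflectance(image, filter_size=15, eps=1e-6):
    """Estimate illumination with a maximum filter and divide it out (R = I / L)."""
    illumination = maximum_filter(image.astype(float), size=filter_size)
    return image / (illumination + eps)

def block_entropy(block, bins=32):
    """Shannon entropy of a sub-image, used here as its voting weight."""
    hist, _ = np.histogram(block, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def split_blocks(image, grid=(4, 4)):
    """Split an image into a grid of sub-images for local matching."""
    rows = np.array_split(image, grid[0], axis=0)
    return [b for r in rows for b in np.array_split(r, grid[1], axis=1)]

def classify(test_image, gallery_images, gallery_labels, grid=(4, 4)):
    """Entropy-weighted voting over per-block 1-NN matches against the gallery."""
    test_blocks = split_blocks(reflectance(test_image), grid)
    gallery_blocks = [split_blocks(reflectance(g), grid) for g in gallery_images]
    votes = {}
    for i, tb in enumerate(test_blocks):
        weight = block_entropy(tb)
        # 1-NN over the corresponding block of every gallery image.
        dists = [np.linalg.norm(tb - gb[i]) for gb in gallery_blocks]
        label = gallery_labels[int(np.argmin(dists))]
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)
```

In this sketch a simple nearest-neighbour matcher stands in for the SVM/k-NN classifiers of the paper; swapping in an SVM per block would follow the same weighted-voting structure.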

Keywords: Face recognition, Reflectance illumination model, Variant illumination, Local matching, Partial occlusion

Article history: Received 17 May 2010, Revised 10 November 2010, Accepted 13 March 2011, Available online 21 March 2011.

DOI: https://doi.org/10.1016/j.patcog.2011.03.012