An efficient illumination invariant face recognition framework via illumination enhancement and DD-DTCWT filtering

Authors:

Highlights:

Abstract

In this paper, it is shown that multiscale analysis of facial structure and features of face images leads to superior recognition rates for images under varying illumination. The proposed method, which is computationally cost-effective, significantly suppresses illumination effects. The problem is defined as how best to extract the reflectance portion from a given image, which can then be used directly as input to a dimensionality reduction unit followed by a classifier for recognition. We first assume that an image I(x,y) is a black box consisting of a combination of illumination and reflectance. A new approximation is proposed to enhance the illumination removal phase. Since illumination resides in the low-frequency part of the image, it is reasonable to use a high-performance multiresolution transformation to first accurately separate the frequency components of an image. The double-density dual-tree complex wavelet transform (DD-DTCWT) possesses three core advantages: the transformation is (i) shift-invariant, (ii) directionally selective with no checkerboard effect, and (iii) enriched by extra wavelets, interpreted as double density. The output of the first phase is sent to a DD-DTCWT unit to be decomposed into frequency subbands. High-frequency subbands are thresholded, and an inverse DD-DTCWT is then applied to the subbands to construct a low-frequency raw image, which is followed by a fine-tuning process. Finally, after extracting a mask, a feature vector is formed; principal component analysis (PCA) is used for dimensionality reduction, followed by an extreme learning machine (ELM) classifier to evaluate the performance of the proposed algorithm for face recognition under varying illumination. Unlike similar works, the proposed method is free of any prior information about the face shape, it is systematic and easy to implement, and it can be applied separately to each image.
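The illumination-removal idea described above can be illustrated with a minimal sketch. Since the paper's DD-DTCWT is not available in standard libraries, this sketch substitutes a single-level 2-D Haar wavelet transform (a deliberate simplification, not the authors' transform): the image is taken to the log domain, high-frequency subbands are attenuated to reconstruct a low-frequency illumination estimate, and the residual approximates the reflectance. The function names and the `keep` parameter are illustrative assumptions.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands.
    Assumes even image dimensions."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-wise average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-wise detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2 (perfect reconstruction)."""
    a = np.empty((ll.shape[0], 2 * ll.shape[1]))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    img = np.empty((2 * a.shape[0], a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

def illumination_normalize(img, keep=0.2):
    """Log-domain subband filtering: attenuate high-frequency subbands,
    reconstruct a low-frequency illumination estimate, and return the
    reflectance-like residual (hypothetical stand-in for the paper's
    DD-DTCWT pipeline)."""
    log_img = np.log1p(img.astype(float))
    ll, lh, hl, hh = haar_dwt2(log_img)
    # Illumination ~ low-frequency content; details are suppressed, not removed.
    illum = haar_idwt2(ll, keep * lh, keep * hl, keep * hh)
    return log_img - illum
```

With `keep=1.0` the illumination estimate equals the log image and the residual vanishes; smaller values leave progressively more high-frequency (reflectance-like) detail in the residual.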
Furthermore, the proposed method, which is significantly faster than similar techniques, remains robust when the number of images required for the training cycle is reduced. Several experiments are performed on well-known databases such as Yale B, Extended Yale B, CMU-PIE, FERET, AT&T, and Labeled Faces in the Wild (LFW). Illustrative examples are given, and the results compare favorably with current results in the literature.
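The recognition back end described in the abstract (PCA for dimensionality reduction followed by an ELM classifier) can be sketched as follows. This is a generic implementation of those two standard techniques, not the authors' code; all function names and hyperparameters (`n_components`, `n_hidden`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_fit(X, n_components):
    """Return (mean, components) so that features = (X - mean) @ components.
    Rows of Vt from the SVD of the centered data are the principal axes."""
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components].T

def elm_train(X, y, n_hidden=50, n_classes=None):
    """Single-hidden-layer ELM: random input weights and biases, tanh hidden
    activations, output weights solved in closed form by least squares."""
    n_classes = n_classes or int(y.max()) + 1
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)
    T = np.eye(n_classes)[y]          # one-hot targets
    beta = np.linalg.pinv(H) @ T      # Moore-Penrose least-squares solution
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Class label = argmax of the linear readout on hidden activations."""
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)
```

In the paper's pipeline, the rows of `X` would be the masked reflectance feature vectors; the key property of the ELM is that only the output weights `beta` are learned, so training reduces to a single pseudoinverse.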

Keywords: Illumination invariance, Face recognition, Double-density dual-tree complex wavelets, Subband filtering, Extreme learning machine

Article history: Received 12 April 2011, Revised 6 March 2012, Accepted 14 June 2012, Available online 27 June 2012.

DOI: https://doi.org/10.1016/j.patcog.2012.06.007