Disguise Resilient Face Verification
Maneet Singh, Shruti Nagpal, Mayank Vatsa
Published in 2021
Abstract
With the increasing usage of face recognition algorithms, it is well established that external artifacts and makeup accessories can be applied to different facial features such as the eyes, nose, mouth, and cheeks to obfuscate one's identity or to impersonate someone else's. Recognizing faces in the presence of these artifacts constitutes the problem of disguised face recognition, one of the most arduous covariates of face recognition. The challenge is exacerbated when disguised faces are captured in real-time environments with low-resolution images. To address this challenge, this paper first proposes a novel multi-objective encoder-decoder network, termed DED-Net. DED-Net attempts to learn the class variations in the feature space generated by both disguised as well as non-disguised images, using a combination of Mahalanobis and Cosine distance metrics along with Mutual Information based supervision. DED-Net is then extended to learn from the local and global features of both disguised and non-disguised face images for efficient face recognition; the complete framework is termed the Disguise Resilient (D-Res) framework. The efficacy of the proposed framework is demonstrated on two real-world benchmark datasets: the Disguised Faces in the Wild (DFW) 2018 and DFW2019 competition datasets. In addition, this research emphasizes the importance of recognizing disguised faces in low-resolution settings and proposes three experimental protocols to simulate real-world surveillance scenarios. To this end, benchmark results are reported on seven protocols for three low-resolution settings (32×32, 24×24, and 16×16) of the two DFW benchmark datasets. The results demonstrate superior performance of the D-Res framework in comparison with benchmark algorithms; for example, an improvement of around 3% is observed on the Overall protocol of the DFW2019 dataset, where the D-Res framework achieves 96.3%. Experiments have also been performed on benchmark face verification datasets (LFW, YTF, and IJB-B), where the D-Res framework achieves improved verification accuracy.
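The abstract states that DED-Net combines Mahalanobis and Cosine distance metrics over disguised and non-disguised embeddings. As a rough illustration only, the PyTorch sketch below shows one way such a combined distance term could be computed for genuine (same-identity) pairs; the function name, the margin hinge, and the shared inverse-covariance estimate are assumptions made for illustration and do not reproduce the paper's actual objective or its Mutual Information based supervision.

```python
import torch
import torch.nn.functional as F

def combined_distance_loss(emb_disguised, emb_clean, cov_inv, margin=0.5):
    """Illustrative loss mixing Mahalanobis and cosine distances between
    disguised and non-disguised embeddings of the same identity.
    Hypothetical sketch only; not the DED-Net loss from the paper."""
    diff = emb_disguised - emb_clean                                  # (B, D)
    # Mahalanobis distance under a shared inverse-covariance estimate
    maha = torch.sqrt(torch.clamp(
        torch.einsum('bi,ij,bj->b', diff, cov_inv, diff), min=1e-12))
    # Cosine distance = 1 - cosine similarity
    cos_dist = 1.0 - F.cosine_similarity(emb_disguised, emb_clean, dim=1)
    # Encourage both distances to stay small for genuine pairs
    return torch.mean(F.relu(maha - margin) + cos_dist)
```

In such a setup, the Mahalanobis term accounts for correlations in the embedding space while the cosine term enforces angular alignment; how the two are weighted and supervised in DED-Net is detailed in the paper itself.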
About the journal
Journal: IEEE Transactions on Circuits and Systems for Video Technology
ISSN: 1558-2205