Emerging Covariates of Face Recognition
Himanshu Sharad Bhatt, Mayank Vatsa, Afzel Noore, Nalini K. Ratha
Published in 2013
Abstract
A covariate in face recognition can be defined as an effect that independently increases the intra-class variability, decreases the inter-class variability, or both. Covariates such as pose, illumination, expression, aging, and disguise are well established, have been extensively studied in the literature, and are categorized as existing covariates of face recognition. However, ever-increasing applications of face recognition have instigated many new and exciting scenarios, such as matching forensic sketches to mug-shot photos, matching faces altered by plastic surgery, recognizing faces in low-resolution surveillance images, and recognizing individuals from videos. These covariates are categorized as emerging covariates of face recognition, which are the primary emphasis of this dissertation.

One of the important cues in solving crimes and apprehending criminals is matching forensic sketches with digital face images. The first contribution of this dissertation is a memetically optimized multi-scale circular Weber's local descriptor (MCWLD) for matching forensic sketches with digital face images. The dissertation presents an automated algorithm that uses MCWLD to extract discriminative information from local regions of both sketches and digital images. An evolutionary memetic optimization is proposed to assign optimal weights to every local facial region to boost identification performance. Since forensic sketches and digital images can be of poor quality, a pre-processing technique is also used to enhance image quality. Results on different sketch databases, including a forensic sketch database, illustrate the efficacy of the proposed algorithm.

Widespread acceptance and use of biometrics for person authentication has instigated several techniques for evading identification, such as altering facial appearance through surgical procedures. These procedures modify both the shape and texture of facial features to varying degrees and thus degrade the performance of face recognition when matching pre- and post-surgery images. The second contribution of this dissertation is a multi-objective evolutionary granular algorithm for matching face images altered by plastic surgery procedures. The algorithm first generates non-disjoint face granules at multiple levels of granularity. The granular information is assimilated using a multi-objective genetic algorithm that simultaneously optimizes the selection of a feature extractor for each face granule along with the weights of individual granules. On the IIIT-D plastic surgery database, the proposed algorithm yields state-of-the-art performance.
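As a rough illustration of the weighted local-descriptor matching idea behind MCWLD, the following Python sketch computes a simplified Weber-style differential-excitation histogram for each facial region and compares two images with a region-weighted chi-square distance. The grid size, bin count, and uniform weights are illustrative assumptions; in the dissertation the per-region weights are learned by memetic optimization and the descriptor itself is multi-scale and circular.

```python
import numpy as np

def differential_excitation(img, alpha=3.0, eps=1e-6):
    """Simplified Weber-style differential excitation per pixel (sketch only)."""
    img = img.astype(np.float64)
    padded = np.pad(img, 1, mode="edge")
    neigh_sum = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            neigh_sum += padded[1 + dy : 1 + dy + img.shape[0],
                                1 + dx : 1 + dx + img.shape[1]]
    # Compare the summed neighbourhood against the centre intensity.
    return np.arctan(alpha * (neigh_sum - 8 * img) / (img + eps))

def regional_histograms(img, grid=(4, 4), bins=16):
    """Split the excitation map into a grid of local regions and histogram each."""
    exc = differential_excitation(img)
    h, w = exc.shape
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = exc[i * h // grid[0] : (i + 1) * h // grid[0],
                        j * w // grid[1] : (j + 1) * w // grid[1]]
            hist, _ = np.histogram(block, bins=bins,
                                   range=(-np.pi / 2, np.pi / 2), density=True)
            hists.append(hist)
    return hists

def weighted_chi_square(hists_a, hists_b, region_weights, eps=1e-9):
    """Region-weighted chi-square dissimilarity between two descriptor sets."""
    dist = 0.0
    for w, ha, hb in zip(region_weights, hists_a, hists_b):
        dist += w * np.sum((ha - hb) ** 2 / (ha + hb + eps))
    return dist

# Toy usage: two random "face" images and uniform region weights; in the
# dissertation the weights come from memetic optimization instead.
sketch = np.random.rand(128, 128)
photo = np.random.rand(128, 128)
weights = np.ones(16) / 16
print(weighted_chi_square(regional_histograms(sketch),
                          regional_histograms(photo), weights))
```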
Face recognition performance degrades when a low-resolution face image captured in unconstrained settings, such as surveillance, is matched with high-resolution gallery images. The primary challenge is to extract discriminative features from the limited biometric content in low-resolution images and match them with information-rich high-resolution face images. The problem of cross-resolution face matching is further exacerbated when there is limited labeled low-resolution training data. The third contribution of this dissertation is a co-transfer learning framework, a cross-pollination of the transfer learning and co-training paradigms, for enhancing the performance of cross-resolution face recognition. The transfer learning component transfers the knowledge learned while matching high-resolution face images during training to matching low-resolution probe images against a high-resolution gallery during testing. The co-training component facilitates this knowledge transfer by assigning pseudo-labels to unlabeled probe instances in the target domain. Experiments on a synthetic database, three low-resolution surveillance-quality face databases, and real-world examples show the efficacy of the proposed co-transfer learning algorithm compared to other approaches.

Due to prevalent applications and the availability of large intra-personal variations, videos have gained significant attention for face recognition. Unlike still face images, videos provide abundant information that can be leveraged to compensate for intra-personal variations and enhance face recognition performance. The fourth contribution of this dissertation is a video-based face recognition algorithm that computes a discriminative video signature as an ordered (ranked) list of still face images from a large dictionary. A three-stage approach is developed for optimizing ranked lists across multiple video frames and fusing them into a single composite ordered list to compute the video signature. The signature embeds diverse intra-personal variations and facilitates matching two videos across large variations. Results on the YouTube and MBGC v2 video databases show the effectiveness of the proposed algorithm.
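The co-training side of the co-transfer idea can be sketched, under strong simplifying assumptions, as two classifiers that each pseudo-label the unlabeled (low-resolution probe) pairs they are most confident about and hand them to the other learner. Everything below, including the two feature views, the logistic-regression learners, and the round and budget parameters, is hypothetical and only meant to convey the pseudo-labeling loop, not the dissertation's actual framework.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_training_pseudo_labels(X_lab, y_lab, X_unlab, rounds=5, per_round=10):
    """Toy co-training loop: two views, mutual pseudo-labeling of a probe pool."""
    d = X_lab.shape[1] // 2
    views = [slice(0, d), slice(d, None)]          # assumed split into two views
    X_train = [X_lab.copy(), X_lab.copy()]
    y_train = [y_lab.copy(), y_lab.copy()]
    pool = X_unlab.copy()

    for _ in range(rounds):
        # Retrain one classifier per view on its current (partly pseudo-labeled) set.
        clfs = [LogisticRegression(max_iter=1000).fit(X[:, v], y)
                for X, y, v in zip(X_train, y_train, views)]
        if len(pool) == 0:
            break
        keep = np.ones(len(pool), dtype=bool)
        for i, clf in enumerate(clfs):
            proba = clf.predict_proba(pool[:, views[i]])
            picked = np.argsort(-proba.max(axis=1))[:per_round]   # most confident pairs
            other = 1 - i
            # The confident pseudo-labeled samples augment the *other* classifier.
            X_train[other] = np.vstack([X_train[other], pool[picked]])
            y_train[other] = np.concatenate([y_train[other],
                                             proba[picked].argmax(axis=1)])
            keep[picked] = False
        pool = pool[keep]
    return clfs

# Toy usage: random match/non-match pair features standing in for labeled
# HR-HR pairs and unlabeled HR-LR probe pairs.
rng = np.random.default_rng(1)
X_lab = rng.normal(size=(200, 20))
y_lab = (X_lab[:, 0] + X_lab[:, 10] > 0).astype(int)
X_unlab = rng.normal(size=(300, 20))
classifiers = co_training_pseudo_labels(X_lab, y_lab, X_unlab)
```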
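The video-signature idea of ranking a dictionary of still images per frame and fusing the ranked lists can also be illustrated with a minimal sketch. Here the fusion is a simple Borda-style mean of per-frame ranks and two signatures are compared with the Spearman footrule; the dissertation instead uses a three-stage optimization to produce and match the composite ordered list.

```python
import numpy as np

def ranked_list(frame_distances):
    """Rank dictionary entries for one frame: rank 0 = closest still image."""
    order = np.argsort(frame_distances)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(order))
    return ranks

def video_signature(distance_matrix):
    """Fuse per-frame ranked lists into one composite ordered list.

    distance_matrix: (num_frames, dictionary_size) distances from each video
    frame to every still image in the dictionary. Borda-style mean-rank fusion
    is an assumption standing in for the dissertation's three-stage approach.
    """
    per_frame_ranks = np.vstack([ranked_list(d) for d in distance_matrix])
    mean_ranks = per_frame_ranks.mean(axis=0)
    return np.argsort(mean_ranks)      # composite ordered list of dictionary indices

def footrule_distance(sig_a, sig_b):
    """Spearman footrule between two ordered lists (smaller = more similar)."""
    pos_a = np.empty_like(sig_a); pos_a[sig_a] = np.arange(len(sig_a))
    pos_b = np.empty_like(sig_b); pos_b[sig_b] = np.arange(len(sig_b))
    return int(np.abs(pos_a - pos_b).sum())

# Toy usage with random distances for two videos against a 100-image dictionary.
rng = np.random.default_rng(0)
video1 = rng.random((20, 100))   # 20 frames
video2 = rng.random((35, 100))   # 35 frames
print(footrule_distance(video_signature(video1), video_signature(video2)))
```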