Discriminative shared transform learning for sketch to image matching
S. Nagpal, M. Singh, et al.
Published in Pattern Recognition (Elsevier Ltd)
2021
Volume: 114
Abstract
Sketch to digital image matching refers to the problem of matching a sketch image (often drawn by hand or created with software) against a gallery of digital images (captured via an acquisition device such as a digital camera). Automated sketch to digital image matching has applicability in several day-to-day tasks such as similar-object image retrieval, forensic sketch matching in law enforcement scenarios, or profile linking using caricature face images on social media. As opposed to digital images, sketch images are generally edge drawings containing limited (or no) texture- or colour-based information. Further, there is no single technique for sketch generation, which often results in varying artistic or software styles, along with the interpretation bias of the individual creating the sketch. Beyond the variations observed across the two domains (sketch and digital image), automated sketch to digital image matching is further marred by the challenges of limited training data and wide intra-class variability. In order to address the above problems, this research proposes a novel Discriminative Shared Transform Learning (DSTL) algorithm for sketch to digital image matching. DSTL learns a shared transform for data belonging to the two domains while modeling the class variations, resulting in discriminative feature learning. Two models are presented under the proposed DSTL algorithm: (i) the Contractive Model (C-Model) and (ii) the Divergent Model (D-Model), which are formulated with different supervision constraints. Experimental analysis on seven datasets for three case studies of sketch to digital image matching demonstrates the efficacy of the proposed approach, highlighting the importance of each component, its input-agnostic behavior, and its improved matching performance. © 2021 Elsevier Ltd
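The abstract describes the idea only at a high level: a single transform is learned for both domains, with supervision that models class variations. The sketch below is a minimal, hypothetical illustration of that idea, assuming a learned linear transform applied to paired sketch and digital features plus a contractive (same-class pull) term in the spirit of the C-Model. The function name `shared_transform_loss`, the loss weights `lam`/`mu`, the log-det regulariser, and the gradient-descent solver are assumptions for illustration, not the paper's published DSTL formulation; the D-Model's divergent (class-separating) constraint is omitted.

```python
# Hypothetical sketch of shared transform learning with a contractive
# supervision term. Not the authors' exact objective or solver.
import torch

def shared_transform_loss(T, x_sketch, x_digital, labels, lam=1e-3, mu=1e-1):
    """T: (k, d) shared transform; x_sketch, x_digital: (n, d) paired
    features from the two domains; labels: (n,) class ids."""
    z_s = x_sketch @ T.t()     # transformed sketch representations
    z_d = x_digital @ T.t()    # transformed digital representations

    # Fidelity: both domains share the same transform T, and paired
    # (same-identity) samples should map to similar representations.
    fidelity = ((z_s - z_d) ** 2).sum()

    # Regulariser keeping T well-conditioned (log-det barrier, as in
    # standard transform learning; an assumption here).
    gram = T @ T.t() + 1e-6 * torch.eye(T.shape[0])
    reg = lam * (T.norm() ** 2 - torch.logdet(gram))

    # Contractive supervision: pull transformed samples of the same
    # class together across domains (simplified C-Model-style term).
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    dists = torch.cdist(z_s, z_d) ** 2
    contractive = mu * (same * dists).sum() / same.sum()

    return fidelity + reg + contractive

# Toy usage with random 128-d features and a 64-d shared transform.
x_sketch = torch.randn(32, 128)
x_digital = torch.randn(32, 128)
labels = torch.randint(0, 8, (32,))

T = torch.randn(64, 128, requires_grad=True)
opt = torch.optim.Adam([T], lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = shared_transform_loss(T, x_sketch, x_digital, labels)
    loss.backward()
    opt.step()
```

At test time, such a transform would be applied to both a query sketch and the gallery of digital images, with matching performed by nearest-neighbour search in the shared representation space.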
About the journal
Journal: Pattern Recognition
Publisher: Elsevier Ltd
ISSN: 0031-3203
Open Access: No