Matching facial sketches to digital face images has widespread application in law enforcement scenarios. Recent advances in technology have made sketch generation tools widely available, reducing the need for a sketch artist. While these composite sketches have aided manual identification, automatically matching them against digital mugshot photos suffers from a large modality gap. This research addresses the task of matching a composite face sketch to digital face images by proposing a transfer learning based evolutionary algorithm. A new feature descriptor, the Histogram of Image Moments, is also presented for encoding features across modalities. Moreover, the IIITD Composite Face Sketch Database of 150 subjects is introduced to address the limited availability of databases in this problem domain. Experimental evaluation and analysis on the proposed dataset demonstrate the effectiveness of the transfer learning approach for cross-modality recognition.