The mismatch between the training and test data distributions is a challenging issue in the design of many practical computer vision systems. In this paper, we propose a domain adaptation technique to address this issue. We are interested in a domain adaptation scenario in which the source domain has a large number of labeled examples and the target domain has a large number of unlabeled examples. We align the source-domain subspace with the target-domain subspace in order to reduce the mismatch between the two distributions, and we model each subspace using Locality Preserving Projections (LPP). Unlike previous subspace alignment approaches, we introduce a strategy that effectively utilizes the training labels to learn discriminative subspaces. We validate our domain adaptation approach on two different domains, handwritten and printed digit images, and show through comparison that it outperforms existing approaches. © 2015 IEEE.
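To make the pipeline concrete, the following is a minimal sketch of the unsupervised building blocks the abstract names: an LPP basis learned per domain, followed by subspace alignment in the style of Fernando et al. (aligning the source basis to the target basis via the matrix of inner products between the two bases). This is not the authors' full discriminative method — the label-based strategy is omitted — and the function names, the binary k-nearest-neighbor affinity, and the regularization constant are illustrative assumptions.

```python
import numpy as np

def lpp_basis(X, n_components, n_neighbors=5):
    """Unsupervised LPP: find projections that preserve local neighborhoods.

    X has shape (n_samples, n_features). Returns a (n_features, n_components)
    basis. A binary kNN affinity is used here for simplicity; heat-kernel
    weights are also common.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    np.fill_diagonal(d2, np.inf)  # exclude self from neighbor search
    # Symmetrized kNN adjacency matrix W.
    W = np.zeros((n, n))
    nbrs = np.argsort(d2, axis=1)[:, :n_neighbors]
    for i in range(n):
        W[i, nbrs[i]] = 1.0
    W = np.maximum(W, W.T)
    D = np.diag(W.sum(axis=1))
    L = D - W  # graph Laplacian
    # LPP solves the generalized eigenproblem  X^T L X a = lam * X^T D X a,
    # keeping eigenvectors with the smallest eigenvalues.
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])  # regularize for stability
    # Reduce to a standard symmetric eigenproblem by whitening B.
    w, V = np.linalg.eigh(B)
    B_inv_sqrt = V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    evals, evecs = np.linalg.eigh(B_inv_sqrt @ A @ B_inv_sqrt)
    return B_inv_sqrt @ evecs[:, :n_components]

def align_subspaces(Xs, Xt, n_components=5):
    """Project source and target data into target-aligned coordinates."""
    Ps = lpp_basis(Xs, n_components)
    Pt = lpp_basis(Xt, n_components)
    M = Ps.T @ Pt          # alignment matrix mapping source basis to target basis
    Zs = Xs @ Ps @ M       # aligned source representation
    Zt = Xt @ Pt           # target representation
    return Zs, Zt
```

A classifier trained on the labeled `Zs` can then be applied directly to `Zt`, since both now live in comparable coordinates; the paper's contribution beyond this sketch is making the learned subspaces discriminative using the source labels.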