Due to the high variability in writing styles across individuals, non-centered and non-uniformly scaled optical characters are difficult to recognize, and several techniques have been proposed to address this recognition problem. In this work, we show that the performance of deep-learning-based optical character classifiers can be improved through feature-map alignment. We use a spatial transformer network to align the feature maps of the convolutional neural network we propose for the classification task. We demonstrate that the proposed framework not only classifies the slightly transformed characters typically found in conventional datasets with high accuracy, but also recognizes highly non-uniformly scaled characters with considerably higher accuracy. We evaluate the proposed model on the CMATERdb 3.1.3 database of isolated Bangla handwritten compound characters, obtaining 97.86% recognition accuracy on the original database and 96.34% when rotated variants are included in training and testing. © 2017 IEEE.
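The alignment step a spatial transformer performs can be illustrated in isolation. The sketch below is not the authors' implementation: it is a minimal NumPy illustration of the affine warping at the core of a spatial transformer, using nearest-neighbor sampling on a single-channel feature map for clarity. A real STN instead uses differentiable bilinear sampling and predicts the 2×3 affine parameters `theta` with a learned localization sub-network, so the alignment is trained end-to-end with the classifier.

```python
import numpy as np

def affine_grid(theta, H, W):
    """Map a normalized [-1, 1] x [-1, 1] target grid through the
    2x3 affine matrix theta, yielding source coordinates (H, W, 2)."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, H),
                         np.linspace(-1, 1, W), indexing="ij")
    coords = np.stack([xs, ys, np.ones_like(xs)], axis=-1)  # (H, W, 3)
    return coords @ theta.T                                  # (H, W, 2)

def grid_sample_nearest(fmap, grid):
    """Sample a (H, W) feature map at normalized grid locations
    using nearest-neighbor interpolation (bilinear in a real STN)."""
    H, W = fmap.shape
    xs = np.clip(np.round((grid[..., 0] + 1) * (W - 1) / 2), 0, W - 1).astype(int)
    ys = np.clip(np.round((grid[..., 1] + 1) * (H - 1) / 2), 0, H - 1).astype(int)
    return fmap[ys, xs]

# With the identity transform, the warped feature map equals the input;
# a learned theta would instead undo scale/shift/rotation of the character.
theta_identity = np.array([[1.0, 0.0, 0.0],
                           [0.0, 1.0, 0.0]])
fmap = np.arange(16, dtype=float).reshape(4, 4)
out = grid_sample_nearest(fmap, affine_grid(theta_identity, 4, 4))
```

Because the grid generation and sampling are differentiable (with bilinear interpolation), gradients flow back into the localization network, which is what lets the alignment be learned jointly with the recognition objective.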