With AI systems now deployed across a wide range of applications, fairness in model predictions is essential to ensure that these systems work equally well for everyone. Biased feature representations often lead to unfair model predictions. To address this concern, this research proposes a novel method, termed Attention Aware Debiasing (AAD), to learn unbiased feature representations. The proposed method uses an attention mechanism to focus on the features important for the main task while suppressing the features related to sensitive attributes, thereby minimizing the model's dependence on sensitive attributes when performing the main task. Multiple experiments are performed on two publicly available datasets, MORPH and UTKFace, to demonstrate the effectiveness of the proposed AAD method for bias mitigation. The proposed AAD method improves overall model performance and reduces the disparity in model predictions across different subgroups.
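The abstract does not specify the architecture or training objective, so the following is only a minimal sketch of how attention-based suppression of sensitive attributes could be realized, not the authors' exact AAD formulation. The module names, the soft per-dimension attention mask, the gradient-reversal penalty on sensitive-attribute predictability, and all hyperparameters (`feat_dim`, `alpha`, `lambd`) are assumptions introduced for illustration.

```python
# Illustrative sketch only: design choices below are assumptions, not the
# authors' published AAD method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses gradients in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


class AttentionDebiasNet(nn.Module):
    """Backbone features are reweighted by a learned attention mask; the masked
    features feed the main-task head, while a sensitive-attribute head trained
    through gradient reversal discourages the mask from keeping sensitive cues."""

    def __init__(self, feat_dim=512, num_classes=8, num_sensitive=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, feat_dim // 4),
            nn.ReLU(),
            nn.Linear(feat_dim // 4, feat_dim),
            nn.Sigmoid(),  # per-dimension attention weights in [0, 1]
        )
        self.task_head = nn.Linear(feat_dim, num_classes)
        self.sensitive_head = nn.Linear(feat_dim, num_sensitive)

    def forward(self, features, lambd=1.0):
        attn = self.attention(features)   # soft mask over feature dimensions
        masked = features * attn          # emphasize task-relevant dimensions
        task_logits = self.task_head(masked)
        # The sensitive head tries to predict the sensitive attribute from the
        # masked features; gradient reversal pushes the mask to suppress them.
        sens_logits = self.sensitive_head(GradReverse.apply(masked, lambd))
        return task_logits, sens_logits


def debias_loss(task_logits, sens_logits, y_task, y_sensitive, alpha=0.1):
    """Main-task cross-entropy plus a weighted sensitive-attribute penalty."""
    return F.cross_entropy(task_logits, y_task) + alpha * F.cross_entropy(
        sens_logits, y_sensitive
    )
```

In this sketch, the attention module is the only component that can shrink feature dimensions, so suppressing the sensitive-attribute signal and preserving task accuracy are both resolved through the learned mask; the relative weight `alpha` controls that trade-off.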