Recent studies have highlighted a major caveat in several high-performing automated systems for tasks such as facial analysis (e.g., gender prediction), object classification, and image-to-caption generation: many existing systems yield results biased toward or against particular subgroups. This biased behavior, when such models are deployed in real-world scenarios, raises the challenge of unfair automated systems. In this research, we propose a novel technique, the diversity block, for de-biasing existing models without re-training them. The proposed technique requires only a small amount of training data and can be incorporated into an existing model to address the challenge of biased predictions: a diversity block is added, and the final prediction is computed from the scores of both the original model and the diversity block, yielding a more confident and de-biased prediction. The efficacy of the proposed technique is demonstrated on the task of gender prediction, along with an auxiliary case study on object classification. © 2020 IEEE.
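To make the score-combination idea concrete, the sketch below fuses the per-class scores of a base model with those of a diversity block via a weighted average. The fusion rule, the `alpha` weight, and the example scores are illustrative assumptions; the abstract does not specify the exact combination function.

```python
import numpy as np

def combine_scores(base_scores, diversity_scores, alpha=0.5):
    """Fuse per-class scores from the original model and the diversity
    block with a weighted average (one plausible fusion rule; the
    paper's exact combination rule is assumed, not specified here)."""
    base = np.asarray(base_scores, dtype=float)
    div = np.asarray(diversity_scores, dtype=float)
    combined = alpha * base + (1.0 - alpha) * div
    return combined / combined.sum()  # renormalise to a distribution

# Hypothetical gender-prediction scores for classes [female, male]:
base = [0.30, 0.70]       # original (possibly biased) model
diversity = [0.60, 0.40]  # diversity block trained on balanced data
print(combine_scores(base, diversity))  # -> [0.45 0.55]
```

With equal weighting, the diversity block's scores temper the base model's skew, shifting the combined distribution toward a less biased prediction.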