This paper formulates an evidence-theoretic multimodal fusion approach, based on belief functions, that accounts for variability in image characteristics. When processing non-ideal images, variation in the quality of features at different levels of abstraction may cause individual classifiers to generate conflicting genuine-impostor decisions. Existing fusion approaches are non-adaptive and do not guarantee optimal performance improvements. We propose a contextual unification framework that dynamically selects the most appropriate evidence-theoretic fusion algorithm for a given scenario. The effectiveness of the approach is experimentally validated by fusing match scores from level-2 and level-3 fingerprint features. Compared with existing fusion algorithms, the proposed approach is computationally efficient, and verification accuracy is not compromised even when conflicting decisions are encountered.
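
As background for the belief-function fusion the abstract refers to, the sketch below illustrates Dempster's rule of combination, the classical evidence-theoretic fusion rule, applied to match scores from two fingerprint classifiers over the frame {genuine, impostor}. The score-to-mass mapping, the confidence parameter, and the example scores are illustrative assumptions; the paper's actual basic probability assignment and its contextual selection among fusion rules are not specified in the abstract.

```python
def score_to_mass(score, confidence=0.9):
    """Map a normalized match score in [0, 1] to a basic probability
    assignment over {genuine}, {impostor}, and the full frame
    Theta = {genuine, impostor}, which absorbs residual uncertainty.
    The linear mapping and the 0.9 confidence cap are assumptions."""
    m_genuine = confidence * score
    m_impostor = confidence * (1.0 - score)
    m_theta = 1.0 - m_genuine - m_impostor  # unassigned belief
    return {"genuine": m_genuine, "impostor": m_impostor, "theta": m_theta}

def dempster_combine(m1, m2):
    """Combine two mass functions with Dempster's rule of combination.
    The conflict K sums the mass assigned to contradictory hypotheses;
    the surviving masses are renormalized by 1 - K."""
    conflict = m1["genuine"] * m2["impostor"] + m1["impostor"] * m2["genuine"]
    if conflict >= 1.0:
        raise ValueError("Total conflict: Dempster's rule is undefined.")
    norm = 1.0 - conflict
    return {
        "genuine": (m1["genuine"] * m2["genuine"]
                    + m1["genuine"] * m2["theta"]
                    + m1["theta"] * m2["genuine"]) / norm,
        "impostor": (m1["impostor"] * m2["impostor"]
                     + m1["impostor"] * m2["theta"]
                     + m1["theta"] * m2["impostor"]) / norm,
        "theta": (m1["theta"] * m2["theta"]) / norm,
    }

# Hypothetical level-2 and level-3 match scores that disagree.
m_level2 = score_to_mass(0.80)  # level-2 (minutiae) classifier leans genuine
m_level3 = score_to_mass(0.35)  # level-3 classifier leans impostor
fused = dempster_combine(m_level2, m_level3)
decision = "genuine" if fused["genuine"] > fused["impostor"] else "impostor"
print(fused, decision)
```

The conflict term K in this rule is precisely the quantity that grows when the two classifiers disagree; a contextual framework of the kind the abstract describes would plausibly use such a measure when choosing among combination rules.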