Biometrics, the science of verifying the identity of individuals, is increasingly used in applications such as assisting law enforcement agencies in controlling crime and fraud. Existing techniques are unable to provide high accuracy in uncontrolled, noisy environments. Scalability poses a further challenge, because the data distribution varies as conditions change. This paper presents an adaptive context-switching algorithm coupled with online learning to address both challenges. The proposed framework, termed QFuse, uses the quality of input images to dynamically select the best biometric matcher or fusion algorithm for verifying the identity of an individual. The proposed algorithm continuously updates the selection process using online learning to address scalability and accommodate variations in data distribution. Results on the WVU multimodal database and a large real-world multimodal database obtained from a law enforcement agency show the efficacy of the proposed framework.
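To make the abstract's core idea concrete, the sketch below illustrates quality-driven context switching with an online weight update: input quality determines a context, the highest-weighted matcher or fusion rule for that context is selected, and weights are adapted as labeled outcomes arrive. This is a minimal illustrative sketch under assumed design choices (the quality binning, the multiplicative-weights update, and all names such as `ContextSwitcher` are hypothetical), not the authors' actual QFuse algorithm.

```python
# Hypothetical sketch of quality-based context switching with online updates.
# The binning, update rule, and all identifiers are assumptions for illustration.
from dataclasses import dataclass, field
from typing import Callable, Dict, Tuple


@dataclass
class ContextSwitcher:
    # Candidate matchers / fusion rules, keyed by name; each maps per-modality
    # match scores to a single decision score in [0, 1].
    matchers: Dict[str, Callable[[Dict[str, float]], float]]
    # Per-(quality context, matcher) weights, adapted online.
    weights: Dict[Tuple[str, str], float] = field(default_factory=dict)
    lr: float = 0.1  # assumed learning rate for the online update

    def quality_bin(self, quality: float) -> str:
        # Coarse quality context; a real system would use a richer quality assessment.
        return "good" if quality >= 0.5 else "poor"

    def select(self, quality: float) -> str:
        # Pick the currently highest-weighted matcher for this quality context.
        ctx = self.quality_bin(quality)
        return max(self.matchers, key=lambda m: self.weights.get((ctx, m), 1.0))

    def verify(self, scores: Dict[str, float], quality: float) -> float:
        return self.matchers[self.select(quality)](scores)

    def update(self, scores: Dict[str, float], quality: float, true_label: int) -> None:
        # Multiplicative-weights style update: reward matchers whose decision
        # agrees with the ground-truth label in this quality context.
        ctx = self.quality_bin(quality)
        for name, matcher in self.matchers.items():
            predicted = 1 if matcher(scores) >= 0.5 else 0
            w = self.weights.get((ctx, name), 1.0)
            factor = 1 + self.lr if predicted == true_label else 1 - self.lr
            self.weights[(ctx, name)] = w * factor


# Example usage with toy unimodal and sum-rule fusion "matchers".
switcher = ContextSwitcher(matchers={
    "face_only": lambda s: s["face"],
    "iris_only": lambda s: s["iris"],
    "sum_fusion": lambda s: 0.5 * (s["face"] + s["iris"]),
})
decision_score = switcher.verify({"face": 0.8, "iris": 0.4}, quality=0.7)
switcher.update({"face": 0.8, "iris": 0.4}, quality=0.7, true_label=1)
```

The design point the sketch is meant to convey is the separation of concerns described in the abstract: quality assessment decides the context, the context decides which matcher or fusion rule runs, and online learning keeps that mapping current as the data distribution drifts.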