Accurate and robust visual object tracking is one of the most challenging problems in computer vision. Recently, discriminative correlation filter trackers have shown promising results on benchmark datasets, with continuous improvements in tracking accuracy and robustness. Still, these algorithms fail when the target object and background undergo drastic changes over time. They are also incapable of resuming tracking once the target is lost, which limits their long-term tracking ability. The proposed BoVW-CFT is a classifier-based, generic technique for handling tracking uncertainties in correlation filter trackers. Tracking failures in correlation trackers are identified automatically, and an image classifier with training, testing, and online update stages is proposed as a detector in the tracking scenario, using Bag of Visual Words (BoVW) features. The proposed detector follows a parts-based model and is well suited to the tracking framework. Further, the online training stage, with an updated model and training samples, incorporates temporal information, helping to detect rotated, blurred, and scaled versions of the target. On detecting a target loss in the correlation tracker, the trained classifier, referred to as the detector, is invoked to re-initialize the tracker with the actual target location. Therefore, for each tracking uncertainty, two output patches are obtained, one each from the base tracker and the classifier. The final target location is estimated using normalized cross-correlation with the initial target patch. The method mitigates model drift in correlation trackers and learns a robust model capable of long-term tracking. Extensive experimental results demonstrate improvements of 4.1% in expected overlap, 1.86% in accuracy, and 15.46% in robustness on VOT2016, and 1.82% in overlap precision, 2.32% in AUC, and 2.87% in success rate on OTB100. © 2018 ACM.
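The BoVW encoding mentioned above can be illustrated with a minimal sketch: local descriptors extracted from a patch are assigned to their nearest visual words in a learned codebook, and the patch is represented by a normalized word histogram. This is a generic BoVW encoder, not the paper's exact pipeline; the descriptor extraction and codebook learning (e.g. k-means over training descriptors) are assumed to happen elsewhere, and the function name `bovw_histogram` is illustrative.

```python
import numpy as np

def bovw_histogram(descriptors: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Encode local descriptors (n_desc, d) as a normalized histogram
    over the visual words in codebook (n_words, d)."""
    # Euclidean distance from every descriptor to every codeword.
    dists = np.linalg.norm(descriptors[:, None, :] - codebook[None, :, :], axis=2)
    # Hard-assign each descriptor to its nearest visual word.
    words = dists.argmin(axis=1)
    # Count word occurrences and normalize to a unit-sum histogram.
    hist = np.bincount(words, minlength=len(codebook)).astype(np.float64)
    return hist / hist.sum()

# Toy example: two codewords, four descriptors split evenly between them.
codebook = np.array([[0.0, 0.0], [10.0, 10.0]])
descs = np.array([[0.1, 0.0], [0.0, 0.2], [9.9, 10.0], [10.0, 9.8]])
hist = bovw_histogram(descs, codebook)
```

Such histograms would serve as fixed-length feature vectors for the target/background classifier regardless of how many descriptors each patch yields.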
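The final decision step described in the abstract, choosing between the tracker's and the detector's output patches by normalized cross-correlation against the initial target patch, can be sketched as follows. This is a minimal illustration assuming fixed-size grayscale patches; the helper names `ncc` and `select_patch` are hypothetical, not from the paper.

```python
import numpy as np

def ncc(patch: np.ndarray, template: np.ndarray) -> float:
    """Zero-normalized cross-correlation between two equal-size patches."""
    a = patch.astype(np.float64) - patch.mean()
    b = template.astype(np.float64) - template.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0  # flat patch: correlation is undefined, treat as no match
    return float((a * b).sum() / denom)

def select_patch(tracker_patch, detector_patch, initial_patch):
    """Keep whichever candidate correlates better with the initial target."""
    scores = [ncc(tracker_patch, initial_patch), ncc(detector_patch, initial_patch)]
    best = int(np.argmax(scores))
    return (tracker_patch, detector_patch)[best], scores[best]
```

In the full system the winning patch would re-initialize the correlation filter tracker at the estimated target location.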