Deep GoogLeNet Features for Visual Object Tracking
P. Aswathy, Siddhartha
Published by the Institute of Electrical and Electronics Engineers Inc. (IEEE)
Pages: 60 - 66
Convolutional Neural Networks (CNNs) have recently become very popular in visual object tracking due to their strong feature representation capabilities. Almost all current CNN-based trackers use features extracted from the shallow convolutional layers of the VGGNet architecture. This paper investigates the impact of deep convolutional layer features in an object tracking framework. We demonstrate, for the first time, the viability of features extracted from the deep layers of the GoogLeNet CNN architecture for object tracking. We integrated GoogLeNet features into a discriminative correlation filter based tracking framework. Our experimental results show that GoogLeNet features provide significant computational advantages over the conventionally used VGGNet features, without much compromise in tracking performance. We observed that features obtained from the inception modules of GoogLeNet have high channel depth. Principal Component Analysis (PCA) was therefore employed to reduce the dimensionality of the extracted features, which greatly reduces the computational cost and thus improves the speed of the tracking process. Extensive evaluations were performed on three benchmark datasets (OTB, ALOV300++ and VOT2016), with performance measured using metrics such as F-score, One Pass Evaluation, robustness and accuracy. © 2018 IEEE.
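The PCA step described above can be sketched as follows. This is a minimal, hypothetical illustration using NumPy with randomly generated data standing in for a deep GoogLeNet feature map; the spatial size (26×26), channel depth (832, roughly that of a late inception module), and the number of retained components (64) are assumptions for illustration, not values from the paper.

```python
import numpy as np

# Simulated deep feature map: H x W spatial positions, D channels.
# In the paper's pipeline these would come from a GoogLeNet inception layer;
# here we use random data purely to illustrate the PCA reduction step.
rng = np.random.default_rng(0)
H, W, D = 26, 26, 832                  # assumed dimensions for illustration
features = rng.standard_normal((H, W, D))

# Treat each spatial position as one sample: shape (H*W, D).
X = features.reshape(-1, D)
X = X - X.mean(axis=0)                 # center channels before PCA

# PCA via SVD: rows of Vt are principal directions in channel space.
k = 64                                 # assumed number of retained components
U, S, Vt = np.linalg.svd(X, full_matrices=False)
X_reduced = X @ Vt[:k].T               # project onto the top-k directions

# The reduced map can be reshaped back to (H, W, k) and fed to the
# correlation filter, cutting the channel depth from 832 to 64.
reduced_map = X_reduced.reshape(H, W, k)
print(reduced_map.shape)
```

Reducing the channel depth this way shrinks the per-frame cost of the correlation filter roughly in proportion to the number of channels kept, which is the computational saving the abstract attributes to PCA.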