A gesture-based interface involves tracking a moving hand across frames and extracting the semantic interpretation corresponding to the gesture. This is a difficult task, since both the position and the appearance of the hand change over time. Further, such a system should be robust to the speed at which the gesture is performed. This paper presents a novel attempt at developing a hand gesture-based interface. We propose an on-line predictive EigenTracker for the moving hand; our tracker learns the eigenspace on the fly. We propose a new state-based representation scheme for hand gestures, based on the eigenspace reconstruction error, which makes the system independent of the speed at which the gesture is performed. We use learning to adapt the gesture recognition system to individual requirements. We show results of successful operation of our system even in the presence of background clutter and other moving objects.
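To make the role of the eigenspace reconstruction error concrete, the following is a minimal sketch of the underlying idea: learn a low-dimensional PCA basis from hand-appearance patches, and score a new patch by how poorly the basis reconstructs it. The function names, the choice of `k`, and the plain batch SVD are illustrative assumptions, not the paper's actual on-line tracker formulation.

```python
import numpy as np

def fit_eigenspace(patches, k=8):
    """Illustrative sketch: learn a k-dimensional eigenspace (PCA basis)
    from flattened appearance patches, one patch per row of `patches`."""
    X = np.asarray(patches, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centred data; rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]  # mean patch and top-k basis vectors

def reconstruction_error(patch, mean, basis):
    """Project a patch onto the eigenspace and return the L2 norm of
    the residual -- low for appearances the basis has seen, high otherwise."""
    x = np.asarray(patch, dtype=float) - mean
    coeffs = basis @ x              # eigenspace coordinates
    residual = x - basis.T @ coeffs # part the basis cannot explain
    return np.linalg.norm(residual)
```

A state-based scheme in this spirit would segment a gesture by watching this error: while the hand's appearance stays within the learned subspace the error stays low, and a sustained rise signals a transition to a new appearance state, independently of how fast the gesture is performed.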