In this paper we propose SEVA (Salient Events for Video Analytics), a framework for accurate detection and localization of salient events in a given video shot. The proposed method is grounded in statistical learning theory and models salient event detection as a one-class classification problem. Video frames are split into blocks from which both spatial and temporal features are extracted. Given a video shot, we first track the moving foreground blob; features are then extracted using only the foreground pixels to avoid the influence of the background. Using Support Vector Data Description (SVDD) in a kernel feature space, the decision boundary of the normal activity class is modeled for each block of a video frame. For a test video sequence, feature vectors are computed from the frames, and the learned model is used to classify each block as normal or salient. Finally, we apply a spatio-temporal smoothing step to remove false positives. We report both qualitative and quantitative results on two real-world benchmark video datasets, and compare the performance of SEVA with five recent works on video event detection to validate its effectiveness. © 2014 IEEE.
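The per-block one-class classification step can be illustrated with a minimal sketch. This is not the authors' implementation: it uses scikit-learn's `OneClassSVM`, which with an RBF kernel is equivalent to SVDD, and the feature dimensions, data, and the `nu` setting are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM  # RBF OneClassSVM is equivalent to SVDD

rng = np.random.default_rng(0)
# Hypothetical per-block feature vectors (e.g. spatio-temporal descriptors)
# extracted from foreground pixels of frames showing only normal activity.
normal_feats = rng.normal(0.0, 1.0, size=(200, 8))

# nu upper-bounds the fraction of training points treated as outliers,
# tightening or loosening the learned boundary around the normal class.
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(normal_feats)

# Two test blocks: one resembling the normal training data,
# one far outside the learned support (a candidate salient event).
test_blocks = np.vstack([
    rng.normal(0.0, 1.0, size=(1, 8)),
    np.full((1, 8), 8.0),
])
labels = model.predict(test_blocks)  # +1 = normal, -1 = salient
```

In the full pipeline, one such model would be trained per spatial block, and the resulting per-block labels would then be cleaned up by the spatio-temporal smoothing stage described above.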