Background foreground boundary aware efficient motion search for surveillance videos
T.S. Shinde, W. Lin, L. Shen
Published in Signal Processing: Image Communication (Elsevier B.V.)
2020
Volume: 82
   
Abstract
The huge amount of data in surveillance video coding demands high compression rates with low computational requirements for efficient storage and archival. Motion estimation is a very time-consuming process in the traditional video coding framework, and hence reducing its computational complexity is a pressing task, especially for surveillance videos. The large proportion of background in surveillance videos makes them a special case for coding. Existing surveillance video coding methods propose separate search mechanisms for background and foreground regions. However, they still suffer from misclassification and inefficient search strategies because they do not consider the inherent motion characteristics of the foreground regions. In this paper, a background-foreground-boundary aware block matching algorithm is proposed to exploit the special characteristics of surveillance videos. A novel three-step framework is proposed for the boundary-aware block matching process. Firstly, the blocks are categorized into three classes: background, foreground, and boundary blocks. Secondly, the motion search employs a different search strategy for each class: a zero-motion-vector-based search is used for background blocks, while eight rotating uni-wing diamond search patterns are proposed to exploit the fast and directional motion characteristics of boundary and foreground blocks. Thirdly, a further speed-up is achieved through a novel region-based sub-sampled structure. Experimental results demonstrate that this scheme achieves a two- to four-fold speed-up over existing methods while maintaining better matching accuracy. © 2020 Elsevier B.V.
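The abstract describes a per-block dispatch: classify each block, then apply a class-specific search. The minimal Python sketch below illustrates that control flow only. The classification rule and its thresholds, the exact uni-wing pattern geometry, and the direction-selection logic are hypothetical stand-ins, since the abstract does not specify them, and the region-based sub-sampling of the third step is omitted.

```python
import numpy as np

# Hypothetical class labels; the paper's actual classification
# criterion and thresholds are not given in the abstract.
BACKGROUND, BOUNDARY, FOREGROUND = 0, 1, 2

def classify_block(block, bg_block, t_low=2.0, t_high=8.0):
    # Step 1 (sketch): label a block by its mean absolute difference
    # from a background model; t_low/t_high are assumed thresholds.
    mad = np.mean(np.abs(block.astype(np.float32) - bg_block.astype(np.float32)))
    if mad < t_low:
        return BACKGROUND
    return BOUNDARY if mad < t_high else FOREGROUND

def sad(a, b):
    # Sum of absolute differences as the block-matching cost.
    return int(np.sum(np.abs(a.astype(np.int32) - b.astype(np.int32))))

# Eight "uni-wing" patterns, one per 45-degree direction: each wing is a
# guessed half-diamond of offsets biased toward that direction. The actual
# geometry of the paper's rotating uni-wing diamond patterns may differ.
UNI_WING = {
    d: [(0, 0)] + [(int(round(2 * np.sin(np.deg2rad(a)))),
                    int(round(2 * np.cos(np.deg2rad(a)))))
                   for a in (d - 45, d, d + 45)]
    for d in range(0, 360, 45)
}

def directional_search(cur, ref, y, x, bsize, direction):
    # Step 2 (sketch): evaluate one uni-wing pattern around (y, x) and
    # return the cheapest in-bounds offset.
    best_cost, best_mv = None, (0, 0)
    for dy, dx in UNI_WING[direction]:
        ry, rx = y + dy, x + dx
        if 0 <= ry <= ref.shape[0] - bsize and 0 <= rx <= ref.shape[1] - bsize:
            cost = sad(cur, ref[ry:ry + bsize, rx:rx + bsize])
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv

def motion_search(cur_frame, ref_frame, bg_model, bsize=16):
    # Per-class dispatch as outlined in the abstract: zero-MV search for
    # background blocks, directional search for boundary/foreground blocks.
    h, w = cur_frame.shape
    mvs = {}
    for y in range(0, h - bsize + 1, bsize):
        for x in range(0, w - bsize + 1, bsize):
            block = cur_frame[y:y + bsize, x:x + bsize]
            label = classify_block(block, bg_model[y:y + bsize, x:x + bsize])
            if label == BACKGROUND:
                mvs[(y, x)] = (0, 0)  # zero-motion-vector shortcut
            else:
                # Direction selection is also a guess: try all eight wings
                # and keep the cheapest candidate.
                cands = [directional_search(block, ref_frame, y, x, bsize, d)
                         for d in range(0, 360, 45)]
                mvs[(y, x)] = min(cands, key=lambda mv: sad(
                    block, ref_frame[y + mv[0]:y + mv[0] + bsize,
                                     x + mv[1]:x + mv[1] + bsize]))
    return mvs
```

In a real encoder the block labels would come from the codec's background modeling stage, and the directional search would iterate until its center position wins; both are simplified here to keep the sketch short.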
About the journal
Journal: Signal Processing: Image Communication
Publisher: Elsevier B.V.
ISSN: 0923-5965
Open Access: No