Domain Adaptive Egocentric Person Re-identification
A. Choudhary, A. Karmakar
Published by Springer Science and Business Media Deutschland GmbH
2021
Volume: 1378 CCIS
Pages: 81 - 92
Abstract
Person re-identification (re-ID) in first-person (egocentric) vision is a fairly new and unexplored problem. With the proliferation of wearable video recording devices, egocentric data is becoming readily available, and person re-identification stands to benefit greatly from it. However, there is a significant lack of large-scale structured egocentric datasets for person re-identification, owing to the poor video quality and the scarcity of individuals in most of the recorded content. Although a great deal of research has been done on person re-identification with fixed surveillance cameras, those methods do not directly benefit egocentric re-ID. Machine learning models trained on publicly available large-scale re-ID datasets cannot be applied to egocentric re-ID because of the dataset bias problem. The proposed algorithm uses neural style transfer (NST), built on a variant of a Convolutional Neural Network (CNN), to combine the benefits of fixed-camera vision and first-person vision. NST generates images that carry features from both egocentric and fixed-camera datasets; these images are fed through a VGG-16 network trained on a fixed-camera dataset for feature extraction, and the extracted features are then used to re-identify individuals. The fixed-camera dataset Market-1501 [20] and the first-person dataset EGO Re-ID [3] are used in this work, and the results are on par with existing re-identification models in the egocentric domain. © 2021, Springer Nature Singapore Pte Ltd.
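The abstract describes a two-stage pipeline: style-transferred images bridging the egocentric and fixed-camera domains are passed through a VGG-16 backbone, and the resulting features are matched to re-identify people. The sketch below illustrates only the second stage (feature extraction and similarity ranking) under assumed settings; the NST image-generation step, the paper's fine-tuning on Market-1501, and all preprocessing choices shown here are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Assumed ImageNet-style preprocessing; the paper's exact settings are not given in the abstract.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# VGG-16 used as a feature extractor. The paper trains it on a fixed-camera
# dataset (Market-1501); that fine-tuning step is omitted in this sketch.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
vgg.classifier = vgg.classifier[:-1]  # drop the final classification layer -> 4096-d features
vgg.eval()

def extract_features(image_path: str) -> torch.Tensor:
    """Return an L2-normalised feature vector for one (style-transferred) person image."""
    img = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        feat = vgg(img)
    return F.normalize(feat, dim=1).squeeze(0)

def rank_gallery(query_path: str, gallery_paths: list[str]) -> list[tuple[str, float]]:
    """Rank gallery images by cosine similarity to the query (hypothetical helper)."""
    q = extract_features(query_path)
    scored = [(p, float(torch.dot(q, extract_features(p)))) for p in gallery_paths]
    return sorted(scored, key=lambda x: x[1], reverse=True)
```

In such a setup, the top-ranked gallery entries would be taken as the re-identified matches for the query person; the actual matching metric and ranking protocol used in the paper may differ.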
About the journal
Journal: Communications in Computer and Information Science
Publisher: Springer Science and Business Media Deutschland GmbH
ISSN: 1865-0929