A Robust Pose Transformational GAN for Pose Guided Person Image Synthesis
A. Karmakar,
Published in: Communications in Computer and Information Science (Springer Science and Business Media Deutschland GmbH)
Volume: 1249
Pages: 89 - 99
Generating photorealistic images of human subjects in any unseen pose has crucial applications in building a complete appearance model of the subject. From a computer vision perspective, however, this task is significantly challenging owing to the difficulty of modelling the data distribution conditioned on pose. Existing works rely on complicated pose transformation models with additional features such as foreground segmentation and human body parsing to achieve robustness, which incurs computational overhead. In this work, we propose a simple yet effective pose transformation GAN that uses residual learning, without any additional feature learning, to generate a given human image in any arbitrary pose. Using effective data augmentation techniques and careful model tuning, we achieve robustness to illumination, occlusion, distortion and scale. We present a detailed qualitative and quantitative study demonstrating the superiority of our model over existing methods on two large datasets. © 2020, Springer Nature Singapore Pte Ltd.
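The abstract names residual learning as the core of the generator. As a minimal, illustrative sketch (the paper's actual architecture is not given in this abstract, and all names below are hypothetical), the idea is that a residual block outputs its input plus a learned correction, so the network only has to model the change between poses rather than the full mapping:

```python
def residual_block(features, transform):
    """Identity-shortcut residual block: output = input + transform(input).

    `transform` stands in for the learned sub-network; here it is any
    callable mapping a feature list to a same-length residual list.
    """
    residual = transform(features)
    return [f + r for f, r in zip(features, residual)]


# Toy "learned" transform for illustration only: scales each feature by 10.
toy_transform = lambda xs: [10 * x for x in xs]

out = residual_block([1, 2, 3], toy_transform)
# out == [11, 22, 33]: the shortcut passes the input through unchanged,
# and the block contributes only the residual on top of it.
```

In a pose-transfer GAN, stacking such blocks lets the generator preserve the subject's appearance via the shortcut path while the residual branches model the pose-dependent deformation.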
About the journal
Journal: Communications in Computer and Information Science
Publisher: Springer Science and Business Media Deutschland GmbH