Deceiving the Protector: Fooling Face Presentation Attack Detection Algorithms
A. Agarwal, A. Sehwag
Published by Institute of Electrical and Electronics Engineers Inc.
Pages: 1-6
Face recognition systems are vulnerable to presentation attacks such as replayed videos and 3D masks. Several presentation attack detection (PAD) algorithms have been developed in the literature to address this problem. However, this paper showcases, for the first time in the literature, that it is possible to "fool" PAD algorithms using adversarial perturbations. The proposed approach attacks presentation attack detection algorithms at the PAD feature level by transforming features from one class (attack) to the other (real). The PAD feature-tampering network uses a convolutional autoencoder to learn the perturbations. The proposed algorithm is evaluated against both CNN-based and local binary pattern (LBP)-based PAD algorithms. Experiments on three databases, Replay, SMAD, and Face Morph, show that the proposed approach at least doubles the equal error rate (EER) of the PAD algorithms. For instance, on the SMAD database, the PAD EER of 20.1% rises to 55.7% after attacking the PAD algorithm. © 2019 IEEE.
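The core idea in the abstract, shifting a PAD feature vector so that an attack sample is scored as real, can be illustrated with a deliberately simplified sketch. The paper learns this class transformation with a convolutional autoencoder; the closed-form shift against a linear classifier below is only a hypothetical stand-in, and the weights and feature vector are invented for illustration.

```python
# Toy illustration of feature-level tampering: push an "attack-class"
# feature vector across a linear PAD decision boundary so it is scored
# as "real". The paper learns this mapping with a convolutional
# autoencoder; this closed-form linear shift is only a stand-in.

def pad_score(w, b, x):
    """Linear PAD score: positive => classified as real, negative => attack."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def tamper(w, b, x, margin=0.5):
    """Minimally shift x along w so its score becomes exactly +margin."""
    norm_sq = sum(wi * wi for wi in w)
    step = (margin - pad_score(w, b, x)) / norm_sq
    return [xi + step * wi for wi, xi in zip(w, x)]

# Hypothetical 4-D PAD feature of a replay-attack sample.
w, b = [1.0, -2.0, 0.5, 1.5], -0.2
x_attack = [0.1, 0.8, 0.3, 0.2]
x_fooled = tamper(w, b, x_attack)
```

Here `x_attack` scores negative (detected as an attack), while the tampered `x_fooled` scores positive, so the detector passes it as real. Against a deep PAD model the boundary is nonlinear, which is why the paper trains a network to produce the perturbation rather than computing it in closed form.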
About the journal
Journal: 2019 International Conference on Biometrics, ICB 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Open Access: No