Face recognition systems are vulnerable to presentation attacks such as replay and 3D masks. In the literature, several presentation attack detection (PAD) algorithms have been developed to address this problem. However, for the first time in the literature, this paper shows that it is possible to "fool" PAD algorithms using adversarial perturbations. The proposed perturbation approach attacks presentation attack detection algorithms at the PAD feature level by transforming features from one class (attack) to another (real). The PAD feature tampering network utilizes a convolutional autoencoder to learn the perturbations. The proposed algorithm is evaluated against CNN-based and local binary pattern (LBP)-based PAD algorithms. Experiments on three databases, Replay, SMAD, and Face Morph, show that the proposed approach increases the equal error rate (EER) of PAD algorithms by at least a factor of two. For instance, on the SMAD database, the PAD EER increases from 20.1% to 55.7% after attacking the PAD algorithm. © 2019 IEEE.
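To make the feature-level tampering idea concrete, the following is a minimal sketch, assuming a PyTorch setting: a convolutional autoencoder produces an additive perturbation on attack-class PAD feature maps, trained so that a frozen PAD classifier scores the perturbed features as "real" while keeping them close to the originals. The class name, feature-map dimensions, layer sizes, and loss weighting are hypothetical illustrations, not the authors' released implementation.

```python
# Illustrative sketch only: feature tampering via a convolutional autoencoder.
# Shapes, layer widths, and the loss weighting are assumptions for demonstration.
import torch
import torch.nn as nn


class FeatureTamperingAE(nn.Module):
    """Convolutional autoencoder operating on PAD feature maps (B, C, H, W)."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(16, 32, kernel_size=3, stride=2,
                               padding=1, output_padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, channels, kernel_size=3, stride=2,
                               padding=1, output_padding=1),
        )

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # The network outputs an additive perturbation applied to the input features.
        return feat + self.decoder(self.encoder(feat))


def tampering_loss(perturbed, original, pad_classifier, real_label):
    """Push perturbed attack features toward the real class while staying
    close to the original features (the 0.1 weight is an illustrative choice)."""
    cls_loss = nn.functional.cross_entropy(pad_classifier(perturbed), real_label)
    rec_loss = nn.functional.mse_loss(perturbed, original)
    return cls_loss + 0.1 * rec_loss
```

Here `pad_classifier` stands for a frozen PAD model head that maps feature maps to real/attack logits; only the autoencoder's parameters would be updated, so the attack tampers with the PAD features rather than the PAD model itself.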