Presentation attacks can grant attackers unauthorized access and fool face recognition systems in both small-scale and large-scale applications. Among presentation attacks, 2D print and replay attacks are especially popular because of their ease and cost-effectiveness in attacking face recognition systems. Over the years, several successful presentation attack detection algorithms have been developed to detect 2D print and replay attacks. Generally, 2D presentation attacks are detected through the presence or absence of micro-patterns that distinguish a real input from an attacked one. However, if a smart attacker digitally 'pre-processes' the image with intensity transforms before performing the 2D presentation attack, the micro-pattern differences between real and attacked samples are minimized. In this paper, for the first time, we show that simple intensity transforms such as gamma correction, log transform, and brightness control can help an attacker deceive face presentation attack detection algorithms. Experimental results demonstrate that such a smart attacker can increase the error rate of hand-crafted as well as deep learning based presentation attack detectors. © 2019 IEEE.
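
The intensity transforms named in the abstract are standard point operations on pixel values. A minimal sketch of how they might look is given below; the function names, the gamma value, and the brightness offset are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def gamma_correction(img, gamma=0.8):
    # Power-law (gamma) transform: normalize to [0, 1],
    # raise to the power gamma, rescale back to [0, 255].
    x = img.astype(np.float64) / 255.0
    return np.clip(255.0 * np.power(x, gamma), 0, 255).astype(np.uint8)

def log_transform(img):
    # s = c * log(1 + r), with c chosen so the output spans [0, 255].
    x = img.astype(np.float64)
    c = 255.0 / np.log(1.0 + 255.0)
    return np.clip(c * np.log(1.0 + x), 0, 255).astype(np.uint8)

def brightness_shift(img, delta=20):
    # Brightness control: add a constant offset, clipped to valid range.
    return np.clip(img.astype(np.int16) + delta, 0, 255).astype(np.uint8)
```

All three operations preserve the overall facial appearance while altering the local intensity statistics that micro-pattern-based detectors rely on, which is the gap the paper exploits.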