State-of-the-art deep learning models have achieved remarkable performance across multiple computer vision applications such as object recognition, face recognition, and digit/character classification. Most of these models rely heavily on the gradient information flowing through the network for learning. By utilizing this gradient information, a simple gradient-sign-method-based attack can be crafted to fool deep learning models. However, the primary concern with this attack is the perceptibility of the noise required for a large degradation in classification accuracy. This research addresses the question of whether imperceptible gradient noise can be generated to fool deep neural networks. For this, the role of the sign function in the gradient attack is analyzed. The analysis shows that omitting the sign function, i.e., using the raw gradient magnitude, not only leads to a successful attack but also yields noise that is imperceptible to a human observer. Extensive quantitative experiments performed using two convolutional neural networks validate this observation. For instance, the AlexNet architecture yields 63.54% accuracy on the CIFAR-10 database, which reduces to 0.0% and 26.39% when the sign (i.e., perceptible) and without-sign (i.e., imperceptible) versions of the gradient are utilized, respectively. Further, the role of the direction of the gradient for image manipulation is studied. When an image is manipulated in the positive direction of the gradient, an adversarial image is generated. On the other hand, if the opposite direction of the gradient is used, the classification error rate of the CNN model is reduced: on AlexNet, the error rate of 36.46% reduces to 4.29% when CIFAR-10 images are manipulated in the negative direction of the gradient. Additional results on multiple object databases, including CIFAR-100, Fashion-MNIST, and SVHN, are presented in the full paper. © 2020 IEEE.
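The perturbation variants discussed above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the helper name `perturb`, the step size `eps`, and the toy gradient are assumptions for illustration, and in practice the gradient would come from backpropagating the classification loss through the network.

```python
import numpy as np

def perturb(x, grad, eps, use_sign=True, direction=+1):
    """Apply a gradient-based perturbation to an input image x in [0, 1].

    use_sign=True  -> sign-based (FGSM-style, perceptible) noise: eps * sign(grad)
    use_sign=False -> without-sign (gradient-magnitude, imperceptible) noise: eps * grad
    direction=+1   -> move along the gradient (creates an adversarial image)
    direction=-1   -> move against the gradient (reduces classification error)
    """
    noise = np.sign(grad) if use_sign else grad
    return np.clip(x + direction * eps * noise, 0.0, 1.0)

# Toy example: a uniform gray image and a hypothetical loss gradient.
x = np.full((2, 2), 0.5)
grad = np.array([[0.3, -0.2],
                 [0.0,  0.9]])

adv_sign = perturb(x, grad, eps=0.1, use_sign=True)   # every pixel shifts by +/- eps
adv_mag  = perturb(x, grad, eps=0.1, use_sign=False)  # shift scales with |grad|, mostly < eps
benign   = perturb(x, grad, eps=0.1, direction=-1)    # negative direction of the gradient
```

Note how the sign variant moves every pixel by the full `eps` regardless of the gradient's size, while the without-sign variant scales the change by the (typically small) gradient magnitude, which is the intuition behind its imperceptibility.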