Night-vision image fusion plays a critical role in detecting targets and obstructions in low light or total darkness, and is therefore important for pedestrian recognition, vehicle navigation, surveillance, and monitoring applications. The central idea is to fuse low-light visible and infrared imagery into a single output. In this paper, we describe a new fusion framework for spatially registered visible and infrared images. The proposed framework exploits the properties of fractal dimension and phase congruency in the non-subsampled contourlet transform (NSCT) domain. It first applies the multiscale NSCT to the visible and IR images to obtain low- and high-frequency bands; the corresponding frequency bands of the transformed images are then fused according to their characteristics. Finally, the inverse NSCT is performed to obtain the fused image. The performance of the proposed framework is validated by extensive experiments on different scene imagery, where its advantages are demonstrated both subjectively and objectively. © 2015, Springer-Verlag London.
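The decompose–fuse–reconstruct pipeline described above can be illustrated with a minimal sketch. The abstract does not specify the fusion rules or the NSCT implementation, so the sketch below substitutes a simple one-level box-filter decomposition for the NSCT and uses two common placeholder rules (averaging for low-frequency bands, maximum absolute value for high-frequency detail); the function names and rules are illustrative assumptions, not the authors' method.

```python
import numpy as np

def split_bands(img, k=5):
    # Stand-in for the NSCT low-pass stage: a k x k box filter gives the
    # low-frequency band; the residual is the high-frequency (detail) band.
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    low = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            low += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    low /= k * k
    high = img - low
    return low, high

def fuse(visible, infrared):
    # Decompose both registered inputs into low- and high-frequency bands.
    low_v, high_v = split_bands(visible)
    low_i, high_i = split_bands(infrared)
    # Placeholder fusion rules (the paper's rules use fractal dimension and
    # phase congruency instead): average the low bands, keep the stronger
    # detail coefficient in the high bands.
    low = 0.5 * (low_v + low_i)
    high = np.where(np.abs(high_v) >= np.abs(high_i), high_v, high_i)
    # For this one-level additive split, reconstruction is simply low + high,
    # mirroring the inverse-NSCT step.
    return low + high
```

A real implementation would replace `split_bands` with a multiscale, multidirectional NSCT decomposition and apply the paper's feature-driven rules per subband.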