The pose of a UAV has traditionally been estimated by On-Board Computers (OBCs) using Inertial Measurement Unit (IMU) sensor data as input. In this paper, an architecture for estimating the pose of a UAV using popular Computer Vision based methods is proposed. It consists of three sub-modules: an Image Segmentation (IS) block, a Perspective Transform (PT) block, and a Pose Determination (PD) block. The IS block uses segmentation to detect salient points in the image, the PT block transforms image coordinates to world coordinates, and the PD block uses the camera parameters and the object dimensions to provide the attitude and translation matrices of the UAV. The proposed approach can adapt to changes in environmental parameters. The system was characterized by observing the errors in the estimated yaw and estimated depth, and an in-depth analysis of the nature and variation of these errors across various experimental parameters was carried out. The resulting algorithm estimates the pose of a body without establishing any communication with that body.
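To make the pipeline concrete, the sketch below illustrates one plausible realization of the PD block, assuming OpenCV's solvePnP is used to recover attitude and translation from the salient points produced by the IS block together with known camera parameters and object dimensions. The marker geometry, pixel coordinates, and intrinsics are illustrative placeholders, not values from the paper.

```python
# Minimal sketch of a Pose Determination (PD) step: given 2D salient points
# (from the IS block), the corresponding 3D points on the tracked body
# (object dimensions), and the camera intrinsics, recover rotation and
# translation, then report yaw and depth as characterized in the paper.
import numpy as np
import cv2

# 3D coordinates of salient points in the body frame (metres);
# hypothetical square marker of side 0.4 m.
object_points = np.array([
    [-0.2, -0.2, 0.0],
    [ 0.2, -0.2, 0.0],
    [ 0.2,  0.2, 0.0],
    [-0.2,  0.2, 0.0],
], dtype=np.float64)

# Corresponding 2D pixel coordinates from the segmentation stage (placeholders).
image_points = np.array([
    [310.0, 250.0],
    [410.0, 252.0],
    [408.0, 352.0],
    [308.0, 350.0],
], dtype=np.float64)

# Assumed camera calibration: focal lengths and principal point in pixels.
camera_matrix = np.array([
    [800.0,   0.0, 320.0],
    [  0.0, 800.0, 240.0],
    [  0.0,   0.0,   1.0],
], dtype=np.float64)
dist_coeffs = np.zeros(5)  # assume negligible lens distortion

# Solve the perspective-n-point problem for the body pose relative to the camera.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)  # 3x3 attitude (rotation) matrix

yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))  # yaw about the camera z-axis
depth = float(tvec[2])                          # distance along the optical axis

print(f"estimated yaw: {yaw:.1f} deg, estimated depth: {depth:.2f} m")
```

Because the pose follows purely from the image and the known geometry of the observed body, no communication link with the body is required, which is the property highlighted in the abstract.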