Bin picking with vision-based sensors requires accurate estimation of an object's location and pose in order to position the end effector of the robotic arm. The computational burden and complexity depend on the parametric model adopted for the task. Learning-based techniques that operate on low-dimensional manifolds offer a computationally more efficient alternative. In this paper we employ Locally Linear Embedding (LLE) and deep learning (with autoencoders) for manifold learning both in the visual domain and over the robotic manipulator's parameters for visual servoing. Images of clusters of cylindrical pellets were used as the training dataset in the visual domain. The corresponding parameters of the six-degrees-of-freedom robot for picking a designated cylindrical pellet formed the training dataset in the robotic configuration space. The correspondence between the weight coefficients of the LLE manifold in the visual domain and those in the robotic domain is established through regression. Autoencoders in conjunction with feedforward neural networks were used to learn the correspondence between the high-dimensional visual space and the low-dimensional configuration space. We compared the results of the two implementations on the same dataset and found that manifold learning using autoencoders yielded better performance. The eye-in-hand configuration used with a KUKA KR5 robotic arm and a Basler camera offers a potentially effective and efficient solution to the bin-picking problem through learning-based visual servoing. © Springer Science+Business Media Singapore 2017.
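As a rough illustration of the regression step described above, the sketch below maps low-dimensional visual-manifold coordinates (such as those an LLE embedding of pellet images might produce) to 6-DOF joint configurations via ordinary least squares. All names, dimensions, and the synthetic data are illustrative assumptions, not details taken from the paper's implementation.

```python
import numpy as np

# Hypothetical setup: assume LLE has already reduced each training image to a
# d-dimensional embedding, and each image has a matching 6-DOF joint vector
# recorded during teaching. Dimensions below are illustrative only.
rng = np.random.default_rng(0)
n_train, d_embed, n_joints = 50, 3, 6

# Synthetic stand-ins for the visual embeddings and robot configurations;
# the mapping is linear by construction so the sketch is easy to verify.
Z = rng.normal(size=(n_train, d_embed))        # visual-manifold coordinates
W_true = rng.normal(size=(d_embed, n_joints))  # unknown linear map
Q = Z @ W_true                                 # joint-space targets (radians)

# Regression step: fit a linear map from the visual embedding to the robot
# configuration space using least squares, with an appended bias column.
Z_aug = np.hstack([Z, np.ones((n_train, 1))])
coef, *_ = np.linalg.lstsq(Z_aug, Q, rcond=None)

def predict_pose(z):
    """Map a new visual-manifold coordinate to a 6-DOF joint vector."""
    return np.append(z, 1.0) @ coef

q_pred = predict_pose(rng.normal(size=d_embed))
print(q_pred.shape)  # (6,)
```

In practice a nonlinear regressor (or, as the paper's second approach does, an autoencoder paired with a feedforward network) can replace the linear map when the embedding-to-configuration relationship is not well approximated linearly.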