We propose a graph-based deep network for predicting associations between field labels and field values in heterogeneous handwritten form images. We consider forms in which the field label comprises printed text and the field value consists of handwritten text. Inspired by the relationship-prediction capability of graph-based models, we use a Graph Autoencoder to perform the intended field-label-to-field-value association in a given form image. To the best of our knowledge, this is the first attempt to perform label-value association in a handwritten form image using a machine learning approach. We have prepared a handwritten form image dataset comprising 300 images from 30 different templates, with 10 images per template. Our framework has been evaluated under different network parameter settings and shows promising results.
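To make the core idea concrete, the sketch below shows one plausible way a Graph Autoencoder can be used for this kind of association task: text regions (printed labels and handwritten values) become graph nodes, and label-value associations become edges to be reconstructed by an inner-product decoder. This is a minimal illustration under our own assumptions (feature dimensions, layer sizes, and the toy adjacency are hypothetical), not the paper's actual architecture or training pipeline.

```python
# Minimal Graph Autoencoder sketch for label-value link prediction.
# Assumptions (not from the paper): each text region is a node with a
# 16-d feature vector; the encoder is a two-layer GCN; the decoder is
# an inner product over latent embeddings (Kipf & Welling, 2016).
import torch
import torch.nn as nn
import torch.nn.functional as F


class GAE(nn.Module):
    """Two-layer GCN encoder + inner-product decoder."""

    def __init__(self, in_dim, hid_dim, z_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, z_dim, bias=False)

    def encode(self, x, a_norm):
        # a_norm: symmetrically normalized adjacency with self-loops
        h = F.relu(a_norm @ self.w1(x))
        return a_norm @ self.w2(h)

    def decode(self, z):
        # Edge probability for every node pair via inner product
        return torch.sigmoid(z @ z.t())


def normalize_adj(a):
    # D^{-1/2} (A + I) D^{-1/2}
    a = a + torch.eye(a.size(0))
    d_inv_sqrt = torch.diag(a.sum(1).pow(-0.5))
    return d_inv_sqrt @ a @ d_inv_sqrt


# Toy form graph: nodes 0-1 are printed labels, nodes 2-3 are
# handwritten values; the adjacency encodes observed associations.
x = torch.randn(4, 16)                    # hypothetical text-region features
adj = torch.tensor([[0, 0, 1, 0],
                    [0, 0, 0, 1],
                    [1, 0, 0, 0],
                    [0, 1, 0, 0]], dtype=torch.float)

model = GAE(in_dim=16, hid_dim=32, z_dim=8)
a_norm = normalize_adj(adj)
opt = torch.optim.Adam(model.parameters(), lr=0.01)

for _ in range(100):
    opt.zero_grad()
    z = model.encode(x, a_norm)
    loss = F.binary_cross_entropy(model.decode(z), adj)  # reconstruct edges
    loss.backward()
    opt.step()
```

At inference time, thresholding the decoder's output for pairs consisting of one printed-label node and one handwritten-value node would yield the predicted label-value associations; how node features are extracted from the form image is left unspecified here.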