Generating image descriptions using semantic similarities in the output space
A. Gupta, P. Mannem, C.V. Jawahar
Pages: 288–293
Automatically generating meaningful descriptions for images has recently emerged as an important area of research. In this direction, the nearest-neighbour-based generative phrase prediction model (PPM) proposed by Gupta et al. (2012) was shown to achieve state-of-the-art results on the PASCAL sentence dataset, thanks to its simultaneous use of three sources of information: visual cues, corpus statistics, and available descriptions. However, PPM does not exploit semantic similarities among phrases, which could help relate semantically similar phrases during phrase-relevance prediction. In this paper, we extend the model by incorporating inter-phrase semantic similarities. To compute the similarity between two phrases, we consider similarities among their constituent words determined using WordNet. We also reformulate the objective function for parameter learning so that each pair of phrases is penalized unevenly, in a manner similar to structured prediction. Various automatic and human evaluations demonstrate the advantage of our 'semantic phrase prediction model' (SPPM) over PPM. © 2013 IEEE.
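The abstract's idea of scoring phrase pairs via their constituent words can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the `word_sim` table is a toy stand-in for a WordNet-derived word similarity (e.g. Wu-Palmer), and the max-then-average aggregation is one common way to lift word similarities to phrase similarities; the authors' aggregation may differ.

```python
def word_sim(w1, w2, table):
    """Symmetric word-level similarity; identical words score 1.0.

    In SPPM this would come from WordNet; here `table` is a toy stand-in.
    """
    if w1 == w2:
        return 1.0
    return table.get((w1, w2), table.get((w2, w1), 0.0))

def phrase_sim(p, q, table):
    """Symmetrized average of each word's best match in the other phrase."""
    def directed(a, b):
        return sum(max(word_sim(w, v, table) for v in b) for w in a) / len(a)
    return 0.5 * (directed(p, q) + directed(q, p))

# Hypothetical similarity values (not actual WordNet scores).
table = {("dog", "puppy"): 0.9, ("runs", "sprints"): 0.8}

# The two phrases share no words, yet score high via word-level similarity.
print(round(phrase_sim(["dog", "runs"], ["puppy", "sprints"], table), 2))
```

This captures the key property the paper relies on: phrases with no lexical overlap can still be recognized as semantically close, which PPM's relevance prediction ignores.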
About the journal
Journal: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops