From strings to things: Knowledge-enabled VQA model that can read and reason
A.K. Singh, A. Mishra, S. Shekhar, A. Chakraborty
Published in: Proceedings of the IEEE International Conference on Computer Vision (ICCV)
2019
Volume: 2019-October
Pages: 4601 - 4611
Abstract
Text present in images is not merely a string of characters; it provides useful cues about the image. Despite their utility for better image understanding, scene texts are not used in traditional visual question answering (VQA) models. In this work, we present a VQA model which can read scene text and perform reasoning on a knowledge graph to arrive at an accurate answer. Our proposed model has three mutually interacting modules: (i) a proposal module to obtain word and visual content proposals from the image, (ii) a fusion module to fuse these proposals, the question, and the knowledge base to mine relevant facts and represent them as a multi-relational graph, and (iii) a reasoning module to perform a novel gated graph neural network based reasoning on this graph. The performance of our knowledge-enabled VQA model is evaluated on our newly introduced dataset, viz. text-KVQA. To the best of our knowledge, this is the first dataset which identifies the need for bridging text recognition with knowledge graph based reasoning. Through extensive experiments, we show that our proposed method outperforms traditional VQA methods as well as question answering over knowledge base (QA-over-KB) methods on text-KVQA. © 2019 IEEE.
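The abstract's reasoning module applies gated graph neural network (GGNN) style updates over a multi-relational fact graph. The following is a minimal sketch of that kind of message passing in PyTorch, not the authors' implementation: the class name, dimensions, per-relation weight scheme, and toy graph are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class GatedGraphReasoner(nn.Module):
    """Minimal gated graph neural network (in the style of Li et al., 2016)
    over a multi-relational graph: one weight matrix per relation type,
    messages aggregated by summation, node states updated with a GRU cell.
    Illustrative sketch only; not the paper's implementation."""

    def __init__(self, hidden_dim: int, num_relations: int, num_steps: int = 3):
        super().__init__()
        self.num_steps = num_steps
        # One linear message transform per relation type (multi-relational edges).
        self.relation_transforms = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(num_relations)]
        )
        self.gru = nn.GRUCell(hidden_dim, hidden_dim)

    def forward(self, node_states: torch.Tensor, adjacency: torch.Tensor) -> torch.Tensor:
        # node_states: (num_nodes, hidden_dim) initial fact-node embeddings
        # adjacency:   (num_relations, num_nodes, num_nodes), one 0/1 matrix per relation
        for _ in range(self.num_steps):
            messages = torch.zeros_like(node_states)
            for rel, transform in enumerate(self.relation_transforms):
                # Aggregate neighbour states along edges of this relation type.
                messages = messages + adjacency[rel] @ transform(node_states)
            # Gated update: the GRU cell decides how much of the aggregated
            # neighbourhood information overwrites each node's state.
            node_states = self.gru(messages, node_states)
        return node_states

# Toy usage: 5 fact nodes, 2 relation types, 16-d states (all hypothetical).
reasoner = GatedGraphReasoner(hidden_dim=16, num_relations=2)
states = torch.randn(5, 16)
adj = torch.zeros(2, 5, 5)
adj[0, 0, 1] = adj[1, 1, 2] = 1.0  # a couple of illustrative edges
out = reasoner(states, adj)
print(out.shape)  # torch.Size([5, 16])
```

In this sketch, each relation type contributes its own linear transform, so facts connected by different relations pass different messages, and a fixed number of propagation steps lets evidence flow across the graph before an answer is read out.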
About the journal
Journal: Proceedings of the IEEE International Conference on Computer Vision
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISSN: 1550-5499