
Compact Trilinear Interaction for Visual Question Answering

About

In Visual Question Answering (VQA), answers correlate strongly with both the question's meaning and the visual content. To selectively exploit image, question, and answer information, we propose a novel trilinear interaction model that simultaneously learns high-level associations between these three inputs. To overcome the complexity of this interaction, we introduce a multimodal tensor-based PARALIND decomposition that efficiently parameterizes the trilinear interaction between the three inputs. Moreover, knowledge distillation is applied for the first time to free-form open-ended VQA, not only to reduce the computational cost and required memory but also to transfer knowledge from the trilinear interaction model to a bilinear interaction model. Extensive experiments on the benchmark datasets TDIUC, VQA-2.0, and Visual7W show that the proposed compact trilinear interaction model achieves state-of-the-art single-model results on all three datasets.
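The core idea can be sketched numerically: instead of materializing a full four-way interaction tensor over the question, image, and answer features, each input is projected into a shared low-rank space and the projections are combined elementwise. The sketch below uses a simplified PARAFAC-style rank decomposition (the paper's PARALIND form generalizes this); all dimensions, names, and factor matrices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative feature sizes (assumptions, not the paper's settings)
d_q, d_v, d_a = 16, 20, 12   # question, visual, answer feature dims
rank, d_out = 8, 10          # decomposition rank and joint-embedding dim

# Factor matrices of the decomposed interaction tensor
Wq = rng.standard_normal((d_q, rank))
Wv = rng.standard_normal((d_v, rank))
Wa = rng.standard_normal((d_a, rank))
Wo = rng.standard_normal((rank, d_out))

def trilinear(q, v, a):
    """Joint representation: project each input to the shared rank space,
    take the elementwise (Hadamard) product, then map to the output dim."""
    return ((q @ Wq) * (v @ Wv) * (a @ Wa)) @ Wo

q = rng.standard_normal(d_q)
v = rng.standard_normal(d_v)
a = rng.standard_normal(d_a)
z = trilinear(q, v, a)
print(z.shape)  # (10,)
```

The point of the decomposition is parameter efficiency: a full interaction tensor would need d_q x d_v x d_a x d_out = 38,400 entries at these toy sizes, while the four factor matrices above hold only 464 parameters.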

Tuong Do, Thanh-Toan Do, Huy Tran, Erman Tjiputra, Quang D. Tran • 2019

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy: 67.4 | 664 |
| Visual Question Answering | VQA 2.0 (test-dev) | Accuracy: 70.1 | 337 |
| Visual Question Answering | GQA (test-dev) | Accuracy: 54.9 | 178 |
| Visual Question Answering | VQA 2.0 (val) | Accuracy (Overall): 66 | 143 |
| Visual Question Answering | GQA (val) | Accuracy: 61.7 | 22 |
| Multiple-choice Visual Question Answering | Visual7W (test) | Accuracy (MC): 72.3 | 13 |
| Visual Question Answering | TDIUC (val) | Accuracy: 87 | 7 |
| Visual Question Answering | Visual7W (val) | Acc-MC: 67 | 4 |

Other info

Code
