
MUTAN: Multimodal Tucker Fusion for Visual Question Answering

About

Bilinear models provide an appealing framework for mixing and merging information in Visual Question Answering (VQA) tasks. They help to learn high-level associations between question meaning and visual concepts in the image, but they suffer from severe dimensionality issues. We introduce MUTAN, a multimodal tensor-based Tucker decomposition to efficiently parametrize bilinear interactions between visual and textual representations. In addition to the Tucker framework, we design a low-rank matrix-based decomposition to explicitly constrain the interaction rank. With MUTAN, we control the complexity of the merging scheme while keeping interpretable fusion relations. We show how our MUTAN model generalizes some of the latest VQA architectures, providing state-of-the-art results.
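To make the fusion scheme concrete, here is a minimal NumPy sketch of a rank-constrained Tucker-style bilinear interaction in the spirit of the abstract. All dimensions, weight names (`W_q`, `W_v`, `M`, `N`), and the `tanh` nonlinearity are illustrative assumptions, not the paper's exact architecture: the question and image vectors are first projected, and the full three-way core tensor is replaced by a sum of `R` rank-1 slices, so the bilinear interaction never materializes the huge `tq x tv x to` tensor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed, not the paper's actual sizes)
dq, dv = 310, 620        # question / image feature dims
tq, tv, to = 32, 32, 64  # projected dims and fused output dim
R = 5                    # rank constraint on the core tensor

# Input projections (the W_q, W_v factors of the Tucker decomposition)
W_q = rng.standard_normal((dq, tq)) * 0.01
W_v = rng.standard_normal((dv, tv)) * 0.01
# Rank-R factors standing in for the full tq x tv x to core tensor
M = rng.standard_normal((R, tq, to)) * 0.01
N = rng.standard_normal((R, tv, to)) * 0.01

def mutan_fuse(q, v):
    """Fuse a question vector q and an image vector v via a
    rank-constrained Tucker-style bilinear interaction (sketch)."""
    q_t = np.tanh(q @ W_q)   # project the question embedding
    v_t = np.tanh(v @ W_v)   # project the image embedding
    # Sum R rank-1 slices: elementwise product of the two projections
    z = sum((q_t @ M[r]) * (v_t @ N[r]) for r in range(R))
    return z                 # fused representation of dimension to

q = rng.standard_normal(dq)
v = rng.standard_normal(dv)
z = mutan_fuse(q, v)
print(z.shape)  # (64,)
```

Note the parameter saving: the factors `M` and `N` hold `R * (tq + tv) * to` weights instead of the `tq * tv * to` entries a full core tensor would need, which is how the rank `R` directly controls the complexity of the merging scheme.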

Hedi Ben-younes, Rémi Cadene, Matthieu Cord, Nicolas Thome • 2017

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 66.01 | 664 |
| Visual Question Answering | VQA v2 (test-std) | Accuracy | 66.38 | 466 |
| Visual Question Answering | VQA 2.0 (test-dev) | Accuracy | 66.01 | 337 |
| Visual Question Answering | OK-VQA (test) | Accuracy | 27.8 | 296 |
| Visual Question Answering | VQA (test-dev) | Acc (All) | 67.42 | 147 |
| Visual Question Answering | VQA (test-std) | -- | -- | 110 |
| Visual Question Answering | OK-VQA v1.0 (test) | Accuracy | 26.41 | 77 |
| Visual Question Answering | VQA (val) | Overall Accuracy | 58.76 | 55 |
| Open-Ended Visual Question Answering | VQA 1.0 (test-standard) | Overall Accuracy | 67.36 | 50 |
| Visual Question Answering | VQA 1.0 (test-dev) | Overall Accuracy | 67.42 | 44 |
Showing 10 of 20 rows

Other info

Code
