
RUBi: Reducing Unimodal Biases in Visual Question Answering

About

Visual Question Answering (VQA) is the task of answering questions about an image. VQA models often exploit unimodal biases to provide the correct answer without using the image information. As a result, they suffer from a large drop in performance when evaluated on data outside their training-set distribution. This critical issue makes them unsuitable for real-world settings. We propose RUBi, a new learning strategy to reduce biases in any VQA model. It reduces the importance of the most biased examples, i.e., examples that can be correctly classified without looking at the image. It implicitly forces the VQA model to use both input modalities instead of relying on statistical regularities between the question and the answer. We leverage a question-only model that captures the language biases by identifying when these unwanted regularities are used. It prevents the base VQA model from learning them by influencing its predictions, dynamically adjusting the loss to compensate for biases. We validate our contributions by surpassing the current state-of-the-art results on VQA-CP v2, a dataset specifically designed to assess the robustness of VQA models when the question biases at test time differ from those seen during training. Our code is available: github.com/cdancette/rubi.bootstrap.pytorch
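The loss-masking idea described in the abstract can be sketched as follows. This is a minimal NumPy illustration of the core mechanism (masking the base model's logits with a sigmoid of the question-only branch's logits, then applying a loss to both heads), not the authors' exact implementation; the function names and the simple hard-label cross-entropy are assumptions for the sake of the sketch:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    # Mean negative log-likelihood of the correct answers.
    probs = softmax(logits)
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

def rubi_losses(vqa_logits, q_logits, labels):
    """Sketch of RUBi-style training: the question-only branch
    produces a sigmoid mask that modulates the VQA logits, so
    examples the question-only model answers confidently contribute
    a smaller gradient to the base VQA model."""
    mask = 1.0 / (1.0 + np.exp(-q_logits))      # sigmoid of question-only logits
    fused_logits = vqa_logits * mask            # biased fused predictions
    loss_vqa = cross_entropy(fused_logits, labels)  # drives the base VQA model
    loss_q = cross_entropy(q_logits, labels)        # trains the question-only head
    return loss_vqa + loss_q, fused_logits
```

In the actual method the fused loss is backpropagated into the base VQA model while the question-only loss trains only the bias branch; a framework with automatic differentiation (e.g. PyTorch, as in the released code) handles that gradient routing.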

Remi Cadene, Corentin Dancette, Hedi Ben-younes, Matthieu Cord, Devi Parikh • 2019

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy: 64.75 | 664 |
| Visual Question Answering | VQA 2.0 (val) | Accuracy (Overall): 63.1 | 143 |
| Visual Question Answering | VQA-CP v2 (test) | Overall Accuracy: 47.11 | 109 |
| Visual Question Answering | VQA-CP v1 (test) | Accuracy (Overall): 50.9 | 26 |
| Visual Question Answering | GQA OOD (test) | Accuracy: 45.85 | 14 |
| Visual Question Answering Grounding | VQA-HAT (val) | Rank Correlation: 44.9 | 8 |
| Visual Question Answering | VQA-CE (test) | Overall Accuracy: 0.6188 | 6 |

Other info

Code
