
Pythia v0.1: the Winning Entry to the VQA Challenge 2018

About

This document describes Pythia v0.1, the winning entry from Facebook AI Research (FAIR)'s A-STAR team to the VQA Challenge 2018. Our starting point is a modular re-implementation of the bottom-up top-down (up-down) model. We demonstrate that by making subtle but important changes to the model architecture and the learning rate schedule, fine-tuning image features, and adding data augmentation, we can significantly improve the performance of the up-down model on the VQA v2.0 dataset -- from 65.67% to 70.22%. Furthermore, by using a diverse ensemble of models trained with different features and on different datasets, we are able to improve over the 'standard' way of ensembling (i.e. the same model with different random seeds) by 1.31%. Overall, we achieve 72.27% on the test-std split of the VQA v2.0 dataset. Our code in its entirety (training, evaluation, data augmentation, ensembling) and pre-trained models are publicly available at: https://github.com/facebookresearch/pythia
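The ensembling idea above can be illustrated with a minimal sketch: each model produces a probability over candidate answers for a question, and the ensemble averages those probabilities before picking the top answer. The function and data below are illustrative assumptions, not taken from the Pythia codebase.

```python
# Hedged sketch: combine per-answer probabilities from several VQA models
# by simple averaging, then rank answers by the averaged score.
# All names and numbers here are illustrative, not from Pythia.

def ensemble_answers(model_scores):
    """Average answer-probability dicts from several models and
    return (answer, averaged_score) pairs, best first."""
    totals = {}
    for scores in model_scores:
        for answer, p in scores.items():
            totals[answer] = totals.get(answer, 0.0) + p
    n = len(model_scores)
    averaged = {a: s / n for a, s in totals.items()}
    return sorted(averaged.items(), key=lambda kv: kv[1], reverse=True)

# Two hypothetical models that disagree on the top answer for one question.
m1 = {"cat": 0.5, "dog": 0.375, "bird": 0.125}
m2 = {"cat": 0.25, "dog": 0.625, "bird": 0.125}
ranked = ensemble_answers([m1, m2])
print(ranked[0])  # averaged scores: cat 0.375, dog 0.5, bird 0.125
```

A diverse ensemble (different features, different training data) helps precisely because the models make different mistakes, so averaging cancels more individual errors than averaging seeds of one model.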

Yu Jiang, Vivek Natarajan, Xinlei Chen, Marcus Rohrbach, Dhruv Batra, Devi Parikh• 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | VizWiz | Accuracy | 54.22 | 1043 |
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 70.01 | 664 |
| Visual Question Answering | VQA v2 (test-std) | Accuracy | 64.81 | 466 |
| Visual Question Answering | VQA 2.0 (test-dev) | Accuracy | 70 | 337 |
| Visual Question Answering | VQA 2.0 (val) | Accuracy (Overall) | 66.3 | 143 |
| Visual Question Answering | A-OKVQA (test) | Accuracy | 21.9 | 79 |
| Visual Question Answering | A-OKVQA (val) | Accuracy | 0.49 | 56 |
| Multi-choice Visual Question Answering | A-OKVQA | Accuracy | 49 | 49 |
| Visual Question Answering (Multi-choice) | A-OKVQA (test) | Accuracy | 40.1 | 19 |
| Direct Answer Visual Question Answering | A-OKVQA (test) | Accuracy | 21.9 | 7 |

(Showing 10 of 11 rows)

Other info

Code: https://github.com/facebookresearch/pythia