ManagerTower: Aggregating the Insights of Uni-Modal Experts for Vision-Language Representation Learning
About
Two-Tower Vision-Language (VL) models have shown promising improvements on various downstream VL tasks. Although the most advanced work improves performance by building bridges between encoders, it suffers from ineffective layer-by-layer utilization of uni-modal representations and cannot flexibly exploit different levels of uni-modal semantic knowledge. In this work, we propose ManagerTower, a novel VL model architecture that gathers and combines the insights of pre-trained uni-modal experts at different levels. The managers introduced in each cross-modal layer adaptively aggregate uni-modal semantic knowledge to facilitate more comprehensive cross-modal alignment and fusion. ManagerTower outperforms previous strong baselines both with and without Vision-Language Pre-training (VLP). With only 4M VLP data, ManagerTower achieves superior performance on various downstream VL tasks, notably 79.15% accuracy on VQAv2 Test-Std, 86.56% IR@1 and 95.64% TR@1 on Flickr30K. Code and checkpoints are available at https://github.com/LooperXX/ManagerTower.
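The core idea of a manager can be illustrated as an adaptive weighted aggregation over the layer-wise outputs of a frozen uni-modal encoder. The sketch below is a simplified NumPy illustration under our own assumptions (the function name `manager_aggregate` and the use of fixed gate logits are hypothetical), not the actual ManagerTower implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def manager_aggregate(layer_reps, gate_logits):
    """Adaptively combine uni-modal layer representations.

    layer_reps:  (num_layers, seq_len, dim) outputs from each layer
                 of a pre-trained uni-modal encoder.
    gate_logits: (num_layers,) scores that would be learned per
                 cross-modal layer; fixed here for illustration.
    Returns a (seq_len, dim) fused representation.
    """
    weights = softmax(gate_logits)                 # one weight per layer
    # Weighted sum over the layer axis.
    return np.tensordot(weights, layer_reps, axes=(0, 0))

rng = np.random.default_rng(0)
reps = rng.normal(size=(6, 4, 8))   # 6 encoder layers, 4 tokens, dim 8
logits = np.zeros(6)                # uniform gates -> simple layer average
fused = manager_aggregate(reps, logits)
print(fused.shape)  # (4, 8)
```

With zero logits the softmax is uniform and the manager reduces to averaging all layers; during training, each cross-modal layer would instead learn its own gate values, letting different fusion depths draw on different levels of uni-modal semantics.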
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 79.39 | 664 |
| Visual Question Answering | VQA v2 (test-std) | Accuracy | 79.15 | 466 |
| Natural Language Visual Reasoning | NLVR2 (test-p) | Accuracy | 83.34 | 327 |
| Natural Language Visual Reasoning | NLVR2 (dev) | Accuracy | 82.81 | 288 |
| Visual Entailment | SNLI-VE (test) | Overall Accuracy | 81.44 | 197 |
| Image Retrieval | Flickr30K | R@1 | 86.56 | 144 |
| Text Retrieval | Flickr30K | R@1 | 95.64 | 75 |
| Visual Entailment | SNLI-VE (dev) | Accuracy | 81.26 | 70 |