
ManagerTower: Aggregating the Insights of Uni-Modal Experts for Vision-Language Representation Learning

About

Two-Tower Vision-Language (VL) models have shown promising improvements on various downstream VL tasks. Although the most advanced work improves performance by building bridges between encoders, it suffers from ineffective layer-by-layer utilization of uni-modal representations and cannot flexibly exploit different levels of uni-modal semantic knowledge. In this work, we propose ManagerTower, a novel VL model architecture that gathers and combines the insights of pre-trained uni-modal experts at different levels. The managers introduced in each cross-modal layer can adaptively aggregate uni-modal semantic knowledge to facilitate more comprehensive cross-modal alignment and fusion. ManagerTower outperforms previous strong baselines both with and without Vision-Language Pre-training (VLP). With only 4M VLP data, ManagerTower achieves superior performance on various downstream VL tasks, especially 79.15% accuracy on VQAv2 Test-Std, 86.56% IR@1 and 95.64% TR@1 on Flickr30K. Code and checkpoints are available at https://github.com/LooperXX/ManagerTower.
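To make the idea of a "manager" concrete, below is a minimal, hypothetical sketch of the kind of aggregation the abstract describes: a module that adaptively weights representations from several layers of a pre-trained uni-modal encoder and combines them before cross-modal fusion. All class and parameter names (LayerManager, gate, cross_state, the 6-layer/768-dim shapes) are illustrative assumptions, not the implementation from the official repository.

# Hypothetical sketch of a manager-style aggregation module (not the official code).
# It combines multi-layer uni-modal features with input-adaptive weights predicted
# from the current cross-modal state, then mixes the result back into that state.
import torch
import torch.nn as nn


class LayerManager(nn.Module):
    """Aggregates multi-layer uni-modal features with learned, input-adaptive weights."""

    def __init__(self, num_layers: int, hidden_size: int):
        super().__init__()
        # One weight per uni-modal layer, predicted from the cross-modal state.
        self.gate = nn.Linear(hidden_size, num_layers)
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, unimodal_layers: torch.Tensor, cross_state: torch.Tensor) -> torch.Tensor:
        # unimodal_layers: (num_layers, batch, seq_len, hidden_size)
        # cross_state:     (batch, seq_len, hidden_size) from the previous cross-modal layer
        weights = torch.softmax(self.gate(cross_state), dim=-1)   # (batch, seq_len, num_layers)
        weights = weights.permute(2, 0, 1).unsqueeze(-1)          # (num_layers, batch, seq_len, 1)
        aggregated = (weights * unimodal_layers).sum(dim=0)       # (batch, seq_len, hidden_size)
        return self.norm(aggregated + cross_state)


# Example: aggregate 6 layers of visual features for a batch of 2 sequences of length 50.
if __name__ == "__main__":
    manager = LayerManager(num_layers=6, hidden_size=768)
    vision_layers = torch.randn(6, 2, 50, 768)
    cross_state = torch.randn(2, 50, 768)
    fused = manager(vision_layers, cross_state)
    print(fused.shape)  # torch.Size([2, 50, 768])

The key design point suggested by the abstract is that the mixing weights are not fixed per layer but computed adaptively, so each cross-modal layer can draw on different levels of uni-modal semantic knowledge; the sketch realizes this with a simple softmax gate, which is only one possible instantiation.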

Xiao Xu, Bei Li, Chenfei Wu, Shao-Yen Tseng, Anahita Bhiwandiwalla, Shachar Rosenman, Vasudev Lal, Wanxiang Che, Nan Duan • 2023

Related benchmarks

Task                               Dataset             Metric            Result   Rank
Visual Question Answering          VQA v2 (test-dev)   Overall Accuracy  79.39    664
Visual Question Answering          VQA v2 (test-std)   Accuracy          79.15    466
Natural Language Visual Reasoning  NLVR2 (test-p)      Accuracy          83.34    327
Natural Language Visual Reasoning  NLVR2 (dev)         Accuracy          82.81    288
Visual Entailment                  SNLI-VE (test)      Overall Accuracy  81.44    197
Image Retrieval                    Flickr30K           R@1               86.56    144
Text Retrieval                     Flickr30K           R@1               95.64    75
Visual Entailment                  SNLI-VE (dev)       Accuracy          81.26    70

Other info

Code: https://github.com/LooperXX/ManagerTower
