PaLI-3 Vision Language Models: Smaller, Faster, Stronger

About

This paper presents PaLI-3, a smaller, faster, and stronger vision language model (VLM) that compares favorably to similar models that are 10x larger. As part of arriving at this strong performance, we compare Vision Transformer (ViT) models pretrained using classification objectives to contrastively (SigLIP) pretrained ones. We find that, while slightly underperforming on standard image classification benchmarks, SigLIP-based PaLI shows superior performance across various multimodal benchmarks, especially on localization and visually-situated text understanding. We scale the SigLIP image encoder up to 2 billion parameters, and achieve a new state-of-the-art on multilingual cross-modal retrieval. We hope that PaLI-3, at only 5B parameters, rekindles research on fundamental pieces of complex VLMs, and could fuel a new generation of scaled-up models.
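
The comparison at the heart of the abstract rests on the contrastive SigLIP pretraining objective. For readers unfamiliar with it, below is a minimal NumPy sketch of the sigmoid pairwise loss from the SigLIP paper (Zhai et al., 2023), not code from PaLI-3 itself. The fixed temperature t=10 and bias b=-10 mirror that paper's initialization; the function name and the averaging over all pairs are our own simplifications for illustration.

```python
import numpy as np

def siglip_loss(img_emb, txt_emb, t=10.0, b=-10.0):
    """Sketch of the SigLIP sigmoid pairwise loss (Zhai et al., 2023).

    img_emb, txt_emb: (batch, dim) L2-normalized image/text embeddings,
    where row i of each matrix forms a matching image-text pair.
    t, b: temperature and bias (learnable in the paper, fixed here).
    """
    logits = t * img_emb @ txt_emb.T + b           # (batch, batch) pair scores
    labels = 2.0 * np.eye(len(img_emb)) - 1.0      # +1 on matching pairs, -1 otherwise
    # -log sigmoid(labels * logits) == log(1 + exp(-labels * logits)),
    # averaged over all pairs for simplicity.
    return np.mean(np.logaddexp(0.0, -labels * logits))

# Toy usage with random, L2-normalized embeddings (batch=4, dim=8).
rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8)); img /= np.linalg.norm(img, axis=1, keepdims=True)
txt = rng.normal(size=(4, 8)); txt /= np.linalg.norm(txt, axis=1, keepdims=True)
print(siglip_loss(img, txt))
```

Unlike the softmax loss used in CLIP-style contrastive training, this objective treats every image-text pair as an independent binary classification, which is what lets SigLIP train efficiently at large batch sizes.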

Xi Chen, Xiao Wang, Lucas Beyer, Alexander Kolesnikov, Jialin Wu, Paul Voigtlaender, Basil Mustafa, Sebastian Goodman, Ibrahim Alabdulmohsin, Piotr Padlewski, Daniel Salz, Xi Xiong, Daniel Vlasic, Filip Pavetic, Keran Rong, Tianli Yu, Daniel Keysers, Xiaohua Zhai, Radu Soricut • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | TextVQA | Accuracy | 80.78 | 1285 |
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 85 | 706 |
| Image Captioning | MS COCO Karpathy (test) | CIDEr | 1.459 | 682 |
| Visual Question Answering | VQA v2 (test-std) | Accuracy | 85.2 | 486 |
| Visual Question Answering | ChartQA | -- | -- | 371 |
| Science Question Answering | ScienceQA (test) | Average Accuracy | 55.2 | 245 |
| Referring Expression Segmentation | RefCOCO+ (val) | -- | -- | 223 |
| Document Visual Question Answering | DocVQA (test) | ANLS | 88.6 | 213 |
| Referring Expression Segmentation | RefCOCO (val) | -- | -- | 212 |
| Chart Question Answering | ChartQA (test) | -- | -- | 176 |

Showing 10 of 48 rows.
