
PaliGemma: A versatile 3B VLM for transfer

About

PaliGemma is an open Vision-Language Model (VLM) based on the SigLIP-So400m vision encoder and the Gemma-2B language model. It is trained to be a versatile and broadly knowledgeable base model that transfers effectively, and it achieves strong performance on a wide variety of open-world tasks. We evaluate PaliGemma on almost 40 diverse tasks, including standard VLM benchmarks as well as more specialized tasks such as remote sensing and segmentation.

Lucas Beyer, Andreas Steiner, André Susano Pinto, Alexander Kolesnikov, Xiao Wang, Daniel Salz, Maxim Neumann, Ibrahim Alabdulmohsin, Michael Tschannen, Emanuele Bugliarello, Thomas Unterthiner, Daniel Keysers, Skanda Koppula, Fangyu Liu, Adam Grycner, Alexey Gritsenko, Neil Houlsby, Manoj Kumar, Keran Rong, Julian Eisenschlos, Rishabh Kabra, Matthias Bauer, Matko Bošnjak, Xi Chen, Matthias Minderer, Paul Voigtlaender, Ioana Bica, Ivana Balazevic, Joan Puigcerver, Pinelopi Papalampidi, Olivier Henaff, Xi Xiong, Radu Soricut, Jeremiah Harmsen, Xiaohua Zhai • 2024

Related benchmarks

Task | Dataset | Result | Rank
Visual Question Answering | TextVQA | – | 1117
Object Hallucination Evaluation | POPE | – | 935
Visual Question Answering | GQA | Accuracy 62.57 | 374
Visual Question Answering | VQA 2.0 (test-dev) | Accuracy 76.3 | 337
Mathematical Reasoning | MathVista | Score 28.7 | 322
OCR Evaluation | OCRBench | Score 614 | 296
Multimodal Reasoning | MM-Vet | MM-Vet Score 33.1 | 281
Multimodal Understanding | SEED-Bench | – | 203
Multimodal Understanding | MME | MME Score 1690 | 158
Multimodal Understanding | MMMU (val) | MMMU Score 30.7 | 111

Showing 10 of 50 rows
