PaliGemma: A versatile 3B VLM for transfer
About
PaliGemma is an open Vision-Language Model (VLM) based on the SigLIP-So400m vision encoder and the Gemma-2B language model. It is trained to be a versatile, broadly knowledgeable base model that transfers effectively to downstream tasks, and it achieves strong performance on a wide variety of open-world tasks. We evaluate PaliGemma on almost 40 diverse tasks, including not only standard VLM benchmarks but also more specialized tasks such as remote sensing and segmentation.
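The composition described above follows the common VLM recipe: the vision encoder turns an image into a sequence of tokens, which are mapped into the language model's embedding space and prepended to the text embeddings before decoding. A minimal NumPy sketch of that composition, assuming a simple linear projection between the two embedding spaces (all dimensions, names, and the projection itself are illustrative, not the real model's):

```python
import numpy as np

def project_image_tokens(image_tokens, W):
    """Map vision-encoder outputs into the language model's embedding space."""
    return image_tokens @ W

# Toy dimensions for illustration only (not the real SigLIP/Gemma sizes).
num_image_tokens, vision_dim, lm_dim = 4, 8, 6
rng = np.random.default_rng(0)

image_tokens = rng.normal(size=(num_image_tokens, vision_dim))  # vision-encoder output
W = rng.normal(size=(vision_dim, lm_dim))                       # learned projection (toy)
text_embeddings = rng.normal(size=(3, lm_dim))                  # embedded text prompt

# The language model then attends over image tokens followed by text tokens.
prefix = np.concatenate([project_image_tokens(image_tokens, W), text_embeddings])
print(prefix.shape)  # (7, 6): 4 projected image tokens + 3 text tokens
```

In the real model the projected image tokens and the text prompt together form the prefix that Gemma conditions on when generating the answer.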
Lucas Beyer, Andreas Steiner, André Susano Pinto, Alexander Kolesnikov, Xiao Wang, Daniel Salz, Maxim Neumann, Ibrahim Alabdulmohsin, Michael Tschannen, Emanuele Bugliarello, Thomas Unterthiner, Daniel Keysers, Skanda Koppula, Fangyu Liu, Adam Grycner, Alexey Gritsenko, Neil Houlsby, Manoj Kumar, Keran Rong, Julian Eisenschlos, Rishabh Kabra, Matthias Bauer, Matko Bošnjak, Xi Chen, Matthias Minderer, Paul Voigtlaender, Ioana Bica, Ivana Balazevic, Joan Puigcerver, Pinelopi Papalampidi, Olivier Henaff, Xi Xiong, Radu Soricut, Jeremiah Harmsen, Xiaohua Zhai • 2024
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | TextVQA | -- | -- | 1117 |
| Object Hallucination Evaluation | POPE | -- | -- | 935 |
| Visual Question Answering | GQA | Accuracy | 62.57 | 374 |
| Visual Question Answering | VQA 2.0 (test-dev) | Accuracy | 76.3 | 337 |
| Mathematical Reasoning | MathVista | Score | 28.7 | 322 |
| OCR Evaluation | OCRBench | Score | 614 | 296 |
| Multimodal Reasoning | MM-Vet | MM-Vet Score | 33.1 | 281 |
| Multimodal Understanding | SEED-Bench | -- | -- | 203 |
| Multimodal Understanding | MME | MME Score | 1690 | 158 |
| Multimodal Understanding | MMMU (val) | MMMU Score | 30.7 | 111 |
Showing 10 of 50 benchmark rows.