Jina-VLM: Small Multilingual Vision Language Model
About
We present Jina-VLM, a 2.4B-parameter vision-language model that achieves state-of-the-art multilingual visual question answering among open 2B-scale VLMs. It couples a SigLIP2 vision encoder with a Qwen3 language backbone through an attention-pooling connector, enabling token-efficient processing of images at arbitrary resolutions. Jina-VLM delivers leading results on standard VQA benchmarks and multilingual evaluations while preserving competitive text-only performance. Model weights and code are publicly released at https://huggingface.co/jinaai/jina-vlm.
Andreas Koukounas, Georgios Mastrapas, Florian Hönicke, Sedigheh Eslami, Guillaume Roncari, Scott Martens, Han Xiao • 2025
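The attention-pooling connector is the piece that keeps the language backbone's visual token count fixed even as the number of vision-encoder patch tokens grows with image resolution. Below is a minimal PyTorch sketch of such a connector, assuming learned query tokens that cross-attend over the patch tokens; the dimensions, number of queries, and module names are illustrative assumptions for readability, not the released Jina-VLM implementation.

```python
# Illustrative sketch of an attention-pooling connector (not the released code).
# A fixed set of learned query tokens attends over a variable-length sequence of
# vision patch tokens, so the output length stays constant regardless of image size.
import torch
import torch.nn as nn


class AttentionPoolingConnector(nn.Module):
    def __init__(self, vision_dim=1152, lm_dim=2048, num_queries=64, num_heads=8):
        super().__init__()
        # Learned queries; their count fixes the number of visual tokens fed to the LM.
        self.queries = nn.Parameter(torch.randn(num_queries, vision_dim) * 0.02)
        self.attn = nn.MultiheadAttention(vision_dim, num_heads, batch_first=True)
        self.proj = nn.Linear(vision_dim, lm_dim)  # map into the LM embedding space

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (batch, num_patches, vision_dim); num_patches varies
        # with the input image resolution.
        batch = patch_tokens.size(0)
        q = self.queries.unsqueeze(0).expand(batch, -1, -1)
        pooled, _ = self.attn(q, patch_tokens, patch_tokens)
        return self.proj(pooled)  # (batch, num_queries, lm_dim)


# Example: 1,024 patch tokens from a high-resolution image are pooled to 64 LM tokens.
if __name__ == "__main__":
    connector = AttentionPoolingConnector()
    patches = torch.randn(1, 1024, 1152)
    print(connector(patches).shape)  # torch.Size([1, 64, 2048])
```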
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-based Visual Question Answering | TextVQA (val) | Accuracy | 83.2 | 262 |
| Mathematical Reasoning | MathVista | Accuracy | 59.5 | 257 |
| Mathematical Reasoning | WeMath | Accuracy | 17.1 | 161 |
| Document Visual Question Answering | DocVQA (val) | Accuracy | 90.6 | 157 |
| Mathematical Reasoning | MathVision | Accuracy | 19.2 | 144 |
| Multimodal Reasoning | MMMU | Accuracy | 45.6 | 130 |
| Visual Question Answering | InfoVQA (val) | Accuracy | 71.6 | 91 |
| Logical Reasoning | LogicVista | Accuracy | 33.3 | 84 |
| Visual Question Answering | AI2D (test) | Accuracy | 82 | 73 |
| Multilingual Text-centric Visual Question Answering | MTVQA | Average Score | 25.6 | 37 |
Showing 10 of 20 benchmark rows.