Jina-VLM: Small Multilingual Vision Language Model
About
We present Jina-VLM, a 2.4B-parameter vision-language model that achieves state-of-the-art multilingual visual question answering among open VLMs at the 2B scale. The model couples a SigLIP2 vision encoder with a Qwen3 language backbone through an attention-pooling connector that enables token-efficient processing of arbitrary-resolution images. Jina-VLM attains leading results on standard VQA benchmarks and multilingual evaluations while preserving competitive text-only performance. Model weights and code are publicly released at https://huggingface.co/jinaai/jina-vlm.
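The connector's role is to compress a variable number of patch embeddings from the vision encoder into a fixed, smaller set of tokens for the language backbone. The exact design is not detailed on this page; the following is a minimal sketch of cross-attention pooling under common assumptions (all shapes, names, and the single-head formulation are illustrative, not the released implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(patch_tokens, queries):
    """Pool N patch embeddings down to M tokens via cross-attention.

    patch_tokens: (N, d) image patch embeddings from the vision encoder.
    queries:      (M, d) learned query vectors, M << N.
    Returns:      (M, d) pooled tokens passed on to the language model.
    """
    d = patch_tokens.shape[-1]
    # Scaled dot-product attention of each query against all patches.
    scores = queries @ patch_tokens.T / np.sqrt(d)   # (M, N)
    weights = softmax(scores, axis=-1)               # rows sum to 1
    return weights @ patch_tokens                    # (M, d)

# Toy example: 576 patch tokens compressed to 64 connector tokens.
rng = np.random.default_rng(0)
patches = rng.standard_normal((576, 32))
queries = rng.standard_normal((64, 32))
pooled = attention_pool(patches, queries)
print(pooled.shape)  # → (64, 32)
```

Because the number of output tokens is fixed by the learned queries rather than by the input resolution, the language backbone sees the same token budget regardless of image size, which is what makes arbitrary-resolution inputs token-efficient.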
Andreas Koukounas, Georgios Mastrapas, Florian Hönicke, Sedigheh Eslami, Guillaume Roncari, Scott Martens, Han Xiao • 2025
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-based Visual Question Answering | TextVQA (val) | Accuracy | 83.2 | 146 |
| Mathematical Reasoning | MathVista | Accuracy | 59.5 | 97 |
| Mathematical Reasoning | WeMath | Accuracy | 17.1 | 75 |
| Document Visual Question Answering | DocVQA (val) | Accuracy | 90.6 | 66 |
| Visual Question Answering | AI2D (test) | Accuracy | 82 | 54 |
| Multimodal Reasoning | MMMU | Accuracy | 45.6 | 44 |
| Visual Question Answering | InfoVQA (val) | Accuracy | 71.6 | 41 |
| Mathematical Reasoning | MathVision | Accuracy | 19.2 | 38 |
| Multilingual Text-Centric Visual Question Answering | MTVQA | Average Score | 25.6 | 37 |
| Visual Question Answering | ChartQA (val) | Accuracy | 81.9 | 25 |
*Showing 10 of 20 benchmark rows.*