EVEv2: Improved Baselines for Encoder-Free Vision-Language Models
About
Existing encoder-free vision-language models (VLMs) are rapidly narrowing the performance gap with their encoder-based counterparts, highlighting their promise for unified multimodal systems with structural simplicity and efficient deployment. We systematically analyze the performance gap between VLMs built on pre-trained vision encoders, discrete tokenizers, and minimalist visual layers trained from scratch, and examine in depth the under-explored characteristics of encoder-free VLMs. We develop efficient strategies that let encoder-free VLMs rival mainstream encoder-based ones. Building on this investigation, we launch EVEv2.0, a new and improved family of encoder-free VLMs. We show that:

- (i) Properly decomposing and hierarchically associating vision and language within a unified model reduces interference between modalities.
- (ii) A well-designed training strategy enables effective optimization of encoder-free VLMs.

Through extensive evaluation, EVEv2.0 provides a thorough study of developing a decoder-only architecture across modalities, demonstrating superior data efficiency and strong vision-reasoning capability. Code is publicly available at: https://github.com/baaivision/EVE
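To make finding (i) concrete, below is a minimal PyTorch sketch of what modality decomposition inside a unified decoder layer can look like: a shared self-attention mixes the full multimodal sequence, while each token is routed through norm and feed-forward weights specific to its modality. This is an illustration of the idea, not code from the EVE repository; all names (`ModalityDecomposedBlock`, `is_vision`) are hypothetical, and the paper's full design may decompose more components than shown here.

```python
# Illustrative sketch only -- NOT the official EVEv2.0 implementation.
import torch
import torch.nn as nn


class ModalityDecomposedBlock(nn.Module):
    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        # One shared attention over the full multimodal sequence ...
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # ... but separate norms and feed-forward paths per modality,
        # so updates from one modality interfere less with the other.
        self.norm = nn.ModuleDict({
            "vision": nn.LayerNorm(dim), "text": nn.LayerNorm(dim)})
        self.ffn = nn.ModuleDict({
            "vision": nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                    nn.Linear(4 * dim, dim)),
            "text": nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                  nn.Linear(4 * dim, dim))})

    def forward(self, x: torch.Tensor, is_vision: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim); is_vision: (batch, seq) boolean modality mask.
        attn_out, _ = self.attn(x, x, x, need_weights=False)
        x = x + attn_out
        # Route each token through the norm/FFN of its own modality.
        out = torch.empty_like(x)
        for name, mask in (("vision", is_vision), ("text", ~is_vision)):
            tokens = x[mask]
            out[mask] = tokens + self.ffn[name](self.norm[name](tokens))
        return out


if __name__ == "__main__":
    block = ModalityDecomposedBlock(dim=64, num_heads=4)
    x = torch.randn(2, 10, 64)
    is_vision = torch.zeros(2, 10, dtype=torch.bool)
    is_vision[:, :4] = True  # first 4 tokens stand in for patch embeddings
    print(block(x, is_vision).shape)  # torch.Size([2, 10, 64])
```

Routing by a boolean mask keeps the sequence unified for attention, so the two modalities can still associate hierarchically layer by layer while their per-modality weights remain separate.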
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Hallucination Evaluation | POPE | -- | -- | 935 |
| Text-based Visual Question Answering | TextVQA | Accuracy | 71.1 | 496 |
| Multi-discipline Multimodal Understanding | MMMU | -- | -- | 266 |
| Chart Question Answering | ChartQA | Accuracy | 73.9 | 229 |
| Visual Question Answering | AI2D | Accuracy | 74.8 | 174 |
| Optical Character Recognition Evaluation | OCRBench | Score | 70.2 | 46 |
| Multi-modal Vision-Language Understanding | MMVet | Score | 45 | 38 |
| General Vision-Language Understanding | MMB | Score | 66.3 | 25 |
| Image-centric Multimodal Understanding | SEED-I | Score | 71.4 | 16 |