# QLIP: A Dynamic Quadtree Vision Prior Enhances MLLM Performance Without Retraining

## About
Multimodal Large Language Models (MLLMs) encode images into visual tokens, aligning visual and textual signals within a shared latent space to facilitate cross-modal representation learning. The CLIP model is a widely adopted foundational vision-language model whose vision encoder has played a critical role in the development of MLLMs such as LLaVA. However, the CLIP vision encoder suffers from notable limitations: it is constrained to a fixed input resolution, and it can fail to produce well-separated embeddings for dissimilar images. Replacing the vision encoder of an existing model typically incurs substantial computational cost, because such a change often necessitates retraining the entire model pipeline.

In this work, we identify two factors that underlie the limitations of the CLIP vision encoder: mesoscopic bias and interpolation bias. To address these issues, we propose QLIP, a drop-in replacement for CLIP that can be integrated with existing MLLMs in only a few lines of code and enhances both coarse-grained and fine-grained visual understanding, without retraining. QLIP is built around an image quadtree that replaces the standard uniform grid of patches with a novel content-aware patchification.

Our experimental results demonstrate that QLIP improves the general visual question answering accuracy of the LLaVA v1.5 model series across various model sizes, without requiring retraining or fine-tuning of the full MLLM. Notably, QLIP boosts detailed-understanding performance on the challenging V* benchmark by up to 13.6%.
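To make the content-aware patchification idea concrete, here is a minimal sketch of quadtree-based image partitioning: regions with high pixel variance are recursively subdivided, so detailed areas receive more (smaller) patches than flat areas. This is an illustrative toy, not QLIP's actual implementation; the function name, the variance criterion, and the thresholds (`min_size`, `var_thresh`) are all assumptions chosen for the example.

```python
import numpy as np

def quadtree_patches(img, y=0, x=0, min_size=28, var_thresh=500.0):
    """Recursively partition a 2-D image into patches.

    A region is split into four quadrants when its pixel variance
    exceeds var_thresh and it is still larger than min_size; otherwise
    it becomes a single patch. Returns a list of (y, x, h, w) boxes.
    (Toy sketch: variance as the split criterion is an assumption.)
    """
    h, w = img.shape[:2]
    # Stop splitting: region is already small, or visually uniform.
    if h <= min_size or w <= min_size or img.var() <= var_thresh:
        return [(y, x, h, w)]
    hh, hw = h // 2, w // 2
    boxes = []
    # Recurse into the four quadrants, offsetting their coordinates.
    boxes += quadtree_patches(img[:hh, :hw], y, x, min_size, var_thresh)
    boxes += quadtree_patches(img[:hh, hw:], y, x + hw, min_size, var_thresh)
    boxes += quadtree_patches(img[hh:, :hw], y + hh, x, min_size, var_thresh)
    boxes += quadtree_patches(img[hh:, hw:], y + hh, x + hw, min_size, var_thresh)
    return boxes

# A flat image stays a single patch; a noisy one is subdivided down to
# the minimum patch size, concentrating tokens where detail exists.
flat = np.zeros((224, 224))
print(len(quadtree_patches(flat)))    # prints 1
rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, (224, 224)).astype(float)
print(len(quadtree_patches(noisy)))   # prints 64 (28x28 patches)
```

Each resulting box would then be resized and embedded as one visual token, in contrast to CLIP's uniform grid, which spends the same number of tokens on blank sky as on small text.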
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Hallucination Evaluation | POPE | -- | -- | 1455 |
| Visual Question Answering | ScienceQA | Accuracy | 67.9 | 370 |
| Visual Question Answering | RealworldQA | Accuracy | 49.4 | 179 |
| Visual Question Answering | MMBench (MMB) | Accuracy | 67.9 | 76 |
| Visual Question Answering | V* | Accuracy | 58.6 | 45 |
| Visual Question Answering | MME | MME Total Score | 1.39e+3 | 8 |
| Fine-grained Grounding | V* | V*-Att Score | 53.9 | 5 |
| Visual Question Answering | CV-Bench | Accuracy | 60.7 | 4 |