
Improving Multi-modal Large Language Model through Boosting Vision Capabilities

About

We focus on improving visual understanding capabilities to boost vision-language models. We propose Arcana, a multimodal language model that introduces two crucial techniques. First, we present Multimodal LoRA (MM-LoRA), a module designed to enhance the decoder. Unlike traditional language-driven decoders, MM-LoRA consists of two parallel LoRAs, one for vision and one for language, each with its own parameters. This disentangled parameter design allows more specialized learning in each modality and better integration of multimodal information. Second, we introduce the Query Ladder adapter (QLadder) to improve the visual encoder. QLadder employs a learnable "ladder" structure to deeply aggregate the intermediate representations of the frozen pretrained visual encoder (e.g., the CLIP image encoder). This enables the model to learn new and informative visual features while retaining the powerful capabilities of the pretrained visual encoder. Together, these techniques enhance Arcana's visual perception, enabling it to leverage improved visual information for more accurate and contextually relevant outputs across diverse multimodal scenarios. Extensive experiments and ablation studies demonstrate the effectiveness and generalization capability of Arcana. The code and re-annotated data are available at https://arcana-project-page.github.io.
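The core MM-LoRA idea, a shared frozen weight plus two modality-specific low-rank branches, can be sketched as follows. This is a minimal illustration only: the function names, the boolean-mask routing, and the layer shapes are assumptions for the sketch, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def lora_delta(x, A, B):
    """Low-rank update x @ A @ B, with A of shape (d, r) and B of shape (r, d)."""
    return x @ A @ B

def mm_lora_layer(x, is_vision, W, vis_AB, lang_AB):
    """Frozen linear weight W plus a modality-specific LoRA branch.

    x: (n_tokens, d) hidden states; is_vision: (n_tokens,) boolean mask.
    Vision tokens are routed through the vision LoRA and language tokens
    through the language LoRA, so each modality has its own trainable
    parameters while the pretrained path stays shared and frozen.
    """
    out = x @ W                       # frozen pretrained path (shared)
    out[is_vision] += lora_delta(x[is_vision], *vis_AB)
    out[~is_vision] += lora_delta(x[~is_vision], *lang_AB)
    return out

d, r, n = 16, 4, 6
W = rng.standard_normal((d, d))
# Standard LoRA init: B starts at zero, so each branch is initially a no-op.
vis_AB = (0.01 * rng.standard_normal((d, r)), np.zeros((r, d)))
lang_AB = (0.01 * rng.standard_normal((d, r)), np.zeros((r, d)))
x = rng.standard_normal((n, d))
mask = np.array([True, True, True, False, False, False])  # 3 vision, 3 text tokens
y = mm_lora_layer(x, mask, W, vis_AB, lang_AB)
```

During fine-tuning only the A/B matrices of the two branches would receive gradients; with the zero-initialized B above, the layer starts out identical to the frozen pretrained projection.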

Yanpeng Sun, Huaxin Zhang, Qiang Chen, Xinyu Zhang, Nong Sang, Gang Zhang, Jingdong Wang, Zechao Li • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | TextVQA | Accuracy | 59.5 | 1117 |
| Visual Question Answering | GQA | Accuracy | 61.8 | 963 |
| Object Hallucination Evaluation | POPE | — | — | 935 |
| Multimodal Understanding | MM-Vet | MM-Vet Score | 34.8 | 418 |
| Multimodal Understanding | MMBench | Accuracy | 67.4 | 367 |
| Visual Question Answering | OKVQA | Top-1 Accuracy | 58.9 | 283 |
| Visual Question Answering | ScienceQA | Accuracy | 71.2 | 210 |
| Multimodal Understanding | SEED-Bench | Accuracy | 63.2 | 203 |
| Visual Question Answering | AI2D | Accuracy | 56.9 | 174 |
| Multimodal Understanding | MME | MME Score | 1520 | 158 |

(Showing 10 of 16 rows.)
