
SEA: Supervised Embedding Alignment for Token-Level Visual-Textual Integration in MLLMs

About

Multimodal Large Language Models (MLLMs) have demonstrated remarkable capabilities by integrating visual and textual inputs, yet modality alignment remains one of the most challenging aspects. Current MLLMs typically rely on simple adapter architectures and pretraining approaches to bridge vision encoders with large language models (LLMs), guided by image-level supervision. We find that this paradigm often leads to suboptimal alignment between modalities, constraining the LLM's ability to interpret and reason over visual features. This limitation degrades overall performance, particularly for smaller language models, where capacity constraints are more pronounced and adaptation capabilities are limited. To address this fundamental limitation, we propose Supervised Embedding Alignment (SEA), a token-level supervision alignment method that enables more precise visual-text alignment during pretraining. SEA introduces minimal computational overhead while preserving language capabilities and substantially improving cross-modal understanding. Our comprehensive analyses reveal critical insights into the adapter's role in multimodal integration, and extensive experiments demonstrate that SEA consistently improves performance across various model sizes, with smaller models benefiting the most (average performance gain of 7.61% for Gemma-2B). This work establishes a foundation for developing more effective alignment strategies for future multimodal systems.
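The abstract describes token-level (rather than image-level) supervision for aligning adapter outputs with the LLM's embedding space. The paper's exact loss is not given here, so the following is only a minimal illustrative sketch of one plausible form of token-level alignment: a mean cosine-distance loss between paired visual-token embeddings (adapter outputs) and target text-token embeddings. All function and variable names are hypothetical.

```python
import numpy as np

def token_alignment_loss(visual_tokens: np.ndarray,
                         text_tokens: np.ndarray,
                         eps: float = 1e-8) -> float:
    """Illustrative token-level alignment loss (NOT the paper's exact objective).

    visual_tokens: (n_tokens, dim) adapter outputs for visual tokens.
    text_tokens:   (n_tokens, dim) supervision targets in the LLM embedding space.
    Returns the mean (1 - cosine similarity) over paired tokens, in [0, 2].
    """
    # L2-normalize each token embedding (eps guards against zero vectors).
    v = visual_tokens / (np.linalg.norm(visual_tokens, axis=1, keepdims=True) + eps)
    t = text_tokens / (np.linalg.norm(text_tokens, axis=1, keepdims=True) + eps)
    # Per-token cosine similarity, then average the cosine distances.
    cos = np.sum(v * t, axis=1)
    return float(np.mean(1.0 - cos))

# Example: perfectly aligned tokens give a loss near 0.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
loss_same = token_alignment_loss(x, x)
loss_opposed = token_alignment_loss(x, -x)
```

Minimizing such a loss pulls each visual token toward its assigned target embedding individually, which is the key contrast with image-level supervision, where only a pooled representation of the whole image is supervised.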

Yuanyang Yin, Yaqi Zhao, Yajie Zhang, Yuanxing Zhang, Ke Lin, Jiahao Wang, Xin Tao, Pengfei Wan, Wentao Zhang, Feng Zhao • 2024

Related benchmarks

Task                        Dataset    Metric             Result   Rank
Visual Question Answering   TextVQA    Accuracy           68.0     1117
Visual Question Answering   VizWiz     Accuracy           64.7     1043
Visual Question Answering   GQA        Accuracy           65.1     963
Multimodal Understanding    MM-Vet     MM-Vet Score       48.8     418
Multimodal Understanding    MMBench    --                 --       367
Visual Question Answering   VQAv2      Accuracy           83.1     177
Hallucination Evaluation    POPE       Accuracy           88.4     132
Science Question Answering  SciQA-IMG  SciQA-IMG Accuracy 80.9     53
