
Unified Generative and Discriminative Training for Multi-modal Large Language Models

About

In recent years, Vision-Language Models (VLMs) have been trained under two predominant paradigms. Generative training has enabled Multimodal Large Language Models (MLLMs) to tackle various complex tasks, yet issues such as hallucinations and weak object discrimination persist. Discriminative training, exemplified by models like CLIP, excels in zero-shot image-text classification and retrieval, yet struggles with complex scenarios that require fine-grained semantic differentiation. This paper addresses these challenges by proposing a unified approach that integrates the strengths of both paradigms. Treating interleaved image-text sequences as the general format of input samples, we introduce a structure-induced training strategy that imposes semantic relationships between input samples and the MLLM's hidden states. This enhances the MLLM's ability to capture global semantics and distinguish fine-grained semantics. By leveraging dynamic sequence alignment within the Dynamic Time Warping framework and integrating a novel kernel for fine-grained semantic differentiation, our method effectively balances generative and discriminative tasks. Extensive experiments demonstrate the effectiveness of our approach, achieving state-of-the-art results in multiple generative tasks, especially those requiring cognitive and discrimination abilities. Additionally, our method surpasses state-of-the-art discriminative models in interleaved and fine-grained retrieval tasks. By employing a retrieval-augmented generation strategy, our approach further enhances performance in some generative tasks within a single model, offering a promising direction for future research in vision-language modeling.
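To make the Dynamic Time Warping idea mentioned above concrete, here is a minimal sketch of DTW-based alignment between two variable-length sequences of hidden states. The function names (cosine_kernel, dtw_similarity), the cosine kernel, and the similarity-maximizing recurrence are illustrative assumptions for exposition only; they are not the paper's actual kernel or training objective.

import numpy as np

def cosine_kernel(x, y):
    # Cosine similarity between two embedding vectors (assumed kernel).
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-8))

def dtw_similarity(seq_a, seq_b, kernel=cosine_kernel):
    """Align two variable-length sequences of hidden states with
    Dynamic Time Warping; return the accumulated similarity along
    the optimal warping path (higher means more similar)."""
    n, m = len(seq_a), len(seq_b)
    # dp[i, j] = best accumulated similarity aligning seq_a[:i] with seq_b[:j]
    dp = np.full((n + 1, m + 1), -np.inf)
    dp[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sim = kernel(seq_a[i - 1], seq_b[j - 1])
            # Standard DTW recurrence: diagonal match, or warp along either axis.
            dp[i, j] = sim + max(dp[i - 1, j - 1], dp[i - 1, j], dp[i, j - 1])
    return dp[n, m]

# Toy usage: two sequences of 4-dim hidden states of different lengths.
rng = np.random.default_rng(0)
a = rng.normal(size=(5, 4))
b = rng.normal(size=(7, 4))
print(dtw_similarity(a, b))

In the paper's setting the sequence elements would be the MLLM's hidden states for an interleaved image-text input, so DTW lets two samples of different lengths be compared without fixed position-to-position matching.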

Wei Chow, Juncheng Li, Qifan Yu, Kaihang Pan, Hao Fei, Zhiqi Ge, Shuai Yang, Siliang Tang, Hanwang Zhang, Qianru Sun • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Visual Question Answering | VQA v2 | Accuracy | 76 | 1165
Visual Question Answering | TextVQA | Accuracy | 57.5 | 1117
Visual Question Answering | VizWiz | Accuracy | 60.4 | 1043
Visual Question Answering | GQA | Accuracy | 58.7 | 963
Object Hallucination Evaluation | POPE | Accuracy | 86.6 | 935
Multimodal Evaluation | MME | Score | 300 | 557
Multimodal Model Evaluation | MMBench | Accuracy | 64.9 | 180
Image-to-Text Retrieval | MSCOCO | R@1 | 25.6 | 124
Multimodal Evaluation | MM-Vet | Accuracy | 31.3 | 122
Text-to-Image Retrieval | MSCOCO | R@1 | 22 | 118

(Showing 10 of 15 rows.)
