
Fine-tuning Multimodal LLMs to Follow Zero-shot Demonstrative Instructions

About

Recent Multimodal Large Language Models (MLLMs) use Visual Prompt Generators (VPGs) to convert visual features into tokens that LLMs can recognize. This is achieved by training the VPGs on millions of image-caption pairs, where the VPG-generated tokens of images are fed into a frozen LLM to generate the corresponding captions. However, this image-captioning training objective inherently biases the VPG to concentrate solely on the primary visual content sufficient for caption generation, often neglecting other visual details. This shortcoming results in MLLMs' underperformance in comprehending demonstrative instructions: multiple, interleaved, multimodal instructions that demonstrate the required context to complete a task. To address this issue, we introduce a generic and lightweight Visual Prompt Generator Complete module (VPG-C), which can infer and complete the missing details essential for comprehending demonstrative instructions. Further, we propose a synthetic discriminative training strategy to fine-tune VPG-C, eliminating the need for supervised demonstrative instructions. For evaluation, we build DEMON, a comprehensive benchmark for demonstrative instruction understanding. Synthetically trained with the proposed strategy, VPG-C achieves significantly stronger zero-shot performance across all tasks of DEMON. Further evaluation on the MME and OwlEval benchmarks also demonstrates the superiority of VPG-C. Our benchmark, code, and pre-trained models are available at https://github.com/DCDmllm/Cheetah.
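The two-stage idea in the abstract can be illustrated with a toy sketch: a VPG projects visual features into soft tokens for a frozen LLM, and a lightweight VPG-C branch uses an intermediate LLM hidden state to recover neglected visual detail and add it back residually. All dimensions, weights, and the specific completion mechanism below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D_VIS, D_LLM, N_TOKENS = 8, 16, 4  # toy sizes (assumed, far smaller than real models)

# VPG: a linear map from a pooled visual feature to N_TOKENS soft tokens
# living in the frozen LLM's embedding space.
W_vpg = rng.normal(size=(D_VIS, D_LLM * N_TOKENS)) * 0.02

def vpg(visual_feats):
    """Map one visual feature vector to soft prompt tokens for the LLM."""
    return (visual_feats @ W_vpg).reshape(N_TOKENS, D_LLM)

# VPG-C (sketch): conditioned on an intermediate LLM hidden state, form a
# language-guided query over the visual features, extract the "missing"
# detail, and add the completion back to the original tokens (residual).
W_q = rng.normal(size=(D_LLM, D_VIS)) * 0.02

def vpg_c(tokens, llm_hidden, visual_feats):
    query = llm_hidden @ W_q                       # language-guided query
    detail = np.tanh(query * visual_feats)         # re-weighted visual detail
    completion = (detail @ W_vpg).reshape(N_TOKENS, D_LLM)
    return tokens + completion                     # residual completion

v = rng.normal(size=D_VIS)          # stand-in visual feature
h = rng.normal(size=D_LLM)          # stand-in intermediate LLM state
base = vpg(v)
completed = vpg_c(base, h, v)
print(base.shape, completed.shape)  # (4, 16) (4, 16)
```

Because VPG-C only adds a residual on top of the frozen VPG and LLM, it stays lightweight; only the small completion branch would need fine-tuning.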

Juncheng Li, Kaihang Pan, Zhiqi Ge, Minghe Gao, Wei Ji, Wenqiao Zhang, Tat-Seng Chua, Siliang Tang, Hanwang Zhang, Yueting Zhuang • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Vision-Language Evaluation | MME (test) | Communication Score | 98.57 | 17
Multimodal Perception and Cognition | MME (test) | Overall Score | 1580 | 14
Multi-modal Evaluation | DEMON Benchmark (zero-shot) | Multi Modal Dialogue | 37.5 | 11
Multimodal Understanding | DEMON | MMD | 37.5 | 9
General Multi-image Reasoning and Generalization | LLaVA-Interleave Bench (out-domain) | Average Score | 34.5 | 7
Multi-image Understanding and Reasoning | LLaVA-Interleave Bench (in-domain) | Avg Score | 35.8 | 7
