
MultiModal-GPT: A Vision and Language Model for Dialogue with Humans

About

We present MultiModal-GPT, a vision and language model for multi-round dialogue with humans. MultiModal-GPT can follow diverse instructions, such as generating detailed captions, counting objects of interest, and answering general questions from users. MultiModal-GPT is parameter-efficiently fine-tuned from OpenFlamingo, with Low-rank Adapters (LoRA) added to both the cross-attention and the self-attention parts of the language model. We first construct instruction templates with vision-and-language data for multi-modality instruction tuning, so that the model learns to understand and follow human instructions. We find that the quality of the training data is vital for dialogue performance: even a small amount of data containing short answers can lead the model to respond tersely to any instruction. To further enhance MultiModal-GPT's ability to chat with humans, we also use language-only instruction-following data in joint training. Joint training on language-only and vision-language instructions with the same instruction template effectively improves dialogue performance. Various demos show MultiModal-GPT's ability to hold continuous dialogue with humans. Code, dataset, and demo are available at https://github.com/open-mmlab/Multimodal-GPT
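The abstract states that LoRA adapters are added to both the cross-attention and self-attention parts of the language model. The sketch below shows, in plain PyTorch, one way such low-rank adapters can be wrapped around frozen attention projections; the class and attribute names (LoRALinear, q_proj, v_proj) and the rank/scaling defaults are illustrative assumptions for this sketch, not the actual Multimodal-GPT implementation.

```python
# Minimal LoRA sketch: a frozen linear layer plus a trainable low-rank update.
# Names, ranks, and which projections get adapters are assumptions, not the
# Multimodal-GPT code itself.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Computes base(x) + (alpha / r) * B(A(x)), with the base weight frozen."""
    def __init__(self, base: nn.Linear, r: int = 16, alpha: int = 32):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                       # keep pretrained weights fixed
        self.lora_a = nn.Linear(base.in_features, r, bias=False)   # down-projection A
        self.lora_b = nn.Linear(r, base.out_features, bias=False)  # up-projection B
        nn.init.zeros_(self.lora_b.weight)                # start as a zero (identity) update
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

def add_lora_to_attention(block: nn.Module) -> None:
    """Wrap the query/value projections of an attention block with LoRA.
    The attribute names q_proj / v_proj are hypothetical placeholders."""
    block.q_proj = LoRALinear(block.q_proj)
    block.v_proj = LoRALinear(block.v_proj)
```

In this style of parameter-efficient fine-tuning, only the LoRA matrices (and any newly added modules) remain trainable, so the number of updated parameters is a small fraction of the full language model.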

Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, Kai Chen • 2023

Related benchmarks

Task | Dataset | Result | Rank
Object Hallucination Evaluation | POPE | -- | 935
Object Hallucination | POPE (Random) | F1 Score: 66.71 | 200
Object Hallucination | POPE Adversarial | Accuracy: 50 | 196
Object Hallucination | POPE Popular | F1 Score: 66.7 | 188
Object Hallucination Evaluation | POPE (test) | -- | 44
Multi-modal Understanding | MMBench (dev) | Overall Score: 16 | 40
Visual Hallucination Evaluation | POPE MS-COCO Adversarial sampling (val) | Accuracy: 50 | 39
Object Hallucination Assessment | MSCOCO | CHAIR Instance Score: 18.2 | 38
Vision-Language Evaluation | MME (test) | Communication Score: 49.29 | 17
Object Hallucination Evaluation | COCO (val) | Random Accuracy: 50.1 | 8

(Showing 10 of 13 rows.)
