
DreamLLM: Synergistic Multimodal Comprehension and Creation

About

This paper presents DreamLLM, a learning framework that achieves the first versatile Multimodal Large Language Models (MLLMs) empowered with the frequently overlooked synergy between multimodal comprehension and creation. DreamLLM operates on two fundamental principles. The first is generative modeling of both language and image posteriors by direct sampling in the raw multimodal space. This approach circumvents the limitations and information loss inherent in external feature extractors such as CLIP, yielding a more thorough multimodal understanding. The second is the generation of raw, interleaved documents, modeling both text and image content along with unstructured layouts, which allows DreamLLM to learn all conditional, marginal, and joint multimodal distributions effectively. As a result, DreamLLM is the first MLLM capable of generating free-form interleaved content. Comprehensive experiments highlight DreamLLM's superior performance as a zero-shot multimodal generalist, reaping the benefits of the enhanced learning synergy. Project page: https://dreamllm.github.io.
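The interleaved-document principle can be illustrated with a minimal sketch. The names below (`DREAM_TOKEN`, `linearize`) are illustrative assumptions, not identifiers from the DreamLLM codebase: the idea is that text spans and images from a document are serialized into one training sequence, with a special "dream" token marking each position where the model should synthesize an image, while the image payloads themselves are kept aside as targets for the image decoder.

```python
# Hypothetical sketch of interleaved-document linearization.
# DREAM_TOKEN and linearize() are illustrative, not DreamLLM's actual API.

DREAM_TOKEN = "<dream>"

def linearize(document):
    """Flatten an interleaved document into a single training sequence.

    `document` is a list of items, each either
      ("text", "some words")  or  ("image", image_payload).
    Text is kept verbatim; every image is replaced by DREAM_TOKEN, and the
    image payloads are returned separately as generation targets.
    """
    sequence, image_targets = [], []
    for kind, payload in document:
        if kind == "text":
            sequence.append(payload)
        elif kind == "image":
            sequence.append(DREAM_TOKEN)   # marks *where* to generate an image
            image_targets.append(payload)  # pixels supervise the image decoder
        else:
            raise ValueError(f"unknown item kind: {kind}")
    return sequence, image_targets

doc = [("text", "A recipe for pancakes."),
       ("image", "pancake.jpg"),
       ("text", "Flip when bubbles form."),
       ("image", "flipping.jpg")]
seq, targets = linearize(doc)
```

Because images can appear anywhere in the sequence, a model trained this way sees conditional (text given image), marginal (text only), and joint (text plus image) contexts within the same corpus, which is the synergy the abstract describes.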

Runpei Dong, Chunrui Han, Yuang Peng, Zekun Qi, Zheng Ge, Jinrong Yang, Liang Zhao, Jianjian Sun, Hongyu Zhou, Haoran Wei, Xiangwen Kong, Xiangyu Zhang, Kaisheng Ma, Li Yi • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Commonsense Reasoning | HellaSwag | Accuracy | 77.4 | 1460 |
| Visual Question Answering | VQA v2 | Accuracy | 72.9 | 1165 |
| Visual Question Answering | TextVQA | Accuracy | 41.8 | 1117 |
| Visual Question Answering | VizWiz | Accuracy | 49.3 | 1043 |
| Object Hallucination Evaluation | POPE | Accuracy | 41.8 | 935 |
| Multi-task Language Understanding | MMLU | Accuracy | 41.8 | 842 |
| Commonsense Reasoning | WinoGrande | Accuracy | 68.5 | 776 |
| Image Captioning | MS COCO Karpathy (test) | CIDEr | 1.154 | 682 |
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 56.6 | 664 |
| Commonsense Reasoning | PIQA | Accuracy | 78.6 | 647 |

Showing 10 of 51 rows.
