
Transfusion: Predict the Next Token and Diffuse Images with One Multi-Modal Model

About

We introduce Transfusion, a recipe for training a multi-modal model over discrete and continuous data. Transfusion combines the language modeling loss function (next token prediction) with diffusion to train a single transformer over mixed-modality sequences. We pretrain multiple Transfusion models up to 7B parameters from scratch on a mixture of text and image data, establishing scaling laws with respect to a variety of uni- and cross-modal benchmarks. Our experiments show that Transfusion scales significantly better than quantizing images and training a language model over discrete image tokens. By introducing modality-specific encoding and decoding layers, we can further improve the performance of Transfusion models, and even compress each image to just 16 patches. We further demonstrate that scaling our Transfusion recipe to 7B parameters and 2T multi-modal tokens produces a model that can generate images and text on a par with similar scale diffusion models and language models, reaping the benefits of both worlds.
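
As a concrete illustration of the recipe, the sketch below trains a single transformer over one mixed-modality sequence: discrete text tokens receive the standard next-token cross-entropy loss, continuous image patches receive a DDPM-style noise-prediction (MSE) loss, and the two losses are summed with a balancing weight. All names, dimensions, the simplified one-step noising, and the lambda value here are illustrative assumptions, not the authors' released code; the causal/bidirectional attention masking the paper uses is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical toy dimensions; the paper's models are far larger (up to 7B params).
VOCAB = 1000       # text vocabulary size
D = 64             # transformer hidden size
PATCH_DIM = 16     # flattened image-patch (latent) dimension
LAMBDA = 5.0       # assumed weight balancing the diffusion loss against the LM loss

class TinyTransfusion(nn.Module):
    """Minimal sketch: one transformer trunk, two losses.

    Text tokens are embedded and trained with next-token prediction;
    image patches are continuous vectors that get noised and trained
    with a noise-prediction (MSE) objective. This is an illustrative
    reconstruction, not the authors' implementation.
    """
    def __init__(self):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB, D)
        self.patch_in = nn.Linear(PATCH_DIM, D)    # modality-specific encoder
        self.patch_out = nn.Linear(D, PATCH_DIM)   # modality-specific decoder
        layer = nn.TransformerEncoderLayer(D, nhead=4, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(D, VOCAB)

    def forward(self, text_ids, patches, noise_scale):
        # Noise the continuous patches (a one-step stand-in for the
        # DDPM forward process at a sampled timestep).
        eps = torch.randn_like(patches)
        noisy = (1 - noise_scale) * patches + noise_scale * eps

        # One mixed-modality sequence: [text tokens | image patches].
        h = torch.cat([self.tok_emb(text_ids), self.patch_in(noisy)], dim=1)
        h = self.trunk(h)  # attention masking over text omitted for brevity

        n_text = text_ids.size(1)
        # LM loss: predict token i+1 from position i (next-token prediction).
        logits = self.lm_head(h[:, : n_text - 1])
        lm_loss = F.cross_entropy(
            logits.reshape(-1, VOCAB), text_ids[:, 1:].reshape(-1)
        )
        # Diffusion loss: predict the injected noise at the patch positions.
        eps_hat = self.patch_out(h[:, n_text:])
        diff_loss = F.mse_loss(eps_hat, eps)
        return lm_loss + LAMBDA * diff_loss

model = TinyTransfusion()
loss = model(
    torch.randint(0, VOCAB, (2, 8)),   # batch of 8 text tokens each
    torch.randn(2, 4, PATCH_DIM),      # 4 continuous image patches each
    noise_scale=0.3,
)
loss.backward()
print(float(loss))
```

Per the abstract, the patch encoder/decoder layers (`patch_in`/`patch_out` above) are the modality-specific components that can be made richer to improve performance, allowing each image to be compressed to as few as 16 patches.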

Chunting Zhou, Lili Yu, Arun Babu, Kushal Tirumala, Michihiro Yasunaga, Leonid Shamis, Jacob Kahn, Xuezhe Ma, Luke Zettlemoyer, Omer Levy • 2024

Related benchmarks

Task                              Dataset                    Result               Rank
--------------------------------  -------------------------  -------------------  ----
Object Hallucination Evaluation   POPE                       --                   1455
Image Captioning                  MS COCO Karpathy (test)    --                   682
Text-to-Image Generation          GenEval                    Overall Score 67     506
Text-to-Image Generation          GenEval                    Overall Score 67     391
Text-to-Image Generation          GenEval                    GenEval Score 63     360
Text-to-Image Generation          GenEval (test)             --                   221
Text-to-Image Generation          GenEval                    Overall Score 63     218
Visual Understanding              MM-Vet                     MM-Vet Score 13.9    142
Vision Understanding              MMBench                    --                   141
Text-to-Image Generation          GenEval                    GenEval Score 0.63   88

(Showing 10 of 18 rows.)
