
MMAR: Towards Lossless Multi-Modal Auto-Regressive Probabilistic Modeling

About

Recent advancements in multi-modal large language models have propelled the development of joint probabilistic models capable of both image understanding and generation. However, we identify that recent methods suffer from loss of image information during the understanding task, due to either image discretization or diffusion denoising steps. To address this issue, we propose a novel Multi-Modal Auto-Regressive (MMAR) probabilistic modeling framework. Unlike the discretization line of methods, MMAR takes in continuous-valued image tokens to avoid information loss in an efficient way. Differing from diffusion-based approaches, we disentangle the diffusion process from the auto-regressive backbone model by employing a light-weight diffusion head on top of each auto-regressed image patch embedding. In this way, when the model transitions from image generation to understanding through text generation, the backbone model's hidden representation of the image is not limited to the last denoising step. To train our method successfully, we also propose a theoretically proven technique that addresses a numerical stability issue, together with a training strategy that balances the generation and understanding objectives. Extensive evaluations on 18 image understanding benchmarks show that MMAR significantly outperforms most existing joint multi-modal models, even surpassing methods that employ a pre-trained CLIP vision encoder, while still generating high-quality images. We further show that our method scales with larger data and model sizes.
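The core idea above (a light-weight diffusion head conditioned on each auto-regressed patch embedding, trained with a denoising objective on continuous-valued tokens) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the noising schedule, the two-layer MLP head, and all names and shapes (`diffusion_head_loss`, `z`, `x`, `hidden`) are assumptions chosen for clarity.

```python
import numpy as np


def diffusion_head_loss(z, x, rng, hidden=64):
    """Sketch of a per-patch denoising loss (hypothetical names/shapes).

    z : (N, D) backbone hidden states, one per image patch
    x : (N, C) continuous-valued image tokens to be modeled

    A tiny randomly initialised MLP stands in for the light-weight
    diffusion head; actually training it is out of scope here.
    """
    N, C = x.shape
    # Sample a noise level per patch and noise the continuous tokens.
    t = rng.uniform(0.0, 1.0, size=(N, 1))           # timestep in [0, 1]
    eps = rng.standard_normal(x.shape)
    x_t = np.sqrt(1.0 - t) * x + np.sqrt(t) * eps    # simple noising schedule
    # Head input: noised token, conditioning hidden state, and timestep.
    h_in = np.concatenate([x_t, z, t], axis=1)
    W1 = rng.standard_normal((h_in.shape[1], hidden)) / np.sqrt(h_in.shape[1])
    W2 = rng.standard_normal((hidden, C)) / np.sqrt(hidden)
    eps_hat = np.maximum(h_in @ W1, 0.0) @ W2        # two-layer ReLU MLP head
    # Epsilon-prediction MSE, the standard diffusion training objective.
    return float(np.mean((eps_hat - eps) ** 2))


rng = np.random.default_rng(0)
loss = diffusion_head_loss(rng.standard_normal((16, 32)),   # 16 patches, D=32
                           rng.standard_normal((16, 8)),    # token dim C=8
                           rng)
```

Because the head consumes the backbone's hidden state directly, the backbone's representation of the image is shared between generation (where the head denoises) and understanding (where the same hidden states feed text generation), which is the disentanglement the abstract describes.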

Jian Yang, Dacheng Yin, Yizhou Zhou, Fengyun Rao, Wei Zhai, Yang Cao, Zheng-Jun Zha• 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Object Hallucination Evaluation | POPE | -- | 935 |
| Vision Understanding | MMBench | -- | 104 |
| Visual Understanding | MM-Vet | MM-Vet Score: 30.64 | 102 |
| Text-to-Image Generation | MJHQ-30K | Overall FID: 15.6 | 59 |
| Text-to-Image Generation | MS-COCO 30K (test) | FID: 22.9 | 41 |
| Image Generation | GenEval overall | GenEval Overall Score: 51 | 30 |
| Visual Understanding | MME perception and cognition v1.0 | MME Perception Score: 1.49e+3 | 24 |
| Text-to-Image Generation | COCO 2014 | FID: 17.1 | 15 |
| Visual Understanding | SEED-Bench | SEED Score: 68.63 | 9 |
| Visual Understanding | 18 Visual Understanding Assessments (VLMEvalKit) | AVE@18Und.: 48.25 | 6 |
