
VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation

About

VILA-U is a unified foundation model that integrates video, image, and language understanding and generation. Traditional visual language models (VLMs) use separate modules for understanding and for generating visual content, which can lead to misalignment and added complexity. VILA-U instead employs a single autoregressive next-token prediction framework for both tasks, eliminating the need for additional components such as diffusion models. This approach not only simplifies the model but also achieves near state-of-the-art performance in both visual language understanding and generation. The success of VILA-U rests on two main factors: a unified vision tower that aligns discrete visual tokens with textual inputs during pretraining, which enhances visual perception; and the observation that autoregressive image generation can match the quality of diffusion models when trained on a high-quality dataset. Together these allow VILA-U to perform comparably to more complex models while using a fully token-based autoregressive framework.
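The core idea of a fully token-based framework can be illustrated with a toy sketch: text tokens and discrete visual tokens share one vocabulary, so a single next-token prediction loop serves both understanding and generation. Everything below is hypothetical for illustration (the token ids, the `<img>` marker, and the stand-in `toy_next_token` policy are not VILA-U's actual API or tokenization).

```python
# Hypothetical unified vocabulary: a few text ids plus an <img> marker.
TEXT_VOCAB = {0: "<bos>", 1: "a", 2: "cat", 3: "<img>"}

# Discrete visual tokens (e.g. from a vector-quantized vision tower)
# occupy their own id range within the same vocabulary.
VISUAL_TOKEN_OFFSET = 100

def toy_next_token(sequence):
    """Stand-in for the transformer: a deterministic toy policy.

    After the <img> marker it emits visual-token ids; otherwise it
    walks through the text vocabulary. A real model would sample
    from a learned distribution over the whole unified vocabulary.
    """
    last = sequence[-1]
    if last == 3 or last >= VISUAL_TOKEN_OFFSET:
        # Continue (or start) a run of visual tokens.
        return max(last, VISUAL_TOKEN_OFFSET - 1) + 1
    return last + 1  # next text token id

def generate(prompt, num_tokens):
    """One autoregressive loop covers text and image generation alike."""
    seq = list(prompt)
    for _ in range(num_tokens):
        seq.append(toy_next_token(seq))
    return seq

# Text continuation and image-token generation use the same loop:
print(generate([0], 3))           # -> [0, 1, 2, 3]
print(generate([0, 1, 2, 3], 4))  # -> [0, 1, 2, 3, 100, 101, 102, 103]
```

The point of the sketch is the design choice: because visual content is represented as discrete tokens in the same sequence, no separate diffusion head or generation module is needed; the decoding loop is identical regardless of modality.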

Yecheng Wu, Zhuoyang Zhang, Junyu Chen, Haotian Tang, Dacheng Li, Yunhao Fang, Ligeng Zhu, Enze Xie, Hongxu Yin, Li Yi, Song Han, Yao Lu• 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Object Hallucination Evaluation | POPE | Accuracy | 85.8 | 1455 |
| Visual Question Answering | TextVQA | Accuracy | 60.8 | 1285 |
| Visual Question Answering | GQA | Accuracy | 60.8 | 1249 |
| Text-based Visual Question Answering | TextVQA | Accuracy | 60.8 | 807 |
| Multimodal Evaluation | MME | – | – | 658 |
| Multimodal Understanding | MMBench | – | – | 637 |
| Multimodal Understanding | MM-Vet | MM-Vet Score | 33.5 | 531 |
| Multimodal Understanding | MMMU | Accuracy | 33.5 | 437 |
| Multimodal Capability Evaluation | MM-Vet | Score | 33.5 | 345 |
| Multimodal Understanding | SEED-Bench | Accuracy | 59.0 | 343 |

Showing 10 of 89 rows.
