ANOLE: An Open, Autoregressive, Native Large Multimodal Model for Interleaved Image-Text Generation
About
Previous open-source large multimodal models (LMMs) have faced several limitations: (1) they often lack native integration, requiring adapters to align visual representations with pre-trained large language models (LLMs); (2) many are restricted to single-modal generation; (3) while some support multimodal generation, they rely on separate diffusion models for visual modeling and generation. To mitigate these limitations, we present Anole, an open, autoregressive, native large multimodal model for interleaved image-text generation. We build Anole from Meta AI's Chameleon, adopting an innovative fine-tuning strategy that is both data-efficient and parameter-efficient. Anole demonstrates high-quality, coherent multimodal generation capabilities. We have open-sourced our model, training framework, and instruction tuning data.
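The released training code is not reproduced here, but as a hedged illustration of what a parameter-efficient fine-tuning setup of this kind can look like, the sketch below freezes an autoregressive backbone and updates only the output-head rows that score image tokens. This is an assumption made for illustration, not Anole's actual training code: the toy dimensions, the module layout, and the `IMAGE_TOKEN_IDS` range are all hypothetical.

```python
# Minimal sketch: freeze the backbone, fine-tune only the output-head
# rows corresponding to image tokens. Illustrative, not Anole's code.
import torch
import torch.nn as nn

VOCAB_SIZE, HIDDEN = 1024, 64              # toy sizes for illustration
IMAGE_TOKEN_IDS = torch.arange(512, 1024)  # hypothetical image-token id range

model = nn.ModuleDict({
    "embed": nn.Embedding(VOCAB_SIZE, HIDDEN),
    "backbone": nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=HIDDEN, nhead=4, batch_first=True),
        num_layers=2,
    ),
    "lm_head": nn.Linear(HIDDEN, VOCAB_SIZE, bias=False),
})

# Freeze everything, then re-enable gradients for the output head only.
for p in model.parameters():
    p.requires_grad = False
model["lm_head"].weight.requires_grad = True

# Mask gradients so only the image-token rows of the head are updated.
row_mask = torch.zeros(VOCAB_SIZE, 1)
row_mask[IMAGE_TOKEN_IDS] = 1.0
model["lm_head"].weight.register_hook(lambda grad: grad * row_mask)

# One next-token-prediction step over a mixed (text + image) token sequence.
tokens = torch.randint(0, VOCAB_SIZE, (2, 16))
causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
hidden = model["backbone"](model["embed"](tokens), mask=causal)
logits = model["lm_head"](hidden)
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, VOCAB_SIZE), tokens[:, 1:].reshape(-1)
)
loss.backward()
torch.optim.AdamW([model["lm_head"].weight], lr=1e-4).step()
```

The appeal of a scheme like this is that text capabilities inherited from the base model stay untouched while the trainable parameter count shrinks to a slice of a single matrix, which is consistent with the data- and parameter-efficiency claims above.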
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Captioning | MS COCO Karpathy (test) | CIDEr | 0.1507 | 682 |
| Text-to-Image Generation | MS-COCO | -- | -- | 75 |
| Text-to-Image In-Context Learning | T2I-FMIT (Text-to-Image Fast Mini-ImageNet) | Accuracy | 11 | 18 |
| Image Generation | Parti-Prompts (val) | SR | 1 | 10 |
| Text-to-Image Generation | MSCOCO 2017 (val) | SR | 1 | 10 |
| Image Generation | MSCOCO 2017 (val) | IS | 30.25 | 5 |
| Text-to-Interleaved Generation | RecipeGen (test) | Temporal Coherence (GPT-4o) | 1.55 | 5 |