LaMI: Augmenting Large Language Models via Late Multi-Image Fusion
About
Commonsense reasoning often requires both textual and visual knowledge, yet Large Language Models (LLMs) trained solely on text lack visual grounding (e.g., "what color is an emperor penguin's belly?"). Visual Language Models (VLMs) perform better on visually grounded tasks but face two limitations: (i) they often perform worse on text-only commonsense reasoning than text-trained LLMs, and (ii) adapting newly released LLMs to vision input typically requires costly multimodal training. An alternative augments LLMs with test-time visual signals, improving visual commonsense without harming textual reasoning, but prior designs often rely on early fusion and a single image, which can be suboptimal. We propose a late multi-image fusion method: multiple images are generated from the text prompt via lightweight parallel sampling, and their prediction probabilities are combined with those of a text-only LLM through a late-fusion layer that integrates projected visual features just before the final prediction. Across visual commonsense and NLP benchmarks, our method significantly outperforms augmented LLMs on visual reasoning, matches VLMs on vision-based tasks, and, when applied to strong LLMs such as LLaMA 3, also improves NLP performance while adding only modest test-time overhead. Project page is available at: https://guyyariv.github.io/LaMI.
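The late-fusion idea above can be sketched in a few lines: features from K generated images are projected into vocabulary space, pooled, and combined with the text-only LLM's logits just before the final prediction. This is a minimal illustration, not the paper's implementation; the projection matrix, mean pooling, and the scalar fusion weight `alpha` are all simplifying assumptions (the paper learns a fusion layer).

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def late_multi_image_fusion(text_logits, image_features, proj, alpha=0.5):
    """Hypothetical sketch of late multi-image fusion.

    text_logits:    (V,)   logits from the text-only LLM.
    image_features: (K, D) one feature vector per generated image.
    proj:           (D, V) projection from visual features to vocab logits
                           (stand-in for the learned late-fusion layer).
    alpha:          scalar fusion weight (illustrative assumption).
    """
    visual_logits = image_features @ proj          # (K, V) per-image logits
    pooled = visual_logits.mean(axis=0)            # pool over the K sampled images
    fused = (1 - alpha) * text_logits + alpha * pooled  # fuse just before prediction
    return softmax(fused)                          # final next-token distribution

# Toy usage: vocab of 5, 3 generated images, 4-dim visual features.
probs = late_multi_image_fusion(
    text_logits=np.random.randn(5),
    image_features=np.random.randn(3, 4),
    proj=np.random.randn(4, 5),
)
```

Because fusion happens only at the output layer, the text-only LLM's computation is untouched, which is why the method can preserve textual reasoning while adding visual grounding.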
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | CR | Accuracy | 75.9 | 27 |
| Reading Comprehension | RC | Accuracy | 64.4 | 23 |
| Visual Commonsense | Visual Commonsense (VC) | VC Score | 57.8 | 16 |
| Object Color Prediction | Memory Color zero-shot | Accuracy (zero-shot) | 74.5 | 12 |
| Object Color Prediction | Color Terms zero-shot | Accuracy | 72.5 | 12 |
| Object Shape Prediction | ViComTe zero-shot | Accuracy (zero-shot) | 67.3 | 11 |
| Relative Size Prediction | Relative Size zero-shot | Accuracy | 85.5 | 11 |