Visually-Augmented Language Modeling
About
Human language is grounded in multimodal knowledge, including visual knowledge such as colors, sizes, and shapes. However, current large-scale pre-trained language models rely on text-only self-supervised training over massive text corpora, which precludes them from utilizing relevant visual information when necessary. To address this, we propose a novel pre-training framework, named VaLM, to Visually-augment text tokens with retrieved relevant images for Language Modeling. Specifically, VaLM builds on a novel latent text-image alignment method via an image retrieval module that fetches corresponding images given a textual context. With the visually-augmented context, VaLM uses a visual knowledge fusion layer to enable multimodal grounded language modeling by attending to both the text context and the visual knowledge in the retrieved images. We evaluate VaLM on various visual knowledge-intensive commonsense reasoning tasks, which require visual information to excel. The experimental results show that VaLM outperforms strong language-only and vision-language baselines with substantial gains on object commonsense reasoning, including color, size, and shape. Our code is available at https://github.com/Victorwz/VaLM.
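The two components described above (image retrieval conditioned on the textual context, followed by attention-based fusion of the retrieved visual features into the text stream) can be sketched as follows. This is an illustrative toy example, not the actual VaLM implementation: the function names `retrieve_images` and `fuse_visual_knowledge`, the embedding dimension, and the residual fusion are all assumptions made for clarity.

```python
import numpy as np

def retrieve_images(text_emb, image_embs, k=4):
    # Retrieve the k image embeddings most similar to the text context
    # embedding, by cosine similarity (a common choice for dense retrieval;
    # the paper's retriever may differ).
    sims = image_embs @ text_emb / (
        np.linalg.norm(image_embs, axis=1) * np.linalg.norm(text_emb) + 1e-8
    )
    return image_embs[np.argsort(-sims)[:k]]

def fuse_visual_knowledge(text_emb, retrieved):
    # Single-query attention: the text embedding attends over the retrieved
    # image embeddings, and the attended visual summary is added back to the
    # text representation (a simple residual fusion).
    scores = retrieved @ text_emb / np.sqrt(text_emb.shape[0])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    visual_summary = weights @ retrieved
    return text_emb + visual_summary

# Toy usage with random embeddings standing in for a text context
# and a cached image-embedding index.
rng = np.random.default_rng(0)
text_emb = rng.standard_normal(64)
image_embs = rng.standard_normal((100, 64))
fused = fuse_visual_knowledge(text_emb, retrieve_images(text_emb, image_embs))
print(fused.shape)  # (64,)
```

In the real model this fusion happens inside a transformer layer over per-token contexts; the sketch only shows the retrieval-then-attend control flow for a single context vector.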
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | LAMBADA | Accuracy | 35.61 | 268 |
| Text Classification | SST-2 | Accuracy | 44.66 | 125 |
| Object Color Prediction | Memory Color (zero-shot) | Accuracy (zero-shot) | 58.6 | 12 |
| Object Color Prediction | Color Terms (zero-shot) | Accuracy | 52.7 | 12 |
| Relative Size Prediction | Relative Size (zero-shot) | Accuracy | 85 | 11 |
| Object Shape Prediction | ViComTe (zero-shot) | Accuracy (zero-shot) | 62.8 | 11 |
| Natural Language Understanding | AGNews | Accuracy | 41.63 | 9 |
| Natural Language Understanding | MPQA | Accuracy | 67.95 | 4 |
| Visual Language Understanding | VLU (Visual Language Understanding) evaluation suite | MemoryC | 47.09 | 4 |
| Language Modeling | WikiText-103 | Perplexity (PPL) | 43.68 | 4 |