
Visually-Augmented Language Modeling

About

Human language is grounded in multimodal knowledge, including visual knowledge such as colors, sizes, and shapes. However, current large-scale pre-trained language models rely on text-only self-supervised training with massive text data, which precludes them from utilizing relevant visual information when necessary. To address this, we propose a novel pre-training framework, named VaLM, to Visually-augment text tokens with retrieved relevant images for Language Modeling. Specifically, VaLM builds on a latent text-image alignment method that uses an image retrieval module to fetch images corresponding to a given textual context. With the visually-augmented context, VaLM applies a visual knowledge fusion layer to enable multimodally grounded language modeling by attending to both the text context and the visual knowledge in the retrieved images. We evaluate VaLM on various visual-knowledge-intensive commonsense reasoning tasks, which require visual information to solve. The experimental results show that VaLM outperforms strong language-only and vision-language baselines, with substantial gains in reasoning about object commonsense including color, size, and shape. Our code is available at https://github.com/Victorwz/VaLM.
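The fusion step described above — text tokens attending to retrieved image features — can be sketched as a single-head cross-attention with a residual connection. This is a minimal illustrative sketch, not the authors' implementation: the function names, the single-head simplification, and the residual design are assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def visual_fusion(text_hidden, image_embeds):
    """Sketch of a visual knowledge fusion layer (hypothetical, simplified):
    each text token cross-attends to the retrieved image embeddings, and the
    attended visual context is added back to the token representation.

    text_hidden:  (seq_len, d) hidden states from the language model
    image_embeds: (n_images, d) embeddings of retrieved images
    """
    d = text_hidden.shape[-1]
    # Scaled dot-product attention scores: (seq_len, n_images)
    scores = text_hidden @ image_embeds.T / np.sqrt(d)
    weights = softmax(scores, axis=-1)
    # Weighted sum of image embeddings per token: (seq_len, d)
    visual_context = weights @ image_embeds
    # Residual add keeps the original text representation intact
    return text_hidden + visual_context

# Example: 10 tokens, 4 retrieved images, 64-dim representations
rng = np.random.default_rng(0)
text = rng.normal(size=(10, 64))
images = rng.normal(size=(4, 64))
fused = visual_fusion(text, images)
print(fused.shape)  # (10, 64)
```

In the full model, the projections, multi-head structure, and placement of this layer inside the Transformer stack follow the paper; the sketch only conveys the attend-then-fuse idea.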

Weizhi Wang, Li Dong, Hao Cheng, Haoyu Song, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, Furu Wei • 2022

Related benchmarks

Task                            Dataset                      Metric                Result   Rank
Language Modeling               LAMBADA                      Accuracy              35.61    268
Text Classification             SST-2                        Accuracy              44.66    125
Object Color Prediction         Memory Color (zero-shot)     Accuracy (zero-shot)  58.6     12
Object Color Prediction         Color Terms (zero-shot)      Accuracy              52.7     12
Relative Size Prediction        Relative Size (zero-shot)    Accuracy              85       11
Object Shape Prediction         ViComTe (zero-shot)          Accuracy (zero-shot)  62.8     11
Natural Language Understanding  AGNews                       Accuracy              41.63    9
Natural Language Understanding  MPQA                         Accuracy              67.95    4
Visual Language Understanding   VLU evaluation suite         MemoryC               47.09    4
Language Modeling               WikiText-103                 Perplexity (PPL)      43.68    4

(10 of 11 rows shown.)
