
Vokenization: Improving Language Understanding with Contextualized, Visual-Grounded Supervision

About

Humans learn language by listening, speaking, writing, reading, and also via interaction with the multimodal real world. Existing language pre-training frameworks show the effectiveness of text-only self-supervision, while in this paper we explore the idea of a visually-supervised language model. We find that the main obstacle to this exploration is the large divergence in magnitude and distribution between visually-grounded language datasets and pure-language corpora. Therefore, we develop a technique named "vokenization" that extrapolates multimodal alignments to language-only data by contextually mapping language tokens to their related images (which we call "vokens"). The "vokenizer" is trained on relatively small image-captioning datasets, and we then apply it to generate vokens for large language corpora. Trained with these contextually generated vokens, our visually-supervised language models show consistent improvements over self-supervised alternatives on multiple pure-language tasks such as GLUE, SQuAD, and SWAG. Code and pre-trained models are publicly available at https://github.com/airsplay/vokenization

Hao Tan, Mohit Bansal • 2020
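The core retrieval step described in the abstract, mapping each contextual token embedding to its nearest related image, can be sketched as a cosine-similarity nearest-neighbor search. This is a minimal illustration, not the paper's actual vokenizer: the function name `vokenize` and the use of raw nearest-neighbor matching are assumptions here, whereas the real vokenizer learns its token and image encoders on image-captioning data.

```python
import numpy as np

def vokenize(token_embs: np.ndarray, image_embs: np.ndarray) -> np.ndarray:
    """Assign each contextual token embedding its nearest image ("voken").

    token_embs: (num_tokens, dim) contextual token representations.
    image_embs: (num_images, dim) image representations.
    Returns an array of image indices, one voken id per token.

    Hypothetical sketch: cosine similarity via L2 normalization followed
    by a dot product, then argmax over the image set.
    """
    t = token_embs / np.linalg.norm(token_embs, axis=1, keepdims=True)
    v = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    sims = t @ v.T                 # (num_tokens, num_images) cosine scores
    return sims.argmax(axis=1)     # nearest image index for each token
```

In the paper's setting, these voken ids then serve as additional classification targets during language-model pre-training, alongside the usual text-only objectives.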

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Natural Language Understanding | GLUE (test dev) | MRPC Accuracy | 87 | 87 |
| Natural Language Understanding | GLUE standard (val test) | SST-2 Accuracy | 92.2 | 13 |
| Object Color Prediction | Memory Color zero-shot | Accuracy (zero-shot) | 14.2 | 12 |
| Object Color Prediction | Color Terms zero-shot | Accuracy | 20 | 12 |
| Relative Size Prediction | Relative Size zero-shot | Accuracy | 72.4 | 11 |
| Object Shape Prediction | ViComTe zero-shot | Accuracy (zero-shot) | 43.2 | 11 |
| Natural Language Understanding | GLUE large tasks (dev) | SST-2 Score | 92.2 | 9 |
