
Z-LaVI: Zero-Shot Language Solver Fueled by Visual Imagination

About

Large-scale pretrained language models have made significant advances in solving downstream language understanding tasks. However, they generally suffer from reporting bias, the phenomenon describing the lack of explicit commonsense knowledge in written text, e.g., "an orange is orange". To overcome this limitation, we develop a novel approach, Z-LaVI, to endow language models with visual imagination capabilities. Specifically, we leverage two complementary types of "imaginations": (i) recalling existing images through retrieval and (ii) synthesizing nonexistent images via text-to-image generation. Jointly exploiting the language inputs and the imagination, a pretrained vision-language model (e.g., CLIP) eventually composes a zero-shot solution to the original language tasks. Notably, fueling language models with imagination can effectively leverage visual knowledge to solve plain language tasks. Consequently, Z-LaVI consistently improves the zero-shot performance of existing language models across a diverse set of language tasks.
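As a rough, minimal sketch (not the authors' released implementation) of the idea described above, the snippet below scores answer candidates against "imagined" images with CLIP and soft-ensembles the result with language-only probabilities. It assumes the images have already been retrieved or synthesized; the model name, helper functions, and the ensemble weight w are illustrative assumptions, and the exact scoring in Z-LaVI may differ.

```python
# Illustrative sketch of CLIP-based scoring over imagined images (assumptions noted above).
import torch
from transformers import CLIPModel, CLIPProcessor

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def visual_scores(images, candidates):
    """Score each answer candidate against the imagined images (PIL images) with CLIP."""
    inputs = processor(text=candidates, images=images, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = clip(**inputs).logits_per_image          # (num_images, num_candidates)
    return logits.softmax(dim=-1).mean(dim=0)             # average over images -> (num_candidates,)

def ensemble(lm_probs, vis_probs, w=0.5):
    """Soft ensemble of language-only and vision-based candidate probabilities (w is a hypothetical weight)."""
    return w * lm_probs + (1 - w) * vis_probs
```

The language-only probabilities could come from any zero-shot language model, and the images from an image search engine or a text-to-image generator, as outlined in the abstract.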

Yue Yang, Wenlin Yao, Hongming Zhang, Xiaoyang Wang, Dong Yu, Jianshu Chen • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Topic Classification | AG-News | Accuracy | 82.4 | 173
Science Question Answering | ARC-E | Accuracy | 59.5 | 138
Science Question Answering | ARC-C | Accuracy | 36.5 | 127
Multiple-choice Question Answering | SciQ | Accuracy | 74 | 74
Question Answering | QASC | Score | 42.1 | 36
Commonsense knowledge probing | ViComTe (test) | Color Spearman Correlation | 49.6 | 20
Word Sense Disambiguation | CoarseWSD-20 | Accuracy | 90.6 | 20
Topic Classification | Situation | Accuracy | 46.6 | 16
Image-to-Image Translation | summer-winter Global 512x512 | FID | 92.65 | 12
Image-to-Image Translation | horse-zebra Local 512x512 | FID | 72.68 | 11

Other info

Code
