
Commonsense Knowledge Transfer for Pre-trained Language Models

About

Despite serving as the foundation models for a wide range of NLP benchmarks, pre-trained language models have shown limited capabilities of acquiring implicit commonsense knowledge from self-supervision alone, compared to learning linguistic and factual knowledge that appear more explicitly in the surface patterns of text. In this work, we introduce commonsense knowledge transfer, a framework to transfer the commonsense knowledge stored in a neural commonsense knowledge model to a general-purpose pre-trained language model. It first exploits general texts to form queries for extracting commonsense knowledge from the neural commonsense knowledge model and then refines the language model with two self-supervised objectives: commonsense mask infilling and commonsense relation prediction, which align human language with the underlying commonsense knowledge. Empirical results show that our approach consistently improves the model's performance on downstream tasks that require commonsense reasoning. Moreover, we find that the improvement is more significant in the few-shot setting. This suggests that our approach helps language models better transfer to downstream tasks without extensive supervision by injecting commonsense knowledge into their parameters.

Wangchunshu Zhou, Ronan Le Bras, Yejin Choi • 2023
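
The abstract describes refining the language model with two self-supervised objectives: commonsense mask infilling and commonsense relation prediction. The sketch below is a minimal, illustrative PyTorch example of how such a joint objective could be wired up. The toy Transformer encoder, vocabulary size, MASK_ID, relation count, and the assumption of pre-extracted (head, relation, tail) knowledge are all stand-ins for illustration, not the authors' implementation; in the paper, the base model is a full pre-trained LM and the knowledge is extracted by querying a neural commonsense knowledge model with queries built from general text.

```python
# Minimal sketch of the two refinement objectives (assumptions: toy encoder,
# toy vocabulary, pre-extracted commonsense triples already tokenized).
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB_SIZE, HIDDEN, NUM_RELATIONS = 1000, 128, 8  # assumed sizes
MASK_ID = 0                                       # assumed id of the [MASK] token

class CommonsenseRefiner(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
        layer = nn.TransformerEncoderLayer(d_model=HIDDEN, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # stand-in for the PLM
        self.lm_head = nn.Linear(HIDDEN, VOCAB_SIZE)    # commonsense mask infilling head
        self.rel_head = nn.Linear(HIDDEN, NUM_RELATIONS)  # commonsense relation prediction head

    def forward(self, token_ids):
        return self.encoder(self.embed(token_ids))

def refinement_loss(model, token_ids, target_ids, relation_labels):
    """Joint loss: infill masked commonsense spans + predict the triple's relation."""
    hidden = model(token_ids)

    # Commonsense mask infilling: recover the original tokens at masked positions.
    logits = model.lm_head(hidden)
    mask = token_ids.eq(MASK_ID)
    infill_loss = F.cross_entropy(logits[mask], target_ids[mask])

    # Commonsense relation prediction: classify the relation from pooled context.
    pooled = hidden.mean(dim=1)
    relation_loss = F.cross_entropy(model.rel_head(pooled), relation_labels)

    return infill_loss + relation_loss

# Toy usage with random data, just to show the shapes involved.
model = CommonsenseRefiner()
tokens = torch.randint(1, VOCAB_SIZE, (4, 16))
targets = tokens.clone()
tokens[:, 5:8] = MASK_ID                            # mask a commonsense span
relations = torch.randint(0, NUM_RELATIONS, (4,))
loss = refinement_loss(model, tokens, targets, relations)
loss.backward()
```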

Related benchmarks

Task                   | Dataset                          | Result         | Rank
Commonsense Reasoning  | PIQA                             | Accuracy 72.26 | 647
Common Sense Reasoning | COPA                             | Accuracy 73.4  | 138
Commonsense Reasoning  | SocialIQA                        | Accuracy 67.3  | 97
Commonsense Reasoning  | OBQA                             | Accuracy 61.58 | 75
Commonsense Reasoning  | CommonsenseQA (CSQA) v1.0 (test) | Accuracy 64.11 | 46
Commonsense Generation | CommonGen (test)                 | --             | 31
Commonsense Reasoning  | aNLI                             | Accuracy 64.37 | 28
Commonsense Reasoning  | CSQA (dev)                       | Accuracy 72.15 | 16
Common Sense Reasoning | PIQA (dev)                       | Accuracy 76.07 | 11
Commonsense Reasoning  | OBQA (dev)                       | Accuracy 66.7  | 3

Showing 10 of 13 rows.
