
Pre-training Text-to-Text Transformers for Concept-centric Common Sense

About

Pre-trained language models (PTLMs) have achieved impressive results on a range of natural language understanding (NLU) and generation (NLG) tasks. However, current pre-training objectives such as masked token prediction (for BERT-style PTLMs) and masked span infilling (for T5-style PTLMs) do not explicitly model the relational commonsense knowledge about everyday concepts that is crucial to many downstream tasks requiring common sense to understand or generate text. To augment PTLMs with concept-centric commonsense knowledge, this paper proposes both generative and contrastive objectives for learning common sense from text, and uses them as intermediate self-supervised learning tasks for incrementally pre-training PTLMs (before task-specific fine-tuning on downstream datasets). Furthermore, it develops a joint pre-training framework that unifies the generative and contrastive objectives so that they mutually reinforce each other. Extensive experimental results show that the proposed method, the concept-aware language model (CALM), can pack more commonsense knowledge into the parameters of a pre-trained text-to-text transformer without relying on external knowledge graphs, yielding better performance on both NLU and NLG tasks. Although CALM is incrementally pre-trained on a relatively small corpus for only a few steps, it outperforms baseline methods by a consistent margin and is even comparable to some larger PTLMs, suggesting that CALM can serve as a general, plug-and-play method for improving the commonsense reasoning ability of a PTLM.
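For context on the baseline objective the abstract contrasts with, the following is a minimal toy sketch of T5-style masked span infilling (an illustrative re-implementation, not the authors' or the T5 library's code; the function name and tokenization are assumptions). Contiguous spans are replaced by sentinel tokens in the input, and the target lists each sentinel followed by the tokens it hides:

```python
def span_infill(tokens, spans):
    """Toy T5-style span corruption.

    tokens: list of string tokens.
    spans:  sorted, non-overlapping (start, end) index pairs to mask.
    Returns (corrupted_input, target) as token lists.
    """
    corrupted, target = [], []
    prev = 0
    for i, (start, end) in enumerate(spans):
        sentinel = f"<extra_id_{i}>"      # T5 uses <extra_id_N> sentinels
        corrupted.extend(tokens[prev:start])
        corrupted.append(sentinel)        # span collapses to one sentinel
        target.append(sentinel)
        target.extend(tokens[start:end])  # target reveals the hidden span
        prev = end
    corrupted.extend(tokens[prev:])
    target.append(f"<extra_id_{len(spans)}>")  # closing sentinel, as in T5
    return corrupted, target

tokens = "the chef cooked a delicious meal".split()
inp, tgt = span_infill(tokens, [(1, 2), (4, 6)])
# inp -> ['the', '<extra_id_0>', 'cooked', 'a', '<extra_id_1>']
# tgt -> ['<extra_id_0>', 'chef', '<extra_id_1>', 'delicious', 'meal', '<extra_id_2>']
```

Note that this objective masks arbitrary spans uniformly; CALM's point is to instead build objectives around everyday concepts so that relational commonsense knowledge is modeled explicitly.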

Wangchunshu Zhou, Dong-Ho Lee, Ravi Kiran Selvam, Seyeon Lee, Bill Yuchen Lin, Xiang Ren · 2020

Related benchmarks

Task                            | Dataset                            | Result          | Rank
Commonsense Reasoning           | PIQA                               | Accuracy 71.01  | 647
Commonsense Reasoning           | COPA                               | Accuracy 72.2   | 138
Commonsense Reasoning           | SocialIQA                          | Accuracy 66     | 97
Commonsense Reasoning           | OBQA                               | Accuracy 60.9   | 75
Commonsense Reasoning           | CommonsenseQA (CSQA) v1.0 (test)   | Accuracy 63.32  | 46
Commonsense Generation          | CommonGen (test)                   | --              | 31
Commonsense Reasoning           | aNLI                               | Accuracy 63.2   | 28
Abductive Commonsense Reasoning | ANLI (test)                        | Accuracy 77.12  | 23
Commonsense Reasoning           | CSQA (dev)                         | Accuracy 71.31  | 16
Commonsense Reasoning           | PIQA (dev)                         | Accuracy 75.11  | 11

(10 of 14 rows shown.)
