
A Surprisingly Robust Trick for Winograd Schema Challenge

About

The Winograd Schema Challenge (WSC) dataset WSC273 and its inference counterpart WNLI are popular benchmarks for natural language understanding and commonsense reasoning. In this paper, we show that the performance of three language models on WSC273 strongly improves when fine-tuned on a similar pronoun disambiguation problem dataset (denoted WSCR). We additionally generate a large unsupervised WSC-like dataset. By fine-tuning the BERT language model both on the introduced and on the WSCR dataset, we achieve overall accuracies of 72.5% and 74.7% on WSC273 and WNLI, improving the previous state-of-the-art solutions by 8.8% and 9.6%, respectively. Furthermore, our fine-tuned models are also consistently more robust on the "complex" subsets of WSC273, introduced by Trichelair et al. (2018).
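The fine-tuned models resolve a Winograd schema by substituting each candidate antecedent for the ambiguous pronoun and letting the language model score the resulting sentences. A minimal sketch of that candidate-substitution scoring loop is below; `lm_log_prob` is a hypothetical stand-in for a real scorer (with BERT, this would be the sum of masked-token log-probabilities over the candidate's tokens), and the function names are illustrative, not from the paper's code.

```python
import re

def substitute_candidates(sentence, pronoun, candidates):
    """Replace the first occurrence of the target pronoun (as a whole word)
    with each candidate antecedent, yielding one full sentence per candidate."""
    pattern = re.compile(r"\b" + re.escape(pronoun) + r"\b")
    return [pattern.sub(candidate, sentence, count=1) for candidate in candidates]

def resolve(sentence, pronoun, candidates, lm_log_prob):
    """Pick the candidate whose substituted sentence the language model prefers.

    lm_log_prob: any callable mapping a sentence to a scalar score
    (a hypothetical stand-in for BERT's masked-LM log-probability)."""
    variants = substitute_candidates(sentence, pronoun, candidates)
    scores = [lm_log_prob(variant) for variant in variants]
    return candidates[scores.index(max(scores))]
```

For example, on the schema "The trophy doesn't fit in the suitcase because it is too big.", the two variants substitute "the trophy" and "the suitcase" for "it", and `resolve` returns whichever substitution the scorer assigns the higher score.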

Vid Kocijan, Ana-Maria Cretu, Oana-Maria Camburu, Yordan Yordanov, Thomas Lukasiewicz • 2019

Related benchmarks

Task                        Dataset                                  Metric    Result  Rank
Natural Language Inference  WNLI                                     Accuracy  74.7    40
Coreference Resolution      Winograd WSC273 (test)                   Accuracy  71.4    34
Pronoun Disambiguation      Winograd Schema Challenge                Accuracy  72.5    27
Common Sense Reasoning      WSC273                                   Accuracy  72.5    26
Natural Language Inference  WNLI (test)                              Accuracy  71.9    25
Commonsense Reasoning       Winograd Schema Challenge (WSC) (test)   Accuracy  72.2    17
Pronoun Resolution          DPR                                      Accuracy  0.848   14
Pronoun Disambiguation      WSC (test)                               --        --      14

Other info

Code
