Exploring Unsupervised Pretraining and Sentence Structure Modelling for Winograd Schema Challenge

About

The Winograd Schema Challenge (WSC) was proposed as an AI-hard problem for testing computers' intelligence in commonsense representation and reasoning. This paper presents the new state-of-the-art on WSC, achieving an accuracy of 71.1%. We demonstrate that the leading performance benefits from jointly modelling sentence structures, utilizing knowledge learned from cutting-edge pretraining models, and performing fine-tuning. Our detailed analyses show that fine-tuning is critical to the performance, but it helps more on the simpler associative problems; modelling sentence dependency structures, by contrast, consistently helps on the harder non-associative subset of WSC. Analysis also shows that larger fine-tuning datasets yield better performance, suggesting the potential benefit of annotating more Winograd schema sentences in future work.
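The abstract does not spell out the evaluation mechanics, but WSC systems in this line of work are typically evaluated by substituting each candidate antecedent for the pronoun and letting the model score the resulting sentences. A minimal sketch of that candidate-substitution protocol, where the toy scorer is purely illustrative and stands in for a real scorer such as a pretrained language model's log-probability (it is NOT the paper's model):

```python
# Sketch of the candidate-substitution protocol commonly used for WSC:
# fill the pronoun slot with each candidate antecedent, score each
# resulting sentence, and pick the highest-scoring candidate.

def resolve(schema: str, pronoun_slot: str, candidates, scorer):
    """Fill the pronoun slot with each candidate; return the best-scoring one."""
    filled = {c: schema.replace(pronoun_slot, c, 1) for c in candidates}
    return max(candidates, key=lambda c: scorer(filled[c]))

# Toy usage with a crude word-overlap scorer (a stand-in, not the paper's model):
schema = "The trophy doesn't fit into the suitcase because [it] is too big."
answer = resolve(schema, "[it]", ["the trophy", "the suitcase"],
                 scorer=lambda s: s.count("trophy"))
print(answer)  # -> the trophy
```

In a real system the scorer would be the fine-tuned pretrained model discussed in the paper; the protocol itself is independent of which scorer is plugged in.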

Yu-Ping Ruan, Xiaodan Zhu, Zhen-Hua Ling, Zhan Shi, Quan Liu, Si Wei • 2019

Related benchmarks

Task                   | Dataset                   | Result        | Rank
Pronoun Disambiguation | Winograd Schema Challenge | Accuracy 71.1 | 27
Pronoun Disambiguation | WSC (test)                | --            | 14
