
Explaining and Improving BERT Performance on Lexical Semantic Change Detection

About

Type- and token-based embedding architectures are still competing in lexical semantic change detection. The recent success of type-based models in SemEval-2020 Task 1 has raised the question of why the success of token-based models on a variety of other NLP tasks does not translate to our field. We investigate the influence of a range of variables on clusterings of BERT vectors and show that BERT's low performance is largely due to orthographic information about the target word, which is encoded even in the higher layers of its representations. By reducing the influence of orthography we considerably improve BERT's performance.
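The token-based pipeline the abstract refers to clusters contextualized usage vectors of a target word and compares the cluster distributions across two time periods. Below is a minimal, self-contained sketch of that idea: a small k-means implementation stands in for the clustering step, and Jensen-Shannon distance compares the periods. The vectors here are synthetic stand-ins for BERT embeddings; the function names, cluster count, and distance choice are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Very small k-means: returns a cluster label per row of X."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def change_score(vecs_t1, vecs_t2, k=3):
    """Cluster pooled usage vectors from both periods, then measure how
    differently the two periods distribute over the clusters (JSD)."""
    X = np.vstack([vecs_t1, vecs_t2])
    labels = kmeans(X, k)
    n1 = len(vecs_t1)
    p = np.bincount(labels[:n1], minlength=k) / n1
    q = np.bincount(labels[n1:], minlength=k) / len(vecs_t2)
    m = (p + q) / 2

    def kl(a, b):
        mask = a > 0
        return (a[mask] * np.log2(a[mask] / b[mask])).sum()

    # Jensen-Shannon distance, bounded in [0, 1] with log base 2
    return np.sqrt((kl(p, m) + kl(q, m)) / 2)
```

A word whose usages occupy the same clusters in both periods gets a score near 0; a word that gains usages in a new region of embedding space (e.g., a new sense) scores higher.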

Severin Laicher, Sinan Kurtyigit, Dominik Schlechtweg, Jonas Kuhn, Sabine Schulte im Walde · 2021

Related benchmarks

Task                               | Dataset                                | Result                      | Rank
Lexical Semantic Change Detection  | SemEval Task 1 Subtask 2 English 2020  | Spearman Correlation: 0.571 | 54
Lexical Semantic Change Detection  | SemEval (test)                         | Accuracy (En): 57.1         | 8
