
Using Similarity Measures to Select Pretraining Data for NER

About

Word vectors and Language Models (LMs) pretrained on a large amount of unlabelled data can dramatically improve various Natural Language Processing (NLP) tasks. However, how to measure the similarity between pretraining data and target task data, and the impact of that similarity, are usually left to intuition. We propose three cost-effective measures that quantify different aspects of similarity between source pretraining and target task data. We demonstrate that these measures are good predictors of the usefulness of pretrained models for Named Entity Recognition (NER) over 30 data pairs. Results also suggest that pretrained LMs are more effective and more predictable than pretrained word vectors, but pretrained word vectors are better when the pretraining data is dissimilar to the target data.
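The three proposed measures are not spelled out on this page. As a rough illustration of the general idea, one simple lexical similarity between a pretraining corpus and a target task corpus is the fraction of the target vocabulary covered by the source data; a minimal sketch (the function name and the example corpora are hypothetical, not taken from the paper):

```python
def target_vocab_coverage(source_tokens, target_tokens):
    """Fraction of the target corpus vocabulary that also appears
    in the source (pretraining) corpus. Returns a value in [0, 1]."""
    source_vocab = set(source_tokens)
    target_vocab = set(target_tokens)
    if not target_vocab:
        return 0.0
    return len(target_vocab & source_vocab) / len(target_vocab)


# Toy corpora for illustration only.
source = "the patient reported severe headache and nausea".split()
target = "patient complained of headache and dizziness".split()

# 3 of the 6 target vocabulary items appear in the source corpus.
print(target_vocab_coverage(source, target))  # 0.5
```

A measure of this kind is cheap to compute on raw text, which matches the paper's goal of predicting the usefulness of a pretrained model before committing to training.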

Xiang Dai, Sarvnaz Karimi, Ben Hachey, Cecile Paris • 2019

Related benchmarks

Task                      Dataset     Result            Rank
Named Entity Recognition  CoNLL 2003  F1 Score: 89.78   86
Named Entity Recognition  CADEC       F1 Score: 70.46   9
Named Entity Recognition  JNLPBA      F1 Score: 74.29   4
Named Entity Recognition  ScienceIE   F1 Score: 42.07   4
Named Entity Recognition  WetLab      F1 Score: 79.62   4
Named Entity Recognition  CRAFT       F1 Score: 0.7545  4

Other info

Code
