
Robust Named Entity Recognition with Truecasing Pretraining

About

Although modern named entity recognition (NER) systems show impressive performance on standard datasets, they perform poorly on noisy data. In particular, capitalization is a strong signal for entities in many languages, and even state-of-the-art models overfit to this feature, with drastically lower performance on uncapitalized text. In this work, we address the robustness of NER systems on data with noisy or uncertain casing by pretraining a truecaser, a model that predicts the casing of text, on unlabeled data. The pretrained truecaser is combined with a standard BiLSTM-CRF model for NER by appending its output distributions to the character embeddings. In experiments over several datasets of varying domain and casing quality, we show that our model improves performance on uncased text, even adding value to uncased BERT embeddings. Our method achieves a new state of the art on the WNUT17 shared task dataset.
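The feature combination described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `truecaser` here is a hypothetical stand-in for the pretrained model, returning a per-character probability of being uppercase, and the embedding dimension is arbitrary. The key step is concatenating the truecaser's output distribution onto each character embedding before the BiLSTM-CRF consumes it.

```python
import numpy as np

def truecase_distributions(text, truecaser):
    # truecaser: callable mapping lowercased text to per-character
    # P(uppercase) values -- a hypothetical stand-in for the pretrained model.
    return np.array([[p, 1.0 - p] for p in truecaser(text.lower())])

def augment_char_embeddings(char_embs, case_dists):
    # Append the 2-dim casing distribution to each character embedding,
    # yielding the augmented features fed to the BiLSTM-CRF.
    return np.concatenate([char_embs, case_dists], axis=-1)

# Toy demonstration with a dummy truecaser that only flags the first character.
dummy_truecaser = lambda s: [1.0 if i == 0 else 0.0 for i in range(len(s))]
text = "obama"
embs = np.random.randn(len(text), 8)          # 8-dim character embeddings
dists = truecase_distributions(text, dummy_truecaser)
feats = augment_char_embeddings(embs, dists)  # shape (5, 10)
```

Because the casing signal arrives as a distribution rather than a hard binary feature, the downstream tagger can remain useful when the input casing is noisy or absent.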

Stephen Mayhew, Nitish Gupta, Dan Roth • 2019

Related benchmarks

Task                     | Dataset                  | Metric   | Result | Rank
Named Entity Recognition | WNUT 2017 (test)         | F1 Score | 46.9   | 63
Named Entity Recognition | CoNLL Cased (test)       | F1 Score | 91.2   | 12
Named Entity Recognition | OntoNotes Cased (test)   | F1 Score | 88.1   | 10
Named Entity Recognition | CoNLL Uncased (test)     | F1 Score | 84.5   | 3
Named Entity Recognition | OntoNotes Uncased (test) | F1 Score | 81.1   | 3
