
MELM: Data Augmentation with Masked Entity Language Modeling for Low-Resource NER

About

Data augmentation is an effective solution to data scarcity in low-resource scenarios. However, when applied to token-level tasks such as NER, data augmentation methods often suffer from token-label misalignment, which leads to unsatisfactory performance. In this work, we propose Masked Entity Language Modeling (MELM) as a novel data augmentation framework for low-resource NER. To alleviate the token-label misalignment issue, we explicitly inject NER labels into the sentence context, so that the fine-tuned MELM can predict masked entity tokens by explicitly conditioning on their labels. As a result, MELM generates high-quality augmented data with novel entities, which provides rich entity regularity knowledge and boosts NER performance. When training data from multiple languages are available, we also integrate MELM with code-mixing for further improvement. We demonstrate the effectiveness of MELM on monolingual, cross-lingual and multilingual NER across various low-resource levels. Experimental results show that MELM yields substantial improvements over the baseline methods.
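The label-injection step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the marker format (e.g. `<B-PER> ... </B-PER>`), the masking rate, and the function name are all assumptions chosen for clarity. The idea is that each entity token is wrapped in markers derived from its BIO tag and (with some probability) replaced by a mask token, so a fine-tuned masked LM can regenerate novel entities conditioned on the injected labels.

```python
import random

def melm_mask(tokens, tags, mask_token="[MASK]", mask_rate=0.7, seed=None):
    """Sketch of label-aware entity masking (illustrative, not the paper's code).

    tokens: list of word tokens in the sentence.
    tags:   parallel list of BIO tags ("O", "B-PER", "I-LOC", ...).
    Entity tokens are wrapped in label markers and masked with probability
    `mask_rate`; non-entity tokens pass through unchanged.
    """
    rng = random.Random(seed)
    out = []
    for tok, tag in zip(tokens, tags):
        if tag == "O":
            out.append(tok)  # context tokens are kept as-is
        else:
            # Inject the NER label into the context around the (possibly masked) token,
            # so the masked LM conditions on the label when filling in the blank.
            filler = mask_token if rng.random() < mask_rate else tok
            out.append(f"<{tag}> {filler} </{tag}>")
    return " ".join(out)
```

For example, with `mask_rate=1.0` the sentence "John lives in Paris" with tags `["B-PER", "O", "O", "B-LOC"]` becomes `<B-PER> [MASK] </B-PER> lives in <B-LOC> [MASK] </B-LOC>`, and the model's fill-in predictions yield augmented sentences with new entities whose labels are already aligned.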

Ran Zhou, Xin Li, Ruidan He, Lidong Bing, Erik Cambria, Luo Si, Chunyan Miao · 2021

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Named Entity Recognition | CoNLL 03 | F1 (Entity): 81.9 | 102 |
| Named Entity Recognition | OntoNotes | F1-score: 54.97 | 91 |
| Complex Named Entity Recognition | MultiCoNER (test) | Score (Bn): 30.27 | 76 |
| Named Entity Recognition | CoNLL NER 2002/2003 (test) | German F1 Score: 80.33 | 59 |
| Named Entity Recognition | MultiCoNER | F1 Score: 0.4901 | 48 |
| Named Entity Recognition | NCBI | F1 Score: 75.11 | 26 |
| Named Entity Recognition | bc2gm | Entity F1: 56.83 | 21 |
| Named Entity Recognition | TDMSci | F1 Score: 57.8 | 10 |
| Named Entity Recognition | CoNLL | F1 Score: 0.8351 | 10 |
| Named Entity Recognition | MultiCoNER 1.0 (full) | Accuracy (En): 66.27 | 5 |
