Robust Multilingual Part-of-Speech Tagging via Adversarial Training

About

Adversarial training (AT) is a powerful regularization method for neural networks, aiming to achieve robustness to input perturbations. Yet, the specific effects of the robustness obtained from AT are still unclear in the context of natural language processing. In this paper, we propose and analyze a neural POS tagging model that exploits AT. In our experiments on the Penn Treebank WSJ corpus and the Universal Dependencies (UD) dataset (27 languages), we find that AT not only improves the overall tagging accuracy, but also 1) effectively prevents over-fitting in low-resource languages and 2) boosts tagging accuracy for rare and unseen words. We also demonstrate that 3) the tagging improvements from AT carry over to the downstream task of dependency parsing, and that 4) AT helps the model learn cleaner word representations. 5) The proposed AT model is generally effective in different sequence labeling tasks. These positive results motivate further use of AT for natural language tasks.

Michihiro Yasunaga, Jungo Kasai, Dragomir Radev • 2017
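
For text, AT is typically applied at the embedding layer, since discrete tokens cannot be perturbed directly: compute the gradient of the tagging loss with respect to the word embeddings, add a small worst-case perturbation in that direction, and train on the perturbed input alongside the clean one. The PyTorch sketch below illustrates that general recipe only; it is not the authors' released code, and the `BiLSTMTagger` architecture, the `epsilon` value, and the batch-level gradient normalization are all simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BiLSTMTagger(nn.Module):
    """Minimal BiLSTM tagger. It accepts precomputed embeddings so that
    adversarial perturbations can be injected at the embedding layer.
    (Illustrative architecture, not the paper's exact configuration.)"""
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=200, num_tags=45):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids=None, embeddings=None):
        emb = self.embed(token_ids) if embeddings is None else embeddings
        states, _ = self.lstm(emb)
        return self.out(states)  # (batch, seq_len, num_tags)

def tagging_loss(logits, tags):
    return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                           tags.reshape(-1))

def adversarial_loss(model, token_ids, tags, epsilon=0.02):
    # 1) Clean forward pass; keep the graph so the loss can be
    #    differentiated with respect to the embeddings themselves.
    emb = model.embed(token_ids)
    loss_clean = tagging_loss(model(embeddings=emb), tags)
    grad, = torch.autograd.grad(loss_clean, emb, retain_graph=True)

    # 2) Worst-case perturbation: the gradient direction, scaled to an
    #    epsilon-radius L2 ball (normalized over the whole batch here
    #    for brevity; per-example normalization is also common).
    r_adv = epsilon * grad.detach() / (grad.detach().norm() + 1e-12)

    # 3) Adversarial forward pass on the perturbed embeddings; the
    #    perturbation is treated as a constant (no grad through r_adv).
    loss_adv = tagging_loss(model(embeddings=emb + r_adv), tags)

    # 4) Train on the sum of clean and adversarial losses
    #    (equal weighting is a simplifying choice).
    return loss_clean + loss_adv
```

In a training loop, `adversarial_loss(model, token_ids, tags).backward()` replaces the plain loss computation; the perturbation is recomputed from the current gradients on every batch, so it always targets the directions in which the model is currently most sensitive.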

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Named Entity Recognition | CoNLL 2003 (test) | F1 Score | 91.56 | 539 |
| Named Entity Recognition | OntoNotes 5.0 (test) | F1 Score | 86.99 | 90 |
| Chunking | CoNLL 2000 (test) | F1 Score | 95.25 | 88 |
| Part-of-Speech Tagging | WSJ (test) | Accuracy | 97.58 | 51 |
| CCG Supertagging | CCGBank (test) | Accuracy | 94.1 | 35 |
| Part-of-Speech Tagging | UD Average 1.2 (test) | Accuracy | 96.65 | 22 |
| Part-of-Speech Tagging | Wall Street Journal (WSJ) section 23 (test) | Accuracy | 97.58 | 12 |
| POS Tagging | WSJ (Section 23) | Mean Accuracy | 97.5 | 4 |
| Part-of-Speech Tagging | UD v2.2 (test) | POS Accuracy (cs) | 98.42 | 3 |
