Structured Training for Neural Network Transition-Based Parsing
About
We present structured perceptron training for neural network transition-based dependency parsing. We learn the neural network representation using a gold corpus augmented by a large number of automatically parsed sentences. Given this fixed network representation, we learn a final layer using the structured perceptron with beam-search decoding. On the Penn Treebank, our parser reaches 94.26% unlabeled and 92.41% labeled attachment accuracy, which to our knowledge is the best accuracy on Stanford Dependencies to date. We also provide in-depth ablative analysis to determine which aspects of our model provide the largest gains in accuracy.
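The abstract describes a two-stage recipe: fix a learned representation, then train a final scoring layer with the structured perceptron using beam-search decoding over parser transitions. The sketch below illustrates that decoding-plus-update loop on a toy arc-standard transition system (SHIFT / LEFT-ARC / RIGHT-ARC). It is purely illustrative: it scores with sparse indicator features and a full-sequence perceptron update rather than the paper's neural activations and early updates, and the feature templates and example sentence are invented for the demo.

```python
# Illustrative sketch only: structured perceptron with beam-search decoding
# over a toy arc-standard transition system. The paper scores transitions
# with a neural network's final layer and uses early updates; here we use
# hand-made indicator features and a full-sequence update for brevity.
from collections import defaultdict

def legal(stack, buffer):
    # Arc-standard: SHIFT needs a non-empty buffer; arcs need two stack items.
    acts = []
    if buffer:
        acts.append("SHIFT")
    if len(stack) >= 2:
        acts.extend(("LEFT", "RIGHT"))
    return acts

def apply_action(stack, buffer, arcs, act):
    stack, buffer, arcs = list(stack), list(buffer), list(arcs)
    if act == "SHIFT":
        stack.append(buffer.pop(0))
    elif act == "LEFT":            # second-from-top becomes dependent of top
        dep = stack.pop(-2)
        arcs.append((stack[-1], dep))   # (head, dependent)
    else:                          # RIGHT: top becomes dependent of second-from-top
        dep = stack.pop()
        arcs.append((stack[-1], dep))
    return stack, buffer, arcs

def features(words, stack, buffer, act):
    # Hypothetical feature templates over the top of stack/buffer.
    s1 = words[stack[-1]] if stack else "<empty>"
    s2 = words[stack[-2]] if len(stack) > 1 else "<empty>"
    b1 = words[buffer[0]] if buffer else "<empty>"
    return [f"s1={s1}:{act}", f"s2={s2}:{act}",
            f"b1={b1}:{act}", f"s1|b1={s1}|{b1}:{act}"]

def beam_decode(words, weights, beam_size=4):
    # Each hypothesis: (score, stack, buffer, arcs, action history).
    beam = [(0.0, [], list(range(len(words))), [], [])]
    while True:
        candidates = []
        for score, stack, buffer, arcs, hist in beam:
            acts = legal(stack, buffer)
            if not acts:                   # terminal configuration
                candidates.append((score, stack, buffer, arcs, hist))
                continue
            for act in acts:
                s = score + sum(weights[f]
                                for f in features(words, stack, buffer, act))
                ns, nb, na = apply_action(stack, buffer, arcs, act)
                candidates.append((s, ns, nb, na, hist + [act]))
        candidates.sort(key=lambda c: -c[0])
        beam = candidates[:beam_size]
        if all(not legal(st, bu) for _, st, bu, _, _ in beam):
            return beam[0]

def perceptron_train(sentences, gold_actions, epochs=5, beam_size=4):
    weights = defaultdict(float)
    for _ in range(epochs):
        for words, gold in zip(sentences, gold_actions):
            pred = beam_decode(words, weights, beam_size)[4]
            if pred != gold:
                # Reward features on the gold sequence, penalize the prediction.
                for seq, delta in ((gold, 1.0), (pred, -1.0)):
                    stack, buffer, arcs = [], list(range(len(words))), []
                    for act in seq:
                        for f in features(words, stack, buffer, act):
                            weights[f] += delta
                        stack, buffer, arcs = apply_action(stack, buffer,
                                                           arcs, act)
    return weights

# Tiny demo (invented sentence; index 0 is an artificial root token).
words = ["<ROOT>", "she", "runs"]
gold = ["SHIFT", "SHIFT", "SHIFT", "LEFT", "RIGHT"]  # she <- runs, ROOT -> runs
trained = perceptron_train([words], [gold])
best = beam_decode(words, trained)
```

After a few epochs on this single separable example, the highest-scoring beam entry recovers the gold transition sequence and the arcs `(2, 1)` ("runs" heads "she") and `(0, 2)` (root heads "runs").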
David Weiss, Chris Alberti, Michael Collins, Slav Petrov • 2015
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Dependency Parsing | English PTB Stanford Dependencies (test) | UAS | 94.26 | 76 |
| Dependency Parsing | WSJ (test) | UAS | 94.26 | 67 |
| Dependency Parsing | Penn Treebank (PTB) Section 23 v2.2 (test) | UAS | 93.99 | 17 |
| POS Tagging | Penn Treebank (PTB) Section 23 v2.2 (test) | POS Accuracy | 97.44 | 15 |
| Dependency Parsing | Treebank Union (test) | Accuracy (News) | 92.62 | 10 |
| Dependency Parsing | Union-Web (test) | UAS | 89.29 | 8 |
| Dependency Parsing | Union-News (test) | UAS | 93.91 | 8 |
| Dependency Parsing | Union-QTB (test) | UAS | 94.17 | 8 |