Parallel Iterative Edit Models for Local Sequence Transduction

About

We present a Parallel Iterative Edit (PIE) model for the problem of local sequence transduction arising in tasks like grammatical error correction (GEC). Recent approaches are based on the popular encoder-decoder (ED) model for sequence-to-sequence learning. The ED model auto-regressively captures full dependency among output tokens but is slow due to sequential decoding. The PIE model does parallel decoding, giving up the advantage of modelling full dependency in the output, yet it achieves accuracy competitive with the ED model for four reasons: (1) predicting edits instead of tokens, (2) labeling sequences instead of generating sequences, (3) iteratively refining predictions to capture dependencies, and (4) factorizing logits over edits and their token arguments to harness pre-trained language models like BERT. Experiments on tasks spanning GEC, OCR correction and spell correction demonstrate that the PIE model is an accurate and significantly faster alternative for local sequence transduction.
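The core idea, predicting per-token edits and applying them iteratively until the output stabilizes, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the edit labels (COPY, DELETE, APPEND, REPLACE) are a simplified subset of the paper's edit space, and `predict_edits` stands in for the BERT-based sequence labeler.

```python
def apply_edits(tokens, edits):
    """Apply one edit label per source token to produce the corrected sequence.

    Simplified edit set:
      ("COPY",)        keep the token unchanged
      ("DELETE",)      drop the token
      ("APPEND", w)    keep the token, then insert w after it
      ("REPLACE", w)   substitute w for the token
    """
    out = []
    for tok, edit in zip(tokens, edits):
        op = edit[0]
        if op == "COPY":
            out.append(tok)
        elif op == "DELETE":
            pass  # token is removed
        elif op == "APPEND":
            out.append(tok)
            out.append(edit[1])
        elif op == "REPLACE":
            out.append(edit[1])
    return out


def iterative_refine(tokens, predict_edits, max_rounds=4):
    """Re-label and re-apply edits until the sequence stops changing.

    `predict_edits` is any callable mapping a token list to a list of
    edit labels of the same length (one per token).
    """
    for _ in range(max_rounds):
        edits = predict_edits(tokens)
        new_tokens = apply_edits(tokens, edits)
        if new_tokens == tokens:
            break  # converged: the labeler predicts all-COPY
        tokens = new_tokens
    return tokens
```

Because every position is labeled independently in a round, decoding is parallel within a round; the iteration loop is what lets later rounds condition on corrections made in earlier ones.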

Abhijeet Awasthi, Sunita Sarawagi, Rasna Goyal, Sabyasachi Ghosh, Vihari Piratla • 2019

Related benchmarks

Task | Dataset | Metric | Result | Rank
Grammatical Error Correction | CoNLL 2014 (test) | F0.5 Score | 61.2 | 207
Grammatical Error Correction | JFLEG | GLEU | 61 | 47
Grammatical Error Correction | JFLEG (test) | GLEU | 60.3 | 45
Grammatical Error Correction | CoNLL M2 14 | Precision | 66.1 | 27
Grammatical Error Correction | BEA 2019 (dev) | F0.5 Score | 34.1 | 19
Grammatical Error Correction | FCGEC | EM | 22.07 | 9
OCR Correction | Finnish newspaper corpus (test) | Whole-Word Accuracy | 87.6 | 5
Spell Correction | Twitter dataset (test) | Whole-Word Accuracy | 67 | 5
Grammatical Error Correction | GMEG-wiki (test) | Precision | 52.1 | 3
Grammatical Error Correction | GMEG-yahoo (test) | Precision | 44.4 | 3
