
Convolutional Sequence to Sequence Learning

About

The prevalent approach to sequence to sequence learning maps an input sequence to a variable length output sequence via recurrent neural networks. We introduce an architecture based entirely on convolutional neural networks. Compared to recurrent models, computations over all elements can be fully parallelized during training and optimization is easier since the number of non-linearities is fixed and independent of the input length. Our use of gated linear units eases gradient propagation and we equip each decoder layer with a separate attention module. We outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both WMT'14 English-German and WMT'14 English-French translation at an order of magnitude faster speed, both on GPU and CPU.
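The gated linear units mentioned in the abstract can be sketched in a few lines. This is a minimal NumPy illustration of the GLU of Dauphin et al. (2017), not the authors' implementation: the input is split in half along the channel axis and one half gates the other through a sigmoid.

```python
import numpy as np

def glu(x, axis=-1):
    """Gated linear unit: split x into halves (a, b) along `axis`
    and return a * sigmoid(b)."""
    a, b = np.split(x, 2, axis=axis)
    return a / (1.0 + np.exp(-b))  # equivalent to a * sigmoid(b)

# 4 input channels -> 2 output channels
x = np.array([[1.0, -2.0, 0.0, 3.0]])
y = glu(x)
```

Because the gate is linear in `a`, the gradient has a path that does not pass through a saturating non-linearity, which is what "eases gradient propagation" refers to.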

Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin • 2017

Related benchmarks

Task                        Dataset                            Metric      Result   Rank
Natural Language Inference  SNLI (test)                        Accuracy    83.9     681
Machine Translation         WMT En-De 2014 (test)              BLEU        26.43    379
Machine Translation         WMT En-Fr 2014 (test)              BLEU        41.62    237
Summarization               XSum (test)                        ROUGE-2     11.54    231
Natural Language Inference  SNLI (train)                       Accuracy    91.3     154
Machine Translation         IWSLT De-En 2014 (test)            BLEU        32.31    146
Machine Translation         WMT English-German 2014 (test)     BLEU        26.3     136
Machine Translation         IWSLT En-De 2014 (test)            BLEU        26.73    92
Machine Translation         WMT14 En-De newstest2014 (test)    BLEU        25.2     65
Machine Translation         WMT En-Fr 2014                     BLEU        40.5     56
Showing 10 of 26 rows
