# EIT: Enhanced Interactive Transformer

## About
Two principles, the complementary principle and the consensus principle, are widely acknowledged in the multi-view learning literature. However, the current design of multi-head self-attention, an instance of multi-view learning, prioritizes complementarity while ignoring consensus. To address this problem, we propose an enhanced multi-head self-attention (EMHA). First, to satisfy the complementary principle, EMHA removes the one-to-one mapping constraint between queries and keys in multiple subspaces and allows each query to attend to multiple keys. On top of that, we develop a method to fully encourage consensus among heads by introducing two interaction models, namely inner-subspace interaction and cross-subspace interaction. Extensive experiments on a wide range of language tasks (e.g., machine translation, abstractive summarization, grammar error correction, and language modeling) show its superiority, with a very modest increase in model size. Our code is available at: https://github.com/zhengkid/EIT-Enhanced-Interactive-Transformer.
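To make the "one-to-one mapping constraint" concrete: in standard multi-head attention, the queries of head *i* only ever score against the keys of the same head *i*. The sketch below contrasts that with one plausible reading of EMHA's relaxation, where each query attends to the keys of every subspace (yielding h×h attention maps instead of h). This is an illustrative assumption, not the paper's exact formulation; the function names and shapes are ours.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def standard_mha_scores(Q, K):
    # Q, K: (heads, seq, d_head). One-to-one mapping: head i's queries
    # are scored only against head i's keys -> (heads, seq, seq) maps.
    d = Q.shape[-1]
    return softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(d))

def cross_subspace_scores(Q, K):
    # Hypothetical sketch of the relaxed constraint: the queries of
    # subspace i attend to the keys of *every* subspace j, giving
    # (heads, heads, seq, seq) attention maps.
    d = Q.shape[-1]
    scores = np.einsum('ind,jmd->ijnm', Q, K) / np.sqrt(d)
    return softmax(scores, axis=-1)

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 5, 8))  # 4 heads, length-5 sequence, d_head=8
K = rng.normal(size=(4, 5, 8))
print(standard_mha_scores(Q, K).shape)    # (4, 5, 5)
print(cross_subspace_scores(Q, K).shape)  # (4, 4, 5, 5)
```

The h×h maps are where the paper's interaction models would then operate: inner-subspace interaction mixing maps that share a subspace, and cross-subspace interaction mixing across them.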
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText-103 (test) | Perplexity | 20 | 524 |
| Abstractive Text Summarization | CNN/Daily Mail (test) | ROUGE-L | 38.33 | 169 |
| Machine Translation | WMT EN-DE 2017 (test) | BLEU Score | 0.2958 | 46 |
| Machine Translation | WMT En-Ro 2016 (test) | BLEU | 35.4 | 39 |
| Grammar Error Correction | CoNLL (test) | Precision | 69.98 | 5 |
| Brain Disease Diagnosis | ABIDE CC200 ROI atlas (test) | AUROC | 82.9 | 4 |
| Machine Translation | WMT De-En 2017 (test) | BLEU | 35.62 | 4 |