
EIT: Enhanced Interactive Transformer

About

Two principles, the complementary principle and the consensus principle, are widely acknowledged in the literature on multi-view learning. However, the current design of multi-head self-attention, an instance of multi-view learning, prioritizes complementarity while ignoring consensus. To address this problem, we propose an enhanced multi-head self-attention (EMHA). First, to satisfy the complementary principle, EMHA removes the one-to-one mapping constraint between queries and keys in multiple subspaces and allows each query to attend to multiple keys. On top of that, we develop a method to fully encourage consensus among heads by introducing two interaction models, namely inner-subspace interaction and cross-subspace interaction. Extensive experiments on a wide range of language tasks (e.g., machine translation, abstractive summarization, grammar error correction, and language modeling) show its superiority, with a very modest increase in model size. Our code is available at: https://github.com/zhengkid/EIT-Enhanced-Interactive-Transformer.
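The abstract only describes EMHA at a high level. As an illustration of the many-to-many idea (not the paper's actual implementation), the sketch below lets every query subspace attend to every key/value subspace, with a plain average over subspace pairs standing in for the paper's learned inner-subspace and cross-subspace interaction; all names and shapes here are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def enhanced_mha(X, Wq, Wk, Wv, n_heads):
    """Many-to-many multi-head attention sketch.

    Standard MHA pairs query head i only with key/value head i.
    Here each query head attends to the keys of *every* subspace,
    and the per-pair outputs are averaged (a crude stand-in for
    EMHA's learned inner-/cross-subspace interaction).
    """
    T, d = X.shape
    dh = d // n_heads
    Q = (X @ Wq).reshape(T, n_heads, dh)
    K = (X @ Wk).reshape(T, n_heads, dh)
    V = (X @ Wv).reshape(T, n_heads, dh)
    out = np.zeros((T, n_heads, dh))
    for i in range(n_heads):           # query subspace
        for j in range(n_heads):       # key/value subspace
            scores = softmax(Q[:, i] @ K[:, j].T / np.sqrt(dh))
            out[:, i] += scores @ V[:, j]
    return (out / n_heads).reshape(T, d)

# Tiny demo with random weights (hypothetical shapes: 5 tokens, model dim 8, 2 heads)
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
Y = enhanced_mha(X, Wq, Wk, Wv, n_heads=2)
```

With `n_heads=1` this reduces to ordinary single-head scaled dot-product attention; the extra cross-subspace terms are what relax the one-to-one query-key mapping described in the abstract.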

Tong Zheng, Bei Li, Huiwen Bao, Tong Xiao, Jingbo Zhu • 2022

Related benchmarks

Task                            Dataset                          Metric       Result   Rank
Language Modeling               WikiText-103 (test)              Perplexity   20       524
Abstractive Text Summarization  CNN/Daily Mail (test)            ROUGE-L      38.33    169
Machine Translation             WMT EN-DE 2017 (test)            BLEU Score   0.2958   46
Machine Translation             WMT En-Ro 2016 (test)            BLEU         35.4     39
Grammar Error Correction        CoNLL (test)                     Precision    69.98    5
Brain Disease Diagnosis         ABIDE CC200 ROI atlas (test)     AUROC        82.9     4
Machine Translation             WMT De-En 2017 (test)            BLEU         35.62    4
