
Time-aware Large Kernel Convolutions

About

To date, most state-of-the-art sequence modeling architectures use attention to build generative models for language-based tasks. Some of these models use all the available sequence tokens to generate an attention distribution, which results in time complexity of $O(n^2)$. Alternatively, they utilize depthwise convolutions with softmax-normalized kernels of size $k$ acting as a limited-window self-attention, resulting in time complexity of $O(k{\cdot}n)$. In this paper, we introduce Time-aware Large Kernel (TaLK) Convolutions, a novel adaptive convolution operation that learns to predict the size of a summation kernel instead of using a fixed-sized kernel matrix. This method yields a time complexity of $O(n)$, effectively making the sequence encoding process linear in the number of tokens. We evaluate the proposed method on large-scale standard machine translation, abstractive summarization and language modeling datasets and show that TaLK Convolutions constitute an efficient improvement over other attention/convolution-based approaches.

Vasileios Lioutas, Yuhong Guo • 2020
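The adaptive summation kernel admits a compact illustration. Below is a minimal PyTorch sketch of the idea, not the authors' implementation; the names (`talk_conv`, `offset_proj`, `max_left`, `max_right`) are illustrative assumptions. For each output position, a small projection predicts how far the summation window extends to the left and right, and the windowed mean is computed in $O(n)$ total via a prefix-sum (summed-area) table. Hard rounding of the predicted offsets stands in here for the differentiable interpolation a trainable version would require.

```python
# Minimal sketch of a time-aware adaptive-window convolution (not the
# authors' code). All names here are illustrative assumptions.
import torch
import torch.nn as nn

def talk_conv(x, offset_proj, max_left=3, max_right=3):
    """x: (batch, time, channels). offset_proj: nn.Linear(channels, 2).
    Each position i averages the tokens in an adaptively sized window
    [i - left_i, i + right_i], computed in O(n) via prefix sums."""
    B, T, C = x.shape
    # Predict relative window sizes in [0, 1], then scale to token offsets.
    rel = torch.sigmoid(offset_proj(x))               # (B, T, 2)
    left = (rel[..., 0] * max_left).round().long()    # tokens to the left
    right = (rel[..., 1] * max_right).round().long()  # tokens to the right
    # Prefix sums with a leading zero row: S[:, t] = sum of x[:, :t].
    S = torch.cat([x.new_zeros(B, 1, C), x.cumsum(dim=1)], dim=1)
    pos = torch.arange(T, device=x.device).unsqueeze(0)  # (1, T)
    lo = (pos - left).clamp(min=0)        # window start (inclusive)
    hi = (pos + right).clamp(max=T - 1)   # window end (inclusive)
    # Window sum over [lo, hi] is S[hi + 1] - S[lo]; one gather each.
    top = S.gather(1, (hi + 1).unsqueeze(-1).expand(-1, -1, C))
    bot = S.gather(1, lo.unsqueeze(-1).expand(-1, -1, C))
    return (top - bot) / (hi - lo + 1).unsqueeze(-1).float()

# Example: encode a batch of 2 sequences of length 10 with 16 channels.
proj = nn.Linear(16, 2)
y = talk_conv(torch.randn(2, 10, 16), proj)  # -> (2, 10, 16)
```

For autoregressive (causal) use, setting `max_right=0` restricts every window to the current and past tokens only.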

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Language Modeling | WikiText-103 (test) | Perplexity: 23.3 | 524 |
| Machine Translation | WMT En-De 2014 (test) | BLEU: 29.6 | 379 |
| Abstractive Text Summarization | CNN/Daily Mail (test) | ROUGE-L: 36.81 | 169 |
| Machine Translation | WMT En-Fr newstest 2014 (test) | BLEU: 43.2 | 46 |
| Machine Translation | IWSLT de-en (test) | BLEU: 35.5 | 13 |

Other info

Code
