# Converting Transformers into DGNNs Form

## About
Recent advances in deep learning have established Transformer architectures as the predominant modeling paradigm. Central to the success of Transformers is the self-attention mechanism, which scores the similarity between query and key matrices to modulate a value matrix. This operation bears a striking resemblance to digraph convolution, prompting an investigation into whether digraph convolution could serve as an alternative to self-attention. In this study, we formalize this concept by introducing a synthetic unitary digraph convolution based on the digraph Fourier transform. The resulting model, which we term Converter, effectively converts a Transformer into a Directed Graph Neural Network (DGNN) form. We have tested Converter on the Long-Range Arena benchmark, long document classification, and DNA sequence-based taxonomy classification. Our experimental results demonstrate that Converter achieves superior performance while maintaining computational efficiency and architectural simplicity, which establishes it as a lightweight yet powerful Transformer variant.
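The analogy between self-attention and digraph convolution can be made concrete: the row-normalized attention matrix is generally asymmetric, so it can be read as the weighted adjacency matrix of a dense directed graph over the tokens, and the attention output is one step of message passing on that digraph. Below is a minimal NumPy sketch of this correspondence; the variable names (`seq_len`, `d_model`, `Wq`, etc.) are standard Transformer conventions and illustrative only, not details of the Converter model itself, which replaces this dense attention with a unitary digraph convolution built from the digraph Fourier transform.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8

# Token features and illustrative projection matrices (random placeholders)
X = rng.standard_normal((seq_len, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) for _ in range(3))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

Q, K, V = X @ Wq, X @ Wk, X @ Wv

# Scaled dot-product attention scores; A[i, j] is generally != A[j, i],
# so A is the weighted adjacency matrix of a *directed* graph on the tokens.
A = softmax(Q @ K.T / np.sqrt(d_model))

# Standard self-attention output: one dense digraph convolution step
out_attention = A @ V

# Equivalent per-node message-passing view: node i aggregates the
# transformed features of its in-neighbors j, weighted by A[i, j].
out_graph = np.stack([
    sum(A[i, j] * V[j] for j in range(seq_len))
    for i in range(seq_len)
])

assert np.allclose(out_attention, out_graph)
```

Softmax makes each row of `A` a probability distribution, i.e. the in-edge weights of every node sum to one, which is exactly a (row-stochastic) normalized adjacency as used in graph convolution.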
## Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Long-sequence modeling | Long Range Arena (LRA) v1 (test) | ListOps: 60.38 | 66 |
| DNA Sequence-based Taxonomy Classification | Ensembl (B/S) 3 (test) | Accuracy: 84.59 | 9 |
| DNA Sequence-based Taxonomy Classification | Ensembl (M/R) 3 (test) | Accuracy: 59.49 | 9 |
| Long Document Classification | LongDoc 16K (test) | Accuracy: 81.77 | 9 |
| Long Document Classification | LongDoc 32K (test) | Accuracy: 82.34 | 9 |