Complex Transformer: A Framework for Modeling Complex-Valued Sequence
About
While deep learning has received a surge of interest across many fields in recent years, mainstream deep learning models rarely use complex numbers. However, speech, signal, and audio data are naturally complex-valued after a Fourier transform, and studies have shown that complex-valued networks can learn potentially richer representations. In this paper, we propose the Complex Transformer, which uses the transformer model as a backbone for sequence modeling; we also develop attention and encoder-decoder networks that operate on complex-valued input. The model achieves state-of-the-art performance on the MusicNet dataset and an In-phase Quadrature (IQ) signal dataset.
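The core idea of attention over complex-valued input can be sketched with NumPy's native complex arrays. This is a minimal illustration, not the paper's exact formulation: it assumes attention scores come from the Hermitian inner product of queries and keys, and that the softmax is applied to the real part of those scores (one common design choice for complex networks; the paper's own attention may differ).

```python
import numpy as np

def complex_attention(Q, K, V, eps=1e-9):
    """Scaled dot-product attention generalized to complex inputs.

    A sketch under stated assumptions, not the paper's method:
    scores use the Hermitian inner product Q K^H, and the softmax
    acts on the real part of the scores.
    """
    d_k = Q.shape[-1]
    # Hermitian (conjugate-transpose) inner product between queries and keys
    scores = Q @ K.conj().T / np.sqrt(d_k)
    # Softmax over the real part of the scores (hypothetical choice)
    w = np.exp(scores.real - scores.real.max(axis=-1, keepdims=True))
    w = w / (w.sum(axis=-1, keepdims=True) + eps)
    # Attention-weighted sum of the complex-valued inputs
    return w.astype(complex) @ V

# Usage: three time steps with a 4-dimensional complex representation,
# e.g. STFT frames of an audio signal.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
out = complex_attention(X, X, X)
print(out.shape)
```

The output has the same shape and complex dtype as the input, so such a block can be stacked inside an encoder-decoder architecture like the one the paper describes.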
Muqiao Yang, Martin Q. Ma, Dongyu Li, Yao-Hung Hubert Tsai, Ruslan Salakhutdinov • 2019
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Music Transcription | MusicNet | APS | 74.22 | 6 |
| Automatic Music Transcription | MusicNet (test) | APS | 74.22 | 5 |
| Device ID Classification | IQ wireless signal dataset (test) | Accuracy | 59.94 | 4 |
| Sequence Generation | MusicNet (test) | BCE Loss | 0.0492 | 3 |
| Sequence Generation | IQ dataset | Cross Entropy Loss | 2.2335 | 3 |