
Complex Transformer: A Framework for Modeling Complex-Valued Sequence

About

While deep learning has received a surge of interest in a variety of fields in recent years, major deep learning models barely use complex numbers. However, speech, signal, and audio data are naturally complex-valued after the Fourier Transform, and studies have shown that complex networks can potentially learn richer representations. In this paper, we propose the Complex Transformer, which incorporates the transformer model as a backbone for sequence modeling; we also develop attention and encoder-decoder networks that operate on complex-valued input. The model achieves state-of-the-art performance on the MusicNet dataset and an In-phase Quadrature (IQ) signal dataset.

Muqiao Yang, Martin Q. Ma, Dongyu Li, Yao-Hung Hubert Tsai, Ruslan Salakhutdinov • 2019
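The core idea of complex-valued attention is that the query/key score becomes a complex matrix product, which must somehow be reduced to real attention weights before mixing the values. A minimal NumPy sketch of one such design is below; the choice of taking the softmax over the real part of the complex scores is an illustrative assumption, not necessarily the exact formulation used in the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax over the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def complex_attention(Q, K, V):
    """Scaled dot-product attention for complex-valued Q, K, V.

    Hypothetical sketch: scores are the complex product Q K^H
    (conjugate transpose), softmax is applied to the real part of the
    scores (one of several possible reductions), and the resulting
    real weights mix the complex-valued rows of V.
    """
    d = Q.shape[-1]
    scores = Q @ K.conj().swapaxes(-1, -2) / np.sqrt(d)  # complex scores
    weights = softmax(scores.real, axis=-1)              # real weights
    return weights @ V                                   # complex output
```

Because the attention weights are real, the output stays complex-valued and can be fed into further complex-valued layers.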

Related benchmarks

Task | Dataset | Result | Rank
Music Transcription | MusicNet | APS 74.22 | 6
Automatic Music Transcription | MusicNet (test) | APS 74.22 | 5
Device ID Classification | IQ wireless signal dataset (test) | Accuracy 59.94 | 4
Sequence Generation | MusicNet (test) | BCE Loss 0.0492 | 3
Sequence Generation | IQ dataset | Cross Entropy Loss 2.2335 | 3
