
Sequence-to-Sequence Piano Transcription with Transformers

About

Automatic Music Transcription has seen significant progress in recent years by training custom deep neural networks on large datasets. However, these models have required extensive domain-specific design of network architectures, input/output representations, and complex decoding schemes. In this work, we show that equivalent performance can be achieved using a generic encoder-decoder Transformer with standard decoding methods. We demonstrate that the model can learn to translate spectrogram inputs directly to MIDI-like output events for several transcription tasks. This sequence-to-sequence approach simplifies transcription by jointly modeling audio features and language-like output dependencies, thus removing the need for task-specific architectures. These results point toward possibilities for creating new Music Information Retrieval models by focusing on dataset creation and labeling rather than custom model design.
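The MIDI-like output events mentioned above can be illustrated with a small serialization sketch. The vocabulary below (`shift`, `on`, `off` tokens and a 10 ms time step) is an assumption chosen for illustration, not the paper's exact specification; it shows how note annotations become a flat, language-like token sequence that a decoder could emit.

```python
# Hedged sketch: serialize piano notes into MIDI-like output events,
# in the spirit of the seq2seq transcription targets described above.
# The exact vocabulary and time resolution are assumptions.

def notes_to_events(notes, time_step=0.01):
    """Convert (onset, offset, pitch) notes into a flat event sequence.

    Events are strings: 'shift:<n>' advances time by n steps,
    'on:<pitch>' starts a note, 'off:<pitch>' ends one.
    """
    # Build a timeline of (time, kind, pitch); note-offs (kind 0) sort
    # before note-ons (kind 1) at the same instant, so a re-struck
    # pitch is closed before it is reopened.
    timeline = []
    for onset, offset, pitch in notes:
        timeline.append((onset, 1, pitch))   # note-on
        timeline.append((offset, 0, pitch))  # note-off
    timeline.sort()

    events, current = [], 0.0
    for t, kind, pitch in timeline:
        steps = round((t - current) / time_step)
        if steps > 0:
            events.append(f"shift:{steps}")
            current += steps * time_step
        events.append(("on" if kind else "off") + f":{pitch}")
    return events
```

For example, two overlapping notes (C4 from 0.0–0.5 s, E4 from 0.25–0.75 s) serialize to `on:60, shift:25, on:64, shift:25, off:60, shift:25, off:64`, a sequence the Transformer decoder can model token by token with standard decoding.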

Curtis Hawthorne, Ian Simon, Rigel Swavely, Ethan Manilow, Jesse Engel • 2021

Related benchmarks

Task                           Dataset                 Result                             Rank
Automatic Piano Transcription  MAESTRO v3.0.0 (test)   Note F1 Score (w/ Offset): 83.94   19
Music Transcription            MAESTRO                 Frame F1: 66                       4
