Multi-resolution Time-Series Transformer for Long-term Forecasting
About
The performance of transformers for time-series forecasting has improved significantly. Recent architectures learn complex temporal patterns by segmenting a time series into patches and using the patches as tokens. The patch size controls the ability of transformers to learn temporal patterns at different frequencies: shorter patches are effective for learning localized, high-frequency patterns, whereas mining long-term seasonalities and trends requires longer patches. Inspired by this observation, we propose a novel framework, the Multi-resolution Time-Series Transformer (MTST), which consists of a multi-branch architecture for simultaneous modeling of diverse temporal patterns at different resolutions. In contrast to many existing time-series transformers, we employ relative positional encoding, which is better suited for extracting periodic components at different scales. Extensive experiments on several real-world datasets demonstrate the effectiveness of MTST in comparison to state-of-the-art forecasting techniques.
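The patch-based tokenization described above can be sketched as follows. This is a minimal, illustrative example: the `patchify` helper, the patch sizes, and the series length are assumptions for demonstration, not the paper's actual settings. Each branch of a multi-resolution model tokenizes the same series at its own patch length.

```python
import numpy as np

def patchify(series: np.ndarray, patch_len: int, stride: int) -> np.ndarray:
    """Segment a 1-D series into (possibly overlapping) patches (tokens).

    Returns an array of shape (num_patches, patch_len).
    """
    num_patches = (len(series) - patch_len) // stride + 1
    return np.stack(
        [series[i * stride : i * stride + patch_len] for i in range(num_patches)]
    )

# A toy series of length 64 (illustrative).
x = np.arange(64, dtype=float)

# Short patches capture localized, high-frequency detail;
# long patches expose coarse trends and seasonalities.
tokens_fine = patchify(x, patch_len=4, stride=4)     # shape (16, 4)
tokens_coarse = patchify(x, patch_len=16, stride=16) # shape (4, 16)

print(tokens_fine.shape, tokens_coarse.shape)  # → (16, 4) (4, 16)
```

In a multi-branch design, each branch's token sequence would then be fed to its own transformer encoder, so that fine- and coarse-resolution patterns are modeled simultaneously.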
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Human Activity Recognition | UCI-HAR 6-Classes (test) | Accuracy | 90.99 | 11 |
| Medical Time Series Classification | PTB-XL 5-Classes (test) | Accuracy | 0.7214 | 11 |
| Human Activity Recognition | FLAAP 10-Classes (test) | Accuracy | 70.57 | 11 |
| Medical Time Series Classification | PTB 2-Classes (test) | Accuracy | 0.7659 | 11 |
| Medical Time Series Classification | ADFTD 3-Classes (test) | Accuracy | 45.6 | 11 |
| Medical Time Series Classification | APAVA 2-Classes (test) | Accuracy | 71.14 | 11 |
| Medical Time Series Classification | TDBrain 2-Classes (test) | Accuracy | 76.96 | 11 |