
Revisiting the Generic Transformer: Deconstructing a Strong Baseline for Time Series Foundation Models

About

The recent surge in Time Series Foundation Models has rapidly advanced the field, yet heterogeneous training setups across studies make it difficult to attribute improvements to architectural innovation versus data engineering. In this work, we investigate the potential of a standard patch Transformer, demonstrating that this generic architecture achieves state-of-the-art zero-shot forecasting performance with a straightforward training protocol. We conduct a comprehensive ablation study covering model scaling, data composition, and training techniques to isolate the ingredients essential for high performance. Our findings identify the key drivers of performance and confirm that the generic architecture itself scales well. By strictly controlling these variables, we provide detailed empirical results on model scaling across multiple dimensions. We release our open-source model and findings to establish a transparent, reproducible baseline for future research.
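To make the setup concrete, the sketch below shows what a generic patch Transformer forecaster typically looks like in PyTorch: the context window is split into non-overlapping patches, each patch is linearly embedded, a standard Transformer encoder processes the patch sequence, and a linear head emits the forecast. All module names and hyperparameters here are illustrative assumptions for a minimal PatchTST-style baseline, not the authors' released implementation.

```python
# Minimal sketch of a generic patch Transformer forecaster.
# Hyperparameters and design details are illustrative assumptions only.
import torch
import torch.nn as nn

class PatchTransformerForecaster(nn.Module):
    def __init__(self, context_len=512, patch_len=32, horizon=96,
                 d_model=256, n_heads=8, n_layers=4):
        super().__init__()
        assert context_len % patch_len == 0
        self.patch_len = patch_len
        n_patches = context_len // patch_len
        # Each non-overlapping patch of raw values is linearly embedded.
        self.embed = nn.Linear(patch_len, d_model)
        self.pos = nn.Parameter(torch.zeros(1, n_patches, d_model))
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=4 * d_model,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # A flat linear head maps all encoded patches to the forecast horizon.
        self.head = nn.Linear(n_patches * d_model, horizon)

    def forward(self, x):  # x: (batch, context_len)
        # Instance-normalize each input window; undo shift/scale on the output.
        mean = x.mean(dim=1, keepdim=True)
        std = x.std(dim=1, keepdim=True) + 1e-5
        x = (x - mean) / std
        patches = x.unfold(1, self.patch_len, self.patch_len)  # (B, N, P)
        z = self.encoder(self.embed(patches) + self.pos)       # (B, N, D)
        y = self.head(z.flatten(1))                            # (B, horizon)
        return y * std + mean

model = PatchTransformerForecaster()
forecast = model(torch.randn(4, 512))  # -> shape (4, 96)
```

Per-window instance normalization (undone at the output) is a common choice in this family of models; the paper's exact normalization, patching, and head design may differ.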

Yunshi Wen, Wesley M. Gifford, Chandra Reddy, Lam M. Nguyen, Jayant Kalagnanam, Anak Agung Julius • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Time Series Forecasting | GIFT-Eval (test) | MASE | 70.7 | 34 |
| Time Series Forecasting | GIFT-Eval (short horizon) | MASE | 0.616 | 6 |
| Time Series Forecasting | GIFT-Eval (long horizon) | MASE | 0.745 | 6 |
| Time Series Forecasting | GIFT-Eval (medium horizon) | MASE | 0.722 | 6 |
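For reference, MASE (mean absolute scaled error), the metric reported above, divides the forecast's mean absolute error by the in-sample mean absolute error of a seasonal-naive forecast, so values below 1 beat the naive baseline. The standard definition is given below; how GIFT-Eval aggregates it across datasets and horizons may differ.

```latex
\mathrm{MASE} =
\frac{\tfrac{1}{H}\sum_{t=T+1}^{T+H} \lvert y_t - \hat{y}_t \rvert}
     {\tfrac{1}{T-m}\sum_{t=m+1}^{T} \lvert y_t - y_{t-m} \rvert}
```

Here $H$ is the forecast horizon, $T$ the in-sample length, and $m$ the seasonal period.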
