
Test-Time Efficient Pretrained Model Portfolios for Time Series Forecasting

About

Is bigger always better for time series foundation models? With this question in mind, we explore an alternative to training a single, large monolithic model: building a portfolio of smaller, pretrained forecasting models. By applying ensembling or model selection over these portfolios, we achieve competitive performance on large-scale benchmarks with far fewer parameters. We explore strategies for designing such portfolios and find that collections of specialist models consistently outperform portfolios of independently trained generalists. Remarkably, we demonstrate that post-training a base model is a compute-effective approach for creating sufficiently diverse specialists, and we provide evidence that ensembling and model selection are more compute-efficient than test-time fine-tuning.

Mert Kayaalp, Caner Turkmen, Oleksandr Shchur, Pedro Mercado, Abdul Fatir Ansari, Michael Bohlke-Schneider, Bernie Wang • 2025
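The two test-time strategies mentioned in the abstract, ensembling and model selection over a portfolio of forecasts, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the function names are hypothetical, and each portfolio member is represented simply by its array of point forecasts.

```python
import numpy as np

def ensemble_forecast(forecasts):
    """Ensembling: average the point forecasts of all portfolio members.

    forecasts: list of equal-length 1-D arrays, one per model.
    """
    return np.mean(np.stack(forecasts), axis=0)

def select_model(forecasts, y_val):
    """Model selection: return the index of the portfolio member with
    the lowest mean absolute error on held-out validation values."""
    errors = [np.mean(np.abs(f - y_val)) for f in forecasts]
    return int(np.argmin(errors))
```

Both strategies add only inference-time cost per portfolio member, which is the sense in which the paper contrasts them with the heavier alternative of test-time fine-tuning.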

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Short-term forecasting | M4 Quarterly | MASE 0.075 | 67 |
| Short-term forecasting | M4 Yearly | MASE 0.114 | 63 |
| Time Series Forecasting | M3 Monthly | MASE 0.093 | 42 |
| Time Series Forecasting | M3 Quarterly | MASE 0.072 | 42 |
| Time Series Forecasting | Tourism Monthly | MASE 0.095 | 42 |
| Time Series Forecasting | Tourism Quarterly | MASE 0.071 | 42 |
| Time Series Forecasting | Traffic | MASE 0.25 | 30 |
| Multivariate Time-series Forecasting | NN5 | MASE 0.188 | 17 |
| Time Series Forecasting | GIFT-Eval bizitobs-application-60 | MASE 0.021 | 13 |
| Time Series Forecasting | GIFT-Eval Part 1 | LOOP-SEATTLE-5T-48 Error 0.062 | 13 |

Showing 10 of 80 rows.
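Most results above are reported in MASE (mean absolute scaled error), which scales the forecast's absolute error by the in-sample absolute error of a seasonal-naive baseline, so values below 1 beat that baseline. A minimal sketch of the standard definition (not code from the benchmark itself; the seasonal period `m` is an argument):

```python
import numpy as np

def mase(y_true, y_pred, y_train, m=1):
    """Mean Absolute Scaled Error.

    Forecast MAE divided by the in-sample MAE of the seasonal-naive
    forecast y[t] = y[t - m] computed on the training series.
    """
    y_true, y_pred, y_train = map(np.asarray, (y_true, y_pred, y_train))
    mae = np.mean(np.abs(y_true - y_pred))
    scale = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    return mae / scale
```

For example, a forecast whose MAE equals the naive baseline's in-sample MAE scores exactly 1.0.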
