Tiny Time Mixers (TTMs): Fast Pre-trained Models for Enhanced Zero/Few-Shot Forecasting of Multivariate Time Series

About

Large pre-trained models excel at zero/few-shot learning for language and vision tasks but face challenges in multivariate time series (TS) forecasting due to the diversity of data characteristics. Consequently, recent research has focused on developing pre-trained TS forecasting models. These models, whether built from scratch or adapted from large language models (LLMs), excel in zero/few-shot forecasting tasks, but they suffer from slow inference, high computational demands, and neglect of cross-channel and exogenous correlations. To address this, we introduce Tiny Time Mixers (TTM), a compact model (starting from 1M parameters) with effective transfer-learning capabilities, trained exclusively on public TS datasets. TTM, based on the lightweight TSMixer architecture, incorporates innovations such as adaptive patching, diverse resolution sampling, and resolution prefix tuning to handle pre-training on datasets of varied resolutions with minimal model capacity. It also employs multi-level modeling to capture channel correlations and infuse exogenous signals during fine-tuning. TTM outperforms existing popular benchmarks in zero/few-shot forecasting by 4-40% while significantly reducing computational requirements. Moreover, TTMs are lightweight enough to run on CPU-only machines, enhancing usability and fostering wider adoption in resource-constrained environments. Model weights for reproducibility and research use are available at https://huggingface.co/ibm/ttm-research-r2/, while enterprise-use weights under the Apache license are available as follows: the initial TTM-Q variant at https://huggingface.co/ibm-granite/granite-timeseries-ttm-r1, and the latest variants (TTM-B, TTM-E, and TTM-A) at https://huggingface.co/ibm-granite/granite-timeseries-ttm-r2.

Vijay Ekambaram, Arindam Jati, Pankaj Dayama, Sumanta Mukherjee, Nam H. Nguyen, Wesley M. Gifford, Chandra Reddy, Jayant Kalagnanam • 2024
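Since the abstract links ready-to-use checkpoints on Hugging Face, a minimal zero-shot inference sketch may help. It assumes IBM's granite-tsfm library (importable as `tsfm_public`) and the default 512-step context / 96-step horizon of the `granite-timeseries-ttm-r1` checkpoint; the forward-pass argument and output field names are assumptions to verify against the model card for your checkpoint and library version.

```python
# Minimal zero-shot TTM inference sketch.
# Assumes the `tsfm_public` package from https://github.com/ibm-granite/granite-tsfm;
# API details (e.g. the output field name) may vary by version -- check the model card.
import torch
from tsfm_public.models.tinytimemixer import TinyTimeMixerForPrediction

# Apache-licensed TTM-Q variant referenced in the abstract.
model = TinyTimeMixerForPrediction.from_pretrained(
    "ibm-granite/granite-timeseries-ttm-r1"
)
model.eval()

# Dummy multivariate history: (batch, context_length, num_channels).
# The r1 checkpoint expects a 512-step context. TTM's backbone is
# channel-independent, so zero-shot inference should work for any channel
# count (assumption -- confirm for your checkpoint); 7 channels mirrors ETT.
past_values = torch.randn(1, 512, 7)

with torch.no_grad():
    out = model(past_values=past_values)

# Expected forecast shape: (batch, prediction_length, num_channels).
print(out.prediction_outputs.shape)  # torch.Size([1, 96, 7])
```

Few-shot fine-tuning with channel mixing and exogenous infusion follows the same loading path; the granite-tsfm example notebooks wrap it in a standard Hugging Face `Trainer`.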

Related benchmarks

Task                                 Dataset              Result      Rank
Long-term forecasting                ETTm1                MSE 0.293   375
Long-term forecasting                ETTh1                MSE 0.363   365
Long-term time-series forecasting    Traffic              MSE 0.365   362
Time Series Forecasting              ETTh1 (test)         MSE 0.362   348
Time Series Forecasting              ETTm1 (test)         MSE 0.315   278
Time Series Forecasting              ETTh2 (test)         MSE 0.253   232
Time Series Forecasting              Weather (test)       MSE 0.154   200
Time Series Forecasting              ETTm2 (test)         MSE 0.151   171
Long-term forecasting                Electricity          MSE 0.129   167
Time Series Forecasting              Electricity (test)   MSE 0.172   109

Showing 10 of 66 benchmark rows.
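All results above are reported as mean squared error (MSE), where lower is better. For reference, a minimal sketch of the metric as typically computed over the forecast horizon (tensor names are illustrative):

```python
import torch

def mse(y_true: torch.Tensor, y_pred: torch.Tensor) -> torch.Tensor:
    """Mean squared error over all horizons and channels.

    Both tensors have shape (batch, prediction_length, num_channels).
    Benchmark numbers are conventionally computed on standardized series,
    which is why values are comparable across datasets of different scales.
    """
    return torch.mean((y_true - y_pred) ** 2)
```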
