Learning to Factorize and Adapt: A Versatile Approach Toward Universal Spatio-Temporal Foundation Models
About
Spatio-Temporal (ST) Foundation Models (STFMs) promise cross-dataset generalization, yet joint ST pretraining is computationally expensive and grapples with the heterogeneity of domain-specific spatial patterns. Substantially extending our preliminary conference version, we present FactoST-v2, an enhanced factorized framework redesigned for full weight transfer and arbitrary-length generalization. FactoST-v2 decouples universal temporal learning from domain-specific spatial adaptation. The first stage pretrains a minimalist encoder-only backbone with randomized sequence masking to capture invariant temporal dynamics, enabling probabilistic quantile prediction across variable horizons. The second stage employs a streamlined adapter that rapidly injects spatial awareness via meta-adaptive learning and prompting. Comprehensive evaluations across diverse domains demonstrate that FactoST-v2 achieves state-of-the-art accuracy with linear efficiency, significantly outperforming existing foundation models in zero-shot and few-shot scenarios while rivaling domain-specific expert baselines. This factorized paradigm offers a practical, scalable path toward truly universal STFMs. Code is available at https://github.com/CityMind-Lab/FactoST.
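The two ingredients of the first pretraining stage named above (randomized sequence masking and probabilistic quantile prediction) can be sketched in NumPy. This is a minimal illustration, not the FactoST-v2 implementation: the mask ratio, the quantile set, and both function names are assumptions for exposition.

```python
import numpy as np


def random_sequence_mask(seq_len, mask_ratio, rng):
    """Pick a random subset of time steps to mask for reconstruction.

    Hypothetical sketch of randomized sequence masking; the actual
    masking scheme in FactoST-v2 may differ.
    """
    n_mask = int(round(seq_len * mask_ratio))
    mask = np.zeros(seq_len, dtype=bool)
    mask[rng.choice(seq_len, size=n_mask, replace=False)] = True
    return mask


def pinball_loss(y_true, y_pred, q):
    """Pinball (quantile) loss: the standard objective for training a
    model to predict the q-th quantile of the target distribution."""
    diff = y_true - y_pred
    return float(np.mean(np.maximum(q * diff, (q - 1.0) * diff)))


rng = np.random.default_rng(0)

# Mask half of a 12-step input window; the temporal backbone would be
# trained to reconstruct the masked steps (mask ratio is an assumption).
mask = random_sequence_mask(seq_len=12, mask_ratio=0.5, rng=rng)

# Probabilistic prediction: sum the pinball loss over several quantile
# levels (the set {0.1, 0.5, 0.9} here is illustrative).
y_true = rng.normal(size=1000)
y_pred = {q: np.quantile(y_true, q) for q in (0.1, 0.5, 0.9)}
loss = sum(pinball_loss(y_true, y_pred[q], q) for q in y_pred)
```

Because the pinball loss is minimized exactly when `y_pred` equals the q-th quantile of the targets, training one output head per quantile level yields a variable-horizon probabilistic forecast rather than a single point estimate.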
Related benchmarks
| Task | Dataset | MAE | Rank |
|---|---|---|---|
| Traffic Forecasting | PEMS-03 Long-term (24 -> 24avg) | 28.62 | 30 |
| Traffic Forecasting | PEMS-03 Short-term (12 -> 12avg) | 21.08 | 30 |
| Grid-based Spatio-Temporal Forecasting | Traffic-SH (Short-term) | 0.40 | 19 |
| Grid-based Spatio-Temporal Forecasting | Traffic-SH Long-term (24 -> 24avg) (test) | 0.58 | 19 |
| Grid-based Spatio-Temporal Forecasting | Bike-NYC (Short-term) | 3.77 | 5 |
| Traffic Forecasting | PEMS-07 Long-term (24 -> 24avg) | 39.99 | 5 |