FiCoTS: Fine-to-Coarse LLM-Enhanced Hierarchical Cross-Modality Interaction for Time Series Forecasting

About

Time series forecasting is central to data analysis and web technologies. The recent success of Large Language Models (LLMs) offers significant potential for this field, especially from the cross-modality perspective. Most methods adopt an LLM-as-Predictor paradigm, using the LLM as the forecasting backbone and designing modality alignment mechanisms so the LLM can understand time series data. However, the semantic information in the time series and text modalities differs significantly, making it challenging for an LLM to fully understand time series data. To mitigate this challenge, our work follows an LLM-as-Enhancer paradigm that fully exploits the LLM's strength in text understanding: the LLM is used only to encode the text modality, which complements the time series modality. Based on this paradigm, we propose FiCoTS, an LLM-enhanced fine-to-coarse framework for multimodal time series forecasting. Specifically, the framework facilitates progressive cross-modality interaction at three levels in a fine-to-coarse scheme. First, the token-level modality alignment module constructs a dynamic heterogeneous graph to filter noise and align time series patches with text tokens. Second, the feature-level modality interaction module introduces a global cross-attention mechanism that lets each time series variable attend to relevant textual contexts. Third, the decision-level modality fusion module uses a gated network to adaptively fuse the predictions of the two modalities for robust forecasts. These three modules work synergistically, letting the two modalities interact comprehensively across three semantic levels so that textual information effectively supports temporal prediction. Extensive experiments on seven real-world benchmarks demonstrate that our model achieves state-of-the-art performance. The code will be released publicly.
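To make the three-level scheme concrete, here is a minimal PyTorch sketch of one plausible realization. Everything below is an illustrative assumption rather than the paper's released code: the class names, tensor shapes, and in particular the top-k edge filtering used as a simple stand-in for the dynamic heterogeneous graph are ours.

```python
# Minimal sketch of a fine-to-coarse multimodal pipeline (illustrative only;
# not the authors' implementation).
import torch
import torch.nn as nn


class TokenLevelAlignment(nn.Module):
    """Token level: score patch-token pairs and keep only the strongest edges
    (a crude stand-in for the paper's dynamic heterogeneous graph)."""

    def __init__(self, d_model: int, keep_ratio: float = 0.5):
        super().__init__()
        self.proj_ts = nn.Linear(d_model, d_model)
        self.proj_txt = nn.Linear(d_model, d_model)
        self.keep_ratio = keep_ratio

    def forward(self, ts_patches, txt_tokens):
        # ts_patches: (B, P, D), txt_tokens: (B, T, D)
        scores = self.proj_ts(ts_patches) @ self.proj_txt(txt_tokens).transpose(1, 2)
        scores = scores / ts_patches.size(-1) ** 0.5            # (B, P, T)
        k = max(1, int(self.keep_ratio * scores.size(-1)))
        thresh = scores.topk(k, dim=-1).values[..., -1:]        # per-patch cutoff
        attn = scores.masked_fill(scores < thresh, float("-inf")).softmax(dim=-1)
        return ts_patches + attn @ txt_tokens                   # text-enriched patches


class FeatureLevelInteraction(nn.Module):
    """Feature level: each series variable cross-attends to the textual context."""

    def __init__(self, d_model: int, n_heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, var_feats, txt_feats):
        # var_feats: (B, V, D) one feature per variable; txt_feats: (B, T, D)
        out, _ = self.cross_attn(var_feats, txt_feats, txt_feats)
        return var_feats + out


class DecisionLevelFusion(nn.Module):
    """Decision level: a gate blends the time-series forecast with the
    text-conditioned forecast."""

    def __init__(self, horizon: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * horizon, horizon), nn.Sigmoid())

    def forward(self, pred_ts, pred_txt):
        # pred_ts, pred_txt: (B, V, H) forecasts from the two branches
        g = self.gate(torch.cat([pred_ts, pred_txt], dim=-1))
        return g * pred_ts + (1.0 - g) * pred_txt


# Smoke test with made-up sizes (batch, patches, tokens, variables, dim, horizon).
B, P, T, V, D, H = 8, 32, 64, 7, 128, 96
patches = TokenLevelAlignment(D)(torch.randn(B, P, D), torch.randn(B, T, D))
feats = FeatureLevelInteraction(D)(torch.randn(B, V, D), torch.randn(B, T, D))
forecast = DecisionLevelFusion(H)(torch.randn(B, V, H), torch.randn(B, V, H))
```

One appealing property of this shape of design: the decision-level gate outputs per-step weights in [0, 1], so the fused forecast can fall back to the pure time-series branch whenever the text branch is uninformative.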

Yafei Lyu, Hao Zhou, Lu Zhang, Xu Yang, Zhiyong Liu • 2025

Related benchmarks

Task | Dataset | Result | Rank
Long-term time-series forecasting | ETTh1 | MAE 0.419 | 351
Long-term time-series forecasting | Weather | MSE 0.224 | 348
Long-term time-series forecasting | ETTh2 | MSE 0.335 | 327
Long-term time-series forecasting | ETTm2 | MSE 0.251 | 305
Long-term time-series forecasting | ETTm1 | MSE 0.346 | 295
Long-term time-series forecasting | Traffic | MSE 0.403 | 278
Long-term time-series forecasting | Electricity | MSE 0.161 | 103
Time Series Forecasting | ETTh1 (5% train) | MSE 0.426 | 12
Time Series Forecasting | ETTh2 (5% train) | MSE 0.337 | 12
Time Series Forecasting | ETTm1 (5% train) | MSE 0.361 | 12
(Showing 10 of 15 rows.)
