
LSTPrompt: Large Language Models as Zero-Shot Time Series Forecasters by Long-Short-Term Prompting

About

Time-series forecasting (TSF) finds broad applications in real-world scenarios. Prompting off-the-shelf Large Language Models (LLMs) demonstrates strong zero-shot TSF capabilities while preserving computational efficiency. However, existing prompting methods oversimplify TSF as language next-token prediction, overlooking its dynamic nature and lacking integration with state-of-the-art prompting strategies such as Chain-of-Thought. Thus, we propose LSTPrompt, a novel approach for prompting LLMs in zero-shot TSF tasks. LSTPrompt decomposes TSF into short-term and long-term forecasting sub-tasks, tailoring prompts to each. LSTPrompt further guides LLMs to regularly reassess forecasting mechanisms to enhance adaptability. Extensive evaluations demonstrate that LSTPrompt consistently outperforms existing prompting methods and achieves competitive results compared to foundation TSF models.
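As a rough illustration of the idea described above, the sketch below builds a zero-shot prompt that splits the forecast horizon into short-term and long-term sub-tasks and asks the model to periodically reassess its reasoning. All function and parameter names (`build_lstprompt`, `short_frac`) are hypothetical, not the paper's actual implementation, and the exact prompt wording is an assumption.

```python
def build_lstprompt(history, horizon, short_frac=0.4):
    """Sketch of a long-short-term prompt for zero-shot LLM forecasting.

    history    -- list of observed numeric values
    horizon    -- total number of future steps to forecast
    short_frac -- fraction of the horizon treated as the short-term
                  sub-task (hypothetical split ratio, not from the paper)
    """
    # Split the horizon into the two sub-tasks.
    short_h = max(1, round(horizon * short_frac))
    long_h = horizon - short_h

    # Serialize the numeric history as plain text for the LLM.
    series = ", ".join(f"{v:.2f}" for v in history)

    return (
        f"Here is a time series: {series}\n"
        f"Sub-task 1 (short-term): forecast the next {short_h} steps, "
        "focusing on recent local dynamics.\n"
        f"Sub-task 2 (long-term): forecast the following {long_h} steps, "
        "focusing on trend and seasonality.\n"
        "Every few steps, pause and reassess whether the patterns you "
        "identified still hold before continuing."
    )


prompt = build_lstprompt([112.0, 118.0, 132.0, 129.0], horizon=10)
print(prompt)
```

The returned string would then be sent to an off-the-shelf LLM; decoding the model's numeric reply back into a series is omitted here.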

Haoxin Liu, Zhiyuan Zhao, Jindong Wang, Harshavardhan Kamarthi, B. Aditya Prakash • 2024

Related benchmarks

Task                      Dataset                               Result      Rank
Time Series Forecasting   ETTm1                                 -           334
Time Series Forecasting   ETTh1 (test)                          -           262
Time Series Forecasting   ILI                                   MAE 0.42    58
Time Series Forecasting   Stock                                 MAE 0.19    28
Time Series Forecasting   Weather                               MAE 0.31    28
Time Series Forecasting   Darts AirPassengers library (test)    MAE 13.02   7
Time Series Forecasting   Darts MilkProduction                  MAE 7.71    7
Time Series Forecasting   Darts Sunspots                        MAE 46.84   7
Time Series Forecasting   Darts BeerProduction                  MAE 13.29   7
Time Series Forecasting   Monash RiverFlow                      MAE 24.17   7

Showing 10 of 11 rows

Other info

Code
