Self-Supervised Contrastive Pre-Training For Time Series via Time-Frequency Consistency

About

Pre-training on time series poses a unique challenge due to the potential mismatch between pre-training and target domains, such as shifts in temporal dynamics, fast-evolving trends, and long-range and short-cyclic effects, which can lead to poor downstream performance. While domain adaptation methods can mitigate these shifts, most methods need examples directly from the target domain, making them suboptimal for pre-training. To address this challenge, methods need to accommodate target domains with different temporal dynamics and be capable of doing so without seeing any target examples during pre-training. Relative to other modalities, in time series, we expect that time-based and frequency-based representations of the same example are located close together in the time-frequency space. To this end, we posit that time-frequency consistency (TF-C) -- embedding a time-based neighborhood of an example close to its frequency-based neighborhood -- is desirable for pre-training. Motivated by TF-C, we define a decomposable pre-training model, where the self-supervised signal is provided by the distance between time and frequency components, each individually trained by contrastive estimation. We evaluate the new method on eight datasets, including electrodiagnostic testing, human activity recognition, mechanical fault detection, and physical status monitoring. Experiments against eight state-of-the-art methods show that TF-C outperforms baselines by 15.4% (F1 score) on average in one-to-one settings (e.g., fine-tuning an EEG-pretrained model on EMG data) and by 8.4% (precision) in challenging one-to-many settings (e.g., fine-tuning an EEG-pretrained model for either hand-gesture recognition or mechanical fault prediction), reflecting the breadth of scenarios that arise in real-world applications. Code and datasets: https://github.com/mims-harvard/TFC-pretraining.
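The core TF-C idea in the abstract — embedding a time-based view of a series close to its frequency-based view, with their distance supplying the self-supervised signal — can be sketched minimally in NumPy. The `embed` function below is a toy random-projection stand-in for the paper's trained contrastive encoders (an assumption for illustration, not the actual model); the frequency view uses FFT magnitudes.

```python
import numpy as np

def time_frequency_views(x):
    """Return the time-domain view and a frequency-domain view (FFT magnitudes) of a 1-D series."""
    x_t = np.asarray(x, dtype=float)
    x_f = np.abs(np.fft.rfft(x_t))  # frequency-based view
    return x_t, x_f

def embed(v, dim=8, seed=0):
    """Toy stand-in encoder (hypothetical): random linear projection into a
    shared latent space, L2-normalized. TF-C instead learns these encoders
    by contrastive estimation."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((dim, v.shape[0]))
    z = w @ v
    return z / (np.linalg.norm(z) + 1e-12)

def tfc_consistency(x, dim=8):
    """Distance between time-based and frequency-based embeddings.
    In TF-C this distance provides the self-supervised training signal:
    smaller means the two views agree in the shared time-frequency space."""
    x_t, x_f = time_frequency_views(x)
    z_t = embed(x_t, dim=dim, seed=0)
    z_f = embed(x_f, dim=dim, seed=1)
    return float(np.linalg.norm(z_t - z_f))

signal = np.sin(np.linspace(0, 8 * np.pi, 128))
d = tfc_consistency(signal)
# Unit-norm embeddings bound the distance to [0, 2].
assert 0.0 <= d <= 2.0
```

In the actual method, both encoders are trained jointly so that this distance becomes small for matching time/frequency views and large for mismatched ones.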

Xiang Zhang, Ziyuan Zhao, Theodoros Tsiligkaridis, Marinka Zitnik • 2022

Related benchmarks

Task                                    Dataset             Result           Rank
Multivariate long-term forecasting      ETTh1               MSE 0.637        394
Multivariate long-term series forecasting  ETTh2            MSE 2.85         367
Multivariate long-term series forecasting  Weather          MSE 0.286        359
Multivariate long-term series forecasting  ETTm1            MSE 0.744        305
Multivariate long-term forecasting      Electricity         MSE 0.363        236
Multivariate long-term series forecasting  ETTm2            MSE 1.755        223
Multivariate long-term forecasting      Traffic             MSE 0.717        165
Multivariate long-term forecasting      ETTh1 (test)        MSE 0.596        125
Long-term forecasting                   9 datasets average  MSE 0.459        60
Classification                          Diabetes (test)     Accuracy 82.96   49

Showing 10 of 51 rows
