
Self-Supervised Contrastive Pre-Training For Time Series via Time-Frequency Consistency

About

Pre-training on time series poses a unique challenge due to the potential mismatch between pre-training and target domains, such as shifts in temporal dynamics, fast-evolving trends, and long-range and short-cyclic effects, which can lead to poor downstream performance. While domain adaptation methods can mitigate these shifts, most methods need examples directly from the target domain, making them suboptimal for pre-training. To address this challenge, methods need to accommodate target domains with different temporal dynamics and be capable of doing so without seeing any target examples during pre-training. Relative to other modalities, in time series, we expect that time-based and frequency-based representations of the same example are located close together in the time-frequency space. To this end, we posit that time-frequency consistency (TF-C) -- embedding a time-based neighborhood of an example close to its frequency-based neighborhood -- is desirable for pre-training. Motivated by TF-C, we define a decomposable pre-training model, where the self-supervised signal is provided by the distance between time and frequency components, each individually trained by contrastive estimation. We evaluate the new method on eight datasets, including electrodiagnostic testing, human activity recognition, mechanical fault detection, and physical status monitoring. Experiments against eight state-of-the-art methods show that TF-C outperforms baselines by 15.4% (F1 score) on average in one-to-one settings (e.g., fine-tuning an EEG-pretrained model on EMG data) and by 8.4% (precision) in challenging one-to-many settings (e.g., fine-tuning an EEG-pretrained model for either hand-gesture recognition or mechanical fault prediction), reflecting the breadth of scenarios that arise in real-world applications. Code and datasets: https://github.com/mims-harvard/TFC-pretraining.
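The core idea above can be illustrated with a minimal NumPy sketch: build a time-based view (here a simple jitter augmentation) and a frequency-based view (an FFT magnitude spectrum) of the same series, embed each with its own encoder, and measure the distance between the two embeddings that TF-C uses as a self-supervised signal. The toy linear encoders and the specific augmentation are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

def augment_time(x, rng):
    # Time-based view: jitter augmentation (additive Gaussian noise),
    # one common time-domain augmentation in contrastive learning.
    return x + rng.normal(0.0, 0.1, size=x.shape)

def freq_view(x):
    # Frequency-based view: magnitude spectrum from the real FFT.
    return np.abs(np.fft.rfft(x, axis=-1))

def embed(v, W):
    # Toy linear encoder followed by L2 normalization (hypothetical;
    # the paper uses learned contrastive encoders per view).
    z = v @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def tfc_consistency(z_t, z_f):
    # TF-C signal: squared distance between the time-based and
    # frequency-based embeddings of the same example, averaged over the batch.
    return float(np.mean(np.sum((z_t - z_f) ** 2, axis=-1)))

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 128))        # batch of 4 univariate series, length 128
W_t = rng.normal(size=(128, 16))     # time-encoder weights (random, illustrative)
W_f = rng.normal(size=(65, 16))      # freq-encoder weights; rfft of length 128 gives 65 bins

z_t = embed(augment_time(x, rng), W_t)
z_f = embed(freq_view(x), W_f)
loss = tfc_consistency(z_t, z_f)
```

Minimizing this distance (together with per-view contrastive losses) is what pushes the time-based and frequency-based neighborhoods of an example close together in the shared time-frequency space.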

Xiang Zhang, Ziyuan Zhao, Theodoros Tsiligkaridis, Marinka Zitnik • 2022

Related benchmarks

Task                                        Dataset         Result                  Rank
Multivariate long-term forecasting          ETTh1           MSE 0.637               344
Multivariate long-term series forecasting   ETTh2           MSE 2.85                319
Multivariate long-term series forecasting   Weather         MSE 0.286               288
Multivariate long-term series forecasting   ETTm1           MSE 0.744               257
Multivariate long-term forecasting          Electricity     MSE 0.363               183
Multivariate long-term series forecasting   ETTm2           MSE 1.755               175
Multivariate long-term forecasting          Traffic         MSE 0.717               159
Multivariate long-term forecasting          ETTh1 (test)    MSE 0.596               77
Activity Recognition                        HHAR (test)     Mean F1 Score 75.13     46
Activity Recognition                        UCIHAR (test)   Macro F1 Score 71.16    43
Showing 10 of 28 rows
