
Mixing Up Contrastive Learning: Self-Supervised Representation Learning for Time Series

About

The lack of labeled data is a key challenge for learning useful representations from time series. An unsupervised representation learning framework capable of producing high-quality representations would therefore be of great value: it is key to enabling transfer learning, which is especially beneficial for medical applications, where data are abundant but labeling is costly and time-consuming. We propose an unsupervised contrastive learning framework motivated from the perspective of label smoothing. The proposed approach uses a novel contrastive loss that naturally exploits a data augmentation scheme in which new samples are generated by mixing two data samples with a mixing component. The pretext task is to predict the mixing component, which is used as the soft target in the loss function. Experiments demonstrate the framework's superior performance compared to other representation learning approaches on both univariate and multivariate time series, and illustrate its benefits for transfer learning on clinical time series.
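The core idea in the abstract — generate a new sample by mixing two series with a coefficient λ drawn from a Beta distribution, then train the encoder to predict λ from the similarities between the mixed sample and its two components — can be written as a contrastive loss with soft targets. Below is a minimal PyTorch sketch of that idea, not the authors' exact implementation: the encoder architecture, the Beta parameter alpha, the temperature tau, and restricting the denominator to only the two mixing components (the published loss may also contrast against other samples in the batch) are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def mixup_contrastive_loss(encoder, x1, x2, alpha=0.2, tau=0.5):
    """Soft-target contrastive loss for a mixup pretext task (illustrative sketch).

    x1, x2 : two batches of time series, shape (B, C, T).
    A mixed batch lam * x1 + (1 - lam) * x2 is encoded, and the model is
    trained to predict the mixing component lam from the similarities
    between the mixed representation and the two source representations.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample()  # mixing component
    x_mix = lam * x1 + (1.0 - lam) * x2

    # L2-normalise so that dot products are cosine similarities.
    z1 = F.normalize(encoder(x1), dim=-1)
    z2 = F.normalize(encoder(x2), dim=-1)
    zm = F.normalize(encoder(x_mix), dim=-1)

    # Similarity of each mixed sample to its two mixing components.
    sim1 = (zm * z1).sum(dim=-1) / tau   # (B,)
    sim2 = (zm * z2).sum(dim=-1) / tau   # (B,)

    # Cross-entropy against the soft target (lam, 1 - lam).
    log_p = F.log_softmax(torch.stack([sim1, sim2], dim=-1), dim=-1)
    loss = -(lam * log_p[:, 0] + (1.0 - lam) * log_p[:, 1])
    return loss.mean()

# Hypothetical toy usage: a small 1-D CNN encoder on univariate series.
encoder = torch.nn.Sequential(
    torch.nn.Conv1d(1, 32, kernel_size=7, padding=3),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool1d(1),
    torch.nn.Flatten(),
)
x1 = torch.randn(8, 1, 128)  # batch of 8 univariate series, length 128
x2 = torch.randn(8, 1, 128)
print(mixup_contrastive_loss(encoder, x1, x2))
```

The soft target (λ, 1 − λ) plays the role that label smoothing plays in supervised training: instead of a hard one-hot positive, the mixed sample is treated as partially positive toward both of its sources, which is the label-smoothing motivation the abstract refers to.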

Kristoffer Wickstrøm, Michael Kampffmeyer, Karl Øyvind Mikalsen, Robert Jenssen • 2022

Related benchmarks

Task                 Dataset                                   Accuracy (%)   Rank
ECG Classification   PTB, 100% labeled training data (test)    87.61          7
ECG Classification   PTB, 10% labeled training data (test)     87.05          7
ECG Classification   PTB, 1% labeled training data (test)      84.71          7
EEG Classification   TDBRAIN, 100% labels (test)               81.47          7
EEG Classification   TDBRAIN, 10% labels (test)                77.50          7
EEG Classification   TDBRAIN, 1% labels (test)                 63.91          7
EEG Classification   AD, 100% labels (test)                    65.68          7
EEG Classification   AD, 10% labels (test)                     59.38          7
EEG Classification   AD, 1% labels (test)                      63.67          7
