
Computer Vision Self-supervised Learning Methods on Time Series

About

Self-supervised learning (SSL) has had great success in computer vision. Most mainstream computer vision SSL frameworks are based on a Siamese network architecture, and they often rely on carefully crafted loss functions and training setups to avoid feature collapse. In this study, we evaluate whether these computer vision SSL frameworks are also effective on a different modality, i.e., time series. We run experiments on the UCR and UEA archives and show that computer vision SSL frameworks can be effective even for time series. In addition, we propose a new method that improves on the recently proposed VICReg method: we improve the covariance term proposed in VICReg, and we augment the head of the architecture with an iterative normalization layer that accelerates the convergence of the model.
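For context on the covariance term the abstract refers to, here is a minimal sketch of the original VICReg covariance penalty (not the paper's improved variant): it decorrelates embedding dimensions by penalizing the squared off-diagonal entries of the batch covariance matrix, which is one of VICReg's guards against feature collapse. The function name `vicreg_covariance_loss` is chosen here for illustration.

```python
import numpy as np

def vicreg_covariance_loss(z):
    """Covariance term of VICReg (sketch).

    z: (N, D) batch of embeddings.
    Penalizes off-diagonal entries of the embedding covariance
    matrix so that feature dimensions become decorrelated.
    """
    n, d = z.shape
    z = z - z.mean(axis=0)            # center each feature dimension
    cov = (z.T @ z) / (n - 1)         # (D, D) sample covariance matrix
    off_diag = cov - np.diag(np.diag(cov))  # zero out the diagonal
    return (off_diag ** 2).sum() / d  # mean squared off-diagonal mass
```

A batch whose feature dimensions are perfectly correlated (e.g. duplicated columns) yields a strictly positive loss, while decorrelated dimensions drive the term to zero.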

Daesoo Lee, Erlend Aune • 2021

Related benchmarks

Task                 | Dataset                                   | Metric            | Result | Rank
Classification       | SVHN (test)                               | -                 | -      | 182
Image Classification | ImageNet 20 Dp: CIFAR10 downstream (test) | Balanced Accuracy | 73.35  | 44
Classification       | CIFAR-10 (test)                           | Robust Accuracy   | 14.07  | 24
Image Classification | ANIMALS10 CIFAR10 downstream (test)       | Balanced Accuracy | 94.7   | 22
Image Classification | SVHN Dp: CIFAR10 (test)                   | Balanced Accuracy | 73.26  | 22
Image Classification | STL10 Dp: ImageNet (test)                 | Balanced Accuracy | 68.76  | 22
Image Classification | ANIMALS10 Dp: ImageNet (downstream test)  | Balanced Accuracy | 76.57  | 22
Image Classification | GTSRB Dp: CIFAR10 (test)                  | Balanced Accuracy | 82.25  | 22
Image Classification | GTSRB Dp: ImageNet (test)                 | Balanced Accuracy | 77.96  | 22
Image Classification | SVHN Dp: ImageNet (test)                  | Balanced Accuracy | 69.51  | 22
Showing 10 of 18 rows
