Computer Vision Self-supervised Learning Methods on Time Series
About
Self-supervised learning (SSL) has had great success in computer vision. Most current mainstream computer-vision SSL frameworks are based on a Siamese network architecture. These approaches often rely on carefully crafted loss functions and training setups to avoid feature collapse. In this study, we evaluate whether these computer-vision SSL frameworks are also effective on a different modality, *i.e.*, time series. Effectiveness is evaluated on the UCR and UEA archives, and we show that computer-vision SSL frameworks can be effective even for time series. In addition, we propose a new method that improves on the recently proposed VICReg method: we refine its *covariance* term and augment the head of the architecture with an iterative normalization layer that accelerates the convergence of the model.
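For context, the original VICReg objective that the method above builds on combines an invariance term, a variance hinge, and a covariance penalty on the off-diagonal entries of the embedding covariance matrix. Below is a minimal NumPy sketch of that standard formulation, not the modified covariance term proposed here; the function name and default weights are illustrative assumptions.

```python
import numpy as np

def vicreg_loss(z_a, z_b, sim_w=25.0, var_w=25.0, cov_w=1.0,
                gamma=1.0, eps=1e-4):
    """Sketch of the standard VICReg loss for two embedding batches (n, d)."""
    n, d = z_a.shape

    # Invariance: mean squared error between the two branches.
    sim = np.mean((z_a - z_b) ** 2)

    # Variance: hinge keeping each dimension's std above gamma.
    std_a = np.sqrt(z_a.var(axis=0) + eps)
    std_b = np.sqrt(z_b.var(axis=0) + eps)
    var = np.mean(np.maximum(0.0, gamma - std_a)) \
        + np.mean(np.maximum(0.0, gamma - std_b))

    # Covariance: penalize off-diagonal entries of the covariance matrix,
    # decorrelating embedding dimensions to avoid collapse.
    def cov_term(z):
        zc = z - z.mean(axis=0)
        c = (zc.T @ zc) / (n - 1)
        off_diag = c - np.diag(np.diag(c))
        return (off_diag ** 2).sum() / d

    cov = cov_term(z_a) + cov_term(z_b)
    return sim_w * sim + var_w * var + cov_w * cov
```

Because the invariance term is a plain MSE, identical branch outputs zero it out, while the variance and covariance terms act only within each branch; this is what lets VICReg avoid collapse without negative pairs or stop-gradients.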
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Classification | SVHN (test) | -- | 182 |
| Image Classification | ImageNet 20 Dp: CIFAR10 downstream (test) | Balanced Accuracy 73.35 | 44 |
| Classification | CIFAR-10 (test) | Robust Accuracy 14.07 | 24 |
| Image Classification | ANIMALS10 CIFAR10 downstream (test) | Balanced Accuracy 94.7 | 22 |
| Image Classification | SVHN Dp: CIFAR10 (test) | Balanced Accuracy 73.26 | 22 |
| Image Classification | STL10 Dp: ImageNet (test) | Balanced Accuracy 68.76 | 22 |
| Image Classification | ANIMALS10 Dp: ImageNet (downstream test) | Balanced Accuracy 76.57 | 22 |
| Image Classification | GTSRB Dp: CIFAR10 (test) | Balanced Accuracy 82.25 | 22 |
| Image Classification | GTSRB Dp: ImageNet (test) | Balanced Accuracy 77.96 | 22 |
| Image Classification | SVHN Dp: ImageNet (test) | Balanced Accuracy 69.51 | 22 |