CLOCS: Contrastive Learning of Cardiac Signals Across Space, Time, and Patients
About
The healthcare industry generates troves of unlabelled physiological data. This data can be exploited via contrastive learning, a self-supervised pre-training method that encourages representations of instances to be similar to one another. We propose a family of contrastive learning methods, CLOCS, that encourages representations across space, time, *and* patients to be similar to one another. We show that CLOCS consistently outperforms the state-of-the-art methods, BYOL and SimCLR, when performing a linear evaluation of, and fine-tuning on, downstream tasks. We also show that CLOCS achieves strong generalization performance with only 25% of labelled training data. Furthermore, our training procedure naturally generates patient-specific representations that can be used to quantify patient similarity.
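To make the idea concrete, here is a minimal sketch of a contrastive objective of the kind described above: two views of each patient's signal (e.g. different ECG leads or temporal segments) are pulled together in embedding space while views from different patients are pushed apart. The function name, the NT-Xent-style loss form, and the temperature value are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def patient_contrastive_loss(z1, z2, temperature=0.1):
    """Sketch of a contrastive loss over two views of a batch of signals.

    z1[i] and z2[i] are embeddings of two views of the same patient
    (the positive pair); all cross-patient pairs act as negatives.
    """
    # L2-normalise embeddings so the dot product is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature  # (N, N) similarity matrix
    # For anchor i, the positive is the diagonal entry sim[i, i];
    # the softmax denominator runs over all patients in the row.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - np.diag(sim)))
```

Minimising this loss makes same-patient views the most similar entries in each row of the similarity matrix, which is what yields the patient-specific representations mentioned above.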
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| ECG Classification | CPSC 2018 | Macro AUC (1%) | 59.59 | 17 |
| ECG Classification | PTBXL Sub | Macro AUC (1%) | 0.5794 | 17 |
| ECG Classification | PTBXL Form | Macro AUC (1%) | 51.97 | 17 |
| ECG Classification | PTBXL Rhythm | Macro AUC (1%) | 47.19 | 17 |
| ECG Classification | CSN | Macro AUC (1%) | 54.38 | 17 |
| ECG Classification | PTBXL Super | Macro AUC (1%) | 68.94 | 17 |
| ECG Classification | PTB, 10% labeled train (test) | Accuracy | 88.25 | 7 |
| ECG Classification | PTB, 1% labeled train (test) | Accuracy | 88.8 | 7 |
| EEG Classification | AD, 100% labels (test) | Accuracy | 78.37 | 7 |
| EEG Classification | AD, 10% labels (test) | Accuracy | 76.97 | 7 |