A Graph is Worth 1-bit Spikes: When Graph Contrastive Learning Meets Spiking Neural Networks
About
While contrastive self-supervised learning has become the de-facto learning paradigm for graph neural networks, the pursuit of higher task accuracy requires a larger hidden dimensionality to learn informative and discriminative full-precision representations, raising concerns about the computation, memory footprint, and energy consumption burdens that are largely overlooked in real-world applications. This work explores a promising direction for graph contrastive learning (GCL) with spiking neural networks (SNNs), which leverage sparse and binary characteristics to learn more biologically plausible and compact representations. We propose SpikeGCL, a novel GCL framework that learns binarized 1-bit representations for graphs, making balanced trade-offs between efficiency and performance. We provide theoretical guarantees demonstrating that SpikeGCL has comparable expressiveness to its full-precision counterparts. Experimental results demonstrate that, with nearly 32× representation storage compression, SpikeGCL is comparable to or outperforms many state-of-the-art supervised and self-supervised methods across several graph benchmarks.
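The "nearly 32× storage compression" follows directly from replacing 32-bit floating-point embeddings with 1-bit spike representations. A minimal sketch of this accounting (illustrative only, not the authors' code; the binarization threshold and shapes here are arbitrary assumptions):

```python
import numpy as np

# Hypothetical shapes for illustration: 1000 nodes, 512-dim embeddings.
num_nodes, dim = 1000, 512

# A full-precision embedding matrix stores 32 bits (4 bytes) per dimension.
full_precision = np.random.randn(num_nodes, dim).astype(np.float32)

# A binary spike representation stores 1 bit per dimension. Thresholding
# at zero is an arbitrary stand-in for the SNN's actual firing mechanism.
spikes = (full_precision > 0).astype(np.uint8)

# Pack 8 spikes per byte so storage truly reflects 1 bit per dimension.
packed = np.packbits(spikes, axis=1)

ratio = full_precision.nbytes / packed.nbytes
print(ratio)  # 32-bit floats vs 1-bit spikes -> 32.0
```

Since each float32 value occupies 32 bits and each spike occupies 1 bit, the compression ratio is exactly 32× whenever the dimensionality is a multiple of 8; the paper's "nearly 32×" hedges for any bookkeeping overhead beyond the raw representation.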
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Node Classification | IMDB | Macro F1 Score | 0.602 | 179 |
| Node Classification | Photo | Mean Accuracy | 92.5 | 165 |
| Node Classification | Physics | Accuracy | 95.21 | 145 |
| Node Classification | Computers | Mean Accuracy | 89.04 | 143 |
| Node Classification | CS | Accuracy | 91.77 | 128 |
| Node Classification | ACM | Macro F1 | 90.5 | 104 |
| Node Classification | DBLP | Micro-F1 | 91.5 | 24 |
| Link Prediction | Photo | AUC-ROC | 95.58 | 19 |
| Link Prediction | Photo (test) | AP | 95.16 | 19 |
| Link Prediction | Computers | AUC-ROC | 92.72 | 19 |