
Matrix Information Theory for Self-Supervised Learning

About

The maximum entropy encoding framework provides a unified perspective on non-contrastive learning methods such as SimSiam, Barlow Twins, and MEC. Inspired by this framework, we introduce Matrix-SSL, a novel approach that uses matrix information theory to interpret the maximum entropy encoding loss as a matrix uniformity loss. Matrix-SSL further enhances maximum entropy encoding by seamlessly incorporating a matrix alignment loss that directly aligns the covariance matrices of the two branches. Experimental results show that Matrix-SSL outperforms state-of-the-art methods on the ImageNet dataset under linear evaluation and on MS-COCO in transfer learning. Specifically, on MS-COCO transfer learning tasks, our method outperforms previous SOTA methods such as MoCo v2 and BYOL by up to 3.3%, with only 400 pre-training epochs compared to their 800. We also bring representation learning into the language modeling regime by fine-tuning a 7B model with a matrix cross-entropy loss, gaining 3.1% on the GSM8K dataset over the standard cross-entropy loss. Code is available at https://github.com/yifanzhang-pro/Matrix-SSL.

Yifan Zhang, Zhiquan Tan, Jingqin Yang, Weiran Huang, Yang Yuan• 2023
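The abstract does not spell out the loss, but the idea it describes — a matrix uniformity term that pushes each branch's covariance toward a scaled identity, plus a matrix alignment term that pulls the two branches' covariances together, both measured with a matrix cross-entropy MCE(P, Q) = tr(-P log Q + Q) — can be sketched as below. This is a minimal NumPy illustration; the function names, the eps ridge, the gamma weight, and the exact formulation are assumptions for exposition, not the authors' reference implementation.

```python
import numpy as np

def matrix_log(M, eps=1e-6):
    # Matrix logarithm of a symmetric PSD matrix via eigendecomposition,
    # with a small ridge on the eigenvalues for numerical stability.
    w, V = np.linalg.eigh(M + eps * np.eye(M.shape[0]))
    return (V * np.log(w)) @ V.T

def matrix_cross_entropy(P, Q, eps=1e-6):
    # MCE(P, Q) = tr(-P log Q + Q): a matrix analogue of cross-entropy.
    return np.trace(-P @ matrix_log(Q, eps) + Q)

def covariance(Z):
    # d x d covariance of L2-normalized features Z (n x d), scaled by 1/n.
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    return Z.T @ Z / Z.shape[0]

def matrix_ssl_loss(Z1, Z2, gamma=1.0):
    # Uniformity: push each branch's covariance toward (1/d) * I
    # (the maximum-entropy target). Alignment: pull the two branches'
    # covariance matrices toward each other.
    d = Z1.shape[1]
    C1, C2 = covariance(Z1), covariance(Z2)
    target = np.eye(d) / d
    uniformity = matrix_cross_entropy(target, C1) + matrix_cross_entropy(target, C2)
    alignment = matrix_cross_entropy(C1, C2)
    return uniformity + gamma * alignment
```

In a training loop, Z1 and Z2 would be the projector outputs of two augmented views of the same batch, and the loss would be minimized by gradient descent on the encoder (in an autodiff framework rather than NumPy).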

Related benchmarks

Task | Dataset | Metric | Result | Rank
--- | --- | --- | --- | ---
Mathematical Reasoning | MATH | Accuracy | 30.2 | 535
Mathematical Reasoning | GSM8K | Accuracy | 72.3 | 358
Image Classification | ImageNet (val) | -- | -- | 300
Instance Segmentation | MS COCO 2014 2017 (val) | AP Mask @ IoU=0.75 | 38 | 46
Object Detection | COCO 2014 (val) | AP75 | 44.2 | 9
Image Classification | ImageNet 1k (train) | Accuracy @ 100 Epochs | 69.2 | 8
Semi-Supervised Learning | ImageNet 1% labels (train/val) | Top-1 Acc | 45.158 | 2
Semi-Supervised Learning | ImageNet 10% labels (train/val) | Top-1 Acc | 63.94 | 2
