
Max-Margin Contrastive Learning

About

Standard contrastive learning approaches usually require a large number of negatives for effective unsupervised learning and often exhibit slow convergence. We suspect this behavior is due to the suboptimal selection of negatives used for offering contrast to the positives. We counter this difficulty by taking inspiration from support vector machines (SVMs) to present max-margin contrastive learning (MMCL). Our approach selects negatives as the sparse support vectors obtained via a quadratic optimization problem, and contrastiveness is enforced by maximizing the decision margin. As SVM optimization can be computationally demanding, especially in an end-to-end setting, we present simplifications that alleviate the computational burden. We validate our approach on standard vision benchmark datasets, demonstrating better performance in unsupervised representation learning over the state of the art, while having better empirical convergence properties.
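The core idea can be sketched in code. The following is an illustrative toy example, not the authors' implementation: for one anchor, a bias-free linear SVM separating the positive embedding (+1) from candidate negatives (-1) is solved in its dual form by projected gradient ascent, and the negatives with nonzero dual weight are the sparse support vectors that MMCL would use as contrast. The function name, step sizes, and the 2-D unit-norm embeddings are all assumptions made for the sketch.

```python
import numpy as np

def mmcl_support_negatives(positive, negatives, C=10.0, lr=0.05, steps=5000):
    """Illustrative MMCL-style negative selection (not the paper's exact solver).

    Solves the dual of a bias-free linear SVM that separates the positive
    embedding (+1) from candidate negatives (-1) via projected gradient
    ascent under the box constraint 0 <= alpha <= C. Negatives with nonzero
    dual weight are the sparse support vectors used to contrast the anchor.
    """
    X = np.vstack([positive[None, :], negatives])
    y = np.array([1.0] + [-1.0] * len(negatives))
    K = X @ X.T                                   # linear kernel (Gram matrix)
    alpha = np.zeros(len(X))
    for _ in range(steps):
        grad = 1.0 - y * (K @ (alpha * y))        # gradient of the dual objective
        alpha = np.clip(alpha + lr * grad, 0.0, C)  # project onto the box
    support = np.where(alpha[1:] > 1e-4)[0]       # indices into `negatives`
    return support, alpha

# Toy geometry on the unit circle: one hard negative close to the positive,
# five easy negatives far away from it.
positive = np.array([0.0, 1.0])
negatives = np.array([
    [1.0, 0.0], [0.95, -0.31], [0.87, -0.5], [0.8, -0.6], [0.6, -0.8],
    [0.6, 0.8],  # index 5: hard negative, close to the positive
])
support, alpha = mmcl_support_negatives(positive, negatives)
print(support)  # the hard negative (index 5) should be the sole support vector
```

Only the hard negative survives the quadratic program, which is the sparsity the abstract refers to: the easy negatives receive zero dual weight and contribute nothing to the contrastive signal, so the loss concentrates on the few negatives that actually determine the margin.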

Anshul Shah, Suvrit Sra, Rama Chellappa, Anoop Cherian • 2021

Related benchmarks

Task                       Dataset                Metric      Result   Rank
Image Classification       SUN397                 Accuracy    62.78    425
Image Classification       Stanford Cars (test)   Accuracy    89.23    306
Image Classification       FGVC-Aircraft (test)   Accuracy    85.38    231
Surface Normal Estimation  NYU v2 (test)          --          --       206
Image Classification       Pets                   Accuracy    87.81    204
Image Classification       DTD (test)             Accuracy    73.51    181
Video Action Recognition   UCF101                 Top-1 Acc   68.01    153
Image Classification       Caltech101 (test)      Accuracy    87.82    121
Image Classification       Food-101 (test)        Top-1 Acc   82.39    89
Image Classification       ImageNet-100           --          --       84
Showing 10 of 14 rows
