
OpenCon: Open-world Contrastive Learning

About

Machine learning models deployed in the wild naturally encounter unlabeled samples from both known and novel classes. Challenges arise in learning from both the labeled and unlabeled data, in an open-world semi-supervised manner. In this paper, we introduce a new learning framework, open-world contrastive learning (OpenCon). OpenCon tackles the challenges of learning compact representations for both known and novel classes and facilitates novelty discovery along the way. We demonstrate the effectiveness of OpenCon on challenging benchmark datasets and establish competitive performance. On the ImageNet dataset, OpenCon significantly outperforms the current best method by 11.9% and 7.4% on novel and overall classification accuracy, respectively. Theoretically, OpenCon can be rigorously interpreted from an EM algorithm perspective: minimizing our contrastive loss partially maximizes the likelihood by clustering similar samples in the embedding space. The code is available at https://github.com/deeplearning-wisc/opencon.
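The abstract's EM reading can be pictured as alternating two steps: an E-step that pseudo-labels each embedding by its nearest class prototype, and an M-step that minimizes a contrastive loss pulling embeddings toward their assigned prototype. The sketch below is a minimal NumPy illustration of that idea; the function names, the specific InfoNCE-style loss form, and the toy data are assumptions for illustration, not the paper's exact implementation (see the linked repository for that).

```python
import numpy as np

def assign_pseudo_labels(embeddings, prototypes):
    """E-step sketch: label each L2-normalized embedding by its
    most similar prototype (cosine similarity via dot product)."""
    sims = embeddings @ prototypes.T          # (n, k) similarities
    return sims.argmax(axis=1)

def prototype_contrastive_loss(embeddings, labels, prototypes, temperature=0.1):
    """M-step sketch: InfoNCE-style cross-entropy that pulls each
    embedding toward its assigned prototype in the embedding space."""
    logits = embeddings @ prototypes.T / temperature   # (n, k)
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Toy data: 4 samples and 2 prototypes, all L2-normalized 2-D vectors.
rng = np.random.default_rng(0)
prototypes = np.array([[1.0, 0.0], [0.0, 1.0]])
x = rng.normal(size=(4, 2))
x /= np.linalg.norm(x, axis=1, keepdims=True)

labels = assign_pseudo_labels(x, prototypes)           # E-step
loss = prototype_contrastive_loss(x, labels, prototypes)  # M-step objective
print(labels, loss)
```

In a training loop, the loss would be minimized by gradient descent on the encoder producing the embeddings, and the prototypes updated from the resulting clusters, which is how minimizing the contrastive loss corresponds to (partially) maximizing the clustering likelihood.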

Yiyou Sun, Yixuan Li • 2022

Related benchmarks

Task | Dataset | Result | Rank
Generalized Category Discovery | ImageNet-100 | All Accuracy: 84 | 138
Generalized Category Discovery | Stanford Cars | Accuracy (All): 49.1 | 128
Generalized Category Discovery | CUB | Accuracy (All): 54.7 | 113
Generalized Category Discovery | Herbarium19 | Score (All Categories): 39.3 | 47
Open-world semi-supervised learning | CIFAR-100 (test) | Overall Accuracy: 52.7 | 40
Generalized Category Discovery | Herbarium19 (test) | Score (All Categories): 39.3 | 37
Open-world semi-supervised learning | CIFAR-10 (test) | Overall Accuracy: 90.4 | 28
Novel Class Discovery | CIFAR-100 | ACC (Seen): 0.625 | 19
Category Discovery | Stanford Cars (Old classes) | Accuracy: 78.6 | 15
Category Discovery | Stanford Cars (All classes) | Accuracy: 49.1 | 15
Showing 10 of 20 rows
