
Exploring the Limits of Deep Image Clustering using Pretrained Models

About

We present a general methodology that learns to classify images without labels by leveraging pretrained feature extractors. Our approach involves self-distillation training of clustering heads based on the fact that nearest neighbours in the pretrained feature space are likely to share the same label. We propose a novel objective that learns associations between image features by introducing a variant of pointwise mutual information together with instance weighting. We demonstrate that the proposed objective is able to attenuate the effect of false positive pairs while efficiently exploiting the structure in the pretrained feature space. As a result, we improve the clustering accuracy over $k$-means on $17$ different pretrained models by $6.1$\% and $12.2$\% on ImageNet and CIFAR100, respectively. Finally, using self-supervised vision transformers, we achieve a clustering accuracy of $61.6$\% on ImageNet. The code is available at https://github.com/HHU-MMBS/TEMI-official-BMVC2023.
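The abstract describes two ingredients: mining nearest neighbours in a pretrained feature space (since neighbours likely share a label), and training clustering heads with a weighted pointwise-mutual-information objective that down-weights false-positive pairs. The sketch below illustrates both ideas in simplified form; the function names, the tempering exponent `beta`, and the soft-agreement weighting are illustrative assumptions, not the exact TEMI objective from the paper.

```python
import numpy as np

def mine_neighbours(features, k=3):
    """Indices of the k nearest neighbours (cosine similarity) of each row
    in an (n, d) matrix of features from a pretrained extractor."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T
    np.fill_diagonal(sim, -np.inf)          # exclude self-matches
    return np.argsort(-sim, axis=1)[:, :k]  # most similar first

def pmi_loss(p, q, beta=0.6, eps=1e-8):
    """Illustrative weighted PMI-style loss for paired cluster-probability
    vectors p, q of shape (n, K), one pair per image and mined neighbour.
    `beta` tempers the cluster marginal; the per-pair weight is the soft
    agreement of the two distributions, a proxy for down-weighting
    false-positive neighbour pairs (hypothetical simplification)."""
    marg = 0.5 * (p.mean(axis=0) + q.mean(axis=0))        # cluster marginal
    agreement = (p * q).sum(axis=1)                       # soft pair agreement
    pmi = np.log((p * q / (marg ** beta + eps)).sum(axis=1) + eps)
    return -(agreement * pmi).mean()
```

In this sketch, pairs whose cluster posteriors disagree contribute a small weight, so noisy neighbour pairs are attenuated rather than hard-filtered.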

Nikolas Adaloglou, Felix Michels, Hamza Kalisch, Markus Kollmann • 2023

Related benchmarks

Task                   Dataset                      Metric    Result  Rank
Image Clustering       STL-10                       ACC       96.7    229
Clustering             CIFAR-10 (test)              Accuracy  94.5    184
Clustering             STL-10 (test)                Accuracy  98.5    146
Clustering             CIFAR-100 (test)             ACC       57.8    110
Clustering             CIFAR-100-20 (test)          Accuracy  63.2    68
Clustering             ImageNet (val)               AMI       59.9    22
Clustering             CIFAR10 small-scale (val)    NMI       92.6    17
Clustering             CIFAR20 small-scale (val)    NMI       65.4    17
Clustering             STL10 small-scale (val)      NMI       96.5    17
Deep Image Clustering  ImageNet 50 (val)            NMI       95.75   10

(10 of 13 rows shown)
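The ACC metric reported above is conventionally the best one-to-one matching between predicted cluster ids and ground-truth labels. A minimal sketch of that definition follows; it brute-forces the matching over permutations for readability (fine for small K), whereas implementations typically use the Hungarian algorithm, e.g. `scipy.optimize.linear_sum_assignment`.

```python
import itertools
import numpy as np

def clustering_accuracy(y_true, y_pred, n_clusters):
    """Clustering accuracy (ACC): fraction of samples correctly labelled
    under the best one-to-one mapping from predicted cluster ids to
    ground-truth labels. Exhaustive search over mappings, so only
    suitable for small n_clusters."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    best = 0.0
    for perm in itertools.permutations(range(n_clusters)):
        mapped = np.array(perm)[y_pred]   # relabel each predicted cluster
        best = max(best, float((mapped == y_true).mean()))
    return best
```

For example, predictions that are a pure relabelling of the ground truth score 1.0 regardless of which cluster ids the model happened to assign.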

Other info

Code: https://github.com/HHU-MMBS/TEMI-official-BMVC2023