Multi-Modal Representation Learning via Semi-Supervised Rate Reduction for Generalized Category Discovery
About
Generalized Category Discovery (GCD) aims to identify both known and unknown categories when labels are available for only part of the known categories, posing a challenging open-set recognition problem. State-of-the-art approaches to GCD are usually built on multi-modal representation learning and depend heavily on inter-modality alignment. However, few of them enforce a proper intra-modality alignment to induce the desired underlying structure of the representation distributions. In this paper, we propose SSR$^2$-GCD, a novel and effective multi-modal representation learning framework for GCD via Semi-Supervised Rate Reduction, which learns cross-modal representations with the desired structural properties by explicitly aligning intra-modality relationships. Moreover, to boost knowledge transfer, we integrate prompt candidates by leveraging the inter-modal alignment offered by Vision-Language Models. Extensive experiments on generic and fine-grained benchmark datasets demonstrate the superior performance of our approach.
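The abstract does not spell out the rate-reduction objective, but the term conventionally refers to the maximal coding rate reduction (MCR$^2$) criterion: expand the coding rate of the whole feature batch while compressing the rate of each (pseudo-)class partition. A minimal sketch of that standard objective, assuming unit-normalized features `Z` of shape `(n, d)` and per-sample class or pseudo-label assignments (the function names and `eps` quantization parameter here are illustrative, not from the paper):

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z) = 1/2 * logdet(I + d / (n * eps^2) * Z^T Z),
    the rate needed to code n d-dim features up to distortion eps."""
    n, d = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + d / (n * eps**2) * Z.T @ Z)
    return 0.5 * logdet

def rate_reduction(Z, labels, eps=0.5):
    """Delta R = R(Z) - sum_j (n_j / n) * R(Z_j): expand the whole
    batch while compressing each class partition Z_j."""
    expand = coding_rate(Z, eps)
    compress = sum(
        np.mean(labels == c) * coding_rate(Z[labels == c], eps)
        for c in np.unique(labels)
    )
    return expand - compress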
Related benchmarks
| Task | Dataset | Accuracy (All) | Rank |
|---|---|---|---|
| Generalized Category Discovery | ImageNet-100 | 92.1 | 138 |
| Generalized Category Discovery | CIFAR-100 | 86.4 | 133 |
| Generalized Category Discovery | Stanford Cars | 89.2 | 128 |
| Generalized Category Discovery | CUB | 78.3 | 113 |
| Generalized Category Discovery | CIFAR-10 | 98.5 | 105 |
| Generalized Category Discovery | ImageNet-1K | 66.7 | 19 |
| Generalized Category Discovery | Oxford Pets | 95.7 | 11 |
| Generalized Category Discovery | Flowers102 | 93.5 | 10 |