
SphOR: A Representation Learning Perspective on Open-set Recognition for Identifying Unknown Classes in Deep Learning Models

About

The reliance on Deep Neural Network (DNN)-based classifiers in safety-critical and real-world applications necessitates Open-Set Recognition (OSR). OSR enables input data from classes unknown during training to be identified as unknown, rather than misclassified as belonging to a known class. DNNs consist of a feature extraction backbone and a classifier head; however, most OSR methods train both components jointly, often yielding feature representations that adapt poorly to unknown data. Other approaches employ off-the-shelf objectives, such as supervised contrastive learning, which are not specifically designed for OSR. To address these limitations, we propose SphOR, which explicitly shapes the feature space via supervised representation learning before training a classifier. Instead of relying on generic feature learning, SphOR custom-designs representation learning for OSR through three key innovations: (1) enforcing discriminative, class-specific features via orthogonal label embeddings, ensuring clearer separation between classes; (2) imposing a spherical constraint, modeling representations as a mixture of von Mises-Fisher distributions; and (3) integrating Mixup and Label Smoothing (LS) directly into the representation learning stage. To quantify how these techniques enhance representations for OSR, we introduce two metrics: Angular Separability (AS) and Norm Separability (NS). Combining all three innovations, SphOR achieves state-of-the-art results (in AUROC and OSCR) across various coarse-grained and fine-grained open-set benchmarks, particularly excelling on the Semantic Shift Benchmark with improvements of up to 5.1%. Code is available at https://github.com/nadarasarbahavan/SpHOR
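The three ingredients named in the abstract can be illustrated with a minimal numpy sketch. This is not the released implementation; the function names, the choice of standard basis vectors as orthogonal label embeddings, and the fixed concentration parameter `kappa` are all illustrative assumptions.

```python
import numpy as np

def orthogonal_label_embeddings(num_classes, dim):
    # Mutually orthogonal unit vectors, one per class. A simple choice
    # (assumed here, not prescribed by the paper) is the first
    # `num_classes` standard basis vectors of the feature space.
    assert dim >= num_classes, "need dim >= num_classes for orthogonality"
    return np.eye(dim)[:num_classes]

def vmf_logits(features, class_means, kappa=10.0):
    # Spherical constraint: project features onto the unit sphere, then
    # score each class by a von Mises-Fisher log-density up to a shared
    # constant, i.e. kappa * cos(angle between feature and class mean).
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    return kappa * z @ class_means.T

def mixup(x, y_onehot, lam=0.7, rng=None):
    # Mixup applied at the representation-learning stage: convex
    # combination of a batch with a shuffled copy of itself, mixing
    # inputs and soft labels with the same coefficient.
    rng = np.random.default_rng(rng)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix
```

Under this sketch, a feature aligned with its class embedding receives the maximal logit `kappa`, while inputs far from every class mean score uniformly low, which is the property an open-set score can exploit.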

Nadarasar Bahavan, Sachith Seneviratne, Saman Halgamuge • 2025

Related benchmarks

Task                  Dataset                                           Result                      Rank
Open Set Recognition  CIFAR10                                           AUROC 0.945                 76
Open Set Recognition  TinyImageNet                                      AUROC 84.1                  51
Open Set Recognition  SVHN                                              AUROC 0.991                 51
Open Set Recognition  CIFAR+50                                          AUROC 97.2                  50
Open Set Recognition  Caltech-UCSD-Birds (CUB) Easy split 42 (test)     Closed-set Accuracy 90.8    30
Open Set Recognition  CIFAR+10                                          AUROC 0.982                 24
Open Set Recognition  CIFAR10 vs SVHN Legacy Benchmark B                DTACC 97.6                  12
Open Set Recognition  CIFAR10 vs CIFAR100 Legacy Benchmark B            DTACC 86.7                  12
Open Set Recognition  Caltech-UCSD-Birds (CUB) Hard split 42 (test)     Closed-set Accuracy 0.908   2
