
ProtoQuant: Quantization of Prototypical Parts For General and Fine-Grained Image Classification

About

Prototypical parts-based models offer a "this looks like that" paradigm for intrinsic interpretability, yet they typically struggle to generalize at ImageNet scale and often require computationally expensive backbone finetuning. Furthermore, existing methods frequently suffer from "prototype drift," where learned prototypes lack tangible grounding in the training distribution and change their activations under small perturbations. We present ProtoQuant, a novel architecture that achieves prototype stability and grounded interpretability through latent vector quantization. By constraining prototypes to a discrete learned codebook within the latent space, we ensure they remain faithful representations of the training data without the need to update the backbone. This design allows ProtoQuant to function as an efficient, interpretable head that scales to large datasets. We evaluate ProtoQuant on ImageNet and several fine-grained benchmarks (CUB-200, Cars-196). Our results demonstrate that ProtoQuant achieves competitive classification accuracy, generalizes to ImageNet, and matches the interpretability metrics of other prototypical-parts-based methods.
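The core mechanism described above — snapping each prototype to its nearest entry in a discrete learned codebook so it stays grounded in the latent space of the frozen backbone — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; all dimensions, the L2 nearest-neighbor assignment, and the distance-based activation are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): D-dim latent features,
# K codebook entries, P prototypes.
D, K, P = 8, 16, 4

codebook = rng.normal(size=(K, D))    # discrete learned codebook in latent space
prototypes = rng.normal(size=(P, D))  # continuous prototype parameters

def quantize(protos, codebook):
    """Snap each prototype to its nearest codebook entry (squared L2)."""
    d2 = ((protos[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (P, K)
    idx = d2.argmin(axis=1)
    return codebook[idx], idx

q_protos, idx = quantize(prototypes, codebook)

# Every quantized prototype is exactly a codebook row, so it cannot
# drift to a point with no grounding in the learned discrete vocabulary.
for p in range(P):
    assert np.allclose(q_protos[p], codebook[idx[p]])

# A prototype activation for one patch feature: negative squared distance,
# so the quantized prototype it most resembles scores highest.
patch = rng.normal(size=(D,))
activations = -((q_protos - patch) ** 2).sum(axis=1)  # shape (P,)
```

Because the backbone is frozen, only the codebook and the classification head on top of `activations` would be trained, which is what makes the head cheap enough to scale to ImageNet.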

Mikołaj Janusz, Adam Wróbel, Bartosz Zieliński, Dawid Rymarczyk • 2026

Related benchmarks

Task                              | Dataset              | Result               | Rank
----------------------------------|----------------------|----------------------|-----
Image Classification              | ImageNet 1k (test)   | --                   | 359
Fine-grained Image Classification | Stanford Cars        | Accuracy: 92.6       | 206
Fine-grained Image Classification | Oxford Flowers       | Accuracy: 97         | 49
Fine-grained Image Classification | CUB-200              | Accuracy (All): 87.6 | 32
Fine-grained Image Classification | Stanford Dogs        | --                   | 18
Prototypical Part Purity          | CUB-200-2011 (train) | Purity: 47           | 7
Prototypical Part Purity          | CUB-200-2011 (test)  | Purity: 47           | 7
