
VLG-CBM: Training Concept Bottleneck Models with Vision-Language Guidance

About

Concept Bottleneck Models (CBMs) provide interpretable predictions by introducing an intermediate Concept Bottleneck Layer (CBL), which encodes human-understandable concepts to explain the model's decisions. Recent works have proposed using Large Language Models and pre-trained Vision-Language Models to automate the training of CBMs, making them more scalable. However, existing approaches still fall short in two aspects: first, the concepts predicted by the CBL often mismatch the input image, raising doubts about the faithfulness of the interpretation; second, it has been shown that concept values encode unintended information: even a set of random concepts can achieve test accuracy comparable to state-of-the-art CBMs. To address these critical limitations, we propose a novel framework, the Vision-Language-Guided Concept Bottleneck Model (VLG-CBM), which enables faithful interpretability alongside improved performance. Our method leverages off-the-shelf open-domain grounded object detectors to provide visually grounded concept annotations, which substantially enhance the faithfulness of concept prediction while further improving model performance. In addition, we propose a new metric, the Number of Effective Concepts (NEC), to control information leakage and provide better interpretability. Extensive evaluations across five standard benchmarks show that VLG-CBM outperforms existing methods by at least 4.27% and up to 51.09% in accuracy at NEC=5 (denoted ANEC-5), and by at least 0.45% and up to 29.78% in average accuracy (denoted ANEC-avg), while preserving both the faithfulness and the interpretability of the learned concepts, as demonstrated in extensive experiments.
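The abstract reports accuracy at a fixed NEC. As a rough illustration, if one takes NEC to be the average number of concepts with non-zero weight per class in the final sparse linear layer (an assumption for this sketch; the paper's exact definition may differ, and `effective_concepts` is a hypothetical helper):

```python
import numpy as np

def effective_concepts(weight_matrix, tol=1e-6):
    """Average, across classes, of how many concept weights are
    effectively non-zero in the final linear layer."""
    W = np.asarray(weight_matrix)               # shape: (num_classes, num_concepts)
    nonzero_per_class = (np.abs(W) > tol).sum(axis=1)
    return nonzero_per_class.mean()

# Toy final layer: 3 classes x 6 concepts, sparsified so most weights are zero.
W = np.array([
    [0.9, 0.0, 0.0, 0.4, 0.0, 0.0],   # class 0 uses 2 concepts
    [0.0, 1.2, 0.0, 0.0, 0.3, 0.1],   # class 1 uses 3 concepts
    [0.0, 0.0, 0.7, 0.0, 0.0, 0.0],   # class 2 uses 1 concept
])
print(effective_concepts(W))  # 2.0
```

Under this reading, controlling NEC amounts to sparsifying the concept-to-class layer until each class relies on only a handful of concepts.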

Divyansh Srivastava, Ge Yan, Tsui-Wei Weng • 2024

Related benchmarks

Task                  Dataset               Metric      Result   Rank
Image Classification  CIFAR-100 (val)       Accuracy    65.73    661
Image Classification  Food-101              Accuracy    92.8     494
Image Classification  Flowers102            Accuracy    97.1     478
Image Classification  Food101               Accuracy    81.6     309
Image Classification  CUB-200-2011 (test)   Top-1 Acc   66.03    276
Image Classification  RESISC45              --          --       263
Image Classification  CUB-200 2011          Accuracy    84.5     257
Image Classification  Oxford Flowers 102    --          --       172
Image Classification  CUB200 (val)          Accuracy    60.38    66
Image Classification  CIFAR-10 (test)       Accuracy    88.63    59

Showing 10 of 22 rows
