
Towards Faithful Multimodal Concept Bottleneck Models

About

Concept Bottleneck Models (CBMs) are interpretable models that route predictions through a layer of human-interpretable concepts. While widely studied in vision and, more recently, in NLP, CBMs remain largely unexplored in multimodal settings. For their explanations to be faithful, CBMs must satisfy two conditions: concepts must be properly detected, and concept representations must encode only their intended semantics, without smuggling extraneous task-relevant or inter-concept information into final predictions, a phenomenon known as leakage. Existing approaches treat concept detection and leakage mitigation as separate problems, and typically improve one at the expense of predictive accuracy. In this work, we introduce f-CBM, a faithful multimodal CBM framework built on a vision-language backbone that jointly targets both aspects through two complementary strategies: a differentiable leakage loss to mitigate leakage, and a Kolmogorov-Arnold Network prediction head that provides sufficient expressiveness to improve concept detection. Experiments demonstrate that f-CBM achieves the best trade-off between task accuracy, concept detection, and leakage reduction, while applying seamlessly to both image-text and text-only datasets, making it versatile across modalities.
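The bottleneck structure described above can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the dimensions are made up, and a plain linear head stands in for the Kolmogorov-Arnold Network head, purely to show how class logits are computed only from the concept layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper)
d_in, n_concepts, n_classes = 16, 5, 3

# Concept layer: input features -> concept logits
W_c = rng.normal(size=(n_concepts, d_in)) * 0.1
# Prediction head: concept activations -> class logits.
# (f-CBM uses a Kolmogorov-Arnold Network head; a linear map
#  stands in here only to illustrate the bottleneck routing.)
W_y = rng.normal(size=(n_classes, n_concepts)) * 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbm_forward(x):
    """Route the prediction through the concept bottleneck."""
    c = sigmoid(W_c @ x)   # concept probabilities in [0, 1]
    y = W_y @ c            # class logits computed ONLY from concepts
    return c, y

x = rng.normal(size=d_in)
concepts, logits = cbm_forward(x)
print(concepts.shape, logits.shape)
```

Because the head sees nothing but the concept vector, any task-relevant signal hidden inside the concept activations beyond their intended semantics is exactly the leakage the paper's differentiable leakage loss is designed to penalize.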

Pierre Moreau, Emeline Pineau Ferrand, Yann Choho, Benjamin Wong, Annabelle Blangero, Milan Bhan • 2026

Related benchmarks

| Task                 | Dataset         | Result        | Rank |
|----------------------|-----------------|---------------|------|
| Image Classification | CUB-200 (test)  | Accuracy 85.3 | 98   |
| Text Classification  | DBPedia (test)  | --            | 40   |
| News Classification  | AG News (test)  | Accuracy 90.7 | 34   |
| Classification       | N24 News (test) | Accuracy 98.2 | 10   |
