
CFM: Language-aligned Concept Foundation Model for Vision

About

Language-aligned vision foundation models perform strongly across diverse downstream tasks. Yet, their learned representations remain opaque, making their decision-making difficult to interpret. Recent work decomposes these representations into human-interpretable concepts, but provides poor spatial grounding and is limited to image classification tasks. In this work, we propose CFM, a language-aligned concept foundation model for vision that provides fine-grained concepts which are human-interpretable and spatially grounded in the input image. When paired with a foundation model with strong semantic representations, CFM yields explanations for any of its downstream tasks. Examining local co-occurrence dependencies of concepts allows us to define concept relationships, through which we improve concept naming and obtain richer explanations. On benchmark data, we show that CFM achieves classification, segmentation, and captioning performance competitive with opaque foundation models while providing fine-grained, high-quality concept-based explanations. Code is available at https://github.com/kawi19/CFM.
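The abstract describes decomposing a backbone's patch features into spatially grounded concept activations and then reading off concept relationships from local co-occurrence. The following is a minimal illustrative sketch of that idea, not the paper's implementation: the concept dictionary, dimensions, and thresholding are all assumptions made for the example.

```python
import numpy as np

# Illustrative sketch (not CFM's actual method): project a grid of patch
# features onto a small concept dictionary, then measure how often pairs
# of concepts fire on the same patch.
rng = np.random.default_rng(0)

H, W, D, K = 4, 4, 8, 3                      # patch grid, feature dim, #concepts
features = rng.normal(size=(H, W, D))        # patch features from a vision backbone
concepts = rng.normal(size=(K, D))           # hypothetical concept dictionary

# Spatially grounded concept activations: one score per concept per patch.
acts = np.einsum("hwd,kd->hwk", features, concepts)
acts = np.maximum(acts, 0.0)                 # keep non-negative activations only

# Local co-occurrence: count patches where two concepts are active together.
present = (acts > 0).reshape(-1, K).astype(float)
cooc = present.T @ present                   # K x K symmetric co-occurrence counts

print(acts.shape)                            # per-concept activation maps
print(cooc.shape)                            # concept-relationship statistics
```

Each slice `acts[:, :, k]` is a heatmap over the input's patch grid, which is what "spatially grounded" means here; the co-occurrence matrix is one simple way such concept relationships could be quantified.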

Kai Wittenmayer, Sukrut Rao, Amin Parchami-Araghi, Bernt Schiele, Jonas Fischer • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
--- | --- | --- | --- | ---
Image Classification | ImageNet (test) | Top-1 Accuracy | 78.9 | 299
Open Vocabulary Semantic Segmentation | Pascal VOC 20 | mIoU | 80.7 | 104
Open-Vocabulary Segmentation | Cityscapes | mIoU | 31.5 | 49
Open-Vocabulary Segmentation | COCO Object | mIoU | 33.3 | 34
Open Vocabulary Semantic Segmentation | COCO Stuff | mIoU | 24.0 | 34
Open-Vocabulary Segmentation | Pascal Context | mIoU | 33.2 | 20
Open-Vocabulary Segmentation | ADE20K | mIoU | 20.4 | 18
Open Vocabulary Semantic Segmentation | Pascal VOC | mIoU | 61.6 | 14
Open Vocabulary Semantic Segmentation | Pascal Context 59 | mIoU | 36.5 | 10
Image Classification | Places365 (test) | Accuracy | 55.4 | 9

Showing 10 of 12 rows
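Most of the segmentation rows above report mIoU (mean Intersection-over-Union). As a quick reference, here is a minimal sketch of how that metric is computed; it is a standard definition, not tied to the paper's evaluation code.

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean IoU over classes: per class, intersection of predicted and
    ground-truth masks divided by their union, averaged over classes
    that appear in either map."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                      # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Tiny worked example on a 2x3 label map with 3 classes.
pred = np.array([[0, 0, 1], [1, 1, 2]])
gt   = np.array([[0, 1, 1], [1, 1, 2]])
print(mean_iou(pred, gt, 3))  # (0.5 + 0.75 + 1.0) / 3 = 0.75
```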
