
VISIONLOGIC: From Neuron Activations to Causally Grounded Concept Rules for Vision Models

About

While concept-based explanations improve interpretability over local attributions, they often rely on correlational signals and lack causal validation. We introduce VisionLogic, a novel neural-symbolic framework that produces faithful, hierarchical explanations as global logical rules over causally validated concepts. VisionLogic first learns activation thresholds that abstract neuron activations into predicates, then induces class-level logical rules from these predicates. It then grounds predicates to visual concepts via ablation-based causal tests with iterative region refinement, ensuring that discovered concepts correspond to features that are causal for predicate activation. Across different vision architectures such as CNNs and ViTs, it produces interpretable concepts and compact rules that largely preserve the original model's predictive performance. In our large-scale human evaluations, VisionLogic's concept explanations significantly improve participants' understanding of model behavior over prior concept-based methods.
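The first stage described above, abstracting raw neuron activations into boolean predicates via learned thresholds, can be sketched as follows. This is a minimal illustration, not the paper's actual method: the function names are hypothetical, and the threshold rule used here (midpoint between class and non-class activation means) is an assumed stand-in for whatever threshold-learning procedure VisionLogic employs.

```python
import numpy as np

def learn_thresholds(acts, labels, target_class):
    """Per-neuron thresholds (illustrative): midpoint between the mean
    activation on target-class samples and on all other samples."""
    in_cls = acts[labels == target_class]
    out_cls = acts[labels != target_class]
    return (in_cls.mean(axis=0) + out_cls.mean(axis=0)) / 2.0

def to_predicates(acts, thresholds):
    """Abstract activations into boolean predicates P_i := a_i > t_i,
    over which class-level logical rules can then be induced."""
    return acts > thresholds

# Toy data: 100 samples, 8 neurons, binary labels.
rng = np.random.default_rng(0)
acts = rng.normal(size=(100, 8))
labels = rng.integers(0, 2, size=100)

t = learn_thresholds(acts, labels, target_class=1)
preds = to_predicates(acts, t)
print(preds.shape)  # (100, 8) boolean predicate matrix
```

Rule induction would then operate on `preds` (e.g., finding conjunctions of predicates that discriminate a class), and the causal-grounding stage would ablate image regions to test whether each predicate's activation depends on the concept it is mapped to.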

Chuqin Geng, Yuhe Jiang, Ziyu Zhao, Haolin Ye, Anqi Xing, Li Zhang, Xujie Si • 2025

Related benchmarks

Task                     Dataset              Result                       Rank
Predicting Model Output  Husky vs. Wolf       Accuracy (Session 1): 74.8   5
Predicting Model Output  Otter vs. Beaver     Accuracy (Session 1): 96.8   5
Predicting Model Output  Kit Fox vs. Red Fox  Accuracy (Session 1): 84.1   5
