CRAFT: A Neuro-Symbolic Framework for Visual Functional Affordance Grounding
About
We introduce CRAFT, a neuro-symbolic framework for interpretable affordance grounding, which identifies the objects in a scene that enable a given action (e.g., "cut"). CRAFT integrates structured commonsense priors from ConceptNet and language models with visual evidence from CLIP, using an energy-based reasoning loop to refine predictions iteratively. This process yields transparent, goal-driven decisions that bridge symbolic and perceptual structures. Experiments in multi-object, label-free settings demonstrate that CRAFT improves accuracy while enhancing interpretability, providing a step toward robust and trustworthy scene understanding.
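To make the energy-based reasoning loop concrete, the minimal sketch below combines per-object CLIP similarities with symbolic commonsense prior scores into a single energy term and iteratively re-weights a belief over candidate objects. This is an illustrative assumption of how such a loop could look, not the paper's implementation: the function name `refine_beliefs`, the weighting parameter `lam`, the temperature annealing schedule, and the multiplicative-weights update are all hypothetical choices.

```python
import numpy as np

def refine_beliefs(clip_sim, prior, lam=0.5, steps=5, temp=1.0):
    """Iteratively refine a belief over candidate objects (sketch).

    clip_sim : per-object CLIP similarity to the action prompt
    prior    : per-object commonsense prior (e.g., ConceptNet relatedness)
    """
    # Per-object energy: low when visual and symbolic evidence agree.
    energy = -(clip_sim + lam * prior)
    belief = np.full(len(energy), 1.0 / len(energy))
    for _ in range(steps):
        # Multiplicative-weights update: down-weight high-energy objects.
        belief = belief * np.exp(-energy / temp)
        belief /= belief.sum()
        temp *= 0.8  # anneal so the belief grows more confident each pass
    return belief

# Toy usage with made-up scores for three candidate objects.
clip_sim = np.array([0.31, 0.24, 0.27])   # e.g., "knife", "apple", "plate"
prior    = np.array([0.90, 0.10, 0.05])   # commonsense fit for "cut"
print(refine_beliefs(clip_sim, prior))    # belief peaks on the knife
```

Under these assumptions, the loop is what makes the decision inspectable: the visual and symbolic contributions to each object's energy can be read off separately at every refinement step.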
Zhou Chen, Joe Lin, Sathyanarayanan N. Aakur • 2025
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Affordance Grounding | Affordance Grounding Dataset Static Evaluation 1.0 | Accuracy | 58.76 | 20 |
| Functional Object Selection | ImageNet Functional Grounding (val) | Accuracy | 44.62 | 9 |
| Robotic Grasping | Real-world Robotic Evaluation (deployment) | 3D Accuracy | 86.11 | 8 |