
Keep it SymPL: Symbolic Projective Layout for Allocentric Spatial Reasoning in Vision-Language Models

About

Perspective-aware spatial reasoning involves understanding spatial relationships from a specific viewpoint, either egocentric (observer-centered) or allocentric (object-centered). While vision-language models (VLMs) perform well in egocentric settings, their performance deteriorates under allocentric viewpoints, where spatial relations must be inferred from the perspective of objects within the scene. In this study, we address this underexplored challenge by introducing Symbolic Projective Layout (SymPL), a framework that reformulates allocentric reasoning into symbolic-layout forms that VLMs inherently handle well. By leveraging four key factors (projection, abstraction, bipartition, and localization), SymPL converts allocentric questions into structured symbolic-layout representations. Extensive experiments demonstrate that this reformulation substantially improves performance on both allocentric and egocentric tasks, enhances robustness under visual illusions and in multi-view scenarios, and depends critically on each of the four components for these gains. These results show that SymPL provides an effective and principled approach to complex perspective-aware spatial reasoning.
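The paper does not publish its implementation here, but the core projection-and-bipartition idea can be illustrated with a small sketch: express each object's position in the reference object's local frame (projection), then split that frame into left/right half-planes (bipartition) to obtain a symbolic relation. The function names, 2D coordinates, and heading convention below are all illustrative assumptions, not the authors' code.

```python
import math

def project_to_reference_frame(obj_xy, ref_xy, ref_heading):
    """Projection: express obj's 2D position in the reference object's
    local frame. ref_heading is the angle (radians, from the +x axis)
    of the direction the reference object faces; after rotation the
    reference faces its local +x axis. (Illustrative convention.)"""
    dx, dy = obj_xy[0] - ref_xy[0], obj_xy[1] - ref_xy[1]
    c, s = math.cos(ref_heading), math.sin(ref_heading)
    # Rotate the displacement by -ref_heading.
    return (c * dx + s * dy, -s * dx + c * dy)

def symbolic_relation(obj_xy, ref_xy, ref_heading):
    """Bipartition: split the reference frame into left/right half-planes.
    With the reference facing local +x, its left side is local +y."""
    _, y = project_to_reference_frame(obj_xy, ref_xy, ref_heading)
    return "left" if y > 0 else "right"

# Reference at the origin facing +x; an object at (1, 1) is on its left.
print(symbolic_relation((1, 1), (0, 0), 0.0))
# Reference facing +y; an object at (1, 0) is on its right.
print(symbolic_relation((1, 0), (0, 0), math.pi / 2))
```

An allocentric question such as "is the mug to the chair's left?" then reduces to evaluating this relation from the chair's frame, a symbolic form a VLM can consume directly; the abstraction and localization steps (deriving positions and headings from the image) are omitted here.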

Jaeyun Jang, Seunghui Shin, Taeho Park, Hyoseok Hwang • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Allocentric Spatial Reasoning | COMFORT | Left/Right Accuracy | 69 | 19
Allocentric Spatial Reasoning | 3DSRBench | Left/Right Accuracy | 79.94 | 19
Egocentric Spatial Reasoning | COCOSPATIAL | Left/Right Accuracy | 89.83 | 19
Perspective-Aware Spatial Reasoning | COMFORT Visual Illusions | Directional Accuracy (Left/Right) | 95.38 | 19
Viewpoint-Aware Consistency Reasoning | COMFORT Multi | Left/Right Consistency | 76 | 7
