
Cognitively-Inspired Tokens Overcome Egocentric Bias in Multimodal Models

About

Multimodal language models (MLMs) perform well on semantic vision-language tasks but fail at spatial reasoning that requires adopting another agent's visual perspective. These errors reflect a persistent egocentric bias and raise questions about whether current models support allocentric reasoning. Inspired by human spatial cognition, we introduce perspective tokens, specialized embeddings that encode orientation through either (1) embodied body-keypoint cues or (2) abstract representations supporting mental rotation. Integrating these tokens into LLaVA-1.5-13B yields improved performance on level-2 visual perspective-taking tasks. Across synthetic and naturalistic benchmarks (Isle Bricks V2, COCO, 3DSRBench), perspective tokens improve accuracy, with rotation-based tokens generalizing to non-human reference agents. Representational analyses reveal that fine-tuning enhances latent orientation sensitivity already present in the base model, suggesting that MLMs contain precursors of allocentric reasoning but lack appropriate internal structure. Overall, embedding cognitively grounded spatial structure directly into token space provides a lightweight, model-agnostic mechanism for perspective-taking and more human-like spatial reasoning.

Bridget Leonard, Scott O. Murray • 2026
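To make the mechanism concrete, below is a minimal, hypothetical PyTorch sketch of rotation-style perspective tokens: a small learned module maps an agent's orientation (here a yaw angle, e.g., estimated from body keypoints) to a few embeddings that are prepended to the vision tokens before they enter the language model. The module name, the sin/cos encoding, the token count, and the 5120-dim hidden size (LLaVA-1.5-13B's) are illustrative assumptions, not the paper's released implementation.

```python
# Hypothetical sketch of "perspective tokens" (names and design choices are
# assumptions for illustration; the paper's actual code may differ).
import torch
import torch.nn as nn

class PerspectiveTokenizer(nn.Module):
    def __init__(self, hidden_dim: int = 5120, num_tokens: int = 4):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.num_tokens = num_tokens
        # Small MLP from a sin/cos orientation encoding to token embeddings.
        self.proj = nn.Sequential(
            nn.Linear(2, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, num_tokens * hidden_dim),
        )

    def forward(self, yaw: torch.Tensor) -> torch.Tensor:
        # yaw: (batch,) reference agent's orientation in radians.
        enc = torch.stack([torch.sin(yaw), torch.cos(yaw)], dim=-1)  # (B, 2)
        tokens = self.proj(enc)                                      # (B, T*D)
        return tokens.view(-1, self.num_tokens, self.hidden_dim)     # (B, T, D)

# Usage: prepend perspective tokens to the vision-token embeddings before
# they enter the language model.
tokenizer = PerspectiveTokenizer()
yaw = torch.tensor([1.57])                 # agent facing roughly 90 degrees
persp = tokenizer(yaw)                     # (1, 4, 5120)
vision_tokens = torch.randn(1, 576, 5120)  # stand-in for CLIP patch tokens
lm_input = torch.cat([persp, vision_tokens], dim=1)
print(lm_input.shape)                      # torch.Size([1, 580, 5120])
```

Prepending a handful of extra embeddings leaves the base model's weights and tokenizer untouched, which is consistent with the abstract's claim that the mechanism is lightweight and model-agnostic.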

Related benchmarks

Task | Dataset | Result | Rank
3D Spatial Reasoning | 3DSRBench | Accuracy: 67 | 23
Spatial Reasoning | COCO 2017 (val) | Alignment Accuracy: 70 | 12
Visual Perspective Taking | Perspective Taking | Alignment Score: 100 | 11
Spatial Reasoning | Isle Bricks V2 | Alignment Score: 100 | 11
