
Cambrian-S: Towards Spatial Supersensing in Video

About

We argue that progress in true multimodal intelligence calls for a shift from reactive, task-driven systems and brute-force long context towards a broader paradigm of supersensing. We frame spatial supersensing as four stages beyond linguistic-only understanding: semantic perception (naming what is seen), streaming event cognition (maintaining memory across continuous experiences), implicit 3D spatial cognition (inferring the world behind pixels), and predictive world modeling (creating internal models that filter and organize information). Current benchmarks largely test only the early stages, offering narrow coverage of spatial cognition and rarely challenging models in ways that require true world modeling. To drive progress in spatial supersensing, we present VSI-SUPER, a two-part benchmark: VSR (long-horizon visual spatial recall) and VSC (continual visual spatial counting). These tasks require arbitrarily long video inputs yet are resistant to brute-force context expansion. We then test data scaling limits by curating VSI-590K and training Cambrian-S, achieving +30% absolute improvement on VSI-Bench without sacrificing general capabilities. Yet performance on VSI-SUPER remains limited, indicating that scale alone is insufficient for spatial supersensing. We propose predictive sensing as a path forward, presenting a proof-of-concept in which a self-supervised next-latent-frame predictor leverages surprise (prediction error) to drive memory and event segmentation. On VSI-SUPER, this approach substantially outperforms leading proprietary baselines, showing that spatial supersensing requires models that not only see but also anticipate, select, and organize experience.
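The surprise-driven mechanism described above can be illustrated with a minimal sketch: given a stream of latent frames and a next-latent-frame predictor, treat the z-scored prediction error as "surprise" and open a new event segment whenever it spikes. The function name, threshold, and the identity predictor in the toy usage are illustrative assumptions, not the paper's actual model, which is a learned self-supervised predictor.

```python
import numpy as np

def segment_by_surprise(latents, predict_fn, threshold=2.0):
    """Split a stream of latent frames into events at surprise peaks.

    Surprise is the L2 error between the predicted next latent and the
    observed one, z-scored against the errors seen so far. Hypothetical
    sketch; the paper's predictor is a trained next-latent-frame model.
    """
    boundaries = [0]   # index where each event segment starts
    errors = []        # running history of prediction errors
    for t in range(1, len(latents)):
        pred = predict_fn(latents[t - 1])
        err = float(np.linalg.norm(latents[t] - pred))
        if len(errors) >= 2:
            mu, sigma = np.mean(errors), np.std(errors) + 1e-8
            if (err - mu) / sigma > threshold:  # surprise spike
                boundaries.append(t)            # start a new event
        errors.append(err)
    return boundaries

# Toy usage: an identity predictor on a stream with one abrupt shift
# detects a single event boundary at the shift.
stream = np.concatenate([np.zeros((50, 8)), np.ones((50, 8)) * 5.0])
print(segment_by_surprise(stream, predict_fn=lambda z: z))  # → [0, 50]
```

In the same spirit, low-surprise frames can be compressed or dropped from memory while high-surprise frames are retained, which is what makes the approach resistant to brute-force context expansion.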

Shusheng Yang, Jihan Yang, Pinzhi Huang, Ellis Brown, Zihao Yang, Yue Yu, Shengbang Tong, Zihan Zheng, Yifan Xu, Muhan Wang, Daohan Lu, Rob Fergus, Yann LeCun, Li Fei-Fei, Saining Xie · 2025

Related benchmarks

Task                                   Dataset               Metric                   Result  Rank
Multimodal Understanding               MMStar                Accuracy                 43.9    324
Diagram Understanding                  AI2D                  Accuracy                 76.9    247
Optical Character Recognition          OCRBench              Score                    648     232
Spatial Reasoning                      VSI-Bench             Avg Score                67.5    192
Document Visual Question Answering     DocVQA                Accuracy                 83.7    132
Multimodal Reasoning                   WeMath                Accuracy                 40.7    129
Spatial Reasoning                      Viewspatial           Accuracy                 40.9    92
Multimodal Understanding               POPE                  POPE Score               0.868   90
Visual Perception                      MMVP                  Accuracy                 54      82
Spatial Reasoning                      VSI-Bench 1.0 (test)  Relative Distance Error  64.8    80
Showing 10 of 47 rows
