
Cosmos-Reason1: From Physical Common Sense To Embodied Reasoning

About

Physical AI systems need to perceive, understand, and perform complex actions in the physical world. In this paper, we present the Cosmos-Reason1 models that can understand the physical world and generate appropriate embodied decisions (e.g., next step action) in natural language through long chain-of-thought reasoning processes. We begin by defining key capabilities for Physical AI reasoning, with a focus on physical common sense and embodied reasoning. To represent physical common sense, we use a hierarchical ontology that captures fundamental knowledge about space, time, and physics. For embodied reasoning, we rely on a two-dimensional ontology that generalizes across different physical embodiments. Building on these capabilities, we develop two multimodal large language models, Cosmos-Reason1-7B and Cosmos-Reason1-56B. We curate data and train our models in two stages: Physical AI supervised fine-tuning (SFT) and Physical AI reinforcement learning (RL). To evaluate our models, we build comprehensive benchmarks for physical common sense and embodied reasoning according to our ontologies. Evaluation results show that Physical AI SFT and RL bring significant improvements. To facilitate the development of Physical AI, we make our code and pre-trained models available under the NVIDIA Open Model License at https://github.com/nvidia-cosmos/cosmos-reason1.
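The hierarchical ontology described above (top-level categories of space, time, and physics) can be pictured as a simple tree of capability categories. The sketch below is purely illustrative and is not the paper's actual schema; the top-level names come from the abstract, while the leaf subcategories are hypothetical placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class OntologyNode:
    """One node in a hierarchical capability ontology (illustrative only)."""
    name: str
    children: list["OntologyNode"] = field(default_factory=list)

    def leaves(self) -> list[str]:
        """Return the names of all leaf capabilities under this node."""
        if not self.children:
            return [self.name]
        out: list[str] = []
        for child in self.children:
            out.extend(child.leaves())
        return out

# Top-level categories are from the abstract; subcategories are hypothetical.
physical_common_sense = OntologyNode("PhysicalCommonSense", [
    OntologyNode("Space", [OntologyNode("spatial relationships (illustrative)")]),
    OntologyNode("Time", [OntologyNode("temporal order (illustrative)")]),
    OntologyNode("Physics", [OntologyNode("object permanence (illustrative)")]),
])

print(physical_common_sense.leaves())
```

A tree like this makes it easy to enumerate leaf capabilities when curating training data or building per-category benchmark splits.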

NVIDIA: Alisson Azzolini, Junjie Bai, Hannah Brandon, Jiaxin Cao, Prithvijit Chattopadhyay, Huayu Chen, Jinju Chu, Yin Cui, Jenna Diamond, Yifan Ding, Liang Feng, Francesco Ferroni, Rama Govindaraju, Jinwei Gu, Siddharth Gururani, Imad El Hanafi, Zekun Hao, Jacob Huffman, Jingyi Jin, Brendan Johnson, Rizwan Khan, George Kurian, Elena Lantz, Nayeon Lee, Zhaoshuo Li, Xuan Li, Maosheng Liao, Tsung-Yi Lin, Yen-Chen Lin, Ming-Yu Liu, Xiangyu Lu, Alice Luo, Andrew Mathau, Yun Ni, Lindsey Pavao, Wei Ping, David W. Romero, Misha Smelyanskiy, Shuran Song, Lyne Tchapmi, Andrew Z. Wang, Boxin Wang, Haoxiang Wang, Fangyin Wei, Jiashu Xu, Yao Xu, Dinghao Yang, Xiaodong Yang, Zhuolin Yang, Jingxu Zhang, Xiaohui Zeng, Zhe Zhang • 2025

Related benchmarks

| Task                                      | Dataset                      | Metric                      | Result | Rank |
|-------------------------------------------|------------------------------|-----------------------------|--------|------|
| Spatial Reasoning                         | CV-Bench                     | Accuracy                    | 75.2   | 46   |
| Spatial Logical Reasoning                 | SpatiaLQA                    | Rc                          | 48.2   | 42   |
| Temporal Autonomous Driving Understanding | TAD 1.0 (test)               | EA Action Recognition       | 50.46  | 32   |
| Spatial Reasoning                         | EmbSpatial                   | Overall Accuracy            | 68.9   | 30   |
| Spatial Reasoning                         | ROBOSPATIAL                  | Overall Score               | 38.81  | 29   |
| Egocentric Daily-Task Planning            | EgoPlanBench2                | Overall Success Rate        | 29.8   | 26   |
| Egocentric Spatial Reasoning              | 3DSRBench Egocentric (test)  | Orientation Accuracy (Cam.V)| 0.2359 | 24   |
| Computer Vision Evaluation                | CV-Bench                     | Average Score               | 75.2   | 22   |
| Spatial Reasoning                         | RefSpatial-Bench             | Localization Score          | 9.84   | 19   |
| Multimodal Reward Modeling                | VL-RewardBench               | Accuracy                    | 44.8   | 17   |

Showing 10 of 31 rows.
