
NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks

About

Existing Vision-Language-Action (VLA) models have shown promising performance in zero-shot scenarios, demonstrating impressive task execution and reasoning capabilities. However, a significant challenge arises from the limitations of visual encoding, which can result in failures during tasks such as object grasping. Moreover, these models typically suffer from high computational overhead due to their large sizes, often exceeding 7B parameters. While such models excel in reasoning and task planning, their computational cost makes them impractical for real-time robotic environments, where speed and efficiency are paramount. To address these limitations, we propose NORA, a 3B-parameter model designed to reduce computational overhead while maintaining strong task performance. NORA adopts the Qwen-2.5-VL-3B multimodal model as its backbone, leveraging its superior visual-semantic understanding to enhance visual reasoning and action grounding. Additionally, NORA is trained on 970k real-world robot demonstrations and equipped with the FAST+ tokenizer for efficient action sequence generation. Experimental results demonstrate that NORA outperforms existing large-scale VLA models, achieving better task performance with significantly reduced computational overhead, making it a more practical solution for real-time robotic autonomy.
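To make the action-tokenization idea concrete, the sketch below illustrates the simpler scheme that tokenizers like FAST+ improve upon: uniform per-dimension binning of a continuous robot action into discrete token ids that a language-model head can predict, and the inverse mapping back to continuous commands. This is a minimal illustration only; the actual FAST+ tokenizer compresses action chunks with a DCT plus byte-pair encoding, and the bin count and action range here are assumptions, not NORA's real configuration.

```python
import numpy as np

N_BINS = 256           # assumed vocabulary size reserved for action tokens
LOW, HIGH = -1.0, 1.0  # assumed normalized action range

def tokenize(actions: np.ndarray) -> np.ndarray:
    """Map continuous actions in [LOW, HIGH] to integer token ids."""
    clipped = np.clip(actions, LOW, HIGH)
    ids = np.round((clipped - LOW) / (HIGH - LOW) * (N_BINS - 1))
    return ids.astype(np.int64)

def detokenize(ids: np.ndarray) -> np.ndarray:
    """Map token ids back to bin-center continuous actions."""
    return ids.astype(np.float64) / (N_BINS - 1) * (HIGH - LOW) + LOW

# Round-trip a hypothetical 7-DoF action (xyz, roll/pitch/yaw, gripper);
# the quantization error stays below half a bin width (~0.004 here).
action = np.array([0.1, -0.5, 0.9, 0.0, 0.3, -0.2, 1.0])
reconstructed = detokenize(tokenize(action))
print(np.max(np.abs(reconstructed - action)))
```

With only 256 bins per dimension, long action sequences consume many tokens and decode slowly; compressing whole action chunks into fewer tokens, as FAST+ does, is what makes autoregressive action generation efficient enough for real-time control.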

Chia-Yu Hung, Qi Sun, Pengfei Hong, Amir Zadeh, Chuan Li, U-Xuan Tan, Navonil Majumder, Soujanya Poria • 2025

Related benchmarks

Task                 | Dataset                                       | Result                          | Rank
---------------------|-----------------------------------------------|---------------------------------|-----
Robot Manipulation   | LIBERO                                        | Goal Achievement: 89.4          | 494
Robot Manipulation   | LIBERO (test)                                 | Average Success Rate: 87.9      | 142
Robot Manipulation   | SimplerEnv WidowX Robot tasks (test)          | Success Rate (Spoon): 80.2      | 79
Robot Manipulation   | SimplerEnv Google Robot tasks Visual Matching | Pick Coke Can Success Rate: 86  | 62
Robot Manipulation   | Diverse Manipulation Tasks Put S in S         | PSR: 100                        | 40
Robotic Manipulation | LIBERO-Plus                                   | Camera Robustness Score: 220    | 34
Robot Manipulation   | LIBERO-Plus Zero-shot                         | Camera Score: 2.2               | 20
Multi-task Learning  | LIBERO                                        | Object Score: 97.5              | 18
Robot Policy Learning| LIBERO                                        | S (Spatial) Rate: 92.2          | 16
Robot Manipulation   | Diverse Manipulation Tasks Put U in U         | PSR: 100                        | 12
(10 of 15 benchmark rows shown)
