# HiVLA: A Visual-Grounded-Centric Hierarchical Embodied Manipulation System

## About
While end-to-end Vision-Language-Action (VLA) models offer a promising paradigm for robotic manipulation, fine-tuning them on narrow control data often compromises the reasoning capabilities inherited from their base Vision-Language Models (VLMs). To resolve this fundamental trade-off, we propose HiVLA, a visual-grounded-centric hierarchical framework that explicitly decouples high-level semantic planning from low-level motor control. At the high level, a VLM planner performs task decomposition and visual grounding to generate structured plans, each comprising a subtask instruction and a precise target bounding box. At the low level, to translate this plan into physical actions, we introduce a flow-matching Diffusion Transformer (DiT) action expert equipped with a novel cascaded cross-attention mechanism. This design sequentially fuses global context, high-resolution object-centric crops, and skill semantics, enabling the DiT to focus purely on robust execution. Our decoupled architecture preserves the VLM's zero-shot reasoning while allowing independent improvement of both components. Extensive experiments in simulation and the real world demonstrate that HiVLA significantly outperforms state-of-the-art end-to-end baselines, particularly excelling in long-horizon skill composition and fine-grained manipulation of small objects in cluttered scenes.
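To make the cascaded cross-attention idea concrete, here is a minimal NumPy sketch of how action tokens could attend to the three conditioning streams (global context, object-centric crop features, skill semantics) in sequence. All function names, shapes, and the single-head/no-projection simplification are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def cross_attention(queries, keys, values):
    """Scaled dot-product cross-attention (single head, no learned
    projections, for brevity)."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values

def cascaded_fusion(action_tokens, global_ctx, crop_feats, skill_emb):
    """Hypothetical cascaded fusion: action tokens attend to each
    conditioning stream in turn, with residual connections, so each
    stage refines the result of the previous one."""
    x = action_tokens
    for cond in (global_ctx, crop_feats, skill_emb):
        x = x + cross_attention(x, cond, cond)
    return x

# Toy shapes: 8 action tokens, 32 global / 16 crop / 4 skill tokens, dim 64.
rng = np.random.default_rng(0)
fused = cascaded_fusion(
    rng.normal(size=(8, 64)),
    rng.normal(size=(32, 64)),
    rng.normal(size=(16, 64)),
    rng.normal(size=(4, 64)),
)
```

In a real DiT block each stage would use learned query/key/value projections and multiple heads; the point of the cascade is only the ordering: coarse scene context first, then the high-resolution crop, then the skill instruction.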
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Robot Manipulation | RoboTwin | Success Rate (Click Bell) | 95 | 6 |
| Click Bell | Real-world 1 Bell (test) | Success Rate | 13 | 2 |
| Click Bell | Real-world 2 Bells (test) | Success Rate | 17 | 2 |
| Pick & Place Block | Real-world 1 Block (test) | Success Rate | 20 | 2 |
| Pick & Place Block | Real-world 3 Blocks (test) | Success Rate | 7 | 2 |
| Pick and Place Cup | Real-world 1 Cup (test) | Success Rate | 21 | 2 |
| Pick and Place Cup | Real-world 3 Cups (test) | Success Rate | 6 | 2 |