
Shallow-π: Knowledge Distillation for Flow-based VLAs

About

The growing demand for real-time robotic deployment necessitates fast, on-device inference for vision-language-action (VLA) models. Within the VLA literature, efficiency has been studied extensively at the token level, for example via visual token pruning. In contrast, systematic transformer layer reduction has received limited attention and, to the best of our knowledge, has not been explored for flow-based VLA models under knowledge distillation. In this work, we propose Shallow-π, a principled knowledge distillation framework that aggressively reduces the transformer depth of both the VLM backbone and the flow-based action head, compressing the model from 18 to 6 layers. Shallow-π achieves over 2× faster inference with less than a one percent absolute drop in success rate on standard manipulation benchmarks, establishing state-of-the-art performance among reduced VLA models. Crucially, we validate our approach through industrial-scale real-world experiments on Jetson Orin and Jetson Thor across multiple robot platforms, including humanoid systems, in complex and dynamic manipulation scenarios.
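The abstract does not specify the distillation objective, but the core idea of compressing an 18-layer stack into a 6-layer student can be sketched as hidden-state matching under a strided layer mapping. The toy "transformer layers", the uniform 3-to-1 mapping, and all names below are illustrative assumptions, not details from the paper:

```python
import numpy as np

# Hypothetical sketch (not the paper's actual method): an 18-layer teacher
# is compressed to a 6-layer student by training the student's hidden
# states to match every 3rd teacher layer's hidden states.

TEACHER_LAYERS, STUDENT_LAYERS, DIM = 18, 6, 8
rng = np.random.default_rng(0)

def run_layers(x, weights):
    """Toy stand-in for a transformer stack: each 'layer' is a linear
    map followed by tanh. Returns the hidden state after every layer."""
    hidden = []
    for w in weights:
        x = np.tanh(x @ w)
        hidden.append(x)
    return hidden

teacher_w = [rng.normal(scale=0.1, size=(DIM, DIM)) for _ in range(TEACHER_LAYERS)]
student_w = [rng.normal(scale=0.1, size=(DIM, DIM)) for _ in range(STUDENT_LAYERS)]

x = rng.normal(size=(1, DIM))
t_hidden = run_layers(x, teacher_w)
s_hidden = run_layers(x, student_w)

# Strided layer mapping: student layer i imitates teacher layer 3*i + 2,
# a common heuristic in depth-reduction distillation.
stride = TEACHER_LAYERS // STUDENT_LAYERS
distill_loss = sum(
    np.mean((s_hidden[i] - t_hidden[stride * i + stride - 1]) ** 2)
    for i in range(STUDENT_LAYERS)
) / STUDENT_LAYERS
print(f"hidden-state distillation loss: {distill_loss:.4f}")
```

In a real training loop this loss would be minimized jointly with the flow-matching action loss; here it only illustrates the layer-mapping structure.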

Boseong Jeon, Yunho Choi, Taehan Kim • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Robot Manipulation | LIBERO (test) | Average Success Rate: 97 | 142 |

Other info

GitHub
