CLAP: Contrastive Latent Action Pretraining for Learning Vision-Language-Action Models from Human Videos

About

Generalist Vision-Language-Action (VLA) models are currently hindered by the scarcity of robotic data compared to the abundance of human video demonstrations. Existing Latent Action Models attempt to leverage video data but often suffer from visual entanglement, capturing noise rather than manipulation skills. To address this, we propose Contrastive Latent Action Pretraining (CLAP), a framework that aligns a visual latent space learned from videos with a proprioceptive latent space learned from robot trajectories. By employing contrastive learning, CLAP maps video transitions onto a quantized, physically executable codebook. Building on this representation, we introduce a dual-formulation VLA framework offering both CLAP-NTP, an autoregressive next-token-prediction model excelling at instruction following and object generalization, and CLAP-RF, a Rectified Flow-based policy designed for high-frequency, precise manipulation. Furthermore, we propose a Knowledge Matching (KM) regularization strategy to mitigate catastrophic forgetting during fine-tuning. Extensive experiments demonstrate that CLAP significantly outperforms strong baselines, enabling the effective transfer of skills from human videos to robotic execution. Project page: https://lin-shan.com/CLAP/.
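To make the abstract's two central ideas concrete, the sketch below shows one plausible PyTorch rendering of (a) contrastive alignment of video and proprioceptive embeddings on a shared quantized codebook, and (b) a distillation-style Knowledge Matching regularizer. Everything here is an illustrative assumption, not the paper's released implementation: the class and function names (ContrastiveLatentAlignment, knowledge_matching_loss), the VQ-VAE straight-through quantization, the symmetric InfoNCE loss, and all dimensions and hyperparameters are placeholders; consult the project page for the actual method.

```python
# Hypothetical sketch of CLAP-style pretraining components (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveLatentAlignment(nn.Module):
    """Aligns video-transition embeddings with robot-trajectory embeddings
    on a shared, quantized latent-action codebook (names are illustrative)."""

    def __init__(self, dim: int = 256, codebook_size: int = 512,
                 temperature: float = 0.07):
        super().__init__()
        # Stand-ins for the paper's visual and proprioceptive encoders.
        self.video_proj = nn.Linear(dim, dim)
        self.proprio_proj = nn.Linear(dim, dim)
        # Quantized codebook of "physically executable" latent actions.
        self.codebook = nn.Embedding(codebook_size, dim)
        self.temperature = temperature

    def quantize(self, z: torch.Tensor) -> torch.Tensor:
        # Nearest-neighbor codebook lookup with a straight-through
        # estimator, as in standard VQ-VAE training.
        dists = torch.cdist(z, self.codebook.weight)   # (B, K)
        codes = dists.argmin(dim=-1)                   # (B,)
        z_q = self.codebook(codes)                     # (B, dim)
        return z + (z_q - z).detach()                  # straight-through grad

    def forward(self, video_feat: torch.Tensor,
                proprio_feat: torch.Tensor) -> torch.Tensor:
        # Project both modalities into the shared latent-action space.
        zv = F.normalize(self.quantize(self.video_proj(video_feat)), dim=-1)
        zp = F.normalize(self.quantize(self.proprio_proj(proprio_feat)), dim=-1)
        # Symmetric InfoNCE: matched (video, trajectory) pairs are positives,
        # all other pairs in the batch are negatives.
        logits = zv @ zp.t() / self.temperature        # (B, B)
        labels = torch.arange(zv.size(0), device=zv.device)
        return 0.5 * (F.cross_entropy(logits, labels)
                      + F.cross_entropy(logits.t(), labels))

def knowledge_matching_loss(student_logits: torch.Tensor,
                            teacher_logits: torch.Tensor,
                            tau: float = 1.0) -> torch.Tensor:
    """Hypothetical KM regularizer: keeps the fine-tuned model's output
    distribution close to the frozen pretrained model's, a common guard
    against catastrophic forgetting."""
    p_teacher = F.softmax(teacher_logits / tau, dim=-1)
    log_p_student = F.log_softmax(student_logits / tau, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * tau * tau
```

The batch-level InfoNCE above is only one way to realize "contrastive learning" between the two latent spaces; the paper may use a different pairing scheme or codebook objective.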

Chubin Zhang, Jianan Wang, Zifeng Gao, Yue Su, Tianru Dai, Cai Zhou, Jiwen Lu, Yansong Tang • 2026

Related benchmarks

Task               | Dataset                                       | Result                           | Rank
Robot Manipulation | LIBERO                                        | Goal Achievement: 93             | 494
Pack the Doll      | Real-world original setup (test)              | Packing/Placing Success Rate: 90 | 5
Pick-&-Place       | Real-world Seen Objects original setup (test) | Pick Success Rate: 95            | 5
Pick-&-Place       | Real-world OOD Objects original setup (test)  | Pick Success Rate: 85            | 5
Make Bouquets      | Real-world original setup (test)              | C-1 Success Rate: 0.4            | 5
Fold T-shirt       | Real-world original setup (test)              | Success Rate: 40                 | 5