
ConLA: Contrastive Latent Action Learning from Human Videos for Robotic Manipulation

About

Vision-Language-Action (VLA) models achieve preliminary generalization through pretraining on large-scale robot teleoperation datasets. However, acquiring datasets that comprehensively cover diverse tasks and environments is extremely costly and difficult to scale. In contrast, human demonstration videos offer a rich and scalable source of diverse scenes and manipulation behaviors, yet their lack of explicit action supervision hinders direct utilization. Prior work leverages VQ-VAE-based frameworks to learn latent actions from human videos in an unsupervised manner. However, because the training objective primarily focuses on reconstructing visual appearance rather than capturing inter-frame dynamics, the learned representations tend to rely on spurious visual cues, leading to shortcut learning and entangled latent representations that hinder transferability. To address this, we propose ConLA, an unsupervised pretraining framework for learning robotic policies from human videos. ConLA introduces a contrastive disentanglement mechanism that leverages action category priors and temporal cues to isolate motion dynamics from visual content, effectively mitigating shortcut learning. Extensive experiments show that ConLA achieves strong performance across diverse benchmarks. Notably, by pretraining solely on human videos, our method for the first time surpasses the performance obtained with real-robot trajectory pretraining, highlighting its ability to extract pure and semantically consistent latent action representations for scalable robot learning.
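The abstract does not specify ConLA's exact loss, but the described contrastive disentanglement is commonly built on an InfoNCE-style objective that pulls together latent actions sharing a motion (e.g., same action category or adjacent frames) and pushes apart those that merely share visual appearance. Below is a minimal, hypothetical sketch of such an objective; the function name, shapes, and temperature are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE over a batch of latent action embeddings.

    anchors, positives: (N, D) arrays; row i of `positives` is the
    positive pair for row i of `anchors` (e.g., a temporally adjacent
    clip or a clip with the same action category). All other rows in
    the batch serve as negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)

    # (N, N) similarity matrix; matched pairs lie on the diagonal.
    logits = (a @ p.T) / temperature

    # Cross-entropy with the diagonal as the target class,
    # computed with the usual max-shift for numerical stability.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

With a well-trained encoder, embeddings of the same motion under different visual appearances score high on the diagonal, so the loss is low; embeddings that cluster by appearance instead of motion mismatch their positives and incur a high loss.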

Weisheng Dai, Kai Lan, Jianyi Zhou, Bo Zhao, Xiu Su, Junwen Tong, Weili Guan, Shuo Yang • 2026

Related benchmarks

Task                       | Dataset                                                          | Result                    | Rank
General Robot Manipulation | SimplerEnv                                                       | Average Success Rate 64.6 | 23
Robot Manipulation         | Real-world Robot Manipulation Novel Object                       | Success Rate 47.2         | 9
Robot Manipulation         | Real-world Robot Manipulation Average                            | Success Rate 0.482        | 8
Pick-&-Place               | Pick & Place Box Task                                            | Total Success Rate 48.18  | 5
Robot Manipulation         | Real-world Robot Manipulation Seen Objects, Unseen Combinations  | Success Rate 59.1         | 5
Robot Manipulation         | Real-world Robot Manipulation Seen Objects, Unseen Instructions  | Success Rate 38.3         | 5
