
TGM-VLA: Task-Guided Mixup for Sampling-Efficient and Robust Robotic Manipulation

About

The performance of robotic imitation learning is fundamentally limited by data quality and training strategies. Prevalent sampling strategies on RLBench suffer from severe keyframe redundancy and an imbalanced temporal distribution, leading to inefficient memory usage and unstable optimization. Moreover, reprojecting point clouds onto multi-view images with a black background--while more efficient than voxel-based methods--often renders dark objects indistinguishable and hard to manipulate. In this work, we propose a holistic framework that significantly improves both model performance and training efficiency. First, we redesign and optimize the keyframe sampling strategy, reducing memory consumption by 80% and accelerating training by 5x. Second, we augment the model with a color inversion projection branch--a simple yet effective module that resolves the ambiguity of dark objects. Finally, we propose a task-guided mixup technique that dynamically fuses point clouds and action heatmaps according to task instructions, greatly improving robustness to distractors and performance in multi-goal scenarios. Extensive experiments demonstrate that our method achieves state-of-the-art performance, with a 90.5% success rate on RLBench and 68.8% on the COLOSSEUM benchmark under challenging interference conditions. Our code and checkpoints are available at https://github.com/PuFanqi23/TGM-VLA.
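The abstract describes two of the proposed modules at a high level: a color inversion projection branch (so dark objects stand out against the black projection background) and a task-guided mixup that fuses point clouds and action heatmaps. The sketch below illustrates one plausible reading of both ideas; all names, shapes, and the subsample-and-concatenate strategy for point clouds are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def color_inversion_views(rgb_views):
    """Complementary projection branch (assumed form): invert RGB values in
    [0, 1] so dark objects on the black background become bright."""
    return 1.0 - rgb_views

def task_guided_mixup(pc_a, heat_a, pc_b, heat_b, alpha=0.4, rng=None):
    """Blend two training samples' point clouds and action heatmaps.

    pc_*   : (N, 6) arrays of XYZRGB points (assumed layout).
    heat_* : (H, W) action heatmaps over the projected views.
    alpha  : Beta-distribution parameter controlling the mixing ratio.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    # Point clouds are set-valued, so "mixing" here subsamples each cloud in
    # proportion to lambda and concatenates, rather than averaging points.
    n_a = max(int(round(lam * len(pc_a))), 1)
    n_b = max(len(pc_b) - int(round(lam * len(pc_b))), 1)
    idx_a = rng.choice(len(pc_a), size=n_a, replace=False)
    idx_b = rng.choice(len(pc_b), size=n_b, replace=False)
    mixed_pc = np.concatenate([pc_a[idx_a], pc_b[idx_b]], axis=0)
    # Heatmaps are dense, so a standard convex combination applies.
    mixed_heat = lam * heat_a + (1.0 - lam) * heat_b
    return mixed_pc, mixed_heat, lam
```

In this reading, conditioning on the task instruction would amount to choosing which sample's heatmap carries the target action (selecting `pc_a`/`heat_a` for the instructed goal) while the second sample contributes distractor geometry.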

Fanqi Pu, Lei Jiang, Wenming Yang • 2026

Related benchmarks

Task                 | Dataset        | Metric               | Result | Rank
Robotic Manipulation | RLBench (test) | Average Success Rate | 90.5   | 49
Robotic Manipulation | COLOSSEUM      | Average Success Rate | 68.8   | 20
