
OmniVTA: Visuo-Tactile World Modeling for Contact-Rich Robotic Manipulation

About

Contact-rich manipulation tasks, such as wiping and assembly, require accurate perception of contact forces, friction changes, and state transitions that cannot be reliably inferred from vision alone. Despite growing interest in visuo-tactile manipulation, progress is constrained by two persistent limitations: existing datasets are small in scale and narrow in task coverage, and current methods treat tactile signals as passive observations rather than explicitly using them to model contact dynamics or enable closed-loop control. In this paper, we present OmniViTac, a large-scale visuo-tactile-action dataset comprising 21,000+ trajectories across 86 tasks and 100+ objects, organized into six physics-grounded interaction patterns. Building on this dataset, we propose OmniVTA, a world-model-based visuo-tactile manipulation framework that integrates four tightly coupled modules: a self-supervised tactile encoder, a two-stream visuo-tactile world model for predicting short-horizon contact evolution, a contact-aware fusion policy for action generation, and a 60 Hz reflexive controller that corrects deviations between predicted and observed tactile signals in a closed loop. Real-robot experiments across all six interaction categories show that OmniVTA outperforms existing methods and generalizes well to unseen objects and geometric configurations, confirming the value of combining predictive contact modeling with high-frequency tactile feedback for contact-rich manipulation. All data, models, and code will be made publicly available on the project website at https://mrsecant.github.io/OmniVTA.
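The reflexive controller described above can be sketched as a high-rate correction loop: at each tick, the tactile prediction from the world model is compared against the observed tactile reading, and the error adjusts the commanded action. The function and variable names below are illustrative, and the proportional control law is an assumption; the abstract specifies only that the controller runs at 60 Hz and corrects predicted-vs-observed tactile deviations in a closed loop.

```python
import numpy as np

def reflexive_correction(action, tactile_pred, tactile_obs, gain=0.5):
    """Adjust a planned action using the tactile prediction error.

    Hypothetical sketch: the error is positive where the observed
    contact signal is stronger than the world model predicted, so the
    correction backs the action off along that component.
    """
    error = tactile_obs - tactile_pred
    # Proportional correction (an assumed control law, not from the paper).
    return action - gain * error

# Toy usage: a 3-DoF force command corrected when measured normal
# force (z component) exceeds the predicted value.
action = np.array([0.0, 0.0, -1.0])       # commanded press-down force
tactile_pred = np.array([0.0, 0.0, 0.8])  # world model's predicted contact
tactile_obs = np.array([0.0, 0.0, 1.2])   # observed contact signal
corrected = reflexive_correction(action, tactile_pred, tactile_obs)
```

In a real deployment this update would run inside the 60 Hz loop, with the world model re-predicting short-horizon contact evolution between corrections.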

Yuhang Zheng, Songen Gu, Weize Li, Yupeng Zheng, Yujie Zang, Shuai Tian, Xiang Li, Ce Hao, Chen Gao, Si Liu, Haoran Li, Yilun Chen, Shuicheng Yan, Wenchao Ding • 2026

Related benchmarks

Task       | Benchmark                                                 | Metric       | Result | Rank
Adjustment | OmniVTA Real-robot manipulation (Object Diversity)        | Success Rate | 65     | 7
Adjustment | OmniVTA Real-robot manipulation (Generalization)          | Success Rate | 65     | 7
Assembly   | OmniVTA Real-robot manipulation (Perturbation Robustness) | Success Rate | 40     | 7
Cut        | OmniVTA Real-robot manipulation (Object Diversity)        | Success Rate | 85     | 7
Cut        | OmniVTA Real-robot manipulation (Generalization)          | Success Rate | 83     | 7
Cut        | OmniVTA Real-robot manipulation (Perturbation Robustness) | Success Rate | 60     | 7
Grasp      | OmniVTA Real-robot manipulation (Object Diversity)        | Success Rate | 90     | 7
Peel       | OmniVTA Real-robot manipulation (Object Diversity)        | Success Rate | 55     | 7
Peel       | OmniVTA Real-robot manipulation (Generalization)          | Success Rate | 48     | 7
Peel       | OmniVTA Real-robot manipulation (Perturbation Robustness) | Success Rate | 63     | 7

Showing 10 of 15 rows.
