PAINT: Partner-Agnostic Intent-Aware Cooperative Transport with Legged Robots
About
Collaborative transport requires robots to infer partner intent through physical interaction while maintaining stable loco-manipulation. This becomes particularly challenging in complex environments, where interaction signals are difficult to capture and model. We present PAINT, a lightweight yet effective hierarchical learning framework for partner-agnostic, intent-aware collaborative legged transport that infers partner intent directly from proprioceptive feedback. PAINT decouples intent understanding from terrain-robust locomotion: a high-level policy infers the partner interaction wrench using an intent estimator trained with a teacher-student scheme, while a low-level locomotion backbone ensures robust execution. This enables lightweight deployment without external force-torque sensing or payload tracking. Extensive simulation and real-world experiments demonstrate compliant cooperative transport across diverse terrains, payloads, and partners. Furthermore, we show that PAINT naturally scales to decentralized multi-robot transport and transfers across robot embodiments by swapping the underlying locomotion backbone. Our results suggest that proprioceptive signals in payload-coupled interaction provide a scalable interface for partner-agnostic, intent-aware collaborative transport.
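The hierarchy described above can be sketched as a minimal control loop: a high-level estimator maps a proprioceptive history to an interaction-wrench estimate, which conditions a low-level locomotion policy. This is an illustrative sketch, not the released implementation; all dimensions, network shapes, and function names (`intent_estimator`, `locomotion_policy`, `step`) are hypothetical, and random-weight MLPs stand in for trained networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    # Random-weight tanh MLP, a placeholder for a trained policy network.
    weights = [rng.standard_normal((i, o)) * 0.1
               for i, o in zip(sizes[:-1], sizes[1:])]
    def forward(x):
        for w in weights[:-1]:
            x = np.tanh(x @ w)
        return x @ weights[-1]
    return forward

# Hypothetical dimensions: 48-D proprioceptive state, 10-step history,
# 6-D interaction wrench (force + torque), 12 joint-position targets.
PROPRIO_DIM, HISTORY, WRENCH_DIM, ACTION_DIM = 48, 10, 6, 12

intent_estimator = mlp([PROPRIO_DIM * HISTORY, 128, WRENCH_DIM])      # high level
locomotion_policy = mlp([PROPRIO_DIM + WRENCH_DIM, 256, ACTION_DIM])  # low level

def step(proprio_history):
    # High level: infer the partner's interaction wrench from proprioception alone,
    # with no external force-torque sensor or payload tracker in the loop.
    wrench_hat = intent_estimator(proprio_history.reshape(-1))
    # Low level: produce joint commands conditioned on the inferred wrench,
    # so the locomotion backbone can be swapped across embodiments.
    action = locomotion_policy(np.concatenate([proprio_history[-1], wrench_hat]))
    return wrench_hat, action

history = rng.standard_normal((HISTORY, PROPRIO_DIM))
wrench_hat, action = step(history)
print(wrench_hat.shape, action.shape)  # (6,) (12,)
```

The key design point mirrored here is the decoupling: only the wrench estimate crosses the interface between the two levels, which is what allows the low-level backbone to be replaced without retraining the intent estimator.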
Related benchmarks
| Task | Dataset | Linear Tracking Error (m/s) | Rank |
|---|---|---|---|
| Collaborative Transport | Isaac Gym Collaborative Transport (2 kg) | 0.119 | 7 |
| Collaborative Transport | Isaac Gym Collaborative Transport (4 kg) | 0.135 | 7 |
| Collaborative Transport | Isaac Gym Collaborative Transport (8 kg) | 0.202 | 7 |