
RoboTAP: Tracking Arbitrary Points for Few-Shot Visual Imitation

About

For robots to be useful outside labs and specialized factories, we need a way to teach them new behaviors quickly. Current approaches either lack the generality to onboard new tasks without task-specific engineering, or lack the data-efficiency to do so in an amount of time that permits practical use. In this work we explore dense tracking as a representational vehicle to allow faster and more general learning from demonstration. Our approach utilizes Track-Any-Point (TAP) models to isolate the relevant motion in a demonstration, and parameterizes a low-level controller to reproduce this motion across changes in the scene configuration. We show this results in robust robot policies that can solve complex object-arrangement tasks such as shape-matching, stacking, and even full path-following tasks such as applying glue and sticking objects together, all from demonstrations that can be collected in minutes.
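To make the closed loop described above concrete, here is a minimal, hypothetical sketch: a TAP model tracks the task-relevant points, and a simple controller drives them toward the locations they occupied in the demonstration. The `track_points` function, the gain, and the convergence threshold are illustrative stand-ins (the tracker is replaced by noisy ground truth so the example runs), not the paper's actual components.

```python
"""Illustrative sketch of point-based visual servoing in the spirit of
the approach above. The TAP model, robot interface, and stage logic are
simplified placeholders, not the paper's implementation."""

import numpy as np

rng = np.random.default_rng(0)

def track_points(frame, query_points):
    # Placeholder for a real TAP model (e.g. TAPIR): given the current
    # frame, return the (N, 2) pixel locations of the query points.
    # Here we perturb the true locations so the example is runnable.
    return query_points + rng.normal(scale=1.0, size=query_points.shape)

def servo_command(tracked_pts, goal_pts, gain=0.1):
    # Pixel-space error between where the task-relevant points are now
    # and where they were at this stage of the demonstration.
    error = goal_pts - tracked_pts
    # Mean point error mapped to a planar motion command; a real system
    # converts pixels to metric end-effector motion via calibration.
    return gain * error.mean(axis=0), float(np.abs(error).mean())

# One demonstration "stage": the scene points from the demo frame, and
# the same object rigidly displaced in the current scene.
goal_pts = rng.uniform(0, 480, size=(8, 2))
points = goal_pts + rng.uniform(-60, 60, size=(1, 2))

for step in range(200):
    tracked = track_points(frame=None, query_points=points)
    velocity, err = servo_command(tracked, goal_pts)
    points = points + velocity        # stand-in for the robot moving
    if err < 2.0:                     # stage converged; advance to next
        break
print(f"stage converged after {step + 1} steps (mean pixel error {err:.2f})")
```

A full system would chain several such stages (e.g. reach, grasp, align, place), each defined by its own set of active points and goal locations taken from the demonstration.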

Mel Vecerik, Carl Doersch, Yi Yang, Todor Davchev, Yusuf Aytar, Guangyao Zhou, Raia Hadsell, Lourdes Agapito, Jon Scholz • 2023

Related benchmarks

Task             Dataset                      Result                 Rank
Point Tracking   TAP-Vid DAVIS (First)        Delta Avg (<c): 70     76
Point Tracking   TAP-Vid Kinetics (First)     --                     53
