
Unsupervised Learning for Physical Interaction through Video Prediction

About

A core challenge for an agent learning to interact with the world is to predict how its actions affect objects in its environment. Many existing methods for learning the dynamics of physical interactions require labeled object information. However, to scale real-world interaction learning to a variety of scenes and objects, acquiring labeled data becomes increasingly impractical. To learn about physical object motion without labels, we develop an action-conditioned video prediction model that explicitly models pixel motion, by predicting a distribution over pixel motion from previous frames. Because our model explicitly predicts motion, it is partially invariant to object appearance, enabling it to generalize to previously unseen objects. To explore video prediction for real-world interactive agents, we also introduce a dataset of 59,000 robot interactions involving pushing motions, including a test set with novel objects. In this dataset, accurate prediction of videos conditioned on the robot's future actions amounts to learning a "visual imagination" of different futures based on different courses of action. Our experiments show that our proposed method produces more accurate video predictions both quantitatively and qualitatively, when compared to prior methods.
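The key idea above is that the model outputs a distribution over pixel motion rather than pixel values directly: each pixel in the next frame is predicted as a weighted combination of nearby pixels in the previous frame. As a rough illustration of that transformation step (not the paper's implementation; the function name, array shapes, and single-channel simplification are assumptions for clarity), a predicted per-pixel motion distribution can be applied to a frame like this:

```python
import numpy as np

def predict_next_frame(prev_frame, motion_kernels):
    """Apply a predicted distribution over pixel motion to the previous frame.

    prev_frame:     (H, W) grayscale image.
    motion_kernels: (H, W, k, k) array; motion_kernels[i, j] is a normalized
                    distribution over a k x k neighborhood, giving the weight
                    with which each nearby source pixel contributes to the
                    next-frame pixel at (i, j).
    """
    H, W = prev_frame.shape
    k = motion_kernels.shape[-1]
    pad = k // 2
    # Edge-pad so kernels at the border still see a full k x k neighborhood.
    padded = np.pad(prev_frame, pad, mode="edge")
    next_frame = np.zeros((H, W), dtype=float)
    # Accumulate each kernel offset's shifted copy of the previous frame,
    # weighted by the predicted motion probabilities.
    for di in range(k):
        for dj in range(k):
            next_frame += motion_kernels[:, :, di, dj] * padded[di:di + H, dj:dj + W]
    return next_frame
```

Because the kernels describe where pixels move rather than what they look like, the same predicted motion applies regardless of an object's appearance, which is the source of the partial appearance invariance claimed above.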

Chelsea Finn, Ian Goodfellow, Sergey Levine • 2016

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Video Prediction | Moving MNIST (test) | MSE | 97.4 | 82 |
| Video Prediction | Moving-MNIST 10 → 10 (test) | MSE | 97.4 | 39 |
| Future video prediction | BAIR 64x64 and 256x256 (test) | FVD | 297 | 16 |
| Video modeling | BAIR Robot Pushing (test) | FVD | 296.5 | 14 |
| Video Prediction | BAIR 64x64 | FVD | 297 | 14 |
| Spatiotemporal Predictive Learning | Moving MNIST 10 time steps 2-digit (test) | SSIM | 72.1 | 11 |
| Spatiotemporal Predictive Learning | Moving MNIST 10 time steps 3-digit (test) | SSIM | 0.669 | 11 |
| Video Prediction | Moving-MNIST 10 → 30 (test) | MSE | 142.3 | 8 |
