
Playable Video Generation

About

This paper introduces the unsupervised learning problem of playable video generation (PVG). In PVG, we aim to allow a user to control the generated video by selecting a discrete action at every time step, as when playing a video game. The difficulty of the task lies both in learning semantically consistent actions and in generating realistic videos conditioned on the user input. We propose a novel framework for PVG that is trained in a self-supervised manner on a large dataset of unlabelled videos. We employ an encoder-decoder architecture where the predicted action labels act as a bottleneck. The network is constrained to learn a rich action space using, as the main driving loss, a reconstruction loss on the generated video. We demonstrate the effectiveness of the proposed approach on several datasets spanning a wide variety of environments. Further details, code and examples are available on our project page: willi-menapace.github.io/playable-video-generation-website.
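The encoder-decoder with a discrete-action bottleneck described above can be sketched minimally. The following is an illustrative, untrained sketch using random weights; all names, dimensions, and the action count K are assumptions for illustration, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 7            # number of discrete actions (hypothetical choice)
D_FRAME = 64     # flattened frame-feature size (hypothetical)

# Random, untrained weights standing in for the learned encoder and decoder.
W_enc = rng.normal(size=(2 * D_FRAME, K))          # frame pair -> action logits
W_dec = rng.normal(size=(D_FRAME + K, D_FRAME))    # (frame, action) -> next frame

def encode_action(frame_t, frame_t1):
    """Predict a discrete action from two consecutive frames.
    The hard one-hot vector is the information bottleneck."""
    logits = np.concatenate([frame_t, frame_t1]) @ W_enc
    action = np.zeros(K)
    action[np.argmax(logits)] = 1.0   # hard selection; training would need a
    return action                      # differentiable relaxation (e.g. Gumbel-softmax)

def decode_next_frame(frame_t, action):
    """Reconstruct the next frame conditioned only on the current frame
    and the chosen discrete action."""
    return np.tanh(np.concatenate([frame_t, action]) @ W_dec)

frame_t = rng.normal(size=D_FRAME)
frame_t1 = rng.normal(size=D_FRAME)
a = encode_action(frame_t, frame_t1)
recon = decode_next_frame(frame_t, a)
recon_loss = np.mean((recon - frame_t1) ** 2)   # reconstruction loss drives learning
```

Because the decoder only sees the one-hot action (not the future frame), minimizing the reconstruction loss pressures the encoder to pack the most predictive motion information into the K discrete labels, which is what makes the learned actions user-selectable at test time.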

Willi Menapace, Stéphane Lathuilière, Sergey Tulyakov, Aliaksandr Siarohin, Elisa Ricci · 2021

Related benchmarks

Task                               | Dataset                 | Result      | Rank
Playable video generation          | Static Tennis           | LPIPS 0.102 | 6
Proxy-supervised Video Generation  | BAIR 64x64 Full (test)  | LPIPS 0.202 | 6
Playable video generation          | Tennis                  | LPIPS 0.102 | 5
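For reference, LPIPS (the metric in the table; lower is better) measures perceptual distance as a difference between unit-normalized deep features of the two images. The sketch below shows only that skeleton, substituting a random linear projection for the pretrained CNN features that real LPIPS uses, so the numbers it produces are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)

def fake_features(img, W):
    """Stand-in feature extractor: a random projection instead of a
    pretrained CNN (an assumption for illustration)."""
    return img.reshape(-1) @ W

def lpips_like(img0, img1, W):
    """LPIPS skeleton: unit-normalize features, then average the
    squared feature differences."""
    f0, f1 = fake_features(img0, W), fake_features(img1, W)
    f0 = f0 / (np.linalg.norm(f0) + 1e-10)
    f1 = f1 / (np.linalg.norm(f1) + 1e-10)
    return float(np.mean((f0 - f1) ** 2))

W = rng.normal(size=(3 * 8 * 8, 16))        # 3x8x8 toy "image" -> 16 features
img = rng.normal(size=(3, 8, 8))
d_same = lpips_like(img, img, W)            # identical frames -> zero distance
d_diff = lpips_like(img, img + 0.1 * rng.normal(size=img.shape), W)
```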
