
Clockwork Variational Autoencoders

About

Deep learning has enabled algorithms to generate realistic images. However, accurately predicting long video sequences requires understanding long-term dependencies and remains an open challenge. While existing video prediction models succeed at generating sharp images, they tend to fail at accurately predicting far into the future. We introduce the Clockwork VAE (CW-VAE), a video prediction model that leverages a hierarchy of latent sequences, where higher levels tick at slower intervals. We demonstrate the benefits of both hierarchical latents and temporal abstraction on 4 diverse video prediction datasets with sequences of up to 1000 frames, where CW-VAE outperforms top video prediction models. Additionally, we propose a Minecraft benchmark for long-term video prediction. We conduct several experiments to gain insights into CW-VAE and confirm that slower levels learn to represent objects that change more slowly in the video, and faster levels learn to represent faster objects.

Vaibhav Saxena, Jimmy Ba, Danijar Hafner • 2021
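The temporal abstraction described in the abstract, where higher latent levels tick at slower intervals, can be sketched as a simple update schedule. This is a minimal illustration, not the authors' implementation: the factor `k` and the function name are assumptions, based on the common choice of letting level `l` update every `k**l` steps.

```python
def active_levels(t, num_levels, k=2):
    """Return the indices of latent levels that update at timestep t.

    Illustrative sketch of a clockwork schedule: level 0 (fastest) ticks
    every step, and level l ticks every k**l steps, so higher levels
    change more slowly and can represent slowly changing video content.
    """
    return [l for l in range(num_levels) if t % (k ** l) == 0]

if __name__ == "__main__":
    # With 3 levels and k=2: level 0 ticks every step, level 1 every
    # 2 steps, and level 2 every 4 steps.
    for t in range(8):
        print(t, active_levels(t, num_levels=3))
```

At `t = 0` all three levels update; at odd steps only the fastest level does, which is how slower levels end up summarizing longer stretches of the video.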

Related benchmarks

Task                          | Dataset                  | Result      | Rank
Long-Context Video Prediction | DMLab 64x64              | FVD 125     | 12
Video Completion              | GQN-Mazes                | FVD 837     | 8
Video Completion              | MineRL                   | FVD 1.57e+3 | 8
Video Completion              | CARLA Town01             | FVD 1.16e+3 | 8
Long-Context Video Prediction | Minecraft 128x128 (test) | SSIM 0.338  | 6
