
Latent Video Transformer

About

The video generation task can be formulated as predicting future video frames given some past frames. Recent generative models for video suffer from high computational requirements; some require up to 512 Tensor Processing Units (TPUs) for parallel training. In this work, we address this problem by modeling the dynamics in a latent space. After transforming frames into the latent space, our model predicts latent representations for the next frames in an autoregressive manner. We demonstrate the performance of our approach on the BAIR Robot Pushing and Kinetics-600 datasets. The approach reduces the training requirement to 8 Graphics Processing Units (GPUs) while maintaining comparable generation quality.
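The pipeline above has two stages: frames are first encoded into a discrete latent space, and an autoregressive model then predicts the latent codes of future frames, which are decoded back to pixels. The following is a minimal sketch of that data flow only; the `encode`/`decode` pair here is a hypothetical uniform quantizer standing in for the paper's learned encoder-decoder, and `predict_next_codes` is a trivial placeholder for the transformer.

```python
import numpy as np

def encode(frame, levels=8):
    """Quantize a float frame in [0, 1) into discrete latent codes.
    Stand-in for a learned discrete encoder (assumption, not the paper's model)."""
    return np.clip((frame * levels).astype(int), 0, levels - 1)

def decode(codes, levels=8):
    """Map discrete latent codes back to approximate pixel values."""
    return (codes + 0.5) / levels

def predict_next_codes(past_codes):
    """Placeholder autoregressive step: copy the most recent code map.
    A transformer would instead emit a distribution over latent tokens
    conditioned on all past codes."""
    return past_codes[-1].copy()

def generate(past_frames, n_future):
    """Encode the conditioning frames, roll the latent model forward
    autoregressively, and decode only the newly generated frames."""
    codes = [encode(f) for f in past_frames]
    for _ in range(n_future):
        codes.append(predict_next_codes(codes))
    return [decode(c) for c in codes[len(past_frames):]]

# Two 4x4 toy "frames" as conditioning context, two predicted frames out.
past = [np.full((4, 4), 0.3), np.full((4, 4), 0.55)]
future = generate(past, n_future=2)
```

The point of the latent formulation is that the autoregressive model operates on short sequences of small code maps rather than raw pixels, which is what makes training feasible on 8 GPUs instead of hundreds of TPUs.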

Ruslan Rakhimov, Denis Volkhonskiy, Alexey Artemov, Denis Zorin, Evgeny Burnaev • 2020

Related benchmarks

Task                      Dataset                          Result       Rank
Video Prediction          BAIR (test)                      FVD 125.8    59
Video Prediction          Kinetics-600 (test)              FVD 224.7    46
Video Prediction          BAIR Robot Pushing               FVD 125.8    38
Video Prediction          BAIR                             FVD 125.8    34
Video Prediction          BAIR Push (test)                 FVD 125.8    30
Video Frame Prediction    Kinetics-600                     gFVD 224.7   28
Future Video Prediction   BAIR 64x64 and 256x256 (test)    FVD 126      16
Frame Prediction          BAIR                             FVD 126      15
Video Prediction          BAIR 64x64                       FVD 126      14
Video Modeling            BAIR Robot Pushing (test)        --           14

Showing 10 of 12 rows

Other info

Code
