
Diffusion Models for Video Prediction and Infilling

About

Predicting future outcomes and reasoning about missing information in a sequence are critical skills for agents to make intelligent decisions. This requires strong, temporally coherent generative capabilities. Diffusion models have shown remarkable success in several generative tasks, but have not been extensively explored in the video domain. We present Random-Mask Video Diffusion (RaMViD), which extends image diffusion models to videos using 3D convolutions, and introduces a new conditioning technique during training. By varying the mask we condition on, the model is able to perform video prediction, infilling, and upsampling. Due to our simple conditioning scheme, we can utilize the same architecture as used for unconditional training, which allows us to train the model in a conditional and unconditional fashion at the same time. We evaluate RaMViD on two benchmark datasets for video prediction, on which we achieve state-of-the-art results, and one for video generation. High-resolution videos are provided at https://sites.google.com/view/video-diffusion-prediction.
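The key idea above is that the conditioning signal is just a per-frame mask: conditioning frames stay clean while the remaining frames are noised by the forward diffusion process, so one architecture covers prediction, infilling, and unconditional generation. The snippet below is a minimal sketch of that masking scheme in NumPy; the specific sampling probabilities, function names, and noise schedule are illustrative assumptions, not the paper's exact hyperparameters.

```python
import numpy as np

def random_mask(num_frames, rng):
    """Sample a boolean conditioning mask over frames.

    With some probability (0.25 here; an assumed value) no frame is
    conditioned on, which corresponds to unconditional training.
    Otherwise a random non-empty, non-full subset of frames is kept clean.
    """
    if rng.random() < 0.25:
        return np.zeros(num_frames, dtype=bool)  # fully unconditional
    k = rng.integers(1, num_frames)              # number of conditioning frames
    idx = rng.choice(num_frames, size=k, replace=False)
    mask = np.zeros(num_frames, dtype=bool)
    mask[idx] = True
    return mask

def diffuse_unmasked(video, mask, t, noise, alpha_bar):
    """Apply the forward diffusion step only to non-conditioning frames.

    video: array of shape (T, H, W, C); mask: (T,) boolean, True = condition.
    Conditioning frames are left untouched, so the denoiser sees them clean
    and learns to reconstruct the noised frames around them.
    """
    out = video.copy()
    scale = np.sqrt(alpha_bar[t])        # signal scaling at step t
    sigma = np.sqrt(1.0 - alpha_bar[t])  # noise scaling at step t
    out[~mask] = scale * video[~mask] + sigma * noise[~mask]
    return out
```

At inference time the same mechanism is reused: fixing the mask to the first frames gives prediction, to scattered frames gives infilling, and to every k-th frame gives temporal upsampling.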

Tobias Höppe, Arash Mehrjou, Stefan Bauer, Didrik Nielsen, Andrea Dittadi · 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Video Generation | UCF-101 (test) | Inception Score | 21.71 | 105 |
| Video Prediction | BAIR (test) | FVD | 82.64 | 59 |
| Video Generation | UCF101 | -- | -- | 54 |
| Video Prediction | BAIR Robot Pushing | FVD | 84 | 38 |
| Video Prediction | BAIR | FVD | 84.2 | 34 |
| Video Frame Prediction | Kinetics-600 | gFVD | 16.5 | 28 |
| Video Prediction | Kinetics-600 | FVD | 16.5 | 18 |
| Frame Prediction | BAIR | FVD | 84 | 15 |
| Video Generation | UCF-101 64x64 (test) | FVD | 396.7 | 12 |
| Video Prediction | BAIR 64x64 (test) | SSIM | 0.758 | 12 |

Showing 10 of 18 rows.
