
LDMVFI: Video Frame Interpolation with Latent Diffusion Models

About

Existing works on video frame interpolation (VFI) mostly employ deep neural networks trained by minimizing the L1, L2, or deep feature space distance (e.g. VGG loss) between their outputs and the ground-truth frames. However, recent works have shown that these metrics are poor indicators of perceptual VFI quality. Towards developing perceptually oriented VFI methods, we propose a latent diffusion model-based VFI method, LDMVFI, which tackles VFI from a generative perspective by formulating it as a conditional generation problem. As the first effort to address VFI using latent diffusion models, we rigorously benchmark our method on common test sets used in the existing VFI literature. Our quantitative experiments and user study indicate that LDMVFI interpolates video content with favorable perceptual quality compared to the state of the art, even in the high-resolution regime. Our code is available at https://github.com/danier97/LDMVFI.

Duolikun Danier, Fan Zhang, David Bull • 2023
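
The key idea is that instead of regressing the middle frame directly, the model samples it: the two input frames are encoded into a latent space, a denoising network iteratively refines a noisy latent conditioned on them, and a decoder maps the result back to pixels. The sketch below illustrates this conditional latent diffusion loop in PyTorch. All modules (ToyEncoder, ToyDecoder, ToyDenoiser) are hypothetical stand-ins rather than the paper's architecture, and the sampler is a plain DDPM ancestral step, not the exact scheme used in LDMVFI.

```python
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Hypothetical stand-in for the VAE encoder (pixels -> latent)."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Conv2d(3, ch, kernel_size=4, stride=4)
    def forward(self, x):
        return self.net(x)

class ToyDecoder(nn.Module):
    """Hypothetical stand-in for the VAE decoder (latent -> pixels)."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.ConvTranspose2d(ch, 3, kernel_size=4, stride=4)
    def forward(self, z):
        return self.net(z)

class ToyDenoiser(nn.Module):
    """Hypothetical stand-in for the conditional denoising network
    (timestep embedding omitted for brevity)."""
    def __init__(self, ch=16):
        super().__init__()
        # input: noisy middle-frame latent concatenated with both frame latents
        self.net = nn.Conv2d(3 * ch, ch, kernel_size=3, padding=1)
    def forward(self, z_t, cond, t):
        return self.net(torch.cat([z_t, cond], dim=1))

@torch.no_grad()
def interpolate(frame0, frame1, enc, dec, eps_model, steps=50):
    """Sample the middle-frame latent with plain DDPM ancestral steps,
    conditioned on the latents of the two neighbouring frames."""
    betas = torch.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)

    cond = torch.cat([enc(frame0), enc(frame1)], dim=1)
    z = torch.randn_like(enc(frame0))                 # start from pure noise
    for t in reversed(range(steps)):
        eps = eps_model(z, cond, t)                   # predicted noise
        # standard DDPM posterior-mean update
        z = (z - betas[t] / torch.sqrt(1.0 - alpha_bar[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:                                     # add noise except at the final step
            z = z + torch.sqrt(betas[t]) * torch.randn_like(z)
    return dec(z)                                     # decode latent to the middle frame

f0, f1 = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
mid = interpolate(f0, f1, ToyEncoder(), ToyDecoder(), ToyDenoiser())
print(mid.shape)  # torch.Size([1, 3, 64, 64])
```

Because denoising runs in a downsampled latent space rather than on full-resolution pixels, sampling stays tractable even for high-resolution frames, which is the main appeal of the latent (as opposed to pixel-space) diffusion formulation.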

Related benchmarks

Task                        Dataset                   Metric   Result    Rank
Video Frame Interpolation   DAVIS                     PSNR     25.541    33
Video Frame Interpolation   SNU-FILM Medium           PSNR     33.975    12
Video Frame Interpolation   SNU-FILM Hard             PSNR     29.144    12
Video Frame Interpolation   SNU-FILM Medium           LPIPS    0.0284    9
Video Frame Interpolation   SNU-FILM Extreme          LPIPS    0.1226    9
Video Frame Interpolation   DAVIS 480P 2017 (test)    PSNR     25.54     8
Video Frame Interpolation   LaMoR (test)              PSNR     21.952    7
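
The table reports two kinds of metrics: PSNR is a pixel-fidelity score (higher is better), while LPIPS is a learned perceptual distance (lower is better), which is why the paper emphasizes the latter for perceptual quality. Below is a minimal sketch of how the two metrics are commonly computed, using PyTorch and the `lpips` pip package; the exact evaluation protocol behind the numbers above may differ.

```python
import torch
import lpips  # pip install lpips

def psnr(pred, gt, max_val=1.0):
    """PSNR in dB between two images with values in [0, max_val]."""
    mse = torch.mean((pred - gt) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)

loss_fn = lpips.LPIPS(net='alex')   # AlexNet-based perceptual metric

pred = torch.rand(1, 3, 256, 256)   # interpolated frame, values in [0, 1]
gt = torch.rand(1, 3, 256, 256)     # ground-truth middle frame

print(f"PSNR:  {psnr(pred, gt):.2f} dB")
# LPIPS expects inputs scaled to [-1, 1]
print(f"LPIPS: {loss_fn(pred * 2 - 1, gt * 2 - 1).item():.4f}")
```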
