
Latte: Latent Diffusion Transformer for Video Generation

About

We propose Latte, a novel Latent Diffusion Transformer for video generation. Latte first extracts spatio-temporal tokens from input videos and then adopts a series of Transformer blocks to model video distribution in the latent space. In order to model a substantial number of tokens extracted from videos, four efficient variants are introduced from the perspective of decomposing the spatial and temporal dimensions of input videos. To improve the quality of generated videos, we determine the best practices of Latte through rigorous experimental analysis, including video clip patch embedding, model variants, timestep-class information injection, temporal positional embedding, and learning strategies. Our comprehensive evaluation demonstrates that Latte achieves state-of-the-art performance across four standard video generation datasets, i.e., FaceForensics, SkyTimelapse, UCF101, and Taichi-HD. In addition, we extend Latte to the text-to-video generation (T2V) task, where Latte achieves results that are competitive with recent T2V models. We strongly believe that Latte provides valuable insights for future research on incorporating Transformers into diffusion models for video generation.
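The abstract's core idea, decomposing attention over the spatial and temporal dimensions of latent video tokens, can be illustrated with a minimal sketch. The module below is a hypothetical, simplified rendering of the interleaved spatial/temporal variant (block structure, dimensions, and class name are assumptions, not the paper's implementation):

```python
import torch
import torch.nn as nn

class AlternatingSpatioTemporalBlocks(nn.Module):
    """Sketch of interleaved spatial and temporal Transformer blocks
    over latent video tokens (hypothetical minimal form, not Latte's code)."""
    def __init__(self, dim=64, heads=4, depth=2):
        super().__init__()
        make = lambda: nn.TransformerEncoderLayer(dim, heads,
                                                  dim_feedforward=dim * 4,
                                                  batch_first=True)
        self.spatial = nn.ModuleList(make() for _ in range(depth))
        self.temporal = nn.ModuleList(make() for _ in range(depth))

    def forward(self, x):
        # x: (batch, frames, patches, dim) spatio-temporal latent tokens
        b, f, p, d = x.shape
        for s_blk, t_blk in zip(self.spatial, self.temporal):
            # spatial block: attend among patches within each frame
            x = s_blk(x.reshape(b * f, p, d)).reshape(b, f, p, d)
            # temporal block: attend across frames at each patch location
            x = x.transpose(1, 2)                               # (b, p, f, d)
            x = t_blk(x.reshape(b * p, f, d)).reshape(b, p, f, d)
            x = x.transpose(1, 2)                               # (b, f, p, d)
        return x

tokens = torch.randn(2, 8, 16, 64)   # 2 videos, 8 frames, 16 patches each
out = AlternatingSpatioTemporalBlocks()(tokens)
print(out.shape)  # torch.Size([2, 8, 16, 64])
```

The payoff of the decomposition is cost: full attention over all `frames × patches` tokens is quadratic in their product, while alternating blocks attend over only `patches` or `frames` tokens at a time.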

Xin Ma, Yaohui Wang, Xinyuan Chen, Gengyun Jia, Ziwei Liu, Yuan-Fang Li, Cunjian Chen, Yu Qiao • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Video Generation | UCF-101 (test) | – | 105 |
| Video Generation | UCF101 | FVD 202.2 | 68 |
| Text-to-Video Generation | UCF-101 (zero-shot) | FVD 478 | 59 |
| Video Generation | UCF-101 | FVD 478 | 30 |
| Video Generation | SkyTimelapse | FVD 42.7 | 22 |
| Video Generation | SkyTimelapse (test) | FVD 1659.82 | 16 |
| Video Generation | CVGBench-p | Subject Consistency 95.21 | 16 |
| Video Generation | CVGBench-m | Subject Consistency 90.91 | 16 |
| Video Generation | FaceForensics | FVD 27.1 | 15 |
| Class-to-Video Generation | UCF-101 | – | 15 |

Showing 10 of 18 rows.
