
CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers

About

Large-scale pretrained transformers have created milestones in text (GPT-3) and text-to-image (DALL-E and CogView) generation. Their application to video generation still faces many challenges: the potentially huge computation cost makes training from scratch unaffordable, and the scarcity and weak relevance of text-video datasets hinder the model's understanding of complex movement semantics. In this work, we present CogVideo, a 9B-parameter transformer trained by inheriting a pretrained text-to-image model, CogView2. We also propose a multi-frame-rate hierarchical training strategy to better align text and video clips. As (probably) the first open-source large-scale pretrained text-to-video model, CogVideo outperforms all publicly available models by a large margin in machine and human evaluations.
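A minimal sketch of the multi-frame-rate conditioning idea described above, assuming a token-based interface: the clip's sampling rate is encoded as a special token prepended to the text tokens, so one model can train on clips sampled at different rates. The token names and rate set here are hypothetical, not the authors' implementation.

```python
# Sketch (assumption, not the released CogVideo code) of multi-frame-rate
# conditioning: a frame-rate token is prepended to the text tokens, followed
# by the video-frame tokens, forming one autoregressive sequence.

FRAME_RATES = [1, 2, 4, 8]  # illustrative sampling rates (frames per second)

def build_input_sequence(text_tokens, frame_tokens, fps):
    """Concatenate [frame-rate token] + text tokens + frame tokens."""
    if fps not in FRAME_RATES:
        raise ValueError(f"unsupported frame rate: {fps}")
    rate_token = f"<rate_{fps}>"  # hypothetical special-token naming
    return [rate_token] + list(text_tokens) + list(frame_tokens)

seq = build_input_sequence(["a", "cat", "runs"], ["<f0>", "<f1>"], fps=4)
# seq = ['<rate_4>', 'a', 'cat', 'runs', '<f0>', '<f1>']
```

At inference, conditioning on a high-rate token would ask the model for a densely sampled, temporally smooth clip for the same text prompt.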

Wenyi Hong, Ming Ding, Wendi Zheng, Xinghan Liu, Jie Tang • 2022

Related benchmarks

Task                     | Dataset           | Metric          | Result | Rank
Text-to-Video Generation | VBench            | Quality Score   | 82.75  | 111
Video Generation         | UCF-101 (test)    | Inception Score | 50.46  | 105
Text-to-Video Generation | MSR-VTT (test)    | CLIP Similarity | 0.2631 | 85
Text-to-Video Generation | UCF-101           | FVD             | 626    | 61
Video Generation         | UCF-101           | FVD             | 305    | 54
Text-to-Video Generation | UCF-101 zero-shot | FVD             | 701.6  | 44
Video Generation         | VBench (test)     | --              | --     | 35
Text-to-Video Generation | MSR-VTT           | CLIPSIM         | 0.2631 | 28
Video Frame Prediction   | Kinetics-600      | gFVD            | 109.2  | 28
Text-to-Video Generation | UCF-101 (test)    | FVD             | 701.6  | 25

Showing 10 of 30 rows
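The CLIPSIM / CLIP Similarity entries in the table measure how well generated frames match the text prompt. A hedged sketch of that metric, assuming it is the mean cosine similarity between a CLIP text embedding and the CLIP embedding of each generated frame; embeddings here are plain lists, while a real evaluation would use a pretrained CLIP model.

```python
# Illustrative computation of a CLIPSIM-style score: average cosine
# similarity between one text embedding and each frame embedding of a clip.
import math

def cosine(u, v):
    """Cosine similarity between two vectors given as sequences of floats."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def clipsim(text_emb, frame_embs):
    """Mean text-frame cosine similarity over all frames of a clip."""
    return sum(cosine(text_emb, f) for f in frame_embs) / len(frame_embs)

score = clipsim([1.0, 0.0], [[1.0, 0.0], [0.0, 1.0]])
# score = 0.5 (one frame aligned with the text embedding, one orthogonal)
```

FVD (Fréchet Video Distance), by contrast, compares feature distributions of real and generated videos, so lower is better, while higher CLIPSIM is better.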
