
MEVG: Multi-event Video Generation with Text-to-Video Models

About

We introduce a novel diffusion-based video generation method that generates a video depicting multiple events, given multiple individual sentences from the user. Our method does not require a large-scale video dataset, since it uses a pre-trained diffusion-based text-to-video generative model without any fine-tuning. Specifically, we propose a last frame-aware diffusion process that preserves visual coherence between consecutive videos, each depicting a different event, by initializing the latent from the last frame while simultaneously adjusting the noise in the latent to enhance the motion dynamics of the generated video. Furthermore, we find that iteratively updating latent vectors with reference to all preceding frames maintains a consistent global appearance across the frames of a video clip. To handle dynamic text input for video generation, we employ a novel prompt generator that converts coarse text descriptions from the user into multiple prompts optimized for the text-to-video diffusion model. Extensive experiments and user studies show that our proposed method is superior to other video-generative models in terms of the temporal coherency of content and semantics. Video examples are available on our project page: https://kuai-lab.github.io/eccv2024mevg.
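The last frame-aware initialization described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the blending scheme, and the `noise_scale` parameter are all assumptions made for clarity; the actual method operates inside a pre-trained text-to-video diffusion sampler.

```python
import torch

def init_next_clip_latent(prev_latents: torch.Tensor,
                          noise_scale: float = 0.5) -> torch.Tensor:
    """Hypothetical sketch of last frame-aware latent initialization.

    prev_latents: (frames, channels, h, w) latents of the previous clip.
    Returns an initial latent for the next clip that reuses the previous
    clip's last-frame latent (for visual coherence across events) and
    injects fresh Gaussian noise (to allow new motion dynamics).
    """
    last = prev_latents[-1]                        # last frame's latent
    noise = torch.randn_like(prev_latents)         # fresh noise per frame
    # Broadcast the last frame across all frames of the new clip, then
    # blend with noise; noise_scale trades coherence against dynamics.
    init = last.unsqueeze(0).expand_as(prev_latents)
    return (1.0 - noise_scale) * init + noise_scale * noise
```

With `noise_scale = 0` every frame of the new clip starts exactly from the previous clip's last frame; larger values give the sampler more freedom to introduce new motion for the next event.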

Gyeongrok Oh, Jaehwan Jeong, Sieun Kim, Wonmin Byeon, Jinkyu Kim, Sungwoong Kim, Sangpil Kim • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Video Generation | 60 multi-event prompts | CLIP-T Score | 24.4 | 11
Video Generation | 60 multi-event prompts 2 | Visual Quality | 2.13 | 11
Video Generation | VBench | Motion Smoothness | 95.3 | 11
Multi-event Video Generation | Human Evaluation Study | Omission Score | 1.41 | 7
