SnapGen-V: Generating a Five-Second Video within Five Seconds on a Mobile Device
About
We have witnessed the unprecedented success of diffusion-based video generation over the past year. Recently proposed models from the community can generate cinematic, high-resolution videos with smooth motion from arbitrary input prompts. However, because video generation subsumes image generation, these models demand far more computation and are therefore hosted mostly on cloud servers, limiting broader adoption among content creators. In this work, we propose a comprehensive acceleration framework that brings the power of large-scale video diffusion models to edge users. On the network-architecture side, we initialize from a compact image backbone and search for the design and placement of temporal layers that maximize hardware efficiency. In addition, we propose a dedicated adversarial fine-tuning algorithm for our efficient model that reduces the number of denoising steps to 4. Our model, with only 0.6B parameters, can generate a 5-second video on an iPhone 16 Pro Max within 5 seconds. Compared to server-side models that take minutes on powerful GPUs to generate a single video, we accelerate generation by orders of magnitude while delivering on-par quality.
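The abstract states that adversarial fine-tuning reduces inference to 4 denoising steps. The paper's exact sampler and scheduler are not given here, so the sketch below shows only a generic few-step Euler-style sampler over a linear noise schedule; the function names (`few_step_sample`, the toy `denoise` callable) and the schedule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def few_step_sample(denoise, shape, steps=4, seed=0):
    """Hypothetical few-step sampler: start from Gaussian noise and apply
    `steps` Euler-style updates over a linear sigma (noise-level) schedule.
    Illustrative only -- not the paper's actual scheduler."""
    rng = np.random.default_rng(seed)
    sigmas = np.linspace(1.0, 0.0, steps + 1)   # noise levels, high -> zero
    x = rng.standard_normal(shape) * sigmas[0]  # initial pure-noise latent
    for i in range(steps):
        x0_hat = denoise(x, sigmas[i])           # model's clean-latent estimate
        d = (x - x0_hat) / max(sigmas[i], 1e-8)  # direction away from x0_hat
        x = x + d * (sigmas[i + 1] - sigmas[i])  # Euler step to the next sigma
    return x

# Toy stand-in "denoiser" that always predicts an all-zero clean latent,
# so the 4-step loop should converge the sample toward zeros.
toy = lambda x, sigma: np.zeros_like(x)
out = few_step_sample(toy, shape=(2, 4, 4), steps=4)
```

With this toy denoiser each step rescales the latent by `sigmas[i+1] / sigmas[i]`, so after the final step (sigma = 0) the output is exactly the predicted clean latent; a real video model would instead return its learned estimate at each step.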
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Video Generation | VBench | Quality Score | 81.14 | 111 |
| Video Generation | VBench | Quality Score | 83.47 | 102 |
| Video Generation | VBench 2.0 (test) | Total Score | 81.14 | 44 |
| Text-to-Video Generation | VBench and Movie Gen Bench (user study) | Prompt Alignment | 44.4 | 3 |
| Video Generation | Mobile Device | Video Length (s) | 5 | 3 |