Scaling Zero-Shot Reference-to-Video Generation
About
Reference-to-video (R2V) generation aims to synthesize videos that align with a text prompt while preserving the subject identity from reference images. However, current R2V methods are hindered by their reliance on explicit reference image-video-text triplets, whose construction is expensive and difficult to scale. We bypass this bottleneck by introducing Saber, a scalable zero-shot framework that requires no explicit R2V data. Trained exclusively on video-text pairs, Saber employs a masked training strategy and a tailored attention-based model design to learn identity-consistent and reference-aware representations. Mask augmentation techniques are further integrated to mitigate the copy-paste artifacts common in reference-to-video generation. Moreover, Saber generalizes well across varying numbers of reference images and achieves superior performance on the OpenS2V-Eval benchmark compared to methods trained with explicit R2V data.
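The abstract does not spell out how pseudo-references are built from plain video-text pairs, but the masked-training idea can be illustrated with a minimal sketch: segment a subject in one frame, cut it out to serve as a stand-in reference image, and perturb the mask so the model cannot learn a pixel-exact copy-paste mapping. The function names (`augment_mask`, `build_pseudo_reference`) and the specific augmentations (random shift plus dilation) are illustrative assumptions, not Saber's actual pipeline.

```python
import numpy as np


def augment_mask(mask: np.ndarray, max_shift: int = 8, dilate: int = 5,
                 rng: np.random.Generator | None = None) -> np.ndarray:
    """Randomly shift and dilate a binary subject mask (hypothetical augmentation).

    Perturbing the mask breaks exact pixel alignment between the reference
    crop and the target video, which is one way mask augmentation can
    discourage copy-paste artifacts.
    """
    rng = rng or np.random.default_rng()
    h, w = mask.shape
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    shifted = np.zeros_like(mask)
    ys, xs = np.nonzero(mask)
    ys = np.clip(ys + dy, 0, h - 1)
    xs = np.clip(xs + dx, 0, w - 1)
    shifted[ys, xs] = 1
    # Naive square dilation: a pixel turns on if any neighbor in the window is on.
    pad = dilate // 2
    padded = np.pad(shifted, pad)
    windows = np.lib.stride_tricks.sliding_window_view(padded, (dilate, dilate))
    return (windows.max(axis=(-1, -2)) > 0).astype(mask.dtype)


def build_pseudo_reference(frame: np.ndarray, mask: np.ndarray,
                           rng: np.random.Generator | None = None):
    """Cut the masked subject out of one video frame to act as a reference image.

    frame: (H, W, 3) array; mask: (H, W) binary array marking the subject.
    Returns the tightly cropped, masked subject and the augmented mask.
    """
    aug = augment_mask(mask, rng=rng)
    reference = frame * aug[..., None]  # keep subject pixels only
    ys, xs = np.nonzero(aug)
    if len(ys):  # tight crop around the (shifted, dilated) subject region
        reference = reference[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    return reference, aug
```

In a training loop, such a pseudo-reference would be paired with the full video and its caption, so that identity consistency can be supervised without ever collecting explicit reference image-video-text triplets; how Saber actually obtains subject masks and conditions on the reference is not stated in this summary.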
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Subject-to-video generation | OpenS2V-Eval zero-shot (test) | Total Score: 57.91 | 16 |
| Reference-to-video generation | OpenS2V-Eval 2025a | Total Score: 57.91 | 9 |