Concat-ID: Towards Universal Identity-Preserving Video Synthesis
About
We present Concat-ID, a unified framework for identity-preserving video generation. Concat-ID employs variational autoencoders to extract image features, which are then concatenated with video latents along the sequence dimension. It relies exclusively on the model's inherent 3D self-attention mechanism to incorporate these image features, eliminating the need for additional parameters or modules. A novel cross-video pairing strategy and a multi-stage training regimen are introduced to balance identity consistency and facial editability while enhancing video naturalness. Extensive experiments demonstrate Concat-ID's superiority over existing methods in both single- and multi-identity generation, as well as its seamless scalability to multi-subject scenarios, including virtual try-on and background-controllable generation. Concat-ID establishes a new benchmark for identity-preserving video synthesis, providing a versatile and scalable solution for a wide range of applications.
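The core mechanism above can be sketched in a few lines: the reference image's latent tokens are simply appended to the video's latent tokens along the sequence axis, so the existing self-attention attends over both jointly. This is a minimal illustrative sketch using numpy; the shapes, names, and flattening scheme are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical shapes, chosen only for illustration (not from the paper)
B, C = 1, 16           # batch size, latent channels
T, H, W = 8, 4, 4      # video latent frames, height, width

# Video latents flattened into a token sequence of length T*H*W
video_tokens = np.random.randn(B, T * H * W, C)

# VAE-encoded reference identity image, flattened to H*W tokens
id_image_tokens = np.random.randn(B, H * W, C)

# Concat-ID's key idea: concatenate along the sequence (token) dimension,
# so self-attention mixes identity and video tokens with no extra modules
joint_sequence = np.concatenate([video_tokens, id_image_tokens], axis=1)

print(joint_sequence.shape)  # (1, 144, 16): 128 video + 16 image tokens
```

Because the extra tokens enter only through the sequence dimension, no new cross-attention layers or adapter parameters are required; the 3D self-attention already present in the video backbone handles the fusion.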
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Identity-Preserving Video Generation | OpenS2V (test) | Face Similarity | 0.501 | 17 |
| Single-ID Video Generation | Single-ID (evaluation) | ID-Sim | 41.7 | 13 |
| Face Identity Preservation | Face Identity Preservation Evaluation Set | FaceSim | 60.56 | 4 |
| Single-face identity-consistent video generation | Single-face identity-consistent video generation dataset (220 videos) | ArcSim | 0.467 | 3 |