
CETCAM: Camera-Controllable Video Generation via Consistent and Extensible Tokenization

About

Achieving precise camera control in video generation remains challenging: existing methods often rely on camera pose annotations that are difficult to scale to large, dynamic datasets and are frequently inconsistent with depth estimation, leading to train-test discrepancies. We introduce CETCAM, a camera-controllable video generation framework that eliminates the need for camera annotations through a consistent and extensible tokenization scheme. CETCAM leverages recent advances in geometry foundation models, such as VGGT, to estimate depth and camera parameters, and converts these estimates into unified, geometry-aware tokens. The tokens are integrated into a pretrained video diffusion backbone via lightweight context blocks. Trained in two progressive stages, CETCAM first learns robust camera controllability from diverse raw video data and then refines fine-grained visual quality on curated high-fidelity datasets. Extensive experiments across multiple benchmarks demonstrate state-of-the-art geometric consistency, temporal stability, and visual realism. Moreover, CETCAM adapts readily to additional control modalities, including inpainting and layout control, highlighting its flexibility beyond camera control. The project page is available at https://sjtuytc.github.io/CETCam_project_page.github.io/.
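The abstract describes converting estimated depth and camera parameters into unified, geometry-aware tokens. CETCAM's actual tokenizer is not detailed here, but the general unproject-and-patchify idea can be sketched as follows: back-project each pixel through the estimated intrinsics and camera-to-world pose into a world-space point map, then group the point map into patch tokens. The function name `geometry_tokens`, the patch size, and the token layout are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def geometry_tokens(depth, K, c2w, patch=8):
    """Hypothetical sketch of a geometry-aware tokenizer.

    depth: (H, W) depth map (e.g. estimated by a geometry foundation
           model such as VGGT); K: 3x3 intrinsics; c2w: 4x4 camera-to-world
    pose. Returns one token per patch x patch block, with channels holding
    the flattened world-space point map for that block.
    """
    H, W = depth.shape
    # Pixel grid in homogeneous coordinates.
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(np.float64)
    # Back-project to camera space, then transform to world space.
    cam = (np.linalg.inv(K) @ pix.reshape(-1, 3).T).T * depth.reshape(-1, 1)
    world = (c2w[:3, :3] @ cam.T).T + c2w[:3, 3]
    pts = world.reshape(H, W, 3)
    # Patchify: (H/p * W/p) tokens, each of dimension 3 * p^2.
    hp, wp = H // patch, W // patch
    tokens = (pts[:hp * patch, :wp * patch]
              .reshape(hp, patch, wp, patch, 3)
              .transpose(0, 2, 1, 3, 4)
              .reshape(hp * wp, patch * patch * 3))
    return tokens

# Toy example: a flat depth plane seen by an identity-pose camera.
depth = np.full((16, 16), 2.0)
K = np.array([[16.0, 0.0, 8.0], [0.0, 16.0, 8.0], [0.0, 0.0, 1.0]])
c2w = np.eye(4)
toks = geometry_tokens(depth, K, c2w)
print(toks.shape)  # (4, 192)
```

In a setup like this, such tokens would be consumed by lightweight context blocks (e.g. cross-attention adapters) inside the frozen diffusion backbone; because they are computed from estimated geometry rather than ground-truth pose labels, no camera annotations are required at training time.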

Zelin Zhao, Xinyu Gong, Bangya Liu, Ziyang Song, Jun Zhang, Suhui Wu, Yongxin Chen, Hao Zhang • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Camera-controlled Video Generation | UNI3C-OOD-Challenging Dataset | Overall Score 85.84 | 5 |
| Camera-controlled Video Generation | CameraBench Dataset | Overall Score 86.12 | 5 |
| Camera-controlled Video Generation | HoIHQ Benchmark | Overall Score 87.24 | 5 |
