FlexAM: Flexible Appearance-Motion Decomposition for Versatile Video Generation Control
About
Effective and generalizable control in video generation remains a significant challenge. While many methods rely on ambiguous or task-specific signals, we argue that a fundamental disentanglement of "appearance" and "motion" provides a more robust and scalable pathway. We propose FlexAM, a unified framework built upon a novel 3D control signal. This signal represents video dynamics as a point cloud, introducing three key enhancements: multi-frequency positional encoding to distinguish fine-grained motion, depth-aware positional encoding, and a flexible control signal for balancing precision and generative quality. This representation allows FlexAM to effectively disentangle appearance and motion, enabling a wide range of tasks including I2V/V2V editing, camera control, and spatial object editing. Extensive experiments demonstrate that FlexAM achieves superior performance across all evaluated tasks.
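As an illustration of the kind of encoding the abstract describes, the sketch below applies a multi-frequency sinusoidal positional encoding to 3D point coordinates. The function name, frequency schedule, and tensor shapes are illustrative assumptions, not FlexAM's exact formulation:

```python
import numpy as np

def multi_freq_pe(points, num_freqs=4):
    """Multi-frequency sinusoidal positional encoding for 3D points.

    A minimal sketch of encoding point-cloud coordinates at several
    frequency bands so that fine-grained motion remains distinguishable;
    details here are assumptions, not the paper's formulation.

    points: (N, 3) array of (x, y, z) coordinates, assumed roughly in [-1, 1].
    Returns: (N, 3 * 2 * num_freqs) encoded features.
    """
    freqs = 2.0 ** np.arange(num_freqs)           # frequency bands 1, 2, 4, 8
    # (N, 3, F): scale each coordinate by each frequency band
    scaled = points[:, :, None] * freqs[None, None, :] * np.pi
    # sin and cos per band, flattened into one feature vector per point
    enc = np.concatenate([np.sin(scaled), np.cos(scaled)], axis=-1)
    return enc.reshape(points.shape[0], -1)

pts = np.array([[0.1, -0.5, 0.9]])
print(multi_freq_pe(pts).shape)  # (1, 24)
```

Higher frequency bands separate points that are close in raw coordinates, which is why such encodings help distinguish subtle motion differences.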
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Camera Controllability | RealEstate10K (test) | mRotErr | 1.097 | 10 |
| Motion Transfer | Qwen Image Edit style-transferred images (test) | Texture Alignment | 32.55 | 4 |
| Object Manipulation | Object Manipulation (test) | CLIP Score | 95.36 | 3 |