VQ-Style: Disentangling Style and Content in Motion with Residual Quantized Representations
About
Human motion data is inherently rich and complex, containing both semantic content and subtle stylistic features that are challenging to model. We propose a novel method for effectively disentangling style and content in human motion data to facilitate style transfer. Our approach is guided by the insight that content corresponds to coarse motion attributes, while style captures the finer, expressive details. To model this hierarchy, we employ Residual Vector Quantized Variational Autoencoders (RVQ-VAEs) to learn a coarse-to-fine representation of motion. We further enhance the disentanglement by combining codebook learning with contrastive learning and a novel information-leakage loss that organizes content and style across different codebooks. We harness this disentangled representation with Quantized Code Swapping, a simple and effective inference-time technique that enables motion style transfer without any fine-tuning for unseen styles. Our framework demonstrates strong versatility across multiple inference applications, including style transfer, style removal, and motion blending.
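The core ideas above (residual quantization as a coarse-to-fine hierarchy, and swapping codes between two motions at inference) can be sketched with a minimal NumPy example. This is an illustrative toy, not the paper's implementation: the function names (`rvq_encode`, `rvq_decode`, `quantized_code_swap`), the codebook sizes, and the assumption that the first level(s) carry content while later levels carry style are all ours for illustration.

```python
import numpy as np

def rvq_encode(x, codebooks):
    """Residual vector quantization: each level quantizes the residual
    left over by the previous levels, giving a coarse-to-fine code stack."""
    codes, residual = [], x.copy()
    for cb in codebooks:
        # nearest codebook entry for each frame vector in the residual
        dists = np.linalg.norm(residual[:, None, :] - cb[None, :, :], axis=-1)
        idx = dists.argmin(axis=1)
        codes.append(idx)
        residual = residual - cb[idx]  # pass the remainder to the next level
    return codes

def rvq_decode(codes, codebooks):
    """Reconstruction is the sum of the selected entries across all levels."""
    return sum(cb[idx] for idx, cb in zip(codes, codebooks))

def quantized_code_swap(content_codes, style_codes, n_content_levels):
    """Toy version of inference-time code swapping: keep the coarse
    (content) levels from one motion and the fine (style) levels from another."""
    return content_codes[:n_content_levels] + style_codes[n_content_levels:]

# Toy usage: 3 quantization levels, 8 entries per codebook, 4-dim features.
rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(8, 4)) for _ in range(3)]
content_motion = rng.normal(size=(5, 4))   # 5 "frames" of a content motion
style_motion = rng.normal(size=(5, 4))     # 5 "frames" of a style motion

c_codes = rvq_encode(content_motion, codebooks)
s_codes = rvq_encode(style_motion, codebooks)
swapped = quantized_code_swap(c_codes, s_codes, n_content_levels=1)
stylized = rvq_decode(swapped, codebooks)
```

Decoding the swapped code stack yields a motion whose coarse structure follows the content input while its fine residuals come from the style input, which is the intuition behind applying style transfer to unseen styles without fine-tuning.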
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Motion Style Transfer | 100STYLE (test) | Style Accuracy | 96.88 | 6 |
| Motion Style Transfer | Aberman (train) | Top-1 Style Accuracy | 83.38 | 2 |
| Motion Style Transfer | Aberman (test) | Top-1 Style Accuracy | 80.91 | 2 |
| Motion Style Transfer | Xia styles | Top-1 Style Accuracy | 53.85 | 2 |
| Motion Style Transfer | 100STYLE (Unseen style) | Style Accuracy | 68.95 | 1 |