
VQ-Style: Disentangling Style and Content in Motion with Residual Quantized Representations

About

Human motion data is inherently rich and complex, containing both semantic content and subtle stylistic features that are challenging to model. We propose a novel method for effectively disentangling style and content in human motion data to facilitate style transfer. Our approach is guided by the insight that content corresponds to coarse motion attributes while style captures the finer, expressive details. To model this hierarchy, we employ Residual Vector Quantized Variational Autoencoders (RVQ-VAEs) to learn a coarse-to-fine representation of motion. We further enhance the disentanglement by integrating codebook learning with contrastive learning and a novel information leakage loss that organizes content and style across different codebooks. We harness this disentangled representation with a simple and effective inference-time technique, Quantized Code Swapping, which enables motion style transfer without any fine-tuning for unseen styles. Our framework demonstrates strong versatility across multiple inference applications, including style transfer, style removal, and motion blending.
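The coarse-to-fine idea behind residual quantization and code swapping can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names (`residual_quantize`, `quantized_code_swap`), the number of levels, and the choice of keeping one coarse level for content are all illustrative assumptions; a real RVQ-VAE would learn the codebooks and decode the swapped latent with a trained decoder.

```python
import numpy as np

def residual_quantize(z, codebooks):
    """Residually quantize latent vectors z with a stack of codebooks.
    Each level quantizes the residual left over by the previous level,
    giving a coarse-to-fine decomposition of the latent."""
    residual = z.copy()
    indices, quantized = [], []
    for cb in codebooks:
        # nearest codebook entry for each latent vector at this level
        d = np.linalg.norm(residual[:, None, :] - cb[None, :, :], axis=-1)
        idx = d.argmin(axis=1)
        q = cb[idx]
        indices.append(idx)
        quantized.append(q)
        residual = residual - q  # pass the remainder to the next level
    return indices, quantized

def quantized_code_swap(content_q, style_q, content_levels=1):
    """Hypothetical code swap: keep the first `content_levels` (coarse)
    quantized residuals from the content motion, take the remaining
    (fine) residuals from the style motion, and sum them into one latent."""
    levels = content_q[:content_levels] + style_q[content_levels:]
    return np.sum(np.stack(levels), axis=0)

rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(32, 8)) for _ in range(3)]  # 3 RVQ levels, toy sizes
z_content = rng.normal(size=(16, 8))  # latent frames of the content motion
z_style = rng.normal(size=(16, 8))    # latent frames of the style motion

_, q_c = residual_quantize(z_content, codebooks)
_, q_s = residual_quantize(z_style, codebooks)
z_transfer = quantized_code_swap(q_c, q_s)  # decode this latent for style transfer
print(z_transfer.shape)  # (16, 8)
```

Because swapping happens purely on already-quantized codes at inference time, no gradient step or fine-tuning is needed for a new style: quantizing the style clip and recombining levels is the entire procedure.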

Fatemeh Zargarbashi, Dhruv Agrawal, Jakob Buhmann, Martin Guay, Stelian Coros, Robert W. Sumner • 2026

Related benchmarks

Task                   Dataset                    Metric                 Result   Rank
Motion Style Transfer  100STYLE (test)            Style Accuracy         96.88    6
Motion Style Transfer  Aberman (train)            Top-1 Style Accuracy   83.38    2
Motion Style Transfer  Aberman (test)             Top-1 Style Accuracy   80.91    2
Motion Style Transfer  Xia styles                 Top-1 Style Accuracy   53.85    2
Motion Style Transfer  100STYLE (unseen styles)   Style Accuracy         68.95    1
