
SALAD: Skeleton-aware Latent Diffusion for Text-driven Motion Generation and Editing

About

Text-driven motion generation has advanced significantly with the rise of denoising diffusion models. However, previous methods often oversimplify the representations of skeletal joints, temporal frames, and textual words, limiting their ability to fully capture the information within each modality and the interactions between them. Moreover, when pre-trained models are applied to downstream tasks such as editing, they typically require additional effort, including manual intervention, optimization, or fine-tuning. In this paper, we introduce SALAD, a skeleton-aware latent diffusion model that explicitly captures the intricate inter-relationships between joints, frames, and words. Furthermore, by leveraging the cross-attention maps produced during the generation process, we enable attention-based zero-shot text-driven motion editing with a pre-trained SALAD model, requiring no additional user input beyond text prompts. Our approach significantly outperforms previous methods in terms of text-motion alignment without compromising generation quality, and demonstrates practical versatility by providing diverse editing capabilities beyond generation. Code is available on the project page.
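The editing capability described above hinges on reusing the cross-attention maps between motion tokens and prompt words. The paper's exact procedure is not reproduced here, but a common attention-reweighting sketch (in the spirit of prompt-to-prompt editing, with all names and shapes chosen for illustration) looks like this:

```python
import numpy as np

def reweight_cross_attention(attn, word_index, scale):
    """Scale the attention each motion token pays to one prompt word,
    then renormalize so every row remains a probability distribution.

    attn: (num_motion_tokens, num_words) row-stochastic cross-attention map.
    word_index: index of the prompt word to emphasize (scale > 1)
                or suppress (scale < 1).
    """
    edited = attn.copy()
    edited[:, word_index] *= scale
    edited /= edited.sum(axis=1, keepdims=True)  # rows sum to 1 again
    return edited

# Toy example: 4 motion tokens attending over a 3-word prompt.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 3))
attn = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Emphasize word 1; a diffusion model would inject this edited map
# back into its denoising steps to steer the generated motion.
edited = reweight_cross_attention(attn, word_index=1, scale=2.0)
```

Renormalizing after scaling keeps the map a valid attention distribution, so the rest of the denoising pass can consume it unchanged.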

Seokhyeon Hong, Chaelin Kim, Serin Yoon, Junghyun Nam, Sihun Cha, Junyong Noh • 2025

Related benchmarks

Task                           Dataset                      Result              Rank
Text-to-motion generation      HumanML3D (test)             FID 0.076           331
Text-to-motion generation      KIT-ML (test)                FID 0.296           115
Text-driven Motion Generation  HumanML3D (test)             R-Precision@1 58.1  36
Text-to-motion generation      Extended HumanML3D (test)    Top-1 Acc 58.1      4
