
EasyTune: Efficient Step-Aware Fine-Tuning for Diffusion-Based Motion Generation

About

In recent years, motion generative models have advanced significantly, yet they still struggle to align with downstream objectives. Recent studies have shown that using differentiable rewards to directly align the preferences of diffusion models yields promising results. However, these methods suffer from (1) inefficient, coarse-grained optimization and (2) high memory consumption. In this work, we first identify, both theoretically and empirically, the key cause of these limitations: the recursive dependence between steps in the denoising trajectory. Motivated by this insight, we propose EasyTune, which fine-tunes the diffusion model at each denoising step rather than over the entire trajectory. This decouples the recursive dependence, allowing us to perform (1) dense, fine-grained and (2) memory-efficient optimization. Furthermore, the scarcity of paired preference motion data limits the training of motion reward models. To address this, we further introduce a Self-refinement Preference Learning (SPL) mechanism that dynamically identifies preference pairs and performs preference learning on them. Extensive experiments demonstrate that EasyTune outperforms DRaFT-50 by 8.2% in alignment (MM-Dist) improvement while requiring only 31.16% of its additional memory overhead and achieving a 7.3x training speedup. The project page is available at https://xiaofeng-tan.github.io/projects/EasyTune/index.html
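The core idea in the abstract, cutting the recursive dependence between denoising steps so that reward gradients flow through a single step instead of the whole trajectory, can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the denoiser, the update rule, and the stand-in reward are all assumptions for demonstration.

```python
# Sketch of step-wise reward fine-tuning: detach each step's input so the
# autograd graph covers only one denoising step, keeping memory flat
# (vs. backpropagating a reward through the entire trajectory).
# TinyDenoiser and reward() are hypothetical stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

class TinyDenoiser(nn.Module):
    def __init__(self, dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 32), nn.ReLU(), nn.Linear(32, dim)
        )

    def forward(self, x, t):
        # condition on the scalar timestep
        t_emb = torch.full_like(x[:, :1], float(t))
        return self.net(torch.cat([x, t_emb], dim=-1))

def reward(x):
    # stand-in differentiable reward: prefer samples near the origin
    return -x.pow(2).mean()

model = TinyDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
T = 5
x = torch.randn(4, 8)

losses = []
for t in range(T, 0, -1):
    x = x.detach()             # decouple from earlier steps (no recursion)
    x = x - 0.1 * model(x, t)  # one denoising update
    loss = -reward(x)          # optimize this step's output only
    opt.zero_grad()
    loss.backward()            # graph spans a single step
    opt.step()
    losses.append(loss.item())
```

Because each step is optimized independently, every denoising step yields a gradient signal (dense, fine-grained updates), whereas full-trajectory approaches such as DRaFT must hold all T steps in memory for one update.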

Xiaofeng Tan, Wanjiang Weng, Haodong Lei, Hongsong Wang• 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
--- | --- | --- | --- | ---
Text-to-motion generation | HumanML3D (test) | FID | 0.069 | 331
Text-to-motion generation | KIT-ML (test) | FID | 0.284 | 115
Motion-to-text retrieval | KIT-ML | R@1 | 55.11 | 16
Text-to-motion retrieval | KIT-ML | R@1 | 53.27 | 16
Motion-text retrieval | HumanML3D | R@1 | 70.23 | 6
Text-motion retrieval | HumanML3D | R@1 | 69.31 | 6
