
MotionGrounder: Grounded Multi-Object Motion Transfer via Diffusion Transformer

About

Motion transfer enables controllable video generation by transferring temporal dynamics from a reference video to synthesize a new video conditioned on a target caption. However, existing Diffusion Transformer (DiT)-based methods are limited to single-object videos, restricting fine-grained control in real-world scenes with multiple objects. In this work, we introduce MotionGrounder, the first DiT-based framework to handle motion transfer with multi-object controllability. Our Flow-based Motion Signal (FMS) in MotionGrounder provides a stable motion prior for target video generation, while our Object-Caption Alignment Loss (OCAL) grounds object captions to their corresponding spatial regions. We further propose a new Object Grounding Score (OGS), which jointly evaluates (i) spatial alignment between source video objects and their generated counterparts and (ii) semantic consistency between each generated object and its target caption. Our experiments show that MotionGrounder consistently outperforms recent baselines across quantitative, qualitative, and human evaluations.
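The abstract describes OGS as jointly scoring per-object spatial alignment and per-object caption consistency, but does not give the exact formula. The sketch below is a hypothetical reading of that two-part design: it assumes mask IoU for the spatial term, a precomputed text-image similarity in [0, 1] for the semantic term, and an average of their product over objects. The function name and aggregation are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def object_grounding_score(src_masks, gen_masks, caption_sims):
    """Hypothetical sketch of an Object Grounding Score (OGS).

    src_masks:    list of boolean masks for objects in the source video
    gen_masks:    list of boolean masks for the corresponding generated objects
    caption_sims: per-object text-image similarity scores in [0, 1]
                  (e.g. from a vision-language model; assumed precomputed)
    """
    per_object = []
    for src, gen, sim in zip(src_masks, gen_masks, caption_sims):
        # Spatial alignment term: IoU between source and generated masks.
        inter = np.logical_and(src, gen).sum()
        union = np.logical_or(src, gen).sum()
        iou = inter / union if union > 0 else 0.0
        # Combine spatial and semantic terms per object (assumed product).
        per_object.append(iou * sim)
    # Aggregate over all objects (assumed mean).
    return float(np.mean(per_object))
```

For a perfectly aligned object mask, the score reduces to the semantic similarity alone, so the metric only rewards objects that are both in the right place and match their caption.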

Samuel Teodoro, Yun Chen, Agus Gunawan, Soo Ye Kim, Jihyong Oh, Munchurl Kim • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Grounded Multi-Object Motion Transfer | Caption setting | MF (Motion Fidelity) | 68.75 | 6 |
| Grounded Multi-Object Motion Transfer | Subject setting | MF (Motion Fidelity) | 0.6818 | 6 |
| Grounded Multi-Object Motion Transfer | Scene setting | MF (Motion Fidelity) | 67.46 | 6 |
| Grounded Multi-Object Motion Transfer | All setting | MF (Motion Fidelity) | 68.13 | 6 |
| Grounded Multi-Object Motion Transfer | User Study | MA | 2.99 | 6 |
