
Space-Time Diffusion Features for Zero-Shot Text-Driven Motion Transfer

About

We present a new method for text-driven motion transfer: synthesizing a video that complies with an input text prompt describing the target objects and scene while preserving the motion and scene layout of an input video. Prior methods are confined to transferring motion between two subjects within the same or closely related object categories and are applicable only to limited domains (e.g., humans). In this work, we consider a significantly more challenging setting in which the target and source objects differ drastically in shape and in fine-grained motion characteristics (e.g., translating a jumping dog into a dolphin). To this end, we leverage a pre-trained, fixed text-to-video diffusion model, which provides us with generative and motion priors. The pillar of our method is a new space-time feature loss derived directly from the model. This loss guides the generation process to preserve the overall motion of the input video while complying with the target object in terms of shape and fine-grained motion traits.
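The idea of a feature loss that preserves motion while abstracting away object shape can be sketched in a few lines. The sketch below is a hypothetical simplification, not the paper's actual implementation: it assumes space-time diffusion features have already been extracted as a `(T, H, W, C)` array per video, averages them over the spatial dimensions to discard exact shape, and matches the pairwise differences of those per-frame summaries between the source and generated videos.

```python
import numpy as np

def spatial_marginal_mean(feats):
    # feats: (T, H, W, C) space-time features for one video.
    # Averaging over the spatial dims abstracts away the object's exact
    # shape and position while retaining per-frame feature statistics.
    return feats.mean(axis=(1, 2))  # -> (T, C)

def pairwise_diffs(x):
    # Differences between all pairs of frame summaries capture how the
    # features evolve over time, i.e. the global motion structure.
    return x[:, None, :] - x[None, :, :]  # -> (T, T, C)

def motion_guidance_loss(src_feats, gen_feats):
    # Hypothetical guidance objective: make the generated video's
    # pairwise frame-summary differences match the source video's,
    # without forcing the raw features (and hence the shape) to match.
    d_src = pairwise_diffs(spatial_marginal_mean(src_feats))
    d_gen = pairwise_diffs(spatial_marginal_mean(gen_feats))
    return float(((d_src - d_gen) ** 2).mean())
```

In a diffusion-guidance loop, the gradient of such a loss with respect to the generated latents would be used to steer each denoising step, which is why the loss deliberately discards spatial layout and keeps only temporal structure.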

Danah Yatim, Rafail Fridman, Omer Bar-Tal, Yoni Kasten, Tali Dekel • 2023

Related benchmarks

Task             | Dataset                          | Metric          | Result | Rank
-----------------|----------------------------------|-----------------|--------|-----
Video Generation | VBench                           | –               | –      | 102
Motion Transfer  | DAVIS Caption                    | MF Score        | 0.782  | 12
Motion Transfer  | DAVIS Scene                      | MF Score        | 0.776  | 12
Motion Transfer  | DAVIS All                        | MF              | 0.766  | 12
Motion Transfer  | DAVIS Subject                    | MF              | 74.1   | 12
Video Editing    | EditVerseBench Appearance (test) | Pick Score      | 19.73  | 12
Video Editing    | TGVE benchmark                   | Pick Score      | 20.4   | 11
Video Editing    | EditVerseBench 125 videos        | CLIP Score      | 96.5   | 11
Video Editing    | EditVerse latest (full)          | Editing Quality | 4.2    | 11
Video Editing    | EgoEditBench                     | VLM Score       | 4.59   | 10

Showing 10 of 21 rows.
