Tora3: Trajectory-Guided Audio-Video Generation with Physical Coherence
About
Audio-video (AV) generation has recently made strong progress in perceptual quality and multimodal coherence, yet generating content with plausible motion-sound relations remains challenging. Existing methods often produce object motions that are visually unstable and sounds that are only loosely aligned with salient motion or contact events, largely because they lack an explicit motion-aware structure shared by video and audio generation. We present Tora3, a trajectory-guided AV generation framework that improves physical coherence by using object trajectories as a shared kinematic prior. Rather than treating trajectories as a video-only control signal, Tora3 uses them to jointly guide visual motion and acoustic events. Specifically, we design a trajectory-aligned motion representation for video, a kinematic-audio alignment module driven by trajectory-derived second-order kinematic states, and a hybrid flow matching scheme that preserves trajectory fidelity in trajectory-conditioned regions while maintaining local coherence elsewhere. We further curate PAV, a large-scale AV dataset emphasizing motion-relevant patterns with automatically extracted motion annotations. Extensive experiments show that Tora3 improves motion realism, motion-sound synchronization, and overall AV generation quality over strong open-source baselines.
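The paper does not publish the exact formulation, but the "trajectory-derived second-order kinematic states" used to drive the kinematic-audio alignment module can be illustrated with a minimal sketch: given a per-frame object trajectory, finite differences yield velocity (first-order) and acceleration (second-order) states, plus a scalar motion-intensity signal. The function name, the `fps` parameter, and the finite-difference scheme below are assumptions for illustration, not Tora3's actual implementation.

```python
import numpy as np

def kinematic_states(traj, fps=24):
    """Derive first- and second-order kinematic states from a 2D trajectory.

    traj: (T, 2) array of per-frame object positions.
    Returns velocity, acceleration, and scalar speed (motion intensity).
    Illustrative sketch only; not the paper's exact formulation.
    """
    dt = 1.0 / fps
    vel = np.gradient(traj, dt, axis=0)    # first-order state: velocity
    acc = np.gradient(vel, dt, axis=0)     # second-order state: acceleration
    speed = np.linalg.norm(vel, axis=1)    # scalar motion intensity
    return vel, acc, speed

# Example: a point under uniform acceleration a = 3.0 along x
t = np.arange(8) / 24.0
traj = np.stack([0.5 * 3.0 * t**2, np.zeros_like(t)], axis=1)
vel, acc, speed = kinematic_states(traj, fps=24)
```

In a full system, peaks in `acc` or `speed` would mark candidate contact or impact events for aligning acoustic onsets with visible motion.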
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Motion-conditioned Audio-Video Generation | Audio-Video Generation Evaluation Set | AS | 4.61 | 5 |
| Event-level Audio Timing | User Study | Win Rate | 65.9 | 4 |
| Motion-Sound Intensity Alignment | User Study | Win Rate | 66.1 | 4 |
| Overall Preference | User Study | Win Rate | 63.9 | 4 |
| Video Quality | User Study | Win Rate | 59.2 | 4 |
| Motion Alignment | User Study | Win Rate | 45.1 | 1 |