
Future Optical Flow Prediction Improves Robot Control & Video Generation

About

Future motion representations, such as optical flow, offer immense value for control and generative tasks. However, forecasting generalizable spatially dense motion representations remains a key challenge, and learning such forecasting from noisy, real-world data is relatively unexplored. We introduce FOFPred, a novel language-conditioned optical flow forecasting model featuring a unified Vision-Language Model (VLM) and Diffusion architecture. This combination pairs strong multimodal reasoning with pixel-level generative fidelity for future motion prediction. Our model is trained on web-scale human activity data, a highly scalable but unstructured source. To extract meaningful signal from this noisy video-caption data, we employ crucial data preprocessing techniques alongside our unified architecture with strong image pretraining. The trained model is then extended to tackle two distinct downstream tasks in control and generation. Evaluations across robotic manipulation and video generation under language-driven settings establish the cross-domain versatility of FOFPred, confirming the value of a unified VLM-Diffusion architecture and scalable learning from diverse web data for future optical flow prediction.
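To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of the diffusion half of such a model: a small denoiser predicts the noise added to a future optical flow field, conditioned on a single multimodal context vector standing in for the VLM's encoding of observed frames and the language instruction. The class name, dimensions, noise schedule, and conditioning scheme are all illustrative assumptions, not FOFPred's actual design.

```python
# Hypothetical sketch: a diffusion denoiser that predicts future optical
# flow conditioned on a multimodal (vision + language) context vector.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowDenoiser(nn.Module):
    """Epsilon-prediction network for a (2, H, W) future flow field,
    conditioned on a context embedding and a diffusion timestep."""
    def __init__(self, ctx_dim=512, hidden=64):
        super().__init__()
        self.cond = nn.Linear(ctx_dim + 1, hidden)  # fuse context and timestep
        self.net = nn.Sequential(
            nn.Conv2d(2 + hidden, hidden, 3, padding=1), nn.GELU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.GELU(),
            nn.Conv2d(hidden, 2, 3, padding=1),  # 2 channels: (u, v) flow
        )

    def forward(self, noisy_flow, ctx, t):
        b, _, h, w = noisy_flow.shape
        c = self.cond(torch.cat([ctx, t[:, None]], dim=-1))  # (B, hidden)
        c = c[:, :, None, None].expand(b, -1, h, w)          # spatial broadcast
        return self.net(torch.cat([noisy_flow, c], dim=1))   # predicted noise

# Toy training step. In FOFPred the context would come from a VLM encoding
# of observed frames plus the language instruction; here it is a stand-in.
model = FlowDenoiser()
ctx = torch.randn(4, 512)               # stand-in for the VLM embedding
clean_flow = torch.randn(4, 2, 32, 32)  # ground-truth future optical flow
t = torch.rand(4)                       # diffusion timesteps in [0, 1)
noise = torch.randn_like(clean_flow)
alpha = (1.0 - t)[:, None, None, None]  # crude linear schedule (assumed)
noisy_flow = alpha.sqrt() * clean_flow + (1 - alpha).sqrt() * noise
loss = F.mse_loss(model(noisy_flow, ctx, t), noise)  # epsilon-prediction loss
```

At inference time, the same network would be applied iteratively to denoise an initial Gaussian sample into a predicted future flow field, which downstream control or generation modules could then consume.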

Kanchana Ranasinghe, Honglu Zhou, Yu Fang, Luyu Yang, Le Xue, Ran Xu, Caiming Xiong, Silvio Savarese, Michael S. Ryoo, Juan Carlos Niebles • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Long-horizon robotic manipulation | CALVIN ABC→D (Zero-shot) | Task 1 Success Rate: 98.8 | 16 |
| Language-driven motion control in Text-to-Video generation | SSv2 (val) | FVD: 75.39 | 8 |
| Bimanual Robot Manipulation | RoboTwin 2.0 (easy setting) | Handover Block: 61 | 7 |

Other info

GitHub
