Learning Pixel Trajectories with Multiscale Contrastive Random Walks
About
A range of video modeling tasks, from optical flow to multiple object tracking, share the same fundamental challenge: establishing space-time correspondence. Yet the approaches that dominate each task differ. We take a step toward bridging this gap by extending the recent contrastive random walk formulation to much denser, pixel-level space-time graphs. The main contribution is introducing hierarchy into the search problem: the transition matrix between two frames is computed in a coarse-to-fine manner, forming a multiscale contrastive random walk when extended in time. This yields a unified technique for self-supervised learning of optical flow, keypoint tracking, and video object segmentation. Experiments show that, on each of these tasks, the unified model is competitive with strong self-supervised approaches specialized for that task. Project webpage: https://jasonbian97.github.io/flowwalk
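To make the core idea concrete, here is a minimal sketch of a contrastive random walk over two sets of pixel embeddings: an affinity matrix between frames is softmax-normalized into a row-stochastic transition matrix, transitions are chained forward and back through time, and a cycle-consistency loss asks each node to return to itself. This is an illustrative NumPy sketch, not the paper's implementation; the function names (`transition_matrix`, `cycle_walk_loss`), the temperature value, and the flat (nodes x dims) feature layout are assumptions, and the multiscale coarse-to-fine search is omitted for brevity.

```python
import numpy as np

def transition_matrix(feats_a, feats_b, temperature=0.07):
    """Row-stochastic transition matrix between two frames' node
    embeddings (one node per pixel/patch). feats_*: (N, D) arrays,
    assumed L2-normalized so the dot product is cosine similarity."""
    affinity = feats_a @ feats_b.T / temperature       # (N, N) similarities
    affinity -= affinity.max(axis=1, keepdims=True)    # numerical stability
    probs = np.exp(affinity)
    return probs / probs.sum(axis=1, keepdims=True)    # softmax over rows

def cycle_walk_loss(frame_feats):
    """Chain transitions forward through the frame sequence and back
    again (a palindrome walk); supervise with cycle consistency by
    penalizing probability mass that does not return to the start."""
    seq = frame_feats + frame_feats[::-1][1:]          # t0..tN..t0
    walk = np.eye(frame_feats[0].shape[0])
    for a, b in zip(seq[:-1], seq[1:]):
        walk = walk @ transition_matrix(a, b)          # compose transitions
    # Cross-entropy against identity targets: each node should end
    # where it started after the round trip.
    return -np.mean(np.log(np.diag(walk) + 1e-9))
```

In the paper's multiscale variant, the transition matrix between two frames is itself built coarse-to-fine rather than with the single dense softmax above, which is what makes pixel-level graphs tractable.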
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Point Tracking | DAVIS | AJ | 24.4 | 38 |
| Point Tracking | Kinetics | delta_avg | 55.5 | 24 |
| Pose Propagation | JHMDB | PCK@0.1 | 63.1 | 20 |
| Point Tracking | Kubric | AJ | 51.1 | 18 |
| Segment Propagation | DAVIS | J&Fm Score | 57.9 | 7 |