
Multigrid Predictive Filter Flow for Unsupervised Learning on Videos

About

We introduce multigrid Predictive Filter Flow (mgPFF), a framework for unsupervised learning on videos. mgPFF takes a pair of frames as input and outputs per-pixel filters that warp one frame into the other. Compared to optical-flow-based warping, mgPFF is more powerful at modeling sub-pixel movement and handling corruption (e.g., motion blur). We develop a multigrid coarse-to-fine modeling strategy that avoids the need to learn large filters to capture large displacements. This allows us to train an extremely compact model (4.6 MB) that operates progressively over multiple resolutions with shared weights. We train mgPFF on unsupervised, free-form videos and show that it not only estimates long-range flow for frame reconstruction and detects video shot transitions, but is also readily amenable to video object segmentation and pose tracking, where it substantially outperforms the published state-of-the-art without bells and whistles. Moreover, owing to mgPFF's per-pixel filter prediction, we have the unique opportunity to visualize how each pixel evolves while solving these tasks, thus gaining better interpretability.
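The core operation described above — warping one frame into another with a predicted per-pixel filter rather than a single displacement vector — can be sketched as follows. This is an illustrative NumPy implementation of the filter-flow warping idea, not the paper's code; the function name, array shapes, and the edge-padding choice are assumptions for the sketch.

```python
import numpy as np

def filter_flow_warp(frame, filters):
    """Warp a frame with per-pixel filters (filter-flow sketch).

    frame:   (H, W) grayscale image.
    filters: (H, W, k, k) array holding one k x k filter per output
             pixel; each output pixel is the weighted sum of the k x k
             input patch centered on it. (Shapes and edge padding are
             illustrative assumptions, not the paper's exact setup.)
    """
    H, W, k, _ = filters.shape
    pad = k // 2
    padded = np.pad(frame, pad, mode="edge")  # replicate borders
    out = np.empty((H, W), dtype=frame.dtype)
    for y in range(H):
        for x in range(W):
            patch = padded[y:y + k, x:x + k]
            out[y, x] = np.sum(patch * filters[y, x])
    return out
```

A filter that is a delta at its center reproduces the input frame; a filter that splits its weight across two neighboring taps produces the sub-pixel (interpolated) shifts that a single integer displacement cannot express, which is the advantage over plain flow-based warping noted in the abstract.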

Shu Kong, Charless Fowlkes • 2019

Related benchmarks

Task | Dataset | Metric | Result | Rank
Video Object Segmentation | DAVIS 2017 (val) | J mean | 42.2 | 1130
One-shot Video Object Segmentation | DAVIS 2016 (val) | J Mean | 40.5 | 28
Instance Segmentation Propagation | DAVIS 2017 | J Mean | 42.2 | 14
Human Pose Tracking | JHMDB (split1) | PCK @ 0.1 | 58.4 | 11
One-shot Video Object Segmentation | DAVIS 2017 (val) | J&F Mean | 44.6 | 11
Video Frame Reconstruction | DAVIS 2017 (val) | Pixel L1 Dist | 7.32 | 8

Other info

Code
