# MFT: Long-Term Tracking of Every Pixel

## About
We propose MFT -- Multi-Flow dense Tracker -- a novel method for dense, pixel-level, long-term tracking. The approach exploits optical flows estimated not only between consecutive frames, but also for pairs of frames at logarithmically spaced intervals. It selects the most reliable chain of flows on the basis of estimates of their geometric accuracy and the probability of occlusion, both provided by a pre-trained CNN. We show that MFT achieves competitive performance on the TAP-Vid benchmark, outperforming baselines by a significant margin, while tracking densely orders of magnitude faster than state-of-the-art point-tracking methods. The method is insensitive to medium-length occlusions, and its robustness is improved by estimating flow directly with respect to the reference frame, which reduces drift.
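The chaining idea above can be sketched in a few lines: for each target frame, candidate flows come from logarithmically spaced earlier frames (plus the reference frame directly), and the candidate with the lowest combined uncertainty/occlusion cost is kept. This is a minimal illustration, not the authors' implementation; the function names and the scalar costs are hypothetical (in the real method, per-pixel occlusion and uncertainty maps are predicted by a CNN).

```python
import numpy as np

def candidate_deltas(t, max_log=5):
    """Hypothetical helper: logarithmically spaced flow intervals,
    i.e. flows from frames t-1, t-2, t-4, ..., plus the flow computed
    directly from the reference frame (frame 0) to frame t."""
    deltas = [d for d in (2 ** k for k in range(max_log + 1)) if d <= t]
    if t > 0 and t not in deltas:
        deltas.append(t)  # direct flow from the reference frame
    return deltas

def select_best_chain(costs):
    """Pick the flow chain with the lowest cost; in MFT the cost would
    combine predicted geometric uncertainty and occlusion probability
    per pixel -- here each candidate is reduced to a single number."""
    return int(np.argmin(costs))

# Toy usage for target frame 10: one (illustrative) cost per candidate.
deltas = candidate_deltas(10)           # [1, 2, 4, 8, 10]
costs = np.array([0.8, 0.3, 0.5, 0.9, 0.4])
best_delta = deltas[select_best_chain(costs)]  # candidate with lowest cost
```

The direct reference-frame candidate is what limits drift: whenever it scores well, the chain is "reset" to a single flow instead of an accumulation of small errors.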
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Point Tracking | TAP-Vid Kinetics | Overall Accuracy | 72.7 | 37 |
| Point Tracking | TAP-Vid DAVIS (First) | Delta Avg (< c) | 66.8 | 19 |
| Point Tracking | TAP-Vid DAVIS (Strided) | Avg Delta Error | 70.8 | 17 |
| 2D Long-range optical flow | CVO Clean (7 frames) | EPE (all) | 2.91 | 16 |
| 2D Long-range optical flow | CVO Final (7 frames) | EPE (all) | 3.16 | 16 |
| 2D Long-range optical flow | CVO Extended (48 frames) | EPE (all) | 21.4 | 10 |