
Cost Function Unrolling in Unsupervised Optical Flow

About

Steepest descent algorithms, which are commonly used in deep learning, use the gradient as the descent direction, either as-is or after a direction shift via preconditioning. In many scenarios computing the gradient is numerically hard due to complex or non-differentiable cost functions, particularly near singular points. In this work we focus on differentiating the Total Variation (TV) semi-norm commonly used in unsupervised cost functions. Specifically, we derive a differentiable proxy to the hard L1 smoothness constraint through a novel iterative scheme we refer to as Cost Unrolling. By producing more accurate gradients during training, our method enables finer predictions of a given DNN model through improved convergence, without modifying its architecture or increasing computational complexity. We demonstrate our method on the unsupervised optical flow task. Replacing the L1 smoothness constraint with our unrolled cost during the training of a well-known baseline, we report improved results on both the MPI Sintel and KITTI 2015 unsupervised optical flow benchmarks. In particular, we report EPE reduced by up to 15.82% on occluded pixels, where the smoothness constraint is dominant, enabling the detection of much sharper motion edges.
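The core idea (iteratively replacing the non-differentiable L1 term with a smooth surrogate) can be illustrated with a minimal sketch. This is a hypothetical ADMM-style unrolling of a 1D TV penalty, not the paper's exact update rules: the auxiliary variable `z` absorbs the L1 part via soft-thresholding, leaving a quadratic term that is differentiable in the flow everywhere.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm: shrink |x| toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def unrolled_tv_cost(u, lam=1.0, rho=1.0, iters=3):
    """Smooth surrogate of lam * sum|grad(u)| via a fixed number of
    unrolled ADMM-like iterations (illustrative sketch, 1D signal).

    The hard L1 term is handled in closed form by soft-thresholding on
    the auxiliary variable z; the returned cost is a quadratic penalty
    tying grad(u) to z, so it has a well-defined gradient near zero.
    """
    g = np.diff(u)               # forward-difference gradient of the signal
    z = np.zeros_like(g)         # auxiliary variable approximating g
    w = np.zeros_like(g)         # scaled dual variable
    for _ in range(iters):
        z = soft_threshold(g + w, lam / rho)  # L1 absorbed here
        w = w + g - z                          # dual ascent step
    # differentiable proxy cost evaluated on the unrolled variables
    return 0.5 * rho * np.sum((g - z + w) ** 2)
```

For a constant signal the gradient is zero and the surrogate cost vanishes, while for signals with jumps it stays finite and smooth in `u`; in training, this surrogate would replace the raw L1 smoothness loss.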

Gal Lifshitz, Dan Raviv • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Optical Flow Estimation | KITTI 2015 (train) | Fl-epe | 2.87 | 431 |
| Optical Flow Estimation | MPI Sintel Final (train) | Endpoint Error (EPE) | 3.61 | 209 |
| Optical Flow Estimation | MPI Sintel Clean (train) | -- | -- | 202 |
| Optical Flow | MPI Sintel Clean (test) | AEE | 4.69 | 158 |
| Optical Flow | MPI Sintel Final (test) | -- | -- | 137 |
| Optical Flow | KITTI 2015 (test) | -- | -- | 95 |

Other info

Code
