
Truncated Back-propagation for Bilevel Optimization

About

Bilevel optimization has recently been revisited for designing and analyzing algorithms in hyperparameter tuning and meta learning tasks. However, due to its nested structure, evaluating exact gradients for high-dimensional problems is computationally challenging. One heuristic to circumvent this difficulty is to use the approximate gradient given by performing truncated back-propagation through the iterative optimization procedure that solves the lower-level problem. Although promising empirical performance has been reported, the theoretical properties of this heuristic are still unclear. In this paper, we analyze this family of approximate gradients and establish sufficient conditions for convergence. We validate our analysis on several hyperparameter tuning and meta learning tasks. We find that optimization with the approximate gradient computed using few-step back-propagation often performs comparably to optimization with the exact gradient, while requiring far less memory and half the computation time.

Amirreza Shaban, Ching-An Cheng, Nathan Hatch, Byron Boots • 2018
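
The approximation described in the abstract can be made concrete: run T steps of gradient descent on the lower-level problem, but retain the autograd graph only for the last K steps, then back-propagate the upper-level loss through those K steps to get an approximate hypergradient. Below is a minimal PyTorch sketch on a hypothetical toy problem (tuning a ridge-regression penalty lam); the data, step sizes, and variable names are illustrative assumptions, not the authors' code.

    import torch

    torch.manual_seed(0)
    X_tr, y_tr = torch.randn(50, 5), torch.randn(50)
    X_val, y_val = torch.randn(20, 5), torch.randn(20)

    # Upper-level variable: a regularization weight (hypothetical toy choice).
    lam = torch.tensor(0.1, requires_grad=True)

    def inner_loss(w, lam):
        # Lower-level objective: ridge regression on the training split.
        return ((X_tr @ w - y_tr) ** 2).mean() + lam * (w ** 2).sum()

    def outer_loss(w):
        # Upper-level objective: validation loss of the lower-level solution.
        return ((X_val @ w - y_val) ** 2).mean()

    T, K, alpha = 100, 10, 0.05  # T inner steps; back-propagate through only the last K

    w = torch.zeros(5, requires_grad=True)
    for t in range(T):
        keep = t >= T - K  # inside the truncation window?
        g = torch.autograd.grad(inner_loss(w, lam), w, create_graph=keep)[0]
        w = w - alpha * g
        if not keep:
            # Outside the window: drop the graph so memory stays constant in T.
            w = w.detach().requires_grad_(True)

    # Approximate hypergradient d(outer_loss)/d(lam) via K-step truncated back-propagation.
    hypergrad = torch.autograd.grad(outer_loss(w), lam)[0]
    print(float(hypergrad))

Larger K reduces the bias of the approximate hypergradient at the cost of memory linear in K; the paper's contribution is establishing sufficient conditions under which optimizing with the truncated (small-K) gradient still converges.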

Related benchmarks

Task                  | Dataset             | Metric        | Result | Rank
----------------------|---------------------|---------------|--------|-----
Image Classification  | MNIST (test)        | Accuracy      | 87.84  | 882
Image Classification  | FashionMNIST (test) | Accuracy      | 79.04  | 218
Hyper-data Cleaning   | MNIST (test)        | Test Accuracy | 0.8596 | 31
Image Classification  | CIFAR-10 (test)     | Accuracy      | 35.6   | 26
Binary Classification | Heart (test)        | --            | --     | 16
Regression            | Abalone (test)      | L2 Risk       | 6.98   | 14
Classification        | a1a (test)          | Loss          | 0.5284 | 11
Classification        | ionosphere (test)   | Loss          | 0.6443 | 11
Classification        | gisette (test)      | Loss          | 0.3896 | 11
Regression            | mg (test)           | MSE           | 0.0266 | 10

Showing 10 of 24 rows.
