On Penalty-based Bilevel Gradient Descent Method

About

Bilevel optimization enjoys a wide range of applications in emerging machine learning and signal processing problems such as hyper-parameter optimization, image reconstruction, meta-learning, adversarial training, and reinforcement learning. However, bilevel optimization problems are traditionally known to be difficult to solve. Recent progress on bilevel algorithms mainly focuses on bilevel optimization problems through the lens of the implicit-gradient method, where the lower-level objective is either strongly convex or unconstrained. In this work, we tackle a challenging class of bilevel problems through the lens of the penalty method. We show that under certain conditions, the penalty reformulation recovers the (local) solutions of the original bilevel problem. Further, we propose the penalty-based bilevel gradient descent (PBGD) algorithm and establish its finite-time convergence for the constrained bilevel problem with lower-level constraints yet without lower-level strong convexity. Experiments on synthetic and real datasets showcase the efficiency of the proposed PBGD algorithm.
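The penalty idea in the abstract can be illustrated with a short sketch. For a bilevel problem min_x f(x, y*(x)) with y*(x) = argmin_y g(x, y), the penalty reformulation minimizes f(x, y) + gamma * (g(x, y) - g*(x)) jointly over (x, y), where g*(x) = min_v g(x, v) is the lower-level value function. The toy objectives, step sizes, and penalty weight below are illustrative assumptions, not the paper's experiments:

```python
# A minimal sketch of penalty-based bilevel gradient descent on a toy
# problem (all functions and constants here are illustrative):
#   upper level: f(x, y) = (y - 1)^2 + 0.1 * x^2
#   lower level: y*(x) = argmin_y g(x, y),  with g(x, y) = (y - x)^2
# Penalty reformulation: min_{x,y} f(x, y) + gamma * (g(x, y) - g*(x)).

def f_grad(x, y):
    """Gradient of the upper-level objective f."""
    return 0.2 * x, 2.0 * (y - 1.0)

def g_grad_x(x, y):
    """Partial derivative of the lower-level objective g w.r.t. x."""
    return -2.0 * (y - x)

def g_grad_y(x, y):
    """Partial derivative of g w.r.t. y."""
    return 2.0 * (y - x)

def inner_argmin(x, v, steps=20, lr=0.4):
    """Approximate v*(x) = argmin_v g(x, v) by inner gradient descent."""
    for _ in range(steps):
        v -= lr * g_grad_y(x, v)
    return v

def pbgd(x=0.0, y=0.0, gamma=50.0, outer_steps=3000, lr=0.005):
    v = y
    for _ in range(outer_steps):
        v = inner_argmin(x, v)  # refresh the value-function anchor v*(x)
        fx, fy = f_grad(x, y)
        # By Danskin's theorem, grad_x g*(x) = g_grad_x(x, v*(x)), so the
        # penalty term gamma * (g(x, y) - g*(x)) contributes the gradients:
        gx = fx + gamma * (g_grad_x(x, y) - g_grad_x(x, v))
        gy = fy + gamma * g_grad_y(x, y)
        x -= lr * gx
        y -= lr * gy
    return x, y

x, y = pbgd()
print(x, y)  # approaches the bilevel optimum x* = 1/1.1 as gamma grows
```

Here the lower-level solution is y*(x) = x, so the true bilevel optimum is x* = 1/1.1; the penalty solution lands within O(1/gamma) of it, matching the abstract's claim that the penalty reformulation recovers (local) solutions of the original problem under suitable conditions.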

Han Shen, Quan Xiao, Tianyi Chen • 2023

Related benchmarks

Task: Hyper-data Cleaning
Dataset: MNIST (test)
Metric: Test Accuracy
Result: 0.9181
Rank: 31
