
Accelerated projected gradient algorithms for sparsity constrained optimization problems

About

We consider the projected gradient algorithm for the nonconvex best subset selection problem that minimizes a given empirical loss function under an $\ell_0$-norm constraint. By decomposing the feasible set of the given sparsity constraint as a finite union of linear subspaces, we present two acceleration schemes with global convergence guarantees, one by same-space extrapolation and the other by subspace identification. The former fully utilizes the problem structure to greatly accelerate the optimization speed at only negligible additional cost. The latter leads to a two-stage meta-algorithm that first uses classical projected gradient iterations to identify the correct subspace containing an optimal solution, and then switches to a highly efficient smooth optimization method in the identified subspace to attain superlinear convergence. Experiments demonstrate that the proposed accelerated algorithms are orders of magnitude faster than their non-accelerated counterparts as well as the state of the art.
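To make the setup concrete, here is a minimal sketch of the classical (non-accelerated) projected gradient iteration described above, applied to a least-squares loss. The projection onto the $\ell_0$-ball keeps the $k$ largest-magnitude entries (hard thresholding). All function names and the synthetic data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def project_l0(x, k):
    """Euclidean projection onto {x : ||x||_0 <= k}:
    keep the k largest-magnitude entries, zero the rest."""
    z = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    z[idx] = x[idx]
    return z

def projected_gradient(grad, x0, k, step, iters=1000):
    """Classical projected gradient for min f(x) s.t. ||x||_0 <= k."""
    x = project_l0(x0, k)
    for _ in range(iters):
        x = project_l0(x - step * grad(x), k)
    return x

# Synthetic sparse least-squares instance: f(x) = 0.5 * ||Ax - b||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50))
x_true = np.zeros(50)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true

grad = lambda x: A.T @ (A @ x - b)
step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L with L the Lipschitz constant of grad f
x_hat = projected_gradient(grad, np.zeros(50), k=5, step=step)
```

Once the iterates' support (i.e., the linear subspace in the decomposition above) stabilizes, the two-stage scheme in the paper would switch to an unconstrained smooth solver restricted to those coordinates; the sketch above stops at the first stage.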

Jan Harold Alcantara, Ching-pei Lee • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Sparse Classification | gisette scale | CPU Cost | 0.07 | 40 |
| Sparsity-constrained Optimization | duke | CPU Time | 0.01 | 40 |
| Regression | E2006 tfidf | MSE | 0.139 | 25 |
| Classification | colon-cancer | CPU | 0.01 | 20 |
| Classification | duke | CPU Cost | 0.01 | 20 |
| Classification | leukemia | CPU Usage | 0.01 | 20 |
| Regression | E2006-log1p (test) | CPU Time | 19.5 | 20 |
| Sparse Classification | colon-cancer | CPU Time | 0.02 | 20 |
| Sparse Classification | duke | CPU | 0.01 | 20 |
| Sparse Classification | leukemia | CPU | 0.01 | 20 |
Showing 10 of 25 rows
