Accelerating optimization over the space of probability measures
About
The acceleration of gradient-based optimization methods is a subject of significant practical and theoretical importance, particularly within machine learning applications. While much attention has been directed towards optimization in Euclidean space, the need to optimize over spaces of probability measures in machine learning motivates the exploration of accelerated gradient methods in this setting as well. To this end, we introduce a Hamiltonian-flow approach analogous to momentum-based approaches in Euclidean space. We demonstrate that, in the continuous-time setting, algorithms based on this approach can achieve convergence rates of arbitrarily high order. We complement our findings with numerical examples.
Shi Chen, Qin Li, Oliver Tse, Stephen J. Wright • 2023
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Non-rigid Surface Registration | Liver | ASSD (mm) | 0.749 | 6 |
| Non-rigid Surface Registration | Pancreas | ASSD (mm) | 0.416 | 6 |
| Non-rigid Surface Registration | Left ventricle | ASSD (mm) | 0.407 | 6 |
| Affine Surface Registration | Liver | ASSD (mm) | 7.555 | 5 |
| Affine Surface Registration | Pancreas | ASSD (mm) | 5.511 | 5 |
| Affine Surface Registration | Left ventricle | ASSD (mm) | 2.172 | 5 |
| Affine Surface Registration | Liver | Runtime (s) | 2.53 | 4 |
| Affine Surface Registration | Pancreas | Runtime (s) | 2.82 | 4 |
| Affine Surface Registration | Left ventricle | Runtime (s) | 1.6 | 4 |
| Non-rigid Surface Registration | Liver | Runtime (s) | 2.53 | 4 |
Showing 10 of 12 rows