
Annealed Multiple Choice Learning: Overcoming limitations of Winner-takes-all with annealing

About

We introduce Annealed Multiple Choice Learning (aMCL), which combines simulated annealing with Multiple Choice Learning (MCL). MCL is a learning framework for ambiguous tasks that predicts a small set of plausible hypotheses. These hypotheses are trained using the Winner-takes-all (WTA) scheme, which promotes the diversity of the predictions. However, due to the greedy nature of WTA, this scheme may converge toward an arbitrarily suboptimal local minimum. We overcome this limitation using annealing, which enhances the exploration of the hypothesis space during training. We leverage insights from statistical physics and information theory to provide a detailed description of the model's training trajectory. Additionally, we validate our algorithm with extensive experiments on synthetic datasets, on the standard UCI benchmark, and on speech separation.
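The core mechanism can be sketched in a few lines: each hypothesis is scored against the target, and instead of the hard WTA assignment (only the closest hypothesis receives gradient), a Boltzmann soft assignment with a temperature parameter spreads the training signal across hypotheses, with the temperature annealed toward zero over training. The following is a minimal illustrative sketch, not the authors' implementation; the function names and the geometric schedule are assumptions for the example.

```python
import numpy as np

def annealed_mcl_weights(distances, temperature):
    # Boltzmann (softmax) assignment over hypotheses: at high temperature
    # all hypotheses share the gradient roughly equally; as T -> 0 this
    # reduces to the hard Winner-takes-all assignment.
    logits = -np.asarray(distances, dtype=float) / temperature
    logits -= logits.max()  # subtract max for numerical stability
    w = np.exp(logits)
    return w / w.sum()

def annealed_mcl_loss(y_true, hypotheses, temperature):
    # Squared-error "distortion" of each hypothesis to the target.
    d = (np.asarray(hypotheses, dtype=float) - y_true) ** 2
    # Soft assignment: each hypothesis is pulled toward the target in
    # proportion to its Boltzmann weight.
    return float(np.sum(annealed_mcl_weights(d, temperature) * d))

def temperature_schedule(t0, decay, step):
    # Illustrative geometric annealing schedule (an assumption, not the
    # paper's specific schedule).
    return t0 * (decay ** step)
```

At high temperature the loss averages over all hypotheses (encouraging exploration); as the temperature is annealed to zero it converges to the WTA loss, i.e. the distortion of the single best hypothesis.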

David Perera, Victor Letzelter, Théo Mariotte, Adrien Cortés, Mickael Chen, Slim Essid, Gaël Richard • 2024

Related benchmarks

Task               Dataset                  Metric      Result    Rank
Source Separation  WSJ0-2mix (eval)         SI-SDR      16.85     4
Source Separation  WSJ0-3mix (eval)         SI-SDR      10        4
Regression         UCI Year (single-fold)   Distortion  4.46      3
Regression         UCI Protein (5 folds)    Distortion  0.77      3
Regression         UCI Power (20 folds)     Distortion  2.18      3
Regression         UCI Kin8nm (20 folds)    Distortion  6.81e-4   3
Regression         UCI Energy (20 folds)    Distortion  0.28      3
Regression         UCI Yacht (20 folds)     Distortion  1.15      3
Regression         UCI Naval (20 folds)     Distortion  5.37e-7   3
Regression         UCI Wine (20 folds)      Distortion  0.03      3

Showing 10 of 12 rows.

Other info

Code
