
Learning Mixture Density via Natural Gradient Expectation Maximization

About

Mixture density networks are neural networks that produce Gaussian mixtures to represent continuous multimodal conditional densities. The standard training procedure is maximum likelihood estimation via the negative log-likelihood (NLL) objective, which suffers from slow convergence and mode collapse. In this work, we improve the optimization of mixture density networks by incorporating their information geometry. Specifically, we interpret mixture density networks as deep latent-variable models and analyze them through an expectation maximization framework, which reveals surprising theoretical connections to natural gradient descent. We then exploit these connections to derive the natural gradient expectation maximization (nGEM) objective. We show empirically that nGEM achieves up to 10$\times$ faster convergence while adding almost zero computational overhead, and scales well to high-dimensional data where NLL otherwise fails.
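To make the setup concrete, the sketch below shows a minimal PyTorch mixture density network with the two objectives the abstract contrasts: the standard NLL, and a generic EM-style surrogate (detached E-step responsibilities, expected complete-data NLL as the M-step). All architecture choices and names here are illustrative assumptions; the surrogate illustrates the EM view only and is not the paper's nGEM update, which additionally exploits the natural-gradient connection.

```python
import torch
import torch.nn as nn

class MDN(nn.Module):
    """Minimal mixture density network: maps x to the parameters of a
    K-component Gaussian mixture over a scalar target y."""
    def __init__(self, in_dim, hidden, K):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh())
        self.logits = nn.Linear(hidden, K)     # mixture weights (pre-softmax)
        self.mu = nn.Linear(hidden, K)         # component means
        self.log_sigma = nn.Linear(hidden, K)  # component log std-devs

    def forward(self, x):
        h = self.body(x)
        return self.logits(h), self.mu(h), self.log_sigma(h)

def component_log_probs(logits, mu, log_sigma, y):
    """log pi_k + log N(y | mu_k, sigma_k^2) per component; y has shape (batch,)."""
    log_pi = torch.log_softmax(logits, dim=-1)
    dist = torch.distributions.Normal(mu, log_sigma.exp())
    return log_pi + dist.log_prob(y.unsqueeze(-1))  # (batch, K)

def nll_loss(logits, mu, log_sigma, y):
    """Standard maximum-likelihood objective: -log sum_k pi_k N(y | mu_k, sigma_k^2)."""
    return -torch.logsumexp(component_log_probs(logits, mu, log_sigma, y), dim=-1).mean()

def em_surrogate_loss(logits, mu, log_sigma, y):
    """EM-style surrogate: E-step responsibilities (stop-gradient), then the
    expected complete-data NLL as the M-step objective. Generic illustration
    of the latent-variable view, not the paper's nGEM objective."""
    log_joint = component_log_probs(logits, mu, log_sigma, y)
    resp = torch.softmax(log_joint, dim=-1).detach()  # q(z = k | x, y)
    return -(resp * log_joint).sum(dim=-1).mean()

# Usage sketch on a toy one-dimensional problem.
model = MDN(in_dim=1, hidden=64, K=5)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(512, 1) * 2 - 1
y = x.squeeze(-1) ** 2 + 0.05 * torch.randn(512)
for step in range(1000):
    logits, mu, log_sigma = model(x)
    loss = em_surrogate_loss(logits, mu, log_sigma, y)  # or nll_loss(...)
    opt.zero_grad(); loss.backward(); opt.step()
```

With detached responsibilities, the surrogate's gradient coincides with the NLL gradient at the current parameters; this classical identity is what lets an EM-style analysis be connected back to gradient-based training.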

Yutao Chen, Jasmine Bayrooti, Steven Morad • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Regression | UCI ENERGY (test) | Negative Log-Likelihood | -4.18 | 42 |
| Regression | UCI KIN8NM (test) | Negative Log-Likelihood | -1.96 | 25 |
| Regression | UCI housing (test) | RMSE | 0.77 | 10 |
| Inverse Problem | Inverse MNIST | Negative Log-Likelihood | -1.53e+3 | 2 |
