
Variational Learning of Fractional Posteriors

About

We introduce a novel one-parameter variational objective that lower bounds the data evidence and enables the estimation of approximate fractional posteriors. We extend this framework to hierarchical constructions and to Bayes posteriors, offering a versatile tool for probabilistic modelling. We present two cases in which gradients can be obtained analytically, and a simulation study on mixture models shows that our fractional posteriors achieve better calibration than posteriors from the conventional variational bound. When applied to variational autoencoders (VAEs), our approach attains higher evidence bounds and enables learning of high-performing approximate Bayes posteriors jointly with fractional posteriors. We show that VAEs trained with fractional posteriors produce decoders that are better aligned for generation from the prior.
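The paper's specific one-parameter bound is not reproduced in this abstract. As background only: a fractional posterior tempers the likelihood by a power beta in (0, 1], i.e. pi_beta(theta | x) is proportional to p(x | theta)^beta p(theta), and in conjugate models this has a closed form. A minimal sketch for a Beta-Bernoulli model (the function name and the beta values are illustrative, not from the paper):

```python
def fractional_posterior_beta_bernoulli(a, b, k, n, beta):
    """Fractional posterior pi_beta(theta | x) ~ p(x | theta)^beta * p(theta).

    For a Beta(a, b) prior on the success probability and k successes in
    n Bernoulli trials, raising the likelihood to the power beta keeps
    conjugacy: the fractional posterior is Beta(a + beta*k, b + beta*(n-k)).
    """
    return a + beta * k, b + beta * (n - k)

# beta = 1 recovers the ordinary Bayes posterior.
print(fractional_posterior_beta_bernoulli(1.0, 1.0, 7, 10, 1.0))  # (8.0, 4.0)

# beta < 1 downweights the data, pulling the posterior toward the prior,
# which is one mechanism behind the calibration effects discussed above.
print(fractional_posterior_beta_bernoulli(1.0, 1.0, 7, 10, 0.5))  # (4.5, 2.5)
```

The paper works with *approximate* fractional posteriors learned variationally; the conjugate case here only illustrates what the tempering parameter does to the exact posterior.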

Kian Ming A. Chai, Edwin V. Bonilla • 2026

Related benchmarks

Task                 Dataset                 Result  Rank
Generative Modeling  Fashion-MNIST (train)   -       30
