
Negative Binomial Variational Autoencoders for Overdispersed Latent Modeling

About

Although artificial neural networks are often described as brain-inspired, their representations typically rely on continuous activations, such as the continuous latent variables in variational autoencoders (VAEs), which limits their biological plausibility compared to the discrete spike-based signaling of real neurons. Extensions like the Poisson VAE introduce discrete count-based latents, but their equal mean-variance assumption fails to capture the overdispersion observed in neural spike counts, leading to less expressive and informative representations. To address this, we propose NegBio-VAE, a negative-binomial latent-variable model whose dispersion parameter enables flexible spike count modeling. NegBio-VAE preserves interpretability while improving representation quality and training feasibility via novel KL estimation and reparameterization schemes. Experiments on four datasets demonstrate that NegBio-VAE consistently achieves superior reconstruction and generation performance compared to competing single-layer VAE baselines, and yields robust, informative latent representations for downstream tasks. Extensive ablation studies verify the model's robustness with respect to its various components. Our code is available at https://github.com/co234/NegBio-VAE.
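The key distributional point in the abstract is that a negative binomial allows variance to exceed the mean, unlike a Poisson. As a hedged illustration (not the authors' implementation, whose reparameterization details live in the linked repository), a negative-binomial latent can be sampled via the standard Gamma-Poisson mixture, and the overdispersion checked numerically:

```python
import numpy as np


def sample_negative_binomial(r, p, size, rng):
    """Sample NB(r, p) counts via the Gamma-Poisson mixture.

    z ~ NB(r, p) is equivalent to lam ~ Gamma(r, scale=p/(1-p)),
    z ~ Poisson(lam). Mean = r*p/(1-p); variance = r*p/(1-p)**2,
    which strictly exceeds the mean (overdispersion). A Poisson
    latent, by contrast, forces variance == mean.
    """
    lam = rng.gamma(shape=r, scale=p / (1.0 - p), size=size)
    return rng.poisson(lam)


rng = np.random.default_rng(0)
# r plays the role of the dispersion parameter: smaller r -> more overdispersed.
z = sample_negative_binomial(r=5.0, p=0.6, size=200_000, rng=rng)
mean, var = z.mean(), z.var()
print(mean, var)  # expected: mean near 7.5, variance near 18.75 (> mean)
```

The Gamma-Poisson view is also what makes pathwise-style reparameterization of NB latents tractable in practice: gradients can flow through the continuous Gamma rate while the Poisson step handles discreteness.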

Yixuan Zhang, Jinhao Sheng, Wenxin Zhang, Quyu Kong, Feng Zhou• 2025

Related benchmarks

| Task                 | Dataset       | Result         | Rank |
|----------------------|---------------|----------------|------|
| Image Generation     | Fashion MNIST | --             | 38   |
| Image Reconstruction | MNIST         | MSE 0.0123     | 34   |
| Generation           | MNIST         | FID@5k 79.6727 | 8    |
| Generation           | CIFAR 16x16   | FID@5k 40.2788 | 8    |
| Reconstruction       | Fashion MNIST | MSE 0.0144     | 8    |
| Reconstruction       | CIFAR 16x16   | MSE 0.0189     | 8    |
| Reconstruction       | CelebA 64x64  | MSE 0.0341     | 8    |
