
General Self-Prediction Enhancement for Spiking Neurons

About

Spiking Neural Networks (SNNs) are highly energy-efficient due to event-driven, sparse computation, but their training is challenged by spike non-differentiability and trade-offs among performance, efficiency, and biological plausibility. Crucially, mainstream SNNs ignore predictive coding, a core cortical mechanism in which the brain predicts its inputs and encodes prediction errors for efficient perception. Inspired by this, we propose a self-prediction enhanced spiking neuron method that generates an internal prediction current from the neuron's input-output history to modulate its membrane potential. This design offers dual advantages: it creates a continuous gradient path that alleviates vanishing gradients and improves training stability and accuracy, and it aligns with biological principles, resembling distal dendritic modulation and error-driven synaptic plasticity. Experiments show consistent performance gains across diverse architectures, neuron types, time steps, and tasks, demonstrating broad applicability for enhancing SNNs.
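To make the idea concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron augmented with an internal prediction current. The class name, the exponential-trace prediction rule, and all constants are illustrative assumptions, not the paper's exact formulation; it only shows the general shape of "predict from input-output history, then modulate the membrane potential."

```python
class SelfPredLIF:
    """Hedged sketch of a self-prediction enhanced LIF neuron.

    The prediction current is an assumed exponential trace of the
    input-output history; the paper's actual update rule may differ.
    """

    def __init__(self, tau=2.0, v_th=1.0, alpha=0.1):
        self.tau = tau      # membrane time constant
        self.v_th = v_th    # firing threshold
        self.alpha = alpha  # prediction-modulation strength (assumed)
        self.v = 0.0        # membrane potential
        self.pred = 0.0     # internal prediction current (assumed state)

    def step(self, x):
        # Leaky integration plus an additive prediction term that
        # modulates the membrane potential (continuous-valued, so it
        # provides a gradient path alongside the spike nonlinearity).
        self.v = self.v + (x - self.v) / self.tau + self.alpha * self.pred
        spike = 1.0 if self.v >= self.v_th else 0.0
        # Update the prediction from this step's input and output
        # (assumed error-driven trace: decays and tracks x - spike).
        self.pred = 0.9 * self.pred + 0.1 * (x - spike)
        if spike:
            self.v = 0.0    # hard reset after a spike
        return spike


neuron = SelfPredLIF()
spikes = [neuron.step(x) for x in [0.5, 0.8, 1.2, 0.3, 1.5]]
```

In a trained network the surrogate-gradient path through `self.pred` is what the abstract credits with alleviating vanishing gradients; here the rule is kept deliberately simple for readability.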

Zihan Huang, Zijie Xu, Yihan Huang, Shanshan Jia, Tong Bu, Yiting Dong, Wenxuan Liu, Jianhao Ding, Zhaofei Yu, Tiejun Huang • 2026

Related benchmarks

Task                   | Dataset        | Result                      | Rank
Reinforcement Learning | Hopper v4      | Average Return: 3.46e+3     | 13
Reinforcement Learning | Walker2d v4    | Average Return: 4.50e+3     | 13
Reinforcement Learning | Ant v4         | Average Return: 5.53e+3     | 5
Reinforcement Learning | MuJoCo Overall | Avg Performance Gain: 3.28  | 5
Reinforcement Learning | HalfCheetah v4 | Max Return: 9.83e+3         | 5
