General Self-Prediction Enhancement for Spiking Neurons
About
Spiking Neural Networks (SNNs) are highly energy-efficient thanks to event-driven, sparse computation, but their training is challenged by the non-differentiability of spikes and by trade-offs among performance, efficiency, and biological plausibility. Crucially, mainstream SNNs ignore predictive coding, a core cortical mechanism in which the brain predicts its inputs and encodes prediction errors for efficient perception. Inspired by this, we propose a self-prediction enhanced spiking neuron that generates an internal prediction current from its own input-output history to modulate the membrane potential. This design offers dual advantages: it creates a continuous gradient path that alleviates vanishing gradients and improves training stability and accuracy, and it aligns with biological principles, resembling distal dendritic modulation and error-driven synaptic plasticity. Experiments show consistent performance gains across diverse architectures, neuron types, time steps, and tasks, demonstrating broad applicability for enhancing SNNs.
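The repository description does not spell out the neuron update, so the following is a minimal PyTorch sketch of the idea under stated assumptions: the class name `SelfPredictionLIF`, the per-neuron weights `w_in`/`w_out`, the rectangular surrogate gradient, and the additive prediction current computed from the previous input and spike are illustrative placeholders, not the authors' exact formulation.

```python
import torch
import torch.nn as nn


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient,
    keeping the backward pass continuous."""

    @staticmethod
    def forward(ctx, v, threshold=1.0):
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Pass gradient only in a window of width 0.5 around the threshold.
        surrogate = (torch.abs(v - ctx.threshold) < 0.5).float()
        return grad_output * surrogate, None


class SelfPredictionLIF(nn.Module):
    """Hypothetical LIF neuron with a self-prediction current.

    A small learned map turns the neuron's previous input and previous
    spike into a prediction current that modulates the membrane
    potential; the exact formulation in the paper may differ."""

    def __init__(self, features, tau=2.0, threshold=1.0, pred_scale=0.1):
        super().__init__()
        self.tau = tau
        self.threshold = threshold
        self.pred_scale = pred_scale
        # Per-neuron weights mixing last input and last spike into the
        # prediction current (assumed form).
        self.w_in = nn.Parameter(torch.zeros(features))
        self.w_out = nn.Parameter(torch.zeros(features))

    def forward(self, inputs):
        # inputs: (time_steps, batch, features)
        t_steps, batch, features = inputs.shape
        v = torch.zeros(batch, features, device=inputs.device)
        prev_x = torch.zeros_like(v)
        prev_s = torch.zeros_like(v)
        spikes = []
        for t in range(t_steps):
            x = inputs[t]
            # Prediction current from the neuron's own input-output history.
            i_pred = self.pred_scale * (self.w_in * prev_x + self.w_out * prev_s)
            # Leaky integration of the prediction-modulated drive.
            v = v + (x + i_pred - v) / self.tau
            s = SurrogateSpike.apply(v, self.threshold)
            # Soft reset: subtract the threshold where the neuron fired.
            v = v - s * self.threshold
            prev_x, prev_s = x, s
            spikes.append(s)
        return torch.stack(spikes)


if __name__ == "__main__":
    neuron = SelfPredictionLIF(features=8)
    x = torch.rand(4, 2, 8)  # (time_steps, batch, features)
    out = neuron(x)
    print(out.shape)  # torch.Size([4, 2, 8])
```

In this sketch the prediction current enters the membrane update as an ordinary analog term, so gradients can flow through it at every time step even when no spike is emitted, which is one way to realize the continuous gradient path described above.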
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reinforcement Learning | Hopper v4 | Average Return | 3.46e+3 | 13 |
| Reinforcement Learning | Walker2d v4 | Average Return | 4.50e+3 | 13 |
| Reinforcement Learning | Ant v4 | Average Return | 5.53e+3 | 5 |
| Reinforcement Learning | MuJoCo Overall | Average Performance Gain | 3.28 | 5 |
| Reinforcement Learning | HalfCheetah v4 | Max Return | 9.83e+3 | 5 |