When Gradient Clipping Becomes a Control Mechanism for Differential Privacy in Deep Learning
About
Privacy-preserving training on sensitive data commonly relies on differentially private stochastic optimization with gradient clipping and Gaussian noise. The clipping threshold is a critical control knob: if set too small, systematic over-clipping induces optimization bias; if too large, injected noise dominates updates and degrades accuracy. Existing adaptive clipping methods often depend on per-example gradient norm statistics, adding computational overhead and introducing sensitivity to datasets and architectures. We propose a control-driven clipping strategy that adapts the threshold using a lightweight, weight-only spectral diagnostic computed from model parameters. At periodic probe steps, the method analyzes a designated weight matrix via spectral decomposition and estimates a heavy-tailed spectral indicator associated with training stability. This indicator is smoothed over time and fed into a bounded feedback controller that updates the clipping threshold multiplicatively in the log domain. Because the controller uses only parameters produced during privacy-preserving training, the resulting threshold updates are post-processing and do not increase privacy loss beyond that of the underlying DP optimizer under standard composition accounting.
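The control loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the abstract does not name the spectral indicator, so a Hill-type power-law exponent of the squared singular-value spectrum is assumed here, and all function names and hyperparameters (`alpha_target`, `beta`, `eta`, `max_step`) are hypothetical.

```python
import numpy as np

def heavy_tail_alpha(W, k=None):
    """Hill-type estimate of the power-law tail exponent of the squared
    singular-value spectrum of weight matrix W (assumed indicator; a
    smaller alpha suggests a heavier tail)."""
    sv = np.linalg.svd(W, compute_uv=False)
    lam = np.sort(sv ** 2)[::-1]           # eigenvalues of W^T W, descending
    k = k or max(2, len(lam) // 4)         # number of tail samples used
    tail = lam[:k]
    return 1.0 + k / np.sum(np.log(tail / tail[-1]))

def update_clip_threshold(C, alpha_hat, alpha_smooth, *,
                          beta=0.9, alpha_target=3.0,
                          eta=0.1, max_step=0.05):
    """One probe-step update: smooth the indicator with an EMA, then move
    log C by a bounded step toward the target stability regime.
    All constants here are illustrative assumptions."""
    alpha_smooth = beta * alpha_smooth + (1 - beta) * alpha_hat
    step = np.clip(eta * (alpha_smooth - alpha_target), -max_step, max_step)
    return C * np.exp(step), alpha_smooth   # multiplicative update in log domain
```

Because the update reads only the (already privatized) model weights, it is post-processing in the DP sense, which is why the abstract can claim no additional privacy loss.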
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Image Classification | EMNIST (test) | Accuracy | 90.02 | 174 |
| Image Classification | ImageNet-100 (test) | Clean Accuracy | 65.5 | 109 |
| Image Classification | MNIST (test) | Accuracy | 96.68 | 61 |
| Regression | Energy | RMSE | 0.112 | 13 |
| Binary Classification | UCI Adult | AUC | 0.852 | 8 |
| Binary Classification | UCI Heart | AUC | 0.822 | 8 |