
When Gradient Clipping Becomes a Control Mechanism for Differential Privacy in Deep Learning

About

Privacy-preserving training on sensitive data commonly relies on differentially private stochastic optimization with gradient clipping and Gaussian noise. The clipping threshold is a critical control knob: if set too small, systematic over-clipping induces optimization bias; if too large, injected noise dominates updates and degrades accuracy. Existing adaptive clipping methods often depend on per-example gradient norm statistics, adding computational overhead and introducing sensitivity to datasets and architectures. We propose a control-driven clipping strategy that adapts the threshold using a lightweight, weight-only spectral diagnostic computed from model parameters. At periodic probe steps, the method analyzes a designated weight matrix via spectral decomposition and estimates a heavy-tailed spectral indicator associated with training stability. This indicator is smoothed over time and fed into a bounded feedback controller that updates the clipping threshold multiplicatively in the log domain. Because the controller uses only parameters produced during privacy-preserving training, the resulting threshold updates are post-processing and do not increase privacy loss beyond that of the underlying DP optimizer under standard composition accounting.
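The control loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Hill-style tail-exponent estimator, the target value, the controller gains, and the bounds are all assumptions chosen for readability.

```python
import numpy as np

def heavy_tail_alpha(singular_values):
    """Hill-type estimate of a power-law tail exponent from the
    eigenvalues of W^T W (one common heavy-tailed spectral indicator;
    the paper's exact estimator is not specified here)."""
    lam = np.sort(singular_values ** 2)[::-1]
    k = max(2, len(lam) // 4)          # tail fraction (illustrative choice)
    tail = lam[:k]
    return 1.0 + k / np.sum(np.log(tail / tail[-1]))

class SpectralClipController:
    """Bounded multiplicative controller for the DP clipping threshold C.

    Updates happen in the log domain:  log C <- log C + u,
    where u is a bounded function of the smoothed indicator's deviation
    from a target. All gain/target values are hypothetical."""

    def __init__(self, c0=1.0, target_alpha=3.0, gain=0.05,
                 ema=0.9, max_step=0.1, c_min=0.1, c_max=10.0):
        self.log_c = np.log(c0)
        self.target = target_alpha
        self.gain = gain
        self.ema = ema
        self.max_step = max_step
        self.c_min, self.c_max = c_min, c_max
        self.smoothed = None

    @property
    def clip(self):
        return float(np.exp(self.log_c))

    def probe(self, weight_matrix):
        """Called at periodic probe steps on a designated weight matrix.
        Only parameters already produced by DP training are read, so the
        update is post-processing and adds no privacy cost."""
        s = np.linalg.svd(weight_matrix, compute_uv=False)
        alpha = heavy_tail_alpha(s)
        # exponential smoothing of the spectral indicator over time
        self.smoothed = alpha if self.smoothed is None else \
            self.ema * self.smoothed + (1 - self.ema) * alpha
        # bounded feedback step in the log domain
        u = np.clip(self.gain * (self.smoothed - self.target),
                    -self.max_step, self.max_step)
        self.log_c = np.clip(self.log_c + u,
                             np.log(self.c_min), np.log(self.c_max))
        return self.clip
```

The returned threshold would then be passed to an ordinary DP optimizer (per-example clipping to `C`, then Gaussian noise scaled to `C`); because each probe reads only released model parameters, standard composition accounting is unchanged.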

Mohammad Partohaghighi, Roummel Marcia, Bruce J. West, YangQuan Chen • 2026

Related benchmarks

| Task                  | Dataset             | Metric         | Result | Rank |
|-----------------------|---------------------|----------------|--------|------|
| Image Classification  | EMNIST (test)       | Accuracy       | 90.02  | 174  |
| Image Classification  | ImageNet-100 (test) | Clean Accuracy | 65.5   | 109  |
| Image Classification  | MNIST (test)        | Accuracy       | 96.68  | 61   |
| Regression            | Energy              | RMSE           | 0.112  | 13   |
| Binary Classification | UCI Adult           | AUC            | 0.852  | 8    |
| Binary Classification | UCI Heart           | AUC            | 0.822  | 8    |
