
Robust Machine Unlearning for Quantized Neural Networks via Adaptive Gradient Reweighting with Similar Labels

About

Model quantization enables efficient deployment of deep neural networks on edge devices through low-bit parameter representation, yet raises critical challenges for implementing machine unlearning (MU) under data privacy regulations. Existing MU methods designed for full-precision models fail to address two fundamental limitations in quantized networks: 1) Noise amplification from label mismatch during data processing, and 2) Gradient imbalance between forgotten and retained data during training. These issues are exacerbated by quantized models' constrained parameter space and discrete optimization. We propose Q-MUL, the first dedicated unlearning framework for quantized models. Our method introduces two key innovations: 1) Similar Labels assignment replaces random labels with semantically consistent alternatives to minimize noise injection, and 2) Adaptive Gradient Reweighting dynamically aligns parameter update contributions from forgotten and retained data. Through systematic analysis of quantized model vulnerabilities, we establish theoretical foundations for these mechanisms. Extensive evaluations on benchmark datasets demonstrate Q-MUL's superiority over existing approaches.
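The two mechanisms in the abstract can be illustrated with a minimal sketch. The paper's actual formulations are not given here, so this is only a hedged reading: `similar_label` picks the most confident non-target class from the model's logits as the replacement label (instead of a random label), and `reweighted_grad` normalizes the forgotten-data and retained-data gradients by their own magnitudes before combining them, so neither side dominates the update. All function names and the specific normalization are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def similar_label(logits, true_label):
    """Hypothetical 'Similar Labels' assignment: choose the most
    probable class other than the true label, so the injected target
    stays semantically close to the sample rather than random."""
    masked = logits.astype(float).copy()
    masked[true_label] = -np.inf  # exclude the class being forgotten
    return int(np.argmax(masked))

def reweighted_grad(grad_forget, grad_retain, eps=1e-12):
    """Hypothetical 'Adaptive Gradient Reweighting': normalize each
    gradient by its own L2 norm before summing, balancing the
    contributions of forgotten and retained data to the update."""
    gf = grad_forget / (np.linalg.norm(grad_forget) + eps)
    gr = grad_retain / (np.linalg.norm(grad_retain) + eps)
    return gf + gr

# Example: logits favor class 1 (the true label), so the most
# similar alternative is class 0.
print(similar_label(np.array([2.0, 5.0, 1.0]), true_label=1))  # -> 0
```

In this sketch, even when the raw forgotten-data gradient is 100x larger than the retained-data gradient, both contribute a unit-norm direction to the combined update, which is one plausible way to address the gradient imbalance the abstract describes.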

Yujia Tong, Yuze Wang, Jingling Yuan, Chuang Hu • 2025

Related benchmarks

Task                  | Dataset                 | Metric   | Result | Rank
----------------------|-------------------------|----------|--------|-----
Full-class forgetting | MNIST (retain)          | Accuracy | 80.5   | 2
Full-class forgetting | Fashion-MNIST (retain)  | Accuracy | 91.04  | 1
Full-class forgetting | Fashion-MNIST (test)    | Accuracy | 77     | 1
Full-class forgetting | Iris (test)             | Accuracy | 83.3   | 1
Machine Unlearning    | Fashion-MNIST 2% subset | Accuracy | 0.9021 | 1
Machine Unlearning    | Fashion-MNIST 2% (test) | Accuracy | 83.56  | 1
Full-class forgetting | Iris (retain)           | Accuracy | 83.8   | 1
Machine Unlearning    | Iris 2% (forgotten)     | UQI      | -0.046 | 1
Full-class forgetting | MNIST (test)            | Accuracy | 66     | 1
Machine Unlearning    | Iris subset 2% (test)   | Accuracy | 90     | 1

Showing 10 of 12 rows
