Approximate Data Deletion from Machine Learning Models
About
Deleting data from a trained machine learning (ML) model is a critical task in many applications. For example, we may want to remove the influence of training points that are out of date or are outliers. Regulations such as the EU's General Data Protection Regulation also stipulate that individuals can request to have their data deleted. The naive approach to data deletion is to retrain the ML model on the remaining data, but this is too time-consuming. In this work, we propose a new approximate deletion method for linear and logistic models whose computational cost is linear in the feature dimension $d$ and independent of the number of training points $n$. This is a significant gain over existing methods, which have superlinear time dependence on the dimension. We also develop a new feature-injection test to evaluate the thoroughness of data deletion from ML models.
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Machine Unlearning | CIFAR-10 | Accf | 1.86 | 45 |
| Single-class Unlearning | CIFAR-10 | Forget Accuracy | 1.04 | 16 |
| Machine Unlearning | Tiny-Imagenet Random Forget 30%, γ=1/3 (test) | FA | 95.85 | 11 |
| Machine Unlearning | CIFAR-100 Random Forget (forget set 20%) | FA | 97.02 | 11 |
| Machine Unlearning | CIFAR-100 Random Forget 30% (forget set) | FA (%) | 97.02 | 11 |
| Machine Unlearning | CIFAR-100 Random Forget (40%) | FA | 96.96 | 11 |
| Machine Unlearning | CIFAR-100 Random Forget 50% | FA | 96.98 | 11 |
| Machine Unlearning | Tiny-Imagenet Random Forget 20%, γ=0 (test) | FA | 95.64 | 11 |
| Machine Unlearning | Tiny-Imagenet Random Forget 10%, γ=1 (test) | FA | 96.29 | 11 |
| Single-class Unlearning | Tiny-ImageNet | Accuracy Forgotten | 12.2 | 11 |