Towards Source-Free Machine Unlearning
About
As machine learning becomes more pervasive and data privacy regulations evolve, the ability to remove private or copyrighted information from trained models is becoming an increasingly critical requirement. Existing unlearning methods often assume access to the entire training dataset during the forgetting process. This assumption, however, may not hold in practice: the original training data may simply be unavailable, i.e., the source-free setting. We therefore focus on source-free unlearning, where an algorithm must remove specific data from a trained model without access to the original training set. Building on recent work, we present a method that estimates the Hessian of the unknown remaining training data, a crucial component for efficient unlearning. Leveraging this estimate, our method enables efficient zero-shot unlearning with robust theoretical guarantees on unlearning performance while maintaining accuracy on the remaining data. Extensive experiments across a wide range of datasets verify the efficacy of our method.
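To illustrate why the remaining-data Hessian is the crucial quantity, here is a minimal sketch of Hessian-based (Newton-step) unlearning on ridge regression, where the Hessian has a closed form. Note the assumptions: the forget set and the full model's sufficient statistics are available, and we compute the remaining-data Hessian exactly by subtraction — whereas the paper's contribution is precisely to *estimate* that Hessian when the retained data cannot be touched. All names and the ridge setup are illustrative, not the paper's method.

```python
import numpy as np

# Synthetic regression data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=100)
lam = 1e-2  # ridge regularization strength


def fit(Xm, ym):
    """Ridge regression: minimize 0.5*||Xw - y||^2 + 0.5*lam*||w||^2."""
    H = Xm.T @ Xm + lam * np.eye(Xm.shape[1])
    return np.linalg.solve(H, Xm.T @ ym)


w_full = fit(X, y)

# Forget the first 10 rows with a single Newton correction on w_full.
Xf, yf = X[:10], y[:10]
H_full = X.T @ X + lam * np.eye(5)
H_remain = H_full - Xf.T @ Xf            # Hessian of the remaining data
g_forget = Xf.T @ (Xf @ w_full - yf)     # forget-set gradient at w_full
w_unlearned = w_full + np.linalg.solve(H_remain, g_forget)

# Because the ridge loss is quadratic, one Newton step recovers the model
# retrained from scratch on the retained data.
w_retrain = fit(X[10:], y[10:])
print(np.allclose(w_unlearned, w_retrain))  # → True
```

For quadratic losses the correction is exact; for deep networks the same update is only a local approximation, and the source-free setting additionally forces `H_remain` to be estimated rather than computed from the retained data.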
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Chest X-ray classification | CheXpert (test) | -- | -- | 27 |
| Unlearning | DomainNet s → p | Retained accuracy ($A_{D_r}$) | 65.8 | 22 |
| Unlearning | DomainNet c → s | Retained accuracy ($A_{D_r}$) | 62.9 | 22 |
| Machine Unlearning | Office-31 D → A (test) | Retained accuracy ($A_{D_r}$) | 78.6 | 11 |
| Machine Unlearning | Office-31 W → A (test) | Retained accuracy ($A_{D_r}$) | 79.7 | 11 |
| Single-class Unlearning | OfficeHome R → P | Retained accuracy ($A_{D_r}$) | 88.6 | 11 |
| Single-class Unlearning | Office-31 D → A | Retained accuracy ($A_{D_r}$) | 79.0 | 11 |
| Single-class Unlearning | Office-31 D → W | Retained accuracy ($A_{D_r}$) | 81.9 | 11 |
| Single-class Unlearning | Office-31 W → A | Retained accuracy ($A_{D_r}$) | 80.5 | 11 |
| Unlearning | DomainNet r → c | Retained accuracy ($A_{D_r}$) | 65.7 | 11 |