
LoTUS: Large-Scale Machine Unlearning with a Taste of Uncertainty

About

We present LoTUS, a novel Machine Unlearning (MU) method that eliminates the influence of training samples from pre-trained models, avoiding retraining from scratch. LoTUS smooths the prediction probabilities of the model up to an information-theoretic bound, mitigating its over-confidence stemming from data memorization. We evaluate LoTUS on Transformer and ResNet18 models against eight baselines across five public datasets. Beyond established MU benchmarks, we evaluate unlearning on ImageNet1k, a large-scale dataset, where retraining is impractical, simulating real-world conditions. Moreover, we introduce the novel Retrain-Free Jensen-Shannon Divergence (RF-JSD) metric to enable evaluation under real-world conditions. The experimental results show that LoTUS outperforms state-of-the-art methods in terms of both efficiency and effectiveness. Code: https://github.com/cspartalis/LoTUS.
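The exact RF-JSD formulation is given in the paper; as a rough illustration of the quantity it builds on, the Jensen-Shannon divergence between two predictive distributions (e.g., the unlearned model's outputs versus a reference distribution) can be sketched as follows. The function name and the base-2 logarithm are illustrative choices, not the paper's definition:

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (base 2) between two discrete distributions.

    Symmetric and bounded in [0, 1]; 0 means identical distributions.
    """
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()  # renormalize after smoothing
    q /= q.sum()
    m = 0.5 * (p + q)  # mixture distribution

    def kl(a, b):
        # Kullback-Leibler divergence KL(a || b) in bits
        return float(np.sum(a * np.log2(a / b)))

    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Identical distributions give a divergence near 0, while disjoint one-hot distributions give a value near 1.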

Christoforos N. Spartalis, Theodoros Semertzidis, Efstratios Gavves, Petros Daras • 2025

Related benchmarks

Task               | Dataset                                        | Result              | Rank
Machine Unlearning | CIFAR-10 (train)                               | Average Gap 0.0075  | 22
Machine Unlearning | CIFAR-10 (50% forget set)                      | Average Gap 0.005   | 20
Machine Unlearning | Tiny-ImageNet (TinyIN), 10% unlearning (train) | Average Gap 0.015   | 20
Machine Unlearning | CIFAR-100, 10% unlearning (train)              | Average Gap 0.0125  | 20
Machine Unlearning | MUFAC, 10% unlearning (train)                  | Average Gap 0.125   | 20
Machine Unlearning | CIFAR-100 (50% forget set)                     | Average Gap 0.1725  | 20
Class Unlearning   | TinyImageNet (TinyIN)                          | Average Gap 0.0925  | 10
Class Unlearning   | CIFAR-100                                      | Average Gap 0.12    | 10
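In machine unlearning benchmarks, "Average Gap" is commonly reported as the mean absolute difference between the unlearned model's metrics and those of a gold-standard model retrained without the forget set (lower is better, since the retrained model is the ideal). The paper's exact metric set may differ; the sketch below assumes a simple list of paired metric values:

```python
def average_gap(unlearned_metrics, retrained_metrics):
    """Mean absolute difference between paired metric values.

    Each argument is a sequence of metric values (e.g., forget-set accuracy,
    test accuracy, membership-inference score) measured on the unlearned
    model and on the retrained reference model, respectively.
    """
    if len(unlearned_metrics) != len(retrained_metrics):
        raise ValueError("metric lists must be paired")
    gaps = [abs(u - r) for u, r in zip(unlearned_metrics, retrained_metrics)]
    return sum(gaps) / len(gaps)
```

For example, metric pairs (0.90, 0.90) and (0.50, 0.51) yield an Average Gap of 0.005.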
