Can Bad Teaching Induce Forgetting? Unlearning in Deep Networks using an Incompetent Teacher

About

Machine unlearning has become an important area of research due to the increasing need for machine learning (ML) applications to comply with emerging data privacy regulations. It enables the removal of a certain set or class of data from an already trained ML model without retraining from scratch. Recently, several efforts have been made to make unlearning effective and efficient. We propose a novel machine unlearning method that exploits competent and incompetent teachers in a student-teacher framework to induce forgetfulness. Knowledge from the competent and incompetent teachers is selectively transferred to the student to obtain a model that contains no information about the forget data. We experimentally show that this method generalizes well and is fast and effective. Furthermore, we introduce the zero retrain forgetting (ZRF) metric to evaluate any unlearning method. Unlike existing unlearning metrics, the ZRF score does not depend on the availability of an expensive retrained model, which also makes it useful for analyzing the unlearned model after deployment. We present results of experiments on random subset forgetting and class forgetting with various deep networks and across different application domains. Source code: https://github.com/vikram2000b/bad-teaching-unlearning
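The selective knowledge transfer described in the abstract can be sketched as a per-sample choice of distillation target: forget samples are pulled toward a randomly initialized ("incompetent") teacher, retain samples toward the original trained ("competent") teacher. Below is a minimal PyTorch sketch of one such step, assuming the student is initialized from the trained model's weights; function and argument names (unlearn_step, is_forget, T) are illustrative, not the authors' API.

```python
import torch
import torch.nn.functional as F

def unlearn_step(student, good_teacher, bad_teacher, x, is_forget, optimizer, T=1.0):
    """One distillation step (illustrative, not the authors' exact code).

    x:         input batch, shape (B, ...)
    is_forget: bool tensor of shape (B,), True for samples to be forgotten
    T:         softmax temperature for distillation
    """
    with torch.no_grad():
        good = F.softmax(good_teacher(x) / T, dim=1)  # competent = original model
        bad = F.softmax(bad_teacher(x) / T, dim=1)    # incompetent = random init
    # Pick the teacher per sample: bad teacher on forget data, good teacher otherwise.
    target = torch.where(is_forget.unsqueeze(1), bad, good)
    log_student = F.log_softmax(student(x) / T, dim=1)
    loss = F.kl_div(log_student, target, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Matching the random teacher's near-uniform outputs on the forget data erases the learned signal there, while matching the competent teacher on retain data preserves the rest of the model's behavior.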
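The ZRF metric itself compares the unlearned model's predictions with those of the incompetent teacher over the forget set. A minimal sketch follows; it assumes the Jensen-Shannon divergence is normalized by ln 2 so each term lies in [0, 1], and its names are illustrative (the reference implementation in the linked repository may differ).

```python
import math
import torch
import torch.nn.functional as F

def zrf_score(model, bad_teacher, forget_loader, device="cpu"):
    """ZRF = 1 - mean JS divergence between the unlearned model and the
    incompetent teacher on the forget set. A score near 1 means the model
    behaves like a randomly initialized network on the forgotten data."""
    js_sum, n = 0.0, 0
    model.eval(); bad_teacher.eval()
    with torch.no_grad():
        for x, _ in forget_loader:
            x = x.to(device)
            p = F.softmax(model(x), dim=1)
            q = F.softmax(bad_teacher(x), dim=1)
            m = 0.5 * (p + q)
            # JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m);
            # F.kl_div(log_m, p) computes KL(p || m) elementwise.
            js = 0.5 * (F.kl_div(m.log(), p, reduction="none").sum(1)
                        + F.kl_div(m.log(), q, reduction="none").sum(1))
            js_sum += (js / math.log(2)).sum().item()  # normalization: assumed convention
            n += x.size(0)
    return 1.0 - js_sum / n
```

Because the baseline is the incompetent teacher rather than a retrained model, the score can be computed after deployment, which is the point the abstract makes about avoiding the retrained reference.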

Vikram S Chundawat, Ayush K Tarun, Murari Mandal, Mohan Kankanhalli • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | CIFAR-10 (test) | Accuracy | 90.14 | 3381 |
| Image Classification | CIFAR-10 (Forget) | Accuracy | 98.52 | 63 |
| Class Unlearning | CIFAR-10 | Retain Accuracy | 0.81 | 60 |
| Machine Unlearning | ImageNette, gas pump, Class 7 (test) | Forget Accuracy | 7.4 | 48 |
| Machine Unlearning | CIFAR-10, bird, Class 2 (test) | Forgetting Accuracy (Class) | 9.5 | 48 |
| Machine Unlearning | CIFAR-10 | Accf | 8.96 | 45 |
| Machine Unlearning | Tiny-ImageNet (train) | Forgetting Accuracy (Train) | 21.6 | 43 |
| Poisoning | CIFAR-10 | Attack Cost | 0.44 | 36 |
| Poisoning | CIFAR-100 | Poisoning Cost | 0.68 | 36 |
| Selective Unlearning | Lacuna-10 (test) | Test Error (mean) | 4.9 | 36 |

Showing 10 of 110 rows.
