
Task-Distributionally Robust Data-Free Meta-Learning

About

Data-Free Meta-Learning (DFML) aims to enable efficient learning of unseen few-shot tasks by meta-learning from multiple pre-trained models without accessing their original training data. While existing DFML methods typically generate synthetic data from these models to perform meta-learning, a comprehensive analysis of DFML's robustness, particularly its failure modes and vulnerability to potential attacks, remains notably absent. Such an analysis is crucial because these algorithms often operate in complex and uncertain real-world environments. This paper fills this significant gap by systematically investigating the robustness of DFML, identifying two critical but previously overlooked vulnerabilities: Task-Distribution Shift (TDS) and Task-Distribution Corruption (TDC). TDS refers to sequential shifts in the evolving task distribution, which lead to catastrophic forgetting of previously learned meta-knowledge. TDC exposes a security flaw of DFML, revealing its susceptibility to attacks when the pre-trained model pool includes untrustworthy models that deceptively claim to be beneficial but are actually harmful. To mitigate these vulnerabilities, we propose a trustworthy DFML framework comprising three components: synthetic task reconstruction, meta-learning with task memory interpolation, and automatic model selection. Specifically, using model inversion techniques, we reconstruct synthetic tasks from multiple pre-trained models to perform meta-learning. To prevent forgetting, we introduce a strategy that replays interpolated historical tasks to efficiently recall previous meta-knowledge. Furthermore, our framework seamlessly incorporates an automatic model selection mechanism that filters out untrustworthy models during the meta-learning process. Code is available at https://github.com/Egg-Hu/Trustworthy-DFML.
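The task-memory-interpolation idea described above can be sketched as a mixup-style blend of two stored synthetic tasks that is then replayed during meta-training. This is only an illustrative sketch, not the authors' implementation: the function name `interpolate_tasks`, the one-hot label format, and the Beta-distributed mixing coefficient are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def interpolate_tasks(task_a, task_b, alpha=0.5):
    """Mixup-style interpolation of two stored synthetic tasks.

    Each task is a (images, one_hot_labels) pair. The interpolated
    task blends both the inputs and the soft labels, so replaying it
    can recall meta-knowledge from both historical tasks at once.
    """
    xa, ya = task_a
    xb, yb = task_b
    lam = rng.beta(alpha, alpha)      # mixing coefficient in (0, 1)
    x = lam * xa + (1.0 - lam) * xb   # blended support images
    y = lam * ya + (1.0 - lam) * yb   # blended soft labels
    return x, y

# Toy 5-way tasks: 5 support images of 8x8 pixels each.
task_a = (rng.normal(size=(5, 8, 8)), np.eye(5))
task_b = (rng.normal(size=(5, 8, 8)), np.eye(5)[::-1].copy())

x_mix, y_mix = interpolate_tasks(task_a, task_b)
print(x_mix.shape, y_mix.shape)
```

In a full DFML loop, the blended task would be appended to the current batch of inversion-generated tasks, so each meta-update sees both new and (interpolated) historical tasks.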

Zixuan Hu, Yongxian Wei, Li Shen, Zhenyi Wang, Baoyuan Wu, Chun Yuan, Dacheng Tao• 2023

Related benchmarks

Task                                 | Dataset                                                | Result                              | Rank
-------------------------------------|--------------------------------------------------------|-------------------------------------|-----
5-way Few-shot Image Classification  | CUB-200 2011 (meta-test)                               | 1-shot Acc: 37.93                   | 24
5-way 1-shot Classification          | CIFAR-FS                                               | --                                  | 10
5-way 5-shot Classification          | CIFAR-FS                                               | --                                  | 10
Few-shot classification              | VGG Flower Meta-Dataset (test)                         | --                                  | 10
Few-shot classification              | CIFAR-FS (meta-test)                                   | Best Accuracy (5-way 1-shot): 41.91 | 8
Few-shot classification              | miniImageNet meta (test)                               | Best Accuracy (5-way 1-shot): 33.82 | 8
Online and Continual Meta-learning   | CIFAR-FS                                               | Best Score: 58.25                   | 6
5-way 1-shot Classification          | Aircraft, Quick Draw, Textures and MSCOCO (600 tasks)  | Best Accuracy: 40.82                | 3
5-way 5-shot Classification          | Aircraft, Quick Draw, Textures and MSCOCO (600 tasks)  | Best Accuracy: 52.55                | 3
