
A Simple Yet Effective Strategy to Robustify the Meta Learning Paradigm

About

Meta learning is a promising paradigm for enabling skill transfer across tasks. Most previous methods adopt the empirical risk minimization principle in optimization; however, the resulting worst-case fast adaptation on a subset of tasks can be catastrophic in risk-sensitive scenarios. To robustify fast adaptation, this paper optimizes meta learning pipelines from a distributionally robust perspective and meta trains models with the measure of expected tail risk. We take a two-stage strategy as a heuristic to solve the robust meta learning problem, controlling the worst fast-adaptation cases at a certain probabilistic level. Experimental results show that our simple method can improve the robustness of meta learning to task distributions and reduce the conditional expectation of the worst fast-adaptation risk.
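The abstract describes meta training against the expected tail risk of per-task adaptation losses, with a two-stage heuristic that first locates a probabilistic threshold and then averages the losses above it. The sketch below is only an illustration of that idea, not the authors' implementation; the function name, the quantile estimator, and the `alpha` level are all assumptions for the example.

```python
import numpy as np

def tail_risk_meta_loss(task_losses, alpha=0.7):
    """Expected tail risk (CVaR-style) over a batch of per-task losses.

    Stage 1: estimate the alpha-quantile of the task losses as a
    probabilistic threshold (illustrative stand-in for the paper's
    two-stage strategy).
    Stage 2: average only the losses at or above that threshold, so the
    meta update concentrates on the worst fast-adaptation cases.
    """
    losses = np.asarray(task_losses, dtype=float)
    threshold = np.quantile(losses, alpha)   # stage 1: threshold estimate
    tail = losses[losses >= threshold]       # stage 2: keep the worst cases
    return tail.mean()

# Toy batch of per-task adaptation losses: the two large losses dominate
# the tail average, while an ordinary mean would dilute them.
batch = [0.1, 0.2, 0.3, 1.5, 2.0]
print(tail_risk_meta_loss(batch, alpha=0.6))  # averages the worst losses
```

In a full pipeline this scalar would replace the plain average of task losses in the outer-loop meta objective; everything else (inner-loop adaptation, optimizer) stays unchanged, which is why the strategy is simple to bolt onto existing meta learners.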

Qi Wang, Yiqin Lv, Yanghe Feng, Zheng Xie, Jincai Huang • 2023

Related benchmarks

Task | Dataset | Result | Rank
Few-shot Image Classification | Omniglot Meta-Training Alphabets (train) | Average Performance: 99.6 | 6
Few-shot Image Classification | Omniglot Alphabets (meta-test) | Average Score: 93.7 | 6
Image Classification | mini-ImageNet (train) | Average Score: 70.2 | 5
System Identification | Pendulum (test) | Average MSE: 0.75 | 5
Meta-Reinforcement Learning | 2-D point robot navigation (meta-test) | Average Return: -19.6 | 4
Few-shot Image Classification | mini-ImageNet Eight Meta (train) | Average Accuracy: 70.2 | 3
Few-shot Image Classification | mini-ImageNet (Four Meta-Testing Tasks) | Average Accuracy: 49.4 | 3
Few-shot Regression | Gaussian Process curves (meta-test) | Average Risk: -0.8 | 3
Few-shot Sinusoid Regression | Sinusoid 490 tasks 5-shot (test) | Average MSE: 0.89 | 3
Few-shot Sinusoid Regression | Sinusoid 490 meta-test tasks 10-shot (test) | Average MSE: 0.54 | 3
(Showing 10 of 11 rows)

Other info

Code
