A Simple Yet Effective Strategy to Robustify the Meta Learning Paradigm
About
Meta learning is a promising paradigm for transferring skills across tasks. Most previous methods optimize under the empirical risk minimization principle, so fast adaptation can be catastrophically poor on a subset of tasks, which is unacceptable in risk-sensitive scenarios. To robustify fast adaptation, this paper optimizes meta learning pipelines from a distributionally robust perspective and meta-trains models with a measure of expected tail risk. We adopt a two-stage strategy as a heuristic to solve the robust meta learning problem, controlling the worst fast-adaptation cases at a certain probabilistic level. Experimental results show that this simple method improves the robustness of meta learning to task distributions and reduces the conditional expectation of the worst fast-adaptation risk.
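The core idea above can be illustrated with a minimal sketch of an expected-tail-risk (CVaR-style) meta-objective: evaluate the post-adaptation risk of each task in a meta-batch, then meta-update on the conditional expectation of the worst fraction of tasks. The function name, the `alpha` tail level, and the NumPy-only setup are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def tail_risk_objective(task_losses, alpha=0.3):
    """CVaR-style meta-objective (illustrative sketch, not the paper's code).

    Stage 1: rank tasks in the meta-batch by their fast-adaptation risk.
    Stage 2: return the mean over the worst ceil(alpha * B) tasks, i.e.
    the conditional expectation of the tail, so the meta-update focuses
    on the probabilistically worst fast-adaptation cases.
    """
    losses = np.asarray(task_losses, dtype=float)
    # Number of tail tasks at probabilistic level alpha (at least one).
    k = max(1, int(np.ceil(alpha * losses.size)))
    # Select the k largest post-adaptation losses (the tail).
    worst = np.sort(losses)[-k:]
    # Conditional expectation over the tail.
    return worst.mean()

# A meta-batch of 10 per-task adaptation risks; with alpha=0.3 the
# objective averages the 3 largest losses (1.2, 0.9, 0.8).
batch = [0.1, 0.5, 0.2, 0.9, 0.3, 1.2, 0.05, 0.4, 0.8, 0.15]
print(tail_risk_objective(batch, alpha=0.3))
```

In a full pipeline this objective would replace the plain meta-batch average: each `task_losses[i]` is the query-set loss after inner-loop adaptation, and the meta-gradient flows only through the selected tail tasks.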
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Few-shot Image Classification | Omniglot Meta-Training Alphabets (train) | Average Performance | 99.6 | 6 |
| Few-shot Image Classification | Omniglot Alphabets (meta-test) | Average Score | 93.7 | 6 |
| Image Classification | mini-ImageNet (train) | Average Score | 70.2 | 5 |
| System Identification | Pendulum (test) | Average MSE | 0.75 | 5 |
| Meta-Reinforcement Learning | 2-D point robot navigation (meta-test) | Average Return | -19.6 | 4 |
| Few-shot Image Classification | mini-ImageNet Eight Meta (train) | Average Accuracy | 70.2 | 3 |
| Few-shot Image Classification | mini-ImageNet (Four Meta-Testing Tasks) | Average Accuracy | 49.4 | 3 |
| Few-shot Regression | Gaussian Process curves (meta-test) | Average Risk | -0.8 | 3 |
| Few-shot Sinusoid Regression | Sinusoid 490 tasks 5-shot (test) | Average MSE | 0.89 | 3 |
| Few-shot Sinusoid Regression | Sinusoid 490 meta-test tasks 10-shot (test) | Average MSE | 0.54 | 3 |