# Multi-Objective Meta Learning

## About
Meta learning with multiple objectives can be formulated as a Multi-Objective Bi-Level optimization Problem (MOBLP), in which the upper-level subproblem involves several possibly conflicting objectives for the meta learner. Existing studies either apply an inefficient evolutionary algorithm or linearly combine the multiple objectives into a single-objective problem, which requires tuning the combination weights. In this paper, we propose a unified gradient-based Multi-Objective Meta Learning (MOML) framework and devise the first gradient-based optimization algorithm to solve the MOBLP, which alternately solves the lower-level and upper-level subproblems via gradient descent and a gradient-based multi-objective optimization method, respectively. Theoretically, we prove convergence properties of the proposed algorithm. Empirically, we demonstrate the effectiveness of the MOML framework on several meta learning problems, including few-shot learning, neural architecture search, domain adaptation, and multi-task learning.
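The alternating scheme described above can be sketched on a toy problem. This is an illustrative assumption, not the paper's algorithm or experiments: the two quadratic objectives, the variable names, and the use of the closed-form two-gradient min-norm rule (as in MGDA-style multi-objective optimization) are all stand-ins chosen for clarity.

```python
# Toy sketch of the alternating bi-level scheme (an assumption for
# illustration, not the authors' exact algorithm).
# Lower level: gradient descent on a training loss over w.
# Upper level: a step along a common descent direction for two
# conflicting objectives over the meta-parameter a, via the
# closed-form min-norm rule for two gradients.

def upper_objectives(a):
    # Two conflicting upper-level objectives; their Pareto set is [-1, 1].
    return (a - 1.0) ** 2, (a + 1.0) ** 2

def min_norm_2(g1, g2):
    # Closed-form minimizer of || t*g1 + (1-t)*g2 ||^2 over t in [0, 1].
    denom = (g1 - g2) ** 2
    if denom == 0.0:
        return g1
    t = max(0.0, min(1.0, (g2 - g1) * g2 / denom))
    return t * g1 + (1.0 - t) * g2

a, w = 3.0, 0.0            # upper-level (meta) and lower-level variables
lr_w, lr_a = 0.1, 0.1
for _ in range(200):
    # Lower-level subproblem: a few gradient steps on (w - a)^2.
    for _ in range(5):
        w -= lr_w * 2.0 * (w - a)
    # Upper-level subproblem: multi-gradient step on the two objectives.
    g1 = 2.0 * (a - 1.0)   # d/da of (a - 1)^2
    g2 = 2.0 * (a + 1.0)   # d/da of (a + 1)^2
    a -= lr_a * min_norm_2(g1, g2)
# a ends at a Pareto-stationary point of the two objectives, where the
# min-norm combination of their gradients vanishes.
```

In the actual framework the lower-level variables are network weights trained on support data and the upper-level variables are meta-parameters evaluated on query data, but the alternating structure is the same.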
## Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Multi-Domain Classification | Office-Home (test) | Accuracy (Art): 69.64 | 20 |
| Image Classification | CIFAR-10 | Accuracy (Natural): 97.25 | 9 |
| Multi-task Learning | Office-31 (test) | Accuracy (Domain A): 88.03 | 6 |
| Semi-supervised Domain Adaptation | Office-31 | A->D Accuracy: 94.32 | 4 |