
Meta-Learning at Scale for Large Language Models via Low-Rank Amortized Bayesian Meta-Learning

About

Fine-tuning large language models (LLMs) with low-rank adaptation (LoRA) is a cost-effective way to incorporate information from a specific dataset. However, when a problem requires incorporating information from multiple datasets, as in few-shot learning, generalization across datasets can be limited, driving up training costs. As a consequence, other approaches such as in-context learning are typically used in this setting. To address this challenge, we introduce an efficient method for adapting the weights of LLMs to multiple distributions: Amortized Bayesian Meta-Learning for LoRA (ABMLL). This method builds on amortized Bayesian meta-learning for smaller models, adapting the approach to LLMs by reframing where local and global variables are defined in LoRA and using a new hyperparameter to balance reconstruction accuracy against the fidelity of task-specific parameters to the global ones. ABMLL supports effective generalization across datasets and scales to large models such as Llama3-8B and Qwen2-7B, outperforming existing methods on the CrossFit and UnifiedQA datasets in terms of both accuracy and expected calibration error. We show that meta-learning can also be combined with in-context learning, yielding further improvements on both of these datasets as well as in legal and chemistry applications.
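The trade-off the abstract describes — fitting each task well while keeping task-specific LoRA parameters close to shared global ones — can be sketched as a variational-style objective. This is a minimal illustrative sketch, not the paper's implementation: it assumes Gaussian posteriors over (flattened) LoRA factors, and the function names and the `beta` hyperparameter are placeholders for whatever weighting ABMLL actually uses.

```python
import numpy as np

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ), summed over dimensions.

    Here q is a task-specific distribution over LoRA parameters and
    p is the shared global distribution (illustrative assumption)."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(
        logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

def meta_loss(recon_nll, mu_task, logvar_task, mu_global, logvar_global, beta=0.1):
    """Toy per-task objective: reconstruction negative log-likelihood plus a
    beta-weighted KL term pulling the task posterior toward the global one.
    beta plays the role of the balancing hyperparameter described above."""
    kl = gaussian_kl(mu_task, logvar_task, mu_global, logvar_global)
    return recon_nll + beta * kl
```

With `beta = 0` each task fits its own data freely; larger `beta` ties tasks more tightly to the global distribution, trading per-task accuracy for cross-task generalization.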

Liyi Zhang, Jake Snell, Thomas L. Griffiths • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Classification | CrossFit cls-23 | Accuracy 75.4 | 16 |
| Natural Language Inference | NLI | Accuracy 85.2 | 14 |
| Multiple-choice Question Answering | MCQA | Accuracy 82.1 | 11 |
| Natural Language Inference | CrossFit NLI (test) | Accuracy 83.6 | 10 |
| Paraphrase Detection | CrossFit Para (test) | Accuracy 66.1 | 10 |
| Text Classification | CrossFit cls-45 (test) | Accuracy 75.2 | 10 |
| Multiple-choice Question Answering | UnifiedQA MCQA (test) | Accuracy 77.4 | 10 |
| Classification | CrossFit cls-45 | Accuracy 77.4 | 6 |
| In-Context Learning | LegalBench | Accuracy 79.5 | 6 |
| In-Context Learning | ChemBench | Accuracy 58.4 | 6 |

Showing 10 of 11 rows
