
LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models

About

The success of large language models (LLMs), like GPT-4 and ChatGPT, has led to the development of numerous cost-effective and accessible alternatives created by fine-tuning open-access LLMs with task-specific data (e.g., ChatDoctor) or instruction data (e.g., Alpaca). Among the various fine-tuning methods, adapter-based parameter-efficient fine-tuning (PEFT) is one of the most attractive topics, as it only requires fine-tuning a few external parameters instead of the entire LLM while achieving comparable or even better performance. To enable further research on PEFT methods for LLMs, this paper presents LLM-Adapters, an easy-to-use framework that integrates various adapters into LLMs and can run these adapter-based PEFT methods on different tasks. The framework includes state-of-the-art open-access LLMs such as LLaMA, BLOOM, and GPT-J, as well as widely used adapters such as Series adapters, Parallel adapters, Prompt-based learning, and Reparametrization-based methods. Moreover, we conduct extensive empirical studies on the impact of adapter types, placement locations, and hyper-parameters to find the best design for each adapter-based method. We evaluate the effectiveness of the adapters on fourteen datasets from two different reasoning tasks, Arithmetic Reasoning and Commonsense Reasoning. The results demonstrate that using adapter-based PEFT in smaller-scale LLMs (7B) with few extra trainable parameters yields comparable, and in some cases superior, performance to powerful LLMs (175B) in zero-shot inference on both reasoning tasks.
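The adapter families named above differ mainly in where the extra trainable parameters sit relative to the frozen transformer sublayers. The minimal PyTorch sketch below illustrates the Series vs. Parallel bottleneck-adapter placements; the class names (BottleneckAdapter, AdaptedFFN) and the bottleneck size are illustrative assumptions, not the LLM-Adapters API, which wires adapters into LLaMA, BLOOM, and GPT-J layers for you.

```python
# Minimal sketch (assumed names, not the LLM-Adapters API) of Series vs.
# Parallel bottleneck adapters around a frozen feed-forward sublayer.
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Down-project -> nonlinearity -> up-project; returns only the learned delta."""

    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck_size, hidden_size)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.act(self.down(x)))


class AdaptedFFN(nn.Module):
    """Frozen feed-forward sublayer plus a trainable adapter.

    mode="series":   h = ffn(x);  output = h + adapter(h)   (adapter after the sublayer)
    mode="parallel": output = ffn(x) + adapter(x)           (adapter alongside the sublayer)
    """

    def __init__(self, ffn: nn.Module, hidden_size: int, mode: str = "series"):
        super().__init__()
        self.ffn = ffn
        self.adapter = BottleneckAdapter(hidden_size)
        self.mode = mode
        # Parameter efficiency: only the adapter is trainable; the base sublayer stays frozen.
        for p in self.ffn.parameters():
            p.requires_grad = False

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.mode == "series":
            h = self.ffn(x)
            return h + self.adapter(h)
        return self.ffn(x) + self.adapter(x)


# Toy usage with LLaMA-7B-like dimensions (hidden 4096, FFN 11008).
ffn = nn.Sequential(nn.Linear(4096, 11008), nn.SiLU(), nn.Linear(11008, 4096))
layer = AdaptedFFN(ffn, hidden_size=4096, mode="parallel")
out = layer(torch.randn(2, 16, 4096))  # (batch, seq_len, hidden)
```

By contrast, reparametrization-based methods such as LoRA learn a low-rank update to the frozen weight matrices, and prompt-based learning prepends trainable virtual tokens; all four families leave the base model's parameters untouched.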

Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, Roy Ka-Wei Lee • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image Classification | CIFAR-100 (test) | Accuracy: 87.97 | 3518 |
| Image Classification | CIFAR-10 (test) | Accuracy: 97.94 | 3381 |
| Mathematical Reasoning | GSM8K (test) | Accuracy: 60.8 | 900 |
| Natural Language Understanding | GLUE (dev) | SST-2 (Acc): 96 | 518 |
| Code Generation | HumanEval (test) | Pass@1: 34.7 | 506 |
| Commonsense Reasoning | Common Sense Reasoning Tasks | Avg Score: 87.1 | 316 |
| Mathematical Reasoning | MAWPS | Accuracy: 91.6 | 234 |
| Instruction Following | MT-Bench | MT-Bench Score: 5.7 | 215 |
| Commonsense Reasoning | Commonsense Reasoning (BoolQ, PIQA, SIQA, HellaS., WinoG., ARC-e, ARC-c, OBQA) (test) | BoolQ Accuracy: 88 | 202 |
| Math Reasoning | GSM8K | Accuracy: 61 | 187 |

Showing 10 of 33 rows.
