
KronA: Parameter Efficient Tuning with Kronecker Adapter

About

Fine-tuning a Pre-trained Language Model (PLM) on a specific downstream task has been a well-known paradigm in Natural Language Processing. However, with the ever-growing size of PLMs, training the entire model on several downstream tasks becomes very expensive and resource-hungry. Recently, different Parameter Efficient Tuning (PET) techniques have been proposed to improve the efficiency of fine-tuning PLMs. One popular category of PET methods is low-rank adaptation, which inserts learnable truncated SVD modules into the original model either sequentially or in parallel. However, low-rank decomposition suffers from limited representation power. In this work, we address this problem by using the Kronecker product instead of the low-rank representation. We introduce KronA, a Kronecker product-based adapter module for efficient fine-tuning of Transformer-based PLMs. We apply the proposed method to fine-tune T5 on the GLUE benchmark and show that incorporating the Kronecker-based modules can outperform state-of-the-art PET methods.
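To make the core idea concrete, here is a minimal sketch of a Kronecker-product weight update in NumPy. The shapes, scaling factor, and variable names are illustrative assumptions, not the authors' exact configuration: the point is only that a Kronecker product of two small trainable factors can match the frozen weight's shape while having rank equal to the product of the factors' ranks, unlike a truncated-SVD (low-rank) update.

```python
import numpy as np

# Illustrative sketch of a Kronecker-product adapter (hypothetical shapes).
# The frozen pretrained weight W is d_out x d_in; the trainable factors
# A (a1 x a2) and B (b1 x b2) satisfy a1*b1 = d_out and a2*b2 = d_in,
# so A kron B matches W's shape exactly.
d_out, d_in = 8, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))   # frozen pretrained weight
A = rng.standard_normal((4, 4))          # small trainable factor
B = rng.standard_normal((2, 2))          # small trainable factor
scale = 0.1                              # hypothetical scaling hyperparameter

# The adapter update: rank(A kron B) = rank(A) * rank(B), so the update can
# reach full rank even though only A.size + B.size = 20 parameters are
# trained, versus 64 for W itself.
delta_W = np.kron(A, B)
assert delta_W.shape == W.shape

x = rng.standard_normal(d_in)
y = (W + scale * delta_W) @ x            # adapted forward pass
```

In this sketch the frozen weight stays untouched and only the two small factors would receive gradients, which is what makes the approach parameter-efficient.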

Ali Edalati, Marzieh Tahaei, Ivan Kobyzev, Vahid Partovi Nia, James J. Clark, Mehdi Rezagholizadeh • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Question Answering | SQuAD 2.0 | F1 | 81.96 | 190
Summarization | Xsum | ROUGE-2 | 16.14 | 108
Question Answering | SQuAD v1.1 | F1 | 86.45 | 79
Summarization | CNN Daily Mail | ROUGE-1 | 40.83 | 67
Commonsense Reasoning | Commonsense Reasoning (BoolQ, PIQA, SIQA, HellaS., WinoG., ARC-e, ARC-c, OBQA) | BoolQ Accuracy | 72.9 | 61
Reading Comprehension | DROP (test) | F1 Score | 58.5 | 61
MRI Image Generation | ADNI (evaluation) | FID | 11.594 | 12
MRI Image Generation | PPMI (evaluation set) | FID | 15.857 | 12
MRI Image Generation | BraTS 2021 (evaluation set) | FID | 3.399 | 12
Concept Retrieval | ELSST concept retrieval synthetic (test) | MRR | 95.5 | 7
