Efficient Layer-wise LLM Fine-tuning for Revision Intention Prediction

About

Large Language Models (LLMs) have shown extraordinary success across various text generation tasks; however, their potential for simple yet essential text classification remains underexplored, as LLM pre-training tends to emphasize generation over classification. While instruction-tuned LLMs can recast classification as a generation task, they often struggle to categorize nuanced texts. One such example is text revision, which involves nuanced edits between pairs of texts. Although simply fine-tuning LLMs for revision classification seems plausible, it requires a large amount of revision annotations, which are exceptionally expensive and scarce in the community. To address this issue, we introduce a plug-and-play layer-wise parameter-efficient fine-tuning (PEFT) framework, IR-Tuning, which fine-tunes a subset of important LLM layers that are dynamically selected based on their gradient norm distribution, while freezing the redundant layers. Extensive experiments suggest that IR-Tuning surpasses several layer-wise PEFT baselines over diverse text revisions, while achieving fast convergence, low GPU memory consumption, and effectiveness on small revision corpora.
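The core selection step can be sketched as follows. This is a minimal illustration of gradient-norm-based layer selection, not the paper's exact IR-Tuning procedure: the layer names, the probe-batch idea, and the top-k fraction are all illustrative assumptions.

```python
# Sketch: pick the layers with the largest gradient norms to fine-tune,
# freezing the rest. The gradient norms would come from a backward pass
# on a small probe batch; here they are hypothetical values.

def select_important_layers(grad_norms, top_fraction=0.25):
    """Return a name -> trainable flag mapping.

    grad_norms: dict mapping layer name to its gradient norm.
    top_fraction: assumed fraction of layers to keep trainable.
    """
    k = max(1, int(len(grad_norms) * top_fraction))
    ranked = sorted(grad_norms, key=grad_norms.get, reverse=True)
    trainable = set(ranked[:k])
    # In a real model, frozen layers would get requires_grad=False.
    return {name: (name in trainable) for name in grad_norms}

# Example: 8 transformer layers with hypothetical gradient norms.
norms = {f"layer_{i}": n
         for i, n in enumerate([0.9, 0.1, 0.4, 2.1, 0.05, 1.3, 0.2, 0.7])}
mask = select_important_layers(norms, top_fraction=0.25)
print([name for name, keep in mask.items() if keep])  # the two highest-norm layers
```

Only the selected layers receive gradient updates, which is what keeps GPU memory low and convergence fast on small revision corpora.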

Zhexiong Liu, Diane Litman • 2025

Related benchmarks

Task                 Dataset        Result        Rank
Revision Generation  ITERATER sent  SARI: 0.4179  23
Revision Generation  ITERATER doc   SARI: 46.94   23
Revision Generation  ArgRevision    SARI: 37.51   23
