
Intention-Adaptive LLM Fine-Tuning for Text Revision Generation

About

Large Language Models (LLMs) have achieved impressive capabilities in various context-based text generation tasks, such as summarization and reasoning; however, their applications in intention-based generation tasks remain underexplored. One such example is revision generation, which requires the generated text to explicitly reflect the writer's actual intentions. Identifying intentions and generating desirable revisions are challenging due to their complex and diverse nature. Although prior work has employed LLMs to generate revisions with few-shot learning, such approaches struggle to handle entangled multi-intent scenarios. While fine-tuning LLMs with intention-based instructions appears promising, it demands large amounts of annotated data, which are expensive to obtain and scarce in the revision community. To address these challenges, we propose Intention-Tuning, an intention-adaptive layer-wise LLM fine-tuning framework that dynamically selects a subset of LLM layers to learn the intentions and subsequently transfers their representations to revision generation. Experimental results suggest that Intention-Tuning is effective and efficient on small revision corpora, outperforming several PEFT baselines.
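The abstract does not spell out how Intention-Tuning picks which layers to adapt, but the general idea of layer-wise selective fine-tuning can be sketched in PyTorch. In the toy example below, all names (`ToyLM`, `freeze_all_but_top_k`) and the gradient-norm scoring heuristic are illustrative assumptions, not the paper's actual selection rule: layers are scored with a probe loss, and only a small top-k subset is left trainable.

```python
import torch
import torch.nn as nn

class ToyLM(nn.Module):
    """Stand-in for an LLM: a stack of simple blocks playing the role of layers."""
    def __init__(self, n_layers=8, d=16):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(d, d) for _ in range(n_layers))

    def forward(self, x):
        for layer in self.layers:
            x = torch.relu(layer(x))
        return x

def freeze_all_but_top_k(model, x, y, k=2):
    """Hypothetical heuristic: score each layer by the gradient norm of a probe
    loss (standing in for an intention-learning signal), then unfreeze only
    the top-k layers for fine-tuning; everything else stays frozen."""
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    scores = [sum(p.grad.norm().item() for p in layer.parameters())
              for layer in model.layers]
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    for i, layer in enumerate(model.layers):
        for p in layer.parameters():
            p.requires_grad = i in top
            p.grad = None  # discard probe gradients before real fine-tuning
    return top

torch.manual_seed(0)
model = ToyLM()
x, y = torch.randn(4, 16), torch.randn(4, 16)
selected = freeze_all_but_top_k(model, x, y, k=2)
trainable = [i for i, layer in enumerate(model.layers)
             if all(p.requires_grad for p in layer.parameters())]
```

An optimizer would then be built over only the trainable parameters, which is what makes this kind of scheme parameter-efficient relative to full fine-tuning.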

Zhexiong Liu, Diane Litman • 2026

Related benchmarks

Task                | Dataset       | Metric | Result | Rank
Revision Generation | ITERATER sent | SARI   | 0.421  | 23
Revision Generation | ITERATER doc  | SARI   | 48.77  | 23
Revision Generation | ArgRevision   | SARI   | 38.08  | 23
