
LARFT: Closing the Cognition-Action Gap for Length Instruction Following in Large Language Models

About

Despite the strong performance of Large Language Models (LLMs) on complex instruction-following tasks, precise control of output length remains a persistent challenge. Existing methods primarily attempt to enforce length constraints by externally imposing length signals or optimization objectives, while largely overlooking the underlying limitation: the model's intrinsic deficit in length cognition. To address this, we propose LARFT (Length-Aware Reinforcement Fine-Tuning), a training framework that aligns the model's length cognition with its actions. Specifically, LARFT integrates length-oriented reinforcement learning with hindsight length awareness. By transforming on-policy data into hindsight self-awareness tasks, in which the model learns to identify the actual length of its own generations, LARFT jointly optimizes the model's internal representation of length information and refines its policy to satisfy length constraints, thereby achieving precise and reliable length instruction following. Extensive experiments across four base models demonstrate that LARFT outperforms existing baselines, achieving an average improvement of +20.92 points across three length instruction following benchmarks with only a marginal decline of -1.45 points on four general capability benchmarks.
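To make the two ingredients concrete, the sketch below shows one plausible way to (a) turn an on-policy generation into a hindsight self-awareness example, where the model must report the actual length of its own output, and (b) score generations with a length-oriented reward. This is an illustrative assumption, not the authors' implementation: the prompt wording, word-level length counting, and the linear reward shape are all hypothetical choices.

```python
# Hypothetical sketch of LARFT-style training signals.
# All names, prompt templates, and the reward shape are assumptions,
# not the paper's actual implementation.

def hindsight_awareness_example(instruction: str, generation: str) -> dict:
    """Convert an on-policy generation into a hindsight self-awareness
    task: the model is asked to identify the true length of its own output."""
    actual_len = len(generation.split())  # word-level length; tokens are another option
    return {
        "prompt": (
            f"Instruction: {instruction}\n"
            f"Your response: {generation}\n"
            "Question: exactly how many words does your response contain?"
        ),
        "target": str(actual_len),  # supervise the model's length cognition
    }

def length_reward(generation: str, target_len: int) -> float:
    """Toy length-oriented RL reward: 1.0 at the target length,
    decaying linearly with relative deviation, floored at 0."""
    actual = len(generation.split())
    return max(0.0, 1.0 - abs(actual - target_len) / target_len)

# Example usage:
ex = hindsight_awareness_example(
    "Describe the sky in exactly 5 words.",
    "The sky is vivid blue",
)
```

In this reading, the awareness examples train the model's internal representation of length, while the reward refines the policy; LARFT's contribution is optimizing both jointly rather than imposing the reward alone.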

Wei Zhang, Lintong Du, Yuanhe Zhang, Zhenhong Zhou, Kun Wang, Li Sun, Sen Su • 2026

Related benchmarks

Task                                Dataset        Metric                 Result   Rank
Instruction Following               IFEval         -                      -        625
Length Following                    LIFEBench      LD Score               10.8     28
Long-generation                     LongBench      Sequence Length (Sl)   96.75    28
Scientific Question Answering       GPQA           Score                  33.48    28
Short-length Instruction Following  Lenctrl-Bench  MAE                    4.99     28
General Language Understanding      MMLU           MMLU Score             72.94    28
