LARFT: Closing the Cognition-Action Gap for Length Instruction Following in Large Language Models
About
Despite the strong performance of Large Language Models (LLMs) on complex instruction-following tasks, precise control of output length remains a persistent challenge. Existing methods primarily attempt to enforce length constraints by externally imposing length signals or optimization objectives, while largely overlooking the underlying limitation: the model's intrinsic deficit in length cognition. To address this, we propose LARFT (Length-Aware Reinforcement Fine-Tuning), a training framework that aligns the model's length cognition with its actions. Specifically, LARFT integrates length-oriented reinforcement learning with a hindsight length-awareness objective. By transforming on-policy data into hindsight self-awareness tasks, in which the model learns to identify the actual length of its own generations, LARFT jointly optimizes the model's internal representation of length information and refines its policy to satisfy length constraints, thereby achieving precise and reliable length instruction following. Extensive experiments across four base models show that LARFT outperforms existing baselines, with an average improvement of +20.92 points on three length instruction following benchmarks at the cost of only a marginal -1.45-point decline on four general capability benchmarks.
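The hindsight self-awareness step above can be sketched as a simple data transformation: each on-policy generation is turned into a training example that asks the model to report the actual length of its own output. This is a minimal illustration only; the prompt wording, the word-level length measure, and the `build_hindsight_example` helper are assumptions for exposition, not the exact format used by LARFT.

```python
# Hedged sketch of the hindsight length-awareness data construction.
# The prompt template and word-level length metric are illustrative
# assumptions, not the paper's exact recipe.

def build_hindsight_example(instruction: str, generation: str) -> dict:
    """Turn an on-policy generation into a self-awareness task where
    the model must identify the actual length of its own output."""
    actual_len = len(generation.split())  # word count; tokens also possible
    return {
        "prompt": (
            "You previously answered the instruction below.\n"
            f"Instruction: {instruction}\n"
            f"Your answer: {generation}\n"
            "How many words does your answer contain?"
        ),
        "target": str(actual_len),  # supervision signal for length cognition
    }

example = build_hindsight_example(
    "Describe the sky in about ten words.",
    "The sky is a vast blue dome scattered with clouds.",
)
print(example["target"])  # -> 10
```

Training on such examples gives the policy a direct gradient signal about its own output lengths, which the reinforcement objective can then exploit when enforcing length constraints.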
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Instruction Following | IFEval | -- | -- | 625 |
| Length Following | LIFEBench | LD Score | 10.8 | 28 |
| Long Generation | LongBench | Sequence Length (SL) | 96.75 | 28 |
| Scientific Question Answering | GPQA | Score | 33.48 | 28 |
| Short-length Instruction Following | Lenctrl-Bench | MAE | 4.99 | 28 |
| General Language Understanding | MMLU | MMLU Score | 72.94 | 28 |