
Not All Layers Need Tuning: Selective Layer Restoration Recovers Diversity

About

Post-training improves the instruction-following and helpfulness of large language models (LLMs) but often reduces generation diversity, leading to repetitive outputs in open-ended settings, a phenomenon known as mode collapse. Motivated by evidence that LLM layers play distinct functional roles, we hypothesize that mode collapse can be localized to specific layers and that restoring a carefully chosen range of layers to their pre-trained weights can recover diversity while maintaining high output quality. To validate this hypothesis and decide which layers to restore, we design a proxy task -- Constrained Random Character (CRC) -- with an explicit validity set and a natural diversity objective. Results on CRC reveal a clear diversity-validity trade-off across restoration ranges and identify configurations that increase diversity with minimal quality loss. Based on these findings, we propose Selective Layer Restoration (SLR), a training-free method that restores selected layers of a post-trained model to their pre-trained weights, yielding a hybrid model with the same architecture and parameter count and no additional inference cost. Across three tasks (creative writing, open-ended question answering, and multi-step reasoning) and three model families (Llama, Qwen, and Gemma), we find that SLR consistently and substantially improves output diversity while maintaining high output quality.

Bowen Zhang, Meiyi Wang, Harold Soh • 2026
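As described in the abstract, SLR amounts to overwriting a chosen range of transformer layers in a post-trained checkpoint with their pre-trained counterparts. The snippet below is a minimal sketch of that operation using Hugging Face transformers; the model identifiers, the restored layer range, and the output path are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of Selective Layer Restoration (SLR): copy pre-trained (base)
# weights back into a chosen range of decoder layers of a post-trained model.
# Model names and the layer range are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM

BASE_ID = "meta-llama/Llama-3.1-8B"            # pre-trained checkpoint (assumed)
TUNED_ID = "meta-llama/Llama-3.1-8B-Instruct"  # post-trained checkpoint (assumed)
RESTORE_RANGE = range(20, 28)                  # hypothetical layer range to restore

base = AutoModelForCausalLM.from_pretrained(BASE_ID, torch_dtype=torch.bfloat16)
tuned = AutoModelForCausalLM.from_pretrained(TUNED_ID, torch_dtype=torch.bfloat16)

# Overwrite the selected decoder layers of the post-trained model with the
# corresponding pre-trained weights; all other parameters stay post-trained.
with torch.no_grad():
    for i in RESTORE_RANGE:
        tuned.model.layers[i].load_state_dict(base.model.layers[i].state_dict())

# The result is a hybrid model with unchanged architecture and parameter count,
# so it can be saved and used for generation like the original model.
tuned.save_pretrained("llama-3.1-8b-instruct-slr")
```

Because the two checkpoints share the same architecture, no retraining or extra inference machinery is involved; which layer range to restore is the design choice the paper's CRC proxy task is used to select.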

Related benchmarks

Task                 | Dataset       | Metric    | Result | Rank
---------------------|---------------|-----------|--------|-----
Joke generation      | Joke          | Quality   | 12.8   | 9
Multi-step reasoning | GSM8K (test)  | Pass@1    | 32.2   | 9
Poem generation      | Poem          | Quality   | 1.005  | 9
Story generation     | Story         | Quality   | 117.7  | 9
Open-ended QA        | Open-ended QA | Precision | 97.8   | 9
