
DistillLens: Symmetric Knowledge Distillation Through Logit Lens

About

Standard Knowledge Distillation (KD) compresses Large Language Models (LLMs) by optimizing final outputs, yet it typically treats the teacher's intermediate-layer reasoning process as a black box. While feature-based distillation attempts to bridge this gap, existing methods (e.g., MSE and asymmetric KL divergence) ignore the rich uncertainty profiles required for the final output. In this paper, we introduce DistillLens, a framework that symmetrically aligns the evolving thought processes of student and teacher models. By projecting intermediate hidden states into the vocabulary space via the Logit Lens, we enforce structural alignment using a symmetric divergence objective. Our analysis shows that this constraint imposes a dual-sided penalty, preventing both overconfidence and underconfidence while preserving the high-entropy information conduits essential for final deduction. Extensive experiments on GPT-2 and Llama architectures demonstrate that DistillLens consistently outperforms standard KD and feature-transfer baselines on diverse instruction-following benchmarks. The code is available at https://github.com/manishdhakal/DistillLens.
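The two ingredients the abstract describes can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: `logit_lens` projects an intermediate hidden state into vocabulary space through the unembedding matrix (the final LayerNorm is omitted for brevity), and `symmetric_kl` is one common symmetric divergence (the average of forward and reverse KL); the paper's exact objective may differ. All function and variable names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the vocabulary axis.
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def logit_lens(hidden, unembed):
    # Project intermediate hidden states (..., d_model) into
    # vocabulary space (..., vocab) via the unembedding matrix.
    # (A full Logit Lens also applies the model's final LayerNorm.)
    return hidden @ unembed.T

def kl(p, q, eps=1e-12):
    # KL(p || q), elementwise over distributions on the last axis.
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)

def symmetric_kl(student_logits, teacher_logits):
    # Dual-sided penalty: forward KL punishes student distributions
    # that miss teacher mass (underconfidence); reverse KL punishes
    # spurious student mass (overconfidence). Averaging the two gives
    # a symmetric objective.
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    return 0.5 * (kl(p, q) + kl(q, p)).mean()
```

In training, such a loss would be evaluated at matched student/teacher layers after each side's hidden states pass through `logit_lens`, and added to the standard output-level KD loss.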

Manish Dhakal, Uthman Jinadu, Anjila Budathoki, Rajshekhar Sunderraman, Yi Ding • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Instruction Following | S-NI | Rouge-L | 30.8 | 94 |
| Instruction Following | UnNI | Rouge-L | 34.3 | 94 |
| Instruction Following | SelfInst | Rouge-L | 18.2 | 57 |
| Instruction Following | SelfInst | Rouge-L | 13.3 | 50 |
| Instruction Following | Dolly | SBERT Similarity | 71.4 | 24 |
| Instruction Following | Vicuna | SBERT Similarity | 73.6 | 24 |
| Instruction Following | S-NI | SBERT Similarity | 60.2 | 24 |
| Instruction Following | UnNI | SBERT Similarity | 60.3 | 24 |
| Instruction Following | Dolly | Rouge-L | 25.2 | 6 |
| Instruction Following | Vicuna | Rouge-L | 17.8 | 6 |
