Revitalizing Black-Box Interpretability: Actionable Interpretability for LLMs via Proxy Models

About

Post-hoc explanations provide transparency and are essential for guiding model optimization, such as prompt engineering and data sanitization. However, applying model-agnostic techniques to Large Language Models (LLMs) is hindered by prohibitive computational costs, leaving these tools dormant in real-world applications. To revitalize model-agnostic interpretability, we propose a budget-friendly proxy framework that leverages efficient models to approximate the decision boundaries of expensive LLMs. We introduce a screen-and-apply mechanism that statistically verifies local alignment between proxy and oracle before deployment. Our empirical evaluation confirms that proxy explanations achieve over 90% fidelity at only 11% of the oracle's cost. Building on this foundation, we demonstrate the actionable utility of our framework in prompt compression and poisoned example removal. Results show that reliable proxy explanations effectively guide optimization, transforming interpretability from a passive observation tool into a scalable primitive for LLM development. Additionally, we open-source code and datasets to facilitate future research.
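The screen-and-apply idea described above can be sketched roughly as follows. This is an illustrative assumption, not the paper's exact procedure: the function names, the perturbation scheme, and the simple agreement threshold are all placeholders (the paper's mechanism performs a statistical verification of local alignment; a threshold on sampled agreement stands in for that here). The "screen" step checks whether a cheap proxy matches the expensive oracle on perturbations around an input; only inputs that pass are handed to the proxy for explanation.

```python
import random


def screen_and_apply(oracle, proxy, inputs, perturb,
                     n_samples=30, min_agreement=0.9):
    """Screen: sample local perturbations of each input and measure how
    often the cheap proxy agrees with the expensive oracle.
    Apply: return only the inputs where the proxy is locally aligned,
    so proxy explanations can safely replace oracle queries there."""
    aligned = []
    for x in inputs:
        samples = [perturb(x) for _ in range(n_samples)]
        agreement = sum(oracle(s) == proxy(s) for s in samples) / n_samples
        # Placeholder check: a real implementation would use a statistical
        # test on the sampled agreement rather than a raw threshold.
        if agreement >= min_agreement:
            aligned.append(x)
    return aligned


# Toy demo with stand-in models: classify by the sign of the token sum.
random.seed(0)
oracle = lambda toks: int(sum(toks) > 0)          # "expensive" model (toy)
proxy = lambda toks: int(sum(toks) > 0)           # perfectly aligned proxy (toy)
perturb = lambda toks: [t for t in toks if random.random() > 0.3]  # token dropout

kept = screen_and_apply(oracle, proxy, [[1, 2, 3], [-1, -2]], perturb)
```

In this toy setup the proxy agrees with the oracle everywhere, so both inputs pass screening; a proxy that disagreed locally would be filtered out and its explanations never deployed.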

Junhao Liu, Haonan Yu, Zhenyu Yan, Xin Zhang • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Explaining LLMs | SST | CRR | 17.13 | 42
Explaining LLMs | MMLU | CRR | 6.31 | 42
Explaining LLMs | NQ | CRR | 11.76 | 42
Faithfulness Evaluation | Qasper yes/no question answering | AOPC@10 | 0.00e+0 | 10
Prompt Compression | MMLU | MMLU Accuracy (Chemistry) | 41 | 5
Prompt Compression | HellaSwag | Compression Ratio | 70.1 | 5
Prompt Compression | GSM8K | Compression Ratio | 35.5 | 5
Prompt Compression | PIQA | Compression Ratio | 64.5 | 5
Poisoned examples removal | SST | Accuracy | 94 | 3
Poisoned examples removal | HellaSwag | Accuracy | 93.5 | 3
Showing 10 of 11 rows
