
Revitalizing Black-Box Interpretability: Actionable Interpretability for LLMs via Proxy Models

About

Post-hoc explanations provide transparency and are essential for guiding model optimization, such as prompt engineering and data sanitation. However, applying model-agnostic techniques to Large Language Models (LLMs) is hindered by prohibitive computational costs, leaving these tools largely dormant in real-world applications. To revitalize model-agnostic interpretability, we propose a budget-friendly proxy framework that leverages efficient models to approximate the decision boundaries of expensive LLMs. We introduce a screen-and-apply mechanism that statistically verifies local alignment between the proxy and the oracle before deployment. Our empirical evaluation confirms that proxy explanations achieve over 90% fidelity at only 11% of the oracle's cost. Building on this foundation, we demonstrate the actionable utility of our framework in prompt compression and poisoned example removal. Results show that reliable proxy explanations effectively guide optimization, transforming interpretability from a passive observation tool into a scalable primitive for LLM development. Additionally, we open-source code and datasets to facilitate future research.
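The screen-and-apply idea can be illustrated with a minimal sketch: before trusting a cheap proxy to explain an expensive oracle, check that the two agree on a sample of local perturbations of the input, and only then run attribution on the proxy. All names here (`oracle`, `proxy`, `perturb`, the leave-one-out attribution) are illustrative assumptions, not the authors' actual API or their statistical test.

```python
import random

def perturb(tokens, rng):
    """Drop one random token to sample the input's local neighborhood."""
    i = rng.randrange(len(tokens))
    return tokens[:i] + tokens[i + 1:]

def screen(oracle, proxy, tokens, n_samples=50, min_agree=0.9, seed=0):
    """Screen step: accept the proxy only if it matches the oracle's
    prediction on at least min_agree of sampled perturbations."""
    rng = random.Random(seed)
    agree = sum(
        oracle(s) == proxy(s)
        for s in (perturb(tokens, rng) for _ in range(n_samples))
    )
    return agree / n_samples >= min_agree

def apply_proxy(proxy, tokens):
    """Apply step: leave-one-out attribution on the cheap proxy --
    a token is important if removing it flips the prediction."""
    base = proxy(tokens)
    return [base != proxy(tokens[:i] + tokens[i + 1:])
            for i in range(len(tokens))]

# Toy stand-ins: both models predict positive iff "good" appears.
oracle = proxy = lambda toks: "good" in toks
tokens = ["the", "movie", "was", "good"]
if screen(oracle, proxy, tokens):            # proxy is locally aligned
    importance = apply_proxy(proxy, tokens)  # only "good" flips the label
```

In practice the screen step would use a proper statistical test (e.g., a binomial test on the agreement rate) rather than a raw threshold, and every call to `proxy` replaces a far more expensive oracle query, which is where the cost savings come from.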

Junhao Liu, Haonan Yu, Zhenyu Yan, Xin Zhang • 2025

Related benchmarks

Task: Faithfulness Evaluation
Dataset: Qasper yes/no question answering
Result: AOPC@10 = 0.00e+0
Rank: 10
