
Sparsity-Guided Holistic Explanation for LLMs with Interpretable Inference-Time Intervention

About

Large Language Models (LLMs) have achieved unprecedented breakthroughs in various natural language processing domains. However, the enigmatic "black-box" nature of LLMs remains a significant challenge for interpretability, hampering transparent and accountable applications. While past approaches, such as attention visualization, pivotal subnetwork extraction, and concept-based analyses, offer some insight, they often focus on either local or global explanations within a single dimension, occasionally falling short in providing comprehensive clarity. In response, we propose a novel methodology anchored in sparsity-guided techniques, aiming to provide a holistic interpretation of LLMs. Our framework, termed SparseCBM, innovatively integrates sparsity to elucidate three intertwined layers of interpretation: input, subnetwork, and concept levels. In addition, the newly introduced dimension of interpretable inference-time intervention facilitates dynamic adjustments to the model during deployment. Through rigorous empirical evaluations on real-world datasets, we demonstrate that SparseCBM delivers a profound understanding of LLM behaviors, setting it apart in both interpreting and ameliorating model inaccuracies. Codes are provided in supplements.
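To make the two core ideas in the abstract concrete, the toy sketch below combines a concept bottleneck head (hidden features projected onto named concept scores, then onto labels) with a magnitude-based sparsity mask and an inference-time intervention that overrides a concept activation. This is an illustrative stand-in only, assuming a linear bottleneck and simple magnitude pruning; it is not the authors' SparseCBM implementation, and all dimensions and names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions for illustration only (not from the paper).
HIDDEN, N_CONCEPTS, N_CLASSES = 16, 5, 2

# Concept bottleneck head: hidden features -> concept scores -> label logits.
W_concept = rng.normal(size=(HIDDEN, N_CONCEPTS))
W_label = rng.normal(size=(N_CONCEPTS, N_CLASSES))

# Sparsity mask over the concept projection: prune the smallest-magnitude
# weights, keeping roughly the top half (a simple stand-in for
# sparsity-guided subnetwork extraction).
threshold = np.quantile(np.abs(W_concept), 0.5)
mask = (np.abs(W_concept) >= threshold).astype(float)

def predict(h, intervention=None):
    """Forward pass; `intervention` maps concept index -> forced value."""
    concepts = h @ (W_concept * mask)  # sparse concept activations
    if intervention is not None:
        for idx, value in intervention.items():
            concepts[idx] = value      # inference-time intervention
    logits = concepts @ W_label
    return concepts, logits

h = rng.normal(size=HIDDEN)
concepts_plain, logits_plain = predict(h)
# Clamp concept 2 to zero at inference time and observe the prediction shift.
concepts_fixed, logits_fixed = predict(h, intervention={2: 0.0})
```

Because the label head reads only the concept scores, editing a single concept at inference time changes the prediction in a directly interpretable way, which is the appeal of intervention in bottleneck-style models.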

Zhen Tan, Tianlong Chen, Zhenyu Zhang, Huan Liu • 2023

Related benchmarks

Task                 Dataset  Result          Rank
Text Classification  Hotel    Accuracy 98.1   7
Text Classification  Beer     Accuracy 88.3   7
Text Classification  CEBaB    Accuracy 64.4   7
