
Explaining Language Models' Predictions with High-Impact Concepts

About

The emergence of large-scale pretrained language models has posed unprecedented challenges for explaining why a model makes particular predictions. Stemming from the compositional nature of language, spurious correlations further undermine the trustworthiness of NLP systems, leading to unreliable model explanations that are merely correlated with the output predictions. To encourage fairness and transparency, there is an urgent demand for reliable explanations that allow users to consistently understand the model's behavior. In this work, we propose a complete framework for extending concept-based interpretability methods to NLP. Specifically, we propose a post-hoc interpretability method for extracting predictive high-level features (concepts) from the pretrained model's hidden layer activations. We optimize for features whose existence causes the output predictions to change substantially, i.e., features that generate a high impact. Moreover, we devise several evaluation metrics that can be universally applied. Extensive experiments on real and synthetic tasks demonstrate that our method achieves superior results on predictive impact, usability, and faithfulness compared to the baselines.
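To make the described pipeline concrete, below is a minimal sketch of post-hoc concept extraction from a pretrained classifier's hidden activations, followed by ablation-based impact scoring. The model checkpoint, the choice of layer, the NMF-based concept factorization, and the impact helper are illustrative assumptions for exposition; they are not the paper's exact optimization objective.

import torch
from sklearn.decomposition import NMF
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: any sequence classifier works; this AG-News checkpoint is one choice.
MODEL = "textattack/bert-base-uncased-ag-news"
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

texts = [
    "Stocks rallied after the quarterly earnings report.",
    "The match ended with a dramatic penalty shootout.",
    "Researchers unveiled a faster chip fabrication process.",
    "Peace talks resumed in the capital this week.",
]
batch = tok(texts, padding=True, return_tensors="pt")

with torch.no_grad():
    out = model(**batch, output_hidden_states=True)
    # [CLS] activations from an intermediate layer (layer index is an assumption).
    acts = out.hidden_states[8][:, 0, :]  # shape: (batch, hidden)

# Factorize the (non-negative) activations into K high-level concept directions.
K = 3
nmf = NMF(n_components=K, init="random", random_state=0, max_iter=500)
nmf.fit(torch.relu(acts).numpy())
concepts = torch.tensor(nmf.components_, dtype=acts.dtype)  # (K, hidden)

def impact(concept_vec, acts, head):
    """Predictive impact of one concept: how much the output distribution
    shifts when the concept direction is projected out of the activations."""
    u = concept_vec / concept_vec.norm()
    ablated = acts - (acts @ u).unsqueeze(-1) * u  # remove concept component
    with torch.no_grad():
        p = torch.softmax(head(acts), dim=-1)
        q = torch.softmax(head(ablated), dim=-1)
    return (p - q).abs().sum(-1).mean().item()  # mean total-variation shift

for k in range(K):
    print(f"concept {k}: impact = {impact(concepts[k], acts, model.classifier):.4f}")

Projecting out a concept direction and measuring the resulting shift in the output distribution is one simple way to operationalize "high impact"; the paper's actual objective optimizes concepts for this property directly rather than scoring them after the fact.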

Ruochen Zhao, Shafiq Joty, Yongjie Wang, Tan Wang • 2023

Related benchmarks

Task                           Dataset                            Result                  Rank
Concept Extraction Evaluation  4 classification datasets average  RAcc: 99.66             35
Concept Learning               AG-News                            Training Time: 1.06e+3  21
