Inseq: An Interpretability Toolkit for Sequence Generation Models
About
Past work in natural language processing interpretability focused mainly on popular classification tasks while largely overlooking generation settings, partly due to a lack of dedicated tools. In this work, we introduce Inseq, a Python library to democratize access to interpretability analyses of sequence generation models. Inseq enables intuitive and optimized extraction of models' internal information and feature importance scores for popular decoder-only and encoder-decoder Transformers architectures. We showcase its potential by adopting it to highlight gender biases in machine translation models and locate factual knowledge inside GPT-2. Thanks to its extensible interface supporting cutting-edge techniques such as contrastive feature attribution, Inseq can drive future advances in explainable natural language generation, centralizing good practices and enabling fair and reproducible model evaluations.
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Cell-level attribution | ToTTo | Precision | 42.7 | 6 |
| Cell-level attribution | FeTaQA | Precision | 0.565 | 6 |
| Cell-level attribution | AITQA (gold set) | Precision | 10.99 | 6 |
| Cell-level attribution | ToTTo (gold set) | Precision | 16.85 | 6 |
| Cell-level attribution | AITQA | Precision | 19.2 | 6 |
| Column-level attribution | ToTTo | Precision | 73.1 | 6 |
| Column-level attribution | FeTaQA | Precision (%) | 82.6 | 6 |
| Row-level attribution | ToTTo | Precision | 37.5 | 6 |
| Row-level attribution | FeTaQA | Precision | 56.4 | 6 |
| Row-level attribution | AITQA | Precision | 31.2 | 6 |