Prototype Transformer: Towards Language Model Architectures Interpretable by Design
About
While state-of-the-art language models (LMs) surpass the vast majority of humans in certain domains, their reasoning remains largely opaque, undermining trust in their output. Even when autoregressive LMs produce explicit reasoning, the process that actually generates it stays hidden, which introduces risks such as deception and hallucination. In this work, we introduce the Prototype Transformer (ProtoT), an autoregressive LM architecture based on prototypes (parameter vectors), proposed as an alternative to standard self-attention-based transformers. ProtoT works by means of two-way communication between the input sequence and the prototypes, and we show that this leads the prototypes to automatically capture nameable concepts (e.g. "woman") during training. The prototypes make it possible to interpret the model's reasoning and to make targeted edits to its behavior. Furthermore, by design, the prototypes create communication channels that aggregate contextual information at different time scales, aiding interpretability. In terms of computational cost, ProtoT scales linearly with sequence length, versus the quadratic scaling of state-of-the-art self-attention transformers. Compared to baselines, ProtoT scales well with model and data size, and performs well on text generation and downstream tasks (GLUE). ProtoT is as robust to input perturbations as some baselines, or more so, but differs from them by providing interpretable pathways that show how robustness and sensitivity arise. Approaching the performance of state-of-the-art architectures, ProtoT paves the way to well-performing autoregressive LMs that are interpretable by design.
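The abstract does not spell out the two-way communication mechanism, but a minimal NumPy sketch can illustrate why such a design scales linearly in sequence length: a fixed, small set of K prototype vectors first aggregates ("reads") from all T tokens, then each token reads back ("writes") from the updated prototypes. The `prototype_layer` function and its dot-product attention form below are assumptions for illustration, not the paper's actual layer; causal masking and learned projections are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def prototype_layer(tokens, prototypes):
    """One hypothetical two-way communication step (illustrative only).

    tokens:     (T, d) token representations for a sequence of length T
    prototypes: (K, d) learned prototype vectors, with K fixed and K << T

    Both attention maps are (K, T) or (T, K), so the cost is O(T*K*d),
    i.e. linear in T, unlike the O(T^2*d) of token-to-token self-attention.
    """
    d = tokens.shape[-1]
    # Read: each prototype aggregates contextual information from all tokens.
    read = softmax(prototypes @ tokens.T / np.sqrt(d))   # (K, T)
    protos_ctx = prototypes + read @ tokens              # (K, d)
    # Write: each token reads back from the contextualized prototypes.
    write = softmax(tokens @ protos_ctx.T / np.sqrt(d))  # (T, K)
    return tokens + write @ protos_ctx                   # (T, d)

rng = np.random.default_rng(0)
T, K, d = 16, 4, 8
out = prototype_layer(rng.normal(size=(T, d)), rng.normal(size=(K, d)))
print(out.shape)  # (16, 8)
```

Because the prototypes persist across positions, the read/write maps also give a direct handle for interpretation: inspecting which tokens a prototype reads from is one plausible way the "nameable concepts" described above could be surfaced.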
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | FineWeb-Edu (test) | Perplexity (Test) | 29.5 | 49 |
| Robustness Evaluation | Lexical Variation (abbr.) | Jensen-Shannon Divergence | 0.0498 | 8 |
| Open-ended Text Generation | Chatbot Arena inspired qualitative prompts (val) | ELO | 1.02e+3 | 4 |
| Robustness Evaluation | Lexical Variation (punctuation) | Jensen-Shannon Divergence | 0.3982 | 4 |
| Robustness Evaluation | Lexical Variation (spelling) | Jensen-Shannon Divergence | 0.026 | 4 |
| Robustness Evaluation | Lexical Variation (synonym) | Jensen-Shannon Divergence | 0.1132 | 4 |
| Robustness Evaluation | Lexical Variation (typos) | Jensen-Shannon Divergence | 0.2074 | 4 |
| Natural Language Understanding | GLUE downstream fine-tuning | CoLA Score | 27.7 | 4 |
| Robustness Evaluation | Lexical Variation (contraction) | Jensen-Shannon Divergence | 0.0823 | 4 |