
Prototype Transformer: Towards Language Model Architectures Interpretable by Design

About

State-of-the-art language models (LMs) surpass the vast majority of humans in certain domains, yet their reasoning remains largely opaque, undermining trust in their output. Although autoregressive LMs can output explicit reasoning, their true reasoning process is hidden, which introduces risks such as deception and hallucination. In this work, we introduce the Prototype Transformer (ProtoT), an autoregressive LM architecture based on prototypes (parameter vectors), proposed as an alternative to standard self-attention-based transformers. ProtoT works through two-way communication between the input sequence and the prototypes, and we show that this leads to the prototypes automatically capturing nameable concepts (e.g. "woman") during training. These prototypes make it possible to interpret the model's reasoning and to apply targeted edits to its behavior. Furthermore, by design, the prototypes create communication channels that aggregate contextual information at different time scales, aiding interpretability. In terms of computational cost, ProtoT scales linearly with sequence length, in contrast to the quadratic scaling of state-of-the-art self-attention transformers. Compared to baselines, ProtoT scales well with model and data size, and performs well on text generation and downstream tasks (GLUE). ProtoT exhibits robustness to input perturbations on par with or better than some baselines, but differs from them by providing interpretable pathways that show how robustness and sensitivity arise. Reaching close to the performance of state-of-the-art architectures, ProtoT paves the way to creating well-performing autoregressive LMs that are interpretable by design.
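The abstract does not spell out the layer equations, so the following is only a minimal sketch of how a prototype-mediated mixing layer can realize two-way communication between tokens and learned prototype vectors at linear cost in sequence length. The module name, the causal prefix aggregation via cumulative sums, and all tensor names are illustrative assumptions, not the actual ProtoT layer.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeLayerSketch(nn.Module):
    """Illustrative prototype-mediated mixing layer (an assumption, not the paper's layer).

    Tokens never attend to each other directly. Instead, P learned prototype
    vectors aggregate the causal prefix of the sequence ("write") and are then
    read back by each token ("read"), giving O(L * P * d) cost per layer
    instead of the O(L^2 * d) cost of full self-attention.
    """

    def __init__(self, d_model: int, n_prototypes: int):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, d_model))
        self.to_v = nn.Linear(d_model, d_model)
        self.read_q = nn.Linear(d_model, d_model)
        self.out = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        # --- write: each prototype summarizes every causal prefix of the sequence ---
        scores = torch.einsum("bld,pd->blp", x, self.prototypes) * self.scale
        w = scores.exp()                      # unnormalized weights (a stable version would subtract a running max)
        v = self.to_v(x)                      # (B, L, D)
        num = torch.cumsum(w.unsqueeze(-1) * v.unsqueeze(2), dim=1)   # (B, L, P, D)
        den = torch.cumsum(w, dim=1).unsqueeze(-1) + 1e-6             # (B, L, P, 1)
        proto_state = num / den               # prototype summaries of tokens <= t

        # --- read: each token queries the prototype summaries at its own position ---
        q = self.read_q(x)
        read_scores = torch.einsum("bld,blpd->blp", q, proto_state) * self.scale
        read_w = F.softmax(read_scores, dim=-1)
        mixed = torch.einsum("blp,blpd->bld", read_w, proto_state)
        return self.out(mixed)

# Hypothetical usage: a drop-in token-mixing block.
# layer = PrototypeLayerSketch(d_model=256, n_prototypes=32)
# y = layer(torch.randn(2, 128, 256))   # (batch=2, seq_len=128, d_model=256)
```

Because all pairwise interaction is routed through a fixed number of prototypes, cost grows linearly with sequence length, and each prototype acts as a named slot whose aggregated content can be inspected or edited, which is the kind of interpretability pathway the abstract describes.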

Yordan Yordanov, Matteo Forasassi, Bayar Menzat, Ruizhi Wang, Chang Qi, Markus Kaltenberger, Amine M'Charrak, Tommaso Salvatori, Thomas Lukasiewicz • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Language Modeling | FineWeb-Edu (test) | Perplexity (Test) | 29.5 | 49
Robustness Evaluation | Lexical Variation (abbr.) | Jensen-Shannon Divergence | 0.0498 | 8
Open-ended Text Generation | Chatbot Arena-inspired qualitative prompts (val) | ELO | 1.02e+3 | 4
Robustness Evaluation | Lexical Variation (punctuation) | Jensen-Shannon Divergence | 0.3982 | 4
Robustness Evaluation | Lexical Variation (spelling) | Jensen-Shannon Divergence | 0.026 | 4
Robustness Evaluation | Lexical Variation (synonym) | Jensen-Shannon Divergence | 0.1132 | 4
Robustness Evaluation | Lexical Variation (typos) | Jensen-Shannon Divergence | 0.2074 | 4
Natural Language Understanding | GLUE downstream fine-tuning | CoLA Score | 27.7 | 4
Robustness Evaluation | Lexical Variation (contraction) | Jensen-Shannon Divergence | 0.0823 | 4
