LaTIM: Measuring Latent Token-to-Token Interactions in Mamba Models
About
State space models (SSMs), such as Mamba, have emerged as an efficient alternative to transformers for long-context sequence modeling. However, despite their growing adoption, SSMs lack the interpretability tools that have been crucial for understanding and improving attention-based architectures. While recent efforts provide insights into Mamba's internal mechanisms, they do not explicitly decompose token-wise contributions, leaving gaps in understanding how Mamba selectively processes sequences across layers. In this work, we introduce LaTIM, a novel token-level decomposition method for both Mamba-1 and Mamba-2 that enables fine-grained interpretability. We extensively evaluate our method across diverse tasks, including machine translation, copying, and retrieval-based generation, demonstrating its effectiveness in revealing Mamba's token-to-token interaction patterns.
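The decomposition idea can be illustrated on a toy diagonal linear SSM. For a recurrence h_t = A_t ⊙ h_{t-1} + B_t x_t with readout y_t = ⟨C_t, h_t⟩, the output at step t unrolls exactly into per-source-token contributions, y_t = Σ_{s≤t} ⟨C_t, (∏_{k=s+1}^t A_k) ⊙ B_s⟩ x_s. The sketch below is a hypothetical minimal example of this unrolling, not the LaTIM implementation; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N = 6, 4                               # sequence length, state size
x = rng.normal(size=T)                    # scalar input channel
A = rng.uniform(0.5, 0.99, size=(T, N))   # diagonal (elementwise) transitions
B = rng.normal(size=(T, N))
C = rng.normal(size=(T, N))

# Reference: run the recurrence directly.
h = np.zeros(N)
y_ref = np.zeros(T)
for t in range(T):
    h = A[t] * h + B[t] * x[t]
    y_ref[t] = C[t] @ h

# Decomposition: contribution of source token s to output token t.
contrib = np.zeros((T, T))
for t in range(T):
    for s in range(t + 1):
        decay = np.ones(N)
        for k in range(s + 1, t + 1):
            decay *= A[k]                 # cumulative elementwise decay from s to t
        contrib[t, s] = C[t] @ (decay * B[s]) * x[s]

# Row sums reconstruct the output exactly, so contrib acts like an
# attention-style token-to-token interaction map for the SSM.
assert np.allclose(contrib.sum(axis=1), y_ref)
```

The lower-triangular matrix `contrib` plays the role of an attention map: entry (t, s) tells how much token s contributed to the output at position t, which is the kind of token-to-token interaction pattern the method aims to expose.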
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Word Alignment | RWTH Gold Alignment de-en (test) | AER 0.44 | 31 |
| Token Alignment | IWSLT DE→EN 2017 (test) | AER 0.43 | 22 |
| Token Alignment | IWSLT Fr-En 2017 (test) | AER 35 | 22 |
| Copying | Copying task | AUC 98 | 11 |