
Why Linear Interpretability Works: Invariant Subspaces as a Result of Architectural Constraints

About

Linear probes and sparse autoencoders consistently recover meaningful structure from transformer representations, yet why should such simple methods succeed in deep, nonlinear systems? We show this is not merely an empirical regularity but a consequence of architectural necessity: transformers communicate information through linear interfaces (attention OV circuits, unembedding matrices), and any semantic feature decoded through such an interface must occupy a context-invariant linear subspace. We formalize this as the Invariant Subspace Necessity theorem and derive the Self-Reference Property: tokens directly provide the geometric direction for their associated features, enabling zero-shot identification of semantic structure without labeled data or learned probes. Empirical validation across eight classification tasks and four model families confirms the alignment between class tokens and semantically related instances. Our framework provides a principled architectural explanation for why linear interpretability methods work, unifying linear probes and sparse autoencoders.
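The sketch below illustrates the Self-Reference Property described in the abstract: the embedding of a class token itself is used as the feature direction, and instances are scored by cosine similarity against those directions with no labeled data or trained probe. The model name ("gpt2"), layer index, label set, and prompt are illustrative assumptions, not the authors' experimental setup.

```python
# Minimal sketch of zero-shot classification via the Self-Reference Property,
# assuming a decoder-only Hugging Face model. All specifics (model, layer,
# class names, prompt) are hypothetical placeholders.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "gpt2"  # assumed model choice for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)
model.eval()

def direction_for_token(word: str) -> torch.Tensor:
    """Use the class token's own embedding as its feature direction."""
    ids = tokenizer(" " + word, add_special_tokens=False)["input_ids"]
    vecs = model.get_input_embeddings().weight[ids]   # (n_subtokens, d_model)
    return vecs.mean(dim=0)                           # average multi-token words

def last_token_state(text: str, layer: int = 6) -> torch.Tensor:
    """Hidden state of the final token at an intermediate layer (layer 6 is arbitrary here)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[layer][0, -1]            # (d_model,)

# Zero-shot scoring: cosine similarity between the instance representation
# and each class-token direction, without any learned probe.
classes = ["happy", "sad", "angry"]                   # hypothetical label set
class_dirs = torch.stack([direction_for_token(c) for c in classes])
class_dirs = torch.nn.functional.normalize(class_dirs, dim=-1)

h = torch.nn.functional.normalize(last_token_state("I just won the lottery!"), dim=-1)
scores = class_dirs @ h
print(dict(zip(classes, scores.tolist())))
```

If the paper's claim holds, the highest-scoring class direction should align with the semantic content of the instance, which is what the benchmark tasks below evaluate at larger scale.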

Andres Saurez, Yousung Lee, Dongsoo Har • 2026

Related benchmarks

Task | Dataset | Metric | Result (%) | Rank
Emotion Classification | Emotion | Accuracy | 54.59 | 26
Image Classification | Animals | Accuracy | 94.32 | 20
Classification | Countries | Accuracy | 88.55 | 16
Classification | C. Chars | Accuracy | 61.29 | 16
Classification | Authors | Accuracy | 75.13 | 16
Classification | Langs | Accuracy | 98.15 | 16
Classification | Fruits | Accuracy | 73.64 | 16
Classification | Companies | Accuracy | 84.51 | 16
