From Atoms to Trees: Building a Structured Feature Forest with Hierarchical Sparse Autoencoders
About
Sparse autoencoders (SAEs) have proven effective for extracting monosemantic features from large language models (LLMs), yet these features are typically identified in isolation. However, broad evidence suggests that LLMs capture the intrinsic structure of natural language, and the phenomenon of "feature splitting" in particular indicates that this structure is hierarchical. To capture it, we propose the Hierarchical Sparse Autoencoder (HSAE), which jointly learns a series of SAEs and the parent-child relationships between their features. HSAE strengthens the alignment between parent and child features through two novel mechanisms: a structural constraint loss and a random feature perturbation mechanism. Extensive experiments across various LLMs and layers demonstrate that HSAE consistently recovers semantically meaningful hierarchies, supported by both qualitative case studies and rigorous quantitative metrics. At the same time, HSAE preserves the reconstruction fidelity and interpretability of standard SAEs across different dictionary sizes. Our work provides a powerful, scalable tool for discovering and analyzing the multi-scale conceptual structures embedded in LLM representations.
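The joint training objective described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the parameter names, the fixed parent assignment, the cosine form of the structural constraint, and the perturbation scale are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_parent, n_child = 16, 4, 8

# Two levels of a hierarchy of ReLU SAEs over the same activations
# (illustrative shapes; a real HSAE would train many more features).
W_enc_p = rng.normal(size=(d_model, n_parent))
W_dec_p = rng.normal(size=(n_parent, d_model))
W_enc_c = rng.normal(size=(d_model, n_child))
W_dec_c = rng.normal(size=(n_child, d_model))

# Assumed parent-child assignment: child j belongs to parent j // 2.
parent_of = np.arange(n_child) // 2

def sae(x, W_enc, W_dec):
    """Standard ReLU SAE: sparse code and reconstruction."""
    z = np.maximum(x @ W_enc, 0.0)
    return z, z @ W_dec

def perturb(z, scale=0.1):
    """Random feature perturbation (assumed form): jitter active
    features so the learned hierarchy is robust to small changes."""
    return z + scale * rng.normal(size=z.shape) * (z > 0)

def cosine(u, v):
    return (u * v).sum(-1) / (np.linalg.norm(u, axis=-1)
                              * np.linalg.norm(v, axis=-1))

def hsae_loss(x, l1=1e-3, align=1e-2):
    z_p, xh_p = sae(x, W_enc_p, W_dec_p)
    z_c, xh_c = sae(x, W_enc_c, W_dec_c)
    recon = ((x - xh_p) ** 2).mean() + ((x - xh_c) ** 2).mean()
    sparsity = l1 * (np.abs(z_p).mean() + np.abs(z_c).mean())
    # Structural constraint loss (assumed form): pull each child
    # decoder direction toward its assigned parent's direction.
    structure = align * (1.0 - cosine(W_dec_c, W_dec_p[parent_of])).mean()
    return recon + sparsity + structure

x = rng.normal(size=(32, d_model))
loss = hsae_loss(x)
```

In this sketch the structural term is what couples the per-level SAEs; with `align=0` the two levels would train independently, recovering two ordinary SAEs.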
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Feature Interpretability | LLM Activations | AutoInterp Score | 86.9 | 8 |
| Sparse Reconstruction | LLM Activations | L0 | 49.4 | 8 |
| Downstream Utility Evaluation | LLM Activations | Sparse Probing Accuracy | 87.4 | 8 |
| Hierarchical Feature Alignment | LLM Activations | Absorption | 98.3 | 8 |
| Sparse Autoencoding | gemma2-2b layer-13 activations | L0 | 100.7 | 6 |
| Sparse Autoencoding | gemma2-2b layer-20 activations | L0 Norm | 50 | 2 |
| Sparse Autoencoding | gemma2-2b layer-6 activations | L0 Norm | 50.1 | 2 |
| Sparse Autoencoding | qwen3-4b layer-18 activations | L0 Norm | 50.2 | 2 |