
From Atoms to Trees: Building a Structured Feature Forest with Hierarchical Sparse Autoencoders

About

Sparse autoencoders (SAEs) have proven effective for extracting monosemantic features from large language models (LLMs), yet these features are typically identified in isolation. However, broad evidence suggests that LLMs capture the intrinsic structure of natural language, where the phenomenon of "feature splitting" in particular indicates that such structure is hierarchical. To capture this, we propose the Hierarchical Sparse Autoencoder (HSAE), which jointly learns a series of SAEs and the parent-child relationships between their features. HSAE strengthens the alignment between parent and child features through two novel mechanisms: a structural constraint loss and a random feature perturbation mechanism. Extensive experiments across various LLMs and layers demonstrate that HSAE consistently recovers semantically meaningful hierarchies, supported by both qualitative case studies and rigorous quantitative metrics. At the same time, HSAE preserves the reconstruction fidelity and interpretability of standard SAEs across different dictionary sizes. Our work provides a powerful, scalable tool for discovering and analyzing the multi-scale conceptual structures embedded in LLM representations.
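To make the joint-training idea concrete, here is a minimal NumPy sketch of a two-level hierarchy with a structural alignment penalty. All names, dimensions, and the exact form of the penalty are illustrative assumptions, not the paper's implementation; the sketch illustrates only the structural-constraint idea and omits the random feature perturbation mechanism and sparsity penalties.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, n_parent, n_child = 8, 4, 16  # hypothetical sizes

# Hypothetical parameters: one coarse (parent) SAE and one finer (child) SAE.
W_enc_p = rng.normal(size=(d_model, n_parent))
W_dec_p = rng.normal(size=(n_parent, d_model))
W_enc_c = rng.normal(size=(d_model, n_child))
W_dec_c = rng.normal(size=(n_child, d_model))

# Assumed parent-child map: each child feature is assigned to one parent feature.
child_to_parent = rng.integers(0, n_parent, size=n_child)

def relu(x):
    return np.maximum(x, 0.0)

def hsae_losses(x):
    """One forward pass: per-level reconstruction losses plus a structural
    penalty that discourages a child feature from firing while its
    assigned parent feature is silent (one possible alignment term)."""
    a_p = relu(x @ W_enc_p)  # parent (coarse) feature activations
    a_c = relu(x @ W_enc_c)  # child (fine) feature activations
    recon_loss = (np.mean((x - a_p @ W_dec_p) ** 2)
                  + np.mean((x - a_c @ W_dec_c) ** 2))
    # Structural constraint: weight each child activation by how
    # inactive its parent is, so misaligned firing is penalized.
    parent_gate = a_p[:, child_to_parent]  # parent activation per child slot
    struct_loss = np.mean(a_c * np.exp(-parent_gate))
    return recon_loss, struct_loss

x = rng.normal(size=(32, d_model))
recon_loss, struct_loss = hsae_losses(x)
print(recon_loss, struct_loss)
```

In a full training loop, both terms would be minimized jointly by gradient descent, so that the child dictionary both reconstructs activations and stays consistent with the parent level.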

Yifan Luo, Yang Zhan, Jiedong Jiang, Tianyang Liu, Mingrui Wu, Zhennan Zhou, Bin Dong • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Feature Interpretability | LLM Activations | AutoInterp Score: 86.9 | 8 |
| Sparse Reconstruction | LLM Activations | L0: 49.4 | 8 |
| Downstream Utility Evaluation | LLM Activations | Sparse Probing Accuracy: 87.4 | 8 |
| Hierarchical Feature Alignment | LLM Activations | Absorption: 98.3 | 8 |
| Sparse Autoencoding | gemma2-2b layer-13 activations | L0: 100.7 | 6 |
| Sparse Autoencoding | gemma2-2b layer-20 activations | L0 Norm: 50 | 2 |
| Sparse Autoencoding | gemma2-2b layer-6 activations | L0 Norm: 50.1 | 2 |
| Sparse Autoencoding | qwen3-4b layer-18 activations | L0 Norm: 50.2 | 2 |
