
Information Flow Routes: Automatically Interpreting Language Models at Scale

About

Information flows through routes inside the network via mechanisms implemented in the model. These routes can be represented as graphs where nodes correspond to token representations and edges to operations inside the network. We automatically build these graphs in a top-down manner, keeping only the most important nodes and edges for each prediction. In contrast to existing workflows relying on activation patching, we do this through attribution: this allows us to efficiently uncover existing circuits with just a single forward pass. Additionally, the applicability of our method extends far beyond patching: we do not need a human to carefully design prediction templates, and we can extract information flow routes for any prediction (not just those matching the allowed templates). As a result, we can talk about model behavior in general, for specific types of predictions, or across different domains. We experiment with Llama 2 and show that some attention heads are important overall, e.g. previous token heads and subword merging heads. Next, we find similarities in Llama 2's behavior when handling tokens of the same part of speech. Finally, we show that some model components can be specialized in domains such as coding or multilingual texts.
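The top-down graph construction described above can be sketched in a few lines: starting from the prediction node, walk backward through the network's edges and keep only those whose importance score clears a threshold. This is a minimal illustration, not the paper's implementation; the edge list, node names, and threshold below are hypothetical stand-ins for the attribution scores the method computes in a single forward pass.

```python
# Minimal sketch of top-down route extraction. The per-edge attribution
# scores here are toy values; in the actual method they come from
# attributing each operation's contribution during one forward pass.

from collections import deque

def extract_route(edges, output_node, tau=0.1):
    """Keep only nodes and edges reachable from `output_node` via edges
    whose attribution score is at least `tau`, walking top-down
    (from the prediction back toward the input tokens)."""
    parents = {}  # node -> list of (parent_node, score)
    for src, dst, score in edges:
        parents.setdefault(dst, []).append((src, score))

    kept_nodes, kept_edges = {output_node}, []
    queue = deque([output_node])
    while queue:
        node = queue.popleft()
        for src, score in parents.get(node, []):
            if score >= tau:  # prune unimportant edges
                kept_edges.append((src, node, score))
                if src not in kept_nodes:
                    kept_nodes.add(src)
                    queue.append(src)
    return kept_nodes, kept_edges

# Toy graph: (source, target, attribution score); node names are invented.
edges = [
    ("tok0@L0", "head1@L1", 0.6),
    ("tok1@L0", "head1@L1", 0.05),  # below threshold, gets pruned
    ("head1@L1", "out", 0.8),
    ("tok1@L0", "out", 0.3),
]
nodes, route = extract_route(edges, "out", tau=0.1)
```

Because the walk starts at the output and only follows edges that survive the threshold, components that never contribute to the kept routes are dropped automatically, which is what makes per-prediction graphs tractable at scale.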

Javier Ferrando, Elena Voita • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Component-level Attribution | Known 1000 | Discrepancy Score 0.17 | 40 |
| Token Attribution Faithfulness | Known 1000 | Distance 15.22 | 40 |
| Component-level Attribution | IOI | Dissimilarity (dis.) 0.02 | 32 |
| Token Attribution Faithfulness | SQuAD v2.0 | Disagreement 43.74 | 30 |
| Sentiment Analysis | IMDB | Dis. Score 91.63 | 10 |
| Reading Comprehension | SQuAD v2.0 | Disambiguation Score 50.16 | 10 |
| Factual Knowledge | Known 1000 | Disagreement Rate 16.15 | 10 |
| Token Attribution Faithfulness | IMDB | Distance 91.62 | 10 |
