
Patchscopes: A Unifying Framework for Inspecting Hidden Representations of Language Models

About

Understanding the internal representations of large language models (LLMs) can help explain models' behavior and verify their alignment with human values. Given the capabilities of LLMs in generating human-understandable text, we propose leveraging the model itself to explain its internal representations in natural language. We introduce a framework called Patchscopes and show how it can be used to answer a wide range of questions about an LLM's computation. We show that many prior interpretability methods based on projecting representations into the vocabulary space and intervening on the LLM computation can be viewed as instances of this framework. Moreover, several of their shortcomings, such as failure to inspect early layers or lack of expressivity, can be mitigated by Patchscopes. Beyond unifying prior inspection techniques, Patchscopes also opens up new possibilities, such as using a more capable model to explain the representations of a smaller model, and correcting multi-hop reasoning errors.

Asma Ghandeharioun, Avi Caciularu, Adam Pearce, Lucas Dixon, Mor Geva • 2024
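
As a rough illustration of the patching mechanism described above, the sketch below caches a hidden representation from a source prompt and overwrites a placeholder position in a separate inspection prompt before generation, so the model's continuation verbalizes the cached state. It uses GPT-2 via Hugging Face transformers; the prompts, layer indices, placeholder token, and the identity mapping f(h) = h are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal Patchscope-style sketch, assuming GPT-2 via Hugging Face
# transformers. Prompts, layer indices, and the identity mapping f(h) = h
# are illustrative choices, not the paper's exact setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

# 1) Source pass: cache the hidden state h at (layer, position).
source_prompt = "The Eiffel Tower is located in the city of"
source_layer, source_pos = 6, -1  # illustrative choices
with torch.no_grad():
    out = model(**tok(source_prompt, return_tensors="pt"),
                output_hidden_states=True)
# hidden_states[i + 1] is the output of transformer block i
h = out.hidden_states[source_layer + 1][0, source_pos]

# 2) Target pass: run an inspection prompt and overwrite the representation
#    of the final placeholder token "x" with h before generating.
target_prompt = "Syria: country, Leonardo DiCaprio: actor, x"  # illustrative
target_inputs = tok(target_prompt, return_tensors="pt")
prompt_len = target_inputs["input_ids"].shape[1]
target_layer, target_pos = 6, -1

def patch_hook(module, inputs, output):
    hidden = output[0]  # residual stream: (batch, seq, d_model)
    if hidden.shape[1] == prompt_len:  # patch only the initial prompt pass
        hidden[0, target_pos] = h      # identity mapping: f(h) = h
    return output

handle = model.transformer.h[target_layer].register_forward_hook(patch_hook)
with torch.no_grad():
    gen = model.generate(**target_inputs, max_new_tokens=8)
handle.remove()

print(tok.decode(gen[0][prompt_len:]))  # continuation decodes h in words
```

The same pattern generalizes: the source and target models, layers, and positions can all differ, which is how a larger model can be used to inspect a smaller one.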

Related benchmarks

Task                     Dataset                                       Metric                   Result  Rank
Age                      Patchscopes (test)                            SR                       67.24   32
Culture                  Patchscopes (test)                            SR                       45.69   32
Age                      Patchscopes few-shot 1.0 (test)               SR                       0.561   32
Culture                  Patchscopes few-shot 1.0 (test)               SR                       14.52   32
Gender                   Patchscopes few-shot 1.0 (test)               SR                       34.91   32
Color                    Patchscopes few-shot 1.0 (test)               SR                       18.41   32
Color                    Patchscopes (test)                            SR                       48.2    32
Gender                   Patchscopes (test)                            SR                       34.78   32
Compositional Reasoning  Compositional Reasoning Dataset               Correction Score (C)     1.2     8
Compositional Reasoning  Compositional Reasoning Correction Input Ic   Event Probability Ratio  57.3    8
