Truth as a Trajectory: What Internal Representations Reveal About Large Language Model Reasoning

About

Existing explainability methods for Large Language Models (LLMs) typically treat hidden states as static points in activation space, assuming that correct and incorrect inferences can be separated using representations from an individual layer. However, these activations are saturated with polysemantic features, leading linear probes to learn surface-level lexical patterns rather than underlying reasoning structures. We introduce Truth as a Trajectory (TaT), which models transformer inference as an unfolded trajectory of iterative refinements, shifting analysis from static activations to layer-wise geometric displacement. By analyzing the displacement of representations across layers, TaT uncovers geometric invariants that distinguish valid reasoning from spurious behavior. We evaluate TaT across dense and Mixture-of-Experts (MoE) architectures on benchmarks spanning commonsense reasoning, question answering, and toxicity detection. Using only the changes in activations across layers, without access to the activations themselves, we show that TaT effectively mitigates reliance on static lexical confounds, outperforms conventional probing, and establishes trajectory analysis as a complementary perspective on LLM explainability.
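The core idea — probing layer-to-layer displacements instead of raw activations — can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the paper's method or data: the `displacement_features` helper, the drift-based toy generator, and the nearest-centroid probe are all assumptions made for the example.

```python
import numpy as np

def displacement_features(hidden_states):
    """Trajectory-style feature: consecutive layer-to-layer differences,
    discarding the static activations themselves."""
    return np.diff(hidden_states, axis=0).reshape(-1)

# Synthetic illustration (not the paper's data or model): "valid" examples
# drift along a consistent direction across layers; "spurious" ones do not.
rng = np.random.default_rng(0)
n, n_layers, d = 200, 6, 8
drift = rng.normal(size=d)

feats, labels = [], []
for i in range(n):
    y = i % 2  # 1 = valid, 0 = spurious
    steps = rng.normal(scale=0.5, size=(n_layers, d)) + (drift if y else 0.0)
    hidden = np.cumsum(steps, axis=0)  # residual-stream-like accumulation
    feats.append(displacement_features(hidden))
    labels.append(y)
feats, labels = np.array(feats), np.array(labels)

# Minimal nearest-centroid "probe" over displacement space.
c0 = feats[labels == 0].mean(axis=0)
c1 = feats[labels == 1].mean(axis=0)
pred = (np.linalg.norm(feats - c1, axis=1)
        < np.linalg.norm(feats - c0, axis=1)).astype(int)
accuracy = (pred == labels).mean()
print(f"accuracy: {accuracy:.2f}")
```

Because the two classes differ only in the *direction* of their per-layer updates, a probe over displacements separates them even though any single layer's raw activations are dominated by noise.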

Hamed Damirchi, Ignacio Meza De la Jara, Ehsan Abbasnejad, Afshar Shamsi, Zhen Zhang, Javen Shi • 2026

Related benchmarks

Task                                    Dataset                Result           Rank
Question Answering                      ARC-E                  Accuracy 94.28   416
Story Completion                        StoryCloze             Accuracy 87.28   73
Multiple-choice Question Answering      ARC Easy (test)        Accuracy 89.1    68
Question Answering                      CommonsenseQA (test)   Accuracy 77.56   60
Story Cloze Test                        Story Cloze (test)     Accuracy 95.75   56
Toxicity Detection                      Toxigen                Score 84.23      53
Multiple-choice Question Answering      ARC Challenge (test)   Accuracy 82.17   44
Multiple-choice Question Answering      OpenBookQA (test)      Accuracy 90.8    39
Boolean Question Answering              BoolQ (test)           --               38
Social Interaction Question Answering   SocialIQA (test)       Accuracy 75.49   18
(Showing 10 of 15 rows.)
