
Knowledge Graph-Assisted LLM Post-Training for Enhanced Legal Reasoning

About

LLM post-training has primarily relied on large text corpora and human feedback, without capturing the structure of domain knowledge. As a result, models struggle with complex reasoning tasks, especially in high-stakes professional domains. In law, reasoning requires a deep understanding of the relations between legal concepts, a key component missing from current LLM post-training. In this paper, we propose a knowledge graph (KG)-assisted approach for enhancing LLMs' legal reasoning capability that is generalizable to other high-stakes domains. We model key legal concepts following the IRAC (Issue, Rule, Analysis, and Conclusion) framework and construct a KG from 12K legal cases. We then produce training data using our IRAC KG and conduct both Supervised Fine-Tuning (SFT) and Direct Preference Optimization (DPO) with three state-of-the-art (SOTA) LLMs (30B, 49B, and 70B), varying in architecture and base model family. Our post-trained models obtained better average performance than baselines on 4/5 diverse legal benchmarks (14 tasks). In particular, our 70B DPO model achieved the best score on 4/6 reasoning tasks among baselines and a 141B SOTA legal LLM, demonstrating the effectiveness of our KG for enhancing LLMs' legal reasoning capability.
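The pipeline described in the abstract (IRAC-structured KG → SFT examples → DPO preference pairs) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the class names, relation labels, and prompt format are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical sketch of a KG-assisted training-data pipeline:
# each legal case is modeled as IRAC components linked by typed
# edges, and training examples are derived from those links.

@dataclass
class IRACCase:
    case_id: str
    issue: str
    rule: str
    analysis: str
    conclusion: str

class IRACGraph:
    def __init__(self):
        # (head, relation, tail) triples; relation names are illustrative.
        self.edges = []

    def add_case(self, case: IRACCase):
        # Link the IRAC components of one case in reasoning order.
        self.edges += [
            (case.issue, "governed_by", case.rule),
            (case.rule, "applied_in", case.analysis),
            (case.analysis, "supports", case.conclusion),
        ]

    def sft_example(self, case: IRACCase) -> dict:
        # SFT pair: prompt gives issue and rule; target walks the
        # graph from analysis to conclusion.
        return {
            "prompt": (f"Issue: {case.issue}\nRelevant rule: {case.rule}\n"
                       "Apply the rule and state a conclusion."),
            "completion": (f"Analysis: {case.analysis}\n"
                           f"Conclusion: {case.conclusion}"),
        }

    def dpo_pair(self, case: IRACCase, distractor: IRACCase) -> dict:
        # DPO pair: the graph-consistent answer is "chosen"; an
        # answer lifted from an unrelated case is "rejected".
        ex = self.sft_example(case)
        return {
            "prompt": ex["prompt"],
            "chosen": ex["completion"],
            "rejected": (f"Analysis: {distractor.analysis}\n"
                         f"Conclusion: {distractor.conclusion}"),
        }
```

A dataset builder would iterate this over all 12K cases, sampling distractors from structurally unrelated subgraphs so that rejected answers are fluent but legally inconsistent.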

Dezhao Song, Guglielmo Bonifazi, Frank Schilder, Jonathan Richard Schwarz • 2026

Related benchmarks

Task                                         | Dataset        | Metric            | Result | Rank
Legal Information Extraction and Entailment  | COLIEE         | Average Score     | 61.1   | 10
Legal Language Understanding                 | LexGLUE        | LexGLUE Average   | 67.2   | 10
Legal Reasoning                              | LegalBench     | Balanced Accuracy | 79.3   | 10
Question Answering                           | SuperGPQA Law  | Accuracy          | 43.8   | 10
Stance Detection                             | delta-Stance   | Macro-F1 (Avg)    | 50.1   | 10
