Domain-Adaptation through Synthetic Data: Fine-Tuning Large Language Models for German Law

About

Large language models (LLMs) often struggle in specialized domains such as legal reasoning due to limited expert knowledge, resulting in factually incorrect outputs or hallucinations. This paper presents an effective method for adapting advanced LLMs to German legal question answering through a novel synthetic data generation approach. In contrast to costly human-annotated resources or unreliable synthetic alternatives, our approach systematically produces high-quality, diverse, and legally accurate question-answer pairs directly from authoritative German statutes. Using rigorous automated filtering methods and parameter-efficient fine-tuning techniques, we demonstrate that LLMs adapted with our synthetic dataset significantly outperform their baseline counterparts on German legal question answering tasks. Our results highlight the feasibility of using carefully designed synthetic data as a robust alternative to manual annotation in high-stakes, knowledge-intensive domains.
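The paper's concrete filtering pipeline is not reproduced on this page. As a rough illustration of what automated filtering of synthetic question-answer pairs can look like, here is a minimal Python sketch; the specific rules, thresholds, and the function name are illustrative assumptions, not the authors' actual method:

```python
# Hypothetical sketch of automated filtering for synthetic legal QA pairs.
# Rules and thresholds below are illustrative assumptions, not the
# paper's actual pipeline.

def filter_qa_pairs(pairs, min_answer_len=20):
    """Keep only deduplicated pairs with a substantive, statute-citing answer."""
    seen = set()
    kept = []
    for question, answer in pairs:
        key = " ".join(question.lower().split())  # normalize case/whitespace
        if key in seen:                           # drop duplicate questions
            continue
        if len(answer) < min_answer_len:          # drop too-short answers
            continue
        if "§" not in answer:                     # require a statute citation
            continue
        seen.add(key)
        kept.append((question, answer))
    return kept
```

A real pipeline would likely add model-based checks (e.g., an LLM judge for legal accuracy), but even simple rule-based passes like this remove many degenerate generations before fine-tuning.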

Ali Hamza Bashir, Muhammad Rehan Khalid, Kostadin Cvejoski, Jana Birr, Jule Berghaus, Armin Berger, Sandra Halscheidt, Christian Temath, Rafet Sifa, David Berghaus • 2026

Related benchmarks

Task                               | Dataset        | Metric                   | Result | Rank
Open Question Answering            | BGB (test)     | Factual Correctness (%)  | 76.4   | 8
Multiple-choice Question Answering | LegalMC4 (test)| Exact Accuracy           | 71.2   | 8
Multiple-choice Question Answering | BGB (test)     | Exact Accuracy           | 75.1   | 8
Open Question Answering            | LegalMC4 (test)| LLM Factual Correctness  | 55.4   | 8
