
HUKUKBERT: Domain-Specific Language Model for Turkish Law

About

Recent advances in natural language processing (NLP) have increasingly enabled LegalTech applications, yet studies specific to Turkish law remain limited by the scarcity of domain-specific data and models. Although extensive models like LEGAL-BERT have been developed for English legal texts, the Turkish legal domain lacks a high-volume, domain-specific counterpart. In this paper, we introduce HukukBERT, the most comprehensive legal language model for Turkish, trained on an 18 GB cleaned legal corpus using a hybrid Domain-Adaptive Pre-Training (DAPT) methodology that integrates Whole-Word Masking, Token Span Masking, Word Span Masking, and targeted Keyword Masking. We systematically compare our 48K WordPiece tokenizer and DAPT approach against general-purpose and existing domain-specific Turkish models. Evaluated on a novel Legal Cloze Test benchmark -- a masked legal term prediction task designed for Turkish court decisions -- HukukBERT achieves state-of-the-art performance with 84.40% Top-1 accuracy, substantially outperforming existing models. Furthermore, we evaluate HukukBERT on the downstream task of structural segmentation of official Turkish court decisions, where it achieves a 92.8% document pass rate, establishing a new state of the art. We release HukukBERT to support future research on Turkish legal NLP tasks, including named entity recognition, judgment prediction, and legal document classification.
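To illustrate the Whole-Word Masking component of the hybrid DAPT recipe, the sketch below masks entire words in a WordPiece sequence rather than individual sub-word pieces. It assumes the standard WordPiece convention in which word-internal pieces carry a leading "##" marker; the function name and masking rate are illustrative and not taken from the paper.

```python
import random

MASK = "[MASK]"

def whole_word_mask(tokens, mask_prob=0.15, rng=None):
    """Whole-Word Masking over a WordPiece token sequence (sketch).

    A word is a piece without a leading "##" plus any "##" pieces that
    follow it. When a word is selected for masking, *all* of its pieces
    are replaced by [MASK], so the model must predict the full word.
    Returns (masked_tokens, labels), where labels holds the original
    piece at masked positions and None elsewhere.
    """
    rng = rng or random.Random(0)

    # Group token indices into whole words using the "##" markers.
    words = []
    for i, tok in enumerate(tokens):
        if tok.startswith("##") and words:
            words[-1].append(i)  # continuation piece of current word
        else:
            words.append([i])    # start of a new word

    masked = list(tokens)
    labels = [None] * len(tokens)
    for word in words:
        if rng.random() < mask_prob:
            for i in word:       # mask every piece of the chosen word
                labels[i] = masked[i]
                masked[i] = MASK
    return masked, labels

# Hypothetical Turkish legal example: "mahkemesi" split as mahkeme + ##si.
tokens = ["mahkeme", "##si", "karar", "verdi"]
masked, labels = whole_word_mask(tokens, mask_prob=1.0)
# With mask_prob=1.0 every word is masked, pieces included:
# masked -> ["[MASK]", "[MASK]", "[MASK]", "[MASK]"]
```

The key property, as opposed to plain token-level masking, is that "mahkeme" and "##si" are always masked together, which prevents the model from trivially completing a word from its own visible pieces.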

Mehmet Utku Öztürk, Tansu Türkoğlu, Buse Buz-Yalug • 2026

Related benchmarks

Task                          Dataset                                   Result                     Rank
Cloze Test                    Legal Cloze Test                          Top-1 Accuracy: 84.4       7
Court Decision Segmentation   Court Decision Segmentation Dataset v12   Document Pass Rate: 92.8   3
