LEGAL-BERT: The Muppets straight out of Law School

About

BERT has achieved impressive performance in several NLP tasks. However, there has been limited investigation of guidelines for adapting it to specialised domains. Here we focus on the legal domain, where we explore several approaches for applying BERT models to downstream legal tasks, evaluating on multiple datasets. Our findings indicate that the previous guidelines for pre-training and fine-tuning, often blindly followed, do not always generalize well in the legal domain. We therefore propose a systematic investigation of the available strategies when applying BERT in specialised domains: (a) use the original BERT out of the box, (b) adapt BERT by additional pre-training on domain-specific corpora, or (c) pre-train BERT from scratch on domain-specific corpora. We also propose a broader hyper-parameter search space for fine-tuning on downstream tasks, and we release LEGAL-BERT, a family of BERT models intended to assist legal NLP research, computational law, and legal technology applications.
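To make the three strategies concrete, here is a minimal sketch using the Hugging Face transformers library. The training loop, corpus, and exact hyper-parameters are omitted; the only name taken from the release itself is the nlpaueb/legal-bert-base-uncased checkpoint published by the authors on the Hugging Face Hub, and everything else is standard transformers API.

```python
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    BertConfig,
    BertForMaskedLM,
)

# (a) Use the original BERT out of the box: load the generic checkpoint
# and fine-tune it directly on the downstream legal task.
generic = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# (b) Adapt BERT by additional pre-training: start from the same generic
# checkpoint and continue masked-language-model training on legal corpora
# before fine-tuning (the MLM training loop is omitted here).
adapted = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# (c) Pre-train from scratch: a freshly initialized model (and, in the
# paper, a vocabulary built from legal text rather than the generic one).
scratch = BertForMaskedLM(BertConfig())

# The released LEGAL-BERT family corresponds to strategies (b) and (c)
# and can be loaded directly from the Hub:
tokenizer = AutoTokenizer.from_pretrained("nlpaueb/legal-bert-base-uncased")
legal_bert = AutoModelForMaskedLM.from_pretrained("nlpaueb/legal-bert-base-uncased")
```

For fine-tuning, the paper argues for searching a broader hyper-parameter grid than the one commonly copied from the original BERT recipe; the exact ranges are given in the paper itself.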

Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, Ion Androutsopoulos • 2020

Related benchmarks

| Task | Dataset | Metric | Score | Rank |
| --- | --- | --- | --- | --- |
| Case holding classification | CaseHOLD (test) | Mean macro F1 | 76.1 | 12 |
| Single-label multi-class topic classification | SCOTUS (test) | Micro-F1 | 60.9 | 12 |
| Natural language inference | CNLI (test) | Micro-F1 | 0.702 | 7 |
| Probing | LegalLAMA 1.0 (test) | ECHR Articles Score | 91.1 | 7 |
| Single-label multi-class topic classification | LEDGAR (test) | Micro-F1 | 81.2 | 7 |
| Multi-label topic classification | ECtHR (test) | Micro-F1 | 59.1 | 7 |
| Multi-label topic classification | EURLEX (test) | Micro-F1 | 27.7 | 7 |
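Most rows above report micro-averaged F1, while CaseHOLD reports a macro average. As a quick reference for the difference, here is a minimal scikit-learn sketch on toy labels; the arrays are placeholders, not data from the table.

```python
from sklearn.metrics import f1_score

# Toy predictions for a 3-class task (placeholder data only).
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 2, 2, 2, 0, 0]

micro = f1_score(y_true, y_pred, average="micro")  # pools all decisions, then computes F1
macro = f1_score(y_true, y_pred, average="macro")  # computes per-class F1, then averages
print(f"micro-F1 = {micro:.3f}, macro-F1 = {macro:.3f}")
```

Micro averaging weights frequent classes more heavily, whereas macro averaging treats all classes equally, which matters on the skewed label distributions typical of legal datasets.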
