
Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets

About

Inspired by the success of the General Language Understanding Evaluation (GLUE) benchmark, we introduce the Biomedical Language Understanding Evaluation (BLUE) benchmark to facilitate research on pre-trained language representations in the biomedical domain. The benchmark consists of five tasks with ten datasets that cover both biomedical and clinical texts of varying sizes and difficulties. We also evaluate several baselines based on BERT and ELMo and find that the BERT model pre-trained on PubMed abstracts and MIMIC-III clinical notes achieves the best results. We make the datasets, pre-trained models, and code publicly available at https://github.com/ncbi-nlp/BLUE_Benchmark.
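Several BLUE tasks, such as MedNLI, are sentence-pair classification problems, where BERT-style models pack the two sentences into a single input sequence with segment ids. A minimal sketch of that packing, using a toy whitespace tokenizer rather than BERT's real WordPiece vocabulary (the function name and example sentences are illustrative, not from the paper):

```python
def pack_pair(premise: str, hypothesis: str):
    """Build a BERT-style input sequence and segment ids for a sentence pair.

    Segment 0 covers [CLS] + premise + the first [SEP];
    segment 1 covers the hypothesis + the final [SEP].
    """
    prem_toks = premise.split()
    hyp_toks = hypothesis.split()
    tokens = ["[CLS]"] + prem_toks + ["[SEP]"] + hyp_toks + ["[SEP]"]
    seg_a_len = 2 + len(prem_toks)  # [CLS], premise tokens, first [SEP]
    segment_ids = [0] * seg_a_len + [1] * (len(tokens) - seg_a_len)
    return tokens, segment_ids

tokens, segs = pack_pair("the patient is afebrile", "the patient has no fever")
# tokens: ['[CLS]', 'the', 'patient', 'is', 'afebrile', '[SEP]',
#          'the', 'patient', 'has', 'no', 'fever', '[SEP]']
# segs:   [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]
```

A real fine-tuning pipeline would replace the whitespace split with the model's tokenizer and map tokens to vocabulary ids, but the sequence layout is the same.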

Yifan Peng, Shankai Yan, Zhiyong Lu • 2019

Related benchmarks

Task | Dataset | Metric | Result | Rank
Natural Language Inference | MedNLI (test) | Accuracy | 86.36 | 89
Named Entity Recognition | BC5CDR (test) | Macro F1 (span-level) | 86.6 | 80
Named Entity Recognition | NCBI-disease (test) | Precision | 88.28 | 40
Document Classification | HoC (test) | F1 (sample average) | 0.8603 | 20
Biomedical Natural Language Processing | BLURB | BC5-chem | 91.19 | 12
DDI Extraction | DDIExtraction 2013 | F1 Score | 79.9 | 10
Relation Extraction | ChemProt | F1 Score | 74.4 | 10
Biomedical Knowledge Probing | MedLAMA 1.0 (Hard Set) | Acc@1 | 4.12 | 9
Biomedical Knowledge Probing | MedLAMA 1.0 (Full Set) | Acc@1 | 4.87 | 9
Biomedical NER | BioCreative Chemical-Disease Relation corpus V (test) | F1 Score | 93.5 | 8

Showing 10 of 15 rows.
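Several NER rows above report span-level F1, which scores a predicted entity as correct only if its span boundaries and type exactly match a gold annotation. A minimal sketch of that metric over (start, end, type) tuples (the function and example spans are illustrative, not the benchmark's official scorer):

```python
def span_f1(gold, pred):
    """Exact-match span-level F1 over (start, end, type) entity spans."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)                       # spans matching boundaries and type
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = [(0, 2, "Chemical"), (5, 7, "Disease")]
pred = [(0, 2, "Chemical"), (6, 7, "Disease")]  # second span has wrong boundary
print(span_f1(gold, pred))  # 0.5
```

The "Macro F1" variant reported for BC5CDR averages this score across entity types rather than pooling all spans together.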

Other info

Code: https://github.com/ncbi-nlp/BLUE_Benchmark
