
DeepStruct: Pretraining of Language Models for Structure Prediction

About

We introduce a method for improving the structural understanding abilities of language models. Unlike previous approaches that finetune the models with task-specific augmentation, we pretrain language models on a collection of task-agnostic corpora to generate structures from text. Our structure pretraining enables zero-shot transfer of the learned knowledge that models have about the structure tasks. We study the performance of this approach on 28 datasets, spanning 10 structure prediction tasks including open information extraction, joint entity and relation extraction, named entity recognition, relation classification, semantic role labeling, event extraction, coreference resolution, factual probe, intent detection, and dialogue state tracking. We further enhance the pretraining with the task-specific training sets. We show that a 10B parameter language model transfers non-trivially to most tasks and obtains state-of-the-art performance on 21 of 28 datasets that we evaluate.
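The core idea of structure pretraining is to cast every structure prediction task as text-to-structure generation, so a single sequence-to-sequence language model can be trained on all of them. A minimal sketch of how such a training pair might be built is below; the triple serialization format shown is an illustrative assumption, not necessarily the paper's exact format.

```python
# Hypothetical sketch: casting structure prediction as text generation.
# The model is trained to map a sentence (source) to a linearized
# structure (target); the "( head ; relation ; tail )" serialization
# here is an assumed format for illustration.

def serialize_triples(triples):
    """Linearize (head, relation, tail) triples into a target string."""
    return " ".join(f"( {h} ; {r} ; {t} )" for h, r, t in triples)

def make_example(sentence, triples):
    """Build a (source, target) pair for seq2seq structure pretraining."""
    return sentence, serialize_triples(triples)

src, tgt = make_example(
    "Barack Obama was born in Honolulu.",
    [("Barack Obama", "place of birth", "Honolulu")],
)
# tgt -> "( Barack Obama ; place of birth ; Honolulu )"
```

Under this framing, tasks as different as named entity recognition and dialogue state tracking differ only in what the output triples encode, which is what enables zero-shot transfer across tasks.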

Chenguang Wang, Xiao Liu, Zui Chen, Haoyun Hong, Jie Tang, Dawn Song • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Coreference Resolution | CoNLL 2012 English (test) | MUC F1 Score | 74.9 | 114 |
| Named Entity Recognition | CoNLL 03 | F1 (Entity) | 93.1 | 102 |
| Relation Extraction | TACRED | Micro F1 | 76.8 | 97 |
| Named Entity Recognition | OntoNotes | F1-score | 87.8 | 91 |
| Semantic Role Labeling | CoNLL 2005 (WSJ) | F1 Score | 95.5 | 41 |
| Named Entity Recognition | GENIA | F1 Score | 80.8 | 37 |
| Joint Entity and Relation Extraction | CoNLL04 | Entity F1 | 90.7 | 33 |
| Semantic Role Labeling | CoNLL 2005 (Brown) | F1 Score | 92.1 | 31 |
| Joint Entity and Relation Extraction | ADE | Entity F1 Score | 0.911 | 26 |
| Dialogue State Tracking | MultiWOZ 2.1 | Joint Goal Accuracy | 54.2 | 26 |

(10 of 32 rows shown)

Other info

Code
