
Generalizing Natural Language Analysis through Span-relation Representations

About

Natural language processing covers a wide variety of tasks that predict syntax, semantics, and information content, and usually each type of output is generated with a specially designed architecture. In this paper, we provide the simple insight that a great variety of tasks can be represented in a single unified format consisting of labeling spans and relations between spans, so that a single task-independent model can be used across different tasks. We perform extensive experiments to test this insight on 10 disparate tasks spanning dependency parsing (syntax), semantic role labeling (semantics), relation extraction (information content), aspect-based sentiment analysis (sentiment), and many others, achieving performance comparable to state-of-the-art specialized models. We further demonstrate the benefits of multi-task learning, and also show that the proposed method makes it easy to analyze differences and similarities in how the model handles different tasks. Finally, we convert these datasets into a unified format to build a benchmark, which provides a holistic testbed for evaluating future models for generalized natural language analysis.

Zhengbao Jiang, Wei Xu, Jun Araki, Graham Neubig • 2019

Related benchmarks

Task                          | Dataset              | Result        | Rank
------------------------------|----------------------|---------------|-----
Named Entity Recognition      | CoNLL 2003 (test)    | -             | 539
Flat Named Entity Recognition | OntoNotes 5.0 (test) | Micro F1 91.7 | 17
Flat Named Entity Recognition | Few-NERD (test)      | Micro F1 70.6 | 5

Other info

Code
