
CodeIE: Large Code Generation Models are Better Few-Shot Information Extractors

About

Large language models (LLMs) pre-trained on massive corpora have demonstrated impressive few-shot learning ability on many NLP tasks. A common practice is to recast the task into a text-to-text format so that generative LLMs of natural language (NL-LLMs), such as GPT-3, can be prompted to solve it. However, it is nontrivial to perform information extraction (IE) tasks with NL-LLMs, since the output of an IE task is usually structured and therefore hard to convert into plain text. In this paper, we propose to recast the structured output in the form of code instead of natural language, and to utilize generative LLMs of code (Code-LLMs), such as Codex, to perform IE tasks, in particular named entity recognition and relation extraction. In contrast to NL-LLMs, we show that Code-LLMs can be well aligned with these IE tasks by designing code-style prompts and formulating the tasks as code generation. Experimental results on seven benchmarks show that our method consistently outperforms both fine-tuning moderate-size pre-trained models specially designed for IE tasks (e.g., UIE) and prompting NL-LLMs under few-shot settings. We further conduct a series of in-depth analyses to demonstrate the merits of leveraging Code-LLMs for IE tasks.
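To make the idea concrete, here is a minimal sketch of how an NER instance could be recast as a code-style prompt for a Code-LLM. The function name, docstring wording, and `entity_list` convention here are illustrative assumptions, not necessarily the exact prompt format used in the paper; the model would be expected to complete the partial function by appending entity dictionaries.

```python
def build_ner_prompt(text: str) -> str:
    """Wrap an NER instance as a partial Python function.

    A Code-LLM is prompted with this unfinished code and continues it,
    e.g. by generating lines like
        entity_list.append({"text": "Apple", "type": "organization"})
    The structured output is then read back from the generated code.
    """
    return (
        'def named_entity_recognition(input_text):\n'
        '    """ extract named entities from the input_text """\n'
        f'    input_text = "{text}"\n'
        '    entity_list = []\n'
        '    # extracted named entities\n'
    )

# Example: the prompt handed to the Code-LLM for one sentence
prompt = build_ner_prompt("Steve became CEO of Apple in 1998 .")
print(prompt)
```

Because the expected completion is itself valid Python, the structured entities can be recovered by parsing the generated `entity_list.append(...)` calls rather than by pattern-matching free-form text.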

Peng Li, Tianxiang Sun, Qiong Tang, Hang Yan, Yuanbin Wu, Xuanjing Huang, Xipeng Qiu • 2023

Related benchmarks

Task                      Dataset     Metric              Result  Rank
Named Entity Recognition  CoNLL 03    F1 (Entity)         82.32   102
Relation Extraction       SciERC      Relation Strict F1  6.05    28
Relation Extraction       CoNLL 04    F1                  43.05   24
Entity Extraction         ACE04       F1 Score            54.27   14
Relation Extraction       ACE Rel 05  F1 Score            7.09    13
Relation Extraction       NYT         Micro-F1            32.17   8
Entity Extraction         ACE05-E     F1 Score            51.91   2

Other info

Code
