
Learning Rich Representation of Keyphrases from Text

About

In this work, we explore how to train task-specific language models aimed at learning rich representations of keyphrases from text documents. We experiment with different masking strategies for pre-training transformer language models (LMs) in both discriminative and generative settings. In the discriminative setting, we introduce a new pre-training objective, Keyphrase Boundary Infilling with Replacement (KBIR), which yields large gains in performance (up to 8.16 points in F1) over SOTA when the LM pre-trained with KBIR is fine-tuned for keyphrase extraction. In the generative setting, we introduce a new pre-training setup for BART, KeyBART, which reproduces the keyphrases related to the input text in the CatSeq format instead of the denoised original input. This also leads to gains in performance (up to 4.33 points in F1@M) over SOTA for keyphrase generation. Additionally, we fine-tune the pre-trained language models on named entity recognition (NER), question answering (QA), relation extraction (RE), and abstractive summarization, achieving performance comparable to SOTA and showing that learning rich representations of keyphrases is beneficial for many other fundamental NLP tasks.
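As a minimal sketch of how the two pre-trained checkpoints could be used, the snippet below loads KBIR as a token classifier for keyphrase extraction and KeyBART as a seq2seq model that decodes keyphrases in CatSeq format. It assumes the checkpoints are published on the Hugging Face Hub under the identifiers bloomberg/KBIR and bloomberg/KeyBART (verify against the official release); the input text, the three-label tagging scheme, and the decoded output shown in the comment are illustrative, not part of the paper.

```python
# Sketch only: model identifiers, label count, and outputs are assumptions,
# not confirmed details from the paper.
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    AutoModelForTokenClassification,
)

text = "Transformer language models learn rich representations of keyphrases."

# Discriminative setting: KBIR fine-tuned as a token classifier that tags
# keyphrase spans (e.g., with B/I/O labels). The classification head here is
# freshly initialized and would still need task-specific fine-tuning.
kbir_tok = AutoTokenizer.from_pretrained("bloomberg/KBIR")
kbir = AutoModelForTokenClassification.from_pretrained("bloomberg/KBIR", num_labels=3)
logits = kbir(**kbir_tok(text, return_tensors="pt")).logits  # (1, seq_len, 3)

# Generative setting: KeyBART decodes the keyphrases directly as a single
# separator-delimited sequence (CatSeq format) rather than the denoised input.
keybart_tok = AutoTokenizer.from_pretrained("bloomberg/KeyBART")
keybart = AutoModelForSeq2SeqLM.from_pretrained("bloomberg/KeyBART")
out = keybart.generate(**keybart_tok(text, return_tensors="pt"), max_new_tokens=40)
print(keybart_tok.decode(out[0], skip_special_tokens=True))
# Illustrative output: "transformer language models;keyphrase representations"
```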

Mayank Kulkarni, Debanjan Mahata, Ravneet Arora, Rajarshi Bhowmik • 2021

Related benchmarks

Task                   Dataset          Result (Semantic Precision, SemP)   Rank
Keyphrase Generation   KP20k (test)     58                                  23
Keyphrase Generation   KPTimes (test)   75.1                                21
