
Neural Keyphrase Generation via Reinforcement Learning with Adaptive Rewards

About

Generating keyphrases that summarize the main points of a document is a fundamental task in natural language processing. Although existing generative models are capable of predicting multiple keyphrases for an input document as well as determining the number of keyphrases to generate, they still suffer from the problem of generating too few keyphrases. To address this problem, we propose a reinforcement learning (RL) approach for keyphrase generation, with an adaptive reward function that encourages a model to generate both sufficient and accurate keyphrases. Furthermore, we introduce a new evaluation method that incorporates name variations of the ground-truth keyphrases using the Wikipedia knowledge base. Thus, our evaluation method can more robustly evaluate the quality of predicted keyphrases. Extensive experiments on five real-world datasets of different scales demonstrate that our RL approach consistently and significantly improves the performance of the state-of-the-art generative models with both conventional and new evaluation methods.
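The abstract describes an adaptive reward that pushes the model toward both sufficiency (enough keyphrases) and accuracy (correct keyphrases). A minimal sketch of one such reward is below; the switch-on-count behavior (recall while output is insufficient, F1 once enough keyphrases are generated) and the function name are assumptions for illustration, not the paper's exact formulation.

```python
def adaptive_reward(predicted, ground_truth):
    """Sketch of an adaptive RL reward for keyphrase generation.

    Assumption: while the model has produced fewer keyphrases than the
    ground truth, reward recall (encourages *sufficient* output); once
    enough are generated, reward F1 (encourages *accurate* output).
    """
    pred = set(predicted)
    gold = set(ground_truth)
    if not gold:
        return 0.0
    correct = len(pred & gold)
    recall = correct / len(gold)
    if len(pred) < len(gold):      # too few keyphrases: favor coverage
        return recall
    precision = correct / len(pred) if pred else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)  # F1
```

For example, predicting one of two gold keyphrases yields the recall reward 0.5, while predicting both exactly yields the F1 reward 1.0.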

Hou Pong Chan, Wang Chen, Lu Wang, Irwin King • 2019

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Keyphrase Generation | KP20k (test) | SemP5: 5 | 23 |
| Present Keyphrase Prediction | Krapivin | F1@5: 30 | 15 |
| Absent Keyphrase Generation | Inspec | F1@5: 0.012 | 7 |
| Absent Keyphrase Generation | KP20k | F1@5: 2.7 | 7 |
| Present Keyphrase Prediction | KP20k | F1@5: 32.1 | 7 |
| Absent Keyphrase Generation | Krapivin | F1@5: 3 | 7 |
| Present Keyphrase Prediction | Inspec | F1@5: 25.3 | 7 |
| Absent Keyphrase Generation | SemEval | F1@5: 0.021 | 6 |
| Present Keyphrase Prediction | NUS | F1@5: 37.5 | 6 |
| Present Keyphrase Prediction | SemEval | F1@5: 28.7 | 6 |
Showing 10 of 11 rows
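Most rows above report F1@5, the standard keyphrase metric that scores only the top-5 predictions against the ground-truth set. A small sketch of the conventional computation, assuming exact-match comparison (this page's actual scoring script may normalize or stem keyphrases first):

```python
def f1_at_k(predicted, ground_truth, k=5):
    """F1@k: precision and recall computed over the top-k predictions.

    `predicted` is a ranked list; `ground_truth` is the gold keyphrase
    set. Exact string matching is an assumption for illustration.
    """
    topk = predicted[:k]
    gold = set(ground_truth)
    if not topk or not gold:
        return 0.0
    correct = sum(1 for p in topk if p in gold)
    precision = correct / len(topk)
    recall = correct / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For instance, if 2 of 3 predictions match a 3-item gold set, precision and recall are both 2/3, so F1@5 is 2/3 as well.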
