
Dependency-based Mixture Language Models

About

Various models have been proposed to incorporate knowledge of syntactic structures into neural language models. However, previous works have relied heavily on elaborate components tailored to a specific language model, usually a recurrent neural network (RNN), which makes them difficult to fit into other neural language models, such as Transformer and GPT-2. In this paper, we introduce the Dependency-based Mixture Language Models. In detail, we first train neural language models with a novel dependency modeling objective to learn the probability distribution of future dependent tokens given context. We then formulate the next-token probability by mixing the previous dependency modeling probability distributions with self-attention. Extensive experiments and human evaluations show that our method can be easily and effectively applied to different neural language models while improving neural text generation on various tasks.
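The core idea in the abstract, forming the next-token distribution as a self-attention-weighted mixture of per-context-token dependency distributions, can be sketched as follows. This is a minimal illustration, not the authors' exact formulation; the array shapes, the function name, and the toy numbers are all assumptions.

```python
import numpy as np

def mix_dependency_distributions(dep_dists, attn_scores):
    """Mix per-context-token dependency distributions into a single
    next-token distribution, weighted by self-attention.

    dep_dists:   (T, V) array; row i is the dependency-modeling
                 distribution over the vocabulary predicted from
                 context token i (hypothetical layout).
    attn_scores: (T,) unnormalized attention scores of the current
                 position over the T context tokens.
    """
    # Normalize the attention scores with a softmax (stable form).
    w = np.exp(attn_scores - attn_scores.max())
    w /= w.sum()
    # A convex combination of valid distributions is itself a
    # valid distribution over the vocabulary.
    return w @ dep_dists

# Toy example: 3 context tokens, vocabulary of size 4.
dep = np.array([[0.70, 0.10, 0.10, 0.10],
                [0.20, 0.50, 0.20, 0.10],
                [0.25, 0.25, 0.25, 0.25]])
scores = np.array([2.0, 0.5, -1.0])
p_next = mix_dependency_distributions(dep, scores)
```

Because the attention weights sum to one, the mixture needs no renormalization, which is what lets this component be dropped on top of different base language models.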

Zhixian Yang, Xiaojun Wan · 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Language Modeling | Penn Treebank (PTB) (test) | Perplexity | 56.2 | 120
Language Modeling | Penn Treebank (PTB) (val) | Perplexity | 58.6 | 70
Unconditional Text Generation | EMNLP 2017 WMT News | Perplexity | 36.11 | 64
Conditional Text Generation | ROCStories (test) | UNION | 85.31 | 8
Unconditional Text Generation | EMNLP 2017 WMT News | Human Score | 0.512 | 8
Unconditional Text Generation | EMNLP 2017 WMT News | LM Score | 5.14 | 8
Conditional Text Generation | ROCStories | Grammaticality Win Rate | 36.2 | 5

Other info

Code
