
Scaling Hidden Markov Language Models

About

The hidden Markov model (HMM) is a fundamental tool for sequence modeling that cleanly separates the hidden state from the emission structure. However, this separation makes it difficult to fit HMMs to large datasets in modern NLP, and they have fallen out of use due to very poor performance compared to fully observed models. This work revisits the challenge of scaling HMMs to language modeling datasets, taking ideas from recent approaches to neural modeling. We propose methods for scaling HMMs to massive state spaces while maintaining efficient exact inference, a compact parameterization, and effective regularization. Experiments show that this approach leads to models that are more accurate than previous HMM and n-gram-based methods, making progress towards the performance of state-of-the-art neural models.
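For intuition on why scaling is hard: exact inference in an HMM marginalizes over all hidden state sequences via the forward algorithm, whose per-token cost is quadratic in the number of states. The sketch below is a minimal log-space forward pass in plain NumPy/SciPy with hypothetical toy parameters; it illustrates the generic algorithm, not the paper's implementation.

```python
import numpy as np
from scipy.special import logsumexp

def hmm_log_likelihood(log_pi, log_A, log_B, obs):
    """Forward algorithm in log space.

    log_pi: (S,)   log initial state distribution
    log_A:  (S, S) log transitions, log_A[i, j] = log p(z_t = j | z_{t-1} = i)
    log_B:  (S, V) log emissions,   log_B[i, v] = log p(x_t = v | z_t = i)
    obs:    (T,)   observed token ids
    """
    # alpha[j] = log p(x_1..x_t, z_t = j)
    alpha = log_pi + log_B[:, obs[0]]
    for x in obs[1:]:
        # One S x S log-space contraction per token: this O(S^2) step
        # is the bottleneck that limits HMMs to small state spaces.
        alpha = logsumexp(alpha[:, None] + log_A, axis=0) + log_B[:, x]
    return logsumexp(alpha)  # log p(x_1..x_T)

# Toy usage with random (hypothetical) parameters.
rng = np.random.default_rng(0)
S, V, T = 8, 20, 5
log_pi = np.log(rng.dirichlet(np.ones(S)))
log_A = np.log(rng.dirichlet(np.ones(S), size=S))
log_B = np.log(rng.dirichlet(np.ones(V), size=S))
obs = rng.integers(V, size=T)
print(hmm_log_likelihood(log_pi, log_A, log_B, obs))
```

The dense version above costs O(S^2) per token; the compact parameterization and regularization described in the abstract target exactly this bottleneck so that S can grow to massive state spaces while inference stays exact.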

Justin T. Chiu, Alexander M. Rush • 2020

Related benchmarks

Task              | Dataset                     | Metric     | Result | Rank
Language Modeling | PTB (test)                  | Perplexity | 119.5  | 471
Language Modeling | Penn Treebank (PTB) (test)  | Perplexity | 116    | 120
Language Modeling | PTB (val)                   | Perplexity | 128.6  | 83
Language Modeling | Penn Treebank (PTB) (val)   | Perplexity | 125    | 70
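The results above are reported as perplexity: the exponentiated average per-token negative log-likelihood on held-out text, where lower is better. A one-line sketch of the conversion, assuming a total log-likelihood such as the one computed in the forward-algorithm example above:

```python
import numpy as np

def perplexity(total_log_likelihood: float, num_tokens: int) -> float:
    # exp of the average negative log-likelihood per token
    return float(np.exp(-total_log_likelihood / num_tokens))
```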
