
An Empirical Study of Smoothing Techniques for Language Modeling

About

We present an extensive empirical comparison of several smoothing techniques in the domain of language modeling, including those described by Jelinek and Mercer (1980), Katz (1987), and Church and Gale (1991). We investigate for the first time how factors such as training data size, corpus (e.g., Brown versus Wall Street Journal), and n-gram order (bigram versus trigram) affect the relative performance of these methods, which we measure through the cross-entropy of test data. In addition, we introduce two novel smoothing techniques, one a variation of Jelinek-Mercer smoothing and one a very simple linear interpolation technique, both of which outperform existing methods.
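For intuition, below is a minimal Python sketch of the kind of method the paper compares: Jelinek-Mercer smoothing interpolates a bigram maximum-likelihood estimate with a unigram one, and the resulting model is scored by the cross-entropy of test data. The fixed interpolation weight lam, the toy corpus, and all function names here are illustrative assumptions, not the paper's implementation; the paper buckets the interpolation weights by history count and fits them on held-out data, which this sketch does not do.

```python
import math
from collections import Counter

def train_counts(tokens):
    """Collect unigram and bigram counts from a token sequence."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def jm_bigram_prob(w_prev, w, unigrams, bigrams, total, lam=0.7):
    """Jelinek-Mercer-style smoothing with a single fixed weight:
    interpolate the bigram MLE with the unigram MLE.
    (The paper fits bucketed weights on held-out data instead.)"""
    p_uni = unigrams[w] / total
    p_bi = bigrams[(w_prev, w)] / unigrams[w_prev] if unigrams[w_prev] else 0.0
    return lam * p_bi + (1.0 - lam) * p_uni

def cross_entropy(test_tokens, unigrams, bigrams, total, lam=0.7):
    """Cross-entropy in bits per token of the test data under the
    smoothed bigram model (lower is better)."""
    log_prob, n = 0.0, 0
    for w_prev, w in zip(test_tokens, test_tokens[1:]):
        p = jm_bigram_prob(w_prev, w, unigrams, bigrams, total, lam)
        log_prob += math.log2(p)  # assumes p > 0, i.e. test words seen in training
        n += 1
    return -log_prob / n

if __name__ == "__main__":
    train = "the cat sat on the mat the cat ate".split()
    test = "the cat sat on the mat".split()
    unigrams, bigrams = train_counts(train)
    total = sum(unigrams.values())
    print(f"cross-entropy: {cross_entropy(test, unigrams, bigrams, total):.3f} bits/token")
```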

Stanley F. Chen, Joshua T. Goodman • 1996

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Instruction Following | Alpaca | Speedup (x) | 2.45 | 63 |
| Speech Recognition | Hub5'00 CH (test) | WER | 13.7 | 28 |
| Automatic Speech Recognition | Hub5 2000 (SWB) | WER | 7.6 | 21 |
| Summarization | CNN/DM | Speedup | 2.01 | 8 |
| Multi-turn dialogue | MT-Bench (MTB) | Speedup Factor | 2.21 | 8 |
| Question Answering | Natural Questions (NQ) | Speedup | 1.95 | 8 |
| Averaged Performance across five downstream tasks | Average Overall | Speedup | 2.2 | 8 |
| Mathematical Reasoning | GSM8K | Speedup | 2.36 | 8 |
