
You can't pick your neighbors, or can you? When and how to rely on retrieval in the $k$NN-LM

About

Retrieval-enhanced language models (LMs), which condition their predictions on text retrieved from large external datastores, have recently shown significant perplexity improvements compared to standard LMs. One such approach, the $k$NN-LM, interpolates any existing LM's predictions with the output of a $k$-nearest neighbors model and requires no additional training. In this paper, we explore the importance of lexical and semantic matching in the context of items retrieved by the $k$NN-LM. We find two trends: (1) the presence of large overlapping $n$-grams between the datastore and evaluation set plays an important role in strong performance, even when the datastore is derived from the training data; and (2) the $k$NN-LM is most beneficial when retrieved items have high semantic similarity with the query. Based on our analysis, we define a new formulation of the $k$NN-LM that uses retrieval quality to assign the interpolation coefficient. We empirically measure the effectiveness of our approach on two English language modeling datasets, WikiText-103 and PG-19. Our re-formulation of the $k$NN-LM is beneficial in both cases, and leads to nearly 4% improvement in perplexity on the WikiText-103 test set.
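To make the mechanism concrete, here is a minimal NumPy sketch of the two ideas in the abstract: the standard $k$NN-LM mixture of the base LM with a nearest-neighbor distribution, and a coefficient assigned from retrieval quality instead of held fixed. The function names and the nearest-neighbor-distance proxy for retrieval quality are illustrative assumptions, not the authors' implementation; the paper's actual scoring function may differ.

```python
import numpy as np

def knn_distribution(neighbor_ids, neighbor_dists, vocab_size):
    """Turn retrieved (target token, distance) pairs into a next-token
    distribution: softmax of negative distances, scattered onto the vocab."""
    weights = np.exp(-np.asarray(neighbor_dists, dtype=float))
    weights /= weights.sum()
    probs = np.zeros(vocab_size)
    np.add.at(probs, np.asarray(neighbor_ids), weights)  # tokens may repeat
    return probs

def adaptive_lambda(neighbor_dists, temperature=1.0):
    """Assumed proxy for retrieval quality: trust retrieval more when the
    closest neighbor is near the query (small distance -> lambda near 1)."""
    return float(np.exp(-min(neighbor_dists) / temperature))

def knn_lm_probs(lm_probs, neighbor_ids, neighbor_dists, lam=None):
    """Interpolated kNN-LM distribution. With lam=None, the coefficient is
    assigned per query from retrieval quality (the paper's re-formulation);
    passing a fixed lam recovers the standard kNN-LM."""
    if lam is None:
        lam = adaptive_lambda(neighbor_dists)
    knn_probs = knn_distribution(neighbor_ids, neighbor_dists, len(lm_probs))
    return lam * knn_probs + (1.0 - lam) * lm_probs

# Toy example over a 5-token vocabulary.
lm = np.array([0.1, 0.2, 0.3, 0.2, 0.2])
print(knn_lm_probs(lm, neighbor_ids=[2, 2, 4], neighbor_dists=[0.1, 0.3, 0.9]))
```

Because the coefficient depends on the query's retrieval quality, the model falls back to the base LM when neighbors are distant and leans on retrieval when they are semantically close, matching the second trend reported above.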

Andrew Drozdov, Shufan Wang, Razieh Rahimi, Andrew McCallum, Hamed Zamani, Mohit Iyyer • 2022

Related benchmarks

Task                         | Dataset             | Metric     | Result | Rank
Language Modeling            | WikiText-103 (test) | Perplexity | 15.5   | 524
Language Modeling            | PG-19 (test)        | Perplexity | 43.58  | 106
Word-level Language Modeling | WikiText-103 (dev)  | Perplexity | 15.72  | 64
Language Modeling            | PG-19 (dev)         | Perplexity | 52.08  | 6

Other info

Code
