
How Much Knowledge Can You Pack Into the Parameters of a Language Model?

About

It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.
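
The "closed-book" setup the abstract describes amounts to feeding a fine-tuned sequence-to-sequence model the question text alone and decoding the answer, with no retrieved passages in the input. Below is a minimal sketch of that query pattern using the Hugging Face Transformers library rather than the authors' released T5 codebase; the checkpoint name google/t5-small-ssm-nq is assumed to be a Hub mirror of one of the paper's Natural Questions models.

```python
# Hedged sketch: closed-book QA with a T5 checkpoint fine-tuned on Natural Questions.
# Assumes "google/t5-small-ssm-nq" is a Hub mirror of one of the paper's released models.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/t5-small-ssm-nq"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# The model sees only the question: no retrieved context is concatenated to the input.
question = "When was Franklin D. Roosevelt born?"
inputs = tokenizer(question, return_tensors="pt")
answer_ids = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(answer_ids[0], skip_special_tokens=True))
```

Because nothing is retrieved at inference time, any answer the model produces must come from knowledge stored in its parameters, which is why the abstract frames performance as a function of model size.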

Adam Roberts, Colin Raffel, Noam Shazeer • 2020

Related benchmarks

Task                           | Dataset                       | Metric           | Result | Rank
-------------------------------|-------------------------------|------------------|--------|-----
Commonsense Reasoning          | PIQA                          | Accuracy         | 68.48  | 647
Question Answering             | TriviaQA                      | Accuracy         | 29.1   | 210
Commonsense Reasoning          | COPA                          | Accuracy         | 72.1   | 138
Open Question Answering        | Natural Questions (NQ) (test) | Exact Match (EM) | 36.6   | 134
Question Answering             | NQ                            | Accuracy         | 26.3   | 108
Commonsense Reasoning          | SocialIQA                     | Accuracy         | 65.5   | 97
Open-domain Question Answering | TriviaQA (test)               | Exact Match      | 28.7   | 80
Commonsense Reasoning          | OBQA                          | Accuracy         | 58.6   | 75
Question Answering             | 2Wiki                         | --               | --     | 75
Question Answering             | WebQuestions (WebQs)          | Accuracy         | 44.7   | 67
Showing 10 of 54 rows
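
Two metrics appear above: Accuracy and Exact Match (EM). For open-domain QA benchmarks like Natural Questions and TriviaQA, EM is conventionally computed after SQuAD-style answer normalization (lowercasing, stripping punctuation and the articles a/an/the, collapsing whitespace). The sketch below illustrates that convention; it is not this site's exact scoring code.

```python
# Hedged sketch of SQuAD-style Exact Match scoring, as commonly used for open-domain QA.
import re
import string

def normalize_answer(text: str) -> str:
    # Lowercase, drop punctuation, drop the articles a/an/the, collapse whitespace.
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold_answers: list[str]) -> bool:
    # A prediction scores 1 if it matches any acceptable gold answer after normalization.
    pred = normalize_answer(prediction)
    return any(pred == normalize_answer(gold) for gold in gold_answers)

print(exact_match("The Beatles!", ["Beatles"]))  # True: article and punctuation removed
```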
