
Language Models as Fact Checkers?

About

Recent work has suggested that language models (LMs) store both common-sense and factual knowledge learned from pre-training data. In this paper, we leverage this implicit knowledge to create an effective end-to-end fact checker using solely a language model, without any external knowledge or explicit retrieval components. While previous work on extracting knowledge from LMs has focused on the task of open-domain question answering, to the best of our knowledge, this is the first work to examine the use of language models as fact checkers. In a closed-book setting, we show that our zero-shot LM approach outperforms a random baseline on the standard FEVER task, and that our fine-tuned LM compares favorably with standard baselines. Though we do not ultimately outperform methods which use explicit knowledge bases, we believe our exploration shows that this method is viable and has much room for exploration.
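The closed-book idea above can be illustrated with a minimal sketch: turn a claim into a cloze-style query, let a masked LM fill the blank, and compare its prediction to the entity asserted in the claim. The function names and the toy scorer below are illustrative stand-ins, not the authors' actual implementation.

```python
# Hedged sketch of closed-book fact checking with a language model.
# All names here (claim_to_cloze, verify, toy_lm) are hypothetical
# illustrations of the general technique, not the paper's code.

def claim_to_cloze(claim: str, entity: str) -> str:
    """Replace the target entity in the claim with a [MASK] slot."""
    return claim.replace(entity, "[MASK]")

def verify(claim: str, entity: str, lm_fill) -> str:
    """Label a claim SUPPORTS/REFUTES by checking whether the LM's
    top prediction for the masked slot matches the claimed entity.
    `lm_fill` is any callable mapping a cloze query to a predicted token
    (e.g., a real masked-LM fill-mask head would slot in here)."""
    query = claim_to_cloze(claim, entity)
    prediction = lm_fill(query)
    return "SUPPORTS" if prediction.lower() == entity.lower() else "REFUTES"

# Toy stand-in for a pre-trained masked LM's fill-mask prediction:
def toy_lm(query: str) -> str:
    return "Paris" if "capital of France" in query else "unknown"

print(verify("Paris is the capital of France.", "Paris", toy_lm))  # SUPPORTS
print(verify("Lyon is the capital of France.", "Lyon", toy_lm))    # REFUTES
```

In practice the prediction step would use an actual pre-trained LM rather than a lookup, and the paper's fine-tuned variant trains the model end-to-end on FEVER labels instead of relying on exact-match cloze answers.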

Nayeon Lee, Belinda Z. Li, Sinong Wang, Wen-tau Yih, Hao Ma, Madian Khabsa • 2020

Related benchmarks

Task | Dataset | Result | Rank
Fact Verification | FEVER-Symmetric | Precision 71.2 | 16
Fact Verification | FEVER S R | Precision 77.9 | 8
