
Vera: A General-Purpose Plausibility Estimation Model for Commonsense Statements

About

Despite the much discussed capabilities of today's language models, they are still prone to silly and unexpected commonsense failures. We consider a retrospective verification approach that reflects on the correctness of LM outputs, and introduce Vera, a general-purpose model that estimates the plausibility of declarative statements based on commonsense knowledge. Trained on ~7M commonsense statements created from 19 QA datasets and two large-scale knowledge bases, and with a combination of three training objectives, Vera is a versatile model that effectively separates correct from incorrect statements across diverse commonsense domains. When applied to solving commonsense problems in the verification format, Vera substantially outperforms existing models that can be repurposed for commonsense verification, and it further exhibits generalization capabilities to unseen tasks and provides well-calibrated outputs. We find that Vera excels at filtering LM-generated commonsense knowledge and is useful in detecting erroneous commonsense statements generated by models like ChatGPT in real-world settings.
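The filtering use case described above can be sketched in a few lines. This is a minimal illustration, not the actual Vera model: `plausibility_score` below is a hypothetical toy stand-in, whereas the real Vera is a trained model that maps a declarative statement to a calibrated probability of correctness. The filtering logic around it is the relevant part.

```python
def plausibility_score(statement: str) -> float:
    """Toy stand-in for a Vera-style scorer (hypothetical, for illustration).

    A real plausibility model would return a calibrated probability in [0, 1]
    that the statement is commonsensically correct.
    """
    implausible_markers = ("can fly across the ocean", "is made of cheese")
    return 0.1 if any(m in statement for m in implausible_markers) else 0.9


def filter_statements(statements: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only statements the scorer deems plausible (verification-as-filtering)."""
    return [s for s in statements if plausibility_score(s) >= threshold]


candidates = [
    "Water freezes at 0 degrees Celsius.",
    "A penguin can fly across the ocean.",
]
print(filter_statements(candidates))  # keeps only the first statement
```

The same pattern extends to multiple-choice QA in the verification format: convert each answer option into a declarative statement and select the one with the highest plausibility score.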

Jiacheng Liu, Wenya Wang, Dianzhuo Wang, Noah A. Smith, Yejin Choi, Hannaneh Hajishirzi• 2023

Related benchmarks

Task                                    | Dataset            | Metric   | Result | Rank
Commonsense Reasoning                   | WinoGrande         | Accuracy | 92.4   | 1085
Physical Commonsense Reasoning          | PIQA               | Accuracy | 77.2   | 572
Physical Interaction Question Answering | PIQA               | Accuracy | 88.5   | 333
Physical Commonsense Reasoning          | PIQA (val)         | Accuracy | 77.2   | 116
Social Interaction Question Answering   | SIQA               | Accuracy | 80.1   | 109
Social Commonsense Reasoning            | SIQA               | Accuracy | 58.2   | 89
Commonsense Question Answering          | CSQA               | Accuracy | 63     | 58
Abductive Commonsense Reasoning         | ANLI (test)        | Accuracy | 73.2   | 53
Compositional Reasoning                 | SugarCrepe         | --       | --     | 50
Abductive Natural Language Inference    | aNLI (leaderboard) | Accuracy | 83.9   | 47

Showing 10 of 17 rows.
