
Generating Fact Checking Explanations

About

Most existing work on automated fact checking is concerned with predicting the veracity of claims based on metadata, social network spread, language used in claims, and, more recently, evidence supporting or denying claims. A crucial piece of the puzzle that is still missing is to understand how to automate the most elaborate part of the process -- generating justifications for verdicts on claims. This paper provides the first study of how these explanations can be generated automatically based on available claim context, and how this task can be modelled jointly with veracity prediction. Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system. The results of a manual evaluation further suggest that the informativeness, coverage and overall quality of the generated explanations are also improved in the multi-task model.
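The paper's central idea is that explanation generation and veracity prediction are optimised jointly rather than trained separately. As a minimal, hypothetical sketch of what "optimising both objectives at the same time" means, the snippet below combines two per-task losses into a single weighted training objective; the function names, the squared-error stand-ins, and the weighting parameter `alpha` are illustrative assumptions, not the paper's actual implementation.

```python
def veracity_loss(pred: float, target: float) -> float:
    """Squared-error stand-in for the veracity classification loss."""
    return (pred - target) ** 2


def explanation_loss(pred: float, target: float) -> float:
    """Squared-error stand-in for the explanation-generation loss."""
    return (pred - target) ** 2


def joint_loss(veracity_pair, explanation_pair, alpha=0.5):
    """Multi-task objective: weighted sum of both task losses.

    alpha balances the two tasks; gradients from this single scalar
    update the shared parameters for both objectives at once.
    """
    v = veracity_loss(*veracity_pair)
    e = explanation_loss(*explanation_pair)
    return alpha * v + (1 - alpha) * e
```

In a real system both losses would be computed from heads on a shared encoder, so minimising the joint loss shapes one representation that serves both tasks.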

Pepa Atanasova, Jakob Grue Simonsen, Christina Lioma, Isabelle Augenstein • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
|------|---------|--------|------|
| Fact Verification | RAWFC | Precision: 45.64 | 30 |
| Veracity Prediction | RAWFC (test) | Precision: 45.64 | 28 |
| Fact Checking | LIAR RAW | Precision: 28.01 | 20 |
| Veracity Explanation Ranking | RAWFC | Readability (MAR): 2.35 | 15 |
| Veracity Explanation Ranking | LIAR RAW | Informativeness (MAR): 2.09 | 12 |
| Veracity Prediction | LIAR-RAW (test) | Precision: 43.83 | 12 |
| Claim Verification | LIAR (test) | Precision: 28 | 12 |
| Explanation Generation | LIAR-RAW (test) | ROUGE-1: 25.5 | 11 |
| Explanation Generation | RAWFC (test) | ROUGE-1: 37.62 | 10 |
| Sentence Classification | RAWFC (test) | Precision: 0.5062 | 2 |
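The explanation-generation rows above report ROUGE-1, which scores a generated justification by its unigram overlap with a reference explanation. As a minimal sketch (a simplified whitespace-tokenised version, not the official ROUGE implementation), the F1 variant can be computed like this:

```python
from collections import Counter


def rouge1_f(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall.

    Tokenisation here is naive lowercase whitespace splitting; real
    ROUGE toolkits also apply stemming and other normalisation.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, an identical candidate and reference score 1.0, while disjoint texts score 0.0; leaderboard ROUGE-1 values like 37.62 correspond to this score scaled by 100.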
