Towards LLM-based Fact Verification on News Claims with a Hierarchical Step-by-Step Prompting Method
About
While large pre-trained language models (LLMs) have shown impressive capabilities in various NLP tasks, they remain under-explored in the misinformation domain. In this paper, we examine LLMs with in-context learning (ICL) for news claim verification and find that, with only 4-shot demonstration examples, several prompting methods can match the performance of previous supervised models. To further boost performance, we introduce a Hierarchical Step-by-Step (HiSS) prompting method, which directs LLMs to separate a claim into several subclaims and then verify each of them progressively via multiple question-answering steps. Experimental results on two public misinformation datasets show that HiSS prompting outperforms the state-of-the-art fully-supervised approach and strong few-shot ICL-enabled baselines.
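The HiSS pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `llm` callable stands in for any chat-completion API, and the prompt wording and the toy stub in the usage example are assumptions for demonstration only.

```python
from typing import Callable, List

def hiss_verify(claim: str, llm: Callable[[str], str]) -> str:
    """Hierarchical Step-by-Step verification sketch (prompts are illustrative)."""
    # Step 1: ask the model to split the claim into subclaims, one per line.
    subclaims = [s for s in llm(
        f"Break the claim into independent subclaims, one per line:\n{claim}"
    ).splitlines() if s.strip()]

    verdicts: List[str] = []
    for sub in subclaims:
        # Step 2: elicit verification questions for this subclaim.
        questions = [q for q in llm(
            f"List questions needed to verify: {sub}"
        ).splitlines() if q.strip()]
        # Step 3: answer each question, then judge the subclaim on the answers.
        answers = [llm(f"Answer briefly: {q}") for q in questions]
        verdicts.append(llm(
            "Given the evidence below, is the subclaim true or false?\n"
            + "\n".join(answers) + f"\nSubclaim: {sub}"
        ))

    # Step 4: aggregate per-subclaim verdicts into a final claim label.
    return llm(
        "Overall claim verdict given subclaim verdicts:\n" + "\n".join(verdicts)
    )

# Toy stub standing in for a real LLM, so the control flow can be exercised.
def stub_llm(prompt: str) -> str:
    if prompt.startswith("Break"):
        return "sub1\nsub2"
    if prompt.startswith("List"):
        return "q1"
    if prompt.startswith("Answer"):
        return "a1"
    if prompt.startswith("Given"):
        return "true"
    return "half-true"

print(hiss_verify("example claim", stub_llm))  # prints "half-true"
```

In practice each step would carry the few-shot demonstrations from the paper in its prompt; the stub only shows how the decomposition, questioning, and aggregation stages chain together.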
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Fact Verification | RAWFC | Precision: 53.4 | 30 |
| Veracity Prediction | RAWFC (test) | Precision: 53.4 | 28 |
| Fake News Detection | ANTiVax | Precision: 82.3 | 19 |
| Fact Verification | LIAR | F1 Score: 37.5 | 18 |
| Veracity Explanation Ranking | RAWFC | Readability (MAR): 2.44 | 15 |
| Claim Verification | LIAR (test) | Precision: 46.8 | 12 |
| Veracity Prediction | LIAR RAW | Macro Precision: 46 | 6 |
| Explanation Generation | RAWFC | Politeness: 97.1 | 4 |
| Explanation Generation | ANTiVax | Politeness: 98.6 | 4 |
| Explanation Generation | LIAR | Politeness: 96.7 | 4 |