Faithful Question Answering with Monte-Carlo Planning
About
Although large language models demonstrate remarkable question-answering performance, revealing the intermediate reasoning steps that the models faithfully follow remains challenging. In this paper, we propose FAME (FAithful question answering with MontE-carlo planning) to answer questions based on faithful reasoning steps. The reasoning steps are organized as a structured entailment tree, which shows how premises are used to produce intermediate conclusions that prove the correctness of the answer. We formulate the task as a discrete decision-making problem and solve it through the interaction of a reasoning environment and a controller. The environment is modular and contains several basic task-oriented modules, while the controller proposes actions to assemble the modules. Since the search space can be large, we introduce a Monte-Carlo planning algorithm to perform a look-ahead search and select actions that will eventually lead to high-quality steps. FAME achieves state-of-the-art performance on the standard benchmark. Compared with large language models, it produces valid and faithful reasoning steps with a much smaller model size.
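The look-ahead search described above is in the family of Monte-Carlo tree search (UCT). The sketch below is illustrative only and assumes a toy environment (`ToyEnv`, reaching a target sum with +1/+2 steps) in place of FAME's actual reasoning environment, whose modules and reward are not specified here; it shows the generic select/expand/simulate/backpropagate loop that picks an action by simulating futures before committing.

```python
import math
import random

# Hedged sketch of Monte-Carlo (UCT) look-ahead over discrete actions.
# ToyEnv is a stand-in: FAME's real environment wraps task-oriented
# reasoning modules, not an arithmetic game.

class ToyEnv:
    """Reach `target` exactly by adding 1 or 2; reward 1.0 only on a hit."""
    ACTIONS = (1, 2)

    def __init__(self, target=5):
        self.target = target

    def step(self, state, action):
        nxt = state + action
        done = nxt >= self.target
        reward = 1.0 if nxt == self.target else 0.0
        return nxt, reward, done


class Node:
    def __init__(self, state, parent=None, terminal=False, reward=0.0):
        self.state = state
        self.parent = parent
        self.terminal = terminal
        self.reward = reward
        self.children = {}   # action -> Node
        self.visits = 0
        self.value = 0.0     # running mean of simulation returns


def uct_select(node, c=1.4):
    """Pick the child action maximising the UCB1 score."""
    def score(action):
        child = node.children[action]
        if child.visits == 0:
            return float("inf")
        return child.value + c * math.sqrt(math.log(node.visits) / child.visits)
    return max(node.children, key=score)


def rollout(env, state, rng):
    """Random playout until the episode ends; return the final reward."""
    done, reward = False, 0.0
    while not done:
        state, reward, done = env.step(state, rng.choice(env.ACTIONS))
    return reward


def plan(env, root_state, n_sims=200, seed=0):
    """Run n_sims simulations from root_state; return the most-visited action."""
    rng = random.Random(seed)
    root = Node(root_state)
    for _ in range(n_sims):
        node = root
        # 1. Selection: descend while fully expanded and non-terminal.
        while not node.terminal and len(node.children) == len(env.ACTIONS):
            node = node.children[uct_select(node)]
        if node.terminal:
            ret = node.reward
        else:
            # 2. Expansion: try one untried action.
            action = rng.choice([a for a in env.ACTIONS
                                 if a not in node.children])
            nxt, reward, done = env.step(node.state, action)
            node.children[action] = Node(nxt, parent=node,
                                         terminal=done, reward=reward)
            node = node.children[action]
            # 3. Simulation: random rollout from the new state.
            ret = reward if done else rollout(env, nxt, rng)
        # 4. Backpropagation: update running means up to the root.
        while node is not None:
            node.visits += 1
            node.value += (ret - node.value) / node.visits
            node = node.parent
    return max(root.children, key=lambda a: root.children[a].visits)


# From state 0 with target 2, jumping +2 wins immediately, while +1 only
# reaches the target half the time under random play.
print(plan(ToyEnv(target=2), root_state=0))
```

In FAME the random rollout and the action proposals are replaced by the learned controller, and the environment's modules score intermediate entailment steps; this sketch only conveys the planning loop itself.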
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Question Answering | OBQA (test) | -- | 13 |
| Entailment tree generation | EntailmentBank Task 3 (Full Unseen) | Leaves F1: 43.4 | 10 |
| Question Answering | EntailmentBankQA Task 1 (test) | Accuracy: 91.5 | 7 |
| Question Answering | EntailmentBankQA Task 2 (test) | Accuracy: 78.2 | 7 |
| Selection | eQASC | P@1: 53.36 | 6 |
| Selection | eOBQA | P@1: 0.7309 | 6 |
| Entailment tree generation | EntailmentBank (test) | -- | 5 |
| Entailment tree generation | EntailmentBank 50 samples (test) | FV: 100 | 4 |
| Question Answering | EntailmentBankQA All (test) | Accuracy: 67.1 | 3 |
| Question Answering | EntailmentBankQA Easy (test) | Answer Accuracy: 70.8 | 3 |