
Verification and Refinement of Natural Language Explanations through LLM-Symbolic Theorem Proving

About

Natural language explanations represent a proxy for evaluating explanation-based and multi-step Natural Language Inference (NLI) models. However, assessing the validity of explanations for NLI is challenging as it typically involves the crowd-sourcing of apposite datasets, a process that is time-consuming and prone to logical errors. To address existing limitations, this paper investigates the verification and refinement of natural language explanations through the integration of Large Language Models (LLMs) and Theorem Provers (TPs). Specifically, we present a neuro-symbolic framework, named Explanation-Refiner, that integrates TPs with LLMs to generate and formalise explanatory sentences and suggest potential inference strategies for NLI. In turn, the TP is employed to provide formal guarantees on the logical validity of the explanations and to generate feedback for subsequent improvements. We demonstrate how Explanation-Refiner can be jointly used to evaluate explanatory reasoning, autoformalisation, and error correction mechanisms of state-of-the-art LLMs as well as to automatically enhance the quality of explanations of variable complexity in different domains.
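The verify-and-refine loop described above can be sketched as follows. This is a minimal illustration of the iterative scheme (formalise with an LLM, check with a theorem prover, repair with prover feedback), not the paper's actual implementation: the helper functions below are mock stand-ins for the LLM and theorem prover calls (the paper uses Isabelle/HOL), and none of these names come from the authors' code.

```python
# Hedged sketch of an Explanation-Refiner-style loop.
# All helpers are hypothetical mocks, not the paper's API.

def llm_formalise(explanation):
    """Mock autoformalisation: pretend to translate NL into a logical theory."""
    return {"axioms": explanation, "goal": "hypothesis"}

def prover_check(theory):
    """Mock theorem prover: accepts only theories with an explicit causal link."""
    if "because" in theory["axioms"]:
        return True, None
    return False, "missing explicit link between premise and goal"

def llm_refine(explanation, feedback):
    """Mock refinement step: patch the explanation using prover feedback."""
    return explanation + " because the premise entails the hypothesis"

def refine_explanation(explanation, max_iters=3):
    """Iterate formalise -> prove -> refine until the explanation is valid."""
    for _ in range(max_iters):
        theory = llm_formalise(explanation)
        valid, feedback = prover_check(theory)
        if valid:
            return explanation, True
        explanation = llm_refine(explanation, feedback)
    return explanation, False

refined, ok = refine_explanation("a squirrel is a rodent")
print(ok)  # the mock converges after one refinement step
```

In the real framework, the prover's failed proof attempt is the source of the feedback string, which is what lets the LLM target the specific logical gap rather than rewriting blindly.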

Xin Quan, Marco Valentino, Louise A. Dennis, André Freitas • 2024

Related benchmarks

Task                                                  Dataset          Metric                Result    Rank
Explanation Refinement                                FOLIO            Initial Score         55.74     15
Explanation Refinement                                ProofWriter      Initial Score         64        15
Explanation Refinement                                EntailmentBank   Initial Score         16.67     15
Explanation Refinement                                PrOntoQA         Initial Score         0.6467    15
Logical Refinement of Natural Language Explanations   E-SNLI           Initial Performance   31        8
Logical Refinement of Natural Language Explanations   QASC             Initial Score         4         8
Logical Refinement of Natural Language Explanations   WorldTree        Initial Performance   3         8
