
Decompose-and-Formalise: Recursively Verifiable Natural Language Inference

About

Recent work has shown that integrating large language models (LLMs) with theorem provers (TPs) in neuro-symbolic pipelines helps with entailment verification and proof-guided refinement of explanations for natural language inference (NLI). However, scaling such refinement to naturalistic NLI remains difficult: long, syntactically rich inputs and deep multi-step arguments amplify autoformalisation errors, where a single local mismatch can invalidate the proof. Moreover, current methods often handle failures via costly global regeneration, since the responsible span or step is hard to localise from prover diagnostics. To address these problems, we propose a decompose-and-formalise framework that (i) decomposes premise-hypothesis pairs into an entailment tree of atomic steps, (ii) verifies the tree bottom-up to isolate failures to specific nodes, and (iii) performs local diagnostic-guided refinement instead of regenerating the whole explanation. Additionally, to improve the faithfulness of autoformalisation, we introduce $\theta$-substitution in an event-based logical form to enforce consistent argument-role bindings. Across a range of reasoning tasks using five LLM backbones, our method achieves the highest explanation verification rates, improving over the state-of-the-art by 26.2%, 21.7%, 21.6%, and 48.9%, while reducing refinement iterations and runtime and preserving strong NLI accuracy.
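The bottom-up verify-then-refine loop in steps (ii) and (iii) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `prove` stands in for an assumed theorem-prover call, `refine` for an assumed diagnostic-guided LLM repair of a single step, and the `Node`/`verify_tree` names are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Node:
    """One atomic step of the entailment tree: children are its premises."""
    conclusion: str
    children: List["Node"] = field(default_factory=list)

def verify_tree(node: Node,
                prove: Callable[[List[str], str], bool],
                refine: Callable[[Node], None],
                max_iters: int = 3) -> bool:
    """Verify an entailment tree bottom-up; repair failures locally.

    `prove(premises, conclusion)` (assumed prover interface) returns True
    if the step is formally entailed; `refine(node)` (assumed LLM call)
    rewrites only the failing step, guided by prover diagnostics.
    """
    # Verify all sub-trees first, so a failure is isolated to one node
    # rather than triggering regeneration of the whole explanation.
    for child in node.children:
        if not verify_tree(child, prove, refine, max_iters):
            return False
    for _ in range(max_iters):
        premises = [c.conclusion for c in node.children]
        if prove(premises, node.conclusion):
            return True
        refine(node)  # local, diagnostic-guided repair of this step only
    return False
```

A failing leaf is retried in place: if `refine` rewrites it into a provable form, verification resumes without touching the rest of the tree.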

Xin Quan, Marco Valentino, Louise A. Dennis, André Freitas • 2026

Related benchmarks

Task | Dataset | Result | Rank
Explanation Refinement | FOLIO | Initial Score: 85.25 | 15
Explanation Refinement | ProofWriter | Initial Score: 92 | 15
Explanation Refinement | PrOntoQA | Initial Score: 0.98 | 15
Explanation Refinement | EntailmentBank | Initial Score: 22.67 | 15
