
LOGIC-LM++: Multi-Step Refinement for Symbolic Formulations

About

In this paper we examine the limitations of Large Language Models (LLMs) on complex reasoning tasks. Although recent works have started to employ formal languages as an intermediate representation for reasoning tasks, they often face challenges in accurately generating and refining these formal specifications to ensure correctness. To address these issues, this paper proposes Logic-LM++, an improvement on Logic-LM. Logic-LM++ leverages the ability of LLMs to perform pairwise comparisons, allowing it to evaluate the refinements it proposes to a formal specification. The paper demonstrates that Logic-LM++ outperforms Logic-LM and other contemporary techniques on natural language reasoning tasks across three datasets, FOLIO, ProofWriter and AR-LSAT, with an average improvement of 18.5% over standard prompting, 12.3% over chain-of-thought prompting and 5% over Logic-LM.
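The abstract describes a refine-and-compare loop: an LLM translates the problem into a formal specification, a symbolic solver checks it, and proposed refinements are accepted only when an LLM pairwise comparison prefers them over the current formulation. The sketch below illustrates that control flow only; it is not the authors' implementation, and the helper functions (generate_formulation, solve, llm_refine, llm_prefers) are hypothetical placeholders.

```python
# Minimal sketch of the multi-step refinement loop described in the abstract.
# All helper functions are illustrative placeholders, not the paper's actual API.

def refine_formulation(problem: str, max_steps: int = 3) -> str:
    """Iteratively refine a symbolic formulation of a natural-language problem,
    keeping a candidate refinement only if an LLM pairwise comparison prefers it."""
    current = generate_formulation(problem)          # LLM: natural language -> formal spec
    for _ in range(max_steps):
        ok, feedback = solve(current)                # symbolic solver check (hypothetical)
        if ok:
            break                                    # formulation executes; stop refining
        candidate = llm_refine(problem, current, feedback)  # LLM proposes a corrected spec
        # Pairwise comparison: ask the LLM which formulation better captures the problem,
        # and accept the candidate only if it is judged an improvement.
        if llm_prefers(problem, candidate, current):
            current = candidate
    return current
```

The key design point, as described in the abstract, is the acceptance check: rather than trusting every self-refinement, the candidate must win a pairwise comparison against the current formulation before it replaces it.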

Shashank Kirtania, Priyanshu Gupta, Arjun Radhakrishna • 2024

Related benchmarks

Task                 Dataset       Result                        Rank
Logical Reasoning    AR-LSAT       Accuracy: 46.32               44
Deductive Reasoning  ProofWriter   End-to-end Accuracy: 79.66    21
