
Extracting and Following Paths for Robust Relational Reasoning with Large Language Models

About

Large language models (LLMs) possess vast semantic knowledge but often struggle with complex reasoning tasks, particularly relational reasoning problems such as kinship or spatial reasoning. In this paper, we present Path-of-Thoughts (PoT), a novel framework for relational reasoning that decomposes the task into three key stages: graph extraction, path identification, and reasoning. Unlike previous approaches, PoT efficiently extracts a reasoning graph that identifies crucial entities, relations, and attributes within the context. Subsequently, PoT identifies query-relevant reasoning paths within the graph, facilitating downstream reasoning over potential answers. Experimental evaluations across four relational reasoning datasets demonstrate that PoT surpasses state-of-the-art baselines by a significant margin (up to 21.3%) without requiring fine-tuning or extensive LLM calls. Furthermore, unlike prior neuro-symbolic methods, PoT exhibits improved resilience against LLM extraction errors and input ambiguity by leveraging the compositional nature of graphs.
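The abstract's three-stage pipeline can be illustrated with a minimal sketch. In the actual PoT framework, the graph is extracted from free text by an LLM; here the triples are supplied directly, and the `extract_graph`/`find_path` names and the example kinship relations are illustrative assumptions, not the paper's implementation.

```python
from collections import deque

def extract_graph(triples):
    """Build an adjacency map from (head, relation, tail) triples.
    In PoT, an LLM would extract these triples from the input context;
    here they are given directly for illustration."""
    graph = {}
    for head, rel, tail in triples:
        graph.setdefault(head, []).append((rel, tail))
    return graph

def find_path(graph, source, target):
    """Stage 2: breadth-first search for a query-relevant relation
    path from source to target. Returns the list of relations along
    the path, or None if no path exists."""
    queue = deque([(source, [])])
    visited = {source}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for rel, nxt in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [rel]))
    return None

# Toy kinship context: Alice is Bob's mother; Bob is Carol's father.
triples = [
    ("Alice", "mother_of", "Bob"),
    ("Bob", "father_of", "Carol"),
]
g = extract_graph(triples)
print(find_path(g, "Alice", "Carol"))  # ['mother_of', 'father_of']
```

Stage 3 (reasoning) would then compose the returned relation path, e.g. mother-of-father resolves to "grandmother", a step PoT delegates to a downstream reasoner.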

Ge Zhang, Mohammad Ali Alomrani, Hongjian Gu, Jiaming Zhou, Yaochen Hu, Bin Wang, Qun Liu, Mark Coates, Yingxue Zhang, Jianye Hao• 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Logical reasoning | StepGame (k=3) | Accuracy | 89.5 | 56 |
| Logical reasoning | StepGame (k=4) | Accuracy | 93.3 | 56 |
| Logical reasoning | StepGame (k=10) | Accuracy | 87.8 | 56 |
| Logical reasoning | CLUTRR | Accuracy | 67.7 | 42 |
| Logical reasoning | CLUTRR (test) | Accuracy | 72.3 | 35 |
| Relational reasoning | Chinese Kinship | Accuracy | 71.2 | 20 |
| Spatial reasoning | SPARTUN | Accuracy | 83.1 | 20 |
