
Multi-hop Reasoning and Retrieval in Embedding Space: Leveraging Large Language Models with Knowledge

About

As large language models (LLMs) continue to grow in size, their ability to tackle complex tasks has improved significantly. However, issues such as hallucination and the lack of up-to-date knowledge remain largely unresolved. Knowledge graphs (KGs), which serve as symbolic representations of real-world knowledge, offer a reliable source for enhancing reasoning. Integrating KG retrieval into LLMs can therefore strengthen their reasoning by providing dependable knowledge. Nevertheless, due to a limited understanding of the underlying knowledge graph, LLMs may struggle with queries that have multiple interpretations. Additionally, the incompleteness and noise within knowledge graphs may cause retrieval failures. To address these challenges, we propose EMBRAG, an embedding-based retrieval and reasoning framework. In this approach, the model first generates multiple logical rules grounded in the knowledge graph based on the input query. These rules are then applied during reasoning in the embedding space, guided by the knowledge graph, ensuring more robust and accurate reasoning. A reranker model further interprets these rules and refines the results. Extensive experiments on two benchmark KGQA datasets demonstrate that our approach achieves new state-of-the-art performance on KG reasoning tasks.
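The core idea of applying a logical rule (a chain of relations) in embedding space can be illustrated with a minimal TransE-style sketch, where following a relation corresponds to adding its vector and the answer is recovered by nearest-neighbor search. All entities, relations, and vectors below are toy assumptions for illustration, not the paper's actual model, rules, or data.

```python
import numpy as np

# Toy entity and relation embeddings (TransE-style: tail ~ head + relation).
# Names and vectors are illustrative only, not from the EMBRAG paper.
rng = np.random.default_rng(0)
dim = 8

entities = ["Paris", "France", "Europe", "Berlin", "Germany"]

ent_emb = {e: rng.normal(size=dim) for e in entities}
# Construct relation vectors so the toy triples hold exactly.
rel_emb = {
    "capital_of": ent_emb["France"] - ent_emb["Paris"],
    "located_in": ent_emb["Europe"] - ent_emb["France"],
}

def follow_rule(head, rule):
    """Apply a chain of relations in embedding space and return the
    entity nearest (L2 distance) to the resulting vector."""
    vec = ent_emb[head].copy()
    for rel in rule:
        vec = vec + rel_emb[rel]
    scores = {e: np.linalg.norm(ent_emb[e] - vec) for e in entities}
    return min(scores, key=scores.get)

# Multi-hop query: "Which continent contains the country whose capital is Paris?"
# is answered by the 2-hop rule capital_of -> located_in.
print(follow_rule("Paris", ["capital_of", "located_in"]))  # → Europe
```

Because reasoning happens in the continuous space rather than by exact graph traversal, a missing edge in the KG can still be bridged as long as the embeddings place the true answer nearest to the composed vector, which is one motivation for embedding-based retrieval over purely symbolic lookup.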

Lihui Liu • 2026

Related benchmarks

| Task                              | Dataset       | Metric | Result | Rank |
|-----------------------------------|---------------|--------|--------|------|
| Knowledge Base Question Answering | WEBQSP (test) | Hit@1  | 86.81  | 145  |
| Question Answering                | MetaQA 3-hop  | Hits@1 | 96.3   | 47   |
| Question Answering                | WebQSP        | --     | --     | 35   |
| Question Answering                | MetaQA 2-hop  | Hits@1 | 99.1   | 28   |
| Question Answering                | CWQ           | Hits@1 | 62.9   | 17   |
| Question Answering                | MetaQA 1-hop  | Hits@1 | 97.5   | 9    |
