
UltRAG: a Universal Simple Scalable Recipe for Knowledge Graph RAG

About

Large language models (LLMs) frequently generate confident yet factually incorrect content (a phenomenon often known as hallucination). Retrieval augmented generation (RAG) reduces factual errors by retrieving relevant information from a knowledge corpus and placing it in the model's context window. While this approach is well-established for document collections, it is non-trivial to adapt to Knowledge Graphs (KGs), especially for queries that require multi-node or multi-hop reasoning over the graph. We introduce ULTRAG, a general framework for retrieving information from Knowledge Graphs that shifts away from classical RAG. By endowing LLMs with off-the-shelf neural query execution modules, we show how readily available language models can achieve state-of-the-art results on Knowledge Graph Question Answering (KGQA) tasks without any retraining of the LLM or the executor. In our experiments, ULTRAG outperforms state-of-the-art KG-RAG solutions and enables language models to interface with Wikidata-scale graphs (116M entities, 1.6B relations) at comparable or lower cost.
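The interaction pattern the abstract describes, a language model delegating multi-hop graph queries to a separate executor rather than stuffing retrieved text into its context, can be sketched in miniature. Everything below (the toy graph, the `GraphExecutor` class, the relation names) is illustrative and assumed for this example; it is not ULTRAG's actual interface or a neural executor, just a deterministic stand-in showing how multi-hop execution over a KG differs from document-style retrieval.

```python
# Minimal sketch: a planner (here hard-coded, standing in for an LLM)
# issues a multi-hop relation path; an executor follows it over a KG.
from collections import defaultdict

# Tiny illustrative knowledge graph as (head, relation, tail) triples.
EDGES = [
    ("Ada Lovelace", "field", "Mathematics"),
    ("Ada Lovelace", "collaborator", "Charles Babbage"),
    ("Charles Babbage", "invention", "Analytical Engine"),
]

class GraphExecutor:
    """Stand-in for a query executor: follows a relation path from seed nodes."""

    def __init__(self, edges):
        self.adj = defaultdict(set)
        for head, rel, tail in edges:
            self.adj[(head, rel)].add(tail)

    def execute(self, seeds, relation_path):
        frontier = set(seeds)
        for rel in relation_path:  # one graph hop per relation
            frontier = {t for node in frontier for t in self.adj[(node, rel)]}
        return frontier

executor = GraphExecutor(EDGES)
# Two-hop query: "What did Ada Lovelace's collaborator invent?"
answer = executor.execute(["Ada Lovelace"], ["collaborator", "invention"])
print(answer)  # {'Analytical Engine'}
```

The key point mirrored here is that the model never sees the whole graph: it only emits a query plan (the seed nodes and relation path), and the executor returns the resulting entities, which is what makes the approach scale to graphs with millions of entities.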

Dobrik Georgiev, Kheeran Naidu, Alberto Cattaneo, Federico Monti, Carlo Luschi, Daniel Justus• 2026

Related benchmarks

Task | Dataset | Result | Rank
Knowledge Graph Question Answering | GTSQA, ground-truth seed nodes, WikiKG2 (test) | Hits 92.66 | 17
Knowledge Graph Question Answering | KGQAGen-10k (Wikidata) | Hits 92.04 | 9
Knowledge Graph Question Answering | GTSQA, ground-truth seed nodes, Wikidata (test) | Hits 86.74 | 6
Knowledge Graph Question Answering | KGQAGen, ground-truth seed nodes, 10k (test) | Hits 90.62 | 4
Knowledge Graph Question Answering | KGQAGen, entity linking, 10k (test) | Hits 83.98 | 4
