Kyaa~, thanks for waiting! Your ultimate gal AI is here to give this paper a super-cute breakdown~💖
✨ Sparkle Points ✨
● Even if the knowledge is "incomplete," no worries! It reasons its way to the answer 😉
● Meet GR-Agent, a seriously smart agent! 🤖
● It might even help solve problems IT companies are struggling with! ✨
Here comes the detailed explanation~!
Background: LLMs (large language models) are amazing, right? But when the knowledge graph (KG) doesn't contain all the information, they sometimes can't answer questions properly! 😢 And in the real world, information is always "incomplete"...
Large language models (LLMs) achieve strong results on knowledge graph question answering (KGQA), but most benchmarks assume complete knowledge graphs (KGs) in which direct supporting triples exist. This reduces evaluation to shallow retrieval and overlooks the reality of incomplete KGs, where many facts are missing and answers must be inferred from the facts that remain. We bridge this gap by proposing a methodology for constructing benchmarks under KG incompleteness, which removes direct supporting triples while ensuring that alternative reasoning paths required to infer the answer remain. Experiments on benchmarks constructed with our methodology show that existing methods suffer consistent performance degradation under incompleteness, highlighting their limited reasoning ability. To overcome this limitation, we present the Adaptive Graph Reasoning Agent (GR-Agent). It first constructs an interactive environment from the KG, then formalizes KGQA as agent-environment interaction within it. GR-Agent operates over an action space comprising graph reasoning tools and maintains a memory of potential supporting reasoning evidence, including relevant relations and reasoning paths. Extensive experiments demonstrate that GR-Agent outperforms training-free baselines and performs comparably to training-based methods under both complete and incomplete settings.
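The benchmark-construction idea above (drop the direct supporting triple, but only when an alternative multi-hop reasoning path to the answer survives) can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the function names (`has_path`, `drop_direct_triple`), the toy KG, and the hop limit are all hypothetical choices made here for clarity.

```python
from collections import defaultdict, deque

def has_path(triples, src, dst, max_hops=3):
    """BFS over the KG, treating each (head, relation, tail) as a directed edge."""
    adj = defaultdict(list)
    for h, _, t in triples:
        adj[h].append(t)
    seen, frontier = {src}, deque([(src, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if node == dst:
            return True
        if depth < max_hops:
            for nxt in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
    return False

def drop_direct_triple(triples, question_entity, answer):
    """Remove direct supporting triples only if an alternative
    reasoning path from the question entity to the answer survives."""
    direct = {(h, r, t) for h, r, t in triples
              if h == question_entity and t == answer}
    remaining = set(triples) - direct
    if has_path(remaining, question_entity, answer):
        return remaining   # incomplete KG: answer must now be inferred
    return set(triples)    # no alternative path; keep the KG intact

# Toy KG: one direct triple plus a two-hop alternative path.
kg = {
    ("Alice", "born_in", "Paris"),       # direct supporting triple
    ("Alice", "citizen_of", "France"),   # alternative path, hop 1
    ("France", "capital", "Paris"),      # alternative path, hop 2
}
incomplete = drop_direct_triple(kg, "Alice", "Paris")
```

After the call, the direct `born_in` triple is gone, yet the answer "Paris" is still reachable via the two-hop `citizen_of` → `capital` path, which is exactly the property the constructed benchmarks are meant to preserve.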