Published: 2025/12/17 9:14:08

KGQA scores through the roof! An AI that reasons smartly using relations! 🎉

Ultra-short summary: A method that lets LLMs (large language models) use a knowledge graph (KG) to answer questions even more smartly!

✨ Gal-Style Sparkle Points ✨

● Focuses on relations (relationships)! It adjusts the number of reasoning steps to match the question ✨
● Uses CoT (chain of thought) to teach the AI how to reason. Just like a private tutor 👯‍♀️
● Looks set to shine in fields where mistakes aren't allowed, like medicine and law. Simply the strongest 👸

Here comes the detailed explanation~!

Read the rest in the 「らくらく論文」 app

RFKG-CoT: Relation-Driven Adaptive Hop-count Selection and Few-Shot Path Guidance for Knowledge-Aware QA

Chao Zhang / Minghan Li / Tianrui Lv / Guodong Zhou

Large language models (LLMs) often generate hallucinations in knowledge-intensive QA due to parametric knowledge limitations. While existing methods like KG-CoT improve reliability by integrating knowledge graph (KG) paths, they suffer from rigid hop-count selection (solely question-driven) and underutilization of reasoning paths (lack of guidance). To address this, we propose RFKG-CoT: First, it replaces the rigid hop-count selector with a relation-driven adaptive hop-count selector that dynamically adjusts reasoning steps by activating KG relations (e.g., 1-hop for direct "brother" relations, 2-hop for indirect "father-son" chains), formalized via a relation mask. Second, it introduces a few-shot in-context learning path guidance mechanism with CoT (think) that constructs examples in a "question-paths-answer" format to enhance LLMs' ability to understand reasoning paths. Experiments on four KGQA benchmarks show RFKG-CoT improves accuracy by up to 14.7 pp (Llama2-7B on WebQSP) over KG-CoT. Ablations confirm the hop-count selector and the path prompt are complementary, jointly transforming KG evidence into more faithful answers.
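The two components described in the abstract can be pictured with a minimal Python sketch. This is an illustration, not the authors' implementation: the relation-to-hop mapping `RELATION_HOPS`, the function names, and the exact prompt layout are my own assumptions based on the abstract's "brother → 1-hop, father-son chain → 2-hop" example and its "question-paths-answer" demonstration format.

```python
# Hypothetical sketch of RFKG-CoT's two pieces (names/format are assumptions):
# (1) a relation-driven adaptive hop-count selector with a relation mask,
# (2) few-shot "question-paths-answer" prompt construction for path guidance.

# Assumed mapping: each activated KG relation implies a reasoning depth.
RELATION_HOPS = {
    "brother": 1,            # direct relation -> 1-hop reasoning
    "father-son chain": 2,   # indirect chain  -> 2-hop reasoning
}

def relation_mask(active_relations, relation_vocab):
    """Binary mask over the relation vocabulary: 1 if the question activates it."""
    return [1 if r in active_relations else 0 for r in relation_vocab]

def select_hop_count(active_relations):
    """Assumed policy: use the deepest hop count among activated relations."""
    return max(RELATION_HOPS.get(r, 1) for r in active_relations)

def build_few_shot_prompt(examples, question, paths):
    """Format demonstrations as question-paths-answer, then append the query."""
    blocks = []
    for ex in examples:
        blocks.append(
            f"Question: {ex['question']}\n"
            f"Paths: {'; '.join(ex['paths'])}\n"
            f"Answer: {ex['answer']}"
        )
    # The query block ends at "Answer:" so the LLM completes it.
    blocks.append(f"Question: {question}\nPaths: {'; '.join(paths)}\nAnswer:")
    return "\n\n".join(blocks)
```

For instance, a question activating only "brother" would get a 1-hop retrieval, while one activating the "father-son chain" relation would trigger 2-hop path retrieval before the paths are placed into the few-shot prompt.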

cs / cs.CL / cs.AI