✨ Sparkly Gal-Style Highlights ✨
● It cuts out the noise (distracting info) and gives you spot-on answers!
● It cherry-picks the good stuff from all kinds of sources, so it's unbeatable 😎
● It answers with whatever info best fits your question, which is just so good!
Detailed Explanation
● Background: Recent AI (LLMs) have gotten really good at answering questions by combining lots of information, but there were still some issues 🥺 Specifically, they'd pick up irrelevant info along the way, or fixate on a single answer path… 💦
● Method: That's where CIRAG comes in! It's a new approach that gathers information more accurately and efficiently! Specifically, it breaks information into fine-grained pieces, picks the ones that best fit the question, and draws on multiple information sources at the same time!
Triple-based Iterative Retrieval-Augmented Generation (iRAG) mitigates document-level noise for multi-hop question answering. However, existing methods still face limitations: (i) greedy single-path expansion, which propagates early errors and fails to capture parallel evidence from different reasoning branches, and (ii) granularity-demand mismatch, where a single evidence representation struggles to balance noise control with contextual sufficiency. In this paper, we propose the Construction-Integration Retrieval and Adaptive Generation model, CIRAG. It introduces an Iterative Construction-Integration module that constructs candidate triples and integrates them conditioned on the retrieval history, distilling core triples and generating the next-hop query. This module mitigates the greedy trap by preserving multiple plausible evidence chains. In addition, we propose an Adaptive Cascaded Multi-Granularity Generation module that progressively expands contextual evidence according to the question's requirements, from triples to supporting sentences and full passages. Moreover, we introduce Trajectory Distillation, which distills the teacher model's integration policy into a lightweight student, enabling efficient and reliable long-horizon reasoning. Extensive experiments demonstrate that CIRAG outperforms existing iRAG methods.
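The construct-then-integrate iteration described in the abstract can be sketched in outline. This is a minimal toy sketch, not the paper's implementation: a simple lexical-overlap match stands in for the actual triple retriever, and all names (`Triple`, `construct_triples`, `integrate`, `cirag_loop`) are illustrative assumptions, not identifiers from the paper.

```python
from dataclasses import dataclass

@dataclass
class Triple:
    subject: str
    relation: str
    obj: str

def construct_triples(query, corpus):
    # Construction step: gather candidate triples whose subject or object
    # overlaps with the query (a toy lexical match stands in for a retriever).
    words = set(query.lower().split())
    return [t for t in corpus
            if t.subject.lower() in words or t.obj.lower() in words]

def integrate(history, candidates):
    # Integration step (history-conditioned): keep only triples not already
    # in the evidence history as "core" triples, and derive the next-hop
    # query from the first new triple's object.
    seen = {(t.subject, t.relation, t.obj) for t in history}
    core = [t for t in candidates
            if (t.subject, t.relation, t.obj) not in seen]
    next_query = core[0].obj if core else None
    return core, next_query

def cirag_loop(question, corpus, max_hops=3):
    # Iterate construction and integration until no new evidence is found
    # or the hop budget is exhausted; return the accumulated evidence chain.
    history, query = [], question
    for _ in range(max_hops):
        candidates = construct_triples(query, corpus)
        core, query = integrate(history, candidates)
        history.extend(core)
        if query is None:
            break
    return history
```

For example, on a tiny two-triple corpus, a two-hop question ("the currency of the country whose capital is Paris") drives the loop from the Paris triple to the France triple before it halts. The real model would additionally fall back from triples to supporting sentences and full passages when the triple-level evidence is insufficient, per the Adaptive Cascaded Multi-Granularity Generation module.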