Published: 2025/12/16 11:30:40

Operation RAG Glow-Up! Beat Distractors & Positional Bias to Make Your LLM Unstoppable✨

Super summary: Fixing RAG's weak points! Here's how to seriously boost your AI's retrieval accuracy💖

🌟 Gal-Style Sparkle Points✨
● RAG (Retrieval-Augmented Generation) is the secret to making AI smarter!
● Filtering out distractors (irrelevant passages) so the LLM (the AI) doesn't get confused is key🌟
● Dynamic context selection (picking only the info you need!) overcomes positional bias (the model's habit of staring at the wrong spots)✨

Here comes the detailed explanation!

Background: RAG is a technique where an AI borrows knowledge from external information sources to get smarter😉 But sometimes irrelevant passages (distractors) sneak into the retrieval results, or the order the information appears in creates bias, so the right information doesn't always get through🥺


Dynamic Context Selection for Retrieval-Augmented Generation: Mitigating Distractors and Positional Bias

Malika Iratni / Mohand Boughanem / Taoufiq Dkaki

Retrieval-Augmented Generation (RAG) enhances language model performance by incorporating external knowledge retrieved from large corpora, which makes it highly suitable for tasks such as open-domain question answering. Standard RAG systems typically rely on a fixed top-k retrieval strategy, which can either miss relevant information or introduce semantically irrelevant passages, known as distractors, that degrade output quality. Additionally, the positioning of retrieved passages within the input context can influence the model's attention and generation outcomes. Context placed in the middle tends to be overlooked, an issue known as the "lost in the middle" phenomenon. In this work, we systematically analyze the impact of distractors on generation quality and quantify their effects under varying conditions. We also investigate how the position of relevant passages within the context window affects their influence on generation. Building on these insights, we propose a context-size classifier that dynamically predicts the optimal number of documents to retrieve based on query-specific informational needs. We integrate this approach into a full RAG pipeline and demonstrate improved performance over fixed-k baselines.
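The pipeline in the abstract can be sketched roughly as follows. This is a toy illustration, not the authors' implementation: `predict_context_size` (a stand-in for the paper's context-size classifier), the word-overlap retriever, and the edge-placement reordering are all illustrative assumptions added here to show where each idea would plug in.

```python
# Toy dynamic-k RAG pipeline (illustrative stand-ins, not the paper's code).

def predict_context_size(query, max_k=10):
    # Stand-in for the context-size classifier: a trivial heuristic that
    # gives longer or multi-question queries more passages. The real system
    # would use a trained classifier over query features.
    return min(max_k, 2 + query.count("?") + len(query.split()) // 8)

def retrieve(query, corpus, k):
    # Toy lexical retriever: rank passages by word overlap with the query.
    q = set(query.lower().split())
    ranked = sorted(corpus, key=lambda p: -len(q & set(p.lower().split())))
    return ranked[:k]

def build_context(passages):
    # Counter the "lost in the middle" bias by placing the two top-ranked
    # passages at the edges of the context: best first, second-best last.
    if len(passages) <= 2:
        return passages
    return [passages[0]] + passages[2:] + [passages[1]]

def rag_prompt(query, corpus):
    # Dynamic k instead of a fixed top-k: fewer passages means fewer
    # distractors for simple queries, more coverage for complex ones.
    k = predict_context_size(query)
    context = build_context(retrieve(query, corpus, k))
    return "\n".join(context) + "\n\nQuestion: " + query
```

The design point is that k is decided per query rather than fixed, and that passage ordering is treated as a first-class choice rather than an accident of retrieval rank.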

cs / cs.IR