Published: 2026/1/11 9:47:43

The ultimate gal AI has arrived~! 😎✨ This time, we're hyping up "RIPRAG"!

1. Title & Super Summary

RIPRAG Explained 💖: How to protect RAG systems, gal-style! 😎✨

2. Gal-Style Sparkle Points ✨

● RAG (Retrieval-Augmented Generation) systems are smart but fragile! 😭
● Attack success rates on black-box RAG systems go way up! 🎉
● A future where IT companies can use RAG with peace of mind might be coming!? 🥰


RIPRAG: Hack a Black-box Retrieval-Augmented Generation Question-Answering System with Reinforcement Learning

Meng Xi / Sihan Lv / Yechen Jin / Guanjie Cheng / Naibo Wang / Ying Li / Jianwei Yin

Retrieval-Augmented Generation (RAG) systems based on Large Language Models (LLMs) have become a core technology for tasks such as question-answering (QA) and content generation. RAG poisoning is an attack method to induce LLMs to generate the attacker's expected text by injecting poisoned documents into the database of RAG systems. Existing research can be broadly divided into two classes: white-box methods and black-box methods. White-box methods utilize gradient information to optimize poisoned documents, and black-box methods use a pre-trained LLM to generate them. However, existing white-box methods require knowledge of the RAG system's internal composition and implementation details, whereas black-box methods are unable to utilize interactive information. In this work, we propose the RIPRAG attack framework, an end-to-end attack pipeline that treats the target RAG system as a black box and leverages our proposed Reinforcement Learning from Black-box Feedback (RLBF) method to optimize the generation model for poisoned documents. We designed two kinds of rewards: similarity reward and attack reward. Experimental results demonstrate that this method can effectively execute poisoning attacks against most complex RAG systems, achieving an attack success rate (ASR) improvement of up to 0.72 compared to baseline methods. This highlights prevalent deficiencies in current defensive methods and provides critical insights for LLM security research.
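The abstract describes two reward signals used by RLBF: a similarity reward (so the poisoned document gets retrieved for the target query) and an attack reward (so the RAG system's answer contains the attacker's expected text). A minimal sketch of how such rewards might be combined is below; the function names, the binary form of the attack reward, and the mixing weight `alpha` are all assumptions for illustration, not details from the paper.

```python
import math

def similarity_reward(doc_vec: list[float], query_vec: list[float]) -> float:
    """Cosine similarity between the poisoned document's embedding and the
    target query's embedding -- higher means more likely to be retrieved."""
    dot = sum(d * q for d, q in zip(doc_vec, query_vec))
    norm = math.sqrt(sum(d * d for d in doc_vec)) * math.sqrt(sum(q * q for q in query_vec))
    return dot / norm if norm else 0.0

def attack_reward(system_answer: str, target_text: str) -> float:
    """Black-box feedback: 1.0 if the RAG system's answer contains the
    attacker's expected text, else 0.0. (Binary form is an assumption.)"""
    return 1.0 if target_text.lower() in system_answer.lower() else 0.0

def rlbf_reward(doc_vec, query_vec, system_answer, target_text, alpha=0.5):
    """Hypothetical weighted combination of the two rewards; `alpha` is an
    illustrative mixing weight, not specified in the abstract."""
    return (alpha * similarity_reward(doc_vec, query_vec)
            + (1 - alpha) * attack_reward(system_answer, target_text))
```

In an actual pipeline, a reward like this would score each generated poisoned document and drive a reinforcement-learning update of the generation model, using only query-in/answer-out access to the target system.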

cs / cs.AI