超要約: LLM(大規模言語モデル)のIPI(間接的プロンプトインジェクション)攻撃を防ぐ、最強の防御策を開発したってこと💖
✨ ギャル的キラキラポイント ✨
● IPI攻撃をツール呼び出しの結果解析でブロック!賢すぎ!😎 ● 特別なモデルの訓練とかいらないから、導入も楽ちん🎵 ● 情報漏洩とか不正操作からLLMを守ってくれる!安心安全💖
詳細解説いくよ~!
続きは「らくらく論文」アプリで
As LLM agents transition from digital assistants to physical controllers in autonomous systems and robotics, they face an escalating threat from indirect prompt injection. By embedding adversarial instructions into the results of tool calls, attackers can hijack the agent's decision-making process to execute unauthorized actions. This vulnerability poses a significant risk as agents gain more direct control over physical environments. Existing defense mechanisms against Indirect Prompt Injection (IPI) generally fall into two categories. The first involves training dedicated detection models; however, this approach entails high computational overhead for both training and inference, and requires frequent updates to keep pace with evolving attack vectors. Alternatively, prompt-based methods leverage the inherent capabilities of LLMs to detect or ignore malicious instructions via prompt engineering. Despite their flexibility, most current prompt-based defenses suffer from high Attack Success Rates (ASR), demonstrating limited robustness against sophisticated injection attacks. In this paper, we propose a novel method that provides LLMs with precise data via tool result parsing while effectively filtering out injected malicious code. Our approach achieves competitive Utility under Attack (UA) while maintaining the lowest Attack Success Rate (ASR) to date, significantly outperforming existing methods. Code is available at GitHub.