iconLogo
Published:2025/12/17 6:57:16

タイトル & 超要約:LLMの攻撃と防御の研究だよ!✨

  1. ギャル的キラキラポイント✨その1: 既存の防御をぶっ壊す! ● LLM(大規模言語モデル)のセキュリティを守る研究だよ! 今までの防御策が、新しい攻撃手法「ASTRA」で破られちゃうかもって話!

  2. ギャル的キラキラポイント✨その2: ASTRAって何者? ● ASTRAは、LLMの構造(アテンションメカニズム)をうまく使って攻撃するの!賢すぎる~!🤯

  3. ギャル的キラキラポイント✨その3: IT業界に貢献! ● LLMを使ったサービスが、もっと安全になるかも! みんなが安心して使えるようになるって、すごくない?😍

詳細解説

続きは「らくらく論文」アプリで

May I have your Attention? Breaking Fine-Tuning based Prompt Injection Defenses using Architecture-Aware Attacks

Nishit V. Pandya / Andrey Labunets / Sicun Gao / Earlence Fernandes

A popular class of defenses against prompt injection attacks on large language models (LLMs) relies on fine-tuning to separate instructions and data, so that the LLM does not follow instructions that might be present with data. We evaluate the robustness of this approach in the whitebox setting by constructing strong optimization-based attacks, and show that the defenses do not provide the claimed security properties. Specifically, we construct a novel attention-based attack algorithm for textual LLMs and apply it to three recent whitebox defenses SecAlign (CCS 2025), SecAlign++, and StruQ (USENIX Security 2025), showing attacks with success rates of up to \textbf{85-95\%} on unseen prompts with modest increase in attacker budget in terms of tokens. Our findings make fundamental progress towards understanding the robustness of prompt injection defenses in the whitebox setting. We release our code and attacks at https://github.com/nishitvp/better_opts_attacks

cs / cs.CR / cs.AI / cs.CL