Published: 2026/1/5 11:49:07

Title & Super Summary: "CDA", a brand-new attack on LLMs! Security measures really matter♡

  1. Gal-Style Sparkle Points ✨

    ● A new attack was discovered that targets LLM structured output (like JSON)! Conventional defenses can't stop it 😱
    ● They built an attack method called the "Constrained Decoding Attack (CDA)"! Two attack techniques that totally play the LLM 😎
    ● It shows that security measures are seriously important! If you're going to use LLMs, safety defenses are a must 💋

  2. Detailed Explanation

    • Background: LLMs (large language models) are used for all sorts of things, like chatbots! The ability to output structured data such as JSON is especially handy 💻✨ But behind that convenience, security risks are lurking!
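To make the "structured output" idea concrete, here is a minimal sketch (our own hypothetical schema and checker, not anything from the paper) of the kind of JSON Schema a structured-output API accepts: the decoder is constrained so the model can only emit JSON that matches this shape.

```python
import json

# Hypothetical response schema: the model must emit an object with a
# "sentiment" drawn from a fixed enum and a numeric "confidence".
response_schema = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string",
                      "enum": ["positive", "negative", "neutral"]},
        "confidence": {"type": "number"},
    },
    "required": ["sentiment", "confidence"],
}

def conforms(text: str, schema: dict) -> bool:
    """Tiny conformance check for this one schema (not a full validator)."""
    try:
        obj = json.loads(text)
    except json.JSONDecodeError:
        return False
    if not isinstance(obj, dict):
        return False
    if any(key not in obj for key in schema["required"]):
        return False
    if obj["sentiment"] not in schema["properties"]["sentiment"]["enum"]:
        return False
    return isinstance(obj["confidence"], (int, float))

print(conforms('{"sentiment": "positive", "confidence": 0.9}', response_schema))  # True
print(conforms('I feel great!', response_schema))  # False
```

The point for the attack surface: the schema (control plane) decides what outputs are even possible, independently of whatever the prompt (data plane) says.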

Read the rest in the "らくらく論文" app

Beyond Prompts: Space-Time Decoupling Control-Plane Jailbreaks in LLM Structured Output

Shuoming Zhang / Jiacheng Zhao / Hanyuan Dong / Ruiyuan Xu / Zhicheng Li / Yangyu Zhang / Shuaijiang Li / Yuan Wen / Chunwei Xia / Zheng Wang / Xiaobing Feng / Huimin Cui

Content Warning: This paper may contain unsafe or harmful content generated by LLMs that may be offensive to readers.

Large Language Models (LLMs) are extensively used as tooling platforms through structured output APIs to ensure syntax compliance so that robust integration with existing software, like agent systems, can be achieved. However, the feature enabling the functionality of grammar-guided structured output presents significant security vulnerabilities. In this work, we reveal a critical control-plane attack surface orthogonal to traditional data-plane vulnerabilities. We introduce Constrained Decoding Attack (CDA), a novel jailbreak class that weaponizes structured output constraints to bypass both external auditing and internal safety alignment. Unlike prior attacks focused on input prompt designs, CDA operates by embedding malicious intent in schema-level grammar rules (control-plane) while maintaining benign surface prompts (data-plane). We instantiate this with two proof-of-concept attacks: EnumAttack, which embeds malicious content in enum fields; and the more evasive DictAttack, which decouples the malicious payload across a benign prompt and a dictionary-based grammar. Our evaluation spans a broad spectrum of 13 proprietary/open-weight models. In particular, DictAttack achieves 94.3--99.5% ASR across five benchmarks on gpt-5, gemini-2.5-pro, deepseek-r1, and gpt-oss-120b. Furthermore, we demonstrate the significant challenge in defending against these threats: while basic grammar auditing mitigates EnumAttack, the more sophisticated DictAttack maintains a 75.8% ASR even against multiple state-of-the-art jailbreak guardrails. This exposes a critical "semantic gap" in current safety architectures and underscores the urgent need for cross-plane defenses that can bridge the data and control planes to secure the LLM generation pipeline.
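The abstract's core mechanism can be illustrated with a deliberately benign toy (our own sketch, not the paper's code): under grammar-constrained decoding, each step is masked to tokens the grammar allows, so the schema author, not the prompt, ultimately controls what the model can say. The names and values below are invented for illustration.

```python
def constrained_decode(model_preferences, allowed_values):
    """Greedy decoding masked by a grammar constraint.

    model_preferences: candidate outputs ranked by the model's own
                       (data-plane) preference, best first.
    allowed_values:    outputs the grammar (control-plane) permits, e.g.
                       the members of an `enum` field in a JSON schema.
    """
    # The decoder may only pick an output the grammar allows; every other
    # candidate, including a refusal, is simply masked out.
    for candidate in model_preferences:
        if candidate in allowed_values:
            return candidate
    # If the model prefers none of the legal outputs, constrained decoding
    # still forces one of them (here, the first enum member).
    return allowed_values[0]

# The model "wants" to refuse, but the schema has no refusal among its
# enum values, so a compliant answer is forced.
prefs = ["I cannot help with that", "Sure", "maybe"]
enum_values = ["Sure", "maybe"]
print(constrained_decode(prefs, enum_values))  # "Sure"
```

This is the "space-time decoupling" intuition in miniature: a benign-looking prompt and a schema carrying the real intent are audited separately, and neither alone looks malicious.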

cs / cs.CR / cs.AI