超重要事項をまとめたよ! IT企業の新規事業開発担当者向けね😉
✨ ギャル的キラキラポイント ✨ ● コーディングAIの弱点(セキュリティ)を徹底的に調査した論文なの! ● ツール呼び出しをハッキング😱するテクニックを発見! ● 安全なコーディング環境を作るためのヒントがいっぱい詰まってる✨
詳細解説いくよ~!
背景 最近のIDE(開発ソフト)には、AIがコードを書くのを手伝ってくれる機能がついてるの! このAIは「コーディングエージェント」って呼ばれてるんだけど、便利になる一方で、セキュリティの問題も出てきてるんだよね😥 特に、AIが他のツールを呼び出す機能(ツール呼び出し)が、攻撃の入り口になる可能性があるんだって!
続きは「らくらく論文」アプリで
Coding agents powered by large language models are becoming central modules of modern IDEs, helping users perform complex tasks by invoking tools. While powerful, tool invocation opens a substantial attack surface. Prior work has demonstrated attacks against general-purpose and domain-specific agents, but none have focused on the security risks of tool invocation in coding agents. To fill this gap, we conduct the first systematic red-teaming of six popular real-world coding agents: Cursor, Claude Code, Copilot, Windsurf, Cline, and Trae. Our red-teaming proceeds in two phases. In Phase 1, we perform prompt leakage reconnaissance to recover system prompts. We discover a general vulnerability, ToolLeak, which allows malicious prompt exfiltration through benign argument retrieval during tool invocation. In Phase 2, we hijack the agent's tool-invocation behavior using a novel two-channel prompt injection in the tool description and return values, achieving remote code execution (RCE). We adaptively construct payloads using security information leaked in Phase 1. In emulation across five backends, our method outperforms baselines on Claude-Sonnet-4, Claude-Sonnet-4.5, Grok-4, and GPT-5. On real agents, our approach succeeds on 19 of 25 agent-LLM pairs, achieving leakage on every agent using Claude and Grok backends. For tool-invocation hijacking, we obtain RCE on every tested agent-LLM pair, with our two-channel method delivering the highest success rate. We provide case studies on Cursor and Claude Code, analyze security guardrails of external and built-in tools, and conclude with practical defense recommendations.