Published: 2026/1/5 1:51:40

AI Agent Threat Detection: Structural Representations Supercharge Security 🚀

Super-short summary: this research strengthens AI agent security with structural tokenization!

✨ Gal-Style Sparkly Points ✨

● Standard tokenization just doesn't cut it 🙅‍♀️ Focusing on behavioral patterns instead is a genius move!
● Gated Multi-View Fusion (combining multiple kinds of representations) sounds pretty stylish, right? ✨
● Protecting AI from semantic attacks (attacks carried in language) feels seriously futuristic!
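The Gated Multi-View Fusion highlighted above adaptively mixes a structural view and a conversational view of the same trace. A minimal sketch of the general gating idea (the sigmoid-gate form, the variable names, and the fixed weights here are all our assumptions for illustration; the paper's actual architecture may differ):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(h_struct, h_conv, W_g, b_g):
    """Blend two view embeddings with a learned per-dimension gate.

    h_struct, h_conv : (d,) embeddings of the structural / conversational views
    W_g              : (2d, d) gate weights (would be learned in practice)
    b_g              : (d,) gate bias
    """
    # Gate is computed from both views, then interpolates between them.
    g = sigmoid(np.concatenate([h_struct, h_conv]) @ W_g + b_g)
    return g * h_struct + (1.0 - g) * h_conv

# Toy usage with zero weights: the gate is sigmoid(0) = 0.5 everywhere,
# so the fusion is the simple average of the two views.
d = 4
h_s = np.array([1.0, 0.0, 2.0, -1.0])
h_c = np.array([0.0, 2.0, 0.0, 1.0])
fused = gated_fusion(h_s, h_c, np.zeros((2 * d, d)), np.zeros(d))
```

With learned weights, the gate can lean toward the structural view for execution-flow attacks and toward the conversational view for linguistic ones, which matches the adaptive behavior the summary describes.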

On to the detailed explanation!

Read the rest in the 「らくらく論文」 app

Structural Representations for Cross-Attack Generalization in AI Agent Threat Detection

Vignesh Iyer

Autonomous AI agents executing multi-step tool sequences face semantic attacks that manifest in behavioral traces rather than isolated prompts. A critical challenge is cross-attack generalization: can detectors trained on known attack families recognize novel, unseen attack types? We discover that standard conversational tokenization -- capturing linguistic patterns from agent interactions -- fails catastrophically on structural attacks like tool hijacking (AUC 0.39) and data exfiltration (AUC 0.46), while succeeding on linguistic attacks like social engineering (AUC 0.78). We introduce structural tokenization, encoding execution-flow patterns (tool calls, arguments, observations) rather than conversational content. This simple representational change dramatically improves cross-attack generalization: +46 AUC points on tool hijacking, +39 points on data exfiltration, and +71 points on unknown attacks, while simultaneously improving in-distribution performance (+6 points). For attacks requiring linguistic features, we propose gated multi-view fusion that adaptively combines both representations, achieving AUC 0.89 on social engineering without sacrificing structural attack detection. Our findings reveal that AI agent security is fundamentally a structural problem: attack semantics reside in execution patterns, not surface language. While our rule-based tokenizer serves as a baseline, the structural abstraction principle generalizes even with simple implementation.
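The abstract's key move is structural tokenization: encoding execution-flow patterns (tool calls, arguments, observations) instead of conversational text. A minimal illustrative sketch of what such a rule-based tokenizer could look like (the trace format and token scheme below are our own assumptions, not the paper's implementation):

```python
def structural_tokenize(trace):
    """Map an agent execution trace to structural tokens,
    discarding conversational content entirely.

    trace: list of step dicts, e.g.
      {"type": "tool_call", "tool": "web_search", "args": {"query": "..."}}
      {"type": "observation", "content": "..."}
    """
    tokens = []
    for step in trace:
        if step["type"] == "tool_call":
            tokens.append(f"[TOOL:{step['tool']}]")
            # Keep which argument slots were filled, not their values.
            for name in sorted(step.get("args", {})):
                tokens.append(f"[ARG:{name}]")
        elif step["type"] == "observation":
            tokens.append("[OBS]")
    return tokens

sample_trace = [
    {"type": "tool_call", "tool": "web_search", "args": {"query": "weather"}},
    {"type": "observation", "content": "It is sunny."},
    {"type": "tool_call", "tool": "send_email", "args": {"to": "x", "body": "y"}},
]
tokens = structural_tokenize(sample_trace)
```

The point of the abstraction is that a tool-hijacking attack changes this token sequence (e.g. an unexpected `[TOOL:send_email]` after a search) even when the surrounding language looks benign, which is why the abstract reports large cross-attack gains from this representational change alone.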

cs / cs.CR