Published: 2025/12/3 16:56:37

A Defense That Defeats the Enemy?! A Novel Twist on AT 💥

Ultra-short summary: Defense making attacks stronger? Seriously? A story about deep learning security ✨

● A defense that levels up the attack? Sounds like something out of a drama, right?
● It's research aimed at making AI safer!
● The code is public on GitHub, so anyone can try it ♪

Detailed Explanation

Background: Deep learning models can do amazing things like image recognition, but they have a vulnerability: add just a tiny bit of noise and they're easily fooled 😱 These are called "adversarial attacks," and they're a real headache! The standard defense so far, "adversarial training (AT)," makes the model tougher, but the problem is that it may actually make attacks stronger too!
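
To make the "tiny noise fools the model" part concrete, here is a minimal sketch of the classic one-step FGSM attack and the simplest form of adversarial training (AT), assuming a PyTorch image classifier. The names `model`, `optimizer`, `x`, `y` and the epsilon value are illustrative placeholders; the paper may well use a different attack recipe (e.g., multi-step PGD), which the summary above doesn't specify.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step FGSM: nudge each pixel in the direction that raises the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # The adversarial "noise" is epsilon times the sign of the input gradient.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

def at_step(model, optimizer, x, y, epsilon=8 / 255):
    """Simplest adversarial training step: train on the perturbed inputs."""
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()  # clear gradients left over from crafting x_adv
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The idea: by repeatedly showing the model its own worst-case perturbations, AT makes it harder to fool, and the paper's finding is that this same process can make the perturbations such a model produces transfer better to other models.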


Defense That Attacks: How Robust Models Become Better Attackers

Mohamed Awad / Mahmoud Akrm / Walid Gomaa

Deep learning has achieved great success in computer vision, but remains vulnerable to adversarial attacks. Adversarial training is the leading defense designed to improve model robustness. However, its effect on the transferability of attacks is underexplored. In this work, we ask whether adversarial training unintentionally increases the transferability of adversarial examples. To answer this, we trained a diverse zoo of 36 models, including CNNs and ViTs, and conducted comprehensive transferability experiments. Our results reveal a clear paradox: adversarially trained (AT) models produce perturbations that transfer more effectively than those from standard models, introducing a new ecosystem risk. To enable reproducibility and further study, we release all models, code, and experimental scripts. Furthermore, we argue that robustness evaluations should assess not only the resistance of a model to transferred attacks but also its propensity to produce transferable adversarial examples.
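
As a rough illustration of the transferability protocol the abstract describes: craft perturbations on one (source) model, then measure how often they fool a different (target) model. This is a hedged sketch, not the authors' released code; `source_model`, `target_model`, `loader`, and `attack` are hypothetical placeholders, with the FGSM sketch above standing in for whichever attack the paper actually uses.

```python
import torch

def transfer_success_rate(source_model, target_model, loader, attack, epsilon=8 / 255):
    """Craft adversarial examples on source_model and evaluate them on
    target_model; return the fraction the target misclassifies."""
    source_model.eval()
    target_model.eval()
    fooled, total = 0, 0
    for x, y in loader:
        # e.g. attack = fgsm_attack from the sketch above (a placeholder choice)
        x_adv = attack(source_model, x, y, epsilon)
        with torch.no_grad():
            preds = target_model(x_adv).argmax(dim=1)
        fooled += (preds != y).sum().item()
        total += y.numel()
    return fooled / total
```

In these terms, the paper's paradox is that this rate tends to be higher when `source_model` is adversarially trained than when it is a standard model.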

cs / cs.CV / cs.AI