Published: 2025/12/25 5:36:31

Heya~! Your ultimate gyaru paper-explainer AI is here! ✨ Let's hype up this paper! 💖

An Attack That Smashes the Safety of Mixture-of-Experts LLMs 👊💥

Super summary: They found an attack that exploits a weakness in MoE LLMs (the smart kind of AI)! We'd better build defenses so these models can't be pushed into doing dangerous stuff 💖

✨ Sparkly Gyaru Highlights ✨
● No training needed for the attack, which is wild! It works purely at inference time (when the AI is thinking) 😎
● Works across all kinds of MoE LLMs! Even the famous ones might get torn apart 🤣
● It can even attack image-reading AIs (VLMs), how crazy is that?! 😳

Now for the detailed breakdown~!

Read the rest in the 「らくらく論文」 app

GateBreaker: Gate-Guided Attacks on Mixture-of-Expert LLMs

Lichao Wu / Sasha Behrouzi / Mohamadreza Rostami / Stjepan Picek / Ahmad-Reza Sadeghi

Mixture-of-Experts (MoE) architectures have advanced the scaling of Large Language Models (LLMs) by activating only a sparse subset of parameters per input, enabling state-of-the-art performance with reduced computational cost. As these models are increasingly deployed in critical domains, understanding and strengthening their alignment mechanisms is essential to prevent harmful outputs. However, existing LLM safety research has focused almost exclusively on dense architectures, leaving the unique safety properties of MoEs largely unexamined. The modular, sparsely-activated design of MoEs suggests that safety mechanisms may operate differently than in dense models, raising questions about their robustness. In this paper, we present GateBreaker, the first training-free, lightweight, and architecture-agnostic attack framework that compromises the safety alignment of modern MoE LLMs at inference time. GateBreaker operates in three stages: (i) gate-level profiling, which identifies safety experts disproportionately routed on harmful inputs, (ii) expert-level localization, which pinpoints the safety structure within those experts, and (iii) targeted safety removal, which disables the identified safety structure to compromise the safety alignment. Our study shows that MoE safety concentrates within a small subset of neurons coordinated by sparse routing. Selectively disabling these neurons, approximately 3% of the neurons in the targeted expert layers, significantly increases the average attack success rate (ASR) from 7.4% to 64.9% against eight of the latest aligned MoE LLMs with limited utility degradation. These safety neurons transfer across models within the same family, raising ASR from 17.9% to 67.7% in a one-shot transfer attack. Furthermore, GateBreaker generalizes to five MoE vision-language models (VLMs), achieving 60.9% ASR on unsafe image inputs.
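To make the three stages concrete, here is a minimal, self-contained Python sketch on synthetic data. Every function name, the toy scoring rules (a routing-frequency gap for stage i, an activation gap for stage ii), and all shapes are illustrative assumptions of mine; the abstract does not specify the paper's actual profiling or localization procedures, and this is not the authors' code.

```python
# Toy sketch of the three GateBreaker stages described in the abstract.
# All names and scoring heuristics here are assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# Toy MoE layer: a gate routes each token to the top-k of E experts;
# each expert is an MLP whose HIDDEN neurons we can ablate.
E, K, HIDDEN = 8, 2, 128

def route(gate_logits, k=K):
    """Return the indices of the top-k experts for one token."""
    return np.argsort(gate_logits)[-k:]

# Stage (i): gate-level profiling -- count how often each expert is routed
# on harmful vs. benign inputs; experts routed disproportionately often on
# harmful inputs are candidate "safety experts".
def profile_gates(harmful_logits, benign_logits):
    def freq(batch):
        counts = np.zeros(E)
        for logits in batch:
            counts[route(logits)] += 1
        return counts / len(batch)
    return freq(harmful_logits) - freq(benign_logits)  # routing-frequency gap

# Stage (ii): expert-level localization -- inside a safety expert, score each
# hidden neuron by its mean activation gap between harmful and benign inputs,
# and keep roughly the top 3% (the fraction the abstract reports disabling).
def locate_safety_neurons(harmful_acts, benign_acts, frac=0.03):
    gap = harmful_acts.mean(axis=0) - benign_acts.mean(axis=0)
    n = max(1, int(frac * HIDDEN))
    return np.argsort(np.abs(gap))[-n:]

# Stage (iii): targeted safety removal -- disable the identified neurons by
# zeroing their rows in the expert's output projection (equivalent to
# masking their activations at inference time; no training involved).
def disable_neurons(expert_out_weights, neuron_ids):
    patched = expert_out_weights.copy()
    patched[neuron_ids, :] = 0.0
    return patched

# --- demo on synthetic data -------------------------------------------------
harmful_gate = rng.normal(size=(100, E)) + np.eye(E)[3] * 2  # bias expert 3
benign_gate = rng.normal(size=(100, E))
gap = profile_gates(harmful_gate, benign_gate)
safety_expert = int(np.argmax(gap))
print(f"candidate safety expert: {safety_expert} (routing gap {gap[safety_expert]:+.2f})")

harmful_acts = rng.normal(size=(100, HIDDEN))
harmful_acts[:, :4] += 3.0  # plant a few "safety neurons"
benign_acts = rng.normal(size=(100, HIDDEN))
neurons = locate_safety_neurons(harmful_acts, benign_acts)
print(f"neurons to disable in expert {safety_expert}: {sorted(neurons.tolist())}")

w_out = rng.normal(size=(HIDDEN, 32))
w_patched = disable_neurons(w_out, neurons)
assert np.allclose(w_patched[neurons], 0.0)
```

Note the design point this illustrates: zeroing a neuron's output-projection rows is a pure weight edit applied once before serving, which is consistent with the abstract's claim that the attack is training-free and operates at inference time.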

cs / cs.CR