Published: 2026/1/7 5:30:53

Blocking LLM jailbreak attacks zero-shot?! For real!? 🤖✨

Ultra-short summary: It detects even unknown attacks! A new technique that supercharges LLM safety ♡


🌟 Gyaru-Style Sparkle Points ✨

● Handling unknown attacks is its superpower! It can block attacks that existing defenses couldn't stop 😳

● Cleverly exploiting the internal workings of the LLM (large language model) is genius-level! So smart~ 💖

● It gives IT companies solid safety measures so they can use LLMs with confidence 😍

Read the rest in the らくらく論文 app

ALERT: Zero-shot LLM Jailbreak Detection via Internal Discrepancy Amplification

Xiao Lin / Philip Li / Zhichen Zeng / Tingwei Li / Tianxin Wei / Xuying Ning / Gaotang Li / Yuzhong Chen / Hanghang Tong

Despite rich safety alignment strategies, large language models (LLMs) remain highly susceptible to jailbreak attacks, which compromise safety guardrails and pose serious security risks. Existing detection methods mainly rely on jailbreak templates present in the training data. However, few studies address the more realistic and challenging zero-shot jailbreak detection setting, where no jailbreak templates are available during training. This setting better reflects real-world scenarios where new attacks continually emerge and evolve. To address this challenge, we propose a layer-wise, module-wise, and token-wise amplification framework that progressively magnifies internal feature discrepancies between benign and jailbreak prompts. We uncover safety-relevant layers, identify specific modules that inherently encode zero-shot discriminative signals, and localize informative safety tokens. Building upon these insights, we introduce ALERT (Amplification-based Jailbreak Detector), an efficient and effective zero-shot jailbreak detector that introduces two independent yet complementary classifiers on amplified representations. Extensive experiments on three safety benchmarks demonstrate that ALERT achieves consistently strong zero-shot detection performance. Specifically, (i) across all datasets and attack strategies, ALERT reliably ranks among the top two methods, and (ii) it outperforms the second-best baseline by at least 10% in average Accuracy and F1-score, and sometimes by up to 40%.
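To make the abstract's idea concrete, here is a minimal, hypothetical sketch of amplification-based zero-shot detection: per-layer hidden states are compared against a benign centroid, the discrepancy is amplified with layer weights, and two complementary scorers (a discrepancy-magnitude score and a projection onto a "safety direction") flag a prompt if either fires. All function names, weights, and thresholds here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def amplified_features(hidden, benign_mean, layer_weights):
    """Amplify per-layer discrepancy from the benign centroid.

    hidden:        (L, d) per-layer representations of one prompt
    benign_mean:   (L, d) centroid of benign prompts (assumed precomputed)
    layer_weights: (L,)   amplification weights for safety-relevant layers
    """
    return (hidden - benign_mean) * layer_weights[:, None]

def distance_score(feat):
    # Scorer 1 (illustrative): magnitude of the amplified discrepancy.
    return float(np.linalg.norm(feat))

def direction_score(feat, safety_dir):
    # Scorer 2 (illustrative): projection of the layer-averaged
    # discrepancy onto an assumed "safety direction" in feature space.
    return float(feat.mean(axis=0) @ safety_dir)

def detect(hidden, benign_mean, layer_weights, safety_dir, t1, t2):
    # Flag the prompt as a jailbreak if either scorer exceeds its threshold;
    # the two scorers play the role of the complementary classifiers.
    feat = amplified_features(hidden, benign_mean, layer_weights)
    return distance_score(feat) > t1 or direction_score(feat, safety_dir) > t2
```

A benign prompt whose hidden states sit near the benign centroid yields small scores and passes, while a prompt whose representations drift far from it in the amplified space is flagged. In the paper's zero-shot setting, only benign data is assumed available for calibrating the centroid and thresholds.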

cs / cs.LG / cs.AI / cs.IR