Published: 2025/11/8 2:24:05

LLM Agents 💅 Safety UP! Catch Errors Instantly ☆

Ultra-short summary: magic 🧙‍♀️ that instantly spots reasoning errors in LLMs (large language models)

✨ Gal-Style Sparkle Points ✨
● Spots errors during inference (inline)! Post-hoc checking is so last season~?
● It analyzes the attention mechanism, so no extra training is needed!
● Errors get caught with a metric called HFER (high-frequency energy ratio) ✨

🌟 Detailed Explanation 🌟
Background: LLM agents can do all sorts of things, but every now and then they say something totally off the mark, right? 😱 Traditional safety checks only notice "oh, that's wrong!" after looking at the output, so they're a bit too slow 💦

Method: This work develops a way to spot errors while the LLM is still thinking (mid-inference)! 👀 It peeks inside the LLM's head (the attention mechanism) and checks for errors with a metric called HFER ✨
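For anyone who wants to poke at the idea, here is a minimal sketch of what an HFER + spectral-entropy check could look like. The excerpt does not give the paper's exact definitions, so the symmetrized-attention Laplacian, the choice of graph signal, and the 50% frequency cutoff below are all assumptions for illustration, not the authors' method.

```python
# Minimal sketch (not the paper's exact code): treat one attention head's
# matrix as a weighted token graph, then measure how much signal energy
# sits in the high-frequency part of the graph Laplacian spectrum.
import numpy as np

def hfer_and_spectral_entropy(attn: np.ndarray, signal: np.ndarray,
                              high_freq_fraction: float = 0.5):
    """attn: (T, T) attention weights for one head in an early layer.
    signal: (T,) a graph signal over tokens, e.g. hidden-state norms
    (assumption; the excerpt does not say which signal is used).
    high_freq_fraction: assumed cutoff for "high frequency" eigenmodes."""
    # Symmetrize attention into an undirected weighted adjacency matrix.
    W = 0.5 * (attn + attn.T)
    d = W.sum(axis=1)
    L = np.diag(d) - W                    # combinatorial graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    coeffs = eigvecs.T @ signal           # graph Fourier transform
    energy = coeffs ** 2
    total = energy.sum() + 1e-12
    # HFER: fraction of energy carried by the top (high-frequency) modes.
    cutoff = int(len(energy) * (1.0 - high_freq_fraction))
    hfer = energy[cutoff:].sum() / total
    # Spectral entropy of the normalized energy distribution.
    p = energy / total
    spectral_entropy = -np.sum(p * np.log(p + 1e-12))
    return hfer, spectral_entropy
```

Since this reads only quantities already produced by the forward pass (attention weights and hidden states), no extra training or backward pass is needed, which matches the paper's "no additional training" claim.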

Read the rest in the 「らくらく論文」 app

Catching Contamination Before Generation: Spectral Kill Switches for Agents

Valentin Noël

Agentic language models compose multi-step reasoning chains, yet intermediate steps can be corrupted by inconsistent context, retrieval errors, or adversarial inputs, which makes post-hoc evaluation too late because errors propagate before detection. We introduce a diagnostic that requires no additional training and uses only the forward pass to emit a binary accept-or-reject signal during agent execution. The method analyzes token graphs induced by attention and computes two spectral statistics in early layers, namely the high-frequency energy ratio and spectral entropy. We formalize these signals, establish invariances, and provide finite-sample estimators with uncertainty quantification. Under a two-regime mixture assumption with a monotone likelihood ratio property, we show that a single threshold on the high-frequency energy ratio is optimal in the Bayes sense for detecting context inconsistency. Empirically, the high-frequency energy ratio exhibits robust bimodality during context verification across multiple model families, which enables gating decisions with overhead below one millisecond on our hardware and configurations. We demonstrate integration into retrieval-augmented agent pipelines and discuss deployment as an inline safety monitor. The approach detects contamination while the model is still processing the text, before errors commit to the reasoning chain.
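As a companion to the HFER sketch above, the fragment below illustrates the single-threshold gate the abstract describes. The threshold value, the direction of the test (high HFER read as contamination), and the pipeline hook names are illustrative assumptions; the abstract only states that one threshold on HFER is Bayes-optimal under its two-regime mixture assumption.

```python
# Hedged sketch of the inline "spectral kill switch" gate: a single
# threshold on the high-frequency energy ratio (HFER) yields a binary
# accept/reject signal during the forward pass, before generation commits.
from dataclasses import dataclass

@dataclass
class GateDecision:
    accept: bool
    hfer: float

def spectral_gate(hfer: float, threshold: float = 0.35) -> GateDecision:
    # 0.35 is a placeholder, not a value from the paper; the paper derives
    # the optimal threshold from a two-regime mixture with a monotone
    # likelihood ratio. Direction (high HFER = reject) is our assumption.
    return GateDecision(accept=hfer < threshold, hfer=hfer)

# Hypothetical wiring into a retrieval-augmented agent step:
#   hfer, _ = hfer_and_spectral_entropy(early_layer_attention, token_signal)
#   if not spectral_gate(hfer).accept:
#       re_retrieve_or_abstain()  # drop the context before errors propagate
```

Because the gate is a comparison against a precomputed statistic, its cost is dominated by the eigendecomposition over short early-layer token graphs, consistent with the sub-millisecond overhead the abstract reports for its hardware and configurations.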

cs / cs.LG / cs.SY / eess.SP / eess.SY / stat.ML