Published: 2026/1/4 19:51:51

LLM Safety: Time and Multilingualism Are the Key 🔑

Ultra summary: LLMs get more dangerous depending on language and time! Countermeasures are a must ☆

Gyaru-style sparkly points ✨

● English alone isn't enough! Keep other languages 🌍 safe too!
● Questions about the past or future 🗓️ change the danger level, for real!?
● This is about the future of keeping IT services 💻 safe!

Detailed Explanation

Background: LLMs (Large Language Models) are amazing, but they come with plenty of dangers 😱 Even if a model behaves in English, it might start saying sketchy things in other languages! And asking about the past or the future might make it even more dangerous...? For an IT industry that keeps going global 🌎, this is a problem that can't be ignored.

Method: They audited three super-capable LLMs: GPT-5.1, Gemini 3 Pro, and Claude 4.5 Opus 🧐 The experiments used data on West African scams and weapon manufacturing 🔫, in both English and Hausa! They also tested across past, present, and future time frames ⏰!

Read the rest in the 「らくらく論文」 app

Safe in the Future, Dangerous in the Past: Dissecting Temporal and Linguistic Vulnerabilities in LLMs

Muhammad Abdullahi Said / Muhammad Sammani Sani

As Large Language Models (LLMs) integrate into critical global infrastructure, the assumption that safety alignment transfers zero-shot from English to other languages remains a dangerous blind spot. This study presents a systematic audit of three state-of-the-art models (GPT-5.1, Gemini 3 Pro, and Claude 4.5 Opus) using HausaSafety, a novel adversarial dataset grounded in West African threat scenarios (e.g., Yahoo-Yahoo fraud, Dane gun manufacturing). Employing a 2 × 4 factorial design across 1,440 evaluations, we tested the non-linear interaction between language (English vs. Hausa) and temporal framing. Our results challenge the narrative of the multilingual safety gap. Instead of a simple degradation in low-resource settings, we identified a complex interference mechanism in which safety is determined by the intersection of variables. Although the models exhibited a reverse linguistic vulnerability, with Claude 4.5 Opus proving significantly safer in Hausa (45.0%) than in English (36.7%) due to uncertainty-driven refusal, they suffered catastrophic failures in temporal reasoning. We report a profound Temporal Asymmetry, where past-tense framing bypassed defenses (15.6% safe) while future-tense scenarios triggered hyper-conservative refusals (57.2% safe). The magnitude of this volatility is illustrated by a 9.2x disparity between the safest and most vulnerable configurations, proving that safety is not a fixed property but a context-dependent state. We conclude that current models rely on superficial heuristics rather than robust semantic understanding, creating Safety Pockets that leave Global South users exposed to localized harms. We propose Invariant Alignment as a necessary paradigm shift to ensure safety stability across linguistic and temporal shifts.
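The abstract's factorial analysis boils down to aggregating safe-response rates per (language, temporal framing) cell and comparing the extremes. The sketch below shows one way such an aggregation could look; the record format, labels, and counts are hypothetical illustrations, not the paper's actual data or code.

```python
from collections import defaultdict

# Hypothetical evaluation records: (language, temporal_frame, judged_safe).
# Labels and outcomes are illustrative only.
evaluations = [
    ("English", "past", False), ("English", "past", True),
    ("English", "future", True), ("English", "future", True),
    ("Hausa", "past", False), ("Hausa", "past", False),
    ("Hausa", "future", True), ("Hausa", "future", False),
]

def safe_rates(records):
    """Compute the safe-response rate for each (language, frame) cell."""
    totals = defaultdict(int)
    safes = defaultdict(int)
    for lang, frame, judged_safe in records:
        totals[(lang, frame)] += 1
        safes[(lang, frame)] += judged_safe
    return {cell: safes[cell] / totals[cell] for cell in totals}

rates = safe_rates(evaluations)

# Disparity between the safest and most vulnerable non-zero cells,
# analogous in spirit to the paper's reported 9.2x gap.
nonzero = [r for r in rates.values() if r > 0]
disparity = max(nonzero) / min(nonzero)
```

With real data, each cell would hold many prompts per model, and the max/min ratio across all configurations yields the kind of volatility figure the authors report.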

cs / cs.CL