Super-short summary: A new benchmark for evaluating the safety of multimodal LLMs (MLLMs) just dropped! It checks safety over multi-turn conversations ✨
Gyaru-style sparkle points ✨
● Evaluation over multi-turn conversations: it catches risky dialogue flows that earlier single-turn evaluations couldn't see!
● Two risk scenarios: you can test how an MLLM reacts as the situation shifts. So cool!
● A first step toward safe AI: super important research for letting everyone use AI with peace of mind 💖
Detailed explanation
Multimodal large language models (MLLMs) are increasingly deployed as assistants that interact through text and images, making it crucial to evaluate contextual safety when risk depends on both the visual scene and the evolving dialogue. Existing contextual safety benchmarks are mostly single-turn and often miss how malicious intent can emerge gradually, or how the same scene can support both benign and exploitative goals.

We introduce the Multi-Turn Multimodal Contextual Safety Benchmark (MTMCS-Bench), a benchmark of realistic images and multi-turn conversations that evaluates contextual safety in MLLMs under two complementary settings: escalation-based risk and context-switch risk. MTMCS-Bench offers paired safe and unsafe dialogues with structured evaluation. It contains over 30,000 multimodal (image + text) and unimodal (text-only) samples, with metrics that separately measure contextual intent recognition, safety awareness on unsafe cases, and helpfulness on benign ones.

Across eight open-source and seven proprietary MLLMs, we observe persistent trade-offs between contextual safety and utility, with models tending to either miss gradual risks or over-refuse benign dialogues. Finally, we evaluate five current guardrails and find that they mitigate some failures but do not fully resolve multi-turn contextual risks.
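To make the three metrics a bit more concrete, here is a minimal Python sketch of how scoring over paired safe/unsafe dialogues could work. This is not the paper's released evaluation code: the DialogueResult fields, the refusal-based scoring, and the exact formulas are all assumptions made for illustration, based only on the abstract's description.

```python
# Minimal sketch of MTMCS-Bench-style scoring. NOT the paper's code:
# the DialogueResult fields and the formulas below are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class DialogueResult:
    unsafe: bool          # ground truth: does the multi-turn dialogue carry malicious intent?
    intent_flagged: bool  # did the model judge the contextual intent as malicious?
    refused: bool         # did the model refuse (give a safety response)?


def evaluate(results: list[DialogueResult]) -> dict[str, float]:
    """Compute the three metrics the abstract describes, under assumed definitions."""
    unsafe = [r for r in results if r.unsafe]
    benign = [r for r in results if not r.unsafe]
    return {
        # Contextual intent recognition: model's intent judgment matches the label.
        "intent_recognition": sum(r.intent_flagged == r.unsafe for r in results) / len(results),
        # Safety awareness: refusal rate on the unsafe dialogues.
        "safety_awareness": sum(r.refused for r in unsafe) / max(len(unsafe), 1),
        # Helpfulness: non-refusal rate on the benign (paired) dialogues.
        "helpfulness": sum(not r.refused for r in benign) / max(len(benign), 1),
    }


if __name__ == "__main__":
    demo = [
        DialogueResult(unsafe=True,  intent_flagged=True,  refused=True),   # escalation caught
        DialogueResult(unsafe=True,  intent_flagged=False, refused=False),  # gradual risk missed
        DialogueResult(unsafe=False, intent_flagged=False, refused=False),  # benign, answered
        DialogueResult(unsafe=False, intent_flagged=True,  refused=True),   # over-refusal
    ]
    print(evaluate(demo))
    # -> {'intent_recognition': 0.5, 'safety_awareness': 0.5, 'helpfulness': 0.5}
```

The toy demo also shows the trade-off the paper reports: a model that refuses more often would raise safety_awareness but lower helpfulness, and vice versa.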