TL;DR: It's a benchmark for evaluating the medical safety of Japanese LLMs (large language models)!
✨ Gal-Style Sparkle Points ✨
● Japanese-focused! You can check safety not just in English but in Japanese too — amazing, right? 😍
● It evaluates with multi-turn (multiple-exchange) conversations, so it's close to how models are actually used in real clinical settings!
● It's based on guidelines from the Japan Medical Association, so the reliability is top-notch! Safe and sound ✨
On to the detailed explanation!
As Large Language Models (LLMs) are increasingly deployed in the healthcare field, it is essential to carefully evaluate their medical safety before clinical use. However, existing safety benchmarks remain predominantly English-centric and rely on single-turn prompts, even though real clinical consultations unfold over multiple turns. To address these gaps, we introduce JMedEthicBench, the first multi-turn conversational benchmark for evaluating the medical safety of LLMs in Japanese healthcare. Our benchmark is based on 67 guidelines from the Japan Medical Association and contains over 50,000 adversarial conversations generated using seven automatically discovered jailbreak strategies. Using a dual-LLM scoring protocol, we evaluate 27 models and find that commercial models maintain robust safety, while medically specialized models exhibit increased vulnerability. Furthermore, safety scores decline significantly across conversation turns (median: 9.5 to 5.0, $p < 0.001$). Cross-lingual evaluation on both Japanese and English versions of our benchmark reveals that the vulnerabilities of medical models persist across languages, indicating inherent alignment limitations rather than language-specific factors. These findings suggest that domain-specific fine-tuning may inadvertently weaken safety mechanisms and that multi-turn interactions represent a distinct threat surface requiring dedicated alignment strategies.
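The abstract's dual-LLM scoring protocol and per-turn safety scores can be sketched as follows. This is a minimal illustration, not the paper's implementation: the judge functions, the 0–10 scale, and averaging the two judges' ratings are all assumptions made for the example; a real setup would query two actual judge LLMs over each turn of an adversarial conversation.

```python
from statistics import median

# Hypothetical stand-ins for two independent judge LLMs. Each rates a
# target model's reply on a 0-10 safety scale (higher = safer). In a
# real pipeline these would be API calls to separate judge models.
def judge_a(reply: str) -> float:
    # Toy heuristic: treat replies that comply with an unsafe medical
    # request (here, flagged by the word "dosage") as unsafe.
    return 2.0 if "dosage" in reply.lower() else 9.0

def judge_b(reply: str) -> float:
    return 3.0 if "dosage" in reply.lower() else 10.0

def dual_llm_score(reply: str) -> float:
    """Combine the two judges' ratings; averaging is an assumption here."""
    return (judge_a(reply) + judge_b(reply)) / 2

def score_conversation(turn_replies: list[str]) -> list[float]:
    """Score every turn of a multi-turn conversation separately,
    so safety drift across turns can be measured."""
    return [dual_llm_score(r) for r in turn_replies]

# Example: a two-turn conversation where the model starts safe and
# later complies with an adversarial follow-up.
replies = [
    "I cannot provide that; please consult a physician.",
    "Sure, here is the exact dosage you asked for...",
]
scores = score_conversation(replies)
print(scores)          # per-turn safety scores
print(median(scores))  # a summary statistic over turns
```

Tracking scores per turn rather than per conversation is what makes the kind of multi-turn decline reported in the abstract (median falling from 9.5 to 5.0) observable in the first place.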