TL;DR: Medical AI is weak at anatomical variants!? A new benchmark lays the weaknesses bare 👀 and proposes fixes too!
✨ Gal-style sparkle points ✨ ● Could cut the risk of medical AI misdiagnosis 💖 ● AI may finally recognize rare body structures properly ✨ ● Tech companies might grab a new business opportunity 😍
Detailed explanation ● Background: It's the era of AI diagnosing medical images! But AI has been bad at spotting rare body shapes (anatomical variants) 😱 The cause: biased training data.
● Method: They built a brand-new test called AdversarialAnatomyBench! Using all kinds of medical images, they probed how well AI recognizes anatomical variants 😎
Read the rest in the "らくらく論文" app
Vision-language models are increasingly integrated into clinical workflows. However, existing benchmarks primarily assess performance on common anatomical presentations and fail to capture the challenges posed by rare variants. To address this gap, we introduce AdversarialAnatomyBench, the first benchmark comprising naturally occurring rare anatomical variants across diverse imaging modalities and anatomical regions. We term such variants, which violate learned priors about "typical" human anatomy, natural adversarial anatomy. Benchmarking 22 state-of-the-art VLMs with AdversarialAnatomyBench yielded three key insights. First, when queried with basic medical perception tasks, mean accuracy dropped from 74% on typical to 29% on atypical anatomy. Even the best-performing models, GPT-5, Gemini 2.5 Pro, and Llama 4 Maverick, showed performance drops of 41-51%. Second, model errors closely mirrored expected anatomical biases. Third, neither model scaling nor interventions, including bias-aware prompting and test-time reasoning, resolved these issues. These findings highlight a critical and previously unquantified limitation of current VLMs: their poor generalization to rare anatomical presentations. AdversarialAnatomyBench provides a foundation for systematically measuring and mitigating anatomical bias in multimodal medical AI systems.
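The headline numbers above (74% on typical vs. 29% on atypical anatomy) come down to stratifying accuracy by anatomy group. A minimal sketch of that computation is below; the data, label strings, and function name are illustrative assumptions, not part of the AdversarialAnatomyBench release.

```python
# Hypothetical sketch: compute per-group accuracy for a model's answers on a
# benchmark split into "typical" and "atypical" anatomy items. All example
# records are invented for illustration.

from collections import defaultdict

def stratified_accuracy(records):
    """Mean accuracy per anatomy group, given (group, prediction, label) tuples."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, prediction, label in records:
        totals[group] += 1
        hits[group] += prediction == label  # bool counts as 0/1
    return {g: hits[g] / totals[g] for g in totals}

# Toy answers from one model on four benchmark items.
records = [
    ("typical", "left kidney", "left kidney"),        # correct
    ("typical", "liver", "liver"),                    # correct
    ("atypical", "left kidney", "horseshoe kidney"),  # error toward the common prior
    ("atypical", "situs inversus", "situs inversus"), # correct
]
print(stratified_accuracy(records))
```

On this toy data the model scores 1.0 on typical and 0.5 on atypical items, and the atypical error defaults to the statistically common structure, mirroring the paper's finding that mistakes track learned anatomical priors.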