I. Research Overview
Super-short summary: Are LRMs, AI's reasoning brains, better than LLM judges? We put it to the test! Leveling up AI evaluation ⤴️
Sparkle points ✨ ● We want to make LLM (AI) evaluation even better! ● Spotlight on LRMs, models with boosted reasoning (thinking power) 👀 ● Aiming for fair evaluation by reducing bias ✨
Detailed Explanation
This paper presents the first systematic comparison investigating whether Large Reasoning Models (LRMs) are superior judges compared to non-reasoning LLMs. Our empirical analysis yields four key findings: 1) LRMs outperform non-reasoning LLMs in judgment accuracy, particularly on reasoning-intensive tasks; 2) LRMs demonstrate superior instruction-following capabilities in evaluation contexts; 3) LRMs exhibit enhanced robustness against adversarial attacks targeting judgment tasks; 4) however, LRMs still exhibit strong biases related to superficial quality. To improve robustness against these biases, we propose PlanJudge, an evaluation strategy that prompts the model to generate an explicit evaluation plan before execution. Despite its simplicity, our experiments demonstrate that PlanJudge significantly mitigates biases in both LRMs and standard LLMs.
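The plan-then-execute idea behind PlanJudge can be sketched as a prompt-construction helper. This is a minimal illustration only: the function name and prompt wording are assumptions, not the paper's exact implementation.

```python
def build_planjudge_prompt(instruction: str, response_a: str, response_b: str) -> str:
    """Build a two-stage PlanJudge-style judging prompt (illustrative wording).

    Step 1 asks the judge model to write an explicit evaluation plan before
    judging; step 2 asks it to execute that plan; only then does it give a
    verdict. The hypothesis is that committing to criteria up front reduces
    superficial-quality biases such as favoring longer or prettier responses.
    """
    return (
        "You are an impartial judge comparing two responses to a task.\n\n"
        f"Task instruction:\n{instruction}\n\n"
        f"Response A:\n{response_a}\n\n"
        f"Response B:\n{response_b}\n\n"
        "Step 1: Before judging, write an explicit evaluation plan: list the "
        "criteria you will check and how you will weigh them.\n"
        "Step 2: Execute the plan criterion by criterion, citing evidence "
        "from each response.\n"
        "Step 3: Only after executing the plan, output your final verdict "
        "as 'A', 'B', or 'Tie'."
    )
```

The resulting string would be sent to the judge model (LRM or LLM) in place of a direct "which response is better?" prompt.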