Super-summary: Introducing HeurekaBench, a framework for evaluating AI co-scientists ✨
● A benchmark for testing whether AI can actually do research! The tasks are built from real papers and their code ♪
● It checks AI's true ability with open-ended research questions! Sounds hard, but impressive 😎
● In the IT industry, data analysis and R&D could get way more efficient! Exciting future ahead! 🚀
Background
LLM-based reasoning models have enabled the development of agentic systems that act as co-scientists, assisting in multi-step scientific analysis. However, evaluating these systems is challenging, as it requires realistic, end-to-end research scenarios that integrate data analysis, interpretation, and the generation of new insights from experimental data. To address this limitation, we introduce HeurekaBench, a framework for creating benchmarks with exploratory, open-ended research questions for experimental datasets. Each such question is grounded in a scientific study and its corresponding code repository, and is created using a semi-automated pipeline that leverages multiple LLMs to extract insights and generate candidate workflows, which are then verified against reported findings. We instantiate the framework in single-cell biology to obtain the sc-HeurekaBench benchmark and use it to compare state-of-the-art single-cell agents. We further showcase the benefits of our benchmark for quantitatively analyzing current design choices in agentic systems. We find that the addition of a critic module can improve ill-formed responses for open-source LLM-based agents by up to 22% and close the gap with their closed-source counterparts. Overall, HeurekaBench sets a path toward rigorous, end-to-end evaluation of scientific agents, grounding benchmark construction in real scientific workflows.
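The abstract does not give implementation details, but the critic module it mentions — a second model that reviews an agent's answer and asks for a revision when the answer is ill-formed — can be pictured as a simple generate–critique–revise loop. The sketch below is a minimal illustration under assumed interfaces; the `LLM` class and its `complete` method are hypothetical placeholders, not the paper's actual API, and the loop structure is an assumption rather than the authors' exact design.

```python
from dataclasses import dataclass


@dataclass
class LLM:
    """Hypothetical model wrapper; plug in a real backend here."""
    name: str

    def complete(self, prompt: str) -> str:
        raise NotImplementedError("connect to your model backend")


def answer_with_critic(agent: LLM, critic: LLM, question: str,
                       max_rounds: int = 3) -> str:
    """Generate an answer, then let a critic model flag ill-formed
    responses and request revisions (assumed design, not the paper's)."""
    answer = agent.complete(question)
    for _ in range(max_rounds):
        verdict = critic.complete(
            f"Question:\n{question}\n\nAnswer:\n{answer}\n\n"
            "Is this answer well-formed and grounded in the data? "
            "Reply 'OK' or list concrete problems."
        )
        if verdict.strip().upper().startswith("OK"):
            break  # critic accepts the answer as-is
        # Feed the critic's objections back to the agent for revision.
        answer = agent.complete(
            f"Question:\n{question}\n\nPrevious answer:\n{answer}\n\n"
            f"Critic feedback:\n{verdict}\n\nRevise the answer."
        )
    return answer
```

One plausible reading of the reported result is that this kind of loop catches responses that are malformed or unsupported before they are scored, which would explain why it helps open-source agents close the gap with closed-source ones.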