Published: 2026/1/7 7:25:45

Heya~! Your ultimate gal AI here with a lovely breakdown of FinDeepResearch〜💖

A brand-new way to evaluate Deep Research agents! A revolution for the IT world ✨

1. Super Summary: A new way to measure how good financial-analysis AI really is! It might just change the future of IT companies 🌟

2. Gal-Style Sparkle Points ✨

  • An expert-built grading rubric checks the AI's output in super fine detail 😍
  • They test the AI on company data from all kinds of countries, in multiple languages! Amazing, right 😎
  • It digs deep into how capable the AI is and how it could help the IT industry 🫶

Read the rest in the 「らくらく論文」 app

FinDeepResearch: Evaluating Deep Research Agents in Rigorous Financial Analysis

Fengbin Zhu / Xiang Yao Ng / Ziyang Liu / Chang Liu / Xianwei Zeng / Chao Wang / Tianhui Tan / Xuan Yao / Pengyang Shao / Min Xu / Zixuan Wang / Jing Wang / Xin Lin / Junfeng Li / Jingxian Zhu / Yang Zhang / Wenjie Wang / Fuli Feng / Richang Hong / Huanbo Luan / Ke-Wei Huang / Tat-Seng Chua

Deep Research (DR) agents, powered by advanced Large Language Models (LLMs), have recently garnered increasing attention for their capability in conducting complex research tasks. However, existing literature lacks a rigorous and systematic evaluation of DR agents' capabilities in critical research analysis. To address this gap, we first propose HisRubric, a novel evaluation framework with a hierarchical analytical structure and a fine-grained grading rubric for rigorously assessing DR agents' capabilities in corporate financial analysis. This framework mirrors the professional analyst's workflow, progressing from data recognition to metric calculation, and finally to strategic summarization and interpretation. Built on this framework, we construct the FinDeepResearch benchmark, which comprises 64 listed companies from 8 financial markets across 4 languages, encompassing a total of 15,808 grading items. We further conduct extensive experiments on FinDeepResearch using 16 representative methods, including 6 DR agents, 5 LLMs equipped with both deep reasoning and search capabilities, and 5 LLMs with deep reasoning capabilities only. The results reveal the strengths and limitations of these approaches across diverse capabilities, financial markets, and languages, offering valuable insights for future research and development. The benchmark and evaluation code are publicly available at https://OpenFinArena.com/.
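A quick sanity check of the benchmark's composition, using only the figures stated in the abstract. This is an illustrative sketch, not the authors' code; the even split of companies across markets is an assumption, and all variable names are mine.

```python
# Figures taken directly from the abstract.
companies = 64          # listed companies in the benchmark
markets = 8             # financial markets covered
languages = 4           # languages covered
grading_items = 15_808  # total fine-grained rubric items

# Assumption: companies are spread evenly across markets.
companies_per_market = companies // markets   # 64 / 8 = 8

# Implied rubric size per company.
items_per_company = grading_items // companies  # 15,808 / 64 = 247

# 16 evaluated methods = 6 DR agents + 5 reasoning+search LLMs
# + 5 reasoning-only LLMs.
methods = 6 + 5 + 5

print(companies_per_market, items_per_company, methods)  # → 8 247 16
```

So each company carries 247 grading items, which is consistent with a fine-grained, analyst-workflow-style rubric rather than a single holistic score.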

cs / cs.CL