Title & ultra-summary: Evaluating LRMs' algorithmic reasoning with AlgBench! Exploring its challenges and possibilities 💖
Gyaru-style sparkle points ✨
● They found a brand-new way to test whether a model really understands algorithms!
● It's amazing because it looks at how the algorithm itself works, not just whether the problem gets solved!
● They analyze in detail where AI stumbles and even propose ways to improve it!
Detailed explanation
Real-world use-case ideas 💡
● AI might come to write code (programs) for you automatically! That would lighten developers' workload 🥰
● Companies might use AI to work more efficiently! Cutting waste and boosting profits is no longer just a dream 💖
Reasoning ability has become a central focus in the advancement of Large Reasoning Models (LRMs). Although notable progress has been achieved on several reasoning benchmarks such as MATH500 and LiveCodeBench, existing benchmarks for algorithmic reasoning remain limited, failing to answer a critical question: Do LRMs truly master algorithmic reasoning? To answer this question, we propose AlgBench, an expert-curated benchmark that evaluates LRMs under an algorithm-centric paradigm. AlgBench consists of over 3,000 original problems spanning 27 algorithms, constructed by ACM algorithmic experts and organized under a comprehensive taxonomy, including Euclidean-structured, non-Euclidean-structured, non-optimized, local-optimized, global-optimized, and heuristic-optimized categories. Empirical evaluations on leading LRMs (e.g., Gemini-3-Pro, DeepSeek-v3.2-Speciale and GPT-o3) reveal substantial performance heterogeneity: while models perform well on non-optimized tasks (up to 92%), accuracy drops sharply to around 49% on globally optimized algorithms such as dynamic programming. Further analysis uncovers "strategic over-shifts", wherein models prematurely abandon correct algorithmic designs due to necessary low-entropy tokens. These findings expose fundamental limitations of problem-centric reinforcement learning and highlight the necessity of an algorithm-centric training paradigm for robust algorithmic reasoning.
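To make the "globally optimized" category concrete: a minimal sketch of the kind of dynamic-programming task the abstract points to, where the answer for the whole input depends on optimal answers to overlapping subproblems. This coin-change example is our own illustration, not a problem taken from AlgBench.

```python
def min_coins(coins, target):
    """Minimum number of coins summing to target, or -1 if impossible.

    Classic dynamic-programming formulation: dp[x] holds the optimal
    answer for amount x, built bottom-up from smaller subproblems.
    A greedy (locally optimized) strategy can fail here, which is why
    such tasks sit in the "global-optimized" bucket.
    """
    INF = float("inf")
    dp = [0] + [INF] * target
    for x in range(1, target + 1):
        for c in coins:
            if c <= x and dp[x - c] + 1 < dp[x]:
                dp[x] = dp[x - c] + 1
    return dp[target] if dp[target] != INF else -1

print(min_coins([1, 3, 4], 6))  # → 2 (3 + 3; greedy 4 + 1 + 1 would use 3 coins)
```

Note that the greedy choice (take the largest coin first) yields 4 + 1 + 1 here, so only the global DP search finds the optimum — the gap the benchmark's taxonomy is designed to probe.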