Published: 2026/1/11 10:51:03

Math problems are a breeze with DAGGER! The ultimate AI is born ✨

A super-smart AI learns to ignore the distractors in math problems and do the actual calculation properly! Amazing 💖

✨ Sparkly Highlights ✨

● Not fooled by extra info! It's robust to distractors, so even long problem statements are no problem 🎵
● Computational graphs make it clear! Turning the problem into a graph shows at a glance exactly what needs to be computed 👀✨
● Gemma-3 levels up! With SFT and GRPO, it transforms into the ultimate math AI 🦄💕

On to the detailed explanation!


†DAGGER: Distractor-Aware Graph Generation for Executable Reasoning in Math Problems

Zabir Al Nazi / Shubhashis Roy Dipta / Sudipta Kar

Chain-of-Thought (CoT) prompting is widely adopted for mathematical problem solving, including in low-resource languages, yet its behavior under irrelevant context remains underexplored. To systematically study this challenge, we introduce DISTRACTMATH-BN, a Bangla benchmark that augments MGSM and MSVAMP with semantically coherent but computationally irrelevant information. Evaluating seven models ranging from 3B to 12B parameters, we observe substantial performance degradation under distractors: standard models drop by up to 41 points, while reasoning-specialized models decline by 14 to 20 points despite consuming five times more tokens. We propose †DAGGER, which reformulates mathematical problem solving as executable computational graph generation with explicit modeling of distractor nodes. Fine-tuning Gemma-3 models using supervised fine-tuning followed by Group Relative Policy Optimization achieves comparable weighted accuracy on augmented benchmarks while using 89 percent fewer tokens than reasoning models. Importantly, this robustness emerges without explicit training on distractor-augmented examples. Our results suggest that enforcing structured intermediate representations improves robustness and inference efficiency in mathematical reasoning compared to free-form approaches, particularly in noisy, low-resource settings.
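The abstract's core idea, reformulating a math problem as an executable computational graph in which irrelevant facts are explicitly flagged as distractor nodes, can be sketched roughly as follows. This is a minimal illustration of the general technique, not the paper's actual graph format; all class names, fields, and the example problem are assumptions made for demonstration:

```python
# Illustrative sketch: a word problem parsed into a computational graph.
# Nodes are either leaf values or operations over other nodes; nodes
# flagged as distractors are never visited when executing the answer.
from __future__ import annotations
import operator
from dataclasses import dataclass, field

OPS = {"add": operator.add, "sub": operator.sub, "mul": operator.mul}

@dataclass
class Node:
    name: str
    value: float | None = None            # leaf value, if any
    op: str | None = None                 # operation for internal nodes
    inputs: list[str] = field(default_factory=list)
    distractor: bool = False              # flagged as irrelevant context

def execute(graph: dict[str, Node], target: str) -> float:
    """Recursively evaluate only the ancestors of `target`."""
    node = graph[target]
    if node.distractor:
        raise ValueError(f"{target} is a distractor; it must not feed the answer")
    if node.value is not None:
        return node.value
    args = [execute(graph, name) for name in node.inputs]
    return OPS[node.op](*args)

# "Alice has 3 bags with 4 apples each. Her brother is 12 years old.
#  How many apples does Alice have?"  (brother's age is the distractor)
graph = {
    "bags": Node("bags", value=3),
    "apples_per_bag": Node("apples_per_bag", value=4),
    "brother_age": Node("brother_age", value=12, distractor=True),
    "total": Node("total", op="mul", inputs=["bags", "apples_per_bag"]),
}
print(execute(graph, "total"))  # → 12
```

The point of the structured representation is visible here: because the distractor is an explicit, disconnected node rather than text buried in a long prompt, executing the answer subgraph simply never touches it, which is one plausible reading of why the paper reports robustness without distractor-augmented training data.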

cs / cs.CL / cs.LG