Published: 2026/1/7 2:00:39

Stabilizing Web Reasoning! What's Anchor-GRPO? 😎

Ultra-quick summary: It levels up web agents' smarts! A technique that improves their planning so web search becomes way more reliable ✨

🌟 Gal-Style Sparkle Points ✨
● The first step of the plan (the Plan Anchor) is super important! This method makes it better!
● Planning and execution are decoupled, so stability is MAXed out! 😎
● It works on models from 3B to 30B! Effective no matter the model size, which is the strongest 💖

Now for the detailed explanation~!

Background: LLM-based web agents are amazing, but on long-horizon web tasks (long jobs) their planning tends to become unstable 😢 For example, they can't find what they're searching for, or they keep visiting irrelevant sites... This work tackles exactly those problems!

Continued in the 「らくらく論文」 app

WebAnchor: Anchoring Agent Planning to Stabilize Long-Horizon Web Reasoning

Xinmiao Yu / Liwen Zhang / Xiaocheng Feng / Yong Jiang / Bing Qin / Pengjun Xie / Jingren Zhou

Large Language Model (LLM)-based agents have shown strong capabilities in web information seeking, with reinforcement learning (RL) becoming a key optimization paradigm. However, planning remains a bottleneck, as existing methods struggle with long-horizon strategies. Our analysis reveals a critical phenomenon, the plan anchor, where the first reasoning step disproportionately impacts downstream behavior in long-horizon web reasoning tasks. Current RL algorithms fail to account for this by uniformly distributing rewards across the trajectory. To address this, we propose Anchor-GRPO, a two-stage RL framework that decouples planning and execution. In Stage 1, the agent optimizes its first-step planning using fine-grained rubrics derived from self-play experiences and human calibration. In Stage 2, execution is aligned with the initial plan through sparse rewards, ensuring stable and efficient tool usage. We evaluate Anchor-GRPO on four benchmarks: BrowseComp, BrowseComp-Zh, GAIA, and XBench-DeepSearch. Across models from 3B to 30B, Anchor-GRPO outperforms baseline GRPO and First-step GRPO, improving task success and tool efficiency. Notably, WebAnchor-30B achieves 46.0% pass@1 on BrowseComp and 76.4% on GAIA. Anchor-GRPO also demonstrates strong scalability, achieving higher accuracy as model size and context length increase.
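To make the two-stage idea concrete, here is a minimal Python sketch of how the reward split described in the abstract might look. Everything in it is a hypothetical illustration, not the paper's implementation: the `Trajectory` fields, the toy `rubric_score` heuristic, and the group-normalized advantages are all assumptions. In the actual work, the Stage-1 rubrics are derived from self-play experiences and human calibration, and Stage 2 uses sparse rewards to align execution with the initial plan.

```python
import statistics
from dataclasses import dataclass

@dataclass
class Trajectory:
    first_step: str       # the initial plan, i.e. the "plan anchor"
    execution_steps: list  # subsequent tool calls / reasoning steps
    task_succeeded: bool   # sparse final outcome of the web task

def rubric_score(plan: str) -> float:
    """Hypothetical Stage-1 reward: score only the first-step plan
    against fine-grained rubrics (a toy stand-in for the paper's
    self-play + human-calibrated rubrics)."""
    rubrics = [
        "search" in plan.lower(),    # does it name concrete tool usage?
        "then" in plan.lower(),      # does it sequence sub-steps?
        len(plan.split()) > 10,      # is it detailed enough?
    ]
    return sum(rubrics) / len(rubrics)

def group_advantages(scores: list) -> list:
    """GRPO-style advantage: normalize rewards within a sampled group."""
    mean = statistics.mean(scores)
    std = statistics.pstdev(scores) or 1.0  # avoid division by zero
    return [(s - mean) / std for s in scores]

def stage1_rewards(group: list) -> list:
    """Stage 1: optimize the plan anchor alone, ignoring later steps."""
    return group_advantages([rubric_score(t.first_step) for t in group])

def stage2_rewards(group: list) -> list:
    """Stage 2: sparse outcome reward for executing the anchored plan."""
    return group_advantages([1.0 if t.task_succeeded else 0.0 for t in group])
```

The point of the split: instead of smearing one trajectory-level reward uniformly across all steps (the failure mode the paper attributes to standard GRPO), Stage 1 concentrates dense rubric feedback on the first reasoning step, and Stage 2 only has to keep execution consistent with an already-good plan.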

cs / cs.CL