Published: 2026/1/2 19:54:40

LM agents, growing up smart✨ (Hindsight Trajectory Rewriting)

Ultra-short summary: The secret to making language models smarter! Magic that learns from failures and grows from just a little experience 🧙‍♀️

🌟 Gyaru-style sparkle points
● Turn failures into treasure! It digs up hints for success from past data 💖
● A little data is enough! Cutting the training cost way down is seriously touching 🥺
● Feels like it'll shine in all kinds of fields! The future is totally lit, right? 🚀

Here comes the detailed breakdown!

Background: Recent language models (LMs) are amazing, but they're a bit weak at new things 💔 When data is scarce, their performance just doesn't show 😢 This research looks for ways LMs can grow smart from only a small amount of experience, so they can handle a much wider range of tasks!

Read the rest in the "らくらく論文" app

Sample-Efficient Online Learning in LM Agents via Hindsight Trajectory Rewriting

Michael Y. Hu / Benjamin Van Durme / Jacob Andreas / Harsh Jhamtani

Language model (LM) agents deployed in novel environments often exhibit poor sample efficiency when learning from sequential interactions. This significantly hinders the usefulness of such agents in environments where interaction is costly (for example, when they interact with humans or reset physical systems). While a number of existing LM agent architectures incorporate various mechanisms for experience storage and reflection, they make limited use of LMs' abilities to directly generate or reason about full counterfactual trajectories. We introduce ECHO (Experience Consolidation via Hindsight Optimization), a prompting framework that adapts hindsight experience replay from reinforcement learning for language model agents. ECHO generates optimized trajectories for alternative goals that could have been achieved during failed attempts, effectively creating synthetic positive examples from unsuccessful interactions. Our approach consists of two components: a hindsight rule that uses the language model itself to identify relevant subgoals and generate optimized trajectories, and an update rule that maintains compressed trajectory representations in memory. We evaluate ECHO on stateful versions of XMiniGrid, a text-based navigation and planning benchmark, and PeopleJoinQA, a collaborative information-gathering enterprise simulation. Across both domains, ECHO outperforms vanilla language agent baselines by up to 80%; in XMiniGrid, it also outperforms a number of sophisticated agent architectures including Reflexion and AWM, demonstrating faster adaptation to novel environments through more effective utilization of past experiences.
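The two components described in the abstract can be sketched as a tiny loop: a hindsight rule that relabels a failed trajectory with subgoals it actually achieved, and an update rule that stores compressed (goal, actions) pairs in memory. This is a minimal illustrative sketch, not the paper's implementation: in ECHO the language model itself identifies subgoals and rewrites trajectories, whereas here those LM calls are replaced by simple deterministic heuristics, and all names (`Step`, `Trajectory`, `echo_update`, etc.) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Step:
    action: str
    observation: str

@dataclass
class Trajectory:
    goal: str
    steps: list
    success: bool = False

def identify_achieved_subgoals(traj):
    """Stand-in for the LM-based hindsight rule: treat any observation
    mentioning 'reached' as an alternative goal that was in fact achieved."""
    return [s.observation for s in traj.steps if "reached" in s.observation]

def rewrite_for_subgoal(traj, subgoal):
    """Rewrite the failed trajectory as an optimized success for `subgoal`:
    keep only the prefix of steps up to the one that achieved it."""
    cut = next(i for i, s in enumerate(traj.steps) if s.observation == subgoal)
    return Trajectory(goal=subgoal, steps=traj.steps[:cut + 1], success=True)

def echo_update(memory, traj, max_memory=50):
    """Stand-in for the update rule: append compressed (goal, action-list)
    pairs for each rewritten trajectory, keeping a bounded memory."""
    for subgoal in identify_achieved_subgoals(traj):
        rewritten = rewrite_for_subgoal(traj, subgoal)
        memory.append((rewritten.goal, [s.action for s in rewritten.steps]))
    return memory[-max_memory:]
```

For example, a failed attempt at "find the key" whose first step happened to reach the door yields one synthetic positive example with "reached the door" as its goal, which can then be retrieved as an in-context demonstration on later episodes.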

cs / cs.LG / cs.AI / cs.CL