Ultra-short summary: An AI that gets smarter by learning from its failures, and it cuts development costs too! ✨
✨ Gyaru-style sparkle points ✨
● An AI that isn't afraid of failure! A "Failure Agent" just sounds cool, right? 😎
● Works even with little data! Lower training costs make companies happy too 🥰
● Useful for all kinds of tasks! AI's potential is skyrocketing 🚀
Detailed explanation
● Background: Thanks to LLMs (large language models), AI agents can now do all sorts of things, but making them smart used to require huge amounts of data 💦 And collecting data is a hassle, right? That's where Co-evolving Agents come in!
● Method: The Target Agent (the AI that actually does the work) and the Failure Agent (the AI that learns from failures) team up 👯♀️ The Failure Agent analyzes failure cases and turns them into useful training signal for the Target Agent! Learning from failure, just like a human, right?
The rapid progress of large foundation models has accelerated the development of task-specialized agents across diverse domains. However, the effectiveness of agents remains tightly coupled with the quality of training data, yet curating task-specific datasets is costly and often infeasible in real-world scenarios. Recent work has explored self-improving agents that autonomously generate, refine, and re-train on their own trajectories. A prominent line of approaches further leverages preference optimization by pairing predicted trajectories with scarce ground-truth trajectories, enabling agents to learn directly from their own failures. While these methods outperform supervised fine-tuning, their heavy reliance on predicted trajectories under limited ground-truth supervision leaves them prone to overfitting. To address this, we propose a co-evolving agents framework in which a target agent improves jointly with an auxiliary failure agent. The failure agent learns through preference optimization over failure trajectories from both the target agent and itself, thereby generating hard negatives that are close to success yet remain failures. Incorporating these informative hard negatives into the target agent's optimization sharpens decision boundaries and improves generalization. Comprehensive analysis and experiments across benchmark datasets show that our method not only improves performance but also demonstrates that failures, rather than being used as-is, can be systematically transformed into structured and valuable learning signals for self-improving agents.
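To make the mechanism concrete, here is a minimal sketch of one co-evolving round. It is an assumption on two points: the abstract does not specify the exact preference objective, so a standard DPO-style loss is used as a stand-in, and the helper names (`dpo_loss`, `select_hard_negatives`) plus the toy scores are hypothetical, not from the paper. The idea illustrated is the core one: the failure agent's role is to supply "hard negatives" (failures scored closest to success), which the target agent then contrasts against scarce ground-truth successes.

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO-style preference loss (a stand-in; the paper's exact objective is
    not given in the abstract). Penalizes the policy when the rejected
    (failure) trajectory is not sufficiently less likely than the chosen one,
    relative to a frozen reference model."""
    margin = beta * ((logp_chosen - ref_chosen) - (logp_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

def select_hard_negatives(failure_trajectories, k=2):
    """Pick the failures closest to success: highest success score, yet still
    failures. `failure_trajectories` is a list of (trajectory, score) pairs,
    where a higher score means closer to task success."""
    return sorted(failure_trajectories, key=lambda pair: pair[1], reverse=True)[:k]

# Toy failure pool pooled from both agents (trajectories and scores are made up).
failures = [("gives up early", 0.1), ("almost solves the task", 0.8), ("wrong tool", 0.4)]
hard_negatives = select_hard_negatives(failures, k=1)
print(hard_negatives[0][0])  # the near-success failure becomes the hard negative

# Pair a ground-truth success (chosen) with the hard negative (rejected),
# using hypothetical log-probs under the current policy and the reference.
loss = dpo_loss(logp_chosen=-2.0, logp_rejected=-6.0,
                ref_chosen=-3.0, ref_rejected=-3.0)
print(loss)
```

The intuition behind preferring near-success failures over easy ones: a trajectory that almost succeeds sits close to the decision boundary, so contrasting against it gives the target agent a sharper gradient than contrasting against an obviously bad rollout.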