Published: 2025/12/25 16:24:29

Game changer! The reward model for LLMs (large language models) gets a total makeover with CRM (Multi-Agent Collaborative Reward Model)! 💖

Super summary: turn LLM evaluation into a team sport to make it smarter & safer!

✨ Gal-Style Sparkle Points ✨

  • Team up the reward model! Specialists (agents) trading opinions with each other, how fun is that? 👯‍♀️
  • Evaluation gets way more detailed, so you can clearly see WHY the LLM's output was good or bad, total win ✨
  • Harder to reward-hack (cheat), so less worry about training drifting in a weird direction! 😎

Here comes the detailed breakdown!

Continued in the 「らくらく論文」 app

Multi-Agent Collaborative Reward Design for Enhancing Reasoning in Reinforcement Learning

Pei Yang / Ke Zhang / Ji Wang / Xiao Chen / Yuxin Tang / Eric Yang / Lynn Ai / Bill Shi

We present CRM (Multi-Agent Collaborative Reward Model), a framework that replaces a single black-box reward model with a coordinated team of specialist evaluators to improve robustness and interpretability in RLHF. Conventional reward models struggle to jointly optimize multiple, sometimes conflicting, preference dimensions (e.g., factuality, helpfulness, safety) and offer limited transparency into why a score is assigned. CRM addresses these issues by decomposing preference evaluation into domain-specific agents that each produce partial signals, alongside global evaluators such as ranker-based and embedding-similarity rewards. A centralized aggregator fuses these signals at each timestep, balancing factors like step-wise correctness, multi-agent agreement, and repetition penalties, yielding a single training reward compatible with standard RL pipelines. The policy is optimized with advantage-based updates (e.g., GAE), while a value model regresses to the aggregated reward, enabling multi-perspective reward shaping without requiring additional human annotations beyond those used to train the evaluators. To support training and assessment, we introduce rewardBench, a benchmark and training suite aligned with the collaborative structure of CRM. Together, CRM and rewardBench provide a practical, modular path to more transparent reward modeling and more stable optimization.
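The abstract's aggregation step can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the function name, weights, and penalty value are assumptions, and each specialist agent is assumed to emit a score in [0, 1]. It shows how per-dimension agent scores, step-wise correctness, inter-agent agreement, and a repetition penalty could be fused into the single scalar reward that a standard RL pipeline consumes.

```python
import statistics

def aggregate_reward(agent_scores, step_correct, repeated,
                     weights=(0.5, 0.3, 0.2), penalty=0.2):
    """Hypothetical CRM-style fusion of specialist signals into one reward.

    agent_scores: per-dimension scores in [0, 1] from specialist evaluators
                  (e.g., factuality, helpfulness, safety agents).
    step_correct: whether the current reasoning step is judged correct.
    repeated:     whether the policy is repeating itself at this timestep.
    """
    mean_score = statistics.mean(agent_scores)         # average specialist opinion
    agreement = 1.0 - statistics.pstdev(agent_scores)  # high when agents agree
    w_score, w_step, w_agree = weights                 # illustrative weights, not from the paper
    reward = (w_score * mean_score
              + w_step * (1.0 if step_correct else 0.0)
              + w_agree * agreement)
    if repeated:
        reward -= penalty  # repetition penalty mentioned in the abstract
    return reward
```

In a full pipeline, this aggregated scalar would then feed advantage-based policy updates (e.g., GAE), with the value model regressing to it, as the abstract describes.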

cs / cs.AI