Published: 2026/1/7 1:54:04

The ultimate gyaru explainer AI has arrived~! 😎✨

MNPO is born! Aligning LLMs with human preferences 💖

Ultra-short summary: A slick new technique to align LLMs (AI) more closely with human preferences! It uses game theory to aim for an AI everyone can be happy with~ 🌟

✨ Gyaru Sparkle Points ✨

● It can handle more complex preferences than existing AI alignment techniques. Way too smart 💖
● It applies game theory, so the AIs learn by competing against each other as if they were playing a game! So fresh~ 👯‍♀️
● Chatbots and content-generation services could get way more human-like! The future looks fun 🎶

Now for the detailed breakdown~!


Multiplayer Nash Preference Optimization

Fang Wu / Xu Huang / Weihao Xuan / Zhiwei Zhang / Yijia Xiao / Guancheng Wan / Xiaomin Li / Bing Hu / Peng Xia / Jure Leskovec / Yejin Choi

Reinforcement learning from human feedback (RLHF) has emerged as the standard paradigm for aligning large language models with human preferences. However, reward-based methods built on the Bradley-Terry assumption struggle to capture the non-transitive and heterogeneous nature of real-world preferences. To address this, recent studies have reframed alignment as a two-player Nash game, giving rise to Nash learning from human feedback (NLHF). While this perspective has inspired algorithms such as INPO, ONPO, and EGPO with strong theoretical and empirical guarantees, they remain fundamentally restricted to two-player interactions, creating a single-opponent bias that fails to capture the full complexity of realistic preference structures. This work introduces Multiplayer Nash Preference Optimization (MNPO), a novel framework that generalizes NLHF to the multiplayer regime. It formulates alignment as an n-player game, where each policy competes against a population of opponents while being regularized toward a reference model. We demonstrate that MNPO inherits the equilibrium guarantees of two-player methods while enabling richer competitive dynamics and improved coverage of diverse preference structures. Comprehensive empirical evaluation shows that MNPO consistently outperforms existing NLHF baselines on instruction-following benchmarks, achieving superior alignment quality under heterogeneous annotator conditions and mixed-policy evaluation scenarios. Together, these results establish MNPO as a principled and scalable framework for aligning LLMs with complex, non-transitive human preferences. Code is available at https://github.com/smiles724/MNPO.
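To make the abstract's core idea concrete, here is a minimal toy sketch of an n-player preference game with KL regularization toward a reference policy, in the spirit of what MNPO formulates. This is an illustrative assumption, not the paper's implementation: the random preference matrix `P`, the multiplicative-weights update, and the hyperparameters `tau` and `eta` are all made up for the demo, and real MNPO operates over LLM policies rather than a tiny discrete response set.

```python
# Toy n-player preference game over a small discrete response set.
# Illustrative sketch only -- not the MNPO implementation from the paper.
import numpy as np

rng = np.random.default_rng(0)

n_players = 4     # number of competing policies
n_responses = 5   # size of a toy discrete "response" space
tau = 0.1         # strength of the KL pull toward the reference policy
eta = 0.5         # mirror-descent step size
steps = 200

# P[i, j] = probability that response i is preferred over response j.
# Built so that P + P.T == 1 elementwise; such preferences can be
# non-transitive, which is exactly what Bradley-Terry rewards miss.
A = rng.uniform(size=(n_responses, n_responses))
P = 0.5 + 0.5 * (A - A.T)

ref = np.full(n_responses, 1.0 / n_responses)  # uniform reference policy
policies = [ref.copy() for _ in range(n_players)]

for _ in range(steps):
    new_policies = []
    for i, pi in enumerate(policies):
        # Each player faces the population of all other players' mixtures,
        # generalizing the two-player NLHF game to n players.
        opponents = [policies[j] for j in range(n_players) if j != i]
        opp_mix = np.mean(opponents, axis=0)
        win_rate = P @ opp_mix  # expected win rate of each response
        # Mirror-descent / multiplicative-weights step maximizing
        # E[win_rate] - tau * KL(pi || ref).
        logits = np.log(pi) + eta * (win_rate - tau * (np.log(pi) - np.log(ref)))
        new_pi = np.exp(logits - logits.max())
        new_policies.append(new_pi / new_pi.sum())
    policies = new_policies

print(np.round(policies[0], 3))  # all players converge to similar mixtures
```

By symmetry, the iterates for all players approach the same regularized equilibrium mixture; a larger `tau` keeps them closer to the reference policy, and setting `n_players = 2` recovers a two-player NLHF-style game, matching the abstract's claim that MNPO generalizes that setting.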

cs / cs.AI / cs.CL