Super-short summary: a way to make LLM (large language model) post-training super strong with a technique called DVPO 💖
🌟 Gyaru-style sparkle points ✨
● Keeps training from getting unstable (all wobbly)!
● The LLM can handle all kinds of situations!
● Chatbots and lots of other services get way better 🥰
Here comes the detailed breakdown~!
Background: LLMs are amazing, but training them is rough 😭 Especially when you train on human feedback or kinda sketchy data, they wobble all over the place!
Reinforcement learning (RL) has shown strong performance in LLM post-training, but real-world deployment often involves noisy or incomplete supervision. In such settings, complex and unreliable supervision signals can destabilize training and harm generalization. While existing approaches such as worst-case optimization (e.g., RFQI, CQL) and mean-based methods (e.g., PPO, GRPO) can improve stability, they often overlook generalization and may produce overly conservative policies, leading to uneven performance across diverse real scenarios. To address this, we introduce DVPO (Distributional Value Modeling with Risk-aware Policy Optimization), a new RL framework that combines conditional risk theory with distributional value modeling to better balance robustness and generalization. DVPO learns token-level value distributions to provide fine-grained supervision, and applies an asymmetric risk regularization to shape the distribution tails: it contracts the lower tail to dampen noisy negative deviations, while expanding the upper tail to preserve exploratory diversity. Across extensive experiments and analyses in multi-turn dialogue, math reasoning, and scientific QA, DVPO consistently outperforms PPO, GRPO, and robust Bellman-based PPO under noisy supervision, showing its potential for LLM post-training in real-world settings.
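To make the two ingredients in the abstract concrete, here is a minimal PyTorch sketch of (1) a token-level distributional value head and (2) an asymmetric tail regularizer that contracts the lower tail and expands the upper tail. This is not the paper's implementation: the quantile parameterization, the pinball loss, and all names (`DistributionalValueHead`, `asymmetric_tail_regularizer`, `alpha_low`, `alpha_high`) are illustrative assumptions.

```python
# Sketch only: one plausible reading of "token-level value distributions" plus
# "asymmetric risk regularization", not the authors' DVPO code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DistributionalValueHead(nn.Module):
    """Maps per-token hidden states to a set of quantile value estimates."""

    def __init__(self, hidden_size: int, num_quantiles: int = 32):
        super().__init__()
        self.proj = nn.Linear(hidden_size, num_quantiles)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size)
        # returns per-token quantile values: (batch, seq_len, num_quantiles)
        return self.proj(hidden_states)


def quantile_regression_loss(quantiles: torch.Tensor,
                             target_returns: torch.Tensor) -> torch.Tensor:
    """Pinball loss so the head learns a value distribution for every token.

    quantiles:      (batch, seq_len, num_quantiles)
    target_returns: (batch, seq_len) bootstrapped or Monte Carlo returns
    """
    num_q = quantiles.shape[-1]
    taus = (torch.arange(num_q, device=quantiles.device,
                         dtype=quantiles.dtype) + 0.5) / num_q
    diff = target_returns.unsqueeze(-1) - quantiles
    return (torch.abs(taus - (diff < 0).float()) * diff.abs()).mean()


def asymmetric_tail_regularizer(quantiles: torch.Tensor,
                                alpha_low: float = 1.0,
                                alpha_high: float = 0.1) -> torch.Tensor:
    """Penalize spread below the per-token median (contract the lower tail)
    and mildly reward spread above it (keep exploratory diversity)."""
    median = quantiles.median(dim=-1, keepdim=True).values
    lower_spread = F.relu(median - quantiles).mean()  # lower-tail deviations
    upper_spread = F.relu(quantiles - median).mean()  # upper-tail deviations
    return alpha_low * lower_spread - alpha_high * upper_spread
```

In a PPO-style trainer, one plausible way to combine these is to train the critic with `quantile_regression_loss(...) + asymmetric_tail_regularizer(...)` and use the mean (or a risk-adjusted statistic) of the per-token quantiles as the baseline when computing advantages.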