Published: 2025/12/3 23:45:07

Top image quality with DDRL! A new image-generation AI is born ✨

Ultra-short summary: An AI that learns your preferences is born! What is DDRL, the technique that supercharges image generation?

✨ Gal-style Sparkle Points ✨

● Solves reward hacking (the problem of the model producing weird images), so it can make the images you actually want 💎
● A technique called data regularization keeps the AI growing up well-behaved 💖
● A technology that could really heat up the IT industry, maybe even enabling brand-new services 🌟

Detailed Explanation

Read the rest in the「らくらく論文」app

Data-regularized Reinforcement Learning for Diffusion Models at Scale

Haotian Ye / Kaiwen Zheng / Jiashu Xu / Puheng Li / Huayu Chen / Jiaqi Han / Sheng Liu / Qinsheng Zhang / Hanzi Mao / Zekun Hao / Prithvijit Chattopadhyay / Dinghao Yang / Liang Feng / Maosheng Liao / Junjie Bai / Ming-Yu Liu / James Zou / Stefano Ermon

Aligning generative diffusion models with human preferences via reinforcement learning (RL) is critical yet challenging. Existing algorithms are often vulnerable to reward hacking, such as quality degradation, over-stylization, or reduced diversity. Our analysis demonstrates that this can be attributed to the inherent limitations of their regularization, which provides unreliable penalties. We introduce Data-regularized Diffusion Reinforcement Learning (DDRL), a novel framework that uses the forward KL divergence to anchor the policy to an off-policy data distribution. Theoretically, DDRL enables robust, unbiased integration of RL with standard diffusion training. Empirically, this translates into a simple yet effective algorithm that combines reward maximization with diffusion loss minimization. With over a million GPU hours of experiments and ten thousand double-blind human evaluations, we demonstrate on high-resolution video generation tasks that DDRL significantly improves rewards while alleviating the reward hacking seen in baselines, achieving the highest human preference and establishing a robust and scalable paradigm for diffusion post-training.
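The abstract describes DDRL's objective as reward maximization combined with the standard diffusion (denoising) loss on real data, which acts as a forward-KL anchor to the data distribution. Here is a minimal toy sketch of that combined objective; the function names, the epsilon-prediction MSE form of the diffusion loss, and the weight `beta` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_loss(eps_pred, eps_true):
    """Standard denoising (epsilon-prediction) MSE on real data.
    Minimizing it keeps the model anchored to the data distribution,
    playing the role of the forward-KL regularizer described above."""
    return float(np.mean((eps_pred - eps_true) ** 2))

def ddrl_objective(rewards, eps_pred, eps_true, beta=0.1):
    """Hypothetical combined loss (a sketch, not the paper's code):
    minimize  -E[reward]  +  beta * E_data[diffusion loss].
    Lower is better: high rewards and good denoising both help."""
    return -float(np.mean(rewards)) + beta * diffusion_loss(eps_pred, eps_true)

# Toy example: 3 policy samples with rewards, 3 real-data denoising targets.
rewards = np.array([0.8, 0.6, 0.9])
eps_true = rng.standard_normal((3, 4))
eps_pred = eps_true + 0.1 * rng.standard_normal((3, 4))  # near-perfect denoiser

loss = ddrl_objective(rewards, eps_pred, eps_true)
```

With accurate denoising, the regularizer term is tiny and the objective is dominated by the (negated) reward, illustrating how the data term only penalizes drift away from the data distribution.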

cs / cs.LG