Published: 2026/1/7 2:29:49

Supercharge Code Generation 🚀 What's VeRPO?

  1. Ultra-quick summary: A new technique that makes code generation smarter! It makes reward design insanely cool 💕

  2. Gal-style sparkle points ✨

    • Code-generation accuracy could go way up! ✨
    • It makes tricky reward design easy! 😲
    • It has the potential to revolutionize the IT industry! 💖
  3. Detailed explanation

    • Background: Code-generation AI is great, but setting up the reward has been a hard problem! With pass/fail alone, learning falls short… and RMs (reward models) are too complex 😢
    • Method: VeRPO uses only verifiable execution feedback! In other words, it builds the reward purely from whether the code actually works 💖 On top of that, it adjusts the reward to each unit test's difficulty (see the sketch right after this list)!
    • Results: With VeRPO, code generation gets seriously better! And it barely adds any compute cost, which is amazing ✨
    • Significance: If code-generation accuracy goes up, development gets easier! The future of the IT industry might get a whole lot brighter! 🥳
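
Here's a tiny Python sketch of that difficulty-weighted reward idea 💡 This is not the authors' actual code: the "1 minus pass rate" weighting and the normalization are illustrative guesses based only on the abstract's description (weights estimated from execution statistics during training; reward = sum of weights of the passed unit tests).

```python
from collections import defaultdict

class DenseRewardEstimator:
    """Tracks per-unit-test pass statistics during training and converts
    them into a dense reward. Illustrative sketch only, not VeRPO itself."""

    def __init__(self):
        self.passes = defaultdict(int)  # test id -> times passed so far
        self.trials = defaultdict(int)  # test id -> times executed so far

    def update(self, results):
        """Record one rollout's execution results, e.g. {"t1": True}."""
        for test_id, passed in results.items():
            self.trials[test_id] += 1
            self.passes[test_id] += int(passed)

    def weight(self, test_id):
        """Assumed difficulty weight: 1 minus the empirical pass rate,
        so tests that are rarely passed are worth more credit."""
        n = self.trials[test_id]
        return (1.0 - self.passes[test_id] / n) if n else 1.0

    def dense_reward(self, results):
        """Normalized sum of the weights of the tests this sample passed."""
        total = sum(self.weight(t) for t in results)
        if total == 0.0:  # every test always passes: fall back to outcome
            return 1.0 if all(results.values()) else 0.0
        earned = sum(self.weight(t) for t, ok in results.items() if ok)
        return earned / total

# Toy usage: after two rollouts, "hard" has never been passed, so passing
# it earns far more credit than passing the always-passed "easy" test.
est = DenseRewardEstimator()
est.update({"easy": True, "hard": False})
est.update({"easy": True, "hard": False})
print(est.dense_reward({"easy": True, "hard": False}))  # -> 0.0
print(est.dense_reward({"easy": True, "hard": True}))   # -> 1.0
```
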
  4. Real-world use-case ideas 💡

    • Blazing-fast development with AI code assistants! 💻✨
    • Catch bugs before they ship with automated testing services! 🔎


VeRPO: Verifiable Dense Reward Policy Optimization for Code Generation

Longwen Wang / Xuan'er Wu / Xiaohui Hu / Yirui Liu / Yuankai Fan / Kaidong Yu / Qizhen Weng / Wei Xi / Xuelong Li

Effective reward design is a central challenge in Reinforcement Learning (RL) for code generation. Mainstream pass/fail outcome rewards enforce functional correctness via executing unit tests, but the resulting sparsity limits potential performance gains. While recent work has explored external Reward Models (RM) to generate richer, continuous rewards, the learned RMs suffer from reward misalignment and prohibitive computational cost. In this paper, we introduce VeRPO (Verifiable Dense Reward Policy Optimization), a novel RL framework for code generation that synthesizes robust and dense rewards fully grounded in verifiable execution feedback. The core idea of VeRPO is constructing dense rewards from weighted partial success: by dynamically estimating the difficulty weight of each unit test based on execution statistics during training, a dense reward is derived from the sum of weights of the passed unit tests. To solidify the consistency between partial success and end-to-end functional correctness, VeRPO further integrates the dense signal with global execution outcomes, establishing a robust and dense reward paradigm relying solely on verifiable execution feedback. Extensive experiments across diverse benchmarks and settings demonstrate that VeRPO consistently outperforms outcome-driven and RM-based baselines, achieving up to +8.83% gain in pass@1 with negligible time cost (< 0.02%) and zero GPU memory overhead.
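
The abstract says the dense partial-success signal is further integrated with global execution outcomes, but the combination rule is not spelled out here. Below is a minimal sketch of one plausible linear blend; the coefficient `lam` and the mixing form are assumptions, not the paper's formula.

```python
def combined_reward(dense: float, all_passed: bool, lam: float = 0.5) -> float:
    """Blend the binary pass/fail outcome with the dense partial-success
    reward so that fully correct programs still receive the top score.

    `dense` is the weighted partial-success score in [0, 1]; `lam` is a
    hypothetical mixing coefficient, not taken from the paper.
    """
    outcome = 1.0 if all_passed else 0.0
    return (1.0 - lam) * outcome + lam * dense
```

Under a blend like this, a program that passes every unit test still receives the maximum reward, while partial solutions earn graded credit in proportion to the difficulty of the tests they pass.
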

cs / cs.LG / cs.AI