Published: 2026/1/4 15:59:15

LLM Self-Correction, Decoded Gal-Style! 😎✨

1. We Found the Secret to How LLMs Get Smarter 💖 (Super-Short Summary)

● Discovered the secret of how LLMs (Large Language Models) learn to notice and fix their own mistakes!
● Apparently something called "Two-Stage Decision-Sampling" is the key ✨
● The future of AI is about to blow up, no doubt!


2. Gal-Style Sparkle Points ✨

The Two-Stage Decision-Sampling Hypothesis: Understanding the Emergence of Self-Reflection in RL-Trained LLMs

Zibo Zhao (Arizona State University) / Yuanting Zha (ShanghaiTech University) / Haipeng Zhang (ShanghaiTech University) / Xingcheng Xu (Shanghai Artificial Intelligence Laboratory)

Self-reflection capabilities emerge in Large Language Models after RL post-training, with multi-turn RL achieving substantial gains over SFT counterparts. Yet the mechanism by which a unified optimization objective gives rise to the functionally distinct capabilities of generating solutions and evaluating when to revise them remains opaque. To address this question, we introduce the Gradient Attribution Property to characterize how reward gradients distribute across policy components, formalized through the Two-Stage Decision-Sampling (DS) Hypothesis, which decomposes the policy into sampling ($\pi_{sample}$) for generation and decision ($\pi_{d}$) for verification. We prove that surrogate rewards exhibit Balanced Gradient Attribution, while SFT and KL penalties exhibit Unbalanced Gradient Attribution, with length-weighting creating asymmetric regularization that constrains $\pi_{sample}$ while leaving $\pi_{d}$ under-optimized; this provides a theoretical explanation of why RL succeeds where SFT fails. Empirical validation of our theoretical predictions on arithmetic reasoning demonstrates that RL's superior generalization stems primarily from improved decision-making ($\pi_{d}$) rather than sampling capabilities, providing a first-principles mechanistic explanation for self-correction in thinking models.
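
As a rough illustration of the hypothesis, here is a minimal sketch of the decomposition and the length-weighting asymmetry, assuming a token-level partition of each response into decision positions $\mathcal{T}_d$ and sampling positions $\mathcal{T}_s$; this partition and the SFT loss form below are illustrative assumptions, not the paper's exact definitions:

$$
\pi_\theta(y \mid x) \;=\; \underbrace{\prod_{t \in \mathcal{T}_d} \pi_\theta(y_t \mid x, y_{<t})}_{\pi_{d}\text{: decide when to verify/revise}} \;\cdot\; \underbrace{\prod_{t \in \mathcal{T}_s} \pi_\theta(y_t \mid x, y_{<t})}_{\pi_{sample}\text{: generate solution tokens}}
$$

$$
\mathcal{L}_{SFT}(\theta) \;=\; -\frac{1}{|y|} \sum_{t=1}^{|y|} \log \pi_\theta(y_t \mid x, y_{<t})
$$

Under this reading, Balanced Gradient Attribution would mean the reward gradient reaches both factors with comparable strength, whereas the $1/|y|$ length-weighting in the SFT loss spreads gradient mass over the many sampling tokens ($|\mathcal{T}_s| \gg |\mathcal{T}_d|$), so the few decision tokens receive little signal and $\pi_{d}$ stays under-optimized.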

cs / cs.LG / cs.AI