Ultra-short summary: A new way to boost the brainpower 🧠 of LLMs (large language models)! Adding "SCOPE" to TTRL (test-time reinforcement learning) makes them even smarter 💖
Gal-style sparkle points ✨ ● Overcomes confirmation bias (being unable to admit your own mistakes)! Now it can produce outputs it's actually confident in 💪 ● Built to come up with diverse answers, so expressiveness goes way up ⤴ ● Apparently it uses some seriously cool tricks called "step-wise confidence" and "subgroup partitioning" 😳
Detailed explanation ● Background: We want LLMs to be smarter! But training them is hard... So TTRL lets the LLM teach itself at test time. The catch: conventional TTRL picks its answer by majority vote, so it tends to drift in the wrong direction 🥺
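The majority-vote pseudo-labeling that plain TTRL relies on can be sketched in a few lines. This is a minimal illustration (not the paper's code): the most frequent final answer among sampled outputs becomes the training target, which is exactly how a confidently-wrong majority reinforces itself.

```python
from collections import Counter

def majority_vote_pseudo_label(answers):
    """Pick the most frequent final answer among sampled outputs.

    If most samples share the same wrong answer, that wrong answer
    becomes the pseudo-label (confirmation bias).
    """
    counts = Counter(answers)
    label, _ = counts.most_common(1)[0]
    return label

# Four of five samples agree on a (possibly wrong) answer.
samples = ["42", "42", "42", "17", "42"]
print(majority_vote_pseudo_label(samples))  # → "42"
```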
● Method: The new method "SCOPE" attaches a "confidence" score to the LLM's answers and trains on the more trustworthy ones! On top of that, it splits the candidate answers into groups so that a variety of answers can emerge ✨
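The confidence-weighted selection can be sketched as follows. This is a hypothetical stand-in, not SCOPE's implementation: confidence scores are summed per answer instead of counting votes, so one high-confidence reasoning path can outweigh several low-confidence ones (the step-wise confidence score itself, e.g. aggregated per-step probabilities, is abstracted away here).

```python
from collections import defaultdict

def confidence_weighted_label(samples):
    """samples: list of (answer, confidence) pairs.

    Sum confidence per distinct answer instead of counting raw votes,
    so high-quality reasoning paths beat simple frequency.
    """
    scores = defaultdict(float)
    for answer, conf in samples:
        scores[answer] += conf
    return max(scores, key=scores.get)

# One confident sample outweighs three low-confidence ones.
samples = [("17", 0.9), ("42", 0.3), ("42", 0.3), ("42", 0.2)]
print(confidence_weighted_label(samples))  # → "17" (0.9 > 0.8)
```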
Test-time reinforcement learning mitigates the reliance on annotated data by using majority-voting results as pseudo-labels, emerging as a complementary direction to reinforcement learning with verifiable rewards (RLVR) for improving the reasoning ability of large language models (LLMs). However, this voting strategy often induces confirmation bias and suffers from sparse rewards, limiting overall performance. In this work, we propose subgroup-specific step-wise confidence-weighted pseudo-label estimation (SCOPE), a framework integrating model confidence and dynamic subgroup partitioning to address these issues. Specifically, SCOPE integrates the proposed step-wise confidence into pseudo-label deduction, prioritizing high-quality reasoning paths over simple frequency counts. Furthermore, it dynamically partitions the candidate output pool into independent subgroups by balancing reasoning quality against exploration diversity. By deriving a local consensus via repeated sampling for each subgroup, SCOPE provides diverse supervision targets to encourage broader exploration. We conduct experiments across various models and benchmarks, and the results show that SCOPE consistently outperforms recent baselines. Notably, SCOPE achieves relative improvements of 13.1\% on the challenging AIME 2025 and 8.1\% on AMC. The code is released at \href{https://github.com/szu-tera/SCOPE}{https://github.com/szu-tera/SCOPE}.
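The subgroup step can be illustrated with a simple sketch. This is an assumption-laden toy, not SCOPE's actual algorithm: candidates are ranked by confidence and split into contiguous subgroups, and each subgroup yields its own confidence-weighted consensus, producing multiple supervision targets instead of one global vote. SCOPE's real partitioning balances reasoning quality against exploration diversity; only the overall shape is shown here.

```python
from collections import defaultdict

def subgroup_consensus(samples, num_groups=2):
    """Hypothetical sketch of subgroup-wise pseudo-labeling.

    samples: list of (answer, confidence) pairs.
    Rank by confidence, split into contiguous subgroups, and take a
    confidence-weighted consensus inside each subgroup, so lower-ranked
    (more exploratory) candidates still contribute a supervision target.
    """
    ranked = sorted(samples, key=lambda s: s[1], reverse=True)
    size = max(1, len(ranked) // num_groups)
    targets = []
    for i in range(0, len(ranked), size):
        group = ranked[i:i + size]
        scores = defaultdict(float)
        for answer, conf in group:
            scores[answer] += conf
        targets.append(max(scores, key=scores.get))
    return targets

samples = [("17", 0.9), ("42", 0.8), ("42", 0.4), ("13", 0.3)]
print(subgroup_consensus(samples))  # → ["17", "42"]
```

Each subgroup's consensus then serves as a separate pseudo-label, which is what lets the model explore beyond the single global-majority answer.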