Published: 2026/1/11 8:32:23

LLM reasoning skyrockets with just a tiny add-on 🚀✨

Super summary: Research that makes an LLM's (large language model's) brain 🧠 even better, at a bargain price!

Gal-Style Sparkle Points ✨

● Turns out errors come from the "unconfident-sounding words" 💡
● Only the "unconfident" parts get the little add-on, so it's easy on your wallet too 👛
● Getting smarter ⤴️ with none of the hard stuff? Doesn't get better than that, right?

Detailed Explanation

Read the rest in the 「らくらく論文」 app

Less is More: Improving LLM Reasoning with Minimal Test-Time Intervention

Zhen Yang / Mingyang Zhang / Feng Chen / Ganggui Ding / Liang Hou / Xin Tao / Ying-Cong Chen

Recent progress in large language models (LLMs) has focused on test-time scaling to improve reasoning via increased inference computation, but often at the cost of efficiency. We revisit test-time behavior and uncover a simple yet underexplored phenomenon: reasoning uncertainty is highly localized; only a small subset of high-entropy tokens dominantly affects output correctness. Motivated by this, we propose Minimal Test-Time Intervention (MTI), a training-free framework that enhances reasoning accuracy and stability with minimal overhead. MTI includes: (i) Selective CFG intervention, applying classifier-free guidance only at uncertain positions; and (ii) Lightweight negative-prompt guidance, reusing the main model's KV cache to approximate unconditional decoding efficiently. MTI yields consistent gains across general, coding, and STEM tasks (e.g., a +9.28% average improvement on six benchmarks for DeepSeek-R1-7B and +11.25% on AIME2024 using Ling-mini-2.0) while remaining highly efficient.
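The abstract's core idea, applying classifier-free guidance only at high-entropy (uncertain) token positions, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `selective_cfg`, the entropy threshold, and the guidance scale are all hypothetical placeholders, and the paper's exact guidance formulation and KV-cache reuse for the unconditional branch are not reproduced here.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over a 1-D logit vector."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def token_entropy(logits: np.ndarray) -> float:
    """Shannon entropy (nats) of the next-token distribution."""
    p = softmax(logits)
    return float(-(p * np.log(p + 1e-12)).sum())

def selective_cfg(cond_logits: np.ndarray,
                  uncond_logits: np.ndarray,
                  entropy_threshold: float = 1.0,
                  guidance_scale: float = 1.5) -> np.ndarray:
    """Apply classifier-free guidance only when the conditional
    distribution is uncertain (high entropy); otherwise return the
    plain conditional logits untouched, saving the extra compute.
    Threshold and scale values here are illustrative assumptions."""
    if token_entropy(cond_logits) > entropy_threshold:
        # Standard CFG combination: push away from the unconditional branch.
        return uncond_logits + guidance_scale * (cond_logits - uncond_logits)
    return cond_logits

# Usage: a confident position is left alone; an uncertain one is guided.
confident = np.array([10.0, 0.0, 0.0, 0.0])   # low entropy
uncertain = np.array([1.0, 0.9, 1.1, 1.0])    # near-uniform, high entropy
neutral = np.zeros(4)                          # stand-in unconditional logits
out_conf = selective_cfg(confident, neutral)   # == confident (no intervention)
out_unc = selective_cfg(uncertain, neutral)    # guided logits
```

Because most positions are low-entropy, the guided (second-pass) computation runs only on the small subset of uncertain tokens, which is where the "minimal overhead" in the abstract comes from.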

cs / cs.CL / cs.AI