Title & Super-Short Summary: Blazing-Fast LLMs with LEASH 🚀✨
Gal-Style Sparkle Points ✨
● Bye-bye, long outputs 👋! Shorter reasoning saves compute cost and time 💰
● A clever penalty system 💡 that tunes itself to fit the situation!
● AI gets way more accessible 💖 Business chances about to explode!
Detailed Explanation
Real-World Use-Case Ideas 💡
For Anyone Who Wants to Dig Deeper 🔍
Read the rest in the "らくらく論文" app
Existing approaches typically rely on fixed length penalties, but such penalties are hard to tune and fail to adapt to the evolving reasoning abilities of LLMs, leading to suboptimal trade-offs between accuracy and conciseness. To address this challenge, we propose Leash (adaptive LEngth penAlty and reward SHaping), a reinforcement learning framework for efficient reasoning in LLMs. We formulate length control as a constrained optimization problem and employ a Lagrangian primal-dual method to dynamically adjust the penalty coefficient: when generations exceed the target length, the penalty is intensified; when they fall short, it is relaxed. This adaptive mechanism guides models toward concise reasoning without sacrificing task performance. Experiments on DeepSeek-R1-Distill-Qwen-1.5B and Qwen3-4B-Thinking-2507 show that Leash reduces average reasoning length by 60% across diverse tasks, including in-distribution mathematical reasoning and out-of-distribution domains such as coding and instruction following, while maintaining competitive performance. Our work thus presents a practical and effective paradigm for developing controllable and efficient LLMs that balance reasoning capabilities with computational budgets.
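The primal-dual mechanism in the abstract maps naturally onto a two-step loop: the RL update (primal step) maximizes a length-shaped reward, while a dual-ascent step moves the penalty coefficient up or down according to the constraint violation. Below is a minimal Python sketch of that idea, assuming a simple linear length penalty normalized by the target budget; the function names, the dual learning rate, and the exact reward form are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of Lagrangian primal-dual length control (assumed form,
# not Leash's actual code). Constraint: mean generation length <= target.

def dual_ascent_step(lmbda: float, mean_len: float, target_len: float,
                     dual_lr: float = 0.01) -> float:
    """Update the penalty coefficient (Lagrange multiplier) lambda.

    The dual gradient is the normalized constraint violation: positive
    when generations run long (penalty intensifies), negative when they
    run short (penalty relaxes). Projecting onto [0, inf) keeps lambda
    a valid multiplier.
    """
    violation = (mean_len - target_len) / target_len
    return max(0.0, lmbda + dual_lr * violation)


def shaped_reward(task_reward: float, gen_len: int, target_len: int,
                  lmbda: float) -> float:
    """Length-shaped reward for the primal (RL) step: the task reward
    minus a lambda-weighted, budget-normalized length cost."""
    return task_reward - lmbda * (gen_len / target_len)


if __name__ == "__main__":
    # Toy trace: as the dual variable grows, the policy is pushed to
    # shorten its generations toward the budget.
    lmbda, target = 0.0, 1000
    for mean_len in [2500, 2100, 1600, 1100, 900]:
        lmbda = dual_ascent_step(lmbda, mean_len, target)
        print(f"mean_len={mean_len:5d}  lambda={lmbda:.4f}")
```

Two design points worth noting in this sketch: normalizing the violation by the target budget makes the dual step size roughly scale-free across tasks with different length budgets, and the projection to nonnegative values is what lets the penalty "switch off" entirely once generations stay within budget.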