Published: 2026/1/8 12:41:49

Scale Unleashed! A Technique to Supercharge LLMs ✨

Super summary: They discovered a new "magic spell" that speeds up language model training 🪄

✨ Gal-Style Sparkle Points ✨

● The scale (i.e., the size) of the weights gets learned on its own! So smart 🥺
● Overcomes the weakness of WD (weight decay)! Training gets way more freedom 💃
● AI chatbots and translation might get way more amazing ⁉️

Detailed Explanation

Read the rest in the「らくらく論文」app

Learnable Multipliers: Freeing the Scale of Language Model Matrix Layers

Maksim Velikanov / Ilyas Chahed / Jingwei Zuo / Dhia Eddine Rhaiem / Younes Belkada / Hakim Hacid

Applying weight decay (WD) to matrix layers is standard practice in large-language-model pretraining. Prior work suggests that stochastic gradient noise induces a Brownian-like expansion of the weight matrices W, whose growth is counteracted by WD, leading to a WD-noise equilibrium with a certain weight norm ||W||. In this work, we view the equilibrium norm as a harmful artifact of the training procedure, and address it by introducing learnable multipliers to learn the optimal scale. First, we attach a learnable scalar multiplier to W and confirm that the WD-noise equilibrium norm is suboptimal: the learned scale adapts to data and improves performance. We then argue that individual row and column norms are similarly constrained, and free their scale by introducing learnable per-row and per-column multipliers. Our method can be viewed as a learnable, more expressive generalization of muP multipliers. It outperforms a well-tuned muP baseline, reduces the computational overhead of multiplier tuning, and surfaces practical questions such as forward-pass symmetries and the width-scaling of the learned multipliers. Finally, we validate learnable multipliers with both Adam and Muon optimizers, where they yield downstream-evaluation improvements matching the gain from switching from Adam to Muon.
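To make the idea concrete, here is a minimal sketch of what a matrix layer with learnable multipliers might look like in PyTorch. The class name MultiplierLinear, the initialization, and the layout are illustrative assumptions, not the authors' implementation: a learnable scalar frees the overall norm of W, and optional per-row and per-column multipliers free the individual row and column norms, in the spirit of a learnable generalization of fixed muP multipliers.

```python
import torch
import torch.nn as nn


class MultiplierLinear(nn.Module):
    """Illustrative sketch (not the paper's code): a linear layer whose
    effective scale is decoupled from the WD-noise equilibrium norm of W
    via learnable multipliers."""

    def __init__(self, in_features: int, out_features: int, per_axis: bool = True):
        super().__init__()
        # W stays under weight decay as usual during training.
        self.weight = nn.Parameter(
            torch.randn(out_features, in_features) / in_features**0.5
        )
        # Learnable scalar multiplier: frees the overall norm ||W||.
        self.scale = nn.Parameter(torch.ones(()))
        if per_axis:
            # Learnable per-row and per-column multipliers: free the
            # individual row/column norms as well.
            self.row_scale = nn.Parameter(torch.ones(out_features))
            self.col_scale = nn.Parameter(torch.ones(in_features))
        else:
            self.row_scale = None
            self.col_scale = None

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight: scale * diag(row_scale) @ W @ diag(col_scale).
        w = self.scale * self.weight
        if self.row_scale is not None:
            w = self.row_scale[:, None] * w * self.col_scale[None, :]
        return x @ w.t()
```

In practice one would presumably keep weight decay on W itself while excluding the multipliers from decay (for example via separate optimizer parameter groups), so the learned scales can settle wherever the data prefers; the paper's exact training recipe may differ from this sketch.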

cs / cs.LG