Published: 2026/1/5 14:28:17

Stop the Forgetting! God-Tier Tuning for LLMs ✨

Ultra-short summary: It's about fixing LLMs' "forgetfulness" with a new technique!

✨ Gal-Style Sparkle Points ✨
● They found a way to make LLMs (large language models) smarter! 💖
● A state called "Confident Conflicts" gets resolved, in style! 😎
● Feels like the quality of IT services is about to skyrocket...! 🎉

Detailed Explanation
● Background: LLMs are smart, but when they study one specific subject, they forget things they learned before! 😱 Researchers have been working hard to solve that!

● Method: They developed a technique that spots the model's "overconfidence"! 🤔 By factoring in its confidence (token-level entropy), it keeps the model from forgetting!

Read the rest in the 「らくらく論文」 app

Entropy-Adaptive Fine-Tuning: Resolving Confident Conflicts to Mitigate Forgetting

Muxi Diao / Lele Yang / Wuxuan Gong / Yutong Zhang / Zhonghao Yan / Yufei Han / Kongming Liang / Weiran Xu / Zhanyu Ma

Supervised Fine-Tuning (SFT) is the standard paradigm for domain adaptation, yet it frequently incurs the cost of catastrophic forgetting. In sharp contrast, on-policy Reinforcement Learning (RL) effectively preserves general capabilities. We investigate this discrepancy and identify a fundamental distributional gap: while RL aligns with the model's internal belief, SFT forces the model to fit external supervision. This mismatch often manifests as "Confident Conflicts": tokens characterized by low probability but low entropy. In these instances, the model is highly confident in its own prediction but is forced to learn a divergent ground truth, triggering destructive gradient updates. To address this, we propose Entropy-Adaptive Fine-Tuning (EAFT). Unlike methods relying solely on prediction probability, EAFT utilizes token-level entropy as a gating mechanism to distinguish between epistemic uncertainty and knowledge conflict. This allows the model to learn from uncertain samples while suppressing gradients on conflicting data. Extensive experiments on Qwen and GLM series (ranging from 4B to 32B parameters) across mathematical, medical, and agentic domains confirm our hypothesis. EAFT consistently matches the downstream performance of standard SFT while significantly mitigating the degradation of general capabilities.
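To make the gating idea concrete, here is a minimal PyTorch sketch of an entropy-gated SFT loss in the spirit of EAFT. The function name `eaft_loss`, the hard threshold `entropy_threshold`, and the 0/1 gate are illustrative assumptions for readability; the paper's exact gating function is not reproduced here.

```python
import torch
import torch.nn.functional as F

def eaft_loss(logits, targets, entropy_threshold=1.0):
    """Entropy-gated token-level cross-entropy (sketch, not the paper's code).

    Idea: suppress gradients on "Confident Conflict" tokens, i.e. tokens where
    the model assigns low probability to the ground truth yet has low
    predictive entropy (it is confident in a *different* prediction).

    logits:  (batch, seq_len, vocab) model outputs
    targets: (batch, seq_len)        ground-truth token ids
    entropy_threshold: hypothetical gate value, in nats.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()

    # Token-level predictive entropy H = -sum_v p(v) log p(v), shape (batch, seq_len)
    entropy = -(probs * log_probs).sum(dim=-1)

    # Standard per-token negative log-likelihood against the supervision
    nll = F.nll_loss(
        log_probs.flatten(0, 1), targets.flatten(), reduction="none"
    ).view_as(targets)

    # Gate: keep gradients where the model is uncertain (high entropy),
    # zero them where it is confident and would be forced toward a
    # divergent label. A hard gate is used for clarity; a soft weighting
    # is an equally plausible instantiation.
    gate = (entropy >= entropy_threshold).float()

    return (gate * nll).sum() / gate.sum().clamp(min=1.0)
```

Plain SFT would average `nll` over all tokens; the only change here is the entropy-based mask, which is what lets the model keep learning from genuinely uncertain tokens while skipping the destructive updates on confident conflicts.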

cs / cs.LG / cs.AI / cs.CL