Published: 2025/10/23 10:22:11

Blast Away Catastrophic Forgetting! A CL Revolution with the P&M Framework 🚀

Super summary: Model merging tackles CL's big weakness, catastrophic forgetting! And it speeds up business too 💨

✨ Sparkly Gal Highlights ✨
● "Merge" past knowledge and keep soaking up new stuff — the ultimate model 🧠
● A formula that keeps the loss increase to a minimum — learning this efficiently is seriously emo 💖
● Teamed up with LoRA, so it saves memory too — the most cost-effective AI ever 🫶

Now for the details! ● Background: CL (Continual Learning) is about AI that keeps learning new things! But when it tries to learn something new, it tends to forget what it learned before — the "catastrophic forgetting" problem 😭. In the IT industry, CL is super important for keeping up with ever-evolving data, but this problem has been the big roadblock.

● Method: P&M uses a technique called "model merging" to combine the previous model with the newly trained one! Picture people with all kinds of knowledge teaming up to form the strongest squad 👯‍♀️. On top of that, it derives a formula that keeps the loss increase to a minimum, and pairs it with LoRA, a memory-saving trick, to push performance even higher!
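Roughly speaking, the merge step is a convex combination of the two models' parameters. Here is a minimal PyTorch sketch, where `alpha` stands in for the paper's closed-form merging coefficient (not derived here) and the function and variable names are my own, not the authors':

```python
import torch

@torch.no_grad()
def merge_models(prev_model, task_model, alpha):
    """Convex combination of two models with identical architectures:
    merged = (1 - alpha) * previous + alpha * newly trained.
    `alpha` is a stand-in for P&M's closed-form merging coefficient."""
    prev_params = dict(prev_model.named_parameters())
    merged_state = {}
    for name, new_param in task_model.named_parameters():
        merged_state[name] = (1.0 - alpha) * prev_params[name] + alpha * new_param
    return merged_state

# Example usage (buffers are not merged here, hence strict=False):
# task_model.load_state_dict(merge_models(prev_model, task_model, alpha=0.5), strict=False)
```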


Train with Perturbation, Infer after Merging: A Two-Stage Framework for Continual Learning

Haomiao Qiu / Miao Zhang / Ziyue Qiao / Liqiang Nie

Continual Learning (CL) aims to enable models to continuously acquire new knowledge from a sequence of tasks while avoiding the forgetting of previously learned information. However, existing CL methods rely only on the parameters of the most recent task for inference, which makes them susceptible to catastrophic forgetting. Inspired by the recent success of model merging techniques, we propose Perturb-and-Merge (P&M), a novel continual learning framework that integrates model merging into the CL paradigm to mitigate forgetting. Specifically, after training on each task, P&M constructs a new model by forming a convex combination of the previous model and the newly trained task-specific model. Through theoretical analysis, we minimize the total loss increase across all tasks and derive a closed-form solution for the merging coefficient under mild assumptions. To further improve the performance of the merged model, we observe that the degradation introduced during merging can be alleviated by a regularization term composed of the task vector and the Hessian matrix of the loss function. Interestingly, we show that this term can be efficiently approximated using second-order symmetric finite differences, and we accordingly devise a stochastic perturbation strategy along the task vector direction that incurs no additional forward or backward passes while providing an effective approximation of the regularization term. Finally, we combine P&M with LoRA, a parameter-efficient fine-tuning method, to reduce memory overhead. Our proposed approach achieves state-of-the-art performance on several continual learning benchmark datasets. The code is available at https://github.com/qhmiao/P-M-for-Continual-Learning.
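For intuition, the two ingredients described in the abstract can be written out as below. The notation is illustrative and may differ from the paper's: θ_{t-1} is the previous merged model, θ_t the model trained on task t, v_t = θ_t − θ_{t-1} the task vector, and α_t the merging coefficient.

```latex
% Convex-combination merge used for inference after task t:
\[
  \theta_t^{\mathrm{merged}} \;=\; (1-\alpha_t)\,\theta_{t-1}^{\mathrm{merged}} \;+\; \alpha_t\,\theta_t
\]
% Standard second-order symmetric finite difference for the Hessian quadratic
% form along the task vector v_t (the kind of approximation the abstract
% refers to, which motivates perturbing the weights along v_t during training):
\[
  v_t^{\top} H\, v_t \;\approx\; \frac{L(\theta + \epsilon v_t) + L(\theta - \epsilon v_t) - 2\,L(\theta)}{\epsilon^{2}}
\]
```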

cs / cs.LG / cs.AI