Published: 2026/1/7 6:55:29

The Ultimate MLLM Training Speed-Up! 🚀

Ultra-short summary: A way to make training MLLMs (multilingual LLMs) blazing fast! Smash right through the language barrier 💖

✨ Gyaru-Style Sparkle Points ✨

● Slashes compute cost! 💰 Easy on your wallet too!
● Handles lots of languages! 🌍 Your world opens up~!
● Big accuracy boost for a specific language! ⤴️ You can max out your fave language!

Detailed Explanation
● Background: MLLMs are amazing things that can speak tons of languages ✨ But if you try to level up one specific language, training takes forever and performance in the other languages drops… 😭 So a more efficient way to train was needed!

Read the rest in the 「らくらく論文」 app

ELO: Efficient Layer-Specific Optimization for Continual Pretraining of Multilingual LLMs

HanGyeol Yoo / ChangSu Choi / Minjun Kim / Seohyun Song / SeungWoo Song / Inho Won / Jongyoul Park / Cheoneum Park / KyungTae Lim

We propose an efficient layer-specific optimization (ELO) method designed to enhance continual pretraining (CP) for specific languages in multilingual large language models (MLLMs). This approach addresses the common challenges of high computational cost and degradation of source language performance associated with traditional CP. The ELO method consists of two main stages: (1) ELO Pretraining, where a small subset of specific layers, identified in our experiments as the critically important first and last layers, are detached from the original MLLM and trained with the target language. This significantly reduces not only the number of trainable parameters but also the total parameters computed during the forward pass, minimizing GPU memory consumption and accelerating the training process. (2) Layer Alignment, where the newly trained layers are reintegrated into the original model, followed by a brief full fine-tuning step on a small dataset to align the parameters. Experimental results demonstrate that the ELO method achieves a training speedup of up to 6.46 times compared to existing methods, while improving target language performance by up to 6.2% on qualitative benchmarks and effectively preserving source language (English) capabilities.
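The two-stage recipe in the abstract can be sketched as follows. This is a minimal illustration over a hypothetical toy model (the layer names and parameter counts are invented here, not taken from the paper); it only shows how restricting Stage 1 training to the first and last layers shrinks the trainable-parameter budget, with Stage 2 (reintegration plus a brief full fine-tune) noted in comments:

```python
# Hypothetical toy model: a list of (layer_name, parameter_count) pairs.
# In the real setting these would be transformer blocks of an MLLM.
model = (
    [("embed", 50_000)]
    + [(f"block_{i}", 100_000) for i in range(8)]
    + [("lm_head", 50_000)]
)

def select_elo_layers(layers):
    """Stage 1 (ELO Pretraining): detach only the first and last layers,
    which the abstract identifies as the critically important ones, and
    train just those on target-language data."""
    return {layers[0][0], layers[-1][0]}

def trainable_fraction(layers, selected_names):
    """Fraction of parameters that Stage 1 actually updates."""
    total = sum(n for _, n in layers)
    trainable = sum(n for name, n in layers if name in selected_names)
    return trainable / total

selected = select_elo_layers(model)
frac = trainable_fraction(model, selected)
print(f"Stage 1 trains {sorted(selected)}: {frac:.1%} of all parameters")

# Stage 2 (Layer Alignment) would then put the newly trained first/last
# layers back into the original model and run a short full fine-tune on a
# small dataset so the old and new parameters agree.
```

With these made-up sizes, Stage 1 touches only the embedding and output layers, about 11% of the parameters, which is the source of the reported memory and speed savings.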

cs / cs.CL