Published: 2025/11/7 22:11:11

Title: LLMs' ICL doesn't rely on copying, seriously?🌟 Quick summary: turns out LLMs can pull off in-context learning even without copy-paste✨

● A study that digs into the secret of what makes LLMs (large language models) so smart!
● It explores whether ICL (in-context learning) is possible without relying on the copying ability!
● With the new training method Hapax, the future of AI might get even brighter!

Here comes the detailed explanation~💖

Background: LLMs have this ability called ICL, where they learn from the context and get smarter, but it was thought that this partly relies on copy-paste (the copying ability)🤔 "Couldn't they do ICL without copying, though?" is the question this study started from!

Method: They developed a new training setup called Hapax, which suppresses copy-paste! It drops from the training loss any token that could be predicted just by copying an earlier pattern from the context, which cuts down on inductive copying! Then they tested how well the model handles ICL tasks!
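To make that concrete, here is a minimal sketch of a Hapax-style masked loss. It's only an illustration: the bigram-match test below is an assumed stand-in for "correctly predicted by induction heads", and the helper names (`induction_predictable_mask`, `hapax_loss`) are made up for this sketch, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def induction_predictable_mask(tokens: torch.Tensor) -> torch.Tensor:
    """True at position t if tokens[t] could be predicted by copying:
    some earlier bigram (tokens[s], tokens[s+1]) with s + 1 < t matches
    (tokens[t-1], tokens[t])."""
    T = tokens.shape[0]
    mask = torch.zeros(T, dtype=torch.bool)
    for t in range(2, T):
        # earlier positions s whose token equals the current previous token
        matches = (tokens[: t - 1] == tokens[t - 1]).nonzero(as_tuple=True)[0]
        if len(matches) > 0 and (tokens[matches + 1] == tokens[t]).any():
            mask[t] = True
    return mask

def hapax_loss(logits: torch.Tensor, tokens: torch.Tensor) -> torch.Tensor:
    """Next-token cross-entropy that drops the loss contribution of positions
    whose target token is flagged as copy-predictable.
    logits: (T, vocab_size); tokens: (T,). In practice, guard against the
    (rare) case where every position in a sequence gets masked."""
    targets = tokens[1:]          # logits[i] predicts tokens[i + 1]
    preds = logits[:-1]
    keep = ~induction_predictable_mask(tokens)[1:]
    return F.cross_entropy(preds[keep], targets[keep])
```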


In-Context Learning Without Copying

Kerem Sahin / Sheridan Feucht / Adam Belfki / Jannik Brinkmann / Aaron Mueller / David Bau / Chris Wendler

Induction heads are attention heads that perform inductive copying by matching patterns from earlier context and copying their continuations verbatim. As models develop induction heads, they often experience a sharp drop in training loss, a phenomenon cited as evidence that induction heads may serve as a prerequisite for more complex in-context learning (ICL) capabilities. In this work, we ask whether transformers can still acquire ICL capabilities when inductive copying is suppressed. We propose Hapax, a setting where we omit the loss contribution of any token that can be correctly predicted by induction heads. Despite a significant reduction in inductive copying, performance on abstractive ICL tasks (i.e., tasks where the answer is not contained in the input context) remains comparable and surpasses the vanilla model on 13 of 21 tasks, even though 31.7% of tokens are omitted from the loss. Furthermore, our model achieves lower loss values on token positions that cannot be predicted correctly by induction heads. Mechanistic analysis further shows that models trained with Hapax develop fewer and weaker induction heads but still preserve ICL capabilities. Taken together, our findings indicate that inductive copying is not essential for learning abstractive ICL mechanisms.
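As a side note on the mechanistic analysis: a common way to quantify how induction-like an attention head is, is a prefix-matching-style score on a repeated random token sequence. The sketch below is one assumed way such a measurement could look (the commented-out helper `get_attention_patterns` is hypothetical), not the paper's actual analysis code.

```python
import torch

def prefix_matching_score(attn: torch.Tensor, tokens: torch.Tensor) -> float:
    """attn: (T, T) causal attention pattern of one head over `tokens` (length T).
    Returns the average attention mass placed on positions s + 1 where
    tokens[s] == tokens[t], i.e. on the token that followed an earlier
    occurrence of the current token (the induction target)."""
    T = tokens.shape[0]
    per_position = []
    for t in range(2, T):
        targets = [s + 1 for s in range(t - 1) if tokens[s] == tokens[t]]
        if targets:
            per_position.append(attn[t, targets].sum().item())
    return sum(per_position) / len(per_position) if per_position else 0.0

# Usage sketch on a repeated random sequence (helpers below are assumptions):
# seq = torch.randint(0, vocab_size, (64,))
# tokens = torch.cat([seq, seq])                  # second half repeats the first verbatim
# attn = get_attention_patterns(model, tokens)[layer][head]   # hypothetical helper
# print(prefix_matching_score(attn, tokens))      # near 1.0 for a strong induction head
```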

cs / cs.CL