Published: 2026/1/7 3:13:03

Let's Peek Behind the Scenes of LLM Inference! 🔎✨ (Layer-Order Inversion)

Super-short summary: Inside the head of an LLM (large language model), the order was actually inverted! 😳


Gal-Style Sparkle Points ✨

● A discovery that overturns the conventional wisdom about how LLMs think! 😲
● Doesn't the phrase "layer-order inversion" sound kinda cool? 😎
● Big chance the IT scene is about to get hyped! 🚀

Read the rest in the 「らくらく論文」 app

Layer-Order Inversion: Rethinking Latent Multi-Hop Reasoning in Large Language Models

Xukai Liu / Ye Liu / Jipeng Zhang / Yanghai Zhang / Kai Zhang / Qi Liu

Large language models (LLMs) perform well on multi-hop reasoning, yet how they internally compose multiple facts remains unclear. Recent work proposes the "hop-aligned circuit hypothesis," which suggests that bridge entities are computed sequentially across layers before later-hop answers. Through systematic analyses of real-world multi-hop queries, we show that this hop-aligned assumption does not generalize: later-hop answer entities can become decodable earlier than bridge entities, a phenomenon we call "layer-order inversion," which strengthens as the total number of hops grows. To explain this behavior, we propose a "probabilistic recall-and-extract" framework that models multi-hop reasoning as broad probabilistic recall in shallow MLP layers followed by selective extraction in deeper attention layers. We validate this framework empirically through systematic probing analyses; it reinterprets prior layer-wise decoding evidence, explains chain-of-thought gains, and provides a mechanistic diagnosis of multi-hop failures that occur despite correct single-hop knowledge. Code is available at https://github.com/laquabe/Layer-Order-Inversion.
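For context, the "layer-wise decodability" the abstract refers to is usually measured with a logit-lens-style probe: each layer's hidden state is projected through the model's final layer norm and unembedding head, and one records the earliest layer at which an entity's token becomes the top-1 prediction. Below is a minimal sketch of such a probe, assuming a Hugging Face GPT-2 model; the prompt, entity names, and the first_decodable_layer helper are illustrative stand-ins, not the paper's actual setup (see the linked repo for that).

```python
# Minimal logit-lens sketch. Assumptions: GPT-2 via Hugging Face transformers;
# the paper's own probing protocol may differ in model, positions, and metric.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative stand-in; any decoder-only LM works similarly
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def first_decodable_layer(prompt: str, target: str):
    """Return the earliest layer at which `target`'s first token is the
    top-1 prediction at the final position, decoded through the LM head."""
    target_id = tok.encode(" " + target)[0]  # leading space for GPT-2 BPE
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # hidden_states is a tuple: (embeddings, layer_1, ..., layer_L)
    for layer, h in enumerate(out.hidden_states[1:], start=1):
        # Project the last position through the final layer norm + LM head.
        logits = model.lm_head(model.transformer.ln_f(h[:, -1, :]))
        if logits.argmax(dim=-1).item() == target_id:
            return layer
    return None  # never becomes top-1 at any layer

# Two-hop query: bridge entity = "France", answer entity = "Paris".
prompt = "The capital of the country where the Eiffel Tower is located is"
print("bridge decodable at layer:", first_decodable_layer(prompt, "France"))
print("answer decodable at layer:", first_decodable_layer(prompt, "Paris"))
```

Under the hop-aligned hypothesis, the bridge entity ("France") should become decodable at an earlier layer than the answer ("Paris"); layer-order inversion is the opposite pattern, with the answer decodable at the same depth or shallower.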

cs / cs.CL / cs.AI