Published: 2025/8/22 17:26:33

Get the SAE's L0 setting wrong and you're in trouble! Unraveling the mysteries of LLMs 💖

  1. Title & Super Summary (within 15 characters): The L0 trap! Cracking LLMs with SAEs 💖

  2. Gal-Style Sparkle Points ✨

    • Research that unlocks the secrets 🗝️ of LLMs (large language models)!
    • The L0 setting of an SAE (sparse autoencoder) apparently changes what the SAE learns from the LLM!
    • If you don't set L0 properly, the SAE ends up learning the wrong features 😱
  3. Detailed Explanation

    • Background: LLMs are amazing, but the inside is a black box, right? 🤔 With a magic tool called an SAE, though, you can get a little peek at what the LLM is thinking! And what makes or breaks the SAE's performance is the setting of L0 (the average number of features that activate per token)!

    • Method: They ran experiments varying the L0 setting and checked what information the SAE captures from the LLM! If L0 is too low, features get mixed together; if it's too high, the SAE ends up with features that are hard to interpret! They also propose a way to find the right L0 ✨ (see the little sketch right below this list).
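By the way, "L0" sounds fancy, but in code it's just a count! Here's a tiny sketch (not from the paper — the function and the numbers are made up for illustration) of how you'd measure an SAE's average L0 per token:

```python
import torch

# Toy illustration (not from the paper): L0 is just the average number of
# SAE features that are nonzero ("fire") per token.
def mean_l0(feature_acts: torch.Tensor) -> float:
    """feature_acts: (n_tokens, n_features) SAE activations after the encoder."""
    return (feature_acts != 0).float().sum(dim=-1).mean().item()

# Hypothetical numbers: 4 tokens, 8 SAE features, only a few fire per token.
acts = torch.zeros(4, 8)
acts[0, [1, 5]] = 1.0      # 2 features fire
acts[1, [0, 2, 7]] = 1.0   # 3 features fire
acts[2, [3]] = 1.0         # 1 feature fires
acts[3, [4, 6]] = 1.0      # 2 features fire
print(mean_l0(acts))       # (2 + 3 + 1 + 2) / 4 = 2.0
```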

Read the rest in the 「らくらく論文」 (Rakuraku Ronbun) app

Sparse but Wrong: Incorrect L0 Leads to Incorrect Features in Sparse Autoencoders

David Chanin / Adrià Garriga-Alonso

Sparse Autoencoders (SAEs) extract features from LLM internal activations, meant to correspond to single concepts. A core SAE training hyperparameter is L0: how many features should fire per token on average. Existing work compares SAE algorithms using sparsity–reconstruction tradeoff plots, implying L0 is a free parameter with no single correct value. In this work we study the effect of L0 on BatchTopK SAEs, and show that if L0 is not set precisely, the SAE fails to learn the underlying features of the LLM. If L0 is too low, the SAE will mix correlated features to improve reconstruction. If L0 is too high, the SAE finds degenerate solutions that also mix features. Further, we demonstrate a method to determine the correct L0 value for an SAE on a given training distribution, which finds the true L0 in toy models and coincides with peak sparse probing performance in LLMs. We find that most commonly used SAEs have an L0 that is too low. Our work shows that, to train SAEs with correct features, practitioners must set L0 correctly.
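As a companion to the abstract, here is a minimal illustrative sketch of a BatchTopK SAE in PyTorch (a toy under the standard BatchTopK formulation, not the authors' implementation): the point is that L0 enters training as the average number of features allowed to fire per token, enforced by keeping only the top L0 × batch_size activations across the whole batch.

```python
import torch
import torch.nn as nn

class BatchTopKSAE(nn.Module):
    """Minimal sketch of a BatchTopK SAE (illustrative, not the paper's code):
    keep the top (l0 * batch_size) feature activations across the whole batch,
    so l0 is the *average* number of features firing per token."""

    def __init__(self, d_model: int, n_features: int, l0: int):
        super().__init__()
        self.l0 = l0
        self.W_enc = nn.Parameter(torch.randn(d_model, n_features) * 0.01)
        self.b_enc = nn.Parameter(torch.zeros(n_features))
        self.W_dec = nn.Parameter(torch.randn(n_features, d_model) * 0.01)
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_model) LLM activations
        pre = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)
        # Keep only the top (l0 * batch_size) activations across the entire batch.
        k = self.l0 * x.shape[0]
        flat = pre.flatten()
        topk = flat.topk(k)
        mask = torch.zeros_like(flat)
        mask[topk.indices] = 1.0
        acts = (flat * mask).view_as(pre)
        # Reconstruct the original activations from the sparse feature vector.
        return acts @ self.W_dec + self.b_dec

# Hypothetical usage: reconstruct a batch of LLM activations with average L0 = 32.
# sae = BatchTopKSAE(d_model=768, n_features=16384, l0=32)
# recon = sae(torch.randn(64, 768))
```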

cs / cs.LG / cs.AI / cs.CL