Published: 2026/1/7 5:26:26

The ultimate LLM reasoning control is here☆

It's a technique that makes super-smart LLMs' reasoning even better!

✨ Sparkly Gal Highlights ✨
● It analyzes what's going on inside the LLM's head (its hidden states)! 🧠✨
● You can pinpoint-control its reasoning habits (strategies)! 😎
● That means chatbots and all kinds of AI get smarter! 🎉

Here comes the detailed rundown~!

Background: LLMs (Large Language Models) are geniuses at writing text, right? But haven't you seen them give weird answers sometimes? 🤔 That's because the LLM picks its own way of thinking (its reasoning strategy) by itself, and that pick isn't always the best one!

Read the rest in the 「らくらく論文」 app

Controllable LLM Reasoning via Sparse Autoencoder-Based Steering

Yi Fang / Wenjie Wang / Mingfeng Xue / Boyi Deng / Fengli Xu / Dayiheng Liu / Fuli Feng

Large Reasoning Models (LRMs) exhibit human-like cognitive reasoning strategies (e.g. backtracking, cross-verification) during the reasoning process, which improves their performance on complex tasks. Currently, reasoning strategies are autonomously selected by LRMs themselves. However, such autonomous selection often produces inefficient or even erroneous reasoning paths. To make reasoning more reliable and flexible, it is important to develop methods for controlling reasoning strategies. Existing methods struggle to control fine-grained reasoning strategies due to conceptual entanglement in LRMs' hidden states. To address this, we leverage Sparse Autoencoders (SAEs) to decompose strategy-entangled hidden states into a disentangled feature space. To identify the few strategy-specific features from the vast pool of SAE features, we propose SAE-Steering, an efficient two-stage feature identification pipeline. SAE-Steering first recalls features that amplify the logits of strategy-specific keywords, filtering out over 99% of features, and then ranks the remaining features by their control effectiveness. Using the identified strategy-specific features as control vectors, SAE-Steering outperforms existing methods by over 15% in control effectiveness. Furthermore, controlling reasoning strategies can redirect LRMs from erroneous paths to correct ones, achieving a 7% absolute accuracy improvement.
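The abstract packs the whole pipeline into a few sentences, so here is a minimal, self-contained PyTorch sketch of what the two-stage identification plus steering could look like. Everything in it (the toy tensor sizes, the stand-in weights W_enc / W_dec / W_unembed, keyword_ids, alpha, and the control_effectiveness placeholder) is an assumption made for illustration based only on the abstract, not the authors' actual implementation or checkpoints.

```python
# Hedged sketch of the SAE-Steering idea: recall SAE features that boost
# strategy keywords, rank them, then steer with the chosen feature direction.
import torch

d_model, d_sae, vocab = 64, 1024, 500          # toy sizes (assumption)

# Stand-ins for a trained SAE and the LRM's unembedding matrix.
W_enc = torch.randn(d_model, d_sae)            # SAE encoder
W_dec = torch.randn(d_sae, d_model)            # SAE decoder (feature directions)
W_unembed = torch.randn(d_model, vocab)        # hidden state -> token logits

# Hypothetical token ids for strategy-specific keywords
# (e.g. tokens like "wait" or "verify" for backtracking / cross-verification).
keyword_ids = torch.tensor([11, 42, 97])

# ---- Stage 1: recall features that amplify keyword logits ----
# Project every SAE feature's decoder direction into logit space and score
# it by how strongly it boosts the strategy keywords; keep only the top few.
feature_logits = W_dec @ W_unembed             # (d_sae, vocab)
keyword_score = feature_logits[:, keyword_ids].mean(dim=-1)
recalled = torch.topk(keyword_score, k=8).indices   # filters out >99% of features

# ---- Stage 2: rank recalled features by control effectiveness ----
def control_effectiveness(feature_id: int) -> float:
    """Placeholder: the paper ranks features by how reliably steering with
    them elicits the target strategy; here we just reuse the keyword score."""
    return float(keyword_score[feature_id])

ranked = sorted(recalled.tolist(), key=control_effectiveness, reverse=True)
best_feature = ranked[0]

# ---- Steering: add the feature's decoder direction to the hidden state ----
alpha = 4.0                                     # steering strength (assumption)
steer_vec = W_dec[best_feature]                 # (d_model,)

def steer_hidden(h: torch.Tensor) -> torch.Tensor:
    """In practice this would be registered as a forward hook on a chosen
    transformer layer so the control vector is added during generation."""
    return h + alpha * steer_vec

h = torch.randn(1, 10, d_model)                 # fake residual-stream activations
print(steer_hidden(h).shape)                    # torch.Size([1, 10, 64])
```

With a real model, the random stand-ins would be replaced by the trained SAE's weights and the LRM's own unembedding matrix, and steer_hidden would be attached to the layer the SAE was trained on; the two-stage filter is what keeps this cheap despite the very large SAE feature pool.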

cs / cs.AI / cs.CL