The ultimate gal AI has arrived~! ✨ Breaking down the latest paper in super-cute style 💖
TL;DR: Research on making language model (LM) reasoning smarter and cheaper! 
Gal-style sparkle points ✨ ● Turns the LM's thinking into a tree 🌲 so it can try out lots of different ideas! ● Uses the probabilities of thoughts to guide the search 🧭 and cut wasted computation! ● Easy on the wallet 👛 yet still solves problems smartly — the best!
Detailed explanation ● Background: Recent LLMs (big brains) are amazing, but thinking costs money 💰… Solving complex problems needs more diverse reasoning! Methods like CoT (Chain-of-Thought) and ToT (Tree-of-Thoughts) are popular for this!
Recent studies have explored integrating state-space search algorithms with Language Models (LMs) to perform look-ahead on the token-generation process via the "Tree-of-Thoughts" (ToT) generated by LMs, thereby improving performance on problem-solving tasks. However, the associated search algorithms often overlook the significant computational costs of LM inference, particularly in scenarios with constrained computational budgets. Consequently, we address the problem of improving LM performance on problem-solving tasks under limited computational budgets. We demonstrate how the probabilities assigned to thoughts by LMs can serve as a heuristic to guide search within the ToT framework, thereby reducing the number of thought evaluations. Building on this insight, we adapt a heuristic search algorithm, Levin Tree Search (LTS), to the ToT framework, leveraging LMs as policies to guide tree exploration efficiently. We extend the theoretical results of LTS by showing that, for ToT (a pruned tree), LTS guarantees a bound on the number of states expanded and, consequently, on the number of thoughts generated. Additionally, we analyze the sensitivity of this bound to the temperature values commonly used in the final softmax layer of the LM. Empirical evaluation under a fixed LM query budget demonstrates that LTS consistently achieves comparable or higher accuracy than baseline search algorithms within the ToT framework, across three domains (Blocksworld, PrOntoQA, Array Sorting) and four distinct LMs. These findings highlight the efficacy of LTS on ToT, particularly in enabling cost-effective and time-efficient problem-solving, making it well-suited for latency-critical and resource-constrained applications.
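To make the core idea concrete, here is a minimal sketch of Levin Tree Search as a best-first search ordered by the Levin cost (d(n)+1)/π(n), where d(n) is the node's depth and π(n) is the product of policy probabilities along the path to n. This is an illustration under assumptions, not the paper's implementation: the `expand(state)` callback, which stands in for an LM proposing thoughts with their probabilities, and the `budget` cap on expansions are hypothetical names introduced here.

```python
import heapq
from itertools import count


def levin_tree_search(root, expand, is_goal, budget):
    """Best-first search ordered by the Levin cost (depth + 1) / path_prob.

    `expand(state)` yields (child_state, probability) pairs -- a stand-in
    for an LM policy scoring candidate thoughts (hypothetical interface).
    `budget` caps the number of node expansions (i.e., LM queries).
    Returns (goal_state_or_None, expansions_used).
    """
    tie = count()  # tie-breaker so heapq never compares states directly
    # Heap entries: (levin_cost, tie, depth, path_probability, state)
    frontier = [(1.0, next(tie), 0, 1.0, root)]
    expansions = 0
    while frontier and expansions < budget:
        _, _, depth, prob, state = heapq.heappop(frontier)
        if is_goal(state):
            return state, expansions
        expansions += 1
        for child, p in expand(state):
            child_prob = prob * p
            if child_prob > 0.0:
                cost = (depth + 2) / child_prob  # (d(child)+1) / pi(child)
                heapq.heappush(frontier, (cost, next(tie), depth + 1,
                                          child_prob, child))
    return None, expansions  # budget exhausted or tree fully pruned
```

Because high-probability paths keep their Levin cost low, the search expands the thoughts the policy finds plausible first, which is how LM probabilities reduce the number of thought evaluations under a fixed budget. Lowering the softmax temperature sharpens the policy and concentrates the search further, at the risk of starving low-probability branches.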