Published: 2026/1/8 9:34:54

Blazing-Fast LLMs! A New Era of Judge Decoding Is Here ✨

Ultra-summary: Researchers found a way to make Judge Decoding, which speeds up LLM (large language model) inference, even better! And it works without any supervision data, isn't that amazing? 😍

Gal-Style Sparkle Points ✨

● LLMs are about to get seriously fast! 💨
● Getting smarter without supervision data (think: no teacher needed) is divine ✨
● Feels like this could spawn a ton of new business opportunities, right? 💰

Detailed Explanation

Continue reading in the 「らくらく論文」 app

Revisiting Judge Decoding from First Principles via Training-Free Distributional Divergence

Shengyin Sun / Yiming Li / Renxi Liu / Weizhe Lin / Hui-Ling Zhen / Xianzhi Yu / Mingxuan Yuan / Chen Ma

Judge Decoding accelerates LLM inference by relaxing the strict verification of Speculative Decoding, yet it typically relies on expensive and noisy supervision. In this work, we revisit this paradigm from first principles, revealing that the "criticality" scores learned via costly supervision are intrinsically encoded in the draft-target distributional divergence. We theoretically prove a structural correspondence between learned linear judges and Kullback-Leibler (KL) divergence, demonstrating they rely on the same underlying logit primitives. Guided by this, we propose a simple, training-free verification mechanism based on KL divergence. Extensive experiments across reasoning and coding benchmarks show that our method matches or outperforms complex trained judges (e.g., AutoJudge), offering superior robustness to domain shifts and eliminating the supervision bottleneck entirely.
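The abstract doesn't spell out the mechanism in code, but the core idea is simple enough to sketch: instead of a trained linear judge scoring each drafted token, compare the draft and target models' next-token distributions directly and accept the token while their KL divergence stays small. The Python below is a minimal illustration under stated assumptions; the function names, the KL direction (target vs. draft), and the threshold `tau` are illustrative choices, not the paper's exact formulation or calibration.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the vocabulary axis."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-12) -> float:
    """KL(p || q) = sum_i p_i * log(p_i / q_i), with eps for stability."""
    return float(np.sum(p * (np.log(p + eps) - np.log(q + eps))))

def verify_draft_tokens(draft_logits, target_logits, draft_tokens, tau=0.5):
    """Training-free verification sketch (hypothetical helper, not the
    paper's implementation): accept drafted tokens left to right while the
    target-vs-draft divergence stays below `tau`; the first rejected
    position would be re-decoded by the target model, as in standard
    speculative decoding. Returns the accepted prefix of `draft_tokens`.
    """
    accepted = 0
    for d_log, t_log in zip(draft_logits, target_logits):
        p_target = softmax(t_log)
        p_draft = softmax(d_log)
        if kl_divergence(p_target, p_draft) > tau:
            break  # distributions disagree too much: stop accepting here
        accepted += 1
    return draft_tokens[:accepted]

# Toy usage: 3 drafted positions over a 5-token vocabulary, with the
# target model's logits close to the draft's, so most tokens are accepted.
rng = np.random.default_rng(0)
draft_logits = rng.normal(size=(3, 5))
target_logits = draft_logits + rng.normal(scale=0.1, size=(3, 5))
print(verify_draft_tokens(draft_logits, target_logits, [7, 3, 9]))
```

The appeal of this scheme, per the abstract, is that the divergence is computed from logits both models already produce, so no judge training, labels, or supervision pipeline is needed at all.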

cs / cs.CL