Published: 2026/1/4 22:42:20

LACONIC: sparse retrieval is born! 🚀 (TL;DR: fast & low-cost search ✨)

Gyaru-style sparkle points ✨

● It's a sparse retrieval model built on Llama-3 (a super-strong LLM)! 💎
● No GPU required, and search gets blazing fast! 💨
● It keeps costs down too, so retrieval's possibilities open right up! 💰

Detailed explanation

Background: Information retrieval is seriously important, right? But conventional retrieval models eat way too much memory or require GPUs, so deploying them is a real hassle 🥺

Method: Using a powerful LLM called Llama-3, they built a **sparse retrieval** model! It's trained with a "two-phase training curriculum," which boosts performance big time ⤴️
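Phase 2 of that curriculum finetunes with curated hard negatives using a contrastive objective. As a rough illustration only (this is not the authors' code; the function, the toy 64-dim vectors, and the temperature value are all invented), here is an InfoNCE-style loss in plain Python:

```python
import math
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def contrastive_loss(q, pos, negs, temperature=0.05):
    """InfoNCE-style loss for one query: pull the positive document's
    score up, push the hard negatives' scores down."""
    sims = [dot(q, pos) / temperature] + [dot(q, n) / temperature for n in negs]
    m = max(sims)                              # subtract max for numerical stability
    z = sum(math.exp(s - m) for s in sims)
    return -(sims[0] - m - math.log(z))        # -log softmax of the positive

random.seed(0)
q = [random.gauss(0, 1) for _ in range(64)]                          # toy query vector
pos = [x + 0.1 * random.gauss(0, 1) for x in q]                      # positive: near the query
negs = [[random.gauss(0, 1) for _ in range(64)] for _ in range(4)]   # stand-in "hard" negatives
loss = contrastive_loss(q, pos, negs)
```

In the real model, the query and document vectors would be high-dimensional sparse weight vectors over the vocabulary, produced by the Llama-3-based encoder rather than sampled at random.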

Results: Memory usage drops way down, yet retrieval effectiveness is almost on par with dense models! And since it runs on CPUs, you can use it in all kinds of environments 💖
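Why can this run on a CPU? Sparse vectors let you search with a classic inverted index instead of GPU-accelerated dense vector similarity. A minimal sketch (the terms and weights below are toy values, not from the paper):

```python
from collections import defaultdict

# Toy sparse vectors: term -> weight, as a learned sparse retriever would emit.
docs = {
    "d1": {"llama": 1.2, "sparse": 0.8, "retrieval": 0.5},
    "d2": {"dense": 1.0, "vector": 0.9, "retrieval": 0.4},
    "d3": {"cpu": 0.7, "sparse": 1.1, "index": 0.6},
}

# Build an inverted index: each term maps to the docs (and weights) containing it.
index = defaultdict(list)
for doc_id, vec in docs.items():
    for term, w in vec.items():
        index[term].append((doc_id, w))

def search(query_vec, index):
    """Score docs by the sparse dot product, touching only the posting
    lists for the query's nonzero terms -- plain CPU work."""
    scores = defaultdict(float)
    for term, qw in query_vec.items():
        for doc_id, dw in index.get(term, []):
            scores[doc_id] += qw * dw
    return sorted(scores.items(), key=lambda kv: -kv[1])

results = search({"sparse": 1.0, "retrieval": 0.5}, index)
```

Because only documents sharing a nonzero term with the query are ever scored, memory and compute scale with the index's sparsity rather than with a dense embedding dimension.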

Read the rest in the 「らくらく論文」 app

LACONIC: Dense-Level Effectiveness for Scalable Sparse Retrieval via a Two-Phase Training Curriculum

Zhichao Xu / Shengyao Zhuang / Crystina Zhang / Xueguang Ma / Yijun Tian / Maitrey Mehta / Jimmy Lin / Vivek Srikumar

While dense retrieval models have become the standard for state-of-the-art information retrieval, their deployment is often constrained by high memory requirements and reliance on GPU accelerators for vector similarity search. Learned sparse retrieval offers a compelling alternative by enabling efficient search via inverted indices, yet it has historically received less attention than dense approaches. In this report, we introduce LACONIC, a family of learned sparse retrievers based on the Llama-3 architecture (1B, 3B, and 8B). We propose a streamlined two-phase training curriculum consisting of (1) weakly supervised pre-finetuning to adapt causal LLMs for bidirectional contextualization and (2) high-signal finetuning using curated hard negatives. Our results demonstrate that LACONIC effectively bridges the performance gap with dense models: the 8B variant achieves a state-of-the-art 60.2 nDCG on the MTEB Retrieval benchmark, ranking 15th on the leaderboard as of January 1, 2026, while utilizing 71% less index memory than an equivalent dense model. By delivering high retrieval effectiveness on commodity CPU hardware with a fraction of the compute budget required by competing models, LACONIC provides a scalable and efficient solution for real-world search applications.

cs / cs.IR / cs.CL