Title & Super Summary: CLUSTERFUSION! Blazing-fast text clustering with LLMs✨
1. Gyaru-Style Sparkle Points✨
● A fresh idea that puts the LLM (large language model) front and center as the clustering engine💡 No more playing a side-character role! ● Domain knowledge and user preferences drop right into the prompt, so you get super personalized analysis💖 ● A three-stage pipeline designed to draw out the LLM's full potential! Expect blazing-fast, high-accuracy results🫶
2. Detailed Explanation
Text clustering is a fundamental task in natural language processing, yet traditional clustering algorithms with pre-trained embeddings often struggle in domain-specific contexts without costly fine-tuning. Large language models (LLMs) provide strong contextual reasoning, yet prior work mainly uses them as auxiliary modules to refine embeddings or adjust cluster boundaries. We propose ClusterFusion, a hybrid framework that instead treats the LLM as the clustering core, guided by lightweight embedding methods. The framework proceeds in three stages: embedding-guided subset partition, LLM-driven topic summarization, and LLM-based topic assignment. This design enables direct incorporation of domain knowledge and user preferences, fully leveraging the contextual adaptability of LLMs. Experiments on three public benchmarks and two new domain-specific datasets demonstrate that ClusterFusion not only achieves state-of-the-art performance on standard tasks but also delivers substantial gains in specialized domains. To support future work, we release our newly constructed dataset and results on all benchmarks.
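The three stages described above can be sketched in miniature. This is a hypothetical illustration only: all function names are my own, and the two "LLM" steps are stubbed with trivial word-frequency heuristics standing in for real model calls, since the paper's actual prompts and models are not given here.

```python
# Toy sketch of a ClusterFusion-style three-stage pipeline.
# The "LLM" stubs below are placeholders, NOT the authors' implementation.
from collections import Counter

def embed(text):
    # Lightweight stand-in for a sentence embedding: a bag-of-words vector.
    return Counter(text.lower().split())

def similarity(a, b):
    # Cosine similarity between two bag-of-words vectors.
    num = sum(a[w] * b[w] for w in set(a) & set(b))
    den = (sum(v * v for v in a.values()) ** 0.5) * \
          (sum(v * v for v in b.values()) ** 0.5)
    return num / den if den else 0.0

def partition_subsets(texts, k):
    # Stage 1: embedding-guided subset partition.
    # Greedy: seed k subsets with the first k texts, then attach each
    # remaining text to its most similar seed.
    seeds = texts[:k]
    subsets = [[s] for s in seeds]
    for t in texts[k:]:
        best = max(range(k), key=lambda i: similarity(embed(t), embed(seeds[i])))
        subsets[best].append(t)
    return subsets

def llm_summarize_topic(subset):
    # Stage 2: LLM-driven topic summarization (stub: the most frequent
    # word stands in for the LLM's generated topic label).
    counts = Counter(w for t in subset for w in t.lower().split())
    return counts.most_common(1)[0][0]

def llm_assign_topic(text, topics):
    # Stage 3: LLM-based topic assignment (stub: pick the label that
    # overlaps most with the text).
    return max(topics, key=lambda topic: similarity(embed(text), embed(topic)))

def clusterfusion(texts, k):
    # Run the three stages end to end: partition, summarize, assign.
    subsets = partition_subsets(texts, k)
    topics = [llm_summarize_topic(s) for s in subsets]
    return {t: llm_assign_topic(t, topics) for t in texts}
```

In the real framework, the partition stage would use proper pre-trained embeddings, and stages 2 and 3 would be prompted LLM calls, which is where domain knowledge and user preferences can be injected directly.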