Published: 2025/12/25 9:22:56

Boost Your Recommendations with LLMs! Supercharging I2I Models 🚀

TL;DR: This research uses LLMs (large language models) to massively boost recommendation accuracy! It tackles data sparsity and noise, so you can aim for higher sales too 💖

✨ Gal-Style Sparkle Points ✨
● LLMs are amazing! They can generate data, filter it, and just generally do it all ✨
● Long-tail items (niche products) are covered too! You can surface more than just the bestsellers 💕
● Already proven in live e-commerce (online shopping) experiments, with solid results 👍

Detailed Explanation

Background


LLM-I2I: Boost Your Small Item2Item Recommendation Model with Large Language Model

Yinfu Feng / Yanjing Wu / Rong Xiao / Xiaoyi Zen

Item-to-Item (I2I) recommendation models are widely used in real-world systems due to their scalability, real-time capabilities, and high recommendation quality. Research to enhance I2I performance focuses on two directions: 1) model-centric approaches, which adopt deeper architectures but risk increased computational costs and deployment complexity, and 2) data-centric methods, which refine training data without altering models, offering cost-effectiveness but struggling with data sparsity and noise. To address these challenges, we propose LLM-I2I, a data-centric framework leveraging Large Language Models (LLMs) to mitigate data quality issues. LLM-I2I includes (1) an LLM-based generator that synthesizes user-item interactions for long-tail items, alleviating data sparsity, and (2) an LLM-based discriminator that filters noisy interactions from real and synthetic data. The refined data is then fused to train I2I models. Evaluated on industry (AEDS) and academic (ARD) datasets, LLM-I2I consistently improves recommendation accuracy, particularly for long-tail items. Deployed on a large-scale cross-border e-commerce platform, it boosts recall number (RN) by 6.02% and gross merchandise value (GMV) by 1.22% over existing I2I models. This work highlights the potential of LLMs in enhancing data-centric recommendation systems without modifying model architectures.
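The abstract describes a two-stage, data-centric pipeline: an LLM-based generator synthesizes interactions for long-tail items, an LLM-based discriminator filters noisy interactions from real and synthetic data, and the refined set is fused to train the I2I model. Here is a minimal Python sketch of that data flow, with the two LLM roles stubbed out as simple heuristics; every function name and threshold below is an illustrative assumption, not the paper's actual implementation.

```python
from collections import Counter

def llm_generate_interactions(item, n=2):
    """Stub for the LLM-based generator: synthesize plausible user-item
    interactions for a long-tail item (a real system would prompt an LLM)."""
    return [(f"synthetic_user_{item}_{i}", item) for i in range(n)]

def llm_discriminate(interaction):
    """Stub for the LLM-based discriminator: keep an interaction unless it
    looks noisy (here, 'bot_' user ids stand in for an LLM's judgment)."""
    user, _ = interaction
    return not user.startswith("bot_")

def llm_i2i_refine(real_interactions, tail_threshold=2):
    # 1) Identify long-tail items: items with few real interactions.
    counts = Counter(item for _, item in real_interactions)
    long_tail = [item for item, c in counts.items() if c < tail_threshold]

    # 2) Generator: synthesize extra interactions for long-tail items
    #    to mitigate data sparsity.
    synthetic = [x for item in long_tail for x in llm_generate_interactions(item)]

    # 3) Discriminator: filter noisy interactions from real + synthetic data,
    #    then fuse the result into one training set for the I2I model.
    return [x for x in real_interactions + synthetic if llm_discriminate(x)]

real = [("u1", "popular"), ("u2", "popular"), ("bot_1", "popular"), ("u3", "niche")]
data = llm_i2i_refine(real)
```

With this toy input, "niche" falls below the interaction threshold and gets synthetic interactions added, while the bot interaction on "popular" is filtered out, so the fused training data is denser for the long-tail item and cleaner overall.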

cs / cs.IR / cs.AI