Published: 2025/10/23 6:31:00

Updating LLM Recommendations 🚀 Fine-tuning & RAG for the Win ✨

Super-short summary: Research on how to smartly keep LLM-powered recommendations up to date!

✨ Gal-Style Sparkle Points ✨

● Catches the latest info so it never misses a user's "likes ♡"!
● Best cost-performance! Keeps update costs low while always staying in top shape!
● A revolutionary technique that brightens the future of the IT industry 🌟

Detailed Explanation


Balancing Fine-tuning and RAG: A Hybrid Strategy for Dynamic LLM Recommendation Updates

Changping Meng / Hongyi Ling / Jianling Wang / Yifan Liu / Shuzhou Zhang / Dapeng Hong / Mingyan Gao / Onkar Dalal / Ed Chi / Lichan Hong / Haokai Lu / Ningren Han

Large Language Models (LLMs) empower recommendation systems through their advanced reasoning and planning capabilities. However, the dynamic nature of user interests and content poses a significant challenge: While initial fine-tuning aligns LLMs with domain knowledge and user preferences, it fails to capture such real-time changes, necessitating robust update mechanisms. This paper investigates strategies for updating LLM-powered recommenders, focusing on the trade-offs between ongoing fine-tuning and Retrieval-Augmented Generation (RAG). Using an LLM-powered user interest exploration system as a case study, we perform a comparative analysis of these methods across dimensions like cost, agility, and knowledge incorporation. We propose a hybrid update strategy that leverages the long-term knowledge adaptation of periodic fine-tuning with the agility of low-cost RAG. We demonstrate through live A/B experiments on a billion-user platform that this hybrid approach yields statistically significant improvements in user satisfaction, offering a practical and cost-effective framework for maintaining high-quality LLM-powered recommender systems.
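To make the hybrid update strategy concrete, below is a minimal, self-contained Python sketch, not the authors' implementation: periodic fine-tuning handles slow-moving domain and user knowledge, while a cheap retrieval (RAG) layer makes freshly published content usable immediately. The class name HybridRecommender, the lexical_overlap retrieval score, the 30-day fine-tuning interval, and the stubbed LLM call are all hypothetical placeholders chosen for illustration.

```python
# Hypothetical sketch of a hybrid update loop: periodic fine-tuning + low-cost RAG.
from dataclasses import dataclass, field
from datetime import datetime, timedelta


def lexical_overlap(query: str, doc: str) -> int:
    """Toy retrieval score: number of shared lowercase tokens."""
    return len(set(query.lower().split()) & set(doc.lower().split()))


@dataclass
class HybridRecommender:
    # Slow, costly update path: fine-tune only on a fixed schedule.
    finetune_interval: timedelta = timedelta(days=30)
    last_finetune: datetime = field(default_factory=lambda: datetime.min)
    # Fast, cheap update path: a retrieval index of fresh items, updated continuously.
    fresh_items: dict = field(default_factory=dict)

    def add_fresh_item(self, item_id: str, description: str) -> None:
        """RAG path: new content becomes usable immediately, without retraining."""
        self.fresh_items[item_id] = description

    def maybe_finetune(self, interaction_log: list) -> bool:
        """Fine-tuning path: refresh long-term domain/user knowledge periodically."""
        if datetime.now() - self.last_finetune < self.finetune_interval:
            return False
        # Placeholder for an actual fine-tuning job over accumulated interactions.
        print(f"Launching fine-tuning job on {len(interaction_log)} interactions...")
        self.last_finetune = datetime.now()
        return True

    def recommend(self, user_history: list, k: int = 3) -> list:
        """Query time: retrieve fresh items relevant to recent history; a real system
        would build a prompt from these candidates and the history, then call the
        fine-tuned LLM. The LLM call is stubbed out here."""
        query = " ".join(user_history[-5:])
        ranked = sorted(self.fresh_items.items(),
                        key=lambda kv: lexical_overlap(query, kv[1]),
                        reverse=True)
        return [item_id for item_id, _ in ranked[:k]]


if __name__ == "__main__":
    rec = HybridRecommender()
    rec.add_fresh_item("v1", "brand new indie game speedrun highlights")
    rec.add_fresh_item("v2", "sourdough bread baking tutorial for beginners")
    rec.maybe_finetune(["watched speedrun video", "liked baking clip"])
    print(rec.recommend(["indie game speedrun world record attempt"]))
```

The design point this toy mirrors is the trade-off the paper studies: the retrieval index can absorb new items and recent interactions within minutes at negligible cost, while the expensive fine-tuning job runs only periodically to fold accumulated knowledge back into the model.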

cs / cs.IR