🌟 Sparkly Gal Highlights
● The idea of just adding IDs and text together is genius! 💡
● Accuracy goes up without doing anything complicated, love it! 💖
● I can totally see a future where companies make money off this! 💰
Here comes the detailed explanation~!
Background: Recent recommendation systems work hard "just for you" by using all sorts of data, like users' past behavior and product descriptions 🔥 But IDs alone aren't enough, and text alone doesn't carry all the information… 💦
Method: This study trains IDs (think product numbers) and text (product descriptions, etc.) separately, then combines them at the end! In other words, it takes the best of both 💖 A strategy that keeps each kind of information intact and goes for a synergy boost 🚀
Modern Sequential Recommendation (SR) models commonly utilize modality features to represent items, motivated in large part by recent advancements in language and vision modeling. To do so, several works completely replace ID embeddings with modality embeddings, claiming that modality embeddings render ID embeddings unnecessary because they can match or even exceed ID embedding performance. On the other hand, many works jointly utilize ID and modality features, but posit that complex fusion strategies, such as multi-stage training and/or intricate alignment architectures, are necessary for this joint utilization. However, underlying both these lines of work is a lack of understanding of the complementarity of ID and modality features. In this work, we address this gap by studying the complementarity of ID- and text-based SR models. We show that these models do learn complementary signals, meaning that either should provide performance gain when used properly alongside the other. Motivated by this, we propose a new SR method that preserves ID-text complementarity through independent model training, then harnesses it through a simple ensembling strategy. Despite this method's simplicity, we show it outperforms several competitive SR baselines, implying that both ID and text features are necessary to achieve state-of-the-art SR performance but complex fusion architectures are not.
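The "simple ensembling strategy" over independently trained ID- and text-based models can be illustrated with score-level fusion. The sketch below is a hypothetical instance, not the paper's exact method: it assumes each model outputs a relevance score per candidate item, z-normalizes the two score vectors so their scales are comparable, and mixes them with a weight `alpha` (both the normalization and `alpha` are assumptions for illustration).

```python
import numpy as np

def ensemble_scores(id_scores, text_scores, alpha=0.5):
    """Fuse item scores from two independently trained SR models.

    alpha is a hypothetical mixing weight between the ID-based model
    (alpha) and the text-based model (1 - alpha).
    """
    def znorm(s):
        # z-normalize so the two models' score scales are comparable
        # (an assumption; the paper's exact fusion rule may differ)
        s = np.asarray(s, dtype=float)
        return (s - s.mean()) / (s.std() + 1e-8)

    return alpha * znorm(id_scores) + (1 - alpha) * znorm(text_scores)

# Toy example: scores over 5 candidate items from each model.
# The ID model favors item 0; the text model favors item 1.
id_scores = [2.0, 0.5, 1.0, -0.3, 0.1]
text_scores = [0.2, 1.8, 0.9, 0.0, -0.5]

fused = ensemble_scores(id_scores, text_scores, alpha=0.5)
top_item = int(np.argmax(fused))  # item ranked first after fusion
```

Because each model is trained on its own, the complementary signals are never entangled during optimization; fusion happens only at inference time, which is what makes the strategy simple compared with multi-stage training or alignment architectures.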