Published: 2025/12/24 4:13:17

The ultimate gyaru descends upon the world of recommendation! 🌸

  1. Title & Ultra-Summary: Diverse recommendations are here! The "Tree of Preferences" is about to hype up the IT world 💖

  2. Gyaru-Style Sparkle Points ✨
    ● An LLM can see through to the "likes" that past data alone could never find 👀✨
    ● Isn't the "Tree of Preferences" amazing? 🌳 Like branching limbs, it serves up all kinds of recommendations!
    ● Diversity matters! Sparkling, varied options make everyone happier 🥳

  3. Detailed Explanation

    • Background: Recommendations these days are kind of one-note, right? 😩 Tied down by past data, they may never find your true preferences…!
    • Method: Using the knowledge of an LLM (large language model), it thoroughly analyzes your latent preferences! 👀 With the Tree of Preferences (ToP), it classifies them from coarse to fine and surfaces all kinds of recommendations 💖
    • Results: Instead of uniform recommendations, it can now make genuinely diverse suggestions ✨ You'll discover all sorts of new "likes," and every day will be totally lit 🎵
    • Significance (here's the amazing ♡ part): It tackles a real challenge facing the IT industry! 🥳 It could lead to higher customer satisfaction, service differentiation, and even new markets…! IT companies, you'd better check this out! 😎
  4. Real-Life Use-Case Ideas 💡

    • If YouTube started recommending videos from genres you know nothing about, a whole new world might open up! 🌍✨
    • If an e-commerce (online shopping) site showed you products outside your usual style, that's a chance to discover a new "like" 💖


Tree of Preferences for Diversified Recommendation

Hanyang Yuan / Ning Tang / Tongya Zheng / Jiarong Xu / Xintong Hu / Renhong Huang / Shunyu Liu / Jiacong Hu / Jiawei Chen / Mingli Song

Diversified recommendation, which can effectively address the homogeneity of recommended items, has attracted increasing attention from both researchers and practitioners. Existing approaches predominantly aim to infer the diversity of user preferences from observed user feedback. Nonetheless, due to inherent data biases, the observed data may not fully reflect user interests, where underexplored preferences can be overwhelmed or remain unmanifested. Failing to capture these preferences can lead to suboptimal diversity in recommendations. To fill this gap, this work aims to study diversified recommendation from a data-bias perspective. Inspired by the outstanding performance of large language models (LLMs) in zero-shot inference leveraging world knowledge, we propose a novel approach that utilizes LLMs' expertise to uncover underexplored user preferences from observed behavior, ultimately providing diverse and relevant recommendations. To achieve this, we first introduce Tree of Preferences (ToP), an innovative structure constructed to model user preferences from coarse to fine. ToP enables LLMs to systematically reason over the user's rationale behind their behavior, thereby uncovering their underexplored preferences. To guide diversified recommendations using uncovered preferences, we adopt a data-centric approach, identifying candidate items that match user preferences and generating synthetic interactions that reflect underexplored preferences. These interactions are integrated to train a general recommender for diversification. Moreover, we scale up overall efficiency by dynamically selecting influential users during optimization. Extensive evaluations of both diversity and relevance show that our approach outperforms existing methods in most cases and achieves near-optimal performance in others, with reasonable inference latency.
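The coarse-to-fine preference tree in the abstract can be pictured with a tiny sketch. This is a hypothetical toy in Python, not the paper's implementation: the taxonomy, node names, and the sibling-based heuristic standing in for the LLM reasoning step are all assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class PrefNode:
    """One preference category; children refine it (coarse -> fine)."""
    name: str
    children: list["PrefNode"] = field(default_factory=list)

def build_example_top() -> PrefNode:
    # Toy taxonomy; in the paper, ToP is built with LLM world knowledge.
    return PrefNode("movies", [
        PrefNode("sci-fi", [PrefNode("space opera"), PrefNode("cyberpunk")]),
        PrefNode("drama", [PrefNode("courtroom"), PrefNode("romance")]),
    ])

def underexplored(root: PrefNode, observed: set[str]) -> list[str]:
    """Return leaf categories that share a parent with an observed leaf
    but were never interacted with — a crude stand-in for the LLM
    reasoning step that infers a user's latent preferences."""
    found: list[str] = []

    def walk(node: PrefNode) -> None:
        leaves = [c.name for c in node.children if not c.children]
        if any(name in observed for name in leaves):
            found.extend(name for name in leaves if name not in observed)
        for child in node.children:
            walk(child)

    walk(root)
    return found
```

For example, a user who only watched "space opera" would get "cyberpunk" as an underexplored sibling preference; items matching it could then seed the synthetic interactions the abstract describes.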

cs / cs.IR / cs.AI