Published: 2025/11/7 19:00:01

The ultimate gyaru explainer AI has arrived~! 😎✨ This paper is seriously cool, so let's check it out together! 💖

Diversity and quality, both at once! A new kind of AI magic 🔮✨

Ultra-short summary: It tackles a weak point of LLMs (large language models)! By combining a base LLM with its aligned counterpart, this research boosts both diversity and quality! 🚀

● Get both diversity and quality! ✨ Something earlier AI struggled with becomes easy!
● More creative expressiveness! 🎨 It can write all kinds of text, so you can create fresh, interesting content!
● Great controllability, too 👌 You can steer the AI to match the user's preferences!

Now let's dive into the detailed explanation~!

The rest is available in the "らくらく論文" app

Optimizing Diversity and Quality through Base-Aligned Model Collaboration

Yichen Wang / Chenghao Yang / Tenghao Huang / Muhao Chen / Jonathan May / Mina Lee

Alignment has greatly improved the output quality of large language models (LLMs) at the cost of diversity, yielding highly similar outputs across generations. We propose Base-Aligned Model Collaboration (BACo), an inference-time, token-level model collaboration framework that dynamically combines a base LLM with its aligned counterpart to optimize diversity and quality. Inspired by prior work (Fei et al., 2025), BACo employs routing strategies that determine, at each token, which model to decode from, based on next-token prediction uncertainty and the predicted content's semantic role. Prior diversity-promoting methods, such as retraining, prompt engineering, and multi-sampling, improve diversity but often degrade quality or require costly decoding or post-training. In contrast, BACo achieves both high diversity and quality post hoc within a single pass, while offering strong controllability. We explore a family of routing strategies; across three open-ended generation tasks and 13 metrics covering diversity and quality, BACo consistently surpasses state-of-the-art inference-time baselines. With our best router, BACo achieves a 21.3% joint improvement in diversity and quality. Human evaluations mirror these improvements. The results suggest that collaboration between base and aligned models can optimize and control diversity and quality.
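To make the token-level routing idea above concrete, here is a minimal sketch of one plausible uncertainty-based router: at each decoding step it measures the entropy of the aligned model's next-token distribution and hands high-uncertainty positions to the base model. The checkpoint names, the entropy threshold, and the routing rule itself are illustrative assumptions, not the paper's exact configuration, and the sketch omits the semantic-role signal and KV caching for brevity.

```python
# Sketch of entropy-based token-level routing between a base LLM and its
# aligned counterpart. Checkpoints and threshold are illustrative stand-ins.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_NAME = "Qwen/Qwen2.5-0.5B"             # stand-in base checkpoint
ALIGNED_NAME = "Qwen/Qwen2.5-0.5B-Instruct" # its aligned counterpart (shared vocab)
ENTROPY_THRESHOLD = 2.0                     # hypothetical routing threshold (nats)

tokenizer = AutoTokenizer.from_pretrained(ALIGNED_NAME)
base = AutoModelForCausalLM.from_pretrained(BASE_NAME).eval()
aligned = AutoModelForCausalLM.from_pretrained(ALIGNED_NAME).eval()

@torch.no_grad()
def collaborative_decode(prompt: str, max_new_tokens: int = 128) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        # Aligned model proposes a next-token distribution.
        aligned_logits = aligned(ids).logits[:, -1, :]
        probs = torch.softmax(aligned_logits, dim=-1)
        entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1).item()

        # One plausible rule: high uncertainty -> sample from the base model
        # (more diverse); low uncertainty -> keep the aligned model (higher quality).
        if entropy > ENTROPY_THRESHOLD:
            logits = base(ids).logits[:, -1, :]
        else:
            logits = aligned_logits

        next_id = torch.multinomial(torch.softmax(logits, dim=-1), num_samples=1)
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0], skip_special_tokens=True)

print(collaborative_decode("Write an opening line for a short story about rain."))
```

Because routing happens per token inside a single decoding pass, this kind of collaboration adds no post-training cost; the threshold acts as a knob trading off diversity against quality.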

cs / cs.CL / cs.AI / cs.LG