Published: 2025/12/3 11:56:53

Talk like a pro with DZ-TDPO! AI's getting even smarter, for real!? ✨

Super-short summary: It's a technique for building AI that properly understands the context of a conversation and talks smart!

✨ Gal-style sparkle points ✨
● It remembers your past conversations, so the chat flows smoothly 💖
● It goes with the flow of the conversation, just like a friend!
● All kinds of services (chatbots and more) might get way easier to use 🎵

Now for the detailed explanation~!

Background: Today's AI models (LLMs) are amazing, but in long conversations they sometimes can't spot contradictions in what's been said. Like they forget stuff from earlier? 😱

Read the rest in the 「らくらく論文」 app

DZ-TDPO: Non-Destructive Temporal Alignment for Mutable State Tracking in Long-Context Dialogue

Yijun Liao

Long-context dialogue systems suffer from State Inertia, where static constraints prevent models from resolving conflicts between evolving user intents and established historical context. To address this, we propose DZ-TDPO, a non-destructive alignment framework that synergizes conflict-aware dynamic KL constraints with a learnable temporal attention bias. Experiments on the Multi-Session Chat (MSC) dataset demonstrate that DZ-TDPO achieves state-of-the-art win rates (86.2% on Phi-3.5) while maintaining robust zero-shot generalization. Crucially, our scaling analysis reveals a "Capacity-Stability Trade-off": while smaller models incur an "alignment tax" (perplexity surge) to overcome historical inertia, the larger Qwen2.5-7B model achieves near-perfect alignment (99.4% win rate) with negligible perplexity overhead. This confirms that TAI can be alleviated via precise attention regulation rather than destructive weight updates, preserving general capabilities (MMLU) across model scales. Code and data are available: https://github.com/lyj20071013/DZ-TDPO

cs / cs.CL
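The abstract names two mechanisms: a conflict-aware dynamic KL constraint and a learnable temporal attention bias. The PyTorch sketch below is only a rough illustration of how such pieces could be wired together; the scaling rule for beta, the `conflict_score` input, and the `TemporalAttentionBias` module are illustrative assumptions, not the authors' implementation (see the linked repository for the actual code).

```python
# Hypothetical sketch of the two ingredients named in the abstract.
# Nothing here is taken from the DZ-TDPO repository; `conflict_score`,
# `beta_gain`, and `TemporalAttentionBias` are illustrative assumptions.

import torch
import torch.nn.functional as F


def dz_tdpo_loss(policy_chosen_logps, policy_rejected_logps,
                 ref_chosen_logps, ref_rejected_logps,
                 conflict_score, beta_base=0.1, beta_gain=1.0):
    """DPO-style preference loss whose KL strength grows with a per-example
    conflict score (how strongly the new user intent contradicts history).

    All *_logps are summed log-probabilities of the response, shape (B,).
    conflict_score is assumed to lie in [0, 1], shape (B,).
    """
    # Conflict-aware dynamic KL constraint: tie the policy more tightly to the
    # reference model on turns that conflict with established context.
    beta = beta_base * (1.0 + beta_gain * conflict_score)

    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()


class TemporalAttentionBias(torch.nn.Module):
    """Learnable bias added to attention logits as a function of how many
    dialogue turns separate the query token from the key token."""

    def __init__(self, max_turn_gap: int = 32):
        super().__init__()
        # One learnable scalar per turn distance (0 = same turn).
        self.bias = torch.nn.Parameter(torch.zeros(max_turn_gap + 1))

    def forward(self, attn_scores: torch.Tensor, turn_ids: torch.Tensor):
        # attn_scores: (B, H, T, T) raw attention logits
        # turn_ids:    (B, T) integer dialogue-turn index of each token
        gap = (turn_ids[:, :, None] - turn_ids[:, None, :]).clamp(min=0)
        gap = gap.clamp(max=self.bias.numel() - 1)        # (B, T, T)
        return attn_scores + self.bias[gap].unsqueeze(1)  # broadcast over heads
```

Read as a sketch, the dynamic beta holds conflicting turns closer to the reference policy, while the per-turn-distance bias nudges attention toward or away from older context through a small set of learned parameters rather than large weight updates, in the spirit of the abstract's "precise attention regulation rather than destructive weight updates."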