Published: 2025/12/16 6:01:08

CogMem is here! Conversational LLMs level up ☆

Ultra-short summary: "CogMem," an LLM that's amazing at conversation. How cool is that? ✨

Sparkly highlights ✨ ● No more forgetting things mid-conversation: amazing memory! 👏 ● No wasted computation, so it's eco-friendly and smart! 💖 ● It converses like a human. Talking to AI is seriously next level!

Detailed explanation ● Background: LLMs (large language models) are impressive, but conversation used to be their weak spot 😓 They'd forget earlier turns, or the computation would blow up as the conversation got long... But CogMem is different!

● Method: CogMem is modeled on the human brain 🧠 It uses three memory regions (long-term, direct access, and focus of attention) to process the information it needs intelligently!

Read the rest in the 「らくらく論文」 app

CogMem: A Cognitive Memory Architecture for Sustained Multi-Turn Reasoning in Large Language Models

Yiran Zhang / Jincheng Hu / Mark Dras / Usman Naseem

Large language models (LLMs) excel at single-turn reasoning but often lose accuracy and coherence over extended, multi-turn interactions. Recent evaluations such as TurnBench highlight recurring failure modes: reasoning bias, task drift, hallucination, overconfidence, and memory decay. Current approaches typically append full conversational histories, causing unbounded context growth, higher computational costs, and degraded reasoning efficiency. We introduce CogMem, a cognitively inspired, memory-augmented LLM architecture that supports sustained iterative reasoning through structured, persistent memory. CogMem incorporates three layers: a Long-Term Memory (LTM) that consolidates cross-session reasoning strategies; a Direct Access (DA) memory that maintains session-level notes and retrieves relevant long-term memories; and a Focus of Attention (FoA) mechanism that dynamically reconstructs concise, task-relevant context at each turn. Experiments on TurnBench show that this layered design mitigates reasoning failures, controls context growth, and improves consistency across extended reasoning chains, moving toward more reliable, human-like reasoning in LLMs.
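The three-layer design from the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: every class, method, and variable name here is an assumption of mine, and the naive keyword-overlap ranking merely stands in for whatever retrieval mechanism CogMem actually uses.

```python
# Illustrative sketch of a three-layer memory (LTM / DA / FoA), loosely
# following the abstract's description. Names and retrieval logic are
# hypothetical, chosen only to make the layering concrete.
from dataclasses import dataclass, field


@dataclass
class CogMemSketch:
    ltm: list = field(default_factory=list)  # Long-Term Memory: cross-session strategies
    da: list = field(default_factory=list)   # Direct Access: session-level notes

    def consolidate(self, strategy: str) -> None:
        """Store a reasoning strategy in LTM so later sessions can reuse it."""
        if strategy not in self.ltm:
            self.ltm.append(strategy)

    def note(self, text: str) -> None:
        """Keep a session-level note in Direct Access memory."""
        self.da.append(text)

    def focus_of_attention(self, query: str, k: int = 3) -> list:
        """Rebuild a small, task-relevant context for the current turn:
        rank LTM + DA entries by keyword overlap with the query and keep
        only the top-k, instead of appending the full history."""
        words = set(query.lower().split())
        scored = sorted(
            self.ltm + self.da,
            key=lambda m: len(words & set(m.lower().split())),
            reverse=True,
        )
        return scored[:k]


mem = CogMemSketch()
mem.consolidate("eliminate candidates that contradict prior feedback")
mem.note("turn 3: code must contain an even digit")
mem.note("turn 4: unrelated small talk about weather")
ctx = mem.focus_of_attention("which candidate codes contain an even digit?", k=2)
print(ctx)  # the even-digit note ranks first; context stays bounded at k entries
```

The point of the sketch is the shape of the pipeline: writes go to LTM (cross-session) or DA (per-session), and at each turn the FoA step reconstructs a bounded context rather than letting the prompt grow with the full conversation history.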

cs / cs.CL