Ultra-short summary: Give LLMs (those clever AIs) a memory and a brain! TME makes even complex tasks a breeze. Amazing!
🌟 Gal-style sparkle points ✨ ● An LLM getting smarter is like watching your fave become unstoppable! 😍 ● It structures (tidies up) the memory of tasks (its to-do list), so the AI's head stays clear ✨ ● Lower development costs & higher reliability mean an even brighter future for the IT industry 💖
🌟 Detailed explanation ● Background: LLMs are amazing, but memory (context) was their weak point 💦 Feed them a long conversation (prompt) and they lose track of what's what.
● Method: TME (Task Memory Engine) is a magical framework that overcomes this weakness! It organizes tasks hierarchically and passes only the information the LLM actually needs, so its head never gets muddled! ✨
Large Language Models (LLMs) are increasingly used as autonomous agents for multi-step tasks. However, most existing frameworks fail to maintain a structured understanding of the task state, often relying on linear prompt concatenation or shallow memory buffers. This leads to brittle performance, frequent hallucinations, and poor long-range coherence. In this work, we propose the Task Memory Engine (TME), a lightweight and structured memory module that tracks task execution using a hierarchical Task Memory Tree (TMT). Each node in the tree corresponds to a task step, storing relevant input, output, status, and sub-task relationships. We introduce a prompt synthesis method that dynamically generates LLM prompts based on the active node path, significantly improving execution consistency and contextual grounding. Through case studies and comparative experiments on multi-step agent tasks, we demonstrate that TME leads to better task completion accuracy and more interpretable behavior with minimal implementation overhead. A reference implementation of the core TME components is available at https://github.com/biubiutomato/TME-Agent, including basic examples and structured memory integration. While the current implementation uses a tree-based structure, TME is designed to be graph-aware, supporting reusable substeps, converging task paths, and shared dependencies. This lays the groundwork for future DAG-based memory architectures.
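To make the core idea concrete, here is a minimal Python sketch of what a Task Memory Tree node and path-based prompt synthesis could look like, based only on the abstract's description (each node stores input, output, status, and sub-task links; prompts are built from the active node path). The names `TaskNode`, `active_path`, and `synthesize_prompt`, and the prompt format, are illustrative assumptions, not the actual API of the TME-Agent repository.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TaskNode:
    """One task step; fields mirror what the abstract says each TMT node stores."""
    description: str
    input: str = ""
    output: str = ""
    status: str = "pending"  # e.g. "pending", "running", "done", "failed"
    parent: Optional["TaskNode"] = None
    children: List["TaskNode"] = field(default_factory=list)

    def add_subtask(self, description: str, input: str = "") -> "TaskNode":
        """Attach a sub-task node under this node and return it."""
        child = TaskNode(description=description, input=input, parent=self)
        self.children.append(child)
        return child

def active_path(node: Optional["TaskNode"]) -> List["TaskNode"]:
    """Walk parent links to collect the root-to-active-node path."""
    path: List[TaskNode] = []
    while node is not None:
        path.append(node)
        node = node.parent
    return list(reversed(path))

def synthesize_prompt(active: "TaskNode") -> str:
    """Build the LLM prompt from the active node path only,
    instead of concatenating the entire interaction history."""
    lines = ["You are executing a multi-step task. Relevant steps so far:"]
    for node in active_path(active):
        lines.append(f"- [{node.status}] {node.description}")
        if node.output:
            lines.append(f"    result: {node.output}")
    lines.append(f"Now carry out: {active.description}")
    return "\n".join(lines)

# Hypothetical usage: a small two-step task.
root = TaskNode("Plan a conference trip")
flights = root.add_subtask("Find a flight")
flights.status, flights.output = "done", "Flight UA 102 selected"
hotel = root.add_subtask("Book a hotel near the venue")
hotel.status = "running"
print(synthesize_prompt(hotel))
```

This sketch includes only the active node's ancestors in the prompt; how much sibling or dependency context to pull in is a policy choice, and the paper's graph-aware design (reusable substeps, shared dependencies) suggests the real engine handles richer cases than a pure ancestor chain.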