Published: 2026/1/7 3:39:11

Agents (AI) doing telepathy!? Latent-space communication 🚀

Super summary: Leveling up AI-to-AI conversation! Smarter teamwork through hidden information ✨

🌟 Gal-style sparkle points ✨
● It's basically telepathy! They chat using what's inside their heads, not words 😳
● Squishing the info down (squeeze!) makes communication smooth with zero waste 💖
● AI getting even smarter and able to do all sorts of things? So exciting 🎵

Detailed Explanation

Background: When AIs (LLMs) collaborate, talking in words (tokens) loses information and easily breeds misunderstandings. Like, if someone asks "What's for dinner?" and you only answer "Yummy!", they'd go "...what is??", right?

Method: So the authors built "Interlat", a technique where AIs talk directly through what's inside their heads (the latent space) ✨ It compresses the information and sends only what's needed, so nothing is wasted! It really is telepathy!
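To make the idea concrete, here is a minimal toy sketch of the difference between the two channels. Everything here (the 8-dim vector size, the `encode`/`token_channel`/`latent_channel` helpers, the sign-bucket "tokenizer") is a made-up illustration, not the paper's actual implementation: the token channel downsamples the continuous state into a few discrete symbols, while the latent channel hands the receiver the full hidden-state vector.

```python
import random

DIM = 8  # toy hidden-state size (assumption; real LLMs use thousands of dims)

def encode(text):
    """Toy stand-in for an LLM forward pass: map text to a continuous
    hidden-state vector (the agent's 'thought')."""
    random.seed(sum(map(ord, text)))  # deterministic toy embedding
    return [random.uniform(-1, 1) for _ in range(DIM)]

def token_channel(vec, n_tokens=2):
    """Baseline: downsample the state into a few discrete tokens
    (here: coarse sign buckets), discarding most of the information."""
    return [1 if v > 0 else 0 for v in vec[:n_tokens]]

def latent_channel(vec):
    """Interlat-style channel: transmit the continuous vector as-is."""
    return list(vec)

thought = encode("dinner plan: curry at 7pm, buy carrots")
tokens = token_channel(thought)   # receiver gets 2 coarse symbols
latent = latent_channel(thought)  # receiver gets the full 8-dim state
print(len(tokens), len(latent))   # → 2 8
```

The point of the toy: the receiving agent conditioned on `latent` sees everything the sender "thought", while the one conditioned on `tokens` must reconstruct it from a lossy summary.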

Read the rest in the "らくらく論文" app

Enabling Agents to Communicate Entirely in Latent Space

Zhuoyun Du / Runze Wang / Huiyu Bai / Zouying Cao / Xiaoyong Zhu / Yu Cheng / Bo Zheng / Wei Chen / Haochao Ying

While natural language is the de facto communication medium for LLM-based agents, it presents a fundamental constraint. The process of downsampling rich, internal latent states into discrete tokens inherently limits the depth and nuance of information that can be transmitted, thereby hindering collaborative problem-solving. Inspired by telepathy, which bypasses symbolic language in communication, we propose Interlat (Inter-agent Latent Space Communication), a paradigm that leverages the continuous last hidden states of an LLM as a representation of its thought for direct communication (termed latent communication). An additional learned compression process further compresses latent communication via latent space reasoning. Experiments demonstrate that Interlat outperforms both fine-tuned chain-of-thought (CoT) prompting and single-agent baselines, even across heterogeneous models, promoting more exploratory behavior and enabling genuine utilization of latent information. Further compression not only substantially accelerates inference by up to 24 times but also maintains competitive performance through an efficient information-preserving mechanism. We position this work as a feasibility study of entirely latent space inter-agent communication, and our results highlight its potential, offering valuable insights for future research.
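The abstract's "learned compression process" can likewise be sketched in miniature. The block below is only an illustration under stated assumptions: it replaces the paper's learned, latent-space-reasoning compressor with simple mean pooling over fixed-size blocks, just to show how a long sequence of hidden states can shrink to a few vectors (which is where the inference speedup comes from).

```python
import random

random.seed(1)
DIM, SEQ = 8, 24  # toy sizes (assumptions, not the paper's settings)
states = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(SEQ)]

def compress(states, ratio=8):
    """Stand-in for Interlat's learned compressor: pool each block of
    `ratio` hidden states into one vector. Mean pooling here; the paper
    learns this mapping so the pooled vectors preserve information."""
    out = []
    for i in range(0, len(states), ratio):
        block = states[i:i + ratio]
        out.append([sum(col) / len(block) for col in zip(*block)])
    return out

message = compress(states)          # 24 states -> 3 vectors
print(len(states), len(message))    # → 24 3
```

Transmitting and attending over 3 vectors instead of 24 is the mechanism behind the reported inference acceleration; the paper's contribution is making such compression information-preserving enough to keep task performance competitive.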

cs / cs.LG / cs.AI / cs.MA