Ultra-quick summary: A secret weapon that makes AI smarter while protecting personal info! That's Fed-SE🚀
🌟 Gal-style sparkle points✨
● Protecting privacy (your personal info) while making AI even smarter? Isn't that amazing?😳
● It uses federated learning (everyone teams up to train), so the data stays safe and the model still gets smarter💖
● And it uses a technique called LoRA to keep both compute and communication costs low, which is genius✨
Background: Today's AI is amazing, but it needs personal info and all kinds of data, right?😱 Still, gathering everyone's data in one place is kind of scary, isn't it? That's where federated learning comes in: training while the data stays distributed (kept where it is)! And Fed-SE is its evolved form✨
LLM agents are widely deployed in complex interactive tasks, yet privacy constraints often preclude centralized optimization and co-evolution across dynamic environments. Despite the demonstrated success of Federated Learning (FL) on static datasets, its effectiveness in open-ended, self-evolving agent systems remains largely unexplored. In such settings, the direct application of standard FL is particularly challenging, as heterogeneous tasks and sparse, trajectory-level reward signals give rise to severe gradient instability, which undermines the global optimization process. To bridge this gap, we propose Fed-SE, a Federated Self-Evolution framework for LLM agents that establishes a local evolution-global aggregation paradigm. Locally, agents employ parameter-efficient fine-tuning on filtered, high-return trajectories to achieve stable gradient updates. Globally, Fed-SE aggregates updates within a low-rank subspace, reducing communication cost across clients. Experiments across five heterogeneous environments demonstrate that Fed-SE improves average task success rates by 10% over the state-of-the-art FedIT, validating its effectiveness in cross-environment knowledge transfer under privacy constraints.
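The abstract doesn't give implementation details, but the "local evolution-global aggregation" recipe it describes can be sketched. Below is a minimal Python sketch under stated assumptions: each client keeps only trajectories whose total return clears a threshold, produces a LoRA-shaped low-rank update (a weight delta of the form B @ A), and the server averages the low-rank factors FedAvg-style. All function names, shapes, and the threshold here are illustrative, not taken from the paper.

```python
import numpy as np

def filter_trajectories(trajectories, return_threshold):
    """Keep only high-return trajectories for stable local updates.
    Each trajectory is assumed to be a (data, total_return) pair."""
    return [t for t in trajectories if t[1] >= return_threshold]

def local_lora_update(rank, dim_out, dim_in, kept):
    """Stand-in for parameter-efficient fine-tuning: each client returns
    low-rank factors (A, B) so that its weight delta is B @ A.
    A real client would run gradient-based fine-tuning on `kept`;
    here we just emit random factors of the assumed LoRA shape."""
    rng = np.random.default_rng(len(kept))
    A = rng.normal(scale=0.01, size=(rank, dim_in))    # shape (r, d_in)
    B = rng.normal(scale=0.01, size=(dim_out, rank))   # shape (d_out, r)
    return A, B

def aggregate_low_rank(client_updates, weights=None):
    """FedAvg-style aggregation done directly on the low-rank factors,
    so only r*(d_in + d_out) numbers per layer cross the network instead
    of the full d_out*d_in matrix. Averaging A and B separately is the
    common FedIT-style simplification (mean(B_i @ A_i) is not exactly
    mean(B_i) @ mean(A_i))."""
    n = len(client_updates)
    if weights is None:
        weights = [1.0 / n] * n
    A_glob = sum(w * A for w, (A, _) in zip(weights, client_updates))
    B_glob = sum(w * B for w, (_, B) in zip(weights, client_updates))
    return A_glob, B_glob

# Toy round: 3 clients, trajectories scored by total return (hypothetical data).
clients = [
    [("traj_a", 0.9), ("traj_b", 0.2)],
    [("traj_c", 0.7)],
    [("traj_d", 0.5), ("traj_e", 0.8)],
]
updates = []
for trajs in clients:
    kept = filter_trajectories(trajs, return_threshold=0.6)
    updates.append(local_lora_update(rank=8, dim_out=64, dim_in=64, kept=kept))
A_g, B_g = aggregate_low_rank(updates)
print("global delta shape:", (B_g @ A_g).shape)  # (64, 64)
```

The communication saving in the sketch is the same one the abstract credits to the low-rank subspace: per layer, clients upload r*(d_in + d_out) parameters instead of d_out*d_in, which for rank 8 on a 64x64 layer is 1,024 numbers instead of 4,096, and the gap widens quickly at LLM scale.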