Published: 2025/12/25 15:02:07

The Ultimate Gyaru: LLM Agents Level Up with MoRAgent! ✨ (For IT Companies)

Super summary: A way to make LLM agents both cost-effective AND super strong! Use LoRA, split up the roles, and power up even more 💖

Gyaru Sparkle Points ✨

● LoRA (Low-Rank Adaptation) keeps the tuning bill wallet-friendly 💰✨
● Splitting the agent into three roles (reasoning, execution, summarization) boosts each capability ⤴️
● For IT companies: tons of potential for workflow efficiency and new business opportunities 💖
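To see why LoRA is wallet-friendly, here is a minimal numerical sketch (illustrative only, not the paper's code): instead of updating a full d×d weight matrix W, LoRA trains two low-rank factors B (d×r) and A (r×d) with r much smaller than d, so the effective weight becomes W + BA. The sizes d=1024 and r=8 below are assumed example values.

```python
import numpy as np

d, r = 1024, 8                      # hidden size and LoRA rank (assumed values)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))     # frozen pretrained weight, never updated
A = rng.standard_normal((r, d)) * 0.01
B = np.zeros((d, r))                # B starts at zero, so W + B @ A == W at init

def forward(x):
    # base path plus the low-rank update
    return x @ W.T + x @ (B @ A).T

full_params = d * d                 # parameters a full fine-tune would touch
lora_params = d * r + r * d         # parameters LoRA actually trains
print(f"trainable: {lora_params:,} vs full fine-tune: {full_params:,}")
# ratio is 2r/d = 16/1024, i.e. about 1.6% of the full matrix
```

Only A and B receive gradients during tuning, which is what makes per-role adapter groups (as in MoRAgent) cheap enough to train several of.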

Detailed Explanation

Continue reading in the 「らくらく論文」 app

MoRAgent: Parameter Efficient Agent Tuning with Mixture-of-Roles

Jing Han / Binwei Yan / Tianyu Guo / Zheyuan Bai / Mengyu Zheng / Hanting Chen / Ying Nie

Despite recent advancements in fine-tuning large language models (LLMs) to facilitate agent tasks, parameter-efficient fine-tuning (PEFT) methodologies for agent tasks remain largely unexplored. In this paper, we introduce three key strategies for PEFT in agent tasks: 1) Inspired by the increasingly dominant Reason+Action paradigm, we first decompose the capabilities necessary for agent tasks into three distinct roles: reasoner, executor, and summarizer. The reasoner is responsible for comprehending the user's query and determining the next role based on the execution trajectory. The executor is tasked with identifying the appropriate functions and parameters to invoke. The summarizer conveys the distilled information from conversations back to the user. 2) We then propose the Mixture-of-Roles (MoR) framework, which comprises three specialized Low-Rank Adaptation (LoRA) groups, each designated to fulfill a distinct role. By focusing on their respective specialized capabilities and engaging in collaborative interactions, these LoRAs collectively accomplish the agent task. 3) To effectively fine-tune the framework, we develop a multi-role data generation pipeline based on publicly available datasets, incorporating role-specific content completion and reliability verification. We conduct extensive experiments and thorough ablation studies on various LLMs and agent benchmarks, demonstrating the effectiveness of the proposed method. This project is publicly available at https://mor-agent.github.io.
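The reasoner/executor/summarizer loop described in the abstract can be sketched as a simple control flow. Everything below (`Role`, `run_role`, the toy tool call) is a hypothetical stand-in for the backbone LLM running with one role's LoRA group active, not the authors' actual API:

```python
from enum import Enum

class Role(Enum):
    REASONER = "reasoner"
    EXECUTOR = "executor"
    SUMMARIZER = "summarizer"

def run_role(role, query, trajectory):
    """Stand-in for the shared backbone with the given role's LoRA group active."""
    if role is Role.REASONER:
        # decide the next role from the execution trajectory so far
        return Role.EXECUTOR if not trajectory else Role.SUMMARIZER
    if role is Role.EXECUTOR:
        # pick a function and its parameters to invoke (toy tool call)
        return ("lookup", {"q": query})
    # summarizer: distill the trajectory into an answer for the user
    return f"answer to {query!r} based on {len(trajectory)} tool call(s)"

def agent(query, max_steps=8):
    trajectory = []
    for _ in range(max_steps):
        nxt = run_role(Role.REASONER, query, trajectory)
        if nxt is Role.SUMMARIZER:
            return run_role(Role.SUMMARIZER, query, trajectory)
        trajectory.append(run_role(Role.EXECUTOR, query, trajectory))
    return run_role(Role.SUMMARIZER, query, trajectory)

print(agent("capital of France"))
```

The design point is that all three roles share one frozen backbone; only which lightweight adapter group is active changes per step, which is what lets the roles specialize without tripling the model's trainable parameters.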

cs / cs.CL