1. Super-Short Summary: The enterprise AI "Yuan3.0 Flash" is here! It tackles classic enterprise headaches like RAG and complex table understanding 😎
2. Gal-Style Sparkle Points ✨
● The "RAPO" technique that curbs overthinking is amazing! It cuts out wasted reasoning ✨
● It's specialized for enterprise use, so it's sure to be super useful at work 🎵
● It's open source (anyone can use it!), so all kinds of people get access 💖
3. Detailed Explanation
Background: LLMs (AI) keep evolving, but using them in companies came with challenges, like RAG and understanding complex tables. "Yuan3.0 Flash" was apparently born to solve exactly those problems!
Method: It adopts a Mixture-of-Experts (MoE) architecture with 3.7 billion activated parameters. On top of that, RAPO suppresses wasted reasoning, and that's how it cuts down the computation cost!
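To make the MoE idea concrete, here is a minimal, hypothetical sketch of top-k expert routing in plain NumPy. It is not the actual Yuan3.0 Flash implementation; the expert count, dimensions, and router here are illustrative stand-ins. The point it demonstrates is that only a few experts run per token, which is how a model can have 40B total parameters but only 3.7B "activated" ones.

```python
import numpy as np

# Illustrative top-k Mixture-of-Experts routing (NOT the Yuan3.0 Flash code).
# With top-2 routing over 8 experts, only a fraction of all parameters is
# activated per token -- the idea behind "3.7B active / 40B total".

rng = np.random.default_rng(0)
NUM_EXPERTS, TOP_K, DIM = 8, 2, 16  # hypothetical sizes for the sketch

# Each expert is a simple linear layer (a stand-in for a real FFN expert).
experts = [rng.standard_normal((DIM, DIM)) * 0.1 for _ in range(NUM_EXPERTS)]
router = rng.standard_normal((DIM, NUM_EXPERTS)) * 0.1

def moe_forward(x):
    """Route token vector x to its top-k experts and mix their outputs."""
    logits = x @ router                    # one routing score per expert
    top = np.argsort(logits)[-TOP_K:]      # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()               # softmax over the chosen experts
    out = sum(w * (x @ experts[i]) for w, i in zip(weights, top))
    return out, top

token = rng.standard_normal(DIM)
out, used = moe_forward(token)
print(f"activated {len(used)}/{NUM_EXPERTS} experts for this token")
```

Because the router picks experts per token, different tokens exercise different slices of the model, keeping per-token compute roughly constant while total capacity grows.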
We introduce Yuan3.0 Flash, an open-source Mixture-of-Experts (MoE) multimodal large language model featuring 3.7B activated parameters and 40B total parameters, specifically designed to enhance performance on enterprise-oriented tasks while maintaining competitive capabilities on general-purpose tasks. To address the overthinking phenomenon commonly observed in Large Reasoning Models (LRMs), we propose Reflection-aware Adaptive Policy Optimization (RAPO), a novel RL training algorithm that effectively regulates overthinking behaviors. On enterprise-oriented tasks such as retrieval-augmented generation (RAG), complex table understanding, and summarization, Yuan3.0 Flash consistently achieves superior performance. Moreover, it demonstrates strong reasoning capabilities in domains such as mathematics and science, attaining accuracy comparable to frontier models while requiring only approximately 1/4 to 1/2 of the average tokens. Yuan3.0 Flash has been fully open-sourced to facilitate further research and real-world deployment: https://github.com/Yuan-lab-LLM/Yuan3.0.
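The abstract says RAPO is an RL algorithm that regulates overthinking, but gives no details here. As a purely illustrative sketch of the general idea (not the RAPO algorithm itself), one common way to discourage overly long reasoning in RL training is to shape the reward with a penalty on tokens beyond a budget; the function below, and its `budget`/`penalty` parameters, are assumptions for illustration only.

```python
# Hedged sketch: reward shaping that penalizes overthinking.
# This is NOT RAPO (whose details are not given in this summary); it only
# illustrates how an RL reward can trade correctness against trace length.

def shaped_reward(correct, num_tokens, budget=512, penalty=0.001):
    """Correctness reward minus a linear penalty for tokens past a budget."""
    base = 1.0 if correct else 0.0
    excess = max(0, num_tokens - budget)  # only tokens beyond the budget cost
    return base - penalty * excess

# A correct short answer scores higher than a correct but rambling one:
print(shaped_reward(True, 300))   # within budget: full reward of 1.0
print(shaped_reward(True, 1500))  # 988 excess tokens shrink the reward
```

Under a reward like this, the policy is pushed toward answers that are both correct and concise, which matches the abstract's claim of comparable accuracy at roughly 1/4 to 1/2 of the average token count.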