The ultimate gal AI has arrived! Nice to meet you~💖
1. Sparkle Points ✨:
● It's a way to take a model that trained hard offline and grow it even further online!
● By splitting decision-making into two stages, it becomes a clever AI that can handle all sorts of situations!
● It's amazing because it can learn from the final outcome alone, without relying on step-by-step rewards!
2. Detailed Explanation:
Conventional Reinforcement Learning (RL) algorithms, typically focused on estimating or maximizing expected returns, face challenges when refining offline pretrained models with online experiences. This paper introduces Generative Actor Critic (GAC), a novel framework that decouples sequential decision-making by reframing policy evaluation as learning a generative model of the joint distribution over trajectories and returns, $p(\tau, y)$, and policy improvement as performing versatile inference on this learned model. To operationalize GAC, we present a specific instantiation based on a latent variable model that features continuous latent plan vectors. We develop novel inference strategies for both exploitation, by optimizing latent plans to maximize expected returns, and exploration, by sampling latent plans conditioned on dynamically adjusted target returns. Experiments on Gym-MuJoCo and Maze2D benchmarks demonstrate GAC's strong offline performance and significantly enhanced offline-to-online improvement compared to state-of-the-art methods, even in the absence of step-wise rewards.
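The abstract only sketches the two inference modes (exploitation by optimizing a latent plan, exploration by conditioning on a target return), so here is a minimal, hypothetical Python/PyTorch sketch of what they could look like. Everything here is an assumption for illustration: the module names (`LatentPlanModel`, `return_head`, `prior`), network sizes, and dimensions are not the paper's actual architecture, just a toy instance of the stated idea.

```python
import torch
import torch.nn as nn

LATENT_DIM, OBS_DIM, ACT_DIM = 16, 8, 2  # toy sizes, purely illustrative


class LatentPlanModel(nn.Module):
    """Hypothetical latent-variable stand-in for p(tau, y): a policy head
    decodes a latent plan z (plus the current observation) into an action,
    a return head predicts the trajectory return y from z, and a
    conditional prior p(z | y) supports return-conditioned sampling."""

    def __init__(self):
        super().__init__()
        self.policy = nn.Sequential(
            nn.Linear(LATENT_DIM + OBS_DIM, 64), nn.ReLU(),
            nn.Linear(64, ACT_DIM),
        )
        self.return_head = nn.Sequential(
            nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, 1),
        )
        # Assumed Gaussian prior over z conditioned on a target return.
        self.prior = nn.Sequential(
            nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 2 * LATENT_DIM),
        )


def exploit(model: LatentPlanModel, steps: int = 50) -> torch.Tensor:
    """Exploitation: gradient-ascend on the predicted expected return
    with respect to the latent plan z itself."""
    z = torch.zeros(1, LATENT_DIM, requires_grad=True)
    opt = torch.optim.Adam([z], lr=1e-1)
    for _ in range(steps):
        loss = -model.return_head(z).mean()  # maximize predicted return
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()


def explore(model: LatentPlanModel, target_return: float) -> torch.Tensor:
    """Exploration: sample a latent plan conditioned on a target return,
    which the agent would adjust dynamically over online training."""
    y = torch.tensor([[target_return]])
    mu, log_std = model.prior(y).chunk(2, dim=-1)
    return mu + log_std.exp() * torch.randn_like(mu)


model = LatentPlanModel()
obs = torch.zeros(1, OBS_DIM)
z = exploit(model)                              # return-maximizing plan
action = model.policy(torch.cat([z, obs], dim=-1))
z_new = explore(model, target_return=1.2)       # optimistic target return
```

Note how this toy version needs no step-wise reward signal anywhere: the return head is trained (training loop omitted) against whole-trajectory returns, which matches the abstract's claim that learning works from the final outcome alone.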