Ultra-short summary: This work uses deep reinforcement learning (DRL) to level up genetic algorithms (GAs)!
● DRL dynamically tunes the GA's parameters! ● Benchmarked on the OneMax problem (a very simple optimization problem)! ● Could help tackle real optimization problems in the IT industry!
Background: Optimizing hyperparameters is a must for boosting machine learning (ML) model performance 👩💻 But manual tuning is laborious and time-consuming… That's where DAC (Dynamic Algorithm Configuration), which adjusts parameters automatically during a run, comes in! This study explores whether DAC can be realized with DRL (deep reinforcement learning) 😎
Dynamic Algorithm Configuration (DAC) studies the efficient identification of control policies for parameterized optimization algorithms. Numerous studies have leveraged the robustness of decision-making in Reinforcement Learning (RL) to address the optimization challenges in algorithm configuration. However, applying RL to DAC is challenging and often requires extensive domain expertise. We conduct a comprehensive study of deep-RL algorithms in DAC through a systematic analysis of controlling the population size parameter of the $(1+(\lambda,\lambda))$-GA on OneMax instances. Our investigation of DDQN and PPO reveals two fundamental challenges that limit their effectiveness in DAC: scalability degradation and learning instability. We trace these issues to two primary causes: under-exploration and planning horizon coverage, each of which can be effectively addressed through targeted solutions. To address under-exploration, we introduce an adaptive reward shifting mechanism that leverages reward distribution statistics to enhance DDQN agent exploration, eliminating the need for instance-specific hyperparameter tuning and ensuring consistent effectiveness across different problem scales. In dealing with the planning horizon coverage problem, we demonstrate that undiscounted learning effectively resolves it in DDQN, while PPO faces fundamental variance issues that necessitate alternative algorithmic designs. We further analyze the hyperparameter dependencies of PPO, showing that while hyperparameter optimization enhances learning stability, it consistently falls short in identifying effective policies across various configurations. Finally, we demonstrate that DDQN equipped with our adaptive reward shifting strategy achieves performance comparable to theoretically derived policies with vastly improved sample efficiency, outperforming prior DAC approaches by several orders of magnitude.
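To make the setup concrete, below is a minimal Python sketch of the DAC loop the abstract describes: at every generation of a OneMax run, a policy observes the current fitness and chooses the offspring population size λ of the $(1+(\lambda,\lambda))$-GA. The reward definition, the running-statistics reward shift, and the helper names (`ga_generation`, `dac_episode`, `RunningRewardShift`) are illustrative assumptions rather than the paper's exact formulation; the fitness-dependent policy λ = √(n/(n−f(x))) is the theory-derived choice commonly used as a baseline on this benchmark.

```python
import numpy as np

def one_max(x):
    """OneMax fitness: the number of one-bits in the bit string."""
    return int(x.sum())

def ga_generation(x, fx, lam, rng):
    """One generation of the (1+(lambda,lambda))-GA with offspring size lam.

    Mutation phase: draw ell ~ Bin(n, lam/n), create lam mutants by flipping
    ell uniformly chosen bits each, and keep the best one. Crossover phase:
    create lam offspring by biased crossover (each bit taken from the best
    mutant with probability 1/lam), keep the best. Elitist replacement.
    Returns the new parent, its fitness, and the evaluations consumed.
    """
    n = len(x)
    lam = max(1, int(round(lam)))
    ell = rng.binomial(n, lam / n)
    best_mut, best_mut_f = None, -1
    for _ in range(lam):                       # mutation phase
        y = x.copy()
        if ell > 0:
            y[rng.choice(n, size=ell, replace=False)] ^= 1
        fy = one_max(y)
        if fy > best_mut_f:
            best_mut, best_mut_f = y, fy
    best_off, best_off_f = best_mut, best_mut_f
    for _ in range(lam):                       # crossover phase
        mask = rng.random(n) < 1.0 / lam
        z = np.where(mask, best_mut, x)
        fz = one_max(z)
        if fz > best_off_f:
            best_off, best_off_f = z, fz
    if best_off_f >= fx:                       # elitist selection
        x, fx = best_off, best_off_f
    return x, fx, 2 * lam                      # 2*lam fitness evaluations used

class RunningRewardShift:
    """Adaptive reward shift from running reward statistics (illustrative).

    Shifting rewards downward by a data-driven baseline makes most shifted
    rewards negative, so zero-initialized Q-values act optimistically and the
    agent explores without per-instance tuning. The exact statistic used in
    the paper may differ; a running mean plus standard deviation is used here.
    """
    def __init__(self):
        self.count, self.mean, self.m2 = 0, 0.0, 0.0

    def __call__(self, r):
        # Welford's online update of mean and variance.
        self.count += 1
        delta = r - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (r - self.mean)
        std = (self.m2 / self.count) ** 0.5 if self.count > 1 else 0.0
        return r - (self.mean + std)

def dac_episode(n, policy, rng, reward_shift=None, max_evals=10**6):
    """Run one OneMax episode where policy(fitness, n) picks lambda each step."""
    x = rng.integers(0, 2, size=n, dtype=np.int8)
    fx, evals = one_max(x), 0
    while fx < n and evals < max_evals:
        lam = policy(fx, n)                    # DAC action: population size
        x, new_fx, used = ga_generation(x, fx, lam, rng)
        r = (new_fx - fx) - used / n           # assumed reward: gain minus cost
        if reward_shift is not None:
            r = reward_shift(r)                # shifted reward fed to the agent
        fx, evals = new_fx, evals + used
    return evals                               # evaluations spent to reach optimum

# Fitness-dependent policy from the (1+(lambda,lambda))-GA theory literature.
theory_policy = lambda fx, n: np.sqrt(n / max(1, n - fx))

rng = np.random.default_rng(0)
print(dac_episode(100, theory_policy, rng, RunningRewardShift()))
```

In an actual DAC agent, `theory_policy` would be replaced by the learned DDQN policy and the shifted reward would be what the agent trains on; the sketch only illustrates the interface between the controller and the $(1+(\lambda,\lambda))$-GA.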