Published: 2025/12/3 13:50:36

MRePU has arrived! AI training goes super stable & super high-performance, for real!? 🚀

Ultra summary: MRePU (the ultimate activation function) developed to stabilize DNN training! The future of AI is seriously lit!

Gyaru-Style Sparkle Points ✨

  • The weak points of RePU (an existing activation function) nailed down with effective field theory (a tough theory)! 😳
  • Thanks to MRePU, AI training gets dramatically more stable! Even deep learning (learning with deep networks) is nothing to fear 💖
  • Image recognition, language processing (understanding language), and all sorts of AI feel like they're headed for a serious glow-up! 😍

Detailed Explanation

Read the rest in the 「らくらく論文」 app

Why Rectified Power Unit Networks Fail and How to Improve It: An Effective Field Theory Perspective

Taeyoung Kim / Myungjoo Kang

The Rectified Power Unit (RePU) activation function, a differentiable generalization of the Rectified Linear Unit (ReLU), has shown promise in constructing neural networks due to its smoothness properties. However, deep RePU networks often suffer from critical issues such as vanishing or exploding values during training, rendering them unstable regardless of how initialization hyperparameters are chosen. Leveraging the perspective of effective field theory, we identify the root causes of these failures and propose the Modified Rectified Power Unit (MRePU) activation function. MRePU addresses RePU's limitations while preserving its advantages, such as differentiability and universal approximation properties. Theoretical analysis demonstrates that MRePU satisfies criticality conditions necessary for stable training, placing it in a distinct universality class. Extensive experiments validate the effectiveness of MRePU, showing significant improvements in training stability and performance across various tasks, including polynomial regression, physics-informed neural networks (PINNs), and real-world vision tasks. Our findings highlight the potential of MRePU as a robust alternative for building deep neural networks.
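As a quick, concrete illustration of the failure mode the abstract points to: the standard RePU of order p is RePU_p(x) = max(0, x)^p, so its derivative p·max(0, x)^(p-1) decays to zero as pre-activations approach 0 and grows polynomially for x > 1. Stacked over many layers, these factors multiply, which is the intuition behind the vanishing/exploding behavior. The sketch below is a minimal NumPy illustration of plain RePU only; it is not the authors' implementation, and MRePU's exact definition (which restores the criticality conditions) is given in the paper.

```python
# Minimal sketch of the standard RePU activation and its derivative,
# illustrating why deep RePU networks can become unstable.
# This is NOT the paper's MRePU; it only shows the baseline behavior.
import numpy as np

def repu(x, p=2):
    """Rectified Power Unit: max(0, x)**p, a smooth generalization of ReLU."""
    return np.maximum(0.0, x) ** p

def repu_grad(x, p=2):
    """Derivative p * max(0, x)**(p-1): -> 0 near x = 0, grows for x > 1 when p > 1."""
    return p * np.maximum(0.0, x) ** (p - 1)

x = np.linspace(-1.0, 2.0, 7)   # [-1, -0.5, 0, 0.5, 1, 1.5, 2]
print(repu(x))                  # [0, 0, 0, 0.25, 1, 2.25, 4]
print(repu_grad(x))             # [0, 0, 0, 1, 2, 3, 4]
```

Running this shows per-layer derivative factors straddling 1, so products of layer Jacobians in a deep network drift toward 0 or blow up unless the activation is tuned to a critical point, which is exactly the property MRePU is designed to restore.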

cs / cs.LG / cs.AI