Published: 2025/10/23 10:27:48

The ultimate gal explainer AI has arrived~! 😎✨

Demystifying how networks behave! Research that opens up the future of AI ☆ (for IT companies)

Super-short summary: This research computes how the AI brain 🧠, the "neural network", behaves, to make AI even smarter!

● Might boost AI accuracy!
● Might cut out AI waste!
● Might enable new AI services!

Here comes the detailed explanation~!

Read the rest in the「らくらく論文」app

Quantitative convergence of trained single layer neural networks to Gaussian processes

Eloy Mosig / Andrea Agazzi / Dario Trevisan

In this paper, we study the quantitative convergence of shallow neural networks trained via gradient descent to their associated Gaussian processes in the infinite-width limit. While previous work has established qualitative convergence under broad settings, precise, finite-width estimates remain limited, particularly during training. We provide explicit upper bounds on the quadratic Wasserstein distance between the network output and its Gaussian approximation at any training time $t \ge 0$, demonstrating polynomial decay with network width. Our results quantify how architectural parameters, such as width and input dimension, influence convergence, and how training dynamics affect the approximation error.
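In the standard setup for such infinite-width results (an assumption here; the paper's exact scaling may differ), the network is $f(x) = n^{-1/2}\sum_{i=1}^{n} a_i\,\sigma(w_i \cdot x)$, and its output at fixed inputs becomes Gaussian as the width $n \to \infty$, with covariance kernel $K(x, x') = \mathbb{E}[a^2]\,\mathbb{E}_w[\sigma(w \cdot x)\,\sigma(w \cdot x')]$. The sketch below is a minimal illustration of that width scaling, not the paper's method: it covers only the initialization slice ($t = 0$), whereas the paper bounds the distance at any training time $t \ge 0$, and it assumes a ReLU activation with standard Gaussian initialization.

```python
# Minimal sketch (illustrative assumptions throughout): estimate the quadratic
# Wasserstein distance W2 between the output of a width-n single-hidden-layer
# network at one fixed input and its Gaussian limit, for growing n.
import numpy as np

rng = np.random.default_rng(0)

def sample_outputs(width, num_samples):
    """Sample f(x) = width**-0.5 * sum_i a_i * relu(w_i . x) over random inits.

    For a unit-norm input x and w_i ~ N(0, I_d), the preactivations w_i . x
    are i.i.d. N(0, 1), so we draw them directly instead of forming w_i.
    """
    outs = np.empty(num_samples)
    for s in range(num_samples):
        pre = rng.standard_normal(width)   # w_i . x ~ N(0, 1)
        a = rng.standard_normal(width)     # output weights a_i ~ N(0, 1)
        outs[s] = a @ np.maximum(pre, 0.0) / np.sqrt(width)
    return outs

def empirical_w2(u, v):
    """Quadratic Wasserstein distance between equal-size 1D empirical samples."""
    return np.sqrt(np.mean((np.sort(u) - np.sort(v)) ** 2))

num_samples = 10_000
# Gaussian limit at a single unit-norm input: variance E[a^2] * E[relu(z)^2],
# and E[relu(z)^2] = 1/2 for z ~ N(0, 1), so the limit is N(0, 1/2).
limit = rng.standard_normal(num_samples) * np.sqrt(0.5)

for width in [10, 100, 1_000, 10_000]:
    w2 = empirical_w2(sample_outputs(width, num_samples), limit)
    print(f"width {width:>6}: W2 ~ {w2:.4f}")
```

For equal-size one-dimensional samples, $W_2$ is exactly the root-mean-square difference of the sorted values, which is why `empirical_w2` needs no optimal-transport solver. The printed distance shrinks as the width grows, consistent with the polynomial decay the abstract describes, until the Monte Carlo noise of the finite sample size takes over.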

cs / stat.ML / cs.LG / math.PR