Published: 2025/12/17 8:06:47

Unraveling the Secrets of INRs! A New Business Boom in the IT Industry ✨

Ultra-quick summary: This paper analyzes why INRs are hard to train using the NTK! The payoff: fast, high-quality image and 3D generation for the IT industry!

✨ Gyaru-Style Sparkle Points ✨

● INR (Implicit Neural Representation) is a clever technique for representing images and 3D shapes 😎
● Tuning the variance of the NTK (Neural Tangent Kernel) supercharges training 🚀
● Image editing and 3D modeling in the IT industry could evolve dramatically 🌟

Detailed Explanation


Understanding NTK Variance in Implicit Neural Representations

Chengguang Ou / Yixin Zhuang

Implicit Neural Representations (INRs) often converge slowly and struggle to recover high-frequency details due to spectral bias. While prior work links this behavior to the Neural Tangent Kernel (NTK), how specific architectural choices affect NTK conditioning remains unclear. We show that many INR mechanisms can be understood through their impact on a small set of pairwise similarity factors and scaling terms that jointly determine NTK eigenvalue variance. For standard coordinate MLPs, limited input-feature interactions induce large eigenvalue dispersion and poor conditioning. We derive closed-form variance decompositions for common INR components and show that positional encoding reshapes input similarity, spherical normalization reduces variance via layerwise scaling, and Hadamard modulation introduces additional similarity factors strictly below one, yielding multiplicative variance reduction. This unified view explains how diverse INR architectures mitigate spectral bias by improving NTK conditioning. Experiments across multiple tasks confirm the predicted variance reductions and demonstrate faster, more stable convergence with improved reconstruction quality.
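The abstract's core claim is that architectural choices like positional encoding reshape input similarity and thereby tighten the spread of NTK eigenvalues. The sketch below illustrates this empirically, under assumptions not taken from the paper: a tiny two-layer ReLU coordinate MLP at random initialization, the NTK computed exactly as the Jacobian Gram matrix `K = J Jᵀ` over both weight matrices, and eigenvalue dispersion measured as the coefficient of variation of the spectrum. The `fourier_encode` helper and all sizes are illustrative, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_encode(x, freqs):
    """Deterministic Fourier positional encoding: gamma(x) = [cos, sin] pairs."""
    ang = 2.0 * np.pi * np.outer(x, freqs)           # (n_points, n_freqs)
    return np.concatenate([np.cos(ang), np.sin(ang)], axis=1)

def empirical_ntk(z, width=256):
    """Empirical NTK at init for f(z) = v . relu(W z).

    K(z, z') sums the Jacobian inner products over both parameter groups:
      d f / d v_i    = relu(w_i . z)
      d f / d W_ij   = v_i * 1[w_i . z > 0] * z_j
    """
    n, d = z.shape
    W = rng.standard_normal((width, d)) / np.sqrt(d)
    v = rng.standard_normal(width) / np.sqrt(width)
    pre = z @ W.T                                    # pre-activations, (n, width)
    act = np.maximum(pre, 0.0)                       # gradient wrt v
    gate = (pre > 0.0).astype(float) * v             # per-unit factor of grad wrt W
    # v-contribution: act @ act.T ; W-contribution factorizes into (gate gram) * (input gram)
    return act @ act.T + (gate @ gate.T) * (z @ z.T)

def eig_dispersion(K):
    """Coefficient of variation of the NTK spectrum: std(lambda) / mean(lambda)."""
    lam = np.linalg.eigvalsh(K)
    return lam.std() / lam.mean()

x = np.linspace(0.0, 1.0, 64)
z_raw = np.stack([x, np.ones_like(x)], axis=1)       # plain coordinate input (+bias)
z_pe = fourier_encode(x, freqs=np.arange(1, 9))      # 8 Fourier frequencies

cv_raw = eig_dispersion(empirical_ntk(z_raw))
cv_pe = eig_dispersion(empirical_ntk(z_pe))
print(f"eigenvalue dispersion  raw coords: {cv_raw:.2f}   fourier PE: {cv_pe:.2f}")
```

With raw 1D coordinates, all input vectors are nearly parallel, so the NTK is close to rank-one and its eigenvalues are highly dispersed; the Fourier encoding makes distant coordinates nearly orthogonal, pushing the kernel toward diagonal and the spectrum toward uniform, which is the "improved NTK conditioning" the abstract attributes to positional encoding.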

cs / cs.LG