Published: 2026/1/7 4:25:33

Decentralized optimization is a gal's best friend! ✨ Blazing-fast training even on large-scale data!

Ultra-short summary: It even tames wildly exploding gradients! A method that makes decentralized learning as strong as it gets!

Gal-style sparkle points ✨

● Even if the gradients are going wild, no problem! It works like magic that keeps the computation stable 🪄
● Model training gets blazing fast! Saves both time and money 💰
● AI tech could feel way more accessible, and new services might be born 💖

Detailed explanation

The full explanation continues in the 「らくらく論文」 app.

Provably Convergent Decentralized Optimization over Directed Graphs under Generalized Smoothness

Yanan Bo / Yongqiang Wang

Decentralized optimization has become a fundamental tool for large-scale learning systems; however, most existing methods rely on the classical Lipschitz smoothness assumption, which is often violated in problems with rapidly varying gradients. Motivated by this limitation, we study decentralized optimization under the generalized $(L_0, L_1)$-smoothness framework, in which the Hessian norm is allowed to grow linearly with the gradient norm, thereby accommodating rapidly varying gradients beyond classical Lipschitz smoothness. We integrate gradient-tracking techniques with gradient clipping and carefully design the clipping threshold to ensure accurate convergence over directed communication graphs under generalized smoothness. In contrast to existing distributed optimization results under generalized smoothness that require a bounded gradient dissimilarity assumption, our results remain valid even when the gradient dissimilarity is unbounded, making the proposed framework more applicable to realistic heterogeneous data environments. We validate our approach via numerical experiments on standard benchmark datasets, including LIBSVM and CIFAR-10, using regularized logistic regression and convolutional neural networks, demonstrating superior stability and faster convergence over existing methods.

cs / math.OC / cs.LG
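
The abstract names the key ingredients: gradient tracking combined with gradient clipping over a directed communication graph, analyzed under $(L_0, L_1)$-smoothness, i.e. the Hessian norm may grow with the gradient norm as $\|\nabla^2 f_i(x)\| \le L_0 + L_1 \|\nabla f_i(x)\|$. As a rough illustration only, not the authors' algorithm or their threshold design, the sketch below runs a Push-DIGing-style gradient-tracking loop with a simple norm clip on each agent's tracked direction, on a toy quadratic problem over a directed ring; the mixing weights, step size `alpha`, clipping threshold `tau`, and the placement of the clipping step are all illustrative assumptions.

```python
import numpy as np

# --- Toy problem: n agents, each with a local quadratic loss f_i ---
rng = np.random.default_rng(0)
n, d = 5, 3                                   # number of agents, dimension
A = [rng.standard_normal((d, d)) for _ in range(n)]
Q = [a.T @ a + np.eye(d) for a in A]          # local Hessians (positive definite)
b = [rng.standard_normal(d) for _ in range(n)]

def grad(i, x):
    """Gradient of f_i(x) = 0.5 * x'Q_i x - b_i'x."""
    return Q[i] @ x - b[i]

# --- Directed ring: agent j sends to agent (j+1) mod n and to itself ---
# Column-stochastic mixing matrix C (each column sums to 1), as in push-sum methods.
C = np.zeros((n, n))
for j in range(n):
    C[j, j] = 0.5
    C[(j + 1) % n, j] = 0.5

def clip(v, tau):
    """Scale v down to norm tau if it exceeds tau (standard norm clipping)."""
    nv = np.linalg.norm(v)
    return v if nv <= tau else (tau / nv) * v

# --- Push-DIGing-style gradient tracking with clipping (illustrative) ---
alpha, tau, T = 0.02, 5.0, 2000               # step size, clip threshold, iterations
u = np.zeros((n, d))                          # raw states
w = np.ones(n)                                # push-sum weights
z = u / w[:, None]                            # de-biased local estimates
y = np.array([grad(i, z[i]) for i in range(n)])   # gradient trackers

for _ in range(T):
    g_old = np.array([grad(i, z[i]) for i in range(n)])
    # Each agent steps along its *clipped* tracked direction, then mixes.
    u = C @ (u - alpha * np.array([clip(y[i], tau) for i in range(n)]))
    w = C @ w
    z = u / w[:, None]
    g_new = np.array([grad(i, z[i]) for i in range(n)])
    y = C @ y + g_new - g_old                 # track the network-average gradient

print("consensus spread :", np.ptp(z, axis=0).max())
print("avg-gradient norm:", np.linalg.norm(np.mean(g_new, axis=0)))
```

When the clip is inactive (small tracked directions) this reduces to plain Push-DIGing; the paper's contribution, per the abstract, lies in choosing the clipping threshold so that exact convergence is retained under generalized smoothness and unbounded gradient dissimilarity, which this toy sketch does not attempt to reproduce.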