Published: 2025/12/17 6:30:15

Ultimate gal mega-boosts distributed learning with DP 💖

Super-short summary: with differential privacy, safe & blazing-fast learning even on directed graphs! ✨

Gal-style sparkle points ✨

  • Slashes the risk of data leaks! Training AI while still protecting personal info? Total godsend, right? 🥺
  • Works even on directed graphs (one-way networks)! Usable all over the place, so it's the strongest 👍
  • Doesn't get stuck in local optima! Fast training, and model accuracy goes way up ⤴️

Detailed explanation

Read the rest in the 「らくらく論文」 app

Differentially Private Gradient-Tracking-Based Distributed Stochastic Optimization over Directed Graphs

Jialong Chen / Jimin Wang / Ji-Feng Zhang

This paper proposes a differentially private gradient-tracking-based distributed stochastic optimization algorithm over directed graphs. In particular, privacy noises are incorporated into each agent's state and tracking variable to mitigate information leakage, after which the perturbed states and tracking variables are transmitted to neighbors. We design two novel schemes for the step-sizes and the sampling number within the algorithm. The sampling parameter-controlled subsampling method employed by both schemes enhances the differential privacy level and ensures a finite cumulative privacy budget even over infinite iterations. The algorithm achieves both almost sure and mean square convergence for nonconvex objectives. Furthermore, when nonconvex objectives satisfy the Polyak-Łojasiewicz (PL) condition, Scheme (S1) achieves a polynomial mean square convergence rate, and Scheme (S2) achieves an exponential mean square convergence rate. The trade-off between privacy and convergence is presented. The effectiveness of the algorithm and its superior performance compared to existing works are illustrated through numerical examples of distributed training on the benchmark datasets "MNIST" and "CIFAR-10".
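The core mechanism the abstract describes — each agent adds privacy noise to its state and its gradient-tracking variable before transmitting them to neighbors — can be sketched minimally. Everything below is illustrative, not taken from the paper: the toy quadratic costs, the three-agent directed ring, the specific step-size and noise schedules, and the use of a doubly stochastic mixing matrix with exact gradients (the paper handles general directed graphs with subsampled stochastic gradients and formal DP accounting).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: 3 agents, f_i(x) = 0.5 * (x - c_i)^2, so the minimizer of the
# network-average cost (1/n) * sum_i f_i is mean(c) = 3.0.
c = np.array([1.0, 2.0, 6.0])
n = len(c)

# Mixing matrix for the directed ring 0 -> 1 -> 2 -> 0 (illustrative choice;
# it happens to be doubly stochastic, which keeps the sketch simple).
W = 0.5 * (np.eye(n) + np.roll(np.eye(n), 1, axis=0))

def grad(x):
    return x - c  # stacked per-agent gradients

x = np.zeros(n)   # agent states
y = grad(x)       # tracking variables, initialized to the local gradients

for k in range(2000):
    alpha = 1.0 / (k + 10)    # decaying step-size (one of many valid choices)
    sigma = 0.05 * 0.95 ** k  # decaying Laplace noise scale (illustrative)
    # Each agent perturbs what it transmits, not its own internal copy.
    x_sent = x + rng.laplace(scale=sigma, size=n)
    y_sent = y + rng.laplace(scale=sigma, size=n)
    g_old = grad(x)
    x = W @ x_sent - alpha * y        # consensus step + tracked-gradient descent
    y = W @ y_sent + grad(x) - g_old  # track the network-average gradient

print(x.round(2))  # all agents should end up near the average minimizer 3.0
```

Because the tracking variables accumulate the transmitted noise, the noise scale must decay over iterations for the states to settle near the true minimizer — which is exactly the kind of privacy/convergence trade-off the paper's two schemes formalize.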

cs / eess.SY / cs.SY