Title: Supercharge FL with DRAG & BR-DRAG! 💖 Ultra-summary: Client drift & Byzantine attacks? Defeated! The strongest FL is born ☆
✨ Gal-style sparkle points ✨
● Correcting data skew (client drift) with a divergence-of-degree (DoD) metric, how cool is that? 🥺
● BR-DRAG stands strong against malicious (Byzantine) attacks too! Truly the strongest 🛡️
● Protecting privacy while training AI models at blazing speed, so divine ✨
Detailed explanation coming up! ✍️
Background: Federated learning (FL), where data stays spread across all sorts of places, is super cool because it protects privacy, right? 🥰 But when each client's data is skewed (client drift), or bad actors tamper with the data (Byzantine attacks), training just doesn't go well 😭
Inherent client drifts caused by data heterogeneity, as well as vulnerability to Byzantine attacks within the system, hinder effective model training and convergence in federated learning (FL). This paper presents two new frameworks, named DiveRgence-based Adaptive aGgregation (DRAG) and Byzantine-Resilient DRAG (BR-DRAG), to mitigate client drifts and resist attacks while expediting training. DRAG designs a reference direction and a metric named divergence of degree to quantify the deviation of local updates. Accordingly, each worker can align its local update via linear calibration without extra communication cost. BR-DRAG refines DRAG under Byzantine attacks by maintaining a vetted root dataset at the server to produce trusted reference directions. The workers' updates can then be calibrated to mitigate divergence caused by malicious attacks. We analytically prove that DRAG and BR-DRAG achieve fast convergence for non-convex models under partial worker participation, data heterogeneity, and Byzantine attacks. Experiments validate the effectiveness of DRAG and its superior performance over state-of-the-art methods in handling client drifts, and highlight the robustness of BR-DRAG against data heterogeneity and diverse Byzantine attacks.
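To make the reference-direction and linear-calibration idea concrete, here is a minimal NumPy sketch of one aggregation round. Everything in it is an illustrative assumption based only on the abstract: the cosine-based `degree_of_divergence`, the blending coefficient `c`, and the simple averaging step are one plausible reading, not the paper's exact formulas.

```python
import numpy as np

def degree_of_divergence(local_update, ref_direction, eps=1e-12):
    """Cosine-based deviation of a worker's update from the reference.

    One plausible instantiation of DRAG's DoD metric; the exact
    definition in the paper may differ.
    """
    cos = np.dot(local_update, ref_direction) / (
        np.linalg.norm(local_update) * np.linalg.norm(ref_direction) + eps
    )
    return 1.0 - cos  # 0 when perfectly aligned, grows as the update diverges

def calibrate(local_update, ref_direction, c=0.5, eps=1e-12):
    """Linearly drag a local update toward the reference direction.

    The blending rule below is an assumption for illustration, not the
    paper's calibration formula; it needs no extra communication because
    it uses only quantities the receiver already has.
    """
    dod = degree_of_divergence(local_update, ref_direction, eps)
    unit_ref = ref_direction / (np.linalg.norm(ref_direction) + eps)
    return local_update + c * dod * np.linalg.norm(local_update) * unit_ref

# One aggregation round with synthetic updates. In plain DRAG the
# reference could come from the training history; in BR-DRAG the server
# would instead derive it from its vetted root dataset, so the stand-in
# ref_direction below is hypothetical.
rng = np.random.default_rng(0)
ref_direction = rng.normal(size=10)                      # trusted reference
local_updates = [rng.normal(size=10) for _ in range(5)]  # workers' updates
calibrated = [calibrate(u, ref_direction) for u in local_updates]
global_update = np.mean(calibrated, axis=0)              # average after calibration
```

The key design point the abstract highlights is where the reference comes from: in BR-DRAG it is produced from a small vetted root dataset held at the server, so Byzantine workers cannot poison the direction that everyone is calibrated against.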