Ultra-short summary: They found a way to protect privacy in GNNs (graph neural networks) while massively boosting accuracy too! 🚀
💖 Gal-style sparkle points 💖
● Overcomes a GNN weakness! Protecting privacy in GNNs used to be a real struggle, but this research looks like it can solve that! 😍
● The privacy cost doesn't grow with the number of layers! Even multi-layer GNNs (the high-performance kind) become safe to use! 👯♀️
● Useful in all kinds of fields, like finance and healthcare! Isn't it amazing that it can be used freely even where personal data is handled? 😎
Here's the detailed breakdown!
● Background: GNNs are super useful for analyzing graph-structured data like social networks and recommender systems, but there was a risk of personal information leaking 😱 And there's a dilemma: adding noise (like annoying static) for privacy protection hurts accuracy…
Differential privacy (DP) has been integrated into graph neural networks (GNNs) to protect sensitive structural information, e.g., edges, nodes, and associated features across various applications. A prominent approach is to perturb the message-passing process, which forms the core of most GNN architectures. However, existing methods typically incur a privacy cost that grows linearly with the number of layers (e.g., GAP, published at USENIX Security '23), ultimately requiring excessive noise to maintain a reasonable privacy level. This limitation becomes particularly problematic when multi-layer GNNs, which have shown better performance than one-layer GNNs, are used to process graph data with sensitive information. In this paper, we theoretically establish that the privacy budget converges with respect to the number of layers by applying privacy amplification techniques to the message-passing process, exploiting the contractive properties inherent to standard GNN operations. Motivated by this analysis, we propose a simple yet effective Contractive Graph Layer (CGL) that ensures the contractiveness required for theoretical guarantees while preserving model utility. Our framework, CARIBOU, supports both training and inference, equipped with a contractive aggregation module, a privacy allocation module, and a privacy auditing module. Experimental evaluations demonstrate that CARIBOU significantly improves the privacy-utility trade-off and achieves superior performance in privacy auditing tasks.
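To make the "contractive" idea concrete, here is a minimal NumPy sketch of a message-passing layer built only from non-expansive pieces: symmetric-normalized aggregation (spectral norm at most 1), a weight matrix clipped to spectral norm at most 1, and a 1-Lipschitz activation. Under these conditions the distance between any two embeddings cannot grow across layers, which is the property that the abstract's privacy-amplification argument exploits. Note this is a hypothetical illustration under stated assumptions: the function name `contractive_layer`, the spectral-norm clipping rule, and the noise placement are my own choices, not the paper's actual CGL definition or CARIBOU's noise calibration.

```python
import numpy as np

def contractive_layer(H, A_hat, W, rng, noise_std=0.1):
    """Hypothetical contractive message-passing layer (illustrative, not the
    paper's CGL): normalized aggregation and a spectral-norm-clipped weight
    matrix are both non-expansive, tanh is 1-Lipschitz, and Gaussian noise
    stands in for the DP perturbation."""
    s_max = np.linalg.svd(W, compute_uv=False)[0]
    W_c = W / max(s_max, 1.0)            # clip spectral norm of W to <= 1
    Z = np.tanh(A_hat @ H @ W_c)         # aggregate neighbors, transform, activate
    return Z + rng.normal(0.0, noise_std, Z.shape)

# Small demo: the gap between two embeddings cannot grow across layers
# (noise switched off so the contraction property is visible exactly).
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_loop = A + np.eye(4)                        # add self-loops
deg = A_loop.sum(axis=1)
A_hat = A_loop / np.sqrt(np.outer(deg, deg))  # D^{-1/2}(A+I)D^{-1/2}, spectral norm <= 1

W = rng.normal(size=(3, 3))
H1 = rng.normal(size=(4, 3))
H2 = H1.copy()
H2[0] += 1.0                                  # perturb one node's features

dist0 = np.linalg.norm(H1 - H2)
for _ in range(3):
    H1 = contractive_layer(H1, A_hat, W, rng, noise_std=0.0)
    H2 = contractive_layer(H2, A_hat, W, rng, noise_std=0.0)
dist_final = np.linalg.norm(H1 - H2)
print(dist_final <= dist0)   # contractiveness: prints True
```

The sketch only shows why contractive operators keep per-layer sensitivity from accumulating, which is the intuition behind the converging privacy budget; it says nothing about the actual privacy accounting or auditing modules in CARIBOU.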