Differentially Private and Communication Efficient Collaborative Learning
DOI:
https://doi.org/10.1609/aaai.v35i8.16887
Keywords:
Ethics -- Bias, Fairness, Transparency & Privacy; Distributed Machine Learning & Federated Learning; Learning on the Edge & Model Compression; Classification and Regression
Abstract
Collaborative learning has attracted substantial interest due to its ability to exploit the collective computing power of wireless edge devices. However, during the learning process, model updates computed on local private samples and large-scale parameter exchanges among agents raise severe privacy concerns and create a communication bottleneck. To address these problems, we propose two differentially private (DP) and communication-efficient algorithms, called Q-DPSGD-1 and Q-DPSGD-2. In Q-DPSGD-1, each agent first performs local model updates with a DP gradient descent method to provide a DP guarantee, and then quantizes the local model before transmitting it to its neighbors to improve communication efficiency. In Q-DPSGD-2, each agent first quantizes the local model and then injects discrete Gaussian noise to enforce the DP guarantee. Moreover, we track the privacy loss of both approaches under Rényi DP and provide convergence analyses for both convex and non-convex loss functions. The proposed methods are evaluated in extensive experiments on real-world datasets, and the empirical results validate our theoretical findings.
Downloads
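To make the two mechanisms in the abstract concrete, below is a minimal sketch of one local step of each variant. It is not the authors' implementation: the function names, hyperparameters (lr, clip_c, noise_sigma, scale, dg_sigma), the stochastic-rounding quantizer, and the rounded-Gaussian stand-in for a true discrete Gaussian sampler are all illustrative assumptions. It shows only the order of operations that distinguishes the two algorithms: noise-then-quantize (Q-DPSGD-1) versus quantize-then-discrete-noise (Q-DPSGD-2).

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_l2(v, c):
    """Clip a vector to L2 norm at most c (standard DP-SGD gradient clipping)."""
    n = np.linalg.norm(v)
    return v if n <= c else v * (c / n)

def stoch_round(x):
    """Unbiased stochastic rounding to the integer grid: E[round(x)] = x."""
    f = np.floor(x)
    return f + (rng.random(x.shape) < (x - f))

def rounded_gaussian(shape, sigma):
    """Integer-valued noise: a rounded-Gaussian stand-in for a true
    discrete Gaussian sampler (assumption, for illustration only)."""
    return np.rint(rng.normal(0.0, sigma, shape))

def q_dpsgd1_step(w, grad, lr=0.1, clip_c=1.0, noise_sigma=1.0, scale=1 / 64):
    """Sketch of one Q-DPSGD-1 local step: DP gradient descent
    (clip + Gaussian noise on the gradient), then quantize the model."""
    g = clip_l2(grad, clip_c) + rng.normal(0.0, noise_sigma * clip_c, grad.shape)
    w = w - lr * g
    return scale * stoch_round(w / scale)  # quantized model sent to neighbors

def q_dpsgd2_step(w, grad, lr=0.1, scale=1 / 64, dg_sigma=4.0):
    """Sketch of one Q-DPSGD-2 local step: plain local update, quantize
    first, then add discrete noise on the quantized grid for DP."""
    w = w - lr * grad
    q = stoch_round(w / scale)                # integer-grid representation
    q += rounded_gaussian(q.shape, dg_sigma)  # discrete noise enforces DP
    return scale * q                          # noisy quantized model to send

# Tiny usage example with random data:
w, g = rng.normal(size=10), rng.normal(size=10)
print(q_dpsgd1_step(w, g)[:3], q_dpsgd2_step(w, g)[:3])
```

In both sketches the transmitted vector lives on a coarse grid (hence cheaper to encode), while the DP noise is applied either before quantization in the continuous domain or after quantization in the discrete domain, mirroring the two designs described above.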
Published
2021-05-18
How to Cite
Ding, J., Liang, G., Bi, J., & Pan, M. (2021). Differentially Private and Communication Efficient Collaborative Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 7219-7227. https://doi.org/10.1609/aaai.v35i8.16887
Section
AAAI Technical Track on Machine Learning I