A Framework for Evaluating Client Privacy Leakages in Federated Learning

Published: 14 September 2020
DOI: 10.1007/978-3-030-58951-6_27

Abstract

Federated learning (FL) is an emerging distributed machine learning framework for collaborative model training with a network of clients (edge devices). FL offers default client privacy by allowing clients to keep their sensitive data on local devices and to share only local training parameter updates with the federated server. However, recent studies have shown that even sharing local parameter updates from a client to the federated server may be susceptible to gradient leakage attacks that intrude on the client's privacy with respect to its training data. In this paper, we present a principled framework for evaluating and comparing different forms of client privacy leakage attacks. We first provide formal and experimental analysis to show how adversaries can reconstruct the private local training data by simply analyzing the shared parameter update from local training (e.g., the local gradient or weight update vector). We then analyze how different hyperparameter configurations in federated learning and different settings of the attack algorithm may impact both attack effectiveness and attack cost. Our framework also measures, evaluates, and analyzes the effectiveness of client privacy leakage attacks under different gradient compression ratios when using communication-efficient FL protocols. Our experiments additionally include some preliminary mitigation strategies to highlight the importance of providing a systematic attack evaluation framework towards an in-depth understanding of the various forms of client privacy leakage threats in federated learning and towards developing theoretical foundations for attack mitigation.
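
To make the attack surface concrete, the sketch below illustrates the gradient-matching idea underlying this family of client privacy leakage attacks (in the spirit of deep leakage from gradients): given only the gradient a client shares with the federated server, an adversary optimizes a randomly initialized dummy input and label until their gradient matches the observed one, thereby recovering an approximation of the private training example. This is a minimal PyTorch sketch under assumed toy settings (a single linear model, one MNIST-sized example, L-BFGS for the matching objective), not the paper's exact attack implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy victim model and one private training example the attacker never sees directly.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()
x_true = torch.rand(1, 1, 28, 28)   # private input
y_true = torch.tensor([3])          # private label

# The per-example gradient the client would share with the federated server.
shared_grads = torch.autograd.grad(criterion(model(x_true), y_true), model.parameters())
shared_grads = [g.detach() for g in shared_grads]

# Attacker: start from a random dummy input and a soft dummy label, then optimize
# them so that their gradient matches the shared gradient.
x_dummy = torch.rand(1, 1, 28, 28, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    optimizer.zero_grad()
    # Cross-entropy with a soft (optimizable) label distribution.
    loss = torch.sum(-torch.softmax(y_dummy, dim=-1)
                     * torch.log_softmax(model(x_dummy), dim=-1))
    dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    grad_diff = sum(((dg - sg) ** 2).sum() for dg, sg in zip(dummy_grads, shared_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(100):
    optimizer.step(closure)

# x_dummy now approximates the private input, reconstructed from the gradient alone.
print("reconstruction MSE:", torch.nn.functional.mse_loss(x_dummy, x_true).item())
```

How well and how quickly such a reconstruction converges depends on the dummy-data initialization, the number of attack iterations, and how much of the gradient is actually shared (e.g., under gradient compression), which is precisely the kind of hyperparameter and compression-ratio sensitivity the paper's evaluation framework measures.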

Published In

Computer Security – ESORICS 2020: 25th European Symposium on Research in Computer Security, ESORICS 2020, Guildford, UK, September 14–18, 2020, Proceedings, Part I
Sep 2020
773 pages
ISBN: 978-3-030-58950-9
DOI: 10.1007/978-3-030-58951-6

Publisher

Springer-Verlag, Berlin, Heidelberg

Publication History

Published: 14 September 2020

Author Tags

  1. Privacy leakage attacks
  2. Federated learning
  3. Attack evaluation framework

Qualifiers

  • Article

Cited By

  • (2024) Guardian: Guarding against Gradient Leakage with Provable Defense for Federated Learning. Proceedings of the 17th ACM International Conference on Web Search and Data Mining, pp. 190–198. DOI: 10.1145/3616855.3635758. Online publication date: 4 Mar 2024
  • (2024) Privacy Preserving Federated Learning: A Novel Approach for Combining Differential Privacy and Homomorphic Encryption. Information Security Theory and Practice, pp. 162–177. DOI: 10.1007/978-3-031-60391-4_11. Online publication date: 29 Feb 2024
  • (2023) Mnemonist. Proceedings of the Thirty-Ninth Conference on Uncertainty in Artificial Intelligence, pp. 1879–1888. DOI: 10.5555/3625834.3626010. Online publication date: 31 Jul 2023
  • (2023) Surrogate model extension (SME). Proceedings of the 40th International Conference on Machine Learning, pp. 43228–43257. DOI: 10.5555/3618408.3620229. Online publication date: 23 Jul 2023
  • (2023) A Privacy Preserving System for Movie Recommendations Using Federated Learning. ACM Transactions on Recommender Systems. DOI: 10.1145/3634686. Online publication date: 24 Nov 2023
  • (2023) LDIA. Journal of Information Security and Applications, 74(C). DOI: 10.1016/j.jisa.2023.103475. Online publication date: 1 May 2023
  • (2022) Enhancing Privacy in Federated Learning with Local Differential Privacy for Email Classification. Data Privacy Management, Cryptocurrencies and Blockchain Technology, pp. 3–18. DOI: 10.1007/978-3-031-25734-6_1. Online publication date: 26 Sep 2022
