DOI: 10.1145/3340531.3412771 (CIKM Conference Proceedings)
Research article · Open access

PrivacyFL: A Simulator for Privacy-Preserving and Secure Federated Learning

Published: 19 October 2020

Abstract

Federated learning is a technique that enables distributed clients to collaboratively learn a shared machine learning model without sharing their training data. This reduces data privacy risks; however, privacy concerns remain, because information about the training dataset can still be leaked from the trained model's weights or parameters. It is therefore important to develop federated learning algorithms that train highly accurate models in a privacy-preserving manner. Setting up a federated learning environment, especially one with security and privacy guarantees, is a time-consuming process with numerous configurations and parameters to tune. To help clients verify that collaboration is feasible and that it improves their model accuracy, a realistic simulator for privacy-preserving and secure federated learning is required.
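The collaboration pattern described above is the basis of federated averaging: each client trains locally on its own data and only model parameters, never raw data, are sent for aggregation. The following is a minimal, hypothetical Python sketch of that idea (it is not PrivacyFL's actual API; `local_update` stands in for any local learner):

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training step: plain gradient descent on a
    linear model. Only the resulting weights leave the client."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Server-side aggregation: average client models weighted by local
    dataset size (the federated-averaging rule)."""
    total = sum(client_sizes)
    return sum((n / total) * w for w, n in zip(client_weights, client_sizes))
```

For example, averaging two client models `[1, 1]` and `[3, 3]` with dataset sizes 1 and 3 yields `[2.5, 2.5]`, since the larger client contributes three quarters of the weight.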
In this paper, we introduce PrivacyFL, which is an extensible, easily configurable, and scalable simulator for federated learning environments. Its key features include latency simulation, robustness to client departure/failure, support for both centralized (with one or more servers) and decentralized (serverless) learning, and configurable privacy and security mechanisms based on differential privacy and secure multiparty computation (MPC).
We motivate our research, describe the architecture of the simulator and its associated protocols, and evaluate it in numerous scenarios that highlight its wide range of functionality and its advantages. Our work addresses a significant real-world problem: checking the feasibility of participating in a federated learning environment under a variety of circumstances. It also has strong practical impact: organizations with large amounts of sensitive data, such as hospitals, banks, and research institutes, would greatly benefit from a system that enables them to collaborate in a privacy-preserving and secure manner.
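The differential-privacy and secure-aggregation mechanisms mentioned above can be sketched briefly. The following hypothetical Python fragment (an illustration under assumed parameters, not PrivacyFL's implementation) shows the Laplace mechanism for differential privacy and a toy pairwise-masking scheme in the spirit of MPC-based secure aggregation: each pair of clients shares a random mask that one adds and the other subtracts, so the masks cancel in the sum while individual updates stay hidden from the server:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_dp_noise(update, sensitivity, epsilon):
    """Laplace mechanism: noise scaled to sensitivity/epsilon hides any
    single record's contribution to the update."""
    return update + rng.laplace(0.0, sensitivity / epsilon, size=update.shape)

def pairwise_masks(n_clients, dim, seed=42):
    """Toy secure aggregation: for each pair (i, j) with i < j, draw a
    shared random mask; client i adds it and client j subtracts it, so
    all masks cancel when the server sums the masked updates."""
    r = np.random.default_rng(seed)
    masks = [np.zeros(dim) for _ in range(n_clients)]
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            m = r.normal(size=dim)
            masks[i] += m
            masks[j] -= m
    return masks
```

The key property is that the server sees only `update + mask` per client, yet the sum of the masked updates equals the sum of the true updates.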

Supplementary Material

MP4 File (3340531.3412771.mp4)
PrivacyFL is an extensible, easily configurable, and scalable simulator for federated learning environments. Its key features include latency simulation, robustness to client departure/failure, support for both centralized (with one or more servers) and decentralized (serverless) learning, and configurable privacy and security mechanisms based on differential privacy and secure multiparty computation (MPC).




Published In

CIKM '20: Proceedings of the 29th ACM International Conference on Information & Knowledge Management
October 2020
3619 pages
ISBN:9781450368599
DOI:10.1145/3340531
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. differential privacy
  2. federated learning
  3. privacy-preserving federated learning
  4. secure multiparty computation


Conference

CIKM '20

Acceptance Rates

Overall Acceptance Rate 1,861 of 8,427 submissions, 22%



Article Metrics

  • Downloads (last 12 months): 440
  • Downloads (last 6 weeks): 35
Reflects downloads up to 29 November 2024.


Cited By

  • Federated Learning Security and Privacy-Preserving Algorithm and Experiments Research Under Internet of Things Critical Infrastructure. Tsinghua Science and Technology 29(2):400-414, April 2024. DOI: 10.26599/TST.2023.9010007
  • Amalgam: A Framework for Obfuscated Neural Network Training on the Cloud. Proceedings of the 25th International Middleware Conference, 238-251, December 2024. DOI: 10.1145/3652892.3700762
  • FLSys: Toward an Open Ecosystem for Federated Learning Mobile Apps. IEEE Transactions on Mobile Computing 23(1):501-519, January 2024. DOI: 10.1109/TMC.2022.3223578
  • GraphFederator: Federated Visual Analysis for Multi-party Graphs. 2024 IEEE 17th Pacific Visualization Conference (PacificVis), 172-181, April 2024. DOI: 10.1109/PacificVis60374.2024.00027
  • A Survey on the use of Federated Learning in Privacy-Preserving Recommender Systems. IEEE Open Journal of the Computer Society 5:227-247, 2024. DOI: 10.1109/OJCS.2024.3396344
  • Federated Learning for Medical Applications: A Taxonomy, Current Trends, Challenges, and Future Research Directions. IEEE Internet of Things Journal 11(5):7374-7398, March 2024. DOI: 10.1109/JIOT.2023.3329061
  • Blades: A Unified Benchmark Suite for Byzantine Attacks and Defenses in Federated Learning. 2024 IEEE/ACM Ninth International Conference on Internet-of-Things Design and Implementation (IoTDI), 158-169, May 2024. DOI: 10.1109/IoTDI61053.2024.00018
  • A secure and privacy preserved infrastructure for VANETs based on federated learning with local differential privacy. Information Sciences 652:119717, January 2024. DOI: 10.1016/j.ins.2023.119717
  • Low dimensional secure federated learning framework against poisoning attacks. Future Generation Computer Systems 158:183-199, September 2024. DOI: 10.1016/j.future.2024.04.017
  • Adversarial Attacks on GNN-Based Vertical Federated Learning. Attacks, Defenses and Testing for Deep Learning, 35-54, June 2024. DOI: 10.1007/978-981-97-0425-5_3
