Aggregation Service for Federated Learning: An Efficient, Secure, and More Resilient Realization

Published: 01 March 2023

Abstract

Federated learning has recently emerged as a paradigm promising the benefits of harnessing rich data from diverse sources to train high-quality models, with the salient feature that training datasets never leave local devices; only locally computed model updates are shared for aggregation into a global model. While federated learning greatly alleviates the privacy concerns of learning over centralized data, sharing model updates still poses privacy risks. In this paper, we present a system design that efficiently protects individual model updates throughout the learning procedure: clients provide only obscured model updates, yet a cloud server can still perform the aggregation. Our federated learning system first departs from prior work by supporting lightweight encryption and aggregation, as well as resilience against dropped-out clients with no impact on their participation in future rounds. Moreover, prior work largely overlooks bandwidth-efficiency optimization in the ciphertext domain and security against an actively adversarial cloud server, both of which we address in this paper with effective and efficient mechanisms. Extensive experiments over several benchmark datasets (MNIST, CIFAR-10, and CelebA) show that our system achieves accuracy comparable to the plaintext baseline, with practical performance.
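
The core mechanism sketched in the abstract is additive secure aggregation: each client submits only an obscured model update, yet the server can still recover the exact sum. As a minimal illustrative sketch (not the paper's actual lightweight encryption scheme), the Python snippet below shows the classic pairwise additive-masking idea behind this guarantee; the client ids, pairwise seeds, and helper functions are all hypothetical.

import numpy as np

# Hypothetical pairwise-masking sketch; ids, seeds, and helper names are
# illustrative and not taken from the paper.

def shared_mask(seed, shape):
    """Derive a deterministic pseudorandom mask from a pairwise seed."""
    return np.random.default_rng(seed).normal(size=shape)

def mask_update(client_id, update, pairwise_seeds):
    """Obscure one client's model update with pairwise additive masks."""
    masked = update.copy()
    for other_id, seed in pairwise_seeds.items():
        mask = shared_mask(seed, update.shape)
        # Convention: the lower-indexed client adds the mask, the higher one
        # subtracts it, so every pairwise mask cancels in the server's sum.
        masked += mask if client_id < other_id else -mask
    return masked

# Toy run: three clients, 4-dimensional "model updates".
rng = np.random.default_rng(0)
updates = {i: rng.normal(size=4) for i in range(3)}

# Pairwise seeds would come from a key agreement in a real protocol;
# here they are fixed constants shared by each pair of clients.
pairwise_seeds = {
    0: {1: 11, 2: 22},
    1: {0: 11, 2: 33},
    2: {0: 22, 1: 33},
}

masked = {i: mask_update(i, updates[i], pairwise_seeds[i]) for i in range(3)}

# The server sees only masked updates, yet recovers the exact aggregate.
assert np.allclose(sum(masked.values()), sum(updates.values()))
print("aggregate:", sum(masked.values()))

The full system described in the paper additionally handles client dropout, ciphertext-domain bandwidth optimization, and an actively adversarial cloud server, none of which this toy example captures.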

Published In

IEEE Transactions on Dependable and Secure Computing, Volume 20, Issue 2
March-April 2023
885 pages

Publisher

IEEE Computer Society Press

Washington, DC, United States

Publication History

Published: 01 March 2023

Qualifiers

  • Research-article

Cited By

  • (2024) Secure and Verifiable Data Collaboration with Low-Cost Zero-Knowledge Proofs, Proceedings of the VLDB Endowment, 17(9), pp. 2321-2334, 6-Aug-2024. DOI: 10.14778/3665844.3665860
  • (2024) A Survey on Federated Unlearning: Challenges, Methods, and Future Directions, ACM Computing Surveys, 57(1), pp. 1-38, 19-Jul-2024. DOI: 10.1145/3679014
  • (2024) Personalized Privacy-Preserving Federated Learning, Proceedings of the 25th International Middleware Conference, pp. 454-466, 2-Dec-2024. DOI: 10.1145/3652892.3700785
  • (2024) BadSampler: Harnessing the Power of Catastrophic Forgetting to Poison Byzantine-robust Federated Learning, Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 1944-1955, 25-Aug-2024. DOI: 10.1145/3637528.3671879
  • (2024) PASTEL, Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 7(4), pp. 1-29, 12-Jan-2024. DOI: 10.1145/3633808
  • (2024) LDS-FL: Loss Differential Strategy Based Federated Learning for Privacy Preserving, IEEE Transactions on Information Forensics and Security, 19, pp. 1015-1030, 1-Jan-2024. DOI: 10.1109/TIFS.2023.3322328
  • (2024) Fuzzy Federated Learning for Privacy-Preserving Detection of Adolescent Idiopathic Scoliosis, IEEE Transactions on Fuzzy Systems, 32(10), pp. 5493-5507, 1-Oct-2024. DOI: 10.1109/TFUZZ.2024.3445468
  • (2024) TAPFed: Threshold Secure Aggregation for Privacy-Preserving Federated Learning, IEEE Transactions on Dependable and Secure Computing, 21(5), pp. 4309-4323, 1-Sep-2024. DOI: 10.1109/TDSC.2024.3350206
  • (2024) Exploring the Practicality of Differentially Private Federated Learning: A Local Iteration Tuning Approach, IEEE Transactions on Dependable and Secure Computing, 21(4), pp. 3280-3294, 1-Jul-2024. DOI: 10.1109/TDSC.2023.3325889
  • (2024) Source Inference Attacks: Beyond Membership Inference Attacks in Federated Learning, IEEE Transactions on Dependable and Secure Computing, 21(4), pp. 3012-3029, 1-Jul-2024. DOI: 10.1109/TDSC.2023.3321565
