Guardian: Guarding against Gradient Leakage with Provable Defense for Federated Learning

Published: 04 March 2024
DOI: 10.1145/3616855.3635758

Abstract

Federated learning is a privacy-focused learning paradigm that trains a global model with gradients uploaded from multiple participants, avoiding explicit exposure of private data. However, prior research on gradient leakage attacks shows that gradients alone can suffice to reconstruct private data, rendering the privacy protection mechanism of federated learning unreliable. Existing defenses commonly craft transformed gradients from the ground-truth gradients to obfuscate such attacks, but they often fail to maintain good model performance and satisfactory privacy protection at the same time. In this paper, we propose a novel and effective defense framework named Guardian (guarding against gradient leakage), which produces transformed gradients by jointly optimizing two theoretically derived, gradient-based metrics: one for performance maintenance and one for privacy protection. In this way, the transformed gradients produced by Guardian achieve, in theory, minimal privacy leakage at a given level of performance maintenance. Moreover, we design an initialization strategy that generates transformed gradients faster, enhancing the practicality of Guardian in real-world applications, and we show theoretically that the global model trained with Guardian converges. Extensive experiments on various tasks show that, without sacrificing much accuracy, Guardian effectively defends against state-of-the-art gradient leakage attacks, whereas baseline defenses have only slight effects.
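
To make the joint optimization concrete, the following is a minimal NumPy sketch, not the authors' implementation: Guardian's two theoretically derived metrics are not reproduced here, so a squared L2 distance to the true gradient stands in for the performance-maintenance term and a squared cosine similarity stands in for the privacy-leakage term; the function name `guardian_transform`, the trade-off weight `alpha`, and the noisy-copy initialization are illustrative assumptions.

```python
import numpy as np

def guardian_transform(true_grad, alpha=0.5, steps=200, lr=0.05, seed=0):
    """Hypothetical sketch of a Guardian-style gradient transformation.

    Jointly optimizes a transformed gradient t over two surrogate metrics:
      - performance term ||t - g||^2: keep the uploaded gradient useful
        for the global model update;
      - privacy term cos(t, g)^2: suppress the directional information
        that gradient inversion attacks exploit.
    """
    rng = np.random.default_rng(seed)
    g = np.asarray(true_grad, dtype=float).ravel()
    g_norm = np.linalg.norm(g) + 1e-12

    # Stand-in for the paper's initialization strategy: start from a
    # lightly perturbed copy of g rather than a random point, so the
    # joint optimization needs fewer steps.
    t = g + 0.1 * g_norm / np.sqrt(g.size) * rng.standard_normal(g.size)

    for _ in range(steps):
        t_norm = np.linalg.norm(t) + 1e-12
        cos = (t @ g) / (t_norm * g_norm)
        # Analytic gradient of cos(t, g) with respect to t.
        d_cos = g / (t_norm * g_norm) - cos * t / t_norm ** 2
        # Joint objective: performance term + alpha * privacy term.
        obj_grad = 2.0 * (t - g) + alpha * 2.0 * cos * d_cos
        t -= lr * obj_grad
    return t.reshape(np.shape(true_grad))
```

With `alpha = 0` the sketch returns (approximately) the true gradient; larger values of `alpha` trade accuracy for privacy by pulling the uploaded gradient toward directions that are less aligned with, and hence less revealing of, the true gradient of the private data.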

Published In

        WSDM '24: Proceedings of the 17th ACM International Conference on Web Search and Data Mining
        March 2024
        1246 pages
ISBN: 9798400703713
DOI: 10.1145/3616855

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Author Tags

        1. federated learning
        2. gradient leakage defense
        3. privacy protection

        Qualifiers

        • Research-article

        Funding Sources

• National Natural Science Foundation of China
        • Alibaba Innovation Research Program

        Conference

        WSDM '24

        Acceptance Rates

        Overall Acceptance Rate 498 of 2,863 submissions, 17%
