PnA: Robust Aggregation Against Poisoning Attacks to Federated Learning for Edge Intelligence

Online AM: 01 June 2024

Abstract

Federated learning (FL), which holds promise for edge intelligence applications in smart cities, enables smart devices to collaborate in training a global model by exchanging local model updates instead of sharing local training data. However, the global model can be corrupted by malicious clients conducting poisoning attacks, resulting in the global model failing to converge, making incorrect predictions on the test set, or carrying an embedded backdoor. Although some aggregation algorithms can enhance the robustness of FL against malicious clients, our work demonstrates that existing stealthy poisoning attacks can still bypass these defenses. In this work, we propose a robust aggregation mechanism, called Parts and All (PnA), that protects the global model of FL by filtering out malicious local model updates, detecting poisoning attacks at each layer of the local model updates. We conduct comprehensive experiments on three representative datasets. The experimental results demonstrate that PnA is more effective than existing robust aggregation algorithms against state-of-the-art poisoning attacks. Moreover, PnA performs stably under different poisoning settings.
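The abstract does not spell out PnA's actual algorithm, but the general idea it describes — screening each layer of the submitted local model updates for signs of poisoning before aggregating — can be illustrated with a minimal sketch. Everything below (the function name `layerwise_filter_aggregate`, the z-score outlier test, and the `z_thresh` parameter) is a hypothetical stand-in for illustration, not the paper's method:

```python
import numpy as np

def layerwise_filter_aggregate(updates, z_thresh=2.0):
    """Aggregate client model updates, dropping any client whose update
    looks anomalous in at least one layer.

    `updates` is a list of dicts mapping layer name -> np.ndarray.
    Per layer, each client's distance to the coordinate-wise median is
    computed; clients whose distance z-score exceeds `z_thresh` are
    flagged as suspicious and excluded from the final average.
    """
    n = len(updates)
    suspicious = np.zeros(n, dtype=bool)
    for layer in updates[0]:
        # Stack this layer across clients: shape (n_clients, n_params).
        stacked = np.stack([u[layer].ravel() for u in updates])
        median = np.median(stacked, axis=0)
        dists = np.linalg.norm(stacked - median, axis=1)
        mu, sigma = dists.mean(), dists.std()
        if sigma > 0:  # all-identical layers yield sigma == 0: flag no one
            suspicious |= (dists - mu) / sigma > z_thresh
    kept = [u for u, bad in zip(updates, suspicious) if not bad]
    if not kept:  # degenerate case: everyone flagged, fall back to all
        kept = updates
    # FedAvg-style mean over the surviving clients, layer by layer.
    return {layer: np.mean([u[layer] for u in kept], axis=0)
            for layer in kept[0]}
```

A real per-layer defense would use a detection statistic and threshold tuned to each layer's distribution; the sketch only conveys the structural point that a client flagged in any single layer is excluded from the averaging step.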



      Published In

      ACM Transactions on Sensor Networks Just Accepted
EISSN: 1550-4867

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Publication History

      Online AM: 01 June 2024
      Accepted: 21 May 2024
      Revised: 06 February 2024
      Received: 15 November 2022


      Author Tags

      1. Federated learning
      2. robust model aggregation
      3. poisoning attacks
      4. backdoor attacks

      Qualifiers

      • Research-article

