
MESAS: Poisoning Defense for Federated Learning Resilient against Adaptive Attackers

Published: 21 November 2023. DOI: 10.1145/3576915.3623212

Abstract

Federated Learning (FL) enhances decentralized machine learning by safeguarding data privacy, reducing communication costs, and improving model performance with diverse data sources. However, FL is vulnerable to untargeted poisoning attacks and targeted backdoor attacks, which threaten model integrity and security. Backdoors are especially hard to prevent due to their stealthy nature. Existing mitigation techniques have shown efficacy but often overlook realistic adversaries and diverse data distributions.
This work introduces the concept of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously. Extensive empirical testing reveals existing defenses' vulnerability under this adversary model. We present Metric-Cascades (MESAS), a novel defense method tailored to more realistic scenarios and adversary models. MESAS employs multiple detection metrics simultaneously to filter out poisoned model updates, posing a complex multi-objective problem for adaptive attackers. In a comprehensive evaluation across nine backdoors and three datasets, MESAS outperforms existing defenses in distinguishing backdoors from data distribution-related distortions within and across clients. MESAS offers robust defense against strong adaptive adversaries in real-world data settings, with a modest average overhead of just 24.37 seconds.
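To make the metric-cascade idea concrete, below is a minimal server-side sketch in Python/PyTorch. The specific metrics (L2 distance to the global model, cosine distance, weight variance) and the 3-sigma pruning rule are illustrative assumptions, not the paper's actual metric set or statistical tests; the point is only the cascading structure, where each metric filters the population that survived the previous one.

```python
import torch
import torch.nn.functional as F

def flatten_update(state_dict):
    """Flatten a state-dict-style model update into one 1-D tensor."""
    return torch.cat([p.flatten().float() for p in state_dict.values()])

def update_metrics(update, global_model):
    """Compute a few scalar metrics for one client update.
    Illustrative choices only: L2 distance to the global model,
    cosine distance, and the variance of the update's weights."""
    u, g = flatten_update(update), flatten_update(global_model)
    return {
        "l2": torch.norm(u - g).item(),
        "cosine": 1.0 - F.cosine_similarity(u, g, dim=0).item(),
        "variance": u.var().item(),
    }

def metric_cascade_filter(updates, global_model, z_threshold=3.0):
    """Cascade the metrics: at each stage, drop updates whose metric
    lies more than z_threshold standard deviations from the mean of
    the population that survived the previous stages."""
    metrics = [update_metrics(u, global_model) for u in updates]
    survivors = list(range(len(updates)))
    for name in ("l2", "cosine", "variance"):
        if len(survivors) < 2:
            break  # too few updates left for a meaningful statistic
        vals = torch.tensor([metrics[i][name] for i in survivors])
        mean, std = vals.mean(), vals.std().clamp_min(1e-12)
        survivors = [i for i, v in zip(survivors, vals)
                     if abs((v - mean) / std) <= z_threshold]
    return survivors  # indices of updates accepted for aggregation
```

Filtering metric by metric is what turns evasion into a multi-objective problem: an adaptive attacker must keep every metric of its poisoned update within the benign population's range simultaneously, rather than optimizing against a single detection criterion.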


Published In

CCS '23: Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security
November 2023, 3722 pages
ISBN: 9798400700507
DOI: 10.1145/3576915

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

1. backdoor attacks
2. federated learning
3. poisoning attacks
4. security

Qualifiers

• Research-article

Conference

CCS '23

Acceptance Rates

Overall Acceptance Rate: 1,261 of 6,999 submissions, 18%
