
DOI: 10.1145/3511808.3557232

Adversarial Robustness through Bias Variance Decomposition: A New Perspective for Federated Learning

Published: 17 October 2022

Abstract

Federated learning learns a neural network model by aggregating the knowledge from a group of distributed clients under the privacy-preserving constraint. In this work, we show that this paradigm might inherit the adversarial vulnerability of the centralized neural network, i.e., it has deteriorated performance on adversarial examples when the model is deployed. This is even more alarming when the federated learning paradigm is designed to approximate the updating behavior of a centralized neural network. To solve this problem, we propose an adversarially robust federated learning framework, named Fed_BVA, with improved server and client update mechanisms. This is motivated by our observation that the generalization error in federated learning can be naturally decomposed into the bias and variance triggered by multiple clients' predictions. Thus, we propose to generate adversarial examples by maximizing the bias and variance during the server update, and to learn adversarially robust model updates with those examples during the client update. As a result, an adversarially robust neural network can be aggregated from these improved local clients' model updates. The experiments are conducted on multiple benchmark data sets using several prevalent neural network models, and the empirical results show that our framework is robust against white-box and black-box adversarial corruptions under both IID and non-IID settings.
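The paragraph above compresses the whole pipeline into a few sentences, so a sketch may help make the moving parts concrete. The PyTorch snippet below is a minimal, illustrative reading of that description, not the paper's exact algorithm: bias is measured as the error of the averaged ("main") prediction of the client models, variance as the spread of individual client predictions around that main prediction, the server takes one FGSM-style step on the input to increase their weighted sum, clients train on the resulting adversarial examples alongside their local data, and the server averages the returned parameters. All function names (bias_variance_loss, server_generate_adversarial, client_update, fedavg), the specific loss forms, and the hyperparameters (eps, lam) are assumptions made for this sketch.

```python
# Illustrative sketch only: the concrete bias/variance objective and update
# rules in Fed_BVA may differ from what is shown here.
import copy

import torch
import torch.nn.functional as F


def bias_variance_loss(client_models, x, y):
    """Decompose the ensemble error on (x, y) into bias and variance terms.

    Bias: cross-entropy of the averaged ("main") prediction against the labels.
    Variance: KL divergence of each client's prediction from the main prediction.
    Both terms are differentiable w.r.t. the input x.
    """
    probs = torch.stack(
        [F.softmax(m(x), dim=-1) for m in client_models]
    ).clamp_min(1e-12)                    # (K, B, C), clamped for numerical safety
    main_pred = probs.mean(dim=0)         # (B, C) averaged ("main") prediction
    log_main = torch.log(main_pred)
    bias = F.nll_loss(log_main, y)
    variance = F.kl_div(
        log_main.unsqueeze(0).expand_as(probs).reshape(-1, probs.size(-1)),
        probs.reshape(-1, probs.size(-1)),
        reduction="batchmean",
    )
    return bias, variance


def server_generate_adversarial(client_models, x, y, eps=0.03, lam=1.0):
    """Server update: one FGSM-style step on x that increases bias + lam * variance."""
    x_adv = x.clone().detach().requires_grad_(True)
    bias, variance = bias_variance_loss(client_models, x_adv, y)
    (bias + lam * variance).backward()    # gradients on model parameters are simply discarded
    return (x + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()


def client_update(model, loader, x_adv, y_adv, lr=0.01, epochs=1):
    """Client update: local training on clean batches plus the shared adversarial examples."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss = F.cross_entropy(model(xb), yb) + F.cross_entropy(model(x_adv), y_adv)
            loss.backward()
            opt.step()
    return model.state_dict()


def fedavg(global_model, client_states):
    """Aggregate the returned client updates by simple parameter averaging (FedAvg)."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in client_states]).mean(dim=0)
    global_model.load_state_dict(avg)
    return global_model
```

In one communication round, the server would run server_generate_adversarial on a small labeled batch it has access to, broadcast the perturbed examples together with the current global weights, collect each client's client_update result, and aggregate with fedavg; where that batch comes from, and the exact form of the bias/variance objective, are details the paper specifies and this sketch does not.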

Supplementary Material

MP4 File (CIKM22-fp0046.mp4)


Published In

CIKM '22: Proceedings of the 31st ACM International Conference on Information & Knowledge Management
October 2022
5274 pages
ISBN: 9781450392365
DOI: 10.1145/3511808
General Chairs: Mohammad Al Hasan, Li Xiong

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 17 October 2022


Author Tags

  1. adversarial robustness
  2. bias-variance analysis
  3. federated learning

Qualifiers

  • Research-article

Funding Sources

  • National Science Foundation under Award No. IIS-1947203, IIS-2117902, IIS-2137468
  • Agriculture and Food Research Initiative (AFRI) grant no. 2020-67021-32799/project accession no. 1024178 from the USDA National Institute of Food and Agriculture

Conference

CIKM '22

Acceptance Rates

CIKM '22 Paper Acceptance Rate: 621 of 2,257 submissions, 28%
Overall Acceptance Rate: 1,861 of 8,427 submissions, 22%

Article Metrics

  • Downloads (Last 12 months): 58
  • Downloads (Last 6 weeks): 6
Reflects downloads up to 13 Nov 2024


Cited By

  • (2024) Performance Analysis of a Hybrid Federated-Centralized Learning Framework. 2024 IEEE International Conference on Electro Information Technology (eIT) (1-6). DOI: 10.1109/eIT60633.2024.10609948. Online publication date: 30-May-2024
  • (2024) GANFAT: Robust federated adversarial learning with label distribution skew. Future Generation Computer Systems, 160 (711-723). DOI: 10.1016/j.future.2024.06.030. Online publication date: Nov-2024
  • (2024) Leveraging Foundation Models for Multi-modal Federated Learning with Incomplete Modality. Machine Learning and Knowledge Discovery in Databases: Applied Data Science Track (401-417). DOI: 10.1007/978-3-031-70378-2_25. Online publication date: 22-Aug-2024
  • (2024) Testing the Robustness of Machine Learning Models Through Mutations. Advances in Computational Collective Intelligence (308-320). DOI: 10.1007/978-3-031-70248-8_24. Online publication date: 8-Sep-2024
  • (2023) Evaluating self-supervised learning via risk decomposition. Proceedings of the 40th International Conference on Machine Learning (8779-8820). DOI: 10.5555/3618408.3618760. Online publication date: 23-Jul-2023
  • (2023) Multimodal Federated Learning: A Survey. Sensors, 23:15 (6986). DOI: 10.3390/s23156986. Online publication date: 6-Aug-2023
