Enhancing Federated Learning Robustness Using Data-Agnostic Model Pruning

Published: 28 May 2023

Abstract

Federated learning enables multiple data owners with a common objective to participate in a machine learning task without sharing their raw data. At each round, clients train local models on their own data and upload the model parameters to update the global model. Recent studies have shown that this multi-agent form of machine learning is prone to adversarial manipulation: Byzantine attackers masquerading as benign clients can stealthily disrupt or destroy the learning process. In this paper, we propose FLAP, a post-aggregation model pruning technique that enhances the Byzantine robustness of federated learning by disabling the malicious and dormant components of the learned neural network models. Our technique is data-agnostic, requiring clients to submit neither their datasets nor their training outputs, and is thus well aligned with the data locality of federated learning. FLAP is performed by the server right after aggregation, which makes it compatible with arbitrary aggregation algorithms and existing defensive techniques. Our empirical study demonstrates the effectiveness of FLAP under various settings: it reduces the error rate by up to 10.2% against state-of-the-art adversarial models, and increases the average accuracy by up to 22.1% under different adversarial settings, mitigating adversarial impacts while preserving learning fidelity.
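To make the post-aggregation workflow concrete, the sketch below shows where a data-agnostic pruning pass could sit in a FedAvg-style server loop. It is a minimal illustration under stated assumptions, not the paper's implementation: the dormancy criterion (per-unit L1 weight norm against a fixed threshold) and the names fedavg and prune_dormant_units are hypothetical stand-ins for FLAP's actual mechanism.

```python
# A minimal sketch of server-side, post-aggregation pruning in a FedAvg loop.
# Hypothetical throughout: fedavg, prune_dormant_units, and the L1-norm
# dormancy heuristic stand in for FLAP's actual criterion, which is not
# reproduced here.
import torch
import torch.nn as nn

def fedavg(client_states):
    """Average the clients' state_dicts parameter-wise (plain FedAvg)."""
    return {
        key: torch.stack([s[key].float() for s in client_states]).mean(dim=0)
        for key in client_states[0]
    }

def prune_dormant_units(state, weight_key, threshold=1e-2):
    """Zero out output units whose incoming-weight L1 norm falls below threshold.

    The rule is data-agnostic: it inspects only the aggregated weights, never
    client data or activations. Units with near-zero fan-in are treated as
    dormant and disabled, removing a hiding place for injected behavior.
    """
    weight = state[weight_key]                     # shape: (out_units, in_units)
    dormant = weight.abs().sum(dim=1) < threshold  # per-output-unit L1 norm
    weight[dormant] = 0.0                          # disable the unit's weights
    bias_key = weight_key.replace("weight", "bias")
    if bias_key in state:
        state[bias_key][dormant] = 0.0             # and its bias
    return state

# Usage: aggregate first, then prune. The pruning step does not depend on
# which aggregation algorithm produced the global state.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
client_states = [model.state_dict() for _ in range(3)]  # stand-ins for uploads
global_state = prune_dormant_units(fedavg(client_states), "0.weight")
model.load_state_dict(global_state)
```

Because this step runs only on the server and inspects only the aggregated parameters, it composes with any aggregation rule, matching the compatibility claim in the abstract.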

Cited By

  • LoDen: Making Every Client in Federated Learning a Defender Against the Poisoning Membership Inference Attacks. In: Proceedings of the 2023 ACM Asia Conference on Computer and Communications Security, pp. 122–135 (2023). https://doi.org/10.1145/3579856.3590334. Online publication date: 10 July 2023.

Published In

Advances in Knowledge Discovery and Data Mining: 27th Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2023, Osaka, Japan, May 25–28, 2023, Proceedings, Part II
May 2023
562 pages
ISBN: 978-3-031-33376-7
DOI: 10.1007/978-3-031-33377-4
  • Editors:
  • Hisashi Kashima,
  • Tsuyoshi Ide,
  • Wen-Chih Peng

Publisher

Springer-Verlag

Berlin, Heidelberg
