DOI: 10.1007/978-3-031-70903-6_18
Article

Exploiting Layerwise Feature Representation Similarity For Backdoor Defence in Federated Learning

Published: 16 September 2024

Abstract

Federated learning is an emerging paradigm for distributed machine learning that enables clients to collaboratively train models while maintaining data privacy. However, this approach introduces vulnerabilities, notably the risk of backdoor attacks, in which compromised models perform normally on clean data but behave maliciously on poisoned inputs. A range of defences has been proposed in the literature, based on robust aggregation, differential privacy, certified robustness, and clustering or trust-score-based approaches. In this work, we introduce FedAvgCKA, a novel defence mechanism that leverages the learned representations of neural networks to distinguish between benign and malicious client submissions. We demonstrate the effectiveness of FedAvgCKA across various federated learning scenarios and datasets, showing that it maintains high main-task accuracy and significantly reduces backdoor attack success rates even in non-iid settings.
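The abstract and the author tags indicate that the defence compares clients' layerwise feature representations using centered kernel alignment (CKA). As a rough illustration only, below is a minimal sketch of layerwise linear CKA scoring in Python. It assumes the server holds a small clean validation batch, a trusted reference model's activations, and a fixed similarity threshold; these elements, the helper names, and the filtering rule are assumptions for illustration and are not taken from the paper.

```python
# Minimal sketch: score client models by the layerwise linear CKA similarity of their
# activations to a trusted reference, on a server-held clean validation batch.
# The threshold-based filtering below is illustrative, not the paper's exact rule.
import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """Linear centered kernel alignment between two activation matrices of shape
    (n_examples, n_features); values near 1 mean highly similar representations."""
    x = x - x.mean(axis=0, keepdims=True)  # centre each feature column
    y = y - y.mean(axis=0, keepdims=True)
    hsic = np.linalg.norm(y.T @ x, ord="fro") ** 2
    norm_x = np.linalg.norm(x.T @ x, ord="fro")
    norm_y = np.linalg.norm(y.T @ y, ord="fro")
    return float(hsic / (norm_x * norm_y + 1e-12))

def layerwise_similarity(client_acts, reference_acts) -> float:
    """Average linear CKA over corresponding layers of a client model and a
    reference model, both evaluated on the same validation batch."""
    scores = [linear_cka(c, r) for c, r in zip(client_acts, reference_acts)]
    return float(np.mean(scores))

def select_benign(client_activations: dict, reference_acts, threshold: float = 0.8):
    """Keep clients whose layerwise representations stay close to the reference.
    The threshold value is a placeholder, not one reported in the paper."""
    return [cid for cid, acts in client_activations.items()
            if layerwise_similarity(acts, reference_acts) >= threshold]
```

In practice, the per-layer activations would be collected with forward hooks on each submitted model; which layers are compared and how the similarity scores feed into aggregation are design choices not specified in this excerpt.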




Published In

Computer Security – ESORICS 2024: 29th European Symposium on Research in Computer Security, Bydgoszcz, Poland, September 16–20, 2024, Proceedings, Part IV
Sep 2024
494 pages
ISBN: 978-3-031-70902-9
DOI: 10.1007/978-3-031-70903-6
Editors: Joaquin Garcia-Alfaro, Rafał Kozik, Michał Choraś, Sokratis Katsikas

Publisher

Springer-Verlag

Berlin, Heidelberg

Publication History

Published: 16 September 2024

Author Tags

  1. federated learning
  2. backdoor attack
  3. centered kernel alignment

Qualifiers

  • Article
