Federated Learning in Adversarial Environments: Testbed Design and Poisoning Resilience in Cybersecurity

HJ Huang, B Iskandarov, M Rahman, HT Otal, MA Canbaz
arXiv preprint arXiv:2409.09794, 2024
This paper presents the design and implementation of a Federated Learning (FL) testbed, focusing on its application in cybersecurity and evaluating its resilience against poisoning attacks. Federated Learning allows multiple clients to collaboratively train a global model while keeping their data decentralized, addressing critical needs for data privacy and security, particularly in sensitive fields like cybersecurity. Our testbed, built using the Flower framework, facilitates experimentation with various FL frameworks, assessing their performance, scalability, and ease of integration. Through a case study on federated intrusion detection systems, we demonstrate the testbed's capabilities in detecting anomalies and securing critical infrastructure without exposing sensitive network data. Comprehensive poisoning tests, targeting both model and data integrity, evaluate the system's robustness under adversarial conditions. Our results show that while federated learning enhances data privacy and distributed learning, it remains vulnerable to poisoning attacks, which must be mitigated to ensure its reliability in real-world applications.
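The abstract does not include implementation details, but the sketch below illustrates how an FL testbed of this kind is typically wired together with the Flower framework: a NumPy-based client that trains a local model and a FedAvg server that aggregates updates. The toy linear model, the `poisoned` flag (simulating a simple label-flipping data-poisoning client), and the addresses are assumptions for illustration, not the authors' actual testbed code.

```python
# Minimal sketch of a Flower-based FL client/server setup (assumed, not the paper's code).
# A toy NumPy linear model stands in for the intrusion-detection model; the
# `poisoned` flag illustrates a simple label-flipping data-poisoning client.
import numpy as np
import flwr as fl


class TestbedClient(fl.client.NumPyClient):
    def __init__(self, x, y, poisoned=False):
        self.x, self.y = x, y
        if poisoned:
            self.y = 1 - self.y          # label flipping: invert binary labels
        self.w = np.zeros(x.shape[1])    # toy linear model weights

    def get_parameters(self, config):
        return [self.w]

    def fit(self, parameters, config):
        self.w = parameters[0]
        # One gradient-descent step on squared error (placeholder local training).
        grad = self.x.T @ (self.x @ self.w - self.y) / len(self.y)
        self.w = self.w - 0.1 * grad
        return [self.w], len(self.y), {}

    def evaluate(self, parameters, config):
        self.w = parameters[0]
        loss = float(np.mean((self.x @ self.w - self.y) ** 2))
        return loss, len(self.y), {}


if __name__ == "__main__":
    # Server process (FedAvg is Flower's default aggregation strategy):
    # fl.server.start_server(server_address="0.0.0.0:8080",
    #                        config=fl.server.ServerConfig(num_rounds=3))
    # Client process (one per participant; set poisoned=True to simulate an attacker):
    rng = np.random.default_rng(0)
    x = rng.normal(size=(100, 10))
    y = (x[:, 0] > 0).astype(float)
    fl.client.start_numpy_client(server_address="127.0.0.1:8080",
                                 client=TestbedClient(x, y, poisoned=False))
```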