FedAR: Addressing Client Unavailability in Federated Learning with Local Update Approximation and Rectification

  • Conference paper
  • First Online:
Machine Learning and Knowledge Discovery in Databases. Research Track (ECML PKDD 2024)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 14943)

Abstract

Federated learning (FL) enables clients to collaboratively train machine learning models under the coordination of a server in a privacy-preserving manner. One of the main challenges in FL is that the server may not receive local updates from every client in every round, due to client resource limitations and intermittent network connectivity. Such unavailable clients severely degrade overall FL performance. In this paper, we propose FedAR, a novel client update Approximation and Rectification algorithm for FL that addresses the client unavailability issue. FedAR involves all clients in the global model update, yielding a high-quality global model on the server that also furnishes accurate predictions for each client. To this end, the server uses the latest update from each client as a surrogate for its current update, then assigns a different weight to each client’s surrogate update when deriving the global model, guaranteeing contributions from both available and unavailable clients. Our theoretical analysis proves that FedAR achieves optimal convergence rates on non-IID datasets for both convex and non-convex smooth loss functions. Extensive empirical studies show that FedAR comprehensively outperforms state-of-the-art FL baselines, including FedAvg, MIFA, FedVARP, and SCAFFOLD, in terms of training loss, test accuracy, and bias mitigation. Moreover, FedAR maintains this strong performance with large numbers of clients under severe client unavailability.
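
To make the aggregation idea above concrete, here is a minimal Python sketch of one FedAR-style server round, based only on the description in this abstract. It is not the authors' implementation: the NumPy-array update representation, the function name fedar_aggregate, and the uniform example weights are illustrative assumptions, and the paper derives its own per-client weighting scheme that is not reproduced here.

```python
import numpy as np

def fedar_aggregate(global_model, cached_updates, fresh_updates, weights):
    """Sketch of one FedAR-style aggregation round (assumptions noted above).

    cached_updates: dict client_id -> latest update ever received from that
                    client; serves as the surrogate when the client is absent.
    fresh_updates:  dict client_id -> update from clients available this round.
    weights:        dict client_id -> aggregation weight (hypothetical; the
                    paper's rectification weights are derived in the full text).
    """
    # Refresh the cache: fresh updates replace the stale surrogates of
    # clients that reported in this round.
    cached_updates = {**cached_updates, **fresh_updates}

    # Aggregate over ALL clients: available ones contribute fresh updates,
    # unavailable ones contribute their latest cached (surrogate) updates.
    total_weight = sum(weights[c] for c in cached_updates)
    weighted_sum = sum(weights[c] * u for c, u in cached_updates.items())

    return global_model + weighted_sum / total_weight, cached_updates

# Hypothetical usage: three clients, only clients 0 and 2 available this round.
model = np.zeros(4)
cache = {0: np.ones(4), 1: np.full(4, -1.0), 2: np.ones(4)}
fresh = {0: np.full(4, 0.5), 2: np.full(4, 0.5)}
model, cache = fedar_aggregate(model, cache, fresh, {0: 1.0, 1: 1.0, 2: 1.0})
```

The point the sketch illustrates is that, unlike sampling-based schemes, no client's contribution is dropped: an unavailable client's most recent update keeps influencing the global model until a fresher one arrives.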

Notes

  1. For clarity, we recommend viewing all figures of experimental results in color.

References

  1. Abdelmoniem, A.M., Sahu, A.N., Canini, M., Fahmy, S.A.: REFL: resource-efficient federated learning. In: Proceedings of the Eighteenth European Conference on Computer Systems, pp. 215–232 (2023)

  2. Bonawitz, K., et al.: Towards federated learning at scale: system design. Proc. Mach. Learn. Syst. 1, 374–388 (2019)

  3. Briggs, C., Fan, Z., Andras, P.: Federated learning with hierarchical clustering of local updates to improve training on non-IID data. In: 2020 International Joint Conference on Neural Networks (IJCNN), pp. 1–9. IEEE (2020)

  4. Brisimi, T.S., Chen, R., Mela, T., Olshevsky, A., Paschalidis, I.C., Shi, W.: Federated learning of predictive models from federated electronic health records. Int. J. Med. Informatics 112, 59–67 (2018)

  5. Chen, S., Li, B.: Towards optimal multi-modal federated learning on non-IID data with hierarchical gradient blending. In: IEEE INFOCOM 2022 - IEEE Conference on Computer Communications, pp. 1469–1478. IEEE (2022)

  6. Cho, Y.J., Gupta, S., Joshi, G., Yağan, O.: Bandit-based communication-efficient client selection strategies for federated learning. In: 2020 54th Asilomar Conference on Signals, Systems, and Computers, pp. 1066–1069. IEEE (2020)

  7. Cho, Y.J., Wang, J., Joshi, G.: Towards understanding biased client selection in federated learning. In: International Conference on Artificial Intelligence and Statistics, pp. 10351–10375. PMLR (2022)

  8. Fraboni, Y., Vidal, R., Kameni, L., Lorenzi, M.: Clustered sampling: low-variance and improved representativity for clients selection in federated learning. In: International Conference on Machine Learning, pp. 3407–3416. PMLR (2021)

  9. Ghosh, A., Chung, J., Yin, D., Ramchandran, K.: An efficient framework for clustered federated learning. Adv. Neural. Inf. Process. Syst. 33, 19586–19597 (2020)

  10. Gu, X., Huang, K., Zhang, J., Huang, L.: Fast federated learning in the presence of arbitrary device unavailability. Adv. Neural. Inf. Process. Syst. 34, 12052–12064 (2021)

  11. Hard, A., et al.: Federated learning for mobile keyboard prediction. arXiv preprint arXiv:1811.03604 (2018)

  12. Horvath, S., Laskaridis, S., Almeida, M., Leontiadis, I., Venieris, S., Lane, N.: FjORD: fair and accurate federated learning under heterogeneous targets with ordered dropout. Adv. Neural. Inf. Process. Syst. 34, 12876–12889 (2021)

  13. Huang, W., Ye, M., Du, B.: Learn from others and be yourself in heterogeneous federated learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10143–10153 (2022)

  14. Jhunjhunwala, D., Sharma, P., Nagarkatti, A., Joshi, G.: FedVARP: tackling the variance due to partial client participation in federated learning. In: The 38th Conference on Uncertainty in Artificial Intelligence (2022)

  15. Kairouz, P., et al.: Advances and open problems in federated learning. Found. Trends Mach. Learn. 14(1–2), 1–210 (2021)

  16. Karimireddy, S.P., Kale, S., Mohri, M., Reddi, S., Stich, S., Suresh, A.T.: SCAFFOLD: stochastic controlled averaging for federated learning. In: International Conference on Machine Learning, pp. 5132–5143. PMLR (2020)

  17. Krizhevsky, A.: Learning multiple layers of features from tiny images. University of Toronto (May 2012)

  18. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998). https://doi.org/10.1109/5.726791

  19. Li, T., Sanjabi, M., Beirami, A., Smith, V.: Fair resource allocation in federated learning. arXiv preprint arXiv:1905.10497 (2019)

  20. Luo, B., Xiao, W., Wang, S., Huang, J., Tassiulas, L.: Tackling system and statistical heterogeneity for federated learning with adaptive client sampling. In: IEEE INFOCOM 2022-IEEE Conference on Computer Communications, pp. 1739–1748. IEEE (2022)

  21. McMahan, B., Moore, E., Ramage, D., Hampson, S., y Arcas, B.A.: Communication-efficient learning of deep networks from decentralized data. In: Artificial Intelligence and Statistics, pp. 1273–1282. PMLR (2017)

  22. Mendieta, M., Yang, T., Wang, P., Lee, M., Ding, Z., Chen, C.: Local learning matters: rethinking data heterogeneity in federated learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8397–8406 (2022)

  23. Mohri, M., Sivek, G., Suresh, A.T.: Agnostic federated learning. In: International Conference on Machine Learning, pp. 4615–4625. PMLR (2019)

  24. Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., Ng, A.Y.: Reading digits in natural images with unsupervised feature learning. In: NIPS Workshop on Deep Learning and Unsupervised Feature Learning 2011 (2011). http://ufldl.stanford.edu/housenumbers/nips2011_housenumbers.pdf

  25. Shapley, L.S.: A value for n-person games (1953)

  26. Shu, J., Zhang, W., Zhou, Y., Cheng, Z., Yang, L.T.: FLAS: computation and communication efficient federated learning via adaptive sampling. IEEE Trans. Netw. Sci. Eng. 9(4), 2003–2014 (2021)

  27. Soltani, B., Zhou, Y., Haghighi, V., Lui, J.: A survey of federated evaluation in federated learning. arXiv preprint arXiv:2305.08070 (2023)

  28. Song, T., Tong, Y., Wei, S.: Profit allocation for federated learning. In: 2019 IEEE International Conference on Big Data (Big Data), pp. 2577–2586. IEEE (2019)

  29. Wang, G., Dang, C.X., Zhou, Z.: Measure contribution of participants in federated learning. In: 2019 IEEE International Conference on Big Data (Big Data), pp. 2597–2604. IEEE (2019)

  30. Wang, S., Ji, M.: A unified analysis of federated learning with arbitrary client participation. arXiv preprint arXiv:2205.13648 (2022)

  31. Wang, Z., Fan, X., Qi, J., Jin, H., Yang, P., Shen, S., Wang, C.: FedGS: federated graph-based sampling with arbitrary client availability. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 37, pp. 10271–10278 (2023)

  32. Yan, Y., et al.: Federated optimization under intermittent client availability. INFORMS J. Comput. 36(1), 185–202 (2024)

  33. Yu, S., Lin, C., Zhang, X., Guo, L.: Defending against cross-technology jamming in heterogeneous IoT systems. In: IEEE 42nd International Conference on Distributed Computing Systems (ICDCS), pp. 702–712 (2022)

  34. Yu, S., Zhang, X., Huang, P., Guo, L., Cheng, L., Wang, K.: AuthCTC: defending against waveform emulation attack in heterogeneous IoT environments. In: Proceedings of the 15th ACM Asia Conference on Computer and Communications Security, pp. 20–32 (2020)

  35. Zhang, X., Guo, L., Li, M., Fang, Y.: Social-enabled data offloading via mobile participation – a game-theoretical approach. In: 2016 IEEE Global Communications Conference (GLOBECOM), pp. 1–6 (2016)

  36. Zhang, X., Huang, P., Guo, L., Fang, Y.: Hide and seek: waveform emulation attack and defense in cross-technology communication. In: 2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS), pp. 1117–1126 (2019)

  37. Zhang, X., Yu, S., Zhou, H., Huang, P., Guo, L., Li, M.: Signal emulation attack and defense for smart home IoT. IEEE Trans. Dependable Secure Comput. (2022)

  38. Zhou, H., Wang, S., Jiang, C., Zhang, X., Guo, L., Yuan, Y.: Waste not, want not: service migration-assisted federated intelligence for multi-modality mobile edge computing. In: Proceedings of the Twenty-Fourth International Symposium on Theory, Algorithmic Foundations, and Protocol Design for Mobile Networks and Mobile Computing, pp. 211–220 (2023)

  39. Zhou, H., Yu, S., Zhang, X., Guo, L., Lorenzo, B.: DQN-based QoE enhancement for data collection in heterogeneous IoT network. In: 2022 IEEE 19th International Conference on Mobile Ad Hoc and Smart Systems (MASS), pp. 188–194 (2022)

  40. Zhou, P., Xu, H., Lee, L.H., Fang, P., Hui, P.: Are you left out? An efficient and fair federated learning for personalized profiles on wearable devices of inferior networking conditions. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 6(2), 1–25 (2022)

  41. Zhu, L., Lin, H., Lu, Y., Lin, Y., Han, S.: Delayed gradient averaging: tolerate the communication latency for federated learning. Adv. Neural. Inf. Process. Syst. 34, 29995–30007 (2021)


Acknowledgment

The work of X. Zhang is partially supported by the National Science Foundation under Grant Number: CCF-2312617. The work of S. Chakraborty is partially supported by the National Science Foundation under Grant Number: IIS-2143424 (NSF CAREER Award).

Author information

Corresponding author

Correspondence to Xiaonan Zhang.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 522 KB)


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Jiang, C., Zhou, H., Zhang, X., Chakraborty, S. (2024). FedAR: Addressing Client Unavailability in Federated Learning with Local Update Approximation and Rectification. In: Bifet, A., Davis, J., Krilavičius, T., Kull, M., Ntoutsi, E., Žliobaitė, I. (eds) Machine Learning and Knowledge Discovery in Databases. Research Track. ECML PKDD 2024. Lecture Notes in Computer Science, vol. 14943. Springer, Cham. https://doi.org/10.1007/978-3-031-70352-2_11

  • DOI: https://doi.org/10.1007/978-3-031-70352-2_11

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-70351-5

  • Online ISBN: 978-3-031-70352-2

  • eBook Packages: Computer Science, Computer Science (R0)
