K Asynchronous Federated Learning with Cosine Similarity Based Aggregation on Non-IID Data

  • Conference paper
Algorithms and Architectures for Parallel Processing (ICA3PP 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14492)

Abstract

In asynchronous federated learning, each device updates the model independently as soon as it becomes available, without waiting for other devices. However, this approach faces two critical challenges, non-IID data and stale updates, both of which can degrade model performance. To address these challenges, we propose a novel framework called Class-balanced K-Asynchronous Federated Learning (CKAFL). The framework takes a two-pronged approach, handling staleness on the server side and non-IID data on the client side. On the server side, we present a novel evaluation method that employs cosine similarity to measure the staleness of a delayed gradient and uses this measure to optimize the aggregation algorithm. On the client side, we introduce a class-balanced loss function to mitigate the effect of non-IID data. To evaluate the effectiveness of CKAFL, we conduct extensive experiments on three commonly used datasets. The results show that even when a large proportion of devices submit stale updates, CKAFL outperforms the baselines in both non-IID and IID settings.
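
The abstract only sketches the server-side idea. As a rough illustration, and not the authors' exact CKAFL procedure, a cosine-similarity-based staleness weighting for asynchronous aggregation could look like the following Python/NumPy sketch; the weight function, the clipping of negative similarities, and the use of a fresh global update as the reference direction are assumptions made for this example.

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        # Cosine similarity between two flattened gradient vectors.
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom > 0 else 0.0

    def staleness_weight(delayed_grad: np.ndarray, reference_grad: np.ndarray) -> float:
        # Illustrative choice (assumption): map similarity to [0, 1].
        # A delayed gradient that still points along the current descent
        # direction keeps a high weight; a contradictory (very stale) one
        # is pushed towards zero.
        return max(cosine_similarity(delayed_grad, reference_grad), 0.0)

    def aggregate(global_model, client_grads, reference_grad, lr=0.1):
        # Weighted average of the first K client gradients to arrive.
        weights = np.array([staleness_weight(g, reference_grad) for g in client_grads])
        if weights.sum() == 0.0:
            return global_model          # all updates judged too stale
        weights = weights / weights.sum()
        update = sum(w * g for w, g in zip(weights, client_grads))
        return global_model - lr * update

    # Toy run: two fresh clients and one stale client whose gradient
    # points against the current descent direction.
    rng = np.random.default_rng(0)
    model = rng.normal(size=10)
    fresh_direction = rng.normal(size=10)
    client_grads = [fresh_direction + 0.1 * rng.normal(size=10),
                    fresh_direction + 0.1 * rng.normal(size=10),
                    -fresh_direction]    # receives weight ~0
    model = aggregate(model, client_grads, reference_grad=fresh_direction)

In CKAFL itself this staleness measure is combined with the server-side aggregation rule and a class-balanced loss on the clients; the sketch only conveys the intuition that a delayed gradient should contribute in proportion to how well it still aligns with a fresh update direction.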

Acknowledgments

This work was supported in part by the NSFC under Grant 62072069, and in part by Hisense Group Holdings Company.

Author information

Corresponding author

Correspondence to Heng Qi.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Wu, S., Zhou, Y., Gao, X., Qi, H. (2024). K Asynchronous Federated Learning with Cosine Similarity Based Aggregation on Non-IID Data. In: Tari, Z., Li, K., Wu, H. (eds) Algorithms and Architectures for Parallel Processing. ICA3PP 2023. Lecture Notes in Computer Science, vol 14492. Springer, Singapore. https://doi.org/10.1007/978-981-97-0811-6_26

  • DOI: https://doi.org/10.1007/978-981-97-0811-6_26

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-97-0810-9

  • Online ISBN: 978-981-97-0811-6

  • eBook Packages: Computer Science, Computer Science (R0)
