DOI: 10.1145/3605573.3605582
Research article

ASFL: Adaptive Semi-asynchronous Federated Learning for Balancing Model Accuracy and Total Latency in Mobile Edge Networks

Published: 13 September 2023

Abstract

Federated learning (FL) is a new paradigm for privacy-preserving learning. It is particularly appealing in mobile edge networks (MENs), where devices collectively train a global model on their own local data. However, FL algorithms routinely struggle to satisfy different training-task preferences in terms of total latency and model accuracy, owing to factors such as the straggler effect, data heterogeneity, communication bottlenecks, and device mobility. To this end, we propose an Adaptive Semi-asynchronous Federated Learning (ASFL) framework, which adaptively balances total latency and model accuracy according to the task preferences in MENs. Specifically, ASFL operates in two stages: i) Device selection stage: in each global round, ASFL selects a set of devices that maximizes model accuracy while mitigating data heterogeneity and communication bottlenecks; ii) Training stage: we first define a latency-accuracy objective value that models the trade-off between latency and accuracy; then, in each global round, a deep reinforcement learning (DRL) algorithm based on soft actor-critic with discrete actions intelligently derives the number of picked devices (i.e., participants in the current global aggregation) and the lag tolerance so as to maximize the latency-accuracy objective value. Extensive experiments show that ASFL improves the latency-accuracy objective value by up to 94% compared with three state-of-the-art FL frameworks.
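To make the semi-asynchronous aggregation idea concrete, the snippet below is a minimal, hypothetical sketch, not the authors' implementation: it assumes the server keeps a buffer of client updates ordered by arrival, aggregates the K earliest updates each round, drops any update staler than the lag tolerance tau, and down-weights the rest by staleness. All names (semi_async_round, staleness_weight, K, tau, the weighting rule) are illustrative assumptions; in ASFL, K and tau would instead be chosen per round by the SAC-discrete DRL agent.

```python
# Hypothetical sketch of one semi-asynchronous aggregation round with lag tolerance.
# Not the ASFL implementation; names and the staleness weighting are assumptions.
import numpy as np

def staleness_weight(staleness, tau):
    # Updates staler than the lag tolerance tau are dropped (weight 0);
    # fresher updates are down-weighted as 1 / (1 + staleness).
    return 0.0 if staleness > tau else 1.0 / (1.0 + staleness)

def semi_async_round(global_model, pending_updates, K, tau):
    # Aggregate the K earliest-arriving client updates, discounted by staleness.
    # pending_updates: list of (model_delta, staleness) tuples, ordered by arrival.
    picked, rest = pending_updates[:K], pending_updates[K:]
    weights = np.array([staleness_weight(s, tau) for _, s in picked])
    if weights.sum() == 0.0:            # every picked update was too stale
        return global_model, rest
    weights = weights / weights.sum()   # normalize over accepted updates
    aggregated = sum(w * delta for w, (delta, _) in zip(weights, picked))
    return global_model + aggregated, rest

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = np.zeros(4)
    # Simulated buffer: each entry is (model delta, staleness in global rounds).
    buffer = [(rng.normal(size=4), s) for s in (0, 1, 3, 0, 5)]
    # In ASFL, K and tau would be chosen each round by a SAC-discrete DRL agent;
    # here they are fixed constants purely for illustration.
    model, buffer = semi_async_round(model, buffer, K=3, tau=2)
    print(model, len(buffer))
```

The sketch only illustrates the trade-off the paper targets: a larger K or a looser tau shortens the wait per global round but admits staler, potentially less useful updates, which is exactly the balance the latency-accuracy objective value is meant to capture.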



      Published In

      ICPP '23: Proceedings of the 52nd International Conference on Parallel Processing
      August 2023
      858 pages
ISBN: 9798400708435
DOI: 10.1145/3605573

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. Deep reinforcement learning
      2. Federated learning
      3. Lag tolerance
      4. Semi-asynchronous

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Funding Sources

      • NSFC
      • Shanghai Pujiang Program
      • RGC RIF
      • RGC GRF

      Conference

      ICPP 2023
      ICPP 2023: 52nd International Conference on Parallel Processing
      August 7 - 10, 2023
Salt Lake City, UT, USA

      Acceptance Rates

      Overall Acceptance Rate 91 of 313 submissions, 29%

