CPPer-FL: Clustered Parallel Training for Efficient Personalized Federated Learning

Published: 15 February 2024

Abstract

This paper presents CPPer-FL, a clustered parallel training algorithm for personalized federated learning (Per-FL). CPPer-FL improves the communication and training efficiency of Per-FL from two perspectives: reducing the burden on the central server and lowering interaction idling delay. It adopts a client-edge-center learning architecture that offloads the central server's model-aggregation and communication burden to distributed edge servers. CPPer-FL also redesigns the cascading model synchronization and updating procedure of conventional Per-FL into a parallel one, improving interaction efficiency during training. Further, two approaches tailor the proposed hierarchical architecture to Per-FL: similarity-based clustering for client-edge association and personalized model aggregation for parallel model updating, so that clients' personal features are preserved throughout training. The convergence of CPPer-FL is formally analyzed and proved. Evaluation results validate its improvements in communication efficiency, model convergence, and model accuracy.
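The pipeline sketched in the abstract — similarity-based client-edge clustering, parallel aggregation at the edge servers, and a personalized blend of local, edge, and center models — can be made concrete with a short sketch. The Python below is a minimal illustration based only on the abstract's description: the k-means-style clustering, the FedAvg-style weighted averaging, and the mixing weights `alpha`/`beta` are stand-in assumptions, not the authors' actual algorithm.

```python
# Minimal sketch of one CPPer-FL-style training round, as described in the
# abstract. All names (cluster_clients, edge_aggregate, personalized_update,
# alpha, beta) are illustrative assumptions, not the paper's implementation.
import numpy as np

def cluster_clients(client_models, num_edges, iters=10, seed=0):
    """Similarity-based client-edge association: group clients whose models
    are close and assign each cluster to one edge server. Plain k-means on
    the flattened models stands in for the paper's similarity metric."""
    X = np.stack(client_models)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=num_edges, replace=False)]
    for _ in range(iters):
        # Assign each client to its nearest cluster center.
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        assign = dists.argmin(axis=1)
        for k in range(num_edges):
            if np.any(assign == k):
                centers[k] = X[assign == k].mean(axis=0)
    return assign

def edge_aggregate(models, weights):
    """FedAvg-style weighted average, run at an edge server over its cluster."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * m for wi, m in zip(w, models))

def personalized_update(local, edge, center, alpha=0.5, beta=0.3):
    """Personalized aggregation: blend the local model with the edge and
    center models instead of overwriting it, so personal features survive
    global synchronization. alpha and beta are illustrative weights."""
    return alpha * local + beta * edge + (1.0 - alpha - beta) * center

def cpper_fl_round(local_models, assign, num_edges):
    """One communication round: edge servers aggregate their own clusters in
    parallel (offloading the center), the center averages the edge models,
    and every client receives a personalized update."""
    edge_models = []
    for k in range(num_edges):
        cluster = [m for m, a in zip(local_models, assign) if a == k]
        # Guard against an empty cluster by falling back to the global mean.
        edge_models.append(edge_aggregate(cluster, [1.0] * len(cluster))
                           if cluster else np.mean(local_models, axis=0))
    center_model = edge_aggregate(edge_models, [1.0] * num_edges)
    return [personalized_update(m, edge_models[a], center_model)
            for m, a in zip(local_models, assign)]

if __name__ == "__main__":
    # Toy run: 8 clients with 4-dimensional "models", 2 edge servers.
    rng = np.random.default_rng(1)
    models = [rng.normal(size=4) for _ in range(8)]
    assign = cluster_clients(models, num_edges=2)
    models = cpper_fl_round(models, assign, num_edges=2)
    print(assign, models[0])
```

In the actual system each edge server would run its aggregation concurrently and exchange only its own cluster's model with the center, which is where the claimed reduction in central-server burden and interaction idling delay would come from.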



Published In

IEEE Transactions on Mobile Computing, Volume 23, Issue 10, Oct. 2024, 1160 pages

Publisher

IEEE Educational Activities Department, United States

Qualifiers

  • Research-article
