

A Model Personalization-based Federated Learning Approach for Heterogeneous Participants with Variability in the Dataset

Published: 07 December 2023

Abstract

Federated learning is an emerging paradigm that enables privacy-preserving collaboration among multiple participants, who jointly train a model without sharing their private data. Participants with heterogeneous devices and networking resources, however, decelerate training and aggregation. Each participant's dataset also exhibits a high level of variability, meaning that its characteristics change over time. Moreover, preserving the personalized characteristics of the local dataset on each participant's device is a prerequisite for achieving good performance. This article proposes a model personalization-based federated learning approach that operates in the presence of variability in the local datasets and accommodates participants with heterogeneous devices and networking resources. The central server initiates the approach and constructs a base model that executes on most participants. The approach simultaneously learns the personalized model and handles the variability in the datasets. For devices where the base model does not fit directly, we propose a knowledge distillation-based early-halting approach; early halting speeds up model training. We also propose an aperiodic global update scheme that lets participants share their updated parameters with the server aperiodically. Finally, we perform a real-world study to evaluate the performance of the approach and compare it with state-of-the-art techniques.
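The abstract only sketches the aperiodic-update idea, so the following is a hedged illustration rather than the paper's actual algorithm: a minimal FedAvg-style loop in which a client uploads its parameters only when they have drifted from the global model beyond a threshold. All names, the drift threshold, and the simplification of local training to least-squares gradient descent are assumptions made here for the sketch.

```python
import numpy as np

def local_update(params, data, lr=0.1, epochs=5):
    # Hypothetical local training: a linear model fit by gradient descent.
    X, y = data
    w = params.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # least-squares gradient
        w -= lr * grad
    return w

def aperiodic_round(global_w, clients, threshold=1e-3):
    # Each client trains locally but uploads only when its parameters have
    # drifted beyond `threshold` from the global model (aperiodic sharing).
    updates, weights = [], []
    for data in clients:
        w = local_update(global_w, data)
        if np.linalg.norm(w - global_w) > threshold:
            updates.append(w)
            weights.append(len(data[1]))  # weight by local sample count
    if not updates:
        return global_w  # no client reported this round
    # FedAvg-style weighted average over the clients that did report.
    return np.average(np.stack(updates), axis=0,
                      weights=np.asarray(weights, dtype=float))

# Toy run: two clients whose local data both follow y = 2x.
rng = np.random.default_rng(0)
clients = []
for n in (20, 40):
    X = rng.normal(size=(n, 1))
    clients.append((X, 2.0 * X[:, 0]))

w = np.zeros(1)
for _ in range(30):
    w = aperiodic_round(w, clients)
```

Once the global model is near the clients' common optimum, local drift falls below the threshold and uploads stop on their own, which is the communication saving the aperiodic scheme aims for.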


Cited By

  • (2024) Fair-select: a federated learning approach to ensure fairness in selection of participants. Multimedia Tools and Applications. DOI: 10.1007/s11042-024-20476-5. Online publication date: 29 November 2024.


Published In

ACM Transactions on Sensor Networks, Volume 20, Issue 1 (January 2024), 717 pages.
EISSN: 1550-4867
DOI: 10.1145/3618078

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Received: 22 October 2022
Revised: 29 July 2023
Accepted: 07 October 2023
Online AM: 06 November 2023
Published: 07 December 2023
Published in TOSN Volume 20, Issue 1

Author Tags

1. Dataset variability
2. Early halting
3. Federated learning
4. Personalization

Qualifiers

• Research-article

