DOI: 10.1145/3560905.3568503
Research article · Open access

TailorFL: Dual-Personalized Federated Learning under System and Data Heterogeneity

Published: 24 January 2023

Abstract

Federated learning (FL) enables distributed mobile devices to collaboratively learn a shared model without exposing their raw data. However, heterogeneous devices usually have limited and differing resources available for model training and communication (system heterogeneity), while diverse data distributions across devices (data heterogeneity) can cause significant performance degradation. In this paper, we propose TailorFL, a dual-personalized FL framework that tailors a submodel for each device, with a personalized structure for training and personalized parameters for local inference. To achieve this, we first derive a personalization principle for data-heterogeneous FL via in-depth empirical studies, and based on these findings we propose a resource-aware, data-directed pruning strategy that makes each device's submodel structure match its resource capability and correlate with its local data distribution. To aggregate the submodels while preserving their dual-personalization properties, we design a scaling-based aggregation strategy that scales parameters by the pruning rates of the submodels and aggregates the overlapped parameters. Moreover, to further promote beneficial collaborations among devices and restrain detrimental ones, we propose a server-assisted model-tuning mechanism that dynamically tunes each device's submodel structure on the server side, using a global view of the similarities among devices' data distributions. Extensive experiments demonstrate that, compared to status quo approaches, TailorFL achieves an average 22% increase in inference accuracy while simultaneously reducing the memory, computation, and communication costs of model training.
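The scaling-based aggregation described in the abstract can be illustrated with a minimal sketch. This is an assumed reconstruction from the abstract's description, not the authors' implementation: each device uploads only the global parameter indices its pruned submodel kept, contributions are scaled by each device's kept fraction (one plausible reading of "scales parameters with the pruning rate"), and overlapped parameters are averaged by accumulated scale.

```python
import numpy as np

def scaling_based_aggregation(global_size, submodels):
    """Aggregate pruned submodels into one global parameter vector.

    submodels: list of (indices, values, pruning_rate) tuples, where
    `indices` marks which global parameters the device's submodel kept.
    Each contribution is scaled by (1 - pruning_rate); overlapped
    parameters are averaged by the total accumulated scale.
    (Illustrative scaling choice, not the paper's exact rule.)
    """
    accum = np.zeros(global_size)   # scaled parameter sums
    weight = np.zeros(global_size)  # accumulated scales per parameter
    for indices, values, pruning_rate in submodels:
        scale = 1.0 - pruning_rate  # fraction of the model the device kept
        accum[indices] += scale * values
        weight[indices] += scale
    out = np.zeros(global_size)
    kept = weight > 0               # parameters covered by at least one device
    out[kept] = accum[kept] / weight[kept]
    return out

# Two devices keep overlapping subsets of a 6-parameter global model.
subs = [
    (np.array([0, 1, 2, 3]), np.array([1.0, 1.0, 1.0, 1.0]), 0.5),
    (np.array([2, 3, 4, 5]), np.array([3.0, 3.0, 3.0, 3.0]), 0.5),
]
agg = scaling_based_aggregation(6, subs)
# overlapped entries (indices 2, 3) become the scale-weighted average of 1.0 and 3.0
```

Non-overlapped parameters simply recover each device's own (rescaled) values, which is what lets each submodel retain its personalized structure while still sharing the overlapped region.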



      Published In

      SenSys '22: Proceedings of the 20th ACM Conference on Embedded Networked Sensor Systems
      November 2022, 1280 pages
      ISBN: 9781450398862
      DOI: 10.1145/3560905
      This work is licensed under a Creative Commons Attribution 4.0 International License.

      Publisher

      Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. federated learning
      2. heterogeneity
      3. personalization
      4. pruning


      Funding Sources

      • Young Elite Scientist Sponsorship Program by CAST
      • Guoqiang Institute, Tsinghua University
      • Natural Science Foundation of Hunan Province, China
      • National Key R&D Program of China
      • National Natural Science Foundation of China
      • Tsinghua University (AIR)-Asiainfo Technologies (China) Inc. Joint Research Center
      • Xiaomi AI Innovation Research
      • Key Research and Development Project of Hunan Province, China
      • Young Talents Plan of Hunan Province, China

      Acceptance Rates

      SenSys '22 paper acceptance rate: 52 of 187 submissions (28%)
      Overall acceptance rate: 174 of 867 submissions (20%)

      Cited By

      • (2024) Personalized Federated Learning Incorporating Adaptive Model Pruning at the Edge. Electronics 13, 9 (article 1738). DOI: 10.3390/electronics13091738. Online publication date: 1-May-2024.
      • (2024) FedConv: A Learning-on-Model Paradigm for Heterogeneous Federated Clients. In Proceedings of the 22nd Annual International Conference on Mobile Systems, Applications and Services (MobiSys '24), 398-411. DOI: 10.1145/3643832.3661880. Online publication date: 3-Jun-2024.
      • (2024) EchoPFL. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8, 1, 1-22. DOI: 10.1145/3643560. Online publication date: 6-Mar-2024.
      • (2024) Fed-RAC: Resource-Aware Clustering for Tackling Heterogeneity of Participants in Federated Learning. IEEE Transactions on Parallel and Distributed Systems 35, 7, 1207-1220. DOI: 10.1109/TPDS.2024.3379933. Online publication date: Jul-2024.
      • (2024) SESAME: A Resource Expansion and Sharing Scheme for Multiple Edge Services Providers. IEEE/ACM Transactions on Networking 32, 4, 3189-3204. DOI: 10.1109/TNET.2024.3377908. Online publication date: Aug-2024.
      • (2024) An Incentive Mechanism for Long-Term Federated Learning in Autonomous Driving. IEEE Internet of Things Journal 11, 9, 15642-15655. DOI: 10.1109/JIOT.2023.3348498. Online publication date: 1-May-2024.
      • (2024) RelayRec: Empowering Privacy-Preserving CTR Prediction via Cloud-Device Relay Learning. In 2024 23rd ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN), 188-199. DOI: 10.1109/IPSN61024.2024.00020. Online publication date: 13-May-2024.
      • (2024) ArtFL: Exploiting Data Resolution in Federated Learning for Dynamic Runtime Inference via Multi-Scale Training. In 2024 23rd ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN), 27-38. DOI: 10.1109/IPSN61024.2024.00007. Online publication date: 13-May-2024.
      • (2024) Model optimization techniques in personalized federated learning. Expert Systems with Applications: An International Journal 243, C. DOI: 10.1016/j.eswa.2023.122874. Online publication date: 25-Jun-2024.
      • (2024) Federated Learning with Flexible Architectures. In Machine Learning and Knowledge Discovery in Databases: Research Track, 143-161. DOI: 10.1007/978-3-031-70344-7_9. Online publication date: 22-Aug-2024.
