

Federated semi-supervised learning with tolerant guidance and powerful classifier in edge scenarios

Published: 25 June 2024

Abstract

Federated Learning is a distributed machine learning method that offers inherent advantages in efficient learning and privacy protection within edge computing scenarios. However, terminal nodes often suffer from insufficient datasets and large amounts of unlabelled data, which reduces the accuracy of models trained through multi-party collaboration. Prior approaches have typically relied on a single pseudo label per unlabelled sample to guide model training, which limits how much knowledge can be extracted from these data. To address this, this paper proposes a federated semi-supervised learning method (FedTG) tailored for image classification. Specifically, we leverage multiple high-probability pseudo labels from unlabelled data in semi-supervised training, rather than relying on a single pseudo label. This mitigates the harm caused by errors in a single pseudo label and enables the model to capture the knowledge within the unlabelled data more fully. Additionally, recognizing the significance of the model classifier (the final neural network layer) in image classification tasks, we exclude classifier updates during training on unlabelled data to maintain optimal classification performance. Experiments conducted on real datasets demonstrate that FedTG effectively enhances the accuracy of traditional Federated Learning models.
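The abstract's central idea, replacing a single hard pseudo label with several high-probability ones, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names, the cutoff `k`, and the probability `threshold` are our assumptions, and the renormalized soft target is one plausible way to combine multiple pseudo labels.

```python
import numpy as np

def topk_pseudo_targets(probs, k=3, threshold=0.1):
    """Build soft pseudo-label targets from the k most probable classes.

    Instead of a single hard pseudo label (argmax), keep up to k classes
    whose predicted probability exceeds `threshold`, then renormalize
    their probabilities into a soft target distribution.
    """
    targets = np.zeros_like(probs)
    for i, p in enumerate(probs):
        top = np.argsort(p)[::-1][:k]            # k most probable classes
        top = top[p[top] >= threshold]           # drop low-confidence classes
        if top.size == 0:                        # fall back to the argmax
            top = np.array([np.argmax(p)])
        targets[i, top] = p[top] / p[top].sum()  # renormalize to sum to 1
    return targets

def soft_cross_entropy(probs, targets, eps=1e-12):
    """Cross-entropy between predicted probabilities and soft targets."""
    return -np.mean(np.sum(targets * np.log(probs + eps), axis=1))
```

The paper's second component, excluding classifier updates when training on unlabelled data, would in a PyTorch-style framework amount to setting `requires_grad = False` on the final layer's parameters for those steps; the exact mechanism used by FedTG is not specified in this abstract.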



Published In

Information Sciences: an International Journal, Volume 662, Issue C
Mar 2024
1436 pages

Publisher

Elsevier Science Inc.

United States


Author Tags

  1. Federated learning
  2. Edge computing
  3. Semi-supervised learning
  4. Image classification

Qualifiers

  • Research-article
