Abstract
As Deep Neural Networks (DNNs) grow in complexity, so does their parameter count, resulting in prolonged training time. While various distributed training strategies have been proposed to speed up training, their efficiency is often limited by the frequent communication required between computational nodes. Numerous gradient compression techniques (e.g., sparsification, quantization, low-rank approximation) have been introduced to reduce this communication cost. However, these methods focus mainly on the numerical characteristics of gradients while neglecting the inherent characteristics of neural network training. In addition, they require operating on and transmitting tensors from all layers, which consumes computational resources and incurs substantial transmission time. To address these issues, this paper proposes a Layer-Wised Sparsification method. Instead of compressing the gradients of all layers, each computational node transmits only the gradients of layers carefully selected by a Hypernetwork. An efficient objective function is constructed for the Hypernetwork to guide the selection, ensuring that layers contributing more to the learning process are prioritized for transmission. Comprehensive experiments on ResNet-18 and VGG-16 are conducted to verify our method. The results show that the proposed method reduces communication overhead with only a slight loss of accuracy. Furthermore, our method can be combined with other compression methods, leading to further reductions in communication volume.
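As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below shows how a per-layer selection mask could gate which layers' gradients a worker communicates at each step. The small scoring network `score_net` standing in for the paper's Hypernetwork, the hand-picked per-layer features, the `top_ratio` parameter, and the use of plain top-k scoring in place of the paper's learned objective are all simplifying assumptions.

```python
# Minimal sketch of layer-wise gradient selection.
# Assumptions: a tiny scoring MLP stands in for the Hypernetwork, and
# top-k scoring replaces the paper's learned selection objective.
import torch
import torch.nn as nn

def layer_features(model):
    # One feature vector per parameter tensor (treated here as a "layer"):
    # its gradient norm and its parameter norm.
    feats = []
    for p in model.parameters():
        g = p.grad if p.grad is not None else torch.zeros_like(p)
        feats.append(torch.stack([g.norm(), p.detach().norm()]))
    return torch.stack(feats)                      # (num_layers, 2)

def select_layers(score_net, model, top_ratio=0.25):
    # The hypothetical score_net maps per-layer features to a scalar score;
    # only the highest-scoring fraction of layers is marked for transmission.
    scores = score_net(layer_features(model)).squeeze(-1)
    k = max(1, int(top_ratio * scores.numel()))
    mask = torch.zeros_like(scores, dtype=torch.bool)
    mask[torch.topk(scores, k).indices] = True
    return mask

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
score_net = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1))

# Compute gradients for one toy batch.
loss = model(torch.randn(4, 8)).sum()
loss.backward()

mask = select_layers(score_net, model)
for keep, p in zip(mask, model.parameters()):
    if keep:
        # In a real distributed run, this is where the selected layer's
        # gradient would be exchanged (e.g. torch.distributed.all_reduce
        # on p.grad); unselected layers are simply skipped this step.
        pass
```

In a full system, the Hypernetwork would itself be trained with the paper's objective so that the selection tracks which layers contribute most to learning, rather than relying on fixed hand-crafted scores as in this sketch.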
Notes
1. This work is supported by the National Natural Science Foundation of China (Grant No. 62306198).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Wu, Y., Li, J., Ye, Q. (2024). Layer-Wised Sparsification Based on Hypernetwork for Distributed NN Training. In: Wand, M., Malinovská, K., Schmidhuber, J., Tetko, I.V. (eds) Artificial Neural Networks and Machine Learning – ICANN 2024. ICANN 2024. Lecture Notes in Computer Science, vol 15021. Springer, Cham. https://doi.org/10.1007/978-3-031-72347-6_13
DOI: https://doi.org/10.1007/978-3-031-72347-6_13
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-72346-9
Online ISBN: 978-3-031-72347-6