Re-Training and Parameter Sharing with the Hash Trick for Compressing Convolutional Neural Networks

  • Conference paper
  • First Online:
Machine Learning for Cyber Security (ML4CS 2020)

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 12486)

Included in the following conference series: Machine Learning for Cyber Security

Abstract

As a ubiquitous technology for machine intelligence, deep learning dominates today's most advanced computer vision systems. To achieve superior performance on large-scale data sets, convolutional neural networks (CNNs) are often designed as complex models with millions of parameters. This limits the deployment of CNNs in embedded computer vision systems, such as intelligent robots, which are resource-constrained and subject to real-time computing requirements. This paper proposes a simple and effective model compression scheme that improves real-time sensing of surrounding objects. In the proposed framework, the Hash trick is first applied to a modified convolutional layer, compressing the layer through weight sharing. A Hash index matrix is then introduced to represent the Hash function, and its relaxation regularization is added to the fine-tuning loss function; by dynamically retraining the index matrix, the Hash function itself can be updated. We evaluate the method on several state-of-the-art CNNs. Experimental results show that it reduces the number of parameters in AlexNet by 24× with no loss of accuracy, and that the compressed VGG16 and ResNet50 achieve speedups of more than 60×.
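The abstract is dense, so a small illustration may help. The following is a minimal, hypothetical PyTorch sketch of the two ideas it describes, not the authors' implementation: a convolutional layer whose virtual kernel is backed by a small vector of shared bucket values through a fixed hash, plus a relaxed variant in which the hash is represented as a trainable soft index matrix that a regularizer pushes back toward hard one-hot assignments during fine-tuning. All names here (HashedConv2d, n_buckets, the regularizer weight) are assumptions made for illustration.

```python
# Hypothetical sketch of hash-trick weight sharing for a conv layer (PyTorch).
# Not the paper's code: names, initialisation, and the regularizer are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HashedConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, k, n_buckets, seed=0):
        super().__init__()
        self.shape = (out_ch, in_ch, k, k)
        n_virtual = out_ch * in_ch * k * k
        # Real (compressed) parameters: one trainable value per hash bucket.
        self.buckets = nn.Parameter(torch.randn(n_buckets) * 0.01)
        # Fixed hash: each virtual weight position points at one bucket.
        g = torch.Generator().manual_seed(seed)
        self.register_buffer("idx", torch.randint(n_buckets, (n_virtual,), generator=g))
        # Relaxed index matrix: trainable logits over buckets per position,
        # initialised so that softmax(logits) starts near the hard assignment.
        logits = torch.full((n_virtual, n_buckets), -2.0)
        logits[torch.arange(n_virtual), self.idx] = 2.0
        self.logits = nn.Parameter(logits)

    def forward(self, x, relaxed=False):
        if relaxed:
            # Soft weight sharing: W = H @ b with H = softmax(logits) row-wise.
            weight = (self.logits.softmax(dim=1) @ self.buckets).view(self.shape)
        else:
            # Hard weight sharing via the hash indices.
            weight = self.buckets[self.idx].view(self.shape)
        return F.conv2d(x, weight, padding=self.shape[-1] // 2)

    def index_regularizer(self):
        # Entropy penalty pushing each soft row back toward one-hot; a generic
        # stand-in for the relaxation regularization described in the abstract.
        p = self.logits.softmax(dim=1)
        return -(p * (p + 1e-12).log()).sum(dim=1).mean()

layer = HashedConv2d(in_ch=3, out_ch=16, k=3, n_buckets=64)
y = layer(torch.randn(1, 3, 32, 32), relaxed=True)
loss = y.pow(2).mean() + 1e-3 * layer.index_regularizer()  # dummy task loss
loss.backward()  # gradients reach both the shared buckets and the index matrix
```

In this reading, the soft index matrix exists only during retraining; once fine-tuning converges, each row would be hardened back to its argmax bucket, restoring the compact bucket-plus-index representation at inference time.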


Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (No. 61672120) and the Sichuan Science and Technology Program under Grant 2018HH0143.

Author information

Corresponding author

Correspondence to Xu Gou.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Gou, X., Qing, L., Wang, Y., Xin, M. (2020). Re-Training and Parameter Sharing with the Hash Trick for Compressing Convolutional Neural Networks. In: Chen, X., Yan, H., Yan, Q., Zhang, X. (eds.) Machine Learning for Cyber Security. ML4CS 2020. Lecture Notes in Computer Science, vol. 12486. Springer, Cham. https://doi.org/10.1007/978-3-030-62223-7_35

  • DOI: https://doi.org/10.1007/978-3-030-62223-7_35

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-62222-0

  • Online ISBN: 978-3-030-62223-7

  • eBook Packages: Computer Science, Computer Science (R0)
