An Experimental Study of Deep Neural Networks on HPC Clusters

  • Conference paper
Supercomputing (RuSCDays 2019)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1129)

Abstract

Deep neural networks (DNNs) offer great opportunities for solving many problems associated with processing large-scale data. Building and using deep neural networks requires substantial computational resources, so the question naturally arises whether HPC systems can be used to implement DNNs. To better understand the performance implications of DNNs on high-performance clusters, we analyze the performance of several DNN models on two HPC systems: the Lomonosov-2 supercomputer (the partition equipped with P100 GPUs) and the Polus high-performance cluster based on IBM POWER8 processors with P100 GPUs. Comparing these systems is interesting as they are built on different processor architectures (Intel for Lomonosov-2 and IBM for Polus). Apart from the processor architectures, these systems feature different internode interconnects, which may affect the performance of the analyzed algorithms in the case of parallel and distributed implementations of neural network algorithms. The studies were carried out using the PyTorch framework.
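
The full text details the benchmark methodology; as a rough illustration of the kind of measurement involved, the sketch below times the training throughput of a single DNN model in PyTorch on one GPU. It is a minimal sketch, not the authors' benchmark code: the choice of MobileNetV2 (one of the torchvision reference models), the batch size, the iteration counts, and the synthetic ImageNet-sized inputs are all illustrative assumptions.

import time

import torch
import torchvision.models as models

# Illustrative settings; these values are assumptions for this sketch,
# not the configuration used in the paper.
BATCH_SIZE = 64
WARMUP_ITERS = 5
TIMED_ITERS = 20

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# MobileNetV2 as shipped with torchvision; any torchvision model works here.
model = models.mobilenet_v2().to(device)
model.train()
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic ImageNet-sized batch: a pure throughput measurement needs no
# real dataset, which also keeps I/O out of the timing.
inputs = torch.randn(BATCH_SIZE, 3, 224, 224, device=device)
targets = torch.randint(0, 1000, (BATCH_SIZE,), device=device)

def train_step():
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()

# Warm-up iterations let cuDNN pick kernels before the clock starts.
for _ in range(WARMUP_ITERS):
    train_step()
if device.type == "cuda":
    torch.cuda.synchronize()  # GPU work is asynchronous; flush it first

start = time.perf_counter()
for _ in range(TIMED_ITERS):
    train_step()
if device.type == "cuda":
    torch.cuda.synchronize()
elapsed = time.perf_counter() - start

print(f"{BATCH_SIZE * TIMED_ITERS / elapsed:.1f} images/sec")

On a multi-GPU or multi-node run, the usual PyTorch approach is to wrap the model in torch.nn.parallel.DistributedDataParallel with one process per GPU; in that setting the gradient all-reduce traverses the interconnect, which is why the internode communication mentioned in the abstract matters.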

Author information

Corresponding author

Correspondence to Nina Popova.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Buryak, D. et al. (2019). An Experimental Study of Deep Neural Networks on HPC Clusters. In: Voevodin, V., Sobolev, S. (eds) Supercomputing. RuSCDays 2019. Communications in Computer and Information Science, vol 1129. Springer, Cham. https://doi.org/10.1007/978-3-030-36592-9_40

  • DOI: https://doi.org/10.1007/978-3-030-36592-9_40

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-36591-2

  • Online ISBN: 978-3-030-36592-9

  • eBook Packages: Computer Science, Computer Science (R0)
