Word2Vec FPGA Accelerator Based on Spatial and Temporal Parallelism

  • Conference paper in Parallel and Distributed Computing, Applications and Technologies (PDCAT 2022)

Abstract

Word2vec is a word embedding method that converts words into vectors such that semantically and syntactically related words lie close to each other in the vector space. Because training is time-consuming, acceleration is required to reduce the processing time of Word2vec. We propose a power-efficient FPGA accelerator that exploits both temporal and spatial parallelism. The proposed accelerator achieves higher power efficiency than existing top-end GPU accelerators, and it is more power efficient and nearly twice as fast as a previously proposed, highly power-efficient FPGA accelerator.
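
For readers unfamiliar with the computation being accelerated: assuming the standard skip-gram model with negative sampling (the Word2vec variant most accelerators target; the paper's abstract does not specify the variant), training reduces to repeated dot products and rank-one updates on short embedding vectors, which is what makes spatial parallelism (multiple vector lanes) and temporal parallelism (pipelined updates) effective. The following is a minimal NumPy sketch of that update; the function name, hyperparameters, and toy dimensions are illustrative, not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sgns_update(w_in, w_out, center, context, negatives, lr=0.025):
    """One skip-gram-with-negative-sampling step: pull the context word's
    output vector toward the center word's input vector and push the
    sampled negative words away. This dot-product/update loop is the
    hot spot that Word2vec accelerators parallelize."""
    v = w_in[center]                      # input (center word) vector
    grad_v = np.zeros_like(v)
    for word, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
        u = w_out[word]                   # output (context or negative) vector
        g = lr * (label - sigmoid(np.dot(v, u)))
        grad_v += g * u                   # accumulate gradient for the center word
        w_out[word] += g * v              # update output vector in place
    w_in[center] += grad_v                # apply accumulated update

# Toy usage: 10-word vocabulary, 8-dimensional embeddings.
rng = np.random.default_rng(0)
W_in = (rng.random((10, 8)) - 0.5) / 8
W_out = np.zeros((10, 8))
sgns_update(W_in, W_out, center=3, context=5, negatives=[1, 7, 9])
```

The independent per-word dot products inside the loop are what a hardware implementation can evaluate in parallel, while successive (center, context) pairs can be pipelined.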


Acknowledgment

This research is partly supported by MEXT KAKENHI, grant number 19K11998.

Author information

Corresponding author

Correspondence to Hasitha Muthumala Waidyasooriya.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Waidyasooriya, H.M., Ishihara, S., Hariyama, M. (2023). Word2Vec FPGA Accelerator Based on Spatial and Temporal Parallelism. In: Takizawa, H., Shen, H., Hanawa, T., Park, J.H., Tian, H., Egawa, R. (eds) Parallel and Distributed Computing, Applications and Technologies. PDCAT 2022. Lecture Notes in Computer Science, vol 13798. Springer, Cham. https://doi.org/10.1007/978-3-031-29927-8_6

  • DOI: https://doi.org/10.1007/978-3-031-29927-8_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-29926-1

  • Online ISBN: 978-3-031-29927-8

  • eBook Packages: Computer Science, Computer Science (R0)
