Abstract
Word2vec is a word embedding method that maps words to vectors such that semantically and syntactically related words lie close to each other in the vector space. Acceleration is required to reduce Word2vec's processing time. We propose a power-efficient FPGA accelerator that exploits temporal and spatial parallelism. The proposed accelerator achieves higher power efficiency than existing top-end GPU accelerators, and it is both more power efficient and nearly twice as fast as a previously proposed highly power-efficient FPGA accelerator.
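For readers unfamiliar with the training loop that such accelerators target, the following is a minimal, illustrative sketch of skip-gram Word2vec with negative sampling in plain NumPy. It is not the paper's FPGA design; the function names (`train_skipgram`, `cosine`) and all hyperparameter defaults are our own assumptions for illustration.

```python
import numpy as np

def train_skipgram(corpus, dim=16, window=2, neg=5, epochs=50, lr=0.05, seed=0):
    """Minimal skip-gram with negative sampling (illustrative sketch only).

    corpus: list of sentences, each a list of word strings.
    Returns a dict mapping each word to its learned embedding vector.
    """
    rng = np.random.default_rng(seed)
    vocab = sorted({w for sent in corpus for w in sent})
    idx = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    W_in = (rng.random((V, dim)) - 0.5) / dim   # input (word) vectors
    W_out = np.zeros((V, dim))                  # output (context) vectors
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    for _ in range(epochs):
        for sent in corpus:
            ids = [idx[w] for w in sent]
            for pos, center in enumerate(ids):
                for off in range(-window, window + 1):
                    c = pos + off
                    if off == 0 or c < 0 or c >= len(ids):
                        continue
                    # one positive (true context) target plus `neg` random
                    # negatives; a negative may occasionally collide with
                    # the positive, which a real implementation would avoid
                    targets = [(ids[c], 1.0)] + [
                        (int(rng.integers(V)), 0.0) for _ in range(neg)]
                    grad_in = np.zeros(dim)
                    for t, label in targets:
                        g = (sigmoid(W_in[center] @ W_out[t]) - label) * lr
                        grad_in += g * W_out[t]
                        W_out[t] -= g * W_in[center]
                    W_in[center] -= grad_in
    return {w: W_in[idx[w]] for w in vocab}

def cosine(a, b):
    """Cosine similarity, the usual closeness measure for word vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

The inner loop over targets is the dot-product-heavy kernel that both GPU and FPGA accelerators parallelize; the Hogwild-style lock-free updates used in practice make it amenable to the temporal and spatial parallelism discussed in the paper.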
Acknowledgment
This research is partly supported by MEXT KAKENHI, grant number 19K11998.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Waidyasooriya, H.M., Ishihara, S., Hariyama, M. (2023). Word2Vec FPGA Accelerator Based on Spatial and Temporal Parallelism. In: Takizawa, H., Shen, H., Hanawa, T., Hyuk Park, J., Tian, H., Egawa, R. (eds) Parallel and Distributed Computing, Applications and Technologies. PDCAT 2022. Lecture Notes in Computer Science, vol 13798. Springer, Cham. https://doi.org/10.1007/978-3-031-29927-8_6
DOI: https://doi.org/10.1007/978-3-031-29927-8_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-29926-1
Online ISBN: 978-3-031-29927-8
eBook Packages: Computer Science (R0)