
A Robust, Quantization-Aware Training Method for Photonic Neural Networks

  • Conference paper
Engineering Applications of Neural Networks (EANN 2022)

Abstract

The computationally demanding nature of Deep Learning (DL) has fueled research on neuromorphic hardware, which promises high-speed, low-energy accelerators. Neuromorphic photonics in particular are gaining increasing attention, since they can operate at very high frequencies with very low energy consumption. However, they also introduce new challenges in DL training and deployment. In this paper, we propose a novel training method that compensates for quantization noise, which is pervasive in photonic hardware due to analog-to-digital (ADC) and digital-to-analog (DAC) conversions, targeting photonic neural networks (PNNs) that employ easily saturated activation functions. The proposed method takes quantization into account during training, leading to significant performance improvements at inference. We conduct evaluation experiments on both image classification and time-series analysis tasks, employing a wide range of existing photonic neuromorphic architectures. The results demonstrate the effectiveness of the proposed method when low-bit-resolution photonic architectures are used, as well as its generalization ability.
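The core idea sketched in the abstract (simulating ADC/DAC rounding during training so the network learns weights that survive low-bit deployment) can be illustrated generically. The sketch below is an assumption-laden toy, not the authors' method: the names `fake_quantize` and `ste_grad`, the uniform quantizer, and the logistic toy model are all illustrative. It uses the common quantization-aware-training recipe of a fake-quantized forward pass combined with a straight-through estimator in the backward pass.

```python
import numpy as np

def fake_quantize(x, bits, x_min=-1.0, x_max=1.0):
    """Uniform fake quantization: clip to [x_min, x_max] and round to
    2**bits evenly spaced levels, emulating ADC/DAC rounding noise."""
    levels = 2 ** bits - 1
    x = np.clip(x, x_min, x_max)
    scale = (x_max - x_min) / levels
    return np.round((x - x_min) / scale) * scale + x_min

def ste_grad(x, x_min=-1.0, x_max=1.0):
    """Straight-through estimator: treat rounding as identity inside the
    clipping range, zero gradient outside it."""
    return ((x >= x_min) & (x <= x_max)).astype(x.dtype)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linearly separable data (hypothetical stand-in for a real task).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X @ np.array([1.0, -1.0, 0.5, -0.5]) > 0).astype(float)

w = rng.normal(scale=0.1, size=4)
lr, bits = 0.5, 4
for _ in range(300):
    wq = fake_quantize(w, bits)                       # weight DAC noise
    p = sigmoid(fake_quantize(X @ wq, bits, -4, 4))   # pre-activation ADC noise
    grad = X.T @ (p - y) / len(y)                     # logistic-loss gradient
    w -= lr * grad * ste_grad(w)                      # straight-through update

# Evaluate with the same low-bit weights the deployed hardware would use.
acc = np.mean((sigmoid(X @ fake_quantize(w, bits)) > 0.5) == y)
```

Because the forward pass already sees the quantized values, the learned weights settle at points that remain accurate after rounding, which is the behavior the paper targets for saturating photonic activations.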




Acknowledgements

The research work was supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.), Greece, under the "First Call for H.F.R.I. Research Projects to support Faculty members and Researchers and the procurement of high-cost research equipment grant" (Project Number: 4233).

Author information


Corresponding author

Correspondence to M. Kirtas.



Copyright information

© 2022 Springer Nature Switzerland AG

About this paper


Cite this paper

Oikonomou, A. et al. (2022). A Robust, Quantization-Aware Training Method for Photonic Neural Networks. In: Iliadis, L., Jayne, C., Tefas, A., Pimenidis, E. (eds) Engineering Applications of Neural Networks. EANN 2022. Communications in Computer and Information Science, vol 1600. Springer, Cham. https://doi.org/10.1007/978-3-031-08223-8_35


  • DOI: https://doi.org/10.1007/978-3-031-08223-8_35

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-08222-1

  • Online ISBN: 978-3-031-08223-8

  • eBook Packages: Computer Science, Computer Science (R0)
