
Optimization of Artificial Neural Networks using Wavelet Transforms

Published in: Programming and Computer Software

Abstract

This article presents a method for optimizing the performance of artificial neural networks using the wavelet transform. Existing approaches to applying the wavelet transform in neural networks either transform the data before it enters the network or use the “wavenet” architecture, which requires new training procedures. The proposed approach represents the neuron as a nonrecursive adaptive filter and applies a wavelet filter to extract the low-frequency part of the image; this reduces the image size and suppresses interference, which is usually concentrated in the high frequencies. Our wavelet-transform model is built on the classical representation of a feedforward neural network or on convolutional layers, so networks with the wavelet transform can be designed with existing libraries and require no changes to the training algorithm. The method was tested on three MNIST-like datasets: the speed gain is approximately 50 ± 5%, with a loss of recognition quality of no more than 4%. For practitioners, the algorithm was also tested on real images of animals and showed results similar to those on the MNIST-like tests.
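The low-frequency filtering step described in the abstract can be sketched as follows. This is not the authors' code; it is a minimal illustration that assumes the Haar wavelet as the filter and keeps only the low-frequency (LL) sub-band. Implemented with fixed weights, the same operation is a stride-2 convolutional layer placed in front of the network, which is what allows existing libraries to be used without changing the training algorithm.

```python
import numpy as np

def haar_lowpass(image):
    """One level of a 2-D Haar wavelet decomposition, keeping only the
    low-frequency (LL) sub-band. Halves each spatial dimension and
    discards the high-frequency detail, where interference usually lies."""
    h, w = image.shape
    # Average each non-overlapping 2x2 block (Haar scaling filter,
    # normalized here so pixel intensities keep their original scale).
    return (image[0:h:2, 0:w:2] + image[1:h:2, 0:w:2]
            + image[0:h:2, 1:w:2] + image[1:h:2, 1:w:2]) / 4.0

# An MNIST-sized image (28x28) shrinks to 14x14: the network then
# processes 4x fewer input values, which is the source of the speed gain.
img = np.arange(28 * 28, dtype=np.float64).reshape(28, 28)
ll = haar_lowpass(img)
print(ll.shape)  # (14, 14)
```

In a library such as PyTorch, the same filter would be a `Conv2d` layer with frozen 2×2 averaging weights and stride 2, so the rest of the network trains exactly as before.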



ACKNOWLEDGMENTS

This work was supported in part by the Russian Science Foundation, project no. 19-71-10033.

Author information

Corresponding authors

Correspondence to N. Vershkov, M. Babenko, A. Tchernykh, V. Kuchukov, N. Kucherov, N. Kuchukova or A. Yu. Drozdov.


Cite this article

Vershkov, N., Babenko, M., Tchernykh, A. et al. Optimization of Artificial Neural Networks using Wavelet Transforms. Program Comput Soft 48, 376–384 (2022). https://doi.org/10.1134/S036176882206007X
