Research article · DOI: 10.1145/3379247.3379253
A Weight Initialization Method Associated with Samples for Deep Feedforward Neural Network

Published: 07 March 2020

Abstract

Artificial neural networks are an important force driving the development of artificial intelligence, but they usually need to be trained before use. Randomly chosen initial weights are the most widely used starting point when training neural networks; however, such weights are independent of the training samples. This paper proposes a weight initialization method associated with the samples for the deep feedforward neural network (DFFNN). The initial weights produced by this method combine the original randomly given weights with the weights obtained after the first training epoch, so they retain random characteristics while also being closely related to the samples to be trained. The proposed method is tested on the bearing data provided by the Case Western Reserve University (CWRU) Bearing Data Center. The results show that the proposed method can accelerate the training of a DFFNN to some extent.
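The idea described above — blending randomly drawn initial weights with the weights obtained after one epoch of training on the samples — might be sketched as follows. This is a minimal illustration, not the paper's implementation: the `train_one_epoch` callback, the mixing coefficient `alpha`, and the Gaussian scale are all assumptions.

```python
import numpy as np

def sample_associated_init(train_one_epoch, shape, alpha=0.5, rng=None):
    """Hypothetical sketch of a sample-associated initialization.

    train_one_epoch : callable taking a weight array and returning the
        weights after one epoch of training on the samples (assumed helper).
    shape : shape of the weight matrix.
    alpha : assumed mixing coefficient between random and trained weights.
    """
    rng = np.random.default_rng(rng)
    w_random = rng.standard_normal(shape) * 0.01  # conventional random init
    w_epoch = train_one_epoch(w_random)           # weights after one epoch on the data
    # Blend: keeps random character while tying the init to the samples.
    return alpha * w_random + (1.0 - alpha) * w_epoch
```

With `alpha = 1` this reduces to ordinary random initialization; with `alpha = 0` it uses the first-epoch weights directly. How the paper actually weights the two components is specified in the full text, not here.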

References

[1]
Hinton G. E. and Salakhutdinov R. R. 2006. Reducing the dimensionality of data with neural networks, Science 313, 504--507.
[2]
Wang L. N., Yang Y., Min R. Q., and Chakradhar S. 2017. Accelerating deep neural network training with inconsistent stochastic gradient descent, Neural Networks 93, 219--229.
[3]
Nur A. S., Radzi N. H. M., and Ibrahim A. O. 2014. Artificial neural network weight optimization: A review, TELKOMNIKA Indonesian Journal of Electrical Engineering 12, 6897--6902.
[4]
Combes R. T. D., Pezeshki M., Shabanian S., Courville A., and Bengio Y. 2018. On the learning dynamics of deep neural networks, arXiv:1809.06848v1.
[5]
Ramos E. Z., Nakakuni M., and Yfantis E. 2017. Quantitative measures to evaluate neural network weight initialization strategies, in IEEE 7th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, USA, 2017.
[6]
Kumar S. K. 2017. On weight initialization in deep neural networks, arXiv:1704.08863v2.
[7]
Daniely A., Frostig R., and Singer Y. 2016. Toward deeper understanding of neural networks: The power of initialization and a dual view on expressivity, in 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain, 2016, pp. 1--9.
[8]
Lee A., Geem Z. W., and Suh K. D., 2016. Determination of optimal initial weights of an artificial neural network by using the harmony search algorithm: Application to breakwater armor stones, Applied Sciences 6, 1--17.
[9]
Schmidhuber J. 2015. Deep learning in neural networks: An overview, Neural Networks 61, 85--117.
[10]
Case Western Reserve University Bearing Data Center Website. <http://csegroups.case.edu/bearingdatacenter/home>.
[11]
DeepLearnToolbox website. <https://github.com/rasmusbergpalm/DeepLearnToolbox>.
[12]
Yang Y. L., Fu P. Y., and He Y. C. 2018. Bearing faults automatic classification based on deep learning, IEEE Access 6, 71540--71554.
[13]
Yang Y. L. and Fu P. Y. 2018. Rolling-element bearing fault data automatic clustering based on wavelet and deep neural network, Shock and Vibration 2018, Article ID 3047830, 11 pages.

Cited By

  • (2021) Quantitative analysis of the generalization ability of deep feedforward neural networks. Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology 40(3), 4867--4876. DOI: 10.3233/JIFS-201679. Online publication date: 1 Jan 2021.
  • (2021) Improving convergence speed of the neural network model using meta-heuristic algorithms for weight initialization. 2021 International Conference on Computer Communication and Informatics (ICCCI), 1--6. DOI: 10.1109/ICCCI50826.2021.9402415. Online publication date: 27 Jan 2021.
  • (2021) A state-of-the-art survey on deep learning methods and applications in bioinformatics. Proceedings of International Conference on Advanced Computing Applications, 765--771. DOI: 10.1007/978-981-16-5207-3_62. Online publication date: 24 Nov 2021.


      Published In

      ICCDE '20: Proceedings of 2020 6th International Conference on Computing and Data Engineering
      January 2020
      279 pages
      ISBN:9781450376730
      DOI:10.1145/3379247
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

1. Artificial neural networks
      2. deep feedforward neural network
      3. random initialization
      4. weight initialization

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Conference

      ICCDE 2020

