Abstract
When data are shared among arbitrarily connected machines, training becomes an interesting challenge: each node is initialized with a specific scalar value, and the problem is to compute the average of these values while taking the interconnectivity between agents into account, so that the distributed procedure converges to the same solution as its centralized counterpart. Decentralized average consensus (DAC) is the most popular strategy for this task owing to its low complexity. In this paper, a random topology is chosen to model the network of agents, with a given probability of connection between every pair of neighboring nodes. The global regularized least-squares problem is solved by a decentralized optimization procedure, which raises the question of which output weight vector should be used at test time. Here DAC intervenes to drive all agents toward the same weight vector; without it, each agent would be limited to purely local training. The DAC strategy must therefore be chosen so that all agents converge to the same state. The key contribution is to apply Metropolis weights as the average-consensus strategy, computing the mean of the nodes' updates at each step, and to validate it through several tests; this protocol guarantees convergence of the consensus algorithm on networks without packet losses. Experimental results on prediction and identification tasks show favorable performance in terms of accuracy and efficiency.
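As background for the sketch below: under the Metropolis rule (Xiao, Boyd, and Lall), agent i weights each neighbor j by w_ij = 1/(1 + max(d_i, d_j)), where d_i is the degree of node i, and keeps w_ii = 1 - Σ_j w_ij for itself; the resulting mixing matrix is symmetric and doubly stochastic, so repeated mixing preserves the global average and converges to it on any connected graph. The following Python sketch is not the authors' implementation: the network size, the connection probability of the random topology, and the 3-dimensional local state vectors are illustrative assumptions.

```python
# Minimal sketch of decentralized average consensus (DAC) with Metropolis
# weights on a random topology. Network size, connection probability, and
# state dimension are illustrative assumptions, not the paper's settings.
import numpy as np

rng = np.random.default_rng(0)

# Random topology: connect each pair of agents with probability p.
n_agents, p = 8, 0.4
adj = np.triu(rng.random((n_agents, n_agents)) < p, k=1)
adj = adj | adj.T                          # symmetric, no self-loops
deg = adj.sum(axis=1)

# Metropolis weights: w_ij = 1 / (1 + max(d_i, d_j)) on each edge (i, j),
# w_ii = 1 - sum_j w_ij, giving a doubly stochastic mixing matrix.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    for j in range(n_agents):
        if adj[i, j]:
            W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
    W[i, i] = 1.0 - W[i].sum()

# Each agent starts from its own state (e.g. a locally trained output
# weight vector); repeated mixing drives every row toward the average.
x = rng.standard_normal((n_agents, 3))     # one 3-dim local vector per agent
target = x.mean(axis=0)
for _ in range(200):                       # DAC iterations
    x = W @ x
print(np.allclose(x, target, atol=1e-6))   # True if the graph is connected
```

In the learning setting described above, the rows of x would hold each agent's locally trained output weight vector, with the mixing step applied to the agents' updates so that all nodes agree on a common solution rather than stopping at local training.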
Acknowledgements
The research leading to these results has received funding from the Ministry of Higher Education and Scientific Research of Tunisia under the grant agreement number LR11ES48.
Copyright information
© 2017 Springer International Publishing AG
About this paper
Cite this paper
Slama, N., Elloumi, W., Alimi, A.M. (2017). Distributed Recurrent Neural Network Learning via Metropolis-Weights Consensus. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, ES. (eds) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, vol 10637. Springer, Cham. https://doi.org/10.1007/978-3-319-70093-9_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-70092-2
Online ISBN: 978-3-319-70093-9
eBook Packages: Computer Science (R0)