Abstract
Neural networks have been proposed to solve difficult problems such as speech and character recognition, but no revolutionary system has emerged so far. This paper presents the results of a survey of ongoing research on neural network applications. Moreover, we point out the requirements for mapping neural applications onto parallel computer hardware, and we propose a flexible mapping of back-propagation-trained neural networks onto a highly parallel computer.
The experiments undertaken show the need for application-specific mapping of the given neural network and training set.
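For concreteness, the sketch below shows plain batch back-propagation for a one-hidden-layer network. It is a minimal illustration of the training algorithm the paper parallelizes, not the authors' implementation: the layer sizes, learning rate, and random training data are assumptions chosen for readability. The comment in bp_updates marks where training-set parallelism, in which each processor processes its own partition of the patterns and partial gradients are summed before the weight update, would fit.

```python
import numpy as np

# Minimal one-hidden-layer network trained by batch back-propagation.
# Illustrative sketch only; sizes, learning rate, and data are assumptions.

rng = np.random.default_rng(0)

n_in, n_hid, n_out = 8, 16, 4         # illustrative layer sizes
lr = 0.1                              # illustrative learning rate

W1 = rng.normal(0, 0.1, (n_in, n_hid))
W2 = rng.normal(0, 0.1, (n_hid, n_out))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bp_updates(X, T, W1, W2):
    """Forward and backward pass over one batch of patterns.

    Returns the gradient terms for this batch. Under training-set
    parallelism, each processor would call this on its own partition
    of the patterns, and the gradients would be summed across
    processors before the weights are updated.
    """
    H = sigmoid(X @ W1)               # hidden-layer activations
    Y = sigmoid(H @ W2)               # output-layer activations
    dY = (Y - T) * Y * (1 - Y)        # output error signal
    dH = (dY @ W2.T) * H * (1 - H)    # back-propagated hidden error
    return X.T @ dH, H.T @ dY         # gradients w.r.t. W1, W2

# Toy training set of random patterns (illustrative only).
X = rng.random((32, n_in))
T = rng.random((32, n_out))

for epoch in range(100):
    g1, g2 = bp_updates(X, T, W1, W2)
    W1 -= lr * g1                     # batch (per-epoch) weight update
    W2 -= lr * g2
```

How often the summed update is applied (per pattern, per partition, or per epoch) affects convergence, which is one of the trade-offs the paper's experiments address.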