Abstract
Finite-state machines are the most pervasive models of computation, not only in theoretical computer science but also in all of its applications to real-life problems, and they constitute the best-characterized computational model. On the other hand, neural networks (proposed almost sixty years ago by McCulloch and Pitts as a simplified model of nervous activity in living beings) have evolved into a great variety of so-called artificial neural networks. Artificial neural networks have become a very successful tool for modelling and problem solving because of their built-in learning capability, but most of the progress in this field has occurred with models that are far removed from the behaviour of real, i.e., biological, neural networks. This paper surveys the work that has established a connection between finite-state machines and (mainly discrete-time recurrent) neural networks, and suggests possible ways to construct finite-state models in biologically plausible neural networks.
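To make the finite-state connection concrete, the following Python sketch (an illustration, not a construction taken from any of the surveyed papers) shows how a discrete-time recurrent network of sigmoid units can emulate a deterministic finite automaton. It encodes the two-state parity automaton with one unit per state and second-order (multiplicative) connections in the style of Giles et al.; the gain H and the 0.5 threshold are assumptions chosen so that the sigmoids saturate near 0 and 1.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Transition table of the parity DFA: delta[(state, symbol)] = next state.
    delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

    # Second-order weights: w[i, j, k] = 1 iff reading symbol k in state j
    # leads to state i, i.e. delta(j, k) = i.
    w = np.zeros((2, 2, 2))
    for (j, k), i in delta.items():
        w[i, j, k] = 1.0

    H = 10.0  # gain, large enough to drive the sigmoids towards 0 or 1

    def step(x, symbol):
        """One network step on the (approximately one-hot) state vector x."""
        u = np.zeros(2)
        u[symbol] = 1.0
        net = np.einsum('ijk,j,k->i', w, x, u)  # multiplicative connections
        return sigmoid(H * (net - 0.5))

    x = np.array([1.0, 0.0])  # start in the 'even' state
    for s in [1, 0, 1, 1]:    # three 1s: odd parity
        x = step(x, s)
    print('odd' if x[1] > 0.5 else 'even')  # prints: odd

After every step the state units lie close to 0 or 1, so the network tracks the automaton's state; proving that this distance stays bounded for arbitrarily long inputs is precisely the stability question addressed by the encodings surveyed in the paper.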
References
W.S. McCulloch and W.H. Pitts. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5:115–133, 1943.
J.E. Hopcroft and J.D. Ullman. Introduction to automata theory, languages, and computation. Addison-Wesley, Reading, MA, 1979.
S.C. Kleene. Representation of events in nerve nets and finite automata. In C.E. Shannon and J. McCarthy, editors, Automata Studies, pages 3–42. Princeton University Press, Princeton, NJ, 1956.
M.L. Minsky. Computation: Finite and Infinite Machines. Prentice-Hall, Inc., Englewood Cliffs, NJ, 1967. Chapter: Neural Networks. Automata Made up of Parts.
N. Alon, A.K. Dewdney, and T.J. Ott. Efficient simulation of finite automata by neural nets. Journal of the Association for Computing Machinery, 38(2):495–514, 1991.
P. Indyk. Optimal simulation of automata by neural nets. In Proceedings of the 12th Annual Symposium on Theoretical Aspects of Computer Science, pages 337–348, Berlin, 1995. Springer-Verlag.
B.G. Horne and D.R. Hush. Bounds on the complexity of recurrent neural network implementations of finite state machines. Neural Networks, 9(2):243–252, 1996.
Tony Robinson and Frank Fallside. A recurrent error propagation network speech recognition system. Computer Speech and Language, 5:259–274, 1991.
J.L. Elman. Finding structure in time. Cognitive Science, 14:179–211, 1990.
A. Cleeremans, D. Servan-Schreiber, and J.L. McClelland. Finite state automata and simple recurrent networks. Neural Computation, 1(3):372–381, 1989.
Jordan B. Pollack. The induction of dynamical recognizers. Machine Learning, 7:227–252, 1991.
C.L. Giles, C.B. Miller, D. Chen, H.H. Chen, G.Z. Sun, and Y.C. Lee. Learning and extracting finite state automata with second-order recurrent neural networks. Neural Computation, 4(3):393–405, 1992.
R.L. Watrous and G.M. Kuhn. Induction of finite-state languages using second-order recurrent networks. Neural Computation, 4(3):406–414, 1992.
Arun Maskara and Andrew Noetzel. Forcing simple recurrent neural networks to encode context. In Proceedings of the 1992 Long Island Conference on Artificial Intelligence and Computer Graphics, 1992.
A. Sanfeliu and R. Alquézar. Active grammatical inference: a new learning methodology. In Dov Dori and A. Bruckstein, editors, Shape and Structure in Pattern Recognition, Singapore, 1994. World Scientific. Proceedings of the IAPR International Workshop on Structural and Syntactic Pattern Recognition SSPR’94 (Nahariya, Israel).
P. Manolios and R. Fanelli. First order recurrent neural networks and deterministic finite state automata. Neural Computation, 6(6):1154–1172, 1994.
M.L. Forcada and R.C. Carrasco. Learning the initial state of a second-order recurrent neural network during regular-language inference. Neural Computation, 7(5):923–930, 1995.
Peter Tiňo and Jozef Sajda. Learning and extracting initial Mealy automata with a modular neural network model. Neural Computation, 7(4), July 1995.
R.P. Ñeco and M.L. Forcada. Beyond Mealy machines: Learning translators with recurrent neural networks. In Proceedings of the World Conference on Neural Networks '96, pages 408–411, San Diego, California, September 15–18, 1996.
Marco Gori, Marco Maggini, E. Martinelli, and G. Soda. Inductive inference from noisy examples using the hybrid finite state filter. IEEE Transactions on Neural Networks, 9(3):571–575, 1998.
M.W. Goudreau, C.L. Giles, S.T. Chakradhar, and D. Chen. First-order vs. second-order single layer recurrent neural networks. IEEE Transactions on Neural Networks, 5(3):511–513, 1994.
Rafael C. Carrasco, Mikel L. Forcada, M. Ángeles Valdés-Muñoz, and Ramón P. Ñeco. Stable encoding of finite-state machines in discrete-time recurrent neural nets with sigmoid units. Neural Computation, 12, 2000. In press.
Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157–166, 1994.
J.F. Kolen and Jordan B. Pollack. The observer’s paradox: apparent computational complexity in physical systems. Journal of Experimental and Theoretical Artificial Intelligence, 7:253–277, 1995.
J.F. Kolen. Fool’s gold: Extracting finite state machines from recurrent network dynamics. In J.D. Cowan, G. Tesauro, and J. Alspector, editors, Advances in Neural Information Processing Systems 6, pages 501–508, San Mateo, CA, 1994. Morgan Kaufmann.
M. Casey. The dynamics of discrete-time computation, with application to recurrent neural networks and finite state machine extraction. Neural Computation, 8(6):1135–1178, 1996.
A. Blair and J.B. Pollack. Analysis of dynamical recognizers. Neural Computation, 9(5):1127–1142, 1997.
C.W. Omlin and C.L. Giles. Constructing deterministic finite-state automata in recurrent neural networks. Journal of the ACM, 43(6):937–972, 1996.
R. Alquézar and A. Sanfeliu. An algebraic framework to represent finite state automata in single-layer recurrent neural networks. Neural Computation, 7(5):931–949, 1995.
Stefan C. Kremer. A Theory of Grammatical Induction in the Connectionist Paradigm. PhD thesis, Department of Computer Science, University of Alberta, Edmonton, Alberta, 1996.
Paolo Frasconi, Marco Gori, Marco Maggini, and Giovanni Soda. Representation of finite-state automata in recurrent radial basis function networks. Machine Learning, 23:5–32, 1996.
Jiří Šíma. Analog stable simulation of discrete neural networks. Neural Network World, 7:679–686, 1997.
Jiří Šíma and Jiří Wiedermann. Theory of neuromata. Journal of the ACM, 45(1):155–178, 1998.
Ramón P. Ñeco, Mikel L. Forcada, Rafael C. Carrasco, and M. Ángeles Valdés-Muñoz. Encoding of sequential translators in discrete-time recurrent neural nets. In Proceedings of the European Symposium on Artificial Neural Networks ESANN’99, pages 375–380, 1999.
Rafael C. Carrasco, Jose Oncina, and Mikel L. Forcada. Efficient encodings of finite automata in discrete-time recurrent neural networks. In Proceedings of ICANN’99, International Conference on Artificial Neural Networks, 1999. (in press).
B.A. Pearlmutter. Gradient calculations for dynamic recurrent neural networks: a survey. IEEE Transactions on Neural Networks, 6(5):1212–1228, 1995.
F.J. Pineda. Generalization of back-propagation to recurrent neural networks. Physical Review Letters, 59(19):2229–2232, 1987.
L.B. Almeida. Backpropagation in perceptrons with feedback. In R. Eckmiller and Ch. von der Malsburg, editors, Neural Computers (Neuss, 1987), pages 199–208. Springer-Verlag, Berlin, 1988.
S. Thorpe, D. Fize, and C. Marlot. Speed of processing in the human visual system. Nature, 381:520–522, 1996.
Thomas Natschläger and Wolfgang Maass. Fast analog computation in networks of spiking neurons using unreliable synapses. In Proceedings of ESANN’99, European Symposium on Artificial Neural Networks, pages 417–422, 1999.
Thomas Wennekers. Synfire graphs: From spike patterns to automata of spiking neurons. Technical Report Ulmer Informatik-Berichte Nr. 98-08, Universität Ulm, Fakultät für Informatik, 1998.
Copyright information
© 2001 Springer-Verlag Berlin Heidelberg
About this chapter
Cite this chapter
Forcada, M.L., Carrasco, R.C. (2001). Finite-State Computation in Analog Neural Networks: Steps towards Biologically Plausible Models?. In: Wermter, S., Austin, J., Willshaw, D. (eds) Emergent Neural Computational Architectures Based on Neuroscience. Lecture Notes in Computer Science(), vol 2036. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44597-8_34
DOI: https://doi.org/10.1007/3-540-44597-8_34
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-42363-8
Online ISBN: 978-3-540-44597-5