Abstract
We propose a neural-network-based autoassociative memory system for unsupervised learning. The system is intended as an example of how a general information-processing architecture, similar to that of neocortex, could be organized. The network's units are arranged into two separate groups, called populations: an input population and a hidden population. Units in the input population form receptive fields that project sparsely onto the units of the hidden population, and competitive learning is used to train these forward projections. The hidden population implements an attractor memory. A back projection from the hidden to the input population is trained with a Hebbian learning rule. The system can process correlated and densely coded patterns, which regular attractor neural networks handle very poorly, and it performs well on a number of typical attractor-network tasks such as pattern completion, noise reduction, and prototype extraction.
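The architecture the abstract describes can be sketched with standard textbook components: competitive learning for the sparse forward projection, a Hopfield-style attractor over the hidden codes, and a Hebbian back projection for reconstruction. Everything below (population sizes, learning rate, the one-hot hidden coding, and the outer-product attractor rule) is an illustrative assumption for this sketch, not the paper's actual model, which uses a more elaborate attractor network:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 64, 16  # illustrative population sizes (assumed, not from the paper)
patterns = (rng.random((5, n_in)) < 0.5).astype(float)  # dense binary input patterns

# 1) Competitive learning trains the forward projection: the winning hidden
#    unit moves its (normalized) weight vector toward the current input.
W_fwd = rng.random((n_hid, n_in))
W_fwd /= np.linalg.norm(W_fwd, axis=1, keepdims=True)
for _ in range(50):
    for x in patterns:
        winner = np.argmax(W_fwd @ x)
        W_fwd[winner] += 0.1 * (x - W_fwd[winner])
        W_fwd[winner] /= np.linalg.norm(W_fwd[winner])

def encode(x):
    """Map an input pattern to a maximally sparse (one-hot) hidden code."""
    h = np.zeros(n_hid)
    h[np.argmax(W_fwd @ x)] = 1.0
    return h

# 2) The hidden population acts as an attractor memory over the sparse codes,
#    here a Hopfield network trained with the Hebbian outer-product rule.
H = np.array([encode(x) for x in patterns])
S = 2.0 * H - 1.0                 # +/-1 coding for the Hopfield update
W_att = S.T @ S / n_hid
np.fill_diagonal(W_att, 0.0)

# 3) A Hebbian back projection maps hidden codes back to input patterns.
W_back = patterns.T @ H           # shape (n_in, n_hid)

def recall(x, steps=5):
    """Encode the input, relax the attractor dynamics, project back."""
    s = 2.0 * encode(x) - 1.0
    for _ in range(steps):
        s = np.sign(W_att @ s)
        s[s == 0] = 1.0           # break ties toward +1
    h = (s + 1.0) / 2.0
    return (W_back @ h > 0.5).astype(float)
```

In use, `recall` is applied to a noisy or partial input; because the hidden code is sparse, the attractor step operates on decorrelated representations even when the inputs themselves are dense and correlated, which is the point of interposing the competitive-learning stage.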
Copyright information
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Johansson, C., Lansner, A. (2006). Attractor Memory with Self-organizing Input. In: Ijspeert, A.J., Masuzawa, T., Kusumoto, S. (eds) Biologically Inspired Approaches to Advanced Information Technology. BioADIT 2006. Lecture Notes in Computer Science, vol 3853. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11613022_22
Print ISBN: 978-3-540-31253-6
Online ISBN: 978-3-540-32438-6