Abstract
Artificial neural networks are often used as models of biological memory because they share properties with the latter such as generalisation, distributed representation, robustness, and fault tolerance. However, they operate on a short-term scale and can therefore only serve as appropriate models of short-term memory. This limitation is known as catastrophic interference: when a new set of data is learned, the network completely forgets the previously trained sets. To mitigate this restriction, we have developed an algorithm that enables some types of neural network to behave better in the longer term. It requires local networks in which the representation takes the form of prototypes (as an example, we use an RBF network). These prototypes model the previously learned input subspaces. During the presentation of a new input subspace, they can be inwardly manipulated so as to enable a “relearning” of part of the internal model. To demonstrate the long-term capabilities of our heuristic, we compare the results of simulations with those obtained by a multi-layer network in the case of a typical psychophysical experiment.
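The abstract does not spell out the algorithm itself, but the core idea — local prototypes that stand in for previously learned input subspaces and are rehearsed while new data arrives — can be illustrated with a minimal sketch. Everything below is an assumption for illustration only, not the authors' actual method: a 1-D Gaussian RBF network trained with a delta rule, where the "inward" step queries the network's own prototypes to generate pseudo-patterns that are interleaved with the new task.

```python
import math

class RBFNetwork:
    """Gaussian prototypes with a linear output layer (1-D inputs).
    Illustrative sketch only; widths, rates, and update rule are assumptions."""

    def __init__(self, centers, width=0.6, lr=0.05):
        self.centers = list(centers)        # prototype positions
        self.width = width                  # shared Gaussian width
        self.lr = lr                        # delta-rule learning rate
        self.w = [0.0] * len(self.centers)  # output weights

    def _phi(self, x):
        # Activation of each prototype for input x.
        return [math.exp(-(x - c) ** 2 / (2.0 * self.width ** 2))
                for c in self.centers]

    def predict(self, x):
        return sum(p * w for p, w in zip(self._phi(x), self.w))

    def train(self, data, epochs=500):
        for _ in range(epochs):
            for x, y in data:
                err = y - self.predict(x)
                for i, p in enumerate(self._phi(x)):
                    self.w[i] += self.lr * err * p  # local delta-rule update

def max_error(net, data):
    return max(abs(net.predict(x) - y) for x, y in data)

task_a = [(0.0, 1.0), (1.0, 1.0)]    # first input subspace
task_b = [(2.0, -1.0), (3.0, -1.0)]  # second, overlapping subspace
centers = [0.0, 1.0, 2.0, 3.0]

# Plain sequential learning: training on task B degrades task A.
plain = RBFNetwork(centers)
plain.train(task_a)
plain.train(task_b)
err_plain = max_error(plain, task_a)

# "Inward" variant: before task B, the network queries its own prototypes
# covering the old subspace and rehearses the resulting self-generated
# pseudo-patterns alongside the new data.
inward = RBFNetwork(centers)
inward.train(task_a)
pseudo = [(c, inward.predict(c)) for c in (0.0, 1.0)]
inward.train(task_b + pseudo)
err_inward = max_error(inward, task_a)

print(f"old-task error, plain:  {err_plain:.3f}")
print(f"old-task error, inward: {err_inward:.3f}")
```

Because the Gaussian prototypes overlap, plain sequential training on the second task pulls the predictions over the first subspace away from their targets, whereas interleaving the self-generated pseudo-patterns keeps the old mapping largely intact.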
Copyright information
© 1996 Springer-Verlag Berlin Heidelberg
Cite this paper
Wacquant, S., Joublin, F. (1996). Inward relearning: A step towards long-term memory. In: von der Malsburg, C., von Seelen, W., Vorbrüggen, J.C., Sendhoff, B. (eds) Artificial Neural Networks — ICANN 96. ICANN 1996. Lecture Notes in Computer Science, vol 1112. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-61510-5_149
Print ISBN: 978-3-540-61510-1
Online ISBN: 978-3-540-68684-2