Inward relearning: A step towards long-term memory

  • Poster Presentations 3
  • Conference paper
Artificial Neural Networks — ICANN 96 (ICANN 1996)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1112)

Abstract

Artificial neural networks are often used as models of biological memory because they share with it properties such as generalisation, distributed representation, robustness, and fault tolerance. However, they operate on a short-term scale and can therefore serve only as models of short-term memory. This limitation is known as catastrophic interference: when a new set of data is learned, the network totally forgets the previously trained sets. To mitigate this restriction, we have developed an algorithm that enables some types of neural network to behave better over the long term. It requires local networks in which the representation takes the form of prototypes (as an example, we use an RBF network). These prototypes model the previously learned input subspaces. During the presentation of a new input subspace, they can be inwardly manipulated so as to enable a “relearning” of part of the internal model. To demonstrate the long-term capabilities of our heuristic, we compare simulation results with those obtained by a multi-layer network on a typical psychophysical experiment.
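The abstract summarises the algorithm only at a high level, so the sketch below should be read as one plausible illustration of the idea rather than the authors' method: an RBF network whose Gaussian centres act as the prototypes modelling previously learned input subspaces, and which rehearses on pseudo-patterns regenerated from those prototypes (labelled with the network's own current outputs) while its output weights are fitted to a new subspace. All concrete choices here (the `RBFNet` class, the LMS weight update, the pseudo-pattern sampling scheme, the parameter values) are illustrative assumptions.

```python
import numpy as np

class RBFNet:
    """Minimal Gaussian RBF network; the centres play the role of the
    'prototypes' that model previously learned input subspaces."""

    def __init__(self, centres, width, n_out, lr=0.05, seed=0):
        self.c = np.asarray(centres, dtype=float)  # prototype centres
        self.s = width                             # shared Gaussian width
        self.w = np.zeros((len(self.c), n_out))    # output weights
        self.lr = lr
        self.rng = np.random.default_rng(seed)

    def _phi(self, x):
        # Gaussian activation of each prototype for each input row.
        d2 = ((x[:, None, :] - self.c[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * self.s ** 2))

    def predict(self, x):
        return self._phi(np.atleast_2d(x)) @ self.w

    def train_step(self, x, y):
        # Plain LMS update of the output weights.
        phi = self._phi(np.atleast_2d(x))
        err = np.atleast_2d(y) - phi @ self.w
        self.w += self.lr * phi.T @ err

    def pseudo_batch(self, n, noise=0.1):
        # "Inward" rehearsal: sample inputs near the stored prototypes
        # and label them with the network's own current outputs, so the
        # old internal model can be partly relearned from itself.
        idx = self.rng.integers(0, len(self.c), size=n)
        x = self.c[idx] + noise * self.rng.standard_normal((n, self.c.shape[1]))
        return x, self.predict(x)

def learn_new_subspace(net, x_new, y_new, epochs=200, rehearse=True):
    """Fit a new input subspace; with rehearse=True, prototype-derived
    pseudo-patterns are interleaved with the new data so that part of
    the old model is relearned instead of being overwritten."""
    x_new = np.atleast_2d(np.asarray(x_new, dtype=float))
    y_new = np.atleast_2d(np.asarray(y_new, dtype=float))
    for _ in range(epochs):
        if rehearse:
            xp, yp = net.pseudo_batch(len(x_new))
            net.train_step(xp, yp)
        net.train_step(x_new, y_new)
```

Trained sequentially without rehearsal, such a network largely overwrites the weights that encoded the first subspace when the second is presented (the catastrophic interference described above); interleaving the prototype-derived pseudo-patterns preserves much of the earlier mapping at the cost of extra training steps.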

Editor information

Christoph von der Malsburg, Werner von Seelen, Jan C. Vorbrüggen, Bernhard Sendhoff

Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Wacquant, S., Joublin, F. (1996). Inward relearning: A step towards long-term memory. In: von der Malsburg, C., von Seelen, W., Vorbrüggen, J.C., Sendhoff, B. (eds) Artificial Neural Networks — ICANN 96. ICANN 1996. Lecture Notes in Computer Science, vol 1112. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-61510-5_149

  • DOI: https://doi.org/10.1007/3-540-61510-5_149

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-61510-1

  • Online ISBN: 978-3-540-68684-2

  • eBook Packages: Springer Book Archive
