Reservoir Computing and Self-Organized Neural Hierarchies
- There is a growing understanding that machine learning architectures must become much larger and more complex to approach intelligent behavior, and that purely supervised learning is inadequate for training such systems. A recent paradigm of artificial recurrent neural network (RNN) training under the umbrella name Reservoir Computing (RC) demonstrated that training large recurrent networks (the reservoirs) differently from the supervised readouts taken from them is often better. It started with Echo State Networks (ESNs) and Liquid State Machines (LSMs) ten years ago, where the reservoir was generated randomly and only linear readouts from it were trained (as sketched below). Rather surprisingly, such simply and quickly trained ESNs outperformed classical fully trained RNNs on many tasks. While fully supervised training of RNNs is problematic, intuitively there should also be something better than a random network. In recent years RC has become a vibrant research field, extending the initial paradigm of a fixed random reservoir and a trained output to a variety of methods for training both the reservoir and the readout. In this thesis we survey existing alternatives to the classical supervised training of RNNs and their hierarchies and investigate new ones. First, we present a taxonomy and a systematic overview of the RNN training approaches under the RC umbrella. Second, we propose and investigate two different neural network models for the reservoirs, together with several unsupervised adaptation techniques, as well as deep hierarchies of such models trained layer-wise in an unsupervised manner. We rigorously and empirically test the proposed methods on two temporal pattern recognition datasets, comparing them to the classical reservoir computing state of the art.
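To make the basic ESN recipe described above concrete (a fixed random reservoir driven by the input, with only a linear readout trained), the following is a minimal sketch in Python/NumPy. The hyperparameter values (reservoir size, spectral radius, ridge coefficient) and the toy one-step-delay task are illustrative assumptions, not taken from the thesis or its experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative hyperparameters (assumed, not from the thesis).
n_in, n_res, n_out = 1, 200, 1
spectral_radius, leak, ridge = 0.9, 1.0, 1e-6

# Fixed random reservoir: input and recurrent weights are generated once and never trained.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))  # rescale to the desired spectral radius

def run_reservoir(U):
    """Drive the reservoir with an input sequence U (T x n_in); return the states (T x n_res)."""
    x = np.zeros(n_res)
    X = np.empty((len(U), n_res))
    for t, u in enumerate(U):
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        X[t] = x
    return X

# Toy teacher task: reproduce the input delayed by one step (purely illustrative).
T = 1000
U = rng.uniform(-1, 1, (T, n_in))
Y = np.roll(U, 1, axis=0)

# Only the linear readout is trained, here in closed form by ridge regression.
X = run_reservoir(U)
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Y).T

Y_pred = X @ W_out.T
print("train MSE:", np.mean((Y_pred - Y) ** 2))
```

This sketch only illustrates the classical "random reservoir, trained readout" baseline that the thesis takes as its starting point; the unsupervised reservoir adaptation techniques and layer-wise trained hierarchies mentioned above go beyond it.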