Manufacturing Systems Estimation Using Neural Network Models

where $\mu_k$ and $\sigma_k$ are the center and width, respectively, of the receptive field in the input space for unit $k$. The value of $z_k$ will be appreciable only when the "distance" $\|x - \mu_k\|$ is smaller than the width $\sigma_k$. Then for any given input, only the small fraction of basis functions with centers very close will respond with activations that differ significantly from zero. This leads to the notion of locality of RBF networks. A commonly used RBF network assumes a Gaussian basis function for the locally-tuned units:

$$z_k = \exp\left(-\frac{\|x - \mu_k\|^2}{2\sigma_k^2}\right) \qquad (2)$$
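As a concrete illustration, here is a minimal NumPy sketch of Eq. (2). The function name `rbf_activations` and the array shapes are assumptions for this example, and the $2\sigma_k^2$ denominator follows the Gaussian form written above.

```python
import numpy as np

def rbf_activations(x, centers, widths):
    """Gaussian basis activations per Eq. (2): z_k = exp(-||x - mu_k||^2 / (2 sigma_k^2)).

    x       : (n_features,) input vector
    centers : (K, n_features) receptive-field centers mu_k
    widths  : (K,) receptive-field widths sigma_k
    """
    sq_dist = np.sum((centers - x) ** 2, axis=1)   # ||x - mu_k||^2 for each unit k
    return np.exp(-sq_dist / (2.0 * widths ** 2))  # appreciable only near a center
```

Because each activation decays rapidly away from its center, most entries of the returned vector are near zero for any given input, which is the locality property noted above.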

where the norm is Euclidean. No bias terms are needed when Gaussian basis functions are used. The output layer of the RBF network is linear and produces a weighted sum of the outputs of the hidden layer, where the sum is calculated by the matrix multiplication given in Eq. (3).
$$\begin{bmatrix} d_1 \\ d_2 \\ \vdots \\ d_L \end{bmatrix} = \begin{bmatrix} w_{11} & w_{21} & \cdots & w_{K1} \\ w_{12} & w_{22} & \cdots & w_{K2} \\ \vdots & \vdots & \ddots & \vdots \\ w_{1L} & w_{2L} & \cdots & w_{KL} \end{bmatrix} \begin{bmatrix} z_1 \\ z_2 \\ \vdots \\ z_K \end{bmatrix} \qquad (3)$$
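The weighted sum in Eq. (3) is an ordinary matrix-vector product, so a sketch needs only one line; storing the weights as a $K \times L$ array `W` with `W[k, l]` $= w_{kl}$ is an assumption of this example, not a convention fixed by the text.

```python
import numpy as np

def rbf_output(z, W):
    """Linear output layer per Eq. (3): d_l = sum_k w_kl * z_k.

    z : (K,) hidden-layer activations
    W : (K, L) weight matrix, W[k, l] = w_kl (hidden unit k to output unit l)
    """
    return W.T @ z  # (L,) network output vector d
```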

The strength of the connection between the $k$th hidden unit and the $l$th output unit is denoted by the weight $w_{kl}$. The term $d_l$, where $l = 1, \ldots, L$, is the $l$th component of the network output vector for one input/output pair. The linear output-layer function may also include a bias term $\lambda_{0l}$. An allowance for nonlinearity in the output layer is possible, provided the transfer function is invertible. Moody and Darken (1989), Broomhead and Lowe (1988), and Hassoun (1995) are popular citations for the theory of radial basis function networks.

Training the Network

Training of RBF networks is most computationally efficient when a hybrid learning method, combining linear supervised learning and linear self-organized learning, is used. Supervised learning rules adjust the network parameters to move the network outputs closer to the target outputs, while self-organized learning rules modify parameters in response to the network inputs only. The combination of local representation and linear learning offers tremendous speed advantages relative to other architectures such as backpropagation. The hybrid learning method is an example of a training strategy that decouples learning at the hidden and output layers, made possible for RBF networks by the local receptive-field nature of the hidden units. Under the hybrid learning method, the receptive-field centers and widths are first determined using a self-organizing or feedforward technique. Then a supervised feedback procedure that minimizes total error is used to adjust the weights and biases connecting the hidden and output layers.
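A compact sketch of this decoupled strategy, assuming k-means clustering for the self-organized stage and a nearest-neighbor heuristic for the widths (both common choices, not prescribed by the text), with the supervised stage reduced to a linear least-squares solve:

```python
import numpy as np

def train_rbf_hybrid(X, D, K, n_iter=50, seed=0):
    """Hybrid training sketch: self-organized pass for centers/widths,
    then supervised linear least squares for the output weights.

    X : (N, n_features) inputs; D : (N, L) target outputs; K : number of hidden units.
    """
    rng = np.random.default_rng(seed)
    # Self-organized stage: k-means clustering places the receptive-field centers.
    centers = X[rng.choice(len(X), K, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(K):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    # Widths from nearest-neighbor center distances (one common heuristic).
    dists = np.sqrt(((centers[:, None] - centers[None]) ** 2).sum(-1))
    np.fill_diagonal(dists, np.inf)
    widths = dists.min(axis=1)
    # Supervised stage: hidden activations per Eq. (2), then least squares for W.
    Z = np.exp(-((X[:, None, :] - centers[None]) ** 2).sum(-1) / (2.0 * widths ** 2))
    W, *_ = np.linalg.lstsq(Z, D, rcond=None)  # (K, L), minimizes total squared error
    return centers, widths, W
```

Because the output layer is linear, the least-squares step solves the supervised stage in closed form, which is the source of the speed advantage described above.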
