UNIT-II

ESSENTIALS OF ARTIFICIAL NEURAL NETWORKS

Types of activation functions:

1. Thresholding function:
It is easy to see that the output signal is either 1 or 0, so the neuron is
either on or off.

Ø(I) = 1,  if I > θ
     = 0,  if I ≤ θ

2. Signum function: This is also known as the quantizer function; Ø is defined as

Ø(I) = +1,  if I > θ
     = −1,  if I ≤ θ
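As an illustration, here is a minimal sketch of these two activations in Python (NumPy assumed); the threshold θ defaults to 0 but is a free parameter.

import numpy as np

def threshold(I, theta=0.0):
    """Thresholding (on/off) activation: 1 if I > theta, else 0."""
    return np.where(I > theta, 1, 0)

def signum(I, theta=0.0):
    """Signum (quantizer) activation: +1 if I > theta, else -1."""
    return np.where(I > theta, 1, -1)

# Example: net inputs on either side of theta = 0
print(threshold(np.array([-0.5, 0.0, 0.7])))  # [0 0 1]
print(signum(np.array([-0.5, 0.0, 0.7])))     # [-1 -1  1]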



3. Sigmoidal function:

The S-shaped sigmoidal function is the most commonly used activation function:

Ø(I) = 1 / (1 + e^(−αI)),  where α is the slope parameter.

The sigmoid function is differentiable, whereas the threshold function is
not. Differentiability is an important feature of neural network theory.
Ø(I) takes a continuous range of values from 0 to 1, and as α → ∞ the
sigmoid reduces to the threshold function.
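As a rough sketch (Python/NumPy assumed), the sigmoid and its derivative can be written as follows; alpha is the slope parameter α above, and a large alpha makes the curve approach the threshold function.

import numpy as np

def sigmoid(I, alpha=1.0):
    """Sigmoidal activation 1 / (1 + exp(-alpha * I)); alpha is the slope parameter."""
    return 1.0 / (1.0 + np.exp(-alpha * I))

def sigmoid_derivative(I, alpha=1.0):
    """Derivative alpha * s * (1 - s), which exists everywhere (unlike the threshold)."""
    s = sigmoid(I, alpha)
    return alpha * s * (1.0 - s)

# A large slope parameter makes the sigmoid behave like the threshold function
I = np.array([-1.0, -0.1, 0.1, 1.0])
print(sigmoid(I, alpha=1.0))    # smooth values between 0 and 1
print(sigmoid(I, alpha=100.0))  # close to [0, 0, 1, 1]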



4. Hyperbolic tangent function:
It is given by

Ø(I) = tanh(I) = (e^I − e^(−I)) / (e^I + e^(−I))
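A corresponding sketch for the hyperbolic tangent (Python/NumPy assumed); its output lies between −1 and +1.

import numpy as np

def tanh_activation(I):
    """Hyperbolic tangent activation: (e^I - e^-I) / (e^I + e^-I)."""
    return np.tanh(I)

print(tanh_activation(np.array([-2.0, 0.0, 2.0])))  # approx [-0.96, 0.0, 0.96]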

Why non-linear activation functions?


Activation functions cannot be linear, because a neural network with linear
activation functions is effective only as a single layer: when multiple layers use
linear activation functions, the entire network is equivalent to a single-layer
model. Nonlinear means that the output cannot be reproduced from a linear
combination of the inputs.
(OR)
Without a nonlinear activation function, the network, no matter how
many layers it has, would behave just like a single-layer perceptron, because
composing these layers would give just another linear function.
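A small numerical sketch of this point (Python/NumPy assumed): two purely linear layers compose into a single linear map, so stacking them adds no representational power.

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)          # an arbitrary input vector
W1 = rng.normal(size=(4, 3))    # first "layer" (linear, no activation)
W2 = rng.normal(size=(2, 4))    # second "layer" (linear, no activation)

two_layers = W2 @ (W1 @ x)      # output of the two-layer linear network
one_layer = (W2 @ W1) @ x       # a single layer with weight matrix W2 @ W1

print(np.allclose(two_layers, one_layer))  # True: the two networks are identical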

What can a single neuron do?


Logic operations performed by an ANN
Logical AND :
Consider the truth table illustrating an AND gate

x2 x1 y

0 0 0

0 1 0

1 0 0

1 1 1
w1 = 1, w2 = 1, b = −1.5
(taking the output as y = 1 when w1·x1 + w2·x2 + b > 0, and y = 0 otherwise)



Logical OR :
Consider the truth table illustrating the OR gate

x2 x1 y

0 0 0

0 1 1

1 0 1

1 1 1

w1 = 1, w2 = 1, b = −0.5

Note: The implementations of AND and OR logic functions differ only by the
value of the bias.

NOT Gate:

x y

1 0

0 1
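A minimal sketch of these gates with a single threshold neuron (Python assumed). The AND and OR weights and biases are the ones given above; the NOT values (w = −1, b = 0.5) are one possible choice, not stated in the notes.

def neuron(inputs, weights, bias):
    """Single neuron: output 1 if the weighted sum plus bias is positive, else 0."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    and_out = neuron([x1, x2], [1, 1], -1.5)   # AND: w1 = w2 = 1, b = -1.5
    or_out = neuron([x1, x2], [1, 1], -0.5)    # OR:  w1 = w2 = 1, b = -0.5
    print(x1, x2, and_out, or_out)

for x in (0, 1):
    print(x, neuron([x], [-1], 0.5))           # NOT: w = -1, b = 0.5 (assumed values)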



LEARNING PROCESS:

The learning process is the process of modifying the weights of the connections
between network layers with the objective of achieving the expected output.

Learning Tasks
The learning algorithm for a neural network depends on the learning task to
be performed by the network. Such learning tasks include

 Pattern association
 Pattern recognition
 Function approximation
 Filtering
 Beam forming
 Identification and Control
LEARNING METHODS:

Learning methods in neural networks can be broadly classified into three


basic types.
 Supervised learning
 Unsupervised learning
 Reinforced learning

Supervised learning: In this method, every input used to train the network is
associated with a target output. A teacher is assumed to be present during the
learning process, and a comparison is made between the network's computed
output and the correct expected output to determine the error. The error can then
be used to change the network parameters in order to improve the performance.
This process continues until the error becomes zero.



Unsupervised learning: In this type of learning no teacher is present and the
target output is not presented to the network.

 In this method all similar input patterns are grouped together as clusters.
 If a matching input pattern is not found, a new cluster is formed
(clustering is nothing but mode separation or class separation).

 There is no feedback in unsupervised learning.

 The network must discover the patterns, regularities and features of the
input data on its own. While doing so, the network may change its parameters,
which results in an improvement in performance.
 This process is called self-organising.

Reinforcement learning: In this method a teacher, though available, does not
present the expected answer but only indicates whether the computed output is correct
or incorrect. This information helps in the network's learning process. A reward is
given for a correct computed answer and a penalty for a wrong answer.



LEARNING RULES:
1. Rosenblatt’s perceptron learning rule:

This learning is supervised. This type of learning can be applied only if


the neuron response is binary (0 or 1) or bipolar (1 or –1). The weight
adjustment in this method is obtained as

Δw_kj(n) = η [d_k − φ(v_k(n))] x_j

φ(v_k(n)) = +1, if w^T x ≥ 0
          = −1, if w^T x < 0

w_kj(n+1) = w_kj(n) + Δw_kj(n)

where n = 1, 2, ... is the iteration number; x_j, j = 1, 2, ..., m, is the input; η is
the learning-rate parameter; v_k(n) is the net activity of neuron k;
φ(v_k(n)) = y_k(n) is the output of neuron k; d_k is the desired response;
e_k(n) = d_k − y_k(n) is the error between the output and the desired response of
neuron k; and Δw_kj(n) is the correction applied to the synaptic weight
between neuron k and the input node j = 1, 2, ..., m. There is no weight
correction when the actual response equals the desired response.
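A minimal sketch of this rule in Python (NumPy assumed), training a single bipolar neuron on the OR truth table from earlier, recoded in ±1 form; the learning rate eta and the number of epochs are arbitrary choices for illustration.

import numpy as np

def sgn(v):
    """Bipolar output: +1 if v >= 0, else -1."""
    return 1 if v >= 0 else -1

# OR data in bipolar form (the first input is a constant 1 carrying the bias weight)
X = np.array([[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1]], dtype=float)
d = np.array([-1, 1, 1, 1], dtype=float)

eta = 0.5
w = np.zeros(3)
for epoch in range(10):
    for x, dk in zip(X, d):
        y = sgn(w @ x)               # y_k(n) = phi(v_k(n))
        w += eta * (dk - y) * x      # delta w_kj = eta * (d_k - y_k) * x_j

print(w, [sgn(w @ x) for x in X])    # learned weights and outputs matching d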

2. Competitive learning: When an input pattern is presented, all the neurons in
the layer compete, and the winning neuron undergoes weight adjustment.
This is called the winner-takes-all strategy.

The weight correction is effected as



Δw_kj = η (x_j − w_kj), if neuron k wins the competition
      = 0,              if neuron k loses the competition
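A minimal winner-takes-all sketch (Python/NumPy assumed): the neuron whose weight vector is closest to the input wins, and only its weights move toward that input. The cluster count, data and learning rate are illustrative choices.

import numpy as np

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, size=(20, 2)),   # one cluster near (0, 0)
                  rng.normal(1.0, 0.1, size=(20, 2))])  # another near (1, 1)
W = rng.normal(0.5, 0.1, size=(2, 2))                   # two competing neurons
eta = 0.1

for _ in range(50):
    for x in data:
        k = np.argmin(np.linalg.norm(W - x, axis=1))    # winning neuron
        W[k] += eta * (x - W[k])                        # only the winner is updated

print(W)   # each row should settle near one of the two cluster centres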

3. Hebbian learning: Hebbian learning can be applied to neurons with
binary and continuous activation functions. It is the most natural of all
the types of learning.
Consider a single neuron k. The net activity and the output of neuron k
are obtained as

v_k(n) = w^T(n) x(n)

y_k(n) = φ(w^T(n) x(n))

and the Hebbian weight update is Δw_kj(n) = η y_k(n) x_j(n).
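A minimal sketch of the Hebb rule for a single neuron (Python/NumPy assumed); the weight grows in proportion to the correlation between input and output, so in practice some normalisation or decay is usually added, which is omitted here.

import numpy as np

rng = np.random.default_rng(2)
w = rng.normal(scale=0.1, size=3)   # small random initial weights
eta = 0.01

for _ in range(100):
    x = rng.normal(size=3)
    v = w @ x                        # net activity v_k(n) = w^T(n) x(n)
    y = np.tanh(v)                   # continuous activation y_k(n) = phi(v_k(n))
    w += eta * y * x                 # Hebbian update: delta w_kj = eta * y_k * x_j

print(w)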

4. Gradient descent learning: This is based on the minimisation of error E


defined in terms of weights and activation function of the network. It is
required that the activation function used by the network must be
differentiable.

Δw_ij = −η (∂E/∂w_ij)

where Δw_ij is the weight update of the link connecting the i-th and j-th neurons
of the two neighbouring layers, η is the learning-rate parameter and ∂E/∂w_ij is
the error gradient.
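A minimal sketch (Python/NumPy assumed) of gradient descent for a single sigmoid neuron with squared error E = 1/2 (d − y)^2; the data set (the OR table) and the learning rate are illustrative only.

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

# Toy data: learn OR with a differentiable (sigmoid) neuron; first column is the bias input
X = np.array([[1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]], dtype=float)
d = np.array([0, 1, 1, 1], dtype=float)

eta = 0.5
w = np.zeros(3)
for _ in range(2000):
    for x, dk in zip(X, d):
        y = sigmoid(w @ x)
        grad = -(dk - y) * y * (1 - y) * x   # dE/dw for E = 0.5 * (d - y)^2
        w -= eta * grad                      # move against the error gradient

print(np.round(sigmoid(X @ w)))              # approx [0, 1, 1, 1]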

5. Stochastic learning: In this method weights are adjusted in a probabilistic


fashion.



E.g., simulated annealing: the Boltzmann and Cauchy machines are neural
networks that employ this kind of learning mechanism.
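A rough sketch of the idea (Python/NumPy assumed), in the spirit of simulated annealing: a random weight change is always accepted if it lowers the error, and is otherwise accepted with a probability that shrinks as the temperature falls. This is only illustrative, not the exact Boltzmann-machine procedure; the error surface, cooling schedule and step size are made-up choices.

import numpy as np

rng = np.random.default_rng(3)

def error(w):
    """Toy error surface; any cost function, differentiable or not, would do."""
    return np.sum((w - np.array([1.0, -2.0])) ** 2)

w = rng.normal(size=2)
T = 1.0                                            # temperature
for step in range(5000):
    w_new = w + rng.normal(scale=0.1, size=2)      # random (stochastic) weight change
    dE = error(w_new) - error(w)
    if dE < 0 or rng.random() < np.exp(-dE / T):   # Boltzmann-style acceptance
        w = w_new
    T *= 0.999                                     # slowly cool the system

print(w)   # should end up near [1, -2]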
Elements of neural dynamics
The effect of a spike on the postsynaptic neuron can be recorded with an
intracellular electrode which measures the potential difference u(t) between the
interior of the cell and its surroundings. This potential difference is called the
membrane potential. Without any input, the neuron is at rest corresponding to a
constant membrane potential urest. After the arrival of a spike, the potential
changes and finally decays back to the resting potential. If the change is
positive, the synapse is said to be excitatory. If the change is negative, the
synapse is inhibitory.
At rest, the cell membrane already has a strongly negative polarization of
about –65 mV. An input at an excitatory synapse reduces the negative
polarization of the membrane and is therefore called depolarizing. An input that
increases the negative polarization of the membrane even further is called
hyperpolarizing.
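A minimal numerical sketch of this behaviour (Python assumed), using a simple leaky-integrator model of the membrane potential: a brief excitatory input pushes u(t) above the resting value and it then decays back to u_rest. The time constant and input size are illustrative assumptions, not values from the notes.

u_rest = -65.0          # resting membrane potential in mV
tau = 10.0              # membrane time constant in ms (assumed value)
dt = 0.1                # time step in ms

u = u_rest
trace = []
for step in range(2000):
    I = 5.0 if step == 100 else 0.0        # brief excitatory (depolarizing) input
    u += dt * (-(u - u_rest) / tau) + I    # leak back toward u_rest, plus input
    trace.append(u)

print(max(trace), trace[-1])   # peaks above -65 mV, then decays back toward -65 mV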
