Slide 1
Backpropagation
Slide 2
Linear separability constraint
Slide 3
[Figure: a truth table with columns Input 1, Input 2, and Output, beside a single-layer network in which input units 1 and 2 connect to output unit 3 through weights w1 and w2.]
Slide 4
What if we add an extra layer between input and output?
Slide 5
[Figure: input units 1 and 2 connect to hidden units 3 and 4 through weights w1–w4; units 3 and 4 connect to output unit 5 through weights w5 and w6.]
With linear hidden units, this is the same as a linear network without any hidden layer!
Slide 6
What if we use thresholded units?
Slide 7
[Figure: the same 2–2–1 network, now with thresholded units.]
If net_j > thresh, a_j = 1; else a_j = 0.
Slide 8
[Figure: a hand-set solution with thresholded units. Hidden unit 3 receives weight 10 from each input; hidden unit 4 receives weight 5 from each input; output unit 5 receives weight 10 from unit 3 and −10 from unit 4. Every unit uses the rule: if net_j > 9.9, a_j = 1; else a_j = 0.]
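As a check, here is a minimal Python sketch (the function names are mine, but the weights and the 9.9 threshold come from the slide) showing that this hand-set network computes XOR:

```python
def step(net, thresh=9.9):
    """Thresholded unit: 1 if net input exceeds the threshold, else 0."""
    return 1 if net > thresh else 0

def xor_net(x1, x2):
    a3 = step(10 * x1 + 10 * x2)    # unit 3: fires if either input is on (OR)
    a4 = step(5 * x1 + 5 * x2)      # unit 4: fires only if both inputs are on (AND)
    return step(10 * a3 - 10 * a4)  # unit 5: OR and not AND = XOR

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", xor_net(x1, x2))  # prints the XOR truth table
```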
Slide 9
So with thresholded units and a hidden layer, solutions exist…
…and solutions can be viewed as “re-representing” the inputs, so as to make the mapping to the output unit learnable. BUT, how can we learn the correct weights instead of just setting them by hand?
Slide 10
Simple delta rule: Δw_ij = ε (t_j − a_j) a_i. But what if the units are thresholded? The step function has no useful derivative… What function should we use for a_j?
Slide 11
[Figure: the logistic activation function. X-axis: net input (−10 to 10); y-axis: activation (0.00 to 1.00), with the change in activation also plotted. Activation rises smoothly from 0 to 1, and the change in activation peaks where the net input is 0.]
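A small sketch of the logistic function pictured above, assuming the standard form a_j = 1 / (1 + e^(−net_j)); its slope can be written in terms of the activation itself, which is what makes it convenient for learning:

```python
import math

def logistic(net):
    """Logistic activation: smooth, differentiable squashing of net input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-net))

def logistic_slope(a):
    """Slope of the logistic, expressed via the activation itself: a(1 - a)."""
    return a * (1.0 - a)

for net in (-10, -5, 0, 5, 10):
    a = logistic(net)
    print(f"net = {net:+3d}   activation = {a:.3f}   slope = {logistic_slope(a):.3f}")
```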
Slide 12
Simple delta rule (with the logistic activation): Δw_ij = ε δ_j a_i, where δ_j = (t_j − a_j) a_j (1 − a_j).
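A sketch of a single update under this rule for one logistic output unit; the learning rate, inputs, weights, and target below are illustrative values, not from the slides:

```python
import math

eps = 0.5                    # learning rate epsilon (assumed value)
inputs = [1.0, 0.0]          # activations a_i feeding unit j
weights = [0.2, -0.3]        # incoming weights w_ij
target = 1.0                 # teacher value t_j

net = sum(w * a for w, a in zip(weights, inputs))
a_j = 1.0 / (1.0 + math.exp(-net))           # logistic activation
delta = (target - a_j) * a_j * (1.0 - a_j)   # (t_j - a_j) scaled by the slope
weights = [w + eps * delta * a for w, a in zip(weights, inputs)]
print(weights)
```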
Slide 13
[Figure: the 2–2–1 network again: inputs 1 and 2, hidden units 3 and 4 (weights w1–w4), output unit 5 (weights w5 and w6).]
Slide 14
[Figure: a three-layer network; targets are supplied to the output units.]
Output: for output units, the delta is computed directly from the error. Each delta is stored at its unit and used directly to adjust every incoming weight.
Hidden: hidden units have no targets; their "error" signal is instead the sum of the output units' deltas, weighted by the connecting weights. From this, deltas are computed for the hidden units, which are again stored at each unit and used to change its incoming weights.
Input: in this way deltas, and hence the error signal at the output, can propagate backward through the network, through many layers, until they reach the input.
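Putting the pieces together, here is a minimal backpropagation sketch for a 2–2–1 logistic network trained on XOR. It adds bias weights, which the slide diagrams do not show explicitly, and the learning rate, seed, and epoch count are illustrative; a different seed or more epochs may be needed for convergence:

```python
import math, random

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))

random.seed(0)
# Hidden weights: 2 units x (2 inputs + bias); output weights: 2 hidden + bias.
W_hid = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
W_out = [random.uniform(-1, 1) for _ in range(3)]
eps = 0.5
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 0.0)]   # XOR

def forward(x):
    xb = x + [1.0]                                    # inputs plus bias
    h = [sigmoid(sum(w * v for w, v in zip(row, xb))) for row in W_hid]
    hb = h + [1.0]                                    # hidden plus bias
    y = sigmoid(sum(w * v for w, v in zip(W_out, hb)))
    return xb, h, hb, y

for epoch in range(10000):
    for x, t in data:
        xb, h, hb, y = forward(x)
        d_out = (t - y) * y * (1 - y)   # output delta: computed directly from the error
        # hidden "error": the output delta sent back along the connecting weight
        d_hid = [d_out * W_out[j] * h[j] * (1 - h[j]) for j in range(2)]
        # each unit's stored delta adjusts its incoming weights
        W_out = [W_out[k] + eps * d_out * hb[k] for k in range(3)]
        for j in range(2):
            W_hid[j] = [W_hid[j][i] + eps * d_hid[j] * xb[i] for i in range(3)]

for x, t in data:
    print(x, t, round(forward(x)[3], 2))
```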
Slide 15
Alternative error functions.
Slide 16
Sum-squared error: E = ½ Σ_j (t_j − a_j)²
Cross-entropy error: E = −Σ_j [ t_j log a_j + (1 − t_j) log(1 − a_j) ]
[Figure: the same 2–2–1 network, units 1–5 with weights w1–w6.]
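Both error functions in straightforward Python, assuming their standard forms (which the labels above pin down):

```python
import math

def sum_squared_error(targets, outputs):
    """E = 1/2 * sum_j (t_j - a_j)^2"""
    return 0.5 * sum((t - a) ** 2 for t, a in zip(targets, outputs))

def cross_entropy_error(targets, outputs):
    """E = -sum_j [t_j log a_j + (1 - t_j) log(1 - a_j)]"""
    return -sum(t * math.log(a) + (1 - t) * math.log(1 - a)
                for t, a in zip(targets, outputs))

t, a = [1.0], [0.9]
print(sum_squared_error(t, a))    # 0.005
print(cross_entropy_error(t, a))  # ~0.105
```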
Slide 17
[Figure: the same 2–2–1 network: inputs 1 and 2, hidden units 3 and 4 (weights w1–w4), output unit 5 (weights w5 and w6).]
Slide 18
[Figure: the truth table extended with a New input column, beside the single-layer network from Slide 3: input units 1 and 2 connect to output unit 3 through weights w1 and w2.]