CS7015 (Deep Learning) : Lecture 8
Mitesh M. Khapra
Acknowledgements
Chapter 7, Deep Learning book
Ali Ghodsi's Video Lectures on Regularization (Lecture 2.1 and Lecture 2.2)
Dropout: A Simple Way to Prevent Neural Networks from Overfitting
Module 8.1 : Bias and Variance
We will begin with a quick overview of bias, variance and the trade-off between
them.
Let us consider the problem of fitting a curve
through a given set of points
We sample 25 points from the training data and train a simple and a complex model.
We repeat the process 'k' times to train multiple models (each model sees a different sample of the training data).
The points were drawn from a sinusoidal function (the true f(x)).
[Figure: the fitted curves of the simple and complex models across the different samples]
We make a few observations from these plots.
Simple models trained on different samples of the data do not differ much from each other.
Complex models trained on different samples of the data, on the other hand, differ a lot from each other.
Let f(x) be the true model (sinusoidal in this case) and f̂(x) be our estimate of the model (simple or complex, in this case); then
Bias(f̂(x)) = E[f̂(x)] − f(x)
Variance(f̂(x)) = E[(f̂(x) − E[f̂(x)])²]
In summary (informally)
Simple model: high bias, low variance
Complex model: low bias, high variance
There is always a trade-off between the bias
and variance
Both bias and variance contribute to the mean
square error. Let us see how
Module 8.2 : Train error vs Test error
Consider a new point (x, y) which was not seen during training.
If we use the model f̂(x) to predict the value of y, then the mean square error is given by E[(y − f̂(x))²] (the average squared error in predicting y for many such unseen points).
We can show that
E[(y − f̂(x))²] = Bias² + Variance + σ² (irreducible error)
See proof here.
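To make the decomposition concrete, here is a minimal simulation sketch (my own illustration, not from the slides): it repeatedly samples 25 noisy points from a sinusoid, fits a simple (degree-1) and a complex (degree-9) polynomial, and estimates bias² and variance at an unseen point. The degrees, noise level, and sample sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.sin            # the true function f(x)
sigma = 0.3           # noise standard deviation (sigma^2 is the irreducible error)
x_test = 1.3          # an unseen point

def predictions(degree, k=200, n=25):
    """Fit k models of a given polynomial degree on fresh samples of n points
    and return their predictions at x_test."""
    preds = []
    for _ in range(k):
        x = rng.uniform(0, 2 * np.pi, n)
        y = f(x) + rng.normal(0, sigma, n)
        coeffs = np.polyfit(x, y, degree)         # least-squares polynomial fit
        preds.append(np.polyval(coeffs, x_test))
    return np.array(preds)

for degree in (1, 9):                             # simple vs. complex model
    p = predictions(degree)
    bias2 = (p.mean() - f(x_test)) ** 2           # Bias^2 at x_test
    variance = p.var()                            # Variance at x_test
    print(f"degree={degree}: bias^2={bias2:.4f}, variance={variance:.4f}")
# The simple model shows high bias and low variance; the complex one the opposite.
```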
[Figure: error vs. model complexity — high bias on the left, high variance on the right, with a sweet spot (the perfect tradeoff, the ideal model) in between]
The parameters of f̂(x) (all wᵢ's) are trained using a training set {(xᵢ, yᵢ)}_{i=1}^{n}.
However, at test time we are interested in evaluating the model on a validation (unseen) set which was not used for training.
As the model complexity increases, the training error becomes overly optimistic and gives us a wrong picture of how close f̂ is to f.
We will concretize this intuition mathematically now and eventually show how to account for the optimism in the training error.
Let D = {(xᵢ, yᵢ)}_{i=1}^{m+n}; then for any point (x, y) we have
yᵢ = f(xᵢ) + εᵢ
which means that yᵢ is related to xᵢ by some true function f, but there is also some noise ε in the relation.
Further, we use f̂ to approximate f and estimate the parameters using T ⊂ D such that
yᵢ = f̂(xᵢ)
We are interested in knowing
E[(f̂(xᵢ) − f(xᵢ))²]
E[(ŷᵢ − yᵢ)²] = E[(f̂(xᵢ) − f(xᵢ) − εᵢ)²]    (since yᵢ = f(xᵢ) + εᵢ)
We will take a small detour to understand how to empirically estimate an
Expectation and then return to our derivation
Suppose we have observed the goals scored (z) in k matches as
z₁ = 2, z₂ = 1, z₃ = 0, ... z_k = 2
Now we can empirically estimate E[z], i.e. the expected number of goals scored, as
E[z] = (1/k) Σ_{i=1}^{k} zᵢ
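As a trivial illustration (my own sketch, not from the slides, with made-up match scores), the empirical estimate is just the sample mean of the observations:

```python
goals = [2, 1, 0, 3, 2]                      # observed goals z_1 ... z_k (made-up values)
expected_goals = sum(goals) / len(goals)     # E[z] ≈ (1/k) Σ z_i
print(expected_goals)
```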
... returning to our derivation
E[(f̂(xᵢ) − f(xᵢ))²] = E[(ŷᵢ − yᵢ)²] − E[εᵢ²] + 2E[εᵢ(f̂(xᵢ) − f(xᵢ))]

E[(f̂(xᵢ) − f(xᵢ))²]   [true error]
  = (1/m) Σ_{i=n+1}^{n+m} (ŷᵢ − yᵢ)²   [empirical estimation of error]
  − (1/m) Σ_{i=n+1}^{n+m} εᵢ²   [small constant]
  + 2E[εᵢ(f̂(xᵢ) − f(xᵢ))]   [= covariance(εᵢ, f̂(xᵢ) − f(xᵢ))]

Now, on the training points ε is not independent of f̂(x) (ε ⊥̸ f̂(x)) because ε was used for estimating the parameters of f̂(x), so the covariance term does not vanish.
Hence, the empirical train error is smaller than the true error and does not give a true picture of the error.
Using Stein's Lemma (and some trickery) we can show that
(1/n) Σ_{i=1}^{n} εᵢ(f̂(xᵢ) − f(xᵢ)) = (σ²/n) Σ_{i=1}^{n} ∂f̂(xᵢ)/∂yᵢ
When will ∂f̂(xᵢ)/∂yᵢ be high? When a small change in the observation causes a large change in the estimation (f̂).
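A small sketch of this sensitivity (my own illustration, not from the slides): perturb a single training target yᵢ by a tiny δ, refit the model, and measure how much the fitted value f̂(xᵢ) changes. The polynomial degrees and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 25
x = np.sort(rng.uniform(0, 2 * np.pi, n))
y = np.sin(x) + rng.normal(0, 0.3, n)        # noisy observations of the true f

def sensitivity(degree, i=10, delta=1e-4):
    """Finite-difference estimate of d f_hat(x_i) / d y_i for a polynomial fit."""
    base = np.polyval(np.polyfit(x, y, degree), x[i])
    y_pert = y.copy()
    y_pert[i] += delta                       # nudge a single observation y_i
    pert = np.polyval(np.polyfit(x, y_pert, degree), x[i])
    return (pert - base) / delta

for degree in (1, 9):
    print(f"degree={degree}: d f_hat(x_i)/d y_i ~ {sensitivity(degree):.3f}")
# The complex model reacts far more to a change in a single observation,
# so the optimism term (summed over i) is larger for complex models.
```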
Hence while training, instead of minimizing the training error L_train(θ) we should minimize
L_train(θ) + Ω(θ)
where Ω(θ) would be high for complex models and small for simple models.
Ω(θ) acts as an approximation to (σ²/n) Σ_{i=1}^{n} ∂f̂(xᵢ)/∂yᵢ.
[Figure: error vs. model complexity, showing the high-bias and high-variance regimes, the sweet spot, and the term (σ²/n) Σ_{i=1}^{n} ∂f̂(xᵢ)/∂yᵢ]
Why do we care about this bias-variance tradeoff and model complexity?
Deep neural networks are highly complex models: many parameters, many non-linearities.
It is easy for them to overfit and drive the training error to 0.
Hence we need some form of regularization.
Different forms of regularization
l2 regularization
Dataset augmentation
Parameter Sharing and tying
Adding Noise to the inputs
Adding Noise to the outputs
Early stopping
Ensemble methods
Dropout
Module 8.4 : l2 regularization
For l2 regularization we have,
L̃(w) = L(w) + (α/2) ‖w‖²
∇L̃(w) = ∇L(w) + αw
Update rule: w_{t+1} = w_t − η∇L(w_t) − ηαw_t
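A minimal gradient-descent sketch of this update (my own illustration on a least-squares loss; the data and hyperparameters are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = np.array([3.0, 0.0, -2.0, 0.0, 1.0])
y = X @ w_true + rng.normal(0, 0.1, 100)

alpha, eta = 0.1, 0.01                    # regularization strength, learning rate
w = np.zeros(5)
for _ in range(2000):
    grad_L = X.T @ (X @ w - y) / len(y)   # gradient of the unregularized loss L(w)
    grad_L_tilde = grad_L + alpha * w     # ∇L̃(w) = ∇L(w) + αw
    w = w - eta * grad_L_tilde            # w ← w − η∇L(w) − ηαw  (weight decay)
print(w)                                  # weights are shrunk towards zero
```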
Assume w∗ is the optimal solution for L(w) [not L̃(w)], i.e. the solution in the absence of regularization (w∗ optimal → ∇L(w∗) = 0).
Using a quadratic (Taylor series) approximation of L(w) around w∗, ∇L(w) = H(w − w∗), where H is the Hessian of L at w∗.
Now,
∇L̃(w) = ∇L(w) + αw = H(w − w∗) + αw
Let w̃ be the optimal solution for L̃(w) [i.e. the regularized loss].
∵ ∇L̃(w̃) = 0
H(w̃ − w∗) + αw̃ = 0
∴ (H + αI)w̃ = Hw∗
∴ w̃ = (H + αI)⁻¹ Hw∗
Notice that if α → 0 then w̃ → w∗ [no regularization].
But we are interested in the case when α ≠ 0; let us analyse that case.
If H is symmetric positive semi-definite, then H = QΛQᵀ [Q is orthogonal, QQᵀ = QᵀQ = I] and
w̃ = (H + αI)⁻¹ Hw∗
   = (QΛQᵀ + αI)⁻¹ QΛQᵀ w∗
   = (QΛQᵀ + αQIQᵀ)⁻¹ QΛQᵀ w∗
   = [Q(Λ + αI)Qᵀ]⁻¹ QΛQᵀ w∗
   = (Qᵀ)⁻¹ (Λ + αI)⁻¹ Q⁻¹ QΛQᵀ w∗
   = Q(Λ + αI)⁻¹ ΛQᵀ w∗    (∵ (Qᵀ)⁻¹ = Q)
w̃ = QDQᵀ w∗, where D = (Λ + αI)⁻¹Λ
w̃ = Q(Λ + αI)⁻¹ ΛQᵀ w∗ = QDQᵀ w∗, where
(Λ + αI)⁻¹ = diag(1/(λ₁+α), 1/(λ₂+α), ..., 1/(λₙ+α))
D = (Λ + αI)⁻¹Λ = diag(λ₁/(λ₁+α), λ₂/(λ₂+α), ..., λₙ/(λₙ+α))
Each element i of Qᵀw∗ gets scaled by λᵢ/(λᵢ + α) before it is rotated back by Q.
If λᵢ ≫ α then λᵢ/(λᵢ + α) ≈ 1; if λᵢ ≪ α then λᵢ/(λᵢ + α) ≈ 0.
Thus only significant directions (larger eigenvalues) will be retained.
Effective parameters = Σ_{i=1}^{n} λᵢ/(λᵢ + α) < n
The weight vector (w∗) is getting rotated to (w̃).
All of its elements are shrinking, but some are shrinking more than others.
This ensures that only important features are given high weights.
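A small numpy check of this analysis (my own sketch, not from the slides): build a quadratic loss with Hessian H, compute w̃ = (H + αI)⁻¹Hw∗ directly, and confirm that in the eigenbasis of H each component of w∗ is scaled by λᵢ/(λᵢ + α). The matrix sizes and α are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
H = A.T @ A                          # a symmetric positive semi-definite Hessian
w_star = rng.normal(size=4)          # the unregularized optimum (assumed given)
alpha = 1.0

w_tilde = np.linalg.solve(H + alpha * np.eye(4), H @ w_star)   # (H + αI)^{-1} H w*

lam, Q = np.linalg.eigh(H)           # H = Q Λ Q^T
scale = lam / (lam + alpha)          # each direction scaled by λ_i / (λ_i + α)
w_tilde_eig = Q @ (scale * (Q.T @ w_star))

print(np.allclose(w_tilde, w_tilde_eig))          # True: identical solutions
print("effective parameters:", scale.sum(), "<", len(lam))
```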
Module 8.5 : Dataset augmentation
[Figure: a digit image with label = 2, augmented by rotating it by 20°, rotating it by 65°, and shifting it vertically — the label stays 2]
Typically, more data = better learning
Works well for image classification / object recognition tasks
Also shown to work well for speech
For some tasks it may not be clear how to generate such data
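A sketch of label-preserving augmentation for images (my own illustration, not from the slides; scipy.ndimage availability and the specific transforms are assumptions):

```python
import numpy as np
from scipy import ndimage

def augment(image, label, rng):
    """Return a few label-preserving variants of a 2-D image array."""
    variants = [
        ndimage.rotate(image, 20, reshape=False),    # rotate by 20 degrees
        ndimage.rotate(image, 65, reshape=False),    # rotate by 65 degrees
        np.roll(image, shift=3, axis=0),             # shift vertically
        image + rng.normal(0, 0.05, image.shape),    # add a little pixel noise
    ]
    return [(v, label) for v in variants]            # every copy keeps the label

rng = np.random.default_rng(0)
img = rng.random((28, 28))                           # stand-in for a digit image
augmented = augment(img, label=2, rng=rng)
print(len(augmented), "augmented copies, all with label", augmented[0][1])
```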
Module 8.6 : Parameter Sharing and tying
Parameter Sharing
Same filter applied at different positions of the image, i.e. the same weight matrix acts on different input neurons.
Used in CNNs.
Parameter Tying
Typically used in autoencoders (x → h(x) → x̂).
The encoder and decoder weights are tied.
Module 8.7 : Adding Noise to the inputs
We saw this in Autoencoders: the input x is corrupted to x̃ via a noise process P(x̃|x), and the network computes h(x̃) and the reconstruction x̂.
We can show that for a simple input-output neural network, adding Gaussian noise to the input is equivalent to weight decay (L2 regularisation).
Can be viewed as data augmentation.
Let ε ~ N(0, σ²) and x̃ᵢ = xᵢ + εᵢ (each input xᵢ is corrupted with Gaussian noise εᵢ).
For the clean inputs, ŷ = Σ_{i=1}^{n} wᵢxᵢ.
For the noisy inputs,
ỹ = Σ_{i=1}^{n} wᵢx̃ᵢ = Σ_{i=1}^{n} wᵢxᵢ + Σ_{i=1}^{n} wᵢεᵢ = ŷ + Σ_{i=1}^{n} wᵢεᵢ

We are interested in E[(ỹ − y)²]:

E[(ỹ − y)²] = E[(ŷ + Σ_{i=1}^{n} wᵢεᵢ − y)²]
            = E[((ŷ − y) + Σ_{i=1}^{n} wᵢεᵢ)²]
            = E[(ŷ − y)²] + E[2(ŷ − y) Σ_{i=1}^{n} wᵢεᵢ] + E[(Σ_{i=1}^{n} wᵢεᵢ)²]
            = E[(ŷ − y)²] + 0 + E[Σ_{i=1}^{n} wᵢ²εᵢ²]
              (∵ εᵢ is independent of εⱼ and εᵢ is independent of (ŷ − y))
            = E[(ŷ − y)²] + σ² Σ_{i=1}^{n} wᵢ²    (same as the L2 norm penalty)
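A quick empirical check of this result (my own sketch, not from the slides): for a fixed linear model, average the squared error over many noisy copies of the input and compare it with (ŷ − y)² + σ²Σwᵢ². The dimensions, target value, and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
w = rng.normal(size=n)                   # fixed weights of the linear model
x = rng.normal(size=n)                   # a clean input
y = 1.5                                  # its target (made-up value)
sigma = 0.2                              # std of the input noise

y_hat = w @ x                            # prediction on the clean input
eps = rng.normal(0, sigma, size=(200_000, n))
y_tilde = (x + eps) @ w                  # predictions on many noisy copies of x

lhs = np.mean((y_tilde - y) ** 2)                    # estimate of E[(ỹ − y)^2]
rhs = (y_hat - y) ** 2 + sigma**2 * np.sum(w**2)     # (ŷ − y)^2 + σ^2 Σ w_i^2
print(lhs, rhs)                                      # approximately equal
```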
Module 8.8 : Adding Noise to the outputs
Hard targets: 0 0 1 0 0 0 0 0 0 0
minimize: −Σ_{i=0}^{9} pᵢ log qᵢ
true distribution: p = {0, 0, 1, 0, 0, 0, 0, 0, 0, 0}
estimated distribution: q
Intuition: do not trust the true labels, they may be noisy; instead, use soft targets.
Soft targets: ε/9 ε/9 1−ε ε/9 ε/9 ε/9 ε/9 ε/9 ε/9 ε/9 (the true class gets probability 1 − ε and the remaining ε is spread equally over the other 9 classes)
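A small helper (my own sketch, not from the slides) that converts a hard label into the soft target above:

```python
import numpy as np

def soft_targets(true_class, num_classes=10, eps=0.1):
    """Put 1-eps on the true class and spread eps over the other classes."""
    p = np.full(num_classes, eps / (num_classes - 1))   # ε/9 for the wrong classes
    p[true_class] = 1.0 - eps                           # 1−ε for the true class
    return p

print(soft_targets(true_class=2))
```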
Module 8.9 : Early stopping
Track the validation error.
Have a patience parameter p.
If you are at step k and there was no improvement in validation error in the previous p steps, then stop training and return the model stored at step k − p.
Basically, stop the training early before it drives the training error to 0 and blows up the validation error.
[Figure: training error and validation error vs. steps; the validation error starts rising after step k − p — return the model stored at step k − p and stop at step k]
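A sketch of the patience logic (my own illustration, not from the slides; the training and validation routines are passed in as stand-ins for whatever is actually used):

```python
import copy

def train_with_early_stopping(model, train_step, val_error, patience=5, max_steps=1000):
    """Stop once the validation error has not improved for `patience` steps and
    return the best model seen so far (roughly the model stored at step k - p)."""
    best_err, best_model, bad_steps = float("inf"), copy.deepcopy(model), 0
    for _ in range(max_steps):
        model = train_step(model)
        err = val_error(model)
        if err < best_err:
            best_err, best_model, bad_steps = err, copy.deepcopy(model), 0
        else:
            bad_steps += 1
            if bad_steps >= patience:
                break                    # no improvement in the last `patience` steps
    return best_model

# Toy demo: "training" pushes a scalar parameter past its sweet spot at 3.0,
# so the validation error eventually rises and early stopping kicks in.
best = train_with_early_stopping(
    model=0.0,
    train_step=lambda w: w + 0.1,
    val_error=lambda w: (w - 3.0) ** 2,
)
print(best)                              # ≈ 3.0
```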
Early stopping is very effective and the most widely used form of regularization.
It can be used even with other regularizers (such as l2).
How does it act as a regularizer?
We will first see an intuitive explanation and then a mathematical analysis.
Recall that the update rule in SGD is
w_{t+1} = w_t − η∇w_t = w₀ − η Σ_{i=1}^{t} ∇wᵢ
If τ bounds the magnitude of the gradients ∇wᵢ, then
|w_{t+1} − w₀| ≤ ηt|τ|
So the number of steps t (together with η) controls how far w_t can move away from its initialization w₀.
We will now see a mathematical analysis of this
Recall that the Taylor series approximation for L(w) around w∗ is
L(w) = L(w∗) + (w − w∗)ᵀ∇L(w∗) + ½(w − w∗)ᵀH(w − w∗)
     = L(w∗) + ½(w − w∗)ᵀH(w − w∗)    [w∗ is optimal so ∇L(w∗) is 0]
∇L(w) = H(w − w∗)
Substituting this into the SGD update gives
w_t = (I − ηH)w_{t−1} + ηHw∗
Compare this with the expression we had for the optimum w̃ with L2 regularization:
w̃ = Q[I − (Λ + αI)⁻¹α]Qᵀ w∗
We observe that w_t = w̃ if we choose η, t and α such that
(I − ηΛ)ᵗ = (Λ + αI)⁻¹α
Things to remember
Early stopping only allows t updates to the parameters.
If a parameter w corresponds to a dimension which is important for the loss L(θ), then ∂L(θ)/∂w will be large.
However, if a parameter is not important (∂L(θ)/∂w is small), then its updates will be small and the parameter will not be able to grow large in t steps.
Early stopping will thus effectively shrink the parameters corresponding to less important directions (same as weight decay).
Module 8.10 : Ensemble methods
Combine the output of different models to reduce the generalization error.
The models can correspond to different classifiers (e.g. combining y_lr, y_svm and y_nb into y_final).
It could be different instances of the same classifier trained with:
different hyperparameters
different features
different samples of the training data
Bagging: form an ensemble using different instances of the same classifier (e.g. combining y_lr1, y_lr2 and y_lr3 into y_final).
From a given dataset, construct multiple training sets by sampling with replacement (T₁, T₂, ..., T_k).
Train the i-th instance of the classifier using training set Tᵢ.
When would bagging work?
Consider a set of k LR models.
Suppose that each model makes an error εᵢ on a test example.
Let εᵢ be drawn from a zero-mean multivariate normal distribution with
Variance = E[εᵢ²] = V
Covariance = E[εᵢεⱼ] = C
The error made by the average prediction of all the models is (1/k) Σᵢ εᵢ.
The expected squared error is:
mse = E[((1/k) Σᵢ εᵢ)²]
    = (1/k²) E[Σᵢ Σ_{i=j} εᵢεⱼ + Σᵢ Σ_{i≠j} εᵢεⱼ]
    = (1/k²) E[Σᵢ εᵢ² + Σᵢ Σ_{i≠j} εᵢεⱼ]
    = (1/k²) (Σᵢ E[εᵢ²] + Σᵢ Σ_{i≠j} E[εᵢεⱼ])
    = (1/k²) (kV + k(k−1)C)
    = (1/k)V + ((k−1)/k)C
mse = (1/k)V + ((k−1)/k)C
When would bagging work?
If the errors of the models are perfectly correlated, then V = C and mse = V [bagging does not help: the mse of the ensemble is as bad as that of the individual models].
If the errors of the models are independent or uncorrelated, then C = 0 and the mse of the ensemble reduces to (1/k)V.
On average, the ensemble will perform at least as well as its individual members.
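A quick simulation of the (1/k)V + ((k−1)/k)C formula (my own sketch, not from the slides): draw correlated errors for k models and compare the ensemble's mean squared error in the perfectly correlated, uncorrelated, and intermediate cases. The values of k, V, and C are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
k, V, trials = 10, 1.0, 200_000

def ensemble_mse(C):
    """MSE of the averaged prediction when each model's error has variance V
    and every pair of models has error covariance C."""
    cov = np.full((k, k), C) + np.eye(k) * (V - C)        # covariance matrix of the errors
    eps = rng.multivariate_normal(np.zeros(k), cov, size=trials)
    return np.mean(eps.mean(axis=1) ** 2)                 # E[((1/k) Σ ε_i)^2]

print(ensemble_mse(C=V))      # perfectly correlated errors: ≈ V (no benefit)
print(ensemble_mse(C=0.0))    # uncorrelated errors:         ≈ V / k
print(ensemble_mse(C=0.5), V / k + (k - 1) / k * 0.5)       # matches (1/k)V + ((k−1)/k)C
```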
Module 8.11 : Dropout
Typically model averaging (bagging ensemble) always helps.
Training several large neural networks for making an ensemble is prohibitively expensive.
Option 1: Train several neural networks having different architectures (obviously expensive).
Option 2: Train multiple instances of the same network using different training samples (again expensive).
Even if we manage to train with option 1 or option 2, combining several models at test time is infeasible in real-time applications.
Dropout is a technique which addresses both these issues.
Effectively it allows training several neural networks without any significant computational overhead.
It also gives an efficient approximate way of combining exponentially many different neural networks.
Dropout refers to dropping out units.
Temporarily remove a node and all its incoming/outgoing connections, resulting in a thinned network.
Each node is retained with a fixed probability (typically p = 0.5 for hidden nodes and p = 0.8 for visible nodes).
Suppose a neural network has n nodes.
Using the dropout idea, each node can be retained or dropped.
For example, in the above case we drop 5 nodes to get a thinned network.
Given a total of n nodes, what is the total number of thinned networks that can be formed? 2ⁿ
Of course, this is prohibitively large and we cannot possibly train so many networks.
Trick: (1) Share the weights across all the networks. (2) Sample a different network for each training instance.
Let us see how.
We initialize all the parameters (weights) of the network and start training
For the first training instance (or mini-batch), we apply dropout resulting in
the thinned network
We compute the loss and backpropagate
Which parameters will we update? Only those which are active
For the second training instance (or mini-batch), we again apply dropout, resulting in a different thinned network.
We again compute the loss and backpropagate to the active weights.
If a weight was active for both training instances, then it would have received two updates by now.
If a weight was active for only one of the training instances, then it would have received only one update by now.
Each thinned network gets trained rarely (or even never), but the parameter sharing ensures that no model has untrained or poorly trained parameters.
[Figure: at training time a unit has outgoing weights w₁, w₂, w₃, w₄; at test time the full network is used and each weight is scaled by the retention probability p, giving pw₁, pw₂, pw₃, pw₄]
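A minimal sketch of dropout for one layer (my own illustration, not from the slides): at training time sample a binary mask per example; at test time keep all units and scale the activations (equivalently, the weights) by p.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout_layer(h, p=0.5, train=True):
    """h: hidden-layer activations. p: probability of retaining a unit."""
    if train:
        mask = rng.random(h.shape) < p       # keep each unit with probability p
        return h * mask                      # dropped units contribute 0
    return h * p                             # test time: keep all units, scale by p

h = rng.normal(size=(4, 8))                  # a mini-batch of hidden activations
print(dropout_layer(h, train=True))          # a different thinned network each call
print(dropout_layer(h, train=False))         # full network with scaled activations
```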
Dropout essentially applies a masking noise to the hidden units.
It prevents hidden units from co-adapting.
Essentially, a hidden unit cannot rely too much on other units as they may get dropped out any time.
Each hidden unit has to learn to be more robust to these random dropouts.
Here is an example of how dropout helps in ensuring redundancy and robustness.
Suppose hᵢ learns to detect a face by firing on detecting a nose.
Dropping hᵢ then corresponds to erasing the information that a nose exists.
The model should then learn another hᵢ which redundantly encodes the presence of a nose.
Or the model should learn to detect the face using other features.
Recap
l2 regularization
Dataset augmentation
Parameter Sharing and tying
Adding Noise to the inputs
Adding Noise to the outputs
Early stopping
Ensemble methods
Dropout
Appendix
To prove: the two equations below are equivalent
w_t = (I − ηH)w_{t−1} + ηHw∗
w_t = Q[I − (I − ηΛ)ᵗ]Qᵀ w∗
Proof by induction.
Base case: t = 1 and w₀ = 0.
w₁ according to the first equation: w₁ = (I − ηH)w₀ + ηHw∗ = ηHw∗ = Q(ηΛ)Qᵀw∗ = Q[I − (I − ηΛ)]Qᵀw∗, which matches the second equation for t = 1.