Super VIP Cheatsheet: Machine Learning

Afshine Amidi and Shervine Amidi

September 15, 2018

Contents

1 Supervised Learning
  1.1 Introduction to Supervised Learning
  1.2 Notations and general concepts
  1.3 Linear models
    1.3.1 Linear regression
    1.3.2 Classification and logistic regression
    1.3.3 Generalized Linear Models
  1.4 Support Vector Machines
  1.5 Generative Learning
    1.5.1 Gaussian Discriminant Analysis
    1.5.2 Naive Bayes
  1.6 Tree-based and ensemble methods
  1.7 Other non-parametric approaches
  1.8 Learning Theory
2 Unsupervised Learning
  2.1 Introduction to Unsupervised Learning
  2.2 Clustering
    2.2.1 Expectation-Maximization
    2.2.2 k-means clustering
    2.2.3 Hierarchical clustering
    2.2.4 Clustering assessment metrics
  2.3 Dimension reduction
    2.3.1 Principal component analysis
    2.3.2 Independent component analysis
3 Deep Learning
  3.1 Neural Networks
  3.2 Convolutional Neural Networks
  3.3 Recurrent Neural Networks
  3.4 Reinforcement Learning and Control
4 Machine Learning Tips and Tricks
  4.1 Metrics
    4.1.1 Classification
    4.1.2 Regression
  4.2 Model selection
  4.3 Diagnostics
5 Refreshers
  5.1 Probabilities and Statistics
    5.1.1 Introduction to Probability and Combinatorics
    5.1.2 Conditional Probability
    5.1.3 Random Variables
    5.1.4 Jointly Distributed Random Variables
    5.1.5 Parameter estimation
  5.2 Linear Algebra and Calculus
    5.2.1 General notations
    5.2.2 Matrix operations
    5.2.3 Matrix properties
    5.2.4 Matrix calculus
1 Supervised Learning

1.1 Introduction to Supervised Learning

Given a set of data points {x^(1), ..., x^(m)} associated to a set of outcomes {y^(1), ..., y^(m)}, we want to build a classifier that learns how to predict y from x.

❒ Type of prediction – The different types of predictive models are summed up in the table below:

            Regression           Classifier
Outcome     Continuous           Class
Examples    Linear regression    Logistic regression, SVM, Naive Bayes

❒ Type of model – The different models are summed up in the table below:

            Discriminative model          Generative model
Goal        Directly estimate P(y|x)      Estimate P(x|y), then deduce P(y|x)
Examples    Regressions, SVMs             GDA, Naive Bayes

1.2 Notations and general concepts

❒ Hypothesis – The hypothesis is noted h_θ and is the model that we choose. For a given input data x^(i), the model prediction output is h_θ(x^(i)).

❒ Loss function – A loss function is a function L : (z,y) ∈ ℝ × Y ↦ L(z,y) ∈ ℝ that takes as inputs the predicted value z corresponding to the real data value y and outputs how different they are. The common loss functions are summed up in the table below:

Least squared:   (1/2)(y − z)²
Logistic:        log(1 + exp(−yz))
Hinge:           max(0, 1 − yz)
Cross-entropy:   −[y log(z) + (1 − y) log(1 − z)]

❒ Cost function – The cost function J is commonly used to assess the performance of a model, and is defined with the loss function L as follows:

J(θ) = ∑_{i=1}^{m} L(h_θ(x^(i)), y^(i))

❒ Gradient descent – By noting α ∈ ℝ the learning rate, the update rule for gradient descent is expressed with the learning rate and the cost function J as follows:

θ ← θ − α ∇J(θ)

Remark: stochastic gradient descent (SGD) updates the parameter based on each training example, while batch gradient descent updates it based on a batch of training examples.
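To make the update rule concrete, here is a minimal batch gradient descent sketch for a least-squares cost, assuming NumPy is available; the toy data, learning rate and iteration count are illustrative choices and not part of the original text.

```python
import numpy as np

# Illustrative sketch: batch gradient descent on the least-squares cost
# J(theta) = sum_i 1/2 * (h_theta(x_i) - y_i)^2, with h_theta(x) = theta . x.
def gradient_descent(X, y, alpha=0.1, n_iters=1000):
    theta = np.zeros(X.shape[1])
    for _ in range(n_iters):
        grad = X.T @ (X @ theta - y) / len(y)   # average gradient of J(theta)
        theta -= alpha * grad                   # theta <- theta - alpha * grad J(theta)
    return theta

# Toy usage: approximately recover the weights of y = 2*x0 - 3*x1.
X = np.random.randn(100, 2)
y = X @ np.array([2.0, -3.0])
print(gradient_descent(X, y))
```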
❒ Likelihood – The likelihood of a model L(θ) given parameters θ is used to find the optimal parameters θ by maximizing the likelihood. In practice, we use the log-likelihood ℓ(θ) = log(L(θ)), which is easier to optimize. We have:

θ_opt = argmax_θ L(θ)

❒ Newton's algorithm – Newton's algorithm is a numerical method that finds θ such that ℓ'(θ) = 0. Its update rule is as follows:

θ ← θ − ℓ'(θ) / ℓ''(θ)

Remark: the multidimensional generalization, also known as the Newton-Raphson method, has the following update rule:

θ ← θ − (∇²_θ ℓ(θ))^(−1) ∇_θ ℓ(θ)
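As an illustration of the one-dimensional update rule above, here is a minimal Newton's method sketch; the target function ℓ and its derivatives are made up for the example.

```python
# Illustrative sketch of the 1-D Newton update theta <- theta - l'(theta)/l''(theta),
# applied here to l(theta) = -(theta - 3)^2, whose maximum is at theta = 3.
def newton(d_l, dd_l, theta=0.0, n_iters=10):
    for _ in range(n_iters):
        theta = theta - d_l(theta) / dd_l(theta)
    return theta

d_l = lambda t: -2 * (t - 3)   # l'(theta)
dd_l = lambda t: -2.0          # l''(theta)
print(newton(d_l, dd_l))       # converges to 3.0
```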
1.3 Linear models

❒ LMS algorithm – By noting α the learning rate, the update rule of the Least Mean Squares (LMS) algorithm for a training set of m data points, also known as the Widrow-Hoff learning rule, is as follows:

∀j,   θ_j ← θ_j + α ∑_{i=1}^{m} [ y^(i) − h_θ(x^(i)) ] x_j^(i)

Remark: the update rule is a particular case of gradient ascent.
❒ LWR – Locally Weighted Regression, also known as LWR, is a variant of linear regression that weights each training example in its cost function by w^(i)(x), which is defined with parameter τ ∈ ℝ as:

w^(i)(x) = exp( −(x^(i) − x)² / (2τ²) )

Remark: unlike linear regression, there is no closed form solution in the case of logistic regression.

❒ Softmax regression – A softmax regression, also called a multiclass logistic regression, is used to generalize logistic regression when there are more than 2 outcome classes. By convention, we set θ_K = 0, which makes the Bernoulli parameter φ_i of each class i equal to:

φ_i = exp(θ_i^T x) / ∑_{j=1}^{K} exp(θ_j^T x)
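Below is a minimal sketch of the class probabilities defined above, assuming NumPy; the parameter matrix and input are toy values, and subtracting the maximum score is a standard numerical-stability trick not mentioned in the text.

```python
import numpy as np

# Illustrative sketch of phi_i = exp(theta_i^T x) / sum_j exp(theta_j^T x).
# Theta stacks one parameter vector per class.
def softmax_probabilities(Theta, x):
    scores = Theta @ x                 # theta_i^T x for each class i
    scores -= scores.max()             # stability trick, does not change the ratios
    exp_scores = np.exp(scores)
    return exp_scores / exp_scores.sum()

Theta = np.array([[1.0, -1.0], [0.5, 0.5], [0.0, 0.0]])  # K = 3 classes, theta_K = 0 by convention
x = np.array([2.0, 1.0])
print(softmax_probabilities(Theta, x))                    # probabilities summing to 1
```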
❒ Assumptions of GLMs – Generalized Linear Models (GLM) aim at predicting a random variable y as a function of x ∈ ℝ^(n+1) and rely on the following 3 assumptions:

(1) y | x; θ ∼ ExpFamily(η)      (2) h_θ(x) = E[y | x; θ]      (3) η = θ^T x

The most common exponential family distributions are summed up in the table below:

Distribution    η                T(y)    a(η)                   b(y)
Bernoulli       log(φ/(1−φ))     y       log(1 + exp(η))        1
Gaussian        µ                y       η²/2                   (1/√(2π)) exp(−y²/2)
Poisson         log(λ)           y       e^η                    1/y!
Geometric       log(1−φ)         y       log(e^η/(1−e^η))       1
1.4 Support Vector Machines

❒ Optimal margin classifier – The optimal margin classifier is the solution of the following optimization problem:

min (1/2)||w||²   such that   y^(i)(w^T x^(i) − b) ≥ 1

❒ Kernel – Given a feature mapping φ, we define the kernel K as:

K(x,z) = φ(x)^T φ(z)

In practice, the kernel K defined by K(x,z) = exp( −||x − z||² / (2σ²) ) is called the Gaussian kernel and is commonly used.

Remark: we say that we use the "kernel trick" to compute the cost function using the kernel because we actually don't need to know the explicit mapping φ, which is often very complicated. Instead, only the values K(x,z) are needed.
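As a small illustration of the kernel trick, here is a sketch that evaluates the Gaussian kernel and builds a kernel matrix, assuming NumPy; the data are random toy points.

```python
import numpy as np

# Illustrative sketch of the Gaussian kernel K(x,z) = exp(-||x - z||^2 / (2 sigma^2))
# and of the kernel matrix used by the kernel trick (only the values K(x,z) are needed).
def gaussian_kernel(x, z, sigma=1.0):
    return np.exp(-np.sum((x - z) ** 2) / (2 * sigma ** 2))

def kernel_matrix(X, sigma=1.0):
    m = X.shape[0]
    K = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            K[i, j] = gaussian_kernel(X[i], X[j], sigma)
    return K

X = np.random.randn(5, 3)
print(kernel_matrix(X).shape)   # (5, 5), symmetric, with ones on the diagonal
```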
1.5 Generative Learning

A generative model first tries to learn how the data is generated by estimating P(x|y), which we can then use to estimate P(y|x) by using Bayes' rule.

1.5.1 Gaussian Discriminant Analysis

❒ Setting – Gaussian Discriminant Analysis assumes that y, x|y = 0 and x|y = 1 are such that:

y ∼ Bernoulli(φ)
x|y = 0 ∼ N(µ_0, Σ)   and   x|y = 1 ∼ N(µ_1, Σ)

❒ Estimation – The following table sums up the estimates that we find when maximizing the likelihood:

φ̂ = (1/m) ∑_{i=1}^{m} 1{y^(i) = 1}
µ̂_j = ( ∑_{i=1}^{m} 1{y^(i) = j} x^(i) ) / ( ∑_{i=1}^{m} 1{y^(i) = j} )   for j = 0,1
Σ̂ = (1/m) ∑_{i=1}^{m} (x^(i) − µ_{y^(i)})(x^(i) − µ_{y^(i)})^T

1.5.2 Naive Bayes

❒ Assumption – The Naive Bayes model supposes that the features of each data point are all independent:

P(x|y) = P(x_1, x_2, ... | y) = P(x_1|y) P(x_2|y) ... = ∏_{i=1}^{n} P(x_i|y)

❒ Solutions – Maximizing the log-likelihood gives the following solutions, with k ∈ {0,1} and l ∈ [[1,L]]:

P(y = k) = (1/m) × #{j | y^(j) = k}   and   P(x_i = l | y = k) = #{j | y^(j) = k and x_i^(j) = l} / #{j | y^(j) = k}

Remark: Naive Bayes is widely used for text classification and spam detection.

1.6 Tree-based and ensemble methods

❒ Random forest – It is a tree-based technique that uses a high number of decision trees built out of randomly selected sets of features. Contrary to a simple decision tree, it is highly uninterpretable, but its generally good performance makes it a popular algorithm.

Remark: random forests are a type of ensemble method.

❒ Boosting – The idea of boosting methods is to combine several weak learners to form a stronger one. The main ones are summed up in the table below:

Adaptive boosting: high weights are put on errors to improve at the next boosting step; known as Adaboost.
Gradient boosting: weak learners are trained on the remaining errors.

1.7 Other non-parametric approaches

❒ k-nearest neighbors – The k-nearest neighbors algorithm, commonly known as k-NN, is a non-parametric approach where the response of a data point is determined by the nature of its k neighbors from the training set. It can be used in both classification and regression settings.

Remark: the higher the parameter k, the higher the bias; the lower the parameter k, the higher the variance.
1.8 Learning Theory

❒ Upper bound theorem – Let H be a finite hypothesis class such that |H| = k, and let δ and the sample size m be fixed. Then, with probability of at least 1 − δ, we have:

ε(ĥ) ≤ ( min_{h∈H} ε(h) ) + 2 √( (1/(2m)) log(2k/δ) )

❒ Hoeffding inequality – Let Z_1, ..., Z_m be m iid variables drawn from a Bernoulli distribution of parameter φ. Let φ̂ be their sample mean and γ > 0 fixed. We have:

P(|φ − φ̂| > γ) ≤ 2 exp(−2γ²m)

❒ Training error – For a given classifier h, the training error ε̂(h), also known as the empirical risk or empirical error, is defined as follows:

ε̂(h) = (1/m) ∑_{i=1}^{m} 1{h(x^(i)) ≠ y^(i)}

❒ Shattering – Given a set S = {x^(1), ..., x^(d)} and a set of classifiers H, we say that H shatters S if for any set of labels {y^(1), ..., y^(d)}, we have:

∃h ∈ H,   ∀i ∈ [[1,d]],   h(x^(i)) = y^(i)
2 Unsupervised Learning

2.1 Introduction to Unsupervised Learning

❒ Motivation – The goal of unsupervised learning is to find hidden patterns in unlabeled data {x^(1), ..., x^(m)}.

❒ Jensen's inequality – Let f be a convex function and X a random variable. We have the following inequality:

E[f(X)] ≥ f(E[X])

2.2 Clustering

2.2.1 Expectation-Maximization

❒ Latent variables – Latent variables are hidden/unobserved variables that make estimation problems difficult, and are often denoted z. Common settings where latent variables appear include mixtures of Gaussians and factor analysis.

❒ Algorithm – The Expectation-Maximization (EM) algorithm repeatedly constructs a lower bound on the likelihood (E-step) and optimizes that lower bound (M-step). The parameter update of the M-step is:

θ_i = argmax_θ ∑_i ∫_{z^(i)} Q_i(z^(i)) log( P(x^(i), z^(i); θ) / Q_i(z^(i)) ) dz^(i)

2.2.2 k-means clustering

We note c^(i) the cluster of data point i and µ_j the center of cluster j.

❒ Algorithm – After randomly initializing the cluster centroids µ_1, µ_2, ..., µ_k ∈ ℝ^n, the k-means algorithm repeats the following step until convergence:

c^(i) = arg min_j ||x^(i) − µ_j||²   and   µ_j = ( ∑_{i=1}^{m} 1{c^(i) = j} x^(i) ) / ( ∑_{i=1}^{m} 1{c^(i) = j} )
2.2.3 Hierarchical clustering

It is a clustering algorithm with an agglomerative hierarchical approach that builds nested clusters in a successive manner. The different types aim at optimizing different objective functions, summed up in the table below:

Ward linkage:      minimize within-cluster distance.
Average linkage:   minimize average distance between cluster pairs.
Complete linkage:  minimize maximum distance between cluster pairs.

2.2.4 Clustering assessment metrics

❒ Calinski-Harabaz index – By noting B_k and W_k the between- and within-clustering dispersion matrices, respectively defined as

B_k = ∑_{j=1}^{k} n_j (µ_j − µ)(µ_j − µ)^T ,   W_k = ∑_{i=1}^{m} (x^(i) − µ_{c^(i)})(x^(i) − µ_{c^(i)})^T ,

the Calinski-Harabaz index s(k) indicates how well a clustering model defines its clusters, such that the higher the score, the more dense and well separated the clusters are. It is defined as follows:

s(k) = [ Tr(B_k) / Tr(W_k) ] × [ (N − k) / (k − 1) ]
2.3 Dimension reduction

2.3.1 Principal component analysis

It is a dimension reduction technique that finds the variance-maximizing directions onto which to project the data.

❒ Eigenvalue, eigenvector – Given a matrix A ∈ ℝ^{n×n}, λ is said to be an eigenvalue of A if there exists a vector z ∈ ℝ^n \ {0}, called an eigenvector, such that:

Az = λz

❒ Spectral theorem – Let A ∈ ℝ^{n×n}. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U ∈ ℝ^{n×n}. By noting Λ = diag(λ_1, ..., λ_n), we have:

∃Λ diagonal,   A = UΛU^T

Remark: the eigenvector associated with the largest eigenvalue is called the principal eigenvector of matrix A.

❒ Algorithm – The Principal Component Analysis (PCA) procedure is a dimension reduction technique that projects the data on k dimensions by maximizing the variance of the data as follows:

• Step 1: Normalize the data to have a mean of 0 and standard deviation of 1:

x_j^(i) ← (x_j^(i) − µ_j) / σ_j   where   µ_j = (1/m) ∑_{i=1}^{m} x_j^(i)   and   σ_j² = (1/m) ∑_{i=1}^{m} (x_j^(i) − µ_j)²

• Step 2: Compute Σ = (1/m) ∑_{i=1}^{m} x^(i) x^(i)T ∈ ℝ^{n×n}, which is symmetric with real eigenvalues.

• Step 3: Compute u_1, ..., u_k ∈ ℝ^n, the k orthogonal principal eigenvectors of Σ, i.e. the orthogonal eigenvectors of the k largest eigenvalues.

• Step 4: Project the data on span_ℝ(u_1, ..., u_k). This procedure maximizes the variance among all k-dimensional spaces.
2.3.2 Independent component analysis

It is a technique meant to find the underlying generating sources.

❒ Assumptions – We assume that our data x has been generated by the n-dimensional source vector s = (s_1, ..., s_n), where the s_i are independent random variables, via a mixing and non-singular matrix A as follows:

x = As

The goal is to find the unmixing matrix W = A^{−1} by an update rule.

❒ Bell and Sejnowski ICA algorithm – This algorithm finds the unmixing matrix W by following the steps below:

• Write the probability of x = As = W^{−1}s as:

p(x) = ∏_{i=1}^{n} p_s(w_i^T x) · |W|

• Write the log-likelihood given our training data {x^(i), i ∈ [[1,m]]}, and by noting g the sigmoid function, as:

ℓ(W) = ∑_{i=1}^{m} ( ∑_{j=1}^{n} log( g'(w_j^T x^(i)) ) + log|W| )

Therefore, the stochastic gradient ascent learning rule is such that for each training example x^(i), we update W as follows:

W ← W + α ( [ 1 − 2g(w_1^T x^(i)),  1 − 2g(w_2^T x^(i)),  ...,  1 − 2g(w_n^T x^(i)) ]^T x^(i)T + (W^T)^{−1} )
3 Deep Learning

3.1 Neural Networks

Neural networks are a class of models that are built with layers. Commonly used types of neural networks include convolutional and recurrent neural networks.

❒ Architecture – The vocabulary around neural network architectures includes the input layer, the hidden layers and the output layer.

❒ Updating weights – In a neural network, weights are updated as follows:

• Step 1: Take a batch of training data.
• Step 2: Perform forward propagation to obtain the corresponding loss.
• Step 3: Backpropagate the loss to get the gradients.
• Step 4: Use the gradients to update the weights of the network.

❒ Backpropagation – The derivative of the loss with respect to a weight w is computed using the chain rule:

∂L(z,y)/∂w = ∂L(z,y)/∂a × ∂a/∂z × ∂z/∂w

As a result, the weight is updated as follows:

w ← w − η ∂L(z,y)/∂w

❒ Batch normalization – It is a step that normalizes the batch {x_i} with the batch mean µ_B and variance σ_B², using learnable parameters γ, β, as follows:

x_i ← γ (x_i − µ_B)/√(σ_B² + ε) + β

It is usually done after a fully connected/convolutional layer and before a non-linearity layer, and aims at allowing higher learning rates and reducing the strong dependence on initialization.
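Here is a minimal batch normalization sketch following the formula above, assuming NumPy; the batch values, γ, β and ε are illustrative choices.

```python
import numpy as np

# Illustrative sketch of batch normalization: normalize a batch with its mean mu_B and
# variance sigma_B^2, then scale and shift with the learnable parameters gamma and beta.
def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    mu_B = x.mean(axis=0)
    var_B = x.var(axis=0)
    x_hat = (x - mu_B) / np.sqrt(var_B + eps)
    return gamma * x_hat + beta

batch = np.random.randn(32, 4) * 10 + 3                      # badly scaled activations
out = batch_norm(batch)
print(out.mean(axis=0).round(3), out.std(axis=0).round(3))   # ~0 and ~1 per feature
```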
❒ Cross-entropy loss – In the context of neural networks, the cross-entropy loss L(z,y) is commonly used and is defined as follows:

L(z,y) = −[ y log(z) + (1 − y) log(1 − z) ]

3.3 Recurrent Neural Networks

❒ Types of gates – Here are the different types of gates that we encounter in a typical recurrent neural network: the input gate (write to cell or not), the forget gate (erase a cell or not), the output gate (how much to reveal a cell) and the gate (how much to write to a cell).
3.4 Reinforcement Learning and Control

The goal of reinforcement learning is for an agent to learn how to evolve in an environment.

❒ Markov decision processes – A Markov decision process (MDP) is a 5-tuple (S, A, {P_sa}, γ, R) where:

• S is the set of states
• A is the set of actions
• {P_sa} are the state transition probabilities for s ∈ S and a ∈ A
• γ ∈ [0,1[ is the discount factor
• R : S × A → ℝ or R : S → ℝ is the reward function that the algorithm wants to maximize

❒ Value function – For a given policy π and a given state s, we define the value function V^π as follows:

V^π(s) = E[ R(s_0) + γR(s_1) + γ²R(s_2) + ... | s_0 = s, π ]

❒ Bellman equation – The optimal Bellman equations characterize the value function V^{π*} of the optimal policy π*:

V^{π*}(s) = R(s) + max_{a∈A} γ ∑_{s'∈S} P_sa(s') V^{π*}(s')

Remark: the optimal policy π* for a given state s is such that:

π*(s) = argmax_{a∈A} ∑_{s'∈S} P_sa(s') V*(s')

❒ Value iteration algorithm – The value iteration algorithm initializes the value V_0(s) = 0 and then repeatedly applies the Bellman update above until convergence.

❒ Maximum likelihood estimate – The maximum likelihood estimates for the state transition probabilities are as follows:

P_sa(s') = (#times took action a in state s and got to s') / (#times took action a in state s)

❒ Q-learning – Q-learning is a model-free estimation of Q, which is done as follows:

Q(s,a) ← Q(s,a) + α[ R(s,a,s') + γ max_{a'} Q(s',a') − Q(s,a) ]
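To make the Bellman update above concrete, here is a minimal value-iteration sketch on a tiny made-up MDP, assuming NumPy; the states, rewards and transition probabilities are invented for the example.

```python
import numpy as np

# Illustrative sketch of value iteration with the Bellman update:
# V(s) <- R(s) + max_a gamma * sum_s' P_sa(s') V(s').
n_states, n_actions, gamma = 3, 2, 0.9
R = np.array([0.0, 0.0, 1.0])                                             # reward of each state
P = np.random.dirichlet(np.ones(n_states), size=(n_states, n_actions))    # P[s, a, s']

V = np.zeros(n_states)                          # V_0(s) = 0
for _ in range(100):
    V = R + gamma * np.max(P @ V, axis=1)       # Bellman update for every state
policy = np.argmax(P @ V, axis=1)               # greedy policy w.r.t. V
print(V, policy)
```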
4 Machine Learning Tips and Tricks

4.1 Metrics

4.1.1 Classification

In the context of a binary classification, here are the main metrics that are important to track in order to assess the performance of the model.

❒ Confusion matrix – The confusion matrix is used to have a more complete picture when assessing the performance of a model. It is defined as follows:

                     Predicted +                            Predicted −
Actual +   TP (True Positives)                    FN (False Negatives, Type II error)
Actual −   FP (False Positives, Type I error)     TN (True Negatives)
❒ Main metrics – The following metrics are commonly used to assess the performance of classification models:

Metric        Formula                           Interpretation
Accuracy      (TP + TN)/(TP + TN + FP + FN)     Overall performance of the model
Precision     TP/(TP + FP)                      How accurate the positive predictions are
Recall        TP/(TP + FN)                      Coverage of actual positive samples (also called sensitivity)
Specificity   TN/(TN + FP)                      Coverage of actual negative samples
F1 score      2TP/(2TP + FP + FN)               Hybrid metric useful for unbalanced classes

❒ ROC – The receiver operating curve, also noted ROC, is the plot of TPR versus FPR obtained by varying the threshold. These metrics are summed up in the table below:

TPR (True Positive Rate):    TP/(TP + FN), equivalent to recall/sensitivity
FPR (False Positive Rate):   FP/(TN + FP), equivalent to 1 − specificity

❒ AUC – The area under the receiver operating curve, also noted AUC or AUROC, is the area below the ROC curve.

4.1.2 Regression

❒ Basic metrics – Given a regression model f, the following metrics are commonly used to assess the performance of the model:

Total sum of squares:       SS_tot = ∑_{i=1}^{m} (y_i − ȳ)²
Explained sum of squares:   SS_reg = ∑_{i=1}^{m} (f(x_i) − ȳ)²
Residual sum of squares:    SS_res = ∑_{i=1}^{m} (y_i − f(x_i))²

❒ Coefficient of determination – The coefficient of determination, often noted R² or r², provides a measure of how well the observed outcomes are replicated by the model and is defined as follows:

R² = 1 − SS_res / SS_tot

❒ Main metrics – The following metrics are commonly used to assess the performance of regression models, by taking into account the number of variables n that they take into consideration:

Mallow's Cp:    (SS_res + 2(n + 1) σ̂²) / m
AIC:            2[(n + 2) − log(L)]
BIC:            log(m)(n + 2) − 2 log(L)
Adjusted R²:    1 − ((1 − R²)(m − 1)) / (m − n − 1)

where L is the likelihood and σ̂² is an estimate of the variance associated with each response.
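As a quick illustration of the metrics in this section, here is a sketch that computes the main classification metrics from prediction counts and R² from regression outputs, assuming NumPy; the labels and predictions are toy values.

```python
import numpy as np

# Illustrative sketch of accuracy, precision, recall, F1 and R^2 as defined above.
def classification_metrics(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return accuracy, precision, recall, f1

def r_squared(y_true, y_hat):
    ss_res = np.sum((y_true - y_hat) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1 - ss_res / ss_tot

print(classification_metrics(np.array([1, 0, 1, 1, 0]), np.array([1, 0, 0, 1, 1])))
print(r_squared(np.array([1.0, 2.0, 3.0]), np.array([1.1, 1.9, 3.2])))
```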
4.2 Model selection

❒ Model selection – Train the model on the training set, evaluate it on the development set, pick the model with the best performance on the development set, and retrain that model on the whole training set. Once the model has been chosen, it is trained on the entire dataset and tested on the unseen test set.

❒ Cross-validation – Cross-validation, also noted CV, is a method that is used to select a model that does not rely too much on the initial training set. The different types are summed up in the table below:

k-fold:       training on k − 1 folds and assessment on the remaining one; generally k = 5 or 10.
Leave-p-out:  training on n − p observations and assessment on the p remaining ones; the case p = 1 is called leave-one-out.

The most commonly used method is called k-fold cross-validation and splits the training data into k folds to validate the model on one fold while training the model on the k − 1 other folds, all of this k times. The error is then averaged over the k folds and is named the cross-validation error.
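The procedure just described can be sketched as follows, assuming NumPy; `fit` and `error` are hypothetical placeholders for any model's training and evaluation routines, and the least-squares example is illustrative.

```python
import numpy as np

# Illustrative sketch of k-fold cross-validation: train on k-1 folds, evaluate on the
# remaining one, and average the k errors.
def k_fold_cv(X, y, fit, error, k=5, seed=0):
    idx = np.random.default_rng(seed).permutation(len(X))
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train], y[train])
        errors.append(error(model, X[val], y[val]))
    return np.mean(errors)                         # cross-validation error

# Toy usage with least-squares as the model.
fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
error = lambda w, X, y: np.mean((X @ w - y) ** 2)
X = np.random.randn(100, 3)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * np.random.randn(100)
print(k_fold_cv(X, y, fit, error))
```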
❒ Regularization – The regularization procedure aims at preventing the model from overfitting the data and thus deals with high variance issues. The following table sums up the different types of commonly used regularization techniques:

LASSO:        ... + λ||θ||_1,   λ ∈ ℝ
Ridge:        ... + λ||θ||_2²,   λ ∈ ℝ
Elastic Net:  ... + λ[(1 − α)||θ||_1 + α||θ||_2²],   λ ∈ ℝ, α ∈ [0,1]

4.3 Diagnostics

❒ Bias – The bias of a model is the difference between the expected prediction and the correct model that we try to predict for given data points.

❒ Variance – The variance of a model is the variability of the model prediction for given data points.

❒ Bias/variance tradeoff – The simpler the model, the higher the bias; the more complex the model, the higher the variance.

            Underfitting                               Just right                          Overfitting
Symptoms    High training error;                       Training error slightly             Low training error;
            training error close to test error;        lower than test error               training error much lower than test error;
            high bias                                                                      high variance
5 Refreshers
5.1 Probabilities and Statistics

5.1.1 Introduction to Probability and Combinatorics

❒ Axioms of probability – For each event E, we denote P(E) as the probability of event E occurring. By noting E_1, ..., E_n mutually exclusive events, we have the 3 following axioms:

(1) 0 ≤ P(E) ≤ 1      (2) P(S) = 1      (3) P( ∪_{i=1}^{n} E_i ) = ∑_{i=1}^{n} P(E_i)

5.1.2 Conditional Probability

❒ Partition – Let {A_i, i ∈ [[1,n]]} be such that for all i, A_i ≠ ∅. We say that {A_i} is a partition if we have:

∀i ≠ j, A_i ∩ A_j = ∅   and   ∪_{i=1}^{n} A_i = S

Remark: for any event B in the sample space, we have P(B) = ∑_{i=1}^{n} P(B|A_i)P(A_i).
❒ Extended form of Bayes' rule – Let {A_i, i ∈ [[1,n]]} be a partition of the sample space. We have:

P(A_k|B) = P(B|A_k)P(A_k) / ∑_{i=1}^{n} P(B|A_i)P(A_i)

❒ Independence – Two events A and B are independent if and only if we have:

P(A ∩ B) = P(A)P(B)

5.1.3 Random Variables

❒ Cumulative distribution function (CDF) – The cumulative distribution function F is defined as:

F(x) = P(X ≤ x)

Remark: we have P(a < X ≤ b) = F(b) − F(a).

❒ Probability density function (PDF) – The probability density function f is the probability that X takes on values between two adjacent realizations of the random variable.
❒ Relationships involving the PDF and CDF – Here are the important properties to know in the discrete (D) and the continuous (C) cases:

Case   CDF F                            PDF f                  Properties of PDF
(D)    F(x) = ∑_{x_i ≤ x} P(X = x_i)    f(x_j) = P(X = x_j)    0 ≤ f(x_j) ≤ 1 and ∑_j f(x_j) = 1
(C)    F(x) = ∫_{−∞}^{x} f(y) dy        f(x) = dF/dx           f(x) ≥ 0 and ∫_{−∞}^{+∞} f(x) dx = 1

❒ Expectation and Moments of the Distribution – Here are the expressions of the expected value E[X], generalized expected value E[g(X)], kth moment E[X^k] and characteristic function ψ(ω) for the discrete and continuous cases:

Case   E[X]                       E[g(X)]                        E[X^k]                         ψ(ω)
(D)    ∑_{i=1}^{n} x_i f(x_i)     ∑_{i=1}^{n} g(x_i) f(x_i)      ∑_{i=1}^{n} x_i^k f(x_i)       ∑_{i=1}^{n} f(x_i) e^{iωx_i}
(C)    ∫_{−∞}^{+∞} x f(x) dx      ∫_{−∞}^{+∞} g(x) f(x) dx       ∫_{−∞}^{+∞} x^k f(x) dx        ∫_{−∞}^{+∞} f(x) e^{iωx} dx

❒ Transformation of random variables – Let the variables X and Y be linked by some function. By noting f_X and f_Y the distribution functions of X and Y respectively, we have:

f_Y(y) = f_X(x) |dx/dy|

❒ Leibniz integral rule – Let g be a function of x and potentially c, and let a, b be boundaries that may depend on c. We have:

∂/∂c ∫_{a}^{b} g(x) dx = (∂b/∂c) · g(b) − (∂a/∂c) · g(a) + ∫_{a}^{b} (∂g/∂c)(x) dx

❒ Chebyshev's inequality – Let X be a random variable with expected value µ and standard deviation σ. For k, σ > 0, we have the following inequality:

P(|X − µ| ≥ kσ) ≤ 1/k²
5.1.4 Jointly Distributed Random Variables

❒ Marginal density and cumulative distribution – From the joint density probability function f_XY, we have:

Case   Marginal density                       Cumulative function
(D)    f_X(x_i) = ∑_j f_XY(x_i, y_j)          F_XY(x,y) = ∑_{x_i ≤ x} ∑_{y_j ≤ y} f_XY(x_i, y_j)
(C)    f_X(x) = ∫_{−∞}^{+∞} f_XY(x,y) dy      F_XY(x,y) = ∫_{−∞}^{x} ∫_{−∞}^{y} f_XY(x', y') dx' dy'

5.1.5 Parameter estimation

❒ Random sample – A random sample is a collection of n random variables X_1, ..., X_n that are independent and identically distributed with X.

❒ Estimator – An estimator θ̂ is a function of the data that is used to infer the value of an unknown parameter θ in a statistical model.

❒ Bias – The bias of an estimator θ̂ is defined as being the difference between the expected value of the distribution of θ̂ and the true value, i.e.:

Bias(θ̂) = E[θ̂] − θ
5.2 Linear Algebra and Calculus

5.2.1 General notations

❒ Diagonal matrix – A diagonal matrix D ∈ ℝ^{n×n} is a square matrix with nonzero values d_1, ..., d_n on its diagonal and zero everywhere else.

Remark: we also note D as diag(d_1, ..., d_n).

5.2.2 Matrix operations
❒ Vector-vector multiplication – There are two types of vector-vector products:

• inner product: for x, y ∈ ℝ^n, we have x^T y = ∑_{i=1}^{n} x_i y_i ∈ ℝ
• outer product: for x ∈ ℝ^m, y ∈ ℝ^n, we have xy^T ∈ ℝ^{m×n}, with entries (xy^T)_{i,j} = x_i y_j

❒ Matrix-vector multiplication – The product of matrix A ∈ ℝ^{m×n} and vector x ∈ ℝ^n is a vector of ℝ^m, such that:

Ax = [ a_{r,1}^T x, ..., a_{r,m}^T x ]^T = ∑_{i=1}^{n} a_{c,i} x_i ∈ ℝ^m

where a_{r,i}^T are the vector rows and a_{c,j} are the vector columns of A, and x_i are the entries of x.

❒ Matrix-matrix multiplication – The product of matrices A ∈ ℝ^{m×n} and B ∈ ℝ^{n×p} is a matrix of ℝ^{m×p}, such that:

AB = [ a_{r,i}^T b_{c,j} ]_{i,j} = ∑_{i=1}^{n} a_{c,i} b_{r,i}^T ∈ ℝ^{m×p}

where a_{r,i}^T, b_{r,i}^T are the vector rows and a_{c,j}, b_{c,j} are the vector columns of A and B, respectively.

❒ Transpose – The transpose of a matrix A ∈ ℝ^{m×n}, noted A^T, is such that its entries are flipped:

∀i,j,   (A^T)_{i,j} = A_{j,i}

Remark: for matrices A, B, we have (AB)^T = B^T A^T.

❒ Inverse – The inverse of an invertible square matrix A is noted A^{−1} and is the only matrix such that:

AA^{−1} = A^{−1}A = I

Remark: not all square matrices are invertible. Also, for matrices A, B, we have (AB)^{−1} = B^{−1}A^{−1}.

❒ Trace – The trace of a square matrix A, noted tr(A), is the sum of its diagonal entries:

tr(A) = ∑_{i=1}^{n} A_{i,i}

Remark: for matrices A, B, we have tr(A^T) = tr(A) and tr(AB) = tr(BA).

❒ Determinant – The determinant of a square matrix A ∈ ℝ^{n×n}, noted |A| or det(A), is expressed recursively in terms of A_{\i,\j}, which is the matrix A without its ith row and jth column, as follows:

det(A) = |A| = ∑_{j=1}^{n} (−1)^{i+j} A_{i,j} |A_{\i,\j}|

Remark: A is invertible if and only if |A| ≠ 0. Also, |AB| = |A||B| and |A^T| = |A|.

5.2.3 Matrix properties

❒ Symmetric decomposition – A given matrix A can be expressed in terms of its symmetric and antisymmetric parts as follows:

A = (A + A^T)/2 + (A − A^T)/2

where the first term is symmetric and the second is antisymmetric.

❒ Norm – A norm is a function N : V → [0, +∞[ where V is a vector space, and such that for all x, y ∈ V, we have:

• N(x + y) ≤ N(x) + N(y)
• N(ax) = |a| N(x) for a scalar a
• if N(x) = 0, then x = 0

For x ∈ V, the most commonly used norms are:

• Manhattan norm: ||x||_1 = ∑_{i=1}^{n} |x_i|
• Euclidean norm: ||x||_2 = √( ∑_{i=1}^{n} x_i² )
• p-norm: ||x||_p = ( ∑_{i=1}^{n} x_i^p )^{1/p}
• Infinity norm: ||x||_∞ = max_i |x_i|
❒ Linear dependence – A set of vectors is said to be linearly dependent if one of the vectors in the set can be defined as a linear combination of the others.

Remark: if no vector can be written this way, then the vectors are said to be linearly independent.

❒ Matrix rank – The rank of a given matrix A is noted rank(A) and is the dimension of the vector space generated by its columns. This is equivalent to the maximum number of linearly independent columns of A.

❒ Positive semi-definite matrix – A matrix A is positive semi-definite if:

A = A^T   and   ∀x ∈ ℝ^n, x^T A x ≥ 0

❒ Eigendecomposition – For a symmetric matrix A, the spectral theorem (see section 2.3.1) gives an orthogonal U and a diagonal Λ such that:

∃Λ diagonal,   A = UΛU^T

❒ Singular-value decomposition – For a given matrix A ∈ ℝ^{m×n}, the singular-value decomposition (SVD) guarantees the existence of a unitary U ∈ ℝ^{m×m}, a diagonal Σ ∈ ℝ^{m×n} and a unitary V ∈ ℝ^{n×n} such that:

A = UΣV^T

5.2.4 Matrix calculus

❒ Gradient operations – Among the useful gradient identities:

∇_A tr(ABA^T C) = CAB + C^T AB^T      and      ∇_A |A| = |A| (A^{−1})^T
❒ Fully Connected (FC) – The fully connected layer (FC) operates on a flattened input where each input is connected to all neurons. If present, FC layers are usually found towards the end of CNN architectures and can be used to optimize objectives such as class scores.

❒ Stride – For a convolutional or a pooling operation, the stride S denotes the number of pixels by which the window moves after each operation.

❒ Padding – The three main padding modes are summed up in the table below:

Valid:  no padding; drops the last convolution if dimensions do not match.
Same:   padding such that the feature map has size ⌈I/S⌉; the output size is mathematically convenient; also called 'half' padding.
Full:   maximum padding such that the end convolutions are applied on the limits of the input; the filter 'sees' the input end-to-end.

Remark: the application of K filters of size F × F results in an output feature map of size O × O × K.

Remark: oftentimes, P_start = P_end ≜ P, in which case we can replace P_start + P_end by 2P in the output size formula.
❒ Commonly used activation functions – Some commonly used activation functions are summed up in the table below:

ReLU:         g(z) = max(0, z)
Leaky ReLU:   g(z) = max(εz, z),  with ε ≪ 1
ELU:          g(z) = max(α(e^z − 1), z),  with α ≪ 1

❒ Understanding the complexity of the model – In order to assess the complexity of a model, it is often useful to determine the number of parameters that its architecture will have; this is typically examined layer by layer for the CONV, POOL and FC layers of a convolutional neural network.

❒ Receptive field – The receptive field at layer k is the area denoted R_k × R_k of the input that each pixel of the k-th activation map can 'see'. By calling F_j the filter size of layer j and S_i the stride value of layer i, and with the convention S_0 = 1, the receptive field at layer k can be computed with the formula:

R_k = 1 + ∑_{j=1}^{k} (F_j − 1) ∏_{i=0}^{j−1} S_i

For example, with F_1 = F_2 = 3 and S_1 = S_2 = 1, we get R_2 = 1 + 2·1 + 2·1 = 5.
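The formula above can be sketched directly in a few lines; the filter sizes and strides below reproduce the example from the text.

```python
# Illustrative sketch of R_k = 1 + sum_j (F_j - 1) * prod_{i<j} S_i, with S_0 = 1.
# F and S list the filter size and stride of each layer.
def receptive_field(F, S):
    R, jump = 1, 1                 # jump accumulates the product of strides S_0 ... S_{j-1}
    for f, s in zip(F, S):
        R += (f - 1) * jump
        jump *= s
    return R

# Example from the text: F1 = F2 = 3 and S1 = S2 = 1 gives R_2 = 5.
print(receptive_field([3, 3], [1, 1]))   # -> 5
```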
1.6 Object detection

❒ Types of models – There are 3 main types of object recognition algorithms, for which the nature of what is predicted is different: image classification, classification with localization, and detection. Depending on the task, a detected object is described either by a box of center (b_x, b_y), height b_h and width b_w, or by reference points (l_1x, l_1y), ..., (l_nx, l_ny).

❒ YOLO – You Only Look Once (YOLO) is an object detection algorithm that performs the following steps:

• Step 1: Divide the input image into a G × G grid.

• Step 2: For each grid cell, run a CNN that predicts y of the following form:

y = [ p_c, b_x, b_y, b_h, b_w, c_1, c_2, ..., c_p, ... ]^T ∈ ℝ^{G×G×k×(5+p)}

where the block (p_c, b_x, b_y, b_h, b_w, c_1, ..., c_p) is repeated k times, p_c is the probability of detecting an object, b_x, b_y, b_h, b_w are the properties of the detected bounding box, c_1, ..., c_p is a one-hot representation of which of the p classes was detected, and k is the number of anchor boxes.

• Step 3: Run the non-max suppression algorithm to remove any potential duplicate overlapping bounding boxes.

Remark: when p_c = 0, the network does not detect any object. In that case, the corresponding predictions b_x, ..., c_p have to be ignored.

❒ Intersection over Union – Intersection over Union, also known as IoU, is a function that quantifies how correctly positioned a predicted bounding box B_p is over the actual bounding box B_a. It is defined as:

IoU(B_p, B_a) = (B_p ∩ B_a) / (B_p ∪ B_a)

Remark: we always have IoU ∈ [0,1]. By convention, a predicted bounding box B_p is considered as being reasonably good if IoU(B_p, B_a) ≥ 0.5.
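Here is a minimal IoU sketch for axis-aligned boxes given as (x1, y1, x2, y2) corner coordinates; the box values are made up for the example.

```python
# Illustrative sketch of Intersection over Union for two axis-aligned boxes.
def iou(bp, ba):
    x1, y1 = max(bp[0], ba[0]), max(bp[1], ba[1])
    x2, y2 = min(bp[2], ba[2]), min(bp[3], ba[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_p = (bp[2] - bp[0]) * (bp[3] - bp[1])
    area_a = (ba[2] - ba[0]) * (ba[3] - ba[1])
    return inter / (area_p + area_a - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))   # 1/7, well below the 0.5 threshold
```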
❒ Anchor boxes – Anchor boxing is a technique used to predict overlapping bounding boxes. In practice, the network is allowed to predict more than one box simultaneously, where each box prediction is constrained to have a given set of geometrical properties. For instance, the first prediction can potentially be a rectangular box of a given form, while the second will be another rectangular box of a different geometrical form.

❒ Non-max suppression – The non-max suppression technique aims at removing duplicate overlapping bounding boxes of a same object by selecting the most representative ones. After having removed all boxes with a probability prediction lower than 0.6, the following steps are repeated while boxes remain: pick the box with the largest prediction probability, then discard any box having an IoU ≥ 0.5 with that box.

❒ R-CNN – Region with Convolutional Neural Networks (R-CNN) is an object detection algorithm that first segments the image to find potential relevant bounding boxes and then runs the detection algorithm to find the most probable objects in those bounding boxes.
❒ Types of models – Two main types of model are used here: face verification (is this the claimed person? one-to-one lookup) and face recognition (is this one of the K persons in the database? one-to-many lookup).

❒ One Shot Learning – One Shot Learning is a face verification algorithm that uses a limited training set to learn a similarity function that quantifies how different two given images are. The similarity function applied to two images is often noted d(image 1, image 2).

❒ Siamese Network – Siamese Networks aim at learning how to encode images to then quantify how different two images are. For a given input image x^(i), the encoded output is often noted as f(x^(i)).

❒ Triplet loss – The triplet loss ℓ is a loss function computed on the embedding representation of a triplet of images A (anchor), P (positive) and N (negative). The anchor and the positive example belong to a same class, while the negative example belongs to another one. By calling α ∈ ℝ+ the margin parameter, this loss is defined as follows:

ℓ(A, P, N) = max( d(A,P) − d(A,N) + α, 0 )

❒ Content cost function – The content cost function J_content(C,G) is used to determine how the generated image G differs from the original content image C. It is defined as follows:

J_content(C,G) = (1/2) ||a^[l](C) − a^[l](G)||²

❒ Style matrix – The style matrix G^[l] of a given layer l is a Gram matrix where each of its elements G^[l]_{kk'} quantifies how correlated the channels k and k' are. It is defined with respect to activations a^[l] as follows:

G^[l]_{kk'} = ∑_{i=1}^{n_H^[l]} ∑_{j=1}^{n_w^[l]} a^[l]_{ijk} a^[l]_{ijk'}

Remark: the style matrices for the style image and the generated image are noted G^[l](S) and G^[l](G) respectively.

❒ Style cost function – The style cost function J_style(S,G) is used to determine how the generated image G differs from the style S. It is defined as follows:

J_style^[l](S,G) = (1/(2 n_H n_w n_c)²) ||G^[l](S) − G^[l](G)||_F²

❒ Overall cost function – The overall cost function is defined as being a combination of the content and style cost functions, weighted by parameters α, β, as follows:

J(G) = α J_content(C,G) + β J_style(S,G)

Remark: a higher value of α will make the model care more about the content, while a higher value of β will make it care more about the style.
❒ GAN – Generative adversarial networks, also known as GANs, are composed of a generative model, which aims at producing realistic outputs, and a discriminative model, which tries to distinguish those outputs from true data.

Remark: use cases of GAN variants include text-to-image generation, music generation and synthesis.

❒ ResNet – The Residual Network architecture (also called ResNet) uses residual blocks with a high number of layers meant to decrease the training error. The residual block has the following characterizing equation:

a^[l+2] = g(a^[l] + z^[l+2])

❒ Inception Network – This architecture uses inception modules and aims at trying different convolutions in order to increase its performance. In particular, it uses the 1 × 1 convolution trick to lower the computational burden.

⋆ ⋆ ⋆

2.1 Overview

❒ Architecture of a traditional RNN – Recurrent neural networks, also known as RNNs, are a class of neural networks that allow previous outputs to be used as inputs while having hidden states. For each timestep t, the activation a^<t> and the output y^<t> are expressed as follows:

a^<t> = g_1(W_aa a^<t−1> + W_ax x^<t> + b_a)   and   y^<t> = g_2(W_ya a^<t> + b_y)

where W_ax, W_aa, W_ya, b_a, b_y are coefficients that are shared temporally and g_1, g_2 are activation functions.
The pros and cons of a typical RNN architecture are summed up in the table below:

Advantages:
- possibility of processing input of any length
- model size not increasing with the size of the input
- computation takes into account historical information
- weights are shared across time

Drawbacks:
- computation being slow
- difficulty of accessing information from a long time ago
- cannot consider any future input for the current state

❒ Applications of RNNs – RNN models are mostly used in the fields of natural language processing and speech recognition. The different applications are summed up in the table below:

Many-to-one (T_x > 1, T_y = 1):   sentiment classification
Many-to-many (T_x = T_y):         named entity recognition
Many-to-many (T_x ≠ T_y):         machine translation

❒ Loss function – In the case of a recurrent neural network, the loss function L of all time steps is defined based on the loss at every time step as follows:

L(ŷ, y) = ∑_{t=1}^{T_y} L(ŷ^<t>, y^<t>)

❒ Backpropagation through time – Backpropagation is done at each point in time. At timestep T, the derivative of the loss L with respect to the weight matrix W is expressed as follows:

∂L^(T)/∂W = ∑_{t=1}^{T} ( ∂L^(T)/∂W ) |_(t)

❒ Commonly used activation functions – The most common activation functions used in RNN modules are:

Sigmoid:  g(z) = 1/(1 + e^{−z})
Tanh:     g(z) = (e^z − e^{−z})/(e^z + e^{−z})
RELU:     g(z) = max(0, z)

❒ Vanishing/exploding gradient – The vanishing and exploding gradient phenomena are often encountered in the context of RNNs. The reason why they happen is that it is difficult to capture long term dependencies because of a multiplicative gradient that can be exponentially decreasing/increasing with respect to the number of layers.

❒ Gradient clipping – It is a technique used to cope with the exploding gradient problem sometimes encountered when performing backpropagation. By capping the maximum value of the gradient, this phenomenon is controlled in practice.

❒ Types of gates – In order to remedy the vanishing gradient problem, specific gates are used in some types of RNNs and usually have a well-defined purpose. They are usually noted Γ and are equal to:

Γ = σ(W x^<t> + U a^<t−1> + b)

where W, U, b are coefficients specific to the gate and σ is the sigmoid function. The main ones are the update gate Γ_u, the relevance gate Γ_r, the forget gate Γ_f and the output gate Γ_o, used in GRU and LSTM units.
Remark: the sign ⊙ denotes the element-wise multiplication between two vectors.

❒ Variants of RNNs – The other commonly used RNN architectures are the Bidirectional (BRNN) and Deep (DRNN) variants.

❒ Representation techniques – The two main ways of representing words are summed up in the table below:

1-hot representation:  noted o_w; naive approach, no similarity information.
Word embedding:        noted e_w; takes into account word similarity.

❒ Embedding matrix – For a given word w, the embedding matrix E is a matrix that maps its 1-hot representation o_w to its embedding e_w as follows:

e_w = E o_w

Remark: learning the embedding matrix can be done using target/context likelihood models.

2.3.2 Word embeddings

❒ Word2vec – Word2vec is a framework aimed at learning word embeddings by estimating the likelihood that a given word is surrounded by other words. Popular models include skip-gram, negative sampling and CBOW.

❒ Skip-gram – The skip-gram word2vec model is a supervised learning task that learns word embeddings by assessing the likelihood of any given target word t happening with a context word c. By noting θ_t a parameter associated with t, the probability P(t|c) is given by:

P(t|c) = exp(θ_t^T e_c) / ∑_{j=1}^{|V|} exp(θ_j^T e_c)

Remark: summing over the whole vocabulary in the denominator of the softmax part makes this model computationally expensive. CBOW is another word2vec model that uses the surrounding words to predict a given word.
❒ Negative sampling – It is a set of binary classifiers using logistic regression that aim at assessing how likely a given context and a given target word are to appear simultaneously, with the models being trained on sets of k negative examples and 1 positive example. Given a context word c and a target word t, the prediction is expressed by:

P(y = 1 | c,t) = σ(θ_t^T e_c)

Remark: this method is less computationally expensive than the skip-gram model.

❒ GloVe – The GloVe model, short for global vectors for word representation, is a word embedding technique that uses a co-occurrence matrix X where each X_{i,j} denotes the number of times that a target i occurred with a context j. Its cost function J is as follows:

J(θ) = (1/2) ∑_{i,j=1}^{|V|} f(X_{ij}) (θ_i^T e_j + b_i + b'_j − log(X_{ij}))²

where f is a weighting function such that X_{i,j} = 0 ⟹ f(X_{i,j}) = 0.

Given the symmetry that e and θ play in this model, the final word embedding e_w^(final) is given by:

e_w^(final) = (e_w + θ_w)/2

Remark: the individual components of the learned word embeddings are not necessarily interpretable.

2.4 Comparing words

❒ Cosine similarity – The cosine similarity between words w_1 and w_2 is expressed as follows:

similarity = (w_1 · w_2) / (||w_1|| ||w_2||) = cos(θ)
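Here is a minimal cosine similarity sketch, assuming NumPy; the three "embeddings" below are made-up toy vectors, not learned ones.

```python
import numpy as np

# Illustrative sketch of the cosine similarity between two word embeddings.
def cosine_similarity(w1, w2):
    return np.dot(w1, w2) / (np.linalg.norm(w1) * np.linalg.norm(w2))

king = np.array([0.9, 0.8, 0.1])
queen = np.array([0.85, 0.9, 0.05])
apple = np.array([0.1, 0.0, 0.95])
print(cosine_similarity(king, queen))  # close to 1: similar words
print(cosine_similarity(king, apple))  # close to 0: unrelated words
```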
2.5 Language model

❒ Overview – A language model aims at estimating the probability of a sentence P(y).

❒ n-gram model – This model is a naive approach aiming at quantifying the probability that an expression appears in a corpus by counting its number of appearances in the training data.

❒ Perplexity – Language models are commonly assessed using the perplexity metric, also known as PP, which can be interpreted as the inverse probability of the dataset normalized by the number of words T. The perplexity is such that the lower, the better, and it is defined as follows:

PP = ∏_{t=1}^{T} ( 1 / ∑_{j=1}^{|V|} y_j^(t) · ŷ_j^(t) )^{1/T}

Remark: PP is commonly used in t-SNE.

2.6 Machine translation

❒ Overview – A machine translation model is similar to a language model except it has an encoder network placed before. For this reason, it is sometimes referred to as a conditional language model. The goal is to find a sentence y such that:

y = arg max_{y^<1>,...,y^<T_y>} P(y^<1>, ..., y^<T_y> | x)

❒ Beam search – It is a heuristic search algorithm used in machine translation and speech recognition to find the likeliest sentence y given an input x.

• Step 1: Find the top B likely words y^<1>.
• Step 2: Compute the conditional probabilities y^<k> | x, y^<1>, ..., y^<k−1>.
• Step 3: Keep the top B combinations x, y^<1>, ..., y^<k>.

Remark: if the beam width is set to 1, then this is equivalent to a naive greedy search.

❒ Beam width – The beam width B is a parameter for beam search. Large values of B yield better results but with slower performance and increased memory. Small values of B lead to worse results but are less computationally intensive. A standard value for B is around 10.

❒ Length normalization – In order to improve numerical stability, beam search is usually applied on the following normalized objective, often called the normalized log-likelihood objective, defined as:

Objective = (1/T_y^α) ∑_{t=1}^{T_y} log[ p(y^<t> | x, y^<1>, ..., y^<t−1>) ]

Remark: the parameter α can be seen as a softener, and its value is usually between 0.5 and 1.

❒ Error analysis – When obtaining a predicted translation ŷ that is bad, one can wonder why we did not get a good translation y* by performing the following error analysis:

Case P(y*|x) > P(ŷ|x):  root cause is a faulty beam search; remedy: increase the beam width.
Case P(y*|x) ≤ P(ŷ|x):  root cause is a faulty RNN; remedies: try a different architecture, regularize, get more data.

❒ Bleu score – The bilingual evaluation understudy (bleu) score quantifies how good a machine translation is by computing a similarity score based on n-gram precision. It is defined as follows:

bleu score = exp( (1/n) ∑_{k=1}^{n} p_k )
2.7 Attention

❒ Attention model – This model allows an RNN to pay attention to specific parts of the input that are considered important, which improves the performance of the resulting model in practice. By noting α^<t,t'> the amount of attention that the output y^<t> should pay to the activation a^<t'>, and c^<t> the context at time t, we have:

c^<t> = ∑_{t'} α^<t,t'> a^<t'>   with   ∑_{t'} α^<t,t'> = 1

❒ Attention weight – The amount of attention that the output y^<t> should pay to the activation a^<t'> is given by α^<t,t'>, computed as follows:

α^<t,t'> = exp(e^<t,t'>) / ∑_{t''=1}^{T_x} exp(e^<t,t''>)

Remark: computation complexity is quadratic with respect to T_x.

Remark: the attention scores are commonly used in image captioning and machine translation.

⋆ ⋆ ⋆
❒ Data augmentation – Deep learning models usually need a lot of data to be properly trained. It is often useful to get more data from the existing ones using data augmentation techniques. The main ones are summed up in the table below:

Original:     image without any modification.
Flip:         flipped with respect to an axis for which the meaning of the image is preserved.
Rotation:     rotation with a slight angle; simulates incorrect horizon calibration.
Random crop:  random focus on one part of the image; several random crops can be done in a row.

❒ Epoch – In the context of training a model, epoch is a term used to refer to one iteration where the model sees the whole training set to update its weights.

❒ Mini-batch gradient descent – During the training phase, updating weights is usually not based on the whole training set at once due to computation complexities, nor on one data point due to noise issues. Instead, the update step is done on mini-batches, where the number of data points in a batch is a hyperparameter that we can tune.

❒ Loss function – In order to quantify how a given model performs, the loss function L is usually used to evaluate to what extent the actual outputs y are correctly predicted by the model outputs z.

❒ Cross-entropy loss – In the context of binary classification in neural networks, the cross-entropy loss L(z,y) is commonly used and is defined as follows:

L(z,y) = −[ y log(z) + (1 − y) log(1 − z) ]

3.2.2 Finding optimal weights

❒ Backpropagation – Backpropagation is a method to update the weights in the neural network by taking into account the actual output and the desired output. The derivative with respect to each weight w is computed using the chain rule.
❒ Batch normalization – It is a step that normalizes the batch {x_i} with the batch mean µ_B and variance σ_B², using learnable parameters γ, β, as follows:

x_i ← γ (x_i − µ_B)/√(σ_B² + ε) + β

It is usually done after a fully connected/convolutional layer and before a non-linearity layer, and aims at allowing higher learning rates and reducing the strong dependence on initialization.
❒ Xavier initialization – Instead of initializing the weights in a purely random manner, Xavier initialization makes it possible to have initial weights that take into account characteristics that are unique to the architecture.

❒ Transfer learning – Training a deep learning model requires a lot of data and, more importantly, a lot of time. It is often useful to take advantage of pre-trained weights on huge datasets that took days/weeks to train, and to leverage them towards our use case. Depending on how much data we have at hand, here are the different ways to leverage this:

Small training size:   freeze all layers, train the weights on the softmax.
Medium training size:  freeze most layers, train the weights on the last layers and the softmax.
Large training size:   train the weights on the layers and the softmax, initializing the weights on the pre-trained ones.

❒ Learning rate – The learning rate, often noted α or sometimes η, indicates at which pace the weights get updated. It can be fixed or adaptively changed. The current most popular method is called Adam, which is a method that adapts the learning rate.

❒ Adaptive learning rates – Letting the learning rate vary when training a model can reduce the training time and improve the numerical optimal solution. While the Adam optimizer is the most commonly used technique, others can also be useful. They are summed up in the table below:

RMSprop (Root Mean Square propagation):  speeds up the learning algorithm by controlling oscillations; updates w ← w − α dw/√(s_dw) and b ← b − α db/√(s_db).
Adam (Adaptive Moment estimation):       most popular method, with 4 parameters to tune; updates w ← w − α v_dw/(√(s_dw) + ε) and b ← b − α v_db/(√(s_db) + ε).
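As an illustration of the Adam row above, here is a minimal sketch of one Adam update, assuming NumPy; the hyperparameter values are the usual defaults, and the bias-correction step is a standard addition that is not shown in the table.

```python
import numpy as np

# Illustrative sketch of an Adam step: v and s are exponential moving averages of the
# gradient and of its square, and w <- w - alpha * v / (sqrt(s) + eps).
def adam_step(w, grad, v, s, t, alpha=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    v = beta1 * v + (1 - beta1) * grad
    s = beta2 * s + (1 - beta2) * grad ** 2
    v_hat = v / (1 - beta1 ** t)            # bias correction
    s_hat = s / (1 - beta2 ** t)
    w = w - alpha * v_hat / (np.sqrt(s_hat) + eps)
    return w, v, s

w, v, s = np.array([1.0, -2.0]), np.zeros(2), np.zeros(2)
for t in range(1, 101):
    grad = 2 * w                            # gradient of the toy loss ||w||^2
    w, v, s = adam_step(w, grad, v, s, t)
print(w)                                    # moves toward 0
```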
❒ Dropout – Dropout is a technique used in neural networks to prevent overfitting the training data by dropping out neurons with probability p > 0. It forces the model to avoid relying too much on particular sets of features.

Remark: most deep learning frameworks parametrize dropout through the 'keep' parameter 1 − p.

❒ Weight regularization – In order to make sure that the weights are not too large and that the model is not overfitting the training set, regularization techniques are usually performed on the model weights. The main ones are summed up in the table below:

LASSO:        ... + λ||θ||_1,   λ ∈ ℝ
Ridge:        ... + λ||θ||_2²,   λ ∈ ℝ
Elastic Net:  ... + λ[(1 − α)||θ||_1 + α||θ||_2²],   λ ∈ ℝ, α ∈ [0,1]

❒ Early stopping – This regularization technique stops the training process as soon as the validation loss reaches a plateau or starts to increase.
⋆ ⋆ ⋆
Super VIP Cheatsheet: Artificial Intelligence

Afshine Amidi and Shervine Amidi

May 26, 2019

Contents

1 Reflex-based models
  1.1 Linear predictors
    1.1.1 Classification
    1.1.2 Regression
  1.2 Loss minimization
  1.3 Non-linear predictors
  1.4 Stochastic gradient descent
  1.5 Fine-tuning models
  1.6 Unsupervised Learning
    1.6.1 k-means
    1.6.2 Principal Component Analysis
2 States-based models
  2.1 Search optimization
    2.1.1 Tree search
    2.1.2 Graph search
    2.1.3 Learning costs
    2.1.4 A⋆ search
    2.1.5 Relaxation
  2.2 Markov decision processes
    2.2.1 Notations
    2.2.2 Applications
    2.2.3 When unknown transitions and rewards
  2.3 Game playing
    2.3.1 Speeding up minimax
    2.3.2 Simultaneous games
    2.3.3 Non-zero-sum games
3 Variables-based models
  3.1 Constraint satisfaction problems
    3.1.1 Factor graphs
    3.1.2 Dynamic ordering
    3.1.3 Approximate methods
    3.1.4 Factor graph transformations
  3.2 Bayesian networks
    3.2.1 Introduction
    3.2.2 Probabilistic programs
    3.2.3 Inference
4 Logic-based models
  4.1 Concepts
  4.2 Propositional logic
  4.3 First-order logic
1 Reflex-based models

1.1 Linear predictors

In this section, we will go through reflex-based models that can improve with experience, by going through samples that have input-output pairs.

❒ Feature vector – The feature vector of an input x is noted φ(x) and is such that:

φ(x) = [φ_1(x), ..., φ_d(x)]^T ∈ ℝ^d

❒ Score – The score s(x,w) of an example (φ(x), y) associated with a linear model of weights w ∈ ℝ^d is given by the inner product:

s(x,w) = w · φ(x)

1.1.1 Classification

❒ Linear classifier – Given a weight vector w ∈ ℝ^d and a feature vector φ(x) ∈ ℝ^d, the binary linear classifier f_w is given by:

f_w(x) = sign(s(x,w)) = { +1 if w · φ(x) > 0;  −1 if w · φ(x) < 0;  ? if w · φ(x) = 0 }

1.1.2 Regression

❒ Linear regression – Given a weight vector w ∈ ℝ^d and a feature vector φ(x) ∈ ℝ^d, the output of a linear regression of weights w denoted as f_w is given by:

f_w(x) = s(x,w)

❒ Residual – The residual res(x,y,w) ∈ ℝ is defined as being the amount by which the prediction f_w(x) overshoots the target y:

res(x,y,w) = f_w(x) − y

❒ Regression case – The prediction of a sample x of true label y ∈ ℝ with a linear model of weights w can be done with the predictor f_w(x) ≜ s(x,w). In this situation, a metric of interest quantifying the quality of the regression is given by the residual res(x,y,w) and can be used with loss functions such as the squared loss (res)² and the absolute deviation |res|.
❒ Stochastic updates – Stochastic gradient descent (SGD) updates the parameters of the model one training example (φ(x), y) ∈ D_train at a time. This method leads to sometimes noisy, but fast updates.

❒ Batch updates – Batch gradient descent (BGD) updates the parameters of the model one batch of examples (e.g. the entire training set) at a time. This method computes stable update directions, at a greater computational cost.

❒ Logistic function – The logistic function σ, also called the sigmoid function, is defined as:

∀z ∈ ]−∞, +∞[,   σ(z) = 1/(1 + e^{−z})

In the neural network setting, we note w, b, x, z the weight, bias, input and non-activated output of the neuron respectively.
❒ Regularization – The main regularization techniques are summed up in the table below:

LASSO:        ... + λ||θ||_1,   λ ∈ ℝ;  shrinks coefficients to 0, good for variable selection.
Ridge:        ... + λ||θ||_2²,   λ ∈ ℝ;  makes coefficients smaller.
Elastic Net:  ... + λ[(1 − α)||θ||_1 + α||θ||_2²],   λ ∈ ℝ, α ∈ [0,1];  tradeoff between variable selection and small coefficients.

❒ Sets vocabulary – When selecting a model, we distinguish 3 different parts of the data that we have: the training set, the development (validation) set and the test set.

1.6 Unsupervised Learning

1.6.1 k-means

❒ Objective function – The loss function for one of the main clustering algorithms, k-means, is given by:

Loss_kmeans(x, µ) = ∑_{i=1}^{n} ||φ(x_i) − µ_{z_i}||²

❒ Algorithm – After randomly initializing the cluster centroids µ_1, µ_2, ..., µ_k ∈ ℝ^n, the k-means algorithm repeats the following step until convergence:

z_i = arg min_j ||φ(x_i) − µ_j||²   and   µ_j = ( ∑_{i=1}^{m} 1{z_i = j} φ(x_i) ) / ( ∑_{i=1}^{m} 1{z_i = j} )
1.6.2 Principal Component Analysis

❒ Spectral theorem – Let A ∈ ℝ^{n×n}. If A is symmetric, then A is diagonalizable by a real orthogonal matrix U ∈ ℝ^{n×n}. By noting Λ = diag(λ_1, ..., λ_n), we have:

∃Λ diagonal,   A = UΛU^T

Remark: the eigenvector associated with the largest eigenvalue is called the principal eigenvector of matrix A.

❒ Algorithm – The Principal Component Analysis (PCA) procedure is a dimension reduction technique that projects the data on k dimensions by maximizing the variance of the data as follows:

• Step 1: Normalize the data to have a mean of 0 and standard deviation of 1:

x_j^(i) ← (x_j^(i) − µ_j) / σ_j   where   µ_j = (1/m) ∑_{i=1}^{m} x_j^(i)   and   σ_j² = (1/m) ∑_{i=1}^{m} (x_j^(i) − µ_j)²

• Step 2: Compute Σ = (1/m) ∑_{i=1}^{m} x^(i) x^(i)T ∈ ℝ^{n×n}, which is symmetric with real eigenvalues.

• Step 3: Compute u_1, ..., u_k ∈ ℝ^n, the k orthogonal principal eigenvectors of Σ, i.e. the orthogonal eigenvectors of the k largest eigenvalues.

• Step 4: Project the data on span_ℝ(u_1, ..., u_k). This procedure maximizes the variance among all k-dimensional spaces.

2 States-based models

2.1 Search optimization

In this section, we assume that by accomplishing action a from state s, we deterministically arrive in state Succ(s,a). The goal here is to determine a sequence of actions (a_1, a_2, a_3, a_4, ...) that starts from an initial state and leads to an end state. In order to solve this kind of problem, our objective will be to find the minimum cost path by using states-based models.

2.1.1 Tree search

This category of states-based algorithms explores all possible states and actions. It is quite memory efficient, and is suitable for huge state spaces, but the runtime can become exponential in the worst cases.

❒ Search problem – A search problem is defined with:

• a starting state s_start
• possible actions Actions(s) from state s
• an action cost Cost(s,a) from state s with action a
• a successor Succ(s,a) of state s after action a
• whether an end state was reached, IsEnd(s)
❒ Breadth-first search (BFS) – Breadth-first search is a graph search algorithm that does a level-by-level traversal. We can implement it iteratively with the help of a queue that stores at each step future nodes to be visited. For this algorithm, we can assume action costs to be equal to a constant c ≥ 0.

❒ Depth-first search (DFS) – Depth-first search is a search algorithm that traverses a graph by following each path as deep as it can. We can implement it recursively, or iteratively with the help of a stack that stores at each step future nodes to be visited. For this algorithm, action costs are assumed to be equal to 0.
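Here is a minimal sketch of the queue-based level-by-level traversal described for BFS; the small graph is made up for the example.

```python
from collections import deque

# Illustrative sketch of breadth-first search: visit nodes level by level using a queue
# of future nodes to be visited, and return the visit order.
def bfs(succ, start):
    visited, order = {start}, [start]
    queue = deque([start])
    while queue:
        s = queue.popleft()
        for nxt in succ.get(s, []):
            if nxt not in visited:
                visited.add(nxt)
                order.append(nxt)
                queue.append(nxt)
    return order

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs(graph, "A"))   # ['A', 'B', 'C', 'D']
```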
❒ Graph – A graph is comprised of a set of vertices V (also called nodes) as well as a set of edges E (also called links).

Remark: a graph is said to be acyclic when there is no cycle.

❒ State – A state is a summary of all past actions sufficient to choose future actions optimally.

2.1.2 Graph search

This category of states-based algorithms aims at constructing optimal paths, enabling exponential savings. In this section, we will focus on dynamic programming and uniform cost search.

❒ Dynamic programming – Dynamic programming is a backtracking search algorithm with memoization whose goal is to find a minimum cost path from a state s to an end state s_end. For any given state s, the future cost is computed as follows:

FutureCost(s) = { 0 if IsEnd(s);   min_{a∈Actions(s)} [ Cost(s,a) + FutureCost(Succ(s,a)) ] otherwise }

Remark: a memoized implementation fills the values bottom-to-top, whereas the formula provides the intuition of a top-to-bottom problem resolution.

❒ Types of states – The table below presents the terminology when it comes to states in the context of uniform cost search:

Explored E:    states for which the optimal path has already been found.
Frontier F:    states seen for which we are still figuring out how to get there with the cheapest cost.
Unexplored U:  states not seen yet.
r Uniform cost search – Uniform cost search (UCS) is a search algorithm that aims at finding 2.1.4 A? search
the shortest path from a state sstart to an end state send . It explores states s in increasing order
of PastCost(s) and relies on the fact that all action costs are non-negative. r Heuristic function – A heuristic is a function h over states s, where each h(s) aims at
estimating FutureCost(s), the cost of the path from s to send .
r Algorithm – A∗ is a search algorithm that aims at finding the shortest path from a state s to
an end state send . It explores states s in increasing order of PastCost(s) + h(s). It is equivalent
to a uniform cost search with edge costs Cost0 (s,a) given by:
Cost0 (s,a) = Cost(s,a) + h(Succ(s,a)) − h(s)
Remark: this algorithm can be seen as a biased version of UCS exploring states estimated to be
closer to the end state.
Remark 1: the UCS algorithm is logically equivalent to Djikstra’s algorithm.
Remark 2: the algorithm would not work for a problem with negative action costs, and adding a r Consistency – A heuristic h is said to be consistent if it satisfies the two following properties:
positive constant to make them non-negative would not solve the problem since this would end
up being a different problem. • For all states s and actions a,
r Correctness theorem – When a state s is popped from the frontier F and moved to explored h(s) 6 Cost(s,a) + h(Succ(s,a))
set E, its priority is equal to PastCost(s) which is the minimum cost path from sstart to s.
r Graph search algorithms summary – By noting N the number of total states, n of which
are explored before the end state send , we have:
Remark: the complexity countdown supposes the number of possible actions per state to be
constant. • The end state verifies the following:
h(send ) = 0
2.1.3 Learning costs

Suppose we are not given the values of Cost(s,a); we want to estimate these quantities from a training set of minimizing-cost-path sequences of actions (a1, a2, ..., ak).
r Structured perceptron – The structured perceptron is an iterative algorithm that, at each step:
• decreases the estimated cost of each state-action of the true minimizing path y given by the training data,
• increases the estimated cost of each state-action of the current predicted path y' inferred from the learned weights.
Remark: there are several versions of the algorithm, one of which simplifies the problem to only learning the cost of each action a, and another which parametrizes Cost(s,a) with a feature vector of learnable weights.
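A rough sketch of one such update, assuming the parametrization Cost(s,a) = w · φ(s,a) mentioned in the remark above; the feature map `phi` and the paths are hypothetical inputs.

```python
# Hedged sketch of a perceptron-style cost update on weights w (a dict of features).
def perceptron_cost_update(w, true_path, predicted_path, phi, eta=1.0):
    """Decrease the cost along the true path, increase it along the predicted one."""
    for (s, a) in true_path:
        for k, v in phi(s, a).items():
            w[k] = w.get(k, 0.0) - eta * v   # the true path becomes cheaper
    for (s, a) in predicted_path:
        for k, v in phi(s, a).items():
            w[k] = w.get(k, 0.0) + eta * v   # the predicted path becomes costlier
    return w
```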
2.1.4 A* search

r Heuristic function – A heuristic is a function h over states s, where each h(s) aims at estimating FutureCost(s), the cost of the path from s to send.
r Algorithm – A* is a search algorithm that aims at finding the shortest path from a state s to an end state send. It explores states s in increasing order of PastCost(s) + h(s). It is equivalent to a uniform cost search with edge costs Cost'(s,a) given by:
Cost'(s,a) = Cost(s,a) + h(Succ(s,a)) − h(s)
Remark: this algorithm can be seen as a biased version of UCS exploring states estimated to be closer to the end state.
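As an illustration, the UCS sketch above only needs its priority changed to PastCost(s) + h(s); the helpers `succ_cost`, `is_end` and `h` remain hypothetical.

```python
import heapq, itertools

# Sketch of A*: identical to UCS except states are ordered by PastCost(s) + h(s).
def a_star(s_start, succ_cost, is_end, h):
    counter = itertools.count()
    frontier = [(h(s_start), 0.0, next(counter), s_start)]   # (priority, PastCost, tie, state)
    explored = set()
    while frontier:
        _, past_cost, _, s = heapq.heappop(frontier)
        if s in explored:
            continue
        explored.add(s)
        if is_end(s):
            return past_cost
        for cost, s_next in succ_cost(s):
            if s_next not in explored:
                g = past_cost + cost
                heapq.heappush(frontier, (g + h(s_next), g, next(counter), s_next))
    return float('inf')
```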
r Consistency – A heuristic h is said to be consistent if it satisfies the two following properties:
• For all states s and actions a,
h(s) ≤ Cost(s,a) + h(Succ(s,a))
• The end state verifies the following:
h(send) = 0
r Correctness – If h is consistent, then A* returns the minimum cost path.
r Admissibility – A heuristic h is said to be admissible if we have:
h(s) ≤ FutureCost(s)
r Theorem – Let h(s) be a given heuristic. We have:
h(s) consistent =⇒ h(s) admissible
r Efficiency – A* explores all states s satisfying the following equation:
PastCost(s) ≤ PastCost(send) − h(s)
Remark: larger values of h(s) are better, as this equation shows that they restrict the set of states s going to be explored.
r Max heuristic – Let h1(s), h2(s) be two heuristics. We have the following property:
h1(s), h2(s) consistent =⇒ h(s) = max{h1(s), h2(s)} consistent

2.1.5 Relaxation

Relaxation is a framework for producing consistent heuristics. The idea is to find closed-form reduced costs by removing constraints and to use them as heuristics.
r Relaxed search problem – The relaxation of a search problem P with costs Cost is noted Prel with costs Costrel, and satisfies the identity:
Costrel(s,a) ≤ Cost(s,a)
r Relaxed heuristic – Given a relaxed search problem Prel, we define the relaxed heuristic h(s) = FutureCostrel(s) as the minimum cost of a path from s to an end state in the graph of costs Costrel(s,a).
r Consistency of relaxed heuristics – Let Prel be a given relaxed problem. By the theorem above, we have:
h(s) = FutureCostrel(s) =⇒ h(s) consistent
r Tradeoff when choosing a heuristic – We have to balance two aspects when choosing a heuristic:
• Computational efficiency: h(s) = FutureCostrel(s) must be easy to compute, i.e. it has to produce a closed form, an easier search or independent subproblems.
• Good enough approximation: the heuristic h(s) should be close to FutureCost(s), so we must not remove too many constraints.

2.2 Markov decision processes

In this section, we assume that performing action a from state s can lead to several states s'1, s'2, ... in a probabilistic manner. In order to find our way between an initial state and an end state, our objective will be to find the maximum value policy by using Markov decision processes, which help us cope with randomness and uncertainty.

2.2.1 Notations

r Definition – The objective of a Markov decision process is to maximize rewards. It is defined with:
• a starting state sstart
• possible actions Actions(s) from state s
• transition probabilities T(s,a,s') from state s to state s' with action a
• rewards Reward(s,a,s') from state s to state s' with action a
• whether an end state was reached IsEnd(s)
• a discount factor 0 ≤ γ ≤ 1
r Transition probabilities – The transition probability T(s,a,s') specifies the probability of going to state s' after action a is taken in state s. Each s' ↦ T(s,a,s') is a probability distribution, which means that:
∀s,a, Σ_{s'∈States} T(s,a,s') = 1
r Policy – A policy π is a function that maps each state s to an action a, i.e.
π : s ↦ a
r Utility – The utility of a path (s0, ..., sk) is the discounted sum of the rewards on that path. In other words,
u(s0,...,sk) = Σ_{i=1}^{k} ri γ^(i−1)
Remark: the figure above is an illustration of the case k = 4.
r Q-value – The Q-value of a policy π by taking action a from state s, also noted Qπ(s,a), is the expected utility of taking action a from state s and then following policy π. It is defined as follows:
Qπ(s,a) = Σ_{s'∈States} T(s,a,s') [Reward(s,a,s') + γ Vπ(s')]
r Value of a policy – The value of a policy π from state s, also noted Vπ(s), is the expected utility obtained by following policy π from state s over random paths. It is defined as follows:
Vπ(s) = Qπ(s,π(s))
r Optimal Q-value – The optimal Q-value Qopt(s,a) of state s with action a is defined to be the maximum Q-value attained by any policy starting from state s with action a. It is computed as follows:
Qopt(s,a) = Σ_{s'∈States} T(s,a,s') [Reward(s,a,s') + γ Vopt(s')]
r Optimal value – The optimal value Vopt(s) of state s is defined as being the maximum value attained by any policy. It is computed as follows:
Vopt(s) = max_{a∈Actions(s)} Qopt(s,a)
r Optimal policy – The optimal policy πopt is defined as being the policy that leads to the optimal values. It is defined by:
∀s, πopt(s) = argmax_{a∈Actions(s)} Qopt(s,a)
2.2.2 Applications

r Policy evaluation – Given a policy π, policy evaluation is an iterative algorithm that estimates the value Vπ. It is done as follows:
• Initialization: for all states s, we have
Vπ^(0)(s) ←− 0
• Iteration: for t from 1 to TPE, we have
∀s, Vπ^(t)(s) ←− Qπ^(t−1)(s,π(s))
with
Qπ^(t−1)(s,π(s)) = Σ_{s'∈States} T(s,π(s),s') [Reward(s,π(s),s') + γ Vπ^(t−1)(s')]
Remark: by noting S the number of states, A the number of actions per state, S' the number of successors and T the number of iterations, the time complexity is O(TPE S S').
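A small tabular sketch of the iteration above; the container layouts (`T[s][a]` mapping successors to probabilities, `R[(s,a,s')]` giving rewards) are illustrative assumptions.

```python
# Sketch of iterative policy evaluation on a small tabular MDP.
def policy_evaluation(states, policy, T, R, gamma, num_iters):
    V = {s: 0.0 for s in states}                 # initialization: V_pi^(0)(s) = 0
    for _ in range(num_iters):                   # t = 1, ..., T_PE
        V_new = {}
        for s in states:
            a = policy.get(s)                    # end states may have no action
            succ = T.get(s, {}).get(a, {})       # {s': T(s, a, s')}
            V_new[s] = sum(p * (R[(s, a, s2)] + gamma * V[s2]) for s2, p in succ.items())
        V = V_new
    return V
```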
r Value iteration – Value iteration is an algorithm that finds the optimal value Vopt as well as the optimal policy πopt. It is done as follows:
• Initialization: for all states s, we have
Vopt^(0)(s) ←− 0
• Iteration: for t from 1 to TVI, we have
∀s, Vopt^(t)(s) ←− max_{a∈Actions(s)} Qopt^(t−1)(s,a)
Remark: if we have either γ < 1 or the MDP graph being acyclic, then the value iteration algorithm is guaranteed to converge to the correct answer.
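A matching sketch of value iteration, under the same assumed data layout as the policy evaluation sketch above.

```python
# Sketch of value iteration on a small tabular MDP.
def value_iteration(states, actions, T, R, gamma, num_iters):
    V = {s: 0.0 for s in states}                              # V_opt^(0)(s) = 0
    for _ in range(num_iters):
        V_new = {}
        for s in states:
            q_values = [sum(p * (R[(s, a, s2)] + gamma * V[s2])
                            for s2, p in T.get(s, {}).get(a, {}).items())
                        for a in actions.get(s, [])]
            V_new[s] = max(q_values) if q_values else 0.0     # end states keep value 0
        V = V_new
    # the optimal policy picks the argmax action of Q_opt in each state (omitted here)
    return V
```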
2.2.3 When unknown transitions and rewards

Now, let's assume that the transition probabilities and the rewards are unknown.
r Model-based Monte Carlo – The model-based Monte Carlo method aims at estimating T(s,a,s') and Reward(s,a,s') using Monte Carlo simulation with:
T̂(s,a,s') = (# times (s,a,s') occurs) / (# times (s,a) occurs)
and
R̂eward(s,a,s') = r in (s,a,r,s')
These estimations will then be used to deduce Q-values, including Qπ and Qopt.
Remark: model-based Monte Carlo is said to be off-policy, because the estimation does not depend on the exact policy.
r Model-free Monte Carlo – The model-free Monte Carlo method aims at directly estimating Qπ, as follows:
Q̂π(s,a) = average of ut where st−1 = s, at = a
where ut denotes the utility starting at step t of a given episode.
Remark: model-free Monte Carlo is said to be on-policy, because the estimated value is dependent on the policy π used to generate the data.
r Equivalent formulation – By introducing the constant η = 1 / (1 + (# updates to (s,a))) and for each (s,a,u) of the training set, the update rule of model-free Monte Carlo has a convex combination formulation:
Q̂π(s,a) ← (1 − η) Q̂π(s,a) + η u
as well as a stochastic gradient formulation:
Q̂π(s,a) ← Q̂π(s,a) − η (Q̂π(s,a) − u)
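As a sketch, the convex-combination update above can be implemented with two dictionaries of estimates and counts; the names below are illustrative.

```python
# Sketch of the incremental model-free Monte Carlo update for one observation (s, a, u).
def update_q(Q_hat, counts, s, a, u):
    counts[(s, a)] = counts.get((s, a), 0) + 1
    eta = 1.0 / (1 + counts[(s, a)])
    Q_hat[(s, a)] = (1 - eta) * Q_hat.get((s, a), 0.0) + eta * u
    return Q_hat[(s, a)]
```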
2.3 Game playing

In this section, we consider games in which an agent plays against an opponent. Such a game is defined with:
• a starting state sstart
• possible actions Actions(s) from state s
• successors Succ(s,a) from states s with actions a
• whether an end state was reached IsEnd(s)
• the agent's utility Utility(s) at end state s
• the player Player(s) who controls state s
Remark: we will assume that the utility of the agent has the opposite sign of the one of the opponent.
r Types of policies – There are two types of policies:
• Deterministic policies, noted πp(s), which are actions that player p takes in state s.
• Stochastic policies, noted πp(s,a) ∈ [0,1], which are probabilities that player p takes action a in state s.
Remark: we can extract πmax and πmin from the minimax value Vminimax.
r Minimax properties – By noting V the value function, there are several properties around minimax to have in mind:
• Property 1: if the agent were to change its policy to any πagent, then the agent would be no better off:
∀πagent, V(πmax,πmin) ≥ V(πagent,πmin)
• Property 2: if the opponent changes its policy from πmin to πopp, then it will be no better off:
∀πopp, V(πmax,πmin) ≤ V(πmax,πopp)
In the end, we have the following relationship:
V(πexptmax,πmin) ≤ V(πmax,πmin) ≤ V(πmax,π) ≤ V(πexptmax,π)
2.3.1 Speeding up minimax

r Evaluation function – An evaluation function is a domain-specific and approximate estimate of the value Vminimax(s). It is noted Eval(s).
Remark: FutureCost(s) is the analogous quantity for search problems.
r TD learning – Temporal difference (TD) learning is used when we don't know the transitions/rewards. The value is based on an exploration policy. To be able to use it, we need to know the rules of the game Succ(s,a). For each (s,a,r,s'), the update is done as follows:
w ←− w − η [V(s,w) − (r + γV(s',w))] ∇w V(s,w)
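A minimal sketch of one TD update, assuming a linear value function V(s,w) = w · φ(s); the feature map `phi` (returning a dict) is a hypothetical helper.

```python
# Sketch of one TD update for a linear value function; the gradient of V(s, w) is phi(s).
def td_update(w, phi, s, r, s_next, gamma, eta):
    features = phi(s)
    v_s = sum(w.get(k, 0.0) * v for k, v in features.items())
    v_next = sum(w.get(k, 0.0) * v for k, v in phi(s_next).items())
    delta = v_s - (r + gamma * v_next)           # TD error
    for k, v in features.items():
        w[k] = w.get(k, 0.0) - eta * delta * v
    return w
```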
2.3.2 Simultaneous games

This is the contrary of turn-based games: there is no ordering on the players' moves.
r Single-move simultaneous game – Let there be two players A and B, with given possible actions. We note V(a,b) to be A's utility if A chooses action a and B chooses action b. V is called the payoff matrix.
r Mixed strategy – A mixed strategy is a probability distribution over actions:
∀a ∈ Actions, 0 ≤ π(a) ≤ 1
r Game evaluation – The value of the game V(πA,πB) when player A follows πA and player B follows πB is such that:
V(πA,πB) = Σ_{a,b} πA(a) πB(b) V(a,b)
Remark: in any finite-player game with a finite number of actions, there exists at least one Nash equilibrium.
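As a small sketch of the game evaluation formula above, with strategies as dicts and a hypothetical payoff matrix:

```python
# Sketch of evaluating V(pi_A, pi_B); `payoff[(a, b)]` plays the role of V(a, b).
def game_value(pi_A, pi_B, payoff):
    return sum(pa * pb * payoff[(a, b)]
               for a, pa in pi_A.items()
               for b, pb in pi_B.items())

# Example with hypothetical payoffs and a mixed strategy for player A:
# game_value({'x': 0.5, 'y': 0.5}, {'x': 1.0}, {('x', 'x'): 1, ('y', 'x'): -1})  # -> 0.0
```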
3.1.1 Factor graphs

r Definition – A factor graph, also referred to as a Markov random field, is a set of variables X = (X1,...,Xn) where Xi ∈ Domaini and m factors f1,...,fm with each fj(X) ≥ 0.
r Scope and arity – The scope of a factor fj is the set of variables it depends on. The size of this set is called the arity.
Remark: factors of arity 1 and 2 are called unary and binary respectively.
r Assignment weight – Each assignment x = (x1,...,xn) yields a weight Weight(x) defined as being the product of all factors fj applied to that assignment. Its expression is given by:
Weight(x) = ∏_{j=1}^{m} fj(x)
Here, the constraint j with assignment x is said to be satisfied if and only if fj(x) = 1.
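A tiny sketch of the weight computation; representing each factor as a Python function of the full assignment dict is an assumption of this sketch.

```python
# Sketch of computing the weight of an assignment in a factor graph.
def assignment_weight(x, factors):
    weight = 1.0
    for f in factors:
        weight *= f(x)            # product of all factors applied to x
    return weight

# Example with two illustrative factors over variables X1, X2:
factors = [lambda x: 1.0 if x["X1"] != x["X2"] else 0.0,
           lambda x: 1.0 if x["X1"] == "red" else 0.5]
# assignment_weight({"X1": "red", "X2": "blue"}, factors)  # -> 1.0
```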
r Backtracking search – Backtracking search is an algorithm used to find maximum weight assignments of a factor graph. At each step, it chooses an unassigned variable and explores its values by recursion. Dynamic ordering (i.e. the choice of variables and values) and lookahead (i.e. early elimination of inconsistent options) can be used to explore the graph more efficiently, although the worst-case runtime stays exponential: O(|Domain|^n).
r Forward checking – It is a one-step lookahead heuristic that preemptively removes inconsistent values from the domains of neighboring variables. It has the following characteristics:
• After assigning a variable Xi, it eliminates inconsistent values from the domains of all its neighbors.
• If any of these domains becomes empty, we stop the local backtracking search.
• If we un-assign a variable Xi, we have to restore the domains of its neighbors.
r Most constrained variable – It is a variable-level ordering heuristic that selects the next unassigned variable that has the fewest consistent values. This makes inconsistent assignments fail earlier in the search, which enables more efficient pruning.
r Least constrained value – It is a value-level ordering heuristic that assigns the next value that yields the highest number of consistent values for neighboring variables. Intuitively, this procedure chooses first the values that are most likely to work.
Remark: in practice, this heuristic is useful when all factors are constraints.
A classical illustration is the 3-color problem, solved with backtracking search coupled with most constrained variable exploration and the least constrained value heuristic, as well as forward checking at each step; a sketch is given below.
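The following is a rough sketch of such a backtracking search on a tiny 3-coloring instance, using most constrained variable ordering and forward checking; the triangle graph below is an illustrative assumption, and the least constrained value heuristic is omitted for brevity.

```python
# Sketch of backtracking search with most constrained variable + forward checking.
def backtrack(assignment, domains, neighbors):
    if len(assignment) == len(domains):
        return assignment
    # most constrained variable: unassigned variable with the fewest remaining values
    var = min((v for v in domains if v not in assignment),
              key=lambda v: len(domains[v]))
    for value in list(domains[var]):
        # forward checking: remove `value` from the domains of unassigned neighbors
        pruned = []
        for n in neighbors[var]:
            if n not in assignment and value in domains[n]:
                domains[n].remove(value)
                pruned.append(n)
        if all(domains[n] for n in neighbors[var] if n not in assignment):
            result = backtrack({**assignment, var: value}, domains, neighbors)
            if result is not None:
                return result
        for n in pruned:                      # restore domains when backtracking
            domains[n].add(value)
    return None

neighbors = {"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}}
domains = {v: {"red", "green", "blue"} for v in neighbors}
print(backtrack({}, domains, neighbors))      # one valid 3-coloring of the triangle
```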
r Arc consistency – We say that arc consistency of variable Xl with respect to Xk is enforced when for each xl ∈ Domainl:
• unary factors of Xl are non-zero,
• there exists at least one xk ∈ Domaink such that any factor between Xl and Xk is non-zero.
r AC-3 – The AC-3 algorithm is a multi-step lookahead heuristic that applies forward checking to all relevant variables. After a given assignment, it performs forward checking and then successively enforces arc consistency with respect to the neighbors of the variables whose domain changed during the process.
Remark: AC-3 can be implemented both iteratively and recursively.

3.1.3 Approximate methods

r Beam search – Beam search is an approximate algorithm that extends partial assignments of n variables of branching factor b = |Domain| by exploring the K top paths at each step. The beam size K ∈ {1,...,b^n} controls the tradeoff between efficiency and accuracy. This algorithm has a time complexity of O(n · Kb log(Kb)).
The example below illustrates a possible beam search with parameters K = 2, b = 3 and n = 5.
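Separately from that example, a generic sketch of beam search over partial assignments could look as follows; the scoring helper `partial_weight` is a hypothetical stand-in for the weight of a partial assignment.

```python
import heapq

# Sketch of beam search: extend assignments variable by variable, keeping the K best.
def beam_search(variables, domains, partial_weight, K):
    beam = [{}]                                        # start from the empty assignment
    for var in variables:
        candidates = [{**x, var: v} for x in beam for v in domains[var]]
        beam = heapq.nlargest(K, candidates, key=partial_weight)
    return max(beam, key=partial_weight) if beam else None
```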
r Iterated conditional modes – ICM is an iterative approximate method that, starting from an assignment, cycles through the variables Xi and, for each of them:
• assigns to each element u ∈ Domaini a weight w(u) that is the product of all factors connected to that variable,
• sets Xi to the value u with the highest weight w(u).
Remark: Gibbs sampling (defined below) can be seen as the probabilistic counterpart of ICM. It has the advantage of being able to escape local minima in most cases.
3.1.4 Factor graph transformations

r Independence – Let A,B be a partitioning of the variables X. We say that A and B are independent if there are no edges between A and B, and we write:
A,B independent ⇐⇒ A ⊥⊥ B
Remark: independence is the key property that allows us to solve subproblems in parallel.
r Conditional independence – We say that A and B are conditionally independent given C if conditioning on C produces a graph in which A and B are independent. In this case, it is written:
A ⊥⊥ B | C
r Conditioning – Conditioning on Xi = v removes Xi from the graph by replacing each factor fj that depends on Xi with:
gj(x) = fj(x ∪ {Xi : v})
r Markov blanket – Let A ⊆ X be a subset of variables. We define MarkovBlanket(A) to be the neighbors of A that are not in A.
r Proposition – Let C = MarkovBlanket(A) and B = X\(A ∪ C). Then we have:
A ⊥⊥ B | C
r Elimination – Elimination is a factor graph transformation that removes Xi from the graph and solves a small subproblem conditioned on its Markov blanket, as follows:
• Consider all factors fi,1,...,fi,k that depend on Xi
• Remove Xi and fi,1,...,fi,k
• Add fnew,i(x) defined as:
fnew,i(x) = max_{xi} ∏_{l=1}^{k} fi,l(x)
r Treewidth – The treewidth of a factor graph is the maximum arity of any factor created by variable elimination with the best variable ordering. In other words,
Treewidth = min_{orderings} max_{i∈{1,...,n}} arity(fnew,i)
The example below illustrates the case of a factor graph of treewidth 3.
Remark: finding the best variable ordering is an NP-hard problem.
3.2 Bayesian networks

In this section, our goal will be to compute conditional probabilities: what is the probability of a query given evidence?

3.2.1 Introduction

r Explaining away – Suppose causes C1 and C2 influence an effect E. Conditioning on the effect E and on one of the causes (say C1) changes the probability of the other cause (say C2). In this case, we say that C1 has explained away C2.
r Directed acyclic graph – A directed acyclic graph (DAG) is a finite directed graph with no directed cycles.
r Bayesian network – A Bayesian network is a directed acyclic graph (DAG) that specifies a joint distribution over random variables X = (X1,...,Xn) as a product of local conditional distributions, one for each node:
P(X1 = x1,...,Xn = xn) = ∏_{i=1}^{n} p(xi | xParents(i))
Remark: Bayesian networks are factor graphs imbued with the language of probability.
r Locally normalized – For each xParents(i), all factors are local conditional distributions. Hence they have to satisfy:
Σ_{xi} p(xi | xParents(i)) = 1
As a result, sub-Bayesian networks and conditional distributions are consistent.
Remark: local conditional distributions are the true conditional distributions.
3.2.2 Probabilistic programs

r Concept – A probabilistic program randomizes variable assignments. That way, we can write down complex Bayesian networks that generate assignments without us having to explicitly specify the associated probabilities.
Remark: examples of probabilistic programs include the Hidden Markov model (HMM), factorial HMM, naive Bayes, latent Dirichlet allocation, diseases and symptoms, and stochastic block models.
r Summary – The table below summarizes the common probabilistic programs as well as their applications:

Program                      Algorithm                                                  Example
Hidden Markov Model (HMM)    Ht ∼ p(Ht|Ht−1), Et ∼ p(Et|Ht)                             Object tracking
Factorial HMM                Ht^o ∼ p(Ht^o|Ht−1^o) for o ∈ {a,b}, Et ∼ p(Et|Ht^a,Ht^b)  Multiple object tracking
Naive Bayes                  Y ∼ p(Y), Wi ∼ p(Wi|Y)                                     Document classification

3.2.3 Inference

r General probabilistic inference strategy – The strategy to compute the probability P(Q|E = e) of query Q given evidence E = e is as follows:
• Step 1: Remove variables that are not ancestors of the query Q or the evidence E by marginalization
• Step 2: Convert the Bayesian network to a factor graph
• Step 3: Condition on the evidence E = e
• Step 4: Remove nodes disconnected from the query Q by marginalization
• Step 5: Run a probabilistic inference algorithm (manual, variable elimination, Gibbs sampling, particle filtering)
r Forward-backward algorithm – In a Hidden Markov Model of size L, the smoothing query P(H = hk | E = e) can be computed exactly with the following three steps:
• Step 1: for i ∈ {1,...,L}, compute Fi(hi) = Σ_{hi−1} Fi−1(hi−1) p(hi|hi−1) p(ei|hi)
• Step 2: for i ∈ {L,...,1}, compute Bi(hi) = Σ_{hi+1} Bi+1(hi+1) p(hi+1|hi) p(ei+1|hi+1)
• Step 3: for i ∈ {1,...,L}, compute Si(hi) = Fi(hi)Bi(hi) / Σ_{hi} Fi(hi)Bi(hi)
with the convention F0 = BL+1 = 1. From this procedure and these notations, we get that
P(H = hk | E = e) = Sk(hk)
Remark: this algorithm interprets each assignment as a path where each edge hi−1 → hi has weight p(hi|hi−1) p(ei|hi).
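A small sketch of these recursions for a discrete HMM; the containers `p_init` (initial distribution of h1, folded into p(h1|h0) in the text), `p_trans` and `p_emit` are illustrative assumptions.

```python
# Sketch of forward-backward smoothing for a small discrete HMM.
def forward_backward(evidence, hidden_states, p_init, p_trans, p_emit):
    L = len(evidence)
    F = [None] * (L + 1)
    F[1] = {h: p_init[h] * p_emit[h][evidence[0]] for h in hidden_states}
    for i in range(2, L + 1):                       # forward pass
        F[i] = {h: sum(F[i-1][hp] * p_trans[hp][h] for hp in hidden_states)
                   * p_emit[h][evidence[i-1]] for h in hidden_states}
    B = [None] * (L + 1)                            # convention: B_{L+1} = 1, hence B_L = 1
    for i in range(L, 0, -1):                       # backward pass
        if i == L:
            B[i] = {h: 1.0 for h in hidden_states}
        else:
            B[i] = {h: sum(p_trans[h][hn] * p_emit[hn][evidence[i]] * B[i+1][hn]
                           for hn in hidden_states) for h in hidden_states}
    S = []
    for i in range(1, L + 1):                       # smoothing: S_i proportional to F_i * B_i
        unnorm = {h: F[i][h] * B[i][h] for h in hidden_states}
        Z = sum(unnorm.values())
        S.append({h: unnorm[h] / Z for h in hidden_states})
    return S
```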
r Gibbs sampling – This algorithm is an iterative approximate method that uses a small set of assignments (particles) to represent a large probability distribution. From a random assignment x, Gibbs sampling performs the following steps for i ∈ {1,...,n} until convergence:
• For all u ∈ Domaini, compute the weight w(u) of assignment x where Xi = u
• Sample v from the probability distribution induced by w: v ∼ P(Xi = v | X−i = x−i)
• Set Xi = v
Remark: X−i denotes X\{Xi} and x−i represents the corresponding assignment.
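A sketch of one such sweep over the variables; `domains` and `factors_of[i]` (the factors touching variable Xi) are assumed containers of this sketch.

```python
import random

# Sketch of one Gibbs sampling sweep over a factor-graph assignment x (a dict).
def gibbs_sweep(x, domains, factors_of):
    for i in domains:                          # one pass over all variables X_i
        values = list(domains[i])
        weights = []
        for u in values:
            x[i] = u
            w = 1.0
            for f in factors_of[i]:            # only factors connected to X_i matter
                w *= f(x)
            weights.append(w)
        x[i] = random.choices(values, weights=weights)[0]   # v ~ P(X_i | X_-i = x_-i)
    return x
```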
r Particle filtering – This algorithm approximates the posterior density of state variables given the evidence of observation variables by keeping track of K particles at a time. Starting from a set of particles C of size K, we run the following 3 steps iteratively:
• Step 1: proposal – For each old particle xt−1 ∈ C, sample x from the transition probability distribution p(x|xt−1) and add x to a set C'.
• Step 2: weighting – Weigh each x of the set C' by w(x) = p(et|x), where et is the evidence observed at time t.
• Step 3: resampling – Sample K elements from the set C' using the probability distribution induced by w and store them in C: these are the current particles xt.
Remark: a more expensive version of this algorithm also keeps track of past particles in the proposal step.
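A compact sketch of one such step; `sample_transition(x)` and `p_emit(e, x)` are hypothetical helpers for the transition and emission models.

```python
import random

# Sketch of one particle-filtering step: proposal, weighting, resampling.
def particle_filter_step(particles, e_t, sample_transition, p_emit):
    K = len(particles)
    proposed = [sample_transition(x) for x in particles]        # Step 1: proposal
    weights = [p_emit(e_t, x) for x in proposed]                # Step 2: weighting
    return random.choices(proposed, weights=weights, k=K)       # Step 3: resampling
```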
r Maximum likelihood – If we don't know the local conditional distributions, we can learn them using maximum likelihood:
max_θ ∏_{x∈Dtrain} p(X = x; θ)
r Laplace smoothing – For each distribution d and partial assignment (xParents(i), xi), add λ to countd(xParents(i), xi), then normalize to get probability estimates.
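A small sketch of this smoothing from raw counts; the layout `counts[parents][value]` is an assumption of this sketch.

```python
# Sketch of Laplace smoothing of a local conditional distribution p(x_i | x_Parents(i)).
def laplace_smooth(counts, domain, lam=1.0):
    smoothed = {}
    for parents, value_counts in counts.items():
        total = sum(value_counts.get(v, 0) for v in domain) + lam * len(domain)
        smoothed[parents] = {v: (value_counts.get(v, 0) + lam) / total for v in domain}
    return smoothed
```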
r Algorithm – The Expectation-Maximization (EM) algorithm gives an efficient method for estimating the parameter θ through maximum likelihood estimation by repeatedly constructing a lower bound on the likelihood (E-step) and optimizing that lower bound (M-step), as follows:
• E-step: Evaluate the posterior probability q(h) that each data point e came from a particular cluster h, as follows:
q(h) = P(H = h | E = e; θ)
• M-step: Use the posterior probabilities q(h) as cluster-specific weights on the data points e to re-estimate θ through maximum likelihood.
4 Logic-based models

4.1 Concepts

In this section, we will go through logic-based models that use logical formulas and inference rules. The idea here is to balance expressivity and computational efficiency.
r Syntax of propositional logic – By noting f,g formulas and ¬, ∧, ∨, →, ↔ connectives, we can write the following logical expressions:

Name            Symbol     Translation
Negation        ¬f         not f
Conjunction     f ∧ g      f and g
Disjunction     f ∨ g      f or g
Implication     f → g      if f then g
Biconditional   f ↔ g      f, that is to say g

Remark: formulas can be built up recursively.
r Model – A model w is an assignment of truth values to propositional symbols.
r Interpretation function – Let f be a formula and w be a model; the interpretation function I(f,w) is such that:
I(f,w) ∈ {0,1}
r Set of models – We note M(f) the set of models w for which we have:
∀w ∈ M(f), I(f,w) = 1
r Knowledge base – A knowledge base KB is defined to be a set of formulas representing their conjunction, as follows:
M(KB) = ⋂_{f∈KB} M(f)
r Entailment – A knowledge base KB is said to entail f, which is noted KB |= f. We have:
KB |= f ⇐⇒ M(KB) ⊆ M(f)
r Contradiction – A knowledge base KB contradicts f if and only if we have the following:
M(KB) ∩ M(f) = ∅
Remark: we can quantify the uncertainty of the overlap of the two by computing the following quantity:
P(f|KB) = ( Σ_{w∈M(KB∪{f})} P(W = w) ) / ( Σ_{w∈M(KB)} P(W = w) )
r Satisfiability – A knowledge base KB is said to be satisfiable if we have:
M(KB) ≠ ∅
r Model checking – Model checking is an algorithm that takes as input a knowledge base KB and checks whether we have M(KB) ≠ ∅.
Remark: popular model checking algorithms include DPLL and WalkSat.
r Inference rule – If f1,...,fk,g are formulas, then the following is an inference rule:
f1, ..., fk
-----------
g
r Modus ponens inference rule – For any propositional symbols p1,...,pk,q, we have:
p1, ..., pk,   (p1 ∧ ... ∧ pk) → q
-----------------------------------
q
Remark: this can take linear time.
r Derivation – We say that KB derives f, which we note KB ⊢ f, if and only if f eventually gets added to KB.
r Soundness/completeness – A set of inference rules can have the following properties:

Property        Meaning
Soundness       {f : KB ⊢ f} ⊆ {f : KB |= f}
Completeness    {f : KB ⊢ f} ⊇ {f : KB |= f}
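As a tiny sketch of model checking and entailment by enumeration, with formulas represented as Python predicates over a model dict (an assumption of this sketch):

```python
from itertools import product

# Sketch: enumerate every truth assignment and keep those satisfying the knowledge base.
def models(symbols, formulas):
    return [dict(zip(symbols, values))
            for values in product([True, False], repeat=len(symbols))
            if all(f(dict(zip(symbols, values))) for f in formulas)]

symbols = ["rain", "wet"]
KB = [lambda w: (not w["rain"]) or w["wet"]]       # rain -> wet
f = lambda w: w["wet"]
satisfiable = len(models(symbols, KB)) > 0
entails = all(f(w) for w in models(symbols, KB))   # KB |= f iff M(KB) is a subset of M(f)
print(satisfiable, entails)                        # True False (a rain-free dry world exists)
```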
4.2 Propositional logic

r Definite clause – By noting p1,...,pk,q propositional symbols, we define a definite clause as having the following form:
(p1 ∧ ... ∧ pk) −→ q
Remark: the case when q = false is called a goal clause.
r Horn clause – A Horn clause is defined to be either a definite clause or a goal clause.
r Modus ponens on Horn clauses – Modus ponens is complete with respect to Horn clauses: if we suppose that KB contains only Horn clauses and p is an entailed propositional symbol, then applying modus ponens will derive p.
r Resolution inference rule – The resolution inference rule is a generalized inference rule and is written as follows:
f1 ∨ ... ∨ fn ∨ p,   ¬p ∨ g1 ∨ ... ∨ gm
-----------------------------------------
f1 ∨ ... ∨ fn ∨ g1 ∨ ... ∨ gm
Remark: this can take exponential time.
r Conjunctive normal form – A conjunctive normal form (CNF) formula is a conjunction of clauses.
r Conversion to CNF – Every formula f in propositional logic can be converted into an equivalent CNF formula f':
M(f) = M(f')
r Resolution-based inference – The resolution-based inference algorithm follows the following steps:
• Step 1: Convert all formulas into CNF
• Step 2: Repeatedly apply the resolution rule
• Step 3: Return unsatisfiable if and only if false is derived

4.3 First-order logic

The idea here is that variables yield compact knowledge representations.
r Model – A model w in first-order logic maps:
• constant symbols to objects
• predicate symbols to tuples of objects
r Definite clause – By noting x1,...,xn variables and a1,...,ak,b atomic formulas, a definite clause has the following form:
∀x1,...,∀xn (a1 ∧ ... ∧ ak) → b
r Modus ponens – By noting x1,...,xn variables and a1,...,ak,b atomic formulas, modus ponens has the following form:
a1, ..., ak,   ∀x1,...,∀xn (a1 ∧ ... ∧ ak) → b
------------------------------------------------
b
r Substitution – A substitution θ is a mapping from variables to terms. Subst[θ,f] returns the result of performing substitution θ on f.
r Unification – Unification takes two formulas f and g and returns a substitution θ which is the most general unifier:
Unify[f,g] = θ s.t. Subst[θ,f] = Subst[θ,g]
or fail if no such θ exists.
r Completeness – Modus ponens is complete for first-order logic with only Horn clauses.
r Semi-decidability – First-order logic, even restricted to only Horn clauses, is semi-decidable.