Data Mining: Classification

DATA MINING

LECTURE 10
Classification
Basic Concepts
Decision Trees
Catching tax-evasion

Tax-return data for year 2011:

Tid  Refund  Marital Status  Taxable Income  Cheat
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   No      Single          90K             Yes

A new tax return for 2012: Refund = No, Marital Status = Married, Taxable Income = 80K, Cheat = ?
Is this a cheating tax return?

An instance of the classification problem: learn a method for discriminating between records of different classes (cheaters vs. non-cheaters).
What is classification?
• Classification is the task of learning a target function f that maps an attribute set x to one of the predefined class labels y.

(In the tax-return table above, Refund and Marital Status are categorical attributes, Taxable Income is continuous, and Cheat is the class.)

• One of the attributes is the class attribute; in this case: Cheat
• Two class labels (or classes): Yes (1), No (0)
Why classification?
• The target function f is known as a classification model.

• Descriptive modeling: an explanatory tool to distinguish between objects of different classes (e.g., understand why people cheat on their taxes).

• Predictive modeling: predict the class of a previously unseen record.
Examples of Classification Tasks
• Predicting tumor cells as benign or malignant

• Classifying credit card transactions as legitimate or fraudulent

• Categorizing news stories as finance, weather, entertainment, sports, etc.

• Identifying spam email, spam web pages, adult content

• Understanding whether a web query has commercial intent or not
General approach to classification
• The training set consists of records with known class labels.

• The training set is used to build a classification model.

• A labeled test set of previously unseen data records is used to evaluate the quality of the model.

• The classification model is applied to new records with unknown class labels.
Illustrating Classification Task

Training Set:

Tid  Attrib1  Attrib2  Attrib3  Class
1    Yes      Large    125K     No
2    No       Medium   100K     No
3    No       Small    70K      No
4    Yes      Medium   120K     No
5    No       Large    95K      Yes
6    No       Medium   60K      No
7    Yes      Large    220K     No
8    No       Small    85K      Yes
9    No       Medium   75K      No
10   No       Small    90K      Yes

Induction: a learning algorithm is applied to the training set to learn a model.

Test Set:

Tid  Attrib1  Attrib2  Attrib3  Class
11   No       Small    55K      ?
12   Yes      Medium   80K      ?
13   Yes      Large    110K     ?
14   No       Small    95K      ?
15   No       Large    67K      ?

Deduction: the learned model is applied to the test set to predict the unknown class labels.
Evaluation of classification models
• Counts of test records that are correctly (or incorrectly) predicted by the classification model
• Confusion matrix:

                        Predicted Class
                        Class = 1   Class = 0
Actual     Class = 1    f11         f10
Class      Class = 0    f01         f00

Accuracy = (# correct predictions) / (total # of predictions)
         = (f11 + f00) / (f11 + f10 + f01 + f00)

Error rate = (# wrong predictions) / (total # of predictions)
           = (f10 + f01) / (f11 + f10 + f01 + f00)
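As a concrete, minimal illustration, here is a short Python sketch of these two formulas (function names and example counts are ours, not from the lecture):

def accuracy(f11, f10, f01, f00):
    """Fraction of test records that are predicted correctly."""
    return (f11 + f00) / (f11 + f10 + f01 + f00)

def error_rate(f11, f10, f01, f00):
    """Fraction of test records that are predicted incorrectly."""
    return (f10 + f01) / (f11 + f10 + f01 + f00)

# Hypothetical counts: 50 + 35 correct, 10 + 5 wrong, out of 100 records.
print(accuracy(50, 10, 5, 35))    # 0.85
print(error_rate(50, 10, 5, 35))  # 0.15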
Classification Techniques
• Decision Tree based Methods
• Rule-based Methods
• Memory based reasoning
• Neural Networks
• Naïve Bayes and Bayesian Belief Networks
• Support Vector Machines
Decision Trees
• Decision tree
• A flow-chart-like tree structure
• Internal node denotes a test on an attribute
• Branch represents an outcome of the test
• Leaf nodes represent class labels or class distribution
Example of a Decision Tree

Training data: the tax-return table above (Refund and Marital Status are categorical, Taxable Income is continuous, Cheat is the class).

Model: a decision tree. Refund is the first splitting attribute; each internal node is an attribute test, each branch is a test outcome, and each leaf carries a class label.

Refund?
  Yes → NO
  No  → MarSt?
    Single, Divorced → TaxInc?
      < 80K → NO
      > 80K → YES
    Married → NO
Another Example of Decision Tree

Same training data, different model:

MarSt?
  Married → NO
  Single, Divorced → Refund?
    Yes → NO
    No  → TaxInc?
      < 80K → NO
      > 80K → YES

There could be more than one tree that fits the same data!
Decision Tree Classification Task

The same workflow as above, with a decision tree as the model: a tree-induction algorithm learns a decision tree from the training set (induction), and the learned tree is then applied to the test set to predict the unknown class labels (deduction).
Apply Model to Test Data

Test record: Refund = No, Marital Status = Married, Taxable Income = 80K, Cheat = ?

Start from the root of the tree and follow the branch that matches the record at each node:
1. Refund? The record has Refund = No, so follow the "No" branch to MarSt.
2. MarSt? The record has Marital Status = Married, so follow the "Married" branch.
3. The "Married" branch leads directly to the leaf NO.

Assign Cheat = "No" to the test record.
Tree Induction
• Finding the best decision tree is NP-hard

• Greedy strategy:
  • Split the records based on an attribute test that optimizes a certain criterion.

• Many Algorithms:
• Hunt’s Algorithm (one of the earliest)
• CART
• ID3, C4.5
• SLIQ,SPRINT
General Structure of Hunt’s Algorithm
• Let Dt be the set of training records that reach a node t

• General Procedure:
  • If Dt contains records that all belong to the same class yt, then t is a leaf node labeled as yt
  • If Dt contains records that have identical attribute values but belong to different classes, then t is a leaf node labeled with the majority class yt
  • If Dt is an empty set, then t is a leaf node labeled by the default class yd
  • If Dt contains records that belong to more than one class, use an attribute test to split the data into smaller subsets
• Recursively apply the procedure to each subset

(Illustrated below on the tax-return training data.)
Hunt’s Algorithm

Applied to the tax-return data, the tree grows in steps:

Step 1: A single leaf predicting the majority class: Don’t Cheat.

Step 2: Split on Refund:
  Refund?
    Yes → Don’t Cheat
    No  → Don’t Cheat

Step 3: Refine the Refund = No branch by splitting on Marital Status:
  Refund?
    Yes → Don’t Cheat
    No  → Marital Status?
      Single, Divorced → Cheat
      Married → Don’t Cheat

Step 4: Refine the Single/Divorced branch by splitting on Taxable Income:
  Refund?
    Yes → Don’t Cheat
    No  → Marital Status?
      Single, Divorced → Taxable Income?
        < 80K  → Don’t Cheat
        >= 80K → Cheat
      Married → Don’t Cheat
Constructing decision-trees (pseudocode)
GenDecTree(Sample S, Features F)
1. If stopping_condition(S,F) = true then
   a. leaf = createNode()
   b. leaf.label = Classify(S)
   c. return leaf
2. root = createNode()
3. root.test_condition = findBestSplit(S,F)
4. V = {v | v is a possible outcome of root.test_condition}
5. for each value v in V:
   a. Sv = {s | root.test_condition(s) = v and s in S}
   b. child = GenDecTree(Sv, F)
   c. add child as a descendant of root and label the edge (root → child) as v
6. return root
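The pseudocode translates almost line-for-line into Python. The following is a minimal sketch under two assumptions of ours: the caller supplies find_best_split (returning a function record → outcome) and stopping_condition, and Classify(S) returns the majority label.

from collections import Counter

class Node:
    def __init__(self):
        self.label = None           # class label (set on leaf nodes)
        self.test_condition = None  # function: record -> outcome value
        self.children = {}          # outcome value -> child Node

def classify(samples):
    """Classify(S): return the majority class label in S."""
    return Counter(label for _, label in samples).most_common(1)[0][0]

def gen_dec_tree(samples, features, find_best_split, stopping_condition):
    """GenDecTree(S, F): grow a decision tree recursively."""
    if stopping_condition(samples, features):
        leaf = Node()
        leaf.label = classify(samples)
        return leaf
    root = Node()
    root.test_condition = find_best_split(samples, features)
    outcomes = {root.test_condition(r) for r, _ in samples}
    for v in outcomes:                      # step 5: one child per outcome v
        s_v = [(r, y) for r, y in samples if root.test_condition(r) == v]
        root.children[v] = gen_dec_tree(s_v, features,
                                        find_best_split, stopping_condition)
    return root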
Tree Induction
• Issues
  • How to classify a leaf node
    • Assign the majority class
    • If the leaf is empty, assign the default class: the class that has the highest overall frequency
  • Determine how to split the records
    • How to specify the attribute test condition?
    • How to determine the best split?
  • Determine when to stop splitting
How to Specify Test Condition?
• Depends on attribute types
• Nominal
• Ordinal
• Continuous

• Depends on number of ways to split


• 2-way split
• Multi-way split
Splitting Based on Nominal Attributes
• Multi-way split: use as many partitions as distinct values.

  CarType? → Family | Sports | Luxury

• Binary split: divides values into two subsets; need to find the optimal partitioning.

  CarType? → {Sports, Luxury} | {Family}     OR     CarType? → {Family, Luxury} | {Sports}
Splitting Based on Ordinal Attributes
• Multi-way split: use as many partitions as distinct values.

  Size? → Small | Medium | Large

• Binary split: divides values into two subsets that respect the order. Need to find the optimal partitioning.

  Size? → {Small, Medium} | {Large}     OR     Size? → {Small} | {Medium, Large}

• What about the split Size? → {Small, Large} | {Medium}? It does not respect the order.
Splitting Based on Continuous Attributes
• Different ways of handling:
  • Discretization to form an ordinal categorical attribute
    • Static: discretize once at the beginning
    • Dynamic: ranges can be found by equal-interval bucketing, equal-frequency bucketing (percentiles), or clustering
  • Binary decision: (A < v) or (A >= v)
    • Consider all possible splits and find the best cut
    • Can be more compute-intensive
Splitting Based on Continuous Attributes

(i) Binary split:     Taxable Income > 80K? → Yes | No

(ii) Multi-way split: Taxable Income? → < 10K | [10K, 25K) | [25K, 50K) | [50K, 80K) | > 80K


How to determine the Best Split
Before splitting: 10 records of class C0, 10 records of class C1.

Candidate splits and resulting class counts:

  Own Car?     Yes: C0=6, C1=4    No: C0=4, C1=6
  Car Type?    Family: C0=1, C1=3    Sports: C0=8, C1=0    Luxury: C0=1, C1=7
  Student ID?  c1: C0=1, C1=0  ...  c10: C0=1, C1=0    c11: C0=0, C1=1  ...  c20: C0=0, C1=1

Which test condition is the best?


How to determine the Best Split
• Greedy approach:
  • Nodes with homogeneous class distribution are preferred
• Need a measure of node impurity:

  C0: 5, C1: 5  (non-homogeneous, high degree of impurity)
  C0: 9, C1: 1  (homogeneous, low degree of impurity)

• Ideas?
Measuring Node Impurity
• p(i|t): fraction of records associated with node t belonging to class i

Entropy(t) = - Σ_{i=1..c} p(i|t) log p(i|t)
  • Used in ID3 and C4.5

Gini(t) = 1 - Σ_{i=1..c} p(i|t)^2
  • Used in CART, SLIQ, SPRINT

Classification error(t) = 1 - max_i p(i|t)
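A minimal Python sketch of the three measures (helper names are ours; logs are taken base 2, the usual convention for entropy), each taking the vector of class counts at a node:

import math

def entropy(counts):
    """Entropy(t) = -sum_i p(i|t) * log2 p(i|t)."""
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c > 0)

def gini(counts):
    """Gini(t) = 1 - sum_i p(i|t)^2."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def classification_error(counts):
    """Error(t) = 1 - max_i p(i|t)."""
    return 1.0 - max(counts) / sum(counts)

# Node with 1 record of class C1 and 5 of class C2 (cf. the worked example below):
print(round(gini([1, 5]), 3))                  # 0.278
print(round(entropy([1, 5]), 2))               # 0.65
print(round(classification_error([1, 5]), 3))  # 0.167 = 1/6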


Gain
• Gain of an attribute split: compare the impurity of the parent node with the weighted average impurity of the child nodes:

  Δ = I(parent) - Σ_{j=1..k} [N(v_j) / N] · I(v_j)

  where the parent with N records is split into k children, and N(v_j) is the number of records routed to child v_j.

• Maximizing the gain ⇔ minimizing the weighted average impurity measure of the child nodes
• If I() = Entropy(), then Δ is called information gain (Δinfo)
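Continuing the sketch above, the gain of a split is one line of arithmetic over the impurity helpers (again, function names are ours):

def gain(parent_counts, children_counts, impurity):
    """Delta = I(parent) - sum_j N(v_j)/N * I(v_j)."""
    n = sum(parent_counts)
    weighted = sum(sum(child) / n * impurity(child) for child in children_counts)
    return impurity(parent_counts) - weighted

# Information gain of the "Own Car?" split from the earlier slide
# (parent: C0=10, C1=10; children: Yes = [6, 4], No = [4, 6]):
print(round(gain([10, 10], [[6, 4], [4, 6]], entropy), 3))  # 0.029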
Example

Node with C1 = 0, C2 = 6:
  P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
  Gini = 1 – P(C1)^2 – P(C2)^2 = 1 – 0 – 1 = 0
  Entropy = – 0 log 0 – 1 log 1 = – 0 – 0 = 0
  Error = 1 – max(0, 1) = 1 – 1 = 0

Node with C1 = 1, C2 = 5:
  P(C1) = 1/6, P(C2) = 5/6
  Gini = 1 – (1/6)^2 – (5/6)^2 = 0.278
  Entropy = – (1/6) log2(1/6) – (5/6) log2(5/6) = 0.65
  Error = 1 – max(1/6, 5/6) = 1 – 5/6 = 1/6

Node with C1 = 2, C2 = 4:
  P(C1) = 2/6, P(C2) = 4/6
  Gini = 1 – (2/6)^2 – (4/6)^2 = 0.444
  Entropy = – (2/6) log2(2/6) – (4/6) log2(4/6) = 0.92
  Error = 1 – max(2/6, 4/6) = 1 – 4/6 = 1/3
Impurity measures
• All of the impurity measures take their minimum value (zero) for a pure node, where a single class has probability 1
• All of the impurity measures take their maximum value when the class distribution in a node is uniform
Comparison among Splitting Criteria

(Figure: entropy, Gini, and misclassification error plotted against the fraction p of records in one class, for a 2-class problem.)

The different impurity measures are consistent: all are zero at p = 0 and p = 1 and peak at p = 0.5.


Categorical Attributes
• For binary attributes: split in two
• For multivalued attributes: for each distinct value, gather counts for each class in the dataset
  • Use the count matrix to make decisions

Multi-way split:
        Family  Sports  Luxury
  C1    1       2       1
  C2    4       1       1
  Gini = 0.393

Two-way split (find the best partition of values):
        {Sports, Luxury}  {Family}
  C1    3                 1
  C2    2                 4
  Gini = 0.400

        {Family, Luxury}  {Sports}
  C1    2                 2
  C2    1                 5
  Gini = 0.419
Continuous Attributes
• Use binary decisions based on one value v (e.g., Taxable Income > 80K?)

• Choices for the splitting value:
  • Number of possible splitting values = number of distinct values
• Each splitting value v has a count matrix associated with it:
  • Class counts in each of the partitions, A < v and A >= v

• Exhaustive method to choose the best v:
  • For each v, scan the database to gather the count matrix and compute the impurity index
  • Computationally inefficient! Repetition of work.
Continuous Attributes
• For efficient computation, for each attribute:
  • Sort the attribute on its values
  • Linearly scan these values, each time updating the count matrix and computing the impurity
  • Choose the split position that has the least impurity (a sketch of this scan follows below)

Cheat (sorted by Taxable Income):  No   No   No   Yes  Yes  Yes  No   No   No   No
Sorted values (Taxable Income):    60   70   75   85   90   95   100  120  125  220
Candidate split positions:   55   65   72   80   87   92   97   110  122  172  230
Gini at each split:          .420 .400 .375 .343 .417 .400 .300 .343 .375 .400 .420

At each candidate position, the scan keeps the counts of Yes/No records on the <= side and the > side. The best split is at 97, with Gini = 0.300.
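As a sketch of this one-pass scan, assuming Gini as the impurity and midpoints between consecutive distinct values as candidate cuts (all names ours):

def best_split(values, labels):
    """Sort once, then scan left to right, maintaining class counts on
    each side of the candidate cut and computing the weighted Gini."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    left = {c: 0 for c in set(labels)}
    right = {c: labels.count(c) for c in set(labels)}

    def side_gini(counts, total):
        return 1 - sum((k / total) ** 2 for k in counts.values())

    best_v, best_g = None, float("inf")
    for i in range(n - 1):
        left[pairs[i][1]] += 1    # record i moves to the left partition
        right[pairs[i][1]] -= 1
        if pairs[i][0] == pairs[i + 1][0]:
            continue              # no valid cut between equal values
        v = (pairs[i][0] + pairs[i + 1][0]) / 2
        g = ((i + 1) / n) * side_gini(left, i + 1) \
            + ((n - i - 1) / n) * side_gini(right, n - i - 1)
        if g < best_g:
            best_v, best_g = v, g
    return best_v, best_g

income = [125, 100, 70, 120, 95, 60, 220, 85, 75, 90]
cheat  = ["No", "No", "No", "No", "Yes", "No", "No", "Yes", "No", "Yes"]
print(best_split(income, cheat))  # (97.5, 0.3), matching the slide's best cut at ~97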
Splitting based on impurity
• Impurity measures favor attributes with a large number of values

• A test condition with a large number of outcomes may not be desirable:
  • the number of records in each partition is too small to make reliable predictions
Splitting based on INFO: Gain Ratio
• Adjust the information-gain criterion:

  GainRatio_split = GAIN_split / SplitINFO

  SplitINFO = - Σ_{i=1..k} (n_i / n) log(n_i / n)

  where the parent node p is split into k partitions and n_i is the number of records in partition i.

• Adjusts information gain by the entropy of the partitioning (SplitINFO); higher-entropy partitionings (a large number of small partitions) are penalized!
• Used in C4.5
• Designed to overcome the disadvantage of impurity measures: their bias toward attributes with many distinct values
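A brief Python sketch of SplitINFO and the resulting gain ratio (helper names ours; log base 2 assumed):

import math

def split_info(partition_sizes):
    """SplitINFO = -sum_i (n_i/n) * log2(n_i/n)."""
    n = sum(partition_sizes)
    return -sum((s / n) * math.log2(s / n) for s in partition_sizes if s)

def gain_ratio(info_gain, partition_sizes):
    """GainRatio_split = GAIN_split / SplitINFO."""
    return info_gain / split_info(partition_sizes)

# A 3-way split of 20 records into partitions of sizes 4, 8, 8:
print(round(split_info([4, 8, 8]), 3))  # 1.522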
Stopping Criteria for Tree Induction
• Stop expanding a node when all the records
belong to the same class

• Stop expanding a node when all the records have


similar attribute values

• Early termination (to be discussed later)


Decision Tree Based Classification
• Advantages:
• Inexpensive to construct
• Extremely fast at classifying unknown records
• Easy to interpret for small-sized trees
• Accuracy is comparable to other classification
techniques for many simple data sets
Example: C4.5
• Simple depth-first construction.
• Uses Information Gain
• Sorts Continuous Attributes at each node.
• Needs entire data to fit in memory.
• Unsuitable for Large Datasets.
• Needs out-of-core sorting.

• You can download the software from:


http://www.cse.unsw.edu.au/~quinlan/c4.5r8.tar.gz
Other Issues
• Data Fragmentation
• Expressiveness
Data Fragmentation
• Number of instances gets smaller as you traverse
down the tree

• Number of instances at the leaf nodes could be


too small to make any statistically significant
decision

• You can introduce a lower bound on the number


of items per leaf node in the stopping criterion.
Expressiveness
• A classifier defines a function that discriminates
between two (or more) classes.
• The expressiveness of a classifier is the class of
functions that it can model, and the kind of data
that it can separate
• When we have discrete (or binary) values, we are
interested in the class of boolean functions that can be
modeled
• If the data-points are real vectors we talk about the
decision boundary that the classifier can model
Decision Boundary

(Figure: points of two classes in the unit square. The tree first tests x < 0.43; one branch then tests y < 0.33 and the other y < 0.47, giving four pure leaf regions with 4:0, 0:4, 0:3, and 4:0 points respectively.)

• The border line between two neighboring regions of different classes is known as the decision boundary
• The decision boundary is parallel to the axes because each test condition involves a single attribute at a time
Expressiveness
• Decision trees provide an expressive representation for learning discrete-valued functions
• But they do not generalize well to certain types of Boolean functions
  • Example: the parity function:
    • Class = 1 if there is an even number of Boolean attributes with truth value = True
    • Class = 0 if there is an odd number of Boolean attributes with truth value = True
  • For accurate modeling, the tree must be complete
• Less expressive for modeling continuous variables
  • Particularly when the test condition involves only a single attribute at a time
Oblique Decision Trees

(Figure: a single oblique split, x + y < 1, separating class + from the other class; the decision boundary is diagonal.)

• The test condition may involve multiple attributes (e.g., x + y < 1)
• More expressive representation
• Finding the optimal test condition is computationally expensive
Practical Issues of Classification
• Underfitting and Overfitting

• Evaluation
Underfitting and Overfitting (Example)

500 circular and 500 triangular data points.

Circular points: 0.5 <= sqrt(x1^2 + x2^2) <= 1

Triangular points: sqrt(x1^2 + x2^2) < 0.5 or sqrt(x1^2 + x2^2) > 1
Underfitting and Overfitting

(Figure: training and test error rates vs. number of tree nodes; the left region is underfitting, the right region overfitting.)

Underfitting: when the model is too simple, both training and test errors are large.
Overfitting: when the model is too complex, it models the details of the training set and fails on the test set.
Overfitting due to Noise

(Figure: a decision boundary distorted by a noise point.)

Overfitting due to Insufficient Examples

(Figure: a region with few training points.) The lack of data points in the lower half of the diagram makes it difficult to correctly predict the class labels of that region:
- an insufficient number of training records in the region causes the decision tree to predict the test examples using other training records that are irrelevant to the classification task.
Notes on Overfitting
• Overfitting results in decision trees that are more
complex than necessary

• Training error no longer provides a good estimate


of how well the tree will perform on previously
unseen records
• The model does not generalize well

• Need new ways for estimating errors


Estimating Generalization Errors
• Re-substitution errors: error on the training set, e(T)
• Generalization errors: error on testing, e'(T)
• Methods for estimating generalization errors:
  • Optimistic approach: e'(T) = e(T)
  • Pessimistic approach (see the sketch below):
    • Add a penalty of 0.5 for each leaf node
    • Total errors: e'(T) = e(T) + N × 0.5 (N: number of leaf nodes)
    • Penalizes large trees
    • For a tree with 30 leaf nodes and 10 errors on training (out of 1000 instances):
      • Training error = 10/1000 = 1%
      • Generalization error = (10 + 30 × 0.5)/1000 = 2.5%
  • Using a validation set:
    • Split the data into training, validation, and test sets
    • Use the validation set to estimate the generalization error
    • Drawback: less data for training.
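The pessimistic estimate is simple enough to state as a couple of lines of Python (a sketch; the 0.5-per-leaf penalty follows the slide's example):

def pessimistic_error(train_errors, num_leaves, num_instances, penalty=0.5):
    """e'(T) = (e(T) + penalty * N) / number of training instances."""
    return (train_errors + penalty * num_leaves) / num_instances

print(pessimistic_error(10, 30, 1000))  # 0.025, i.e., 2.5%, as on the slide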
Occam’s Razor
• Given two models with similar generalization errors, one should prefer the simpler model over the more complex model

• For complex models, there is a greater chance that the model was fitted accidentally to errors in the data

• Therefore, one should include model complexity when evaluating a model
Minimum Description Length (MDL)

(Figure: person A knows the labels y for records X1, …, Xn; person B has the same records with unknown labels. Instead of transmitting every label, A can transmit a decision tree that B uses to reconstruct them.)

• Cost(Model, Data) = Cost(Data|Model) + Cost(Model)
• Search for the least costly model.
• Cost(Data|Model) encodes the misclassification errors.
• Cost(Model) encodes the decision tree:
  • node encoding (number of children) plus splitting-condition encoding.
How to Address Overfitting
• Pre-Pruning (Early Stopping Rule)
  • Stop the algorithm before it becomes a fully-grown tree
  • Typical stopping conditions for a node:
    • Stop if all instances belong to the same class
    • Stop if all the attribute values are the same
  • More restrictive conditions:
    • Stop if the number of instances is less than some user-specified threshold
    • Stop if the class distribution of the instances is independent of the available features (e.g., using a χ² test)
    • Stop if expanding the current node does not improve impurity measures (e.g., Gini or information gain).
How to Address Overfitting…
• Post-pruning
• Grow decision tree to its entirety
• Trim the nodes of the decision tree in a bottom-up
fashion
• If generalization error improves after trimming, replace
sub-tree by a leaf node.
• Class label of leaf node is determined from majority
class of instances in the sub-tree

• Can use MDL for post-pruning


Example of Post-Pruning

Node before splitting: Class = Yes: 20, Class = No: 10
  Training error (before splitting) = 10/30
  Pessimistic error = (10 + 0.5)/30 = 10.5/30

Split on A? into four children:
  A1: Class = Yes: 8, Class = No: 4
  A2: Class = Yes: 3, Class = No: 4
  A3: Class = Yes: 4, Class = No: 1
  A4: Class = Yes: 5, Class = No: 1

  Training error (after splitting) = 9/30
  Pessimistic error (after splitting) = (9 + 4 × 0.5)/30 = 11/30

Since 11/30 > 10.5/30, PRUNE the split!
Model Evaluation
• Metrics for Performance Evaluation
• How to evaluate the performance of a model?

• Methods for Performance Evaluation


• How to obtain reliable estimates?

• Methods for Model Comparison


• How to compare the relative performance among
competing models?
Metrics for Performance Evaluation
• Focus on the predictive capability of a model
  • Rather than on how fast it classifies or builds models, scalability, etc.
• Confusion Matrix:

                        PREDICTED CLASS
                        Class=Yes   Class=No
ACTUAL     Class=Yes    a           b
CLASS      Class=No     c           d

a: TP (true positive), b: FN (false negative), c: FP (false positive), d: TN (true negative)
Metrics for Performance Evaluation…

                        PREDICTED CLASS
                        Class=Yes   Class=No
ACTUAL     Class=Yes    a (TP)      b (FN)
CLASS      Class=No     c (FP)      d (TN)

• Most widely-used metric:

  Accuracy = (a + d) / (a + b + c + d) = (TP + TN) / (TP + TN + FP + FN)
Limitation of Accuracy
• Consider a 2-class problem
• Number of Class 0 examples = 9990
• Number of Class 1 examples = 10

• If model predicts everything to be class 0,


accuracy is 9990/10000 = 99.9 %
• Accuracy is misleading because model does not detect
any class 1 example
Cost Matrix

                        PREDICTED CLASS
C(i|j)                  Class=Yes    Class=No
ACTUAL     Class=Yes    C(Yes|Yes)   C(No|Yes)
CLASS      Class=No     C(Yes|No)    C(No|No)

C(i|j): cost of classifying a class j example as class i

Weighted Accuracy = (w1·a + w4·d) / (w1·a + w2·b + w3·c + w4·d)
Computing Cost of Classification

Cost matrix:
                  PREDICTED CLASS
  C(i|j)          +      -
  ACTUAL    +     -1     100
  CLASS     -     1      0

Model M1:
                  PREDICTED CLASS
                  +      -
  ACTUAL    +     150    40
  CLASS     -     60     250
  Accuracy = 80%, Cost = 3910

Model M2:
                  PREDICTED CLASS
                  +      -
  ACTUAL    +     250    45
  CLASS     -     5      200
  Accuracy = 90%, Cost = 4255
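The cost figures above follow directly from summing count × cost over the four cells; a small Python check (the matrix layout is ours):

def total_cost(conf, cost):
    """Sum over cells: count(actual i, predicted j) * C(j|i)."""
    return sum(conf[i][j] * cost[i][j] for i in range(2) for j in range(2))

# Rows = actual class (+, -), columns = predicted class (+, -), as on the slide.
cost = [[-1, 100],
        [ 1,   0]]
m1 = [[150,  40],
      [ 60, 250]]
m2 = [[250,  45],
      [  5, 200]]
print(total_cost(m1, cost))  # 3910
print(total_cost(m2, cost))  # 4255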
Cost vs Accuracy

Accuracy is proportional to cost if:
1. C(Yes|No) = C(No|Yes) = q
2. C(Yes|Yes) = C(No|No) = p

With confusion-matrix counts a, b, c, d and N = a + b + c + d:

  Accuracy = (a + d) / N

  Cost = p (a + d) + q (b + c)
       = p (a + d) + q (N – a – d)
       = q N – (q – p)(a + d)
       = N [q – (q – p) × Accuracy]
Precision-Recall

                        PREDICTED CLASS
                        Class=Yes   Class=No
ACTUAL     Class=Yes    a           b
CLASS      Class=No     c           d

  Precision (p) = a / (a + c) = TP / (TP + FP)
  Recall (r)    = a / (a + b) = TP / (TP + FN)
  F-measure (F) = 2rp / (r + p) = 2a / (2a + b + c) = 2TP / (2TP + FP + FN)

• Precision is biased towards C(Yes|Yes) & C(Yes|No)
• Recall is biased towards C(Yes|Yes) & C(No|Yes)
• F-measure is biased towards all except C(No|No)
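These three formulas in Python, with hypothetical counts for illustration:

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f_measure(tp, fp, fn):
    """Harmonic mean of precision and recall: 2TP / (2TP + FP + FN)."""
    return 2 * tp / (2 * tp + fp + fn)

# Hypothetical counts: a = TP = 40, b = FN = 10, c = FP = 20.
print(round(precision(40, 20), 3))      # 0.667
print(round(recall(40, 10), 3))         # 0.8
print(round(f_measure(40, 20, 10), 3))  # 0.727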
Precision-Recall plot
• For parameterized models, the parameter (e.g., a decision threshold) controls the precision/recall tradeoff; plotting precision against recall as the parameter varies gives the precision-recall curve.
Model Evaluation
• Metrics for Performance Evaluation
• How to evaluate the performance of a model?

• Methods for Performance Evaluation


• How to obtain reliable estimates?

• Methods for Model Comparison


• How to compare the relative performance among
competing models?
Methods for Performance Evaluation
• How to obtain a reliable estimate of performance?

• Performance of a model may depend on other


factors besides the learning algorithm:
• Class distribution
• Cost of misclassification
• Size of training and test sets
Methods of Estimation
• Holdout
  • Reserve 2/3 for training and 1/3 for testing
• Random subsampling
  • One sample may be biased -- repeated holdout
• Cross validation
  • Partition data into k disjoint subsets
  • k-fold: train on k-1 partitions, test on the remaining one (see the sketch below)
  • Leave-one-out: k = n
  • Guarantees that each record is used the same number of times for training and testing
• Bootstrap
  • Sampling with replacement
  • ~63% of the distinct records end up in the training sample; the rest (~37%) are used for testing
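A minimal sketch of k-fold cross-validation (the train_and_score callback is an assumption of ours; it should fit a model on the training indices and return its score on the test indices):

import random

def k_fold_indices(n, k, seed=0):
    """Partition record indices 0..n-1 into k disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(n_records, k, train_and_score):
    """Train on k-1 folds, test on the held-out fold; average the k scores."""
    folds = k_fold_indices(n_records, k)
    scores = []
    for fold in folds:
        held_out = set(fold)
        train_idx = [i for i in range(n_records) if i not in held_out]
        scores.append(train_and_score(train_idx, fold))
    return sum(scores) / k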
Dealing with class Imbalance
• If the class we are interested in is very rare, then the classifier will ignore it.
  • The class imbalance problem
• Solutions:
  • We can modify the optimization criterion by using a cost-sensitive metric
  • We can balance the class distribution:
    • Sample from the larger class so that the sizes of the two classes are the same
    • Replicate the data of the class of interest so that the classes are balanced
      • Beware of over-fitting issues
Learning Curve

(Figure: accuracy as a function of training-set size.)

• A learning curve shows how accuracy changes with varying sample size
• Requires a sampling schedule for creating the learning curve

Effects of small sample size:
  - Bias in the estimate
  - Variance of the estimate
Model Evaluation
• Metrics for Performance Evaluation
• How to evaluate the performance of a model?

• Methods for Performance Evaluation


• How to obtain reliable estimates?

• Methods for Model Comparison


• How to compare the relative performance among
competing models?
ROC (Receiver Operating Characteristic)
• Developed in the 1950s for signal detection theory, to analyze noisy signals
  • Characterizes the trade-off between positive hits and false alarms
• The ROC curve plots TPR (on the y-axis) against FPR (on the x-axis)

  TPR = TP / (TP + FN): fraction of positive instances predicted correctly
  FPR = FP / (FP + TN): fraction of negative instances predicted incorrectly

                        PREDICTED CLASS
                        Yes         No
ACTUAL     Yes          a (TP)      b (FN)
CLASS      No           c (FP)      d (TN)
ROC (Receiver Operating Characteristic)

• Performance of a classifier represented as a point


on the ROC curve

• Changing some parameter of the algorithm,


sample distribution or cost matrix changes the
location of the point
ROC Curve
- 1-dimensional data set containing 2 classes (positive and negative)
- Any point located at x > t is classified as positive

At threshold t:
  TPR = 0.5, FNR = 0.5, FPR = 0.12, TNR = 0.88
ROC Curve
(TPR, FPR):
• (0,0): declare everything to be negative class
• (1,1): declare everything to be positive class
• (1,0): ideal

• Diagonal line:
  • Random guessing
• Below diagonal line:
  • Prediction is opposite of the true class
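An ROC curve for a scoring classifier can be traced by sweeping a threshold over the sorted scores. A minimal sketch of ours (it ignores tied scores for simplicity), emitting (FPR, TPR) points:

def roc_points(scores, labels):
    """Sweep the threshold from high to low; records with score >= t
    are predicted positive. Returns (FPR, TPR) points."""
    pos = sum(1 for y in labels if y == 1)
    neg = len(labels) - pos
    pairs = sorted(zip(scores, labels), reverse=True)
    tp = fp = 0
    points = [(0.0, 0.0)]   # threshold above every score
    for _, y in pairs:
        if y == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points           # ends at (1.0, 1.0)

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3]
labels = [1,   1,   0,   1,   1,    0,   0,   0]
print(roc_points(scores, labels))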
Using ROC for Model Comparison
• No model consistently outperforms the other:
  • M1 is better for small FPR
  • M2 is better for large FPR
• Area Under the ROC Curve (AUC):
  • Ideal: Area = 1
  • Random guess: Area = 0.5
ROC curve vs Precision-Recall curve
• In both cases, the Area Under the Curve (AUC) can be used as a single number for evaluation.
