Machine Learning: Decision Tree Learning Soongsil University, Seoul Gun Ho Lee
Decision Tree Learning Introduction Decision Tree Representation Appropriate Problems for Decision Tree Learning Basic Algorithm Hypothesis Space Search in Decision Tree Learning Inductive Bias in Decision Tree Learning Issues in Decision Tree Learning Summary
Tree learning Task Induction Deduction Learn Model (tree) Model (Tree) Test Set Learning algorithm Training Set Apply Model Tid Attrib1 Attrib2 Attrib3 Class 1 Yes Large 125K No 2 No Medium 100K No 3 No Small 70K No 4 Yes Medium 120K No 5 No Large 95K Yes 6 No Medium 60K No 7 Yes Large 220K No 8 No Small 85K Yes 9 No Medium 75K No 10 No Small 90K Yes Tid Attrib1 Attrib2 Attrib3 Class 11 No Small 55K ? 12 Yes Medium 80K ? 13 Yes Large 110K ? 14 No Small 95K ? 15 No Large 67K ?
Example of a Decision Tree Refund MarSt TaxInc YES NO NO NO Yes No Married   Single, Divorced < 80K > 80K Splitting Attributes Training Data Model:  Decision Tree categorical categorical continuous class
Another Example of Decision Tree categorical categorical continuous class MarSt Refund TaxInc YES NO NO Yes No Married   Single, Divorced < 80K > 80K There could be more than one tree that fits the same data! NO
Decision Tree Classification Task Decision Tree
Apply Model to Test Data Test Data Start from the root of tree. Refund MarSt TaxInc YES NO NO NO Yes No Married   Single, Divorced < 80K > 80K
Apply Model to Test Data Test Data Refund MarSt TaxInc YES NO NO NO Yes No Married   Single, Divorced < 80K > 80K
Apply Model to Test Data Refund MarSt TaxInc YES NO NO NO Yes No Married   Single, Divorced < 80K > 80K Test Data
Apply Model to Test Data Refund MarSt TaxInc YES NO NO NO Yes No Married   Single, Divorced < 80K > 80K Test Data
Apply Model to Test Data Refund MarSt TaxInc YES NO NO NO Yes No Married  Single, Divorced < 80K > 80K Test Data
Apply Model to Test Data Refund MarSt TaxInc YES NO NO NO Yes No Married  Single, Divorced < 80K > 80K Test Data Assign Cheat to “No”
Decision Tree Classification Task Decision Tree
Overview One of the most widely used and practical methods for inductive inference over supervised data It approximates discrete-valued functions (as opposed to continuous) It is robust to noisy data Decision trees can represent any discrete function on discrete features It is also efficient for processing large amounts of data , so is often used in data mining applications Decision Tree Learners’ bias typically prefers small trees over larger ones
Decision Trees Tree-based classifiers for instances represented as feature-vectors. Nodes test features, there is one branch for each value of the feature, and leaves specify the category. Can represent arbitrary conjunction and disjunction. Can represent any classification function over discrete feature vectors. Can be rewritten as a set of rules, i.e. disjunctive normal form (DNF). red ∧ circle -> pos; red ∧ circle -> A; blue -> B; red ∧ square -> B; green -> C; red ∧ triangle -> C color red blue green shape circle square triangle neg pos pos neg neg color red blue green shape circle square triangle B C A B C
Properties of Decision Tree Learning Continuous (real-valued) features can be handled by allowing nodes to split a real valued feature into two ranges based on a threshold (e.g. length < 3 and length ≥ 3) Classification trees have discrete class labels at the leaves , regression trees allow real-valued outputs at the leaves . Algorithms for finding consistent trees are efficient for processing large amounts of training data for data mining tasks. Methods developed for handling noisy training data (both class and feature noise). Methods developed for handling missing feature values.
What makes a good tree? Not too small – need to include enough attributes to handle possibly subtle distinctions in data Not too big - computational efficiency (avoid redundant, spurious attributes) - avoid over-fitting training examples (noisy, scarce data) Occam’s Razor: find simplest hypothesis (tree) that is consistent with all observations inductive bias – small trees, with informative nodes near root
Basic Algorithm A top-down, greedy search through the space of all possible decision trees. Place at the root the attribute that best classifies the training examples. Entropy, Information gain
Top-Down Decision Tree Induction Recursively build a tree top-down by divide and conquer. Example: <big, red, circle>: + <small, red, circle>: + <small, red, square>: − <big, blue, circle>: − <big, red , circle>: + <small, red , circle>: + <small, red , square>: − color red blue green
Top-Down Decision Tree Induction Recursively build a tree top-down by divide and conquer. shape circle square triangle <big, red , circle>: + <small, red, circle>: + <small, red , square>: − color red blue green <big, red , circle>: + <small, red , circle>: + pos <small, red , square>: − neg pos <big, blue, circle>: − neg neg Example: <big, red, circle>: + <small, red, circle>: + <small, red, square>: − <big, blue, circle>: −
Decision Tree Induction Pseudocode DTree( examples , features ) returns a tree If all examples are in one category, return a leaf node with that category label. Else if the set of features is empty, return a leaf node with the category label that is the most common in examples. Else pick a feature F and create a node R for it For each possible value v_i of F : Let examples_i be the subset of examples that have value v_i for F Add an out-going edge E to node R labeled with the value v_i. If examples_i is empty then attach a leaf node to edge E labeled with the category that is the most common in examples . else call DTree( examples_i , features – { F }) and attach the resulting tree as the subtree under edge E. Return the subtree rooted at R.
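A minimal Python sketch of this pseudocode, assuming examples are dictionaries with a 'class' key, features is a set of attribute names, and the pick_feature argument stands in for the split-selection heuristic (e.g. information gain) discussed later; all names are illustrative.

```python
from collections import Counter

def most_common_class(examples):
    return Counter(ex['class'] for ex in examples).most_common(1)[0][0]

def dtree(examples, features, pick_feature):
    classes = {ex['class'] for ex in examples}
    if len(classes) == 1:                        # all examples in one category
        return classes.pop()
    if not features:                             # no features left to test
        return most_common_class(examples)
    f = pick_feature(examples, features)         # e.g. the feature with highest information gain
    node = {'feature': f, 'children': {},
            'default': most_common_class(examples)}   # plays the role of the empty-subset leaf
    for v in {ex[f] for ex in examples}:         # only observed values, so subsets are never empty
        subset = [ex for ex in examples if ex[f] == v]
        node['children'][v] = dtree(subset, features - {f}, pick_feature)
    return node
```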
The Basic Decision Tree Learning Algorithm Top-Down Induction of Decision Trees . This approach is exemplified by the  ID3  algorithm and its successor C4.5
Tree Induction Greedy strategy. Split the records based on an attribute test that optimizes certain criterion. Issues Determine how to split the records How to specify the attribute test condition? How to determine the best split? Determine when to stop splitting
How to determine the Best Split Income Age >=10k <10k young old Customers fair customers Good customers
How to determine the Best Split Greedy approach:  Nodes with  homogeneous  class distribution are preferred Need a measure of node impurity: High degree of impurity Low degree of impurity pure 50%  red 50%  green 75%  red 25%  green 100%  red 0%  green
Split Selection Method Numerical  or ordered attributes: Find a split point that separates the (two) classes (Yes:  No:  ) Age < 33 30 35 Age
Split Selection Method (Contd.) Categorical attributes: How to group? Sport: Truck: Minivan: (Sport, Truck) -- (Minivan) (Sport) --- (Truck, Minivan) (Sport, Minivan) --- (Truck) Car Sport, Truck Minivan
Decision Tree Induction Many Algorithms: Hunt’s Algorithm (1960’s, one of the earliest) ID3(Quinlan 1979), C4.5(Quinlan 1993) CART SLIQ   (EDBT’96 — Mehta et al.) builds an index for each attribute and only class list and the current attribute list reside in memory  SPRINT   (VLDB’96 — J. Shafer et al.) constructs an attribute list data structure  RainForest  (VLDB’98 — Gehrke, Ramakrishnan & Ganti) separates the scalability aspects from the criteria that determine the quality of the tree builds an AVC-list (attribute, value, class label) BOAT   Uses bootstrapping to create several small samples
History of Decision-Tree Research Hunt and colleagues use exhaustive search decision-tree methods (CLS) to model human concept learning in the 1960’s. In the late 70’s, Quinlan developed ID3 with the information gain heuristic to learn expert systems from examples. Simultaneously, Breiman, Friedman and colleagues developed CART (Classification and Regression Trees), similar to ID3. In the 1980’s a variety of improvements were introduced to handle noise, continuous features, missing features, and improved splitting criteria. Various expert-system development tools resulted. Quinlan’s updated decision-tree package (C4.5) was released in 1993. Weka includes a Java version of C4.5 called J48 .
General Structure of Hunt’s Algorithm Let D_t be the set of training records that reach a node t General Procedure: If D_t contains records that belong to the same class y_t , then t is a leaf node labeled as y_t If D_t is an empty set, then t is a leaf node labeled by the default class, y_d If D_t contains records that belong to more than one class, use an attribute test to split the data into smaller subsets. Recursively apply the procedure to each subset.
Hunt’s Algorithm Cheat=No Refund Cheat=No Cheat=No Yes No Refund Cheat=No Yes No Marital Status Cheat=No Cheat Single, Divorced Married Taxable Income Cheat=No < 80K >= 80K Refund Cheat=No Yes No Marital Status Cheat=No Cheat Single, Divorced Married
What is ID3 (Iterative Dichotomiser 3) A mathematical algorithm for building the decision tree. Invented by J. Ross Quinlan in 1979. Uses Information Theory invented by Shannon in 1948. Information Gain is used to select the most useful attribute for classification. Builds the tree from the top down, with no backtracking.
ID3 ID3 is designed for the case where there are many attributes, the training set contains many objects, and a reasonably good decision tree is required without much computation. It generally constructs simple decision trees, but it cannot guarantee that the tree it produces is always the best one.
ID3 Algorithm Overview Step 1 : Choose a random subset of the training set (the window) Step 2 : Form a decision tree that correctly classifies all the objects in the window Step 3 : IF the tree gives the correct answer for all the objects in the training set, THEN the process terminates; ELSE a selection of the incorrectly classified objects is added to the window and the process returns to Step 2
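A hedged sketch of this windowing loop; build_tree and classify are illustrative placeholders for the tree builder and a function that runs one example through a tree.

```python
import random

def id3_with_window(training_set, window_size, build_tree, classify):
    window = random.sample(training_set, window_size)          # Step 1: random subset
    while True:
        tree = build_tree(window)                              # Step 2: tree consistent with the window
        wrong = [ex for ex in training_set
                 if classify(tree, ex) != ex['class']]         # Step 3: check the full training set
        if not wrong:
            return tree                                        # correct on everything: done
        window += random.sample(wrong, min(len(wrong), window_size))
```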
ID3 Algorithm
Picking a Good Split Feature Goal is to have the resulting tree be as small as possible , per Occam’s razor. Finding a minimal decision tree (nodes, leaves, or depth) is an NP-hard optimization problem. Top-down divide-and-conquer method does a greedy search for a simple tree but does not guarantee to find the smallest. General lesson in ML: “Greed is good.” Want to pick a feature that creates subsets of examples that are relatively “ pure” in a single class so they are “closer” to being leaf nodes. There are a variety of heuristics for picking a good test; a popular one is based on information gain , which originated with the ID3 system of Quinlan (1979).
Entropy Minimum number of bits of information needed to encode the classification of an arbitrary member of S entropy = 0, if all members in the same class entropy = 1, if |positive examples|=|negative examples|
Example – Information Needed n = 5, p = 9 The information needed to generate a decision tree from this window is: E(p, n) = − (9/14) log2(9/14) − (5/14) log2(5/14) = 0.940 bits
Entropy Entropy (disorder, impurity) of a set of examples, S, relative to a binary classification is: Entropy(S) = − p_1 log2(p_1) − p_0 log2(p_0), where p_1 is the fraction of positive examples in S and p_0 is the fraction of negatives. If all examples are in one category , entropy is zero (we define 0·log(0) = 0 ) If examples are equally mixed ( p_1 = p_0 = 0.5), entropy is a maximum of 1 . For multi-class problems with c categories , entropy generalizes to: Entropy(S) = − Σ_{i=1..c} p_i log2(p_i)
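A small Python helper implementing this definition (reused by the later sketches); the printed values reproduce numbers that appear further on in the slides.

```python
import math

def entropy(class_counts):
    """Entropy of a node given a list of per-class counts (0 log 0 := 0)."""
    total = sum(class_counts)
    ps = [c / total for c in class_counts if c > 0]
    return -sum(p * math.log2(p) for p in ps)

print(entropy([9, 5]))    # PlayTennis root, 9 yes / 5 no: ~0.940 bits
print(entropy([7, 7]))    # evenly mixed: 1.0
print(entropy([14, 0]))   # pure node: 0.0
```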
Entropy Plot for Binary Classification
Information Gain Parent Node, p is split into k partitions; n_i is the number of records in partition i GAIN_split = Entropy(p) − Σ_{i=1..k} (n_i / n) Entropy(i) Measures the Reduction in Entropy achieved because of the split. Choose the split that achieves the maximum GAIN value !! Used in ID3 and C4.5 Disadvantage : Tends to prefer splits that result in a large number of partitions , each being small but pure.
Information Gain Example: <big, red, circle>: + <small, red, circle>: + <small, red, square>: − <big, blue, circle>: − Split on size: 2+, 2− : E=1; big 1+,1− (E=1), small 1+,1− (E=1); Gain = 1 − (0.5×1 + 0.5×1) = 0 Split on color: 2+, 2− : E=1; red 2+,1− (E=0.918), blue 0+,1− (E=0); Gain = 1 − (0.75×0.918 + 0.25×0) = 0.311 Split on shape: 2+, 2− : E=1; circle 2+,1− (E=0.918), square 0+,1− (E=0); Gain = 1 − (0.75×0.918 + 0.25×0) = 0.311
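A sketch of information gain in Python, reusing the entropy() helper above and reproducing the gains on this slide:

```python
def information_gain(parent_counts, children_counts):
    """Gain = entropy(parent) minus the size-weighted entropy of the children."""
    n = sum(parent_counts)
    remainder = sum(sum(c) / n * entropy(c) for c in children_counts)
    return entropy(parent_counts) - remainder

# 2+/2- parent split on color into red (2+,1-) and blue (0+,1-)
print(information_gain([2, 2], [[2, 1], [0, 1]]))   # ~0.311
# Split on size into big (1+,1-) and small (1+,1-)
print(information_gain([2, 2], [[1, 1], [1, 1]]))   # 0.0
```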
Training Examples for PlayTennis Day Outlook Temperature Humidity Wind PlayTennis D1 Sunny Hot High Weak No D2 Sunny Hot High Strong No D3 Overcast Hot High Weak Yes D4 Rain Mild High Weak Yes D5 Rain Cool Normal Weak Yes D6 Rain Cool Normal Strong No D7 Overcast Cool Normal Strong Yes D8 Sunny Mild High Weak No D9 Sunny Cool Normal Weak Yes D10 Rain Mild Normal Weak Yes D11 Sunny Mild Normal Strong Yes D12 Overcast Mild High Strong Yes D13 Overcast Hot Normal Weak Yes D14 Rain Mild High Strong No
ID3 play / don’t play: p_no = 5/14, p_yes = 9/14 Impurity = − p_yes log2(p_yes) − p_no log2(p_no) = − 9/14 log2(9/14) − 5/14 log2(5/14) = 0.94 bits
Selecting the Next Attribute :  which attribute is the best classifier? ID3
Example – Branch on outlook Sunny: p_1 = 2, n_1 = 3, E(p_1, n_1) = 0.971 Overcast: p_2 = 4, n_2 = 0, E(p_2, n_2) = 0 Rain: p_3 = 3, n_3 = 2, E(p_3, n_3) = 0.971 E(outlook) = 5/14 × 0.971 + 4/14 × 0 + 5/14 × 0.971 = 0.694 Gain(outlook) = 0.940 − E(outlook) = 0.246 bits (the slide shows the 14 training examples grouped under the sunny, overcast and rain branches)
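Using the entropy() and information_gain() sketches above on the PlayTennis data, with per-value class counts read off the training table two slides back:

```python
parent = [9, 5]                                  # 9 play / 5 don't play
splits = {
    'outlook':     [[2, 3], [4, 0], [3, 2]],     # sunny, overcast, rain
    'humidity':    [[3, 4], [6, 1]],             # high, normal
    'wind':        [[6, 2], [3, 3]],             # weak, strong
    'temperature': [[2, 2], [4, 2], [3, 1]],     # hot, mild, cool
}
for name, children in splits.items():
    print(name, round(information_gain(parent, children), 3))
# outlook ~0.247, humidity ~0.152, wind ~0.048, temperature ~0.029
# (the slide reports 0.246 for outlook because it rounds intermediate entropies)
```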
ID3 – amount of information required to specify the class of an example given that it reaches a node (root: 0.94 bits, play vs. don’t play): outlook: 0.97 bits × 5/14 (sunny) + 0.0 bits × 4/14 (overcast) + 0.97 bits × 5/14 (rainy) = 0.69 bits, gain: 0.25 bits (maximal information gain) humidity: 0.98 bits × 7/14 (high) + 0.59 bits × 7/14 (normal) = 0.79 bits, gain: 0.15 bits temperature: 1.0 bits × 4/14 (hot) + 0.92 bits × 6/14 (mild) + 0.81 bits × 4/14 (cool) = 0.91 bits, gain: 0.03 bits windy: 0.81 bits × 8/14 (false) + 1.0 bits × 6/14 (true) = 0.89 bits, gain: 0.05 bits
ID3 – sunny branch (0.97 bits, play vs. don’t play): humidity: 0.0 bits × 3/5 (high) + 0.0 bits × 2/5 (normal) = 0.0 bits, gain: 0.97 bits (maximal information gain) temperature: 0.0 bits × 2/5 (hot) + 1.0 bits × 2/5 (mild) + 0.0 bits × 1/5 (cool) = 0.40 bits, gain: 0.57 bits windy: 0.92 bits × 3/5 (false) + 1.0 bits × 2/5 (true) = 0.95 bits, gain: 0.02 bits
ID3 – rainy branch (0.97 bits; the sunny branch has been resolved by humidity): humidity: 1.0 bits × 2/5 (high) + 0.92 bits × 3/5 (normal) = 0.95 bits, gain: 0.02 bits temperature: 0.92 bits × 3/5 (mild) + 1.0 bits × 2/5 (cool) = 0.95 bits, gain: 0.02 bits windy: 0.0 bits × 3/5 (false) + 0.0 bits × 2/5 (true) = 0.0 bits, gain: 0.97 bits (maximal information gain)
ID3 – final tree: outlook; sunny → humidity (high → No, normal → Yes); overcast → Yes; rainy → windy (false → Yes, true → No)
Hypothesis Space Search in Decision Tree Learning ID3 can be characterized as searching a space of hypotheses for one that fits the training examples The hypothesis space searched by ID3 is the set of possible decision trees ID3 performs a simple-to-complex , hill-climbing search through this hypothesis space, yielding a locally-optimal solution .
Hypothesis Space Search in Decision Tree Learning The hypothesis space of all decision trees is a complete space of finite discrete-valued functions, relative to the available attributes Outputs a single hypothesis No backtracking, so it can settle in local minima… Inductive bias : approximately “prefer the shortest tree” Information gain gives a bias for trees with minimal depth
Hypothesis Space Search Performs batch learning that processes all training instances at once rather than incremental learning that updates a hypothesis after each example. Guaranteed to find a tree consistent with any conflict-free training set, i.e. one in which identical feature vectors are always assigned the same class.
Occam’s Razor Occam’s Razor: Prefer the simplest hypothesis that fits the data William of Ockham (AD 1285? – 1347?)
Complex DT vs. Simple DT: the complex tree splits first on temperature and then needs repeated nested outlook, windy and humidity tests under each branch, while the simple tree is the familiar one rooted at outlook (sunny → humidity: high → N, normal → P; overcast → P; rain → windy: true → N, false → P)
Inductive Bias in Decision Tree Learning Occam’s Razor
Decision Tree Representation Decision Trees
Disadvantages Only allows 2 classes; this limitation is removed in most later systems. Not guaranteed to find the simplest tree. Not incremental: additional training data cannot be considered without rebuilding the whole tree from all the former data.
Advantages Builds a reasonably good decision tree without much computation. The iterative (windowing) method usually builds a tree more quickly than building one from the whole training set. The process is not sensitive to parameters such as window size. ID3’s cost grows roughly linearly with the difficulty of the problem.
C4.5 History ID3, CHAID – 1960s C4.5 innovations (Quinlan): permit numeric attributes deal sensibly with missing values pruning to deal with noisy data C4.5 – one of the best-known and most widely-used learning algorithms Last research version: C4.8, implemented in Weka as J4.8 (Java) Commercial successor: C5.0 (available from Rulequest)
C4.5 ID3 favors attributes with a large number of divisions, which can lead to overfitting Improved version of ID3: Missing Data Continuous Data Pruning Subtree replacement by a leaf node Subtree raising Automated rule generation GainRatio: takes into account the cardinality of each division
Weakness of ID3: Highly-branching attributes Problematic: attributes with a large number of values (extreme case: ID code) Subsets are more likely to be pure if there is a large number of values ⇒  Information gain is biased towards choosing attributes with a large number of values ⇒  This may result in  overfitting  (selection of an attribute that is non-optimal for prediction)
Weakness of ID3: Split for ID Code Attribute Entropy of the split = 0 (since each leaf node is “pure”, having only one case), so information gain is maximal for the ID code ID code No Yes No No Yes D1 D2 D3 … D13 D14 Day Outlook Temperature Humidity Wind PlayTennis D1 Sunny Hot High Weak No D2 Sunny Hot High Strong No D3 Overcast Hot High Weak Yes D4 Rain Mild High Weak Yes D5 Rain Cool Normal Weak Yes D6 Rain Cool Normal Strong No D7 Overcast Cool Normal Strong Yes D8 Sunny Mild High Weak No D9 Sunny Cool Normal Weak Yes D10 Rain Mild Normal Weak Yes D11 Sunny Mild Normal Strong Yes D12 Overcast Mild High Strong Yes D13 Overcast Hot Normal Weak Yes D14 Rain Mild High Strong No
C4.5 Gain Ratio: Parent Node, p is split into k partitions; n_i is the number of records in partition i GainRATIO_split = GAIN_split / SplitINFO, where SplitINFO = − Σ_{i=1..k} (n_i / n) log2(n_i / n) Adjusts Information Gain by the entropy of the partitioning (SplitINFO). Higher entropy partitioning (large number of small partitions) is penalized! Used in C4.5 Designed to overcome the disadvantage of Information Gain Split information is sensitive to how broadly and uniformly the attribute splits the data
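A sketch of C4.5's gain ratio built on the entropy() and information_gain() helpers above; SplitINFO is simply the entropy of the partition sizes themselves.

```python
def gain_ratio(parent_counts, children_counts):
    sizes = [sum(c) for c in children_counts]
    split_info = entropy(sizes)          # large for many small partitions
    if split_info == 0:
        return 0.0                       # degenerate single-partition split
    return information_gain(parent_counts, children_counts) / split_info

# Outlook on PlayTennis: gain ~0.247, SplitINFO = entropy([5, 4, 5]) ~1.577
print(gain_ratio([9, 5], [[2, 3], [4, 0], [3, 2]]))   # ~0.156
```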
Numeric attributes Standard method: binary splits E.g. temp < 45 Unlike nominal attributes, every attribute has many possible split points Solution is straightforward extension:  Evaluate info gain (or other measure) for every possible split point of attribute Choose “best” split point Info gain for best split point is info gain for attribute Computationally more demanding witten & eibe
Example Split on temperature attribute: E.g. temperature < 71.5: yes/4, no/2; temperature ≥ 71.5: yes/5, no/3 Info([4,2],[5,3]) = 6/14 info([4,2]) + 8/14 info([5,3]) = 0.939 bits Place split points halfway between values Can evaluate all split points in one pass! witten & eibe 64 65 68 69 70 71 72 72 75 75 80 81 83 85 Yes No Yes Yes Yes No No Yes Yes Yes No Yes Yes No
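A sketch of evaluating binary split points on this numeric attribute, reusing entropy() from the earlier sketch; candidate thresholds lie halfway between consecutive distinct sorted values.

```python
def split_info(pairs, threshold):
    """Weighted entropy of the binary split at the given threshold."""
    n = len(pairs)
    left  = [c for v, c in pairs if v <  threshold]
    right = [c for v, c in pairs if v >= threshold]
    def counts(cs):
        return [cs.count('yes'), cs.count('no')]
    return (len(left) / n) * entropy(counts(left)) + \
           (len(right) / n) * entropy(counts(right))

temps  = [64, 65, 68, 69, 70, 71, 72, 72, 75, 75, 80, 81, 83, 85]
labels = ['yes','no','yes','yes','yes','no','no','yes','yes','yes','no','yes','yes','no']
pairs  = sorted(zip(temps, labels))
print(round(split_info(pairs, 71.5), 3))   # ~0.939 bits, matching the slide

# Scan every candidate threshold (halfway between consecutive distinct values)
candidates = [(a + b) / 2 for (a, _), (b, _) in zip(pairs, pairs[1:]) if a != b]
best = min(candidates, key=lambda t: split_info(pairs, t))
print(best)                                # threshold with the lowest weighted entropy
```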
Avoid repeated sorting! Sort instances by the values of the numeric attribute Time complexity for sorting:  O  ( n  log  n ) Q. Does this have to be repeated at each node of the tree? A: No! Sort order for children can be derived from sort order for parent Time complexity of derivation:  O  ( n ) Drawback: need to create and store an array of sorted indices for each numeric attribute   witten & eibe
Weather data – nominal values (excerpt; columns Outlook, Temperature, Humidity, Windy, Play): Sunny Hot High False No; Sunny Hot High True No; Overcast Hot High False Yes; Rainy Mild Normal False Yes; … If outlook = sunny and humidity = high then play = no If outlook = rainy and windy = true then play = no If outlook = overcast then play = yes If humidity = normal then play = yes If none of the above then play = yes witten & eibe
More speeding up  Entropy only needs to be evaluated between points of different classes (Fayyad & Irani, 1992) Potential optimal breakpoints Breakpoints between values of the same class cannot be optimal value class 64  65  68  69  70  71  72  72  75  75  80  81  83  85 Yes  No Yes Yes  Yes No  No  Yes Yes  Yes  No  Yes Yes  No X
Continuous Attributes: Computing Gini Index... For efficient computation: for each attribute, Sort the attribute on values Linearly scan these values, each time updating the count matrix and computing gini index Choose the split position that has the least gini index Split Positions Sorted Values
Splitting Based on Nominal Attributes Multi-way split:  Use as many partitions as distinct values.  Binary split:   Divides values into two subsets.    Need to find optimal partitioning. OR CarType Family Sports Luxury CarType {Family,  Luxury} {Sports} CarType {Sports, Luxury} {Family} Size {Small, Large} {Medium}
Splitting Based on Continuous Attributes
Missing as a separate value Missing value denoted “?” in C4.X Simple idea: treat missing as a separate value Q: When is this not appropriate? A: When values are missing for different reasons Example 1: gene expression could be missing when it is very high or very low Example 2: the field IsPregnant = missing for a male patient should be treated differently (no) than for a female patient of age 25 (unknown)
Handling Missing Attribute Values Missing values affect decision tree construction in three different ways: Affects how impurity measures are computed Affects how to distribute instance with missing value to child nodes Affects how a test instance with missing value is classified
Computing Impurity Measure Before splitting: Entropy(Parent) = −(0.3) log(0.3) − (0.7) log(0.7) = 0.8813 Split on Refund (one record has Refund missing): Entropy(Refund=Yes) = −(0/3) log(0/3) − (3/3) log(3/3) = 0 Entropy(Refund=No) = −(2/6) log(2/6) − (4/6) log(4/6) = 0.9183 Entropy(Children) = 3/10 × (0) + 6/10 × (0.9183) = 0.551 Gain = 0.8813 − 0.551 = 0.3303
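Reproducing this slide's numbers with the entropy() helper above: 10 records with 3 Cheat=Yes / 7 No, one record with Refund missing; in the slide's convention the missing record counts in the parent and in the child-weight denominator but is not placed in either child.

```python
parent   = entropy([3, 7])                                            # 0.8813
children = (3 / 10) * entropy([0, 3]) + (6 / 10) * entropy([2, 4])    # 0.551
print(round(parent - children, 4))                                    # ~0.3303
```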
Handling Missing Attribute Values Missing values affect decision tree construction in three different ways: Affects how impurity measures are computed Affects how to distribute instance with missing value to child nodes Affects how a test instance with missing value is classified
Distribute Instances Refund Yes No Refund Yes No Probability that Refund=Yes is 3/9 Probability that Refund=No is 6/9 Assign record to the left child with weight = 3/9 and to the right child with weight = 6/9
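A small sketch of this fractional distribution (the dictionary-based record format is illustrative): a record whose test attribute is missing is sent down every branch with a weight proportional to the fraction of known-valued records that took that branch.

```python
known = {'Yes': 3, 'No': 6}              # Refund counts among records with a known value
total = sum(known.values())
record = {'Refund': None, 'weight': 1.0}
fragments = [
    {**record, 'branch': v, 'weight': record['weight'] * n / total}
    for v, n in known.items()
]
print(fragments)   # weight 3/9 goes to the Yes child, 6/9 to the No child
```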
Handling Missing Attribute Values Missing values affect decision tree construction in three different ways: Affects how impurity measures are computed Affects how to distribute instance with missing value to child nodes Affects how a test instance with missing value is classified
Classify Instances Refund MarSt TaxInc YES NO NO NO Yes No Married Single, Divorced < 80K > 80K New record with Marital Status missing; counts at the MarSt node: Married Single Divorced Total / Class=No: 3 1 0 4 / Class=Yes: 6/9 1 1 2.67 / Total: 3.67 2 1 6.67 Probability that Marital Status = Married is 3.67/6.67 Probability that Marital Status = {Single, Divorced} is 3/6.67
From trees to rules – how? How can we produce a set of rules from a decision tree?
From trees to rules – simple Simple way: one rule for each leaf C4.5rules: greedily prune conditions from each rule if this reduces its estimated error Can produce duplicate rules Check for this at the end Then look at each class in turn consider the rules for that class find a “good” subset (guided by MDL) Then rank the subsets to avoid conflicts Finally, remove rules (greedily) if this decreases error on the training data witten & eibe
C4.5rules: choices and options C4.5rules slow for large and noisy datasets Commercial version C5.0rules uses a different technique Much faster and a bit more accurate C4.5 has two parameters Confidence value (default 25%): lower values incur heavier pruning Minimum number of instances in the two most popular branches (default 2) witten & eibe
CART Split Selection Method Motivation: We need a way to choose quantitatively between different splitting predicates Idea: Quantify the  impurity  of a node Method: Select splitting predicate that generates children nodes with minimum impurity from a space of possible splitting predicates
CART If a data set D contains examples from n classes, the gini index gini(D) is defined as gini(D) = 1 − Σ_j p_j², where p_j is the relative frequency of class j in D If a data set D is split on A into two subsets D_1 and D_2 , the gini index of the split is defined as gini_A(D) = (|D_1|/|D|) gini(D_1) + (|D_2|/|D|) gini(D_2) Reduction in Impurity: Δgini(A) = gini(D) − gini_A(D) The attribute that provides the smallest gini_split(D) (or the largest reduction in impurity ) is chosen to split the node
Measure of Impurity: GINI GINI(t) = 1 − Σ_j [p(j | t)]² Maximum (0.5 for a two-class problem) when records are equally distributed among all classes , implying least interesting information Minimum (0.0) when all records belong to one class , implying most interesting information
Examples for computing GINI P(C1) = 0/6 = 0, P(C2) = 6/6 = 1: Gini = 1 − P(C1)² − P(C2)² = 1 − 0 − 1 = 0 P(C1) = 1/6, P(C2) = 5/6: Gini = 1 − (1/6)² − (5/6)² = 0.278 P(C1) = 2/6, P(C2) = 4/6: Gini = 1 − (2/6)² − (4/6)² = 0.444
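A one-function sketch of the Gini index that reproduces these examples:

```python
def gini(class_counts):
    """Gini impurity of a node given per-class record counts."""
    total = sum(class_counts)
    return 1.0 - sum((c / total) ** 2 for c in class_counts)

print(gini([0, 6]))   # 0.0   (pure node)
print(gini([1, 5]))   # ~0.278
print(gini([2, 4]))   # ~0.444
print(gini([3, 3]))   # 0.5   (maximum for two classes)
```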
Comparison among Splitting Criteria For a 2-class problem:
CART example: D has 9 tuples with buys_computer = “yes” and 5 with “no” Suppose the attribute income partitions D into D_1 = {low, medium} with 10 tuples and D_2 = {high} with 4 tuples; gini_{medium,high} is 0.30 and thus the best split since it is the lowest All attributes are assumed continuous-valued; the method can be modified for categorical attributes
Comparing Attribute Selection Measures The three measures, in general, return good results but Information gain:  biased towards multivalued attributes Gain ratio:  tends to  prefer unbalanced splits  in which  one partition is much smaller than the others Gini index:  biased to multivalued attributes has  difficulty when # of classes is large tends to favor tests that result in  equal-sized partitions  and purity in both partitions
Issues in Decision Tree Learning Overfitting in Decision Trees
Issues in Decision Tree Learning Overfitting in Decision Trees h h ’
Overfitting Learning a tree that classifies the training data perfectly may not lead to the tree with the best generalization to unseen data. There may be noise in the training data that the tree is erroneously fitting. The algorithm may be making poor decisions towards the leaves of the tree that are based on very little data and may not reflect reliable trends. (figure: accuracy on training data vs. on test data as hypothesis complexity grows)
Overfitting Example Testing Ohm’s Law: V = IR (I = (1/R)V) Experimentally measure 10 points Fit a curve to the resulting data Perfect fit to the training data with a 9th-degree polynomial (can fit n points exactly with an (n−1)-degree polynomial) “Ohm was wrong, we have found a more accurate function!” (figure: current (I) vs. voltage (V))
Overfitting Example Testing Ohm’s Law: V = IR (I = (1/R)V) Better generalization with a linear function that fits the training data less accurately. (figure: current (I) vs. voltage (V))
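An illustrative numpy sketch of this point using synthetic data (not the slides' measurements): the 9th-degree polynomial interpolates the 10 noisy training points exactly, yet the straight-line fit generalizes better against the true relationship.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
V = np.linspace(1, 10, 10)
I = V / 5.0 + rng.normal(scale=0.05, size=V.size)   # assumed "true" R = 5 ohms, plus measurement noise

linear = Polynomial.fit(V, I, deg=1)    # Ohm's-law-style fit
wiggly = Polynomial.fit(V, I, deg=9)    # passes through all 10 training points

V_test = np.linspace(1, 10, 200)
I_true = V_test / 5.0
print(np.mean((linear(V_test) - I_true) ** 2))   # small test error
print(np.mean((wiggly(V_test) - I_true) ** 2))   # typically much larger: the curve oscillates between points
```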
Overfitting Noise in Decision Trees Category or feature noise can easily cause overfitting. Add noisy instance <medium, blue, circle>:  pos  (but really  neg ) shape circle square triangle color red blue green pos neg pos neg neg
Overfitting Noise in Decision Trees Category or feature noise can easily cause overfitting. Add noisy instance <medium, blue, circle>: pos (but really neg ) shape circle square triangle color red blue green pos neg pos neg <big, blue, circle>: − <medium, blue, circle>: + Noise can also cause different instances of the same feature vector to have different classes. Impossible to fit this data and must label leaf with the majority class. <big, red, circle>: neg (but really pos ) Conflicting examples can also arise if the features are incomplete and inadequate to determine the class or if the target concept is non-deterministic. small med big pos neg neg
Overfitting Prevention (Pruning) Methods Two basic approaches for decision trees Prepruning : Stop growing the tree at some point during top-down construction when there is no longer sufficient data to make reliable decisions. Postpruning : Grow the full tree, then remove subtrees that do not have sufficient evidence. Label the leaf resulting from pruning with the majority class of the remaining data, or a class probability distribution . Methods for determining which subtrees to prune: Cross-validation : Reserve some training data as a hold-out set ( validation set ) to evaluate the utility of subtrees. Statistical test : Use a statistical test on the training data to determine if any observed regularity can be dismissed as likely due to random chance. Minimum description length (MDL): Determine whether the additional complexity of the hypothesis is less complex than just explicitly remembering any exceptions resulting from pruning.
Pruning Goal: Prevent overfitting to noise in the data Two strategies for “pruning” the decision tree: (Stop earlier / Forward pruning): Stop growing the tree earlier – extra stopping conditions, e.g. Stop if  all instances belong to the same class Stop if  all the attribute values are the same Stop if  number of instances < some user-specified threshold Stop if expanding the current node  does not improve impurity   measures  (e.g.,  Gini or Gain ). (Post-pruning): Allow overfit and then post-prune the tree. Estimation of errors and tree size to decide which subtree should be pruned. Postpruning preferred in practice—prepruning can “stop too early”
Early stopping Pre-pruning may stop the growth process prematurely:  early stopping But: XOR-type problems rare in practice And: pre-pruning  faster  than post-pruning witten & eibe
Post-pruning First, build full tree Then, prune it Fully-grown tree shows all attribute interactions  Problem: some subtrees might be due to chance effects Two pruning operations:  Subtree replacement Subtree raising Possible strategies: error estimation significance testing MDL principle witten & eibe
Post-pruning: Subtree replacement, 1 Bottom-up Consider replacing a tree only after considering all its subtrees Ex: labor negotiations  witten & eibe
Post-pruning: Subtree replacement, 2 What subtree can we replace?
Subtree replacement, 3 Bottom-up Consider replacing a tree only after considering all its subtrees witten & eibe
Estimating error rates Prune only if it reduces the estimated error Error on the training data is NOT a useful estimator Q: Why would it result in very little pruning? Use a hold-out set for pruning (“reduced-error pruning”) C4.5’s method Derive a confidence interval from the training data Use a heuristic limit, derived from this, for pruning Standard Bernoulli-process-based method Shaky statistical assumptions (based on training data) witten & eibe
*Mean and variance Mean and variance for a Bernoulli trial: p, p(1 − p) Expected success rate f = S/N Mean and variance for f : p, p(1 − p)/N For large enough N , f follows a Normal distribution The c% confidence interval [−z ≤ X ≤ z] for a random variable with 0 mean is given by Pr[−z ≤ X ≤ z] = c With a symmetric distribution: Pr[−z ≤ X ≤ z] = 1 − 2 × Pr[X ≥ z] witten & eibe
*Subtree raising Delete node Redistribute instances Slower than subtree replacement (Worthwhile?) witten & eibe X
C4.5’s method Error estimate for a subtree is the weighted sum of the error estimates for all its leaves Error estimate for a node (upper confidence bound on its error rate): e = ( f + z²/2N + z·sqrt( f/N − f²/N + z²/4N² ) ) / ( 1 + z²/N ) If c = 25% then z = 0.69 (from the normal distribution) f is the error on the training data N is the number of instances covered by the leaf witten & eibe
Example (subtree replacement): the three leaves have f = 0.33 (e = 0.47), f = 0.5 (e = 0.72) and f = 0.33 (e = 0.47); combined using the ratios 6:2:6 this gives e = 0.51. Collapsing the subtree to a single node gives f = 5/14, e = 0.46 < 0.51, so prune! witten & eibe
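A sketch of this error estimate in Python, assuming the three leaves cover 6, 2 and 6 instances as the 6:2:6 ratio suggests; the formula is the standard C4.5 upper confidence bound described above.

```python
import math

def c45_error_estimate(f, n, z=0.69):
    """Pessimistic (upper-bound) error estimate used by C4.5 pruning;
    z = 0.69 corresponds to the default 25% confidence level."""
    return (f + z * z / (2 * n)
            + z * math.sqrt(f / n - f * f / n + z * z / (4 * n * n))) / (1 + z * z / n)

# Three leaves with training error rates 2/6, 1/2, 2/6 (coverage 6, 2, 6):
leaves = [(2 / 6, 6), (1 / 2, 2), (2 / 6, 6)]
estimates = [c45_error_estimate(f, n) for f, n in leaves]
print([round(e, 2) for e in estimates])                      # ~[0.47, 0.72, 0.47]
combined = sum(e * n for e, (_, n) in zip(estimates, leaves)) / 14
print(round(combined, 2))                                    # ~0.51
# Replacing the subtree by a single node covering all 14 instances (f = 5/14):
print(round(c45_error_estimate(5 / 14, 14), 2))              # ~0.45 (slide rounds to 0.46); below 0.51, so prune
```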
Issues in Decision Tree Learning Effect of Reduced-Error Pruning
Issues with Reduced Error Pruning The problem with this approach is that it potentially “wastes” training data on the validation set. The severity of this problem depends on where we are on the learning curve. (figure: test accuracy vs. number of training examples)
Issues in Decision Tree Learning Rule Post-Pruning
Issues in Decision Tree Learning Converting A Tree to Rules
Issues in Decision Tree Learning Attributes with Many Values
Issues in Decision Tree Learning Attributes with Costs
Issues in Decision Tree Learning Weakness of decision trees Not always sufficient to learn complex concepts (e.g., a weighted evaluation function) Can be hard to understand: real problems can produce deep trees with a large branching factor Some problems with continuously-valued attributes or classes may not be easily discretized Methods for handling missing attribute values are somewhat clumsy
Additional Decision Tree Issues Better splitting criteria Information gain prefers features with many values. Continuous features Predicting a real-valued function (regression trees) Missing feature values Features with costs Misclassification costs Mining large databases that do not fit in main memory

More Related Content

PPTX
Decision Tree - C4.5&CART
PDF
Decision trees in Machine Learning
PPTX
Decision tree
PPT
Decision tree
PPTX
Machine learning clustering
PDF
Decision tree lecture 3
PPTX
Decision tree induction \ Decision Tree Algorithm with Example| Data science
PPTX
Decision Tree Learning
Decision Tree - C4.5&CART
Decision trees in Machine Learning
Decision tree
Decision tree
Machine learning clustering
Decision tree lecture 3
Decision tree induction \ Decision Tree Algorithm with Example| Data science
Decision Tree Learning

What's hot (20)

PPTX
Support vector machines (svm)
PPTX
Decision trees
PPTX
Hierarchical clustering.pptx
PPTX
Clustering in data Mining (Data Mining)
PPTX
K nearest neighbor
PPTX
Unsupervised learning clustering
ODP
Machine Learning with Decision trees
PPTX
Dimensionality Reduction | Machine Learning | CloudxLab
PDF
Decision Tree in Machine Learning
PDF
Support Vector Machines for Classification
PPTX
Data preprocessing in Machine learning
PPTX
Decision Tree Learning
PPTX
Naive Bayes
PPTX
Machine learning with ADA Boost
PPT
Perceptron algorithm
PPTX
Decision tree
PPTX
05 Clustering in Data Mining
PDF
Dimensionality Reduction
PPTX
Clusters techniques
PDF
Lecture 4 Decision Trees (2): Entropy, Information Gain, Gain Ratio
Support vector machines (svm)
Decision trees
Hierarchical clustering.pptx
Clustering in data Mining (Data Mining)
K nearest neighbor
Unsupervised learning clustering
Machine Learning with Decision trees
Dimensionality Reduction | Machine Learning | CloudxLab
Decision Tree in Machine Learning
Support Vector Machines for Classification
Data preprocessing in Machine learning
Decision Tree Learning
Naive Bayes
Machine learning with ADA Boost
Perceptron algorithm
Decision tree
05 Clustering in Data Mining
Dimensionality Reduction
Clusters techniques
Lecture 4 Decision Trees (2): Entropy, Information Gain, Gain Ratio
Ad

Similar to Slide3.ppt (20)

PDF
Decision-Tree-.pdf techniques and many more
PPTX
3830599
PPT
Classification and Prediction_ai_101.ppt
PPTX
NN Classififcation Neural Network NN.pptx
PPT
www1.cs.columbia.edu
PPTX
pattern recognition techniques and algo.pptx
PPT
Classification: Basic Concepts and Decision Trees
PPTX
Machine Learning with Python unit-2.pptx
PPTX
Introduction to Datamining Concept and Techniques
PPT
Unit 3classification
PPT
Data Mining Concepts and Techniques.ppt
PPT
Data Mining Concepts and Techniques.ppt
DOCX
Dr. Oner CelepcikayITS 632ITS 632Week 4Classification
PPTX
Decision tree induction
PPTX
Lecture4.pptx
PPTX
Random Forest and KNN is fun
PPTX
Lect9 Decision tree
PPTX
07 learning
PPTX
Decision tree
PPTX
Predictive analytics using 'R' Programming
Decision-Tree-.pdf techniques and many more
3830599
Classification and Prediction_ai_101.ppt
NN Classififcation Neural Network NN.pptx
www1.cs.columbia.edu
pattern recognition techniques and algo.pptx
Classification: Basic Concepts and Decision Trees
Machine Learning with Python unit-2.pptx
Introduction to Datamining Concept and Techniques
Unit 3classification
Data Mining Concepts and Techniques.ppt
Data Mining Concepts and Techniques.ppt
Dr. Oner CelepcikayITS 632ITS 632Week 4Classification
Decision tree induction
Lecture4.pptx
Random Forest and KNN is fun
Lect9 Decision tree
07 learning
Decision tree
Predictive analytics using 'R' Programming
Ad

More from butest (20)

PDF
EL MODELO DE NEGOCIO DE YOUTUBE
DOC
1. MPEG I.B.P frame之不同
PDF
LESSONS FROM THE MICHAEL JACKSON TRIAL
PPT
Timeline: The Life of Michael Jackson
DOCX
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
PDF
LESSONS FROM THE MICHAEL JACKSON TRIAL
PPTX
Com 380, Summer II
PPT
PPT
DOCX
The MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
DOC
MICHAEL JACKSON.doc
PPTX
Social Networks: Twitter Facebook SL - Slide 1
PPT
Facebook
DOCX
Executive Summary Hare Chevrolet is a General Motors dealership ...
DOC
Welcome to the Dougherty County Public Library's Facebook and ...
DOC
NEWS ANNOUNCEMENT
DOC
C-2100 Ultra Zoom.doc
DOC
MAC Printing on ITS Printers.doc.doc
DOC
Mac OS X Guide.doc
DOC
hier
DOC
WEB DESIGN!
EL MODELO DE NEGOCIO DE YOUTUBE
1. MPEG I.B.P frame之不同
LESSONS FROM THE MICHAEL JACKSON TRIAL
Timeline: The Life of Michael Jackson
Popular Reading Last Updated April 1, 2010 Adams, Lorraine The ...
LESSONS FROM THE MICHAEL JACKSON TRIAL
Com 380, Summer II
PPT
The MYnstrel Free Press Volume 2: Economic Struggles, Meet Jazz
MICHAEL JACKSON.doc
Social Networks: Twitter Facebook SL - Slide 1
Facebook
Executive Summary Hare Chevrolet is a General Motors dealership ...
Welcome to the Dougherty County Public Library's Facebook and ...
NEWS ANNOUNCEMENT
C-2100 Ultra Zoom.doc
MAC Printing on ITS Printers.doc.doc
Mac OS X Guide.doc
hier
WEB DESIGN!

Slide3.ppt

  • 1. Machine Learning: Decision Tree Learning Soongsil University, Seoul Gun Ho Lee
  • 2. Decision Tree Learning Introduction Decision Tree Representation Appropriate Problems for Decision Tree Learning Basic Algorithm Hypothesis Space Search in Decision Tree Learning Inductive Bias in Decision Tree Learning Issues in Decision Tree Learning Summary
  • 3. Tree leaning Task Induction Deduction Learn Model (tree) Model (Tree) Test Set Learning algorithm Training Set Apply Model Tid Attrib1 Attrib2 Attrib3 Class 1 Yes Large 125K No 2 No Medium 100K No 3 No Small 70K No 4 Yes Medium 120K No 5 No Large 95K Yes 6 No Medium 60K No 7 Yes Large 220K No 8 No Small 85K Yes 9 No Medium 75K No 10 No Small 90K Yes 10 Tid Attrib1 Attrib2 Attrib3 Class 11 No Small 55K ? 12 Yes Medium 80K ? 13 Yes Large 110K ? 14 No Small 95K ? 15 No Large 67K ? 10
  • 4. Example of a Decision Tree Refund MarSt TaxInc YES NO NO NO Yes No Married Single, Divorced < 80K > 80K Splitting Attributes Training Data Model: Decision Tree categorical categorical continuous class
  • 5. Another Example of Decision Tree categorical categorical continuous class MarSt Refund TaxInc YES NO NO Yes No Married Single, Divorced < 80K > 80K There could be more than one tree that fits the same data! NO
  • 6. Decision Tree Classification Task Decision Tree
  • 7. Apply Model to Test Data Test Data Start from the root of tree. Refund MarSt TaxInc YES NO NO NO Yes No Married Single, Divorced < 80K > 80K
  • 8. Apply Model to Test Data Test Data Refund MarSt TaxInc YES NO NO NO Yes No Married Single, Divorced < 80K > 80K
  • 9. Apply Model to Test Data Refund MarSt TaxInc YES NO NO NO Yes No Married Single, Divorced < 80K > 80K Test Data
  • 10. Apply Model to Test Data Refund MarSt TaxInc YES NO NO NO Yes No Married Single, Divorced < 80K > 80K Test Data
  • 11. Apply Model to Test Data Refund MarSt TaxInc YES NO NO NO Yes No Married Single, Divorced < 80K > 80K Test Data
  • 12. Apply Model to Test Data Refund MarSt TaxInc YES NO NO NO Yes No Married Single, Divorced < 80K > 80K Test Data Assign Cheat to “No”
  • 13. Decision Tree Classification Task Decision Tree
  • 14. Overview One of the most widely used and practical methods for inductive inference over supervised data It approximates discrete-valued functions (as opposed to continuous) It is robust to noisy data Decision trees can represent any discrete function on discrete features It is also efficient for processing large amounts of data , so is often used in data mining applications Decision Tree Learners’ bias typically prefers small tree over larger ones
  • 15. Decision Trees Tree-based classifiers for instances represented as feature-vectors. Nodes test features, there is one branch for each value of the feature, and leaves specify the category. Can represent arbitrary conjunction and disjunction. Can represent any classification function over discrete feature vectors. Can be rewritten as a set of rules, i.e. disjunctive normal form (DNF). red  circle -> pos red  circle -> A blue -> B; red  square -> B green -> C; red  triangle -> C color red blue green shape circle square triangle neg pos pos neg neg color red blue green shape circle square triangle B C A B C
  • 16. Properties of Decision Tree Learning Continuous (real-valued) features can be handled by allowing nodes to split a real valued feature into two ranges based on a threshold (e.g. length < 3 and length  3) Classification trees have discrete class labels at the leaves , regression trees allow real-valued outputs at the leaves . Algorithms for finding consistent trees are efficient for processing large amounts of training data for data mining tasks. Methods developed for handling noisy training data (both class and feature noise). Methods developed for handling missing feature values.
  • 17. What makes a good tree? Not too small – need to include enough attributes to handle possibly subtle distinctions in data Not too big - computational efficiency (avoid redundant, spurious attributes) - avoid over-fitting training examples (noisy, scarce data) Occam’s Razor: find simplest hypothesis (tree) that is consistent with all observations inductive bias – small trees, with informative nodes near root
  • 18. Basic Algorithm 가능한 모든 decision trees space 에서의 top-down, greedy search Training examples 를 가장 잘 분류할 수 있는 attribute 를 루트에 둔다 . Entropy, Information gain
  • 19. Top-Down Decision Tree Induction Recursively build a tree top-down by divide and conquer. Example: <big, red, circle>: + <small, red, circle>: + <small, red, square>:  <big, blue, circle>:  <big, red , circle>: + <small, red , circle>: + <small, red , square>:  color red blue green
  • 20. Top-Down Decision Tree Induction Recursively build a tree top-down by divide and conquer. shape circle square triangle <big, red , circle>: + <small, red, circle>: + <small, red , square>:  color red blue green <big, red , circle>: + <small, red , circle>: + pos <small, red , square>:  neg pos <big, blue, circle>:  neg neg Example: <big, red, circle>: + <small, red, circle>: + <small, red, square>:  <big, blue, circle>: 
  • 21. Decision Tree Induction Pseudocode DTree( examples , features ) returns a tree If all examples are in one category, return a leaf node with that category label. Else if the set of features is empty, return a leaf node with the category label that is the most common in examples. Else pick a feature F and create a node R for it For each possible value v i of F : Let examples i be the subset of examples that have value v i for F Add an out-going edge E to node R labeled with the value v i. If examples i is empty then attach a leaf node to edge E labeled with the category that is the most common in examples . else call DTree( examples i , features – { F }) and attach the resulting tree as the subtree under edge E. Return the subtree rooted at R.
  • 22. The Basic Decision Tree Learning Algorithm Top-Down Induction of Decision Trees . This approach is exemplified by the ID3 algorithm and its successor C4.5
  • 23. Tree Induction Greedy strategy. Split the records based on an attribute test that optimizes certain criterion. Issues Determine how to split the records How to specify the attribute test condition? How to determine the best split? Determine when to stop splitting
  • 24. How to determine the Best Split Income Age >=10k <10k young old Customers fair customers Good customers
  • 25. How to determine the Best Split Greedy approach: Nodes with homogeneous class distribution are preferred Need a measure of node impurity: High degree of impurity Low degree of impurity pure 50% red 50% green 75% red 25% green 100% red 0% green
  • 26. Split Selection Method Numerical or ordered attributes: Find a split point that separates the (two) classes (Yes: No: ) Age < 33 30 35 Age
  • 27. Split Selection Method (Contd.) Categorical attributes: How to group? Sport: Truck: Minivan: (Sport, Truck) -- (Minivan) (Sport) --- (Truck, Minivan) (Sport, Minivan) --- (Truck) Car Sport, Truck Minivan
  • 28. Decision Tree Induction Many Algorithms: Hunt’s Algorithm (1960’s, one of the earliest) ID3(Quinlan 1979), C4.5(Quinlan 1993) CART SLIQ (EDBT’96 — Mehta et al.) builds an index for each attribute and only class list and the current attribute list reside in memory SPRINT (VLDB’96 — J. Shafer et al.) constructs an attribute list data structure RainForest (VLDB’98 — Gehrke, Ramakrishnan & Ganti) separates the scalability aspects from the criteria that determine the quality of the tree builds an AVC-list (attribute, value, class label) BOAT Uses bootstrapping to create several small samples
  • 29. History of Decision-Tree Research Hunt and colleagues use exhaustive search decision-tree methods (CLS) to model human concept learning in the 1960’s. In the late 70’s, Quinlan developed ID3 with the information gain heuristic to learn expert systems from examples. Simulataneously, Breiman and Friedman and colleagues develop CART (Classification and Regression Trees), similar to ID3. In the 1980’s a variety of improvements are introduced to handle noise, continuous features, missing features, and improved splitting criteria. Various expert-system development tools results. Quinlan’s updated decision-tree package (C4.5) released in 1993. Weka includes Java version of C4.5 called J48 .
  • 30. General Structure of Hunt’s Algorithm Let D t be the set of training records that reach a node t General Procedure: If D t contains records that belong the same class y t , then t is a leaf node labeled as y t If D t is an empty set, then t is a leaf node labeled by the default class, y d If D t contains records that belong to more than one class, use an attribute test to split the data into smaller subsets. Recursively apply the procedure to each subset. D t ?
  • 31. Hunt’s Algorithm Cheat=No Refund Cheat=No Cheat=No Yes No Refund Cheat=No Yes No Marital Status Cheat=No Cheat Single, Divorced Married Taxable Income Cheat=No < 80K >= 80K Refund Cheat=No Yes No Marital Status Cheat=No Cheat Single, Divorced Married
  • 32. What is ID3( Interactive Dichotomizer 3) A mathematical algorithm for building the decision tree. Invented by J. Ross Quinlan in 1979. Uses Information Theory invented by Shannon in 1948. Information Gain is used to select the most useful attribute for classification. Builds the tree from the top down, with no backtracking.
  • 33. ID3 ID3 is designed for that case: many attributes training set contains many objects a reasonably good decision tree is required without much computation It is generally found to construct simple decision tree. But it cannot guarantee that the trees is always best.
  • 34. ID3 Algorithm Overview Step 1 : Choose a random subset of the training set Step 2 : Form a decision tree that correctly classifies all the objects in the window Step 3 : IF the tree gives the correct answer for all the objects in the training set, Then the process terminates; Else a selection of the incorrectly classified objects is added to the window and go to Step 2
  • 36. Picking a Good Split Feature Goal is to have the resulting tree be as small as possible , per Occam’s razor. Finding a minimal decision tree (nodes, leaves, or depth) is an NP-hard optimization problem. Top-down divide-and-conquer method does a greedy search for a simple tree but does not guarantee to find the smallest. General lesson in ML: “Greed is good.” Want to pick a feature that creates subsets of examples that are relatively “ pure” in a single class so they are “closer” to being leaf nodes. There are a variety of heuristics for picking a good test, a popular one is based on information gain that originated with the ID3 system of Quinlan (1979).
  • 37. Entropy Minimum number of bits of information needed to encode the classification of an arbitrary member of S entropy = 0, if all members in the same class entropy = 1, if |positive examples|=|negative examples|
  • 38. Example – Information Needed n = 5 p = 9 The information needed to generate a decision tree from this window is: E(p,n) = = 0.940 bits
  • 39. Entropy Entropy (disorder, impurity) of a set of examples, S, relative to a binary classification is: where p 1 is the fraction of positive examples in S and p 0 is the fraction of negatives. If all examples are in one category , entropy is zero (we define 0  log(0)=0 ) If examples are equally mixed ( p 1 = p 0 =0.5), entropy is a maximum of 1 . For multi-class problems with c categories , entropy generalizes to:
  • 40. Entropy Plot for Binary Classification
  • 41. Information Gain Parent Node, p is split into k partitions; n i is number of records in partition I Measures Reduction in Entropy achieved because of the split. Choose the split that achieves maximum GAIN value !! Used in ID3 and C4.5 Disadvantage : Tends to prefer splits that result in large number of partitions , each being small but pure.
  • 42. Information Gain Example: <big, red, circle>: + <small, red, circle>: + <small, red, square>:  <big, blue, circle>:  2+, 2  : E=1 size big small 1+,1  1+,1  E=1 E=1 Gain=1  (0.5  1 + 0.5  1) = 0 2+, 2  : E=1 color red blue 2+,1  0+,1  E=0.918 E=0 Gain=1  (0.75  0.918 +0.25  0) = 0.311 2+, 2  : E=1 shape circle square 2+,1  0+,1  E=0.918 E=0 Gain =1  (0.75  0.918+0.25  0) = 0.311
  • 43. Training Examples for PlayTennis Day Outlook 온도 Humidity Wind PlayTennis D1 Sunny Hot High Weak No D2 Sunny Hot High Strong No D3 Overcast Hot High Weak Yes D4 Rain Mild High Weak Yes D5 Rain Cool Normal Weak Yes D6 Rain Cool Normal Strong No D7 Overcast Cool Normal Strong Yes D8 Sunny Mild High Weak No D9 Sunny Cool Normal Weak Yes D10 Rain Mild Normal Weak Yes D11 Sunny Mild Normal Strong Yes D12 Overcast Mild High Strong Yes D13 Overcast Hot Normal Weak Yes D14 Rain Mild High Strong No
  • 44. ID3 play don’t play p no = 5/14 p yes = 9/14 Impurity = - p yes log 2 p yes - p no log 2 p no = - 9/14 log 2 9/14 - 5/14 log 2 5/14 = 0.94 bits
  • 45. Selecting the Next Attribute : which attribute is the best classifier? ID3
  • 46. Example – Branch on outlook E(outlook) = 0.694 Gain(outlook)=0.940- E(outlook )=0.246bits p 3 =3, n 3 =2, E(p 3 , n 3 )=0.971 p 2 =4, n 2 =0, E(p 2 , n 2 )=0 p 1 =2, n 1 =3, E(p 1 , n 1 )=0.971 Outlook sunny overcast rain P True Normal Mild 11 P False Normal Cool 9 N False High Mild 8 N True High Hot 2 N False High Hot 1 P False Normal Hot 13 P True High Mild 12 P true Normal Cool 7 P False High Hot 3 N True High Mild 14 P False Normal Mild 10 N True Normal Cool 6 P False Normal Cool 5 P False High Mild 4
  • 47. ID3 amount of information required to specify class of an example given that it reaches node 0.94 bits 0.97 bits * 5/14 gain: 0.25 bits maximal information gain play don’t play 0.0 bits * 4/14 0.97 bits * 5/14 0.98 bits * 7/14 0.59 bits * 7/14 0.92 bits * 6/14 0.81 bits * 4/14 0.81 bits * 8/14 1.0 bits * 4/14 1.0 bits * 6/14 outlook sunny overcast rainy + = 0.69 bits + = 0.79 bits + = 0.91 bits + = 0.89 bits gain: 0.15 bits gain: 0.03 bits gain: 0.05 bits humidity temperature windy high normal hot mild cool false true
  • 48. ID3 outlook sunny overcast rainy maximal information gain 0.97 bits play don’t play 0.0 bits * 3/5 humidity temperature windy high normal hot mild cool false true + = 0.0 bits gain: 0.97 bits + = 0.40 bits gain: 0.57 bits + = 0.95 bits gain: 0.02 bits 0.0 bits * 2/5 0.0 bits * 2/5 1.0 bits * 2/5 0.0 bits * 1/5 0.92 bits * 3/5 1.0 bits * 2/5
  • 49. ID3 outlook sunny overcast rainy humidity high normal 0.97 bits play don’t play 1.0 bits *2/5 temperature windy hot mild cool false true + = 0.95 bits gain: 0.02 bits + = 0.95 bits gain: 0.02 bits + = 0.0 bits gain: 0.97 bits humidity high normal 0.92 bits * 3/5 0.92 bits * 3/5 1.0 bits * 2/5 0.0 bits * 3/5 0.0 bits * 2/5 
  • 50. ID3 outlook sunny overcast rainy windy false true humidity high normal Yes No No Yes Yes play don’t play
  • 51. Hypothesis Space Search in Decision Tree Learning ID3 can be characterized as searching a space of hypothesis for one that fits the training examples The hypothesis space searched by ID3 is the set of possible decisions ID3 performs a simple-to-complex , hill-climbing search through this hypothesis space, locally-optimal solution .
  • 52. Hypothesis Space Search in Decision Tree Learning Hypothesis space of all decision tree is a complete space of finite discrete-valued functions, relative to the available attributes Outputs a single hypothesis No Back Tracking Local minima… Inductive bias : approximate “prefer shortest tree” Information-gain gives a bias for trees with minimal depth
  • 53. Hypothesis Space Search Performs batch learning that processes all training instances at once rather than incremental learning that updates a hypothesis after each example. Guaranteed to find a tree consistent with any conflict-free training set i.e. identical feature vectors always assigned the same class,
  • 54. Occam’s Razor Occam’s Razor: Prefer the simplest hypothesis that fits the data William of Ockham (AD 1285? – 1347?)
  • 55. Complex DT : Simple DT temperature cool mild hot outlook outlook outlook sunny o’cast rain P N windy true false N P sunny o’cast rain windy P humid true false P N high normal windy P true false N P true false N humid high normal P outlook sunny o’cast rain N P null outlook sunny overcast rain humidity P windy high normal N P true false N P
  • 56. Inductive Bias in Decision Tree Learning Occam’s Razor
  • 58. Disadvantages Only allow 2 classes, This limitation is usually removed in most later systems. Not guaranteed to find the simplest tree. Not incremental. Additional training data can not be considered without rebuilt the the whole tree with all the former data.
  • 59. Advantages A reasonably good decision tree without much computation. Iterative method usually is found more quickly to build a tree than build on the whole training set. The process is not sensitive to parameters such as window size. ID3 is linear to the difficulty of the problem
  • 60. C4.5 History ID3, CHAID – 1960s C4.5 innovations (Quinlan): permit numeric attributes deal sensibly with missing values pruning to deal with for noisy data C4.5 - one of best-known and most widely-used learning algorithms Last research version: C4.8, implemented in Weka as J4.8 (Java) Commercial successor: C5.0 (available from Rulequest)
  • 61. C4.5 ID3 favors attributes with large number of divisions Lead to overfitting Improved version of ID3: Missing Data Continuous Data Pruning Subtree replacement by leaf node Subtree raising Automated rule generation GainRatio: take into account the cardinality of each division
  • 62. Weakness of ID3: Highly-branching attributes Problematic: attributes with a large number of values (extreme case: ID code) Subsets are more likely to be pure if there is a large number of values ⇒ Information gain is biased towards choosing attributes with a large number of values ⇒ This may result in overfitting (selection of an attribute that is non-optimal for prediction)
  • 63. Weakness of ID3: Split for ID Code Attribute Entropy of split = 0 (since each leaf node is “pure”, having only one case. Information gain is maximal for ID code ID code No Yes No No Yes D1 D2 D3 … D13 D14 Day Outlook 온도 Humidity Wind PlayTennis D1 Sunny Hot High Weak No D2 Sunny Hot High Strong No D3 Overcast Hot High Weak Yes D4 Rain Mild High Weak Yes D5 Rain Cool Normal Weak Yes D6 Rain Cool Normal Strong No D7 Overcast Cool Normal Strong Yes D8 Sunny Mild High Weak No D9 Sunny Cool Normal Weak Yes D10 Rain Mild Normal Weak Yes D11 Sunny Mild Normal Strong Yes D12 Overcast Mild High Strong Yes D13 Overcast Hot Normal Weak Yes D14 Rain Mild High Strong No
  • 64. C4.5 Gain Ratio: a parent node p is split into k partitions, where n_i is the number of records in partition i. SplitINFO = −Σ_{i=1..k} (n_i/n) log2(n_i/n), and GainRatio = Gain / SplitINFO. This adjusts information gain by the entropy of the partitioning (SplitINFO), so a higher-entropy partitioning (a large number of small partitions) is penalized. Used in C4.5; designed to overcome the disadvantage of information gain. Split information is sensitive to how broadly and uniformly the attribute splits the data.
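As a concrete illustration of the GainRatio computation above, here is a minimal Python sketch; the helper names and the toy class counts are illustrative, not part of C4.5 itself.

```python
import math

def entropy(counts):
    """Entropy (in bits) of a list of class counts."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

def gain_ratio(parent_counts, partition_counts):
    """Information gain divided by SplitINFO for one candidate split.

    parent_counts:    class counts at the parent node, e.g. [9, 5]
    partition_counts: list of class-count lists, one per partition
    """
    n = sum(parent_counts)
    # Expected entropy after the split, weighted by partition size
    children = sum(sum(p) / n * entropy(p) for p in partition_counts)
    gain = entropy(parent_counts) - children
    # SplitINFO = entropy of the partition sizes themselves
    split_info = entropy([sum(p) for p in partition_counts])
    return gain / split_info if split_info > 0 else 0.0

# Toy example: a 2-way split vs. an ID-code-like 14-way split of [9 yes, 5 no]
print(gain_ratio([9, 5], [[6, 2], [3, 3]]))
# The 14-way split has maximal gain, but SplitINFO = log2(14) shrinks its ratio
print(gain_ratio([9, 5], [[1, 0]] * 9 + [[0, 1]] * 5))
```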
  • 65. Numeric attributes Standard method: binary splits, e.g. temp < 45. Unlike a nominal attribute, a numeric attribute has many possible split points. The solution is a straightforward extension: evaluate the info gain (or another measure) for every possible split point of the attribute, choose the "best" split point, and use the info gain for the best split point as the info gain for the attribute. Computationally more demanding. witten & eibe
  • 66. Example Split on the temperature attribute, e.g. temperature < 71.5: yes/4, no/2; temperature ≥ 71.5: yes/5, no/3. Info([4,2],[5,3]) = 6/14 · info([4,2]) + 8/14 · info([5,3]) = 0.939 bits. Place split points halfway between adjacent values; all split points can be evaluated in one pass! witten & eibe
Sorted values and classes: 64/Yes 65/No 68/Yes 69/Yes 70/Yes 71/No 72/No 72/Yes 75/Yes 75/Yes 80/No 81/Yes 83/Yes 85/No
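A small Python check of the numbers above, assuming the 14 sorted temperature/class pairs listed on the slide; a sketch, not the full split-selection code.

```python
import math

def info(counts):
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

values  = [64, 65, 68, 69, 70, 71, 72, 72, 75, 75, 80, 81, 83, 85]
classes = ['Y','N','Y','Y','Y','N','N','Y','Y','Y','N','Y','Y','N']

def split_info(threshold):
    """Weighted info of the binary split value < threshold vs. value >= threshold."""
    left  = [c for v, c in zip(values, classes) if v < threshold]
    right = [c for v, c in zip(values, classes) if v >= threshold]
    n = len(values)
    def counts(part):
        return [part.count('Y'), part.count('N')]
    return (len(left) / n) * info(counts(left)) + (len(right) / n) * info(counts(right))

print(round(split_info(71.5), 3))          # 0.939 bits, as on the slide

# Candidate split points lie halfway between adjacent distinct values
midpoints = sorted({(a + b) / 2 for a, b in zip(values, values[1:]) if a != b})
best = min(midpoints, key=split_info)
print(best, round(split_info(best), 3))
```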
  • 67. Avoid repeated sorting! Sort instances by the values of the numeric attribute; time complexity for sorting: O(n log n). Q: Does this have to be repeated at each node of the tree? A: No! The sort order for the children can be derived from the sort order for the parent; time complexity of the derivation: O(n). Drawback: need to create and store an array of sorted indices for each numeric attribute. witten & eibe
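A minimal sketch of how a child's sort order can be derived from the parent's in O(n), assuming the parent keeps an array of instance indices already sorted by the numeric attribute; the function and variable names are illustrative.

```python
def split_sorted_indices(sorted_idx, goes_left):
    """Given the parent's indices sorted by a numeric attribute and a flag per
    instance saying which child it goes to, return each child's indices,
    still in sorted order, with a single O(n) pass (no re-sorting needed)."""
    left, right = [], []
    for i in sorted_idx:
        (left if goes_left[i] else right).append(i)
    return left, right

# Example: parent order by temperature; instances 1 and 3 go to the left child
parent_order = [4, 1, 0, 3, 2]
goes_left = {0: False, 1: True, 2: False, 3: True, 4: False}
print(split_sorted_indices(parent_order, goes_left))  # ([1, 3], [4, 0, 2])
```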
  • 68. Weather data – nominal values witten & eibe
Outlook   Temperature  Humidity  Windy  Play
Sunny     Hot          High      False  No
Sunny     Hot          High      True   No
Overcast  Hot          High      False  Yes
Rainy     Mild         Normal    False  Yes
…         …            …         …      …
If outlook = sunny and humidity = high then play = no
If outlook = rainy and windy = true then play = no
If outlook = overcast then play = yes
If humidity = normal then play = yes
If none of the above then play = yes
  • 69. More speeding up Entropy only needs to be evaluated between points of different classes (Fayyad & Irani, 1992). Breakpoints between values of the same class cannot be optimal, so only the boundaries between class changes are potential optimal breakpoints. (figure: the sorted temperature values 64 … 85 with their classes, marking the candidate breakpoints where the class changes)
  • 70. Continuous Attributes: Computing the Gini Index For efficient computation, for each attribute: sort the attribute on its values, then linearly scan these values, each time updating the count matrix and computing the Gini index, and choose the split position that has the least Gini index. (figure: sorted values with the candidate split positions between them)
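The linear scan described above can be sketched in Python as follows, assuming a small two-class example and illustrative names; this is not the exact CART implementation.

```python
def best_gini_split(values, labels):
    """Sort once, then sweep left-to-right, keeping running class counts on
    each side and evaluating the weighted Gini index at every candidate split."""
    data = sorted(zip(values, labels))
    n = len(data)
    classes = set(labels)
    left = {c: 0 for c in classes}
    right = {c: labels.count(c) for c in classes}

    def gini(counts):
        total = sum(counts.values())
        return 1.0 - sum((c / total) ** 2 for c in counts.values()) if total else 0.0

    best = (float('inf'), None)
    for i in range(n - 1):
        _, y = data[i]
        left[y] += 1          # move one record from the right side to the left
        right[y] -= 1
        if data[i][0] == data[i + 1][0]:
            continue          # no valid split point between equal values
        split = (data[i][0] + data[i + 1][0]) / 2
        weighted = (i + 1) / n * gini(left) + (n - i - 1) / n * gini(right)
        best = min(best, (weighted, split))
    return best               # (lowest weighted Gini, split value)

print(best_gini_split([60, 70, 75, 85, 90, 95], ['N', 'N', 'Y', 'Y', 'Y', 'N']))
```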
  • 71. Splitting Based on Nominal Attributes Multi-way split: use as many partitions as distinct values, e.g. CarType → {Family}, {Sports}, {Luxury}. Binary split: divide the values into two subsets and find the optimal partitioning, e.g. CarType → {Family, Luxury} vs. {Sports}, or {Sports, Luxury} vs. {Family}; Size → {Small, Large} vs. {Medium}.
  • 72. Splitting Based on Continuous Attributes
  • 73. Missing as a separate value Missing values are denoted "?" in C4.x. Simple idea: treat missing as a separate value. Q: When is this not appropriate? A: When values are missing for different reasons. Example 1: a gene expression value could be missing because it is very high or very low. Example 2: the field IsPregnant = missing for a male patient should be treated differently (no) than for a female patient of age 25 (unknown).
  • 74. Handling Missing Attribute Values Missing values affect decision tree construction in three different ways: Affects how impurity measures are computed Affects how to distribute instance with missing value to child nodes Affects how a test instance with missing value is classified
  • 75. Computing the Impurity Measure Before splitting: Entropy(Parent) = −(0.3)log(0.3) − (0.7)log(0.7) = 0.8813. Split on Refund (one record has a missing Refund value): Entropy(Refund=Yes) = −(0/3)log(0/3) − (3/3)log(3/3) = 0; Entropy(Refund=No) = −(2/6)log(2/6) − (4/6)log(4/6) = 0.9183; Entropy(Children) = 3/10 · (0) + 6/10 · (0.9183) = 0.551. Gain = 0.8813 − 0.551 = 0.3303.
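The numbers above can be reproduced with a few lines of Python; the sketch assumes the 10-record Refund example from the slide (3 records with Refund=Yes and class No, 6 records with Refund=No of which 2 are Yes, one record with Refund missing, and 3 Yes / 7 No overall).

```python
import math

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

parent = entropy([3, 7])                    # 0.8813 (3 Yes, 7 No out of 10)
e_yes  = entropy([0, 3])                    # Refund=Yes: 0 Yes, 3 No -> 0
e_no   = entropy([2, 4])                    # Refund=No : 2 Yes, 4 No -> 0.9183
children = 3/10 * e_yes + 6/10 * e_no       # 0.551 (missing-Refund record excluded)
gain = parent - children                    # 0.3303
print(round(parent, 4), round(e_no, 4), round(children, 4), round(gain, 4))
```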
  • 76. Handling Missing Attribute Values Missing values affect decision tree construction in three different ways: Affects how impurity measures are computed Affects how to distribute instance with missing value to child nodes Affects how a test instance with missing value is classified
  • 77. Distribute Instances (figure: the record with the missing Refund value being sent to both children of the Refund split) The probability that Refund=Yes is 3/9 and the probability that Refund=No is 6/9, so the record is assigned to the left child with weight = 3/9 and to the right child with weight = 6/9.
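A minimal sketch of this fractional weighting in Python; the record/weight representation is illustrative, not C4.5's actual data structures.

```python
def distribute(record, weight, branch_fractions):
    """Send a record with a missing test value down every branch,
    scaling its weight by the observed fraction of training records
    that took each branch (e.g. Refund=Yes: 3/9, Refund=No: 6/9)."""
    return [(branch, record, weight * frac) for branch, frac in branch_fractions.items()]

fractions = {'Refund=Yes': 3/9, 'Refund=No': 6/9}
for branch, rec, w in distribute({'TaxInc': '90K'}, 1.0, fractions):
    print(branch, rec, round(w, 3))   # weights 0.333 and 0.667
```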
  • 78. Handling Missing Attribute Values Missing values affect decision tree construction in three different ways: Affects how impurity measures are computed Affects how to distribute instance with missing value to child nodes Affects how a test instance with missing value is classified
  • 79. Classify Instances (decision tree: Refund → MarSt → TaxInc, as before) New record with Marital Status missing. Weighted class counts at the MarSt node:
            Married  Single  Divorced  Total
Class=No    3        1       0         4
Class=Yes   6/9      1       1         2.67
Total       3.67     2       1         6.67
Probability that Marital Status = Married is 3.67/6.67; probability that Marital Status = {Single, Divorced} is 3/6.67.
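A sketch of how a test record with a missing Marital Status could be classified from the weighted counts above; the branch probabilities 3.67/6.67 and 3/6.67 come from the table, everything else (names, structure) is illustrative.

```python
# Class distributions observed in each branch of the MarSt test
branch_class_counts = {
    'Married':          {'No': 3, 'Yes': 6/9},
    'Single, Divorced': {'No': 1, 'Yes': 2},     # Single + Divorced combined
}
total = sum(sum(c.values()) for c in branch_class_counts.values())   # 6.67

# For a record with MarSt missing, weight each branch's class distribution
# by the probability of reaching that branch, then pick the majority class.
scores = {'No': 0.0, 'Yes': 0.0}
for counts in branch_class_counts.values():
    branch_total = sum(counts.values())
    p_branch = branch_total / total
    for cls, cnt in counts.items():
        scores[cls] += p_branch * (cnt / branch_total)
print(scores, '->', max(scores, key=scores.get))
```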
  • 81. From trees to rules – how? How can we produce a set of rules from a decision tree?
  • 82. From trees to rules – simple Simple way: one rule for each leaf. C4.5rules: greedily prune conditions from each rule if this reduces its estimated error; this can produce duplicate rules, so check for them at the end. Then look at each class in turn, consider the rules for that class, and find a "good" subset (guided by MDL). Then rank the subsets to avoid conflicts. Finally, remove rules (greedily) if this decreases error on the training data. witten & eibe
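The "one rule per leaf" step can be sketched as a simple recursive traversal; the nested-dict tree representation is a toy illustration, not C4.5rules itself, which also prunes conditions and ranks the rules as described above.

```python
def tree_to_rules(node, conditions=()):
    """Walk a nested-dict decision tree and emit one (conditions, class) rule per leaf."""
    if not isinstance(node, dict):                 # leaf: a class label
        return [(list(conditions), node)]
    (attribute, branches), = node.items()
    rules = []
    for value, child in branches.items():
        rules += tree_to_rules(child, conditions + (f"{attribute} = {value}",))
    return rules

tree = {'outlook': {'sunny':    {'humidity': {'high': 'no', 'normal': 'yes'}},
                    'overcast': 'yes',
                    'rainy':    {'windy': {'true': 'no', 'false': 'yes'}}}}
for conds, label in tree_to_rules(tree):
    print('IF', ' AND '.join(conds) or 'TRUE', 'THEN play =', label)
```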
  • 83. C4.5rules: choices and options C4.5rules is slow for large and noisy datasets. The commercial version, C5.0rules, uses a different technique that is much faster and a bit more accurate. C4.5 has two parameters: the confidence value (default 25%; lower values incur heavier pruning) and the minimum number of instances in the two most popular branches (default 2). witten & eibe
  • 84. CART Split Selection Method Motivation: we need a way to choose quantitatively between different splitting predicates. Idea: quantify the impurity of a node. Method: select the splitting predicate, from a space of possible splitting predicates, that generates child nodes with minimum impurity.
  • 85. CART If a data set D contains examples from n classes, the Gini index gini(D) is defined as gini(D) = 1 − Σ_{j=1..n} p_j², where p_j is the relative frequency of class j in D. If D is split on attribute A into two subsets D1 and D2, the Gini index of the split is gini_A(D) = (|D1|/|D|)·gini(D1) + (|D2|/|D|)·gini(D2). Reduction in impurity: Δgini(A) = gini(D) − gini_A(D). The attribute that provides the smallest gini_A(D) (or, equivalently, the largest reduction in impurity) is chosen to split the node.
  • 86. Measure of Impurity: GINI Maximum (1 − 1/n_c, i.e. 0.5 for two classes) when records are equally distributed among all classes, implying the least interesting information. Minimum (0.0) when all records belong to one class, implying the most interesting information.
  • 87. Examples for computing GINI P(C1) = 0/6 = 0, P(C2) = 6/6 = 1: Gini = 1 − P(C1)² − P(C2)² = 1 − 0 − 1 = 0. P(C1) = 1/6, P(C2) = 5/6: Gini = 1 − (1/6)² − (5/6)² = 0.278. P(C1) = 2/6, P(C2) = 4/6: Gini = 1 − (2/6)² − (4/6)² = 0.444.
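The three Gini values above can be checked with a short script; the helper name is illustrative.

```python
def gini(counts):
    """Gini index 1 - sum(p_j^2) for a list of class counts."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

print(gini([0, 6]), round(gini([1, 5]), 3), round(gini([2, 4]), 3))  # 0.0, 0.278, 0.444
```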
  • 88. Comparison among Splitting Criteria For a 2-class problem:
  • 89. CART Ex. D has 9 tuples with buys_computer = "yes" and 5 with "no". Suppose the attribute income partitions D into D1 = {low, medium} with 10 tuples and D2 with 4 tuples; compute the Gini index of each candidate binary partition of income. On these data the partition with gini {medium, high} = 0.30 is the best, since it is the lowest. All attributes are assumed continuous-valued; the method can be modified for categorical attributes.
  • 90. Comparing Attribute Selection Measures The three measures, in general, return good results, but: Information gain is biased towards multivalued attributes. Gain ratio tends to prefer unbalanced splits in which one partition is much smaller than the others. The Gini index is biased towards multivalued attributes, has difficulty when the number of classes is large, and tends to favor tests that result in equal-sized partitions with purity in both partitions.
  • 91. Issues in Decision Tree Learning Overfitting in Decision Trees
  • 92. Issues in Decision Tree Learning Overfitting in Decision Trees (figure: definition of overfitting in terms of hypotheses h and h′)
  • 93. Overfitting Learning a tree that classifies the training data perfectly may not lead to the tree with the best generalization to unseen data. There may be noise in the training data that the tree is erroneously fitting, and the algorithm may be making poor decisions towards the leaves of the tree that are based on very little data and may not reflect reliable trends. (figure: accuracy on training data vs. on test data as a function of hypothesis complexity)
  • 94. Overfitting Example Testing Ohm's Law: V = IR (I = (1/R)V). Experimentally measure 10 points and fit a curve to the resulting data (plot of current I against voltage V). A 9th-degree polynomial fits the training data perfectly (n points can be fit exactly with a degree-(n−1) polynomial). "Ohm was wrong, we have found a more accurate function!"
  • 95. Overfitting Example Testing Ohm's Law: V = IR (I = (1/R)V). A linear function that fits the training data less accurately gives better generalization.
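The Ohm's-law example can be reproduced with NumPy; this sketch assumes synthetic noisy measurements, and the resistance value, noise level, and seed are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
R = 2.0                                   # "true" resistance (illustrative)
V = np.linspace(1, 10, 10)                # 10 measured voltages
I = V / R + rng.normal(0, 0.05, size=10)  # noisy current measurements

lin  = np.polyfit(V, I, deg=1)            # linear model (matches Ohm's law)
poly = np.polyfit(V, I, deg=9)            # degree-9 polynomial: interpolates the 10 points

V_test = np.linspace(1, 10, 100)          # unseen voltages
I_true = V_test / R
print("train error, linear :", np.abs(np.polyval(lin,  V) - I).max())
print("train error, deg 9  :", np.abs(np.polyval(poly, V) - I).max())   # ~0
print("test  error, linear :", np.abs(np.polyval(lin,  V_test) - I_true).max())
print("test  error, deg 9  :", np.abs(np.polyval(poly, V_test) - I_true).max())  # typically larger
```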
  • 96. Overfitting Noise in Decision Trees Category or feature noise can easily cause overfitting. Add the noisy instance <medium, blue, circle>: pos (but really neg). (figure: the original decision tree over color and shape)
  • 97. Overfitting Noise in Decision Trees Category or feature noise can easily cause overfitting. Add the noisy instance <medium, blue, circle>: pos (but really neg): the tree grows an extra test (on size: small, med, big) under the blue branch to separate <big, blue, circle>: − from <medium, blue, circle>: +. Noise can also cause different instances of the same feature vector to have different classes, e.g. <big, red, circle>: neg (but really pos); it is impossible to fit this data, and the leaf must be labeled with the majority class. Conflicting examples can also arise if the features are incomplete and inadequate to determine the class, or if the target concept is non-deterministic.
  • 98. Overfitting Prevention (Pruning) Methods Two basic approaches for decision trees: Prepruning: stop growing the tree at some point during top-down construction when there is no longer sufficient data to make reliable decisions. Postpruning: grow the full tree, then remove subtrees that do not have sufficient evidence; label the leaf resulting from pruning with the majority class of the remaining data, or with a class probability distribution. Methods for determining which subtrees to prune: Cross-validation: reserve some training data as a hold-out set (validation set) to evaluate the utility of subtrees. Statistical test: use a statistical test on the training data to determine whether any observed regularity can be dismissed as likely due to random chance. Minimum description length (MDL): determine whether the additional complexity of the hypothesis is less complex than just explicitly remembering any exceptions resulting from pruning.
  • 99. Pruning Goal: prevent overfitting to noise in the data. Two strategies for "pruning" the decision tree: (Stop earlier / forward pruning): stop growing the tree earlier using extra stopping conditions, e.g. stop if all instances belong to the same class, stop if all the attribute values are the same, stop if the number of instances is below some user-specified threshold, or stop if expanding the current node does not improve the impurity measures (e.g., Gini or Gain). (Post-pruning): allow the tree to overfit and then post-prune it, using estimates of error and tree size to decide which subtree should be pruned. Postpruning is preferred in practice, since prepruning can "stop too early".
  • 100. Early stopping Pre-pruning may stop the growth process prematurely (early stopping): in XOR-type problems no single attribute looks useful on its own, so pre-pruning would never expand the root. But XOR-type problems are rare in practice, and pre-pruning is faster than post-pruning. witten & eibe
  • 101. Post-pruning First build the full tree, then prune it. The fully-grown tree shows all attribute interactions, but some subtrees might be due to chance effects. Two pruning operations: subtree replacement and subtree raising. Possible strategies: error estimation, significance testing, the MDL principle. witten & eibe
  • 102. Post-pruning: Subtree replacement, 1 Bottom-up Consider replacing a tree only after considering all its subtrees Ex: labor negotiations witten & eibe
  • 103. Post-pruning: Subtree replacement, 2 What subtree can we replace?
  • 104. Subtree replacement, 3 Bottom-up Consider replacing a tree only after considering all its subtrees witten & eibe
  • 105. Estimating error rates Prune only if pruning reduces the estimated error. Error on the training data is NOT a useful estimator. Q: Why would it result in very little pruning? Use a hold-out set for pruning ("reduced-error pruning"). C4.5's method: derive a confidence interval from the training data and use a heuristic limit, derived from this, for pruning. It is a standard Bernoulli-process-based method, but the statistical assumptions are shaky (they are based on the training data). witten & eibe
  • 106. *Mean and variance Mean and variance for a Bernoulli trial: p and p(1−p). Success rate f = S/N. Mean and variance for f: p and p(1−p)/N. For large enough N, f follows a Normal distribution. A c% confidence interval [−z ≤ X ≤ z] for a random variable with 0 mean is given by Pr[−z ≤ X ≤ z] = c. With a symmetric distribution: Pr[−z ≤ X ≤ z] = 1 − 2·Pr[X ≥ z]. witten & eibe
  • 107. *Subtree raising Delete a node and redistribute its instances. Slower than subtree replacement (worthwhile?). witten & eibe
  • 108. C4.5's method The error estimate for a subtree is the weighted sum of the error estimates for all its leaves. The error estimate for a node is the upper confidence bound e = ( f + z²/2N + z·sqrt( f/N − f²/N + z²/4N² ) ) / ( 1 + z²/N ), where f is the error on the training data, N is the number of instances covered by the leaf, and, if c = 25%, then z = 0.69 (from the normal distribution). witten & eibe
  • 109. Example Leaf error estimates: f = 0.33, e = 0.47; f = 0.5, e = 0.72; f = 0.33, e = 0.47. Combined using the ratios 6:2:6, the subtree's estimated error is e = 0.51. The parent node has f = 5/14, e = 0.46. Since 0.46 < 0.51, prune! witten & eibe
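The estimates in this example can be reproduced from the upper-bound formula on the previous slide; a sketch using z = 0.69 for c = 25%.

```python
import math

def c45_error(f, N, z=0.69):
    """C4.5-style pessimistic (upper-bound) error estimate for a leaf with
    observed error rate f on N training instances."""
    return (f + z*z/(2*N) + z*math.sqrt(f/N - f*f/N + z*z/(4*N*N))) / (1 + z*z/N)

leaves = [(2/6, 6), (1/2, 2), (2/6, 6)]           # (f, N) for the three leaves
leaf_e = [c45_error(f, n) for f, n in leaves]     # ~0.47, 0.72, 0.47
subtree_e = sum(e * n for e, (_, n) in zip(leaf_e, leaves)) / 14   # combined 6:2:6 -> ~0.51
parent_e  = c45_error(5/14, 14)                   # ~0.45 with this formula (slide quotes 0.46)
print([round(e, 2) for e in leaf_e], round(subtree_e, 2), round(parent_e, 2))
print("prune!" if parent_e < subtree_e else "keep subtree")
```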
  • 110. Issues in Decision Tree Learning Effect of Reduced-Error Pruning
  • 111. Issues with Reduced Error Pruning The problem with this approach is that it potentially "wastes" training data on the validation set. The severity of this problem depends on where we are on the learning curve. (figure: test accuracy as a function of the number of training examples)
  • 112. Issues in Decision Tree Learning Rule Post-Pruning
  • 113. Issues in Decision Tree Learning Converting A Tree to Rules
  • 114. Issues in Decision Tree Learning Attributes with Many Values
  • 115. Issues in Decision Tree Learning Attributes with Costs
  • 116. Issues in Decision Tree Learning Weakness of decision trees Not always sufficient to learn complex concepts (e.g., a weighted evaluation function). Can be hard to understand: real problems can produce deep trees with a large branching factor. Some problems with continuously-valued attributes or classes may not be easily discretized. Methods for handling missing attribute values are somewhat clumsy.
  • 117. Additional Decision Tree Issues Better splitting criteria Information gain prefers features with many values. Continuous features Predicting a real-valued function (regression trees) Missing feature values Features with costs Misclassification costs Mining large databases that do not fit in main memory