Overview of Tree Algorithms
from Decision Tree to xgboost
Takami Sato
Agenda
• Xgboost occupied Kaggle
• Decision Tree
• Random Forest
• Gradient Boosting Tree
• Extreme Gradient Boosting(xgboost)
– Dart
Xgboost occupied Kaggle
More than half of the winning
solutions in machine learning
challenges hosted at Kaggle
adopt XGBoost
http://www.kdnuggets.com/2016/03/xgboost-implementing-winningest-kaggle-algorithm-spark-flink.html
Awesome XGBoost
• Vlad Sandulescu, Mihai Chiru, 1st place of the KDD Cup 2016 competition. Link to the arxiv paper.
• Marios Michailidis, Mathias Müller and HJ van Veen, 1st place of the Dato Truely Native? competition.
Link to the Kaggle interview.
• Vlad Mironov, Alexander Guschin, 1st place of the CERN LHCb experiment Flavour of Physics
competition. Link to the Kaggle interview.
• Josef Slavicek, 3rd place of the CERN LHCb experiment Flavour of Physics competition. Link to the
Kaggle interview.
• Mario Filho, Josef Feigl, Lucas, Gilberto, 1st place of the Caterpillar Tube Pricing competition. Link to the
Kaggle interview.
• Qingchen Wang, 1st place of the Liberty Mutual Property Inspection. Link to the Kaggle interview.
• Chenglong Chen, 1st place of the Crowdflower Search Results Relevance. Link to the winning solution.
• Alexandre Barachant (“Cat”) and Rafał Cycoń (“Dog”), 1st place of the Grasp-and-Lift EEG Detection.
Link to the Kaggle interview.
• Halla Yang, 2nd place of the Recruit Coupon Purchase Prediction Challenge. Link to the Kaggle interview.
• Owen Zhang, 1st place of the Avito Context Ad Clicks competition. Link to the Kaggle interview.
• Keiichi Kuroyanagi, 2nd place of the Airbnb New User Bookings. Link to the Kaggle interview.
• Marios Michailidis, Mathias Müller and Ning Situ, 1st place Homesite Quote Conversion. Link to the
Kaggle interview.
Awesome XGBoost: Machine Learning Challenge Winning Solutions
https://github.com/dmlc/xgboost/tree/master/demo#machine-learning-challenge-winning-solutions
What’s happened?
XGBoost is an implementation of the gradient boosting tree model.
Decision Tree → Random Forest → Gradient Boosting Tree → xgboost?
What happened during this evolution?
Decision Trees were the beginning of everything.
Decision Trees (DTs) are a non-parametric supervised learning
method used for classification and regression. The goal is to
create a model that predicts the value of a target variable by
learning simple decision rules inferred from the data features.
cited from http://scikit-learn.org/stable/modules/tree.html
Definition.
Decision Tree
(Figure: a decision tree with nodes A–E, where each internal node splits the data by one of decision rules 1–4.)
How were the rules found?
Set a metric that evaluates the impurity of a split of the data, then minimize that metric at each node.

Classification
• Gini impurity (CART): Gini(S) = Σ_k p_k (1 − p_k)
• Entropy (C4.5): H(S) = −Σ_k p_k log p_k

Regression
• Variance: (|S_L| SD(S_L)² + |S_R| SD(S_R)²) / (|S_L| + |S_R|)

p_k: probability of an item with label k
K: number of classes (k = 1, …, K)
SD(S): standard deviation of set S
S_L, S_R: left and right split of a node
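For concreteness, here is a minimal Python sketch of these three measures (the function names are mine, not from the slides; natural log is used for entropy to match the examples on the next slides):

import numpy as np

def gini_impurity(labels):
    # Gini impurity: sum_k p_k * (1 - p_k) = 1 - sum_k p_k^2
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def entropy(labels):
    # Entropy: -sum_k p_k * log(p_k)
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def weighted_child_variance(left_targets, right_targets):
    # Regression criterion: size-weighted variance of the two child nodes
    n_l, n_r = len(left_targets), len(right_targets)
    return (n_l * np.var(left_targets) + n_r * np.var(right_targets)) / (n_l + n_r)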
Examples
Classification
sex age survived
female 29 1
male 1 1
female 2 0
male 30 0
female 25 0
male 48 1
female 63 1
male 39 0
female 53 1
male 71 0
Predict whether a person survived, using the Titanic dataset.

age         #survived  #people  probability  Gini impurity
age >= 40   3          4        0.75         0.375
age < 40    2          6        0.33         0.444

sex         #survived  #people  probability  Gini impurity
male        2          5        0.40         0.480
female      3          5        0.60         0.480

Decide thresholds, calculate the probabilities, then take the weighted average Gini impurity of the children.
Gini impurity of the whole node: 0.5
Age split: weighted average Gini impurity 0.42 (down 0.08)
Sex split: weighted average Gini impurity 0.48 (down 0.02)
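As a check, a short sketch (mine, not from the slides) reproduces the weighted Gini impurity for the age split on this ten-row sample:

import numpy as np

# Ten-row Titanic sample from the slide: age and survived columns
ages     = np.array([29, 1, 2, 30, 25, 48, 63, 39, 53, 71])
survived = np.array([ 1, 1, 0,  0,  0,  1,  1,  0,  1,  0])

def gini(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

left = ages >= 40
weighted = (left.sum() * gini(survived[left])
            + (~left).sum() * gini(survived[~left])) / len(ages)
print(gini(survived), weighted)   # 0.5 and about 0.42, i.e. a decrease of about 0.08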
Examples
Classification
sex age survived
female 29 1
male 1 1
female 2 0
male 30 0
female 25 0
male 48 1
female 63 1
male 39 0
female 53 1
male 71 0
Predict whether a person survived, using the Titanic dataset.

age         #survived  #people  probability  Entropy
age >= 40   3          4        0.75         0.562
age < 40    2          6        0.33         0.637

sex         #survived  #people  probability  Entropy
male        2          5        0.40         0.673
female      3          5        0.60         0.673

Decide thresholds, calculate the probabilities, then take the weighted average entropy of the children.
Entropy of the whole node: 0.69
Age split: weighted average entropy 0.61 (down 0.08)
Sex split: weighted average entropy 0.67 (down 0.02)
Examples
Regression
sex survived age
female 1 29
male 1 1
female 0 2
male 0 30
female 0 25
male 1 48
female 1 63
male 0 39
female 1 53
male 0 71
Predict the age of a person from the Titanic dataset.

Calculate the variance within each candidate split, then take the weighted average.

sex       Var      #people
male      524.56   5
female    466.24   5

survived  Var      #people
0         502.64   5
1         479.36   5

Variance of the whole node: 498.29
Survived split: weighted average variance 491.0 (down 7.29)
Sex split: weighted average variance 495.4 (down 2.89)
Other techniques for decision tree
Stopping criteria
• Maximum depth
• Minimum number of leaf nodes

Finding a good threshold for numerical data
• observed points of the data
• points where the class labels change
• percentiles of the data

Pruning the tree
• Prune a subtree when its criterion exceeds a threshold:
  C(T) = Σ_τ Q_τ(T) + λ|T|
  T: a subtree of the original tree
  τ: index of the leaf nodes
  Q_τ: impurity metric of leaf τ (Gini, entropy or variance)
  cited from PRML, formula (14.31)
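As an illustration (scikit-learn is not mentioned on this slide, so treat this as my own sketch), these stopping and pruning techniques correspond to constructor arguments of DecisionTreeClassifier:

from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Stopping criteria: maximum depth and minimum samples per leaf.
# Pruning: ccp_alpha acts like the lambda penalty on the number of leaves.
tree = DecisionTreeClassifier(criterion="gini",
                              max_depth=4,
                              min_samples_leaf=5,
                              ccp_alpha=0.01)
tree.fit(X, y)
print(tree.get_depth(), tree.get_n_leaves())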
Random Forest
(Figure from https://stat.ethz.ch/education/semesters/ss2012/ams/slides/v10.2.pdf)
Main ideas of Random Forest
• Bootstrapping data
• Random selection of features
• Ensembling trees
– Average
– Majority voting
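A minimal from-scratch sketch of these three ideas (my own illustration; in practice you would simply use sklearn.ensemble.RandomForestClassifier, shown on the later slides). X and y are assumed to be NumPy arrays with integer class labels:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_random_forest(X, y, n_trees=100, random_state=0):
    rng = np.random.RandomState(random_state)
    trees = []
    for _ in range(n_trees):
        # 1. Bootstrapping: sample rows with replacement.
        idx = rng.randint(0, len(X), size=len(X))
        # 2. Random selection of features: max_features="sqrt" subsamples
        #    candidate features at every split.
        tree = DecisionTreeClassifier(max_features="sqrt",
                                      random_state=rng.randint(1 << 30))
        trees.append(tree.fit(X[idx], y[idx]))
    return trees

def predict_majority(trees, X):
    # 3. Ensembling the trees by majority voting.
    votes = np.stack([t.predict(X) for t in trees]).astype(int)
    return np.array([np.bincount(col).argmax() for col in votes.T])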
Random Forest as a Feature Selector
Random Forest is difficult to interpret, but it can calculate some kind of feature importance.

Gain-based importance
Sum up the gains of each split where a feature is used (finally, normalize all the importances).
In the split shown above, “Age” earns 0.08 feature-importance points.
Random Forest as a Feature Selector
Permutation-based importance
Decreasing accuracy after permuting each column
Target Feat. 1 Feat. 2 Feat. 3 Feat. 4
0 1 2 11 101
1 2 3 12 102
1 3 5 13 103
0 4 7 14 104
Original data
Target Feat. 1 Feat. 2 Feat. 3 Feat. 4
0 1 5 11 101
1 2 7 12 102
1 3 2 13 103
0 4 3 14 104
Permuted data
Accuracy on the original data: 0.8. Accuracy after permuting Feature 2: 0.7, a drop of 0.1.
Feature 2’s importance is therefore 0.1.
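A sketch of the permutation-importance procedure described above (assuming a fitted classifier clf and a held-out NumPy array X_val with labels y_val; these names are mine):

import numpy as np
from sklearn.metrics import accuracy_score

def permutation_importance(clf, X_val, y_val, seed=0):
    rng = np.random.RandomState(seed)
    base = accuracy_score(y_val, clf.predict(X_val))
    importances = []
    for j in range(X_val.shape[1]):
        X_perm = X_val.copy()
        rng.shuffle(X_perm[:, j])                 # permute one column only
        permuted = accuracy_score(y_val, clf.predict(X_perm))
        importances.append(base - permuted)       # drop in accuracy = importance
    return np.array(importances)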
Which importance is good ?
Gain-based importance
• Pros: no additional computation needed; implemented in scikit-learn.
• Cons: biased in favor of continuous variables and variables with many categories [Strobl+ 2008].

Permutation-based importance
• Pros: good for correlated variables?
• Cons: needs additional computation.
It is still a controversial issue.
If you want to learn more, please check [Louppe+ 2013]
Out-of-bag (OOB) Error
In random forests, we can get an unbiased estimator of the test error without CV.
Procedure to get the OOB error
1. For the k-th tree, draw a bootstrap sample from all of the data; the remaining data are the out-of-bag (OOB) data.
2. Calculate the error of the k-th tree on its OOB data.
3. Loop over the construction of all trees, then average the OOB errors for each data point.
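In scikit-learn this is exposed through the oob_score option; a small usage sketch:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
rf = RandomForestClassifier(n_estimators=300, bootstrap=True, oob_score=True,
                            random_state=0)
rf.fit(X, y)
print(rf.oob_score_)   # OOB estimate of the generalization accuracy, without CV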
Scikit-learn options
Parameter: Description
n_estimators: The number of trees in the forest.
criterion: "gini" or "entropy".
max_features: The number of features to consider when looking for the best split.
max_depth: The maximum depth of the tree.
min_samples_split: The minimum number of samples required to split an internal node.
min_samples_leaf: The minimum number of samples required to be at a leaf node.
min_weight_fraction_leaf: The minimum weighted fraction of the sum total of weights (of all the input samples) required to be at a leaf node.
max_leaf_nodes: Grow trees with max_leaf_nodes in best-first fashion.
min_impurity_split: Threshold for early stopping in tree growth.
bootstrap: Whether bootstrap samples are used when building trees.
oob_score: Whether to use out-of-bag samples to estimate the generalization accuracy.
warm_start: When set to True, reuse the solution of the previous call to fit and add more estimators to the ensemble; otherwise, just fit a whole new forest.
http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html
Gradient Boosting Tree (GBT)
Gradient Boosting Tree (GBT)
The Elements of Statistical Learning 2nd edition, p. 359
Key steps annotated on the algorithm: computing the pseudo-residuals, and a one-dimensional optimization for each leaf.
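A minimal sketch of this loop for squared loss, where the pseudo-residual is simply y − F(x) and the leaf-wise one-dimensional optimum is the leaf mean (my own illustration, not the book's code):

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gbt(X, y, n_rounds=100, learning_rate=0.1, max_depth=3):
    f0 = np.mean(y)                      # initial constant prediction
    pred = np.full(len(y), f0)
    trees = []
    for _ in range(n_rounds):
        residual = y - pred              # pseudo-residual (negative gradient of squared loss)
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residual)            # leaf means are the 1-d optima for squared loss
        pred += learning_rate * tree.predict(X)
        trees.append(tree)
    return f0, trees

def predict_gbt(f0, trees, X, learning_rate=0.1):
    return f0 + learning_rate * sum(tree.predict(X) for tree in trees)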
Xgboost(eXtreme Gradient Boosting)
• xgboost is one of the implementations of GBT.
• Its splitting criterion is different from the criteria shown above.

Loss function (in the notation of the xgboost paper):
  L(φ) = Σ_i l(ŷ_i, y_i) + Σ_k Ω(f_k),  with  Ω(f) = γT + (1/2) λ ‖w‖²
where T is the number of leaves and w the vector of leaf weights. xgboost also implements L1 regularization (we will see this later).

A splitting criterion derived directly from the loss function is the biggest contribution of xgboost.
Xgboost’s Split finding algorithms
• xgboost is one of the implementations of GBT.
• Its splitting criterion is different from the criteria shown above.

Quadratic approximation of the loss at round t:
  L(t) ≈ Σ_i [ l(y_i, ŷ_i^(t−1)) + g_i f_t(x_i) + (1/2) h_i f_t(x_i)² ] + Ω(f_t)
First-order gradient:  g_i = ∂_{ŷ^(t−1)} l(y_i, ŷ^(t−1))
Second-order gradient: h_i = ∂²_{ŷ^(t−1)} l(y_i, ŷ^(t−1))
Xgboost’s Split finding algorithms
• xgboost is one of the implementations of GBT.
• Its splitting criterion is different from the criteria shown above.

Solve for the minimum by isolating w. With G_j = Σ_{i∈I_j} g_i and H_j = Σ_{i∈I_j} h_i, the optimal leaf weight is
  w_j* = − G_j / (H_j + λ),  giving the objective value  −(1/2) Σ_j G_j² / (H_j + λ) + γT.
Gain of this criterion when a node splits into L_L and L_R:
  Gain = (1/2) [ G_L²/(H_L+λ) + G_R²/(H_R+λ) − (G_L+G_R)²/(H_L+H_R+λ) ] − γ
This is xgboost’s splitting criterion.
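Expressed in code, the quantities above look like this (a sketch following the formulas, not xgboost's actual implementation):

def leaf_weight(G, H, lam):
    # Optimal leaf weight w* = -G / (H + lambda)
    return -G / (H + lam)

def split_gain(G_L, H_L, G_R, H_R, lam, gamma):
    # G_* and H_* are sums of first- and second-order gradients in each child
    def score(G, H):
        return G * G / (H + lam)
    return 0.5 * (score(G_L, H_L) + score(G_R, H_R)
                  - score(G_L + G_R, H_L + H_R)) - gamma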
Xgboost’s Split finding algorithms
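This slide reproduces the split finding algorithm from the xgboost paper. As a simplified one-feature sketch of exact greedy split finding (my own illustration): sort the instances by feature value and accumulate gradient statistics while scanning candidate thresholds.

import numpy as np

def exact_greedy_split(x, g, h, lam=1.0, gamma=0.0):
    # Find the best threshold on one feature x, given gradients g and hessians h.
    order = np.argsort(x)
    x, g, h = x[order], g[order], h[order]
    G, H = g.sum(), h.sum()
    G_L = H_L = 0.0
    best_gain, best_threshold = 0.0, None
    for i in range(len(x) - 1):
        G_L += g[i]; H_L += h[i]
        if x[i] == x[i + 1]:
            continue                     # cannot split between equal feature values
        G_R, H_R = G - G_L, H - H_L
        gain = 0.5 * (G_L**2 / (H_L + lam) + G_R**2 / (H_R + lam)
                      - G**2 / (H + lam)) - gamma
        if gain > best_gain:
            best_gain, best_threshold = gain, (x[i] + x[i + 1]) / 2
    return best_threshold, best_gain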
Xgboost’s Split finding algorithms for sparse data
Parameters of xgboost
Parameters of xgboost
• eta [default=0.3, range: [0,1]]
  – step size shrinkage used in the update to prevent overfitting. After each boosting step we can directly get the weights of new features, and eta shrinks those weights to make the boosting process more conservative.
  – Shrinkage update: ŷ^(t) = ŷ^(t−1) + η f_t(x)
https://github.com/dmlc/xgboost/blob/master/doc/parameter.md

• gamma [default=0, range: [0,∞]]
  – minimum loss reduction required to make a further partition on a leaf node of the tree. The larger it is, the more conservative the algorithm will be.
  – If gamma is large enough, the gain above becomes negative, so the split is not made.
Parameters of xgboost
• max_depth [default=6, range: [1,∞]]
  – maximum depth of a tree; increasing this value makes the model more complex and more likely to overfit.
• min_child_weight [default=1, range: [0,∞]]
  – minimum sum of instance weight (hessian) needed in a child. If the tree partition step results in a leaf node with a sum of instance weight less than min_child_weight, the building process will give up further partitioning. In linear regression mode, this simply corresponds to the minimum number of instances needed in each node. The larger it is, the more conservative the algorithm will be.
  – If H_j = Σ_{i∈I_j} h_i (the sum of instance hessians in leaf j) < min_child_weight, then stop partitioning.
Parameters of xgboost
• max_delta_step [default=0, range: [0,∞]]
– Maximum delta step we allow each tree's weight estimation to be. If the
value is set to 0, it means there is no constraint. If it is set to a positive
value, it can help making the update step more conservative. Usually
this parameter is not needed, but it might help in logistic regression
when class is extremely imbalanced. Set it to value of 1-10 might help
control the update
If the estimated weight exceeds max_delta_step, is it capped at max_delta_step? I am not sure; if someone knows, please tell me.
Parameters of xgboost
• subsample [default=1, range: (0,1]]
– subsample ratio of the training instance. Setting it to 0.5 means that XGBoost
randomly collected half of the data instances to grow trees and this will prevent
overfitting.
• colsample_bylevel [default=1, range: (0,1]]
– subsample ratio of columns for each split, in each level.
• colsample_bytree [default=1, range: (0,1]]
– subsample ratio of columns when constructing each tree.
Parameters of xgboost
• lambda [default=1]
  – L2 regularization term on weights; increasing this value will make the model more conservative.
• alpha [default=0]
  – L1 regularization term on weights; increasing this value will make the model more conservative.
https://www.kaggle.com/forums/f/15/kaggle-forum/t/24181/xgboost-alpha-parameter/138272
https://github.com/dmlc/xgboost/blob/v0.60/src/tree/param.h#L178
Parameters of xgboost
Please see Algorithm 1 and Algorithm 2.
• tree_method [default='auto']
– The tree construction algorithm used in XGBoost(see description in the reference
paper)
– Distributed and external memory version only support approximate algorithm.
– Choices: {'auto', 'exact', 'approx'}
– 'auto': Use heuristic to choose faster one.
• For small to medium dataset, exact greedy will be used.
• For very large-dataset, approximate algorithm will be chosen.
• Because old behavior is always use exact greedy in single machine, user will get a message when approximate
algorithm is chosen to notify this choice.
– 'exact': Exact greedy algorithm.
– 'approx': Approximate greedy algorithm using sketching and histogram.
• sketch_eps [default=0.03, range: (0, 1)]
– This is only used for approximate greedy algorithm.
– This roughly translates into O(1 / sketch_eps) bins. Compared to directly selecting the number of bins, this comes with a theoretical guarantee on sketch accuracy.
– Usually the user does not have to tune this, but consider setting it to a lower number for more accurate enumeration.
I am not sure about this parameter, but the main developer also commented on it (see the issue linked below).
Parameters for early stopping
• updater_seq, [default="grow_colmaker,prune"]
– A comma separated string mentioning The sequence of Tree updaters that
should be run. A tree updater is a pluggable operation performed on the tree at
every step using the gradient information. Tree updaters can be registered using
the plugin system provided.
https://github.com/dmlc/xgboost/issues/1732
• num_round
– The number of rounds for boosting
It is the counterpart of “n_estimators” in the scikit-learn API.
Parameters for early stopping
• early_stopping_rounds
– Activates early stopping. Validation error needs to decrease at least every <early_stopping_rounds>
round(s) to continue training. Requires at least one item in evals. If there’s more than one, will use the last.
Returns the model from the last iteration (not the best one). If early stopping occurs, the model will have
three additional fields: bst.best_score, bst.best_iteration and bst.best_ntree_limit. (Use bst.best_ntree_limit
to get the correct value if num_parallel_tree and/or num_class appears in the parameters)
• feval
– Customized evaluation function
Sample feval:

def sample_feval(preds, dtrain):
    labels = dtrain.get_label()
    some_metric = calc_some_metric(preds, labels)   # placeholder metric function
    return 'MCC', some_metric

If you have a validation set, you can tune the number of boosting rounds.
https://github.com/dmlc/xgboost/blob/master/demo/guide-python/custom_objective.py
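Putting the two together, a usage sketch with the native API (dtrain and dvalid are xgboost DMatrix objects you would build from your own data; the parameter values are only illustrative):

import xgboost as xgb

params = {"objective": "binary:logistic", "eta": 0.1, "max_depth": 6}
bst = xgb.train(params, dtrain,
                num_boost_round=1000,                     # num_round
                evals=[(dtrain, "train"), (dvalid, "valid")],
                feval=sample_feval,                       # custom metric from above
                maximize=True,                            # MCC: larger is better
                early_stopping_rounds=50)
print(bst.best_iteration, bst.best_score)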
DART [2015 Rashmi+]
• Applies the dropout technique to GBT (MART).
• DART prevents over-specialization.
  – Trees added early contribute too much to the prediction.
  – Shrinkage also mitigates over-specialization, but the authors claim it is not enough.
DART(Dropouts meet Multiple Additive Regression Trees)
DART [2015 Rashmi+]
Each boosting round adds three steps to MART:
1. Deciding which trees are dropped.
2. Calculating the pseudo-residuals (using only the trees that were not dropped).
3. Reducing the weights of the dropped trees.
Parameters for DART at xgboost
• normalize_type [default="tree"]
– type of normalization algorithm.
– "tree": new trees have the same weight of each of dropped trees.
• weight of new trees are 1 / (k + learning_rate)
• dropped trees are scaled by a factor of k / (k + learning_rate)
– "forest": new trees have the same weight of sum of dropped trees (forest).
• weight of new trees are 1 / (1 + learning_rate)
• dropped trees are scaled by a factor of 1 / (1 + learning_rate)
• sample_type [default="uniform"]
– type of sampling algorithm.
– "uniform": dropped trees are selected uniformly.
– "weighted": dropped trees are selected in proportion to weight.
• rate_drop [default=0.0, range: [0.0, 1.0]]
– dropout rate.
• skip_drop [default=0.0, range: [0.0, 1.0]]
  – probability of skipping the dropout procedure in a boosting iteration.
    • If a dropout is skipped, new trees are added in the same manner as gbtree.
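A parameter-dictionary sketch for training the DART booster through the native API (values are illustrative only; dtrain is assumed to be a DMatrix):

import xgboost as xgb

dart_params = {
    "booster": "dart",
    "objective": "binary:logistic",
    "eta": 0.1,
    "max_depth": 6,
    "sample_type": "uniform",       # how dropped trees are selected
    "normalize_type": "tree",       # "tree" or "forest" weighting, as above
    "rate_drop": 0.1,               # dropout rate
    "skip_drop": 0.5,               # probability of skipping dropout in a round
}
bst = xgb.train(dart_params, dtrain, num_boost_round=200)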