XGBoost
eXtreme Gradient Boosting
Tong He
Overview
· Introduction
· Basic Walkthrough
· Real World Application
· Model Specification
· Parameter Introduction
· Advanced Features
· Kaggle Winning Solution
/
Introduction
/
Introduction
Nowadays we have plenty of machine learning models. The most well-known ones are
· Linear/Logistic Regression
· k-Nearest Neighbours
· Support Vector Machines
· Tree-based Models
  - Decision Tree
  - Random Forest
  - Gradient Boosting Machine
· Neural Networks
/
Introduction
XGBoost is short for eXtreme Gradient Boosting. It is
· An open-sourced tool
  - Computation in C++
  - R/Python/Julia interfaces provided
· A variant of the gradient boosting machine
  - Tree-based model
· The winning model for several Kaggle competitions
/
Introduction
XGBoost is currently hosted on GitHub.
· The primary author of the model and the C++ implementation is Tianqi Chen.
· The author of the R package is Tong He.
/
Introduction
XGBoost is widely used in Kaggle competitions. The reasons to choose XGBoost include
· Easy to use
  - Easy to install.
  - Highly developed R/Python interface for users.
· Efficiency
  - Automatic parallel computation on a single machine.
  - Can be run on a cluster.
· Accuracy
  - Good results for most data sets.
· Feasibility
  - Customized objective and evaluation
  - Tunable parameters
/
Basic Walkthrough
We introduce the R package for XGBoost. To install, please run
devtools::install_github('dmlc/xgboost', subdir = 'R-package')
This command downloads the package from GitHub and compiles it automatically on your
machine. Therefore RTools is required on Windows.
/
Basic Walkthrough
XGBoost provides a data set to demonstrate its usage.
This data set contains information on several kinds of mushrooms. The features are binary,
indicating whether the mushroom has a given characteristic. The target variable is whether they
are poisonous.
require(xgboost)
## Loading required package: xgboost
data(agaricus.train, package = 'xgboost')
data(agaricus.test, package = 'xgboost')
train = agaricus.train
test = agaricus.test
/
Basic Walkthrough
Let's investigate the data first.
We can see that the data is a dgCMatrix class object. This is a sparse matrix class from the
package Matrix. A sparse matrix is more memory-efficient for certain kinds of data.
str(train$data)
## Formal class 'dgCMatrix' [package "Matrix"] with 6 slots
##   ..@ i        : int [1:143286] 2 6 8 11 18 20 21 24 28 32 ...
##   ..@ p        : int [1:127] 0 369 372 3306 5845 6489 6513 8380 8384 10991 ...
##   ..@ Dim      : int [1:2] 6513 126
##   ..@ Dimnames : List of 2
##   .. ..$ : NULL
##   .. ..$ : chr [1:126] "cap-shape=bell" "cap-shape=conical" "cap-shape=convex" "cap-shape=flat
##   ..@ x        : num [1:143286] 1 1 1 1 1 1 1 1 1 1 ...
##   ..@ factors  : list()
/
Basic Walkthrough
To use XGBoost to classify poisonous mushrooms, the minimum information we need to
provide is:
1. Input features
   · XGBoost allows a dense or sparse matrix as the input.
2. Target variable
   · A numeric vector. Use integers starting from 0 for classification, or real values for
     regression.
3. Objective
   · For regression use 'reg:linear'
   · For binary classification use 'binary:logistic'
4. Number of iterations
   · The number of trees added to the model
/
Basic Walkthrough
To run XGBoost, we can use the following command:
The output is the classification error on the training data set.
bst=xgboost(data=train$data,label=train$label,
nround=2,objective="binary:logistic")
##[0] train-error:0.000614
##[1] train-error:0.001228
/
Basic Walkthrough
Sometimes we might want to measure the classification performance by the 'Area Under the Curve' (AUC):
bst=xgboost(data=train$data,label=train$label,nround=2,
objective="binary:logistic",eval_metric="auc")
##[0] train-auc:0.999238
##[1] train-auc:0.999238
/
Basic Walkthrough
To predict, you can simply write
pred=predict(bst,test$data)
head(pred)
## [1] 0.2582498 0.7433221 0.2582498 0.2582498 0.2576509 0.2750908
/
Basic Walkthrough
Cross validation is an important method to measure the model's predictive power, as well as
the degree of overfitting. XGBoost provides a convenient function to do cross validation in a line
of code.
Notice that the only difference in arguments between xgb.cv and xgboost is the additional nfold
parameter. To perform cross validation on a certain set of parameters, we just need to copy
them to the xgb.cv function and add the number of folds.
cv.res=xgb.cv(data=train$data,nfold=5,label=train$label,nround=2,
objective="binary:logistic",eval_metric="auc")
##[0] train-auc:0.998780+0.000590test-auc:0.998547+0.000854
##[1] train-auc:0.999114+0.000728test-auc:0.998736+0.001072
/
Basic Walkthrough
xgb.cv returns a data.table object containing the cross validation results. This is helpful for
choosing the correct number of iterations.
cv.res
##    train.auc.mean train.auc.std test.auc.mean test.auc.std
## 1:       0.998780      0.000590      0.998547     0.000854
## 2:       0.999114      0.000728      0.998736     0.001072
/
Real World Experiment
/
Higgs Boson Competition
The debut of XGBoost was in the Higgs Boson competition.
Tianqi introduced the tool along with a benchmark code which achieved top 10% at the
beginning of the competition.
By the end of the competition, it was already the most widely used tool in that competition.
/
Higgs Boson Competition
XGBoost offers the script on GitHub.
To run the script, prepare a data directory and download the competition data into this
directory.
/
Higgs Boson Competition
Firstly we prepare the environment
require(xgboost)
require(methods)
testsize=550000
/
Higgs Boson Competition
Then we can read in the data
dtrain=read.csv("data/training.csv",header=TRUE)
dtrain[33]=dtrain[33]=="s"
label=as.numeric(dtrain[[33]])
data=as.matrix(dtrain[2:31])
weight=as.numeric(dtrain[[32]])*testsize/length(label)
/
Higgs Boson Competition
The data contains missing values and they are marked as -999. We can construct an
xgb.DMatrix object containing the weight and missing information.
xgmat=xgb.DMatrix(data,label=label,weight=weight,missing=-999.0)
/
Higgs Boson Competition
The next step is to set the basic parameters
param=list("objective"="binary:logitraw",
"scale_pos_weight"=sumwneg/sumwpos,
"bst:eta"=0.1,
"bst:max_depth"=6,
"eval_metric"="auc",
"eval_metric"="ams@0.15",
"silent"=1,
"nthread"=16)
/
Higgs Boson Competition
We then start the training step
bst=xgboost(params=param,data=xgmat,nround=120)
/
Higgs Boson Competition
Then we read in the test data
dtest=read.csv("data/test.csv",header=TRUE)
data=as.matrix(dtest[2:31])
xgmat=xgb.DMatrix(data,missing=-999.0)
/
Higgs Boson Competition
Now we can make predictions on the test data set.
ypred=predict(bst,xgmat)
/
Higgs Boson Competition
Finally we output the prediction according to the required format.
Please submit the result to see your performance :)
rorder=rank(ypred,ties.method="first")
threshold=0.15
ntop=length(rorder)-as.integer(threshold*length(rorder))
plabel=ifelse(rorder>ntop,"s","b")
outdata=list("EventId"=idx,
"RankOrder"=rorder,
"Class"=plabel)
write.csv(outdata,file="submission.csv",quote=FALSE,row.names=FALSE)
/
Higgs Boson Competition
Besides the good performance, efficiency is also a highlight of XGBoost.
The following plot shows the running time result on the Higgs boson data set.
/
Higgs Boson Competition
After some feature engineering and parameter tuning, one can achieve around 25th place with a
single model on the leaderboard. This is an article written by a former physicist introducing his
solution with a single XGBoost model.
In our post-competition attempts, we achieved 11th on the leaderboard with a single XGBoost
model.
/
Model Specification
/
Training Objective
To understand the other parameters, one needs a basic understanding of the model behind.
Suppose we have $K$ trees, the model is
$$\sum_{k=1}^{K} f_k$$
where each $f_k$ is the prediction from a decision tree. The model is a collection of decision
trees.
/
Training Objective
Having all the decision trees, we make a prediction by
$$\hat{y}_i = \sum_{k=1}^{K} f_k(x_i)$$
where $x_i$ is the feature vector for the $i$-th data point.
Similarly, the prediction at the $t$-th step can be defined as
$$\hat{y}_i^{(t)} = \sum_{k=1}^{t} f_k(x_i)$$
/
Training Objective
To train the model, we need to optimize a loss function.
Typically, we use
· Root Mean Squared Error for regression
  - $L = \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2$
· LogLoss for binary classification
  - $L = -\frac{1}{N} \sum_{i=1}^{N} \left( y_i \log(p_i) + (1 - y_i) \log(1 - p_i) \right)$
· mlogloss for multi-classification
  - $L = -\frac{1}{N} \sum_{i=1}^{N} \sum_{j=1}^{M} y_{i,j} \log(p_{i,j})$
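As a small illustration (not part of the original slides), here is a minimal R sketch of the binary
LogLoss above, assuming y is a 0/1 label vector and p the predicted probabilities:
logloss = function(y, p, eps = 1e-15) {
  p = pmin(pmax(p, eps), 1 - eps)   # clip probabilities to avoid log(0)
  -mean(y * log(p) + (1 - y) * log(1 - p))
}
logloss(c(1, 0, 1), c(0.9, 0.2, 0.7))   # example with made-up values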
/
Training Objective
Regularization is another important part of the model. A good regularization term controls the
complexity of the model, which prevents overfitting.
Define
$$\Omega = \gamma T + \frac{1}{2} \lambda \sum_{j=1}^{T} w_j^2$$
where $T$ is the number of leaves, and $w_j$ is the score on the $j$-th leaf.
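A minimal sketch (not from the slides) of this regularization term for a vector of leaf scores w,
with assumed values for gamma and lambda:
omega = function(w, gamma = 1, lambda = 1) {
  # length(w) is the number of leaves T, w the vector of leaf scores
  gamma * length(w) + 0.5 * lambda * sum(w^2)
}
omega(c(0.5, -0.3, 0.1))   # a tree with T = 3 leaves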
/
Training Objective
Putting the loss function and regularization together, we have the objective of the model:
$$Obj = L + \Omega$$
where the loss function controls the predictive power, and the regularization controls the
simplicity.
/
Training Objective
In XGBoost, we use gradient descent to optimize the objective.
Given an objective $Obj(y, \hat{y})$ to optimize, gradient descent is an iterative technique which
calculates
$$\partial_{\hat{y}} Obj(y, \hat{y})$$
at each iteration. Then we improve $\hat{y}$ along the direction of the gradient to minimize the
objective.
/
Training Objective
Recall the definition of the objective $Obj = L + \Omega$. For an iterative algorithm we can re-define the
objective function as
$$Obj^{(t)} = \sum_{i=1}^{N} L(y_i, \hat{y}_i^{(t)}) + \sum_{i=1}^{t} \Omega(f_i) = \sum_{i=1}^{N} L(y_i, \hat{y}_i^{(t-1)} + f_t(x_i)) + \sum_{i=1}^{t} \Omega(f_i)$$
To optimize it by gradient descent, we need to calculate the gradient $\partial_{\hat{y}_i^{(t)}} Obj^{(t)}$. The performance
can also be improved by considering the second order gradient $\partial^2_{\hat{y}_i^{(t)}} Obj^{(t)}$ as well.
/
Training Objective
Since we don't have a derivative for every objective function, we calculate the second order Taylor
approximation of it:
$$Obj^{(t)} \simeq \sum_{i=1}^{N} \left[ L(y_i, \hat{y}^{(t-1)}) + g_i f_t(x_i) + \frac{1}{2} h_i f_t^2(x_i) \right] + \sum_{i=1}^{t} \Omega(f_i)$$
where
· $g_i = \partial_{\hat{y}^{(t-1)}} l(y_i, \hat{y}^{(t-1)})$
· $h_i = \partial^2_{\hat{y}^{(t-1)}} l(y_i, \hat{y}^{(t-1)})$
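For example, assuming the squared error loss $l = (y - \hat{y})^2$ (an assumption for illustration, not
from the slides), the two gradients are easy to write down in R:
grad_hess_squared = function(y, yhat) {
  grad = 2 * (yhat - y)        # first order gradient of (y - yhat)^2 w.r.t. yhat
  hess = rep(2, length(y))     # second order gradient is constant
  list(grad = grad, hess = hess)
}
grad_hess_squared(y = c(1, 0, 3), yhat = c(0.8, 0.4, 2.5))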
/
Training Objective
Removing the constant terms, we get
$$Obj^{(t)} = \sum_{i=1}^{n} \left[ g_i f_t(x_i) + \frac{1}{2} h_i f_t^2(x_i) \right] + \Omega(f_t)$$
This is the objective at the $t$-th step. Our goal is to find an $f_t$ to optimize it.
/
Tree Building Algorithm
The tree structure in XGBoost leads to the core problem:
how can we find a tree that improves the prediction along the gradient?
/
Tree Building Algorithm
Every decision tree looks like this
Each data point flows to one of the leaves following the direction on each node.
/
Tree Building Algorithm
The core concepts are:
· Internal Nodes
  - Each internal node splits the flow of data points by one of the features.
  - The condition on the edge specifies what data can flow through.
· Leaves
  - Data points that reach a leaf are assigned a weight.
  - The weight is the prediction.
/
Tree Building Algorithm
Two key questions for building a decision tree are
1. How to find a good structure?
2. How to assign prediction score?
We want to solve these two problems with the idea of gradient descent.
/
Tree Building Algorithm
Let us assume that we already have the solution to question 1.
We can mathematically define a tree as
$$f_t(x) = w_{q(x)}$$
where $q(x)$ is a "directing" function which assigns every data point to a leaf.
This definition describes the prediction process on a tree as:
· Assign the data point $x$ to a leaf by $q$
· Assign the corresponding score $w_{q(x)}$ on the $q(x)$-th leaf to the data point.
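A toy illustration of this representation (hypothetical, not from the slides): a stump with two
leaves, a directing function q() and a score vector w, so that the prediction is w[q(x)]:
w = c(0.8, -0.5)                                   # scores on leaf 1 and leaf 2
q = function(x) if (x["feature1"] < 3) 1 else 2    # which leaf x falls into
predict_tree = function(x) w[q(x)]
predict_tree(c(feature1 = 2))                      # lands on leaf 1, prediction 0.8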
/
Tree Building Algorithm
Define the index set
$$I_j = \{ i \mid q(x_i) = j \}$$
This set contains the indices of data points that are assigned to the $j$-th leaf.
/
Tree Building Algorithm
Then we rewrite the objective as
$$Obj^{(t)} = \sum_{i=1}^{n} \left[ g_i f_t(x_i) + \frac{1}{2} h_i f_t^2(x_i) \right] + \gamma T + \frac{1}{2} \lambda \sum_{j=1}^{T} w_j^2
            = \sum_{j=1}^{T} \left[ \left( \sum_{i \in I_j} g_i \right) w_j + \frac{1}{2} \left( \sum_{i \in I_j} h_i + \lambda \right) w_j^2 \right] + \gamma T$$
Since all the data points on the same leaf share the same prediction, this form sums the
prediction by leaves.
/
Tree Building Algorithm
It is a quadratic problem of $w_j$, so it is easy to find the best $w_j$ to optimize $Obj$:
$$w_j^* = -\frac{\sum_{i \in I_j} g_i}{\sum_{i \in I_j} h_i + \lambda}$$
The corresponding value of $Obj$ is
$$Obj^{(t)} = -\frac{1}{2} \sum_{j=1}^{T} \frac{\left( \sum_{i \in I_j} g_i \right)^2}{\sum_{i \in I_j} h_i + \lambda} + \gamma T$$
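A minimal sketch of these two formulas (an illustration, not the actual XGBoost implementation),
given the g and h values of the points on one leaf and an assumed lambda:
leaf_weight = function(g, h, lambda = 1) -sum(g) / (sum(h) + lambda)
leaf_obj    = function(g, h, lambda = 1) -0.5 * sum(g)^2 / (sum(h) + lambda)
g = c(0.3, -0.2, 0.5)   # gradients of the points on this leaf
h = c(1, 1, 1)          # hessians of the points on this leaf
leaf_weight(g, h)
leaf_obj(g, h)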
/
Tree Building Algorithm
The leaf score
$$w_j = -\frac{\sum_{i \in I_j} g_i}{\sum_{i \in I_j} h_i + \lambda}$$
relates to
· the first and second order gradients of the loss function, $g$ and $h$
· the regularization parameter $\lambda$
/
Tree Building Algorithm
Now we come back to the first question: How to find a good structure?
We can further split it into two sub-questions:
1. How to choose the feature to split?
2. When to stop the split?
/
Tree Building Algorithm
In each split, we want to greedily find the best splitting point that can optimize the objective.
For each feature:
1. Sort the values.
2. Scan for the best splitting point.
3. Choose the best feature.
/
Tree Building Algorithm
Now we give a definition of "the best split" by the objective.
Every time we do a split, we are changing a leaf into an internal node.
/
Tree Building Algorithm
Let
· $I$ be the set of indices of data points assigned to this node
· $I_L$ and $I_R$ be the sets of indices of data points assigned to the two new leaves.
Recall that the best value of the objective on the $j$-th leaf is
$$Obj^{(t)} = -\frac{1}{2} \frac{\left( \sum_{i \in I_j} g_i \right)^2}{\sum_{i \in I_j} h_i + \lambda} + \gamma$$
/
Tree Building Algorithm
The gain of the split is
$$gain = \frac{1}{2} \left[ \frac{\left( \sum_{i \in I_L} g_i \right)^2}{\sum_{i \in I_L} h_i + \lambda} + \frac{\left( \sum_{i \in I_R} g_i \right)^2}{\sum_{i \in I_R} h_i + \lambda} - \frac{\left( \sum_{i \in I} g_i \right)^2}{\sum_{i \in I} h_i + \lambda} \right] - \gamma$$
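To make the split search concrete, here is a minimal sketch (an assumption for illustration, not
the actual XGBoost code) that scans one sorted feature and returns the split with the largest
gain according to the formula above:
best_split = function(x, g, h, lambda = 1, gamma = 0) {
  ord = order(x)                            # sort the feature values
  x = x[ord]; g = g[ord]; h = h[ord]
  score = function(gs, hs) gs^2 / (hs + lambda)
  G = sum(g); H = sum(h)                    # totals on the current node
  best = list(gain = -Inf, threshold = NA)
  GL = 0; HL = 0
  for (i in seq_len(length(x) - 1)) {
    GL = GL + g[i]; HL = HL + h[i]          # accumulate the left side
    gain = 0.5 * (score(GL, HL) + score(G - GL, H - HL) - score(G, H)) - gamma
    if (gain > best$gain) best = list(gain = gain, threshold = (x[i] + x[i + 1]) / 2)
  }
  best
}
best_split(x = c(1, 2, 3, 10), g = c(0.5, 0.4, -0.6, -0.7), h = rep(1, 4))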
/
Tree Building Algorithm
To build a tree, we find the best splitting point recursively until we reach the maximum
depth.
Then we prune out the nodes with negative gain in a bottom-up order.
/
Tree Building Algorithm
XGBoost can handle missing values in the data.
For each node, we guide all the data points with a missing value
· to the left subnode, and calculate the maximum gain
· to the right subnode, and calculate the maximum gain
· and choose the direction with the larger gain.
Finally, every node has a "default direction" for missing values.
/
Tree Building Algorithm
To sum up, the outline of the algorithm is
· Iterate for nround times
  - Grow the tree to the maximum depth
    - Find the best splitting point
    - Assign weights to the two new leaves
  - Prune the tree to delete nodes with negative gain
/
Parameters
/
Parameter Introduction
XGBoost has plenty of parameters. We can group them into
1. General parameters
   · Number of threads
2. Booster parameters
   · Stepsize
   · Regularization
3. Task parameters
   · Objective
   · Evaluation metric
/
Parameter Introduction
After the introduction of the model, we can understand the parameters provided in XGBoost.
To check the parameter list, one can look into
· The documentation of xgb.train.
· The documentation in the repository.
/
Parameter Introduction
General parameters:
· nthread
  - Number of parallel threads.
· booster
  - gbtree: tree-based model.
  - gblinear: linear function.
/
Parameter Introduction
Parameters for the Tree Booster
· eta
  - Step size shrinkage used in each update to prevent overfitting.
  - Range [0, 1], default 0.3
· gamma
  - Minimum loss reduction required to make a split.
  - Range [0, ∞], default 0
/
Parameter Introduction
Parameters for the Tree Booster
· max_depth
  - Maximum depth of a tree.
  - Range [1, ∞], default 6
· min_child_weight
  - Minimum sum of instance weight needed in a child.
  - Range [0, ∞], default 1
· max_delta_step
  - Maximum delta step we allow each tree's weight estimation to be.
  - Range [0, ∞], default 0
/
Parameter Introduction
Parameters for the Tree Booster
· subsample
  - Subsample ratio of the training instances.
  - Range (0, 1], default 1
· colsample_bytree
  - Subsample ratio of columns when constructing each tree.
  - Range (0, 1], default 1
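As an illustration (hypothetical values, not tuned recommendations), these tree booster
parameters can be passed directly to xgboost(), e.g. on the agaricus data from the walkthrough:
bst = xgboost(data = train$data, label = train$label, nround = 10,
              objective = "binary:logistic",
              eta = 0.1, gamma = 1, max_depth = 4, min_child_weight = 1,
              subsample = 0.8, colsample_bytree = 0.8)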
/
Parameter Introduction
Parameters for the Linear Booster
· lambda
  - L2 regularization term on weights
  - default 0
· alpha
  - L1 regularization term on weights
  - default 0
· lambda_bias
  - L2 regularization term on bias
  - default 0
/
Parameter Introduction
· objective
  - "reg:linear": linear regression, the default option.
  - "binary:logistic": logistic regression for binary classification, outputs a probability
  - "multi:softmax": multiclass classification using the softmax objective; num_class needs to
    be specified
  - User specified objective
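For example, a multiclass objective also needs num_class. A minimal sketch, assuming mdata and
mlabel are a hypothetical feature matrix and an integer vector of class labels starting from 0:
bst = xgboost(data = mdata, label = mlabel, nround = 10,
              objective = "multi:softmax", num_class = 3)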
/
Parameter Introduction
· eval_metric
  - "rmse"
  - "logloss"
  - "error"
  - "auc"
  - "merror"
  - "mlogloss"
  - User specified evaluation metric
/
Guide on Parameter Tuning
It is nearly impossible to give a set of universal optimal parameters, or a global algorithm
achieving it.
The key points of parameter tuning are
· Control overfitting
· Deal with imbalanced data
· Trust the cross validation
/
Guide on Parameter Tuning
The "Bias-Variance Tradeoff", or the "Accuracy-Simplicity Tradeoff" is the main idea for
controlling overfitting.
For the booster specific parameters, we can group them as
· Controlling the model complexity
  - max_depth, min_child_weight and gamma
· Robust to noise
  - subsample, colsample_bytree
/
Guide on Parameter Tuning
Sometimes the data is imbalanced among classes.
· If you only care about the ranking order
  - Balance the positive and negative weights by scale_pos_weight (see the sketch below)
  - Use "auc" as the evaluation metric
· If you care about predicting the right probability
  - You cannot re-balance the dataset
  - Setting the parameter max_delta_step to a finite number (say 1) will help convergence
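A minimal sketch of the first case (an illustration, using the agaricus data from the walkthrough):
compute scale_pos_weight as the ratio of negative to positive cases and evaluate with AUC:
spw = sum(train$label == 0) / sum(train$label == 1)   # negative / positive cases
bst = xgboost(data = train$data, label = train$label, nround = 10,
              objective = "binary:logistic", eval_metric = "auc",
              scale_pos_weight = spw)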
/
Guide on Parameter Tuning
To select ideal parameters, use the result from xgb.cv.
· Trust the score on the test set
· Use early.stop.round to detect when the test score keeps getting worse
· If overfitting is observed, reduce the stepsize eta and increase nround at the same time
/
Advanced Features
/
Advanced Features
There are plenty of highlights in XGBoost:
· Customized objective and evaluation metric
· Prediction from cross validation
· Continue training on an existing model
· Calculate and plot the variable importance
/
Customization
According to the algorithm, we can define our own loss function, as long as we can calculate the
first and second order gradients of the loss function.
Define
· $grad = \partial_{\hat{y}^{t-1}} l$
· $hess = \partial^2_{\hat{y}^{t-1}} l$
We can optimize the loss function if we can calculate these two values.
/
Customization
We can rewrite the logloss for the $i$-th data point as
$$L = -\left( y_i \log(p_i) + (1 - y_i) \log(1 - p_i) \right)$$
The $p_i$ is calculated by applying the logistic transformation to our prediction $\hat{y}_i$.
Then the logloss is
$$L = -\left( y_i \log \frac{1}{1 + e^{-\hat{y}_i}} + (1 - y_i) \log \frac{e^{-\hat{y}_i}}{1 + e^{-\hat{y}_i}} \right)$$
/
Customization
We can see that
· $grad = \frac{1}{1 + e^{-\hat{y}_i}} - y_i = p_i - y_i$
· $hess = \frac{e^{-\hat{y}_i}}{(1 + e^{-\hat{y}_i})^2} = p_i (1 - p_i)$
Next we translate them into code.
/
Customization
The complete code:
logregobj = function(preds, dtrain) {
  # Extract the true label from the second argument
  labels = getinfo(dtrain, "label")
  # Apply the logistic transformation to the output
  preds = 1 / (1 + exp(-preds))
  # Calculate the 1st gradient
  grad = preds - labels
  # Calculate the 2nd gradient
  hess = preds * (1 - preds)
  # Return the result
  return(list(grad = grad, hess = hess))
}
/
Customization
We can also customize the evaluation metric.
evalerror = function(preds, dtrain) {
  # Extract the true label from the second argument
  labels = getinfo(dtrain, "label")
  # Calculate the error
  err = as.numeric(sum(labels != (preds > 0))) / length(labels)
  # Return the name of this metric and the value
  return(list(metric = "error", value = err))
}
/
Customization
To utilize the customized objective and evaluation, we simply pass them to the arguments:
param=list(max.depth=2,eta=1,nthread=2,silent=1,
objective=logregobj,eval_metric=evalerror)
bst=xgboost(params=param,data=train$data,label=train$label,nround=2)
##[0] train-error:0.0465223399355136
##[1] train-error:0.0222631659757408
/
Prediction in Cross Validation
"Stacking" is an ensemble learning technique which takes the prediction from several models. It
is widely used in many scenarios.
One of the main concern is avoid overfitting. The common way is use the prediction value from
cross validation.
XGBoost provides a convenient argument to calculate the prediction during the cross
validation.
/
Prediction in Cross Validation
res=xgb.cv(params=param,data=train$data,label=train$label,nround=2,
nfold=5,prediction=TRUE)
##[0] train-error:0.046522+0.001041 test-error:0.046523+0.004164
##[1] train-error:0.022263+0.001221 test-error:0.022263+0.004885
str(res)
## List of 2
##  $ dt  : Classes 'data.table' and 'data.frame':  2 obs. of 4 variables:
##   ..$ train.error.mean: num [1:2] 0.0465 0.0223
##   ..$ train.error.std : num [1:2] 0.00104 0.00122
##   ..$ test.error.mean : num [1:2] 0.0465 0.0223
##   ..$ test.error.std  : num [1:2] 0.00416 0.00488
##   ..- attr(*, ".internal.selfref")=<externalptr>
##  $ pred: num [1:6513] 2.58 -1.04 -1.12 2.57 -3.04 ...
/
xgb.DMatrix
XGBoost has its own class of input data, xgb.DMatrix. One can convert the usual data set into
it by
dtrain = xgb.DMatrix(data = train$data, label = train$label)
It is the data structure used by the XGBoost algorithm. XGBoost preprocesses the input data and
label into an xgb.DMatrix object before feeding it to the training algorithm.
If one needs to repeat the training process on the same big data set, it is good to keep the
xgb.DMatrix object to save preprocessing time.
/
xgb.DMatrix
An xgb.DMatrix object contains
1. Preprocessed training data
2. Several pieces of extra information, such as
   · Missing values
   · Data weights
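A small sketch (an assumption for illustration) of attaching and reading back such information
with setinfo() and getinfo():
dtrain = xgb.DMatrix(data = train$data, label = train$label, missing = NA)
setinfo(dtrain, "weight", rep(1, length(train$label)))   # attach per-row weights
head(getinfo(dtrain, "weight"))                          # read them back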
/
Continue Training
Training the model for 5000 rounds is sometimes useful, but we are also taking the risk of
overfitting.
A better strategy is to train the model with fewer rounds and repeat that many times. This
enables us to observe the outcome after each step.
/
Continue Training
bst=xgboost(params=param,data=dtrain,nround=1)
##[0] train-error:0.0465223399355136
ptrain=predict(bst,dtrain,outputmargin=TRUE)
setinfo(dtrain,"base_margin",ptrain)
##[1]TRUE
bst=xgboost(params=param,data=dtrain,nround=1)
##[0] train-error:0.0222631659757408
/
Continue Training
# Train with only one round
bst=xgboost(params=param,data=dtrain,nround=1)
##[0] train-error:0.0222631659757408
ptrain=predict(bst,dtrain,outputmargin=TRUE)
setinfo(dtrain,"base_margin",ptrain)
##[1]TRUE
bst=xgboost(params=param,data=dtrain,nround=1)
##[0] train-error:0.00706279748195916
/
Continue Training
# Train with only one round
bst=xgboost(params=param,data=dtrain,nround=1)
##[0] train-error:0.00706279748195916
# margin means the baseline of the prediction
ptrain=predict(bst,dtrain,outputmargin=TRUE)
setinfo(dtrain,"base_margin",ptrain)
##[1]TRUE
bst=xgboost(params=param,data=dtrain,nround=1)
##[0] train-error:0.0152003684937817
/
Continue Training
# Train with only one round
bst=xgboost(params=param,data=dtrain,nround=1)
##[0] train-error:0.0152003684937817
# margin means the baseline of the prediction
ptrain=predict(bst,dtrain,outputmargin=TRUE)
# Set the margin information to the xgb.DMatrix object
setinfo(dtrain,"base_margin",ptrain)
##[1]TRUE
bst=xgboost(params=param,data=dtrain,nround=1)
##[0] train-error:0.00706279748195916
/
Continue Training
# Train with only one round
bst=xgboost(params=param,data=dtrain,nround=1)
##[0] train-error:0.00706279748195916
# margin means the baseline of the prediction
ptrain=predict(bst,dtrain,outputmargin=TRUE)
# Set the margin information to the xgb.DMatrix object
setinfo(dtrain,"base_margin",ptrain)
##[1]TRUE
# Train based on the previous result
bst=xgboost(params=param,data=dtrain,nround=1)
##[0] train-error:0.00122831260555811
/
Importance and Tree plotting
The result of XGBoost contains many trees. We can count the number of appearances of each
variable in all the trees, and use this number as the importance score.
bst = xgboost(data = train$data, label = train$label, max.depth = 2, verbose = FALSE,
              eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
xgb.importance(train$data@Dimnames[[2]], model = bst)
##    Feature       Gain     Cover Frequence
## 1:      28 0.67615484 0.4978746       0.4
## 2:      55 0.17135352 0.1920543       0.2
## 3:      59 0.12317241 0.1638750       0.2
## 4:     108 0.02931922 0.1461960       0.2
/
Importance and Tree plotting
We can also plot the trees in the model by xgb.plot.tree.
xgb.plot.tree(agaricus.train$data@Dimnames[[2]],model=bst)
/
Tree plotting
/
Early Stopping
When doing cross validation, it is usual to encounter overfitting at an early stage of iteration.
Sometimes the prediction gets worse consistently from round 300 while the total number of
iterations is 1000. To stop the cross validation process, one can use the early.stop.round
argument in xgb.cv.
bst=xgb.cv(params=param,data=train$data,label=train$label,
nround=20,nfold=5,
maximize=FALSE,early.stop.round=3)
/
Early Stopping
##[0] train-error:0.046522+0.001280 test-error:0.046523+0.005119
##[1] train-error:0.022263+0.001463 test-error:0.022264+0.005852
##[2] train-error:0.007063+0.000905 test-error:0.007064+0.003619
##[3] train-error:0.015200+0.001163 test-error:0.015201+0.004653
##[4] train-error:0.004414+0.002811 test-error:0.005989+0.004839
##[5] train-error:0.001689+0.001114 test-error:0.002304+0.002304
##[6] train-error:0.001228+0.000219 test-error:0.001228+0.000876
##[7] train-error:0.001228+0.000219 test-error:0.001228+0.000876
##[8] train-error:0.000921+0.000533 test-error:0.001228+0.000876
##[9] train-error:0.000653+0.000601 test-error:0.001075+0.001030
##[10]train-error:0.000422+0.000582 test-error:0.000768+0.001086
##[11]train-error:0.000000+0.000000 test-error:0.000000+0.000000
##[12]train-error:0.000000+0.000000 test-error:0.000000+0.000000
##[13]train-error:0.000000+0.000000 test-error:0.000000+0.000000
##[14]train-error:0.000000+0.000000 test-error:0.000000+0.000000
## Stopping. Best iteration: 12
/
Kaggle Winning Solution
/
Kaggle Winning Solution
To get a higher rank, one needs to push the limits of
1. Feature Engineering
2. Parameter Tuning
3. Model Ensemble
The winning solution in the recent Otto Competition is an excellent example.
/
Kaggle Winning Solution
Then they used a 3-layer ensemble learning model, including
· 33 models on top of the original data
· XGBoost, neural network and adaboost on the 33 predictions from these models and 8 engineered
  features
· A weighted average of the 3 predictions from the second step
/
Kaggle Winning Solution
The data for this competition is special: the meanings of the features are hidden.
For feature engineering, they generated 8 new features:
· Distances to nearest neighbours of each class
· Sum of distances of the 2 nearest neighbours of each class
· Sum of distances of the 4 nearest neighbours of each class
· Distances to nearest neighbours of each class in TFIDF space
· Distances to nearest neighbours of each class in T-SNE space (3 dimensions)
· Clustering features of the original dataset
· Number of non-zero elements in each row
· X (that feature was used only in NN 2nd level training)
/
Kaggle Winning Solution
This means a lot of work. However, this also implies they needed to try a lot of other models,
although some of them turned out not to be helpful in this competition. Their attempts include:
· A lot of training algorithms in the first level, such as
  - Vowpal Wabbit (many configurations)
  - R glm, glmnet, scikit SVC, SVR, Ridge, SGD, etc...
· Some preprocessing like PCA, ICA and FFT
· Feature Selection
· Semi-supervised learning
/
Influencers in Social Networks
Let's learn to use a single XGBoost model to achieve a high rank in an old competition!
The competition we choose is the Influencers in Social Networks competition.
It was a hackathon in 2013, so the data is small enough that we can train the
model in seconds.
/
Influencers in Social Networks
First let's download the data, and load them into R
train=read.csv('train.csv',header=TRUE)
test=read.csv('test.csv',header=TRUE)
y=train[,1]
train=as.matrix(train[,-1])
test=as.matrix(test)
/
Influencers in Social Networks
Observe the data:
colnames(train)
##  [1] "A_follower_count"    "A_following_count"   "A_listed_count"
##  [4] "A_mentions_received" "A_retweets_received" "A_mentions_sent"
##  [7] "A_retweets_sent"     "A_posts"             "A_network_feature_1"
## [10] "A_network_feature_2" "A_network_feature_3" "B_follower_count"
## [13] "B_following_count"   "B_listed_count"      "B_mentions_received"
## [16] "B_retweets_received" "B_mentions_sent"     "B_retweets_sent"
## [19] "B_posts"             "B_network_feature_1" "B_network_feature_2"
## [22] "B_network_feature_3"
/
Influencers in Social Networks
Observe the data:
train[1,]
##    A_follower_count   A_following_count      A_listed_count
##        2.280000e+02        3.020000e+02        3.000000e+00
## A_mentions_received A_retweets_received     A_mentions_sent
##        5.839794e-01        1.005034e-01        1.005034e-01
##     A_retweets_sent             A_posts A_network_feature_1
##        1.005034e-01        3.621501e-01        2.000000e+00
## A_network_feature_2 A_network_feature_3    B_follower_count
##        1.665000e+02        1.135500e+04        3.446300e+04
##   B_following_count      B_listed_count B_mentions_received
##        2.980800e+04        1.689000e+03        1.543050e+01
## B_retweets_received     B_mentions_sent     B_retweets_sent
##        3.984029e+00        8.204331e+00        3.324230e-01
##             B_posts B_network_feature_1 B_network_feature_2
##        6.988815e+00        6.600000e+01        7.553030e+01
## B_network_feature_3
##        1.916894e+03
/
Influencers in Social Networks
The data contains information from two users in a social network service. Our mission is to
determine which one is more influential than the other.
This type of data gives us some room for feature engineering.
/
Influencers in Social Networks
The first trick is to increase the information in the data.
Every data point can be expressed as <y, A, B>. It actually indicates <1-y, B, A> as well. We can
simply extract this additional information from the training set.
new.train=cbind(train[,12:22],train[,1:11])
train=rbind(train,new.train)
y=c(y,1-y)
/
Influencers in Social Networks
The following feature engineering steps are done on both training and test set. Therefore we
combine them together.
x=rbind(train,test)
/
Influencers in Social Networks
The next step could be calculating the ratios between features of A and B separately:
· followers/following
· mentions received/sent
· retweets received/sent
· followers/posts
· retweets received/posts
· mentions received/posts
/
Influencers in Social Networks
Considering there might be zeros, we need to smooth the ratios with a constant.
Then we can calculate the ratios with this helper function:
calcRatio=function(dat,i,j,lambda=1)(dat[,i]+lambda)/(dat[,j]+lambda)
/
Influencers in Social Networks
A.follow.ratio=calcRatio(x,1,2)
A.mention.ratio=calcRatio(x,4,6)
A.retweet.ratio=calcRatio(x,5,7)
A.follow.post=calcRatio(x,1,8)
A.mention.post=calcRatio(x,4,8)
A.retweet.post=calcRatio(x,5,8)
B.follow.ratio=calcRatio(x,12,13)
B.mention.ratio=calcRatio(x,15,17)
B.retweet.ratio=calcRatio(x,16,18)
B.follow.post=calcRatio(x,12,19)
B.mention.post=calcRatio(x,15,19)
B.retweet.post=calcRatio(x,16,19)
/
Influencers in Social Networks
Combine the features into the data set.
x=cbind(x[,1:11],
A.follow.ratio,A.mention.ratio,A.retweet.ratio,
A.follow.post,A.mention.post,A.retweet.post,
x[,12:22],
B.follow.ratio,B.mention.ratio,B.retweet.ratio,
B.follow.post,B.mention.post,B.retweet.post)
/
Influencers in Social Networks
Then we can compare the differences between A and B. Because XGBoost is scale invariant,
subtraction and division are essentially the same here.
AB.diff=x[,1:17]-x[,18:34]
x=cbind(x,AB.diff)
train=x[1:nrow(train),]
test=x[-(1:nrow(train)),]
/
Influencers in Social Networks
Now comes the modeling part. We first investigate how far we can go with a single model.
The parameter tuning step is very important here. We can see the performance from
cross validation.
/
Influencers in Social Networks
Here's xgb.cv with the default parameters.
set.seed(1024)
cv.res=xgb.cv(data=train,nfold=3,label=y,nrounds=100,verbose=FALSE,
objective='binary:logistic',eval_metric='auc')
/
Influencers in Social Networks
We can see the trend of AUC on training and test sets.
/
Influencers in Social Networks
It is obvious our model severely overfits. The direct reason is simple: the default value of eta is
0.3, which is too large for this task.
Recalling the parameter tuning guide, we need to decrease eta and increase nrounds based on
the result of cross validation.
/
Influencers in Social Networks
After some trials, we get the following set of parameters:
set.seed(1024)
cv.res=xgb.cv(data=train,nfold=3,label=y,nrounds=3000,
objective='binary:logistic',eval_metric='auc',
eta=0.005,gamma=1,lambda=3,nthread=8,
max_depth=4,min_child_weight=1,verbose=F,
subsample=0.8,colsample_bytree=0.8)
/
Influencers in Social Networks
We can see the trend of AUC on training and test sets.
/
Influencers in Social Networks
Next we extract the best number of iterations.
We calculate the AUC minus the standard deviation, and choose the iteration with the largest
value.
bestRound=which.max(as.matrix(cv.res)[,3]-as.matrix(cv.res)[,4])
bestRound
## [1] 2442
cv.res[bestRound,]
##    train.auc.mean train.auc.std test.auc.mean test.auc.std
## 1:       0.934967       0.00125      0.876629     0.002073
/
Influencers in Social Networks
Then we train the model with the same set of parameters:
set.seed(1024)
bst=xgboost(data=train,label=y,nrounds=3000,
objective='binary:logistic',eval_metric='auc',
eta=0.005,gamma=1,lambda=3,nthread=8,
max_depth=5,min_child_weight=1,
subsample=0.8,colsample_bytree=0.8)
preds=predict(bst,test,ntreelimit=bestRound)
/
Influencers in Social Networks
Finally we submit our solution
This puts us in the top 10 on the leaderboard!
result=data.frame(Id=1:nrow(test),
Choice=preds)
write.csv(result,'submission.csv',quote=FALSE,row.names=FALSE)
/
FAQ
/

More Related Content

PDF
Introduction to XGBoost
PPTX
Introduction to XGboost
PPTX
Introduction of Xgboost
PPTX
XgBoost.pptx
PDF
XGBoost: the algorithm that wins every competition
PDF
Kaggle Winning Solution Xgboost algorithm -- Let us learn from its author
PDF
Python - the basics
PDF
Gradient descent method
Introduction to XGBoost
Introduction to XGboost
Introduction of Xgboost
XgBoost.pptx
XGBoost: the algorithm that wins every competition
Kaggle Winning Solution Xgboost algorithm -- Let us learn from its author
Python - the basics
Gradient descent method

What's hot (20)

PDF
Demystifying Xgboost
PDF
PDF
XGBoost @ Fyber
PPTX
Machine learning basics using trees algorithm (Random forest, Gradient Boosting)
PDF
XGBoost & LightGBM
PDF
Winning data science competitions, presented by Owen Zhang
PDF
Overview of tree algorithms from decision tree to xgboost
PDF
Deep Dive into Hyperparameter Tuning
PDF
Introduction to Neural Networks
PPTX
Birch Algorithm With Solved Example
PPTX
Decision Trees for Classification: A Machine Learning Algorithm
PPTX
Gradient descent method
PDF
Winning Data Science Competitions
PPTX
DBSCAN (1) (4).pptx
PPTX
Boosting Approach to Solving Machine Learning Problems
PPTX
Random forest
PPTX
GBM package in r
PDF
General Tips for participating Kaggle Competitions
PPTX
Presentation on unsupervised learning
PDF
Support Vector Machines for Classification
Demystifying Xgboost
XGBoost @ Fyber
Machine learning basics using trees algorithm (Random forest, Gradient Boosting)
XGBoost & LightGBM
Winning data science competitions, presented by Owen Zhang
Overview of tree algorithms from decision tree to xgboost
Deep Dive into Hyperparameter Tuning
Introduction to Neural Networks
Birch Algorithm With Solved Example
Decision Trees for Classification: A Machine Learning Algorithm
Gradient descent method
Winning Data Science Competitions
DBSCAN (1) (4).pptx
Boosting Approach to Solving Machine Learning Problems
Random forest
GBM package in r
General Tips for participating Kaggle Competitions
Presentation on unsupervised learning
Support Vector Machines for Classification
Ad

Viewers also liked (20)

PDF
Data driven modeling of systemic delay propagation under severe meteorologica...
PDF
Topic Modeling for Learning Analytics Researchers LAK15 Tutorial
DOC
Corey Sykes' Resume
PDF
순환신경망(Recurrent neural networks) 개요
PDF
Wenzhe Xu (Evelyn) Resume for Data Science
PDF
KSQL: Streaming SQL for Kafka
PDF
Spark streaming + kafka 0.10
PDF
Apache® Spark™ MLlib: From Quick Start to Scikit-Learn
PPTX
Inlining Heuristics
PPTX
XGBoost (System Overview)
PPTX
Gbm.more GBM in H2O
PPTX
Automated data analysis with Python
PDF
GBM theory code and parameters
PDF
Gradient boosting in practice: a deep dive into xgboost
PDF
We're so skewed_presentation
PDF
Wikipedia: Tuned Predictions on Big Data
PDF
Data mining with caret package
PDF
Max Kuhn's talk on R machine learning
PDF
Bayesian models in r
PPTX
Streaming Python on Hadoop
Data driven modeling of systemic delay propagation under severe meteorologica...
Topic Modeling for Learning Analytics Researchers LAK15 Tutorial
Corey Sykes' Resume
순환신경망(Recurrent neural networks) 개요
Wenzhe Xu (Evelyn) Resume for Data Science
KSQL: Streaming SQL for Kafka
Spark streaming + kafka 0.10
Apache® Spark™ MLlib: From Quick Start to Scikit-Learn
Inlining Heuristics
XGBoost (System Overview)
Gbm.more GBM in H2O
Automated data analysis with Python
GBM theory code and parameters
Gradient boosting in practice: a deep dive into xgboost
We're so skewed_presentation
Wikipedia: Tuned Predictions on Big Data
Data mining with caret package
Max Kuhn's talk on R machine learning
Bayesian models in r
Streaming Python on Hadoop
Ad

Similar to Xgboost (20)

PDF
Feature Engineering - Getting most out of data for predictive models - TDC 2017
PPTX
Understanding GBM and XGBoost in Scikit-Learn
PDF
AIML4 CNN lab256 1hr (111-1).pdf
PDF
maXbox starter65 machinelearning3
DOCX
Data visualization with R and ggplot2.docx
PDF
Eye deep
ODP
Applying Linear Optimization Using GLPK
PDF
Neural networks with python
PDF
20181212 - PGconfASIA - LT - English
PDF
Backpropagation - Elisa Sayrol - UPC Barcelona 2018
PDF
Linear Regression (Machine Learning)
PDF
Using CNTK's Python Interface for Deep LearningDave DeBarr -
PDF
Feature Engineering - Getting most out of data for predictive models
PDF
Gradient Boosted Regression Trees in Scikit Learn by Gilles Louppe & Peter Pr...
PPSX
Real-time Face Recognition & Detection Systems 1
PPTX
KabirDataPreprocessingPyMMMMMMMMMMMMMMMMMMMMthon.pptx
PDF
Gradient Boosted Regression Trees in scikit-learn
PPTX
ML .pptx
PDF
Higgs Boson Challenge
PDF
Lab 2: Classification and Regression Prediction Models, training and testing ...
Feature Engineering - Getting most out of data for predictive models - TDC 2017
Understanding GBM and XGBoost in Scikit-Learn
AIML4 CNN lab256 1hr (111-1).pdf
maXbox starter65 machinelearning3
Data visualization with R and ggplot2.docx
Eye deep
Applying Linear Optimization Using GLPK
Neural networks with python
20181212 - PGconfASIA - LT - English
Backpropagation - Elisa Sayrol - UPC Barcelona 2018
Linear Regression (Machine Learning)
Using CNTK's Python Interface for Deep LearningDave DeBarr -
Feature Engineering - Getting most out of data for predictive models
Gradient Boosted Regression Trees in Scikit Learn by Gilles Louppe & Peter Pr...
Real-time Face Recognition & Detection Systems 1
KabirDataPreprocessingPyMMMMMMMMMMMMMMMMMMMMthon.pptx
Gradient Boosted Regression Trees in scikit-learn
ML .pptx
Higgs Boson Challenge
Lab 2: Classification and Regression Prediction Models, training and testing ...

More from Vivian S. Zhang (20)

PDF
Why NYC DSA.pdf
PPTX
Career services workshop- Roger Ren
PDF
Nycdsa wordpress guide book
PDF
A Hybrid Recommender with Yelp Challenge Data
PDF
Kaggle Top1% Solution: Predicting Housing Prices in Moscow
PDF
Nyc open-data-2015-andvanced-sklearn-expanded
PDF
Nycdsa ml conference slides march 2015
PDF
THE HACK ON JERSEY CITY CONDO PRICES explore trends in public data
PDF
Using Machine Learning to aid Journalism at the New York Times
PDF
Introducing natural language processing(NLP) with r
PDF
Natural Language Processing(SupStat Inc)
PDF
Hack session for NYTimes Dialect Map Visualization( developed by R Shiny)
PPTX
Data Science Academy Student Demo day--Moyi Dang, Visualizing global public c...
PPTX
Data Science Academy Student Demo day--Divyanka Sharma, Businesses in nyc
PPTX
Data Science Academy Student Demo day--Chang Wang, dogs breeds in nyc
PDF
Data Science Academy Student Demo day--Richard Sheng, kinvolved school attend...
PPTX
Data Science Academy Student Demo day--Peggy sobolewski,analyzing transporati...
PPTX
Data Science Academy Student Demo day--Michael blecher,the importance of clea...
PPTX
Data Science Academy Student Demo day--Shelby Ahern, An Exploration of Non-Mi...
PPTX
R003 laila restaurant sanitation report(NYC Data Science Academy, Data Scienc...
Why NYC DSA.pdf
Career services workshop- Roger Ren
Nycdsa wordpress guide book
A Hybrid Recommender with Yelp Challenge Data
Kaggle Top1% Solution: Predicting Housing Prices in Moscow
Nyc open-data-2015-andvanced-sklearn-expanded
Nycdsa ml conference slides march 2015
THE HACK ON JERSEY CITY CONDO PRICES explore trends in public data
Using Machine Learning to aid Journalism at the New York Times
Introducing natural language processing(NLP) with r
Natural Language Processing(SupStat Inc)
Hack session for NYTimes Dialect Map Visualization( developed by R Shiny)
Data Science Academy Student Demo day--Moyi Dang, Visualizing global public c...
Data Science Academy Student Demo day--Divyanka Sharma, Businesses in nyc
Data Science Academy Student Demo day--Chang Wang, dogs breeds in nyc
Data Science Academy Student Demo day--Richard Sheng, kinvolved school attend...
Data Science Academy Student Demo day--Peggy sobolewski,analyzing transporati...
Data Science Academy Student Demo day--Michael blecher,the importance of clea...
Data Science Academy Student Demo day--Shelby Ahern, An Exploration of Non-Mi...
R003 laila restaurant sanitation report(NYC Data Science Academy, Data Scienc...

Recently uploaded (20)

PPTX
Global journeys: estimating international migration
PPTX
Computer network topology notes for revision
PDF
Launch Your Data Science Career in Kochi – 2025
PPTX
1_Introduction to advance data techniques.pptx
PPTX
DISORDERS OF THE LIVER, GALLBLADDER AND PANCREASE (1).pptx
PPTX
mbdjdhjjodule 5-1 rhfhhfjtjjhafbrhfnfbbfnb
PPTX
CEE 2 REPORT G7.pptxbdbshjdgsgjgsjfiuhsd
PPTX
IBA_Chapter_11_Slides_Final_Accessible.pptx
PDF
Galatica Smart Energy Infrastructure Startup Pitch Deck
PPTX
Moving the Public Sector (Government) to a Digital Adoption
PPT
Chapter 2 METAL FORMINGhhhhhhhjjjjmmmmmmmmm
PPTX
climate analysis of Dhaka ,Banglades.pptx
PDF
.pdf is not working space design for the following data for the following dat...
PPTX
Business Acumen Training GuidePresentation.pptx
PDF
BF and FI - Blockchain, fintech and Financial Innovation Lesson 2.pdf
PPTX
Introduction to Knowledge Engineering Part 1
PPT
Miokarditis (Inflamasi pada Otot Jantung)
PPTX
The THESIS FINAL-DEFENSE-PRESENTATION.pptx
PPTX
IB Computer Science - Internal Assessment.pptx
PPTX
Introduction to Firewall Analytics - Interfirewall and Transfirewall.pptx
Global journeys: estimating international migration
Computer network topology notes for revision
Launch Your Data Science Career in Kochi – 2025
1_Introduction to advance data techniques.pptx
DISORDERS OF THE LIVER, GALLBLADDER AND PANCREASE (1).pptx
mbdjdhjjodule 5-1 rhfhhfjtjjhafbrhfnfbbfnb
CEE 2 REPORT G7.pptxbdbshjdgsgjgsjfiuhsd
IBA_Chapter_11_Slides_Final_Accessible.pptx
Galatica Smart Energy Infrastructure Startup Pitch Deck
Moving the Public Sector (Government) to a Digital Adoption
Chapter 2 METAL FORMINGhhhhhhhjjjjmmmmmmmmm
climate analysis of Dhaka ,Banglades.pptx
.pdf is not working space design for the following data for the following dat...
Business Acumen Training GuidePresentation.pptx
BF and FI - Blockchain, fintech and Financial Innovation Lesson 2.pdf
Introduction to Knowledge Engineering Part 1
Miokarditis (Inflamasi pada Otot Jantung)
The THESIS FINAL-DEFENSE-PRESENTATION.pptx
IB Computer Science - Internal Assessment.pptx
Introduction to Firewall Analytics - Interfirewall and Transfirewall.pptx

Xgboost

  • 2. Overview Introduction Basic Walkthrough Real World Application Model Specification Parameter Introduction Advanced Features Kaggle Winning Solution · · · · · · · /
  • 4. Introduction Nowadays we have plenty of machine learning models. Those most well-knowns are Linear/Logistic Regression k-Nearest Neighbours Support Vector Machines Tree-based Model Neural Networks · · · · Decision Tree Random Forest Gradient Boosting Machine - - - · /
  • 5. Introduction XGBoost is short for eXtreme Gradient Boosting. It is An open-sourced tool A variant of the gradient boosting machine The winning model for several kaggle competitions · Computation in C++ R/python/Julia interface provided - - · Tree-based model- · /
  • 6. Introduction XGBoost is currently host on github. The primary author of the model and the c++ implementation is Tianqi Chen. The author for the R-package is Tong He. · · /
  • 7. Introduction XGBoost is widely used for kaggle competitions. The reason to choose XGBoost includes Easy to use Efficiency Accuracy Feasibility · Easy to install. Highly developed R/python interface for users. - - · Automatic parallel computation on a single machine. Can be run on a cluster. - - · Good result for most data sets.- · Customized objective and evaluation Tunable parameters - - /
  • 8. Basic Walkthrough We introduce the R package for XGBoost. To install, please run This command downloads the package from github and compile it automatically on your machine. Therefore we need RTools installed on Windows. devtools::install_github('dmlc/xgboost',subdir='R-package') /
  • 9. Basic Walkthrough XGBoost provides a data set to demonstrate its usages. This data set includes the information for some kinds of mushrooms. The features are binary, indicate whether the mushroom has this characteristic. The target variable is whether they are poisonous. require(xgboost) ##Loadingrequiredpackage:xgboost data(agaricus.train,package='xgboost') data(agaricus.test,package='xgboost') train=agaricus.train test=agaricus.test /
  • 10. Basic Walkthrough Let's investigate the data first. We can see that the data is a dgCMatrixclass object. This is a sparse matrix class from the package Matrix. Sparse matrix is more memory efficient for some specific data. str(train$data) ##Formalclass'dgCMatrix'[package"Matrix"]with6slots ## ..@i :int[1:143286]26811182021242832... ## ..@p :int[1:127]036937233065845648965138380838410991... ## ..@Dim :int[1:2]6513126 ## ..@Dimnames:Listof2 ## ....$:NULL ## ....$:chr[1:126]"cap-shape=bell""cap-shape=conical""cap-shape=convex""cap-shape=flat ## ..@x :num[1:143286]1111111111... ## ..@factors:list() /
  • 11. Basic Walkthrough To use XGBoost to classify poisonous mushrooms, the minimum information we need to provide is: 1. Input features 2. Target variable 3. Objective 4. Number of iteration XGBoost allows dense and sparse matrix as the input.· A numeric vector. Use integers starting from 0 for classification, or real values for regression · For regression use 'reg:linear' For binary classification use 'binary:logistic' · · The number of trees added to the model· /
  • 12. Basic Walkthrough To run XGBoost, we can use the following command: The output is the classification error on the training data set. bst=xgboost(data=train$data,label=train$label, nround=2,objective="binary:logistic") ##[0] train-error:0.000614 ##[1] train-error:0.001228 /
  • 13. Basic Walkthrough Sometimes we might want to measure the classification by 'Area Under the Curve': bst=xgboost(data=train$data,label=train$label,nround=2, objective="binary:logistic",eval_metric="auc") ##[0] train-auc:0.999238 ##[1] train-auc:0.999238 /
  • 14. Basic Walkthrough To predict, you can simply write pred=predict(bst,test$data) head(pred) ##[1]0.25824980.74332210.25824980.25824980.25765090.2750908 /
  • 15. Basic Walkthrough Cross validation is an important method to measure the model's predictive power, as well as the degree of overfitting. XGBoost provides a convenient function to do cross validation in a line of code. Notice the difference of the arguments between xgb.cvand xgboostis the additional nfold parameter. To perform cross validation on a certain set of parameters, we just need to copy them to the xgb.cvfunction and add the number of folds. cv.res=xgb.cv(data=train$data,nfold=5,label=train$label,nround=2, objective="binary:logistic",eval_metric="auc") ##[0] train-auc:0.998780+0.000590test-auc:0.998547+0.000854 ##[1] train-auc:0.999114+0.000728test-auc:0.998736+0.001072 /
  • 16. Basic Walkthrough xgb.cvreturns a data.tableobject containing the cross validation results. This is helpful for choosing the correct number of iterations. cv.res ## train.auc.meantrain.auc.stdtest.auc.meantest.auc.std ##1: 0.998780 0.000590 0.998547 0.000854 ##2: 0.999114 0.000728 0.998736 0.001072 /
  • 18. Higgs Boson Competition The debut of XGBoost was in the higgs boson competition. Tianqi introduced the tool along with a benchmark code which achieved the top 10% at the beginning of the competition. To the end of the competition, it was already the mostly used tool in that competition. /
  • 19. Higgs Boson Competition XGBoost offers the script on github. To run the script, prepare a datadirectory and download the competition data into this directory. /
  • 20. Higgs Boson Competition Firstly we prepare the environment require(xgboost) require(methods) testsize=550000 /
  • 21. Higgs Boson Competition Then we can read in the data dtrain=read.csv("data/training.csv",header=TRUE) dtrain[33]=dtrain[33]=="s" label=as.numeric(dtrain[[33]]) data=as.matrix(dtrain[2:31]) weight=as.numeric(dtrain[[32]])*testsize/length(label) /
  • 22. Higgs Boson Competition The data contains missing values and they are marked as -999. We can construct an xgb.DMatrixobject containing the information of weightand missing. xgmat=xgb.DMatrix(data,label=label,weight=weight,missing=-999.0) /
  • 23. Higgs Boson Competition Next step is to set the basic parameters param=list("objective"="binary:logitraw", "scale_pos_weight"=sumwneg/sumwpos, "bst:eta"=0.1, "bst:max_depth"=6, "eval_metric"="auc", "eval_metric"="ams@0.15", "silent"=1, "nthread"=16) /
  • 24. Higgs Boson Competition We then start the training step bst=xgboost(params=param,data=xgmat,nround=120) /
  • 25. Higgs Boson Competition Then we read in the test data dtest=read.csv("data/test.csv",header=TRUE) data=as.matrix(dtest[2:31]) xgmat=xgb.DMatrix(data,missing=-999.0) /
  • 26. Higgs Boson Competition We now can make prediction on the test data set. ypred=predict(bst,xgmat) /
  • 27. Higgs Boson Competition Finally we output the prediction according to the required format. Please submit the result to see your performance :) rorder=rank(ypred,ties.method="first") threshold=0.15 ntop=length(rorder)-as.integer(threshold*length(rorder)) plabel=ifelse(rorder>ntop,"s","b") outdata=list("EventId"=idx, "RankOrder"=rorder, "Class"=plabel) write.csv(outdata,file="submission.csv",quote=FALSE,row.names=FALSE) /
  • 28. Higgs Boson Competition Besides the good performace, the efficiency is also a highlight of XGBoost. The following plot shows the running time result on the Higgs boson data set. /
  • 29. Higgs Boson Competition After some feature engineering and parameter tuning, one can achieve around 25th with a single model on the leaderboard. This is an article written by a former-physist introducing his solution with a single XGboost model. On our post-competition attempts, we achieved 11th on the leaderboard with a single XGBoost model. /
  • 31. Training Objective To understand other parameters, one need to have a basic understanding of the model behind. Suppose we have trees, the model is where each is the prediction from a decision tree. The model is a collection of decision trees. K ∑ k=1 K f k f k /
  • 32. Training Objective Having all the decision trees, we make prediction by where is the feature vector for the -th data point. Similarly, the prediction at the -th step can be defined as = ( )y i ˆ ∑ k=1 K f k xi xi i t = ( )yi ˆ (t) ∑ k=1 t f k xi /
  • 33. Training Objective To train the model, we need to optimize a loss function. Typically, we use Rooted Mean Squared Error for regression LogLoss for binary classification mlogloss for multi-classification · - L = ( −1 N ∑N i=1 y i y i ˆ ) 2 · - L = − ( log( ) + (1 − ) log(1 − )) 1 N ∑N i=1 y i p i y i p i · - L = − log( ) 1 N ∑N i=1 ∑M j=1 y i,j p i,j /
  • 34. Training Objective Regularization is another important part of the model. A good regularization term controls the complexity of the model which prevents overfitting. Define where is the number of leaves, and is the score on the -th leaf. Ω = γT + λ 1 2 ∑ j=1 T w 2 j T w 2 j j /
  • 35. Training Objective Put loss function and regularization together, we have the objective of the model: where loss function controls the predictive power, and regularization controls the simplicity. Obj = L + Ω /
  • 36. Training Objective In XGBoost, we use gradient descent to optimize the objective. Given an objective to optimize, gradient descent is an iterative technique which calculate at each iteration. Then we improve along the direction of the gradient to minimize the objective. Obj(y, )yˆ Obj(y, )∂yˆ yˆ yˆ /
  • 37. Training Objective Recall the definition of objective . For a iterative algorithm we can re-define the objective function as To optimize it by gradient descent, we need to calculate the gradient. The performance can also be improved by considering both the first and the second order gradient. Obj = L + Ω Ob = L( , ) + Ω( ) = L( , + ( )) + Ω( )j (t) ∑ i=1 N y i yˆ (t) i ∑ i=1 t f i ∑ i=1 N y i yi ˆ (t−1) f t xi ∑ i=1 t f i Ob∂y i ˆ (t) j (t) Ob∂2 y i ˆ (t) j (t) /
  • 38. Training Objective Since we don't have derivative for every objective function, we calculate the second order taylor approximation of it where Ob ≃ [L( , ) + ( ) + ( )] + Ω( )j (t) ∑ i=1 N y i yˆ(t−1) g i f t xi 1 2 hi f 2 t xi ∑ i=1 t f i · = l( , )g i ∂yˆ(t−1) y i yˆ(t−1) · = l( , )hi ∂2 yˆ (t−1) y i yˆ(t−1) /
  • 39. Training Objective Remove the constant terms, we get This is the objective at the -th step. Our goal is to find a to optimize it. Ob = [ ( ) + ( )] + Ω( )j (t) ∑ i=1 n g i f t xi 1 2 hi f 2 t xi f t t f t /
  • 40. Tree Building Algorithm The tree structures in XGBoost leads to the core problem: how can we find a tree that improves the prediction along the gradient? /
  • 41. Tree Building Algorithm Every decision tree looks like this Each data point flows to one of the leaves following the direction on each node. /
  • 42. Tree Building Algorithm The core concepts are: Internal Nodes Leaves · Each internal node split the flow of data points by one of the features. The condition on the edge specifies what data can flow through. - - · Data points reach to a leaf will be assigned a weight. The weight is the prediction. - - /
  • 43. Tree Building Algorithm Two key questions for building a decision tree are 1. How to find a good structure? 2. How to assign prediction score? We want to solve these two problems with the idea of gradient descent. /
  • 44. Tree Building Algorithm Let us assume that we already have the solution to question 1. We can mathematically define a tree as where is a "directing" function which assign every data point to the -th leaf. This definition describes the prediction process on a tree as (x) =f t wq(x) q(x) q(x) Assign the data point to a leaf by Assign the corresponding score on the -th leaf to the data point. · x q · wq(x) q(x) /
  • 45. Tree Building Algorithm Define the index set
$I_j = \{\, i \mid q(x_i) = j \,\}$
This set contains the indices of the data points that are assigned to the $j$-th leaf. /
  • 46. Tree Building Algorithm Then we rewrite the objective as
$Obj^{(t)} = \sum_{i=1}^{n}\left[ g_i f_t(x_i) + \frac{1}{2} h_i f_t^2(x_i) \right] + \gamma T + \frac{1}{2}\lambda \sum_{j=1}^{T} w_j^2 = \sum_{j=1}^{T}\left[ \left(\sum_{i\in I_j} g_i\right) w_j + \frac{1}{2}\left(\sum_{i\in I_j} h_i + \lambda\right) w_j^2 \right] + \gamma T$
Since all the data points on the same leaf share the same prediction, this form sums the predictions by leaves. /
  • 47. Tree Building Algorithm It is a quadratic problem in $w_j$, so it is easy to find the best $w_j$ to optimize $Obj$:
$w_j^* = -\frac{\sum_{i\in I_j} g_i}{\sum_{i\in I_j} h_i + \lambda}$
The corresponding value of $Obj$ is
$Obj^{(t)} = -\frac{1}{2}\sum_{j=1}^{T}\frac{\left(\sum_{i\in I_j} g_i\right)^2}{\sum_{i\in I_j} h_i + \lambda} + \gamma T$ /
  • 48. Tree Building Algorithm The leaf score
$w_j = -\frac{\sum_{i\in I_j} g_i}{\sum_{i\in I_j} h_i + \lambda}$
relates to
- the first and second order gradients of the loss function, $g$ and $h$
- the regularization parameter $\lambda$ /
  • 49. Tree Building Algorithm Now we come back to the first question: How to find a good structure? We can further split it into two sub-questions: 1. How to choose the feature to split? 2. When to stop the split? /
  • 50. Tree Building Algorithm In each split, we want to greedily find the best splitting point that optimizes the objective. For each feature:
1. Sort its values
2. Scan through them to find the best splitting point
Then choose the best feature overall. /
  • 51. Tree Building Algorithm Now we define "the best split" in terms of the objective. Every time we do a split, we are changing a leaf into an internal node. /
  • 52. Tree Building Algorithm Let
- $I$ be the set of indices of data points assigned to this node, and
- $I_L$ and $I_R$ be the sets of indices of data points assigned to the two new leaves.
Recall that the best value of the objective on the $j$-th leaf is
$Obj^{(t)} = -\frac{1}{2}\frac{\left(\sum_{i\in I_j} g_i\right)^2}{\sum_{i\in I_j} h_i + \lambda} + \gamma$ /
  • 53. Tree Building Algorithm The gain of the split is
$gain = \frac{1}{2}\left[ \frac{\left(\sum_{i\in I_L} g_i\right)^2}{\sum_{i\in I_L} h_i + \lambda} + \frac{\left(\sum_{i\in I_R} g_i\right)^2}{\sum_{i\in I_R} h_i + \lambda} - \frac{\left(\sum_{i\in I} g_i\right)^2}{\sum_{i\in I} h_i + \lambda} \right] - \gamma$
A small sketch of these formulas follows. /
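Below is a minimal R sketch of the leaf weight and split gain formulas above, assuming we already have the gradient vector g and hessian vector h for the data points in the current node; the function names are illustrative and not part of the xgboost package.
# Optimal leaf weight: w* = -sum(g) / (sum(h) + lambda)
leaf.weight = function(g, h, lambda = 1) {
  -sum(g) / (sum(h) + lambda)
}

# Gain of splitting a node into the indices in left.idx and the rest
split.gain = function(g, h, left.idx, lambda = 1, gamma = 0) {
  score = function(gs, hs) sum(gs)^2 / (sum(hs) + lambda)
  right.idx = setdiff(seq_along(g), left.idx)
  0.5 * (score(g[left.idx], h[left.idx]) +
         score(g[right.idx], h[right.idx]) -
         score(g, h)) - gamma
}

# Scan one feature: sort its values and try every splitting point
best.split = function(x, g, h, lambda = 1, gamma = 0) {
  ord = order(x)
  gains = sapply(1:(length(x) - 1),
                 function(k) split.gain(g, h, ord[1:k], lambda, gamma))
  list(threshold = x[ord[which.max(gains)]], gain = max(gains))
}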
  • 54. Tree Building Algorithm To build a tree, we find the best splitting point recursively until we reach the maximum depth. Then we prune out the nodes with a negative gain in a bottom-up order. /
  • 55. Tree Building Algorithm XGBoost can handle missing values in the data. For each node, we guide all the data points with a missing value
- to the left subnode, and calculate the maximum gain
- to the right subnode, and calculate the maximum gain
- then choose the direction with the larger gain
Finally every node has a "default direction" for missing values. /
  • 56. Tree Building Algorithm To sum up, the outline of the algorithm is:
- Iterate for nround times
  - Grow the tree to the maximum depth
    - Find the best splitting point
    - Assign weights to the two new leaves
  - Prune the tree to delete nodes with negative gain /
  • 58. Parameter Introduction XGBoost has plenty of parameters. We can group them into
1. General parameters, e.g. the number of threads
2. Booster parameters, e.g. step size and regularization
3. Task parameters, e.g. the objective and the evaluation metric /
  • 59. Parameter Introduction After the introduction of the model, we can understand the parameters provided in XGBoost. To check the parameter list, one can look into
- the documentation of xgb.train
- the documentation in the repository /
  • 60. Parameter Introduction General parameters:
- nthread: number of parallel threads
- booster: gbtree (tree-based model) or gblinear (linear function) /
  • 61. Parameter Introduction Parameters for the Tree Booster:
- eta: step size shrinkage used in the update to prevent overfitting. Range [0,1], default 0.3
- gamma: minimum loss reduction required to make a split. Range [0, ∞], default 0 /
  • 62. Parameter Introduction Parameters for the Tree Booster:
- max_depth: maximum depth of a tree. Range [1, ∞], default 6
- min_child_weight: minimum sum of instance weight needed in a child. Range [0, ∞], default 1
- max_delta_step: maximum delta step we allow each tree's weight estimation to be. Range [0, ∞], default 0 /
  • 63. Parameter Introduction Parameters for the Tree Booster:
- subsample: subsample ratio of the training instances. Range (0, 1], default 1
- colsample_bytree: subsample ratio of columns when constructing each tree. Range (0, 1], default 1 /
  • 64. Parameter Introduction Parameters for the Linear Booster:
- lambda: L2 regularization term on weights, default 0
- alpha: L1 regularization term on weights, default 0
- lambda_bias: L2 regularization term on bias, default 0 /
  • 65. Parameter Introduction Objectives:
- "reg:linear": linear regression, the default option
- "binary:logistic": logistic regression for binary classification, outputs a probability
- "multi:softmax": multiclass classification using the softmax objective, needs num_class to be specified
- a user specified objective /
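For illustration, here is a hedged sketch of passing these objectives and booster parameters through the R interface; the mushroom data comes from the earlier walkthrough, while X, y and num_class = 3 in the multiclass example are placeholders.
library(xgboost)
data(agaricus.train, package = 'xgboost')
train = agaricus.train

# Binary classification with a few tree booster parameters
bst = xgboost(data = train$data, label = train$label, nround = 10,
              objective = "binary:logistic",
              eta = 0.3, max_depth = 6, min_child_weight = 1,
              subsample = 1, colsample_bytree = 1, nthread = 2)

# Multiclass classification additionally needs num_class
# bst_multi = xgboost(data = X, label = y, nround = 10,
#                     objective = "multi:softmax", num_class = 3)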
  • 67. Guide on Parameter Tuning It is nearly impossible to give a set of universally optimal parameters, or a global algorithm achieving it. The key points of parameter tuning are
- Control overfitting
- Deal with imbalanced data
- Trust the cross validation /
  • 68. Guide on Parameter Tuning The "Bias-Variance Tradeoff", or the "Accuracy-Simplicity Tradeoff", is the main idea for controlling overfitting. We can group the booster specific parameters as
- Controlling the model complexity: max_depth, min_child_weight and gamma
- Robustness to noise: subsample, colsample_bytree /
  • 69. Guide on Parameter Tuning Sometimes the data is imbalanced among classes.
- If you only care about the ranking order:
  - Balance the positive and negative weights by scale_pos_weight
  - Use "auc" as the evaluation metric
- If you care about predicting the right probability:
  - Do not re-balance the dataset
  - Setting the parameter max_delta_step to a finite number (say 1) will help convergence
A sketch of the first option follows. /
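A minimal sketch of balancing the classes with scale_pos_weight, assuming a 0/1 label vector label and a feature matrix X (both placeholders, not from the slides):
# Weight the positive class by the ratio of negatives to positives
spw = sum(label == 0) / sum(label == 1)

bst = xgboost(data = X, label = label, nround = 100,
              objective = "binary:logistic",
              eval_metric = "auc",
              scale_pos_weight = spw)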
  • 70. Guide on Parameter Tuning To select ideal parameters, use the result from xgb.cv.
- Trust the score on the test folds
- Use early.stop.round to detect when the test score keeps getting worse
- If overfitting is observed, reduce the stepsize eta and increase nround at the same time /
  • 72. Advanced Features There are plenty of highlights in XGBoost: Customized objective and evaluation metric Prediction from cross validation Continue training on existing model Calculate and plot the variable importance · · · · /
  • 73. Customization According to the algorithm, we can define our own loss function, as long as we can calculate its first and second order gradients. Define
$grad = \partial_{\hat{y}^{(t-1)}} l$ and $hess = \partial^2_{\hat{y}^{(t-1)}} l$
We can optimize the loss function if we can calculate these two values. /
  • 74. Customization We can rewrite the logloss for the $i$-th data point as
$L = -\left( y_i \log(p_i) + (1 - y_i)\log(1 - p_i) \right)$
The $p_i$ is calculated by applying the logistic transformation to our prediction $\hat{y}_i$. Then the logloss is
$L = -\left( y_i \log\frac{1}{1+e^{-\hat{y}_i}} + (1 - y_i)\log\frac{e^{-\hat{y}_i}}{1+e^{-\hat{y}_i}} \right)$ /
  • 75. Customization We can see that
- $grad = \frac{1}{1+e^{-\hat{y}_i}} - y_i = p_i - y_i$
- $hess = \frac{e^{-\hat{y}_i}}{\left(1+e^{-\hat{y}_i}\right)^2} = p_i (1 - p_i)$
Next we translate them into code; a sketch follows. /
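The original code slides are not reproduced here, but a minimal sketch consistent with the derivation above looks like this (the name logregobj follows the convention used on slide 86):
logregobj = function(preds, dtrain) {
  # Extract the true labels from the second argument
  labels = getinfo(dtrain, "label")
  # Apply the logistic transformation to obtain p_i
  preds = 1 / (1 + exp(-preds))
  # First order gradient: p_i - y_i
  grad = preds - labels
  # Second order gradient: p_i * (1 - p_i)
  hess = preds * (1 - preds)
  return(list(grad = grad, hess = hess))
}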
  • 82. Customization We can also customize the evaluation metric.
evalerror = function(preds, dtrain) {
  labels = getinfo(dtrain, "label")
  err = as.numeric(sum(labels != (preds > 0))) / length(labels)
  return(list(metric = "error", value = err))
} /
  • 83. Customization We can also customize the evaluation metric.
evalerror = function(preds, dtrain) {
  # Extract the true label from the second argument
  labels = getinfo(dtrain, "label")
  err = as.numeric(sum(labels != (preds > 0))) / length(labels)
  return(list(metric = "error", value = err))
} /
  • 84. Customization We can also customize the evaluation metric.
evalerror = function(preds, dtrain) {
  # Extract the true label from the second argument
  labels = getinfo(dtrain, "label")
  # Calculate the error
  err = as.numeric(sum(labels != (preds > 0))) / length(labels)
  return(list(metric = "error", value = err))
} /
  • 85. Customization We can also customize the evaluation metric.
evalerror = function(preds, dtrain) {
  # Extract the true label from the second argument
  labels = getinfo(dtrain, "label")
  # Calculate the error
  err = as.numeric(sum(labels != (preds > 0))) / length(labels)
  # Return the name of this metric and the value
  return(list(metric = "error", value = err))
} /
  • 86. Customization To utilize the customized objective and evaluation, we simply pass them to the arguments:
param = list(max.depth = 2, eta = 1, nthread = 2, silent = 1,
             objective = logregobj, eval_metric = evalerror)
bst = xgboost(params = param, data = train$data, label = train$label, nround = 2)
## [0] train-error:0.0465223399355136
## [1] train-error:0.0222631659757408 /
  • 87. Prediction in Cross Validation "Stacking" is an ensemble learning technique which combines the predictions from several models. It is widely used in many scenarios. One of the main concerns is to avoid overfitting. The common way is to use the prediction values from cross validation. XGBoost provides a convenient argument to calculate the predictions during cross validation. /
  • 88. Prediction in Cross Validation
res = xgb.cv(params = param, data = train$data, label = train$label, nround = 2,
             nfold = 5, prediction = TRUE)
## [0] train-error:0.046522+0.001041 test-error:0.046523+0.004164
## [1] train-error:0.022263+0.001221 test-error:0.022263+0.004885
str(res)
## List of 2
##  $ dt  : Classes 'data.table' and 'data.frame': 2 obs. of 4 variables:
##   ..$ train.error.mean: num [1:2] 0.0465 0.0223
##   ..$ train.error.std : num [1:2] 0.00104 0.00122
##   ..$ test.error.mean : num [1:2] 0.0465 0.0223
##   ..$ test.error.std  : num [1:2] 0.00416 0.00488
##   ..- attr(*, ".internal.selfref")=<externalptr>
##  $ pred: num [1:6513] 2.58 -1.04 -1.12 2.57 -3.04 ... /
  • 89. xgb.DMatrix XGBoost has its own input data class, xgb.DMatrix. One can convert the usual data set into it by
dtrain = xgb.DMatrix(data = train$data, label = train$label)
It is the data structure used by the XGBoost algorithm. XGBoost preprocesses the input data and label into an xgb.DMatrix object before feeding it to the training algorithm. If one needs to repeat the training process on the same big data set, it is good to use the xgb.DMatrix object to save preprocessing time. /
  • 90. xgb.DMatrix An xgb.DMatrix object contains
1. Preprocessed training data
2. Several optional pieces of information
- missing values
- data weights /
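As a hedged illustration of attaching such extra information, per-row weights can be set on an xgb.DMatrix with setinfo; the weight vector w below is a placeholder, not from the slides.
dtrain = xgb.DMatrix(data = train$data, label = train$label)

w = rep(1, nrow(train$data))       # placeholder: uniform instance weights
setinfo(dtrain, "weight", w)

bst = xgb.train(params = list(objective = "binary:logistic"),
                data = dtrain, nrounds = 2)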
  • 91. Continue Training Training the model for 5000 rounds at once is sometimes useful, but we are also taking the risk of overfitting. A better strategy is to train the model with fewer rounds and repeat that many times. This enables us to observe the outcome after each step; a sketch of this pattern follows. /
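A minimal sketch of continuing training from an existing model (not the slides' original code), using the xgb_model argument of xgb.train:
dtrain = xgb.DMatrix(data = train$data, label = train$label)
params = list(objective = "binary:logistic", eta = 0.1)

# First batch of rounds
bst = xgb.train(params = params, data = dtrain, nrounds = 100)

# Inspect the result, then continue training the same model for more rounds
bst = xgb.train(params = params, data = dtrain, nrounds = 100, xgb_model = bst)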
  • 97. Importance and Tree plotting The result of XGBoost contains many trees. We can count the number of appearances of each variable in all the trees, and use this number as the importance score.
bst = xgboost(data = train$data, label = train$label, max.depth = 2, verbose = FALSE,
              eta = 1, nthread = 2, nround = 2, objective = "binary:logistic")
xgb.importance(train$data@Dimnames[[2]], model = bst)
##    Feature       Gain     Cover Frequence
## 1:      28 0.67615484 0.4978746       0.4
## 2:      55 0.17135352 0.1920543       0.2
## 3:      59 0.12317241 0.1638750       0.2
## 4:     108 0.02931922 0.1461960       0.2 /
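If a graphical view is preferred, the same importance matrix can be plotted (a hedged sketch; depending on the package version, the bar plot may require the Ckmeans.1d.dp package):
imp = xgb.importance(train$data@Dimnames[[2]], model = bst)
xgb.plot.importance(imp)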
  • 98. Importance and Tree plotting We can also plot the trees in the model by xgb.plot.tree.
xgb.plot.tree(agaricus.train$data@Dimnames[[2]], model = bst) /
  • 100. Early Stopping When doing cross validation, it is common to encounter overfitting at an early stage of the iteration. Sometimes the prediction gets worse consistently from round 300 while the total number of iterations is 1000. To stop the cross validation process early, one can use the early.stop.round argument in xgb.cv.
bst = xgb.cv(params = param, data = train$data, label = train$label,
             nround = 20, nfold = 5,
             maximize = FALSE, early.stop.round = 3) /
  • 101. Early Stopping
## [0]  train-error:0.046522+0.001280 test-error:0.046523+0.005119
## [1]  train-error:0.022263+0.001463 test-error:0.022264+0.005852
## [2]  train-error:0.007063+0.000905 test-error:0.007064+0.003619
## [3]  train-error:0.015200+0.001163 test-error:0.015201+0.004653
## [4]  train-error:0.004414+0.002811 test-error:0.005989+0.004839
## [5]  train-error:0.001689+0.001114 test-error:0.002304+0.002304
## [6]  train-error:0.001228+0.000219 test-error:0.001228+0.000876
## [7]  train-error:0.001228+0.000219 test-error:0.001228+0.000876
## [8]  train-error:0.000921+0.000533 test-error:0.001228+0.000876
## [9]  train-error:0.000653+0.000601 test-error:0.001075+0.001030
## [10] train-error:0.000422+0.000582 test-error:0.000768+0.001086
## [11] train-error:0.000000+0.000000 test-error:0.000000+0.000000
## [12] train-error:0.000000+0.000000 test-error:0.000000+0.000000
## [13] train-error:0.000000+0.000000 test-error:0.000000+0.000000
## [14] train-error:0.000000+0.000000 test-error:0.000000+0.000000
## Stopping. Best iteration: 12 /
  • 103. Kaggle Winning Solution To get a higher rank, one needs to push the limits of
1. Feature Engineering
2. Parameter Tuning
3. Model Ensemble
The winning solution in the recent Otto Competition is an excellent example. /
  • 104. Kaggle Winning Solution Then they used a 3-layer ensemble learning model:
- 33 models on top of the original data
- XGBoost, a neural network and adaboost on the 33 predictions from those models plus 8 engineered features
- A weighted average of the 3 predictions from the second step /
  • 105. Kaggle Winning Solution The data for this competition is special: the meanings of the features are hidden. For feature engineering, they generated 8 new features:
- Distances to nearest neighbours of each class
- Sum of distances of the 2 nearest neighbours of each class
- Sum of distances of the 4 nearest neighbours of each class
- Distances to nearest neighbours of each class in TFIDF space
- Distances to nearest neighbours of each class in T-SNE space (3 dimensions)
- Clustering features of the original dataset
- Number of non-zero elements in each row
- X (that feature was used only in NN 2nd level training) /
  • 106. Kaggle Winning Solution This means a lot of work. It also implies they needed to try many other models, although some of them turned out not to be helpful in this competition. Their attempts include:
- A lot of training algorithms in the first level, such as
  - Vowpal Wabbit (many configurations)
  - R glm, glmnet, scikit SVC, SVR, Ridge, SGD, etc.
- Some preprocessing like PCA, ICA and FFT
- Feature Selection
- Semi-supervised learning /
  • 107. Influencers in Social Networks Let's learn to use a single XGBoost model to achieve a high rank in an old competition! The competition we choose is the Influencers in Social Networks competition. It was a hackathon in 2013, so the data set is small enough that we can train the model in seconds. /
  • 108. Influencers in Social Networks First let's download the data and load it into R:
train = read.csv('train.csv', header = TRUE)
test = read.csv('test.csv', header = TRUE)
y = train[, 1]
train = as.matrix(train[, -1])
test = as.matrix(test) /
  • 109. Influencers in Social Networks Observe the data:
colnames(train)
##  [1] "A_follower_count"    "A_following_count"   "A_listed_count"
##  [4] "A_mentions_received" "A_retweets_received" "A_mentions_sent"
##  [7] "A_retweets_sent"     "A_posts"             "A_network_feature_1"
## [10] "A_network_feature_2" "A_network_feature_3" "B_follower_count"
## [13] "B_following_count"   "B_listed_count"      "B_mentions_received"
## [16] "B_retweets_received" "B_mentions_sent"     "B_retweets_sent"
## [19] "B_posts"             "B_network_feature_1" "B_network_feature_2"
## [22] "B_network_feature_3" /
  • 110. Influencers in Social Networks Observe the data:
train[1, ]
##    A_follower_count   A_following_count      A_listed_count
##        2.280000e+02        3.020000e+02        3.000000e+00
## A_mentions_received A_retweets_received     A_mentions_sent
##        5.839794e-01        1.005034e-01        1.005034e-01
##     A_retweets_sent             A_posts A_network_feature_1
##        1.005034e-01        3.621501e-01        2.000000e+00
## A_network_feature_2 A_network_feature_3    B_follower_count
##        1.665000e+02        1.135500e+04        3.446300e+04
##   B_following_count      B_listed_count B_mentions_received
##        2.980800e+04        1.689000e+03        1.543050e+01
## B_retweets_received     B_mentions_sent     B_retweets_sent
##        3.984029e+00        8.204331e+00        3.324230e-01
##             B_posts B_network_feature_1 B_network_feature_2
##        6.988815e+00        6.600000e+01        7.553030e+01
## B_network_feature_3
##        1.916894e+03 /
  • 111. Influencers in Social Networks The data contains information about two users in a social network service. Our mission is to determine which of the two is more influential. This type of data gives us some room for feature engineering. /
  • 112. Influencers in Social Networks The first trick is to increase the information in the data. Every data point can be expressed as <y, A, B>, and it indicates <1-y, B, A> as well. We can simply extract this part of information from the training set.
new.train = cbind(train[, 12:22], train[, 1:11])
train = rbind(train, new.train)
y = c(y, 1 - y) /
  • 113. Influencers in Social Networks The following feature engineering steps are done on both the training and the test set, therefore we combine them together.
x = rbind(train, test) /
  • 114. Influencers in Social Networks The next step is to calculate the ratios between the features of A and B separately:
- followers/following
- mentions received/sent
- retweets received/sent
- followers/posts
- retweets received/posts
- mentions received/posts /
  • 115. Influencers in Social Networks Considering there might be zeroes, we need to smooth the ratio by a constant. Then we can calculate the ratios with this helper function:
calcRatio = function(dat, i, j, lambda = 1) (dat[, i] + lambda) / (dat[, j] + lambda) /
  • 116. Influencers in Social Networks
A.follow.ratio = calcRatio(x, 1, 2)
A.mention.ratio = calcRatio(x, 4, 6)
A.retweet.ratio = calcRatio(x, 5, 7)
A.follow.post = calcRatio(x, 1, 8)
A.mention.post = calcRatio(x, 4, 8)
A.retweet.post = calcRatio(x, 5, 8)
B.follow.ratio = calcRatio(x, 12, 13)
B.mention.ratio = calcRatio(x, 15, 17)
B.retweet.ratio = calcRatio(x, 16, 18)
B.follow.post = calcRatio(x, 12, 19)
B.mention.post = calcRatio(x, 15, 19)
B.retweet.post = calcRatio(x, 16, 19) /
  • 117. Influencers in Social Networks Combine the features into the data set.
x = cbind(x[, 1:11],
          A.follow.ratio, A.mention.ratio, A.retweet.ratio,
          A.follow.post, A.mention.post, A.retweet.post,
          x[, 12:22],
          B.follow.ratio, B.mention.ratio, B.retweet.ratio,
          B.follow.post, B.mention.post, B.retweet.post) /
  • 118. Influencers in Social Networks Then we can compare the difference between A and B. Because XGBoost is scale invariant, the difference and the ratio are essentially the same here.
AB.diff = x[, 1:17] - x[, 18:34]
x = cbind(x, AB.diff)
train = x[1:nrow(train), ]
test = x[-(1:nrow(train)), ] /
  • 119. Influencers in Social Networks Now comes the modeling part. We first investigate how far we can go with a single model. The parameter tuning step is very important here. We can see the performance from cross validation. /
  • 120. Influencers in Social Networks Here's xgb.cv with the default parameters.
set.seed(1024)
cv.res = xgb.cv(data = train, nfold = 3, label = y, nrounds = 100, verbose = FALSE,
                objective = 'binary:logistic', eval_metric = 'auc') /
  • 121. Influencers in Social Networks We can see the trend of AUC on training and test sets. /
  • 122. Influencers in Social Networks It is obvious our model severely overfits. The direct reason is simple: the default value of eta is 0.3, which is too large for this task. Recall the parameter tuning guide: we need to decrease eta and increase nrounds based on the result of cross validation. /
  • 123. Influencers in Social Networks After some trials, we get the following set of parameters:
set.seed(1024)
cv.res = xgb.cv(data = train, nfold = 3, label = y, nrounds = 3000,
                objective = 'binary:logistic', eval_metric = 'auc',
                eta = 0.005, gamma = 1, lambda = 3, nthread = 8,
                max_depth = 4, min_child_weight = 1, verbose = F,
                subsample = 0.8, colsample_bytree = 0.8) /
  • 124. Influencers in Social Networks We can see the trend of AUC on training and test sets. /
  • 125. Influencers in Social Networks Next we extract the best number of iterations. We calculate the test AUC minus its standard deviation, and choose the iteration with the largest value.
bestRound = which.max(as.matrix(cv.res)[, 3] - as.matrix(cv.res)[, 4])
bestRound
## [1] 2442
cv.res[bestRound, ]
##    train.auc.mean train.auc.std test.auc.mean test.auc.std
## 1:       0.934967       0.00125      0.876629     0.002073 /
  • 126. Influencers in Social Networks Then we train the model with the same set of parameters:
set.seed(1024)
bst = xgboost(data = train, label = y, nrounds = 3000,
              objective = 'binary:logistic', eval_metric = 'auc',
              eta = 0.005, gamma = 1, lambda = 3, nthread = 8,
              max_depth = 5, min_child_weight = 1,
              subsample = 0.8, colsample_bytree = 0.8)
preds = predict(bst, test, ntreelimit = bestRound) /
  • 127. Influencers in Social Networks Finally we submit our solution.
result = data.frame(Id = 1:nrow(test), Choice = preds)
write.csv(result, 'submission.csv', quote = FALSE, row.names = FALSE)
This puts us in the top 10 on the leaderboard! /
  • 128. FAQ /