Gradient boosted trees
Nihar Ranjan
Data Mining
 Data Mining is the process of extracting patterns from data. The patterns should be:
 Valid: they hold on new data with some certainty.
 Novel: they are non-obvious to the system.
 Useful: it should be possible to act on them.
 Understandable: humans should be able to interpret them.
 Also known as Knowledge Discovery in Databases (KDD).
Data Mining might mean:
 Statistics
 Visualization
 Artificial Intelligence
 Database Technology
 Machine Learning
 Neural Networks
 Information Retrieval
 Knowledge-based systems
 Knowledge acquisition
 Pattern Recognition
 High-performance computing
 And so on….
What's needed?
 Suitable data
 Computing power
 Data mining software
 Someone who knows both the nature of the data and the software tools.
 A reason, theory, or hunch
Typical applications of Data Mining and KDD
 Data Mining and KDD have widespread applications. Some examples include:
 Marketing
 Healthcare
 Financial services
 And so on….
Some basic techniques
Predictive model: Describes what is likely to happen in the future by analyzing the current data. It uses statistical analysis, machine learning algorithms, and other forecasting techniques. It is not exact, since it is essentially a projection into the future from the available data and the chosen statistical/machine learning techniques. E.g. performance analysis.
Descriptive model: Gives a view into the past and describes what exactly happened. It involves data aggregation and data mining. It is accurate, since it describes exactly what happened in the past. E.g. sentiment analysis.
Prescriptive model: A relatively new field in data science, and a step above predictive and descriptive models. It provides a viable solution to the problem at hand, along with the impact of adopting that solution on future trends. It is still an evolving technique. E.g. Google's self-driving car.
Some basic techniques
Predictive
 Regression
 Classification
 Collaborative Filtering
Descriptive
 Clustering
 Association rules and variants
 Deviation detection
Key data mining tasks
 Classification: mapping data into predefined groups or classes.
 Regression: mapping a data item to a real-valued prediction variable.
 Clustering: grouping similar data together into clusters.
Key learning tasks in Machine Learning
 Supervised learning: a set of well-labelled data is given, with defined input and output variables (training data), and the algorithms learn to predict the output from the input data.
 Unsupervised learning: the data given is not labelled, i.e. only input variables are given, with no corresponding output variables. The algorithms find patterns and draw inferences from the given data. This is "pure Data Mining".
 Semi-supervised: some data is labeled but most of it is unlabeled, and a mixture of supervised and unsupervised techniques can be used.
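A small sketch of the supervised/unsupervised distinction in code (a minimal illustration assuming scikit-learn; the data and models are hypothetical, not from the slides):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.tree import DecisionTreeClassifier

    X = np.array([[1.0, 1.1], [1.2, 0.9], [8.0, 8.2], [7.9, 8.1]])
    y = np.array([0, 0, 1, 1])  # labels exist only in the supervised case

    # Supervised: learn to map inputs X to known outputs y.
    clf = DecisionTreeClassifier().fit(X, y)
    print(clf.predict([[1.1, 1.0]]))

    # Unsupervised: no labels; find structure (two clusters) in X alone.
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(km.labels_)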
Some basic Data Mining Methods
 Decision Trees
 Neural Networks
 Cluster/Nearest Neighbour
 Genetic Algorithms/Evolutionary Computing
 Bayesian Networks
 Statistics
 Hybrids
Gradient boosted trees
 We are interested in gradient boosted trees.
 We will use RapidMiner (possibly Python?).
Gradient boosted trees
 Decision Trees
 We will first discuss decision trees briefly.
 A decision tree is a tree in which each node represents a feature (attribute), each link (branch) represents a decision (rule), and each leaf represents an outcome (a categorical or continuous value).
 A decision tree takes a set of input features and splits the input data recursively based on those features.
 The process is repeated until some stop condition is met, e.g. the depth of the tree, or no further information gain being possible.
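As a minimal sketch of this recursive splitting (assuming scikit-learn and its bundled iris data, neither of which the slides specify):

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)

    # max_depth is one of the stop conditions mentioned above.
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(X, y)
    print(tree.predict(X[:5]))  # leaf outcomes for the first five samples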
Gradient boosted trees
 Decision Trees have been around for a long time and are known to suffer from bias and variance.
 We get large bias with simple trees and large variance with complex trees.
 Ensemble methods combine several decision trees to produce better predictive performance than a single decision tree.
 The main principle behind an ensemble model is that a group of weak learners come together to form a strong learner.
 A few ensemble methods: Bagging, Boosting.
 We will look at each of them.
Gradient boosted trees
 Bagging
 It is used when our goal is to reduce the variance of a decision tree.
 The idea is to take subsets of the training data, chosen randomly with replacement.
 Each subset is then used to train its own decision tree.
 We thus end up with an ensemble of different models, and their average is much more robust than a single decision tree for predictive analysis (sketched below).
 Random Forest is an extension of Bagging.
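Before moving on to Random Forest, a minimal bagging sketch (assuming scikit-learn, whose BaggingClassifier uses a decision tree as its default base learner; the dataset is illustrative):

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)

    # 50 trees, each trained on a bootstrap sample (with replacement);
    # their votes are averaged into one prediction.
    bag = BaggingClassifier(n_estimators=50, bootstrap=True, random_state=0)
    print(cross_val_score(bag, X, y, cv=5).mean())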
Gradient boosted trees
 Random Forest
 It is a collection, or ensemble, of numerous decision trees. A collection of trees is generally called a forest.
 It is also a bagging technique, with a key difference: it takes a random subset of the features at each split, and prunes the trees with a stopping criterion for node splits.
 Each tree is grown as large as possible.
 The above steps are repeated, and the prediction is given by aggregating the predictions from the n trees.
 Used for both classification and regression.
 It handles high-dimensional data and missing values well and maintains accuracy, but it does not give precise values for the regression model, as the final prediction is based on the mean of the predictions from the subset trees.
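A corresponding Random Forest sketch (again assuming scikit-learn and illustrative data); max_features is the per-split feature subset that distinguishes it from plain bagging:

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    X, y = load_breast_cancer(return_X_y=True)

    rf = RandomForestClassifier(
        n_estimators=100,
        max_features="sqrt",  # random subset of features at each split
        random_state=0,
    )
    rf.fit(X, y)
    print(rf.score(X, y))  # training accuracy of the aggregated forest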
Gradient boosted trees
 Boosting
 Boosting refers to a family of methods which convert weak learners into strong learners.
 It learns sequentially from the errors of a prior random sample (in our case, a tree).
 The weak learners are trained sequentially, each trying to correct its predecessor.
 The early learners fit simple models to the data, and the data is then analyzed for errors.
 All the weak learners, each with accuracy only slightly better than random guessing (0.5), are combined in some way to get a strong classifier with much higher accuracy.
 When an input is misclassified by a hypothesis, its weight is increased so that the next hypothesis is more likely to classify it correctly.
 By combining the whole set at the end, the weak learners are converted into a better-performing model.
Gradient boosted trees
 Types of boosting
 AdaBoost: short for Adaptive Boosting. Start from a weak classifier and learn to linearly combine weak classifiers so that the error is reduced. The result is a strong classifier built by boosting weak classifiers.
 We train an algorithm, say a decision tree, on a model in which all features have been given equal weights.
 A model is built on a subset of the data, predictions are made on the whole dataset, and the errors are calculated from the predictions and the actual values.
Gradient boosted trees
 AdaBoost
 While creating the next model, higher weights are given to the data points which were predicted incorrectly, i.e. misclassified.
 Weights can be determined from the error value: the higher the error, the more weight is associated with the observation.
 This process is repeated until the error function stops changing, or the maximum number of estimators is reached.
 It is used for both classification and regression problems. Mostly decision stumps are used with AdaBoost, but any machine learning algorithm that accepts weights on the training data set can be used as a base learner.
 One of the applications of AdaBoost is face recognition systems.
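A minimal AdaBoost sketch (assuming scikit-learn, whose AdaBoostClassifier uses a depth-1 decision stump as its default base learner; the dataset is illustrative):

    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import AdaBoostClassifier

    X, y = load_breast_cancer(return_X_y=True)

    # Each successive stump upweights the points its predecessors
    # misclassified; the stumps are then linearly combined.
    ada = AdaBoostClassifier(n_estimators=100, random_state=0)
    ada.fit(X, y)
    print(ada.score(X, y))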
Gradient boosted trees
 Types of Boosting
 Gradient Boosting
 We will cover this in detail now.
 There are other implementations of gradient boosting, such as XGBoost and LightGBM.
Gradient boosted trees
 Gradient Boost
 It is also a machine learning technique which produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees.
 Thus, the models may be referred to as gradient boosted trees.
 Like other boosting methods, it builds the model in a sequential, or stage-wise, fashion.
Gradient boosted trees
 We shall now see some of the maths behind it.
 The objective of any supervised learning algorithm is to define a loss function and minimize it.
 The mean squared error (MSE) is defined as: MSE = (1/n) * sum_i (y_i - yhat_i)^2, where y_i is the actual value and yhat_i the predicted value.
 We want our loss function (MSE) over our predictions to be minimal, using gradient descent and updating our predictions based on a learning rate.
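As a toy illustration of minimizing MSE by gradient descent (the numbers are made up, not from the slides; assumes NumPy):

    import numpy as np

    y = np.array([3.0, 5.0, 10.0])   # actual values
    F = np.zeros_like(y)             # initial predictions
    lr = 0.1                         # learning rate

    for _ in range(100):
        grad = 2 * (F - y) / len(y)  # gradient of MSE w.r.t. F
        F -= lr * grad               # gradient descent step

    print(F)  # predictions move toward the actual values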
Gradient boosted trees
 We will now see what a learning rate is.
 The learning rate is the hyperparameter which controls how much we adjust the weights of our model with respect to the loss gradient. The learning rate affects how quickly our model can converge to a local minimum (i.e. arrive at the best accuracy).
 The relationship is given by the formula: new_weight = existing_weight - learning_rate * gradient
 In gradient boosted trees, the analogous damped update is applied to the predictions: new_prediction = existing_prediction + learning_rate * residual, where the residual (y - yhat) is, for MSE loss, proportional to the negative gradient (see the sketch after this list).
 We basically update the predictions such that the sum of our residuals is close to zero (or minimal) and the predicted values are sufficiently close to the actual values.
 Learning rates are tuned so as to prevent the overfitting that gradient boosted trees are prone to.
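A minimal sketch of this damped update loop, fitting a small tree to the residuals each round (hypothetical data; assumes scikit-learn and NumPy rather than the slides' RapidMiner workflow):

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 10, size=(200, 1))
    y = np.sin(X).ravel() + rng.normal(0, 0.1, 200)

    F = np.full_like(y, y.mean())  # start from the mean prediction
    lr = 0.1                       # learning rate

    for _ in range(100):
        residuals = y - F                    # negative gradient of MSE
        h = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
        F += lr * h.predict(X)               # small step toward the target

    print(np.mean((y - F) ** 2))  # training MSE shrinks round by round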
Gradient boosted trees
 In gradient boosted trees, models are trained sequentially, and each model minimizes the loss function of the whole system using the gradient descent method, as explained earlier (for a regression y = ax + b + e, it is the error term e that needs special attention).
 The learning procedure consecutively fits new models to provide a more accurate estimate of the response variable.
 The principal idea behind this algorithm is to construct new base learners that are maximally correlated with the negative gradient of the loss function associated with the whole ensemble.
 Pros of gradient boosted trees: fast, easy to tune, not sensitive to scale (features can be a mix of continuous and categorical data), good performance, lots of software available (well supported and tested).
 Cons: sensitive to overfitting and noise (one should always cross-validate).
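The same idea via scikit-learn's built-in implementation, cross-validated as the cons above recommend (illustrative dataset; learning_rate is the shrinkage discussed earlier):

    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import cross_val_score

    X, y = load_diabetes(return_X_y=True)

    gbt = GradientBoostingRegressor(
        n_estimators=200,
        learning_rate=0.05,  # small shrinkage to guard against overfitting
        max_depth=3,
        random_state=0,
    )
    print(cross_val_score(gbt, X, y, cv=5).mean())  # mean R^2 across folds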
Thanks!

  • 23. Gradient boosted trees  In Gradient boosted trees, models are sequentially trained, and each model minimizes the loss function (y = ax + b + e, e needs special attention as it is an error term) of the whole system using Gradient descent method, as explained earlier.  The learning procedure consecutively fits new models to provide a more accurate estimate of response variable.  The principle idea behind this algorithm is to create new base learners, which can be maximally corelated with negative gradient of the loss function, associated with the whole ensemble.  Pros of Gradient boosted trees: Fast, easy to tune, not sensitive to scale (features can be a mix of continuous and categorical data), good performance, lots of software available(well supported and tested)  Cons: Sensitive to overfitting and noise (should always cross validate)