What is random forest?
Random forest is a commonly used machine learning algorithm,
trademarked by Leo Breiman and Adele Cutler, that combines the
output of multiple decision trees to reach a single result. Its ease of
use and flexibility have fueled its adoption, as it handles both
classification and regression problems.
Decision trees
Since the random forest model is made up of multiple decision trees, it would be
helpful to start by describing the decision tree algorithm briefly. Decision trees start
with a basic question, such as, “Should I surf?” From there, you can ask a series of
questions to determine an answer, such as, “Is it a long-period swell?” or “Is the wind
blowing offshore?” These questions make up the decision nodes in the tree, acting as
a means to split the data. Each question helps an individual arrive at a final
decision, which would be denoted by the leaf node. Observations that fit the criteria
will follow the “Yes” branch and those that don’t will follow the alternate path.
Decision trees seek to find the best split to subset the data, and they are typically
trained through the Classification and Regression Tree (CART) algorithm. Metrics such
as Gini impurity, information gain, or mean square error (MSE) can be used to
evaluate the quality of the split.
This decision tree is an example of a classification problem, where the class labels are
"surf" and "don't surf."
While decision trees are common supervised learning algorithms, they can be prone to
problems, such as bias and overfitting. However, when multiple decision trees form an
ensemble in the random forest algorithm, they predict more accurate results,
particularly when the individual trees are uncorrelated with each other.
Ensemble methods
Ensemble learning methods are made up of a set of classifiers—e.g. decision trees—
and their predictions are aggregated to identify the most popular result. The most
well-known ensemble methods are bagging, also known as bootstrap aggregation,
and boosting. In 1996, Leo Breiman (link resides outside ibm.com) introduced the
bagging method; in this method, a random sample of data in a training set is selected
with replacement—meaning that the individual data points can be chosen more than
once. After several data samples are generated, a separate model is trained independently on
each sample, and depending on the type of task—i.e., regression or classification—the
average or the majority vote of those predictions yields a more accurate estimate. This approach
is commonly used to reduce variance within a noisy dataset.
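As a rough sketch of bootstrap aggregation, the snippet below draws samples with replacement, trains an independent tree on each one, and takes a majority vote; the synthetic dataset and the number of trees are illustrative assumptions, not part of the original article.

```python
# A minimal sketch of bagging (bootstrap aggregation) for a classification task.
# Synthetic data; the number of trees is an arbitrary illustrative choice.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
rng = np.random.default_rng(0)

trees = []
for _ in range(25):
    # Sample with replacement: individual data points can be chosen more than once.
    idx = rng.integers(0, len(X), size=len(X))
    trees.append(DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx]))

# Aggregate the independently trained trees by majority vote (binary labels here).
votes = np.stack([tree.predict(X) for tree in trees])
bagged_prediction = (votes.mean(axis=0) > 0.5).astype(int)
print("agreement with the labels:", (bagged_prediction == y).mean())
```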
Random forest algorithm
The random forest algorithm is an extension of the bagging method as it utilizes both
bagging and feature randomness to create an uncorrelated forest of decision trees.
Feature randomness, also known as feature bagging or “the random subspace
method” (link resides outside ibm.com), generates a random subset of features, which
ensures low correlation among decision trees. This is a key difference between
decision trees and random forests. While decision trees consider all the possible
feature splits, random forests only select a subset of those features.
If we go back to the “should I surf?” example, the questions that I may ask to
determine the prediction may not be as comprehensive as someone else’s set of
questions. By accounting for all the potential variability in the data, we can reduce the
risk of overfitting, bias, and overall variance, resulting in more precise predictions.
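To illustrate that difference concretely, the sketch below contrasts plain bagged trees, which may consider every feature at each split, with a random forest whose max_features setting restricts each split to a random subset; this is how scikit-learn (our illustrative choice of library) exposes the random subspace idea, and the dataset and parameter values are invented for the example.

```python
# Bagged trees vs. a random forest that also uses feature randomness.
# Synthetic data; n_estimators and max_features values are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bagging alone: each tree is grown on a bootstrap sample but can split on any feature.
bagged = BaggingClassifier(n_estimators=100, random_state=0)

# Random forest: each split considers only a random subset of features (sqrt(20) ~ 4),
# which keeps the individual trees less correlated with one another.
forest = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)

for name, model in (("bagged trees", bagged), ("random forest", forest)):
    model.fit(X_train, y_train)
    print(name, "test accuracy:", round(model.score(X_test, y_test), 3))
```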
How it works
Random forest algorithms have three main hyperparameters, which need to be set
before training. These include node size, the number of trees, and the number of
features sampled. From there, the random forest classifier can be used to solve
regression or classification problems.
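Assuming a scikit-learn implementation (our choice; the article does not name a library), those three hyperparameters map roughly onto the constructor arguments shown below; the specific values are arbitrary.

```python
# The three main hyperparameters named above, expressed as scikit-learn arguments.
# The values shown are arbitrary and only illustrate where each knob lives.
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

clf = RandomForestClassifier(
    n_estimators=200,      # the number of trees in the forest
    max_features="sqrt",   # the number of features sampled at each split
    min_samples_leaf=1,    # node size: the minimum number of samples at a leaf
    random_state=0,
)

# The same knobs apply when the task is regression rather than classification.
reg = RandomForestRegressor(n_estimators=200, max_features=1.0, min_samples_leaf=5, random_state=0)
```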
The random forest algorithm is made up of a collection of decision trees, and each
tree in the ensemble is built from a data sample drawn from the training set with
replacement, called the bootstrap sample. About one-third of that training sample is
set aside as test data, known as the out-of-bag (oob) sample, which we’ll come back
to later. Another instance of randomness is then injected through feature bagging,
adding more diversity to the dataset and reducing the correlation among decision
trees. How the prediction is determined depends on the type of problem: for a
regression task, the predictions of the individual decision trees are averaged, and for a
classification task, a majority vote—i.e., the most frequent categorical variable—
yields the predicted class. Finally, the oob sample is used for cross-validation,
finalizing that prediction.
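As a sketch of that end-to-end flow (again using scikit-learn as an illustrative implementation), enabling bootstrap sampling together with the oob_score option gives both the majority-vote prediction and an out-of-bag estimate of accuracy without a separate test set; the data and parameter values are synthetic.

```python
# Bootstrap sampling, feature bagging, majority voting, and the oob estimate.
# Synthetic data; parameter values are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

forest = RandomForestClassifier(
    n_estimators=300,
    bootstrap=True,       # each tree sees a sample drawn with replacement
    oob_score=True,       # score every tree on the rows it never saw during training
    max_features="sqrt",  # feature bagging at every split
    random_state=0,
)
forest.fit(X, y)

# A majority vote across the trees yields the predicted class...
print(forest.predict(X[:5]))
# ...while the out-of-bag score approximates held-out accuracy without a separate test set.
print("oob accuracy estimate:", round(forest.oob_score_, 3))
```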
Benefits and challenges of random forest

There are a number of key advantages and challenges that the random forest
algorithm presents when used for classification or regression problems. Some of them
include:

Key Benefits
– Reduced risk of overfitting: Decision trees run the risk of overfitting as they tend to
fit all the samples in the training data too tightly. However, when there is a robust
number of decision trees in a random forest, the classifier is far less likely to overfit,
since averaging uncorrelated trees lowers the overall variance and prediction error.
– Provides flexibility: Since random forest can handle both regression and
classification tasks with a high degree of accuracy, it is a popular method among
data scientists. Feature bagging also makes the random forest classifier an
effective tool for estimating missing values, as it maintains accuracy when a portion
of the data is missing.
– Easy to determine feature importance: Random forest makes it easy to evaluate
variable importance, or contribution, to the model. There are a few ways to
evaluate feature importance. Gini importance, or mean decrease in impurity (MDI),
measures how much each feature reduces impurity across all of the splits in which
it is used. Permutation importance, also known as mean decrease in accuracy (MDA),
instead identifies the average decrease in accuracy when the feature’s values are
randomly permuted in the oob samples. (A short code sketch of both measures
follows this list.)

Key Challenges
– Time-consuming process: Since random forest algorithms can handle large data
sets, they can provide more accurate predictions, but they can be slow to process
data because they compute predictions for each individual decision tree.
– Requires more resources: Since random forests process larger data sets, they’ll
require more resources to store that data.
– More complex: The prediction of a single decision tree is easier to interpret when
compared to a forest of them.
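Below is a brief sketch of the two importance measures described under the benefits above, using scikit-learn’s impurity-based feature_importances_ and its permutation_importance helper as an illustrative implementation; note that scikit-learn computes permutation importance on whatever data you pass it (here a held-out split) rather than specifically on the oob samples.

```python
# Comparing impurity-based (MDI) and permutation (MDA-style) feature importance.
# Synthetic data; feature indices carry no meaning beyond this example.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=8, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Mean decrease in impurity (Gini importance), accumulated during training.
print("MDI importances:", forest.feature_importances_.round(3))

# Permutation importance: average drop in accuracy when each feature is shuffled.
result = permutation_importance(forest, X_test, y_test, n_repeats=10, random_state=0)
print("permutation importances:", result.importances_mean.round(3))
```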
Random forest applications

The random forest algorithm has been applied across a number of industries, allowing
organizations to make better business decisions. Some use cases include:
– Finance: It is a preferred algorithm over others as it reduces time spent on data
management and pre-processing tasks. It can be used to evaluate customers with
high credit risk, to detect fraud, and to solve option pricing problems.
– Healthcare: The random forest algorithm has applications within computational
biology (link resides outside ibm.com), allowing doctors to tackle problems such as
gene expression classification, biomarker discovery, and sequence annotation. As
a result, doctors can estimate drug responses to specific medications.
– E-commerce: It can be used in recommendation engines for cross-sell purposes.
Related solutions

IBM SPSS® Modeler
IBM SPSS® Modeler provides predictive analytics to help you uncover data patterns, gain
predictive accuracy and improve decision making.
Explore SPSS Modeler

Resources

IBM SPSS® Modeler: a drag-and-drop data science tool
Learn how organizations worldwide use SPSS® Modeler for data preparation and discovery,
predictive analytics, model management and deployment, and ML to monetize data assets.

Random-forest-inspired neural networks
Learn how a carefully designed neural network with a random forest structure can have
better generalization ability.

Tutorial: Using random forest to predict credit defaults using Python
Build a random forest model and optimize it with hyperparameter tuning using scikit-learn.
Take the next step
IBM SPSS Modeler is a visual data science and machine learning (ML) solution that
exposes patterns and models hidden in data through a bottom-up, hypothesis
generation approach. Organizations worldwide use it for data preparation and
discovery, predictive analytics, model management and deployment, and ML to
monetize data assets.
Explore SPSS Modeler
Try free for 30 days