Unit II-
Supervised Learning:
Naive Bayes, Decision Tree
Bayes’ Theorem
• Bayes’ Theorem is a mathematical formula that helps determine the
conditional probability of an event based on prior knowledge and new
evidence.
• It adjusts probabilities as new information comes in, helping us make
better decisions in uncertain situations.
• Bayes’ Theorem and Conditional Probability
• Bayes’ theorem (also known as the Bayes Rule or Bayes Law) is used to
determine the conditional probability of event A when event B has already
occurred.
• The general statement of Bayes’ theorem is: “The conditional probability of
an event A, given the occurrence of another event B, is equal to the product
of the probability of B given A and the probability of A, divided by the
probability of event B.” i.e.
Bayes Theorem Formula
For any two events A and B, the formula for Bayes’ theorem is
given by:
P(A|B) = [P(B|A) × P(A)] / P(B)
Where,
P(A) and P(B) are the probabilities of events A and B, with P(B) never equal to zero,
P(A|B) is the probability of event A when event B happens,
P(B|A) is the probability of event B when A happens.
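To make the formula concrete, here is a small worked example in Python. The numbers (disease prevalence, test sensitivity, and overall positive-test rate) are hypothetical assumptions chosen for illustration, not values from the slides.

```python
# Hypothetical worked example of Bayes' theorem: a medical test.
# All probabilities below are assumed values for illustration.
p_disease = 0.01            # P(A): prior probability of having the disease
p_pos_given_disease = 0.95  # P(B|A): probability of a positive test if diseased
p_pos = 0.05                # P(B): overall probability of a positive test

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(p_disease_given_pos)  # 0.19 -> only a 19% chance despite the positive test
```

Note how the small prior P(A) keeps the posterior low even though the test is quite sensitive; this is exactly the “adjusting probabilities with new evidence” behaviour described above.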
Naive Bayes Classifiers
• Naive Bayes classifiers are supervised machine learning algorithms used for classification
tasks, based on Bayes’ Theorem to find probabilities.
• Key Features of Naive Bayes Classifiers
• The main idea behind the Naive Bayes classifier is to use Bayes’ Theorem to classify data
based on the probabilities of different classes given the features of the data. It is used
mostly in high-dimensional text classification.
• The Naive Bayes classifier is a simple probabilistic classifier with very few
parameters, which makes it possible to build ML models that predict faster
than other classification algorithms.
• It is called “naive” because it assumes that each feature in the model is
independent of every other feature. In other words, each feature contributes to
the predictions with no relation to the others.
• The Naive Bayes algorithm is used in spam filtering, sentiment analysis, classifying articles,
and many more applications.
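To see what the independence assumption buys us, the toy sketch below scores a two-word email as spam or not spam by multiplying one probability per word, exactly as the factored form of Bayes’ theorem allows. All probabilities here are made-up numbers for demonstration, not from the slides.

```python
# Toy illustration of the "naive" independence assumption for spam filtering.
# Every probability below is an assumed value for demonstration only.
p_word_given_spam = {"free": 0.30, "offer": 0.20}
p_word_given_ham = {"free": 0.02, "offer": 0.05}
p_spam, p_ham = 0.4, 0.6  # class priors

words = ["free", "offer"]

# Independence assumption: score(C) = P(C) * product of P(word | C)
score_spam, score_ham = p_spam, p_ham
for w in words:
    score_spam *= p_word_given_spam[w]
    score_ham *= p_word_given_ham[w]

# Normalize the two scores to get posterior class probabilities
total = score_spam + score_ham
print("P(spam | words) =", round(score_spam / total, 3))  # ~0.976
```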
Advantages of Naive Bayes Classifier
1. Easy to implement and computationally efficient.
2. Effective in cases with a large number of features.
3. Performs well even with limited training data.
4. Performs well in the presence of categorical features.
5. For numerical features, the data is assumed to come from normal
distributions.
Disadvantages of Naive Bayes Classifier
1. Assumes that features are independent, which may not always hold in
real-world data.
2. Can be influenced by irrelevant attributes.
3. May assign zero probability to unseen events, leading to poor
generalization (a common fix, Laplace smoothing, is sketched below).
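As a minimal sketch of that common fix, the snippet below applies Laplace (add-one) smoothing so that a word never seen in training still gets a small non-zero probability. The word counts are hypothetical.

```python
# Hypothetical word counts observed in spam training emails
counts = {"free": 30, "offer": 20, "invoice": 0}
vocab_size = len(counts)
total = sum(counts.values())

def smoothed_prob(word: str, alpha: float = 1.0) -> float:
    # Adding alpha to every count keeps unseen words from getting probability 0
    return (counts.get(word, 0) + alpha) / (total + alpha * vocab_size)

print(smoothed_prob("invoice"))  # > 0 even though the word never appeared
```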
Applications of Naive Bayes Classifier
Spam Email Filtering: Classifies emails as spam or non-spam based on
features.
Text Classification: Used in sentiment analysis, document categorization,
and topic classification.
Medical Diagnosis: Helps in predicting the likelihood of a disease based on
symptoms.
Credit Scoring: Evaluates creditworthiness of individuals for loan approval.
Weather Prediction: Classifies weather conditions based on various factors.
Implementation of a Naive Bayes Classifier
• 1. Import Required Libraries
• 2. Load or Generate a Dataset
• 3. Train the Naïve Bayes Model
• 4. Make Predictions
• 5. Evaluate the Model
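A minimal sketch of these five steps using scikit-learn’s GaussianNB; the Iris dataset and the split parameters are illustrative choices, not prescribed by the slides.

```python
# 1. Import required libraries
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

# 2. Load a dataset (Iris is an illustrative choice)
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# 3. Train the Naive Bayes model
model = GaussianNB()
model.fit(X_train, y_train)

# 4. Make predictions
y_pred = model.predict(X_test)

# 5. Evaluate the model
print("Accuracy:", accuracy_score(y_test, y_pred))
```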
Decision Tree
• A decision tree is a simple diagram that shows different choices and their possible
results, helping you make decisions easily. This section covers what decision trees
are, how they work, their advantages and disadvantages, and their applications.
• A decision tree is a graphical representation of different options for solving a
problem and shows how different factors are related. It has a hierarchical tree
structure that starts with one main question at the top, called a node, which further
branches out into different possible outcomes, where:
• Root Node: The starting point that represents the entire dataset.
• Branches: The lines that connect nodes. They show the flow from one
decision to another.
• Internal Nodes: Points where decisions are made based on the input features.
• Leaf Nodes: The terminal nodes at the end of branches that represent final
outcomes or predictions.
They also support decision-making by visualizing outcomes. You can quickly evaluate and
compare the “branches” to determine which course of action is best for you.
Now, let’s take an example to understand the decision tree. Imagine you want to decide whether to
drink coffee based on the time of day and how tired you feel. First the tree checks the time of day: if
it’s morning, it asks whether you are tired. If you’re tired, the tree suggests drinking coffee; if not, it
says there’s no need. Similarly, in the afternoon the tree again asks if you are tired. If you are, it
recommends drinking coffee; if not, it concludes no coffee is needed.
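This example can be written directly as nested conditionals, which is exactly what a small decision tree encodes. The function below is a hand-built toy that mirrors the rules described above, not a learned model.

```python
# Hand-written version of the coffee decision tree described above
def should_drink_coffee(time_of_day: str, tired: bool) -> bool:
    # Root node: check the time of day
    if time_of_day in ("morning", "afternoon"):
        # Internal node: are you tired?
        return tired  # leaf: coffee if tired, no coffee otherwise
    return False      # leaf: no coffee at other times

print(should_drink_coffee("morning", tired=True))     # True  -> drink coffee
print(should_drink_coffee("afternoon", tired=False))  # False -> no coffee needed
```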
Classification of Decision Tree
There are mainly two types of decision tree, based on the nature
of the target variable: classification trees and regression trees.
Classification trees: These are designed to predict categorical
outcomes, meaning they classify data into different classes. They
can determine whether an email is “spam” or “not spam” based
on various features of the email.
Regression trees: These are used when the target variable is
continuous. They predict numerical values rather than categories.
For example, a regression tree can estimate the price of a house
based on its size, location, and other features.
How Decision Trees Work?
• A decision tree starts with a main question known as the root node. This
question is derived from the features of the dataset and serves as the starting point for
decision-making.
• From the root node, the tree asks a series of yes/no questions. Each question is designed
to split the data into subsets based on specific attributes. For example, if the first question
is “Is it raining?”, the answer determines which branch of the tree to follow.
Depending on the response to each question, you follow different branches: if your
answer is “Yes,” you might proceed down one path; if “No,” you take another.
• This branching continues through a sequence of decisions. As you follow each branch, you
get more questions that break the data into smaller groups. This step-by-step process
continues until there are no more helpful questions.
• You reach the end of a branch, where you find the final outcome or decision. It could be
a classification (like “spam” or “not spam”) or a prediction (such as an estimated
price); a code sketch of this process follows below.
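A minimal sketch of this question-by-question splitting with scikit-learn; the dataset and the max_depth cap are illustrative assumptions. export_text prints the learned yes/no questions from root to leaves.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# max_depth=3 is an illustrative cap that keeps the printed tree small
tree = DecisionTreeClassifier(max_depth=3, random_state=42)
tree.fit(X, y)

# Show the sequence of questions the tree learned, root to leaves
print(export_text(tree, feature_names=list(data.feature_names)))
```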
• Advantages of Decision Trees
• Simplicity and Interpretability: Decision trees are straightforward and easy to understand. You
can visualize them like a flowchart, which makes it simple to see how decisions are made.
• Versatility: They can be used for different types of tasks and work well for
both classification and regression.
• No Need for Feature Scaling: They don’t require you to normalize or scale your data.
• Handles Non-linear Relationships: They are capable of capturing non-linear relationships between
features and target variables.
• Disadvantages of Decision Trees
• Overfitting: Overfitting occurs when a decision tree captures noise and details in the training
data, causing it to perform poorly on new data.
• Instability: Instability means the model can be unreliable: slight variations in input can lead
to significant differences in predictions.
• Bias towards Features with More Levels: Decision trees can become biased towards features
with many categories, focusing too much on them during decision-making. This can cause the
model to overlook other important features, leading to less accurate predictions.
• Applications of Decision Trees
• Loan Approval in Banking: A bank needs to decide whether to approve a loan
application based on customer profiles.
• Input features include income, credit score, employment status, and loan history.
• The decision tree predicts loan approval or rejection, helping the bank make quick and
reliable decisions.
• Medical Diagnosis: A healthcare provider wants to predict whether a patient
has diabetes based on clinical test results.
• Features like glucose levels, BMI, and blood pressure are used to build a decision tree.
• The tree classifies patients as diabetic or non-diabetic, assisting doctors in diagnosis.
• Predicting Exam Results in Education: A school wants to predict whether a
student will pass or fail based on study habits.
• Data includes attendance, time spent studying, and previous grades.
• The decision tree identifies at-risk students, allowing teachers to provide additional
support.
Random Forest Algorithm
• A Random Forest is a collection of decision trees that work together to make
predictions. This section explains how the Random Forest algorithm
works and how to use it.
• Understanding the Intuition for the Random Forest Algorithm
• Random Forest is a powerful tree-based learning technique in Machine
Learning: many trees each make a prediction, and we then take a vote over all
the trees to make the final prediction. Random Forests are widely used for
classification and regression tasks.
• It is a type of classifier that uses many decision trees to make predictions.
• It takes different random parts of the dataset to train each tree and then
combines the results by averaging (or voting over) them. This approach helps
improve the accuracy of predictions. Random Forest is based on ensemble learning.
• As shown in the accompanying figure: the process starts with a dataset of rows and their
corresponding class labels (columns).
• Multiple decision trees are then created from the training data. Each
tree is trained on a random subset of the data (drawn with replacement) and a
random subset of features. This process is known as bagging, or bootstrap
aggregating (a small sampling sketch follows this list).
• Each decision tree in the ensemble learns to make predictions
independently.
• When presented with a new, unseen instance, each decision tree in the
ensemble makes a prediction.
• The final prediction is made by combining the predictions of all the
decision trees. This is typically done through a majority vote (for
classification) or averaging (for regression).
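A small sketch of the bootstrap sampling step described above: each tree’s training set is drawn from the rows with replacement, so each tree sees a different, overlapping subset (roughly 63% unique rows on average). The sizes here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
n_rows, n_trees = 100, 3

for t in range(n_trees):
    # Draw row indices with replacement -> this tree's bootstrap sample
    idx = rng.choice(n_rows, size=n_rows, replace=True)
    print(f"tree {t}: {len(set(idx))} unique rows out of {n_rows}")
```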
How Random Forest Algorithm Works?
• The Random Forest algorithm works in several steps:
• Random Forest builds multiple decision trees using random samples of the data. Each tree is trained on
a different subset of the data, which makes each tree unique.
• When creating each tree, the algorithm randomly selects a subset of features or variables to split the
data, rather than using all available features at a time. This adds diversity to the trees.
• Each decision tree in the forest makes a prediction based on the data it was trained on. When making
the final prediction, the random forest combines the results from all the trees.
• For classification tasks, the final prediction is decided by a majority vote: the category
predicted by most trees is the final prediction.
• For regression tasks, the final prediction is the average of the predictions from all the trees.
• The randomness in data samples and feature selection helps prevent the model from overfitting,
making the predictions more accurate and reliable (a code sketch follows).
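These steps correspond to scikit-learn’s RandomForestClassifier; a minimal sketch, with the dataset and n_estimators as illustrative choices:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# 100 trees, each trained on a bootstrap sample, with a random
# subset of features considered at every split (bagging)
forest = RandomForestClassifier(n_estimators=100, random_state=42)
forest.fit(X_train, y_train)

# Majority vote across the trees gives the final class
print("Accuracy:", accuracy_score(y_test, forest.predict(X_test)))
```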
• Advantages of Random Forest
• Random Forest provides very accurate predictions, even with large datasets.
• Random Forest can handle missing data well without compromising
accuracy.
• It doesn’t require normalization or standardization of the dataset.
• Combining multiple decision trees reduces the risk of overfitting the
model.
• Limitations of Random Forest
• It can be computationally expensive, especially with a large number of trees.
• It’s harder to interpret than simpler models like a single decision tree.
