AIAS
M. DS., Semester II
Machine Learning
Machine Learning
Machine learning is a subset of artificial intelligence (AI) that involves
training algorithms to learn from data and make predictions or decisions
without being explicitly programmed.
Types of Machine Learning
1. Supervised Learning: The algorithm is trained on labeled data to learn the
relationship between input and output variables.
2. Unsupervised Learning: The algorithm is trained on unlabeled data to
discover patterns or relationships.
3. Reinforcement Learning: The algorithm learns through trial and error by
interacting with an environment and receiving feedback. (A short sketch after
this list contrasts the first two settings in code.)
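A minimal sketch of the first two settings, assuming scikit-learn is available; the iris dataset and the particular models (logistic regression for the supervised case, k-means for the unsupervised case) are illustrative choices, not part of the original slides.

# Supervised vs. unsupervised learning (illustrative sketch, assumes scikit-learn)
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)

# Supervised: the labels y guide the fit; the model learns an input -> output mapping.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised predictions:", clf.predict(X[:5]))

# Unsupervised: only X is used; the algorithm looks for structure (here, 3 clusters).
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster assignments:", km.labels_[:5])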
Machine Learning Algorithms
1. Linear Regression: A linear model that predicts a continuous output variable.
2. Decision Trees: A tree-based model that splits data into subsets based on features.
3. Random Forest: An ensemble learning method that combines multiple decision trees.
4. Support Vector Machines (SVMs): A linear or non-linear model that finds the
hyperplane that maximally separates classes.
5. Neural Networks: A network of interconnected nodes (neurons) that learns to
represent data. (The sketch after this list fits the first of these, linear
regression, end to end.)
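Of the algorithms listed above, linear regression is the simplest to show end to end. A minimal sketch, assuming scikit-learn and NumPy; the synthetic data (a noisy line with slope 3 and intercept 2) is made up purely for illustration.

# Linear regression on synthetic data (illustrative sketch, assumes scikit-learn)
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))              # one input feature
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 1, 100)    # continuous target with noise

model = LinearRegression().fit(X, y)
print("estimated slope:", model.coef_[0], "estimated intercept:", model.intercept_)
print("prediction at x = 5:", model.predict([[5.0]])[0])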
Classification
Classification is the process of assigning a label or category to a new
instance based on its characteristics. Example: spam vs. non-spam emails
(worked as a short sketch after the list below).
Types of Classification
• Binary Classification: two classes (e.g., 0/1, yes/no)
• Multi-Class Classification: more than two classes (e.g., animal species)
• Image/Text Classification: classification of images or text documents
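The spam example above can be worked as a tiny binary text-classification sketch, assuming scikit-learn; the four messages and their labels are invented for illustration, and Naive Bayes is just one of several classifiers that could be used here.

# Binary (spam vs. non-spam) text classification (illustrative sketch, assumes scikit-learn)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = ["win a free prize now", "meeting at 10am tomorrow",
            "free money, click here", "lunch with the project team"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = non-spam

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(messages, labels)
print(clf.predict(["claim your free prize"]))  # expected output: [1] (spam)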
Classification Techniques
• Decision Trees
• Random Forest
• Support Vector Machines (SVM)
• k-Nearest Neighbors (k-NN)
• Naive Bayes
• Neural Networks
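A minimal sketch, assuming scikit-learn, that compares several of the techniques listed above using five-fold cross-validation on a built-in dataset; the dataset and the mostly default parameters are illustrative choices.

# Cross-validated accuracy of several classifiers (illustrative sketch, assumes scikit-learn)
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
models = {
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name}: mean accuracy = {scores.mean():.3f}")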
Decision Tree
A decision tree is a tree-like model that splits data into subsets based on
features. It's a popular supervised learning algorithm used for classification
and regression tasks.
How does a Decision Tree work?
1. Root Node: The decision tree starts with a root node, which represents
the entire dataset.
2. Splitting: The algorithm splits the data into subsets based on the best
feature to split on. This is typically done using a greedy approach, where
the algorithm chooses the feature that results in the purest subsets.
3. Child Nodes: The subsets of data are represented by child nodes,
which are created by splitting the data.
4. Recursion: The algorithm recursively splits the data into subsets until
a stopping criterion is met, such as when all instances in a node belong
to the same class.
5. Leaf Nodes: The final nodes in the tree are called leaf nodes, which
represent the predicted class labels. (The sketch after this list prints
such a tree for a real dataset.)
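A minimal sketch, assuming scikit-learn, that fits a small decision tree and prints its structure so the root node, splits, child nodes, and leaf nodes described above can be seen directly; the iris dataset and max_depth=2 are illustrative choices.

# Fit a small decision tree and print its structure (illustrative sketch, assumes scikit-learn)
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# max_depth=2 keeps the tree small; criterion="gini" is the impurity measure used for splits.
tree = DecisionTreeClassifier(max_depth=2, criterion="gini", random_state=0).fit(X, y)

# The printout shows the root split, the child nodes, and the leaf nodes with predicted classes.
print(export_text(tree, feature_names=data.feature_names))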
Advantages of Decision Trees
• 1. Easy to Interpret: Decision trees are easy to understand and interpret,
making them a popular choice for many applications.
• 2. Handle Missing Values: Decision trees can handle missing values by
treating them as a separate category.
• 3. Non-Parametric: Decision trees are non-parametric, meaning they
don't require any assumptions about the distribution of the data.
Disadvantages of Decision Trees
• 1. Overfitting: Decision trees can suffer from overfitting, especially when
the trees are deep.
• 2. Not Robust to Noise: Decision trees can be sensitive to noise in the
data.
• 3. Not Suited to Complex Relationships: a single tree, built from simple
one-feature splits, struggles to model smooth or highly interactive relationships
between features; ensemble methods such as random forests are often preferred in
such cases. (The overfitting point from item 1 is illustrated in the sketch below.)
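The overfitting point can be made concrete by comparing a fully grown tree with a depth-limited one. A minimal sketch, assuming scikit-learn; the dataset, the 70/30 split, and max_depth=3 are illustrative choices.

# Deep trees tend to overfit; limiting depth usually narrows the train/test gap (illustrative sketch)
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for depth in (None, 3):  # None = grow until leaves are pure; 3 = depth-limited tree
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    print(f"max_depth={depth}: train accuracy = {tree.score(X_tr, y_tr):.3f}, "
          f"test accuracy = {tree.score(X_te, y_te):.3f}")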
Support Vector Machine (SVM)
• A Support Vector Machine (SVM) is a supervised learning algorithm
used for classification and regression tasks. It's a powerful and versatile
algorithm that can handle high-dimensional data and non-linear
relationships.
How does an SVM work?
• 1. Linear Separability: SVMs work by finding a hyperplane that separates
the data into different classes. For linearly separable data this is a flat
decision boundary (a line in two dimensions, a plane in three) placed between
the classes.
• 2. Maximizing the Margin: The goal of the SVM is to maximize the
margin between the hyperplane and the nearest data points (called
support vectors). This is done by minimizing the norm of the weight
vector.
• 3. Soft Margin: In cases where the data is not linearly separable, SVMs
use a soft margin, which allows some misclassifications at a cost controlled
by a regularization parameter (often written C).
• 4. Kernel Trick: SVMs can also handle non-linear relationships by using
the kernel trick, which implicitly maps the data into a higher-dimensional
space where it becomes linearly separable. (The sketch after this list
contrasts a linear and a kernel SVM.)
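A minimal sketch, assuming scikit-learn, that contrasts a linear SVM with an RBF-kernel SVM on data that is not linearly separable (two concentric circles); the dataset, C=1.0, and the RBF kernel are illustrative choices, with C acting as the soft-margin penalty.

# Linear vs. kernel SVM on non-linearly separable data (illustrative sketch, assumes scikit-learn)
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_circles(n_samples=300, factor=0.4, noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear_svm = SVC(kernel="linear", C=1.0).fit(X_tr, y_tr)  # hyperplane in the original space
kernel_svm = SVC(kernel="rbf", C=1.0).fit(X_tr, y_tr)     # kernel trick: implicit higher-dimensional mapping

print("linear kernel test accuracy:", linear_svm.score(X_te, y_te))  # near chance for circles
print("RBF kernel test accuracy:", kernel_svm.score(X_te, y_te))     # close to 1.0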
Types of SVMs
1. Linear SVM: Used for linearly separable data.
2. Non-Linear SVM: Used for non-linearly separable data, using the kernel trick.
3. Support Vector Regression (SVR): Used for regression tasks. (A short SVR
sketch follows.)
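Support Vector Regression from the list above can be sketched the same way. A minimal example, assuming scikit-learn and NumPy; the noisy sine data and the values of C and epsilon are illustrative.

# Support Vector Regression (SVR) on a noisy sine curve (illustrative sketch, assumes scikit-learn)
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, size=(80, 1)), axis=0)
y = np.sin(X).ravel() + rng.normal(0, 0.1, 80)

# epsilon defines a tube around the fit inside which errors are not penalized.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
print("prediction at x = 1.5:", svr.predict([[1.5]])[0], "(true sin(1.5) ~", round(np.sin(1.5), 3), ")")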
Advantages of SVMs
• 1. High Accuracy: SVMs can achieve high accuracy, especially in cases
where the data is non-linearly separable.
• 2. Robustness to Noise: SVMs are robust to noise and outliers in the
data.
• 3. Flexibility: SVMs can handle high-dimensional data and non-linear
relationships.
Disadvantages of SVMs
• 1. Computational Complexity: SVMs can be computationally expensive,
especially for large datasets.
• 2. Overfitting: SVMs can suffer from overfitting, especially when the
number of features is large.
• 3. Difficult to Interpret: SVMs can be difficult to interpret, especially when
using the kernel trick.