Supervised & Unsupervised
Learning
~S. Amanpal
Supervised Learning
• In supervised learning, you train the machine using data that is well "labeled," meaning the data is already tagged with the correct answer. It can be compared to learning that takes place in the presence of a supervisor or a teacher. A supervised learning algorithm learns from labeled training data and helps you predict outcomes for unseen data. Types:
– Regression: predicts a single (continuous) output value from the training data.
– Classification: assigns the output to one of a set of discrete classes.
Unsupervised Learning
• Unsupervised learning is a machine learning technique in which you do not need to supervise the model. Instead, you allow the model to work on its own to discover information; it deals mainly with unlabeled data. Unsupervised learning algorithms allow you to perform more complex processing tasks than supervised learning. Types:
– Clustering: an important concept in unsupervised learning, mainly concerned with finding a structure or pattern in a collection of uncategorized data.
– Association rules: allow you to establish associations among data objects inside large databases.
Parameters: Supervised vs. Unsupervised machine learning
• Process: In a supervised learning model, both input and output variables are given; in an unsupervised learning model, only input data is given.
• Input Data: Supervised algorithms are trained on labeled data; unsupervised algorithms are run on data that is not labeled.
• Algorithms Used: Supervised – support vector machines, neural networks, linear and logistic regression, random forests, and classification trees; Unsupervised – clustering algorithms such as K-means and hierarchical clustering.
• Computational Complexity: Supervised learning is the simpler method; unsupervised learning is computationally more complex.
• Use of Data: A supervised learning model uses training data to learn a link between the inputs and the outputs; unsupervised learning does not use output data.
• Accuracy of Results: Supervised methods are generally more accurate and trustworthy; unsupervised results are less accurate and harder to validate.
• Real-Time Learning: Supervised learning typically takes place offline; unsupervised learning can take place in real time.
• Number of Classes: The number of classes is known in supervised learning and not known in unsupervised learning.
• Main Drawback: Classifying big data can be a real challenge in supervised learning; in unsupervised learning, you cannot get precise information about how the data is sorted, because the data is unlabeled and the output classes are not known in advance.
Naive Bayes Classification
• It is a probabilistic classifier that makes classifications
using the Maximum A Posteriori decision rule in a
Bayesian setting.
• Bayes rule: P(A|B) = P(B|A) P(A) / P(B)
• where A and B are events
• Basically, we are trying to find the probability of event A, given that event B is true. Event B is also termed the evidence.
• P(A) is the prior probability of A, i.e. the probability of the event before the evidence is seen. The evidence is an attribute value of an unknown instance (here, event B).
• P(A|B) is the posterior probability of A, i.e. the probability of the event after the evidence is seen.
– In the context of classification,
– you can replace A with a class, c_i, and
– B with our set of features, x_0 through x_n.
– Since P(B) serves only as a normalizing constant, we can write:
– P(c_i | x_0, …, x_n) ∝ P(x_0, …, x_n | c_i) * P(c_i)
• Now, if any two events A and B are independent, then P(A, B) = P(A)·P(B). Applying this (naive) independence assumption to the features, conditioned on the class, gives:
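– P(c_i | x_0, …, x_n) ∝ P(c_i) · P(x_0 | c_i) · P(x_1 | c_i) · … · P(x_n | c_i)
– The classifier then predicts the class c_i that maximizes this product, which is the Maximum A Posteriori decision rule mentioned above.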
Gaussian Naive Bayes classifier
In Gaussian Naive Bayes, the continuous values associated with each feature are assumed to follow a Gaussian distribution. A Gaussian distribution is also called a Normal distribution; when plotted, it gives a bell-shaped curve that is symmetric about the mean of the feature values.
The likelihood of the features is assumed to be Gaussian, hence the conditional probability is given by:
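P(x_j | c_i) = 1 / sqrt(2π · σ²_ij) · exp( −(x_j − μ_ij)² / (2 · σ²_ij) )
where μ_ij and σ²_ij are the mean and variance of feature x_j estimated from the training examples of class c_i.
A minimal sketch with scikit-learn's GaussianNB (assuming scikit-learn is available; the iris dataset is only an illustration, not part of the original slides):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)                        # labeled training data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = GaussianNB()                 # fits a per-class mean and variance for each feature
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))   # accuracy on held-out data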
K-Nearest Neighbors Algorithm
• The KNN algorithm assumes that
similar things exist in close
proximity. In other words, similar
things are near to each other. “Birds
of a feather flock together.”
The KNN Algorithm
1. Load the data
2. Initialize K to your chosen number of neighbors
3. For each example in the data
3.1 Calculate the distance between the query example and the current
example from the data.
3.2 Add the distance and the index of the example to an ordered collection
4. Sort the ordered collection of distances and indices from smallest to largest (in
ascending order) by the distances
5. Pick the first K entries from the sorted collection
6. Get the labels of the selected K entries
7. If regression, return the mean of the K labels
8. If classification, return the mode of the K labels
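A minimal NumPy sketch of these steps for classification (the toy arrays are illustrative, not from the slides; for regression, return the mean of the K labels instead of the mode):

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, query, k=3):
    # step 3: compute the distance from the query to every training example
    distances = np.linalg.norm(X_train - query, axis=1)
    # steps 4-5: sort by distance and keep the indices of the K nearest examples
    nearest = np.argsort(distances)[:k]
    # steps 6-8: return the mode of the K labels
    return Counter(y_train[nearest]).most_common(1)[0][0]

X_train = np.array([[1.0, 1.0], [1.2, 0.8], [5.0, 5.0], [5.2, 4.8]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([4.9, 5.1]), k=3))  # -> 1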
Decision Tree
• A Decision Tree has many analogies in real life and turns out, it has
influenced a wide area of Machine Learning, covering both
Classification and Regression. In decision analysis, a decision tree can be
used to visually and explicitly represent decisions and decision making.
• A decision tree is a map of the possible outcomes of a series of related
choices. It allows an individual or organization to weigh possible actions
against one another based on their costs, probabilities, and benefits.
• A decision tree typically starts with a single node, which branches into possible outcomes. Each of those outcomes leads to additional nodes, which branch off into other possibilities. This gives it a tree-like shape.
• Decision trees can be computationally expensive to train: the process of growing the tree (repeatedly searching for the best split at each node) is costly.
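A minimal scikit-learn sketch of fitting and inspecting a decision tree (the iris data is again only an illustration):

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree))   # the learned splits, printed as nested if/else rules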
Support Vector Machines (SVM)
• A support vector machine (SVM) is a supervised machine learning model that uses classification algorithms for two-group classification problems. After being given sets of labeled training data for each category, an SVM model is able to categorize new, unseen examples.
• A support vector machine takes these data points and outputs the hyperplane (which in two dimensions is simply a line) that best separates the classes. This line is the decision boundary: anything that falls to one side of it we classify as blue, and anything that falls to the other side as red.
• But what exactly is the best hyperplane? For SVM, it's the one that maximizes the margin to both classes. In other words: the hyperplane (remember, a line in this case) whose distance to the nearest element of each class is the largest.
• The points closest to the hyperplane are called support vectors, and the distance from these vectors to the hyperplane is called the margin.
• Soft Margin SVM is usually preferred over Hard Margin SVM, because:
• Hard Margin SVM is quite sensitive to outliers.
• Soft Margin SVM tolerates a few misclassified points instead of trying to fit every outlier exactly.
• In the accompanying diagram you can notice the overfitting of a hard margin SVM.
• A soft-margin SVM can choose a decision boundary that has non-zero training error even if the dataset is linearly separable, and it is less likely to overfit. Notice that decreasing the C value causes the classifier to give up linear separability in order to gain stability.
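A minimal sketch of the role of C with scikit-learn's SVC (the synthetic data is illustrative; smaller C means a softer margin):

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

for C in (100.0, 0.01):                        # hard-ish margin vs. soft margin
    clf = SVC(kernel="linear", C=C).fit(X, y)
    print(C, clf.n_support_)                   # more support vectors as the margin softens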
Support Vector Machines kernel
• The SVM model is a supervised machine learning model that is mainly used for classification (but it can also be used for regression!). It learns how to separate different groups by forming decision boundaries.
• It sounds simple. However, not all data are linearly separable. In fact, in the real world, almost all data are distributed in ways that make it hard to separate the different classes linearly.
The kernel trick
• If we find a way to map the data from 2-dimensional space to 3-dimensional space, we will be able to find a decision surface that clearly divides the different classes. A first thought for this data transformation is to map every data point to a higher dimension (in this case, 3 dimensions), find the boundary there, and make the classification.
• That sounds reasonable. However, as the number of dimensions grows, computations within that space become more and more expensive. This is where the kernel trick comes in.
• It allows us to operate in the original feature space without ever computing the coordinates of the data in the higher-dimensional space.
• Let's look at an example:
• Let’s look at an example:
Here x and y are two data points in 3 dimensions. Let's assume that we need to map x and y to a 9-dimensional space. Computing the two 9-dimensional feature vectors explicitly and taking their dot product gives the final result, which is just a scalar; the computational complexity in this case is O(n²).
However, if we use the kernel function, denoted k(x, y), instead of doing the complicated computations in the 9-dimensional space, we reach the same result within the 3-dimensional space by computing a simple function of the dot product of x-transpose and y (here, its square). The computational complexity in this case is O(n).
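A small NumPy check of this identity, assuming the degree-2 polynomial kernel k(x, y) = (xᵀy)² and the explicit 9-dimensional feature map φ(x) consisting of all pairwise products x_i·x_j:

import numpy as np

def phi(v):
    # explicit map to 9 dimensions: all pairwise products v_i * v_j
    return np.outer(v, v).ravel()

def kernel(x, y):
    # same value computed directly in 3 dimensions
    return np.dot(x, y) ** 2

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
print(np.dot(phi(x), phi(y)))   # 1024.0
print(kernel(x, y))             # 1024.0 – identical, without the 9-D computation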
The kernel trick sounds like a “perfect” plan. However, one
critical thing to keep in mind is that when we map data to a
higher dimension, there are chances that we may overfit the
model. Thus choosing the right kernel function (including the
right parameters) and regularization are of great importance.
Performance Metrics for Classification
Confusion Matrix: The confusion matrix is a table with two dimensions ("Actual" and "Predicted") and a set of classes along each dimension. Here the Actual classes are the columns and the Predicted classes are the rows.
The confusion matrix is not a performance measure in itself, but almost all of the performance metrics are based on the confusion matrix and the numbers inside it.
Terms associated with Confusion matrix
Before diving into what the confusion matrix is all about and what it conveys, let's say we are solving a classification problem where we are predicting whether a person has cancer or not.
Let's give labels to our target variable:
1: when a person has cancer; 0: when a person does NOT have cancer.
• 1. True Positives (TP): the cases where the actual class of the data point was 1 (True) and the predicted class is also 1 (True).
• Ex: a person actually has cancer (1) and the model classifies the case as cancer (1).
• 2. True Negatives (TN): the cases where the actual class of the data point was 0 (False) and the predicted class is also 0 (False).
• Ex: a person does NOT have cancer and the model classifies the case as not cancer.
• 3. False Positives (FP): the cases where the actual class of the data point was 0 (False) and the predicted class is 1 (True). False because the model predicted incorrectly, and positive because the class predicted was the positive one (1).
• Ex: a person does NOT have cancer but the model classifies the case as cancer.
• 4. False Negatives (FN): the cases where the actual class of the data point was 1 (True) and the predicted class is 0 (False). False because the model predicted incorrectly, and negative because the class predicted was the negative one (0).
• Ex: a person has cancer but the model classifies the case as no-cancer.
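A minimal sketch of these counts on toy label arrays (illustrative only):

import numpy as np

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # 1 = cancer, 0 = no cancer
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])

TP = np.sum((y_true == 1) & (y_pred == 1))    # 3
TN = np.sum((y_true == 0) & (y_pred == 0))    # 3
FP = np.sum((y_true == 0) & (y_pred == 1))    # 1
FN = np.sum((y_true == 1) & (y_pred == 0))    # 1
print(TP, TN, FP, FN)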
• The ideal scenario that we all want is for the model to give 0 False Positives and 0 False Negatives. But that's not the case in real life, as in practice no model is 100% accurate.
Accuracy
• Accuracy in classification problems is the number of correct predictions made by the model divided by the total number of predictions made.
• Accuracy is a good measure when the target classes in the data are nearly balanced.
• Accuracy should NEVER be used as the sole measure when the target classes in the data are dominated by a single class (highly imbalanced data).
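In symbols: Accuracy = (TP + TN) / (TP + TN + FP + FN).
For example, if 95 out of 100 patients do not have cancer, a model that always predicts "no cancer" scores 95% accuracy while missing every cancer case.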
Precision
Precision is a measure that tells us what proportion of the patients we diagnosed as having cancer actually had cancer. The predicted positives (people predicted as cancerous) are TP and FP, and the people among them who actually have cancer are TP.
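In symbols: Precision = TP / (TP + FP).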
Recall or Sensitivity
Recall is a measure that tells us what proportion of the patients who actually had cancer were diagnosed by the algorithm as having cancer. The actual positives (people having cancer) are TP and FN, and the people correctly diagnosed by the model as having cancer are TP. (Note: FN is included because the person actually had cancer even though the model predicted otherwise.)
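In symbols: Recall = TP / (TP + FN).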
When to use Precision and When to use Recall?
• It is clear that recall gives us information about a classifier's performance with respect to false negatives (how many did we miss), while precision gives us information about its performance with respect to false positives (how many did we wrongly flag).
• Precision is about being precise. So even if we managed to capture only one cancer case, and we captured it correctly, then we are 100% precise.
• Recall is not so much about capturing cases correctly, but more about capturing all cases that are actually "cancer" with the answer "cancer". So if we simply label every case as "cancer", we have 100% recall.
• So basically, if we want to focus more on minimizing False Negatives, we would want our Recall to be as close to 100% as possible without precision being too bad; and if we want to focus on minimizing False Positives, then our focus should be on making Precision as close to 100% as possible.
Specificity
• Specificity is a measure that tells us what proportion of the patients who did NOT have cancer were predicted by the model as non-cancerous. The actual negatives (people who do NOT have cancer) are FP and TN, and the people correctly identified by the model as not having cancer are TN. (Note: FP is included because the person did NOT actually have cancer even though the model predicted otherwise.)
• Specificity is the mirror image of Recall: it is the same calculation applied to the negative class.
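In symbols: Specificity = TN / (TN + FP).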
F1 Score
We don’t really want to carry both Precision and Recall in our
pockets every time we make a model for solving a classification
problem. So it’s best if we can get a single score that kind of
represents both Precision(P) and Recall(R).
One way to do that is simply taking their arithmetic mean. i.e (P
+ R) / 2 where P is Precision and R is Recall. But that’s pretty bad
in some situations.
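The F1 score instead uses the harmonic mean: F1 = 2 · P · R / (P + R). The harmonic mean punishes imbalance: if a model labels every case as "cancer" and gets P = 0.02 with R = 1.0, the arithmetic mean is 0.51 while F1 ≈ 0.04, which better reflects how poor the classifier really is.
A minimal sketch computing the metrics above from the confusion-matrix counts (toy values, continuing the earlier example):

TP, TN, FP, FN = 3, 3, 1, 1

accuracy    = (TP + TN) / (TP + TN + FP + FN)
precision   = TP / (TP + FP)
recall      = TP / (TP + FN)
specificity = TN / (TN + FP)
f1          = 2 * precision * recall / (precision + recall)
print(accuracy, precision, recall, specificity, f1)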