CONFUSION MATRIX
MEANING
• A tool for evaluating the performance of a classification model on a
given set of test data
• Provides a detailed breakdown of how well the model classifies
instances into the different classes
• For a classifier with 2 prediction classes, the matrix is a 2×2 table
• For 3 classes, it is a 3×3 table
• The matrix has two dimensions: predicted values and actual values
• Predicted values are the outputs produced by the model; actual
values are the true labels of the given observations
Matrix
• True Negative [TN]: The model predicted No, and the actual value
was also No.
• True Positive [TP]: The model predicted Yes, and the actual value
was also Yes.
• False Negative [FN]: The model predicted No, but the actual value
was Yes. It is also called a Type-II error.
• False Positive [FP]: The model predicted Yes, but the actual value
was No. It is also called a Type-I error.
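The four cells above can be counted directly from paired lists of actual and predicted labels. A minimal sketch in plain Python (the `confusion_counts` helper and the sample labels are illustrative, not from the slides):

```python
def confusion_counts(actual, predicted, positive="Yes"):
    """Count TP, TN, FP, FN for a binary classifier."""
    tp = tn = fp = fn = 0
    for a, p in zip(actual, predicted):
        if p == positive and a == positive:
            tp += 1  # predicted Yes, actually Yes
        elif p != positive and a != positive:
            tn += 1  # predicted No, actually No
        elif p == positive and a != positive:
            fp += 1  # predicted Yes, actually No (Type-I error)
        else:
            fn += 1  # predicted No, actually Yes (Type-II error)
    return tp, tn, fp, fn

actual    = ["Yes", "No", "Yes", "No", "Yes", "No"]
predicted = ["Yes", "No", "No",  "No", "Yes", "Yes"]
print(confusion_counts(actual, predicted))  # (2, 2, 1, 1)
```

Libraries such as scikit-learn provide the same counts via `sklearn.metrics.confusion_matrix`.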
Need for Confusion Matrix
• Evaluates the performance of a classification model on test data and
shows how good the model's predictions are.
• Shows not only the errors made by the classifier but also their type,
i.e., whether each is a Type-I or a Type-II error.
• Makes it possible to compute evaluation metrics for the classification
model, such as accuracy, precision, etc.
Sensitivity and Specificity
• Sensitivity (also called the True Positive Rate, TPR) tells us what
proportion of the positive class was correctly classified:
TPR = TP / (TP + FN).
• A simple example would be determining what proportion of the
actually sick people were correctly detected by the model.
• False Negative Rate (FNR) tells us what proportion of the positive
class was incorrectly classified: FNR = FN / (TP + FN).
• A higher TPR and lower FNR are desirable, since we want to classify
the positive class correctly.
• Specificity tells us what proportion of the negative class was correctly
classified: Specificity = TN / (TN + FP).
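These rates follow directly from the confusion-matrix cells. A short sketch (the function names and example counts are illustrative):

```python
def sensitivity(tp, fn):
    """TPR: proportion of actual positives correctly classified."""
    return tp / (tp + fn)

def false_negative_rate(tp, fn):
    """FNR: proportion of actual positives incorrectly classified."""
    return fn / (tp + fn)

def specificity(tn, fp):
    """TNR: proportion of actual negatives correctly classified."""
    return tn / (tn + fp)

# E.g. 80 of 100 sick people detected, 90 of 100 healthy people correctly cleared:
print(sensitivity(80, 20))          # 0.8
print(false_negative_rate(80, 20))  # 0.2
print(specificity(90, 10))          # 0.9
```

Note that FNR = 1 − TPR, so the two always trade off against each other over the same set of actual positives.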
Calculation using Confusion Matrix
• Classification Accuracy: How often the model predicts the correct
output: Accuracy = (TP + TN) / (TP + TN + FP + FN).
• Misclassification Rate: Also termed the Error Rate; how often the
model makes wrong predictions:
Error Rate = (FP + FN) / (TP + TN + FP + FN).
• Precision: Of all instances the model predicted as positive, the
proportion that are actually positive: Precision = TP / (TP + FP).
• Recall: Of all actual positive instances, the proportion the model
predicted correctly: Recall = TP / (TP + FN).
• F-measure: The harmonic mean of precision and recall, letting us
evaluate both at the same time:
F1 = 2 × (Precision × Recall) / (Precision + Recall).
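All five metrics can be computed from the four confusion-matrix cells in one pass. A minimal sketch (the `classification_metrics` helper and the sample counts are illustrative):

```python
def classification_metrics(tp, tn, fp, fn):
    """Compute standard metrics from the four confusion-matrix cells."""
    total = tp + tn + fp + fn
    accuracy = (tp + tn) / total
    error_rate = (fp + fn) / total                       # = 1 - accuracy
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                              # = sensitivity / TPR
    f1 = 2 * precision * recall / (precision + recall)   # harmonic mean
    return {"accuracy": accuracy, "error_rate": error_rate,
            "precision": precision, "recall": recall, "f1": f1}

m = classification_metrics(tp=40, tn=45, fp=5, fn=10)
print(m["accuracy"])   # 0.85
print(m["recall"])     # 0.8
```

Accuracy alone can be misleading on imbalanced classes (a model predicting "No" for everything scores high if positives are rare), which is why precision, recall, and F1 are reported alongside it.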
