Credibility: Evaluating What's Been Learned
Training and Testing
We measure the success of a classification procedure by its error rate (or the equivalent success rate).
Measuring the success rate on the training set is highly optimistic.
The error rate on the training set is called the resubstitution error.
We therefore use a separate test set to estimate the error rate.
The test set should be independent of the training set.
Sometimes, to tune the classification technique, we also use a validation set.
When we hold out part of the data for testing (so it is not used for training), the process is called the holdout procedure.
Predicting performance
Expected success rate = 100 − error rate (when the error rate is also expressed as a percentage).
We want the true success rate.
Calculating the true success rate:
Suppose the observed success rate is f = s/n, where s is the number of successes out of a total of n test instances.
For large n, f follows a normal distribution.
We then predict the true success rate p for whatever confidence level we want.
For example, if f = 75%, then p lies in [73.2%, 76.7%] with 80% confidence (for n = 1000 instances).
Predicting performance
From statistics we know that the mean of f is p and its variance is p(1 − p)/n.
To use the standard normal distribution we transform f to have mean 0 and standard deviation 1.
Suppose our confidence level is c% and we want to calculate p.
We use the two-tailed property of the normal distribution.
Since the total area under the normal curve is taken as 100%, the area we leave out is 100 − c.
Predicting performance
Finally, after all the manipulations, the bounds on the true success rate are:
p = [ f + z²/(2N) ± z·sqrt( f/N − f²/N + z²/(4N²) ) ] / ( 1 + z²/N )
Here:
p -> true success rate
f -> observed success rate
N -> number of instances
z -> factor derived from a normal distribution table using the 100 − c measure
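A minimal sketch of this calculation in Python (the helper name is illustrative; the choice of n = 1000 instances is an assumption that reproduces the slide's example, and z = 1.28 is the two-tailed factor for roughly 80% confidence):

```python
import math

def success_rate_interval(f, n, z=1.28):
    """Confidence interval for the true success rate p, given an observed
    success rate f over n test instances, using the formula above."""
    center = f + z * z / (2 * n)
    spread = z * math.sqrt(f / n - f * f / n + z * z / (4 * n * n))
    denom = 1 + z * z / n
    return (center - spread) / denom, (center + spread) / denom

# With f = 0.75 and n = 1000 this gives roughly (0.732, 0.767),
# matching the [73.2%, 76.7%] interval quoted on the slide.
print(success_rate_interval(0.75, 1000))
```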
Cross validation
We use cross-validation when the amount of data is small and we still need independent training and test sets from it.
It is important that each class is represented in its actual proportions in both the training and the test set: this is called stratification.
An important technique is stratified 10-fold cross-validation, where the instance set is divided into 10 folds.
We run 10 iterations, each time using a different single fold for testing and the remaining 9 folds for training, and average the error over the 10 iterations (a minimal sketch follows below).
Problem: computationally intensive.
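A small sketch of stratified 10-fold cross-validation, assuming Python with scikit-learn; the Naive Bayes classifier is just a placeholder for whatever learning scheme is being evaluated:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.naive_bayes import GaussianNB

def cv_error(X, y, n_folds=10):
    """Average error rate over stratified folds; X, y are numpy arrays."""
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=0)
    errors = []
    for train_idx, test_idx in skf.split(X, y):
        model = GaussianNB().fit(X[train_idx], y[train_idx])
        errors.append(np.mean(model.predict(X[test_idx]) != y[test_idx]))
    return np.mean(errors)  # average error over the 10 iterations
```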
Other estimates
Leave-one-out: steps
One instance is left out for testing and the rest are used for training.
This is repeated for every instance and the errors are averaged.
Leave-one-out: advantage
We use the largest possible training sets.
Leave-one-out: disadvantages
Computationally intensive.
Cannot be stratified.
Other estimates
0.632 bootstrap
A dataset of n instances is sampled n times, with replacement, to give another dataset of n instances.
There will be some repeated instances in this second set; the instances that were never picked are used for testing.
Here the error is defined as:
e = 0.632 × (error on test instances) + 0.368 × (error on training instances)
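A rough sketch of one round of the 0.632 bootstrap (a full estimate would average several rounds); the estimator and data handling are placeholders:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def bootstrap_632_error(X, y, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    boot = rng.integers(0, n, size=n)        # sample n times with replacement
    oob = np.setdiff1d(np.arange(n), boot)   # never-picked instances form the test set
    model = GaussianNB().fit(X[boot], y[boot])
    test_err = np.mean(model.predict(X[oob]) != y[oob])
    train_err = np.mean(model.predict(X[boot]) != y[boot])
    return 0.632 * test_err + 0.368 * train_err
```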
Comparing data mining methods
So far we have dealt with performance prediction.
Now we look at methods to compare algorithms, to see which one did better.
We can't directly use the error rate to decide which algorithm is better, because the error rates might have been calculated on different data sets.
To compare algorithms we need statistical tests.
We use Student's t-test for this. The test helps us figure out whether the mean errors of two algorithms differ, for a given confidence level.
Comparing data mining methods
We will use the paired t-test, which is a slight modification of Student's t-test.
Paired t-test: suppose we have unlimited data, then do the following:
Take k data sets from the unlimited data we have.
Use cross-validation with each technique to get the respective outcomes: x1, x2, x3, ..., xk and y1, y2, y3, ..., yk.
Let mx be the mean of the x values, and my likewise.
Let di = xi − yi.
The t-statistic is t = md / sqrt(σd²/k), where md is the mean of the differences di and σd² is their variance (a sketch follows below).
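A small sketch of the paired t-test on per-dataset error estimates, assuming Python with NumPy and SciPy; the two arrays of errors are invented illustrative numbers:

```python
import numpy as np
from scipy import stats

x = np.array([0.12, 0.15, 0.11, 0.14, 0.13])  # errors of scheme A on k datasets
y = np.array([0.14, 0.16, 0.12, 0.17, 0.15])  # errors of scheme B on the same datasets

d = x - y
t = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))  # t-statistic with k-1 degrees of freedom
t_scipy, p_value = stats.ttest_rel(x, y)          # the same statistic computed by SciPy
```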
Comparing data mining methods
Based on the value of k we get the degrees of freedom (k − 1), which lets us look up a value z for a particular confidence level.
If t <= −z or t >= z, then the two means differ significantly.
The hypothesis that the means do not differ is called the null hypothesis.
Predicting Probabilities
So far we have considered schemes that, when applied, produce either a correct or an incorrect prediction. This is the 0–1 loss function.
Now we deal with measuring success for algorithms that output a probability distribution, e.g. Naïve Bayes.
Predicting Probabilities
Quadratic loss function:
For a single instance there are k outcomes, or classes.
Probability vector: p1, p2, ..., pk.
The actual outcome vector is a1, a2, a3, ..., ak, where the component for the actual outcome is 1 and the rest are 0.
We have to minimize the quadratic loss function, given by the sum over j of (pj − aj)².
The minimum is achieved when the probability vector is the true probability vector.
Predicting Probabilities
Informational loss function:
Given by −log(pi), where pi is the probability predicted for the class that actually occurred.
The minimum is again reached at the true probabilities.
Differences between quadratic loss and informational loss (a small numeric comparison follows below):
Quadratic loss takes all class probabilities into consideration, while informational loss is based only on the probability of the actual class.
Quadratic loss is bounded, with a maximum value of 2, while informational loss is unbounded and can output values up to infinity.
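A small comparison of the two loss functions for a single instance; the probability vector and true class are invented example values:

```python
import numpy as np

p = np.array([0.7, 0.2, 0.1])  # predicted class probabilities
a = np.array([1.0, 0.0, 0.0])  # actual outcome vector: class 0 occurred

quadratic_loss = np.sum((p - a) ** 2)  # uses the whole vector, bounded by 2
informational_loss = -np.log2(p[0])    # uses only the true class's probability, unbounded
```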
Counting the cost
Different outcomes may have different costs.
For example, in a loan decision the cost of lending to a defaulter is far greater than the lost-business cost of refusing a loan to a non-defaulter.
Suppose we have a two-class prediction. The possible outcomes are: true positive, false negative, false positive, and true negative.
Counting the cost
True positive rate: TP/(TP + FN)
False positive rate: FP/(FP + TN)
Overall success rate: number of correct classifications / total number of classifications
Error rate = 1 − success rate (a small sketch of these rates follows below)
In the multiclass case we have a confusion matrix (an actual one and a random one, as pictured on the slide):
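A minimal sketch of the two-class rates; tp, fp, fn, tn are counts taken from a confusion matrix of predicted versus actual labels:

```python
def rates(tp, fp, fn, tn):
    tp_rate = tp / (tp + fn)                   # true positive rate
    fp_rate = fp / (fp + tn)                   # false positive rate
    success = (tp + tn) / (tp + fp + fn + tn)  # overall success rate
    return tp_rate, fp_rate, 1 - success       # last value is the error rate
```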
Counting the cost
These are the actual and the random outcomes of a three-class problem.
The diagonal represents the successful cases.
Kappa statistic = (D-observed − D-chance) / (D-perfect − D-chance), where D-observed is the diagonal of the actual predictor's matrix, D-chance the diagonal of the random (chance) predictor's matrix, and D-perfect the total number of instances.
Here the kappa statistic = (140 − 82)/(200 − 82) = 49.2% (see the sketch below).
Kappa measures the agreement between the predicted and observed categorizations of a dataset, while correcting for agreement that occurs by chance.
It does not take cost into account.
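A sketch of the kappa calculation from a confusion matrix. The 3×3 matrix below is an assumption chosen to be consistent with the numbers quoted on the slide (diagonal 140, chance diagonal 82, 200 instances), since the slide's picture is not reproduced here:

```python
import numpy as np

def kappa(cm):
    total = cm.sum()
    observed = np.trace(cm)                                      # D-observed
    chance = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total     # D-chance from the marginals
    return (observed - chance) / (total - chance)                # D-perfect = total

cm = np.array([[88, 10,  2],
               [14, 40,  6],
               [18, 10, 12]])
print(kappa(cm))  # about 0.492, i.e. the 49.2% on the slide
```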
Classification with costs
Example cost matrices (the default ones just count the number of errors):
Performance is measured by the average cost per prediction.
We try to minimize the costs.
Expected cost: the dot product of the vector of class probabilities and the appropriate column of the cost matrix.
Classification with costs
Steps to take cost into consideration while testing (see the sketch below):
First use a learning method to get the probability vector (e.g. Naïve Bayes).
Now multiply the probability vector by each column of the cost matrix, one by one, to get the expected cost for each class/column.
Select the class with the minimum cost (or the maximum, if the matrix encodes benefits rather than costs).
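A sketch of choosing the least-cost class from a probability vector and a cost matrix; both arrays are invented illustrative values:

```python
import numpy as np

probs = np.array([0.2, 0.5, 0.3])   # class probabilities, e.g. from Naive Bayes
cost = np.array([[0, 1, 2],         # cost[i, j] = cost of predicting class j
                 [5, 0, 1],         #              when the true class is i
                 [4, 3, 0]])

expected_cost = probs @ cost        # dot product with each column of the cost matrix
prediction = int(np.argmin(expected_cost))  # pick the class with minimum expected cost
```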
Cost sensitive learning
So far we have included the cost factor only during evaluation.
Now we incorporate costs into the learning phase of a method.
We can change the proportion of instances in the training set so as to take costs into account.
For example, we can replicate the instances of a particular class so that our learning method gives us a model with fewer errors on that class.
Lift Charts
In practice, costs are rarely known.
In marketing terminology the response rate is referred to as the lift factor.
We compare probable scenarios to make decisions; a lift chart allows visual comparison.
Example: a promotional mail-out to 1,000,000 households.
Mailing to all: 0.1% respond (1,000).
Some data mining tool identifies a subset of 100,000 households of which 0.4% respond (400).
This is a lift of 4.
Lift Charts
Steps to calculate the lift factor (sketched below):
Decide a sample size.
Arrange the data in decreasing order of the predicted probability of the class on which the lift factor is based (the positive class).
Calculate:
Sample success proportion = number of positive instances in the sample / sample size
Lift factor = sample success proportion / data success proportion
Calculating the lift factor for different sample sizes gives the points of a lift chart.
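A sketch of the lift-factor calculation; `scores` are the predicted probabilities of the positive class and `labels` the true 0/1 outcomes (both invented placeholders):

```python
import numpy as np

def lift(scores, labels, sample_size):
    order = np.argsort(scores)[::-1]      # sort by decreasing predicted probability
    top = labels[order][:sample_size]
    sample_rate = top.mean()              # success proportion within the sample
    overall_rate = labels.mean()          # success proportion in the whole data set
    return sample_rate / overall_rate

# Repeating this for a range of sample sizes gives the points of a lift chart.
```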
Lift Charts
A hypothetical lift chart (figure).
Lift Charts
In a lift chart we would like to stay towards the upper left corner.
The diagonal line is the curve for random samples drawn without using the sorted data.
Any good selection scheme keeps the lift curve above the diagonal.
ROC Curves
ROC stands for receiver operating characteristic.
Difference from lift charts:
The Y axis shows the percentage of true positives.
The X axis shows the percentage of false positives in the samples.
A ROC curve is jagged; it can be smoothed out by cross-validation (a small sketch follows below).
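A sketch of computing a ROC curve from probability estimates, assuming Python with scikit-learn; the labels and scores below are invented examples:

```python
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])                      # actual classes
y_score = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.35, 0.1])   # P(yes) per instance

fpr, tpr, thresholds = roc_curve(y_true, y_score)
# Plotting tpr against fpr gives the (jagged) ROC curve; averaging the
# curves from several cross-validation folds smooths it out.
```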
ROC Curves
A ROC curve (figure).
ROC Curves
Ways to generate ROC curves (consider the previous diagram for reference).
First way:
Get the probability estimates over the different folds of the data.
Sort the data in decreasing order of the probability of the yes class.
Select a point on the X axis, and for that number of no instances, get the number of yes instances from each fold's probability estimates.
Average the number of yes instances over all the folds and plot it.
ROC Curves
Second way:
Get the probability estimates over the different folds of the data.
Sort the data in decreasing order of the probability of the yes class.
For each fold, select points on the X axis and get the corresponding number of yes instances, plotting a ROC curve for that fold individually.
Average all the ROC curves.
ROC Curves
ROC curves for two schemes (figure).
ROC Curves
In the previous ROC curves:
For a small, focused sample, use method A.
For a large one, use method B.
In between, choose between A and B with appropriate probabilities.
Recall – precision curves
For a search query:
Recall = number of documents retrieved that are relevant / total number of documents that are relevant
Precision = number of documents retrieved that are relevant / total number of documents that are retrieved
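A small sketch of recall and precision for one retrieval result; the sets of document ids are invented:

```python
retrieved = {1, 2, 3, 5, 8}          # documents returned by the query
relevant = {2, 3, 4, 8, 9, 10}       # documents that are actually relevant

hits = retrieved & relevant
recall = len(hits) / len(relevant)    # relevant documents that were retrieved
precision = len(hits) / len(retrieved)  # retrieved documents that are relevant
```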
A summary
Different measures are used to evaluate the false positive versus false negative tradeoff.
Cost curves
Cost curves plot expected costs directly.
Example for the case with uniform costs (i.e. error):
Cost curves
Example with costs (figure):
Cost curves
C[+|−] is the cost of predicting + when the instance is −.
C[−|+] is the cost of predicting − when the instance is +.
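A sketch of one point on a cost curve, following the standard cost-curve construction (this construction and all the numbers below are assumptions for illustration, not taken from the slide): the normalized expected cost combines the classifier's error rates with the class prior and the two costs defined above.

```python
def normalized_expected_cost(fn_rate, fp_rate, p_plus, cost_fn, cost_fp):
    # cost_fn corresponds to C[-|+] and cost_fp to C[+|-]
    # probability-cost function: blends the prior of + with the two costs
    pc_plus = p_plus * cost_fn / (p_plus * cost_fn + (1 - p_plus) * cost_fp)
    return fn_rate * pc_plus + fp_rate * (1 - pc_plus)

print(normalized_expected_cost(fn_rate=0.2, fp_rate=0.1,
                               p_plus=0.3, cost_fn=5.0, cost_fp=1.0))
```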
Minimum Description Length Principle
The description length is defined as:
space required to describe a theory + space required to describe the theory's mistakes.
Here theory = the classifier, and mistakes = its errors on the training data.
We try to minimize the description length.
The MDL theory is the one that compresses the data the most; i.e., to compress a data set we generate a model and then store the model and its mistakes.
We need to compute:
the size of the model, and
the space needed to encode the errors.
Minimum Description Length Principle
The second quantity is easy: just use the informational loss function.
For the first we need a method of encoding the model.
L[T] = "length" of the theory.
L[E|T] = training set (its errors) encoded with respect to the theory.
We choose the theory that minimizes L[T] + L[E|T].
Minimum Description Length Principle
MDL and clustering:
Description length of the theory: the bits needed to encode the clusters, e.g. the cluster centers.
Description length of the data given the theory: encode cluster membership and position relative to the cluster, e.g. the distance to the cluster center.
This works if the coding scheme uses less code space for small numbers than for large ones.
Visit more self-help tutorials
Pick a tutorial of your choice and browse through it at your own pace.
The tutorials section is free, self-guiding and will not involve any additional support.
Visit us at www.dataminingtools.net
