Localization, Classification, and
Evaluation
Agenda
● Introduction
● Descriptors, Classifiers, and Learning
● Performance of Object Detectors
Three basic steps of an object detection system
● Localization
● Classification
● Evaluation
Object candidates are localized within a rectangular bounding box.
A bounding box is a special case of a region of interest.
● Classification
○ Object candidates are mapped by classification either into detected objects or rejected candidates.
● Evaluation
○ Results should be evaluated within the system or by a subsequent performance analysis of the system.
● true-positive (a hit or a detection): a correctly detected object
● false-positive (a false alarm or a false detection): an object is detected where there is none
● true-negative: a non-object region is correctly identified as a non-object region
● false-negative (a miss): an object is missed
Face Detected - Positive Case
Face Not Detected - Negative Case
● One false-positive (the largest square)
● Two false-negatives (a man in the middle and a girl on the right of the image).
● A head seen from the side (one case in the figure) does not define a face.
Descriptors, Classifiers, and Learning
● Classification is defined by membership in constructed pairwise-disjoint classes, being subsets of ℝⁿ for a given value n > 0.
● In other words, classes define a partitioning of the space ℝⁿ.
● Time-efficiency is an important issue when performing classification.
● This subsection only provides a few brief basic explanations for the extensive area of
classification algorithms.
Descriptors
● A descriptor x = (x₁, . . . , xₙ) is a point in an n-dimensional real space ℝⁿ, called the descriptor space.
● A descriptor is also called a feature.
● A feature in an image, as commonly used in image analysis, combines a keypoint and a descriptor.
● Thus, we continue to use “descriptor” rather than “feature” to avoid confusion.
● A descriptor represents measured or calculated property values in a given order; for example, a SIFT descriptor has length n = 128, and the example below uses n = 2.
● Example: The descriptor x₁ = (621.605, 10940) for Segment 1 in a descriptor space defined by the properties “perimeter” and “area”.
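As a minimal sketch (the variable names are chosen for illustration; only the Segment 1 values are from the slide), such a descriptor is simply a point in ℝⁿ:

```python
import numpy as np

# A descriptor is a point in R^n; here n = 2 with the properties
# "perimeter" and "area", as in the Segment 1 example above.
descriptor_segment_1 = np.array([621.605, 10940.0])  # (perimeter, area)

# A SIFT descriptor would instead live in R^128 (n = 128).
n = descriptor_segment_1.shape[0]
print(f"Descriptor of dimension n = {n}: {descriptor_segment_1}")
```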
Classifiers
● A classifier assigns class numbers to descriptors: typically first to a given set {x₁, . . . , xₘ} of already-classified descriptors for training (the learning set), and then to the descriptors generated for recorded image/video data while being applied:
○ A (general) classifier assigns class numbers 1, 2, . . . , k for k > 1 classes and 0
for ‘not classified’.
○ A binary classifier assigns class numbers −1 and +1 in the cases where we are only interested in whether a particular event (e.g. ‘driver has closed eyes’) occurs, specified by output +1.
● A classifier is weak if it does not perform up to expectations (e.g. it might be just a bit better than random guessing).
● Multiple weak classifiers can be combined (e.g. by a statistical combination of the weak classifiers) into one strong classifier, aiming at a satisfactory solution of a classification problem.
● Weak or strong classifiers can be general-case (i.e. multi-class) classifiers or just binary classifiers; being “binary” alone does not make a classifier “weak”.
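A hedged sketch of one way to combine weak classifiers into a stronger one by a weighted vote; the decision stumps and weights below are illustrative stand-ins (not a full AdaBoost implementation, where the weights would be learned):

```python
import numpy as np

def weak_stump(feature_index, threshold):
    """A decision stump: a weak binary classifier returning -1 or +1."""
    return lambda x: 1 if x[feature_index] > threshold else -1

# Hypothetical weak classifiers and vote weights.
weak_classifiers = [weak_stump(0, 500.0), weak_stump(1, 8000.0), weak_stump(0, 700.0)]
alphas = [0.8, 0.5, 0.3]

def strong_classify(x):
    """Weighted vote of the weak classifiers; output is -1 or +1."""
    score = sum(a * h(x) for a, h in zip(alphas, weak_classifiers))
    return 1 if score >= 0 else -1

print(strong_classify(np.array([621.605, 10940.0])))  # -> 1 for these stumps
```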
Example 10.1 (Binary Classifier by Linear Separation)
● A binary classifier may be defined by constructing a hyperplane Π : wᵀx + b = 0 in ℝⁿ for n ≥ 1.
● Vector w ∈ ℝⁿ is the weight vector, and the real b ∈ ℝ is the bias of Π.
● For n = 2 or n = 3, w is the gradient or normal orthogonal to the defined line or
plane, respectively.
● One side of the hyperplane (including the plane itself) defines
the value “+1”, and the other side (not including the plane
itself) the value “−1”.
● Example for n = 2: the hyperplane is a straight line in this case, and for n = 1, it is just a point separating ℝ¹.

h(x) = wᵀx + b
● The cases h(x) > 0 or h(x) < 0 then define the class values
“+1” or “−1” for one side of the hyperplane Π.
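A minimal sketch of this binary classifier for n = 2; the weight vector and bias below are arbitrary illustrative values, not learned from data:

```python
import numpy as np

# Hyperplane parameters (illustrative values only).
w = np.array([1.0, -0.5])  # weight vector, normal to the separating line
b = -2.0                   # bias

def h(x):
    """Signed evaluation of the hyperplane: w^T x + b."""
    return np.dot(w, x) + b

def classify(x):
    """+1 on the side including the hyperplane (h(x) >= 0), else -1."""
    return 1 if h(x) >= 0 else -1

print(classify(np.array([3.0, 1.0])))   # h = 0.5  -> +1
print(classify(np.array([0.0, 5.0])))   # h = -4.5 -> -1
```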
● Such a linear classifier (i.e. defined by the weight vector w and bias b) can be calculated for a distribution of (pre-classified) training descriptors in the nD descriptor space if the given distribution is linearly separable.
● If this is not the case, then we define the error for a misclassified descriptor x by its
perpendicular distance to the hyperplane Π.
d₂(x, Π) = |wᵀx + b| / ‖w‖₂
● The task is to calculate a hyperplane Π such that the total error for all misclassified
training descriptors is minimized.
● This is the error defined in margin-based classifiers such as support vector machines and
is (usually) not explicitly used in AdaBoost.
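A sketch of this total misclassification error using the distance formula above; the hyperplane parameters and the pre-classified training descriptors are made-up examples:

```python
import numpy as np

w = np.array([1.0, -0.5])
b = -2.0

def distance_to_hyperplane(x):
    """Perpendicular distance d2(x, Pi) = |w^T x + b| / ||w||_2."""
    return abs(np.dot(w, x) + b) / np.linalg.norm(w, ord=2)

# Made-up pre-classified training descriptors (class -1 or +1).
training = [(np.array([3.0, 1.0]), +1),
            (np.array([0.0, 5.0]), -1),
            (np.array([1.0, 0.0]), +1)]   # this one is misclassified

def total_error():
    """Sum of distances of all misclassified training descriptors to Pi."""
    err = 0.0
    for x, label in training:
        predicted = 1 if np.dot(w, x) + b >= 0 else -1
        if predicted != label:
            err += distance_to_hyperplane(x)
    return err

print(total_error())   # ~0.894 for the third descriptor
```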
Example 10.2 (Classification by Using a Binary Decision Tree)
● A classifier can also be defined by binary decisions at split nodes in a tree (i.e. “yes” or “no”).
● Each decision is formalized by a rule, and given input data can be tested whether they satisfy the rule
or not.
● Accordingly, we proceed with the identified successor node in the tree.
● Each leaf node of the tree finally defines an assignment of the data arriving at this node to classes.
Example: Each leaf node can identify exactly one class in ℝⁿ.
● The tested rules in the shown tree define straight lines in the 2D descriptor space.
● Descriptors arriving at one of the leaf nodes are then in one of the shown subsets of ℝ².
● A single decision tree, or just one split node in such a tree, can be considered an example of a weak classifier; a set of decision trees (called a forest) is needed to define a strong classifier.
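A hedged sketch of a tiny hand-written binary decision tree over a 2D descriptor space; the split rules (axis-parallel thresholds, i.e. straight lines in 2D) and class numbers are illustrative only:

```python
def classify_with_tree(x):
    """Binary decision tree over a 2D descriptor x = (perimeter, area).
    Each split node tests a yes/no rule; each leaf assigns a class number."""
    perimeter, area = x
    if perimeter > 500.0:          # split node 1
        if area > 8000.0:          # split node 2
            return 1               # leaf: class 1
        return 2                   # leaf: class 2
    if area > 2000.0:              # split node 3
        return 3                   # leaf: class 3
    return 0                       # leaf: 'not classified'

print(classify_with_tree((621.605, 10940.0)))  # -> 1
print(classify_with_tree((100.0, 500.0)))      # -> 0
```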
Learning
● Learning is the process of defining or training a classifier based on a
set of descriptors; classification is then the actual application of the
classifier.
● During classification, we may also identify some mis-behaviour (e.g.
“assumed” mis-classifications), and this again can lead to another
phase of learning.
● The set of descriptors used for learning may be pre-classified or not.
Supervised Learning
● In supervised learning we assign class numbers to descriptors
“manually” based on expertise.
Example: “Yes, the driver does have closed eyes in this image”.
● As a result, we can define a classifier by locating optimized separating manifolds in ℝⁿ for the training set of descriptors.
● A hyperplane Π is the simplest case of an optimized separating
manifold.
● An alternative use of expert knowledge for supervised learning is the
specification of rules at nodes in a decision tree.
● This requires knowledge about possible sequences of decisions.
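A hedged sketch of the supervised-learning setting described above: descriptors with manually assigned class numbers are used to locate a separating hyperplane. The training data are hypothetical, and the perceptron-style update used here is just one simple way to find such a hyperplane:

```python
import numpy as np

# Hypothetical pre-classified descriptors with class numbers +1 / -1
# assigned "manually" based on expertise.
X = np.array([[1.0, 2.0], [2.0, 1.5], [6.0, 7.0], [7.0, 6.5]])
y = np.array([-1, -1, +1, +1])

# Perceptron-style updates to locate a hyperplane w^T x + b = 0
# for this linearly separable training set.
w, b = np.zeros(2), 0.0
for _ in range(100):                       # a fixed number of passes
    for xi, yi in zip(X, y):
        if yi * (np.dot(w, xi) + b) <= 0:  # misclassified (or on the plane)
            w += yi * xi                   # move the hyperplane towards xi
            b += yi

print(w, b)
print(np.sign(X @ w + b))                  # -> [-1. -1.  1.  1.]
```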
Unsupervised Learning
● In unsupervised learning we do not have prior knowledge about class memberships
of descriptors.
● When aiming at separations in the descriptor space, we may apply a clustering algorithm to a given set of descriptors to identify a separation of ℝⁿ into classes.
● Example: We may analyse the density of the distribution of given descriptors in ℝⁿ;
● A region having a dense distribution defines a seed point of one class, and then we
assign all descriptors to identified seed points by applying, for example, the
nearest-neighbour rule.
● When aiming at defining decision trees for partitioning the descriptor space, we can
learn decisions at nodes of a decision tree based on some general data analysis
rules.
● The data distribution then “decides” about the generated rules.
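A minimal sketch of the seed-point idea described above: pick seed points where the unlabelled descriptors are dense (the crude fixed-radius density estimate and the generated data are assumptions of this sketch), then assign every descriptor to its nearest seed:

```python
import numpy as np

rng = np.random.default_rng(0)
# Unlabelled 2D descriptors: two made-up dense regions plus some noise.
descriptors = np.vstack([rng.normal([2, 2], 0.3, (30, 2)),
                         rng.normal([8, 7], 0.3, (30, 2)),
                         rng.uniform(0, 10, (10, 2))])

def pick_seeds(X, radius=1.0, k=2, min_separation=3.0):
    """Greedily choose up to k high-density descriptors as seed points,
    skipping candidates too close to an already chosen seed."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    density = (dists < radius).sum(axis=1)   # neighbours within the radius
    seeds = []
    for i in np.argsort(-density):
        if all(np.linalg.norm(X[i] - s) >= min_separation for s in seeds):
            seeds.append(X[i])
        if len(seeds) == k:
            break
    return np.array(seeds)

def assign_to_seeds(X, seeds):
    """Nearest-neighbour rule: each descriptor gets the class number
    (1..k) of its closest seed point."""
    d = np.linalg.norm(X[:, None, :] - seeds[None, :, :], axis=2)
    return d.argmin(axis=1) + 1

seeds = pick_seeds(descriptors)
classes = assign_to_seeds(descriptors, seeds)
print(seeds)
print(np.bincount(classes)[1:])   # descriptors per class
```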
Combined Learning Approaches
● There are also cases where we may combine supervised learning with
strategies known from unsupervised learning.
● Example: We can decide whether a given image window, also called
a bounding box, shows a pedestrian.
● This defines the supervised part in learning.
● We can also decide for a patch (a subwindow of a bounding box) whether it possibly belongs to a pedestrian.
● Example: The head of a cyclist might also be considered to possibly belong to a pedestrian.
● When we generate descriptors for bounding boxes or patches (e.g. measured image intensities at selected pixel locations), we cannot decide manually any more for each individual descriptor whether it is characteristic of a pedestrian or not.
● Example: for a set of given image windows, we know that they are
all parts of pedestrians, and the algorithm designed for generating a
classifier decides at some point to use a particular feature of those
windows for processing them further.
● This particular feature might not be generic in the sense that it
separates any window showing a part of a pedestrian from any
window showing no part of a pedestrian.
● Such an “internal” mechanism in a program that generates a classifier
defines an unsupervised part in learning.
● The overall task is to combine available supervision with
unsupervised data analysis when generating a classifier.
Performance of Object Detectors
● Object Detector → Applying a classifier for an object detection problem.
● Evaluations of designed object detectors are required to compare their performance under particular conditions.
● There are common measures in pattern recognition or information retrieval for performance evaluation of
classifiers.
● In the face-detection example: TP = 12, FP = 1, FN = 2; TN is not given.
● The figure does not indicate how many non-object regions have been analysed and correctly identified as showing no faces.
● Hence, we cannot specify the number TN.
● We would need to analyse the applied classifier to obtain TN.
● Thus, TN is not a common entry for performance
measures.
Precision (PR)
● The ratio of true-positives compared to all detections: PR = TP / (TP + FP).
● PR = 1 means that no false-positive is detected.

Recall (RC)
● The ratio of true-positives compared to all potentially possible detections: RC = TP / (TP + FN).
● RC = 1 means that all the visible objects in an image are detected and that there is no false-negative.
Example: With TP = 1 and FP = 2,
PR = TP / (TP + FP) = 1 / (1 + 2) = 1/3 ≈ 33 %.
Miss Rate (MR)
● The ratio of false-negatives compared to all objects in an image: MR = FN / (TP + FN).
● MR = 0 means that all the visible objects in the image are detected, which is equivalent to RC = 1.

False-Positives per Image (FPPI)
● The ratio of false-positives compared to all detected objects.
● FPPI = 0 means that all the detected objects are correctly classified, which is equivalent to PR = 1.
True-Negative Rate (TNR)
● The ratio of true-negatives compared to all decisions in “no-object” regions.

Accuracy (AC)
● The ratio of correct decisions compared to all decisions.
Since we are usually not interested in the numbers of true-negatives, these two measures have less significance in performance-evaluation studies.
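A hedged sketch that computes the measures above from raw counts; the formulas are read directly off the verbal definitions on these slides (in particular, FPPI is taken here as FP over all detections, as stated above), and the calls reproduce the figure example and the worked precision example:

```python
def detection_metrics(tp, fp, fn, tn=None):
    """Performance measures as verbally defined above.
    tn may be unknown (None); then TNR and accuracy are skipped."""
    pr = tp / (tp + fp)            # precision: TP over all detections
    rc = tp / (tp + fn)            # recall: TP over all objects present
    mr = fn / (tp + fn)            # miss rate: FN over all objects present
    fppi = fp / (tp + fp)          # FP over all detections (slide definition)
    metrics = {"PR": pr, "RC": rc, "MR": mr, "FPPI": fppi}
    if tn is not None:
        metrics["TNR"] = tn / (tn + fp)                  # TN over "no-object" decisions
        metrics["AC"] = (tp + tn) / (tp + tn + fp + fn)  # correct over all decisions
    return metrics

# Figure example from the slides: TP = 12, FP = 1, FN = 2, TN unknown.
print(detection_metrics(tp=12, fp=1, fn=2))
# Worked precision example: TP = 1, FP = 2 (FN = 0 assumed only to make
# the call complete) -> PR = 1/3, i.e. about 33 %.
print(detection_metrics(tp=1, fp=2, fn=0))
```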