TEST METRICS
Metrics derive information from raw
data with a view to helping in decision
making.
Some of the areas that such
information sheds light on are:
 The relationship between the data points.
 Any cause-and-effect correlation
between the observed data points.
 Any pointers to how the data can be
used for future planning and
continuous improvement.
Types of Metrics
 At a very high level, metrics can be
classified as product metrics and
process metrics.
 Product metrics can be further
classified as:
◦ Project metrics
◦ Progress metrics
◦ Productivity metrics
Project metrics:
A set of metrics that indicates how the
project is planned and executed.
Progress metrics:
• A set of metrics that tracks how the different
activities of the project are progressing.
• The activities include both development
activities and testing activities.
• Progress metrics are monitored during
testing phases.
• Progress metrics help in finding out the
status of test activities, and they are also good
indicators of product quality.
 The defects that emerge from testing
provide a wealth of information that helps
both the development team and the test team
to analyze and improve.
 For this reason, progress metrics focus
only on defects.
 Progress metrics are further classified into
test defect metrics and development
defect metrics.
Productivity metrics:
 A set of metrics that takes into account
various productivity numbers that can be
collected and used for planning and
tracking testing activities.
 These metrics help in the planning and
estimation of testing activities.
Types of metrics

Product metrics
 Project metrics
◦ Effort variance
◦ Schedule variance
◦ Effort distribution
 Progress metrics
◦ Testing defect metrics: defect find rate, defect fix rate, outstanding defects rate, priority outstanding rate, defects trend, defects classification trend, weighted defects trend, defect cause distribution
◦ Development defect metrics: component-wise defect distribution, defect density and defect removal rate, age analysis of outstanding defects, introduced and reopened defects rate
 Productivity metrics
◦ Defects per 100 hrs of testing
◦ Test cases executed per 100 hrs of testing
◦ Test cases developed per 100 hours
◦ Defects per 100 test cases
◦ Defects per 100 failed test cases
◦ Test phases effectiveness
◦ Closed defects distribution
1. Project metrics
 A typical project starts with requirements gathering and
ends with product release.
 All the phases that fall in between these points need to
be planned and tracked.
 In the planning cycle, the scope of the project is
finalized.
 The project scope gets translated to size estimates,
which specify the quantum of work to be done.
 This size estimate gets translated to an effort estimate for
each of the phases and activities by using the available
productivity data. This initial effort is called
baselined effort.
 As the project progresses, if the scope of the project
changes or if the available productivity numbers turn out
to be incorrect, the effort estimates are re-evaluated,
and this re-evaluated effort estimate is called revised effort.
 Effort and schedule are two factors to be
tracked for any phase or activity.
 The basic measurements that are very
natural, simple to capture, and form the
inputs to the metrics in this section are:
◦ The different activities and the initial baselined
effort and schedule for each of the activities; this is
input at the beginning of the project/phase.
◦ The actual effort and time taken for the various
activities; this is entered as and when the activities
take place.
◦ The revised estimates of effort and schedule; these
are recalculated at appropriate times in the
project life.
The project metrics included are:
1. Effort variance (planned vs actual)
2. Schedule variance (planned vs actual)
3. Effort distribution across phases
1.1 Effort variance
 When the baselined effort estimates, revised
effort estimates, and actual effort are plotted
together for all the phases of SDLC, it
provides many insights about the estimation
process.
 As different sets of people may get involved in
different phases, it is a good idea to plot these
effort numbers phase-wise.
 Normally, this variation chart is plotted at the
points when revised estimates are made, or at
the end of a release.
 If there is a substantial difference between the
baselined and revised effort, it points to
incorrect initial estimates.
 Calculating effort variance for each of the
phases provides a quantitative measure of
the relative difference between the revised
and actual efforts.
[Figure: Phase-wise effort variation, in person days]
Sample variance percentage by phase:

Effort       Req   Design   Coding   Testing   Doc   Defect fixing
Variance%    7.1   8.7      5        0         40    15

Variance% = [(Actual effort - Revised estimate) / Revised estimate] * 100

A variance of more than 5% in any of the SDLC phases
indicates scope for improvement in the estimation.
A negative variance is an indication of an overestimate.
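As a concrete illustration, here is a minimal Python sketch of the variance calculation above; the phase names and person-day figures are invented, chosen so the computed percentages echo the sample table.

```python
# Effort variance per phase: ((actual - revised) / revised) * 100.
# Phase names and person-day figures are illustrative only.
revised = {"Req": 14.0, "Design": 23.0, "Coding": 40.0, "Testing": 30.0}
actual = {"Req": 15.0, "Design": 25.0, "Coding": 42.0, "Testing": 30.0}

def effort_variance_pct(actual_effort: float, revised_estimate: float) -> float:
    return (actual_effort - revised_estimate) / revised_estimate * 100.0

for phase in revised:
    v = effort_variance_pct(actual[phase], revised[phase])
    flag = "needs attention" if abs(v) > 5 else "acceptable"
    print(f"{phase:8s} variance: {v:6.1f}% ({flag})")
```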
1.2 Schedule variance
 Most software projects are not only
concerned about the variance in effort, but
are also concerned about meeting
schedules.
 This leads us to the schedule variance
metric.
 Schedule variance, like effort variance, is
the deviation of the actual schedule from
the estimated schedule.
 Schedule variance is calculated at the end
of every milestone to find out how well the
project is doing with respect to the schedule.
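The schedule variance calculation mirrors the effort variance formula, applied to elapsed time rather than effort. A minimal sketch with invented milestone dates:

```python
from datetime import date

# Schedule variance per milestone: ((actual - planned) / planned) * 100,
# measured in days elapsed from the project start. Dates are invented.
start = date(2024, 1, 1)
planned_end = {"Design complete": date(2024, 2, 15),
               "Code complete": date(2024, 4, 1)}
actual_end = {"Design complete": date(2024, 2, 20),
              "Code complete": date(2024, 4, 10)}

for milestone, planned in planned_end.items():
    planned_days = (planned - start).days
    actual_days = (actual_end[milestone] - start).days
    variance = (actual_days - planned_days) / planned_days * 100.0
    print(f"{milestone}: {variance:+.1f}% schedule variance")
```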
[Figure: Schedule variance chart, baseline estimated vs actual/remaining days at each milestone]
 Effort and schedule variance have to be
analyzed in totality, not in isolation.
 Variance can be classified into negative
variance, zero variance, acceptable
variance, and unacceptable variance.
 Generally, a variance of 0-5% is considered
acceptable.
Interpretation of ranges of effort and schedule variation:

Effort variance         Schedule variance        Probable cause/result
Zero or acceptable      Zero variance            A well-executed project
Zero or acceptable      Acceptable variance      Slight improvement needed in effort/schedule estimation
Unacceptable            Zero or acceptable       Underestimation; needs further analysis
Unacceptable            Unacceptable             Underestimation of both effort and schedule
Negative                Zero or acceptable       Overestimation of effort; effort estimation needs improvement
Negative                Negative                 Overestimation and over-schedule; both effort and schedule estimation need improvement
Some of the typical questions one should
ask to analyze effort and schedule variances
are given below:
Did the effort variance take place because of poor
initial estimation or poor execution?
If the initial estimation turns out to be off the mark,
is it because of lack of availability of the supporting
data to enable good estimation?
If the effort was on target, but the schedule was
not, did the plan take into account appropriate
parallelism? Did it explore the right multiplexing of
the resources?
Can any process or tool be enhanced to improve
parallelism and thereby speed up the schedules?
1.3 Effort distribution across phases
 Variance calculation helps in finding out whether
commitments are met on time and whether the
estimation method works well.
 In addition, some indications of product quality can
be obtained if the effort distribution across the
various phases is captured and analyzed. For
example:
◦ Spending very little effort on requirements may lead to
frequent changes, but one should also leave sufficient time
for the development and testing phases.
◦ Spending less effort in testing may cause defects to crop up
at the customer place, but spending more time in testing
than what is needed may make the product lose the market
window.
[Figure: Actual effort distribution across phases]
2. PROGRESS METRICS
 Any project needs to be tracked from two
angles.
 One, how well the project is doing with
respect to effort and schedule.
 The other, equally important, angle is to find
out how well the product is meeting the
quality requirements for the release.
 The number of defects found in the
product is one of the main indicators of
quality.
 Defects get detected by the testing team and
get fixed by the development team.
 Defect metrics are further classified into
test defect metrics and development
defect metrics.
 Test defect metrics help the testing
team in the analysis of product quality and
testing.
 Development defect metrics help
the development team in the analysis of
development activities.
 How many defects have already been
found and how many more defects may
get unearthed are two parameters that
determine product quality and its
assessment.
 The progress chart gives the pass rate and fail
rate of executed test cases, pending test cases, and
test cases waiting for defect fixes.
 Representing testing progress in this manner
makes it easy to understand the status and to
carry out further analysis.

[Figure: Week-by-week test progress chart]
2.1 Test defect metrics
 Some organizations classify defects by
assigning a defect priority (e.g., P1, P2, P3,
and so on).
 The priority of a defect provides a
management perspective for the order of
defect fixes.
 For example, a defect with priority P1
indicates that it should be fixed before
another defect with priority P2.
 Some organizations use defect severity
levels (e.g., S1, S2, S3, and so on).
 The severity of a defect provides the test
team a perspective of the impact of that
defect.
 For example, a defect with severity level S1
means that either major functionality is not
working or the software is crashing. S2 may
mean a failure or a functionality not working.

Defect priority and defect severity - sample interpretation:

Priority   What it means
1          Fix the defect on highest priority; fix it before the next build
2          Fix the defect on high priority before the next test cycle
3          Fix the defect on moderate priority when time permits, before the release
4          Postpone this defect for the next release or live with this defect
Severity   What it means
1          Basic product functionality failing or product crashes
2          Unexpected error condition or a functionality not working
3          A minor functionality is failing or behaves differently than expected
4          Cosmetic issue with no impact on the users

A common defect definition and classification:

Defect classification   What it means
Extreme                 Product crashes or is unusable; needs to be fixed immediately
Critical                Basic functionality of the product not working; needs to be fixed before the next test cycle starts
Important               Extended functionality of the product not working; does not affect the progress of testing; fix it before the release
Minor                   Product behaves differently; no impact on the test team or customers; fix it when time permits
Cosmetic                Minor irritant; need not be fixed for this release
2.1.1 Defect find rate
The purpose of testing is to find defects early in
the test cycle.
 Tracking and plotting the total number of defects
found in the product at regular intervals (say,
daily or weekly) from the beginning to the end of a
product development cycle may show a pattern of
defect arrival.
 The idea of testing is to find as many defects as
possible early in the cycle.
 However, this may not be possible for two reasons.
- First, not all features of a product may become available
early; because of scheduling of resources, the features
of a product arrive in a particular sequence.
- Second, some of the test cases may be blocked because
of some show-stopper defects.
 Once a majority of the modules become
available and the defects that are blocking
the tests are fixed, the defect arrival rate
increases.
 After a certain period of defect fixing and
testing, the arrival of defects tends to slow
down, and a continuation of that trend
enables product release. This results in a
“bell curve”.
 A bell curve, along with a minimum number
of defects found in the last few days,
indicates that the release quality of the
product is likely to be good.
[Figure: Bell curve of defect arrival, number of defects vs time]
2.1.2 Defect fix rate
The purpose of development is to fix defects as soon
as they arrive.
 If the goal of testing is to find defects as early
as possible, it is natural to expect that the goal of
development should be to fix defects as soon as they
arrive.
 If the defect-fixing curve is in line with defect arrival,
it also follows a “bell curve”.
 There is a reason why the defect-fixing rate should be
the same as the defect arrival rate.
 If more defects are fixed later in the cycle, they may not
get tested properly for all possible side effects.
 As seen in regression testing, when defects are fixed in the
product, it opens the doors for the introduction of new
defects.
 Hence, it is a good idea to fix defects early and test
those fixes thoroughly.
2.1.3 Outstanding defects rate
In a well-executed project, the number of outstanding
defects is very close to zero all the time during the
test cycle.
 The number of defects outstanding in the
product is calculated by subtracting the
total defects fixed from the total defects
found in the product.
 If the defect-fixing pattern matches the
arrival rate, then the outstanding defects
curve will look like a straight line.
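A minimal sketch of the outstanding-defects calculation, using invented weekly cumulative counts:

```python
# Outstanding defects = cumulative defects found - cumulative defects fixed.
# The weekly cumulative counts below are invented for illustration.
found_cum = [10, 25, 45, 60, 70, 74]
fixed_cum = [8, 22, 40, 57, 68, 73]

for week, (found, fixed) in enumerate(zip(found_cum, fixed_cum), start=1):
    print(f"Week {week}: {found - fixed} outstanding defects")
```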
2.1.4 Priority outstanding rate
 Sometimes the defects that come out
of testing may be very critical and may take
enormous effort to fix and to test.
 Hence, it is important to look at how many
serious issues are being uncovered in the
product.
 The modification to the outstanding defects
rate curve obtained by plotting only the high-priority
defects and filtering out the low-priority
defects is called the priority outstanding rate.
It provides additional focus for those defects that matter
to the release.
 The priority outstanding defects correspond
to the extreme and critical classifications of
defects.
 Some organizations include important defects
also in the priority outstanding defects.
 Some high-priority defects may require a
change in design or architecture. If they are
found late in the cycle, the release may get
delayed to address the defect.
 But if a low-priority defect is found close to
the release date and it requires a design
change, a likely decision of the management
would be not to fix the defect.
2.1.5 Defect trend
 Having discussed individual measures of
defects, it is time for a trend chart to
consolidate all of the above into one chart.
The effectiveness of analysis increases when the several
perspectives of find rate, fix rate, outstanding defects, and
priority outstanding defects are combined.

[Figure: Defect trend chart, defects per week]
From the defect trend chart, the following observations can be made.
1. The find rate, fix rate, outstanding defects, and
priority outstanding defects follow a bell curve pattern,
indicating readiness for release at the end of the
19th week.
2. A sudden downward movement, as well as an
upward spike, in the defect fix rate needs analysis.
3. There are close to 75 outstanding defects at the
end of the 19th week. Since the priority
outstanding curve shows close to zero defects in
the 19th week, it can be concluded that all
outstanding defects are of low priority,
indicating release readiness. The outstanding
defects still need analysis before the release.
4. The defect fix rate is not in line with the
outstanding defects rate. If the defect fix rate
had been improved, it would have enabled
a quicker release cycle, as incoming
defects from the 14th week were under control.
5. The defect fix rate did not keep pace with
the defect find rate.
6. A smooth priority outstanding rate
suggests that priority defects were closely
tracked and fixed.
2.1.6 Defect classification trend
 The classifications of defects discussed so far are
only at two levels (high-priority and low-priority defects).
 Some of the data drilling or chart analysis needs
further information on defects with respect to each
classification of defects: extreme, critical, important,
minor, and cosmetic.
 When talking about the total number of outstanding
defects, some of the questions that can be asked
are:
 How many of them are extreme defects?
 How many are critical?
 How many are important?
Providing the perspective of defect classification in the
chart helps in finding out the release readiness of the
product.
 These questions require the charts to be
plotted separately based on defect
classification.
 The sum of extreme, critical, important, minor,
and cosmetic defects is equal to the total
number of defects.
 A graph in which each type of defect is
plotted separately, one on top of another, to
arrive at the total defects is called a “stacked
area chart”.
 This type of graph helps in identifying each
type of defect and also presents a
perspective of how they add up to or
contribute to the total defects.
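As a sketch of how such a chart might be built, the following uses matplotlib's stackplot; the weekly counts per classification are invented for illustration.

```python
import matplotlib.pyplot as plt

# Weekly outstanding defects per classification (illustrative numbers).
weeks = range(1, 7)
extreme = [2, 3, 2, 1, 1, 0]
critical = [5, 7, 6, 4, 2, 1]
important = [8, 10, 12, 9, 6, 3]
minor = [6, 9, 11, 10, 8, 5]
cosmetic = [3, 4, 5, 6, 5, 4]

# Each classification is stacked on top of the previous one, so the top
# boundary of the chart traces the total number of defects.
plt.stackplot(weeks, extreme, critical, important, minor, cosmetic,
              labels=["Extreme", "Critical", "Important", "Minor", "Cosmetic"])
plt.xlabel("Week")
plt.ylabel("Defects")
plt.legend(loc="upper left")
plt.show()
```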
[Figure: Defect classification trend by week]

[Figure: A pie chart of defect distribution]
2.1.7 Weighted defects trend
 The stacked area chart provides information
on how the different levels or types of defects
contribute to the total number of defects.
 In this approach all the defects are counted
on par; for example, both a critical defect and
a cosmetic defect are treated equally and
counted as one defect.
 Counting the defects the same way takes
away the seriousness of extreme or critical
defects.
 To solve this problem, a metric called
weighted defects is introduced:

Weighted defects = (Extreme * 5) + (Critical * 4) + (Important * 3) + (Minor * 2) + Cosmetic

 This concept helps in quick analysis of
defects, instead of worrying about the
classification of defects.
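A minimal sketch of the weighted defects calculation; the weights follow the formula above, and the counts are invented.

```python
# Weighted defects: heavier classifications count more, per the weights above.
# The counts for the sample week are illustrative only.
WEIGHTS = {"extreme": 5, "critical": 4, "important": 3, "minor": 2, "cosmetic": 1}

def weighted_defects(counts: dict[str, int]) -> int:
    return sum(WEIGHTS[cls] * n for cls, n in counts.items())

week_12 = {"extreme": 1, "critical": 3, "important": 7, "minor": 10, "cosmetic": 4}
print(weighted_defects(week_12))  # 1*5 + 3*4 + 7*3 + 10*2 + 4*1 = 62
```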
2.1.8 Defect cause distribution
Both “large defects” and a “large number of
small defects” affect product release.
 All the metrics above help in analyzing defects and their
impact.
 The next logical questions that arise are:
◦ Why are those defects occurring, and what are their root
causes?
◦ What areas must be focused on to get more defects
out of testing?
Knowing the causes of defects helps in finding more
defects and also in preventing such defects early
in the cycle.
[Figure: Defect cause distribution chart]
2.2 Development defect metrics
 The defect metrics that directly help in
improving development activities are
termed development defect metrics.

2.2.1 Component-wise defect distribution
 While it is important to count the number
of defects in the product, for development
it is important to map them to the different
components of the product so that they
can be assigned to the appropriate
developer to fix them.
 The project manager in charge of development maintains
a module ownership list where all product modules and
owners are listed.
 Based on the number of defects existing in each of the
modules, the effort needed to fix them, and the availability
of skill sets for each of the modules, the project manager
assigns resources accordingly.

[Figure: Module-wise defect distribution]
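A minimal sketch of tallying open defects by component so they can be routed to module owners; the component names and defect IDs are hypothetical.

```python
from collections import Counter

# Tally open defects by product component. Names and IDs are illustrative.
open_defects = [
    ("D-101", "install"), ("D-102", "ui"), ("D-103", "ui"),
    ("D-104", "reports"), ("D-105", "ui"), ("D-106", "install"),
]

by_component = Counter(component for _, component in open_defects)
for component, count in by_component.most_common():
    print(f"{component}: {count} open defects")
```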
2.2.2 Defect density and defect removal rate
 A good quality product can have a
long lifetime before becoming
obsolete.
 The lifetime of the product depends on
its quality, over the different releases.
 One of the metrics that correlates
source code and defects is defect
density.
 This metric maps the defects in the
product to the volume of code that is
produced.
 Defects per KLOC is the most practical
and easy metric to calculate and plot.
 KLOC stands for kilo lines of code.
 Every 1000 lines of executable statements
in the product is counted as one KLOC.
 There are several variants of this metric to
make it relevant to releases, and one of
them is calculating AMD (added,
modified, and deleted code) to find out how a
particular release affects product quality.

Defects per KLOC = (Total defects found in the product) / (Total executable AMD lines of code, in KLOC)
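A minimal sketch of the defects per KLOC calculation with invented numbers:

```python
# Defects per KLOC for a release, using AMD (added, modified, deleted) lines.
# Counts are illustrative only.
total_defects = 120
amd_executable_lines = 48_000  # added + modified + deleted executable lines

defects_per_kloc = total_defects / (amd_executable_lines / 1000)
print(f"{defects_per_kloc:.2f} defects per KLOC")  # 2.50
```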
 The formula for calculating the defect
removal rate is

Defect removal rate = [(Defects found by verification activities + Defects found in unit testing) / (Defects found by test teams)] * 100

 This formula helps in finding the
efficiency of verification activities and unit
testing, which are normally the responsibilities of
the development team, and compares them with
the defects found by the testing teams.
 These metrics are tracked over various
releases to study release-on-release
trends in the verification/quality assurance
activities.
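And a sketch of the defect removal rate with invented counts; note that the value can exceed 100% when the development team catches more defects than the test teams do.

```python
# Defect removal rate, per the formula above: defects caught by the
# development team's own verification and unit testing, relative to
# defects found later by the test teams. Counts are illustrative.
verification_defects = 40   # reviews, inspections, static analysis
unit_test_defects = 35
test_team_defects = 60

removal_rate = (verification_defects + unit_test_defects) / test_team_defects * 100
print(f"Defect removal rate: {removal_rate:.0f}%")  # 125%
```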
[Figure: Defects/KLOC and defect removal % across releases]
2.2.3 Age analysis of outstanding defects
 The time needed to fix a defect may be
proportional to its age.
 The age of a defect in a way represents
the complexity of the defect fix needed.
 Given the complexity and time involved in
fixing those defects, they need to be
tracked closely; else they may get
postponed close to the release, which may
even delay the release.
 A method to track such defects is called
age analysis of outstanding defects.
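One possible way to implement such an analysis is to bucket outstanding defects by age; the defect IDs, dates, and bucket boundaries below are arbitrary choices for illustration.

```python
from datetime import date

# Bucket outstanding defects by age (days since they were reported).
today = date(2024, 6, 1)
outstanding = {"D-201": date(2024, 5, 28), "D-202": date(2024, 4, 15),
               "D-203": date(2024, 2, 1)}

def age_bucket(reported: date) -> str:
    age = (today - reported).days
    if age <= 7:
        return "0-7 days"
    if age <= 30:
        return "8-30 days"
    return "over 30 days (track closely)"

for defect_id, reported in outstanding.items():
    print(defect_id, "->", age_bucket(reported))
```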
2.2.4 Introduced and reopened defects trend
 When adding new code or modifying the
code to provide a defect fix, something
that was working earlier may stop
working. This is called an introduced
defect.
 These defects are injected into
the code while fixing defects or while
trying to provide an enhancement to the
product.
 Sometimes a fix provided in the
code may not have fixed the problem
completely, or some other modification
may have reproduced a defect that was
fixed earlier. This is called a reopened defect.
3. Productivity Metrics
 Productivity metrics combine several
measurements and parameters with the effort spent
on the product.
 They help in finding out the capability of the team
and serve other purposes as well, such as:
1. Estimating for the new release.
2. Finding out how well the team is
progressing, and understanding the reasons for
variations (both positive and negative) in results.
3. Estimating the number of defects that can be
found.
4. Estimating the release date and quality.
5. Estimating the cost involved in the release.
3.1 Defects per 100 hours of testing
 If incoming defects in the product are reducing, it
may mean various things:
1. Testing is not effective.
2. The quality of the product is improving.
3. The effort spent in testing is falling.
 The metric defects per 100 hours of testing
covers the third point and normalizes the number
of defects found in the product with respect to the
effort spent:

Defects per 100 hours of testing = (Total defects found in the product for a period / Total hours spent to get those defects) * 100
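A minimal sketch with invented numbers:

```python
# Defects per 100 hours of testing, normalizing defect counts by effort.
# Weekly figures are illustrative only.
defects_found = 18
testing_hours = 240.0

defects_per_100_hours = defects_found / testing_hours * 100
print(f"{defects_per_100_hours:.1f} defects per 100 hours")  # 7.5
```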
3.2 Test cases executed per 100 hours of testing
 The number of test cases executed by the test
team for a particular duration depends on team
productivity and the quality of the product.
 The team productivity has to be calculated
accurately so that it can be tracked for the current
release and used to estimate the next release
of the product.
 If the quality of the product is good, more test
cases can be executed, as there may not be
defects blocking the tests.

Test cases executed per 100 hours of testing = (Total test cases executed for a period / Total hours spent in test execution) * 100
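The same normalization, sketched with invented numbers:

```python
# Test cases executed per 100 hours of testing. Figures are illustrative.
test_cases_executed = 420
execution_hours = 350.0

rate = test_cases_executed / execution_hours * 100
print(f"{rate:.0f} test cases executed per 100 hours")  # 120
```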
3.3 Test cases developed per 100
hours of testing
 Both manual execution of test cases and
automating test cases require estimating and
tracking of productivity numbers.
 In a product scenario, not all the test cases
are written afresh for every release.
 New test cases are added to address new
functionality and for testing features that were
not tested earlier.
 Existing test cases are modified to reflect
changes in the product.

Test cases developed per 100 hours of testing = (Total test cases developed for a period / Total hours spent in test case development) * 100
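A minimal sketch, counting both new and modified test cases as developed, with invented numbers:

```python
# Test cases developed (new plus modified) per 100 hours of test-case
# development. Figures are illustrative only.
test_cases_developed = 90   # new cases written + existing cases modified
development_hours = 150.0

rate = test_cases_developed / development_hours * 100
print(f"{rate:.0f} test cases developed per 100 hours")  # 60
```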
3.4 Defects per 100 test cases
 Since the goal of testing is to find as many defects as
possible, it is appropriate to measure the “defect yield”
of tests, that is, how many defects get uncovered
during testing.
 This is a function of two parameters: one, the
effectiveness of the tests in uncovering defects, and
two, the effectiveness of choosing tests that are
capable of uncovering defects.
 The ability of a test case to uncover defects depends
on how well the test cases are designed and
developed.
 A measure that quantifies these two parameters is
defects per 100 test cases.

Defects per 100 test cases = (Total defects found for a period / Total test cases executed for the same period) * 100
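A minimal sketch of the defect yield calculation with invented numbers:

```python
# Defects per 100 test cases: the "defect yield" of the executed tests.
# Figures are illustrative only.
defects_found = 24
test_cases_executed = 400

defect_yield = defects_found / test_cases_executed * 100
print(f"{defect_yield:.0f} defects per 100 test cases")  # 6
```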
3.5 Defects per 100 failed test cases
 Defects per 100 failed test cases is a good
measure of how granular the test
cases are. It indicates:
1. How many test cases need to be executed
when a defect is fixed;
2. What defects need to be fixed so that an
acceptable number of test cases reach the
pass state; and
3. How the fail rate of test cases and defects
affect each other for release readiness
analysis.

Defects per 100 failed test cases = (Total defects found for a period / Total test cases failed due to those defects) * 100
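A minimal sketch with invented numbers; a low value suggests many test cases fail for the same underlying defect.

```python
# Defects per 100 failed test cases: an indicator of test case granularity.
# Figures are illustrative only.
defects_found = 24
failed_test_cases = 60

granularity = defects_found / failed_test_cases * 100
print(f"{granularity:.0f} defects per 100 failed test cases")  # 40
```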
[Figure: Productivity metrics summary chart]
3.6 Test phase effectiveness
 Developers perform unit testing, and there could
be multiple testing teams performing the component,
integration, and system testing phases.
 The idea of testing is to find defects early in the
cycle and in the early phases of testing.
 The defects found in the various phases such as unit
testing (UT), component testing (CT), integration
testing (IT), and system testing (ST) are plotted and
analyzed.
 From such a plot, the following observations can be made:
1. A good proportion of defects were found in the
early phases of testing (UT and CT).
2. Product quality improved from phase to phase.
3.7 Closed defect distribution
 The objective of testing is not only to find
defects.
 The testing team also has the objective to
ensure that all the defects found through
testing are fixed, so that the customer gets
the benefit of testing and the product quality
improves.
 To ensure that most of the defects are fixed,
the testing team has to track the defects and
analyze how they are closed.
 The closed defect distribution helps in this
analysis.
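A minimal sketch of how the distribution might be computed from closure resolutions; the labels and counts are invented to mirror the percentages discussed below.

```python
from collections import Counter

# Percentage distribution of closed defects by resolution.
# Resolution labels and counts are illustrative only.
closed = Counter({"fixed": 28, "duplicate": 19, "non-reproducible": 11,
                  "as per design": 7, "will not fix": 18, "next release": 17})

total = sum(closed.values())
for resolution, count in closed.most_common():
    print(f"{resolution:18s} {count / total * 100:5.1f}%")
```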
[Figure: Pie chart of closed defect distribution (fixed, duplicate, non-reproducible, as per design, will not fix, next release)]
From the above chart, the following observations
can be made:
1. Only 28% of the defects found by the test team were
fixed in the product. This suggests that product
quality needs improvement before release.
2. Of the defects filed, 19% were duplicates. This
suggests that the test team needs to update itself
on existing defects before filing new defects.
3. Non-reproducible defects amounted to 11%. This
means that the product has some random
defects or the defects are not filed with
reproducible test cases. This area needs further
analysis.
4. Close to 40% of the defects were not fixed, for the
reasons “as per design”, “will not fix”, and “next
release”. These defects may impact the
customers.
4. Release Metrics
 Several metrics can be used to determine
whether the product is ready for release.
 The decision to release a product needs
to consider several perspectives and
several metrics.
 All the metrics that were discussed in the
previous sections need to be considered in
totality for making the release decision.
 The guidelines and the exact number and nature
of criteria can vary from release to release,
product to product, and organization to
organization.