We Are Not Just Numerical Values: Game Changers When Discussed as Test Metrics
by Sanjay Gupta
Collecting data during the software testing phase and representing it intelligently helps ensure the success of a project, because appropriate actions can be taken to keep progress on the right path. Representing test metrics and sharing them with key stakeholders during the life cycle of the project helps mitigate risks proactively, surfaces any missing requirements, and supports the assessment of the progress and quality of ongoing development and testing.
Teams can use these numbers (data) to track required information, study progress, and draw the correct conclusions. One can use built-in and custom reports based on the data captured during test case development and execution, and on defects and their life cycle, gathered automatically from test management or defect management tools or collected manually. Intelligent interpretation of these results helps answer questions such as:
Are we on the right path?
Is our defect pool growing or shrinking?
What is our burndown rate [1]?
What is the test case design and execution productivity of the team?
Can we measure the skill set of the team on a numerical scale and take appropriate actions?
This article highlights the key test metrics and their interpretation towards ensuring on-time, quality deliverables to our customers.
Productivity as a measurement:
Often, you might have assured key stakeholders that you are working on productivity improvement initiatives. How do we measure team productivity, and what does it mean for the business? Suppose, for example, we are engaged in a QA phase and would like to measure the output of the team members on a common scale. In a manual testing scenario, a good test analyst can write around 20 and execute around 25-30 test cases, respectively, depending on the complexity of the test scenario. A productivity rate for test case design or execution, compared to a baseline number, is a powerful measurement to showcase the capability and quality of the test team in terms of their functional understanding, their ability to write effective test cases, and their speed during the QA phase.
How does a productivity number become a bottleneck in on-time delivery?
Once we have decided on the number of test cases to be written and executed in a given time, it is very important to meet the takt time (takt time is a tool for planning and tracking). Takt time is a measure of the productivity required of the team to complete the job on time. For example, if we have 100 test cases to be executed in 5 days, the required productivity is 20 test cases per day to complete the job on time. In this case, if one test analyst is working, his or her productivity should be a minimum of 20 TC/day; a quick sketch of this calculation follows, and below that are the typical productivity metrics to be measured.
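As a minimal, hypothetical sketch of the takt-time arithmetic above (the helper functions and figures are illustrative, not taken from any particular tool):

```python
# Minimal sketch of the takt-time calculation described above.
# All figures are illustrative; substitute your own plan numbers.

def required_rate(total_test_cases: int, working_days: int) -> float:
    """Test cases that must be executed per day to finish on time."""
    return total_test_cases / working_days

def per_analyst_target(total_test_cases: int, working_days: int, analysts: int) -> float:
    """Minimum sustained productivity each analyst needs (TC/day)."""
    return required_rate(total_test_cases, working_days) / analysts

if __name__ == "__main__":
    total, days, team = 100, 5, 1   # the article's example: 100 test cases in 5 days, one analyst
    print(f"Required rate: {required_rate(total, days):.1f} TC/day")
    print(f"Target per analyst: {per_analyst_target(total, days, team):.1f} TC/day")
```

Comparing each analyst's actual daily throughput against this target shows early in the cycle whether the plan is still achievable.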
Fig. 1: (A) A typical week-wise productivity measurement of team members for test case design. (B) Test case pass rate can also be an important measurement of team productivity during the test case execution cycle; it can be correlated to the quality of the test cases and the functional understanding of the team.
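The pass rate plotted in Fig. 1(B) is a simple ratio; here is a small sketch with made-up weekly execution counts (replace them with an export from your own test management tool):

```python
# Hypothetical weekly execution results; replace with data exported
# from your test management tool.
weekly_results = {
    "Week 1": {"executed": 120, "passed": 96},
    "Week 2": {"executed": 140, "passed": 119},
    "Week 3": {"executed": 150, "passed": 138},
}

for week, counts in weekly_results.items():
    pass_rate = 100.0 * counts["passed"] / counts["executed"]
    print(f"{week}: pass rate {pass_rate:.1f}%")
```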
Defect Count as Information:
Information on the number of defects, their state, and their severity is critical for both the QA and development teams in delivering a bug-free application. Project health is also related to the number of defects detected by the test team in a given QA phase. The criticality of the defects drives the application stakeholders' decision on the priority in which defects are to be fixed.
Defect rejection rate is a key metric for determining the quality and capability of the test analysts. If the QA team is raising many defects that are rejected by the development/BA team, it calls into question the functional understanding of the test team.
Defect age is another key defect metric to review. Defect age is defined as the time interval between the open and closed status of a given defect. Defect age, taken together with the criticality of a defect, helps you find the reason for a delay and gives an opportunity to take appropriate action; a sketch of both defect metrics follows.
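A minimal sketch of how rejection rate and defect age could be derived from exported defect records; the field names and dates below are assumptions for illustration, not any specific tool's schema:

```python
from datetime import date

# Assumed defect records exported from a defect management tool (illustrative).
defects = [
    {"id": "D-101", "severity": "High",     "rejected": False,
     "opened": date(2014, 1, 6), "closed": date(2014, 1, 9)},
    {"id": "D-102", "severity": "Medium",   "rejected": True,
     "opened": date(2014, 1, 7), "closed": date(2014, 1, 8)},
    {"id": "D-103", "severity": "Critical", "rejected": False,
     "opened": date(2014, 1, 7), "closed": date(2014, 1, 20)},
]

# Defect rejection rate: rejected defects as a share of all defects raised.
rejection_rate = 100.0 * sum(d["rejected"] for d in defects) / len(defects)
print(f"Defect rejection rate: {rejection_rate:.1f}%")

# Defect age: days between the open and closed status, reviewed alongside severity.
for d in defects:
    if not d["rejected"]:
        age = (d["closed"] - d["opened"]).days
        print(f"{d['id']} ({d['severity']}): age {age} days")
```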
Fig. 2: (a) The defect trend in a given test cycle. (b) The defect age of the defects along with their severity.
Defect trend analysis can give a measure of the stability of the application. If the number of defects raised is falling over time, it means the application is becoming more stable day by day. It also gives the confidence to decide on releasing the application to production.
Defect trend and defect age give an opportunity to control the quality of the testing phase. When a given defect stays open for a long time, one gets an opportunity to immediately investigate why it has been open for so long and take the appropriate action.
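One simple way to read a trend like Fig. 2(a) programmatically is to count defects raised per cycle and check whether the counts keep falling; the cycle counts below are purely illustrative:

```python
# Defects raised per test cycle (illustrative numbers).
defects_per_cycle = [18, 14, 11, 7, 4]

# A non-increasing count from cycle to cycle suggests the application is stabilising.
stabilising = all(later <= earlier
                  for earlier, later in zip(defects_per_cycle, defects_per_cycle[1:]))
print("Defect trend is falling:", stabilising)
```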
What can go wrong with defect numbers?
Many times, the development team or the application owners get upset about the reported number of defects. In a few cases, the same defect raised by different team members inflates the count, and a good amount of time is consumed before it is concluded that the defects are similar or repeated. Writing a good analysis and a clear functional description against each defect is a good practice to follow.
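The article does not prescribe a tool for catching such repeats; as one possible aid (purely a sketch), defect summaries can be compared for textual similarity before triage using Python's standard difflib:

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical defect summaries raised by different team members.
summaries = {
    "D-201": "Login button unresponsive on payment confirmation page",
    "D-202": "Login button not responding on payment confirmation page",
    "D-203": "Date format wrong in the export report header",
}

# Flag pairs whose summaries look similar enough to deserve a manual duplicate check.
THRESHOLD = 0.6
for (id_a, text_a), (id_b, text_b) in combinations(summaries.items(), 2):
    ratio = SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()
    if ratio >= THRESHOLD:
        print(f"Possible duplicate: {id_a} / {id_b} (similarity {ratio:.2f})")
```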
Do we have the desired skill set? Can it be measured?
In today's competitive environment, cost, employee attrition, reduced time to market, and skilful team members are major challenges. Ensuring the right skill set among the team members is a challenging job, and it is the art of the project manager to secure the best skill set by taking the appropriate actions.
How can we measure it?
A good measurement is a holistic (3-D) view of the skill set:
1. A scale to evaluate an individual's skill set against the required skills
2. Overall team competency for the required skill sets
3. A measurement of team readiness against a given skill set
Below is a tool capable of evaluating skills on all three of the above dimensions. For the required skill sets represented in the picture below, resource one has an overall competency of 0.75, which is a good number: by global standards, a score of 0.7 or above against a required set of skills is acceptable.
If you look closely at the total overall team competency number (in yellow), it is 0.57, which says that the overall team competency for the required skill set is not good on the same scale. The third dimension is the readiness of the team against a given technology or skill. The team is good on Java, with a competency of 0.89, but the scores against XML and database knowledge, at 0.39 and 0.36 respectively, need improvement. These scores are far below the acceptable global norms.
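The tool itself is shown only as a picture and its exact weighting is not spelled out; the sketch below simply averages 0-1 skill ratings to produce the three views described above. The skills and ratings are made up to roughly echo the figures quoted in the text (with only three skills, the overall team number comes out a little below the 0.57 cited, which presumably reflects more columns in the original matrix):

```python
# Illustrative skill matrix: ratings on a 0-1 scale per resource and required skill.
skills = ["Java", "XML", "Database"]
matrix = {
    "Resource 1": {"Java": 0.90, "XML": 0.70, "Database": 0.65},
    "Resource 2": {"Java": 0.95, "XML": 0.20, "Database": 0.25},
    "Resource 3": {"Java": 0.82, "XML": 0.27, "Database": 0.18},
}

ACCEPTABLE = 0.70  # the article cites 0.7 or above as globally acceptable

# Dimension 1: each individual's competency across the required skills.
for person, ratings in matrix.items():
    score = sum(ratings[s] for s in skills) / len(skills)
    verdict = "ok" if score >= ACCEPTABLE else "needs improvement"
    print(f"{person}: {score:.2f} ({verdict})")

# Dimension 3: team readiness for each skill.
for skill in skills:
    readiness = sum(matrix[p][skill] for p in matrix) / len(matrix)
    print(f"Team readiness for {skill}: {readiness:.2f}")

# Dimension 2: overall team competency (mean of every rating in the matrix).
overall = sum(matrix[p][s] for p in matrix for s in skills) / (len(matrix) * len(skills))
print(f"Overall team competency: {overall:.2f}")
```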
How to improve on competency?
Once you identify the skill areas to be improved, create an action plan, such as technical, hands-on, or domain trainings, and evaluate the team after a few months of training. The improvement in skills can be seen in the team's output: for example, a decrease in the number of defects slipping into production, more quality defects raised by the test team, and a reduction in the number of rejected defects.
What Next?
Test metrics can be represented or tailored in many ways; the same data can be presented in different visual forms. Decide on, and stick to, pre-defined test metrics for a given engagement. For example, you can make a checklist of the test metrics used for waterfall, Agile development, or enhancement-type engagements. This creates a common understanding among all the test analysts in an organization and helps in smooth, quality delivery. A meaningful pictorial representation of project progress through test metrics is a powerful tool for assessing progress and an opportunity to take appropriate actions in advance to mitigate any major risk.
Reference:
[1] Burndown and Burn Rate Reports, MSDN Library: http://msdn.microsoft.com/en-us/library/dd380678.aspx
Acknowledgement:
The author would like to thank and gratefully acknowledge the support and encouragement provided by Mr. Dhiraj Sinha, Ramamurthy Ramki and Ashutosh Vaidya.