Software Quality Metrics

Done by B. Shruthi (11109A067)
• Software Quality Metrics
• Types of Software Quality Metrics
• Three groups of Software Quality Metrics
• Difference Between Errors, Defects, Faults, and Failures
• Lines of Code
• Function Points
• Customer Satisfaction Metrics
• The subset of metrics that focus on quality.
• Software quality metrics can be divided into:
  - End-product quality metrics
  - In-process quality metrics
• The essence of software quality engineering is to investigate the relationships among in-process metrics, project characteristics, and end-product quality, and, based on the findings, to engineer improvements in quality to both the process and the product.
• Product metrics – e.g., size, complexity, design features, performance, quality level
• Process metrics – e.g., effectiveness of defect removal, response time of the fix process
• Project metrics – e.g., number of software developers, cost, schedule, productivity
• Product quality
• In-process quality
• Maintenance quality

Product Quality Metrics
• Intrinsic product quality
  - Mean time to failure
  - Defect density
• Customer-related
  - Customer problems
  - Customer satisfaction
• Intrinsic product quality is usually measured by:
  - the number of “bugs” (functional defects) in the software (defect density), or
  - how long the software can run before “crashing” (MTTF – mean time to failure)
• The two metrics are correlated but different.
• An error is a human mistake that results in incorrect software.
• The resulting fault is an accidental condition that causes a unit of the system to fail to function as required.
• A defect is an anomaly in a product.
• A failure occurs when a functional unit of a software-related system can no longer perform its required function or cannot perform it within specified limits.
• This metric is the number of defects over the opportunities for error (OPE) during some specified time frame.
• We can use the number of unique causes of observed failures (failures are just defects materialized) to approximate the number of defects.
• The size of the software, in either lines of code or function points, is used to approximate OPE.
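As a minimal sketch of the idea (the function name and the KLOC normalization are illustrative assumptions, not from the slides), defect density can be computed as unique defects over size:

def defect_density(unique_defects, size_kloc):
    """Defects per KLOC: unique observed defect causes divided by the size
    of the software, which approximates the opportunities for error (OPE)."""
    if size_kloc <= 0:
        raise ValueError("size must be positive")
    return unique_defects / size_kloc

# Example: 120 unique defect causes observed in a 40 KLOC product
print(defect_density(120, 40.0))  # -> 3.0 defects per KLOC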
• Possible variations:
  - Count only executable lines
  - Count executable lines plus data definitions
  - Count executable lines, data definitions, and comments
  - Count executable lines, data definitions, comments, and job control language
  - Count lines as physical lines on an input screen
  - Count lines as terminated by logical delimiters
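To make a couple of these variations concrete, here is a hedged sketch that counts physical lines versus non-blank, non-comment lines of a Python-style source string; the classification rules are simplified assumptions, and a real counter would need language-specific rules for data definitions, comments, and so on:

def count_loc(source):
    """Return two of the possible LOC counts for a Python-style source string:
    physical lines, and lines that are neither blank nor full-line comments."""
    physical = source.splitlines()
    non_comment = [line for line in physical
                   if line.strip() and not line.strip().startswith("#")]
    return {"physical": len(physical), "non_comment": len(non_comment)}

sample = "x = 1\n\n# a comment\ny = x + 1\n"
print(count_loc(sample))  # -> {'physical': 4, 'non_comment': 2}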
• Other difficulties:
  - LOC measures are language dependent.
  - Comparisons cannot be made when different languages, or different operational definitions of LOC, are used.
  - For productivity studies the problems in using LOC are greater, since LOC is negatively correlated with design efficiency.
  - Code enhancements and revisions complicate the situation – the defect rate must be calculated for new and changed lines of code only.
• Depends on having LOC counts for both the entire product and the new and changed code.
• Depends on tracking defects to the release origin (the portion of code that contains the defects) and to the release in which that code was added, changed, or enhanced.
• A function can be defined as a collection of executable statements that performs a certain task, together with declarations of the formal parameters and local variables manipulated by those statements.
• In practice, functions are measured indirectly.
• Many of the problems associated with LOC counts are addressed.
• The number of function points is a weighted total of five major components that comprise an application:
  - Number of external inputs x 4
  - Number of external outputs x 5
  - Number of logical internal files x 10
  - Number of external interface files x 7
  - Number of external inquiries x 4
• The function count (FC) is a weighted total of five major components that comprise an application:
  - Number of external inputs x (3 to 6)
  - Number of external outputs x (4 to 7)
  - Number of logical internal files x (7 to 15)
  - Number of external interface files x (5 to 10)
  - Number of external inquiries x (3 to 6)
• The weighting factor depends on complexity.
• Each count is multiplied by its weighting factor and the products are summed.
• This weighted sum (FC) is further refined by multiplying it by the Value Adjustment Factor (VAF).
• Each of the 14 general system characteristics is assessed on a scale of 0 to 5 as to its impact on (importance to) the application.
1. Data communications
2. Distributed functions
3. Performance
4. Heavily used configuration
5. Transaction rate
6. Online data entry
7. End-user efficiency
8. Online update
9. Complex processing
10. Reusability
11. Installation ease
12. Operational ease
13. Multiple sites
14. Facilitation of change
• VAF is the sum of these 14 characteristic ratings divided by 100, plus 0.65.
• Notice that if an average rating of 2.5 is given to each of the 14 factors, their sum is 35 and therefore VAF = 1.
• The final function point total is then the function count multiplied by VAF:
  FP = FC x VAF
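A minimal sketch of the whole calculation; the weights chosen below are illustrative values from within the stated ranges, and the example counts and characteristic scores are invented for the example:

# Unadjusted function count (FC): weighted sum of the five component counts.
# The weights below are illustrative picks from the allowed ranges
# (3-6, 4-7, 7-15, 5-10, 3-6).
def function_count(inputs, outputs, internal_files, interface_files, inquiries,
                   weights=(4, 5, 10, 7, 4)):
    counts = (inputs, outputs, internal_files, interface_files, inquiries)
    return sum(c * w for c, w in zip(counts, weights))

def value_adjustment_factor(gsc_scores):
    """gsc_scores: the 14 general system characteristic ratings, each 0 to 5."""
    assert len(gsc_scores) == 14 and all(0 <= s <= 5 for s in gsc_scores)
    return 0.65 + sum(gsc_scores) / 100.0

fc = function_count(inputs=20, outputs=15, internal_files=10,
                    interface_files=4, inquiries=12)   # 331
vaf = value_adjustment_factor([3] * 14)                # 1.07
fp = fc * vaf                                          # FP = FC x VAF, about 354.2
print(fc, vaf, fp)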
• Customer problems are all the difficulties customers encounter when using the product.
• They include:
  - Valid defects
  - Usability problems
  - Unclear documentation or information
  - Duplicates of valid defects (problems already fixed but not known to the customer)
  - User errors
• The problem metric is usually expressed in terms of problems per user-month (PUM).
• PUM = Total problems that customers reported for a time period / Total number of license-months of the software during the period,
  where
  Number of license-months = Number of installed licenses of the software x Number of months in the calculation period
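A small sketch of the PUM arithmetic; the numbers in the example are invented for illustration:

def problems_per_user_month(total_problems, installed_licenses, months):
    """PUM = problems reported in the period / license-months in the period,
    where license-months = installed licenses x months in the period."""
    license_months = installed_licenses * months
    return total_problems / license_months

# Example: 250 problems reported over a 3-month period by 4,000 installed licenses
print(problems_per_user_month(250, 4000, 3))  # -> ~0.021 problems per user-month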
• Improve the development process and reduce the product defects.
• Reduce the non-defect-oriented problems by improving all aspects of the product (e.g., usability, documentation), customer education, and support.
• Increase the sales (number of installed licenses) of the product.
Defect rate versus problems per user-month (PUM):
• Numerator: defect rate uses valid and unique product defects; PUM uses all customer problems (defects and non-defects, first time and repeated).
• Denominator: defect rate uses size of product (KLOC or function points); PUM uses customer usage of the product (user-months).
• Measurement perspective: defect rate reflects the producer (software development organization); PUM reflects the customer.
• Scope: defect rate covers intrinsic product quality; PUM covers intrinsic product quality plus other factors.
[Figure: scopes of the three metrics – customer satisfaction issues, customer problems, and defects, from broadest to narrowest.]
• Customer satisfaction is often measured by customer survey data via the five-point scale:
  - Very satisfied
  - Satisfied
  - Neutral
  - Dissatisfied
  - Very dissatisfied
• CUPRIMDSO:
  - Capability (functionality)
  - Usability
  - Performance
  - Reliability
  - Installability
  - Maintainability
  - Documentation
  - Service
  - Overall
• FURPS:
  - Functionality
  - Usability
  - Reliability
  - Performance
  - Service
1. Percent of completely satisfied customers
2. Percent of satisfied customers (satisfied and completely satisfied)
3. Percent of dissatisfied customers (dissatisfied and completely dissatisfied)
4. Percent of nonsatisfied customers (neutral, dissatisfied, and completely dissatisfied)
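A hedged sketch of how these four percentages could be derived from five-point survey counts; mapping “very satisfied” to “completely satisfied”, along with the dictionary keys and the example counts, is an assumption made for illustration:

def satisfaction_percentages(counts):
    """counts: number of survey responses in each five-point category."""
    total = sum(counts.values())

    def pct(*categories):
        return round(100.0 * sum(counts[c] for c in categories) / total, 1)

    return {
        "completely satisfied": pct("very satisfied"),
        "satisfied": pct("very satisfied", "satisfied"),
        "dissatisfied": pct("dissatisfied", "very dissatisfied"),
        "nonsatisfied": pct("neutral", "dissatisfied", "very dissatisfied"),
    }

survey = {"very satisfied": 40, "satisfied": 30, "neutral": 15,
          "dissatisfied": 10, "very dissatisfied": 5}
print(satisfaction_percentages(survey))
# -> completely satisfied 40.0, satisfied 70.0, dissatisfied 15.0, nonsatisfied 30.0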
• Defect density during machine testing
• Defect arrival pattern during machine testing
• Phase-based defect removal pattern
• Defect removal effectiveness
• Defect rate during formal machine testing (testing after code is integrated into the system library) is usually positively correlated with the defect rate in the field.
• The simple metric of defects per KLOC or function point is a good indicator of quality while the product is being tested.
• Scenarios for judging release quality:
  - If the defect rate during testing is the same as or lower than that of the previous release, then ask: Did the testing for the current release deteriorate?
    - If the answer is no, the quality perspective is positive.
    - If the answer is yes, you need to do extra testing.
• Scenarios for judging release quality (cont’d):
  - If the defect rate during testing is substantially higher than that of the previous release, then ask: Did we plan for and actually improve testing effectiveness?
    - If the answer is no, the quality perspective is negative.
    - If the answer is yes, the quality perspective is the same or positive.
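The two scenarios can be expressed as a small decision helper; this is only a sketch of the logic on the slides, and the parameter names and the threshold for “substantially higher” are assumptions:

def release_quality_perspective(current_rate, previous_rate,
                                testing_deteriorated=False,
                                improved_test_effectiveness=False,
                                substantial_factor=1.2):
    """Judge release quality from the test defect rates of the current and
    previous release, following the two scenarios above. 'substantial_factor'
    decides what counts as substantially higher (an assumption)."""
    if current_rate <= previous_rate:
        # Same or lower defect rate: positive, unless testing itself got weaker.
        return "needs extra testing" if testing_deteriorated else "positive"
    if current_rate >= substantial_factor * previous_rate:
        # Substantially higher defect rate: acceptable only if testing
        # effectiveness was deliberately improved.
        return "same or positive" if improved_test_effectiveness else "negative"
    return "inconclusive"

print(release_quality_perspective(1.8, 2.0))  # -> positive
print(release_quality_perspective(3.0, 2.0, improved_test_effectiveness=True))
# -> same or positive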
• The pattern of defect arrivals gives more information than defect density during testing.
• The objective is to look for defect arrivals that stabilize at a very low level, or times between failures that are far apart, before ending the testing effort and releasing the software.
Three related patterns are tracked:
• The defect arrivals during the testing phase by time interval (e.g., week). These are raw arrivals, not all of which are valid.
• The pattern of valid defect arrivals – when problem determination is done on the reported problems. This is the true defect pattern.
• The pattern of defect backlog over time. This is needed because development organizations cannot investigate and fix all reported problems immediately. This metric is a workload statement as well as a quality statement.
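A sketch of tracking the three patterns week by week; the data layout and all the weekly counts are invented for illustration:

# Illustrative weekly data: raw reported arrivals, the subset judged valid
# after problem determination, and the number of problems closed each week.
raw_arrivals = [40, 55, 60, 45, 30, 20, 12, 8]
valid_arrivals = [30, 42, 50, 36, 24, 15, 9, 6]
closed_per_week = [20, 35, 45, 40, 28, 18, 12, 7]

backlog = 0
for week, (valid, closed) in enumerate(zip(valid_arrivals, closed_per_week), start=1):
    backlog += valid - closed   # open valid defects at the end of the week
    print(f"week {week}: raw={raw_arrivals[week - 1]}, valid={valid}, backlog={backlog}")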
• This is an extension of the test defect density metric.
• It requires tracking defects in all phases of the development cycle.
• The pattern of phase-based defect removal reflects the overall defect removal ability of the development process.
• DRE = (Defects removed during a development phase / Defects latent in the product) x 100%
• The denominator can only be approximated.
• It is usually estimated as: defects removed during the phase + defects found later.
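A minimal sketch of the DRE estimate, using the usual approximation of the latent-defect denominator described above; the example numbers are illustrative:

def defect_removal_effectiveness(removed_in_phase, found_later):
    """DRE = defects removed during the phase / (defects removed during the
    phase + defects found later) x 100%."""
    return 100.0 * removed_in_phase / (removed_in_phase + found_later)

# Example: 90 defects removed in a phase, 30 of its defects found later
print(defect_removal_effectiveness(90, 30))  # -> 75.0 (%)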
• When done for the front end of the process (before code integration), it is called early defect removal effectiveness.
• When done for a specific phase, it is called phase effectiveness.
• The goal during maintenance is to fix the defects as soon as possible with excellent fix quality.
• The following metrics are important:
  - Fix backlog and backlog management index (see the sketch after this list)
  - Fix response time and fix responsiveness
  - Percent delinquent fixes
  - Fix quality