Software Metrics
1. What is the size of the program?
2. What is the estimated cost and duration of the software?
3. Is a requirement testable?
4. When is the right time to stop testing?
5. What is the effort expended during maintenance
phase?
Software Metrics
6. What is the complexity of a module?
7. What is the module strength and coupling?
8. What is the reliability at the time of release?
9. Which test technique is more effective?
10. Are we testing hard or are we testing smart?
11. Do we have a strong program or a weak test suite?
Software Metrics
 Pressman explained: “A measure provides a quantitative
indication of the extent, amount, dimension, capacity, or size
of some attribute of the product or process”.
 Measurement is the act of determining a measure.
 The metric is a quantitative measure of the degree to which
a system, component, or process possesses a given
attribute.
 Fenton defined measurement as “ it is the process by which
numbers or symbols are assigned to attributes of entities in
the real world in such a way as to describe them according
to clearly defined rules”.
Software Metrics
 Definition
Software metrics can be defined as “The continuous application of
measurement based techniques to the software development
process and its products to supply meaningful and timely
management information, together with the use of those
techniques to improve that process and its products”.
Software Metrics
 Areas of Applications
The most established area of software metrics is cost and size
estimation techniques.
Software metrics can be used to measure the effectiveness of
various activities or processes such as inspections and audits.
Various software constructs such as size, coupling, cohesion or
inheritance can be measured using software metrics.
The prediction of quality levels for software, often in terms of
reliability, is another area where software metrics have an important
role to play.
Software Metrics
 Areas of Applications
Another important area of application of software metrics is
prediction of software quality attributes. There are many quality
attributes proposed in the literature such as maintainability,
testability, usability, reliability.
The metrics also provide meaningful and timely information to the
management. Software quality, process efficiency, and people
productivity can be computed using the metrics.
The use of software metrics to provide quantitative checks on
software design is also a well established area.
Software Metrics
 Areas of Applications
SRS is an important document produced during the software
development life cycle. The metrics can be used to measure the
readability, faults found during SRS verification, change request
frequency etc.
Testing metrics can be used to measure the effectiveness of the test
suite. Testing metrics include the number of failures experienced per
unit of time, the number of paths in a control flow graph, the number of
independent paths, the percentage of statements covered, and the
percentage of decisions covered.
Statement coverage metrics, for instance, calculate the percentage of
statements in the source code covered during testing, and thus measure
the effectiveness of the test suite.
Measurements Should Be Meaningful
 Repeatable – Example: same measuring time and same instrument
 Precise – valid scale and known source
 Comparable – Example: over time and/or between sources
 Economical – affordable to collect and analyze compared to their value
Software Metrics
Measurement Basics
1. Identify characteristics for real life entities
2. Identify empirical relations for the characteristics
3. Determine numerical relations for the empirical relations
4. Map real world entities to numbers
5. Check whether the numeric relations preserve the empirical relations
Software Metrics
Measurement Basics
In the first step, the characteristics for representing real life entities
are identified.
In the second step, the empirical relations for these characteristics
are identified.
In the third step, numerical relations corresponding to the empirical
relations are determined.
In the fourth step, real world entities are mapped to numbers.
In the last step, we determine whether the numeric relations preserve
the empirical relations.
Software Metrics
 Problems During Implementation
 Statement : Software development is too complex; it cannot be
managed like other parts of the organization.
Management view : Forget it; we will find developers and
managers who will manage that development.
 Statement : I am only six months late with project.
Management view : Fine, you are only out of a job.
 Statement : But you cannot put reliability constraints
in the contract.
Management view : Then we may not get the contract.
Software Metrics
The metrics can be categorized by the entities we need to
measure. In software development there are two entities that
need to be measured:
• Products: deliverables or documents produced during
software development.
• Processes: sets of activities that are used to produce a
product.
 Categories of Metrics
Software Metrics
• The metrics related to the product are known as
product metrics and the metrics related to the
process are known as process metrics.
• The process metrics measures the effectiveness of
the process.
• These metrics can be applied at all the phases of
software development.
Software Metrics
The process metrics can be used to:
• measure the cost of the process
• measure the time taken to complete the process
• measure the efficiency and effectiveness of the process
• compare various processes in order to determine which one
is effective
• guide future projects
Software Metrics
• The product metrics can be used to assess the
document or a deliverable produced during the
software development life cycle.
• The examples of product metrics include size,
functionality, complexity and quality.
• Documents such as SRS, user manual can be
measured for correctness, readability and
understandability.
Software Metrics
• The process and product metrics can be further
categorized as internal or external attributes:
– Internal attributes: are those that are related to the internal
structure of the product or the process.
– External attributes are those that are related to the
behaviour of the product or the process.
Software Metrics
• For example, the size of a class, structural
attributes including complexity of the source code,
number of independent paths, number of
statements, and number of branches can be
determined without executing the source code.
These are known as internal attributes of a product
or process.
• When the source code is executed the number of
failures encountered, user friendliness of the forms
and navigational items or response time of a
module describe the behavior of the software.
These are known as external attributes of a
process or product. The external attributes are
related to the quality of the system.
Software Metrics
Software metrics
• Process
– Internal attributes: failure rate found in reviews, number of issues
– External attributes: effectiveness of a method
• Product
– Internal attributes: size, inheritance, coupling
– External attributes: reliability, maintainability, usability
Software Metrics
• There are two types of data: non-metric and metric.
• Non-metric data is of categorical or discrete type. For
example, the gender of a person can be male or female.
• In contrast, metric data represents an amount or magnitude,
such as lines of source code.
Software Metrics
• Non-metric measurements can be measured on either a
nominal scale or an ordinal scale.
• A nominal scale describes the categories or classes of a
variable.
• A metric has a nominal scale when it can be divided into
categories or classes and there is no ranking or ordering
amongst the categories or classes.
Software Metrics
• The number indicates the presence or absence of the
attribute value. For example, class x is faulty or not faulty;
thus there are two categories, fault present or fault absent:

x = 0, if not faulty
x = 1, if faulty
Software Metrics
• Similar to nominal scales, a metric having an ordinal scale
can be divided into categories or classes; however, it
involves ranking or ordering.
• In an ordinal scale, each category can be compared with
another in terms of a “higher than” or “lower than”
relationship.
• For example, fault impact can be categorized as high,
medium, or low:

fault impact = 1, if high
fault impact = 2, if medium
fault impact = 3, if low
Software Metrics
• Metric scales provide higher precision, permitting various
arithmetic operations to be performed.
• Interval, ratio, and absolute scales are of metric type. In an
interval scale, the difference between two adjacent points is
equal on any part of the scale.
• The interval scale has an arbitrary zero point.
• Thus, on an interval scale, it is not possible to express any
value as a multiple of some other value on the scale.
Software Metrics
• For example, a day with a temperature of 100°F cannot be
said to be twice as hot as a day with 50°F. The reason is
that on the Celsius scale 100°F is 37.8°C and 50°F is 10°C.
The relationship between the scales is:
• C = 5 × ((F − 32) / 9)
Software Metrics
• Ratio scales give more precision, since they have all the
advantages of the other scales plus an absolute zero point.
• For example, if the weight of a person is 100 kg, then
he/she is twice as heavy as a person weighing 50 kg.
• The absolute scale simply represents counts.
• For example, the number of faults encountered during
inspections can only be measured in one way: by counting
the faults encountered.
Software Metrics
Measurement type | Measurement scale | Characteristics | Transformation | Examples
Non-metric | Nominal | Order not defined; arithmetic not involved | One-to-one mapping | Categorical classifications like type of language (C++, Java)
Non-metric | Ordinal | Order defined (=, <, >); arithmetic not involved | Monotonic increasing function P(x) > P(y) | Student grades, customer satisfaction levels, employee capability levels
Metric | Interval | =, <, >; no ratios; addition, subtraction; arbitrary zero point | P = xP' + y | Temperatures, date and time
Metric | Ratio | Absolute zero point; all arithmetic operations possible | P = xP' | Weight, height, length
Metric | Absolute | Simple count values | P = P' | Number of faults encountered in testing
Software Metrics
Example 8.1: Consider the maintenance effort in terms of
lines of source code added, deleted, or changed during the
maintenance phase, classified between 1 and 5, where 1
means very high, 2 means high, 3 means medium, 4 means
low, and 5 means very low.
1. What is the measurement scale for this definition of
maintenance effort?
2. Give example criteria for determining the maintenance
effort level of a given class.
Software Metrics
Analyzing the Metric Data
• The role of statistics is to function as a tool in
analyzing research data and drawing conclusions
from it.
• The research data must be suitably reduced to be
read easily and used for further analysis.
• Descriptive statistics concern the development of certain
indices or measures to summarize data.
• Data can be summarized using measures of central
tendency (mean, median, and mode) and measures of
dispersion (standard deviation, variance, and quartiles).
Software Metrics
Measures of central tendency
• Measures of central tendency include mean,
median and mode.
• These measures are known as measures of central
tendency as they give us the idea about the central
values of the data around which all the other data
points have a tendency to gather.
• Mean can be computed by taking the average of the values
in the data set and is given as:

Mean = (Σ xi) / N, where the sum runs over i = 1, …, N
Software Metrics
Analyzing the Metric Data
• Median gives the middle value in the data set: half of the
data points are below the median value and half of the data
points are above it. It is calculated as the ((n+1)/2)th value
of the sorted data set, where n is the number of data points
in the data set.
• The most frequently occurring value in the data set
is denoted by mode.
Software Metrics
Choice of Measures of Central Tendency
The choice of selecting a measure of central tendency
depends upon:
1. the scale type of data at which it is measured
2. the distribution of data (left skewed, symmetrical, right
skewed)
Measure | Relevant scale type
Mean | Interval and ratio data which are not skewed
Median | Ordinal, interval, and ratio data; not useful for ordinal scales having few values
Mode | All scale types; not useful for scales having multiple values
Software Metrics
Graphs representing skewed and symmetrical distributions
Software Metrics
Example 8.2: Consider the following data set consisting
of lines of source code (SLOC) for a given project.
Calculate mean, median and mode for it.
107, 128, 186, 222, 322, 466, 657, 706, 790, 844,
1129, 1280, 1411, 1532, 1824, 1882, 3442
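A minimal C++ sketch of the solution (not from the original slides; the container choice and output format are illustrative):

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    // SLOC values from Example 8.2, sorted ascending (n = 17).
    std::vector<double> x = {107, 128, 186, 222, 322, 466, 657, 706, 790,
                             844, 1129, 1280, 1411, 1532, 1824, 1882, 3442};

    // Mean: sum of the values divided by the number of values.
    double sum = 0;
    for (double v : x) sum += v;
    double mean = sum / x.size();        // 16928 / 17 = 995.76

    // Median: the ((n+1)/2)th value of the sorted data (n is odd here).
    std::sort(x.begin(), x.end());
    double median = x[x.size() / 2];     // 9th value = 790

    // Mode: the most frequently occurring value; every value in this
    // data set occurs exactly once, so no single mode exists.
    std::cout << "mean = " << mean << ", median = " << median << '\n';
}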
Software Metrics
Measures of dispersion
• Measures of dispersion include standard deviation,
variance and quartiles. Measures of dispersion tell us
how the data is scattered or spread. Standard deviation
calculates the distance of the data point from the mean.
If most of the data points are far away from the mean
then the standard deviation of the variable is large. The
standard deviation is calculated as given below:
σ = sqrt( Σ (x − x̄)² / N )

where x̄ is the mean of the data values and N is the number of data points.
Software Metrics
Measures of dispersion
• Variance is a measure of variability and is the square of
standard deviation.
• The quartile divides the metric data into four equal
parts. For calculating quartile, the data is first arranged
in ascending order.
• The 25 percent of the metric data is below the lower
quartile (25 percentile), fifty percent of the metric data is
below the median value and seventy five percent of
metric data is below the upper quartile (75 percentile).
Software Metrics
[Figure: the lower quartile, median, and upper quartile divide the data set into four equal parts (1st part, 2nd part, 3rd part, 4th part).]
Software Metrics
Measures of dispersion
• The lower quartile (Q1) is computed by:
– finding the median of the data set
– finding median of the lower half of the data set
• The upper quartile (Q3) is computed by:
– finding the median of the data set
– finding median of the upper half of the data set
Software Metrics
Measures of dispersion
• Interquartile range = Q3 - Q1
Example: Consider data set consisting of lines of source
code (SLOC) given in example 8.2. Calculate standard
deviation, variance and quartile for it.
107, 128, 186, 222, 322, 466, 657, 706, 790, 844, 1129,
1280, 1411, 1532, 1824, 1882, 3442
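A minimal C++ sketch of these calculations for the data of Example 8.2 (illustrative; the quartile convention used here excludes the median from each half, and other conventions exist):

#include <algorithm>
#include <cmath>
#include <iostream>
#include <vector>

// Median of the sorted index range [lo, hi).
double median(const std::vector<double>& x, size_t lo, size_t hi) {
    size_t n = hi - lo;
    return n % 2 ? x[lo + n / 2] : (x[lo + n / 2 - 1] + x[lo + n / 2]) / 2.0;
}

int main() {
    std::vector<double> x = {107, 128, 186, 222, 322, 466, 657, 706, 790,
                             844, 1129, 1280, 1411, 1532, 1824, 1882, 3442};
    std::sort(x.begin(), x.end());

    // Standard deviation: root of the mean squared distance from the mean.
    double sum = 0;
    for (double v : x) sum += v;
    double mean = sum / x.size();
    double ss = 0;
    for (double v : x) ss += (v - mean) * (v - mean);
    double variance = ss / x.size();
    double sd = std::sqrt(variance);

    // Quartiles: medians of the lower and upper halves of the data.
    size_t half = x.size() / 2;                 // 8
    double q1 = median(x, 0, half);             // (222 + 322) / 2 = 272
    double q3 = median(x, half + 1, x.size());  // (1411 + 1532) / 2 = 1471.5
    std::cout << "sd = " << sd << ", variance = " << variance
              << ", Q1 = " << q1 << ", Q3 = " << q3
              << ", IQR = " << q3 - q1 << '\n';
}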
Software Metrics
Metric Data Distribution
• In order to understand the metric data, the starting point
is to analyze the shape of the distribution of the data.
• There are a number of methods available to analyze the
shape of the data; one of them is the histogram, through
which a researcher can gain insight into the normality of
the data.
• A histogram is a graphical representation of the frequency
of occurrence of the values of a given variable.
Software Metrics
Metric Data Distribution
[Figure: histogram of SLOC (x-axis: SLOC, 0–4000; y-axis: frequency, 0–6) with a normal curve superimposed.]
Software Metrics
Metric Data Distribution
• The bars show the frequency of the values of the LOC metric.
• The normal curve is superimposed on the distribution of
values to assess the normality of the LOC data.
• Here most of the data is left skewed or right skewed. Such
curves are not applicable to discrete data (nominal or ordinal).
• For example, the classes may be faulty or non-faulty; thus
the distribution will not be normal.
Software Metrics
Metric Data Distribution
• The measures of central tendency (mean, median, and
mode) are all equal for normal curves.
• The normal curve is bell shaped, and about 99.7% of the
data occurs within three standard deviations of the mean.
Software Metrics
Metric Data Distribution
• Consider the data sets given for three metric variables in
the table below. Determine the normality of these variables.

Fault count | Cyclomatic complexity | Branch count
470.00 | 26.00 | 826.00
128.00 | 20.00 | 211.00
268.00 | 14.00 | 485.00
19.00 | 10.00 | 29.00
404.00 | 15.00 | 405.00
127.00 | 11.00 | 240.00
263.00 | 14.00 | 464.00
94.00 | 10.00 | 187.00
Software Metrics
Metric Data Distribution
Fault count | Cyclomatic complexity | Branch count
207.00 | 13.00 | 344.00
42.00 | 7.00 | 83.00
24.00 | 10.00 | 47.00
94.00 | 6.00 | 163.00
34.00 | 9.00 | 67.00
286.00 | 10.00 | 503.00
104.00 | 12.00 | 175.00
82.00 | 8.00 | 147.00
20.00 | 7.00 | 39.00
Software Metrics
Metric Data Distribution
[Figure: histogram of Fault_count (x-axis 0.00–500.00; y-axis frequency 0–8).]
Software Metrics
Metric Data Distribution
[Figure: histogram of Cyclomatic_complexity (x-axis 5.00–30.00; y-axis frequency 0–6).]
Software Metrics
Metric Data Distribution
[Figure: histogram of Branch_count (x-axis 0.00–1000.00; y-axis frequency 0–5).]
Software Metrics
Metric Data Distribution
Metric | Mean | Median | Std. Deviation
Fault count | 156.8235 | 108 | 137.7599
Cyclomatic complexity | 11.88235 | 10 | 5.035901
Branch count | 259.7059 | 187 | 216.9976
Software Metrics
Metric Data Distribution
• The normality of the data can also be checked by
calculating mean and median.
• The mean, median, and the mode should be same for a
normal distribution.
• We compare the values of mean and median.
• The values of median are less than the mean for
variables fault count and branch count.
• Thus, these variables are not normally distributed.
• However, the mean and median of cyclomatic complexity
do not differ by a large amount.
Software Metrics
Outlier Analysis
• Data points, which are located in an empty part of the
sample space, are called outliers.
• These are the data values that are numerically distant
from the rest of the data.
• For example, suppose one calculates the average
weight of 10 students in a class, where most are
between 51 pounds and 61 pounds, but the weight of
one student is 210 pounds. In this case, the mean will
be 72 pounds and the median will be 58.
Software Metrics
Outlier Analysis
• Hence, the median better reflects the weight of the
students than the mean.
• Thus, the data point with the value 210 is an outlier; that
is, it is located away from the other values in the data set.
• Outlier analysis is done to find data points that are over-
influential, so that a decision can be made about removing them.
Software Metrics
Outlier Analysis
• Once the outliers are identified the decision about the
inclusion or exclusion of the outlier must be made.
• The decision depends upon the reason why the case is
identified as outlier.
• There are three types of outliers: univariate, bivariate
and multivariate.
• Univariate outliers are those exceptional values that
occur within a single variable.
• Bivariate outliers occur within the combination of two
variables and multivariate outliers are present within the
combination of more than two variables.
Software Metrics
Outlier Analysis
• Box plots and scatter plots are two popular methods
that are used for univariate and bivariate outlier
detection.
• Box plots are based on median and quartiles. The upper
and lower quartiles statistics are used to construct a box
plot.
• The median value is the middle value of the data set
half of the values are less than this value and half of the
values are greater than this value.
Software Metrics
Outlier Analysis
[Figure: a box plot, showing the lower quartile, median, upper quartile, and the start and end of the tails.]
Software Metrics
Outlier Analysis
• The box starts at the lower quartile and ends at the
upper quartile.
• The distance between lower and the upper quartile is
called box length.
• The tails of the box plot specify the bounds between
which all the data points must lie.
• The start of the tail is Q1 − 1.5 × IQR and the end of the
tail is Q3 + 1.5 × IQR.
Software Metrics
Outlier Analysis
• These values are truncated to the nearest values of the
actual data points in order to avoid negative values.
• Thus, actual start of the tail is the lowest value in the
variable above (Q1 -1.5 × IQR) and actual end of the tail
is the highest value below (Q3 +1.5 × IQR).
• Any value outside the start or the end of the tail is an
outlier; these data points must be identified, as they are
unusual occurrences of data values which must be
considered for inclusion or exclusion.
Software Metrics
Outlier Analysis
• The box plots also tell us whether the data is skewed or
not.
• If the data is not skewed the median will lie in the center
of the box.
• If the data is left or right skewed, then the median will
be away from the center.
Software Metrics
Outlier Analysis
For example consider the LOC values for a sample project
given below:
17, 25, 36, 48, 56, 62, 78, 82, 103, 140, 162, 181, 202, 251,
310, 335, 508
Software Metrics
Outlier Analysis
• The median of the data set is 103, lower quartile is 56 and
upper quartile is 202. The interquartile range is 146.
• The start of the tail is 56-1.5×146=-163 and end of the tail
is 202+1.5×146=421.
• The actual start of the tail is the lowest value above −163,
i.e., 17, and the actual end of the tail is the highest value
below 421, i.e., 335.
• Thus case number 17, with value 508, is above the end of
the tail and hence is an outlier.
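A minimal C++ sketch of this fence computation (illustrative; the quartile values are taken from the text above rather than recomputed):

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<double> loc = {17, 25, 36, 48, 56, 62, 78, 82, 103, 140,
                               162, 181, 202, 251, 310, 335, 508};
    std::sort(loc.begin(), loc.end());

    double q1 = 56, q3 = 202;        // lower/upper quartile (from the text)
    double iqr = q3 - q1;            // 146
    double lo = q1 - 1.5 * iqr;      // -163: theoretical start of the tail
    double hi = q3 + 1.5 * iqr;      // 421: theoretical end of the tail

    // Any value outside [lo, hi] is flagged as a univariate outlier.
    for (double v : loc)
        if (v < lo || v > hi)
            std::cout << v << " is an outlier\n";   // prints 508
}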
Outlier Analysis
[Figure: box plot of LOC (0.00–600.00); case 17 (value 508) lies beyond the end of the tail and is plotted as an outlier.]
Software Metrics
Example:
Consider the data set given in example above. Construct
box plots and identify univariate outliers for all the variables
in the data set.
Outlier Analysis
[Figures: box plots of Fault_count (0–500), Cyclomatic_complexity (5–30; case 1 is plotted as an outlier), and Branch_count (0–800).]
Software Metrics
• The outliers should be analyzed by the researchers to
make a decision about their inclusion or exclusion in
the data analysis.
• There may be many reasons for an outlier
– error in the entry of the data
– some extra information that represents extraordinary or
unusual event
– an extraordinary event that is unexplained by the
researcher.
Software Metrics
• Outlier values may be present due to combination of
data values present across more than one variable.
• These outliers are called multivariate outliers.
• Scatter plot is another visualization method to detect
outliers.
• In scatter plots, we simply represent graphically all the
data points.
• The scatter plot allows us to examine more than one
metric variable at a given time.
Software Metrics
• The univariate outliers can also be determined by
calculating the z-score of each data point of a variable.
• Data values whose z-scores exceed ±2.5 are considered to
be outliers.
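A minimal C++ helper for this check (illustrative; the function name zOutliers and the reuse of the LOC data are not from the slides):

#include <cmath>
#include <iostream>
#include <vector>

// Prints the values of a variable whose z-score magnitude exceeds 2.5.
void zOutliers(const std::vector<double>& x) {
    double sum = 0;
    for (double v : x) sum += v;
    double mean = sum / x.size();
    double ss = 0;
    for (double v : x) ss += (v - mean) * (v - mean);
    double sd = std::sqrt(ss / x.size());

    for (double v : x) {
        double z = (v - mean) / sd;      // distance from the mean in SD units
        if (std::fabs(z) > 2.5)
            std::cout << v << " (z = " << z << ") is an outlier\n";
    }
}

int main() {
    // The LOC data from the box plot example; 508 is flagged (z is about 2.7).
    zOutliers({17, 25, 36, 48, 56, 62, 78, 82, 103, 140,
               162, 181, 202, 251, 310, 335, 508});
}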
Outlier Analysis
[Figure: scatter plot of Fault_count (y-axis, 0.00–500.00) against Cyclomatic_complexity (x-axis, 5.00–30.00).]
Software Metrics
Exploring Analysis
• The metric variables can be of two types: independent
variables and the dependent (target) variable.
• The effect of the independent variables on the dependent
variable can be explored by using various statistical
and machine learning methods.
• The choice of the method for analyzing the relationship
between the independent and dependent variables depends
upon the type of the dependent variable (continuous or
discrete).
Software Metrics
Exploring Analysis
• If the dependent variable is continuous, then the widely
known statistical method of linear regression may be used,
whereas if the dependent variable is of discrete type, then
the logistic regression method may be used for analyzing
relationships.
Software Project Planning
Size Estimation: Lines of Code (LOC)

1. int sort (int x[ ], int n)
2. {
3.   int i, j, save, im1;
4.   /* This function sorts array x in ascending order */
5.   if (n < 2) return 1;
6.   for (i = 2; i <= n; i++)
7.   {
8.     im1 = i - 1;
9.     for (j = 1; j <= im1; j++)
10.      if (x[i] < x[j])
11.      {
12.        save = x[i];
13.        x[i] = x[j];
14.        x[j] = save;
15.      }
16.  }
17.  return 0;
18. }

Fig. 2: Function for sorting an array

If LOC is simply a count of the number of lines, then the function shown in Fig. 2 contains 18 LOC. When comments and blank lines are ignored, it contains 17 LOC.
[Figure: Growth of Lines of Code (LOC), Jan 1993 to Apr 2001, on a scale of 0 to 2,500,000 lines; series shown: total LOC ("wc -l") and uncommented LOC, each for development and stable releases.]
Software Project Planning
Furthermore, if the main interest is the size of the program
for specific functionality, it may be reasonable to count only
executable statements. The only executable statements in the
figure shown above are in lines 5-17, leading to a count of
13. The counts thus differ: 18 versus 17 versus 13. One
can easily see the potential for major discrepancies for
large programs with many comments, or for programs written
in languages that allow a large number of descriptive but
non-executable statements. Conte has defined lines of code
as:
Software Project Planning
“A line of code is any line of program text that is not a
comment or blank line, regardless of the number of
statements or fragments of statements on the line. This
specifically includes all lines containing program header,
declaration, and executable and non-executable
statements”.
This is the predominant definition for lines of code used
by researchers. By this definition, figure shown above
has 17 LOC.
Software Project Planning
Software Metrics
Token Count
The size of the vocabulary of a program, which consists of the
number of unique tokens used to build the program, is defined as:

η = η1 + η2

where
η : vocabulary of a program
η1 : number of unique operators
η2 : number of unique operands
Software Metrics
The length of the program, in terms of the total number of tokens
used, is:

N = N1 + N2

where
N : program length
N1 : total occurrences of operators
N2 : total occurrences of operands
Software Metrics
V = N * log2 η
Volume
The unit of measurement of volume is the common unit for
size “bits”. It is the actual size of a program if a uniform
binary encoding for the vocabulary is used.
Program Level
The value of L ranges between zero and one, with L=1
representing a program written at the highest possible level
(i.e., with minimum size).
L = V* / V
Software Metrics
D = 1 / L
E = V / L = D * V
Program Difficulty
As the volume of an implementation of a program increases,
the program level decreases and the difficulty increases.
Thus, programming practices such as redundant usage of
operands, or the failure to use higher-level control constructs
will tend to increase the volume as well as the difficulty.
Effort
The unit of measurement of E is elementary mental
discriminations.
Software Metrics
 Estimated Program Length

The estimated program length is computed from the unique operator
and operand counts:

N̂ = η1 log2 η1 + η2 log2 η2

For example, with η1 = 14 and η2 = 10:

N̂ = 14 log2 14 + 10 log2 10 = 53.30 + 33.22 ≈ 86.5

The following alternate expression has been published to
estimate program length:

N̂J = log2(η1!) + log2(η2!)
Software Metrics
 Counting rules for C language
1. Comments are not considered.
2. Identifier and function declarations are not considered.
3. All variables and constants are considered operands.
4. Global variables used in different modules of the same program
are counted as multiple occurrences of the same variable.
5. Local variables with the same name in different functions are
counted as unique operands.
6. Function calls are considered operators.
7. All looping statements, e.g., do {…} while ( ), while ( ) {…}, for ( )
{…}, and all control statements, e.g., if ( ) {…}, if ( ) {…} else {…}, etc.,
are considered operators.
8. In the control construct switch ( ) {case:…}, switch as well as all the
case statements are considered operators.
9. Reserved words like return, default, continue, break, sizeof,
etc., are considered operators.
10. All brackets, commas, and terminators are considered operators.
11. GOTO is counted as an operator and the label is counted as an
operand.
12. The unary and binary occurrences of “+” and “-” are dealt with
separately. Similarly, the occurrences of “*” (multiplication operator)
are dealt with separately.
13. In array variables such as “array-name [index]”, “array-name”
and “index” are considered operands and [ ] is considered an operator.
14. In structure variables such as “struct-name.member-name”
or “struct-name -> member-name”, struct-name and member-name
are taken as operands and ‘.’ and ‘->’ are taken as operators. The same
names of member elements in different structure variables are
counted as unique operands.
15. All hash directives are ignored.
Software Metrics
 Potential Volume

V* = (2 + η2*) log2 (2 + η2*)

where η2* is the number of conceptually unique input and output parameters.

 Estimated Program Level / Difficulty

Halstead offered an alternate formula that estimates the program level:

L̂ = 2η2 / (η1 × N2)

D = 1/L̂ = (η1 × N2) / (2η2)
Software Metrics
 Effort and Time

Ê = V / L̂ = D × V = (η1 × N2 × N × log2 η) / (2 × η2)

T̂ = Ê / β

β is normally set to 18, since this seemed to give the best results in
Halstead's earliest experiments, which compared the predicted
times with observed programming times, including the time for
design, coding, and testing.
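A minimal C++ sketch that evaluates these software science measures from the four basic counts (illustrative; the function name halstead and the sample counts, taken from the sorting example below, are not part of Halstead's definitions):

#include <cmath>
#include <iostream>

// Halstead software science measures from the basic counts.
void halstead(double eta1, double eta2, double N1, double N2, double eta2star) {
    double eta = eta1 + eta2;                   // vocabulary
    double N = N1 + N2;                         // program length
    double Nhat = eta1 * std::log2(eta1)
                + eta2 * std::log2(eta2);       // estimated program length
    double V = N * std::log2(eta);              // volume, in bits
    double Vstar = (2 + eta2star)
                 * std::log2(2 + eta2star);     // potential volume
    double L = Vstar / V;                       // program level
    double D = 1.0 / L;                         // difficulty
    double E = V / L;                           // effort
    double T = E / 18.0;                        // time in seconds, beta = 18

    std::cout << "eta=" << eta << " N=" << N << " Nhat=" << Nhat
              << " V=" << V << " L=" << L << " D=" << D
              << " E=" << E << " T=" << T << "s\n";
}

int main() {
    // Counts of the sorting program: eta1=14, eta2=10, N1=53, N2=38, eta2*=3.
    halstead(14, 10, 53, 38, 3);
}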
Software Metrics
Table 1: Language levels
Example 6.1
Consider the sorting program. List the operators and operands and
also calculate the values of software science measures like η, N, V, E, etc.

Solution
The list of operators and operands is given in Table 2.
Software Metrics
Table 2: Operators and operands of the sorting program
Here N1 = 53 and N2 = 38. The program length N = N1 + N2 = 91.

Vocabulary of the program: η = η1 + η2 = 14 + 10 = 24

Volume: V = N × log2 η = 91 × log2 24 = 417 bits

The estimated program length of the program:

N̂ = 14 log2 14 + 10 log2 10 = 53.30 + 33.22 ≈ 86.5
Software Metrics
Conceptually unique input and output parameters are represented by:
{x: array holding the integers to be sorted; used both as input and output}
{N: the size of the array to be sorted}

Since η2* = 3, the potential volume is
V* = (2 + η2*) log2 (2 + η2*) = 5 log2 5 = 11.6

L = V* / V
Software Metrics
Program level: L = V*/V = 11.6/417 = 0.027

Difficulty: D = 1/L = 1/0.027 = 37.03

Estimated program level: L̂ = 2η2 / (η1 × N2) = (2 × 10) / (14 × 38) = 0.038

Software Metrics
/* This function checks the validity of a triangle */
int triangle(int a, int b, int c)
{
    int valid;
    if (((a + b) > c) && ((c + a) > b) && ((b + c) > a))  /* checks validity */
    {
        valid = 1;
    }
    else
    {
        valid = 0;
    }
    return valid;
}
Software Metrics
• Consider the function that checks the validity of a
triangle. List the operators and operands and also
calculate the values of the software science metrics.
Software Metrics
Operators | Occurrences | Operands | Occurrences
int | 5 | triangle | 1
{ } | 3 | a | 4
if | 1 | b | 4
+ | 3 | c | 4
> | 3 | valid | 4
&& | 2 | 1 | 1
= | 2 | 0 | 1
else | 1 | |
return | 1 | |
; | 4 | |
, | 2 | |
( ) | 8 | |
N1 = 35 | | N2 = 19 |

η1 = 12, η2 = 7
Software Metrics
N = N1 + N2 = 35 + 19 = 54
η = η1 + η2 = 12 + 7 = 19

Estimated program length:
N̂ = η1 log2 η1 + η2 log2 η2 = 12 log2 12 + 7 log2 7 = 43.02 + 19.65 = 62.67

Volume:
V = N × log2 η = 54 × log2 19 = 54 × 4.25 = 229.4 bits

Potential volume (with η2* = 4):
V* = (2 + η2*) log2 (2 + η2*) = 6 log2 6 = 15.5

Estimated program level:
L̂ = 2η2 / (η1 × N2) = (2 × 7) / (12 × 19) = 0.0614

Difficulty:
D = 1/L̂ = 16.29

Effort:
E = V / L̂ = D × V = 16.29 × 229.4 ≈ 3737 elementary mental discriminations

Time:
T = E / β = 3737 / 18 ≈ 207.6 seconds ≈ 3.46 minutes
 The Sharing of Data Among Modules
A program normally contains several modules, and data is shared
among the modules through coupling. It may be desirable to know the
amount of data being shared among the modules.
Fig. 10: Three modules from an imaginary program
Software Metrics
Fig.11: ”Pipes” of data shared among the modules
Software Metrics
Fig.12: The data shared in program bubble
Software Metrics
Component : Any element identified by decomposing a (software)
system into its constituent parts.
Cohesion : The degree to which a component performs a single
function.
Coupling : The term used to describe the degree of linkage between
one component and others in the same system.
Information Flow Metrics
Software Metrics
 The Basic Information Flow Model
Information Flow metrics are applied to the Components of a system
design. Fig. 13 shows a fragment of such a design; for component ‘A’
we can define three measures, but remember that these are the
simplest models of IF.
1. ‘FAN-IN’ is simply a count of the number of other Components
that can call, or pass control to, Component A.
2. ‘FAN-OUT’ is the number of Components that are called by
Component A.
3. The third measure is derived from the first two by using the
following formula. We will call this measure the INFORMATION
FLOW index of Component A, abbreviated as IF(A):

IF(A) = [FAN-IN(A) × FAN-OUT(A)]²
Software Metrics
Fig.13: Aspects of complexity
Software Metrics
The following is a step-by-step guide to deriving these simplest of
IF metrics.
1. Note the level of each Component in the system design.
2. For each Component, count the number of calls to that
Component – this is the FAN-IN of the Component. Some
organizations allow more than one Component at the highest
level in the design, so for Components at the highest level, which
would otherwise have a FAN-IN of zero, assign a FAN-IN of one.
Also note that a simple model of FAN-IN can penalize reused
Components.
3. For each Component, count the number of calls from the
Component. For Components that call no others, assign a FAN-
OUT value of one.
Software Metrics
4. Calculate the IF value for each Component using the above
formula.
5. Sum the IF values for all Components within each level; this is
called the LEVEL SUM.
6. Sum the IF values for the total system design; this is called the
SYSTEM SUM.
7. For each level, rank the Components in that level according to
FAN-IN, FAN-OUT, and IF values. Three histograms or line plots
should be prepared for each level.
8. Plot the LEVEL SUM values for each level using a histogram or
line plot.
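A minimal C++ sketch of steps 2-6 for a small call graph (illustrative; the graph and the component names are invented, and the top-level/leaf adjustments from steps 2-3 are applied):

#include <algorithm>
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // Invented call graph: caller -> callees.
    std::map<std::string, std::vector<std::string>> calls = {
        {"A", {"B", "C"}}, {"B", {"C"}}, {"C", {}}};

    std::map<std::string, int> fanIn, fanOut;
    for (const auto& [caller, callees] : calls) {
        fanOut[caller] = callees.size();
        for (const auto& callee : callees) ++fanIn[callee];
    }

    long systemSum = 0;                      // step 6: SYSTEM SUM
    for (const auto& [comp, callees] : calls) {
        int fi = std::max(fanIn[comp], 1);   // step 2: top-level gets FAN-IN 1
        int fo = std::max(fanOut[comp], 1);  // step 3: leaf gets FAN-OUT 1
        long ifv = 1L * fi * fo * fi * fo;   // step 4: IF = (FAN-IN x FAN-OUT)^2
        systemSum += ifv;
        std::cout << comp << ": IF = " << ifv << '\n';
    }
    std::cout << "SYSTEM SUM = " << systemSum << '\n';
}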
Software Metrics
 A More Sophisticated Information Flow Model
a = the number of components that call A.
b = the number of parameters passed to A from components
higher in the hierarchy.
c = the number of parameters passed to A from components
lower in the hierarchy.
d = the number of data elements read by component A.
Then:
FAN IN(A)= a + b + c + d
Software Metrics
Also let:
e = the number of components called by A;
f = the number of parameters passed from A to components higher
in the hierarchy;
g = the number of parameters passed from A to components lower
in the hierarchy;
h = the number of data elements written to by A.
Then:
FAN OUT(A)= e + f + g + h
Software Metrics
Class A: fan-in = 0, fan-out = 4
Class B: fan-in = 1, fan-out = 1
Class C: fan-in = 2, fan-out = 1
Class D: fan-in = 1, fan-out = 1
Class E: fan-in = 1, fan-out = 0
Class F: fan-in = 2, fan-out = 0
Software Metrics
Data Structure Metrics
Program | Data Input | Internal Data | Data Output
Payroll | Name / Social Security No. / Pay rate / Number of hours worked | Withholding rates / Overtime factors / Insurance premium rates | Gross pay, withholding / Net pay / Pay ledgers
Spreadsheet | Item names / Item amounts / Relationships among items | Cell computations / Sub-totals | Spreadsheet of items and totals
Software Planner | Program size / No. of software developers on team | Model parameters / Constants / Coefficients | Est. project effort / Est. project duration

Fig. 1: Some examples of input, internal, and output data
Software Metrics
 The Amount of Data
A variable is a string of alphanumeric characters that is defined by a
developer and that is used to represent some value during either
compilation or execution. One method for determining the amount of
data is to count the number of entries in the cross-reference list.
Software Metrics
Fig.2: Payday program
Software Metrics
Fig.3: A cross reference of program payday
[The cross-reference lists, for each variable – check, gross, hours, net, pay, rate, tax – the line numbers at which it is referenced.]
Software Metrics
Halstead introduced a metric that he referred to as η2: a count of the
operands in a program, including all variables, constants, and labels.
Thus,

η2 = VARS + unique constants + labels

Fig. 4: Some items not counted as VARS (e.g., feof at line 9, stdin at line 10)
Software Metrics
Fig.6: Program payday with operands in brackets
Software Metrics
The Usage of Data within a Module
 Live Variables
Definitions :
1. A variable is live from the beginning of a procedure to the end
of the procedure.
2. A variable is live at a particular statement only if it is referenced
a certain number of statements before or after that statement.
3. A variable is live from its first to its last references within a
procedure.
Software Metrics
Fig.6: Bubble sort program
Software Metrics
It is thus possible to define the average number of live variables (LV),
which is the sum of the counts of live variables divided by the count of
executable statements in a procedure. This is a complexity measure for
data usage in a procedure or program. The live variables of the program
in Fig. 6 appear in Fig. 7; the average number of live variables for this
program is:

LV = 124/34 = 3.647
Software Metrics
Line | Live Variables | Count
4 | ---- | 0
5 | ---- | 0
6 | t, x, k | 3
7 | t, x, k | 3
8 | t, x, k | 3
9 | ---- | 0
10 | ---- | 0
11 | ---- | 0
12 | ---- | 0
13 | ---- | 0
14 | size | 1
15 | size, j | 2
16 | size, j, a, b | 4
cont…
Software Metrics
Line | Live Variables | Count
17 | size, j, a, b, last | 5
18 | size, j, a, b, last, continue | 6
19 | size, j, a, b, last, continue | 6
20 | size, j, a, b, last, continue | 6
21 | size, j, a, b, last, continue | 6
22 | size, j, a, b, last, continue | 6
23 | size, j, a, b, last, continue, i | 7
24 | size, j, a, b, last, continue, i | 7
25 | size, j, a, b, continue, i | 6
26 | size, j, a, b, continue, i | 6
27 | size, j, a, b, continue, i | 6
28 | size, j, a, b, continue, i | 6
29 | size, j, a, b, i | 5
cont…
Software Metrics
Line | Live Variables | Count
30 | size, j, a, b, i | 5
31 | size, j, a, b, i | 5
32 | size, j, a, b, i | 5
33 | size, j, a, b | 4
34 | size, j, a, b | 4
35 | size, j, a, b | 4
36 | j, a, b | 3
37 | -- | 0

Fig. 7: Live variables for the program in Fig. 6
Software Metrics
 Variable spans

21  scanf("%d %d", &a, &b);
…
32  x = a;
…
45  y = a - b;
…
53  z = a;
…
60  printf("%d %d", a, b);

Fig.: Statements in a C program referring to variables a and b

The size of a span indicates the number of statements that pass
between successive uses of a variable.
Software Metrics
 Making program-wide metrics from intra-module metrics

For example, if we want to characterize the average number of live
variables for a program having m modules, we can use:

LV(program) = (Σ LVi) / m, summed over i = 1, …, m

where LVi is the average live variable metric computed from the ith module.

The average span size for a program of n spans can be computed using:

SP(program) = (Σ SPi) / n, summed over i = 1, …, n
Software Metrics
 Program Weakness

A module consists of variables. Using the average number of live
variables (LV) and the average life of variables (γ), the module
weakness has been defined as:

WM = LV × γ
Software Metrics
A program is normally a combination of various modules; hence,
program weakness can be a useful measure and is defined as:

WP = (Σ WMi) / m, summed over i = 1, …, m

where
WMi : weakness of the ith module
WP : weakness of the program
m : number of modules in the program

Example 6.3
Consider a program for sorting and searching. The program sorts an
array using selection sort and then searches for an element in the
sorted array. The program is given in Fig. 8. Generate the cross-
reference list for the program and also calculate LV, γ, and WM for
the program.
Solution
The given program is 66 lines long and has 11 variables. The variables
are a, i, j, item, min, temp, low, high, mid, loc, and option.

Fig. 8: Sorting & searching program
The cross-reference list of the program is given below:

[For each of the 11 variables – a, i, j, item, min, temp, low, high, mid, loc, option – the cross-reference list gives the line numbers at which the variable is referenced, starting from its declaration in lines 11-14.]
Live variables per line are calculated as:

Line | Live Variables | Count
13 | low | 1
14 | low | 1
15 | low | 1
16 | low, i | 2
17 | low, i | 2
18 | low, i, a | 3
19 | low, i, a | 3
20 | low, i, a | 3
22 | low, i, a | 3
23 | low, i, a | 3
24 | low, i, a, min | 4
25 | low, i, a, min, j | 5
26 | low, i, a, min, j | 5
cont…
Software Metrics
Line | Live Variables | Count
27 | low, i, a, min, j | 5
28 | low, i, a, min, j | 5
29 | low, i, a, min, j, temp | 6
30 | low, i, a, min, j, temp | 6
31 | low, i, a, j, temp | 5
32 | low, i, a | 3
33 | low, i, a | 3
34 | low, i, a | 3
35 | low, i, a | 3
36 | low, i, a | 3
37 | low, i, a | 3
38 | low, a | 2
39 | low, a | 2
cont…
Line | Live Variables | Count
40 | low, a, option | 3
41 | low, a, option | 3
42 | low, a | 2
43 | low, a | 2
44 | low, a, item | 3
45 | low, a, item, high | 4
46 | low, a, item, high, mid | 5
47 | low, a, item, high, mid | 5
48 | low, a, item, high, mid | 5
49 | low, a, item, high, mid | 5
50 | low, a, item, high, mid | 5
51 | low, a, item, high, mid | 5
52 | low, a, item, high, mid | 5
cont…
Software Metrics
Line | Live Variables | Count
53 | low, a, item, high, mid | 5
54 | low, a, item, high, mid | 5
55 | a, item, mid | 3
56 | a, item, mid, loc | 4
57 | a, item, mid, loc | 4
58 | a, item, mid, loc | 4
59 | a, item, mid, loc | 4
60 | item, mid, loc | 3
61 | item, mid, loc | 3
62 | item, loc | 2
cont…
Software Metrics
Line | Live Variables | Count
63 | -- | 0
64 | -- | 0
65 | -- | 0
66 | -- | 0
Total | | 174
Average number of live variables:

LV = (Sum of counts of live variables) / (Count of executable statements) = 174/53 = 3.28

Average life of variables:

γ = (Sum of counts of live variables) / (Total number of variables) = 174/11 = 15.8

Module Weakness:

WM = LV × γ = 3.28 × 15.8 = 51.8
Object Oriented Metrics – Terminologies

S.No | Term | Meaning/Purpose
1 | Object | An entity able to save a state (information) and offering a number of operations (behavior) to either examine or affect this state.
2 | Message | A request that an object makes of another object to perform an operation.
3 | Class | A set of objects that share a common structure and common behavior manifested by a set of methods; the set serves as a template from which objects can be created.
4 | Method | An operation upon an object, defined as part of the declaration of a class.
5 | Attribute | Defines the structural properties of a class; unique within a class.
6 | Operation | An action performed by or on an object; available to all instances of a class; need not be unique.
S.No | Term | Meaning/Purpose
7 | Instantiation | The process of creating an instance of an object and binding or adding the specific data.
8 | Inheritance | A relationship among classes, wherein an object in a class acquires characteristics from one or more other classes.
9 | Cohesion | The degree to which the methods within a class are related to one another.
10 | Coupling | Object A is coupled to object B if and only if A sends a message to B.
Object Oriented Metrics
• Measuring on class level
– coupling
– inheritance
– methods
– attributes
– cohesion
• Measuring on system level
Object Oriented Metrics
• Object oriented systems are rapidly increasing in the
market. Object oriented software engineering leads to
better design, higher quality and maintainable software.
• As the object oriented development is growing, the
need for object oriented metrics that can be used
across the software industry is also increasing.
• The traditional metrics, although applicable to object
oriented systems, do not measure object oriented
constructs such as inheritance and polymorphism. This
need has led to the development of object oriented
metrics.
Object Oriented Metrics
Coupling Metrics
• The degree of interdependence between classes is
defined by coupling.
• During object oriented analysis and design phases,
coupling is measured by counting the relationship a
class has with other classes or systems.
• Coupling increases complexity and decreases
maintainability, reusability and understandability.
• Hence coupling should be reduced amongst classes
and the classes should be designed with the aim of
weak coupling.
Object Oriented Metrics
Coupling Metrics
Chidamber and Kemerer defined coupling as:
“Two classes are coupled when methods declared in one
class use methods or instance variables of the other
classes”.
Object Oriented Metrics
Coupling Metrics
• This definition also includes coupling based on
inheritance.
• In 1994, Chidamber and Kemerer defined Coupling
Between Objects (CBO). In their paper they defined CBO
as the count of the number of other classes to which a
class is coupled.
Object Oriented Metrics
Coupling Metrics
• The CBO definition given in 1994 includes inheritance
based coupling. For example, consider figure below,
two variables of other classes (class B and class C) are
used in class A, hence the value of CBO for class A is
2. Similarly, for class B and class C the value of CBO is
zero.
[Figure: class A declares the attributes B objB and C objC; classes B and C are shown alongside.]
Object Oriented Metrics
Coupling Metrics
Li and Henry used data abstraction technique for defining
coupling. Data abstraction provides the ability to create
user-defined data types called Abstract Data Types
(ADTs). Li and Henry defined Data Abstraction Coupling
(DAC) as:
DAC = number of ADTs defined in a class

In the figure above, class A has two abstract data types (i.e., two
non-simple attributes): objB and objC.
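A minimal C++ illustration of these counts (not from the slides; the empty class bodies are placeholders):

class B { /* ... */ };
class C { /* ... */ };

class A {
    B objB;   // non-simple attribute: an abstract data type
    C objC;   // non-simple attribute: an abstract data type
};
// DAC(A) = 2: two ADTs are defined in class A, matching CBO(A) = 2 above.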
Object Oriented Metrics
Coupling Metrics
Li and Henry defined another coupling metric known as
Message Passing Coupling (MPC) as “number of
unique send statements in a class”. Hence if four
different methods in class A access one method in class
B, then MPC is 4.
[Figure: four different methods of class A each access one method of class B, so MPC = 4.]
Object Oriented Metrics
Coupling Metrics
In 1994, Chidamber and Kemerer defined the RFC metric as the set
of methods defined in a class together with the methods called by the
class [CHID94]. It is given by RFC = |RS|, where RS, the response set
of the class, is given by:

RS = M ∪ (∪ over all i of {Rij})

where Mi = set of all methods in a class (total n) and Ri = {Rij} = set
of methods called by Mi.
Object Oriented Metrics
Class A consists of four methods: A1, A2, A3, A4.
A1 calls B1 and B2 of class B
A2 calls C1 of class C
A3 calls D1 and D2 of class D
Thus:
Mi = {A1, A2, A3, A4}
Ri = {B1, B2, C1, D1, D2}
RS = {A1, A2, A3, A4, B1, B2, C1, D1, D2}
RFC = |RS| = 9
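A minimal C++ sketch of this example (illustrative; the method bodies are empty apart from the calls that define the response set):

struct B { void B1() {} void B2() {} };
struct C { void C1() {} };
struct D { void D1() {} void D2() {} };

// RS = {A1, A2, A3, A4} plus {B1, B2, C1, D1, D2}, so RFC = |RS| = 9.
struct A {
    B b; C c; D d;
    void A1() { b.B1(); b.B2(); }   // calls two methods of class B
    void A2() { c.C1(); }           // calls one method of class C
    void A3() { d.D1(); d.D2(); }   // calls two methods of class D
    void A4() {}                    // calls no other methods
};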
Object Oriented Metrics
Coupling Metrics
• In 1997, Briand et al. gave a suite of 18 metrics that
measure different types of interaction between classes.
• These metrics may be used to guide software
developers about which types of coupling affect
maintenance cost and reduce reusability.
• Briand et al. observed that the coupling between classes can
be divided into different facets:
Object Oriented Metrics
Coupling Metrics
• Relationship: It refers to the type of relationship
between classes: friendship, inheritance, or other.
• Export or import coupling: this identifies whether a class A
uses the methods/variables of another class B (import
coupling) or whether its methods/variables are used by
other classes (export coupling).
Object Oriented Metrics
Coupling Metrics
• Type of interaction: Briand identified three types of
interaction between classes: class-attribute, class-
method, and method-method.
• Class-attribute (CA) interaction: if there is any change in
class B and/or class C, class A will be affected.
• Class-method (CM) interaction: if a parameter of class B's
type is passed as an argument to a method of class A, the
type of interaction is said to be class-method.
Object Oriented Metrics
Coupling Metrics
• Method-method (MM) interaction: If a method Mi of
class Ci calls method Mj of class Cj or method Mi of
class Ci consists of reference of method Mj of class Cj
as arguments, then there is MM type of interaction
between class Ci and class Cj.
Object Oriented Metrics
Coupling Metrics
• The metrics for CA are FCAIC, OCAIC, IFCAEC,
DCAEC, OCAEC.
• In these metrics the first one/two letters signify the type
of relationship (IF signify inverse friendship, A signify
ancestors, D signify descendant, F signify friendship,
and O signify others).
• The next two letters signify the type of interaction (CA,
CM, MM). Finally the last two letters signify import
coupling (IC) or export coupling (EC).
Object Oriented Metrics
Coupling Metrics
• Lee et al. acknowledged the need to differentiate
between inheritance-based and non inheritance-based
coupling by proposing the corresponding measures:
• Non Inheritance information flow-based coupling (NIH-
ICP),
• Information flow-based inheritance coupling (IH-ICP).
Information flow-based coupling (ICP) metric was the
sum of NIH-ICP and IH-ICP metrics.
• Lee et al. emphasized that their ICP metrics based on
method invocations, take polymorphism into account
Object Oriented Metrics
Cohesion Metrics
Cohesion is a desirable property of a class and should be
maximized, as it supports the concept of data hiding.
A low-cohesion class is more complex and is more prone to
faults in the development life cycle. Chidamber and
Kemerer proposed the Lack of Cohesion in Methods
(LCOM) metric in 1994.
LCOM measures the dissimilarity of methods in a class by
examining the attributes used by the methods.
It calculates the difference between the number of method
pairs that have similarity zero and the number of method
pairs that have similarity greater than zero.
The larger the similarity between methods, the more
cohesive is the class.
Object Oriented Metrics
M1 = {a1, a2, a3, a4}
M2 = {a1, a2}
M3 = {a3}
M4 = {a3, a4}

|M1 ∩ M2| = 1
|M1 ∩ M3| = 1
|M1 ∩ M4| = 1
|M2 ∩ M3| = 0
|M2 ∩ M4| = 0
|M3 ∩ M4| = 1

LCOM = 2 − 4 < 0; hence LCOM = 0.
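A minimal C++ sketch of this computation (illustrative; the attribute sets mirror the example above):

#include <algorithm>
#include <iostream>
#include <iterator>
#include <set>
#include <vector>

int main() {
    // Attribute sets used by the four methods of the example above.
    std::vector<std::set<int>> m = {{1, 2, 3, 4}, {1, 2}, {3}, {3, 4}};

    int zero = 0, nonzero = 0;
    for (size_t i = 0; i < m.size(); ++i)
        for (size_t j = i + 1; j < m.size(); ++j) {
            std::vector<int> common;
            std::set_intersection(m[i].begin(), m[i].end(),
                                  m[j].begin(), m[j].end(),
                                  std::back_inserter(common));
            if (common.empty()) ++zero; else ++nonzero;
        }

    // LCOM = (pairs with similarity zero) - (pairs with similarity > 0),
    // taken as 0 when the difference is negative.
    int lcom = std::max(0, zero - nonzero);   // 2 - 4 < 0, so LCOM = 0
    std::cout << "LCOM = " << lcom << '\n';
}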
Object Oriented Metrics
• Henderson-Sellers observed some problems in the definition
of the LCOM metric proposed by Chidamber and Kemerer
in 1994. The problems were:
• A number of real examples showed an LCOM value of zero
despite the presence of dissimilarity amongst methods;
hence a large number of projects showed low cohesion.
• No guideline for interpretation of the value of LCOM was given
by Chidamber and Kemerer.
Thus, Henderson-Sellers revised the LCOM metric. Consider m
methods accessing a set of attributes Di (i = 1, …, n), and let the
number of methods that access each datum be μ(Di). The revised
LCOM1 metric is given below:
Object Oriented Metrics
LCOM1 = ( (1/n) Σ μ(Di) − m ) / (1 − m), summed over i = 1, …, n
Object Oriented Metrics
• Bieman et al. defined two cohesion metrics: Tight Class
Cohesion (TCC) and Loose Class Cohesion (LCC).
• The TCC metric is defined as the percentage of pairs of directly
connected public methods of the class with common
attribute usage. LCC is the same as TCC, except that it also
considers indirectly connected methods.
• A method M1 is said to be indirectly connected with
method M3 if M1 is connected to method M2 and method
M2 is connected to method M3. Hence, indirectly
connected methods represent the transitive closure of directly
connected methods.
Object Oriented Metrics
class queue
{
private:
    int *a;
    int rear;
    int front;
    int n;
public:
    queue(int s)
    {
        n = s;
        rear = 0;
        front = 0;
        a = new int[n];
    }
Object Oriented Metrics
    int empty()
    {
        if (rear == 0)
        {
            return 1;
        }
        else
        {
            return 0;
        }
    }
    void insert(int);
    int remove();
    int getsize()
    {
        return n;
    }
    void display();
};
Object Oriented Metrics
void queue::insert(int data)
{
    if (rear == n)
    {
        cout << "Queue overflow";
    }
    else
    {
        a[rear++] = data;
    }
}

int queue::remove()
{
    int element, i;
    if (empty())
    {
        cout << "Queue underflow";
        getch();
    }
Object Oriented Metrics
    else
    {
        element = a[front];
        for (i = 0; i < rear - 1; i++)
        {
            a[i] = a[i + 1];
        }
        rear--;
    }
    return element;
}

void queue::display()
{
    int i;
    for (i = 0; i < rear; i++)
    {
        cout << a[i] << " ";
    }
    getch();
}
Object Oriented Metrics
[Figure: the attributes rear, front, n, and a of class queue, and the methods that use them: empty, getsize, display, insert, and remove.]
Object Oriented Metrics
• The pairs of public methods with common attribute usage are
given below:
• {(empty, insert), (empty, remove), (empty, display),
(getsize, insert), (insert, remove), (insert, display),
(remove, display)}

TCC(queue) = (7/10) × 100 = 70%
Object Oriented Metrics
• The methods empty and getsize are indirectly connected,
since empty is connected to insert and getsize is also
connected to insert. Thus, by transitivity, empty is
connected to getsize.

LCC(queue) = (10/10) × 100 = 100%
Object Oriented Metrics
• Lee et al. proposed the Information flow based cohesion (ICH)
metric.
• ICH for a class is defined as the weighted sum of the
number of invocations of other methods of the same class,
weighted by one plus the number of parameters of the invoked
method.
• The method remove invokes the method empty, which does not
have any arguments. Thus, ICH(queue) = 1.
Object Oriented Metrics
Inheritance Metrics
• The inheritance is measured in terms of depth of
inheritance hierarchy by many authors in the literature.
• The depth of a class within an inheritance hierarchy is
measured by Depth of Inheritance Tree (DIT) metric given
by Chidamber and Kemerer, 1994.
• It is measured as the number of steps from the class node
to the root node of the tree.
• In case involving multiple inheritances, the DIT will be the
maximum length from the node to the root of the tree.
Object Oriented Metrics
[Figure 8: an inheritance hierarchy of classes A, B, C, D, E, and F, in which class D inherits from both B and C.]
Object Oriented Metrics
Inheritance Metrics
The average inheritance depth (AID) is calculated as [YAP93]:

AID = (Σ depth of each class) / (total number of classes)

Under multiple inheritance, the depth of a class averages over its
parent branches; for example, the depth of subclass D is (2 + 1)/2 = 1.5.
Object Oriented Metrics
Inheritance Metrics
• The AID of overall inheritance structure is:0(A)+1(B)+0(C)
+D(1.5)+E(1)+0(F)=3.5. Finally dividing by total number of classes we
get 3.5/6=0.
• Number of Children (NOC) metric counts the number of immediate sub
classes of a class in a hierarchy. In figure 8. NOC value for class A is 1
and class E is 2.
• Lorenz and Kidd developed Number of Parents (NOP) metric which
counts the number of classes that a class directly inherits (i.e. multiple
inheritance) and Number of Descendants (NOD) as the number of sub
classes of a class (both directly and indirectly). Number of Ancestors
(NOA) given by Tegarden and Sheetz (1992) counts number of base
classes of a class (both directly and indirectly). Hence , NOA(D)=3 (A,
B, C), NOP(D)=2(B,C) and NOD(A)=2 (B,D).
Object Oriented Metrics
Inheritance Metrics
• Lorenz and Kidd gave three measures Number of methods
Overridden (NMO), Number of Methods Added (NMA) and
Number of methods Inherited (NMI).
• When a method in a sub class has the same name and
type (signature) as in the super class, then the method in
the super class is said to be overridden by the method in
the sub class.
• NMA counts the number of new methods (neither
overridden nor inherited) added in a class. NMI counts
number of methods a class inherits from its super classes.
Object Oriented Metrics
Inheritance Metrics
• Finally, Lorenz and Kidd use NMO, NMA, and NMI metrics
to calculate Specialization Index (SIX) as given below:
SIX = (NMO × DIT) / (NMO + NMA + NMI)
Object Oriented Metrics
class Person {
protected:
    char name[25];
    int age;
public:
    void readperson();
    void displayperson();
};

class Student : public Person {
protected:
    char roll_no[10];
    float average;
public:
    void readperson();
    void displayperson();
    float getaverage();
};
Object Oriented Metrics
class GradStudent : public Student {
private:
    char subject[25];
    char working[25];
public:
    void readperson();
    void displayperson();
    void workstatus();
};
Object Oriented Metrics
Inheritance Metrics
• The class Student overrides two methods of class Person:
readperson() and displayperson(). Thus, the value of the NMO
metric for class Student is 2. One new method is added
in this class (getaverage), hence the value of the NMA metric is 1.
• For class GradStudent, NMO = 2 (readperson() and
displayperson()), NMA = 1 (workstatus), NMI = 1 (getaverage),
and DIT = 2. The value of SIX for class GradStudent is:

SIX = (2 × 2) / (2 + 1 + 1) = 4/4 = 1
Object Oriented Metrics
Size Metrics
Several traditional metrics are applicable to object oriented
systems. The traditional LOC metric is a measure of the size of
a class.
Halstead's software science and McCabe's measures for
measuring size are also applicable to object oriented
systems; however, the object oriented paradigm defines a
different way of doing things.
This has led to the development of size metrics applicable to
object oriented constructs. Chidamber and Kemerer
defined the Weighted Methods per Class (WMC) metric, given by:

WMC = Σ Ci, summed over i = 1, …, n

where Ci is the complexity of the ith method of the class.
Object Oriented Metrics
Size Metrics
Number of Attributes (NOA), given by Lorenz and Kidd, is
defined as the sum of the number of instance variables and the
number of class variables.
Number of Methods (NOM), defined by Li and Henry (1993), is
the number of local methods defined in a class.
They also gave two additional size metrics besides the LOC
metric, SIZE1 and SIZE2, given as:
SIZE1 = number of semicolons in a class
SIZE2 = NOA + NOM
Measuring Software Quality
Software quality should be an essential practice in software
development, and thus arises the need to measure aspects of
software quality.
Measuring quality attributes guides software professionals
about the quality of the software.
Software quality must be measured throughout the software
development life cycle phases.
Software Reliability Models
 Basic Execution Time Model
The failure intensity λ as a function of the mean failures experienced μ is given by:

λ(μ) = λ0 (1 - μ/V0)    (1)

where λ0 is the initial failure intensity and V0 is the total number of failures expected in infinite time.

Fig.7.13: Failure intensity λ as a function of μ for basic model
Software Reliability
The rate of change of failure intensity with respect to the mean failures experienced is:

dλ/dμ = -λ0/V0    (2)

Fig.7.14: Relationship between dλ/dμ and μ for basic model
Software Reliability
For a derivation of this relationship, equation (1) can be written as:

dμ/dτ = λ(μ) = λ0 (1 - μ/V0)

The above differential equation can be solved for μ(τ), resulting in:

μ(τ) = V0 [1 - exp(-λ0 τ / V0)]    (3)
Software Reliability
The failure intensity as a function of execution time, shown in the figure given below, is obtained by substituting equation (3) into equation (1):

λ(τ) = λ0 exp(-λ0 τ / V0)

Fig.7.15: Failure intensity versus execution time for basic model
Software Reliability
 Derived quantities
Software Reliability
The additional number of failures Δμ required to be experienced to move from a present failure intensity λP to a failure intensity objective λF is:

Δμ = (V0/λ0)(λP - λF)

Fig.7.16: Additional failures required to be experienced to reach the objective

The additional execution time Δτ required to reach the objective can be derived in mathematical form as:

Δτ = (V0/λ0) ln(λP/λF)

Fig.7.17: Additional time required to reach the objective
Example- 7.1
Assume that a program will experience 200 failures in infinite time. It has now experienced 100 failures. The initial failure intensity was 20 failures/CPU hr.
Software Reliability
(i) Determine the current failure intensity.
(ii) Find the decrement of failure intensity per failure.
(iii) Calculate the failures experienced and failure intensity after 20 and 100 CPU hrs. of execution.
(iv) Compute additional failures and additional execution time required to reach the failure intensity objective of 5 failures/CPU hr.
Use the basic execution time model for the above mentioned calculations.
Solution
Here V0 = 200 failures, μ = 100 failures and λ0 = 20 failures/CPU hr.
Software Reliability
(i) Current failure intensity:
λ(μ) = λ0 (1 - μ/V0) = 20 (1 - 100/200) = 20 (1 - 0.5) = 10 failures/CPU hr.
Software Reliability
(ii) Decrement of failure intensity per failure can be calculated as:
dλ/dμ = -λ0/V0 = -20/200 = -0.1/CPU hr.
(iii) (a) Failures experienced & failure intensity after 20 CPU hr:
μ(τ) = V0 [1 - exp(-λ0 τ / V0)]

μ(20) = 200 [1 - exp(-(20 × 20)/200)] = 200 (1 - exp(-2)) = 200 (1 - 0.1353) = 173 failures
Software Reliability
λ(τ) = λ0 exp(-λ0 τ / V0)

λ(20) = 20 exp(-(20 × 20)/200) = 20 exp(-2) = 2.71 failures/CPU hr.

(b) Failures experienced & failure intensity after 100 CPU hr:

μ(100) = 200 [1 - exp(-(20 × 100)/200)] = 200 (1 - exp(-10)) = 200 failures (almost)

λ(100) = 20 exp(-(20 × 100)/200) = 20 exp(-10) = 0.000908 failures/CPU hr.
(iv) Additional failures required to reach the failure intensity objective of 5 failures/CPU hr.:

Δμ = (V0/λ0)(λP - λF) = (200/20)(10 - 5) = 50 failures
Software Reliability
Additional execution time required to reach the failure intensity objective of 5 failures/CPU hr.:

Δτ = (V0/λ0) ln(λP/λF) = (200/20) ln(10/5) = 6.93 CPU hr.
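The calculations above can be checked mechanically. Below is a minimal C++ sketch of the basic-model relationships (equations (1) and (3) plus the two derived quantities); the function and parameter names are our own illustrative choices, and main() simply reproduces the numbers of Example 7.1.

#include <cmath>
#include <cstdio>

// Basic execution time model: lambda0 = initial failure intensity,
// v0 = total failures expected in infinite time.
double intensityAtMu(double lambda0, double v0, double mu) {
    return lambda0 * (1.0 - mu / v0);                  // equation (1)
}
double failuresAtTau(double lambda0, double v0, double tau) {
    return v0 * (1.0 - std::exp(-lambda0 * tau / v0)); // equation (3)
}
double intensityAtTau(double lambda0, double v0, double tau) {
    return lambda0 * std::exp(-lambda0 * tau / v0);
}
double deltaMu(double lambda0, double v0, double lp, double lf) {
    return (v0 / lambda0) * (lp - lf);                 // additional failures
}
double deltaTau(double lambda0, double v0, double lp, double lf) {
    return (v0 / lambda0) * std::log(lp / lf);         // additional execution time
}

int main() {
    // Example 7.1: V0 = 200 failures, lambda0 = 20 failures/CPU hr.
    std::printf("current intensity: %.2f\n", intensityAtMu(20, 200, 100));  // 10.00
    std::printf("mu(20):            %.0f\n", failuresAtTau(20, 200, 20));   // 173
    std::printf("lambda(20):        %.2f\n", intensityAtTau(20, 200, 20));  // 2.71
    std::printf("delta mu:          %.0f\n", deltaMu(20, 200, 10, 5));      // 50
    std::printf("delta tau:         %.2f\n", deltaTau(20, 200, 10, 5));     // 6.93
    return 0;
}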
Example 10.1: A program will experience 100 failures in infinite time. It has now experienced 50 failures. The initial failure intensity is 10 failures/hour. Use the basic execution time model for the following:
• Find the present failure intensity.
• Calculate the decrement of failure intensity per failure.
• Determine the failures experienced and failure intensity after 10 and 50 hours of execution.
• Find the additional failures and additional execution time needed to reach the failure intensity objective of 2 failures/hour.
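(These can be worked out exactly as in Example 7.1, with V0 = 100 failures, λ0 = 10 failures/hour and μ = 50 failures.)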
 Logarithmic Poisson Execution Time Model
Failure intensity as a function of the mean failures experienced is given by:

λ(μ) = λ0 exp(-θμ)

where θ is the failure intensity decay parameter.

Fig.7.18: Relationship between λ and μ for the logarithmic Poisson model
Software Reliability
The rate of change of failure intensity is:

dλ/dμ = -θ λ0 exp(-θμ) = -θλ

Fig.7.19: Relationship between dλ/dμ and μ for the logarithmic Poisson model

The mean failures experienced and the failure intensity as functions of execution time are:

μ(τ) = (1/θ) ln(λ0 θ τ + 1)

λ(τ) = λ0 / (λ0 θ τ + 1)    (4)
Software Reliability
The derived quantities for the logarithmic Poisson model are:

Δμ = (1/θ) ln(λP/λF)

Δτ = (1/θ)(1/λF - 1/λP)

where λP is the present failure intensity and λF is the failure intensity objective.
Example- 7.2
Assume that the initial failure intensity is 20 failures/CPU hr. The failure intensity decay parameter is 0.02/failure. We have experienced 100 failures up to this time.
Software Reliability
(i) Determine the current failure intensity.
(ii) Calculate the decrement of failure intensity per failure.
(iii) Find the failures experienced and failure intensity after 20 and 100 CPU hrs. of execution.
(iv) Compute the additional failures and additional execution time required to reach the failure intensity objective of 2 failures/CPU hr.
Use the logarithmic Poisson execution time model for the above mentioned calculations.
Solution
Software Reliability
(i) Current failure intensity:
λ(μ) = λ0 exp(-θμ)

Here λ0 = 20 failures/CPU hr., θ = 0.02/failure and μ = 100 failures.

λ = 20 exp(-0.02 × 100) = 2.7 failures/CPU hr.
Software Reliability
(ii) Decrement of failure intensity per failure can be calculated as:
dλ/dμ = -θλ = -0.02 × 2.7 = -0.054/CPU hr.

(iii) (a) Failures experienced & failure intensity after 20 CPU hr:

μ(τ) = (1/θ) ln(λ0 θ τ + 1)

μ(20) = (1/0.02) ln(20 × 0.02 × 20 + 1) = 50 ln 9 = 109 failures

λ(τ) = λ0 / (λ0 θ τ + 1)

λ(20) = 20 / (20 × 0.02 × 20 + 1) = 20/9 = 2.22 failures/CPU hr.
(b) Failures experienced & failure intensity after 100 CPU hr:

μ(100) = (1/0.02) ln(20 × 0.02 × 100 + 1) = 50 ln 41 = 186 failures

λ(100) = 20 / (20 × 0.02 × 100 + 1) = 20/41 = 0.4878 failures/CPU hr.
Software Reliability
(iv) Additional failures required to reach the failure intensity objective of 2 failures/CPU hr.:

Δμ = (1/θ) ln(λP/λF) = (1/0.02) ln(2.7/2) = 15 failures
Additional execution time required to reach the failure intensity objective of 2 failures/CPU hr.:

Δτ = (1/θ)(1/λF - 1/λP) = (1/0.02)(1/2 - 1/2.7) = 6.5 CPU hr.
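The logarithmic Poisson calculations can be checked the same way. This companion C++ sketch (again with illustrative names of our own choosing) mirrors λ(μ), μ(τ), λ(τ) and the derived quantities for that model, reproducing the numbers of Example 7.2.

#include <cmath>
#include <cstdio>

// Logarithmic Poisson model: lambda0 = initial failure intensity,
// theta = failure intensity decay parameter.
double logIntensityAtMu(double lambda0, double theta, double mu) {
    return lambda0 * std::exp(-theta * mu);
}
double logFailuresAtTau(double lambda0, double theta, double tau) {
    return std::log(lambda0 * theta * tau + 1.0) / theta;
}
double logIntensityAtTau(double lambda0, double theta, double tau) {
    return lambda0 / (lambda0 * theta * tau + 1.0);   // equation (4)
}
double logDeltaMu(double theta, double lp, double lf) {
    return std::log(lp / lf) / theta;                 // additional failures
}
double logDeltaTau(double theta, double lp, double lf) {
    return (1.0 / lf - 1.0 / lp) / theta;             // additional execution time
}

int main() {
    // Example 7.2: lambda0 = 20 failures/CPU hr., theta = 0.02/failure.
    std::printf("current intensity: %.2f\n", logIntensityAtMu(20, 0.02, 100)); // 2.71
    std::printf("mu(20):            %.2f\n", logFailuresAtTau(20, 0.02, 20));  // 109.86
    std::printf("lambda(20):        %.2f\n", logIntensityAtTau(20, 0.02, 20)); // 2.22
    std::printf("delta mu:          %.0f\n", logDeltaMu(0.02, 2.7, 2));        // 15
    std::printf("delta tau:         %.2f\n", logDeltaTau(0.02, 2.7, 2));       // 6.48
    return 0;
}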
Example- 7.3
The following parameters for the basic and logarithmic Poisson models are given:

Basic execution time model: λ0 = 10 failures/CPU hr., V0 = 100 failures
Logarithmic Poisson execution time model: λ0 = 30 failures/CPU hr., θ = 0.025/failure

(a) Determine the additional failures and additional execution time required to reach the failure intensity objective of 5 failures/CPU hr. for both models.
(b) Repeat this for an objective of 0.5 failures/CPU hr. Assume that we start with the initial failure intensity only.
Solution
Software Reliability
(a) (i) Basic execution time model
Since we start with the initial failure intensity, λP (the present failure intensity) in this case is the same as λ0 (the initial failure intensity).

Δμ = (V0/λ0)(λP - λF) = (100/10)(10 - 5) = 50 failures

Δτ = (V0/λ0) ln(λP/λF) = (100/10) ln(10/5) = 6.93 CPU hr.

(ii) Logarithmic Poisson execution time model

Δμ = (1/θ) ln(λP/λF) = (1/0.025) ln(30/5) = 71.67 failures

Δτ = (1/θ)(1/λF - 1/λP) = (1/0.025)(1/5 - 1/30) = 6.66 CPU hr.
Software Reliability
(b) Failure intensity objective = 0.5 failures/CPU hr.
(i) Basic execution time model

Δμ = (V0/λ0)(λP - λF) = (100/10)(10 - 0.5) = 95 failures

Δτ = (V0/λ0) ln(λP/λF) = (100/10) ln(10/0.5) = 30 CPU hr.
(ii) Logarithmic Poisson execution time model

Δμ = (1/θ) ln(λP/λF) = (1/0.025) ln(30/0.5) = 164 failures

Δτ = (1/θ)(1/λF - 1/λP) = (1/0.025)(1/0.5 - 1/30) = 78.66 CPU hr.

The logarithmic Poisson model has thus calculated more failures than the basic model in almost the same duration of execution time initially.
Software Quality metrics based on Defects
• According to the IEEE/ANSI standard, a defect can be defined as “an accidental condition that causes a unit of the system to fail to function as required”.
• A fault can cause many failures; hence there is no one-to-one correspondence between a fault and a failure.
Software Quality metrics based on Defects
Defect density
Defect density can be measured as the ratio of the number of defects encountered to the size of the software. Size is usually measured in thousands of lines of code (KLOC), giving:

Defect density = Number of defects / KLOC
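For example, a 20 KLOC system in which 120 defects have been found has a defect density of 120/20 = 6 defects/KLOC.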
Software Quality metrics based on Defects
Phase based defect density
It is an extension of the defect density metric. The defect density can be tracked at various phases of software development, including verification activities such as reviews, inspections and formal reviews before the start of validation testing.
Software Quality metrics based on Defects
Defect removal effectiveness
Defect Removal Effectiveness (DRE) is defined as:

DRE = Defects removed in a given life cycle phase / Latent defects
Software Quality metrics based on Defects
Defect removal effectiveness
Latent defects for a given phase are not known. Thus, they are estimated as the sum of the defects removed during the phase and the defects detected later. The higher the value of the metric, the more efficient and effective is the process followed in a particular phase.

DRE = DB / (DB + DA)

where DB is the number of defects removed during a given phase and DA is the number of defects detected later.
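For example, if 80 defects are removed during the design phase and 20 further design defects are detected in later phases, then DRE for the design phase is 80/(80 + 20) = 0.8.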
Software Quality metrics based on Defects
Testing coverage metrics can be used to monitor the amount of testing being done. These include the basic coverage metrics:
Statement coverage metric describes the degree to which statements are covered while testing.
Branch coverage metric determines whether each branch in the source code has been tested.
Operation coverage metric determines whether every operation of a class has been tested.
Condition coverage metric determines whether each condition is evaluated both for true and false.
Software Quality metrics based on Defects
Path coverage metric determines whether each path of the control flow graph has been exercised or not.
Loop coverage metric determines how many times a loop is covered.
Multiple condition coverage metric determines whether every possible combination of conditions is covered.
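To make the difference between branch coverage and condition coverage concrete, consider this small hypothetical C++ function (our own illustration, not from the text):

// Returns 1 and sets result when the division is allowed, 0 otherwise.
int safeDivide(int a, int b, int &result) {
    if (b != 0 && a >= 0) {   // one branch, two conditions
        result = a / b;
        return 1;
    }
    return 0;
}

The two tests (a=4, b=2) and (a=4, b=0) already achieve 100% branch coverage, but condition coverage additionally requires a test such as (a=-1, b=2), so that the condition a >= 0 is also evaluated to false.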
Software Quality metrics based on Defects
The test focus (TF) metric is given as
TF = Number of STRs fixed and closed / Total number of STRs

where STR denotes a software trouble report.
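For example, if 45 out of 50 STRs have been fixed and closed, TF = 45/50 = 0.9.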
Software Quality metrics based on Defects
The fault coverage metric (FCM) is given as:
FCM = (Number of faults addressed × severity of faults) / (Total number of faults × severity of faults)
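For example, if the faults addressed carry a severity-weighted total of 29 against a severity-weighted total of 35 for all reported faults, then FCM = 29/35 ≈ 0.83.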
2_metrics modified.ppt of software quality metrics

  • 2. Software Metrics 1. What is the size of the program? 2. What is the estimated cost and duration of the software? 3. Is a requirement testable? 4. When is the right time to stop testing? 5. What is the effort expended during maintenance phase?
  • 3. Software Metrics 6. What is the complexity of a module? 7. What is the module strength and coupling? 8. What is the reliability at the time of release? 9. Which test technique is more effective? 10. Are we testing hard or are we testing smart? 11. Do we have a strong program or a weak test suite?
  • 4. Software Metrics  Pressman explained as “A measure provides a quantitative indication of the extent, amount, dimension, capacity, or size of some attribute of the product or process”.  Measurement is the act of determine a measure  The metric is a quantitative measure of the degree to which a system, component, or process possesses a given attribute.  Fenton defined measurement as “ it is the process by which numbers or symbols are assigned to attributes of entities in the real world in such a way as to describe them according to clearly defined rules”.
  • 5. Software Metrics  Definition Software metrics can be defined as “The continuous application of measurement based techniques to the software development process and its products to supply meaningful and timely management information, together with the use of those techniques to improve that process and its products”.
  • 6. Software Metrics  Areas of Applications The most established area of software metrics is cost and size estimation techniques. Software metrics can be used to measure the effectiveness of various activities or processes such as inspections and audits. Various software constructs such as size, coupling, cohesion or inheritance can be measured using software metrics. The prediction of quality levels for software, often in terms of reliability, is another area where software metrics have an important role to play.
  • 7. Software Metrics  Areas of Applications Another important area of application of software metrics is prediction of software quality attributes. There are many quality attributes proposed in the literature such as maintainability, testability, usability, reliability. The metrics also provide meaningful and timely information to the management. The software quality, process efficiency, people productivity can be computed using the metrics. The use of software metrics to provide quantitative checks on software design is also a well established area.
  • 8. Software Metrics  Areas of Applications SRS is an important document produced during the software development life cycle. The metrics can be used to measure the readability, faults found during SRS verification, change request frequency etc. Testing metrics can be used to measure the effectiveness of the test suite. The testing metrics include number of failures experienced per unit of time, number of paths in a control flow graph, number of independent paths, percentage of statement coverage, percentage of decision covered. Statement coverage metrics are also available that calculate the percentage of statements in the source code covered during testing. The effectiveness of test suite may be measured.
  • 9. 9 Measurements Should Be Meaning Repeatable Example: same measuring time and same instrument Precise Valid scale and known source Comparable Example: over time and/or between sources Economical Affordable to collect and analyze compared to their value Software Metrics
  • 10. 10 Software Metrics Check whether numeric relations preserve the empirical relations Map real world entities to numeric numbers Determine numerical relations for empirical relations Identify empirical relations for characteristics Identify characteristics for real life entities Measurement Basics
  • 11. Software Metrics Measurement Basics In the first step the characteristics for representing real life entities should be identified. In the second step the empirical relations for these characteristics are identified. The third step determines numerical relations corresponding to the empirical relations. In the next step, real world entities are mapped to numeric numbers in the last step we determine whether the numeric relations preserve the empirical relation.
  • 12. Software Metrics  Problems During Implementation  Statement : Software development is to complex; it cannot be managed like other parts of the organization. Management view : Forget it, we will find developers and managers who will manage that development.  Statement : I am only six months late with project. Management view : Fine, you are only out of a job.
  • 13. Software Metrics  Statement : I am only six months late with project. Management view : Fine, you are only out of a job.  Statement : But you cannot put reliability constraints in the contract. Management view : Then we may not get the contract.
  • 14. Software Metrics The metrics can be categorized by the entities we need to measure. In software development there are two entities that need to be measured: • Products are deliverables or documents produced during the software development. • Processes are set activities that are used to produce a product.  Categories of Metrics
  • 15. Software Metrics • The metrics related to the product are known as product metrics and the metrics related to the process are known as process metrics. • The process metrics measures the effectiveness of the process. • These metrics can be applied at all the phases of software development.
  • 16. Software Metrics The process metrics can be used to: • Measure the cost of the process • the time taken to complete the process • the efficiency and effectiveness of process in order to determine which one is effective. • compare various processes in order to determine which one is effective. • to guide future projects.
  • 17. Software Metrics • The product metrics can be used to assess the document or a deliverable produced during the software development life cycle. • The examples of product metrics include size, functionality, complexity and quality. • Documents such as SRS, user manual can be measured for correctness, readability and understandability.
  • 18. Software Metrics • The process and product metrics can be further categorized as internal or external attributes: – Internal attributes: are those that are related to the internal structure of the product or the process. – External attributes are those that are related to the behaviour of the product or the process.
  • 19. Software Metrics • For example, the size of a class, structural attributes including complexity of the source code, number of independent paths, number of statements, and number of branches can be determined without executing the source code. These are known as internal attributes of a product or process. • When the source code is executed the number of failures encountered, user friendliness of the forms and navigational items or response time of a module describe the behavior of the software. These are known as external attributes of a process or product. The external attributes are related to the quality of the system.
  • 20. Software Metrics Software Metrics Process Internal Attributes Failure rate found in reviews, No. of issues External Attributes Effectiveness of a method Product Internal Attributes Size, Inheritance, Coupling External Attributes Reliability, Maintainability, Usability
  • 25. Software Metrics • There are two types of data non metric and metric. • Non metric type of data is of categorical or discrete type. For example, gender of a person can be male or female. • In contrast, metric type of data represents amount or magnitude such as lines of source code.
  • 26. Software Metrics • Non metric measurements can be measured either on nominal scale or ordinal scale. • In nominal sale, the categories or class of a variable is described. • A metric has nominal scale when it can be divided into categories or classes and there is no ranking or ordering amongst the categories or classes.
  • 27. Software Metrics • The number indicates the presence or absence of the attribute value. For example, • Class x is faulty or not faulty. Thus there are two categories of faults either they are present or they are not present.     faulty if 1, faulty not if , 0 x
  • 28. Software Metrics • Similar to nominal scales, a metric having ordinal scale can be divided into categories or classes, however, it involves ranking or ordering. • In ordinal scale, each category can be compared with another in terms of “higher than” or “lower than” relationship. • For example, fault impact can be categorized as high, medium, low.       low 3, medium 2, high , 1 impact fault
  • 29. Software Metrics • Metric scale provides higher precision permitting various arithmetic operations to be performed. • Interval ratio and absolute scales are of metric type. In interval scale, on any part of the scale the difference between two adjacent points are equal • The interval scale has an arbitrary zero point. • Thus on an interval scale it is not possible to express any value in multiple of some other value on the scale.
  • 30. Software Metrics • For example, a day with 100 F temperature cannot be said as twice as hot as a day with 50 F temperature. The reason is that on Celsius scale 100 F is 37.8 and 50 F is 10. Thus this relationship can be expressed as: • C= 5 × ((F-32) / 9)
  • 31. Software Metrics • Ratio scales give more precision since they have all advantages of other scales plus an absolute zero point. • For example, if weight of a person is 100 kg then he/she is twice heavy as the person having 50 Kg weight. • Absolute scale simply represents counts. • For example, number of faults encountered during inspections can only be measured in one way by counting the faults encountered.
  • 32. Software Metrics Measurem ent type Measureme nt scale Characteristic s Transformatio n Examples Non metric Nominal Order not defined Arithmetic not involved One to one mapping Categorical classification s like type of language (C+ +, java) Ordinal Order defined Monotonic increasing function (=, <, >) Arithmetic not involved Increasing function P(x) > P(y) Student grades, customer satisfaction level, employee capability levels
  • 33. Software Metrics Metric Interval =, <, > No ratios Addition, subtraction Arbitrary zero point P=xP’ + y Temperatures, date and time Ratio Absolute zero point All arithmetic operations possible P=xP’ Weight, height, length Absolute Simple count values P=P’ Number of faults encountered in testing
  • 34. Software Metrics Example 8.1: Consider the maintenance effort in terms of lines of source code added, deleted, changes during maintenance phase classified between 1 and 5, where 1 means very high, 2 means high, 3 means medium, 4 means low and 5 means very low. 1. What is the measurement scale for this definition of maintenance effort? 2. Give an example to determine criteria to determine the maintenance effort levels for a given class.
  • 35. Software Metrics Analyzing the Metric Data • The role of statistics is to function as a tool in analyzing research data and drawing conclusions from it. • The research data must be suitably reduced to be read easily and used for further analysis. • Descriptive statistics concern development of certain indices or measures to summarize data. • Data can be summarized by using measures of central tendency (mean, median and mode) and measures of dispersion (standard deviation, variance, quartile).
  • 36. Software Metrics Measures of central tendency • Measures of central tendency include mean, median and mode. • These measures are known as measures of central tendency as they give us the idea about the central values of the data around which all the other data points have a tendency to gather. • Mean can be computed by taking average values of the data set and is given as:    N i i N x Mean 1 ) (
  • 37. Software Metrics Analyzing the Metric Data • Median gives the middle value in the data set which means half of the data points are below the median value and half of the data points are above the median value. It is calculated as the value of the data set, where n is the number of data points in the data set. • The most frequently occurring value in the data set is denoted by mode.
  • 38. Software Metrics Choice of Measures of Central Tendency The choice of selecting a measure of central tendency depends upon: 1. the scale type of data at which it is measured 2. the distribution of data (left skewed, symmetrical, right skewed) Measures Relevant Scale Type Mean Interval and ratio data which are not skewed. Median Ordinal, interval and ratio but not useful for ordinal scales having few values. Mode All scale types but not useful for scales having multiple values.
  • 39. Software Metrics Graphs representing skewed and symmetrical distributions
  • 40. Software Metrics Example 8.2: Consider the following data set consisting of lines of source code (SLOC) for a given project. Calculate mean, median and mode for it. 107, 128, 186, 222, 322, 466, 657, 706, 790, 844, 1129, 1280, 1411, 1532, 1824, 1882, 3442
  • 41. Software Metrics Measures of dispersion • Measures of dispersion include standard deviation, variance and quartiles. Measures of dispersion tell us how the data is scattered or spread. Standard deviation calculates the distance of the data point from the mean. If most of the data points are far away from the mean then the standard deviation of the variable is large. The standard deviation is calculated as given below: N x x    2 ) (  
  • 42. Software Metrics Measures of dispersion • Variance is a measure of variability and is the square of standard deviation. • The quartile divides the metric data into four equal parts. For calculating quartile, the data is first arranged in ascending order. • The 25 percent of the metric data is below the lower quartile (25 percentile), fifty percent of the metric data is below the median value and seventy five percent of metric data is below the upper quartile (75 percentile).
  • 43. Software Metrics Median Upper quartile Lower quartile 1st part 2nd part 3rd part 4th part
  • 44. Software Metrics Measures of dispersion • The lower quartile (Q1) is computed by: – finding the median of the data set – finding median of the lower half of the data set • The upper quartile (Q3) is computed by: – finding the median of the data set – finding median of the upper half of the data set
  • 45. Software Metrics Measures of dispersion • Interquartile range = Q3 - Q1 Example: Consider data set consisting of lines of source code (SLOC) given in example 8.2. Calculate standard deviation, variance and quartile for it. 107, 128, 186, 222, 322, 466, 657, 706, 790, 844, 1129, 1280, 1411, 1532, 1824, 1882, 3442
  • 46. Software Metrics Metric Data Distribution • In order to understand the metric data, the starting point is to analyze the shape of the distribution of the data. • There are a number of methods available to analyze the shape of the data one of them is histogram through which a researcher can gain insight about the normality of the data. • Histogram is a graphical representation of frequency of occurrences of the values of a given variable.
  • 47. Software Metrics Metric Data Distribution 4000 3000 2000 1000 0 SLOC 6 5 4 3 2 1 0 Frequency
  • 48. Software Metrics Metric Data Distribution • The bars show the frequency of values of LOC metric. • The normal curve is superimposed on the distribution of values to determine the normality of the data values of LOC. • Most of the data is left skewed or right skewed. These curves will not be applicable for discrete data (nominal or ordinal). • For example, the classes may be faulty or non faulty, thus the distribution will not be normal.
  • 49. Software Metrics Metric Data Distribution • The measures of central tendency such as mean, median and mode are all equal for normal curves. • The normal curves are like a bell shaped curve and within three standard deviations of the means, 96% of data occurs
  • 50. Software Metrics Metric Data Distribution • Consider the data sets given for three metric variables in table below. Determine the normality of these variables Fault count Cyclomatic complexity Branch count 470.00 26.00 826.00 128.00 20.00 211.00 268.00 14.00 485.00 19.00 10.00 29.00 404.00 15.00 405.00 127.00 11.00 240.00 263.00 14.00 464.00 94.00 10.00 187.00
  • 51. Software Metrics Metric Data Distribution Fault count Cyclomatic complexity Branch count 207.00 13.00 344.00 42.00 7.00 83.00 24.00 10.00 47.00 94.00 6.00 163.00 34.00 9.00 67.00 286.00 10.00 503.00 104.00 12.00 175.00 82.00 8.00 147.00 20.00 7.00 39.00
  • 52. Software Metrics Metric Data Distribution 500.00 400.00 300.00 200.00 100.00 0.00 Fault_count 8 6 4 2 0 Frequency
  • 53. Software Metrics Metric Data Distribution 30.00 25.00 20.00 15.00 10.00 5.00 Cyclomatic_complexity 6 5 4 3 2 1 0 Frequency
  • 54. Software Metrics Metric Data Distribution 1000.00 800.00 600.00 400.00 200.00 0.00 Branch_count 5 4 3 2 1 0 Frequency
  • 55. Software Metrics Metric Data Distribution Metric Mean Median Std. Deviation Fault count 156.8235 108 137.7599 Cyclomatic complexity 11.88235 10 5.035901 Branch count 259.7059 187 216.9976
  • 56. Software Metrics Metric Data Distribution • The normality of the data can also be checked by calculating mean and median. • The mean, median, and the mode should be same for a normal distribution. • We compare the values of mean and median. • The values of median are less than the mean for variables fault count and branch count. • Thus, these variables are not normally distributed. • However, the mean and median of cyclomatic complexity do not differ by a larger value.
  • 57. Software Metrics Outlier Analysis • Data points, which are located in an empty part of the sample space, are called outliers. • These are the data values that are numerically distant from the rest of the data. • For example, suppose one calculates the average weight of 10 students in a class, where most are between 51 pounds and 61 pounds, but the weight of one student is 210 pounds. In this case, the mean will be 72 pounds and the median will be 58.
  • 58. Software Metrics Outlier Analysis • Hence, the median better reflects the weight of the students than the mean. • Thus, the data point with the value 210 is an outlier, that is, it is located away from other values in the data set. • Outlier analysis is done to find data points that are over influential and removing them is essential.
  • 59. Software Metrics Outlier Analysis • Once the outliers are identified the decision about the inclusion or exclusion of the outlier must be made. • The decision depends upon the reason why the case is identified as outlier. • There are three types of outliers: univariate, bivariate and multivariate. • Univariate outliers are those exceptional values that occur within a single variable. • Bivariate outliers occur within the combination of two variables and multivariate outliers are present within the combination of more than two variables.
  • 60. Software Metrics Outlier Analysis • Box plots and scatter plots are two popular methods that are used for univariate and bivariate outlier detection. • Box plots are based on median and quartiles. The upper and lower quartiles statistics are used to construct a box plot. • The median value is the middle value of the data set half of the values are less than this value and half of the values are greater than this value.
  • 62. Software Metrics Outlier Analysis • The box starts at the lower quartile and ends at the upper quartile. • The distance between lower and the upper quartile is called box length. • The tails of the box plot specifies the bounds between which all the data points must lie. • The start of the tail is Q1 -1.5 × IQR and end of the tail is Q3 +1.5 × IQR.
  • 63. Software Metrics Outlier Analysis • These values are truncated to the nearest values of the actual data points in order to avoid negative values. • Thus, actual start of the tail is the lowest value in the variable above (Q1 -1.5 × IQR) and actual end of the tail is the highest value below (Q3 +1.5 × IQR). • Any value outside the start of the tail and the end of the tail is outlier and these data points must be identified as they are unusual occurrences of data values which must be considered for inclusion or exclusion.
  • 64. Software Metrics Outlier Analysis • The box plots also tell us whether the data is skewed or not. • If the data is not skewed the median will lie in the center of the box. • If the data is left or right skewed, then the median will be away from the center.
  • 65. Software Metrics Outlier Analysis For example consider the LOC values for a sample project given below: 17, 25, 36, 48, 56, 62, 78, 82, 103, 140, 162, 181, 202, 251, 310, 335, 508
  • 66. Software Metrics Outlier Analysis • The median of the data set is 103, lower quartile is 56 and upper quartile is 202. The interquartile range is 146. • The start of the tail is 56-1.5×146=-163 and end of the tail is 202+1.5×146=421. • The actual start of the tail is lowest value above -163 i.e. 17 and actual end of the tail is highest value below 421 i.e 335. • Thus the case number 17 with value 508 is above the end of the tail and hence is an outlier.
  • 68. Software Metrics Example: Consider the data set given in example above. Construct box plots and identify univariate outliers for all the variables in the data set.
  • 72. Software Metrics • The outliers should be analyzed by the researchers to make a decision about their inclusion or exclusion in the data analysis. • There may be many reasons for an outlier – error in the entry of the data – some extra information that represents extraordinary or unusual event – an extraordinary event that is unexplained by the researcher.
  • 73. Software Metrics • Outlier values may be present due to combination of data values present across more than one variable. • These outliers are called multivariate outliers. • Scatter plot is another visualization method to detect outliers. • In scatter plots, we simply represent graphically all the data points. • The scatter plot allows us to examine more than one metric variable at a given time.
  • 74. Software Metrics • The univariate outliers can also be determined by calculating the Z-score value of the data points of the variable. • The data values exceeding ±2.5 are considered to be outliers
  • 76. Software Metrics Exploring Analysis • The metric variables can be of two type’s independent variables and dependent (target) variable. • The effect of independent variables on the dependent variable can be explored by using various statistical and machine learning methods. • The choice of the method for analyzing the relationship between independent and dependent variable depends upon the type of the dependent variable (continuous or discrete).
  • 77. Software Metrics Exploring Analysis • If the dependent variable is continuous, then the widely known statistical method linear regression method may be used, whereas if the dependent variable is of discrete type them logistic regression method may be used for analyzing relationships.
  • 78. Software Metrics Exploring Analysis • If the dependent variable is continuous, then the widely known statistical method linear regression method may be used, whereas if the dependent variable is of discrete type them logistic regression method may be used for analyzing relationships.
  • 79. 1. int sort (int x[ ], int n) 2. { 3. int i, j, save, im1; 4. /*This function sorts array x in ascending order */ 5. If (n<2) return 1; 6. for (i=2; i<=n; i++) 7. { 8. im1=i-1; 9. for (j=1; j<=im1; j++) 10. if (x[i] < x[j]) 11. { 12. save = x[i]; 13. x[i] = x[j]; 14. x[j] = save; 15. } 16. } 17. return 0; 18. } If LOC is simply a count of the number of lines then figure shown below contains 18 LOC . When comments and blank lines are ignored, the program in figure 2 shown below contains 17 LOC. Lines of Code (LOC) Size Estimation Software Project Planning Fig. 2: Function for sorting an array
  • 80. 0 500,000 1,000,000 1,500,000 2,000,000 2,500,000 Jan 1993 Jun 1994 Oct 1995 Mar 1997 Jul 1998 Dec 1999 Apr 2001 Total LOC Total LOC ("wc -l") -- development releases Total LOC ("wc -l") -- stable releases Total LOC uncommented -- development releases Total LOC uncommented -- stable releases Growth of Lines of Code (LOC) Software Project Planning
  • 81. Furthermore, if the main interest is the size of the program for specific functionality, it may be reasonable to include executable statements. The only executable statements in figure shown above are in lines 5-17 leading to a count of 13. The differences in the counts are 18 to 17 to 13. One can easily see the potential for major discrepancies for large programs with many comments or programs written in language that allow a large number of descriptive but non-executable statement. Conte has defined lines of code as: Software Project Planning
  • 82. “A line of code is any line of program text that is not a comment or blank line, regardless of the number of statements or fragments of statements on the line. This specifically includes all lines containing program header, declaration, and executable and non-executable statements”. This is the predominant definition for lines of code used by researchers. By this definition, figure shown above has 17 LOC. Software Project Planning
  • 83. Software Metrics Token Count The size of the vocabulary of a program, which consists of the number of unique tokens used to build a program is defined as: η = η1+ η2 η : vocabulary of a program η1 : number of unique operators η2 : number of unique operands where
  • 84. Software Metrics The length of the program in the terms of the total number of tokens used is N = N1+N2 N : program length N1 : total occurrences of operators N2 : total occurrences of operands where
  • 85. Software Metrics V = N * log2 η Volume The unit of measurement of volume is the common unit for size “bits”. It is the actual size of a program if a uniform binary encoding for the vocabulary is used. Program Level The value of L ranges between zero and one, with L=1 representing a program written at the highest possible level (i.e., with minimum size). L = V* / V
  • 86. Software Metrics D = 1 / L E = V / L = D * V Program Difficulty As the volume of an implementation of a program increases, the program level decreases and the difficulty increases. Thus, programming practices such as redundant usage of operands, or the failure to use higher-level control constructs will tend to increase the volume as well as the difficulty. Effort The unit of measurement of E is elementary mental discriminations.
  • 87. Software Metrics  Estimated Program Length 2 2 2 1 2 1 log log         10 log 10 14 log 14 2 2     = 53.34 + 33.22 = 86.56 ) ! ( log ) ! ( 2 2 1 2      Log J The following alternate expressions have been published to estimate program length.
  • 88. Software Metrics 1. Comments are not considered. 2. The identifier and function declarations are not considered. 3. All the variables and constants are considered operands. 4. Global variables used in different modules of the same program are counted as multiple occurrences of the same variable.  Counting rules for C language
  • 89. Software Metrics 6. Functions calls are considered as operators. 7. All looping statements e.g., do {…} while ( ), while ( ) {…}, for ( ) {…}, all control statements e.g., if ( ) {…}, if ( ) {…} else {…}, etc. are considered as operators. 8. In control construct switch ( ) {case:…}, switch as well as all the case statements are considered as operators. 5. Local variables with the same name in different functions are counted as unique operands.
  • 90. Software Metrics 11. GOTO is counted as an operator and the label is counted as an operand. 12. The unary and binary occurrence of “+” and “-” are dealt separately. Similarly “*” (multiplication operator) are dealt with separately. 9. The reserve words like return, default, continue, break, sizeof, etc., are considered as operators. 10. All the brackets, commas, and terminators are considered as operators.
  • 91. Software Metrics 15. All the hash directive are ignored. 14. In the structure variables such as “struct-name, member-name” or “struct-name -> member-name”, struct-name, member-name are taken as operands and ‘.’, ‘->’ are taken as operators. Some names of member elements in different structure variables are counted as unique operands. 13. In the array variables such as “array-name [index]” “array- name” and “index” are considered as operands and [ ] is considered as operator.
  • 92. Software Metrics  Potential Volume ) 2 ( log ) 2 ( * * 2 2 * 2      V  Estimated Program Level / Difficulty Halstead offered an alternate formula that estimate the program level. where  ) /( 2 2 1 2      L 2 2 1 2 1        L D
  • 93. Software Metrics      D V L V * / 2 2 2 1 2 / ) log (   N N n   / E T   Effort and Time β is normally set to 18 since this seemed to give best results in Halstead’s earliest experiments, which compared the predicted times with observed programming times, including the time for design, coding, and testing.
  • 94. Software Metrics Table 1: Language levels
  • 95. Example- 6.I Consider the sorting program. List out the operators and operands and also calculate the values of software science measures like Software Metrics . , , , etc E V N 
  • 96. Solution The list of operators and operands is given in table 2. Software Metrics
  • 97. Software Metrics Table 2: Operators and operands of sorting program
  • 98. Software Metrics Here N1=53 and N2=38. The program length N=N1+N2=91 Vocabulary of the program Volume = 91 x log224 = 417 bits 24 10 14 2 1          2 log  N V The estimated program length of the program  N = 14 log214 + 10 log210 = 14 * 3.81 + 10 * 3.32 = 53.34 + 33.2 = 86.45
  • 99. Software Metrics Conceptually unique input and output parameters are represented by {x: array holding the integer to be sorted. This is used both as input and output}. {N: the size of the array to be sorted}. The potential volume V* = 5 log25 = 11.6 L = V* / V * 2  3 * 2   Since
  • 100. Software Metrics Estimated program level 027 . 0 417 6 . 11   D = I / L 03 . 37 027 . 0 1   038 . 0 38 10 14 2 2 2 2 1       N L  
  • 101. Software Metrics /*This function checks for validity of a triangle*/ int triangle(int a, int b, int c) { int valid; If(((a+b)>c)&&((c+a)>b)&&((b+c)>a))) //checks validity { valid=1; } else { valid=0; } return valid; }
  • 102. Software Metrics • Consider the function to check the validity of a triangle. List out the operator and operands and also calculate the values of software science metrics.
  • 103. Software Metrics Operators Occurrences Operands Occurrence s int 5 triangle 1 { } 3 a 4 if 1 b 4 + 3 c 4 > 3 valid 4 && 2 1 1 = 2 0 1 else 1 return 1 ; 4 , 2 () 8 N1=35 N2=19 12 1   7 2  
  • 106.  The Sharing of Data Among Modules A program normally contains several modules and share coupling among modules. However, it may be desirable to know the amount of data being shared among the modules. Fig.10: Three modules from an imaginary program Software Metrics
  • 107. Fig.11: ”Pipes” of data shared among the modules Software Metrics Fig.12: The data shared in program bubble
  • 108. Software Metrics Component : Any element identified by decomposing a (software) system into its constituent parts. Cohesion : The degree to which a component performs a single function. Coupling : The term used to describe the degree of linkage between one component to others in the same system. Information Flow Metrics
  • 109. Software Metrics 1. ‘FAN IN’ is simply a count of the number of other Components that can call, or pass control, to Component A. 2. ‘FANOUT’ is the number of Components that are called by Component A. 3. This is derived from the first two by using the following formula. We will call this measure the INFORMATION FLOW index of Component A, abbreviated as IF(A).  The Basic Information Flow Model Information Flow metrics are applied to the Components of a system design. Fig. 13 shows a fragment of such a design, and for component ‘A’ we can define three measures, but remember that these are the simplest models of IF. IF(A) = [FAN IN(A) x FAN OUT (A)]2
  • 111. Software Metrics 1. Note the level of each Component in the system design. 2. For each Component, count the number of calls so that Component – this is the FAN IN of that Component. Some organizations allow more than one Component at the highest level in the design, so for Components at the highest level which should have a FAN IN of zero, assign a FAN IN of one. Also note that a simple model of FAN IN can penalize reused Components. 3. For each Component, count the number of calls from the Component. For Component that call no other, assign a FAN OUT value of one. The following is a step-by-step guide to deriving these most simple of IF metrics. cont…
  • 112. Software Metrics 4. Calculate the IF value for each Component using the above formula. 5. Sum the IF value for all Components within each level which is called as the LEVEL SUM. 6. Sum the IF values for the total system design which is called the SYSTEM SUM. 7. For each level, rank the Component in that level according to FAN IN, FAN OUT and IF values. Three histograms or line plots should be prepared for each level. 8. Plot the LEVEL SUM values for each level using a histogram or line plot.
  • 113. Software Metrics  A More Sophisticated Information Flow Model a = the number of components that call A. b = the number of parameters passed to A from components higher in the hierarchy. c = the number of parameters passed to A from components lower in the hierarchy. d = the number of data elements read by component A. Then: FAN IN(A)= a + b + c + d
  • 114. Software Metrics Also let: e = the number of components called by A; f = the number of parameters passed from A to components higher in the hierarchy; g = the number of parameters passed from A to components lower in the hierarchy; h = the number of data elements written to by A. Then: FAN OUT(A)= e + f + g + h
  • 115. Software Metrics Class A Fan-in=0 Fan-out=4 Class B Fan-in=1 Fan-out=1 Class C Fan-in=2 Fan-out=1 Class D Fan-in=1 Fan-out=1 Class E Fan-in=1 Fan-out=0 Class F Fan-in=2 Fan-out=0
  • 116. Software Metrics Data Structure Metrics Program Data Input Internal Data Data Output Payroll Name/ Social Security No./ Pay Rate/ Number of hours worked Spreadsheet Software Planner Item Names/ Item amounts/ Relationships among items Program size/ No. of software developers on team Withholding rates Overtime factors Insurance premium Rates Cell computations Sub-totals Model parameters Constants Coefficients Gross pay withholding Net pay Pay ledgers Spreadsheet of items and totals Est. project effort Est. project duration Fig.1: Some examples of input, internal, and output data
  • 117. Software Metrics One method for determining the amount of data is to count the number of entries in the cross-reference list.  The Amount of Data A variable is a string of alphanumeric characters that is defined by a developer and that is used to represent some value during either compilation or execution.
  • 119. Software Metrics Fig.3: A cross reference of program payday check gross hours net pay rate tax 2 4 6 4 5 6 4 14 12 11 14 12 11 13 14 13 12 15 13 12 14 15 14 13 15 15 15 14 15
  • 120. Software Metrics 2  feof 9 stdin 10 Fig.4: Some items not counted as VARS = VARS + unique constants + labels. Halstead introduced a metric that he referred to as to be a count of the operands in a program – including all variables, constants, and labels. Thus, 2  labels constants unique 2   VARS 
  • 121. Software Metrics Fig.6: Program payday with operands in brackets
  • 122. Software Metrics The Usage of Data within a Module  Live Variables Definitions : 1. A variable is live from the beginning of a procedure to the end of the procedure. 2. A variable is live at a particular statement only if it is referenced a certain number of statements before or after that statement. 3. A variable is live from its first to its last references within a procedure.
  • 125. Software Metrics It is thus possible to define the average number of live variables, which is the sum of the count of live variables divided by the count of executable statements in a procedure. This is a complexity measure for data usage in a procedure or program. The live variables in the program in fig. 6 appear in fig. 7 the average live variables for this program is 647 . 3 34 124  ) (LV
  • 126. Software Metrics Line Live Variables Count cont… 4 5 6 7 8 9 10 11 12 13 14 15 0 0 3 3 3 0 0 0 0 0 1 2 16 4 ---- ---- t, x, k t, x, k t, x, k ---- ---- ---- ---- ---- size size, j Size, j, a, b
  • 127. Software Metrics Line Live Variables Count cont… 17 18 19 20 21 22 23 24 25 26 27 28 5 6 6 6 6 6 7 7 6 6 6 6 29 5 size, j, a, b, last size, j, a, b, last, continue size, j, a, b, last, continue size, j, a, b, last, continue size, j, a, b, last, continue size, j, a, b, last, continue size, j, a, b, last, continue, i size, j, a, b, last, continue, i size, j, a, b, continue, i size, j, a, b, continue, i size, j, a, b, continue, i size, j, a, b, continue, i size, j, a, b, i
  • 128. Software Metrics Line Live Variables Count 30 31 32 33 34 35 36 37 5 5 5 4 4 4 3 0 size, j, a, b, i size, j, a, b, i size, j, a, b, i size, j, a, b size, j, a, b size, j, a, b j, a, b -- Fig.7: Live variables for the program in fig.6
  • 129. Software Metrics  Variable spans … 21 … 32 … 45 … 53 … 60 … scanf (“%d %d”, &a, &b) x =a; y = a – b; z = a; printf (“%d %d”, a, b); Fig.: Statements in ac program referring to variables a and b. The size of a span indicates the number of statements that pass between successive uses of a variables
  • 130. Software Metrics  Making program-wide metrics from intra-module metrics m LV program LV i m i 1    n SP program SP i n i 1    For example if we want to characterize the average number of live variables for a program having modules, we can use this equation. where is the average live variable metric computed from the ith module i LV ) ( The average span size for a program of n spans could be computed by using the equation. ) (SP
  • 131. Software Metrics  Program Weakness  * LV WM  A program consists of modules. Using the average number of live variables and average life variables , the module weakness has been defined as ) (LV ) (
  • 132. Software Metrics m WM WP i m i          1 A program is normally a combination of various modules, hence program weakness can be a useful measure and is defined as: where, WMi : weakness of ith module WP : weakness of the program m : number of modules in the program
  • 133. Example- 6.3 Consider a program for sorting and searching. The program sorts an array using selection sort and than search for an element in the sorted array. The program is given in fig. 8. Generate cross reference list for the program and also calculate and WM for the program. Software Metrics LV , ,
  • 134. Solution Software Metrics The given program is of 66 lines and has 11 variables. The variables are a, I, j, item, min, temp, low, high, mid, loc and option.
  • 138. Software Metrics Fig.8: Sorting & searching program
  • 139. Software Metrics Cross-Reference list of the program is given below: a i j item min temp low high mid loc option 11 12 12 12 12 12 13 13 13 13 14 18 16 25 44 24 29 46 45 46 56 40 19 16 25 47 27 31 47 46 47 61 41 27 16 25 49 29 50 47 49 62 27 18 27 59 30 52 51 50 29 19 30 62 54 52 51 30 22 31 54 52 30 22 59 31 22 61 37 24 47 36 49 36 59 36 37 37
  • 140. Line Live Variables Count cont… 13 14 15 16 17 18 19 20 22 23 24 25 1 1 1 2 2 3 3 3 3 3 4 5 26 5 low low low low, i low, i low, i, a low, i, a low, i, a low, i, a low, i, a low, i, a, min low, i, a, min, j Live Variables per line are calculated as: low, i, a, min, j
  • 141. Software Metrics Line Live Variables Count cont… 27 28 29 30 31 32 33 34 35 36 37 38 5 5 6 6 5 3 3 3 3 3 3 2 39 2 low, i, a low, i, a, min, j low, i, a, min, j low, i, a, min, j, temp low, i, a, min, j, temp low, i, a, j, temp low, i, a low, i, a low, i, a low, i, a low, i, a low, a low, a
  • 142. Line Live Variables Count cont… 40 41 42 43 44 45 46 47 48 49 50 51 3 3 2 2 3 4 5 5 5 5 5 5 52 5 low, a, option low, a, option low, a low, a low, a, item low, a, item, high low, a, item, high, mid low, a, item, high, mid low, a, item, high, mid low, a, item, high, mid low, a, item, high, mid low, a, item, high, mid low, a, item, high, mid Software Metrics
  • 143. Software Metrics Line Live Variables Count cont… 53 54 55 56 57 58 59 60 61 62 5 5 3 4 4 4 4 3 3 2 low, a, item, high, mid low, a, item, high, mid a, item, mid a, item, mid, loc a, item, mid, loc a, item, mid, loc a, item, mid, loc item, mid, loc item, mid, loc item, loc
  • 144. Software Metrics Line Live Variables Count 63 64 65 66 0 0 0 0 174 Total
  • 145. Average number of live variables ( ) = Software Metrics statements executable of Count variables live of count of Sum 8 51 8 15 28 3 LV (WM) Weakness Module 8 15 11 174 variables of number Total variables live of count of Sum 28 3 53 174 . . . . .           WM LV    LV
  • 146. S.No Term Meaning/purpose 1 Object Object is an entity able to save a state (information) and offers a number of operations (behavior) to either examine or affect this state. 2 Message A request that an object makes of another object to perform an operation. 3 Class A set of objects that share a common structure and common behavior manifested by a set of methods; the set serves as a template from which object can be created. 4 Method an operation upon an object, defined as part of the declaration of a class. 5 Attribute Defines the structural properties of a class and unique within a class. 6 Operation An action performed by or on an object, available to all instances of class, need not be unique. Object Oriented Metrics Terminologies
• 147. Object Oriented Metrics Terminologies
7. Instantiation: the process of creating an instance of an object and binding or adding the specific data.
8. Inheritance: a relationship among classes, wherein an object in a class acquires characteristics from one or more other classes.
9. Cohesion: the degree to which the methods within a class are related to one another.
10. Coupling: object A is coupled to object B if and only if A sends a message to B.
  • 148. Object Oriented Metrics • Measuring on class level – coupling – inheritance – methods – attributes – cohesion • Measuring on system level
• 149. Object Oriented Metrics • Object oriented systems are rapidly increasing in the market. Object oriented software engineering leads to better design and higher quality, more maintainable software. • As object oriented development grows, the need for object oriented metrics that can be used across the software industry is also increasing. • The traditional metrics, although applicable to object oriented systems, do not measure object oriented constructs such as inheritance and polymorphism. This need has led to the development of object oriented metrics.
• 150. Object Oriented Metrics Coupling Metrics • The degree of interdependence between classes is defined as coupling. • During the object oriented analysis and design phases, coupling is measured by counting the relationships a class has with other classes or systems. • Coupling increases complexity and decreases maintainability, reusability and understandability. • Hence coupling amongst classes should be reduced, and classes should be designed with the aim of weak coupling.
  • 151. Object Oriented Metrics Coupling Metrics Chidamber and Kemerer defined coupling as: “Two classes are coupled when methods declared in one class use methods or instance variables of the other classes”.
• 152. Object Oriented Metrics Coupling Metrics • This definition also includes coupling based on inheritance. • In 1994, Chidamber and Kemerer defined Coupling Between Objects (CBO). In their paper they defined CBO as the count of the number of other classes to which a class is coupled.
• 153. Object Oriented Metrics Coupling Metrics • The CBO definition given in 1994 includes inheritance based coupling. For example, in the figure below, two variables of other classes (class B and class C) are used in class A; hence the value of CBO for class A is 2. Similarly, for class B and class C the value of CBO is zero. (Figure: class A declares members B objB and C objC; classes B and C reference no other classes.)
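A minimal C++ sketch of the class layout the figure describes (the class bodies are illustrative):

class B { /* no references to other classes: CBO(B) = 0 */ };
class C { /* no references to other classes: CBO(C) = 0 */ };

class A {
    B objB;   // use of class B -> contributes 1 to CBO(A)
    C objC;   // use of class C -> contributes 1 to CBO(A)
};            // CBO(A) = 2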
• 154. Object Oriented Metrics Coupling Metrics Li and Henry used the data abstraction technique for defining coupling. Data abstraction provides the ability to create user-defined data types called Abstract Data Types (ADTs). Li and Henry defined Data Abstraction Coupling (DAC) as: DAC = number of ADTs defined in a class. In the figure above, class A has two abstract data types (i.e. two non-simple attributes), objB and objC; hence DAC(A) = 2.
• 155. Object Oriented Metrics Coupling Metrics Li and Henry defined another coupling metric known as Message Passing Coupling (MPC) as the "number of unique send statements in a class". Hence, if four different methods in class A access one method in class B, then MPC(A) is 4.
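A hedged C++ sketch of that MPC example (method names are assumed for illustration): each of the four call sites is a distinct send statement, even though they all invoke the same method of B.

class B {
public:
    void service() {}
};

class A {
    B b;
public:
    void m1() { b.service(); }  // send statement 1
    void m2() { b.service(); }  // send statement 2
    void m3() { b.service(); }  // send statement 3
    void m4() { b.service(); }  // send statement 4
};                              // MPC(A) = 4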
• 156. Object Oriented Metrics Coupling Metrics In 1994, Chidamber and Kemerer defined the RFC metric as the set of methods defined in a class together with the methods called by them [CHID94]. It is given by RFC = |RS|, where RS, the response set of the class, is given by:
RS = M ∪ (∪ over all i of Ri)
where Mi = set of all methods in a class (total n) and Ri = {Rij} = set of methods called by Mi.
• 157. Object Oriented Metrics Class A consists of four methods: A1, A2, A3, A4.
A1 calls B1 and B2 of class B; A2 calls C1 of class C; A3 calls D1 and D2 of class D.
Thus, RFC = |RS|, with:
Mi = {A1, A2, A3, A4}
Ri = {B1, B2, C1, D1, D2}
RS = {A1, A2, A3, A4, B1, B2, C1, D1, D2}
RFC = 9
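A minimal C++ sketch computing RFC = |RS| for this example; the method names come from the slide, while the set-union code itself is only illustrative:

#include <iostream>
#include <set>
#include <string>

int main() {
    std::set<std::string> rs = {"A1", "A2", "A3", "A4"};           // M: methods of the class
    std::set<std::string> called = {"B1", "B2", "C1", "D1", "D2"}; // R: methods they call
    rs.insert(called.begin(), called.end());                       // RS = M union R
    std::cout << "RFC = " << rs.size() << '\n';                    // prints RFC = 9
}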
• 158. Object Oriented Metrics Coupling Metrics • In 1997, Briand et al. gave a suite of 18 metrics that measure different types of interaction between classes. • These metrics may be used to guide software developers about which types of coupling affect maintenance cost and reduce reusability. • Briand et al. observed that the coupling between classes can be divided into different facets:
• 159. Object Oriented Metrics Coupling Metrics • Relationship: it refers to the type of relationship between classes: friendship, inheritance, or other. • Export or import coupling: this identifies whether a class A uses the methods/variables of another class B (import coupling) or whether its own methods/variables are used by B (export coupling).
• 160. Object Oriented Metrics Coupling Metrics • Type of interaction: Briand et al. identified three types of interaction between classes: class-attribute, class-method, and method-method. • Class-attribute (CA) interaction: an attribute of class A has the type of another class, e.g. class B and/or class C; if there is any change in class B and/or class C, class A will be affected. • Class-method (CM) interaction: if a parameter of type class B is passed as an argument to a method of class A, the type of interaction is said to be class-method.
• 161. Object Oriented Metrics Coupling Metrics • Method-method (MM) interaction: if a method Mi of class Ci calls a method Mj of class Cj, or a method Mi of class Ci receives a reference to a method Mj of class Cj as an argument, then there is an MM type of interaction between class Ci and class Cj.
• 162. Object Oriented Metrics Coupling Metrics • The metrics for CA interactions are IFCAIC, ACAIC, OCAIC, FCAEC, DCAEC, and OCAEC. • In these metrics the first one/two letters signify the type of relationship (IF signifies inverse friendship, A ancestors, D descendants, F friendship, and O others). • The next two letters signify the type of interaction (CA, CM, MM). Finally, the last two letters signify import coupling (IC) or export coupling (EC).
• 163. Object Oriented Metrics Coupling Metrics • Lee et al. acknowledged the need to differentiate between inheritance-based and non-inheritance-based coupling by proposing the corresponding measures: • Non-inheritance information flow-based coupling (NIH-ICP), • Information flow-based inheritance coupling (IH-ICP). The information flow-based coupling (ICP) metric is the sum of the NIH-ICP and IH-ICP metrics. • Lee et al. emphasized that their ICP metrics, being based on method invocations, take polymorphism into account.
• 164. Object Oriented Metrics Cohesion Metrics Cohesion is a desirable property of a class and should be maximized as it supports the concept of data hiding. A low-cohesion class is more complex and is more prone to faults in the development life cycle. Chidamber and Kemerer proposed the Lack of Cohesion in Methods (LCOM) metric in 1994. LCOM measures the dissimilarity of methods in a class by examining the attributes used by the methods. It calculates the difference between the number of method pairs that have similarity zero and the number of method pairs that have similarity greater than zero (taken as zero if the difference is negative). The larger the similarity between methods, the more cohesive is the class.
• 165. Object Oriented Metrics Consider a class whose methods use the attribute sets M1 = {a1, a2, a3, a4}, M2 = {a1, a2}, M3 = {a3}, M4 = {a3, a4}.
Pairs with common attributes (similarity > 0): (M1, M2), (M1, M3), (M1, M4), (M3, M4): 4 pairs
Pairs with no common attributes (similarity = 0): (M2, M3), (M2, M4): 2 pairs
LCOM = 2 − 4 < 0; hence, LCOM = 0
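A hedged C++ sketch computing LCOM for these four attribute sets (the attribute numbering and loop structure are illustrative):

#include <algorithm>
#include <iostream>
#include <iterator>
#include <set>
#include <vector>

int main() {
    std::vector<std::set<int>> methods = {
        {1, 2, 3, 4}, // M1 uses a1..a4
        {1, 2},       // M2
        {3},          // M3
        {3, 4},       // M4
    };
    int p = 0, q = 0; // p: disjoint pairs, q: pairs sharing attributes
    for (size_t i = 0; i < methods.size(); ++i)
        for (size_t j = i + 1; j < methods.size(); ++j) {
            std::set<int> common;
            std::set_intersection(methods[i].begin(), methods[i].end(),
                                  methods[j].begin(), methods[j].end(),
                                  std::inserter(common, common.begin()));
            common.empty() ? ++p : ++q;
        }
    int lcom = p > q ? p - q : 0;               // negative difference is taken as 0
    std::cout << "LCOM = " << lcom << '\n';     // prints LCOM = 0
}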
• 166. Object Oriented Metrics • Henderson-Sellers observed some problems in the definition of the LCOM metric proposed by Chidamber and Kemerer in 1994. The problems were: • A number of real examples gave an LCOM value of zero even in the presence of dissimilarity amongst methods; hence a large number of projects showed low cohesion values that the metric could not differentiate. • No guideline for interpretation of the value of LCOM was given by Chidamber and Kemerer. Thus, Henderson-Sellers revised the LCOM metric. Consider m methods accessing a set of attributes Di (i = 1, …, n), and let μ(Di) be the number of methods that access each datum. The revised LCOM metric is given below:
LCOM = [ (1/n) Σ μ(Di) − m ] / (1 − m)
• 168. Object Oriented Metrics • Bieman et al. defined two cohesion metrics: Tight Class Cohesion (TCC) and Loose Class Cohesion (LCC). • The TCC metric is defined as the percentage of pairs of directly connected public methods of the class with common attribute usage. LCC is the same as TCC, except that it also considers indirectly connected methods. • A method M1 is said to be indirectly connected with method M3 if M1 is connected to method M2 and method M2 is connected to method M3. Hence, indirectly connected methods represent the transitive closure of directly connected methods.
• 169. Object Oriented Metrics
#include <iostream>
using namespace std;

class queue
{
private:
    int *a;
    int rear;
    int front;
    int n;
public:
    queue(int s)
    {
        n = s;
        rear = 0;
        front = 0;
        a = new int[n];
    }
• 170. Object Oriented Metrics
    int empty()
    {
        if(rear == 0)
            return 1;
        else
            return 0;
    }
    void insert(int);
    int remove();
    int getsize()
    {
        return n;
    }
    void display();
};
• 171. Object Oriented Metrics
void queue::insert(int data)
{
    if(rear == n)
        cout << "Queue overflow";
    else
        a[rear++] = data;
}
int queue::remove()
{
    int element = -1;  // returned unchanged on underflow
    int i;
    if(empty())
        cout << "Queue underflow";
• 172. Object Oriented Metrics
    else
    {
        element = a[front];
        for(i = 0; i < rear - 1; i++)
            a[i] = a[i + 1];
        rear--;
    }
    return element;
}
void queue::display()
{
    int i;
    for(i = 0; i < rear; i++)
        cout << a[i] << " ";
}
• 173. Object Oriented Metrics (Figure: attribute-method usage for class queue: empty uses rear; getsize uses n; display uses a and rear; insert uses a, rear and n; remove uses a, front and rear.)
• 174. Object Oriented Metrics • The pairs of public functions with common attribute usage are given below: • {(empty, insert), (empty, remove), (empty, display), (getsize, insert), (insert, remove), (insert, display), (remove, display)}
TCC(Queue) = 7/10 × 100 = 70%
• 175. Object Oriented Metrics • The methods empty and getsize are indirectly connected, since empty is connected to insert and getsize is also connected to insert. Thus, by transitivity, empty is connected to getsize.
LCC(Queue) = 10/10 × 100 = 100%
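A hedged C++ sketch computing TCC and LCC for the queue class; the attribute-usage sets are read off the member-function bodies above, while the pairing and transitive-closure code is illustrative:

#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

int main() {
    // attributes used by each public method of queue
    std::map<std::string, std::set<std::string>> use = {
        {"empty",   {"rear"}},
        {"getsize", {"n"}},
        {"insert",  {"a", "rear", "n"}},
        {"remove",  {"a", "front", "rear"}},
        {"display", {"a", "rear"}},
    };
    std::vector<std::string> m;
    for (auto& kv : use) m.push_back(kv.first);

    int n = static_cast<int>(m.size());
    int pairs = n * (n - 1) / 2, direct = 0;
    std::vector<std::vector<bool>> conn(n, std::vector<bool>(n, false));
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j)
            for (const auto& attr : use[m[i]])
                if (use[m[j]].count(attr)) {
                    conn[i][j] = conn[j][i] = true;  // directly connected pair
                    ++direct;
                    break;
                }

    // transitive closure (Floyd-Warshall style) adds indirectly connected pairs
    for (int k = 0; k < n; ++k)
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                if (conn[i][k] && conn[k][j]) conn[i][j] = true;

    int closed = 0;
    for (int i = 0; i < n; ++i)
        for (int j = i + 1; j < n; ++j)
            if (conn[i][j]) ++closed;

    std::cout << "TCC = " << 100.0 * direct / pairs << "%\n";  // 70%
    std::cout << "LCC = " << 100.0 * closed / pairs << "%\n";  // 100%
}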
• 176. Object Oriented Metrics • Lee et al. proposed the Information flow-based cohesion (ICH) metric. • ICH for a class is defined as the weighted sum of the number of invocations of other methods of the same class, weighted by the number of parameters of the invoked method. • In the queue class, the method remove invokes the method empty, which does not take any arguments; thus, ICH(Queue) = 1.
• 177. Object Oriented Metrics Inheritance Metrics • Inheritance is measured in terms of the depth of the inheritance hierarchy by many authors in the literature. • The depth of a class within an inheritance hierarchy is measured by the Depth of Inheritance Tree (DIT) metric given by Chidamber and Kemerer, 1994. • It is measured as the number of steps from the class node to the root node of the tree. • In cases involving multiple inheritance, the DIT is the maximum length from the node to the root of the tree.
• 179. Object Oriented Metrics Inheritance Metrics The average inheritance depth (AID) is calculated as [YAP93]:
AID = Σ (depth of each class) / total number of classes
For a class with multiple parents, its depth is the average of the depths along each inheritance path; for example, the depth of sub class D, reachable at depths 2 and 1, is (2 + 1)/2 = 1.5.
• 180. Object Oriented Metrics Inheritance Metrics • The sum of depths for the overall inheritance structure is: 0(A) + 1(B) + 0(C) + 1.5(D) + 1(E) + 0(F) = 3.5. Finally, dividing by the total number of classes, we get AID = 3.5/6 = 0.58. • The Number of Children (NOC) metric counts the number of immediate sub classes of a class in a hierarchy. In the figure, the NOC value for class A is 1 and for class E it is 2. • Lorenz and Kidd developed the Number of Parents (NOP) metric, which counts the number of classes that a class directly inherits from (i.e. multiple inheritance), and Number of Descendants (NOD) as the number of sub classes of a class (both directly and indirectly). Number of Ancestors (NOA), given by Tegarden and Sheetz (1992), counts the number of base classes of a class (both directly and indirectly). Hence, NOA(D) = 3 (A, B, C), NOP(D) = 2 (B, C) and NOD(A) = 2 (B, D).
• 181. Object Oriented Metrics Inheritance Metrics • Lorenz and Kidd gave three measures: Number of Methods Overridden (NMO), Number of Methods Added (NMA) and Number of Methods Inherited (NMI). • When a method in a sub class has the same name and type (signature) as a method in the super class, the method in the super class is said to be overridden by the method in the sub class. • NMA counts the number of new methods (neither overridden nor inherited) added in a class. NMI counts the number of methods a class inherits from its super classes.
• 182. Object Oriented Metrics Inheritance Metrics • Finally, Lorenz and Kidd use the NMO, NMA, and NMI metrics to calculate the Specialization Index (SIX), given by:
SIX = (NMO × DIT) / (NMO + NMA + NMI)
• 183. Object Oriented Metrics
class Person
{
protected:
    char name[25];
    int age;
public:
    void readperson();
    void displayperson();
};

class Student : public Person
{
protected:
    char roll_no[10];
    float average;
public:
    void readperson();
    void displayperson();
    float getaverage();
};
• 184. Object Oriented Metrics
class GradStudent : public Student
{
private:
    char subject[25];
    char working[25];
public:
    void readperson();
    void displayperson();
    void workstatus();
};
• 185. Object Oriented Metrics Inheritance Metrics • The class Student overrides two methods of class Person, readperson() and displayperson(); thus, the value of the NMO metric for class Student is 2. One new method is added in this class (getaverage), hence the value of the NMA metric is 1. • For class GradStudent: NMO = 2 (readperson, displayperson), NMA = 1 (workstatus), NMI = 1 (getaverage) and DIT = 2. The value of SIX for class GradStudent is:
SIX = (2 × 2) / (2 + 1 + 1) = 1
• 186. Object Oriented Metrics Size Metrics Several traditional metrics are applicable to object oriented systems. The traditional LOC metric is a measure of the size of a class. Halstead's software science and McCabe's measures for measuring size are also applicable to object oriented systems; however, the object oriented paradigm defines a different way of doing things. This has led to the development of size metrics applicable to object oriented constructs. Chidamber and Kemerer defined the Weighted Methods per Class (WMC) metric, given by:
WMC = Σ Ci, i = 1 … n
where Ci is the complexity of the ith of the n methods of the class.
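To make the formula concrete, a small illustrative calculation (the complexity values are assumed, not taken from the slides): if a class has three methods with cyclomatic complexities C1 = 2, C2 = 3 and C3 = 1, then WMC = 2 + 3 + 1 = 6. If all method complexities are taken as unity, WMC reduces to the number of methods in the class, here 3.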
• 187. Object Oriented Metrics Size Metrics Number of Attributes (NOA), given by Lorenz and Kidd, is defined as the sum of the number of instance variables and the number of class variables. Number of Methods (NOM), defined by Li and Henry (1993), is the number of local methods defined in a class. They also gave two additional size metrics besides the LOC metric, SIZE1 and SIZE2, given as:
SIZE1 = number of semicolons in a class
SIZE2 = NOA + NOM
• 188. Measuring Software Quality Measuring software quality should be an essential practice in software development, and thus arises the need to measure the aspects of software quality. Measuring quality attributes guides software professionals about the quality of the software. Software quality must be measured throughout the software development life cycle phases.
• 189. Software Reliability  Software Reliability Models  Basic Execution Time Model
The failure intensity as a function of the mean failures experienced (μ) is:
λ(μ) = λ0 (1 − μ / V0)    (1)
where λ0 is the initial failure intensity and V0 is the total number of failures expected in infinite time.
Fig.7.13: Failure intensity λ as a function of μ for basic model
• 190. Software Reliability  The slope of the failure intensity is:
dλ/dμ = −λ0 / V0    (2)
Fig.7.14: Relationship between dλ/dμ and μ for basic model
• 191. Software Reliability  For a derivation of this relationship, equation (1) can be written as:
dμ/dτ = λ0 (1 − μ / V0)
The above equation can be solved for μ(τ), resulting in:
μ(τ) = V0 [1 − exp(−λ0 τ / V0)]    (3)
• 192. Software Reliability  The failure intensity as a function of execution time is:
λ(τ) = λ0 exp(−λ0 τ / V0)
Fig.7.15: Failure intensity versus execution time for basic model
• 193. Software Reliability  Derived quantities: the additional number of failures (Δμ) required to move from a present failure intensity λP to a failure intensity objective λF is:
Δμ = (V0 / λ0) (λP − λF)
Fig.7.16: Additional failures required to be experienced to reach the objective
• 194. Software Reliability  Similarly, the additional execution time (Δτ) required to reach the objective can be derived in mathematical form as:
Δτ = (V0 / λ0) ln(λP / λF)
Fig.7.17: Additional time required to reach the objective
• 195. Example- 7.1 Assume that a program will experience 200 failures in infinite time. It has now experienced 100. The initial failure intensity was 20 failures/CPU hr. Software Reliability (i) Determine the current failure intensity. (ii) Find the decrement of failure intensity per failure. (iii) Calculate the failures experienced and failure intensity after 20 and 100 CPU hrs. of execution. (iv) Compute the additional failures and additional execution time required to reach the failure intensity objective of 5 failures/CPU hr. Use the basic execution time model for the above mentioned calculations.
• 196. Solution Software Reliability  Here V0 = 200 failures, μ = 100 failures and λ0 = 20 failures/CPU hr.
(i) Current failure intensity:
λ(μ) = λ0 (1 − μ / V0) = 20 (1 − 100/200) = 20 (1 − 0.5) = 10 failures/CPU hr.
• 197. Software Reliability  (ii) Decrement of failure intensity per failure:
dλ/dμ = −λ0 / V0 = −20/200 = −0.1 per CPU hr.
(iii) (a) Failures experienced after 20 CPU hr:
μ(τ) = V0 [1 − exp(−λ0 τ / V0)] = 200 [1 − exp(−20 × 20 / 200)] = 200 [1 − exp(−2)] = 200 (1 − 0.1353) = 173 failures
• 198. Software Reliability  Failure intensity after 20 CPU hr:
λ(τ) = λ0 exp(−λ0 τ / V0) = 20 exp(−20 × 20 / 200) = 20 exp(−2) = 2.71 failures/CPU hr.
(b) Failures experienced after 100 CPU hr:
μ(τ) = 200 [1 − exp(−20 × 100 / 200)] = 200 [1 − exp(−10)] ≈ 200 failures (almost)
• 199. Software Reliability  Failure intensity after 100 CPU hr:
λ(τ) = 20 exp(−20 × 100 / 200) = 20 exp(−10) = 0.000908 failures/CPU hr.
(iv) Additional failures required to reach the failure intensity objective of 5 failures/CPU hr:
Δμ = (V0 / λ0) (λP − λF) = (200/20) (10 − 5) = 50 failures
• 200. Software Reliability  Additional execution time required to reach the failure intensity objective of 5 failures/CPU hr:
Δτ = (V0 / λ0) ln(λP / λF) = (200/20) ln(10/5) = 6.93 CPU hr.
  • 201. Example 10.1: A program will experience 100 failures in infinite time. It has now experienced 50 failures. The initial failure intensity is 10 failures/hour. Use the basic execution time model for the following: •Find the present failure intensity. •Calculate the decrement of failure intensity per failure. •Determine the failure experienced and failure intensity after 10 and 50 hours of execution. •Find the additional failures and additional execution time needed to reach the failure intensity objective of 2 failures/hour.
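A minimal C++ sketch of the basic execution time model formulas; the struct and function names are illustrative, the parameter values follow Example 7.1, and the same code can be rerun with the Example 10.1 data (V0 = 100, λ0 = 10, μ = 50, objective 2 failures/hour):

#include <cmath>
#include <iostream>

struct BasicModel {
    double lambda0; // initial failure intensity (failures/CPU hr)
    double v0;      // total expected failures in infinite time

    double intensityAtMu(double mu) const { return lambda0 * (1.0 - mu / v0); }
    double failuresAtTau(double tau) const { return v0 * (1.0 - std::exp(-lambda0 * tau / v0)); }
    double intensityAtTau(double tau) const { return lambda0 * std::exp(-lambda0 * tau / v0); }
    double extraFailures(double lp, double lf) const { return (v0 / lambda0) * (lp - lf); }
    double extraTime(double lp, double lf) const { return (v0 / lambda0) * std::log(lp / lf); }
};

int main() {
    BasicModel m{20.0, 200.0};                                         // Example 7.1 parameters
    double lp = m.intensityAtMu(100);                                  // 10 failures/CPU hr
    std::cout << "current intensity: " << lp << '\n';
    std::cout << "mu(20): " << m.failuresAtTau(20) << '\n';            // ~173 failures
    std::cout << "lambda(20): " << m.intensityAtTau(20) << '\n';       // ~2.71 failures/CPU hr
    std::cout << "extra failures: " << m.extraFailures(lp, 5) << '\n'; // 50
    std::cout << "extra time: " << m.extraTime(lp, 5) << '\n';         // ~6.93 CPU hr
}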
• 202. Software Reliability  Logarithmic Poisson Execution Time Model  Failure Intensity
λ(μ) = λ0 exp(−θ μ)
where θ is the failure intensity decay parameter.
Fig.7.18: Relationship between λ and μ
• 203. Software Reliability  The slope of the failure intensity is:
dλ/dμ = −θ λ0 exp(−θ μ) = −θ λ
Fig.7.19: Relationship between dλ/dμ and μ
• 204. Software Reliability  The mean failures experienced and failure intensity as functions of execution time are:
μ(τ) = (1/θ) ln(λ0 θ τ + 1)    (4)
λ(τ) = λ0 / (λ0 θ τ + 1)
With λP the present failure intensity and λF the failure intensity objective, the derived quantities are:
Δμ = (1/θ) ln(λP / λF)
Δτ = (1/θ) (1/λF − 1/λP)
• 205. Example- 7.2 Assume that the initial failure intensity is 20 failures/CPU hr. The failure intensity decay parameter is 0.02 per failure. We have experienced 100 failures up to this time. Software Reliability (i) Determine the current failure intensity. (ii) Calculate the decrement of failure intensity per failure. (iii) Find the failures experienced and failure intensity after 20 and 100 CPU hrs. of execution. (iv) Compute the additional failures and additional execution time required to reach the failure intensity objective of 2 failures/CPU hr. Use the Logarithmic Poisson execution time model for the above mentioned calculations.
• 206. Solution Software Reliability  Here λ0 = 20 failures/CPU hr, θ = 0.02 per failure and μ = 100 failures.
(i) Current failure intensity:
λ(μ) = λ0 exp(−θ μ) = 20 exp(−0.02 × 100) = 2.7 failures/CPU hr.
• 207. Software Reliability  (ii) Decrement of failure intensity per failure:
dλ/dμ = −θ λ = −0.02 × 2.7 = −0.054 per CPU hr.
(iii) (a) Failures experienced after 20 CPU hr:
μ(τ) = (1/θ) ln(λ0 θ τ + 1) = (1/0.02) ln(0.02 × 20 × 20 + 1) = 50 ln(9) = 109 failures
• 208. Software Reliability  Failure intensity after 20 CPU hr:
λ(τ) = λ0 / (λ0 θ τ + 1) = 20 / (0.02 × 20 × 20 + 1) = 20/9 = 2.22 failures/CPU hr.
(b) Failures experienced and failure intensity after 100 CPU hr:
μ(τ) = (1/0.02) ln(0.02 × 20 × 100 + 1) = 50 ln(41) = 186 failures
λ(τ) = 20 / (0.02 × 20 × 100 + 1) = 20/41 = 0.4878 failures/CPU hr.
• 209. Software Reliability  (iv) Additional failures required to reach the failure intensity objective of 2 failures/CPU hr:
Δμ = (1/θ) ln(λP / λF) = (1/0.02) ln(2.7/2) = 15 failures
Additional execution time required:
Δτ = (1/θ) (1/λF − 1/λP) = (1/0.02) (1/2 − 1/2.7) = 6.5 CPU hr.
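The companion C++ sketch for the logarithmic Poisson model, parameterized with the Example 7.2 values (again, the struct and function names are illustrative):

#include <cmath>
#include <iostream>

struct LogPoissonModel {
    double lambda0; // initial failure intensity (failures/CPU hr)
    double theta;   // failure intensity decay parameter (per failure)

    double intensityAtMu(double mu) const { return lambda0 * std::exp(-theta * mu); }
    double failuresAtTau(double tau) const { return std::log(lambda0 * theta * tau + 1.0) / theta; }
    double intensityAtTau(double tau) const { return lambda0 / (lambda0 * theta * tau + 1.0); }
    double extraFailures(double lp, double lf) const { return std::log(lp / lf) / theta; }
    double extraTime(double lp, double lf) const { return (1.0 / lf - 1.0 / lp) / theta; }
};

int main() {
    LogPoissonModel m{20.0, 0.02};                                     // Example 7.2 parameters
    double lp = m.intensityAtMu(100);                                  // ~2.7 failures/CPU hr
    std::cout << "current intensity: " << lp << '\n';
    std::cout << "mu(20): " << m.failuresAtTau(20) << '\n';            // ~109 failures
    std::cout << "lambda(20): " << m.intensityAtTau(20) << '\n';       // ~2.22 failures/CPU hr
    std::cout << "extra failures: " << m.extraFailures(lp, 2) << '\n'; // ~15
    std::cout << "extra time: " << m.extraTime(lp, 2) << '\n';         // ~6.5 CPU hr
}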
• 210. Example- 7.3 The following parameters for the basic and logarithmic Poisson models are given: Software Reliability
Basic execution time model: λ0 = 10 failures/CPU hr, V0 = 100 failures
Logarithmic Poisson execution time model: λ0 = 30 failures/CPU hr, θ = 0.025 per failure
(a) Determine the additional failures and additional execution time required to reach the failure intensity objective of 5 failures/CPU hr. for both models. (b) Repeat this for an objective of 0.5 failures/CPU hr. Assume that we start with the initial failure intensity only.
• 211. Solution Software Reliability  (a) (i) Basic execution time model:
Δμ = (V0 / λ0) (λP − λF) = (100/10) (10 − 5) = 50 failures
λP (the present failure intensity) is in this case the same as λ0 (the initial failure intensity). Now,
• 212. Software Reliability  Δτ = (V0 / λ0) ln(λP / λF) = (100/10) ln(10/5) = 6.93 CPU hr.
(ii) Logarithmic Poisson execution time model:
Δμ = (1/θ) ln(λP / λF) = (1/0.025) ln(30/5) = 71.67 failures
Δτ = (1/θ) (1/λF − 1/λP) = (1/0.025) (1/5 − 1/30) = 6.66 CPU hr.
• 213. Software Reliability  (b) Failure intensity objective λF = 0.5 failures/CPU hr.
(i) Basic execution time model:
Δμ = (V0 / λ0) (λP − λF) = (100/10) (10 − 0.5) = 95 failures
Δτ = (V0 / λ0) ln(λP / λF) = (100/10) ln(10/0.5) = 30 CPU hr.
Note that, initially, the logarithmic Poisson model predicts more failures than the basic model in almost the same duration of execution time.
• 214. Software Reliability  (ii) Logarithmic Poisson execution time model:
Δμ = (1/θ) ln(λP / λF) = (1/0.025) ln(30/0.5) = 164 failures
Δτ = (1/θ) (1/λF − 1/λP) = (1/0.025) (1/0.5 − 1/30) = 78.66 CPU hr.
• 215. Software Quality metrics based on Defects • According to the IEEE/ANSI standard, a defect can be defined as "an accidental condition that causes a unit of the system to fail to function as required". • A fault can cause many failures; hence there is no one-to-one correspondence between a fault and a failure.
• 216. Software Quality metrics based on Defects  Defect density
Defect density can be measured as the ratio of the number of defects encountered to the size of the software. Size of the software is usually measured in terms of thousands of lines of code (KLOC) and is given as:
Defect density = Number of defects / KLOC
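For instance (assumed figures, for illustration only): if 150 defects are found in a 30-KLOC system, Defect density = 150 / 30 = 5 defects/KLOC.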
• 217. Software Quality metrics based on Defects  Phase based defect density
It is an extension of the defect density metric. Defect density can be tracked at the various phases of software development, including verification activities such as reviews, inspections and formal reviews carried out before the start of validation testing.
• 218. Software Quality metrics based on Defects  Defect removal effectiveness
Defect Removal Effectiveness (DRE) is defined as:
DRE = Defects removed in a given life cycle phase / Latent defects
• 219. Software Quality metrics based on Defects  Defect removal effectiveness
The latent defects for a given phase are not known; thus, they are estimated as the sum of the defects removed during the phase and the defects detected later. The higher the value of the metric, the more efficient and effective is the process followed in that phase:
DRE = DB / (DB + DA)
where DB = defects removed during the phase and DA = defects detected later.
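For instance (assumed figures, for illustration only): if 80 defects are removed during the design phase and 20 related defects slip through and are detected in later phases, then DB = 80, DA = 20 and DRE = 80 / (80 + 20) = 0.8; that is, the design phase removed 80% of its latent defects.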
• 220. Software Quality metrics based on Defects  Testing coverage metrics can be used to monitor the amount of testing being done. These include the basic coverage metrics: Statement coverage metric describes the degree to which statements are covered while testing. Branch coverage metric determines whether each branch in the source code has been tested. Operation coverage metric determines whether every operation of a class has been tested. Condition coverage metric determines whether each condition is evaluated both for true and false.
• 221. Software Quality metrics based on Defects  Path coverage metric determines whether each path of the control flow graph has been exercised or not. Loop coverage metric determines how many times a loop is covered. Multiple condition coverage metric determines whether every possible combination of conditions is covered.
• 222. Software Quality metrics based on Defects  The test focus (TF) metric is given as:
TF = Number of STRs (software trouble reports) fixed and closed / Total number of STRs
• 223. Software Quality metrics based on Defects  The fault coverage metric (FCM) is given as:
FCM = (Number of faults addressed × severity of faults) / (Total number of faults × severity of faults)
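For instance (assumed figures, for illustration only): if 40 of 50 STRs have been fixed and closed, TF = 40/50 = 0.8; and if 18 of 20 faults at a given severity weight s have been addressed, FCM = (18 × s) / (20 × s) = 0.9 for that severity level.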