2. Introduction
• Many software engineers argue that size is misleading, and
that the amount of functionality inherent in a product paints
a better picture of product size.
• As a distinct attribute, required functionality captures an
intuitive notion of the amount of function contained in a
delivered product or in a description of how the product is
supposed to be.
3. Albrecht’s function points (FPs)
• To compute the number of FPs we first compute an
unadjusted function point count (UFC). To do this, we
determine from some representation of the software the
number of “items” of the following types:
1. External inputs: Those items provided by the user that describe distinct application-oriented
data (such as file names and menu selections). These items do not include inquiries, which are
counted separately.
2. External outputs: Those items provided to the user that generate distinct application-oriented
data (such as reports and messages, rather than the individual components of these).
3. External inquiries: Interactive inputs requiring a response.
4. External files: Machine-readable interfaces to other systems.
5. Internal files: Logical master files in the system.
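The five item counts above combine into the UFC as a weighted sum. The sketch below uses the commonly cited Albrecht average-complexity weights (4, 5, 4, 7, 10); a full count would first classify each item as simple, average, or complex and apply a weight per class.

```python
# Sketch of an unadjusted function point count (UFC) using the commonly
# cited average-complexity weights for Albrecht's five item types.
# A full count classifies each item as simple/average/complex first.
AVG_WEIGHTS = {
    "external_inputs": 4,
    "external_outputs": 5,
    "external_inquiries": 4,
    "external_files": 7,
    "internal_files": 10,
}

def unadjusted_fp(counts: dict) -> int:
    """UFC = sum over item types of (count * weight)."""
    return sum(AVG_WEIGHTS[item] * n for item, n in counts.items())

counts = {
    "external_inputs": 2,
    "external_outputs": 3,
    "external_inquiries": 1,
    "external_files": 1,
    "internal_files": 2,
}
print(unadjusted_fp(counts))  # 2*4 + 3*5 + 1*4 + 1*7 + 2*10 = 54
```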
13. • 4. Problems with accuracy. One study found that the technical complexity factor (TCF)
does not significantly improve resource estimates.
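For context, the adjustment that study questioned scales the UFC by the TCF, which is built from fourteen general system characteristics each rated 0–5; the 0.65 and 0.01 constants are the standard Albrecht values:

```python
def adjusted_fp(ufc: float, ratings: list) -> float:
    """Adjusted FP = UFC * TCF, where
    TCF = 0.65 + 0.01 * (sum of the 14 technical-factor ratings),
    so TCF always lies between 0.65 and 1.35."""
    assert len(ratings) == 14 and all(0 <= r <= 5 for r in ratings)
    tcf = 0.65 + 0.01 * sum(ratings)
    return ufc * tcf

# All fourteen factors rated "average" (3): TCF = 0.65 + 0.42 = 1.07
print(adjusted_fp(54, [3] * 14))  # ~57.78
```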
• 5. Problems with changing requirements. FPs are an appealing size measure in part
because they can be recalculated as development continues. They can also be used to
track progress, in terms of number of FPs completed. However, if we compare an FP count
generated from an initial specification with the count obtained from the resulting system,
we sometimes find an increase of 400–2000% (Kemerer 1993). This difference may not be
due to the FP calculation method but rather to “creeping elegance,” where new,
nonspecified functionality is built into the system as development progresses. However,
the difference also occurs because the level of detail in a specification is coarser than that
of the actual implementation. That is, the number and complexity of inputs, outputs,
inquiries, and other FP-related data will be underestimated in a specification because they
are not well understood or articulated early in the project. Thus, calibrating FP
relationships based on actual projects in your historical database will not always produce
equations that are useful for predictive purposes (unless the FPs were derived from the
original system specification, not the system itself).
14. • 6. Problems with differentiating specified items. Because
evaluating the input items and technology factor
components involves expert judgment, the calculation of
FPs from a specification cannot be completely automated.
To minimize subjectivity and ensure some consistency
across different evaluators, organizations (such as the
International Function Point User Group) often publish
detailed counting rules that distinguish inputs and outputs
from inquiries, to define what constitutes a “logical master
file,” etc. However, there is still great variety in results when
different people compute FPs from the same specification.
16. • 7. Problems with subjective weighting. Symons notes that the choice of weights for
calculating unadjusted FPs was determined subjectively from IBM experience. These
values may not be appropriate in other development environments (Symons 1988).
• 8. Problems with measurement theory. Kitchenham et al. have proposed a
framework for evaluating measurements, much like the framework presented in
Chapters 2 and 3. They apply the framework to FPs, noting that the FP calculation
combines measures from different scales in a manner that is inconsistent with
measurement theory. In particular, the weights and TCF ratings are on an ordinal
scale, while the counts are on a ratio scale, so the linear combinations in the formula
are meaningless. They propose that FPs be viewed as a vector of several aspects of
functionality, rather than as a single number (Kitchenham et al. 1995). A similar
approach proposed by Abran and Robillard is to treat FPs as a multidimensional
index rather than as a dimensionless number (Abran and Robillard 1994).
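The vector view can be illustrated as follows: keep the five item counts as separate components and compare systems component-wise rather than collapsing them into one weighted sum. This is a sketch of the idea only; the field names and `dominates` helper are illustrative, not taken from the cited papers.

```python
from dataclasses import dataclass, fields

@dataclass
class FPVector:
    """Functionality kept as a vector of item counts (the vector view of
    FPs), rather than collapsed into a single weighted number."""
    inputs: int
    outputs: int
    inquiries: int
    external_files: int
    internal_files: int

    def dominates(self, other: "FPVector") -> bool:
        """True if this system offers at least as much of every kind of
        functionality; when neither dominates, the two systems are
        simply incomparable in size."""
        return all(getattr(self, f.name) >= getattr(other, f.name)
                   for f in fields(self))

a = FPVector(inputs=10, outputs=8, inquiries=3, external_files=2, internal_files=4)
b = FPVector(inputs=6, outputs=8, inquiries=1, external_files=2, internal_files=4)
print(a.dominates(b), b.dominates(a))  # True False
```

Two systems with the same scalar FP total can differ sharply in this vector; that difference is exactly the information the linear combination discards.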