Software Project Management, 4th Edition
Chapter 5: Software effort estimation
©The McGraw-Hill Companies, 2005
What makes a successful project?
Delivering:
• agreed functionality
• on time
• at the agreed cost
• with the required quality
Stages:
1. Set targets
2. Attempt to achieve the targets
BUT what if the targets are not achievable?
Where are estimates done?
• Strategic planning
• Feasibility study
• System specification
• Evaluation of suppliers' proposals
• Project planning
Over- and under-estimating
• Parkinson's Law: 'Work expands to fill the time available'
• Brooks' Law: putting more people on a late job makes it later
• An over-estimate is likely to cause the project to take longer than it otherwise would
• Weinberg's zeroth law of reliability: 'a software project that does not have to meet a reliability requirement can meet any other requirement'
Basis for Software Estimation
• The need for historical data: estimates are based on past experience
• Measures of work:
  – SLOC (Source Lines of Code)
    • No precise definition
    • Difficult to estimate at the start of a project
    • Only a measure of code (not effort)
    • Programmer dependent
    • Does not consider code complexity
  – FP (Function Points)
A taxonomy of estimating methods
• Bottom-up – activity based, analytical
• Parametric or algorithmic models, e.g. function points
• Expert opinion – just guessing?
• Analogy – case-based, comparative
• Parkinson and 'price to win'
Bottom-up versus top-down
• Bottom-up
  – use when you have no data about similar past projects
  – identify all the tasks that have to be done – so quite time-consuming
• Top-down
  – produce an overall estimate based on project cost drivers
  – based on past project data
  – divide the overall estimate between the jobs to be done
Bottom-up estimating
1. Break the project into smaller and smaller components
2. Stop when you get to what one person can do in one or two weeks
3. Estimate costs for the lowest-level activities
4. At each higher level, calculate the estimate by adding the estimates for the lower levels
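To make the roll-up concrete, here is a minimal Python sketch; all task names and figures are invented for illustration, not taken from the textbook. It sums leaf-task estimates up through a small work breakdown structure:

# Bottom-up estimating sketch: leaf tasks carry estimates in person-days;
# every higher-level component is the sum of its children.
wbs = {
    "build system": {
        "design": {"data design": 5, "ui design": 8},
        "code": {"module A": 10, "module B": 7},
        "test": {"test planning": 3, "system testing": 6},
    }
}

def roll_up(node):
    """A leaf returns its own estimate; a composite returns the sum of its children."""
    if isinstance(node, dict):
        return sum(roll_up(child) for child in node.values())
    return node

print(roll_up(wbs))  # 39 person-days for this made-up breakdown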
Top-down estimates
• Produce an overall estimate using effort driver(s)
• Distribute proportions of the overall estimate to components
Example: an overall project estimate of 100 days is split as design 30% (30 days), code 30% (30 days) and test 40% (40 days).
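A minimal sketch of the distribution step, using the percentages from the example above (the code is illustrative only):

# Top-down estimating sketch: distribute an overall estimate across components.
overall_estimate = 100  # days, as in the example above
proportions = {"design": 0.30, "code": 0.30, "test": 0.40}
for component, share in proportions.items():
    print(f"{component}: {overall_estimate * share:.0f} days")
# design: 30 days, code: 30 days, test: 40 days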
Algorithmic/parametric models
• COCOMO (lines of code) and function points are examples of these
• Problem with COCOMO etc.: the route is guess → algorithm → estimate, but what is desired is system characteristic → algorithm → estimate
Parametric models – continued
• Examples of system characteristics:
  – no. of screens x 4 hours
  – no. of reports x 2 days
  – no. of entity types x 2 days
• The quantitative relationship between the input and output products of a process can be used as the basis of a parametric model
Parametric models – the need for historical data
• A simplistic model for an estimate:
  estimated effort = (system size) / productivity
  e.g. system size = lines of code, productivity = lines of code per day
• productivity = (system size) / effort
  – based on past projects
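A worked sketch of this simplistic model; the project figures below are invented purely for illustration:

# Derive productivity from a past project, then apply it to the new system.
past_size_loc = 12_000     # lines of code delivered by a past project (hypothetical)
past_effort_days = 400     # effort actually spent on it (hypothetical)
productivity = past_size_loc / past_effort_days   # 30 LOC per day

new_size_loc = 9_000       # estimated size of the new system (hypothetical)
estimated_effort = new_size_loc / productivity
print(f"productivity = {productivity} LOC/day, estimated effort = {estimated_effort} days")  # 300 days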
Parametric models
• Some models focus on task or system size, e.g. Function Points
• FPs were originally used to estimate lines of code, rather than effort
• In the FP model, the number of file types and the numbers of input and output transaction types feed into an index of 'system size'
Parametric models
• Other models focus on productivity, e.g. COCOMO
• Lines of code (or FPs etc.) are an input: system size combined with productivity factors gives the estimated effort
Function points Mark II
• Developed by Charles R. Symons
• 'Software Sizing and Estimating – Mk II FPA', Wiley & Sons, 1991
• Builds on work by Albrecht
• Work originally for the CCTA:
  – should be compatible with SSADM; mainly used in the UK
• Has developed in parallel to IFPUG FPs
Function points Mk II – continued
For each transaction, count:
– data items input (Ni)
– data items output (No)
– entity types accessed (Ne)
FP count = Ni x 0.58 + Ne x 1.66 + No x 0.26
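A minimal sketch of the Mk II count for a single transaction, using the weights above; the transaction and its counts are hypothetical:

def mk2_fp(n_input, n_entities, n_output):
    """Mark II FP count for one transaction: Ni x 0.58 + Ne x 1.66 + No x 0.26."""
    return n_input * 0.58 + n_entities * 1.66 + n_output * 0.26

# e.g. a transaction with 4 input items, 2 entity types accessed and 6 output items
print(mk2_fp(n_input=4, n_entities=2, n_output=6))  # about 7.2

Summing such counts over all the transactions in an application gives its overall Mk II FP size.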
Function points for embedded systems
• Mark II and IFPUG function points were designed for information systems environments
• COSMIC FPs attempt to extend the concept to embedded systems
• Embedded software is seen as sitting in a particular 'layer' of the system
• It communicates with other layers and also with other components at the same level
Layered software
(Diagram: a software component receives requests for services from higher layers and supplies those services; it makes requests to lower layers and receives their services; it communicates peer-to-peer with components at the same level; and it performs data reads and writes against persistent storage.)
COSMIC FPs
The following are counted:
• Entries: movements of data into the software component from a higher layer or a peer component
• Exits: movements of data out
• Reads: data movements from persistent storage
• Writes: data movements to persistent storage
Each counts as 1 'COSMIC functional size unit' (Cfsu).
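A minimal sketch of COSMIC sizing; the component and its data movements are invented for illustration:

from collections import Counter

# Each data movement (entry, exit, read or write) counts as 1 Cfsu,
# so the functional size is simply the number of movements.
data_movements = [
    "entry",  # receive a command from a higher layer
    "read",   # fetch a configuration record from persistent storage
    "write",  # log the result
    "exit",   # return a response to the caller
    "exit",   # notify a peer component
]
print(Counter(data_movements), "=", len(data_movements), "Cfsu")  # 5 Cfsu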
COCOMO81
• Based on industry productivity standards – the database is constantly updated
• Allows an organization to benchmark its software development productivity
• Basic model: effort = c x (size)^k
• c and k depend on the type of system: organic, semi-detached, embedded
• Size is measured in 'kloc', i.e. thousands of lines of code
The COCOMO constants

System type                               c     k
Organic (broadly, information systems)    2.4   1.05
Semi-detached                             3.0   1.12
Embedded (broadly, real-time)             3.6   1.20

The exponent k ('to the power of...') adds disproportionately more effort to larger projects, taking account of the bigger management overheads.
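A minimal sketch of the basic model with the constants above; the 32 kloc example size is chosen only for illustration:

# COCOMO81 basic model: effort in staff-months = c x size_kloc ** k.
COCOMO81 = {
    "organic": (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded": (3.6, 1.20),
}

def basic_effort(size_kloc, system_type):
    c, k = COCOMO81[system_type]
    return c * size_kloc ** k

print(f"{basic_effort(32, 'organic'):.1f} staff-months")  # roughly 91 staff-months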
Development effort multipliers (dem)
According to COCOMO, the major productivity drivers include:
• Product attributes: required reliability, database size, product complexity
• Computer attributes: execution time constraints, storage constraints, virtual machine (VM) volatility
• Personnel attributes: analyst capability, application experience, VM experience, programming language experience
• Project attributes: modern programming practices, software tools, schedule constraints
Using COCOMO development effort multipliers (dem)
An example, for analyst capability:
• Assess capability as very low, low, nominal, high or very high
• Extract the multiplier:
  very low   1.46
  low        1.19
  nominal    1.00
  high       0.80
  very high  0.71
• Adjust the nominal estimate, e.g. 32.6 x 0.80 = 26.08 staff-months
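The same adjustment as a minimal sketch; only the analyst-capability multipliers from the slide are included, and the nominal estimate follows the slide's example:

# Adjust a nominal COCOMO estimate with a development effort multiplier.
ANALYST_CAPABILITY = {
    "very low": 1.46,
    "low": 1.19,
    "nominal": 1.00,
    "high": 0.80,
    "very high": 0.71,
}
nominal_estimate = 32.6  # staff-months
adjusted = nominal_estimate * ANALYST_CAPABILITY["high"]
print(f"{adjusted:.2f} staff-months")  # 26.08

In practice a full COCOMO adjustment multiplies the nominal estimate by the product of all the applicable multipliers, not just one.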
Estimating by analogy
(Diagram: a number of source cases – completed projects, each with recorded attribute values and known effort – and a target case with attribute values but unknown effort. Select the source case with the closest attribute values and use its effort as the estimate for the target.)
Stages: identify
• significant features of the current project
• previous project(s) with similar features
• differences between the current and previous projects
• possible reasons for error (risk)
• measures to reduce uncertainty
Machine assistance for source selection (ANGEL)
(Diagram: past projects Source A and Source B and the target project plotted by number of inputs against number of outputs.)
Euclidean distance = square root of ((It - Is)^2 + (Ot - Os)^2), where It and Ot are the target's input and output counts and Is and Os are a source's.
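A minimal sketch of this nearest-neighbour selection; the source cases and their figures are invented for illustration:

import math

# Past projects with known attribute values and actual effort (hypothetical figures).
source_cases = {
    "Source A": {"inputs": 20, "outputs": 35, "effort_days": 120},
    "Source B": {"inputs": 60, "outputs": 15, "effort_days": 210},
}
target = {"inputs": 25, "outputs": 30}

def distance(case):
    """Euclidean distance between a source case and the target in (inputs, outputs) space."""
    return math.sqrt((target["inputs"] - case["inputs"]) ** 2 +
                     (target["outputs"] - case["outputs"]) ** 2)

closest = min(source_cases.values(), key=distance)
print("estimate:", closest["effort_days"], "days")  # Source A is closest, so 120 days

Tools such as ANGEL automate this search over many attributes and may draw on more than one near neighbour.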
Some conclusions: how to review estimates
Ask the following questions about an estimate:
• What are the task size drivers?
• What productivity rates have been used?
• Is there an example of a previous project of about the same size?
• Are there examples of where the productivity rates used have actually been found?
Editor's Notes
  • #1: This talk provides an overview of the basic steps needed to produce a project plan. The framework provided should allow students to identify where some of the particular issues discussed in other chapters are applied to the planning process. As the focus is on project planning, techniques to do with project control are not explicitly described. However, in practice, one element of project planning will be to decide what project control procedures need to be in place.
  • #2: A key point here is that developers may in fact be very competent, but incorrect estimates leading to unachievable targets will lead to extreme customer dissatisfaction.
  • #4: The answer to the problem of over-optimistic estimates might seem to be to pad out all estimates, but this itself can lead to problems. You might miss out to the competition who could underbid you, if you were tendering for work. Generous estimates also tend to lead to reductions in productivity. On the other hand, having aggressive targets in order to increase productivity could lead to poorer product quality. Ask how many students have heard of Parkinson’s Law – the response could be interesting! It is best to explain that C. Northcote Parkinson was to some extent a humourist, rather than a heavyweight social scientist. Note that ‘zeroth’ is what comes before first. This is discussed in Section 5.3 of the text, which also covers Brooks’ Law.
  • #6: This taxonomy is based loosely on Barry Boehm’s in the big blue book, ‘Software Engineering Economics’. One problem is that different people call these approaches by different names. In the case of bottom-up and analogy some of the alternative nomenclatures have been listed. ‘Parkinson’ is setting a target based on the amount of staff effort you happen to have available at the time. ‘Price to win’ is setting a target that is likely to win business when tendering for work. Boehm is scathing about these as methods of estimating. However, sometimes you might have to start with an estimate of an acceptable cost and then set the scope of the project and the application to be built to be within that cost. This is discussed in Section 5.5 of the textbook.
  • #7: There is often confusion between the two approaches as the first part of the bottom-up approach is a top-down analysis of the tasks to be done, followed by the bottom-up adding up of effort for all the work to be done. Make sure your students understand this or it will return to haunt you (and them) at examination time.
  • #8: The idea is that even if you have never done something before you can imagine what you could do in about a week. Exercise 5.2 relates to bottom-up estimating
  • #10: The problem with COCOMO is that the input parameter for system size is an estimate of lines of code. This is going to have to be an estimate at the beginning of the project. Function points, as will be seen, count various features of the logical design of an information system and produce an index number which reflects the amount of information processing it will have to carry out. This can be crudely equated to the amount of code it will need.
  • #11: It would be worth getting the students to do Exercise 5.3 at this point to make sure they grasp the underlying concepts.
  • #12: This is analogous to calculating speed from distance and time.
  • #13: ‘System size’ here can be seen as an index that allows the size of different applications to be compared. It will usually correlate to the number of lines of code required.
  • #14: COCOMO was originally based on a size parameter of lines of code (actually ‘thousands of delivered source code instructions’, or kdsi). Newer versions recognize the use of function points as a size measure, but convert them to a number called ‘equivalent lines of code’ (eloc).
  • #15: Once again, just a reminder that the lecture is just an overview of concepts. Mark II FPs is a version of function points developed in the UK and is only used by a minority of FP specialists. The US-based IFPUG method (developed from the original Albrecht approach) is more widely used. I use the Mark II version because it has simpler rules and thus provides an easier introduction to the principles of FPs. Mark II FPs are explained in more detail in Section 5.9. If you are really keen on teaching the IFPUG approach then look at Section 5.10. The IFPUG rules are really quite tricky in places and for the full rules it is best to contact IFPUG.
  • #16: For each transaction (cf use case) count the number of input types (not occurrences e.g. where a table of payments is input on a screen so the account number is repeated a number of times), the number of output types, and the number of entities accessed. Multiply by the weightings shown and sum. This produces an FP count for the transaction which will not be very useful. Sum the counts for all the transactions in an application and the resulting index value is a reasonable indicator of the amount of processing carried out. The number can be used as a measure of size rather than lines of code. See calculations of productivity etc discussed earlier. There is an example calculation in Section 5.9 (Example 5.3) and Exercise 5.7 should give a little practice in applying the method.
  • #17: Attempts have been made to extend IFPUG FPs to real-time and embedded systems, but this has not been very convincing (IMHO). The embedded software component is seen as sitting at a particular level in the system. It receives calls for services from a higher layer and requests services from lower layers. It will receive responses from lower levels and will send responses to higher levels.
  • #18: Each arrow represents an enter or exit if in black, or a read/write if in red.
  • #19: Exercise 5.8 gives some practice in applying the technique.
  • #20: Recall that the aim of this lecture is to give an overview of principles. COCOMO81 is the original version of the model, which has subsequently been developed into COCOMO II, some details of which are discussed in Section 5.12. For full details read Barry Boehm et al., Software Estimation with COCOMO II, Prentice-Hall, 2002.
  • #21: An interesting question is what a ‘semi-detached’ system is exactly. To my mind, a project that combines elements of both real-time and information systems (i.e. has a substantial database) ought to be even more difficult than an embedded system. Another point is that COCOMO was based on data from very large projects. There are data from smaller projects suggesting that larger projects tend to be more productive because of economies of scale. At some point the diseconomies of scale caused by the additional management and communication overheads then start to make themselves felt. Exercise 5.10 in the textbook provides practice in applying the basic model.
  • #22: Virtual machine volatility is where the operating system that will run your software is subject to change. This could particularly be the case with embedded control software in an industrial environment. Schedule constraints refers to situations where extra resources are deployed to meet a tight deadline. If two developers can complete a task in three months, it does not follow that six developers could complete the job in one month. There would be additional effort needed to divide up the work and co-ordinate effort and so on.
  • #23: Exercise 5.11 gives practice in applying these.
  • #24: The source cases, in this situation, are completed projects. For each of details of the factors that would have a bearing on effort are recorded. These might include lines of code, function points (or elements of the FP counts such as the number of inputs, outputs etc), number of team members etc etc. For the values for the new project are used to find one or more instances from the past projects than match the current one. The actual effort from the past project becomes the basis of the estimate for the new project.