Chapter 1: Conventional Software Management

Introduction
Three analyses of the state of the software
engineering industry as of the mid-1990s yielded:
Software development is still highly unpredictable
• Only about 10% of software projects are delivered
successfully on time, within the initial budget, and meeting
user requirements
The management discipline is more of a discriminator in
success or failure than are technology advances
• The level of software scrap and rework is indicative of an
immature process.
Behold the magnitude of the software problem and
current norms!
But is the ‘theory’ bad? Is the ‘practice’ bad? Both?
Let’s consider….
I. The Waterfall Model
Recognize that there are numerous variations of
the ‘waterfall model.’
Tailored to many diverse environments
The ‘theory’ behind the waterfall model – good
Oftentimes ignored in the ‘practice’
The ‘practice’ – some good; some poor
Waterfall – Theory
Historical Perspective and Update
Circa 1970: lessons learned and observations
Point 1: There are two essential steps common to the development of
computer programs: analysis and coding - More later on this one.
Point 2: In order to manage and control all of the intellectual freedom
associated with software development, one must introduce several other
‘overhead’ steps, including system requirements definition, software
requirements definition, program design, and testing. These steps
supplement the analysis and coding steps. (See Fig 1-1, text, p. 7,
which models the basic programming steps and the large-scale approach.)
Point 3: The basic framework … is risky and invites failure. The
testing phase that occurs at the end of the development cycle is the first
event for which timing, storage, input/output transfers, etc. are
experienced as distinguished from analyzed. The resulting design
changes are likely to be so disruptive that the software requirements
upon which the design is based are likely to be violated. Either the
requirements must be modified or a substantial design change is
warranted. Discuss.
Waterfall – Theory
Suggested Changes ‘Then’ and ‘Now’
1. “Program design” comes first.
Occurs between SRS generation and analysis.
 Program designer looks at storage, timing, data.
Very high level…First glimpse. First concepts…
During analysis: program designer must then impose
storage, timing, and operational constraints to
determine consequences.
Begin design process with program designers, not
analysts and programmers
Design, define, and allocate the data processing modes,
even if wrong (allocate functions, database design,
interfacing, processing modes, I/O processing, operating
procedures…. even if wrong!!)
 Build an overview document – to gain a basic
understanding of system for all stakeholders.
Waterfall – Theory
Suggested Changes ‘Then’ and ‘Now’
Point 1: Update: We use the term ‘architecture
first’ development rather than program design.
Elaborate: distribution, layered architectures, components
Nowadays, the basic architecture MUST come first.
Recall the RUP: use-case driven, architecture-
centric, iterative development process……
Architecture comes first; then it is designed and
developed in parallel with planning and
requirements definition.
Recall RUP Workflow diagrams….
Waterfall – Theory
Suggested Changes ‘Then’ and ‘Now’
Point 2: Document the Design
Development efforts required huge amounts of
documentation – manuals for everything
• User manuals, operation manuals, program maintenance
manuals, staff user manuals, test manuals…
• Most of us would like to ‘ignore’ documentation. 
Each designer MUST communicate with various
stakeholders: interface designers, managers,
customers, testers, developers, …..
Waterfall – Theory
Suggested Changes ‘Then’ and ‘Now’
Point 2: Update: Document the Design
Now, we concentrate primarily on ‘artifacts’ –
those models produced as a result of developing an
architecture, performing analysis, capturing
requirements, and deriving a design solution
• Include Use Cases, static models (class diagrams, state
diagrams, activity diagrams), dynamic models (sequence
and collaboration diagrams), domain models, glossaries,
supplementary specifications (constraints, operational
environmental constraints, distribution, ….)
• Modern tools, notations, and methods produce
self-documenting artifacts from development activities.
• Visual modeling provides considerable documentation
Waterfall – Theory
Suggested Changes ‘Then’ and ‘Now’
Point 3: Do it twice.
History argues that the delivered version is really version #2.
Microcosm of software development.
In version 1, the major problems and alternatives are addressed –
the ‘big cookies’ such as communications, interfacing, data
modeling, platforms, operational constraints, and other
constraints. Plan to throw the first version away sometimes…
Version 2 is a refinement of version 1 in which the major
requirements are implemented.
Version 1 is often austere; version 2 addresses its shortcomings!
Point 3: Update.
This approach is a precursor to architecture-first
development (see RUP). Initial engineering is done. Forms
the basis for iterative development and addressing risk!
Waterfall – Theory
Suggested Changes ‘Then’ and ‘Now’
Point 4: Then: Plan, Control, and Monitor
Testing.
Largest consumer of project resources (manpower,
computing time, …) is the test phase.
 Phase of greatest risk – in terms of cost and schedule. (EST
1…)
• Occurs last, when alternatives are least available, and expenses are at
a maximum.
• Typically the phase that is shortchanged the most
To do:
• 1. Employ a non-vested team of test specialists – not responsible for
original design.
• 2. Employ visual inspections to spot obvious errors (code reviews,
other technical reviews and interfaces)
• 3. Test every logic path
• 4. Employ final checkout on target computer…..
Waterfall – Theory
Suggested Changes ‘Then’ and ‘Now’
Point 4: Now: Plan, Control, and Monitor
Testing.
Items 1 and 4 are still valid.
• 1) Use a test team not involved in the development of the system –
at least for testing other than ‘unit testing…’
• 4) Employ final checkout on target computer….
Item 2 (software inspections) – good years ago, but modern
development environments largely obviate the need. Many code
analyzers, optimizing compilers, and static and dynamic
analyzers are available to assist automatically…
• May still yield good results – but mostly stylistic ones, not
significant problems!
Item 3 (testing every path) is impossible. The path count explodes
combinatorially, and distributed systems, reusable components
(necessary?), and other factors (aspects) make it harder still. A rough
illustration follows.
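As a minimal sketch of the combinatorics, assume (purely for illustration) a module whose decisions are independent and sequential: every added branch doubles the number of distinct execution paths, and any loop makes the count unbounded.

    # Illustrative only: path counts for a module with n independent,
    # sequential if-statements; loops would make the count unbounded.
    def path_count(branches: int) -> int:
        return 2 ** branches

    for n in (10, 20, 30, 40):
        print(f"{n:2d} branches -> {path_count(n):>16,} paths")
    # 10 -> 1,024   20 -> 1,048,576   30 -> 1,073,741,824   40 -> 1,099,511,627,776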
Waterfall – Theory
Suggested Changes ‘Then’ and ‘Now’
Point 5 – Old: Involve the Customer
Old advice: involve customer in requirements
definition, preliminary software review,
preliminary program design (critical design
review briefings…)
Now: Involving the customer and all stakeholders
is critical to overall project success.
Demonstrate increments; solicit feedback;
embrace change; build software cyclically, iteratively,
and evolutionarily. Address risk early…..
Overall Appraisal of Waterfall Model
Criticism of the waterfall model is
misplaced.
Theory is fine.
Practice is what was poor!
But: Less than 20% success rate
The Software Development Plan:
Old Version
Define precise requirements
Define precise plan to deliver system
Constrained by specified time and budget
Execute and track to plan
[Figure: the planned path through the stakeholder satisfaction space, from the initial project situation (reused or legacy assets; detailed plans and scope) toward the target.]
1.1.2 In Practice
Characteristics of Conventional Process – as it has
been applied (in general)
Projects were not delivered on time, were not within the initial
budget, and rarely met user requirements
Projects frequently had:
1. Protracted integration and late design breakage
2. Late risk resolution
3. Requirements-driven functional decomposition
4. Adversarial stakeholder relationships
5. Focus on documents and review meetings
Let’s look at these five major problems…
1. Protracted Integration and Late Design Breakage

Sequential activities: Requirements → Design → Code → Integration

Symptoms of the conventional waterfall process:
 Late design breakage
 40% of effort on integration and test

[Figure: development progress (% coded) versus project schedule. Progress climbs toward 100% until integration begins near the original target date, then stalls in a period of late design breakage before the eventual completion date.]

• Early paper designs and thorough briefings
• Commitment to code very late in the cycle
• Integration nightmares due to unforeseen implementation and interface issues
• Heavy budget and schedule pressure
• Late ‘shoe-horning’ of non-optimal fixes with no time for redesign!!!!
• A very fragile, unmaintainable product, almost always delivered late.
Expenditures per Activity for a Conventional Software Project

  Activity               Cost
  Management               5%
  Requirements             5%
  Design                  10%
  Code and unit test      30%
  Integration and test    40%
  Deployment               5%
  Environment              5%
  Total                  100%
 Lots of time was spent on ‘perfecting the software design’ prior to
committing to code.
 Typically: requirements in English, design in flowcharts, detailed
design in PDL, and implementations in Fortran, COBOL, or C.
Waterfall model → late integration and performance showstoppers.
Testing could only be performed ‘at the end’ (other than unit testing).
Testing ‘should have’ received 40% of life-cycle resources; it often didn’t!!
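To make the distribution concrete, the sketch below applies these percentages to a hypothetical 100 staff-month project; the project size is assumed purely for illustration.

    # Illustrative only: the conventional expenditure profile applied to an
    # assumed project size of 100 staff-months.
    profile = {
        "Management": 0.05,
        "Requirements": 0.05,
        "Design": 0.10,
        "Code and unit test": 0.30,
        "Integration and test": 0.40,
        "Deployment": 0.05,
        "Environment": 0.05,
    }
    total_staff_months = 100

    for activity, share in profile.items():
        print(f"{activity:<22} {share * total_staff_months:5.1f} staff-months")
    # Integration and test alone (40 staff-months) matches design plus code
    # and unit test combined (10 + 30): the late scrap-and-rework bill.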
2. Late Risk Resolution
Problem here:  focused on early paper artifacts.
Real issues – still unknown and hard to grasp.
Difficult to resolve risk during requirements when many
key items still not fully understood.
Even in design, when requirements better understood, still
difficult to get objective assessment.
• Risks were at a very high level
During coding, some risks were resolved, BUT during
integration many risks became quite clear, and changes to
many artifacts, and much retrenchment, often had to occur.
While much retrenchment did occur, it often caused missed
dates, delayed requirement compliance, or, at a
minimum, sacrificed quality (extensibility,
maintainability, loss of original design integrity, and
more).
Quick fixes, often without documentation, occurred a lot!
3. Requirements-Driven Functional Decomposition
Traditionally, software development processes have been requirements-driven.
Developers assumed requirements specs were complete, clear, necessary, feasible, and
constant! This is RARELY the case!!!!
All too often, too much time was spent treating ‘all’ requirements equally rather
than focusing on the critical ones.
Much time was spent documenting topics (traceability, testability, etc.) that
were later made obsolete as ‘DRIVING REQUIREMENTS AND SUBSEQUENT
DESIGN UNDERSTANDING EVOLVE.’ We do not KNOW all we’d like to
know ‘up front.’
Too much time addressing all of the scripted requirements
• normally listed in tables, decision-logic tables, flowcharts, and plain, old text.
• Much brainpower wasted on the ‘lesser’ requirements.
Also, the assumption that all requirements could be captured as ‘functions,’ with the
resulting decomposition of those functions.
Functions, sub-functions, etc. became the basis for contracts and work
apportionment, while ignoring major architectural-driven approaches and
requirements that are ‘threaded’ throughout functions and that transcend
individual functions…... (security; authentication; persistency; performance…)
Fallacy: all requirements can be completely specified ‘up front’ and
decomposed via functions.
4. Adversarial Stakeholder Relationships (1 of 2)
Who are stakeholders? Discuss….Quite a diverse group!
Adversarial relationships OFTEN true!
Misunderstandings of documentation, usually written in
English and laced with business jargon.
Paper transmission of requirements – the only method used….
No real modeling, and no universally-agreed-to languages with
common notations (no GUIs, no network components already
available; most systems were ‘custom.’)
Subjective reviews / opinions. Generally without value!
…more
Management Reviews; Technical Reviews!
4. Adversarial Stakeholder Relationships (2 of 2)
Common occurrences with contractual software:
1. Contractor prepared a draft contract-deliverable document that
constituted an intermediate artifact and delivered it to the customer for
approval. (usually done after interviews, questionnaires, meetings…)
2. Customer was expected to provide comments
(typically within 15-30 days.)
3. Contractor incorporated these comments and submitted
(typically 15-30 days) a final version for approval.
Evaluation:
Overhead of paper was huge and ‘intolerable.’ Volumes of paper! (often
under-read)
Strained contractor/customer relationships
Mutual distrust – basis for much litigation
Often, once approved, rendered obsolete later….(living document?)
5. Focus on Documents and Review Meetings
A very documentation-intensive approach.
Insufficient attention to producing credible ‘increments’ of the
desired products.
Big bang approach – all FDs delivered at once;
All Design Specs ‘ok’d’ at once and ‘briefed’…
Milestones ‘commemorated’ via review meetings – technical,
managerial, ….. Everyone nodding and smiling often…
Incredible energies expended on producing paper documentation to
show progress versus efforts to address real risk issues and
integration issues.
Stakeholders often did not go through design…
Very VERY low value in meetings and high costs
• Travel, accommodations…..
Many issues could have been averted during early life-cycle
phases rather than becoming huge problems late….but…
Continuing….
Typical software product design reviews….
1. Big briefing to a diverse audience
Results: only a small percentage of the audience understands the
software
Briefings and documents expose few of the important assets and risks
of complex software.
2. A design that appears to be compliant
There is no tangible evidence of compliance
Compliance with ambiguous requirements is of little value.
3. Coverage of requirements (typically hundreds….)
Only a few (tens) are in reality the real design drivers, but many are presented
Dealing with all requirements dilutes the focus on the critical drivers.
4. A design considered ‘innocent until proven guilty’
The design is always guilty
Design flaws are exposed later in the life cycle.
1.2 Conventional Software
Management Performance
Very few changes from Barry Boehm’s
“industrial software metrics” from 1987.
Most still generally describe some of the
fundamental economic relationships that are
derived from years of practice:
What follows is Barry’s top ten (with your
author’s, and my, comments).
Basic Software Economics…
1. Finding and fixing a software problem after delivery costs 100
times more than finding and fixing the problem in the early design phases.
Flat true.
2. You can compress software development schedules up to 25% of
nominal, but no more.
Adding people requires more management overhead and training.
Still a good heuristic. Some compression is sometimes possible! Be
careful! Oftentimes it is a killer to add people….(Discuss later)
3. For every dollar you spend on development, you will spend two
dollars on maintenance. We HOPE this is true!
Hope so. Long life cycles mean revenue… Still, it is hard to tell.
The product’s success in the marketplace is the driver.
Successful products will have much higher ratios of “maintenance to
development”…..
One-of-a-kind developments will most likely NOT spend this kind of
money on maintenance.
• Examples: implementation / conversion subsystems…..
• Conversion software….
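A back-of-the-envelope restatement of heuristics 1, 2, and 3; every absolute figure below is an assumption chosen only for illustration.

    # Illustrative arithmetic only; all absolute figures are assumed.
    dev_cost = 1_000_000      # assumed total development cost, $
    early_fix = 500           # assumed cost to fix a defect during early design, $

    late_fix = 100 * early_fix        # heuristic 1: ~100x after delivery -> $50,000
    min_schedule = 12 * (1 - 0.25)    # heuristic 2: 12 nominal months compress to 9 at best
    maintenance = 2 * dev_cost        # heuristic 3: ~$2 of maintenance per $1 of development

    print(late_fix, min_schedule, maintenance)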
Basic Software Economics (cont)
4. Software development and maintenance costs are primarily a
function of the number of source lines of code.
Generally true. Component-based development and reuse may dilute
this, but they were not in common use in the past. (A rough size-based
sketch follows this slide.)
5. Variations among people account for the biggest differences
in software productivity.
Always try to hire good people. But we cannot always do that. Balance
is critical. We don’t want all team members trying to self-actualize and
become heroes. Build the ‘team concept.’ While there is no “I” in
“team,” there is an implicit “we.”
6. Overall ratio of software to hardware costs is still growing.
In 1955 it was 15:85; In 1985, it was 85:15. Now? I don’t
know.
While true, impacting these figures is the ever-increasing demand for
functionality and its attendant complexity. Both appear to grow without bound.
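Heuristic 4 is what size-driven cost models formalize. Below is a minimal sketch using a COCOMO-81-style relationship, effort = a × KLOC^b; the ‘semi-detached’ coefficients are assumed here for illustration and do not come from the slides.

    # Illustrative COCOMO-81-style estimate (coefficients assumed, not from the slides).
    # Effort grows faster than linearly with size, so cost tracks SLOC closely.
    def effort_person_months(kloc: float, a: float = 3.0, b: float = 1.12) -> float:
        return a * (kloc ** b)

    for size in (10, 50, 100):
        e = effort_person_months(size)
        print(f"{size:4d} KLOC -> {e:7.1f} person-months ({e / size:.2f} PM per KLOC)")
    # Person-months per KLOC rises with size: a diseconomy of scale.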
Basic Software Economics (cont)
7.  Only about 15% of software development effort is
devoted to programming. (Sorry! But this is the way it is!)
Approximately true. This figure has been used for years –
and is shattering to a lot of programmers – especially ‘new’
ones. And this 15% is only for development! It does not
include the (hopefully) roughly 65%–70% of total life-cycle
expense that goes to maintenance!!
8. Software systems and products typically cost three times as
much per SLOC as individual software programs. Software-
system products, that is system of systems, cost nine times as
much.
A real fact: the more software you build, the more
expensive it is per source line. Why do you think that is?
Discuss! (See the sketch below.)
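Heuristic 8 in plain arithmetic; the baseline cost per SLOC and the size are assumptions chosen only for illustration, while the 3x and 9x multipliers come from the heuristic itself.

    # Illustrative only: same line count, very different unit cost.
    base_cost_per_sloc = 25.0      # assumed $ per SLOC for a standalone program
    sloc = 100_000                 # assumed size

    standalone_program = base_cost_per_sloc * sloc        # $2,500,000
    software_product   = 3 * base_cost_per_sloc * sloc    # $7,500,000 (3x per SLOC)
    system_of_systems  = 9 * base_cost_per_sloc * sloc    # $22,500,000 (9x per SLOC)

    print(standalone_program, software_product, system_of_systems)
    # The extra cost buys integration, interfaces, documentation, and
    # system-level engineering, not more lines of code.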
Basic Software Economics (cont)
9. Walkthroughs catch 60% of the errors.
Usually good for catching stylistic issues and sometimes errors, but
they usually do not involve the deep analysis necessary to
catch significant shortcomings.
Major problems, such as performance and resource contention,
are not caught.
10. 80% of the contribution comes from 20% of the contributors.
The 80/20 rule applies to many things: see the text. But pretty correct!
• See the text for a number of these rules – they are ‘generally’ true….

Editor's Notes

  • #15: The important difference between the problems is the focus on the product, not the activities.
  • #17: This is a perspective of development progress versus time, where progress is defined as % coded, i.e., demonstrable in its target form. (The software is compilable and executable; it is not necessarily complete, compliant, nor up to specifications.)

    Software development progress typically proceeded without issue until the integration phase. Requirements were first captured in complete detail in ad hoc text. Then design documents were fully elaborated in ad hoc notations. Then coding and unit testing of individual components was performed. Then the components were compiled and linked together into a complete system. This integration activity was the first time that significant inconsistencies among components (their interfaces and behavior) could be tangibly recognized. These inconsistencies, some of them extremely difficult to uncover, were the ramifications of using ambiguous formats for the early life-cycle artifacts. Getting the software to operate reliably enough to test its usefulness took much longer than planned. Budget and schedule pressure drove teams to shoe-horn in the quickest fixes; redesign was usually out of the question. Then testing of system threads, usefulness, requirements compliance, and quality was performed through a series of releases until the software was judged adequate for the user. 90% of the time, the end result was a late, over-budget, fragile, and expensive-to-maintain software system.

    Looking back on numerous conventional projects, there was a recurring symptom of following a waterfall model. While it was never planned this way, the resources expended in the major software development workflows resulted in an excessive allocation of resources (either time or effort) to accomplish the integration and test activities. Successful projects would complete with 40% of their effort in I&T; unsuccessful projects would spend even more. The overriding reason for this was that the effort associated with the late scrap and rework of design issues was collected in the I&T activity. Furthermore, most I&T organizations spent 80% of their time integrating and only 20% of their time testing. “Integration” is a non-value-added activity; you would prefer that it take zero time and zero effort. For more information, see Royce page 12.