WEEK 3: SOFTWARE TEST PLANNING
UNIT 12: TEST PLANNING
SOFTWARE TESTING AND CONTINUOUS QUALITY IMPROVEMENT BY WILLIAM E. LEWIS
HTTPS://TIENHUONG.FILES.WORDPRESS.COM/2009/08/SOFTWARE-TESTING-AND-
CONTINUOUS-QUALITY-IMPROVEMENT-SECOND-EDITION.PDF
Outline
- INTRODUCTION
- PLANNING METHODOLOGY
- BUILD A TEST PLAN
- DEFINE METRIC OBJECTIVES
- REVIEW/APPROVE PLAN
INTRODUCTION
Purpose of a test plan:
 Provides the basis for accomplishing testing in an organized manner
 Supports successful and smooth test execution and analysis
Discussion: how does the nature of the test plan differ across development models, e.g. waterfall, spiral?
QUALITIES OF A GOOD TEST PLAN
 Has a good chance of detecting a majority of the defects
 Provides test coverage for most of the code
 Is flexible
 Is easy to execute, automatable, and repeatable
 Defines the types of tests to be performed
 Clarifies the test strategy
 Clearly defines the test exit criteria
 Identifies the risks
 Documents the test requirements
 Defines the test deliverables
 Clearly defines the test objectives
PLANNING TEST METHODOLOGY
Step 1: Building the test project plan
Step 2: Defining the metrics
Step 3: Reviewing/approving the test project plan
STEPS OF TEST PLANNING
Step 1
Build A Test Plan
TEST PLANNING (TASKS)
 Task 1: Prepare an Introduction
 Task 2: Define the High-Level Functional Requirements (in Scope)
 Task 3: Identify Manual/Automated Test Types
 Task 4: Identify the Test Exit Criteria
 Task 5: Establish Regression Test Strategy
 Task 6: Define the Test Deliverables
 Task 7: Organize the Test Team
 Task 8: Establish a Test Environment
 Task 9: Define the Dependencies
 Task 10: Create a Test Schedule
 Task 11: Select the Test Tools
 Task 12: Establish Defect Recording/Tracking Procedures
 Task 13: Establish Change Request Procedures
 Task 14: Establish Version Control Procedures
 Task 15: Define Configuration Build Procedures
 Task 16: Define Project Issue Resolution Procedures
 Task 17: Establish Reporting Procedures
 Task 18: Define Approval Procedures
TASK 1: PREPARE AN INTRODUCTION
 Description of the problem
 Application’s risks, purpose, objectives, and
benefits
 Available documentation
Examples: SRS, SDS, project plan, prototypes, user manuals, etc.
TASK 2: DEFINE THE HIGH-LEVEL FUNCTIONAL
REQUIREMENTS (IN SCOPE)
A functional specification consists of
 Hierarchical functional decomposition e.g. order
processing (create new order, edit order etc.)
 Functional window structure e.g. main window (menu
bar, order placing window etc.)
 Window standards e.g. Windows 95 GUI Standards
 Minimum system requirements of the system to be developed, e.g. Windows 95, a Pentium II microprocessor, 24 MB of RAM, 3 GB of disk space, and a modem
TASK 3: IDENTIFY MANUAL/AUTOMATED TEST
TYPES
The test types to use depend on the objectives of the application. However, three types of tests are nearly always required:
1. Function testing comprises the majority of the testing
effort and is concerned with verifying that the
functions work properly.
2. Regression testing tests the application in light of
changes made during debugging, maintenance, or
the development of a new release.
3. User interface testing, or GUI testing, checks the
user’s interaction or functional window structure.
TASK 4: IDENTIFY THE TEST EXIT CRITERIA
1. Scheduled testing time has expired
2. A predefined number of defects has been discovered
3. All the formal tests execute without detecting any defects
4. A combination of the above (most testing projects use a combination of these exit criteria)
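The combined exit criterion can be sketched as a single check; the function name, dates, and thresholds below are illustrative, not from the source:

```python
# Minimal sketch of combined test exit criteria (hypothetical thresholds):
# exit when scheduled time has expired, a predefined defect count has been
# reached, or all formal tests pass with no open failures.
from datetime import date

def exit_criteria_met(today, end_date, defects_found, defect_budget, open_failures):
    time_expired = today >= end_date
    defect_budget_reached = defects_found >= defect_budget
    all_tests_pass = open_failures == 0
    return time_expired or defect_budget_reached or all_tests_pass

print(exit_criteria_met(date(2024, 3, 1), date(2024, 3, 15), 12, 20, 0))  # True: all tests pass
```

In practice each criterion would be reported separately in the test summary, so the team knows *which* condition ended testing.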
TASK 5: ESTABLISH REGRESSION TEST STRATEGY
 Regression testing tests the application in light of changes made during
a development spiral, debugging, maintenance, or the development of
a new release. This test must be performed after functional
improvements or repairs have been made to a system to confirm that
the changes have no unintended side effects. Corrections of errors relating to logic and control flow, computational errors, and interface errors are examples of conditions that necessitate regression testing.
 It would be ideal if all the tests in the test suite were rerun for each
new spiral but, due to time constraints, this is probably not realistic.
 A good regression strategy during spiral development is for some
regression testing to be performed during each spiral to ensure that
previously demonstrated capabilities are not adversely affected by later
development spirals or error corrections.
TASK 5: ESTABLISH REGRESSION TEST STRATEGY (CONT.)
 Regression tests are strong candidates for test automation, since they are repeated in every testing spiral.
 Regression testing needs to occur between releases after the initial release of the system.
 The test that uncovered an original defect should be rerun after the defect has been corrected.
 An in-depth effort should be made to ensure that the original defect was corrected, not just its symptoms.
 Use of a retest matrix is recommended.
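A retest matrix cross-references test cases with the components they exercise, so changed components select the regression tests to rerun. A minimal sketch, with hypothetical test and component names:

```python
# Sketch of a retest matrix (hypothetical data): each test case is mapped
# to the set of components it exercises.
retest_matrix = {
    "TC01_create_order": {"order_processing", "database"},
    "TC02_edit_order":   {"order_processing"},
    "TC03_login":        {"authentication"},
}

def select_regression_tests(changed_components):
    """Select the tests that touch any changed component."""
    return sorted(tc for tc, comps in retest_matrix.items()
                  if comps & changed_components)

print(select_regression_tests({"order_processing"}))
# ['TC01_create_order', 'TC02_edit_order']
```

When the full suite cannot be rerun each spiral, a selection like this keeps regression coverage focused on what changed.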
TASK 6: DEFINE THE TEST DELIVERABLES
Test deliverables result from test planning, test
design, test development, and test defect
documentation.
Examples?
TASK 7: ORGANIZE THE TEST TEAM
Testing requires skilled staff. Areas of responsibility in testing:
 Testing the application, which is the responsibility of the test team
 The overall testing process, which is handled by the test manager
TASK 8: ESTABLISH A TEST ENVIRONMENT
The purpose of the test environment is to provide the physical framework necessary for the testing activity. For this task, the test environment needs are established and reviewed before implementation.
The main components of the test environment:
 The test facility component includes the physical setup.
 The technologies component includes the hardware platforms, physical network and
all its components, operating system software.
 The tools component includes any specialized testing software such as automated
test tools, testing libraries, and support software.
TASK 9: DEFINE THE DEPENDENCIES
A good source of information is previously produced test plans on other
projects. If available, the sequence of tasks in the project work plans can
be analyzed for activity and task dependencies that apply to this project.
Examples of test dependencies include:
• Code availability
• Tester availability (in a timely fashion)
• Test requirements (reasonably defined)
• Test tools availability
• Test group training
• Defects fixed in a timely manner
• Adequate testing time
TASK 10: CREATE A TEST SCHEDULE
A test schedule should be produced that includes
the testing steps (and perhaps tasks), target start
and end dates, and responsibilities. It should also describe how the schedule will be reviewed, tracked, and approved.
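Such a schedule can be kept as structured data so it is easy to review and check for consistency; the steps, dates, and owners below are purely illustrative:

```python
# Sketch: a test schedule as data (hypothetical dates and owners), with a
# simple consistency check that each step ends no earlier than it starts.
from datetime import date

schedule = [
    {"step": "Test planning",  "start": date(2024, 1, 8),  "end": date(2024, 1, 19), "owner": "Test manager"},
    {"step": "Test design",    "start": date(2024, 1, 22), "end": date(2024, 2, 9),  "owner": "Test team"},
    {"step": "Test execution", "start": date(2024, 2, 12), "end": date(2024, 3, 8),  "owner": "Test team"},
]

for row in schedule:
    assert row["start"] <= row["end"], f"{row['step']}: end precedes start"
```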
TASK 11: SELECT THE TEST TOOLS
 Test tools range from relatively simple to sophisticated software. New
tools are being developed to help provide the high-quality software
needed for today’s applications.
 Because test tools are critical to effective testing, those responsible for
testing should be proficient in using them.
 The tools selected should be most effective for the environment in
which the tester operates and the specific types of software being
tested.
 The test plan needs to name specific test tools and their vendors.
 The test team should review and approve the use of each test tool,
because the tool selected must be consistent with the objectives of the
test plan.
TASK 12: ESTABLISH DEFECT RECORDING/
TRACKING PROCEDURES
 Each defect discovered during the testing process needs to be recorded. A defect is related to the individual tests that have been conducted, and the objective is to produce a complete record of those defects.
 The overall motivation for recording defects is to get them corrected and to capture metric information about the application. Development should have access to the defect reports, which they can use to evaluate whether there is a defect and how to resolve it.
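A defect record that ties each defect back to the test that uncovered it might look like the following; the field names and status values are assumptions, not a prescribed schema:

```python
# Sketch of a minimal defect record (hypothetical fields) linking a defect
# to the test that found it, so a complete record can be produced.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DefectReport:
    defect_id: str
    test_id: str          # the test that uncovered the defect
    summary: str
    severity: str         # e.g. "critical", "major", "minor"
    status: str = "open"  # open -> fixed -> verified -> closed
    reported: date = field(default_factory=date.today)

d = DefectReport("D-101", "TC02_edit_order", "Edited order loses line items", "major")
print(d.status)  # open
```

Keeping `test_id` on the record also supports the regression rule above: the recorded test is rerun once the defect is marked fixed.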
TASK 13: ESTABLISH CHANGE REQUEST PROCEDURES
If it were a perfect world, a system would be built and there would be no future
changes. Unfortunately, it is not a perfect world and after a system is deployed,
there are change requests.
Some of the reasons for change are:
 The requirements change.
 The design changes.
 The specification is incomplete or ambiguous.
 A defect is found that was not discovered during reviews.
 The software environment changes, for example, platform, hardware, and so on.
Change control is the process by which a modification to a software component is
proposed, evaluated, approved or rejected, scheduled, and tracked. It is a decision
process used in controlling the changes made to software. Some proposed changes
are accepted and implemented during this process. Others are rejected or
postponed, and are not implemented.
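The change-control life cycle described above (proposed, evaluated, approved or rejected, scheduled, tracked) can be sketched as a small state machine; the state names are one plausible reading of the text, not a fixed standard:

```python
# Sketch of change control as a state machine (hypothetical states): a
# change request is proposed, evaluated, then approved, rejected, or
# postponed; approved changes are scheduled and tracked to implementation.
TRANSITIONS = {
    "proposed":  {"evaluated"},
    "evaluated": {"approved", "rejected", "postponed"},
    "approved":  {"scheduled"},
    "scheduled": {"implemented"},
}

def advance(state, new_state):
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

s = "proposed"
for nxt in ("evaluated", "approved", "scheduled", "implemented"):
    s = advance(s, nxt)
print(s)  # implemented
```

Encoding the allowed transitions makes the decision process explicit: a change cannot be scheduled before it has been evaluated and approved.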
TASK 14: ESTABLISH VERSION CONTROL PROCEDURES
A method for uniquely identifying each software
component needs to be established via a labeling
scheme. Every software component must have a unique
name. Software components evolve through successive
revisions, and each needs to be distinguished.
A simple way to distinguish component revisions is with
a pair of integers, 1.1, 1.2, . . . , that define the release
number and level number. When a software component
is first identified, it is revision 1 and subsequent major
revisions are 2, 3, and so on.
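The release.level pair can be manipulated with a tiny helper; whether a new release starts at level 1 (as the 1.1, 1.2 example above suggests) is an assumption here:

```python
# Sketch: release.level version labels, where a minor revision bumps only
# the level and a new release increments the release number and (assumed)
# resets the level to 1.
def bump(version, major=False):
    release, level = map(int, version.split("."))
    return f"{release + 1}.1" if major else f"{release}.{level + 1}"

print(bump("1.1"))              # 1.2
print(bump("1.2", major=True))  # 2.1
```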
TASK 15: DEFINE CONFIGURATION BUILD PROCEDURES
Assembling a software system involves tools to
transform the source components, or source code, into
executable programs. Examples of tools are compilers
and linkage editors.
Configuration build procedures need to be defined to
identify the correct component versions. The
configuration build model addresses the crucial question
of how to control the way components are built.
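One way to pin down "the correct component versions" is a build manifest that records exactly which version of each component goes into a build; the component names below are hypothetical:

```python
# Sketch: a configuration build record (hypothetical components) that pins
# the exact component versions going into an executable build.
baseline = {
    "order_ui":    "2.3",
    "order_logic": "2.1",
    "db_layer":    "1.7",
}

def build_manifest(name, versions):
    """Produce a reproducible, sorted manifest string for a build."""
    parts = ", ".join(f"{c} {v}" for c, v in sorted(versions.items()))
    return f"{name}: {parts}"

print(build_manifest("release-2.3", baseline))
# release-2.3: db_layer 1.7, order_logic 2.1, order_ui 2.3
```

Testing against a recorded manifest ensures that what was tested is what gets built, which is the crucial control the slide describes.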
TASK 16: DEFINE PROJECT ISSUE RESOLUTION
PROCEDURES
Testing issues can arise at any point in the development process and must be
resolved successfully. Primary responsibility for issue resolution rests with the project manager, who should work with the project sponsor to resolve those issues.
Typically, the testing manager will document test issues that arise during the
testing process. The project manager or project sponsor should screen every issue
that arises. An issue can be rejected or deferred for further investigation but should
be considered relative to its impact on the project.
In any case, a form should be created that contains the essential information.
 Examples of testing issues include lack of testing tools, lack of adequate time to
test, inadequate knowledge of the requirements, and so on.
TASK 17: ESTABLISH REPORTING PROCEDURES
The objectives of test status reporting are to report test issues,
problems, and concerns.
Two key reports that need to be published are:
1. Interim Test Report: a report published between testing spirals indicating the status of the testing effort.
2. System Summary Report: a comprehensive test report published after all spiral testing has been completed.
TASK 18: DEFINE APPROVAL PROCEDURES
Approval procedures are critical in a testing project.
They help provide the necessary agreement
between members of the project team. The testing
manager needs to define who needs to approve a
test deliverable, when it will be approved, and what
the backup plan is if an approval cannot be
obtained.
STEPS OF TEST PLANNING
Step 2
Define the Metric Objectives
DEFINE METRIC OBJECTIVES
“You can’t control what you can’t measure.” (Tom DeMarco)
 Control: Extent to which a manager can ensure minimum surprises.
 Measurement: Recording of past effects to quantitatively predict future effects.
Main tasks:
Task 1: Define the Metrics
Task 2: Define the Metric Points
TASK 1: DEFINE THE METRICS
A metric is a measurable indication of some quantitative aspect of a system.
A metric can be a “result” or a “predictor.”
Result metric: measures a completed event or process.
Example: actual total elapsed time to process a business transaction or total
test costs of a project.
Predictor metric: an early-warning metric that has a strong correlation to some later result.
Example: response time predicted through statistical regression analysis as more terminals are added to a system, before that number of terminals has been measured.
Other examples of metrics:
• Test effectiveness — How well is testing doing, for example, return on investment?
• Development effectiveness — How well is development fixing defects?
• Test automation — How much effort is expended on test automation?
• Test cost — What are the resources and time spent on testing?
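One result metric and one predictor metric can be made concrete as follows. The test-effectiveness measure used here (defects found in test divided by all defects found) is one common reading of that metric, and all numbers are illustrative:

```python
# Result metric sketch: test effectiveness as defect removal efficiency,
# i.e. defects found in test over all defects found (test + post-release).
def defect_removal_efficiency(found_in_test, found_after_release):
    return found_in_test / (found_in_test + found_after_release)

print(round(defect_removal_efficiency(90, 10), 2))  # 0.9

# Predictor metric sketch: least-squares fit of response time against
# terminal count (made-up measurements), extrapolated to an unmeasured load.
terminals = [10, 20, 30, 40]
resp_ms = [120, 180, 250, 310]
n = len(terminals)
mx, my = sum(terminals) / n, sum(resp_ms) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(terminals, resp_ms))
         / sum((x - mx) ** 2 for x in terminals))
intercept = my - slope * mx
print(round(intercept + slope * 60))  # predicted ms at 60 terminals: 439
```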
TASK 2: DEFINE THE METRIC POINTS
A few metric points associated with the general metrics:
STEPS OF TEST PLANNING
Step 3
Review/Approve The Plan
STEP 3: REVIEW / APPROVE PLAN (TASKS)
Task 1: Schedule/Conduct the Review
The review should be scheduled well in advance of the actual meeting, and the participants should have the latest copy of the test plan.
Review elements:
1. The first is defining what will be discussed, or “talking about what we are going to talk
about.”
2. The second is discussing the details or “talking about it.”
3. The third is summarization, or “talking about what we talked about.”
4. The final element is timeliness.
The purpose of this task is for development and the project sponsor to agree and accept the
test plan. If there are any suggested changes to the test plan during the review, they should be
incorporated into the test plan.
STEP 3: REVIEW / APPROVE PLAN (TASKS)
Task 2: Obtain Approvals
Approval is critical in a testing effort, for it helps provide the necessary agreements between testing,
development, and the sponsor. The best approach is a formal sign-off procedure for the test plan.
However, if a formal agreement procedure is not in place, send a memo to each key participant.
In the document attach the latest test plan and point out that all their feedback comments have been
incorporated.
Finally, indicate that in a spiral development environment, the test plan will evolve with each iteration
but that you will include them in any modification.