FUNDAMENTAL TEST PROCESS
By Graham et al. (2011)
UNIVERSITAS SULTAN SYARIF
KASIM RIAU
Bima Alvamiko Pratama
In this section, we will describe the fundamental
test process and activities. These start with test
planning and continue through to test closure. For
each part of the test process, we'll discuss the
main tasks of each test activity.
In this section, you'll also encounter the glossary
terms confirmation testing, exit criteria,
incident, regression testing, test basis, test
condition, test coverage, test data, test
execution, test log, test plan, test strategy, test
summary report and testware.
Introduction
As we have seen, although executing tests is important, we also need a
plan of action and a report on the outcome of testing. Project and test plans
should include time to be spent on planning the tests, designing test cases,
preparing for execution and evaluating status. The idea of a fundamental
test process for all levels of test has developed over the years. Whatever
the level of testing, we see the same type of main activities happening,
although there may be a different amount of formality at the different levels;
for example, in most organizations component tests are carried out less
formally than system tests, with a less documented test process.
The decision about the level of formality of the processes will depend on
the system and software context and the level of risk associated with the
software. So we can divide the activities within the fundamental test
process into the following basic steps:
 planning and control;
 analysis and design;
 implementation and execution;
 evaluating exit criteria and reporting;
 test closure activities.
Cont…
During test planning, we make sure we understand the goals and
objectives of the customers, stakeholders, and the project, and the risks
which testing is intended to address. This will give us what is sometimes
called the mission of testing or the test assignment. Based on this
understanding, we set the goals and objectives for the testing itself, and
derive an approach and plan for the tests, including specification of test
activities. To help us we may have organization or program test policies
and a test strategy. Test policy gives rules for testing, e.g. 'we always
review the design documents'; test strategy is the overall high-level
approach, e.g. 'system testing is carried out by an independent team
reporting to the program quality manager. It will be risk-based and
proceeds from a product (quality) risk analysis' (see Chapter 5). If policy
and strategy are defined already they drive our planning but if not we
should ask for them to be stated and defined. Test planning has the
following major tasks, given approximately in order, which help us build
a test plan:
 Determine the scope and risks and identify the objectives of testing: we consider
what software, components, systems or other products are in scope for testing;
the business, product, project and technical risks which need to be addressed;
and whether we are testing primarily to uncover defects, to show that the
software meets requirements, to demonstrate that the system is fit for purpose
or to measure the qualities and attributes of the software.
 Determine the test approach (techniques, test items, coverage, identifying and
interfacing with the teams involved in testing, testware): we consider how we will
carry out the testing, the techniques to use, what needs testing and how
extensively (i.e. what extent of coverage). We'll look at who needs to get
involved and when (this could include developers, users, IT infrastructure
teams); we'll decide what we are going to produce as part of the testing (e.g.
testware such as test procedures and test data). This will be related to the
requirements of the test strategy.
 Implement the test policy and/or the test strategy: we mentioned that there may
be an organization or program policy and strategy for testing. If this is the case,
during our planning we must ensure that what we plan to do adheres to the
policy and strategy, or we must have agreed with stakeholders, and
documented, any deviations from them.
Cont…
 Determine the required test resources (e.g. people, test environment, PCs): from
the planning we have already done we can now go into detail; we decide on our
team make-up and we also set up all the supporting hardware and software we
require for the test environment.
 Schedule test analysis and design tasks, test implementation, execution and
evaluation: we will need a schedule of all the tasks and activities, so that we can
track them and make sure we can complete the testing on time.
 Determine the exit criteria: we need to set criteria such as coverage criteria (for
example, the percentage of statements in the software that must be executed
during testing) that will help us track whether we are completing the test
activities correctly. They will show us which tasks and checks we must complete for a
particular level of testing before we can say that testing is finished.
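A coverage-based exit criterion like the statement-coverage example above can be checked mechanically. The sketch below is illustrative (the function name and thresholds are not from any particular coverage tool): testing is only declared "done" when the measured percentage reaches the target set during planning.

```python
# Minimal sketch of one kind of exit criterion: a statement-coverage target.
# Names and thresholds are illustrative, not from any specific tool.

def coverage_met(executed_statements, total_statements, target_pct):
    """True if the percentage of statements executed reaches the planned target."""
    if total_statements == 0:
        return False  # nothing measured yet, so the criterion cannot be met
    actual_pct = 100.0 * executed_statements / total_statements
    return actual_pct >= target_pct

# 85 of 100 statements executed against an 80% target: criterion met
print(coverage_met(85, 100, 80))   # True
print(coverage_met(60, 100, 80))   # False
```

In practice a coverage tool reports the percentages; the exit criterion simply compares them to the planned target.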
Cont…
Test analysis and design is the activity where general testing objectives are transformed into
tangible test conditions and test designs. During test analysis and design, we take general testing
objectives identified during planning and build test designs and test procedures (scripts). You'll see
how to do this in Chapter 4. Test analysis and design has the following major tasks, in approximately
the following order:
 Review the test basis (such as the product risk analysis, requirements, architecture, design
specifications, and interfaces), examining the specifications for the software we are testing. We
use the test basis to help us build our tests. We can start designing certain kinds of tests (called
black-box tests) before the code exists, as we can use the test basis documents to understand
what the system should do once built. As we study the test basis, we often identify gaps and
ambiguities in the specifications, because we are trying to identify precisely what happens at
each point in the system, and this analysis also prevents defects appearing in the code.
 Identify test conditions based on analysis of test items, their specifications, and what we know
about their behavior and structure. This gives us a high-level list of what we are interested in
testing. If we return to our driving example, the examiner might have a list of test conditions
including 'behavior at road junctions', 'use of indicators', 'ability to maneuver the car' and so on.
In testing, we use the test techniques to help us define the test conditions. From this we can
start to identify the type of generic test data we might need.
 Design the tests (you'll see how to do this in Chapter 4), using techniques to help
select representative tests that relate to particular aspects of the software which
carry risks or which are of particular interest, based on the test conditions and going
into more detail. For example, the driving examiner might look at a list of test
conditions and decide that junctions need to include T-junctions, crossroads and so
on. In testing, we'll define the test cases and test procedures.
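The refinement step described above, from one high-level condition to several concrete test cases, can be sketched in code. The record structure and identifiers here are hypothetical, purely to show the traceability between a condition and the cases that cover it.

```python
# Hypothetical sketch: refining one high-level test condition ('junctions',
# in the driving example) into concrete, traceable test cases.

condition = "behavior at road junctions"

test_cases = [
    {"id": "TC-01", "condition": condition, "action": "turn left at a T-junction"},
    {"id": "TC-02", "condition": condition, "action": "go straight at a crossroads"},
    {"id": "TC-03", "condition": condition, "action": "turn right at a mini-roundabout"},
]

# every concrete test case traces back to the condition it covers
print(all(tc["condition"] == condition for tc in test_cases))  # True
```

Keeping the link from case back to condition is what later lets us measure how much of each condition the executed tests actually covered.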
 Evaluate testability of the requirements and system. The requirements may be written
in a way that allows a tester to design tests; for example, if the performance of the
software is important, that should be specified in a testable way. If the requirements
just say 'the software needs to respond quickly enough' that is not testable, because
'quickly enough' may mean different things to different people. A more testable
requirement would be 'the software needs to respond in 5 seconds with 20 people
logged on'. The testability of the system depends on aspects such as whether it is
possible to set up the system in an environment that matches the operational
environment and whether all the ways the system can be configured or used can be
understood and tested. For example, if we test a website, it may not be possible to
identify and recreate all the configurations of hardware, operating system, browser,
connection, firewall and other factors that the website might encounter.
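The difference between the two requirement wordings above is exactly that one can become an automated check. A minimal sketch, assuming a stand-in operation in place of the real system under test (the names and the 5-second limit come from the example requirement, not from any real project):

```python
import time

# Sketch: a measurably phrased requirement ('respond within 5 seconds')
# can be checked automatically; 'quickly enough' cannot.

REQUIREMENT_SECONDS = 5.0

def within_requirement(operation, limit=REQUIREMENT_SECONDS):
    """Time one call to operation() and compare against the stated limit."""
    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start
    return elapsed <= limit

# a trivial stand-in for the real system under test
print(within_requirement(lambda: sum(range(1000))))  # True
```

A real performance test would also reproduce the stated load condition ('with 20 people logged on'), which is part of what makes the requirement testable.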
 Design the test environment set-up and identify any required infrastructure and tools.
This includes testing tools (see Chapter 6) and support tools such as spreadsheets,
word processors, project planning tools, and non-IT tools and equipment: everything
we need to carry out our work.
Cont…
During test implementation and execution, we take the
test conditions and make them into test cases and
testware and set up the test environment. This means
that, having put together a high-level design for our
tests, we now start to build them. We transform our test
conditions into test cases and procedures, and other
testware such as scripts for automation. We also need
to set up an environment where we will run the tests
and build our test data. Setting up environments and
data often involves significant time and effort, so you
should plan and monitor this work carefully. Test
implementation and execution have the following major
tasks, in approximately the following order:
 Implementation:
 Develop and prioritize our test cases, using the techniques you'll see
in Chapter 4, and create test data for those tests. We will also write
instructions for carrying out the tests (test procedures). For the driving
examiner this might mean changing the test condition 'junctions' to
'take the route down Mayfield Road to the junction with Summer Road
and ask the driver to turn left into Summer Road and then right into
Green Road, expecting that the driver checks mirrors, signals and
maneuvers correctly, while remaining aware of other road users.' We
may need to automate some tests using test harnesses and
automated test scripts. We'll talk about automation more in Chapter 6.
 Create test suites from the test cases for efficient test execution. A
test suite is a logical collection of test cases which naturally work
together. Test suites often share data and a common high-level set of
objectives. We'll also set up a test execution schedule.
 Implement and verify the environment. We make sure the test
environment has been set up correctly, possibly even running specific tests
on it.
Cont…
 Execution:
 Execute the test suites and individual test cases, following our test procedures. We might do
this manually or by using test execution tools, according to the planned sequence.
 Log the outcome of test execution and record the identities and versions of the software under
test, test tools and testware. We must know exactly what tests we used against what version
of the software; we must report defects against specific versions; and the test log we keep
provides an audit trail.
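One log record per executed test, capturing which test ran against which software version, is what provides the audit trail described above. A minimal sketch with illustrative field names:

```python
from datetime import datetime, timezone

# Sketch of a single test-log record: the audit trail needs the test
# identity, the exact software version it ran against, and the outcome.

def log_entry(test_id, software_version, outcome):
    return {
        "test_id": test_id,
        "software_version": software_version,
        "outcome": outcome,  # e.g. "pass" or "fail"
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }

entry = log_entry("TC-01", "2.3.1", "fail")
print(entry["software_version"])  # 2.3.1
```

With records like this, a defect report can state exactly which build failed, and the log can later be replayed against the exit criteria.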
 Compare actual results (what happened when we ran the tests) with expected results (what
we anticipated would happen).
 Where there are differences between actual and expected results, report discrepancies as
incidents. We analyze them to gather further details about the defect, report additional
information on the problem, identify the causes of the defect, and differentiate between
problems in the software or other products under test and defects in test data, in test
documents, or mistakes in the way we executed the test. We would want to log the latter in
order to improve the testing itself.
 Repeat test activities as a result of action taken for each discrepancy. We need to re-execute
tests that previously failed in order to confirm a fix (confirmation testing or re-testing). We
execute corrected tests and suites if there were defects in our tests. We test corrected
software again to ensure that the defect was indeed fixed correctly (confirmation test) and that
the programmers did not introduce defects in unchanged areas of the software and that fixing
a defect did not uncover other defects (regression testing).
Cont…
Evaluating exit criteria is the activity where test execution is assessed
against the defined objectives. This should be done for each test level, as
for each we need to know whether we have done enough testing. Based on
our risk assessment, we'll have set criteria against which we'll measure
'enough'. These criteria vary for each project and are known as exit criteria.
They tell us whether we can declare a given testing activity or level
complete. We may have a mix of coverage or completion criteria (which
tell us about test cases that must be included, e.g. 'the driving test must
include an emergency stop' or 'the software test must include a response
measurement'), acceptance criteria (which tell us how we know whether the
software has passed or failed overall, e.g. 'only pass the driver if they have
completed the emergency stop correctly' or 'only pass the software for
release if it meets the priority 1 requirements list') and process exit criteria
(which tell us whether we have completed all the tasks we need to do,
e.g. 'the examiner/tester has not finished until they have written and filed
the end of test report'). Exit criteria should be set and evaluated for each
test level. Evaluating exit criteria has the following major tasks:
 Check test logs against the exit criteria specified in test planning:
We look to see what evidence we have for which tests have been
executed and checked, and what defects have been raised, fixed,
confirmation tested, or are outstanding.
 Assess if more tests are needed or if the exit criteria specified
should be changed: We may need to run more tests if we have not
run all the tests we designed, or if we realize we have not reached
the coverage we expected, or if the risks have increased for the
project. We may need to change the exit criteria to lower them, if the
business and project risks rise in importance and the product or
technical risks drop in importance. Note that this is not easy to do
and must be agreed with stakeholders. The test management tools
and test coverage tools that we'll discuss in Chapter 6 help us with
this assessment.
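Checking the log against the planned exit criteria can be sketched as a simple evaluation over the records. The two criteria below (all designed tests executed, no outstanding high-priority defects) and the field names are illustrative examples, not a complete set:

```python
# Sketch: evaluating a test log against two example exit criteria set
# during planning. Real projects typically combine several such checks.

def exit_criteria_met(test_log, open_high_priority_defects):
    all_executed = all(t["executed"] for t in test_log)
    return all_executed and open_high_priority_defects == 0

log = [
    {"id": "TC-01", "executed": True},
    {"id": "TC-02", "executed": True},
]
print(exit_criteria_met(log, open_high_priority_defects=0))  # True
print(exit_criteria_met(log, open_high_priority_defects=2))  # False
```

If the evaluation fails, the choices are exactly those in the text: run more tests, or renegotiate the criteria with stakeholders.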
 Write a test summary report for stakeholders: It is not enough that
the testers know the outcome of the test. All the stakeholders need
to know what testing has been done and the outcome of the testing,
in order to make informed decisions about the software.
Cont…
During test closure activities, we collect data from completed test
activities to consolidate experience, including checking and filing
testware, and analyzing facts and numbers. We may need to do this
when software is delivered. We also might close testing for other
reasons, such as when we have gathered the information needed
from testing, when the project is cancelled, when a particular
milestone is achieved, or when a maintenance release or update is
done. Test closure activities include the following major tasks:
 Check which planned deliverables we actually delivered and ensure all incident reports have
been resolved through defect repair or deferral. For deferred defects, in other words those that
remain open, we may request a change in a future release. We document the acceptance or
rejection of the software system.
 Finalize and archive testware, such as scripts, the test environment, and any other test
infrastructure, for later reuse. It is important to reuse whatever we can of our testware; we will
inevitably carry out maintenance testing, and it saves time and effort if our testware can be
pulled out from a library of existing tests. It also allows us to compare the results of testing
between software versions.
 Hand over testware to the maintenance organization who will
support the software and make any bug fixes or maintenance
changes, for use in confirmation testing and regression testing.
This group may be separate from the people who build and
test the software; the maintenance testers are one of the
customers of the development testers; they will use the library of
tests.
 Evaluate how the testing went and analyze lessons learned for
future releases and projects. This might include process
improvements for the software development life cycle as a
whole and also improvement of the test processes. If you reflect
on Figure 1.3 again, we might use the test results to set targets
for improving reviews and testing with a goal of reducing the
number of defects in live use. We might look at the number of
incidents which were test problems, with the goal of improving
the way we design, execute and check our tests or the
management of the test environments and data. This helps us
make our testing more mature and cost-effective for the
organization.
Fundamental test process 1

More Related Content

PPTX
Fundamental test process (andika m)
PPTX
Fundamental test process
PPTX
Fundamental test process
PPTX
Fundamental test process
PPTX
Fundamental test process
PPTX
2 . fundamental test process
PPTX
Fundamental test process
PPTX
Fundamental Test Process
Fundamental test process (andika m)
Fundamental test process
Fundamental test process
Fundamental test process
Fundamental test process
2 . fundamental test process
Fundamental test process
Fundamental Test Process

What's hot (20)

PPTX
Fundamentals of testing
PDF
Test Documentation Based On Ieee829 155261
PPT
Test data documentation ss
PPTX
Fundamental Test Process
PPT
Test planning
PPTX
Fundamental test process
DOC
Testplan
PPTX
Qa documentation pp
PPTX
Test design
PPTX
Test Planning_Arsala
PDF
Ieee 829 1998-a3
PDF
TestPlan for IIT website
PDF
02 test planning
PPT
Test Planning
DOC
02 software test plan template
PDF
Test plan
PPT
Tiara Ramadhani - Program Studi S1 Sistem Informasi - Fakultas Sains dan Tekn...
PPTX
Agile Bureaucracy
PPTX
Fundamental test process
PPTX
Test planning
Fundamentals of testing
Test Documentation Based On Ieee829 155261
Test data documentation ss
Fundamental Test Process
Test planning
Fundamental test process
Testplan
Qa documentation pp
Test design
Test Planning_Arsala
Ieee 829 1998-a3
TestPlan for IIT website
02 test planning
Test Planning
02 software test plan template
Test plan
Tiara Ramadhani - Program Studi S1 Sistem Informasi - Fakultas Sains dan Tekn...
Agile Bureaucracy
Fundamental test process
Test planning
Ad

Similar to Fundamental test process 1 (20)

PPTX
Fundamental test process
PPTX
Fundamental test process
PPTX
Fundamental test process
PPTX
Fundamental test process (TESTING IMPLEMENTATION SYSTEM)
PPTX
Fundamental Test Process
PPTX
FUNDAMENTAL TEST PROCESS
PPTX
Fundamental test process_rendi_saputra_infosys_USR
PPTX
Fundamentaltestprocess windirohmaheny11453205427 kelase
PPTX
Fundamental test process endang
PPTX
Fundamental test process hazahara
PPTX
Software Testing 2/5
PPTX
Advanced quality control
PPTX
Test design techniques
PPTX
Test design techniques
DOC
Question ISTQB foundation 3
DOC
Ôn tập kiến thức ISTQB
PPTX
Fundamentals of testing
PPT
Test planning.ppt
PPTX
Test Plan.pptx
PPTX
Test analysis identifying test conditions
Fundamental test process
Fundamental test process
Fundamental test process
Fundamental test process (TESTING IMPLEMENTATION SYSTEM)
Fundamental Test Process
FUNDAMENTAL TEST PROCESS
Fundamental test process_rendi_saputra_infosys_USR
Fundamentaltestprocess windirohmaheny11453205427 kelase
Fundamental test process endang
Fundamental test process hazahara
Software Testing 2/5
Advanced quality control
Test design techniques
Test design techniques
Question ISTQB foundation 3
Ôn tập kiến thức ISTQB
Fundamentals of testing
Test planning.ppt
Test Plan.pptx
Test analysis identifying test conditions
Ad

Recently uploaded (20)

PPTX
SOPHOS-XG Firewall Administrator PPT.pptx
PDF
Building Integrated photovoltaic BIPV_UPV.pdf
PDF
Encapsulation_ Review paper, used for researhc scholars
PDF
Web App vs Mobile App What Should You Build First.pdf
PDF
From MVP to Full-Scale Product A Startup’s Software Journey.pdf
PDF
Video forgery: An extensive analysis of inter-and intra-frame manipulation al...
PDF
DP Operators-handbook-extract for the Mautical Institute
PPTX
Group 1 Presentation -Planning and Decision Making .pptx
PPTX
TLE Review Electricity (Electricity).pptx
PDF
Enhancing emotion recognition model for a student engagement use case through...
PPTX
A Presentation on Touch Screen Technology
PPTX
Tartificialntelligence_presentation.pptx
PDF
NewMind AI Weekly Chronicles - August'25-Week II
PDF
Approach and Philosophy of On baking technology
PDF
Accuracy of neural networks in brain wave diagnosis of schizophrenia
PDF
WOOl fibre morphology and structure.pdf for textiles
PDF
Microsoft Solutions Partner Drive Digital Transformation with D365.pdf
PDF
Hindi spoken digit analysis for native and non-native speakers
PDF
Zenith AI: Advanced Artificial Intelligence
PDF
A comparative study of natural language inference in Swahili using monolingua...
SOPHOS-XG Firewall Administrator PPT.pptx
Building Integrated photovoltaic BIPV_UPV.pdf
Encapsulation_ Review paper, used for researhc scholars
Web App vs Mobile App What Should You Build First.pdf
From MVP to Full-Scale Product A Startup’s Software Journey.pdf
Video forgery: An extensive analysis of inter-and intra-frame manipulation al...
DP Operators-handbook-extract for the Mautical Institute
Group 1 Presentation -Planning and Decision Making .pptx
TLE Review Electricity (Electricity).pptx
Enhancing emotion recognition model for a student engagement use case through...
A Presentation on Touch Screen Technology
Tartificialntelligence_presentation.pptx
NewMind AI Weekly Chronicles - August'25-Week II
Approach and Philosophy of On baking technology
Accuracy of neural networks in brain wave diagnosis of schizophrenia
WOOl fibre morphology and structure.pdf for textiles
Microsoft Solutions Partner Drive Digital Transformation with D365.pdf
Hindi spoken digit analysis for native and non-native speakers
Zenith AI: Advanced Artificial Intelligence
A comparative study of natural language inference in Swahili using monolingua...

Fundamental test process 1

  • 1. FUNDAMENTAL TEST PROCESS By Graham et.al (2011) UNIVERSITAS SULTAN SYARIF KASIM RIAU Bima Alvamiko Pratama
  • 2. In this section, we will describe the fundamental test process and activities. These start with test planning and continue through to test closure. For each part of the test process, we'll discuss the main tasks of each test activity. In this section, you'll also encounter the glossary terms confirmation testing, exit criteria, incident, regression testing, test basis, test condition, test coverage, test data, test execution, test log, test plan, test strategy, test summary report and testware. Introduction
  • 3. As we have seen, although executing tests is important, we also need a plan of action and a report on the outcome of testing. Project and test plans should include time to be spent on planning the tests, designing test cases, preparing for execution and evaluating status. The idea of a fundamental test process for all levels of test has developed over the years. Whatever the level of testing, we see the same type of main activities happening, although there may be a different amount of formality at the different levels, for example, component tests might be carried out less formally than system tests in most organizations with a less documented test process. The decision about the level of formality of the processes will depend on the system and software context and the level of risk associated with the software. So we can divide the activities within the fundamental test process into the following basic steps:  planning and control;  analysis and design;  implementation and execution;  evaluating exit criteria and reporting;  test closure activities. Cont…
  • 4. During test planning, we make sure we understand the goals and objectives of the customers, stakeholders, and the project, and the risks which testing is intended to address. This will give us what is sometimes called the mission of testing or the test assignment. Based on this understanding, we set the goals and objectives for the testing itself, and derive an approach and plan for the tests, including specification of test activities. To help us we may have organization or program test policies and a test strategy. Test policy gives rules for testing, e.g. 'we always review the design documents'; test strategy is the overall high-level approach, e.g. 'system testing is carried out by an independent team reporting to the program quality manager. It will be risk-based and proceeds from a product (quality) risk analysis' (see Chapter 5). If policy and strategy are defined already they drive our planning but if not we should ask for them to be stated and defined. Test planning has the following major tasks, given approxi- mately in order, which help us build a test plan:
  • 5.  Determine the scope and risks and identify the objectives of testing: we consider what software, components, systems or other products are in scope for testing; the business, product, project and technical risks which need to be addressed; and whether we are testing primarily to uncover defects, to show that the software meets requirements, to demonstrate that the system is fit for purpose or to measure the qualities and attributes of the software.  Determine the test approach (techniques, test items, coverage, identifying and interfacing with the teams involved in testing, testware): we consider how we will carry out the testing, the techniques to use, what needs testing and how extensively (i.e. what extent of coverage). We'll look at who needs to get involved and when (this could include developers, users, IT infrastruc ture teams); we'll decide what we are going to produce as part of the testing (e.g. testware such as test procedures and test data). This will be related to the requirements of the test strategy.  Implement the test policy and/or the test strategy: we mentioned that there may be an organization or program policy and strategy for testing. If this is the case, during our planning we must ensure that what we plan to do adheres to the policy and strategy or we must have agreed with stakeholders, and documented, Cont…
  • 6.  Determine the required test resources (e.g. people, test environment, PCs): from the planning we have already done we can now go into detail; we decide on our team make-up and we also set up all the supporting hardware and software we require for the test environment.  Schedule test analysis and design tasks, test implementation, execution and evaluation: we will need a schedule of all the tasks and activities, so that we can track them and make sure we can complete the testing on time.  Determine the exit criteria: we need to set criteria such as coverage criteria (for example, the percentage of statements in the software that must be executed during testing) that will help us track whether we are completing the test activ ities correctly. They will show us which tasks and checks we must complete for a particular level of testing before we can say that testing is finished. Cont…
  • 7. Test analysis and design is the activity where general testing objectives are trans- formed into tangible test conditions and test designs. During test analysis and design, we take general testing objectives identified during planning and build test designs and test procedures (scripts). You'll see how to do this in Chapter 4. Test analysis and design has the following major tasks, in approximately the following order:  Review the test basis (such as the product risk analysis, requirements, architecture, design specifications, and interfaces), examining the specifications for the software we are testing. We use the test basis to help us build our tests. We can start designing certain kinds of tests (called black-box tests) before the code exists, as we can use the test basis documents to understand what the system should do once built. As we study the test basis, we often identify gaps and ambiguities in the specifications, because we are trying to identify precisely what happens at each point in the system, and this also pre- vents defects appearing in the code.  Identify test conditions based on analysis of test items, their specifications, and what we know about their behavior and structure. This gives us a highlevel list of what we are interested in testing. If we return to our driving example, the examiner might have a list of test conditions including 'behav ior at road junctions', 'use of indicators', 'ability to maneuver the car' and so on. In testing, we use the test techniques to help us define the test condi tions. From this we can start to identify the type of generic test data we might need.
  • 8.  Design the tests (you'll see how to do this in Chapter 4), using techniques to help select representative tests that relate to particular aspects of the soft ware which carry risks or which are of particular interest, based on the test conditions and going into more detail. For example, the driving examiner might look at a list of test conditions and decide that junctions need to include T-junctions, cross roads and so on. In testing, we'll define the test case and test procedures.  Evaluate testability of the requirements and system. The requirements may be written in a way that allows a tester to design tests; for example, if the per formance of the software is important, that should be specified in a testable way. If the requirements just say 'the software needs to respond quickly enough' that is not testable, because 'quick enough' may mean different things to different people. A more testable requirement would be 'the soft ware needs to respond in 5 seconds with 20 people logged on'. The testabil ity of the system depends on aspects such as whether it is possible to set up the system in an environment that matches the operational environment and whether all the ways the system can be configured or used can be understood and tested. For example, if we test a website, it may not be possible to iden tify and recreate all the configurations of hardware, operating system, browser, connection, firewall and other factors that the website might encounter.  Design the test environment set-up and identify any required infrastructure and tools. This includes testing tools (see Chapter 6) and support tools such as spreadsheets, word processors, project planning tools, and non-IT tools and equipment - everything we need to carry out our work. Cont…
  • 9. During test implementation and execution, we take the test conditions and make them into test cases and testware and set up the test environment. This means that, having put together a high-level design for our tests, we now start to build them. We transform our test conditions into test cases and procedures, other testware such as scripts for automation. We also need to set up an envi- ronment where we will run the tests and build our test data. Setting up environ- ments and data often involves significant time and effort, so you should plan and monitor this work carefully. Test implementation and execution have the following major tasks, in approximately the following order:
  • 10.  Implementation:  Develop and prioritize our test cases, using the techniques you'll see in Chapter 4, and create test data for those tests. We will also write instructions for carrying out the tests (test procedures). For the driving examiner this might mean changing the test condition 'junc tions' to 'take the route down Mayfield Road to the junction with Summer Road and ask the driver to turn left into Summer Road and then right into Green Road, expecting that the driver checks mirrors, signals and maneuvers correctly, while remaining aware of other road users.' We may need to automate some tests using test harnesses and automated test scripts. We'll talk about automation more in Chapter 6.  Create test suites from the test cases for efficient test execution. A test suite is a logical collection of test cases which naturally work together. Test suites often share data and a common high-level set of objectives. We'll also set up a test execution schedule.  Implement and verify the environment. We make sure the test envi ronment has been set up correctly, possibly even running specific tests on it. Cont…
  • 11. Execution:
  - Execute the test suites and individual test cases, following our test procedures. We might do this manually or by using test execution tools, according to the planned sequence.
  - Log the outcome of test execution and record the identities and versions of the software under test, test tools and testware. We must know exactly what tests we used against what version of the software; we must report defects against specific versions; and the test log we keep provides an audit trail.
  - Compare actual results (what happened when we ran the tests) with expected results (what we anticipated would happen).
  - Where there are differences between actual and expected results, report discrepancies as incidents. We analyze them to gather further details about the defect, report additional information on the problem, identify the causes of the defect, and differentiate between problems in the software and other products under test and any defects in test data, in test documents, or mistakes in the way we executed the test. We would want to log the latter in order to improve the testing itself.
  - Repeat test activities as a result of action taken for each discrepancy. We need to re-execute tests that previously failed in order to confirm a fix (confirmation testing or re-testing). We execute corrected tests and suites if there were defects in our tests. We test corrected software again to ensure that the defect was indeed fixed correctly (confirmation testing), that the programmers did not introduce defects in unchanged areas of the software, and that fixing a defect did not uncover other defects (regression testing). Cont…
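The logging, comparison and incident-raising steps above can be sketched as one small execution loop. All names here (`TestLogEntry`, the test IDs, the version string) are invented for illustration:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TestLogEntry:
    """One audit-trail record: which test, against which software version,
    what we expected and what actually happened."""
    test_id: str
    software_version: str
    expected: object
    actual: object
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    @property
    def passed(self):
        return self.expected == self.actual

def run_and_log(test_id, software_version, procedure, expected, log, incidents):
    """Execute one test procedure, log the outcome with the version under
    test, and raise an incident record when actual and expected differ."""
    actual = procedure()
    entry = TestLogEntry(test_id, software_version, expected, actual)
    log.append(entry)
    if not entry.passed:
        incidents.append(f"{test_id} on v{software_version}: "
                         f"expected {expected!r}, got {actual!r}")
    return entry

log, incidents = [], []
run_and_log("TC-01", "1.4.2", lambda: 2 + 2, 4, log, incidents)            # passes
run_and_log("TC-02", "1.4.2", lambda: "OK".lower(), "OK", log, incidents)  # fails
print(len(log), len(incidents))  # 2 True
```

Because every entry carries the software version, the same incident list supports confirmation testing later: we re-run exactly the failed tests against the fixed version and compare.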
  • 12. Evaluating exit criteria is the activity where test execution is assessed against the defined objectives. This should be done for each test level, as for each we need to know whether we have done enough testing. Based on our risk assessment, we'll have set criteria against which we'll measure 'enough'. These criteria vary for each project and are known as exit criteria. They tell us whether we can declare a given testing activity or level complete. We may have a mix of coverage or completion criteria (which tell us about test cases that must be included, e.g. 'the driving test must include an emergency stop' or 'the software test must include a response measurement'), acceptance criteria (which tell us how we know whether the software has passed or failed overall, e.g. 'only pass the driver if they have completed the emergency stop correctly' or 'only pass the software for release if it meets the priority 1 requirements list') and process exit criteria (which tell us whether we have completed all the tasks we need to do, e.g. 'the examiner/tester has not finished until they have written and filed the end of test report'). Exit criteria should be set and evaluated for each test level. Evaluating exit criteria has the following major tasks:
  • 13.
  - Check test logs against the exit criteria specified in test planning: We look to see what evidence we have for which tests have been executed and checked, and what defects have been raised, fixed, confirmation tested, or are outstanding.
  - Assess if more tests are needed or if the exit criteria specified should be changed: We may need to run more tests if we have not run all the tests we designed, if we realize we have not reached the coverage we expected, or if the risks have increased for the project. We may need to lower the exit criteria if the business and project risks rise in importance and the product or technical risks drop in importance. Note that this is not easy to do and must be agreed with stakeholders. The test management tools and test coverage tools that we'll discuss in Chapter 6 help us with this assessment.
  - Write a test summary report for stakeholders: It is not enough that the testers know the outcome of the test. All the stakeholders need to know what testing has been done and the outcome of the testing, in order to make informed decisions about the software. Cont…
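Checking a test log against a mix of coverage, acceptance and process-style criteria can be sketched as follows; the log entries, test IDs and criterion names are illustrative assumptions, not the book's notation:

```python
def evaluate_exit_criteria(log, criteria):
    """Compare a test log against exit criteria and report whether this
    test level can be declared complete."""
    executed = {entry["test_id"] for entry in log}
    passed = {entry["test_id"] for entry in log if entry["passed"]}
    results = {
        # Completion criterion: every required test case was executed.
        "coverage_met": criteria["required_tests"] <= executed,
        # Acceptance criterion: all priority 1 tests passed.
        "priority1_passed": criteria["priority1_tests"] <= passed,
        # No outstanding failures awaiting fix or deferral.
        "no_open_incidents": all(entry["passed"] for entry in log),
    }
    results["exit"] = all(results.values())
    return results

log = [
    {"test_id": "TC-01", "passed": True},
    {"test_id": "TC-02", "passed": True},
    {"test_id": "TC-03", "passed": False},
]
criteria = {
    "required_tests": {"TC-01", "TC-02", "TC-03"},
    "priority1_tests": {"TC-01"},
}
report = evaluate_exit_criteria(log, criteria)
print(report["coverage_met"], report["exit"])  # True False
```

Here coverage is met and the priority 1 test passed, but an open failure still blocks exit, which is the kind of evidence a test summary report would present to stakeholders.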
  • 14. During test closure activities, we collect data from completed test activities to consolidate experience, including checking and filing testware, and analyzing facts and numbers. We may need to do this when software is delivered. We might also close testing for other reasons, such as when we have gathered the information needed from testing, when the project is cancelled, when a particular milestone is achieved, or when a maintenance release or update is done. Test closure activities include the following major tasks:
  - Check which planned deliverables we actually delivered and ensure all incident reports have been resolved through defect repair or deferral. For deferred defects, in other words those that remain open, we may request a change in a future release. We document the acceptance or rejection of the software system.
  - Finalize and archive testware, such as scripts, the test environment, and any other test infrastructure, for later reuse. It is important to reuse whatever we can of testware; we will inevitably carry out maintenance testing, and it saves time and effort if our testware can be pulled out from a library of existing tests. It also allows us to compare the results of testing between software versions.
  • 15.
  - Hand over testware to the maintenance organization who will support the software and make any bug fixes or maintenance changes, for use in confirmation testing and regression testing. This group may be a separate group from the people who build and test the software; the maintenance testers are one of the customers of the development testers; they will use the library of tests.
  - Evaluate how the testing went and analyze lessons learned for future releases and projects. This might include process improvements for the software development life cycle as a whole, and also improvement of the test processes. If you reflect on Figure 1.3 again, we might use the test results to set targets for improving reviews and testing with a goal of reducing the number of defects in live use. We might look at the number of incidents which were test problems, with the goal of improving the way we design, execute and check our tests, or the management of the test environments and data. This helps us make our testing more mature and cost-effective for the Cont…