SlideShare a Scribd company logo
121
LESSON 8: SOFTWARE TESTING
Contents
8.0.Aims and Objectives
8.1.Introduction
8.2.Levels of Testing
8.3.Unit Testing
8.4.System Testing
8.5.Acceptance test
8.6.White Box Testing
8.7.Black Box Testing
8.8.Testing for Specialized Environments
8.9.Formal Verification
8.10. Debugging
8.11. Review questions
8.12. Let us Sum up
8.13. Lesson End Activities
8.14. Points for Discussion
8.15. References
8.0. AIMS AND OBJECTIVES
• To understand the types of testing done on software
• To understand the different approaches to testing
• To do testing effectively by designing proper test cases
8.1. INTRODUCTION
In a software development project, errors can be injected at any stage
during the development. Techniques are available for detecting and eliminating
errors that originate in each phase. However, some requirement errors and
design errors are likely to remain undetected. Such errors will ultimately be
reflected in the code. Since code is the only product that can be executed and
whose actual behavior can be observed, testing the code forms an important
part of the software development activity.
122
8.1.1. Software testing process
Software testing is the process used to help identify the correctness,
completeness, security and quality of developed computer software. With that in
mind, testing can never completely establish the correctness of arbitrary
computer software. In computability theory, a field of computer science and
neat mathematical proof concludes that it is impossible to solve the halting
problem, the question of whether an arbitrary computer program will enter an
infinite loop, or halt and produce output. In other words, testing is nothing but
criticism or comparison that is comparing the actual value with expected one.
Testing presents an interesting variance for the software engineer. The engineer
creates a series a series of test cases that are intended to demolish the software
that has been built. In fact, testing is the only activity in the software
engineering process that could be viewed as “destructive” rather than
“constructive”. Testing performs a very critical role for quality assurance and for
ensuring the reliability of the software.
Glen Meyers [1979] states a number of rules that can serve well as testing
objectives:
1. Testing is the process of executing a program with the intent of finding an
error.
2. A good test case is one that has a high probability of finding an as-yet
undiscovered error.
3. A successful test is one that uncovers an as-yet undiscovered error.
The common viewpoint of testing is that a successful test is one in which no
errors are found. But from the above rules, it can be inferred that a successful
test is one that systematically uncovers different classes of errors and that too
with a minimum time and effort. The more the errors detected, the more
successful is the test.
What testing can do?
1. It can uncover errors in the software
2. It can demonstrate that the software behaves according to the specification
3. It can show that the performance requirements have been met
4. It can prove to be a good indication of software reliability and software
quality
What testing cannot do?
Testing cannot show the absence of defects, it can only show that software
errors are present.
Davis [1995] suggests a set of testing principles as given below:
123
1. All tests should be traceable to the customer requirements.
From the customer’s point of view, when the program fails to meet
requirements, it is considered to be a severe defect. Tests are to be
designed to detect such defects.
2. Tests should be planned long before testing begins.
It is commonly misunderstood that testing begins only after coding is
complete. Testing is to be carried out all throughout the software
development life cycle. Test planning can begin as soon as the
requirements model is complete.
3. The Pareto principle applies to software testing.
The Pareto principle implies that 80% of all errors uncovered during
testing will likely be traceable to 20% of all program modules. Hence the
idea is that those 20% suspect modules are to be isolated and thoroughly
tested.
4. Testing should begin “in the small” and progress towards testing “in the
large”.
Testing activity normally begins with the testing of individual program
modules and then progresses towards integrated clusters (group) of
modules, and ultimately the entire system.
5. Exhaustive testing is not possible.
It is highly impossible to test all possible paths in a program because
even for a moderately sized program, the number of path permutations is
exceptionally large. However in practice, it is possible to adequately cover
program logic.
6. To be most effective, testing should be conducted by an independent third
party.
By “most effective”, we mean testing that has the highest probability of
finding errors. In general, the software engineer who created the system
is not the best person to conduct all tests for the software and hence an
independent third party is the obvious choice.
Kaner, Falk, and Nguyen [1993] suggest the following attributes of a good test:
1. A good test has a high probability of finding an error.
2. A good test is not redundant (unnecessary).
Since testing time and resources are limited, there is no point in
conducting a test that has the same purpose as another test. For
example, a library routine is developed to find the factorial of a given
number. To test the routine in order to uncover errors, the test input
124
may be chosen to be 3. If the routine produces the correct result, then it
is better to test for the input 50, a larger input when compared to 3,
instead of testing with the input 4, because if the routine behaved
correctly for the input 3 there is every possibility that it will also behave
correctly for the input 4.
3. A good test should be the best of kind.
In a group of tests that have a similar intent, the test that has the
highest likelihood of uncovering a whole class of errors should be used.
This is because the testing time and resources are limited.
4. A good test should be neither too simple nor too complex.
Too simple tests may fail to uncover errors. At the same time, it is
possible to combine a series of tests into one complex test, which may
mask some errors. Hence each test should be executed separately.
In carrying out testing we will wish to address some or all of the following
questions:
• How is functional validity tested?
• What classes of input will make good test cases?
• Is the system particularly sensitive to certain input values?
• How are the boundaries of a data class isolated?
• What data rates and data volume can the system tolerate?
• What effect will specific combinations of data have on system operation?
• Which activities are necessary for the systematic testing of a software
system?
• What is to be tested?
 System specifications,
 Individual modules,
 Connections between modules,
 Integration of the modules in the overall system
 Acceptance of the software product.
8.1.2. Dynamic testing
 For dynamic testing the test objects are executed or simulated.
 Dynamic testing is an imperative process in the software life cycle. Every
procedure, every module and class, every subsystem and the overall
system must be tested dynamically, even if static tests and program
verifications have been carried out.
126
 Identify and catalog reusable modules and components
 Identify areas where programmers and developers need training
8.2. LEVELS OF TESTING
The different levels of testing are used to validate the software at different
levels of the development process.
Fig. 8.1 shows the different testing phases and the corresponding development
phases that it validates. Unit Testing is done to validate the code written and is
usually done by the author of the code. Integration testing is done to validate
the design strategies of the software. System testing is done to ensure that all
the functional and non functional requirements of the software are met.
Acceptance testing is then done by the customer to ensure that the software
works well according to customer specification.
8.3. UNIT TESTING
Unit testing is essentially for verification of the code produced during the
coding phase, and hence the goal is to test the internal logic of the modules.
The unit test is normally white box oriented, and the step can be conducted in
parallel for multiple modules. Unit testing is simplified when a module with
high cohesion is designed. When only one function is addressed by a module,
the number of test cases is reduced and errors can easily be uncovered.
Code
Design
Requirement
Analysis
System Engineering
System
Testing
Acceptance
Testing
Fig. 8.1. Levels of testing
Unit Test
Integrated Testing
127
8.3.1. Unit Test Considerations
The tests that occur as part of unit testing are listed below:
• Interface – The module interface is tested to ensure that information
properly enters into and out of the program unit under test. If data does
not enter or exit properly, then all other tests are unresolved.
• Local data structures – The local data structure is examined to ensure
that data stored temporarily maintains its integrity during all steps in an
algorithm’s execution.
• Boundary conditions – Boundary conditions are tested to ensure that
the module operates properly at boundaries established to limit or
restrict processing.
• Independent paths – All independent paths through the control
structure are exercised to ensure that all statements in a module have
been executed at least once.
• Error handling paths – All error handling paths are tested.
Figure 8.2. Unit Test Environment
8.3.2. Unit Test Procedures
Because a module is not a standalone program, driver and/or stub
software must be developed for each unit test. A driver is nothing more than a
“main program” that accepts test case data, passes such data to the module to
be tested, and prints relevant results. Stubs serve to replace modules that are
subordinate to the module to be tested. A stub or “dummy subprogram” uses
the subordinate module’s interface, may do minimal data manipulation, prints
128
verification of entry, and returns. Drivers and stubs represent overhead.
Because, both are software modules that must be developed to aid in testing
but not delivered with the final software product.
8.3.3. Integration Testing
Integration testing involves checking for errors when units are put
together as described in the design specifications. While integrating, software
can be thought of as a system consisting of several levels. A unit that makes a
function call to another unit is considered to be one level above it. Fig. 8.3
There are several approaches for integration:
a. Bottom-Up
The bottom-up approach integrates the units at the lowest level (bottom
level) first, and then the units at the next level above it an so on till the topmost
level is integrated. When integrating, each interface is tested to see if it works
properly together.
Method
 First those operations are tested that require no other program
components; then their integration to a module is tested.
 After the module test the integration of multiple (tested) modules to a
subsystem is tested, until finally the integration of the subsystems, i.e.,
the overall system can be tested.
The advantages
 The advantages of bottom-up testing prove to be the drawbacks of top-
down testing (and vice versa).
 The bottom-up test method is solid and proven. The objects to be tested
are known in full detail. It is often simpler to define relevant test cases
and test data.
Unit
Unit Unit
Unit Unit
Unit Unit
Fig. 8.3. Functional Unit
129
 The bottom-up approach is psychologically more satisfying because the
tester can be certain that the foundations for the test objects have been
tested in full detail.
The drawbacks
 The characteristics of the finished product are only known after the
completion of all implementation and testing, which means that design
errors in the upper levels are detected very late.
 Testing individual levels also cause high costs for providing a suitable test
environment.
b. Top-Down
Top-Down integration starts from the units at the top level first and
works downwards integrating the units at a lower level. While integrating if a
unit in the lower level is not available a replica of the lower level unit is created
which imitates its behavior.
Method
 The control module is implemented and tested first.
 Imported modules are represented by substitute modules.
 Surrogates have the same interfaces as the imported modules and simulate
their input/output behavior.
 After the test of the control module, all other modules of the software
systems are tested in the same way; i.e., their operations are represented
by surrogate procedures until the development has progressed enough to
allow implementation and testing of the operations.
 The test advances stepwise with the implementation. Implementation and
phases merge, and the integration test of subsystems become unnecessary.
The advantages
 Design errors are detected as early as possible, saving development time
and costs because corrections in the module design can be made before
their implementation.
 The characteristics of a software system are evident from the start, which
enables a simple test of the development state and the acceptance by the
user.
 The software system can be tested thoroughly from the start with test
cases without providing (expensive) test environments.
130
The drawbacks
 Strict top-down testing proves extremely difficult because designing
usable surrogate objects can prove very complicated, especially for
complex operations.
 Errors in lower hierarchy levels are hard to localize.
c. Sandwich
Sandwich integration is an attempt to combine the advantages of both
the above approaches. A “target” layer is identified somewhere in between and
the integration converges on the layer using a top-down approach above it and
a bottom-up approach below it. Identifying the target layers must be done by
people with good experience in similar projects or else it might leads to serious
delays.
d. Big-Bang
A different and somewhat simplistic approach is the big-bang approach,
which consists of putting all unit-tested modules together and testing them in
one go. Chances are that it will not work! This is not a very feasible approach
as it will be very difficult to identify interfacing issues.
8.4. SYSTEM TESTING
System testing is testing conducted on a complete, integrated system to
evaluate the system's compliance with its specified requirements. Software is
only one element of a larger computer-based system. The software developed is
ultimately incorporated with other system elements such as new hardware,
information etc., and a series of system integration and validation tests are
conducted. These tests are not conducted by the software developer alone.
System testing is actually a series of different tests whose primary purpose is to
fully exercise the computer-based system. Although each test has a different
purpose, all work to verify that all system elements have been properly
integrated and perform allocated functions.
System testing falls within the scope of Black box testing, and as such,
should require no knowledge of the inner design of the code or logic.
8.4.1. Alpha and Beta Test
Alpha testing and Beta testing are sub-categories of System testing. If
software is developed as a product (example: Microsoft Word) which is intended
to be used by many end-users, it is not practical to perform formal acceptance
tests with each end-user. In this situation most software products are tested
using the process called alpha and beta testing to allow the end-user to find
defects.
The Alpha test is conducted in the developer’s environment by the end-
users. The environment might be simulated, with the developer and the
131
typical end-user present for the testing. The end-user uses the software
and records the errors and problems. Alpha test is conducted in a
controlled environment.
The Beta test is conducted in the end-user’s environment. The
developer is not present for the beta testing. The beta testing is always
in the real-world environment which is not controlled by the developer.
The end-user records the problems and reports it back to the developer
at intervals. Based on the results of the beta testing the software is
made ready for the final release to the intended customer base.
As a rule, System testing takes, as its input, all of the integrated
software components that have successfully passed Integration testing and also
the software system itself integrated with any applicable hardware system(s).
The purpose of Integration testing is to detect any inconsistencies between the
software units that are integrated together called assemblages or between any
of the assemblages and hardware. System testing is more of a limiting type of
testing, where it seeks to detect both defects within the inter-assemblages and
also the system as a whole.
8.4.2. Finger Pointing
A classic system testing problem is “finger pointing”. This occurs when
an error is uncovered, and each system element developer blames the other for
the problem. The software engineer should anticipate potential interfacing
problems and do the following:
1) Design error-handling paths that test all information coming from
other elements of the system
2) Conduct a series of tests that simulate bad data or other potential
errors at the software interface
3) Record the results of tests to use as “evidence” if finger pointing does
occur
4) Participate in planning and design of system tests to ensure that
software is adequately tested.
8.4.3. Types of System Tests
The types of system tests for software-based systems are:
a. Recovery Testing
b. Security Testing
c. Stress testing
d. Sensitivity Testing
e. Performance Testing
132
a. Recovery Testing
Recovery testing is a system test that forces the software to fail in a
variety of ways and verifies that recovery is properly performed. If recovery is
automatically performed by the system itself, then re-initialization, check
pointing mechanisms, data recovery, and restart are each evaluated for
correctness. If recovery requires human intervention, the mean time to repair is
evaluated to determine whether it is within acceptable limits.
b. Security Testing
Security testing attempts to verify that protection mechanisms built into
a system will in fact protect it from improper penetration. Penetration spans a
broad range of activities: hackers who attempt to penetrate systems for sport;
unhappy employees who attempt to penetrate for revenge; and dishonest
individuals who attempt to penetrate for illegal personal gain.
c. Stress Testing
Stress tests are designed to tackle programs with abnormal situations.
Stress testing executes a system in a manner that demands resources in
abnormal quantity, frequency, or volume. For example,
1) Special tests may be designed that generate 10 interrupts per second,
when one or two is the average rate
2) Input data rates may be increased by an order of magnitude to determine
how input functions will respond
3) Test cases that require maximum memory or other resources may be
executed
4) Test cases that may cause thrashing in a virtual operating system may
be designed
5) Test cases that may cause excessive hunting for disk resident data may
be created.
d. Sensitivity Testing
A variation of stress testing is a technique called sensitivity testing. In
some situations a very small range of data contained within the bounds of valid
data for a program may cause extreme and even erroneous processing or
profound performance degradation. Sensitivity testing attempts to uncover data
combinations within valid input classes that may cause instability or improper
processing.
e. Performance Testing
Performance testing is designed to test run time performance (speed and
response time) of software within the context of an integrated system. It occurs
throughout all steps in the testing process. Performance tests are often coupled
with stress testing and often require both hardware and software
133
instrumentation. External instrumentation can monitor execution intervals, log
events as they occur, and sample machine states on a regular basis.
Performance testing can be categorized into the following:
 Load Testing is conducted to check whether the system is capable of
handling an anticipated load. Here, Load refers to the number of
concurrent user accessing the system. Load testing is used to determine
whether the system is capable of handling various activities performed
concurrently by different users.
 Endurance testing deals with the reliability of the system. This type of
testing is conducted for a longer duration to find out the health of the
system in terms of its consistency. Endurance testing is conducted on
either a normal load, or stress load. However, the duration of the test is
long.
 Stress testing helps to identify the number of users the system can
handle at a time before breaking down or degrading severely. Stress
testing goes one step beyond the load testing and identifies the system’s
capability to handle the peak load.
 Spike testing is conducted to stress the system suddenly for a short
duration. This testing ensures whether the system will be a stable and
responsive under an unexpected rise in load.
8.4.4. Regression Testing
This is an important aspect of testing-ensuring that when an error is
fixed in a system, the new version of the system does not fail any test that the
older version passed. Regression testing consists of running the corrected
system against tests which the program had already passed successfully. This
is to ensure that in the process of modifying the existing system, the original
functionality of the system was not disturbed. This is particularly important in
maintenance project, where the likelihood of making changes can inadvertently
affect the program’s behavior.
Maintenance projects require enhancement or updating of the existing
system; enhancements are introduction of new features to the software and
might be released in different versions. Whenever a version is released,
regression testing should be done on the system to ensure that the existing
features have not been disturbed.
8.5. ACCEPTANCE TEST
Acceptance testing is the process of testing the entire system, with the
completed software as part of it. This is done to ensure that all the
requirements that the customer specified are met. Acceptance testing (done
after System testing) is similar to system testing but administered by the
customer to test if the system follow to the agreed upon requirements.

More Related Content

PPTX
SOFTWARE TESTING UNIT-4
PPTX
Software unit4
DOCX
Software engg unit 4
PDF
What is software testing in software engineering?
PDF
What is Testing in Software Engineering?
DOC
Testing
PDF
S440999102
SOFTWARE TESTING UNIT-4
Software unit4
Software engg unit 4
What is software testing in software engineering?
What is Testing in Software Engineering?
Testing
S440999102

Similar to testing.pdf (20)

PPTX
Software testing
PDF
PPTX
Ch 2 Apraoaches Of Software Testing
PPTX
Software Testing Principal
PPT
software testing
PDF
What Is Unit Testing_ A Complete Guide With Examples.pdf
PDF
What Is Unit Testing A Complete Guide With Examples.pdf
PDF
Quality assurance tests
PPTX
SOFTWARE Engineering (SOFTWARE TESTING).pptx
PPTX
PPTX
Software testing methods
PPTX
Software Testing Strategies ,Validation Testing and System Testing.
PPT
Testing chapter updated (1)
DOCX
Manual Testing Interview Questions & Answers.docx
PPT
Chapter 8 Testing Tactics.ppt Software engineering
PDF
Software testing for project report system.
DOCX
Unit 4 Software engineering deatiled notes.docx
PPTX
Software testing
Software testing
Ch 2 Apraoaches Of Software Testing
Software Testing Principal
software testing
What Is Unit Testing_ A Complete Guide With Examples.pdf
What Is Unit Testing A Complete Guide With Examples.pdf
Quality assurance tests
SOFTWARE Engineering (SOFTWARE TESTING).pptx
Software testing methods
Software Testing Strategies ,Validation Testing and System Testing.
Testing chapter updated (1)
Manual Testing Interview Questions & Answers.docx
Chapter 8 Testing Tactics.ppt Software engineering
Software testing for project report system.
Unit 4 Software engineering deatiled notes.docx
Software testing
Ad

More from kumari36 (20)

PPTX
Data Analytics with Data Science Algorithm
PPTX
Transaction of program execution updates
PPTX
ER-Model specification logical structure
DOCX
Virtualize of IO Devices .docx
DOCX
VIRTUALIZATION STRUCTURES TOOLS.docx
DOCX
Operating System extension.docx
DOCX
Levels of Virtualization.docx
PDF
Overview of java Language-3.pdf
PDF
Java Evolution-2.pdf
PDF
Inheritance in Java.pdf
PDF
Constructors in Java (2).pdf
PDF
Chapter4-var.pdf
PDF
softwareMaintenance.pdf
PDF
Debugging.pdf
PDF
QualityAssurance.pdf
PPTX
Prediction of heart disease using machine learning.pptx
PPTX
Fast Wavelet Based Image Characterization for Highly Adaptive Image Retrieval...
PPTX
Presentation1.4.pptx
PPTX
Presentation1.3.pptx
PPTX
Cloud 1.2.pptx
Data Analytics with Data Science Algorithm
Transaction of program execution updates
ER-Model specification logical structure
Virtualize of IO Devices .docx
VIRTUALIZATION STRUCTURES TOOLS.docx
Operating System extension.docx
Levels of Virtualization.docx
Overview of java Language-3.pdf
Java Evolution-2.pdf
Inheritance in Java.pdf
Constructors in Java (2).pdf
Chapter4-var.pdf
softwareMaintenance.pdf
Debugging.pdf
QualityAssurance.pdf
Prediction of heart disease using machine learning.pptx
Fast Wavelet Based Image Characterization for Highly Adaptive Image Retrieval...
Presentation1.4.pptx
Presentation1.3.pptx
Cloud 1.2.pptx
Ad

Recently uploaded (20)

PDF
TR - Agricultural Crops Production NC III.pdf
PDF
Pre independence Education in Inndia.pdf
PDF
BÀI TẬP BỔ TRỢ 4 KỸ NĂNG TIẾNG ANH 9 GLOBAL SUCCESS - CẢ NĂM - BÁM SÁT FORM Đ...
PPTX
Institutional Correction lecture only . . .
PDF
Computing-Curriculum for Schools in Ghana
PDF
Black Hat USA 2025 - Micro ICS Summit - ICS/OT Threat Landscape
PDF
Chapter 2 Heredity, Prenatal Development, and Birth.pdf
PDF
3rd Neelam Sanjeevareddy Memorial Lecture.pdf
PDF
Anesthesia in Laparoscopic Surgery in India
PPTX
1st Inaugural Professorial Lecture held on 19th February 2020 (Governance and...
PPTX
human mycosis Human fungal infections are called human mycosis..pptx
PDF
Microbial disease of the cardiovascular and lymphatic systems
PPTX
Pharma ospi slides which help in ospi learning
PPTX
Microbial diseases, their pathogenesis and prophylaxis
PPTX
IMMUNITY IMMUNITY refers to protection against infection, and the immune syst...
PDF
Insiders guide to clinical Medicine.pdf
PDF
STATICS OF THE RIGID BODIES Hibbelers.pdf
PPTX
Pharmacology of Heart Failure /Pharmacotherapy of CHF
PDF
Classroom Observation Tools for Teachers
PDF
Complications of Minimal Access Surgery at WLH
TR - Agricultural Crops Production NC III.pdf
Pre independence Education in Inndia.pdf
BÀI TẬP BỔ TRỢ 4 KỸ NĂNG TIẾNG ANH 9 GLOBAL SUCCESS - CẢ NĂM - BÁM SÁT FORM Đ...
Institutional Correction lecture only . . .
Computing-Curriculum for Schools in Ghana
Black Hat USA 2025 - Micro ICS Summit - ICS/OT Threat Landscape
Chapter 2 Heredity, Prenatal Development, and Birth.pdf
3rd Neelam Sanjeevareddy Memorial Lecture.pdf
Anesthesia in Laparoscopic Surgery in India
1st Inaugural Professorial Lecture held on 19th February 2020 (Governance and...
human mycosis Human fungal infections are called human mycosis..pptx
Microbial disease of the cardiovascular and lymphatic systems
Pharma ospi slides which help in ospi learning
Microbial diseases, their pathogenesis and prophylaxis
IMMUNITY IMMUNITY refers to protection against infection, and the immune syst...
Insiders guide to clinical Medicine.pdf
STATICS OF THE RIGID BODIES Hibbelers.pdf
Pharmacology of Heart Failure /Pharmacotherapy of CHF
Classroom Observation Tools for Teachers
Complications of Minimal Access Surgery at WLH

testing.pdf

  • 1. 121 LESSON 8: SOFTWARE TESTING Contents 8.0.Aims and Objectives 8.1.Introduction 8.2.Levels of Testing 8.3.Unit Testing 8.4.System Testing 8.5.Acceptance test 8.6.White Box Testing 8.7.Black Box Testing 8.8.Testing for Specialized Environments 8.9.Formal Verification 8.10. Debugging 8.11. Review questions 8.12. Let us Sum up 8.13. Lesson End Activities 8.14. Points for Discussion 8.15. References 8.0. AIMS AND OBJECTIVES • To understand the types of testing done on software • To understand the different approaches to testing • To do testing effectively by designing proper test cases 8.1. INTRODUCTION In a software development project, errors can be injected at any stage during the development. Techniques are available for detecting and eliminating errors that originate in each phase. However, some requirement errors and design errors are likely to remain undetected. Such errors will ultimately be reflected in the code. Since code is the only product that can be executed and whose actual behavior can be observed, testing the code forms an important part of the software development activity.
  • 2. 122 8.1.1. Software testing process Software testing is the process used to help identify the correctness, completeness, security and quality of developed computer software. With that in mind, testing can never completely establish the correctness of arbitrary computer software. In computability theory, a field of computer science and neat mathematical proof concludes that it is impossible to solve the halting problem, the question of whether an arbitrary computer program will enter an infinite loop, or halt and produce output. In other words, testing is nothing but criticism or comparison that is comparing the actual value with expected one. Testing presents an interesting variance for the software engineer. The engineer creates a series a series of test cases that are intended to demolish the software that has been built. In fact, testing is the only activity in the software engineering process that could be viewed as “destructive” rather than “constructive”. Testing performs a very critical role for quality assurance and for ensuring the reliability of the software. Glen Meyers [1979] states a number of rules that can serve well as testing objectives: 1. Testing is the process of executing a program with the intent of finding an error. 2. A good test case is one that has a high probability of finding an as-yet undiscovered error. 3. A successful test is one that uncovers an as-yet undiscovered error. The common viewpoint of testing is that a successful test is one in which no errors are found. But from the above rules, it can be inferred that a successful test is one that systematically uncovers different classes of errors and that too with a minimum time and effort. The more the errors detected, the more successful is the test. What testing can do? 1. It can uncover errors in the software 2. It can demonstrate that the software behaves according to the specification 3. It can show that the performance requirements have been met 4. It can prove to be a good indication of software reliability and software quality What testing cannot do? Testing cannot show the absence of defects, it can only show that software errors are present. Davis [1995] suggests a set of testing principles as given below:
  • 3. 123 1. All tests should be traceable to the customer requirements. From the customer’s point of view, when the program fails to meet requirements, it is considered to be a severe defect. Tests are to be designed to detect such defects. 2. Tests should be planned long before testing begins. It is commonly misunderstood that testing begins only after coding is complete. Testing is to be carried out all throughout the software development life cycle. Test planning can begin as soon as the requirements model is complete. 3. The Pareto principle applies to software testing. The Pareto principle implies that 80% of all errors uncovered during testing will likely be traceable to 20% of all program modules. Hence the idea is that those 20% suspect modules are to be isolated and thoroughly tested. 4. Testing should begin “in the small” and progress towards testing “in the large”. Testing activity normally begins with the testing of individual program modules and then progresses towards integrated clusters (group) of modules, and ultimately the entire system. 5. Exhaustive testing is not possible. It is highly impossible to test all possible paths in a program because even for a moderately sized program, the number of path permutations is exceptionally large. However in practice, it is possible to adequately cover program logic. 6. To be most effective, testing should be conducted by an independent third party. By “most effective”, we mean testing that has the highest probability of finding errors. In general, the software engineer who created the system is not the best person to conduct all tests for the software and hence an independent third party is the obvious choice. Kaner, Falk, and Nguyen [1993] suggest the following attributes of a good test: 1. A good test has a high probability of finding an error. 2. A good test is not redundant (unnecessary). Since testing time and resources are limited, there is no point in conducting a test that has the same purpose as another test. For example, a library routine is developed to find the factorial of a given number. To test the routine in order to uncover errors, the test input
  • 4. 124 may be chosen to be 3. If the routine produces the correct result, then it is better to test for the input 50, a larger input when compared to 3, instead of testing with the input 4, because if the routine behaved correctly for the input 3 there is every possibility that it will also behave correctly for the input 4. 3. A good test should be the best of kind. In a group of tests that have a similar intent, the test that has the highest likelihood of uncovering a whole class of errors should be used. This is because the testing time and resources are limited. 4. A good test should be neither too simple nor too complex. Too simple tests may fail to uncover errors. At the same time, it is possible to combine a series of tests into one complex test, which may mask some errors. Hence each test should be executed separately. In carrying out testing we will wish to address some or all of the following questions: • How is functional validity tested? • What classes of input will make good test cases? • Is the system particularly sensitive to certain input values? • How are the boundaries of a data class isolated? • What data rates and data volume can the system tolerate? • What effect will specific combinations of data have on system operation? • Which activities are necessary for the systematic testing of a software system? • What is to be tested? System specifications, Individual modules, Connections between modules, Integration of the modules in the overall system Acceptance of the software product. 8.1.2. Dynamic testing For dynamic testing the test objects are executed or simulated. Dynamic testing is an imperative process in the software life cycle. Every procedure, every module and class, every subsystem and the overall system must be tested dynamically, even if static tests and program verifications have been carried out.
  • 5. 126 Identify and catalog reusable modules and components Identify areas where programmers and developers need training 8.2. LEVELS OF TESTING The different levels of testing are used to validate the software at different levels of the development process. Fig. 8.1 shows the different testing phases and the corresponding development phases that it validates. Unit Testing is done to validate the code written and is usually done by the author of the code. Integration testing is done to validate the design strategies of the software. System testing is done to ensure that all the functional and non functional requirements of the software are met. Acceptance testing is then done by the customer to ensure that the software works well according to customer specification. 8.3. UNIT TESTING Unit testing is essentially for verification of the code produced during the coding phase, and hence the goal is to test the internal logic of the modules. The unit test is normally white box oriented, and the step can be conducted in parallel for multiple modules. Unit testing is simplified when a module with high cohesion is designed. When only one function is addressed by a module, the number of test cases is reduced and errors can easily be uncovered. Code Design Requirement Analysis System Engineering System Testing Acceptance Testing Fig. 8.1. Levels of testing Unit Test Integrated Testing
  • 6. 127 8.3.1. Unit Test Considerations The tests that occur as part of unit testing are listed below: • Interface – The module interface is tested to ensure that information properly enters into and out of the program unit under test. If data does not enter or exit properly, then all other tests are unresolved. • Local data structures – The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm’s execution. • Boundary conditions – Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing. • Independent paths – All independent paths through the control structure are exercised to ensure that all statements in a module have been executed at least once. • Error handling paths – All error handling paths are tested. Figure 8.2. Unit Test Environment 8.3.2. Unit Test Procedures Because a module is not a standalone program, driver and/or stub software must be developed for each unit test. A driver is nothing more than a “main program” that accepts test case data, passes such data to the module to be tested, and prints relevant results. Stubs serve to replace modules that are subordinate to the module to be tested. A stub or “dummy subprogram” uses the subordinate module’s interface, may do minimal data manipulation, prints
  • 7. 128 verification of entry, and returns. Drivers and stubs represent overhead. Because, both are software modules that must be developed to aid in testing but not delivered with the final software product. 8.3.3. Integration Testing Integration testing involves checking for errors when units are put together as described in the design specifications. While integrating, software can be thought of as a system consisting of several levels. A unit that makes a function call to another unit is considered to be one level above it. Fig. 8.3 There are several approaches for integration: a. Bottom-Up The bottom-up approach integrates the units at the lowest level (bottom level) first, and then the units at the next level above it an so on till the topmost level is integrated. When integrating, each interface is tested to see if it works properly together. Method First those operations are tested that require no other program components; then their integration to a module is tested. After the module test the integration of multiple (tested) modules to a subsystem is tested, until finally the integration of the subsystems, i.e., the overall system can be tested. The advantages The advantages of bottom-up testing prove to be the drawbacks of top- down testing (and vice versa). The bottom-up test method is solid and proven. The objects to be tested are known in full detail. It is often simpler to define relevant test cases and test data. Unit Unit Unit Unit Unit Unit Unit Fig. 8.3. Functional Unit
  • 8. 129 The bottom-up approach is psychologically more satisfying because the tester can be certain that the foundations for the test objects have been tested in full detail. The drawbacks The characteristics of the finished product are only known after the completion of all implementation and testing, which means that design errors in the upper levels are detected very late. Testing individual levels also cause high costs for providing a suitable test environment. b. Top-Down Top-Down integration starts from the units at the top level first and works downwards integrating the units at a lower level. While integrating if a unit in the lower level is not available a replica of the lower level unit is created which imitates its behavior. Method The control module is implemented and tested first. Imported modules are represented by substitute modules. Surrogates have the same interfaces as the imported modules and simulate their input/output behavior. After the test of the control module, all other modules of the software systems are tested in the same way; i.e., their operations are represented by surrogate procedures until the development has progressed enough to allow implementation and testing of the operations. The test advances stepwise with the implementation. Implementation and phases merge, and the integration test of subsystems become unnecessary. The advantages Design errors are detected as early as possible, saving development time and costs because corrections in the module design can be made before their implementation. The characteristics of a software system are evident from the start, which enables a simple test of the development state and the acceptance by the user. The software system can be tested thoroughly from the start with test cases without providing (expensive) test environments.
  • 9. 130 The drawbacks Strict top-down testing proves extremely difficult because designing usable surrogate objects can prove very complicated, especially for complex operations. Errors in lower hierarchy levels are hard to localize. c. Sandwich Sandwich integration is an attempt to combine the advantages of both the above approaches. A “target” layer is identified somewhere in between and the integration converges on the layer using a top-down approach above it and a bottom-up approach below it. Identifying the target layers must be done by people with good experience in similar projects or else it might leads to serious delays. d. Big-Bang A different and somewhat simplistic approach is the big-bang approach, which consists of putting all unit-tested modules together and testing them in one go. Chances are that it will not work! This is not a very feasible approach as it will be very difficult to identify interfacing issues. 8.4. SYSTEM TESTING System testing is testing conducted on a complete, integrated system to evaluate the system's compliance with its specified requirements. Software is only one element of a larger computer-based system. The software developed is ultimately incorporated with other system elements such as new hardware, information etc., and a series of system integration and validation tests are conducted. These tests are not conducted by the software developer alone. System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test has a different purpose, all work to verify that all system elements have been properly integrated and perform allocated functions. System testing falls within the scope of Black box testing, and as such, should require no knowledge of the inner design of the code or logic. 8.4.1. Alpha and Beta Test Alpha testing and Beta testing are sub-categories of System testing. If software is developed as a product (example: Microsoft Word) which is intended to be used by many end-users, it is not practical to perform formal acceptance tests with each end-user. In this situation most software products are tested using the process called alpha and beta testing to allow the end-user to find defects. The Alpha test is conducted in the developer’s environment by the end- users. The environment might be simulated, with the developer and the
  • 10. 131 typical end-user present for the testing. The end-user uses the software and records the errors and problems. Alpha test is conducted in a controlled environment. The Beta test is conducted in the end-user’s environment. The developer is not present for the beta testing. The beta testing is always in the real-world environment which is not controlled by the developer. The end-user records the problems and reports it back to the developer at intervals. Based on the results of the beta testing the software is made ready for the final release to the intended customer base. As a rule, System testing takes, as its input, all of the integrated software components that have successfully passed Integration testing and also the software system itself integrated with any applicable hardware system(s). The purpose of Integration testing is to detect any inconsistencies between the software units that are integrated together called assemblages or between any of the assemblages and hardware. System testing is more of a limiting type of testing, where it seeks to detect both defects within the inter-assemblages and also the system as a whole. 8.4.2. Finger Pointing A classic system testing problem is “finger pointing”. This occurs when an error is uncovered, and each system element developer blames the other for the problem. The software engineer should anticipate potential interfacing problems and do the following: 1) Design error-handling paths that test all information coming from other elements of the system 2) Conduct a series of tests that simulate bad data or other potential errors at the software interface 3) Record the results of tests to use as “evidence” if finger pointing does occur 4) Participate in planning and design of system tests to ensure that software is adequately tested. 8.4.3. Types of System Tests The types of system tests for software-based systems are: a. Recovery Testing b. Security Testing c. Stress testing d. Sensitivity Testing e. Performance Testing
  • 11. 132 a. Recovery Testing Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatically performed by the system itself, then re-initialization, check pointing mechanisms, data recovery, and restart are each evaluated for correctness. If recovery requires human intervention, the mean time to repair is evaluated to determine whether it is within acceptable limits. b. Security Testing Security testing attempts to verify that protection mechanisms built into a system will in fact protect it from improper penetration. Penetration spans a broad range of activities: hackers who attempt to penetrate systems for sport; unhappy employees who attempt to penetrate for revenge; and dishonest individuals who attempt to penetrate for illegal personal gain. c. Stress Testing Stress tests are designed to tackle programs with abnormal situations. Stress testing executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. For example, 1) Special tests may be designed that generate 10 interrupts per second, when one or two is the average rate 2) Input data rates may be increased by an order of magnitude to determine how input functions will respond 3) Test cases that require maximum memory or other resources may be executed 4) Test cases that may cause thrashing in a virtual operating system may be designed 5) Test cases that may cause excessive hunting for disk resident data may be created. d. Sensitivity Testing A variation of stress testing is a technique called sensitivity testing. In some situations a very small range of data contained within the bounds of valid data for a program may cause extreme and even erroneous processing or profound performance degradation. Sensitivity testing attempts to uncover data combinations within valid input classes that may cause instability or improper processing. e. Performance Testing Performance testing is designed to test run time performance (speed and response time) of software within the context of an integrated system. It occurs throughout all steps in the testing process. Performance tests are often coupled with stress testing and often require both hardware and software
  • 12. 133 instrumentation. External instrumentation can monitor execution intervals, log events as they occur, and sample machine states on a regular basis. Performance testing can be categorized into the following: Load Testing is conducted to check whether the system is capable of handling an anticipated load. Here, Load refers to the number of concurrent user accessing the system. Load testing is used to determine whether the system is capable of handling various activities performed concurrently by different users. Endurance testing deals with the reliability of the system. This type of testing is conducted for a longer duration to find out the health of the system in terms of its consistency. Endurance testing is conducted on either a normal load, or stress load. However, the duration of the test is long. Stress testing helps to identify the number of users the system can handle at a time before breaking down or degrading severely. Stress testing goes one step beyond the load testing and identifies the system’s capability to handle the peak load. Spike testing is conducted to stress the system suddenly for a short duration. This testing ensures whether the system will be a stable and responsive under an unexpected rise in load. 8.4.4. Regression Testing This is an important aspect of testing-ensuring that when an error is fixed in a system, the new version of the system does not fail any test that the older version passed. Regression testing consists of running the corrected system against tests which the program had already passed successfully. This is to ensure that in the process of modifying the existing system, the original functionality of the system was not disturbed. This is particularly important in maintenance project, where the likelihood of making changes can inadvertently affect the program’s behavior. Maintenance projects require enhancement or updating of the existing system; enhancements are introduction of new features to the software and might be released in different versions. Whenever a version is released, regression testing should be done on the system to ensure that the existing features have not been disturbed. 8.5. ACCEPTANCE TEST Acceptance testing is the process of testing the entire system, with the completed software as part of it. This is done to ensure that all the requirements that the customer specified are met. Acceptance testing (done after System testing) is similar to system testing but administered by the customer to test if the system follow to the agreed upon requirements.