Unit 8: Software Testing
Preeti Mishra
Course Incharge
What they Say…
• “Testing is the process of executing a program with
the intention of finding errors.” – Myers
• “Testing can show the presence of bugs but never
their absence.” - Dijkstra
Definition
• Testing is the process of exercising a program with
the specific intent of finding errors prior to delivery
to the end user.
Who Tests the Software?
• The developer
– understands the system
– but will test "gently"
– and is driven by "delivery"
• The independent tester
– must learn about the system
– but will attempt to break it
– and is driven by quality
Characteristics of Testable Software
• Operable
– The better it works (i.e., better quality), the easier it is to test
• Observable
– Incorrect output is easily identified; internal errors are automatically detected
• Controllable
– The states and variables of the software can be controlled directly by the tester
• Decomposable
– The software is built from independent modules that can be tested independently
• Simple
– The program should exhibit functional, structural, and code simplicity
• Stable
– Changes to the software during testing are infrequent and do not invalidate
existing tests
• Understandable
– The architectural design is well understood; documentation is available and
organized
Test Characteristics
• A good test has a high probability of finding an error
– The tester must understand the software and how it might fail
• A good test is not redundant
– Testing time is limited; one test should not serve the same purpose as another
test
• A good test should be “best of breed”
– Tests that have the highest likelihood of uncovering a whole class of errors
should be used
• A good test should be neither too simple nor too complex
– Each test should be executed separately; combining a series of tests could
cause side effects and mask certain errors
A strategy for software testing
• integrates the design of software test cases into a well-planned
series of steps that result in successful development of the
software
• The strategy provides a road map that describes the steps to be
taken, when, and how much effort, time, and resources will be
required
• The strategy incorporates test planning, test case design, test
execution, and test result collection and evaluation
• The strategy provides guidance for the practitioner and a set of
milestones for the manager
• Because of time pressures, progress must be measurable and
problems must surface as early as possible
A Strategic Approach to Testing
General Characteristics of Strategic Testing
• To perform effective testing, a software team should conduct
effective formal technical reviews
• Testing begins at the component level and works outward toward
the integration of the entire computer-based system
• Different testing techniques are appropriate at different points in
time
• Testing is conducted by the developer of the software and (for
large projects) by an independent test group
• Testing and debugging are different activities, but debugging
must be accommodated in any testing strategy
A Strategy for Testing Conventional Software
• Development artifacts, from abstract to concrete: System Engineering → Requirements → Design → Code
• Testing levels, from narrow to broader scope: Unit Testing → Integration Testing → Validation Testing → System Testing
Levels of Testing
Unit Testing
Unit Testing
• Unit testing is a software development process in which the
smallest testable parts of an application, called units, are
individually and independently scrutinized for proper operation.
Unit testing is often automated but it can also be done
manually.
• Algorithms and logic
• Data structures (global and local)
• Interfaces
• Independent paths
• Boundary conditions
• Error handling
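To make this checklist concrete, here is a minimal sketch of a unit test in Python's unittest framework. The days_in_month function and its values are hypothetical, invented for illustration; the tests exercise a typical value, both boundary conditions, and the error handling path:

```python
import unittest

def days_in_month(month: int) -> int:
    """Hypothetical unit under test: days in a month of a non-leap year."""
    if not 1 <= month <= 12:
        raise ValueError("month must be in 1..12")
    return [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31][month - 1]

class DaysInMonthTest(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(days_in_month(4), 30)

    def test_boundary_conditions(self):
        # exercise both edges of the valid input range
        self.assertEqual(days_in_month(1), 31)
        self.assertEqual(days_in_month(12), 31)

    def test_error_handling(self):
        # inputs just outside the boundaries must take the error path
        with self.assertRaises(ValueError):
            days_in_month(0)
        with self.assertRaises(ValueError):
            days_in_month(13)

if __name__ == "__main__":
    unittest.main()
```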
Unit Testing
• The software engineer designs test cases and runs them against the module to be tested, then evaluates the results
• Unit-test targets: the module interface, local data structures, boundary conditions, independent paths, and error handling paths
Integration Testing
Integration Testing
• Integration testing (sometimes called integration and testing, abbreviated I&T)
is the phase in software testing in which individual software modules are combined
and tested as a group. It occurs after unit testing and before validation testing.
• Integration: Combining 2 or more software units
– often a subset of the overall project
Why Integration Testing Is Necessary
• One module can have an adverse effect on another
• Sub functions, when combined, may not produce the desired
major function
• Individually acceptable imprecision in calculations may be
magnified to unacceptable levels
• Interfacing errors not detected in unit testing may appear
• Timing problems (in real-time systems) are not detectable by
unit testing
• Resource contention problems are not detectable by unit testing
Types of Integration Testing
• Big-Bang Integration
• Top-down Integration
• Bottom-Up Integration
• Sandwich Integration
Phased integration
• phased ("big-bang") integration:
– design, code, test, debug each class/unit/subsystem
separately
– combine them all
– pray
Top-down integration
• top-down integration:
Start with outer UI layers and work inward
– must write (lots of) stub lower layers for UI to interact with
– allows postponing tough design/debugging decisions (bad?)
Problems with Top-Down Integration
• Many times, calculations are performed in the modules at the
bottom of the hierarchy
• Stubs typically do not pass data up to the higher modules
• Delaying testing until lower-level modules are ready usually
results in integrating many modules at the same time rather
than one at a time
• Developing stubs that can pass data up is almost as much
work as developing the actual module
Bottom-up integration
• bottom-up integration:
Start with low-level data/logic layers and work outward
– must write test drivers to run these layers
– won't discover high-level / UI design flaws until late
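A test driver here can be a short, throwaway harness that calls the low-level layer directly, since the higher layers that will eventually call it do not exist yet. A minimal Python sketch, with a hypothetical parse_record unit:

```python
# Hypothetical low-level unit, built and tested first in bottom-up integration.
def parse_record(line: str) -> dict:
    name, amount = line.split(",")
    return {"name": name.strip(), "amount": float(amount)}

# Test driver: stands in for the not-yet-written higher layers.
def driver():
    samples = ["alice, 10.5", "bob, 3.0"]
    for sample in samples:
        print(parse_record(sample))

if __name__ == "__main__":
    driver()
```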
Problems with Bottom-Up Integration
• The whole program does not exist until the last module is
integrated
• Timing and resource contention problems are not found
until late in the process
Stubs
• stub: A controllable replacement for an existing software unit to
which your code under test has a dependency.
– useful for simulating difficult-to-control elements:
• network / internet
• database
• time/date-sensitive code
• files
• threads
• memory
– also useful when dealing with brittle legacy code/systems
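As an illustration, a stub can make time-sensitive code controllable. In this minimal Python sketch the names (Clock, StubClock, is_weekend) are hypothetical; the point is that the code under test reaches the clock only through its interface, so a fixed-date stub can replace the real system clock in tests:

```python
import datetime

class Clock:
    """Real dependency: hard to control because it reads the system time."""
    def today(self) -> datetime.date:
        return datetime.date.today()

class StubClock:
    """Stub: a controllable replacement that always returns a fixed date."""
    def __init__(self, fixed: datetime.date):
        self._fixed = fixed
    def today(self) -> datetime.date:
        return self._fixed

def is_weekend(clock) -> bool:
    # code under test: depends on the clock only through today()
    return clock.today().weekday() >= 5

# In a test, inject the stub instead of the real clock:
assert is_weekend(StubClock(datetime.date(2024, 1, 6)))      # a Saturday
assert not is_weekend(StubClock(datetime.date(2024, 1, 8)))  # a Monday
```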
"Sandwich" integration
• "sandwich" integration:
Connect top-level UI with crucial bottom-level classes
– add middle layers later as needed
– more practical than top-down or bottom-up?
System Testing
System Testing
• System testing of software or hardware is testing
conducted on a complete, integrated system to evaluate the
system's compliance with its specified requirements.
• System testing falls within the scope of black box testing,
and as such, should require no knowledge of the inner design
of the code or logic.
Principles of System Testing
System Testing Process
• Function testing: does the integrated system perform as
promised by the requirements specification?
• Performance testing: are the non-functional requirements
met?
• Acceptance testing: is the system what the customer
expects?
• Installation testing: does the system run at the customer
site(s)?
Performance Tests
Purpose and Roles
• Used to examine
– the calculation
– the speed of response
– the accuracy of the result
– the accessibility of the data
• Designed and administered by the test team
Performance Tests
Types of Performance Tests
• Stress tests
• Volume tests
• Configuration tests
• Compatibility tests
• Regression tests
• Security tests
• Timing tests
• Environmental tests
• Quality tests
• Recovery tests
• Maintenance tests
• Documentation tests
• Human factors
(usability) tests
Reliability, Availability, and Maintainability
Definition
• Software reliability: the software operates without failure under given conditions for a given time interval
• Software availability: the software operates successfully according to its specification at a given point in time
• Software maintainability: for a given condition of use, a maintenance activity can be carried out within a stated time interval, using stated procedures and resources
Different Level of Failure Severity
• Catastrophic: causes death or system loss
• Critical: causes severe injury or major system damage
• Marginal: causes minor injury or minor system damage
• Minor: causes no injury or system damage
Acceptance Tests
• Enable the customers and users to determine if the built system
meets their needs and expectations
• Written, conducted and evaluated by the customers
• Pilot test: install on experimental basis
• Alpha test: in-house test
• Beta test: customer pilot
• Parallel testing: new system operates in parallel with old
system
Installation Testing
• Before the testing
– Configure the system
– Attach proper number and kind of devices
– Establish communication with other systems
• The testing
– Regression tests: to verify that the system has been
installed properly and works
Testing Approaches
Approaches to Testing
• Black Box, White Box, and Grey Box
• Alpha and Beta
White Box testing
White box testing is testing where we use the information available
from the code of the component to generate tests.
This information is usually used to achieve coverage in one way or
another – e.g.
• Code coverage
• Path coverage
• Decision coverage
Debugging will always be white-box testing
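To illustrate the difference between the coverage criteria, here is a small hypothetical Python function: a single test can execute every statement, yet decision coverage still demands a second test that takes the false outcome of the branch:

```python
def normalize(price: float) -> float:
    # a decision with no else branch
    if price < 0:
        price = 0.0
    return price

# This one test executes every statement (full statement coverage):
assert normalize(-5.0) == 0.0

# Decision coverage additionally requires the false outcome of the branch:
assert normalize(12.5) == 12.5
```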
Coverage report examples (screenshots from the original slides omitted)
Black Box testing
Black box testing is also called functional testing. The
main ideas are simple:
1. Define initial component state, input and expected
output for the test.
2. Set the component in the required state.
3. Give the defined input
4. Observe the output and compare to the expected
output.
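A minimal sketch of these four steps in Python; the bounded Counter component and the chosen values are hypothetical:

```python
# Hypothetical component under test: a counter that stops at a limit.
class Counter:
    def __init__(self, limit: int):
        self.value, self.limit = 0, limit
    def increment(self) -> None:
        if self.value < self.limit:
            self.value += 1

# 1. Define initial component state, input, and expected output.
limit, increments, expected = 3, 5, 3

# 2. Set the component in the required state.
counter = Counter(limit)

# 3. Give the defined input.
for _ in range(increments):
    counter.increment()

# 4. Observe the output and compare to the expected output.
assert counter.value == expected
```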
Info for Black Box testing
The fact that we do not have access to the code does not mean
that one test is just as good as another. We should consider the
following information:
• Understanding of the algorithm
• Parts of the solution that are difficult to implement
• Special cases that occur only rarely
Black Box vs. White Box testing
We can contrast the two methods as follows:
• White Box testing
– Understanding the implemented code.
– Checking the implementation
– Debugging
• Black Box testing
– Understanding the algorithm used.
– Checking the solution – functional testing
Criteria for comparison:
• Definition
– Black Box Testing: a software testing method in which the internal structure/design/implementation of the item being tested is NOT known to the tester
– White Box Testing: a software testing method in which the internal structure/design/implementation of the item being tested is known to the tester
• Levels applicable to
– Black Box: mainly higher levels of testing (Acceptance Testing, System Testing)
– White Box: mainly lower levels of testing (Unit Testing, Integration Testing)
• Responsibility
– Black Box: generally independent software testers
– White Box: generally software developers
• Programming knowledge
– Black Box: not required
– White Box: required
• Implementation knowledge
– Black Box: not required
– White Box: required
• Basis for test cases
– Black Box: requirement specifications
– White Box: detailed design
Basis Path Testing
• White-box technique usually based on the program flow graph
• The cyclomatic complexity of the program is computed from its flow graph using the
formula V(G) = E – N + 2, or by counting the conditional statements in the PDL
representation and adding 1
• Determine the basis set of linearly independent paths (the cardinality of this set is the
program cyclomatic complexity)
• Prepare test cases that will force the execution of each path in the basis set.
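A small worked example, with a hypothetical Python function: two conditional statements give V(G) = 2 + 1 = 3 (equivalently, the flow graph has 8 edges and 7 nodes, so E – N + 2 = 3), so the basis set contains three linearly independent paths, each forced by one test case:

```python
def classify(x: int) -> str:
    if x < 0:        # conditional statement 1
        return "negative"
    if x == 0:       # conditional statement 2
        return "zero"
    return "positive"

# One test case per path in the basis set:
assert classify(-1) == "negative"   # path: decision 1 true
assert classify(0) == "zero"        # path: decision 1 false, decision 2 true
assert classify(1) == "positive"    # path: both decisions false
```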
Control Structure Testing
• White-box techniques focusing on control structures present in the
software
• Condition testing (e.g. branch testing)
– focuses on testing each decision statement in a software module
– it is important to ensure coverage of all logical combinations of data that
may be processed by the module (a truth table may be helpful)
• Data flow testing
– selects test paths according to the locations of variable definitions
and uses in the program (e.g. definition-use chains)
• Loop testing
– focuses on the validity of the program loop constructs (i.e. while, for, go to)
– involves checking to ensure loops start and stop when they are supposed to
(unstructured loops should be redesigned whenever possible)
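For loop testing, a common tactic is to exercise zero, one, and a typical number of iterations so the loop demonstrably starts and stops where it should. A minimal Python sketch with a hypothetical running_total loop:

```python
def running_total(values) -> int:
    total = 0
    for v in values:   # the loop construct under test
        total += v
    return total

assert running_total([]) == 0             # loop body never entered
assert running_total([5]) == 5            # exactly one iteration
assert running_total([1, 2, 3, 4]) == 10  # a typical number of iterations
```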
Alpha and Beta Testing
Product Use Testing
Product use under normal operating conditions.
Some terms:
– Alpha testing: done in-house.
– Beta testing: done at the customer site.
Typical goals of beta testing: to determine if the product works
and is free of “bugs.”
Alpha Testing vs. Beta Testing (Field Testing)
1. Alpha: always performed by the developers at the software development site. Beta: always performed by the customers at their own site.
2. Alpha: sometimes also performed by an independent testing team. Beta: not performed by an independent testing team.
3. Alpha: not open to the market and public. Beta: always open to the market and public.
4. Alpha: conducted for the software application and project. Beta: usually conducted for a software product.
5. Alpha: performed in a virtual environment. Beta: performed in a real-time environment.
6. Alpha: performed within the organization. Beta: performed outside the organization.
7. Both are forms of Acceptance Testing.
8. Alpha: carried out at the developing organization's location, with the involvement of developers. Beta: carried out by users at their own locations and sites, using customer data.
9. Alpha: falls under both White Box and Black Box Testing. Beta: a kind of Black Box Testing only.
10. Alpha: performed at acceptance-testing time, when developers check whether the product meets the user requirements. Beta: performed when the software product is marketed.
11. Alpha: performed at the developer's premises in the absence of the users. Beta: performed at the user's premises in the absence of the development team.
12. Alpha Testing is not known by any other name. Beta Testing is also known as Field Testing.
13. Alpha: considered User Acceptance Testing (UAT) done at the developer's site. Beta: considered User Acceptance Testing (UAT) done at the customers' or users' site.
Verification and Validation
Testing
Verification and Validation
• Verification
– Are you building the product right?
– Software must conform to its specification
• Validation
– Are you building the right product?
– Software should do what the user really requires
Verification and Validation Process
• Must be applied at each stage of the software
development process to be effective
• Objectives
– Discovery of system defects
– Assessment of system usability in an operational
situation
Verification vs. Validation
1. Verification is a static practice of verifying documents, design, code, and program. Validation is a dynamic mechanism of validating and testing the actual product.
2. Verification does not involve executing the code. Validation always involves executing the code.
3. Verification is human-based checking of documents and files. Validation is computer-based execution of the program.
4. Verification uses methods like inspections, reviews, walkthroughs, and desk-checking. Validation uses methods like black box (functional) testing, grey box testing, and white box (structural) testing.
5. Verification checks whether the software conforms to its specifications. Validation checks whether the software meets the customer's expectations and requirements.
6. Verification can catch errors that validation cannot; it is a low-level exercise. Validation can catch errors that verification cannot; it is a high-level exercise.
7. Verification targets the requirements specification, application and software architecture, high-level and complete design, and database design. Validation targets the actual product: a unit, a module, a set of integrated modules, and the final product.
8. Verification is done by the QA team to ensure that the software matches the specifications in the SRS document. Validation is carried out with the involvement of the testing team.
9. Verification generally comes first, before validation. Validation generally follows verification.
Performance Testing
• Performance testing is the process of determining the speed or
effectiveness of a computer, network, software program or device.
• Before going into the details, we should understand the factors that
govern performance testing:
– Throughput
– Response Time
– Tuning
– Benchmarking
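As a minimal illustration of the first two factors, the sketch below times repeated calls to a workload and reports average response time and throughput. The harness and the stand-in workload are hypothetical:

```python
import time

def measure(workload, requests: int = 1000) -> None:
    """Time `requests` calls to a workload and report two factors."""
    start = time.perf_counter()
    for _ in range(requests):
        workload()
    elapsed = time.perf_counter() - start
    print(f"response time: {elapsed / requests * 1000:.3f} ms/request")
    print(f"throughput:    {requests / elapsed:.1f} requests/s")

# stand-in for the real operation under test
measure(lambda: sum(range(1000)))
```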
Stress testing
• Exercises the system beyond its maximum design load; stressing the
system often causes defects to come to light
• Stress testing also tests failure behaviour: systems should not fail
catastrophically, so stress testing checks for unacceptable loss of
service or data
• Particularly relevant to distributed systems, which can exhibit severe
degradation as a network becomes overloaded
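A sketch of the idea in Python, with a hypothetical send_request stand-in: fire far more concurrent requests than the design load and assert that every request is either served or cleanly rejected, never silently lost:

```python
import concurrent.futures

def stress_test(send_request, clients: int = 500) -> None:
    """Drive the system beyond its design load; it must degrade gracefully."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
        futures = [pool.submit(send_request) for _ in range(clients)]
        results = [f.result() for f in futures]
    # acceptable outcomes under overload: served or cleanly rejected,
    # but never lost requests or corrupted data
    assert all(r in ("ok", "rejected") for r in results)

# stand-in for a real request against the system under test
stress_test(lambda: "ok")
```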
Smoke Testing
• Taken from the world of hardware
– Power is applied and a technician checks for sparks, smoke, or
other dramatic signs of fundamental failure
• Designed as a pacing mechanism for time-critical projects
– Allows the software team to assess its project on a frequent basis
• Includes the following activities
– The software is compiled and linked into a build
– A series of breadth tests is designed to expose errors that will keep
the build from properly performing its function
• The goal is to uncover “show stopper” errors that have the highest
likelihood of throwing the software project behind schedule
– The build is integrated with other builds and the entire product is
smoke tested daily
• Daily testing gives managers and practitioners a realistic assessment of
the progress of the integration testing
– After a smoke test is completed, detailed test scripts are executed
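A smoke suite is deliberately shallow and broad. This hypothetical Python sketch uses the standard-library json module as a stand-in for the product's own package; a real smoke test would import the freshly built product and run one breadth test per major function:

```python
import unittest

class SmokeTest(unittest.TestCase):
    """Breadth tests: shallow checks that the build's core functions run at all."""

    def test_build_imports(self):
        # a build that cannot even be imported is a show stopper
        import json  # stand-in for the product's top-level package
        self.assertIsNotNone(json)

    def test_core_round_trip(self):
        # one quick pass through a central function of the build
        import json
        self.assertEqual(json.loads(json.dumps({"ok": 1})), {"ok": 1})

if __name__ == "__main__":
    unittest.main(verbosity=2)
```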
Art of Debugging
Debugging Process
• Debugging occurs as a consequence of successful testing
• It is still very much an art rather than a science
• Good debugging ability may be an innate human trait
• Large variances in debugging ability exist
• The debugging process begins with the execution of a test case
• Results are assessed and the difference between expected and actual
performance is encountered
• This difference is a symptom of an underlying cause that lies hidden
• The debugging process attempts to match symptom with cause, thereby
leading to error correction
Why is Debugging so Difficult?
• The symptom and the cause may be geographically remote
• The symptom may disappear (temporarily) when another error
is corrected
• The symptom may actually be caused by nonerrors (e.g., round-off
inaccuracies)
• The symptom may be caused by human error that is not easily
traced
Why is Debugging so Difficult?
(continued)
• The symptom may be a result of timing problems, rather than
processing problems
• It may be difficult to accurately reproduce input conditions, such as
asynchronous real-time information
• The symptom may be intermittent such as in embedded systems
involving both hardware and software
• The symptom may be due to causes that are distributed across a
number of tasks running on different processors
Debugging Strategies
• Objective of debugging is to find and correct the cause of a
software error
• Bugs are found by a combination of systematic evaluation,
intuition, and luck
• Debugging methods and tools are not a substitute for careful
evaluation based on a complete design model and clear source
code
• There are three main debugging strategies
– Brute force
– Backtracking
– Cause elimination
Strategy #1: Brute Force
• Most commonly used and least efficient method
• Used when all else fails
• Involves the use of memory dumps, run-time traces, and output
statements
• Leads many times to wasted effort and time
Strategy #2: Backtracking
• Can be used successfully in small programs
• The method starts at the location where a symptom has been
uncovered
• The source code is then traced backward (manually) until the
location of the cause is found
• In large programs, the number of potential backward paths may
become unmanageably large
Strategy #3: Cause Elimination
• Involves the use of induction or deduction and introduces the
concept of binary partitioning
– Induction (specific to general): Prove that a specific starting value is
true; then prove the general case is true
– Deduction (general to specific): Show that a specific conclusion follows
from a set of general premises
• Data related to the error occurrence are organized to isolate
potential causes
• A cause hypothesis is devised, and the aforementioned data are
used to prove or disprove the hypothesis
• Alternatively, a list of all possible causes is developed, and tests are
conducted to eliminate each cause
• If initial tests indicate that a particular cause hypothesis shows
promise, data are refined in an attempt to isolate the bug
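Binary partitioning can be sketched directly: split the candidate causes in half, keep whichever half still reproduces the failure, and repeat until one remains. The names below and the assumption of a single, reliably reproducible culprit are hypothetical simplifications:

```python
def isolate_cause(candidates, test_passes):
    """Binary partitioning: keep halving the candidate set, retaining the
    half on which the test still fails, until one culprit is isolated."""
    while len(candidates) > 1:
        mid = len(candidates) // 2
        left = candidates[:mid]
        # if the test fails on the left half, the culprit is in it
        candidates = left if not test_passes(left) else candidates[mid:]
    return candidates[0]

# toy example: the test fails whenever record 42 is in the subset
records = list(range(100))
assert isolate_cause(records, lambda subset: 42 not in subset) == 42
```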
Three Questions to ask Before Correcting the Error
• Is the cause of the bug reproduced in another part of the program?
– Similar errors may be occurring in other parts of the program
• What next bug might be introduced by the fix that I’m about to
make?
– The source code (and even the design) should be studied to assess
the coupling of logic and data structures related to the fix
• What could we have done to prevent this bug in the first place?
– This is the first step toward software quality assurance
– By correcting the process as well as the product, the bug will be
removed from the current program and may be eliminated from all
future programs