UNIT 3
TESTING ACTIVITIES
LEVELS OF TESTING
 Software testing is generally carried out at different
levels. There are four such levels: unit testing,
integration testing, system testing and
acceptance testing.
 The first three levels of testing are done by testers,
and the last level is done by the customer.
 Each level has specific objectives.
[Figure: the four levels of testing, in order: Unit Testing → Integration Testing → System Testing → Acceptance Testing. The first three levels are done by testers and developers; acceptance testing is done by customers, after which the software is ready for delivery.]
UNIT TESTING
 “A unit is the smallest testable piece of software, which
may consist of hundreds or even just a few lines of source
code, and generally represents the result of the work of
one or a few developers. The unit test cases' purpose is to
ensure that the unit satisfies its functional specification.”
 This type of testing is performed by developers before
the build is handed over to the testing team to formally
execute the test cases. Unit testing is performed by the
respective developers on the individual units of source
code in their assigned areas. The developers use test data
that is different from the test data of the quality
assurance team.
 The goal of unit testing is to isolate each part of the
program and show that individual parts are correct in
terms of requirements and functionality.
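As an illustration, here is a minimal unit test using Python's standard unittest module (the apply_discount function is an invented example, not taken from any particular project):

```python
import unittest

def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

class TestApplyDiscount(unittest.TestCase):
    """Each test isolates apply_discount and checks it against its spec."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200, 25), 150)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(99, 0), 99)

    def test_out_of_range_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100, 150)
```

The tests can be run with `python -m unittest`; each one exercises the unit in isolation, which is exactly the goal stated above.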
LIMITATIONS OF UNIT TESTING
 Testing cannot catch each and every bug in an
application. It is impossible to evaluate every
execution path in every software application. The
same is the case with unit testing.
 There is a limit to the number of scenarios and test
data that a developer can use to verify a piece of
source code. After having exhausted all the options,
there is no choice but to stop unit testing and merge
the code segment with other units.
 There is a major problem with unit testing: How
can we run a unit independently?
 A unit may not be completely independent. It may be
calling a few units and also be called by one or more
units. We may have to write additional source code to
execute a unit.
 A unit X may call a unit Y, and unit Y may call a unit
A and a unit B. To execute unit Y independently, we
may have to write additional source code that handles
the activities of unit X and the activities of units A
and B.
 The additional source code that handles the activities of
unit X is called a DRIVER, and the additional code that
handles the activities of units A and B is called a STUB.
 The complete additional source code written for the
design of stubs and drivers is called SCAFFOLDING.
[Figure: unit Y under test. A driver replaces the calling unit X; stubs replace the called units A and B.]
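This arrangement can be sketched directly in code. All names below are illustrative: unit Y is the unit under test, stubs stand in for units A and B, and a driver stands in for the calling unit X:

```python
# Unit under test: Y normally calls units A and B, and is called by unit X.
def unit_y(x, unit_a, unit_b):
    """Combine the results of its two collaborators."""
    return unit_a(x) + unit_b(x)

# Stubs: trivial stand-ins that replace the real units A and B.
def stub_a(x):
    return 10      # canned response in place of unit A

def stub_b(x):
    return 1       # canned response in place of unit B

# Driver: replaces unit X by invoking Y directly and checking the result.
def driver_x():
    result = unit_y(5, stub_a, stub_b)
    assert result == 11, "unit Y misbehaved with stubbed collaborators"
    return result

print(driver_x())  # → 11
```

Together, the driver and the stubs are the scaffolding that lets unit Y run on its own.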
INTEGRATION TESTING
 When we combine two or more units, we may like
to test the interfaces amongst these units. We
combine two or more units because they share
some relationship. This relationship is called
coupling.
 Coupling is a measure of the level of interdependence
among the modules of a program. It tells us at what
level the modules interact with each other. The lower
the coupling, the better the program.
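A minimal sketch of an integration test (the two units below are invented for illustration): rather than checking either unit alone, it exercises the interface between them, i.e. whether one unit correctly consumes what the other produces:

```python
def parse_order(line):
    """Unit 1: parse an 'item,quantity' line into a (name, int) pair."""
    name, qty = line.split(",")
    return name.strip(), int(qty)

def total_quantity(lines):
    """Unit 2: sum the quantities across parsed order lines."""
    return sum(parse_order(line)[1] for line in lines)

# The integration test checks the interface: total_quantity depends on
# the (name, quantity) shape that parse_order returns.
assert total_quantity(["apple, 2", "pear, 3"]) == 5
print("interface between parse_order and total_quantity is intact")
```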
APPROACHES FOR INTEGRATION TESTING
 Top Down is an approach to Integration Testing where
top-level units (main module) are tested first and lower-
level units (sub modules) are tested step by step after
that. This approach is taken when a top-down
development approach is followed. Test Stubs are
needed to simulate lower-level units which may not be
available during the initial phases.
 Bottom Up is an approach to Integration Testing where
bottom-level units (sub modules) are tested first and
upper-level units (main module) are tested step by step
after that. This approach is taken when a bottom-up
development approach is followed. Test Drivers are
needed to simulate higher-level units which may not be
available during the initial phases.
 Sandwich/Hybrid is an approach to Integration Testing
which is a combination of Top Down and Bottom Up
approaches.
TOP DOWN APPROACH
 In the top-down
approach, testing
takes place from top
to bottom, following
the control flow of the
software system.
 It takes the help of stubs for testing.
 Advantages:
 Fault Localization is easier.
 Possibility to obtain an early prototype.
 Critical Modules are tested on priority; major design flaws
could be found and fixed first.
 Disadvantages:
 Needs many Stubs.
 Modules at lower level are tested inadequately.
BOTTOM UP APPROACH
 In the bottom-up
strategy, each module
at the lower levels is tested
with the modules above it
until all modules are tested.
 It takes the help of drivers for testing.
 Advantages:
 Fault localization is easier.
 No time is wasted waiting for all modules to be developed.
 Disadvantages:
 Critical modules (at the top level of software architecture)
which control the flow of application are tested last and may
be prone to defects.
 An early prototype is not possible.
SYSTEM TESTING
 System Testing is a level of software testing where
the complete, integrated software is tested.
 System testing is performed after the completion of
unit and integration testing in an expected
environment.
 System testing ensures that each system function
works as expected and also tests for the non
functional requirements like performance, security,
reliability, stress, load, etc.
 System Testing (ST) is a black-box testing technique
performed to evaluate the complete system's
compliance with the specified requirements.
 In System testing, the functionalities of the system are
tested from an end-to-end perspective.
 System Testing is usually carried out by a team that is
independent of the development team in order to
measure the quality of the system unbiased.
 It includes both functional and Non-Functional testing.
STEPS IN SYSTEM TESTING
 In software system testing, the following steps
need to be executed:
 Step 1) The first and most important step is preparation
of the System Test Plan.
 Step 2) The second step is creation of test cases.
 Step 3) Creation of the test data to be used for system
testing.
 Step 4) Automated test case execution.
 Step 5) Execution of manual test cases and updating
them in the test management tool (if any).
 Step 6) Bug reporting, bug verification and regression
testing.
 Step 7) Repeat the testing life cycle (if required).
ACCEPTANCE TESTING
 This is an extension of system testing. When the
testing team feels that the product is ready for the
customers, they invite the customers for a
demonstration. After the demo of the product,
customers may like to use the product to assess
their satisfaction and confidence. This type of
usage is essential before accepting the final
product. The testing done for the purpose of
accepting a product is known as acceptance
testing.
 This is carried out by the customer at developer’s or
customer’s site.
 Acceptance Testing is a level of the software testing where a
system is tested for acceptability.
 The purpose of this test is to evaluate the system’s compliance
with the business requirements and assess whether it is
acceptable for delivery.
 Internal Acceptance Testing (Also known as Alpha Testing) is
performed by members of the organization that developed the
software but who are not directly involved in the project.
Usually, it is the members of Product Management, Sales
and/or Customer Support.
 External Acceptance Testing is performed by people who are
not employees of the organization that developed the
software.
 Customer Acceptance Testing is performed by the customers of the
organization that developed the software. They are the ones who
asked the organization to develop the software. [This is in the case
of the software not being owned by the organization that developed
it.]
 User Acceptance Testing (Also known as Beta Testing) is performed
by the end users of the software. They can be the customers
themselves or the customers’ customers.
DEBUGGING
 The process of identifying and correcting a software
error is known as debugging.
 It is a multistep process that involves identifying a
problem, isolating the source of the problem, and
then either correcting the problem or determining a
way to work around it. The final step of debugging
is to test the correction or workaround and make
sure it works.
DEBUGGING
 The goal of testing is to identify errors (bugs) in the
program.
 The process of testing generates symptoms, and a
program’s failure is a clear symptom of the
presence of an error.
 After getting a symptom, we begin to investigate the
cause and place of that error.
 After identification of place, we examine that portion
to identify the cause of the problem.
 This process is called debugging.
DEBUGGING TECHNIQUES
 Most developers have learned through experience
several techniques for debugging.
 Generally they are applied in a trial-and-error manner.
 Debugging is not an easy process.
 Error removal requires humility to even admit the
possibility of errors in the code we have created.
[Figure: debugging techniques: core dumps, traces, print statements, and debugging programs.]
Core Dumps
• A printout of all registers and relevant memory locations is
obtained and studied.
Traces
• Similar to core dumps except the printout contains only
certain memory and register contents.
Print Statements
• The standard print statement in the language being used is
sprinkled throughout the program to output values of key
variables.
Debugging Programs
• A program that runs concurrently with the program under
test to examine memory and registers, and can stop its
execution at a certain point.
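The print-statement technique, for example, might look like this in Python (the average function is a made-up example):

```python
def average(values):
    total = 0
    for v in values:
        total += v
        # Print statement sprinkled in to watch a key variable evolve.
        print(f"after adding {v}: total={total}")
    return total / len(values)

print(average([2, 4, 6]))  # the traced totals are printed, then 4.0
```

A debugging program such as Python's built-in pdb (`python -m pdb script.py`) serves the last purpose: it runs alongside the program, inspects its state, and stops execution at chosen points.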
DEBUGGING APPROACHES
 At the heart of the debugging process are not the
debugging tools but the underlying approaches used
to deduce the cause of the error.
[Figure: debugging approaches: trial and error, backtracking, insert watch points, and the induction and deduction approach.]
DEBUGGING APPROACHES
 Trial and error: The debugger looks at the error
symptoms, reaches a snap judgment as to where in
the code the underlying error might be, and roams
around in the program with one or more debugging
techniques. This is a slow and wasteful approach.
 Backtracking: In this we examine the error
symptoms to see where they are first noticed. One
then backtracks in the program flow of control to a
point where the symptoms have disappeared. This
process brackets the location of the error in the
program.
DEBUGGING APPROACHES
 Insert Watch Points: In this approach we insert
watch points at appropriate places in the
program. We can use software to insert watch
points in a program without modifying the program
manually.
 Induction and Deduction Approach:
 Induction Approach
 Locate the pertinent data
 Organize the data
 Devise a hypothesis
 Prove the hypothesis
DEBUGGING APPROACHES
 Deduction Approach
 Enumerate the possible causes or hypotheses
 Use the data to eliminate possible causes
 Refine the remaining hypothesis
 Prove the remaining hypothesis
TESTING TOOLS
 One way to improve the quality & quantity of testing
is to make the process as pleasant as possible for
the tester.
 This means that tools should be as concise,
powerful & natural as possible.
 The two broad categories of software testing tools
are :
 Static
 Dynamic
 Static testing tools include static analyzers, code
inspectors and standard enforcers.
 Dynamic testing tools include coverage analyzers,
output comparators and test file generators.
DEBUGGING PROCESS
1. Replication of the bug: This means to recreate the
undesired behaviour under controlled conditions.
2. Understanding the bug: This means we want to find
the reason for the failure.
3. Locate the bug: There are two portions of the source
code which need to be considered for locating a bug:
the portion which causes the visible incorrect
behaviour, and the portion which is actually
incorrect.
4. Fix the bug and retest the program: Fixing the bug
is a programming exercise, i.e., making the necessary
changes in the source code.
REGRESSION TESTING
 Regression testing is a type of software testing which
verifies that software which was previously developed
and tested still performs the same way after it was
changed or interfaced with other software. Changes
may include software enhancements, patches,
configuration changes, etc.
 Regression Testing is defined as a type of software
testing to confirm that a recent program or code change
has not adversely affected existing features.
 Regression testing is nothing but full or partial selection
of already executed test cases which are re-executed to
ensure existing functionalities work fine.
 This testing is done to make sure that new code
changes do not have side effects on the existing
functionalities. It ensures that the old code still works
once the new code changes are made.
REGRESSION TESTING PROCESS
1. Fault Identification
1. Failure of the program and generation of failure report.
2. Debugging of source code.
3. Identification of faults in the source code
2. Modification
1. Source code modification.
3. Execution based on selected test cases and new
test cases, if any
1. Selection of test cases from existing test suite to ensure the
correctness of modification.
2. Addition of new test cases, if required
3. Perform retesting to ensure correctness using the selected
test cases and new test cases, if any
SELECTION OF TEST CASES FOR REGRESSION
TESTING
1. Select all test cases: This is the simplest
technique when the size of the test suite is small. It is
the safest because we run all the test cases for any
change in the program.
2. Select test cases randomly: The test cases are
selected randomly to reduce the size of the test
suite. We decide the number of test cases required on
the basis of available time and resources.
3. Select modification-traversing test cases: We
select only those test cases that execute the
modified portion of the program and the portion
which is affected by the modification.
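Modification-traversing selection can be sketched by mapping each test case to the code it covers (the coverage data below is invented for illustration; in practice it would come from a coverage tool):

```python
# Coverage map: which functions each test case executes.
coverage = {
    "test_login":    {"authenticate", "load_profile"},
    "test_checkout": {"compute_total", "charge_card"},
    "test_refund":   {"charge_card", "record_refund"},
}

def select_tests(modified_functions):
    """Pick only the test cases that traverse a modified function."""
    return sorted(
        test for test, funcs in coverage.items()
        if funcs & modified_functions          # any overlap selects the test
    )

print(select_tests({"charge_card"}))  # → ['test_checkout', 'test_refund']
```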
REGRESSION TESTING TECHNIQUES
 The various regression testing techniques are:
 Retest all: This technique re-runs all the test cases on
the current program. Though it is expensive, as it
needs to re-run all the cases, it ensures that there are
no errors because of the modified code.
 Regression test selection: Unlike retest all, this
technique runs a part of the test suite if the cost of
selecting that part is less than the cost of the
retest-all technique.
 Test case prioritization: Prioritize the test cases so
as to increase a test suite's rate of fault detection.
Test case prioritization techniques schedule test
cases so that higher-priority test cases are
executed before lower-priority ones.
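A simple form of test case prioritization orders tests by their historical fault-detection record (the test names and counts below are invented):

```python
# Number of faults each test case has detected in past runs.
fault_history = {"t_payment": 5, "t_profile": 1, "t_search": 3}

def prioritize(tests):
    """Run the tests with the best fault-detection record first."""
    return sorted(tests, key=lambda t: fault_history.get(t, 0), reverse=True)

print(prioritize(["t_profile", "t_search", "t_payment"]))
# → ['t_payment', 't_search', 't_profile']
```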
EXTREME TESTING
 Within Extreme Testing, we have a few practices:
 One watching and one doing.
 One test case writer and one executor immediately after writing
 One defect writer with a test case writer writing a regression test case
related to the defect
 One executing test cases on a module that is integrated tightly, and another
module being tested by another tester who is testing linked or integrated
modules or functions
 Extreme testing may also include:
 Stress Testing: It involves testing an application under extreme workloads
to see how it handles high traffic or data processing. The objective is to
identify the breaking point of an application.
 Load Testing: Checks the application's ability to perform under anticipated
user loads. The objective is to identify performance bottlenecks before the
software application goes live.
 Volume Testing: A large volume of data is populated in the
database and the overall software system's behaviour is monitored. The
objective is to check the software application's performance under varying
database volumes.