SOFTWARE TESTING
Mr. Jay Prakash Maurya
OUTLINES
• Introduction to Testing
• Debugging
• Purpose and goal of Testing
• Dichotomies
• Testing and Debugging
• Model for Testing
• Consequences of Bugs
• Taxonomy of Bugs
PREREQUISITE
TESTING
Testing is the process of exercising or evaluating a system or
system components by manual or automated means to verify
that it satisfies specified requirements.
Debugging
• Debugging is the process of finding and fixing errors
or bugs in the source code of any software. When
software does not work as expected, computer
programmers study the code to determine why any
errors occurred. They use debugging tools to run the
software in a controlled environment, check the code
step by step, and analyze and fix the issue.
• MYTH: Good programmers write code without bugs. (It's wrong!!!)
• History shows that even well-written programs still contain 1-3 bugs per hundred statements.
PHASES IN A TESTER'S
MENTAL LIFE:
Phase 0: (Until 1956: Debugging Oriented)
Phase 1: (1957-1978: Demonstration Oriented)
Phase 2: (1979-1982: Destruction Oriented)
Phase 3: (1983-1987: Evaluation Oriented)
Phase 4: (1988-2000: Prevention Oriented)
PURPOSE OF
TESTING
To identify and show that the program has bugs.
To show that the program/software works.
To show that the program/software doesn't work.
Goal of Testing
• Bug Prevention
(Primary Goal)
• Bug Discovery
(Secondary)
• Test Design
A bug manifests as a deviation from expected behaviour.
SOME DICHOTOMIES
TESTING V/S DEBUGGING
MODEL FOR TESTING
Environment Model: hardware and software (OS, linkage editor, loader, compiler, utility routines).
Program Model: a simplified view of the program, simple enough to make testing tractable yet detailed enough to expose unexpected behavior.
Bug Hypothesis:
 Benign Bug Hypothesis: bugs are nice, tame and logical.
 Bug Locality Hypothesis: a bug discovered within a component affects only that component's behavior.
 Control Bug Dominance: most errors are in the control structures.
 Code / Data Separation: bugs respect the separation of code and data.
 Lingua Salvator Est: language syntax and semantics eliminate most bugs.
 Corrections Abide: a corrected bug remains corrected.
 Silver Bullets: the right language, design method, representation or environment grants immunity from bugs.
 Sadism Suffices: tough bugs require methodology and techniques.
 Angelic Testers: testers are better at test design than programmers are at code design.
TEST
Tests are formal procedures: inputs must be prepared, outcomes predicted, tests documented, commands executed, and results observed. All of these activities are subject to error.
 Unit / Component Testing:
 Integration Testing:
 System Testing:
ROLE OF MODELS:
The art of testing consists of creating, selecting,
exploring, and revising models. Our ability to go through
this process depends on the number of different models
we have at hand and their ability to express a program's
behavior.
TASK-1 [CLASS ACTIVITY]
Focus on types of bugs in the software development process and how to handle these bugs.
https://web.cs.ucdavis.edu/~rubio/includes/ase17.pdf
Software bug prediction using object-oriented metrics (ias.ac.in)
CONSEQUENCE OF BUGS
Damage Depends on :
• Frequency
• Correction Cost
• Installation Cost
• Consequences
Importance ($) = Frequency x (Correction cost + Installation cost + Consequential cost)
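The dollar formula above can be sketched in a few lines of Python; the frequency and cost figures below are purely illustrative, not taken from the slides.

```python
def bug_importance(frequency, correction_cost, installation_cost, consequential_cost):
    """Importance ($) = frequency x (correction + installation + consequential cost)."""
    return frequency * (correction_cost + installation_cost + consequential_cost)

# A hypothetical bug class seen 120 times, costing $40 to correct, $10 to
# install the fix, and $25 in downstream consequences per occurrence:
print(bug_importance(120, 40, 10, 25))  # 9000
```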
Consequences of bugs:
• Mild
• Moderate
• Annoying
• Disturbing
• Serious
• Very Serious
• Extreme
• Intolerable
• Catastrophic
• Infectious
SOFTWARE TESTING METRICS
Process Metrics
Product Metrics
Project Metrics
Base Metrics
Calculated Metrics
TAXONOMY OF BUGS
There is no universally correct way to categorize bugs. The
taxonomy is not rigid.
A given bug can be put into one or another category depending
on its history and the programmer's state of mind.
The major categories are:
(1) Requirements, Features, and Functionality Bugs
(2) Structural Bugs
(3) Data Bugs
(4) Coding Bugs
(5) Interface, Integration, and System Bugs
(6) Test and Test Design Bugs.
REQUIREMENTS AND
SPECIFICATIONS BUGS:
Requirements, and the specifications developed from them, can be incomplete, ambiguous, or self-contradictory. They can be misunderstood or impossible to understand.
Even specifications without flaws may change while the design is in progress: features are added, modified and deleted.
Requirements, especially as expressed in specifications, are a major source of expensive bugs.
The range is from a few percent to more than 50%, depending on the application and environment.
What hurts most about these bugs is that they are the earliest to invade the system and the last to leave.
FEATURE BUGS:
Specification problems usually create corresponding
feature problems.
A feature can be wrong, missing, or superfluous (serving no useful purpose). A missing feature or case is easier to detect and correct; a wrong feature can have deep design implications.
Removing features might complicate the software, consume more resources, and foster more bugs.
FEATURE INTERACTION
BUGS:
Providing correct, clear, implementable and testable feature
specifications is not enough.
Features usually come in groups of related features. The features of each group, and the interactions of features within the group, are usually well tested.
The problem is unpredictable interactions between feature groups, or even between individual features. For example, your telephone may provide both call holding and call forwarding; the interaction between these two features may have bugs.
Every application has its own peculiar set of features and a much bigger set of unspecified potential feature interactions, which are therefore a source of feature interaction bugs.
CONTROL AND SEQUENCE
BUGS:
Control and sequence bugs include paths left out, unreachable code, improper nesting of loops, incorrect loop-back or loop-termination criteria, missing process steps, duplicated processing, unnecessary processing, rampaging GOTOs, ill-conceived (not properly planned) switches, spaghetti code, and, worst of all, pachinko code.
Control flow is amenable to theoretical treatment, so most control-flow bugs are easily tested for and caught in unit testing.
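As a concrete illustration (not from the slides), here is a minimal Python sketch of one bug from the list above, an incorrect loop-termination criterion, and the kind of check that catches it in unit testing:

```python
def sum_prices_buggy(prices):
    total = 0
    for i in range(len(prices) - 1):  # bug: termination criterion drops the last item
        total += prices[i]
    return total

def sum_prices(prices):
    total = 0
    for i in range(len(prices)):      # correct termination criterion
        total += prices[i]
    return total

prices = [10, 20, 30]
print(sum_prices_buggy(prices))  # 30 -- the last price is silently lost
print(sum_prices(prices))        # 60
```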
LOGIC BUGS:
Bugs in logic, especially those related to misunderstanding how case statements and logic operators behave singly and in combination.
This category also includes the evaluation of boolean expressions in deeply nested IF-THEN-ELSE constructs.
If the bugs are part of logical (i.e. boolean) processing not related to control flow, they are characterized as processing bugs.
If the bugs are part of a logical expression (i.e. a control-flow predicate) used to direct the control flow, they are categorized as control-flow bugs.
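A hypothetical Python example of such a logic bug: the author intends "active AND (admin OR verified)" but, misjudging how the operators combine, omits the parentheses.

```python
def can_access_buggy(is_admin, is_active, is_verified):
    # intended: is_active and (is_admin or is_verified)
    return is_active and is_admin or is_verified  # bug: 'and' binds tighter than 'or'

def can_access(is_admin, is_active, is_verified):
    return is_active and (is_admin or is_verified)

# An inactive but verified user slips through the buggy predicate:
print(can_access_buggy(False, False, True))  # True  (wrong)
print(can_access(False, False, True))        # False
```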
PROCESSING BUGS:
Processing bugs include arithmetic bugs, algebraic,
mathematical function evaluation, algorithm selection
and general processing.
Examples of processing bugs include: incorrect conversion from one data representation to another, ignoring overflow, improper use of greater-than-or-equal, etc.
Although these bugs are frequent (12%), they tend to be caught in good unit testing.
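Two illustrative (hypothetical) processing bugs in Python: a representation-conversion error, and silently ignored overflow, simulated here with a 16-bit counter.

```python
def cents_to_units_buggy(cents):
    return cents // 100        # bug: truncates instead of rounding

def cents_to_units(cents):
    return round(cents / 100)  # correct: round to the nearest whole unit

print(cents_to_units_buggy(199))  # 1 -- conversion error
print(cents_to_units(199))        # 2

def add_u16(a, b):
    """A 16-bit unsigned add that ignores overflow, wrapping silently."""
    return (a + b) & 0xFFFF

print(add_u16(65535, 1))  # 0 -- the overflow is lost, not reported
```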
INITIALIZATION BUGS:
Initialization bugs are common. An initialization can be improper or superfluous.
Superfluous initializations are generally less harmful, but they can affect performance.
Typical initialization bugs include: forgetting to initialize variables before first use, assuming they are initialized elsewhere, and initializing to the wrong format, representation or type.
Explicit declaration of all variables, as in Pascal, can reduce some initialization problems.
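A minimal (hypothetical) Python illustration of the first two bugs in that list: using a variable before initializing it, versus initializing it explicitly before first use.

```python
def count_errors_buggy(lines):
    for line in lines:
        if "ERROR" in line:
            count += 1  # bug: count was never initialized before first use
    return count

def count_errors(lines):
    count = 0           # explicit initialization before first use
    for line in lines:
        if "ERROR" in line:
            count += 1
    return count

print(count_errors(["ok", "ERROR x", "ERROR y"]))  # 2
try:
    count_errors_buggy(["ERROR x"])
except UnboundLocalError as exc:
    print("buggy version fails:", type(exc).__name__)
```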
DATA-FLOW BUGS AND
ANOMALIES:
Most initialization bugs are a special case of data-flow anomalies.
A data-flow anomaly occurs where there is a path along which we expect to do something unreasonable with data, such as using an uninitialized variable, attempting to use a variable before it exists, modifying data and then not storing or using the result, or initializing twice without an intermediate use.
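Each anomaly is legal code, which is why static data-flow analysis (or a linter) is needed to flag it. A hypothetical Python sketch of two of them:

```python
def shipping_cost(weight_kg):
    rate = 5.0
    rate = 7.5                   # anomaly: defined twice with no use in between
    subtotal = weight_kg * rate
    discount = subtotal * 0.1    # anomaly: computed but never used
    return subtotal              # probably meant: subtotal - discount

print(shipping_cost(2))  # 15.0 -- the discount silently never applies
```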
DATA BUGS
Data bugs include all bugs that arise from the specification of
data objects, their formats, the number of such objects, and
their initial values.
Data bugs are at least as common as bugs in code, but they are often treated as if they did not exist at all.
Code migrates to data: software is evolving towards programs in which more and more of the control and processing functions are stored in tables.
Because of this, there is an increasing awareness that bugs in code are only half the battle, and that data problems should be given equal attention.
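A small (hypothetical) illustration of control migrating into tables: a wrong entry in the table misbehaves exactly like a wrong line of code, so the data deserves the same testing attention.

```python
# The processing decision lives in the table, not in the code.
SHIPPING_FEES = {
    "standard": 4.99,
    "express": 12.99,
    "overnight": 24.99,  # a typo here (say, 2.499) would be a pure data bug
}

def shipping_fee(method):
    return SHIPPING_FEES[method]

print(shipping_fee("express"))  # 12.99
```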
CODING BUGS:
Coding errors of all kinds can create any of the other kinds of bugs.
Syntax errors are generally not important in the scheme of things if the source-language translator has adequate syntax checking.
If a program has many syntax errors, however, we should expect many logic and coding bugs as well.
Documentation bugs are also considered coding bugs; they can mislead maintenance programmers.
INTERFACE, INTEGRATION,
AND SYSTEM BUGS:
External Interface
Internal Interface
Hardware Architecture
O/S Bug
Software Architecture
Control and sequence bugs
Resource management problems
Integration bugs
System bugs
TEST AND TEST DESIGN
BUGS:
Testing: testers have no immunity to bugs. Tests can require complicated scenarios and databases; they require code, or the equivalent, to execute, and consequently they can have bugs.
Test criteria: a failed test indicates a program bug only if the specification is correct, it has been correctly interpreted and implemented, and a proper test has been designed from it.
TEST METRICS
Generating software test metrics is an important responsibility of the Software Test Lead/Manager.
Test metrics are used to:
1. Decide on the next phase of activities, such as estimating the cost and schedule of future projects.
2. Understand the kind of improvement required for the project to succeed.
3. Decide which process or technology should be modified, etc.
EXAMPLE OF TEST REPORT
How many test cases have been designed per requirement?
How many test cases are yet to be designed?
How many test cases have been executed?
How many test cases have passed/failed/been blocked?
How many test cases have not yet been executed?
How many defects have been identified, and what is the severity of those defects?
How many test cases failed due to one particular defect? etc.
Example of Software Test Metrics Calculation
S No.  Testing Metric                                         Data retrieved during test case development
1      No. of requirements                                    5
2      Average no. of test cases written per requirement      40
3      Total no. of test cases written for all requirements   200
4      Total no. of test cases executed                       164
5      No. of test cases passed                               100
6      No. of test cases failed                               60
7      No. of test cases blocked                              4
8      No. of test cases unexecuted                           36
9      Total no. of defects identified                        20
10     Defects accepted as valid by the dev team              15
11     Defects deferred for future releases                   5
12     Defects fixed                                          12
Metrics derived from these counts: Percentage of test cases executed, Test Case Effectiveness, Failed Test Cases Percentage, Blocked Test Cases Percentage, Fixed Defects Percentage, Accepted Defects Percentage, Defects Deferred Percentage.
1. Percentage test cases executed = (No. of test cases executed / Total no. of test cases written) x 100
= (164 / 200) x 100 = 82
2. Test Case Effectiveness = (No. of defects detected / No. of test cases run) x 100
= (20 / 164) x 100 = 12.2
3. Failed Test Cases Percentage = (Total no. of failed test cases / Total no. of tests executed) x 100
= (60 / 164) x 100 = 36.59
4. Blocked Test Cases Percentage = (Total no. of blocked tests / Total no. of tests executed) x 100
= (4 / 164) x 100 = 2.44
5. Fixed Defects Percentage = (Total no. of defects fixed / Total no. of defects reported) x 100
= (12 / 20) x 100 = 60
6. Accepted Defects Percentage = (Defects accepted as valid by dev team / Total defects reported) x 100
= (15 / 20) x 100 = 75
7. Defects Deferred Percentage = (Defects deferred for future releases / Total defects reported) x 100
= (5 / 20) x 100 = 25
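The seven calculations above can be reproduced directly from the raw counts in the example table:

```python
written, executed = 200, 164
passed, failed, blocked = 100, 60, 4
defects_found, accepted, deferred, fixed = 20, 15, 5, 12

def pct(part, whole):
    return round(part / whole * 100, 2)

print(pct(executed, written))        # 82.0  -- percentage test cases executed
print(pct(defects_found, executed))  # 12.2  -- test case effectiveness
print(pct(failed, executed))         # 36.59 -- failed test cases percentage
print(pct(blocked, executed))        # 2.44  -- blocked test cases percentage
print(pct(fixed, defects_found))     # 60.0  -- fixed defects percentage
print(pct(accepted, defects_found))  # 75.0  -- accepted defects percentage
print(pct(deferred, defects_found))  # 25.0  -- defects deferred percentage
```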
QUESTION
Calculate all previous parameters.
https://reqtest.com/try-reqtest/
HOW TO ESTIMATE?
Software Testing Estimation
Techniques
•Work Breakdown Structure
•3-Point Software Testing Estimation
Technique
•Wideband Delphi technique
•Function Point/Testing Point Analysis
•Use-Case Point Method
•Percentage distribution
•Ad-hoc method
WBS
Divide the whole project task into subtasks.
Allocate each task to a team member.
Effort Estimation for Tasks:
 Functional Point Method
 Three Point Estimation
FUNCTION POINT
METHOD
•Total Effort: the effort needed to completely test all the functions.
•Total Function Points: the total number of modules (function points).
•Estimate defined per Function Point: the average effort to complete one function point. This value depends on the productivity of the member who will take charge of the task.
TOTAL EFFORT AND COST.
           Weightage   # of Function Points   Total
Complex    5           3                      15
Medium     3           5                      15
Simple     1           4                      4
Function Total Points: 34
Estimate defined per point: 5
Total Estimated Effort (Person-Hours): 170
Estimate the cost for the tasks: suppose the average team salary is $5 per hour. The time required for the "Create Test Specs" task is 170 hours, so the cost of the task is 170 x 5 = $850.
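The worked example can be checked in a few lines (figures taken from the table above):

```python
weights = {"complex": 5, "medium": 3, "simple": 1}  # weight per function point
counts = {"complex": 3, "medium": 5, "simple": 4}   # number of function points

total_points = sum(weights[k] * counts[k] for k in weights)  # 15 + 15 + 4 = 34
effort_per_point = 5   # person-hours estimated per function point
hourly_rate = 5        # average team salary, $ per hour

effort_hours = total_points * effort_per_point  # 170 person-hours
cost = effort_hours * hourly_rate               # $850

print(total_points, effort_hours, cost)  # 34 170 850
```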
THREE POINT
ESTIMATION
The Test Manager provides three values for each task: an optimistic estimate (best case), a most likely estimate, and a pessimistic estimate (worst case).
The parameter E is known as the Weighted Average; it is the estimate for the task "Create the test specification".
Since E is a possible value rather than a certain one, we must also know the probability that the estimation is correct.
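The slide does not reproduce the formula itself; the standard three-point (PERT) weighted average, shown here with illustrative O/M/P values rather than figures from the slides, looks like this:

```python
def three_point_estimate(optimistic, most_likely, pessimistic):
    """PERT weighted average E and its standard deviation."""
    e = (optimistic + 4 * most_likely + pessimistic) / 6
    sd = (pessimistic - optimistic) / 6
    return e, sd

# Hypothetical person-hour estimates for "Create the test specification":
e, sd = three_point_estimate(120, 170, 250)
print(e)             # 175.0 -- the weighted average E, in person-hours
print(round(sd, 1))  # 21.7  -- spread of the estimate
```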
THANKS