Presented by Group 1
Coding, Testing,
Black-Box and
White-Box Testing
Coding, Testing, Black-box and White-box Testing.pptx
CODING
The objective of the coding phase is to transform the
design of a system into code in a high-level language and then
to unit test this code. Programmers adhere to a standard,
well-defined style of coding, which is called their coding
standard.
The main advantages of adhering to a
standard style of coding are as follows:
• A coding standard gives a uniform appearance to the code
written by different engineers.
• It facilitates understanding of the code.
• It promotes good programming practices.
Characteristics of a
Programming Language
• Readability
• Portability
• Generality
• Brevity
• Error checking
• Cost
• Familiar notation
• Quick translation
• Efficiency
• Modularity
• Widely available
Coding standards and
guidelines
The following are some representative coding standards:
1. Rules for limiting the use of global variables
2. Contents of the headers preceding the code of different
modules
• Name of the module.
• Date on which the module was created.
• Author’s name.
• Modification history.
• Synopsis of the module.
Coding standards and
guidelines
• Different functions supported, along with their
input/output parameters.
• Global variables accessed/modified by the module.
3. Naming conventions for global variables, local variables,
and constant identifiers
4. Error return conventions and exception handling
mechanisms
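As a sketch (in Python, with the module name, dates, author, and functions all hypothetical), a module header written to this standard might look like the following:

```python
"""inventory_report -- builds a daily inventory summary (hypothetical).

Created : 2024-03-01
Author  : A. Engineer
Modification history:
    2024-03-10  Fixed rounding in the total computation.
Synopsis:
    Builds a one-line summary string from a list of item records.
Functions supported:
    build_report(items) -> str
Global variables accessed/modified:
    REPORT_DIR (read only)
"""

REPORT_DIR = "/tmp/reports"  # read-only module-level global


def build_report(items):
    """Return a one-line summary for the given item records."""
    total = sum(item["qty"] for item in items)
    return f"{len(items)} items, total quantity {total}"
```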
The following are some representative coding
guidelines recommended by many software
development organizations.
1. Do not use a coding style that is too clever or too difficult to
understand
2. Avoid obscure side effects
3. Do not use an identifier for multiple purposes
• Each variable should be given a descriptive name
indicating its purpose.
• Use of variables for multiple purposes usually makes
future enhancements more difficult.
The following are some representative coding
guidelines recommended by many software
development organizations.
4. The code should be well-documented
5. The length of any function should not exceed 10 source lines
6. Do not use goto statements
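Guideline 3 can be illustrated with a small, hypothetical Python sketch that contrasts reusing one identifier for two purposes with giving each value its own descriptive name:

```python
# Discouraged: one identifier 'temp' serves two purposes.
def summarize_bad(values):
    temp = len(values)           # here temp holds a count...
    temp = sum(values) / temp    # ...and here it holds an average
    return temp

# Preferred: each variable has one purpose and a descriptive name.
def summarize_good(values):
    value_count = len(values)
    average = sum(values) / value_count
    return average

print(summarize_good([2, 4, 6]))  # prints 4.0
```

Both versions compute the same result, but the second is far easier to extend later, e.g. if the count itself becomes needed elsewhere.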
Code Review
Code review for a module is carried out after the module
has been successfully compiled and all the syntax errors have
been eliminated. Code reviews are an extremely cost-effective
strategy for reducing coding errors and producing high-quality
code. Normally, two types of reviews are carried out on the code
of a module. These two code review techniques are code
inspection and code walk through.
Code Walk Throughs
Code walk through is an informal code analysis technique.
In this technique, after a module has been coded, successfully
compiled, and all syntax errors have been eliminated, a few
members of the development team are given the code a few days
before the walk through meeting to read and understand it. Each
member selects some test cases and simulates execution of the
code by hand. The main objective of the walk through is to discover
algorithmic and logical errors in the code.
Code Walk Throughs
• The team performing the code walk through should be neither
too big nor too small. Ideally, it should consist of three to
seven members.
• Discussion should focus on discovery of errors and not on
how to fix the discovered errors.
• In order to foster cooperation and to avoid the feeling among
engineers that they are being evaluated in the code walk
through meeting, managers should not attend the walk
through meetings.
Code Inspection
In contrast to code walk through, the aim of code
inspection is to discover common types of errors caused by
oversight and improper programming. In other words,
during code inspection the code is examined for the presence of
certain kinds of errors, in contrast to the hand simulation of code
execution done in code walk throughs. Adherence to coding
standards is also checked during code inspection.
Code Inspection
Classical programming errors:
• Use of uninitialized variables.
• Jumps into loops.
• Nonterminating loops.
• Incompatible assignments.
• Array indices out of bounds.
• Improper storage allocation and deallocation.
• Mismatches between actual and formal parameters in
procedure calls.
Code Inspection
Classical programming errors:
• Use of incorrect logical operators or incorrect precedence
among operators.
• Improper modification of loop variables.
• Comparison of floating point variables for equality, etc.
Clean Room Testing
Clean room testing was pioneered by IBM. This type of
testing relies heavily on walk throughs, inspection, and formal
verification. The programmers are not allowed to test any of their
code by executing it, other than doing some syntax checking using
a compiler. The software development philosophy is based on
avoiding software defects by using a rigorous inspection process.
The objective of this philosophy is zero-defect software.
The name ‘clean room’ was derived from the analogy with semi-
conductor fabrication units.
Clean Room Testing
This technique reportedly produces documentation and
code that is more reliable and maintainable than other
development methods relying heavily on code execution-based
testing.
Clean Room Testing
The clean room approach to software development is based on
five characteristics:
• Formal specification
• Incremental development
• Structured programming
• Static verification
• Statistical testing of the system
Software
Documentation
When a software product is developed, not only the
executable files and the source code but also various kinds of
documents, such as the users’ manual, the software requirements
specification (SRS) document, design documents, test documents,
the installation manual, etc., are developed as part of the software
engineering process. All these documents are a vital part of good
software development practice.
Software
Documentation
Good documents are very useful and serve the following purposes:
• Good documents enhance the understandability and
maintainability of a software product.
• User documents help the users in using the system effectively.
• Good documents help in effectively handling the manpower
turnover problem.
• Production of good documents helps the manager in
effectively tracking the progress of the project.
Software
Documentation
Different types of software documents can broadly be
classified into the following:
• Internal documentation
• External documentation
Testing
Program Testing
Testing a program consists of providing the program with a
set of test inputs (or test cases) and observing if the program
behaves as expected. If the program fails to behave as expected,
then the conditions under which failure occurs are noted for later
debugging and correction.
Testing
Some commonly used terms associated with testing are:
 Failure: a manifestation of an error (or defect or bug) in the
behavior of the program.
 Test case: this is the triplet [I, S, O], where I is the data input
to the system, S is the state of the system at which the data
is input, and O is the expected output of the system.
 Test suite: the set of all test cases with which a given software
product is to be tested.
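As a sketch, one such [I, S, O] triplet for a hypothetical login system could be recorded as follows (all field values are illustrative):

```python
# A test case represented as the triplet [I, S, O]
# (system and values are hypothetical).
test_case = {
    "I": {"username": "alice", "password": "secret"},  # input data
    "S": "login_screen_displayed",   # system state when input is given
    "O": "dashboard_displayed",      # expected output
}

# A test suite is simply a set of such test cases.
test_suite = [test_case]
print(len(test_suite))  # prints 1
```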
Aim of Testing
The aim of the testing process is to identify all defects
existing in a software product. However, for most practical systems,
even after satisfactorily carrying out the testing phase, it is not
possible to guarantee that the software is error free. Even with this
practical limitation of the testing process, the importance of testing
should not be underestimated. It must be remembered that testing
does expose many defects existing in a software product. Thus,
testing provides a practical way of reducing defects in a system and
increasing the users’ confidence in a developed system.
Verification Vs Validation
Verification is the process of determining whether the output of one
phase of software development conforms to that of its previous phase, whereas
validation is the process of determining whether a fully developed system
conforms to its requirements specification. Thus, while verification is concerned
with phase containment of errors, the aim of validation is that the final product
be error free.
Design of Test Cases
Exhaustive testing of almost any non-trivial system is impractical due to
the fact that the domain of input data values to most practical software systems
is either extremely large or infinite. Therefore, we must design an optimal test
suite that is of reasonable size and can uncover as many errors existing in the
system as possible. Actually, if test cases are selected randomly, many of these
randomly selected test cases do not contribute to the significance of the test
suite, i.e. they do not detect any additional defects not already being detected
by other test cases in the suite. Thus, the number of random test cases in a test
suite is, in general, not an indication of the effectiveness of the testing.
Functional Testing Vs.
Structural Testing
In the black-box testing approach, test cases are designed using only the
functional specification of the software, i.e. without any knowledge of the
internal structure of the software. For this reason, black-box testing is known as
functional testing. On the other hand, in the white-box testing approach,
designing test cases requires thorough knowledge of the internal structure
of the software, and therefore white-box testing is also called structural testing.
Black-Box
Testing
Testing in the large vs.
testing in the small
Software products are normally tested first at the individual
component (or unit) level. This is referred to as testing in the small. After
testing all the components individually, the components are slowly integrated
and tested at each level of integration (integration testing). Finally, the fully
integrated system is tested (called system testing). Integration and system
testing are known as testing in the large.
Unit Testing
Unit testing is undertaken after a module has been coded and
successfully reviewed. Unit testing (or module testing) is the testing of
different units (or modules) of a system in isolation. In order to test a single
module, a complete environment is needed to provide all that is necessary
for execution of the module. That is, besides the module under test itself, the
following are needed in order to be able to test the module:
• The procedures belonging to other modules that the module under
test calls.
• Nonlocal data structures that the module accesses.
• A procedure to call the functions of the module under test with
appropriate parameters.
Unit Testing
The modules required to provide the necessary environment
(which either call or are called by the module under test) are
usually not available until they too have been unit tested; for this
reason, stubs and drivers are designed to provide the complete
environment for a module. A stub procedure is a dummy procedure
that has the same I/O parameters as the given procedure but has a
highly simplified behavior. A driver module contains the nonlocal
data structures accessed by the module under test, and would also
have the code to call the different functions of the module with
appropriate parameter values.
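A minimal Python sketch of the idea, assuming a hypothetical module under test `compute_discount` whose pricing procedure is not yet available and is therefore replaced by a stub:

```python
# Stub: same I/O parameters as the real (unavailable) procedure,
# but highly simplified behavior -- it returns a fixed price.
def get_base_price_stub(item_id):
    return 100.0

# Hypothetical module under test: calls the stubbed procedure.
def compute_discount(item_id, rate, get_base_price=get_base_price_stub):
    return get_base_price(item_id) * rate

# Driver: sets up parameter values and calls the module under test.
def driver():
    results = []
    for rate in (0.0, 0.25, 0.5):
        results.append(compute_discount("SKU-1", rate))
    return results

print(driver())  # prints [0.0, 25.0, 50.0] with the stub in place
```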
Unit Testing
Unit testing with the help of driver and stub modules
Black-Box
Testing
In black-box testing, test cases are designed from an
examination of the input/output values only, and no knowledge of
design or code is required. The following are the two main
approaches to designing black-box test cases:
• Equivalence class partitioning
• Boundary value analysis
Equivalence Class
Partitioning
In this approach, the domain of input values to a program is
partitioned into a set of equivalence classes. This partitioning is
done such that the behavior of the program is similar for every
input data belonging to the same equivalence class. The main idea
behind defining the equivalence classes is that testing the code with
any one value belonging to an equivalence class is as good as
testing the software with any other value belonging to that
equivalence class. Equivalence classes for a software can be
designed by examining the input data and output data.
Equivalence Class
Partitioning
The following are some general guidelines for designing the
equivalence classes:
1. If the input data values to a system can be specified by a
range of values, then one valid and two invalid equivalence
classes should be defined.
2. If the input data assumes values from a set of discrete
members of some domain, then one equivalence class for
valid input values and another equivalence class for invalid
input values should be defined.
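For example, suppose a hypothetical program accepts an integer in the range 1 to 5000. Guideline 1 yields one valid equivalence class and two invalid ones, and one representative value per class is enough:

```python
def classify(n):
    # Hypothetical program under test: accepts integers 1..5000.
    if n < 1 or n > 5000:
        return "invalid"
    return "valid"

# One representative value from each equivalence class.
equivalence_classes = {
    "invalid (below range)": -5,    # class: n < 1
    "valid (in range)": 2500,       # class: 1 <= n <= 5000
    "invalid (above range)": 9999,  # class: n > 5000
}
for name, value in equivalence_classes.items():
    print(name, "->", classify(value))
```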
Boundary Value
Analysis
Programming errors frequently occur at the
boundaries of different equivalence classes of inputs. The reason
behind such errors might be purely psychological.
Programmers often fail to see the special processing required by the
input values that lie at the boundary of the different equivalence
classes. Boundary value analysis leads to selection of test cases at
the boundaries of the different equivalence classes.
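For a hypothetical program accepting integers from 1 to 5000, boundary value analysis would pick test values at and around the class boundaries rather than from class interiors:

```python
def classify(n):
    # Hypothetical program under test: accepts integers 1..5000.
    return "valid" if 1 <= n <= 5000 else "invalid"

# Values at and just beyond each boundary of the valid class.
boundary_cases = [0, 1, 2, 4999, 5000, 5001]
for n in boundary_cases:
    print(n, "->", classify(n))
```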
White-Box
Testing
One white-box testing strategy is said to be stronger than
another if it detects all the types of errors detected by the other
strategy and additionally detects some more types of errors. When
two testing strategies each detect types of errors that the other
does not, they are called complementary.
White-Box
Testing
Stronger and complementary testing strategies
Statement
Coverage
The statement coverage strategy aims to design test cases so
that every statement in a program is executed at least once. The
principal idea governing the statement coverage strategy is that
unless a statement is executed, it is very hard to determine if an
error exists in that statement. Unless a statement is executed, it is
very difficult to observe whether it causes failure due to some
illegal memory access, wrong result computation, etc. However,
executing some statement once and observing that it behaves
properly for that input value is no guarantee that it will behave
correctly for all input values.
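A small hypothetical sketch: the two test cases below execute every statement of `max_of` at least once, yet, as noted above, this is no guarantee of correct behavior for all inputs:

```python
def max_of(a, b):
    # Hypothetical function under test.
    if a > b:
        result = a
    else:
        result = b
    return result

# These two cases together execute every statement at least once:
# (4, 3) runs the if-part, (3, 4) runs the else-part.
statement_tests = [(4, 3), (3, 4)]
for a, b in statement_tests:
    print(max_of(a, b))  # prints 4 twice
```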
Branch
Coverage
In the branch coverage-based testing strategy, test cases are
designed to make each branch condition assume true and false
values in turn. Branch testing is also known as edge testing as in this
testing scheme, each edge of a program’s control flow graph is
traversed at least once. It is obvious that branch testing guarantees
statement coverage and thus is a stronger testing strategy
compared to the statement coverage-based testing.
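A hypothetical sketch of branch coverage: the test values are chosen so that the single branch condition evaluates to both true and false:

```python
def absolute(x):
    # Hypothetical function under test with one branch condition.
    if x < 0:
        x = -x
    return x

# Branch coverage: the condition x < 0 must take both outcomes.
branch_tests = [-7, 7]  # True for -7, False for 7
for x in branch_tests:
    print(absolute(x))  # prints 7 twice
```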
Condition
Coverage
In condition coverage-based testing, test cases are designed to
make each component of a composite conditional expression
assume both true and false values. Condition testing is a stronger
testing strategy than branch testing, and branch testing is a
stronger testing strategy than statement coverage-based testing. For a composite
conditional expression of n components, for condition coverage, 2ⁿ
test cases are required. Thus, for condition coverage, the number of
test cases increases exponentially with the number of component
conditions. Therefore, a condition coverage-based testing technique is
practical only if n (the number of conditions) is small.
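For a hypothetical composite condition with n = 2 components, condition coverage requires 2² = 4 test cases, one per combination of truth values:

```python
import itertools

def can_vote(age_ok, registered):
    # Hypothetical composite condition with n = 2 components.
    return age_ok and registered

# Condition coverage: 2^n = 4 combinations of component truth values.
combinations = list(itertools.product([True, False], repeat=2))
print(len(combinations))  # prints 4
for age_ok, registered in combinations:
    print(age_ok, registered, can_vote(age_ok, registered))
```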
Path Coverage
The path coverage-based testing strategy requires us to
design test cases such that all linearly independent paths in the
program are executed at least once. A linearly independent path
can be defined in terms of the control flow graph (CFG) of a
program.
Control Flow
Graph (CFG)
A control flow graph describes the sequence in which the
different instructions of a program get executed. In other words, a
control flow graph describes how the control flows through the
program. An edge from one node to another node exists if the execution
of the statement representing the first node can result in the transfer of
control to the other node.
Control Flow
Graph (CFG)
CFG for sequence, selection and iteration type of constructs,
respectively
Control Flow
Graph (CFG)
The CFG for any program can be easily drawn by knowing how
to represent the sequence, selection, and iteration type of
statements in the CFG. After all, a program is made up from these
types of statements. It is important to note that for the iteration type
of constructs such as the while construct, the loop condition is tested
only at the beginning of the loop and therefore the control flow from
the last statement of the loop is always to the top of the loop.
Control Flow
Graph (CFG)
EUCLID’S GCD Computation Algorithm
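A common subtraction form of Euclid's GCD algorithm, often used as the running example for drawing a CFG, can be sketched in Python as:

```python
def gcd(x, y):
    # Subtraction form of Euclid's algorithm: a while loop
    # (iteration) containing an if-else (selection) -- the two
    # decision nodes of the usual CFG example.
    while x != y:
        if x > y:
            x = x - y
        else:
            y = y - x
    return x

print(gcd(48, 18))  # prints 6
```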
Control Flow
Graph (CFG)
Control flow diagram
Path
A path through a program is a node and edge sequence from
the starting node to a terminal node of the control flow graph of a
program. There can be more than one terminal node in a program.
Writing test cases to cover all the paths of a typical program is
impractical. For this reason, the path-coverage testing does not
require coverage of all paths but only coverage of linearly
independent paths.
Linearly
independent path
A linearly independent path is any path through the program
that introduces at least one new edge that is not included in any
other linearly independent paths. If a path has one new node
compared to all other linearly independent paths, then the path is
also linearly independent. This is because any path having a new
node automatically implies that it has a new edge. Thus, a path that
is a sub-path of another path is not considered to be a linearly
independent path.
Cyclomatic
Complexity
For more complicated programs it is not easy to determine the
number of independent paths of the program. McCabe’s cyclomatic
complexity defines an upper bound for the number of linearly
independent paths through a program. Also, McCabe’s cyclomatic
complexity is very simple to compute. Thus, the McCabe cyclomatic
complexity metric provides a practical way of determining the
maximum number of linearly independent paths in a program.
Though McCabe’s metric does not directly identify the linearly
independent paths, it indicates approximately how many paths to
look for.
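As a sketch: for a connected CFG, V(G) = E − N + 2, where E is the number of edges and N the number of nodes; for structured programs whose decisions are all binary, this works out to the number of decision points plus one. The subtraction form of Euclid's GCD, with its two decision points (the while test and the if test), therefore has V(G) = 3:

```python
def cyclomatic_complexity(num_edges, num_nodes):
    # McCabe's metric for a connected CFG: V(G) = E - N + 2.
    return num_edges - num_nodes + 2

def complexity_from_decisions(num_decision_points):
    # Equivalent shortcut for structured programs whose decisions
    # are all binary: V(G) = decision points + 1.
    return num_decision_points + 1

# Euclid's GCD (subtraction form) has two decision points, so at
# most three linearly independent paths need to be covered.
print(complexity_from_decisions(2))  # prints 3
```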
Data Flow-Based
Testing
Data flow-based testing method selects test paths of a
program according to the locations of the definitions and uses of
different variables in a program.
Mutation Testing
In mutation testing, the software is first tested by using an
initial test suite built up from the different white box testing
strategies. After the initial testing is complete, mutation testing is
taken up. The idea behind mutation testing is to make a few arbitrary
changes to a program at a time. Each time the program is changed, it
is called a mutated program and the change effected is called a
mutant. A mutated program is tested against the full test suite of the
program.
Mutation Testing
If there exists at least one test case in the test suite for which a
mutant gives an incorrect result, then the mutant is said to be dead.
If a mutant remains alive even after all the test cases have been
exhausted, the test data is enhanced to kill the mutant. The process
of generation and killing of mutants can be automated by
predefining a set of primitive changes that can be applied to the
program. These primitive changes can be alterations such as
changing an arithmetic operator, changing the value of a constant,
changing a data type, etc.
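A tiny Python sketch of the process: mutate one arithmetic operator in a hypothetical function and check whether the existing test suite kills the resulting mutant:

```python
def original(a, b):
    # Hypothetical program under test.
    return a + b

def mutant(a, b):
    return a - b  # primitive change: '+' mutated to '-'

# Test suite: ((inputs), expected output) pairs.
test_suite = [((2, 3), 5), ((0, 0), 0), ((4, 1), 5)]

def is_killed(program):
    # A mutant is "dead" if at least one test case exposes it.
    return any(program(a, b) != expected
               for (a, b), expected in test_suite)

print(is_killed(mutant))    # prints True: e.g. mutant(2, 3) == -1, not 5
print(is_killed(original))  # prints False
```

Note that the (0, 0) case alone would not kill this mutant, since 0 − 0 equals 0 + 0; this is exactly why a live mutant prompts enhancement of the test data.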
Mutation Testing
A major disadvantage of the mutation-based testing approach
is that it is computationally very expensive, since a large number of
possible mutants can be generated.
Since mutation testing generates a large number of mutants
and requires us to check each mutant with the full test suite, it is not
suitable for manual testing. Mutation testing should be used in
conjunction with a testing tool that can run all the test cases
automatically.
Members
1.Aprilen Fernando
2.Audrey Camacho
3.Ma. Chamear Troyo
4.Amier Lesigues
5.Marvin Matocenio
6.Stephen Morado
7.Niño Joshua Maurillo
8.Tristan Bautista
More Related Content

PPTX
Coding and testing in Software Engineering
PPTX
Software Testing_A_mmmmmmmmmmmmmmmmmmmmm
PPT
Testing fundamentals
PPSX
Introduction to software testing
PDF
testing.pdf
DOC
Testing
PDF
Validation & verification software engineering
PPTX
software testing types jxnvlbnLCBNFVjnl/fknblb
Coding and testing in Software Engineering
Software Testing_A_mmmmmmmmmmmmmmmmmmmmm
Testing fundamentals
Introduction to software testing
testing.pdf
Testing
Validation & verification software engineering
software testing types jxnvlbnLCBNFVjnl/fknblb

Similar to Coding, Testing, Black-box and White-box Testing.pptx (20)

PPT
Software coding & testing, software engineering
PPT
Software testing & its technology
PPTX
Types of testing
PPT
Software Quality and Testing_Se lect18 btech
PPTX
Unit_5 and Unit 6.pptx
PPTX
Software Quality Assurance
PPTX
Testing Plan
PPT
Types of Software Testing
PPTX
Structured system analysis and design
PDF
What is software testing in software engineering?
PDF
What is Testing in Software Engineering?
PPTX
Software testing
PDF
Software Engineering TESTING AND MAINTENANCE
PPT
Software coding and testing
PDF
L software testing
PPT
Software Engineering Lec 10 -software testing--
PPTX
object oriented system analysis and design
PPTX
Software testing introduction
DOCX
Chapter 10 Testing and Quality Assurance1Unders.docx
Software coding & testing, software engineering
Software testing & its technology
Types of testing
Software Quality and Testing_Se lect18 btech
Unit_5 and Unit 6.pptx
Software Quality Assurance
Testing Plan
Types of Software Testing
Structured system analysis and design
What is software testing in software engineering?
What is Testing in Software Engineering?
Software testing
Software Engineering TESTING AND MAINTENANCE
Software coding and testing
L software testing
Software Engineering Lec 10 -software testing--
object oriented system analysis and design
Software testing introduction
Chapter 10 Testing and Quality Assurance1Unders.docx
Ad

Recently uploaded (20)

PDF
Audit Checklist Design Aligning with ISO, IATF, and Industry Standards — Omne...
PDF
Design an Analysis of Algorithms II-SECS-1021-03
PDF
Raksha Bandhan Grocery Pricing Trends in India 2025.pdf
PPTX
CHAPTER 2 - PM Management and IT Context
PPTX
Transform Your Business with a Software ERP System
PDF
Navsoft: AI-Powered Business Solutions & Custom Software Development
PPTX
Oracle E-Business Suite: A Comprehensive Guide for Modern Enterprises
PDF
Why TechBuilder is the Future of Pickup and Delivery App Development (1).pdf
PPTX
history of c programming in notes for students .pptx
PPTX
L1 - Introduction to python Backend.pptx
PPTX
ManageIQ - Sprint 268 Review - Slide Deck
PDF
Understanding Forklifts - TECH EHS Solution
PDF
Claude Code: Everyone is a 10x Developer - A Comprehensive AI-Powered CLI Tool
PDF
System and Network Administration Chapter 2
PPTX
Introduction to Artificial Intelligence
PDF
How to Choose the Right IT Partner for Your Business in Malaysia
PDF
Addressing The Cult of Project Management Tools-Why Disconnected Work is Hold...
PPTX
ISO 45001 Occupational Health and Safety Management System
PDF
How Creative Agencies Leverage Project Management Software.pdf
PPTX
Lecture 3: Operating Systems Introduction to Computer Hardware Systems
Audit Checklist Design Aligning with ISO, IATF, and Industry Standards — Omne...
Design an Analysis of Algorithms II-SECS-1021-03
Raksha Bandhan Grocery Pricing Trends in India 2025.pdf
CHAPTER 2 - PM Management and IT Context
Transform Your Business with a Software ERP System
Navsoft: AI-Powered Business Solutions & Custom Software Development
Oracle E-Business Suite: A Comprehensive Guide for Modern Enterprises
Why TechBuilder is the Future of Pickup and Delivery App Development (1).pdf
history of c programming in notes for students .pptx
L1 - Introduction to python Backend.pptx
ManageIQ - Sprint 268 Review - Slide Deck
Understanding Forklifts - TECH EHS Solution
Claude Code: Everyone is a 10x Developer - A Comprehensive AI-Powered CLI Tool
System and Network Administration Chapter 2
Introduction to Artificial Intelligence
How to Choose the Right IT Partner for Your Business in Malaysia
Addressing The Cult of Project Management Tools-Why Disconnected Work is Hold...
ISO 45001 Occupational Health and Safety Management System
How Creative Agencies Leverage Project Management Software.pdf
Lecture 3: Operating Systems Introduction to Computer Hardware Systems
Ad

Coding, Testing, Black-box and White-box Testing.pptx

  • 1. Presented by Group 1 Coding, Testing, Black-Box and White Box Testing
  • 3. CODING The objective of the coding phase is to transform the design of a system into code in a high-level language and then to unit test this code. The programmers adhere to standard and well-defined style of coding which they call their coding standard.
  • 4. • A coding standard gives uniform appearances to the code written by different engineers • It facilitates code of understanding. • Promotes good programming practices. The main advantages of adhering to a standard style of coding are as follows:
  • 5. • Readability • Portability • Generality • Brevity • Error checking • Cost • Familiar notation • Quick translation • Efficiency • Modularity • Widely available Characteristics of a Programming Language
  • 6. Coding standards and guidelines The following are some representative coding standards: 1. Rules for limiting the use of global 2. Contents of the headers preceding codes for different modules • Name of the module. • Date on which the module was created. • Author’s name. • Modification history. • Synopsis of the module.
  • 7. Coding standards and guidelines • Different functions supported, along with their input/output parameters. • Global variables accessed/modified by the module. 3. Naming conventions for global variables, local variables, and constant identifiers 4. Error return conventions and exception handling mechanisms
  • 8. The following are some representative coding guidelines recommended by many software development organizations. 1. Do not use a coding style that is too clever or too difficult to understand 2. Avoid obscure side effects 3. Do not use an identifier for multiple purposes • Each variable should be given a descriptive name indicating its purpose. • Use of variables for multiple purposes usually makes future enhancements more difficult.
  • 9. The following are some representative coding guidelines recommended by many software development organizations. 4. The code should be well-documented 5. The length of any function should not exceed 10 source lines 6. Do not use goto statements
  • 11. Code Review Code review for a model is carried out after the module is successfully compiled and the all the syntax errors have been eliminated. Code reviews are extremely cost-effective strategies for reduction in coding errors and to produce high quality code. Normally, two types of reviews are carried out on the code of a module. These two types code review techniques are code inspection and code walk through.
  • 12. Code Walk Throughs Code walk through is an informal code analysis technique. In this technique, after a module has been coded, successfully compiled and all syntax errors eliminated. A few members of the development team are given the code few days before the walk through meeting to read and understand code. Each member selects some test cases and simulates execution of the code by hand. The main objectives of the walk through are to discover the algorithmic and logical errors in the code.
  • 13. Code Walk Throughs • The team performing code walk through should not be either too big or too small. Ideally, it should consist of between three to seven members. • Discussion should focus on discovery of errors and not on how to fix the discovered errors. • In order to foster cooperation and to avoid the feeling among engineers that they are being evaluated in the code walk through meeting, managers should not attend the walk through meetings.
  • 14. Code Inspection In contrast to code walk through, the aim of code inspection is to discover some common types of errors caused due to oversight and improper programming. In other words, during code inspection the code is examined for the presence of certain kinds of errors, in contrast to the hand simulation of code execution done in code walk throughs. Adherence to coding standards is also checked during code inspection.
  • 15. Code Inspection Classical programming errors: • Use of uninitialized variables. • Jumps into loops. • Nonterminating loops. • Incompatible assignments. • Array indices out of bounds. • Improper storage allocation and deallocation. • Mismatches between actual and formal parameter in procedure calls.
  • 16. Code Inspection Classical programming errors: • Use of incorrect logical operators or incorrect precedence among operators. • Improper modification of loop variables. • Comparison of equally of floating point variables, etc.
  • 18. Clean Room Testing Clean room testing was pioneered by IBM. This type of testing relies heavily on walk throughs, inspection, and formal verification. The programmers are not allowed to test any of their code by executing the code other than doing some syntax testing using a compiler. The software development philosophy is based on avoiding software defects by using a rigorous inspection process. The objective of this software is zero-defect software. The name ‘clean room’ was derived from the analogy with semi- conductor fabrication units.
  • 19. Clean Room Testing This technique reportedly produces documentation and code that is more reliable and maintainable than other development methods relying heavily on code execution-based testing.
  • 20. Clean Room Testing The clean room approach to software development is based on five characteristics: • Formal specification • Incremental development • Structured programming • Static verification • Statistical testing of the system
  • 22. Software Documentation When various kinds of software products are developed then not only the executable files and thesource code are developed but also various kinds of documents such as users’ manual, software requirements specification (SRS) documents, design documents, test documents, installation manual, etc are also developed as part of any software engineering process. All these documents are a vital part of good software development practice.
  • 23. Software Documentation Good documents are very useful and server the following purposes: • Good documents enhance understandability and maintainability of a software product. • Use documents help the users in effectively using the system. • Good documents help in effectively handling the manpower turnover problem. • Production of good documents helps the manager in effectively tracking the progress of the project.
  • 24. Software Documentation Different types of software documents can broadly be classified into the following: • Internal documentation • External documentation
  • 27. Program Testing Testing a program consists of providing the program with a set of test inputs (or test cases) and observing if the program behaves as expected. If the program fails to behave as expected, then the conditions under which failure occurs are noted for later debugging and correction. Testing
  • 28. Some commonly used terms associated with testing are:  Failure: a manifestation of an error (or defect or bug); the mere presence of an error may not necessarily lead to a failure.  Test case: the triplet [I,S,O], where I is the data input to the system, S is the state of the system at which the data is input, and O is the expected output of the system.  Test suite: the set of all test cases with which a given software product is to be tested. Testing
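The [I,S,O] triplet maps naturally onto a small data structure. The sketch below is illustrative only; the field names and sample values are invented for this example, not taken from the slides:

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class TestCase:
    """A test case as the triplet [I, S, O]."""
    input_data: Any       # I: the data input to the system
    state: Any            # S: the state of the system when the input is applied
    expected_output: Any  # O: the expected output of the system

# A test suite is simply the set of test cases used to test the product.
suite = [
    TestCase(input_data=5, state="logged_in", expected_output="ok"),
    TestCase(input_data=-1, state="logged_in", expected_output="error"),
]
```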
  • 29. The aim of the testing process is to identify all defects existing in a software product. However, for most practical systems, even after satisfactorily carrying out the testing phase, it is not possible to guarantee that the software is error free. Even with this practical limitation of the testing process, the importance of testing should not be underestimated. It must be remembered that testing does expose many defects existing in a software product. Thus, testing provides a practical way of reducing defects in a system and increasing the users’ confidence in a developed system. Aim of Testing
  • 32. Verification Vs Validation Verification is the process of determining whether the output of one phase of software development conforms to that of its previous phase, whereas validation is the process of determining whether a fully developed system conforms to its requirements specification. Thus, while verification is concerned with phase containment of errors, the aim of validation is that the final product be error free.
  • 33. Design of Test Cases Exhaustive testing of almost any non-trivial system is impractical because the domain of input data values to most practical software systems is either extremely large or infinite. Therefore, we must design an optimal test suite that is of reasonable size and can uncover as many errors existing in the system as possible. Actually, if test cases are selected randomly, many of these randomly selected test cases do not contribute to the significance of the test suite, i.e. they do not detect any additional defects not already detected by other test cases in the suite. Thus, the number of random test cases in a test suite is, in general, not an indication of the effectiveness of the testing.
  • 34. Functional Testing Vs. Structural Testing In the black-box testing approach, test cases are designed using only the functional specification of the software, i.e. without any knowledge of the internal structure of the software. For this reason, black-box testing is known as functional testing. On the other hand, in the white-box testing approach, designing test cases requires thorough knowledge about the internal structure of software, and therefore the white-box testing is called structural testing.
  • 38. Testing in the large vs. testing in the small Software products are normally tested first at the individual component (or unit) level. This is referred to as testing in the small. After testing all the components individually, the components are slowly integrated and tested at each level of integration (integration testing). Finally, the fully integrated system is tested (called system testing). Integration and system testing are known as testing in the large.
  • 39. Unit Testing Unit testing is undertaken after a module has been coded and successfully reviewed. Unit testing (or module testing) is the testing of different units (or modules) of a system in isolation. In order to test a single module, a complete environment is needed to provide all that is necessary for execution of the module. That is, besides the module under test itself, the following are needed in order to be able to test the module: • The procedures belonging to other modules that the module under test calls. • Nonlocal data structures that the module accesses. • A procedure to call the functions of the module under test with appropriate parameters.
  • 40. Unit Testing The modules required to provide the necessary environment (which either call or are called by the module under test) are usually not available until they too have been unit tested; therefore, stubs and drivers are designed to provide the complete environment for a module. A stub procedure is a dummy procedure that has the same I/O parameters as the given procedure but has a highly simplified behavior. A driver module contains the nonlocal data structures accessed by the module under test, and also has the code to call the different functions of the module with appropriate parameter values.
  • 41. Unit Testing Unit testing with the help of driver and stub modules
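The driver-and-stub arrangement can be sketched roughly as follows. The module, stub, and driver names (`discounted_price`, `get_price_stub`, `driver`) and the fixed price returned by the stub are all illustrative assumptions, not from the slides:

```python
# Module under test: applies a discount to the price returned by a
# pricing module that has not itself been unit tested yet.
def discounted_price(item_id, rate, get_price):
    return get_price(item_id) * (1 - rate)

# Stub: same I/O parameters as the real get_price procedure,
# but highly simplified behavior (no database lookup, fixed value).
def get_price_stub(item_id):
    return 100.0

# Driver: holds the test data and calls the module under test
# with appropriate parameter values.
def driver():
    results = []
    for rate in (0.0, 0.1, 0.5):
        results.append(discounted_price("A42", rate, get_price_stub))
    return results

print(driver())
```

With the stub in place, the module under test can be exercised in isolation even though the real pricing module does not yet exist.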
  • 42. Black Box Testing In black-box testing, test cases are designed from an examination of the input/output values only; no knowledge of design or code is required. The following are the two main approaches to designing black box test cases. • Equivalence class partitioning • Boundary value analysis
  • 43. Equivalence Class Partitioning In this approach, the domain of input values to a program is partitioned into a set of equivalence classes. This partitioning is done such that the behavior of the program is similar for every input data belonging to the same equivalence class. The main idea behind defining the equivalence classes is that testing the code with any one value belonging to an equivalence class is as good as testing the software with any other value belonging to that equivalence class. Equivalence classes for a software product can be designed by examining the input data and output data.
  • 44. Equivalence Class Partitioning The following are some general guidelines for designing the equivalence classes: 1. If the input data values to a system can be specified by a range of values, then one valid and two invalid equivalence classes should be defined. 2. If the input data assumes values from a set of discrete members of some domain, then one equivalence class for valid input values and another equivalence class for invalid input values should be defined.
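As a minimal sketch of guideline 1, assume a hypothetical program that accepts an integer in the range 1 to 5000 (the range and the `classify` function are invented for this illustration). The range gives one valid and two invalid equivalence classes:

```python
# Hypothetical specification: the program accepts an integer in 1..5000.
# Guideline 1 yields three equivalence classes:
#   invalid-low:  value < 1
#   valid:        1 <= value <= 5000
#   invalid-high: value > 5000
def classify(value):
    if value < 1:
        return "invalid-low"
    if value > 5000:
        return "invalid-high"
    return "valid"

# Testing with any one representative of a class is as good as testing
# with any other member of that same class.
representatives = {-5: "invalid-low", 2500: "valid", 9000: "invalid-high"}
for value, expected in representatives.items():
    assert classify(value) == expected
```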
  • 45. Boundary Value Analysis A type of programming error frequently occurs at the boundaries of different equivalence classes of inputs. The reason behind such errors might purely be due to psychological factors. Programmers often fail to see the special processing required by the input values that lie at the boundary of the different equivalence classes. Boundary value analysis leads to selection of test cases at the boundaries of the different equivalence classes.
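Continuing the hypothetical 1-to-5000 range from the equivalence partitioning example, boundary value analysis would pick test values at and immediately around each class boundary (the helper function below is an illustrative sketch):

```python
# Select test values at and around the boundaries of a valid range.
def boundary_values(low, high):
    return [low - 1, low, low + 1, high - 1, high, high + 1]

print(boundary_values(1, 5000))
# selects 0, 1, 2, 4999, 5000, 5001
```

These are exactly the inputs where off-by-one comparisons (e.g. `<` written instead of `<=`) are most likely to be exposed.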
  • 47. One white-box testing strategy is said to be stronger than another if all the types of errors detected by the weaker strategy are also detected by the stronger strategy, and the stronger strategy additionally detects some more types of errors. When each of two testing strategies detects some types of errors that the other does not, the two strategies are called complementary. White-Box Testing
  • 48. Stronger and complementary testing strategies White-Box Testing
  • 49. The statement coverage strategy aims to design test cases so that every statement in a program is executed at least once. The principal idea governing the statement coverage strategy is that unless a statement is executed, it is very hard to determine if an error exists in that statement. Unless a statement is executed, it is very difficult to observe whether it causes failure due to some illegal memory access, wrong result computation, etc. However, executing some statement once and observing that it behaves properly for that input value is no guarantee that it will behave correctly for all input values. Statement Coverage
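A small illustrative sketch (the function and test values are invented for this example): two test cases suffice to execute every statement of the function below, yet passing them is no guarantee of correct behavior for all inputs:

```python
def maximum(x, y):
    if x > y:
        result = x   # executed only when the condition is true
    else:
        result = y   # executed only when the condition is false
    return result

# Together these two test cases execute every statement at least once.
assert maximum(4, 3) == 4   # exercises the if-branch
assert maximum(3, 4) == 4   # exercises the else-branch
```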
  • 50. In the branch coverage-based testing strategy, test cases are designed to make each branch condition to assume true and false values in turn. Branch testing is also known as edge testing as in this testing scheme, each edge of a program’s control flow graph is traversed at least once. It is obvious that branch testing guarantees statement coverage and thus is a stronger testing strategy compared to the statement coverage-based testing. Branch Coverage
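A minimal sketch of branch coverage, using an invented one-condition function: the branch condition must be made to evaluate both true and false, so at least two test cases are needed:

```python
def absolute(n):
    if n < 0:   # branch condition: must assume both True and False
        n = -n
    return n

assert absolute(-7) == 7   # condition True: the branch body runs
assert absolute(7) == 7    # condition False: the branch body is skipped
```

Note that a single test case such as `absolute(7)` would already achieve statement coverage of the `return`, but not of the branch body; this is the sense in which branch coverage subsumes statement coverage.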
  • 51. In this structural testing strategy, test cases are designed to make each component of a composite conditional expression assume both true and false values. Condition testing is a stronger testing strategy than branch testing, and branch testing is a stronger testing strategy than statement coverage-based testing. For a composite conditional expression of n components, condition coverage requires 2ⁿ test cases. Thus, for condition coverage, the number of test cases increases exponentially with the number of component conditions. Therefore, a condition coverage-based testing technique is practical only if n (the number of conditions) is small. Condition Coverage
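A sketch of the exponential growth, assuming a hypothetical composite condition with n = 2 components (`is_admin or is_owner`; the function name is invented for the example):

```python
from itertools import product

# A composite conditional with n = 2 components.
def access_allowed(is_admin, is_owner):
    return is_admin or is_owner

# Exhaustive condition coverage requires 2**n combinations of the
# component truth values — here 2**2 = 4 test cases.
combinations = list(product([False, True], repeat=2))
print(len(combinations))  # 4

for is_admin, is_owner in combinations:
    assert access_allowed(is_admin, is_owner) == (is_admin or is_owner)
```

For n = 10 the same approach would already need 1024 test cases, which is why condition coverage is practical only for small n.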
  • 52. The path coverage-based testing strategy requires us to design test cases such that all linearly independent paths in the program are executed at least once. A linearly independent path can be defined in terms of the control flow graph (CFG) of a program. Path Coverage
  • 53. A control flow graph describes the sequence in which the different instructions of a program get executed. In other words, a control flow graph describes how the control flows through the program. An edge from one node to another node exists if the execution of the statement representing the first node can result in the transfer of control to the other node. Control Flow Graph (CFG)
  • 54. CFG for sequence, selection and iteration type of constructs, respectively Control Flow Graph (CFG)
  • 55. The CFG for any program can be easily drawn by knowing how to represent the sequence, selection, and iteration type of statements in the CFG. After all, a program is made up of these types of statements. It is important to note that for the iteration type of constructs such as the while construct, the loop condition is tested only at the beginning of the loop and therefore the control flow from the last statement of the loop is always to the top of the loop. Control Flow Graph (CFG)
  • 56. EUCLID’S GCD Computation Algorithm Control Flow Graph (CFG) Control flow diagram
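One common subtraction-based formulation of Euclid's GCD algorithm is sketched below (the figure on this slide is not reproduced here). The numbers in the comments mark the statements that would become the nodes of the CFG, with an edge wherever control can transfer between two of them:

```python
# Subtraction-based Euclid's GCD; numbered statements serve as CFG nodes.
def gcd(x, y):
    while x != y:      # node 1: loop condition
        if x > y:      # node 2: selection
            x = x - y  # node 3
        else:
            y = y - x  # node 4
    return x           # node 5

assert gcd(1071, 462) == 21
```

The CFG edges are 1→2 and 1→5 from the loop test, 2→3 and 2→4 from the selection, and 3→1 and 4→1 back to the top of the loop.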
  • 57. Path A path through a program is a node and edge sequence from the starting node to a terminal node of the control flow graph of a program. There can be more than one terminal node in a program. Writing test cases to cover all the paths of a typical program is impractical. For this reason, the path-coverage testing does not require coverage of all paths but only coverage of linearly independent paths.
  • 58. Linearly independent path A linearly independent path is any path through the program that introduces at least one new edge that is not included in any other linearly independent path. If a path has one new node compared to all other linearly independent paths, then the path is also linearly independent, because any path having a new node automatically has a new edge. Thus, a path that is a sub-path of another path is not considered to be a linearly independent path.
  • 59. Cyclomatic Complexity For more complicated programs it is not easy to determine the number of independent paths of the program. McCabe’s cyclomatic complexity defines an upper bound for the number of linearly independent paths through a program, and it is very simple to compute. Thus, the McCabe’s cyclomatic complexity metric provides a practical way of determining the maximum number of linearly independent paths in a program. Though McCabe’s metric does not directly identify the linearly independent paths, it indicates approximately how many paths to look for.
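A sketch of the computation using V(G) = E − N + 2, taking as the example the usual five-node CFG of the subtraction-based GCD algorithm (the node and edge lists below are this author's rendering of that CFG, not from the slides):

```python
# McCabe's cyclomatic complexity: V(G) = E - N + 2, where E is the
# number of edges and N the number of nodes of the CFG.
def cyclomatic_complexity(edges, nodes):
    return len(edges) - len(nodes) + 2

# CFG of the subtraction-based GCD: node 1 is the while test, node 2
# the if, nodes 3 and 4 the two assignments, node 5 the return.
nodes = [1, 2, 3, 4, 5]
edges = [(1, 2), (1, 5), (2, 3), (2, 4), (3, 1), (4, 1)]

print(cyclomatic_complexity(edges, nodes))  # 6 - 5 + 2 = 3
```

The same value follows from counting decision points: two predicates (the while and the if) plus one gives 3, so at most three linearly independent paths need to be found.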
  • 60. Data Flow-Based Testing Data flow-based testing method selects test paths of a program according to the locations of the definitions and uses of different variables in a program.
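A brief sketch of what "definitions and uses" means, on an invented four-statement program (DEF(S) is the set of variables defined at statement S, USE(S) the set used there):

```python
# Illustrative DEF/USE annotation for a tiny program.
def total(items):
    acc = 0            # S1: DEF = {acc}
    for i in items:    # S2: DEF = {i},   USE = {items}
        acc = acc + i  # S3: DEF = {acc}, USE = {acc, i}
    return acc         # S4:              USE = {acc}

# A non-empty input exercises the def-use pairs (S1,S3), (S3,S3) and
# (S3,S4) for acc; an empty input is needed to exercise (S1,S4).
assert total([1, 2, 3]) == 6
assert total([]) == 0
```

Data flow-based test selection then requires the chosen paths to cover such def-use pairs.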
  • 61. Mutation Testing In mutation testing, the software is first tested by using an initial test suite built up from the different white box testing strategies. After the initial testing is complete, mutation testing is taken up. The idea behind mutation testing is to make a few arbitrary changes to a program at a time. Each time the program is changed, it is called a mutated program and the change effected is called a mutant. A mutated program is tested against the full test suite of the program.
  • 62. Mutation Testing If there exists at least one test case in the test suite for which a mutant gives an incorrect result, then the mutant is said to be dead. If a mutant remains alive even after all the test cases have been exhausted, the test data is enhanced to kill the mutant. The process of generation and killing of mutants can be automated by predefining a set of primitive changes that can be applied to the program. These primitive changes can be alterations such as changing an arithmetic operator, changing the value of a constant, changing a data type, etc.
  • 64. Mutation Testing A major disadvantage of the mutation-based testing approach is that it is computationally very expensive, since a large number of possible mutants can be generated. Since mutation testing generates a large number of mutants and requires us to check each mutant with the full test suite, it is not suitable for manual testing. Mutation testing should be used in conjunction of some testing tool which would run all the test cases automatically.
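A sketch of the mutant-killing idea, using one primitive change (an arithmetic operator swap); the functions and test data are invented for this illustration:

```python
# Original program and a mutant produced by replacing '+' with '-'.
def original(a, b):
    return a + b

def mutant(a, b):
    return a - b  # the mutant: one primitive change

# The full test suite: (input arguments, expected output) pairs.
test_suite = [((2, 2), 4), ((0, 0), 0), ((3, 1), 4)]

def kills(program):
    """A mutant is dead if at least one test case gives a wrong result."""
    return any(program(*args) != expected for args, expected in test_suite)

assert not kills(original)   # the original passes every test case
assert kills(mutant)         # e.g. mutant(2, 2) == 0, not 4 — mutant is dead
```

Had the mutant survived, the test data would need to be enhanced with a case that distinguishes it from the original; tools automate this loop over a predefined set of primitive changes.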
  • 65. Members 1.Aprilen Fernando 2.Audrey Camacho 3.Ma. Chamear Troyo 4.Amier Lesigues 5.Marvin Matocenio 6.Stephen Morado 7.Niño Joshua Maurillo 8.Tristan Bautista

Editor's Notes

  • #4: For implementing our design into a code, we require a good high-level language. A programming language should have the following features:
  • #5: Readability: A good high-level language will allow programs to be written in some ways that resemble a near-English description of the underlying algorithms. If care is taken, the coding may be done in a way that is essentially self-documenting. Portability: High-level languages, being essentially machine independent, should make it possible to develop portable software. Generality: Most high-level languages allow the writing of a wide variety of programs, thus relieving the programmer of the need to become expert in many diverse languages. Brevity: The language should have the ability to implement the algorithm with a smaller amount of code. Programs expressed in high-level languages are often considerably shorter than their low-level equivalents. Error checking: Being human, a programmer is likely to make many mistakes in the development of a computer program. Many high-level languages enforce a great deal of error checking both at compile-time and at run-time. Cost: The ultimate cost of a programming language is a function of many of its characteristics. Familiar notation: A language should have familiar notation, so it can be understood by most of the programmers. Quick translation: It should admit quick translation. Efficiency: It should permit the generation of efficient object code. Modularity: It is desirable that programs can be developed in the language as a collection of separately compiled modules, with appropriate mechanisms for ensuring self-consistency between these modules. Widely available: The language should be widely available and it should be possible to provide translators for all the major machines and for all the major operating systems. For next slide: A coding standard lists several rules to be followed during coding, such as the way variables are to be named, the way the code is to be laid out, error return conventions, etc.
  • #6: Good software development organizations usually develop their own coding standards and guidelines depending on what best suits their organization and the type of products they develop. 1. Rules for limiting the use of global: These rules list what types of data can be declared global and what cannot. 2. Contents of the headers preceding codes for different modules: The information contained in the headers of different modules should be standard for an organization. The exact format in which the header information is organized in the header can also be specified. The following are some standard header data:
  • #7: 3. Naming conventions for global variables, local variables, and constant identifiers: A possible naming convention can be that global variable names always start with a capital letter, local variable names are made of small letters, and constant names are always capital letters. 4. Error return conventions and exception handling mechanisms: The way error conditions reported by different functions in a program are handled should be standard within an organization. For example, different functions while encountering an error condition should either return a 0 or 1 consistently.
  • #8: 1. Do not use a coding style that is too clever or too difficult to understand: Code should be easy to understand. Many inexperienced engineers actually take pride in writing cryptic and incomprehensible code. Clever coding can obscure meaning of the code and hamper understanding. It also makes maintenance difficult. 2. Avoid obscure side effects: The side effects of a function call include modification of parameters passed by reference, modification of global variables, and I/O operations. An obscure side effect is one that is not obvious from a casual examination of the code. Obscure side effects make it difficult to understand a piece of code. For example, if a global variable is changed obscurely in a called module or some file I/O is performed which is difficult to infer from the function’s name and header information, it becomes difficult for anybody trying to understand the code. 3. Do not use an identifier for multiple purposes: Programmers often use the same identifier to denote several temporary entities. For example, some programmers use a temporary loop variable for computing and storing the final result. The rationale that is usually given by these programmers for such multiple uses of variables is memory efficiency, e.g. three variables use up three memory locations, whereas the same variable used in three different ways uses just one memory location. However, there are several things wrong with this approach and hence it should be avoided. Some of the problems caused by use of variables for multiple purposes are as follows:  Each variable should be given a descriptive name indicating its purpose. This is not possible if an identifier is used for multiple purposes. Use of a variable for multiple purposes can lead to confusion and make it difficult for somebody trying to read and understand the code.  Use of variables for multiple purposes usually makes future enhancements more difficult.
  • #9: 4. The code should be well-documented: As a rule of thumb, there must be at least one comment line on the average for every three-source line. 5. The length of any function should not exceed 10 source lines: A function that is very lengthy is usually very difficult to understand as it probably carries out many different functions. For the same reason, lengthy functions are likely to have disproportionately larger number of bugs. 6. Do not use goto statements: Use of goto statements makes a program unstructured and very difficult to understand.
  • #12: Code walk through is an informal code analysis technique. In this technique, after a module has been coded, successfully compiled, and all syntax errors eliminated, a few members of the development team are given the code a few days before the walk through meeting to read and understand it. Each member selects some test cases and simulates execution of the code by hand (i.e. trace execution through each statement and function execution). The main objectives of the walk through are to discover the algorithmic and logical errors in the code. The members note down their findings to discuss these in a walk through meeting where the coder of the module is present. Even though a code walk through is an informal analysis technique, several guidelines have evolved over the years for making this naïve but useful analysis technique more effective. Of course, these guidelines are based on personal experience, common sense, and several subjective factors. Therefore, these guidelines should be considered as examples rather than accepted as rules to be applied dogmatically. Some of these guidelines are the following:
  • #14: In contrast to code walk through, the aim of code inspection is to discover some common types of errors caused due to oversight and improper programming. In other words, during code inspection the code is examined for the presence of certain kinds of errors, in contrast to the hand simulation of code execution done in code walk throughs. For instance, consider the classical error of writing a procedure that modifies a formal parameter while the calling routine calls that procedure with a constant actual parameter. It is more likely that such an error will be discovered by looking for these kinds of mistakes in the code, rather than by simply hand simulating execution of the procedure. In addition to the commonly made errors, adherence to coding standards is also checked during code inspection. Good software development companies collect statistics regarding different types of errors commonly committed by their engineers and identify the type of errors most frequently committed. Such a list of commonly committed errors can be used during code inspection to look out for possible errors. Following is a list of some classical programming errors which can be checked during code inspection:
  • #18: Clean room testing was pioneered by IBM. This type of testing relies heavily on walk throughs, inspection, and formal verification. The programmers are not allowed to test any of their code by executing the code other than doing some syntax testing using a compiler. The software development philosophy is based on avoiding software defects by using a rigorous inspection process. The objective of this approach is zero-defect software. The name ‘clean room’ was derived from the analogy with semiconductor fabrication units. In these units (clean rooms), defects are avoided by manufacturing in an ultra-clean atmosphere. In this kind of development, inspections to check the consistency of the components with their specifications have replaced unit testing.
  • #20:  Formal specification: The software to be developed is formally specified. A state-transition model which shows system responses to stimuli is used to express the specification.  Incremental development: The software is partitioned into increments which are developed and validated separately using the clean room process. These increments are specified, with customer input, at an early stage in the process.  Structured programming: Only a limited number of control and data abstraction constructs are used. The program development process is a process of stepwise refinement of the specification.  Static verification: The developed software is statically verified using rigorous software inspections. There is no unit or module testing process for code components.  Statistical testing of the system: The integrated software increment is tested statistically to determine its reliability. These statistical tests are based on the operational profile which is developed in parallel with the system specification. The main problem with this approach is that testing effort is increased as walk throughs, inspection, and verification are time-consuming.
  • #23: o Good documents enhance understandability and maintainability of a software product. They reduce the effort and time required for maintenance. o User documents help the users in effectively using the system. o Good documents help in effectively handling the manpower turnover problem. Even when an engineer leaves the organization, and a new engineer comes in, he can build up the required knowledge easily. o Production of good documents helps the manager in effectively tracking the progress of the project. The project manager knows that measurable progress is achieved if a piece of work is done and the required documents have been produced and reviewed.
  • #24: Internal documentation is the code comprehension features provided as part of the source code itself. Internal documentation is provided through appropriate module headers and comments embedded in the source code. Internal documentation is also provided through the useful variable names, module and function headers, code indentation, code structuring, use of enumerated types and constant identifiers, use of user-defined data types, etc. External documentation is provided through various types of supporting documents such as users’ manual, software requirements specification document, design document, test documents, etc. A systematic software development style ensures that all these documents are produced in an orderly fashion.
  • #28:  Failure: This is a manifestation of an error (or defect or bug). But, the mere presence of an error may not necessarily lead to a failure.  Test case: This is the triplet [I,S,O], where I is the data input to the system, S is the state of the system at which the data is input, and O is the expected output of the system.  Test suite: This is the set of all test cases with which a given software product is to be tested.
  • #29: The aim of the testing process is to identify all defects existing in a software product. However, for most practical systems, even after satisfactorily carrying out the testing phase, it is not possible to guarantee that the software is error free. This is because the input data domain of most software products is very large. It is not practical to test the software exhaustively with respect to each value that the input data may assume. Even with this practical limitation of the testing process, the importance of testing should not be underestimated. It must be remembered that testing does expose many defects existing in a software product. Thus, testing provides a practical way of reducing defects in a system and increasing the users’ confidence in a developed system.
  • #33: In other words, testing a system using a large collection of test cases that are selected at random does not guarantee that all (or even most) of the errors in the system will be uncovered.
  • #53: A control flow graph describes the sequence in which the different instructions of a program get executed. In other words, a control flow graph describes how the control flows through the program. In order to draw the control flow graph of a program, all the statements of a program must be numbered first. The different numbered statements serve as nodes of the control flow graph (as will be shown in next slide). An edge from one node to another node exists if the execution of the statement representing the first node can result in the transfer of control to the other node.
  • #55: The CFG for any program can be easily drawn by knowing how to represent the sequence, selection, and iteration type of statements in the CFG. After all, a program is made up from these types of statements. The last slide summarizes how the CFG for these three types of statements can be drawn. It is important to note that for the iteration type of constructs such as the while construct, the loop condition is tested only at the beginning of the loop and therefore the control flow from the last statement of the loop is always to the top of the loop. Using these basic ideas, the CFG of Euclid’s GCD computation algorithm can be drawn as shown in next slide.