SOFTWARE ENGINEERING
ASSIGNMENT
FROM: BEDICT2920
NAME: ARTHUR TEMBO
SOFTWARE TESTING
Chapter 8 of Software Engineering (Ian Sommerville)
CONTENTS
 Development testing
 Test-driven development
 Release testing
 User testing
 Test case design
 White box testing
Development Testing
Definition:
 Testing of a system during development to discover bugs and defects.
Who is involved?
 System designers and programmers are likely to be involved in the process.
 Some development processes use programmer/tester pairs.
 For critical systems, a separate testing group may be used within the development team.
Stages of Development Testing
 There are three stages:
1. Unit testing
2. Component testing
3. System testing
Unit Testing
 Process of testing program components, e.g. methods or object classes.
 Tests call the methods of a component with various input parameters.
 Test design should cover all features of the object and simulate all events that cause
a state change.
Note: Generalization or inheritance makes object class testing complicated, as tests of
an inherited operation cannot be assumed valid for every subclass that inherits it.
 A state model can be used to identify the state transitions to be tested.
 Automated unit testing frameworks are also used to run program tests; they provide
generic test classes that can be extended to create specific test cases.
Parts of an automated test
1. Setup part:
 Initializes the system with the test case, namely its inputs and expected outputs.
2. Call part:
 Calls the method or object to be tested.
3. Assertion part:
 Compares the actual results with the expected outputs.
Note: Sometimes the objects being tested depend on other objects, so mock
objects are used in their place.
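These three parts map directly onto code in an automated testing framework. Below is a minimal sketch using Python's unittest and unittest.mock; the WeatherStation class and its transmitter dependency are hypothetical stand-ins invented for illustration, not an API from the book.

```python
import unittest
from unittest import mock


class WeatherStation:
    """Hypothetical component under test (invented for this sketch)."""
    def __init__(self, transmitter):
        self.transmitter = transmitter

    def report(self, reading):
        # Forwards a formatted reading to a (possibly remote) transmitter.
        return self.transmitter.send(f"TEMP:{reading}")


class WeatherStationTest(unittest.TestCase):
    def setUp(self):
        # Setup part: initialize the system with the test case,
        # replacing the real dependency with a mock object.
        self.transmitter = mock.Mock()
        self.transmitter.send.return_value = "ACK"
        self.station = WeatherStation(self.transmitter)

    def test_report_sends_formatted_reading(self):
        # Call part: invoke the method under test.
        result = self.station.report(21)
        # Assertion part: compare actual results with expected outputs.
        self.assertEqual(result, "ACK")
        self.transmitter.send.assert_called_once_with("TEMP:21")


if __name__ == "__main__":
    unittest.main()
```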
Choosing unit test cases
Two effective strategies for choosing unit test cases:
1. Partition testing
 This is where you identify groups of inputs that have common characteristics and should
be processed in the same way. Tests should be chosen from within each of these groups.
2. Guideline-based testing
 This is where testing guidelines are used to choose test cases. These guidelines
reflect previous experience of the kinds of errors that programmers often make
when developing components.
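As a concrete illustration of partition testing, here is a sketch in Python. The shipping_fee function and its weight bands are hypothetical; the point is that each equivalence partition contributes at least one test, and the boundaries between partitions are tested too, since that is where errors cluster.

```python
import unittest


def shipping_fee(weight_kg):
    """Hypothetical function: the fee depends on which weight band the input falls in."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 2:
        return 5.0
    if weight_kg <= 20:
        return 12.0
    return 40.0


class PartitionTests(unittest.TestCase):
    # One representative value from inside each partition,
    # plus values at the partition boundaries.
    def test_invalid_partition(self):
        with self.assertRaises(ValueError):
            shipping_fee(-3)

    def test_light_partition(self):
        self.assertEqual(shipping_fee(1), 5.0)
        self.assertEqual(shipping_fee(2), 5.0)    # boundary

    def test_medium_partition(self):
        self.assertEqual(shipping_fee(10), 12.0)
        self.assertEqual(shipping_fee(20), 12.0)  # boundary

    def test_heavy_partition(self):
        self.assertEqual(shipping_fee(21), 40.0)


if __name__ == "__main__":
    unittest.main()
```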
Component Testing
 The process of testing several interacting objects that together form a system component.
Types of interface between program components
1. Shared memory interface
 Interfaces in which a block of memory is shared between components.
2. Parameter interface
 Interfaces in which data or function references are passed between components, e.g. method parameters.
3. Procedural interface
 A component encapsulates a set of procedures that can be called by other components.
4. Message passing interface
 A component requests a service from another component by passing a message to it.
Component Testing
Classes of interface errors:
1. Interface misuse
 A component calls another component and makes an error in the use of its interface, e.g.
passing parameters in the wrong order.
2. Interface misunderstanding
 The calling component misunderstands the specification of the called component's
interface, so the called component does not behave as expected.
3. Timing errors
 These occur in real-time systems that use a shared memory or message-passing interface.
The producer of data and the consumer of data may operate at different speeds.
Guidelines for interface testing
1. Design a set of tests in which the parameters passed to the called component are at the
extreme ends of their ranges.
2. Test the interface with null pointer parameters where pointers are passed across an
interface.
3. Where a component is called through a procedural interface, design tests that
deliberately cause the component to fail.
4. Use stress testing in message-passing systems. This means designing tests that generate
many more messages than are likely to occur in practice.
5. Design tests that vary the order in which the components are activated.
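A sketch of what guidelines 1 to 3 might look like as executable tests, again using Python's unittest. The summarise function is a hypothetical procedural interface invented for illustration; None plays the role of a null pointer.

```python
import unittest


def summarise(readings):
    """Hypothetical procedural interface under test: returns (min, max, mean)."""
    if readings is None:
        raise TypeError("readings must not be None")
    if not readings:
        raise ValueError("readings must not be empty")
    return min(readings), max(readings), sum(readings) / len(readings)


class InterfaceTests(unittest.TestCase):
    def test_extreme_parameter_values(self):
        # Guideline 1: parameters at the extreme ends of their ranges.
        lo, hi, _ = summarise([-2**31, 2**31 - 1])
        self.assertEqual((lo, hi), (-2**31, 2**31 - 1))

    def test_null_parameter(self):
        # Guideline 2: a null (None) value passed across the interface.
        with self.assertRaises(TypeError):
            summarise(None)

    def test_deliberate_failure(self):
        # Guideline 3: a test designed to make the component fail.
        with self.assertRaises(ValueError):
            summarise([])


if __name__ == "__main__":
    unittest.main()
```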
System Testing
Definition:
 This is where the components are integrated and the system is tested as a whole.
The focus is on component interactions.
How does system testing differ from component testing?
1. The complete system is tested during system testing.
2. System testing is a collective rather than an individual process.
Note: Use case-based testing is effective for system testing, as the focus is on
component interactions.
Policies for system testing
1. All system functions accessed through menus should be tested.
2. Combinations of functions accessed through the same menu must be tested.
3. For user input, the functions must be tested for both correct and incorrect data.
Note: Automated testing is difficult for system testing, since it is difficult to predict
the outputs.
Test-Driven Development
Definition:
 This is where code is developed incrementally, and you do not move on to the next
increment until the code being developed passes all of its tests.
 The following diagram illustrates test-driven development.
Diagram for test-driven development
Diagram brief description
1. Identify an increment of functionality.
2. Formulate tests for that functionality and implement them as automated tests.
3. Run the tests along with all other implemented tests.
4. Implement the functionality and rerun the tests.
5. Once all tests run successfully, move on to the next increment of functionality.
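A minimal sketch of one pass through this cycle, assuming Python and unittest; apply_discount and its 10%-over-100 rule are invented for illustration. The tests for the increment are written first (step 2), they fail until the functionality is implemented (step 4), and the whole suite is rerun before moving on (step 5).

```python
import unittest


# Step 2 of the cycle: the tests for the new increment are written first
# and fail until the functionality below is implemented.
class DiscountTest(unittest.TestCase):
    def test_ten_percent_discount_over_100(self):
        self.assertEqual(apply_discount(200.0), 180.0)

    def test_no_discount_at_or_below_100(self):
        self.assertEqual(apply_discount(100.0), 100.0)


# Step 4: the simplest implementation that makes the failing tests pass.
def apply_discount(total):
    return total * 0.9 if total > 100 else total


if __name__ == "__main__":
    unittest.main()  # Steps 3 and 5: run all tests; move on only when green.
```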
Benefits of test-driven development
1. Code coverage
 Every segment of code that is written is tested as it is developed.
2. Regression testing
 Regression tests can be run to check that no new bugs have been introduced.
3. Simplified debugging
 When a test fails, the location of the bug is usually obvious: it is in the newly written code.
4. System documentation
 The tests act as documentation, as they describe what the code should be doing.
Note: However, even with a test-driven development approach, a system testing
process is still needed for system validation.
Release Testing
Definition:
 This is the process of testing a particular release of a system that is intended for use
outside of the development team.
How does release testing differ from system testing?
1. System testing by the development team focuses on discovering bugs in the system
(defect testing); release testing checks that the system meets its requirements.
2. The development team is not responsible for release testing; a separate team
usually carries it out.
What release testing must show:
1. The specified functionality.
2. Performance.
3. Dependability (the system does not fail during normal use).
Approaches to release testing
1. Requirements-based testing
 A systematic approach to test-case design where each requirement is considered in turn
and a set of tests is derived for it. Several tests may have to be created to cover each requirement.
2. Scenario testing
 An approach to release testing whereby one devises typical scenarios of use and uses
these scenarios to develop test cases for the system.
Qualities of good scenarios
 Realistic: real system users should be able to relate to the scenario.
 Motivating: stakeholders should relate to the scenario and believe it is important for the
system to pass the test.
 Easy to evaluate: if the system has problems, the testing team should be able to recognize them.
Continuation…
3. Performance testing
 When the system has been fully integrated, it is possible to test for performance.
 This usually involves running a series of tests in which the load is steadily increased
until performance becomes unacceptable.
What is stress testing?
 The process of testing the system by making demands that lie outside the design limits of the product.
Relevance of stress testing
1. Testing the failure behavior of the system.
2. Revealing defects that only show up when the system is fully loaded.
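A toy sketch of the load-ramping idea, assuming Python; process_request and the load levels are invented stand-ins for a real workload. The load is increased step by step, and the point where the observed time exceeds an acceptable budget marks the performance limit; deliberately pushing past that limit is stress testing.

```python
import time


def process_request(payload):
    """Stand-in for the operation under load (invented for this sketch)."""
    return sorted(payload)


def ramp_load(max_level=6, budget_seconds=0.5):
    # Increase the load level step by step and record when the
    # observed time per batch exceeds the acceptable budget.
    for level in range(1, max_level + 1):
        batch = [list(range(10_000 * level))[::-1] for _ in range(level)]
        start = time.perf_counter()
        for payload in batch:
            process_request(payload)
        elapsed = time.perf_counter() - start
        status = "OK" if elapsed <= budget_seconds else "DEGRADED"
        print(f"load level {level}: {elapsed:.3f}s [{status}]")


if __name__ == "__main__":
    ramp_load()
```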
User Testing
Definition:
 This is a process in which users or customers provide input and advice on system
testing. It is also called customer testing.
Importance of user testing
 Influences from the user's working environment affect the:
1. Reliability
2. Performance
3. Usability
4. Robustness of system
Types of user testing and importance of each type
1. Alpha testing
 A selected group of software users works closely with the development team to test early releases of the
software.
 The importance is that it reduces the risk that unanticipated changes will have disruptive effects on
the business.
2. Beta testing
 This is when a release of the software is made available to a large group of users so that they
can experiment with it and raise the problems they discover with the developers.
 The importance is that it is a form of marketing, as customers become aware of the system. It is
also good for discovering interaction problems between the software and its operational environment.
3. Acceptance testing
 This is where customers test a system to decide whether or not it is ready to be
accepted from the developers and deployed in the customer environment.
Stages in acceptance testing
 There are six stages
1. Define acceptance criteria
 Takes place before the contract for the system is signed. The acceptance criteria should be
part of the contract and approved by the customer and the developer.
2. Plan acceptance testing
 This involves deciding resources, time and budget for acceptance testing and
establishing a testing schedule.
3. Derive acceptance tests
 This is the designing of tests to check whether or not the system is acceptable.
 Tests should provide complete coverage of the system requirements and should test both
functional and non-functional characteristics.
Continuation…
4. Run acceptance test
 The agreed acceptance tests are executed on the system. This should take place in the
actual environment where the system will be used.
5. Negotiate test results
 This involves the customer and the developer negotiating to decide whether the
system is good enough to be used, and agreeing on how the developer will fix the
identified problems.
6. Reject / accept system
 This stage involves a meeting between the developers and the customer to decide
on whether or not the system should be accepted.
 If it is not good enough, the developer is required to fix the identified problems.
Test-Case Design
Description:
 Designing test cases before developing the code for a component is considered good
practice, as it ensures that the developed code will pass at least the tests one has
already thought of.
Qualities of good test-case design
 A good test-case design anticipates error conditions and establishes error-handling
paths that reroute or cleanly terminate processing when an error occurs (the anti-bugging approach).
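A brief sketch of the anti-bugging idea in Python; the sensor-reading parser and its valid range are hypothetical. Each anticipated error condition has an explicit handling path that cleanly rejects the input instead of letting the failure surface deeper in the component.

```python
def read_sensor_value(raw):
    """Hypothetical parser illustrating anti-bugging: every anticipated
    error condition is routed to an explicit, clean failure path."""
    if raw is None:
        raise ValueError("no reading supplied")          # anticipated: missing input
    try:
        value = float(raw)
    except (TypeError, ValueError):
        raise ValueError(f"malformed reading: {raw!r}")  # anticipated: bad format
    if not -50.0 <= value <= 150.0:
        raise ValueError(f"out of range: {value}")       # anticipated: impossible value
    return value


# Test cases then target each error-handling path as well as the normal path.
print(read_sensor_value("21.5"))  # -> 21.5
```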
Factors to consider for creation of good test-case design
1. Requirements and use cases
 Requirements gathering involves working with customers to generate user stories
that developers refine into formal use cases and analysis models.
 Use cases are a guide to the creation of test cases.
2. Traceability
 The testing process must be auditable; that is, each test case needs to be traceable
back to specific functional or non-functional requirements or anti-requirements.
White Box Testing
Definition:
 Also known as structural testing, this is a philosophy for test-case design that uses the
control structure described as part of component-level design to derive test cases.
Techniques in white box testing
1. Basis path testing
 The basis path method enables the test-case designer to derive a logical complexity
measure of a procedural design and to use this measure as a guide for defining a basis
set of execution paths.
 A flow graph of the component must be prepared before basis path analysis can begin
(a worked example follows).
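The complexity measure in question is cyclomatic complexity, V(G) = E - N + 2 for a flow graph with E edges and N nodes. A small Python sketch, using a made-up flow graph for illustration:

```python
# Flow graph for a hypothetical component, as adjacency lists:
# nodes are statement/decision blocks, edges are possible transfers of control.
flow_graph = {
    1: [2],
    2: [3, 4],   # if/else decision
    3: [5],
    4: [5],
    5: [2, 6],   # loop back or exit
    6: [],
}

nodes = len(flow_graph)
edges = sum(len(successors) for successors in flow_graph.values())

# Cyclomatic complexity V(G) = E - N + 2 bounds the number of
# independent paths, i.e. the size of the basis path set.
complexity = edges - nodes + 2
print(f"V(G) = {edges} - {nodes} + 2 = {complexity}")
# -> V(G) = 7 - 6 + 2 = 3, so at least 3 test cases are needed to
#    execute every edge of this graph at least once.
```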
Continuation…
2. Control structure testing
 Condition testing: a test-case design method that exercises the logical conditions
contained in a program module.
 Data flow testing: selects test paths according to the locations of definitions and uses of
variables in the program.
 Loop testing: focuses exclusively on the validity of loop constructs. Loops are classified
into simple and nested loops (see the sketch below).
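A sketch of simple-loop testing in Python; moving_total and its window parameter are invented for illustration. The classic cases exercise the loop zero times, once, a typical number of times, and at the boundary of the maximum iteration count.

```python
import unittest


def moving_total(values, window):
    """Hypothetical loop-bearing function: sum of the last `window` items."""
    total = 0
    for v in values[-window:]:
        total += v
    return total


class LoopTests(unittest.TestCase):
    # Classic simple-loop cases: zero iterations, one iteration,
    # a typical number, and the boundary where window equals length.
    def test_zero_iterations(self):
        self.assertEqual(moving_total([], 3), 0)

    def test_one_iteration(self):
        self.assertEqual(moving_total([5], 3), 5)

    def test_typical_iterations(self):
        self.assertEqual(moving_total([1, 2, 3, 4], 3), 9)

    def test_window_equals_length(self):
        self.assertEqual(moving_total([1, 2, 3], 3), 6)


if __name__ == "__main__":
    unittest.main()
```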
SOFTWARE QUALITY
ASPECTS
From Chapter 15 of Software Engineering: A Practitioner's Approach
(Roger Pressman, Bruce Maxim)
CONTENTS
 What is quality?
 Software quality
 The software quality dilemma
 Elements of software quality assurance
 SQA processes and product characteristics
What is quality?
Description of quality from five perspectives:
1. Transcendental view:
 Quality is something one immediately recognizes but cannot explicitly define.
2. User view:
 A product that meets the user's goals.
3. Manufacturer's view:
 A product that conforms to its specifications.
4. Product view:
 Quality can be tied to the inherent characteristics of the product.
5. Value-based view:
 Quality is measured by how much a customer is willing to pay for the product.
Software Quality
 An effective software process, applied in a manner that creates a useful product that
provides measurable value for those who produce it and those who use it.
Attributes of quality software
1. Effective software process
 Establishes the infrastructure that supports any effort at building a high-quality
software product.
2. Usefulness
 Content delivery, functions, and features desired by end users, delivered reliably.
3. Adding value for both the users and the producers of the software
 Provides benefits for the software organization and the end-user community.
Quality of Design
Definition:
 Quality of design refers to the characteristics that designers specify for a product.
Contributors to quality of design
1. Grade of material
2. Tolerances
3. Performance specifications
Quality of conformance
 Focuses on the degree to which the implementation follows the design and the resulting
system meets its requirements and performance goals.
Methods for evaluating software quality
1. Quality factors
Types of quality factor models
a. Quality in use model
 Effectiveness, efficiency, satisfaction, freedom from risk and context coverage.
b. Product quality model
 Functional suitability, performance efficiency and compatibility.
Continuation…
2. Qualitative assessment
 This involves addressing specific measurable (or at least, recognizable) attributes of
the interface.
3. Quantitative assessment
 Involves finding code fragments that suggest the presence of problems such as unnecessary
code complexity. Internal code attributes can be described quantitatively using
software metrics.
Software Quality Dilemma
 If you produce low-quality software, you lose, because no one will want to buy it. On the
other hand, if you spend unlimited time building absolutely perfect software, it takes so long to
complete and costs so much to produce that you will be out of business.
Solutions to the dilemma
 The solution is to produce software with the following considerations in mind:
1. Good enough software
 Delivering software with the high-quality functions and features that end users desire, while
accepting known bugs in more obscure or specialized features.
2. The cost of quality
 This includes the costs incurred in carrying out quality-related activities and the downstream
costs of a lack of quality. It includes prevention costs, appraisal costs, and failure costs.
Solutions to the dilemma continuation…
3. Risks
 Low-quality software increases risks for both the developer and the end users.
4. Negligence and liability
 When the software supports a major corporate or government function, poor quality can
expose the developer to claims of negligence and liability.
5. Quality and security
 Low-quality software indirectly increases security risks.
6. The impact of management actions
 Software quality is influenced as much by management decisions as it is by technology.
Elements of Software Quality Assurance
(Under chapter 17)
1. Standards
 Adopted standards must be followed and all work products must conform to them.
2. Review and audits
 Reviews are performed by software quality assurance personnel to ensure that quality guidelines are
being followed in software engineering work.
3. Testing
 A quality control function whose primary goal is to find errors.
4. Error / defect collection and analysis
 Collecting and analyzing error and defect data to better understand how errors are introduced and
how to eliminate them.
5. Change management
 Change can cause confusion, which can lead to poor quality; hence it has to be properly
managed.
Continuation…
6. Education
 Teaching better engineering practices to engineers, managers and stakeholders is a
key to improved quality.
7. Vendor management
 Ensuring that high-quality software results by suggesting specific quality practices to be
followed by vendors.
8. Security management
 Ensuring that appropriate processes and technology are used to achieve software
security.
Continuation…
9. Safety
 Assessing the impact of software failure and initiating the steps required to reduce
risk.
10. Risk management
 Ensuring that risk management activities are properly conducted and that risk-related
contingency plans are established.
Software Quality Assurance Process and
Product Characteristics
 Different software products may exhibit different levels of quality.
 This means that different software environments require different software quality
assurance procedures and approaches.
 The solution to the dilemma is to understand the specific quality requirements for
a software product and then select the process and specific software quality
assurance actions and tasks that will be used to meet those requirements.
THE END…