Testing: Testing is the process of executing a program with the intent of finding errors. Its
purpose is to reduce post-release expenses.
Primary Benefit of Testing: A good test has a high probability of finding an as-yet-undiscovered
error.
Secondary Benefit: To check whether the functions appear to be working according to
specifications.
Debugging: The process of locating the exact cause of an error and removing that cause.
QA: QA is involved in the entire software development process – monitoring and improving the
process, making sure any agreed-upon standards and procedures are followed, and ensuring
that problems are found and dealt with.
QA is about having an overall development and management process that provides the right
environment for ensuring the quality of the final product.
QA is best carried out on processes.
QA should take place at every stage of the SDLC.
QC: 
QC is about checking, at the end of some development activity (e.g. a design activity), that we
have built quality in, i.e. that we have achieved the required quality with our methods.
QC is like testing a module against a requirement specification or design document, or
measuring response time, throughput, etc.
QC is best carried out on products.
QC should be done at the end of the SDLC, i.e. when product building is complete.
Verification: Typically involves reviews and meetings to evaluate plans, documents, code,
requirements and specifications. This can be done with checklists, issue lists, walkthroughs and
inspections.
Validation: Typically involves actual testing, after verification. Validation checks that the
system meets the customer's requirements at the end of the life cycle.
The major purpose of verification and validation activities is to ensure that 
software design, code, and documentation meet all the requirements imposed on 
them. 
Walkthrough: An informal meeting for evaluation purposes.
Inspection: A formal walkthrough.
Test Design: 
This document records what needs to be tested in the application. It should be abstracted from
the Requirement specification and Design specification.
Test Procedures: 
This records how to execute the set of test cases, specifying the exact process to be followed to
conduct each test case.
Test Case: A document describing an input, action or event and the expected response, used to
determine whether a feature of the application is working correctly.
Developing Test Cases: 
- Design test cases for each unit test event based on the objectives of the test
- Tests should be derived from specifications
- Consider equivalence partitioning
- Consider state transition testing
- Design test cases that include abnormal situations and data
- Design test cases for all known and expected error conditions
- Design test cases to be easily duplicated
- Each test case must describe the input, the predicted result, and the conditions under which
the test is run
Test Case design 
Test Case ID: A unique number given to the test case so that it can be identified.
Test Description: A description of the test case you are going to test.
Revision History: Each test case has to have a revision history in order to know when and by
whom it was created or modified.
Function to Be Tested: The name of the function to be tested.
Environment: The environment in which you are testing.
Test Setup: Anything you need to set up outside of your application, for example printers, the
network and so on.
Test Execution: A detailed description of every step of execution.
Expected Results: A description of what you expect the function to do.
Actual Results: pass / fail
If pass - what actually happened when you ran the test.
If fail - a description of what you've observed.
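The fields above can be captured in a simple record. A minimal sketch in Python (the `TestCase` class and the login scenario are illustrative, not taken from any real project):

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One test case record, mirroring the fields described above."""
    test_case_id: str            # unique identifier
    description: str             # what is being tested
    function_under_test: str     # name of the function to be tested
    environment: str             # environment in which the test runs
    setup: str                   # anything to prepare outside the application
    steps: list = field(default_factory=list)  # detailed execution steps
    expected_result: str = ""    # what the function should do
    actual_result: str = ""      # filled in after execution
    status: str = "not run"      # "pass" / "fail" after execution

    def record_result(self, actual: str, passed: bool) -> None:
        """Record what actually happened and mark pass/fail."""
        self.actual_result = actual
        self.status = "pass" if passed else "fail"

tc = TestCase(
    test_case_id="TC-001",
    description="Login with valid credentials",
    function_under_test="login",
    environment="staging",
    setup="test user account created",
    steps=["open login page", "enter valid credentials", "submit"],
    expected_result="user is redirected to the dashboard",
)
tc.record_result("user reached dashboard", passed=True)
```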
Test Case Design Techniques: 
White box techniques: Branch Testing, Condition Testing. Black box techniques: Boundary Value
Analysis, Equivalence Partitioning.
Equivalence Partitioning: 
Equivalence partitioning is the process of taking all of the possible test values and partitioning
them into classes (groups). Test cases should be designed to test one value from each class, so
that the fewest test cases cover the maximum input requirements.
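A short sketch of equivalence partitioning, assuming a hypothetical validator that accepts ages 18 to 60:

```python
def classify_age(age: int) -> str:
    """Hypothetical validator: accepts ages 18..60 inclusive."""
    if age < 18:
        return "rejected: too young"
    if age > 60:
        return "rejected: too old"
    return "accepted"

# Three equivalence classes -> three test values, one per class,
# instead of testing every possible age.
partitions = {
    "invalid_low":  10,   # any value < 18 behaves the same
    "valid":        35,   # any value in 18..60 behaves the same
    "invalid_high": 75,   # any value > 60 behaves the same
}

results = {name: classify_age(value) for name, value in partitions.items()}
```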
Basis Path Testing: A white box test case design technique that uses the algorithmic 
flow of the program to design tests. 
Boundary Value Analysis: 
This is a selection technique where the test data lie along the boundaries of the input domain. 
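Continuing the hypothetical age validator (valid range 18 to 60), boundary value analysis selects values on and just either side of each boundary:

```python
def classify_age(age: int) -> str:
    """Hypothetical validator: accepts ages 18..60 inclusive."""
    return "accepted" if 18 <= age <= 60 else "rejected"

# Boundary value analysis picks test data on and around each boundary
# of the input domain: just below, on, and just above 18 and 60.
boundary_values = [17, 18, 19, 59, 60, 61]
expected = ["rejected", "accepted", "accepted",
            "accepted", "accepted", "rejected"]

actual = [classify_age(v) for v in boundary_values]
assert actual == expected  # off-by-one bugs in the range check would fail here
```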
Branch Testing: 
In branch testing, test cases are designed to exercise control flow branches or decision points in a
unit.
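A minimal branch-testing sketch; `shipping_cost` is a made-up function with two decision points, and the test set exercises both outcomes of each decision:

```python
def shipping_cost(weight_kg: float, express: bool) -> float:
    """Hypothetical function with two decision points."""
    # Decision 1: flat rate up to 1 kg, per-kg surcharge above that.
    cost = 5.0 if weight_kg <= 1.0 else 5.0 + 2.0 * (weight_kg - 1.0)
    # Decision 2: express shipping doubles the cost.
    if express:
        cost *= 2
    return cost

# Branch coverage: each decision outcome (True and False) is exercised
# at least once across the test set.
branch_tests = [
    ((0.5, False), 5.0),    # weight<=1 True,  express False
    ((3.0, False), 9.0),    # weight<=1 False, express False
    ((0.5, True), 10.0),    # weight<=1 True,  express True
]

for args, want in branch_tests:
    assert shipping_cost(*args) == want
```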
Cause Effect Graph: A graphical representation of inputs and the associated output effects,
which can be used to design test cases.
Characteristics of a Good Test: It is likely to catch bugs, is not redundant, and is neither too
simple nor too complex. A test case becomes complex if it has more than one expected result.
Test Plan: 
1. Introduction
2. Scope – Test Items, Features To be tested, Features Not to be Tested 
3. Approach 
4. Test Environment 
5. Schedule and Testing Tasks 
6. Item Pass/Fail Criteria
7. Suspension Criteria 
8. Risk and Contingencies 
9. Test Deliverables 
10. Approvals of the Plan
Introduction: 
Summary of the items and features to be tested. References to related documents. 
Scope: 
Test Items: The names and versions of all the items (software) being tested are listed here.
These include programs, batch files, configuration files and control tables.
Features To Be Tested: A detailed list of functions or features, prepared by reviewing the
Detailed Design Document, Functional specification and Requirement specifications.
Features Not To Be Tested: If any feature is not being tested, it should be clearly mentioned
here with the reason for not testing it. E.g.: sending financial transactions over a private network
may not be tested due to non-availability of infrastructure.
Approach: 
For each major group of features, specify the types of tests, such as regression, stress, etc.
Specify major activities, tools and techniques.
Test Environment: 
Describe the minimal and optimal needs of the test environment. This would include hardware,
network, communications, OS, printers, security and other tools.
Schedule and Tasks: 
Provide a detailed schedule of testing activities and responsibilities. Indicate dependencies and 
time frames for testing activities. 
Item Pass/Fail criteria: 
Specify the criteria to be used to determine whether each test item has passed or failed. 
Suspension criteria: 
The criteria to be used to suspend the testing activity, and the criteria to be used to resume it.
Risk and Contingencies: 
Identify the high-risk assumptions of the test plan, and specify a contingency plan for each.
Test Deliverables: 
Identify the deliverable documents: test plan, test design specifications, test case specifications,
test procedure documents, test item transmittal reports, test logs, test incident reports and test
summary reports. Also identify test input and output data.
Approvals: 
Specify the names and titles of all persons who must approve the plan, and provide space for
signatures and dates.
Test Techniques: 
Black box Testing: Testing based entirely on specifications and requirements, without concern
for the internal logic of the code.
White box Testing: This is completely based on internal logic of code. Tests are based on 
coverage of code statements, paths, branches and conditions. 
Test Phase Levels: 
Unit Testing: The most micro scale of testing, used to test a particular function or module. This
is typically done by the programmers themselves.
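A unit test sketch using Python's standard `unittest` module; the `discount` function is a hypothetical unit under test:

```python
import unittest

def discount(price: float, percent: float) -> float:
    """Hypothetical unit under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class DiscountTest(unittest.TestCase):
    """Programmer-written unit tests for a single function."""

    def test_normal_discount(self):
        self.assertEqual(discount(200.0, 25), 150.0)

    def test_zero_discount(self):
        self.assertEqual(discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            discount(100.0, 150)

# Run the suite programmatically.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```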
Incremental Integration: Continuous testing of the application as new functionality is added. 
Integration testing: Testing the combined parts of the application to determine whether they
function correctly together. The parts can be code modules, individual applications, or client and
server applications.
Functional Testing: Functional testing is performed to verify that a software application
performs and functions correctly according to design specifications. By inputting both valid and
invalid data, we verify that the output of these features and functional areas meets predefined
expectations.
System Testing: Black box type testing based on overall requirement specifications. 
Acceptance Testing: 
Final testing based on the end user or customer acceptance criteria; determining whether the
software is satisfactory to the customer. It is of two types: alpha testing and beta testing.
Other Tests: 
Alpha Testing: 
This type of testing is conducted at the developer’s site by a customer. The developer makes 
note of the errors and usage problems. 
Beta Testing: 
Conducted at one or more customer sites by the end user(s) of the software. Unlike alpha testing, 
the developer is generally not present. 
Ad hoc Testing:
This is also called monkey testing. No presumptions are made in advance; testing of the
remaining features proceeds based on the preceding results, and the tests are not repeatable.
End to End Testing: Similar to system testing, but carried out in an environment that mimics
real-world use, such as interacting with network communications, databases, or other hardware,
applications or systems.
Recovery Testing: 
Forces the software to fail in a variety of ways and verifies that recovery is properly performed. If 
recovery is automatic, re-initialization, check-pointing mechanisms, data recovery, and restart are 
each evaluated for correctness. If recovery requires human intervention, the mean time to repair 
is evaluated to determine whether it is within acceptable limits. 
Sanity Testing/Smoke Testing: An initial testing effort to determine whether the new software
build is performing well enough to be accepted for major testing.
Regression Testing: Re-testing after bug fixes or modification of the software or environment.
Since it is hard to know how much re-testing is needed, especially near the end of development,
automated tools are often used for this.
Load Testing: Testing the application under heavy loads, such as testing a web application
under a range of loads to determine at what point the system's response time degrades or fails.
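A toy load-test sketch; `handle_request` stands in for the system under test (a real load test would drive the deployed application over the network):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload: int) -> int:
    """Stand-in for the system under test."""
    time.sleep(0.001)          # simulated processing time
    return payload * 2

def average_latency_ms(n_concurrent: int, n_requests: int = 50) -> float:
    """Fire n_requests using n_concurrent workers; return mean latency in ms."""
    latencies = []

    def timed(i: int) -> None:
        t0 = time.perf_counter()
        handle_request(i)
        latencies.append((time.perf_counter() - t0) * 1000)

    with ThreadPoolExecutor(max_workers=n_concurrent) as pool:
        list(pool.map(timed, range(n_requests)))
    return sum(latencies) / len(latencies)

# Step the load up and watch where response time degrades.
for load in (1, 5, 25):
    print(f"{load:>3} concurrent -> avg {average_latency_ms(load):.1f} ms")
```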
Stress Testing: Can be described as functional testing of the system under unusually heavy
loads, heavy repetition of the same actions or inputs, input of large numerical values, or large
complex queries to a database.
Performance Testing: Load and stress testing combined are called performance testing.
Compatibility Testing: 
Compatibility testing verifies the functionality and performance of a software application across
multiple platform configurations. This type of testing typically uncovers compatibility issues with
operating systems, other software applications, and hardware components.
Bottom-up testing: An approach to integration testing where the lowest-level components are
tested first, then used to facilitate the testing of higher-level components. The process is repeated
until the component at the top of the hierarchy is tested. The temporary components written to
call the units under test are called 'drivers'.
Top-down testing: An approach to integration testing where the component at the top of the 
component hierarchy is tested first, with lower level components being simulated by stubs. Tested 
components are then used to test lower level components. The process is repeated until the 
lowest level components have been tested. 
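The driver/stub distinction can be sketched as follows; the component names (`fetch_total`, `report`) are hypothetical:

```python
# Hypothetical hierarchy: report() (top level) calls fetch_total() (low level).

def fetch_total(order_id: int) -> float:
    """Low-level component (tested first in bottom-up integration)."""
    return {1: 42.0, 2: 13.5}.get(order_id, 0.0)

# Bottom-up: a throwaway *driver* calls the low-level unit directly.
def driver_for_fetch_total() -> None:
    assert fetch_total(1) == 42.0
    assert fetch_total(99) == 0.0

# Top-down: the high-level unit is tested first, with the lower level
# replaced by a *stub* that returns canned values.
def fetch_total_stub(order_id: int) -> float:
    return 10.0  # canned answer, no real logic

def report(order_id: int, fetch=fetch_total) -> str:
    """High-level component under test."""
    return f"Order {order_id}: {fetch(order_id):.2f}"

driver_for_fetch_total()                        # bottom-up style
assert report(5, fetch=fetch_total_stub) == "Order 5: 10.00"  # top-down style
```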
Software Quality: 
Quality software is reasonably bug-free, delivered on time and within budget, meets
requirements, and is maintainable.
CMM: 
The Capability Maturity Model was developed by the SEI. It is a model of five levels of
organizational maturity that determine effectiveness in delivering quality software.
CMM1: Characterized by chaos, periodic panics and heroic efforts required by individuals to 
successfully complete the project. Success may not be repeated. 
CMM2: Software project tracking, requirements management, realistic project planning and
configuration management processes are in place. In this case, success is repeatable.
CMM3: Standard software development and maintenance processes are integrated throughout
the organization. A software engineering process group oversees the processes and training
programs in the organization.
CMM4: Metrics are used to track productivity, processes and products. Project performance is 
predictable and quality is consistently high. 
CMM5: Focuses on continuous process improvement. The organization can predict the impact of
new technologies and processes and implement them when required.
ISO: 
International Organization for Standardization (e.g. ISO 9001:2000). This concerns quality
systems assessed by outside auditors.
IEEE: 
Institute of Electrical and Electronics Engineers.
IEEE/ANSI 829 – Software Test Documentation 
IEEE/ANSI 1008 – Software Unit Testing 
IEEE/ANSI 730 – Software Quality Assurance Plans
ISO: 
ISO = 'International Organization for Standardization'. The ISO 9001, 9002, and 9003 standards
concern quality systems that are assessed by outside auditors, and they apply to many kinds of
production and manufacturing organizations, not just software.
The most comprehensive is 9001, and this is the one most often used by software development 
organizations. It covers documentation, design, development, production, testing, installation, 
servicing, and other processes. 
ISO 9000-3 (not the same as 9003) is a guideline for applying ISO 9001 to software development 
organizations. The U.S. version of the ISO 9000 series standards is exactly the same as the 
international version, and is called the ANSI/ASQ Q9000 series. The U.S. version can be 
purchased directly from the ASQ (American Society for Quality) or the ANSI organizations. To be 
ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically 
good for about 3 years, after which a complete reassessment is required. Note that ISO 9000 
certification does not necessarily indicate quality products - it indicates only that documented 
processes are followed. (Publication of revised ISO standards was expected in late 2000; see
http://www.iso.ch/ for the latest info.)
Software Life Cycle: 
It includes aspects such as initial concept, requirements analysis, functional design, internal
design, documentation planning, test planning, coding, documentation, testing, retesting, and
maintenance.
Configuration Management: 
It covers the processes used to control, coordinate and track: Code, documentation, 
requirements, problems and change requests, designs and changes made to them, and who 
makes changes to them. 
The purpose of software configuration management is to identify all the interrelated components 
of software and to control their evolution throughout the various life cycle phases. 
Version Control
As an application evolves over time, many different versions of its software components are
created, and there needs to be an organized process to manage changes in the software
components and their relationships.
Change Control 
Change control is the process by which a modification to a software component is proposed, 
evaluated, approved or rejected, scheduled, and tracked. 
Business Requirements: 
This document describes the users' needs for the application.
Functional Requirements: 
This is primarily derived from the Business Requirements and describes functional needs.
Prototype: 
This is a look-and-feel representation of the proposed application. It basically shows the
placement of fields and modules and the generic flow of the application.
Test Strategy: 
A test strategy is a statement of the overall approach to testing, identifying what levels of
testing are to be applied and the techniques, methods and tools to be used. A test strategy
should ideally be organization-wide, applicable to all of an organization's projects.
Here, levels refers to unit, integration, system, etc.
Methods refers to dynamic and static.
Techniques refers to white box and black box testing.
Test Methodology: 
This is the actual implementation used for each project, and it may differ from project to
project. For example, the test strategy may specify load testing, but the methodology of a
particular project may omit it.
Test Policy: 
Test Policy is a management definition of testing for a department. It specifies the milestones to
be achieved.
A test policy contains these headings:
a. Definition of testing
b. Testing system: the methods through which testing will be achieved
c. Evaluation: how information services management will measure and evaluate testing
d. Standards: the standards against which testing will be measured
Test Life Cycle: 
Test Analysis(Risk Analysis), Test plan , Designing Test cases, Executing Test cases, 
Maintaining test logs, Reporting 
Risk management is the process of measuring or assessing risk and then developing strategies
to manage it. In ideal risk management, a prioritization process is followed whereby the
risks with the greatest loss and the greatest probability of occurring are handled first.
Risk Analysis: 
Schedule Risk: Factors that may affect the schedule of testing 
Technology Risk: Risks on hardware and software of the application 
Resource Risk: Test team availability in the event of slippage of the project schedule.
Support Risk: Clarification required on specifications and availability of personnel for the same 
Bug:
Any behavior abnormal to the expected behavior is called a bug.
Error:
A software bug found before software release is called an error.
Defect:
A software bug found after the software release has been sent to the customer.
Fault:
Since a defect is a software bug found in production, every defect leads to a fault in the system.
SDLC: 
Waterfall Model: 
This approach breaks the development cycle down into discrete phases, each with a rigid
sequential beginning and end. Each phase is fully completed before the next is started; once a
phase is completed, it is never revisited. The phases are Requirements, Design, Coding, Testing
and Maintenance.
Iterative Model: 
This model does not start with a full specification of requirements. Instead, development begins
by specifying and implementing just part of the software, which can be reviewed in order to
identify further requirements. This process is repeated, producing a new version of the software
for each cycle of the model.
Progressive Model: 
Testing Methods: 
Static Testing: 
Static testing approaches are time-independent. They do not involve either manual or
automated execution of the product. Examples include syntax checking, structured
walkthroughs, and inspections.
Dynamic Testing: 
Dynamic testing techniques are time-dependent and involve executing a specific sequence of
instructions on the computer. For example, boundary testing is a dynamic testing technique that
requires the execution of test cases on the computer, with a specific focus on the boundary
values associated with the inputs or outputs of the program.
Test Metrics: 
A test metric is a measurable unit used to measure the productivity of testing and the quality of
the system produced. Metrics are generally classified into:
Size-oriented metrics - based on lines of code (KLOC)
Function-oriented metrics - use the function point (FP) metric to calculate the functionality
delivered by a system
Example test metrics:
*Test Coverage* = Number of units (KLOC/FP) tested / total size of the system 
*Test cost (in %)* = Cost of testing / total cost *100 
*Cost to locate defect* = Cost of testing / the number of defects located 
*Defects detected in testing (in %)* = Defects detected in testing / total system defects*100 
*Acceptance criteria tested* = Acceptance criteria tested / total acceptance criteria 
To measure the testing department's performance, we use Defect Removal Efficiency (DRE):
DRE = (No. of defects found by the testing team / (No. of defects found by the testing team +
No. of defects found by the customer)) * 100
For example, if the testing department has found 90 defects and the customer has found 10
defects, then DRE = (90 / 100) * 100 = 90% defect removal efficiency.
If the case is reversed, the defect removal efficiency is only 10%.
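The DRE formula can be computed directly; a small sketch using the worked numbers from the text:

```python
def defect_removal_efficiency(found_by_testing: int, found_by_customer: int) -> float:
    """DRE = defects found by the testing team / all defects found, as a %."""
    total = found_by_testing + found_by_customer
    return found_by_testing / total * 100

# The worked example from the text: 90 defects found in testing,
# 10 found by the customer -> DRE = 90%.
assert defect_removal_efficiency(90, 10) == 90.0
# Reversed case -> DRE = 10%.
assert defect_removal_efficiency(10, 90) == 10.0
```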
Cyclomatic Complexity:
This metric is an indication of the number of 'linear' segments in a method (i.e. sections of code 
with no branches) and therefore can be used to determine the number of tests required to obtain 
complete coverage. It can also be used to indicate the psychological complexity of a method. 
A method with no branches has a Cyclomatic Complexity of 1 since there is 1 arc. This number is 
incremented whenever a branch is encountered. In this implementation, statements that 
represent branching are defined as: 'for', 'while', 'do', 'if', 'case' (optional), 'catch' (optional) and the 
ternary operator (optional). The sum of Cyclomatic Complexities for methods in local classes is 
also included in the total for a method. 
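A simplified sketch of counting cyclomatic complexity over Python source using the standard `ast` module (real tools also handle boolean operators, 'case', and more):

```python
import ast

# Branching constructs counted by this simplified metric.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.IfExp, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Start at 1 (a straight-line method) and add 1 per branch node."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

code = """
def grade(score):
    if score >= 90:
        return "A"
    if score >= 60:
        return "B"
    return "F"
"""
# Two 'if' statements -> complexity 3, so three tests are needed
# for complete branch coverage of grade().
assert cyclomatic_complexity(code) == 3
```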
Check In / Check Out
Check-in means changing, modifying or adding information in the main repository; when
another person checks out, the changes will be reflected.
Bug Leakage:
If a bug which should have been found in one phase (e.g. the requirements phase) is instead
found in a later phase (e.g. the design phase), this is said to be bug leakage.

More Related Content

PPT
Software Testing
PPT
Chapter 3 SOFTWARE TESTING PROCESS
PPTX
Software testing and process
PPT
Test case design
PPT
Testing
PPTX
CTFL Module 04
PPTX
Test Planning and Test Estimation Techniques
PPTX
Test planning
Software Testing
Chapter 3 SOFTWARE TESTING PROCESS
Software testing and process
Test case design
Testing
CTFL Module 04
Test Planning and Test Estimation Techniques
Test planning

What's hot (20)

PPTX
PPT
Testing
PPT
Software Testing Process
PPT
Test Techniques
PPTX
CTFL Module 02
PPTX
CTFL Module 03
PPTX
CTFL Module 01
PPT
Testing fundamentals
PPT
Software Testing
PPTX
Python: Object-Oriented Testing (Unit Testing)
PPT
ISTQB / ISEB Foundation Exam Practice - 2
PPTX
Software Testing Fundamentals | Basics Of Software Testing
PPT
ISTQB / ISEB Foundation Exam Practice -1
PDF
Test cases
PPTX
Software testing introduction
PPTX
Software testing
PPT
PPT
Softwaretesting
PPT
ISTQB / ISEB Foundation Exam Practice - 5
PDF
@#$@#$@#$"""@#$@#$"""
Testing
Software Testing Process
Test Techniques
CTFL Module 02
CTFL Module 03
CTFL Module 01
Testing fundamentals
Software Testing
Python: Object-Oriented Testing (Unit Testing)
ISTQB / ISEB Foundation Exam Practice - 2
Software Testing Fundamentals | Basics Of Software Testing
ISTQB / ISEB Foundation Exam Practice -1
Test cases
Software testing introduction
Software testing
Softwaretesting
ISTQB / ISEB Foundation Exam Practice - 5
@#$@#$@#$"""@#$@#$"""
Ad

Viewers also liked (20)

PPT
Presentationmusic idea!!!![1]
PDF
Achievement Gap[1]
PPT
ms_pp_2007_winxp
PPT
V2kenal suai
DOCX
Mathemahics_Derivatif
PPT
Русские усадьбы
PDF
Un Blog Profitable et Passionnant
PPTX
Recuperaciones
PPT
3 methods ofdesign
PDF
Statistical analysis to identify the main parameters to
PPTX
Presentation1 media pembelajaran
PPSX
China-Tune Talk, Tone Excel PowerPoint.
PPTX
Μηνιγγιτιδικά εμβόλια ACWY και B
PPT
Global Innovation, Secondary Education and Privatisation
PPT
reading
PPT
National Training on Safe Hospitals - Sri Lanka - Module 1 Session 1 - 14Sept...
PPT
казпрофориентация каф психологии прав
ODP
Jóvenes pordioseros
PPT
Facilities plan
PPTX
Kata majmuk keterangan
Presentationmusic idea!!!![1]
Achievement Gap[1]
ms_pp_2007_winxp
V2kenal suai
Mathemahics_Derivatif
Русские усадьбы
Un Blog Profitable et Passionnant
Recuperaciones
3 methods ofdesign
Statistical analysis to identify the main parameters to
Presentation1 media pembelajaran
China-Tune Talk, Tone Excel PowerPoint.
Μηνιγγιτιδικά εμβόλια ACWY και B
Global Innovation, Secondary Education and Privatisation
reading
National Training on Safe Hospitals - Sri Lanka - Module 1 Session 1 - 14Sept...
казпрофориентация каф психологии прав
Jóvenes pordioseros
Facilities plan
Kata majmuk keterangan
Ad

Similar to Testing (20)

PPSX
Introduction to software testing
PPT
Types of Software Testing
PDF
Glossary of Testing Terms and Concepts
DOC
Testing terms & definitions
PPTX
PPTX
Unit Testing
PPTX
Coding, Testing, Black-box and White-box Testing.pptx
PPTX
software testing types jxnvlbnLCBNFVjnl/fknblb
PPT
Testing strategies
PPT
Software testing & its technology
DOCX
Faq
PPTX
object oriented system analysis and design
PPTX
DOC
Testing
PPTX
black and white Box testing.pptx
PPTX
Software testing ppt
DOC
Test plan
DOC
Question ISTQB foundation 3
DOC
Ôn tập kiến thức ISTQB
Introduction to software testing
Types of Software Testing
Glossary of Testing Terms and Concepts
Testing terms & definitions
Unit Testing
Coding, Testing, Black-box and White-box Testing.pptx
software testing types jxnvlbnLCBNFVjnl/fknblb
Testing strategies
Software testing & its technology
Faq
object oriented system analysis and design
Testing
black and white Box testing.pptx
Software testing ppt
Test plan
Question ISTQB foundation 3
Ôn tập kiến thức ISTQB

Recently uploaded (20)

PDF
KEY COB2 UNIT 1: The Business of businessĐH KInh tế TP.HCM
PPTX
Artificial_Intelligence_Basics use in our daily life
PPTX
Basic understanding of cloud computing one need
PDF
simpleintnettestmetiaerl for the simple testint
PPTX
COPD_Management_Exacerbation_Detailed_Placeholders.pptx
PDF
Course Overview and Agenda cloud security
PDF
BIOCHEM CH2 OVERVIEW OF MICROBIOLOGY.pdf
PPTX
Internet Safety for Seniors presentation
PPTX
Cyber Hygine IN organizations in MSME or
PDF
📍 LABUAN4D EXCLUSIVE SERVER STAR GAMING ASIA NO.1 TERPOPULER DI INDONESIA ! 🌟
PDF
Virtual Guard Technology Provider_ Remote Security Service Solutions.pdf
PPTX
MY PRESENTATION66666666666666666666.pptx
PDF
mera desh ae watn.(a source of motivation and patriotism to the youth of the ...
PDF
Session 1 (Week 1)fghjmgfdsfgthyjkhfdsadfghjkhgfdsa
PPTX
module 1-Part 1.pptxdddddddddddddddddddddddddddddddddddd
PPTX
t_and_OpenAI_Combined_two_pressentations
PPT
12 Things That Make People Trust a Website Instantly
PDF
Uptota Investor Deck - Where Africa Meets Blockchain
PPTX
Top Website Bugs That Hurt User Experience – And How Expert Web Design Fixes
PPTX
AI_Cyberattack_Solutions AI AI AI AI .pptx
KEY COB2 UNIT 1: The Business of businessĐH KInh tế TP.HCM
Artificial_Intelligence_Basics use in our daily life
Basic understanding of cloud computing one need
simpleintnettestmetiaerl for the simple testint
COPD_Management_Exacerbation_Detailed_Placeholders.pptx
Course Overview and Agenda cloud security
BIOCHEM CH2 OVERVIEW OF MICROBIOLOGY.pdf
Internet Safety for Seniors presentation
Cyber Hygine IN organizations in MSME or
📍 LABUAN4D EXCLUSIVE SERVER STAR GAMING ASIA NO.1 TERPOPULER DI INDONESIA ! 🌟
Virtual Guard Technology Provider_ Remote Security Service Solutions.pdf
MY PRESENTATION66666666666666666666.pptx
mera desh ae watn.(a source of motivation and patriotism to the youth of the ...
Session 1 (Week 1)fghjmgfdsfgthyjkhfdsadfghjkhgfdsa
module 1-Part 1.pptxdddddddddddddddddddddddddddddddddddd
t_and_OpenAI_Combined_two_pressentations
12 Things That Make People Trust a Website Instantly
Uptota Investor Deck - Where Africa Meets Blockchain
Top Website Bugs That Hurt User Experience – And How Expert Web Design Fixes
AI_Cyberattack_Solutions AI AI AI AI .pptx

Testing

  • 1. Ome Testing: Testing is a process of executing a program with the intent of finding an error. The purpose is reducing post release expenses. Primary Benefits of Testing: A good test has high probability of finding an as yet undiscovered error. Secondary Benefit: To check whether the functions appear to be working according to specifications. Debug: Is a process of locating exact cause of error and removing that cause. QA: QA involves in the entire software development process – monitoring and improving the process and making sure any agreed upon standards and procedures are followed and ensuring that problems are found and dealt with. QA is about having an overall development and management process that provides right environment for ensuring quality of final product. QA is best carried out on process. QA should take place at every stage of SDLC. QC: is about checking at the end of some development process (E.g.- a design activity ) that we have built quality in i.e. that we have achieved the required quality with our methods. QC is like testing a module against requirement specification or design document , measuring response time , throughput etc. QC is best carried out on products QC should be done at end of every SDLC i.e. when product building is complete. Verification: Typically involves the reviews and meetings to evaluate plans, docs, code requirements and specifications. This can be done with checklists, issue lists, walkthroughs and inspections. Validation: Typically involves the actual testing after verification. validation checks that the system meets the customer’s requirements at the end of the life cycle The major purpose of verification and validation activities is to ensure that software design, code, and documentation meet all the requirements imposed on them. Walkthroughs: This is the informal meeting for evaluation purpose. Inspection: Its just formal walkthrough.
  • 2. Test Design: This document records what need to tested in the Application. This should abstract from Requirement specification, Design spec. Test Procedures: This records how to execute the set of test cases, specifying the exact process to be followed to conduct each of test cases. Test Case: Is a document that describing input, action or event and expected response to determine whether feature of application working correctly or not. Developing Test Cases: - Design test cases for unit test event based on objectives of test - Should be specifications derived tests - Should consider Equivalence partitioning - State Transitions testing - Design test cases that include abnormal situations and data. - Design test cases for all known and expected error conditions - Design test cases to be easily duplicated - Each test case must be described input and predicted result and conditions under which the test has been run. Test Case design Test Case ID: It is unique number given to test case in order to be identified. Test description: The description if test case you are going to test. Revision history: Each test case has to have its revision history in order to know when and by whom it is created or modified. Function to be tested: The name of function to be tested. Environment: It tells in which environment you are testing. Test Setup: Anything you need to set up outside of your application for example printers, network and so on. Test Execution: It is detailed description of every step of execution. Expected Results: The description of what you expect the function to do. Actual Results: pass / failed If pass - What actually happen when you run the test. If failed - put in description of what you've observed. Test Case Design Techniques: White box techniques: Branch Testing, Condition Testing. Black box testing techniques are Boundary value analysis, Equivalence partitioning. 
Equivalence Partitioning: Is the process of taking all of the possible test values and planning them into classes (groups). Test cases should be designed to test one value from each group. So that it uses fewest test cases to cover maximum input requirements.
  • 3. Basis Path Testing: A white box test case design technique that uses the algorithmic flow of the program to design tests. Boundary Value Analysis: This is a selection technique where the test data lie along the boundaries of the input domain. Branch Testing: In branch testing, test cases are designed to exercise control flow braches or decision points in a unit. Cause Effect Graph: A graphical representation of inputs and the associated outputs effects, which can be used to design test cases. Characteristics of a Good Test: They are likely to catch bugs, no redundant , not too simple or too complex. Test case is going to be complex if you have more than one expected results. Test Plan: Cont 1. Introduction 2. Scope – Test Items, Features To be tested, Features Not to be Tested 3. Approach 4. Test Environment 5. Schedule and Testing Tasks 6. Item pass/fail 7. Suspension Criteria 8. Risk and Contingencies 9. Test Deliverables 10. Approves of the Plan Introduction: Summary of the items and features to be tested. References to related documents. Scope: Test Items: Names of all the items (software) being tested are listed here and their version too. These include Programs, Batch files, Configuration files and control tables. Features To Be Tested: A detailed list of functions or features will be prepared from reviewing Detail Design Document, Functional specification and Requirement specifications. Features Not To Be Tested: If any feature is not being tested, its should be clearly mentioned here with reason for not being tested. Eg: Sending financial transactions over a private network may not be tested due to non availability of infrastructure. Approach: For each major group features specify the type of testes like regression, stress etc., Specify mojor activities, tools, techniques. Test Environment: Describe test environment minimal and optimal needs. This would include Hardware, Network, Communications, OS and printers and Security and other tools.
Schedule and Tasks: Provide a detailed schedule of testing activities and responsibilities. Indicate dependencies and time frames for testing activities.
Item Pass/Fail Criteria: Specify the criteria to be used to determine whether each test item has passed or failed.
Suspension Criteria: The criteria to be used to suspend the testing activity, and the criteria to be used to resume it.
Risks and Contingencies: Identify the high-risk assumptions of the test plan and specify a contingency plan for each.
Test Deliverables: Identify the deliverable documents: test plan, test design specifications, test case specifications, test procedure documents, test item transmittal reports, test logs, test incident reports and test summary reports. Also identify test input and output data.
Approvals: Specify the names and titles of all persons who must approve the plan, and provide space for signatures and dates.
Test Techniques:
Black Box Testing: Testing based entirely on specifications and requirements, without regard to the internal logic of the code.
White Box Testing: Testing based entirely on the internal logic of the code. Tests are based on coverage of code statements, paths, branches and conditions.
Test Phase Levels:
Unit Testing: The most micro scale of testing, used to test a particular function or module. This is done by programmers only.
Incremental Integration Testing: Continuous testing of the application as new functionality is added.
Integration Testing: Testing the combined parts of the application to determine whether they function correctly together. The parts can be code modules, individual applications, or client and server applications.
Functional Testing: Performed to verify that a software application performs and functions correctly according to design specifications. By inputting both valid and invalid data, we verify that the output of these features and functional areas meets predefined expectations.
System Testing: Black box type testing based on the overall requirement specifications.
Acceptance Testing: Final testing based on the end user's or customer's acceptance criteria; determining whether the software is satisfactory to the customer. It is of two types: alpha testing and beta testing.
Other Tests:
Alpha Testing: Conducted at the developer's site by a customer. The developer makes note of the errors and usage problems.
Beta Testing: Conducted at one or more customer sites by the end user(s) of the software. Unlike alpha testing, the developer is generally not present.
Adhoc Testing: Also called monkey testing. No presumptions are made in advance; testing of the remaining features is guided by the results so far, so the tests are not repeatable.
End to End Testing: Similar to system testing, but carried out in an environment that mimics real-world use, such as interacting network communications, databases and network environments, or interacting with other hardware, applications or systems.
Recovery Testing: Forces the software to fail in a variety of ways and verifies that recovery is properly performed. If recovery is automatic, re-initialization, check-pointing mechanisms, data recovery, and restart are each evaluated for correctness. If recovery requires human intervention, the mean time to repair is evaluated to determine whether it is within acceptable limits.
Sanity Testing/Smoke Testing: An initial testing effort to determine whether the new software is performing well enough to accept it for major testing.
Regression Testing: Re-testing after bug fixes or modification of the software or environment.
It is hard to know how much retesting is needed, especially near the end of the application's development, so automated tools are often used.
Load Testing: Testing the application under heavy loads, such as testing a web application under a range of loads to determine at what point the system's response time degrades or fails.
Stress Testing: Can be described as functional testing of the system under unusually heavy loads, heavy repetition of the same actions or inputs, input of large numerical values, and large complex queries to the database.
Performance Testing: Load and stress testing together are called performance testing.
Compatibility Testing: Tests the functionality and performance of a software application across multiple platform configurations. This type of testing typically uncovers compatibility issues with operating systems, other software applications, and hardware components.
Bottom-up Testing: An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested. The temporary components used to call the units under test are called 'drivers'.
Top-down Testing: An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
Software Quality: Quality software is reasonably bug free, delivered on time, within budget, meets requirements and is maintainable.
CMM: The Capability Maturity Model, developed by the SEI. This is a model of five levels of organizational maturity that determine effectiveness in delivering quality software.
CMM1: Characterized by chaos, periodic panics and heroic efforts required by individuals to successfully complete projects. Success may not be repeatable.
CMM2: Software project tracking, requirements management, realistic project planning and configuration management processes are in place. Success is repeatable.
CMM3: Standard software development and maintenance processes are integrated throughout the organization. A software engineering process group oversees the process and training programs in the organization.
CMM4: Metrics are used to track productivity, processes and products. Project performance is predictable and quality is consistently high.
CMM5: Focuses on continuous process improvement. The organization can predict the impact of new technologies and processes and implement them when required.
ISO: International Organization for Standardization (ISO 9001:2000). This concerns quality systems assessed by outside auditors.
IEEE: Institute of Electrical and Electronics Engineers.
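The drivers and stubs mentioned above can be sketched in a few lines of Python; the `tax`/`invoice_total` components and their values are hypothetical examples, not from the text:

```python
# Real low-level component (in bottom-up integration, tested first).
def tax(amount):
    return amount * 0.1

# Driver: temporary code that calls the low-level component directly
# so it can be tested before the higher-level components exist.
def tax_driver():
    assert tax(100) == 10.0

# Stub: temporary stand-in for a lower-level component so the
# top-level component can be tested first (top-down integration).
def tax_stub(amount):
    return 10.0  # canned answer instead of a real calculation

# Higher-level component that depends on the lower-level one.
def invoice_total(amount, tax_fn):
    return amount + tax_fn(amount)

tax_driver()                                   # bottom-up: driver exercises the unit
assert invoice_total(100, tax_stub) == 110.0   # top-down: stub fills in for tax()
print("driver and stub integration checks pass")
```

Once the real `tax` is trusted, the stub is swapped out and `invoice_total(100, tax)` is retested against it.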
IEEE/ANSI 829 - Software Test Documentation
IEEE/ANSI 1008 - Software Unit Testing
IEEE/ANSI 730 - Software Quality Assurance Plans
ISO: ISO = 'International Organization for Standardization'. The ISO 9001, 9002, and 9003 standards concern quality systems that are assessed by outside auditors, and they apply to many kinds of production and manufacturing organizations, not just software. The most comprehensive is 9001, and this is the one most often used by software development organizations. It covers documentation, design, development, production, testing, installation, servicing, and other processes. ISO 9000-3 (not the same as 9003) is a guideline for applying ISO 9001 to software development organizations. The U.S. version of the ISO 9000 series standards is exactly the same as the international version, and is called the ANSI/ASQ Q9000 series. The U.S. version can be purchased directly from the ASQ (American Society for Quality) or ANSI. To be ISO 9001 certified, a third-party auditor assesses an organization, and certification is typically good for about 3 years, after which a complete reassessment is required. Note that ISO 9000 certification does not necessarily indicate quality products; it indicates only that documented processes are followed. (Publication of revised ISO standards was expected in late 2000; see http://www.iso.ch/ for the latest info.)
Software Life Cycle: Includes aspects such as initial concept, requirements analysis, functional design, internal design, documentation planning, test planning, coding, documentation, testing, retesting, and maintenance.
Configuration Management: Covers the processes used to control, coordinate and track: code, documentation, requirements, problems and change requests, designs, the changes made to them, and who makes the changes. The purpose of software configuration management is to identify all the interrelated components of software and to control their evolution throughout the various life cycle phases.
Version Control: As an application evolves over time, many different versions of its software components are created, and there needs to be an organized process to manage changes in the software components and their relationships.
Change Control: The process by which a modification to a software component is proposed, evaluated, approved or rejected, scheduled, and tracked.
Business Requirements: This document describes the users' needs for the application.
Functional Requirements: Primarily derived from the Business Requirements; describes the functional needs.
Prototype: A look-and-feel representation of the proposed application. It basically shows the placement of fields and modules and the generic flow of the application.
Test Strategy: A statement of the overall approach to testing, identifying what levels of testing are to be applied and the techniques, methods and tools to be used. A test strategy should ideally be organization-wide, applicable to all of an organization's projects.
Here, levels refers to unit, integration, system, etc.; methods refers to dynamic and static testing; techniques refers to white box and black box testing.
Test Methodology: The actual implementation we use for each project. This may differ from project to project. For example, we may mention in the test strategy that we use load testing, but in the methodology of a particular project we may not perform it.
Test Policy: A management definition of testing for a department. It specifies the milestones to be achieved. A test policy contains these headings: a. Definition of testing b. Testing system: the methods through which testing will be achieved c. Evaluation: how information services management will measure and evaluate testing d. Standards: the standards against which testing will be measured.
Test Life Cycle: Test analysis (risk analysis), test planning, designing test cases, executing test cases, maintaining test logs, reporting.
Risk Management: The process of measuring or assessing risk and then developing strategies to manage it. In ideal risk management, a prioritization process is followed whereby the risks with the greatest loss and the greatest probability of occurring are handled first.
Risk Analysis:
Schedule Risk: Factors that may affect the schedule of testing.
Technology Risk: Risks in the hardware and software of the application.
Resource Risk: Test team availability on slippage of the project schedule.
Support Risk: Clarifications required on specifications and the availability of personnel for the same.
Bug: Something abnormal to the expected behavior is called a bug.
Error: A software bug found before software release is called an error.
Defect: A software bug found after the software is released to the customer.
Fault: A defect is a software bug found in production; every defect leads to a fault in the system.
SDLC:
Waterfall Model: This approach breaks the development cycle down into discrete phases, each with a rigid sequential beginning and end.
Each phase is fully completed before the next is started; once a phase is completed, development never goes back to change it. The phases are Requirements, Design, Coding, Testing and Maintenance.
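The risk prioritization described earlier (greatest loss and greatest probability handled first) is often computed as exposure = probability x loss; a small sketch with made-up risk names and numbers:

```python
# Hypothetical risk register; probabilities and loss figures are illustrative.
risks = [
    {"name": "schedule slip",    "probability": 0.6, "loss": 50},
    {"name": "hardware failure", "probability": 0.1, "loss": 200},
    {"name": "spec ambiguity",   "probability": 0.8, "loss": 40},
]

# Exposure combines likelihood and impact into one comparable number.
for r in risks:
    r["exposure"] = r["probability"] * r["loss"]

# Handle the highest-exposure risks first, per ideal risk management.
for r in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(r["name"], r["exposure"])
```

Here "spec ambiguity" (0.8 x 40 = 32) outranks the rarer but costlier "hardware failure" (0.1 x 200 = 20), which is exactly the trade-off the prioritization step resolves.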
Iterative Model: This model does not start with a full specification of requirements. Instead, development begins by specifying and implementing just part of the software, which can then be reviewed in order to identify further requirements. This process is repeated, producing a new version of the software for each cycle of the model.
Progressive Model:
Testing Methods:
Static Testing: Static testing approaches are time independent. They do not necessarily involve either manual or automated execution of the product. Examples include syntax testing, structured walkthroughs, and inspections.
Dynamic Testing: Dynamic testing techniques are time dependent and involve executing a specific sequence of instructions. For example, boundary testing is a dynamic testing technique that requires the execution of test cases on the computer with a specific focus on the boundary values associated with the inputs or outputs of the program.
Test Metrics: A test metric is a measurable unit used to measure the productivity and quality of the system produced. Metrics are generally classified into size-oriented metrics, based on lines of code (KLOC), and function-oriented metrics, where the function point (FP) metric is used to calculate the functionality delivered by the system. Example test metrics:
Test Coverage = number of units (KLOC/FP) tested / total size of the system
Test Cost (in %) = cost of testing / total cost * 100
Cost to Locate Defect = cost of testing / number of defects located
Defects Detected in Testing (in %) = defects detected in testing / total system defects * 100
Acceptance Criteria Tested = acceptance criteria tested / total acceptance criteria
To measure a testing department's performance we use Defect Removal Efficiency, with the formula: (No. of defects found by the testing team / (No. of defects found by the testing team + No.
of defects found by the customer)) * 100. For example, if the testing department has found 90 defects and the customer has found 10 defects, then DRE = (90/100) * 100 = 90% defect removal efficiency.
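The DRE formula above is trivially computed; a one-function sketch using the worked numbers from the text:

```python
def dre(found_by_testing, found_by_customer):
    """Defect Removal Efficiency, as defined in the text:
    defects found by the testing team as a percentage of all defects found."""
    return found_by_testing / (found_by_testing + found_by_customer) * 100

print(dre(90, 10))  # the example above: 90.0
print(dre(10, 90))  # the reversed case: 10.0
```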
If the case is reversed, the defect removal efficiency is only 10%.
Cyclomatic Complexity: This metric is an indication of the number of 'linear' segments in a method (i.e., sections of code with no branches) and therefore can be used to determine the number of tests required to obtain complete coverage. It can also be used to indicate the psychological complexity of a method. A method with no branches has a cyclomatic complexity of 1, since there is a single path through it. This number is incremented whenever a branch is encountered. In this implementation, statements that represent branching are defined as: 'for', 'while', 'do', 'if', 'case' (optional), 'catch' (optional) and the ternary operator (optional). The sum of cyclomatic complexities for methods in local classes is also included in the total for a method.
Check In/Check Out: Check in means changing, modifying or adding to the information present in the main repository; when another person checks out, those changes will be reflected.
Bug Leakage: If a bug that should have been found in one phase (e.g., requirements) is instead found in a later phase (e.g., design), it is said to be bug leakage.
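The counting rule described for cyclomatic complexity (start at 1, add 1 per branch) can be sketched with a simplified Python AST walker; this counter is illustrative and only handles a subset of branching constructs:

```python
import ast
import textwrap

# Simplified set of branching nodes: if, for, while, try, and the
# ternary operator (Python has no 'do' or 'case' fall-through).
BRANCHES = (ast.If, ast.For, ast.While, ast.Try, ast.IfExp)

def cyclomatic_complexity(source):
    """Start at 1 (one linear path) and add 1 for each branch encountered."""
    tree = ast.parse(textwrap.dedent(source))
    return 1 + sum(isinstance(node, BRANCHES) for node in ast.walk(tree))

sample = """
def grade(score):
    if score >= 90:
        return "A"
    if score >= 60:
        return "B"
    return "F"
"""
print(cyclomatic_complexity(sample))  # 1 + two if-statements = 3
```

A complexity of 3 suggests three test cases (one per linear path) are needed for complete branch coverage of `grade`.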