Chap 5
Managing the Test Activities
Purpose of a Test Plan
• A test plan describes the objectives, resources and processes for a test project.
• A test plan:
• Documents the means and schedule for achieving test objectives
• Helps to ensure that the performed test activities will meet the established criteria
• Serves as a means of communication with team members and other stakeholders
• Demonstrates that testing will adhere to the existing test policy and test strategy (or explains why the testing will deviate from
them)
• Test planning guides the testers’ thinking and forces the testers to confront the future challenges related to risks, schedules,
people, tools, costs, effort, etc.
• The process of preparing a test plan is a useful way to think through the efforts needed to achieve the test project objectives.
Content of a Test Plan
Typical content of a test plan includes:
• Context of testing (e.g., scope, test objectives, constraints, test basis)
• Assumptions and constraints of the test project
• Stakeholders (e.g., roles, responsibilities, relevance to testing, hiring and training
needs)
• Communication (e.g., forms and frequency of communication, documentation
templates)
• Risk register (e.g., product risks, project risks)
• Test approach (e.g., test levels, test types, test techniques, test deliverables, entry
criteria and exit criteria, independence of testing, metrics to be collected, test data
requirements, test environment requirements, deviations from the organizational test
policy and test strategy)
• Budget and schedule
Tester's Contribution to Iteration and Release Planning
In iterative SDLCs, typically two kinds of planning occur:
release planning and iteration planning
Release Planning:
• Looks ahead to the release of a product
• Defines and re-defines the product backlog
• May involve refining larger user stories into smaller user stories
• Serves as the basis for the test approach and test plan across all iterations
• Testers involved in release planning:
• Participate in writing testable user stories and acceptance criteria
• Participate in project and quality risk analyses
• Estimate test effort associated with user stories
• Determine the test approach
• Plan the testing for the release
Tester's Contribution to Iteration and Release Planning
Iteration Planning:
• Looks ahead to the end of a single iteration
• Concerned with the iteration backlog
• Testers involved in iteration planning:
• Participate in the detailed risk analysis of user stories
• Determine the testability of user stories
• Break down user stories into tasks (particularly testing tasks)
• Estimate test effort for all testing tasks
• Identify and refine functional and non-functional aspects of the test object
Entry Criteria and Exit Criteria
Entry Criteria in Software Testing
Definition:
Preconditions that must be met before testing activities can begin. They ensure readiness for efficient,
effective, and low-risk testing.
Why Entry Criteria Matter:
• Avoids wasted effort on unstable or incomplete items
• Reduces risk, cost, and time overruns
• Ensures resource and environment availability
Examples of Entry Criteria:
• ✅ Test environment is set up and accessible
• ✅ Required tools and licenses are available
• ✅ Test data is prepared and verified
• ✅ All smoke tests have passed
• ✅ Testable requirements or user stories are finalized
In Agile:
📌 Definition of Ready (DoR) – Ensures a user story is well-defined and ready for development and testing.
Entry Criteria and Exit Criteria
Exit Criteria in Software Testing
Definition:
Conditions that must be met to conclude a testing phase or declare a test level complete.
Why Exit Criteria Matter:
• Defines what “done” looks like
• Prevents premature release
• Ensures product quality and risk coverage
Examples of Exit Criteria:
• ✅ All planned test cases are executed
• ✅ Test coverage goals are met
• ✅ No critical or high-priority defects remain unresolved
• ✅ All regression tests are automated
• ✅ Static testing is completed and defects are reported
In Agile:
📌 Definition of Done (DoD) – An objective checklist used to declare a user story or feature complete and potentially releasable.
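Since exit criteria are meant to be objective, they can be expressed as a simple checklist. A minimal sketch in Python; the criteria names and their states below are illustrative assumptions taken from the examples above:

```python
# A minimal sketch: exit criteria (or a Definition of Done) as an
# objective checklist. Criteria and states are illustrative assumptions.

exit_criteria = {
    "all planned test cases executed": True,
    "coverage goals met": True,
    "no critical/high-priority defects unresolved": False,
    "regression tests automated": True,
}

unmet = [name for name, met in exit_criteria.items() if not met]
print("Test level complete" if not unmet else f"Not done yet: {unmet}")
```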
🧮 Test Effort Estimation Techniques
What is Test Effort Estimation?
The process of predicting the effort required to meet testing objectives. It’s based on
assumptions and subject to estimation errors.
Key Principles:
• Small tasks = more accurate estimates ✅
• Large tasks → Break down into smaller units 🔍
There are two types of estimation:
• Metrics-based estimation (driven by calculation on historical data):
• Estimation based on ratios
• Extrapolation
• Expert-based estimation (driven by people's judgment rather than by calculation):
• Wideband Delphi
• Three-point estimation
🧮 Test Effort Estimation Techniques
Estimation Based on Ratios (Metrics-Based Estimation)
What is it?
A technique that uses historical data and standard ratios from previous projects to estimate testing effort.
Key Concept:
Use the Development-to-Test Effort Ratio from past projects to forecast the effort for new ones.
📁 Example:
• Previous project ratio: Dev:Test = 3:2
• Current project: 600 person-days of development
• Estimated test effort:
> (2/3) × 600 = 400 person-days
Why Use It?
✅ Based on real organizational data
✅ Great for similar project types
✅ Fast and relatively simple
 ⚠️Important: Ensure project similarity & context match when applying ratios!
🧮 Test Effort Estimation Techniques
Extrapolation (Metrics-Based Estimation)
• What is it?
Estimation technique where early project data is collected, and effort for the remaining work is predicted by projecting trends
using a mathematical model.
How It Works:
• Measure test effort in initial stages/iterations
• Use that data to estimate future effort
• Especially useful in iterative SDLCs (e.g., Agile)
Example:
If the test effort in the last 3 iterations was:
• Iteration 1: 100 person-hours
• Iteration 2: 110 person-hours
• Iteration 3: 90 person-hours
➡️ Average = 100 person-hours
➡️ Predicted next iteration ≈ 100 person-hours
Benefits:
• More accurate in ongoing projects
• Adapts well to changing scopes
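A sketch of the extrapolation in the example above, using a simple average of past iterations (a real project might instead fit a trend line to the data):

```python
# Extrapolation by averaging: predict the next iteration's test effort
# from the measured effort of earlier iterations.

def predict_next_iteration(past_efforts: list[float]) -> float:
    return sum(past_efforts) / len(past_efforts)

print(predict_next_iteration([100, 110, 90]))  # 100.0 person-hours
```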
🧮 Test Effort Estimation Techniques
Wideband Delphi Estimation Technique
What is it?
An iterative, expert-based technique for estimating effort that improves accuracy through anonymous estimates and structured feedback.
How It Works:
1. Experts estimate effort individually and anonymously.
2. Estimates are collected and analyzed.
3. If estimates deviate beyond a set boundary:
1. Experts discuss reasons (facilitated discussion).
2. New estimates are made again in isolation.
4. Process repeats until consensus is reached.
Key Features:
• Encourages honest input without peer pressure.
• Reduces bias through anonymity.
• Ideal for complex or high-risk tasks.
Planning Poker = Variant of Wideband Delphi
• Commonly used in Agile teams.
• Estimates made using numbered cards (e.g., Fibonacci series).
• Quick, fun, and collaborative for user story estimation.
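The convergence loop at the heart of Wideband Delphi can be sketched as code. The consensus boundary and the round-by-round estimates below are hypothetical:

```python
# Illustrative Wideband Delphi rounds: anonymous estimates are collected,
# and rounds repeat until the spread falls within an agreed boundary.

def has_consensus(estimates: list[float], boundary: float) -> bool:
    return max(estimates) - min(estimates) <= boundary

rounds = [
    [5, 13, 21, 8],   # round 1: wide spread -> facilitated discussion
    [8, 13, 10, 9],   # round 2: narrower after discussion
    [10, 11, 10, 9],  # round 3: within the boundary -> consensus
]
for i, estimates in enumerate(rounds, start=1):
    if has_consensus(estimates, boundary=3):
        print(f"Consensus in round {i}: ~{sum(estimates) / len(estimates):.0f} units")
        break
    print(f"Round {i}: spread too wide, discuss and re-estimate")
```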
🧮 Test Effort Estimation Techniques
Three-Point Estimation Technique
What is it?
An expert-based estimation method using three values to improve accuracy and assess risk.
The Three Estimates:
• a = Most Optimistic
• m = Most Likely
• b = Most Pessimistic
Estimation Formula:
E = (a + 4m + b) / 6
Standard Deviation (SD):
SD = (b – a) / 6
Example:
• a = 6, m = 9, b = 18 (person-hours)
• E = (6 + 4×9 + 18) / 6 = 10
• SD = (18 – 6) / 6 = 2
➡️ Estimate = 10 ± 2 person-hours
→ i.e., between 8 and 12 person-hours
Benefits:
• Incorporates uncertainty and risk
• Provides a range instead of a single number
• Helps visualize best-case, realistic, and worst-case effort
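The formulas on this slide transcribe directly into code, checked against the worked example (a = 6, m = 9, b = 18):

```python
# Three-point (PERT-style) estimation: expected effort and standard deviation.

def three_point(a: float, m: float, b: float) -> tuple[float, float]:
    e = (a + 4 * m + b) / 6   # expected effort
    sd = (b - a) / 6          # standard deviation
    return e, sd

e, sd = three_point(a=6, m=9, b=18)
print(f"E = {e}, SD = {sd}, range = {e - sd}..{e + sd} person-hours")
# E = 10.0, SD = 2.0, range = 8.0..12.0 person-hours
```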
Test Case Prioritization
What is Test Case Prioritization?
• Test case prioritization is the process of determining the order in which test cases should be executed during the software testing
process.
• The goal is to optimize the execution by running the most important or risky tests first, which helps in identifying defects early
and improving test coverage.
Why Prioritize?
• Efficiency: Save time and resources by focusing on high-priority areas.
• Risk Management: Identify critical issues sooner based on risk analysis.
• Test Coverage: Ensure that key areas of the software are covered as early as possible.
Prioritization Strategies:
1. Risk-Based Prioritization:
• Order test cases based on the risk of failure or the criticality of the features.
• Prioritize tests related to high-risk areas or components.
2. Coverage-Based Prioritization:
• Focus on tests that maximize code coverage (e.g., statement or branch coverage).
• Additional coverage: run the tests that provide the highest additional coverage first.
3. Requirements-Based Prioritization:
• Based on business priorities and critical features as defined by stakeholders.
Test Case Prioritization
Challenges and Considerations
Test Case Dependencies:
• Sometimes, higher-priority test cases depend on lower-priority ones. These dependencies
must be respected to avoid incorrect test results.
Resource Availability:
• Test execution may be affected by the availability of resources like test environments, tools,
or people.
• Consider time windows for resource availability when scheduling test runs.
Ideal Practice:
• Test cases should be prioritized based on a combination of the above strategies,
considering the importance of requirements, risk levels, and the coverage achieved by
each test.
Test Case Prioritization
Logical dependency: occurs when the sequence of test execution is dictated by the logical flow or business logic of the application being tested.
Functional flow example: a user registration test must pass before a test for user login can run because, logically, the login functionality requires an existing user.
Technical dependency: occurs when one test case relies on the outcome or execution of another test case due to technical reasons.
By considering priority alone:
T1 → T3 → T2 → T4 OR T3 → T1 → T2 → T4
By considering priority and technical dependency:
T3 → T2 → T1 → T4 (we run T3 first because it is both independent and high priority)
By considering priority plus technical and logical dependency:
T2 → T1 → T3 → T4 OR T2 → T3 → T1 → T4
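A short sketch of dependency-aware ordering: repeatedly run the highest-priority test whose prerequisites have already been executed. The priorities and dependencies below are assumptions chosen to reproduce the priority-plus-technical-dependency ordering above:

```python
# Hypothetical priorities (1 = highest) and technical dependencies.
priorities = {"T1": 1, "T2": 3, "T3": 1, "T4": 4}
depends_on = {"T1": ["T2"], "T2": ["T3"]}  # T1 needs T2; T2 needs T3

def execution_order(prio: dict[str, int], deps: dict[str, list[str]]) -> list[str]:
    order: list[str] = []
    remaining = set(prio)
    while remaining:
        # Tests whose dependencies have all been executed already.
        ready = [t for t in remaining if all(d in order for d in deps.get(t, []))]
        ready.sort(key=lambda t: prio[t])  # pick the highest-priority ready test
        order.append(ready[0])
        remaining.remove(ready[0])
    return order

print(execution_order(priorities, depends_on))  # ['T3', 'T2', 'T1', 'T4']
```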
Test Pyramid
The Test Pyramid is a model that helps teams plan their automated testing strategy. It
shows that tests vary in their granularity (level of detail) and that different types of
tests serve different purposes.
A pyramid showing:
• Bottom: Unit Tests
• Middle: Service/Integration Tests
• Top: UI/End-to-End Tests
Test Pyramid
• Bottom Layer:
• Unit Tests (smallest and fastest)
• Test tiny pieces of functionality in isolation.
• Many unit tests are needed for good coverage.
• Middle Layer:
• Service or Integration Tests
• Test the interaction between components or services.
• Fewer than unit tests, but still important.
• Top Layer:
• UI Tests or End-to-End Tests (largest and slowest)
• Test entire user journeys through the system.
• Only a few are needed for reasonable coverage because they are slow and complex.
Note: different models might slightly rename or adjust these layers, but the key idea remains: more fast, isolated tests at the bottom; fewer slow, broad tests at the top.
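For the bottom layer, a minimal pytest-style unit test shows why these tests are fast: one tiny function, tested in isolation (both the function and the test are illustrative):

```python
# A bottom-of-the-pyramid unit test: small, isolated, and fast to run.

def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99
```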
Testing Quadrants
• Helps teams plan the right types of tests for Agile development.
• Tests are categorized along two axes:
• Business facing vs. technology facing
• Support the team vs. critique the product
• Ensures complete and balanced testing across the SDLC.
Testing Quadrants
Quadrant Q1 (technology facing, support the team)
This quadrant contains component and component integration tests. These tests should be
automated and included in the CI process.
Quadrant Q2 (business facing, support the team)
This quadrant contains functional tests, examples, user story tests, user experience
prototypes, API testing, and simulations. These tests check the acceptance criteria and
can be manual or automated.
Quadrant Q3 (business facing, critique the product)
This quadrant contains exploratory testing, usability testing, user acceptance testing.
These tests are user-oriented and often manual.
Quadrant Q4 (technology facing, critique the product)
This quadrant contains smoke tests and non-functional tests (except usability tests).
These tests are often automated.
Risk Management
Definition (ISO 31000):
Organizations face internal & external factors that create uncertainty in achieving
objectives. Risk management helps navigate these uncertainties.
Key Benefits:
• Increases likelihood of meeting goals
• Enhances product quality
• Builds stakeholder trust
Main Activities:
1. Risk Analysis
1. Risk Identification: identify risks using activities such as brainstorming sessions and expert interviews.
2. Risk Assessment: after identification, assess and prioritize the risks, placing higher risks at the top and lower risks at the bottom.
2. Risk Control
1. Risk Mitigation: taking steps to reduce the impact or likelihood of a risk.
2. Risk Monitoring: keeping an eye on identified risks over time to see if they change, grow, or new ones appear.
Project Risks in Software Testing
Definition:
Project risks are related to the management and execution of the project and can
affect schedule, budget, or scope.
Examples of Project Risks:
• Organizational Issues:
• Delays in deliverables, unrealistic estimates, cost-cutting
• People Issues:
• Lack of skills, team conflicts, poor communication, understaffing
• Technical Issues:
• Scope creep, outdated or missing tools
• Supplier Issues:
• Third-party failures, vendor bankruptcy
Impact:
Can delay the project, increase costs, or prevent it from meeting its objectives.
Product Risks in Software Testing
Definition:
Product risks relate to the quality and performance of the software being built.
Examples of Product Risks (based on ISO 25010):
• Missing or incorrect functionality
• Calculation errors, runtime failures
• Poor performance or architecture
• Bad user experience, security flaws
Consequences of Product Risks:
• User dissatisfaction
• Loss of revenue or reputation
• High maintenance or support costs
• Legal issues or criminal penalties
• In severe cases: physical harm or injury
Product Risk Analysis
Purpose:
To identify and assess risks related to the product's quality so testing can minimize residual risk effectively.
When to Start:
• Ideally begins early in the Software Development Life Cycle (SDLC).
Key Components:
1. Risk Identification:
1. Create a comprehensive list of product risks
2. Techniques: brainstorming, workshops, interviews, cause-effect diagrams
2. Risk Assessment:
1. Categorize risks
2. Evaluate risk likelihood and impact
3. Determine risk level and prioritize
4. Propose mitigation strategies
Approaches to Risk Assessment:
• Quantitative:
Risk Level = Likelihood × Impact, where likelihood means how probable it is that the risk will actually happen.
• Qualitative:
Use a risk matrix to rank and visualize risks.
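A minimal sketch of the quantitative approach, with hypothetical risks and scales (likelihood 0–1, impact 1–5):

```python
# Quantitative risk assessment: risk level = likelihood x impact,
# ranked highest first. All risk items and values are illustrative.

risks = {
    "payment calculation error": (0.4, 5),  # (likelihood, impact)
    "slow report generation":    (0.7, 2),
    "login security flaw":       (0.2, 5),
}

for name, (likelihood, impact) in sorted(
    risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True
):
    print(f"{name}: risk level = {likelihood * impact:.1f}")
# payment calculation error: 2.0 / slow report generation: 1.4 / login security flaw: 1.0
```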
Product Risk Analysis
Impact on Testing Strategy:
• Defines scope and depth of testing
• Selects test levels (e.g., unit, system, acceptance)
• Suggests test types (e.g., functional, performance, security)
• Recommends test techniques and coverage goals
• Helps estimate effort per testing task
• Supports defect prioritization: focus on finding critical issues early
May also recommend non-testing activities (e.g., reviews, simulations) to reduce product
risk.
Product Risk Control
Definition:
Product risk control comprises all actions taken to respond to identified and assessed
product risks.
Two Main Components:
• Risk Mitigation:
Implement actions to reduce risk levels (based on risk assessment).
• Risk Monitoring:
Ensure effectiveness of mitigation, refine risk assessment, and detect new/emerging
risks.
Risk Response Options (Veenendaal, 2012):
• Mitigation (e.g., through testing)
• Acceptance
• Transfer
• Contingency planning
Product Risk Control
Risk Mitigation Through Testing
Testing-Related Risk Mitigation Actions:
• ✅ Select testers with the right experience and skills for the risk type
• ✅ Ensure appropriate level of test independence
• ✅ Conduct reviews and perform static analysis
• ✅ Use appropriate test techniques and set coverage levels
• ✅ Target affected quality characteristics with correct test types
• ✅ Perform dynamic testing, including regression testing
Test Monitoring, Test Control and Test Completion
Test Monitoring:
• Gathers information to assess test progress
• Measures if exit criteria or test tasks are being met
• Example targets: risk coverage, requirements coverage, acceptance criteria
Test Control:
• Uses monitoring data to issue control directives for improved testing efficiency
• Examples of control directives:
• 🔄 Reprioritize tests when a risk becomes an issue
• ✅ Re-evaluate test items based on entry/exit criteria after rework
• 🕒 Adjust test schedule due to delays in test environment
• 👥 Add resources where necessary
Test Monitoring, Test Control and Test Completion
Test Completion Activities
Test Completion:
• Involves collecting data from completed test activities
• Consolidates:
• 📁 Testware
• 📊 Metrics and logs
• 📚 Lessons learned
When It Happens:
• 📌 Completion of a test level
• 🔁 End of an agile iteration
• ✅ Software/system release
• 🛠 Completion of maintenance release
• ❌ Cancellation of a test project
Metrics Used in Testing
Purpose of Test Metrics:
Track progress, quality, and effectiveness of testing activities to support test monitoring,
control, and completion.
Common Types of Metrics:
• 📈 Project Progress Metrics:
Task completion, resource usage, test effort
• 🧪 Test Progress Metrics:
Test case implementation, environment readiness, execution stats (run/passed/failed), test
duration
• ✅ Product Quality Metrics:
Availability, response time, mean time to failure
• 🐞 Defect Metrics:
Number and priority of defects found/fixed, defect density, detection percentage
• ⚠️Risk Metrics:
Residual risk level
• 📊 Coverage Metrics:
Requirements and code coverage
• 💰 Cost Metrics:
Testing cost, organizational cost of quality
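Two of the defect metrics named above have simple, commonly used formulas; the numbers here are illustrative:

```python
# Defect density (defects per KLOC) and defect detection percentage (DDP):
# the share of all known defects that testing caught before release.

def defect_density(defects: int, kloc: float) -> float:
    return defects / kloc

def defect_detection_percentage(found_in_testing: int, found_after_release: int) -> float:
    return 100 * found_in_testing / (found_in_testing + found_after_release)

print(defect_density(45, kloc=30))                              # 1.5 defects/KLOC
print(defect_detection_percentage(90, found_after_release=10))  # 90.0 %
```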
Purpose, Content and Audience for Test Reports
Purpose of Test Reports:
• Summarize and communicate test information during and after testing
• Support ongoing test control and decision-making
• Inform updates to test schedule, resources, or plans due to deviations or changes
• Provide insight for subsequent testing phases
Types of Reports:
• Test Progress Reports:
Ongoing updates during test execution
• Test Completion Reports:
Summary at the end of a test stage (e.g., level, cycle, iteration)
Audience:
• Project Managers
• Test Leads and QA Teams
• Developers
• Business Stakeholders
Purpose, Content and Audience for Test Reports
Content of Test Progress Reports
Common Contents of Test Progress Reports:
• 📅 Test Period Covered
• 📈 Progress Status:
On track, ahead, or behind schedule with key deviations
• 🚧 Impediments & Workarounds
• 📊 Test Metrics:
Execution stats, defect data, coverage, etc.
• ⚠️New or Changed Risks
• 🔜 Planned Testing Activities for Next Period
Report Frequency:
• Typically generated daily, weekly, or per sprint/iteration
Purpose, Content and Audience for Test Reports
Test Completion Report Overview
When It’s Prepared:
• At the end of a project, test level, or test type
• Ideally after meeting exit criteria
Purpose:
• Summarizes final testing results
• Evaluates product quality and testing effectiveness
• Supports decision-making for release or further action
Data Sources:
• Test progress reports
• Project metrics and observations
Purpose, Content and Audience for Test Reports
Contents & Audience of Completion Reports
Typical Contents:
• 📝 Test summary
• ✅ Evaluation of product quality & test objectives (based on the test plan)
• 📉 Deviations from the plan (schedule, duration, effort)
• 🚧 Testing impediments & workarounds
• 📊 Test metrics from progress reports
• ⚠️Unfixed defects & unmitigated risks
• 📚 Lessons learned
Reporting Considerations:
• Audience needs determine formality and frequency
• 🤝 Team reports: frequent & informal
• 📄 Project-level reports: formal, structured, one-time
• 📘 Follows ISO/IEC/IEEE 29119-3 standard templates
Configuration Management
Configuration Management (CM) provides a discipline for identifying, controlling, and
tracking work products such as:
• Test plans, test strategies
• Test conditions, test cases, test scripts
• Test results, test logs, and test reports
For complex items (e.g., a test environment), CM records:
• The items it consists of
• Their relationships and versions
Once approved for testing, a configuration item becomes a baseline and can only be changed through a formal change control process.
CM keeps a record of changes and allows reverting to previous baselines to reproduce
earlier test results.
Configuration Management
CM ensures:
• All configuration items (including test items) are:
• Uniquely identified
• Version controlled
• Tracked for changes
• Related to other items for full traceability
• All documentation and software items are unambiguously referenced in test
documentation
In DevOps pipelines (CI, CD, and continuous deployment):
• CM is automated
• Supports associated testing in an integrated workflow
Defect Management
• Defect management is essential, as one major test objective is to find defects.
• "Defects" may include real defects, false positives, or change requests; this is clarified during the defect handling process.
• Anomalies can be reported at any SDLC phase; the format varies depending on the SDLC.
• The defect management process includes:
• Workflow from discovery to closure
• Logging, analysis, classification
• Response decision (fix, defer, reject)
• Closure of the defect report
• All stakeholders must follow the process.
• Defects found during static testing should be handled similarly.
Defect Management
Defect Reports – Objectives & Content
Objectives of Defect Reports:
• Give sufficient information to resolve issues
• Track the quality of work products
• Provide improvement ideas for development/testing
Typical Defect Report (Dynamic Testing):
• Unique identifier, title, date, author, and role
• Test object and environment info
• Context: test case, SDLC phase, data/techniques used
• Failure description, steps to reproduce, logs/screenshots
• Expected vs actual results
• Severity and priority
• Status: open, deferred, closed, etc.
• References (e.g., to test case)
Note: some fields may be auto-filled by tools. Templates and examples are available in ISO/IEC/IEEE 29119-3 (which calls them incident reports).
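The typical report fields above map naturally onto a record type. A sketch as a Python dataclass; the field names and example values are assumptions, not a standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class DefectReport:
    identifier: str
    title: str
    date: str
    author: str
    test_object: str
    environment: str
    steps_to_reproduce: list[str]
    expected_result: str
    actual_result: str
    severity: str = "major"
    priority: str = "high"
    status: str = "open"  # e.g., open, deferred, closed
    references: list[str] = field(default_factory=list)

report = DefectReport(
    identifier="DEF-123",
    title="Login fails for names containing an apostrophe",
    date="2025-05-01",
    author="tester",
    test_object="login service",
    environment="staging, Chrome",
    steps_to_reproduce=["Open login page", "Enter name O'Brien", "Submit"],
    expected_result="User is logged in",
    actual_result="HTTP 500 error",
    references=["TC-45"],
)
print(report.status)  # open
```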