Regression Testing: Down the Rabbit Hole
Neil Studd, Towers Watson
About Me
• 10 years of testing
• Cambridge-based
• Work for companies with red logos
Only the names have changed…
Chasing the Holy Grail
• We’ll hear lots today about how regression testing should be done
– …in an ideal world
– …easiest for new projects
– …or when starting afresh
– …when there’s wider business buy-in, e.g. continuous delivery
• The “holy grail” of regression testing…
I took the red pill
• Desktop software
• Infrequent releases
• Client-driven features
• Client-driven deadlines
• (Time v features v quality: Quality often loses)
• Manual regression cycle
• At the end of the release
Our sacred texts
• Tests are treated as a product bible
• Handed down through generations
• Revered and followed without question
• Very much “of their time”; not modified to reflect new evidence
Oh, the things I’ve seen…
• Tests not testing what they claimed to test
• Expected result = “a sensible error”
• …but that was actually a bug!
• Not enough detail
• Too much detail
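The “a sensible error” pitfall above can be sketched in code. This is a hypothetical illustration (the function, error type, and messages are invented, not from the deck): a check that accepts *any* error as “sensible” passes even when the error is itself a bug, whereas naming the expected error type and message pins the behaviour down.

```python
class CorruptFileError(Exception):
    """Raised when a data file fails its integrity check."""

def open_data_file(contents: bytes) -> bytes:
    # Stand-in for the product's file loader.
    if not contents.startswith(b"HDR1"):
        raise CorruptFileError("file header is invalid")
    return contents[4:]

# Too vague: "expect a sensible error" -- any exception at all passes,
# including a crash that should have been reported as a bug.
def check_vague() -> bool:
    try:
        open_data_file(b"garbage")
        return False            # no error at all: fail
    except Exception:
        return True             # *any* error counts as "sensible"

# Precise: name the exact error type and message we expect.
def check_precise() -> bool:
    try:
        open_data_file(b"garbage")
        return False
    except CorruptFileError as e:
        return "invalid" in str(e)

assert check_vague()
assert check_precise()
```

With the vague version, replacing `CorruptFileError` with an unhandled `IndexError` would still pass; the precise version would fail and surface the regression.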
All the information, all at once
Why was it done this way?
We need to go deeper
• Five whys:
– Not peer-reviewing
– Short of time/resources
– Fixed project deadline
– Unrealistic promise to customer
– Salespeople too far removed
• Dev/test separation, driven by disrespect (dev) and fear (test)
• “Testing is a tester’s problem”
We fell for the dark side
• Don’t allow your tools to start working against you!
• TFS: supports multiple references to one test
• TFS: supports “shared steps” in tests, which quickly multiplies setup/teardown
• Just because you can easily record a regression test doesn’t mean you should
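The shared-steps trap can be sketched as a toy cost model (a hypothetical illustration, not TFS code; `expensive_setup` and the feature names are invented): when every tiny recorded test re-runs the shared setup/teardown, N tests pay the setup cost N times, whereas one scenario-based session covering the same ground pays it once.

```python
setup_runs = 0

def expensive_setup():
    # Stand-in for launching the app, seeding data, logging in, etc.
    global setup_runs
    setup_runs += 1

def check(feature: str) -> bool:
    return True            # stand-in for one small verification step

features = ["save", "load", "print", "export", "undo"]

# Style 1: five tiny recorded tests, each invoking the shared setup steps.
for f in features:
    expensive_setup()
    assert check(f)
assert setup_runs == 5     # setup cost paid once per test

# Style 2: one scenario-based session touching the same five features.
setup_runs = 0
expensive_setup()
for f in features:
    assert check(f)
assert setup_runs == 1     # setup cost paid once in total
```

The speaker notes make the same point: tests that all exercise the same area are better turned into single scenarios or sessions than left as dozens of setup-heavy fragments.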
What I didn’t do
• Not burning books…
• …written in good faith
• …useful metadata
• …cross-referencing
• …gives information about previous perceived severities
How I’m surviving
• Rewriting/reducing
• Piecemeal
• Session-based
• To answer “Is there a problem here?”
• …which involves looking at the product
How I’m trying to change things
• Training devs to test
• Pairing/reviewing developer unit testing
• Automating black & white checks
• (…but not to replace human interaction)
• More code reviews
• …which feed testing
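A “black & white” check in the sense above is a deterministic pass/fail verification a machine can repeat every build, freeing humans for exploration. The speaker notes name authentication checks as candidates, so here is a minimal sketch along those lines (the `login` function and its data are invented stand-ins for the product’s real authentication call):

```python
def login(username: str, password: str) -> bool:
    # Stand-in for the product's authentication call.
    valid = {"alice": "s3cret"}
    return valid.get(username) == password

# Table of unambiguous input -> expected-outcome pairs: no judgement
# required, so a machine can run them on every change.
CASES = [
    ("alice", "s3cret", True),    # correct credentials
    ("alice", "wrong",  False),   # wrong password
    ("",      "s3cret", False),   # blank username
    ("alice", "",       False),   # blank password
]

def run_checks():
    """Return the cases whose actual result differs from the expected one."""
    return [(u, p) for u, p, expected in CASES if login(u, p) != expected]

assert run_checks() == []   # every check passed
```

Anything that needs human judgement (“is this error message sensible?”, “is there a problem here?”) stays out of the table; that is the “not to replace human interaction” caveat on the slide.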
There’s still room to improve
• More automation
• Run more easily/often
• Increased testability
• Address the causes of regressions, rather than fixing the fallout
• Focus on providing value and information
Any questions?
• More thought to come (yes Simon, I’ll write that article for The Testing Planet)
• Blog: neilstudd.com
• Twitter: @neilstudd


Regression Testing: Down the Rabbit Hole (MEWT 2014)


Editor's Notes

  • #2: All slides are the property of Neil Studd & the MEWT organisers; please do not redistribute without prior permission. Alice in Wonderland (2010)
  • #3: Me (2014)
  • #4: All of the following experience accounts are true. They’re anonymised, but I haven’t worked for that many companies, so you could probably work out who they refer to. It’s all meant with the utmost respect for colleagues (past and present), and provided purely for discussion about the betterment of testing processes. Young Guns (1988)
  • #5: The points here all comprise what’s very much the “holy grail” of regression testing That’s good, and I hope to learn a lot about how others are doing that! Monty Python and the Holy Grail (1975)
  • #6: These are my experiences… Time v features v quality = the Project Management Triangle: https://en.wikipedia.org/wiki/Project_management_triangle The Matrix (1999)
  • #7: The Ten Commandments (1956)
  • #8: Tests not testing: One feature, only coverage was “blank inputs = blank outputs” Expected result: Another example, “Add corrupted data to file A, then try to open file B” – of course it meant A, and when done properly it revealed a bug… Not enough detail = Often skipped if confusing, and running it would’ve revealed a bug. (Why was it skipped? Why wasn’t the problem surfaced? Why wasn’t it understood?) Too much detail = many different test cases within one; too guided, the “anyone can run them” example Wayne’s World 2 (1993)
  • #9: A mixture of “too much” and “not enough”! Just cram the entire text into the title. Attempting some form of “all-pairs” test for a few key product elements, but presenting them in a hard-to-comprehend manner They’re all authentication checks = candidates for automation, where a computer won’t have to comprehend all of this… Star Trek: The Next Generation, “Deja Q” (1990)
  • #10: Quote from Grace Hopper
  • #11: How did our tests get into this state? A quick five-whys analysis: https://en.wikipedia.org/wiki/5_Whys Other dev/test conflict that I’ve seen, thankfully not in current org, leading to reduced understanding/prioritisation within test team. Inception (2010)
  • #12: A couple of issues that we uncovered after 2yrs of using Microsoft Team Foundation Server Multiple refs to one test: Can be hard to judge work left Shared steps means that each test has its own setup/teardown cycle, when they’re all testing in the same area; better to turn those into single scenarios/sessions. Easy recording (one-line test entry) encouraged tests to be created for even the tiniest thing, because it’s easy to create them …but need to judge the likelihood of regression, what the impact would be, versus the time spent running this test in every cycle… Star Wars Episode VI: Return Of The Jedi (1983)
  • #13: Many tests exist for a reason Good faith, e.g. 6 tests to check 6 different settings for one parameter, where (since launch) we know only 1 or 2 are in everyday use Many tests contain cross-references to (e.g.) support tickets or bug IDs, allowing us to understand why (at one point) this was deemed important to somebody Helps us find out if we’ve previously broken (and fixed) an issue. Regressions can be embarrassing, but more so if you’ve already shipped a patch for it before… Fahrenheit 451 (1966)
  • #14: With colleagues: Refactoring areas, as we find need to update or re-run them (i.e. Not a big, all-consuming, top-down effort to rewrite. More of a “make do and mend” approach) Creating scenario-based sessions, allowing us to (for instance) turn dozens of repetitive feature tests into 6 real-world scenarios which touch upon the same areas This also allows us to record more than just “time spent”, we can review time spent “on-charter” vs time spent investigating bugs, to give us another quality measurement Answering “is there a problem here” requires human interaction; we need to learn from our findings I Am Legend (2007)
  • #15: Some ways in which we can catch regressions earlier, which we’re doing (with reasonable results so far) We’re learning as we go, adapting our approaches, seeing what works. Not precious about process. Training devs to test: Reminding them of common mistakes/weaknesses in our product, as well as pair-review during development Encouraging careful/collaborative development Repeated automated checks to identify regressions (or decrease in quality), after changes occur. Automation with care. Too many checks = not enough people looking at the software, may prevent new risks from being uncovered, reduce learning Code reviews: Not just to catch glaring issues, but also to provide “hints to testers” about what’s impacted and what the risks are, to help shape the regression testing approach 12 Angry Men (1957)
  • #16: Value and information: Is regression even the biggest risk? New problems (in new features) tend to be more prevalent. …If we (and our internal/beta testers) are using the application in a user-centred manner, and regressions slip through, would a real user notice (or care) either? Rocky Balboa (2006)
  • #17: Recommended viewing: Michael Bolton’s EuroSTAR webinar “Things could get worse”: https://www.youtube.com/watch?v=VNfPwD9u9Bk Batman Forever (1995)