Parallel Run Selenium Tests
in a Good / Bad Way
Anton Semenchenko
Anton Semenchenko
Creator of communities www.COMAQA.BY and
www.CoreHard.by, founder of company
www.DPI.Solutions, «tricky» manager at EPAM
Systems. Almost 15 years of experience in IT, main
specialization: Automation, C++ and lower-level development, management, sales.
Anton Semenchenko
EPAM Systems, Software Testing Manager
Agenda
• Why do we need to run tests in parallel?
• Challenge
• Solution
• Algorithm
• Static or Dynamic; stateless or stateful: Architecture-related questions
• Metrics definition
• Custom Test Runner: General scheme
• Risk-based “good” examples of customized Test Runners
• “Bad” examples of customized Test Runners
• Refactoring as a criterion – detailed information
• Metrics definition
Why do we need to run tests in parallel?
Challenge
Challenge
• How to invest the minimum amount of time / money to run tests in parallel
• How to maximize QA Automation ROI
• ROI >> 0
Solution
Let’s define Algorithm “How to run tests in parallel efficiently”
Algorithm
• Define Tests properly / Test attributes
• Define All shared entities
• Select proper Selenium WebDriver Wrapper
• Select proper Architecture
• Test Parallel approach or combination
o Some standard Test Runner
o Build instruments
o Several Processes
o Selenium Grid
o OS / Language-specific multithreading
Why do we need to run tests in parallel?
2 types of reasons:
• Process related reasons
• Project specific reasons
o Risk based reasons
Meaning:
• Test == Feedback Mechanism
• Test Value == Feedback Value
Feedback in Scrum – an example
Methodology – as a predictable way of risk management for some context
• Pre-planning grooming – run-time feedback from the customer side
• Planning poker during Iteration planning – run-time feedback from team side mostly +
customer side too
• Daily Stand up – daily feedback, Team side
• Iteration Demo – per iteration feedback, customer side
• Iteration Retrospective - per iteration feedback, Team side
• Pair programming (as an example) – run-time technical feedback
• Unit Tests + CI – close to run-time technical feedback
• QA Automation Tests + CI + Report + Report Analysis – daily feedback mechanism
• And so on
Feedback in Scrum – an example
Methodology – as a predictable way of risk management for some context
Goals
1. Decrease QA Automation “Window”
2. Decrease Test results analysis “Window”
3. Increase Regression frequency
4. Decrease / Optimize hardware utilization
1. Electricity
2. Hardware costs (buy or rent)
5. Increase QA Automation ROI >> 0
Algorithm
• Define Tests properly / Test attributes
• Define All shared entities
• Select proper Selenium WebDriver Wrapper
• Select proper Architecture
• Test Parallel approach or combination
o Some standard Test Runner
o Build instruments
o Several Processes
o Selenium Grid
o OS / Language-specific multithreading
Algorithm – The best way
Test attributes
• Atomic tests without dependencies
• Some dependency resolver:
o Add a specific annotation plus a new feature to the existing standard test runner
o Develop your own test runner with an integrated dependency resolver
Algorithm – The best way
How to choose – ROI calculator as a solution
Atomic tests without dependencies:
• Decreased time to run
• Lower hardware costs + electricity
• No time spent on dependency definition
• No time spent on dependency resolver
• Lower entry barrier for newcomers
• Less documentation
Some dependency resolver:
• Decreased test debug time
• Decreased test update/support time
Algorithm – The best way
In most cases the answer is “no dependencies”, at the cost of:
• More expensive specialists
• Higher hardware and electricity costs
Algorithm – The best way
o Define All shared entities
o Improve Architecture
o For example:
• Pre-steps using DB
• DB Layer:
▪ Singleton
▪ Thread-safe
▪ Optimize (use a profiler)
▪ Migrate to a Singleton-Bus (instance per thread, in a thread-safe way)
▪ Solution: in an iteration-based way, start from the simplest singleton (see the sketch below)
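A minimal sketch of the idea above, assuming plain JDBC (the URL, credentials and driver are placeholders): one lazily created connection per test thread, which is the “Singleton-Bus” step – an instance per thread, created in a thread-safe way.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// One DB connection per test thread: thread-safe by confinement,
// no synchronization needed in the test code itself.
public final class DbLayer {

    private static final ThreadLocal<Connection> CONNECTION =
            ThreadLocal.withInitial(DbLayer::newConnection);

    private DbLayer() {}

    public static Connection connection() {
        return CONNECTION.get();
    }

    private static Connection newConnection() {
        try {
            // placeholder JDBC URL and credentials
            return DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/testdata", "qa", "secret");
        } catch (SQLException e) {
            throw new IllegalStateException("Cannot open test DB connection", e);
        }
    }
}
```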
Algorithm – The best way
o Define All shared entities
o Improve Architecture
o For example:
• Logger
• Tracer
• Report Engine:
▪ One more reason to integrate the Test Runner and the Report Engine: the runner knows how to run in parallel and how to merge report pieces into one document
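A hedged sketch of that point (class and method names are illustrative, not a real Report Engine API): each thread writes to its own buffer, and the runner merges the pieces into one document once all threads have finished.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Thread-confined report parts, merged into a single document at the end of the run.
public final class ReportEngine {

    private static final List<StringBuilder> PARTS = new CopyOnWriteArrayList<>();

    private static final ThreadLocal<StringBuilder> PART = ThreadLocal.withInitial(() -> {
        StringBuilder part = new StringBuilder();
        PARTS.add(part);
        return part;
    });

    private ReportEngine() {}

    public static void log(String line) {
        PART.get().append(line).append(System.lineSeparator());
    }

    // Called once by the runner, after all test threads have completed.
    public static String merged() {
        StringBuilder all = new StringBuilder();
        PARTS.forEach(all::append);
        return all.toString();
    }
}
```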
Algorithm – The best way
o Define All shared entities
o Improve Architecture
o For example:
• Data-provider
• Any other orthogonal entity:
▪ Define
▪ Isolate
▪ Remember
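For the data-provider case, TestNG already supports feeding data rows in parallel; a minimal sketch (the test data is invented for the example):

```java
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class SearchTests {

    // parallel = true lets TestNG hand rows to the test method on several threads,
    // so the test body must not rely on shared mutable state.
    @DataProvider(name = "queries", parallel = true)
    public Object[][] queries() {
        return new Object[][] { {"selenium"}, {"testng"}, {"selenide"} };
    }

    @Test(dataProvider = "queries")
    public void search(String query) {
        // each row may run on its own thread
    }
}
```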
Algorithm – The best way
o Select proper Selenium WebDriver Wrapper
• Mature
• Thread-safe
• Easy-to-use “downcast”
• Examples (see the sketch below):
▪ Selenide – easy to use
▪ JDI evolution
▪ Serenity – more complicated to use
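A minimal Selenide sketch (URL and selectors are illustrative): Selenide keeps one WebDriver per thread out of the box, so the same test class can run under a parallel TestNG configuration without any explicit driver bookkeeping.

```java
import static com.codeborne.selenide.Condition.visible;
import static com.codeborne.selenide.Selenide.$;
import static com.codeborne.selenide.Selenide.open;

import org.testng.annotations.Test;

public class SelenideSmokeTest {

    @Test
    public void searchIsAvailable() {
        open("https://example.org");                          // driver bound to the current thread
        $("#search").setValue("parallel tests").pressEnter();
        $(".results").shouldBe(visible);
    }
}
```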
Algorithm – The best way
o Select proper Architecture
• Stateless architecture
• Static Page Object
• Isolate all orthogonal shared entities
• Use “From Conditional to Patterns and back” refactoring as a metric, plus ROI as proof
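A minimal sketch of a stateless, static Page Object, assuming plain Selenium (class and selectors are illustrative): the only mutable state – the WebDriver – is kept per thread, so parallel tests never share a browser session.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public final class SearchPage {

    // per-thread driver: the static Page Object itself stays stateless
    private static final ThreadLocal<WebDriver> DRIVER =
            ThreadLocal.withInitial(ChromeDriver::new);

    private SearchPage() {}

    public static void openPage(String url) {
        DRIVER.get().get(url);
    }

    public static void searchFor(String query) {
        DRIVER.get().findElement(By.name("q")).sendKeys(query + "\n");
    }

    public static void quit() {
        DRIVER.get().quit();
        DRIVER.remove();
    }
}
```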
Algorithm – The best way
o Select proper Architecture
• Convert to Stateful architecture
• Dynamic Page Object
• Update all isolated orthogonal shared
entities
• Re-calculate ROI and reuse “From Conditional to Patterns and back” in a systematic way
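For contrast, a sketch of the “dynamic” (stateful) variant: each test owns its driver and its page objects, so parallel runs stay isolated as long as neither is shared between threads. The domain below is invented for the example.

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class AccountPage {

    private final WebDriver driver;   // per-test state, injected by the test that owns it
    private int itemsAdded;           // example of page-level state the object tracks

    public AccountPage(WebDriver driver) {
        this.driver = driver;
    }

    public AccountPage addItem(String name) {
        driver.findElement(By.id("add-item")).click();
        driver.findElement(By.id("item-name")).sendKeys(name);
        itemsAdded++;
        return this;
    }

    public int itemsAdded() {
        return itemsAdded;
    }
}
```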
Algorithm
• Define Tests properly / Test attributes
• Define All shared entities
• Select proper Selenium WebDriver Wrapper
• Select proper Architecture
• Test Parallel approach or combination
o Some standard Test Runner
o Build instruments
o Several Processes
o Selenium Grid
o OS / Language-specific multithreading
Algorithm – The best way
o Some standard test-runner
• The lowest layer
• Standard (or specialized), stable, efficient, simple; most instruments use it as a foundation
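A typical sketch of letting the standard runner (TestNG) do the parallelization itself; the values and class name are illustrative.

```xml
<!-- testng.xml: the standard runner parallelizes on its own, no extra tooling needed -->
<suite name="regression" parallel="methods" thread-count="4">
  <test name="smoke">
    <classes>
      <class name="tests.SelenideSmokeTest"/>
    </classes>
  </test>
</suite>
```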
Algorithm – The best way
o Build instruments
• Use parameters to configure test run
• Use the Test Runner as a foundation
• Some kind of Test Runner wrapper
• + 1 layer => less stable, less efficient, sometimes
easier to use
• Anyway, doesn’t work without Test Runner
• Maven example: threadCount plus the parallel parameter (test method, test class, or both) – see the sketch below
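A hedged Maven Surefire sketch matching the slide: threadCount plus the parallel parameter (methods, classes, or both). The version number is illustrative.

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>3.2.5</version>
  <configuration>
    <parallel>both</parallel>        <!-- methods | classes | both -->
    <threadCount>4</threadCount>
  </configuration>
</plugin>
```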
Algorithm – The best way
o Several processes
• Using CI
• Team City (Job Configuration)
• Jenkins (Job Configuration)
• An extensive approach (scale out by adding more jobs / agents)
Algorithm – The best way
o Selenium Grid
• Infrastructure
• Could be used for an indirect test run, but this is not its primary purpose
• Could be combined with all other solutions
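A minimal sketch of pointing tests at a Selenium Grid hub (the hub URL is a placeholder): each parallel thread creates its own RemoteWebDriver session, and the Grid spreads the sessions across its nodes.

```java
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;

public final class GridDriverFactory {

    private GridDriverFactory() {}

    // Each calling thread gets its own remote session; the Grid decides which node runs it.
    public static WebDriver create() throws Exception {
        ChromeOptions options = new ChromeOptions();
        return new RemoteWebDriver(new URL("http://grid-hub:4444/wd/hub"), options);
    }
}
```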
Algorithm – The best way
o Language / OS specific mechanisms
• Language, focusing on multithreading
• Java
▪ Process
▪ Thread / Fork
▪ JVM (Java property config or command-line arguments)
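A language-level sketch of the mechanism underneath: independent test “jobs” on a fixed thread pool. In practice the test runner does this for you; the example only shows the plain JVM building block (the Runnable test job is hypothetical).

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public final class ParallelLauncher {

    private ParallelLauncher() {}

    // Runs each job (e.g. one test suite wrapped in a Runnable) on its own pool thread.
    public static void runAll(List<Runnable> testJobs, int threads) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        testJobs.forEach(pool::submit);
        pool.shutdown();
        pool.awaitTermination(2, TimeUnit.HOURS);
    }
}
```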
Out of scope
OS and Language related questions of multi-threading
Reasons
Transition complexity
With a proper architecture (plus a mature, not self-developed, WebDriver wrapper like Selenide) -> close to zero: just 3–5–10 places in the solution; a QA Automation Architect or Developer needs to invest several hours
With an improper architecture (without any wrapper, or with a self-developed WebDriver wrapper) -> a complicated question:
Invest days or weeks to update the architecture and the wrapper (better: switch to a mature one)
Invest weeks or months to update tons of places in the source code
Without any architecture -> a nightmare:
Invest weeks to redesign the tests using a proper architecture and a mature wrapper
Invest months or even years just to update the tests
ROI as a metric
Architecture related questions
Static or Dynamic; stateless or stateful: Architecture-related questions?
Architecture related questions
1. Static: transform to parallel run
2. Dynamic: transform to parallel run
3. Static <=> Dynamic: transformation criteria
4. Transformation example
5. Detailed information about transformation
6. Static or Dynamic – as an architecture basement
Lack of stateless examples
Lack of stateless examples in the standard Selenium documentation
IMHO: due to the Selenium development process
A tiny group of extra professionals / developers
No processes
No backlog
No priorities
No committee for backlog and backlog-item prioritization
No “iterations”
How is a new feature added?
Just implement it and then submit it for review
1. Let’s compare:
Photo
Sharing a photo – looks like parallelism (easy parallelism).
Video
Sharing a video – looks like parallelism (non-trivial parallelism).
State-less or state-full solution?
1. How easy is it to transform a solution from “single” to “multi” threading (to decrease the “QA Automation Window”)?
State-less – like sharing a photo
Just 5 minutes of work.
State-full – like sharing a video
A non-trivial task; it could be a nightmare.
2. Summary
Prefer state-less solutions to state-full solutions in mooooost cases;
before starting to implement a state-full solution, please take a break for a minute and re-think everything again – possibly you can find a proper state-less solution.
State-less or state-full solution?
1. Static class
Could be implemented as a state-less solution easily
2. Object
A state-full solution in 99.99% of cases
3. Summary
Prefer static-class-based solutions (state-less) to object-based ones (state-full) in mooooost cases;
before starting an object-based implementation, please take a break for a minute and re-think everything again – possibly you can find a proper solution based on static classes.
Object or static class / State-full or state-less solution?
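A tiny illustration of the contrast (both classes are invented for the example): the static, field-free class is safe from any thread, while the stateful object becomes a synchronization problem the moment it is shared between parallel tests.

```java
import java.util.ArrayList;
import java.util.List;

// State-less: a static utility with no fields – safe to call from any thread.
final class UrlUtils {
    private UrlUtils() {}

    static String withParam(String url, String key, String value) {
        return url + (url.contains("?") ? "&" : "?") + key + "=" + value;
    }
}

// State-full: an object that accumulates state – not thread-safe if shared between tests.
class NavigationHistory {
    private final List<String> visited = new ArrayList<>();

    void visit(String url) { visited.add(url); }
    int count() { return visited.size(); }
}
```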
“Replace Conditional with Polymorphism refactoring” as a Static <=> Dynamic:
transformation criteria
Replace Conditional with Polymorphism as criteria
1. You have a conditional that chooses different behavior depending on the type of an
object.
2. Move each leg of the conditional to an overriding method in a subclass. Make the
original method abstract.
3. And vice versa
4. Example
Replace Conditional with Polymorphism and vice versa
1. Replace Conditional Dispatcher with Command Design Pattern
Create a Command for each action. Store the Commands in a collection and
replace the conditional logic with code to fetch and execute Commands.
2. Replace Conditional Logic with Strategy Design Pattern
Create a Strategy for each variant and make the method
delegate the “calculation” to a Strategy instance.
3. Replace Conditional Logic with State Design Pattern
Create a State for each variant as a part of “State Machine” and make the method
delegate tricky “calculation” to the “State Machine”.
Replace Conditional with … more sophisticated options
1. Problem:
You have a conditional that performs various actions depending on object type or properties.
2. Solution:
Create subclasses matching the branches of the conditional.
In them, create a shared method and move code from the corresponding branch of the conditional to it.
Replace the conditional with the relevant method call.
The result is that the proper implementation will be attained via polymorphism depending on the object
class.
Replace Conditional with Polymorphism – detailed description
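A minimal before/after illustration of the refactoring (the pricing domain is invented for the example): the conditional on a type code becomes an overriding method per subclass.

```java
// Before: behavior chosen by a conditional on a type code.
class PricingBefore {
    double price(String customerType, double base) {
        if ("VIP".equals(customerType)) {
            return base * 0.8;
        } else if ("REGULAR".equals(customerType)) {
            return base;
        }
        throw new IllegalArgumentException(customerType);
    }
}

// After: each branch moves into an overriding method; the original method becomes abstract.
abstract class Customer {
    abstract double price(double base);
}

class VipCustomer extends Customer {
    @Override
    double price(double base) { return base * 0.8; }
}

class RegularCustomer extends Customer {
    @Override
    double price(double base) { return base; }
}
```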
1. Transformation example
2. Detailed information about transformation
3. Static or Dynamic – as an architecture basement
Example plus some details
Custom Test Runner: General scheme
Risk-based “good” examples of customized Test Runners
Custom Java test runner first iteration
The runner allowed selecting the test suite and browser
A CSV report as the result
Custom Java test runner final iteration
Added various features for test-run configuration
Custom Java test runner parallel engine
Get the number of nodes from Selenium Grid
Get the node IPs from the Grid
Get the number of cores on each node (use the smallest number)
Use this info to configure the number of parallel threads in the standard test runner (TestNG)
Use the power of both TestNG and Selenium Grid (see the sketch below)
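A hedged sketch of the engine described above. How the node and core numbers are queried depends on the Grid version, so they are passed in as parameters here; the TestNG part is configured programmatically.

```java
import java.util.Collections;

import org.testng.TestNG;
import org.testng.xml.XmlClass;
import org.testng.xml.XmlSuite;
import org.testng.xml.XmlTest;

public final class GridAwareRunner {

    private GridAwareRunner() {}

    public static void run(int gridNodes, int coresPerNode, String testClass) {
        // simple heuristic from the slide: one thread per available core across the Grid
        int threadCount = gridNodes * coresPerNode;

        XmlSuite suite = new XmlSuite();
        suite.setName("grid-parallel");
        suite.setParallel(XmlSuite.ParallelMode.METHODS);
        suite.setThreadCount(threadCount);

        XmlTest test = new XmlTest(suite);
        test.setName("regression");
        test.setXmlClasses(Collections.singletonList(new XmlClass(testClass)));

        TestNG testng = new TestNG();
        testng.setXmlSuites(Collections.singletonList(suite));
        testng.run();
    }
}
```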
Custom Java test runner next iteration
• Reworked the re-run mechanism
• Replaced CSV reports with detailed Excel spreadsheets with color coding
• Integrated with a load emulator
“Bad” examples of customized Test Runners
“Bad” examples of customized Test Runners
• Custom test-runner
• XML-configuration-based
• Features:
o Test-level, not test-suite-level, orientation
o XML-config for multithreading
“Bad” examples of customized Test Runners
“Take away” points
Algorithm
1. Define Tests properly / Test attributes
2. Define All shared entities
3. Select proper Selenium WebDriver Wrapper
4. Select proper Architecture
5. Test Parallel approach or combination
1. Some standard Test Runner
2. Build instruments
3. Several Processes
4. Selenium Grid
5. OS / Language-specific multithreading
Summary
Step by step summary
1. Algorithm
2. Static or Dynamic; stateless or stateful: Architecture-related questions
3. Metrics definition
4. Custom Test Runner: General scheme
1. Risk-based “good” examples of customized Test Runners
2. “Bad” examples of customized Test Runners
What’s next?
Let’s go and discuss all open questions in an informal way 
1. “Refactoring is a controlled technique for improving the design of an existing code
base.”
2. “Its essence is applying a series of small behavior-preserving transformations, each of
which "too small to be worth doing".”
3. “The cumulative effect of each of these transformations is quite significant.”
4. “By doing Refactoring in small steps you reduce the risk of introducing errors. You
also avoid having the system broken while you are carrying out the restructuring -
which allows you to gradually refactor a system over an extended period of time.”
Refactoring by Martin Fowler
1. Ask yourself “how can I hide some details from the rest of the software?”
2. What is encapsulation?
hide variability
hide complexity
Details
"conflict of interests“
“tech” discussions
3. Example of public member or private member + setter/getter
What is really hidden?
Where is simplicity?
Encapsulation – the most important OOP principle
1. “There is a close relationship between refactoring and patterns.”
2. “Often the best way to use patterns is to gradually refactor your code to use the
pattern once you realize it’s needed.”
3. “Joshua Kerievsky’s Refactoring to Patterns explores this topic, making this a great
topic to learn about once you’ve got the basic refactorings under your belt.”
4. “From Refactoring To Design Pattern” path – from poor design to adequate design
5. “From ~Design Patterns To Refactoring” path – from over design to adequate design
Refactoring and Design Patterns by Martin Fowler
1. “Refactoring to Patterns is the marriage of refactoring - the process of improving the
design of existing code - with patterns, the classic solutions to recurring design
problems.”
2. “Refactoring to Patterns suggests that using patterns to improve an existing design is
better than using patterns early in a new design. This is true whether code is years old
or minutes old.”
3. “We improve designs with patterns by applying sequences of low-level design
transformations, known as refactorings.”
4. And vice versa
Refactoring and Design Patterns by Joshua Kerievsky
1. There are more than 90 types of refactoring
2. Refactoring types that relate to a particular field are called a ”Refactoring Language”
3. A ”Refactoring Language” gives a common terminology for discussing the situations
specialists are faced with:
“The elements of this language are entities called Refactoring types”;
“Each type of Refactoring describes a problem that occurs over and over again in our environment”;
“Each type of Refactoring describes the core of the solution to that “~low level” problem, in such a way
that you can use this solution a million times over, without ever doing it the same way twice!”
Refactoring Catalog / Language
1. You have a conditional that chooses different behavior depending on the type of an
object.
2. Move each leg of the conditional to an overriding method in a subclass. Make the
original method abstract.
3. And vice versa
4. Example
Replace Conditional with Polymorphism and vice versa
1. Replace Conditional Dispatcher with Command Design Pattern
Create a Command for each action. Store the Commands in a collection and
replace the conditional logic with code to fetch and execute Commands.
2. Replace Conditional Logic with Strategy Design Pattern
Create a Strategy for each variant and make the method
delegate the “calculation” to a Strategy instance.
3. Replace Conditional Logic with State Design Pattern
Create a State for each variant as a part of “State Machine” and make the method
delegate tricky “calculation” to the “State Machine”.
Replace Conditional with … more sophisticated options
1. Problem:
You have a conditional that performs various actions depending on object type or properties.
2. Solution:
Create subclasses matching the branches of the conditional.
In them, create a shared method and move code from the corresponding branch of the conditional to it.
Replace the conditional with the relevant method call.
The result is that the proper implementation will be attained via polymorphism depending on the object
class.
Replace Conditional with Polymorphism – detailed description
1. This refactoring technique can help if your code contains operators performing
various tasks that vary based on:
Class of the object or interface that it implements
Value of an object's field
Result of calling one of an object's methods
2. If a new object property or type appears, you will need to search for and add code in all
similar conditionals. Thus the benefit of this technique is multiplied if there are
multiple conditionals scattered throughout all of an object's methods.
Why refactor
1. This technique adheres to the Tell-Don't-Ask principle: instead of asking an object
about its state and then performing actions based on this, it is much easier to simply
tell the object what it needs to do and let it decide for itself how to do that.
2. Removes duplicate code. You get rid of many almost identical conditionals.
3. If you need to add a new execution variant, all you need to do is add a new subclass
without touching the existing code (Open/Closed Principle).
Benefits
1. For this refactoring technique, you should have a ready hierarchy of classes that will
contain alternative behaviors. If you do not have a hierarchy like this, create one. Other
techniques will help to make this happen:
2. Replace Type Code with Subclasses. Subclasses will be created for all values of a
particular object property. This approach is simple but less flexible since you cannot
create subclasses for the other properties of the object.
3. Replace Type Code with State/Strategy. A class will be dedicated for a particular object
property and subclasses will be created from it for each value of the property. The
current class will contain references to the objects of this type and delegate execution
to them.
4. The following steps assume that you have already created the hierarchy.
Preparing to Refactor
1. If the conditional is in a method that performs other actions as well, perform Extract
Method.
2. For each hierarchy subclass, redefine the method that contains the conditional and
copy the code of the corresponding conditional branch to that location.
3. Delete this branch from the conditional.
4. Repeat replacement until the conditional is empty. Then delete the conditional and
declare the method abstract.
Refactoring Steps
Regression frequency (RF)
Definition
• Regression Frequency (RF) = How frequently does automated regression run?
«Meaning»
• The more frequently a product is used, the higher its value. The same holds for automated tests: the more frequent the test runs, the more important they are for the customer. That is why this metric is one of the key metrics when evaluating ROI.
Regression frequency (RF)
Boundaries
• Widespread boundaries / recommendations:
smoke – every night
full-regression – every weekend
Where do we get info from
• Automation reports
• Continuous Integration (CI)
Regression frequency (RF)
Examples:
• RF and Economical expediency of AT (ROI);
• Facebook and Bamboo
• HeadHunter
• Kanban: RF and WarGaming experience
• Counterexample: «absolute» «recommendations»
• Counterexample: «commit window»
Regression frequency (RF)
Visualization
• Not less than once a week – green color
• Not less than once in two weeks – yellow color
• Less than once a month – red color
• More frequently than once a day – red color
Connection with other metrics
• Automation testing window (ATW);
• Test results analysis window (TRAW);
• Economical expediency of AT (ROI)
• “Commit window“
Category:
• Quality
• Automated testing
AT «Window»
Definition
• Automated testing «Window» – how much physical time does Automated
test run take (full run or subset)
• Automated testing «Window» – how much system / «lab» time does an automated test run take (full run or subset)
AT «Window»
«Meaning»
• The time that has to be accounted for when estimating the economic expediency of automation while analyzing ROI in comparison with manual testing. The metric is needed both for deciding whether to introduce automation and for assessing the current state of the implemented automation in order to find bottlenecks.
AT «Window»
Boundaries
• Depends on the size of the project; it might take from a couple of hours to many hours. In general, a Smoke run after a commit should take no longer than one hour, and a full Regression no more than two days (a weekend).
Where do we get info from
• Test Reports
• Continuous Integration (CI)
AT «Window»
Examples:
• Social networks (Facebook, Bamboo), CMS, CMS templates – before automation tools for visual testing appeared, the percentage of automated test cases was not big;
• HeadHunter example
• Counterexample – physical time
• Counterexample – machine time (Cloud)
• Technical details: Stateless and Stateful automation, parallel run
• Technical details: Effective waiters
• Technical details: Premature optimization
AT «Window»
Visualization
• Smoke <= 1 hour, Full Regression <= 12 hours (night) – green color
• Smoke <= 2 hours, Full Regression <= 2 days (weekend) – yellow color
• Smoke > 2 hours, Full Regression > 2 days (weekend) – red color
AT «Window»
Connection with other metrics
• Automation progress (AP)
• Automated tests coverage percentage
• Regression Frequency (RF)
• Automated tests stability (ATS)
• Economical expediency of AT (ROI)
Category:
• Cost / Time
• Automated testing
Test results analysis “Window” (TRAW)
Definition
• Analyzing «Window» of automation test results = How much time does it
take to analyze received data?
«Meaning»
• The metric shows how exhaustive and readable the reports are, and how stable the AT and the AUT are. When the window is too big, less time is devoted to test development, or the analysis is not performed thoroughly enough, which decreases the value of automation.
Test results analysis “Window” (TRAW)
Boundaries
• Depending on the project, it can take from a few minutes to many hours. In general, analyzing the results of a Smoke run after a commit should take a couple of minutes; analyzing the results of a full Regression should take a couple of hours, ideally less than an hour.
Where do we get info from
• Test Reports
• Continuous Integration (CI)
• Task Tracking Systems
Test results analysis “Window” (TRAW)
Examples:
• Social networks (Facebook, Bamboo), CMS, CMS templates – before automation tools for visual testing appeared, the percentage of automated test cases was not big;
• HeadHunter example
• Mature Data Protection Solution, new SQL Denali plug-in, close to 100%;
• Mature Secure VPN (R), technological stack;
• Counterexamples;
Test results analysis “Window” (TRAW)
Visualization
• Smoke <= 10 minutes, Full Regression <= 2 hours – green color
• Smoke <= 20 minutes, Full Regression <= 4 hours – yellow color
• Smoke > 20 minutes, Full Regression > 4 hours – red color
Test results analysis “Window” (TRAW)
Connection with other metrics
• Automation progress (AP)
• Automated tests coverage Percentage (ATC)
• Regression Frequency (RF)
• Automated Tests stability (ATS)
Category:
• Cost / Time
• Automated testing
ROI
Definition
• Economical expediency of AT (ROI) = (Manual efforts – (Automation efforts + Automation investment)) / QA investment * 100%
«Meaning»
• Shows whether it makes sense to implement automation on the current project at the current time. It may happen that, under some conditions, automation on the project is economically inappropriate, because manual testing, even in the long term, can be cheaper.
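A worked example with hypothetical numbers, reading the formula as (Manual efforts – (Automation efforts + Automation investment)) / QA investment * 100%:

```java
// Hypothetical person-hour figures for one release period (illustration only).
double manualEfforts = 400;        // manual regression effort that automation replaces
double automationEfforts = 120;    // effort to maintain and run the automated tests
double automationInvestment = 80;  // amortized framework / infrastructure cost
double qaInvestment = 200;         // total QA investment for the period

double roi = (manualEfforts - (automationEfforts + automationInvestment))
        / qaInvestment * 100;      // = (400 - 200) / 200 * 100 = 100%
```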
ROI
Boundaries
• Out of scope
Where do we get info from
• Test Strategy
• Test Plan
• Test Management Systems (TMS)
• Task Tracking System
ROI
Examples:
• Variety of projects
• A standard «problem» when working with middle+ automation specialists of the «old formation»
• A set of “alternative” ways of ROI usage (out of scope)
ROI (+ additional profit)
Visualization
• Comparing trends
• Manual testing vs Automation
• All options for implementing / developing automation
• Different investment options
• Choosing optimal team-trend here and now
ROI
Connection with other metrics
• % of Tests, suitable for AT
• Regression frequency (RF)
• Automated test creation time (ATDT)
• Automated test support time (ATST)
• Automated tests stability (ATS)
• Automation testing window (ATW)
• Test results analysis window (TRAW)
Category:
• Price / time
• Automation testing
CONTACT ME
semenchenko@dpi.solutions
dpi.semenchenko
https://www.linkedin.com/in/anton-semenchenko-612a926b
https://www.facebook.com/semenchenko.anton.v
https://twitter.com/comaqa
www.COMAQA.BY
Community’s audience
Testing specialists (manual and automated)
Automation tools developers
Managers and sales specialists in IT
IT-specialists, thinking about migrating to automation
Students looking for a promising profession.
Community goals
Create unified space for effective communication for all IT-specialists in the context of automated
testing.
Your profit
Ability to listen to reports from leading IT-specialists and share your experience.
Take part in «promo»-versions of top IT-conferences in CIS for free.
Meet regularly, at different forums, community «offices», social networks and messengers.
www.COMAQA.BY
info@comaqa.by
https://www.facebook.com/comaqa.by/
http://vk.com/comaqaby
+375 33 33 46 120
+375 44 74 00 385
www.CoreHard.by
Community’s audience
«Harsh» C++ developers & co, IoT, BigData, High Load, Parallel Computing
Automation tools developers
Managers and sales specialists in IT
Students looking for a promising profession.
Community goals
Create unified space for effective communication for all IT-specialists in the context of «harsh»
development.
Your profit
Ability to listen to reports from leading IT-specialists and share your experience.
Take part in «promo»-versions of top IT-conferences in CIS for free.
Meet regularly, at different forums, community «offices», social networks and messengers.
www.CoreHard.by
info@corehard.by
https://www.facebook.com/corehard.by/
+375 33 33 46 120
+375 44 74 00 385