Winning the Battle against
Automated Testing
Elena Laskavaia
March 2016
Quality
People Process Tools
Foundation
of Quality
Development vs
Testing
 Developers don’t test
 Testers don’t develop
 Testers don’t have to be skilled
 Separate Developers and Testers
 Make the Test team responsible for
quality
One Team
Quality is a team responsibility
The Process
When quality is bad let's
add more steps to the
process
Story about the
broken Trunk
Thousands of developers
Continuous stability is a must
“Trunk is broken” too often
Huge show stopper for R&D
People did root-cause analysis
Came up with Improved Process
“Improved”
Pre-Commit
Process
Repeat for all hardware variants
Manually execute sanity test cases
Re-build and re-deploy whole system
Clean compile All
Pull/Update All Source
Trunk is still
broken. Why?
Process
was not
followed
Process is too
complex
Process is too
boring
Process is too
time consuming
Environment /
Hardware
limitations
Developers
don’t know
about the
process
Developers are
lazy
Automated
Pre-Commit Testing
Pre-Commit Tests with the Source Management System
Flow: Fix → Push → Checks → master
● Peer reviews
● Robot checks
Automation
Hack
Oh well, we don't have any more budget or time,
let's go back to manual testing
It does not work at all now!
Oops our tester quit, who knows how to run it?
Need a person to run it for every build
Spend 6 months developing a testing framework
Randomly pick a tool
Let's slap on some automation!
Continuous Testing
Continuous Quality
Cost of
Automation
• Cost of Tools
• User Training
• Integration and Customization
• Writing Test Cases
• Executing Test Cases
• Maintaining Test Cases
Jump Start
Make one team
responsible
Set up continuous
integration
Add pre-commit
hooks
Establish simple self-
verifying process
Add one automated
test
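A minimal seed for the last step: one automated test that proves the whole pipeline runs end to end. This is a sketch in JUnit 4; the class name and the assertion are placeholders, not taken from the talk.

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class SmokeTest {
    @Test
    public void pipelineExecutesAtLeastOneTest() {
        // replace with a real check against the application under test
        assertTrue(true);
    }
}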
Key Principles
of successful automated testing
Gate Keeper
test system must guard the gate
100% Success
100% of tests must pass. Zero tolerance.
NO random
failures
• Remove such tests from
automation
• Use repeaters to keep intermittent
tests
• Be prepared for the noise
• Modify AUT to remove source of
randomness for tests
Fully Automated
• No monkeys pushing buttons
to start the testing
• No monkeys watching
automated UI testing
• Hooks on code-submission
(pre-commit, fast)
• Hooks on build promotion
(overnight)
Fast and Furious
• Feedback for pre-commit <=10 min
• Overnight is absolute maximum
• More tests degrade the system
response time
• Not all tests are born equal!
• Use tagging and filtering
• Distribute or run in parallel
• No sleeps
Timeouts
• Make sure tests are not hanging!
• Use timeouts
• Use external monitors to kill
hanging runs
• Do not overestimate timeouts
Test Scripts are
Programs
• Automated test cases are programs
• Treat them as source code
• They must be in text form
• They must go to same version
control system
• Subject to code inspection, coding
standards, build checks, etc
Unit
Tests • Mandatory with commit
• Use servers to run
Part of the process
• Use a mocking framework (see the sketch below)
• Use UI bot
• Use test generators
• Inline data sources
Easy to write
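To illustrate the "use a mocking framework" bullet above, here is a minimal Mockito sketch; the mocked List collaborator is only an illustration, not code from the talk.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.List;

import org.junit.Test;

public class MockingExampleTest {
    @Test
    public void stubsACollaborator() {
        @SuppressWarnings("unchecked")
        List<String> collaborator = mock(List.class);
        when(collaborator.get(0)).thenReturn("hello");

        assertEquals("hello", collaborator.get(0));
        verify(collaborator).get(0);
    }
}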
Unit
• Unit tests cannot cover all
• Test actual installed AUT
• Run the program as user
would
• Use same language for unit
and integration testing
Integration
Pick and Choose
• Difficult-to-set-up
cases
• Rare but important
scenarios
• Check lists
• Module is actively
developed
• Long maintenance
expected
Candidates
you should not automate everything
Self-Verification:
test the test system?
Automatically Check
 Code submission is properly
formatted (has a bug id, etc.; see the sketch below)
 Code submission has unit tests
 Total number of tests has increased
 Performance has not declined
 Code coverage has not declined
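As an example of the first check, a hedged sketch of a robot that rejects a commit whose message carries no bug id. The "BUG-1234" pattern and the class name are assumptions for illustration only.

import java.util.regex.Pattern;

public final class CommitMessageCheck {
    // assumed bug-id convention: "BUG-" followed by digits, e.g. BUG-1234
    private static final Pattern BUG_ID = Pattern.compile("\\bBUG-\\d+\\b");

    public static boolean hasBugId(String commitMessage) {
        return BUG_ID.matcher(commitMessage).find();
    }

    public static void main(String[] args) {
        String message = String.join(" ", args);
        if (!hasBugId(message)) {
            System.err.println("Rejected: commit message does not reference a bug id");
            System.exit(1);
        }
    }
}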
Failed Battles
Tools we used or evaluated, and failed with
• after 3 months of
writing tests we
realized that it
won't work on
Linux
WinRunner
• was pretty good
until it was bought
and it stopped
launching with
new Eclipse
WindowTester
• 4 years ago:
database, no text
for tests, no
integration
Jubula
• slow, not
debuggable,
blocks on support.
python.
Squish
• domain specific
language
RCPTT
Working Solution
Continuous
Integration
Tools
• Unit testing
JUnit
• Source Control and Code
Review
Git/Gerrit
• Static Analysis
FindBugs
• Build System
• maven-surefire-plugin (for
unit tests)
• maven-failsafe-plugin (for
integration tests)
• findbugs-plugin for static
analysis
Maven/Tycho
• Continuous Integration
Server
• Gerrit Trigger plugin - pre-
commit builds and voting
• FindBugs plugin - reports
and status
• JUnit plugin - reports and
status
Jenkins
• GUI testing
SWTBot
• JUnit mocking
Mockito
• Code Coverage
EclEmma
• Lots of custom libraries,
frameworks and bots
Custom
Tips and Tricks
Auto-Bots
Checks that can be added to every test
 App crashed during a test
 Test timeout exceeded
 App generated unexpected log
 Temp files were not cleaned up
 Resource or memory leaks
 Runtime error detection
AutoBots: JUnit Rules
public class SomeTest {
// checks that we don't leave a tmp file behind (this is a custom rule, not
// part of base JUnit; a sketch of it follows below)
public @Rule TmpDirectoryCheck tmpRule = new TmpDirectoryCheck();
@Test
public void testSomething() {
}
}
// base class with timeouts
public abstract class TestBase {
public @Rule Timeout globalTimeout = Timeout.seconds(1); // 1 second
}
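TmpDirectoryCheck above is a custom rule, not part of JUnit. A minimal sketch of how such a rule could be written with JUnit 4's ExternalResource follows; this is an assumption about its behaviour, not the actual implementation from the talk.

import java.io.File;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import org.junit.Assert;
import org.junit.rules.ExternalResource;

// Fails the test if it leaves new files behind in the system temp directory.
public class TmpDirectoryCheck extends ExternalResource {
    private final File tmpDir = new File(System.getProperty("java.io.tmpdir"));
    private Set<String> before;

    @Override
    protected void before() {
        before = snapshot();
    }

    @Override
    protected void after() {
        Set<String> leftovers = snapshot();
        leftovers.removeAll(before);
        Assert.assertTrue("Test left temp files behind: " + leftovers, leftovers.isEmpty());
    }

    private Set<String> snapshot() {
        String[] names = tmpDir.list();
        return new HashSet<>(Arrays.asList(names == null ? new String[0] : names));
    }
}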
Jenkins Split Verifiers
Jobs: Regression testing (Linux) • Regression testing (Windows) • Static Analysis
Each job votes +1 on "verify"
To speed up verification
for pre-commit hooks
set up multiple jobs
which trigger on the
same event (i.e. patch
submitted)
Inline Data Sources: Comments in Java
// template<typename T2>
// struct B : public ns::A<T2> {};
// void test() {
// B<int>::a;
// }
public void testInstanceInheritance_258745() {
getBindingFromFirstIdentifier("a", ICPPField.class);
}
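A hedged sketch of the mechanism implied above: the harness reads the test's own .java source file, finds the test method, and collects the // comment block immediately above it as the input data (here, C++ code for a parser test). Names and file handling are assumptions, not the actual framework code.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public final class InlineSource {
    // Returns the // comment block directly above the named method, uncommented.
    public static String above(Path javaFile, String methodName) throws IOException {
        List<String> lines = Files.readAllLines(javaFile);
        int decl = -1;
        for (int i = 0; i < lines.size(); i++) {
            if (lines.get(i).contains(methodName + "(")) {
                decl = i;
                break;
            }
        }
        if (decl < 0) {
            throw new IllegalArgumentException("method not found: " + methodName);
        }
        List<String> block = new ArrayList<>();
        for (int i = decl - 1; i >= 0 && lines.get(i).trim().startsWith("//"); i--) {
            block.add(0, lines.get(i).trim().substring(2).trim());
        }
        return String.join("\n", block);
    }
}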
Code Coverage
• Run tests with code coverage
• Not during pre-commit check
• Unless it has validation hooks
• Good tool for unit test design
(IDE)
• Never ask for 100% coverage
Code Coverage ->
Select Tests
Based on changed code
exclude tests that do not cover
the changes
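A sketch of that selection step: given a map from each test to the source files it covered on the previous run, keep only the tests whose coverage intersects the changed files. The data format is an assumption; a real coverage tool (e.g. JaCoCo/EclEmma) would feed this map.

import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public final class TestSelector {
    public static Set<String> select(Map<String, Set<String>> coveredFilesByTest,
                                     Set<String> changedFiles) {
        Set<String> selected = new TreeSet<>();
        for (Map.Entry<String, Set<String>> entry : coveredFilesByTest.entrySet()) {
            for (String file : entry.getValue()) {
                if (changedFiles.contains(file)) {
                    selected.add(entry.getKey()); // this test exercises a changed file
                    break;
                }
            }
        }
        return selected;
    }
}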
Static Analysis
• Can be run independently
• Has to be a gatekeeper
• Spend time to tune it (remove all
noisy checkers)
• Push to desktop (if running as you
type - instantaneous feedback!)
• Use alternative UX on desktop (i.e.
code formatter)
Jenkins Plugin:
Code Reviewer
Post defects from static
analysis as reviewer
comments on the patch
Tagging and Filtering: JUnit Categories
// tag class with categories in test class
@Category({PrecommitRegression.class, FastTests.class})
public class SomeClassTest {
@Test
public void someTest() {
}
}
// in maven pom.xml
<build>
<plugins>
<plugin>
<artifactId>maven-surefire-plugin</artifactId>
<configuration>
<groups>com.example.PrecommitRegression</groups>
</configuration>
</plugin>
</plugins>
</build>
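The category types used above are just empty marker interfaces; a minimal sketch (names assumed to match the <groups> element in the pom):

// com/example/PrecommitRegression.java
package com.example;
public interface PrecommitRegression {}

// com/example/FastTests.java
package com.example;
public interface FastTests {}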
Runtime Filtering: JUnit Assume
// skip test entirely if not running in osgi
@Before
public void setUp() {
Assume.assumeTrue( Activator.isOsgiRunning() );
}
Intermittent Test: JUnit Rule
public class SomeClassTest {
public @Rule InterimittentRule irule = new InterimittentRule();
// repeat up to 3 times if it fails
@Intermittent(repetition = 3)
@Test
public void someTest() {}
}
You can create a runner or define a rule which repeats a test if it fails.
JUnit itself does not define either; you have to add it yourself (2 classes, about 62 lines of code; the full listing is in the editor's notes).
The End
One Team
Simple Process
Right Tools


Editor's Notes

  • #2: There is an eternal battle going on in the universe: the battle between chaos and order. No matter how good your design may be, the forces of entropy will destroy your system, unless you make it alive. How do you do that? You have to stick with me for the next half hour to find out. I am Alena Laskavaia and I will be talking about automated regression testing. I work for QNX and we don't sell testing tools, but I have built and used testing tools for my work, for my hobby project (don't laugh, I do have automated regression testing for my hobby project) and for the Eclipse project I contribute to. When I first prepared the slides it was a two-hour talk, so I had to cut it down by removing most of the why and how and keeping only the what.
  • #3: So we are going to be talking about an automated regression system, which is a system we put in place to make sure our quality does not degrade. But first we need to talk about the other important parts: the foundation of software quality, which consists of people, process and tools. No matter how good your test system is, if the other parts are failing you won't get good quality.
  • #4: So first let's look at anti-patterns that lead to bad quality. The first one is separating development and testing into two camps. The typical scenario: developers have no time to write JUnit tests, and no time to test; testers don't develop, so testers don't need to be skilled; we put developers and testers as far apart as possible, preferably in opposite time zones; but we make the test team responsible for quality. The result: no one is responsible, it takes a week to resolve a simple bug since it takes 24 hours to communicate one answer, which of course only brings up the next question; the project release is delayed because of bad quality, and the teams blame each other.
  • #5: The solution is one team. Ideally it should be co-located, with at most a 3-hour time difference. Nobody else to blame.
  • #6: Another anti-pattern is process snake oil: when quality is bad, let's "improve the process".
  • #7: I will tell you a little story about the broken "trunk". Trunk is the name of the main branch in SVN, if you don't know. So: thousands of developers working on the same system; every day they check out new code and try to use it; continuous stability is a must. Quality was bad, "trunk is broken" too often, meaning the system does not boot or is not operational. A huge show stopper for R&D. People did root-cause analysis and came up with an improved process. Root causes found: code was not recompiled after minor changes before submission; the test case was not executed; the combined component test case was not executed; the test was not performed in the right context (wrong branch, obsolete code, etc.); test environment issues (not all supported platforms covered, etc.); regression testing was not run; functional testing was incomplete.
  • #10: Solution: Automated Pre-Commit Testing
  • #11: It's very simple, but you need the right tools for it. The master branch is where all the mainstream code lives. A developer changes the code locally and "pushes" the change to the validation server to perform "checks". Usually these are robot checks and people checks, like code reviews. The robots can do a variety of cool stuff and vote. If it fails, the developer gets it back. If it passes, the code is merged.
  • #13: You know, the quality of software cannot improve. Absolute quality is main(){}; every line you add after that degrades its quality. The only way to slow down the degradation is continuous testing. It's like a fridge: it always has to be on to keep your food safe; turn it off and in a few hours the food will start rotting! It cannot depend on human intervention (like a person turning the fridge on and off to save power).
  • #14: Let's look at the cost of automation. You should aim to have most of these costs reduced to zero.
  • #18: Gate keeper – the test system must guard the gate; if it's running on the side it will never be successful.
  • #19: Black and white. No nonsense like "97% of tests are passing": what does that even mean?
  • #20: Deterministic tests do not mean you can only test a deterministic system. For example, if you test a UI that randomly decides to check for updates and pops up a dialog about it, your test should expect a specific dialog, not just any dialog, so the random one does not bother it.
  • #24: They must go into the same version control system; ideally the same repository as the AUT, ideally the same module.
  • #25: Tight deadlines are no excuse. Unit testing of UI components is totally feasible: use a UI bot (and an optional recorder). Do not require 100% coverage.
  • #26: Examples: main was never run, caught on the last day; the production build was missing dependencies, features, etc.; released product with licensing turned off.
  • #28: Developers can turn off tests accidentally or on purpose
  • #31: Jubula: 4 years ago the authors were passionately trying to prove that tests are not programs, don't have to be in text form, and that a database is the perfect store for them. Last year there was something about Jekyll and Hyde, and then finally something usable.
  • #35: There is a series of checks that can be added to every test; I call them auto-bots.
  • #37: To speed up verification for pre-commit hooks, set up multiple jobs that trigger on the same event (i.e. patch submitted).
  • #40: Nothing to do with test harness
  • #43:
import org.junit.rules.MethodRule;
import org.junit.runners.model.FrameworkMethod;
import org.junit.runners.model.Statement;

public class InterimittentRule implements MethodRule {
    public static class RunIntermittent extends Statement {
        private final FrameworkMethod method;
        private final Statement statement;

        public RunIntermittent(FrameworkMethod method, Statement statement) {
            this.method = method;
            this.statement = statement;
        }

        @Override
        public void evaluate() throws Throwable {
            int repetition = 1;
            Intermittent methodAnnot = method.getAnnotation(Intermittent.class);
            if (methodAnnot != null) {
                repetition = methodAnnot.repetition();
            } else {
                Intermittent classAnnot = method.getDeclaringClass().getAnnotation(Intermittent.class);
                if (classAnnot != null) {
                    repetition = classAnnot.repetition();
                }
            }
            if (repetition > 1) {
                for (int i = 0; i < repetition; i++) {
                    try {
                        statement.evaluate();
                        break; // did not fail
                    } catch (Throwable e) {
                        if (i < repetition - 1)
                            continue; // try again
                        throw e;
                    }
                }
            } else
                statement.evaluate();
        }
    }

    @Override
    public Statement apply(Statement base, final FrameworkMethod method, final Object target) {
        return new RunIntermittent(method, base);
    }
}

----

import java.lang.annotation.Retention;
import java.lang.annotation.Target;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

@Target({ METHOD, TYPE })
@Retention(RUNTIME)
public @interface Intermittent {
    int repetition() default 10;
}
  • #44: There is an epic battle going on in the universe: the battle between order and chaos. Use elements of self-organization in the team, the process and the test system; make it alive. And then you can actually win the battle… Thanks for listening!
  • #45: Important: automated pre-commit checks should be a) touchless and b) not run on the desktop!
  • #46: Examples: Tier 0: low-level JUnit, mock the whole environment, super fast. Tier 1: medium-speed JUnit and integration tests, run locally, require installation of the AUT. Tier 2: heavy tests, require a remote system. Tier 3: destructive tests, require a new virtual host for every run.