3. Goals of Test Automation
Tests should help us improve software quality
Tests should help us improve software understanding
Tests as specification.
Ensure that we build the right software.
Defect localization.
Ensure that the software is correct.
Defect prevention.
Ensure that bugs won't crawl back into the software.
Tests as documentation.
Allow the developer/maintainer to answer questions like “what should be the expected outcome of the software if the given input is …”
4. Goals of Test Automation
Tests should help us reduce risk
Tests should be easy to run
Tests as safety net.
Ensure that we do no harm when we change the software.
Fully automated.
Execute without any effort.
Self-checking.
Detect and report any errors without human intervention.
Repeatable.
Can be run many times in a row and produce the same results without human intervention in between.
Independent from each other.
Can be run by themselves and NOT depend on the execution order, failure, or success of other tests.
Test development should not introduce new risks.
Refrain from modifying the software to facilitate the development of the tests as a safety net.
5. How to achieve these goals?
Use an xUnit test automation
framework and patterns
7. How do we prepare automated tests
for our software?
8. Recorded Tests
We automate tests by recording interactions with the application and playing them back using a test tool.
Such tests require no programming skills.
We have two basic choices when using a Recorded Test strategy:
We can either acquire third-party tools that record the communication that occurs while we interact with the application (e.g. Mercury QuickTest Professional),
or we can build a “record and playback” mechanism right into our application.
9. Scripted Tests
We automate the tests by writing test programs by hand.
This is the most typical kind of test.
Traditionally, Scripted Tests were written as “test programs,” often using a special test scripting language.
Nowadays, we prefer to write Scripted Tests using a Test Automation Framework.
10. Data Driven Tests
We store all the information needed for each test in a data
file and write an interpreter that reads the file and executes
the tests.
A Data-Driven Test is an ideal
strategy for getting business people
involved in writing automated tests.
By keeping the format of the data file
simple, we make it possible for the
business person to populate the file
with data and execute the tests
without having to ask a technical
person to write test code for each
test.
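A minimal sketch of this idea, assuming JUnit 5 with the junit-jupiter-params module on the classpath; the Calculator class is a hypothetical SUT invented for the example. The data table plays the role of the data file, and the framework acts as the interpreter that runs one test per row:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

// Hypothetical SUT used only for illustration.
class Calculator {
    int add(int a, int b) { return a + b; }
}

class CalculatorDataDrivenTest {
    // Each row is one test case: input1, input2, expected result.
    // Business people can extend this table (or an external CSV file,
    // via @CsvFileSource) without writing any test code.
    @ParameterizedTest
    @CsvSource({
        "1, 2, 3",
        "0, 0, 0",
        "-5, 5, 0"
    })
    void addsTwoNumbers(int a, int b, int expected) {
        assertEquals(expected, new Calculator().add(a, b));
    }
}
```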
11. How do we make it easy to write
and run tests written by different
people?
12. Test Automation Framework
We use a framework that provides all the mechanisms
needed to run the test logic so the test writer needs to
provide only the test-specific logic.
…
13. Which fixture** strategy should we
use?
** fixture in xUnit terminology is everything we need
in place to be able to test the System Under Test
(SUT)
14. Minimal Fixture
We use the smallest and simplest fixture possible for each test.
A test that uses a Minimal
Fixture will always be easier to
understand than one that uses a
fixture containing unnecessary
or irrelevant information.
15. Standard Fixture
We reuse the design of the test fixture across many tests.
This approach makes a lot of
sense in the context of manual
execution of many customer
tests because it eliminates
the need for each tester to
spend a lot of time setting up
the test environment.
16. Fresh Fixture
Each test constructs its own brand-new test fixture for its own
private use.
We should use a Fresh Fixture
whenever we want to avoid any
interdependencies between
tests (which is in fact almost
always the case …..)
17. Shared Fixture
We reuse the same instance of the test fixture across many
tests.
If we want to avoid slow tests.
Or, when we have a long,
complex sequence of actions,
each of which depends on the
previous actions. In customer
tests, this may show up as a
workflow; in unit tests, it may
be a sequence of method calls
on the same object.
With the big risk of introducing
interdependencies between
tests ….
18. Back Door Manipulation
We set up the test fixture or verify the outcome by going through a
back door (such as direct database access).
A common application involves
testing basic CRUD (Create,
Read, Update, Delete) operations
on the SUT’s state.
In such a case, we want to verify
that the information persisted and
can be recovered in the same
form.
21. Test Method
We encode each test as a single Test Method on some class.
Variations:
Simple success test
(happy day)
Expected exception test
Constructor test
Dependency initialization
test
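A minimal sketch of the first two variations, assuming JUnit 5; the Account class is a hypothetical SUT invented for the example:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.Test;

// Hypothetical SUT used only for illustration.
class Account {
    private int balance;
    void deposit(int amount) {
        if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
        balance += amount;
    }
    int balance() { return balance; }
}

class AccountTest {
    // Simple success ("happy day") test.
    @Test
    void depositIncreasesBalance() {
        Account account = new Account();
        account.deposit(100);
        assertEquals(100, account.balance());
    }

    // Expected exception test.
    @Test
    void depositRejectsNonPositiveAmounts() {
        Account account = new Account();
        assertThrows(IllegalArgumentException.class, () -> account.deposit(-1));
    }
}
```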
22. Test Case Class
We group a set of related Test Methods on a single Testcase Class.
24. Four Phase Test
We structure each test with four distinct parts executed in sequence.
In the first phase, we set up the
test fixture and anything we
need to put in place to be able
to observe the actual outcome.
In the second phase, we interact
with the SUT.
In the third phase, we
determine whether the
expected outcome has been
obtained.
In the fourth phase, we tear
down the test fixture to put the
world back into the state in
which we found it.
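A sketch of the four phases, assuming JUnit 5 and Java 11+; writing a temporary file stands in for a real SUT so the example stays self-contained:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.nio.file.Files;
import java.nio.file.Path;

import org.junit.jupiter.api.Test;

class FourPhaseTestExample {
    @Test
    void writesGreetingToFile() throws Exception {
        // Phase 1 - setup: build the fixture needed to observe the outcome.
        Path file = Files.createTempFile("greeting", ".txt");
        try {
            // Phase 2 - exercise: interact with the SUT
            // (plain file writing stands in for the SUT here).
            Files.writeString(file, "hello");

            // Phase 3 - verify: check that the expected outcome was obtained.
            assertEquals("hello", Files.readString(file));
        } finally {
            // Phase 4 - teardown: put the world back into the state we found it in.
            Files.deleteIfExists(file);
        }
    }
}
```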
26. Assertion Method
We call a utility method to evaluate whether an expected outcome
has been achieved.
Variations:
(In) Equality assertions
Fuzzy equality assertions
(for floating point results
with an error tolerance)
Stated outcome assertions
(is null, is true, …)
Expected exception
assertions.
Single outcome assertions
(fail)
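As a sketch, the variations map onto JUnit 5's built-in assertion methods roughly as follows (other xUnit family members offer equivalents):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNull;
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTrue;
import static org.junit.jupiter.api.Assertions.fail;

import org.junit.jupiter.api.Test;

class AssertionMethodVariationsTest {
    @Test
    void equalityAssertion() {
        assertEquals(4, 2 + 2);
    }

    @Test
    void fuzzyEqualityAssertion() {
        // Floating-point comparison with an error tolerance (delta).
        assertEquals(0.3, 0.1 + 0.2, 1e-9);
    }

    @Test
    void statedOutcomeAssertions() {
        assertTrue("abc".startsWith("a"));
        assertNull(System.getProperty("no.such.property"));
    }

    @Test
    void expectedExceptionAssertion() {
        assertThrows(NumberFormatException.class, () -> Integer.parseInt("not a number"));
    }

    @Test
    void singleOutcomeAssertion() {
        try {
            Integer.parseInt("not a number");
            fail("expected a NumberFormatException");  // always fails if this line is reached
        } catch (NumberFormatException expected) {
            // the exception is the expected outcome
        }
    }
}
```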
27. How do we provide more
information about a failed
assertion?
30. Test Runner
We execute the xUnit framework’s specific program that instantiates and executes the Testcase Objects. When we have many tests to run, we can organize them into Test Suite classes.
32. How do we construct (destroy) the
fresh fixture?
33. Inline Setup (Teardown)
Each Test Method creates its own Fresh Fixture by calling the
appropriate constructor methods to build exactly the test fixture it
requires (the method destroys the fixture at the end).
We can use Inline Setup when the fixture setup logic is very simple and straightforward.
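A minimal sketch of Inline Setup with JUnit 5; the fixture is just a plain list, so all the fixture-building code fits comfortably inside the test method:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;

class InlineSetupTest {
    @Test
    void removingAnElementShrinksTheList() {
        // Inline setup: the test method builds exactly the fixture it needs.
        List<String> names = new ArrayList<>();
        names.add("Alice");
        names.add("Bob");

        names.remove("Alice");                // exercise

        assertEquals(List.of("Bob"), names);  // verify

        names.clear();                        // inline teardown (shown for illustration;
                                              // garbage collection would reclaim the list anyway)
    }
}
```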
34. Delegated Setup (Teardown)
Each Test Method creates (destroys) its own Fresh Fixture by
calling Creation/Destruction Methods from within the Test
Methods.
We can use a Delegated Setup
when we want to avoid the test
code duplication caused by
having to set up similar fixtures
for several tests and we want to
keep the nature of the fixture
visible within the Test Methods
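A sketch of Delegated Setup, assuming JUnit 5; Account and createAccountWithBalance() are hypothetical names for the SUT and the Creation Method:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class DelegatedSetupTest {
    // Hypothetical SUT used only for illustration.
    static class Account {
        private int balance;
        Account(int openingBalance) { balance = openingBalance; }
        void deposit(int amount) { balance += amount; }
        int balance() { return balance; }
    }

    // Creation Method: the shared fixture-building logic lives here,
    // but each test calls it explicitly, keeping the fixture visible.
    private Account createAccountWithBalance(int balance) {
        return new Account(balance);
    }

    @Test
    void depositAddsToExistingBalance() {
        Account account = createAccountWithBalance(50);
        account.deposit(25);
        assertEquals(75, account.balance());
    }

    @Test
    void zeroDepositLeavesBalanceUnchanged() {
        Account account = createAccountWithBalance(50);
        account.deposit(0);
        assertEquals(50, account.balance());
    }
}
```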
35. Implicit Fresh Fixture Setup (Teardown)
We build (destroy) the test fixture common to several tests in set
up/tear down methods called by the test framework.
We can use Implicit Setup when
several Test Methods on the
same Testcase Class need an
identical Fresh Fixture.
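A sketch of Implicit Setup with JUnit 5, where @BeforeEach/@AfterEach are the setup/teardown methods called by the framework (older xUnit members use setUp()/tearDown()); the temporary directory is a stand-in fixture:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.nio.file.Files;
import java.nio.file.Path;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class ImplicitSetupTest {
    private Path workDir;

    // Implicit setup: called by the framework before every test method,
    // so each test starts with an identical fresh fixture.
    @BeforeEach
    void createWorkDir() throws Exception {
        workDir = Files.createTempDirectory("fixture");
    }

    // Implicit teardown: called by the framework after every test method.
    @AfterEach
    void deleteWorkDir() throws Exception {
        Files.deleteIfExists(workDir);
    }

    @Test
    void workDirExists() {
        assertTrue(Files.isDirectory(workDir));
    }

    @Test
    void workDirStartsEmpty() throws Exception {
        try (var entries = Files.list(workDir)) {
            assertTrue(entries.findAny().isEmpty());
        }
    }
}
```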
36. How do we create (destroy) a shared
fixture if the test methods that
need it are in the same test class?
37. Implicit Shared Fixture Setup
(Teardown)
We build (destroy) the shared fixture in special methods called by
the Test Automation Framework before/after the first/last Test
Method is called
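In JUnit 5 these special methods are typically annotated @BeforeAll/@AfterAll. A sketch, with a temporary file standing in for an expensive shared fixture:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.nio.file.Files;
import java.nio.file.Path;

import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

class ImplicitSharedFixtureTest {
    private static Path sharedFile;

    // Shared fixture setup: called once, before the first test method runs.
    @BeforeAll
    static void createSharedFile() throws Exception {
        sharedFile = Files.createTempFile("shared", ".txt");
        Files.writeString(sharedFile, "shared content");
    }

    // Shared fixture teardown: called once, after the last test method.
    @AfterAll
    static void deleteSharedFile() throws Exception {
        Files.deleteIfExists(sharedFile);
    }

    @Test
    void fileExists() {
        assertTrue(Files.exists(sharedFile));
    }

    @Test
    void fileHasExpectedContent() throws Exception {
        assertEquals("shared content", Files.readString(sharedFile));
    }
}
```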
38. How do we create (destroy) a shared
fixture if the test methods that
need it are not in the same test
class?
39. Lazy Setup
We use Lazy Initialization of the fixture to create it in the first test
method that needs it.
The basic disadvantage is that
we do not know when to
tear down the fixture.
It may be better to group all the test methods that need the fixture in the same test class….
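A sketch of Lazy Setup, assuming JUnit 5; the catalog() accessor is a hypothetical helper, and the List.of(...) call stands in for an expensive fixture build:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.List;

import org.junit.jupiter.api.Test;

class LazySetupTest {
    private static List<String> catalog;  // the shared fixture

    // Lazy Initialization: the first test that needs the fixture builds it;
    // later tests reuse the same instance. Note that there is no obvious place
    // to tear it down, which is the main drawback of this pattern.
    private static List<String> catalog() {
        if (catalog == null) {
            catalog = List.of("apple", "banana", "cherry");
        }
        return catalog;
    }

    @Test
    void catalogHasThreeEntries() {
        assertEquals(3, catalog().size());
    }

    @Test
    void firstEntryIsApple() {
        assertEquals("apple", catalog().get(0));
    }
}
```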
40. How do we create (destroy) a shared
fixture if the fixture setup takes
too much time and resources?
41. Prebuilt Fixture
We build the Shared Fixture separately from running the tests.
Not very common, but could be useful if the setup is very time- or resource-consuming.
Fits well with the Back Door Manipulation strategy.
43. How do we verify a method that
returns a value?
44. Return Value Verification
We inspect the returned value of the method and compare it with
an expected return value.
45. How do we verify a method that
changes the state of the SUT?
46. State Verification
We inspect the state of the system under test after it has been
exercised and compare it to the expected state.
Variations:
Procedural State Verification: we simply write a series of calls to Assertion Methods that pick the state information apart into pieces and compare them to individual expected values.
Expected State Specification: we construct a specification for the post-exercise state in the form of one or more objects populated with the expected attributes. We then compare the actual state with these objects.
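A sketch of both variations, assuming JUnit 5 and Java 16+ (for the record syntax); Customer and Address are hypothetical types invented for the example:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class StateVerificationTest {
    // Hypothetical SUT and state object used only for illustration.
    record Address(String street, String city) {}

    static class Customer {
        private Address address;
        void moveTo(Address newAddress) { address = newAddress; }
        Address address() { return address; }
    }

    @Test
    void proceduralStateVerification() {
        Customer customer = new Customer();
        customer.moveTo(new Address("1 Main St", "Springfield"));

        // Pick the state apart and assert on the individual pieces.
        assertEquals("1 Main St", customer.address().street());
        assertEquals("Springfield", customer.address().city());
    }

    @Test
    void expectedStateSpecification() {
        Customer customer = new Customer();
        customer.moveTo(new Address("1 Main St", "Springfield"));

        // Build an object describing the expected post-exercise state
        // and compare it with the actual state in a single assertion.
        Address expected = new Address("1 Main St", "Springfield");
        assertEquals(expected, customer.address());
    }
}
```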
47. How do we verify a method of the SUT that interacts with other Depended-On Components (DOCs)?
48. Behavior Verification
We capture the indirect outputs/interactions of the SUT with the DOC as they occur and compare them to the expected behavior.
Typically, to verify behavior we have to use some kind of Test Spy or Mock Object (see the Test Double patterns that follow).
50. How do we test the SUT when it
interacts/depends with/on other
DOC?
51. Test Double
We replace the DOC on which the SUT depends with a “test-specific equivalent”: a test double that allows us to control the inputs to the SUT and/or capture the interactions between them.
52. How do we verify the behavior of
the SUT when it gets indirect
inputs from another component,
independently from this
component?
53. Test Stub
Use a test-specific object that feeds the desired indirect inputs into the
system under test.
Variations:
Responder: a stub that feeds the SUT with
valid (happy path) input.
Saboteur: a stub that feeds the SUT with
invalid input.
Stub Implementation options:
Use a mocking framework like mockito
(mock() and when-then-return/throw
commands).
If DOC implements an interface, configure the
SUT with a test-specific DOC interface
implementation.
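A sketch of the Responder and Saboteur variations using Mockito's mock() and when-then commands; RateService (the DOC) and PriceCalculator (the SUT) are hypothetical types invented for the example:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class TestStubExampleTest {
    // Hypothetical DOC: the source of the indirect inputs we want to control.
    interface RateService {
        double rateFor(String currency);
    }

    // Hypothetical SUT that depends on the DOC.
    static class PriceCalculator {
        private final RateService rates;
        PriceCalculator(RateService rates) { this.rates = rates; }
        double priceInCurrency(double amount, String currency) {
            return amount * rates.rateFor(currency);
        }
    }

    @Test
    void responderFeedsValidIndirectInput() {
        RateService stub = mock(RateService.class);
        when(stub.rateFor("EUR")).thenReturn(2.0);   // Responder: happy-path input

        PriceCalculator sut = new PriceCalculator(stub);
        assertEquals(20.0, sut.priceInCurrency(10.0, "EUR"), 1e-9);
    }

    @Test
    void saboteurFeedsInvalidIndirectInput() {
        RateService stub = mock(RateService.class);
        when(stub.rateFor("XXX")).thenThrow(new IllegalStateException("unknown currency"));

        PriceCalculator sut = new PriceCalculator(stub);
        assertThrows(IllegalStateException.class, () -> sut.priceInCurrency(10.0, "XXX"));
    }
}
```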
54. How do we verify the behavior of
the SUT when it calls another
component?
55. Test Spy
We use a Test Spy to wrap the DOC to capture the indirect output
calls made to DOC by the SUT for later verification by the test.
Spy Implementation
options:
Use a mocking framework like
mockito (spy() and verify()
commands).
Subclass DOC and override the
required methods to capture
SUT calls. Configure the SUT
with a test-specific object of the
DOC subclass.
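A sketch of the first option, using a Mockito mock as the spy and verify() to check the captured indirect output after exercising the SUT; AuditLog (the DOC) and TransferService (the SUT) are hypothetical types:

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.jupiter.api.Test;

class TestSpyExampleTest {
    // Hypothetical DOC: the target of the indirect output we want to observe.
    interface AuditLog {
        void record(String message);
    }

    // Hypothetical SUT that depends on the DOC.
    static class TransferService {
        private final AuditLog audit;
        TransferService(AuditLog audit) { this.audit = audit; }
        void transfer(String from, String to, int amount) {
            // ... domain logic would go here ...
            audit.record("transferred " + amount + " from " + from + " to " + to);
        }
    }

    @Test
    void transferIsRecordedInTheAuditLog() {
        AuditLog spyLog = mock(AuditLog.class);   // Mockito mock used as a spy
        TransferService sut = new TransferService(spyLog);

        sut.transfer("alice", "bob", 100);        // exercise the SUT

        // Verify the captured indirect output after the fact.
        verify(spyLog).record("transferred 100 from alice to bob");
    }
}
```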
56. How do we verify the behavior of
the SUT when it calls another
component, independently from
this component?
57. Fake Object
We replace the DOC that the SUT depends on with a much lighter-
weight implementation.
Fake Object implementation
options:
Use a mocking framework like
mockito (mock() and when-then-
return/throw commands).
If DOC implements an interface,
configure the SUT with a test-
specific DOC interface
implementation.
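A sketch of a hand-written Fake Object, assuming JUnit 5; UserStore, GreetingService, and InMemoryUserStore are hypothetical types, with an in-memory map replacing what would otherwise be a database-backed DOC:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.jupiter.api.Test;

class FakeObjectExampleTest {
    // Hypothetical DOC interface the SUT depends on.
    interface UserStore {
        void put(String id, String name);
        String get(String id);
    }

    // Hypothetical SUT.
    static class GreetingService {
        private final UserStore store;
        GreetingService(UserStore store) { this.store = store; }
        String greet(String id) { return "Hello, " + store.get(id) + "!"; }
    }

    // Fake Object: a much lighter-weight but genuinely working implementation
    // (an in-memory map instead of a real database-backed store).
    static class InMemoryUserStore implements UserStore {
        private final Map<String, String> users = new HashMap<>();
        public void put(String id, String name) { users.put(id, name); }
        public String get(String id) { return users.get(id); }
    }

    @Test
    void greetsUserByName() {
        InMemoryUserStore fake = new InMemoryUserStore();
        fake.put("42", "Alice");

        GreetingService sut = new GreetingService(fake);
        assertEquals("Hello, Alice!", sut.greet("42"));
    }
}
```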
#48, #49: Attention!!! These (self-checking) mock objects are not supported by Mockito; they have to be built by hand. Typical Mockito mock objects are just fake objects or stub objects.
A way to make a spy without Mockito is to subclass the DOC and override its methods to keep track of when they are called.
#54: In Mockito, to make a stub we create a mock and use when-then commands to specify its behavior.
https://www.baeldung.com/mockito-behavior