xUnit Patterns by Gerard Meszaros
http://www.cs.uoi.gr/~zarras/se.htm
Goals of Test Automation
Why should we test ?
Goals of Test Automation
Tests should help us improve
software quality
Tests should help us improve
software understanding
 Tests as specification.
 Ensure that we build the right software.
 Defect localization.
 Ensure that the software is correct.
 Defect prevention.
 Ensure that bugs won't crawl back into the software.
 Tests as documentation.
 Allow the developer/maintainer to answer questions like “what should the expected outcome of the software be if the given input is …”
Goals of Test Automation
Tests should help us reduce risk
Tests should be easy to run
 Tests as safety net.
 Ensure that we do no harm when we change the software.
 Fully automated.
 Execute without any effort.
 Self checking.
 Detect and report any errors without
human intervention.
 Repeatable.
 Can be run many times in a row and produce the same results without human intervention in between.
 Independent from each other.
 Can be run by themselves and NOT
depend on the execution order, failure
or success of other tests.
Test development should not introduce new risks
 Refrain from modifying the
software to facilitate the
development of the tests as safety
net.
How to achieve these goals?
Use an xUnit test automation
framework and patterns
Test Strategy Patterns
How do we prepare automated tests
for our software?
Recorded Tests
We automate tests by recording interactions with the
application and playing them back using a test tool.
No programming skills are needed for this kind of test.
We have two basic choices
when using a Recorded Test
strategy:
We can either acquire third-
party tools that record the
communication that occurs while
we interact with the application
(e.g. Mercury QuickTest
Professional)
or we can build a “record and
playback” mechanism right into
our application
Scripted Tests
We automate the tests by writing test programs by
hand.
This is the most typical kind of test.
Traditionally, Scripted Tests were
written as “test programs,” often
using a special test scripting
language.
Nowadays, we prefer to write
Scripted Tests using a Test
Automation Framework
Data Driven Tests
We store all the information needed for each test in a data
file and write an interpreter that reads the file and executes
the tests.
A Data-Driven Test is an ideal
strategy for getting business people
involved in writing automated tests.
By keeping the format of the data file
simple, we make it possible for the
business person to populate the file
with data and execute the tests
without having to ask a technical
person to write test code for each
test.
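As a sketch of this strategy in JUnit 5, the framework's parameterized-test support can play the interpreter role, running one test per row of a data file; the tests.csv resource is hypothetical, and a trivial Calculator is defined inline just to keep the example self-contained:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvFileSource;

class CalculatorDataDrivenTest {

    // Trivial stand-in SUT so the sketch is self-contained
    static class Calculator {
        int add(int a, int b) { return a + b; }
    }

    // Each row of the (hypothetical) tests.csv holds: a, b, expectedSum
    @ParameterizedTest
    @CsvFileSource(resources = "/tests.csv", numLinesToSkip = 1)
    void addsTwoNumbers(int a, int b, int expectedSum) {
        assertEquals(expectedSum, new Calculator().add(a, b));
    }
}
```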
How do we make it easy to write
and run tests written by different
people?
Test Automation Framework
We use a framework that provides all the mechanisms
needed to run the test logic so the test writer needs to
provide only the test-specific logic.
Which fixture** strategy should we
use?
** fixture in xUnit terminology is everything we need
in place to be able to test the System Under Test
(SUT)
Minimal Fixture
We use the smallest and simplest fixture possible for each test.
A test that uses a Minimal
Fixture will always be easier to
understand than one that uses a
fixture containing unnecessary
or irrelevant information.
Standard Fixture
We reuse the design of the test fixture across many tests.
This approach makes a lot of
sense in the context of manual
execution of many customer
tests because it eliminates
the need for each tester to
spend a lot of time setting up
the test environment.
Fresh Fixture
Each test constructs its own brand-new test fixture for its own
private use.
We should use a Fresh Fixture
whenever we want to avoid any
interdependencies between
tests (which is in fact almost
always the case …..)
Shared Fixture
We reuse the same instance of the test fixture across many
tests.
If we want to avoid slow tests.
Or, when we have a long,
complex sequence of actions,
each of which depends on the
previous actions. In customer
tests, this may show up as a
workflow; in unit tests, it may
be a sequence of method calls
on the same object.
This comes with the big risk of introducing interdependencies between tests.
Back Door Manipulation
We set up the test fixture or verify the outcome by going through a
back door (such as direct database access).
A common application involves
testing basic CRUD (Create,
Read, Update, Delete) operations
on the SUT’s state.
In such a case, we want to verify
that the information persisted and
can be recovered in the same
form.
xUnit Basic Patterns
Where do we put our test code?
Test Method
We encode each test as a single Test Method on some class.
Variations:
Simple success test
(happy day)
Expected exception test
Constructor test
Dependency initialization
test
Test Case Class
We group a set of related Test Methods on a single Testcase Class.
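A minimal JUnit 5 sketch of a Testcase Class grouping two of the Test Method variations (a simple success test and an expected exception test), using java.util.Stack as the SUT:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import java.util.EmptyStackException;
import java.util.Stack;

import org.junit.jupiter.api.Test;

// Testcase Class: groups related Test Methods for one SUT
class StackTest {

    @Test // simple success ("happy day") test
    void pushThenPopReturnsSameElement() {
        Stack<String> stack = new Stack<>();
        stack.push("x");
        assertEquals("x", stack.pop());
    }

    @Test // expected exception test
    void popOnEmptyStackThrows() {
        assertThrows(EmptyStackException.class, () -> new Stack<String>().pop());
    }
}
```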
How do we structure our test code?
Four Phase Test
We structure each test with four distinct parts executed in sequence.
In the first phase, we set up the test fixture and anything we need to put in place to be able to observe the actual outcome.
In the second phase, we interact
with the SUT.
In the third phase, we
determine whether the
expected outcome has been
obtained.
In the fourth phase, we tear
down the test fixture to put the
world back into the state in
which we found it.
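A minimal JUnit 5 sketch of the four phases, using a temporary file as the fixture:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

import org.junit.jupiter.api.Test;

class FourPhaseTest {

    @Test
    void writtenFileCanBeReadBack() throws IOException {
        // Phase 1: setup -- build the fixture
        Path file = Files.createTempFile("fixture", ".txt");

        // Phase 2: exercise -- interact with the SUT
        Files.writeString(file, "hello");

        // Phase 3: verify -- check the expected outcome
        assertEquals("hello", Files.readString(file));

        // Phase 4: teardown -- put the world back as we found it
        Files.delete(file);
    }
}
```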
How do we make tests self-checking?
Assertion Method
We call a utility method to evaluate whether an expected outcome
has been achieved.
Variations:
(In) Equality assertions
Fuzzy equality assertions
(for floating point results
with an error tolerance)
Stated outcome assertions
(is null, is true, …)
Expected exception
assertions.
Single outcome assertions
(fail)
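A JUnit 5 sketch touching each assertion variation listed above (the fuzzy variant uses the delta overload of assertEquals):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNotEquals;
import static org.junit.jupiter.api.Assertions.assertNull;
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class AssertionVariationsTest {

    @Test
    void assertionVariations() {
        assertEquals(42, 40 + 2);                        // equality
        assertNotEquals(0, 40 + 2);                      // inequality
        assertEquals(0.3, 0.1 + 0.2, 1e-9);              // fuzzy equality with tolerance
        assertTrue("abc".startsWith("a"));               // stated outcome: is true
        assertNull(System.getProperty("no.such.key"));   // stated outcome: is null
        assertThrows(ArithmeticException.class,          // expected exception
                () -> { int x = 1 / 0; });
        // fail("...") is the single outcome assertion: it always fails,
        // e.g. to flag a branch the test should never reach.
    }
}
```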
How do we provide more
information about a failed
assertion?
Assertion Message
We include a descriptive string argument in each call to an Assertion
Method.
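A short JUnit 5 sketch; note that Jupiter takes the message as the last argument (JUnit 4 took it first):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class AssertionMessageTest {

    @Test
    void messageExplainsTheFailure() {
        int expected = 5;
        int actual = 2 + 3;
        // The descriptive message is reported when the assertion fails
        assertEquals(expected, actual, "2 + 3 should equal 5");
    }
}
```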
How do we run the tests?
Test Runner
We execute the xUnit framework’s specific program that
instantiates and executes the Testcase Objects. When we have many tests to run, we can organize them in Test Suite classes.
Fixture Setup/Teardown
Patterns
How do we construct (destroy) the
fresh fixture?
Inline Setup (Teardown)
Each Test Method creates its own Fresh Fixture by calling the
appropriate constructor methods to build exactly the test fixture it
requires (the method destroys the fixture at the end).
We can use In-line Setup
when the fixture setup
logic is very simple and
straightforward.
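A minimal sketch of In-line Setup in JUnit 5, where the Test Method builds (and implicitly discards) exactly the fixture it needs:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;

class InlineSetupTest {

    @Test
    void removingTheOnlyElementEmptiesTheList() {
        // In-line setup: the test constructs its own Fresh Fixture
        List<String> list = new ArrayList<>();
        list.add("only");

        list.remove("only");

        assertEquals(0, list.size());
        // In-line teardown is implicit here: the fixture is simply garbage-collected
    }
}
```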
Delegated Setup (Teardown)
Each Test Method creates (destroys) its own Fresh Fixture by
calling Creation/Destruction Methods from within the Test
Methods.
We can use a Delegated Setup
when we want to avoid the test
code duplication caused by
having to set up similar fixtures
for several tests and we want to
keep the nature of the fixture
visible within the Test Methods.
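A sketch of a hypothetical Creation Method in JUnit 5; the “order” is modelled as a plain list just to keep the example self-contained:

```java
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;

class DelegatedSetupTest {

    @Test
    void newOrderContainsItsLine() {
        List<String> order = createOrderWithOneLine("widget"); // Creation Method
        assertTrue(order.contains("widget"));
    }

    // Creation Method: hides construction detail while keeping
    // the nature of the fixture visible in the Test Method
    private List<String> createOrderWithOneLine(String item) {
        List<String> order = new ArrayList<>();
        order.add(item);
        return order;
    }
}
```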
Implicit Fresh Fixture Setup (Teardown)
We build (destroy) the test fixture common to several tests in set
up/tear down methods called by the test framework.
We can use Implicit Setup when
several Test Methods on the
same Testcase Class need an
identical Fresh Fixture.
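In JUnit 5, Implicit Setup/Teardown maps onto the @BeforeEach and @AfterEach lifecycle methods; a minimal sketch:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;

class ImplicitSetupTest {

    private List<String> fixture;

    @BeforeEach // called by the framework before every Test Method
    void setUp() {
        fixture = new ArrayList<>();
        fixture.add("a");
    }

    @AfterEach // called by the framework after every Test Method
    void tearDown() {
        fixture = null;
    }

    @Test
    void fixtureStartsWithOneElement() {
        assertEquals(1, fixture.size());
    }

    @Test
    void clearingEmptiesTheFixture() {
        fixture.clear();
        assertTrue(fixture.isEmpty());
    }
}
```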
How do we create (destroy) a shared
fixture if the test methods that
need it are in the same test class?
Implicit Shared Fixture Setup
(Teardown)
We build (destroy) the shared fixture in special methods called by
the Test Automation Framework before/after the first/last Test
Method is called.
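In JUnit 5 these special methods are @BeforeAll and @AfterAll; the ExpensiveResource class below is a hypothetical stand-in for something slow to create, such as a database connection:

```java
import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

class SharedFixtureTest {

    private static ExpensiveResource shared; // reused by every test in the class

    @BeforeAll // runs once, before the first Test Method
    static void setUpSharedFixture() {
        shared = new ExpensiveResource();
    }

    @AfterAll // runs once, after the last Test Method
    static void tearDownSharedFixture() {
        shared.close();
        shared = null;
    }

    @Test
    void sharedFixtureIsAvailable() {
        assertNotNull(shared);
    }

    // Hypothetical stand-in for a slow-to-build fixture
    static class ExpensiveResource {
        void close() { }
    }
}
```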
How do we create (destroy) a shared
fixture if the test methods that
need it are not in the same test
class?
Lazy Setup
We use Lazy Initialization of the fixture to create it in the first test
method that needs it.
The basic disadvantage is that we do not know when to tear down the fixture. It may be better to group all the test methods that need the fixture in the same test class.
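A minimal sketch of Lazy Setup, with the teardown problem noted in a comment; ExpensiveResource is again a hypothetical slow-to-build fixture:

```java
import static org.junit.jupiter.api.Assertions.assertSame;

import org.junit.jupiter.api.Test;

class LazySetupTest {

    private static ExpensiveResource fixture;

    // Lazy Initialization: the first test that needs the fixture builds it
    private static ExpensiveResource getFixture() {
        if (fixture == null) {
            fixture = new ExpensiveResource();
        }
        return fixture;
    }

    @Test
    void firstCallerBuildsTheFixture() {
        assertSame(getFixture(), getFixture());
    }

    @Test
    void laterTestsReuseTheSameInstance() {
        assertSame(getFixture(), getFixture());
        // Drawback: there is no obvious place to tear the fixture down afterwards
    }

    static class ExpensiveResource { } // hypothetical slow-to-build fixture
}
```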
How do we create (destroy) a shared
fixture if the fixture setup takes
too much time and resources?
Prebuilt Fixture
We build the Shared Fixture separately from running the tests.
Not very common, but could be useful if the setup is very time- or resource-consuming. Fits well with the Back Door Manipulation strategy.
Result Verification Patterns
How do we verify a method that
returns a value?
Return Value Verification
We inspect the returned value of the method and compare it with
an expected return value.
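A minimal JUnit 5 sketch, using Math.max() as the SUT:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class ReturnValueVerificationTest {

    @Test
    void maxReturnsTheLargerArgument() {
        int result = Math.max(3, 7);   // exercise the SUT
        assertEquals(7, result);       // compare the returned value with the expected one
    }
}
```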
How do we verify a method that
changes the state of the SUT?
State Verification
We inspect the state of the system under test after it has been
exercised and compare it to the expected state.
Variations:
Procedural State Verification, we
simply write a series of calls to Assertion
Methods that pick apart the state
information into pieces and compare to
individual expected values.
Expected State Specification, we
construct a specification for the
post-exercise state in the form of one or
more objects populated with the expected
attributes. We then compare the actual
state with these objects.
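A JUnit 5 sketch showing both variations against a simple list standing in for the SUT's state:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;

class StateVerificationTest {

    @Test
    void proceduralStateVerification() {
        List<String> cart = new ArrayList<>();
        cart.add("book");
        // Pick the state apart and assert on individual pieces
        assertEquals(1, cart.size());
        assertEquals("book", cart.get(0));
    }

    @Test
    void expectedStateSpecification() {
        List<String> cart = new ArrayList<>();
        cart.add("book");
        // Build an object holding the expected post-exercise state, compare whole states
        List<String> expected = List.of("book");
        assertEquals(expected, cart);
    }
}
```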
How do we verify a method of the SUT that interacts with other depended-on components (DOCs)?
Behavior Verification
We capture the indirect outputs/interactions of the SUT with the DOC as they occur and compare them to the expected behavior.
Typically to verify behavior we
have to use some kind of a Test
Spy or Test Mock
(see Test Double patterns that
follow)
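A sketch with Mockito, where both the AuditLog DOC and the AccountService SUT are hypothetical types introduced only to make the interaction visible:

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;

import org.junit.jupiter.api.Test;

class BehaviorVerificationTest {

    // Hypothetical DOC
    interface AuditLog {
        void record(String event);
    }

    // Hypothetical SUT that produces an indirect output on the DOC
    static class AccountService {
        private final AuditLog log;
        AccountService(AuditLog log) { this.log = log; }
        void withdraw(int amount) { log.record("withdraw:" + amount); }
    }

    @Test
    void withdrawIsRecordedInTheAuditLog() {
        AuditLog log = mock(AuditLog.class);   // test double standing in for the DOC
        new AccountService(log).withdraw(50);  // exercise the SUT
        verify(log).record("withdraw:50");     // compare captured interaction to expectation
    }
}
```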
Test Double Patterns
How do we test the SUT when it interacts with or depends on other DOCs?
Test Double
We replace the DOC on which the SUT depends with a “test-specific equivalent” (a Test Double) that allows us to control the inputs to the SUT and/or capture the interactions between the SUT and the DOC.
How do we verify the behavior of
the SUT when it gets indirect
inputs from another component,
independently from this
component?
Test Stub
Use a test-specific object that feeds the desired indirect inputs into the
system under test.
Variations:
Responder: a stub that feeds the SUT with
valid (happy path) input.
Saboteur: a stub that feeds the SUT with
invalid input.
Stub implementation options:
Use a mocking framework like Mockito (mock() and when-then-return/throw commands).
If the DOC implements an interface, configure the SUT with a test-specific implementation of the DOC interface.
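A Mockito sketch of both stub variations; the RateProvider DOC and PriceCalculator SUT are hypothetical:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class TestStubTest {

    // Hypothetical DOC providing indirect input to the SUT
    interface RateProvider {
        double currentRate();
    }

    // Hypothetical SUT
    static class PriceCalculator {
        private final RateProvider rates;
        PriceCalculator(RateProvider rates) { this.rates = rates; }
        double priceInEuro(double dollars) { return dollars * rates.currentRate(); }
    }

    @Test
    void responderFeedsValidInput() {
        RateProvider stub = mock(RateProvider.class);
        when(stub.currentRate()).thenReturn(0.5);   // Responder: valid (happy path) input
        assertEquals(5.0, new PriceCalculator(stub).priceInEuro(10.0), 1e-9);
    }

    @Test
    void saboteurFeedsInvalidInput() {
        RateProvider stub = mock(RateProvider.class);
        when(stub.currentRate()).thenThrow(new IllegalStateException()); // Saboteur
        assertThrows(IllegalStateException.class,
                () -> new PriceCalculator(stub).priceInEuro(10.0));
    }
}
```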
How do we verify the behavior of
the SUT when it calls another
component?
Test Spy
We use a Test Spy to wrap the DOC to capture the indirect output
calls made to DOC by the SUT for later verification by the test.
Spy Implementation
options:
Use a mocking framework like Mockito (spy() and verify() commands).
Subclass DOC and override the
required methods to capture
SUT calls. Configure the SUT
with a test-specific object of the
DOC subclass.
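A minimal Mockito sketch; here a plain ArrayList stands in for the DOC, and the test itself plays the role of the SUT making the call:

```java
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.verify;

import java.util.ArrayList;
import java.util.List;

import org.junit.jupiter.api.Test;

class TestSpyTest {

    @Test
    void spyRecordsCallsMadeToTheDoc() {
        // spy() wraps a real object and records the calls made to it
        List<String> doc = spy(new ArrayList<String>());

        doc.add("event");          // stands in for the SUT calling the DOC

        verify(doc).add("event");  // later verification of the captured call
    }
}
```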
How do we verify the behavior of
the SUT when it calls another
component, independently from
this component?
Fake Object
We replace the DOC that the SUT depends on with a much lighter-
weight implementation.
Fake Object implementation
options:
Use a mocking framework like Mockito (mock() and when-then-return/throw commands).
If the DOC implements an interface, configure the SUT with a test-specific implementation of the DOC interface.
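A sketch of a hand-written Fake Object: a hypothetical UserStore DOC, normally backed by a real database, replaced by a lightweight in-memory map:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.HashMap;
import java.util.Map;

import org.junit.jupiter.api.Test;

class FakeObjectTest {

    // Hypothetical DOC interface, normally backed by a real database
    interface UserStore {
        void save(String id, String name);
        String load(String id);
    }

    // Fake Object: a much lighter-weight, in-memory implementation
    static class InMemoryUserStore implements UserStore {
        private final Map<String, String> users = new HashMap<>();
        public void save(String id, String name) { users.put(id, name); }
        public String load(String id) { return users.get(id); }
    }

    @Test
    void savedUserCanBeLoadedBack() {
        UserStore store = new InMemoryUserStore();
        store.save("42", "Ada");
        assertEquals("Ada", store.load("42"));
    }
}
```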
Testcase Object
The test runner creates a Command object for each test and calls its test method when we wish to execute it.
How do we run the tests when we
have many tests to run?
Test Suite
We define a collection class that implements the standard test
interface and use it to run a set of related Testcase Objects.
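A JUnit 5 sketch of a suite class (it needs the junit-platform-suite artifact on the classpath), reusing two of the hypothetical test classes from the earlier sketches:

```java
import org.junit.platform.suite.api.SelectClasses;
import org.junit.platform.suite.api.Suite;

// Runs the selected Testcase Classes together as one suite
@Suite
@SelectClasses({ StackTest.class, AssertionMessageTest.class })
class AllUnitTests { }
```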
Editor's Notes
  • Test Case Class: https://www.javatpoint.com/junit-test-case-example-in-java
  • Assertion Method: https://www.baeldung.com/junit-assertions and https://howtodoinjava.com/junit5/junit-5-assertions-examples/
  • Test Runner / Test Suite: https://howtodoinjava.com/junit5/junit5-test-suites-examples/
  • Implicit Setup (Teardown): https://www.baeldung.com/junit-before-beforeclass-beforeeach-beforeall
  • Behavior Verification / Test Doubles: Attention! These self-checking mock objects are not supported by Mockito; they have to be built by hand. Typical Mockito mock objects are just fake objects or stub objects. A way to make a spy without Mockito is to subclass the DOC and override its methods to keep track of when they are called.
  • Test Stub: with Mockito, to make a stub we create a mock and use when-then commands to specify its behavior: https://www.baeldung.com/mockito-behavior