MA
Full Day Tutorial
10/13/2014 8:30:00 AM
"The Challenges of BIG Testing:
Automation, Virtualization,
Outsourcing, and More"
Presented by:
Hans Buwalda
LogiGear Corporation
Brought to you by:
340 Corporate Way, Suite 300, Orange Park, FL 32073
888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
Hans Buwalda
LogiGear
Hans Buwalda has been working with information technology since his high school years. In his
thirty-year career, Hans has gained experience as a developer, manager, and principal
consultant for companies and organizations worldwide. He was a pioneer of the keyword
approach to testing and automation, now widely used throughout the industry. His approaches
to testing, like Action Based Testing and Soap Opera Testing, have helped a variety of
customers achieve scalable and maintainable solutions for large and complex testing
challenges. Hans is a frequent speaker at STAR conferences and is lead author of Integrated
Test Design and Automation: Using the Testframe Method.
Speaker Presentations
© 2014 LogiGear Corporation. All Rights Reserved
The Challenges of BIG Testing:
Automation, Virtualization, Outsourcing, and More

Hans Buwalda
LogiGear

STARWEST 2014, Tutorial MA
Anaheim, Monday October 13, 2014
8:30 AM – 4:30 PM
© 2014 LogiGear
Introduction
− industries
− roles in testing
© 2014 LogiGear
Who is your speaker?
Software testing company, around since 1994
Testing and test automation services:
− consultancy, training
− test development and automation services
− "test integrated" development services
Products:
− TestArchitect™, TestArchitect for Visual Studio™
− integrating test development with test management and automation
− based on modularized keyword-driven testing
LogiGear Magazine:
− themed issues, non-commercial
www.logigear.com
www.testarchitect.com
Dutch guy, in California since 2001
Background in math, computer science, management
Since 1994 focusing on automated testing
− keywords, agile testing, big testing
Hans Buwalda
LogiGear Corporation
hans @ logigear.com
www.happytester.com
© 2014 LogiGear
What is "BIG"
Big efforts in development, automation, execution and/or follow up
It takes a long time and/or large capacity to run tests (lots of tests, lots
of versions, lots of configurations, ...)
Scalability, short term and long term
Complexity, functional, technical, scale
Number and diversity of players and stakeholders
Various definitions of "big" possible... and relevant...
− "10 machines" or "10 acres"
− "1000 tests" or "1000 weeks of testing"
Big today means: big for you
− not trivial, you need to think about it
"Windows 8 has undergone more than
1,240,000,000 hours of testing"
Steven Sinofsky, Microsoft, 2012
© 2014 LogiGear
Some key items in scalable (automated) testing
Organization and design of the tests
Process and cooperation (agile or traditional)
Project and production focus
Technology, tooling, architecture
Infrastructure
Testability of the application under test
Agreement, commitment
Globalization, off-shoring
© 2014 LogiGear
Existential Questions
Why test?
Why not test?
Why automate tests?
Why not automate tests?
© 2014 LogiGear
Why test?
People expect us to
Somebody wants us to
Increases certainty and control
− Showing absence of problems
Finds faults, saving time and money, and preventing damage
− Showing presence of problems
© 2014 LogiGear
Why not test?
It costs time and money
You might find problems . . .
We forgot to plan for it
We need the resources for development
It is difficult
It's hard to manage
© 2014 LogiGear
Why Automate Tests?
It is more fun
Can save time and money
− potentially improving time-to-market, quality-to-market and control
Can capture key domain and application knowledge in a re-usable way
Can speed up development life cycles
− for agile project approaches automation is often a must-have
Execution typically is more reliable
− a robot is not subjective, tired or moody
Some tests can be done much better, or only, with automation
− for example unit tests can test an application on the individual function level,
helping a lot in scalability of system development
− load and performance tests are also good examples
− automation can also reach non-UI items like web services
© 2014 LogiGear
The Power of Robot Perception
FINISHED FILES ARE THE RE
SULT OF YEARS OF SCIENTI
FIC STUDY COMBINED WITH
THE EXPERIENCE OF YEARS...
© 2014 LogiGear
Why not Automate?
Can rule out the human elements
− promotes "mechanical" testing
− might not find "unexpected" problems
More sensitive to good practices
− pitfalls are plentiful
Needs technical expertise in the test team
Tends to dominate the testing process
− at the cost of good test development
Creates more software to manage
− can actually diminish scalability rather than helping it
− in particular, changes in an application under test can
have a large, and hard to predict, impact on the
automated tests
© 2014 LogiGear
The Power of Human Perception

Olny srmat poelpe can raed tihs.
I cdnuolt blveiee taht I cluod aulaclty uesdnatnrd
waht I was rdanieg. The phaonmneal pweor of the
hmuan mnid, aoccdrnig to a rscheearch at
Cmabrigde Uinervtisy, it deosn't mttaer in waht
oredr the ltteers in a wrod are, the olny iprmoatnt
tihng is taht the frist and lsat ltteer be in the rghit
pclae. The rset can be a taotl mses and you can
sitll raed it wouthit a porbelm. Tihs is bcuseae the
huamn mnid deos not raed ervey lteter by istlef,
but the wrod as a wlohe.
© 2014 LogiGear
The Power of Human Perception
Notice at an event:
"Those who have children and don't
know it, there is a nursery
downstairs."
In a New York restaurant:
"Customers who consider our
waiters uncivil ought to see the
manager."
In a bulletin:
"The eighth-graders will be
presenting Shakespeare's Hamlet in
the basement Friday at 7 PM. You
are all invited to attend this drama."
In the offices of a loan company:
"Ask about our plans for owning your
home."
In the window of a store:
"Why go elsewhere and be cheated
when you can come here?"
© 2014 LogiGear
Some test kinds and their scalability (simplified)

Unit Testing
− Relation to code: close relationship with the code
− Quality/depth: singular test scope, but deep into the code
− Automation: fully automated by nature
− Scalability: scalable, grows with the code, easy to repeat

Functional Testing
− Relation to code: usually does not have a one-on-one relation with code
− Quality/depth: quality and scope depend on test design
− Automation: in particular UI based automation can be a challenge
− Scalability: often a bottleneck in scalability

Exploratory Testing
− Relation to code: human driven, not seeking a relation with code
− Quality/depth: usually deep and thorough, good at finding problems
− Automation: may or may not be automated afterwards
− Scalability: not meant to be repeatable; rather do a new session
© 2014 LogiGear
Actions
Fragment from a test with actions: 4 actions, each with an action keyword and arguments, read from top to bottom:
acc nr first last
open account 123123 John Doe
acc nr amount
deposit 123123 10.11
deposit 123123 20.22
acc nr expected
check balance 123123 30.33
• The test developer creates tests using actions with keywords and
arguments
• Checks are, as much as possible, explicit (specified expected values)
• The automation task focuses on automating the keywords, each keyword
is automated only once
• This technique can be very scalable. A similar approach is behavior
based testing, which also works with human-readable tests, but is more
verbose
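To make this concrete, here is a minimal sketch of a keyword interpreter in Python (not how any particular tool implements it); the banking functions are hypothetical stand-ins for real automation code.

```python
# Minimal keyword interpreter sketch; open_account, deposit and
# check_balance are hypothetical stand-ins for real automation code.

accounts = {}

def open_account(acc_nr, first, last):
    accounts[acc_nr] = {"name": f"{first} {last}", "balance": 0.0}

def deposit(acc_nr, amount):
    accounts[acc_nr]["balance"] += float(amount)

def check_balance(acc_nr, expected):
    actual = accounts[acc_nr]["balance"]
    assert abs(actual - float(expected)) < 0.005, f"{actual} != {expected}"

# each keyword is automated exactly once; test lines only supply arguments
ACTIONS = {"open account": open_account,
           "deposit": deposit,
           "check balance": check_balance}

test_lines = [
    ("open account", "123123", "John", "Doe"),
    ("deposit", "123123", "10.11"),
    ("deposit", "123123", "20.22"),
    ("check balance", "123123", "30.33"),
]

for keyword, *args in test_lines:
    ACTIONS[keyword](*args)     # dispatch each test line to its keyword
print("test module passed")
```

The same dispatch loop would serve any number of test modules; only new keywords need engineering work.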
© 2014 LogiGear
Potential benefits of keywords
More productive: more tests, better tests
− more breadth
− more depth
Easier to read and understand
− no program code, tests can be self-documenting
− facilitates involvement of non-technical people, like domain experts
Fast, results can be quickly available
− the design directly drives the automation
More targeted efforts for the automation engineers
− less repetition, test design helps in creating a structured solution (factoring)
− can focus on critical and complex automation challenges more easily
Automation can be made more stable and maintainable
− limited and manageable impact of changes in the system under test
A significant portion of the tests can typically be created early in a
system life cycle
− dealing with execution details later
. . .
© 2014 LogiGear
Risks of keywords
Keywords are often seen as a silver bullet
− often treated as a technical "trick", complications are underestimated
The method needs understanding and experience to be
successful
− pitfalls are many, and can have a negative effect on the outcome
− some of the worst automation projects I've seen were with keywords
Testers might get pushed into half-baked automation role
− risk: you lose a good tester and gain a poor programmer
− focus may shift from good (lean and mean) testing to "getting
automation to work"
− the actual automation challenges are better left to experienced
automation professionals
Lack of method and structure can risk manageability
− maintainability may not be as good as hoped
− tests may turn out shallow and redundant
© 2014 LogiGear
Keywords need a method
By themselves keywords don't provide much scalability
− they can even backfire and make automation more cumbersome
− a method can help tell you which keywords to use when, and how to
organize the process
Today we'll look at Action Based Testing (ABT)
− addresses test management, test development and automation
− large focus on test design as the main driver for automation success
Central deliverables in ABT are the "Test Modules"
− developed in spreadsheets
− each test module contains "test objectives" and "test cases"
− each test module is a separate (mini) project; each test module can
involve different stakeholders
© 2014 LogiGear
High Level Test Design - Test Development Plan
Objectives
Test Module 1
Test Cases
Test Module 2 Test Module N
Actions
. . .
AUTOMATION
Objectives Objectives
interaction test business test
Overview Action Based Testing
define the "chapters"
create the "chapters"
create the "words"
make the words work
Test Cases Test Cases
window control value
enter log in user name jdoe
enter log in password car guy
window control property expected
check property log in ok button enabled true
user password
log in jdoe car guy
first last brand model
enter rental Mary Renter Ford Escape
last total
check bill Renter 140.42
© 2014 LogiGear
Example of an ABT test module
Consists of an (1) initial part, (2) test cases and (3) a final part
Focus is on readability, and a clear scope
Navigation details are avoided, unless they're meant to be tested
TEST MODULE Car Rental Payments
user
start system jdoe
TEST CASE TC 01 Rent some cars
first name last name car days
rent car Mary Jones Ford Escape 3
first name last name amount
check billing Mary Jones 140.40
FINAL
close application
© 2014 LogiGear
Example of a "low level" test module
In "low level" tests interaction details are not hidden, since they are
the target of the test
The right level of abstraction depends on the scope of the test, and is
an outcome of your test design process
TEST MODULE Screen Flow
user
start system john
TEST CASE TC 01 Order button
window button
click main create order
window
check window exists new order
FINAL
close application
© 2014 LogiGear
Re-use actions to make new actions

In the example below we make a new action
Existing actions are strung together to create new ones with a
broader scope
Often steps in low level tests are re-used to create these action
definitions

define in one place:

ACTION DEFINITION check balance
argument customer
argument amount
window control value
enter balance inquiry last name # customer
window control
click balance inquiry view balance
window control expected
check balance inquiry balance # amount

use many times in tests:

:
customer amount
check balance Smith 223.45
check balance Jones 0.00
check balance James -330.45
:
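A minimal Python sketch of this composition idea; the low level actions just print here, as hypothetical stand-ins for real UI automation.

```python
# Low level actions, hypothetical stand-ins for real UI automation:
def enter(window, control, value):
    print(f"enter {window}.{control} = {value}")

def click(window, control):
    print(f"click {window}.{control}")

def check(window, control, expected):
    print(f"check {window}.{control} == {expected}")

# New, broader-scope action, defined in one place...
def check_balance(customer, amount):
    enter("balance inquiry", "last name", customer)
    click("balance inquiry", "view balance")
    check("balance inquiry", "balance", amount)

# ...and used many times in tests:
for customer, amount in [("Smith", "223.45"), ("Jones", "0.00"),
                         ("James", "-330.45")]:
    check_balance(customer, amount)
```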
© 2014 LogiGear
Question
What is wrong with the
following pictures?
© 2014 LogiGear
No Millennium Problems??
© 2014 LogiGear
Anything wrong with this instruction?
You should change your battery or switch to outlet
power immediately to keep from losing your work.
© 2014 LogiGear
Issues are not always obvious...
Downton Abbey
© 2014 LogiGear
Why Better Test Design?
Quality and manageability of tests
− many tests are often quite "mechanical" now, no surprises
− one-to-one related to specifications, user stories or requirements,
which is often OK, but lacks aggression
− no combinations, no unexpected situations, lame and boring
− such tests have a hard time finding (interesting) bugs
Better automation
− when unneeded details are left out of tests, they don't have to be
maintained
− avoiding "over checking": creating checks that are not in the scope of
a test, but may fail after system changes
− limit the impact of system changes on tests, making such impact
more manageable
I have come to believe that successful automation is usually
less of a technical challenge than a test design challenge.
© 2014 LogiGear
Case for organizing tests in BIG projects
Can help keep the volume down
Isolate the complexities
Make tests and automation more re-usable
Easier to deal with changing designs
Much of the tested subject matter is often business
oriented, not system specific
− for example a home loan is a home loan
Automation can be made efficient. For example
business logic tests may not even need the UI
− can use web services or business components
© 2014 LogiGear
The Three “Holy Grails” of Test Design
Metaphor to depict three main steps in test design
Using "grail" to illustrate that there is no single perfect
solution, but that it matters to pay attention
1. Organization of tests into test modules
2. Right approach for each test module
3. Proper level of detail in the test specification
© 2014 LogiGear
What's the trick...
© 2014 LogiGear
What's the trick...
Have or acquire facilities to store and organize
your content
Select your stuff
Decide where to put what
− assign and label the shelves
Put it there
If the organization is not sufficient anymore, add
to it or change it
© 2014 LogiGear
Breakdown Criteria
Common Criteria
− Functionality (customers, finances, management information, UI, ...)
− Architecture of the system under test (client, server, protocol, sub
systems, components, modules, ...)
− Kind of test (navigation flow, negative tests, response time, ...)
Additional Criteria
− Stakeholders (like "Accounting", "Compliance", "HR", ...)
− Complexity of the test (put complex tests in separate modules)
− Execution aspects (special hardware, multi-station, ...)
− Project planning (availability of information, timelines, sprints, ...)
− Risks involved (extra test modules for high risk areas)
− Ambition level (smoke test, regression, aggressive, ...)
© 2014 LogiGear
Examples
Lifecycle tests (Create, Read, Update, Delete) for business objects
− like "car", "customer", "order", "invoice", etc
− for all: test various types and situations, and keep the tests at high level if possible
Forms, value entry
− does each form/dialog/page work
− mandatory and optional fields, valid and invalid values, etc
− UI elements and their properties and contents
− function keys, tab keys, special keys, etc
Screen and transaction flows
− like cancel an order, menu navigation, using the browser back and forward buttons, etc
− is the data in the database correct after each flow
Business transactions, end-to-end tests
− like enter, submit and fulfill a sales order, then check inventory and accounting
− example: behaviors of alarms
Functions and features
− can I count orders, can I calculate a mortgage, etc
− can I export, like to PDF, HTML, XML, ...
Security
− login and password procedures
− authorizations
Localizations
− languages, standards, ...
Special tests
− multi-station, load, hardware intensive, etc
© 2014 LogiGear
Example Top Level Structure
Project
− <business object 1>
• Lifecycles
• Value entry
• Screen flows
• . . .
• Dialogs
− <business object 2>
− Functions and Features
− Integrations
− End-to-end, business
− . . .
− Security, authorization
− Special tests
− Non-UI
− Extensibility, customizing
− Custom controls
− . . .
© 2014 LogiGear
Approach 1: Workshop
Convene a meeting with relevant participants
− test developers
− domain experts
− automation engineer (focus on efficiency of automation)
− experienced moderator
− also consider: developers, managers
If necessary, train participants before the discussion
© 2014 LogiGear
Approach 2: Design and Feedback
One or two experienced test designers create a
first draft
The draft is delivered to and discussed with the relevant
parties
Ask the parties to verify:
1. Structure: does it make sense
2. Completeness: are all relevant areas covered
Based on feedback, further modify the design
© 2014 LogiGear
Identifying the modules
Step 1: top down → establish main structure (and understanding)
analyze what the business is and what the system does
how is it technically organized?
what is important for us to test?
use the list in the "breakdown examples" slide as a starting point
also look at the “secondary criteria”, as far as applicable
if the test is large, define main groups first, then detail out into modules
Step 2: bottom up → refine, complete
study individual functionalities and checks (like from existing test cases)
and identify test modules for them if needed
identify and discuss any additional criteria and needed testing situations
review and discuss the resulting list(s) of test modules
create some early drafts of test modules and adjust the list if needed
© 2014 LogiGear
Some notes on Bugs
Bugs found in the "Immediate Sphere", when the developer/team is still
working on the code (like in the same sprint):
− consider not logging them as bugs, since that adds much overhead
− simply share a failed test module with the developer
For each bug found late, ask three questions, in this order:
1. was it a bug?
2. what was the root cause?
3. why wasn't it caught?
− consider keeping this information in the tracking system
Bugs found "Post Delivery", when the developer/team is already working
on something else:
− good to keep track, manage, prioritize, assign, close, learn, etc.
− the later the bug is found, the more important this is
© 2014 LogiGear
Questions for Test Design
How does your organization
handle test design and test
organization?
How do you document it?
© 2014 LogiGear
"Thou Shall Not Debug Tests..."
Large and complex test projects can be hard to "get to
run"
If they are, however, start by taking a good look again at
your test design...
Rule of thumb: don't debug tests. If tests don't run
smoothly, make sure:
− lower level tests have been successfully executed first -> UI flow in the AUT
is stable
− actions and interface definitions have been tested sufficiently with their own
test modules -> automation can be trusted
− are your test modules not too long and complex?
© 2014 LogiGear
What about existing tests?
Compare to moving house:
− some effort can't be avoided
− be selective, edit your stuff,
• look at the future, not the past
− first decide where to put what, then put it there
− moving is an opportunity, you may not get such chance again soon
Follow the module approach
− define the modules and their scope as if from scratch
− use the existing test cases in two ways:
• verify completeness
• harvest and re-use them for tests and for actions
− avoid porting over "step by step", in particular avoid over-checking
© 2014 LogiGear
Grail 2: Approach per Test Module
Plan the test module:
− when to develop: do we have enough information?
UI tests are usually the last ones to be developed
− when to execute: make sure lower level stuff is working first
UI tests are usually the first ones to be executed
Process:
− do an intake: understand what is needed and devise an approach
− analyze requirements, formulate "test objectives", create tests
Don't just stick to "checking", try to follow an exploratory approach:
− see the test development as a "learning process", about the business domain, the
application structure, the interaction, etc
− talk about your tests, make them strong
Identify stakeholders and their involvement:
− users, subject matter experts
− developers
− auditors
Choose testing techniques if applicable:
− boundary analysis, decision tables, etc
© 2014 LogiGear
Eye on the ball, Scope
Always know the scope of the test module
The scope should be unambiguous
The scope determines many things:
− what the test objectives are
− which test cases to expect
− what level of actions to use
− what the checks are about and which events should
generate a warning or error (if a “lower” functionality
is wrong)
© 2014 LogiGear
Too detailed?

Step name | Description | Expected
step 16 | Click the new formula button to start a new calculation. | The current formula is cleared. If it had not been saved a message will show
step 17 | Enter "vegas winner" in the name field | The title will show "vegas winner"
step 18 | Open the formula editor by clicking the '+' button for the panel "formula editor" | The formula editor will show with an empty formula (only comment lines)
step 19 | Add some lines and enter "10*x;" | The status bar will show "valid formula". There is a "*" marker in the title
step 20 | Click the Save formula button | The formula is saved, the "*" will disappear from the title
step 21 | Open the panel with the arguments by clicking the '+' button | There are two lines, for 'x' and 'y'
step 22 | Click on the value type cell and select "currency" | A button to select a currency appears, with default USD
step 23 | Click on the specify argument values link | The argument specification dialog is shown
© 2014 LogiGear
State your Objectives . . .

...
TO-3.51 The exit date must be after the entry date
...

test objective TO-3.51
name entry date exit date
enter employment Bill Goodfellow 2016-10-02 2016-10-01
check error message The exit date must be after the entry date.

Linking through test objectives can make traceability easier:
− direct relation: requirement/specification -> test case
− indirect relation via a test objective: requirement/specification -> test objective -> test case
© 2014 LogiGear
Examples of Testing Techniques
Equivalence class partitioning
• any age between 18 and 65
• see Cem Kaner's book on "Domain Testing"
Boundary condition analysis
• try 17, 18, 19 and 64, 65, 66
Error guessing
• try Cécile Schäfer to test sorting of a name list
Exploratory
• "Exploratory testing is simultaneous learning,
test design, and test execution", James Bach,
www.satisfice.com
• note Hans: I think there is also something like
"business exploratory testing", focusing on test
design
Error seeding
• deliberately inject faults in a test version of the
system, to see if the tests catch them
• handle with care, don't let the bugs get into the
production version
Decision tables
• define possible situations and the expected
responses of the system under test
State transition diagrams
• identify "states" of the system, and have your
tests go through each transition between
states at least once
Jungle Testing
• focus on unexpected situations, like hacking
attacks
Soap Opera Testing
• describe typical situations and scenarios in the
style of episodes of a soap opera, with fixed
characters
• high density of events, exaggerated
• make sure the system under test can still
handle these
© 2014 LogiGear
"Jungle Testing"
Expect the unexpected
− unexpected requests
− unexpected situations (often data oriented)
− deliberate attacks
− how does a generic design respond to a specific unexpected event?
Difference in thinking
− coding bug: implementation is different from what was intended/specified
− jungle bug: system does not respond well to an unexpected situation
To address
− study the matter (common hack attacks, ...)
− make a risk analysis
− make time to discuss it (analysis, brainstorming)
− involve people who can know
− use exploratory testing
− use an agile approach for test development
− consider randomized testing, like "monkey" testing
© 2014 LogiGear
Soap Opera Testing
Informal scenario technique to invite subject-matter
experiences into the tests, and efficiently address multiple
objectives
Using a recurring theme, with “episodes”
About “real life”
But condensed
And more extreme
Typically created with a high involvement of end-users
and/or subject-matter experts
It can help create a lot of tests quickly, and in an agile
way
© 2014 LogiGear
Lisa Crispin: Disorder Depot . . .
There are 20 preorders for George W. Bush action
figures in "Enterprise", the ERP system, awaiting the
receipt of the items in the warehouse.
Finally, the great day arrives, and Jane at the warehouse
receives 100 of the action figures as available inventory
against the purchase order. She updates the item record
in Enterprise to show it is no longer a preorder.
Some time passes, during which the Enterprise
background workflow to release preorders runs. The 20
orders are pick-released and sent down to the warehouse.
Source: Hans Buwalda, Soap Opera Testing (article), Better Software Magazine, February 2005
© 2014 LogiGear
Lisa Crispin: Disorder Depot . . .
Then Joe loses control of his forklift and accidentally
drives it into the shelf containing the Bush action figures.
All appear to be shredded to bits. Jane, horrified,
removes all 100 items from available inventory with a
miscellaneous issue. Meanwhile, more orders for this very
popular item have come in to Enterprise.
Sorting through the rubble, Jane and Joe find that 14 of
the action figures have survived intact in their boxes.
Jane adds them back into available inventory with a
miscellaneous receipt.
© 2014 LogiGear
Lisa Crispin: Disorder Depot . . .
This scenario tests
• Preorder process
• PO receipt process
• Miscellaneous receipt and issue
• Backorder process
• Pick-release process
• Preorder release process
• Warehouse cancels
© 2014 LogiGear
Vary your tests?
Automated tests have a tendency to be rigid, and
predictable
Real-world situations are not necessarily
predictable
Whenever possible try to vary:
− by selecting other data cases that still fit the goal of the test
− by randomizing the behavior of the test
© 2014 LogiGear
Generation and randomization techniques
Model-based
− use models of the system under test to create tests
− see: Harry Robinson, www.model-based-testing.org, and Hans Buwalda, Better
Software, March 2003
Data driven testing
− apply one test scenario to multiple data elements
− either coming from a file or produced by the automation
"Monkey testing"
− use automation to generate random data or behavior
− "smart monkeys" will follow typical user behavior, most helpful in efficiency
− "dumb monkeys" are more purely random, may find more unexpected issues
− long simulations can expose bugs traditional tests won't find
Extended Random Regression
− have a large database of tests
− randomly select and run them, for a very long time
− this will expose bugs otherwise hidden
− see Cem Kaner et al.: "High Volume Test Automation", STARWEST 2004
© 2014 LogiGear
Data Driven Testing
Separate test logic from the data
Possible origins for the data:
− earlier steps in the test
− data table
− randomizer, or other formula
− external sources, like a database query
Use "variables" as placeholders in the test case,
instead of hard values
Data driven is powerful, but use it in moderation:
− value cannot be known at test time, or changes over time
− having many data variations is meaningful for the test
© 2014 LogiGear
Variables and expressions with keywords
This test does not need an absolute number for the
available cars; it just wants to see that the stock is updated
As a convention we denote an assignment with ">>"
The "#" indicates an expression
TEST CASE TC 02 Rent some more cars
car available
get quantity Chevvy Volt >> volts
first name last name car
rent car John Doe Chevvy Volt
rent car John Doe Chevvy Volt
car expected
check quantity Chevvy Volt # volts - 2
© 2014 LogiGear
Data driven testing with keywords
The test lines will be repeated for each row in the data set
The values represented by "car", "first" and "last" come
from the selected row of the data set
TEST CASE TC 03 Check stocks
data set
use data set /cars
car available
get quantity # car >> quantity
first name last name car
rent car # first # last # car
car expected
check quantity # car # quantity - 1
repeat for data set
DATA SET cars
car first last
Chevvy Volt John Doe
Ford Escape Mary Kane
Chrysler 300 Jane Collins
Buick Verano Tom Anderson
BMW 750 Henry Smyth
Toyota Corolla Vivian Major
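A rough Python equivalent of the "repeat for data set" construct, assuming a fake in-memory inventory as a stand-in for the system under test; only the looping idea matters here.

```python
# Fake inventory and actions, stand-ins for the real system under test.
DATA_SET_CARS = [
    ("Chevvy Volt", "John", "Doe"),
    ("Ford Escape", "Mary", "Kane"),
    ("Chrysler 300", "Jane", "Collins"),
]
stock = {car: 10 for car, _, _ in DATA_SET_CARS}

def get_quantity(car):
    return stock[car]

def rent_car(first, last, car):
    stock[car] -= 1

def check_quantity(car, expected):
    assert stock[car] == expected, f"{car}: {stock[car]} != {expected}"

# the same test lines are repeated for each row in the data set
for car, first, last in DATA_SET_CARS:
    quantity = get_quantity(car)        # get quantity ... >> quantity
    rent_car(first, last, car)          # rent car
    check_quantity(car, quantity - 1)   # check quantity ... # quantity - 1
print("all rows passed")
```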
© 2014 LogiGear
Combinations
Input values
− determine equivalence classes of values for a variable or field
− for each class pick a value (or randomize)
Options, settings
Configurations
− operating systems, operating system versions and flavors
• Windows service packs, Linux distributions
− browsers, browser versions
− protocol stacks (IPv4, IPv6, USB, ...)
− processors
− DBMS's
Combinations of all of the above
Trying all combinations will spin out of control quickly
© 2014 LogiGear
Pairwise versus exhaustive testing
Group values of variables in pairs (or tuples with more than 2)
Each pair (tuple) should occur in the test at least once
− maybe not in every run, but at least once before you assume "done"
− consider going through the combinations round-robin, for example pick a different
combination every time you run a build acceptance test
− in a NASA study:
• 67 percent of failures triggered by a single value
• 93 percent by two-way combinations, and
• 98 percent by three-way combinations
Example, configurations
− operating system: Windows XP,
Apple OS X, Red Hat Enterprise Linux
− browser: Internet Explorer, Firefox, Chrome
− processor: Intel, AMD
− database: MySQL, Sybase, Oracle
− 54 combinations possible (3 × 3 × 2 × 3); to test each pair: 10 tests
Example of tools:
− ACTS from NIST, PICT from Microsoft, AllPairs from James Bach (Perl)
− for a longer list see: www.pairwise.org
These techniques and tool are supportive only. Often priorities
between platforms and values can drive more informed selection
Source: PRACTICAL COMBINATORIAL TESTING, D. Richard Kuhn, Raghu N.
Kacker, Yu Lei, NIST Special Publication 800-142, October, 2010
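Pairwise tools do the hard part (generating a small covering set), but the coverage criterion itself is easy to check. A small Python sketch, using the example values from this slide, that reports which two-way combinations a candidate test set still misses:

```python
from itertools import combinations, product

factors = {
    "os": ["Windows XP", "Apple OS X", "Red Hat Enterprise Linux"],
    "browser": ["Internet Explorer", "Firefox", "Chrome"],
    "processor": ["Intel", "AMD"],
    "database": ["MySQL", "Sybase", "Oracle"],
}

def uncovered_pairs(tests):
    """Return all two-way combinations not exercised by any test."""
    missing = set()
    for (f1, v1s), (f2, v2s) in combinations(factors.items(), 2):
        for v1, v2 in product(v1s, v2s):
            if not any(t[f1] == v1 and t[f2] == v2 for t in tests):
                missing.add((f1, v1, f2, v2))
    return missing

# a naive three-test set leaves many pairs uncovered:
naive = [
    {"os": "Windows XP", "browser": "Internet Explorer",
     "processor": "Intel", "database": "MySQL"},
    {"os": "Apple OS X", "browser": "Firefox",
     "processor": "AMD", "database": "Sybase"},
    {"os": "Red Hat Enterprise Linux", "browser": "Chrome",
     "processor": "Intel", "database": "Oracle"},
]
print(len(uncovered_pairs(naive)), "pairs still uncovered")
```

A tool like PICT or ACTS searches for a small test set for which uncovered_pairs() would come back empty.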
© 2014 LogiGear
Grail 3: Specification Level, choosing actions
Scope of the test determines the specification level
As high level as appropriate, with as few arguments as possible
− be generous with default values for arguments
Clear names for actions
− a verb + noun usually works well
− try to standardize both the verbs and the nouns, like "check customer"
versus "verify client" (or vice versa)
Avoid "engineer" styles for names of actions and arguments
− tests are not source code
− like no spaces, uppercase, camel-case or underlines
− in other words: "noha_RDT_oUnderS~tand" names please
Manage and document the Actions
By-product of the test design
© 2014 LogiGear
Actions
By-product of the test design
As generic as possible
Use a verb and a noun, and standardize the
verbs and the nouns
Organize and document
Be generous with default values, so you can
leave out arguments not relevant for the test
module scope
© 2014 LogiGear
Using actions
TEST MODULE Order processing
start system
TEST CASE TC 01 Order for tablets
user password
login jdoe doedoe
window
check window exists welcome
order id cust id article price quantity
create order AB123 W3454X tablet 198.95 5
order id total
check order total AB123 994.75
. . .
© 2014 LogiGear
Low-level, high-level, mid-level actions
"Low level": detailed interaction with the UI (or API)
− generic, do not show any functional or business logic
− examples: "click", "expand tree node", "select menu"
"High level": a business domain operation or check on the
application under test
− hide the interaction
− examples: "enter customer", "rent car", "check balance"
"Mid level": common sequences at a more detailed
application level
− usually to wrap a form or dialog
− for use in high level actions
− greatly enhance maintainability
− example: "enter address fields"
(diagram: a high level action like "enter customer" is built from mid level actions like "enter address fields", which in turn use low level actions like "enter", "select", "set", ...)
© 2014 LogiGear
Identifying controls
Identify windows and controls, and assign names to them
These names encapsulate the properties that the tool can
use to identify the windows and controls when executing the
tests
© 2014 LogiGear
Mapping the interface
An interface mapping (common in test tools) will map windows and
controls to names
When the interface of an application changes, you only have to update
this in one place
The interface mapping is a key step in your automation success, allocate
time to design it well
INTERFACE ENTITY library
interface entity setting title {.*Music Library}
ta name ta class label
interface element title text Title:
interface element artist text Artist:
interface element file size text File size (Kb):
ta name ta class position
interface element playing time text textbox 4
interface element file type text textbox 5
interface element bitrate text textbox 6
ta name ta class position
interface element music treeview treeview 1
© 2014 LogiGear
Tips to make "BIG" automation stable
Make the system under test automation-friendly
− consider this a key requirement ("must have")
− development practices are often a great source of automation
impediments
Don't use hard coded waits
Select and create the right technologies and tools
Pay attention to interface strategies
− like hooks, interface maps, and non-UI testing
Test automation items before running them
− actions, interface mappings, emulators, etc
− in particular when they're complex
Keep an eye on the test design
− test design being a main driver for automation success
© 2014 LogiGear
Hidden interface properties

Use properties a human user can't see, but a test tool can
This approach can lead to speedier and more stable automation
− less need for "spy" tools (which take a lot of time)
− less sensitive to changes in the system under test
− not sensitive to languages and localizations
A "white-box" approach to UI's can also help operate on or verify aspects of
interface elements
Examples:
− "id" attribute for HTML elements
− "name" field for Java controls
− "AccessibleName" or "Automation ID" properties in .Net controls (see below)
© 2014 LogiGear
Mapping the interface using hidden identifiers
Instead of positions or language dependent labels, an internal property
"automation id" has been used
The interface definition will be less dependent on modifications in the UI
of the application under test
If the information can be agreed upon with the developers, for example in
an agile team, it can be entered (or pasted) manually and early on
INTERFACE ENTITY library
interface entity setting automation id MusicLibraryWindow
ta name ta class automation id
interface element title text TitleTextBox
interface element artist text SongArtistTextBox
interface element file size text SizeTextBox
interface element playing time text TimeTextBox
interface element file type text TypeTextBox
interface element bitrate text BitrateTextBox
ta name ta class automation id
interface element music treeview MusicTreeView
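For illustration, a Selenium-flavored sketch of the same mapping idea (assuming Selenium is installed); the logical names and automation ids come from the slide, and only the map needs updating when the UI changes.

```python
# Interface map sketch: symbolic names -> identifying properties,
# kept in one place. The id values are the slide's hypothetical ones.
from selenium import webdriver
from selenium.webdriver.common.by import By

LIBRARY_WINDOW = {
    "title":     (By.ID, "TitleTextBox"),
    "artist":    (By.ID, "SongArtistTextBox"),
    "file size": (By.ID, "SizeTextBox"),
    "music":     (By.ID, "MusicTreeView"),
}

def find(driver, name):
    """Resolve a logical control name via the map; when the UI changes,
    only the map entry needs updating, not the tests."""
    by, value = LIBRARY_WINDOW[name]
    return driver.find_element(by, value)

# usage, assuming an application under test is reachable:
# driver = webdriver.Chrome()
# find(driver, "title").send_keys("My Song")
```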
© 2014 LogiGear
Active Timing

Passive timing
− wait a set amount of time
− in large scale testing, try to avoid passive timing altogether:
• if the wait is too short, the test will be interrupted
• if the wait is too long, time is wasted
Active timing
− wait for a measurable event
− usually the wait is up to a (generous) maximum time
− common example: wait for a window or control to appear (usually the test tool will do
this for you)
Even if not obvious, find something to wait for...
Involve developers if needed
− relatively easy in an agile team, but also in traditional projects, give this priority
If using a waiting loop (see the sketch below)
− make sure to use a "sleep" function in each cycle that frees up the processor (giving the
AUT time to respond)
− wait for an end time, rather than a set number of cycles
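A minimal sketch of such a waiting loop in Python; the condition being probed is a hypothetical lambda supplied by the caller.

```python
import time

def wait_for(condition, max_wait=30.0, interval=0.1):
    """Poll `condition` until it returns True or the deadline passes."""
    deadline = time.monotonic() + max_wait   # an end time, not a cycle count
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)                 # free up the processor for the AUT
    return False

# usage, with a hypothetical probe into the UI:
# ok = wait_for(lambda: main_window.control("create order").is_enabled())
```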
© 2014 LogiGear
Things to wait for...
Wait for the last control or element to load
− developers can help identify which one that is
Non-UI criteria
− API function
− existence of a file
Criteria added in development specifically for this purpose, like:
− "disabling" big slow controls (like lists or trees) until they're done loading
− API functions or UI window or control properties
Use a "delta" approach (see the sketch below):
− every wait cycle, test if there was a change; if no change, assume that the
loading time is over
− examples of changes:
• the controls on a window
• the count of items in a list
• the size of a file (like a log file)
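One possible shape of the delta approach in Python, with a hypothetical probe such as an item count:

```python
import time

def wait_until_stable(probe, max_wait=30.0, interval=0.5):
    """Wait until two consecutive probes return the same value,
    then assume loading has finished."""
    deadline = time.monotonic() + max_wait
    previous = object()            # sentinel, never equal to a real probe value
    while time.monotonic() < deadline:
        current = probe()
        if current == previous:
            return current         # no change this cycle: loading is done
        previous = current
        time.sleep(interval)
    raise TimeoutError("value never stabilized")

# usage: wait_until_stable(lambda: len(list_control.items()))
```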
© 2014 LogiGear
Testability, some key items

Should be a "must have" requirement
− first question in a development project: "how do we test this?"
Identifying properties
Hooks for timing
White-box access to anything relevant:
− input data (ability to emulate)
− output data (what is the underlying data being displayed?)
− random generators (can I set a seed?)
− states (like in a game)
− objects displayed (like monsters in a game)
Emulation features, like time-travel and fake locations
© 2014 LogiGear
Alternatives to UI automation ("non-UI")
Examples
− HTTP and XML based interfaces, like REST
− application programming interfaces (API’s)
− embedded software
− protocols
− files, batches
− databases
− command line interfaces (CLI’s)
− multi-media
− mobile devices
In many cases non-UI automation is needed since there simply is no
UI, but it can also speed things up:
− tends to be more straightforward technically, little effort needed to build up or
maintain
− once it works, it tends to work much faster and more stably than UI automation
− test design principles (like modules and keywords) normally apply to non-UI as well
In BIG testing projects routinely:
− identify which non-UI alternatives are available
− as part of test planning: identify which tests qualify for non-UI automation
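For illustration, a tiny Python sketch of a non-UI check against a hypothetical REST endpoint; the URL and JSON shape are assumptions, not a real interface.

```python
import json
from urllib.request import urlopen

def check_balance_rest(acc_nr, expected):
    """Same check as the UI version, but through a (hypothetical) REST API:
    no browser, no UI timing issues."""
    with urlopen(f"http://localhost:8000/accounts/{acc_nr}/balance") as resp:
        actual = json.load(resp)["balance"]
    assert actual == expected, f"balance {actual} != {expected}"

# check_balance_rest("123123", 30.33)
```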
© 2014 LogiGear
Tools that can help manage BIG projects
Application Lifecycle Management (ALM)
− abundant now, mainly on the wings of agile
− very good for control, team cooperation, and traceability
− often relate to IDE's (like Microsoft TFS and Visual Studio)
− examples: Rally, Jira, MS TFS and VS Online (VSO), HP ALM
Note: test cases are often treated as "work items" in an ALM, but they're also
products that can be executed and need to be managed and maintained
Test Management
− as separate tools they're on their way out, morphing into or replaced by ALM options
− examples: HP Quality Center, Microsoft Test Manager, Atlassian Zephyr, TestArchitect
Test development and automation
− develop and/or automate tests
− examples are HP UFT, Selenium, MS Coded UI, FitNesse, Cucumber, TestArchitect
Continuous build, continuous integration
− server based building of software
− builds can be started in different ways, like triggered by check-ins, scheduled times, etc
− can help run tests automatically, even "pre-flight": meaning a check-in only succeeds if tests pass
− examples: Hudson, Jenkins, TFS, ElectricCommander
Bug trackers
− not only register issues, but also facilitate their follow up, with workflow features
− often also part of other tools, and tend to get absorbed now by the ALMs
− Examples: BugZilla, Mantis, Trac
© 2014 LogiGear
Tooling and Traceability
(diagram: traceability across the tool chain. Requirements and work items, managed by the project manager in the ALM, link to test objectives; test objectives link to test cases inside test modules in the test development tool; the automation tool, execution manager, continuous integration, build verification testing and lab manager produce execution results; execution results trace back to bug items in the issue tracker / ALM, and everything links to code files in the IDE / source control)
© 2014 LogiGear
Test Execution

Have an explicit approach for when and how to execute
which tests
− a good high level test design will help with this
Execution can be selective or integral
− unit tests are typically executed selectively, often automatically based
on code changes in a system like SVN or TFS
− functional tests don't have as obvious relations with code files
− selective execution will be quicker and more efficient, integral
execution may catch more side-effect issues ("bonus bugs")
− consider "random regression" execution of tests

(diagram: unit test code relates directly to the code being tested; functional tests relate to user stories and work items)
© 2014 LogiGear
Versions, environments, configurations
Many factors can influence details of automation
− language, localization
− hardware
− version of the system under test
− system components, like OS or browser
Test design can reflect these
− certain test modules are more general
− others are specific, for example for a language
But for tests that do not care about the differences, the
automation just needs to "deal" with them
− shield them from the tests
(photo caption: "minimum safe distance from a bear is 91 meters"; localization at work, converting yards to meters)
© 2014 LogiGear
"Variations"

Capture variations of the system under test in the actions and interface
definitions, rather than in the tests (unless relevant there).
Can be a feature in a test playback tool, or something you do with a global
variable or setting.

(diagram: a "master switch" selects one of several "variations" of the actions and interface definitions)
© 2014 LogiGear
Possible set up of variations

Specify, for example, in a dialog when you start an execution:
− linked variation
− keyworded variation
© 2014 LogiGear
Test Environments
Physical
• hardware
• infrastructure
• location
• . . .
Software
• programs
• data models
• protocols
• . . .
Data
• initial data
• parameters / tables
• . . .
Considerations:
• costs money
• can be scarce
• configurations
• availability
• manageability
© 2014 LogiGear
Dealing with data
Constructed data is easier to manage
− can use automation to generate it, and to enter it in the environment
− result of test analysis and design, reflecting "interesting" situations
− however, less "surprises": real life situations which were not foreseen
Real-world data is challenging to organize
− make it a project, or task, in itself
− make absolutely sure to deal with privacy, security and legal aspects
appropriately. You may need to "scrub" the data
Consider using automation to select data for a test
− set criteria ("need a male older than 50, married, living in Denver"), query
for matching cases, and select one randomly (if possible a different one
each run)
− this approach will introduce variation and unexpectedness, making
automated tests stronger and more interesting
A separate, fairly recent challenge is testing non-SQL "Big Data"
− apart from testing software, you will also test the data itself, often with
heuristic and fuzzy logic techniques
• see also: "Become a Big Data Quality Hero", Jason Rauen, STARCANADA 2014
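A sketch of this selection idea in Python with SQLite; the table, columns and criteria are made up for illustration.

```python
import random
import sqlite3

def pick_customer(conn):
    """Set criteria, query for matching cases, and pick one at random,
    so each run can use a different case where possible."""
    rows = conn.execute(
        "SELECT id, name FROM customers "
        "WHERE gender = 'M' AND age > 50 "
        "AND status = 'married' AND city = 'Denver'"
    ).fetchall()
    return random.choice(rows)

# conn = sqlite3.connect("testdata.db")
# customer = pick_customer(conn)
```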
© 2014 LogiGear
Virtualization
Virtual machines rather than physical machines
− allow "guest" systems to operate on a "host" system
− host can be Windows, Linux, etc, but also a specialized "hypervisor"
− the hypervisor can be "hosted" or "bare metal"
Main providers:
− VMWare: ESX and ESXi
− Microsoft: Hyper-V
− Oracle/Sun: VirtualBox
− Citrix: Xen (open source)
Hardware support is becoming common now
− processor, chipset, I/O
− like Intel's i7/Xeon
For most testing purposes you need virtual clients, not virtual servers
− most offerings in the market currently target virtual servers, particularly data centers
Virtual clients will become more mainstream with the coming of VM's as part
of regular operating systems
− Windows 8: Hyper-V
− Linux: KVM
© 2014 LogiGear
Virtualization, a tester's dream...
In particular for functional testing
Much easier to define and create needed configurations
− you basically just need storage
− managing this is your next challenge
One stored configuration can be re-used over and over again
The VM can always start "fresh", in particular with
− fresh base data (either server or client)
− specified state, for example to repeat a particular problematic automation
situation
Can take "snapshots" of situations, for analysis of problems
Can use automation itself to select and start/stop suitable VM's
− for example using actions for this
− or letting an overnight or continuous build take care of this
© 2014 LogiGear
Virtualization, bad dream?
Performance, response times, capacities
Virtual machine latency can add timing problems
− see next slide
− can be derailing in big test runs
Management of images
− images can be large, and difficult to store and move around
• there can be many, with numbers growing combinatorially
• configuration in the VM can have an impact, like fixed/growing virtual disks
− distinguish between managed configurations and sandboxes
− define ownership, organize it
− IT may be the one giving out (running) VM's, restricting your flexibility
Managing running tests in virtual machines can take additional efforts
on top of managing the VM's themselves
− with the luxury of having VM's the number of executing machines can
increase rapidly
− one approach: let longer running tests report their progress to a central
monitoring service (various tools have features for this)
© 2014 LogiGear
Virtualization: "time is relative"
Consider this waiting time loop, typical for a test script:
− endTime = currentTime + maxWait
− while not endTime, wait in 100 millisecond intervals
When the physical machine overloads, VM's can get slow or have
drop-outs, and endTime may pass even when the AUT was not slow
− GetLocalTime will suffer from the latency
− GetTickCount is probably better, but known for being unreliable on VM's
Therefore tests that run smoothly on physical machines may not
consistently do so on VM's. The timing problems are not easy to
predict
Possible approaches:
− in general: be generous with maximum wait times if you can
− don't put too many virtual machines on a physical box
− consider a compensation algorithm, for example using both tick count and clock time
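One possible compensation algorithm, sketched in Python: measure elapsed time with both a monotonic ("tick count" style) clock and the wall clock, and use the smaller value, so a jump on one clock does not expire the wait prematurely. An illustration, not a proven recipe.

```python
import time

def wait_compensated(condition, max_wait=30.0, interval=0.1):
    """Poll `condition`, expiring on the smaller of two elapsed-time
    measurements to soften VM clock anomalies."""
    start_mono, start_wall = time.monotonic(), time.time()
    while True:
        if condition():
            return True
        elapsed = min(time.monotonic() - start_mono,
                      time.time() - start_wall)
        if elapsed >= max_wait:
            return False
        time.sleep(interval)    # free up the processor each cycle
```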
© 2014 LogiGear
Virtual machines, capacity
Key to pricing is number of VM's that can run in parallel
on a physical machine
An automated test execution will typically keep a VM
more busy than human use
Factors in determining VM/PM ratio:
− memory, for guest OS, AUT, test tooling
− storage devices (physical devices, not disk images)
− processors, processor cores
− specific hardware support (becoming more common)
• processor, chipset, I/O
− need for high-end graphics
We started regression with 140 VMs.
Very slow performance of
Citrix VM clients.
© 2014 LogiGear
Building up virtualization
Pay attention to pricing:
− beefed up hardware can increase VM's/box ratio, but at a price
− software can be expensive depending on features that you may not need
− graphics cards can be a bottleneck on putting VM's on a physical box
In a large organization, virtual machines are probably
available
− make sure to allocate timely
− keep in mind the capacity requirements
Logical and physical management
− which images: the wealth of possible images can quickly make it hard to
see the forest for the trees
− physical management of infrastructure is beyond this tutorial
Minimum requirement: snapshots/images
− freeware versions don't always carry this feature
− allow to set up: OS, environment, AUT, tooling, but also: data, states
© 2014 LogiGear
Servers
Test execution facilities tend to become a bottleneck very quickly in big
testing projects
Servers with virtual machines on them are an easy step up, but
require some organization and management
Allowing execution separately from the machines the testers and
automation engineers are working on increases scalability
Large scale test execution, in particular with VM's, needs room to grow:
First step up: give team members a second machine
Second step up: use locally placed servers, users coordinate their
use of them
Third step up: major infrastructures with organized allocation
© 2014 LogiGear
Tower Servers
Smaller shops (smaller companies, departments)
Affordable, simple, first step up from clients execution
Not very scalable when the projects get larger
© 2014 LogiGear
Rack Servers
Well scalable
Pricing not unlike tower servers
Tend to need more mature IT expertise
© 2014 LogiGear
Server Blades
Big league infrastructure, high density, very scalable
Tends to be pricey, use when space and energy matter
Usually out of sight for you and your team
© 2014 LogiGear
Cloud
Cloud can be target of testing
− normal tests, plus cloud specific tests
• functional, load, response times
− from multiple locations
− moving production through data centers
Cloud can be host of test execution
− considerations can be economical or organizational
− providers offer imaging facilities, similar to virtual machines
− make sure machines are rented and returned efficiently
− IaaS (Infrastructure as a Service): you have to configure
− PaaS (Platform as a Service): some configuration, like OS and DBMS already included
Public cloud providers like EC2 and Azure offer API's, so your
automation can automatically allocate and release them
− be careful, software bugs can have cost consequences
− for example, consider having a second automation process to double-check that cloud
machines have been released after a set time (see the sketch below)
Amazon is a market leader, but Microsoft is pushing Azure very hard
− embracing non-MS platforms
− focusing on "hybrid" solutions, where "on prem" and cloud work together
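A sketch of such a watchdog using boto3 against Amazon EC2; the tag name and the four hour limit are assumptions, and in practice you may want to alert rather than blindly terminate.

```python
from datetime import datetime, timedelta, timezone
import boto3

MAX_AGE = timedelta(hours=4)   # assumed upper bound for a test run

def reap_forgotten_test_machines():
    """Terminate tagged test instances that have run past the cutoff;
    run this as a second, independent process or on a schedule."""
    ec2 = boto3.resource("ec2")
    cutoff = datetime.now(timezone.utc) - MAX_AGE
    candidates = ec2.instances.filter(Filters=[
        {"Name": "tag:purpose", "Values": ["test-run"]},      # assumed tag
        {"Name": "instance-state-name", "Values": ["running"]},
    ])
    for instance in candidates:
        if instance.launch_time < cutoff:
            print("terminating forgotten instance", instance.id)
            instance.terminate()
```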
© 2014 LogiGear
Cloud providers - Gartner
(figure: Gartner Magic Quadrant, plotting "ability to execute" against "completeness of vision", with quadrants Leaders, Challengers, Visionaries and Niche Players; source: Magic Quadrant for Cloud Infrastructure as a Service, Gartner, 2014)
© 2014 LogiGear
Cloud growth

(figure: cloud growth projections; source: IDC)
© 2014 LogiGear
Cloud, example pricing, hourly rates
Source: Amazon EC2, my interpretation, actual prices may vary
Configuration "m3": fixed performance

size | mem (GB) | cpu | storage (GB) | price ($/hour)
medium | 3.75 | 1 | 4 | 0.13
large | 7.5 | 2 | 32 | 0.27
xlarge | 15 | 4 | 80 | 0.53
2xlarge | 30 | 8 | 160 | 1.06
© 2014 LogiGear
Cloud, example economy

Very simplified, for example not counting:
− possible use of VM's within the buy option
− graphics cards coming with the buy options
Also not counting: additional cost of ownership elements for owning or
cloud (like IT management, contract and usage management)
Impressions:
− cloud could fit well for bursty testing needs, which is often the case
− for full continuous, or very frequent, testing: consider buying (for example rack servers)
− hybrid models may fit many big-testing situations: own a base capacity, rent more during
peak use periods (for Azure this is now a core strategy)

size | medium | large | extra | BIG
per hour ($) | 0.13 | 0.27 | 0.53 | 1.06
buy (est, $) | 300 | 500 | 800 | 1,100
hours to break even | 2,308 | 1,852 | 1,509 | 1,038
months | 3.1 | 2.5 | 2.1 | 1.4

(hours to break even = buy price / hourly rate; months assume round-the-clock use)
© 2014 LogiGear
Cloud on demand? Organize it!
You're spending money, therefore decide who can do
what (don't forget to limit yourself too)
Have a "test production planning" process
Have a budget
Have ownership
Use available policy features to limit usage in time and
quantity
Obtain and read production reporting, compare to plan
and budget
Minimize the need (for example "last test round only")
Have and try to use on-prem and hybrid alternatives
Start small, learn
© 2014 LogiGear
Data centers can go down
However, disruption could have been minimized by using multiple data centers
© 2014 LogiGear
Data centers can go down
This time, it did involve multiple data centers . . .
© 2014 LogiGear
Data centers can go down
Service providers can occasionally go down too
© 2014 LogiGear
Cloud, usage for special testing needs
Multi-region testing
− Amazon for example has several regions
• US East, Northern Virginia
• US West, Oregon, Northern California
• EU, Ireland
• Asia Pacific, Singapore, Tokyo
• South America, Sao Paulo
− be careful: data transfers between regions cost money
($0.01/GB)
Load generation
− example: "JMeter In The Cloud"
• based on the JMeter load test tool
• uses Amazon AMI's for the slave machines
• allows distributing the AMI's over the different regions of Amazon
• see more here:
aws.amazon.com/amis/jmeter-in-the-cloud-a-cloud-based-load-testing-environment
© 2014 LogiGear
Questions for Infrastructure
What kind of infrastructure does
your organization use for
testing?
What is the role of
virtualization, now or in the
future?
Are you using a private or a
public cloud for testing?
© 2014 LogiGear
Testing big and complicated stuff

For example complex cloud architectures and/or large and complex multi-player games

(images: Windows Azure Reference Platform, Unvanquished (open source game))
© 2014 LogiGear
Approaches
Define testing and automation as business opportunities:
− better testing can mean less risks and problems, and more quality
perception
− robust automation results in faster time-to-market, and more flexibility
− the bigger and more complex the testing, the more attention it needs
Follow a testability and DevOps approach in projects:
− include "how do I test" right from the start of development, both test
design and automation (including white-box approaches)
− plan "operation" of test runs, like allocation of resources
Consider Testing in Production* approaches, like:
− A/B testing
− continuous testing with random regression testing or monkey testing
− but please don't forget about test design (think first, then make
decisions)
*see also: Ken Johnston's chapter in the book by Dorothy Graham and Mark Fewster, and his keynote at STARWEST 2012
© 2014 LogiGear
A/B testing with a reverse proxy

A/B testing means part of the traffic is routed through a different
server or component (to see if it works, and/or how users react)
B could be a real-life user or also a keyword driven test machine
A similar strategy could be done at any component level
Watch your test design, it is easy to drown in technical solutions only

(diagram: users reach a reverse proxy; most traffic goes to the current "A" servers, a part goes to the new "B" server)
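A toy Python sketch of the routing idea: a reverse proxy that sends a fraction of GET requests to the new "B" server. The backend addresses and the 10% split are assumptions, and a real proxy would also forward headers, errors and status codes.

```python
import random
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

BACKENDS = {"A": "http://localhost:8001",   # current servers
            "B": "http://localhost:8002"}   # new server under evaluation

class ABProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        variant = "B" if random.random() < 0.10 else "A"
        with urlopen(BACKENDS[variant] + self.path) as resp:
            body = resp.read()
        self.send_response(200)
        self.send_header("X-Variant", variant)  # lets tests see which path served them
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), ABProxy).serve_forever()
```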
© 2014 LogiGear
Organization
Much of the success is gained or lost in how you organize the
process
− who owns which responsibility (in particular to say "no" to a release)
− separate, integrated teams, or both
− who does test design, who does automation
− what to outsource, what to keep in-house
Write a plan of approach for the test development and automation
− scope, assumptions, risks, planning
− methods, best practices
− tools, technologies, architecture
− stakeholders, including roles and processes for input and approvals
− team
− . . .
Assemble the right resources
− testers, lead testers
− automation engineer(s)
− managers, diplomats, ...
Test design is a skill . . .
Automation is a skill . . .
Management is a skill . . .
. . . and those skills are different
© 2014 LogiGear
Team roles, examples
Testing, test development
− test analysis, test creation
− reporting, result analysis and follow up, assessments
Automation
− functional navigation, technical automation
Test execution planning and management
Environments and infrastructure
Management and direction
− process, contents, practices, handling impediments
− handling "politics and diplomacy" *
*see my STARCANADA presentation on "left out in the cold"
© 2014 LogiGear
Think Industrial . . .
Large scale testing needs a "design" and a
"production" focus
− emphasis more on delivery and scale, "thinking big"
− no-nonsense rather than creativity: "get stuff done"
Examples of tasks/responsibilities
− keeping the tests running
− plan and manage resources
− respond to hiccups
− analyze and address automation issues
− address fails or other testing outcomes
GIT-R-DONE
YOU TESTERS!
© 2014 LogiGear
Stakeholders

INTERNAL: Test Development, Test Automation, Technology/Infrastructure, Production, Marketing/Sales, System Development, End User Departments, Quality Assurance, Management, After Sales/Help Desk
EXTERNAL: Customers, Vendors, Government Agencies, Publicity
© 2014 LogiGear
ABT in Agile
(diagram: the agile life cycle feeds test development. From the product backlog and user stories, documentation, domain understanding, acceptance criteria, PO questions, situations and relations, the product owner does the (optional) test module definition; the team does test module development, interface definition and action automation; product owner & team do the test execution. Sprint products include main level test modules, interaction test modules and cross-over test modules, with re-use of tests and automation)
© 2014 LogiGear
Using ABT in Sprints (1)
Aim for "sprint + zero", meaning: try to get test
development and automation "done" in the same sprint,
not the next one
− next one means work clutters up, part of team is not working on the
same sprint, work is done double (manually and automated), ...
Agree on the approach:
− questions like does "done" include tests developed and automated?
− do we see testing and automation as distinguishable tasks and
skillsets?
− is testability a requirement for the software?
© 2014 LogiGear
Using ABT in Sprints (2)
Just like for development, use discussions with the team
and product owners
− deepen understanding, for the whole team
− help identify items like negative, alternate and unexpected situations
Start with the main test modules, that address the user
stories and acceptance criteria
− try to keep the main test modules at a similar level as those stories
and criteria
− test modules can double as a modeling device for the sprint
Plan for additional test modules:
− low-level testing of the interaction with the system under test (like
UIs)
− crossing over to other parts of the system under test
56
© 2014 LogiGear
Using ABT in Sprints (3)
One approach to consider: daily "sit down" meetings with
some or all team members, to coach and evaluate
− an end-of-day counterpart to the early-morning "stand up" meetings
− short and friendly, not about progress and impediments, but about practices and
experiences with them (like "what actions did you use?")
− a few meetings may suffice
Create good starting conditions for a sprint:
− automation technology available (like hooks, calling functions, etc)
− how to deal with data and environments
− understanding of subject matter, testing, automation, etc
Do interface mapping by hand, using developer-provided
identifiers
− saves time by not having to use the viewer or other spy tools
− recording of actions (not tests) will go better
Tip
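As an illustration of such a hand-written map (this is a hedged sketch, not TestArchitect's own interface definition format; it assumes Selenium-style automation, and the ids are hypothetical ones agreed with the developers):

    # Hand-written interface map: logical control names resolve to
    # developer-provided ids, so no spy/viewer tool is needed to build it.
    LOGIN_WINDOW = {
        "user name": "login-username",
        "password": "login-password",
        "ok button": "login-ok",
    }

    def locate(driver, window_map: dict, logical_name: str):
        """Resolve a logical control name to a live element (Selenium-style)."""
        return driver.find_element("id", window_map[logical_name])

When a control changes, only its map entry is updated; tests and actions that use the logical name stay untouched.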
© 2014 LogiGear
Testing as a profession
Focus on tests, not development:
− what the consequences of situations and events can be
− relieve developers
The challenge for the tester in the new era is to become a more
credible professional tester:
− not a pseudo programmer
− part of the team
− with knowledge of and experience with testing techniques and principles
Forcing a nontechnical tester to become a programmer may lose a
good tester and gain a poor programmer
Forcing a good developer to become a tester may lose a good
developer and gain a poor tester
− a good developer who is working on an airplane control system is also not
necessarily a good airline pilot
57
© 2014 LogiGear
Automation is a profession too
Overlaps with regular system development, but is not the same
Less concerned with complex code structures or algorithms
More concerned with navigating through other software
efficiently, dealing with control classes, obtaining information,
timing, etc
− if you compare developers to "creators", automation engineers might
be likened to "adventurers"...
The automation engineer can also act as a consultant:
− for test developers: help express tests efficiently
− for system developers: how to make a system more automation friendly
− an important player in innovation in automated testing
© 2014 LogiGear
Globalization....
58
© 2014 LogiGear
Globalization
Three Challenges:
− other countries, other cultures
− geographic distances
− time differences
Seven "Patterns":
− "Solution"
− "Push Back"
− "Time Pressure"
− "Surprises"
− "Ownership"
− "Mythical Man Month"
− "Cooperation"
© 2014 LogiGear
Challenge: Other Country
59
© 2014 LogiGear
Other Country
Differences in culture
− more on the next slide...
Different languages, and accents
Differences in education
− style, orientation and contents
− position of critical thinking, factual knowledge, practice, theory,...
− US, British, French, Asian, ...
Differences in circumstances
− demographics
− economy, infrastructure
− politics
Apprehension on-shore and off-shore about job security doesn't help in
projects
− management responsibility: understand your strategic intentions, and their consequences, and clarify
them
− be realistic in cost and benefit expectations
© 2014 LogiGear
More on Culture...
Regional culture:
− very difficult to make general statements
• many anecdotes, stories and perceptions; some are very helpful, some have limited general
value
• the impact of regional culture is not certain (see also [Al-Ani])
− numerous factors, like history, religion, political system
• e.g. valuing of: critical thinking, theory, bottom-line, relations, status, work-ethic, bad news,
saying 'no'
• entertaining guests, eating habits, alcohol, meat, humor, etc
• position of leaders, position of women managers
• mistakes can be benign and funny, but also damaging, visibly or hidden, in particular perceived
disrespect hurts
Organizational culture
− can be different from country to country, sector to sector, company to company, group to group
− I feel this to be at least as strong as regional culture (see for example [Al-Ani])
− you can have at least some control over this
Professional cultures
− for example engineers, QA, managers, ...
Some ideas to help:
− get to know each other (it helps, see for example [Gotel])
− study the matter, and make adaptations
60
© 2014 LogiGear
61
© 2014 LogiGear
62
© 2014 LogiGear
Different countries . . .
© 2014 LogiGear
Challenge: Distance
63
© 2014 LogiGear
Distance
Continuous logistical challenges
Lots of costs and disruptions from traveling
Distance creates distrust and conflict
− could be "normal" behavior, inherent to humans
Complex coordination can create misunderstandings
− on technical topics
− on actions, priorities, and intentions
© 2014 LogiGear
Challenge: Time difference
64
© 2014 LogiGear
Challenge: Time difference
Additional complication for communication and
coordination
Places a major burden on both on-shore and off-shore
staff
− having to work evenings and/or early mornings
− potential for exhaustion, lack of relaxation, mistakes, irritation
Can easily lead to loss of time at critical moments
Some solutions:
− manage this actively
− constantly seek to optimize task and responsibility allocation
− build the on-shore and off-shore organizations to match
− seek ways to save meeting time, like optimal information handling
© 2014 LogiGear
Effect of time difference
Report from the team to the US management . . .

Performance comparison TestArchitect 5 and 6
Test Module: “Segment Y, Default Settings”

                  Windows     Linux
TestArchitect 5   ~ 4:16 m    ~ 4:28 m
TestArchitect 6   ~ 11:00 m   ~ 8:00 m
65
© 2014 LogiGear
Patterns
Experiences seem to follow patterns
− at least our own experiences do
− variations are numerous, but seem to follow similar lines
− the following are examples, not an exhaustive list
It can help to recognize patterns quickly, and act upon
them
Resolutions have side-effects, can introduce new issues
− for example strengthening local management means less direct
contact with the project members doing the work
Just about every pattern occurs in every direction
− from your perspective regarding "them"
− their perspective on you, or each other
− sometimes matching, sometimes mirroring
© 2014 LogiGear
Pattern: "The Solution"
Typical sequence of events:
− the team finds a problem in running a test
− the team discusses it and comes up with a "solution"
− the solution: (1) creates issues, and (2) hides the real
problem
Better way:
− First:
• clearly define the issue
• discuss with project manager and customer
− Only then:
• resolve it
• enjoy the gratitude bestowed upon you ☺
66
© 2014 LogiGear
Pattern: "Push Back"
US side, or customer, gives bad direction
Team doesn't like it, but feels obliged to follow orders
The result is disappointing
Team is blamed
− and will speak up even less next time
Better way:
− discuss with the principal/customer at multiple levels
• strategic about direction, operational day-to-day
− empower and encourage the team to speak up
− write plans of approach, and reports
© 2014 LogiGear
Pattern: "Time Pressure"
Deadline must be met
− no matter what
− use over-time
− "failure is not an option"
Deadlines are sometimes real, sometimes not
− can become a routine on the US side
− easy to apply pressure over email
− very difficult for a non-empowered team to push back
− risk: inflation of urgency
Better way:
− good planning
− proper weighing of deadlines and priorities
− frequent reporting
− local management
67
© 2014 LogiGear
Pattern: "Surprises"
Good news travels better than bad news...
− should be the other way around
− the "cover up": "let's fix, no need to tell...."
− over time: needing bigger cover ups to conceal
smaller ones
− not unique to off-shoring, but more difficult to
detect and deal with
Once a surprise happens:
− you will feel frustrated, and betrayed
− fix the problems, point out the consequences of
hiding, avoid screaming and flaming
Better ways:
− agree: NO SURPRISES!!
− emphasize again and again
− train against this
− continuously manage, point out
− the magic word: transparency
© 2014 LogiGear
Pattern: "Ownership"
Shared responsibility is no responsibility
Effort-based versus result-based
On-shore players feel the off-shore team has a result responsibility
Off-shore team members feel an effort-based responsibility ("work
hard")
Better way:
− clear responsibilities and expectations
− on-shore ownership for quality control of system under test
• and therefore the tests
− off-shore ownership of producing good tests and good automation
− empower according to ownership
68
© 2014 LogiGear
Pattern: "Mythical Man Month"
Fred Brooks' classic book, "The Mythical Man-Month":
− "Assigning more programmers to a project running behind schedule
will make it even later"
− "The bearing of a child takes nine months, no matter how many
women are assigned"
− in particular in automation it is easy to end up with a large pile of badly
designed tests, which is then difficult to scale and maintain (or even to
get rid of)
In test automation, there must be clear ownership of:
− test design (not just cranking out test cases)
− automation, which is a different skill and interest
Assign at least the following roles:
− project lead: owns quality and schedule
− test lead: owns test design, coaches and coordinates the other testers
− automation lead: owns making the actions work (assuming ABT; not the test
cases)
Define distinct career paths in: testing, automation,
management
© 2014 LogiGear
Pattern: "Cooperation"
Communication is tedious, takes a long time
Questions, questions, questions, ...
− reverse: questions don't get answered
For at least one side this happens in private time, which is extra annoying
Misunderstandings, confusion, actions not followed up
− double check apparent "crazy things" with the team before jumping to conclusions, and
actions (assume the other side is not "nuts" or "dumb"...)
Please understand: distance fosters conflicts
− we're born that way, and we can't ignore it
Better ways:
− remember respect
− prioritize training, coaching, preparation and planning. Saves a lot of questions...
− write stuff down, use briefs, minutes
− define workflows and information flows
• buckets, reporting, select and use good tools
− specialize meetings
• table things for in-depth meetings
• ask to meet internally first
− be quick, no more than 30 mins
69
© 2014 LogiGear
Training as a tool
Many areas, big pay-offs:
− system under test
− subject matter under test, domain knowledge
− methods, best practices
− technologies, tools, ...
− processes
− soft skills, like creativity, critical thinking, management, ...
− language
− cross-cultural
Have exams
− think about the consequences of passing and failing
− people pay more attention when they know they will get tested
− you will know whether you were understood
Have coaching and train-the-trainers
− more experienced people help newbies
− also runs a risk: bad habits can creep in and propagate
− encourage "tribal knowledge": learning by osmosis, water cooler conversations
− consider "special interest groups" (SIGs)
Rule of thumb for off-shore teams: hire for technical knowledge, train for business
knowledge
The on-shore staff needs training and coaching too, to stay on par
© 2014 LogiGear
Additional ideas and experiences
Go there, be with the team
− also experience yourself how "your side" comes across there
− I go about twice per year
Manage ownership
− the distinction between efforts and results ("efforts are good, results are better")
Provide clear direction, constant attention and coaching
Supervise, supervise, supervise
− but don't micromanage; the other side should have ownership
Ask to create example products (like ABT test modules and actions), review
these carefully, and use as direction for subsequent work
Leadership style: participative styles seem most common (as opposed to
consensus or authoritative, see also [Al-Ani])
Organize informal/fun events, provide a good environment
− solidify the group, improve retention
− include visiting US staff, this tends to do a lot of good ("priceless")
Manage expectations
− stuff takes time and energy
− differences can be addressed, but not 100%; not everybody likes cake...
70
© 2014 LogiGear
Outsourcing and Agile
If done well, agile can provide relief from a lot of the patterns. Several
models are possible, for example:
Model 1: Full team outsourcing
− development, testing and automation
− automated tests can be positioned as part of the delivery
Model 2: Integrated team:
− needs online tool like Jira or Rally
− you must have shared meetings
− advantage: more project time
Model 3: "2nd unit"
− off-shore team works under control of one or more sprint team members
Model 4: Test Production and management
− off-shore team takes the deliveries of the primary team, creates/automates more tests,
and executes and maintains them
© 2014 LogiGear
Summary
Not all "big project" challenges are the same
Think before you do. Best results come from planning
well, and combining effective concepts, tricks and tools
Consider tests and automation as products
Teamwork is key for short term and long term success
There are many options for infrastructure, but keep an
eye on economy and planning
Off-shoring can help scale up, but needs attention to do it
right, in particular communication
71
© 2014 LogiGear
Homework . . .
1. Testing Computer Software, Cem Kaner, Hung Nguyen, Jack Falk, Wiley
2. Lessons Learned in Software Testing, Cem Kaner, James Bach, Bret Pettichord, Wiley
3. Experiences of Test Automation, Dorothy Graham, Mark Fewster, Addison Wesley, 2012
4. Automating Software Testing, Dorothy Graham, Mark Fewster, Addison Wesley
5. "Build a Successful Global Training Program", Michael Hackett, www.logigear.com
6. Action Based Testing (overview article), Hans Buwalda, Better Software, March 2011
7. Action Figures (on model-based testing), Hans Buwalda, Better Software, March 2003
8. Integrated Test Design & Automation, Hans Buwalda, Dennis Janssen and Iris Pinkster, Addison Wesley
9. Soap Opera Testing (article), Hans Buwalda, Better Software Magazine, February 2005
10. Testing with Action Words, Abandoning Record and Playback, Hans Buwalda, Eurostar 1996
11. QA All Stars, Building Your Dream Team, Hans Buwalda, Better Software, September 2006
12. The 5% Solutions, Hans Buwalda, Software Test & Performance Magazine, September 2006
13. Happy About Global Software Test Automation, Hung Nguyen, Michael Hackett, et al., Happy About
14. Testing Applications on the Web, Hung Nguyen, Robert Johnson, Michael Hackett, Wiley
15. Practical Combinatorial Testing, Richard Kuhn, Raghu Kacker, Yu Lei, NIST, October, 2010
16. JMeter in the Cloud, Jörg Kalsbach, http://aws.amazon.com/amis/2924
17. Using Monkey Test Tools, Noel Nyman, STQE issue January/February 2000
18. High Volume Test Automation, Cem Kaner, Walter P. Bond, Pat McGee, STARWEST 2004
19. Descriptive Analysis of Fear and Distrust in Early Phases of GSD Projects, Arttu Piri, Tuomas Niinimäki, Casper
Lassenius, 2009 Fourth IEEE International Conference on Global Software Engineering [Piri]
20. Quality Indicators on Global Software Development Projects: Does 'Getting to Know You' Really Matter? Olly Gotel, Vidya Kulkarni,
Moniphal Say, Christelle Scharff, Thanwadee Sunetnanta, 2009 Fourth IEEE International Conference on Global Software Engineering
[Gotel]
21. Become a Big Data Quality Hero, Jason Rauen, StarCanada 2014 [Rauen]
22. Resources on Exploratory Testing, Metrics, and Other Stuff, Michael Bolton's site, www.developsense.com/resources
23. When Testers Feel Left Out in the Cold, Hans Buwalda, STARCANADA 2014
More Related Content

PDF
The Challenges of BIG Testing: Automation, Virtualization, Outsourcing, and More
PDF
Introducing Keyword-Driven Test Automation
PDF
The Leaders Guide to Getting Started with Automated Testing
PDF
The Challenges of BIG Testing: Automation, Virtualization, Outsourcing, and More
PPTX
Top 10 Qualities of a QA Tester
PDF
The agile way: the complete guide to understanding agile methodologies
PDF
Tackling software testing challenges in the agile era
PDF
Introducing Keyword-driven Test Automation
The Challenges of BIG Testing: Automation, Virtualization, Outsourcing, and More
Introducing Keyword-Driven Test Automation
The Leaders Guide to Getting Started with Automated Testing
The Challenges of BIG Testing: Automation, Virtualization, Outsourcing, and More
Top 10 Qualities of a QA Tester
The agile way: the complete guide to understanding agile methodologies
Tackling software testing challenges in the agile era
Introducing Keyword-driven Test Automation

What's hot (20)

PDF
Introducing Keyword-driven Test Automation
PPT
! Testing for agile teams
PPT
Michael Bolton - Heuristics: Solving Problems Rapidly
PDF
[HCMC STC Jan 2015] How To Work Effectively As a Tester in Agile Teams
PDF
software testing for beginners
PDF
No more excuses QASymphony
PDF
TestPRO Profile v4.1
PPTX
Let's focus more on Quality and less on Testing by Joel Montvelisky
PDF
[HCMC STC Jan 2015] Practical Experiences In Test Automation
PPTX
B4 u solution_writing test cases from user stories and acceptance criteria
PPTX
Writing test cases from user stories and acceptance criteria
PPTX
Is Test Planning a lost art in Agile? by Michelle Williams
PPTX
Fundamentals of testing
PDF
STLDODN - Agile Testing in a Waterfall World
PDF
Measurement and Metrics for Test Managers
PPTX
Agility is the tool gilb vilnius 9 dec 2013
PPTX
Evolve or Die: Healthcare IT Testing | QASymphony Webinar
PDF
Better Test Designs to Drive Test Automation Excellence
PDF
The Survey Says: Testers Spend Their Time Doing...
PPT
TestIT Software Assurance
Introducing Keyword-driven Test Automation
! Testing for agile teams
Michael Bolton - Heuristics: Solving Problems Rapidly
[HCMC STC Jan 2015] How To Work Effectively As a Tester in Agile Teams
software testing for beginners
No more excuses QASymphony
TestPRO Profile v4.1
Let's focus more on Quality and less on Testing by Joel Montvelisky
[HCMC STC Jan 2015] Practical Experiences In Test Automation
B4 u solution_writing test cases from user stories and acceptance criteria
Writing test cases from user stories and acceptance criteria
Is Test Planning a lost art in Agile? by Michelle Williams
Fundamentals of testing
STLDODN - Agile Testing in a Waterfall World
Measurement and Metrics for Test Managers
Agility is the tool gilb vilnius 9 dec 2013
Evolve or Die: Healthcare IT Testing | QASymphony Webinar
Better Test Designs to Drive Test Automation Excellence
The Survey Says: Testers Spend Their Time Doing...
TestIT Software Assurance
Ad

Viewers also liked (17)

PPTX
Software Testing’s Future—According to Lee Copeland
PDF
Disrupting Ourselves: Moving to a “Teal Organization” Model
PDF
Don’t Make These Scrum Mistakes
PDF
Apply Phil Jackson’s Coaching Principles to Build Better Agile Teams
PDF
The Tester's Role in Agile Planning
PDF
The Soft Skills of Great Software Developers
PDF
Static Testing: We Know It Works, So Why Don’t We Use It?
PDF
Command Query Responsibility Segregation at Enterprise Scale
PDF
Experiments: The Good, the Bad, and the Beautiful
PDF
Move Your Selenium Testing to the Cloud
PDF
White Box Testing: It’s Not Just for Developers Any More
PDF
Which Agile Scaling Framework Is Best?
PDF
Great Business Analysts “Think Like a Freak”
PDF
Predictive Test Planning to Improve System Quality
PDF
Continuous Testing - The New Normal
PDF
How to Build a Fully Open Source Test Automation Framework
PDF
Agile QA & Test: A Shift in Mindset from Finding to Preventing Bugs
Software Testing’s Future—According to Lee Copeland
Disrupting Ourselves: Moving to a “Teal Organization” Model
Don’t Make These Scrum Mistakes
Apply Phil Jackson’s Coaching Principles to Build Better Agile Teams
The Tester's Role in Agile Planning
The Soft Skills of Great Software Developers
Static Testing: We Know It Works, So Why Don’t We Use It?
Command Query Responsibility Segregation at Enterprise Scale
Experiments: The Good, the Bad, and the Beautiful
Move Your Selenium Testing to the Cloud
White Box Testing: It’s Not Just for Developers Any More
Which Agile Scaling Framework Is Best?
Great Business Analysts “Think Like a Freak”
Predictive Test Planning to Improve System Quality
Continuous Testing - The New Normal
How to Build a Fully Open Source Test Automation Framework
Agile QA & Test: A Shift in Mindset from Finding to Preventing Bugs
Ad

Similar to The Challenges of BIG Testing: Automation, Virtualization, Outsourcing, and More (20)

PDF
The Challenges of BIG Testing: Automation, Virtualization, Outsourcing, and More
PDF
When Testers Feel Left Out in the Cold
PDF
Top 5 Pitfalls of Test Automation and How To Avoid Them
PPTX
'BIG Testing' with Hans Buwalda
PDF
Test Automation: Investment Today Pays Back Tomorrow
PDF
How to build confidence in your release cycle
PDF
Universal test solutions customer testimonial 10192013-v2.3
PDF
Future of Test Automation with Latest Trends in Software Testing.pdf
PDF
Webinar - Design Thinking for Platform Engineering
PDF
Introducing Keyword-driven Test Automation
PDF
Atagg2015 Where testing is moving in agile cloud world!
PDF
Future of Test Automation with Latest Trends in Software Testing.pdf
PDF
Test i agile projekter af Gitte Ottosen, Sogeti
DOCX
sutapa_resume
PDF
The Tester’s Role: Balancing Technical Acumen and User Advocacy
PDF
Improving ROI with Scriptless Test Automation
PDF
Tune Agile Test Strategies to Project and Product Maturity
PDF
Automated vs.pdf
PDF
Introducing Keyword-Driven Test Automation
PDF
Why Automation Fails—in Theory and Practice
The Challenges of BIG Testing: Automation, Virtualization, Outsourcing, and More
When Testers Feel Left Out in the Cold
Top 5 Pitfalls of Test Automation and How To Avoid Them
'BIG Testing' with Hans Buwalda
Test Automation: Investment Today Pays Back Tomorrow
How to build confidence in your release cycle
Universal test solutions customer testimonial 10192013-v2.3
Future of Test Automation with Latest Trends in Software Testing.pdf
Webinar - Design Thinking for Platform Engineering
Introducing Keyword-driven Test Automation
Atagg2015 Where testing is moving in agile cloud world!
Future of Test Automation with Latest Trends in Software Testing.pdf
Test i agile projekter af Gitte Ottosen, Sogeti
sutapa_resume
The Tester’s Role: Balancing Technical Acumen and User Advocacy
Improving ROI with Scriptless Test Automation
Tune Agile Test Strategies to Project and Product Maturity
Automated vs.pdf
Introducing Keyword-Driven Test Automation
Why Automation Fails—in Theory and Practice

More from TechWell (20)

PDF
Failing and Recovering
PDF
Instill a DevOps Testing Culture in Your Team and Organization
PDF
Test Design for Fully Automated Build Architecture
PDF
System-Level Test Automation: Ensuring a Good Start
PDF
Build Your Mobile App Quality and Test Strategy
PDF
Testing Transformation: The Art and Science for Success
PDF
Implement BDD with Cucumber and SpecFlow
PDF
Develop WebDriver Automated Tests—and Keep Your Sanity
PDF
Ma 15
PDF
Eliminate Cloud Waste with a Holistic DevOps Strategy
PDF
Transform Test Organizations for the New World of DevOps
PDF
The Fourth Constraint in Project Delivery—Leadership
PDF
Resolve the Contradiction of Specialists within Agile Teams
PDF
Pin the Tail on the Metric: A Field-Tested Agile Game
PDF
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
PDF
A Business-First Approach to DevOps Implementation
PDF
Databases in a Continuous Integration/Delivery Process
PDF
Mobile Testing: What—and What Not—to Automate
PDF
Cultural Intelligence: A Key Skill for Success
PDF
Turn the Lights On: A Power Utility Company's Agile Transformation
Failing and Recovering
Instill a DevOps Testing Culture in Your Team and Organization
Test Design for Fully Automated Build Architecture
System-Level Test Automation: Ensuring a Good Start
Build Your Mobile App Quality and Test Strategy
Testing Transformation: The Art and Science for Success
Implement BDD with Cucumber and SpecFlow
Develop WebDriver Automated Tests—and Keep Your Sanity
Ma 15
Eliminate Cloud Waste with a Holistic DevOps Strategy
Transform Test Organizations for the New World of DevOps
The Fourth Constraint in Project Delivery—Leadership
Resolve the Contradiction of Specialists within Agile Teams
Pin the Tail on the Metric: A Field-Tested Agile Game
Agile Performance Holarchy (APH)—A Model for Scaling Agile Teams
A Business-First Approach to DevOps Implementation
Databases in a Continuous Integration/Delivery Process
Mobile Testing: What—and What Not—to Automate
Cultural Intelligence: A Key Skill for Success
Turn the Lights On: A Power Utility Company's Agile Transformation

Recently uploaded (20)

PDF
Bridging biosciences and deep learning for revolutionary discoveries: a compr...
PPTX
VMware vSphere Foundation How to Sell Presentation-Ver1.4-2-14-2024.pptx
PDF
NewMind AI Monthly Chronicles - July 2025
PDF
Spectral efficient network and resource selection model in 5G networks
PPTX
Digital-Transformation-Roadmap-for-Companies.pptx
PDF
How UI/UX Design Impacts User Retention in Mobile Apps.pdf
PPTX
MYSQL Presentation for SQL database connectivity
PDF
Unlocking AI with Model Context Protocol (MCP)
PDF
Modernizing your data center with Dell and AMD
PDF
NewMind AI Weekly Chronicles - August'25 Week I
PPT
“AI and Expert System Decision Support & Business Intelligence Systems”
PDF
Approach and Philosophy of On baking technology
PDF
Mobile App Security Testing_ A Comprehensive Guide.pdf
PDF
Machine learning based COVID-19 study performance prediction
PDF
Review of recent advances in non-invasive hemoglobin estimation
PPTX
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
PDF
Build a system with the filesystem maintained by OSTree @ COSCUP 2025
PPTX
Understanding_Digital_Forensics_Presentation.pptx
PDF
The Rise and Fall of 3GPP – Time for a Sabbatical?
PDF
KodekX | Application Modernization Development
Bridging biosciences and deep learning for revolutionary discoveries: a compr...
VMware vSphere Foundation How to Sell Presentation-Ver1.4-2-14-2024.pptx
NewMind AI Monthly Chronicles - July 2025
Spectral efficient network and resource selection model in 5G networks
Digital-Transformation-Roadmap-for-Companies.pptx
How UI/UX Design Impacts User Retention in Mobile Apps.pdf
MYSQL Presentation for SQL database connectivity
Unlocking AI with Model Context Protocol (MCP)
Modernizing your data center with Dell and AMD
NewMind AI Weekly Chronicles - August'25 Week I
“AI and Expert System Decision Support & Business Intelligence Systems”
Approach and Philosophy of On baking technology
Mobile App Security Testing_ A Comprehensive Guide.pdf
Machine learning based COVID-19 study performance prediction
Review of recent advances in non-invasive hemoglobin estimation
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
Build a system with the filesystem maintained by OSTree @ COSCUP 2025
Understanding_Digital_Forensics_Presentation.pptx
The Rise and Fall of 3GPP – Time for a Sabbatical?
KodekX | Application Modernization Development

The Challenges of BIG Testing: Automation, Virtualization, Outsourcing, and More

  • 1. MA Full Day Tutorial 10/13/2014 8:30:00 AM "The Challenges of BIG Testing: Automation, Virtualization, Outsourcing, and More" Presented by: Hans Buwalda LogiGear Corporation Brought to you by: 340 Corporate Way, Suite 300, Orange Park, FL 32073 888-268-8770 ∙ 904-278-0524 ∙ sqeinfo@sqe.com ∙ www.sqe.com
  • 2. Hans Buwalda LogiGear Hans Buwalda has been working with information technology since his high school years. In his thirty year career, Hans has gained experience as a developer, manager, and principal consultant for companies and organizations worldwide. He was a pioneer of the keyword approach to testing and automation, now widely used throughout the industry. His approaches to testing, like Action Based Testing and Soap Opera Testing, have helped a variety of customers achieve scalable and maintainable solutions for large and complex testing challenges. Hans is a frequent speaker at STAR conferences and is lead author of Integrated Test Design and Automation: Using the Testframe Method. Speaker Presentations
  • 3. 1 © 2014 LogiGear Corporation. All Rights Reserved Hans Buwalda LogiGear Automation, Virtualization, Outsourcing, and More STARWEST 2014, Tutorial MA Anaheim, Monday October 13, 2014 8.30 AM – 4.30 PM The Challenges of BIG Testing © 2014 LogiGear Introduction − industries − roles in testing
  • 4. 2 © 2014 LogiGear Who is your speaker Software testing company, around since 1994 Testing and test automation services: − consultancy, training − test development and automation services − "test integrated" development services Products: − TestArchitect™, TestArchitect for Visual Studio™ − integrating test development with test management and automation − based on modularized keyword-driven testing LogiGear Magazine: − themed issues, non-commercial www.logigear.com www.testarchitect.com Dutch guy, in California since 2001 Background in math, computer science, management Since 1994 focusing on automated testing − keywords, agile testing, big testing Hans Buwalda LogiGear Corporation hans @ logigear.com www.happytester.com © 2014 LogiGear What is "BIG" Big efforts in development, automation, execution and/or follow up It takes a long time and/or large capacity to run tests (lot of tests, lot of versions, lot of configurations, ...) Scalability, short term and long term Complexity, functional, technical, scale Number and diversity of players and stakeholders Various definitions of "big" possible... and relevant... − "10 machines" or "10 acres" − "1000 tests" or "1000 weeks of testing" Big today means: big for you − not trivial, you need to think about it "Windows 8 has undergone more than 1,240,000,000 hours of testing" Steven Sinofsky, Microsoft, 2012
  • 5. 3 © 2014 LogiGear Some key items in scalable (automated) testing Organization and design of the tests Process and cooperation (agile or traditional) Project and production focus Technology, tooling, architecture Infrastructure Testability of the application under test Agreement, commitment Globalization, off-shoring © 2014 LogiGear Existential Questions Why test? Why not test? Why automate tests? Why not automate tests?
  • 6. 4 © 2014 LogiGear Why test? People expect us to do Somebody wants us to Increases certainty and control − Showing absence of problems Finds faults, saving time, money, damage − Showing presence of problems © 2014 LogiGear Why not test? It costs time and money You might find problems . . . We forgot to plan for it We need the resources for development It is difficult It's hard to manage
  • 7. 5 © 2014 LogiGear Why Automate Tests? It is more fun Can save time and money − potentially improving time-to-market, quality-to-market and control Can capture key domain and application knowledge in a re- usable way Can speed up development life cycles − for agile project approaches automation is often a must-have Execution typically is more reliable − a robot is not subjective, tired or moody Some tests can be done much better, or only, with automation − for example unit tests can test an application on the individual function level, helping a lot in scalability of system development − load and performance tests are also good examples − automation can also reach non-UI items like web services © 2014 LogiGear The Power of Robot Perception FINISHED FILES ARE THE RE SULT OF YEARS OF SCIENTI FIC STUDY COMBINED WITH THE EXPERIENCE OF YEARS...
  • 8. 6 © 2014 LogiGear Why not Automate? Can rule out the human elements − promotes "mechanical" testing − might not find "unexpected" problems More sensitive to good practices − pitfalls are plentiful Needs technical expertise in the test team Tends to dominate the testing process − at the cost of good test development Creates more software to manage − can actually diminish scalability rather than helping it − in particular changes in an application under test can have large, and hard to predict, impact on the automated tests © 2014 LogiGear Olny srmat poelpe can raed tihs. I cdnuolt blveiee taht I cluod aulaclty uesdnatnrd waht I was rdanieg. The phaonmneal pweor of the hmuan mnid, aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoatnt tihng is taht the frist and lsat ltteer be in the rghit pclae. The rset can be a taotl mses and you can sitll raed it wouthit a porbelm. Tihs is bcuseae the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe. The Power of Human Perception
  • 9. 7 © 2014 LogiGear The Power of Human Perception Notice at an event: "Those who have children and don't know it, there is a nursery downstairs." In a New York restaurant: "Customers who consider our waiters uncivil ought to see the manager." In a bulletin: "The eighth-graders will be presenting Shakespeare's Hamlet in the basement Friday at 7 PM. You are all invited to attend this drama." In the offices of a loan company: "Ask about our plans for owning your home." In the window of a store: "Why go elsewhere and be cheated when you can come here?" © 2014 LogiGear Relation to code Quality / depth Automation Scalability Unit Testing Close relationship with the code Singular test scope, but deep into the code Fully automated by nature Scalable, grows with the code, easy to repeat Functional Testing Usually does not have a one-on-one relation with code Quality and scope depends on test design In particular UI based automation can be a challenge Often a bottle- neck in scalability Exploratory Testing Human driven, not seeking a relation with code Usually deep and thorough, good at finding problems May or may not be automated afterwards Not meant to be repeatable. Rather do a new session Some test kinds and their scalability (simplified)
  • 10. 8 © 2014 LogiGear Actions 4 actions, each with an action keyword and arguments read from top to bottom fragment from a test with actions acc nr first last open account 123123 John Doe acc nr amount deposit 123123 10.11 deposit 123123 20.22 acc nr expected check balance 123123 30.33 • The test developer creates tests using actions with keywords and arguments • Checks are, as much as possible, explicit (specified expected values) • The automation task focuses on automating the keywords, each keyword is automated only once • This technique can be very scalable. A similar approach is behavior based testing, which also works with human readable tests, but is more verbose © 2014 LogiGear Potential benefits of keywords More productive: more tests, better tests − more breadth − more depth Easier to read and understand − no program code, tests can be self-documenting − facilitates involvement of non-technical people, like domain experts Fast, results can be quickly available − the design directly drives the automation More targeted efforts for the automation engineers − less repetition, test design helps in creating a structured solution (factoring) − can focus on critical and complex automation challenges more easily Automation can be made more stable and maintainable − limited and manageable impact of changes in the system under test A significant portion of the tests can typically be created early in a system life cycle − dealing with execution details later . . .
  • 11. 9 © 2014 LogiGear Risks of keywords Keywords are often seen as silver bullet − often treated as a technical "trick", complications are underestimated The method needs understanding and experience to be successful − pitfalls are many, and can have a negative effect on the outcome − some of the worst automation projects I've seen were with keywords Testers might get pushed into half-baked automation role − risk: you loose a good tester and gain a poor programmer − focus may shift from good (lean and mean) testing to "getting automation to work" − the actual automation challenges are better left to a the experienced automation professionals Lack of method and structure can risk manageability − maintainability may not as good as hoped − tests may turn out shallow and redundant © 2014 LogiGear Keywords need a method By themselves keywords don't provide much scalability − they can even backfire and make automation more cumbersome − a method can help tell you which keywords to use when, and how to organize the process Today we'll look at Action Based Testing (ABT) − addresses test management, test development and automation − large focus on test design as the main driver for automation success Central deliveries in ABT are the "Test Modules" − developed in spreadsheets − each test module contains "test objectives" and "test cases" − each test module is a separate (mini) project, each test module can involve different stake holders
  • 12. 10 © 2014 LogiGear High Level Test Design - Test Development Plan Objectives Test Module 1 Test Cases Test Module 2 Test Module N Actions . . . AUTOMATION Objectives Objectives interaction test business test Overview Action Based Testing define the "chapters" create the "chapters" create the "words" make the words work Test Cases Test Cases window control value enter log in user name jdoe enter log in password car guy window control property expected check property log in ok button enabled true user password log in jdoe car guy first last brand model enter rental Mary Renter Ford Escape last total check bill Renter 140.42 © 2014 LogiGear Example of an ABT test module Consists of an (1) initial part, (2) test cases and (3) a final part Focus is on readability, and a clear scope Navigation details are avoided, unless they're meant to be tested TEST MODULE Car Rental Payments user start system jdoe TEST CASE TC 01 Rent some cars first name last name car days rent car Mary Jones Ford Escape 3 first name last name amount check billing Mary Jones 140.40 FINAL close application
  • 13. 11 © 2014 LogiGear Example of a "low level" test module In "low level" tests interaction details are not hidden, since they are the target of the test The right level of abstraction depends on the scope of the test, and is an outcome of your test design process TEST MODULE Screen Flow user start system john TEST CASE TC 01 Order button window button click main create order window check window exists new order FINAL close application © 2014 LogiGear ACTION DEFINITION check balance user argument customer argument amount window control value enter balance inquiry last name # customer window control click balance inquiry view balance window control expected check balance inquiry balance # amount Re-use actions to make new actions In the below example we make a new action Existing actions are strung together to create new ones with a broader scope Often steps in low level tests are re-used to create these action definitions : customer amount check balance Smith 223.45 check balance Jones 0.00 check balance James -330.45 : use many times in tests: define in one place:
  • 14. 12 © 2014 LogiGear Question What is wrong with the following pictures? © 2014 LogiGear No Millennium Problems ? ?
  • 15. 13 © 2014 LogiGear Anything wrong with this instruction ? You should change your battery or switch to outlet power immediately to keep from losing your work. © 2014 LogiGear Issues are not always obvious... Downton Abbey
  • 16. 14 © 2014 LogiGear Why Better Test Design? Quality and manageability of test − many tests are often quite "mechanical" now, no surprises − one to one related to specifications, user stories or requirements, which often is ok, but lacks aggression − no combinations, no unexpected situations, lame and boring − such tests have a hard time finding (interesting) bugs Better automation − when unneeded details are left out of tests, they don't have to be maintained − avoiding "over checking": creating checks that are not in the scope of a test, but may fail after system changes − limit the impact of system changes on tests, making such impact more manageable I have become to believe that successful automation is usually less of a technical challenge as it is a test design challenge. unexpected problem? © 2014 LogiGear Case for organizing tests in BIG projects Can help keep the volume down Isolate the complexities Make tests and automation more re-usable Easier to deal with changing designs Much of tested subject matter is often business oriented, not system specific − for example a home loan is a home loan Automation can be made efficient. For example business logic tests may not even need the UI − can use web services or business components
  • 17. 15 © 2014 LogiGear The Three “Holy Grails” of Test Design Metaphor to depict three main steps in test design Using "grail" to illustrate that there is no single perfect solution, but that it matters to pay attention Right approach for each test module Proper level of detail in the test specification Organization of tests into test modules © 2014 LogiGear What's the trick...
  • 18. 16 © 2014 LogiGear What's the trick... Have or acquire facilities to store and organize your content Select your stuff Decide where to put what − assign and label the shelves Put it there If the organization is not sufficient anymore, add to it or change it © 2014 LogiGear Breakdown Criteria Common Criteria − Functionality (customers, finances, management information, UI, ...) − Architecture of the system under test (client, server, protocol, sub systems, components, modules, ...) − Kind of test (navigation flow, negative tests, response time, ...) Additional Criteria − Stakeholders (like "Accounting", "Compliance", "HR", ...) − Complexity of the test (put complex tests in separate modules) − Execution aspects (special hardware, multi-station, ...) − Project planning (availability of information, timelines, sprints, ...) − Risks involved (extra test modules for high risk areas) − Ambition level (smoke test, regression, aggressive, )
  • 19. 17 © 2014 LogiGear Examples Lifecycle tests (Create, Read, Update, Delete) for business objects − like "car", "customer", "order", "invoice", etc − for all: test various types and situations, and keep the tests at high level if possible Forms, value entry − does each form/dialog/page work − mandatory and optional fields, valid and invalid values, etc − UI elements and their properties and contents − function keys, tab keys, special keys, etc Screen and transaction flows − like cancel an order, menu navigation, use a browser back and forward buttons, etc − is the data in the database correct after each flow Business transactions, end-to-end tests − like enter, submit and fulfil a sale order, then check inventory and accounting − example: behaviors of alarms Functions and features − can I count orders, can I calculate a mortgage, etc − can I export, like to PDF, HMTL, XML, Security − login and password procedures − authorizations Localizations − languages, standards, Special tests − multi-station, load, hardware intensive, etc © 2014 LogiGear Example Top Level Structure <business object 1> Lifecycles Value entry Screen flows . . . Dialogs <business object 2> Functions and Features Integrations End-to-end, business . . . Security, authorization Special tests Non-UI Extensibility, customizing Custom controls . . . Project
  • 20. 18 © 2014 LogiGear Approach 1: Workshop Gather a meeting with relevant participants − test developers − domain experts − automation engineer (focus on efficiency of automation) − experienced moderator − also consider: developers, managers If necessary, provide training of participants before the discussion © 2014 LogiGear Approach 2: Design and Feedback One or two experienced test designers create a first draft The draft is delivered/discussed to relevant parties Ask the parties to verify: 1. Structure: does it make sense 2. Completeness: are all relevant areas covered Based on feedback, further modify the design
  • 21. 19 © 2014 LogiGear Identifying the modules Step 1: top down → establish main structure (and understanding) analyze what the business is and what the system does? how is it technically organized? what is important that we test use the list in the "breakdown examples" slide as a starting point also look at the “secondary criteria”, as far as applicable if the test is large, define main groups first, then detail out into modules Step 2: bottom up → refine, complete study individual functionalities and checks (like from exist test cases) and identify test modules for them if needed identify and discuss any additional criteria and needed testing situations review and discuss the resulting list(s) of test modules create some early drafts of test modules and adjust the list if needed © 2014 LogiGear Some notes on Bugs Bugs found in the "Immediate Sphere", when the developer/team is still working on the code (like in the same sprint) Consider not logging as bugs, since that is much overhead. Simply share a failed test module with the developer. For each bug found late ask three questions, in this order: 1. was it a bug? 2. what was the root cause? 3. why wasn't it caught? Consider keeping this information in the tracking system Bugs found "Post Delivery", when the developer/team is working on something else already Good to keep track, manage, prioritize, assign, close, learn etc. The later the bug is found the more important.
  • 22. 20 © 2014 LogiGear Questions for Test Design How does your organization handle test design and test organization? How do you document it? © 2014 LogiGear "Thou Shall Not Debug Tests..." Large and complex test projects can be hard to "get to run" If they are however, start with taking a good look again at your test design... Rule of thumb: don't debug tests. If tests don't run smoothly, make sure: − lower level tests have been successfully executed first -> UI flow in the AUT is stable − actions and interface definitions have been tested sufficiently with their own test modules -> automation can be trusted − are you test modules not too long and complex?
  • 23. 21 © 2014 LogiGear What about existing tests? Compare to moving house: − some effort can't be avoided − be selective, edit your stuff, • look at the future, not the past − first decide where to put what, then put it there − moving is an opportunity, you may not get such chance again soon Follow the module approach − define the modules and their scope as if from scratch − use the existing test cases in two ways: • verify completeness • harvest and re-use them for tests and for actions − avoid porting over "step by step", in particular avoid over-checking © 2014 LogiGear Grail 2: Approach per Test Module Plan the test module: − when to develop: do we have enough information? UI tests are usually the last ones to be developed − when to execute: make sure lower level stuff working first UI tests are usually the first ones to be executed Process: − do an intake: understand what is needed and devise an approach − analyze requirements, formulate "test objectives", create tests Don't just stick to "checking", try follow an exploratory approach: − see the test development as a "learning process", about the business domain, the application structure, the interaction, etc − talk about your tests, make them strong Identify stakeholders and their involvement: − users, subject matter experts − developers − auditors Choose testing techniques if applicable: − boundary analysis, decision tables, etc
  • 24. 22 © 2014 LogiGear Eye on the ball, Scope Always know the scope of the test module The scope should be unambiguous The scope determines many things: − what the test objectives are − which test cases to expect − what level of actions to use − what the checks are about and which events should generate a warning or error (if a “lower” functionality is wrong) © 2014 LogiGear Too detailed? Step Name Description Expected step 16 Click the new formula button to start a new calculation. The current formula is cleared. If it had not been save a message will show step 17 Enter "vegas winner" in the name field The title will show "vegas winner" step 18 Open the formula editor by clicking the '+' button for the panel "formula editor" The formula editor will show with an empty formula (only comment lines) step 19 Add some lines and enter "10*x;" The status bard will show "valid formula". There is a "*" marker in the title step 20 Click the Save formula button The formula is saved, the "*" will disappear from the title step 21 Open the panel with the arguments by clicking the '+' button There two lines, for 'x' and 'y' step 22 Click on the value type cell and select "currency" A button to select a currency appears, with default USD step 23 Click on the specify argument values link The argument specification dialog is shown
  • 25. 23 © 2014 LogiGear State your Objectives . . . ... TO-3.51 The exit date must be after the entry date ... test objective TO-3.51 name entry date exit date enter employment Bill Goodfellow 2016-10-02 2016-10-01 check error message The exit date must be after the entry date. requirement, specification, test case requirement, specification, test objective test case direct relation indirect relation via a test objective Linking through test objectives can help easier traceability: © 2014 LogiGear Examples of Testing Techniques Equivalence class partitioning • any age between 18 and 65 • see Cem Kaner's book on "Domain Testing" Boundary condition analysis • try 17, 18, 19 and 64, 65, 66 Error guessing • try Cécile Schäfer to test sorting of a name list Exploratory • "Exploratory testing is simultaneous learning, test design, and test execution", James Bach, www.satisfice.com • note Hans: I think there is also something like "business exploratory testing", focusing on test design Error seeding • deliberately inject faults in a test version of the system, to see if the tests catch them • handle with care, don't let the bugs get into the production version Decision tables • define possible situations and the expected responses of the system under test State transition diagrams • identify "states" of the system, and have your tests go through each transition between states at least once Jungle Testing • focus on unexpected situations, like hacking attacks Soap Opera Testing • describe typical situations and scenarios in the style of episodes of a soap opera, with fixed characters • high density of events, exaggerated • make sure the system under test can still handle these
  • 26. 24 © 2014 LogiGear "Jungle Testing" Expect the unexpected − unexpected requests − unexpected situations (often data oriented) − deliberate attacks − how does a generic design respond to a specific unexpected event? Difference in thinking − coding bug: implementation is different from what was intended/specified − jungle bug: system does not respond well to an unexpected situation To address − study the matter (common hack attacks, ...) − make a risk analysis − make time to discuss about it (analysis, brainstorm) − involve people who can know − use exploratory testing − use an agile approach for test development − consider randomized testing, like "monkey" testing © 2014 LogiGear Soap Opera Testing Informal scenario technique to invite subject-matter experiences into the tests, and efficiently address multiple objectives Using a recurring theme, with “episodes” About “real life” But condensed And more extreme Typically created with a high involvement of end-users and/or subject-matter experts It can help create a lot of tests quickly, and in an agile way
  • 27. 25 © 2014 LogiGear Lisa Crispin: Disorder Depot . . . There are 20 preorders for George W. Bush action figures in "Enterprise", the ERP system, awaiting the receipt of the items in the warehouse. Finally, the great day arrives, and Jane at the warehouse receives 100 of the action figures as available inventory against the purchase order. She updates the item record in Enterprise to show it is no longer a preorder. Some time passes, during which the Enterprise background workflow to release preorders runs. The 20 orders are pick-released and sent down to the warehouse. Source: Hans Buwalda, Soap Opera Testing (article), Better Software Magazine, February 2005 © 2014 LogiGear Lisa Crispin: Disorder Depot . . . Then Joe loses control of his forklift and accidentally drives it into the shelf containing the Bush action figures. All appear to be shredded to bits. Jane, horrified, removes all 100 items from available inventory with a miscellaneous issue. Meanwhile, more orders for this very popular item have come in to Enterprise. Sorting through the rubble, Jane and Joe find that 14 of the action figures have survived intact in their boxes. Jane adds them back into available inventory with a miscellaneous receipt.
  • 28. 26 © 2014 LogiGear Lisa Crispin: Disorder Depot . . . This scenario tests • Preorder process • PO receipt process • Miscellaneous receipt and issue • Backorder process • Pick-release process • Preorder release process • Warehouse cancels © 2014 LogiGear Vary your tests? Automated tests have a tendency to be rigid, and predictable Real-world situations are not necessarily predictable Whenever possible try to vary: − with select other data cases that still fit the goal of tests − with randomized behavior of the test
  • 29. 27 © 2014 LogiGear Generation and randomization techniques Model-based − use models of the system under test to create tests − see: Harry Robinson, www.model-based-testing.org, and Hans Buwalda, Better Software, March 2003 Data driven testing − apply one test scenario to multiple data elements − either coming from a file or produce by an automation "Monkey testing" − use automation to generate random data or behavior − "smart monkeys" will follow typical user behavior, most helpful in efficiency − "dumb monkeys" are more purely random, may find more unexpected issues − long simulations can expose bugs traditional tests won't find Extended Random Regression − have a large database of tests − randomly select and run them, for a very long time − this will expose bugs otherwise hidden − see Cem Kaner e.a.: "High Volume Test Automation", STARWEST 2004 © 2014 LogiGear Data Driven Testing Separate test logic from the data Possible origins for the data: − earlier steps in the test − data table − randomizer, or other formula − external sources, like a database query Use "variables" as placeholders in the test case, instead of hard values Data driven is powerful, but use modestly: − value cannot be known at test time, or changes over time − having many data variations is meaningful for the test
  • 30. 28 © 2014 LogiGear Variables and expressions with keywords This test does not need an absolute number for the available cars, just wants to see if a stock is updated As a convention we denote an assignment with ">>" The "#" indicates an expression TEST CASE TC 02 Rent some more cars car available get quantity Chevvy Volt >> volts first name last name car rent car John Doe Chevvy Volt rent car John Doe Chevvy Volt car expected check quantity Chevvy Volt # volts - 2 © 2014 LogiGear Data driven testing with keywords The test lines will be repeated for each row in the data set The values represented by "car", "first" and "last" come from the selected row of the data set TEST CASE TC 03 Check stocks data set use data set /cars car available get quantity # car >> quantity first name last name car rent car # first # last # car car expected check quantity # car # quantity - 1 repeat for data set DATA SET cars car first last Chevvy Volt John Doe Ford Escape Mary Kane Chrysler 300 Jane Collins Buick Verano Tom Anderson BMW 750 Henry Smyth Toyota Corolla Vivian Major
  • 31. 29 © 2014 LogiGear Combinations Input values − determine equivalence classes of values for a variable or field − for each class pick a value (or randomize) Options, settings Configurations − operating systems, operating system versions and flavors • Windows service packs, Linux distributions − browsers, browser versions − protocol stacks (IPv4, IPv6, USB, ...) − processors − DBMS's Combinations of all of the above Trying all combinations will spin out of control quickly © 2014 LogiGear Pairwise versus exhaustive testing Group values of variables in pairs (or tuples with more than 2) Each pair (tuple) should occur in the test at least once − maybe not in every run, but at least once before you assume "done" − consider to go through combinations round-robin, for example pick a different combination every time you run a build acceptance test − in a NASA study: • 67 percent of failures triggered by a single value • 93 percent by two-way combinations, and • 98 percent by three-way combinations Example, configurations − operating system: Windows XP, Apple OS X, Red Hat Enterprise Linux − browser: Internet Explorer, Firefox, Chrome − processor: Intel, AMD − database: MySQL, Sybase, Oracle − 72 combinations possible, to test each pair: 10 tests Example of tools: − ACTS from NIST, PICT from Microsoft, AllPairs from James Bach (Perl) − for a longer list see: www.pairwise.org These techniques and tool are supportive only. Often priorities between platforms and values can drive more informed selection Source: PRACTICAL COMBINATORIAL TESTING, D. Richard Kuhn, Raghu N. Kacker, Yu Lei, NIST Special Publication 800-142, October, 2010
  • 32. 30 © 2014 LogiGear Grail 3: Specification Level, choosing actions Scope of the test determines the specification level As high level as appropriate, with as few arguments as possible − be generous with default values for arguments Clear names for actions − a verb + noun pattern usually works well − try to standardize both the verbs and the nouns, like "check customer" versus "verify client" (or vice versa) Avoid "engineer" styles for names of actions and arguments − tests are not source code − avoid removed spaces, all-uppercase, camel case, and underscores − in other words: no "noha_RDT_oUnderS~tand" names please Manage and document the actions By-product of the test design © 2014 LogiGear By-product of test design As generic as possible Use a verb and a noun, and standardize the verbs and the nouns Organize and document Be generous with default values, so you can leave out arguments that are not relevant for the test module scope Actions
  • 33. 31 © 2014 LogiGear Using actions
    TEST MODULE Order processing
      start system
    TEST CASE TC 01 Order for tablets
                           user       password
      login                jdoe       doedoe
                           window
      check window exists  welcome
                           order id   cust id   article   price    quantity
      create order         AB123      W3454X    tablet    198.95   5
                           order id   total
      check order total    AB123      994.75
      . . .
  © 2014 LogiGear Low-level, high-level, mid-level actions "Low level": detailed interaction with the UI (or API) − generic, do not show any functional or business logic − examples: "click", "expand tree node", "select menu" "High level": a business domain operation or check on the application under test − hide the interaction − examples: "enter customer", "rent car", "check balance" "Mid level": common sequences at a more detailed application level − usually to wrap a form or dialog − for use in high level actions − greatly enhance maintainability − example: "enter address fields" (diagram: "enter customer" builds on "enter address fields", which builds on low-level "enter", "select", "set", ...)
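  A minimal sketch of the three action levels layered on Selenium; the locators and the rent-car flow are invented assumptions, not the tutorial's actual implementation.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()

    def enter(field_id, value):              # low level: generic UI interaction
        driver.find_element(By.ID, field_id).send_keys(value)

    def enter_address_fields(street, city):  # mid level: wraps one form
        enter("StreetTextBox", street)
        enter("CityTextBox", city)

    def rent_car(first, last, car):          # high level: business operation
        enter("FirstNameTextBox", first)
        enter("LastNameTextBox", last)
        enter("CarTextBox", car)
        driver.find_element(By.ID, "RentButton").click()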
  • 34. 32 © 2014 LogiGear Identifying controls Identify windows and controls, and assign names to them These names encapsulate the properties that the tool can use to identify the windows and controls when executing the tests © 2014 LogiGear Mapping the interface An interface mapping (common in test tools) will map windows and controls to names When the interface of an application changes, you only have to update this in one place The interface mapping is a key step in your automation success; allocate time to design it well
    INTERFACE ENTITY library
      interface entity setting   title          {.*Music Library}
                          ta name        ta class   label
      interface element   title          text       Title:
      interface element   artist         text       Artist:
      interface element   file size      text       File size (Kb):
                          ta name        ta class   position
      interface element   playing time   text       textbox 4
      interface element   file type      text       textbox 5
      interface element   bitrate        text       textbox 6
                          ta name        ta class   position
      interface element   music          treeview   treeview 1
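  A minimal sketch of an interface map in plain Python: logical names on the left, tool-specific locators on the right. The locator values are invented, and `driver` is assumed to be a Selenium WebDriver instance like the one in the previous sketch.

    INTERFACE = {
        "library": {
            "title":  ("id", "TitleTextBox"),
            "artist": ("id", "SongArtistTextBox"),
            "music":  ("id", "MusicTreeView"),
        }
    }

    def find(driver, window, name):
        by, value = INTERFACE[window][name]   # a UI change is fixed in one place
        return driver.find_element(by, value)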
  • 35. 33 © 2014 LogiGear Tips to make "BIG" automation stable Make the system under test automation-friendly − consider this a key requirement ("must have") − development practices are often a great source of automation impediments Don't use hard coded waits Select and create the right technologies and tools Pay attention to interface strategies − like hooks, interface maps, and non-UI testing Test automation items before running them − actions, interface mappings, emulators, etc. − in particular when they're complex Keep an eye on the test design − test design being a main driver for automation success © 2014 LogiGear Use properties a human user can't see, but a test tool can This approach can lead to speedier and more stable automation − less need for "spy" tools (which take a lot of time) − less sensitive to changes in the system under test − not sensitive to languages and localizations A "white-box" approach to UI's can also help operate on or verify aspects of interface elements Examples: − "id" attribute for HTML elements − "name" field for Java controls − "AccessibleName" or "Automation ID" properties in .Net controls (see below) Hidden interface properties
  • 36. 34 © 2014 LogiGear Mapping the interface using hidden identifiers Instead of positions or language dependent labels, an internal property "automation id" has been used The interface definition will be less dependent on modifications in the UI of the application under test If the information can be agreed upon with the developers, for example in an agile team, it can be entered (or pasted) manually and early on
    INTERFACE ENTITY library
      interface entity setting   automation id   MusicLibraryWindow
                          ta name        ta class   automation id
      interface element   title          text       TitleTextBox
      interface element   artist         text       SongArtistTextBox
      interface element   file size      text       SizeTextBox
      interface element   playing time   text       TimeTextBox
      interface element   file type      text       TypeTextBox
      interface element   bitrate        text       BitrateTextBox
                          ta name        ta class   automation id
      interface element   music          treeview   MusicTreeView
  © 2014 LogiGear Passive timing − wait a set amount of time − in large scale testing, try to avoid passive timing altogether: • if the wait is too short, the test will be interrupted • if the wait is too long, time is wasted Active timing − wait for a measurable event − usually the wait is up to a generous maximum time − common example: wait for a window or control to appear (usually the test tool will do this for you) Even if not obvious, find something to wait for... Involve developers if needed − relatively easy in an agile team, but also in traditional projects, give this priority If using a waiting loop − make sure to use a "sleep" function in each cycle that frees up the processor (giving the AUT time to respond) − wait for an end time, rather than a set number of cycles Active Timing
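  A minimal sketch of such a waiting loop, following the advice above: poll for a condition up to a generous end time, sleeping each cycle so the AUT gets processor time.

    import time

    def wait_for(condition, max_wait=30.0, interval=0.1):
        end = time.monotonic() + max_wait      # wait for an end time, not N cycles
        while time.monotonic() < end:
            if condition():
                return True
            time.sleep(interval)               # frees the processor for the AUT
        return False

    # usage, e.g. with Selenium (invented example):
    # wait_for(lambda: driver.find_elements(By.ID, "WelcomeWindow"))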
  • 37. 35 © 2014 LogiGear Things to wait for... Wait for the last control or element to load − developers can help identify which one that is Non-UI criteria − API function − existence of a file Criteria added in development specifically for this purpose, like: − "disabling" big slow controls (like lists or trees) until they're done loading − API functions or UI window or control properties Use a "delta" approach (see the sketch below): − every wait cycle, test if there was a change; if no change, assume that the loading time is over − examples of changes: • the controls on a window • count of items in a list • size of a file (like a log file) © 2014 LogiGear Should be a "must have" requirement − first question in a development project: "how do we test this?" Identifying properties Hooks for timing White-box access to anything relevant: − input data (ability to emulate) − output data (what is the underlying data being displayed) − random generators (can I set a seed?) − states (like in a game) − objects displayed (like monsters in a game) Emulation features, like time-travel and fake locations Testability, some key items
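  Returning to the "delta" approach above, a minimal sketch: poll a cheap observable and treat a run of unchanged readings as "loading is over"; the observable (an item count) is an invented example.

    import time

    def wait_until_stable(read_value, settle_cycles=5, interval=0.2, max_wait=60.0):
        end = time.monotonic() + max_wait
        last, stable = read_value(), 0
        while time.monotonic() < end and stable < settle_cycles:
            time.sleep(interval)
            current = read_value()
            stable = stable + 1 if current == last else 0   # reset on any change
            last = current
        return stable >= settle_cycles

    # usage, e.g. (invented): wait until a list stops growing
    # wait_until_stable(lambda: len(driver.find_elements(By.TAG_NAME, "li")))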
  • 38. 36 © 2014 LogiGear Alternatives to UI automation ("non-UI") Examples − HTTP and XML based interfaces, like REST − application programming interfaces (API's) − embedded software − protocols − files, batches − databases − command line interfaces (CLI's) − multi-media − mobile devices In many cases non-UI automation is needed since there simply is no UI, but it can also speed things up: − tends to be more straightforward technically, with little effort needed to build up or maintain − once it works, it tends to work much faster and more stably than UI automation − test design principles (like modules and keywords) normally apply to non-UI as well In BIG testing projects routinely: − identify which non-UI alternatives are available − as part of test planning: identify which tests qualify for non-UI automation (a sketch follows below) © 2014 LogiGear Tools that can help manage BIG projects Application Lifecycle Management (ALM) − abundant now, mainly on the wings of agile − very good for control, team cooperation, and traceability − often relate to IDE's (like Microsoft TFS and Visual Studio) − examples: Rally, Jira, MS TFS and VS Online (VSO), HP ALM Note: test cases are often treated as "work items" in an ALM, but they're also products that can be executed and need to be managed and maintained Test Management − as separate tools they're on their way out, morphing into or replaced by ALM options − examples: HP Quality Center, Microsoft Test Manager, Atlassian Zephyr, TestArchitect Test development and automation − develop and/or automate tests − examples are HP UFT, Selenium, MS Coded UI, FitNesse, Cucumber, TestArchitect Continuous build, continuous integration − server-based building of software − builds can be started in different ways, like triggered by check-ins, scheduled times, etc − can help run tests automatically, even "pre-flight": meaning a check-in only succeeds if tests pass − examples: Hudson, Jenkins, TFS, ElectricCommander Bug trackers − not only register issues, but also facilitate their follow up, with workflow features − often also part of other tools, and tend to get absorbed now by the ALMs − examples: BugZilla, Mantis, Trac
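  Picking up the non-UI point from the previous slide: a minimal sketch of the same order check done through a hypothetical REST endpoint with the requests package; the URL and payload are invented for illustration.

    import requests

    def test_create_order():
        resp = requests.post("https://example.test/api/orders",
                             json={"customer": "W3454X", "article": "tablet",
                                   "price": 198.95, "quantity": 5},
                             timeout=10)
        assert resp.status_code == 201
        assert resp.json()["total"] == 994.75   # same check as the UI test, faster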
  • 39. 37 © 2014 LogiGear Tooling and Traceability (diagram: test objectives, test modules, test cases, execution results and bug items are traced as ALM items across the test development tool, automation tool, execution manager, continuous integration and issue tracker, with the ALM/IDE/source control tying them back to code files and builds) © 2014 LogiGear Functional Test Execution Have an explicit approach for when and how to execute which tests − a good high level test design will help with this Execution can be selective or integral − unit tests are typically executed selectively, often automatically based on code changes in a system like SVN or TFS − functional tests don't have as obvious relations with code files − selective execution will be quicker and more efficient; integral execution may catch more side-effect issues ("bonus bugs") − consider "random regression" execution of tests (diagram: user stories and work items relate code files to unit tests and functional tests)
  • 40. 38 © 2014 LogiGear Versions, environments, configurations Many factors can influence details of automation − language, localization − hardware − version of the system under test − system components, like OS or browser Test design can reflect these − certain test modules are more general − others are specific, for example for a language But for tests that do not care about the differences, the automation just needs to "deal" with them − shield them from the tests (photo caption: "minimum safe distance from a bear is 91 meters" − localization: converting yards to meters) © 2014 LogiGear Capture variations of the system under test in the actions and interface definitions, rather than in the tests (unless relevant there). Can be a feature in a test playback tool, or something you do with a global variable or setting; a sketch follows below. (diagram: a "master switch" selects one of several variations implemented in the actions and interface definitions)
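  A minimal sketch of the "master switch" idea: one global setting selects the variation inside an action, so the tests themselves stay variation-free; the function and its arguments are invented for illustration.

    VARIATION = "metric"      # could be chosen in a dialog when execution starts

    def check_safe_distance(displayed_value, value_in_yards):
        # the test always specifies yards; the action converts when needed
        expected = value_in_yards * 0.9144 if VARIATION == "metric" else value_in_yards
        assert abs(displayed_value - expected) < 0.5

    check_safe_distance(91.0, 100)    # passes under the "metric" variation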
  • 41. 39 © 2014 LogiGear Possible set up of variations Specify, for example, in a dialog when you start an execution (screenshot: choosing a "linked variation" and a "keyworded variation") © 2014 LogiGear Test Environments Physical: hardware, infrastructure, location, ... Software: programs, data models, protocols, ... Data: initial data, parameters / tables, ... Common concerns: costs money, can be scarce, configurations, availability, manageability
  • 42. 40 © 2014 LogiGear Dealing with data Constructed data is easier to manage − can use automation to generate it, and to enter it in the environment − result of test analysis and design, reflecting "interesting" situations − however, fewer "surprises": real-life situations which were not foreseen Real-world data is challenging to organize − make it a project, or task, in itself − make absolutely sure to deal with privacy, security and legal aspects appropriately. You may need to "scrub" the data Consider using automation to select data for a test (see the sketch below) − set criteria ("need a male older than 50, married, living in Denver"), query for matching cases, and select one randomly (if possible a different one each run) − this approach will introduce variation and unexpectedness, making automated tests stronger and more interesting A separate, fairly recent challenge is testing non-SQL "Big Data" − apart from testing software, you will also test the data itself, often with heuristic and fuzzy logic techniques • see also: "Become a Big Data Quality Hero", Jason Rauen, StarCanada 2014 © 2014 LogiGear Virtualization Virtual machines rather than physical machines − allow "guest" systems to operate on a "host" system − host can be Windows, Linux, etc, but also a specialized "hypervisor" − the hypervisor can be "hosted" or "bare metal" Main providers: − VMWare: ESX and ESXi − Microsoft: Hyper-V − Oracle/Sun: Virtual Box − Citrix: Xen (open source) Hardware support is becoming common now − processor, chipset, i/o − like Intel's i7/Xeon For most testing purposes you need virtual clients, not virtual servers − most offerings in the market currently target virtual servers, particularly data centers Virtual clients will become more mainstream with the coming of VM's as part of regular operating systems − Windows 8: Hyper-V − Linux: KVM
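  Picking up the data-selection idea from the "Dealing with data" slide above: a minimal sketch that queries a scrubbed copy of real data by criteria and picks a random match each run; the database file, table and column names are invented assumptions.

    import random
    import sqlite3

    conn = sqlite3.connect("scrubbed_customers.db")   # privacy-scrubbed copy
    rows = conn.execute(
        "SELECT id FROM customers "
        "WHERE sex = 'M' AND age > 50 AND married = 1 AND city = 'Denver'"
    ).fetchall()
    customer_id = random.choice(rows)[0]   # a different case each run adds variation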
  • 43. 41 © 2014 LogiGear Virtualization, a tester's dream... In particular for functional testing Much easier to define and create needed configurations − you basically just need storage − managing this is your next challenge One stored configuration can be re-used over and over again The VM can always start "fresh", in particular with − fresh base data (either server or client) − a specified state, for example to repeat a particular problematic automation situation Can take "snapshots" of situations, for analysis of problems Can use automation itself to select and start/stop suitable VM's − for example using actions for this − or letting an overnight or continuous build take care of this © 2014 LogiGear Virtualization, bad dream? Performance, response times, capacities Virtual machine latency can add timing problems − see next slide − can be derailing in big test runs Management of images − images can be large, and difficult to store and move around • there can be many, with numbers growing combinatorially • configuration in the VM can have an impact, like fixed/growing virtual disks − distinguish between managed configurations and sandboxes − define ownership, organize it − IT may be the one giving out (running) VM's, restricting your flexibility Managing running tests in virtual machines can take additional effort on top of managing the VM's themselves − with the luxury of having VM's the number of executing machines can increase rapidly − one approach: let longer running tests report their progress to a central monitoring service (various tools have features for this)
  • 44. 42 © 2014 LogiGear Virtualization: "time is relative" Consider this waiting time loop, typical for a test script: − endTime = currentTime + maxWait − while not endTime, wait in 100 millisecond intervals When the physical machine overloads, VM's can get slow or have drop-outs, and endTime may pass even though the AUT itself is not slow − GetLocalTime will suffer from the latency − GetTickCount is probably better, but known for being unreliable on VM's Therefore tests that run smoothly on physical machines may not consistently do so on VM's. The timing problems are not easy to predict Possible approaches: − in general: be generous with maximum wait times if you can − don't put too many virtual machines on a physical box − consider a compensation algorithm, for example using both tick count and clock time (see the sketch below) © 2014 LogiGear Virtual machines, capacity Key to pricing is the number of VM's that can run in parallel on a physical machine An automated test execution will typically keep a VM more busy than human use Factors in determining the VM/PM ratio: − memory, for guest OS, AUT, test tooling − storage devices (physical devices, not disk images) − processors, processor cores − specific hardware support (becoming more common) • processor, chipset, I/O − need for high-end graphics (field report: "We started regression with 140 VMs. Very slow performance of Citrix VM clients.")
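  One possible reading of the compensation idea above, as a minimal sketch: time the wait with both a monotonic (tick-count-like) clock and the wall clock, and only give up when both agree the deadline has passed.

    import time

    def wait_with_clock_check(condition, max_wait=30.0):
        start_mono, start_wall = time.monotonic(), time.time()
        while True:
            if condition():
                return True
            mono = time.monotonic() - start_mono
            wall = time.time() - start_wall
            if min(mono, wall) > max_wait:   # both clocks must agree on timeout
                return False
            time.sleep(0.1)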
  • 45. 43 © 2014 LogiGear Building up virtualization Pay attention to pricing: − beefed-up hardware can increase the VM's/box ratio, but at a price − software can be expensive depending on features that you may not need − graphics cards can be a bottleneck on putting VM's on a physical box In a large organization, virtual machines are probably available − make sure to allocate timely − keep in mind the capacity requirements Logical and physical management − which images: the wealth of possible images can quickly make it hard to see the forest for the trees − physical management of infrastructure is beyond this tutorial Minimum requirement: snapshots/images − freeware versions don't always carry this feature − allow you to set up: OS, environment, AUT, tooling, but also: data, states © 2014 LogiGear Servers Test execution facilities tend to become a bottleneck very quickly in big testing projects Servers with virtual machines on them are an easy step up, but require some organization and management Allowing execution separately from the machines the testers and automation engineers are working on increases scalability Large scale test execution, in particular with VM's, benefits from stepping up: First step up: give team members a second machine Second step up: use locally placed servers, users coordinate their use of them Third step up: major infrastructures with organized allocation
  • 46. 44 © 2014 LogiGear Tower Servers Smaller shops (smaller companies, departments) Affordable, simple, a first step up from client-based execution Not very scalable when the projects get larger © 2014 LogiGear Rack Servers Scale well Pricing not unlike tower servers Tend to need more mature IT expertise
  • 47. 45 © 2014 LogiGear Server Blades Big league infrastructure, high density, very scalable Tends to be pricey; use when space and energy matter Usually out of sight for you and your team © 2014 LogiGear Cloud Cloud can be a target of testing − normal tests, plus cloud specific tests • functional, load, response times − from multiple locations − moving production through data centers Cloud can be a host of test execution − considerations can be economical or organizational − providers offer imaging facilities, similar to virtual machines − make sure machines are rented and returned efficiently − IaaS (Infrastructure as a Service): you have to configure − PaaS (Platform as a Service): some configuration, like OS and DBMS, already included Public cloud providers like EC2 and Azure offer API's, so your automation can automatically allocate and release machines (a sketch follows below) − be careful, software bugs can have cost consequences − for example, consider having a second automation process to double-check that cloud machines have been released after a set time Amazon is a market leader, but Microsoft is pushing Azure very hard − embracing non-MS platforms − focusing on "hybrid" solutions, where "on prem" and cloud work together
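  A minimal sketch of renting and returning a cloud machine from automation, using boto3 against EC2; the AMI id is an invented placeholder, and the try/finally guards against the "bugs cost money" risk mentioned above.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")
    run = ec2.run_instances(ImageId="ami-0123456789abcdef0",
                            InstanceType="m3.medium",
                            MinCount=1, MaxCount=1)
    instance_id = run["Instances"][0]["InstanceId"]
    try:
        pass  # ... run tests against the instance ...
    finally:
        # always release the machine; a forgotten instance keeps billing
        ec2.terminate_instances(InstanceIds=[instance_id])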
  • 48. 46 © 2014 LogiGear Cloud providers - Gartner (chart: Gartner Magic Quadrant for Cloud Infrastructure as a Service, 2014 − "ability to execute" versus "completeness of vision", with Leaders, Challengers, Visionaries and Niche Players quadrants) © 2014 LogiGear Cloud growth (chart: cloud growth projections; source: IDC)
  • 49. 47 © 2014 LogiGear Cloud, example pricing, hourly rates Source: Amazon EC2, my interpretation, actual prices may vary Configuration "m3": fixed performance
      size      mem (GB)   vCPU   storage (GB)   price ($/hr)
      medium    3.75       1      4              0.13
      large     7.5        2      32             0.27
      xlarge    15         4      80             0.53
      2xlarge   30         8      160            1.06
  © 2014 LogiGear Cloud, example economy Very simplified, for example not counting: − possible use of VM's within the buy option − graphic cards coming with the buy options Also not counting: additional cost-of-ownership elements for owning or cloud (like IT management, contract and usage management)
                            medium   large   xlarge   2xlarge
      per hour ($)          0.13     0.27    0.53     1.06
      buy (est. $)          300      500     800      1,100
      hours to break even   2,308    1,852   1,509    1,038
      months (24/7 use)     3.1      2.5     2.1      1.4
  Impressions: − cloud could fit well for bursty testing needs, which is often the case − for full continuous, or very frequent, testing: consider buying (for example rack servers) − hybrid models may fit many big-testing situations: own a base capacity, rent more during peak use periods (for Azure this is now a core strategy)
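  A minimal sketch of the break-even arithmetic behind the table above; the "months" row assumes the machine would otherwise run around the clock (roughly 730 hours per month).

    for name, hourly, buy in [("medium", 0.13, 300), ("large", 0.27, 500),
                              ("xlarge", 0.53, 800), ("2xlarge", 1.06, 1100)]:
        hours = buy / hourly          # purchase price divided by hourly rate
        print(f"{name}: {hours:,.0f} hours to break even (~{hours/730:.1f} months)")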
  • 50. 48 © 2014 LogiGear Cloud on demand? Organize it! You're spending money, therefore decide who can do what (don't forget to limit yourself too) Have a "test production planning" process Have a budget Have ownership Use available policy features to limit usage in time and quantity Obtain and read production reporting, compare to plan and budget Minimize the need (for example "last test round only") Have, and try to use, on-prem and hybrid alternatives Start small, learn © 2014 LogiGear Data centers can go down However, disruption could have been minimized by using multiple data centers
  • 51. 49 © 2014 LogiGear Data centers can go down This time, it did involve multiple data centers . . . © 2014 LogiGear Data centers can go down Service providers can occasionally go down too
  • 52. 50 © 2014 LogiGear Cloud, usage for special testing needs Multi-region testing − Amazon for example has several regions • US East, Northern Virginia • US West, Oregon, Northern California • EU, Ireland • Asia Pacific, Singapore, Tokyo • South America, Sao Paulo − be careful: data transfers between regions cost money ($0.01/GB) Load generation − example: "JMeter In The Cloud" • based on the JMeter load test tool • uses Amazon AMI's for the slave machines • allows you to distribute the AMI's across the different regions of Amazon • see more here: aws.amazon.com/amis/jmeter-in-the-cloud-a-cloud-based-load-testing-environment © 2014 LogiGear Questions for Infrastructure What kind of infrastructure does your organization use for testing? What is the role of virtualization, now or in the future? Are you using a private or a public cloud for testing?
  • 53. 51 © 2014 LogiGear Testing big and complicated stuff For example complex cloud architectures and/or large and complex multi-player games (sources: Windows Azure Reference Platform, Unvanquished (open source game)) © 2014 LogiGear Approaches Define testing and automation as business opportunities: − better testing can mean fewer risks and problems, and more quality perception − robust automation results in faster time-to-market, and more flexibility − the bigger and more complex the testing, the more attention it needs Follow a testability and DevOps approach in projects: − include "how do I test this" right from the start of development, both test design and automation (including white-box approaches) − plan "operation" of test runs, like allocation of resources Consider Testing in Production* approaches, like: − A/B testing − continuous testing with random regression testing or monkey testing − but please don't forget about test design (think first, then make decisions) *see also: Ken Johnston's chapter in Dorothy Graham and Mark Fewster's book, and his keynote at StarWest 2012
  • 54. 52 © 2014 LogiGear A/B testing with a reverse proxy A/B testing means part of the traffic is routed through a different server or component (to see if it works, and/or how users react) A similar strategy can be applied at any component level B could be a real-life user, or a keyword driven test machine Watch your test design; it is easy to drown in technical solutions only (see the routing sketch below) (diagram: users → reverse proxy → servers A (current) and B (new)) © 2014 LogiGear Organization Much of the success is gained or lost in how you organize the process − who owns which responsibility (in particular to say "no" to a release) − separate teams, integrated teams, or both − who does test design, who does automation − what to outsource, what to keep in-house Write a plan of approach for the test development and automation − scope, assumptions, risks, planning − methods, best practices − tools, technologies, architecture − stakeholders, including roles and processes for input and approvals − team − . . . Assemble the right resources − testers, lead testers − automation engineer(s) − managers, diplomats, ... Test design is a skill . . . Automation is a skill . . . Management is a skill . . . . . . and those skills are different
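  A minimal sketch of the routing decision inside such a reverse proxy, under the assumption that a fixed fraction of users should land on the new build: a deterministic hash keeps each user on the same side across requests.

    import hashlib

    def backend_for(user_id, b_fraction=0.10):
        # hash the user id into a 0-99 bucket; buckets below the threshold go to B
        bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
        return "B-new" if bucket < b_fraction * 100 else "A-current"

    # the same user always gets the same backend, so sessions stay consistent
    assert backend_for("jdoe") == backend_for("jdoe")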
  • 55. 53 © 2014 LogiGear Team roles, examples Testing, test development − test analysis, test creation − reporting, result analysis and follow up, assessments Automation − functional navigation, technical automation Test execution planning and management Environments and infrastructure Management and direction − process, contents, practices, handling impediments − handling "politics and diplomacy" * *see my STARCANADA presentation on "left out in the cold" © 2014 LogiGear Think Industrial . . . Large scale testing needs a "design" and a "production" focus − emphasis more on delivery and scale, "thinking big" − no-nonsense rather than creativity: "get stuff done" Examples of tasks/responsibilities − keeping the tests running − plan and manage resources − respond to hiccups − analyze and address automation issues − address fails or other testing outcomes GIT-R-DONE YOU TESTERS!
  • 56. 54 © 2014 LogiGear Stake Holders (diagram: internal stakeholders − test development, test automation, technology/infrastructure, system development, quality assurance, management, production, marketing/sales, end user departments, after sales/help desk − and external stakeholders − customers, vendors, government agencies, publicity) © 2014 LogiGear ABT in Agile (diagram: in the agile life cycle, user stories, documentation, domain understanding, acceptance criteria and PO questions from the product owner and team feed test module definition (optional), test module development, interface definition and action automation, producing main level, interaction and cross-over test modules for test execution as sprint products, with test re-use and automation re-use across the product backlog)
  • 57. 55 © 2014 LogiGear Using ABT in Sprints (1) Aim for "sprint + zero", meaning: try to get test development and automation "done" in the same sprint, not the next one − deferring to the next sprint means work piles up, part of the team is not working on the same sprint, and work is done twice (manually and automated), ... Agree on the approach: − questions like: does "done" include tests developed and automated? − do we see testing and automation as distinguishable tasks and skillsets? − is testability a requirement for the software? © 2014 LogiGear Using ABT in Sprints (2) Just like for development, use discussions with the team and product owners − deepen understanding, for the whole team − help identify items like negative, alternate and unexpected situations Start with the main test modules, that address the user stories and acceptance criteria − try to keep the main test modules at a similar level as those stories and criteria − test modules can double as a modeling device for the sprint Plan for additional test modules: − low-level testing of the interaction with the system under test (like UI's) − crossing over to other parts of the system under test
  • 58. 56 © 2014 LogiGear Using ABT in Sprints (3) To discuss the approach, consider daily "sit down" meetings with some or all members to coach and evaluate − an end-of-day counterpart to the early-morning "stand up" meetings − short and friendly, not about progress and impediments, but about practices and experiences with them (like "what actions did you use?") − a few meetings may suffice Create good starting conditions for a sprint: − automation technology available (like hooks, calling functions, etc) − how to deal with data and environments − understanding of subject matter, testing, automation, etc Tip: do interface mapping by hand, using developer-provided identifications − saves time by not having to use the viewer or other spy tools − recording of actions (not tests) will go better © 2014 LogiGear Testing as a profession Focus on tests, not development: − what can be the consequences of situations and events − relieve developers The challenge for the tester in the new era is to become a more credible professional tester − not a pseudo programmer − part of the team − with knowledge of and experience with testing techniques and principles Forcing a nontechnical tester to become a programmer may lose a good tester and gain a poor programmer Forcing a good developer to become a tester may lose a good developer and gain a poor tester − a good developer who is working on an airplane control system is also not necessarily a good airline pilot
  • 59. 57 © 2014 LogiGear Automation is a profession too Overlaps with regular system development, but is not the same Less concerned with complex code structures or algorithms More concerned with navigating through other software efficiently, dealing with control classes, obtaining information, timing, etc − if you compare developers to "creators", automation engineers might be likened to "adventurers"... The automation engineering role can also be a consultant: − for test developers: help express tests efficiently − for system developers: how to make a system more automation friendly − important player in innovation in automated testing © 2014 LogiGear Globalization....
  • 60. 58 © 2014 LogiGear Globalization Three Challenges: − other countries, other cultures − geographic distances − time differences Seven "Patterns": − "Solution" − "Push Back" − "Time Pressure" − "Surprises" − "Ownership" − "Mythical Man Month" − "Cooperation" © 2014 LogiGear Challenge: Other Country
  • 61. 59 © 2014 LogiGear Other Country Differences in culture − more on the next slide... Different languages, and accents Differences in education − style, orientation and contents − position of critical thinking, factual knowledge, practice, theory,... − US, British, French, Asian, ... Differences in circumstances − demographics − economy, infrastructure − politics Apprehension on-shore and off-shore about job security doesn't help in projects − management responsibility: understand your strategic intentions, and their consequences, and clarify them − be realistic in cost and benefit expectations © 2014 LogiGear More on Culture... Regional culture. There are numerous factors: − very difficult to make general statements • many anecdotes, stories and perceptions; some are very helpful, some have limited general value • not sure of the impact of regional culture (see also [Al-Ani]) − numerous factors, like history, religion, political system • e.g. valuing of: critical thinking, theory, bottom-line, relations, status, work ethic, bad news, saying 'no' • entertaining guests, eating habits, alcohol, meat, humor, etc • position of leaders, position of women managers • mistakes can be benign and funny, but also damaging, visibly or hidden; in particular perceived disrespect hurts Organizational culture − can be different from country to country, sector to sector, company to company, group to group − I feel this to be at least as strong as regional culture (see for example [Al-Ani]) − you can have at least some control over this Professional cultures − for example engineers, QA, managers, ... Some ideas to help: − get to know each other (it helps, see for example [Gotel]) − study the matter, and make adaptations
  • 64. 62 © 2014 LogiGear Different countries . . . © 2014 LogiGear Challenge: Distance
  • 65. 63 © 2014 LogiGear Distance Continuous logistical challenges Lots of cost, and disruption, from traveling Distance creates distrust and conflict − could be "normal" behavior, inherent to humans Complex coordination can create misunderstandings − on technical topics − on actions, priorities, and intentions © 2014 LogiGear Challenge: Time difference
  • 66. 64 © 2014 LogiGear Challenge: Time difference Additional complication for communication and coordination Places a major burden on both on-shore and off-shore staff − having to work evenings and/or early mornings − potential for exhaustion, lack of relaxation, mistakes, irritation Can easily lead to loss of time at critical moments Some solutions: − manage this actively − constantly seek to optimize task and responsibility allocation − build the on-shore and off-shore organizations to match − seek ways to save meeting time, like optimal information handling © 2014 LogiGear Effect of time difference Report from the team to the US management . . . Performance comparison TestArchitect 5 and 6, Test Module: "Segment Y, Default Settings"
                       Windows      Linux
      TestArchitect 5  ~ 4:16 m     ~ 4:28 m
      TestArchitect 6  ~ 11:00 m    ~ 8:00 m
  • 67. 65 © 2014 LogiGear Patterns Experiences seem to follow patterns − at least our own experiences do − variations are numerous, but seem to follow similar lines − the following are examples, not exhaustive It can help to recognize patterns quickly, and act upon them Resolutions have side-effects, and can introduce new issues − for example, strengthening local management means less direct contact with the project members doing the work Just about every pattern occurs in every direction − from your perspective regarding "them" − their perspective on you, or on each other − sometimes equal, sometimes mirrored © 2014 LogiGear Pattern: "The Solution" Typical sequence of events: − the team finds a problem in running a test − the team discusses it and comes up with a "solution" − the solution: (1) creates issues, and (2) hides the real problem Better way: − First: • clearly define the issue • discuss with project manager and customer − Only then: • resolve it • enjoy the gratitude bestowed upon you ☺
  • 68. 66 © 2014 LogiGear Pattern: "Push Back" The US side, or customer, gives bad direction The team doesn't like it, but feels obliged to follow orders The result is disappointing The team is blamed − and will speak up even less next time Better way: − discuss with the principal/customer at multiple levels • strategic about direction, operational day-to-day − empower and encourage the team to speak up − write plans of approach, and reports © 2014 LogiGear Pattern: "Time Pressure" Deadline must be met − no matter what − use over-time − "failure is not an option" Deadlines are sometimes real, sometimes not − they become a routine on the US side − easy to pressure over email − very difficult for a non-empowered team to push back − risk: inflation of urgency Better way: − good planning − proper weighing of deadlines and priorities − frequent reporting − local management
  • 69. 67 © 2014 LogiGear Pattern: "Surprises" Good news travels better than bad news... − should be the other way around − the "cover up": "let's fix, no need to tell...." − over time: needing bigger cover ups to conceal smaller ones − not unique to off-shoring, but more difficult to detect and deal with Once a surprise happens: − you will feel frustrated, and betrayed − fix the problems, point out the consequences of hiding, avoid screaming and flaming Better ways: − agree: NO SURPRISES!! − emphasize again and again − train against this − continuously manage, point out − the magic word: transparency © 2014 LogiGear Pattern: "Ownership" Shared responsibility is no responsibility Effort-based versus result-based On-shore players feel the off-shore team has a result responsibility Off-shore team members feel an effort-based responsibility ("work hard") Better way: − clear responsibilities and expectations − on-shore ownership for quality control of the system under test • and therefore the tests − off-shore ownership of producing good tests and good automation − empower according to ownership
  • 70. 68 © 2014 LogiGear Pattern: "Mythical Man Month" Fred Brooks' classic book, "The Mythical Man-Month": − "Assigning more programmers to a project running behind schedule will make it even later" − "The bearing of a child takes nine months, no matter how many women are assigned" − in particular in automation it is easy to end up with a large pile of badly designed tests, which is then difficult to scale and maintain (or even to get rid of) In test automation, there must be clear ownership of: − test design (not just cranking out test cases) − automation, which is a different skill and interest Assign at least the following roles: − project lead: owns quality and schedule − test lead: owns test design, coaches and coordinates the other testers − automation: make the actions work (assuming ABT), not the test cases Define distinct career paths in: testing, automation, management © 2014 LogiGear Pattern: "Cooperation" Communication is tedious, takes a long time Questions, questions, questions, ... − reverse: questions don't get answered For at least one side the calls fall in private time, which is extra annoying Misunderstandings, confusion, actions not followed up − double check apparent "crazy things" with the team before jumping to conclusions, and actions (assume the other side is not "nuts" or "dumb"...) Please understand: distance fosters conflicts − we're born that way, can't ignore it Better ways: − remember respect − prioritize training, coaching, preparation and planning. Saves a lot of questions... − write stuff down, use briefs, minutes − define workflows and information flows • buckets, reporting, select and use good tools − specialize meetings • table things for in-depth meetings • ask to meet internally first − be quick, no more than 30 mins
  • 71. 69 © 2014 LogiGear Training as a tool Many areas, big pay-offs: − system under test − subject matter under test, domain knowledge − methods, best practices − technologies, tools, ... − processes − soft skills, like creativity, critical thinking, management, ... − language − cross-cultural Have exams − think about the consequences of passing and failing − people pay more attention when they know they will get tested − you will know whether you were understood Have coaching and train-the-trainers − more experienced people help newbies − also runs a risk: bad habits can creep in and procreate − "tribal knowledge", learning by osmosis, water cooler conversations: encourage it − consider "special interest groups" (SIG's) Rule of thumb for off-shore teams: hire for technical knowledge, train for business knowledge The on-shore staff needs training and coaching too, to stay on par © 2014 LogiGear Additional ideas and experiences Go there, be with the team − also experience yourself how "your side" comes across there − I go about twice per year Manage ownership − the distinction between efforts and results ("efforts are good, results are better") Provide clear direction, constant attention and coaching Supervise, supervise, supervise − but don't micromanage; the other side should have ownership Ask to create example products (like ABT test modules and actions), review these carefully, and use them as direction for subsequent work Leadership style: participative styles seem most common (as opposed to consensus or authoritative, see also [Al-Ani]) Organize informal/fun events, provide a good environment − solidify the group, improve retention − include visiting US staff, this tends to do a lot of good ("priceless") Manage expectations − stuff takes time and energy − differences can be addressed, but not 100%; not everybody likes cake...
  • 72. 70 © 2014 LogiGear Outsourcing and Agile If done well, can provide relief from a lot of the patterns. Several models are possible, for example: Model 1: Full team outsourcing − development, testing and automation − automated tests can be positioned as part of the delivery Model 2: Integrated team − needs an online tool like Jira or Rally − you must have shared meetings − advantage: more project time Model 3: "2nd unit" − the off-shore team works under control of one or more sprint team members Model 4: Test production and management − the off-shore team takes the deliveries of the primary team, creates/automates more tests, and executes and maintains them © 2014 LogiGear Summary Not all "big project" challenges are the same Think before you do. Best results come from planning well, and combining effective concepts, tricks and tools Consider tests and automation as products Team work is key for short term and long term success There are many options for infrastructure, but keep an eye on economy and planning Off-shoring can help scale up, but needs attention to do it right, in particular communication
  • 73. 71 © 2014 LogiGear Homework . . . 1. Testing Computer Software, Cem Kaner, Hung Nguyen, Jack Falk, Wiley 2. Lessons Learned in Software Testing, Cem Kaner, James Bach, Bret Pettichord, Wiley 3. Experiences of Test Automation, Dorothy Graham, Mark Fewster, Addison Wesley, 2012 4. Automating Software Testing, Dorothy Graham, Mark Fewster, Addison Wesley 5. "Build a Successful Global Training Program", Michael Hackett, www.logigear.com 6. Action Based Testing (overview article), Hans Buwalda, Better Software, March 2011 7. Action Figures (on model-based testing), Hans Buwalda, Better Software, March 2003 8. Integrated Test Design & Automation, Hans Buwalda, Dennis Janssen and Iris Pinkster, Addison Wesley 9. Soap Opera Testing (article), Hans Buwalda, Better Software Magazine, February 2005 10. Testing with Action Words, Abandoning Record and Playback, Hans Buwalda, Eurostar 1996 11. QA All Stars, Building Your Dream Team, Hans Buwalda, Better Software, September 2006 12. The 5% Solutions, Hans Buwalda, Software Test & Performance Magazine, September 2006 13. Happy About Global Software Test Automation, Hung Nguyen, Michael Hackett, et al., Happy About 14. Testing Applications on the Web, Hung Nguyen, Robert Johnson, Michael Hackett, Wiley 15. Practical Combinatorial Testing, Richard Kuhn, Raghu Kacker, Yu Lei, NIST, October, 2010 16. JMeter in the Cloud, Jörg Kalsbach, aws.amazon.com/amis/2924 17. Using Monkey Test Tools, Noel Nyman, STQE issue January/February 2000 18. High Volume Test Automation, Cem Kaner, Walter P. Bond, Pat McGee, STARWEST 2004 19. Descriptive Analysis of Fear and Distrust in Early Phases of GSD Projects, Arttu Piri, Tuomas Niinimäki, Casper Lassenius, 2009 Fourth IEEE International Conference on Global Software Engineering [Piri] 20. Quality Indicators on Global Software Development Projects: Does 'Getting to Know You' Really Matter?, Olly Gotel, Vidya Kulkarni, Moniphal Say, Christelle Scharff, Thanwadee Sunetnanta, 2009 Fourth IEEE International Conference on Global Software Engineering [Gotel] 21. Become a Big Data Quality Hero, Jason Rauen, StarCanada 2014 [Rauen] 22. Resources on Exploratory Testing, Metrics, and Other Stuff, Michael Bolton's site, www.developsense.com/resources 23. When Testers Feel Left Out in the Cold, Hans Buwalda, STARCANADA 2014