Getting Testing Right:
A Practical Guide to Testing in
Direct Marketing

IoF DM and Fundraising - March 2013
About me

Richard Hughes
Marketing Data Team Manager


Previously:
Data Planner at Bluefrog
Data Analyst at Good Agency / Cascaid
Database Administrator at Crusaid (AIDS/HIV charity)
Objectives

 • Talk through some of the finer points of testing
     • the strategy side and the data techie side
     • Both are really important!
 • A brain dump of everything I’ve learnt about testing
 • Advice on best practice
 • Show that it can be exciting
 • Inspire you to think about testing you can do
Why talk about testing?
  –   Not enough people talk about how to do it
  –   There is little info on the web for DM
  –   I’ve seen it go wrong
  –   Used well, testing can be very powerful, but it
      requires some thought and planning!
Definitions


Split testing, or A/B testing, is when an audience is split into two or more groups and given different treatments in order to determine the most effective treatment.
Why Test Anyway?
• How much should we ask our supporters to donate?
• How many communications should we send throughout the year?
• Which creative should we choose?
• Which email subject gets the best open rate?
Why Test Anyway?
• What’s the best time to send a pack?
• What stationery types perform the best? Should we spend more money on more expensive packs?
• Who is the best signatory?
• How do cash appeals affect regular giving attrition rates?
Marketing Triangle

Elements that affect results:
• Audience
• Timing
• Message
The Data Pyramid
Gut Reaction Versus Evidence
• Sometimes, as experienced marketeers, we intuitively
  know the answers to some of these questions
• But we want to move to a situation where we make
  evidence-based decisions
Concepts
We are trying to find out if one approach is more likely to get better results than another.
• Testing is affected by probability
  – This means there is no guarantee that an approach will always “win”
  – We can say that it is more likely to win, and we can say how confident we are
Sampling Distribution
• When we test we …
  – measure a sample of our audience and use that to
    generalise about the rest of the database
The results can be put into a bell curve

If we sample data from our database many times and treat it in a certain way, we get a normal distribution.
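This sampling behaviour can be sketched in a few lines of Python. This is a hypothetical simulation, assuming an 8% underlying response rate; the sample sizes are arbitrary.

```python
import random
import statistics

random.seed(42)

TRUE_RATE = 0.08     # assumed underlying response rate (hypothetical)
SAMPLE_SIZE = 2000   # supporters per sample
N_SAMPLES = 500      # number of repeated samples

# Draw many samples from the "database" and record each sample's response rate
rates = []
for _ in range(N_SAMPLES):
    responders = sum(1 for _ in range(SAMPLE_SIZE) if random.random() < TRUE_RATE)
    rates.append(responders / SAMPLE_SIZE)

# The sample rates cluster around the true rate in a bell curve
print(f"mean of sample rates:    {statistics.mean(rates):.4f}")
print(f"std dev of sample rates: {statistics.stdev(rates):.4f}")
```

A histogram of `rates` would show the bell curve the slide describes.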
Two Curves

The mathematical properties of the curve mean we can use statistics to determine how likely it is that tests A and B are different.
Stats Summary
• The response rate for each test is normally
  distributed
• We want to measure the difference in
  performance between a given treatment and
  the control.
• The difference itself is a normally distributed
  random variable.
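Because that difference is approximately normal, it can be checked with a standard two-proportion z-test. This is a sketch, not the deck's own method; the response figures below are illustrative.

```python
import math

def two_proportion_z(resp_a, n_a, resp_b, n_b):
    """Z statistic for the difference between two response rates,
    using the pooled standard error."""
    p_a, p_b = resp_a / n_a, resp_b / n_b
    p_pool = (resp_a + resp_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# e.g. control 800/10,000 (8%) vs test 1,100/10,000 (11%)
z = two_proportion_z(800, 10_000, 1100, 10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at 95% confidence
```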
Structured Approach: Testing Life Cycle
Testing Strategy → Design Test (Tactics) → Execute → Evaluate → Insight → back to Testing Strategy
Annual Testing Strategy
• Good testing starts with careful thinking
• Document what you want to find out
  • Check and reflect on your questions
  • Ensure that tests will deliver actionable results
Annual Testing Strategy

• Build scenarios to understand where you
  are going to get the best value
• Prioritise – focus on the best outcomes

For UNICEF, this means focusing on the outcome that brings the best result for children. Select tests that will have the most impact, e.g. in mail packs, focus on outers rather than copy buried inside.
Cautionary Tales 1
• Testing can be expensive
  – Paying for different creative
  – Paying for different stationery to be printed
  – Ring-fencing certain supporters from different
    comms also costs money
• This expense is an important consideration when
  thinking about the value of the test
Designing Tests: Sample Sizes
• Think about volume for your test
  – You need sufficient quantity in your test


• The sample needs to have enough volume to
  be able to generalise about the population
Calculating Sample Sizes
Deep Dark Statistics:
• www.lucidview.com/sample_size.htm

• The most useful online resource, though the
  explanation is quite technical
Calculating Sample Sizes
• Two things determine sample size
  – Existing Response Rate
     • Low number of responders means we need a bigger
       sample
  – Uplift of test
     • Small uplift means we need a bigger sample to see if
       there is a difference
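Those two drivers can be turned into a rough per-cell sample size with the standard two-proportion formula. This is a sketch at 95% confidence and 80% power; the base rates and uplifts are hypothetical.

```python
import math

def sample_size_per_cell(base_rate, uplift, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per test cell to detect a relative uplift
    over a base response rate (95% confidence, 80% power)."""
    p1 = base_rate
    p2 = base_rate * (1 + uplift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

print(sample_size_per_cell(0.08, 0.20))  # 8% base rate, 20% uplift
print(sample_size_per_cell(0.04, 0.20))  # lower base rate -> bigger sample
print(sample_size_per_cell(0.08, 0.10))  # smaller uplift -> much bigger sample
```

The outputs illustrate both bullets: halving the base rate roughly doubles the required sample, and halving the detectable uplift roughly quadruples it.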
Sample Sizes – Worked Example




Taken from http://www.testsignificance.com/
Testing more than one thing at once
• Need to be careful, but can split by more
  than one test
                   Treatment A   Treatment B   Totals X & Y
Treatment X        Segment 1     Segment 2
Treatment Y        Segment 3     Segment 4
Totals for A & B
Cautionary Tales 2
• Be careful about testing too
  many things in one campaign
  – They can be difficult to manage
  – They can cause confusion when evaluating
Selecting the Data
• Once you’ve decided on the volumes, the
  next task is to make sure you split the
  data fairly
  – This means selecting two or more samples,
    ordering by factors that are important and
    selecting alternate rows
  – Do not take top / bottom half of spreadsheet
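The "order by important factors, then take alternate rows" idea looks like this in Python. A minimal sketch; the supporter fields (`last_gift`, `recency`) are hypothetical.

```python
# Minimal sketch of a fair split: order supporters by the factors that
# drive response, then deal alternate rows into the two cells so both
# are balanced on those factors.
supporters = [
    {"id": i, "last_gift": g, "recency": r}
    for i, (g, r) in enumerate([(50, 3), (10, 12), (25, 6), (100, 1),
                                (10, 24), (50, 2), (25, 9), (100, 4)])
]

ordered = sorted(supporters, key=lambda s: (-s["last_gift"], s["recency"]))
cell_a = ordered[0::2]  # alternate rows starting at row 0
cell_b = ordered[1::2]  # alternate rows starting at row 1

def avg_gift(cell):
    return sum(s["last_gift"] for s in cell) / len(cell)

# Unlike a top-half / bottom-half split, the two cells have matching profiles
print(avg_gift(cell_a), avg_gift(cell_b))
```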
Coding
• This might be a no-brainer, but ensuring
  the coding of A and B is set up correctly is
  important
Evaluating
• We need to determine if two different results
  are significantly different
• This means showing that we are 95%
  confident there is a significant difference
• There are quite a few websites that can help
Evaluating
• If we are testing prompt amounts in packs
  we also need to test to see if the average
  gift is significantly different
• We can use a T-test for this
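A hand-rolled version of that check might compute Welch's t statistic, which does not assume the two cells have equal variance. This is a sketch; the gift amounts below are made up for illustration.

```python
import math
import statistics

def welch_t(gifts_a, gifts_b):
    """Welch's t statistic for comparing average gifts in two cells."""
    m_a, m_b = statistics.mean(gifts_a), statistics.mean(gifts_b)
    v_a, v_b = statistics.variance(gifts_a), statistics.variance(gifts_b)
    se = math.sqrt(v_a / len(gifts_a) + v_b / len(gifts_b))
    return (m_b - m_a) / se

# hypothetical gift amounts from a £15-prompt cell and a £20-prompt cell
a = [15, 15, 20, 15, 25, 15, 15, 20]
b = [20, 25, 20, 20, 30, 20, 25, 20]
print(f"t = {welch_t(a, b):.2f}")  # compare against the t distribution's 95% cut-off
```

In practice a library routine (e.g. a two-sample t-test in a stats package) would also give the p-value directly.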
Cautionary Tales 3
• Tests sometimes don’t tell us anything
  interesting
• This is a lesson in setting expectations
   • Don’t say “we’re going to find out
     which is better”
   • Instead say “We’re going to find out if
     there is any difference”
Don’t forget to focus on Net Income




Mailed   Cost      Response   RR     Income    Net      Average   RoI

10000    £7,500    800        8%     £14,400   £6,900   £18       1.92

10000    £11,000   1100       11%    £19,800   £8,800   £18       1.80
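The table's arithmetic can be reproduced directly, showing why the higher-response pack wins on net income even though its RoI is lower. A sketch using the slide's own figures:

```python
def pack_economics(mailed, cost, responses, avg_gift):
    """Derive response rate, income, net income and RoI for a mail pack."""
    income = responses * avg_gift
    return {
        "rr": responses / mailed,
        "income": income,
        "net": income - cost,
        "roi": income / cost,
    }

cheap = pack_economics(10_000, 7_500, 800, 18)    # cheaper pack, 8% response
rich = pack_economics(10_000, 11_000, 1100, 18)   # dearer pack, 11% response

print(cheap)  # net £6,900, RoI 1.92
print(rich)   # net £8,800, RoI 1.80
```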
Building Insight
• Understanding what your tests mean for
  your programme
• Updating your strategy
Final Thoughts
• Testing is about making incremental
  improvements
• If you need more dramatic change then think
  about your overall fundraising strategy
• Make sure you do lots of planning
Summary
Testing Strategy
  • What are your marketing questions?
  • What are your priorities?

Test Design
  • Calculate testing volume
  • Split data fairly, code data appropriately

Execute
  • Mail, email, phone

Evaluate
  • Evaluate significance of results

Build Insight
  • Update documentation on your audience insights
Any questions?
Thank you
• Richard Hughes
• richardh@unicef.org.uk
