Using experiments in innovation policy
Albert Bravo-Biosca
Three principles for delivering good innovation policy
1. Experiment
2. Data
3. Judgment
Innovation policy and experimentation – Two interpretations

• Supporting experimentation in the economy and society
• Using experimentation to learn what works better to support innovation
Supporting experimentation in the economy and society

“The task of industrial policy is as much about eliciting information from the private sector about significant externalities and their remedies as it is about implementing appropriate policies”
Rodrik, 2004
Innovation policy focused on information discovery

The public sector as a partner/enabler in the innovation process, helping to reduce uncertainty in the private sector:
– A project-based conception of innovation policy, with sunset clauses framed explicitly in terms of learning: “the policy ends when the learning ends”
– Specific learning methods would include: experimental development funds, testbeds, challenge prizes, observatories…
Using experimentation to learn what works better to support innovation

• Large amounts of money are invested in schemes to support innovation, but there is very limited evidence on their effectiveness

Typical approach
• Introduce large new interventions without prior small-scale testing

Experimental approach
• Set up pilots to experiment with new instruments, evaluate them using rigorous methods, and scale up those that work (continuing to experiment to improve them)

 The experimental approach is a smarter, cheaper and more effective way to develop better innovation policy instruments
What is an experiment? A continuum of definitions…

Trying something new
• No rigorous learning or evaluation strategy
• No real “testing mindset”
• A “pilot”

Trying something new and putting in place the systems to learn
• Rigorous formal research design
• Test a hypothesis
• Codifying and sharing the resulting knowledge
• Sometimes, but not always, with some form of control group

RCTs
• Randomized controlled trials
• Control group created by the programme manager/researcher using a lottery
• Field vs. “lab” experiments
• Different from a natural experiment
What is a randomized controlled trial?

Design  Randomize  Implement  Compare

• Participants are randomly placed in a “treatment” group and a “control” group, and the impact of the treatment is estimated by comparing the behaviour and outcomes of the two groups
• Participants can be individuals, but also firms, public organizations, villages, regions, etc.
• Treatment group: receives the intervention  outcome
• Control group: doesn’t receive the intervention  outcome
• Different alternatives to run the lottery (e.g., individual vs. group-level randomization, etc.)
• 1/0 vs. A/B experiment: the control group gets nothing (0) vs. an alternative intervention (B)
• Collect data using surveys and/or administrative data sources and estimate the impact of the intervention
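
As an illustration of these mechanics, here is a minimal sketch in Python; the firm IDs, outcome numbers and effect size are made up purely for illustration and do not come from the talk:

```python
import random

# Hypothetical pool of eligible firms (IDs are placeholders)
firms = [f"firm_{i:03d}" for i in range(200)]

# Randomize: the lottery that creates the treatment and control groups
random.seed(42)  # a fixed seed keeps the allocation reproducible and auditable
random.shuffle(firms)
treatment, control = firms[:100], firms[100:]

# Implement: the treatment group receives the intervention, the control group does not.
# Afterwards, collect an outcome for every firm (e.g., from a survey or administrative data);
# here the outcomes are fabricated just to show the comparison step.
outcomes = {f: random.gauss(10, 2) + (1.5 if f in treatment else 0.0) for f in firms}

# Compare: the difference in mean outcomes is the estimated impact of the intervention
mean = lambda group: sum(outcomes[f] for f in group) / len(group)
print(f"estimated impact: {mean(treatment) - mean(control):.2f}")
```

In a 1/0 design the control list simply receives nothing; in an A/B design the same allocation code applies, but the control group receives an alternative intervention instead.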
Why are RCTs useful?

“Typical” evaluations:
• Good answer to “how well did the programme participants perform?” (before and after)
• Fail to provide a compelling answer to “what additional value did the programme generate?”  Requires good knowledge of how participants would have performed in the absence of the programme
• No credible control group (e.g., biased matching, selection biases)
• Programme recipient satisfaction surveys / “what if” questions / case studies

RCTs:
• The lottery in an RCT addresses selection biases
• Differences between the treatment and control groups are the result of the intervention
• Provides an accurate/unbiased estimate of the impact of the intervention
• “Gold standard” for evaluation

…even if they also have some weaknesses and do not always apply (so not the solution for everything, but still a very valuable tool, yet almost missing in the innovation policy area)
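
To see why the lottery matters, a small simulation can contrast a naive comparison of participants against non-participants, which is contaminated by self-selection, with a randomized comparison. All numbers and the selection rule below are invented for illustration:

```python
import random

random.seed(0)

# Invented population: each firm has an underlying "capability" that drives both
# its decision to seek support and its later performance.
firms = [{"capability": random.gauss(0, 1)} for _ in range(2000)]
TRUE_EFFECT = 0.5  # the effect built into the simulation

def outcome(firm, treated):
    return firm["capability"] + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 1)

# Naive "participants vs. non-participants" view: more capable firms self-select
# into the programme, so the comparison overstates its impact.
participants = [f for f in firms if f["capability"] > 0.5]
others = [f for f in firms if f["capability"] <= 0.5]
naive = (sum(outcome(f, True) for f in participants) / len(participants)
         - sum(outcome(f, False) for f in others) / len(others))

# RCT view: the lottery breaks the link between capability and treatment status.
random.shuffle(firms)
treat, ctrl = firms[:1000], firms[1000:]
rct = (sum(outcome(f, True) for f in treat) / len(treat)
       - sum(outcome(f, False) for f in ctrl) / len(ctrl))

print(f"true effect: {TRUE_EFFECT}, naive estimate: {naive:.2f}, RCT estimate: {rct:.2f}")
```

The naive estimate comes out far above the true effect because it also picks up the capability gap between the two groups, while the randomized estimate recovers the built-in effect up to sampling noise.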
RCTs can have two non-mutually exclusive aims

Testing the impact of an intervention
• Focus: additionality
• Hypothesis (e.g.): “The intervention has an effect”

Understanding the behaviour of individuals and what drives it
• A mechanism experiment
• Hypothesis (e.g.): “Managers’ actions are driven by inertia”
Some misconceptions about RCTs

The criticism: Unethical
A potential response:
• Assumes the intervention benefits rather than harms recipients
• Can provide an alternative treatment (compare two alternative interventions, or the same intervention with two different sets of conditions, rather than “all or nothing”)  Replaces decisions based mostly on “opinions” with “data”
• Often there are insufficient resources to support all potential recipients in any case
• A lottery can be fairer (and cheaper) than some panel-based scoring approaches
• Using resources in programmes that don’t work deprives other, more effective programmes of funding  Experimental pilots reduce this risk

The criticism: Expensive
A potential response:
• It is often the programme, not the evaluation, that is expensive
• Data collection is expensive, regardless of the evaluation method used
• RCTs require a smaller sample size  cheaper data collection
• Analysis can be quite cheap (a simple comparison between groups), even if the initial design requires more work

The criticism: Findings not applicable to other settings (internal vs. external validity)
A potential response: Context matters, as in any other type of evaluation, but some lessons can be generalized (still, multiple evaluations are always desirable)

The criticism: Cannot capture unexpected/unintended effects
A potential response: Innovation is uncertain, so it may be difficult ex ante to identify all potential effects. In contrast to before/after approaches, with an RCT you can collect data ex post on an unanticipated outcome of particular interest (even if not ideal)

The criticism: Don’t tell you why there is an effect
A potential response: It is possible to design the RCT to be able to find this out

The criticism: Don’t use qualitative methods alongside
A potential response: RCTs can be combined with qualitative methods – mixed methods can be the most informative approach
Key questions to design an RCT
• What intervention do you want to test?
• Does the control group benefit from an alternative
intervention?
• What is the outcome measure of interest?
• Is data available for the outcome?
• At what level should randomization be done?
• How large should the treatment and control groups be? (a rough power calculation is sketched below)
• Many other design choices available (e.g., randomizing the
“treatment” vs “the promotion of the intervention” in a
randomized encouragement design, etc)
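
The sample-size question in the list above can be approached with a standard power calculation. A minimal sketch in Python follows; the effect sizes, significance level and power are illustrative assumptions, not figures from the talk:

```python
from scipy.stats import norm

def sample_size_per_arm(effect_size, alpha=0.05, power=0.8):
    """Approximate participants needed per arm to detect a standardized
    effect size (Cohen's d) with a two-sided difference-in-means test."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return int(round(2 * ((z_alpha + z_beta) / effect_size) ** 2))

# Smaller effects require much larger trials
for d in (0.2, 0.5, 0.8):
    print(f"effect size {d}: ~{sample_size_per_arm(d)} participants per arm")
```

If randomization is done at the group level (e.g., by region or by delivery partner), the required sample is larger still, because outcomes within a cluster are correlated.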
The use of RCTs is increasing around the world

• Health
• Development – JPAL, IPA, World Bank, Oxfam…
• Social experimentation – French experimentation fund for youth, UK job centres
• Education – Harvard EdLabs, UK Education Endowment Foundation…

Over the last 10 years the JPAL network has worked with NGOs, governments and international organizations to conduct 445 randomized evaluations on poverty alleviation in 54 countries
But very limited use of RCTs on…

• Innovation
• Entrepreneurship
• Business growth

…in advanced economies, even if it is feasible
Creative credits: Nesta’s vouchers RCT

• Business-to-business innovation voucher experiment run by Nesta
• It awarded 150 vouchers of £4,000 each, with £1,000 co-funding from SMEs, to pay for collaborations with creative businesses
• An RCT with longitudinal evaluation

Programme elements: innovation voucher, business-led innovation project, building connections, formal evaluation
Creative credits: The results

• High short-term input additionality: SMEs receiving a Credit were 78% more likely to undertake their project
• Short-term output additionality: strong evidence of short-term output additionality in terms of increased innovations after six months
• No significant long-term additionality: no significant output, network or behavioural additionality after 12 months

Source: Bakhshi et al (2013)
Creative credits: Methods

• Mixed-methods evaluation  qualitative analysis is extremely useful to complement rigorous quantitative analysis, but it cannot replace it
• Traditional evaluation methods used in parallel gave a misleading, much more positive assessment of the impact of the scheme, contradicting the RCT evaluation findings
• See Bakhshi et al (2013) for the full results
• Similar results to those obtained in the Dutch innovation vouchers RCT
The UK adopting RCTs in many different areas

• Behavioural experiments – “nudge unit” (BIT), e.g., HMRC letters
• Job centres – unemployment training
• Education – 50 RCTs in 1,000+ schools ongoing
• Business support (Growth vouchers, BIS)
• Innovation (Innovation vouchers, TSB)
Growth vouchers

• £30 million budget for a new BIS programme of advice for businesses, which will be run as a trial
• 25,000 micro and small businesses, on an equal cost-sharing basis
• Vouchers will be available to firms that
  – have fewer than 50 staff
  – are first-time users of business advice
• Aims
  – increase the use of business advice
  – collect robust evidence
• Research questions:
  – Does the subsidy encourage businesses to seek and use business advice?
  – What is the impact of advice on our outcome measures (sales, employment, turnover, profit)?
  – What type of advice is it most effective to subsidise?
Innovation vouchers

• Technology Strategy Board programme to connect UK SMEs with knowledge providers (both university-based and other knowledge providers)
• £5,000 vouchers (rolling programme)
• Process:
  1. Very short application form (with evaluation questions embedded)
  2. Screening out of bad applicants
  3. Use a lottery to select recipients (good for evaluation and has low administration costs; see the sketch below)
  4. Track innovation behaviour, relationships with knowledge providers, and firm performance using a survey instrument and administrative data.
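
Step 3, using the lottery itself as the allocation mechanism, takes only a few lines. A minimal sketch in Python; the applicant IDs, voucher count and output file name are placeholders, not details of the TSB programme:

```python
import csv
import random

def allocate_vouchers(applicants, n_vouchers, seed=2014):
    """Randomly select voucher recipients from the screened applicant pool;
    everyone not selected forms the control group for the evaluation."""
    rng = random.Random(seed)  # a fixed seed keeps the draw auditable
    pool = list(applicants)
    rng.shuffle(pool)
    return pool[:n_vouchers], pool[n_vouchers:]

# Placeholder applicant IDs; in practice these would come from the application form
screened_applicants = [f"sme_{i:04d}" for i in range(500)]
recipients, control = allocate_vouchers(screened_applicants, n_vouchers=100)

# Record the allocation so both groups can be tracked in follow-up surveys
with open("allocation.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["sme_id", "group"])
    writer.writerows([(s, "treatment") for s in recipients]
                     + [(s, "control") for s in control])
```

Because the lottery doubles as both the selection process and the research design, the administrative cost of running it this way is minimal.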
Why haven’t governments and researchers used more RCTs to understand innovation and its drivers, in contrast to other policy areas?

Governments
• The lack of examples showcasing their feasibility and value has made governments and intermediary organizations very reluctant to consider using RCTs in this area

Researchers
• Very few academic researchers in related fields have developed the capabilities and support infrastructure necessary to set up and run experiments

Missing networks
• The networks between researchers and practitioners are missing, so even when they would be interested in collaborating on an RCT, they typically don’t know how to find each other

Insufficient knowledge
• There is insufficient knowledge about when it is appropriate and feasible to use RCTs in this domain, and a widely held misperception that RCTs need to be expensive

 A new global innovation, entrepreneurship and growth lab to tackle these 4 factors simultaneously
Nesta is seeding a new international initiative for experiments on innovation, entrepreneurship and growth

Use RCTs to build the evidence base on the most effective approaches to:
• Increase innovation
• Support entrepreneurship
• Accelerate business growth
The approach

Identify and pursue opportunities for experimentation, bringing together social science researchers interested in these questions and organizations (whether public or private) with the ability to undertake experiments – programme delivery partners working with researchers.

Experiments that:
• Generate actionable insights for decision makers, by piloting new programmes and creating better evidence on their impact
• Push the knowledge frontier forward, by giving researchers the opportunity to test with RCTs different hypotheses on the drivers of innovation
What will this new lab do?

• Develop and run experiments
• Work with public programmes and other delivery organizations to support their adoption, matching them with interested researchers
• Build a community of researchers that undertake RCTs
• Showcase RCTs’ value with real examples to advocate wider use
• Improve the knowledge base on how to do RCTs in this space, learning when they work and when they don’t, and hence when to use them and when not to
• Act as an aggregator and translator of the evidence generated through RCTs across countries

 A version of the JPAL model but focused on innovation, entrepreneurship and growth, with the aim of expanding the research and evaluation toolkit in these areas by facilitating the use of RCTs
Thank you
abravobiosca@nesta.org.uk

Get in touch if you would like to find out more

