DRAFT PAPER: STRATEGIC SELECTION OF INDICATORS
By Kasper Jon Larsen, Think Capacity
Approaches to monitoring, evaluation and learning (MEL) are undergoing a data revolution. With this greater complexity comes a need to select indicators strategically, to heighten data validity and ensure that indicators reflect strategic aims. This paper therefore sets out criteria for the strategic selection of indicators.
INTRODUCTION
In the past, organisations have been able to build accountability by counting simple outputs such as the number of beneficiaries reached. While it is still important to document project deliveries, organisations are increasingly expected to also prove their contributions by collecting data on complex change processes. These processes span programme deliveries and intangible change such as advocacy and capacity building in changing policy environments.
This has led organisations to explore various indicators to account for previously undocumented aspects of their interventions. These range from quantitative indicators based on numerical values to qualitative indicators that summarise narrative stories of change.
Yet organisations often express doubt about whether their chosen indicators reflect the aim of enquiry when reporting to donors or other stakeholders. For example, it sometimes proves impractical to aggregate numerical indicators due to varying policy environments across different country programmes. At other times, organisations realise that the chosen indicators do not support their need for narrative accounts in reporting.
To address these challenges, this paper sets out criteria for the strategic selection of indicators. Strategic selection refers to a purposeful selection of indicators that reflects the requirements to manage project implementation and to report externally, while defining linkages between indicators and an underlying theory-of-change. Although indicator selection should always draw on context-specific knowledge, it is hoped that the criteria listed here will help to guide such considerations.
STRATEGIC SELECTION
Communication is an important end-goal of most MEL approaches. Reporting to stakeholders is in large part a communications exercise to build accountability, while learning typically rests on a communicative flow of ideas amongst staff and partners. Yet organisations often bypass this end-goal and define indicators from a technical point of departure, through questions like: what indicators do we need to aggregate results upwards?
Instead, it is vital to take a step back and first envision the end-goal that the MEL-system is aiming to support. This should ideally be defined before data is collected, a stage researchers refer to as ex ante. It is extremely difficult to change indicators once data collection has begun, although there are some complex tools for dealing with this after the fact (referred to as ex post). For simplicity, the paper will only consider indicator selection prior to starting the data collection.
The strategic selection can be divided into three overall steps to clarify the purpose behind indicators:
 Step 1: Define the purpose of collecting data
Organisations typically have two overall purposes in collecting data: internal project management to monitor progress, and communicating change to stakeholders. It is thus important to start the selection by answering the following strategic questions:
o What is our purpose in collecting data and how does it relate to our theory-of-change? Do we want to build accountability, support project management or facilitate learning?
o Who is the target audience that will see the output from the data? What claims and key messages do we want to communicate?
o What does the output look like? Are we aiming to write reports, make presentations or produce other deliverables?
 Step 2: Define plausible linkages
Once the purpose is known, it is necessary to define what data is needed to operationalise it. This does not entail defining indicators just yet, but envisioning what information is necessary to establish a plausible linkage between activities and results, e.g. uptake of advocacy amongst municipalities or levels of satisfaction:
o What plausible linkages can we envision between our purpose in collecting data and the actual outcome or impact?
o How do we limit the number of plausible linkages, so that we focus resources on the most important claims and key messages?
o Are there any linkages that do not support the purpose directly and can be deselected?
 Step 3: Select indicators
It is useful to view indicators as proxies for the plausible linkages between an organisation's own activities and the actual outcomes observed in real-life contexts. Indicators are thus constituted by observable actions, events or subjective opinions. They differ from plausible linkages, which define the assumptions for how indicators are interpreted; these assumptions usually form part of a theory-of-change.
Selecting strategic indicators thus revolves around the following question, which is dealt with in more detail through the criteria listed in the next section:
o How do we select indicators that act as proxies for the plausible linkages and the purpose behind the data collection?
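To make the three steps concrete, the following is a minimal sketch in Python of how the chain from purpose to linkage to indicator might be recorded. The class and field names (Purpose, PlausibleLinkage, Indicator) are illustrative assumptions, not part of any standard or of the paper's method:

from dataclasses import dataclass

@dataclass
class Purpose:
    """Step 1: why data is collected, for whom, and in what form."""
    aim: str          # e.g. "accountability", "project management", "learning"
    audience: str     # e.g. "donor", "programme staff"
    output: str       # e.g. "annual report", "presentation"

@dataclass
class PlausibleLinkage:
    """Step 2: an assumed link between activities and results."""
    activity: str
    expected_result: str

@dataclass
class Indicator:
    """Step 3: an observable proxy for one plausible linkage."""
    name: str
    linkage: PlausibleLinkage
    purpose: Purpose

purpose = Purpose(aim="accountability", audience="donor", output="annual report")
linkage = PlausibleLinkage(
    activity="advocacy meetings with municipalities",
    expected_result="uptake of advocacy amongst municipalities",
)
indicator = Indicator(
    name="number of municipalities citing our policy briefs",
    linkage=linkage,
    purpose=purpose,
)
# Every indicator can be traced back to its linkage and purpose.
print(indicator.linkage.expected_result)

The point of the structure is simply that no indicator exists on its own: each one carries a reference back to the linkage and purpose that justify it.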
SELECTION CRITERIA
This section sets out criteria for qualifying discussions on indicator selection. It does so by directing attention to two fundamental aspects of selecting indicators: ensuring the right match with the aim of enquiry and with the desired level of summarisation. These are illustrated in Chart 1 and discussed in detail below.
There are, however, endless possibilities for categorising various types of indicators. The following should thus primarily be viewed as an approximate categorisation to direct the attention of MEL-teams towards the most likely fits between indicators and the defined purpose.
[Chart 1: Selecting indicators by aim of enquiry and level of summarisation. The chart crosses the aim of enquiry (description, interpretation, reflection) with the level of summarisation (within-case, comparative, cross-case) and places the indicator types discussed below in the corresponding cells: statistical aggregation, ranking and rating, calibration, core indicators, indirect indicators, direct indicators, clusters, scales, outcome challenges, most significant change, displays and narratives.]

Aim of enquiry
The first criterion when selecting indicators pertains to the aim of enquiry, i.e. what the organisation will use the data for.
Indicators have traditionally been divided between quantitative indicators – which document information using numerical values – and qualitative indicators, which document information using narratives. Here three overall approaches can be identified:
Description is the primary purpose behind quantitative indicators, as they aim to summarise information using numerical values. For example, statistical figures such as mortality rates convey no more information than the overall numbers themselves.
Interpretation represents a possible compromise between quantitative and qualitative aims of enquiry, as it combines the functions of description and reflection. This is done, for example, by assigning numerical scores to cases based on qualitative knowledge (see the sketch at the end of this subsection).
Reflection is the primary purpose behind qualitative indicators, as they enable reflection on a wider process within a single case. For example, it is useful to reflect on outcomes that cannot easily be quantified, such as advocacy or capacity building efforts.
In selecting indicators, the organisation should thus start by considering which of the above aims of enquiry align with its purpose and plausible linkages.
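As an illustration of the interpretive middle ground, the sketch below assigns numerical scores to cases based on qualitative judgements. The rubric, wording and case names are hypothetical, invented only to show the mechanics:

# Hypothetical rubric mapping qualitative judgements to scores (interpretation).
RUBRIC = {
    "no observable change": 0,
    "early signs of change": 1,
    "clear change, partly attributable": 2,
    "clear change, plausibly attributable": 3,
}

# Qualitative assessments per case, e.g. drawn from field reports.
assessments = {
    "Municipality A": "clear change, plausibly attributable",
    "Municipality B": "early signs of change",
    "Municipality C": "no observable change",
}

scores = {case: RUBRIC[judgement] for case, judgement in assessments.items()}
print(scores)  # {'Municipality A': 3, 'Municipality B': 1, 'Municipality C': 0}

The numerical scores can then be described or compared, while the rubric keeps the qualitative reasoning behind each score explicit.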
Level of summarisation
The second criterion relates to the level of summarisation across cases, i.e. whether we want to gain deep knowledge of an individual project or to summarise data across multiple projects. Summarisation is sometimes wrongly equated with the difference between quantitative and qualitative research, where the former is seen as spanning across cases due to numerical aggregation and the latter is viewed as within-case due to an emphasis on narratives. However, it should be stressed that qualitative research can also be conducted across cases, and vice versa.
Instead, it is useful to draw a distinction between three levels of summarisation that span the various aims of enquiry: within-case, comparative and cross-case.
Within-case approaches refer to studies that seek to gain an in-depth understanding of individual cases, e.g. learning outcomes from a capacity building programme. As these are context-specific, it is not desirable to summarise data across various programmes or cases.
Comparative approaches seek to compare data across projects or cases. The aim here is not to aggregate numbers into an overall figure, but to sort cases according to a number of categories or domains-of-change that make it possible to compare projects or cases.
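A comparative approach can be sketched as sorting cases into shared categories rather than adding numbers up. The domains-of-change and project names below are invented for illustration:

from collections import defaultdict

# Hypothetical cases, each tagged with a domain-of-change by the MEL team.
cases = [
    ("Project 1", "changed municipal policy"),
    ("Project 2", "strengthened partner capacity"),
    ("Project 3", "changed municipal policy"),
]

# Group cases by domain so similarities and differences can be compared.
by_domain = defaultdict(list)
for project, domain in cases:
    by_domain[domain].append(project)

for domain, projects in by_domain.items():
    print(f"{domain}: {', '.join(projects)}")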
Description
 Methodology: research questions focus on describing values, e.g. "how many" / "how much"; deductive/closed questioning, i.e. defines the enquiry prior to data collection
 Pros: numerical values often appear more scientific and can pre-empt questioning
 Cons: aggregations can easily lead to misinterpretations and lack vital detail

Interpretation
 Methodology: research questions focus on interpretation, e.g. "how does X compare to Y"; an iterative approach that shifts between deductive and inductive enquiry
 Pros: a compromise between quantitative and qualitative approaches that allows for interpretation
 Cons: interpretation may be biased towards subjective views and may misrepresent data

Reflection
 Methodology: research questions focus on reflective narratives, e.g. "how", "why" and "what"; inductive/open questioning, i.e. defines the enquiry before and/or after data collection
 Pros: reflection can uncover unknown aspects and facilitate learning/innovation
 Cons: narratives are often seen as less scientific and may be less persuasive

Within-case
 Methodology: in-depth reflection on individual cases without comparisons across other cases
 Pros: in-depth case knowledge that often generates new insights into the case
 Cons: knowledge cannot be generalised to other contexts and must be produced for each case

Comparative
 Methodology: comparison of quantitative or qualitative data across cases to reflect on similarities and differences
 Pros: allows for comparisons across cases without the low level of detail of aggregations
 Cons: comparing across cases can be prone to biases and ignore context-specific details

Cross-case
 Methodology: aggregates data across cases, using either statistics or criteria for summarising narrative stories of change
 Pros: aggregates data across many cases and produces accessible overviews for donors
 Cons: can lead to misrepresentation and ignore important details from cases

Table 1: Understanding approaches to aim of enquiry and level of summarisation
Cross-case approaches typically aim to aggregate data into overall findings that hold true for all projects under review. Sometimes cross-case approaches also seek to develop generalisable theories, e.g. that community participation results in less conflict.
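For cross-case summarisation, a minimal sketch of vertical aggregation follows, rolling country-programme figures up to a global total. All figures are hypothetical:

# Hypothetical beneficiary counts reported by country programmes.
country_counts = {"Kenya": 1200, "Nepal": 800, "Bolivia": 450}

# Vertical aggregation: roll programme-level figures up to a global total.
global_total = sum(country_counts.values())
print(f"Beneficiaries reached (global): {global_total}")  # 2450

As Table 1 notes, such a total is accessible for donors but strips away the context behind each programme's figure.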
The organisation should thus proceed by determining the preferred level of summarisation that corresponds with its purposes and defined plausible linkages.
Selection process
To aid the selection process, the pros and cons of each aim of enquiry and each level of summarisation are listed in Table 1 above. This enables organisations to focus their discussions on those indicators that are most likely to support the purpose and plausible linkages behind the data collection.
The most relevant indicators are found in the corresponding cells of Chart 1. While the indicators included on the chart represent the most common indicators used for MEL, they do not constitute an exhaustive list. It is often useful to take a mixed approach where quantitative figures, for instance, are complemented by qualitative narratives to ensure balanced accounts.
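One way to operationalise Chart 1 in a team discussion is a simple lookup from a chosen (aim of enquiry, level of summarisation) cell to candidate indicator types. The sketch below is only illustrative: it fills in the three cells that the text itself supports and leaves the rest to be read off the chart:

# Illustrative lookup from (aim of enquiry, level of summarisation) to
# candidate indicator types. Only cells directly supported by the text are
# filled in here; consult Chart 1 for the full set of placements.
CHART = {
    ("description", "cross-case"): ["statistical aggregation"],
    ("reflection", "within-case"): ["narratives"],
    ("reflection", "cross-case"): ["most significant change"],
}

def candidates(aim: str, level: str) -> list[str]:
    """Return candidate indicator types for a chosen cell of the chart."""
    return CHART.get((aim, level), [])

print(candidates("reflection", "cross-case"))  # ['most significant change']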
Detailed accounts of how to operationalise each indicator have been presented elsewhere. The paper will therefore not attempt to explain each type and refers instead to the existing literature.
One recent paper is worth highlighting, however, as it has provided the main inspiration for the present paper and contains a useful overview of different types of indicators. It is written by Nigel Simister of INTRAC and is entitled "Summarising Portfolio Change: Results Frameworks at Organisational level" (read the paper here).
The indicators placed on the chart are as follows:
 Statistical aggregation: vertical aggregation across programmes or horizontal aggregation at the global level (see "aggregated indicators" in Simister 2016).
 Ranking and rating: comparing values across cases and ranking them accordingly, either directly or using aggregation (see Simister 2016).
 Calibration: re-calculating values based on scoring principles, which can also be used for aggregation (compare "translated indicators" in Simister 2016; a minimal sketch follows after this list).
 Core indicators: a comparative overview of cases listed in a table structure (see Simister 2016).
 Indirect indicators: summarisation of different proxies without numerical aggregation (compare "framing indicators" in Simister 2016).
 Clusters: indicators taken from individual projects and used at the global level as examples of overall results, without aggregation (see Simister 2016).
 Direct indicators: quantitative and qualitative indicators related to observable changes or opinions.
 Scales: assignment of numerical scores based on qualitative principles to compare progress.
 Outcome challenges: outcomes defined in accordance with Outcome Mapping (see Earl, Carden and Smutylo 2001).
 Most significant change: summarising stories of change to generalise best practice across cases (see Davies and Dart 2005).
 Displays: summary of qualitative narratives using a table structure to organise them by domains-of-change (see for example Dahler-Larsen 2010).
 Narratives: qualitative narratives within-case, without comparative summarisation.
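To illustrate calibration, the sketch below re-expresses unlike indicators on a common scale (here, achievement as a share of target) so they can be aggregated. The scoring principle, indicator names and targets are hypothetical, chosen only to show the re-calculation step:

# Hypothetical calibration: re-express unlike indicators as a share of their
# target so they can be aggregated on a common scale.
raw = {
    "municipalities engaged": (18, 25),   # (achieved, target)
    "policy briefs cited": (6, 10),
    "partners trained": (40, 40),
}

calibrated = {name: achieved / target for name, (achieved, target) in raw.items()}
average_score = sum(calibrated.values()) / len(calibrated)
print(f"Average achievement: {average_score:.0%}")  # 77%

Any such scoring principle embeds assumptions (for example, that all three indicators deserve equal weight), which is why calibration belongs with the interpretive approaches.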
ONLINE PLATFORM FOR M&E DATA-MANAGEMENT
Think Capacity is developing an online platform for collecting, analysing and sharing data across organisational levels, partners and consortiums. The platform will make it easy to:
 Plan project implementation, define indicators and collect data online
 Aggregate data horizontally and vertically using organisational hierarchies
 Share and validate data across partners and consortiums
CSOs are currently invited to take part in free pilot testing.
Think Capacity was founded by Kasper Jon Larsen, who works as an M&E-consultant and has written a PhD project on time-series analyses using data-based M&E.
For more details, email kjlarsen@thinkcapacity.net or call (+45) 5037 5164.
www.thinkcapacity.net
WORKSHEET – STRATEGIC SELECTION OF INDICATORS
[Chart 1: Selecting indicators by aim of enquiry and level of summarisation, repeated here for use with the worksheet.]
Step 1: Define the purpose
What is our purpose in collecting data and how does it relate to our theory-of-change? Do we want to build accountability, support project management or facilitate learning?
Who is the target audience that will see the output from the data? What claims and key messages do we want to communicate?
What does the output look like? Are we aiming to write reports, make presentations or produce other deliverables?
Step 2: Define plausible linkages
What plausible linkages can we envision between our purpose in collecting data and the actual outcome or impact?
How do we limit the number of plausible linkages, so that we focus resources on the most important claims and key messages?
Are there any linkages that do not support the purpose directly and can be deselected?
Step 3: Select indicators
Use the chart to place the aim of enquiry and the level of summarisation that best reflect the chosen purpose and plausible linkages. The chart represents an approximate categorisation of indicators and seeks to facilitate discussion amongst team members. Keep in mind that indicator selection should always draw on context-specific knowledge. The analysis often benefits from using mixed indicators to balance quantitative and qualitative data.
This worksheet has been developed by Kasper Jon Larsen, independent M&E-consultant and founder of Think Capacity. He is available for presentations and consultancies.
email: kjlarsen@thinkcapacity.net / phone: (+45) 5037 5164
© Think Capacity 2016
www.thinkcapacity.net
Use the chart to place the purpose and plausible linkages behind your data collection – then discuss related indicators with your team.