T I M B O C K P R E S E N T S
If you have any questions, enter them into the Questions field.
Questions will be answered at the end. If we do not have time to get to your question, we will email you.
We will email you a link to the video, slides, and data.
Get a free one-month trial of Q from www.q-researchsoftware.com.
DIY Max-Diff
When to use max-diff
Experimental design
Counting analysis (bad)
Latent class analysis
Computing the preference share for each respondent
A G E N D A | D I Y M A X - D I F
Thinking about the type of person you would like to have as
the President of the USA, how appealing are these
characteristics to you?
Decent/ethical • Good in a crisis • Concerned about global warming • Entertaining
Plain-speaking • Experienced in government • Concerned about poverty • Male
Healthy • Focuses on minorities • Has served in the military • From a traditional American background
Successful in business • Understands economics • Multilingual • Christian
A max-diff question
(One of 10 questions, each asked with a different subset of the alternatives)
An experimental design indicates which alternatives are shown in which question.
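To make this concrete, below is a minimal sketch of generating such a design by random search, using counts that match the technology-brands example later in the deck (10 alternatives, 6 questions, 5 alternatives per question, so each alternative appears 3 times). The function and its names are illustrative; this is not how Q or Displayr construct designs.

```python
import random

def maxdiff_design(n_alternatives=10, n_questions=6, per_question=5, seed=1):
    """Randomly search for a design in which every alternative is shown
    the same number of times and never twice in the same question."""
    rng = random.Random(seed)
    appearances = n_questions * per_question // n_alternatives  # 3 here
    pool = [a for a in range(n_alternatives) for _ in range(appearances)]
    for _ in range(10_000):
        rng.shuffle(pool)
        design = [pool[i * per_question:(i + 1) * per_question]
                  for i in range(n_questions)]
        if all(len(set(question)) == per_question for question in design):
            return design
    raise RuntimeError("no valid design found; adjust the counts or the seed")

for number, shown in enumerate(maxdiff_design(), start=1):
    print(f"Question {number}: alternatives {sorted(shown)}")
```

A production design would also balance how often each pair of alternatives appears together; the dedicated tools referenced later in the deck handle that.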
Use max-diff when:
1. Ratings are likely to get too many ties
2. There are too many items to rank
3. Respondents are likely to provide noisy data, such as when they:
• Are tired
• Are lazy
• Change their minds
Typical applications
• Understanding preferences. E.g.,
• Preferences between new products (“should we launch concept A, B, C, etc.”)
• Preferences for existing brands
• Message testing
• Segmentation. Identify groups of people that differ in the importance they assign
to different attributes, traits, values, characteristics, etc.
• General-purpose measurement. Collecting data that can be used in lots of
different ways. For example:
• As one of multiple different types of data in segmentation
• As general profiling data, used to contextualize other variables in a study, in much the same way
as is done with demographics.
When to use max-diff
Experimental design
Counting analysis (bad)
Latent class analysis
Computing the preference share for each respondent
A G E N D A | D I Y M A X - D I F
Worked example: appeal of 10 technology brands
Apple Microsoft IBM Google Intel
Samsung Sony Dell Yahoo Nokia
Typical applications

Application: Understanding preferences, e.g.:
• Preferences between new products (“should we launch concept A, B, C, etc.”)
• Preferences for existing brands
• Message testing
Implications for experimental design:
• A separate design for each person is best
• The design can be “poor” for each person, so long as it is good in aggregate
• A large number of alternatives can be included in the study

Application: Segmentation. Identify groups of people that differ in the importance they assign to different attributes, traits, values, characteristics, etc.
Implications for experimental design:
• The design needs to be “good” for each person
• Each person should see the same design
• A smaller number of alternatives should be included in the study (e.g., fewer than 20)

Application: General-purpose measurement. Collecting data that can be used in lots of different ways, for example:
• As one of multiple different types of data in segmentation
• As general profiling data, used to contextualize other variables in a study, in much the same way as is done with demographics
Implications for experimental design:
• The design needs to be “good” for each person
• Each person should see the same design
• A smaller number of alternatives should be included in the study (e.g., fewer than 20)
Randomization of the order of alternatives
• Randomizing the order in which alternatives appear
• One order for each respondent
• For example, Apple at the top for the first respondent, in the middle for the next respondent, etc.
• Randomizing the order in which the questions appear
• For example, the first respondent sees questions 4, 3, 1, 2, 6, and 5; the next sees 6, 3, 2, 1, 5, 4; etc.
Even when randomization works as intended, it becomes a source of variance in any between-respondent comparisons, reducing the validity of any resulting segmentation.
Tip: have the data collection software perform the randomization, and remove it from the data prior to doing any analysis.
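A minimal sketch of both randomizations, seeded by respondent ID so each respondent gets one reproducible order, together with the “remove it before analysis” step from the tip above. All names are illustrative.

```python
import random

def display_orders(respondent_id, n_alternatives=10, n_questions=6):
    # One reproducible randomization per respondent.
    rng = random.Random(respondent_id)
    alternative_order = rng.sample(range(n_alternatives), n_alternatives)
    question_order = rng.sample(range(n_questions), n_questions)
    return alternative_order, question_order

def derandomize(displayed_answers, question_order):
    # Map answers recorded in display order back to the design order,
    # so the randomization never enters the analysis.
    canonical = [None] * len(displayed_answers)
    for shown_position, design_question in enumerate(question_order):
        canonical[design_question] = displayed_answers[shown_position]
    return canonical

# Recover the design order of one respondent's six answers.
_, order = display_orders(respondent_id=1)
print(derandomize(["1st", "2nd", "3rd", "4th", "5th", "6th"], order))
```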
More advanced experimental design issues
• Too many alternatives for each person to evaluate all of them
• Prohibitions: sets of alternatives that should not be shown together
• Anchored max-diff: combination of max-diff with other data
See http://docs.displayr.com/wiki/Creating_Max-Diff_Experimental_Designs#Advanced_designs_for_max-diff
When to use max-diff
Experimental design
Counting analysis (bad)
Latent class analysis
Computing the preference share for each respondent
A G E N D A | D I Y M A X - D I F
Counting analysis gives the wrong answers, as:
• It ignores the experimental design
• It does not deal with inconsistent preferences
• It ignores differences between people
Problems with counting analysis: Example 1
Counting analysis

            Best   Worst   Best - Worst
Apple        464     155            309
Google       348      40            308
Samsung      333     103            230
Sony         227      69            158
Microsoft    187      87            100
Dell          82     255           -173
Nokia         64     282           -218
Intel         48     176           -128
IBM           32     314           -282
Yahoo         27     331           -304
• If we look at the number of times Apple is
chosen as Best, it is the clear winner
• But, if we look at the Best - Worst scores,
Apple and Google are tied
• Which of these analyses is correct?
• Both seem plausible at first look
• They cannot both be valid
• Neither is valid… (a sketch of how these counts are computed follows)
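Part of counting analysis’s appeal is that it is trivial to compute, which is worth seeing before examining why it misleads. A minimal sketch with made-up responses (not the webinar data):

```python
from collections import Counter

# Illustrative data: each record is one question's Best and Worst choice.
responses = [
    {"best": "Apple", "worst": "Yahoo"},
    {"best": "Google", "worst": "IBM"},
    {"best": "Apple", "worst": "Nokia"},
]

best = Counter(r["best"] for r in responses)
worst = Counter(r["worst"] for r in responses)
brands = set(best) | set(worst)
for brand in sorted(brands, key=lambda b: best[b] - worst[b], reverse=True):
    print(f"{brand:8s} Best={best[brand]} Worst={worst[brand]} "
          f"Best-Worst={best[brand] - worst[brand]}")
```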
Problems with counting analysis: Example 2
• In this experiment, each alternative
appeared three times. So, if a brand
is chosen as best three times, we
know it is most preferred.
• Samsung is clearly the second most
preferred brand.
• However, the counting analysis on
the previous slide suggested that
Google was second most preferred.
• Counting analysis confuses breadth
of popularity with strength of
preference
Times chosen as Best

              Never   Once   Twice   3 times
Apple %          35     13      16        36
Microsoft %      58     28       8         6
IBM %            90      9       1         0
Google %         32     32      23        12
Intel %          88     10       2         1
Samsung %        46     18      17        20
Sony %           53     26      13         8
Dell %           80     15       3         2
Yahoo %          94      3       2         0
Nokia %          86      9       4         2
Problems with counting analysis: Example 3
ID 1

Question   Alternatives shown                                     Choices
1          Apple      Microsoft  IBM      Google   Nokia          Best
2          Apple      Sony       Dell     Yahoo    Nokia          Worst
3          Microsoft  Intel      Samsung  Sony     Nokia
4          IBM        Google     Intel    Sony     Dell
5          Microsoft  Google     Samsung  Dell     Yahoo
6          Apple      IBM        Intel    Samsung  Yahoo

Counting analysis (Best - Worst)
 3   Microsoft
 1   Dell, Google, Samsung
 0   Sony, Intel, Apple
-1   Yahoo
-2   Nokia
-3   IBM
• The counting analysis shows that Yahoo is the 8th most popular of the brands for this respondent
(i.e., the 3rd worst score, at -1, of any of the 10 brands).
• This is based on being chosen as Worst in Question 5.
• However, in Question 5 Yahoo was against the four most popular brands, so the actual data
provides no evidence that Yahoo is any less popular than Sony, Intel, and Apple.
Problems with counting analysis: Example 4
• The counting analysis suggests that Microsoft is 3 times as popular as Google.
• The data shows us that in the two questions where the person had a choice between Microsoft
and Google (Questions 1 and 5) they chose Microsoft. It contains no information to suggest that
Microsoft is three times as appealing as Google.
ID 1

Question   Alternatives shown                                     Choices
1          Apple      Microsoft  IBM      Google   Nokia          Best
2          Apple      Sony       Dell     Yahoo    Nokia          Worst
3          Microsoft  Intel      Samsung  Sony     Nokia
4          IBM        Google     Intel    Sony     Dell
5          Microsoft  Google     Samsung  Dell     Yahoo
6          Apple      IBM        Intel    Samsung  Yahoo

Counting analysis (Best - Worst)
 3   Microsoft
 1   Dell, Google, Samsung
 0   Sony, Intel, Apple
-1   Yahoo
-2   Nokia
-3   IBM
Problems with counting analysis: Example 5
• Question 1 tells us that Apple is preferred to Google
• Question 5 tells us that Google is preferred to Samsung
• Therefore, Apple is preferred to Samsung
• But, Question 6 tells us that Samsung is preferred to Apple
• Such inconsistencies are typical in survey data
ID 13

Question   Alternatives shown                                     Choices
1          Apple      Microsoft  IBM      Google   Nokia          Best
2          Apple      Sony       Dell     Yahoo    Nokia          Worst
3          Microsoft  Intel      Samsung  Sony     Nokia
4          IBM        Google     Intel    Sony     Dell
5          Microsoft  Google     Samsung  Dell     Yahoo
6          Apple      IBM        Intel    Samsung  Yahoo
More advanced methods, such as latent class analysis, mitigate/solve all these problems.
When to use max-diff
Experimental design
Counting analysis (bad)
Latent class analysis
Computing the preference share for each respondent
A G E N D A | D I Y M A X - D I F
Process for latent class analysis max-diff
• Create > Marketing > Max-Diff > Latent Class Analysis
• Set Questions left out for cross validation to about 20% of your max-diff data (e.g., to 1 if you have 6 questions per respondent)
• Repeat for 1 through 10 segments
• Select the number of segments based on:
• Bayesian Information Criterion (BIC): smaller is better
• Prediction accuracy (only if using cross-validation)
• Stability of preference shares (i.e., similar shares with one fewer and one more segment)
• Good discrimination within the segments (a small number of preferred alternatives)
• Ease of explaining to clients
• Re-run with:
• The chosen number of segments
• Questions left out for cross validation set to 0
(A sketch of the choice model these analyses estimate appears after this list.)
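For intuition about what the latent class procedure estimates: each segment has a utility for every alternative, and best/worst choices are assumed to follow logit probabilities. Below is a hedged sketch of one question’s log-likelihood under the common sequential best-then-worst specification; the utilities and choices are illustrative, and Q’s exact specification may differ.

```python
import math

def question_log_likelihood(utilities, shown, best, worst):
    # P(best): logit over all alternatives shown in the question.
    denom_best = sum(math.exp(utilities[a]) for a in shown)
    log_lik = utilities[best] - math.log(denom_best)
    # P(worst): logit over the remaining alternatives, with negated utilities.
    remaining = [a for a in shown if a != best]
    denom_worst = sum(math.exp(-utilities[a]) for a in remaining)
    log_lik += -utilities[worst] - math.log(denom_worst)
    return log_lik

u = {"Apple": 1.5, "Microsoft": 0.4, "IBM": -1.0, "Google": 1.1, "Nokia": -0.8}
print(question_log_likelihood(u, list(u), best="Apple", worst="IBM"))
```

A latent class model sums this over questions and respondents, weighting by segment membership probabilities, and chooses the utilities and memberships that maximize the total.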
How many segments?
• Predictive accuracy (CV) is maximized at 5 segments
• The BIC is lowest at 9 segments, but the differences are very small from 6 onwards (a sketch of the BIC calculation follows this list)
• The average preference shares for the different brands stabilize after 6 segments
• I would probably choose 5 segments in this example, as I really like predictive accuracy as a criterion
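For reference, the BIC penalizes the log-likelihood by the number of parameters, so “smaller is better” trades fit against complexity. A minimal sketch with made-up log-likelihoods and parameter counts (not the webinar results):

```python
import math

def bic(log_likelihood, n_parameters, n_observations):
    # Standard Bayesian Information Criterion: penalized lack of fit.
    return n_parameters * math.log(n_observations) - 2 * log_likelihood

# Illustrative fits: number of segments -> (log-likelihood, parameter count).
fits = {4: (-4380.0, 43), 5: (-4300.0, 54), 6: (-4250.0, 65), 9: (-4160.0, 98)}
n_observations = 300 * 6  # e.g., 300 respondents x 6 questions each
for segments, (log_lik, n_params) in sorted(fits.items()):
    print(f"{segments} segments: BIC = {bic(log_lik, n_params, n_observations):,.1f}")
```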
Advanced Latent Class Analysis in Q
• Two applications
• Latent class analysis of anchored max-diff
• Latent class using multiple different types of data (e.g., max-diff + choice model + ratings)
• These are done in Q by:
• Setting up the max-diff as an Experiment question:
http://wiki.q-researchsoftware.com/wiki/Marketing_-_Max-Diff_-_Max-Diff_Setup_from_an_Experimental_Design
• Using the standard latent class analysis option (Create > Segments > Latent Class Analysis)
When to use max-diff
Experimental design
Counting analysis (bad)
Latent class analysis
Computing the preference share for each respondent
A G E N D A | D I Y M A X - D I F
Typical applications

Application: Understanding preferences, e.g.:
• Preferences between new products (“should we launch concept A, B, C, etc.”)
• Preferences for existing brands
• Message testing
Implications for experimental design:
• A separate design for each person is best
• The design can be “poor” for each person, so long as it is good in aggregate
• A large number of alternatives can be included in the study
Implications for analysis:
1. Compute the preference share for each person (e.g., using latent class analysis, hierarchical Bayes, varying coefficients)
2. Compute the average preference share

Application: Segmentation. Identify groups of people that differ in the importance they assign to different attributes, traits, values, characteristics, etc.
Implications for experimental design:
• The design needs to be “good” for each person
• Each person should see the same design
• A smaller number of alternatives should be included in the study (e.g., fewer than 20)
Implications for analysis:
Use latent class analysis to identify preference shares within segments

Application: General-purpose measurement. Collecting data that can be used in lots of different ways, for example:
• As one of multiple different types of data in segmentation
• As general profiling data, used to contextualize other variables in a study, in much the same way as is done with demographics
Implications for experimental design:
• The design needs to be “good” for each person
• Each person should see the same design
• A smaller number of alternatives should be included in the study (e.g., fewer than 20)
Implications for analysis:
1. Compute the preference share for each person (e.g., using latent class analysis, hierarchical Bayes, varying coefficients)
2. Use these preference shares in other analyses (e.g., comparing averages by other groups)
Computing the preference share for each respondent
• Method 1: Using a standard max-diff latent class model
• Select the output in Q
• Create > Marketing > Max-Diff > Save variable(s) > Compute Preference Shares
• Method 2: Same as Method 1, but with many more classes (e.g., 20). If this is too slow, use the Advanced Latent Class Analysis method
instead, as it is faster and has no time limit set on it.
• Method 3: Normal mixing distribution (aka Hierarchical Bayes)
• On the Data tab, set the Case IDs (top-left of the screen)
• Set up as Advanced Latent Class Analysis (see earlier slide)
• Set Number of segments to 1
• Latent Class Analysis > Advanced and set Distribution to Multivariate Normal – Full Covariance
• Press OK twice
• Right-click on the tree and select Save Individual-Level Parameter Means and Standard Deviations
• Create > Marketing > Max-Diff > Compute Preference Shares from Individual-Level Parameter Means (All Alternatives)
• Method 4: Mixtures of normal mixing distributions: Same as Method 3, except:
• Set Number of segments to Automatic or some number
• In Advanced, untick Pooled (to the right of Multivariate Normal)
• Method 5: Varying coefficients
• Create > Marketing > Max-Diff > Varying Coefficients
• Set up as for latent class analysis
• Select additional predictor variables as Varying Coefficients
• Method 6: Ensemble
• Use all the methods above
• Ignore the data for any method that performs poorly (e.g., based on BIC, or, if you can compute it, cross-validation)
• For each respondent, compute their preference share as the average of the preference shares from the different methods (see the sketch below)
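Whichever method produces the respondent-level utilities, the shares themselves come from a logit (softmax) transformation of those utilities, and Method 6 then averages them. A minimal sketch with illustrative utilities (the transformation is standard for max-diff preference shares, though Q’s implementation details may differ):

```python
import math

def preference_shares(utilities):
    # Logit (softmax) transformation: shares are positive and sum to 1.
    exp_u = {k: math.exp(v) for k, v in utilities.items()}
    total = sum(exp_u.values())
    return {k: v / total for k, v in exp_u.items()}

# Illustrative utilities for one respondent from two retained methods.
shares_a = preference_shares({"Apple": 1.9, "Google": 1.2, "Samsung": 0.8, "Yahoo": -1.5})
shares_b = preference_shares({"Apple": 1.7, "Google": 1.4, "Samsung": 0.6, "Yahoo": -1.2})

# Method 6 (ensemble): average the shares across the retained methods.
ensemble = {k: (shares_a[k] + shares_b[k]) / 2 for k in shares_a}
print({k: round(v, 3) for k, v in ensemble.items()})
```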
T I M B O C K P R E S E N T S
Q&A Session
Type questions into the Questions field in GoToWebinar.
If we do not get to your question during the webinar, we will write back
via email.
We will email you a link to the slides and data.
Get a free one-month trial of Q from www.q-researchsoftware.com.