Ludovik Coba
lucoba@unibz.it
Markus Zanker
mzanker@unibz.it
Laurens Rook
l.rook@tudelft.nl
Decision Making Based on Bimodal Rating Summary Statistics
An Eye-Tracking Study of Hotels
Introduction
Intro – Related work – Earlier Work
Examples: Electronic Word-of-Mouth (Rating Summary Statistics)
Related work
▪ Online Travel Agencies reshaped the ecosystem [1]
▪ …and eWOM strongly biases and influences online decision making [2,4,5]
▪ …and can be used to predict business performance [3]
▪ From a CS perspective, eWOM is a main ingredient of algorithmic decision support mechanisms such as recommender systems (RS)
▪ Experiments in the psychology literature [6] revealed that users differ in their decision-making styles
[1] Xiang, Z. et al. Information technology and consumer behavior in travel and tourism: Insights from travel planning using the internet. JRCS 2015
[2] Ulrike Gretzel and Kyung Hyan Yoo. Use and impact of online travel reviews. Information and Communication Technologies in Tourism 2008
[3] Xie, K. et al. The business value of online consumer reviews and management response to hotel performance. IJHM 2014
[4] Xiang, Z. et al. A comparative analysis of major online review platforms: Implications for social media analytics in hospitality and tourism. TM 2017
[5] Xie, H. et al. Consumers' responses to ambivalent online hotel reviews: The role of perceived source credibility and pre-decisional disposition. IJHM 2011
[6] Schwartz, B. et al. Maximizing versus satisficing: Happiness is a matter of choice. JPSP 2002
Research Goals
The goal of this research is to determine how rating summary statistics guide users' choices in online scenarios
… in order to develop more efficient algorithms.
Decomposing rating summaries
We consider them to be multi-attribute objects:
▪ Number of ratings
▪ Mean of the ratings
▪ Bimodality
▪ Variance
▪ Skewness
▪ Origin of Ratings
Decision Making on Multi-attribute Items
▪ Non-Compensatory Strategies [1]:
▫ Compare items based on one attribute
▫ Perform intra-dimensional comparisons
▫ Perform fewer comparisons
▪ Compensatory Strategies [1]:
▫ All attributes must meet a minimum requirement
▫ Multiple inter-dimensional comparisons
▫ Spend more time on items
Eye movements are an indicator of how the available choices are screened [2].
[1] John W Payne. Task complexity and contingent processing in decision making: An information search and protocol analysis. Organizational Behavior and Human Performance, 1976
[2] Jacob L. Orquin and Simone Mueller Loose. Attention and choice: A review on eye movements in decision making. Acta Psychologica 2013
Decision making strategies
▪ Interpersonal differences
▪ Satisficers / Maximizers [1]
▪ Three subdimensions[2]:
▫ Decision Difficulty
▫ Alternative Search
▫ High Standards
[1] Herbert A Simon. A behavioral model of rational choice. The Quarterly Journal of Economics, 1955
[2] Schwartz, B. et al. Maximizing versus satisficing: Happiness is a matter of choice. JPSP 2002
Herbert Simon
Earlier Work
▪ We ran a set of three experiments to understand trade-off mechanisms between decision strategies
▪ Decomposing rating summaries: Different types of explanations,
Number of ratings, Mean of the ratings, Variance, Skewness
▪ Respondents:
▫ Relied highly on the mean rating
▫ Non-linear influence of the overall Number of Ratings
▫ Variance and skewness remained largely unnoticed
▫ Maximizers vs. Satisficers display different preferences
[1] Coba L., Zanker M., Rook L., Symeonidis P.: Exploring Users' Perception of Rating Summary Statistics. UMAP ‘18
[2] Coba L., Zanker M., Rook L., Symeonidis P.: Exploring Users' Perception of Collaborative Explanation Styles. CBI '18
[3] Coba L., Zanker M., Rook L., Symeonidis P.: Decision Making Strategies Differ in the Presence of Collaborative Explanations: Two Conjoint Studies. IUI '19
Approach
Experimental design – Metrics – Setup
Conjoint experiment to quantify users' preferences
Ranking-based Conjoint Methodology:
▪ Used in product design/development
▪ Items can be seen as a bundle of attributes
▪ Goal: to identify the utility contribution of each attribute of the rating summary statistics separately
Data
▪ Data-driven levels [1]
▪ J-shaped rating distributions [2]
▪ Bimodality coefficient [3]:
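The coefficient itself appeared as an image on the original slide; as a reference, the standard bimodality coefficient discussed by Pfister et al. [3] (assuming this is the variant that was used) can be written as:

```latex
% Sarle's bimodality coefficient, as discussed in Pfister et al. [3].
% m_3 = sample skewness, m_4 = sample excess kurtosis, n = number of ratings.
BC = \frac{m_3^{2} + 1}{m_4 + \dfrac{3(n-1)^{2}}{(n-2)(n-3)}}
```

Values above roughly 0.555 (the coefficient of a uniform distribution) are commonly read as an indication of bimodality.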
[1] Markus Zanker and Martin Schoberegger. An empirical study on the persuasiveness of fact-based explanations for recommender systems. RecSys 2014
[2] Hu N, Zhang J, Pavlou PA. Overcoming the J-shaped distribution of product reviews. Commun ACM 2009
[3] Pfister R, Schwarz KA, Janczyk M, Dale R, Freeman JB. Good things peak in pairs: A note on the bimodality coefficient. Front Psychol 2013
Design
▪ Full-factorial design with:
▫ 2 levels of Number of Ratings
▫ 3 levels of Mean
▫ 3 levels of Bimodality
▪ 3 screens with 6 items to rank
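A minimal sketch of how such a 2 × 3 × 3 full-factorial design can be enumerated and split into ranking screens; the concrete attribute values below are placeholders, not the data-driven levels actually used in the study:

```python
from itertools import product
import random

# Hypothetical attribute levels; the study derived its levels from real data [1],
# so these concrete values are placeholders only.
levels = {
    "n_ratings": ["low", "high"],             # 2 levels
    "mean": [3.5, 4.0, 4.5],                  # 3 levels
    "bimodality": ["low", "medium", "high"],  # 3 levels
}

# Full-factorial design: every combination of levels, i.e. 2 * 3 * 3 = 18 profiles.
profiles = [dict(zip(levels, combo)) for combo in product(*levels.values())]
assert len(profiles) == 18

# Distribute the 18 profiles over 3 screens of 6 items each, to be ranked by the respondent.
random.seed(42)
random.shuffle(profiles)
screens = [profiles[i:i + 6] for i in range(0, len(profiles), 6)]
for k, screen in enumerate(screens, start=1):
    print(f"Screen {k}:")
    for p in screen:
        print("  ", p)
```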
Additive utility model
Different attributes contribute independently to the overall utility.
The perceived utility of an item/profile is determined as:
u = x_i β + ε
where x_i is the vector characterizing profile i,
β is the vector of (unknown) preferences (part-worths) for each attribute level,
and ε is the residual error.
Respondents are assumed to select the alternative that has, in their eyes, the maximal utility u.
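A minimal sketch of how part-worth utilities β could be estimated from ranked profiles with ordinary least squares; the dummy coding, the rank-to-utility transformation, and the toy numbers are illustrative assumptions, not the exact estimation procedure of the study:

```python
import numpy as np

# Toy data for a single respondent: each row is one dummy-coded profile.
# Columns: n_ratings_high, mean_4.0, mean_4.5, bimod_medium, bimod_high
# (reference levels: n_ratings_low, mean_3.5, bimod_low).
X = np.array([
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 0, 0, 0],
    [1, 0, 0, 1, 0],
    [0, 0, 1, 0, 1],
], dtype=float)

# Observed ranks on one screen (1 = best of 6), turned into a "higher is better" response.
ranks = np.array([2, 4, 3, 6, 5, 1], dtype=float)
y = len(ranks) + 1 - ranks

# Fit the additive model u = x_i beta + eps by ordinary least squares (with intercept).
X_design = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(X_design, y, rcond=None)

names = ["intercept", "n_ratings_high", "mean_4.0", "mean_4.5",
         "bimod_medium", "bimod_high"]
for name, b in zip(names, beta):
    print(f"{name:>15}: {b:+.2f}")
```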
Eye-tracking Metrics
▪ Area of Interest (AOI) [1]
▪ Fixation times
▫ Geometric mean [2]
▪ Revisits [3]
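A minimal sketch of the geometric-mean aggregation of viewing times with a 95% confidence interval computed on the log scale, in the spirit of Sauro & Lewis [2]; the timing values are made up:

```python
import math
import statistics

# Hypothetical per-trial viewing times (seconds) for one item/AOI;
# real values would come from the eye tracker.
times = [2.1, 3.4, 1.8, 5.2, 2.9, 4.1, 2.4, 3.0]

# Viewing/task times are right-skewed, so they are averaged on the log scale.
logs = [math.log(t) for t in times]
mean_log = statistics.mean(logs)
sem_log = statistics.stdev(logs) / math.sqrt(len(logs))

# 95% CI on the log scale (t critical value for n - 1 = 7 degrees of freedom),
# then back-transformed to seconds.
t_crit = 2.365
geo_mean = math.exp(mean_log)
ci_low = math.exp(mean_log - t_crit * sem_log)
ci_high = math.exp(mean_log + t_crit * sem_log)

print(f"Geometric mean: {geo_mean:.2f}s, 95% CI [{ci_low:.2f}s, {ci_high:.2f}s]")
```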
[1] Kenneth Holmqvist, Marcus Nyström, Richard Andersson, Richard Dewhurst, Halszka Jarodzka, and Joost Van De Weijer. Eye Tracking: A Comprehensive Guide to Methods and Measures. Oxford University Press, 2011
[2] Jeff Sauro and James R. Lewis. Average task times in usability tests. CHI '10
[3] John W Payne. Task complexity and contingent processing in decision making: An information search and protocol analysis. Organizational Behavior and Human Performance, 1976
Results
Respondents' demographics
42 respondents. Bar charts summarized Age (18-24, 25-30, 31-40, 40+), Gender (Male, Female, No answer), and Country (Italy, Albania, Germany, Others).
All respondents were acquainted with online booking scenarios.
Split on Decision Difficulty: Parameter estimates
Non-compensatory strategy
▪ Compare items based on one attribute
▪ Perform intra-dimensional comparisons
▪ Perform fewer comparisons
Compensatory strategy
▪ All attributes must meet a minimum requirement
▪ Multiple inter-dimensional comparisons
▪ Spend more time on items
Max vs. Sat: Time spent on items
Geometric mean of the time spent on items (95% confidence level), median split on the decision difficulty sub-scale.
Max vs. Sat: Revisits
Mean number of revisits per item (95% confidence level), median split on the decision difficulty sub-scale.
Gini-index
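The slide presumably showed Gini-index values as a chart; as background, the Gini index of non-negative values x_1, …, x_n (for instance, fixation counts per AOI, an assumed use here) can be written as:

```latex
% Gini index as the normalized mean absolute difference of the observations x_1, ..., x_n
% (here assumed to be fixation counts or times per AOI).
G = \frac{\sum_{i=1}^{n}\sum_{j=1}^{n} \lvert x_i - x_j \rvert}{2\, n \sum_{i=1}^{n} x_i}
```

G = 0 means attention was spread perfectly evenly, while values close to 1 mean it was concentrated on very few elements.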
Conclusions
Conclusions
▪ Maximizers and satisficers exhibit different decision-making behavior
▫ Choice is dominated by the mean and the number of ratings
▫ Bimodality showed no significant influence
▫ Compensatory vs. non-compensatory strategies
▪ Rating summaries influence/bias users' choices
▫ This bias is typically not considered when interpreting implicit user feedback
▪ Our results indicate that more aspects need to be considered to optimize recommendations based on explainability/persuasiveness
Thank you!
Ludovik Coba
lucoba@unibz.it
Markus Zanker
mzanker@unibz.it
Laurens Rook
l.rook@tudelft.nl

More Related Content
▪ Ratings in Recommender Systems: Decision Biases and Explainability (PPTX)
▪ How to Run Conjoint Analysis (PPTX)
▪ How to run conjoint analysis (PPTX)
▪ How to Run Discrete Choice Conjoint Analysis (PPTX)
▪ Cb2decision making pr (PPT)
▪ SIMS Quantitative Course Lecture 1 (PPT)
▪ Week 9. Decision Making.pptx, Covers various topics on the sdecision making (PPTX)
▪ Learn how to do a conjoint analysis project in 1 hr (PPT)

Similar to Decision Making Based on Bimodal Rating Summary Statistics - An Eye-Tracking Study of Hotels (20)
▪ Survey analytics conjointanalysis_1 (PPT)
▪ Chainsaw Conjoint (PDF)
▪ Mr course module 06 (PDF)
▪ Chap012 (PPT)
▪ conjoint analysis (PPTX)
▪ Consumer behavior in services context (PPT)
▪ Consumer decision making process slide (PPTX)
▪ Consumer decision making process slide me (PPTX)
▪ SSIi2016 keynote Martijn Willemsen (PDF)
▪ Cb7 mba2 k10 (PPTX)
▪ UX & Usability: From "nice to have" to "do or die" (PDF)
▪ Consumer Decision Making Process (PPT)
▪ Analysis of consumer preferences for new smartphone - Xiomi India (PDF)
▪ Customer Satisfaction Measures (PPT)
▪ Attitudinal Impact of Online Reviews on Consumer Purchase Decisions (PPTX)
▪ Psychological determinants of human judgment & decision making (PPT)
▪ Marketing Management -Consumer Descision Making (PPT)
▪ Navigating the alphabet soup of survey methodologies (PDF)
▪ Guide: Conjoint Analysis (PDF)
