Investigating Serendipity in Recommender
Systems Based on Real User Feedback
Denis Kotkov, University of Jyväskylä, Finland
Joseph A. Konstan, University of Minnesota, USA
Qian Zhao, University of Minnesota, USA
Jari Veijalainen, University of Jyväskylä, Finland
Summary
• In this study, we
• summarize definitions of serendipity in recommender
systems
• show that serendipity is valuable for users
Outline
• Introduction
• Definitions
• Motivation
• Experiment
• Results
• Conclusion
• Dataset
Recommender systems
Recommender systems are software tools that suggest
items of use to users
(Ricci et al., 2015)
From accuracy to serendipity
• Accurate recommendations:
• Too safe or boring
• Familiar
• Users would consume them anyway
• How about serendipitous recommendations?
What is serendipity?
Dictionary
The faculty of making fortunate
discoveries by accident
(https://www.thefreedictionary.com/serendipity)
Scientific sources
First, a serendipitous item should be not yet
discovered and not be expected by the user;
secondly, the item should also be interesting,
relevant and useful to the user.
(Ge et al., 2010)
Scientific sources
Serendipity represents the “unusualness” or
“surprise” of recommendations.
(Zhang et al., 2012)
What is serendipity in recommender systems?
• Serendipity components (Kotkov et al., 2016)
• Relevance
• Novelty
• Unexpectedness
What is serendipity in recommender systems?
• Serendipity components
• Relevance – the user likes or is interested in the item
• Novelty
• Unexpectedness
What is serendipity in recommender systems?
• Serendipity components
• Relevance
• Novelty
• Strict novelty – the user has never heard about this item
• Motivational novelty – the user was motivated to consume this item by the
recommender system
• Unexpectedness
What is serendipity in recommender systems?
• Serendipity components
• Relevance
• Novelty
• Unexpectedness
• Unexpectedness (relevant) – the user does not expect to like the item before consuming it
• Unexpectedness (find) – the user does not think they would find the item on their own
• Unexpectedness (implicit) – the item is dissimilar to the items the user usually consumes
• Unexpectedness (recommend) – the user does not expect this item to be recommended
What is serendipity in recommender systems?
• Each serendipity variation combines relevance, one novelty variation, and one unexpectedness variation:

Serendipity variation | Components
Strict serendipity (relevant) | relevance + strict novelty + unexpectedness (relevant)
Strict serendipity (find) | relevance + strict novelty + unexpectedness (find)
Strict serendipity (implicit) | relevance + strict novelty + unexpectedness (implicit)
Strict serendipity (recommend) | relevance + strict novelty + unexpectedness (recommend)
Motivational serendipity (relevant) | relevance + motivational novelty + unexpectedness (relevant)
Motivational serendipity (find) | relevance + motivational novelty + unexpectedness (find)
Motivational serendipity (implicit) | relevance + motivational novelty + unexpectedness (implicit)
Motivational serendipity (recommend) | relevance + motivational novelty + unexpectedness (recommend)
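To make the combination rule concrete, here is a minimal sketch that derives the eight variations from per-answer booleans. The frame and all column names (answers, relevance, strict_novelty, unexpectedness_find, and so on) are illustrative assumptions, not the study's actual schema.

```python
# Sketch: derive the eight serendipity variations from boolean component
# columns. Every name below is an illustrative assumption.
import pandas as pd

def add_serendipity_variations(answers: pd.DataFrame) -> pd.DataFrame:
    novelty = {"strict": "strict_novelty", "motivational": "motivational_novelty"}
    unexpectedness = ["relevant", "find", "implicit", "recommend"]
    for nov_name, nov_col in novelty.items():
        for unexp in unexpectedness:
            # Each variation = relevance AND one novelty AND one unexpectedness.
            answers[f"{nov_name}_serendipity_{unexp}"] = (
                answers["relevance"]
                & answers[nov_col]
                & answers[f"unexpectedness_{unexp}"]
            )
    # A user-movie pair counts as serendipitous if at least one variation holds.
    variation_cols = [c for c in answers.columns if "_serendipity_" in c]
    answers["serendipitous"] = answers[variation_cols].any(axis=1)
    return answers
```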
Which variation is the most valuable?
Why serendipitous items?
• According to the literature review, serendipity
• broadens user preferences (Herlocker et al., 2004; Zhang et al., 2012)
• increases user satisfaction (Lu et al., 2012; Murakami et al., 2007)
Not enough evidence
• Takeaways from prior work (Zhang et al., 2012):
• Users prefer serendipity-oriented algorithms over accuracy-oriented ones
• On average, users give low ratings to serendipitous items
• The focus was on the algorithm rather than on users' perception of serendipity
• Sample size: 21 users
Zhang, Y. C., Séaghdha, D. Ó., Quercia, D., & Jambor, T. (2012, February). Auralist: introducing
serendipity into music recommendation. In Proceedings of the fifth ACM international conference on
Web search and data mining (pp. 13-22). ACM.
Is serendipity valuable for users?
Research questions
1. What are the effects of variations of novelty and unexpectedness
on preference broadening and user satisfaction?
2. What are the effects of serendipity variations on preference
broadening and user satisfaction?
3. What are the effective features for detecting serendipitous items?
4. How rare are serendipitous items?
MovieLens
• The study was run in MovieLens (https://movielens.org), a movie recommender maintained by GroupLens
Survey
• Selection criteria for movies:
• recently watched
• rated with at least 3.5 stars (relevant)
• unpopular
Survey
• For each user, we picked the 5 least popular movies that the user had rated at least 3.5 stars during the preceding three months
(Timeline figure, November–April: the user joins MovieLens, accumulates at least 5 ratings >= 3.5 over three months, and the survey then begins)
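As a rough sketch of this selection rule (not the authors' actual pipeline), the query below picks each user's five least popular qualifying movies. The file name, column names, the 90-day window, and the use of rating counts as the popularity measure are all assumptions.

```python
# Sketch: per user, select the 5 least popular recently watched movies
# rated >= 3.5 stars. Names and thresholds are illustrative assumptions.
import pandas as pd

ratings = pd.read_csv("ratings.csv")  # assumed: userId, movieId, rating, timestamp
ratings["ts"] = pd.to_datetime(ratings["timestamp"], unit="s")

# "Recently watched": roughly the last three months of the log.
cutoff = ratings["ts"].max() - pd.Timedelta(days=90)
recent = ratings[(ratings["ts"] >= cutoff) & (ratings["rating"] >= 3.5)]

# Popularity proxy: total number of ratings a movie has received.
popularity = ratings.groupby("movieId")["rating"].size().rename("n_ratings")
recent = recent.join(popularity, on="movieId")

candidates = (
    recent.sort_values("n_ratings")  # least popular first
          .groupby("userId")
          .head(5)                   # up to 5 movies per user
)
```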
Survey
Statement | Component measured
"The first time I heard of this movie was when MovieLens suggested it to me" | strict novelty
"MovieLens influenced my decision to watch this movie" | motivational novelty
"I expected to enjoy this movie before watching it for the first time" | unexpectedness (relevant), reverse-coded
"This is the type of movie I would not normally discover on my own; I need a recommender system like MovieLens to find movies like this one" | unexpectedness (find)
"This movie is different (e.g., in style, genre, topic) from the movies I usually watch" | unexpectedness (implicit)
"I was (or, would have been) surprised that MovieLens picked this movie to recommend to me" | unexpectedness (recommend)
"Watching this movie broadened my preferences. Now I am interested in a wider selection of movies" | preference broadening
"I am glad I watched this movie" | user satisfaction
Survey
Concept | User-movie pairs | Users
All | 2146 | 475
Strictly serendipitous (relevant) | 77 (3%) | 61
Strictly serendipitous (find) | 181 (8%) | 119
Strictly serendipitous (implicit) | 115 (5%) | 80
Strictly serendipitous (recommend) | 63 (3%) | 50
Motivationally serendipitous (relevant) | 91 (4%) | 64
Motivationally serendipitous (find) | 163 (7%) | 101
Motivationally serendipitous (implicit) | 128 (5%) | 88
Motivationally serendipitous (recommend) | 71 (3%) | 49
Serendipitous (any variation) | 302 (14%) | 173
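For RQ4 (how rare serendipitous items are), shares like those in the table can be computed directly from the boolean variation columns sketched earlier; the answers frame and its userId column remain illustrative assumptions.

```python
# Sketch: prevalence of each serendipity variation among surveyed
# user-movie pairs, reusing the boolean columns from the earlier sketch.
variation_cols = [c for c in answers.columns if "_serendipity_" in c]
for col in variation_cols + ["serendipitous"]:
    pairs = int(answers[col].sum())
    users = answers.loc[answers[col], "userId"].nunique()
    print(f"{col}: {pairs} pairs ({pairs / len(answers):.0%}), {users} users")
```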
Effects of novelty and unexpectedness
• Coefficients of ordinal regression models (12 models, one per metric-component pair)
• metric ~ component

Component | Preference broadening | User satisfaction
Strict novelty | 0.74* | 0.12
Motivational novelty | 0.78* | 0.30
Unexpectedness (relevant) | 0.59* | -0.91*
Unexpectedness (find) | 2.11* | 0.19
Unexpectedness (implicit) | 1.86* | -0.03
Unexpectedness (recommend) | 1.08* | -0.45
Significance codes: “*” p < 0.0005
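A minimal sketch of these models using the proportional-odds (ordinal logit) implementation in statsmodels; the data file and column names are assumptions, and the two metrics are assumed to be coded as ordered integers (e.g., a 1-5 Likert scale), which is what the formula interface expects.

```python
# Sketch: one ordinal regression per metric-component pair (12 models),
# mirroring the "metric ~ component" setup. Names are illustrative.
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

answers = pd.read_csv("answers.csv")  # hypothetical survey-response file

metrics = ["preference_broadening", "user_satisfaction"]
components = ["strict_novelty", "motivational_novelty",
              "unexpectedness_relevant", "unexpectedness_find",
              "unexpectedness_implicit", "unexpectedness_recommend"]

for metric in metrics:
    for component in components:
        # "0 +" drops the intercept: the ordinal thresholds play that role.
        model = OrderedModel.from_formula(
            f"{metric} ~ 0 + {component}", answers, distr="logit")
        result = model.fit(method="bfgs", disp=False)
        print(f"{metric} ~ {component}: coef = {result.params[component]:.2f}, "
              f"p = {result.pvalues[component]:.4g}")
```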
Effects of novelty and unexpectedness
• Results for novelty variations were not statistically significant
• Coefficients of ordinal regression models (6 regression models, one per pair of unexpectedness variations)
• Preference broadening ~ a variation of unexpectedness

Component | Unexpectedness (relevant) | Unexpectedness (find) | Unexpectedness (implicit)
Unexpectedness (find) | 0.82* | |
Unexpectedness (implicit) | 0.63* | -0.13 |
Unexpectedness (recommend) | 0.40 | -0.39 | -0.18
Significance codes: “*” p < 0.0005
Effects of novelty and unexpectedness
• Results for novelty variations were not statistically significant
• Coefficients of ordinal regression models (6 regression models, one per pair of unexpectedness variations)
• User satisfaction ~ a variation of unexpectedness

Component | Unexpectedness (relevant) | Unexpectedness (find) | Unexpectedness (implicit)
Unexpectedness (find) | 0.63* | |
Unexpectedness (implicit) | 0.46* | -0.14* |
Unexpectedness (recommend) | 0.14 | -0.45 | -0.32
Significance codes: “*” p < 0.0005
Effects of novelty and unexpectedness
• If your goal is to improve preference broadening:
• All variations of novelty and unexpectedness are good
• Unexpectedness (find) is the best choice
• Unexpectedness (relevant) is the worst choice
Effects of serendipity
• Results regarding user satisfaction were not statistically significant
• Coefficients of ordinal regression models (8 regression models)
• Preference broadening ~ a variation of serendipity

Serendipity variation | Preference broadening
Strict serendipity (relevant) | 0.97*
Strict serendipity (find) | 1.47*
Strict serendipity (implicit) | 1.58*
Strict serendipity (recommend) | 1.60*
Motivational serendipity (relevant) | 0.60
Motivational serendipity (find) | 1.66*
Motivational serendipity (implicit) | 1.35*
Motivational serendipity (recommend) | 1.30*
Significance codes: “*” p < 0.0005
Effects of serendipity
• When comparing serendipity variations with one another, differences in preference broadening were not statistically significant
• In terms of user satisfaction:
• Motivational serendipity (find) > strict serendipity (implicit) > motivational serendipity (relevant)
• Strict serendipity (recommend) > motivational serendipity (recommend)
Takeaways
• Does serendipity broaden user preferences? Yes
• Does serendipity increase user satisfaction? Unclear
• Serendipity variations are different
• Which serendipity variation to choose?
• Motivational serendipity (find) seems to be the best choice
• Motivational and strict serendipity (relevant) seem to be the worst choice
Dataset
• User answers
• Recommendations generated by MovieLens
• Tag genome
• Movie data (such as title, genres, cast)
• 10M user ratings
https://grouplens.org/datasets/serendipity-2018/
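A hedged loading sketch for this dataset: the directory and file names below follow my reading of the published archive's layout and should be verified against its README.

```python
# Sketch: loading the Serendipity 2018 dataset with pandas.
# File names are assumptions; check the archive's README.
import pandas as pd

base = "serendipity-sac2018"
answers = pd.read_csv(f"{base}/answers.csv")                  # survey responses
movies = pd.read_csv(f"{base}/movies.csv")                    # title, genres, cast
recommendations = pd.read_csv(f"{base}/recommendations.csv")  # MovieLens recs
tag_genome = pd.read_csv(f"{base}/tag_genome.csv")            # movie-tag relevance
training = pd.read_csv(f"{base}/training.csv")                # ~10M user ratings

print(len(answers), "survey rows;", len(training), "ratings")
```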
References
Ge, M., Delgado-Battenfeld, C., & Jannach, D. (2010, September). Beyond accuracy: evaluating
recommender systems by coverage and serendipity. In Proceedings of the fourth ACM conference on
Recommender systems (pp. 257-260). ACM.
Herlocker, J. L., Konstan, J. A., Terveen, L. G., & Riedl, J. T. (2004). Evaluating collaborative filtering
recommender systems. ACM Transactions on Information Systems (TOIS), 22(1), 5-53.
Kotkov, D., Wang, S., & Veijalainen, J. (2016). A survey of serendipity in recommender systems. Knowledge-
Based Systems, 111, 180-192.
Lu, Q., Chen, T., Zhang, W., Yang, D., & Yu, Y. (2012, December). Serendipitous personalized ranking for top-
n recommendation. In Proceedings of the 2012 IEEE/WIC/ACM International Joint Conferences on
Web Intelligence and Intelligent Agent Technology - Volume 01 (pp. 258-265). IEEE Computer Society.
Murakami, T., Mori, K., & Orihara, R. (2007, June). Metrics for evaluating the serendipity of
recommendation lists. In Annual Conference of the Japanese Society for Artificial Intelligence (pp. 40-46).
Springer, Berlin, Heidelberg.
Ricci, F., Rokach, L., & Shapira, B. (2015). Recommender systems: introduction and challenges. In
Recommender systems handbook (pp. 1-34). Springer, Boston, MA.
Zhang, Y. C., Séaghdha, D. Ó., Quercia, D., & Jambor, T. (2012, February). Auralist: introducing serendipity
into music recommendation. In Proceedings of the fifth ACM international conference on Web search and
data mining (pp. 13-22). ACM.
Questions?