Recommender Systems and
  Learning Analytics in TEL

      Hendrik Drachsler
          Open University of the Netherlands
Hendrik Drachsler
• Assistant professor at the Centre for Learning
  Sciences and Technologies (CELSTEC)
• Track record in TEL projects such as
  TENCompetence, SC4L, LTfLL, Handover, dataTEL.
• Main research focus:
   – Personalization of learning with information
     retrieval technologies, recommender systems and
     educational datasets
   – Visualization of educational data, data mash-up
     environments, supporting context-awareness by
     data mining
   – Social and ethical implications of data mining in
     education
• Leader of the dataTEL Theme Team of the
  STELLAR network of excellence (join the SIG on
  TELeurope.eu)
• Just recently: new alterEGO project granted by the
  Netherlands Laboratory for Lifelong Learning (on
  limitations of learning analytics in formal and
  informal learning)
Recommender Systems
            and Learning Analytics in TEL




                    23.07.2011 MUP/PLE lecture series,
               Knowledge Media Institute, Open University UK

Hendrik Drachsler
Centre for Learning Sciences and Technology
Open University of the Netherlands
Goals of the lecture
1. Crash course Recommender Systems (RecSys)

2. Overview of RecSys in TEL

3. Open research issues for RecSys in TEL

4. TEL RecSys and Learning Analytics




                        4
Introduction into
Recommender Systems
     Introduction       Objectives

                                 Technologies

                                     Evaluation




                    5
Introduction::Application areas
 Application areas
 • E-commerce websites (Amazon)
 • Video, Music websites (Netflix, last.fm)
 • Content websites (CNN, Google News)
 • Information Support Systems

Major claims
 • Highly application-oriented research area, every domain and
  task needs a specific RecSys
 • Always built around content or products; they never
  exist on their own


                                6
Introduction::Definition
Using the opinions of a community of users to
help individuals in that community to identify more
effectively content of interest from a potentially
overwhelming set of choices.
Resnick & Varian (1997). Recommender Systems, Communications of the ACM, 40(3).


Any system that produces personalized
recommendations as output or has the effect of
guiding the user in a personalized way to interesting
or useful objects in a large space of possible options.
Burke R. (2002). Hybrid Recommender Systems: Survey and Experiments,
User Modeling & User Adapted Interaction, 12, pp. 331-370.

                                       7
Introduction::Example

[Image-only slides: a step-through of a recommender system example.]
Introduction::Example

What did we learn from the small exercise?
  • There are different kinds of recommendations:
  a. People who bought X also bought Y
  b. There are more advanced, personalized recommendations

   • When registering, we have to tell the RecSys what we like
   (and what not). Thus, it requires information to offer suitable
   recommendations, and it learns our preferences.




                                8
Introduction:: The Long Tail



“We are leaving the age of information and
entering the age of recommendation”.
                              Anderson, C. (2004)




Anderson, C., (2004). The Long Tail. Wired Magazine.
                                       9
Introduction:: Age of RecSys?
      ...10 minutes on Google.




                  10
Introduction:: Age of RecSys?
... another 10 minutes: research on RecSys is
  becoming mainstream.
Some examples:
– ACM RecSys conference
– ICWSM: Weblog and Social Media
– WebKDD: Web Knowledge Discovery and Data Mining
– WWW: The original WWW conference
– SIGIR: Information Retrieval
– ACM KDD: Knowledge Discovery and Data Mining
– LAK: Learning Analytics and Knowledge
– Educational data mining conference
– ICML: Machine Learning
– ...

... and various workshops, books, and journals.

                               11
Objectives
of RecSys

[Note: a probabilistic combination of
– Item-based method
– User-based method
– Matrix Factorization
– (Maybe) content-based method

The idea is to pick from my previous list 20-50 movies that share
a similar audience with "Taken"; how much I will like it depends on
how much I liked those earlier movies. In short: I tend to watch this
movie because I have watched those movies, or: people who have
watched those movies also liked this movie.]
Objectives::Aims

• Converting Browsers into
    Buyers

• Increasing Cross-sales
• Building Loyalty
                                                          Photo by markhillary




Schafer, Konstan & Riedl (1999). Recommender Systems in e-commerce. Proceedings of the
1st ACM Conference on Electronic Commerce, Denver, Colorado, pp. 158-169.
                                         13
Objectives::RecSys Tasks
Find good items
presenting a ranked list of recommendations.

Receive sequence of items
a sequence of related items is recommended to the user,
e.g. a music recommender.

Find all good items
the user wants to identify all items that might be
interesting, e.g. medical or legal cases.

Annotation in context
predicted usefulness of an item that the user is currently
viewing, e.g. links within a website.

There are more tasks available...

Herlocker, Konstan, Borchers, & Riedl (2004). Evaluating Collaborative Filtering
Recommender Systems. ACM Transactions on Information Systems, 22(1), pp. 5-53.
RecSys Technologies
1. Predict how much a user
  may like a certain product

2. Create a list of Top-N
  best items

3. Adjust its prediction
  based on feedback of the
  target user and like-
  minded users

Just some examples; there are more technologies available.

Hanani et al. (2001). Information Filtering: Overview of Issues, Research and Systems.
  User Modeling and User-Adapted Interaction, 11.
Technologies::Collaborative filtering

User-based filtering (GroupLens, 1994)
Take about 20-50 people who share similar taste with you, then
predict how much you might like an item depending on how much
the others liked it.
You may like it because your "friends" liked it.

Item-based filtering (Amazon, 2001)
Pick from your previous list 20-50 items that share similar people
with "the target item"; how much you will like the target item
depends on how much the others liked those earlier items.
You tend to like that item because you have liked those items.
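As an aside, the user-based scheme can be sketched in a few lines of Python. This is a toy illustration only: the ratings matrix, user names, and item IDs are invented, and real systems add rating normalisation, neighbourhood selection, and efficient similarity search.

```python
import math

# Invented toy ratings matrix: user -> {item: rating on a 1-5 scale}.
ratings = {
    "alice": {"m1": 5, "m2": 3, "m3": 4},
    "bob":   {"m1": 4, "m2": 3, "m3": 5, "m4": 4},
    "carol": {"m1": 1, "m2": 5, "m4": 2},
}

def pearson(u, v):
    """Pearson correlation over the items both users rated."""
    common = set(ratings[u]) & set(ratings[v])
    if len(common) < 2:
        return 0.0
    ru = [ratings[u][i] for i in common]
    rv = [ratings[v][i] for i in common]
    mu, mv = sum(ru) / len(ru), sum(rv) / len(rv)
    num = sum((a - mu) * (b - mv) for a, b in zip(ru, rv))
    den = (math.sqrt(sum((a - mu) ** 2 for a in ru))
           * math.sqrt(sum((b - mv) ** 2 for b in rv)))
    return num / den if den else 0.0

def predict(user, item):
    """Predict a rating as the user's mean plus a similarity-weighted
    average of the neighbours' mean-centred ratings for the item."""
    mu = sum(ratings[user].values()) / len(ratings[user])
    num = den = 0.0
    for other in ratings:
        if other == user or item not in ratings[other]:
            continue
        w = pearson(user, other)
        mo = sum(ratings[other].values()) / len(ratings[other])
        num += w * (ratings[other][item] - mo)
        den += abs(w)
    return mu + num / den if den else mu
```

With this toy data, `predict("alice", "m4")` leans on Bob (similar taste) and Carol (opposite taste) to fill in the rating Alice never gave.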
Technologies::Content-based filtering




  Information needs of the user and characteristics of items are
    represented as keywords, attributes, or tags that describe
    past selections, e.g. using TF-IDF.




                              17
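A minimal content-based matcher along these lines can be sketched as follows. The item texts, the user profile, and the simple whitespace tokenisation are invented for the sketch; real systems use richer metadata and proper text processing.

```python
import math
from collections import Counter

# Invented item descriptions standing in for learning-object metadata.
items = {
    "course_a": "python programming data analysis",
    "course_b": "statistics data analysis research",
    "course_c": "painting art history",
}

def tfidf_vectors(docs):
    """Build TF-IDF vectors: tf = raw term count, idf = log(N / df)."""
    n = len(docs)
    tokenized = {d: text.split() for d, text in docs.items()}
    df = Counter()
    for toks in tokenized.values():
        df.update(set(toks))
    return {
        d: {t: c * math.log(n / df[t]) for t, c in Counter(toks).items()}
        for d, toks in tokenized.items()
    }

def cosine(a, b):
    num = sum(a[t] * b[t] for t in a if t in b)
    den = (math.sqrt(sum(x * x for x in a.values()))
           * math.sqrt(sum(x * x for x in b.values())))
    return num / den if den else 0.0

def recommend(profile_text, docs):
    """Rank items by cosine similarity to the user's keyword profile."""
    vecs = tfidf_vectors({**docs, "_profile": profile_text})
    profile = vecs.pop("_profile")
    return sorted(docs, key=lambda d: cosine(profile, vecs[d]), reverse=True)
```

For a profile like "data analysis python", the data-analysis courses rank above the art course, because the profile shares weighted terms with their descriptions.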
Technologies::Hybrid RecSys
Combination of techniques to combine the advantages and overcome
the disadvantages of single techniques.

Advantages
• No content analysis
• Quality improves
• No cold-start problem
• No new user / item problem

Disadvantages
• Cold-start problem
• Over-fitting
• New user / item problem
• Sparsity

Just some examples; there are more (dis)advantages available.
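In its simplest form, a weighted hybrid just blends the two scores and falls back to the content-based score when collaborative data is missing (which is exactly how the cold-start problem is avoided). The weights here are invented for illustration, not tuned.

```python
# A weighted hybrid recommender score (sketch with invented weights).
# None marks "no score available" from that component.
def hybrid_score(cf_score, cb_score, w_cf=0.7, w_cb=0.3):
    """Blend a collaborative and a content-based score."""
    if cf_score is None:   # cold start: no collaborative evidence yet
        return cb_score
    if cb_score is None:   # no content features for this item
        return cf_score
    return w_cf * cf_score + w_cb * cb_score
```

Production hybrids typically learn the weights from data or switch strategies per user, but the fallback logic is the essential idea.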
Evaluation
of RecSys
Evaluation::General idea
    Most of the time based on performance measures
      (“How good are your recommendations?”)

For example:

• What rating will a user give an item?
• Will the user select an item?
• What is the order of usefulness of items to a user?

Herlocker, Konstan, Riedl (2004). Evaluating Collaborative Filtering Recommender
Systems. ACM Transactions on Information Systems, 22(1), 5-53.
                                          20
Evaluation::Reference datasets




         ... and various commercial datasets.
                21
Evaluation::Approaches
Properties to evaluate:
• User preference
• Prediction accuracy
• Coverage
• Confidence
• Trust
• Novelty
• Serendipity
• Diversity
• Utility
• Risk
• Robustness
• Privacy
• Adaptivity
• Scalability

Approaches: 1. Simulation, 2. User study, or a combination of both.
Evaluation::Metrics
 Precision – the portion of
 recommendations that were
 successful (selected by both the
 algorithm and the user).

 Recall – the portion of relevant
 items selected by the algorithm
 compared to the total number of
 relevant items available.

 F1 measure – balances Precision
 and Recall into a single
 measurement.

Gunawardana, A., Shani, G. (2009). A Survey of Accuracy Evaluation Metrics of
Recommendation Tasks. Journal of Machine Learning Research, 10(Dec):2935−2962.
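These three metrics are straightforward to compute for a top-N recommendation list. A minimal sketch (the item IDs in the usage example are made up):

```python
# Precision, recall, and F1 for one user's top-N recommendation list.
def precision_recall_f1(recommended, relevant):
    """recommended: items the algorithm returned; relevant: items the
    user actually found useful. Returns (precision, recall, f1)."""
    hits = len(set(recommended) & set(relevant))
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For example, recommending ["a", "b", "c", "d"] when ["a", "c", "e"] were relevant gives precision 1/2, recall 2/3, and F1 4/7.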
Evaluation::Metrics

Just some examples; there are more metrics available, like MAE and RMSE.

Gunawardana, A., Shani, G. (2009). A Survey of Accuracy Evaluation Metrics of
Recommendation Tasks. Journal of Machine Learning Research, 10(Dec):2935−2962.
Evaluation::Metrics

Conclusion: Pearson is better than Cosine, because of fewer errors in
predicting TOP-N items.
[Bar chart: RMSE of Pearson vs. Cosine on Netflix and BookCrossing]

Conclusion: Cosine is better than Pearson, because of higher precision
and recall values on TOP-N items.
[Chart: precision-recall curves on News Story Clicks, recall 5%-40%]

Gunawardana, A., Shani, G. (2009). A Survey of Accuracy Evaluation Metrics of
Recommendation Tasks. Journal of Machine Learning Research, 10(Dec):2935−2962.
RecSys::TimeToThink
What do you expect that a RecSys in a
MUP/PLE should do with respect to ...

• Aims
• Tasks
• Technology
• Evaluation

Blackmore's custom-built LSD Drive
http://www.flickr.com/photos/rootoftwo/



                   25
Goals of the lecture
1. Crash course Recommender Systems (RecSys)

2. Overview of RecSys in TEL

3. Open research issues for RecSys in TEL

4. TEL RecSys and Learning Analytics




                       26
Recommender Systems
for TEL
    Introduction        Objectives

                                 Technologies

                                     Evaluation




                   27
TEL RecSys::Definition
     Using the experiences of a community of
     learners to help individual learners in that
     community to identify more effectively learning
     content from a potentially overwhelming set of
     choices.
Extended definition by Resnick & Varian (1997). Recommender Systems, Communications of the
  ACM, 40(3).




                                           28
TEL RecSys::Learning spectrum




Cross, J., Informal learning. Pfeifer. (2006).
                                         29
The Long Tail of Learning
          Formal

                                 Informal




Graphic: Wilkins, D., (2009).   30
TEL RecSys::Technologies




           31
TEL RecSys:: Technologies




            32
TEL RecSys:: Technologies


          RecSys Task:
          Find good items

          Hybrid RecSys:
          • Content-based on interests
          • Collaborative filtering


            33
TEL RecSys::Tasks
Find good items
e.g. relevant items for a learning task or a learning goal

Receive sequence of items
e.g. recommend a learning path to achieve a certain competence

Annotation in context
e.g. take into account location, time, noise level, prior knowledge,
peers around

Drachsler, H., Hummel, H., Koper, R. (2009). Identifying the goal, user model and conditions of
recommender systems for formal and informal learning. Journal of Digital Information, 10(2).
Evaluation
 of TEL
 RecSys
TEL RecSys::Review study


      Conclusions:

      Half of the systems (11/20) are still at the design or prototyping stage;
       only 8 systems have been evaluated through trials with human users.




Manouselis, N., Drachsler, H., Vuorikari, R., Hummel, H. G. K., & Koper, R. (2011).
Recommender Systems in Technology Enhanced Learning. In P. B. Kantor, F. Ricci,
L. Rokach, & B. Shapira (Eds.), Recommender Systems Handbook (pp. 387-415).
Berlin: Springer.                          36
Thus...
“The performance results
of different research
efforts in recommender
systems are hardly
comparable.”

(Manouselis et al., 2010)
                                 Kaptain Kobold
                                 http://www.flickr.com/photos/kaptainkobold/3203311346/




                            37
Thus...
TEL recommender experiments lack
transparency. They need to be
repeatable to test:
• Validity
• Verification
• Compare results
TEL RecSys::Evaluation/datasets




Drachsler, H., Bogers, T., Vuorikari, R., Verbert, K., Duval, E., Manouselis, N., Beham, G.,
Lindstaedt, S., Stern, H., Friedrich, M., & Wolpers, M. (2010). Issues and Considerations
regarding Sharable Data Sets for Recommender Systems in Technology Enhanced Learning.
Presentation at the 1st Workshop on Recommender Systems in Technology Enhanced Learning
(RecSysTEL) in conjunction with the 5th European Conference on Technology Enhanced
Learning (EC-TEL 2010): Sustaining TEL: From Innovation to Learning and Practice.
September 28, 2010, Barcelona, Spain.
Evaluation::Metrics
                                                   MAE – Mean Absolute Error:
                                                   Deviation of recommendations
                                                   from the user-specified ratings.
                                                   The lower the MAE, the more
                                                   accurately the RecSys predicts user
                                                   ratings.




 Outcomes:
 Tanimoto similarity +
 item-based CF was
 the most accurate.


Verbert, K., Drachsler, H., Manouselis, N., Wolpers, M., Vuorikari, R., Beham, G., Duval, E.,
(2011). Dataset-driven Research for Improving Recommender Systems for Learning. Learning
Analytics & Knowledge: February 27-March 1,39  2011, Banff, Alberta, Canada
Evaluation::Metrics
                                                   MAE – Mean Absolute Error:
                                                   Deviation of recommendations
                                                   from the user-specified ratings.
                                                   The lower the MAE, the more
                                                   accurately the RecSys predicts user
                                                   ratings.




Outcomes:
•User-based CF Algorithm that
predicts the top 10 most relevant
 Outcomes:
items for a user has a F1 score
 Tanimoto similarity +
of almost 30%.
 item-based CF was
•the most accurate.
  Implicit ratings like download
 rates, bookmarks can
 successfully used in TEL.
Verbert, K., Drachsler, H., Manouselis, N., Wolpers, M., Vuorikari, R., Beham, G., Duval, E.,
(2011). Dataset-driven Research for Improving Recommender Systems for Learning. Learning
Analytics & Knowledge: February 27-March 1,39  2011, Banff, Alberta, Canada
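To make the metric concrete, here is a minimal sketch of the MAE computation over a handful of made-up ratings (illustrative numbers only, not the study's data):

```python
# MAE = mean(|predicted rating - actual rating|) over all rated items.
actual    = [4.0, 3.0, 5.0, 2.0]   # ratings the user actually gave
predicted = [3.5, 3.0, 4.0, 2.5]   # ratings the RecSys predicted

mae = sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)
print(mae)  # 0.5 -> on average the predictions are half a rating point off
```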
TEL RecSys::Evaluation

Combined approach by Drachsler et al. 2008:
1. Accuracy
2. Coverage
3. Precision
1. Effectiveness of learning
2. Efficiency of learning
3. Drop out rate
4. Satisfaction

Kirkpatrick model by Manouselis et al. 2010:
1. Reaction of learner
2. Learning improved
3. Behaviour
4. Results
                                 40
Goals of the lecture
1. Crash course Recommender Systems (RecSys)

2. Overview of RecSys in TEL

3. Open research issues for RecSys in TEL

4. TEL RecSys and Learning Analytics




                       41
TEL RecSys::Open issues

1. Evaluation of TEL RecSys
2. Publicly available datasets
3. Comparable experiments
4. Body of knowledge
5. Privacy and data protection
6. Design learning driven RecSys




                          42
Goals of the lecture
1. Crash course Recommender Systems (RecSys)

2. Overview of RecSys in TEL

3. Open research issues for RecSys in TEL

4. TEL RecSys and Learning Analytics




                       43
Greller, W., & Drachsler, H., 2011.
                          44
Learning Analytics::TimeToThink
 •   Consider the Learning Analytics
     framework and imagine some great TEL
     RecSys that could support you in your
     stakeholder role

     alternatively

 • Name a learning task for which a TEL
     RecSys would be useful.


                      45
Thank you for attending this lecture!
 This slide is available at:
 http://www.slideshare.com/Drachsler

 Email:       hendrik.drachsler@ou.nl
 Skype:       celstec-hendrik.drachsler
 Blogging at: http://www.drachsler.de
 Twittering at: http://twitter.com/HDrachsler


                      46

More Related Content

PDF
Guest lecture Recommender Systems in TEL at RWTH Aachen, Germany
PDF
Collaborative Filtering and Recommender Systems By Navisro Analytics
PPS
Women
PPT
Ailehekimligianketi
PPT
Cijeli brojevi vježba
PPTX
Educon Encienda 2015: Students, Families, Teachers: One Team
PPT
St. Mark’S Libraries – Tech Talk
PPTX
Slavery Module: Lesson three

Viewers also liked (19)

PDF
Tips for grabbing and holding attention in online courses
PPT
Niedziela W Supermarkecie
PPT
A methodology to design customized learning networks
PPTX
Slavery Module: Lesson ten
PPT
Autonomics Computing (with some of Adaptive Systems) and Requirements Enginee...
PDF
Palimpsest Maps
DOCX
El abuso de las drogas
PPT
8.5 Y1 Passes Tu Tes Vacances En France
PPT
Class Of 2010 Info Session Final
PPTX
Publishing for the students living in the iPad era: our view of the industry
PPT
zen and the art of SQL optimization
PDF
バーチャル読書会 第2回
PPS
Eski fotoğraf ve kartpostallar
PPT
新聞報告
PDF
Dan Smith
KEY
Cytoscape プロジェクト現状報告 2011年2月
PDF
Phoenix Hope VI And Green Building Presentation
PPT
媽祖2012五騎有保佑
PPT
Trinity 020908

Similar to Recommender Systems and Learning Analytics in TEL (20)

PDF
RecSysTEL lecture at advanced SIKS course, NL
PPT
Introduction to recommendation system
PDF
RES Introduction powerpoint presetation
PPTX
Recommenders Systems
PPTX
Lecture Notes on Recommender System Introduction
DOC
WORD
PDF
A Survey Of Collaborative Filtering Techniques
PPTX
Toward the Next Generation of Recommender Systems:
PPT
Social Recommender Systems Tutorial - WWW 2011
PPTX
OMRES-ProgressPresentation1.pptx
PPT
Chapter 02 collaborative recommendation
PPT
Chapter 02 collaborative recommendation
PPT
AI-week6-Recommender Systems & Personalization.ppt
PDF
How to use LLMs for creating a content-based recommendation system for entert...
PPS
MLforIR.pps
PPTX
Immersive Recommendation Workshop, NYC Media Lab'17
PDF
Contextual model of recommending resources on an academic networking portal
PDF
CONTEXTUAL MODEL OF RECOMMENDING RESOURCES ON AN ACADEMIC NETWORKING PORTAL
PDF
Mendeley: Recommendation Systems for Academic Literature
PPTX
Recommender System _Module 1_Introduction to Recommender System.pptx

More from Hendrik Drachsler (20)

PDF
Trusted Learning Analytics Research Program
PDF
Smart Speaker as Studying Assistant by Joao Pargana
PDF
Verhaltenskodex Trusted Learning Analytics
PDF
Rödling, S. (2019). Entwicklung einer Applikation zum assoziativen Medien Ler...
PDF
E.Leute: Learning the impact of Learning Analytics with an authentic dataset
PDF
Romano, G. (2019) Dancing Trainer: A System For Humans To Learn Dancing Using...
PPTX
Towards Tangible Trusted Learning Analytics
PPTX
Trusted Learning Analytics
PPTX
Fighting level 3: From the LA framework to LA practice on the micro-level
PDF
LACE Project Overview and Exploitation
PPTX
Dutch Cooking with xAPI Recipes, The Good, the Bad, and the Consistent
PPTX
Recommendations for Open Online Education: An Algorithmic Study
PDF
Privacy and Analytics – it’s a DELICATE Issue. A Checklist for Trusted Learni...
PDF
DELICATE checklist - to establish trusted Learning Analytics
PDF
LACE Flyer 2016
PPT
The Future of Big Data in Education
PDF
The Future of Learning Analytics
PDF
Six dimensions of Learning Analytics
PDF
Learning Analytics Metadata Standards, xAPI recipes & Learning Record Store -
PDF
Ethics privacy washington

Recently uploaded (20)

PPTX
Detection-First SIEM: Rule Types, Dashboards, and Threat-Informed Strategy
PDF
Bridging biosciences and deep learning for revolutionary discoveries: a compr...
PDF
Modernizing your data center with Dell and AMD
DOCX
The AUB Centre for AI in Media Proposal.docx
PDF
Agricultural_Statistics_at_a_Glance_2022_0.pdf
PDF
Building Integrated photovoltaic BIPV_UPV.pdf
PDF
NewMind AI Monthly Chronicles - July 2025
PDF
Encapsulation_ Review paper, used for researhc scholars
PDF
TokAI - TikTok AI Agent : The First AI Application That Analyzes 10,000+ Vira...
PDF
How UI/UX Design Impacts User Retention in Mobile Apps.pdf
PPTX
Effective Security Operations Center (SOC) A Modern, Strategic, and Threat-In...
PDF
Unlocking AI with Model Context Protocol (MCP)
PDF
Mobile App Security Testing_ A Comprehensive Guide.pdf
PDF
Build a system with the filesystem maintained by OSTree @ COSCUP 2025
PPTX
Big Data Technologies - Introduction.pptx
PPTX
PA Analog/Digital System: The Backbone of Modern Surveillance and Communication
PPTX
A Presentation on Artificial Intelligence
PDF
KodekX | Application Modernization Development
PDF
Dropbox Q2 2025 Financial Results & Investor Presentation
PPT
Teaching material agriculture food technology

Recommender Systems and Learning Analytics in TEL

  • 1. Recommender Systems and Learning Analytics in TEL Hendrik Drachsler Open University of the Netherlands
  • 2. Hendrik Drachsler • Assistant professor at the Centre for Learning Sciences and Technologies (CELSTEC) • Track record in TEL projects such as TENCompetence, SC4L, LTfLL, Handover, dataTEL. • Main research focus: – Personalization of learning with information retrieval technologies, recommender systems and educational datasets – Visualization of educational data, data mash-up environments, supporting context-awareness by data mining – Social and ethical implications of data mining in education • Leader of the dataTEL Theme Team of the STELLAR network of excellence (join the SIG on TELeurope.eu) • Just recently: new alterEGO project granted by the Netherlands Laboratory for Lifelong Learning (on limitations of learning analytics in formal and informal learning)
  • 3. Recommender Systems and Learning Analytics in TEL 23.07.2011 MUP/PLE lecture series, Knowledge Media Institute, Open University UK Hendrik Drachsler Centre for Learning Sciences and Technology Open University of the Netherlands 3
  • 4. Goals of the lecture 1. Crash course Recommender Systems (RecSys) 2. Overview of RecSys in TEL 3. Open research issues for RecSys in TEL 4. TEL RecSys and Learning Analytics 4
  • 5. Introduction into Recommender Systems Introduction Objectives Technologies Evaluation 5
  • 6. Introduction::Application areas Application areas • E-commerce websites (Amazon) • Video, Music websites (Netflix, last.fm) • Content websites (CNN, Google News) • Information Support Systems Major claims • Highly application-oriented research area, every domain and task needs a specific RecSys • Always built around content or products; they never exist on their own 6
  • 7. Introduction::Definition Using the opinions of a community of users to help individuals in that community to identify more effectively content of interest from a potentially overwhelming set of choices. Resnick & Varian (1997). Recommender Systems, Communications of the ACM, 40(3). 7
  • 8. Introduction::Definition Using the opinions of a community of users to help individuals in that community to identify more effectively content of interest from a potentially overwhelming set of choices. Resnick & Varian (1997). Recommender Systems, Communications of the ACM, 40(3). Any system that produces personalized recommendations as output or has the effect of guiding the user in a personalized way to interesting or useful objects in a large space of possible options. Burke R. (2002). Hybrid Recommender Systems: Survey and Experiments, User Modeling & User Adapted Interaction, 12, pp. 331-370. 7
  • 17. Introduction::Example What did we learn from the small exercise? • There are different kinds of recommendations a. People who bought X also bought Y b. there are more advanced personalized recommendations • When registering, we have to tell the RecSys what we like (and what not). Thus, it requires information to offer suitable recommendations and it learns our preferences. 8
  • 18. Introduction:: The Long Tail Anderson, C., (2004). The Long Tail. Wired Magazine. 9
  • 19. Introduction:: The Long Tail “We are leaving the age of information and entering the age of recommendation”. Anderson, C. (2004) Anderson, C., (2004). The Long Tail. Wired Magazine. 9
  • 20. Introduction:: Age of RecSys? ...10 minutes on Google. 10
  • 21. Introduction:: Age of RecSys? ...10 minutes on Google. 10
  • 22. Introduction:: Age of RecSys? ... another 10 minutes, research on RecSys is becoming main stream. Some examples: – ACM RecSys conference – ICWSM: Weblog and Social Media – WebKDD: Web Knowledge Discovery and Data Mining – WWW: The original WWW conference – SIGIR: Information Retrieval – ACM KDD: Knowledge Discovery and Data Mining – LAK: Learning Analytics and Knowledge – Educational data mining conference – ICML: Machine Learning – ... ... and various workshops, books, and journals. 11
  • 23. Objectives of RecSys A probabilistic combination of – item-based method – user-based method – matrix factorization – (maybe) content-based method. The idea is to pick from my previous list 20-50 movies that share a similar audience with “Taken”; how much I will like it then depends on how much I liked those early movies. In short: I tend to watch this movie because I have watched those movies, or people who have watched those movies also liked this movie. 12
  • 24. Objectives::Aims • Converting Browsers into Buyers • Increasing Cross-sales • Building Loyalty Foto by markhillary Schafer, Konstan & Riedel, (1999). RecSys in e-commerce. Proc. of the 1st ACM on electronic commerce, Denver, Colorado, pp. 158-169. 13
  • 25. Objectives::RecSys Tasks Find good items: presenting a ranked list of recommendations. Find all good items: the user wants to identify all items that might be interesting, e.g. medical or legal cases. Herlocker, Konstan, Borchers, & Riedl (2004). Evaluating Collaborative Filtering Recommender Systems. ACM Transactions on Information Systems, 22(1), pp. 5-53. 14
  • 26. Objectives::RecSys Tasks Find good items: presenting a ranked list of recommendations. Receive sequence of items: a sequence of related items is recommended to the user, e.g. a music recommender. Find all good items: the user wants to identify all items that might be interesting, e.g. medical or legal cases. Annotation in context: predicted usefulness of an item that the user is currently viewing, e.g. links within a website. Herlocker, Konstan, Borchers, & Riedl (2004). Evaluating Collaborative Filtering Recommender Systems. ACM Transactions on Information Systems, 22(1), pp. 5-53. 14
  • 27. Objectives::RecSys Tasks There are more tasks available... Find good items: presenting a ranked list of recommendations. Receive sequence of items: a sequence of related items is recommended to the user, e.g. a music recommender. Find all good items: the user wants to identify all items that might be interesting, e.g. medical or legal cases. Annotation in context: predicted usefulness of an item that the user is currently viewing, e.g. links within a website. Herlocker, Konstan, Borchers, & Riedl (2004). Evaluating Collaborative Filtering Recommender Systems. ACM Transactions on Information Systems, 22(1), pp. 5-53. 14
  • 28. RecSys Technologies 1. Predict how much a user may like a certain product 2. Create a list of Top-N best items 3. Adjust its prediction based on feedback of the target user and like-minded users. Hanani et al., (2001). Information Filtering: Overview of Issues, Research and Systems. User Modeling and User-Adapted Interaction, 11. 15
  • 29. RecSys Technologies 1. Predict how much a user may like a certain product 2. Create a list of Top-N best items 3. Adjust its prediction based on feedback of the target user and like-minded users. Just some examples; there are more technologies available. Hanani et al., (2001). Information Filtering: Overview of Issues, Research and Systems. User Modeling and User-Adapted Interaction, 11. 15
  • 30. Technologies::Collaborative filtering User-based filtering (Grouplens, 1994): Take about 20-50 people who share similar taste with you, afterwards predict how much you might like an item depending on how much the others liked it. You may like it because your “friends” liked it. 16
  • 31. Technologies::Collaborative filtering User-based filtering (Grouplens, 1994): Take about 20-50 people who share similar taste with you, afterwards predict how much you might like an item depending on how much the others liked it. You may like it because your “friends” liked it. Item-based filtering (Amazon, 2001): Pick from your previous list 20-50 items that share similar people with “the target item”; how much you will like the target item depends on how much the others liked those earlier items. You tend to like that item because you have liked those items. 16
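The user-based scheme described on this slide can be sketched in a few lines: compute a Pearson similarity between users on their co-rated items, then predict with a similarity-weighted average of positively correlated neighbours. The ratings below are toy data I made up for illustration, not from any system in the lecture:

```python
from math import sqrt

# Toy user-item ratings (hypothetical data, for illustration only).
ratings = {
    "alice": {"item1": 5, "item2": 3, "item3": 4},
    "bob":   {"item1": 4, "item2": 2, "item3": 5, "item4": 4},
    "carol": {"item1": 1, "item2": 5, "item4": 2},
}

def pearson(u, v):
    """Pearson correlation between users u and v on their co-rated items."""
    common = sorted(set(ratings[u]) & set(ratings[v]))
    if len(common) < 2:
        return 0.0
    ru = [ratings[u][i] for i in common]
    rv = [ratings[v][i] for i in common]
    mu, mv = sum(ru) / len(ru), sum(rv) / len(rv)
    num = sum((a - mu) * (b - mv) for a, b in zip(ru, rv))
    den = sqrt(sum((a - mu) ** 2 for a in ru)) * sqrt(sum((b - mv) ** 2 for b in rv))
    return num / den if den else 0.0

def predict(user, item):
    """Similarity-weighted average over positively correlated neighbours."""
    nbrs = [(pearson(user, v), ratings[v][item])
            for v in ratings if v != user and item in ratings[v]]
    num = sum(s * r for s, r in nbrs if s > 0)
    den = sum(s for s, r in nbrs if s > 0)
    return num / den if den else None

print(predict("alice", "item4"))  # 4.0 -> only bob correlates positively with alice
```

Item-based filtering works the same way with the roles of users and items swapped: similarities are computed between item columns instead of user rows.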
  • 32. Technologies::Content-based filtering Information needs of user and characteristics of items are represented in keywords, attributes, tags that describe past selections, e.g., TF-IDF. 17
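As a rough illustration of the TF-IDF weighting this slide mentions, here is a minimal sketch over a made-up corpus of learning-resource descriptions (my own toy example, not from the lecture):

```python
from math import log
from collections import Counter

# Toy corpus of learning-resource descriptions (made-up examples).
docs = [
    "python programming introduction",
    "statistics introduction course",
    "python data analysis",
]
tokenized = [d.split() for d in docs]
N = len(tokenized)

def tfidf(term, doc_tokens):
    """TF-IDF: term frequency in the document times log inverse document frequency."""
    tf = Counter(doc_tokens)[term] / len(doc_tokens)
    df = sum(1 for d in tokenized if term in d)  # number of docs containing the term
    return tf * log(N / df) if df else 0.0

# A term that occurs in fewer documents gets a higher weight:
print(tfidf("statistics", tokenized[1]) > tfidf("python", tokenized[0]))  # True
```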
  • 33. Technologies::Hybrid RecSys Combination of techniques to overcome disadvantages and advantages of single techniques. Advantages: • No content analysis • Quality improves • No cold-start problem • No new user / item problem. Disadvantages: • Cold-start problem • Over-fitting • New user / item problem • Sparsity. 18
  • 34. Technologies::Hybrid RecSys Combination of techniques to overcome disadvantages and advantages of single techniques. Advantages: • No content analysis • Quality improves • No cold-start problem • No new user / item problem. Disadvantages: • Cold-start problem • Over-fitting • New user / item problem • Sparsity. Just some examples; there are more (dis)advantages available. 18
  • 35. Evaluation of RecSys 19
  • 36. Evaluation::General idea Most of the time based on performance measures (“How good are your recommendations?”) For example: •Predict what rating will a user give an item? •Will the user select an item? •What is the order of usefulness of items to a user? Herlocker, Konstan, Riedl (2004). Evaluating Collaborative Filtering Recommender Systems. ACM Transactions on Information Systems, 22(1), 5-53. 20
  • 37. Evaluation::Reference datasets ... and various commercial datasets. 21
  • 38. Evaluation::Approaches 1. Simulation •User preference •Prediction accuracy •Coverage •Confidence •Trust •Novelty 2. User study •Serendipity •Diversity •Utility •Risk •Robustness + •Privacy •Adaptivity •Scalability 22
  • 39. Evaluation::Metrics Precision – The portion of recommendations that were successful. (Selected by the algorithm and by the user) Recall – The portion of relevant items selected by algorithm compared to a total number of relevant items available. F1 - Measure balances Precision and Recall into a single measurement. Gunawardana, A., Shani, G., (2009). A Survey of Accuracy Evaluation Metrics of Recommendation Tasks, Journal of Machine Learning Research, 10(Dec):2935−2962, 2009. 23
  • 40. Evaluation::Metrics Precision – The portion of recommendations that were successful. (Selected by the algorithm and by the user) Recall – The portion of relevant items selected by algorithm compared to a total number of relevant items available. F1 - Measure balances Precision and Recall into a single measurement. Gunawardana, A., Shani, G., (2009). A Survey of Accuracy Evaluation Metrics of Recommendation Tasks, Journal of Machine Learning Research, 10(Dec):2935−2962, 2009. 23
  • 41. Evaluation::Metrics Precision – The portion of recommendations that were successful. (Selected by the algorithm and by the user) Recall – The portion of relevant items selected by algorithm compared to a total number of relevant items available. F1 - Measure balances Precision and Recall into a single measurement. Gunawardana, A., Shani, G., (2009). A Survey of Accuracy Evaluation Metrics of Recommendation Tasks, Journal of Machine Learning Research, 10(Dec):2935−2962, 2009. 23
  • 42. Evaluation::Metrics Precision – The portion of recommendations that were successful. (Selected by the algorithm and by the user) Recall – The portion of relevant items selected by the algorithm compared to the total number of relevant items available. F1 – Measure balancing Precision and Recall into a single measurement. Just some examples; there are more metrics available like MAE, RMSE. Gunawardana, A., Shani, G., (2009). A Survey of Accuracy Evaluation Metrics of Recommendation Tasks, Journal of Machine Learning Research, 10(Dec):2935−2962, 2009. 23
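A quick sketch of these three metrics on a made-up top-N recommendation outcome (hypothetical items, illustrative numbers only):

```python
# Hypothetical top-N recommendation outcome.
recommended = {"a", "b", "c", "d", "e"}   # items the algorithm selected
relevant    = {"b", "c", "f", "g"}        # items the user actually found useful

hits = recommended & relevant             # successful recommendations
precision = len(hits) / len(recommended)  # 2/5 = 0.4
recall    = len(hits) / len(relevant)     # 2/4 = 0.5
f1 = 2 * precision * recall / (precision + recall)

print(precision, recall, round(f1, 3))  # 0.4 0.5 0.444
```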
  • 43. Evaluation::Metrics Conclusion: Pearson is better than Cosine, because of fewer errors (RMSE) in predicting TOP-N items on Netflix and BookCrossing. Gunawardana, A., Shani, G., (2009). A Survey of Accuracy Evaluation Metrics of Recommendation Tasks, Journal of Machine Learning Research, 10(Dec):2935−2962, 2009. 24
  • 44. Evaluation::Metrics Conclusion: Pearson is better than Cosine, because of fewer errors (RMSE) in predicting TOP-N items on Netflix and BookCrossing. Conclusion: Cosine is better than Pearson, because of higher precision and recall values on TOP-N items for News Story Clicks. Gunawardana, A., Shani, G., (2009). A Survey of Accuracy Evaluation Metrics of Recommendation Tasks, Journal of Machine Learning Research, 10(Dec):2935−2962, 2009. 24
  • 45. RecSys::TimeToThink What do you expect that a RecSys in a MUP/PLE should do with respect to ... • Aims • Tasks • Technology Blackmore’s custom-built LSD Drive • Evaluation http://www.flickr.com/photos/ rootoftwo/ 25
  • 46. Goals of the lecture 1. Crash course Recommender Systems (RecSys) 2. Overview of RecSys in TEL 3. Open research issues for RecSys in TEL 4. TEL RecSys and Learning Analytics 26
  • 47. Recommender Systems for TEL Introduction Objectives Technologies Evaluation 27
  • 48. TEL RecSys::Definition Using the experiences of a community of learners to help individual learners in that community to identify more effectively learning content from a potentially overwhelming set of choices. Extended definition by Resnick & Varian (1997). Recommender Systems, Communications of the ACM, 40(3). 28
  • 49. TEL RecSys::Learning spectrum Cross, J., Informal learning. Pfeifer. (2006). 29
  • 50. The Long Tail Graphic: Wilkins, D., (2009). 30
  • 51. The Long Tail of Learning Graphic: Wilkins, D., (2009). 30
  • 52. The Long Tail of Learning Formal Informal Graphic: Wilkins, D., (2009). 30
  • 56. TEL RecSys:: Technologies RecSys Task: Find good items Hybrid RecSys: •Content-based on interests •Collaborative filtering 33
  • 57. TEL RecSys::Tasks Find good items, e.g. relevant items for a learning task or a learning goal. Drachsler, H., Hummel, H., Koper, R., (2009). Identifying the goal, user model and conditions of recommender systems for formal and informal learning. Journal of Digital Information. 10(2). 34
  • 58. TEL RecSys::Tasks Find good items, e.g. relevant items for a learning task or a learning goal. Receive sequence of items, e.g. recommend a learning path to achieve a certain competence. Drachsler, H., Hummel, H., Koper, R., (2009). Identifying the goal, user model and conditions of recommender systems for formal and informal learning. Journal of Digital Information. 10(2). 34
  • 59. TEL RecSys::Tasks Find good items, e.g. relevant items for a learning task or a learning goal. Receive sequence of items, e.g. recommend a learning path to achieve a certain competence. Annotation in context, e.g. take into account location, time, noise level, prior knowledge, peers around. Drachsler, H., Hummel, H., Koper, R., (2009). Identifying the goal, user model and conditions of recommender systems for formal and informal learning. Journal of Digital Information. 10(2). 34
  • 60. Evaluation of TEL RecSys 35
• 62. TEL RecSys::Review study — Manouselis, N., Drachsler, H., Vuorikari, R., Hummel, H. G. K., & Koper, R. (2011). Recommender Systems in Technology Enhanced Learning. In P. B. Kantor, F. Ricci, L. Rokach, & B. Shapira (Eds.), Recommender Systems Handbook (pp. 387-415). Berlin: Springer.
• 63. TEL RecSys::Review study — Conclusions: half of the systems (11/20) are still at the design or prototyping stage; only 8 systems were evaluated through trials with human users.
• 64. Thus... "The performance results of different research efforts in recommender systems are hardly comparable." (Manouselis et al., 2010) Image: Kaptain Kobold, http://www.flickr.com/photos/kaptainkobold/3203311346/
• 65. Thus... TEL recommender experiments lack transparency. They need to be repeatable in order to test validity, allow verification, and compare results.
• 67. TEL RecSys::Evaluation/datasets — Drachsler, H., Bogers, T., Vuorikari, R., Verbert, K., Duval, E., Manouselis, N., Beham, G., Lindstaedt, S., Stern, H., Friedrich, M., & Wolpers, M. (2010). Issues and Considerations regarding Sharable Data Sets for Recommender Systems in Technology Enhanced Learning. Presentation at the 1st Workshop on Recommender Systems in Technology Enhanced Learning (RecSysTEL), in conjunction with the 5th European Conference on Technology Enhanced Learning (EC-TEL 2010): Sustaining TEL: From Innovation to Learning and Practice. September 28, 2010, Barcelona, Spain.
• 68. Evaluation::Metrics — MAE (Mean Absolute Error): deviation of recommendations from the user-specified ratings. The lower the MAE, the more accurately the RecSys predicts user ratings. Verbert, K., Drachsler, H., Manouselis, N., Wolpers, M., Vuorikari, R., Beham, G., & Duval, E. (2011). Dataset-driven Research for Improving Recommender Systems for Learning. Learning Analytics & Knowledge, February 27 - March 1, 2011, Banff, Alberta, Canada.
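The MAE defined on this slide can be computed in a few lines. A minimal sketch; the rating values below are made up for illustration:

```python
def mean_absolute_error(predicted, actual):
    """MAE: average absolute deviation of predicted ratings from actual ratings."""
    assert len(predicted) == len(actual)
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Hypothetical 1-5 star ratings: a lower MAE means more accurate predictions.
predicted = [4.2, 3.1, 5.0, 2.4]
actual    = [4,   3,   4,   2]
print(round(mean_absolute_error(predicted, actual), 3))  # → 0.425
```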
• 69. Evaluation::Metrics — Outcomes: Tanimoto similarity combined with item-based CF was the most accurate.
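As an illustration of that winning combination (Tanimoto similarity + item-based CF), here is a minimal sketch over a tiny implicit-feedback matrix. The item names, user IDs, and scoring function are invented for the example and are not from the study:

```python
def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity between two sets of users."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

# Hypothetical implicit feedback: item -> set of users who interacted with it.
item_users = {
    "intro-video": {"u1", "u2", "u3"},
    "quiz-1":      {"u2", "u3", "u4"},
    "paper-A":     {"u1", "u5"},
}

def item_based_score(candidate, used_items):
    """Score a candidate by its similarity to items the user already used."""
    return sum(tanimoto(item_users[candidate], item_users[i]) for i in used_items)

used = ["intro-video"]  # items the target user already interacted with
for cand in ("quiz-1", "paper-A"):
    print(cand, round(item_based_score(cand, used), 2))
# → quiz-1 0.5
# → paper-A 0.25
```

Candidates are then recommended in descending score order; here "quiz-1" would be ranked first.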
• 70. Evaluation::Metrics — Outcomes: a user-based CF algorithm that predicts the top 10 most relevant items for a user achieves an F1 score of almost 30%; implicit ratings like download rates and bookmarks can successfully be used in TEL.
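The reported F1 score combines precision and recall over a top-10 recommendation list. A minimal sketch; the function name and the item lists are my own, purely illustrative:

```python
def f1_at_n(recommended, relevant, n=10):
    """F1 over the top-n recommended items vs. the user's relevant items."""
    top = recommended[:n]
    hits = len(set(top) & set(relevant))
    if hits == 0:
        return 0.0
    precision = hits / len(top)
    recall = hits / len(relevant)
    return 2 * precision * recall / (precision + recall)

recommended = ["r1", "r2", "r3", "r4", "r5", "r6", "r7", "r8", "r9", "r10"]
relevant = ["r2", "r7", "x1", "x2"]  # 2 of 4 relevant items were recommended
print(round(f1_at_n(recommended, relevant), 3))  # → 0.286
```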
• 71.-74. TEL RecSys::Evaluation — Combined approach by Drachsler et al. 2008: 1. Accuracy, 2. Coverage, 3. Precision; plus 1. Effectiveness of learning, 2. Efficiency of learning, 3. Drop out rate, 4. Satisfaction. Kirkpatrick model by Manouselis et al. 2010: 1. Reaction of learner, 2. Learning improved, 3. Behaviour, 4. Results.
• 75. Goals of the lecture — 1. Crash course Recommender Systems (RecSys) 2. Overview of RecSys in TEL 3. Open research issues for RecSys in TEL 4. TEL RecSys and Learning Analytics
• 76. TEL RecSys::Open issues — 1. Evaluation of TEL RecSys 2. Publicly available datasets 3. Comparable experiments 4. Body of knowledge 5. Privacy and data protection 6. Design of learning-driven RecSys
• 77. Goals of the lecture — 1. Crash course Recommender Systems (RecSys) 2. Overview of RecSys in TEL 3. Open research issues for RecSys in TEL 4. TEL RecSys and Learning Analytics
• 78.-84. [Figure slides: the Learning Analytics framework, built up step by step] Greller, W., & Drachsler, H., 2011.
• 85. Learning Analytics::TimeToThink — Consider the Learning Analytics framework and imagine a great TEL RecSys that could support you in your stakeholder role; alternatively, name a learning task for which a TEL RecSys would be useful.
• 86. Thank you for attending this lecture! This slide deck is available at: http://www.slideshare.com/Drachsler Email: hendrik.drachsler@ou.nl Skype: celstec-hendrik.drachsler Blogging at: http://www.drachsler.de Twittering at: http://twitter.com/HDrachsler