Ten ways to make your semantic app addicted - REVISITED
Elena Simperl
Tutorial at ISWC 2011, Bonn, Germany




Executive summary
• Many aspects of semantic content authoring naturally rely
  on human contribution.

• Motivating users to contribute is essential for semantic
  technologies to reach critical mass and ensure sustainable
  growth.

• This tutorial is about
   – Methods and techniques to study incentives and motivators
     applicable to semantic content authoring scenarios.
   – How to implement the results of such studies through
     technology design, usability engineering, and game mechanics.


Incentives and motivators

• Motivation is the driving force that makes humans achieve their goals.
• Incentives are ‘rewards’ assigned by an external ‘judge’ to a performer for
  undertaking a specific task.
   – Common belief (among economists): incentives can be translated into a
     sum of money for all practical purposes.
• Incentives can be related to both extrinsic and intrinsic motivations.
   – Extrinsic motivation applies when the task is considered boring, dangerous,
     useless, socially undesirable, or disliked by the performer.
   – Intrinsic motivation is driven by an interest in or enjoyment of the task itself.
Examples of applications




Extrinsic vs intrinsic motivations
• Successful volunteer crowdsourcing is difficult to predict or replicate.
  – Highly context-specific.
  – Not applicable to arbitrary tasks.
• Reward models are often easier to study and control.*
  – Different models: pay-per-time, pay-per-unit, winner-takes-all…
    (a minimal payout sketch follows below)
  – Not always easy to abstract from social aspects (free-riding, social pressure…).
  – May undermine intrinsic motivation.
                          * in cases where performance can be reliably measured
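As an illustration of how these reward models differ, here is a minimal sketch; all numbers, function names, and the winner-takes-all scoring are invented for the example and are not taken from the tutorial.

```python
# Hypothetical illustration of the three reward models named above.

def pay_per_time(hours_worked: float, hourly_rate: float) -> float:
    """Workers are paid for the time they spend, regardless of output."""
    return hours_worked * hourly_rate

def pay_per_unit(units_completed: int, rate_per_unit: float) -> float:
    """Workers are paid per accepted unit of work (e.g., per annotation)."""
    return units_completed * rate_per_unit

def winner_takes_all(scores: dict[str, float], prize: float) -> dict[str, float]:
    """Only the best-performing contributor receives the (single) prize."""
    winner = max(scores, key=scores.get)
    return {worker: (prize if worker == winner else 0.0) for worker in scores}

if __name__ == "__main__":
    print(pay_per_time(hours_worked=2.5, hourly_rate=8.0))        # 20.0
    print(pay_per_unit(units_completed=120, rate_per_unit=0.05))  # 6.0
    print(winner_takes_all({"ann": 0.92, "bob": 0.88}, prize=50.0))
```

The point of the sketch is the contrast: the first two models reward effort or output directly, while the last one rewards only relative performance, which is what makes free-riding and social pressure harder to abstract away.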
Examples (ii)




Mason & Watts: Financial incentives and the performance of crowds, HCOMP 2009.
Amazon's Mechanical Turk
  • Types of tasks: transcription, classification, content generation,
    data collection, image tagging, website feedback, usability tests.*
  • Increasingly used by academia.
  • Vertical solutions built on top.
  • Research on extensions for complex tasks.




* http://behind-the-enemy-lines.blogspot.com/2010/10/what-tasks-are-posted-on-mechanical.html
Tasks amenable to crowdsourcing
• Tasks that are decomposable into simpler
  tasks that are easy to perform.
• Performance is measurable.
• No specific skills or expertise are required.
Patterns of tasks*
• Solving a task
   – Generate answers
   – Find additional information
   – Improve, edit, fix
• Evaluating the results of a task
   – Vote for accept/reject
   – Vote up/down to rank potentially correct answers
   – Vote best/top-n results
• Flow control
   – Split the task
   – Aggregate partial results
• Example: open-scale tasks in MTurk
   – Generate, then vote.
   – Introduce random noise to identify potential issues in the second step.
   (a minimal generate-then-vote sketch follows below)

[Diagram: Label image → Generate answer → Vote answers → Correct or not?]

 * "Managing Crowdsourced Human Computation" @ WWW2011, Ipeirotis
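To make the generate-then-vote pattern concrete, here is a minimal sketch of the two-step workflow with majority-vote aggregation; the labels, the noise item, and the function names are invented for the example and are not part of any framework mentioned in the tutorial.

```python
from collections import Counter

# Hypothetical generate-then-vote workflow for an image-labelling task.
# Step 1: several workers generate candidate labels for the same item.
generated_labels = ["cat", "cat", "kitten", "cat", "dog"]

# Step 2 (per the slide): inject random noise so that careless voters
# who accept everything can be detected in the voting step.
candidates = set(generated_labels) | {"airplane"}  # "airplane" is the noise label

# Step 3: other workers vote accept/reject on each candidate label.
votes = {
    "cat":      ["accept", "accept", "accept", "reject"],
    "kitten":   ["accept", "reject", "reject"],
    "dog":      ["reject", "reject", "accept"],
    "airplane": ["reject", "reject", "reject"],  # anyone accepting this is suspect
}

def majority_accepted(vote_list: list[str]) -> bool:
    """Aggregate votes for one candidate by simple majority."""
    counts = Counter(vote_list)
    return counts["accept"] > counts["reject"]

accepted = [label for label, vs in votes.items() if majority_accepted(vs)]
print(accepted)  # ['cat']
```

Splitting the task (generation vs evaluation) and aggregating partial results (the majority vote) are exactly the flow-control steps listed above.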
Examples (iii)




What makes game mechanics successful?*
• Accelerated feedback cycles.
   – Annual performance appraisals vs immediate feedback to maintain engagement.
• Clear goals and rules of play.
   – Players feel empowered to achieve goals vs the fuzzy, complex systems of
     rules in the real world.
• Compelling narrative.
   – Gamification builds a narrative that engages players to participate and
     achieve the goals of the activity.

• But in the end it's about what task users want to get better at.
*http://www.gartner.com/it/page.jsp?id=1629214
Images from http://gapingvoid.com/2011/06/07/pixie-dust-the-mountain-of-mediocrity/ and
http://www.hideandseek.net/wp-content/uploads/2010/10/gamification_badges.jpg
Guidelines
      • Focus on the actual goal and incentivize related
        actions.
            – Write posts, create graphics, annotate pictures, reply
              to customers in a given time…
      • Build a community around the intended actions.
            – Reward helping each other to perform the task and to interact.
            – Reward recruiting new contributors.
      • Reward repeated actions.
            – Actions become part of the daily routine.


Image from http://t1.gstatic.com/images?q=tbn:ANd9GcSzWEQdtagJy6lxiR2focH2D01Wpz7dzAilDuPsWnL0i4GAHgnm_0hyw3upqw
What tasks can be gamified?*
    • Tasks that are decomposable into simpler
      tasks, nested tasks.
    • Performance is measurable.
    • Obvious rewarding scheme.
    • Skills can be arranged in a smooth learning
      curve.




*http://www.lostgarden.com/2008/06/what-actitivies-that-can-be-turned-into.html
Image from http://www.powwownow.co.uk/blog/wp-content/uploads/2011/06/gamification.jpeg
What is different about semantic systems?
• It's still about the context of the actual application.

• User engagement with semantic tasks is needed in order to
   – Ensure knowledge is relevant and up-to-date.
   – Ensure people accept the new solution and understand its benefits.
   – Avoid cold-start problems.
   – Optimize maintenance costs.
Tasks in knowledge engineering
• Definition of vocabulary
• Conceptualization
   – Based on competency questions
   – Identifying instances, classes, attributes, relationships
• Documentation
   – Labeling and definitions
   – Localization (a minimal labeling/localization sketch follows below)
• Evaluation and quality assurance
   – Matching conceptualization to documentation
• Alignment
• Validating the results of automatic methods
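As a concrete illustration of the documentation step (labeling, definitions, localization), here is a minimal sketch using the rdflib Python library; the ontology namespace and the class are invented for the example.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS, OWL, SKOS

# Hypothetical ontology namespace, invented for this sketch.
EX = Namespace("http://example.org/hotel-ontology#")

g = Graph()
g.bind("ex", EX)
g.bind("skos", SKOS)

# Conceptualization: declare a class.
g.add((EX.DoubleRoom, RDF.type, OWL.Class))

# Documentation: a human-readable label and a definition ...
g.add((EX.DoubleRoom, RDFS.label, Literal("double room", lang="en")))
g.add((EX.DoubleRoom, SKOS.definition,
       Literal("A hotel room intended for two guests.", lang="en")))

# ... and localization: the same label in further languages.
g.add((EX.DoubleRoom, RDFS.label, Literal("Doppelzimmer", lang="de")))
g.add((EX.DoubleRoom, RDFS.label, Literal("cameră dublă", lang="ro")))

print(g.serialize(format="turtle"))
```

Each of these triples is a small, verifiable contribution, which is what makes documentation and localization natural candidates for human contribution at scale.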
http://www.ontogame.org
http://apps.facebook.com/ontogame




OntoGame API
• An API that provides several methods shared by the OntoGame games, such as:
   – Different agreement types (e.g., selection agreement).
   – Input matching (e.g., majority).
   – Game modes (multi-player, single-player).
   – Player reliability evaluation.
   – Player matching (e.g., finding the optimal partner to play).
   – Resource management (i.e., data needed for the games).
   – Creating semantic content.
   (a hypothetical interface sketch follows below)
• http://insemtives.svn.sourceforge.net/viewvc/insemtives/generic-gaming-toolkit
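The slide only names the building blocks; the sketch below shows how such a generic gaming toolkit could be organized in Python. All class and function names are invented for illustration and are not taken from the actual OntoGame/INSEMTIVES code base.

```python
from abc import ABC, abstractmethod
from collections import Counter

class AgreementStrategy(ABC):
    """Decides whether a set of player inputs counts as agreement."""

    @abstractmethod
    def agree(self, inputs: list[str]) -> bool: ...

class MajorityAgreement(AgreementStrategy):
    """Input matching by simple majority over all submitted inputs."""

    def agree(self, inputs: list[str]) -> bool:
        if not inputs:
            return False
        _, count = Counter(inputs).most_common(1)[0]
        return count > len(inputs) / 2

class PlayerModel:
    """Tracks per-player reliability as the share of agreed-upon inputs."""

    def __init__(self, name: str):
        self.name = name
        self.agreed = 0
        self.total = 0

    @property
    def reliability(self) -> float:
        return self.agreed / self.total if self.total else 0.0

def match_players(players: list[PlayerModel]) -> tuple[PlayerModel, PlayerModel]:
    """Naive player matching: pair the two most reliable players."""
    ranked = sorted(players, key=lambda p: p.reliability, reverse=True)
    return ranked[0], ranked[1]
```

Keeping agreement, reliability, and matching behind small interfaces is what lets the same toolkit back several games with different content types.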
OntoGame games




Case studies
• Methods applied
   –   Mechanism design.
   –   Participatory design.
   –   Games with a purpose.
   –   Crowdsourcing via MTurk.
• Semantic content
  authoring scenarios
   – Extending and populating
     an ontology.
   – Aligning two ontologies.
   – Annotation of text, media
     and Web APIs.
Lessons learned
• The approach is feasible for mainstream domains, where a (large-enough)
  knowledge corpus is available.
• Advertisement is important.
• Game design vs useful content.
   – Reusing well-known game paradigms.
   – Reusing game outcomes and integrating them in existing workflows and tools.

• But the approach is by design less applicable to
   – Knowledge-intensive tasks that are not easily nestable.
   – Repetitive tasks → how to retain players?

• Cost-benefit analysis.
Using Mechanical Turk for semantic content authoring
• Many design decisions are similar to GWAPs.
  – But with clear incentive structures.
  – How to reliably compare game and MTurk results?

• Automatic generation of HITs depending on the types of tasks and inputs
  (a minimal sketch follows below).

• Integration in productive environments.
  – Protégé plug-in for managing and using crowdsourcing results.
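As an illustration of generating HITs automatically from a task type and its inputs, here is a minimal, library-free sketch that renders one question per input item; the task types, templates, and data are invented for the example and are not tied to any particular MTurk SDK or to the plug-in mentioned above.

```python
# Hypothetical HIT generation: one question form per input item,
# parameterized by the type of semantic task.

HIT_TEMPLATES = {
    "verify_relation": (
        "Is the statement '{subject} {relation} {object}' correct?\n"
        "[ ] yes   [ ] no   [ ] cannot tell"
    ),
    "suggest_label": (
        "Provide a short English label for the concept described as:\n"
        "{definition}\n"
        "Label: ____________"
    ),
}

def generate_hits(task_type: str, inputs: list[dict]) -> list[str]:
    """Render one HIT question per input item for the given task type."""
    template = HIT_TEMPLATES[task_type]
    return [template.format(**item) for item in inputs]

if __name__ == "__main__":
    items = [
        {"subject": "Bonn", "relation": "is located in", "object": "Germany"},
        {"subject": "ISWC", "relation": "is a subclass of", "object": "Conference"},
    ]
    for question in generate_hits("verify_relation", items):
        print(question, end="\n\n")
```

The same generated questions could then be posted through whichever MTurk client or plug-in is used in production; the sketch only covers the templating step.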
Outline of the tutorial

Time            Presentation
14:00 – 14:45   Human contributions in semantic content authoring
14:45 – 15:30   Case study: motivating employees to annotate enterprise content
                semantically at Telefonica
15:30 – 16:00   Coffee break
16:00 – 16:45   Case study: Crowdsourcing the annotation of dynamic Web content
                at seekda
16:45 – 17:30   Case study: Content tagging at MoonZoo and MyTinyPlanets
17:30 – 18:00   Ten ways to make your semantic app addicted - revisited
Realizing the Semantic Web by
 encouraging millions of end-users to
      create semantic content.
