@playfulMIT
Learning Analytics Design in
Game-based Learning
José A. Ruipérez Valiente - @JoseARuiperez
YJ Kim - @yjkimchee
Introductions
Who are you and why am I here?
playful.mit.edu
What we believe in
Who we are
Workshop organizers
José A. Ruipérez-Valiente
- Postdoc @ MIT
- Engineer, focused on edtech
- 6 years working in learning
analytics across many objectives
and contexts
- Currently focused on large-scale
trends in MOOCs and games for
learning
YJ (Yoon Jeon) Kim
- Executive Director of the Playful
Journey Lab at MIT Open Learning
- Assessment scientist
- Focus on games and playful
approaches for assessment
- Just had a baby!
Overview of the Workshop
PART I:
- Overview of game-based learning and assessment
- Example with Physics Playground and evidence-centered design approach
- Activity: mapping game evidence to constructs
PART II:
- Introduction to learning analytics
- Example with Shadowspect and the Learning Analytics approach
- Activity: mapping evidence to data and designing algorithmic machinery
- Share out by each group
How does the workshop relate to TEL topics?
- Game-based Learning
- Game-based Assessment
- Learning Analytics
- … and Design (which cuts across numerous areas and applications)
FIRST PART:
What makes a good game that can afford analytics, and how do we avoid junk in, junk out?
Assessment is a process of reasoning
from evidence. Therefore, an
assessment is a tool designed to
observe students’ behavior and
produce data that can be used to draw
reasonable inferences about what
students know.
~ Bob Mislevy
A game is a voluntary interactive
activity, in which one or more players
follow rules that constrain their
behavior, enacting an artificial conflict
that ends in a quantifiable outcome.
~Eric Zimmerman (2004)
Why Games?
● Games are “flexible enough for players to
inhabit and explore through meaningful
play” (Salen & Zimmerman 2004, p. 165)
(deep learning)
● Learners have more freedom related to
how much effort they choose to expend,
how often they fail and try again (Osterweil,
2014) (real life)
Why Games for Assessment?
● Games incorporate multiple pathways to solution(s) → Rapid/numerous data
streams → Rich and comprehensive student models
● Use complex and authentic problems → construct validity, especially for
hard-to-measure constructs
● Games are motivating and engaging → accurate assessment (Sundre &
Wise, 2003)
● It doesn’t feel like assessment (i.e. stealth assessment)
Metaphor
Design Process
Design, Development and Evaluation Process of Game-based
Assessment
Game Design vs. Assessment Design
Our simplified case scenario right now
Evidence → (map) → Constructs
Example of Evidence-centered
design with the Physics Playground
Video: Physics Playground
See video of Physics Playground here
Physics Playground: Competencies and Evidence
Physics Playground: Data example
Physics Playground: Expert Assessment of Constructs
Now it’s your turn!
Let’s form groups by game!
If you already have an educational game you would like to work on, let me know. Otherwise:
- Shadowspect
- Physics Playground (user = demo1, pw = demo1pass)
- Quandary
- Spanish Bingo
- Make a Number
- Argument Wars
- Want to try another game? Some educational game aggregators: MathPlayground, iCivics, or ABCya
Activity!
- Play the game for about 15 minutes to understand the domain and mechanics
- Select and read about a construct that could be measured in the game:
- Content knowledge: e.g., if the game is about mathematics, multiplication or addition. A more
detailed approach is checking the Common Core Standards by age
- Behavioral constructs: Off-task, persistence, productive struggle, non-constructive behaviors
- Cognitive skills: Spatial reasoning, creativity, memory, attention...
- Find what Evidence in the game is related to that construct
- What changes would you make in the game mechanics to facilitate measuring those
constructs?
Share out
- Recommended format:
a. What was your game about?
b. For each construct that you have identified as measurable:
- Name the construct and describe it
- What is the evidence you find in the game related to the construct?
- Do you recommend a change in the game mechanics to improve the
evidence?
c. What did you learn with this activity?
BREAK
The next part will focus on how to engineer the analytics that inform your
constructs
SECOND PART:
How can we effectively use learning analytics for inference in games for learning?
Recap
What we did:
- Introduced why games and assessment go well together
- Presented an example of how to apply Evidence-centered Design in a game
- Did an activity to map the evidence in a game to constructs
What we are doing now:
- Introduce LA and why it is a good approach for learning games
- Present an example of classical data-driven LA in a game
- Activity to map evidence to data (schema), and constructs to features (algorithms)
The Broad view of Learning Analytics
…collection, analysis and reporting of data about learners and
their contexts, for purposes of understanding and optimising
learning and the environments in which it occurs…
Source: First Learning Analytics
and Knowledge Conference
The Learning Analytics data-driven Process
Our simplified case scenario now updates to:
Evidence → (map) → Constructs
Data → (algorithms over the data schema) → Features → (inform) → Constructs
Example of a Learning Analytics
approach with Shadowspect
Video of Shadowspect
See video of Shadowspect here
Efficiency construct
- Efficiency is the ability to do things well, successfully, and without waste. It
often specifically comprises the capability of a specific application of effort
to produce a specific outcome with a minimum amount or quantity of
waste, expense, or unnecessary effort (Wikipedia)
Evidence in Shadowspect related to efficiency
● Ability to do things well:
○ Solving puzzles correctly
● Expense or effort:
○ Time invested
○ Number of attempts to solve a problem
Mapping evidence into necessary data in Shadowspect
● We need: puzzles solved correctly, time invested and attempts
○ Necessary types of events for that:
■ puzzle_start (timestamp, student, puzzle_id)
■ leave_to_menu (timestamp, student, puzzle_id)
■ puzzle_attempt (timestamp, student, puzzle_id, correct)
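To make the schema concrete, here is a minimal sketch of what a few raw events could look like. All field values (student ID, timestamps, puzzle IDs) are made up for illustration; they are not real Shadowspect data.

```python
# Hypothetical sample of the three event types listed above.
sample_events = [
    {"type": "puzzle_start",   "timestamp": 0,   "student": "s01", "puzzle_id": "p1"},
    {"type": "puzzle_attempt", "timestamp": 95,  "student": "s01", "puzzle_id": "p1", "correct": False},
    {"type": "puzzle_attempt", "timestamp": 160, "student": "s01", "puzzle_id": "p1", "correct": True},
    {"type": "leave_to_menu",  "timestamp": 170, "student": "s01", "puzzle_id": "p1"},
]

# Each record carries enough information to recover attempts,
# correctness, and time on task per puzzle.
attempts = [e for e in sample_events if e["type"] == "puzzle_attempt"]
print(len(attempts))  # how many attempts appear in this tiny log
```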
What does the data in Shadowspect actually look like?
Algorithm to compute features from data (pseudo-code)
# note: this is a VERY simplified version that does not aim to be the most efficient implementation of this algorithm
computeEfficiencyFeatures(student):
    student_events = getStudentEvents(student)  # assumed sorted by timestamp
    correct_puzzles = set(); number_attempts = 0; total_time = 0; puzzle_start_event = None
    for event in student_events:
        if event['type'] == 'puzzle_start' then
            puzzle_start_event = event
        elif event['type'] == 'leave_to_menu' then
            total_time += (event['timestamp'] - puzzle_start_event['timestamp'])
            puzzle_start_event = None
        elif event['type'] == 'puzzle_attempt' then
            number_attempts += 1
            if event['correct'] == True then
                correct_puzzles.add(event['puzzle_id'])
    # effort invested per correctly solved puzzle (higher = less efficient)
    attempts_per_correct_puzzle = number_attempts / length(correct_puzzles)
    time_per_correct_puzzle = total_time / length(correct_puzzles)
    return (attempts_per_correct_puzzle, time_per_correct_puzzle)
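The pseudo-code can be turned into a small runnable Python sketch. The tiny event log below is hypothetical, standing in for whatever a real getStudentEvents would return from the telemetry store, and the guard for students with no solved puzzles is our addition.

```python
def compute_efficiency_features(events):
    """Compute efficiency features from a time-sorted list of game events."""
    correct_puzzles = set()
    number_attempts = 0
    total_time = 0
    start_event = None
    for event in events:
        if event["type"] == "puzzle_start":
            start_event = event
        elif event["type"] == "leave_to_menu" and start_event is not None:
            total_time += event["timestamp"] - start_event["timestamp"]
            start_event = None
        elif event["type"] == "puzzle_attempt":
            number_attempts += 1
            if event["correct"]:
                correct_puzzles.add(event["puzzle_id"])
    solved = len(correct_puzzles)
    if solved == 0:  # guard: student solved nothing, features undefined
        return None, None
    return number_attempts / solved, total_time / solved

# Hypothetical log: one puzzle, two attempts, 120 seconds of play.
events = [
    {"type": "puzzle_start",   "timestamp": 0,   "puzzle_id": "p1"},
    {"type": "puzzle_attempt", "timestamp": 60,  "puzzle_id": "p1", "correct": False},
    {"type": "puzzle_attempt", "timestamp": 110, "puzzle_id": "p1", "correct": True},
    {"type": "leave_to_menu",  "timestamp": 120, "puzzle_id": "p1"},
]
attempts_per_correct, time_per_correct = compute_efficiency_features(events)
print(attempts_per_correct)  # 2.0 attempts per solved puzzle
print(time_per_correct)      # 120.0 seconds per solved puzzle
```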
The previous general scenario
Evidence → (map) → Constructs
Data → (algorithms over the data schema) → Features → (inform) → Constructs
Model for efficiency in Shadowspect
Construct: Efficiency
Evidence (maps to the construct):
● Correct puzzles
● Time
● Number of attempts
Data (data schema):
● puzzle_start
● leave_to_menu
● puzzle_attempt
Algorithm: computeEfficiencyFeatures(student)
Features (inform the construct):
● attempts_per_correct_puzzle
● time_per_correct_puzzle
How to join expert and LA flows?
Evidence → (map, via expert assessment) → Constructs
Data → (algorithms over the data schema) → Features → (inform, via ML/AI) → Constructs
Now it’s your turn!
Activity!
1. Use the constructs you defined during the previous activity and add new ones.
For each construct:
a. Map the Evidence you found in the game into the actual data that you need
b. Write human-readable instructions or pseudo-code that explains how to
process that data to generate features that represent a construct
c. What changes would you make in the game to facilitate capturing useful data to
measure that construct?
Share out
- Describe your game: What are the main mechanics? What learning is involved?
- For each construct that you have identified as measurable:
a. Name the construct and describe it
b. What is the evidence you find in the game related to the construct?
c. How did you map the evidence into data?
d. How did you measure your constructs? Both ECD and LA are valid
e. Do you recommend a change in the game mechanics to improve the evidence
that the game can generate?
- What did you learn with this activity?
Key takeaways
- We applied ECD and LA approaches to measure constructs in games
- This was a very simplified version of the complete game-based assessment process;
we did not have time to cover:
a. Initial design phase: domain knowledge-game mechanics-data alignment
b. Playtesting and iteration
c. Application and user target
d. Evaluation
playful.mit.edu


Editor's Notes

  • #16: YJ
  • #18: The process begins by identifying what should be assessed in terms of knowledge, skills, or other learner attributes. These variables cannot be observed directly, so behaviors and performances that demonstrate them need to be identified instead. The next step is determining the types of tasks or situations that would draw out such behaviors or performances. Example around simple math knowledge in a game:
  • #21: YJ
  • #27: YJ
  • #37: YJ
  • #47: YJ