Week 10.1: Empirical Logit
! Finish Generalized LMERs
! Clarification on Logit Models
! Poisson Regression
! Empirical Logit
! Low & High Probabilities
! Empirical Logit
! Implementation
! Lab
Clarification on Logit Models
• Dependent/outcome variable is simply a variable made up of 0s and 1s
• We don't need to apply any kind of transformation ourselves
• glmer(Recalled ~ 1 + StudyTime * Strategy +
(1|Subject) + (1|WordPair),
data=cuedrecall, family=binomial)
• family=binomial tells R to analyze this using the logit
link function—everything handled automatically
Clarification on Logit Models
• When we get our results, they will be in terms
of log odds/logits (because we used
family=binomial)
• We may want to use exp() to transform our
estimates into effects on the odds to make them
more interpretable
• Elaborative rehearsal = +2.29 log odds of recall
• exp(2.29) = Odds of recall 9.87 times greater
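
A quick sketch of that back-transformation in R, assuming the fitted model from the previous slide is stored as model.Recall (a name used here only for illustration):

    # Fixed-effect estimates are on the log-odds scale...
    fixef(model.Recall)
    # ...so exponentiating converts them to multiplicative effects on the odds:
    exp(fixef(model.Recall))  # e.g., exp(2.29) = 9.87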
Week 10.1: Empirical Logit
! Finish Generalized LMERs
! Clarification on Logit Models
! Poisson Regression
! Empirical Logit
! Low & High Probabilities
! Empirical Logit
! Implementation
! Lab
Poisson Regression
• glmer() supports other non-normal
distributions, such as family=poisson
• Good when the DV is a frequency count
• Number of gestures made in communicative task
• Number of traffic accidents
• Number of ideas brainstormed
• Number of doctor’s visits
• Different from both normal
& binomial distributions
• Lower-bound at 0
• But, no upper bound
• This is a Poisson distribution!
[Figure: Poisson probability mass function; x-axis: Count (0 to 10 shown), y-axis: Probability (0.0 to 0.4)]
Poisson Regression
• Link function for a Poisson distribution is the log()
• Thus, we would also want to use exp()
• Effect of ASD on number of disfluent pauses: 0.27
• exp(0.27) = 1.31
• ASD increases frequency of pauses by 1.31 times

log(yi) = β0 + β1X1i + ei
• β0 = Intercept (Baseline); X1 = ASD
• Both log(yi) and the linear predictor can be any number
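
A minimal sketch of fitting such a model in R, assuming a hypothetical data frame disfluency with columns Disfluencies, ASD, and Family (mirroring the model on the next slide; the actual data aren't shown in these slides):

    library(lme4)
    # Poisson mixed model: the log link is handled automatically by family=poisson
    model.Pauses <- glmer(Disfluencies ~ 1 + ASD + (1|Family),
                          data = disfluency, family = poisson)
    exp(fixef(model.Pauses))  # back-transform to multiplicative effects on counts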
Poisson Models: Other Variants
• Offset term: Controls for differences in the
opportunities to observe the DV (e.g., time)
• e.g., one child observed for 15 minutes, another
for 14 minutes
• glmer(Disfluencies ~ 1 + ASD + (1|Family),
  offset=log(Time), family=poisson)
• Because the link is log(), the offset should be entered on the log scale: log(Time), not raw Time
Poisson Models: Other Variants
• Zero-inflated Poisson: Sometimes, more 0s
than expected under a true Poisson distribution
• Often, when a separate process creates 0s
• # of traffic violations " 0 if you don’t have a car
• # of alcoholic drinks per week " 0 if you don’t drink
• Zero-inflated model:
Simultaneously models
the 0-creating process
as a logit and the rest
as a Poisson
• Use the zeroinfl() function in the pscl package
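
A minimal sketch with zeroinfl(), using a hypothetical data frame drinks with columns DrinksPerWeek and Stress (illustrative names only). Note that zeroinfl() does not take random effects; the glmmTMB package offers zero-inflated mixed models:

    library(pscl)
    # Formula is count part | zero part: a Poisson for the counts and
    # a logit for the extra-zero process (here, just an intercept)
    model.zip <- zeroinfl(DrinksPerWeek ~ Stress | 1, data = drinks)
    summary(model.zip)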
Week 10.1: Empirical Logit
! Finish Generalized LMERs
! Clarification on Logit Models
! Poisson Regression
! Empirical Logit
! Low & High Probabilities
! Empirical Logit
! Implementation
! Lab
Source Confusion
• Remember our cued recall data from last week?
• A second categorical DV here is source confusions:
• Lots of theoretical interest in source memory—memory for the context or source where you learned something
• Who said it? Study: VIKING—COLLEGE, SCOTCH—VODKA. Test: recalling VIKING—vodka is a source confusion
• Odds are not the same thing as probabilities!
Source Confusion Model
• Let’s model what causes people to make source
confusions:
• model.Source <- lmer(SourceConfusion ~
Strategy*StudyTime.cen + (1|Subject) +
(1|WordPair), data=sourceconfusion)
• This code contains two mistakes—can we fix them?
Source Confusion Model
• Let’s model what causes people to make source
confusions:
• model.Source <- glmer(SourceConfusion ~
Strategy*StudyTime.cen + (1|Subject) +
(1|WordPair), data=sourceconfusion,
family=binomial)
• Failed to converge, with very questionable results:
• exp(20.08) = source errors 525,941,212 times more likely in the Maintenance condition… but not significant
Low & High Probabilities
• Problem: These are
low frequency events
• In fact, lots of theoretically interesting things have
low frequency
• Clinical diagnoses that are not common
• Various kinds of cognitive errors
• Language production, memory, language
comprehension…
• Learners’ errors in math or other educational
domains
Low & High Probabilities
• A problem for our model:
• Model was trying to find the odds of making a
source confusion within each study condition
• But: Source confusions were never observed with
elaborative rehearsal, ever!
• How small are the odds?
They are infinitely small in
this dataset!
• Note that not all failures to converge reflect low
frequency events. But when very low frequencies exist,
they are likely to cause convergence problems.
Low & High Probabilities
• Logit is undefined if probability = 1:
  logit = log[ p(confusion) / (1 - p(confusion)) ] = log[ 1 / 0 ]  ← division by zero!
• Logit is also undefined if probability = 0:
  logit = log[ p(confusion) / (1 - p(confusion)) ] = log[ 0 / 1 ] = log(0)
• Log 0 is undefined
• e^? = 0 has no solution: there is nothing to which you can raise e to get 0
Low & High Probabilities
• When close to 0 or 1, logit is defined but unstable
[Figure: the logit function; x-axis: PROBABILITY of recall (0.0 to 1.0), y-axis: LOG ODDS of recall (-4 to 4)]
• Relatively gradual change at moderate probabilities: p(0.6) -> 0.41, p(0.8) -> 1.39
• Fast change at extreme probabilities: p(0.95) -> 2.94, p(0.98) -> 3.89
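
These values are easy to verify in R with the built-in qlogis(), which computes the logit:

    qlogis(c(0.6, 0.8, 0.95, 0.98))
    # 0.4054651 1.3862944 2.9444390 3.8918203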
Low & High Probabilities
• A problem for our model:
• Question was how much less common source
confusions become with elaborative rehearsal
• But: Source confusions were never
observed with elaborative rehearsal
• Why we think this happened:
• In theory, elaborative subjects would
probably make at least one of these
errors eventually (given infinite trials)
• Not impossible
• But, empirically, probability was low
enough that we didn’t see the error
in our sample (limited sample size)
[Illustration: with a true probability of 1%, the error would eventually appear given N = ∞ trials, but can easily never appear with N = 8]
Week 10.1: Empirical Logit
! Finish Generalized LMERs
! Clarification on Logit Models
! Poisson Regression
! Empirical Logit
! Low & High Probabilities
! Empirical Logit
! Implementation
! Lab
Empirical Logit
• Empirical logit: An adjustment to the regular
logit to deal with probabilities near (or at) 0 or 1
logit = log[ p(confusion) / (1 - p(confusion)) ]
Empirical Logit
• Empirical logit: An adjustment to the regular
logit to deal with probabilities near (or at) 0 or 1
• Makes extreme values
(close to 0 or 1) less extreme
• A = source confusion occurred; B = source confusion did not occur

logit = log[ Num of “A”s / Num of “B”s ]
emp. logit = log_e[ (Num of “A”s + 0.5) / (Num of “B”s + 0.5) ]
Empirical Logit
• Empirical logit: An adjustment to the regular
logit to deal with probabilities near (or at) 0 or 1
• Makes extreme values
(close to 0 or 1) less extreme
logit = log[ Num of “A”s / Num of “B”s ]
emp. logit = log_e[ (Num of “A”s + 0.5) / (Num of “B”s + 0.5) ]

Counts          “True” logit    Empirical logit
10 As, 0 Bs     undefined       3.04
9 As, 1 B       2.20            1.85
6 As, 4 Bs      0.41            0.37

• Empirical logit doesn't go as high or as low as the “true” logit
• At moderate values, they're essentially the same
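
These values can be reproduced in R (log() is the natural log):

    log((10 + 0.5) / (0 + 0.5))  # 3.04; the raw logit log(10/0) is undefined
    log(9 / 1)                   # 2.20
    log((9 + 0.5) / (1 + 0.5))   # 1.85
    log(6 / 4)                   # 0.41
    log((6 + 0.5) / (4 + 0.5))   # 0.37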
• With larger samples, the difference gets much smaller (as long as probability isn't 0 or 1)
Week 10.1: Empirical Logit
! Finish Generalized LMERs
! Clarification on Logit Models
! Poisson Regression
! Empirical Logit
! Low & High Probabilities
! Empirical Logit
! Implementation
! Lab
Empirical Logit: Implementation
• Empirical logit requires summing up events and
then adding 0.5 to numerator & denominator:
• Thus, we have to (1) sum across individual trials,
and then (2) calculate empirical logit
empirical logit = log[ (Num A + 0.5) / (Num B + 0.5) ]
• e.g., Num of As for subject S10 in the Low associative strength, Maintenance rehearsal condition
• Result: a single empirical-logit value for each subject in each condition, not a sequence of one YES or NO for every item
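
A minimal sketch of these two steps, assuming the trial-level sourceconfusion data frame codes SourceConfusion as 0/1 (the psycholing package mentioned below automates this):

    library(dplyr)
    bySubj <- sourceconfusion %>%
      group_by(Subject, Strategy) %>%                   # (1) sum across trials...
      summarize(NumA = sum(SourceConfusion == 1),       # confusions
                NumB = sum(SourceConfusion == 0),       # non-confusions
                elogit = log((NumA + 0.5) / (NumB + 0.5)),  # (2) empirical logit
                .groups = "drop")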
Empirical Logit: Implementation
• Empirical logit requires summing up events and
then adding 0.5 to numerator & denominator:
• Thus, we have to (1) sum across individual trials,
and then (2) calculate empirical logit
• Can’t have multiple random effects with empirical
logit
• Would have to do separate by-subjects and by-
items analyses
• Collecting more data can be another solution
empirical logit = log[ (Num A + 0.5) / (Num B + 0.5) ]
• e.g., Num of As for subject S10 in the Maintenance rehearsal condition
Empirical Logit: Implementation
• Scott’s psycholing package can help calculate the
empirical logit & run the model
• Example script will be posted on Canvas
• Two notes:
• No longer using glmer() with family=binomial.
We’re now running the model on the empirical logit
value, which isn’t just a 0 or 1.
• Example: here, the value of the DV is -1.61
Empirical Logit: Implementation
• Scott’s psycholing package can help calculate the
empirical logit & run the model
• Example script will be posted on Canvas
• Two notes:
• No longer using glmer() with family=binomial.
We’re now running the model on the empirical logit
value, which isn’t just a 0 or 1.
• Because we calculate the empirical logit beforehand,
model doesn’t know how many observations went into
that value
• Want to appropriately weight the model: -2.46 could be the average across 10 trials or across 100 trials
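
A minimal sketch of such weighting, continuing the bySubj frame from the earlier sketch. Weighting each value by the inverse of its approximate sampling variance, 1/(NumA + 0.5) + 1/(NumB + 0.5), is a common recommendation (e.g., Barr, 2008); psycholing's exact implementation may differ:

    library(lme4)
    # Inverse-variance weights: more trials -> smaller variance -> larger weight
    bySubj$wts <- 1 / (1 / (bySubj$NumA + 0.5) + 1 / (bySubj$NumB + 0.5))
    # Ordinary lmer() on the empirical-logit DV, by-subjects random effect only
    model.elogit <- lmer(elogit ~ Strategy + (1|Subject),
                         data = bySubj, weights = wts)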
Week 10.1: Empirical Logit
! Finish Generalized LMERs
! Clarification on Logit Models
! Poisson Regression
! Empirical Logit
! Low & High Probabilities
! Empirical Logit
! Implementation
! Lab