IT UNIVERSITY OF COPENHAGEN
EyeGrip: Detecting Targets in a Series of
Uni-directional Moving Objects
Using Optokinetic Nystagmus
Eye Movements
Shahram Jalaliniya - Diako Mardanbegi
IT University of Copenhagen
Pervasive Interaction Technology Lab
MOTIVATION
• The information age overwhelms users with data
• Data is becoming more visual (e.g. the Web, Facebook)
• Scrolling through visual data is becoming more common
• Scrolling interaction includes:
- Scrolling through the content
- Stopping the page
- Bringing the desired content back
(not always an easy task)
EYEGRIP
EyeGrip automatically detects which of a set of scrolling images
appear interesting to the user by monitoring and analyzing
the user's eye movements.
EYEGRIP WORKS BASED ON OKN EYE MOVEMENTS
• Optokinetic nystagmus (OKN) is an eye movement that tends to
track the motion of one element at a time in a set of
unidirectional moving stimuli
• OKN is a combination of saccadic and smooth pursuit eye
movements
HOW DOES EYEGRIP WORK?
When one of the images grabs our
attention, we follow that image for
longer, which creates a peak
in the OKN signal.
[Figure: horizontal pupil coordinate ("Original data") over time; the regular OKN sawtooth is interrupted by a peak where one image is followed for longer]
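The conclusions note that a simple threshold-based method could stand in for the classifier. As a rough illustration of how the long pursuit shows up as a peak (not the paper's implementation; all names, window sizes, and thresholds here are illustrative):

```python
import numpy as np

def detect_okn_peaks(x, window=20, threshold=0.25):
    """Flag frames where the horizontal pupil coordinate stays well above
    its baseline (the eye keeps following one image instead of resetting).

    x: 1-D array of horizontal pupil coordinates, one value per frame.
    window/threshold: illustrative values, not the paper's settings.
    """
    x = np.asarray(x, dtype=float)
    baseline = np.median(x)                    # typical resting level of the sawtooth
    kernel = np.ones(window) / window          # moving average suppresses saccadic resets
    smooth = np.convolve(x - baseline, kernel, mode="same")
    return smooth > threshold * (x.max() - x.min() + 1e-9)

# Synthetic OKN-like stream: regular pursuit/saccade cycles, with one
# longer pursuit in the middle standing in for the "gripped" target.
t = np.arange(300)
signal = (t % 20) / 20.0                       # regular sawtooth cycles
signal[150:200] = np.linspace(0.0, 2.0, 50)    # one long pursuit (the peak)
flags = detect_okn_peaks(signal)
print(flags[150:210].any())                    # True: the long pursuit is flagged
```

The moving average is what separates the sustained pursuit from the individual sawtooth teeth, which average out to roughly the baseline.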
EXPERIMENT GOAL
Testing the feasibility of EyeGrip in different scrolling
conditions:
- Different scrolling speeds
- Different maximum numbers of visible images on the screen (manipulated by
changing image width)
EXPERIMENTAL DESIGN (3 × 2)
• 20 participants
• 3 speeds: 26.5, 37.5, and 49°/sec
• 2 image widths: 18° (W_image / W_display = 0.6) and 9° (W_image / W_display = 0.3)
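These numbers can be cross-checked with basic visual-angle geometry. The viewing distance is not stated on the slide; a distance of about 65 cm is an inference from the stated ratios and the 34.5 cm display width given in the apparatus section:

```python
import math

def deg_to_cm(angle_deg, viewing_distance_cm):
    """Size on screen (cm) subtending a given visual angle at a given
    viewing distance: s = 2 * d * tan(theta / 2)."""
    return 2 * viewing_distance_cm * math.tan(math.radians(angle_deg) / 2)

# At a hypothetical ~65 cm viewing distance, an 18-degree image is about
# 20.6 cm wide, close to 0.6 of the 34.5 cm display, matching the ratio.
print(round(deg_to_cm(18, 65), 1))  # 20.6
```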
APPARATUS
• Head-mounted eye tracker with the Haytham open
source gaze tracker (20 Hz sampling rate)
• Laptop to display the scrolling images & collect eye data
[Figure: scrolling display, 34.5 cm × 19.5 cm. (a) Small-width conditions: 1, 3, and 5. (b) Big-width conditions: 2, 4, and 6.]
EXPERIMENT TASK
• Visual search among faces: participants pressed the
space key as soon as they saw Bill Clinton's
picture among the other faces
• Participants repeated the task for all 6 conditions
• 40 random images of famous people were displayed
in each condition, 7 of which were Bill Clinton's photos
USER ERROR
The error rate: the total number of target images missed by
participants divided by the total number of targets
[Figure: (b) user error rate (%) per condition, ranging from 0 to 10%]

Cond.        1      2      3      4      5      6
Speed        slow   slow   med    med    fast   fast
Image width  small  big    small  big    small  big
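The definition above translates directly into code. The numbers below are only an example (one missed target out of the 7 Clinton photos shown per condition), not a result from the study:

```python
def user_error_rate(missed_targets, total_targets):
    """Error rate as defined on the slide:
    targets missed by participants / total number of targets, in percent."""
    return 100.0 * missed_targets / total_targets

# Missing 1 of the 7 target photos in a condition:
print(round(user_error_rate(1, 7), 1))  # 14.3
```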
DATA ANALYSIS
• Cleaning data: removing 5 participants with less than 75% of valid data
• Normalization: finding reference left & right eye coordinates by
displaying 2 red circles at the beginning of each
task. We used these coordinates to bring all the data into the same
range (min-max normalization)
• Aggregation: we aggregated the data from the remaining 15 participants
• Labeling data: we used the space key presses to label the collected data
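The normalization step can be sketched as follows, assuming the two red circles provide reference coordinates for the leftmost and rightmost gaze positions (function and parameter names are hypothetical, not from the paper):

```python
import numpy as np

def minmax_normalize(gaze_x, left_ref, right_ref):
    """Map raw horizontal pupil coordinates into [0, 1] using the
    coordinates recorded while the participant looked at the two
    red reference circles (left_ref < right_ref)."""
    gaze_x = np.asarray(gaze_x, dtype=float)
    return (gaze_x - left_ref) / (right_ref - left_ref)

raw = [210.0, 250.0, 290.0]                       # raw pixel coordinates
norm = minmax_normalize(raw, left_ref=200.0, right_ref=300.0)
print(norm)                                       # [0.1 0.5 0.9]
```

Anchoring every participant to the same two reference points is what makes the data from all participants comparable before aggregation.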
EVENT DETECTION ALGORITHM
• We used the default settings of the multilayer perceptron
algorithm in WEKA with a single hidden layer
• The horizontal coordinate of the pupil center was the only
feature
• The sliding window size was selected for maximum
performance:
- 30 frames for conditions 1, 2, and 4
- 20 frames for conditions 3 and 6
- 16 frames for condition 5
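The feature construction described above (overlapping sliding windows over the single horizontal-coordinate feature, labeled by the space-key presses) can be sketched as below. The actual classifier was WEKA's multilayer perceptron and is not reproduced here; the data stream and function names are synthetic stand-ins:

```python
import numpy as np

def make_windows(x, key_press, window=30):
    """Slice the horizontal pupil-coordinate stream into overlapping,
    fixed-size feature vectors; a window is labeled positive when it
    contains a space-key press (the experiment's ground truth)."""
    X, y = [], []
    for i in range(len(x) - window + 1):
        X.append(x[i:i + window])
        y.append(int(np.any(key_press[i:i + window])))
    return np.array(X), np.array(y)

x = np.linspace(0.0, 1.0, 100)        # toy coordinate stream, 100 frames
key_press = np.zeros(100, dtype=bool)
key_press[60] = True                  # one labeled target event at frame 60
X, y = make_windows(x, key_press, window=30)
print(X.shape)                        # (71, 30)
print(y.sum())                        # 30 windows contain the press
```

Each row of `X` would then be fed to the classifier; the window size (here 30 frames) is the parameter the slide reports tuning per condition.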
EFFECT OF SPEED & MAX NO. OF IMAGES
• 10-fold cross-validation was used
• Repeated-measures ANOVA:
- Significant effect of image width,
F(1,14) = 34.9, p < .0001
- No significant effect of speed
- But higher speeds caused more errors
(missed targets)
[Figure: classification accuracy (%). (a) Per condition: small vs. big image width at slow, medium, and fast speed. (b) By speed, for small and big image widths. (c) By image width, for slow, medium, and fast speeds.]
DESIGN GUIDELINES
• Images should move in one direction at a
constant speed
• There should be a balance between moving
speed & the maximum number of images
• The visual search task should not be too
complex; otherwise all images draw equally
high attention, which increases false positives
STUDY 1: A PICTURE SELECTION SYSTEM
• 8 participants
• Speed: 37°/sec
• Image width: 18°
• Selecting Clinton pictures
[Figure: selection accuracy, precision, and recall (%)]
[Figure: workload ratings on a 5-point Likert scale: mental demand, physical demand, temporal demand, performance, effort, frustration]
STUDY 2: MIND READING GAME
• 10 participants
• Select 1 of 4 characters
• Participants were asked to count the
repetitions of the selected character
• Accuracy: 100%
OTHER SUGGESTED APPLICATIONS
• Interaction with scrolling menus (e.g. scrolling cards
on Google Glass)
• EyeGrip for browsing a Facebook page
• Advertisement on public displays
• Text reading assistant for small displays (slowing
down the text when a user has a problem with a word)
• Assistant for visual inspection in production lines
(automatically detecting unqualified products)
RELATED WORK
• Pursuits [1] is a calibration-free
technique for detecting a limited number
of moving objects on the screen
using smooth pursuit eye
movements.
• EyeGrip detects an unlimited number
of unidirectional moving objects.
[1] Mélodie Vidal, Andreas Bulling, and Hans Gellersen. 2013. Pursuits: Spontaneous
Interaction with Displays Based on Smooth Pursuit Eye Movement and Moving Targets. In
Proceedings of UbiComp ’13. ACM, 439–448.
CONCLUSIONS
• EyeGrip is a calibration-free, implicit eye
interaction technique for selecting an object among
other unidirectional moving objects (top-down
attention)
• EyeGrip can be used in gaze-contingent user
interfaces to detect what draws users' attention
(bottom-up attention)
• Simpler algorithms (e.g. threshold-based methods)
can also be applied to detect the event in EyeGrip