Accommodation-invariant Computational Near-eye Displays
Robert Konrad1, Nitish Padmanaban1, Keenan Molner1, Emily A. Cooper2, Gordon Wetzstein1
1 Stanford University, 2 Dartmouth College
http://www.computationalimaging.org
Virtual Image
Accommodation and Retinal Blur
Blur Gradient Driven Accommodation
Q: can we drive accommodation with stereoscopic cues by optically removing the retinal blur cue?
Real World: Vergence & Accommodation Match!
Current VR Displays: Vergence & Accommodation Mismatch!
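As a concrete illustration of the mismatch (a minimal sketch of my own, not from the slides): both distances can be expressed in diopters, and the conflict is simply their difference. The 1.5 m virtual image distance below is an assumed example value.

```python
# Hypothetical illustration: quantify the vergence-accommodation conflict (VAC).
# The 1.5 m virtual image distance is an assumed example, not a value from the talk.

def to_diopters(distance_m: float) -> float:
    """Convert a distance in meters to diopters (1/m); infinity maps to 0 D."""
    return 0.0 if distance_m == float("inf") else 1.0 / distance_m

def vac_diopters(vergence_dist_m: float, focal_plane_m: float) -> float:
    """Mismatch between where the eyes converge and where the display forces focus."""
    return abs(to_diopters(vergence_dist_m) - to_diopters(focal_plane_m))

# Example: content rendered at 0.3 m while the virtual image sits at a fixed 1.5 m.
print(f"VAC: {vac_diopters(0.3, 1.5):.2f} D")   # ~2.67 D of conflict
```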
Existing Approaches
Multiplane: Rolland et al., Applied Optics 2000; Akeley et al., SIGGRAPH 2004
Focal Surfaces: Matsuda et al., SIGGRAPH 2017
Light Field: Huang et al., SIGGRAPH 2015; Lanman et al., SIGGRAPH Asia 2013
Holographic: Maimone et al., SIGGRAPH 2017
Adaptive Focus: Sugihara et al., SID 1998; Liu et al., ISMAR 2008; Koulieris et al., SIGGRAPH 2017; Padmanaban et al., PNAS 2017
How do we remove the blur cue?
Aperture Controls Depth of Field
Image courtesy of Concept One Studios
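To make the aperture/depth-of-field relationship concrete, here is a rough small-angle sketch (my own approximation, not from the slides): the angular size of the retinal blur circle is roughly the pupil diameter times the defocus expressed in diopters, so shrinking the aperture shrinks the blur for every depth.

```python
# Rough small-angle approximation (assumption, not from the talk):
# blur angle [rad] ≈ pupil diameter [m] × |dioptric defocus| [1/m]
import math

def blur_angle_arcmin(pupil_mm: float, focus_dist_m: float, object_dist_m: float) -> float:
    defocus_D = abs(1.0 / focus_dist_m - 1.0 / object_dist_m)   # defocus in diopters
    blur_rad = (pupil_mm * 1e-3) * defocus_D                    # angular blur in radians
    return math.degrees(blur_rad) * 60.0                        # convert to arcminutes

# A 4 mm pupil focused at 0.5 m viewing an object at 2 m:
print(blur_angle_arcmin(4.0, 0.5, 2.0))   # ≈ 20.6 arcmin of blur
# Shrinking the aperture to 1 mm cuts the blur proportionally:
print(blur_angle_arcmin(1.0, 0.5, 2.0))   # ≈ 5.2 arcmin
```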
Maxwellian-type (pinhole) Near-eye Displays
Point Light Source
Severely reduces eyebox!
Spatial Light Modulator
Point Light Source
Focal Sweep
EDOF Cameras: Dowski & Cathey, Applied Optics 1995; Nagahara et al., ECCV 2008; Cossairt et al., SIGGRAPH 2010
60 Hz
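By analogy with the EDOF cameras cited above, a fast focal sweep yields a nearly depth-invariant blur because the time-averaged PSF is almost the same no matter where the eye is focused. The toy simulation below is my own illustration with made-up parameters, not the authors' model.

```python
# Toy model (assumption): approximate the time-averaged PSF of a focal sweep by
# averaging disk blur kernels whose radius grows with |sweep focus - eye focus|.
import numpy as np

def disk(radius_px: float, size: int = 65) -> np.ndarray:
    """Normalized disk kernel; a radius below one pixel collapses to a single pixel."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    d = (x**2 + y**2) <= max(radius_px, 0.5) ** 2
    return d / d.sum()

def swept_psf(eye_focus_D: float, sweep_D=(0.0, 4.0), px_per_diopter: float = 6.0,
              steps: int = 60) -> np.ndarray:
    """Average PSF seen by an eye focused at eye_focus_D during one full sweep."""
    kernels = [disk(px_per_diopter * abs(d - eye_focus_D))
               for d in np.linspace(*sweep_D, steps)]
    return np.mean(kernels, axis=0)

# PSFs for an eye focused at 1 D vs. 3 D are nearly identical:
p1, p3 = swept_psf(1.0), swept_psf(3.0)
print(np.abs(p1 - p3).sum())   # small residual -> approximately depth-invariant
```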
PSF Engineering
Spatially Invariant
Depth Invariant
Displayed Image ∗ PSF = Retinal Image
Deconvolution?
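One natural reading of the "Deconvolution?" question is to precompensate the displayed image by deconvolving it with the depth-invariant PSF before showing it. The sketch below is a generic Wiener-filter illustration of that idea, not the processing used in the paper; the noise-to-signal parameter and the clipping step are assumptions.

```python
# Generic Wiener deconvolution sketch (illustrative; not the authors' pipeline).
import numpy as np

def psf_to_otf(psf: np.ndarray, shape) -> np.ndarray:
    """Zero-pad the PSF to the image size and circularly shift its center to pixel (0, 0)."""
    padded = np.zeros(shape)
    padded[:psf.shape[0], :psf.shape[1]] = psf
    padded = np.roll(padded, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(padded)

def wiener_precompensate(target: np.ndarray, psf: np.ndarray, nsr: float = 1e-2) -> np.ndarray:
    """Pre-filter `target` so that (precompensated image) convolved with psf ≈ target."""
    H = psf_to_otf(psf, target.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)         # Wiener inverse filter
    pre = np.real(np.fft.ifft2(np.fft.fft2(target) * W))
    return np.clip(pre, 0.0, 1.0)                    # a display cannot emit negative light

# Usage sketch: `frame` is a grayscale image in [0, 1]; `psf` is the measured swept PSF.
# precompensated = wiener_precompensate(frame, psf)
```

The final clipping step hints at why pure deconvolution is problematic in practice: negative and out-of-range values cannot be displayed, which limits how much of the blur can be inverted.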
Image comparisons: Target Image, Conventional Display @ 1 D, Conventional Display @ 3 D, AI @ 3 D
Strobing the Backlight
Accommodation Invariant
Multi-plane Accommodation Invariant
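Per the speaker notes, the backlight is strobed during the focal sweep to create multiple virtual image planes. The sketch below illustrates the timing logic only; the linear 0-4 D sweep, the 60 Hz rate, and the pulse width are assumptions, not the prototype's actual drive parameters.

```python
# Illustrative strobe-timing sketch (assumption: a linear 0-4 D sweep at 60 Hz).
import numpy as np

SWEEP_HZ = 60.0                 # one focal sweep per display refresh (assumed)
SWEEP_RANGE_D = (0.0, 4.0)      # dioptric range covered by the sweep (assumed)

def strobe_windows_ms(target_planes_D, pulse_ms: float = 0.5):
    """Return (start, stop) strobe windows within one sweep, one per virtual plane."""
    period_ms = 1e3 / SWEEP_HZ
    lo, hi = SWEEP_RANGE_D
    windows = []
    for plane in target_planes_D:
        frac = (plane - lo) / (hi - lo)          # where in the sweep this power occurs
        center = frac * period_ms
        windows.append((center - pulse_ms / 2, center + pulse_ms / 2))
    return windows

# Two-plane AI mode: strobe near 1 D and 3 D during each ~16.7 ms sweep.
print(strobe_windows_ms([1.0, 3.0]))
```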
Point Spread Functions
Rows: Conventional, AI, AI 2-Plane, AI 3-Plane
Focus distances: 4 D (0.25 m), 3 D (0.33 m), 2 D (0.5 m), 1 D (1 m), 0 D (∞)
MTF of Prototype (plotted against Spatial Frequency in cycles/degree)
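The speaker notes mention that the MTF was captured with the industry-standard slanted-edge method. A simpler way to think about the quantity being plotted: the MTF is the normalized magnitude of the Fourier transform of the line spread function. The sketch below is a generic illustration with an assumed angular pixel pitch, not the prototype's calibration.

```python
# Generic MTF-from-PSF sketch (assumed pixel pitch; not the prototype's calibration).
import numpy as np

def mtf_cycles_per_degree(psf_2d: np.ndarray, deg_per_px: float = 0.02):
    """Collapse a 2-D PSF to a line spread function, then FFT to get the MTF."""
    lsf = psf_2d.sum(axis=0)                          # integrate along one axis -> LSF
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                                     # normalize so DC = 1
    freqs = np.fft.rfftfreq(lsf.size, d=deg_per_px)   # frequency axis in cycles/degree
    return freqs, mtf
```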
Q: can we drive accommodation with stereoscopic cues by optically removing the retinal blur cue?
Follow the target with your eyes: 4 D (0.25 m) ↔ 0.5 D (2 m)
User Study #1: 11 participants
Stimulus
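Per the speaker notes, the Study 1 target oscillated between 0.5 D and 4 D (mean 2.25 D, amplitude 1.75 D) at 0.125 Hz. A small reconstruction sketch of that dioptric trajectory follows; modeling the oscillation as a sinusoid and the 100 Hz sampling rate are my assumptions.

```python
# Reconstruction sketch of the Study 1 stimulus trajectory (sinusoid is an assumption).
import numpy as np

MEAN_D, AMP_D, FREQ_HZ = 2.25, 1.75, 0.125   # values from the speaker notes
FS_HZ = 100.0                                 # assumed sampling rate

t = np.arange(0.0, 2.0 / FREQ_HZ, 1.0 / FS_HZ)              # two full cycles
target_D = MEAN_D + AMP_D * np.sin(2 * np.pi * FREQ_HZ * t)  # dioptric target position
target_m = 1.0 / target_D                                     # dioptric -> metric distance
print(target_D.min(), target_D.max())                         # ≈ 0.5 D ... 4.0 D
```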
Look at each target: 0.5 D (2 m) to 4 D (0.25 m)
User Study #2: 12 participants
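Per the speaker notes, Study 2 showed a static target at 9 discrete depths with a 2 s blank period and a 3 s stimulus per trial. The sketch below lays out one plausible version of that protocol; the evenly spaced dioptric depths and the randomized ordering are assumptions, and only the counts and timings come from the notes.

```python
# Study 2 protocol sketch (9 evenly spaced depths between 0.5 D and 4 D are an assumption;
# the 2 s blank and 3 s stimulus durations are from the speaker notes).
import random

depths_D = [0.5 + i * (4.0 - 0.5) / 8 for i in range(9)]   # 9 dioptric target depths
BLANK_S, STIMULUS_S = 2.0, 3.0

trial_order = random.sample(depths_D, k=len(depths_D))      # randomized presentation order
for depth in trial_order:
    print(f"blank {BLANK_S:.0f} s -> show target at {depth:.2f} D for {STIMULUS_S:.0f} s")
```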
Ideal Accommodation Response
Individual User Response
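The speaker notes discuss accommodative gain relative to the ideal response. A common way to estimate it, shown below as a generic sketch rather than the authors' analysis code, is to fit a sinusoid at the stimulus frequency to the measured accommodation trace and take the ratio of the fitted amplitude to the stimulus amplitude.

```python
# Generic accommodative-gain sketch (illustrative; not the authors' analysis code).
import numpy as np

def accommodative_gain(t_s: np.ndarray, accom_D: np.ndarray,
                       stim_freq_hz: float = 0.125, stim_amp_D: float = 1.75) -> float:
    """Least-squares sinusoid fit at the stimulus frequency; gain = amplitude ratio."""
    w = 2 * np.pi * stim_freq_hz
    A = np.column_stack([np.sin(w * t_s), np.cos(w * t_s), np.ones_like(t_s)])
    coeffs, *_ = np.linalg.lstsq(A, accom_D, rcond=None)
    fitted_amp = np.hypot(coeffs[0], coeffs[1])
    return fitted_amp / stim_amp_D   # 1.0 = perfectly follows the stimulus, 0.0 = no response
```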
Discussion
Focus Range/Planes
Spatial Resolution
Image courtesy of karma.com
Future Work – Multifocal Lenses
Robert Konrad
Computational Imaging Lab
Stanford University
stanford.edu/~rkkonrad
computationalimaging.org
User Comfort Study
Comfort!
Courtesy of vroom.buzz
Accommodation-invariant Near-eye Displays
Results grid: Conventional, AI, AI 2-Plane, and AI 3-Plane modes captured at 3 D, 2.5 D, 2 D, 1.5 D, 1 D, and 0 D
Editor's Notes

  • #2: These displays differ from the other displays presented today in that they don't rely on the retinal blur cue to drive accommodation.
  • #3: If we look at a conventional VR display, it creates a virtual image at some fixed distance, which is what everyone is attempting to get away from, because it causes our eyes to focus only to that distance. This design is not viable for anyone trying to support accommodation.
  • #4-#8: If we were to look inside one of these displays and focus to 25 cm, we'd see quite a blurry image. But as we focus closer and closer to the plane of the virtual image, we see a crisper image appear, until we focus at the plane of the virtual image and see a sharp image, much like autofocus.
  • #9: This is effectively how our auto-focusing mechanism works, by taking advantage of the gradient in the perceived blur
  • #10: What is a point spread function?
  • #11: However, today we propose a display type that has a consistent perceived blur regardless of focus state, meaning that as we refocus, we end up seeing the exact same image, which breaks our conventional method of refocusing. The point spread function is the impulse response of a system to a point light source.
  • #13-#15: Let's take a step back and understand how our visual system works. When we look around the real world, our eyes perform two actions simultaneously: vergence and accommodation. Vergence refers to the relative rotation of our eyeballs in their sockets. If I hold my finger up like this and look at it, my eyes rotate inwards. Simple enough.
  • #16: Cross-coupling -> real world
  • #17: In the real world, this consistent cross-coupling allows our vergence and accommodation to converge faster; both systems are always driven to the same distance. But this is not the case in current VR displays.
  • #18: These systems support vergence: the displays present correct stereoscopic images. The virtual image plane, however, is fixed by the glass optics. There is a mismatch between the binocular disparity and retinal blur cues, and the cross-coupling is now in conflict. This mismatch leads to headache, eye strain, and reduced visual clarity. Therefore the general goal here is to support multiple accommodative planes, or even better, a continuous range of accommodation distances.
  • #19: Underlying theme of existing approaches: produce realistic retinal blur cues to drive accommodation. That is not what we are trying to achieve.
  • #21: Optically disable
  • #24: And ask a more fundamental question. Given that the accommodation states of the two eyes are linked, can accommodation switch between these two planes? If so, then we could get two planes of accommodation by simply switching out one of the lenses in your favorite headset and changing the rendering pipeline slightly to account for the change in magnification. That would be great! And the answer to the question is... no. But before we dive into our results, let me show you how we came up with this idea of monovision. It is actually a common alternative to bifocal lenses when treating presbyopia.
  • #26: Think of your eye as a camera
  • #27: Because of the incredibly wide depth of field, objects at all depths look the same, which is the same as removing the retinal blur cue! This takes us from a pinhole camera to a pinhole display. We don't need to constrict the pupil; we can just open up a very small exit pupil for the system.
  • #28-#29: Because of the incredibly wide depth of field, objects at all depths look the same, which is the same as removing the retinal blur cue. However, we need pupil tracking and steering to support this system.
  • #30: Make this slide have an animation where the retinal blur cues are superimposed and added, eventually creating a depth-invariant blur cue. 60 Hz.
  • #47-#49: All images captured at 3 diopters.
  • #51: Here's our take on adaptive focus display hardware. While others try to build smaller and smaller displays, we probably built the world's biggest VR display here.
  • #58: A clever combination of time-modulated backlight intensity and displayed images may be a viable approach to optimizing image resolution.
  • #60: Strobe the backlight during the sweep. This creates multiple virtual images. It is not multifocal: accommodation is driven to the plane closer to the vergence distance.
  • #61: Until now I've shown you the conventional display mode, with only one virtual image plane, and the accommodation-invariant mode where we perform the focal sweep. But now, with the strobe, we can implement a 2-plane AI mode.
  • #62: But now with the strobe we can implement a 2-plane AI mode. You can see that the blur...
  • #64-#68: Explain what MTF is. We used the industry-standard slanted-edge method of capturing the MTF.
  • #69: And ask a more fundamental question. Given that the accommodation states of the two eyes are linked, can accommodation switch between these two planes? If so, then we could get two planes of accommodation by simply switching out one of the lenses in your favorite headset and changing the rendering pipeline slightly to account for the change in magnification. That would be great! And the answer to the question is... no. But before we dive into our results, let me show you how we came up with this idea of monovision. It is actually a common alternative to bifocal lenses when treating presbyopia.
  • #71-#72: For all modes, a 6.2 cm Maltese cross oscillated between 0.5 and 4 D (mean 2.25 D, amplitude 1.75 D) at 0.125 Hz.
  • #73: For all modes, a 6.2 cm Maltese cross oscillated between 0.5 and 4 D (mean 2.25 D, amplitude 1.75 D) at 0.125 Hz. The results indicate that disparity-driven accommodation via the removal of focus cues in a near-eye display can be achieved, although the resulting accommodative gain is not quite as high as with natural focus cues. However, there are many depth cues at play here, and we are mainly interested in the effect of binocular disparity specifically on accommodation (removing all other cues). To do so, we performed a second study.
  • #74: Static target at 9 discrete depths; 2-second blank period; 3-second stimulus.
  • #80: Interesting, because we don't see the step response we'd expect for the 2-plane and 3-plane modes. But then again, this is averaged data. When we look at some individual data plots, we see that some people show a very strong response to the AI condition, while others show none at all. It would be interesting to investigate why there is this much discrepancy between users.
  • #81: And ask a more fundamental question. Given that the accommodation states of the two eyes are linked, can accommodation switch between these two planes? If so, then we could get two planes of accommodation by simply switching out one of the lenses in your favorite headset and changing the rendering pipeline slightly to account for the change in magnification. That would be great! And the answer to the question is... no. But before we dive into our results, let me show you how we came up with this idea of monovision. It is actually a common alternative to bifocal lenses when treating presbyopia.
  • #92: I want to start with the stereoscope... However, the basic optics of these systems have remained largely unchanged since their conception over a century ago. The stereoscope of the 1800s is similar to today's headsets: two views to get 3D perception.
  • #93: However, today we propose a display type that has a consistent perceived blur regardless of focus state, meaning that as we refocus, we end up seeing the exact same image, which breaks our conventional method of refocusing. The point spread function is the impulse response of a system to a point light source.
  • #94: Why is this a computational display? Point spread function engineering.