Adaptive Color Display via
Perceptually-driven Factored
Spectral Projection
Isaac Kauvar, Samuel J Yang, Liang Shi,
Ian McDowall, Gordon Wetzstein
Stanford University
Conventional displays produce a limited set of colors.
Chromaticity diagram showing the display's gamut.
Limited gamuts prevent accurate color reproduction.
Adaptive Spectral Projection
What about content?
Multispectral cameras: MicaSense RedEdge, Ximea xiSpec.
Computer-generated images: Big Buck Bunny.
Flexible gamut: software + hardware.
Flexibility yields performance.
CIELAB-76 error from target (∆L, ∆a, ∆b), alongside an sRGB visualization of the multispectral target image, compared for a standard fixed gamut, a legacy flexible gamut, and our new flexible-gamut algorithm.
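The error maps above use the Euclidean distance in CIELAB-76 color space as the perceptual error metric. Below is a minimal sketch of how such an error map might be computed, assuming NumPy and H x W x 3 LAB arrays; the function name and the toy data are illustrative, not taken from the paper:

```python
import numpy as np

def delta_e76(lab_target, lab_result):
    """Per-pixel CIELAB-76 error (Euclidean distance in L*, a*, b*).

    Both inputs are H x W x 3 arrays of CIELAB coordinates.
    Returns the per-channel differences (dL, da, db) and the Delta-E map.
    """
    diff = lab_target - lab_result            # per-pixel (dL, da, db)
    delta_e = np.linalg.norm(diff, axis=-1)   # sqrt(dL^2 + da^2 + db^2)
    return diff, delta_e

# Toy example: placeholder arrays standing in for the target multispectral
# image and the simulated projector output, both already converted to LAB.
lab_target = np.zeros((64, 64, 3))
lab_result = lab_target + 0.5
diff, de = delta_e76(lab_target, lab_result)
print(de.mean())  # mean Delta-E_76 over the image
```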
Hardware implementation.
Works with human users.
What limits the gamut of conventional projectors?
Multiplexed color primaries: red + green + blue sub-frames average into one full-color image.
Limitations of current displays.
Human flicker fusion rate: 60 fps
Max frame rate of a standard display: 180-240 fps
Maximum number of primaries per image: 180-240 ÷ 60 ≈ 3-4
Columbia multispectral dataset: most natural images are well represented by just three primaries.
But the primaries are different for each image!
Related work.
Gamut mapping: Banterle et al. 2011.
Gamut selection: Long and Fairchild 2011.
Multiple projectors: Li et al. 2015.
Joint primary selection and gamut mapping: Ben-Chorin and Eliav 2007.
Adaptive Spectral Projection: gamut selection + gamut mapping.
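The editor's notes for slides #31-#38 describe the image formation model: each of the three sub-frames pairs a six-component LED coefficient vector with a per-pixel grayscale image, the observer averages the sub-frames, and the averaged LED mixture is projected onto CIEXYZ. The following is a hypothetical NumPy sketch of that forward model and of the plain (non-perceptual) NMF objective; the names G and H follow the notes, but the matrix P, the array sizes, and the random stand-in data are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels, n_leds, n_frames = 4096, 6, 3   # assumed sizes for illustration

# H: LED coefficients for each sub-frame (one gamut corner per column).
# G: per-pixel grayscale values for each sub-frame.
H = rng.random((n_leds, n_frames))        # 6 x 3, nonnegative
G = rng.random((n_pixels, n_frames))      # pixels x 3, nonnegative

# P: XYZ tristimulus values of the six LEDs (measured at calibration;
# random numbers here stand in for real spectral measurements).
P = rng.random((n_leds, 3))               # 6 x 3

# Forward model: averaging the sub-frames gives a 6-component LED mixture
# per pixel, which is then projected onto CIEXYZ.
led_mix = G @ H.T                         # pixels x 6
I_xyz = led_mix @ P                       # pixels x 3 (reproduced XYZ)

# Plain NMF objective (legacy, non-perceptual): Euclidean error in XYZ.
I_target_xyz = rng.random((n_pixels, 3))  # stand-in for the target image
err = np.linalg.norm(I_target_xyz - G @ H.T @ P)
print(err)
```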
But human color perception is not linear in CIEXYZ space!
MacAdam 1942 (ellipses scaled by 10).
CIEXYZ color space → nonlinear transform → CIELAB-76 color space (Jung 2011).
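For reference, this is the standard nonlinear CIEXYZ → CIELAB-76 transform that the optimization must account for. It is the textbook CIE definition, shown as a NumPy sketch; the D65 reference white is an assumption of this sketch:

```python
import numpy as np

# Reference white (D65), an assumption for this sketch.
XYZ_WHITE = np.array([95.047, 100.0, 108.883])

def xyz_to_lab(xyz, white=XYZ_WHITE):
    """Convert an (..., 3) array of CIEXYZ values to CIELAB-76."""
    t = np.asarray(xyz, dtype=float) / white
    delta = 6.0 / 29.0
    # The nonlinearity: cube root above the cutoff, linear segment below it.
    f = np.where(t > delta**3, np.cbrt(t), t / (3 * delta**2) + 4.0 / 29.0)
    L = 116.0 * f[..., 1] - 16.0
    a = 500.0 * (f[..., 0] - f[..., 1])
    b = 200.0 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

print(xyz_to_lab(XYZ_WHITE))  # the reference white maps to (100, 0, 0)
```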
Adaptive Spectral Projection
Conversion to CIELAB (nonlinear)
Nonconvex!
Reformulate the problem.
Standard ADMM update rules.
Loop until convergence:
Solve the nonlinear problem on a per-pixel basis.
Solve the bi-convex problem (standard NMF).
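The slides and editor's notes (#47-#51) describe the splitting only at a high level, so the following is a generic sketch of what such an ADMM formulation could look like rather than the authors' exact derivation: an auxiliary per-pixel XYZ variable Z is constrained to equal the projector output, the perceptual (nonlinear) term stays in the Z-update, and the factorization stays in a standard NMF update.

```latex
% Generic ADMM sketch (notation assumed): target image I_lab, nonlinearity
% phi (XYZ -> LAB), LED-to-XYZ matrix P, nonnegative factors G, H,
% penalty rho, scaled dual variable U.
\begin{aligned}
&\min_{G, H \ge 0,\; Z} \;\|I_{\mathrm{lab}} - \phi(Z)\|_2^2
  \quad \text{subject to} \quad Z = G H^{\top} P, \\[4pt]
Z^{k+1} &= \arg\min_{Z} \;\|I_{\mathrm{lab}} - \phi(Z)\|_2^2
  + \tfrac{\rho}{2}\,\big\|Z - G^{k} (H^{k})^{\top} P + U^{k}\big\|_2^2
  \quad \text{(separable per pixel, nonlinear)}, \\
(G^{k+1}, H^{k+1}) &= \arg\min_{G, H \ge 0}
  \;\big\|Z^{k+1} + U^{k} - G H^{\top} P\big\|_2^2
  \quad \text{(bi-convex, standard NMF)}, \\
U^{k+1} &= U^{k} + Z^{k+1} - G^{k+1} (H^{k+1})^{\top} P.
\end{aligned}
```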
Convergence.
Performance.
The three primary sub-frames (false color) combine into the displayed image.
Resulting gamut: the algorithm trades brightness for color spread.
Error in LAB coordinates vs. error of the legacy method.
Optimization consistently reduces LAB error.
Our algorithm also performs better under other, more perceptually accurate metrics (S-CIELAB, CIEDE2000).
Design of a flexible gamut projector.
Projector calibration.
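Per the editor's notes (#66-#67), calibration consisted of measuring the spectrum of each LED in the light engine and computing the resulting gamut. Here is a hypothetical sketch of that computation, folding a measured spectrum against CIE color matching functions to get each LED's XYZ tristimulus values and chromaticity; the wavelength grid and the random placeholder data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed 1 nm wavelength grid; in practice these arrays would hold the
# spectrometer measurements and the CIE 1931 2-degree observer CMFs.
wavelengths = np.arange(380, 781, 1)                 # nm
led_spectra = rng.random((6, wavelengths.size))      # 6 measured LED spectra
cmfs = rng.random((wavelengths.size, 3))             # xbar, ybar, zbar per wavelength

# Tristimulus values of each LED: integrate its spectrum against the CMFs.
P = led_spectra @ cmfs * (wavelengths[1] - wavelengths[0])   # 6 x 3

# Chromaticity coordinates of each LED (corners of the achievable gamut).
xy = P[:, :2] / P.sum(axis=1, keepdims=True)
print(P.shape, xy)
```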
Empirical results.
Metamer user study corroborates our results.
Samuel J Yang, Liang Shi, Ian McDowall, Gordon Wetzstein
Wetzstein group homepage: computationalimaging.org
Isaac Kauvar's homepage: www.stanford.edu/~ikauvar
5 primaries
Gauss-Newton step with CIELAB-76
NMF
References
Jung, Y. J., Sohn, H., Lee, S., Ro, Y. M., and Park, H. W. 2011. Quantitative measurement of binocular color fusion limit for non-spectral colors. Optics Express 19, 8, 7325-7338.
Mohan, A., Raskar, R., and Tumblin, J. 2008. Agile spectrum imaging: Programmable wavelength modulation for cameras and projectors. Computer Graphics Forum 27, 2, 709-717.
Rice, J. P., Brown, S. W., Allen, D. W., Yoon, H. W., Litorja, M., and Hwang, J. C. 2012. Hyperspectral image projector applications. Proc. SPIE 8254, 82540R.
Ajito, T., Obi, T., Yamaguchi, M., and Ohyama, N. 2000. Expanded color gamut reproduced by six-primary projection display. Proc. SPIE 3954, 130-137.
Ben-Chorin, M., and Eliav, D. 2007. Multi-primary design of spectrally accurate displays. Journal of the SID 15, 9, 667-677.
Banterle, F., Artusi, A., Aydin, T. O., Didyk, P., Eisemann, E., Gutierrez, D., Mantiuk, R., and Myszkowski, K. 2011. Multidimensional image retargeting. In SIGGRAPH Asia 2011 Courses, 15:1-15:612.
MacAdam, D. L. 1942. Visual sensitivities to color differences in daylight. JOSA 32, 5, 247-273.
Li, Y., Majumder, A., Lu, D., and Gopi, M. 2015. Content-independent multi-spectral display using superimposed projections. Computer Graphics Forum (Eurographics).
Chiao, C.-C., Cronin, T. W., and Osorio, D. 2000. Color signals in natural scenes: characteristics of reflectance spectra and effects of natural illuminants. JOSA A 17, 2, 218-224.
Dannemiller, J. L. 1992. Spectral reflectance of natural objects: how many basis functions are necessary? JOSA A 9, 4, 507-515.

Editor's Notes

  • #2: Testing.
  • #3: Three of the most important qualities of a display
  • #4: Resolutions have substantially improved, from VGA to HD to 4K UHD and beyond, where for many displays, the pixels are no longer visible to the human eye.
  • #5: Contrast has also been increasing, both through improvements in the black levels of displays and through new high-dynamic-range displays.
  • #6: Color, on the other hand, remains an unsolved problem. Large gamuts have yet to be fully realized for general consumption. (The new Rec. 2020 specification for UHD calls for a very large gamut, but this gamut, as well as even larger color gamuts, has yet to be fully realized for general consumption.)
  • #7: We focus on this problem in the work that I will present today.
  • #8: The problem of color reproduction is highlighted by the fact that most displays can only produce a limited set of colors.
  • #9: The gray horseshoe shape represents all colors the standard human observer can see, whereas the green triangle represents the subset of colors that conventional displays (shown here with the common sRGB gamut) are able to produce.
  • #10: The issue is that limited gamuts prevent accurate color reproduction.
  • #11: This was highlighted when, just recently, I was in Kyoto and visited the Ginkaku-ji Silver Temple. The garden there is covered with different mosses in every conceivable shade of green. I wanted to capture and display these images so that I could share them with my friends, but no matter how hard I tried, my iPhone camera and its display could not achieve the colors that I wanted. There are, of course, two issues at hand: the camera, and the display.
  • #12: There are multiple approaches for overcoming the camera limitation: first, computer generated images can possess any desired gamut, and second, there has recently been much commercial development of cheap and accessible multi-spectral cameras. These cameras are capable of capturing all desired color content. It remains a challenge, however, to display such content.
  • #13: To solve the display problem, we advocate the use of a flexible gamut, one whose color primaries depend on the content to be displayed. This enables one to efficiently use the available hardware capabilities. We developed a hardware prototype that enables flexible gamut selection, and we developed an optimization algorithm that chooses the best primaries for a given image.
  • #14: The result is that we can obtain a much better match between the target image and the image reproduced by the projector, shown in this example. As our input image, we use a multispectral image, whose sRGB representation is visualized here (obviously, the projector in this room is not capable of showing the full multispectral content). To demonstrate the performance of our algorithm, we compare the results of our and legacy algorithms based on the error between the perceived image color and the target image color.
  • #15: As our perceptual error metric, we use the euclidean distance between the target and the result in CIELAB-76 color space.
  • #16: With a standard fixed gamut and gamut mapping algorithm, there is a large error – represented by the warm colors in this error plot.
  • #17: Using a flexible gamut algorithm that was proposed recently but that does not optimize for smallest perceptual error, we see better performance but still some error.
  • #18: Finally, with our perceptually-driven algorithm, we see nearly zero error between the target and the output, as indicated by the blue images in the bottom row.
  • #20: This new design is more robust, easier to align, cheaper, and more manufacturable. And is the design that we used throughout the rest of the paper.
  • #21: Finally, we found that with our algorithm, human users were better able to see and distinguish colors that were indistinguishable when using legacy techniques.
  • #22: Okay, so what actually limits the gamut of conventional projectors?
  • #23: Many projectors temporally multiplex red, green, and blue images that are then averaged together by your brain. When designing the RGB filters, tradeoffs have to be made between the brightness of the projector and color purity of the primaries.
  • #24: Because of the limited speed of current displays, for each full color image, it is only possible to show 3 or 4 primaries, such as red green and blue.
  • #25: Fortunately, as it turns out, most natural images can be well represented by just three primaries. We did an experiment here to validate that previously discovered claim.
  • #26: Importantly, however, the 3 primaries are different for each image. This insight inspired our ‘flexible gamut’ approach, where we are able to choose different primaries for different images.
  • #27: Before describing our work, it is worth discussing previous related work. The gamut mapping problem of squeezing image content into the gamut of a particular display has received a lot of attention, and these days every company has their own secret sauce.
  • #28: Gamut selection is also essential to the design of current displays, where the gamut must be chosen and permanently fixed during the manufacturing process.
  • #29: There has also been work to extend color gamuts by overlaying multiple projectors that each have different gamuts. This is a strong approach, but requires careful calibration and is not the route we decided to pursue.
  • #30: Instead, we chose to solve the problem of joint gamut mapping and primary selection: for each image, we simultaneously determine both the best gamut and the best gamut mapping that are possible within the constraints of our hardware. The small amount of previous work on this problem of joint selection has been limited to using non-negative matrix factorization in a non-perceptually accurate color space. I will explain more specifically what that means over the next few slides.
  • #31: It is easiest to describe our algorithm in the context of the actual hardware implementation that we used. In particular, we had an LED light engine with, in our case, six LEDs, coupled into a DMD based grayscale projector.
  • #32: Gamut selection consisted of choosing, for each time point, the linear combination of LED intensities yielding a color primary, which corresponds to one corner of the gamut triangle.
  • #33: The gamut mapping problem is then equivalent to choosing the grayscale image to display on the DMD corresponding to each color primary.
  • #34: Thus, for each time point, we have a vector of 6 LED coefficients, and the grayscale value of each pixel. Because the three time points are averaged together by the observer, the image formation can be represented as a matrix multiplication.
  • #35: The joint gamut mapping and gamut selection problem then consists of choosing the entries of these matrices, which we call G and H.
  • #36: The matrix product of G and H yields for each pixel a 6-component color vector representing the averaged color of that pixel.
  • #37: We project this six-component color vector onto the spectral sensitivity responses of the retinal cone cells. The result is a three-coordinate color in CIEXYZ color space for each pixel. Now, given a target multispectral image, which can also be represented in XYZ color space, again,our goal is to find G and H that results in the best match to the target image.
  • #38: This optimization can be represented by a non-negative matrix factorization problem, non-negative because we can only have positive pixel and LED values. The error metric here is the euclidean difference between the target and result image in XYZ color space.
  • #39: However, the previous description assumes that color differences are linear in XYZ space. Unfortunately, human color perception is in fact not linear in this space.
  • #40: This is represented by the fact that in XYZ color space, the size of the Just Noticeable Difference region is not constant.
  • #41: By switching to CIELAB76, we can obtain something that is a much better approximation to human perception. While not a perfect representation, locally it is nearly linear, and our assumption is that it is more perceptually relevant. We note that there are even more modern color spaces, some of which we were also able to include in our optimization. See the paper for details.
  • #42: So, how do we incorporate the CIELAB color space into our optimization?
  • #43: We do so by including the nonlinear function that converts from XYZ to LAB.
  • #45: The addition of this term makes the optimization problem significantly harder. Whereas before, the problem was bi-convex, the addition of this nonlinear term makes the problem nonconvex.
  • #46: We initially tried Levenberg-Marquardt update rules (i.e., alternating least squares), but it was far too slow and offered no promise of substantial speedup. Note: you cannot just apply the inverse of phi to I_lab, because then your metric is still in linear space…
  • #47: Instead, to efficiently solve this nonconvex problem, we reformulated it in a way that allowed us to split up the easy and hard parts of the problem.
  • #48: Instead, to efficiently solve this nonconvex problem, we reformulated it in a way that allowed us to split up the easy and hard parts of the problem.
  • #49: This formulation then fits directly into the Standard ADMM update rules.
  • #50: The key is that rather than solving a nonlinear Levenberg-Marquardt problem across the whole image simultaneously, we instead solve a bunch of small, pixelwise problems in parallel.
  • #51: Then, we solve the standard NMF problem; it has been shown previously that this can be implemented on a GPU in real time.
  • #52: The formulation is efficient and displays smooth convergence across all of the images we tested. The run time of our prototype implementation was nevertheless slow, about 2 hours per image, but it was entirely implemented in MATLAB and no effort was made to speed it up by porting to a GPU.
  • #53: What does the output of our algorithm look like? Here, represented in false color, we see the output pixel values corresponding to the three primaries for an example multispectral dataset.
  • #54: The gamuts themselves looked like this.
  • #55: Note that the algorithm made a tradeoff in the full 3D color space between color coverage and maximum image brightness.
  • #56: We can observe the performance of the algorithm by looking at the final residual.
  • #57: If we compare this to the legacy adaptive gamut algorithm, however, we see a stark improvement.
  • #58: These results held across many tested images; our algorithm is the bottom row. Further, our algorithm outperformed both the legacy flexible gamut algorithm and the fixed-gamut gamut mapping.
  • #59: Pay attention to the bottom row, and the lack of error in the blue images.
  • #61: This performance improvement can be summarized across all tested multispectral images.
  • #62: Additionally, even though we optimized in LAB94 color space, our results were high performing in additional, more complicated and more perceptually accurate color spaces such as sCIELAB and LAB2000
  • #63: Finally, I want to briefly discuss our hardware implementation. Given an optimization algorithm that is doing its job to select the best color gamuts for a given image, how do we actually display the image using this gamut?
  • #64: We used a Lumencor light engine as the illumination source for a modified Texas Instruments LightCrafter DMD projector.
  • #65: We synchronized the DMD with the light engine using a National Instruments Data Acquisition board.
  • #66: We calibrated the projector by measuring the spectrum of each LED in the light engine.
  • #67: We could then calculate the gamut of the projector, the solid black line, which could cover nearly the entire 2D chromaticity diagram. Thus, we have a lot of room to work around in and play with when determining the optimal 3-primary gamut for an image.
  • #68: Our projector was able to successfully display vivid color images, as shown in this photograph of the setup.
  • #69: While I can't show you the results using the projector in this room, because the color gamut of this projector is not as wide as that of our projector, I can at least show you that our projector and the entire optimization pipeline were capable of producing artifact-free images. You will have to rely primarily on the error plots of the previous few slides as a measure of the performance of our algorithm. We also performed a user study that attempted to support the fact that our algorithm is better at maintaining color differences.
  • #70: We designed this user study to corroborate the effectiveness of our approach, while holding off for now on a full user study that would rely on subjective evaluation of image quality. We chose three pairs of colors and tested whether unknowing users could distinguish those pairs by asking them how many circles they saw displayed in an image, interleaving the images generated by the three different algorithms: fixed gamut, non-perceptual flexible gamut, and our perceptual flexible gamut. The results highlight that our algorithm better preserves perceptual color differences than both a fixed gamut and the legacy adaptive gamut algorithm. Thus, in conclusion, we have developed an algorithm and hardware implementation that ensure that the colors displayed by your projector match the colors you perceive in the world. 1) A flexible gamut has the ability to adapt more readily in situations that cause a fixed gamut to fail; 2) PNMF can maintain perceptual color differences better than NMF; 3) our hardware prototype is capable of conveying the results of the flexible gamut algorithm.
  • #71: Thank you.