On Constructing z-Dimensional
DIBR-Synthesized Images
What is DIBR?
 DIBR stands for Depth-Image-Based Rendering.
 Image-Based Rendering (IBR) is an emerging technology that enables the
synthesis of novel, realistic images of a scene from virtual viewpoints, using a
collection of available images. IBR finds applications in areas such as virtual
reality and telepresence, thanks to its complexity and performance advantage over
model-based techniques, which rely on complex 3-D geometric models, material
properties and lighting conditions of the scene.
 DIBR is an IBR technique which maps each color pixel in a reference view to a 2D grid
location in the virtual view, using disparity information provided by the
corresponding depth pixel.
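As a rough illustration of this mapping, the sketch below forward-warps each reference-view pixel horizontally by its disparity. It is a minimal sketch, not the exact procedure used in the paper; the parameters f (focal length in pixels) and baseline (horizontal camera shift) are assumed for illustration.

```python
import numpy as np

def dibr_horizontal_warp(color, depth, f, baseline):
    """Forward-warp reference-view pixels to a virtual view shifted along x.

    f and baseline are hypothetical parameters; disparity = f * baseline / depth.
    color: HxWx3 array, depth: HxW array of positive depths.
    """
    h, w = depth.shape
    virtual = np.zeros_like(color)
    z_buffer = np.full((h, w), np.inf)          # keep the closest surface on conflicts
    disparity = np.round(f * baseline / depth).astype(int)
    for y in range(h):
        for x in range(w):
            x_new = x - disparity[y, x]         # shift along the x dimension only
            if 0 <= x_new < w and depth[y, x] < z_buffer[y, x_new]:
                z_buffer[y, x_new] = depth[y, x]
                virtual[y, x_new] = color[y, x]
    return virtual
```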
What is Rendering and the Z Dimension?
 Rendering is the process of generating an image from a 2D or 3D model (or
models, in what could collectively be called a scene file) by means of computer
programs.
 Three-dimensional space (also: 3-space or, rarely, tri-dimensional space) is a
geometric setting in which three values (called parameters) are required to
determine the position of an element (i.e., a point). This is the informal meaning of
the term dimension.
CONSTRUCTION OF Z DIMENSION
 To construct this new image type, we first perform a new DIBR pixel-mapping for z-dimensional
camera movement (see the sketch after this list).
 We then identify expansion holes, a new kind of missing pixels unique to z-dimensional DIBR-
mapped images, using a depth layering procedure.
 To fill expansion holes we formulate a patch-based maximum a posteriori (MAP) problem, where the
patches are appropriately spaced using diamond tiling.
 Leveraging recent advances in graph signal processing, we define a graph-signal smoothness
prior to regularize the inverse problem.
 Finally, we design a fast iterative reweighted least squares algorithm to solve the posed problem
efficiently. Experimental results show that our z-dimensional synthesized images outperform
images rendered by a naïve modification of an existing DIBR scheme.
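As an illustration of the first step, the sketch below forward-maps pixels for a camera translation dz along the optical axis. Near pixels spread apart faster than far ones, which is what creates expansion holes, and rounding to the integer grid adds rounding noise. This is a minimal sketch with a hypothetical principal point (cx, cy), not the paper's exact mapping.

```python
import numpy as np

def dibr_z_warp(color, depth, dz, cx, cy):
    """Forward-map pixels for a camera translation dz along the optical axis.

    A pixel at offset (u, v) from the principal point (cx, cy) and depth z
    projects to (u, v) * z / (z - dz) in the virtual view.
    """
    h, w = depth.shape
    virtual = np.zeros_like(color)
    z_buffer = np.full((h, w), np.inf)
    for y in range(h):
        for x in range(w):
            z = depth[y, x]
            if z <= dz:                          # point ends up behind the moved camera
                continue
            scale = z / (z - dz)                 # depth-dependent magnification
            x_new = int(round(cx + (x - cx) * scale))
            y_new = int(round(cy + (y - cy) * scale))
            if 0 <= x_new < w and 0 <= y_new < h and (z - dz) < z_buffer[y_new, x_new]:
                z_buffer[y_new, x_new] = z - dz
                virtual[y_new, x_new] = color[y, x]
    return virtual
```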
Virtual Camera
 A virtual camera system aims at controlling a camera, or a set of cameras, to display
a view of a 3D virtual world. Such camera systems are designed to show the action
from the best possible angle; more generally, they are used in 3D virtual worlds
when a third-person view is required.
Constructing z-Dimensional DIBR-Synthesized
Images
This topic is divided into three sections:
1. DIBR (Depth Image Based Rendering)
2. Image Super Resolution
3. Graph Based Image Processing
Depth Image Based Rendering
 The color-plus-depth format, consisting of one or more color and depth image pairs
from different viewpoints, is a widely used 3D scene representation. Using this
format, a low-complexity DIBR view synthesis procedure such as 3D warping can be
used to create credible virtual view images, with the aid of inpainting algorithms to
complete disocclusion holes (a simple row-wise hole-filling sketch follows below).
 In this work, we assume that enough pixels from one or more reference view(s)
have been transmitted to the decoder for virtual view synthesis, and we focus only
on the construction of z-dimensional DIBR-synthesized images given the received
reference view pixels.
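The row-wise sketch below shows one simple way disocclusion holes can be completed, by copying the nearest valid pixel on the background (larger-depth) side. The paper relies on existing inpainting algorithms, so this is only an illustrative stand-in.

```python
import numpy as np

def fill_disocclusion_holes(virtual, virtual_depth, hole_mask):
    """Fill disocclusion holes row by row from the background side.

    For each missing pixel, copy the nearest valid pixel to its left or right,
    preferring the side with the larger depth (the background), since
    disocclusions are regions uncovered next to foreground objects.
    """
    out = virtual.copy()
    h, w = hole_mask.shape
    for y in range(h):
        valid_x = np.where(~hole_mask[y])[0]
        if valid_x.size == 0:
            continue
        for x in np.where(hole_mask[y])[0]:
            left = valid_x[valid_x < x]
            right = valid_x[valid_x > x]
            if left.size and right.size:
                xl, xr = left[-1], right[0]
                src = xl if virtual_depth[y, xl] >= virtual_depth[y, xr] else xr
            else:
                src = left[-1] if left.size else right[0]
            out[y, x] = virtual[y, src]
    return out
```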
Image Super Resolution
 The increase in object size due to a large z-dimensional virtual camera movement is
analogous to increasing the resolution (super-resolution (SR)) of the whole image.
However, during z-dimensional camera motion an object closer to the camera
increases in size faster than objects farther away, while in SR, resolution is increased
uniformly for all spatial regions in the image.
 For this reason, we cannot directly apply conventional image SR techniques
[30] on a rectangular pixel grid to interpolate the synthesized view. Further, recent
non-local SR techniques that leverage the self-similarity of natural images
require an exhaustive search of similar patches throughout an image and tend to be
computationally expensive. In contrast, our interpolation scheme performs only
iterative local filtering, and is thus significantly more computation-efficient.
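A small numeric illustration of this difference (illustrative depths only): for a forward camera move of 1 m, the magnification z / (z - 1) varies strongly with depth, whereas SR would scale every region by the same factor.

```python
# Illustrative depths only: for a forward camera move of dz = 1.0 m,
# an object at depth z is magnified by roughly z / (z - dz).
for z in (2.0, 5.0, 10.0):
    print(f"depth {z:>4} m -> scale {z / (z - 1.0):.2f}x")
# depth  2.0 m -> scale 2.00x
# depth  5.0 m -> scale 1.25x
# depth 10.0 m -> scale 1.11x
```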
Graph Based Image Processing
 Graph signal processing (GSP) is the study of signals that live on structured data kernels
described by graphs, leveraging spectral graph theory for frequency analysis of graph-signals.
 Graph-signal priors have been derived for inverse problems such as denoising,
interpolation, bit-depth enhancement and de-quantization.
 In this work, we construct a suitable graph G from the
available DIBR-synthesized pixels for joint denoising/interpolation of pixels in a
target patch.
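The sketch below shows the general idea of interpolating missing patch pixels with a graph-signal smoothness prior: build a graph over the patch pixels, form its Laplacian L, and solve a regularized least-squares problem. It is a minimal stand-in (4-connected grid, Gaussian weights, combinatorial Laplacian, direct solve), not the paper's diamond-tiled patches or its iterative reweighted least squares solver.

```python
import numpy as np

def interpolate_patch(patch, known_mask, mu=0.1, sigma=10.0):
    """Fill missing pixels of a small patch using a graph-signal smoothness prior.

    Nodes are patch pixels on a 4-connected grid; edge weights are a Gaussian of
    the intensity difference of the initial patch values. Missing pixels are
    recovered by solving  min_x ||H x - y||^2 + mu * x^T L x,
    i.e. the linear system  (H^T H + mu * L) x = H^T y.
    """
    rows, cols = patch.shape
    n = rows * cols
    x0 = patch.astype(float).ravel()

    # Similarity-weighted adjacency matrix of the 4-connected grid.
    W = np.zeros((n, n))
    for i in range(rows):
        for j in range(cols):
            p = i * cols + j
            for di, dj in ((0, 1), (1, 0)):
                ii, jj = i + di, j + dj
                if ii < rows and jj < cols:
                    q = ii * cols + jj
                    wgt = np.exp(-((x0[p] - x0[q]) ** 2) / (2 * sigma ** 2))
                    W[p, q] = W[q, p] = wgt
    L = np.diag(W.sum(axis=1)) - W                     # combinatorial graph Laplacian

    H = np.diag(known_mask.ravel().astype(float))      # fidelity only on known pixels
    y = H @ x0
    x = np.linalg.solve(H.T @ H + mu * L, H.T @ y)
    return x.reshape(rows, cols)
```

The quadratic prior x^T L x penalizes large differences across strongly weighted edges, so filled pixels stay consistent with similar neighbours while weakly connected pixels (across intensity edges) are not forced to agree.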
SYSTEM OVERVIEW
 Interactive Free Viewpoint Streaming System
 DIBR
 Rounding Noise in DIBR-Mapped Pixels
 Identification of Expansion Holes
Interactive Free Viewpoint Streaming System
Depth-Image-Based Rendering
Rounding Noise in DIBR-Mapped Pixels
Identification of Expansion Holes
CONCLUSION
 Unlike a typical free viewpoint system that considers only the synthesis of virtual views
shifted horizontally along the x dimension via DIBR, in this paper we consider in
addition the construction of z-dimensional DIBR-synthesized images. In such far-to-
near viewpoint synthesis there exists a new type of missing pixels, called expansion
holes: objects close to the camera increase in size, and simple pixel-to-pixel mapping
in DIBR from the reference to the virtual view results in missing pixel areas that
demand a new interpolation scheme.
REFERENCES
 P. Merkle, A. Smolic, K. Mueller, and T. Wiegand, “Multi-view video plus depth
representation and coding.”
 A. Chuchvara, M. Georgiev, and A. Gotchev, “CPU-efficient free view synthesis based
on depth layering,” in Proc. 3DTV-Conf.: True Vis. - Capture, Transmiss. Display 3D
Video, Jul. 2014.
 M. Tanimoto, M. P. Tehrani, T. Fujii, and T. Yendo, “Free-viewpoint TV,” IEEE Signal
Process. Mag., vol. 28, no. 1, pp. 67–76, Jan. 2011.
THANKS