A Short Introduction to Computer Graphics

                 Frédo Durand

      MIT Laboratory for Computer Science
Chapter I: Basics
1 Introduction
   Although computer graphics is a vast field that encompasses almost any graphical aspect, we are mainly
interested in the generation of images of 3-dimensional scenes. Computer imagery has applications for film
special effects, simulation and training, games, medical imagery, flying logos, etc.
   Computer graphics relies on an internal model of the scene, that is, a mathematical representation
suitable for graphical computations (see Chapter II). The model describes the 3D shapes, layout and
materials of the scene.
   This 3D representation then has to be projected to compute a 2D image from a given viewpoint; this is
the rendering step (see Chapter III). Rendering involves projecting the objects (perspective), handling
visibility (which parts of objects are hidden) and computing their appearance and lighting interactions.
   Finally, for animated sequences, the motion of objects has to be specified. We will not discuss animation
in this document.

2 Pixels
   A computer image is usually represented as a discrete grid of picture elements, a.k.a. pixels. The number
of pixels determines the resolution of the image. Typical resolutions range from 320×200 to 2000×1500.
   For a black and white image, a number describes the intensity of each pixel. It can be expressed between
0.0 (black) and 1.0 (white). However, for internal binary representation reasons, it is usually stored as an
integer between 0 (black) and 255 (white).




            A low-resolution digital image. Left: Black and white. Right: Color. (Image © Pixar).

For a color image, each pixel is described by a triple of numbers representing the intensities of red, green
and blue. For example, pure red is (255, 0, 0) and magenta is (255, 0, 255).
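The two encodings above can be sketched in a few lines of code. This is a minimal illustration (not from the original text) of mapping a [0.0, 1.0] intensity to the usual 0–255 integer form, and of a tiny color image stored as a grid of (R, G, B) triples:

```python
def float_to_byte(intensity):
    """Map a [0.0, 1.0] intensity to the usual 0-255 integer encoding."""
    return max(0, min(255, round(intensity * 255)))

# A tiny 2x2 color image: each pixel is an (R, G, B) triple.
image = [
    [(255, 0, 0), (255, 0, 255)],    # pure red, magenta
    [(0, 0, 0),   (255, 255, 255)],  # black, white
]

print(float_to_byte(0.0))  # 0 (black)
print(float_to_byte(1.0))  # 255 (white)
```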

   Because the image is represented by a discrete array of pixels, aliasing problems may occur. The most
classical form of aliasing is the jaggy aspect of lines (see figure below). Antialiasing techniques are thus
required. In the case of a line, antialiasing consists of using intermediate gray levels to “smooth” its
appearance. Another form of aliasing can be observed on television when people wear shirts with a fine
striped texture. A flickering pattern is observed because the size of the pattern is on the same order of
magnitude as the pixel size.




                                    A line without and with antialiasing.
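One simple way to obtain those intermediate gray levels is supersampling: take several samples inside each pixel and use the fraction that falls inside the shape as the pixel's gray level. The sketch below (an assumed example, using the arbitrary edge y < x rather than any figure from the text) shows the idea:

```python
def pixel_coverage(px, py, n=4):
    """Fraction of n*n subsamples of pixel (px, py) inside the region y < x."""
    inside = 0
    for i in range(n):
        for j in range(n):
            # Subsample at the center of each subcell of the pixel.
            x = px + (i + 0.5) / n
            y = py + (j + 0.5) / n
            if y < x:
                inside += 1
    return inside / (n * n)

print(pixel_coverage(5, 0))  # 1.0: fully inside, pixel is white
print(pixel_coverage(0, 5))  # 0.0: fully outside, pixel is black
# A pixel straddling the edge gets an intermediate gray level.
```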
Chapter II: Geometric Model
  We introduce how the geometry of the scene is represented in the memory of the computer.

1 Polygons
  The most classical method for modeling 3D geometry is the use of polygons. An object is approximated
by a polygonal mesh, that is, a set of connected polygons (see below). Most of the time, triangles are used
for simplicity and generality.



                                 Left: A cow modeled as a mesh of triangles.
      Right: This triangle can be stored using the coordinates of its vertices as [(2,4,2), (3,1,0), (1,1,2)].

   Each polygon or triangle can be described by the 3D coordinates of its list of vertices (see figure above).
   The obvious limitation of triangles is that they produce a flat and geometric appearance. However,
techniques called smoothing or interpolation can greatly improve this.
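A common way to store such a mesh (an assumed representation, not specified in the text) is an indexed triangle list: vertices are stored once and each triangle refers to them by index, so vertices shared between triangles are not duplicated. Using the triangle from the figure above:

```python
# Shared vertex list: each entry is a 3D point (x, y, z).
vertices = [
    (2.0, 4.0, 2.0),  # v0
    (3.0, 1.0, 0.0),  # v1
    (1.0, 1.0, 2.0),  # v2
    (4.0, 3.0, 1.0),  # v3 (extra vertex for a second triangle)
]

# Each triangle is a triple of vertex indices;
# these two triangles share the edge (v1, v2).
triangles = [(0, 1, 2), (1, 3, 2)]

def triangle_coords(t):
    """Expand an index triple into the 3D coordinates of its vertices."""
    return [vertices[i] for i in t]

print(triangle_coords(triangles[0]))  # the triangle from the figure
```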

2 Primitives
  The most classical geometric entities can be directly used as primitives, e.g. cubes, cylinders, spheres and
cones. A sphere for example can be simply described by the coordinates of its center and its radius.

3 Smooth patches
   More complex mathematical entities permit the representation of smooth objects. Spline
patches and NURBS are the most popular. They are, however, harder to manipulate, since one does not
directly control the surface but so-called control points that are only indirectly related to the final shape.
Moreover, obtaining smooth junctions between different patches can be problematic. However, the recently
popular subdivision surfaces overcome this limitation. They offer the best of both worlds: the
simplicity of polygons and the smoothness of patches.
Chapter III: Rendering
1 Projection and rasterization
  The image projection of the 3D objects is computed using linear perspective. Given the position of the
viewpoint and some camera parameters (e.g. field of view), it is very easy to compute the projection of a
3D point onto the 2D image. For mathematics enthusiasts, this can be simply expressed using a 4×4 matrix.
  In most methods, the geometric entities are then rasterized, that is, all the pixels covered by the entity
are drawn.
  In the example below, the projections of the 3 red points have been computed using linear perspective,
and the triangle has then been rasterized by filling the pixels in black. For richer rendering, the color of
each rasterized pixel must take into account the optical properties of the object, as we will discuss below.
                            Projection (left) and rasterization (right) of a triangle.
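The perspective projection can be sketched directly. The example below assumes a camera at the origin looking down the +z axis with focal length f (these conventions are an assumption, not the text's setup); the 4×4 homogeneous matrix mentioned above is written out in the comment:

```python
def project(point, f=1.0):
    """Project a 3D point (x, y, z) onto the image plane z = f."""
    x, y, z = point
    # 4x4 projection matrix applied to the homogeneous point (x, y, z, 1):
    #   [f 0 0 0] [x]   [f*x]
    #   [0 f 0 0] [y] = [f*y]
    #   [0 0 1 0] [z]   [ z ]
    #   [0 0 1 0] [1]   [ z ]
    xh, yh, w = f * x, f * y, z
    # The homogeneous divide by w = z produces the perspective foreshortening.
    return (xh / w, yh / w)

print(project((2.0, 4.0, 2.0)))  # (1.0, 2.0)
```

Distant points divide by a larger z, so they land closer to the image center, which is exactly the linear-perspective effect.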

2 Visibility
   If the scene contains more than one object, occlusions may occur. That is, some objects may be hidden
by others. Only visible objects should be represented. Visibility techniques deal with this issue.
   One classical algorithm that solves the visibility problem is the so-called painter’s algorithm. It consists
of sorting the objects or polygons from back to front and rasterizing them in this order. This way, front-most
polygons cover the more distant polygons that they hide.

  The Painter’s algorithm. Triangle 1 is drawn first because it is more distant. Triangle 2 is drawn next and
                             covers Triangle 1, which yields correct occlusion.
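The core of the painter's algorithm is just a depth sort. In the sketch below, polygons are reduced to a name and a single depth value (a simplifying assumption; real polygons span a depth range, which is what makes the sort ambiguous in hard cases):

```python
def painters_order(polygons):
    """Return polygons sorted back to front (largest depth first)."""
    return sorted(polygons, key=lambda p: p["depth"], reverse=True)

scene = [
    {"name": "triangle2", "depth": 3.0},  # nearer to the viewpoint
    {"name": "triangle1", "depth": 8.0},  # more distant
]

# Drawing in this order lets nearer polygons overwrite farther ones.
for poly in painters_order(scene):
    print(poly["name"])  # triangle1, then triangle2
```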
The ray-tracing algorithm does not use a rasterization phase. It sends one ray from the eye and through
each pixel of the image. The intersection between this ray and the objects of the scene is computed, and
only the closest intersection is considered.


  Ray-tracing. A ray is sent from the eye and through the pixel. Since the intersection with 2 is closer than
                                   the intersection with 1, the pixel is black.
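A minimal sketch of this closest-hit loop is below. Spheres are used as the objects because their intersection test is short (an assumption for brevity; the figure uses triangles, whose test is longer but follows the same pattern):

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Smallest positive ray parameter t of the hit, or None if the ray misses."""
    oc = [origin[i] - center[i] for i in range(3)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

def closest_hit(origin, direction, spheres):
    """Intersect the ray with every object; keep only the closest hit."""
    best_t, best = None, None
    for s in spheres:
        t = intersect_sphere(origin, direction, s["center"], s["radius"])
        if t is not None and (best_t is None or t < best_t):
            best_t, best = t, s
    return best

spheres = [
    {"center": (0.0, 0.0, 5.0), "radius": 1.0},   # nearer
    {"center": (0.0, 0.0, 10.0), "radius": 1.0},  # farther, hidden behind it
]
hit = closest_hit((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), spheres)
print(hit["center"])  # (0.0, 0.0, 5.0): only the closest intersection counts
```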

   The z-buffer method is the most common nowadays (e.g. for computer graphics cards). It stores the depth
(z) of each pixel. When a new polygon is rasterized, for each pixel, the algorithm compares the depth of the
current polygon and the depth of the pixel. If the new polygon has a closer depth, the color and depth of the
pixel are updated. Otherwise, it means that for this pixel, a formerly drawn polygon hides the current
polygon.
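The z-buffer update rule described above can be written in a few lines. This sketch uses a single row of pixels as the framebuffer (an assumption to keep the example tiny):

```python
WIDTH = 4
FAR = float("inf")
color_buffer = ["background"] * WIDTH
depth_buffer = [FAR] * WIDTH  # stores the depth (z) of each pixel

def write_fragment(x, depth, color):
    """Update pixel x only if this fragment is closer than the stored depth."""
    if depth < depth_buffer[x]:
        depth_buffer[x] = depth
        color_buffer[x] = color
    # Otherwise a formerly drawn polygon hides the current one at this pixel.

# Rasterize a far polygon covering the whole row, then a near one
# covering pixels 1 and 2. Drawing order does not matter here.
for x in range(WIDTH):
    write_fragment(x, 8.0, "far")
write_fragment(1, 3.0, "near")
write_fragment(2, 3.0, "near")

print(color_buffer)  # ['far', 'near', 'near', 'far']
```

Unlike the painter's algorithm, no global sort is needed: the per-pixel depth comparison resolves visibility regardless of the order in which polygons are rasterized.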

3 Shading and materials
  Augmenting the scene with light sources allows for better rendering. The objects can be shaded
according to their interaction with light. Various shading models have been proposed in the literature. They
describe how light is reflected by an object, depending on the relative orientation of the surface, light
source and viewpoint (see figure below).




Light reflection model. The ratio of light bouncing off the surface in the direction of the eye depends on
the two angles θ1 (incoming light) and θ2 (outgoing light).
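One of the simplest such models is Lambertian (purely diffuse) reflection, where the reflected intensity depends only on the angle of the incoming light, via its cosine. The sketch below implements that one model as an illustration (the text does not single out a specific model):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def lambert(normal, to_light, albedo=1.0):
    """Diffuse intensity: proportional to the cosine of the incoming-light
    angle (the dot product of unit normal and unit light direction)."""
    n, l = normalize(normal), normalize(to_light)
    return albedo * max(0.0, dot(n, l))

# Light straight above a horizontal surface: full intensity.
print(lambert((0, 1, 0), (0, 1, 0)))  # 1.0
# Grazing light: intensity falls to 0.
print(lambert((0, 1, 0), (1, 0, 0)))  # 0.0
```

More elaborate models (e.g. those with specular highlights) also depend on the outgoing angle toward the viewpoint, as the figure caption indicates.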
Sphere rendered using various material models. Note in particular the different highlights. Image from the
                         Columbia-Utrecht Reflectance and Texture Database.

  Texture mapping uses 2D images that are mapped on the 3D models to improve their appearance (see
examples in section 5).

4 Shadows and lighting simulation
  Shading and material models only take into account the local interaction of surfaces and light. They do
not simulate shadows, which are harder to handle because they imply long-range interactions. A shadow is
caused by the occlusion of light by one object.
  Ray-tracing, for example, can handle shadows, but requires a shadow computation for each pixel and
each light source. A shadow ray is sent from the visible point to the light source. If the ray intersects an
object, then the visible point is in shadow.




                                   Shadow computation using ray-tracing.
            The visible point is in shadow because the black triangle occludes the light source.
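The shadow-ray test can be sketched as follows. Occluders are spheres here for brevity (an assumption; the figure uses a triangle), and the ray parameter t is measured so that t = 1 corresponds to the light position, so any hit with 0 < t < 1 lies between the visible point and the light:

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Smallest positive ray parameter t of the hit, or None if the ray misses."""
    oc = [origin[i] - center[i] for i in range(3)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(oc[i] * direction[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

def in_shadow(point, light, occluders):
    """True if any occluder blocks the segment from the visible point to the light."""
    # Unnormalized direction: t = 1 is exactly the light position.
    direction = [light[i] - point[i] for i in range(3)]
    for s in occluders:
        t = intersect_sphere(point, direction, s["center"], s["radius"])
        if t is not None and t < 1.0:
            return True
    return False
    # A real ray tracer also offsets the ray origin by a small epsilon
    # to avoid the surface shadowing itself due to numerical error.

blocker = {"center": (0.0, 0.0, 5.0), "radius": 1.0}
print(in_shadow((0.0, 0.0, 0.0), (0.0, 0.0, 10.0), [blocker]))  # True
```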

   More complex lighting interactions can then be simulated. In particular, objects that are illuminated by a
primary light source reflect light and produce indirect lighting. This is particularly important for indoor
scenes. Global lighting methods take into account all light inter-reflections within the scene.
5 Example




     Left: Polygonal model rendered in wire-frame (no visibility). Right: With visibility.

     Left: Shaded rendering. Note how the faces of the cube and cone have different intensities depending on
     their orientation relative to the light source. Right: Smooth patches and shading including highlights.

     Left: Texture mapping improves the appearance of surfaces (better lighting is used too). Right: Shadows.

                               Various rendering qualities (Images © Pixar)
