Graphics System Basics & Models
Book:
Chapter 1 [Ed. Angel, Interactive Computer Graphics]
Computer Graphics
 Computer Graphics: the use of computers to generate images.
 Computer graphics: concerned with all aspects of
producing pictures or images using a computer.
Applications of Computer Graphics
 Can be roughly divided into four major areas
 Display of Information
 Design
 Simulation and Animation
 User Interfaces
1. Display of Information
 Classic graphics techniques
used as a medium of
conveying information
 Human written/spoken
language
 Historical era:
 Babylonians used to display
floor plans on stones
 Greeks used to display their
architectural plans and language
 Today, graphical representations are generated by architects and designers using computers.
Display of Information (Contd…)
 Statisticians
 Use CG to display plots/graphs
of a data set
 Extract information from these
plots
 Very useful in extracting info from
large datasets.
 Medical Imaging
 Graphics used in Computed
Tomography (CT)
 Magnetic Resonance Imaging
(MRI)
 Ultrasound
 Data Visualization
 Understanding data by placing it
in visual context
2. Design
 Many fields concerned
with design. (Engineering
& Arch)
 With a set of specifications, computer graphics is used to achieve a cost-effective and aesthetic design.
 Having started roughly 40 years ago, Computer-Aided Design (CAD) today pervades many fields.
3. Simulation and Animation
 Simulation is the imitation of the
operation of a real-world process or
system over time.
 Flight simulators - train pilots.
 Safety and Cost reduction
 Architectural designs are tested in many
weather conditions
 Animation - illusion of motion
 Became famous after successful simulations
 Artistic effects are achieved.
 Complete movies are made using CG
 Photo-realistic images
 Virtual Reality: replicates an environment that simulates physical presence in places in the real world or in imagined worlds, and lets the user interact with that world.
3. Simulation & Animation (Virtual Reality)
4. User Interfaces
 Interaction with computers
increased.
 Desktops, Tablets, Smartphones.
 GUIs have largely replaced CLIs
 Microsoft Windows, Mac OS, Linux
 Android, iOS,
 Internet usage increased
 Webpages, applications all are
graphical
 Resources are accessed through
graphical browsers.
 Interaction with UIs is so frequent that we have almost forgotten that we're working with computer graphics.
A Graphics System
 General view of a graphics system
 Generally contains,
1. Input Devices
2. CPU
3. GPU (Graphics Processing Unit)
4. Memory
5. Frame Buffer
6. Output Devices
Pixels, Frame Buffer & Basic Terms
 Pixel: Short for Picture element
 The smallest addressable element of a
display device.
 Basic unit of a digital image.
 Each pixel corresponds to a location or
small area in the image.
 Raster: Array of picture elements or
pixels
 Images seen on displays are rasters produced by the graphics system.
 A raster is a grid of x and y coordinates on
a display space.
Pixels, Frame Buffer & Basic Terms (Contd…)
 Frame Buffer: portion of memory where pixels are
stored
 Core element of graphics system.
 Contains bitmap that is driven to video display
 Resolution: No. of pixels in frame buffer
 Resolution determines the details that can be seen in image
 Higher resolution → sharper image
 Depth / Precision: No. of bits used per pixel to determine its properties, like color
 1-bit-deep frame buffer → allows only two colors
 8-bit-deep frame buffer → allows 2^8 (256) colors
 16-bit (High color) → 2^16 colors
 24-bit (True color) → 2^24 colors
 In simple systems: the frame buffer holds only the colored pixels that are displayed
 In most systems, The frame buffer holds far more
information,
 depth information needed for creating images from 3D data.
 In these systems, the frame buffer comprises multiple
buffers,
 one or more of which are color buffers that hold the colored
pixels that are displayed.
 Terms, frame buffer and color buffer can be used
synonymously.
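To make the resolution/depth relationship concrete, here is a minimal C++ sketch (illustrative only; the function name colorBufferBytes is made up) that computes the memory needed for one color buffer:

    #include <cstdint>
    #include <iostream>

    // Bytes needed for one color buffer: width x height pixels, depthBits bits each.
    std::uint64_t colorBufferBytes(std::uint32_t width, std::uint32_t height,
                                   std::uint32_t depthBits) {
        return static_cast<std::uint64_t>(width) * height * depthBits / 8;
    }

    int main() {
        // A 1920x1080 true-color (24-bit) buffer needs about 6.2 MB.
        std::cout << colorBufferBytes(1920, 1080, 24) << " bytes\n";
    }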
CPU and GPU
 In a simple system there may be only one processor (the CPU)
 In early systems, the frame buffer was part of the standard system memory
 CPU is responsible for both Normal and Graphical
Processing
 Main graphical processing of CPU is
 Take graphical primitives from application program
 Like (lines, polygons, circles)
 Assign values to the pixels in frame buffer that best
represent those entities.
 Rasterization: conversion of geometric entities to pixel colors and locations in the frame buffer (a.k.a. scan conversion).
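As an illustration of scan conversion, here is a minimal DDA-style line rasterizer in C++ (a sketch under the assumption of a hypothetical setPixel frame-buffer write; the slides do not prescribe a particular algorithm):

    #include <cmath>
    #include <cstdio>

    // Hypothetical frame-buffer write; a real system would store a color at (x, y).
    void setPixel(long x, long y) { std::printf("(%ld, %ld)\n", x, y); }

    // DDA rasterization: step along the longer axis one pixel at a time.
    void rasterizeLine(float x0, float y0, float x1, float y1) {
        float dx = x1 - x0, dy = y1 - y0;
        int steps = static_cast<int>(std::fmax(std::fabs(dx), std::fabs(dy)));
        if (steps == 0) { setPixel(std::lround(x0), std::lround(y0)); return; }
        float xInc = dx / steps, yInc = dy / steps;
        float x = x0, y = y0;
        for (int i = 0; i <= steps; ++i) {
            setPixel(std::lround(x), std::lround(y));
            x += xInc;
            y += yInc;
        }
    }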
CPU & GPU (Contd…)
 Today, all graphics systems are characterized by a special-purpose graphics processing unit (GPU).
 GPU: a processing unit custom-tailored to carry out specific graphics functions
 The GPU can be either on the same system board or on a separate graphics card
 Frame buffer usually resides on the same board as GPU
 Frame buffer is accessed through GPU
Output Devices
 Cathode Ray Tube (CRT)
 Most dominant type of display (until recently)
 Basic operation: when electrons strike the phosphor coating, light is emitted.
 Deflection plates: to control the direction of the beam
 Computer output is converted from digital(bits) to
analog(voltage) by converters across x and y deflection
plates.
 When sufficiently intense beam of electrons is directed at
the phosphor, light appears on CRT surface
Output Devices (Contd…)
 Refresh rate: No. of times per second the device retraces the same path/image.
 A CRT emits light for only a short time (a few milliseconds)
 To see a flicker-free image, the same image must be retraced.
 Old systems: refresh rate = frequency of the power system
 60 Hz in the US and 50 Hz in much of the rest of the world
 Raster System (Fundamental ways of displaying
pixels)
 Non-interlaced: Pixels are displayed row by row at refresh
rate
 Interlaced display: odd and even rows are refreshed alternately, so a full frame is redrawn at half the rate (e.g., 30 Hz instead of 60 Hz).
Output Devices (Contd…)
 Colored CRTs
 Phosphors of three different colors (Red,
Green, Blue)
 Phosphors arranged in small groups.
 Phosphors in triangular groups are called
triads
 Have three electron beams.
 Shadow Mask: a metal sheet with small
holes
 Used to ensure the excitation of proper color
phosphor.
Output Devices (Contd…)
 Flat-panel Technology.
 Flat-panels are inherently raster based.
 The most commonly used flat panels are LCD, LED, and plasma
 A generic flat-panel display has
 Two outside plates, each containing a parallel grid of wires; the two grids are oriented perpendicular to each other.
 A middle plate containing a different material depending upon the technology.
Output Devices (Contd…)
 Flat-panel Display
 By sending an electrical signal to the proper wire on each plate, an electric field is produced at the point of intersection of the two wires
 The electric field is used to control the corresponding element on the middle plate.
 The electric field produced can be used in,
 LED, to turn the corresponding LED on or off
 LCD, to control the polarization of the liquid crystals so that they pass or block light
 Plasma, to energize gases so that they glow or not.
Input Devices
 Input Devices: Devices used for input purposes.
 Common input devices: keyboard, mouse
 Other input devices include the joystick, trackball, and spaceball
 Input Devices (Perspective)
 Physical Device
 Logical Device (application / programmer perspective)
 Their properties are specified in terms of “what they do” from
application perspective.
 For example: cout in C++ outputs a string; the output device could be a printer, a display/terminal, or a disk file.
 Even the cout output could be input to another program.
Input Devices (Physical)
 Two primary types of Input Devices
 Keyboard Devices
 Pointing Devices
 Keyboard devices generally include physical keyboards or any devices that return character codes.
 ASCII code is used to represent characters.
 ASCII assigns a single unsigned byte to each character.
 Internet applications use multiple bytes to represent each character.
 Mouse & Trackball:
 A mechanical mouse and a trackball work on the same principle.
 Motion of the ball is converted to signals by encoders.
Input Devices (Physical)
 Signals from encoders might be interpreted as
position (Not necessarily)
 Driver/Program can interpret the signals as two
velocities.
 The Computer can integrate velocities to obtain
position.
 When ball rotates position
changes, otherwise not.
 In this mode, positioning is
relative.
 Motion sensing devices are known as Relative
positioning devices
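A minimal sketch of relative positioning (illustrative; the applyMotion helper is hypothetical): the device reports deltas, and the program integrates them into an absolute cursor position:

    #include <algorithm>
    #include <cstdio>

    struct Cursor { int x = 0, y = 0; };

    // Integrate relative motion (dx, dy) reported by the device into an absolute
    // cursor position, clamped to the screen resolution.
    void applyMotion(Cursor& c, int dx, int dy, int width, int height) {
        c.x = std::clamp(c.x + dx, 0, width - 1);
        c.y = std::clamp(c.y + dy, 0, height - 1);
    }

    int main() {
        Cursor c;
        applyMotion(c, 15, -4, 1920, 1080);   // one batch of encoder deltas
        std::printf("cursor at (%d, %d)\n", c.x, c.y);
    }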
Input Devices (Physical)
 Data Tablets:
 Absolute positioning
 Position is determined using electromagnetic
interactions between signals traveling through the
wires and sensors in the stylus
 Position sensing devices are known as absolute
positioning devices
 READING: Space ball and Joy stick
Input Devices (Logical)
 Logical Input Devices:
 Addressing of physical input devices as abstract data
types
 ADT: a data type defined by its behavior from the user's point of view
 Two major characteristics describe logical
behavior of input device:
1. The measurements that the device returns to the
user program
2. The time when the device returns those
measurements.
Input Devices (Logical)
 Logical Input Devices:
 String: Return string of characters from Keyboard, File,
etc
 Locator: Returns a position (in x, y coordinates)
 Pick: Returns a segment name & pick identifier of object
pointed by the user.
 Choice: Represents a choice from a selection of several
possibilities.
 Valuator: Returns a real/analogue value, for example, to
control some sort of analogue device.
 Stroke: series of locations. (Tablets/Touch inputs)
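One way to picture logical devices as abstract data types is the following C++ sketch (purely hypothetical interfaces, not an API from the book): each device is described only by the measure it returns.

    #include <string>
    #include <utility>

    struct Locator {                      // returns a position (x, y)
        virtual std::pair<float, float> measure() = 0;
        virtual ~Locator() = default;
    };

    struct Valuator {                     // returns a single analog value
        virtual float measure() = 0;
        virtual ~Valuator() = default;
    };

    struct StringDevice {                 // returns a string of characters
        virtual std::string measure() = 0;
        virtual ~StringDevice() = default;
    };

    // A mouse, a data tablet, or even a file can implement Locator; the
    // application sees only the logical behavior, not the physical device.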
Input Modes
 Input is provided in terms of two entities
 Measure: The returned data from input devices.
 Ex: Data stream from keyboard OR location of pointer from
mouse
 Trigger: Physical input to signal the computer.
 Ex: Pressing of Return (Enter) Key / Esc Key OR clicking the
mouse button.
 The measure of a device can be obtained in three distinct modes
 Each mode is defined by: Relationship b/w measure &
trigger.
Request Mode – Input Modes
 The measure of the device is not returned to the
program until the device is triggered.
 Ex: cin / scanf in C++/C language. (Input Statements)
 The program waits for the trigger when the input statement is encountered.
 The user can take as long as they want.
 The measure is only returned upon the trigger (e.g., upon pressing Enter/Return).
Sample Mode (Input Modes)
 Sample mode: input is immediate. The measure is returned as soon as the input function is encountered in the application.
 Position the device or enter the data before the function call.
 The program retrieves the measure immediately from the buffer/file/location.
 For example, your application can obtain the location
of the screen cursor at any point in time, through the
use of SAMPLE mode input.
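A rough C++ sketch contrasting request mode and sample mode (sampleCursor is a hypothetical placeholder, not a real API call):

    #include <iostream>
    #include <string>
    #include <utility>

    // Hypothetical: returns the current cursor position immediately (sample mode).
    std::pair<int, int> sampleCursor() { return {640, 360}; }

    int main() {
        // Request mode: the program blocks until the user triggers (presses Enter).
        std::string line;
        std::getline(std::cin, line);

        // Sample mode: the measure is returned immediately, no trigger needed.
        auto [x, y] = sampleCursor();
        std::cout << "cursor sampled at (" << x << ", " << y << ")\n";
    }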
Event Mode – Input Modes
 Case: multiple input devices, each with its own measure & trigger.
 For example: flight simulators with multiple inputs.
 Event Mode:
 Application program & devices work independently of each other.
 Each time a listed device is triggered, its measure + identifier are stored in an event queue
 App. prog. retrieves input from the event queue whenever required.
 Event Mode (Callback Approach):
 Associate a function call (callback) with a specific event.
 The OS queries the event queue and calls the associated function.
 An efficient approach in client-server scenarios.
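A minimal, self-contained sketch of event-mode input with callbacks, using only the C++ standard library (real systems rely on a windowing toolkit's event loop; the device names here are invented):

    #include <functional>
    #include <iostream>
    #include <map>
    #include <queue>
    #include <string>

    struct Event { std::string deviceId; double measure; };

    int main() {
        std::queue<Event> eventQueue;   // filled whenever a listed device is triggered
        std::map<std::string, std::function<void(double)>> callbacks;

        // Associate a callback with each device of interest.
        callbacks["mouse"] = [](double m) { std::cout << "mouse: " << m << "\n"; };
        callbacks["stick"] = [](double m) { std::cout << "stick: " << m << "\n"; };

        // Pretend two devices were triggered; their measure + id were queued.
        eventQueue.push({"mouse", 42.0});
        eventQueue.push({"stick", -0.5});

        // The application (or the OS, in the callback approach) drains the queue
        // and dispatches each event to its associated function.
        while (!eventQueue.empty()) {
            Event e = eventQueue.front();
            eventQueue.pop();
            auto it = callbacks.find(e.deviceId);
            if (it != callbacks.end()) it->second(e.measure);
        }
    }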
Images – Physical & Synthetic
Elements of Image Formation
 Basic entities of Image Formation
 Objects
 Viewers
 Light
Object(s)
 The object exists in space independent of any
image-formation process and of any viewer.
 Computer Graphics: deals with synthetic objects
 Objects are formed by specifying the positions of geometric primitives (basic shapes), such as triangles and polygons, in space
 Mostly, a set of spatial positions (vertices) is used to define objects
 For example: a line can be defined by two vertices
 A triangle can be defined by three vertices.
Viewer(s)
 To form an image, there must be someone or
something that is viewing our objects. Like human, a
camera, etc.
 It is the viewer that forms the image of our objects.
 Human Visual System: Image is formed at back of eye
 Camera: Image is formed in the film plane
 Objects are usually seen from different perspectives.
Light
 Be it physical or synthetic, images are incomplete without light.
 No light = dark objects = no image formation
 Light is electromagnetic radiation.
 The electromagnetic spectrum includes radio, infrared, and the visible spectrum
 Visible spectrum: roughly 350–780 nm
 Around 520 nm: Green
 Near 450 nm: Blue
 Near 650 nm: Red
 Beyond recognizing which frequency corresponds to which color, CG does not deal with the physics of light
Light Spectrum
Imaging Systems
 We examine physical imaging systems to understand imaging in computers:
 Pin-hole Camera
 Human Visual System
 Pin-hole: To understand basic working principles of
camera.
 Human visual system is complex but obeys the
physical principles of other imaging systems
Pin-hole Camera
 A pinhole camera is a box with a small hole in the
center of one side of the box
 film is on the side opposite the pinhole.
 Hole is so small that only a single ray of light can
enter (assumption)
 For example: a point (x, y, z) in the scene
 The image plane is at z = -d
 Side view, by similar triangles: yp = -y d / z
 Top view, likewise: xp = -x d / z
 The point (xp, yp, -d) is called the projection of (x, y, z)
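A small helper consistent with the projection equations above (an illustrative sketch; the names are made up):

    #include <cstdio>

    struct Point3 { float x, y, z; };

    // Project a scene point through a pinhole at the origin onto the film
    // plane z = -d. With the scene in front of the camera (z > 0), the
    // resulting image is inverted (flipped).
    Point3 projectPinhole(Point3 p, float d) {
        return { -p.x * d / p.z, -p.y * d / p.z, -d };
    }

    int main() {
        Point3 img = projectPinhole({1.0f, 2.0f, 4.0f}, 1.0f);
        std::printf("projected to (%.2f, %.2f, %.2f)\n", img.x, img.y, img.z);
    }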
Pin-hole Camera
 Color: In the idealized model, the color of the image is the same as in the scene
 Field/Angle of View: the extent of the observable world that is seen at any given moment.
 If h is the height of the camera (film) and d the distance from the pinhole to the film, basic trigonometry gives the angle of view: θ = 2 tan⁻¹(h / (2d))
Pin-hole Camera
 Depth of Field: every object within the angle of view is in focus, i.e., appears sharp.
 In the ideal pinhole camera, the depth of field is infinite. (Assumed)
 Disadvantages of the pinhole camera
 It admits only a single ray – almost no light.
 The camera cannot be adjusted to have a different angle of view.
 By replacing the hole with a lens, these problems can be overcome
 With a proper lens, more light can enter (larger aperture)
Human Visual System
 Human Visual System is extremely complex.
 Light enters through cornea and lens
 Iris opens/shuts to adjust the amount of light
 Image is formed at retina (back of the eye)
 Cells (Rods and Cones) are sensors
They respond when light (350–780 nm) enters the eye
 Rods: 1 type; low-light sensors; night vision, not color sensitive
 Cones: 3 types; responsible for color vision
 Resolution of the Visual System
 Resolution: a measure of what size of objects we can see.
 Technically: a measure of how close two points can be placed and still be seen as two distinct points
Human Visual System
 Brightness: Brightness is an overall measure of how
we react to the intensity of light
 HVS reacts differently to different wavelengths of light
 HVS is more sensitive to green and less sensitive to red &
blue
 Because there are only three types of cones, the HVS characterizes color by three values rather than by the whole visible spectrum.
 These colors are called the primary colors.
 The primary colors are red, green, and blue.
Synthetic Camera Model
 The paradigm of emulating image formation by an optical system within the computer is known as the Synthetic Camera Model
 Basic Principles:
 Since objects and the viewer are independent of each other, the graphics API should have separate functions for specifying them.
 Images can be computed using simple geometric calculations, as in the pinhole camera.
Synthetic Camera Model
 In a pinhole camera, the image formed is flipped.
 In the computer, the image is kept upright by moving the image plane to the front (virtual image plane)
 A line called a projector is drawn from the center of the lens/projection (COP) to each object/point.
 All projectors emanate from the COP
 This virtual image plane is called the projection plane
Synthetic Camera Model
 There is always a limit to the size of the image. In an optical system, the field of view expresses this limitation
 The Synthetic Camera Model places a window/rectangle in the projection plane to model this limitation.
 This window/rectangle, through which a viewer at the COP sees the world, is called the clipping window/rectangle
 Given the following
 Location of the COP
 Orientation of the projection plane
 Size of the clipping window
 we can determine which objects will appear in the image (see the sketch below)
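A rough C++ sketch of this idea (one simple convention, assumed for illustration: COP at the origin, projection plane a distance d in front of it, object beyond the plane): project the point, then test it against the clipping window.

    #include <cstdio>

    struct Vec3 { float x, y, z; };

    // Project onto the virtual projection plane z = d placed in front of the
    // COP at the origin (object at z > d). The image is not flipped.
    Vec3 projectToPlane(Vec3 p, float d) {
        return { p.x * d / p.z, p.y * d / p.z, d };
    }

    // Clipping window on the projection plane, centered on the optical axis.
    bool insideClippingWindow(Vec3 q, float halfWidth, float halfHeight) {
        return q.x >= -halfWidth && q.x <= halfWidth &&
               q.y >= -halfHeight && q.y <= halfHeight;
    }

    int main() {
        Vec3 q = projectToPlane({2.0f, 1.0f, 5.0f}, 1.0f);
        std::printf("visible: %s\n", insideClippingWindow(q, 0.5f, 0.5f) ? "yes" : "no");
    }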
Programmer’s Interface
 Users interact with the graphics system in different ways, e.g., through CAD packages, paint programs, etc.
 Programmers interact with the graphics system through a graphics library (API)
 Graphics API: the interface between the programmer and the graphics system, specified through a set of functions.
 Programmers don't see hardware-related details.
 Software drivers translate the output of the API into a form understood by the specific hardware.
Three Dimensional API
 As per synthetic camera model, 3D API must provide
functions to specify,
 Objects
 Viewer
 Light Sources
 Material properties.
 Objects are specified using vertices.
 Objects are usually specified using geometric
primitives like lines, polygons, triangles etc
 Complex objects may involve multiple ways of
specification.
Three Dimensional API
 The camera can be defined in a variety of ways.
 APIs differ in camera selection and methods
 Four types of specification for the camera
 Position: camera location (COP)
 Orientation: rotation of the camera about its axes.
 Focal length: size of the image / angle of view
 Film plane: height & width
 These specifications can be satisfied in various ways
 The most common way is a coordinate-system transformation
 Transformations convert object positions from one coordinate system (e.g., world coordinates) to another (e.g., camera coordinates).
Three Dimensional API
 Light Sources
 Light sources are specified by their
 location, strength, color and direction
 These properties are specified for each light source used.
 Material Properties:
 Characteristics or attributes of the objects
 These attributes are specified when the objects are
defined
 Both light and material properties depend upon the
light-material model API supports.
The Modeling–Rendering Paradigm
 Model: Mathematical/Geometrical description of
shapes
 Rendering: Process of generating images from
models
 Modeling can be separated from Rendering.
 Helpful in generating complex images.
 The file/data produced by the modeler is used by the
Renderer.
 This file could be simple, containing information in a specific format
The Modeling–Rendering Paradigm
 Different hardware and software at both blocks.
 The modeler and the renderer can each be chosen or customized independently
 Use a different modeler with the same renderer
 Use the same modeler with a different renderer
 This is the most popular approach nowadays.
 Models, lights, cameras, etc. are placed in a special data structure called a scene graph
 The scene graph is then passed to the renderer or game engine.
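A very small sketch (hypothetical structure, for illustration only) of a scene-graph-like data structure that a modeler might hand to a renderer:

    #include <string>
    #include <vector>

    // Minimal scene-graph node: a named object with a transform and children.
    // Models, lights, and cameras can all be represented as nodes.
    struct SceneNode {
        std::string name;
        float transform[16];               // 4x4 matrix, row-major
        std::vector<SceneNode> children;
    };

    // A renderer or game engine would traverse the graph, accumulating
    // transforms and drawing the geometry attached to each node.
    void traverse(const SceneNode& node) {
        for (const SceneNode& child : node.children)
            traverse(child);
    }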
Graphics Architectures
 Early graphics system: general-purpose computer of
von Neumann architecture
 Single processor system (Single instruction
Execution)
 Calligraphic CRT display
 Generate a line segment by connecting two points.
 The host ran the application, computed the endpoints, and sent them to the CRT
 The information had to be sent at high speed (to avoid flicker)
 Refreshing was so demanding that even a small image would burden an expensive computer.
Display Processors
 Earliest special purpose graphics system
 Built to relieve the host of the burden of continuously refreshing the display
 The display processor included instructions to display primitives on the CRT
 The host generates the image as a program of such instructions and sends it to the display processor
 The display processor stores the program in its own memory (as a display file / display list)
 The display processor runs the program repeatedly to refresh the display
Pipeline Architecture
 A process/operation is divided into a sequence of stages that can work concurrently.
 Example: a + (b ∗ c): the multiplier and the adder act as two stages; while the adder works on one result, the multiplier can already process the next pair of operands.
 Pipelining does not speed up a single operation, but it increases the throughput of the computer.
Graphics Pipeline
 Sets of objects → objects (sets of primitives) → vertices
 Complex objects may contain millions of vertices
 To make the imaging process fast, we use a pipeline
 Graphical pipeline consists of
 Vertex Processing
 Clipping and Primitive Assembly
 Rasterization
 Fragment Processing
1. Vertex Processing
 Two major functions of this block
 Coordinate transformation
 Compute color of each vertex
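As an illustration of the coordinate-transformation step (a generic sketch, not tied to any particular API), a vertex in homogeneous coordinates is multiplied by a 4x4 matrix:

    #include <cstdio>

    // Multiply a 4x4 matrix (row-major) by a homogeneous vertex (x, y, z, w).
    void transformVertex(const float m[16], const float in[4], float out[4]) {
        for (int row = 0; row < 4; ++row) {
            out[row] = 0.0f;
            for (int col = 0; col < 4; ++col)
                out[row] += m[row * 4 + col] * in[col];
        }
    }

    int main() {
        // Translation by (3, 0, 0): moves the vertex 3 units along x.
        const float translate[16] = { 1, 0, 0, 3,
                                      0, 1, 0, 0,
                                      0, 0, 1, 0,
                                      0, 0, 0, 1 };
        const float vertex[4] = { 1, 2, 0, 1 };
        float result[4];
        transformVertex(translate, vertex, result);
        std::printf("(%g, %g, %g, %g)\n", result[0], result[1], result[2], result[3]);
    }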
Clipping and Primitive Assembly
 Vertices are assembled into primitives (shapes)
 No camera can see the whole world
 Clipping must be done.
 Clipping window/volume is considered
 Clipping is done primitive by primitive.
 Output: the set of primitives that will appear in the image.