Dr. Mohieddin Moradi
mohieddinmoradi@gmail.com
Dream
Idea
Plan
Implementation
1
Cool & Warm Colors Lights
2
Cool & Warm Colors Fluorescent Lights
– Regular incandescent lights peak in the orange and red wavelengths and tend to be weak in blue.
– That’s why red colors in your picture look so good and blue colors look so dead under normal
incandescent light.
– Standard “warm white” and “cool white” fluorescent lights overemphasize yellow-green. They’re made to
give the most light in the range of wavelengths to which the human eye is most sensitive.
3
Incandescent
– On the Kelvin scale, zero degrees K (0 K) is defined as “absolute zero” temperature.
– This is the temperature at which molecular energy or molecular motion no longer exists.
– Since heat is a result of molecular motion, temperatures lower than 0 K do not exist.
Kelvin is calculated as:
K = °C + 273.15
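The conversion above can be sketched as a small arithmetic check (a hypothetical helper, not from the slides):

```python
# Illustrative sketch of the Kelvin/Celsius relation: K = degrees C + 273.15.
def celsius_to_kelvin(c: float) -> float:
    return c + 273.15

def kelvin_to_celsius(k: float) -> float:
    # Temperatures below absolute zero (0 K) do not exist.
    if k < 0:
        raise ValueError("temperatures below 0 K do not exist")
    return k - 273.15

print(celsius_to_kelvin(0.0))   # freezing point of water -> 273.15 K
print(kelvin_to_celsius(3200))  # a 3200K studio lamp -> 2926.85 degrees C
```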
Color Temperature
4
Color Temperature
– The spectral distribution of light emitted from a piece of carbon (a
black body that absorbs all radiation without transmission and
reflection) is determined only by its temperature.
– When heated above a certain temperature, carbon will start
glowing and emit a color spectrum particular to that temperature.
– This discovery led researchers to use the temperature of heated
carbon as a reference to describe different spectrums of light.
– This is called color temperature.
5
Color Temperature
6
Color Temperature
7
The different light source types emit different colors of light (known as color spectrums) and video cameras capture this difference.
Color Temperature
8
Daylight Incandescent Fluorescent
Halogen Cool White LED Warm White LED
(Each panel: Intensity vs. Wavelength (nm).)
Natural light Artificial light
Color Temperature
9
– Our eyes are adaptive to changes in light source colors – i.e., the color of a particular object will always
look the same under all light sources: sunlight, halogen lamps, candlelight, etc.
– However, with color video cameras this is not the case, bringing us to the definition of “color temperature.”
– When shooting images with a color video camera, it is important for the camera to be color balanced
according to the type of light source (or the illuminant) used.
This is because different light source types emit different colors of light (known as color spectrums) and video
cameras capture this difference.
Color Temperature
10
The camera color temperature is lower
than the ambient color temperature
The camera color temperature is higher
than the ambient color temperature
Color Temperature
11
– In video technology, color temperature is used to describe the spectral distribution of light emitted from a
light source.
– The cameras do not automatically adapt to the different spectrums of light emitted from different light
source types.
– In such cases, color temperature is used as a reference to adjust the camera’s color balance to match the
light source used.
For example, if a 3200K (Kelvin) light source is used, the camera must also be color balanced at 3200K.
Color Temperature
12
Color Temperature Conversion
– All color cameras are designed to operate at a certain color temperature.
– For example, Sony professional video cameras are designed to be color balanced at 3200K, meaning that
the camera will reproduce colors correctly provided that a 3200K illuminant is used.
– This is the color temperature for indoor shooting when using common halogen lamps.
13
Cameras must also provide the ability to shoot under illuminants with color temperatures other than 3200K.
– For this reason, video cameras have a number of selectable color conversion filters placed before the
prism system.
– These filters optically convert the spectrum distribution of the ambient color temperature (illuminant) to
that of 3200K, the camera’s operating temperature.
– For example, when shooting under an illuminant of 5600K, a 5600K color conversion filter is used to convert
the incoming light’s spectrum distribution to that of approximately 3200K.
Color Temperature Conversion
14
Color Temperature Conversion
15
– When only one optical filter wheel is available within the camera, electronic color temperature
conversion allows all optical filters to be Neutral Density types, providing flexible exposure control.
– The cameras also allow color temperature conversion via electronic means.
– The Electronic Color Conversion Filter typically allows the operator to change the color temperature from
2,000K to 20,000K.
Color Temperature Conversion
16
Color Temperature Conversion
17
“Why do we need color conversion filters if we can correct the change of color temperature electrically
(white balance)?”
– White balance electrically adjusts the amplitudes of the red (R) and blue (B) signals to be equally
balanced to the green (G) by use of video amplifiers.
– We must keep in mind that using electrical amplification will result in degradation of signal-to-noise ratio.
– Although it may be possible to balance the camera for all color temperatures using the R/G/B amplifier
gains, this is not practical from a signal-to-noise ratio point of view, especially when a large gain increase is
required.
The color conversion filters reduce the gain adjustments required to achieve correct white balance.
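The gain adjustment described above can be sketched in a few lines. This is an illustrative model, not an actual camera DSP; the helper names are invented:

```python
# Minimal sketch of white balance: choose R and B amplifier gains so that a
# white reference patch outputs equal R, G, B levels (G is the reference).
def white_balance_gains(r_avg, g_avg, b_avg):
    """Return (r_gain, b_gain) so a white object outputs R = G = B."""
    # Note: large gains amplify noise too, degrading the S/N ratio -
    # which is why optical conversion filters do the coarse correction.
    return g_avg / r_avg, g_avg / b_avg

def apply_gains(r, g, b, r_gain, b_gain):
    return r * r_gain, g, b * b_gain

# A white card shot under a warm illuminant: red reads high, blue reads low.
r_gain, b_gain = white_balance_gains(1.25, 1.0, 0.8)
print(apply_gains(1.25, 1.0, 0.8, r_gain, b_gain))  # -> (1.0, 1.0, 1.0)
```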
Color Temperature Conversion
18
Variable Color Temperature
The Variable Color Temp. Function allows the operator to change the color temperature from 2,000K to
20,000K
19
Preset Matrix Function
– Presets for three matrices can be set.
– The matrix level can be preset for different lighting.
– The settings can be easily controlled from the control panel.
20
White Balance & Color Temperature
21
The different light source types emit different colors of light (known as color spectrums) and video cameras capture this difference.
White Balance & Color Temperature
22
Daylight Incandescent Fluorescent
Halogen Cool White LED Warm White LED
(Each panel: Intensity vs. Wavelength (nm).)
The video cameras are not adaptive to the different spectral distributions of each light source type.
– In order to obtain the same color reproduction under different light sources, color temperature variations
must be compensated by converting the ambient color temperature to the camera’s operating color
temperature (optically or electrically).
– However, this conversion alone does not complete the color balancing of the camera, so a more precise
color-balancing adjustment must be made.
– This second adjustment, which precisely matches the incoming light’s color temperature to that of the
camera, is known as “white balance.”
White Balance
23
White Balance
White balance refers to shooting a pure white object, or a grayscale chart, and adjusting the camera’s video amplifiers so
the Red, Green, and Blue channels all output the same video level.
24
More Precise
Color Balancing
White Balance
25
White Balance
Why does performing this adjustment for the given
light source ensure that the color “white” and all other
colors are correctly reproduced?
– The color “white” is reproduced by combining Red,
Green, and Blue with an equal 1:1:1 ratio.
– White Balance adjusts the gains of the R/G/B video
amplifiers to provide this output ratio for a white object
shot under the given light source type.
– Once these gains are correctly set for that light source,
other colors are also output with the correct Red,
Green, and Blue ratios.
26
(SDTV)
Y = 0.30R + 0.59G + 0.11B
White Balance
– For example, when a pure yellow object is shot, the
outputs from the Red, Green, and Blue video amplifiers
will have a 1:1:0 ratio (yellow is produced by adding
equal amounts of Red and Green).
– In contrast, if the White Balance is not adjusted, and
the video amplifiers have incorrect gains for that light
source type, the yellow color would be output
incorrectly with, for example, a Red, Green, and Blue
channel ratio of 1:0.9:0.1.
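The yellow-patch example extends the same sketch: gains taken from a white card also restore the 1:1:0 ratio. The illuminant scaling factors below are illustrative assumptions:

```python
# Continuing the white-balance sketch: gains derived from a white card
# also correct other colors, here a pure yellow patch.
def white_balance_gains(r_avg, g_avg, b_avg):
    return g_avg / r_avg, g_avg / b_avg

# Gains from the white card (same warm illuminant: R x1.25, G x1.0, B x0.8).
r_gain, b_gain = white_balance_gains(1.25, 1.0, 0.8)

# Yellow = equal Red + Green, no Blue; the sensor sees the same per-channel
# illuminant imbalance that the white card showed.
r, g, b = 1.0 * 1.25, 1.0 * 1.0, 0.0 * 0.8
print(r * r_gain, g, b * b_gain)  # -> 1.0 1.0 0.0  (correct 1:1:0 ratio)
```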
27
(SDTV)
Y = 0.30R + 0.59G + 0.11B
Preset White
Preset White is a white-balance selection used in shooting scenarios
• when the white balance cannot be adjusted,
OR
• when the color temperature of the shooting environment is already known (3200K or 5600K, for instance).
– This means that by simply choosing the correct color conversion filter, optical or electronic, an
approximate white balance can be achieved.
– It must be noted, however, that this method is not as accurate as actually taking the white balance.
By selecting Preset White, the
R/G/B amplifiers used for white-
balance correction are set to
their center values.
28
AWB (Auto White Balance)
Unlike the human eye, cameras are not adaptive to different color temperatures of different light source types
or environments.
– This means that the camera must be adjusted each time a different light source is used, otherwise the
color of an object will not look the same when the light source changes.
– This is achieved by adjusting the camera’s white balance to make a ‘white’ object always appear white.
– Once the camera is adjusted to reproduce white correctly, all other colors are also reproduced as they
should be.
29
AWB (Auto White Balance)
The AWB is achieved by framing the camera on a white object – typically a piece of white paper/cloth or a
grayscale chart – so it occupies more than 70% of the display.
Then pressing the AWB button on the camera body instantly adjusts the camera white balance to match the
lighting environment.
Macbeth Chart
30
ATW (Auto Tracing White Balance)
– The AWB is used to set the correct color balance for one particular shooting environment or color
temperature.
– The ATW continuously adjusts the camera color balance in accordance with any change in color
temperature.
For example, imagine shooting a scene that moves from indoors to outdoors. Since the color temperature of
the indoor lighting and outdoor sunlight are very different, the white balance must be changed in real time in
accordance with the ambient color temperature.
31
Black Balance
To ensure accurate color reproduction throughout all
video levels, it is important that the red, green, and blue
channels are also in correct balance when there is no
incoming light.
When there is no incoming light, the camera’s red,
green, and blue outputs represent the “signal floors” of
the red, green, and blue signals, and unless these signal
floors are matched, the color balance of other signal
levels will not match either.
32
Black Balance
It is necessary when:
– Using the camera for the first time
– Using the camera after a significant period out of use
– There has been a sudden change in temperature
– Without this adjustment, the red, green, and blue color
balance cannot be precisely matched even with
correct white balance adjustments.
33
ND (Neutral Density) Filter
− It reduces light of all wavelengths.
− It is used when the subject is too bright to be adjusted by the diaphragm alone.
34
ND (Neutral Density) Filter
The ND filters reduce the amount of incoming light to a level where the lens iris can provide correct exposure
for even bright images.
– It is important to note that the use of ND filters does not affect the color temperature of the incoming light –
they are designed so that light intensity is reduced uniformly across the entire spectrum.
– The ND filters can also be used to intentionally control an image’s depth of field to make it more shallow.
– This is because ND filters allow a wider iris opening to be selected, and because depth of field decreases
as iris aperture (opening) increases.
35
ND (Neutral Density) Filter
− The strength of an ND filter may be expressed as:
• Percent Transmission (T)
• Optical Density (OD or D) – describes the amount of energy blocked by the filter
• Exposure factor
D = −log₁₀ T
Exposure factor = 1/T
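The three measures above are related as sketched below (the "ND8" naming convention is an assumption of the example):

```python
import math

# Sketch of the ND-filter relations: density, exposure factor, and the
# equivalent number of iris stops for a given transmission T.
def nd_stats(transmission: float):
    density = -math.log10(transmission)   # D = -log10(T)
    exposure_factor = 1.0 / transmission  # extra exposure required
    stops = math.log2(exposure_factor)    # equivalent iris stops
    return density, exposure_factor, stops

d, ef, stops = nd_stats(1 / 8)            # an "ND8" filter passes 1/8 of the light
print(f"D = {d:.2f}, exposure factor = {ef:.0f}x, {stops:.0f} stops")
# -> D = 0.90, exposure factor = 8x, 3 stops
```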
36
Transverse and Longitudinal Waves
Direction of travel
Transverse Wave
Longitudinal Wave
37
– In an un-polarized transverse wave, oscillations may take place in any direction at right angles (90°) to the
direction in which the wave travels.
Wave Travel
Direction
Oscillation Direction
Polarization
38
– In an un-polarized transverse wave, oscillations may take place in any direction at right angles (90°) to the
direction in which the wave travels.
– Polarization is a characteristic of all transverse waves that describes the orientation of oscillations.
– Polarization restricts the vibration direction of the wave.
Oscillation Direction
Wave Travel
Direction
Polarization
39
− If the oscillation takes place in only one direction then the wave is said to be linearly polarized
(or plane polarized) in that direction.
Oscillation Direction
Wave Travel
Direction
Linear Polarization
40
− This wave is polarized in the y direction (the E-field oscillation direction)
− The trace of the electric field vector is a line
Linear Polarization
41
− Circularly polarized light consists of two perpendicular EM plane waves of equal amplitude with a 90°
phase difference.
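A small numerical check of this statement, under the stated assumptions (equal amplitudes, 90° phase offset):

```python
import math

# Two perpendicular plane waves of equal amplitude E0, 90 degrees out of
# phase, give a field vector of constant magnitude that rotates on a circle.
E0 = 1.0
for t in range(8):
    phase = 2 * math.pi * t / 8
    ex = E0 * math.cos(phase)                # x-polarized component
    ey = E0 * math.cos(phase - math.pi / 2)  # y component, 90 degrees behind
    assert abs(math.hypot(ex, ey) - E0) < 1e-12
print("|E| is constant: the field vector traces a circle")
```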
42
Circular Polarization
Circular Polarization
A clockwise circularly-polarized wave An anti-clockwise circularly-polarized wave
43
Light is an Electromagnetic Wave
– Un-polarized light consists of waves with randomly directed electric fields.
– Here the waves are all traveling along the same axis, directly out of the page, and all have the
same amplitude E.
– Light is polarized when its electric fields oscillate in a single plane, rather than in any direction
perpendicular to the direction of propagation.
(Figure: an EM wave traveling along z, with the electric field E along y and the magnetic field B along x, both perpendicular to the direction of motion v.)
This wave is polarized in the y direction
The EM waves are transverse waves
44
Light and Polarization
45
Polarization
46
Liquid Crystals and Polarizer
− The alignment of the polarizer “stack” changes with voltage.
47
Circular, Linear and Unpolarized Light
48
Polarizer in Glasses
49
Polarizer in Camera
A polarizer is used to block light reflected from the surface of water or glass.
50
Polarizer in Camera
− A polarizer is used to block light reflected from the surface of water or glass.
51
– Since light scattered by the atmosphere is partly polarized, a polarizer is also effective when shooting
subjects against a blue sky.
– It can suppress the sky and make mountains or other objects stand out.
A polarizer
1- reduces the total amount of light to about ¼, and
2- changes the color balance, so the white balance must be readjusted.
Polarizer in Camera
52
AGC (Automatic Gain Control)
− When enough light cannot be captured with the lens iris fully opened, the camera’s AGC function
electrically amplifies the video signal level, increasing and optimizing picture brightness.
− The AGC function also degrades the S/N ratio, since noise is amplified together with the signal; hence it is
not used on high-end cameras and camcorders.
53
Black Clip
It prevents the camera output signal from falling below a video level specified by the television standard.
– Black clip function electronically clips off signal levels that are below a defined level called the black clip
point.
– This prevents the camera from outputting a signal level so low that it may be wrongly detected as a sync
signal by other video devices.
– The black clip point is set to 0% video level.
54
Pedestal or Master Black
Set-up Level: the absolute black level, or the darkest black that can be reproduced by the camera.
(Waveform labels: H-Sync, Burst.)
White Clip
All cameras have a white-clip circuit to prevent the camera output signal from exceeding a practical video
level when extreme highlights appear in the image.
– The white-clip circuit clips off or electrically limits the video level of highlights to a level that can be
reproduced on a picture monitor.
55
Gamma, CRT Characteristic
(Figure labels: look much brighter; look much darker; it is made darker; it is made brighter.)
56
(Camera: light → voltage. Monitor: voltage → light.)
Gamma, CRT Characteristic
(Figure labels: look much brighter; look much darker; it is made darker; it is made brighter.)
57
(Figure: Camera (light → voltage) and Monitor (voltage → light); CRT control-grid characteristic – output light vs. input voltage, ideal vs. real, for dark and bright areas of a signal.)
Gamma, CRT Characteristic
It is caused by the voltage-to-current grid drive of
the CRT (voltage-driven) and is not related to the
phosphor (i.e., a current-driven CRT has a linear
response)
58
CRT Gamma
L = V^γm,  γm = 2.22
(Figure: voltage-to-current grid-drive CRT – control grid, input voltage → output light; camera – input light → output voltage.)
Gamma, CRT Characteristic
Legacy system gamma (cascaded system)
is about 1.2 to compensate for dark-surround
viewing conditions (γm = 2.4).
59
ITU-R BT.709 OETF
CRT Gamma: L = V^γm,  γm = 2.22
Camera Gamma: V = L^γc,  γc = 0.45
γc · γm = 1
– A CRT defect? No!
• It is caused by the voltage-to-current (grid-drive) characteristic of
the CRT, not the phosphor.
• The nonlinearity is roughly the inverse of human lightness
perception.
– Legacy system gamma is about 1.2 to compensate for dark-surround
viewing conditions.
– An amazing coincidence!
• The CRT gamma curve (grid-drive) nearly matches the human
lightness response, so the precorrected camera output is
close to being ‘perceptually coded.’
• If CRT TVs had been designed with a linear response, we
would have needed to invent gamma correction anyway!
Gamma, CRT Characteristic
60
(Figure: CRT Gamma & System Curve – CRT gamma (2.4) compared to total system gamma (1.2); BT.1886 display.)
– Gamma (γm) is a numerical value that indicates the response characteristics between the brightness of a
display device (such as a CRT or flat panel display) and its input voltage.
– The exponent index that describes this relation is the CRT’s gamma, which is usually around 2.2.
L: the CRT brightness
V: the input voltage
– Although gamma correction was originally intended for compensating for the CRT’s gamma, today’s
cameras offer unique gamma settings (γc) such as film-like gamma to create a film-like look.
Gamma, CRT Characteristic
61
L = V^γm
– The goal in compensating for a CRT’s gamma is to create a camera output signal that has a reverse
relation to the CRT’s gamma.
– In this way, the light that enters the camera will be in proportion to the brightness of the CRT picture tube.
– Technically, the camera should apply a gamma correction of about 1/γm.
– This exponent γc (= 1/γm) is what we call the camera’s gamma, which is about 1/2.2, or 0.45.
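A minimal sketch of this inverse relation, using the slides’ values of 0.45 and 2.22:

```python
# Camera gamma 0.45 pre-corrects for a display gamma of 2.22, so the
# cascaded camera-plus-CRT system is (almost exactly) linear.
GAMMA_DISPLAY = 2.22  # CRT: L = V ** gamma_m
GAMMA_CAMERA = 0.45   # camera: V = L ** gamma_c  (~ 1 / 2.22)

for scene_light in (0.01, 0.18, 0.5, 1.0):
    v = scene_light ** GAMMA_CAMERA   # camera encoding (OETF)
    reproduced = v ** GAMMA_DISPLAY   # CRT decoding (EOTF)
    assert abs(reproduced - scene_light) < 0.01
print("camera gamma x display gamma ~ 1: light in ~ light out")
```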
Gamma, CRT Characteristic
62
Camera gamma (γc = 0.45)
Linear overall gamma (γm · γc = 1)
Display gamma (γm = 2.22)
– Cameras convert scene light to an electrical signal using an Opto-Electronic Transfer Function (OETF)
– Displays convert an electrical signal back to scene light using an Electro-Optical Transfer Function (EOTF)
(Non Linear)
• The CRT EOTF is commonly known as gamma.
• The Camera OETF is commonly known as inverse gamma.
(Diagram: Scene → Capture → Transmission Medium → Display.)
L = V^γ,  γ = 2.45 for HDTV (ITU-R BT.709)
Gamma, CRT Characteristic
63
Gamma, CRT Characteristic
Recommendation ITU-R BT.709 (old) – overall Opto-Electronic Transfer Function at the source (OETF):
V = 1.099 L^0.45 − 0.099   for 0.018 ≤ L ≤ 1
V = 4.500 L                for 0 ≤ L < 0.018
where:
L: luminance of the image, 0 ≤ L ≤ 1
V: corresponding electrical signal
Recommendation ITU-R BT.1886 (2011) – reference Electro-Optical Transfer Function (EOTF):
L = a (max(V + b, 0))^γ
L: screen luminance in cd/m²
V: input video signal level (normalized: black at V = 0, white at V = 1)
γ: exponent of the power function, γ = 2.40
a: variable for user gain (legacy “contrast” control)
b: variable for user black-level lift (legacy “brightness” control)
The variables a and b are derived by solving the following equations so that V = 1 gives L = L_W and
V = 0 gives L = L_B:
L_B = a · b^γ
L_W = a · (1 + b)^γ
L_W: screen luminance for white
L_B: screen luminance for black
For content mastered per Recommendation ITU-R BT.709, 10-bit digital code values D map into values of V
per the following equation:
V = (D − 64)/876
(Plot labels: BT.709, BT.1886.)
The CRT’s characteristics effectively forced the camera
gamma that became BT.709.
• Recommendation ITU-R BT.709 explicitly specifies a reference OETF function that in combination with a CRT display produces
a good image.
• Recommendation ITU-R BT.1886 in 2011 specifies the EOTF of the reference display to be used for HDTV production; the EOTF
specification is based on the CRT characteristics so that future monitors can mimic the legacy CRT in order to maintain the
same image appearance in future displays.
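The two transfer functions quoted above can be sketched directly from their formulas; the default luminance values in the BT.1886 example are assumptions of the sketch:

```python
# Sketch of the BT.709 OETF and BT.1886 EOTF with normalized signals.
def oetf_bt709(L: float) -> float:
    """Scene light L (0..1) -> electrical signal V per ITU-R BT.709."""
    if L < 0.018:
        return 4.500 * L
    return 1.099 * L ** 0.45 - 0.099

def eotf_bt1886(V: float, lw: float = 100.0, lb: float = 0.0) -> float:
    """Signal V (0..1) -> screen luminance in cd/m2 per ITU-R BT.1886."""
    gamma = 2.40
    # Solve a and b so that V = 1 gives L = LW and V = 0 gives L = LB.
    a = (lw ** (1 / gamma) - lb ** (1 / gamma)) ** gamma
    b = lb ** (1 / gamma) / (lw ** (1 / gamma) - lb ** (1 / gamma))
    return a * max(V + b, 0.0) ** gamma

print(round(oetf_bt709(0.01), 3))  # linear branch: 4.5 * 0.01 -> 0.045
print(round(oetf_bt709(1.0), 3))   # peak white -> 1.0
print(round(eotf_bt1886(1.0), 1))  # reference white -> 100.0 cd/m2
```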
64
– Adjusting gamma
• The CRT is a black-level-sensitive power law.
• So the black-level adjustment (room light) dramatically changes the effective gamma.
– Gamma for a CRT is fairly constant at about 2.4 to 2.5.
• BT.1886 is the recommended gamma for High Definition flat-panel displays.
• BT.1886 says all flat-panel displays should be calibrated to 2.4, but the effective gamma still changes with
the brightness control (black level) or room lighting.
• Black levels can track room lighting with auto-brightness, but this changes the gamma.
– Why not get rid of the gamma power law?
• Gamma is changed for artistic reasons. And for current HD (SDR) displays, even with BT.2020
colorimetry, BT.1886 applies for the reference display gamma.
Gamma, CRT Characteristic
65
Light Power = (V + Black Level)^γ
Light Power = V^2.4
66
System Gamma, Gamma & Bit Resolution Reduction
67
System Gamma, Gamma & Bit Resolution Reduction
(Figure: D65 daylight scene → linear camera (natural response) → linear CRT (current driven) or new flat-panel display; ~100 perceptually uniform steps require 15 bits; the outputs need to match the scene light, and look the same with >15 bits.)
Image reproduction using video is perfect if the display light pattern matches the scene light pattern.
If the camera response were linear, more than 15 bits would be needed for R, G, B, and the MPEG & NTSC
S/N would need to be better than 90 dB – so 8 bits is not sufficient for NTSC & MPEG.
Perception is a 1/3 power law (“cube root”).
68
System Gamma, Gamma & Bit Resolution Reduction
(Figure: D65 daylight scene → linear camera (natural response) → NTSC & MPEG (8 bits) → linear CRT (current driven) or new flat-panel display; ~100 steps (15 bits); the outputs need to match the scene light, but do not look the same.)
If the camera response were linear, more than 15 bits would be needed for R, G, B, and the MPEG & NTSC
S/N would need to be better than 90 dB – so 8 bits is not sufficient for NTSC & MPEG.
Perception is a 1/3 power law (“cube root”).
Image reproduction using video is perfect if the display light pattern matches the scene light pattern.
Lightness perception is only important for S/N considerations.
69
System Gamma, Gamma & Bit Resolution Reduction
(Figure: D65 daylight scene → camera with gamma → standard CRT (voltage-to-current grid drive, voltage driven); ~100 steps need only 7 bits, with 1 JND per step in an 8-bit signal; the outputs need to match the light output, and look the same with only 7 bits.)
Thanks to the CRT gamma, we compress the signal to roughly match human perception, and only 7 bits is
enough!
Perception is a 1/3 power law (“cube root”).
Image reproduction using video is perfect if the display light pattern matches the scene light pattern.
Quantization and noise appear uniform due to the inverse transform.
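The bit-depth argument above can be illustrated numerically: linear quantization leaves huge relative steps in the shadows, while gamma-coding before quantization evens them out. The `step_ratio` helper is an illustrative construction, not a standard metric:

```python
# Compare the relative (perceptually relevant) light change produced by one
# code step near a dark tone, for linear vs. gamma-coded quantization.
def step_ratio(light: float, bits: int, gamma: float) -> float:
    """Relative light change of one code step in the neighborhood of `light`."""
    levels = 2 ** bits - 1
    code = round(light ** gamma * levels)   # encode, then quantize
    lo = (max(code - 1, 0) / levels) ** (1 / gamma)  # decode adjacent codes
    hi = (code / levels) ** (1 / gamma)
    return (hi - lo) / light

dark = 0.01  # a deep shadow tone
print(f"linear 8-bit: {step_ratio(dark, 8, 1.0):.0%} jump per code step")
print(f"gamma  8-bit: {step_ratio(dark, 8, 0.45):.1%} jump per code step")
```

With 0.45 gamma the shadow steps shrink dramatically, which is why 8 bits suffice once the signal is gamma-coded.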
Recall of BT.709 HDTV System Architecture
(Diagram: Sensor Image → Camera (Cam Adj., e.g. Iris) → OETF BT.709 → Artistic Adjust (e.g. Toe, Knee) → 8–10-bit Delivery → EOTF BT.1886 → Reference Display → View (Reference Viewing Environment); or → EOTF BT.1886 → Display Adjust → Non-Reference Display (Non-Reference Viewing Environment).)
EOTF of the reference display for HDTV production:
• It specifies the conversion of the non-linear signal into display
light for HDTV.
• The EOTF specification is based on the CRT characteristics so
that future monitors can mimic the legacy CRT in order to
maintain the same image appearance in future displays.
70
(The reference OOTF is the cascade of the BT.709 OETF and the BT.1886 EOTF.)
OOTF_SDR = OETF_709 × EOTF_1886
(Reference OETF that, in combination with a CRT, produces a good image.)
Recall of BT.709 HDTV System Architecture
(Diagram: Sensor Image → Camera (Cam Adj., e.g. Iris) → OETF BT.709 → Artistic Adjust (e.g. Toe, Knee; Creative Intent) → 8–10-bit Delivery → EOTF BT.1886 → Reference Display → View (Reference Viewing Environment); or → EOTF BT.1886 → Display Adjust (e.g. Knee) → Non-Reference Display (Non-Reference Viewing Environment). The cascade up to the reference display is the Artistic OOTF.)
If an artistic image “look” different from that produced by the
reference OOTF is desired, “Artistic adjust” may be used
71
OOTF_SDR = OETF_709 × EOTF_1886
− There is typically a further adjustment (display adjust) to compensate for the viewing environment, display limitations, and viewer preference; this
alteration may lift the black level, effect a change in system gamma, or impose a “knee” function to soft-clip highlights (known as the “shoulder”).
− In practice the EOTF gamma and display-adjust functions may be combined into a single function.
Actual OOTF = OETF (BT.709) + EOTF (BT.1886) + Artistic adjustments + Display adjustments
EOTF of the reference display for HDTV production.
• It specifies the conversion of the non-linear signal into display
light for HDTV.
• The EOTF specification is based on the CRT characteristics so
that future monitors can mimic the legacy CRT in order to
maintain the same image appearance in future displays.
Reference OETF that in
combination with a CRT
produces a good image
– Recommendation ITU-R BT.709 explicitly specifies a reference OETF function that in combination with a CRT
display produces a good image.
– Recommendation ITU-R BT.1886 in 2011 specifies the EOTF of the reference display to be used for HDTV
production; the EOTF specification is based on the CRT characteristics so that future monitors can mimic
the legacy CRT in order to maintain the same image appearance in future displays.
– A reference OOTF is not explicitly specified for HDTV.
– There is no clearly defined location of the reference OOTF in this system.
The Reference OOTF = OETF (BT.709) + EOTF (BT.1886) (cascaded)
– If an artistic image “look” different from that produced by the reference OOTF is desired for a specific
program, “Artistic adjust” may be used to further alter the image in order to create the image “look” that is
desired for that program. (Any deviation from the reference OOTF for reasons of creative intent must occur
upstream of delivery)
The Actual OOTF = OETF (BT.709) + EOTF (BT.1886) + Artistic and display adjustments
Recall of BT.709 HDTV System Architecture
72
– When shooting a movie or a drama, the colors of an object can sometimes be subject to a drop in saturation.
– For example, as shown in the “Before Setting”, the colors of the red and orange flowers are not reproduced fully, causing
the image to lose its original texture and color depth.
– This occurs because the colors in mid-tone areas (the areas between the highlighted and the dark areas) are not
generated with a “look” that appears natural to the human eye.
– In such cases, adjusting the camera’s gamma correction enables the reproduction of more realistic pictures with much
richer color. This procedure lets your camera create a unique atmosphere in your video, as provided by film-originated
images.
Tips for Creating Film-like Images With Rich Colors
73
By lowering the camera gamma correction (γc) value
Tips for Creating Film-like Images With Rich Colors
74
Tips for Creating Film-like Images With Rich Colors
75
(Figure: camera gamma curve – output voltage vs. input light, compared with the CRT gamma.
+: more contrast in dark picture areas (more low-luminance detail), more noise.
−: less contrast in dark picture areas (less low-luminance detail), less noise.)
Black Gamma
76
(Figure: ITU-R BT.709 OETF with variable black gamma – CRT grid-drive and camera transfer curves.)
The dark areas are reproduced with more color and darkness
77
Black Gamma
78
Black Gamma
Standard video gamma
Gentle gamma curve near the black signal levels
– When shooting, the color saturation in the dark areas of the picture may sometimes not be properly reproduced.
– As shown in the “Before setting” image, the dark areas in the background of the room are not fully reproduced, and the
color of the picture appears slightly faded.
– This occurs because the signals from the dark areas are not output as dark as they should be or with the shades that they
should have.
– In such situations, by adjusting the signal level of the black areas to better match the entire image, the picture is
reproduced with a much richer visual impression.
Tips for Enriching Color Saturation in Dark Areas of an Image
79
By decreasing the black gamma, dark areas are reproduced
with more color and darkness.
Tips for Enriching Color Saturation in Dark Areas of an Image
80
Tips for Enriching Color Saturation in Dark Areas of an Image
81
Y BLACK Gamma
– The BLACK GAMMA and GAMMA functions are used to adjust signals in the low-luminance area and entire luminance
range respectively.
– Because they process luminance and color-difference signals together in their circuits, both black tonal reproduction
(black contrast) and color saturation are affected.
– So since BLACK GAMMA and GAMMA change the image’s saturation together with its contrast, they are not appropriate for
adjusting picture contrast alone.
Both saturation and contrast are changed.
82
– The Y BLACK GAMMA function applies gamma processing only to the luminance signal and mixes the resultant signal with
the color difference signals to create the final output.
– This allows operators to independently adjust the tones of black/dark areas (Black Contrast) in the images.
– However, since Y BLACK GAMMA mixes the deepened (stronger) shades of black with the final output, the original colors
may darken or noise may become visible.
– Therefore, it is necessary to always check your monitor when using this function.
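The split described above – gamma applied to Y only, chroma untouched – can be sketched as follows. This is an illustrative model, not the actual camera DSP; the strength and threshold parameters are invented:

```python
# Y BLACK GAMMA sketch: apply a tone curve only to luminance below a
# threshold, leaving the color-difference signals (Cb/Cr) untouched.
def y_black_gamma(y: float, cb: float, cr: float,
                  strength: float = 1.3, threshold: float = 0.25):
    """Deepen blacks: boost the contrast of Y below `threshold` only."""
    if y < threshold:
        # Power curve pinned at the threshold so mid/high tones are unchanged.
        y = threshold * (y / threshold) ** strength
    return y, cb, cr  # chroma passes through unchanged

y, cb, cr = y_black_gamma(0.10, 0.05, -0.03)
print(round(y, 3), cb, cr)  # Y is pulled down; color difference intact
```

Because the deepened luminance is mixed back with unchanged chroma, colors can appear darker and noise more visible – matching the caution above about checking the monitor.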
Y BLACK Gamma
83
– To shoot images with much richer black reproduction, the BLACK GAMMA function and GAMMA function are effective.
These functions allow you to reproduce the desired color tone by adjusting the saturation and contrast of the image.
– However, since BLACK GAMMA and GAMMA change the image’s saturation together with contrast, they are not
appropriate for adjusting picture contrast alone.
– For example, suppose you wanted to adjust only the black color tone or contrast of the bouquet in the “Before setting”
image.
– When using BLACK GAMMA function (Master Black : -87) as shown in “After setting 1”, the colors of the flowers are also
changed, resulting in the impression of the image turning out to be somewhat different than expected.
– For such cases, the Y BLACK GAMMA function is useful, allowing you to adjust only the black contrast of the image to have
richer and deeper black tones while keeping the color intact.
Tips for Shooting Pictures with Rich and Deep Black reproduction
(Y BLACK GAMMA Function)
84
Tips for Shooting Pictures with Rich and Deep Black reproduction
(Y BLACK GAMMA Function)
85
Tips for Shooting Pictures with Rich and Deep Black reproduction
(Y BLACK GAMMA Function)
86
Tips for Shooting Pictures with Rich and Deep Black reproduction
(Y BLACK GAMMA Function)
87
Tips for Shooting Pictures with Rich and Deep Black reproduction
(Y BLACK GAMMA Function)
88
− CCD image sensors have a dynamic range that is around three to six times larger than the video signal’s
dynamic range.
1) Mapping in a linear fashion:
− Remember that the brightness (luminance) levels need to fit within the 0% to 100% (max. 109%) video
signal range.
Knee Correction
(max. 109%)
89
The picture content most important to us
(ordinarily lighted subjects and human
skin tones) would be reproduced at very
low video levels, making it look too
dark on a picture monitor.
− CCD image sensors have a dynamic range that is around three to six times larger than the video signal’s
dynamic range.
90
2) Clipping Off or Discarding the
Image’s Highlights
This would offer bright reproduction of
the main picture content, but with the
tradeoff of image highlights having no
contrast and appearing washed out.
Knee Correction
Solution
– Knee Correction offers a solution to both issues, keeping the main content bright while maintaining a
certain level of contrast for highlights.
– The image sensor output is mapped to the video signal so it maintains a proportional relation until a
certain video level. This level is called the knee point.
91
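The knee mapping described above can be sketched as a simple transfer function. This is a minimal illustration of the idea; the knee point and slope values below are made up for the example, not taken from any specific camera.

```python
def apply_knee(level, knee_point=0.85, slope=0.1):
    """Map a linear sensor level to a video level.

    Below the knee point the mapping stays proportional (1:1); above it,
    highlights are compressed with a much gentler slope so they still fit
    within the legal video range (up to about 109%).
    """
    if level <= knee_point:
        return level
    return knee_point + (level - knee_point) * slope

print(apply_knee(0.5))  # main subject level: unchanged -> 0.5
print(apply_knee(3.0))  # 3x overexposed highlight -> about 1.065, inside ~1.09
```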
Knee Correction
[Figure: input/output transfer curve with a low knee point. The main subject retains natural contrast, while highlights are reproduced with low contrast.]
92
Knee Correction
[Figure: input/output transfer curve with a high knee point. The main subject retains natural contrast, but highlights are reproduced with very low contrast.]
93
Knee Correction
[Figure: input/output transfer curve with a low compression slope. The main subject retains natural contrast, but highlight contrast is low and the dynamic range is small.]
94
Knee Correction
[Figure: input/output transfer curve with a high compression slope. The main subject retains natural contrast and the dynamic range is greater, but highlight contrast is very low.]
95
Knee Correction
– Photo A shows that the scenery outside the window (image highlights) gets overexposed when Knee Correction
is turned off.
– In contrast, by activating the Knee Correction function, both the person inside the car as well as the scenery
outside are clearly reproduced. (Photo B)
96
Knee Correction
– The human eye can capture the bright colors of a subject, such as a bouquet in a bright environment, because it has a
very wide dynamic range (the range of light levels it can handle).
– However, when the same bouquet is shot with a video camera, the bright areas of the image can be overexposed and
“washed out” on the screen.
– For example, as shown in the “Before Setting” image, the white petals and down are overexposed. This is because the
luminance signal of the petals and down exceeds the camera’s dynamic range.
– In such situations, the picture can be reproduced without overexposure by compressing the signals of the image’s
highlight areas so they fall within the camera’s dynamic range.
Tips for Avoiding “Overexposure” of an Image’s Highlights
97
98
99
– A DSP circuit analyzes the content of high-level video signals and makes real-time adjustments to the knee
point and slope based on a preset algorithm.
– Some parameters of the algorithm may be fine-tuned by the user.
Dynamic Contrast Control (DCC)
[Figure: knee behavior for high-contrast images versus scenes with no extreme highlights.]
• When the incoming light far exceeds
the white clip level, the DCC
circuitry lowers the knee point in
accordance with the light intensity.
• This reproduces details of a picture
even in extremely high-contrast scenes.
100
– DCC is a function that allows cameras to reproduce details of a picture, even when extremely high
contrast must be handled.
– A typical example is when shooting a person standing in front of a window from inside the room. With
DCC activated, details of both the person inside and the scenery outside are clearly reproduced, despite
the large difference in luminance level (brightness) between them.
– When the incoming light far exceeds the white clip level, the DCC circuitry lowers the knee point in
accordance with the light intensity. This allows high contrast scenes to be clearly reproduced within the
standard video level.
– This approach allows the camera to achieve a wider dynamic range since the knee point and slope are
optimized for each scene.
101
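One way to picture the DCC behavior is a rule that lowers the knee point as the scene's peak level rises above the white clip level. This is only a sketch of the idea; the thresholds and the linear interpolation rule here are invented for illustration and are not Sony's actual algorithm.

```python
def dcc_knee_point(peak_level, white_clip=1.09, base_knee=0.95,
                   min_knee=0.80, max_excess=6.0):
    """Pick a knee point for the current scene.

    With no extreme highlights the knee stays high; as the incoming peak
    level exceeds the white clip level, the knee point is lowered in
    proportion, down to min_knee for very bright scenes.
    """
    if peak_level <= white_clip:
        return base_knee
    excess = min(peak_level / white_clip, max_excess)
    t = (excess - 1.0) / (max_excess - 1.0)  # 0..1
    return base_knee - t * (base_knee - min_knee)

print(dcc_knee_point(1.0))       # no extreme highlights -> 0.95
print(dcc_knee_point(1.09 * 6))  # very bright scene -> lowered toward min_knee
```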
Dynamic Contrast Control (DCC)
Adaptive Highlight Control
102
On
Highlight contrast is
clearly reproduced
using Adaptive
Highlight Control.
Adaptive Highlight Control
It is a kind of DCC.
– This function intelligently monitors the brightness of all areas of the image and optimizes the knee
points/slopes so the video dynamic range is more effectively used.
– A typical example is shooting an interior scene, which includes sunlit exterior seen through a window.
– This function sets knee points/slopes only for highlight areas of the image, while the middle and low
luminance levels remain unchanged. As a result, both the interior scene and sunlit exterior are clearly
reproduced.
103
− The DSP circuit analyzes the total video content in 7 luminance layers.
− It then makes real-time adjustments to multiple knee points and slopes based on a preset algorithm.
− Some parameters of the algorithm may be fine-tuned by the user.
[Figure: input/output transfer curve with multiple knee points and slopes.]
DynaLatitude
104
DynaLatitude first analyzes the light distribution or light histogram of the picture and assigns more video level
(or more 'latitude') to light levels that occupy a larger area of the picture.
In other words, it applies larger compression to insignificant areas of the picture and applies less or no
compression to significant areas.
105
DynaLatitude
DynaLatitude is a feature offered in Sony DVCAM camcorders for capturing images with a very wide dynamic range or, in
other words, images with a very high contrast ratio.
– For example, when shooting a subject in front of a window from inside the room, details of the scenery
outside will be difficult to reproduce due to the video signal's limited 1 Vp-p dynamic range.
– DynaLatitude overcomes this limitation so that both dark areas and bright areas of a picture can be
clearly reproduced within this 1 Vp-p range.
– DynaLatitude functions in such a way that the signal is compressed within the 1 Vp-p range according to
the light distribution of the picture.
106
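The histogram-driven allocation can be sketched as a coarse tone curve built from the picture's light distribution, in the spirit of histogram equalization. This is a simplified illustration of the principle only, not Sony's DynaLatitude algorithm.

```python
def dynalatitude_curve(levels, n_bins=4):
    """Build a tone curve giving more output range ('latitude') to input
    levels that occupy a larger area of the picture.

    Returns the output level at each bin edge; bins holding many pixels
    get a steep (uncompressed) segment, sparse bins get a flat one.
    """
    counts = [0] * n_bins
    for v in levels:
        counts[min(int(v * n_bins), n_bins - 1)] += 1
    curve, cum = [0.0], 0
    for c in counts:
        cum += c
        curve.append(cum / len(levels))
    return curve

# A uniformly lit picture gets a straight (identity-like) curve:
uniform = [i / 100 for i in range(100)]
print(dynalatitude_curve(uniform))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```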
DynaLatitude
Smooth Knee
Provides natural reproduction of highlight scenes
through a super dynamic compression system. 107
Super Knee
108
[Figure: Super Knee Off: dynamic range > 400%; Super Knee On: dynamic range > 600%.]
Mechanism of Human Eye
– Images (= light) seen with our eyes are directed to and projected onto the eye’s retina (which consists of
several million photosensitive cells).
– The retina reacts to light and converts it into a very small amount of electrical charges.
– These electrical charges are then sent to the brain through the optic nerve system.
109
Image Sensors
– Image sensors have photo-sensors that work in a similar way to our retina’s photosensitive cells, to convert
light into a signal charge.
– However, the charge readout method is quite different!
110
One-Chip Imaging System
111
– Each pixel within the image sensor samples the intensity of just one primary color (red, green or blue). In order to provide
full color images from each pixel of the imager, the two other primary colors must be created electronically.
– These missing color components are mathematically calculated or interpolated in the RGB color processor which is
positioned after the image sensor.
– The easiest way to calculate a missing color component:
Add the values of the color components from two surrounding pixels and divide this by two.
Blue color component missing in pixel G22:
B22 = (pixel B21 + pixel B23) / 2
112
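The two-neighbour average from the slide can be written directly. The 2-row grid of raw values below is a hypothetical stand-in for a Bayer array, just to exercise the formula.

```python
def interpolate_missing(raw, row, col):
    """Estimate a missing color component at (row, col) by averaging the
    two horizontally neighbouring pixels that did sample that color,
    as in B22 = (B21 + B23) / 2."""
    return (raw[row][col - 1] + raw[row][col + 1]) / 2

raw = [
    [12, 34, 56, 78],
    [50, 60, 70, 80],  # suppose columns 0 and 2 of this row carry blue samples
]
print(interpolate_missing(raw, 1, 1))  # (50 + 70) / 2 = 60.0
```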
One-Chip Imaging System
Original → Bayer screen output → Interpolation → Sharpening
113
One-Chip Imaging System
Three-Chip Imaging System
− The dichroic prism system provides more accurate color filtering than a color filter array.
− Capturing the red, green, and blue signals with individual imagers generates purer color reproduction.
− Using three imagers instead of just one allows for a much wider dynamic range and a higher horizontal
resolution, since the image sensing system captures three times more information than a one-chip system.
114
115
Three-Chip Imaging System
Image Sensor Size
– Image sensor size is measured diagonally across the imager’s photosensitive area, from corner to corner.
– A larger image sensor size generally translates into better image capture.
– This is because a larger photosensitive area can be used for each pixel.
The benefits of larger image sensors
1. Higher sensitivity
2. Less smear
3. Better signal-to-noise characteristics
4. Use of better lens optics
5. Wider dynamic range
116
Image Sensor Size
Crop Factor = Diagonal(35mm) / Diagonal(sensor)
APS: Advanced Photo System (discontinued)
H (high-definition), C (classic) and P (panorama)
117
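The crop factor follows directly from the diagonals. A quick check, using the 36 mm × 24 mm full-frame format and the 11 mm diagonal of a 2/3″ sensor quoted later in these slides:

```python
import math

FULL_FRAME_DIAG = math.hypot(36, 24)  # 35mm-format diagonal, about 43.27 mm

def crop_factor(sensor_diag_mm):
    """Crop factor = full-frame diagonal / sensor diagonal."""
    return FULL_FRAME_DIAG / sensor_diag_mm

print(round(crop_factor(11), 2))  # 2/3" sensor (11 mm diagonal) -> 3.93
```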
Image Sensor Size
Vidicon Tube (2/3 inch in diameter)
• An old 2/3″ Tube camera would have had a 4:3 active area of about 8.8mm x 6.6mm
giving an 11mm diagonal.
• This 4:3 11mm diagonal is the size now used to denote a modern 2/3″ sensor.
118
[Figure: a 2/3-inch tube’s 4:3 active area has an approximately 11 mm diagonal.]
– Video sensor size measurement originates from the first tube cameras, where the size designation related
to the outside diameter of the glass tube.
– The area of the face of the tube used to create the actual image was much smaller,
typically about two-thirds of the tube’s outside diameter.
– A 1″ tube would give a 2/3″ diameter active area, within which you would have a 4:3 frame with a 16mm
diagonal.
– An old 2/3″ tube camera would have had a 4:3 active area of about 8.8mm x 6.6mm, giving an 11mm
diagonal. This 4:3 11mm diagonal is the size now used to denote a modern 2/3″ sensor.
– A 1/2″ sensor has a 8mm diagonal and a 1″ sensor a 16mm diagonal.
Image Sensor Size
119
[Figure: active areas of a 2/3-inch and a 1-inch tube.]
– It’s confusing!!
– But the same 2/3″ lenses as designed for tube cameras in the 1950s can still be used today on a modern
2/3″ video camera, and they will give the same field of view today as they did back then.
– This is why some manufacturers are now using the term “1 inch type”, as this is the active area that would
be equivalent to the active area of an old 1″ diameter Vidicon/Saticon/Plumbicon tube from the
1950s.
For comparison:
– 1/3″ → 6mm diag.
– 1/2″ → 8mm diag.
– 2/3″ → 11mm diag.
– 1″ → 16mm diag.
– 4/3″ → 22mm diag.
– A camera with a Super35mm sensor would be the equivalent of approx 35-40mm
– APS-C would be approx 30mm
Image Sensor Size
120
– The term Full Frame or FF is used by users of digital single-lens reflex (DSLR) cameras as a shorthand for an
image sensor format which is the same size as 35mm format (36 mm × 24 mm) film.
Image Sensor Size
121
Picture Element (Pixel)
– The smallest unit with which the imager can sense light.
– The main factor that determines the camera’s resolution.
– In all CCD sensors, a certain area along the periphery of the photosensitive area is masked.
– These masked areas correspond to
• the horizontal and vertical blanking periods (timing) of the video signal
• and are used as a reference to generate the signal’s absolute black level.
122
Thus, there are two definitions for describing the picture elements contained within an imager.
− “Total picture elements”: all picture elements within an imager, including those that are masked.
− “Effective picture elements”: the number of picture elements that are actually used for sensing the
incoming light.
• CCDs usually have about the same number of vertical pixels as the
number of scanning lines in the video system.
• For example, the NTSC video system has 480 effective scanning lines
and therefore CCDs used in this system have about 490 vertical pixels.
123
Picture Element (Pixel)
CCD Image Sensor
124
– Charge Transfer from Photo Sensor to Vertical CCD
– Like Water Draining from a Dam.
125
CCD Image Sensor
– Charge Transfer by CCD in a Bucket-brigade Fashion.
– CCD image sensors get their name from the vertical and horizontal shift registers, which are Charge
Coupled Devices that act as bucket brigades.
126
CCD Image Sensor
– Since the charges transferred from the horizontal CCD are very small, an amplifier is needed both to convert the charges
to voltage and to strengthen this voltage.
1. The charge enters a storage region called the “floating diffusion”.
2. The voltage at the surface of the floating diffusion varies in proportion to the accumulated charge.
3. The voltage generated on the surface of the floating diffusion controls the gate of the amplifier.
(When a charge is transferred to the floating diffusion, the voltage generated on its surface decreases in proportion to the amount of
charge, and the gate voltage of the amplifier decreases in proportion.)
To the camera's
signal processor
127
CCD Image Sensor
1. Each pixel (a) has a photo sensor (b) that converts incoming light into electrons.
2. A vertical shift register (c) is situated between the columns of photo sensors. This shift register is literally a "charge
coupled device," from which the CCD image sensor gets its name.
3. During the vertical sync interval, all the charges accumulated in all the photo sensors are simultaneously
transferred into the adjacent vertical shift register CCDs. The photo sensor is like a dam, holding back a reservoir of
electrons. The transfer is like the floodgates of multiple dams opening at the same time, and the water of all these
dams draining at the same time.
4. The timing of the charge transfer to the vertical CCD depends on the frame or field rate of the camera. For
example, at a rate of 50 fields per second, the accumulated charge will be transferred to the vertical CCD every
1/50 second. It is as if the floodgates of all the dams open and drain water at an interval of 1/50 second.
5. Each charge is shifted down the vertical CCD in bucket-brigade fashion. The timing of the transfers is the same as
the line scanning frequency of the camera. With a 625-scanning-line camera operating at 50 interlaced fields per
second, the charges are transferred at intervals of 1/15625 second.
6. The vertical CCDs all transfer charges into another CCD positioned horizontally across the bottom of the image
sensing area. This is the horizontal shift register (d).
7. During the horizontal sync interval, all the charges are shifted in bucket-brigade fashion across the horizontal CCD
and into the amplifier (x).
128
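The bucket-brigade readout in steps 3–7 can be mimicked with plain lists. This toy model ignores timing and the light-shielding details; it only shows the order in which charges reach the amplifier.

```python
def ccd_readout(charge_rows):
    """Simulate IT CCD readout order.

    charge_rows[0] is the row next to the horizontal register. At each
    line interval every row shifts one step toward the register; the
    register is then clocked out pixel by pixel to the amplifier.
    """
    rows = [r[:] for r in charge_rows]
    samples = []
    while rows:
        horizontal_register = rows.pop(0)  # row shifts into the horizontal CCD
        while horizontal_register:
            samples.append(horizontal_register.pop(0))  # clocked to amplifier
    return samples

frame = [[1, 2, 3],
         [4, 5, 6]]
print(ccd_readout(frame))  # [1, 2, 3, 4, 5, 6]
```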
Interline-Transfer CCD (IT CCD)
– CCDs are categorized into two types,
depending on their structure and the method
used to transfer (read out) the accumulated
charge at each photo-site to the output.
– The structure of early IT imagers exhibited a
considerable amount of vertical smear.
129
Light sensitive
Pixels
Vertical Shift
Registers
Horizontal Shift
Register
130
Interline-Transfer CCD (IT CCD)
The light conversion and charge accumulation takes place over the video field period (e.g. 1/50 second for PAL).
[Figure: CCD array with light-sensitive pixels, vertical shift registers, and a horizontal shift register. Light falling on the sensors builds up an electrical charge whose magnitude is proportional to the intensity of the light and the duration of exposure.]
131
Interline-Transfer CCD (IT CCD)
[Figure: accumulated charges are clocked into the vertical shift registers.]
132
Interline-Transfer CCD (IT CCD)
The electrical charges collected by each photo-sensor are transferred
to the vertical shift registers during the vertical blanking interval.
[Figure: the first row of charges is clocked into the horizontal shift register.]
133
Interline-Transfer CCD (IT CCD)
These charges are shifted, at the horizontal frequency, through the vertical shift register and read into
the horizontal register.
Charges within the same row in the CCD array are shifted simultaneously to establish a scanning line.
[Figure: charges are clocked out of the horizontal shift register at the standard scanning frequency, through correlated double sampling, to the CCD output.]
134
Interline-Transfer CCD (IT CCD)
[Figure sequence (slides 135–141): charges continue to be clocked out of the horizontal shift register at the standard scanning frequency, through correlated double sampling, to the CCD output.]
Interline-Transfer CCD (IT CCD)
Once a given line has been read into the horizontal register, it is immediately read out – during the same
horizontal interval – to the camera circuitry so that the next scanning line can be read into the register.
[Figure: the second row of charges is clocked into the horizontal shift register.]
142
Interline-Transfer CCD (IT CCD)
[Figure sequence (slides 143–145): the process continues until all the rows of charges have been sequentially clocked out.]
Smear
Similar to the CCD’s photo sensors, the vertical register also has a light-to-electron conversion effect, and any
light that leaks into it generates unwanted electrons.
• Vertical smear is a phenomenon inherent in CCD cameras, which occurs when a bright object or light source is shot.
• It is observed as a vertical streak above and below the object or light source.
• Smear is caused by incoming light taking an irregular path and leaking directly into the CCD’s vertical shift register.
146
Vertical smear is reduced by the use of the On-Chip Lens layer
– Smear is observed as a vertical streak because the vertical register continues to generate and shift down
unwanted electrons into the horizontal register for as long as the bright light continues to hit the CCD.
– The amount of smear is generally proportional to
I. The brightness of the subject or light source
II. The area they occupy on the CCD surface.
– Thus, in order to evaluate smear level, the area must be defined.
– Smear has been reduced to a negligible level due to the use of the On-Chip Lens and other CCD
refinements.
– Corrective action:
• Increase of the exposure time
• Use of a mechanical or LCD shutter
• Use of flash illumination
147
Smear
Blooming
– Charge overflow (> full well capacity) between neighboring pixels
– Corrective action: Reduction of the incoming light
148
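Blooming is what happens when a photo site fills past its full-well capacity. A toy model of the charge budget, with an invented capacity value rather than a real sensor spec:

```python
FULL_WELL = 20_000  # electrons; illustrative capacity, not a real spec

def accumulate(photo_electrons):
    """Return (stored, overflow).

    Without an overflow drain, the overflow charge would spill into
    neighbouring pixels (blooming); with a drain, it is discarded and
    the pixel simply clips at full well.
    """
    stored = min(photo_electrons, FULL_WELL)
    overflow = max(photo_electrons - FULL_WELL, 0)
    return stored, overflow

print(accumulate(25_000))  # (20000, 5000): 5000 e- would bloom or be drained
print(accumulate(8_000))   # (8000, 0): within capacity
```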
Overflow Gate Technology and Shuttering
149
– It was designed to reduce the generation of smear.
– The upper part of a FIT CCD is structured similar to an IT
CCD with photo-sensing sites and charge-shifting
registers.
– The additional bottom part acts as a temporary storage
area for the accumulated charges.
Frame Interline Transfer CCD (FIT CCD)
150
151
Frame Interline Transfer CCD (FIT CCD)
[Figure: FIT CCD structure with light-sensitive pixels, vertical shift registers, a one-frame intermediate storage area, and a horizontal shift register.]
152
Frame Interline Transfer CCD (FIT CCD)
[Figure: light falling on the sensors causes the build-up of an electrical charge.]
153
Frame Interline Transfer CCD (FIT CCD)
[Figure: accumulated charges are clocked into the vertical shift registers.]
154
Frame Interline Transfer CCD (FIT CCD)
[Figure: charges are clocked very rapidly away from the exposed area into the shielded intermediate storage.]
155
Frame Interline Transfer CCD (FIT CCD)
[Figure: the rapid transfer into the shielded intermediate storage completes.]
156
Frame Interline Transfer CCD (FIT CCD)
[Figure: charges are clocked from the intermediate storage into the horizontal shift register at the standard line frequency.]
157
Frame Interline Transfer CCD (FIT CCD)
[Figure: charges are clocked out of the horizontal shift register at the normal scanning rate, through correlated double sampling, to the CCD output.]
158
Frame Interline Transfer CCD (FIT CCD)
– Immediately after the charges are transferred from the photo-sensing sites to the vertical registers (during
the vertical blanking interval), they are quickly shifted down to this buffer area.
– Since the charges travel quickly through the vertical register within a short period, the effect of unwanted
light leaking into the vertical register is much smaller than in an IT CCD.
– FIT CCDs originally offered a significant reduction in smear, but their complexity increased their cost.
Today, dramatic improvements in IT CCDs have reduced smear to a negligible level, eliminating the need for
the FIT structure. This is due to the use of the HAD sensor structure and improvements in On-Chip Lens (OCL)
technology.
159
Frame Interline Transfer CCD (FIT CCD)
RPN (Residual Point Noise)
– RPN refers to a white or black dot that appears on the camera’s output image due to defective pixels in
the image sensor.
– Generally, RPN consists of several adjacent pixels, in the horizontal and/or vertical direction, that have
been destroyed and cannot reproduce colors properly.
These dots are categorized into two types:
I. Dead pixels: they cannot reproduce color information at all.
II. Blemished pixels: they can reproduce color information but cannot reproduce colors properly due to
unwanted level or voltage shifts of the charges accumulated in the pixels.
160
– Concealment (hiding) technology intelligently interpolates dead pixels by using adjacent picture data in
the horizontal and vertical directions.
– Compensation technology electrically offsets unwanted level or voltage shifts to reduce the blemishing
effect.
The distinct causes of dead pixels and blemishes are still under investigation. However, it has so far
primarily been believed that cosmic rays damage the CCD pixels during high-altitude transportation
in aircraft.
Statistics indicate that cameras transported by air tend to exhibit more RPN.
161
RPN (Residual Point Noise)
White Flecks
– Although the CCD image sensors are produced with high-precision technologies, fine white flecks (small
white spot) may be generated on the screen in rare cases, caused by cosmic rays, etc.
– This is related to the principle of CCD image sensors and is not a malfunction.
– The white flecks especially tend to be seen in the following cases:
I. When operating at a high ambient temperature.
II. When you have raised the master gain (sensitivity).
III. When operating in slow shutter mode.
– Automatic adjustment of black balance may help to improve this problem.
162
Spatial offset is a method used to improve the luminance horizontal resolution of CCD cameras.
 This technology allows a higher resolution to be achieved than theoretically expected from the number of picture elements in
the CCD array.
o As shown in (a), the red and blue CCDs are fixed to the prism block with a one-half pitch horizontal offset with respect to the
green CCD.
 This doubles the number of samples within a scanning line for creating the luminance signal, giving a higher resolution than
when spatial offset is not used.
o When shooting a subject with a CCD camera using spatial offset, the image is projected on the green, red, and blue CCDs as
shown in (b).
o A and B in (c) are enlargements of areas A and B in (b). The amount of signal charge accumulated in each photo sensor is
shown in 1 through 7 as signal levels.
o If displayed separately on a video monitor, the green CCD signal levels would appear as shown in A’, and the red and blue
signal levels would appear as shown in B’. These represent the native resolutions of the CCDs without spatial offset.
 The luminance signal (Y), which is obtained by adding the R, G, and B signals with certain weights on each signal, is provided
in spatial offset by adding A’ and B’. This is shown in C’. As a result, the resolution is greatly improved.
163
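The sample-doubling effect of the half-pitch offset can be shown by interleaving two sample streams. This is only a toy model of the idea; real cameras derive Y from weighted sums of the R, G, and B signals.

```python
def spatial_offset_samples(green, offset_rb):
    """Interleave green-CCD samples with samples from the red/blue CCDs,
    which are mounted half a pixel pitch to the side, doubling the
    horizontal sample count available for the luminance signal."""
    samples = []
    for g, rb in zip(green, offset_rb):
        samples.append(g)   # green sample
        samples.append(rb)  # offset sample lands between two green samples
    return samples

print(spatial_offset_samples([10, 30], [20, 40]))  # [10, 20, 30, 40]
```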
Spatial Offset Technology
[Figure panels (a) and (b)]
164
Spatial Offset Technology
[Figure panel (c)]
165
Spatial Offset Technology
166
Spatial Offset Technology
Mechanical Shutter
167
Electronic Shutter
– When a shutter speed selection is made with the electronic shutter (e.g., 1/500 second), only the electrons
accumulated within this period are read out to the vertical register.
– All the electrons accumulated before this period (the gray triangle in the figure) are discarded to the CCD’s
N-substrate, an area within the CCD used to dispose of such unnecessary electrons.
– Discarding electrons until the 1/500-second period commences means that only movement captured
during the shutter period contributes to the image, effectively reducing the picture blur of fast-moving
objects.
168
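The consequence of discarding charge before the shutter window opens is simply that only a fraction of the field period contributes to the image. A small check, assuming PAL field timing:

```python
FIELD_PERIOD = 1 / 50   # PAL field rate
SHUTTER_TIME = 1 / 500  # selected electronic shutter speed

def exposed_fraction(shutter=SHUTTER_TIME, field=FIELD_PERIOD):
    """Fraction of the field period whose electrons are kept; everything
    accumulated earlier is discarded to the N-substrate."""
    return shutter / field

print(exposed_fraction())  # about 0.1: only a tenth of the field is exposed
```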
[Figure: charge accumulation and discard timing with a 1/500-second shutter.]
169
Electronic Shutter
Shutter OFF Shutter ON
During One Readout Cycle:
− Progressive CCDs create one picture frame
(higher vertical resolution, twice the transfer rate of interlace CCDs).
− Interlace CCDs create one interlaced field
(higher sensitivity).
Interlace CCD (in Field Integration mode)
Progressive CCD
Faster clocking of the horizontal shift register
(all lines are read out at once)
170
Progressive & Interlace CCD
– In interlace CCDs, two vertically adjacent photo sites are read out together and combined as one signal
charge. This mechanism is used so the number of vertical samples coincides with the number of interlaced
scanning lines, that is, half the total scanning lines.
– In contrast, progressive CCDs read out the electric charges individually for each photo site, producing a
picture frame with the CCD’s full line count.
– The vertical register must be capable of carrying charges from each line in a way that ensures they do not
mix.
– Progressive CCDs require twice the transfer rate of interlace CCDs since all lines are read out at once. This
higher transfer speed requires faster clocking of the horizontal shift register.
– Special consideration must also be given to sensitivity. Interlace CCDs double the signal charge by
reading two photo sites together; with progressive CCDs, this is not the case.
171
Progressive & Interlace CCD
Frame Integration Mode
High vertical resolution, high sensitivity, picture blur
– To create even fields, only the charges of the CCD’s even lines are read out.
– To create odd fields, only the charges of the CCD’s odd lines are read out.
Frame Rate Charging (1/25 sec)
172
– Each pixel in the CCD array accumulates charges for one full frame (e.g., 1/25 second for PAL video) before
transferring them to the vertical register.
– This method provides high vertical resolution.
– It also has the disadvantage of introducing picture blur, because the images are captured across a
longer 1/25 second period.
173
Frame Integration Mode
Field Integration Mode
Sensitivity reduced by one-half, less vertical resolution, less picture blur
– For an even field, B and C, D and E, and F and G are added together
– For an odd field, A and B, C and D, and E and F are added together.
174
Field Rate Charging (1/50 sec)
– Field Integration method reduces the blur by shortening the charge accumulation period to the field rate
(e.g., 1/50 second for PAL video).
– Shortening the accumulation period and alternating the lines to read out – to create even and odd fields
would reduce the accumulated charges to one half of the Frame Integration method.
– This would result in reducing the sensitivity by one-half.
– After the charges are transferred to the vertical register, the charges from two adjacent photo-sites are
added together to represent one pixel of the interlaced scanning line.
– Both even and odd fields are created by alternating the photo-site pairs used to create a scanning line.
– This method provides less vertical resolution compared to the Frame Integration mode (two adjacent
pixels are averaged in the vertical direction).
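The line pairing described above can be sketched as a small illustrative model (not vendor code; lines A–G carry arbitrary charge values):

```python
# Field-integration readout pairs adjacent photo-site charges to build
# interlaced fields: even field = B+C, D+E, F+G; odd field = A+B, C+D, E+F.

def field_integration(lines, parity):
    """Sum adjacent line pairs; parity selects the even/odd field phase."""
    start = 1 if parity == "even" else 0
    return [lines[i] + lines[i + 1] for i in range(start, len(lines) - 1, 2)]

# Hypothetical charges for photo-site lines A..G
lines = [10, 20, 30, 40, 50, 60, 70]

even_field = field_integration(lines, "even")  # B+C, D+E, F+G
odd_field = field_integration(lines, "odd")    # A+B, C+D, E+F

print(even_field)  # [50, 90, 130]
print(odd_field)   # [30, 70, 110]
```

Note how summing two photo sites per output pixel doubles the signal charge, which is why the shorter field-rate accumulation still yields usable sensitivity.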
175
Field Integration Mode
Field Integration has become the default method for all interlace video cameras, to
capture pictures without image blur
Frame Rate Charging (1/25 sec)
176
Field and Frame Integration Mode
Frame Integration Mode
Field Integration Mode
1/50
1/25
Field Rate Charging (1/50 sec) Field Rate Charging (1/50 sec)
177
Field and Frame Integration Mode
Field Rate Charging (1/50 sec)
Frame Rate Charging (1/25 sec)
EVS (Enhanced Vertical Definition System)
– Standard definition cameras have a limited vertical resolution due to
• the smaller vertical line count of the SD signal
• use of the interlaced scanning system
EVS is:
A solution to overcome this vertical resolution limitation.
– These features are not used in high definition (HD) cameras due to their higher vertical
resolutions.
178
SD signal
179
EVS (Enhanced Vertical Definition System) SD signal
Frame Integration
• High sensitivity, high resolution
• Motion blur
• No discarded electrons
EVS (Enhanced Vertical Definition System)
• High resolution, half-sensitivity
• No motion blur
• Shutter speed is set to 1/60s for NTSC or 1/50s for PAL
• Electrons are discarded to the overflow drain of the CCD
– Frame Integration provides higher vertical resolution than Field Integration. However, it also introduces
motion blur due to its longer 1/30-second charge accumulation period.
– EVS has been developed as a solution to overcome this vertical resolution limitation. Its mechanism is based
on using the CCD’s Frame Integration mode, but without introducing the motion blur inherent in this mode.
– EVS eliminates this motion blur by operating the CCD in Frame Integration mode but with a 1/60 second
charge accumulation period.
– Just like Frame Integration, EVS uses the CCD’s even lines to create even fields and odd lines to create
odd fields, providing the same high vertical resolution.
– However, instead of accumulating the charges across a 1/30 second period, EVS discards the charges
accumulated in the first 1/60 second (1/30 = 1/60 + 1/60) and keeps only those charges accumulated in
the second 1/60 second.
– The result is images with improved vertical resolution and reduced motion blur.
– However, it must be noted that discarding the first 1/60 seconds of the accumulated charges reduces the
sensitivity to one-half.
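The charge arithmetic above can be sketched with illustrative electron counts (the rate is hypothetical; only the ratios matter):

```python
# Sketch of EVS charge handling vs. frame/field integration, assuming a
# constant photon arrival rate.

RATE = 100  # electrons accumulated per 1/60 s field (hypothetical value)

def accumulated_charge(mode):
    if mode == "frame":  # integrate both 1/60 s halves of the 1/30 s frame
        return RATE + RATE
    if mode == "field":  # integrate a single 1/60 s field
        return RATE
    if mode == "evs":    # frame-integration readout, but the charge from
        return RATE      # the first 1/60 s is discarded
    raise ValueError(mode)

print(accumulated_charge("frame"))  # 200
print(accumulated_charge("evs"))    # 100 -> half the sensitivity of frame mode
```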
180
EVS (Enhanced Vertical Definition System) SD signal
Super EVS
– Super EVS has been created to provide a solution to this drop in sensitivity.
– The charge readout method used in Super EVS sits between Field Integration and EVS.
– Instead of discarding all charges accumulated in the first 1/60 seconds, Super EVS allows this discarded
period to be linearly controlled.
• When the period is set to 0, the results will be the same as when using Field Integration.
• When the period is set to 1/60, the results will be identical to Frame Integration.
• When set between 0 and 1/60, Super EVS will provide a combination of the improved vertical resolution of
EVS with less visible picture blur.
• Most importantly, the amount of resolution improvement and picture blur will depend on the selected
discarding period.
This can be summarized as follows:
– When set near 0: Less improvement in vertical resolution, less picture blur
– When set near 1/60: More improvement in vertical resolution, more picture blur
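One way to model the behavior described above (an assumption for illustration, not vendor documentation) is to treat the selectable period p, from 0 to 1/60 s, as blending the effective exposure between Field Integration (p = 0) and Frame Integration (p = 1/60):

```python
from fractions import Fraction

FIELD = Fraction(1, 60)  # NTSC field period in seconds

def effective_exposure(p):
    """Effective accumulation time for a Super EVS control period p."""
    assert 0 <= p <= FIELD
    return FIELD + p  # p = 0 -> one field; p = 1/60 -> a full frame

print(effective_exposure(Fraction(0)))  # 1/60 (field integration)
print(effective_exposure(FIELD))        # 1/30 (frame integration)
```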
181
SD signal
182
SD signal
HAD (Hole Accumulated Diode) Sensor
– The HAD structure, developed in 1987, brought a stunning innovation to CCD picture performance.
– Earlier CCDs faced performance limitations due to
• fixed pattern noise
• lag factors
– The HAD structure successfully reduced these to a negligible level by adding a Hole Accumulated Layer
(HAL) above the photo-sensor.
– In addition, the use of an N-substrate to drain excessive electrons accumulated in the photo-sensors
effectively reduced the blooming effect while facilitating features such as the electronic shutter.
183
1. Reduction of dark current noise and fixed pattern noise
2. Reduction of Lag
3. Flexible Shutter Mechanism
4. Reduction of smear
184
These Improvements Are Summarized as Follows in Order of Importance:
– The SiO2–Si boundary of a CCD is, by nature, electrically unstable and tends to generate free electrons,
which cause an electric current to flow into the photo-sensors where signal charges are accumulated.
– This unwanted current is observed as signal noise – commonly known as fixed pattern noise and dark
current noise.
1- Reduction of dark current noise and fixed pattern noise
185
Non HAD Structure
(cross sectional view)
1- Reduction of dark current noise and fixed pattern noise
– To solve this problem, the photo-sensor of the HAD (Hole Accumulated Diode) structure uses an N-type semiconductor
layer.
– The HAD structure effectively reduces such noise by adopting an extra P+ layer, called the HAL, which is situated
between the photo-sensor (N+ layer) and the SiO2 layer.
– The HAL is doped so that its carriers are positively charged ‘holes’. Having these holes at the SiO2–Si boundary prevents
any electrons from generating an electric current there.
HAD Structure
(cross sectional view)
186
– Conventional photodiodes enable signal charge electrons to escape and free electrons to enter.
– Sony's HAD CCD uses a Hole Accumulated Layer (HAL) to cover the photodiode and block these
problems.
1- Reduction of dark current noise and fixed pattern noise
Conventional Photodiode Buried-type Photodiode
187
Early CCDs exhibited a considerable amount of lag, which would appear as an after-image from previous
picture frames.
– The amount of lag in a CCD is determined by the efficiency of transferring the electrons accumulated in the photo-
sensors (N+ area) to the vertical shift register.
– The potential at the bottom of the photo-sensor – where signal charges were accumulated – tended to shift when
voltage was applied.
– This caused a considerable number of electrons to remain in the photo-sensors, even after charge readout.
SHIFT IN VOLTAGE
188
2-Reduction of Lag
Non HAD Structure
With the HAD sensor however, the HAL clamps the bottom of the photo-sensor so that the same potential is
always maintained.
This ensures all accumulated electrons fall completely into the vertical register during readout, thus
eliminating any lag.
HAD Structure
189
2-Reduction of Lag
3-Flexible Shutter Mechanism
– The HAD structure enabled a true variable-speed electronic shutter to be produced.
– This was achieved by adding an N-substrate that discards electrons which should not be accumulated, thus
shortening the exposure time.
– The HAL layer, N+ layer, P-well, and N-substrate establish an electric potential, as shown in the figure, to
accumulate the signal charges (electrons) during the exposure period.
– However, when the exposure (charge accumulation) period needs to be shortened, the electrons in the
photo-sensor can be discarded to the N-substrate simply by applying a certain voltage (ΔV).
– Since this method does not require complex mechanics to optically control exposure, virtually any shutter
speed can be achieved.
190
3-Flexible Shutter Mechanism
191
Shutter Using N-Substrate
4-Reduction of smear
– Smear in recent CCDs is known as a phenomenon caused by incident light, which should only enter the
photo-sensor, leaking into the vertical shift register through an irregular path.
– This type of smear is observed as a dim white vertical streak.
Non HAD Structure
(cross sectional view)
192
– CCDs developed before the HAD sensor also exhibited a drastic amount of reddish smear. This was due to electrons
generated deep within the CCD by the photoelectric conversion, drifting directly into the vertical shift register.
– These electrons correspond to light with longer wavelengths (mostly red in color), which, by nature, penetrate deeper
into the silicon substrate. With early CCDs, this reddish smear could not be prevented.
4-Reduction of smear
193
Non HAD Structure
(cross sectional view)
– The HAD structure, combined with other CCD refinements, virtually eliminated reddish smear by offering double
protection.
– This was achieved by positioning a second P-well below the vertical register and, of equal importance, by using an N-
substrate for the base of the entire HAD structure CCD.
– The 2nd P-well is a P-type semiconductor that prevents electrons generated deep within the CCD from entering the
vertical register. It pairs its holes with unwanted emerging electrons that would otherwise result in reddish smear.
4-Reduction of smear
HAD Structure
(cross sectional view)
194
– The N-substrate also prevents any unwanted electrons from emerging in deep areas within the CCD by acting as a drain
to discard them.
– With the non-HAD sensor CCD, unwanted electrons diffuse within the P-substrate and can easily enter the vertical register.
– However, the HAD sensor drains all such electrons to the bottom of the CCD, thereby preventing them from entering the
vertical shift register.
4-Reduction of smear
Non HAD Structure
(cross sectional view)
HAD Structure
(cross sectional view)
195
− On-Chip Lens (OCL) technology drastically enhances the light-collecting capability of a CCD imager by
using micro-lenses aligned above each photo-sensor.
− These micro-lenses converge and direct more of the incident light onto each photo-sensor.
On-Chip Lens (OCL) Technology
196
On-Chip Lens (OCL) Technology
– The combination of HAD Sensor technology and this On-Chip Lens layer greatly enhances the imager’s sensitivity,
thereby allowing shooting even in extremely low-light conditions.
– The On-Chip Lens layer also plays a significant role in reducing vertical smear since converging the incoming light directly
results in less light leaking into the CCD’s vertical register.
197
198
Hyper HAD, Power HAD and Power HAD EX (EAGLE ) CCD Sensors
Reduced gaps between the micro-lenses extend sensitivity further still.
199
Hyper HAD & Power HAD Sensor
– Internal lenses boost sensitivity further.
– A thinner insulation film inside the image sensor cuts down on stray light reflections. This
reduces vertical smear to a bare minimum.
In the Power HAD EX CCD, an internal lens improves sensitivity while a thinner insulation film minimizes smear. 200
Power HAD EX (EAGLE) CCDs
CCD and CMOS Image Sensors
CCD and CMOS sensors perform the same steps, but at
different locations, and in a different sequence.
201
Both CCD and CMOS sensors perform all of these steps. However, they differ as to where and in what sequence these steps
occur.
I. Light-to-charge conversion: In the photo-sensitive area of each pixel, light directed from the camera lens is converted
into electrons that collect in a semiconductor "bucket."
II. Charge accumulation: As more light enters, more electrons come into the bucket.
III. Transfer: The signal must move out of the photosensitive area of the chip, and eventually off the chip itself.
IV. Charge-to-voltage conversion: The accumulated charge must be output as a voltage signal.
V. Amplification: The result of charge-to-voltage conversion is a very weak voltage that must be made strong before it can
be handed off to the camera circuitry.
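The five steps above can be sketched as a toy pipeline (all constants here — quantum efficiency, conversion gain, amplifier gain — are arbitrary illustrative values, not real sensor figures):

```python
def light_to_charge(photons, qe=0.5):
    return photons * qe                # I. photo-conversion (electrons)

def accumulate(charges):
    return sum(charges)                # II. charge accumulation

def transfer(charge, efficiency=0.999):
    return charge * efficiency         # III. transfer off the array

def charge_to_voltage(charge, conv_gain_uV=5.0):
    return charge * conv_gain_uV       # IV. charge-to-voltage (µV)

def amplify(voltage_uV, gain=20.0):
    return voltage_uV * gain           # V. amplification

# Three light samples falling on one pixel over the exposure
e = transfer(accumulate([light_to_charge(p) for p in (100, 120, 80)]))
print(round(amplify(charge_to_voltage(e)), 1))  # 14985.0 (µV)
```

Where these steps happen — on the chip edge for a CCD, inside each pixel for CMOS — is exactly the distinction drawn in the following slides.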
CCD and CMOS Image Sensors
202
– CMOS sensors have an
amplifier at each pixel.
– The charge is first converted
to a voltage and amplified
right at the pixel.
203
CMOS Image Sensors
– Voltage Generated on Surface of Photo Sensor – Like the Rising Water Level of a Bucket. Since the charge
has a negative electrical value:
• The downward direction indicates a high positive voltage.
• The upward direction indicates a high negative potential.
204
CMOS Image Sensors
– Signal Voltage Generated by Amplifier (Like a Floodgate that Controls the Water Level of a Canal).
– The downward direction indicates a high positive voltage.
– The upward direction indicates a high negative potential, since the charge has a negative electrical
value.
205
CMOS Image Sensors
1. In modern CMOS sensors, each pixel (a) consists of a photo
sensor (b), an amplifier (y), and a pixel-select switch (e).
2. The CMOS pixel's photo sensor (b) converts light into
electrons.
3. Since the charge accumulated in the photo sensor is too
small to transfer through micro wires (f, i), the charge is first
converted to a voltage and amplified right at the pixel by
amplifier (y).
4. Any individual CMOS micro-wire can carry voltage from only
one pixel at a time, as controlled by the pixel-select switch
(e).
This is different from the operation of a CCD image sensor, in
which the charges of all pixels are transferred
simultaneously into their respective vertical shift registers,
and all these charges simultaneously move down the
vertical shift registers.
206
CMOS Image Sensors
5. In addition to the pixel-select switch, the column-select
switch (g) and the column circuit (h) are also used to
control the output of amplified voltages.
• First, all the pixel-select switches on a given row (j) are
turned ON. This action outputs the amplified voltages of
each pixel to their respective column circuit, where they
are processed into signal voltages and temporarily
stored.
• Then, the column-select switches (g) are turned ON from left to
right. In this way, the signal voltages of each pixel in the same
row are output in order.
• By repeating this operation for all rows from the top to the
bottom in order, signal voltages of all pixels can be
output from the top-left corner to the bottom-right corner
of the image sensor.
6. These signal voltages are output to the signal processor
of the camera.
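The row/column sequence described above can be sketched as nested loops — pixel-select fires for a whole row, then column-select outputs the stored voltages left to right:

```python
# Readout order of a CMOS sensor array: rows scanned top to bottom,
# columns output left to right within each row.

def readout_order(rows, cols):
    order = []
    for r in range(rows):        # pixel-select ON for the whole row:
        for c in range(cols):    # column circuits hold the voltages,
            order.append((r, c))  # then column-select outputs them in turn
    return order

print(readout_order(2, 3))
# [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
```

This sequential, switch-driven readout is what distinguishes CMOS from the simultaneous charge transfer of a CCD.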
207
CMOS Image Sensors
Analog Noise
– Where charge is transmitted in the form of an analog signal, the signal will pick up a certain degree of external noise
during its travel.
– Noise will increase in proportion to the travel distance.
Fixed Pattern Noise
– CMOS sensors have an amplifier at each pixel.
– A CMOS sensor in a high-definition device, therefore, contains well over a million amplifiers. It would be unreasonable to
expect that all of these amplifiers will be exactly equivalent, as a certain degree of disparity is inevitably introduced
during the production process.
– This non-uniformity among amplifiers results in a type of interference known as fixed pattern noise.
– Unlike conventional video noise, which has a random behavior, fixed pattern noise creates a permanent, unwanted
texture that can be especially visible in dark scenes.
– Fortunately, this problem can be corrected by incorporating CDS (correlated double sampling) circuits to cancel this
noise and restore the original signal.
208
Two Typical Noise Sources
Conventional CMOS Sensor
Correlated Double Sampling
209
– Active-pixel CMOS sensors use a "reset switch“ in each pixel
to drain the accumulated charge of the previous video field,
in preparation for the next video field.
– Unfortunately, the draining process is not perfect. Some
electrons will always remain in the image sensing area.
– These electrons represent switching noise, which can
become part of the video signal.
– Even worse, this noise is of the ‘fixed pattern’ type. Unlike
conventional video noise, which has a random behavior,
fixed pattern noise creates a permanent, unwanted texture
that can be especially visible in dark scenes.
– Modern CMOS sensors combat fixed pattern noise with
Correlated Double Sampling.
– CMOS image sensors conduct charge-to-voltage
conversion twice for every pixel. Both of these voltages are
also amplified.
Correlated Double Sampling
210
Correlated Double Sampling can effectively suppress noise
by literally subtracting the amplified voltage containing
only noise from the amplified voltage containing both
noise and the desired signal.
1. The reset switch drains the floating diffusion of the old
accumulated charge that was used for the previous
video field.
2. The amplifier converts the charge left in the floating
diffusion, which represents only noise, into a voltage.
3. The accumulated charge in the photo sensor (during the
active field exposure) transfers into the floating diffusion
area.
4. The amplifier converts the second charge in the floating
diffusion, which represents signal mixed with noise, into a
second voltage.
5. The column circuit subtracts the noise-only voltage from
the signal-mixed-with-noise voltage to produce an
output voltage. As a result, fixed pattern noise can be
effectively suppressed.
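The subtraction in steps 1–5 can be demonstrated with a toy numeric model: each pixel gets a fixed amplifier offset (the fixed pattern noise), and sampling the reset level first lets us cancel it (all numbers are arbitrary):

```python
import random

# Toy model of correlated double sampling on a flat test scene.

random.seed(1)
offsets = [random.uniform(-5.0, 5.0) for _ in range(8)]  # per-pixel FPN
scene = [100.0] * 8                                      # uniform light level

noise_only = offsets[:]                                   # step 2: reset level
sig_and_noise = [s + o for s, o in zip(scene, offsets)]   # step 4: signal+noise
output = [round(sn - n, 9)                                # step 5: subtract
          for sn, n in zip(sig_and_noise, noise_only)]

print(output)  # [100.0, 100.0, ...] -> the fixed pattern is suppressed
```

Without the subtraction, `sig_and_noise` would show a permanent per-pixel texture even though the scene is perfectly flat.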
Correlated Double Sampling
211
Correlated Double Sampling
Cross section of one pixel in a HAD type CMOS image sensor
212
– The conventional method of CMOS analog to
digital conversion maintains the signal in
analog form in horizontal micro-wires that run
across the bottom of the sensor.
– Unfortunately, these analog signals are
exposed to high frequency switching noise
that can degrade picture quality.
– Digital signals have always been far more
noise-resistant than analog.
– By placing ADCs so close to each photo site,
these sensors significantly reduce the signal's
exposure to noise.
– This can be done by placing one ADC for
each column.
Column Analog-to-Digital Converter
213
An array of analog-to-digital converters (ADCs), one for each column, can reduce noise in CMOS sensors.
214
Exmor™ Noise Reduction Technology
Analog CDS
Exmor™ Noise Reduction Technology
– An A/D converter is installed next to each pixel
line (column-parallel A/D conversion), so that the
analog signals are almost immediately digitized.
– The design also employs sophisticated digital
CDS (Correlated Double Sampling) noise
cancellation, which works by measuring the
noise prior to conversion and then canceling the
noise after the conversion. This new system,
which operates both before and after
conversion, is much more precise than
conventional analog-only CDS implementations.
– As a result, camcorders with Exmor technology
offer lower noise than those that use
conventional HD CMOS sensors.
– This is especially significant under low-light
conditions, where Exmor-equipped cameras
perform very well.
215
Digital CDS Used in Exmor CMOS Sensor
By measuring the noise prior to conversion and
then canceling the noise after the conversion
216
− The pixel outputs the amplified noise voltage.
− The column ADC converts the noise voltage to digital.
− The pixel outputs the amplified signal-with-noise
voltage.
− The column ADC converts the signal-with-noise
voltage to digital.
− The column ADC subtracts the digital noise value from
the digital signal-with-noise value to create the
digital output value.
Creating Multiple Outputs and Processing Speed Issue
CCD Image Sensor with 2-channel Horizontal CCDs
CMOS Image Sensor with 3-channel Outputs
217
– Creating multiple outputs in a CCD requires an increase in complexity and cost.
– Multiple outputs on a CMOS sensor requires only small, easy-to-manufacture micro wires.
Creating Multiple Outputs and Processing Speed Issue
– In CCD when the exposure is complete, the signal
from each pixel is serially transferred to a single A/D.
– The CCD’s ultimate frame rate is limited by the rate
that individual pixels can be transferred and then
digitized.
– The more pixels to transfer in a sensor, the slower
the total frame rate of the camera.
– A CMOS chip eliminates this bottleneck by using an A/D for
every column of pixels, which can number in the thousands.
– The total number of pixels digitized by any one converter is
significantly reduced, enabling shorter readout times and
consequently faster frame rates.
– One row of the sensor array is converted at a time.
– This results in a small time delay between each row’s readout
(because of the column-switch operation).
218
Rolling and Global Shutter
219
Global Shutter
– The “global shutter” is the technical term referring to sensors that scan the entire area of the image simultaneously (the
entire frame is captured at the same instant).
– The vast majority of CCD sensors employ global shutter scanning.
– The pixels in a CCD store their charge until it has been depleted.
– The CCD captures the entire image at the same time and then reads the information after the capture is completed, rather
than reading top to bottom during the exposure.
– Because it captures everything at once, the shutter is considered “global”.
– The result is an image with no motion artifacts.
– In Global Shutter mode, every pixel is exposed simultaneously at the same instant in time. This is particularly beneficial when
the image is changing from frame to frame.
220
Rolling Shutter
221
Rolling Shutter
– If the sensor employs a rolling shutter, the image is scanned sequentially by the sensor, from one side (usually
the top) to the other, line by line.
– Many CMOS sensors use rolling shutters (CMOS is also referred to as an APS (Active Pixel Sensor)).
– A rolling shutter is always active and “rolling” through pixels from top to bottom.
– The "rolling shutter" can be either mechanical or electronic.
– The advantage of this method is that the image sensor can continue to gather photons during the acquisition process,
thus effectively increasing sensitivity.
– This produces predictable distortions of fast-moving objects or rapid flashes of light (“Jello”).
– The rolling shutter offers the advantage of fast frame rates.
– Rolling shutter is not inherent to CMOS sensors. The Sony PMW-F55 and Blackmagic Design Production Camera 4K use
CMOS sensors with global shutter circuitry.
– A CMOS sensor with a global shutter requires additional transistors per pixel, at the cost of image quality, as the
light-sensitive area on the surface is further reduced.
222
Rolling Shutter Exposure: Row by Row Exposure Start/End Offset
Rolling Shutter
Diagram demonstrates the time delay between each row of pixels in a rolling shutter readout mode with a CMOS camera.
Readout Time
Readout Time
Readout Time
Readout Time
Readout Time
Readout Time
Frame 1
223
Beginning of exposure of 1st row End of exposure of 1st row
End of exposure of 2nd row
Beginning of exposure of 2nd row
• The delay between the exposures of two consecutive lines is always the same.
• The readout interval time is smaller than the exposure interval time.
The delay between the exposures of two consecutive lines
t
– To further maximize frame rates in CMOS, rather than waiting for an entire frame to complete readout, each individual
row is typically able to begin the next frame’s exposure once it completes the previous frame’s readout.
– While fast, the time delay between each row’s readout then translates to a delay between each row’s beginning of
exposure, making them no longer simultaneous.
– The result is that each row in a frame will expose for the same amount of time but begin exposing at a different point in
time, allowing overlapping exposures for two frames.
– The ultimate frame rate is determined by how quickly the rolling readout process can be completed.
– The exposure in the sensor occurs from the first line to the last line and then returns to the first line (for the next frame),
and so on.
– The delay between the exposures of two consecutive lines is always the same, whether the two exposures belong to
the same frame or not.
Rolling Shutter
224
10 ms 10 ms 10 ms 10 ms
8.71 µs
Rolling Shutter
Diagram demonstrates the overlap of multiple exposures in a sequence with an sCMOS (Scientific CMOS) sensor running 100 fps.
For each line, the readout interval time is
smaller than exposure interval time.
Each row is able to begin the next frame’s exposure, once completing the previous frame’s readout.
225
Row 1
Row 1080
Beginning of 1st
exposure for 1st row
Beginning of 1st
exposure for 2nd row
Beginning of 2nd
exposure for 1st row
Beginning of 2nd
exposure for 2nd row
Frame 2 Frame 3Frame 1
Rolling Shutter
226
Frame readout time (≈10 ms) = Line readout time (8.7 μs) × Number of lines (1080)
Frame rate = 1/(Frame readout time) = 1/10 ms = 100 fps
10 ms 10 ms 10 ms 10 ms
8.71 µs
Row 1
Row 1080
Beginning of 1st
exposure for 1st row
Beginning of 1st
exposure for 2nd row
Beginning of 2nd
exposure for 1st row
Beginning of 2nd
exposure for 2nd row
Frame 2 Frame 3Frame 1
– For a CMOS sensor in rolling shutter mode, the frame
rate is now determined by
• the speed of the A/D (clocking frequency)
• the number of rows on the sensor.
– Example: for an A/D speed of 283 MHz with 1080 rows of
pixels, a single line’s readout time, and consequently the
delay between two adjacent rows, is approximately
8.7 μs.
– This means that, for each line, the readout interval is
smaller than the exposure interval.
– With 1080 rows, the exposure start and readout time
delay from the top of the frame to the bottom is
approximately 10ms.
– This also corresponds to the maximum frame rate of 100
Frames per Second (fps) and minimum temporal
resolution of 10ms (at full frame).
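The timing arithmetic above can be reproduced directly; note that the slide rounds the results to 10 ms and 100 fps, while the quoted 8.71 µs line time actually yields slightly different numbers:

```python
# Rolling-shutter frame timing from the figures quoted above.

line_readout = 8.71e-6   # seconds per row, set by the A/D clock
rows = 1080

frame_readout = line_readout * rows  # exposure-start delay, top to bottom
frame_rate = 1.0 / frame_readout

print(round(frame_readout * 1e3, 2))  # 9.41 -> ~10 ms readout per frame
print(round(frame_rate))              # 106  -> ~100 fps maximum
```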
Electronic Shutter Functioning with Rolling Shutter
− A red square represents the reset of a pixel.
− Green squares represent reading times.
(a) For a 360° shutter value
(b) For a 180° shutter value
227
EX: 1920x1080p60 HDTV imager
− Swing of 1 Volt
− In a CCD with 1-2 analog outputs, pixels are processed on a 7-14 ns scale.
− In a CMOS imager, each pixel is processed on its column on a 16 µs scale.
That is why:
• High frame rates are much easier with a CMOS imager
• The read noise can be intrinsically lower (16 µs to average the noise)
• Sharpness is much better
• HDR can be done too, but that is in the pixel
− To minimize poor lighting issues, we need an imager with a high dynamic
range, and we also need signal processing which uses the additional
dynamic range in the best way possible.
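The rough arithmetic behind those time scales, for a 1920x1080p60 imager (blanking intervals are ignored here, which is why the slide's 7-14 ns and 16 µs figures come out slightly larger):

```python
# Per-pixel processing time: CCD serial output vs. column-parallel CMOS.

pixels = 1920 * 1080
fps = 60

# CCD: every pixel funnels through 1-2 output amplifiers in sequence.
pixel_time_ns = 1e9 / (pixels * fps)   # single output channel

# Column-parallel CMOS: each column ADC handles one pixel per line time.
line_time_us = 1e6 / (1080 * fps)

print(round(pixel_time_ns, 1))  # 8.0 (-> 16 ns with a 2-tap readout)
print(round(line_time_us, 1))   # 15.4
```

The roughly three-orders-of-magnitude longer per-pixel time on the CMOS column is what makes the noise averaging and high frame rates mentioned above possible.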
HFR and HDR in CMOS Sensor
HDR-Transfer curve at 3200K
228
Example: 4K Xensium HAWK CMOS Sensor (Grass Valley)
– The 3840x2160p 4K Xensium HAWK CMOS imager offers a unique pixel technology called DPM (dynamic pixel
management) functionality.
– With DPM, the camera provides native 1920x1080 HD acquisition (by combining two horizontal and two vertical adjacent
pixels) without the intrinsic downsides of 4K acquisition, such as rolling-shutter and decreased sensitivity, while delivering
native 4K crispness when needed.
229
Example: 4K Xensium HAWK CMOS Sensor (Grass Valley)
230
Geometric Distortion in CMOS Sensor
– Experienced video shooters often test this phenomenon by rapidly panning the camera back and forth
past the legs of a table.
– A distorted image will show "wobbly legs."
– CMOS sensors accumulate charges and read them out one line at a time. This can create geometric
distortion when there is relative motion between the camera and the subject (rolling shutter).
– Image distortion in a CMOS camera can make a moving car appear to lean backwards.
231
Noise: Now both technologies are capable of clean, low-noise imagery.
Vertical Smear: Today, both CCD and CMOS technologies routinely produce smear-free images.
Power Supply: CMOS sensors have lower power consumption.
• CCD 7 to 10 V
• CMOS 3.3 to 5.3 V
Processing Speed: Higher pixel counts and faster frame rates both place stringent new requirements on
image sensor processing speed. Here, CMOS image sensors can offer advantages.
Systemization: Typical integrated circuits use the same Metal Oxide Semiconductor (MOS) substrate as CMOS
image sensors. This makes it relatively easy to add functions to the CMOS chip, such as column analog-to-
digital converters.
Geometric Distortion: CMOS sensors accumulate charges and read them out one line at a time. This can
create geometric distortion when there is relative motion between the camera and the subject (rolling
shutter).
Image Sensors Comparison
232
Advantages of CCD:
1. High image quality
2. Low spatial noise (FPN)
3. Typically low dark current noise
4. High fill factor (ratio of the photosensitive area to
the whole pixel area), generally achieved with larger pixels
5. Perfect global shutter
– Increased sensitivity
– Good signal quality at low light
6. Modern CCDs with multi-tap technologies
– n-times readout speed compared to single-tap
sensors
Advantages of CMOS:
1. High frame rates, even at high resolution
2. Faster and more flexible readout (e.g.
several AOIs: Area of Interests)
3. High dynamic range or HDR mode
(Acquisition of contrast-rich and
extremely bright objects)
4. No blooming or smear, in contrast to CCDs
5. Integrated control circuit on the sensor
chip
6. More cost-effective and less power
consumption than comparable CCD
sensors
Image Sensors Comparison
233
Conclusion
At the current state of development, CMOS and CCD sensors both deserve a place in broadcast and
professional video cameras.
– CMOS is particularly outstanding where issues of power consumption, systemization and processing speed
are most important.
– CCDs excel where the images will be subjected to the most critical evaluation.
– Recent CMOS sensors deliver:
• Improved global shutter
• Low dark and spatial noise
• Good image quality in low light condition
• Higher quantum efficiency
Together with their existing advantages in speed and cost, this makes CMOS sensors suitable for many
vision applications.
234
ClearVid CMOS Sensor
With any given sensor technology, there is a tradeoff between pixel size and pixel quality.
• Larger pixels mean better sensitivity, dynamic range and signal-to-noise ratio.
• Smaller pixels mean higher resolution.
Sony's ClearVid CMOS Sensor is a way to overcome this tradeoff.
– Typical image sensors provide a one-to-one relationship of image sensor photosites to camera pixels.
– In this way, a 1920x1080 camera image usually requires a 1920x1080 sensor – slightly over 2 million
photosites.
– In contrast, a ClearVid CMOS Sensor can achieve almost the same resolution using only half the number of
pixels. Using half the pixels means that the photosites can be twice as large, for improved sensitivity,
dynamic range and signal-to-noise ratio.
235
In the ClearVid CMOS Sensor, the pixels are turned 45 degrees to form a
diamond sampling pattern, instead of the usual vertical and horizontal
grid.
• Half the pixel information is supplied directly from the CMOS
photosites.
• The other half of the pixel information is interpolated with very
high quality, based on information drawn from four adjacent
photosites.
– This interpolation occurs outside the image sensor, in Sony's
Enhanced Imaging Processor large-scale integrated circuit.
– The result is very high spatial resolution combined with very high
performance in sensitivity, dynamic range and signal-to-noise.
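The actual Enhanced Imaging Processor interpolation is proprietary; as a simple hedged sketch, the missing grid position can be estimated by averaging the four diagonal photosites that surround it in the diamond pattern:

```python
# Estimate a missing pixel from its four diagonal neighbors on a
# ClearVid-style diamond sampling grid (toy values; zeros mark positions
# that have no physical photosite).

def interpolate(grid, r, c):
    """Average the four diagonal neighbors of grid position (r, c)."""
    neighbors = [grid[r - 1][c - 1], grid[r - 1][c + 1],
                 grid[r + 1][c - 1], grid[r + 1][c + 1]]
    return sum(neighbors) / 4

grid = [[10, 0, 20],
        [ 0, 0,  0],
        [30, 0, 40]]

print(interpolate(grid, 1, 1))  # 25.0
```

The real processor applies far more sophisticated, edge-aware weighting, but the principle — half the pixels measured, half reconstructed from four diagonal neighbors — is the same.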
236
ClearVid CMOS Sensor
Pixel Arrangement of the 3 ClearVid CMOS Sensor
237
Comparison of Pixel Line Widths Comparison of Pixel Areas
ClearVid CMOS Sensor
238
Compared Against Conventional Sensor Array (1) Compared Against Conventional Sensor Array (2)
ClearVid CMOS Sensor
239
Interpolation Processing on the 3 ClearVid CMOS Sensor (1)
Interpolation Processing on the 3 ClearVid CMOS Sensor (2)
ClearVid CMOS Sensor
240
ClearVid CMOS Sensor
Questions??
Discussion!!
Suggestions!!
Criticism!!
241
Broadcast Camera Technology, Part 2
  • 2. Cool & Warm Colors Lights 2
  • 3. Cool & Warm Colors Fluorescent Lights – Regular incandescent lights peak in the orange and red wavelengths and tend to be weak in blue. – That’s why red colors in your picture look so good and blue colors look so dead under normal incandescent light. – Standard “warm white” and “cool white” fluorescent lights overemphasize yellow-green. They’re made to give the most light in the range of wavelengths that the human eye is most sensitive to. 3 Incandescent
  • 4. – On the Kelvin scale, zero kelvin (0 K) is defined as “absolute zero” temperature. – This is the temperature at which molecular energy or molecular motion no longer exists. – Since heat is a result of molecular motion, temperatures lower than 0 K do not exist. Kelvin is calculated as: K = °C + 273.15 Color Temperature 4
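The Celsius-to-kelvin relation on the slide is a one-line conversion; a minimal sketch (the function name and the guard against sub-0 K inputs are my own):

```python
def celsius_to_kelvin(celsius):
    """K = degrees Celsius + 273.15; temperatures below 0 K cannot occur."""
    kelvin = celsius + 273.15
    if kelvin < 0:
        raise ValueError("temperatures below 0 K do not exist")
    return kelvin
```

For example, a 3200 K studio lamp corresponds to about 2926.85 °C.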
  • 5. Color Temperature – The spectral distribution of light emitted from a piece of carbon (a black body that absorbs all radiation without transmission and reflection) is determined only by its temperature. – When heated above a certain temperature, carbon will start glowing and emit a color spectrum particular to that temperature. – This discovery led researchers to use the temperature of heated carbon as a reference to describe different spectrums of light. – This is called color temperature. 5
  • 8. The different light source types emit different colors of light (known as color spectrums) and video cameras capture this difference. Color Temperature 8 [Spectral plots: Daylight, Incandescent, Fluorescent, Halogen, Cool White LED, Warm White LED — intensity vs. wavelength (nm)]
  • 9. Natural light Artificial light Color Temperature 9
  • 10. – Our eyes are adaptive to changes in light source colors – i.e., the color of a particular object will always look the same under all light sources: sunlight, halogen lamps, candlelight, etc. – However, with color video cameras this is not the case, bringing us to the definition of “color temperature.” – When shooting images with a color video camera, it is important for the camera to be color balanced according to the type of light source (or the illuminant) used. This is because different light source types emit different colors of light (known as color spectrums) and video cameras capture this difference. Color Temperature 10
  • 11. The camera color temperature is lower than the environment color temperature. The camera color temperature is higher than the environment color temperature. Color Temperature 11
  • 12. – In video technology, color temperature is used to describe the spectral distribution of light emitted from a light source. – The cameras do not automatically adapt to the different spectrums of light emitted from different light source types. – In such cases, color temperature is used as a reference to adjust the camera’s color balance to match the light source used. For example, if a 3200K (Kelvin) light source is used, the camera must also be color balanced at 3200K. Color Temperature 12
  • 13. Color Temperature Conversion – All color cameras are designed to operate at a certain color temperature. – For example, Sony professional video cameras are designed to be color balanced at 3200K, meaning that the camera will reproduce colors correctly provided that a 3200K illuminant is used. – This is the color temperature for indoor shooting when using common halogen lamps. 13
  • 14. Cameras must also provide the ability to shoot under illuminants with color temperatures other than 3200K. – For this reason, video cameras have a number of selectable color conversion filters placed before the prism system. – These filters optically convert the spectrum distribution of the ambient color temperature (illuminant) to that of 3200K, the camera’s operating temperature. – For example, when shooting under an illuminant of 5600K, a 5600K color conversion filter is used to convert the incoming light’s spectrum distribution to that of approximately 3200K. Color Temperature Conversion 14
  • 16. – When only one optical filter wheel is available within the camera, this allows all filters to be Neutral Density types, providing flexible exposure control. – The cameras also allow color temperature conversion via electronic means. – The Electronic Color Conversion Filter typically allows the operator to change the color temperature from 2,000K to 20,000K. Color Temperature Conversion 16
  • 18. “Why do we need color conversion filters if we can correct the change of color temperature electrically (white balance)?” – White balance electrically adjusts the amplitudes of the red (R) and blue (B) signals to be equally balanced to the green (G) by use of video amplifiers. – We must keep in mind that using electrical amplification will result in degradation of the signal-to-noise ratio. – Although it may be possible to balance the camera for all color temperatures using the R/G/B amplifier gains, this is not practical from a signal-to-noise-ratio point of view, especially when a large gain-up is required. The color conversion filters reduce the gain adjustments required to achieve correct white balance. Color Temperature Conversion 18
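The R/B-gain idea above can be sketched numerically. This is not any camera's actual firmware, just an illustrative calculation (function names and the sample white-patch values are hypothetical): the gains bring a measured white patch into a 1:1:1 R:G:B ratio, and the weakest channel needs the largest, noisiest boost.

```python
def white_balance_gains(r, g, b):
    """Gains for the R and B amplifiers so a white patch balances to G.

    (r, g, b): average sensor output measured while framing a white card.
    """
    if min(r, g, b) <= 0:
        raise ValueError("white patch must give positive R, G, B levels")
    return g / r, g / b

def apply_gains(pixel, gain_r, gain_b):
    """Apply the white-balance gains to one (R, G, B) sample."""
    r, g, b = pixel
    return gain_r * r, g, gain_b * b

# Hypothetical white card under warm light: strong red, weak blue.
white_patch = (0.9, 0.7, 0.4)
gain_r, gain_b = white_balance_gains(*white_patch)
# The weak blue channel needs the largest gain, which is exactly
# where amplified noise shows up -- hence optical conversion filters.
```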
  • 19. Variable Color Temperature The Variable Color Temp. Function allows the operator to change the color temperature from 2,000K to 20,000K 19
  • 20. Preset Matrix Function – Presets for 3 matrices can be set. – The matrix level can be preset for different lightings. – The settings can be easily controlled from the control panel. 20
  • 21. White Balance & Color Temperature 21
  • 22. The different light source types emit different colors of light (known as color spectrums) and video cameras capture this difference. White Balance & Color Temperature 22 [Spectral plots: Daylight, Incandescent, Fluorescent, Halogen, Cool White LED, Warm White LED — intensity vs. wavelength (nm)]
  • 23. The video cameras are not adaptive to the different spectral distributions of each light source type. – In order to obtain the same color reproduction under different light sources, color temperature variations must be compensated by converting the ambient color temperature to the camera’s operating color temperature (optically or electrically). – Once the incoming light’s color temperature is converted to the camera’s operating color temperature (optically or electrically), this conversion alone does not complete the color balancing of the camera; a more precise color-balancing adjustment must be made. – A second adjustment must be made to precisely match the incoming light’s color temperature to that of the camera, known as “white balance.” White Balance 23
  • 24. White Balance White balance refers to shooting a pure white object, or a grayscale chart, and adjusting the camera’s video amplifiers so the Red, Green, and Blue channels all output the same video level. 24
  • 25. [Visible spectrum scale, 380–780 nm] More Precise Color Balancing White Balance 25
  • 26. White Balance Why does performing this adjustment for the given light source ensure that the color “white” and all other colors are correctly reproduced? – The color “white” is reproduced by combining Red, Green, and Blue with an equal 1:1:1 ratio. – White Balance adjusts the gains of the R/G/B video amplifiers to provide this output ratio for a white object shot under the given light source type. – Once these gains are correctly set for that light source, other colors are also output with the correct Red, Green, and Blue ratios. 26 (SDTV) Y = 0.30R + 0.59G + 0.11B
  • 27. White Balance – For example, when a pure yellow object is shot, the outputs from the Red, Green, and Blue video amplifiers will have a 1:1:0 ratio (yellow is combined by equally adding Red and Green). – In contrast, if the White Balance is not adjusted, and the video amplifiers have incorrect gains for that light source type, the yellow color would be output incorrectly with, for example, a Red, Green, and Blue channel ratio of 1:0.9:0.1. 27 (SDTV) Y = 0.30R + 0.59G + 0.11B
  • 28. Preset White Preset White is a white-balance selection used in shooting scenarios • When the white balance cannot be adjusted OR • When the color temperature of the shooting environment is already known (3200K or 5600K, for instance). – This means that by simply choosing the correct color conversion filter, optical or electronic, the approximate white balance can be achieved. – It must be noted, however, that this method is not as accurate as taking white balance. By selecting Preset White, the R/G/B amplifiers used for white-balance correction are set to their center values. 28
  • 29. AWB (Auto White Balance) Unlike the human eye, cameras are not adaptive to different color temperatures of different light source types or environments. – This means that the camera must be adjusted each time a different light source is used, otherwise the color of an object will not look the same when the light source changes. – This is achieved by adjusting the camera’s white balance to make a ‘white’ object always appear white. – Once the camera is adjusted to reproduce white correctly, all other colors are also reproduced as they should be. 29
  • 30. AWB (Auto White Balance) The AWB is achieved by framing the camera on a white object – typically a piece of white paper/cloth or a grayscale chart – so it occupies more than 70% of the display. Then pressing the AWB button on the camera body instantly adjusts the camera white balance to match the lighting environment. Macbeth Chart 30
  • 31. ATW (Auto Tracing White Balance) – The AWB is used to set the correct color balance for one particular shooting environment or color temperature. – The ATW continuously adjusts the camera color balance in accordance with any change in color temperature. For example, imagine shooting a scene that moves from indoors to outdoors. Since the color temperatures of the indoor lighting and outdoor sunlight are very different, the white balance must be changed in real time in accordance with the ambient color temperature. 31
  • 32. Black Balance To ensure accurate color reproduction throughout all video levels, it is important that the red, green, and blue channels are also in correct balance when there is no incoming light. When there is no incoming light, the camera’s red, green, and blue outputs represent the “signal floors” of the red, green, and blue signals, and unless these signal floors are matched, the color balance of other signal levels will not match either. 32
  • 33. Black Balance It is necessary when: – Using the camera for the first time – Using the camera after a significant period out of use – There is a sudden change in temperature – Without this adjustment, the red, green, and blue color balance cannot be precisely matched even with correct white balance adjustments. 33
  • 34. ND (Neutral Density) Filter − It reduces light of all wavelengths. − It is used when the subject is too bright to be adjusted by the diaphragm alone. 34
  • 35. ND (Neutral Density) Filter The ND filters reduce the amount of incoming light to a level where the lens iris can provide correct exposure for even bright images. – It is important to note that the use of ND filters does not affect the color temperature of the incoming light – they are designed so that light intensity is reduced uniformly across the entire spectrum. – The ND filters can also be used to intentionally control an image’s depth of field to make it more shallow. – This is because ND filters allow a wider iris opening to be selected, and because depth of field decreases as iris aperture (opening) increases. 35
  • 36. ND (Neutral Density) Filter − The strength of an ND filter may be expressed as: • Percent Transmission (T) • Optical Density (OD or D), which describes the amount of energy blocked by the filter • Exposure factor D = −log T, Exposure factor = 1/T 36
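The three strength measures relate through the formulas above; a small sketch under the convention that T is a fraction in (0, 1], adding the equivalent loss in photographic stops, which the slide does not mention:

```python
import math

def nd_density(T):
    """Optical density: D = -log10(T), for transmission T in (0, 1]."""
    return -math.log10(T)

def nd_exposure_factor(T):
    """Exposure factor = 1/T: how many times more exposure is needed."""
    return 1.0 / T

def nd_stops(T):
    """Light loss in photographic stops (each stop halves the light)."""
    return math.log2(1.0 / T)
```

A filter passing 25% of the light has density ≈ 0.6 (the common "ND 0.6" marking), exposure factor 4, i.e. two stops.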
  • 37. Transverse and Longitudinal Waves Direction of travel Transverse Wave Longitudinal Wave 37
  • 38. – In an un-polarized transverse wave, oscillations may take place in any direction at right angles (90°) to the direction in which the wave travels. Wave Travel Direction Oscillation Direction Polarization 38
  • 39. – In an un-polarized transverse wave, oscillations may take place in any direction at right angles (90°) to the direction in which the wave travels. – Polarization is a characteristic of all transverse waves that describes the orientation of oscillations. – By polarization, the vibration direction of the wave is restricted. Oscillation Direction Wave Travel Direction Polarization 39
  • 40. − If the oscillation takes place in only one direction then the wave is said to be linearly polarized (or plane polarized) in that direction. Oscillation Direction Wave Travel Direction Linear Polarization 40
  • 41. − This wave is polarized in the y direction (the E oscillation direction). − The trace of the electric field vector is linear. Linear Polarization 41
  • 42. − Circularly polarized light consists of two perpendicular EM plane waves of equal amplitude with a 90° difference in phase. 42 Circular Polarization
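The two-component description above can be checked numerically: with equal amplitudes and a 90° phase offset, the magnitude of E is constant in time, so its tip traces a circle (parameter values here are arbitrary):

```python
import math

def circular_e_field(t, amplitude=1.0, omega=2 * math.pi):
    """E-field of a circularly polarized wave at a fixed point in space:
    two perpendicular components, equal amplitude, 90 deg apart in phase."""
    ex = amplitude * math.cos(omega * t)
    ey = amplitude * math.cos(omega * t - math.pi / 2)  # = sin(omega * t)
    return ex, ey
```

Because ex² + ey² = amplitude² at every instant, the field vector rotates at constant magnitude rather than oscillating in one plane.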
  • 43. Circular Polarization A clockwise circularly-polarized wave An anti-clockwise circularly-polarized wave 43
  • 44. Light is an Electromagnetic Wave – Un-polarized light consists of waves with randomly directed electric fields. – Here the waves are all traveling along the same axis, directly out of the page, and all have the same amplitude E. – Light is polarized when its electric fields oscillate in a single plane, rather than in any direction perpendicular to the direction of propagation. [Figure: E and B vectors of a transverse EM wave traveling along z; the wave shown is polarized in the y direction] 44
  • 47. Liquid Crystals and Polarizer − The alignment of the polarizer “stack” changes with voltage. 47
  • 48. Circular, Linear and Unpolarized Light 48
  • 50. Polarizer in Camera A polarizer is used to intercept (stop/catch) light reflected from the surface of water or glass. 50
  • 51. Polarizer in Camera − A polarizer is used to intercept (stop/catch) light reflected from the surface of water or glass. 51
  • 52. – Since light scattered by the atmosphere is partly polarized, a polarizer is also effective when shooting subjects against a blue sky. – It can suppress the sky and make mountains or other objects stand out. A polarizer 1- Reduces the total amount of light to about ¼ 2- Changes the color balance, so the white balance must be readjusted. Polarizer in Camera 52
  • 53. AGC (Automatic Gain Control) − When enough light cannot be captured with the lens iris fully opened, the camera’s AGC function electrically amplifies the video signal level, increasing and optimizing picture brightness. − The AGC function also degrades the S/N ratio, since the noise is amplified along with the signal; hence it is not used on high-end cameras and camcorders. 53
  • 54. Black Clip It prevents the camera output signal from falling below a video level specified by the television standard. – The black clip function electronically clips off signal levels that are below a defined level called the black clip point. – This prevents the camera from outputting a signal level so low that it may be wrongly detected as a sync signal by other video devices. – The black clip point is set to 0% video level. 54 Pedestal or Master Black Set-up Level Absolute black level, or the darkest black that can be reproduced by the camera. H-Sync Burst
  • 55. White Clip All cameras have a white-clip circuit to prevent the camera output signal from exceeding a practical video level when extreme highlights appear in the image. – The white-clip circuit clips off or electrically limits the video level of highlights to a level that can be reproduced on a picture monitor. 55
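Black clip and white clip together amount to limiting the signal to a range; a minimal sketch (the white-clip value here is only illustrative — real cameras set it per their standard, often somewhat above 100%):

```python
def clip_video_level(level, black_clip=0.0, white_clip=1.0):
    """Limit a video level to [black_clip, white_clip].

    black_clip = 0.0 matches the 0% black clip point in the slides;
    white_clip = 1.0 (100%) is illustrative only -- real cameras place
    the white clip point wherever their standard allows.
    """
    return max(black_clip, min(level, white_clip))
```

Levels inside the legal range pass through unchanged; excursions below the black clip point or above the white clip point are hard-limited.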
  • 57. Gamma, CRT Characteristic [Figure: camera and monitor light/voltage transfer curves, showing how the camera curve must compensate for the monitor curve so the reproduced picture looks neither much brighter nor much darker than the scene] 57
  • 58. Gamma, CRT Characteristic [Figure: CRT control grid; input voltage vs. output light, ideal vs. real response, in dark and bright areas of a signal] The nonlinearity is caused by the voltage-to-current grid drive of the CRT (voltage-driven) and is not related to the phosphor (i.e., a current-driven CRT has a linear response). CRT gamma: L = V^γm, γm = 2.22. 58
  • 59. Gamma, CRT Characteristic [Figure: camera OETF (ITU-R BT.709) cascaded with the CRT response] Legacy system gamma (cascaded system) is about 1.2, to compensate for dark-surround viewing conditions (γm = 2.4). CRT gamma: L = V^γm, γm = 2.22. Camera gamma: V = L^γc, γc = 0.45, so that γc·γm = 1. 59
  • 60. – CRT Defect? No! • It is caused by the voltage-to-current (grid-drive) characteristic of the CRT and not the phosphor. • The nonlinearity is roughly the inverse of human lightness perception. – Legacy system gamma is about 1.2 to compensate for dark surround viewing conditions. – Amazing Coincidence! • The CRT gamma curve (grid drive) nearly matches the human lightness response, so the precorrected camera output is close to being ‘perceptually coded’. • If CRT TVs had been designed with a linear response, we would have needed to invent gamma correction anyway! Gamma, CRT Characteristic 60 [Plot: CRT gamma (2.4) compared to total system gamma (1.2), BT.1886 display]
  • 61. – Gamma (γm) is a numerical value that indicates the response characteristics between the brightness of a display device (such as a CRT or flat panel display) and its input voltage. – The exponent that describes this relation is the CRT’s gamma, which is usually around 2.2: L = V^γm, where L is the CRT brightness and V is the input voltage. – Although gamma correction was originally intended for compensating for the CRT’s gamma, today’s cameras offer unique gamma settings (γc) such as film-like gamma to create a film-like look. Gamma, CRT Characteristic 61
  • 62. – The goal in compensating for a CRT’s gamma is to create a camera output signal that has a reverse relation to the CRT’s gamma. – In this way, the light that enters the camera will be in proportion to the brightness of the CRT picture tube. – Technically, the camera should apply a gamma correction of about 1/γm. – This exponent γc (= 1/γm) is what we call the camera’s gamma, which is about 1/2.2 or 0.45. Gamma, CRT Characteristic 62 Camera gamma (γc = 0.45), Linear gamma (γm·γc = 1), Display gamma (γm = 2.22)
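The reverse relation can be verified directly: cascading a 0.45 power-law camera gamma with a 1/0.45 ≈ 2.22 display gamma returns the original scene light (pure power laws only — real OETFs such as BT.709 add a linear segment near black):

```python
def camera_gamma(light, gamma_c=0.45):
    """Pure power-law camera gamma correction: V = L ** gamma_c."""
    return light ** gamma_c

def display_gamma(voltage, gamma_m=1 / 0.45):
    """CRT-style display response: L = V ** gamma_m, with 1/0.45 ~= 2.22."""
    return voltage ** gamma_m
```

Because γc · γm = 1, the end-to-end response is linear: displayed light tracks scene light.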
  • 63. – Cameras convert scene light to an electrical signal using an Opto-Electronic Transfer Function (OETF). – Displays convert an electrical signal back to scene light using an Electro-Optical Transfer Function (EOTF) (non-linear). • The CRT EOTF is commonly known as gamma. • The camera OETF is commonly known as inverse gamma. Scene Capture → Transmission Medium → Scene Display. L = V^γ, γ = 2.45 for HDTV (ITU-R BT.709). Gamma, CRT Characteristic 63
  • 64. Gamma, CRT Characteristic
Recommendation ITU-R BT.709 (old) — overall opto-electronic transfer function at the source (OETF):
V = 1.099 L^0.45 − 0.099 for 0.018 < L < 1
V = 4.500 L for 0 < L < 0.018
where L is the luminance of the image (0 < L < 1) and V is the corresponding electrical signal.
Recommendation ITU-R BT.1886 (2011) — reference electro-optical transfer function (EOTF):
L = a (max[(V + b), 0])^γ
L: screen luminance in cd/m²; V: input video signal level (normalized, black at V = 0, white at V = 1); γ: exponent of the power function, γ = 2.40; a: variable for user gain (legacy “contrast” control); b: variable for user black-level lift (legacy “brightness” control).
The variables a and b are derived by solving the equations L_B = a·b^γ and L_W = a·(1 + b)^γ, so that V = 1 gives L = L_W (screen luminance for white) and V = 0 gives L = L_B (screen luminance for black).
For content mastered per Recommendation ITU-R BT.709, 10-bit digital code values D map into values of V per V = (D − 64)/876.
• Recommendation ITU-R BT.709 explicitly specifies a reference OETF function that in combination with a CRT display produces a good image. • Recommendation ITU-R BT.1886 in 2011 specifies the EOTF of the reference display to be used for HDTV production; the EOTF specification is based on the CRT characteristics so that future monitors can mimic the legacy CRT in order to maintain the same image appearance in future displays. CRTs had already forced the camera gamma that became BT.709. 64
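The two transfer functions above transcribe directly into code; this sketch uses the BT.1886 closed-form solution for a and b (the white and black luminance defaults are my own choice):

```python
def bt709_oetf(L):
    """ITU-R BT.709 OETF: scene luminance L in [0, 1] -> signal V."""
    if L < 0.018:
        return 4.500 * L
    return 1.099 * L ** 0.45 - 0.099

def bt1886_eotf(V, Lw=100.0, Lb=0.0, gamma=2.40):
    """ITU-R BT.1886 EOTF: signal V in [0, 1] -> screen luminance (cd/m^2).

    a and b are solved from Lb = a*b**gamma and Lw = a*(1 + b)**gamma,
    so that V = 0 gives Lb and V = 1 gives Lw.
    """
    root = 1.0 / gamma
    a = (Lw ** root - Lb ** root) ** gamma
    b = Lb ** root / (Lw ** root - Lb ** root)
    return a * max(V + b, 0.0) ** gamma
```

With Lb = 0 the EOTF reduces to the familiar L = Lw · V^2.4.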
  • 65. – Adjusting gamma • The CRT is a black-level-sensitive power law: Light Power = (V + Black Level)^γ. • So the black-level adjustment (room light) dramatically changes gamma. – Gamma for a CRT is fairly constant at about 2.4 to 2.5: Light Power = V^2.4. • BT.1886 is the recommended gamma for High Definition flat panel displays. • BT.1886 says all flat panel displays should be calibrated to 2.4, but the effective gamma still changes with the Brightness control (black level) or room lighting. • Black levels can track room lighting with auto-brightness, but that changes gamma. – Why not get rid of the gamma power law? • Gamma is changed for artistic reasons. But for current HD (SDR) displays, even with BT.2020 colorimetry, BT.1886 applies for the reference display gamma. Gamma, CRT Characteristic 65
  • 66. 66 System Gamma, Gamma & Bit Resolution Reduction
  • 67. System Gamma, Gamma & Bit Resolution Reduction [Figure: linear camera (natural response) feeding a linear, current-driven CRT or new flat-panel display; D65 daylight; ~100 perceptually uniform steps require ~15 bits, and the pictures look the same with >15 bits] Image reproduction using video is perfect if the display light pattern matches the scene light pattern. If the camera response were linear, more than 15 bits would be needed for R, G, B, and the MPEG & NTSC S/N would need to be better than 90 dB, so 8 bits is not sufficient for NTSC & MPEG. Perception is roughly a 1/3 power law (“cube root”). 67
  • 68. System Gamma, Gamma & Bit Resolution Reduction [Figure: the same linear chain quantized to 8 bits (NTSC & MPEG) — the light outputs match, but the pictures do not look the same] If the camera response were linear, more than 15 bits would be needed for R, G, B, and the MPEG & NTSC S/N would need to be better than 90 dB, so 8 bits is not sufficient for NTSC & MPEG. Lightness perception is only important for S/N considerations. 68
  • 69. System Gamma, Gamma & Bit Resolution Reduction [Figure: camera with gamma feeding a standard voltage-driven CRT (voltage-to-current grid drive); ~100 steps of 1 JND each fit in 7–8 bits, and the pictures look the same with only 7 bits] Thanks to the CRT gamma, we compress the signal to roughly match human perception, and only 7 bits is enough! Quantization and noise appear uniform due to the inverse transform at the display. 69
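The bit-depth argument can be made concrete: compare the relative luminance jump between adjacent codes at a deep-shadow luminance, for linear coding versus 0.45 power-law coding (the helper and its parameters are my own illustration, not from the slides):

```python
def weber_fraction_at_luminance(L, bits, gamma=1.0):
    """Relative luminance jump to the next quantized code, evaluated at
    scene luminance L (0 < L < 1), for an encoding V = L**gamma
    quantized to 2**bits - 1 nonzero levels."""
    levels = 2 ** bits - 1
    code = int(L ** gamma * levels)                # code that L quantizes to
    l_lo = (code / levels) ** (1.0 / gamma)        # decoded light, this code
    l_hi = ((code + 1) / levels) ** (1.0 / gamma)  # decoded light, next code
    return (l_hi - l_lo) / max(l_lo, 1e-12)
```

At L = 0.01 with 8 bits, linear coding jumps by roughly 50% between adjacent codes (far above a ~1% just-noticeable difference), while 0.45 power-law coding jumps by only a few percent — which is why gamma-coded video survives at 8 bits.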
  • 70. Recall of BT.709 HDTV System Architecture [Diagram: camera (sensor; cam adjust, e.g. iris; artistic adjust, e.g. toe, knee) → OETF BT.709 → 8–10 bit delivery → EOTF BT.1886 on the reference display in the reference viewing environment; display adjust → non-reference display in a non-reference viewing environment] The reference OOTF is the cascade of the BT.709 OETF and the BT.1886 EOTF: OOTF_SDR = OETF_709 × EOTF_1886. BT.709 specifies a reference OETF that in combination with a CRT produces a good image. BT.1886 specifies the EOTF of the reference display for HDTV production: it specifies the conversion of the non-linear signal into display light for HDTV, and is based on the CRT characteristics so that future monitors can mimic the legacy CRT in order to maintain the same image appearance in future displays. 70
  • 71. Recall of BT.709 HDTV System Architecture [Diagram: as on slide 70, with “Artistic adjust” (e.g. knee) forming an artistic OOTF] If an artistic image “look” different from that produced by the reference OOTF is desired, “Artistic adjust” may be used. OOTF_SDR = OETF_709 × EOTF_1886. – There is typically further adjustment (display adjust) to compensate for viewing environment, display limitations, and viewer preference; this alteration may lift the black level, effect a change in system gamma, or impose a “knee” function to soft-clip highlights (known as the “shoulder”). – In practice the EOTF gamma and display-adjust functions may be combined into a single function. Actual OOTF = OETF (BT.709) + EOTF (BT.1886) + artistic adjustments + display adjustments. 71
  • 72. – Recommendation ITU-R BT.709 explicitly specifies a reference OETF function that in combination with a CRT display produces a good image. – Recommendation ITU-R BT.1886 in 2011 specifies the EOTF of the reference display to be used for HDTV production; the EOTF specification is based on the CRT characteristics so that future monitors can mimic the legacy CRT in order to maintain the same image appearance in future displays. – A reference OOTF is not explicitly specified for HDTV. – There is no clearly defined location of the reference OOTF in this system. The Reference OOTF = OETF (BT.709) + EOTF (BT.1886) (cascaded) – If an artistic image “look” different from that produced by the reference OOTF is desired for a specific program, “Artistic adjust” may be used to further alter the image in order to create the image “look” that is desired for that program. (Any deviation from the reference OOTF for reasons of creative intent must occur upstream of delivery) The Actual OOTF = OETF (BT.709) + EOTF (BT.1886) + Artistic and display adjustments Recall of BT.709 HDTV System Architecture 72
  • 73. – When shooting a movie or a drama, the colors of an object can sometimes be subject to a drop in saturation. – For example, as shown in the “Before Setting” image, the colors of the red and orange flowers are not reproduced fully, causing the image to lose its original texture and color depth. – This occurs because the colors in mid-tone areas (the areas between the highlighted and the dark areas) are not generated with a “look” that appears natural to the human eye. – In such cases, adjusting the camera’s gamma correction enables the reproduction of more realistic pictures with much richer color. This procedure lets your camera create a unique atmosphere in your video, like that provided by film-originated images. Tips for Creating Film-like Images With Rich Colors 73 By lowering the camera gamma correction (γc) value
  • 74. Tips for Creating Film-like Images With Rich Colors 74
  • 75. Tips for Creating Film-like Images With Rich Colors 75
  • 76. Black Gamma [Diagram: CRT gamma (light output vs. control-grid voltage) and camera gamma (output voltage vs. input light, ITU-R BT.709 OETF).] + : More contrast in dark picture areas (more low-luminance detail), but more noise. − : Less contrast in dark picture areas (less low-luminance detail), but less noise. 76
  • 77. The dark areas are reproduced with more color and darkness 77 Black Gamma
  • 78. 78 Black Gamma — [curves: standard video gamma vs. a gentler gamma curve near the black signal levels]
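The gentler curve near black can be sketched as a simple piecewise function. This is an illustrative model only — the 10% break point and the strength-to-exponent mapping are assumptions, not any camera's actual implementation:

```python
def black_gamma(x, strength=0.0, breakpoint=0.1):
    """Illustrative black gamma: reshape the transfer curve only below
    `breakpoint`. strength > 0 lifts shadows (more low-luminance detail,
    more noise); strength < 0 deepens them (richer dark tones, less noise)."""
    if x >= breakpoint:
        return x  # mid-tones and highlights are left untouched
    # Extra power-law bend inside the black region, scaled so the curve
    # still meets the unmodified curve exactly at `breakpoint`.
    return breakpoint * (x / breakpoint) ** (1.0 - strength)

# Shadows move while mid-tones stay put:
print(black_gamma(0.05, strength=-0.3))  # deepened below 0.05
print(black_gamma(0.50, strength=-0.3))  # unchanged: 0.5
```

Because the curve is pinned at the break point, changing the black gamma setting never disturbs exposure of the main mid-tone subject — only the black-region contrast.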
  • 79. – When shooting, the color saturation in the dark areas of the picture may sometimes not be properly reproduced. – As shown in the “Before setting” image, the dark areas in the background of the room are not fully reproduced, and the color of the picture appears slightly faded. – This occurs because the signals from the dark areas are not output as dark as they should be, or with the shades that they should have. – In such situations, by adjusting the signal level of the black areas to better match the entire image, the picture is reproduced with a much richer visual impression. Tips for Enriching Color Saturation in Dark Areas of an Image 79 By decreasing black gamma, dark areas are reproduced with more color and darkness.
  • 80. Tips for Enriching Color Saturation in Dark Areas of an Image 80
  • 81. Tips for Enriching Color Saturation in Dark Areas of an Image 81
  • 82. Y BLACK Gamma – The BLACK GAMMA and GAMMA functions are used to adjust signals in the low-luminance area and the entire luminance range, respectively. – Because they process luminance and color-difference signals together in their circuits, both black tonal reproduction (black contrast) and color saturation are affected. – Since BLACK GAMMA and GAMMA change the image’s saturation together with its contrast, they are not appropriate for adjusting picture contrast alone. Both saturation and contrast are changed. 82
  • 83. – The Y BLACK GAMMA function applies gamma processing only to the luminance signal and mixes the resultant signal with the color difference signals to create the final output. – This allows operators to independently adjust the tones of black/dark areas (Black Contrast) in the images. – However, since Y BLACK GAMMA mixes the deepened (stronger) shades of black with the final output, the original colors may darken or noise may become visible. – Therefore, it is necessary to always check your monitor when using this function. Y BLACK Gamma 83
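The difference between the two modes can be sketched as follows. This is a conceptual illustration only — the tone curve and its parameters are placeholders, not Sony's actual processing. BLACK GAMMA-style processing bends every channel, which shifts saturation; Y BLACK GAMMA-style processing bends only the luminance signal and passes the color-difference signals through:

```python
def tone_curve(x, strength=-0.3, breakpoint=0.1):
    """Hypothetical black-region tone curve (placeholder shape)."""
    if x >= breakpoint:
        return x
    return breakpoint * (x / breakpoint) ** (1.0 - strength)

def black_gamma_style(r, g, b):
    """Curve applied per channel: black contrast AND saturation change."""
    return tone_curve(r), tone_curve(g), tone_curve(b)

def y_black_gamma_style(y, cb, cr):
    """Curve applied to luminance only: Cb/Cr pass through unchanged,
    so black contrast changes while the color stays intact."""
    return tone_curve(y), cb, cr
```

In the second function the color-difference outputs are identical to the inputs, which is why only the black contrast moves — at the cost, as the slide notes, of possibly darkened colors or visible noise when the deepened luminance is remixed.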
  • 84. – To shoot images with much richer black reproduction, the BLACK GAMMA and GAMMA functions are effective. These functions allow you to reproduce the desired color tone by adjusting the saturation and contrast of the image. – However, since BLACK GAMMA and GAMMA change the image’s saturation together with its contrast, they are not appropriate for adjusting picture contrast alone. – For example, suppose you wanted to adjust only the black color tone or contrast of the bouquet in the “Before setting” image. – When using the BLACK GAMMA function (Master Black : -87) as shown in “After setting 1”, the colors of the flowers are also changed, so the image turns out somewhat different than expected. – For such cases, the Y BLACK GAMMA function is useful, allowing you to adjust only the black contrast of the image to achieve richer and deeper black tones while keeping the color intact. Tips for Shooting Pictures with Rich and Deep Black reproduction (Y BLACK GAMMA Function) 84
  • 85. Tips for Shooting Pictures with Rich and Deep Black reproduction (Y BLACK GAMMA Function) 85
  • 86. Tips for Shooting Pictures with Rich and Deep Black reproduction (Y BLACK GAMMA Function) 86
  • 87. Tips for Shooting Pictures with Rich and Deep Black reproduction (Y BLACK GAMMA Function) 87
  • 88. Tips for Shooting Pictures with Rich and Deep Black reproduction (Y BLACK GAMMA Function) 88
  • 89. Knee Correction − CCD image sensors have a dynamic range that is around three to six times larger than the video signal’s dynamic range. 1) Linear Mapping: − Remember that the brightness (luminance) levels need to fit within the 0% to 100% (max. 109%) video signal range. − With linear mapping, the picture content most important to us (ordinarily lighted subjects and human skin tones) would be reproduced at very low video levels, making it look too dark on a picture monitor. 89
  • 90. − CCD image sensors have a dynamic range that is around three to six times larger than the video signal’s dynamic range. 90 2) Clipping Off or Discarding the Image’s Highlights This would offer bright reproduction of the main picture content, but with the tradeoff of image highlights having no contrast and appearing washed out. Knee Correction
  • 91. Solution – Knee Correction offers a solution to both issues, keeping the main content bright while maintaining a certain level of contrast for highlights. – The image sensor output is mapped to the video signal so it maintains a proportional relation until a certain video level. This level is called the knee point. 91 Knee Correction
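The proportional-then-compressed mapping can be sketched as a piecewise-linear function. The knee point and slope values below are illustrative defaults, not any particular camera's settings:

```python
def apply_knee(x, knee_point=0.85, slope=0.2):
    """Piecewise-linear knee correction.
    Below the knee point the sensor output maps to video level 1:1
    (natural contrast for the main subject); above it, highlights are
    compressed by `slope` so they still fit under the white-clip level."""
    if x <= knee_point:
        return x
    return knee_point + slope * (x - knee_point)

# A 200% sensor highlight still lands below the 109% white clip:
print(round(apply_knee(2.0), 2))  # -> 1.08
```

With these values the main subject (everything below 85%) is untouched, while more than a full extra stop of highlight information is retained with some contrast instead of being clipped flat.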
  • 92. Knee Correction — [graph: output signal vs. input signal, low knee point] The main subject keeps natural contrast, while the highlight portions of the image are reproduced with low contrast. 92
  • 93. Knee Correction — [graph: high knee point] The main subject keeps natural contrast, while highlights are reproduced with very low contrast. 93
  • 94. Knee Correction — [graph: low compression slope] Natural contrast for the main subject; only a small dynamic range is captured, with low-contrast highlights. 94
  • 95. Knee Correction — [graph: high compression slope] Natural contrast for the main subject; a greater dynamic range is captured, with very low-contrast highlights. 95
  • 96. – Photo A shows that the scenery outside the window (image highlights) gets overexposed when Knee Correction is turned off. – In contrast, by activating the Knee Correction function, both the person inside the car as well as the scenery outside are clearly reproduced. (Photo B) 96 Knee Correction
  • 97. – The human eye can capture bright colors of an image, such as a bouquet in a bright environment. This is because the human eye has a very wide dynamic range (the range of light levels that it can handle). – However, when the same bouquet is shot with a video camera, the bright areas of the image can be overexposed and “washed out” on the screen. – For example, as shown in the “Before Setting” image, the white petals and down are overexposed. This is because the luminance signal of the petals and down exceeds the camera’s dynamic range. – In such situations, the picture can be reproduced without overexposure by compressing the signals of the image’s highlight areas so they fall within the camera’s dynamic range. Tips for Avoiding “Overexposure” of an Image’s Highlights 97
  • 98. Tips for Avoiding “Overexposure” of an Image’s Highlights 98
  • 99. Tips for Avoiding “Overexposure” of an Image’s Highlights 99
  • 100. Dynamic Contrast Control (DCC) – The DSP circuit analyzes the content of high-level video signals and makes real-time adjustments to the knee point and slope based on a preset algorithm. – Some parameters of the algorithm may be fine-tuned by the user. [Comparison: a high-contrast image vs. an image with no extreme highlights] • When the incoming light far exceeds the white clip level, the DCC circuitry lowers the knee point in accordance with the light intensity. • This reproduces details of a picture even in extremely high contrast. 100
  • 101. – DCC is a function that allows cameras to reproduce details of a picture, even when extremely high contrast must be handled. – A typical example is when shooting a person standing in front of a window from inside the room. With DCC activated, details of both the person inside and the scenery outside are clearly reproduced, despite the large difference in luminance level (brightness) between them. – When the incoming light far exceeds the white clip level, the DCC circuitry lowers the knee point in accordance with the light intensity. This allows high contrast scenes to be clearly reproduced within the standard video level. – This approach allows the camera to achieve a wider dynamic range since the knee point and slope are optimized for each scene. 101 Dynamic Contrast Control (DCC)
  • 102. Adaptive Highlight Control 102 [Comparison, Adaptive Highlight Control: On] Highlight contrast is clearly reproduced using Adaptive Highlight Control.
  • 103. Adaptive Highlight Control It is a kind of DCC. – This function intelligently monitors the brightness of all areas of the image and optimizes the knee points/slopes so the video dynamic range is more effectively used. – A typical example is shooting an interior scene, which includes sunlit exterior seen through a window. – This function sets knee points/slopes only for highlight areas of the image, while the middle and low luminance levels remain unchanged. As a result, both the interior scene and sunlit exterior are clearly reproduced. 103
  • 104. DynaLatitude − The DSP circuit analyzes the total video content in 7 luminance layers. − It then makes real-time adjustments to multiple knee points and slopes based on a preset algorithm. − Some parameters of the algorithm may be fine-tuned by the user. [graph: output signal vs. input signal, multi-segment knee] 104
  • 105. DynaLatitude first analyzes the light distribution or light histogram of the picture and assigns more video level (or more 'latitude') to light levels that occupy a larger area of the picture. In other words, it applies larger compression to insignificant areas of the picture and applies less or no compression to significant areas. 105 DynaLatitude
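Sony has not published DynaLatitude's exact algorithm, but the histogram-weighted idea described above can be sketched as follows: output range ("latitude") is allocated per input-level bin in proportion to how much of the picture that bin occupies, with an assumed minimum slope (`floor`) so insignificant levels are compressed rather than discarded.

```python
def dynalatitude_curve(pixels, bins=32, floor=0.2):
    """Sketch of histogram-weighted range allocation (not Sony's algorithm).
    `pixels` are normalized levels in [0, 1]. Bins that occupy a larger
    area of the picture receive a steeper slope (more output latitude);
    `floor` reserves a minimum uniform slope for insignificant areas."""
    counts = [0] * bins
    for p in pixels:
        counts[min(int(p * bins), bins - 1)] += 1
    total = len(pixels)
    # Per-bin slope: a blend of a uniform share and an occupancy-based share.
    slopes = [floor / bins + (1.0 - floor) * c / total for c in counts]
    # Cumulative sum of slopes -> a monotonic tone curve at the bin edges.
    curve = [0.0]
    for s in slopes:
        curve.append(curve[-1] + s)
    return curve  # curve[-1] == 1.0: the full output range is used

# A picture dominated by shadows gets most of its latitude near black:
shadow_heavy = [0.1] * 90 + [0.9] * 10
curve = dynalatitude_curve(shadow_heavy, bins=10)
```

The cumulative-histogram construction guarantees a monotonic curve whose total output always sums to the full range, mirroring the slide's point that compression is taken from insignificant areas and given to significant ones.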
  • 106. DynaLatitude is a feature offered in SONY DVCAM for capturing images with a very wide dynamic range or, in other words, images with a very high contrast ratio. – For example, when shooting a subject in front of a window from inside the room, details of the scenery outside will be difficult to reproduce due to the video signal's limited 1 Vp-p dynamic range. – DynaLatitude overcomes this limitation so that both dark areas and bright areas of a picture can be clearly reproduced within this 1 Vp-p range. – DynaLatitude functions in such a way that the signal is compressed within the 1 Vp-p range according to the light distribution of the picture. 106 DynaLatitude
  • 107. Smooth Knee – Provides natural reproduction of highlight scenes through a smooth dynamic compression system. 107
  • 108. Super Knee — [comparison: dynamic range > 400% vs. dynamic range > 600% with Super Knee: On] 108
  • 109. Mechanism of Human Eye – Images (= light) seen with our eyes are directed to and projected onto the eye’s retina (it consists of several million photosensitive cells). – The retina reacts to light and converts it into a very small amount of electrical charges. – These electrical charges are then sent to the brain through the optic nerve system. 109
  • 110. Image Sensors – Image sensors have photo-sensors that work in a similar way to our retina’s photosensitive cells, converting light into a signal charge. – However, the charge readout method is quite different! 110
  • 112. – Each pixel within the image sensor samples the intensity of just one primary color (red, green or blue). In order to provide full color images from each pixel of the imager, the two other primary colors must be created electronically. – These missing color components are mathematically calculated or interpolated in the RGB color processor which is positioned after the image sensor. – The easiest way to calculate a missing color component: Add the values of the color components from two surrounding pixels and divide this by two. Blue color component missing in pixel G22 𝑩𝟐𝟐 = (𝒑𝒊𝒙𝒆𝒍 𝑩𝟐𝟏 + 𝒑𝒊𝒙𝒆𝒍 𝑩𝟐𝟑)/𝟐 112 One-Chip Imaging System
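The averaging rule above can be generalized along a sensor row: for each pixel missing a given color sample, average the two nearest pixels that did sample it. This sketch uses a hypothetical one-dimensional layout for clarity (real demosaic algorithms work in two dimensions and are considerably more sophisticated):

```python
def interpolate_row(values, sampled):
    """Fill in missing color samples along one sensor row.
    `values[i]` holds a sample where `sampled[i]` is True; missing
    positions get the mean of the nearest sampled neighbors on each side
    (edge positions simply copy their single available neighbor)."""
    out = list(values)
    for i, have in enumerate(sampled):
        if have:
            continue
        left = next((values[j] for j in range(i - 1, -1, -1) if sampled[j]), None)
        right = next((values[j] for j in range(i + 1, len(values)) if sampled[j]), None)
        if left is not None and right is not None:
            out[i] = (left + right) / 2  # e.g. B22 = (B21 + B23) / 2
        else:
            out[i] = left if left is not None else right
    return out

# Blue was sampled at even positions only; odd positions are interpolated:
print(interpolate_row([10, 0, 30, 0, 50], [True, False, True, False, True]))
# -> [10, 20.0, 30, 40.0, 50]
```

This linear interpolation is the cheapest possible reconstruction; it is what makes one-chip output softer than three-chip output, which is why the sharpening stage shown on the next slide follows it.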
  • 113. Original Bayer screen output Interpolation Sharpening 113 One-Chip Imaging System
  • 114. Three-Chip Imaging System − The dichroic prism system provides more accurate color filtering than a color filter array. − Capturing the red, green, and blue signals with individual imagers generates purer color reproduction. − Using three imagers instead of just one, allows for a much wider dynamic range and a higher horizontal resolution since the image sensing system captures three times more information than a one-chip system. 114
  • 116. Image Sensor Size – Image sensor size is measured diagonally across the imager’s photosensitive area, from corner to corner. – A larger image sensor size generally translates into better image capture. – This is because a larger photosensitive area can be used for each pixel. The benefits of larger image sensors 1. Higher sensitivity 2. Less smear 3. Better signal-to-noise characteristics 4. Use of better lens optics 5. Wider dynamic range 116
  • 117. Image Sensor Size — Crop Factor = Diagonal(35mm) / Diagonal(sensor) — APS: Advanced Photo System (discontinued); H (high-definition), C (classic) and P (panorama) 117
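The crop-factor formula can be checked in a few lines. The 8.8 mm × 6.6 mm active area for a 2/3-inch sensor comes from the slides that follow; the 36 mm × 24 mm full-frame reference is standard:

```python
import math

FULL_FRAME_DIAG = math.hypot(36.0, 24.0)  # 35 mm "full frame": ~43.27 mm

def crop_factor(width_mm, height_mm):
    """Crop factor = full-frame diagonal / sensor diagonal."""
    return FULL_FRAME_DIAG / math.hypot(width_mm, height_mm)

# 2/3" sensor: 8.8 mm x 6.6 mm active area -> 11 mm diagonal
print(round(crop_factor(8.8, 6.6), 1))  # -> 3.9
```

So a lens on a 2/3-inch camera frames roughly like a lens of about four times its focal length would on a full-frame body.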
  • 118. Image Sensor Size — Vidicon Tube (2/3 inch in diameter) • An old 2/3″ Tube camera would have had a 4:3 active area of about 8.8mm x 6.6mm giving an 11mm diagonal. • This 4:3 11mm diagonal is the size now used to denote a modern 2/3″ sensor. • Active diagonal: 2/3 × (2/3 inch) ≈ 11 mm. 118
  • 119. – Video sensor size measurement originates from the first tube cameras, where the size designation related to the outside diameter of the glass tube. – The area of the face of the tube used to create the actual image would have been much smaller, typically about 2/3rds of the tube’s outside diameter. – A 1″ tube would give a 2/3″ diameter active area, within which you would have a 4:3 frame with a 16mm diagonal. – An old 2/3″ Tube camera would have had a 4:3 active area of about 8.8mm x 6.6mm, giving an 11mm diagonal. This 4:3 11mm diagonal is the size now used to denote a modern 2/3″ sensor. – A 1/2″ sensor has an 8mm diagonal and a 1″ sensor a 16mm diagonal. Image Sensor Size 119 [diagram labels: 2/3 inch, 1 inch, 1″ tube]
  • 120. – It’s confusing!! – But the same 2/3″ lenses as designed for tube cameras in the 1950’s can still be used today on a modern 2/3″ video camera and will give the same field of view today as they did back then. – This is why some manufacturers are now using the term “1 inch type”, as this is the active area that would be the equivalent to the active area of an old 1″ diameter Vidicon/Saticon/Plumbicon Tube from the 1950’s. For comparison: – 1/3″ → 6mm diag. – 1/2″ → 8mm diag. – 2/3″ → 11mm diag. – 1″ → 16mm diag. – 4/3″ → 22mm diag. – A camera with a Super35mm sensor would be the equivalent of approx 35-40mm – APS-C would be approx 30mm Image Sensor Size 120
  • 121. – The term Full Frame or FF is used by users of digital single-lens reflex (DSLR) cameras as a shorthand for an image sensor format which is the same size as 35mm format (36 mm × 24 mm) film. Image Sensor Size 121
  • 122. Picture Element (Pixel) – The smallest unit with which the imager can sense light. – The main factor that determines the camera’s resolution. – In all CCD sensors, a certain area along the periphery of the photosensitive area is masked. – These masked areas correspond to • the horizontal and vertical blanking periods or timing of the video signal • and are used as a reference to generate the signal’s absolute black level 122
  • 123. Thus, there are two definitions for describing the picture elements contained within an imager. − “Total picture elements”: all picture elements within an imager, including those that are masked. − “Effective picture elements”: the number of picture elements that are actually used for sensing the incoming light. • CCDs usually have about the same number of vertical pixels as the number of scanning lines in the video system. • For example, the NTSC video system has 480 effective scanning lines, and therefore CCDs used in this system have about 490 vertical pixels. 123
  • 125. – Charge Transfer from Photo Sensor to Vertical CCD – Like Water Draining from a Dam. 125 CCD Image Sensor
  • 126. – Charge Transfer by CCD in a Bucket-brigade Fashion. – CCD image sensors get their name from the vertical and horizontal shift registers, which are Charge Coupled Devices that act as bucket brigades. 126 CCD Image Sensor
  • 127. – Since the charges transferred from the horizontal CCD are very small, an amplifier is needed both to convert the charges to voltage and to strengthen this voltage. 1. The charge enters a storage region called the “floating diffusion”. 2. The voltage at the surface of the floating diffusion varies in proportion to the accumulated charge. 3. The voltage generated on the surface of the floating diffusion controls the gate of the amplifier. (When a charge is transferred to the floating diffusion, the voltage generated on its surface decreases in proportion to the amount of charge, and the gate voltage of the amplifier decreases in proportion.) To the camera's signal processor 127 CCD Image Sensor
  • 128. 1. Each pixel’s (a) photo sensor (b) converts incoming light into electrons. 2. A vertical shift register (c) is situated between the columns of photo sensors. This shift register is literally a "charge coupled device," from which the CCD image sensor gets its name. 3. During the vertical sync interval, all the charges accumulated in all the photo sensors are simultaneously transferred into the adjacent vertical shift register CCDs. The photo sensor is like a dam, holding back a reservoir of electrons. The transfer is like the floodgates of multiple dams opening at the same time, and the water of all these dams draining at the same time. 4. The timing of the charge transfer to the vertical CCD depends on the frame or field rate of the camera. For example, at a rate of 50 fields per second, the accumulated charge will be transferred to the vertical CCD every 1/50 second. It is as if the floodgates of all the dams open and drain water at an interval of 1/50 second. 5. Each charge is shifted down the vertical CCD in bucket-brigade fashion. The timing of the transfers is the same as the line scanning frequency of the camera. With a 625-scanning-line camera operating at 50 interlaced fields per second, the charges are transferred at intervals of 1/15625 second. 6. The vertical CCDs all transfer charges into another CCD positioned horizontally across the bottom of the image sensing area. This is the horizontal shift register (d). 7. During the horizontal sync interval, all the charges are shifted in bucket-brigade fashion across the horizontal CCD and into the amplifier (x). 128
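The steps above can be modelled as a toy simulation. Lists of numbers stand in for charge packets; the structure (parallel dump into the vertical registers, then line-by-line transfer through the horizontal register) follows the description, while everything else is a simplification:

```python
def ccd_readout(frame):
    """Toy IT-CCD readout. `frame` is a rows x cols grid of accumulated
    charge. During vertical blanking, every photosite dumps in parallel
    into its adjacent vertical register (modelled as one copy); then one
    row per horizontal interval drops into the horizontal register, which
    is clocked out charge by charge into the output stream."""
    vertical_registers = [row[:] for row in frame]       # simultaneous transfer
    output = []
    while vertical_registers:
        horizontal_register = vertical_registers.pop(0)  # one scanning line
        while horizontal_register:
            output.append(horizontal_register.pop(0))    # bucket-brigade shift
    return output

# A 2x3 sensor reads out line by line, left to right:
print(ccd_readout([[1, 2, 3], [4, 5, 6]]))  # -> [1, 2, 3, 4, 5, 6]
```

The nested loop mirrors the two clock rates in the description: the outer loop runs at the line frequency, the inner loop at the pixel clock.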
  • 129. Interline-Transfer CCD (IT CCD) – CCDs are categorized into two types, depending on their structure and the method used to transfer (read out) the accumulated charge at each photo-site to the output. – The structure of early IT imagers exhibited a considerable amount of vertical smear. 129
  • 130. Light sensitive Pixels Vertical Shift Registers Horizontal Shift Register 130 Interline-Transfer CCD (IT CCD) The light conversion and charge accumulation take place over the video field period (e.g. 1/50 second for PAL).
  • 131. Interline-Transfer CCD (IT CCD) — [animation frame: light-sensitive pixels, vertical shift registers, horizontal shift register] Light falling on the sensors causes the build-up of an electrical charge. The magnitude of the electrical charge is proportionate to the intensity of the light and the duration of exposure. 131
  • 132. Interline-Transfer CCD (IT CCD) — [animation frame] The accumulated charges are clocked into the vertical shift registers. The electrical charges collected by each photo-sensor are transferred to the vertical shift registers during the vertical blanking interval. 132
  • 133. Interline-Transfer CCD (IT CCD) — [animation frame] The first row of charges is clocked into the horizontal shift register. Charges are shifted, at the horizontal frequency, through the vertical shift registers and read into the horizontal register. Charges within the same row in the CCD array are shifted simultaneously to establish a scanning line. 133
  • 134–140. Interline-Transfer CCD (IT CCD) — [animation frames] Charges are clocked out of the horizontal shift register at the standard scanning frequency, through correlated double sampling, to the CCD output.
  • 141. Interline-Transfer CCD (IT CCD) — [animation frame] Once a given line has been read into the horizontal register, it is immediately read out – during the same horizontal interval – to the camera circuitry so that the next scanning line can be read into the register. 141
  • 142. Interline-Transfer CCD (IT CCD) — [animation frame] The second row of charges is clocked into the horizontal shift register. 142
  • 143–145. Interline-Transfer CCD (IT CCD) — [animation frames] The process continues until all the rows of charges have been sequentially clocked out, through correlated double sampling, to the CCD output.
  • 146. Smear — Similar to the CCD’s photo sensors, the vertical register also has a light-to-electron conversion effect, and any light that leaks into it generates unwanted electrons. • Vertical smear is a phenomenon inherent in CCD cameras, which occurs when a bright object or light source is shot. • It is observed as a vertical streak above and below the object or light source. • Smear is caused by incoming light leaking directly into the CCD’s vertical shift register when it takes an irregular path. 146 Vertical Smear — Vertical smear is reduced by the use of the On-Chip Lens layer
  • 147. – Smear is observed as a vertical streak because the vertical register continues to generate and shift down unwanted electrons into the horizontal register for as long as the bright light continues to hit the CCD. – The amount of smear is generally proportional to I. The brightness of the subject or light source II. The area they occupy on the CCD surface. – Thus, in order to evaluate smear level, the area must be defined. – Smear has been reduced to a negligible level due to the use of the On-Chip Lens and other CCD refinements. – Corrective action: • Increase of the exposure time • Use of a mechanical or LCD shutter • Use of flash illumination 147 Smear
  • 148. Blooming – Charge overflow (> full well capacity) between neighboring pixels – Corrective action: Reduction of the incoming light 148
  • 149. Overflow Gate Technology and Shuttering 149
  • 150. – It was designed to reduce the generation of smear. – The upper part of a FIT CCD is structured similar to an IT CCD with photo-sensing sites and charge-shifting registers. – The additional bottom part acts as a temporary storage area for the accumulated charges. Frame Interline Transfer CCD (FIT CCD) 150
  • 152. Light sensitive Pixels Vertical Shift Registers Horizontal Shift Register One Frame intermediate storage 152 Frame Interline Transfer CCD (FIT CCD)
  • 153. Frame Interline Transfer CCD (FIT CCD) — [animation frame: light-sensitive pixels, vertical shift registers, one-frame intermediate storage, horizontal shift register] Light falling on the sensors causes the build-up of an electrical charge. 153
  • 154. Frame Interline Transfer CCD (FIT CCD) — [animation frame] The accumulated charges are clocked into the vertical shift registers. 154
  • 155–156. Frame Interline Transfer CCD (FIT CCD) — [animation frames] The charges are clocked very rapidly away from the exposed area into the shielded intermediate frame storage.
  • 157. Frame Interline Transfer CCD (FIT CCD) — [animation frame] Charges are clocked from the intermediate storage into the horizontal shift register at the standard line frequency. 157
  • 158. Frame Interline Transfer CCD (FIT CCD) — [animation frame] Charges are clocked out of the horizontal shift register, through correlated double sampling, at the normal scanning rate to the CCD output. 158
  • 159. – Immediately after the charges are transferred from the photo-sensing sites to the vertical registers (during the vertical blanking interval), they are quickly shifted down to this buffer area. – Since the charges travel quickly through the vertical register within a short period, the effect of unwanted light leaking into the vertical register is much smaller than in an IT CCD. – FIT CCDs originally offered a significant reduction in smear, but their complexity increased their cost. Today, dramatic improvements in IT CCDs have reduced smear to a negligible level, eliminating the need for the FIT structure. This is due to the use of the HAD sensor structure and improvements in On-Chip Lens (OCL) technology. 159 Frame Interline Transfer CCD (FIT CCD)
  • 160. RPN (Residual Point Noise) – RPN refers to a white or black dot that appears on the camera’s output image due to defective pixels in the image sensor. – Generally, RPN consists of several adjacent pixels, in the horizontal and/or vertical direction, that have been destroyed and cannot reproduce colors properly. These dots are categorized into two types: I. Dead pixels: cannot reproduce color information at all. II. Blemished pixels: can reproduce color information, but cannot reproduce colors properly due to unwanted level or voltage shifts of the charges accumulated in their pixels. 160
  • 161. – Concealment (hiding) technology intelligently interpolates dead pixels by using adjacent picture data in the horizontal and vertical directions. – Compensation technology electrically offsets unwanted level or voltage shifts to reduce the blemishing effect. The distinct causes of dead pixels and blemishes are still under investigation. However, until now, it has primarily been believed that cosmic rays damage the CCD pixels during high-altitude transportation in aircraft. Statistics indicate that cameras transported by air tend to exhibit more RPN. 161
• 162. White Flecks – Although CCD image sensors are produced with high-precision technologies, fine white flecks (small white spots) may be generated on the screen in rare cases, caused by cosmic rays, etc. – This is related to the operating principle of CCD image sensors and is not a malfunction. – The white flecks especially tend to be seen in the following cases: I. When operating at a high ambient temperature. II. When you have raised the master gain (sensitivity). III. When operating in slow shutter mode. – Automatic adjustment of black balance may help to reduce this problem. 162
• 163. Spatial offset is a method used to improve the luminance horizontal resolution of CCD cameras.  This technology allows a higher resolution to be achieved than theoretically expected from the number of picture elements in the CCD array. o As shown in (a), the red and blue CCDs are fixed to the prism block with a one-half pitch horizontal offset with respect to the green CCD.  This doubles the number of samples within a scanning line for creating the luminance signal, giving a higher resolution than when spatial offset is not used. o When shooting a subject with a CCD camera using spatial offset, the image is projected on the green, red, and blue CCDs as shown in (b). o A and B in (c) are enlargements of areas A and B in (b). The amount of signal charge accumulated in each photo sensor is shown in 1 through 7 as signal level. o If displayed separately on a video monitor, the green CCD signal levels would appear as shown in A’ and the red and blue signal levels would appear as shown in B’. These represent the native resolutions of the CCDs without spatial offset.  The luminance signal (Y), which is obtained by adding the R, G, and B signals with certain weights on each signal, is provided by adding A’ and B’ in spatial offset. This is shown in C’. As a result, the resolution is greatly improved. 163 Spatial Offset Technology
• 164. [Figures (a) and (b): CCD mounting with a one-half pitch horizontal offset, and image projection on the green, red, and blue CCDs.] 164 Spatial Offset Technology
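The sampling effect of spatial offset can be illustrated numerically. This is a simplified 1-D sketch, not the actual camera processing: `scene` is an arbitrary test pattern, and interleaving the green samples with the half-pitch-offset red/blue samples doubles the luminance sample count per line.

```python
import math

# Illustrative 1-D model of spatial offset: the green CCD samples the scene
# at integer pixel positions, the red/blue CCDs at positions shifted by half
# a pixel pitch. Interleaving the two streams doubles the sample count used
# to build the luminance signal.
def scene(x):
    return math.sin(2 * math.pi * x / 8.0)   # arbitrary test pattern

n = 16
green = [scene(i) for i in range(n)]           # samples at 0, 1, 2, ...
red_blue = [scene(i + 0.5) for i in range(n)]  # samples at 0.5, 1.5, ...

luma = [None] * (2 * n)
luma[0::2] = green                             # even samples from G
luma[1::2] = red_blue                          # odd samples from offset R/B

assert len(luma) == 2 * len(green)             # twice the samples per line
```

The actual luminance signal is a weighted sum of R, G, and B; this sketch only shows why the effective horizontal sampling rate doubles.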
• 168. Electronic Shutter – When a shutter speed selection is made with the electronic shutter (e.g., 1/500 second), only electrons accumulated within this period are read out to the vertical register. – All the electrons accumulated before this period – the gray triangle in the figure – are discarded to the CCD’s N-substrate, an area within the CCD used to dispose of such unnecessary electrons. – Discarding electrons until the 1/500-second period commences means that only movement captured during the shutter period contributes to the image, effectively reducing the picture blur of fast-moving objects. 168 1/500 sec 1/500 sec 1/500 sec
• 170. During One Readout Cycle: − Progressive CCDs create one picture frame (higher vertical resolution, twice the transfer rate of interlace CCDs). − Interlace CCDs create one interlace field (higher sensitivity). Interlace CCD (in Field Integration mode) Progressive CCD Faster clocking of the horizontal shift register (all lines are read out at once) 170 Progressive & Interlace CCD
• 171. – In interlace CCDs, two vertically adjacent photo sites are read out together and combined as one signal charge. This mechanism is used so the number of vertical samples coincides with the number of interlace scanning lines, that is, half the total scanning lines. – In contrast, progressive CCDs read out the electric charges individually for each photo site, producing a picture frame with the CCD’s full line count. – The vertical register must be capable of carrying charges from each line in a way that ensures they do not mix. – Progressive CCDs require twice the transfer rate of interlace CCDs since all lines are read out at once. This higher transfer speed requires faster clocking of the horizontal shift register. – Special consideration must also be given to sensitivity. Interlace CCDs double the signal charge by reading two photo sites together; with progressive CCDs this is not the case. 171 Progressive & Interlace CCD
• 172. Frame Integration Mode High vertical resolution, high sensitivity, picture blur – To create even fields, only the charges of the CCD’s even lines are read out. – To create odd fields, only the charges of the CCD’s odd lines are read out. Frame Rate Charging (1/25 sec) 172 1/25 Frame Rate Charging (1/25 sec)
• 173. – Each pixel in the CCD array accumulates charges for one full frame (e.g., 1/25 second for PAL video) until transferring them to the vertical register. – This method provides high vertical resolution. – It also has the disadvantage of introducing picture blur, because the images are captured across the longer 1/25-second period. 173 Frame Integration Mode Frame Rate Charging (1/25 sec) 1/25
  • 174. Field Integration Mode Reducing the sensitivity by one-half, Less vertical resolution, Less picture blur – For an even field, B and C, D and E, and F and G are added together – For an odd field, A and B, C and D, and E and F are added together. 174 1/50 Field Rate Charging (1/50 sec) Field Rate Charging (1/50 sec)
• 175. – The Field Integration method reduces the blur by shortening the charge accumulation period to the field rate (e.g., 1/50 second for PAL video). – Shortening the accumulation period and alternating the lines read out to create even and odd fields reduces the accumulated charges to one half of the Frame Integration method. – This results in reducing the sensitivity by one-half. – After the charges are transferred to the vertical register, the charges from two adjacent photo-sites are added together to represent one pixel of the interlaced scanning line. – Both even and odd fields are created by alternating the photo-site pairs used to create a scanning line. – This method provides less vertical resolution compared to the Frame Integration mode (two adjacent pixels are averaged in the vertical direction). 175 Field Integration Mode Field Integration has become the default method for all interlace video cameras, to capture pictures without image blur
• 176. [Diagram: Frame Integration Mode with frame-rate charging (1/25 sec) vs. Field Integration Mode with field-rate charging (1/50 sec)] 176 Field and Frame Integration Mode
• 177. [Diagram: field-rate charging (1/50 sec) vs. frame-rate charging (1/25 sec)] 177 Field and Frame Integration Mode
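The two readout modes above can be modelled with a toy 6-line sensor. The line values and the pairings are illustrative only:

```python
# Toy model of interlace readout on a 6-line sensor; each value is the charge
# at one photo site (lines A..F). Frame integration reads even or odd lines
# directly; field integration sums two adjacent lines per output line, which
# doubles the charge (sensitivity) while averaging vertical detail.
charges = [10.0, 20.0, 30.0, 40.0, 50.0, 60.0]   # A, B, C, D, E, F

# Frame integration: each field uses alternate lines, unsummed.
frame_odd_field = charges[0::2]                  # A, C, E
frame_even_field = charges[1::2]                 # B, D, F

# Field integration: adjacent pairs, and the pairing alternates per field
# (A+B, C+D, E+F for one field; B+C, D+E for the other).
field_odd = [a + b for a, b in zip(charges[0::2], charges[1::2])]
field_even = [a + b for a, b in zip(charges[1::2], charges[2::2])]

assert field_odd == [30.0, 70.0, 110.0]   # each sample carries double charge
assert field_even == [50.0, 90.0]
```

Note how each field-integration sample is the sum of two photo sites (twice the signal, hence the sensitivity gain), while vertical detail within each pair is averaged away.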
• 178. EVS (Enhanced Vertical Definition System) – Standard definition cameras have a limited vertical resolution due to • the smaller vertical line count of the SD signal • the use of the interlaced scanning system EVS is a solution to improve this vertical resolution limitation. – These features are not used in high definition (HD) cameras due to their higher vertical resolutions. 178 SD signal
• 179. 179 EVS (Enhanced Vertical Definition System) SD signal Frame Integration • High sensitivity, high resolution • Motion blur • No discarded electrons EVS (Enhanced Vertical Definition System) • High resolution, half-sensitivity • No motion blur • Shutter speed is set to 1/60s for NTSC or 1/50s for PAL • Electrons are discarded to the overflow drain of the CCD
• 180. – Frame Integration provides higher vertical resolution than Field Integration. However, it also introduces motion blur due to its longer 1/30-second charge accumulation period. – EVS has been developed as a solution to improve this vertical resolution limitation. Its mechanism is based on using the CCD’s Frame Integration mode, but without introducing the motion blur inherent in this mode. – EVS eliminates this motion blur by operating the CCD in Frame Integration mode but with a 1/60-second charge accumulation period. – Just like Frame Integration, EVS uses the CCD’s even lines to create even fields and odd lines to create odd fields, providing the same high vertical resolution. – However, instead of accumulating the charges across a 1/30-second period, EVS discards the charges accumulated in the first 1/60 second (1/30 = 1/60 + 1/60) and keeps only those charges accumulated in the second 1/60 second. – The result is images with improved vertical resolution and reduced motion blur. – However, it must be noted that discarding the first 1/60 second of the accumulated charges reduces the sensitivity to one-half. 180 EVS (Enhanced Vertical Definition System) SD signal
• 181. Super EVS – Super EVS has been created to provide a solution to this drop in sensitivity. – The charge readout method used in Super EVS sits between Field Integration and EVS. – Instead of discarding all charges accumulated in the first 1/60 second, Super EVS allows this discard period to be linearly controlled. • When the period is set to 0, the results will be the same as when using Field Integration. • When the period is set to 1/60, the results will be identical to EVS. • When set between 0 and 1/60, Super EVS will provide a combination of the improved vertical resolution of EVS but with less visible picture blur. • Most importantly, the amount of resolution improvement and picture blur will depend on the selected discard period. This can be summarized as follows: – When set near 0: less improvement in vertical resolution, less picture blur – When set near 1/60: more improvement in vertical resolution, more picture blur 181 SD signal
  • 183. HAD (Hole Accumulated Diode) Sensor – The HAD structure, developed in 1987, brought a stunning innovation to CCD picture performance. – Earlier CCDs faced performance limitations due to • fixed pattern noise • lag factors – The HAD structure successfully reduced these to a negligible level by adding a Hole Accumulated Layer (HAL) above the photo-sensor. – In addition, the use of an N-substrate to drain excessive electrons accumulated in the photo-sensors effectively reduced the blooming effect while facilitating features such as the electronic shutter. 183
  • 184. 1. Reduction of dark current noise and fixed pattern noise 2. Reduction of Lag 3. Flexible Shutter Mechanism 4. Reduction of smear 184 These Improvements Are Summarized as Follows in Order of Importance:
• 185. – The SiO2 – Si boundary of a CCD is, by nature, electrically unstable and tends to generate free electrons, which cause an electric current to flow into the photo-sensors where signal charges are accumulated. – This unwanted current is observed as signal noise – commonly known as fixed pattern noise and dark current noise. 1- Reduction of dark current noise and fixed pattern noise 185 Non HAD Structure (cross sectional view)
• 186. 1- Reduction of dark current noise and fixed pattern noise – To solve this problem, the photo-sensor of the HAD (Hole Accumulated Diode) structure uses an N-type semiconductor layer. – The HAD structure effectively reduces such noise by adopting an extra P+ layer, called the HAL, situated between the photo-sensor (N+ layer) and the SiO2 layer. – The HAL is doped so that its carriers are positively charged ‘holes’. Having these holes at the SiO2 – Si boundary prevents any electrons from generating an electric current there. HAD Structure (cross sectional view) 186
  • 187. – Conventional photodiodes enable signal charge electrons to escape and free electrons to enter. – Sony's HAD CCD uses a Hole Accumulated Layer (HAL) to cover the photodiode and block these problems. 1- Reduction of dark current noise and fixed pattern noise Conventional Photodiode Buried-type Photodiode 187
• 188. Early CCDs exhibited a considerable amount of lag, which would appear as an after-image from previous picture frames. – The amount of lag in a CCD is determined by the efficiency of transferring the electrons accumulated in the photo-sensors (N+ area) to the vertical shift register. – The potential at the bottom of the photo-sensor – where signal charges were accumulated – tended to shift when voltage was applied. – This caused a considerable number of electrons to remain in the photo-sensors, even after charge readout. SHIFT IN VOLTAGE 188 2-Reduction of Lag Non HAD Structure
• 189. With the HAD sensor, however, the HAL clamps the bottom of the photo-sensor so that the same potential is always maintained. This ensures all accumulated electrons fall completely into the vertical register during readout, thus eliminating any lag. HAD Structure 189 2-Reduction of Lag
• 190. 3-Flexible Shutter Mechanism – The HAD structure enabled a true variable-speed electronic shutter to be produced in CCDs. – This was achieved by adding an N-substrate for discarding electrons that should not be accumulated, thus shortening the exposure time. – The HAL layer, N+ layer, P-well, and N-substrate establish an electric potential, as shown in the figure, to accumulate the signal charges (electrons) during the exposure period. – However, when the exposure (charge accumulation) period needs to be shortened, the electrons in the photo-sensor can be discarded to the N-substrate simply by applying a certain voltage (ΔV). – Since this method does not require complex mechanics to optically control exposure, virtually any shutter speed can be achieved. 190
  • 192. 4-Reduction of smear – Smear in recent CCDs is known as a phenomenon caused by incident light, which should only enter the photo-sensor, leaking into the vertical shift register through an irregular path. – This type of smear is observed as a dim white vertical streak. Non HAD Structure (cross sectional view) 192
• 193. – CCDs developed before the HAD sensor also exhibited severe reddish smear. This was due to electrons generated deep within the CCD by photoelectric conversion drifting directly into the vertical shift register. – These electrons correspond to light with longer wavelengths (mostly red in color), which, by nature, penetrate deeper into the silicon substrate. With early CCDs, this reddish smear could not be prevented. 4-Reduction of smear 193 Non HAD Structure (cross sectional view)
• 194. – The HAD structure, combined with other CCD refinements, virtually eliminated reddish smear by offering double protection. – This was achieved by positioning a second P-well below the vertical register and, of equal importance, by using an N-substrate for the base of the entire HAD structure CCD. – The 2nd P-well is a P-type semiconductor that prevents electrons generated deep within the CCD from entering the vertical register. It pairs up its holes with unwanted emerging electrons that otherwise would result in reddish smear. 4-Reduction of smear HAD Structure (cross sectional view) 194
  • 195. – The N-substrate also prevents any unwanted electrons from emerging in deep areas within the CCD by acting as a drain to discard them. – With the non-HAD sensor CCD, unwanted electrons diffuse within the P-substrate and can easily enter the vertical register. – However, the HAD sensor drains all such electrons to the bottom of the CCD, thereby preventing them from entering the vertical shift register. 4-Reduction of smear Non HAD Structure (cross sectional view) HAD Structure (cross sectional view) 195
  • 196. − On-Chip Lens (OCL) technology drastically enhances the light-collecting capability of a CCD imager by using micro-lenses aligned above each photo-sensor. − These micro-lenses converge and direct more of the incident light onto each photo-sensor. On-Chip Lens (OCL) Technology 196
  • 197. On-Chip Lens (OCL) Technology – The combination of HAD Sensor technology and this On-Chip Lens layer greatly enhances the imager’s sensitivity, thereby allowing shooting even in extremely low-light conditions. – The On-Chip Lens layer also plays a significant role in reducing vertical smear since converging the incoming light directly results in less light leaking into the CCD’s vertical register. 197
• 198. 198 Hyper HAD, Power HAD and Power HAD EX (EAGLE) CCD Sensors
  • 199. Reduced gaps between the micro lenses extend sensitivity further still. 199 Hyper HAD & Power HAD Sensor
• 200. – Internal lenses boost sensitivity further. – A thinner insulation film inside the image sensor cuts down on the opportunity for stray light reflections. This reduces vertical smear to a bare minimum. In the Power HAD EX CCD, an internal lens improves sensitivity while a thinner insulation film minimizes smear. 200 Power HAD EX (EAGLE) CCDs
  • 201. CCD and CMOS Image Sensors CCD and CMOS sensors perform the same steps, but at different locations, and in a different sequence. 201
• 202. Both CCD and CMOS sensors perform all of these steps. However, they differ as to where and in what sequence these steps occur. I. Light-to-charge conversion: In the photo-sensitive area of each pixel, light directed from the camera lens is converted into electrons that collect in a semiconductor "bucket." II. Charge accumulation: As more light enters, more electrons come into the bucket. III. Transfer: The signal must move out of the photosensitive area of the chip, and eventually off the chip itself. IV. Charge-to-voltage conversion: The accumulated charge must be output as a voltage signal. V. Amplification: The result of charge-to-voltage conversion is a very weak voltage that must be made strong before it can be handed off to the camera circuitry. CCD and CMOS Image Sensors 202
  • 203. – CMOS sensors have an amplifier at each pixel. – The charge is first converted to a voltage and amplified right at the pixel. 203 CMOS Image Sensors
• 204. – Voltage Generated on Surface of Photo Sensor – Like the Rising Water Level of a Bucket. Since the charge has a negative electrical value: • The downward direction indicates a high positive voltage. • The upward direction indicates a high negative potential. 204 CMOS Image Sensors
  • 205. – Signal Voltage Generated by Amplifier (Like a Floodgate that Controls the Water Level of a Canal). – The downward direction indicates a high positive voltage. – The upward direction indicates a high negative potential, since the charge has a negative electrical value. 205 CMOS Image Sensors
  • 206. 1. In modern CMOS sensors, each pixel (a), consists of a photo sensor (b), an amplifier (y), and a pixel select switch (e). 2. The CMOS pixel's photo sensor (b) converts light into electrons. 3. Since the charge accumulated in the photo sensor is too small to transfer through micro wires (f, i), the charge is first converted to a voltage and amplified right at the pixel by amplifier (y). 4. Any individual CMOS micro-wire can carry voltage from only one pixel at a time, as controlled by the pixel-select switch (e). This is different from the operation of a CCD image sensor, in which the charges of all pixels are transferred simultaneously into their respective vertical shift registers, and all these charges simultaneously move down the vertical shift registers. 206 CMOS Image Sensors
• 207. 5. In addition to the pixel-select switch, the column-select switch (g) and the column circuit (h) are also used to control the output of amplified voltages. • First, all the pixel-select switches on a given row (j) are turned ON. This action outputs the amplified voltages of each pixel to their respective column circuits, where they are processed into signal voltages and temporarily stored. • Then, the column-select switches (g) are turned ON from left to right. In this way, the signal voltages of the pixels in the same row are output in order. • By repeating this operation for all rows from top to bottom, the signal voltages of all pixels can be output from the top-left corner to the bottom-right corner of the image sensor. 6. These signal voltages are output to the signal processor of the camera. 207 CMOS Image Sensors
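The row-by-row readout sequence in steps 5 and 6 can be sketched as a simple nested loop. All names here are illustrative; in a real sensor this is done by the select switches in hardware:

```python
# Toy model of CMOS row-by-row readout: the pixel-select switches put one
# row's amplified voltages into the column circuits, then the column-select
# switches output them left to right, giving raster order.
def read_sensor(pixels):
    """`pixels` is a list of rows; returns the raster-order output stream."""
    output = []
    for row in pixels:                   # pixel-select ON for one row
        column_circuits = list(row)      # voltages held in column circuits
        for voltage in column_circuits:  # column-select ON, left to right
            output.append(voltage)
    return output

stream = read_sensor([[1, 2, 3],
                      [4, 5, 6]])
assert stream == [1, 2, 3, 4, 5, 6]      # top-left to bottom-right
```

This is exactly the contrast with a CCD: here only one row's worth of values moves at a time, whereas a CCD shifts all charges simultaneously.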
  • 208. Analog Noise – Where charge is transmitted in the form of an analog signal, the signal will pick up a certain degree of external noise during its travel. – Noise will increase in proportion to the travel distance. Fixed Pattern Noise – CMOS sensors have an amplifier at each pixel. – A CMOS sensor in a high-definition device, therefore, contains well over a million amplifiers. It would be unreasonable to expect that all of these amplifiers will be exactly equivalent, as a certain degree of disparity is inevitably introduced during the production process. – This non-uniformity among amplifiers results in a type of interference known as fixed pattern noise. – Unlike conventional video noise, which has a random behavior, fixed pattern noise creates a permanent, unwanted texture that can be especially visible in dark scenes. – Fortunately, this problem can be corrected by incorporating CDS (correlated double sampling) circuits to cancel this noise and restore the original signal. 208 Two Typical Noise Sources
  • 209. Conventional CMOS Sensor Correlated Double Sampling 209
  • 210. – Active-pixel CMOS sensors use a "reset switch“ in each pixel to drain the accumulated charge of the previous video field, in preparation for the next video field. – Unfortunately, the draining process is not perfect. Some electrons will always remain in the image sensing area. – These electrons represent switching noise, which can become part of the video signal. – Even worse, this noise is of the ‘fixed pattern’ type. Unlike conventional video noise, which has a random behavior, fixed pattern noise creates a permanent, unwanted texture that can be especially visible in dark scenes. – Modern CMOS sensors combat fixed pattern noise with Correlated Double Sampling. – CMOS image sensors conduct charge-to-voltage conversion twice for every pixel. Both of these voltages are also amplified. Correlated Double Sampling 210
• 211. Correlated Double Sampling can effectively suppress noise by literally subtracting the amplified voltage containing only noise from the amplified voltage containing both noise and the desired signal. 1. The reset switch drains the floating diffusion of the old accumulated charge that was used for the previous video field. 2. The amplifier converts the charge left in the floating diffusion, which represents only noise, into a voltage. 3. The accumulated charge in the photo sensor (during the active field exposure) transfers into the floating diffusion area. 4. The amplifier converts the second charge in the floating diffusion, which represents signal mixed with noise, into a second voltage. 5. The column circuit subtracts the noise-only voltage from the signal-mixed-with-noise voltage to produce an output voltage. As a result, fixed pattern noise can be effectively suppressed. Correlated Double Sampling 211
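The five steps above reduce to a single subtraction. A minimal numeric sketch, with arbitrary voltage values:

```python
# Sketch of correlated double sampling: the amplifier is sampled twice per
# pixel - once with only the reset (noise) level, once with signal plus the
# same correlated noise - and the column circuit subtracts the two samples.
def cds(noise_only_voltage, signal_plus_noise_voltage):
    return signal_plus_noise_voltage - noise_only_voltage

reset_noise = 0.07          # volts left after an imperfect reset (arbitrary)
signal = 0.85               # volts from the accumulated photo charge
output = cds(reset_noise, signal + reset_noise)
assert abs(output - signal) < 1e-12   # the correlated noise cancels
```

The cancellation works because the same noise value appears in both samples; purely random noise that differs between the two samples is not removed this way.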
  • 212. Correlated Double Sampling Cross section of one pixel in a HAD type CMOS image sensor 212
• 213. – The conventional method of CMOS analog-to-digital conversion maintains the signal in analog form in horizontal micro-wires that run across the bottom of the sensor. – Unfortunately, these analog signals are exposed to high-frequency switching noise that can degrade picture quality. – Digital signals have always been far more noise-resistant than analog. – By placing ADCs so close to each photo site, these sensors significantly reduce the signal's exposure to noise. – This can be done by placing one ADC for each column. Column Analog-to-Digital Converter 213 An array of analog-to-digital converters (ADCs), one for each column, can reduce noise in CMOS sensors
  • 214. 214 Exmor™ Noise Reduction Technology Analog CDS
• 215. Exmor™ Noise Reduction Technology – An A/D converter is installed next to each pixel column (column-parallel A/D conversion), so that the analog signals are almost immediately digitized. – The design also employs sophisticated digital CDS (Correlated Double Sampling) noise cancellation, which works by measuring the noise prior to conversion and then canceling the noise after the conversion. This system, which operates both before and after conversion, is much more precise than conventional analog-only CDS implementations. – As a result, camcorders with Exmor technology offer lower noise than those that use conventional HD CMOS sensors. – This is especially significant under low-light conditions, where Exmor-equipped cameras perform very well. 215
• 216. Digital CDS Used in Exmor CMOS Sensor By measuring the noise prior to conversion and then canceling the noise after the conversion 216 − The pixel outputs the amplified noise voltage. − The column ADC converts the noise voltage to digital. − The pixel outputs the amplified signal-with-noise voltage. − The column ADC converts the signal-with-noise voltage to digital. − The column ADC subtracts the digital noise value from the digital signal-with-noise value to create the digital output value.
• 217. Creating Multiple Outputs and Processing Speed Issue CCD image sensor with 2-channel Horizontal CCDs CMOS Image Sensor with 3-channel Outputs 217 – Creating multiple outputs in a CCD requires an increase in complexity and cost. – Multiple outputs on a CMOS sensor require only small, easy-to-manufacture micro wires.
• 218. Creating Multiple Outputs and Processing Speed Issue – In a CCD, when the exposure is complete, the signal from each pixel is serially transferred to a single A/D. – The CCD’s ultimate frame rate is limited by the rate at which individual pixels can be transferred and then digitized. – The more pixels to transfer in a sensor, the slower the total frame rate of the camera. – A CMOS chip eliminates this bottleneck by using an A/D for every column of pixels, which can number in the thousands. – The total number of pixels digitized by any one converter is significantly reduced, enabling shorter readout times and consequently faster frame rates. – One row of the sensor array is converted at a time. – This results in a small time delay between each row’s readout (because of the column switch operation). 218
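The readout-time advantage of column-parallel conversion can be quantified with a back-of-envelope calculation. The sensor dimensions below are assumed for illustration:

```python
# Compare how many conversions each ADC must perform per frame: a
# single-output CCD pushes every pixel through one converter, while a
# column-parallel CMOS sensor converts one whole row at a time, so each
# column ADC only handles as many conversions as there are rows.
rows, cols = 1080, 1920
pixels = rows * cols

conversions_per_adc_ccd = pixels        # one ADC serves the whole sensor
conversions_per_adc_cmos = rows         # one ADC per column, one per row step

speedup = conversions_per_adc_ccd / conversions_per_adc_cmos
assert speedup == cols                  # readout shortens by the column count
```

At equal conversion speed per ADC, the column-parallel design finishes a frame roughly `cols` times sooner, which is where the faster frame rates come from.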
  • 219. Rolling and Global Shutter 219
  • 220. Global Shutter – The “global shutter” is the technical term referring to sensors that scan the entire area of the image simultaneously (the entire frame is captured at the same instant). – The vast majority of CCD sensors employ global shutter scanning. – The pixels in a CCD store their charge until it has been depleted. – The CCD captures the entire image at the same time and then reads the information after the capture is completed, rather than reading top to bottom during the exposure. – Because it captures everything at once, the shutter is considered “global”. – The result is an image with no motion artifacts. – In Global Shutter mode, every pixel is exposed simultaneously at the same instant in time. This is particularly beneficial when the image is changing from frame to frame. 220
• 222. Rolling Shutter – If the sensor employs a rolling shutter, the image is scanned sequentially by the sensor, from one side of the sensor (usually the top) to the other, line by line. – Many CMOS sensors use rolling shutters (CMOS is also referred to as an APS (Active Pixel Sensor)). – A rolling shutter is always active and “rolling” through pixels from top to bottom. – The "rolling shutter" can be either mechanical or electronic. – The advantage of this method is that the image sensor can continue to gather photons during the acquisition process, thus effectively increasing sensitivity. – It produces predictable distortions of fast-moving objects or rapid flashes of light (the “Jello” effect). – The rolling shutter offers the advantage of fast frame rates. – Rolling shutter is not inherent to CMOS sensors: the Sony PMW-F55 and Blackmagic Design Production Camera 4K use CMOS sensors with global shutter circuitry. – Additional transistors per pixel are required for a CMOS sensor with a global shutter, at the cost of image quality, as the light-sensitive area on the surface is reduced. 222
• 223. Rolling Shutter Exposure: Row by Row Exposure Start/End Offset Rolling Shutter Diagram demonstrates the time delay between each row of pixels in a rolling shutter readout mode with a CMOS camera. Readout Time Readout Time Readout Time Readout Time Readout Time Readout Time Frame 1 223 Beginning of exposure of 1st row End of exposure of 1st row End of exposure of 2nd row Beginning of exposure of 2nd row • The delay between the exposures of two consecutive lines is always the same • The readout interval time is smaller than the exposure interval time. The delay between the exposures of two consecutive lines t
• 224. – Rather than waiting for an entire frame to complete readout in CMOS, to further maximize frame rates, each individual row is typically able to begin the next frame’s exposure once it completes the previous frame’s readout. – While fast, the time delay between each row’s readout then translates to a delay between each row’s beginning of exposure, making them no longer simultaneous. – The result is that each row in a frame will expose for the same amount of time but begin exposing at a different point in time, allowing overlapping exposures for two frames. – The ultimate frame rate is determined by how quickly the rolling readout process can be completed. – The exposure in the sensor occurs from the first line to the last line and then returns to the first line (for the next frame), and so on. – The delay between the exposures of two consecutive lines is always the same, whether these two exposures belong to the same frame or not. Rolling Shutter 224
• 225. 10 ms 10 ms 10 ms 10 ms 8.71 µs Rolling Shutter Diagram demonstrates the overlap of multiple exposures in a sequence with an sCMOS (Scientific CMOS) sensor running at 100 fps. For each line, the readout interval time is smaller than the exposure interval time. Each row is able to begin the next frame’s exposure once it completes the previous frame’s readout. 225 Row 1 Row 1080 Beginning of 1st exposure for 1st row Beginning of 1st exposure for 2nd row Beginning of 2nd exposure for 1st row Beginning of 2nd exposure for 2nd row Frame 2 Frame 3 Frame 1
• 226. Rolling Shutter 226 Frame readout time (10 ms) = Line readout time (8.7 μs) × Number of lines (1080) Frame rate = 1/(Frame readout time) = 1/10 ms = 100 fps 10 ms 10 ms 10 ms 10 ms 8.71 µs Row 1 Row 1080 Beginning of 1st exposure for 1st row Beginning of 1st exposure for 2nd row Beginning of 2nd exposure for 1st row Beginning of 2nd exposure for 2nd row Frame 2 Frame 3 Frame 1 – For a CMOS sensor in rolling shutter mode, the frame rate is determined by • the speed of the A/D (clocking frequency) • the number of rows on the sensor. – Example: for an A/D speed of 283 MHz with 1080 rows of pixels, a single line’s readout time, and consequently the delay between two adjacent rows, is approximately 8.7 μs. – This means that for each line, the readout interval time is smaller than the exposure interval time. – With 1080 rows, the exposure start and readout time delay from the top of the frame to the bottom is approximately 10 ms. – This also corresponds to a maximum frame rate of 100 frames per second (fps) and a minimum temporal resolution of 10 ms (at full frame).
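The arithmetic in this example can be checked directly. Note that 8.71 µs × 1080 lines is about 9.4 ms, which the slide rounds to 10 ms and 100 fps:

```python
# Rolling-shutter frame-rate arithmetic from the example: the frame readout
# time is the per-line readout time multiplied by the number of lines, and
# the maximum frame rate is its reciprocal.
line_readout_s = 8.71e-6
num_lines = 1080

frame_readout_s = line_readout_s * num_lines   # ~9.4 ms, quoted as ~10 ms
max_frame_rate = 1.0 / frame_readout_s         # ~106 fps, quoted as 100 fps

assert 0.009 < frame_readout_s < 0.010
assert 100 <= max_frame_rate <= 110
```

The same two inputs (line time and line count) also give the top-to-bottom exposure-start offset, which is what produces rolling-shutter distortion.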
• 227. Electronic Shutter Functioning with Rolling Shutter − A red square represents the reset of a pixel. − A green square represents a reading time. (a) For a 360◦ shutter value (b) For a 180◦ shutter value 227
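The 360° and 180° shutter values above map to exposure times as a fraction of the frame period. A minimal sketch, assuming a PAL-style 25 fps frame rate for the example values:

```python
# Shutter angle to exposure time: a 360-degree shutter exposes each line for
# the full frame period, a 180-degree shutter for half of it.
def exposure_time(shutter_angle_deg, frame_rate_fps):
    return (shutter_angle_deg / 360.0) / frame_rate_fps

assert abs(exposure_time(360, 25) - 1 / 25) < 1e-12   # full frame period
assert abs(exposure_time(180, 25) - 1 / 50) < 1e-12   # half the frame period
```

With a rolling shutter, this exposure window is the gap between each line's reset (red square) and its readout (green square), sliding down the frame line by line.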
• 228. EX: 1920x1080p60 HDTV imager − Swing of 1 Volt − In a CCD with 1–2 analog outputs, pixels are processed on a 7–14 ns scale. − In a CMOS imager, each pixel is processed on the column on a 16 µs scale. That is why • a high frame rate is much easier with a CMOS imager • the read noise can be intrinsically lower (16 µs to average the noise) • sharpness is much better • HDR can be done too, but that is in the pixel − To minimize poor lighting issues, we need an imager with a high dynamic range, and we also need signal processing which uses the additional dynamic range in the best way possible. HFR and HDR in CMOS Sensor HDR-Transfer curve at 3200K 228
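The per-pixel and per-column time scales quoted above follow from the pixel rate of a 1080p60 raster; a quick sanity check:

```python
# Time-scale sanity check for a 1920x1080p60 imager: a serial (CCD-style)
# single output must handle one pixel every ~8 ns, while a column-parallel
# CMOS design only needs to process one row per ~15.4 us per column ADC.
width, height, fps = 1920, 1080, 60
pixel_rate = width * height * fps              # ~124.4 Mpixel/s

per_pixel_one_output = 1.0 / pixel_rate        # ~8 ns per pixel, serial
per_row_column_parallel = 1.0 / (height * fps) # ~15.4 us per row, parallel

assert 7e-9 < per_pixel_one_output < 9e-9
assert 15e-6 < per_row_column_parallel < 16e-6
```

The roughly three-orders-of-magnitude longer processing window per sample is what allows the column circuits to average noise and makes high frame rates easier, as the slide states.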
• 229. Example: 4K Xensium HAWK CMOS Sensor (Grass Valley) – The 3840x2160p 4K Xensium HAWK CMOS imager offers a unique pixel technology called DPM (dynamic pixel management). – With DPM, the camera provides native 1920x1080 HD acquisition (by combining two horizontal and two vertical adjacent pixels) without the intrinsic downsides of 4K acquisition, such as rolling-shutter artifacts and decreased sensitivity, while delivering native 4K crispness when needed. 229
  • 230. Example: 4K Xensium HAWK CMOS Sensor (Grass Valley) 230
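The "combining two horizontal and two vertical adjacent pixels" described above amounts to 2x2 binning of the 4K raster down to HD. A minimal sketch of that idea follows; it is illustrative only and not Grass Valley's actual DPM implementation, whose details are not given here:

```python
# 2x2 binning sketch: merge each 2x2 block of a 4K frame into one
# HD pixel (illustrative; not the vendor's actual DPM circuit).
def bin2x2(frame):
    """Sum each 2x2 block of a 2D list -> half-resolution frame."""
    h, w = len(frame), len(frame[0])
    return [[frame[y][x] + frame[y][x + 1] + frame[y + 1][x] + frame[y + 1][x + 1]
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

# Tiny uniform test frame; a real 3840x2160 input would bin to 1920x1080.
uhd = [[1] * 8 for _ in range(4)]
hd = bin2x2(uhd)
print(len(hd[0]), len(hd), hd[0][0])  # half width, half height, 4x the charge
```

Summing four photosites per output pixel is also why binned HD readout gains sensitivity relative to native 4K readout.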
  • 231. Geometric Distortion in CMOS Sensor – Experienced video shooters often test this phenomenon by rapidly panning the camera back and forth past the legs of a table. – A distorted image will show "wobbly legs." – CMOS sensors accumulate charges and read them out one line at a time. This can create geometric distortion when there is relative motion between the camera and the subject (rolling shutter). – Image distortion in a CMOS camera can make a moving car appear to lean backwards. 231
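The amount of "lean" can be estimated from the top-to-bottom readout delay and the pan speed; the numbers below are assumed for illustration, not taken from the slide:

```python
# Rough rolling-shutter skew estimate: how far a vertical edge shifts
# horizontally between the top and bottom rows during a pan.
# (Illustrative numbers; only the ~10 ms readout delay comes from the deck.)
import math

frame_readout_s = 0.010        # top-to-bottom readout delay (~10 ms)
pan_speed_px_s = 2000.0        # assumed horizontal image motion during the pan
frame_height_px = 1080

skew_px = pan_speed_px_s * frame_readout_s            # horizontal offset at bottom row
lean_deg = math.degrees(math.atan2(skew_px, frame_height_px))
print(f"skew ~ {skew_px:.0f} px, lean ~ {lean_deg:.1f} deg")
```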
  • 232. Noise: Now both technologies are capable of clean, low-noise imagery. Vertical Smear: Today, both CCD and CMOS technologies routinely produce smear-free images. Power Supply: CMOS sensors have lower power consumption. • CCD 7 to 10 V • CMOS 3.3 to 5.3 V Processing Speed: Higher pixel counts and faster frame rates both place stringent new requirements on image sensor processing speed. Here, CMOS image sensors can offer advantages. Systemization: Typical integrated circuits use the same Metal Oxide Semiconductor (MOS) substrate as CMOS image sensors. This makes it relatively easy to add functions to the CMOS chip, such as column analog-to-digital converters. Geometric Distortion: CMOS sensors accumulate charges and read them out one line at a time. This can create geometric distortion when there is relative motion between the camera and the subject (rolling shutter). Image Sensors Comparison 232
  • 233. Advantages of CCD: 1. High image quality 2. Low spatial noise (FPN) 3. Typically low dark-current noise 4. High fill factor (ratio of the photosensitive area to the whole pixel area), generally through larger pixels 5. Perfect global shutter – Increased sensitivity – Good signal quality at low light 6. Modern CCDs with multi-tap technologies – n times the readout speed of single-tap sensors Advantages of CMOS: 1. High frame rates, even at high resolution 2. Faster and more flexible readout (e.g., several AOIs: Areas of Interest) 3. High dynamic range or HDR mode (acquisition of contrast-rich and extremely bright objects) 4. No blooming or smear, unlike CCD 5. Integrated control circuitry on the sensor chip 6. More cost-effective and lower power consumption than comparable CCD sensors Image Sensors Comparison 233
  • 234. Conclusion At the current state of development, CMOS and CCD sensors both deserve a place in broadcast and professional video cameras. – CMOS is particularly outstanding where issues of power consumption, systemization and processing speed are most important. – CCDs excel where the images will be subjected to the most critical evaluation. – Recent CMOS sensors deliver: • Improved global shutter • Low dark and spatial noise • Good image quality in low-light conditions • Higher quantum efficiency Together with their existing advantages in speed and cost, this makes CMOS sensors suitable for many vision applications. 234
  • 235. ClearVid CMOS Sensor With any given sensor technology, there is a tradeoff between pixel size and pixel quality. • Larger pixels mean better sensitivity, dynamic range and signal-to-noise ratio. • Smaller pixels mean higher resolution. Sony's ClearVid CMOS Sensor is a way to overcome this tradeoff. – Typical image sensors provide a one-to-one relationship of image sensor photosites to camera pixels. – In this way, a 1920x1080 camera image usually requires a 1920x1080 sensor – slightly over 2 million photosites. – In contrast, a ClearVid CMOS Sensor can achieve almost the same resolution using only half the number of pixels. Using half the pixels means that the photosites can be twice as large, for improved sensitivity, dynamic range and signal-to-noise ratio. 235
  • 236. In the ClearVid CMOS Sensor, the pixels are turned 45 degrees to form a diamond sampling pattern, instead of the usual vertical and horizontal grid. • Half the pixel information is supplied directly from the CMOS photosites. • The other half of the pixel information is interpolated with very high quality, based on information drawn from four adjacent photosites. – This interpolation occurs outside the image sensor, in Sony's Enhanced Imaging Processor large-scale integrated circuit. – The result is very high spatial resolution combined with very high performance in sensitivity, dynamic range and signal-to-noise ratio. 236 ClearVid CMOS Sensor Pixel Arrangement of the 3 ClearVid CMOS Sensor
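The "interpolated from four adjacent photosites" step can be sketched as follows. A plain average of the four diagonal neighbours is assumed here purely for illustration; Sony's Enhanced Imaging Processor uses a more sophisticated method whose details are not public in this deck:

```python
# ClearVid-style interpolation sketch: estimate a missing grid position
# from its four diagonal photosite neighbours (plain average assumed;
# not Sony's actual EIP algorithm).
def interpolate_missing(photosites, y, x):
    """Average the four diagonal neighbours of position (y, x)."""
    neighbours = [photosites[y - 1][x - 1], photosites[y - 1][x + 1],
                  photosites[y + 1][x - 1], photosites[y + 1][x + 1]]
    return sum(neighbours) / len(neighbours)

# 3x3 patch of the diamond grid: corners are measured photosites,
# the centre is a position to be interpolated.
patch = [[10, 0, 20],
         [0,  0, 0],
         [30, 0, 40]]
print(interpolate_missing(patch, 1, 1))  # 25.0
```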
  • 237. 237 Comparison of Pixel Line Widths Comparison of Pixel Areas ClearVid CMOS Sensor
  • 238. 238 Compared Against Conventional Sensor Array (1) Compared Against Conventional Sensor Array (2) ClearVid CMOS Sensor
  • 239. 239 Interpolation Processing on the 3 ClearVid CMOS Sensor (1) Interpolation Processing on the 3 ClearVid CMOS Sensor (2) ClearVid CMOS Sensor