DIGITAL IMAGE PROCESSING
by
Dr Noorullah Shariff C
MODULE-I
INTRODUCTION
 Digital image processing is the manipulation of digital images by means of a computer.
 It has two main application areas:
o Improvement of pictorial information for human interpretation, and
o Processing of image data for storage, transmission and representation for autonomous
machine perception.
What is Digital Image Processing
 Digital image processing is the manipulation of digital images by means of a computer.
 An image contains descriptive information about the object that it represents.
 It can be defined mathematically as a 2-dimensional function or signal f(x,y)
o where x and y are the spatial coordinates, and
o f is called the intensity or gray level of the image at that point.
 A digital image has all three parameters x, y and f as finite and discrete quantities.
 It has a finite number of elements, each of which has a particular location and value.
o These elements are called picture elements, image elements, pels or pixels.
 Humans are limited to the visual band of the electromagnetic (EM) spectrum.
o But imaging machines cover almost the entire EM spectrum, ranging from gamma rays to
radio waves.
o Such machines can also operate on images generated by sources such as ultrasound,
electron microscopy and computer-generated images.
 Continuum from Image Processing to Computer Vision
Origin of Digital Image Processing
 Newspaper industry
– Bartlane cable picture transmission system across the Atlantic (1920s)
– Superseded by photographic reproduction from tapes perforated at telegraph receiving terminals
– Early images could be coded in five distinct levels of gray, improved to 15 levels in 1929
 1960s: Improvements in computing technology and the onset of the space race led to a surge
of work in digital image processing
 1964: Computers used to improve the quality of images of the moon taken by the Ranger 7
probe
 Such techniques were used in other space missions including the Apollo landings
 1970s: Digital image processing begins to be used in medical applications
 1979: Sir Godfrey N. Hounsfield & Prof. Allan M. Cormack share the Nobel Prize in medicine
for the invention of tomography, the technology behind Computerized Axial Tomography (CAT)
scans
 1980s -Today: The use of digital image processing techniques has exploded and they are now
used for all kinds of tasks in all kinds of areas
– Image enhancement/restoration
– Artistic effects
– Medical visualisation
– Industrial inspection
– Law enforcement
– Human computer interfaces
Examples of Fields that Use Digital Image Processing
Classification of images based on the source of energy, ranging from gamma rays at one end to radio
waves at the other
> Viewing images in non-visible bands of the electromagnetic spectrum, as well as in other energy sources
such as acoustic, ultrasonic, and electronic
> Gamma-ray imaging
– Nuclear medicine
 Inject a patient with a radioactive isotope that emits gamma rays as it decays
 Used to locate sites of bone pathology such as infections or tumors
– Positron emission tomography (PET scan) to detect tumors
 Similar to CAT
 Patient is given a radioactive isotope that emits positrons as it decays
 When a positron meets an electron, both are annihilated giving off two gamma rays
– Astrophysics
 Studying images of stars and other celestial objects that emit natural gamma radiation
– Nuclear reactors
 Looking for gamma radiation from valves
> X-ray imaging
– Medical and industrial applications
– Generated using an X-ray tube – a vacuum tube with a cathode and an anode
 Cathode is heated causing free electrons to be released
 Electrons flow at high speed to positively charged anode
 Upon the electrons’ impact with a nucleus, energy is released in the form of X-ray radiation
 Energy captured by a film sensitive to X-rays
– Angiography or contrast-enhanced radiography
 Used to obtain images or angiograms of blood vessels
 A catheter is inserted into an artery or vein in the groin
 Catheter threaded into the blood vessel and guided to the area to be studied
 An X-ray contrast medium is injected into the catheter tube
 Enhances the contrast of blood vessels and enables radiologists to see any irregularities or
blockages
> Imaging in ultraviolet band
– Lithography, industrial inspection, microscopy, lasers, biological imaging
– Fluorescence microscopy
 The mineral fluorspar fluoresces when UV light is directed upon it
 UV light by itself is not visible but when a photon of UV radiation collides with an electron in an
atom of a fluorescent material, it elevates the electron to a higher energy level
 The excited electron relaxes and emits light in the form of a lower energy photon in the visible
light region
 A fluorescence microscope uses excitation light to irradiate a prepared specimen and then
separates the much weaker radiating fluorescent light from the brighter excitation light
 Only the emission light reaches the sensor
 Resulting fluorescing areas shine against a dark background with sufficient contrast to permit
detection
– Astronomy
> Visible and IR band
– Remote sensing, law enforcement
– Thematic bands in satellite imagery
– Multispectral and hyperspectral imagery (Fig. 1.10)
– Weather observation and monitoring
– Target detection
> Imaging in microwave band
– Radar
> Imaging in radio band
– Medicine (MRI) and astronomy
> Other imaging modalities
– Acoustic imaging (ultrasound), electron microscopy.
Fundamental steps in Digital Image Processing
1. Image Acquisition – This step involves acquiring an image in digital form and preprocessing it,
such as by scaling.
2. Image Enhancement – It is the process of manipulating an image so that the result is
more suitable than the original for a specific application, e.g. filtering, sharpening,
smoothing, etc.
3. Image Restoration – These techniques also deal with improving the appearance of an image.
But, as opposed to image enhancement, which is based on human subjective preferences
regarding what constitutes a good enhancement result, image restoration is based on
mathematical or probabilistic models of image degradation.
4. Color Image Processing – Color processing in the digital domain is used as the basis for extracting
features of interest in an image.
5. Wavelets – They are used to represent images in various degrees of resolution.
6. Image Compression – These techniques deal with reducing the storage required to save an
image, or the bandwidth required to transmit it.
7. Image Segmentation – These procedures partition an image into its constituent parts or objects.
The output is raw pixel data, constituting either the boundary of a region or all the points in the
region itself.
Components of an Image Processing system
1. Image Sensors – Two elements are required to acquire a digital image:
a. A physical device that is sensitive to the energy radiated by the object we wish to image,
and
b. A digitizer for converting the output of the sensor into digital form.
2. Specialized Image Processing Hardware – It must perform primitive operations like arithmetic
and logic operations in parallel on the entire image. Its most distinguishing characteristic is
speed, which a typical general-purpose computer cannot match.
3. Computer – This is required for primitive operations and can be a general purpose PC.
4. Software – It consists of specialized modules that perform specific tasks.
5. Mass Storage – Short-term storage is used during processing, online storage is used for relatively
fast recall, and archival storage is used for infrequent access.
6. Image Displays – These are monitors used to view the result of a processing task.
7. Networking – It is required to transmit images.
ELEMENTS OF VISUAL PERCEPTION
 Human intuition and analysis play a central role in the choice of image processing techniques,
based on subjective and visual judgments.
 So, it is necessary to understand how human and electronic imaging devices compare in terms
of resolution and ability to adapt to changes in illumination.
Structure of human eye
 The eye is nearly a sphere, with an average diameter of approximately 20 mm.
 Three membranes enclose the eye:
o The cornea and sclera outer cover,
o The choroid, and
o The retina.
 The innermost membrane of the eye is the retina, which lines the inside of the wall’s entire
posterior portion.
 When the eye is properly focused, light from an object outside the eye is imaged on the
retina.
 Discrete light receptors are distributed over the surface of the retina. They are of two types:
cones and rods.
 Cones are located primarily in the central portion of the retina, called the fovea.
o They are highly sensitive to color.
o Each one is connected to its own nerve end.
o This helps a person in resolving fine details.
o Muscles controlling the eye rotate the eyeball until the image of an object falls on
the fovea.
o Cone vision is called photopic or bright-light vision.
 Rods are distributed over the retinal surface.
o Several rods are connected to a single nerve end.
o This fact, along with the larger area of distribution, reduces the amount of detail
discernible by these receptors.
o Rods give a general, overall picture of the field of view.
o They are not involved in color vision and are sensitive to low levels of illumination.
o Rod vision is called scotopic or dim-light vision.
Image formation in the eye
o In the human eye, the distance between the lens and the retina (imaging region) is fixed, as
opposed to an ordinary photographic camera (where the distance between the lens and film
(imaging region) is varied).
o The focal length needed to achieve proper focus is obtained by varying the shape of the lens,
with the help of fibers in the ciliary body.
o The distance between the center of the lens and the retina along the visual axis is
approximately 17 mm, and the range of focal lengths is approximately 14 mm to 17 mm.
o The retinal image is focused primarily on the region of the fovea.
o Perception then takes place by the relative excitation of light receptors, which transform radiant
energy into electrical impulses that are ultimately decoded by the brain.
Brightness adaptation and discrimination
o The range of light intensity levels to which the human visual system can adapt is
enormous – of the order of 10^10 – from the scotopic threshold to the glare limit.
 Subjective brightness (intensity as perceived by the human visual system) is a
logarithmic function of the light intensity incident on the eye.
 However, the total range of distinct intensity levels the eye can discriminate
simultaneously is rather small when compared with the total adaptation range.
 For any given set of conditions, the current sensitivity level of the visual
system is called the brightness adaptation level.
 Brightness discrimination is poor at low levels of illumination, and it improves
significantly as background illumination increases.
 At low levels of illumination, vision is carried out by the rods, whereas at high
levels, vision is carried out by the cones.
o Weber ratio
 It is a measure of contrast-discrimination ability.
 Let the background intensity be I.
 ∆Ic is the increment of illumination that is just visible against the background intensity I.
 The Weber ratio is given by ∆Ic/I.
 A small value of ∆Ic/I implies that a small percentage change in intensity is
visible, representing good brightness discrimination.
 A large value of ∆Ic/I implies that a large percentage change is required
for discrimination, representing poor brightness discrimination.
 Typically, brightness discrimination is poor at low levels of illumination and
improves at higher levels of background illumination.
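As a rough numerical illustration of the Weber ratio (the values below are hypothetical, chosen only to show the computation, and are not measured data), a short Python sketch:

    # Hypothetical example: background intensity I and the just-noticeable
    # increment delta_Ic measured at two different illumination levels.
    def weber_ratio(delta_Ic, I):
        """Return the Weber ratio delta_Ic / I (smaller = better discrimination)."""
        return delta_Ic / I

    # Dim background: a large relative change is needed before it becomes visible.
    print(weber_ratio(delta_Ic=2.0, I=10.0))    # 0.2  -> poor discrimination
    # Bright background: a small relative change is already visible.
    print(weber_ratio(delta_Ic=2.0, I=100.0))   # 0.02 -> good discrimination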
 Mach bands
o A scalloped brightness pattern is perceived near the boundaries between stripes of constant
intensity, even though the actual intensity within each stripe is constant.
o The bars themselves are useful for calibration of display equipment.
 Simultaneous contrast
o A region’s perceived brightness does not depend simply on its intensity: a lighter
background makes an object appear darker, while a darker background makes the
same object appear brighter.
 Optical illusions
Light and Electromagnetic (EM) spectrum.
 The EM spectrum ranges from gamma rays to radio waves.
 The visible part of the EM spectrum ranges from violet to red.
 The part of the spectrum before the violet end is called the ultraviolet, and that after the red
end is called the infrared.
o Visible spectrum
 0.43µm (violet) to 0.79µm (red)
 VIBGYOR regions
 Colors are perceived because of light reflected from an object
o Absorption vs reflectance of colors
 An object appears white because it reflects all colors equally
o Light that is void of color is called monochromatic or achromatic light.
o Its only attribute is its intensity.
o Intensity specifies the amount of light.
o Since the intensity of monochromatic light is perceived to vary from black to gray
shades and finally to white, the monochromatic intensity is called the gray level.
o The range of measured values of monochromatic light from black to white is
called the gray scale, and the corresponding monochromatic images are called gray-scale
images.
o Chromatic light spans the EM spectrum from 0.43 μm (violet end) to 0.79 μm (red
end).
o The quantities that describe the quality of a chromatic light source are:
o Frequency – f = c/λ, where c is the speed of light and λ is the wavelength.
o Radiance – it is the amount of energy that flows from the light source and is
measured in watts.
o Luminance – it is the amount of energy that an observer perceives from the light source and is
measured in lumens.
o Brightness – it is similar to intensity in the achromatic notion, and cannot be
measured.
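A quick worked example of the frequency relation f = c/λ (a sketch; the wavelength is just a sample value from the visible range):

    # Frequency of green light at a wavelength of 0.55 micrometres, using f = c / lambda.
    c = 3.0e8             # speed of light in m/s (approximate)
    wavelength = 0.55e-6  # 0.55 micrometres, expressed in metres
    f = c / wavelength
    print(f"{f:.2e} Hz")  # roughly 5.45e14 Hz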
Image sensing and acquisition
 Images are generated by the combination of
o an illumination source, and
o the reflection/absorption of energy from that source by the elements of the scene being
imaged.
 The principal sensor arrangements used to transform illumination energy into digital images
are:
1. Single imaging sensor
2. Line sensor
3. Array sensor
1. Single imaging sensor
 This can be a photodiode, which is constructed of silicon material and whose output
voltage is proportional to the amount of light falling on it.
 The use of a filter in front of the sensor improves selectivity.
 For e.g. a green filter results in a stronger sensor output for green light than for the other
components of the visible spectrum.
 In order to generate a 2-D image using a single imaging sensor, there has to be
relative displacement in both the x- and y-directions between the sensor and the area to
be imaged.
2. Line sensor
 It consists of an in-line arrangement of sensors in the form of a sensor strip, which
provides imaging elements in one direction.
 Movement perpendicular to the strip provides imaging in the other direction.
 This arrangement is used in most flat-bed scanners.
 Sensing devices with 4000 or more in-line sensors are possible.
 Applications of line sensors include airborne imaging of geographical areas, medical
and industrial imaging to obtain 3-D images of objects, etc.
3. Array sensor
 Here the single imaging sensors are arranged in the form of a 2-D array.
 As the sensor array is 2-dimensional, a complete image can be obtained by focusing the
energy pattern onto the surface of the array; hence, movement in any direction is
not necessary.
 This arrangement is found in digital cameras, which have CCD sensors packed in arrays of
4000 X 4000 elements or more.
Image Acquisition
 The response of each sensor is proportional to the integral of the light energy projected onto the
surface of the sensor.
 The sensor integrates this energy over minutes or even hours to reduce the noise.
Acquisition process
 The energy from an illumination source is reflected from the scene element being imaged (or
the energy may be transmitted through the scene element, depending on the nature of its
surface).
 The imaging system collects this reflected/transmitted energy and focuses it onto the image
plane.
 The front end of the imaging system projects the viewed scene onto the lens focal plane.
 The sensor array, which is coincident with the focal plane, produces outputs proportional to the
integral of the light received at each sensor; these outputs are then digitized, resulting in a digital output
image.
Image Sampling and Quantization
 The output of the sensors is a continuous voltage that has to be digitized.
 This involves two processes – sampling and quantization.
o Sampling: Digitizing the coordinate values
 It may be viewed as partitioning the x-y plane into a grid of M rows and N columns,
with the coordinates of the center of each cell in the grid being a pair from the
Cartesian product Z².
o Quantization: Digitizing the amplitude values
o A digital image is always an approximation of the original continuous image.
Image Representation
o As a 3-D plot of (x, y, z), where x and y are planar coordinates and z is the value of f at (x,y)
o As the intensity of each point, expressed as a real number in the interval [0,1]
o As a set of numerical values in the form of an M × N matrix
 We may even represent the matrix as a vector of size MN × 1 by reading
each row one at a time into the vector
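A minimal NumPy sketch of these last two representations (the array values are arbitrary; the point is the M × N matrix form and its MN × 1 vector form):

    import numpy as np

    # A small 3 x 4 gray-scale image represented as a matrix f(x, y).
    f = np.array([[ 10,  20,  30,  40],
                  [ 50,  60,  70,  80],
                  [ 90, 100, 110, 120]])
    M, N = f.shape

    # Row-by-row reading of the matrix into a single MN x 1 column vector.
    v = f.reshape(M * N, 1)
    print(v.shape)                      # (12, 1)
    print(f[1, 2], v[1 * N + 2, 0])     # same pixel in both representations: 70 70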
Let f(s,t) represent a continuous image function of two continuous variables s and t. To convert it to a
digital image,
1. Sampling –
 We sample the continuous image into a 2-D array f(x,y)
 containing M rows and N columns, where (x,y) are discrete coordinates taking
integer values x = 0,1,…,M-1 and y = 0,1,…,N-1.
 So f(0,1) indicates the second sample along the first row.
2. Quantization –
 The values of the above samples span a continuous range of intensity values, and must be
converted to discrete quantities.
 This is done by dividing the entire continuous intensity scale into L discrete intervals,
ranging from black to white, where black is represented by the value 0 and white by L-1.
 Depending on the proximity of a sample to one of these L levels, the continuous intensity
levels are quantized.
 In addition to the number of discrete levels used, the accuracy achieved in quantization is
highly dependent on the noise content of the sampled signal.
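The sketch below (NumPy assumed; the synthetic "continuous" image is an arbitrary illustrative choice) mimics the two steps: sampling f(s,t) on an M × N grid and uniformly quantizing the samples into L gray levels:

    import numpy as np

    def f(s, t):
        """Synthetic 'continuous' image with intensities in [0, 1)."""
        return 0.5 * (1 + np.sin(2 * np.pi * s) * np.cos(2 * np.pi * t)) * 0.999

    # Sampling: evaluate f on an M x N grid of discrete coordinates.
    M, N = 8, 8
    s, t = np.meshgrid(np.linspace(0, 1, M), np.linspace(0, 1, N), indexing="ij")
    samples = f(s, t)                       # still continuous-valued

    # Quantization: map the continuous range [0, 1) onto L = 2**k discrete levels 0 .. L-1.
    k = 3
    L = 2 ** k
    digital = np.floor(samples * L).astype(np.uint8)

    print(digital.min(), digital.max())     # all values lie between 0 and L-1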
Digitization Process
 The digitization process requires that decisions be made regarding values for M, N and L.
 Due to internal storage and quantizing hardware considerations, the number of intensity
levels is typically an integer power of 2, i.e., L = 2^k, and the image is referred to as a k-bit
image.
 For e.g. an image with 256 intensity levels is called an 8-bit image. The number of bits required
to store a digitized k-bit image is b = M × N × k.
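A small worked example of the storage formula b = M × N × k (the image size is an assumption chosen for illustration):

    # Storage required for a 1024 x 1024 image quantized with k = 8 bits (L = 256 levels).
    M, N, k = 1024, 1024, 8
    L = 2 ** k                 # number of intensity levels
    b = M * N * k              # total number of bits
    print(L)                   # 256
    print(b)                   # 8388608 bits
    print(b // 8)              # 1048576 bytes (1 MiB)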
Digital Image Fundamentals Terminology
 Dynamic Range – It is the ratio of the maximum measurable intensity to the minimum detectable
intensity level in the imaging system. It establishes the lowest and highest intensity levels
that an image can have.
 The upper limit is determined by saturation. It is the highest value beyond which all
intensity levels are clipped. In an image, the entire saturated area has a high
constant intensity level.
 The lower limit is determined by noise. Especially in the darker regions of an image, the
noise masks the lowest detectable true intensity level.
 Contrast – It is the difference in intensity between the highest and lowest intensity levels in an
image. For example, a low-contrast image has a dull, washed-out gray look.
 Spatial resolution –
 It is a measure of the smallest detectable detail in an image.
 It is measured as line pairs per unit distance, or dots or pixels per unit distance
(dots per inch, or DPI).
 For example, newspapers are printed with a resolution of 75 DPI, whereas
textbooks are printed with a resolution of 2400 DPI.
 For the same scene, a lower-resolution image contains fewer pixels and therefore shows less detail.
 Intensity Resolution –
 It is a measure of the smallest detectable change in intensity level.
 It refers to the number of bits used to quantize intensity.
 For example, an image whose intensity is quantized into 256 levels is said to have
8 bits of intensity resolution.
• Image interpolation
– Basic tool used extensively in tasks such as zooming, shrinking, rotating and geometric corrections
– Process of using known data to estimate values at unknown locations
– Enlarge an image of size 500 × 500 pixels by 1.5 times to 750 × 750 pixels
∗ Create an imaginary 750 × 750 pixel grid with same pixel spacing as original
∗ Shrink it so that it fits over the original image exactly
∗ Assign the intensity of the nearest pixel in the 500 ×500 pixel image to the pixel in the 750 ×750 pixel
image
∗ After assignment, expand the grid to its original size
– This method is known as nearest neighbor interpolation
– Zooming and shrinking are considered image resampling methods
∗ Zooming ⇒ oversampling
∗ Shrinking ⇒ undersampling
– Zooming
∗ Create new pixel locations
∗ Assign gray levels to these pixel locations
∗ Pixel replication
· Special case of nearest neighbor interpolation
· Applicable when size of image is increased by an integer number of times
· New pixels are exact duplicates of the old ones
∗ Nearest neighbor interpolation
· Assign the gray scale level of nearest pixel to new pixel
· Fast but may produce severe distortion of straight edges, objectionable at high magnification
levels
∗ Better to use bilinear interpolation, which works on a pixel neighborhood
∗ Bilinear interpolation
· Use four nearest neighbors to estimate intensity at a given location
· Let (r,c) denote the coordinates of the location to which we want to assign an intensity value v(r,c)
· Bilinear interpolation yields the intensity value as
v(r,c) = a1·r + a2·c + a3·r·c + a4
where the four coefficients a1 … a4 are determined from the four equations in four unknowns that can be
written using the four nearest neighbors of point (r, c)
· Better results with a modest increase in computing
∗ Bicubic interpolation
· Use 16 nearest neighbors of a point
· Intensity value for location (r,c) is given by
v(r,c) = Σ (i = 0 to 3) Σ (j = 0 to 3) aij · r^i · c^j
where the 16 coefficients aij are determined from the 16 equations in 16 unknowns that can be
written using the 16 nearest neighbors of point (r, c)
· Bicubic interpolation reduces to the bilinear form by limiting the two summations from 0 to 1
· Bicubic interpolation does a better job of preserving fine detail compared to bilinear
· Standard used in commercial image editing programs
∗ Other techniques for interpolation are based on splines and wavelets
– Shrinking
∗ Done similar to zooming
∗ The equivalent of pixel replication is row-column deletion
∗ Aliasing effects can be reduced by slightly blurring the image before shrinking it
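A minimal NumPy sketch of nearest-neighbor and bilinear zooming; it follows the grid-mapping idea described above, but the function names and the tiny test image are illustrative choices, not part of the notes:

    import numpy as np

    def zoom_nearest(img, new_rows, new_cols):
        """Nearest-neighbor interpolation: each output pixel copies its closest input pixel."""
        rows, cols = img.shape
        r = (np.arange(new_rows) * rows / new_rows).astype(int)   # output row -> input row
        c = (np.arange(new_cols) * cols / new_cols).astype(int)   # output col -> input col
        return img[r[:, None], c[None, :]]

    def zoom_bilinear(img, new_rows, new_cols):
        """Bilinear interpolation: weight the 4 nearest input pixels of each output location."""
        rows, cols = img.shape
        r = np.linspace(0, rows - 1, new_rows)
        c = np.linspace(0, cols - 1, new_cols)
        r0, c0 = np.floor(r).astype(int), np.floor(c).astype(int)
        r1, c1 = np.minimum(r0 + 1, rows - 1), np.minimum(c0 + 1, cols - 1)
        fr, fc = (r - r0)[:, None], (c - c0)[None, :]
        top = img[r0][:, c0] * (1 - fc) + img[r0][:, c1] * fc      # interpolate along columns
        bot = img[r1][:, c0] * (1 - fc) + img[r1][:, c1] * fc
        return top * (1 - fr) + bot * fr                           # then along rows

    img = np.arange(16, dtype=float).reshape(4, 4)    # tiny 4 x 4 test image
    print(zoom_nearest(img, 6, 6).shape)              # (6, 6)
    print(zoom_bilinear(img, 6, 6).shape)             # (6, 6)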
Basic Relationships between pixels
1. Neighbors of a pixel – we consider the following subset of pixels of an image:
L A M
C P D
N B O
Let the position of pixel P be the coordinates (x,y). Then it has two horizontal neighbors and two vertical
neighbors:
Horizontal neighbors : C : (x,y-1)
D : (x,y+1)
Vertical neighbors : A : (x-1,y)
B : (x+1,y)
These horizontal and vertical neighbors are together called the 4-neighbors of P, and the set is
denoted by N4(P) = {A, B, C, D} = {(x-1,y), (x+1,y), (x,y-1), (x,y+1)}
If P is on the border of the image, some of the neighbors may not exist.
P also has four diagonal neighbors:
L : (x-1,y-1)
M : (x-1,y+1)
N : (x+1,y-1)
O : (x+1,y+1)
This set is denoted by ND(P) ={L, M, N, O}.
All the above are together called the 8-neighbors of P, and are denoted by N8(P). So
N8(P) = N4(P) U ND(P)
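A small sketch (the function names are illustrative) that lists the 4-, diagonal and 8-neighbors of a pixel, dropping coordinates that fall outside an M × N image:

    def n4(x, y, M, N):
        """4-neighbors: the two vertical and two horizontal neighbors of (x, y)."""
        cand = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
        return [(i, j) for i, j in cand if 0 <= i < M and 0 <= j < N]

    def nd(x, y, M, N):
        """Diagonal neighbors of (x, y)."""
        cand = [(x - 1, y - 1), (x - 1, y + 1), (x + 1, y - 1), (x + 1, y + 1)]
        return [(i, j) for i, j in cand if 0 <= i < M and 0 <= j < N]

    def n8(x, y, M, N):
        """8-neighbors: union of the 4-neighbors and the diagonal neighbors."""
        return n4(x, y, M, N) + nd(x, y, M, N)

    print(n4(0, 0, 5, 5))        # border pixel: only [(1, 0), (0, 1)] exist
    print(len(n8(2, 2, 5, 5)))   # interior pixel: all 8 neighbors exist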
2. Adjacency – let V be the set of intensity values used to define adjacency. For example, if we
consider a binary image which has only two intensity values, 0 (representing black) and 1
(representing white), then V can be chosen as V = {1}.
Let V = {1} for the following definitions. Consider the following arrangement of pixels (binary image):
0 0 0 0 0
0 0 1(B) 1(C) 0
1(D) 0 1(A) 0 0
1(E) 0 0 0 0
0 1(F) 1(G) 0 0
 4-adjacency – two pixels p and q with values from set V are said to be 4-adjacent if q is in the set
N4(p). For example, in the adjoining figure, A and B are 4-adjacent, since they are 4-neighbors of
each other. But A and C are not 4-adjacent.
 8-adjacency – two pixels p and q with values from set V are 8-adjacent if q is in N8(p), i.e., if p
and q are either horizontal, vertical or diagonal neighbors. For example, in the above figure, (A
and B), (A and C) and (B and C) are 8-adjacent.
 m-adjacency – two pixels p and q with intensity values from V are m-adjacent if either of the
following two conditions is satisfied:
o q is in N4(p), or
o q is in ND(p), and the set N4(p) ∩ N4(q) has no pixels with values from V.
For example, in the above figure, (A and B), (B and C), (D and E) and (F and G) are m-adjacent
since they satisfy the first condition. (E and F) are also m-adjacent since they satisfy the second
condition, although they violate the first condition. (A and C) are not m-adjacent since they
violate both conditions.
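The sketch below (the helper names are illustrative) encodes the binary figure above with V = {1} and checks m-adjacency; it reproduces, for example, that A and B are m-adjacent while A and C are not:

    V = {1}
    img = [[0, 0, 0, 0, 0],
           [0, 0, 1, 1, 0],     # B = (1,2), C = (1,3)
           [1, 0, 1, 0, 0],     # D = (2,0), A = (2,2)
           [1, 0, 0, 0, 0],     # E = (3,0)
           [0, 1, 1, 0, 0]]     # F = (4,1), G = (4,2)

    def n4(p):
        x, y = p
        return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

    def nd(p):
        x, y = p
        return {(x - 1, y - 1), (x - 1, y + 1), (x + 1, y - 1), (x + 1, y + 1)}

    def val(p):
        x, y = p
        return img[x][y] if 0 <= x < len(img) and 0 <= y < len(img[0]) else 0

    def m_adjacent(p, q):
        if val(p) not in V or val(q) not in V:
            return False
        if q in n4(p):
            return True                      # first condition
        common = {r for r in n4(p) & n4(q) if val(r) in V}
        return q in nd(p) and not common     # second condition

    A, B, C = (2, 2), (1, 2), (1, 3)
    print(m_adjacent(A, B))   # True  (B is a 4-neighbor of A)
    print(m_adjacent(A, C))   # False (B lies in N4(A) ∩ N4(C) and belongs to V)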
3. Path – a path from pixel p with coordinates (x,y) to pixel q with coordinates (s,t) is a sequence of
distinct pixels with coordinates (x0,y0), (x1,y1), …, (xn,yn), where pixels (xi,yi) and (xi-1,yi-1) are
adjacent for 1 ≤ i ≤ n, and where (x0,y0) = (x,y) and (xn,yn) = (s,t).
 Here, n is the length of the path.
 If (x0,y0) and (xn,yn) are the same, then the path is a closed path.
 Depending on the type of adjacency, we can have a 4-path, 8-path or m-path.
0 1(A) 1(B)
0 1(C) 0
0 0 1(D)
For example, in the above arrangement of pixels, the paths are as follows:
4-path : B – A – C
8-paths : (i) B – A – C, (ii) B – A – C – D, (iii) B – C – D
m-path : B – A – C – D
4. Connectivity – let S represent a subset of pixels in an image. Then, two pixels p and q are said to
be connected in S if there exists a path between them consisting entirely of pixels in S. For
example, in the previous figure, if all the 9 pixels form the subset S, then B and C (or B and D in
the case of 8- and m-paths) are said to be connected to each other. On the other hand, if S consists
only of the pixels of the rightmost column, then B and D are not connected to each other.
 For any pixel p in S, the set of pixels that are connected to it in S is called a connected
component of S. For example, if all the 9 pixels in the above figure form S, the pixels A, B, C and
D form a connected component.
 If the subset S has only one connected component, then the set S is called a connected set. For
example,
(i) The following is a connected set
0 0 0 0 0
0 0 1 1 0
0 0 1 0 0
0 0 0 1 0
0 0 0 0 0
(ii) The following subset is not a connected set, since it has two connected components.
0 0 0 0 0
0 0 1 1 0
0 0 1 0 0
0 0 0 1 0
0 0 0 0 0
0 0 1 1 0
0 0 0 1 0
0 0 0 0 0
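A minimal breadth-first labeling sketch (assuming 8-adjacency and V = {1}; the function and variable names are illustrative) that counts connected components, which distinguishes a connected set (one component) from a disconnected one:

    from collections import deque

    def connected_components(img):
        """Label the 8-connected components of the 1-valued pixels of a binary image."""
        M, N = len(img), len(img[0])
        label = [[0] * N for _ in range(M)]
        current = 0
        for x in range(M):
            for y in range(N):
                if img[x][y] == 1 and label[x][y] == 0:
                    current += 1                       # start a new component
                    queue = deque([(x, y)])
                    label[x][y] = current
                    while queue:
                        i, j = queue.popleft()
                        for di in (-1, 0, 1):          # visit all 8-neighbors
                            for dj in (-1, 0, 1):
                                a, b = i + di, j + dj
                                if (0 <= a < M and 0 <= b < N and
                                        img[a][b] == 1 and label[a][b] == 0):
                                    label[a][b] = current
                                    queue.append((a, b))
        return current, label

    S1 = [[0, 0, 0, 0, 0],
          [0, 0, 1, 1, 0],
          [0, 0, 1, 0, 0],
          [0, 0, 0, 1, 0],
          [0, 0, 0, 0, 0]]
    print(connected_components(S1)[0])   # 1 -> S1 is a connected set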
5. Region – let R be a subset of pixels of an image. R is called a region of the image if R is a
connected set.
 Two regions Ri and Rj are said to be adjacent if their union forms a connected set.
Regions that are not adjacent are said to be disjoint. We consider only 4- and 8-adjacency
when referring to regions. For example, consider the two regions Ri and Rj shown
below:
Ri :
1 1 1
1 0 1
0 1(A) 0
Rj :
0 0 1(B)
1 1 1
1 1 1
Pixels A and B are 8-adjacent but not 4-adjacent. The union of Ri and Rj will form a connected set if 8-
adjacency is used, and then Ri and Rj will be adjacent regions. If 4-adjacency is used, they will be
disjoint regions.
6. Boundary – let an image contain K disjoint regions. Let RU denote the union of all these K
regions, and let (RU)^c denote its complement. Then RU is called the foreground and (RU)^c
is called the background of the image. The inner
border/boundary of a region is the set of pixels in the region that have at least one
background neighbor (the adjacency used must be specified).
7. Distance Measures – consider pixels p and q with coordinates (x,y) and (s,t) respectively.
 Euclidean distance – it is defined as De(p,q) = √[(x−s)² + (y−t)²]. All the pixels that
have a Euclidean distance less than or equal to a value r from pixel p(x,y) are
contained in a disk of radius r centered at (x,y).
 D4 distance (city-block distance) – it is defined as D4(p,q) = |x−s| + |y−t|. All the
pixels having a D4 distance less than or equal to a value r are contained in a diamond
centered at (x,y). Also, all the pixels with D4 = 1 are the 4-neighbors of (x,y).
 D8 distance (chessboard distance) – it is defined as D8(p,q) = max(|x−s|, |y−t|). All the
pixels with a D8 distance less than or equal to some value r from (x,y) are contained in a
square centered at (x,y). All pixels with D8 = 1 are the 8-neighbors of (x,y).
 The De, D4 and D8 distances are independent of any paths that may exist between p and q.
 Dm distance – it is defined as the length of the shortest m-path between p and q. So, the Dm distance
between two pixels will depend on the values of the pixels along the path, as well as on
the values of their neighbors. For example, consider the following subset of pixels:
P8 P3 P4
P1 P2 P6
P P5 P7
For adjacency, let V = {1}, and let P, P2 and P4 = 1.
i) If P1, P3, P5, P6, P7 and P8 = 0, then the Dm distance between P and P4 is 2
(only one m-path exists: P – P2 – P4).
0 0 1
0 1 0
1 0 0
ii) If P, P1, P2 and P4 = 1, and the rest are zero, then the Dm distance is 3 (P – P1 –
P2 – P4).
0 0 1
1 1 0
1 0 0
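A short sketch of the three path-independent distance measures defined above (the function names are illustrative):

    import math

    def d_euclidean(p, q):
        (x, y), (s, t) = p, q
        return math.sqrt((x - s) ** 2 + (y - t) ** 2)

    def d4(p, q):                       # city-block distance
        (x, y), (s, t) = p, q
        return abs(x - s) + abs(y - t)

    def d8(p, q):                       # chessboard distance
        (x, y), (s, t) = p, q
        return max(abs(x - s), abs(y - t))

    p, q = (0, 0), (3, 4)
    print(d_euclidean(p, q))   # 5.0
    print(d4(p, q))            # 7
    print(d8(p, q))            # 4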
Mathematical Tools used in Digital Image Processing
1. Array versus Matrix operations – Array operations on images are carried out on a pixel-by-
pixel basis. For example, raising an image to a power means that each individual pixel is raised
to that power. There are also situations in which operations between images are carried out
using matrix theory (e.g. matrix multiplication).
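A two-line NumPy illustration of the difference (the 2 × 2 values are arbitrary): the * operator works pixel by pixel, while @ performs a true matrix product.

    import numpy as np

    a = np.array([[1, 2], [3, 4]])
    b = np.array([[5, 6], [7, 8]])
    print(a * b)      # array (element-wise) product: [[ 5 12], [21 32]]
    print(a @ b)      # matrix product:               [[19 22], [43 50]]
    print(a ** 2)     # raising an image to a power acts on each pixel: [[ 1  4], [ 9 16]]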
2. Linear versus Nonlinear operations – consider a general operator H that, when applied to an
input image f(x,y), gives an output image g(x,y), i.e.,
H[f(x,y)] = g(x,y)
Then, H is said to be a linear operator if
H[a·f1(x,y) + b·f2(x,y)] = a·g1(x,y) + b·g2(x,y)
where a and b are arbitrary constants, f1(x,y) and f2(x,y) are input images, and g1(x,y) and
g2(x,y) are the corresponding output images.
That is, if H satisfies the properties of additivity and homogeneity, then it is a linear operator.
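A quick numerical check of the additivity/homogeneity test (a sketch; the sum and max operators used here are standard illustrations and the matrices are arbitrary, not part of the notes): summing all pixel intensities is linear, whereas taking the maximum intensity is not.

    import numpy as np

    f1 = np.array([[0, 2], [2, 3]])
    f2 = np.array([[6, 5], [4, 7]])
    a, b = 1, -1

    # Sum operator: H[a*f1 + b*f2] equals a*H[f1] + b*H[f2]  -> linear.
    print(np.sum(a * f1 + b * f2), a * np.sum(f1) + b * np.sum(f2))   # -15 -15

    # Max operator: the two sides differ -> nonlinear.
    print(np.max(a * f1 + b * f2), a * np.max(f1) + b * np.max(f2))   # -2 -4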
3. Arithmetic operations – these operations are array operations and are denoted by
S(x,y) = f(x,y) + g(x,y)
D(x,y) = f(x,y) – g(x,y)
P(x,y) = f(x,y) * g(x,y)
V(x,y) = f(x,y) ÷ g(x,y)
These operations are performed between corresponding pixel pairs in f and g for x = 0,1,2,…,
M-1 and y = 0,1,2,…,N-1, where all the images are of size M X N.
For example,
 if we consider a set of K noisy images of a particular scene {gi(x,y)}, then, in order to obtain an
image with less noise, an averaging operation can be done as follows:
g̅(x,y) = (1/K) · Σ (i = 1 to K) gi(x,y)
This averaging operation can be used in the field of astronomy, where imaging under very low
light levels frequently causes sensor noise to render single images virtually useless for analysis.
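A minimal simulation of this averaging operation (synthetic data; NumPy assumed): K noisy copies of the same scene are averaged, and the residual noise standard deviation drops roughly by a factor of √K.

    import numpy as np

    rng = np.random.default_rng(0)
    scene = np.full((64, 64), 100.0)                  # ideal noise-free scene
    K = 25
    noisy = [scene + rng.normal(0, 10, scene.shape) for _ in range(K)]

    g_bar = sum(noisy) / K                            # g_bar(x,y) = (1/K) * sum_i g_i(x,y)
    print(round(np.std(noisy[0] - scene), 1))         # about 10.0 (single image)
    print(round(np.std(g_bar - scene), 1))            # about 2.0  (close to 10 / sqrt(25))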
 Image Subtraction – this operation can be used to find small, minute differences between
images that are indistinguishable to the naked eye.
 Image Multiplication – this operation can be used in masking, where some regions of interest
need to be extracted from a given image. The process consists of multiplying the given image with
a mask image that has 1s in the region of interest and 0s elsewhere.
 In practice, for an n-bit image, the intensities are in the range [0, K], where K = 2^n − 1. When
arithmetic operations are performed on an image, the intensities may go out of this range. For
example, image subtraction may lead to images with intensities in the range [−255, 255]. In
order to bring these back to the original range [0, K], the following two operations can be performed. If
f(x,y) is the result of the arithmetic operation, then
o We first perform fm(x,y) = f(x,y) − min[f(x,y)]. This creates an image whose minimum
value is zero.
o Then, we perform fs(x,y) = K · [ fm(x,y) / max[fm(x,y)] ]
This results in an image fs(x,y) whose intensities are in the range [0, K].
 While performing image division, a small number must be added to the pixels of the divisor image to
avoid division by zero.
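A compact sketch of the two rescaling steps and the safeguarded division (NumPy assumed; the sample arrays and the small constant eps are illustrative choices):

    import numpy as np

    def rescale(f, K=255):
        """Shift to a zero minimum, then stretch so that intensities lie in [0, K]."""
        fm = f - f.min()                  # f_m(x,y) = f(x,y) - min f
        return K * (fm / fm.max())        # f_s(x,y) = K * f_m(x,y) / max f_m

    diff = np.array([[-200.0, 0.0], [55.0, 255.0]])   # e.g. the result of a subtraction
    print(rescale(diff))                  # values now span 0 .. 255

    # Image division with a small constant added to the divisor to avoid division by zero.
    f = np.array([[10.0, 20.0]])
    g = np.array([[0.0, 4.0]])
    eps = 1e-6
    print(f / (g + eps))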
4. Set Operations – for grayscale images, set operations are array operations. The union and
intersection operations between two images are defined as the maximum and minimum of
corresponding pixel pairs respectively. The complement operation on an image is defined as the
pairwise difference between a constant and the intensity of every pixel in the image.
Let set A represent a gray-scale image whose elements are the triplets (x, y, z), where (x,y) is the
location of a pixel and z is its intensity, and let K be the maximum intensity value. Then,
Union :        A ∪ B = { max_z(a, b) | a ∈ A, b ∈ B }
Intersection : A ∩ B = { min_z(a, b) | a ∈ A, b ∈ B }
Complement :   A^c = { (x, y, K − z) | (x, y, z) ∈ A }
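A short NumPy sketch of these gray-scale set operations, taking K = 255 for an 8-bit image (the array values are arbitrary):

    import numpy as np

    K = 255                                   # maximum intensity of an 8-bit image
    A = np.array([[ 10, 200], [120,  30]])
    B = np.array([[ 50,  60], [100, 220]])

    union        = np.maximum(A, B)           # element-wise maximum
    intersection = np.minimum(A, B)           # element-wise minimum
    complement   = K - A                      # K - z for every pixel

    print(union)          # [[ 50 200] [120 220]]
    print(intersection)   # [[ 10  60] [100  30]]
    print(complement)     # [[245  55] [135 225]]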
5. Logical Operations – when dealing with binary images, the 1-valued pixels can be thought of
as foreground and the 0-valued pixels as background. Now, considering two regions A and B
composed of foreground pixels:
The OR operation of these two sets is the set of coordinates belonging either to A or to B or to
both.
The AND operation is the set of elements that are common to both A and B.
The NOT operation on set A is the set of elements not in A.
The logical operations are performed on two regions of the same image, which can be irregular and
of different sizes, whereas the set operations described above involve complete images.
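A small sketch of the logical operations on binary masks, using NumPy boolean arrays (the two regions are arbitrary examples):

    import numpy as np

    A = np.array([[1, 1, 0],
                  [1, 1, 0],
                  [0, 0, 0]], dtype=bool)    # foreground region A
    B = np.array([[0, 1, 1],
                  [0, 1, 1],
                  [0, 0, 0]], dtype=bool)    # foreground region B

    print((A | B).astype(int))    # OR : pixels in A, in B, or in both
    print((A & B).astype(int))    # AND: pixels common to A and B
    print((~A).astype(int))       # NOT: pixels not in A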

More Related Content

PPTX
Introduction to Medical Imaging
PPTX
Medical imaging
PPTX
Digital imaging system
PPT
Digital Imaging
PPT
Recent advances digital imaging
PDF
Introduction to Medical Imaging
PPTX
Dental Digital Radiography in Easy to Understand Steps
Introduction to Medical Imaging
Medical imaging
Digital imaging system
Digital Imaging
Recent advances digital imaging
Introduction to Medical Imaging
Dental Digital Radiography in Easy to Understand Steps

What's hot (20)

PPTX
Digital Radiography
PPT
Medical imaging summary 1
PDF
General Consideration of all Radiology (imaging) Modalities
DOCX
PPT
Digital imaging
PPT
Biomedical Imaging Informatics
PPTX
DIGITAL IMAGING
PPTX
Digital Radiography
PPT
Introduction to digital radiography and pacs
PPT
Fluoroscopy systems
PDF
Medical x ray image sensors
PPTX
Digital radiography.. an update
PPT
20 medical physics techniques
PDF
Nyu wireless presentation
PPTX
digital radiography
PPTX
Biomedical Optical Imaging
PPTX
Digital Radiography in Dentistry Seminar by Dr Pratik
PPTX
Ee 417 Senior Design
PPTX
AHRA 2014 Annual Meeting l MD Buyline's Breakout Session: Transitioning to Di...
PPTX
Computed radiography
Digital Radiography
Medical imaging summary 1
General Consideration of all Radiology (imaging) Modalities
Digital imaging
Biomedical Imaging Informatics
DIGITAL IMAGING
Digital Radiography
Introduction to digital radiography and pacs
Fluoroscopy systems
Medical x ray image sensors
Digital radiography.. an update
20 medical physics techniques
Nyu wireless presentation
digital radiography
Biomedical Optical Imaging
Digital Radiography in Dentistry Seminar by Dr Pratik
Ee 417 Senior Design
AHRA 2014 Annual Meeting l MD Buyline's Breakout Session: Transitioning to Di...
Computed radiography
Ad

Similar to Dip mod1 (20)

DOCX
Introduction to image processing-Class Notes
PDF
Dip 4 ece-1 & 2
PPTX
RADIOGRAPHIC IMAGING INTRODUCTION TO RADIOLOGUC TECHNOLOGY.pptx
PPT
1. Introduction to Radiology and Imaging - Orthotrauma [Autosaved].ppt
PPTX
Orthodontic radiograph..
PDF
Class 17_Medical Imaging for biomedical engineering
PPTX
Electromagnetic wave
PPTX
Ram Chandra - 418.pptxyvyvuvuvuvuvuvuvuguggu
PPTX
Tissue Engineering introduction for physicists - Lecture two
PPT
Single Positron Emission Computed Tomography (SPECT)
PDF
Forestry rs introduction 2021
PPTX
Applications of Digital image processing in Medical Field
PPT
1 diagnostic imaging
PPTX
LCU RDG 402 PRINCIPLES OF COMPUTED TOMOGRAPHY.pptx
PPTX
PPTX
Imaging in periodontics
PPTX
PPTX
Cone beam CT: concepts and applications.pptx
PDF
Digital_image_processing introduction ppt for basic understanding
PPTX
6 biophysics of vision 2015
Introduction to image processing-Class Notes
Dip 4 ece-1 & 2
RADIOGRAPHIC IMAGING INTRODUCTION TO RADIOLOGUC TECHNOLOGY.pptx
1. Introduction to Radiology and Imaging - Orthotrauma [Autosaved].ppt
Orthodontic radiograph..
Class 17_Medical Imaging for biomedical engineering
Electromagnetic wave
Ram Chandra - 418.pptxyvyvuvuvuvuvuvuvuguggu
Tissue Engineering introduction for physicists - Lecture two
Single Positron Emission Computed Tomography (SPECT)
Forestry rs introduction 2021
Applications of Digital image processing in Medical Field
1 diagnostic imaging
LCU RDG 402 PRINCIPLES OF COMPUTED TOMOGRAPHY.pptx
Imaging in periodontics
Cone beam CT: concepts and applications.pptx
Digital_image_processing introduction ppt for basic understanding
6 biophysics of vision 2015
Ad

Recently uploaded (20)

PPTX
introduction to high performance computing
PDF
Soil Improvement Techniques Note - Rabbi
PDF
Accra-Kumasi Expressway - Prefeasibility Report Volume 1 of 7.11.2018.pdf
PDF
Level 2 – IBM Data and AI Fundamentals (1)_v1.1.PDF
PPTX
ASME PCC-02 TRAINING -DESKTOP-NLE5HNP.pptx
PPTX
Sorting and Hashing in Data Structures with Algorithms, Techniques, Implement...
PDF
distributed database system" (DDBS) is often used to refer to both the distri...
PPTX
CURRICULAM DESIGN engineering FOR CSE 2025.pptx
PPTX
Module 8- Technological and Communication Skills.pptx
PDF
Visual Aids for Exploratory Data Analysis.pdf
PDF
Artificial Superintelligence (ASI) Alliance Vision Paper.pdf
PDF
UNIT no 1 INTRODUCTION TO DBMS NOTES.pdf
PDF
Design Guidelines and solutions for Plastics parts
PPTX
CyberSecurity Mobile and Wireless Devices
PPTX
Fundamentals of safety and accident prevention -final (1).pptx
PPTX
Current and future trends in Computer Vision.pptx
PPTX
Information Storage and Retrieval Techniques Unit III
PPTX
Amdahl’s law is explained in the above power point presentations
PPTX
Software Engineering and software moduleing
PPTX
tack Data Structure with Array and Linked List Implementation, Push and Pop O...
introduction to high performance computing
Soil Improvement Techniques Note - Rabbi
Accra-Kumasi Expressway - Prefeasibility Report Volume 1 of 7.11.2018.pdf
Level 2 – IBM Data and AI Fundamentals (1)_v1.1.PDF
ASME PCC-02 TRAINING -DESKTOP-NLE5HNP.pptx
Sorting and Hashing in Data Structures with Algorithms, Techniques, Implement...
distributed database system" (DDBS) is often used to refer to both the distri...
CURRICULAM DESIGN engineering FOR CSE 2025.pptx
Module 8- Technological and Communication Skills.pptx
Visual Aids for Exploratory Data Analysis.pdf
Artificial Superintelligence (ASI) Alliance Vision Paper.pdf
UNIT no 1 INTRODUCTION TO DBMS NOTES.pdf
Design Guidelines and solutions for Plastics parts
CyberSecurity Mobile and Wireless Devices
Fundamentals of safety and accident prevention -final (1).pptx
Current and future trends in Computer Vision.pptx
Information Storage and Retrieval Techniques Unit III
Amdahl’s law is explained in the above power point presentations
Software Engineering and software moduleing
tack Data Structure with Array and Linked List Implementation, Push and Pop O...

Dip mod1

  • 1. DIGITAL IMAGE PROCESSING by Dr Noorullah Shariff C MODULE-I INTRODUCTION  Digital image processing is the manipulation of digital images by mean of a computer.  It has two main applications: o Improvement of pictorial information for human interpretation, o Processingofimage dataforstorage,transmission and representationforautonomous machine perception. What is Digital Image Processing  Digital image processing is the manipulation of digital images by mean of a computer.  Animagecontainsdescriptiveinformationabouttheobjectthatitrepresents.  Itcanbedefined mathematically as a 2-dimensional function or signal f(x,y) o where x and y are the spatial coordinates, and o f is called the intensity or gray-level of the image at a point.  Adigitalimagehasallthethreeparametersx,y and f asfiniteanddiscretequantities.  Ithasafinite number of elements, each of which has a particular location and value. o These elements are called picture elements, image elements, pels or pixels.  Humans are limited to the visual band of the Electromagnetic spectrum (EM). o But, imaging machines coveralmost the entire EM spectrum, ranging from gamma to radio waves. o Such machines can operate on images generated by sources including ultrasound, electron microscopy and computer generated images.  Continuum from Image Processing to Computer Vision Origin of Digital Image Processing  Newspaper industry – Bartlane cable picture transmission system across Atlantic (1920s) – Superseded by photographic reproduction from tapes using telegraph terminals – Earlier images could code in five levels of gray, improved to 15 levels in 1929
  • 2.  1960s: Improvements in computing technology and the onset of the space race led to a surge of work in digital image processing  1964: Computers used to improve the quality of images of the moon taken by the Ranger 7 probe  Such techniques were used in other space missions including the Apollo landings  1970s: Digital image processing begins to be used in medical applications  1979: Sir Godfrey N. Hounsfield & Prof. Allan M. Cormack share the Nobel Prize in medicine for the invention of tomography, the technology behind Computerized Axial Tomography (CAT) scans  1980s -Today: The use of digital image processing techniques has exploded and they are now used for all kinds of tasks in all kinds of areas – Image enhancement/restoration – Artistic effects – Medical visualisation – Industrial inspection – Law enforcement – Human computer interfaces Examples in the fields that Use Digital Image Processing Classification of images based on the source of energy, ranging from gamma rays at one end to radio waves at the other > Viewing images in non-visible bands of the electromagnetic spectrum, as well as in other energy sources such as acoustic, ultrasonic, and electronic > Gamma-ray imaging – Nuclear medicine  Inject a patient with a radioactive isotope that emits gamma rays as it decays  Used to locate sites of bone pathology such as infection of tumors – Positron emission tomography (PET scan) to detect tumors  Similar to CAT  Patient is given a radioactive isotope that emits positrons as it decays  When a positron meets an electron, both are annihilated giving off two gamma rays – Astrophysics  Studying images of stars that glow in gamma rays as natural radiation – Nuclear reactors  Looking for gamma radiation from valves > X-ray imaging – Medical and industrial applications – Generated using an X-ray tube – a vacuum tube with a cathode and an anode  Cathode is heated causing free electrons to be released  Electrons flow at high speed to positively charged anode  Upon electron’s impact with a nucleus, energy released in the form of X-ray radiation
  • 3.  Energy captured by a film sensitive to X-rays – Angiography or contrast-enhanced radiography  Used to obtain images or angiograms of blood vessels  A catheter is inserted into an artery or vein in the groin  Catheter threaded into the blood vessel and guided to the area to be studied  An X-ray contrast medium is injected into the catheter tube  Enhances the contrast of blood vessels and enables radiologists to see any irregularities or blockages > Imaging in ultraviolet band – Lithography, industrial inspection, microscopy, lasers, biological imaging – Fluorescence microscopy  A mineral fluorspar fluoresces when UV light is directed upon it  UV light by itself is not visible but when a photon of UV radiation collides with an electron in an atom of a fluorescent material, it elevates the electron to a higher energy level  The excited electron relaxes and emits light in the form of a lower energy photon in the visible light region  Fluorescence microscope uses excitation light to irradiate a prepared specimen and then, to separate the much weaker radiating fluorescent light from the brighter excitation light  Only the emission light reaches the sensor  Resulting fluorescing areas shine against a dark background with sufficient contrast to permit detection – Astronomy > Visible and IR band – Remote sensing, law enforcement – Thematic bands in satellite imagery – Multispectral and hyperspectral imagery (Fig. 1.10) – Weather observation and monitoring – Target detection > Imaging in microwave band – Radar > Imaging in radio band – Medicine (MRI) and astronomy > Other imaging modalities – Acoustic imaging (ultrasound), electron microscopy.
  • 4. Fundamental steps in Digital Image Processing 1. ImageAcquisition.Thisstepinvolvesacquiringanimageindigitalformandpreprocessingsuch as scaling. 2. ImageEnhancement–itistheprocessofmanipulatinganimagesothattheresultis more suitable than the original for a specific application. For e.g. filtering, sharpening, smoothing, etc. 3. ImageRestoration–thesetechniquesalsodealwithimprovingtheappearanceofanimage. But, as opposed to image enhancement which is based on human subjective preferences regardingwhatconstitutesagoodenhancementresult,imagerestorationisbasedon mathematical or probabilistic models of image degradation. 4. ColorImageProcessing–Colorprocessingindigitaldomainisusedasthebasisforextracting features of interest in an image. 5. Wavelets –they are used to represent images in various degrees of resolution. 6. Imagecompression –theydeal withtechniquesfor reducingthe storagerequiredto save an image, or the bandwidth required to transmit it. 7. Image Segmentation –these procedures partition an image into its constituent parts or objects. Thisgivesrawpixeldata, constitutingeithertheboundaryof aregion orallthepoints inthe region itself.
  • 5. Components of an Image Processing system 1. Image Sensors –two elements are required to acquire a digital image: a. Aphysicaldevicethatissensitivetotheenergyradiatedbytheobjectwewishtoimage. And, b. A digitizer for converting the output of the sensor into digital form. 2. SpecializedImageProcessingHardware–Itmustperformprimitiveoperationslikearithmetic andlogicoperationsinparallelontheentireimage.Itsmostdistinguishingcharacteristicis speed, which the typical computer cannot perform. 3. Computer–Thisisrequiredfor primitiveoperationsand canbeageneral purpose PC. 4. Software –it consists of specialized modules that perform specific tasks. 5. MassStorage–shorttermstorageisusedduringprocessing,onlinestorageisusedforrelatively fast recall, and archival storage is used for infrequent access. 6. Image Displays –Theseare monitorsused to viewthe result of a processing task. 7. Networking – It is required to transmit images. ELEMENTS OF VISUAL PERCEPTION  Human intuition and analysisplay a central role in the choice of image processing techniques, based on subjective and visual judgments.
  • 6.  So, it isnecessary to understand how human and electronic imaging devices compare in terms of resolution and ability to adapt to changes in illumination. Structure of human eye  The eye isnearly a sphere, with an average diameterof approximately 20mm.  Three membranes enclose theeye: o Thecornea andsclera outercover, o Thechoroid, and o The retina.  Theinnermostmembraneof the eyeistheretina,whichlinestheinsideofthewall’sentire posteriorportion.  Whentheeyeisproperly focused, light from an object outside the eye is imaged on the retina.  Discrete light receptors are distributed over the surface of retina. They are of two types: cones and rods.  Conesarelocatedprimarilyinthecentralportionoftheretina,calledthe fovea. o Theyarehighly sensitivetocolor. o Eachoneisconnectedtoitsownnerveend. o Thishelpsapersoninresolvingfine details. o Musclescontrollingtheeyerotatetheeyeballuntiltheimageofanobjectfallson fovea.
  • 7. o Cone vision is called photopic or bright – light vision.  Rods are distributed over the retinalsurface. o Severalrodsareconnected to a single nerve end. o This fact alongwiththelargerareaofdistributionreducetheamountofdetail discerniblebythesereceptors. o Rodsgiveageneral,overallpictureofthefieldofview. o Theyarenotinvolvedincolorvisionandare sensitivetolowlevelsof illumination. o Rodvisioniscalled scotopicordim–lightvision. Image formation in the eye
  • 8. o Inthehumaneye,thedistancebetweenthelensandtheretina(imagingregion)isfixed,as opposedto anordinaryphotographiccamera(wherethedistancebetweenthelensandfilm (imagingregion)is varied). o Thefocallengthneededtoachieveproperfocusisobtainedbyvaryingtheshapeofthelens, withthehelpof fibersintheciliarybody. o Thedistancebetweenthe centerofthelensandtheretina alongthevisualaxisis approximately17mm,andtherangeoffocallengthsisapproximately14mmto 17mm. o The retinal image is focused primarily on the region of the fovea. o Perception then takes place by the relative excitation of light receptors, which transform radiant energy into electrical impulses that ultimately are decoded by the brain. Brightness adaptation and discrimination o Therangeoflight intensitylevelstowhichthehumanvisualsystemcanadaptis enormous–  Ofthe order of 1010 – from scotopic threshold to the glare limit.  Subjective brightness (intensity as perceived by human visual system) is a logarithmic function of the light intensity incident on the eye.  However thetotalrange ofdistinctintensitylevelstheeyecandiscriminate
  • 9. simultaneouslyisrathersmallwhencomparedwith thetotaladaptation range.  Foranygivensetofconditions,thecurrentsensitivitylevelof thevisual systemiscalledthebrightnessadaptationlevel.  Brightnessdiscriminationispooratlowlevelsof illumination, and it improves significantly as background illumination increases.  At low levels of illumination,visioniscarriedoutbytherods,whereasathigh levels,visioniscarriedoutbythecones. o Weber ratio  Measure of contrast discrimination ability  Background intensity given by I  ∆Ic is the increment of illumination when the illumination is visible against background intensity I  Weber ratio is given by ∆Ic/I  A small value of ∆Ic/I implies that a small percentage change in intensity is visible, representing good brightness discrimination.  A large value of ∆Ic/I implies that a large percentage change is required for discrimination, representing poor brightness discrimination.  Typically, brightness discrimination is poor at low levels of illumination and improves at higher levels of background illumination  Mach bands ∗ ∗ ∗ ∗
  • 10. o Scalloped brightness pattern near the boundaries shown in stripes of constant intensity o The bars themselves are useful for calibration of display equipment  Simultaneous contrast o A region’s perceived brightness does not depend simply on intensity Lighter background makes an object appear darker while darker background makes the same object appear brighter  Optical illusions
  • 11. Light and Electromagnetic (EM) spectrum.
  • 12.  The EM spectrum ranges from gamma to radio waves.  The visible part of the EM spectrum ranges from violet tored.  Thepartofthespectrumbeforeviolet endiscalledUltraviolet,andthat afterred endis called the Infrared. o Visible spectrum  0.43µm (violet) to 0.79µm (red)  VIBGYOR regions  Colors are perceived because of light reflected from an object o Absorption vs reflectance of colors  An object appears white because it reflects all colors equally o Lightthatisvoidofcoloriscalledmonochromaticorachromaticlight. o Itsonlyattributeisitsintensity. o Intensity specifies the amount of light. o Since the intensity of monochromatic light is perceived to vary from black to gray shades and finally to white, the monochromatic intensity is called Gray level. o The range of measured values of monochromatic light from black to white is
  • 13. called the Gray scale, and the monochromatic images are called gray scale images. o Chromatic light spans the EM spectrum from 0.43μm (violet end) to 0.79μm (red end). o The quantities that describe the quality of a chromatic light source are o Frequency- f= c/λ where λ is wavelength o Radiance–itistheamountofenergythatflowsfromthelightsourceandis measuredinwatts. o Luminance–itistheamountofenergythatanobserverperceivesandis measuredinlumens. o Brightness –it issimilar to intensityin achromatic notion, and cannot be measured. Image sensing and acquisition  Imagesaregeneratedbythecombinationof o anilluminationsourceand o thereflection/absorptionof energyfrom that source bythe elementsof the scenebeing imaged.  The principal sensor arrangements used to transform illumination energy into digital images are 1. Single imaging sensor 2. Line sensor 3. Array sensor 1. Singleimageingsensor  Thiscanbeaphotodiodewhichisconstructedwithsiliconmaterialand its output voltage isproportional tothe amount of light falling on it.  The use of filter infront of thesensorimprovesselectivity.  Fore.g.agreenfilterresultsinastrongersensoroutputfor greenlightthanother componentsofthevisiblespectrum.  Inordertogeneratea2-Dimage using a single imaging sensor, there has to be relative displacement in both x- and y- directions between the sensor and the area to be imaged. 2. Line sensor  It consists of an in-line arrangement of sensors in the form of a sensor strip, which providesimagingelementsinonedirection.  Movementperpendiculartothestripprovides imagingintheotherdirection.
  • 14.  Thisarrangementisusedinmostflat-bedscanners.  Sensing deviceswith4000ormorein-linesensorsarepossible.  Applicationsoflinesensorsinclude airborne imaging of geographicalareas, medical and industrialimaging to obtain 3-Dimagesof objects, etc. 3. Arraysensor  Herethesingleimagingsensorsarearrangedintheformofa2-Darray.  Asthe sensor array is 2-dimensional, a complete image can be obtained byfocusingthe energy pattern ontothesurfaceofthearray, andhencemovementinanydirectionis notnecessary.  This arrangementisfoundindigitalcameraswhichhasCCDsensorspackedinarraysof 4000X4000 elements ormore. Image Acquisition  Theresponseofeachsensorisproportionaltotheintegraloflightenergyprojectedontothe surfaceof the sensor.  The sensor integrates this energy over minutes or hoursto reduce the noise. Acquisition process  The energy from an illumination source is reflected from the scene element being imaged (or theenergymaybetransmittedthroughthesceneelement,dependingonthenatureofits surface).  The imaging system collects this reflected/transmitted energy and focuses it onto the image plane.  Thefront endofthe imaging system projectsthe viewed scene ontothe lensfocal plane.  The sensor array which is coincidental with the focal plane produces outputs proportional to the integralofthelightreceivedateachsensor,whicharethendigitizedresultinginadigitaloutput image.
  • 15. Image Sampling and Quantization  Theoutputofthesensorsisacontinuousvoltagethathastobedigitized.  Thisinvolvestwoprocesses–sampling and quantization. o Sampling: Digitizing the coordinate values  It may be viewed as partitioning the X-Y plane into a grid of M rows and N columns with the coordinates of the center of each cell in the grid being a pair from the Cartesian product Ƶ2. o Quantization: Digitizing the amplitude values 
  • 16. · × o Digital image is always an approximation of Original image Image Representation o As a 3D plot of (x,y,z) where x and y are planar coordinates and z is the value of f at (x,y) o As intensity of each point, as a real number in the interval [0,1] o As a set of numerical values in the form of a matrix  We may even represent the matrix as a vector of size MN x1by reading each row one at a time into the vector Letf(s,t)representacontinuousimagefunctionoftwocontinuousvariablessandt.toconvertittoa digital image, 1. Sampling–  We sample the continuous image into a 2-Darrayf(x,y)  Containing M rowsand N columns, where(x,y)arediscretecoordinatestakingup integervaluesx=0,1,……,M-1andy=0,1,…..,N-1.  So f(0,1)indicatesthesecondsamplealongthefirstrow.
  • 17. 2. Quantization–  Thevaluesoftheabovesamplesthatspanacontinuousrangeofintensityvalues, mustbe convertedtodiscretequantities.  Thisisdonebydividingtheentirecontinuousintensityscale intoLdiscreteintervals, rangingfromblacktowhite,whereblackisrepresentedbyavalue0andwhite byL-1.  DependingontheproximityofasampletooneoftheseLlevels,thecontinuousintensity levels are quantized.  In addition to the number of discrete levels used, the accuracy achieved in quantization is highly dependent on the noise content of the sampled signal. Digitization Process  The digitization processrequires that decisions be made regarding values for M, N and L.  Due to internal storage and quantizing hardware considerations, the number ofintensity levelstypicallyisanintegerpowerof2.i.e.,L=2K , andtheimageisreferredtoasaK-bit image.  For e.g. an image with 256 intensity levels is called an 8-bit image. The number of bits required to store a digitized k-bit image is b=MNK.
  • 18. Digital Image Fundamentals Terminology  Dynamic Range - It is the ratio of maximum measurable intensity to the minimum detectable intensity level in the imaging system. It establishes the lowest and highest intensity levels that an image canhave.  TheupperlimitisdeterminedbySaturation.Itisthehighestvaluebeyondwhichall intensity levels are clipped. In an image, the entire saturated area has a high constant intensity level.  The lower limit is determined by noise. Especially in the darker regions of an image, the noise masks the lowest detectable true intensity level.  Contrast–Itisthedifferenceinintensitybetweenthehighestandlowestintensitylevelsinan image. For example, a lowcontrast image would have a dull washed out graylook.  Spatial resolution –  itisa measure of the smallest detectable detailin an image.  It is measured aslinepairsperunitdistance,ordotsorpixelsperunitdistance (DotsperInchorDPI).  For example,newspapersareprintedwitharesolutionof75DPI,whereas textbooksareprinted with a resolution of 2400 DPI.  Lower resolution images are smaller  IntensityResolution–
  • 19.  Itisameasureofthesmallestdetectablechangeinintensitylevel.  It refers to the number of bits used to quantize intensity.  For example, an image whose intensity is quantized into 256 levels is said to have 8-bits of intensity resolution. • Image interpolation – Basic tool used extensively in tasks such as zooming, shrinking, rotating and geometriccorrections – Process of using known data to estimate values at unknown locations – Enlarge an image of size 500 × 500 pixels by 1.5 times to 750 × 750 pixels ∗ Create an imaginary 750 × 750 pixel grid with same pixel spacing as original ∗ Shrink it so that it fits over the original image exactly ∗ Assign the intensity of the nearest pixel in the 500 ×500 pixel image to the pixel in the 750 ×750 pixel image ∗ After assignment, expand the grid to its original size – Method known as nearest neighbor interpolation Zooming and shrinking considered as image resampling methods ∗ Zooming ⇒ oversampling
  • 20. ∗ Shrinking ⇒ undersampling – Zooming ∗ Create new pixel locations ∗ Assign gray levels to these pixel locations ∗ Pixel replication · Special case of nearest neighbor interpolation · Applicable when size of image is increased by an integer number of times · New pixels are exact duplicates of the old ones ∗ Nearest neighbor interpolation · Assign the gray scale level of nearest pixel to new pixel · Fast but may produce severe distortion of straight edges, objectionable at high magnification levels ∗ Better to do bilinear interpolation using a pixel neighborhood Bilinear interpolation · Use four nearest neighbors to estimate intensity at a given location · Let (r,c) denote the coordinates of the location to which we want to assign an intensity value v(r,c) · Bilinear interpolation yields the intensity value as where the four coefficients are determined from the four equations in four unknowns that can be written using the four nearest neighbors of point (r, c) · Better results with a modest increase in computing ∗ Bicubic interpolation · Use 16 nearest neighbors of a point · Intensity value for location (r,c) is given by where the 16 coefficients are determined from the 16 equations in 16 unknowns that can be written using the 16 nearest neighbors of point (r, c) · Bicubic interpolation reduces to bilinear form by limiting the two summations from 0 to1 · Bicubic interpolation does a better job of preserving fine detail compared to bilinear · Standard used in commercial image editing programs ∗ Other techniques for interpolation are based on splines and wavelets – Shrinking ∗ Done similar to zooming ∗ Equivalent to pixel replication is row-column deletion ∗ Aliasing effects can be removed by slightly blurring the image before reducingit Basic Relationships between pixels 1. Neighbor of a pixel –we consider the following subset of pixels of an image L A M C P D N B O LetthepositionofpixelPbethecoordinates(x,y).Thenithastwohorizontalneighborsandtwovertical neighbors: Horizontal neighbors : C : (x,y-1) D : (x,y+1)
Basic Relationships between Pixels

1. Neighbors of a pixel – consider the following 3 × 3 arrangement of pixels of an image:

L A M
C P D
N B O

Let the position of pixel P be the coordinates (x, y). Then P has two horizontal neighbors and two vertical neighbors:
Horizontal neighbors: C : (x, y-1) and D : (x, y+1)
Vertical neighbors: A : (x-1, y) and B : (x+1, y)
These horizontal and vertical neighbors are together called the 4-neighbors of P, and the set is denoted by
N4(P) = {A, B, C, D} = {(x-1, y), (x+1, y), (x, y-1), (x, y+1)}
If P is on the border of the image, some of these neighbors may not exist.
P also has four diagonal neighbors:
L : (x-1, y-1), M : (x-1, y+1), N : (x+1, y-1), O : (x+1, y+1)
This set is denoted by ND(P) = {L, M, N, O}.
All of the above are together called the 8-neighbors of P, denoted by N8(P). So N8(P) = N4(P) ∪ ND(P).

2. Adjacency – let V be the set of intensity values used to define adjacency. For example, in a binary image with only two intensity values, 0 (representing black) and 1 (representing white), V can be chosen as V = {1}.
Let V = {1} for the following definitions, and consider the following arrangement of pixels (a binary image):

0      0      0      0      0
0      0      1(B)   1(C)   0
1(D)   0      1(A)   0      0
1(E)   0      0      0      0
0      1(F)   1(G)   0      0

 4-adjacency – two pixels p and q with values from V are 4-adjacent if q is in the set N4(p). For example, in the figure above, A and B are 4-adjacent, since they are 4-neighbors of each other; A and C are not 4-adjacent.
 8-adjacency – two pixels p and q with values from V are 8-adjacent if q is in N8(p), i.e., if p and q are horizontal, vertical or diagonal neighbors. For example, in the figure above, (A and B), (A and C) and (B and C) are 8-adjacent.
 m-adjacency – two pixels p and q with intensity values from V are m-adjacent if either of the following conditions is satisfied:
o q is in N4(p), or
o q is in ND(p) and the set N4(p) ∩ N4(q) contains no pixels with values from V.
For example, in the figure above, (A and B), (B and C), (D and E) and (F and G) are m-adjacent, since they satisfy the first condition. (E and F) are also m-adjacent, since they satisfy the second condition, although they violate the first. (A and C) are not m-adjacent, since they violate both conditions.
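The neighborhood and adjacency definitions above translate directly into code. A small, self-contained Python sketch (the function names and the 1-indexed coordinates are illustrative choices) for N4, ND, N8 and a 4-/8-adjacency test on the example image:

```python
def n4(x, y):
    """4-neighbors of pixel (x, y): the horizontal and vertical neighbors."""
    return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

def nd(x, y):
    """Diagonal neighbors of pixel (x, y)."""
    return {(x - 1, y - 1), (x - 1, y + 1), (x + 1, y - 1), (x + 1, y + 1)}

def n8(x, y):
    """8-neighbors: union of the 4-neighbors and the diagonal neighbors."""
    return n4(x, y) | nd(x, y)

def adjacent(img, p, q, V, kind="4"):
    """True if pixels p and q (row, col tuples) have values in V and are 4- or 8-adjacent."""
    if img[p] not in V or img[q] not in V:
        return False
    neighbors = n4(*p) if kind == "4" else n8(*p)
    return q in neighbors

# The binary image used in the adjacency example above (V = {1}), 1-indexed
grid = [[0, 0, 0, 0, 0],
        [0, 0, 1, 1, 0],
        [1, 0, 1, 0, 0],
        [1, 0, 0, 0, 0],
        [0, 1, 1, 0, 0]]
img = {(r, c): v for r, row in enumerate(grid, start=1)
                 for c, v in enumerate(row, start=1)}

A, B, C = (3, 3), (2, 3), (2, 4)
print(adjacent(img, A, B, {1}, "4"))   # True  : A and B are 4-adjacent
print(adjacent(img, A, C, {1}, "4"))   # False : A and C are not 4-adjacent
print(adjacent(img, A, C, {1}, "8"))   # True  : A and C are 8-adjacent
```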
3. Path – a path from pixel p with coordinates (x, y) to pixel q with coordinates (s, t) is a sequence of distinct pixels with coordinates (x0, y0), (x1, y1), ..., (xn, yn), where pixels (xi, yi) and (xi-1, yi-1) are adjacent for 1 ≤ i ≤ n, and where (x0, y0) = (x, y) and (xn, yn) = (s, t).
 Here, n is the length of the path.
 If (x0, y0) and (xn, yn) are the same pixel, the path is a closed path.
 Depending on the type of adjacency used, we speak of 4-paths, 8-paths or m-paths.

0   1(A)   1(B)
0   1(C)   0
0   0      1(D)

For example, in the above arrangement of pixels (with V = {1}), the paths are as follows:
4-path: B – A – C
8-paths: (i) B – A – C, (ii) B – A – C – D, (iii) B – C – D
m-path: B – A – C – D

4. Connectivity – let S represent a subset of pixels in an image. Two pixels p and q are said to be connected in S if there exists a path between them consisting entirely of pixels in S. For example, in the previous figure, if all 9 pixels form the subset S, then B and C (or B and D in the case of 8- and m-paths) are connected to each other. On the other hand, if S consists only of the pixels of the rightmost column, then B and D are not connected to each other.
 For any pixel p in S, the set of pixels that are connected to it in S is called a connected component of S. For example, if all 9 pixels in the above figure form S, then the pixels A, B, C and D form a connected component (under 8-adjacency).
 If the subset S has only one connected component, then S is called a connected set. For example:
(i) The following is a connected set:

0 0 0 0 0
0 0 1 1 0
0 0 1 0 0
0 0 0 1 0
0 0 0 0 0

(ii) The following subset is not a connected set, since it has two connected components:

0 0 0 0 0
0 0 1 1 0
0 0 1 0 0
0 0 0 1 0
0 0 0 0 0
0 0 1 1 0
0 0 0 1 0
0 0 0 0 0
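Connected components can be extracted with a simple flood fill. The Python sketch below (a breadth-first search; the function name connected_components is an illustrative choice, not a library routine) labels the components of a binary grid and confirms that example (ii) above has two components under 8-adjacency:

```python
from collections import deque

def connected_components(grid, V={1}, connectivity=8):
    """Label the connected components of pixels whose values are in V (4- or 8-connectivity)."""
    rows, cols = len(grid), len(grid[0])
    if connectivity == 4:
        offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:
        offsets = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] in V and labels[r][c] == 0:
                current += 1                      # start a new component
                queue = deque([(r, c)])
                labels[r][c] = current
                while queue:                      # breadth-first flood fill
                    pr, pc = queue.popleft()
                    for dr, dc in offsets:
                        nr, nc = pr + dr, pc + dc
                        if 0 <= nr < rows and 0 <= nc < cols and \
                           grid[nr][nc] in V and labels[nr][nc] == 0:
                            labels[nr][nc] = current
                            queue.append((nr, nc))
    return current, labels

# Example (ii) above: two connected components under 8-connectivity
grid = [[0,0,0,0,0],
        [0,0,1,1,0],
        [0,0,1,0,0],
        [0,0,0,1,0],
        [0,0,0,0,0],
        [0,0,1,1,0],
        [0,0,0,1,0],
        [0,0,0,0,0]]
count, _ = connected_components(grid)
print(count)   # 2
```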
5. Region – let R be a subset of pixels of an image. R is called a region of the image if R is a connected set.
 Two regions Ri and Rj are said to be adjacent if their union forms a connected set. Regions that are not adjacent are said to be disjoint. Only 4- and 8-adjacency are considered when referring to regions. For example, consider the two regions Ri and Rj below:

Ri:
1  1     1
1  0     1
0  1(A)  0

Rj:
0  0  1(B)
1  1  1
1  1  1

Pixels A and B are 8-adjacent but not 4-adjacent. The union of Ri and Rj forms a connected set if 8-adjacency is used, and Ri and Rj are then adjacent regions. If 4-adjacency is used, they are disjoint regions.

6. Boundary – let an image contain K disjoint regions. Let RU denote the union of all these K regions, and let (RU)ᶜ denote its complement. Then RU is called the foreground and (RU)ᶜ is called the background of the image. The inner border (or boundary) of a region R is the set of pixels in the region that have at least one background neighbor (the adjacency used must be specified).

7. Distance measures – consider pixels p and q with coordinates (x, y) and (s, t), respectively.
 Euclidean distance – defined as De(p, q) = √[(x − s)² + (y − t)²]. All pixels whose Euclidean distance from p(x, y) is less than or equal to some value r are contained in a disk of radius r centered at (x, y).
 D4 distance (city-block distance) – defined as D4(p, q) = |x − s| + |y − t|. All pixels whose D4 distance from (x, y) is less than or equal to some value r are contained in a diamond centered at (x, y). The pixels with D4 = 1 are the 4-neighbors of (x, y).
 D8 distance (chessboard distance) – defined as D8(p, q) = max(|x − s|, |y − t|). All pixels whose D8 distance from (x, y) is less than or equal to some value r are contained in a square centered at (x, y). The pixels with D8 = 1 are the 8-neighbors of (x, y).
 The De, D4 and D8 distances are independent of any paths that may exist between p and q.
 Dm distance – defined as the length of the shortest m-path between p and q. The Dm distance between two pixels therefore depends on the values of the pixels along the path, as well as on the values of their neighbors. For example, consider the following 3 × 3 arrangement of pixels:

P8  P3  P4
P1  P2  P6
P   P5  P7

For adjacency, let V = {1}, and let P, P2 and P4 = 1.
i) If P1, P3, P5, P6, P7 and P8 = 0, then the Dm distance between P and P4 is 2 (only one m-path exists: P – P2 – P4):

0 0 1
0 1 0
1 0 0

ii) If P, P1, P2 and P4 = 1, and the rest are zero, then the Dm distance is 3 (P – P1 – P2 – P4):

0 0 1
1 1 0
1 0 0
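The De, D4 and D8 distances are simple functions of the coordinates alone (unlike Dm, which depends on the pixel values along the path). A short Python sketch with illustrative function names:

```python
import math

def d_euclidean(p, q):
    """Euclidean distance De between pixels p = (x, y) and q = (s, t)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def d4(p, q):
    """City-block distance D4 = |x - s| + |y - t|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def d8(p, q):
    """Chessboard distance D8 = max(|x - s|, |y - t|)."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))

p, q = (0, 0), (3, 4)
print(d_euclidean(p, q))   # 5.0
print(d4(p, q))            # 7
print(d8(p, q))            # 4
```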
Mathematical Tools used in Digital Image Processing

1. Array versus Matrix operations – array operations on images are carried out on a pixel-by-pixel basis. For example, raising an image to a power means that each individual pixel is raised to that power. There are also situations in which operations between images are carried out using matrix theory (e.g., matrix multiplication).

2. Linear versus Nonlinear operations – consider a general operator H that, when applied to an input image f(x, y), gives an output image g(x, y), i.e.,
H[f(x, y)] = g(x, y)
Then H is said to be a linear operator if
H[a·f1(x, y) + b·f2(x, y)] = a·g1(x, y) + b·g2(x, y)
where a and b are arbitrary constants, f1(x, y) and f2(x, y) are input images, and g1(x, y) and g2(x, y) are the corresponding output images.
That is, if H satisfies the properties of additivity and homogeneity, then it is a linear operator; otherwise it is nonlinear (for example, the max operator is nonlinear).
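A quick numerical way to see the additivity/homogeneity test in action: the sketch below (the helper is_linear and the tiny test images are illustrative assumptions) checks the linearity condition for the pixel-sum operator and for the max operator:

```python
import numpy as np

def is_linear(H, f1, f2, a=2, b=-3):
    """Check H[a*f1 + b*f2] == a*H[f1] + b*H[f2] for one pair of inputs and constants."""
    lhs = H(a * f1 + b * f2)
    rhs = a * H(f1) + b * H(f2)
    return np.isclose(lhs, rhs)

f1 = np.array([[0., 2.], [4., 6.]])
f2 = np.array([[6., 4.], [3., 1.]])

print(is_linear(np.sum, f1, f2))   # True  : summation satisfies the linearity condition
print(is_linear(np.max, f1, f2))   # False : the max operator is nonlinear
```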
3. Arithmetic operations – these are array operations, denoted by
S(x, y) = f(x, y) + g(x, y)
D(x, y) = f(x, y) − g(x, y)
P(x, y) = f(x, y) × g(x, y)
V(x, y) = f(x, y) ÷ g(x, y)
These operations are performed between corresponding pixel pairs in f and g for x = 0, 1, 2, ..., M−1 and y = 0, 1, 2, ..., N−1, where all the images are of size M × N.
 Image averaging – given a set of K noisy images of a particular scene {gi(x, y)}, an image with less noise can be obtained by averaging:
g̅(x, y) = (1/K) Σ (i = 1 to K) gi(x, y)
This averaging operation is used, for example, in astronomy, where imaging under very low light levels frequently causes sensor noise to render single images virtually useless for analysis.
 Image subtraction – used to find small, minute differences between images that are indistinguishable to the naked eye.
 Image multiplication – used in masking, where some region of interest needs to be extracted from a given image. The given image is multiplied by a mask image that has 1s in the region of interest and 0s elsewhere.
 In practice, for an n-bit image the intensities are in the range [0, K], where K = 2ⁿ − 1. Arithmetic operations may produce values outside this range; for example, image subtraction may produce intensities in the range [−255, 255]. To bring the result f(x, y) of an arithmetic operation back to the range [0, K], the following two steps are performed:
o First compute fm(x, y) = f(x, y) − min[f(x, y)], which creates an image whose minimum value is zero.
o Then compute fs(x, y) = K · [ fm(x, y) / max[fm(x, y)] ], which yields an image fs(x, y) whose intensities lie in the range [0, K].
 When performing image division, a small number must be added to the pixels of the divisor image to avoid division by zero.

4. Set operations – for grayscale images, set operations are array operations. Let set A represent a grayscale image whose elements are the triplets (x, y, z), where (x, y) is the location of a pixel and z is its intensity. The union and intersection of two images are defined as the maximum and minimum of corresponding pixel pairs, respectively, and the complement of an image is defined as the pairwise difference between a constant (the maximum allowed intensity K) and the intensity of every pixel:
Union: A ∪ B = { max(a, b) | a ∈ A, b ∈ B }
Intersection: A ∩ B = { min(a, b) | a ∈ A, b ∈ B }
Complement: Aᶜ = { (x, y, K − z) | (x, y, z) ∈ A }
where the max and min are taken over the intensity values z of corresponding pixel pairs.
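In NumPy these grayscale set operations reduce to element-wise maximum, minimum and subtraction from K; the small arrays below are illustrative:

```python
import numpy as np

K = 255                                  # maximum intensity for an 8-bit image
A = np.array([[10, 200], [30, 120]], dtype=np.uint8)
B = np.array([[50, 180], [25, 130]], dtype=np.uint8)

union = np.maximum(A, B)                 # A ∪ B : element-wise maximum
intersection = np.minimum(A, B)          # A ∩ B : element-wise minimum
complement = K - A                       # Aᶜ    : K − z for every pixel

print(union)          # [[ 50 200] [ 30 130]]
print(intersection)   # [[ 10 180] [ 25 120]]
print(complement)     # [[245  55] [225 135]]
```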
5. Logical operations – when dealing with binary images, the 1-valued pixels can be thought of as foreground and the 0-valued pixels as background. Considering two regions A and B composed of foreground pixels:
 The OR operation of these two sets is the set of coordinates belonging either to A, or to B, or to both.
 The AND operation is the set of elements that are common to both A and B.
 The NOT operation on set A is the set of elements not in A.
Logical operations are performed between two regions of the same image, which can be irregular and of different sizes, whereas the set operations above involve complete images.
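For binary images the logical operations map directly onto element-wise Boolean operators; a small illustrative sketch (the masks below are arbitrary examples):

```python
import numpy as np

# Two binary regions: 1 (True) = foreground, 0 (False) = background
A = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 0]], dtype=bool)
B = np.array([[0, 1, 1],
              [0, 1, 1],
              [0, 0, 0]], dtype=bool)

print((A | B).astype(int))   # OR  : pixels in A, in B, or in both
print((A & B).astype(int))   # AND : pixels common to A and B
print((~A).astype(int))      # NOT : pixels not in A
```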