Module 4
Image Segmentation
INTRODUCTION:
Image segmentation is a commonly used technique in digital
image processing and analysis to partition an image into multiple
parts or regions, based on the characteristics of the pixels in the
image.
Image segmentation algorithms are based on one of two basic properties of intensity values:
Discontinuity: It is an approach to partition an image based on abrupt
changes in intensity, such as edges in an image
The three basic types of gray level discontinuities in a digital image:
a) Points b) Lines and c) Edges
Similarity: It is an approach to partition an image into regions that are similar
according to a set of predefined criteria.
Examples: thresholding, region growing, and region splitting and merging.
DETECTION OF DISCONTINUITIES
Let us consider a 3 x 3 mask with mask coefficients as shown below:
The 3 x 3 mask is superimposed on a 3 x 3 region of an image (the z’s are gray-level values), as shown below.
The response of the mask at any point in the image is the sum of products of the mask coefficients (wi) with the gray levels (zi) contained in the region encompassed by the mask:
R = w1z1 + w2z2 + … + w9z9 = Σ wi zi (sum over i = 1 to 9)
As usual, the response of the mask is defined with respect to its center location.
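As an illustration, here is a minimal sketch of this sum-of-products response, assuming NumPy and a grayscale image stored as a 2-D array; the helper name mask_response is ours, not from the slides, and is reused in later sketches.

import numpy as np

def mask_response(image, mask):
    # Sum-of-products response R = sum(w_i * z_i) of a 3 x 3 mask,
    # evaluated with the mask centered on every interior pixel.
    img = image.astype(float)
    rows, cols = img.shape
    R = np.zeros_like(img)
    for x in range(1, rows - 1):
        for y in range(1, cols - 1):
            region = img[x - 1:x + 2, y - 1:y + 2]   # the 3 x 3 neighbourhood (the z's)
            R[x, y] = np.sum(mask * region)          # sum of products with the mask weights (the w's)
    return R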
POINT DETECTION:
An isolated point is a point whose gray level is significantly
different from its background in a homogeneous area.
Example:
Any dark spot on a white background, or any white spot on a dark background, can be regarded as an isolated point.
• To detect an isolated point in an image we must select a threshold value.
• The threshold value is denoted by T and is used to identify such points.
• We say that a point has been detected at the location (x, y) on which the mask is centered if the absolute value of the response of the mask at that point exceeds the specified threshold, i.e. |R| > T.
• Such points are labelled 1 in the output image and all others are labelled 0, thus producing a binary image.
A sample mask used for point detection is shown below
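A minimal sketch of point detection follows, reusing the hypothetical mask_response helper from the earlier sketch; the Laplacian-type mask is the one commonly used for this purpose, and the threshold T in the example is an illustrative value, not one prescribed by the slides.

import numpy as np

# Laplacian-type point-detection mask: in a homogeneous area the response is
# (near) zero, while at an isolated point the response is large in magnitude.
point_mask = np.array([[-1, -1, -1],
                       [-1,  8, -1],
                       [-1, -1, -1]], dtype=float)

def detect_points(image, T):
    # Label a pixel 1 if |R| > T, otherwise 0 (binary output image).
    R = mask_response(image, point_mask)
    return (np.abs(R) > T).astype(np.uint8)

# Example: a single white spot on a dark background is flagged as an isolated point.
img = np.zeros((7, 7)); img[3, 3] = 255
binary = detect_points(img, T=1000)   # illustrative threshold; only the isolated point exceeds it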
LINE DETECTION:
• In an image, a line is another kind of discontinuity.
• Here four masks are used to obtain the responses R1, R2, R3, and R4 for the horizontal, vertical, +45° and −45° directions respectively.
• Apply these masks to the given image: each mask is superimposed on the image and the convolution (sum-of-products) process is applied to obtain the response of that mask.
• R1 is the response obtained when moving the mask from left to right.
• R2 is the response obtained when moving the mask from top to bottom.
• Similarly, R3 is the response of the mask along the +45° line, and
• R4 is the response of the mask along the −45° line.
Suppose that an image is filtered (individually) with the four masks. If, at a given point in the image, |Rk| > |Rj| for all j ≠ k, that point is said to be more likely associated with a line in the direction of mask k.
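A minimal sketch of the four-mask procedure, again reusing the hypothetical mask_response helper; the masks below are the standard horizontal, vertical, +45° and −45° line-detection masks.

import numpy as np

line_masks = {
    "horizontal": np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]], dtype=float),
    "vertical":   np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]], dtype=float),
    "+45":        np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]], dtype=float),
    "-45":        np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]], dtype=float),
}

def dominant_line_direction(image):
    # For each pixel, pick the direction k whose |Rk| is largest among the four masks.
    names = list(line_masks)
    responses = np.stack([np.abs(mask_response(image, line_masks[k])) for k in names])
    return np.argmax(responses, axis=0), names   # index into names gives the winning direction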
EDGE DETECTION:
• An edge is a set of connected pixels that lies on the boundary
between two regions which differ in gray value.
• Pixels on an edge are known as edge points.
• Edges provide an outline of the object.
• An edge can be extracted by computing the derivative of the
image function.
• The magnitude of the derivative indicates the contrast (strength) of the edge, and the direction of the derivative vector indicates the edge orientation.
Example: if at a point in the image the response R1 of the horizontal mask is the largest of the four, that particular point is said to be more likely associated with a horizontal line.
Step edge: involves an abrupt change in intensity; the transition between the two intensity levels ideally occurs over a distance of 1 pixel.
Ramp edge: involves a slow and gradual change in intensity.
Roof edge: the transition is not instantaneous; the intensity rises and then falls back over a short distance (a model of a line through a region).
Edge detection stages:
1. Filtering: the given image is filtered (smoothing and noise reduction) in order to obtain a better input image for edge detection.
2. Detection of edge points: this is a local operation that extracts
from an image all points that are potential candidates to become
edge points.
• This can be done by using first order derivative or second
order derivative. The magnitude of the first derivative can be
used to detect the presence of an edge at a point in an image.
• Similarly, the sign of the second derivative can be used to
determine whether an edge pixel lies on the dark or light side
of an edge.
• In this method we take the first derivative of the intensity values across the image; the edge is located at points where the derivative is maximum.
• The image gradient is used to find edge strength and direction at location (x, y) of the image, and is defined as the vector
∇f = [Gx, Gy] = [∂f/∂x, ∂f/∂y],
with edge strength given by the gradient magnitude |∇f| = sqrt(Gx² + Gy²) (often approximated by |Gx| + |Gy|) and edge direction given by α(x, y) = tan⁻¹(Gy / Gx).
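A minimal sketch of computing the gradient magnitude and direction, assuming NumPy; central differences are used here as a stand-in for whichever first-order operator is chosen.

import numpy as np

def image_gradient(image):
    # Return gradient magnitude (edge strength) and direction (radians) at each pixel.
    img = image.astype(float)
    Gy, Gx = np.gradient(img)          # derivatives along rows (y) and columns (x)
    magnitude = np.hypot(Gx, Gy)       # |∇f| = sqrt(Gx^2 + Gy^2)
    direction = np.arctan2(Gy, Gx)     # α(x, y) = tan^-1(Gy / Gx)
    return magnitude, direction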
3. Edge localization: determines the exact location of the edge in the image; it also involves edge thinning and edge linking steps.
FIRST ORDER EDGE DETECTION OPERATORS
• The gradient of an image measures the change in image
function f(x, y) in X and Y directions.
• A 3 x 3 region of an image (the z’s are gray-level values) and the masks used to compute the gradient at the point labeled z5 are shown below.
First order derivative operators
Roberts operator (cross-gradient):
Gx = (z9 – z5)
Gy = (z8 – z6)
|∇f| ≈ |z9 – z5| + |z8 – z6|
PREWITT OPERATOR
Gx = (z7 + z8 + z9) – (z1 + z2 + z3)
Gy = (z3 + z6 + z9) – (z1 + z4 + z7)
SOBEL OPERATOR
Gx = (z7 + 2z8 + z9) – (z1 + 2z2 + z3)
Gy = (z3 + 2z6 + z9) – (z1 + 2z4 + z7)
Prewitt and Sobel mask for detecting diagonal edges
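A minimal sketch of the Prewitt and Sobel computations, assuming NumPy, the z1…z9 labelling used above (z1 top-left, z5 center, z9 bottom-right), and the hypothetical mask_response helper from the earlier sketch.

import numpy as np

prewitt_gx = np.array([[-1, -1, -1], [0, 0, 0], [1, 1, 1]], dtype=float)   # (z7+z8+z9) - (z1+z2+z3)
prewitt_gy = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)   # (z3+z6+z9) - (z1+z4+z7)
sobel_gx   = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)   # (z7+2z8+z9) - (z1+2z2+z3)
sobel_gy   = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)   # (z3+2z6+z9) - (z1+2z4+z7)

def gradient_magnitude(image, gx_mask, gy_mask):
    # Approximate |∇f| ≈ |Gx| + |Gy| at every interior pixel.
    Gx = mask_response(image, gx_mask)
    Gy = mask_response(image, gy_mask)
    return np.abs(Gx) + np.abs(Gy)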
EDGE LINKING AND BOUNDARY DETECTION
• The edge detection methods discussed so far yield pixels lying only on edges.
• This set of pixels seldom defines a boundary completely, because of noise, breaks in the boundary, etc.
• Therefore, edge detection algorithms are typically followed by linking and other boundary detection procedures designed to assemble edge pixels into meaningful boundaries.
Edge linking is used to group the edge points that are detected
using edge detection algorithms such as first order and second
order derivative operators.
There are two methods used for edge linking.
1. Local Processing (Local Edge Linker)
2. Global Edge Linker (Using Hough Transform)
LOCAL PROCESSING
• Edge points are grouped based on some similarity criteria.
Steps:
• Detect the edges (edge points) using edge detection
algorithm.
• Analyze the characteristics of the edge points within a small neighbourhood (3 x 3 or 5 x 5) and group them into an edge if the pre-defined criteria for similarity are met.
The two principal properties used for establishing pre-defined
criteria for similarity are:
• The strength (Magnitude) of the response of the gradient
operator used to produce the edge pixel.
• The direction of the gradient vector.
• Let us consider an edge point at (x, y) and another edge point at (x0, y0) in its neighborhood; they can be grouped to form an edge (the two edge points can be linked) only if
|∇f(x, y) − ∇f(x0, y0)| ≤ E, where E is a non-negative magnitude threshold, and
|α(x, y) − α(x0, y0)| ≤ A, where A is a non-negative angle threshold.
An edge point in the pre-defined neighborhood of (x, y) is linked to the pixel at (x, y) if both the magnitude and the direction criteria are satisfied. This process is repeated at every location in the image.
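A minimal sketch of local edge linking under these two criteria, reusing the magnitude and direction arrays from the hypothetical image_gradient sketch; the candidate strength threshold strength_T, the magnitude threshold E and the angle threshold A are all illustrative parameters.

import numpy as np

def local_edge_linking(magnitude, direction, E, A, strength_T):
    # Link neighbouring edge points whose gradient magnitude and direction are similar.
    rows, cols = magnitude.shape
    edge = magnitude > strength_T                  # candidate edge points
    linked = np.zeros((rows, cols), dtype=bool)
    for x in range(1, rows - 1):
        for y in range(1, cols - 1):
            if not edge[x, y]:
                continue
            for dx in (-1, 0, 1):                  # scan the 3 x 3 neighbourhood of (x, y)
                for dy in (-1, 0, 1):
                    if dx == 0 and dy == 0:
                        continue
                    x0, y0 = x + dx, y + dy
                    if (edge[x0, y0]
                            and abs(magnitude[x, y] - magnitude[x0, y0]) <= E
                            and abs(direction[x, y] - direction[x0, y0]) <= A):
                        linked[x, y] = True        # both criteria satisfied: link the two points
                        linked[x0, y0] = True
    return linked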
GLOBAL PROCESSING VIA THE HOUGH TRANSFORM
Edge points are linked by determining first if they lie on a curve of
specified shape.
HOUGH TRANSFORM
• The Hough transform is a feature extraction method used to
detect simple shapes such as lines, circles and objects in an
image using the concept of parameter space.
• The Hough transform is used to connect disjoint (broken) edges in an image.
Working Principle:
Consider any point (xi, yi) on xy plane and the general equation
of a straight line in slope-intercept form is given by:
yi = axi + b
Infinitely many lines pass through this (xi, yi) but they all satisfy
the equation yi = axi + b for varying values of a and b
A line in the xy plane contains many points. Each such point maps to a line in the ab (parameter) plane via the equation b = −xi a + yi. In fact, all the points on the xy-plane line have parameter-space lines that intersect at a single point (a’, b’).
Let us consider a line in the xy plane which contains two points (xi, yi) and (xj, yj), as shown below. When we convert this line to parameter (Hough) space we get two lines that intersect at a point (a’, b’) (unless the two parameter-space lines are parallel), where a’ is the slope and b’ is the intercept of the line containing both (xi, yi) and (xj, yj) in the xy plane.
• For each edge point we draw the lines in the parameter space
and then find their point of intersection (if any).
• The intersection point will give us the parameter (slope and
intercept) of the line.
• Thus edge points (x1, y1), (x2, y2), (x3, y3), ……… lie on the same line only if their corresponding parameter-space lines intersect at one common point.
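As a small illustrative example (not from the slides): take the two points (1, 2) and (3, 4). Their parameter-space lines are b = −a + 2 and b = −3a + 4; setting these equal gives a’ = 1 and b’ = 1, so both points lie on the line y = x + 1 in the xy plane.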
Drawback:
A practical difficulty with this approach is that a (the slope of a line) approaches infinity as the line approaches the vertical direction. One way around this difficulty is to use the normal representation of a line:
x cos θ + y sin θ = ρ
where ρ is the perpendicular distance of the line from the origin and θ is the angle of its normal.
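A minimal sketch of the Hough transform using the normal (ρ, θ) representation, assuming NumPy and a binary edge image; the 1° angle step and integer ρ resolution are illustrative choices.

import numpy as np

def hough_lines(edge_image, angle_step_deg=1):
    # Accumulate votes in (rho, theta) space for every edge pixel.
    rows, cols = edge_image.shape
    thetas = np.deg2rad(np.arange(-90, 90, angle_step_deg))
    diag = int(np.ceil(np.hypot(rows, cols)))
    rhos = np.arange(-diag, diag + 1)                       # possible perpendicular distances
    accumulator = np.zeros((len(rhos), len(thetas)), dtype=np.int64)
    ys, xs = np.nonzero(edge_image)                         # coordinates of edge points
    for x, y in zip(xs, ys):
        for t_idx, theta in enumerate(thetas):
            rho = int(round(x * np.cos(theta) + y * np.sin(theta)))
            accumulator[rho + diag, t_idx] += 1             # one vote per (rho, theta) cell
    return accumulator, rhos, thetas

# Peaks in the accumulator correspond to lines on which many edge points are (nearly) collinear.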
Global Processing via Graph-Theoretic Techniques
• It is a global approach for edge detection and linking based on
representing edge segments in the form of a graph and searching
the graph for low-cost paths that correspond to significant edges.
• This representation provides a strong approach that performs
well in the presence of noise.
• But the procedure is considerably more complicated and
requires more processing time than the methods discussed so far.
• We begin the development with some basic definitions.
• A graph G = (N, U) is a finite non-empty set of nodes N, together with a set U of unordered pairs of distinct elements of N.
• Each pair (ni, nj) of U is called an arc.
• A graph in which the arcs are directed is called a directed graph.
• If an arc is directed from node ni to node nj, then nj is said to be a successor of the parent node ni.
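The slides do not give the cost function or the search procedure, so the following is only a minimal sketch of the idea under assumed definitions: edge elements become nodes, each arc carries an assumed cost (e.g. reflecting how weak the edge is there), and a standard shortest-path search (Dijkstra) stands in for finding a low-cost path that would correspond to a significant edge.

import heapq

def lowest_cost_path(arcs, start, goal):
    # Dijkstra search over a directed graph given as {node: [(successor, cost), ...]}.
    frontier = [(0, start, [start])]           # (accumulated cost, node, path so far)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for successor, arc_cost in arcs.get(node, []):
            if successor not in visited:
                heapq.heappush(frontier, (cost + arc_cost, successor, path + [successor]))
    return None

# Hypothetical example: nodes are edge elements, arc costs are assumed edge "weakness" values.
arcs = {"A": [("B", 1), ("C", 4)], "B": [("C", 1), ("D", 5)], "C": [("D", 1)]}
print(lowest_cost_path(arcs, "A", "D"))        # -> (3, ['A', 'B', 'C', 'D'])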
Thresholding
It is carried out with the assumption that the range of intensity
levels covered by objects of interest is different from the
background.
Steps:
1. A threshold T is selected
2. Any point (x, y) in the image at which f(x, y) > T is called an object point; otherwise it is a background point.
3. The segmented image g(x, y) is then
g(x, y) = 1 if f(x, y) > T, and g(x, y) = 0 if f(x, y) ≤ T.
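A minimal sketch of this segmentation step, assuming NumPy and a chosen threshold T.

import numpy as np

def threshold_segment(image, T):
    # g(x, y) = 1 where f(x, y) > T (object), 0 otherwise (background).
    return (image > T).astype(np.uint8)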
Suppose that the gray-level histogram corresponds to an image f(x, y) composed of light objects on a dark background, in such a way that object and background pixels have gray levels grouped into two dominant modes.
For example, suppose an image contains two types of light objects on a dark background. The histogram corresponding to this image is characterized by three dominant modes.
• Here multilevel thresholding (with two thresholds T1 < T2) classifies a point (x, y) as
belonging to one object class if T1 < f(x, y) ≤ T2,
to the other object class if f(x, y) > T2, and
to the background class if f(x, y) ≤ T1.
• Thresholding is a very important technique for image segmentation.
• It produces uniform regions based on the threshold value T.
• The key parameter of the thresholding process is the choice of the threshold value.
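A minimal sketch of multilevel thresholding with two assumed thresholds T1 < T2, assuming NumPy.

import numpy as np

def multilevel_threshold(image, T1, T2):
    # 0 = background (f <= T1), 1 = one object class (T1 < f <= T2), 2 = the other object class (f > T2).
    labels = np.zeros(image.shape, dtype=np.uint8)
    labels[(image > T1) & (image <= T2)] = 1
    labels[image > T2] = 2
    return labels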
Types of Thresholding
1. Global Thresholding: the threshold operation depends only on the gray-scale value f(x, y); T is a constant over the whole image.
2. Local Thresholding: the threshold operation depends on both the gray-scale value f(x, y) and a local neighbourhood property p(x, y).
3. Dynamic or Adaptive Thresholding: the threshold operation additionally depends on the spatial coordinates x and y.
Global Thresholding
1. Select an initial estimate for T (this value should be greater than the minimum and less than the maximum intensity level in the image; the average intensity of the image is a good choice).
2. Segment the image using T: this produces two groups of pixels, G1 consisting of all pixels with gray-level values > T and G2 consisting of pixels with values ≤ T.
3. Compute the average gray-level values μ1 and μ2 for the pixels in regions G1 and G2.
4. Compute a new threshold value:
T = (μ1 + μ2) / 2
5. Repeat steps 2 through 4 until the difference in T in
successive iterations is smaller than a predefined
parameter T0
The parameter T0 is used to stop the algorithm after changes become small in
terms of this parameter.
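A minimal sketch of this iterative procedure, assuming NumPy; T0 is the stopping parameter from step 5.

import numpy as np

def global_threshold(image, T0=0.5):
    # Iterate T = (mu1 + mu2) / 2 until the change in T is smaller than T0.
    img = image.astype(float)
    T = img.mean()                                 # step 1: initial estimate (average intensity)
    while True:
        G1 = img[img > T]                          # step 2: pixels above the threshold
        G2 = img[img <= T]                         # step 2: pixels at or below the threshold
        mu1 = G1.mean() if G1.size else T
        mu2 = G2.mean() if G2.size else T          # step 3: average gray levels of the two groups
        T_new = 0.5 * (mu1 + mu2)                  # step 4: new threshold
        if abs(T_new - T) < T0:                    # step 5: stop when the change is small
            return T_new
        T = T_new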
Example: Find the threshold value for the image
shown below:
Dynamic or Adaptive Thresholding
• Imaging factors such as uneven illumination can transform a perfectly segmentable histogram into a histogram that cannot be partitioned effectively by a single threshold.
• One approach is to divide the original image into sub-images and then use a different threshold to segment each sub-image.
• The key issues in this approach are how to subdivide the image and how to estimate the threshold for each resulting sub-image.
• The threshold value used for each pixel depends on the location of the pixel in terms of the sub-images; hence the name adaptive.
• The threshold for each sub-image can be estimated from various statistical properties such as the mean, median or variance (possibly offset by a constant c).
• Apply the global thresholding method to each sub-image, as sketched below.
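A minimal sketch of block-wise adaptive thresholding, reusing the hypothetical global_threshold helper from the previous sketch; the block size is an illustrative choice.

import numpy as np

def adaptive_threshold(image, block_size=32):
    # Split the image into sub-images and apply global thresholding to each one.
    img = image.astype(float)
    out = np.zeros(img.shape, dtype=np.uint8)
    rows, cols = img.shape
    for r in range(0, rows, block_size):
        for c in range(0, cols, block_size):
            block = img[r:r + block_size, c:c + block_size]      # one sub-image
            T = global_threshold(block)                          # threshold estimated locally
            out[r:r + block_size, c:c + block_size] = (block > T).astype(np.uint8)
    return out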
The original image and the result of global thresholding are shown below.
The result of adaptive thresholding is shown below.
REGION BASED SEGMENTATION
• The objective of segmentation is to partition an image into regions.
• So far we have approached this problem by finding boundaries between regions based on gray-level discontinuities, or by segmentation via thresholding based on the distribution of pixel properties (gray level or color).
• Region-based segmentation instead finds the regions directly.