Course: Machine Vision
Image Matching
Session 12
D5627 – I Gede Putra Kusuma Negara, B.Eng., PhD
Outline
• Definition
• Invariant features
• Scale Invariant Feature Transform
• Feature matching
Definition
• Image matching can be defined as “the process of bringing two images geometrically into agreement so that corresponding pixels in the two images correspond to the same physical region of the scene being imaged”
• The image matching problem is solved by transforming (e.g., translating, rotating, scaling) one of the images so that its similarity to the other image is maximized in some sense
• The 3D nature of real-world scenes makes this hard to achieve, especially because images can be taken from arbitrary viewpoints and under different illumination conditions
Image Matching
• Image matching is a fundamental aspect of many problems in computer vision, including object and scene recognition, content-based image retrieval, stereo correspondence, motion tracking, texture classification, and video data mining
• It is a complex problem that remains challenging due to partial occlusions, image deformations, and viewpoint or lighting changes that may occur across different images
Image Matching (example)
by Diva Sian
by swashford
Feature Matching
• What stuff in the left image matches with stuff on the right?
• This information is important for automatic panorama stitching
Harder Case
by Diva Sian; by scgbt
Even Harder Case
NASA Mars Rover images with SIFT feature matches
Figure by Noah Snavely
Image Matching Procedure
1. Detection of distinguished regions
2. Local invariant description of these regions
3. Definition of the correspondence space
4. Search for a globally consistent subset of correspondences
Invariant Features
Invariant Local Features
1. At an interest point, define a local coordinate system (x and y axes)
2. Use the coordinate system to pull out a patch at that point
Invariant Local Features
• The algorithm for finding points and representing their patches should produce similar results even when conditions vary
• The buzzword is “invariance”
– Geometric invariance: translation, rotation, scale
– Photometric invariance: brightness, exposure, etc.
Feature Descriptors
What makes a good feature?
• We would like to align these 3 images by matching local features
• What would be good local features (ones that are easy to match)?
Local Measure of Uniqueness
Suppose we only consider a small window of pixels
• What defines whether a feature is a good or bad candidate?
• How does the window change when we shift it?
• Shifting the window in any direction should cause a big change
– “flat” region: no change in any direction
– “edge”: no change along the edge direction
– “corner”: significant change in all directions
Harris Corner Detector (example)
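As a concrete illustration, here is a minimal Harris-corner sketch using OpenCV; the file name, the blockSize/ksize/k parameters, and the 1% response threshold are illustrative assumptions, not values from the slides:

    import cv2
    import numpy as np

    # Load a grayscale image (hypothetical file name).
    img = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    gray = np.float32(img)

    # Harris response map: blockSize = neighborhood size, ksize = Sobel
    # aperture, k = Harris detector free parameter.
    response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

    # Keep points whose response exceeds 1% of the maximum: corner candidates.
    corners = np.argwhere(response > 0.01 * response.max())
    print(len(corners), "corner candidates")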
Invariance
Suppose we are comparing two images I1 and I2
– I2 may be a transformed version of I1
– What kinds of transformations are we likely to encounter in
practice?
We’d like to find the same features regardless of the
transformation
– This is called transformational invariance
– Most feature methods are designed to be invariant to
• Translation, 2D rotation, scale
– They can usually also handle
• Limited 3D rotations (SIFT works up to about 60 degrees)
• Limited affine transformations (2D rotation, scale, shear)
• Limited illumination/contrast changes
How to achieve invariance?
1. Make sure our detector is invariant
– Harris is invariant to translation and rotation
– Scale is trickier
• A common approach is to detect features at many scales using a Gaussian pyramid (e.g., MOPS)
• More sophisticated methods find “the best scale” to represent each feature (e.g.,
SIFT)
2. Design an invariant feature descriptor
– A descriptor captures the information in a region around the
detected feature point
– The simplest descriptor: a square window of pixels
• What’s this invariant to?
– Let’s look at some better approaches…
Scale Invariant Detection
• Suppose we are looking for corners
• Key idea: find the scale that gives a local maximum of f
– f is a local maximum in both position and scale
– Common definition of f: the Laplacian (or the difference between two Gaussian-filtered images with different sigmas)
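A minimal sketch of that difference-of-Gaussians approximation, using SciPy; the scale ratio k = 1.6 is a conventional but assumed choice:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def difference_of_gaussians(img, sigma, k=1.6):
        # Blur at two nearby scales and subtract; this approximates the
        # Laplacian of the image at scale sigma.
        g1 = gaussian_filter(img.astype(np.float64), sigma)
        g2 = gaussian_filter(img.astype(np.float64), k * sigma)
        return g2 - g1

    # A feature's characteristic scale is the sigma at which the response
    # peaks, jointly over position and scale.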
Slides from Tinne Tuytelaars; Lindeberg et al., 1996
Rotation Invariance for Feature Descriptors
Find the dominant orientation of the image patch
• This is given by x+, the eigenvector of H corresponding to λ+
– λ+ is the larger eigenvalue of H
• Rotate the patch according to this angle
Figure by Matthew Brown
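A hedged sketch of this step, computing the structure tensor H of a patch and the eigenvector x+ of its larger eigenvalue λ+; the function name and the use of np.gradient are assumptions for illustration:

    import numpy as np

    def dominant_orientation(patch):
        # Image gradients over the patch (rows = y, columns = x).
        gy, gx = np.gradient(patch.astype(np.float64))
        # Structure tensor H, summed over the patch.
        H = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                      [np.sum(gx * gy), np.sum(gy * gy)]])
        # eigh returns eigenvalues in ascending order; take the eigenvector
        # for the larger eigenvalue (lambda_plus).
        vals, vecs = np.linalg.eigh(H)
        x_plus = vecs[:, np.argmax(vals)]
        return np.arctan2(x_plus[1], x_plus[0])  # angle to rotate the patch by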
Multiscale Oriented PatcheS (MOPS) descriptor
Take a 40x40 square window around the detected feature:
• Scale to 1/5 size (using prefiltering)
• Rotate to horizontal
• Sample an 8x8 square window centered at the feature
• Intensity-normalize the window by subtracting the mean and dividing by the standard deviation, so that both a window I and aI+b will match
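A minimal sketch of these steps, omitting the rotation for brevity; the blur kernel and the epsilon guard are assumptions:

    import cv2
    import numpy as np

    def mops_descriptor(img, x, y):
        # 40x40 window around the feature (assumes it lies >= 20 px from the
        # borders; the rotate-to-horizontal step is skipped here).
        win = img[y - 20:y + 20, x - 20:x + 20].astype(np.float64)
        # Prefilter, then scale to 1/5 size: an 8x8 sample.
        win = cv2.GaussianBlur(win, (5, 5), sigmaX=2.0)
        win = cv2.resize(win, (8, 8), interpolation=cv2.INTER_AREA)
        # Intensity normalization: zero mean, unit standard deviation,
        # so I and a*I + b give the same descriptor.
        return (win - win.mean()) / (win.std() + 1e-8)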
Detections at Multiple Scales
Scale Invariant Feature Transform
Basic idea:
• Take a 16x16 square window around the detected feature
• Compute the edge orientation (angle of the gradient minus 90°) for each pixel
• Throw out weak edges (threshold the gradient magnitude)
• Create a histogram of the surviving edge orientations (bins covering angles from 0 to 2π)
SIFT Descriptor
Full version
• Divide the 16x16 window into a 4x4 grid of cells (the 2x2 case is shown below)
• Compute an orientation histogram for each cell
• 16 cells × 8 orientations = 128-dimensional descriptor
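In practice the full pipeline is available off the shelf; a brief sketch with OpenCV (4.4+, where SIFT ships in the main module; the file name is hypothetical):

    import cv2

    img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(img, None)
    print(descriptors.shape)  # (number of keypoints, 128)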
SIFT Properties
Extraordinarily robust matching technique
• Can handle changes in viewpoint: up to about 60 degrees of out-of-plane rotation
• Can handle significant changes in illumination, sometimes even day vs. night (next example)
• Fast and efficient; can run in real time
• Lots of code available:
http://people.csail.mit.edu/albert/ladypack/wiki/index.php?title=Known_implementations_of_SIFT
SIFT in Action
Feature Matching
Given a feature in I1, how do we find the best match in I2?
1. Define a distance function that compares two descriptors
2. Test all the features in I2 and find the one with minimum distance
How to define the distance between two features f1, f2?
• Simple approach: SSD(f1, f2)
– Sum of squared differences between the entries of the two descriptors
– Can give good scores to very ambiguous (bad) matches
(Figure: a feature f1 in image I1 and its candidate match f2 in image I2.)
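A minimal NumPy sketch of this brute-force SSD matching, for descriptor arrays d1 (from I1) and d2 (from I2); the names are assumptions:

    import numpy as np

    def ssd_match(d1, d2):
        # Pairwise sum of squared differences between all descriptor pairs.
        diffs = d1[:, None, :] - d2[None, :, :]
        ssd = np.sum(diffs ** 2, axis=2)   # shape: (len(d1), len(d2))
        # For each feature in I1: index and distance of its best match in I2.
        return np.argmin(ssd, axis=1), np.min(ssd, axis=1)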
Feature Distance
How to define the difference between two features f1, f2?
• Better approach: ratio distance = SSD(f1, f2) / SSD(f1, f2’)
– f2 is the best SSD match to f1 in I2
– f2’ is the 2nd-best SSD match to f1 in I2
– gives large values (near 1) for ambiguous matches and small values for distinctive ones
(Figure: f1 in I1 with its best match f2 and second-best match f2’ in I2.)
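A sketch of this ratio test on the same SSD matrix; the 0.8 cutoff is a commonly used but assumed threshold:

    import numpy as np

    def ratio_test_matches(d1, d2, ratio=0.8):
        diffs = d1[:, None, :] - d2[None, :, :]
        ssd = np.sum(diffs ** 2, axis=2)
        # Indices of the best and second-best SSD match for each row.
        order = np.argsort(ssd, axis=1)
        best, second = order[:, 0], order[:, 1]
        rows = np.arange(len(d1))
        # Keep a match only if it is clearly better than the runner-up.
        keep = ssd[rows, best] < ratio * ssd[rows, second]
        return rows[keep], best[keep]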
Evaluating the Results
• How can we measure the performance of a feature matcher?
(Figure: sample matches with feature distances 50, 75, and 200.)
True/false Positive
The distance threshold affects performance
• True positives = # of detected matches that are correct
– Suppose we want to maximize these; how should we choose the threshold?
• False positives = # of detected matches that are incorrect
– Suppose we want to minimize these; how should we choose the threshold?
(Figure: the same matches at feature distances 50, 75, and 200, labeled as true and false matches.)
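A small sketch of counting true and false positives at a given threshold, assuming a ground-truth label per candidate match; sweeping the threshold traces out the trade-off (an ROC-style curve):

    import numpy as np

    def tp_fp_at_threshold(distances, is_correct, threshold):
        # A candidate match is "detected" when its distance is below threshold.
        detected = distances < threshold
        true_pos = int(np.sum(detected & is_correct))
        false_pos = int(np.sum(detected & ~is_correct))
        return true_pos, false_pos

    # e.g. with the figure's distances: a threshold of 100 would accept the
    # matches at distance 50 and 75 but reject the one at 200.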
Applications
Features are used for:
• Image alignment (e.g., mosaics)
• 3D reconstruction
• Motion tracking
• Object recognition
• Indexing and database retrieval
• Robot navigation
• … etc.
Acknowledgment
Some of the slides in this presentation are adapted from various sources; many thanks to:
1. Deva Ramanan, Department of Computer Science, University of California at Irvine (http://www.ics.uci.edu/~dramanan/)
Thank You