Image alignment
Image alignment: Motivation
Panorama stitching
Recognition of object instances
Image alignment: Challenges
Small degree of overlap
Occlusion, clutter
Image alignment
• Two broad approaches:
• Direct (pixel-based) alignment
– Search for alignment where most pixels agree
• Feature-based alignment
– Search for alignment where extracted features agree
– Can be verified using pixel-based alignment
Alignment as fitting
• Previous lectures: fitting a model to features in one image
Find model M that minimizes

$$\sum_i \operatorname{residual}(x_i, M)$$
Alignment as fitting
• Previous lectures: fitting a model to features in one image
• Alignment: fitting a model to a transformation between
pairs of features (matches) in two images
Find model M that minimizes

$$\sum_i \operatorname{residual}(x_i, M)$$

Find transformation T that minimizes

$$\sum_i \operatorname{residual}(T(x_i), x_i')$$
Feature-based alignment outline
• Extract features
• Compute putative matches
• Loop:
• Hypothesize transformation T (small group of putative
matches that are related by T)
• Verify transformation (search for other matches consistent
with T)
2D transformation models
• Similarity (translation, scale, rotation)
• Affine
• Projective (homography)
Let’s start with affine transformations
• Simple fitting procedure (linear least squares)
• Approximates viewpoint changes for roughly planar
objects and roughly orthographic cameras
• Can be used to initialize fitting for more complex
models
Fitting an affine transformation
• Assuming we know the correspondences, how do we get the transformation?
Correspondences: $(x_i, y_i) \leftrightarrow (x_i', y_i')$

$$\begin{bmatrix} x_i' \\ y_i' \end{bmatrix} = \begin{bmatrix} m_1 & m_2 \\ m_3 & m_4 \end{bmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix} + \begin{bmatrix} t_1 \\ t_2 \end{bmatrix}$$

$$\begin{bmatrix} & & \vdots & & & \\ x_i & y_i & 0 & 0 & 1 & 0 \\ 0 & 0 & x_i & y_i & 0 & 1 \\ & & \vdots & & & \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \\ m_3 \\ m_4 \\ t_1 \\ t_2 \end{bmatrix} = \begin{bmatrix} \vdots \\ x_i' \\ y_i' \\ \vdots \end{bmatrix}$$
Fitting an affine transformation
• Linear system with six unknowns
• Each match gives us two linearly independent
equations: need at least three to solve for the
transformation parameters
$$\begin{bmatrix} & & \vdots & & & \\ x_i & y_i & 0 & 0 & 1 & 0 \\ 0 & 0 & x_i & y_i & 0 & 1 \\ & & \vdots & & & \end{bmatrix} \begin{bmatrix} m_1 \\ m_2 \\ m_3 \\ m_4 \\ t_1 \\ t_2 \end{bmatrix} = \begin{bmatrix} \vdots \\ x_i' \\ y_i' \\ \vdots \end{bmatrix}$$
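To make the least-squares step concrete, here is a minimal NumPy sketch that builds exactly this stacked system and solves it. The function name and the (N, 2) array convention are my own choices, not part of the original slides.

```python
import numpy as np

def fit_affine(pts, pts_prime):
    """Least-squares affine fit from pts (N, 2) to pts_prime (N, 2), N >= 3.

    Returns (M, t) with M the 2x2 matrix [[m1, m2], [m3, m4]] and t = (t1, t2),
    so that pts_prime is approximately pts @ M.T + t.
    """
    n = len(pts)
    A = np.zeros((2 * n, 6))
    b = np.zeros(2 * n)
    for i, ((x, y), (xp, yp)) in enumerate(zip(pts, pts_prime)):
        # Two rows per match, exactly as in the stacked system above:
        # [x y 0 0 1 0] . (m1 m2 m3 m4 t1 t2) = x'
        # [0 0 x y 0 1] . (m1 m2 m3 m4 t1 t2) = y'
        A[2 * i] = [x, y, 0, 0, 1, 0]
        A[2 * i + 1] = [0, 0, x, y, 0, 1]
        b[2 * i], b[2 * i + 1] = xp, yp
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    M = np.array([[p[0], p[1]], [p[2], p[3]]])
    t = p[4:6]
    return M, t
```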
What if we don’t know the correspondences?
What if we don’t know the correspondences?
• Need to compare feature descriptors of local patches
surrounding interest points
(feature descriptor) =? (feature descriptor)
Feature descriptors
• Assuming the patches are already normalized (i.e.,
the local effect of the geometric transformation is
factored out), how do we compute their similarity?
• Want invariance to intensity changes, noise,
perceptually insignificant changes of the pixel pattern
• Simplest descriptor: vector of raw intensity values
• How to compare two such vectors?
• Sum of squared differences (SSD)
– Not invariant to intensity change
• Normalized correlation
– Invariant to affine intensity change
Feature descriptors
$$\operatorname{SSD}(u, v) = \sum_i (u_i - v_i)^2$$

$$\rho(u, v) = \frac{\sum_i (u_i - \bar{u})(v_i - \bar{v})}{\sqrt{\sum_j (u_j - \bar{u})^2}\,\sqrt{\sum_j (v_j - \bar{v})^2}}$$
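As a small illustration (not from the slides), both measures take only a few lines of NumPy; the function names are arbitrary:

```python
import numpy as np

def ssd(u, v):
    """Sum of squared differences between two descriptor vectors."""
    u, v = np.asarray(u, float).ravel(), np.asarray(v, float).ravel()
    return np.sum((u - v) ** 2)

def normalized_correlation(u, v):
    """Normalized correlation; unchanged if u is replaced by a*u + b (a > 0)."""
    u, v = np.asarray(u, float).ravel(), np.asarray(v, float).ravel()
    du, dv = u - u.mean(), v - v.mean()
    return np.sum(du * dv) / np.sqrt(np.sum(du ** 2) * np.sum(dv ** 2))
```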
Feature descriptors
• Disadvantage of patches as descriptors:
• Small shifts can affect matching score a lot
• Solution: histograms
(histogram of gradient orientations, 0 to 2π)
• Descriptor computation:
• Divide patch into 4x4 sub-patches
• Compute histogram of gradient orientations (8 reference
angles) inside each sub-patch
• Resulting descriptor: 4x4x8 = 128 dimensions
• Advantage over raw vectors of pixel values
• Gradients less sensitive to illumination change
• “Subdivide and disorder” strategy achieves robustness to
small shifts, but still preserves some spatial information
Feature descriptors: SIFT
David G. Lowe. "Distinctive image features from scale-invariant keypoints.” IJCV 60
(2), pp. 91-110, 2004.
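For intuition, here is a toy version of the 4x4x8 histogram computation for a 16x16 patch. It is only a sketch of the idea, not Lowe's SIFT: it omits the Gaussian weighting window, the trilinear interpolation into neighboring bins, and the final normalization and clamping steps, and the 16x16 patch size is an assumption.

```python
import numpy as np

def sift_like_descriptor(patch):
    """Toy 4x4x8 gradient-orientation histogram for a 16x16 patch."""
    patch = np.asarray(patch, float)
    gy, gx = np.gradient(patch)                      # image gradients
    mag = np.hypot(gx, gy)
    ori = np.arctan2(gy, gx) % (2 * np.pi)           # orientations in [0, 2*pi)
    bins = np.floor(ori / (2 * np.pi / 8)).astype(int).clip(0, 7)
    desc = np.zeros((4, 4, 8))
    for i in range(4):
        for j in range(4):
            sb = (slice(4 * i, 4 * i + 4), slice(4 * j, 4 * j + 4))
            # 8-bin orientation histogram per sub-patch, weighted by magnitude
            desc[i, j] = np.bincount(bins[sb].ravel(),
                                     weights=mag[sb].ravel(), minlength=8)
    return desc.ravel()                              # 4 x 4 x 8 = 128 dimensions
```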
Feature matching
• Generating putative matches: for each patch in one
image, find a short list of patches in the other image
that could match it based solely on appearance
• Exhaustive search
– For each feature in one image, compute the distance to all
features in the other image and find the “closest” ones
(threshold or fixed number of top matches)
• Fast approximate nearest neighbor search
– Hierarchical spatial data structures (kd-trees, vocabulary trees)
– Hashing
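A minimal sketch of both options, assuming descriptors are stored as (N, D) NumPy arrays. Exact kd-trees (SciPy's cKDTree is used here) degrade for 128-dimensional descriptors, which is why approximate methods are preferred at scale.

```python
import numpy as np
from scipy.spatial import cKDTree

def matches_brute_force(desc1, desc2, k=2):
    """Exhaustive search: full pairwise distance matrix (fine for small sets)."""
    d = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]                 # k closest per query
    return idx, np.take_along_axis(d, idx, axis=1)

def matches_kdtree(desc1, desc2, k=2):
    """Tree-based nearest-neighbor search over the second image's descriptors."""
    dists, idx = cKDTree(desc2).query(desc1, k=k)      # shapes (N1, k)
    return idx, dists
```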
Feature space outlier rejection
• How can we tell which putative matches are more
reliable?
• Heuristic: compare distance of nearest neighbor to
that of second nearest neighbor
• Ratio will be high for features that are not distinctive
• Threshold of 0.8 provides good separation
David G. Lowe. "Distinctive image features from scale-invariant keypoints.” IJCV 60
(2), pp. 91-110, 2004.
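A sketch of the ratio test, written to consume (indices, distances) arrays like those returned by the matching sketch above with k=2; the 0.8 threshold is the value quoted from Lowe's paper.

```python
def ratio_test(idx, dists, threshold=0.8):
    """Keep match i only if its nearest neighbor is clearly closer than the
    second nearest, i.e. the feature is distinctive."""
    keep = dists[:, 0] < threshold * dists[:, 1]
    return [(i, int(idx[i, 0])) for i in range(len(idx)) if keep[i]]
```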
Reading
David G. Lowe. "Distinctive image features from scale-invariant keypoints.” IJCV 60 (2), pp. 91-110, 2004.
Dealing with outliers
• The set of putative matches contains a very high
percentage of outliers
• Heuristics for feature-space outlier rejection
• Geometric fitting strategies:
• RANSAC
• Incremental alignment
• Hough transform
• Hashing
Strategy 1: RANSAC
RANSAC loop:
1. Randomly select a seed group of matches
2. Compute transformation from seed group
3. Find inliers to this transformation
4. If the number of inliers is sufficiently large, re-compute
least-squares estimate of transformation on all of the
inliers
Keep the transformation with the largest number of inliers
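Below is a minimal sketch of this loop for the affine case, reusing the fit_affine function from the earlier sketch; the iteration count, inlier threshold (in pixels), and minimum inlier count are illustrative parameters, not values from the slides.

```python
import numpy as np

def ransac_affine(pts, pts_prime, n_iters=1000, inlier_thresh=3.0, min_inliers=10):
    """RANSAC for an affine transformation between matched (N, 2) point arrays."""
    rng = np.random.default_rng(0)
    best_model, best_count = None, 0
    for _ in range(n_iters):
        seed = rng.choice(len(pts), size=3, replace=False)        # 1. seed group
        M, t = fit_affine(pts[seed], pts_prime[seed])             # 2. hypothesis
        residuals = np.linalg.norm(pts @ M.T + t - pts_prime, axis=1)
        inliers = residuals < inlier_thresh                       # 3. find inliers
        if inliers.sum() >= min_inliers and inliers.sum() > best_count:
            best_count = int(inliers.sum())
            best_model = fit_affine(pts[inliers], pts_prime[inliers])  # 4. refit
    return best_model, best_count
```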
RANSAC example: Translation
Putative matches
RANSAC example: Translation
Select one match, count inliers
RANSAC example: Translation
Select one match, count inliers
RANSAC example: Translation
Select translation with the most inliers
Problem with RANSAC
• In many practical situations, the percentage of outliers (incorrect putative matches) is very high (90% or above)
• Alternative strategy: restrict search space by using
strong locality constraints on seed groups and inliers
• Incremental alignment
Strategy 2: Incremental alignment
• Take advantage of strong locality constraints: only
pick close-by matches to start with, and gradually add
more matches in the same neighborhood
S. Lazebnik, C. Schmid and J. Ponce, “Semi-local affine parts for object
recognition,” BMVC 2004.
Strategy 3: Hough transform
• Recall: Generalized Hough transform
B. Leibe, A. Leonardis, and B. Schiele, Combined Object Categorization and Segmentation with
an Implicit Shape Model, ECCV Workshop on Statistical Learning in Computer Vision 2004
(figure: model, visual codeword with displacement vectors, test image)
Strategy 3: Hough transform
• Suppose our features are adapted to scale and rotation
• Then a single feature match provides an alignment hypothesis
(translation, scale, orientation)
• Of course, a hypothesis obtained from a single match is unreliable
• Solution: let each match vote for its hypothesis in a Hough space
with very coarse bins
David G. Lowe. "Distinctive image features from scale-invariant keypoints.”
IJCV 60 (2), pp. 91-110, 2004.
Hough transform details (D. Lowe’s system)
• Training phase: For each model feature, record 2D
location, scale, and orientation of model (relative to
normalized feature frame)
• Test phase: Let each match between a test and a
model feature vote in a 4D Hough space
• Use broad bin sizes of 30 degrees for orientation, a factor of
2 for scale, and 0.25 times image size for location
• Vote for two closest bins in each dimension
• Find all bins with at least three votes and perform
geometric verification
• Estimate least squares affine transformation
• Use stricter thresholds on transformation residual
• Search for additional features that agree with the alignment
David G. Lowe. "Distinctive image features from scale-invariant keypoints.”
IJCV 60 (2), pp. 91-110, 2004.
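A toy sketch of the 4D voting step, with bin sizes matching the numbers above; the vote-for-the-two-closest-bins refinement and the geometric verification stage are omitted, and the pose-hypothesis representation is my own assumption.

```python
import numpy as np
from collections import defaultdict

def hough_vote(pose_hypotheses, image_size, min_votes=3):
    """Coarse 4D Hough voting over (x, y, log2 scale, orientation) hypotheses.

    Each hypothesis is a dict {'x', 'y', 'scale', 'orientation'} (orientation
    in radians) predicted by a single feature match.
    """
    votes = defaultdict(list)
    loc_bin = 0.25 * image_size                           # 0.25 x image size
    for m_id, h in enumerate(pose_hypotheses):
        key = (int(h['x'] // loc_bin),
               int(h['y'] // loc_bin),
               int(np.round(np.log2(h['scale']))),        # factor of 2 in scale
               int(np.degrees(h['orientation']) // 30) % 12)  # 30-degree bins
        votes[key].append(m_id)
    return {k: v for k, v in votes.items() if len(v) >= min_votes}
```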
Strategy 4: Hashing
• Make each image feature into a low-dimensional “key” that
indexes into a table of hypotheses
• Given a new test image, compute the hash keys for all features
found in that image, access the table, and look for consistent
hypotheses
• This can even work when we don’t have any feature descriptors:
we can take n-tuples of neighboring features and compute
invariant hash codes from their geometric configurations
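For concreteness, here is one way to turn a quadruple of neighboring features into such an invariant key, loosely inspired by the quad codes used by Astrometry.net but not their exact construction: express the two remaining points in a local frame spanned by the two farthest-apart points, which cancels translation, rotation, and scale.

```python
import numpy as np
from itertools import combinations

def quad_code(points, ndigits=2):
    """Similarity-invariant hash key for 4 points (array of shape (4, 2)).

    The most distant pair (A, B) defines a frame with A at (0, 0) and B at
    (1, 0); the coordinates of the other two points in that frame form the key.
    A full system would also canonicalize the ordering of A/B and of the
    remaining points; that is skipped here for brevity.
    """
    pts = np.asarray(points, float)
    i, j = max(combinations(range(4), 2),
               key=lambda p: np.linalg.norm(pts[p[0]] - pts[p[1]]))
    A, d = pts[i], pts[j] - pts[i]
    rest = [pts[k] for k in range(4) if k not in (i, j)]
    scale = d @ d
    def to_frame(p):                       # coordinates in the (A, B) frame
        q = p - A
        return (q @ d / scale, q @ np.array([-d[1], d[0]]) / scale)
    code = [round(c, ndigits) for p in rest for c in to_frame(p)]
    return tuple(code)                     # hashable, coarsely quantized key
```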
Application: Searching the sky
http://www.astrometry.net/
Beyond affine transformations
• Homography: plane projective transformation
(transformation taking a quad to another arbitrary
quad)
Homography
• The transformation between two views of a planar
surface
• The transformation between images from two
cameras that share the same center
Fitting a homography
• Recall: homogeneous coordinates
Converting to homogeneous image coordinates: $(x, y) \Rightarrow (x, y, 1)$
Converting from homogeneous image coordinates: $(x, y, w) \Rightarrow (x/w, y/w)$
• Equation for homography:
$$\lambda \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
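In code, applying a homography is just this matrix product followed by division by the third homogeneous coordinate; a minimal sketch (names are my own):

```python
import numpy as np

def apply_homography(H, pts):
    """Map (N, 2) points through a 3x3 homography H; returns (N, 2) points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coordinates
    mapped = pts_h @ H.T                               # rows are lambda * (x', y', 1)
    return mapped[:, :2] / mapped[:, 2:3]              # divide out lambda
```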
Fitting a homography
• Equation for homography:
$$\lambda \mathbf{x}_i' = \mathbf{H}\,\mathbf{x}_i = \begin{bmatrix} \mathbf{h}_1^T \\ \mathbf{h}_2^T \\ \mathbf{h}_3^T \end{bmatrix} \mathbf{x}_i
\qquad\Longleftrightarrow\qquad
\lambda \begin{bmatrix} x_i' \\ y_i' \\ 1 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix}$$

$$\mathbf{x}_i' \times \mathbf{H}\,\mathbf{x}_i = \mathbf{0}$$

$$\mathbf{x}_i' \times \mathbf{H}\,\mathbf{x}_i = \begin{bmatrix} y_i'\,\mathbf{h}_3^T \mathbf{x}_i - \mathbf{h}_2^T \mathbf{x}_i \\ \mathbf{h}_1^T \mathbf{x}_i - x_i'\,\mathbf{h}_3^T \mathbf{x}_i \\ x_i'\,\mathbf{h}_2^T \mathbf{x}_i - y_i'\,\mathbf{h}_1^T \mathbf{x}_i \end{bmatrix}
= \begin{bmatrix} \mathbf{0}^T & -\mathbf{x}_i^T & y_i'\,\mathbf{x}_i^T \\ \mathbf{x}_i^T & \mathbf{0}^T & -x_i'\,\mathbf{x}_i^T \\ -y_i'\,\mathbf{x}_i^T & x_i'\,\mathbf{x}_i^T & \mathbf{0}^T \end{bmatrix} \begin{bmatrix} \mathbf{h}_1 \\ \mathbf{h}_2 \\ \mathbf{h}_3 \end{bmatrix} = \mathbf{0}$$

3 equations, only 2 linearly independent
9 entries, 8 degrees of freedom (scale is arbitrary)
Direct linear transform
• H has 8 degrees of freedom (9 parameters, but scale
is arbitrary)
• One match gives us two linearly independent
equations
• Four matches needed for a minimal solution (null
space of 8x9 matrix)
• More than four: homogeneous least squares
$$\begin{bmatrix} \mathbf{0}^T & -\mathbf{x}_1^T & y_1'\,\mathbf{x}_1^T \\ \mathbf{x}_1^T & \mathbf{0}^T & -x_1'\,\mathbf{x}_1^T \\ & \vdots & \\ \mathbf{0}^T & -\mathbf{x}_n^T & y_n'\,\mathbf{x}_n^T \\ \mathbf{x}_n^T & \mathbf{0}^T & -x_n'\,\mathbf{x}_n^T \end{bmatrix} \begin{bmatrix} \mathbf{h}_1 \\ \mathbf{h}_2 \\ \mathbf{h}_3 \end{bmatrix} = \mathbf{0}
\qquad\Longleftrightarrow\qquad
\mathbf{A}\,\mathbf{h} = \mathbf{0}$$
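A minimal DLT sketch in NumPy, building two rows of A per match exactly as above and reading H off the right singular vector with the smallest singular value; the point normalization used in practice for numerical stability is omitted.

```python
import numpy as np

def fit_homography_dlt(pts, pts_prime):
    """Estimate a 3x3 homography from N >= 4 correspondences (N, 2) -> (N, 2)."""
    rows = []
    for (x, y), (xp, yp) in zip(pts, pts_prime):
        X = np.array([x, y, 1.0])
        rows.append(np.concatenate([np.zeros(3), -X, yp * X]))   # [0^T, -x^T, y' x^T]
        rows.append(np.concatenate([X, np.zeros(3), -xp * X]))   # [x^T, 0^T, -x' x^T]
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)
    h = Vt[-1]                     # null-space direction (homogeneous least squares)
    return h.reshape(3, 3)
```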
Application: Panorama stitching
Recognizing panoramas
M. Brown and D. Lowe, “Recognizing Panoramas,” ICCV 2003.
• Given contents of a camera memory card,
automatically figure out which pictures go together
and stitch them together into panoramas
http://www.cs.ubc.ca/~mbrown/panorama/panorama.html
Issues in alignment-based applications
• Choosing the geometric alignment model
• Tradeoff between “correctness” and robustness (also,
efficiency)
• Choosing the descriptor
• “Rich” imagery (natural images): high-dimensional patch-based
descriptors (e.g., SIFT)
• “Impoverished” imagery (e.g., star fields): need to create
invariant geometric descriptors from k-tuples of point-based
features
• Strategy for finding putative matches
• Small number of images, one-time computation (e.g., panorama
stitching): brute force search
• Large database of model images, frequent queries: indexing or
hashing
• Heuristics for feature-space pruning of putative matches
Issues in alignment-based applications
• Choosing the geometric alignment model
• Choosing the descriptor
• Strategy for finding putative matches
• Hypothesis generation strategy
• Relatively large inlier ratio: RANSAC
• Small inlier ratio: locality constraints, Hough transform
• Hypothesis verification strategy
• Size of consensus set, residual tolerance depend on inlier ratio
and expected accuracy of the model
• Possible refinement of geometric model
• Dense verification
Next time: Single-view geometry