Image Processing
Introduction
In an experiment, a series of images was taken to document the aperture field and what
occurred within it. A mixture of water and sand was pumped into the fractured cell, and
the system was oscillated to simulate the effect of an earthquake on a fractured aquifer. A
sequence of 100 images was taken. The camera was also used to image clear and
blue dye solutions as they were pumped into the cell. The averages of these two sets of
100 images are used to create the aperture field, following the method of [Detwiler, et al. 2009]
for obtaining the aperture field of fractured cells. The aperture field was then compared against the
experimental images to determine how the fluid behaves within the fracture cell's
aperture. Before any image processing can occur, one must be certain that the two
sets of images are precisely aligned, which is done with the Matlab function CPSELECT.
The Problem
With each picture taken, the positioning of the image could shift in a variety of ways. The
shifts may have been slight, such as those caused by the camera heating up with use, or
drastic, such as when the Velcro holding the camera in place had to be adjusted or the
camera had to be moved entirely. To analyze the images properly, and therefore measure
the exact movement of water and sand through the field at the moment each image was
taken, each image must be realigned exactly with the aperture field image.
CPSelect
Within Matlab’s Image Processing Toolbox, the command cpselect(input, base) starts the
Control Point Selection Tool, which lets the user choose what are called control points
in two related images. Input is the image that needs to be adjusted to match the
alignment of the base image. In CPSELECT terms, input is the unregistered image and
base is the reference image. Once this command is run, CPSELECT opens the two
images side by side.
Figure 1: Image result of cpselect(r_gray, Iclear_all_gray)
The user then chooses pairs of control points, each pair marking exactly the same
physical point on whatever was being photographed. (This is typically a feature that can
be clearly identified in both the input and the base image.)
Figure 2: Image result of cpselect(r_gray, Iclear_all_gray) with control points selected
The number of control point pairs depends on the type of transformation the image
needs. Different transformations suit different kinds of misalignment, and the right choice
is often not obvious in advance; it is usually a matter of trying various transformations
until the alignment looks right. One example is nonreflective
similarity, which needs only two pairs of control points and applies when shapes in an
image are unchanged but the image itself must be rotated, scaled, or translated. Similarity
calls for three pairs of points and additionally allows the image to be reflected. Another is
piecewise linear, which requires at least four pairs and applies when the image appears
distorted differently in different regions.
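As a sketch of how these minimum pair counts map onto Matlab calls, the control point coordinates below are invented for illustration; only the number of pairs and the third argument to cp2tform change:

```matlab
% Hypothetical control point pairs (n-by-2 arrays of [x y] coordinates).
input_points = [10 12; 200 15; 198 180; 12 178];
base_points  = [11 10; 201 14; 199 179; 13 177];

% Nonreflective similarity: two pairs (rotation, scaling, translation).
t1 = cp2tform(input_points(1:2,:), base_points(1:2,:), 'nonreflective similarity');

% Similarity: three pairs (additionally allows reflection).
t2 = cp2tform(input_points(1:3,:), base_points(1:3,:), 'similarity');

% Piecewise linear: at least four pairs (locally varying distortion).
t3 = cp2tform(input_points, base_points, 'piecewise linear');
```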
[Table: Type of Transformation | Before | After — example images for each transformation type, not reproduced here]
A way to help in choosing the right transformation:
The user also has the option of testing cpselect’s ability to predict points by using
the control point prediction button. After choosing two pairs of points, the user can
click the “control point prediction” button, choose a point (on either the base or the input
image), and see what point cpselect predicts as the corresponding location of the chosen
point. If cpselect is unable to predict the other point of the pair correctly, the user
chooses another pair, tries again, and so on until cpselect predicts the correct point.
Knowing how many pairs are needed before cpselect can properly predict points indicates
how many points the transformation needs, and therefore helps narrow down the
choice of transformation.
(This may not always work, but it is a good method to try.)
After the points have been chosen, CPSELECT returns input_points and base_points. The
user may use these points directly to finish the image adjustment, or, what can be a better
option depending on the images, use cpcorr to compensate for error in the points chosen
within each pair.
To correlate the points, the following syntax is used:
input_pts_adjusted = cpcorr(input_points, base_points, input_image_name(:,:,1), base_image_name);
The remaining code is then:
mytform = cp2tform(input_pts_adjusted, base_points, transformation_type);
registered_image = imtransform(input_image_name, mytform);
Note that imtransform also accepts an optional interpolation argument ('nearest',
'bilinear', or 'bicubic'), and that by default the registered image is not necessarily the
same size as the input image.
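If the registered image must match the base image pixel-for-pixel (for example, for direct comparison), one approach, sketched here under the variable names above, is to fix the output bounds and size explicitly:

```matlab
% Force the registered image onto the base image's pixel grid.
[h, w] = size(base_image_name);
registered_image = imtransform(input_image_name, mytform, ...
    'XData', [1 w], 'YData', [1 h], 'Size', [h w]);
% registered_image now has the same dimensions as the base image
```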
* If cpcorr cannot be used, for example if the images will not allow the points to be correlated,
then input_points is used instead of input_pts_adjusted. For example, cpcorr could not be
used for the Aperture Field Image because, once converted to grayscale, the Aperture
Field Image appears reversed relative to the other images: where the Aperture Field
appears white, the input images appear black, and vice versa. cpcorr uses
normalized cross-correlation to adjust the input and base points selected by the user.
* The original base image can now be compared to the new “registered” image.
The transformation that best aligned the image we were trying to register was piecewise
linear. To compare how well the transformations worked, impixelinfo was used to determine
the pixel coordinates of the corners of a border, and those of a long needle in the
field, in the experimental image versus the aperture field before and after alignment.
Using impixelinfo in Matlab:
imagesc(registered_image)
impixelinfo
Moving the cursor over the displayed image then shows the pixel coordinates, which can
be compared between images.
Aligning Images to Aperture Field
The images from the Flow Sand Oscillation file are aligned to the aperture field image
created earlier. However, because of the color scale of the images, they do not appear in
CPSELECT (see Figure 3 below). The images are therefore converted to an appropriate
gray scale; for these images, the function mat2gray was used. CPSELECT is then run with
the new gray image, and the resulting points are used to align every image in the file.
Figure 3: Image result of cpselect(r,Iclear_all)
Example: converting image “r” to gray scale image “r_gray”:
r_gray = mat2gray(r);
The same is done for Iclear_all, and cpselect(r_gray, Iclear_all_gray) gives Figure 1 above.
mat2gray is one of several functions that properly scale images when converting between
classes and types. mat2gray accepts a numeric matrix and returns an image of class
double with values in the range [0,1].
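For example (the matrix here is invented), mat2gray rescales so that the minimum value maps to 0 and the maximum to 1:

```matlab
A = [0 50; 100 200];   % arbitrary intensity values
G = mat2gray(A);       % min -> 0, max -> 1
% G = [0 0.25; 0.5 1]
```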
For more information, or if a different function is needed, refer to Digital Image Processing
Using Matlab (Gonzalez, Woods, Eddins), the white binder, or search “converting between
image classes” under Matlab’s Help tool.
Algorithm
1. Load base image and image(s) to be registered
2. Run cpselect
cpselect(input_image,base_image)
a. Convert to grayscale if needed:
cpselect(input_gray, base_gray)
b. choose points
3. Save executed input and base points
4. Align images (example code):
for n = a:b
    file = char(FL(n));
    FullFileName = ['/Users/Lab/JeanElkhoury/Experiment_1/Exp_20110330_FlowSandOscillations/' file];
    im = imread(FullFileName);
    r = double(squeeze(im(:,:,1)));   % red channel as a double matrix
    r_gray = mat2gray(r);             % scale to [0,1]
    % correlate points to correct error in the chosen pairs
    input_pts_adj = cpcorr(input_pts_r_gray_4pair, base_pts_Iclear_all_gray_4pair, r_gray, Iclear_all_gray);
    mytform = cp2tform(input_pts_adj, base_pts_Iclear_all_gray_4pair, 'piecewise linear');
    registered = imtransform(r_gray, mytform);
end
5. Check if the images have been properly aligned using impixelinfo:
imagesc(registered_image)
impixelinfo
6. Save registered images
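Step 6 can be done with imwrite; the output directory and filename pattern below are assumptions, and the [0,1] double image from step 4 is written directly:

```matlab
outDir = 'registered';           % hypothetical output directory
if ~exist(outDir, 'dir')
    mkdir(outDir);
end
% registered is the [0,1] double image from step 4; n is the loop index
imwrite(registered, fullfile(outDir, sprintf('registered_%04d.png', n)));
```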
