IMAGE SIMILARITY USING FOURIER TRANSFORM

Siddharth Narayanan¹, P K Thirivikraman²
¹Research Center Imarat, DRDO Hyderabad, India 500069
²Birla Institute of Technology & Science Hyderabad, India 500078
ABSTRACT
In this paper, a similarity measure for images based on values from their respective Fourier
transforms is proposed for image registration. The approach uses image content to generate
signatures; it is not based on image annotation and therefore requires no human assistance. It
uses both the real and imaginary components of the FFT to compute the final rank for measuring
similarity. Any robust approach must accurately represent all objects in an image, and depending on
the size of the image data set, diverse techniques may need to be followed. This paper discusses the
implementation of a similarity rating scheme through the OpenCV library and introduces a metric
for comparison, computed from intersection bounds of a covariance matrix of the two compared
images with normalized values of the magnitude and phase spectrum. Sample results on a
test collection are given along with data from existing methods of image histogram comparison.
Results show that this method is particularly advantageous for images with varying lighting
conditions.
Keywords: Discrete Fourier Transform (DFT), Fast Fourier Transform (FFT), Feature extraction,
Histogram Intersection, Image Signature
1. INTRODUCTION
Several approaches have been proposed to the problem of identifying image features that
could be used as objective indicators of their contents. One set of approaches requires explicit image
annotation by humans to mark objects, regions and shapes. The other set contains methods for
automatic extraction of physical properties such as color and contour, computed as values and
distributions that are used in image classification. Automatic feature extraction in many cases
provides good performance in large databases and in applications where the cost of image
processing must be kept low. A specific area of application is retrieval by similarity in thematic
databases. For retrieval by similarity, a query is made of one or more parts of an image. The process
of procuring similar images is based on selected visual properties according to an appropriate
metric. The content of a thematic database is specific to a certain domain. In this context, it is
assumed that the thematic nature allows queries to be targeted to visual aspects rather than to content
interpretation.
Two major approaches can be followed to extract information about texture and line
direction in an image. The first is based on images segmented by an edge-finding procedure, while
the second uses full-color or black-and-white multilevel images. This paper proposes an approach to
the problem of finding the distribution of line directions in an image by analyzing its Fourier
transform. As noted by several authors, the 2D Fourier power spectrum preserves the direction
information of an image [1]. Once the Fourier transform is computed, its frequency domain
representation can be scanned and the required values generated.
This paper is organized as follows. In Section 2 a brief overview of the current literature is
given. Section 3 discusses the use of the FFT algorithm as a means for matching similar images.
Section 4 describes the implementation technique, and Section 5 presents the results of the retrieval
technique based on similarity. Conclusions are given in Section 6 and references follow.
2. RESEARCH ELABORATIONS
2.1 IMAGE ANALYSIS
The large objects in an image are usually relevant for interpreting the image. They can be
assumed to be most frequently localized in central areas, like foreground objects, or to span an
entire image, as in the case of recurrent shapes or landscapes. These objects contribute to the low
frequency information of the image [2]. Higher frequencies mostly come from small details and
fine-grained textures. In the frequency domain, continuous and low-frequency components tend to
be located at the center, while higher frequencies, related to fast variations in the image, are located
close to the border. Most of the data is located near the origin, and image information is largely
related to larger objects rather than to details and continuous background.
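The centered view described here corresponds to the usual practice of swapping the quadrants of the raw DFT output, whose zero-frequency term sits at the top-left corner. A minimal OpenCV C++ sketch of that rearrangement follows; the helper name is our own choice, not part of the paper's code, and even image dimensions are assumed.

#include <opencv2/opencv.hpp>

// Rearrange the quadrants of a DFT magnitude image so that the
// zero-frequency (low-frequency) components end up at the centre.
// Assumes even width and height.
static void centerSpectrum(cv::Mat& mag)
{
    const int cx = mag.cols / 2;
    const int cy = mag.rows / 2;

    cv::Mat q0(mag, cv::Rect(0,  0,  cx, cy));   // top-left
    cv::Mat q1(mag, cv::Rect(cx, 0,  cx, cy));   // top-right
    cv::Mat q2(mag, cv::Rect(0,  cy, cx, cy));   // bottom-left
    cv::Mat q3(mag, cv::Rect(cx, cy, cx, cy));   // bottom-right

    cv::Mat tmp;
    q0.copyTo(tmp); q3.copyTo(q0); tmp.copyTo(q3);   // swap diagonally opposite quadrants
    q1.copyTo(tmp); q2.copyTo(q1); tmp.copyTo(q2);
}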
2.2 RETRIEVAL SYSTEMS
Comparing two images, or an image and a model, is the fundamental operation of any
retrieval system [3]. Retrieval by similarity refers to retrieving images which are similar to a
given image; this can also be called retrieval by example, or retrieval against a model or schema.
Retrieval by similarity requires a specific definition of what similar means. Representation of image
features - like color, texture, shape, motion, etc. - is a fundamental problem in visual information
retrieval. Algorithms for pattern recognition and image analysis provide the means to extract
numeric descriptors, giving a quantitative measure of these features. Computer vision enables object
and motion identification by comparing extracted patterns with predefined models. Coupling it with
localization techniques would allow autonomous robots to globally localize themselves, reliably
track their positions and recover from localization failures. Such a system would also be robust
against distortion and occlusions.
Today, similarity queries arise naturally in a large variety of applications such as electronic
catalogues, medical databases for ECG, X-ray and TAC, weather prediction and criminal
investigations. The drawbacks of traditional approaches can be overcome through similarity search
using numerical features computed by direct analysis of the information content. Content-Based
Image Retrieval (CBIR) systems were proposed in the early 1990s; they use visual features to
represent the image content. This approach allows features to be computed automatically, and the
information used during the retrieval process is always consistent because it does not depend on
human interpretation. The user supplies a query image, or selects a prototype image, and searches
for something similar. The result is a list of images sorted by increasing or decreasing values of
similarity relative to the query image, depending on the method of comparison. Retrieval is
immediate, based on an appropriate
predefined similarity criterion, and able to measure the grade of similarity between two images using
only low-level image properties (i.e. no human expert provides additional information).
Moreover, an efficient way to obtain the database images most similar to the query has to be defined.
At query time, the feature vector of the query image is computed and the database is searched for the
most similar feature vectors.
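As an illustration of this query-time step, the search over stored feature vectors can be a simple linear scan ranked by distance. The sketch below uses generic names of our own and an arbitrary distance function; it is not taken from any specific CBIR system.

#include <algorithm>
#include <vector>

// Hypothetical feature-vector record for one database image.
struct DbEntry {
    int id;                       // database image identifier
    std::vector<float> feature;   // precomputed signature
};

// Rank all database entries by increasing distance to the query signature.
std::vector<std::pair<float, int>>
rankBySimilarity(const std::vector<float>& query,
                 const std::vector<DbEntry>& db,
                 float (*distance)(const std::vector<float>&, const std::vector<float>&))
{
    std::vector<std::pair<float, int>> ranked;
    for (const DbEntry& e : db)
        ranked.push_back({distance(query, e.feature), e.id});
    std::sort(ranked.begin(), ranked.end());   // smallest distance first
    return ranked;
}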
2.3 SIGNAL DISCRETIZATION
The squared time signal $x^2(t)$ represents how the energy contained in the signal distributes
over time, while the squared spectrum $X^2(f)$ represents how the energy distributes over frequency
(therefore the term power density spectrum). Obviously, the same amount of energy is contained in
either the time or the frequency domain, as indicated by Parseval's formula (1).

$$\int_{-\infty}^{\infty} |x(t)|^2 \, dt = \int_{-\infty}^{\infty} |X(f)|^2 \, df \qquad (1)$$
Signal discretization is used to process a given continuous physical signal. In time/frequency
domain computer processing, the signal is digitized and stored in a digital computer to be processed
later by any desired algorithm with maximum flexibility; it is therefore not a real-time process. The
processing can be carried out either in the time domain or in the frequency domain. For the latter, the
Fast Fourier Transform algorithm is used to transform the data between the time and frequency
domains. The physical signal is truncated and sampled before it can be further analyzed and
processed numerically by a digital computer.
The DFT of an N-point sequence, expressed through the N by N transform matrix W, can be written as (2).

$$X[n] = \frac{1}{\sqrt{N}} \sum_{m=0}^{N-1} x[m]\, e^{-j 2\pi m n / N} = \sum_{m=0}^{N-1} w_N^{mn}\, x[m] = \sum_{m=0}^{N-1} w[n,m]\, x[m], \qquad n = 0, 1, \ldots, N-1 \qquad (2)$$

Similarly, (3) represents the inverse transform.

$$x[m] = \frac{1}{\sqrt{N}} \sum_{n=0}^{N-1} X[n]\, e^{\,j 2\pi m n / N} = \sum_{n=0}^{N-1} w^{*}[n,m]\, X[n], \qquad m = 0, 1, \ldots, N-1 \qquad (3)$$
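For reference, a direct evaluation of (2) makes the quadratic cost discussed next explicit. The sketch below is our own, uses the unitary 1/√N normalization of (2), and notes the discrete counterpart of Parseval's relation (1) in a comment.

#include <cmath>
#include <complex>
#include <vector>

// Direct O(N^2) evaluation of the unitary DFT defined in (2); a reference sketch only.
std::vector<std::complex<double>> dftDirect(const std::vector<std::complex<double>>& x)
{
    const std::size_t N = x.size();
    const double PI = std::acos(-1.0);
    std::vector<std::complex<double>> X(N);

    for (std::size_t n = 0; n < N; ++n) {
        std::complex<double> sum(0.0, 0.0);
        for (std::size_t m = 0; m < N; ++m) {
            const double angle = -2.0 * PI * double(m) * double(n) / double(N);
            sum += x[m] * std::complex<double>(std::cos(angle), std::sin(angle));
        }
        X[n] = sum / std::sqrt(double(N));   // unitary 1/sqrt(N) factor from (2)
    }
    // Discrete counterpart of Parseval's relation (1): with this normalization,
    // sum_m |x[m]|^2 equals sum_n |X[n]|^2 up to rounding error.
    return X;
}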
The computational complexity of the DFT is $O(N^2)$. The Fast Fourier Transform algorithm allows
the complexity to be much improved, based on equations (4) to (6).

$$w_N^{kN} = e^{-j 2\pi k N / N} = e^{-j 2\pi k} \equiv 1 \qquad (4)$$

$$w_{2N}^{2k} = e^{-j 2\pi \cdot 2k / 2N} = e^{-j 2\pi k / N} \equiv w_N^{k} \qquad (5)$$

$$w_{2N}^{N} = e^{-j 2\pi N / 2N} = e^{-j\pi} \equiv -1 \qquad (6)$$
The complexity is therefore reduced from $O(N^2)$ to $O(N \log_2 N)$. This major improvement in
computational cost makes the Fourier transform practical in many applications.
C code for FFT (wrapped here as a self-contained routine; the SWAP macro and the declarations are assumed, since the original listing was a fragment):

#include <math.h>

#define SWAP(a, b) { float t = (a); (a) = (b); (b) = t; }

/* In-place radix-2 FFT of the N = 2^m point sequence held in xr[] (real part)
   and xi[] (imaginary part). Pass a non-zero value for inverse to compute the
   (unscaled) inverse transform. */
void fft(float xr[], float xi[], int N, int m, int inverse)
{
    int i, j, k, n, j1;
    float w, c, s, tempr, tempi;
    const float Pi = 3.14159265f;

    for (i = 0; i < N; ++i) {                  /* bit reversal */
        j = 0;
        for (k = 0; k < m; ++k)
            j = (j << 1) | (1 & (i >> k));
        if (j < i) {
            SWAP(xr[i], xr[j]);
            SWAP(xi[i], xi[j]);
        }
    }

    for (i = 0; i < m; i++) {                  /* log2(N) stages */
        n = 1 << i;                            /* half-size of each butterfly group */
        w = Pi / n;
        if (inverse) w = -w;
        k = 0;
        while (k < N - 1) {                    /* step through all N components */
            for (j = 0; j < n; j++) {          /* butterflies within each section */
                c = cos(-j * w);
                s = sin(-j * w);
                j1 = k + j;
                tempr = xr[j1 + n] * c - xi[j1 + n] * s;
                tempi = xi[j1 + n] * c + xr[j1 + n] * s;
                xr[j1 + n] = xr[j1] - tempr;
                xi[j1 + n] = xi[j1] - tempi;
                xr[j1] = xr[j1] + tempr;
                xi[j1] = xi[j1] + tempi;
            }
            k += 2 * n;
        }
    }
}
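A brief usage sketch for the routine above (the fft() wrapper signature is an assumption of this write-up, not part of the paper): because the butterflies are unscaled, a forward pass followed by an inverse pass returns N times the original samples, so a final division by N recovers the input.

#include <stdio.h>

int main(void)
{
    float xr[8] = {1, 2, 3, 4, 4, 3, 2, 1};   /* real part of an 8-point signal */
    float xi[8] = {0, 0, 0, 0, 0, 0, 0, 0};   /* imaginary part */
    int i;

    fft(xr, xi, 8, 3, 0);                     /* forward transform: N = 8, m = log2(N) = 3 */
    fft(xr, xi, 8, 3, 1);                     /* unscaled inverse transform */

    for (i = 0; i < 8; ++i)                   /* divide by N to recover the input values */
        printf("%.3f %.3f\n", xr[i] / 8.0f, xi[i] / 8.0f);
    return 0;
}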
3. PROPOSED METHOD
Many comparison methods currently exist, such as Correlation, Chi-Square, Bhattacharyya
distance and Intersection, which provide numerical parameters expressing how well two image
histograms match each other. This paper proposes a new metric that uses the respective FFTs of two
given images to compare them for similarity, producing a single numerical parameter that expresses
how well the images match. The approach is useful also because it allows values to cancel out or
negate one another: when considering both the real and imaginary matrices of the FFT, negative
components help keep the values low and make for a much simpler metric.
$$\frac{\left(\sum_{i=1}^{N} F_{1i} F_{2i} - N \bar{F}_1 \bar{F}_2\right)^{2}}{\left(\sum_{i=1}^{N} |F_{1i}|^{2} - N \bar{F}_1^{2}\right)\left(\sum_{i=1}^{N} |F_{2i}|^{2} - N \bar{F}_2^{2}\right)} \qquad (7)$$
The metric for comparison is computed using the formula in (7). For each frequency, an
intensity value is obtained from the real and imaginary parts of the Fourier transform. A suitable
image size (512 px × 512 px) is used to maintain uniformity when generating the required results.
$F_{1i}$ is the intensity value of the $i^{th}$ pixel of the first image, while $F_{2i}$ is the intensity
value of the $i^{th}$ pixel of the second image. $\bar{F}_1$ and $\bar{F}_2$ represent the average
frequency values of each image.
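A minimal sketch of how (7), as printed, could be evaluated once the per-frequency intensity values of both images have been collected into flat arrays; the function name and data layout are our own assumptions, not the paper's implementation.

#include <vector>

// Evaluate the rank metric (7) from two equal-length arrays of per-frequency
// intensity values. Assumes both arrays have the same size and that neither
// image is constant (otherwise the denominator would be zero).
double fftRank(const std::vector<double>& f1, const std::vector<double>& f2)
{
    const std::size_t N = f1.size();
    double mean1 = 0.0, mean2 = 0.0;
    for (std::size_t i = 0; i < N; ++i) { mean1 += f1[i]; mean2 += f2[i]; }
    mean1 /= N; mean2 /= N;

    double cross = 0.0, s1 = 0.0, s2 = 0.0;
    for (std::size_t i = 0; i < N; ++i) {
        cross += f1[i] * f2[i];
        s1 += f1[i] * f1[i];
        s2 += f2[i] * f2[i];
    }
    const double num = cross - N * mean1 * mean2;              // numerator of (7) before squaring
    const double den = (s1 - N * mean1 * mean1) * (s2 - N * mean2 * mean2);
    return (num * num) / den;                                  // equation (7)
}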
The input base image is loaded along with the images to be compared with it, and all images are
converted to grayscale. The images are then converted to the CV_32F format to capture intensity
values as floating-point data for more accurate computation; the normalized float values lie in the
range 0 to 1 and offer much finer resolution than the 8-bit integer range. The histogram of every
image is calculated and normalized so that the histograms can be compared. The histogram of the
base image is then compared against all the other test histograms, and the numerical matching
parameters are obtained.
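The histogram step just described might look as follows with the OpenCV 2.4-era C++ API; the bin count and value range below are illustrative assumptions rather than settings stated in the paper.

#include <opencv2/opencv.hpp>
#include <string>

// Compute a normalized grayscale histogram for one image
// (illustrative parameters; not the paper's exact configuration).
cv::Mat normalizedHistogram(const std::string& path)
{
    cv::Mat gray = cv::imread(path, 0);            // 0 = load as grayscale

    cv::Mat gray32f;
    gray.convertTo(gray32f, CV_32F, 1.0 / 255.0);  // intensities now in [0, 1]

    int histSize = 256;                            // number of bins (assumed)
    float range[] = { 0.0f, 1.0f };                // CV_32F intensity range
    const float* ranges[] = { range };

    cv::Mat hist;
    cv::calcHist(&gray32f, 1, 0, cv::Mat(), hist, 1, &histSize, ranges);
    cv::normalize(hist, hist, 0, 1, cv::NORM_MINMAX, -1, cv::Mat());
    return hist;
}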
Fig 1: Values ranging from 0 to 255 for a 20 × 20 pixel image in CV_8U format. A similar mapping
is generated in CV_32F.
4. IMPLEMENTATION
The proposed technique has been implemented using the OpenCV library and the Microsoft
Visual C++ suite. The current code base has been configured to support grayscale images from a
sample set of 300 images. The CV_8U format of the OpenCV library is useful for displaying the
final FFT and filtered results; Figure 1 shows the intensity mapping for a sample image in this format
[4]. During the calculations in this study, however, the restricted set of values of this format (i.e.
intensities in the range 0 - 255) did not provide very accurate results, so the CV_32F format was
employed instead for its wider range of allowed values.
The proposed metric is also compared with the results of existing metrics for computing the match
between image histograms. The three techniques used for analysis in this study are:
Correlation:

$$d(P_1, P_2) = \frac{\sum_{I} \left(P_1(I) - \bar{P}_1\right)\left(P_2(I) - \bar{P}_2\right)}{\sqrt{\sum_{I} \left(P_1(I) - \bar{P}_1\right)^{2} \sum_{I} \left(P_2(I) - \bar{P}_2\right)^{2}}} \qquad (8)$$

Intersection:

$$d(P_1, P_2) = \sum_{I} \min\left(P_1(I), P_2(I)\right) \qquad (9)$$
Bhattacharyya distance:

$$d(P_1, P_2) = \sqrt{1 - \frac{1}{\sqrt{\bar{P}_1 \bar{P}_2 N^{2}}} \sum_{I} \sqrt{P_1(I) \cdot P_2(I)}} \qquad (10)$$
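For reference, measures (8) to (10) are available directly through cv::compareHist; a short sketch using the OpenCV 2.4-style constants follows (the surrounding function is our own, not the paper's code).

#include <opencv2/opencv.hpp>
#include <cstdio>

// Compare two normalized histograms with the three measures (8)-(10).
void compareWithStandardMetrics(const cv::Mat& baseHist, const cv::Mat& testHist)
{
    double corr  = cv::compareHist(baseHist, testHist, CV_COMP_CORREL);         // (8)
    double inter = cv::compareHist(baseHist, testHist, CV_COMP_INTERSECT);      // (9)
    double bhat  = cv::compareHist(baseHist, testHist, CV_COMP_BHATTACHARYYA);  // (10)
    std::printf("correlation %f  intersection %f  bhattacharyya %f\n", corr, inter, bhat);
}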
Fig 2: (a) The resized image. (b) The FFT function in OpenCV allows us to view the resulting
magnitude and (c) phase spectrum images. (d) Output values of a 2 × 2 matrix for the real and
imaginary components of the image.
4.1 APPLICATION OF FILTERS
In the frequency domain, all image components are represented as the sum of periodic
functions characterized by different periods, centered on the zero-frequency component; they
therefore retain information only about the direction of image components. Image mirroring
corresponds to mirroring of the spectrum, while image rotation rotates the spectrum by the same
angle. In all three cases (translation, rotation and mirroring), noise and aliasing may be introduced
due to the discrete nature of the image pixels, which in turn may add or exclude certain objects and
thus modify the overall content of the image. Noise and aliasing can be reduced through the
application of filters before the FFT computation.
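As one possible realization of this filtering step, a small Gaussian blur can be applied before the DFT to suppress high-frequency noise; the kernel size below is an arbitrary choice for the sketch, not a value given in the paper.

#include <opencv2/opencv.hpp>

// Smooth a CV_32F grayscale image, then compute its complex DFT.
cv::Mat filteredSpectrum(const cv::Mat& gray32f)
{
    cv::Mat smoothed;
    cv::GaussianBlur(gray32f, smoothed, cv::Size(5, 5), 0);   // low-pass filter (assumed kernel)

    cv::Mat planes[] = { smoothed, cv::Mat::zeros(smoothed.size(), CV_32F) };
    cv::Mat complexImg;
    cv::merge(planes, 2, complexImg);                         // real + imaginary planes
    cv::dft(complexImg, complexImg);                          // forward DFT
    return complexImg;                                        // 2-channel result: real, imaginary
}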
4.2 COMPUTATION OF FOURIER TRANSFORM
The procedure discussed in this paper is based on the FFT computation and its interpretation.
The computation of the FFT is a basic operation in image and signal processing [5]. The total number
of floating point operations needed for the FFT of an N-point sequence is $N \log_2 N$. Thus, for an
$M \times N$ pixel image, the total number of operations required is $M \times N\,(\log_2 M + \log_2 N)$.
Applying this to the image size used in this study yields:

$$512 \times 512 \times (\log_2 512 + \log_2 512) = 4{,}718{,}592 \ \text{operations}$$
5. RESULTS
The following images are taken as input to test the proposed metric, and the results are compared
with existing comparison methods from the OpenCV library. A perfect match is obtained when the
base image histogram is compared with itself, while test images with very different lighting
conditions do not produce very good matches. The numerical results of the study are presented in
the following table.
Table 1: The results of the comparative study are presented for all the four approaches implemented
using C++ on OpenCV
Image    Correlation    Intersection    Bhattacharyya distance    Rank - FFT
Car 1    1              37.348704       0                         0.995642
Car 2    0.065667       8.238745        0.782008                  0.455956
Car 3    0.031694       1.203044        0.831347                  0.410324
Car 4    0.009159       1.111309        0.896255                  -0.22937
Car 5    -0.017561      2.113258        0.896092                  -2.1178
Car 6    -0.0116        0.785085        0.927125                  -5.32439
Car 7    0.011672       0.918962        0.914182                  0.264427
Car 8    0.108535       4.70819         0.705022                  0.643458
Car 9    0.016492       0.57914         0.908365                  0.32495
Fig 3: Car 1 is selected as the base image and should yield the highest similarity measure with itself
in all approaches.
For the Correlation and Intersection methods, the higher the metric, the more accurate the match,
while for the Bhattacharyya distance, the lower the result, the better the match [6]. As expected, the
match of the base image with itself is the highest, and the matches of the other images with respect
to the base are comparatively worse. When collating the data for all images, this method was found
to be especially robust for images with similar image components but differences in lighting. The
example images in this study use cars in a particular orientation to highlight this feature. These
images are first resized and padded to a 512 px × 512 px image size and provided as input
parameters to the implemented FFT rank function.
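A sketch of the resize-and-pad step under our own assumptions (the paper does not state the interpolation mode or the border value used):

#include <algorithm>
#include <opencv2/opencv.hpp>

// Bring an image to exactly 512 x 512: scale the longer side down to 512,
// then pad the remainder with zeros (assumed border handling).
cv::Mat resizeAndPad512(const cv::Mat& gray)
{
    const int target = 512;
    double scale = double(target) / std::max(gray.cols, gray.rows);

    cv::Mat scaled;
    cv::resize(gray, scaled, cv::Size(), scale, scale, cv::INTER_AREA);

    cv::Mat padded;
    cv::copyMakeBorder(scaled, padded,
                       0, target - scaled.rows,     // top, bottom
                       0, target - scaled.cols,     // left, right
                       cv::BORDER_CONSTANT, cv::Scalar::all(0));
    return padded;
}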
5.1 TRANSLATION
In the frequency domain, all the image components are represented as the sum of periodic
functions characterized by different periods, centered about the zero-frequency component. Being
centered at the origin, the image components do not retain any information about their original
position, but only information about their direction. Image translation can be taken into account by
calculating it in terms of the angular spectrum computed on the Fourier transform.
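This property can be checked numerically: a circular shift of the image changes only the phase of its DFT, leaving the magnitude spectrum unchanged up to floating-point error. The helper functions below are our own illustration, not part of the paper's code, and assume a single-channel CV_32F image and non-negative shifts.

#include <opencv2/opencv.hpp>

// Circularly shift a CV_32F image by (dx, dy), dx >= 0 and dy >= 0.
static cv::Mat circularShift(const cv::Mat& src, int dx, int dy)
{
    cv::Mat dst(src.size(), src.type());
    for (int y = 0; y < src.rows; ++y)
        for (int x = 0; x < src.cols; ++x)
            dst.at<float>((y + dy) % src.rows, (x + dx) % src.cols) = src.at<float>(y, x);
    return dst;
}

// Magnitude spectrum of a CV_32F image.
static cv::Mat dftMagnitude(const cv::Mat& img32f)
{
    cv::Mat planes[] = { img32f, cv::Mat::zeros(img32f.size(), CV_32F) };
    cv::Mat complexImg, mag;
    cv::merge(planes, 2, complexImg);
    cv::dft(complexImg, complexImg);
    cv::split(complexImg, planes);
    cv::magnitude(planes[0], planes[1], mag);
    return mag;
}

// Check: cv::norm(dftMagnitude(img) - dftMagnitude(circularShift(img, 20, 20)))
// is zero up to floating-point error for any CV_32F image img.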
5.2 ROTATION AND MIRRORING
Image rotation corresponds to a shift along the angular axis of the spectrum (the 2D spectrum is
rotated by the same angle), and image mirroring corresponds to mirroring of the spectrum. It is
important to note that the discrete nature of the image pixels may introduce noise and aliasing, and
that translations and rotations may modify the overall content of the image by adding or excluding
some objects. Like image translation, rotation and mirroring can also be accounted for by
interpreting them in terms of the angular spectrum computed on the Fourier transform.
6. CONCLUSION
This paper discusses the use of the intersection of normalized values of the magnitude and phase
spectra from the FFTs of two compared images as a means of comparing image content. The
comparison is performed between a reference image, taken as the query, and a collection of images.
In order to be effective, the collection must be semantically homogeneous so that visual similarity
can serve as a surrogate for content match. The database population can be executed in acceptable
time using almost any programming environment. Results based on a sample test collection are
provided along with a comparison with more established techniques. The approach is suitable for
image retrieval applications where the graphical content or layout bears most of the information,
and it was also found to be useful in applications with differing lighting conditions. While initial
tests in specific domains such as the analysis of human signatures and fingerprints are promising,
the method has to be compared with safer and more established techniques to further refine the
process.
REFERENCES
1. S. K. Mitra and J. F. Kaiser, Handbook for Digital Signal Processing, John Wiley & Sons, 1993;
Wachs, J.P., Kölsch, M., Stern, H., Edan, Y., Vision-Based Hand-Gesture Applications,
Communications of the ACM, 54(2) (2011) 60-71.
2. Nicu Sebe, Similarity Matching in Computer Vision and Multimedia, Computer Vision and
Image Understanding (2008), online at www.sciencedirect.com [Accessed 22 January 2014].
3. Augusto Celentano et al., A FFT based technique for image signature generation, Università Ca'
Foscari di Venezia [Accessed 2014].
4. OpenCV 2.4.9.0 documentation » OpenCV Tutorials » imgproc module. Image Processing »
Histogram Calculation, http://docs.opencv.org/doc/tutorials/imgproc/histograms/histogram_calculation/histogram_calculation.html
[Accessed 4 November 2014].
5. fourier.eng.hmc.edu/e101/lectures/image_processing/node2.html [Accessed 14 November
2014].
6. OpenCV 2.4.9.0 documentation » OpenCV Tutorials » imgproc module. Histogram Comparison,
http://docs.opencv.org/doc/tutorials/imgproc/histograms/histogram_comparison/histogram_comparison.html
[Accessed 4 December 2014].
7. C. Basavaraju and Dr. Chandrakanth.H.G, “FFT Based Spectrum Analysis Model For An
Efficient Spectrum Sensing” International Journal of Advanced Research in Engineering &
Technology (IJARET), Volume 5, Issue 12, 2014, pp. 87 - 96, ISSN Print: 0976-6480, ISSN
Online: 0976-6499.
8. Dr. Hanaa m. A. Salman, “Information Hiding In Edge Location of Video Using Amalgamate
Fft and Cubic Spline” International journal of Computer Engineering & Technology (IJCET),
Volume 4, Issue 4, 2013, pp. 240 - 247, ISSN Print: 0976 – 6367, ISSN Online: 0976 – 6375.
9. Priyanka Chauhan and Girish Chandra Thakur, “Efficient Way of Image Encryption Using
Generalized Weighted Fractional Fourier Transform with Double Random Phase Encoding”
International Journal of Advanced Research in Engineering & Technology (IJARET),
Volume 5, Issue 6, 2014, pp. 45 - 52, ISSN Print: 0976-6480, ISSN Online: 0976-6499.
10. Prof. Maher K. Mahmood and Jinan N. Shehab, “Image Encryption and Compression Based
on Compressive Sensing and Chaos”, International journal of Computer Engineering &
Technology (IJCET), Volume 5, Issue 1, 2014, pp. 68 - 84, ISSN Print: 0976 – 6367, ISSN
Online: 0976 – 6375.