IJRET: International Journal of Research in Engineering and Technology eISSN: 2319-1163 | pISSN: 2321-7308
Volume: 02 Issue: 11 | Nov-2013
VIEW AND ILLUMINATION INVARIANT ITERATIVE BASED IMAGE
MATCHING
Y. Ramesh¹, J. Sofia Priya Dharshini²
¹Student, ²Associate Prof., Department of ECE, RGMCET, Nandyal, Kurnool Dist., Andhra Pradesh, INDIA
ramesh7792@gmail.com, sofi_prida@yahoo.co.in
Abstract
The main challenges in local-feature-based image matching are variations of view and illumination. Various methods have recently been proposed to address these problems by using invariant feature detectors and distinctive descriptors. However, the matching performance is still unstable and inaccurate, particularly when a large variation in view or illumination occurs. In this paper, we propose a view and illumination invariant image matching method. We iteratively estimate the relationship between the relative view and illumination of the images, transform the view of one image to the other, and normalize their illumination for accurate matching. The matching performance is significantly improved and is not affected by changes of view and illumination within a valid range. The proposed method fails when the initial view and illumination estimation fails, which gives us a new insight for evaluating the traditional detectors. We propose two novel indicators for detector evaluation, namely, the valid angle and the valid illumination, which reflect the maximum allowable change in view and illumination, respectively.
Keywords: feature detector evaluation, image matching, iterative algorithm.
-----------------------------------------------------------------------***-----------------------------------------------------------------------
1. INTRODUCTION
Image matching is a fundamental issue in computer vision. It
has been widely used in tracking, image stitching, 3-D
reconstruction, simultaneous localization and mapping
(SLAM) systems, camera calibration, object classification,
recognition, and so on. Image matching aims to find the correspondence between two images of the same scene or object in different poses, illumination, and environments. In this
paper, we focus on local feature-based image matching. The
challenges of this work reside in stable and invariant feature
extraction from varying situations and robust matching.
Generally speaking, the framework of region-feature-based image matching consists of three steps. (1) Detecting stable regions: interesting points are extracted from the images, and the region of interest is the associated circular (or elliptical) region around each interesting point. Generally, researchers use corners (Harris [1], SUSAN [2], CSS [3], etc.) or centers of salient regions (SIFT [4], SURF [5], DoH [6], HLSIFD [7], etc.) as the interesting points since they are stable and easy to locate and describe. The radius of the region is determined by an a priori setting (Harris corner) or the region scale (scale-invariant features). The total number of features detected is the minimum number of features extracted from the matched images. (2) Describing regions: color, structure, and texture are widely used to describe images in the recent literature. Descriptors with edge orientation information (SIFT and HOG) are also very popular since they are more robust to scale, blur, and rotation. (3) Matching features: local features
from two images are first matched when they are the nearest
pair. A handful of distances can be used in practice, such as the L1 distance, L2 distance, histogram intersection distance [8], and earth mover's distance [9]. If the nearest distance is higher than k times (k ∈ (0, 1), set empirically) the second-nearest distance, the nearest matching pair is removed. These are
the very initial matching results.
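As a concrete illustration of this nearest-neighbour matching with the distance-ratio test, the following sketch (NumPy assumed; the function and variable names are ours, not the paper's) matches two descriptor arrays:

```python
import numpy as np

def ratio_match(desc_a, desc_b, k=0.8):
    """Nearest-neighbour matching with the distance-ratio test described above.

    desc_a, desc_b: (N, D) and (M, D) descriptor arrays (e.g. SIFT vectors).
    A pair (i, j) is kept only if the nearest distance is below k times the
    second-nearest distance; otherwise the candidate match is discarded.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # L2 distance to every descriptor in B
        order = np.argsort(dists)
        nearest, second = dists[order[0]], dists[order[1]]
        if nearest < k * second:                    # ratio test, k in (0, 1)
            matches.append((i, int(order[0])))
    return matches
```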
Fig. 1: Illustration of the proposed matching algorithm. Ir and It are the images to be matched. Ie is simulated from It by transformation T. Ir is difficult to match with It because of the difference in viewpoint and illumination, whereas Ie is easier to match with Ir since they are closer in the parameter space.
The three parts of the detect–describe–match (DDM)
framework determine the performance of image matching.
The first step is the basis of this framework. Unstable and
variant features increase the difficulties of the next two steps.
Researchers mostly focus on the first step for invariant feature
extraction and have proposed many excellent detectors [1],
[4], [5], [7]. However, an important lesson from previous work is that none of the aforementioned feature detectors is strictly invariant to changes of view and illumination. For larger changes, few invariant features can be extracted from both images to be matched. This motivates us to consider the essential difference between images with different view and illumination. A question needs to be answered: does an object in two images with different views and illumination look like the same one? Suppose there are two images with a large view change, as shown in Fig. 1. The two top images show the same object in different views. They are so different in appearance that they could be considered two different objects. We do not attempt to find invariant local feature detectors as in previous work but instead focus on a better framework for image matching.
2. VIEW AND ILLUMINATION INVARIANT
IMAGE MATCHING
2.1. General Definition of Image Matching
Two images of the same object or scene can be regarded as two points in the parameter space P of the object (scene). Let I be the original appearance of an object and let I_o = L(H(I)) be the real appearance of the object shown in an image, where L indicates the illumination and H is the object transformation from a normal pose. Here, we define the parameter space of a given image I as P_I = {H, L} (simply written as P = {H, L} in the following). The pair (H, L) is a point in this parameter space; thus, the observed image is represented as a point in the parameter space spanned by object I. Therefore, the purpose of image matching is to find the transformation T between the two points in the parameter space, {H_1, L_1} → {H_2, L_2}, or, in other words, I_2 = T(I_1) with T = {H_21, L_21}; the purpose is to find the coordinate difference between the two points. The norm of this space is difficult to define since the illumination factor L and the transformation H are totally independent and cannot be combined together. In this paper, we only use images of planar objects; therefore, H is the homography transform matrix, and L is the histogram matching function that transforms the histogram of one image into a specific one.
2.2. Proposed Method
Denote the reference image and the test image to be matched as I_r and I_t. Suppose that the true pose transformation matrix from I_r to I_t is H and the illumination change function is L. The relationship between I_r and I_t is

I_t(x̃) = T(I_r)(x̃) = L[H(I_r(x̃))] = L[I_r(H x̃)]    (1)

where T is the true transformation between I_r and I_t, x̃ denotes homogeneous coordinates, and x̃ = (x, y, 1)^T. If there exist approximate estimations L̂ and Ĥ of the illumination and the transformation, then I_t can be transformed into an estimated image I_e, i.e.,

I_e(x̃) = T̂(I_t)(x̃) = L̂[I_t(Ĥ x̃)]    (2)

where Ĥ denotes the viewpoint transformation and L̂ denotes the illumination transformation. If T̂ is not a very rough estimation of the transformation between I_r and I_t, the estimated image I_e will be more similar to I_r than I_t itself is; in other words, I_e is closer to I_r than to I_t. Thus, the matching between I_e and I_r will be easier, as shown in Fig. 1.

In this way, we propose the following iterative image matching process:

I_1 = T_1(I_0) = L_1[I_0(H_1 x̃)],    I_0 = I_t
I_i = T_i(I_{i-1}) = L_i[I_{i-1}(H_i x̃)],    i > 1    (3)
2.3. Algorithm 1: The Proposed Method
Initial: T_0 = {H_0, L_0} = {E, 1_256}, I_0 = I_t, σ_H, σ_L;
Iterate
  i = i + 1;
  Estimate T_i: H_i, L_i;
  T = T_i ∘ T;  H = H_i · H;
  Transform I_{i-1} to I_i by Eq. (3);
Until ||H_i − E|| < σ_H and ||L_i − 1_256|| < σ_L, or i > n. (E is the unit matrix, 1_256 is the identity histogram transformation vector, and σ_H and σ_L are convergence thresholds.)
Return T, H
The algorithm is summarized in Algorithm 1. The final estimation of T is

T = ⋯ ∘ T_i ∘ T_{i-1} ∘ ⋯ ∘ T_2 ∘ T_1    (4)
  ≈ T_n ∘ T_{n-1} ∘ ⋯ ∘ T_2 ∘ T_1    (5)

where "∘" denotes function composition. Our experiments in Section IV-B show the convergence of the iteration with SIFT and the performance with respect to the number of iterations.
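As a rough sketch of Algorithm 1, the loop below (OpenCV and NumPy assumed; estimate_homography and histogram_transfer are hypothetical helpers standing in for the H and L estimation described in Section 2.4) iterates the warp-and-relight step n times:

```python
import numpy as np
import cv2

def iterative_match(ref, test, n=2, sigma_h=1e-2):
    """Sketch of the iterative matching loop (Algorithm 1).

    Each pass estimates H_i and L_i between the reference and the current
    simulation of the test image, then warps and re-lights the simulation so
    that it moves closer to the reference in the parameter space.
    """
    H_total = np.eye(3)
    sim = test.copy()                                     # I_0 = I_t
    for _ in range(n):
        H_i, n_inliers = estimate_homography(ref, sim)    # hypothetical helper (SIFT + RANSAC, Sec. 2.4)
        if H_i is None:                                   # initial matching failed -> the method fails too
            break
        sim = cv2.warpPerspective(sim, H_i, (ref.shape[1], ref.shape[0]))
        sim = histogram_transfer(sim, ref)                # hypothetical helper: L_i via Eq. (6)
        H_total = H_i @ H_total                           # compose the pose part of T = T_n o ... o T_1
        if np.linalg.norm(H_i - np.eye(3)) < sigma_h:     # convergence test from Algorithm 1
            break
    return H_total, sim
```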
2.4. Estimate the Parameters H and L
General image-matching methods by local features focus on
the first parameter H since the concerned issue is the space
correspondence between the two images. One of the advantages of the proposed method is that it also estimates the illumination change, which makes matching much better when the illumination has changed.
The purpose of general image-matching methods is to find the transformation matrix between the reference image and the test image. These methods are invariant to rotation, scale, and partially to affine changes. H can easily be estimated by the general methods without other information. First, we extract features from the matching images and obtain feature descriptions (which method is used is not important). Then, we match two features when they are the nearest pair in the feature space. Here, the L2 norm is used to calculate the distance between two features. The RANSAC algorithm is employed to calculate the transformation matrix. The general methods, i.e., HarAff, HesAff, SURF, SIFT, and HLSIFD, can all be used as the feature extraction method. We call the resulting variants I-HarAff, I-HesAff, ISURF, ISIFT, and IHLSIFD ("I" indicates "Iterative"), respectively. Moreover, image matching is usually used in video sequences.
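One plausible realization of this H-estimation step, using OpenCV's SIFT detector, brute-force matching with the ratio test, and RANSAC (the thresholds shown are our assumptions, not values from the paper):

```python
import numpy as np
import cv2

def estimate_homography(ref_gray, test_gray, ratio=0.8, ransac_thresh=5.0):
    """Estimate the homography H mapping the test image onto the reference.

    Returns (H, number of RANSAC inliers), or (None, 0) if too few matches are found.
    """
    sift = cv2.SIFT_create()
    kp_r, des_r = sift.detectAndCompute(ref_gray, None)
    kp_t, des_t = sift.detectAndCompute(test_gray, None)
    if des_r is None or des_t is None:
        return None, 0

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des_t, des_r, k=2):      # two nearest neighbours per test feature
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])                          # ratio test
    if len(good) < 4:                                     # a homography needs at least 4 point pairs
        return None, 0

    src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_r[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    return H, int(mask.sum()) if mask is not None else 0
```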
Fig. 2: Illustration of the histogram transformation. (a) The original image. (b) Darker image. (c) Transformed image from (b) according to the histogram of (a). (d) Brighter image. (e) Transformed image from (d) according to the histogram of (a). (f)–(j) The corresponding histograms of (a)–(e).
We assume that the difference between two consecutive frames is not large and that the object or the camera moves smoothly. Thus, the i-th frame's transformation T_i can be approximated by the previous results. Different detectors and descriptors have been developed to extract illumination-invariant local features. The gradient direction histogram is normalized to form the descriptors. There is usually a tradeoff between distinctiveness and invariance. If we do not normalize the descriptors, they will be sensitive to illumination changes but more distinctive. Computing detectors and descriptors also costs much time. Conversely, the detector will be more efficient if we do not require it to be invariant to illumination change. We want to keep the method illumination invariant and the descriptors distinctive. Thus, it is necessary to estimate the illumination change between the two images.
Estimating the illumination is a challenging issue since the objects in the images are often accompanied by cluttered background or noise. Benefiting from the estimation of the transformation matrix, we can warp the test image to another pose in which the object pose looks similar to that in the reference image. Accordingly, an approximate object segmentation can be obtained on the simulated image. To eliminate occlusion, we only use the matched regions, i.e., the regions within the scale of the matched interesting points. First, we calculate the illumination histogram of the two images in the matched region. Second, we fix one image and calculate the histogram translation function L from the other image to the fixed one. Suppose the histogram of the fixed image is h_1 and the histogram of the other image is h_2. We calculate the cumulative functions of h_1 and h_2, denoted F_1 and F_2. Finally, the translation function is

L = F_2^{-1} ∘ F_1.    (6)

Since the cumulative function of a gray histogram is always monotonically increasing, the inverse function F_2^{-1} always exists.
We transform the histogram of the test image according to the
histogram of the reference image to normalize the illumination
between the pair, as shown in Fig. 2, and the whole procedure
is illustrated in Fig. 3. To sum up, we estimate the transformation matrix H between the matching pair using a feature detector, estimate the illumination relationship, and change one of the images according to the color histogram of the other, mapping the pose and illumination of the object in one image to the other.
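A minimal sketch of this histogram-transfer step for 8-bit grayscale images follows (a standard histogram-specification routine; the masked-region handling of Fig. 3 is omitted here, and the exact assignment of F_1/F_2 follows the usual convention rather than the paper's notation):

```python
import numpy as np

def histogram_transfer(src, ref):
    """Map the gray-level histogram of `src` onto that of `ref`.

    Builds the cumulative histograms of both images and composes one with the
    (monotone) inverse of the other, in the spirit of Eq. (6), realized as a
    256-entry lookup table applied to every pixel of `src`.
    """
    h_src, _ = np.histogram(src.ravel(), bins=256, range=(0, 256))
    h_ref, _ = np.histogram(ref.ravel(), bins=256, range=(0, 256))
    F_src = np.cumsum(h_src) / float(src.size)   # cumulative distribution of the source
    F_ref = np.cumsum(h_ref) / float(ref.size)   # cumulative distribution of the reference

    # For each source level v, pick the reference level u with F_ref(u) >= F_src(v);
    # monotonicity of the cumulative histogram guarantees this inverse lookup exists.
    lut = np.searchsorted(F_ref, F_src).clip(0, 255).astype(np.uint8)
    return lut[src]
```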
2.5. Relationship between the Iterative Algorithm
and ASIFT
The proposed iterative method is similar to ASIFT. In ASIFT, the features are not invariant to affine change, but they cover the whole affine space, as shown in the middle block of Fig. 4. Every simulation of the reference image is one pose of the image in the affine space. Therefore, parts of the simulations of the reference image and the test image should, theoretically, have similar poses in the affine space. The simulations of the reference image and the test image are constructed independently; no mutual information is used in the simulations.
Table 1: Comparison of ASIFT and our method

                              ASIFT          Our method
Simulation of ref. image      Yes            No
Simulation of test image      Yes            Yes
Number of simulations         Many           Few
Number of features            10^4 ~ 10^5    10^3
Pose simulation               Yes            Yes
Illumination simulation       No             Yes
NCM                           High           High
RS                            Very low       High
Affine invariance             Full           Partial
Computational cost            High           Low
Real-time                     No             Yes
By simulating at a high density in the affine space, many supposed image poses are constructed, and then they are matched in a general way. The number of matches increases with the number of simulations. ASIFT indeed increases the invariability of the image-matching method. However, it does not care what the transformation matrix between the reference and test images is; it simply tries many possible transformations and combines the matches. Thus, ASIFT can be regarded as a sampling method around the original points in parameter space, whose properties are shown in the left column of Table 1. Essentially, our method also constructs "simulations". We
simulate the image not only in pose but also in illumination, as shown in the right part of Fig. 4. In addition, we transform one simulation per iteration, and in most tasks, two iterations are enough; we give an experiment to illustrate this in Section IV-B. Benefiting from the few simulations, the computational cost of our method is very low compared with ASIFT, which simulates many more images than our method.
Fig. 3: Procedure of illumination estimation. (a) The test image. Warp (a) by the estimated transformation matrix to generate (b). (c) Mask with the matched regions labeled as 1 and the unmatched regions labeled as 0. (d) The inner product of (b) and (c). (e) The reference image. (f) The inner product of (c) and (e). (g) Illumination-simulated image from (d) according to the histogram of (f).
Fig. 4: Relationship among the general framework, ASIFT, and the proposed method. (Left block) The general framework, (middle block) ASIFT, and (right block) ours. The general DDM framework directly estimates the transformation between two images. It is simple but coarse. ASIFT simulates many poses of the two images to cover the affine space, whereas our method estimates the transformed pose first and then accurately matches in the projective space.
A coarse-to-fine scheme can reduce the computational time of ASIFT to three times that of SIFT, whereas our method costs only about two times. One drawback of the proposed method is that it does not increase the invariability of the original method: when the initial method fails in matching the images, the proposed method also fails. One promising way to overcome this shortcoming is to combine the proposed method with ASIFT, which would improve both the invariability and the accuracy. Furthermore, the histogram matching may amplify noise, which seems to affect the performance. A few more key points would be extracted after the histogram matching, but they do not affect the performance too much. We will show this in Section IV-C. Experimental results show that the performance of the proposed framework reaches a level comparable to ASIFT with far fewer features detected in total, as shown in Table 3. Therefore, the RS of our method is much higher than that of ASIFT. The computational cost of our method is much lower than that of ASIFT because far fewer features are required.
Above all, there are some common properties between iterative SIFT (ISIFT) and ASIFT. Instead of directly matching the original images, both methods find good simulations of the original pair. ASIFT samples imaginary images over the whole affine space, whereas our method directly estimates a point in the whole parameter space. We should point out here that these comparisons and the experiments shown in the following section all assume the situation where the original method, i.e., SIFT, still works. When it fails, the proposed method also fails, whereas ASIFT can still obtain a valid result.
3. EXPERIMENTAL RESULTS
3.1. Database
In the first experiment, we want to show the performance of the proposed method. We capture two images with changes in both illumination and view. This experiment is not used for comparison; it only shows the effectiveness of the proposed method. To evaluate the performance of the proposed image-matching framework, we carry out experiments on the database provided by Mikolajczyk. This database contains eight groups of images with challenging transformations; parts of them are shown in Fig. 5. We compare the proposed method with ASIFT and with the usual DDM framework using the state-of-the-art detectors SURF and SIFT. In addition, two evaluations of the detectors through our strategy are proposed. One of them tests the adaptive capacity to view change, and the other tests the capacity to illumination change. To carry out the two evaluations, we build two databases. One of them contains 88 frames with view changes from 0° to 87°. The other one contains 55 frames with light exposure changes from 40 to 14 (in units of 0.1 EV). The two databases contain continuous transformation frames. Thus, we can evaluate the view-invariant ability of the detectors at a 1° interval and the illumination-change-invariant ability at a step of 0.1 EV. Such databases seldom appear in the open literature, and they will be made available on the Internet.
3.2. Convergence
As we mentioned in Section III-B, the number of iterations is an important parameter. A question that should be answered is whether more iterations bring better performance. Experiments show that, under the proposed framework, our method converges very fast. Fig. 6 shows an experiment on matching two images. The reference image is captured from a frontal view, and the test image is captured from a view angle of 60°, as shown in Fig. 6(a) and (b), respectively. Here, SIFT is used as the base detector. The RS and NCM of our method and of the DDM framework with SIFT are drawn for comparison, as shown in Fig. 6(c) and (d).
Table 2: Performance of SIFT, ISIFT with only pose estimation, and ISIFT with both pose and illumination estimation

                  SIFT    ISIFT(H)   ISIFT(H&L)
Total detected    436     388        2021
Total matches     50      64         169
NCM               39      57         153
RS (%)            8.95    14.7       7.57
MP (%)            78.0    89.1       90.5
The results show that more iterations do not necessarily increase the performance significantly, whereas they increase the computation time linearly. When n = 2, the performance increases significantly: the NCM increases by more than 300 matches, from only 12 to 365, and the RS increases from 12.1% to 37.1%. However, further increasing n improves the performance little; the NCM stays around 360, and the RS stays around 37%. Thus, two iterations are enough in general situations, and we use n = 2 in the following experiments. Moreover, all the features in this experiment and the following experiments are described by the SIFT descriptor, except SURF, which is described by the SURF descriptor.
Following the general evaluation, three criteria are often used as feature evaluators.
1) NCM is the number of total correct match pairs.
2) RS is the ratio between the NCM and the minimum total number of features detected from the image pair: RS = NCM/TOTAL.
3) Matching precision (MP) is the ratio between the NCM and the number of matches: MP = NCM/Matches.
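For clarity, these criteria are just ratios over the match statistics; a tiny helper (names are ours) could compute them as follows:

```python
def evaluation_scores(ncm, total_features, total_matches):
    """Return (NCM, RS, MP) given the raw counts of an experiment.

    total_features: min(#features in image 1, #features in image 2), i.e. TOTAL.
    total_matches:  number of putative matches found before verification.
    """
    rs = ncm / float(total_features)   # RS = NCM / TOTAL
    mp = ncm / float(total_matches)    # MP = NCM / Matches
    return ncm, rs, mp
```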
3.3. Performance
In this experiment, a brief overview of the performance of the proposed method is given. We use SIFT as the base detector in
this experiment (ISIFT). Two images with both view and
illumination changes are matched here.
We first match the two images by SIFT, and then, we only
simulate the pose of the left image in our strategy. Finally, we
simulate both pose and illumination. The matching results are
shown in Fig. 7 and Table II. View and illumination changes
both degrade the performance of the general method. SIFT
could achieve 8.95% RS with 39 correct matches. ISIFT, with
the pose estimation only, could achieve 14.7% RS with 57
correct matches. When we estimate the pose and illumination
changes, the number of total detected features rapidly
increases, and the NCM increases to 153. Because histogram matching amplifies noise in the simulation, many fake features are detected, and the RS is reduced to 7.57%. This experiment is only a brief overview of our strategy, and more experiments are presented in the following. We estimate the global illumination change between the matching pair to increase the NCM. The illumination change is usually continuous in the image; thus, revising the illumination of part of the image can also benefit other regions. Our algorithm does not increase the invariance of the original detector, but it increases the accuracy, stability, and reliability of the matching results. When SIFT fails, our method also fails. However, when SIFT works but is not robust, the proposed method plays an important role. More matches cannot increase the invariance, but they can increase the accuracy of alignment when the matching by SIFT is inaccurate.
Fig. 5: Four groups of images used for comparison. Each group contains one or two transformations with six images, and only parts of them are shown here. (a) Boats (scale, rotation). (b) Graf (view). (c) Wall (view). (d) Leuven (illumination).
Fig. 6: Experiments on convergence. (a) The reference image. (b) The test image. (c)–(d) The NCM and RS of ISIFT compared with SIFT.
Fig. 7: Matching results of SIFT and ISIFT. (a) Matching result of SIFT. (b) Matching result of ISIFT with only pose simulation (H). (c) Result of ISIFT with both pose and illumination (H and L) simulation.
In other words, the advantage of the proposed method is that the performance does not degrade as the pose change or transition tilt increases, as long as the change stays within the valid range. Additionally, the local key point locations will be more accurate than those of the originally detected points. To corroborate this point of view, we show an extra experiment in the following. The first row in Fig. 8 shows the matching results of SIFT, and the second row shows the results of ISIFT. Both the matches and the alignment residual error are shown. From this experiment, we can see that our algorithm obtains a smaller error than SIFT and that the NCM strongly affects the accuracy of matching.
3.4. Comparison
We choose the database provided by Mikolajczyk and compare our method with SURF, SIFT, and ASIFT. Four pairs of images with scale, view, and illumination changes are tested, as shown in Fig. 9. The images on top are the reference images, and those at the bottom are the test images. Table 3 compares this experiment in terms of NCM, RS, and MP.
Our method estimates the pose and illumination of the matching pairs and simulates the reference image. Therefore, the simulated image is closer to the original image and contains most of the information of the original image, shortening the distance between the matching pairs in the parameter space. First, the NCM of ISIFT is much higher than that of the traditional methods. It obtains 584 matches, whereas SURF obtains 9 matches and SIFT obtains 46 matches on Graf (the affine change situation; second row in Fig. 9). SURF and SIFT obtain 793 and 2837 features, respectively. Thus, the RS of ISIFT increases to 36.4%, whereas that of SURF and SIFT is only 1.14% and 1.62%. This implies that the efficacy of the iterative image matching (IIM) framework is much better than that of the traditional DDM framework: we increase the RS by about 32 times and 22 times in this view-change experiment. With this significant increase in performance, we can make the matching more stable and reliable. Similarly, more correspondences are found in the other experiments, particularly under affine and illumination change situations. Our method does not significantly increase the NCM under scale change only, compared with SIFT, SURF, and HLSIFD, since they are theoretically scale invariant; the RS and MP, however, still increase significantly.
However, in extreme situations where SIFT fails in the first matching, our algorithm also fails. The proposed method can increase the stability, reliability, and accuracy of the original detector, but it cannot increase its invariance. A solution is to integrate the proposed method into ASIFT as a second layer to refine the original matching results; we show such an experiment in Section IV-E. ASIFT obtains 105, 465, 556, and 157 matches on the Boat, Graf, Wall, and Leuven image pairs (61, 46, 409, and 259 matches are found by SIFT, respectively). However, these matches are calculated from 29 985, 45 151, 64 908, and 22 562 extracted features. Indeed, ASIFT increases the NCM, but it needs to extract many more features from the images, which costs much computation time. More detailed results are summarized in Table 3.
In this paper, we try to link our method with general optimization theory. Essentially, the target of image matching is finding the correspondence: we want to find the transformation function between the matching pair that minimizes the matching error. Thus, we optimize the view difference first and then optimize the illumination. With this two-step optimization, our method can find a more accurate transformation function. Unlike ASIFT, the proposed method does not increase the invariance of the original detector, but it increases the stability and reliability.
Fig. 8: Matching error of SIFT and the proposed method. (a) The matches of SIFT. (b) The residual error of SIFT. (c) The matches of ISIFT. (d) The residual error of ISIFT.
Fig. 9: Matching results of four groups of images. (Test images from top to bottom) Boat, Graf, Wall, and Leuven. The correct matches are drawn as blue or white lines.
Table 3: Comparison of the algorithms on view-change pairs

                          SURF     SIFT     ASIFT(HR)   ISIFT
Boat / Scale
  Total                   722      7986     29985       615
  Matches                 43       125      -           94
  NCM                     8        61       105         79
  RS (%)                  1.25     0.764    0.35        12.8
  MP (%)                  18.6     48.8     -           84
Graf / Affine
  Total                   793      2837     45151       1605
  Matches                 34       210      -           586
  NCM                     9        46       465         584
  RS (%)                  1.14     1.62     1.03        36.4
  MP (%)                  26.5     21.9     -           99.7
Wall / Affine
  Total                   1730     7094     64908       5358
  Matches                 78       452      -           834
  NCM                     40       409      556         833
  RS (%)                  2.31     5.77     0.857       15.5
  MP (%)                  51.3     90.5     -           99.9
Leuven / Illumination
  Total                   647      999      22562       1159
  Matches                 172      289      -           379
  NCM                     161      259      157         344
  RS (%)                  24.9     25.9     6.96        29.7
  MP (%)                  93.6     89.6     -           90.8
3.5. Real-Time Image Matching
An important application of image matching is object detection and pose estimation in video frames. Suppose that the camera moves smoothly and that the reference image can be matched with the first frame; then the estimation of the transformation matrix from the reference image to a certain frame in the video can be initialized from the matching of the previous frame. In addition, we match the first frame with the reference image directly by a local-feature-based image-matching method; we directly use SIFT here. The RS and NCM of our method and of SIFT are shown in Fig. 14, and parts of the matching results of ISIFT and SIFT are shown in Fig. 13. The RS of our method (ISIFT is used here) stays around 30%, and the NCM is always higher than 100 pairs in this experiment.
The RS of SIFT stays around 7%; only a small part of the features are useful for the correspondence calculation. The NCM of SIFT is about 70 matches, which is lower than that of the proposed method. The mean RS and NCM of ISIFT and SIFT are, respectively, 29.6% and 137 versus 5.7% and 66. Our method accurately calculates matches throughout the video frames, even under large view changes such as frames 750 to 900. To sum up, ISIFT is very accurate and stable in real applications.
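A sketch of how this frame-to-frame initialization could look in a video loop (helper names follow the earlier sketches and are our assumptions, not the paper's implementation): the previous frame's estimate pre-warps the new frame so that the residual view change stays small.

```python
import numpy as np
import cv2

def track_video(ref_gray, frames, n=2):
    """Match the reference image against every video frame, warm-starting each
    frame with the transformation estimated for the previous one."""
    H_prev = np.eye(3)
    trajectory = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Pre-warp with the previous estimate: consecutive frames differ little,
        # so the remaining transformation to estimate is close to identity.
        warmed = cv2.warpPerspective(gray, H_prev, (ref_gray.shape[1], ref_gray.shape[0]))
        H_step, _ = iterative_match(ref_gray, warmed, n=n)   # sketch from Section 2.3
        H_prev = H_step @ H_prev                             # compose with the warm start
        trajectory.append(H_prev.copy())
    return trajectory
```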
We develop a real-time image-matching system to show the efficiency. The proposed method can cope with a wide range of view and illumination changes with stable matches, as shown in Fig. 15. We compare the real performance of SURF and SIFT by using them as our basic detectors. ISURF is faster than ISIFT; however, it is not as stable as ISIFT. The system is implemented on a computer with two dual-core 2.8-GHz central processing units, and the processed image size is 640 x 480. The matching can be finished in 80 ms, with parallel coding at the algorithmic level.
CONCLUSIONS
In this paper, we have proposed a novel image-matching algorithm based on an iterative framework, together with two new indicators for local feature detectors, namely, the VA (valid angle) and the VI (valid illumination). The proposed framework iteratively estimates the relative pose and illumination relationship between the matching pair and simulates one image toward the other, reducing the difficulty of matching images within the valid region (VA and VI). Our algorithm can significantly increase the number of matching pairs, the RS, and the matching accuracy when the transformation does not go beyond the valid region. The proposed method fails when the initial estimation fails, which is related to the ability of the detector. We have proposed the two indicators, i.e., the VA and the VI, according to this phenomenon to evaluate detectors; they reflect the maximum allowable change in view and illumination, respectively. Extensive experimental results show that our method improves the traditional detectors, even under large variations, and that the new indicators are distinctive.
REFERENCES
[1]. C. Harris and M. Stephens, "A combined corner and edge detector," in Proc. 4th Alvey Vis. Conf., 1988, pp. 147–151.
[2]. S. M. Smith and J. M. Brady, "SUSAN—A new approach to low level image processing," Int. J. Comput. Vis., vol. 23, no. 1, pp. 45–78, May 1997.
[3]. F. Mokhtarian and R. Suomela, "Robust image corner detection through curvature scale space," IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 12, pp. 1376–1381, Dec. 1998.
[4]. D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vis., vol. 60, no. 2, pp. 91–110, Nov. 2004.
[5]. H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, "Speeded-up robust features (SURF)," Comput. Vis. Image Understand., vol. 110, no. 3, pp. 346–359, Jun. 2008.
[6]. T. Lindeberg, Scale-Space Theory in Computer Vision. Norwell, MA: Kluwer, 1994.
[7]. Y. Yu, K. Huang, and T. Tan, "A Harris-like scale invariant feature detector," in Proc. Asian Conf. Comput. Vis., 2009, pp. 586–595.
[8]. A. Barla, F. Odone, and A. Verri, "Histogram intersection kernel for image classification," in Proc. Int. Conf. Image Process., 2003, vol. 3, pp. III-513–III-516. [Online]. Available: http://dx.doi.org/10.1109/ICIP.2003.1247294
[9]. Y. Rubner, C. Tomasi, and L. J. Guibas, A Metric for Distributions With Applications to Image Databases. Washington, DC: IEEE Comput. Soc., 1998, p. 59.
BIOGRAPHIES
Y. Ramesh received the B.Tech degree from GATES Engineering College in the year 2010 and is currently pursuing the M.Tech in Digital Systems and Computer Electronics at RGM College of Engineering and Technology, Nandyal, Kurnool (dist), Andhra Pradesh. His areas of interest are Digital Image Processing, Data Communication, and the application of Digital Systems.
J. Sofia Priya Dharshini received the B.Tech degree (Electronics & Communication Engineering) in the year 2005 and the M.Tech (Digital Systems and Computer Electronics) in 2009, and is also pursuing the Ph.D. (Wireless Communications and Networking) from JNTU Anantapur, A.P., India. Presently she is working as Associate Professor in the Dept. of ECE, RGMCET, Nandyal. She has published two papers in International Journals and nine papers in National and International Conferences. Her areas of interest include Wireless Communications and Networks, Mobile Computing, and Video and Image Processing.
IJRET: International Journal of Research in Engineering and Technology eISSN: 2319
__________________________________________________________________________________________
2013, Available @ http://www.ijret.org
C. Harris and M. Stephens, “A combined corner and edge
in Proc. 4th Alvey Vis. Conf., 1988, pp. 147–151.
A new approach to
image processing,” Int. J. Comput. Vis., vol. 23, no.
F. Mokhtarian and R. Suomela, “Robust image corner
through curvature scale space,” IEEE Trans. Pattern
Intell., vol. 20, no. 12, pp. 1376–1381, Dec.
D. G. Lowe, “Distinctive image features from scale-
Int. J. Comput. Vis., vol. 60, no. 2, pp.
H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, “Speeded-
features (SURF),” Comput. Vis. Image Understand.,
Space Theory in Computer Vision.
Y. Yu, K. Huang, and T. Tan, “A Harris-like scale
detector,” in Proc. Asian Conf. Comput. Vis.,
A. Barla, F. Odone, and A. Verri, “Histogram intersection
image classification,” in Proc. Int. Conf. Image
16 [Online]. Available:
2003.1247294, vol.2
Y. Rubner, C. Tomasi, and L. J. Guibas, A Metric for
Applications to Image Databases.
Soc., 1998, p. 59.
received B.Tech degree from
GATES Engineering College in the year
2010 and currently pursing M.Tech in
Digital Systems and Computer Electronics
at RGM College of Engineering and
Technology, Nandyal, Kurnool(dist),
. His areas of interest are
Digital Image Processing, Data
Communication and the application of Digital systems.
J. Sofia priya dharshini has received
B.Tech degree (Electronics &
Communication Engineering) in the year
2005, M.Tech (Digital Systems and
Computer Electronics) in 2009 and also
pursuing Ph.D. (Wireless Communications
and Networking) from JNTU Anantapur,
A.P, India. Presently she is working as
Associate Professor in the dept. of ECE, RGMCET, Nandyal.
She has published two papers in International Journals and
in National and International Conferences. Her
area of interest includes Wireless communications and
Networks, Mobile Computing and Video and Image
eISSN: 2319-1163 | pISSN: 2321-7308
__________________________________________________________________________________________
194

More Related Content

PDF
View and illumination invariant iterative based image matching
PDF
A novel tool for stereo matching of images
PDF
A novel tool for stereo matching of images
PDF
Tracking Faces using Active Appearance Models
PPT
Gil Shapira's Active Appearance Model slides
PDF
Object Recognition Using Shape Context with Canberra Distance
PPT
IEEE ICAPR 2009
PDF
Face skin color based recognition using local spectral and gray scale features
View and illumination invariant iterative based image matching
A novel tool for stereo matching of images
A novel tool for stereo matching of images
Tracking Faces using Active Appearance Models
Gil Shapira's Active Appearance Model slides
Object Recognition Using Shape Context with Canberra Distance
IEEE ICAPR 2009
Face skin color based recognition using local spectral and gray scale features

What's hot (17)

PDF
Www.cs.berkeley.edu kunal
PDF
Fourier mellin transform based face recognition
DOCX
Template Matching - Pattern Recognition
PDF
Implementation of High Dimension Colour Transform in Domain of Image Processing
PDF
Contrast enhancement using various statistical operations and neighborhood pr...
PPTX
Features image processing and Extaction
PDF
Human’s facial parts extraction to recognize facial expression
PDF
Based on correlation coefficient in image matching
PPT
Multi-Image Matching
PPT
Template matching03
PDF
Active shape appearance model-presentation 1st
PDF
Automatic rectification of perspective distortion from a single image using p...
PDF
IRJET- Multi Image Morphing: A Review
PDF
Region filling and object removal by exemplar based image inpainting
PPT
Michal Erel's SIFT presentation
PDF
Predicting growth of urban agglomerations through fractal analysis of geo spa...
PPTX
Comparison of image segmentation
Www.cs.berkeley.edu kunal
Fourier mellin transform based face recognition
Template Matching - Pattern Recognition
Implementation of High Dimension Colour Transform in Domain of Image Processing
Contrast enhancement using various statistical operations and neighborhood pr...
Features image processing and Extaction
Human’s facial parts extraction to recognize facial expression
Based on correlation coefficient in image matching
Multi-Image Matching
Template matching03
Active shape appearance model-presentation 1st
Automatic rectification of perspective distortion from a single image using p...
IRJET- Multi Image Morphing: A Review
Region filling and object removal by exemplar based image inpainting
Michal Erel's SIFT presentation
Predicting growth of urban agglomerations through fractal analysis of geo spa...
Comparison of image segmentation
Ad

Viewers also liked (20)

PDF
Performance evaluation of proactive, reactive and
PDF
Distance protection of hvdc transmission line with novel fault location techn...
PDF
Computer aided diagnosis for liver cancer using
PDF
Operating and emission characterstics of a novel
PDF
Effect of modulus of masonry on initial lateral stiffness of infilled frames ...
PDF
Non invasive modalities of neurocognitive science
PDF
A detection technique of signal in mimo system
PDF
A comprehensive review on performance of aodv and dsdv protocol using manhatt...
PDF
Development of mobile surface water filtration system through simulation usin...
PDF
Extreme software estimation (xsoft estimation)
PDF
Dynamic thresholding on speech segmentation
PDF
Agile software development and challenges
PDF
The efficiency of the inference system knowledge
PDF
Mhd effects on non newtonian micro polar fluid with
PDF
Design and development of fall detector using fall
PDF
Enhanced equally distributed load balancing algorithm for cloud computing
PDF
Application of ibearugbulem’s model for optimizing granite concrete mix
PDF
Co axial fed microstrip rectangular patch antenna
PDF
Accelerted testing of deteriorated concrete structures due to carbonation
PDF
Performance of high power light emitting diode for
Performance evaluation of proactive, reactive and
Distance protection of hvdc transmission line with novel fault location techn...
Computer aided diagnosis for liver cancer using
Operating and emission characterstics of a novel
Effect of modulus of masonry on initial lateral stiffness of infilled frames ...
Non invasive modalities of neurocognitive science
A detection technique of signal in mimo system
A comprehensive review on performance of aodv and dsdv protocol using manhatt...
Development of mobile surface water filtration system through simulation usin...
Extreme software estimation (xsoft estimation)
Dynamic thresholding on speech segmentation
Agile software development and challenges
The efficiency of the inference system knowledge
Mhd effects on non newtonian micro polar fluid with
Design and development of fall detector using fall
Enhanced equally distributed load balancing algorithm for cloud computing
Application of ibearugbulem’s model for optimizing granite concrete mix
Co axial fed microstrip rectangular patch antenna
Accelerted testing of deteriorated concrete structures due to carbonation
Performance of high power light emitting diode for
Ad

Similar to View and illumination invariant iterative based image (20)

PDF
Feature detection and matching
PDF
PPT s11-machine vision-s2
PDF
Image Features Matching and Classification Using Machine Learning
PDF
Lec10 alignment
PPTX
Real Time Stitching Of IR Images using ml.pptx
PDF
symfeat_cvpr2012.pdf
PDF
An Assessment of Image Matching Algorithms in Depth Estimation
PDF
Jw2517421747
PDF
Jw2517421747
PDF
Introduction to Computer Vision (uapycon 2017)
PDF
Final Paper
PDF
A comparison of SIFT, PCA-SIFT and SURF
PDF
LLNL_poster_15.compressed
PDF
IRJET - A Survey Paper on Efficient Object Detection and Matching using F...
PDF
Authenticate Aadhar Card Picture with Current Image using Content Based Image...
PDF
Lecture 6-computer vision features descriptors matching
PDF
Gavrila_ICCV99.pdf
PPTX
feature matching and model fitting .pptx
PDF
2 imagedescription.ppt
PDF
Method of optimization of the fundamental matrix by technique speeded up rob...
Feature detection and matching
PPT s11-machine vision-s2
Image Features Matching and Classification Using Machine Learning
Lec10 alignment
Real Time Stitching Of IR Images using ml.pptx
symfeat_cvpr2012.pdf
An Assessment of Image Matching Algorithms in Depth Estimation
Jw2517421747
Jw2517421747
Introduction to Computer Vision (uapycon 2017)
Final Paper
A comparison of SIFT, PCA-SIFT and SURF
LLNL_poster_15.compressed
IRJET - A Survey Paper on Efficient Object Detection and Matching using F...
Authenticate Aadhar Card Picture with Current Image using Content Based Image...
Lecture 6-computer vision features descriptors matching
Gavrila_ICCV99.pdf
feature matching and model fitting .pptx
2 imagedescription.ppt
Method of optimization of the fundamental matrix by technique speeded up rob...

More from eSAT Publishing House (20)

PDF
Likely impacts of hudhud on the environment of visakhapatnam
PDF
Impact of flood disaster in a drought prone area – case study of alampur vill...
PDF
Hudhud cyclone – a severe disaster in visakhapatnam
PDF
Groundwater investigation using geophysical methods a case study of pydibhim...
PDF
Flood related disasters concerned to urban flooding in bangalore, india
PDF
Enhancing post disaster recovery by optimal infrastructure capacity building
PDF
Effect of lintel and lintel band on the global performance of reinforced conc...
PDF
Wind damage to trees in the gitam university campus at visakhapatnam by cyclo...
PDF
Wind damage to buildings, infrastrucuture and landscape elements along the be...
PDF
Shear strength of rc deep beam panels – a review
PDF
Role of voluntary teams of professional engineers in dissater management – ex...
PDF
Risk analysis and environmental hazard management
PDF
Review study on performance of seismically tested repaired shear walls
PDF
Monitoring and assessment of air quality with reference to dust particles (pm...
PDF
Low cost wireless sensor networks and smartphone applications for disaster ma...
PDF
Coastal zones – seismic vulnerability an analysis from east coast of india
PDF
Can fracture mechanics predict damage due disaster of structures
PDF
Assessment of seismic susceptibility of rc buildings
PDF
A geophysical insight of earthquake occurred on 21 st may 2014 off paradip, b...
PDF
Effect of hudhud cyclone on the development of visakhapatnam as smart and gre...
Likely impacts of hudhud on the environment of visakhapatnam
Impact of flood disaster in a drought prone area – case study of alampur vill...
Hudhud cyclone – a severe disaster in visakhapatnam
Groundwater investigation using geophysical methods a case study of pydibhim...
Flood related disasters concerned to urban flooding in bangalore, india
Enhancing post disaster recovery by optimal infrastructure capacity building
Effect of lintel and lintel band on the global performance of reinforced conc...
Wind damage to trees in the gitam university campus at visakhapatnam by cyclo...
Wind damage to buildings, infrastrucuture and landscape elements along the be...
Shear strength of rc deep beam panels – a review
Role of voluntary teams of professional engineers in dissater management – ex...
Risk analysis and environmental hazard management
Review study on performance of seismically tested repaired shear walls
Monitoring and assessment of air quality with reference to dust particles (pm...
Low cost wireless sensor networks and smartphone applications for disaster ma...
Coastal zones – seismic vulnerability an analysis from east coast of india
Can fracture mechanics predict damage due disaster of structures
Assessment of seismic susceptibility of rc buildings
A geophysical insight of earthquake occurred on 21 st may 2014 off paradip, b...
Effect of hudhud cyclone on the development of visakhapatnam as smart and gre...

Recently uploaded (20)

PDF
composite construction of structures.pdf
PDF
R24 SURVEYING LAB MANUAL for civil enggi
PPTX
Infosys Presentation by1.Riyan Bagwan 2.Samadhan Naiknavare 3.Gaurav Shinde 4...
PDF
TFEC-4-2020-Design-Guide-for-Timber-Roof-Trusses.pdf
PDF
Model Code of Practice - Construction Work - 21102022 .pdf
PDF
SM_6th-Sem__Cse_Internet-of-Things.pdf IOT
PPTX
FINAL REVIEW FOR COPD DIANOSIS FOR PULMONARY DISEASE.pptx
PPTX
Foundation to blockchain - A guide to Blockchain Tech
PDF
Automation-in-Manufacturing-Chapter-Introduction.pdf
PPTX
CYBER-CRIMES AND SECURITY A guide to understanding
PPTX
Sustainable Sites - Green Building Construction
PPTX
Recipes for Real Time Voice AI WebRTC, SLMs and Open Source Software.pptx
PDF
Digital Logic Computer Design lecture notes
PDF
Evaluating the Democratization of the Turkish Armed Forces from a Normative P...
PDF
PPT on Performance Review to get promotions
PPTX
KTU 2019 -S7-MCN 401 MODULE 2-VINAY.pptx
PDF
PRIZ Academy - 9 Windows Thinking Where to Invest Today to Win Tomorrow.pdf
PPTX
Construction Project Organization Group 2.pptx
PDF
Mohammad Mahdi Farshadian CV - Prospective PhD Student 2026
PPT
CRASH COURSE IN ALTERNATIVE PLUMBING CLASS
composite construction of structures.pdf
R24 SURVEYING LAB MANUAL for civil enggi
Infosys Presentation by1.Riyan Bagwan 2.Samadhan Naiknavare 3.Gaurav Shinde 4...
TFEC-4-2020-Design-Guide-for-Timber-Roof-Trusses.pdf
Model Code of Practice - Construction Work - 21102022 .pdf
SM_6th-Sem__Cse_Internet-of-Things.pdf IOT
FINAL REVIEW FOR COPD DIANOSIS FOR PULMONARY DISEASE.pptx
Foundation to blockchain - A guide to Blockchain Tech
Automation-in-Manufacturing-Chapter-Introduction.pdf
CYBER-CRIMES AND SECURITY A guide to understanding
Sustainable Sites - Green Building Construction
Recipes for Real Time Voice AI WebRTC, SLMs and Open Source Software.pptx
Digital Logic Computer Design lecture notes
Evaluating the Democratization of the Turkish Armed Forces from a Normative P...
PPT on Performance Review to get promotions
KTU 2019 -S7-MCN 401 MODULE 2-VINAY.pptx
PRIZ Academy - 9 Windows Thinking Where to Invest Today to Win Tomorrow.pdf
Construction Project Organization Group 2.pptx
Mohammad Mahdi Farshadian CV - Prospective PhD Student 2026
CRASH COURSE IN ALTERNATIVE PLUMBING CLASS

View and illumination invariant iterative based image

  • 1. IJRET: International Journal of Research in Engineering and Technology eISSN: 2319-1163 | pISSN: 2321-7308 __________________________________________________________________________________________ Volume: 02 Issue: 11 | Nov-2013, Available @ http://www.ijret.org 185 VIEW AND ILLUMINATION INVARIANT ITERATIVE BASED IMAGE MATCHING Y. Ramesh1 , J. Sofia Priya Dharshini2 1 Student, 2 Associate Prof., Department of ECE, RGMCET, Nandyal, Kurnool Dist., Andhra Pradesh, INDIA ramesh7792@gmail.com, sofi_prida@yahoo.co.in Abstract In this paper, the challenges in local-feature-based image matching are variations of view and illumination. Different methods have been recently proposed to address these problems by using invariant feature detectors and distinctive descriptors. However, the matching performance is still unstable and inaccurate, particularly when large variation in view or illumination occurs. In this paper, we propose a view and illumination invariant image matching method. We iteratively estimate the relationship of the relative view and illumination of the images, transform the view of one image to the other, and normalize their illumination for accurate matching. The performance of matching is significantly improved and is not affected by the changes of view and illumination in a valid range. The proposed method would fail when the initial view and illumination method fails, which gives us a new sight to evaluate the traditional detectors. We propose two novel indicators for detector evaluation, namely, valid angle and valid illumination, which reflect the maximum allowable change in view and illumination, respectively. Keywords-Feature detector evaluation, image matching, Iterative algorithm. -----------------------------------------------------------------------***----------------------------------------------------------------------- 1. INTRODUCTION Image matching is a fundamental issue in computer vision. It has been widely used in tracking, image stitching, 3-D reconstruction, simultaneous localization and mapping (SLAM) systems, camera calibration, object classification, recognition, and so on. Image matching aim to find the correspondence between two images of the same scene or objects in different pose, illumination and environment In this paper, we focus on local feature-based image matching. The challenges of this work reside in stable and invariant feature extraction from varying situations and robust matching. Generally speaking, the framework of a region feature based image matching consists of three steps. Detecting stable regions. Interesting points are extracted from images, and the region of interest is the associated circular (or elliptical) region around the interesting point. Generally, researchers use corner (Harris [1], SUSAN [2], CSS [3], etc.) or center of silent region (SIFT [4], SURF [5], DoH [6], HLSIFD [7], etc.) as the interesting point since they are stable and easy to locate and describe. The radius of the region is determined by a priori setting (Harris corner) or the region scale (scale invariant features). The total number of features detected is the minimum number of the features extracted from the matched images. Describing regions. Color, structure, and texture are widely used to describe images in the recent literature Descriptors with edge orientation information (SIFT and HOG) are also very popular since they are more robust to scale, blur, and rotation. Matching features. 
Local features from two images are first matched when they are the nearest pair. A handful of distances can be used in practice, such as L1distance, L2 distance, histogram intersection distance [8], and earth mover’s distance [9]. If the nearest distance is higher than k times (k ϵ (0, 1) empirically) of the second nearest distance, the nearest matching pair will be removed. These are the very initial matching results. Fig1 Illustration of the proposed matching algorithm Ir and It are the images to be matched. Ie is simulated from It by transformation T. Ir is difficult to match with It for the difference of view point and illumination, whereas Ie is easier to match with It since they are closer in the parameter space. The three parts of the detect–describe–match (DDM) framework determine the performance of image matching. The first step is the basis of this framework. Unstable and variant features increase the difficulties of the next two steps. Researchers mostly focus on the first step for invariant feature extraction and have proposed many excellent detectors [1],
However, an important lesson from previous work is that none of the aforementioned feature detectors is strictly invariant to changes of view and illumination. For larger changes, there would be few invariant features that can be extracted from both images to be matched. This motivates us to consider the essential difference between images taken under different views and illumination. A question needs to be answered: does an object in two images with different views and illumination look like the same object? Suppose there are two images with a large view change, as shown in Fig. 1. The two top images show the same object in different views. They are so different in appearance that they could be considered two different objects. We do not attempt to find invariant local feature detectors as in previous work but instead focus on a better framework for image matching.

2. VIEW AND ILLUMINATION INVARIANT IMAGE MATCHING

2.1. General Definition of Image Matching

Two images of the same object or scene can be represented as two points in the parameter space P of that object (scene). Let I be the original appearance of an object and L(H(I)) be the real appearance of the object shown in an image, where L indicates the illumination and H is the object transformation from a normal pose. Here, we define the parameter space of a given image I as P_I = {L, H} (simply written as {L, H} in the following). The pair of L and H is a point in the parameter space; thus, the observed image is represented as a point in the parameter space spanned by object I. Therefore, the purpose of image matching is to find the transformation T between the two points in the parameter space, {L_1, H_1} → {L_2, H_2}; in other words, the purpose is to find the coordinate difference between the two points. The norm of this space is difficult to define since the illumination factor L and the transformation H are totally independent and cannot be combined. In this paper, we simply use images of planar objects; therefore, H is the homography transformation matrix, and L is the histogram matching function that transforms the histogram of one image to a specified one.

2.2. Proposed Method

Denote the reference image and the test image to be matched as I_r and I_t, respectively. Suppose that the true pose transformation matrix from I_r to I_t is H_t and the illumination change function is L_t. The relationship between I_r and I_t is

    I_t(x̃) = T_t(I_r) = L_t(I_r(H_t x̃))    (1)

where T_t is the true transformation between I_r and I_t, and x̃ = (x, y, 1)^T denotes homogeneous coordinates. If there exist approximate estimations of the illumination and the transformation, I_t can be transformed to an estimated image I_e, i.e.,

    I_e(x̃) = T_e(I_t) = L_e(I_t(H_e x̃))    (2)

where H_e denotes the viewpoint transformation and L_e denotes the illumination transformation. If T_e is not a very rough estimation of the relationship between I_r and I_t, the estimated image I_e will be more similar to I_r than I_t itself is; in other words, I_e is closer to I_r than to I_t in the parameter space. Thus, the matching between I_r and I_t becomes easier, as shown in Fig. 1. In this way, we propose the following iterative image matching process:

    I_e^(1) = T_1(I_t) = L_1(I_t(H_1 x̃))                  (i = 1)
    I_e^(i) = T_i(I_e^(i-1)) = L_i(I_e^(i-1)(H_i x̃))      (i > 1)    (3)

2.3. Algorithm 1: The Proposed Method

Initial: T_0 = {L_0, H_0} = {1_hist, E}, I_e^(0) = I_t, ε_H, ε_L;
Iterate:
    i = i + 1;
    Estimate T_i = {L_i, H_i};
    T = T_i ∘ T;
    Transform I_e^(i-1) to I_e^(i) by Eq. (3);
Until ||H_i − E|| < ε_H, ||L_i − 1_hist|| < ε_L, or i > n.
(E is the unit matrix, 1_hist is the identity histogram transformation vector, and ε_H and ε_L are convergence thresholds.)
Return T, H

The algorithm is summarized in Algorithm 1. The final estimation of the transformation T_t is

    T_t = ⋯ ∘ T_i ∘ T_(i-1) ∘ ⋯ ∘ T_2 ∘ T_1    (4)
        ≈ T_n ∘ T_(n-1) ∘ ⋯ ∘ T_2 ∘ T_1        (5)

where "∘" denotes function composition. Our experiments in Section 3.2 show the convergence of the iteration with SIFT and the performance with respect to the number of iterations.
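To make the iteration concrete, the following Python sketch (OpenCV + NumPy) runs the pose-and-illumination simulation loop for a fixed number of iterations. It is only an illustrative reading of Algorithm 1, not the authors' code: the function names, the RANSAC reprojection threshold of 5 pixels, the ratio-test threshold, and the fixed iteration count n = 2 are our assumptions, and match_histograms refers to the cumulative-histogram normalization sketched in Section 2.4 below.

    import cv2
    import numpy as np

    def estimate_homography(img_a, img_b, k=0.8):
        # One DDM pass: detect and describe with SIFT, ratio-test match,
        # then fit the homography from img_a to img_b with RANSAC.
        sift = cv2.SIFT_create()
        kp_a, des_a = sift.detectAndCompute(img_a, None)
        kp_b, des_b = sift.detectAndCompute(img_b, None)
        knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
        good = [p[0] for p in knn if len(p) == 2 and p[0].distance < k * p[1].distance]
        if len(good) < 4:
            return None                      # too few matches: the initial estimation fails
        src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H

    def iterative_match(img_ref, img_test, n=2):
        # Iteratively simulate the test image toward the reference image in pose
        # and illumination (Eqs. (3)-(5)); H_total maps test coordinates to reference coordinates.
        h, w = img_ref.shape[:2]
        I_e = img_test.copy()
        H_total = np.eye(3)
        for i in range(n):
            H_i = estimate_homography(I_e, img_ref)
            if H_i is None:
                break                                            # the method fails when matching fails
            H_total = H_i @ H_total                              # compose the per-iteration estimates
            I_e = cv2.warpPerspective(img_test, H_total, (w, h))     # pose simulation
            I_e = match_histograms(I_e, img_ref)                     # illumination simulation (Sec. 2.4)
        return H_total, I_e

In a full implementation the loop would also test the convergence conditions of Algorithm 1 (||H_i − E|| < ε_H and ||L_i − 1_hist|| < ε_L) instead of always running n iterations.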
2.4. Estimate the Parameters H and L

General local-feature-based image-matching methods focus on the first parameter, H, since the concern is the spatial correspondence between the two images. One of the advantages of the proposed method is that it also estimates the illumination change, which makes matching much better when the illumination has changed.

The purpose of general image-matching methods is to find the transformation matrix between the reference image and the test image. These methods are invariant to rotation and scale and partially invariant to affine changes. Thus, H can easily be estimated by the general methods without other information. First, we extract features from the matching images and obtain feature descriptions (which method is used is not important). Then, we match two features when they are the nearest pair in the feature space. Here, the L2 norm is used to calculate the distance between two features. The RANSAC algorithm is then employed to calculate the transformation matrix. The general methods, i.e., HarAff, HesAff, SURF, SIFT, and HLSIFD, can all be used as the feature extraction method; we call the resulting methods I-HarAff, I-HesAff, ISURF, ISIFT, and IHLSIFD ("I" indicates "Iterative"), respectively. Moreover, image matching is often applied to video sequences. We assume that the difference between two consecutive frames is not large and that the object or the camera moves smoothly. Thus, the current frame's transformation can be approximated by the previous frame's result.

Fig. 2: Illustration of the histogram transformation. (a) The original image. (b) Darker image. (c) Transformed image from (b) according to the histogram of (a). (d) Brighter image. (e) Transformed image from (d) according to the histogram of (a). (f)–(j) The corresponding histograms of (a)–(e).

Different detectors and descriptors have been developed to extract illumination invariant local features. The gradient direction histogram is normalized to form the descriptors. There is usually a tradeoff between distinctiveness and invariance: if we do not normalize the descriptors, they will be sensitive to illumination changes but more distinctive. Computing detectors and descriptors also costs much time. Conversely, the detector will be more efficient if we do not require it to be invariant to illumination change. We want to keep the method illumination invariant and the descriptors distinctive; thus, it is necessary to estimate the illumination change between the two images.

Estimating the illumination is a challenging issue since the objects in the images are often accompanied by a cluttered background or noise. Benefiting from the estimation of the transformation matrix, we can warp the test image to another pose in which the object looks similar to that in the reference image. Accordingly, an approximate object segmentation can be obtained on the simulated image. To eliminate occlusion, we only use the matched regions, i.e., the regions at the scale of the matched interest points. First, we calculate the illumination histograms of the two images in the matched regions. Second, we fix one image and calculate the histogram translation function L from the other image to the fixed one. Suppose the histogram of the fixed image is h_1 and the histogram of the other image is h_2. We calculate the cumulative functions of h_1 and h_2, namely C_1 and C_2. Finally, the translation function is L = C_1^(-1) ∘ C_2. Since the cumulative function of a gray histogram is always monotonically increasing, the inverse function C_1^(-1) always exists.
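A minimal NumPy sketch of this cumulative-histogram mapping is given below for 8-bit grayscale images. The function and argument names are ours; the optional mask argument restricts the histograms to the matched regions as described above (assuming the warped test image, the reference image, and the mask share the same frame), and the inverse C_1^(-1) is realized by linear interpolation.

    import numpy as np

    def match_histograms(img_src, img_fixed, mask=None, bins=256):
        # Gray-level histograms h2 (image to be transformed) and h1 (fixed image),
        # optionally restricted to the matched regions.
        src_vals = img_src[mask > 0] if mask is not None else img_src.ravel()
        fix_vals = img_fixed[mask > 0] if mask is not None else img_fixed.ravel()
        h2, _ = np.histogram(src_vals, bins=bins, range=(0, bins))
        h1, _ = np.histogram(fix_vals, bins=bins, range=(0, bins))

        # Cumulative distribution functions C2 (source) and C1 (fixed), normalized to [0, 1].
        c2 = np.cumsum(h2).astype(np.float64)
        c2 /= c2[-1]
        c1 = np.cumsum(h1).astype(np.float64)
        c1 /= c1[-1]

        # L = C1^(-1) o C2: for each gray level v, find the level u such that C1(u) ~ C2(v).
        lut = np.interp(c2, c1, np.arange(bins))

        # Apply the lookup table to the whole source image.
        return lut[img_src].astype(img_src.dtype)

Applying match_histograms to the warped test image, with the reference image as the fixed one, normalizes the illumination between the pair before the next matching iteration.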
We transform the histogram of the test image according to the histogram of the reference image to normalize the illumination between the pair, as shown in Fig. 2; the whole procedure is illustrated in Fig. 3. To sum up, we estimate the transformation matrix H between the matching pair with a feature detector, estimate the illumination relationship, and change one of the images according to the color histogram of the other, thereby mapping the pose and illumination of the object in one image to the other.

2.5. Relationship between the Iterative Algorithm and ASIFT

The proposed iterative method is similar to ASIFT. In ASIFT, the features themselves are not invariant to affine change, but they cover the whole affine space, as shown in the middle block of Fig. 4. Every simulation of the reference image is one pose of the image in the affine space. Therefore, parts of the simulations of the reference image and of the test image should, in theory, have similar poses in the affine space. The simulations of the reference image and the test image are constructed independently; no mutual information is used in the simulations.
Table 1: Comparison of ASIFT and our method

                                ASIFT          Our method
    Simulation of ref. image    Yes            No
    Simulation of test image    Yes            Yes
    Number of simulations       Many           Few
    Number of features          10^4 ~ 10^5    10^3
    Pose simulation             Yes            Yes
    Illumination simulation     No             Yes
    NCM                         High           High
    RS                          Very low       High
    Affine invariance           Full           Partial
    Computational cost          High           Low
    Real-time                   No             Yes

By simulating at a high density in the affine space, many supposed image poses are constructed and then matched in the general way. The number of matches increases with the number of simulations. ASIFT indeed increases the invariance of the image-matching method. However, it does not care what the transformation matrix between the reference and test images is; it simply tries many possible transformations and combines the matches. Thus, ASIFT can be regarded as a sampling method around the original points in the parameter space, whose properties are shown in the left column of Table 1.

Essentially, our method also constructs "simulations." We simulate the image not only in pose but also in illumination, as shown in the right block of Fig. 4. In addition, we transform one simulation per iteration, and in most tasks two iterations are enough; we give an experiment to illustrate this in Section 3.2. Benefiting from the small number of simulations, the computational cost of our method is very low compared with that of ASIFT, which simulates many more images than our method does.

Fig. 3: Procedure of illumination estimation. (a) The test image. Warp (a) by the estimated transformation matrix to generate (b). (c) Mask with the matched regions labeled as 1 and the unmatched regions labeled as 0. (d) The inner product of (b) and (c). (e) The reference image. (f) The inner product of (c) and (e). (g) Illumination-simulated image from (d) according to the histogram of (f).

Fig. 4: Relationship among the general framework, ASIFT, and the proposed method. (Left block) The general framework, (middle block) ASIFT, and (right block) ours. The general DDM framework directly estimates the transformation between two images; it is simple but coarse. ASIFT simulates many poses of the two images to cover the affine space, whereas our method estimates the transformed pose first and then accurately matches in the projective space.
A coarse-to-fine scheme can reduce the computational time of ASIFT to about three times that of SIFT, whereas our method only costs about two times. One drawback of the proposed method is that it does not increase the invariance of the original method: when the initial method fails in matching the images, the proposed method also fails. One promising way to overcome this shortcoming is to combine the proposed method with ASIFT, which would improve both the invariance and the accuracy. Furthermore, the histogram matching may amplify noise, which seems to affect the performance; a few more keypoints are extracted after the histogram matching, but they do not affect the performance much. We show this in Section 3.3. Experimental results show that the performance of the proposed framework reaches a level comparable to that of ASIFT with far fewer features detected in total, as shown in Table 3. Therefore, the RS of our method is much higher than that of ASIFT, and the computational cost of our method is much lower because far fewer features are required.

Above all, there are some common properties between iterative SIFT (ISIFT) and ASIFT. Instead of directly matching the original images, both methods find good simulations of the original pairs. ASIFT samples simulated images over the whole affine space, whereas our method directly estimates the position in the parameter space. We should point out that these comparisons, and the experiments shown in the following section, all assume that the original method, i.e., SIFT, still works; when it fails, the proposed method also fails, whereas ASIFT can still obtain a valid result.

3. EXPERIMENTAL RESULTS

3.1. Database

In the first experiment, we show the performance of the proposed method. We capture two images with changes in both illumination and view. This experiment is not used for comparison; it only shows the effectiveness of the proposed method. To evaluate the performance of the proposed image-matching framework, we run experiments on the database provided by Mikolajczyk. This database contains eight groups of images with challenging transformations; parts of them are shown in Fig. 5. We compare the proposed method with ASIFT and with the usual DDM framework using the state-of-the-art detectors SURF and SIFT.

In addition, two evaluations of the detectors through our strategy are proposed: one tests the adaptive capacity to view change, and the other tests the capacity to illumination change. To perform the two evaluations, we build two databases. One contains 88 frames with view changes from 0° to 87°. The other contains 55 frames with light exposure changes from 4.0 to 1.4 (in steps of 0.1 EV). The two databases contain continuously varying frames; thus, we can evaluate the view-invariance of the detectors at 1° intervals and the illumination-invariance at steps of 0.1 EV. Such databases seldom appear in the open literature, and they will be made available on the Internet.

3.2. Convergence

As we mentioned in Section 2.3, the number of iterations is an important parameter. A question that should be answered is whether more iterations bring better performance.
Experiments show that, under the proposed framework, our method converges very fast. Fig. 6 shows an experiment on matching two images. The reference image is captured from a frontal view, and the test image is captured from a view angle of 60°, as shown in Fig. 6(a) and (b), respectively. Here, SIFT is used as the base detector. The RS and NCM of our method and of the DDM framework with SIFT are drawn for comparison, as shown in Fig. 6(c) and (d).

Table 2: Performance of SIFT, ISIFT with only pose estimation, and ISIFT with both pose and illumination estimation

                      SIFT     ISIFT (H)    ISIFT (H & L)
    Total detected    436      388          2021
    Total matches     50       64           169
    NCM               39       57           153
    RS (%)            8.95     14.7         7.57
    MP (%)            78.0     89.1         90.5

The results show that more iterations do not necessarily increase the performance significantly, whereas they increase the computation time linearly. When n = 2, the performance increases significantly: the NCM increases by more than 300 matches, from only 12 to 365, and the RS increases from 12.1% to 37.1%. However, as n increases further, the performance changes little; the NCM stays around 360, and the RS stays around 37%. Thus, two iterations are enough in general situations, and we use n = 2 in the following experiments. Moreover, all the features in this and the following experiments are described by the SIFT descriptor, except SURF, which is described by the SURF descriptor.

Following the general evaluation methodology, three criteria are often used to evaluate features: 1) the NCM is the total number of correct match pairs; 2) the RS is the ratio between the NCM and the minimum total number of features detected from the image pair, RS = NCM / Total; and 3) the matching precision (MP) is the ratio between the NCM and the number of matches, MP = NCM / Matches.

3.3. Performance

In this experiment, a brief view of the performance of the proposed method is given. We use SIFT as the base detector (ISIFT). Two images with both view and illumination changes are matched here.
We first match the two images by SIFT; then we simulate only the pose of the left image in our strategy; finally, we simulate both pose and illumination. The matching results are shown in Fig. 7 and Table 2. View and illumination changes both degrade the performance of the general method. SIFT achieves an RS of 8.95% with 39 correct matches. ISIFT with pose estimation only achieves an RS of 14.7% with 57 correct matches. When we estimate both the pose and the illumination changes, the total number of detected features rapidly increases, and the NCM increases to 153. Because histogram matching amplifies noise in the simulation, many fake features are detected, and the RS is reduced to 7.57%. This experiment is only a brief view of our strategy; more experiments are presented in the following.

We estimate the global illumination change between the matching pair to increase the NCM. The illumination change is usually continuous in the image; thus, revising the illumination of part of the image can benefit other regions as well. Our algorithm does not increase the invariance of the original detector, but it increases the accuracy, stability, and reliability of the matching results. When SIFT fails, our method also fails. However, when SIFT works but is not robust, the proposed method plays an important role: more matches cannot increase the invariance, but they can increase the accuracy of the alignment when the matching by SIFT is inaccurate.

Fig. 5: Four groups of images that we used for comparison [33]. Each group contains one or two transformations with six images, and only parts of them are shown here. (a) Boat (scale and rotation). (b) Graf (view). (c) Wall (view). (d) Leuven (illumination).

Fig. 6: Experiments of convergence. (a) The reference image. (b) The test image. (c)–(d) The NCM and RS of ISIFT compared with SIFT.
Fig. 7: Matching results of SIFT and ISIFT. (a) Matching result of SIFT. (b) Matching result of ISIFT with only pose simulation (H). (c) Result of ISIFT with both pose and illumination (H and L) simulation.

In other words, the advantage of the proposed method is that the performance does not degrade as the pose change or transition tilt increases, as long as it stays within the valid range. Additionally, the local keypoint locations are more accurate than those of the originally detected points. To corroborate this point of view, we show an extra experiment in the following. The first row in Fig. 8 shows the matching results of SIFT, and the second row shows the results of ISIFT; both the matches and the alignment residual error are shown. From this experiment, we find that our algorithm obtains less error than SIFT and that the NCM strongly affects the accuracy of matching.

3.4. Comparison

We choose the database provided by Mikolajczyk and compare our method with SURF, SIFT, and ASIFT. Four pairs of images with scale, view, and illumination changes are tested, as shown in Fig. 9. The images on top are the reference images, and those at the bottom are the test images. Table 3 compares this experiment in terms of NCM, RS, and MP. Our method estimates the pose and illumination of the matching pairs and simulates the reference image. Therefore, the simulated image is closer to the original image and contains most of its information, which shortens the distance between the matching pair in the parameter space.

First, the NCM of ISIFT is much higher than that of the traditional methods. ISIFT obtains 584 correct matches, whereas SURF obtains 9 matches and SIFT obtains 46 matches on Graf (the affine change situation; second row in Fig. 9). SURF and SIFT obtain 793 and 2837 features, respectively. Thus, the RS of ISIFT increases to 36.4%, whereas that of SURF and SIFT is only 1.14% and 1.62%. This implies that the efficacy of the iterative image-matching (IIM) framework is much better than that of the traditional DDM framework: we increase the RS by about 32 and 22 times, respectively, in this view-change experiment. With this significant increase in performance, we can make the matching more stable and reliable. Similarly, more correspondences are found in the other experiments, particularly under affine and illumination changes. Our method does not significantly increase the NCM under pure scale change compared with SIFT, SURF, and HLSIFD, since they are theoretically scale invariant; however, the RS and MP still increase significantly. In extreme situations, when SIFT fails in the first matching, our algorithm also fails. The proposed method can increase the stability, reliability, and accuracy of the original detector, but it cannot increase the invariance. A solution is to integrate the proposed method into ASIFT as a second layer to refine the original matching results. We will show an experiment in Section IV-E. ASIFT obtains 105, 465, 556, and 157 matches on the Boat, Graf, Wall, and Leuven image pairs (61, 46, 409, and 259 matches are found by SIFT, respectively). However, these matches are calculated from 29,985, 45,151, 64,908, and 22,562 extracted features.
Indeed, ASIFT increases the NCM, but it needs to extract many more features from the images, which costs much computation time. More detailed results are summarized in Table 3.

In this paper, we also try to link our method with general optimization theory. Essentially, the target of image matching is to find the correspondence: we want to find the transformation function between the matching pair that minimizes the matching error. Thus, we first optimize the view difference and then optimize the illumination. With this two-step optimization, our method can find a more accurate transformation function. Different from ASIFT, the proposed method does not increase the invariance of the original detector, but it increases the stability and reliability.
Fig. 8: Matching error of SIFT and the proposed method. (a) The matches of SIFT. (b) The residual error of SIFT. (c) The matches of ISIFT. (d) The residual error of ISIFT.

Fig. 9: Matching results of four groups of images. (Test images from top to bottom) Boat, Graf, Wall, and Leuven. The correct matches are drawn in blue or white lines.
Table 3: Comparison of the algorithms on view change pairs

                             SURF      SIFT      ASIFT (HR)   ISIFT
    Boat / Scale
      Total                  722       7986      29985        615
      Matches                43        125       -            94
      NCM                    8         61        105          79
      RS (%)                 1.25      0.764     0.35         12.8
      MP (%)                 18.6      48.8      -            84
    Graf / Affine
      Total                  793       2837      45151        1605
      Matches                34        210       -            586
      NCM                    9         46        465          584
      RS (%)                 1.14      1.62      1.03         36.4
      MP (%)                 26.5      21.9      -            99.7
    Wall / Affine
      Total                  1730      7094      64908        5358
      Matches                78        452       -            834
      NCM                    40        409       556          833
      RS (%)                 2.31      5.77      0.857        15.5
      MP (%)                 51.3      90.5      -            99.9
    Leuven / Illumination
      Total                  647       999       22562        1159
      Matches                172       289       -            379
      NCM                    161       259       157          344
      RS (%)                 24.9      25.9      6.96         29.7
      MP (%)                 93.6      89.6      -            90.8

3.5. Real-Time Image Matching

An important application of image matching is object detection and pose estimation in video frames. Suppose that the camera moves smoothly and that the reference image can be matched with the first frame; then the estimation of the transformation matrix from the reference image to a certain frame in the video can be initialized from the matching of the previous frame. The first frame is matched with the reference image directly by a local-feature-based image-matching method; we directly use SIFT here. The RS and NCM of our method and of SIFT are shown in Fig. 14, and parts of the matching results of ISIFT and SIFT are shown in Fig. 13. The RS of our method (ISIFT is used here) stays around 30%, and the NCM is always higher than 100 pairs in this experiment. The RS of SIFT stays around 7%; only a small part of the features are useful for the correspondence calculation. The NCM of SIFT is about 70 matches, which is lower than that of the proposed method. The means of the RS and NCM of ISIFT and SIFT are, respectively, 29.6%, 137, 5.7%, and 66. Our method accurately calculates matches throughout the video frames, even under large view changes such as in frames 750 to 900. To sum up, ISIFT is accurate and stable in real applications.

We developed a real-time image-matching system to show the efficiency. The proposed method can cope with a wide range of view and illumination changes with stable matches, as shown in Fig. 15. We compare the real performance of SURF and SIFT by using them as our basic detector: ISURF is faster than ISIFT; however, it is not as stable as ISIFT. The system is implemented on a computer with two dual-core 2.8-GHz central processing units, and the processed image size is 640 × 480. The matching can be finished in 80 ms, with parallel coding at the algorithmic level.
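The frame-to-frame initialization described above can be sketched as follows. This is only an illustration under our own naming: track_video, the reuse of the iterative_match sketch from Section 2.2, and the assumption that frames have roughly the size of the reference image are ours. Each frame is pre-warped by the previous frame's estimate, so the iterative matcher only has to recover a small residual transformation.

    import numpy as np
    import cv2

    def track_video(reference, frames, n=2):
        # Match every video frame against the reference image, initializing each
        # frame's homography from the previous frame's result (smooth camera motion assumed).
        h, w = reference.shape[:2]
        H_prev = np.eye(3)                 # maps frame coordinates to reference coordinates
        estimates = []
        for frame in frames:
            # Pre-warp the current frame with the previous estimate so that only a small
            # residual transformation remains to be recovered.
            warped = cv2.warpPerspective(frame, H_prev, (w, h))
            H_res, _ = iterative_match(reference, warped, n=n)   # see the sketch in Sec. 2.2
            H_prev = H_res @ H_prev                              # compose residual and previous estimate
            estimates.append(H_prev.copy())
        return estimates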
CONCLUSIONS

In this paper, we have proposed a novel image-matching algorithm based on an iterative framework, together with two new indicators for local feature detectors, namely, the valid angle (VA) and the valid illumination (VI). The proposed framework iteratively estimates the relative pose and illumination relationship between the matching pair and simulates one of the images toward the other, reducing the difficulty of matching images within the valid region (VA and VI). Our algorithm significantly increases the number of matching pairs, the RS, and the matching accuracy when the transformation does not exceed the valid region. The proposed method fails when the initial estimation fails, which is related to the ability of the detector. According to this phenomenon, we have proposed the two indicators, the VA and the VI, to evaluate detectors; they reflect the maximum allowable change in view and illumination, respectively. Extensive experimental results show that our method improves the traditional detectors, even under large variations, and that the new indicators are distinctive.
REFERENCES

[1]. C. Harris and M. Stephens, "A combined corner and edge detector," in Proc. 4th Alvey Vis. Conf., 1988, pp. 147–151.
[2]. S. M. Smith and J. M. Brady, "SUSAN—A new approach to low level image processing," Int. J. Comput. Vis., vol. 23, no. 1, pp. 45–78, May 1997.
[3]. F. Mokhtarian and R. Suomela, "Robust image corner detection through curvature scale space," IEEE Trans. Pattern Anal. Mach. Intell., vol. 20, no. 12, pp. 1376–1381, Dec. 1998.
[4]. D. G. Lowe, "Distinctive image features from scale-invariant keypoints," Int. J. Comput. Vis., vol. 60, no. 2, pp. 91–110, Nov. 2004.
[5]. H. Bay, A. Ess, T. Tuytelaars, and L. V. Gool, "Speeded-up robust features (SURF)," Comput. Vis. Image Understand., vol. 110, no. 3, pp. 346–359, Jun. 2008.
[6]. T. Lindeberg, Scale-Space Theory in Computer Vision. Norwell, MA: Kluwer, 1994.
[7]. Y. Yu, K. Huang, and T. Tan, "A Harris-like scale invariant feature detector," in Proc. Asian Conf. Comput. Vis., 2009, pp. 586–595.
[8]. A. Barla, F. Odone, and A. Verri, "Histogram intersection kernel for image classification," in Proc. Int. Conf. Image Process., 2003, vol. 3, pp. III-513–III-516. [Online]. Available: http://dx.doi.org/10.1109/ICIP.2003.1247294
[9]. Y. Rubner, C. Tomasi, and L. J. Guibas, A Metric for Distributions With Applications to Image Databases. Washington, DC: IEEE Comput. Soc., 1998, p. 59.

BIOGRAPHIES

Y. Ramesh received the B.Tech degree from GATES Engineering College in 2010 and is currently pursuing the M.Tech in Digital Systems and Computer Electronics at RGM College of Engineering and Technology, Nandyal, Kurnool (dist.), Andhra Pradesh. His areas of interest are digital image processing, data communication, and the application of digital systems.

J. Sofia Priya Dharshini received the B.Tech degree (Electronics & Communication Engineering) in 2005 and the M.Tech degree (Digital Systems and Computer Electronics) in 2009, and she is pursuing the Ph.D. (Wireless Communications and Networking) from JNTU Anantapur, A.P., India. Presently she is working as Associate Professor in the Department of ECE, RGMCET, Nandyal. She has published two papers in international journals and nine papers in national and international conferences. Her areas of interest include wireless communications and networks, mobile computing, and video and image processing.