International Journal on Computational Sciences & Applications (IJCSA) Vol.4, No.5, October 2014
DOI: 10.5121/ijcsa.2014.4502
RECOGNITION OF OPTICAL IMAGES BASED ON THE 
FEATURE SPECTRUM ENTROPY IN WAVELET 
DOMAIN 
Matthew 
Department of Mathematical Sciences, University of Liverpool, L69 3BX, Liverpool, UK
ABSTRACT 
To address situations in which image recognition accuracy is low and unacceptable, a feature extraction method for optical images based on the wavelet space feature spectrum entropy is studied. In this method, the principle that the signal energy is conserved before and after the wavelet transform is used to construct the wavelet energy pattern matrix, and the feature spectrum entropy of the singular values obtained by singular value decomposition of this matrix is extracted as the image feature. A BP neural network is then applied to image recognition. The experimental results show that high image recognition accuracy can be achieved with the proposed feature extraction method, which demonstrates its validity.
KEYWORDS 
Image Recognition, Wavelet Domain, Feature Extraction, Spectrum Entropy 
1.INTRODUCTION 
With the development of science and technology, image recognition has become increasingly common in daily production and life, and it is widely used in aerospace, medicine and health, industrial control, transportation and logistics [1-3]. Because of the properties of human vision, images acquired by optical sensors are the most readily accepted and identified by our eyes, so optical sensors serve as the core image acquisition device in many products, including high-end digital cameras and camcorders. As is well known, optical imaging relies on the transmission and refraction of light, so optical images are easily influenced by clouds, weather and other natural factors. In particular, atmospheric turbulence distorts the light wave from its original plane wave front into a random surface as the wave propagates through the atmosphere, which blurs the image. In addition, characteristics of the optical sensor and its related devices, such as temperature drift and edge nonlinearity, further degrade the imaging results.
In order to reduce the impact of natural conditions on optical images and improve recognition accuracy, two approaches are employed in current research. The first is to use physical measures, that is, to improve image quality by adding lenses and other devices and then to perform recognition with a visible light correlator (VLC) or a joint transform correlator (JTC). The VLC is simple in principle and has a high signal-to-noise ratio, but it is not suitable for real-time target recognition because of its complex machining process. The JTC, in turn, cannot produce a good output because of its high-intensity zero-order spectrum. In [4], a fringe-adjusted JTC improved by the wavelet transform was investigated, and the image displacement problem was studied in [5] with a joint fractional transform technique based on phase encoding. In [6], image encryption and authentication technologies using a non-traditional fractional joint transform method were investigated comparatively. The second approach is image processing in software, where the keys are feature extraction and the optical image recognition algorithm. An improved Hu moment for image object recognition was proposed in [7], and the application of computer vision and pattern recognition techniques in image processing was studied in [8]. In [9], an image metric learning algorithm with adaptive iterative restoration for asymmetric optical images was studied thoroughly. These works have provided a theoretical basis for the development of optical image recognition technology.
Previous research has focused mainly on recognition algorithms and image recovery technologies. Based on a study of the literature, we argue that the feature extraction of the optical image exerts a decisive influence on recognition accuracy, so this paper places special emphasis on feature extraction for optical images. The main innovation of this paper is that the wavelet transform is introduced into optical image processing and the spectrum entropy is taken as the image feature, with a BP neural network used for recognition.
2.THE RECOGNITION PROCESS OF OPTICAL IMAGE 
Image recognition is a complicated process that can be divided into four major stages: image acquisition, image preprocessing, image feature extraction and image recognition. Image acquisition is the first step, in which the image is acquired with optical sensors. The second step is image preprocessing, in which the original image is transformed to lay the foundation for feature extraction; preprocessing itself involves many complex algorithms, but it is not the focus of this work. Feature extraction is the key part of image recognition and the emphasis of this paper, because high-quality features can greatly improve recognition accuracy. Image recognition has become an important branch of pattern recognition, and many recognition algorithms are available; a neural network is applied for image classification and recognition in this work.
Optical images are often heavily contaminated by complicated background noise during acquisition, recording and transmission, so an image usually contains some noise. Image preprocessing is therefore necessary to suppress the noise, and includes grayscale conversion, binarization, edge detection and filtering of the optical image. Grayscale conversion turns a color image into a grayscale image; it retains structural features such as the chromaticity and luminance distribution of the image objects while improving efficiency by reducing the amount of data to be processed. In this paper, the weighted average method is used for grayscale conversion. Binarization is a conventional threshold-based image segmentation algorithm that turns pixels whose gray values fall within a specified threshold into black pixels (0) and the remaining pixels into white pixels (255). Selecting the segmentation threshold is crucial for binarization, and the maximum between-class variance (Otsu) method is used in this paper. Image filtering removes the noise produced by external signal interference during shooting or transfer and benefits subsequent processing; however, filtering inevitably lowers the quality of the original image to some extent, so an appropriate algorithm should be chosen. Gaussian filtering is used in this paper.
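As a concrete illustration of this preprocessing chain, the sketch below applies weighted-average grayscale conversion, Gaussian filtering and Otsu binarization with OpenCV. The kernel size, the step order (filtering before thresholding) and the synthetic test image are assumptions made only for illustration, not settings reported in the paper.

```python
# A minimal preprocessing sketch, assuming OpenCV (cv2) and NumPy are available.
import cv2
import numpy as np

def preprocess(bgr: np.ndarray) -> np.ndarray:
    # Weighted-average grayscale conversion (0.299 R + 0.587 G + 0.114 B)
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Gaussian filtering to suppress acquisition/transmission noise
    # (the 5x5 kernel is an assumed, illustrative choice)
    smooth = cv2.GaussianBlur(gray, (5, 5), 0)
    # Maximum between-class variance (Otsu) threshold selection: pixels below
    # the threshold become 0 (black) and the rest 255 (white)
    _, binary = cv2.threshold(smooth, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary

# Placeholder colour image standing in for a captured facial image
demo = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
print(preprocess(demo).shape, preprocess(demo).dtype)
```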
3.FEATURE EXTRACTION OF IMAGES 
Digitizing an image and extracting some of its characteristics is called feature extraction. The more clearly an image differs from other images in its features, the easier it is to recognize correctly. Common image feature extraction methods include the region segmentation matrix method and the moment invariants algorithm [10][13][14]. According to the characteristics of optical images, the wavelet space feature spectrum entropy of the image is used as the feature for image recognition in this paper.
3.1. Wavelet Analysis Theory 
The basic theory of wavelet analysis was first proposed by French scientists in the early 1980s and has since developed into a mature branch of mathematics. Wavelet analysis acts like a mathematical microscope with zoom, shrink and shift functions, analyzing the dynamic properties of a signal by examining its variations at different magnifications, and it is therefore widely used in many fields [11]. A wavelet function is obtained by translating and dilating a basic wavelet function, and wavelet analysis decomposes a signal into a superposition of a series of wavelet functions. The wavelet transform is the inner product of the signal x(t) with the basic wavelet function shifted by \tau at different scales a:
f_x(a, \tau) = \frac{1}{\sqrt{a}} \int_{-\infty}^{+\infty} x(t)\, \psi\!\left(\frac{t - \tau}{a}\right) dt, \quad a > 0,   (1)
and the equivalent frequency-domain expression is:

f_x(a, \tau) = \frac{\sqrt{a}}{2\pi} \int_{-\infty}^{+\infty} \hat{x}(\omega)\, \hat{\psi}^{*}(a\omega)\, e^{j\omega\tau}\, d\omega, \quad a > 0,   (2)
where the parameter \tau is the translation (shift), which in the microscope analogy is the distance the lens moves parallel to the target, and a is the scale (dilation), corresponding to moving the lens closer to or farther from the target. Equations (1) and (2) show that wavelet analysis is a powerful tool: through translated and dilated wavelet basis functions it can analyze the local features of a signal, and in the two-dimensional case it has direction selectivity.
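The short sketch below evaluates the continuous wavelet transform of equation (1) over a range of scales with PyWavelets; the Morlet mother wavelet, the scale range and the synthetic test signal are assumptions made only for illustration.

```python
# A hedged sketch of the continuous wavelet transform in equation (1) using PyWavelets.
import numpy as np
import pywt

t = np.linspace(0, 1, 1024)
x = np.sin(2 * np.pi * 16 * t) + 0.3 * np.random.randn(t.size)  # assumed test signal

scales = np.arange(1, 65)                    # dilation parameter a
coeffs, freqs = pywt.cwt(x, scales, 'morl')  # W_f(a, b): one row per scale a
print(coeffs.shape)                          # (64, 1024)
```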
3.2. The Wavelet Space Feature Spectrum Entropy Of The Image 
Common image feature extraction methods extract regional or time-domain features, but such features are not distinctive when the image content is similar to the shooting environment. We therefore study the wavelet space feature spectrum entropy based on the wavelet transform in the time-frequency domain. For a function f(t) with finite energy, the energy before and after the wavelet transform must be conserved, that is:
\int_{-\infty}^{+\infty} f^{2}(t)\, dt = \frac{1}{C_{\psi}} \int_{0}^{+\infty} a^{-2} E(a)\, da,   (3)

where C_{\psi} = \int_{-\infty}^{+\infty} \frac{|\hat{\psi}(\nu)|^{2}}{\nu}\, d\nu is the admissibility constant of the wavelet function, E(a) = \int_{-\infty}^{+\infty} |W_{f}(a, b)|^{2}\, db is the energy of f(t) at scale a, and W_{f}(a, b) is the amplitude of the wavelet transform.
Equation (3) shows that the wavelet transform maps a one-dimensional signal into the two-dimensional wavelet space. The matrix

W = \left\{ \frac{|W_{f}(a, b)|^{2}}{C_{\psi}\, a^{2}} \right\}

is called the wavelet energy distribution matrix of the two-dimensional wavelet space and can serve as the pattern matrix of the signal. Applying singular value decomposition to W gives its singular values d_{1} \ge d_{2} \ge \cdots \ge d_{n}. The singular value spectrum \{d_{i}\} is an efficient partition of the original signal \{x_{i}\} in the time-frequency domain, and the wavelet space feature spectrum entropy of the image in the time-frequency domain is defined as

H_{ws} = -\sum_{i=1}^{n} p_{i} \log p_{i},   (4)

where p_{i} = d_{i} / \sum_{i=1}^{n} d_{i} is the proportion of the i-th singular value in the singular value spectrum.
The wavelet space feature spectrum entropy reflects the energy distribution of the image in the time-frequency domain. It decreases when the image features become simple, because the energy concentrates in a small number of modes; conversely, it increases as the image features become more complicated, because the energy disperses. Images can therefore be recognized by comparing their wavelet space feature spectrum entropies.
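The sketch below illustrates the feature of equation (4): build a wavelet energy matrix, take its singular values, and compute the spectrum entropy. Using the squared db6 sub-band coefficients of a single 2-D decomposition level as the pattern matrix W is an assumption made for illustration; the paper defines W from the continuous transform |W_f(a, b)|^2 / (C_psi a^2).

```python
# A hedged sketch of the wavelet space feature spectrum entropy in equation (4).
import numpy as np
import pywt

def wavelet_spectrum_entropy(image: np.ndarray) -> float:
    # One level of 2-D db6 decomposition; stack the squared sub-bands as the
    # wavelet energy pattern matrix W (an assumed stand-in for the paper's W)
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), 'db6')
    W = np.vstack([cA**2, cH**2, cV**2, cD**2])
    d = np.linalg.svd(W, compute_uv=False)   # singular values d_1 >= ... >= d_n
    p = d / d.sum()                          # p_i = d_i / sum_j d_j
    p = p[p > 0]                             # drop zeros to avoid log(0)
    return float(-(p * np.log(p)).sum())     # H_ws = -sum_i p_i log p_i

print(wavelet_spectrum_entropy(np.random.rand(64, 64)))  # placeholder image
```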
4.IMAGE RECOGNITION 
Image recognition is a classification process according to the image features. The selection of the 
recognition algorithm is of great importance, because the image recognition should be accurate 
and fast. In this paper, BP neural network method serves as the recognition algorithm. 
4.1. Overview Of BP Neural Network 
The core of the BP neural network is the feed-forward of signals and the back-propagation of error. During the feed-forward pass, the signals are processed layer by layer, and each neuron in the input layer influences the neurons in the output layer. If the error between the actual output and the expected output does not meet the requirement, back-propagation is used to train the network: the weights and thresholds are adjusted to reduce the error according to the optimization objective. This process does not stop until the error meets the accuracy requirement [12]. The structure of the BP neural network commonly used in engineering is shown in Fig. 1.
Figure 1. The topology structure of a BP neural network
4.2. The Training Process Of BP Neural Network
A BP neural network must be trained to acquire associative memory and prediction capability before being used. The training process includes the following steps:

Step 1: network initialization. The number of input layer nodes n, hidden layer nodes l and output layer nodes m is determined by the system's input and output sequence data (X, Y). The connection weights w_{ij} between the input layer and the hidden layer, the connection weights w_{jk} between the hidden layer and the output layer, the hidden layer thresholds a and the output layer thresholds b are initialized, and the learning rate \eta and the neuron excitation function f are chosen.

Step 2: hidden layer output calculation. The hidden layer output H is computed from the input vector X, w_{ij} and a:

H_{j} = f\!\left( \sum_{i=1}^{n} w_{ij} x_{i} - a_{j} \right), \quad j = 1, 2, \cdots, l,   (5)

Step 3: output layer output calculation. The predicted network output O is computed from H, w_{jk} and b:

O_{k} = \sum_{j=1}^{l} H_{j} w_{jk} - b_{k}, \quad k = 1, 2, \cdots, m,   (6)

Step 4: error calculation. The prediction error e is computed from the predicted output O and the desired output Y:

e_{k} = Y_{k} - O_{k}, \quad k = 1, 2, \cdots, m,   (7)

Step 5: weight update. The weights w_{ij} and w_{jk} are updated according to e:

w_{ij} = w_{ij} + \eta H_{j}(1 - H_{j})\, x_{i} \sum_{k=1}^{m} w_{jk} e_{k}, \quad i = 1, 2, \cdots, n, \; j = 1, 2, \cdots, l,   (8)

w_{jk} = w_{jk} + \eta H_{j} e_{k}, \quad j = 1, 2, \cdots, l, \; k = 1, 2, \cdots, m,   (9)

Step 6: threshold update. The thresholds a and b are updated according to e:

a_{j} = a_{j} + \eta H_{j}(1 - H_{j}) \sum_{k=1}^{m} w_{jk} e_{k}, \quad j = 1, 2, \cdots, l,   (10)

b_{k} = b_{k} + e_{k}, \quad k = 1, 2, \cdots, m,   (11)

Step 7: check whether the iteration limit has been reached or the prediction error meets the desired requirement. If not, return to Step 2; otherwise, stop.
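As a rough companion to Steps 1-7, the sketch below trains a one-hidden-layer network with standard gradient-descent back-propagation. It treats the thresholds as additive biases and uses a sigmoid excitation function, so the sign conventions differ slightly from equations (5)-(11) as printed; the 1-7-3 structure, the learning rate and the random placeholder data are assumptions, not values taken from the paper's implementation.

```python
# A minimal NumPy sketch of one-hidden-layer BP training (generic gradient descent).
import numpy as np

rng = np.random.default_rng(0)
n, l, m = 1, 7, 3                          # input, hidden and output node counts (assumed)
eta = 0.1                                  # learning rate (assumed)
W1 = rng.uniform(-1, 1, (n, l))            # w_ij: input -> hidden weights
W2 = rng.uniform(-1, 1, (l, m))            # w_jk: hidden -> output weights
a = np.zeros(l)                            # hidden-layer thresholds, used as additive biases
b = np.zeros(m)                            # output-layer thresholds, used as additive biases

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Placeholder training set: 30 spectrum-entropy features and 3-bit orientation codes
X = rng.random((30, n))
Y = rng.integers(0, 2, (30, m)).astype(float)

for epoch in range(3000):                  # Step 7: iterate until the limit or goal is reached
    for x, y in zip(X, Y):
        H = sigmoid(x @ W1 + a)            # Step 2: hidden-layer output
        O = H @ W2 + b                     # Step 3: predicted output (linear output layer)
        e = y - O                          # Step 4: prediction error
        g = H * (1 - H) * (W2 @ e)         # back-propagated hidden-layer error term
        W1 += eta * np.outer(x, g)         # Step 5: input -> hidden weight update
        W2 += eta * np.outer(H, e)         # Step 5: hidden -> output weight update
        a += eta * g                       # Step 6: hidden threshold (bias) update
        b += eta * e                       # Step 6: output threshold (bias) update
```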
5.EXPERIMENTAL ANALYSIS 
Here we analyze face orientation recognition. Facial image recognition is a complex pattern recognition problem, and face orientation recognition has become a hotspot of recent research, widely used in anti-theft protection, tracking and identification, etc. The main purpose of the experiment is to verify the validity of the image feature extraction method based on the wavelet space feature spectrum entropy.
First, facial images facing different orientations are collected using a high-speed digital camera equipped with an optical lens and sensor. Images of 10 different persons are collected, five images per person, with the face oriented left, left front, front, front right and right. After grayscale conversion, binarization and filtering, the preprocessed images are shown in Fig. 2.
Fig. 2 The face recognition images 
Fig. 2 shows that the images are of good quality after preprocessing, so feature extraction can begin. First, the 50 images are divided into five groups for ease of distinction; in the first group, the numbers 11, 12, 13, 14 and 15 denote the left, left front, front, front right and right images, respectively. The 50 images are then wavelet transformed, with DB6 as the mother wavelet, and the pattern matrix is constructed to obtain the singular values following the process introduced in Section 3.2. Finally, the wavelet space feature spectrum entropy of each image is calculated according to formula (4), and the results are grouped by left, left front, front, front right and right, as shown in Table 1 below.
Table 1 shows a significant difference in the wavelet space feature spectrum entropy of facial images facing different orientations, which suggests that taking the wavelet space feature spectrum entropy as the image feature is effective. However, the entropy ranges for the left front and front right directions overlap, so a BP neural network is used for the final image recognition.
To ensure the randomness of the network training, the features of 30 images are selected randomly from the 50 images as the training set, and the features of the remaining 20 images serve as the test set. The next step is to determine the network structure. A typical three-layer network is selected according to the actual demands of the experiment. The input is the wavelet space feature spectrum entropy of the image, so the input layer has a single node. The output of the network is the image recognition result; because the BP network output is read as binary data, the output status is encoded with three binary digits, as shown in Table 2, so the output layer has three nodes. The number of hidden layer nodes is set to 7 based on experience. The network structure is therefore 1x7x3.
Table 1. The features of images towards different orientations

                                          Left          Left front    Front         Front right   Right
Wavelet space feature spectrum entropy    0.812~0.778   0.527~0.496   0.251~0.235   0.545~0.519   0.745~0.761
Table 2. Neural network output status table

          Left   Left front   Front   Front right   Right
Coding    100    110          010     011           001
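To make the coding in Table 2 concrete, the sketch below shows one plausible way to map a three-unit network output back to an orientation label by rounding each unit to a bit; the helper names and the rounding rule are illustrative assumptions, not details given in the paper.

```python
# A hypothetical decoder for the 3-bit orientation codes in Table 2.
CODE_TO_ORIENTATION = {
    (1, 0, 0): "left",
    (1, 1, 0): "left front",
    (0, 1, 0): "front",
    (0, 1, 1): "front right",
    (0, 0, 1): "right",
}

def decode(output):
    """Round the three network outputs to bits and look up the orientation."""
    bits = tuple(int(round(v)) for v in output)
    return CODE_TO_ORIENTATION.get(bits, "unrecognized")  # e.g. (0, 0, 0) has no match

print(decode([0.9, 0.8, 0.1]))  # -> "left front"
print(decode([0.1, 0.2, 0.1]))  # -> "unrecognized"
```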
Table 3 Image recognition results 
No. Predictive value Expected value No. Predictive value Expected value 
1 010 010 11 001 001 
2 110 110 12 100 100 
3 001 001 13 011 110 
4 010 010 14 010 010 
5 100 100 15 011 011 
6 110 011 16 100 100 
7 011 011 17 100 100 
8 001 001 18 001 001 
9 010 010 19 010 010 
10 100 100 20 110 110 
In our analysis, the maximum number of training steps is set to 3000 and the training goal to 0.01. The training procedure is shown in Fig. 3: only 1746 steps are needed to reach the training goal of 0.01, so the training time is short. The features of the remaining 20 images are then input into the trained network for testing, and the image recognition results are shown in Table 3.
Fig. 3 Network training curve 
Table 3 shows that 18 of the 20 images are recognized correctly; only images No. 6 and No. 13 are misclassified, giving an accuracy of 90%. To further verify the validity of the proposed method, 100 repeated experiments are carried out under the same conditions, with 30 images selected randomly from the 50 images for recognition in each experiment. The results are shown in Fig. 4. The repeated experiments indicate a high recognition accuracy, with an average of 93.13% over the 100 experiments; the highest accuracy reaches 100% and the lowest is 86.7%. Even though the accuracy in practical situations may differ from that in experiments, it can meet the requirements of use, which reinforces the conclusion that the image feature extraction method based on the wavelet space feature spectrum entropy is effective. A transversal comparison is then conducted, which shows the superiority of this method.
According to current studies, the image segmentation method and the grayscale entropy method are the most common feature extraction methods for face orientation recognition. The image segmentation method locates the eyes by partitioning the image into a matrix of blocks and then determines the face orientation from the characteristics of the sub-matrices containing the eyes. When extracting features with this method, the image is divided into 6 rows and 8 columns, giving 48 sub-matrix blocks, and the eyes lie in the second row, so the face orientation is determined from the number and location of '1' pixels in the 8 sub-matrix blocks of the second row. The grayscale entropy method computes the gray-level proportions of the image and then extracts the grayscale entropy; since the images have already been preprocessed, the grayscale entropy can be computed directly. The features of the 50 images are extracted with these two methods, respectively. The comparisons of recognition accuracy and recognition time between the wavelet method and these two feature extraction methods are shown in Fig. 5 and Fig. 6, respectively.
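As a rough illustration of the grayscale entropy baseline described above, the sketch below computes an entropy from the gray-level histogram of an image; the 256-bin histogram, the base-2 logarithm and the placeholder image are assumed details, not a specification from the paper.

```python
# A hedged sketch of the grayscale entropy baseline: histogram the gray levels,
# normalise to proportions, and compute -sum p log p.
import numpy as np

def grayscale_entropy(gray: np.ndarray) -> float:
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))  # gray-level counts
    p = hist / hist.sum()                                    # gray-level proportions
    p = p[p > 0]                                             # drop empty bins before log
    return float(-(p * np.log2(p)).sum())

print(grayscale_entropy(np.random.randint(0, 256, (64, 64))))  # placeholder image
```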
Fig. 4 Comparison of multiple recognition results 
From the comparison in Fig. 5, the average recognition accuracy of the matrix (image segmentation) method is only about 80%, similar to that of the grayscale entropy method, and lower than that of the wavelet method (about 90%). In Fig. 6, the average time of the wavelet method is about 88 seconds, while the average time of the grayscale entropy method is about 84 seconds and that of the matrix method about 78 seconds. Although the wavelet method takes more time than the other two methods, the difference is small and the average time still meets the requirements of practical engineering. In addition, the recognition result 000 appears when the matrix method is used; this output cannot give an orientation because 000 does not correspond to any of the codes in Table 2. The wavelet method avoids this defect.
Fig. 5 Comparison of recognition accuracy 
Fig. 6 Comparison of recognition time 
6. CONCLUSIONS 
Because optical images are easily influenced by natural factors, the feature extraction method largely determines whether an image can be identified correctly. This
paper focuses on the image feature extraction method based on the wavelet space feature 
spectrum entropy. In this method, the principle that the energy is constant before and after the 
wavelet transformation is employed to construct the wavelet energy pattern matrices, and the 
feature spectrum entropy of singular value is extracted as the image features by singular value 
decomposition of the matrix. Finally, BP neural network is applied in image recognition 
according to the image feature. The experimental results show that the image features extracted 
by using the wavelet space spectrum entropy method are representative and separable, and a high 
image recognition accuracy of up to 90% can be achieved. Compared with current image feature extraction methods, the proposed method has clear advantages. However, the experiments also show that this method takes slightly longer than the other two methods, so future research will focus on reducing the complexity of the method and improving its efficiency. In future work, we intend to use mathematical [15] and related methods to optimize our algorithm [16][17].
REFERENCES

[1] M. Anant. Interplay between intensity standardization and inhomogeneity correction in MR image processing. IEEE Transactions on Medical Imaging, 2005, 24(5): 561-576.
[2] K. Kudoh. Image fusion processing for IKONOS 1-m color imagery. IEEE Transactions on Geoscience and Remote Sensing, 2007, 45(10): 3075-3086.
[3] T. Danell. Fast morphological image processing open-source extensions for GPU processing with CUDA. IEEE Journal of Selected Topics in Signal Processing, 2012, 6(7): 849-855.
[4] A. Bhagatji, N. K. Nishchal, K. Arun. Wavelet-modified fringe-adjusted joint transform correlator. Optics & Laser Technology, 2008, 40(1): 99-112.
[5] G. Peng, Q. Li, H. Feng. Image displacement based on phase-encoded reference joint fractional transform correlator. Optics & Laser Technology, 2011, 43(8): 1385-1390.
[6] G. Peng, Q. Li, H. Feng. Displacement measurements based on joint fractional Fourier transform correlator. Optics and Lasers in Engineering, 2012, 50(3): 399-404.
[7] L. Zhang, F. Xiang, J. Pu. Application of improved Hu moments in object recognition. 2012 IEEE International Conference on Automation and Logistics, 2012: 554-558.
[8] C. R. Jung. Image processing, computer vision and pattern recognition in Latin America. Pattern Recognition Letters, 2011, 32(1): 1-2.
[9] H. Wang, F. Shkjezi, E. Hoxha. Distance metric learning for multi-camera people matching. 2013 Sixth International Conference on Advanced Computational Intelligence (ICACI), IEEE, 2013.
[10] L. Wang, X. Wang, J. Feng. On image matrix based feature extraction algorithms. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2006, 36(1): 194-197.
[11] D. K. Hammond, P. Vandergheynst, R. Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 2011, 30(2): 129-150.
[12] G. Jing, W. Du, Y. Guo. Studies on prediction of separation percent in electrodialysis process via BP neural networks and improved BP algorithms. Desalination, 2012, 291(2): 78-83.
[13] K. Gao, C. Du, H. Wang, S. Zhang. An efficient coal and gangue recognition algorithm. International Journal of Signal Processing, Image Processing and Pattern Recognition, 2013, 6(4).
[14] Q. Cai, H. Man. Dynamic Structure Preserving Map (DSPM) for human action primitive modeling. E-Letter, 2014, 9(3).
[15] G. S. Chen, H. Y. Tang, D. Q. Zhang, Y. X. Jiao, H. X. Wang. Existence of three solutions for a nonlocal elliptic system of Kirchhoff type. Boundary Value Problems, 2013(1): 1-9.
[16] Q. Cai, Y. Yin, H. Man. Learning spatio-temporal dependencies for action recognition. IEEE International Conference on Image Processing (ICIP), 2013: 3740-3744.
[17] H. Fei, J. Huan. Boosting with structure information in the functional space: an application to graph classification. Proceedings of the 16th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (SIGKDD '10), 2010.