IJRET: International Journal of Research in Engineering and Technology, eISSN: 2319-1163 | pISSN: 2321-7308, Volume 04, Issue 04, Apr-2015, Available @ http://www.ijret.org
ANALYSIS OF IMAGE STORAGE AND RETRIEVAL IN GRADED MEMORY
B Sudarshan¹, R Manjunatha²
¹Research Scholar, Electronics and Communication Engg. Department, Jain University, Bangalore, India
²Research Guide, Electronics and Communication Engg. Department, Jain University, Bangalore, India
Abstract
An approach to storing and retrieving static images using a multilayer Hopfield neural network is analyzed. Here, the Hopfield network is used as a memory that stores images at a predefined resolution. During image retrieval, a down sampled version of the stored image is provided as the query image, and the memory initially gives out a coarse image. The finer details of the image are synthesized later from this coarse output: the coarse output image is fed back as the input to the memory, and the output of this pass is better than the one obtained initially. The output of the memory keeps improving as the passes progress, and we call such a memory a graded memory. The work proposes various models of the graded memory using a multilayer Hopfield neural network and analyses the effectiveness of this memory with parameters such as MSE, RMSE and PSNR.
Keywords: Hopfield network, graded memory, image storage, image retrieval.
--------------------------------------------------------------------***----------------------------------------------------------------------
1. INTRODUCTION
There are many ways in which picture images can be stored in memory and retrieved. The simplest is to store the image in ordinary memory without compressing it. More commonly, however, static images are stored using compression techniques such as JPEG or wavelets. This requires the entire image to be compressed with one of these techniques and the compressed data to then be encoded using a suitable coding scheme.
This paper analyzes a different approach to static image storage and retrieval using a multilayer Hopfield neural network, so that the network can be used as a graded memory [1]. Here, a single multilayer Hopfield neural network acts as a memory by storing multiple images. A stored image can be retrieved by providing the corresponding down sampled image as the input to the neural network. The image thus retrieved may be lossy, i.e., it may not match the original pixel for pixel, but in many situations a lossy image is sufficient to arrive at a conclusion. The quality of the image recall is analyzed using the Mean Squared Error (MSE), the Root Mean Squared Error (RMSE) and the Peak Signal to Noise Ratio (PSNR).
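For reference, the three quality measures can be computed directly from the stored and recalled images. The short sketch below is an illustrative Python implementation (not the authors' MATLAB code), assuming 8-bit gray-scale images with a peak value of 255.

```python
import numpy as np

def image_quality(stored, recalled, peak=255.0):
    """Return (MSE, RMSE, PSNR in dB) between a stored and a recalled image."""
    stored = np.asarray(stored, dtype=np.float64)
    recalled = np.asarray(recalled, dtype=np.float64)
    mse = np.mean((stored - recalled) ** 2)          # Mean Squared Error
    rmse = np.sqrt(mse)                              # Root Mean Squared Error
    psnr = float('inf') if mse == 0 else 20.0 * np.log10(peak / rmse)
    return mse, rmse, psnr
```

A perfect recall gives MSE = 0, in which case the PSNR is unbounded.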
A few authors have proposed storing gray scale images using Hopfield neural networks [3][4]. The method presented by Giovanni Costantini et al. [4] decomposes the gray scale image into L binary patterns, each representing one bit in a digital coding of the gray levels. Each pattern is stored independently in a conventional binary neural network with n neurons, where n is the number of pixels, giving L uncoupled neural networks with n² connections in each level. The main advantage is that the L uncoupled networks can be implemented in parallel, saving a considerable amount of time during both training and recall. In this method, however, if a binary pattern cannot be stored in one sub-network, the whole image cannot be stored. C. Oh et al. [5] proposed an associative memory design in which a large scale image is decomposed into many sub-images that are stored in independent neural networks; a similar problem arises in this method as well, and overlapping between sub-images is used to overcome these effects. Igor Aizenberg et al. [2] proposed a multilayer neural network with multi-valued neurons to classify textures.
In the proposed work, the multilayer Hopfield network is used as a memory and the algorithm presented in [3] is adapted. This algorithm is better than the one used in [4], which stores an image in a neural network having L uncoupled layers without any interaction among the layers.
The paper is organized as follows. A brief overview of the Hopfield neural network is given in Section 2. Section 3 elaborates the actual design of the graded memory, focusing on image storage and retrieval. Section 4 explains the simulation setup and results. Section 5 provides the conclusion and future scope of work.
2. MULTILAYER HOPFIELD NEURAL
NETWORK
A 2-dimensional Hopfield neural network [6] has a single layer of neurons. Each neuron has an input and an output and performs a computation, and all neurons in the layer have bidirectional interconnections. The state equation of the network is given by

dU_ij/dt = −U_ij + Σ_kl W_ij,kl V_kl + I_ij    (1)

where i, k = 0, 1, 2, ..., M and j, l = 0, 1, 2, ..., N.
M×N is the number of neurons, U_ij is the input to the neuron at the i-th row and j-th column, and V_kl is the output of the neuron at the k-th row and l-th column. W = [W_ij,kl] is the weight matrix for the connections. The input bias I_ij is taken as zero for all i, j here. The output V_ij is found by applying the piecewise-linear saturation function f(x) = 0.5(|x+1| − |x−1|). In this work, to store a gray scale image, a 3-dimensional (M×N×L) multilayer Hopfield neural network is considered, where M denotes the number of rows, N the number of columns and L the number of levels in the grid of neurons. The current neuron is denoted by C(i,j,p), where i, j and p denote the row, column and level respectively. Each neuron is connected to the other neurons in the neighborhood given by the following constraint, as presented in [3]:

N_r,s(i,j,p) = { (k,l,q) : |k−i| ≤ r ∨ |k−i| ≥ M−r, |l−j| ≤ r ∨ |l−j| ≥ N−r, |q−p| ≤ s ∨ |q−p| ≥ L−s }    (2)
Each neuron is connected to (2r+1)² neurons in the nearest neighborhood defined by the above constraint in each of the (2s+1) layers p−s, ..., p−1, p, p+1, ..., p+s; here r ≥ s is assumed. Each neuron therefore has μ connections, given by μ = (2s+1)(2r+1)².
The above neighborhood constraint gives the network wraparound connections, meaning the neurons lie on a torus. For the neural network without wraparound connections, i.e., the non-torus neural network, equation (2) reduces to

N_r,s(i,j,p) = { (k,l,q) : |k−i| ≤ r, |l−j| ≤ r, |q−p| ≤ s }    (3)
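To make the two neighborhood definitions concrete, the sketch below enumerates N_r,s(i,j,p) for both the torus case of equation (2), using circular distances for the wraparound, and the non-torus case of equation (3). It is an illustrative Python sketch rather than the authors' implementation. With the values used later in the paper (r = s = 4), each neuron has μ = (2·4+1)(2·4+1)² = 9 × 81 = 729 connections.

```python
def neighborhood(i, j, p, M, N, L, r, s, torus=True):
    """Return the set of (k, l, q) indices connected to neuron C(i, j, p)."""
    def close(a, b, size, radius):
        d = abs(a - b)
        if torus:
            d = min(d, size - d)   # wraparound (circular) distance on the torus
        return d <= radius
    return {(k, l, q)
            for k in range(M) for l in range(N) for q in range(L)
            if close(k, i, M, r) and close(l, j, N, r) and close(q, p, L, s)}
```

With torus=True and M, N > 2r+1 and L > 2s+1 the set always has (2s+1)(2r+1)² members; with torus=False, neurons near the image border have fewer connections.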
The state equation for this 3-dimensional Hopfield neural network having L 2-dimensional layers is given by

dU_ijp/dt = −U_ijp + Σ_{C(k,l,q) ∈ N_r,s(i,j,p)} W_ijp,klq V_klq + I_ijp    (4)

The image with n pixels and 2^L gray levels is decomposed into L binary patterns with n pixels each. Each pattern corresponds to a layer of the multilayer Hopfield neural network. Hence, the proposed architecture consists of L layers, each with n neurons; the total number of neurons is nL and the total number of interconnections is μnL. The number of interconnections grows only linearly with the number of pixels and logarithmically with the number of gray levels. The image is decomposed into binary patterns using the binary weighted code. These are then converted to bipolar patterns, and the bipolar patterns are used to train the neural network, as discussed in the next section. It is observed that binary patterns coded with the reflective Gray code did not give better results than binary patterns coded with the binary weighted code, whereas in [3] the binary patterns are coded with the reflective Gray code.
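The decomposition described above can be sketched as follows. This is an illustrative Python version (not the authors' MATLAB code) that splits an 8-bit image into L = 8 binary-weighted bit planes and maps {0, 1} to {−1, +1}.

```python
import numpy as np

def to_bipolar_planes(img, levels=8):
    """Decompose a gray-scale image into L bipolar bit planes (binary weighted code)."""
    img = np.asarray(img, dtype=np.uint8)
    # Plane q holds bit q of every pixel; bit 0 is the least significant bit.
    bits = [(img >> q) & 1 for q in range(levels)]
    return [2.0 * b.astype(np.float64) - 1.0 for b in bits]   # 0 -> -1, 1 -> +1

def from_bipolar_planes(planes):
    """Recombine bipolar bit planes back into a gray-scale image."""
    bits = [(p > 0).astype(np.uint16) for p in planes]
    return sum(b << q for q, b in enumerate(bits)).astype(np.uint8)
```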
3. DESIGN OF GRADED MEMORY
3.1 Image Storage
To store a gray scale image in the memory, the images shown in figure 1 are taken. Storing a 128x128 gray scale image directly requires a very large number of neurons and connections, demanding huge computer resources; because of the virtual memory needed to hold such a large number of connections, the gray scale image to be stored is partitioned into many sub-blocks of equal size. For example, a 128x128 image can be partitioned into 16 sub-blocks of size 32x32, 64 sub-blocks of size 16x16 or 256 sub-blocks of size 8x8. When an image of size 48x48 is partitioned into 9 sub-blocks of 16x16 pixels each, the neural network is trained on all 9 sub-blocks simultaneously, and each of these sub-images is stored in the neural network during training. The method used to design the multilayer Hopfield network is adapted from [3] and is given below for reference.
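As a small illustration of the partitioning step, the sketch below cuts an image whose dimensions are an exact multiple of the block size into equal sub-blocks, for example a 48x48 image into nine 16x16 blocks; the block size and row-by-row ordering here are assumptions for illustration.

```python
import numpy as np

def partition(img, block=16):
    """Split an image into equal, non-overlapping sub-blocks, scanning row by row."""
    img = np.asarray(img)
    rows, cols = img.shape
    assert rows % block == 0 and cols % block == 0, "image must be a multiple of the block size"
    return [img[r:r + block, c:c + block]
            for r in range(0, rows, block)
            for c in range(0, cols, block)]

# e.g. partition(np.zeros((48, 48)), block=16) returns 9 sub-blocks of size 16x16
```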
The simulation setup covers training and testing of the multilayer Hopfield neural network using MATLAB 7.10. The value of δ is set to 500, r and s are both set to 4, and the learning rate is taken as 1.
Fig 1 Images used for training the neural network: (a), (b), (c)
Let y_ijp = f(x_ijp), and assume W_ijp,klq = 1 whenever (i,j,p) = (k,l,q).
If there are Q images to be stored in the network, each gray scale image coded in the binary weighted code is first decomposed into L binary patterns; these patterns are then converted to bipolar patterns, with 0 replaced by −1 and 1 retained as it is. The i-th image gives y_i ∈ {−1,+1}^(M×N×L), so y_1, y_2, ..., y_Q are the Q bipolar patterns corresponding to the Q images to be stored.
Connection weights W_ijp,klq satisfying the following set of constraints are found:

Σ_{C(k,l,q) ∈ N_r,s(i,j,p)} W_ijp,klq V_ijp^(m) V_klq^(m) ≥ δ > 0    (5)

where i, k = 1, 2, ..., M, j, l = 1, 2, ..., N, p, q = 1, 2, ..., L, m = 1, 2, ..., Q, and δ is the stability factor for the stored images in the network.
The connection weights are computed using the following algorithm. Initially, all the weights are set to 0, i.e., W_ijp,klq(0) = 0 for all i, j, p, k, l, q. For every iteration t > 0,

Δ_ijp^(m)(t) = Σ_{C(k,l,q) ∈ N_r,s(i,j,p)} W_ijp,klq V_ijp^(m) V_klq^(m) − δ    (6)

Then P(Δ_ijp(t)) = 0 for Δ_ijp(t) ≥ 0 and P(Δ_ijp(t)) = 1 for Δ_ijp(t) < 0 is calculated.
The connection weights are updated with the following equation:

W_ijp,klq(t+1) = W_ijp,klq(t) + η V_ijp^(m) V_klq^(m) P(Δ_ijp(t)),  for C(k,l,q) ∈ N_r,s(i,j,p)    (7)

where η is the learning rate, greater than zero. Note that the computed W matrix is not symmetric.
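A compact sketch of the weight-learning loop described by equations (5)-(7) is given below. It is an illustrative Python rendering, not the authors' MATLAB implementation, and it assumes the neighborhoods have already been precomputed, for example with the neighborhood function sketched in Section 2.

```python
def train_weights(patterns, neighbors, delta=500.0, eta=1.0, max_iters=100):
    """
    patterns  : list of Q bipolar arrays of shape (M, N, L) with entries in {-1, +1}
    neighbors : dict mapping (i, j, p) -> list of neighboring (k, l, q) indices
    Returns W : dict mapping ((i, j, p), (k, l, q)) -> connection weight
    """
    # Start from zero weights, with the self-connection taken as 1 as stated in the text.
    W = {(site, nbr): (1.0 if site == nbr else 0.0)
         for site, nbrs in neighbors.items() for nbr in nbrs}
    for _ in range(max_iters):
        stable = True
        for V in patterns:                      # one sweep per stored bipolar pattern
            for site, nbrs in neighbors.items():
                i, j, p = site
                # Stability margin of constraint (5), i.e. equation (6)
                margin = sum(W[(site, nbr)] * V[i, j, p] * V[nbr] for nbr in nbrs) - delta
                if margin < 0:                  # P = 1: strengthen the local weights, equation (7)
                    stable = False
                    for nbr in nbrs:
                        W[(site, nbr)] += eta * V[i, j, p] * V[nbr]
        if stable:                              # every constraint (5) met for every pattern
            break
    return W
```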
3.2 Image Retrieval
Once the network is trained to store all the sub-blocks of the entire image while meeting the constraints specified in the algorithm, the stored images are retrieved by providing the down sampled version of the partitioned input image. During retrieval of the complete image, the trained network is fed the down sampled versions (test images) of the partitioned images one after the other. For each down sampled partition fed to the network, the recalled image is initially a coarse output image; this coarse output is fed back as the input to the network in the 2nd pass, and the output obtained in the 2nd pass is observed to be better than that obtained in the first pass. In the third pass, the output image obtained in the 2nd pass is fed as the input image and the third pass output is noted. This is repeated until the output image of some a-th pass is satisfactory, and the whole procedure is repeated for each partitioned image. After noting all the recalled images, they are combined to get the total image. The recalled combined image is compared against the stored image, and the performance of the network is measured using MSE, RMSE and PSNR. The results are presented in Section 4.
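The multi-pass recall described above can be sketched as follows. This illustrative Python version (again not the authors' MATLAB code) applies a synchronous update with the piecewise-linear saturation f(x) = 0.5(|x+1| − |x−1|), thresholds the result back to a bipolar pattern, and feeds it in again for the next pass.

```python
import numpy as np

def recall(V0, W, neighbors, passes=10):
    """Iteratively recall a stored bipolar pattern from an initial (coarse) guess V0."""
    V = np.array(V0, dtype=np.float64)
    f = lambda x: 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))   # saturation nonlinearity
    for _ in range(passes):
        U = np.zeros_like(V)
        for site, nbrs in neighbors.items():
            U[site] = sum(W[(site, nbr)] * V[nbr] for nbr in nbrs)
        V = np.where(f(U) >= 0.0, 1.0, -1.0)                  # output of this pass, input to the next
    return V
```

Combined with a bit-plane recombination such as the from_bipolar_planes sketch above, the recalled planes give back a gray-scale sub-block whose quality can be measured against the stored original.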
4. SIMULATION AND RESULTS
MATLAB code is written to simulate the multilayer Hopfield neural network using the algorithm presented in Sections 3.1 and 3.2. The three images shown in figure 1 are used to test the performance of the graded memory. The graded memory stores 9 sub-images of size 16x16 during training; during retrieval, down sampled versions of these images are given as test images and the results are noted. Table 1 shows the image retrieval when the image shown in fig. 1(a), of size 48x48, is partitioned into 9 sub-images and stored in the graded memory. Similarly, tables 3 and 4 show the image retrieval results obtained when the images shown in fig. 1(b) and (c) are partitioned and stored.
Table 2 shows the consolidated image quality in terms of PSNR in dB when the multilayer Hopfield neural network is connected in the fully connected, torus and non-torus fashion. Figures 2-4 show the image retrieval quality in terms of PSNR in dB for the images shown in fig. 1.
Table 1 Image retrieval when the network stores 9 sub-images in different configurations of the network (columns: training image, test image, and the outputs of the fully connected, torus and non-torus networks; rows: output for the test image after the 1st to 10th pass)
Table 2 PSNR (dB) values obtained when the images are retrieved from the graded memory; each row corresponds to one iteration (pass), with three columns for each of the Barbara, Lena and Baboon images, where
FC - Fully Connected network
T - Torus network
NT - Non Torus network
Fig 2 Graph showing the PSNR variation with respect to the type of network used, for the image shown in fig. 1(a)
Table 3 Image retrieval when the network stores 9 sub-images in different configurations of the network (columns: training image, test image, and the outputs of the fully connected, torus and non-torus networks; rows: output for the test image after the 1st to 8th pass)
Fig 3 Graph showing the PSNR variation with respect to the type of network used, for the image shown in fig. 1(b)
Table 4 Image retrieval when the network stores 9 sub-images in different configurations of the network (columns: training image, test image, and the outputs of the fully connected, torus and non-torus networks; rows: output for the test image after the 1st to 10th pass)
Fig 4 Graph showing the PSNR variation with respect to the type of network used, for the image shown in fig. 1(c)
5. CONCLUSION AND FUTURE WORK
In this work, the multilayer Hopfield neural network has been used in three configurations: i) fully connected, ii) torus connected and iii) non-torus connected. From the simulation results it is observed that the multilayer Hopfield neural network can be configured to work as a graded memory in all three configurations, especially in the fully connected fashion. It is clear from the results that the fully connected network generates an image that exactly matches the original within a few passes, whereas the torus connected network generates, within a few passes, an image that is good enough to arrive at a conclusion even though it cannot reproduce the original exactly. The non-torus connected network generates an image of poorer quality than that obtained in the other two cases. We conclude that the fully connected network is the best among the three configurations used; its only limitation is that it requires a large number of neurons and connection weights.
Future work can focus on improving the quality of the recalled image when the network is connected in the non-torus fashion. Also, the graded memory using the above algorithm is currently able to store and retrieve up to nine images successfully, and how to improve this number further needs to be investigated.
REFERENCES
[1] B. Sudarshan and R. Manjunatha, "Image Storage and Retrieval in Graded Memory," International Journal of Advances in Engineering & Technology (IJAET), vol. 8, no. 1, Feb. 2015, pp. 2123-2128.
[2] I. Aizenberg, J. Jackson and S. Alexander, "Classification of Blurred Textures using Multilayer Neural Network Based on Multi-Valued Neurons," Proc. International Joint Conference on Neural Networks, San Jose, California, USA, July 31 - August 5, 2011.
[3] G. Costantini, "Design of Associative Memory for Gray-Scale Images by Multilayer Hopfield Neural Networks," Proc. 10th WSEAS International Conference on Circuits, Vouliagmeni, Athens, Greece, July 10-12, 2006, pp. 376-379.
[4] G. Costantini, D. Casali and R. Perfetti, "Neural Associative Memory Storing Gray-Scale Images," IEEE Trans. Neural Networks, vol. 14, no. 3, May 2003, pp. 703-707.
[5] C. Oh and S. H. Zak, "Associative Memory Design Using Overlapping Decompositions and Generalized Brain-State-in-a-Box Neural Networks," International Journal of Neural Systems, vol. 13, no. 3, 2003, pp. 139-153.
[6] J. J. Hopfield, "Neural Networks and Physical Systems with Emergent Collective Computational Abilities," Proc. National Academy of Sciences, vol. 79, 1982, pp. 2554-2558.
BIOGRAPHIES
Mr. B Sudarshan is working as an Assistant Professor at KS Institute of Technology, Bangalore, India. He is currently pursuing a PhD from Jain University, Bangalore, India. He has more than 12 years of industrial experience in India and the United States of America. His areas of research include image processing and artificial intelligence.
Dr. Manjunath R is a well known scientist in India and abroad. He received his PhD from Bangalore University, India. He has 3 patents to his credit and has published more than 100 papers in international journals. His research areas include DSP, artificial intelligence and image processing.