IJRET: International Journal of Research in Engineering and Technology eISSN: 2319-1163 | pISSN: 2321-7308
Volume: 03 Special Issue: 03 | May-2014 | NCRIET-2014, Available @ http://www.ijret.org
A SURVEY: RESEARCH SUMMARY ON NEURAL NETWORKS
Shruti B. Hiregoudar1, Manjunath K.2, K. S. Patil3
1 Dept. of Computer Science, Basaveshware Engineering College, Bagalkot, Karnataka, India
2 Dept. of Computer Science, Basaveshware Engineering College, Bagalkot, Karnataka, India
3 Professor, Dept. of Computer Science, Basaveshware Engineering College, Bagalkot, Karnataka, India
Abstract
Neural networks are relatively crude electronic models based on the neural structure of the brain. The brain essentially learns from experience, which is natural proof that some problems beyond the scope of current computers can indeed be solved by small, energy-efficient packages. In this paper we present the fundamentals of neural network topologies, activation functions and learning algorithms, classified by whether information flows in one direction or in both directions. We outline the main features of a number of popular neural networks and provide an overview of their topologies and their learning capabilities.
Keywords: Neural Network, Feed Forward, Recurrent, Radial Basis Function Network (RBFN), Kohonen Self-Organizing Map (KSOM).
-----------------------------------------------------------------------***----------------------------------------------------------------------
1. INTRODUCTION
The human brain has remarkable capabilities for processing information and making instantaneous decisions. Many researchers have shown that the human brain performs computations in a radically different manner from that of binary computers. The brain is a massive network of parallel and distributed computing elements (neurons), and for the last few decades scientists have been working to build computational systems, called neural networks (also known as connectionist models), inspired by it. A neural network is composed of a set of parallel and distributed processing units, called nodes or neurons, which are interconnected by unidirectional or bidirectional links and arranged in layers.
Fig-1: Basic Structure of a Neural Network
The basic unit of a neural network is the neuron. It receives N inputs, represented by x(n), and each input is multiplied by a connection weight, represented by w(n). The weighted inputs are summed and fed through a transfer function (activation function) to generate the result (output).
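As a minimal illustration of this weighted-sum-and-activate computation (a sketch, not from the paper; the sigmoid activation and the example values are assumptions):

import numpy as np

def neuron_output(x, w, b=0.0):
    """Weighted sum of the inputs followed by an activation (sigmoid assumed)."""
    z = np.dot(w, x) + b                  # z = sum_n w(n) * x(n) + bias
    return 1.0 / (1.0 + np.exp(-z))       # transfer (activation) function

# Example: three inputs and three connection weights
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.4, 0.3, -0.1])
print(neuron_output(x, w))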
2. NEURAL NETWORK DESIGN
The design of a neural network mainly involves three elements:
 Network Topology
 Network Transfer Function
 Network Learning Algorithm
2.1 Network Topology
Neural network topologies are classified according to how the interconnections are arranged within and between the layers. The two well-known neural network topologies are:
 Feed Forward Topology
 Recurrent Topology
2.1.1 Feed Forward Topology
In a feed forward topology, the nodes are hierarchically arranged in layers, starting with the input layer and ending with the output layer. Between the input layer and the output layer, a number of hidden layers provide most of the network's computational power. The nodes in each layer connect to the next layer through unidirectional paths, starting from one layer (source) and ending at the subsequent layer (sink). The output of a given layer feeds the nodes of the following layer in the forward direction only; no feedback flow of information is allowed in the structure. Applications: the multilayer perceptron network and the radial basis function network.
Fig-2: Feed Forward Topology
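A minimal sketch of such a layer-by-layer forward pass, with each layer feeding the next and no feedback paths (the layer sizes, random weights and sigmoid activation are illustrative assumptions):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feed_forward(x, weights, biases):
    """Propagate an input vector layer by layer in the forward direction only."""
    a = x
    for W, b in zip(weights, biases):     # each (W, b) pair is one layer
        a = sigmoid(W @ a + b)            # the output of one layer feeds the next
    return a

# Example: 3 inputs -> 4 hidden nodes -> 2 outputs
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
biases = [np.zeros(4), np.zeros(2)]
print(feed_forward(np.array([0.2, 0.7, -1.0]), weights, biases))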
2.1.2 Recurrent Topology
A recurrent network topology (RNT) allows information to flow between the connected nodes in both directions, i.e. it supports both feed forward and feedback connections. A recurrent network structure therefore has some sort of memory, which permits the storage of information in its output nodes through dynamic states. The mapping between inputs and outputs is dynamic in nature. Applications: the Hopfield network and the Time Delay Neural Network (TDNN).
Fig-3: Recurrent Topology
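A minimal sketch of the feedback idea, where the state stored from the previous step is fed back into the next computation (a generic recurrent update; the weight shapes, tanh activation and toy sequence are assumptions, not the paper's model):

import numpy as np

def recurrent_step(x_t, h_prev, W_in, W_rec, b):
    """One time step: the new state depends on the current input and the fed-back previous state."""
    return np.tanh(W_in @ x_t + W_rec @ h_prev + b)

# Example: 2 inputs, 3 state units acting as the network's memory
rng = np.random.default_rng(1)
W_in, W_rec, b = rng.normal(size=(3, 2)), rng.normal(size=(3, 3)), np.zeros(3)
h = np.zeros(3)
for x_t in [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]:
    h = recurrent_step(x_t, h, W_in, W_rec, b)   # the state carries information forward in time
print(h)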
2.2 Network Transfer Function
The basic unit of a neural network is the neuron. Neurons are simple processors that take the weighted sum of their inputs from other nodes and apply to it a non-linear mapping function, called an activation function, before delivering the output to the next neuron.
Fig-4: Transfer Function
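Two commonly used non-linear activation functions, shown as a short sketch (the specific choices of sigmoid and tanh are common examples, not prescribed by the paper):

import numpy as np

def sigmoid(z):
    """Maps the weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def tanh(z):
    """Maps the weighted sum into the range (-1, 1)."""
    return np.tanh(z)

z = np.linspace(-4.0, 4.0, 5)
print(sigmoid(z))
print(tanh(z))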
2.3 Neural Network Learning Algorithm
Learning algorithms are used to update the weight parameters of a neuron's input connections during the training process of the network. Learning algorithms are classified into three types:
 Supervised
 Unsupervised and
 Reinforcement
2.3.1 Supervised Learning
In the supervised learning mechanism, an external source provides the network with a set of input stimuli for which the outputs are known a priori, and during the training process the output results are continuously compared with the desired data. After a number of iterations, the gradient descent rule uses the error between the actual output and the target data to adjust the connection weights so as to obtain the closest match between the target output and the actual output. Application: feed forward networks.
Fig-5: Supervised Learning
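A minimal sketch of this error-driven, gradient-descent weight adjustment for a single neuron (a delta-rule style update; the learning rate, sigmoid activation and toy data are illustrative assumptions):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def supervised_update(w, x, target, lr=0.5):
    """Adjust the weights in proportion to the error between target and actual output."""
    output = sigmoid(np.dot(w, x))
    error = target - output               # compare the actual output with the desired data
    delta = error * output * (1 - output) # negative gradient of the squared error
    return w + lr * delta * x             # gradient descent step on the connection weights

w = np.zeros(3)
x, target = np.array([1.0, 0.5, -0.5]), 1.0
for _ in range(2000):                     # repeated iterations shrink the error
    w = supervised_update(w, x, target)
print(sigmoid(np.dot(w, x)))              # output approaches the target 1.0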
2.3.2 Unsupervised Learning Algorithm:
Unsupervised learning is also called self-organizing learning because there is no external source providing target outputs; the network relies instead upon local information and internal control. The training data and input patterns are presented to the system, and the system organizes the data into clusters or categories. A set of training data is presented to the system at the input layer level, and the network connection weights are then adjusted through some sort of competition among the nodes of the output layer, where the successful candidate is the node with the highest value.
Fig-6: Unsupervised learning
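A minimal sketch of this competition among output nodes, in which only the winning node (the one with the highest value) has its weights adjusted (the cluster data, initial weights and learning rate are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
cluster_a = rng.normal([0.9, 0.1], 0.05, size=(50, 2))
cluster_b = rng.normal([0.1, 0.9], 0.05, size=(50, 2))
data = rng.permutation(np.vstack([cluster_a, cluster_b]))

W = np.array([[0.6, 0.4],
              [0.4, 0.6]])                       # one weight vector per output node
lr = 0.1
for x in data:
    values = W @ x                               # competition among the output-layer nodes
    winner = int(np.argmax(values))              # successful candidate: node with the highest value
    W[winner] += lr * (x - W[winner])            # only the winner's connection weights are adjusted

print(W)                                         # each row settles near one cluster centre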
2.3.3 Reinforcement Learning:
The reinforcement learning algorithm, also called graded learning, mimics in a way the adaptive behaviour of humans interacting with a given physical environment. The network connections are modified according to feedback information provided to the network by its environment. This information simply tells the system whether or not a correct response has been obtained. In the case of a correct response, the corresponding connections leading to that output are strengthened; otherwise they are weakened. Reinforcement learning does not receive information on what the output should be when the network is presented with a given input pattern.
Fig-7: Reinforcement learning
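A minimal sketch of this strengthen-or-weaken behaviour driven only by a correct or incorrect signal (a toy scheme for illustration, not a specific algorithm from the paper; the reward convention and step size are assumptions):

import numpy as np

def reinforcement_update(W, x, chosen, reward, lr=0.05):
    """Strengthen the connections that produced the response if it was correct (reward = +1),
    weaken them otherwise (reward = -1)."""
    W[chosen] += lr * reward * x
    return W

rng = np.random.default_rng(2)
W = rng.normal(scale=0.1, size=(2, 3))           # 2 possible responses, 3 inputs
x = np.array([1.0, 0.0, 1.0])
for _ in range(20):
    chosen = int(np.argmax(W @ x))               # the network's response to the input
    reward = 1 if chosen == 0 else -1            # environment only signals correct / incorrect
    W = reinforcement_update(W, x, chosen, reward)
print(int(np.argmax(W @ x)))                     # settles on the rewarded response (0)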
3. MAJOR CLASSES OF NEURAL NETWORKS
There are four major classes of neural networks:
 Multilayer Perceptron
 Radial Basis Function Network
 The Kohonen Self-Organizing Map
 Hopfield Network
3.1 The Multilayer Perceptron
Topology: The multilayer perceptron belongs to the class of feed-forward networks, meaning that information flows among the network nodes exclusively in the forward direction. The number of hidden layers required within a multilayer perceptron depends in major part on the type of problem being addressed. For instance, a system with a single hidden layer is able to solve the XOR function or related problems in which the separating boundaries are relatively simple.
Activation function: A multilayer perceptron with a single hidden layer is composed of an appropriate number of nodes with the sigmoid activation function, $f(z) = 1/(1 + e^{-z})$, used for all the neurons of the network. Training seeks to minimize the cumulative error over the n training patterns, defined as
$E_c = \frac{1}{2}\sum_{k=1}^{n}\sum_{i=1}^{q}\left(t_i(k) - o_i(k)\right)^2$.
Learning algorithm: The algorithm is based on the gradient descent technique for solving an optimization problem that involves the minimization of the network cumulative error $E_c$, which represents the sum of the n squared pattern errors
$E(k) = \frac{1}{2}\sum_{i=1}^{q}\left(t_i(k) - o_i(k)\right)^2$,
where the index i denotes the i-th neuron of an output layer composed of a total of q neurons. The algorithm updates the weights in the direction of the gradient descent of the cumulative error. Applications: signal processing, weather forecasting, financial market prediction, pattern recognition, signal compression.
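A minimal sketch of gradient-descent training of a single-hidden-layer perceptron on the XOR problem mentioned above (the hidden-layer size, learning rate and iteration count are illustrative assumptions):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR training patterns: inputs x(k) and targets t(k)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)    # input layer -> hidden layer (8 sigmoid nodes)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)    # hidden layer -> output layer

lr = 0.5
for _ in range(10000):
    H = sigmoid(X @ W1 + b1)                     # forward pass
    O = sigmoid(H @ W2 + b2)
    dO = (O - T) * O * (1 - O)                   # gradient of Ec = 1/2 * sum (t - o)^2
    dH = (dO @ W2.T) * H * (1 - H)               # error propagated back to the hidden layer
    W2 -= lr * H.T @ dO; b2 -= lr * dO.sum(axis=0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)

print(np.round(O, 2))                            # outputs approach the XOR targets 0, 1, 1, 0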
3.2 Radial Basis Function Networks:
Topology: Radial basis function networks represent a special category of the feed forward neural network architecture. The basic RBFN structure consists of an input layer, a hidden layer with radial basis activation functions, and an output layer. The network uses non-linear transformations at its hidden layer but linear transformations between the hidden and the output layers. The general form of an RBF unit is
$g_i(x) = r_i\left(\frac{\lVert x - v_i \rVert}{\sigma_i}\right)$,
where x is the input vector, $v_i$ is the vector denoting the center of the receptive field of unit $g_i$, and $\sigma_i$ is its unit width parameter.
Activation function: The logistic form of the RBF is
$g_i(x) = \frac{1}{1 + \exp\left(\lVert x - v_i \rVert^2 / \sigma_i^2\right)}$.
Learning Algorithm: The RBF network uses a two-stage learning strategy:
Step 1: train the RBF layer to adapt the centres and scaling parameters using unsupervised training.
Step 2: adapt the weights of the output layer using a supervised training algorithm.
Applications: control systems, audio and video signal processing, pattern recognition, and weather and power load forecasting.
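A minimal sketch of the two-stage RBFN training described above, using a k-means-style pass for the unsupervised stage and least squares for the supervised output weights (the Gaussian basis, number of centres and toy data are illustrative assumptions):

import numpy as np

def rbf_features(X, centers, sigma):
    """Hidden-layer outputs g_i(x) for Gaussian radial basis units."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0])                              # toy target function

# Stage 1 (unsupervised): adapt the centres with a few k-means iterations
k, sigma = 10, 0.7
centers = X[rng.choice(len(X), k, replace=False)]
for _ in range(20):
    labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2), axis=1)
    for j in range(k):
        if np.any(labels == j):
            centers[j] = X[labels == j].mean(axis=0)

# Stage 2 (supervised): fit the linear output weights by least squares
G = rbf_features(X, centers, sigma)
w, *_ = np.linalg.lstsq(G, y, rcond=None)
print(np.abs(G @ w - y).mean())                  # small mean error on the training data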
3.3 Kohonen’s Self Organizing Network:
Topology: The Kohonen self-organizing network (KSON), also called the self-organizing map (SOM), belongs to the class of unsupervised learning networks. In a KSON, the nodes distribute themselves across the input space to recognize groups of similar input vectors, while the output nodes compete among themselves to be fired one at a time in response to a particular input vector; this process is called competitive learning. Because the nodes of the KSON recognize groups of similar input vectors, they generate a topographic mapping from the input space to the output space that depends primarily on the pattern of the input vectors and results in dimensionality reduction of the input space.
Learning algorithm: The learning here permits the clustering
of input data into a smaller set of elements having similar
characteristics. It is based on the competitive learning
technique, also known as the winner-take-all strategy.
Application: speech recognition, vector coding, texture
segmentation, designing nonlinear controllers.
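A minimal sketch of winner-take-all training with a neighbourhood update, as used in self-organizing maps (the grid size, learning rate, neighbourhood radius and decay schedule are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
data = rng.uniform(0, 1, size=(500, 2))          # input vectors
grid = rng.uniform(0, 1, size=(5, 5, 2))         # 5x5 map of weight vectors

lr, radius = 0.5, 2.0
for x in data:
    # competition: the winner is the node closest to the input vector
    d = ((grid - x) ** 2).sum(axis=2)
    wi, wj = np.unravel_index(np.argmin(d), d.shape)
    # cooperation: the winner and its neighbours move toward the input
    for i in range(5):
        for j in range(5):
            dist2 = (i - wi) ** 2 + (j - wj) ** 2
            h = np.exp(-dist2 / (2 * radius ** 2))
            grid[i, j] += lr * h * (x - grid[i, j])
    lr *= 0.995; radius = max(0.5, radius * 0.995)   # shrink the updates over time

print(grid.reshape(-1, 2)[:3])                   # a few of the learned cluster centres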
3.4 Hopfield Network
Topology: The Hopfield network has a recurrent topology, and its operation is based on the associative memory concept: the network is able to recognize a newly presented pattern using an already stored complete version of that pattern. Hopfield's definition is that any physical system whose dynamics in phase space is dominated by a substantial number of locally stable states to which it is attracted can be regarded as a general content-addressable memory.
The activation function of each neuron in the Hopfield network is defined by the equation
$o_i = \operatorname{sign}\left(\sum_{j=1,\, j \neq i}^{n} w_{ij}\, o_j - \theta_i\right)$,
where $\theta_i$ is the threshold value and $o_i$ is the output of the current processing unit.
Learning algorithm: The learning algorithm of the Hopfield network is based upon the Hebbian learning rule, applied here in a supervised manner. The Hebbian rule is applied to a set of q presented patterns $P_k$ (k = 1, 2, ..., q), each of dimension n, where n is the number of neuron units in the Hopfield network. Applications: it has been used to solve optimization problems, in particular combinatorial optimization problems.
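A minimal sketch of Hebbian pattern storage and the sign-based recall update given above (the stored patterns and number of update sweeps are illustrative assumptions; thresholds are taken as zero):

import numpy as np

def store(patterns):
    """Hebbian rule: w_ij accumulates correlations over the q stored patterns."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)                       # no self-connections (j != i)
    return W / len(patterns)

def recall(W, state, sweeps=5):
    """Asynchronous updates o_i = sign(sum_j w_ij o_j) until the pattern settles."""
    s = state.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1]])
W = store(patterns)
noisy = np.array([1, -1, 1, -1, -1, -1])         # corrupted version of the first pattern
print(recall(W, noisy))                          # recovers the stored pattern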
4. APPLICATIONS OF NEURAL NETWORKS
 Neural networks for process monitoring and optimal control
 Neural networks in semiconductor manufacturing processes
 Neural networks in power systems
 Neural networks in robotics
 Neural networks in communications
 Neural networks in pattern recognition
5. CONCLUSIONS
In this paper we have described the main features of neural networks in terms of topology, learning algorithm and activation function. These include the feed forward and recurrent topologies, as well as networks with supervised and unsupervised learning algorithms. We have provided detailed descriptions of a number of the most commonly used neural network models, namely the Multilayer Perceptron, the Radial Basis Function network, Kohonen's Self-Organizing network and the Hopfield network, along with highlights of their fields of application.
REFERENCES
[1]. Hopgood, A. (1993) Knowledge-Based Systems for Engineers and Scientists, CRC Press, Boca Raton, Florida, pp. 159-85.
[2]. Jang, J.-S. R., Sun, C.-T., and Mizutani, E. (1997) Neuro-Fuzzy and Soft Computing, Prentice Hall, Upper Saddle River, NJ.
[3]. Haykin, S. (1994) Neural Networks: A Comprehensive Foundation, Macmillan Publishing, Englewood Cliffs, NJ.
[4]. Werbos, P. (1974) "Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences," PhD dissertation, Harvard University.
[5]. Rumelhart, D., Hinton, G., and Williams, R. (1986) "Learning representations by back-propagating errors," Nature, vol. 323, pp. 533-6.
[6]. Haykin, S. (1994) Neural Networks: A Comprehensive Foundation, Macmillan Publishing, Englewood Cliffs, NJ.
[7]. Shah, S. and Palmieri, F. (1990) "MEKA: a fast, local algorithm for training feedforward neural networks," in Proceedings of the International Joint Conference on Neural Networks.
[8]. Katayama, R., Kuwata, K., Kajitani, Y., and Watanabe, M. (1995) "Embedding functions networks," Fuzzy Sets and Systems, 72(3): 311-27.
[9]. Hopfield, J. (1984) "Neurons with graded response have collective computational properties like those of two-state neurons," Proceedings of the National Academy of Sciences, pp. 3088-92.
BIOGRAPHIES
1 Shruti B. Hiregoudar, M.Tech (CSE), Basaveshware Engineering College, Bagalkot. shrutishining411@gmail.com
2 Manjunath K., M.Tech (CSE), Basaveshware Engineering College, Bagalkot. manjuise.026@gmail.com
3 Prof. K. S. Patil, M.Tech (CSE), Basaveshware Engineering College, Bagalkot. kamalashashibec@rediffmail.com