Artificial Neural Network and its Applications
Shritosh Kumar [1], Tarun Kumawat [2], Naresh Kumar Marwal [3], Brijesh Kumar Singh [4]
[1] Shritosh Kumar [CSE] Arya College of Engg.& IT, Kukas, Rajasthan, India
[2] Tarun Kumawat [CSE] JECRC UDML College of Engineering, Kukas, Rajasthan, India
[3] Naresh Kumar Marwal [CSE] JECRC UDML College of Engineering, Kukas, Rajasthan, India
[4] Brijesh Kumar Singh [CSE] JECRC UDML College of Engineering, Kukas, Rajasthan, India
Abstract
This report is an introduction to Artificial Neural
Networks. The various types of neural networks are
explained and demonstrated, applications of neural
networks, such as those in medicine, are described, and a
detailed historical background is provided. The
connection between the artificial and the real thing is
also investigated and explained. Finally, the
mathematical models involved are presented and
demonstrated.
Introduction
This paper presents a neural network based artificial
vision system and its applications. The concept of neural
networks is modeled after biological sensory
mechanisms where the neuron signals are transmitted
to the brain and processed. This concept moves away
from traditional statistical models where data are
analyzed based upon holding everything else constant
(ceteris paribus). The weakness in statistical models
lies in their inability to model the changing
relationships between variables (non-linear problem)
and thus presents challenges in making a predictive
analysis where the underlying relationships are not
constant. A neural network overcomes this problem
by being adaptive to real sets of data. Much like
living organisms, a neural network gets training and
learns the tricks of the trade by observation and
readjusts its learning against new sets of data
iteratively.
Other reasons why we make use of neural networks
include:
• Adaptive learning: An ability to learn how
to do tasks based on the data given for
training or initial experience.
• Self-Organization: An ANN can create its
own organization or representation of the
information it receives during learning time.
• Real Time Operation: ANN computations
may be carried out in parallel, and special
hardware devices are being designed and
manufactured which take advantage of this
capability.
• Fault Tolerance via Redundant Information
Coding: Partial destruction of a network
leads to the corresponding degradation of
performance. However, some network
capabilities may be retained even with major
network damage.
The Definition of Neural Network
Artificial neurons are similar to their biological
counterparts. They have input connections which are
summed together to determine the strength of their
output, which is the result of the sum being fed into
an activation function. Though many activation
International Journal of Computer Science and Management Research Vol 2 Issue 2 February 2013
ISSN 2278-733X
Shritosh Kumar et.al. www.ijcsmr.org1621
functions exist, the most common is the sigmoid
activation function, which outputs a number between
0 (for low input values) and 1 (for high input values).
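To make this concrete, here is a minimal sketch of the logistic (sigmoid) function and its behaviour at low, middle, and high inputs:

```python
import math

def sigmoid(x):
    # Logistic activation: squashes any real input into the interval (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

# Low inputs map near 0, high inputs map near 1.
print(round(sigmoid(-6), 3))  # 0.002
print(round(sigmoid(0), 3))   # 0.5
print(round(sigmoid(6), 3))   # 0.998
```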
An Artificial Neural Network (ANN), usually called
simply a neural network, has been found to hold great
potential as the front end of expert systems and has
had remarkable success in providing real-time
responses to complex pattern recognition problems.
Artificial neural network models try to imitate the
operation of biological neural networks, especially
the human brain, which can perform tasks such as
pattern recognition and classification with amazing
speed and accuracy, despite the fact that its inputs
consist most of the time of noisy and erroneous data.
Using computer systems to replicate the learning and
recall methods of the human brain has been a goal of
researchers in a variety of disciplines for half a
century. Neural network computing is the closest
approximation of brain function to evolve to a stage
where practical application is attainable.
The technology of artificial neural networks has
developed from a biological model of the brain. A
neural network consists of a set of connected cells: the
neurons. A neuron is a real-valued function of the input
vector (y1, ..., yk); its output is obtained as
f(x_j) = f(a_j + Σ_{i=1}^{k} w_{ij} y_i), where f is a
function, typically the sigmoid function. A graphical
presentation of a neuron is given in the figure below.
Figure: A single neuron
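The weighted-sum-plus-activation computation just described can be sketched as follows; the weight, input, and bias values are illustrative assumptions, not taken from the paper:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(weights, inputs, bias):
    # Weighted sum a_j + sum_i w_ij * y_i, fed through the activation f.
    total = bias + sum(w * y for w, y in zip(weights, inputs))
    return sigmoid(total)

y = [0.5, -1.0, 0.25]    # input vector (y1, ..., yk)
w = [0.8, 0.2, -0.4]     # illustrative synaptic weights (assumed values)
print(neuron_output(w, y, bias=0.1))  # a value in (0, 1)
```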
The brain and nervous system provide the structural
model for a neural network. A neural net consists of a
number of processing elements known as neurons,
each of which can have multiple inputs and only one
output. Single outputs branch out and become inputs
to many other neurons, accounting for the many
incoming signals each neuron receives. In a typical
neural network, neurons receive most of their inputs
from other neurons; the rest come from the outside
world: data describing events.
Advantages:
• A neural network can perform tasks that a
linear program can not.
When an element of the neural network
fails, the network can continue without any
problem because of its parallel nature.
• A neural network learns and does not need
to be reprogrammed.
• It can be applied to a wide range of applications.
• It can be implemented without major difficulty.
Disadvantages:
• The neural network needs training to
operate.
The architecture of a neural network is
different from the architecture of
microprocessors and therefore needs to be
emulated.
• Requires high processing time for large
neural networks.
Architecture of neural networks
Feed-forward (associative) networks
Feed-forward nets are the most widely known and
widely used class of neural network. The popularity
of feed-forward networks derives from the fact that
they have been applied successfully to a wide range
of information processing tasks in such diverse fields
as speech recognition, financial prediction, image
compression, medical diagnosis and protein structure
prediction; new applications are being discovered all
the time.
Feedback (auto-associative) networks
Recurrent neural networks contain feedback
connections. Contrary to feed-forward networks, the
dynamical properties of the network are important. In
some cases, the activation values of the units undergo
a relaxation process such that the neural network
evolves to a stable state in which these activations no
longer change. In other applications, the change of
the activation values of the output neurons is
significant, such that the dynamical behaviour
constitutes the output of the neural network
(Pearlmutter, 1990).
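Such a relaxation process can be sketched with a tiny symmetric weight matrix (values assumed for illustration) whose ±1 units are updated asynchronously until the activations stop changing, i.e., the network has settled into a stable state:

```python
import numpy as np

def relax(W, state, max_sweeps=20):
    # Update one unit at a time until a full sweep produces no change:
    # the activations have reached a stable state.
    s = state.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(s)):
            new_i = 1 if W[i] @ s >= 0 else -1
            if new_i != s[i]:
                s[i] = new_i
                changed = True
        if not changed:
            break
    return s

# Tiny symmetric weight matrix with zero diagonal (assumed values).
W = np.array([[ 0,  1, -1],
              [ 1,  0, -1],
              [-1, -1,  0]])
print(relax(W, np.array([1, -1, 1])))
```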
Network layers
The most common type of artificial neural network
consists of three groups, or layers, of units: a layer of
"input" units is connected to a layer of "hidden"
units, which is connected to a layer of "output"
units (see Figure 4.1).
• The activity of the input units represents the
raw information that is fed into the network.
• The activity of each hidden unit is
determined by the activities of the input
units and the weights on the connections
between the input and the hidden units.
• The behaviour of the output units depends
on the activity of the hidden units and the
weights between the hidden and output
units.
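The three-layer computation described in the bullets above can be sketched as a forward pass; all weights and biases below are assumed values for illustration:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each unit's activity = f(bias + weighted sum of the previous layer).
    return [sigmoid(b + sum(w * x for w, x in zip(ws, inputs)))
            for ws, b in zip(weights, biases)]

x = [0.0, 1.0]                           # input-unit activities (raw data)
W_hidden = [[0.5, -0.3], [0.8, 0.2]]     # assumed input->hidden weights
W_output = [[1.0, -1.0]]                 # assumed hidden->output weights

hidden = layer(x, W_hidden, biases=[0.1, -0.1])
output = layer(hidden, W_output, biases=[0.0])
print(output)   # single output-unit activity in (0, 1)
```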
Properties of Neural Networks
The following important properties of neural
networks and neurons should be distinguished:
1) Nonlinearity: A neuron is basically a
nonlinear device. Consequently, a
neural network, made up of an
interconnection of neurons, is itself
nonlinear. Moreover, the nonlinearity is
of a special kind in the sense that it is
distributed throughout the network.
Nonlinearity is a highly important
property, particularly if the underlying
physical mechanism responsible for the
generation of the input signal (e.g., a
speech or video signal) is inherently
nonlinear.
2) Input-Output Mapping: A popular
paradigm of learning called supervised
learning involves the modification of
the synaptic weights of a neural
network by applying a set of labeled
training samples. Each example
consists of a unique input signal and the
corresponding desired response. The
network is presented with the examples
one at a time, and the synaptic weights
are modified so as to minimize the
difference between the desired response
and the actual response of the network
produced by the input signal. The
training of the network is repeated for
the examples in the set until the
network reaches a steady state, where
there are no further significant changes
in the synaptic weights. Thus the
network learns from the examples by
constructing an input-output mapping
for the problem at hand.
3) Adaptivity. Neural networks have a
built-in capability to adapt their
synaptic weights to changes in the
surrounding environment. In particular,
a neural network trained to operate in a
specific environment can be easily
retrained to deal with minor changes in
the operating environmental conditions.
4) Evidential Response. In the context of
pattern classification, a neural network
can be designed to provide information
not only about which particular pattern
to select, but also about the confidence
in the decision made. This latter
information may be used to reject
ambiguous patterns, should they arise
and thereby improve the classification
performance of the network.
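The supervised-learning procedure of property 2 can be sketched for a single sigmoid neuron trained on labeled samples, here the logical OR patterns; the learning rate, epoch count, and gradient-style (delta-rule) update are assumptions for illustration, not the paper's own algorithm:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(samples, lr=0.5, epochs=2000):
    # samples: (input vector, desired response) pairs.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, d in samples:
            y = sigmoid(b + sum(wi * xi for wi, xi in zip(w, x)))
            err = d - y                   # desired minus actual response
            grad = err * y * (1 - y)      # delta rule for a sigmoid unit
            # Adjust each synaptic weight to shrink the error.
            w = [wi + lr * grad * xi for wi, xi in zip(w, x)]
            b += lr * grad
    return w, b

# Labeled training set for logical OR (illustrative).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train(data)
for x, d in data:
    y = sigmoid(b + sum(wi * xi for wi, xi in zip(w, x)))
    print(x, d, round(y, 2))
```

After training, the network's responses approach the desired responses and the weights stop changing significantly, i.e., the steady state described above.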
Perceptrons
Perceptrons are the simplest architecture to learn
when studying neural networking. Picture a
perceptron as a node of a wide, interconnected neural
network, somewhat like a data tree, although the
neural network does not necessarily have top and
bottom sections. The connections among the nodes
not only show the relationships between the nodes
but also transmit data and information, called a signal
or impulse. The perceptron is a simple model of a
neuron.
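A minimal perceptron sketch, using a step activation and the classic perceptron learning rule on the logical AND patterns (the learning rate and epoch count are assumed values):

```python
def predict(w, b, x):
    # Step activation: fire (1) if the weighted sum crosses the threshold.
    return 1 if b + sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

def perceptron_train(samples, lr=0.1, epochs=20):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            err = target - predict(w, b, x)   # -1, 0, or +1
            # Perceptron rule: nudge weights only on misclassification.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Logical AND as labeled patterns (illustrative).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = perceptron_train(data)
print([predict(w, b, x) for x, _ in data])  # → [0, 0, 0, 1]
```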
A Brief History
We want to mention here the most important events
that present a historical point by point evolution of
the field of neural networks:
• 1943 – W. McCulloch and W. Pitts – the first
nonlinear mathematical model of the neuron (a
formal neuron) [6].
• 1948 – D. Hebb – the first learning rule. One can
memorize an object by adapting weights [7].
• 1958 – R. Rosenblatt – concept of the perceptron as
a machine, which can learn and classify patterns [8].
• 1963 – A.B.J. Novikoff – significant development
of the learning theory, a proof of the theorem about
convergence of the learning algorithm applied to
solution of the pattern recognition problem using the
perceptron [9].
• 1960s – the extensive development of threshold
logic, initiated by previous results in perceptron
theory, including deep study of the features of
threshold Boolean functions, one of the most
important objects considered in the theory of
perceptrons and neural networks. The most complete
summaries are given by M. Dertouzos [10] and S.
Muroga [11].
• 1969 – M. Minsky & S. Papert – potential limit of
the perceptron as a computing system is shown [12].
• 1977 – T. Kohonen – consideration of the
associative memory as a content-addressable
memory, which is able to learn [13].
• 1982 – J. Hopfield shows by means of energy
functions that neural networks are capable to solve a
large number of problems. Revival of extensive
research in the field [14].
• 1982 – T. Kohonen describes the self-organizing
maps [15].
• 1986 – D.E. Rumelhart & J.L. McClelland –
introduction of the feedforward neural network and
learning with backpropagation. Consideration of a
neural network as a universal approximator.
• Present – more and more scientists and research
centers are devoted to research in the field of neural
networks and their applications in pattern
recognition, classification, prediction, image
processing and others.
Applications of Neural Networks

Sales and Marketing: Targeted Marketing, Sales Forecasting, Service Usage Forecasting, Retail Margins Forecasting.
Operational Analysis: Retail Inventories Optimization, Scheduling Optimization, Managerial Decision Making, Cash Flow Forecasting.
Medical: Detection and Evaluation of Medical Phenomena, Medical Diagnosis, Patient's Stay Forecasts, Treatment Cost Estimation.
Financial: Stock Market Prediction, Bankruptcy Prediction, Credit Worthiness and Credit Rating, Property Appraisal, Fraud Detection, Price Forecasts.
Data Mining: Prediction, Classification, Change and Deviation Detection, Knowledge Discovery, Response Modeling, Time Series Analysis.
Energy: Electrical Load Forecasting, Energy Demand Forecasting, Short- and Long-Term Load Estimation, Predicting Gas/Coal Prices, Power Control Systems, Hydro Dam Monitoring.
Educational: Teaching Neural Networks, Neural Network Research, College Application Screening, Predicting Student Performance.
Science: Pattern Recognition, Formulation Optimization, Chemical Identification, Physical System Modeling, Ecosystem Evaluation, Polymer Identification, Recognizing Genes, Botanical Classification, Biological Systems Analysis, Ground Level Ozone Prognosis.
HR Management: Employee Retention, Employee Hiring, Staff Scheduling, Personnel Profiling.
Industrial: Process Control, Quality Control, Temperature Prediction, Force Prediction.
Other: Sports Betting, Games Development, Quantitative Weather Forecasting, Optimization Problems, Routing, Agricultural Product Estimates.
The applications featured here are:
• CoEvolution of Neural Networks for Control
of Pursuit & Evasion
• Learning the Distribution of Object
Trajectories for Event Recognition
• Radiosity for Virtual Reality Systems
• Autonomous Walker & Swimming Eel
• Robocup: Robot World Cup
• Using HMMs for Audio-to-Visual
Conversion
• Artificial Life: Galapagos
• Speechreading (Lipreading)
• Detection and Tracking of Moving Targets
• Real-time Target Identification for Security
Applications
• Facial Animation
• Behavioral Animation and Evolution of
Behavior
• A Three Layer Feedforward Neural Network
• Artificial Life for Graphics, Animation,
Multimedia, and Virtual Reality: Siggraph '95
Showcase
• Creatures: The World's Most Advanced
Artificial Life!
• Framsticks Artificial Life
Application of Neural Network Models in IR
In this section, we will study the
applications of some neural network models
(i.e., SOFM and the Hopfield network) and
related algorithms (i.e., semantic networks)
in information retrieval (IR) systems.
1) The Application of SOFM: Most of the
applications of SOFM in IR systems are
based on the fact that SOFM is a topographic
map and can perform mappings from a
multidimensional space to a two- or three-
dimensional space. Kohonen has shown that
his self-organizing feature map "is able to
represent rather complicated hierarchical
relations of high-dimensional space in a
two-dimensional display."
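A minimal self-organizing feature map sketch illustrating such a mapping from a multidimensional space onto a small two-dimensional grid; the grid size, learning rate, neighbourhood radius, and random data are all assumed values for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(4, 4), epochs=50, lr=0.5, radius=1.5):
    # One prototype vector per cell of a small 2-D grid.
    rows, cols = grid
    weights = rng.random((rows, cols, data.shape[1]))
    coords = np.array([[r, c] for r in range(rows)
                       for c in range(cols)]).reshape(rows, cols, 2)
    for _ in range(epochs):
        for x in data:
            # Best-matching unit: grid cell whose prototype is closest to x.
            d = np.linalg.norm(weights - x, axis=2)
            bmu = np.unravel_index(d.argmin(), d.shape)
            # Pull the BMU and its grid neighbours toward the input, so
            # nearby cells come to represent similar regions of the space.
            grid_dist = np.linalg.norm(coords - np.array(bmu), axis=2)
            h = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
            weights += lr * h[..., None] * (x - weights)
    return weights

data = rng.random((30, 5))          # 30 points in a 5-dimensional space
som = train_som(data)
d = np.linalg.norm(som - data[0], axis=2)
print(np.unravel_index(d.argmin(), d.shape))  # 2-D grid cell for data[0]
```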
2) The Application of the Hopfield Net: The
Hopfield net was introduced as a neural net
that can be used as a content-addressable
memory. Knowledge and information stored
in single-layered, interconnected neurons
(nodes) and weighted synapses (links) can
be retrieved based on the network's parallel
relaxation method. It has been used for
various classification tasks and global
optimization.
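The content-addressable behaviour described here can be sketched as Hebbian storage of a bipolar pattern followed by relaxation-based recall from a corrupted cue; the pattern and network size are illustrative assumptions:

```python
import numpy as np

def store(patterns):
    # Hebbian storage: weights are the sum of outer products, zero diagonal.
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0)
    return W

def recall(W, cue, sweeps=10):
    # Relax unit by unit; the state settles onto the stored pattern.
    s = cue.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

stored = np.array([[1, -1, 1, -1, 1, -1]])
W = store(stored)
noisy = np.array([1, -1, -1, -1, 1, -1])   # cue with one flipped bit
print(recall(W, noisy))                    # → the stored pattern
```

Addressing the memory by (corrupted) content rather than by location is exactly the content-addressable property the text refers to.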
3) The Applications of MLP Networks
and Semantic Networks: It is hard to
distinguish the applications of MLP and the
applications of semantic networks with
spreading activation methods in IR. In most
cases, the applications of semantic networks
in IR are making use of spreading activation
models while having a feed-forward
network structure similar to that of MLP
networks.
Wong and his colleagues (1993) have
developed a method for computing term
associations using a three-layer feed-forward
network with linear threshold functions.
Each document is represented as a node in
the input layer. The nodes in the hidden layer
represent query terms and the output layer
consists of just one node, which pools the
input from all the query terms. Term
associations are modeled by weighted links
connecting different neurons, and are
derived by the perceptron learning algorithm
without the need for introducing any ad hoc
parameters. The preliminary results indicate
the usefulness of neural networks in the
design of adaptive information retrieval
systems.
Future Work on Neural Networks:
Improvement of existing technologies:
All current Neural Network technologies will most
likely be vastly improved upon in the future.
Everything from handwriting and speech recognition
to stock market prediction will become more
sophisticated as researchers develop better training
methods and network architectures.
Neural Networks might, in the future, allow:
• Robots that can see, feel, and predict the
world around them.
• Improved stock prediction
• Common usage of self-driving cars.
• Composition of music.
• Handwritten documents to be automatically
transformed into formatted word processing
documents.
• Trends found in the human genome to aid in
the understanding of the data compiled by
the Human Genome Project.
• Self-diagnosis of medical problems using
neural networks.
Conclusion:
The computing world has a lot to gain from neural
networks. Their ability to learn by example makes
them very flexible and powerful. Furthermore, there is
no need to devise an algorithm in order to perform a
specific task; i.e., there is no need to understand the
internal mechanisms of that task. They are also very
well suited for real time systems because of their fast
response and computational times which are due to
their parallel architecture.
Neural networks also contribute to other areas of
research such as neurology and psychology. They are
regularly used to model parts of living organisms and
to investigate the internal mechanisms of the brain.
Perhaps the most exciting aspect of neural networks
is the possibility that some day 'conscious' networks
might be produced. A number of scientists argue
that consciousness is a 'mechanical' property and
that 'conscious' neural networks are a realistic
possibility.
Finally, I would like to state that even though neural
networks have a huge potential we will only get the
best of them when they are integrated with
computing, AI, fuzzy logic and related subjects.
References :
1. Aleksander, I. and Morton, H., An Introduction
to Neural Computing, 2nd edition.
2. Neural Networks at Pacific Northwest
National Laboratory
http://www.emsl.pnl.gov:2080/docs/cie/neural/neural.homepage.html
3. Industrial Applications of Neural Networks
(research reports Esprit, I.F.Croall,
J.P.Mason)
4. A Novel Approach to Modelling and
Diagnosing the Cardiovascular System
http://www.emsl.pnl.gov:2080/docs/cie/neural/papers2/keller.wcnn95.abs.html
5. Artificial Neural Networks in Medicine
http://www.emsl.pnl.gov:2080/docs/cie/techbrief/NN.techbrief.ht
6. Neural Networks by Eric Davalo and Patrick
Naim
7. Learning internal representations by error
propagation by Rumelhart, Hinton and
Williams (1986).
8. Klimasauskas, CC. (1989). The 1989 Neuro
Computing Bibliography. Hammerstrom, D.
(1986). A Connectionist/Neural Network
Bibliography.
9. DARPA Neural Network Study (October,
1987-February, 1989). MIT Lincoln Lab.
Neural Networks, Eric Davalo and Patrick
Naim
10. Asimov, I. (1984, 1950), I, Robot, Ballantine,
New York.
11. Electronic Noses for Telemedicine
http://www.emsl.pnl.gov:2080/docs/cie/neural/papers2/keller.ccc95.abs.html
12. http://www-cs-
faculty.stanford.edu/~eroberts/courses/soco/p
rojects/neural-networks/Future/index.html
13. Pattern Recognition of Pathology Images
http://kopernik-
eth.npac.syr.edu:1200/Task4/pattern.html
International Journal of Computer Science and Management Research Vol 2 Issue 2 February 2013
ISSN 2278-733X
Shritosh Kumar et.al. www.ijcsmr.org1626

More Related Content

PDF
Neural Network Architectures
PPTX
Bubble Sort Algorithm Presentation
PPT
Neural network final NWU 4.3 Graphics Course
PPTX
Artifical Neural Network and its applications
PPTX
Artificial Neural Network
PPTX
Deep neural networks
PPTX
Bubble sort | Data structure |
Neural Network Architectures
Bubble Sort Algorithm Presentation
Neural network final NWU 4.3 Graphics Course
Artifical Neural Network and its applications
Artificial Neural Network
Deep neural networks
Bubble sort | Data structure |

What's hot (20)

PPTX
INTRODUCTION TO MACHINE LEARNING.pptx
PPTX
Neural network
PDF
Introduction to Neural Networks
PDF
Machine Learning: Introduction to Neural Networks
PPT
PDF
Data preprocessing using Machine Learning
PDF
Introduction to Recurrent Neural Network
PPTX
Artificial neural network
PDF
Artificial Neural Network
PPT
BINARY TREE REPRESENTATION.ppt
PPTX
Doubly Linked List
PPT
backpropagation in neural networks
PPTX
Mc Culloch Pitts Neuron
PPTX
Artificial neural network
PPTX
Bubble sort
PPT
Artificial neural network
PPTX
Artificial neural network
PPTX
Tree in data structure
PPTX
Module 1: Fundamentals of neural network.pptx
PPTX
neural network
INTRODUCTION TO MACHINE LEARNING.pptx
Neural network
Introduction to Neural Networks
Machine Learning: Introduction to Neural Networks
Data preprocessing using Machine Learning
Introduction to Recurrent Neural Network
Artificial neural network
Artificial Neural Network
BINARY TREE REPRESENTATION.ppt
Doubly Linked List
backpropagation in neural networks
Mc Culloch Pitts Neuron
Artificial neural network
Bubble sort
Artificial neural network
Artificial neural network
Tree in data structure
Module 1: Fundamentals of neural network.pptx
neural network
Ad

Viewers also liked (12)

PPTX
Neural network & its applications
PPTX
Artificial intelligence NEURAL NETWORKS
PDF
Neural Networks in the Wild: Handwriting Recognition
PPT
Character Recognition using Artificial Neural Networks
PPT
2d/3D transformations in computer graphics(Computer graphics Tutorials)
PPT
Perceptron
PPTX
2 d transformations and homogeneous coordinates
PPT
2 d transformations by amit kumar (maimt)
PDF
Notes 2D-Transformation Unit 2 Computer graphics
PPTX
Introduction Of Artificial neural network
PPTX
Neural networks
PDF
Artificial neural networks
Neural network & its applications
Artificial intelligence NEURAL NETWORKS
Neural Networks in the Wild: Handwriting Recognition
Character Recognition using Artificial Neural Networks
2d/3D transformations in computer graphics(Computer graphics Tutorials)
Perceptron
2 d transformations and homogeneous coordinates
2 d transformations by amit kumar (maimt)
Notes 2D-Transformation Unit 2 Computer graphics
Introduction Of Artificial neural network
Neural networks
Artificial neural networks
Ad

Similar to Artificial Neural Network and its Applications (20)

DOCX
Neural networks of artificial intelligence
PPTX
Neural network
DOCX
Project Report -Vaibhav
DOCX
Neural network
PDF
[IJET V2I2P20] Authors: Dr. Sanjeev S Sannakki, Ms.Anjanabhargavi A Kulkarni
PPTX
neural networks
DOCX
Neural networks report
PDF
7 nn1-intro.ppt
PPTX
Neural network
PPT
Neural networks - Finding solutions through human evolution.ppt
PDF
Artificial Neural Network report
PDF
Neural Network
PDF
Artificial Neural Networks Lect1: Introduction & neural computation
PPTX
Karan ppt for neural network and deep learning
PPTX
IAI - UNIT 3 - ANN, EMERGENT SYSTEMS.pptx
PPTX
Neural Netwrok
PPTX
neuralnetwork.pptx
PPTX
neuralnetwork.pptx
PDF
An Overview On Neural Network And Its Application
Neural networks of artificial intelligence
Neural network
Project Report -Vaibhav
Neural network
[IJET V2I2P20] Authors: Dr. Sanjeev S Sannakki, Ms.Anjanabhargavi A Kulkarni
neural networks
Neural networks report
7 nn1-intro.ppt
Neural network
Neural networks - Finding solutions through human evolution.ppt
Artificial Neural Network report
Neural Network
Artificial Neural Networks Lect1: Introduction & neural computation
Karan ppt for neural network and deep learning
IAI - UNIT 3 - ANN, EMERGENT SYSTEMS.pptx
Neural Netwrok
neuralnetwork.pptx
neuralnetwork.pptx
An Overview On Neural Network And Its Application

Recently uploaded (20)

PPTX
UNIT-1 - COAL BASED THERMAL POWER PLANTS
PDF
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
PDF
composite construction of structures.pdf
PPTX
CH1 Production IntroductoryConcepts.pptx
PPTX
Recipes for Real Time Voice AI WebRTC, SLMs and Open Source Software.pptx
PDF
BMEC211 - INTRODUCTION TO MECHATRONICS-1.pdf
PPTX
bas. eng. economics group 4 presentation 1.pptx
PPT
Project quality management in manufacturing
PDF
Model Code of Practice - Construction Work - 21102022 .pdf
PDF
PPT on Performance Review to get promotions
PPTX
Welding lecture in detail for understanding
PPTX
Strings in CPP - Strings in C++ are sequences of characters used to store and...
PPTX
Internet of Things (IOT) - A guide to understanding
PPTX
MET 305 2019 SCHEME MODULE 2 COMPLETE.pptx
PPTX
Lecture Notes Electrical Wiring System Components
PDF
Structs to JSON How Go Powers REST APIs.pdf
PPTX
Lesson 3_Tessellation.pptx finite Mathematics
PDF
Evaluating the Democratization of the Turkish Armed Forces from a Normative P...
PPTX
Construction Project Organization Group 2.pptx
PDF
The CXO Playbook 2025 – Future-Ready Strategies for C-Suite Leaders Cerebrai...
UNIT-1 - COAL BASED THERMAL POWER PLANTS
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
composite construction of structures.pdf
CH1 Production IntroductoryConcepts.pptx
Recipes for Real Time Voice AI WebRTC, SLMs and Open Source Software.pptx
BMEC211 - INTRODUCTION TO MECHATRONICS-1.pdf
bas. eng. economics group 4 presentation 1.pptx
Project quality management in manufacturing
Model Code of Practice - Construction Work - 21102022 .pdf
PPT on Performance Review to get promotions
Welding lecture in detail for understanding
Strings in CPP - Strings in C++ are sequences of characters used to store and...
Internet of Things (IOT) - A guide to understanding
MET 305 2019 SCHEME MODULE 2 COMPLETE.pptx
Lecture Notes Electrical Wiring System Components
Structs to JSON How Go Powers REST APIs.pdf
Lesson 3_Tessellation.pptx finite Mathematics
Evaluating the Democratization of the Turkish Armed Forces from a Normative P...
Construction Project Organization Group 2.pptx
The CXO Playbook 2025 – Future-Ready Strategies for C-Suite Leaders Cerebrai...

Artificial Neural Network and its Applications

  • 1. Artificial Neural Network and its Applications Shritosh Kumar [1], Tarun Kumawat [2], Naresh kumar Marwal [3], [4] Brijesh Kumar Singh [1] Shritosh Kumar [CSE] Arya College of Engg.& IT, Kukas, Rajasthan, India [2] Tarun Kumawat [CSE] JECRC UDML College of Engineering, Kukas, Rajasthan, India [3] Naresh Kumar Marwal [CSE] JECRC UDML College of Engineering, Kukas, Rajasthan [4] Brijesh Kumar Singh [CSE] JECRC UDML College of Engineering, Kukas, Rajasthan Abstract This report is an introduction to Artificial Neural Networks. The various types of neural networks are explained and demonstrated, applications of neural networks like ANNs in medicine are described, and a detailed historical background is provided. The connection between the artificial and the real thing is also investigated and explained. Finally, the mathematical models involved are presented and demonstrated. Introduction This paper presents a neural network based artificial vision and its applications. The concept of neural networks is modeled after biological sensory mechanisms where the neuron signals are transmitted to the brain and processed. This concept moves away from traditional statistical models where data are analyzed based upon holding everything else constant (ceteris paribus). The weakness in statistical models lies in their inability to model the changing relationships between variables (non-linear problem) and thus presents challenges in making a predictive analysis where the underlying relationships are not constant. A neural network overcomes this problem by being adaptive to real sets of data. Much like living organisms, a neural network gets training and learns the tricks of the trade by observation and readjusts its learning against new sets of data iteratively. Other reasons why we make use of Neural Networks include; • Adaptive learning: An ability to learn how to do tasks based on the data given for training or initial experience. 
• Self-Organization: An ANN can create its own organization or representation of the information it receives during learning time.

• Real-Time Operation: ANN computations may be carried out in parallel, and special hardware devices are being designed and manufactured which take advantage of this capability.

• Fault Tolerance via Redundant Information Coding: Partial destruction of a network leads to a corresponding degradation of performance. However, some network capabilities may be retained even with major network damage.

The Definition of Neural Network

Artificial neurons are similar to their biological counterparts. They have input connections which are summed together to determine the strength of their output, which is the result of the sum being fed into an activation function. Though many activation functions exist, the most common is the sigmoid activation function, which outputs a number between 0 (for low input values) and 1 (for high input values).

An Artificial Neural Network (ANN), usually simply called a Neural Network, has been found to hold great potential as the front end of expert systems and has made remarkable success in providing real-time responses to complex pattern recognition problems. Artificial neural network (ANN, or briefly NN) models try to model the operation of biological neural networks, especially those of the human brain, which can perform a series of tasks, like pattern recognition and classification, with amazing speed and accuracy, despite the fact that inputs to the human brain consist most of the time of noisy and erroneous data.

Using computer systems to replicate the learning and recall methods of the human brain has been a goal of researchers in a variety of disciplines for half a century. Neural network computing is the closest approximation of brain function to evolve to a stage where practical application is attainable. The technology of artificial neural networks has developed from a biological model of the brain.

A neural network consists of a set of connected cells: the neurons. A neuron is a real function of the input vector (y1, ..., yk). The output is obtained as xj = f(aj + Σi=1..k wij yi), where f is a function, typically the sigmoid function. A graphical presentation of a neuron is given in the figure below.

Figure: A single neuron

The brain and nervous system provide the structural model for a neural network. A neural net consists of a number of processing elements known as neurons, each of which can have multiple inputs and only one output. Single outputs do branch out and become inputs to many other neurons, thus affording the many incoming signals each neuron receives.

International Journal of Computer Science and Management Research Vol 2 Issue 2 February 2013 ISSN 2278-733X Shritosh Kumar et.al. www.ijcsmr.org 1621
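As an illustrative sketch (the code and numeric values are ours, not the paper's), the single neuron described above, with aj acting as a bias term, might look like this in Python:

```python
import math

def sigmoid(x):
    """Sigmoid activation: outputs near 0 for low inputs, near 1 for high inputs."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias):
    """Sum the weighted inputs plus the bias, then apply the activation function."""
    s = bias + sum(w * y for w, y in zip(weights, inputs))
    return sigmoid(s)

# Three inputs with arbitrary illustrative weights and bias
out = neuron_output([1.0, 0.5, -1.0], [0.4, 0.3, 0.2], 0.1)
print(out)
```

In a trained network the weights and bias would be learned from data; here they are fixed only to show how the weighted sum feeds the activation function.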
In a typical neural network, neurons receive most of their inputs from other neurons; the rest are from the outside world: data describing events.

Advantages:
• A neural network can perform tasks that a linear program cannot.
• When an element of the neural network fails, it can continue without any problem because of its parallel nature.
• A neural network learns and does not need to be reprogrammed.
• It can be implemented in any application.
• It can be implemented without any problem.

Disadvantages:
• The neural network needs training to operate.
• The architecture of a neural network is different from the architecture of microprocessors and therefore needs to be emulated.
• Large neural networks require high processing time.

Architecture of neural networks

Feed-forward (associative) networks

Feed-forward nets are the most well-known and widely-used class of neural network. The popularity of feed-forward networks derives from the fact that they have been applied successfully to a wide range of information processing tasks in such diverse fields as speech recognition, financial prediction, image compression, medical diagnosis and protein structure prediction; new applications are being discovered all the time.

Feedback (auto-associative) networks

Recurrent neural networks do contain feedback connections. Contrary to feed-forward networks, the dynamical properties of the network are important. In some cases, the activation values of the units undergo a relaxation process such that the neural network evolves to a stable state in which these activations do not change anymore. In other applications, the changes of the activation values of the output neurons are significant, such that the dynamical behaviour constitutes the output of the neural network (Pearlmutter, 1990).
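A minimal sketch of the feed-forward case (illustrative weights, not taken from the paper): activations flow strictly from the inputs through a hidden layer to the output, with no feedback connections, which is exactly what distinguishes these nets from the recurrent networks discussed next.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: each unit applies the activation
    to its own weighted sum of all the layer's inputs."""
    return [sigmoid(b + sum(w * x for w, x in zip(ws, inputs)))
            for ws, b in zip(weights, biases)]

def feed_forward(x, layers):
    """Pass the activations forward through each layer in turn."""
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# Two inputs -> hidden layer of 2 units -> single output unit
hidden = ([[0.5, -0.3], [0.8, 0.2]], [0.1, -0.1])
output = ([[1.0, -1.0]], [0.0])
result = feed_forward([1.0, 0.0], [hidden, output])
print(result)
```

A recurrent network would instead feed some activations back as inputs and iterate until the values settle (the relaxation process mentioned above).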
Network layers

The most common type of artificial neural network consists of three groups, or layers, of units: a layer of "input" units is connected to a layer of "hidden" units, which is connected to a layer of "output" units (see Figure 4.1).

• The activity of the input units represents the raw information that is fed into the network.
• The activity of each hidden unit is determined by the activities of the input units and the weights on the connections between the input and hidden units.
• The behaviour of the output units depends on the activity of the hidden units and the weights between the hidden and output units.

Properties of Neural Networks

The following important properties of neural networks and neurons should be distinguished:

1) Nonlinearity: A neuron is basically a nonlinear device. Consequently, a neural network, made up of an interconnection of neurons, is itself nonlinear. Moreover, the nonlinearity is of a special kind in the sense that it is distributed throughout the network. Nonlinearity is a highly important property, particularly if the underlying physical mechanism responsible for the generation of an input signal (e.g., a speech or video signal) is inherently nonlinear.

2) Input-Output Mapping: A popular paradigm of learning called supervised learning involves the modification of the synaptic weights of a neural network by applying a set of labeled training samples. Each example consists of a unique input signal and the corresponding desired response. The network is presented the examples sequentially, and the synaptic weights are modified so as to minimize the difference between the desired response and the actual response of the network to the input signal. The training of the network is repeated for the examples in the set until the network reaches a steady state, where there are no further significant changes in the synaptic weights.
Thus the network learns from the examples by constructing an input-output mapping for the problem at hand.

3) Adaptivity: Neural networks have a built-in capability to adapt their synaptic weights to changes in the surrounding environment. In particular, a neural network trained to operate in a specific environment can easily be retrained to deal with minor changes in the operating environmental conditions.

4) Evidential Response: In the context of pattern classification, a neural network can be designed to provide information not only about which particular pattern to select, but also about the confidence in the decision made. This latter information may be used to reject ambiguous patterns, should they arise, and thereby improve the classification performance of the network.

Perceptrons

Perceptrons are the simplest architecture to learn when studying Neural Networking. Picture in your mind a perceptron as a node of a wide, interconnected neural network, somewhat like a data tree, although the neural network does not necessarily have to have top and bottom sections. The connections among the nodes not only show the relationships between the nodes but also transmit data and information, called a signal or impulse. The perceptron is a simple model of a neuron.

A BRIEF HISTORY

We mention here the most important events that present a point-by-point historical evolution of the field of neural networks:

• 1943 – W. McCulloch and W. Pitts – the first nonlinear mathematical model of the neuron (a formal neuron) [6].
• 1948 – D. Hebb – the first learning rule. One can memorize an object by adapting weights [7].
• 1958 – R. Rosenblatt – concept of the perceptron as a machine which can learn and classify patterns [8].
• 1963 – A.B.J. Novikoff – significant development of learning theory: a proof of the theorem on convergence of the learning algorithm applied to the solution of the pattern recognition problem using the perceptron [9].
• 1960s – extensive development of threshold logic, initiated by the earlier results in perceptron theory, including deep study of the features of threshold Boolean functions, one of the most important objects considered in the theory of perceptrons and neural networks. The most complete summaries are given by M. Dertouzos [10] and S. Muroga [11].
• 1969 – M. Minsky & S. Papert – a potential limit of the perceptron as a computing system is shown [12].
• 1977 – T. Kohonen – consideration of associative memory as a content-addressable memory which is able to learn [13].
• 1982 – J. Hopfield shows by means of energy functions that neural networks are capable of solving a large number of problems. Revival of extensive research in the field [14].
• 1982 – T. Kohonen describes the self-organizing maps [15].
• 1986 – D.E. Rumelhart & J.L. McClelland – introduction of the feedforward neural network and learning with backpropagation. Consideration of a neural network as a universal approximator.
• Present – more and more scientists and research centers are devoted to research in the field of neural networks and their applications in pattern recognition, classification, prediction, image processing and others.
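Rosenblatt's perceptron learning rule, referenced above, can be sketched as follows (an illustration of ours; the learning rate, epoch count and the AND task are choices made for the example, not taken from the paper): for each misclassified example, the weights and bias are nudged toward the correct answer.

```python
def perceptron_train(samples, lr=1, epochs=20):
    """Rosenblatt's rule: present the labeled examples repeatedly and
    adjust the weights by lr * (target - prediction) * input."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:               # target is 0 or 1
            y = 1 if b + sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            err = target - y                    # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn the logical AND function, which is linearly separable
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = perceptron_train(data)
```

By the convergence theorem Novikoff proved (entry for 1963 above), this procedure is guaranteed to find a separating weight vector whenever one exists; for non-separable problems such as XOR it never settles, which is the limitation Minsky and Papert highlighted in 1969.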
Applications of Neural Networks

Sales and Marketing: Targeted Marketing, Sales Forecasting, Service Usage Forecasting, Retail Margins Forecasting.

Operational Analysis: Retail Inventories Optimization, Scheduling Optimization, Managerial Decision Making, Cash Flow Forecasting.

Medical: Detection and Evaluation of Medical Phenomena, Medical Diagnosis, Patient's Stay Forecasts, Treatment Cost Estimation.

Financial: Stock Market Prediction, Bankruptcy Prediction, Credit Worthiness, Credit Rating, Property Appraisal, Fraud Detection, Price Forecasts.

Data Mining: Prediction, Classification, Change and Deviation Detection, Knowledge Discovery, Response Modeling, Time Series Analysis.

Energy: Electrical Load Forecasting, Energy Demand Forecasting, Short and Long-Term Load Estimation, Predicting Gas/Coal Prices, Power Control Systems, Hydro Dam Monitoring.

Educational: Teaching Neural Networks, Neural Network Research, College Application Screening, Predicting Student Performance.

Science: Pattern Recognition, Formulation Optimization, Chemical Identification, Physical System Modeling, Ecosystem Evaluation, Polymer Identification, Recognizing Genes, Botanical Classification, Biological Systems Analysis, Ground Level Ozone Prognosis.

HR Management: Employee Retention, Employee Hiring, Staff Scheduling, Personnel Profiling.

Industrial: Process Control, Quality Control, Temperature and Force Prediction.

Other: Sports Betting, Games Development, Quantitative Weather Forecasting, Optimization Problems, Routing, Agricultural Product Estimates.
The applications featured here are:

• CoEvolution of Neural Networks for Control of Pursuit & Evasion
• Learning the Distribution of Object Trajectories for Event Recognition
• Radiosity for Virtual Reality Systems
• Autonomous Walker & Swimming Eel
• Robocup: Robot World Cup
• Using HMMs for Audio-to-Visual Conversion
• Artificial Life: Galapagos
• Speechreading (Lipreading)
• Detection and Tracking of Moving Targets
• Real-time Target Identification for Security Applications
• Facial Animation
• Behavioral Animation and Evolution of Behavior
• A Three-Layer Feedforward Neural Network
• Artificial Life for Graphics, Animation, Multimedia, and Virtual Reality: Siggraph '95 Showcase
• Creatures: The World's Most Advanced Artificial Life!
• Framsticks Artificial Life

Application of Neural Network Models in IR

In this section, we will study the applications of some neural network models (i.e., SOFM and the Hopfield network) and related algorithms (i.e., semantic networks) in information retrieval systems.

1) The Application of SOFM: Most of the applications of SOFM in IR systems are based on the fact that SOFM is a topographic map and can perform mappings from a multidimensional space to a two- or three-dimensional space. Kohonen has shown that his self-organizing feature map "is able to represent rather complicated hierarchical relations of high-dimensional space in a two-dimensional display".

2) Application of the Hopfield Net: The Hopfield net was introduced as a neural net that can be used as a content-addressable memory. Knowledge and information can be stored in single-layered interconnected neurons (nodes) and weighted synapses (links), and can be retrieved based on the network's parallel relaxation method. It has been used for various classification tasks and global optimization.

3) The Applications of MLP Networks and Semantic Networks: It is hard to distinguish the applications of MLP from the applications of semantic networks with spreading activation methods in IR.
In most cases, the applications of semantic networks in IR make use of spreading activation models while having a feed-forward network structure similar to that of MLP networks. Wong and his colleagues (1993) have developed a method for computing term associations using a three-layer feed-forward network with linear threshold functions. Each document is represented as a node in the input layer. The nodes in the hidden layer represent query terms, and the output layer consists of just one node, which pools the input from all the query terms. Term associations are modeled by weighted links connecting different neurons, and are derived by the perceptron learning algorithm without the need for introducing any ad hoc parameters. The preliminary results indicate the usefulness of neural networks in the design of adaptive information retrieval systems.

Future Work on Neural Networks

Improvement of existing technologies: All current Neural Network technologies will most likely be vastly improved upon in the future. Everything from handwriting and speech recognition to stock market prediction will become more sophisticated as researchers develop better training methods and network architectures.
Neural Networks might, in the future, allow:

• Robots that can see, feel, and predict the world around them.
• Improved stock prediction.
• Common usage of self-driving cars.
• Composition of music.
• Handwritten documents to be automatically transformed into formatted word-processing documents.
• Trends found in the human genome to aid in the understanding of the data compiled by the Human Genome Project.
• Self-diagnosis of medical problems using neural networks.

Conclusion:

The computing world has a lot to gain from neural networks. Their ability to learn by example makes them very flexible and powerful. Furthermore, there is no need to devise an algorithm in order to perform a specific task; i.e., there is no need to understand the internal mechanisms of that task. They are also very well suited to real-time systems because of their fast response and computational times, which are due to their parallel architecture.

Neural networks also contribute to other areas of research such as neurology and psychology. They are regularly used to model parts of living organisms and to investigate the internal mechanisms of the brain.

Perhaps the most exciting aspect of neural networks is the possibility that some day 'conscious' networks might be produced. A number of scientists argue that consciousness is a 'mechanical' property and that 'conscious' neural networks are a realistic possibility.

Finally, I would like to state that even though neural networks have a huge potential, we will only get the best of them when they are integrated with computing, AI, fuzzy logic and related subjects.

References:

1. Aleksander, I. and Morton, H., An Introduction to Neural Computing, 2nd edition.
2. Neural Networks at Pacific Northwest National Laboratory, http://www.emsl.pnl.gov:2080/docs/cie/neural/neural.homepage.html
3. Croall, I.F. and Mason, J.P., Industrial Applications of Neural Networks (research reports Esprit).
4.
A Novel Approach to Modelling and Diagnosing the Cardiovascular System, http://www.emsl.pnl.gov:2080/docs/cie/neural/papers2/keller.wcnn95.abs.html
5. Artificial Neural Networks in Medicine, http://www.emsl.pnl.gov:2080/docs/cie/techbrief/NN.techbrief.ht
6. Davalo, E. and Naim, P., Neural Networks.
7. Rumelhart, D.E., Hinton, G.E. and Williams, R.J. (1986), Learning Internal Representations by Error Propagation.
8. Klimasauskas, C.C. (1989), The 1989 Neuro-Computing Bibliography; Hammerstrom, D. (1986), A Connectionist/Neural Network Bibliography.
9. DARPA Neural Network Study (October 1987 - February 1989), MIT Lincoln Lab.
10. Asimov, I. (1984, 1950), I, Robot, Ballantine, New York.
11. Electronic Noses for Telemedicine, http://www.emsl.pnl.gov:2080/docs/cie/neural/papers2/keller.ccc95.abs.html
12. http://www-cs-faculty.stanford.edu/~eroberts/courses/soco/projects/neural-networks/Future/index.html
13. Pattern Recognition of Pathology Images, http://kopernik-eth.npac.syr.edu:1200/Task4/pattern.html