Artificial Intelligence
Dr Khadija Kanwal
Institute of CS&IT
The Women University Multan
Neural Networks
Artificial Neural Networks (ANNs)
The inventor of the first neurocomputer,
Dr. Robert Hecht-Nielsen, defines a
neural network as −
"...a computing system made up of a
number of simple, highly interconnected
processing elements, which process
information by their dynamic state
response to external inputs.”
Basic Structure of ANNs
The idea of ANNs is based on the belief that the working of the
human brain can be imitated using silicon and wires in place of
living neurons and dendrites, provided the right connections are made.
 The human brain is composed of 86 billion nerve cells
called neurons. Each neuron is connected to thousands of other
cells by axons. Stimuli from the external environment, or
inputs from sensory organs, are accepted by dendrites.
These inputs create electric impulses, which quickly
travel through the neural network. A neuron can then
either pass the message on to other neurons or decide
not to send it forward.
• The Brain and the Neuron
• In animals, learning occurs within the brain.
• If we can understand how the brain works, then there
might be things in there for us to copy and use for our
machine learning systems.
• While the brain is an impressively powerful and
complicated system, the basic building blocks that it is
made up of are fairly simple and easy to understand.
• The processing units of the brain are nerve cells called
neurons.
• There are lots of them (100 billion = 10^11 is the figure
that is often given).
Leading to the Neural Networks…
The general operation of neurons is:
•Transmitter chemicals within the fluid of the brain raise or lower
the electrical potential inside the body of the neuron.
•If this membrane potential reaches some threshold, the neuron
spikes or fires, and a pulse of fixed strength and duration is sent
down the axon.
•The axons divide (arborise) into connections to many other
neurons, connecting to each of these neurons in a synapse.
•Each neuron is typically connected to thousands of other neurons,
so that it is estimated that there are about 100 trillion (= 10^14)
synapses within the brain.
•After firing, the neuron must wait for some time to recover its
energy (the refractory period) before it can fire again.
Leading to the Neural Networks…
• Each neuron can be viewed as a separate processor,
performing a very simple computation: deciding
whether or not to fire.
• This makes the brain a massively parallel computer
made up of 10^11 processing elements.
• If that is all there is to the brain, then we should be
able to model it inside a computer and end up with
animal or human intelligence inside a computer.
• This is the view of strong AI.
Leading to the Neural Networks…
Machine Learning in ANNs
 ANNs are capable of learning, and they need to be trained. There are
several learning strategies −
 Supervised Learning − It involves a teacher that is more knowledgeable
than the ANN itself. The teacher feeds the network example data about
which the teacher already knows the answers.
 For example, pattern recognition. The ANN comes up with guesses
while recognizing. Then the teacher provides the ANN with the
answers. The network then compares its guesses with the teacher’s
“correct” answers and makes adjustments according to the errors.
 Unsupervised Learning − It is required when there is no example
data set with known answers. For example, searching for a hidden
pattern. In this case, clustering, i.e. dividing a set of elements into groups
according to some unknown pattern, is carried out based on the
existing data sets.
Machine Learning in ANNs
 Reinforcement Learning − This strategy is built on
observation. The ANN makes a decision by observing its
environment. If the observation is negative, the network
adjusts its weights so that it can make a different, better
decision the next time.
 Back Propagation Algorithm
 It is the training or learning algorithm. It learns by example.
If you submit to the algorithm examples of what you
want the network to do, it changes the network’s weights
so that, once training is finished, it produces the desired
output for a particular input.
 Back Propagation networks are ideal for simple Pattern
Recognition and Mapping Tasks.
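To make the idea concrete, here is a minimal back-propagation sketch in Python/NumPy, training a tiny 2-3-1 network on XOR. The architecture, sigmoid activation, learning rate, and epoch count are illustrative assumptions for this sketch, not details from the slides.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
t = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 3))   # input -> hidden weights (sizes assumed)
b1 = np.zeros((1, 3))
W2 = rng.normal(size=(3, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))
eta = 0.5                      # learning rate (assumed)

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                      # forward pass
    y = sigmoid(h @ W2 + b2)
    delta_out = (y - t) * y * (1 - y)             # error at the output layer
    delta_hid = (delta_out @ W2.T) * h * (1 - h)  # error propagated backwards
    W2 -= eta * h.T @ delta_out                   # weight updates follow the
    b2 -= eta * delta_out.sum(axis=0, keepdims=True)  # negative error gradient
    W1 -= eta * X.T @ delta_hid
    b1 -= eta * delta_hid.sum(axis=0, keepdims=True)

print(np.round(y, 2))   # should approach [[0], [1], [1], [0]]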
• Hebb’s rule says that the changes in the strength of
synaptic connections are proportional to the
correlation in the firing of the two connecting neurons.
• So if two neurons consistently fire simultaneously, then
any connection between them will change in strength,
becoming stronger.
• However, if the two neurons never fire simultaneously,
the connection between them will die away.
• The idea is that if two neurons both respond to
something, then they should be connected.
Hebb’s Rule
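As a minimal sketch of the rule, assuming a small rate parameter eta and representing each neuron’s firing as a number, the weight change is proportional to the product (the correlation) of the two activities:

import numpy as np

# Hebbian update sketch: delta_w = eta * (pre activity) * (post activity).
# eta and the example activities are illustrative assumptions.
eta = 0.1
pre = np.array([1.0, 0.0, 1.0])   # firing of three input neurons
post = 1.0                        # firing of the connected neuron
w = np.zeros(3)
w += eta * pre * post             # only the co-firing pairs strengthen
print(w)                          # -> [0.1, 0.0, 0.1]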
• Suppose that you have a neuron somewhere that
recognises your grandmother (this will probably get
input from lots of visual processing neurons, but don’t
worry about that).
• Now if your grandmother always gives you a chocolate
bar when she comes to visit, then some neurons,
which are happy because you like the taste of
chocolate, will also be stimulated.
• Since these neurons fire at the same time, they will be
connected together, and the connection will get
stronger over time. So eventually, the sight of your
grandmother, even in a photo, will be enough to make
you think of chocolate.
Example of Hebb’s Rule
• Pavlov used this idea, called classical conditioning, to
train his dogs so that when food was shown to the
dogs and the bell was rung at the same time, the
neurons for salivating over the food and hearing the
bell fired simultaneously, and so became strongly
connected.
• Over time, the strength of the synapse between the
neurons that responded to hearing the bell and those
that caused the salivation reflex was enough that just
hearing the bell caused the salivation neurons to fire in
sympathy.
Classical Conditioning
• Studying neurons isn’t actually that easy. You need to be able to
extract the neuron from the brain, and then keep it alive so that
you can see how it reacts in controlled circumstances.
• Doing this takes a lot of care. One of the problems is that
neurons are generally quite small (they must be if you’ve got
10^11 of them in your head!) so getting electrodes into the
synapses is difficult.
• It has been done, though, using neurons from the giant squid,
which has some neurons that are large enough to see.
• Hodgkin and Huxley did this in 1952, measuring and writing
down differential equations that compute the membrane
potential based on various chemical concentrations, something
that earned them a Nobel prize.
McCulloch and Pitts Neurons
• McCulloch and Pitts produced a perfect
example of this when they modelled a neuron
as:
1. a set of weighted inputs wi that correspond
to the synapses
2. an adder that sums the input signals
(equivalent to the membrane of the cell that
collects electrical charge)
3. an activation function (initially a threshold
function) that decides whether the neuron
fires (‘spikes’) for the current inputs
McCulloch and Pitts Neurons
A picture of McCulloch and Pitts’s mathematical model of a neuron. The
inputs xi are multiplied by the weights wi, and the neuron sums these
values. If this sum is greater than the threshold θ, then the neuron
fires; otherwise it does not.
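Putting the three parts together, a minimal Python sketch of a McCulloch and Pitts neuron might look as follows; the example weights and threshold are assumptions, chosen so that the neuron computes a two-input AND.

def mcp_neuron(inputs, weights, theta):
    """McCulloch-Pitts neuron: weighted sum followed by a threshold."""
    h = sum(x * w for x, w in zip(inputs, weights))  # the adder
    return 1 if h > theta else 0                     # threshold activation

# Weights and threshold chosen (as an assumption) so the neuron computes AND.
weights, theta = [1, 1], 1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, mcp_neuron(x, weights, theta))  # fires only for (1, 1)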
• One question that is worth considering is how realistic
is this model of a neuron? The answer is: not very.
• Real neurons are much more complicated.
• One neuron isn’t that interesting. It doesn’t do very
much, except fire or not fire when we give it inputs. In
fact, it doesn’t even learn. If we feed in the same set of
inputs over and over again, the output of the neuron
never varies—it either fires or does not.
• So to make the neuron a little more interesting we
need to work out how to make it learn, and then we
need to put sets of neurons together into neural
networks so that they can do something useful.
Limitations of McCulloch and Pitts
Neurons
• Have a look again at the McCulloch and Pitts neuron
and try to work out what can change in that model.
• The only things that make up the neuron are the
inputs, the weights, and the threshold (and there is
only one threshold for each neuron, but lots of inputs).
• The inputs can’t change, since they are external, so we
can only change the weights and the threshold, which
is interesting since it tells us that most of the learning
is in the weights, which aren’t part of the neuron at all;
they are the model of the synapse!
The Perceptron: Neural Network
So in order to make a neuron learn, the
question that we need to ask is:
How should we change the weights and
thresholds of the neurons so that the
network gets the right answer more
often?
The Perceptron: Neural Network
• The Perceptron is nothing more than a collection of
McCulloch and Pitts neurons together with a set of
inputs and some weights to fasten the inputs to the
neurons.
• Inputs xi: An input vector is the data given as one input to the
network.
• Weights wij: the weighted connection between nodes i
and j. These weights are equivalent to the synapses in the brain.
They are arranged into a matrix W.
• Outputs yj: We can write y(x,W) to remind ourselves that the
output depends on the inputs to the algorithm and the current
set of weights of the network.
• Targets tj: The extra data that we need for supervised learning,
since they provide the ‘correct’ answers that the algorithm is
learning about.
• Activation Function g(·): a mathematical function that
describes the firing of the neuron as a response to the weighted
inputs, such as the threshold function.
• Error E: a function that computes the inaccuracies of the
network as a function of the outputs y and targets t.
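As a rough, hypothetical mapping of this notation into NumPy (the array sizes, the threshold activation, and the sum-of-squares choice for E are all assumptions for the sketch):

import numpy as np

m, n = 3, 2                                        # m inputs, n neurons (assumed)
x = np.array([1.0, 0.5, -1.0])                     # inputs xi for one data point
W = np.random.default_rng(1).normal(size=(m, n))   # weights wij in a matrix W
t = np.array([1.0, 0.0])                           # targets tj

def g(h):                     # activation function g: here a threshold function
    return np.where(h > 0, 1.0, 0.0)

y = g(x @ W)                  # outputs y(x, W)
E = 0.5 * np.sum((y - t) ** 2)  # one possible error function E (assumed form)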
• The neurons in the Perceptron are completely
independent of each other:
• It doesn’t matter to any neuron what the others are
doing, it works out whether or not to fire by
multiplying together its own weights and the input,
adding them together, and comparing the result to its
own threshold, regardless of what the other neurons
are doing.
• Even the weights that go into each neuron are
separate for each one, so the only thing they share is
the inputs, since every neuron sees all of the inputs to
the network.
The Perceptron: Neural Network
• When we looked at the McCulloch and Pitts neuron,
the weights were labelled as wi, with the i index
running over the number of inputs.
• Here, we also need to work out which neuron the
weight feeds into, so we label them as wij, where the j
index runs over the number of neurons.
• So w32 is the weight that connects input node 3 to
neuron 2.
• When we make an implementation of the neural
network, we can use a two-dimensional array to hold
these weights.
The Perceptron: Neural Network
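For instance, in Python (remembering that NumPy uses zero-based indices, so the slides’ w32 lives at W[2, 1]; the array sizes are assumed):

import numpy as np

W = np.zeros((4, 3))   # 4 input nodes feeding 3 neurons (sizes assumed)
W[2, 1] = 0.7          # w32 in the slides' 1-based notation: node 3 -> neuron 2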
• A typical output pattern could be (0, 1, 0, 0, 1), which
means that the second and fifth neurons fired and the
others did not.
• We compare that pattern to the target, which is our
known correct answer for this input, to identify which
neurons got the answer right, and which did not.
• For a neuron that is correct, we are happy, but any
neuron that fired when it shouldn’t have done, or failed
to fire when it should, needs to have its weights or
thresholds changed.
The Perceptron: Neural Network
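A minimal sketch of that correction step, assuming the standard perceptron update wij ← wij − η·(yj − tj)·xi; the slides describe the idea only in words, so the exact rule and the learning rate here are assumptions:

import numpy as np

eta = 0.25                          # learning rate (an assumption)
x = np.array([1.0, 0.0, 1.0])       # one input vector
W = np.zeros((3, 5))                # weights for 3 inputs -> 5 neurons
t = np.array([0, 1, 0, 0, 1])       # target pattern for this input

y = np.where(x @ W > 0, 1, 0)       # each neuron fires if its weighted sum > 0
W -= eta * np.outer(x, y - t)       # only wrong neurons (y != t) get adjusted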
Applications of Neural Networks
 They can perform tasks that are easy for a human but difficult
for a machine −
 Aerospace − Aircraft autopilots, aircraft fault detection.
 Automotive − Automobile guidance systems.
 Military − Weapon orientation and steering, target tracking,
object discrimination, facial recognition, signal/image
identification.
 Electronics − Code sequence prediction, IC chip layout,
chip failure analysis, machine vision, voice synthesis.
 Financial − Real estate appraisal, loan advisor, mortgage
screening, corporate bond rating, portfolio trading program,
corporate financial analysis, currency value prediction,
document readers, credit application evaluators.
Applications of Neural Networks
 Industrial − Manufacturing process control, product
design and analysis, quality inspection systems, welding
quality analysis, paper quality prediction, chemical
product design analysis, dynamic modeling of chemical
process systems, machine maintenance analysis, project
bidding, planning, and management.
 Medical − Cancer cell analysis, EEG and ECG analysis,
prosthetic design, transplant time optimizer.
 Speech − Speech recognition, speech classification,
text-to-speech conversion.
 Telecommunications − Image and data compression,
automated information services, real-time spoken
language translation.
Applications of Neural Networks
 Transportation − Truck brake system diagnosis, vehicle
scheduling, routing systems.
 Software − Pattern recognition in facial recognition, optical
character recognition, etc.
 Time Series Prediction − ANNs are used to make predictions
on stocks and natural calamities.
 Signal Processing − Neural networks can be trained to process
an audio signal and filter it appropriately in hearing aids.
 Control − ANNs are often used to make steering decisions for
physical vehicles.
 Anomaly Detection − As ANNs are expert at recognizing
patterns, they can also be trained to generate an output when
something unusual occurs that does not fit the pattern.
Thank You!