ERI SUMMER TRAINING
COMPUTERS & SYSTEMS DEPT.
Dr. Randa Elanwar
Lecture 2
Content
 Neural networks (Definition, applications, history)
 Linear problems
 Perceptron learning rule
 Neural networks types
 Feed forward learning
 Pattern learning mode
 Batch learning mode
Neural networks
 Definition
 From a practical point of view, an artificial neural network (ANN) is simply a parallel computational system consisting of many simple processing elements, connected together in a specific way in order to perform a particular task that is difficult for traditional (serial) computers.
Neural networks
 Applications
Neural networks
 Historical information
 Alexander Bain (1873) claimed that both thoughts and body activity resulted from interactions among neurons within the brain.
 For Bain, every activity led to the firing of a certain set of neurons. When activities were repeated, the connections between those neurons strengthened. According to his theory, this repetition was what led to the formation of memory. In other words, the brain learns the relation between a neuron's input and the output activity that should follow, and stores (represents) this relation as the "strength" of the connections.
Neural networks
 The structure of a biological neuron
 A biological neuron has three main components: dendrites, the soma (or cell body), and the axon.
 Dendrites receive signals from other neurons.
 The soma sums the incoming signals. When sufficient input is received, the cell fires; that is, it transmits a signal over its axon to other cells.
Neural networks
 What does this have to do with classification?
 NN structure: a large number of highly interconnected processing elements (neurons) working together. Like people, they learn from experience (by example).
 NNs learn the relationship between cause and effect, or organize large volumes of data into orderly and informative patterns.
 This means that if we have examples (samples or patterns) and can tell the network which class each one belongs to, it can learn to do the classification automatically.
Neural networks
 What we have is the input samples, in other words the feature vectors X = (x1, x2, x3) representing these samples (because samples can be images, speech, video, etc.). What we know is the relationship of these samples, i.e., to which class each sample belongs. What we don't know, and want the network to learn, is the "strength" of the connections W = (w1, w2, w3), so that it can do the task automatically later on.
Neural networks
 How are neural networks being used in solving problems?
 The problem variables are mainly: inputs, weights, and outputs.
 Examples (training data) represent a solved problem, i.e., both the inputs and outputs are known.
 Thus, by a certain learning algorithm, we can adapt/adjust the NN weights using the known inputs and outputs of the training data.
 For a new problem, we now have the inputs and the weights; therefore, we can easily get the outputs.
Neural networks
 How does this relate to the classification procedure we described before and the straight-line decision boundary?
 Simply, the weights we want to compute (the connection strengths between the NN input and output nodes) are exactly the straight-line coefficients (slope and constant) we wanted to find automatically; in other words, the equation parameters we are seeking. These types of problems are known scientifically as linear problems.
Linear problems
 The simplest type of problems are the linear problems.
 Why ‘linear’? Because we can model the problem by a straight line equation (ax+by+c=z). In other words, solving our problem amounts to finding a straight line equation.
Linear problems
 Important note: k is the number of features, which means we can work with cases where k > 2. Great! One of our problems has been solved. Still, we need to know how the decision boundary is made and how the weights are learned.
 How can we make a decision boundary?
 We can use a threshold T: classify a sample as class 1 if w1x1 + w2x2 + ... + wkxk >= T, and as class 0 otherwise
Linear problems
 which can be written as o = f(net), with net = w1x1 + w2x2 + ... + wkxk
 where f is the function acting as the decision boundary for the classifier
 Something weird!
 Where did the constant "b" go? ... It didn't go anywhere.
Linear problems
 b = -T; thus o = 1 if net + b >= 0, and o = 0 otherwise.
 f is referred to as the activation function. Its domain is the set of activation values net. The neuron sums the inputs multiplied by the connection weights (net) and fires, "o = 1", if the summation corresponding to this input is larger than some threshold value T, i.e., the neuron is activated. Otherwise, "o = 0": the summation corresponding to this input is smaller than T, i.e., the neuron is not activated.
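To make this concrete, here is a minimal Python sketch of the thresholded neuron described above; the function name step_neuron and the example numbers are ours, not from the slides:

```python
# Minimal sketch of the thresholded neuron described above (illustrative
# names, not from the slides). The threshold T is folded in as b = -T.

def step_neuron(x, w, b):
    """Fire (return 1) if the weighted sum plus bias is non-negative."""
    net = sum(wi * xi for wi, xi in zip(w, x))  # net = w1*x1 + ... + wk*xk
    return 1 if net + b >= 0 else 0

# Example: threshold T = 0.5 becomes bias b = -0.5.
print(step_neuron([1, 0], [0.5, 0.3], -0.5))  # net = 0.5, net + b = 0 -> fires 1
```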
Perceptron learning rule
 Now, how can we learn W?
 Through what is called a "learning algorithm".
 What does this mean?
 Simply, if there is a final straight line to be learned, then let's start with a random one, i.e., with random weights, and then begin shifting and rotating it (i.e., updating the weights, which are the slope and constant) each time we find samples on the wrong side of the line, until all samples on both sides belong to the right class. In other words, stop when substituting their feature vectors in the boundary equation generates the correct output sign.
Perceptron learning rule
 Learning steps (a code sketch follows this list):
1. The initial network has randomly assigned weights.
2. Learning is done by making small adjustments in the weights (i.e., the slope and shift of the straight line) to reduce the difference between the observed values (current class membership of patterns) and the predicted values (desired membership).
3. We need to repeat the update phase several times in order to achieve convergence (i.e., minimum or zero error with respect to class membership).
4. The updating process is divided into epochs (iterations).
5. Each epoch updates all the weights of the network.
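Here is a minimal sketch of these steps in Python using the perceptron (delta) update named later in the deck; the helper names and error-driven update form are our assumptions:

```python
# Sketch of the learning steps above (illustrative names). Each epoch
# passes over all samples; erroneous samples trigger a small adjustment.

def step_neuron(x, w, b):
    net = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if net + b >= 0 else 0

def train_perceptron(samples, targets, w, b, lr=1.0, max_epochs=100):
    for epoch in range(max_epochs):              # step 4: epochs
        errors = 0
        for x, t in zip(samples, targets):
            o = step_neuron(x, w, b)             # observed vs. predicted
            if o != t:                           # step 2: small adjustment
                w = [wi + lr * (t - o) * xi for wi, xi in zip(w, x)]
                b += lr * (t - o)                # bias treated as a weight
                errors += 1
        if errors == 0:                          # step 3: convergence
            break
    return w, b
```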
Perceptron learning rule
 Note that:
 We can control the weight adjustment speed, and this is what we call the "learning rate".
 The initial weights and the learning rate values determine the number of iterations needed for convergence.
Perceptron learning rule
 It's like, for example, having two glasses: the first is narrow and tall and has water in it, the second is wide and short with no water in it, and the target is to make both glasses contain the same volume of water.
 Initially, we pour some water from the tall glass into the short one, then we measure the volumes.
 If the volume in the short glass is less than in the tall one, we add (+) more water from the tall glass.
 If the volume in the short glass is more than in the tall one, we return (-) some water to the tall glass.
 And so on, until both volumes are equal and we are done.
 The target = desired output, water = weights, difference measure = error.
Perceptron learning rule
 When all samples give the right output, the perceptron has been learned (the weights are computed) and is ready to be used with any new input.
 Perceptron? What is a perceptron?
 It is simply the mathematical model of the straight line, and at the same time the simple computational unit that mimics the natural neuron. A NN is a parallel combination of such perceptrons.
Perceptron learning rule
 It consists of an input layer with a number of nodes equal to the number of feature vector elements. Each input node is connected to the output node, with the connection strength represented by the connection weight (the coefficients of the straight line). The perceptron has an output layer with a number of nodes equal to the number of outputs required.
Perceptron learning rule
 Note: the number of iterations for learning the perceptron depends on the initial weight values, and there are infinitely many solutions to the problem depending on these weights; all are correct.
 Before proceeding with the classification problem using a neural network classifier, we need to discuss some important details.
Neural networks types
 Basic model of Neural networks
Neural networks types
 1. Activation functions
 You can select your activation function f(net) to be unipolar (step), bipolar (sign), or soft-limited (sigmoid), among many other shapes, to represent the output (class membership).
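For illustration, minimal Python versions of these activation shapes might look as follows (the function names are ours):

```python
import math

# Illustrative activation functions (names are ours, not from the slides).
def unipolar_step(net, T=0.0):
    return 1 if net >= T else 0           # hard limiter, outputs in {0, 1}

def bipolar_sign(net):
    return 1 if net >= 0 else -1          # sign function, outputs in {-1, +1}

def sigmoid(net):
    return 1.0 / (1.0 + math.exp(-net))   # soft limiter, outputs in (0, 1)
```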
Neural networks types
 2. Interconnections
 The network connection we have used so far is the "feed forward" connection, and we used a "single perceptron", i.e., one activation function and one output node. But actually this is not a network! A network has a layer of multiple output nodes and looks like a number of interconnected perceptrons.
Neural networks types
 The bad news is that we have to change the notation: we now have wi,j instead of wi, where i represents the connection with input node i and j the connection with output node j.
 The good news is that we can solve a multiple-class problem: since each output node can give a 0 or 1 output, with n nodes at the output layer we can classify up to 2^n distinct classes (e.g., 3 output nodes can encode up to 8 classes). A small sketch follows.
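Here is a sketch of this notation in Python, assuming (as we do throughout these sketches) a step activation per output node; the helper name layer_output is ours:

```python
# Single layer with multiple output nodes: W[i][j] connects input node i
# to output node j. With n output nodes, the binary output vector can
# encode up to 2**n distinct classes. Illustrative names.

def layer_output(x, W, b, T=0.0):
    """x: k inputs; W: k-by-n weight matrix; b: n biases."""
    n = len(b)
    outs = []
    for j in range(n):
        net = sum(x[i] * W[i][j] for i in range(len(x))) + b[j]
        outs.append(1 if net >= T else 0)
    return outs   # e.g., [1, 0] is one of up to 4 codes for n = 2
```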
Neural networks types
 Not all connections are in the forward direction, and not all networks have only two layers (input and output). These different structures enable the NN to solve problems with different characteristics.
Neural networks types
 Examples:
 Multilayer feed forward network
Neural networks types
 Feedback network
Neural networks types
 In general there are many types of NN, such as:
 Mapping networks:
 Back-propagation neural network
 Self-organizing map (SOM)
 Counter propagation network
 Spatiotemporal Network
 Space Displacement Neural Network (SDNN)
 Time Delay Neural Network (TDNN)
Neural networks types
 Stochastic Networks
 Boltzmann machine
 Neocognitron network
 Other types: Convolutional Network (CN), Radial Basis Function (RBF), Quantum Neural Network (QNN), Hopfield Neural Network (HNN), Recurrent NN, etc.
 Each has a specific structure, applications, and learning rules, but they need much more time to be discussed in detail.
Neural networks types
 3. Learning rules
 These differ according to the NN type: for example, the perceptron (delta) rule for the feed forward network we illustrated. There is another rule, called the back propagation rule, for the MLP NN, and so on.
 There are other rules like Hebbian learning, winner-take-all (competitive learning), and others.
Feed forward learning
 Learning in feed forward NN
Feed forward learning
 Learning rate: (1) used to control the amount of weight adjustment at each step of training; (2) ranges from 0 to 1; (3) determines the rate of learning at each time step.
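A standard way to write the weight update used by the perceptron (delta) rule, consistent with the worked examples later in the deck, with learning rate $\eta$, desired output $t$, actual output $o$, and input $x_i$:

$$\Delta w_i = \eta \,(t - o)\, x_i, \qquad w_i \leftarrow w_i + \Delta w_i$$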
Feed forward learning
 Node biases
 A node's output is a weighted function of its inputs.
 What is a bias?
 It represents the constant in the straight-line equation ax + by + c = 0, or equivalently the threshold T in O = 1 if net >= T and O = 0 otherwise.
 How can we learn the bias value?
 Answer: treat it like just another weight (see the sketch below).
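A minimal sketch of "treat it like just another weight", assuming we append a constant input of 1 to each feature vector (this common trick and the names are ours):

```python
# Treat the bias as an extra weight by appending a constant input of 1.

def with_bias_input(x):
    return list(x) + [1]                  # (x1, ..., xk) -> (x1, ..., xk, 1)

def neuron_output(x, w_ext):
    """w_ext has k+1 entries; the last one plays the role of b = -T."""
    net = sum(wi * xi for wi, xi in zip(w_ext, with_bias_input(x)))
    return 1 if net >= 0 else 0

# The same update rule then learns the bias along with the other weights.
```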
Feed forward learning
 We need to see a real example of how things work.
 Learning modes
 NN learning has two modes, "batch" and "pattern". We will see examples of both.
 Assume we have a problem like a ladies-only train carriage, so men are not allowed to take it. Men's tickets are blue and ladies' tickets are yellow. Passengers enter in couples, so you have 4 possibilities: {2 blue tickets, blue and yellow, yellow and blue, 2 yellow}.
 We can solve the problem using a NN that takes the ticket color as a feature; let blue = 0, yellow = 1. So we want to implement a NN that acts as the logical AND function. In other words, whenever the input has a '0' the output is '0' and the doors keep closed; otherwise the output is '1' and the doors open.
Pattern learning mode
 Assume the logical AND function, with initial weights 0.5 and 0.3, bias = 0.5, and a step activation function at t = 0.5. The learning rate = 1.
Pattern learning mode
 The NN will begin substituting the given examples, observing the output, and updating the weights after each erroneous sample; a sketch of these iterations follows (the worked steps appear as images on the original slides).
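Here is a minimal pattern-mode (sample-by-sample) sketch under the stated settings. How the slides sign the bias relative to the threshold is not shown here, so this sketch simply adds the bias to net and compares against t, learning the bias like any other weight:

```python
# Pattern (online) learning sketch for the AND example, using the stated
# settings: w = (0.5, 0.3), bias = 0.5, threshold t = 0.5, learning rate 1.

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]                       # logical AND

w, bias, t, lr = [0.5, 0.3], 0.5, 0.5, 1.0

for epoch in range(20):
    errors = 0
    for x, target in zip(samples, targets):
        net = w[0] * x[0] + w[1] * x[1] + bias
        o = 1 if net >= t else 0
        if o != target:                      # pattern mode: update immediately
            w = [wi + lr * (target - o) * xi for wi, xi in zip(w, x)]
            bias += lr * (target - o)
            errors += 1
    if errors == 0:
        print("converged:", w, bias)         # e.g. w = [2.5, 1.3], bias = -2.5
        break
```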
Batch learning mode
 Now we will see the same example solved in "batch" mode, where we update after trying all the input samples together, using all the erroneous ones in one combined update (a sketch follows).
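The batch-mode worked slides are likewise images; under the same assumptions as the pattern-mode sketch, a batch version accumulates the corrections from all erroneous samples and applies them in one update per epoch:

```python
# Batch learning sketch for the same AND example: weights are frozen
# during each pass; one combined update is applied per epoch.

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 0, 0, 1]                        # logical AND

w, bias, t, lr = [0.5, 0.3], 0.5, 0.5, 1.0

for epoch in range(50):
    dw, db, errors = [0.0, 0.0], 0.0, 0
    for x, target in zip(samples, targets):   # try all samples first
        net = w[0] * x[0] + w[1] * x[1] + bias
        o = 1 if net >= t else 0
        if o != target:
            dw = [dwi + lr * (target - o) * xi for dwi, xi in zip(dw, x)]
            db += lr * (target - o)
            errors += 1
    if errors == 0:
        print("converged:", w, bias)          # e.g. w = [1.5, 1.3], bias = -1.5
        break
    w = [wi + dwi for wi, dwi in zip(w, dw)]  # one combined update per epoch
    bias += db
```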
