Neural Networks
Dr. Randa Elanwar
Lecture 5
Lecture Content
• Linearly separable functions: 2-class problem implementation
– Learning laws: Perceptron learning rule
– Pattern mode solution method
– Batch mode solution method
Linearly Separable Functions
• Example: logical OR, with initial weights w1 = 0.5, w2 = 0.3, bias weight b = -0.5, and a binary step activation function with threshold t = 0.5. The learning rate η = 1.

Network: inputs x1 and x2 feed a single neuron through w1 and w2, with the bias acting as a third weight, so the net input is

  y_in = x1.w1 + x2.w2 + b

Activation function (binary step, t = 0.5):

  f(y_in) = 1 if y_in >= t
  f(y_in) = 0 otherwise
Solving Linearly Separable Functions (Pattern mode)
• Given: Y = f(W.X + b), with the OR truth table:

  x1 x2 | y
   0  0 | 0
   0  1 | 1
   1  0 | 1
   1  1 | 1

• Since we consider the bias as an additional weight, the weight vector is 1x3, so we have to extend the input vectors x1, x2, x3 and x4 from 2x1 to 3x1 (appending a constant 1) to perform the multiplication:

  x1 = [0 0 1]T, x2 = [0 1 1]T, x3 = [1 0 1]T, x4 = [1 1 1]T

• Initial weight vector: W(0) = [0.5 0.3 -0.5]
Solving Linearly Separable Functions (Pattern mode)
• Update weight vector for iteration 1:

  W(0).x1 = [0.5 0.3 -0.5].[0 0 1]T = -0.5, y = 0  → OK
  W(0).x2 = [0.5 0.3 -0.5].[0 1 1]T = -0.2, y = 0  → Wrong

  W(1)T = W(0)T + η.(y_des - y).x2 = [0.5 1.3 0.5]T   (y_des is the desired output)

  W(1).x3 = [0.5 1.3 0.5].[1 0 1]T = 1, y = 1      → OK
  W(1).x4 = [0.5 1.3 0.5].[1 1 1]T = 2.3, y = 1    → OK
  W(1).x1 = [0.5 1.3 0.5].[0 0 1]T = 0.5, y = 1    → Wrong
Solving Linearly Separable Functions (Pattern mode)
• Update weight vector for iteration 2:

  W(2)T = W(1)T + η.(y_des - y).x1 = [0.5 1.3 -0.5]T

  W(2).x2 = [0.5 1.3 -0.5].[0 1 1]T = 0.8, y = 1  → OK
  W(2).x3 = [0.5 1.3 -0.5].[1 0 1]T = 0, y = 0    → Wrong

• Update weight vector for iteration 3:

  W(3)T = W(2)T + η.(y_des - y).x3 = [1.5 1.3 0.5]T

  W(3).x4 = [1.5 1.3 0.5].[1 1 1]T = 3.3, y = 1   → OK
  W(3).x1 = [1.5 1.3 0.5].[0 0 1]T = 0.5, y = 1   → Wrong
Solving Linearly Separable Functions (Pattern mode)
• Update weight vector for iteration 4:

  W(4)T = W(3)T + η.(y_des - y).x1 = [1.5 1.3 -0.5]T

  W(4).x1 = [1.5 1.3 -0.5].[0 0 1]T = -0.5, y = 0 → OK
  W(4).x2 = [1.5 1.3 -0.5].[0 1 1]T = 0.8, y = 1  → OK
  W(4).x3 = [1.5 1.3 -0.5].[1 0 1]T = 1, y = 1    → OK
  W(4).x4 = [1.5 1.3 -0.5].[1 1 1]T = 2.3, y = 1  → OK

• The weight learning has converged after 4 iterations (see the training sketch below).
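The whole pattern-mode run can be reproduced with a short loop; this is an illustrative sketch, not the lecture's own code, and train_pattern_mode is a hypothetical helper name:

```python
import numpy as np

X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y_des = np.array([0, 1, 1, 1], dtype=float)

def step(y_in, t=0.5):
    """Binary step activation: 1 if y_in >= t, otherwise 0."""
    return 1.0 if y_in >= t else 0.0

def train_pattern_mode(X, y_des, w0, eta=1.0, max_epochs=20):
    """Perceptron rule, pattern (online) mode: apply the update
    Δw = η.(y_des - y).x right after each misclassified pattern."""
    w = np.array(w0, dtype=float)
    for epoch in range(max_epochs):
        errors = 0
        for x, d in zip(X, y_des):
            y = step(w @ x)
            if y != d:
                w += eta * (d - y) * x  # immediate per-pattern update
                errors += 1
        if errors == 0:                 # a full error-free sweep: converged
            return w, epoch
    return w, max_epochs

w, _ = train_pattern_mode(X, y_des, w0=[0.5, 0.3, -0.5])
print(w)  # [ 1.5  1.3 -0.5], the same final weights as the slides
```

The loop sweeps the patterns in a fixed order every epoch, so the order of the checks differs slightly from the slides, but it performs the same four weight updates and reaches the same final weight vector.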
Solving Linearly Separable Functions (Batch mode)
• Update weight vector for iteration 1: add Δw for all misclassified inputs together in 1 step.

  W(0).x1 = [0.5 0.3 -0.5].[0 0 1]T = -0.5, y = 0 → OK
  W(0).x2 = [0.5 0.3 -0.5].[0 1 1]T = -0.2, y = 0 → Wrong
  W(0).x3 = [0.5 0.3 -0.5].[1 0 1]T = 0, y = 0    → Wrong
  W(0).x4 = [0.5 0.3 -0.5].[1 1 1]T = 0.3, y = 0  → Wrong

  W(1)T = W(0)T + η.(y_des - y).x2 + η.(y_des - y).x3 + η.(y_des - y).x4
        = [0.5 0.3 -0.5]T + [0 1 1]T + [1 0 1]T + [1 1 1]T
        = [2.5 2.3 2.5]T
Solving Linearly Separable Functions (Batch mode)
• Update weight vector for iteration 2: add Δw for all misclassified inputs together in 1 step.

  W(1).x1 = [2.5 2.3 2.5].[0 0 1]T = 2.5, y = 1  → Wrong
  W(1).x2 = [2.5 2.3 2.5].[0 1 1]T = 4.8, y = 1  → OK
  W(1).x3 = [2.5 2.3 2.5].[1 0 1]T = 5, y = 1    → OK
  W(1).x4 = [2.5 2.3 2.5].[1 1 1]T = 7.3, y = 1  → OK

  W(2)T = W(1)T + η.(y_des - y).x1 = [2.5 2.3 1.5]T
Solving Linearly Separable Functions (Batch mode)
• Update weight vector for iteration 3: add Δw for all misclassified inputs together in 1 step.

  W(2).x1 = [2.5 2.3 1.5].[0 0 1]T = 1.5, y = 1  → Wrong
  W(2).x2 = [2.5 2.3 1.5].[0 1 1]T = 3.8, y = 1  → OK
  W(2).x3 = [2.5 2.3 1.5].[1 0 1]T = 4, y = 1    → OK
  W(2).x4 = [2.5 2.3 1.5].[1 1 1]T = 6.3, y = 1  → OK

  W(3)T = W(2)T + η.(y_des - y).x1 = [2.5 2.3 0.5]T
Solving Linearly Separable Functions (Batch mode)
• Update weight vector for iteration 4: add Δw for all misclassified inputs together in 1 step.

  W(3).x1 = [2.5 2.3 0.5].[0 0 1]T = 0.5, y = 1  → Wrong
  W(3).x2 = [2.5 2.3 0.5].[0 1 1]T = 2.8, y = 1  → OK
  W(3).x3 = [2.5 2.3 0.5].[1 0 1]T = 3, y = 1    → OK
  W(3).x4 = [2.5 2.3 0.5].[1 1 1]T = 5.3, y = 1  → OK

  W(4)T = W(3)T + η.(y_des - y).x1 = [2.5 2.3 -0.5]T
Solving Linearly Separable Functions (Batch mode)
• The weight learning has also converged after 4 iterations, but with different final values:

  W(4).x1 = [2.5 2.3 -0.5].[0 0 1]T = -0.5, y = 0 → OK
  W(4).x2 = [2.5 2.3 -0.5].[0 1 1]T = 1.8, y = 1  → OK
  W(4).x3 = [2.5 2.3 -0.5].[1 0 1]T = 2, y = 1    → OK
  W(4).x4 = [2.5 2.3 -0.5].[1 1 1]T = 4.3, y = 1  → OK
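A matching batch-mode sketch (again illustrative; train_batch_mode is a hypothetical name) accumulates the updates of all misclassified patterns and applies them as one step per iteration:

```python
import numpy as np

X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
y_des = np.array([0, 1, 1, 1], dtype=float)

def train_batch_mode(X, y_des, w0, eta=1.0, t=0.5, max_epochs=20):
    """Perceptron rule, batch mode: sum Δw = η.(y_des - y).x over all
    patterns and apply the total as a single update per iteration."""
    w = np.array(w0, dtype=float)
    for epoch in range(max_epochs):
        y = (X @ w >= t).astype(float)  # outputs for all patterns at once
        if np.array_equal(y, y_des):    # nothing misclassified: converged
            return w, epoch
        w += eta * ((y_des - y) @ X)    # summed update (zero for correct ones)
    return w, max_epochs

w, iters = train_batch_mode(X, y_des, w0=[0.5, 0.3, -0.5])
print(w, iters)  # [ 2.5  2.3 -0.5] 4: same result as the batch-mode slides
```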
Linearly Separable Functions
• Example: Consider linearly separable patterns where class C1 consists of the two patterns [1 0]T and [1 1]T and C2 consists of the two patterns [0 0]T and [0 1]T. Use the perceptron algorithm with η = 1 and w(0) = [0 1 -1/2]T to design the line separating the two classes.

(Figure: the four patterns x1–x4 plotted in the (X1, X2) plane, with the initial and final separating lines.)

• It is very important to graph the problem to define the initial line and assign the direction of positive and negative.
• The initial weights give the line X2 - 1/2 = 0, which intersects the vertical axis at (0, 1/2) and is parallel to the horizontal axis.
• When X2 > 1/2 we get a +ve value, thus the positive direction is above the line.
Linearly Separable Functions
• Initially x2 and x3 are on the correct side, while x1 and x4 are on the wrong side.
• Thus we require x1 and x2 to end up on the +ve side and x3 and x4 on the -ve side.
• Note that sometimes you are not given the activation function f. In such a case you can compute outputs and update weights by polarity (sign) instead of the activation function value:

  If W.X > 0 and it is wrong: Δw = η.(-1).X
  If W.X < 0 and it is wrong: Δw = η.(+1).X
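A minimal sketch of this sign-based rule (illustrative only; train_sign_rule is a hypothetical helper, and targets of ±1 encode the two classes):

```python
import numpy as np

def train_sign_rule(X, targets, w0, eta=1.0, max_epochs=50):
    """Perceptron with the polarity (sign) rule: a pattern is correct when
    W.X has the sign required by its class; otherwise add η.(±1).X."""
    w = np.array(w0, dtype=float)
    for epoch in range(max_epochs):
        errors = 0
        for x, d in zip(X, targets):    # d = +1 for C1, -1 for C2
            if d * (w @ x) <= 0:        # wrong side of the line (or on it)
                w += eta * d * x        # Δw = η.(+1).X or η.(-1).X
                errors += 1
        if errors == 0:
            return w, epoch
    return w, max_epochs

# Augmented patterns of the example: x1, x2 in C1; x3, x4 in C2.
X = np.array([[1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 1, 1]], dtype=float)
targets = np.array([+1, +1, -1, -1], dtype=float)
w, _ = train_sign_rule(X, targets, w0=[0, 1, -0.5])
print(w)  # [ 2.  0. -0.5], the pattern-mode result derived on the next slides
```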
Solving Linearly Separable Functions (Pattern mode)
• Again, since the weight vector is 1x3, we have to extend the input vectors x1, x2, x3 and x4 from 2x1 to 3x1 to perform the multiplication:

  x1 = [1 0 1]T, x2 = [1 1 1]T, x3 = [0 0 1]T, x4 = [0 1 1]T

• Update weight vector for iteration 1:

  W(0).x1 = [0 1 -0.5].[1 0 1]T = -0.5  → -ve, and it has to be +ve
  W(1)T = W(0)T + Δw = W(0)T + η.(+1).x1 = [1 1 0.5]T
  W(1).x2 = [1 1 0.5].[1 1 1]T = 2.5    → OK
  W(1).x3 = [1 1 0.5].[0 0 1]T = 0.5    → +ve, and it has to be -ve
Solving Linearly Separable Functions (Pattern mode)
• Update weight vector for iteration 2:

  W(2)T = W(1)T + η.(-1).x3 = [1 1 -0.5]T
  W(2).x4 = [1 1 -0.5].[0 1 1]T = 0.5   → +ve, and it has to be -ve

• Update weight vector for iteration 3:

  W(3)T = W(2)T + η.(-1).x4 = [1 0 -1.5]T
  W(3).x1 = [1 0 -1.5].[1 0 1]T = -0.5  → -ve, and it has to be +ve

• Update weight vector for iteration 4:

  W(4)T = W(3)T + η.(+1).x1 = [2 0 -0.5]T
  W(4).x2 = [2 0 -0.5].[1 1 1]T = 1.5   → OK
Solving Linearly Separable Functions (Pattern mode)
• Remaining checks:

  W(4).x3 = [2 0 -0.5].[0 0 1]T = -0.5  → OK
  W(4).x4 = [2 0 -0.5].[0 1 1]T = -0.5  → OK
  W(4).x1 = [2 0 -0.5].[1 0 1]T = 1.5   → OK

• The new straight line equation is 2X1 - 1/2 = 0: a vertical line intersecting the horizontal axis at X1 = 1/4 and parallel to the vertical axis.
• The solution has converged in 4 iterations.
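Reading the separating line off an augmented weight vector is mechanical; a tiny illustrative helper (line_equation is a made-up name):

```python
def line_equation(w):
    """The line a.X1 + b.X2 + c = 0 implied by augmented weights w = [a, b, c]."""
    a, b, c = w
    return f"{a}*X1 + {b}*X2 + {c} = 0"

print(line_equation([2, 0, -0.5]))  # 2*X1 + 0*X2 + -0.5 = 0, i.e. X1 = 1/4
```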
Solving Linearly Separable Functions (Batch mode)
• Again, since the weight vector is 1x3, we have to extend the input vectors x1, x2, x3 and x4 from 2x1 to 3x1 to perform the multiplication.
• Update weight vector for iteration 1. Add Δw for all misclassified inputs together in 1 step:

  W(0).x1 = [0 1 -0.5].[1 0 1]T = -0.5  → -ve, and it has to be +ve
  W(0).x2 = [0 1 -0.5].[1 1 1]T = 0.5   → OK
  W(0).x3 = [0 1 -0.5].[0 0 1]T = -0.5  → OK
  W(0).x4 = [0 1 -0.5].[0 1 1]T = 0.5   → +ve, and it has to be -ve

  W(1)T = W(0)T + η.(+1).x1 + η.(-1).x4
        = [0 1 -0.5]T + [1 0 1]T - [0 1 1]T
        = [1 0 -0.5]T
Solving Linearly Separable Functions (Batch mode)
• Checking all patterns with W(1):

  W(1).x1 = [1 0 -0.5].[1 0 1]T = 0.5   → OK
  W(1).x2 = [1 0 -0.5].[1 1 1]T = 0.5   → OK
  W(1).x3 = [1 0 -0.5].[0 0 1]T = -0.5  → OK
  W(1).x4 = [1 0 -0.5].[0 1 1]T = -0.5  → OK

• The new straight line equation is X1 - 1/2 = 0: a vertical line intersecting the horizontal axis at X1 = 1/2 and parallel to the vertical axis.
• The solution has converged in 1 iteration.
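The same run in a batch-mode sketch (illustrative; train_sign_rule_batch is a hypothetical name, self-contained for convenience):

```python
import numpy as np

X = np.array([[1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 1, 1]], dtype=float)
targets = np.array([+1, +1, -1, -1], dtype=float)

def train_sign_rule_batch(X, targets, w0, eta=1.0, max_epochs=50):
    """Batch polarity rule: sum η.(±1).X over every misclassified
    pattern, then apply the total as a single update."""
    w = np.array(w0, dtype=float)
    for epoch in range(max_epochs):
        wrong = targets * (X @ w) <= 0          # mask of misclassified patterns
        if not wrong.any():
            return w, epoch
        w += eta * (targets[wrong] @ X[wrong])  # one summed update
    return w, max_epochs

w, iters = train_sign_rule_batch(X, targets, w0=[0, 1, -0.5])
print(w, iters)  # [ 1.  0. -0.5] 1: converged after a single batch update
```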
Non linear problems
• XOR problem
• There is no way to draw a single line separating the positive from the negative examples:

  Input1 Input2 | Output
     0      0   |   0
     0      1   |   1
     1      0   |   1
     1      1   |   0
Non linear problems
• XOR problem
• The only way to separate the positive from the negative examples is to draw 2 lines (i.e., we need 2 straight line equations), or a nonlinear region such as a.x^2 + b.y^2 = c that captures one type only.
Non linear problems
• To implement the nonlinearity we need to insert one or more extra layers of nodes between the input layer and the output layer (hidden layers), as sketched below.
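As an illustrative sketch of why a hidden layer suffices (not from the lecture; the weights are hand-picked, not learned), one hidden unit can compute OR, another NAND, and the output unit can AND them together, which realizes XOR:

```python
import numpy as np

def step(z):
    """Threshold at 0; each unit's bias is folded into its weights."""
    return (np.asarray(z) >= 0).astype(float)

# Hand-picked (not learned) weights on augmented vectors [x1, x2, 1]:
W_hidden = np.array([[ 1.0,  1.0, -0.5],   # hidden unit 1: OR(x1, x2)
                     [-1.0, -1.0,  1.5]])  # hidden unit 2: NAND(x1, x2)
w_out = np.array([1.0, 1.0, -1.5])         # output unit: AND(h1, h2)

def xor_net(x1, x2):
    x = np.array([x1, x2, 1.0])            # augmented input
    h = step(W_hidden @ x)                 # hidden layer outputs
    return step(w_out @ np.append(h, 1.0)) # augmented hidden vector

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, int(xor_net(a, b)))        # reproduces the XOR truth table
```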