Artificial Neural Networks (ANNs)
Step-By-Step Training & Testing Example
MENOUFIA UNIVERSITY
FACULTY OF COMPUTERS AND INFORMATION
ALL DEPARTMENTS
ARTIFICIAL INTELLIGENCE
Ahmed Fawzy Gad
ahmed.fawzy@ci.menofia.edu.eg
Neural Networks & Classification
Linear Classifiers
A linear classifier separates the classes with a single straight line.

Complex Data: Not Solved Linearly
More complex data cannot be separated by one straight line.

Nonlinear Classifiers: Training
Training a nonlinear classifier, such as a neural network, can separate such data.
Classification Example

R (RED)   G (GREEN)   B (BLUE)   Class
255       0           0          RED
248       80          68         RED
0         0           255        BLUE
67        15          210        BLUE
Neural Networks
A neural network maps inputs to outputs through layers: Input, Hidden, and Output. For this example, a simple network with just an input and an output layer is sufficient.
Input Layer
Each sample has three inputs (R, G, B), so the input layer holds three values: X = (R, G, B).
Output Layer
A single output Yj gives the predicted class label (RED/BLUE).
Weights
Each input is multiplied by a weight Wi: here W1, W2, W3.
Activation Function
The activation function is the component that produces the output Yj from the weighted inputs. It has inputs and outputs.

Activation Function Inputs
The input to the activation function is s, the sum of products (SOP) of the inputs and weights:
s = SOP(Xi, Wi) = Σ (i=1 to m) Xi*Wi
where Xi = Inputs and Wi = Weights. For the three inputs here:
s = (X1*W1 + X2*W2 + X3*W3)
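As a minimal sketch, the sum of products can be written in a few lines of Python (the `sop` helper name is illustrative, not from any library, and the weights below are arbitrary example values):

```python
# Sum of products (SOP) of inputs and weights: s = X1*W1 + X2*W2 + X3*W3
def sop(x, w):
    return sum(xi * wi for xi, wi in zip(x, w))

# Illustrative values: the RED sample (255, 0, 0) with example weights
print(sop([255, 0, 0], [-2, 1, 6.2]))  # 255*-2 + 0*1 + 0*6.2 = -510.0
```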
Activation Function Outputs
The output of the activation function F(s) is the predicted class label Yj.
Activation Functions
Common activation functions: Piecewise, Linear, Sigmoid, Signum.

Which activation function to use?
The activation function must produce as many distinct outputs as there are class labels. This problem has TWO class labels (RED and BLUE), so we need one that gives TWO outputs: the Signum function.
Activation Function
With signum selected, the network output is Yj = F(s) = sgn(s).
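The signum activation can be sketched directly from its definition, sgn(s) = +1 for s ≥ 0 and −1 for s < 0:

```python
# Signum activation: two outputs, matching the two class labels
def sgn(s):
    return 1 if s >= 0 else -1

print(sgn(-511), sgn(4.6))  # -1 1
```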
Bias
A bias is treated as an extra input X0 = +1 with its own weight W0. The SOP becomes:
s = (X0*W0 + X1*W1 + X2*W2 + X3*W3)
Substituting X0 = +1:
s = (W0 + X1*W1 + X2*W2 + X3*W3)
Bias Importance
The bias plays the same role as the y-intercept b in the line equation y = x + b. With b = 0 the line is forced to pass through the origin; b = +v shifts it up and b = -v shifts it down, so the intercept lets the line move freely to fit the data. The same concept applies to the neuron's bias:
s = Σ (i=1 to m) Xi*Wi + BIAS
Learning Rate
The learning rate η controls the size of each weight update, where 0 ≤ η ≤ 1.
Summary of Parameters
• Inputs: X(n) = (X0, X1, X2, X3)
• Weights: W(n) = (W0, W1, W2, W3)
• Bias b: folded into W0 with X0 = +1
• Sum Of Products (SOP): s = (X0*W0 + X1*W1 + X2*W2 + X3*W3)
• Activation Function: sgn
• Outputs: Yj
• Learning Rate: η, with 0 ≤ η ≤ 1
Other Parameters
• Step n: n = 0, 1, 2, …
• Desired Output dj: each sample's class is encoded as RED = -1 and BLUE = +1:
d(n) = -1, if x(n) belongs to C1 (RED)
d(n) = +1, if x(n) belongs to C2 (BLUE)
Neural Networks Training Steps
1. Weights Initialization
2. Inputs Application
3. Sum of Inputs-Weights Products
4. Activation Function Response Calculation
5. Weights Adaptation
6. Back to Step 2
Regarding 5th Step: Weights Adaptation
• If the predicted output Y is not the same as the desired output d, then the weights are adapted according to the following equation:
W(n+1) = W(n) + η*[d(n) - Y(n)]*X(n)
where W(n) = [b(n), W1(n), W2(n), W3(n), …, Wm(n)]
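The adaptation rule can be sketched as an element-wise update (the `adapt` name is illustrative). The call below reproduces the update that occurs later in the worked example, where W(1) = (-1, -2, 1, 6.2) misclassifies X(1) = (+1, 248, 80, 68):

```python
# W(n+1) = W(n) + eta * (d(n) - Y(n)) * X(n), applied element-wise
def adapt(w, x, d, y, eta):
    return [wi + eta * (d - y) * xi for wi, xi in zip(w, x)]

w2 = adapt(w=[-1, -2, 1, 6.2], x=[1, 248, 80, 68], d=-1, y=1, eta=0.001)
print(w2)  # approximately [-1.002, -2.496, 0.84, 6.064]
```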
Neural Networks Training Example: Step n=0
• In each step in the solution, the parameters of the neural network must be known.
• Parameters of step n=0:
η = .001
X(n) = X(0) = (+1, 255, 0, 0)
W(n) = W(0) = (-1, -2, 1, 6.2)
d(n) = d(0) = -1
SOP:
s = (X0*W0 + X1*W1 + X2*W2 + X3*W3) = +1*-1 + 255*-2 + 0*1 + 0*6.2 = -511
Output, using sgn(s) = +1 if s ≥ 0, -1 if s < 0:
Y(n) = Y(0) = SGN(-511) = -1 (RED)
Predicted vs. Desired:
Y(0) = -1 = d(0), so the weights are correct; no adaptation.
Training Example: Step n=1
• Parameters of step n=1:
η = .001
X(n) = X(1) = (+1, 248, 80, 68)
W(n) = W(1) = W(0) = (-1, -2, 1, 6.2)
d(n) = d(1) = -1
SOP:
s = +1*-1 + 248*-2 + 80*1 + 68*6.2 = 4.6
Output:
Y(n) = Y(1) = SGN(4.6) = +1 (BLUE)
Predicted vs. Desired:
Y(1) = +1 ≠ d(1) = -1, so the weights are incorrect; adaptation is required.
Weights Adaptation
• According to
W(n+1) = W(n) + η*[d(n) - Y(n)]*X(n)
• Where n = 1:
W(2) = W(1) + η*[d(1) - Y(1)]*X(1)
     = (-1, -2, 1, 6.2) + .001*[-1 - (+1)]*(+1, 248, 80, 68)
     = (-1, -2, 1, 6.2) + (-.002)*(+1, 248, 80, 68)
     = (-1, -2, 1, 6.2) + (-.002, -.496, -.16, -.136)
     = (-1.002, -2.496, .84, 6.064)
Training Example: Step n=2
• Parameters of step n=2:
η = .001
X(n) = X(2) = (+1, 0, 0, 255)
W(n) = W(2) = (-1.002, -2.496, .84, 6.064)
d(n) = d(2) = +1
SOP:
s = +1*-1.002 + 0*-2.496 + 0*.84 + 255*6.064 = 1545.318
Output:
Y(n) = Y(2) = SGN(1545.318) = +1 (BLUE)
Predicted vs. Desired:
Y(2) = +1 = d(2), so the weights are correct; no adaptation.
Training Example: Step n=3
• Parameters of step n=3:
η = .001
X(n) = X(3) = (+1, 67, 15, 210)
W(n) = W(3) = W(2) = (-1.002, -2.496, .84, 6.064)
d(n) = d(3) = +1
SOP:
s = +1*-1.002 + 67*-2.496 + 15*.84 + 210*6.064 = 1117.806
Output:
Y(n) = Y(3) = SGN(1117.806) = +1 (BLUE)
Predicted vs. Desired:
Y(3) = +1 = d(3), so the weights are correct; no adaptation.
Training Example: Step n=4
• Parameters of step n=4:
η = .001
X(n) = X(4) = (+1, 255, 0, 0)
W(n) = W(4) = W(3) = (-1.002, -2.496, .84, 6.064)
d(n) = d(4) = -1
SOP:
s = +1*-1.002 + 255*-2.496 + 0*.84 + 0*6.064 = -637.482
Output:
Y(n) = Y(4) = SGN(-637.482) = -1 (RED)
Predicted vs. Desired:
Y(4) = -1 = d(4), so the weights are correct; no adaptation.
Training Example: Step n=5
• Parameters of step n=5:
η = .001
X(n) = X(5) = (+1, 248, 80, 68)
W(n) = W(5) = W(4) = (-1.002, -2.496, .84, 6.064)
d(n) = d(5) = -1
SOP:
s = +1*-1.002 + 248*-2.496 + 80*.84 + 68*6.064 = -140.458
Output:
Y(n) = Y(5) = SGN(-140.458) = -1 (RED)
Predicted vs. Desired:
Y(5) = -1 = d(5), so the weights are correct; no adaptation.
Correct Weights
• After testing the weights across all samples, every result was correct, so we can conclude that the current weights are the correct ones for the trained neural network.
• After the training phase comes testing the neural network.
• What is the class of an unknown color with values R=150, G=100, B=180?
Testing Trained Neural Network
(R, G, B) = (150, 100, 180)
Trained neural network parameters:
η = .001
W = (-1.002, -2.496, .84, 6.064)
SOP:
s = +1*-1.002 + 150*-2.496 + 100*.84 + 180*6.064 = 800.118
Output:
Y = SGN(800.118) = +1 (BLUE)
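The whole procedure (initialize the weights, apply the inputs, compute the SOP, apply sgn, adapt when wrong, and repeat until a full pass needs no change) can be collected into one runnable sketch. The sample order, initial weights, and η follow the worked example above; the helper names are ours, not from any library:

```python
def sgn(s):
    # Signum activation: +1 if s >= 0, else -1
    return 1 if s >= 0 else -1

def sop(x, w):
    # Sum of products of inputs and weights
    return sum(xi * wi for xi, wi in zip(x, w))

def train(samples, w, eta):
    # Cycle over the samples, adapting the weights whenever the prediction
    # differs from the desired output, until one full pass needs no change.
    correct, n = 0, 0
    while correct < len(samples):
        x, d = samples[n % len(samples)]
        y = sgn(sop(x, w))
        if y == d:
            correct += 1
        else:
            w = [wi + eta * (d - y) * xi for wi, xi in zip(w, x)]
            correct = 0
        n += 1
    return w

# Each sample is (X0=+1 bias input, R, G, B) with desired output RED = -1, BLUE = +1
samples = [([1, 255, 0, 0], -1),
           ([1, 248, 80, 68], -1),
           ([1, 0, 0, 255], +1),
           ([1, 67, 15, 210], +1)]

w = train(samples, w=[-1, -2, 1, 6.2], eta=0.001)
print(w)                                # approximately [-1.002, -2.496, 0.84, 6.064]
print(sgn(sop([1, 150, 100, 180], w)))  # 1 -> BLUE
```

A single weight update (at step n=1) is enough to classify every training sample correctly, matching the six steps walked through above.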

More Related Content

PDF
Introduction to Artificial Neural Networks (ANNs) - Step-by-Step Training & T...
PDF
Artificial Neural Networks (ANNs) - XOR - Step-By-Step
PDF
Brief Introduction to Deep Learning + Solving XOR using ANNs
PPTX
Loss Function.pptx
PDF
Backpropagation: Understanding How to Update ANNs Weights Step-by-Step
PPTX
Neural network
PPTX
03 Single layer Perception Classifier
Introduction to Artificial Neural Networks (ANNs) - Step-by-Step Training & T...
Artificial Neural Networks (ANNs) - XOR - Step-By-Step
Brief Introduction to Deep Learning + Solving XOR using ANNs
Loss Function.pptx
Backpropagation: Understanding How to Update ANNs Weights Step-by-Step
Neural network
03 Single layer Perception Classifier

What's hot (20)

PPT
Perceptron
PDF
Introduction to Neural Networks
PPTX
Recurrent Neural Network (RNN) | RNN LSTM Tutorial | Deep Learning Course | S...
PPTX
Mc Culloch Pitts Neuron
PPTX
Artificial Intelligence Course | AI Tutorial For Beginners | Artificial Intel...
PPTX
Neural network
PPTX
Batch normalization presentation
PDF
Neural Networks: Multilayer Perceptron
PPTX
Introduction to artificial neural network
PDF
Data preprocessing using Machine Learning
PPTX
Classification Algorithm-II
PPT
Deep learning ppt
PPTX
Artifical Neural Network and its applications
PPTX
Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...
PDF
Lecture 6 radial basis-function_network
PDF
Artificial Neural Network report
PPTX
strassen matrix multiplication algorithm
PPTX
Noise2Score: Tweedie’s Approach to Self-Supervised Image Denoising without Cl...
PDF
Training Neural Networks
Perceptron
Introduction to Neural Networks
Recurrent Neural Network (RNN) | RNN LSTM Tutorial | Deep Learning Course | S...
Mc Culloch Pitts Neuron
Artificial Intelligence Course | AI Tutorial For Beginners | Artificial Intel...
Neural network
Batch normalization presentation
Neural Networks: Multilayer Perceptron
Introduction to artificial neural network
Data preprocessing using Machine Learning
Classification Algorithm-II
Deep learning ppt
Artifical Neural Network and its applications
Artificial Neural Network | Deep Neural Network Explained | Artificial Neural...
Lecture 6 radial basis-function_network
Artificial Neural Network report
strassen matrix multiplication algorithm
Noise2Score: Tweedie’s Approach to Self-Supervised Image Denoising without Cl...
Training Neural Networks
Ad

Similar to Introduction to Artificial Neural Networks (ANNs) - Step-by-Step Training & Testing Example (20)

PPTX
Neural Network Back Propagation Algorithm
PPT
Lecture2---Feed-Forward Neural Networks.ppt
PDF
Learning Deep Learning
PPT
Neural network and mlp
PPTX
Deep learning study 2
PPTX
Machine Learning Essentials Demystified part2 | Big Data Demystified
PDF
Curve fitting
PDF
Artificial Neural Networks
PPTX
latest TYPES OF NEURAL NETWORKS (2).pptx
PDF
Some Equations for MAchine LEarning
PPTX
Machine Learning Introduction by Dr.C.R.Dhivyaa Kongu Engineering College
PPTX
UNIT I (6).pptx
PDF
Feedforward Networks and Deep Learning Module-02.pdf
PPTX
1.2 A Tutorial Example - Deep Learning Foundations and Concepts.pptx
PDF
Introduction to Gaussian Processes
PPT
Machine Learning and Inductive Inference
PPTX
Module1 (2).pptxvgybhunjimko,l.vgbyhnjmk;
PPTX
Deep learning simplified
PPT
Ann ics320 part4
PPTX
Unit 2 TOMMichlwjwjwjwjwwjejejejejejejej
Neural Network Back Propagation Algorithm
Lecture2---Feed-Forward Neural Networks.ppt
Learning Deep Learning
Neural network and mlp
Deep learning study 2
Machine Learning Essentials Demystified part2 | Big Data Demystified
Curve fitting
Artificial Neural Networks
latest TYPES OF NEURAL NETWORKS (2).pptx
Some Equations for MAchine LEarning
Machine Learning Introduction by Dr.C.R.Dhivyaa Kongu Engineering College
UNIT I (6).pptx
Feedforward Networks and Deep Learning Module-02.pdf
1.2 A Tutorial Example - Deep Learning Foundations and Concepts.pptx
Introduction to Gaussian Processes
Machine Learning and Inductive Inference
Module1 (2).pptxvgybhunjimko,l.vgbyhnjmk;
Deep learning simplified
Ann ics320 part4
Unit 2 TOMMichlwjwjwjwjwwjejejejejejejej
Ad

More from Ahmed Gad (20)

PPTX
ICEIT'20 Cython for Speeding-up Genetic Algorithm
PDF
NumPyCNNAndroid: A Library for Straightforward Implementation of Convolutiona...
PDF
Python for Computer Vision - Revision 2nd Edition
PDF
Multi-Objective Optimization using Non-Dominated Sorting Genetic Algorithm wi...
PDF
M.Sc. Thesis - Automatic People Counting in Crowded Scenes
PDF
Derivation of Convolutional Neural Network from Fully Connected Network Step-...
PDF
Introduction to Optimization with Genetic Algorithm (GA)
PDF
Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...
PDF
Avoid Overfitting with Regularization
PDF
Genetic Algorithm (GA) Optimization - Step-by-Step Example
PDF
ICCES 2017 - Crowd Density Estimation Method using Regression Analysis
PDF
Computer Vision: Correlation, Convolution, and Gradient
PDF
Python for Computer Vision - Revision
PDF
Anime Studio Pro 10 Tutorial as Part of Multimedia Course
PDF
Operations in Digital Image Processing + Convolution by Example
PDF
MATLAB Code + Description : Real-Time Object Motion Detection and Tracking
PDF
MATLAB Code + Description : Very Simple Automatic English Optical Character R...
PDF
Graduation Project - Face Login : A Robust Face Identification System for Sec...
PDF
Introduction to MATrices LABoratory (MATLAB) as Part of Digital Signal Proces...
PDF
Introduction to Digital Signal Processing (DSP) - Course Notes
ICEIT'20 Cython for Speeding-up Genetic Algorithm
NumPyCNNAndroid: A Library for Straightforward Implementation of Convolutiona...
Python for Computer Vision - Revision 2nd Edition
Multi-Objective Optimization using Non-Dominated Sorting Genetic Algorithm wi...
M.Sc. Thesis - Automatic People Counting in Crowded Scenes
Derivation of Convolutional Neural Network from Fully Connected Network Step-...
Introduction to Optimization with Genetic Algorithm (GA)
Derivation of Convolutional Neural Network (ConvNet) from Fully Connected Net...
Avoid Overfitting with Regularization
Genetic Algorithm (GA) Optimization - Step-by-Step Example
ICCES 2017 - Crowd Density Estimation Method using Regression Analysis
Computer Vision: Correlation, Convolution, and Gradient
Python for Computer Vision - Revision
Anime Studio Pro 10 Tutorial as Part of Multimedia Course
Operations in Digital Image Processing + Convolution by Example
MATLAB Code + Description : Real-Time Object Motion Detection and Tracking
MATLAB Code + Description : Very Simple Automatic English Optical Character R...
Graduation Project - Face Login : A Robust Face Identification System for Sec...
Introduction to MATrices LABoratory (MATLAB) as Part of Digital Signal Proces...
Introduction to Digital Signal Processing (DSP) - Course Notes

Recently uploaded (20)

PDF
A systematic review of self-coping strategies used by university students to ...
PDF
OBE - B.A.(HON'S) IN INTERIOR ARCHITECTURE -Ar.MOHIUDDIN.pdf
PPTX
school management -TNTEU- B.Ed., Semester II Unit 1.pptx
PPTX
master seminar digital applications in india
PDF
Microbial disease of the cardiovascular and lymphatic systems
PDF
VCE English Exam - Section C Student Revision Booklet
PDF
Chapter 2 Heredity, Prenatal Development, and Birth.pdf
PPTX
Cell Types and Its function , kingdom of life
PDF
O5-L3 Freight Transport Ops (International) V1.pdf
PPTX
Microbial diseases, their pathogenesis and prophylaxis
PPTX
Institutional Correction lecture only . . .
PDF
102 student loan defaulters named and shamed – Is someone you know on the list?
PPTX
Pharmacology of Heart Failure /Pharmacotherapy of CHF
PDF
Saundersa Comprehensive Review for the NCLEX-RN Examination.pdf
PPTX
1st Inaugural Professorial Lecture held on 19th February 2020 (Governance and...
PDF
The Lost Whites of Pakistan by Jahanzaib Mughal.pdf
PDF
Black Hat USA 2025 - Micro ICS Summit - ICS/OT Threat Landscape
PDF
RMMM.pdf make it easy to upload and study
PDF
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
PPTX
Final Presentation General Medicine 03-08-2024.pptx
A systematic review of self-coping strategies used by university students to ...
OBE - B.A.(HON'S) IN INTERIOR ARCHITECTURE -Ar.MOHIUDDIN.pdf
school management -TNTEU- B.Ed., Semester II Unit 1.pptx
master seminar digital applications in india
Microbial disease of the cardiovascular and lymphatic systems
VCE English Exam - Section C Student Revision Booklet
Chapter 2 Heredity, Prenatal Development, and Birth.pdf
Cell Types and Its function , kingdom of life
O5-L3 Freight Transport Ops (International) V1.pdf
Microbial diseases, their pathogenesis and prophylaxis
Institutional Correction lecture only . . .
102 student loan defaulters named and shamed – Is someone you know on the list?
Pharmacology of Heart Failure /Pharmacotherapy of CHF
Saundersa Comprehensive Review for the NCLEX-RN Examination.pdf
1st Inaugural Professorial Lecture held on 19th February 2020 (Governance and...
The Lost Whites of Pakistan by Jahanzaib Mughal.pdf
Black Hat USA 2025 - Micro ICS Summit - ICS/OT Threat Landscape
RMMM.pdf make it easy to upload and study
grade 11-chemistry_fetena_net_5883.pdf teacher guide for all student
Final Presentation General Medicine 03-08-2024.pptx

Introduction to Artificial Neural Networks (ANNs) - Step-by-Step Training & Testing Example

  • 1. Artificial Neural Networks (ANNs) Step-By-Step Training & Testing Example MENOUFIA UNIVERSITY FACULTY OF COMPUTERS AND INFORMATION ALL DEPARTMENTS ARTIFICIAL INTELLIGENCE ‫المنوفية‬ ‫جامعة‬ ‫والمعلومات‬ ‫الحاسبات‬ ‫كلية‬ ‫األقسام‬ ‫جميع‬ ‫الذكاء‬‫اإلصطناعي‬ ‫المنوفية‬ ‫جامعة‬ Ahmed Fawzy Gad ahmed.fawzy@ci.menofia.edu.eg
  • 2. Neural Networks & Classification
  • 15. Classification Example B (BLUE)G (GREEN)R (RED) 00255 RED 6880248 25500 BLUE 2101567
  • 16. Neural Networks B (BLUE)G (GREEN)R (RED) 00255 RED 6880248 25500 BLUE 2101567 Input Hidden Output
  • 17. Neural Networks B (BLUE)G (GREEN)R (RED) 00255 RED 6880248 25500 BLUE 2101567
  • 18. Input Layer B (BLUE)G (GREEN)R (RED) 00255 RED 6880248 25500 BLUE 2101567 Input Output R G B
  • 19. Output Layer B (BLUE)G (GREEN)R (RED) 00255 RED 6880248 25500 BLUE 2101567 Input Output R G B RED/BLUE 𝒀𝒋
  • 20. Weights B (BLUE)G (GREEN)R (RED) 00255 RED 6880248 25500 BLUE 2101567 Input Output R G B RED/BLUE 𝑾 𝟏 𝑾 𝟐 𝑾 𝟑 Weights=𝑾𝒊 𝒀𝒋
  • 21. Activation Function: the weighted inputs feed an activation function that produces the output Yj.
  • 24. Activation Function Components: the function has inputs and outputs, covered next.
  • 25. Activation Function Inputs: the input to the activation function is s, the sum of products (SOP) of the inputs and weights: s = SOP(Xi, Wi), where Xi = Inputs and Wi = Weights.
  • 29. In general, s = Σ (i = 1 .. m) Xi·Wi; for the three inputs here, s = (X1·W1 + X2·W2 + X3·W3).
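As a minimal sketch (the function name `sop` is mine, not from the slides), the SOP formula above is just a dot product of the input and weight vectors:

```python
def sop(X, W):
    """Sum of products: s = X1*W1 + X2*W2 + ... + Xm*Wm."""
    return sum(x * w for x, w in zip(X, W))

# First training sample (255, 0, 0) with the example weights (-2, 1, 6.2):
print(sop([255, 0, 0], [-2, 1, 6.2]))  # → -510.0
```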
  • 31. Activation Function Outputs: the function's output F(s) provides the class label Yj.
  • 33. Activation Functions: which activation function to use? There are TWO class labels, so we need one that gives two outputs Cj.
  • 35. Activation Function: the sign function sgn satisfies this, producing exactly two outputs.
  • 36. Bias: a fixed extra input X0 = +1 with weight W0 is added. Without it, s = (X1·W1 + X2·W2 + X3·W3).
  • 37. With it, s = (X0·W0 + X1·W1 + X2·W2 + X3·W3).
  • 39. Substituting X0 = +1 gives s = (W0 + X1·W1 + X2·W2 + X3·W3).
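The same bias trick in code, a sketch assuming the usual convention of prepending the fixed input X0 = +1:

```python
def sop_with_bias(X, W):
    """s = W0 + X1*W1 + ...: the bias W0 is the weight of a fixed input X0 = +1."""
    return sum(x * w for x, w in zip([1.0] + list(X), W))

# W = (W0, W1, W2, W3) = (-1, -2, 1, 6.2) applied to the sample (255, 0, 0):
print(sop_with_bias([255, 0, 0], [-1, -2, 1, 6.2]))  # → -511.0
```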
  • 41. Bias Importance: why does the bias matter?
  • 43. Compare the line equation y = x + b: the bias b is the Y-intercept.
  • 46. With b = 0 the line is forced through the origin; with b = +v it shifts up and with b = -v it shifts down, so the line can fit data it otherwise could not reach.
  • 52. The same concept applies to the neuron's bias: s = Σ (i = 1 .. m) Xi·Wi + BIAS allows the decision boundary to shift away from the origin.
  • 57. Learning Rate: the learning rate η satisfies 0 ≤ η ≤ 1.
  • 58. Summary of Parameters:
    - Inputs: X(n) = (X0, X1, X2, X3)
    - Weights: W(n) = (W0, W1, W2, W3)
    - Bias: b (carried as W0 with X0 = +1)
    - Sum of products (SOP): s = (X0·W0 + X1·W1 + X2·W2 + X3·W3)
    - Activation function: sgn
    - Outputs: Yj
    - Learning rate: η, with 0 ≤ η ≤ 1
  • 65. Other Parameters: the step number n = 0, 1, 2, …
  • 66. Desired output: d(n) = -1 if x(n) belongs to C1 (RED), +1 if x(n) belongs to C2 (BLUE).
  • 67. Neural Networks Training Steps: (1) Weights Initialization, (2) Inputs Application, (3) Sum of Inputs-Weights Products, (4) Activation Function Response Calculation, (5) Weights Adaptation, (6) Back to Step 2.
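The six steps map onto a short loop. This is a sketch under my own naming (`train_perceptron` is not from the slides), assuming each sample already includes X0 = +1:

```python
def train_perceptron(samples, W, eta=0.001, max_epochs=100):
    """samples: list of (X, d) pairs where X includes X0 = +1.
    Steps 2-6 of the slide repeat until no sample needs adaptation."""
    for _ in range(max_epochs):
        adapted = False
        for X, d in samples:                       # step 2: apply inputs
            s = sum(x * w for x, w in zip(X, W))   # step 3: sum of products
            y = 1 if s >= 0 else -1                # step 4: sgn response
            if y != d:                             # step 5: adapt on error
                W = [w + eta * (d - y) * x for w, x in zip(W, X)]
                adapted = True
        if not adapted:                            # all outputs correct: stop
            break
    return W
```

Step 1, weight initialization, corresponds to whatever initial W is passed in.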
  • 68. Regarding the 5th Step, Weights Adaptation: if the predicted output Y is not the same as the desired output d, the weights are adapted according to W(n+1) = W(n) + η·(d(n) - Y(n))·X(n), where W(n) = [b(n), W1(n), W2(n), …, Wm(n)].
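The adaptation equation as an element-wise sketch (the name `adapt` is mine):

```python
def adapt(W, X, d, y, eta=0.001):
    """W(n+1) = W(n) + eta * (d(n) - Y(n)) * X(n), applied element-wise."""
    return [w + eta * (d - y) * x for w, x in zip(W, X)]

# The adaptation performed at step n=1 of the worked example:
W2 = adapt([-1, -2, 1, 6.2], [1, 248, 80, 68], d=-1, y=+1)
print([round(w, 3) for w in W2])  # → [-1.002, -2.496, 0.84, 6.064]
```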
  • 69. Neural Networks Training Example, Step n=0: in each step of the solution, the parameters of the neural network must be known. Parameters at n=0: η = 0.001, X(0) = (+1, 255, 0, 0), W(0) = (-1, -2, 1, 6.2), d(0) = -1.
  • 71. Step n=0, SOP: s = (X0·W0 + X1·W1 + X2·W2 + X3·W3) = (+1)(-1) + (255)(-2) + (0)(1) + (0)(6.2) = -511.
  • 72. Step n=0, Output: Y(0) = SGN(s) = SGN(-511) = -1, where sgn(s) = +1 if s ≥ 0 and -1 if s < 0.
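The bipolar sign activation from the slide, as a two-line sketch:

```python
def sgn(s):
    """Bipolar sign activation: +1 for s >= 0, -1 for s < 0."""
    return 1 if s >= 0 else -1

print(sgn(-511), sgn(4.6))  # → -1 1
```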
  • 73. Step n=0, Output: Y(0) = -1, i.e. RED.
  • 74. Step n=0, Predicted vs. Desired: Y(0) = -1 and d(0) = -1. Since Y(n) = d(n), the weights are correct; no adaptation is needed.
  • 75. Step n=1 parameters: η = 0.001, X(1) = (+1, 248, 80, 68), W(1) = W(0) = (-1, -2, 1, 6.2), d(1) = -1.
  • 77. Step n=1, SOP: s = (+1)(-1) + (248)(-2) + (80)(1) + (68)(6.2) = 4.6.
  • 78. Step n=1, Output: Y(1) = SGN(4.6) = +1, i.e. BLUE.
  • 80. Step n=1, Predicted vs. Desired: Y(1) = +1 but d(1) = -1. Since Y(n) ≠ d(n), the weights are incorrect; adaptation is required.
  • 81. Weights Adaptation: applying W(n+1) = W(n) + η·(d(n) - Y(n))·X(n) with n = 1:
    W(2) = (-1, -2, 1, 6.2) + 0.001·(-1 - (+1))·(+1, 248, 80, 68)
         = (-1, -2, 1, 6.2) + (-0.002)·(+1, 248, 80, 68)
         = (-1, -2, 1, 6.2) + (-0.002, -0.496, -0.16, -0.136)
         = (-1.002, -2.496, 0.84, 6.064)
  • 82. Step n=2 parameters: η = 0.001, X(2) = (+1, 0, 0, 255), W(2) = (-1.002, -2.496, 0.84, 6.064), d(2) = +1.
  • 84. Step n=2, SOP: s = (+1)(-1.002) + (0)(-2.496) + (0)(0.84) + (255)(6.064) = 1545.318.
  • 85. Step n=2, Output: Y(2) = SGN(1545.318) = +1, i.e. BLUE.
  • 87. Step n=2, Predicted vs. Desired: Y(2) = +1 and d(2) = +1. Since Y(n) = d(n), the weights are correct; no adaptation.
  • 88. Step n=3 parameters: η = 0.001, X(3) = (+1, 67, 15, 210), W(3) = W(2) = (-1.002, -2.496, 0.84, 6.064), d(3) = +1.
  • 90. Step n=3, SOP: s = (+1)(-1.002) + (67)(-2.496) + (15)(0.84) + (210)(6.064) = -1.002 - 167.232 + 12.6 + 1273.44 = 1117.806.
  • 91. Step n=3, Output: Y(3) = SGN(1117.806) = +1, i.e. BLUE.
  • 93. Step n=3, Predicted vs. Desired: Y(3) = +1 and d(3) = +1. Since Y(n) = d(n), the weights are correct; no adaptation.
  • 94. Step n=4 parameters: η = 0.001, X(4) = (+1, 255, 0, 0), W(4) = W(3) = (-1.002, -2.496, 0.84, 6.064), d(4) = -1.
  • 96. Step n=4, SOP: s = (+1)(-1.002) + (255)(-2.496) + (0)(0.84) + (0)(6.064) = -637.482.
  • 97. Step n=4, Output: Y(4) = SGN(-637.482) = -1, i.e. RED.
  • 99. Step n=4, Predicted vs. Desired: Y(4) = -1 and d(4) = -1. Since Y(n) = d(n), the weights are correct; no adaptation.
  • 100. Step n=5 parameters: η = 0.001, X(5) = (+1, 248, 80, 68), W(5) = W(4) = (-1.002, -2.496, 0.84, 6.064), d(5) = -1.
  • 102. Step n=5, SOP: s = (+1)(-1.002) + (248)(-2.496) + (80)(0.84) + (68)(6.064) = -1.002 - 619.008 + 67.2 + 412.352 = -140.458.
  • 103. Step n=5, Output: Y(5) = SGN(-140.458) = -1, i.e. RED.
  • 105. Step n=5, Predicted vs. Desired: Y(5) = -1 and d(5) = -1. Since Y(n) = d(n), the weights are correct; no adaptation.
  • 106. Correct Weights: after testing the weights across all samples with correct results, we conclude that the current weights are the correct ones for this neural network. After the training phase comes testing: what is the class of an unknown color with values R = 150, G = 100, B = 180?
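As a check, a short script (my own sketch, not from the slides) reproduces the whole run: starting from W(0) = (-1, -2, 1, 6.2), a single adaptation occurs at step n=1, and the resulting weights then classify all four samples correctly, ending training:

```python
samples = [([1, 255,  0,   0], -1),   # RED
           ([1, 248, 80,  68], -1),   # RED
           ([1,   0,  0, 255], +1),   # BLUE
           ([1,  67, 15, 210], +1)]   # BLUE
W, eta = [-1.0, -2.0, 1.0, 6.2], 0.001

adapted = True
while adapted:                                     # loop until a clean full pass
    adapted = False
    for X, d in samples:
        s = sum(x * w for x, w in zip(X, W))       # sum of products
        y = 1 if s >= 0 else -1                    # sgn activation
        if y != d:                                 # mismatch: adapt the weights
            W = [w + eta * (d - y) * x for w, x in zip(W, X)]
            adapted = True

print([round(w, 3) for w in W])  # → [-1.002, -2.496, 0.84, 6.064]
```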
  • 107. Testing the Trained Neural Network, (R, G, B) = (150, 100, 180): the trained network's parameters are η = 0.001 and W = (-1.002, -2.496, 0.84, 6.064).
  • 108. SOP: s = (+1)(-1.002) + (150)(-2.496) + (100)(0.84) + (180)(6.064) = 800.118.
  • 109. Output: Y = SGN(800.118) = +1.
  • 110. Output: Y = +1, so the unknown color is classified as BLUE.
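The whole testing pass can be sketched as a single function (the name `classify` is mine):

```python
W = [-1.002, -2.496, 0.84, 6.064]   # trained weights (W0, W1, W2, W3)

def classify(r, g, b, W=W):
    """Classify an RGB color with the trained perceptron."""
    s = W[0] + r * W[1] + g * W[2] + b * W[3]   # SOP including bias W0
    return "BLUE" if s >= 0 else "RED"          # sgn: +1 is BLUE, -1 is RED

print(classify(150, 100, 180))  # → BLUE  (s = 800.118)
```

The training samples themselves also come back correct, matching steps n=2 through n=5 above.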