George F Luger
ARTIFICIAL INTELLIGENCE 6th edition
Structures and Strategies for Complex Problem Solving
Machine Learning: Connectionist
Luger: Artificial Intelligence, 6th edition. © Pearson Education Limited, 2009
11.0 Introduction
11.1 Foundations for Connectionist
Networks
11.2 Perceptron Learning
11.3 Backpropagation Learning
11.4 Competitive Learning
11.5 Hebbian Coincidence Learning
11.6 Attractor Networks or “Memories”
11.7 Epilogue and References
11.8 Exercises
Fig 11.1 An artificial neuron, with input vector xi, weights on each input line, and a thresholding function f that determines the neuron’s output value. Compare with the actual neuron in Fig 1.2.
Fig 11.2 McCulloch-Pitts neurons to calculate the logic functions AND and OR.
Table 11.1 The McCulloch-Pitts model for logical AND.
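To make the McCulloch-Pitts model of Figs 11.2 and Table 11.1 concrete, here is a minimal Python sketch (not from the slides). The weights of +1 on both inputs with a threshold of 2 for AND and 1 for OR follow the standard construction:

```python
def mcp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts unit: fire (1) iff the weighted input sum meets the threshold."""
    return 1 if sum(x * w for x, w in zip(inputs, weights)) >= threshold else 0

# AND: both inputs weighted +1, threshold 2.  OR: same weights, threshold 1.
AND = lambda x, y: mcp_neuron([x, y], [1, 1], 2)
OR  = lambda x, y: mcp_neuron([x, y], [1, 1], 1)
```

The same unit with one weight of -1 and a suitable threshold gives negation, so any propositional function can be built from these pieces.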
Table 11.2 The truth table for exclusive-or.
Fig 11.3 The exclusive-or problem. No straight line in two dimensions can separate the (0, 1) and (1, 0) data points from (0, 0) and (1, 1).
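The non-separability claim of Fig 11.3 can be checked empirically: the sketch below (an illustration added here, not from the slides) searches a grid of candidate weights and thresholds for a single linear threshold unit that reproduces exclusive-or, and finds none.

```python
import itertools

def linear_unit(x, y, w1, w2, t):
    """A single linear threshold unit on two inputs."""
    return 1 if w1 * x + w2 * y >= t else 0

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Search a grid of weights/thresholds from -4.0 to 4.0 in steps of 0.5.
vals = [v / 2 for v in range(-8, 9)]
found = any(
    all(linear_unit(x, y, w1, w2, t) == out for (x, y), out in XOR.items())
    for w1, w2, t in itertools.product(vals, repeat=3)
)
# found == False: the constraints t > 0, w1 >= t, w2 >= t, w1 + w2 < t
# are jointly unsatisfiable, so no single-layer unit computes XOR.
```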
Fig 11.4 A full classification system.
Table 11.3 A data set for perceptron classification.
Fig 11.5 A two-dimensional plot of the data points in Table 11.3. The perceptron of Section 11.2.1 provides a linear separation of the data sets.
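The perceptron learning rule of Section 11.2 can be sketched as below. The six data points are illustrative stand-ins with the same shape as Table 11.3 (two coordinates, bipolar class label), not the table’s actual values; the bias is folded in as a constant input of 1.

```python
def sign(x):
    """Bipolar hard-limit thresholding function."""
    return 1 if x >= 0 else -1

def train_perceptron(data, c=0.2, epochs=100):
    """Perceptron rule: on a miss, w += c * (d - f(net)) * x."""
    w = [0.0, 0.0, 0.0]                      # bias weight plus one weight per input
    for _ in range(epochs):
        for (x1, x2), d in data:
            x = [1.0, x1, x2]                # constant 1 gives the bias term
            out = sign(sum(wi * xi for wi, xi in zip(w, x)))
            if out != d:
                w = [wi + c * (d - out) * xi for wi, xi in zip(w, x)]
    return w

# Illustrative, linearly separable points: small coordinates -> class +1, large -> -1.
data = [((1.0, 1.0), 1), ((2.5, 2.1), 1), ((1.2, 3.0), 1),
        ((9.4, 6.4), -1), ((8.0, 7.7), -1), ((7.9, 8.4), -1)]
w = train_perceptron(data)
```

Because the points are linearly separable, the perceptron convergence theorem guarantees the loop stops making corrections after finitely many updates.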
Fig 11.6 The perceptron net for the example data of Table 11.3. The thresholding function is linear and bipolar (see Fig 11.7a). The summing node computes ΣXiWi.
Fig 11.7 Thresholding functions.
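The common thresholding functions can be written out directly; the sketch below (added for illustration) covers the binary hard limit, the bipolar hard limit of Fig 11.7a, and the continuous sigmoid, whose steepness parameter is an assumption named `lam` here:

```python
import math

def hard_limit(x):
    """Binary step: 1 above the threshold, 0 below."""
    return 1 if x >= 0 else 0

def bipolar(x):
    """Bipolar hard limit, as used by the perceptron of Fig 11.6."""
    return 1 if x >= 0 else -1

def sigmoid(x, lam=1.0):
    """Continuous squashed threshold; lam controls steepness."""
    return 1.0 / (1.0 + math.exp(-lam * x))
```

The sigmoid is the one backpropagation needs, since learning there requires a differentiable output.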
Fig 11.8 An error surface in two dimensions. Constant c dictates the size of
the learning step.
Fig 11.9 Backpropagation in a connectionist network having a hidden layer.
Fig 11.10
Fig 11.11 The network topology of NETtalk.
Fig 11.12 A backpropagation net to solve the exclusive-or problem. The Wij are the weights and H is the hidden node.
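A backpropagation solution to exclusive-or can be sketched as below. This is a generic 2-3-1 sigmoid network, not the exact single-hidden-node topology of Fig 11.12; the layer sizes, learning rate, and epoch count are assumptions chosen for the demonstration.

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_xor(epochs=10000, lr=0.5, hidden=3, seed=0):
    """Train a 2-input, one-hidden-layer sigmoid net on XOR by backpropagation."""
    rng = random.Random(seed)
    # w_ih[i][j]: input i -> hidden j (inputs are x1, x2, and a bias of 1).
    w_ih = [[rng.uniform(-1, 1) for _ in range(hidden)] for _ in range(3)]
    # w_ho[j]: hidden j -> output (last entry is the output bias weight).
    w_ho = [rng.uniform(-1, 1) for _ in range(hidden + 1)]
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    for _ in range(epochs):
        for (x1, x2), t in data:
            x = [x1, x2, 1.0]
            h = [sigmoid(sum(x[i] * w_ih[i][j] for i in range(3))) for j in range(hidden)]
            ha = h + [1.0]
            o = sigmoid(sum(ha[j] * w_ho[j] for j in range(hidden + 1)))
            do = (t - o) * o * (1 - o)                       # output delta
            dh = [h[j] * (1 - h[j]) * do * w_ho[j] for j in range(hidden)]  # hidden deltas
            for j in range(hidden + 1):
                w_ho[j] += lr * do * ha[j]
            for i in range(3):
                for j in range(hidden):
                    w_ih[i][j] += lr * dh[j] * x[i]
    def predict(x1, x2):
        x = [x1, x2, 1.0]
        h = [sigmoid(sum(x[i] * w_ih[i][j] for i in range(3))) for j in range(hidden)] + [1.0]
        return sigmoid(sum(h[j] * w_ho[j] for j in range(hidden + 1)))
    return predict
```

Because the error surface has poor local minima, a run from an unlucky random start can fail; in practice one restarts from a few different seeds and keeps the best net.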
Fig 11.13 A layer of nodes for application of a winner-take-all algorithm. The
old input vectors support the winning node.
Fig 11.14 The use of a Kohonen layer, unsupervised, to generate a sequence
of prototypes to represent the classes of Table 11.3.
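The winner-take-all selection of Fig 11.13 and the unsupervised Kohonen update of Fig 11.14 can be sketched together (an illustration added here; the learning constant `c` is an assumed value):

```python
def winner(prototypes, x):
    """Winner-take-all: index of the prototype closest (squared Euclidean) to x."""
    dists = [sum((pi - xi) ** 2 for pi, xi in zip(p, x)) for p in prototypes]
    return dists.index(min(dists))

def kohonen_step(prototypes, x, c=0.2):
    """Move only the winning prototype a fraction c of the way toward the input."""
    j = winner(prototypes, x)
    prototypes[j] = [pj + c * (xi - pj) for pj, xi in zip(prototypes[j], x)]
    return j
```

Repeated over a stream of inputs, each prototype drifts toward the centre of the cluster of inputs it keeps winning, giving the class prototypes of Fig 11.14 without any supervision.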
Fig 11.15 The architecture of the Kohonen-based learning network for the data of Table 11.3 and the classification of Fig 11.4.
Fig 11.16 The “outstar” of node J, the “winner” in a winner-take-all network. The Y vector supervises the response on the output layer in Grossberg training. The “outstar” is bold with all weights 1; all other weights are 0.
Fig 11.17 A counterpropagation network to recognize the classes in Table 11.3. We train the outstar weights of node A, wsa and wda.
Fig 11.18 An SVM learning the boundaries of a chessboard from points generated according to the uniform distribution, using Gaussian kernels. The dots are the data points, with the larger dots comprising the set of support vectors; the darker areas indicate the confidence in the classification. Adapted from Cristianini and Shawe-Taylor (2000).
Table 11.4 The signs and product of signs of node output values.
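Table 11.4’s point is that the product of the signs of input and output decides whether a weight grows or shrinks. A minimal unsupervised Hebbian step (an illustrative sketch; the learning constant `c` is assumed) makes that explicit:

```python
def sign(x):
    """Bipolar output: +1 or -1."""
    return 1 if x >= 0 else -1

def hebbian_step(w, x, c=0.1):
    """Unsupervised Hebbian step: delta_wi = c * f(net) * xi.
    When output and input share a sign the weight is strengthened; otherwise weakened."""
    out = sign(sum(wi * xi for wi, xi in zip(w, x)))
    return [wi + c * out * xi for wi, xi in zip(w, x)], out
```

The supervised variant of Fig 11.19 replaces the node’s own output `out` with the desired output in the weight update.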
Fig 11.19 An example neuron for application of a hybrid Hebbian node where
learning is supervised.
Fig 11.20 A supervised Hebbian network for learning pattern association.
Fig 11.21 The linear association network. The vector Xi is entered as input and the associated vector Y is produced as output. Each yi is a linear combination of the x inputs. In training, each yi is supplied with its correct output signal.
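The linear associator of Figs 11.21 and 11.22 can be sketched with the standard outer-product construction (an illustration added here): the weight matrix is the sum of outer products y ⊗ x over the training pairs, and recall is a matrix-vector product. With orthonormal input patterns, recall is exact:

```python
def outer_product_matrix(pairs):
    """Build W = sum over (x, y) pairs of the outer product y xT."""
    m, n = len(pairs[0][0]), len(pairs[0][1])
    W = [[0.0] * m for _ in range(n)]
    for x, y in pairs:
        for i in range(n):
            for j in range(m):
                W[i][j] += y[i] * x[j]
    return W

def recall(W, x):
    """Linear recall: y = W x."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]
```

If the stored input vectors are merely close to orthogonal rather than exactly so, recall returns the target plus interference (“crosstalk”) from the other stored patterns.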
Fig 11.22 A linear associator network for the example in Section 11.5.4.
The weight matrix is calculated using the formula presented in the
previous section.
Fig 11.23 A BAM network for the examples of Section 11.6.2. Each node
may also be connected to itself.
Fig 11.24 An autoassociative network with an input vector Ii. We assume single links between nodes with unique indices, thus wij = wji and the weight matrix is symmetric.
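An autoassociative attractor memory of this kind can be sketched in a Hopfield style (an illustration added here, not the slides’ own code): bipolar patterns are stored as a symmetric, zero-diagonal sum of outer products, and a corrupted cue is repeatedly thresholded until it settles onto a stored pattern.

```python
def store(patterns):
    """Symmetric autoassociative matrix: W = sum x xT with zero diagonal (wij = wji)."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for x in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += x[i] * x[j]
    return W

def settle(W, x, steps=5):
    """Iterate the bipolar threshold update until the state (typically) reaches an attractor."""
    for _ in range(steps):
        x = [1 if sum(wij * xj for wij, xj in zip(row, x)) >= 0 else -1 for row in W]
    return x
```

Starting from a cue with a few flipped bits, the update pulls the state back to the nearest stored pattern, which is exactly the “memory” behaviour Section 11.6 describes.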