Artificial Intelligence (CSIT 4th Sem)
Unit 5: Machine Learning
Prepared By : Narayan Dhamala
Introduction to Machine Learning
 Machine learning is the study of computer systems that learn from data and experience.
 Machine learning is a subfield of artificial intelligence that gives computers the ability to learn without being explicitly programmed.
 The goal of machine learning is to build computer systems that can adapt to and learn from their experience.
 A computer program is said to learn if its performance P on a task T improves with experience E.
Concept of Learning
 Learning is a way of updating knowledge.
 Learning is making useful changes in our mind.
 Learning is constructing or modifying representations of what is
being experienced.
 Learning denotes changes in the system that are adaptive in the
sense that they enable the system to do the same task (or tasks
drawn from the same population) more effectively the next time.
Types of Learning
The strategies for learning can be classified according to the amount of inference the system
has to perform on its training data. In increasing order of inference, they are (a short code sketch
contrasting supervised and unsupervised learning follows this list):
1. Rote learning – the new knowledge is implanted directly with no inference at all, e.g. simple
memorization of past events, or a knowledge engineer's direct programming of rules elicited
from a human expert into an expert system.
2. Supervised learning – the system is supplied with a set of training examples consisting of
inputs and corresponding outputs, and is required to discover the relation or mapping
between them, e.g. as a series of rules, or a neural network.
3. Unsupervised learning – the system is supplied with a set of training examples consisting
only of inputs and is required to discover for itself what appropriate outputs should be, e.g. a
Kohonen Network or Self-Organizing Map.
4. Reinforcement learning – concerned with how intelligent agents ought to act in an
environment to maximize some notion of reward from a sequence of actions.
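As a concrete illustration of the difference between supervised and unsupervised learning, here is a minimal Python sketch; the use of scikit-learn and the toy dataset are assumptions chosen for illustration and are not part of the original slides.

```python
# Minimal sketch (not from the slides): supervised vs. unsupervised learning
# with scikit-learn on a tiny toy dataset.
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

X = [[0, 0], [0, 1], [5, 5], [5, 6]]   # four 2-D points
y = [0, 0, 1, 1]                        # labels, used only by the supervised learner

# Supervised learning: inputs AND the corresponding outputs are given.
clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[4, 5]]))            # expected: [1]

# Unsupervised learning: only inputs are given; the algorithm must discover
# structure (here, two clusters) by itself.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)                       # cluster index assigned to each point
```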
Learning Framework
• There are four major components in a learning system:
– Environment
– Learning Element
– Performance Element
– Knowledge Base
Learning Framework:
The Environment
• The environment refers to the nature and quality of information given to the
learning element
• The nature of information depends on its level (the degree of generality with
respect to the performance element)
– high-level information is abstract; it deals with a broad class of problems
– low-level information is detailed; it deals with a single problem
• The quality of information involves whether it is
– noise free
– reliable
– ordered
Learning Framework:
Learning Elements
• Four learning situations
– Rote Learning
• environment provides information at the required level
– Learning by being told
• information is too abstract, the learning element must hypothesize missing data
– Learning by example
• information is too specific, the learning element must hypothesize more general rules
– Learning by analogy
• information provided is relevant only to an analogous task, the learning element must
discover the analogy
Learning Framework:
The Knowledge Base
• Expressive
– the representation contains the relevant knowledge in a readily accessible
form
• Modifiable
– it must be easy to change the data in the knowledge base
• Extendible
– the knowledge base must contain meta-knowledge (knowledge about how
the knowledge base is structured) so the system can change its structure
Learning Framework:
The Performance Element
• Complexity
– for learning, the simplest task is classification based on a single rule while
the most complex task requires the application of multiple rules in
sequence
• Feedback
– the performance element must send information to the learning system to
be used to evaluate the overall performance
• Transparency
– the learning element should have access to all the internal actions of the
performance element
Statistical Based Learning: Naïve Bayes Model
• Statistical Learning is a set of tools for understanding
data. These tools broadly come under two classes:
supervised learning & unsupervised learning.
• Generally, supervised learning refers to predicting or
estimating an output based on one or more inputs.
• Unsupervised learning, on the other hand, provides a
relationship or finds a pattern within the given data
without a supervised output.
BAYESIAN METHODS
• Learning and classification (Supervised learning) method
based on probability theory.
• Bayes' theorem plays a critical role in probabilistic
learning and classification.
• Uses prior probability of each category given no
information about an item.
• Categorization produces a posterior probability
distribution over the possible categories given a
description of an item.
P(A|B) = P(B|A) × P(A) / P(B)
BAYESIAN METHODS...
D = a set of training examples, each described by the attributes:
Size <small, medium, large>
Color <red, blue, green>
Shape <circle, triangle, square>
Category <positive, negative>
Naïve Bayesian Classifier: Training Dataset
Class:
C1: buys_computer = 'yes'
C2: buys_computer = 'no'
Data sample:
X = (age <= 30, income = medium, student = yes, credit_rating = fair)
Naïve Bayesian Classifier: An Example
• P(Ci): P(buys_computer = “yes”) = 9/14 = 0.643
P(buys_computer = “no”) = 5/14= 0.357
• Compute P(X|Ci) for each class
P(age = “<=30” | buys_computer = “yes”) = 2/9 = 0.222
P(age = “<= 30” | buys_computer = “no”) = 3/5 = 0.6
P(income = “medium” | buys_computer = “yes”) = 4/9 = 0.444
P(income = “medium” | buys_computer = “no”) = 2/5 = 0.4
P(student = “yes” | buys_computer = “yes”) = 6/9 = 0.667
P(student = “yes” | buys_computer = “no”) = 1/5 = 0.2
P(credit_rating = “fair” | buys_computer = “yes”) = 6/9 = 0.667
P(credit_rating = “fair” | buys_computer = “no”) = 2/5 = 0.4
• X = (age <= 30 , income = medium, student = yes, credit_rating = fair)
P(X|Ci) : P(X|buys_computer = “yes”) = 0.222 x 0.444 x 0.667 x 0.667 = 0.044
P(X|buys_computer = “no”) = 0.6 x 0.4 x 0.2 x 0.4 = 0.019
P(X|Ci)*P(Ci) : P(X|buys_computer = “yes”) * P(buys_computer = “yes”) = 0.028
P(X|buys_computer = “no”) * P(buys_computer = “no”) = 0.007
Therefore, X belongs to class (“buys_computer = yes”)
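To make the arithmetic above concrete, here is a minimal Python sketch (not part of the original slides) that reproduces the calculation from the conditional probabilities already listed; it does not re-estimate them from the 14-example training table.

```python
# Minimal sketch: reproduces the naive Bayes calculation above from the
# probabilities already estimated on the training data.
priors = {"yes": 9 / 14, "no": 5 / 14}

# P(attribute value | class), as listed in the example.
cond = {
    "yes": {"age<=30": 2/9, "income=medium": 4/9, "student=yes": 6/9, "credit=fair": 6/9},
    "no":  {"age<=30": 3/5, "income=medium": 2/5, "student=yes": 1/5, "credit=fair": 2/5},
}

x = ["age<=30", "income=medium", "student=yes", "credit=fair"]  # the sample X

scores = {}
for c in priors:
    p = priors[c]
    for value in x:
        p *= cond[c][value]           # naive conditional-independence assumption
    scores[c] = p

print(scores)                          # approximately {'yes': 0.028, 'no': 0.007}
print(max(scores, key=scores.get))     # -> 'yes', i.e. buys_computer = yes
```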
Learning by Genetic Algorithm
A genetic algorithm is an evolutionary algorithm based on the
principles of natural selection and natural genetics.
Genetic algorithms play an important role in search and
optimization problems.
The main purpose of a genetic algorithm is to find the
individuals in the search space with the best genetic
material.
The genetic algorithm process consists of the following four steps
(a minimal sketch follows the list):
1) Encoding (Representation)
2) Selection
3) Crossover &
4) Mutation
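As a hedged illustration of these four steps, the following minimal Python sketch evolves bit strings toward the all-ones string; the fitness function and parameters are assumptions chosen for illustration, not the example from the original slides.

```python
import random

# Minimal genetic algorithm sketch: evolve bit strings toward all 1s.
GENES, POP_SIZE, GENERATIONS, MUT_RATE = 16, 20, 50, 0.05

def fitness(ind):
    return sum(ind)                              # number of 1s in the string

def select(population):
    # Selection: tournament between two randomly chosen individuals.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    # Crossover: single cut point; the child takes a prefix from one parent.
    point = random.randint(1, GENES - 1)
    return p1[:point] + p2[point:]

def mutate(ind):
    # Mutation: flip each gene with a small probability.
    return [1 - g if random.random() < MUT_RATE else g for g in ind]

# Encoding: each individual is represented as a list of 0/1 genes.
population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(best, fitness(best))                       # best individual found
```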
Learning by Neural Networks
Neural Network
 A neural network (also called an artificial neural network) is a computing system made up of a
number of simple, highly interconnected processing elements, which process information by
their dynamic state response to external inputs.
 An artificial neural network is an information processing paradigm that is inspired by the
biological nervous system.
 It is composed of a large number of highly interconnected processing elements called
neurons.
 Each neuron in an ANN receives a number of inputs.
 An activation function is applied to these inputs, which produces the output value of the neuron.
Learning by Neural Networks
Biological neural network vs Artificial neural network
 The term "Artificial Neural Network" is derived from Biological neural networks that
develop the structure of a human brain. Similar to the human brain that has neurons
interconnected to one another, artificial neural networks also have neurons that are
interconnected to one another in various layers of the networks. These neurons are known
as nodes.
 Dendrites in a biological neural network correspond to inputs in an artificial neural network, the
cell nucleus to nodes, synapses to weights, and the axon to the output.
Learning by Neural Networks
Biological neural network vs Artificial neural network
 The relationship between a biological neural network and an artificial neural network follows the
correspondence given above (dendrites -> inputs, cell nucleus -> nodes, synapse -> weights, axon -> output).
Mathematical Model of ANN
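The figures for this slide are not reproduced in the extracted text. As a hedged substitute, the standard mathematical model of a neuron computes a weighted sum of its inputs plus a bias and passes it through an activation function, y = f(w1*x1 + ... + wn*xn + b); a minimal sketch:

```python
import numpy as np

# Minimal sketch of a single artificial neuron: weighted sum plus bias,
# passed through an activation function.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b):
    return sigmoid(np.dot(w, x) + b)   # y = f(sum_i w_i * x_i + b)

x = np.array([0.5, 0.2, 0.1])          # inputs
w = np.array([0.4, 0.3, 0.9])          # connection weights
b = 0.1                                # bias
print(neuron(x, w, b))                 # output value of the neuron, in (0, 1)
```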
Types of ANN
 The different types of ANN are as follows
1) Feed Forward ANN
 A feed-forward neural network is the simplest form of neural network, in which input data
travels in one direction only, passing through the artificial neural nodes and exiting through the
output nodes.
 A feed-forward neural network does not contain loops or cycles.
 In a feed-forward neural network, hidden layers may or may not be present, but the input
and output layers are always present.
 Based on this, they can be further classified as single-layered or multi-layered feed-forward
neural networks (a minimal forward-pass sketch follows).
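A minimal sketch of a forward pass through a small feed-forward network; the 2-3-1 architecture and random weights are assumptions for illustration. Note that data flows in one direction only, with no loops.

```python
import numpy as np

# Minimal forward pass through a 2-3-1 feed-forward network:
# input layer -> hidden layer -> output layer, one direction only, no cycles.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)   # input  -> hidden weights
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)   # hidden -> output weights

def forward(x):
    h = np.tanh(W1 @ x + b1)                    # hidden layer activations
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2))) # sigmoid output layer

print(forward(np.array([0.5, -1.0])))           # single output value
```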
Types of ANN
Note:
• Single Layer Perceptron – This is the simplest feed forward neural network which does
not contain any hidden layer.
• Multi Layer Perceptron – A Multi Layer Perceptron has one or more hidden layers.
Advantages of Feed forward Neural Network
• Less complex, easy to design & maintain
• Fast and speedy [One-way propagation]
• Highly responsive to noisy data
Disadvantages of Feed forward Neural Network
 Cannot be used for deep learning [due to absence of dense layers and back propagation]
Types of ANN
2) Recurrent (Feedback) Neural Network
 A recurrent neural network is a type of neural network in which the output from the previous
step is fed as input to the current step.
 A recurrent neural network contains loops or cycles.
 The main and most important feature of an RNN is the hidden state, which remembers some
information about a sequence (a minimal sketch of the state update follows).
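A minimal sketch of the recurrent state update (a standard Elman-style recurrence, assumed for illustration rather than taken from the slides), showing how the previous hidden state is fed back into the current step:

```python
import numpy as np

# Minimal recurrent step: the hidden state carries information from
# previous steps forward through the sequence (the feedback loop).
rng = np.random.default_rng(1)
Wx = rng.normal(size=(4, 3))          # input  -> hidden weights
Wh = rng.normal(size=(4, 4))          # hidden -> hidden weights (the feedback)
b = np.zeros(4)

def rnn_step(x_t, h_prev):
    return np.tanh(Wx @ x_t + Wh @ h_prev + b)

h = np.zeros(4)                        # initial hidden state
sequence = [rng.normal(size=3) for _ in range(5)]
for x_t in sequence:
    h = rnn_step(x_t, h)               # the same state is reused at every step
print(h)                               # final hidden state summarising the sequence
```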
Types of ANN
Advantages of Recurrent Neural Networks
• They model sequential data, where each sample can be assumed to depend on
previous ones.
• They can be used with convolution layers to extend the effective pixel neighbourhood.
Disadvantages of Recurrent Neural Networks
• Training recurrent neural networks can be a difficult task.
• It is difficult to process long sequences when ReLU (rectified linear unit) is used as the
activation function.
Advantages and Disadvantages of Neural Networks
Advantages:
• A neural network can perform tasks in parallel, which a linear program cannot.
• When an element of the neural network fails, the network can continue without any problem
because of its parallel nature.
• A neural network does not need to be reprogrammed, as it learns by itself.
• It can be implemented in an easy way without any problem.
• As adaptive, intelligent systems, neural networks are robust and excel at solving complex
problems. Neural networks are efficient in their programming, and many scientists agree that
the advantages of using ANNs outweigh the risks.
• It can be implemented in any application.
• It can be implemented in any application.
Disadvantages:
• The neural network requires training to operate.
• Requires high processing time for large neural networks.
• The architecture of a neural network is different from the architecture and history of
microprocessors so they have to be emulated.
Applications of ANN
Brain modeling:
Aid our understanding of how the brain works, how behavior emerges from the interaction of
networks of neurons, and what needs to “get fixed” in brain-damaged patients.
Real world applications :
 Financial modeling – predicting the stock market
 Time series prediction – climate, weather, seizures
 Computer games – intelligent agents, chess, backgammon
 Robotics – autonomous adaptable robots
 Pattern recognition – speech recognition, seismic activity, sonar signals
 Data analysis – data compression, data mining
Learning by Training ANN
 Training a neural network means finding appropriate weights for the neural
connections.
 Once a network has been structured for a particular application, that network is ready to be
trained. To start this process the initial weights are chosen randomly. Then, the training, or
learning, begins.
 There are two approaches to training - supervised and unsupervised.
 Supervised training involves a mechanism of providing the network with the desired output
either by manually "grading" the network's performance or by providing the desired outputs
with the inputs.
 Unsupervised training is where the network has to make sense of the inputs without
outside help.
Supervised Training
In supervised training, both the inputs and the outputs are provided. The network then
processes the inputs and compares its resulting outputs against the desired outputs. Errors
are then propagated back through the system, causing the system to adjust the weights
which control the network. This process occurs over and over as the weights are
continually tweaked. The set of data which enables the training is called the "training set."
During the training of a network, the same set of data is processed many times as the
connection weights are continually refined.
Learning by Training ANN
Unsupervised Training
The other type of training is called unsupervised training. In unsupervised training, the
network is provided with inputs but not with desired outputs. The system itself must then
decide what features it will use to group the input data. This is often referred to as
self-organization or adaptation.
Perceptron Learning
 Learning a perceptron means finding the right values for the weights. The hypothesis space of
a perceptron is the space of all weight vectors.
 The perceptron learning algorithm can be stated as below (a minimal implementation sketch
follows the list).
1. Assign random values to the weight vector
2. Apply the weight update rule to every training example
3. Are all training examples correctly classified?
a. Yes: quit
b. No: go back to Step 2
 There are two popular weight update rules:
i) the perceptron rule, and
ii) the delta rule
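The perceptron rule and delta rule formulas appear as images in the original slides and are not reproduced here. As a hedged sketch, the standard perceptron rule updates each weight by w_i <- w_i + eta * (t - o) * x_i, where t is the target output, o the perceptron output, and eta the learning rate. The minimal implementation below follows the algorithm above and learns the AND function (an assumed example):

```python
import random

# Minimal perceptron learning sketch using the perceptron rule:
#   w_i <- w_i + eta * (t - o) * x_i
# trained on the AND function (an illustrative choice).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
eta = 0.1
w = [random.uniform(-0.5, 0.5) for _ in range(2)]  # step 1: random weight vector
bias = random.uniform(-0.5, 0.5)

def output(x):
    s = sum(wi * xi for wi, xi in zip(w, x)) + bias
    return 1 if s > 0 else 0                       # threshold activation

for _ in range(100):                               # step 2: apply the update rule
    errors = 0
    for x, t in data:
        o = output(x)
        if o != t:
            errors += 1
            for i in range(len(w)):
                w[i] += eta * (t - o) * x[i]
            bias += eta * (t - o)
    if errors == 0:                                # step 3: all correctly classified?
        break                                      #   yes -> quit

print(w, bias, [output(x) for x, _ in data])       # outputs should be [0, 0, 0, 1]
```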
For a worked example of the back-propagation algorithm, see the notes provided in class;
a generic sketch is given below.
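The worked example itself is in the class notes. As a generic, hedged sketch (assuming one hidden layer, sigmoid activations, squared error, and the XOR problem, which may differ from the notes), backpropagation computes the output error, propagates it backwards to obtain each layer's error terms, and adjusts the weights by gradient descent:

```python
import numpy as np

# Generic backpropagation sketch: a 2-3-1 sigmoid network trained by
# gradient descent on the XOR problem (an illustrative choice).
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)     # target outputs

rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(2, 3)), np.zeros((1, 3))  # input  -> hidden
W2, b2 = rng.normal(size=(3, 1)), np.zeros((1, 1))  # hidden -> output
eta = 1.0

for _ in range(20000):
    # Forward pass
    H = sigmoid(X @ W1 + b1)           # hidden activations
    O = sigmoid(H @ W2 + b2)           # network outputs
    # Backward pass: error terms for the output and hidden layers
    dO = (O - T) * O * (1 - O)
    dH = (dO @ W2.T) * H * (1 - H)
    # Gradient-descent weight updates
    W2 -= eta * H.T @ dO
    b2 -= eta * dO.sum(axis=0, keepdims=True)
    W1 -= eta * X.T @ dH
    b1 -= eta * dH.sum(axis=0, keepdims=True)

print(np.round(O, 2))                  # should approach [[0], [1], [1], [0]]
```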
END