2. What are Neural Networks?
• Neural networks mimic the basic functioning of the human brain
and are inspired by how the brain interprets information.
• They are used to solve various real-world tasks because of their
ability to perform computations quickly and respond fast.
3. Artificial Neural Network
• An Artificial Neural Network model contains various components that are inspired
by the biological nervous system.
• An artificial neural network has a large number of interconnected processing
elements, also known as nodes.
• These nodes are connected to other nodes through connection links.
• Each connection link carries a weight, and these weights hold the information
about the input signal.
• With each input and each training iteration, these weights are updated.
• After all the data instances from the training data set have been fed in, the final
weights of the neural network, together with its architecture, are known as the
Trained Neural Network.
• This process is called Training of Neural Networks.
4. Artificial Neural Network
• This trained neural network is used to solve specific problems as defined in the
problem statement.
• Types of tasks that can be solved using an artificial neural network include
Classification problems, Pattern Matching, Data Clustering, etc.
• Artificial neural networks learn very efficiently and adaptively.
• They have the capability to learn “how” to solve a specific problem from the
training data they receive.
• After learning, the network can be used to solve that specific problem very
quickly and efficiently with high accuracy.
• Some real-life applications of neural networks include Air Traffic Control, Optical
Character Recognition as used by some scanning apps like Google Lens, Voice
Recognition, etc.
5. Types of Neural Networks
ANN – It is also known as an artificial neural
network. It is a feed-forward neural network
because the inputs are sent in the forward direction only.
It can also contain hidden layers, which make
the model deeper.
Inputs have a fixed length, as specified by the
programmer.
It is used for textual or tabular data. A widely
used real-life application is facial recognition.
It is comparatively less powerful than CNNs and
RNNs.
6. Types of Neural Networks
CNN
It is also known as a Convolutional Neural Network.
It is mainly used for image data.
It is used for computer vision tasks.
A real-life application is object detection in
autonomous vehicles.
It combines convolutional layers with layers of neurons. For such tasks it is
more powerful than both ANN and RNN.
7. Types of Neural Networks
• RNN
• It is also known as a Recurrent Neural Network. It is used to
process and interpret time-series data.
• In this type of model, the output from a processing node is fed
back into nodes in the same or previous layers.
• The best-known type of RNN is the LSTM (Long Short-Term
Memory) network.
• Now that we know the basics of neural networks, we can see
that their learning capability is what makes them interesting.
8. Types of learnings in Neural networks
There are three types of learning in neural networks, namely
1. Supervised Learning
2. Unsupervised Learning
3. Reinforcement Learning
9. Supervised Learning
• Supervised Learning: as the name suggests, it is a type of learning
that is looked after by a supervisor.
• It is like learning with a teacher.
• The training data consists of input–output pairs: a set of inputs and
the desired outputs.
• The output from the model is compared with the desired output,
and an error is calculated; this error signal is sent back into the
network to adjust the weights.
• This adjustment continues until no further adjustments can be made
and the output of the model matches the desired output. Here, the
model receives feedback from the environment.
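The feedback loop described above can be sketched with a single linear neuron trained by an error-correction (delta) rule; the toy data, initial weights, and learning rate below are all illustrative assumptions:

```python
# Supervised learning sketch: compare the model output to the desired
# output, compute an error signal, and feed it back to adjust the weights.
inputs = [(0.0, 0.0), (1.0, 1.0)]   # toy input patterns (assumed)
targets = [0.0, 2.0]                 # desired outputs (assumed)
w = [0.5, -0.5]                      # initial weights
lr = 0.1                             # learning rate

for epoch in range(200):
    for (x1, x2), t in zip(inputs, targets):
        y = w[0] * x1 + w[1] * x2    # model output
        error = t - y                # error signal from the "supervisor"
        w[0] += lr * error * x1      # adjust each weight using the error
        w[1] += lr * error * x2

print(w)  # the weights settle so that w[0] + w[1] is close to 2.0
```

After training, feeding in (1, 1) reproduces the desired output 2.0, which is the "no more adjustments can be made" state described above.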
11. Unsupervised Learning
• Unlike supervised learning, there is no supervisor or teacher here.
• In this type of learning, there is no feedback from the
environment, there is no desired output and the model learns on
its own.
• During the training phase, the inputs are formed into classes that
define the similarity of the members.
• Each class contains similar input patterns.
• When a new pattern is input, the model can predict which class that
input belongs to based on its similarity with existing patterns.
• If there is no such class, a new class is formed.
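A minimal sketch of this class-forming behaviour, assuming a simple distance threshold to decide similarity (the threshold and the toy patterns are made up for illustration):

```python
# Unsupervised learning sketch: group inputs into classes by similarity.
# If a new pattern is close to an existing class, it joins that class;
# otherwise a new class is formed.
def assign(pattern, classes, threshold=1.0):
    for members in classes:
        # class center = mean of the member patterns, per dimension
        center = [sum(dim) / len(members) for dim in zip(*members)]
        dist = sum((p - c) ** 2 for p, c in zip(pattern, center)) ** 0.5
        if dist < threshold:
            members.append(pattern)   # similar enough: join this class
            return classes
    classes.append([pattern])         # no similar class: form a new one
    return classes

classes = []
for p in [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.1, 4.9)]:
    assign(p, classes)

print(len(classes))  # two classes: one near the origin, one near (5, 5)
```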
13. Reinforcement Learning
• It gets the best of both worlds, that is, the best of both supervised
and unsupervised learning.
• It is like learning with a critic.
• Here there is no exact feedback from the environment; instead there is
critique feedback.
• The critique only tells how close our solution is.
• Hence the model learns on its own based on the critique information.
• It is similar to supervised learning in that it receives feedback from the
environment, but it is different in that it receives critique information
rather than the desired output.
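One way to sketch learning from a critique signal is random hill climbing: the model never sees the desired output, only a score saying how close its current solution is. The critique function, step size, and iteration count here are all illustrative assumptions:

```python
import random

# Reinforcement learning sketch: no desired output is given, only a
# scalar critique telling how good the current solution is.
def critique(x):
    return -(x - 3.0) ** 2      # higher is better; best at x = 3 (assumed)

random.seed(0)                  # make the run repeatable
x = 0.0
best = critique(x)
for _ in range(1000):
    candidate = x + random.gauss(0, 0.1)   # try a small random change
    score = critique(candidate)
    if score > best:                       # the critique says "closer"
        x, best = candidate, score         # keep the improvement

print(x)  # x ends up close to 3.0 without ever being told the answer
```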
16. • An artificial neuron can be thought of as a simple or multiple
linear regression model with an activation function at the end.
• A neuron in layer i takes the outputs of all the neurons in layer
i−1 as inputs, computes their weighted sum, and adds a bias to
it.
• The result is then sent to an activation function, as we saw in the
previous diagram.
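The weighted-sum-plus-bias-then-activation computation can be sketched as follows, using a sigmoid activation as an example (the inputs, weights, and bias are arbitrary illustrative values):

```python
import math

# A single artificial neuron: weighted sum of the inputs, plus a bias,
# passed through an activation function (sigmoid here, as an example).
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias  # linear part
    return sigmoid(z)                                       # activation

out = neuron([1.0, 2.0], [0.5, -0.25], 0.1)
print(out)  # sigmoid(0.5*1 - 0.25*2 + 0.1) = sigmoid(0.1) ≈ 0.525
```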
17. The first neuron in the first hidden layer is connected to all the inputs from the previous layer.
Similarly, the second neuron in the first hidden layer is also connected to all the
inputs from the previous layer, and so on for all the neurons in the first hidden layer.
For neurons in the second hidden layer, the outputs of the first hidden layer are
treated as inputs, and each of these neurons is connected to all of them,
likewise.
This whole process is called Forward propagation.
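Forward propagation through two small layers might look like this sketch; the layer sizes, weights, biases, and sigmoid activation are assumptions for illustration:

```python
import math

# Forward propagation sketch: every neuron in a layer takes all the
# outputs of the previous layer as its inputs.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    # one output per neuron: weighted sum of all inputs, plus a bias,
    # passed through the activation function
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [1.0, 0.5]                                        # input layer
h = layer(x, [[0.2, -0.4], [0.7, 0.1]], [0.0, -0.1])  # first hidden layer
y = layer(h, [[0.5, 0.5]], [0.2])                     # next layer (output)

print(y)  # a single value between 0 and 1
```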
18. • After this, something interesting happens.
• Once we have predicted the output, it is compared to the actual output.
• We then calculate the loss and try to minimize it.
• But how can we minimize this loss?
• For this, there is another concept known as Backpropagation.
• First the loss is calculated; then the weights and biases are adjusted in such a way
that the loss decreases.
• The weights and biases are updated with the help of another algorithm called
gradient descent.
• We will understand more about gradient descent in a later section.
• We basically move in the direction opposite to the gradient.
• This idea can be derived from a first-order Taylor series expansion of the loss.
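Moving opposite to the gradient can be sketched on a one-dimensional loss; the loss function, starting point, and learning rate below are illustrative assumptions:

```python
# Gradient descent sketch: repeatedly step opposite to the gradient of
# the loss. Here the loss is (w - 4)^2, whose gradient is 2*(w - 4).
w = 0.0
lr = 0.1
for step in range(100):
    grad = 2.0 * (w - 4.0)   # derivative of the loss with respect to w
    w -= lr * grad           # step in the direction opposite the gradient

print(w)  # approaches 4.0, the minimum of the loss
```

Each step shrinks the distance to the minimum by a constant factor, which is why the loop converges so quickly here; real networks apply the same update to every weight and bias, with gradients supplied by backpropagation.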
19. Everybody wants to be a
Diamond!!
But very few are willing to get
cut!!!