Deep Learning
Lecture 2
Book for this Course
• Link to purchase: https://amzn.to/3vaDhUU
AI, ML, and Deep Learning
Artificial intelligence
• Artificial intelligence was born in the
1950s, when a handful of pioneers
from the nascent field of computer
science started asking whether
computers could be made to “think”—
a question whose ramifications we’re
still exploring today.
Artificial intelligence
• A concise definition of the field would be as follows: the
effort to automate intellectual tasks normally performed by
humans. As such, AI is a general field that encompasses
machine learning and deep learning, but that also includes
many more approaches that don’t involve any learning.
• Early chess programs, for instance, only involved hardcoded
rules crafted by programmers, and didn’t qualify as
machine learning.
Artificial intelligence ⊃ machine learning ⊃ deep learning: each field is contained within the one before it.
Machine Learning
• A machine-learning system is trained rather than
explicitly programmed. It’s presented with many
examples relevant to a task, and it finds statistical
structure in these examples that eventually allows
the system to come up with rules for automating
the task.
• For instance, if you wished to automate the task of
tagging your vacation pictures, you could present
a machine-learning system with many examples of
pictures already tagged by humans, and the
system would learn statistical rules for associating
specific pictures to specific tags.
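To make "trained rather than explicitly programmed" concrete, here is a minimal sketch in Python. The features, file contents, and tags are invented for illustration only: instead of writing tagging rules by hand, a tiny nearest-neighbour model picks up the statistical structure of examples already tagged by humans.

```python
import numpy as np

# Hypothetical toy data: each picture is reduced to two numeric features
# (say, average brightness and fraction of blue pixels), tagged by a human.
tagged_features = np.array([[0.9, 0.8],   # beach photo
                            [0.8, 0.7],   # beach photo
                            [0.2, 0.1],   # forest photo
                            [0.3, 0.2]])  # forest photo
tags = np.array(["beach", "beach", "forest", "forest"])

def predict_tag(picture_features):
    """Tag a new picture with the tag of its closest human-tagged example."""
    distances = np.linalg.norm(tagged_features - picture_features, axis=1)
    return tags[np.argmin(distances)]

print(predict_tag(np.array([0.85, 0.75])))  # -> "beach"
```

No tagging rule was ever written down; the "rule" is implicit in the labeled examples the system was given.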
Machine Learning and Statistics
• Unlike statistics, machine learning tends
to deal with large, complex datasets
(such as a dataset of millions of images,
each consisting of tens of thousands of
pixels) for which classical statistical
analysis such as Bayesian analysis would
be impractical.
• As a result, machine learning, and
especially deep learning, exhibits
comparatively little mathematical
theory—maybe too little—and is
engineering oriented. It’s a hands-on
discipline in which ideas are proven
empirically more often than
theoretically.
Learning representations from data
• To define deep learning and
understand the difference between
deep learning and other machine-
learning approaches, first we need
some idea of what machine learning
algorithms do.
Needed for Machine Learning
• Input data points (for instance, pictures or sound files).
• Examples of the expected output (for instance, human-provided tags for those pictures).
• A way to measure whether the algorithm is doing a good job, so that its results can be fed back to adjust how it works.
Central problem of ML and DL
• A machine-learning model transforms its input data into meaningful
outputs, a process that is “learned” from exposure to known
examples of inputs and outputs.
• Therefore, the central problem in machine learning and deep learning
is to meaningfully transform data: in other words, to learn useful
representations of the input data at hand—representations that get
us closer to the expected output. Before we go any further: what’s a
representation?
Representations for the input data
• Machine-learning models are all about finding
appropriate representations for their input data—
transformations of the data that make it more
amenable to the task at hand, such as a classification task.
Representation of data in ML
Representation
• Learning, in the context of machine learning, describes an
automatic search process for better representations.
• All machine-learning algorithms consist of automatically
finding such transformations that turn data into more-
useful representations for a given task.
• These operations can be coordinate changes, as you just
saw, or linear projections (which may destroy information),
translations, nonlinear operations (such as “select all points
such that x > 0”), and so on.
What makes Deep Learning Special?
• The deep in deep learning isn’t a reference to any kind of deeper
understanding achieved by the approach; rather, it stands for this
idea of successive layers of representations.
• How many layers contribute to a model of the data is called the
depth of the model.
• Other appropriate names for the field could have been layered
representations learning and hierarchical representations learning.
• Modern deep learning often involves tens or even hundreds of
successive layers of representations— and they’re all learned
automatically from exposure to training data.
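A minimal sketch of what "successive layers of representations" looks like in code, using the Keras API (this assumes TensorFlow is installed; the layer sizes are arbitrary): each Dense layer learns one more representation of the input on the way to the output, and the number of stacked layers is the model's depth.

```python
from tensorflow import keras
from tensorflow.keras import layers

# A model of depth 3: three successive layers of learned representations.
model = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(64, activation="relu"),    # first learned representation
    layers.Dense(64, activation="relu"),    # second learned representation
    layers.Dense(10, activation="softmax"), # output layer
])
model.summary()  # lists the layers and the weights (parameters) each one holds
```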
What is shallow learning?
Other approaches to machine learning tend to focus on learning only one or two layers of representations of the data; for that reason, they are sometimes called shallow learning.
Deep learning
In deep learning, these layered representations are (almost always)
learned via models called neural networks, structured in literal layers
stacked on top of each other.
There’s no evidence that the brain implements anything like the learning
mechanisms used in modern deep-learning models. You may come
across pop-science articles proclaiming that deep learning works like the
brain or was modeled after the brain, but that isn’t the case.
Deep Learning in simple words
Deep learning is a mathematical framework for learning
representations from data.
What do the representations learned by a deep-learning algorithm look like?
You can think of a deep network as a multistage information-distillation operation, where information goes
through successive filters and comes out increasingly purified (that is, useful with regard to some task).
Finding the right values for these weights
The specification of what a layer does to its input data is stored in the layer’s weights, which in
essence are a bunch of numbers. In technical terms, we’d say that the transformation
implemented by a layer is parameterized by its weights.
(Weights are also sometimes called the parameters of a layer.) In this context, learning means
finding a set of values for the weights of all layers in a network, such that the network will
correctly map example inputs to their associated targets.
But here’s the thing: a deep neural network can contain tens of millions of parameters. Finding
the correct value for all of them may seem like a daunting task, especially given that modifying
the value of one parameter will affect the behavior of all the others.
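A rough sketch of the idea above: a single dense layer is nothing more than a transformation of its input parameterized by a weight matrix W and a bias vector b (the shapes here are illustrative).

```python
import numpy as np

def dense_layer(x, W, b):
    """relu(x · W + b): the transformation this layer applies, parameterized by W and b."""
    return np.maximum(0.0, x @ W + b)

# With random (untrained) weights the layer already defines *some* transformation;
# learning means finding values of W and b that map inputs to the right targets.
rng = np.random.default_rng(0)
W = rng.normal(size=(784, 64))
b = np.zeros(64)
x = rng.normal(size=(1, 784))      # one example input
print(dense_layer(x, W, b).shape)  # (1, 64)
```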
Loss Function
To control something, first you need to be able to
observe it. To control the output of a neural
network, you need to be able to measure how far
this output is from what you expected.
This is the job of the loss function of the network,
also called the objective function. The loss
function takes the predictions of the network and
the true target (what you wanted the network to
output) and computes a distance score, capturing
how well the network has done on this specific
example.
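As one concrete (and deliberately simple) choice of loss function, the sketch below computes a mean-squared-error distance between the network's prediction and the true target. Classification networks more often use a crossentropy loss, but the role is the same: a single score that says how far off the network is on this example.

```python
import numpy as np

def mse_loss(predictions, targets):
    """Distance score between what the network predicted and what we wanted."""
    return np.mean((predictions - targets) ** 2)

predictions = np.array([0.2, 0.7, 0.1])  # network output for one example
targets     = np.array([0.0, 1.0, 0.0])  # the true target
print(mse_loss(predictions, targets))    # ~0.047; lower means a better prediction
```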
Backpropagation Algorithm
The fundamental trick in deep
learning is to use this score as a
feedback signal to adjust the value of
the weights a little, in a direction that
will lower the loss score for the
current example (see figure 1.9).
This adjustment is the job of the
optimizer, which implements what’s
called the Backpropagation
algorithm: the central algorithm in
deep learning.
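The optimizer's basic move can be sketched in a few lines: nudge each weight against its gradient so the loss goes down. In a real network, backpropagation computes that gradient for every weight by applying the chain rule backward through the layers; the one-weight toy loss below only illustrates the update rule itself.

```python
def sgd_step(weight, gradient, learning_rate=0.1):
    """Adjust one weight a little, in the direction that lowers the loss."""
    return weight - learning_rate * gradient

# Toy loss: loss(w) = (w - 3)**2, whose gradient is 2 * (w - 3).
w = 0.0
for _ in range(100):
    grad = 2 * (w - 3)
    w = sgd_step(w, grad)
print(w)  # close to 3.0, the weight value that minimizes this toy loss
```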
Method
Initially, the weights of the network are assigned random values, so the network merely implements a series of
random transformations. Naturally, its output is far from what it should ideally be, and the loss score is
accordingly very high.
But with every example the network processes, the weights are adjusted a little in the correct direction, and the
loss score decreases. This is the training loop, which, repeated a sufficient number of times (typically tens of
iterations over thousands of examples), yields weight values that minimize the loss function.
A network with a minimal loss is one for which the outputs are as close as they can be to the targets: a trained
network.
Once again, it’s a simple mechanism that, once scaled, ends up looking like magic.
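Putting the loss function, the optimizer, and the training loop together, here is a minimal end-to-end sketch with Keras on MNIST (assuming TensorFlow is installed): the weights start out random, and fit() repeats the adjust-and-measure loop described above until the loss is low.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Load and flatten the MNIST digits.
(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype("float32") / 255

# Weights are initialized randomly; training will find better values for them.
model = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="rmsprop",                     # implements the weight updates
              loss="sparse_categorical_crossentropy",  # the feedback signal
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=128)  # the training loop
```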
What has deep learning achieved so far?
• Near-human-level image classification
• Near-human-level speech recognition
• Near-human-level handwriting transcription
• Improved machine translation
• Improved text-to-speech conversion
• Digital assistants such as Google Now and Amazon Alexa
• Improved search results on the web
• The ability to answer natural-language questions
Deep Learning hype
Deep learning has reached a level of public attention and industry investment never before seen in the history
of AI, but it isn’t the first successful form of machine learning.
It’s safe to say that most of the machine-learning algorithms used in the industry today aren’t deep-learning
algorithms.
Deep learning isn’t always the right tool for the job—sometimes there isn’t enough data for deep learning to
be applicable, and sometimes the problem is better solved by a different algorithm.
If deep learning is your first contact with machine learning, then you may find yourself in a situation where all
you have is the deep-learning hammer, and every machine-learning problem starts to look like a nail. The only
way not to fall into this trap is to be familiar with other approaches and practice them when appropriate.