Introduction To TensorFlow | Deep Learning Using TensorFlow | TensorFlow Tutorial | Edureka
Agenda
▪ Difference Between Machine Learning and Deep Learning
▪ What is Deep Learning?
▪ What is TensorFlow?
▪ TensorFlow Data Structures
▪ TensorFlow Use-Case
Machine Learning vs Deep Learning
Let’s look at the differences between Machine Learning and Deep Learning.

Machine Learning | Deep Learning
High performance on less data | Low performance on less data
Can work on low-end machines | Requires high-end machines
Features need to be hand-coded as per the domain and data type | Tries to learn high-level features from data

(Figure: performance vs. amount of data for Deep Learning and Machine Learning)
What is Deep Learning?
Now it’s time to understand what exactly Deep Learning is.
(Figure: a neural network with an input layer, two hidden layers and an output layer)
A collection of statistical machine learning techniques used to learn feature hierarchies, often based on artificial neural networks.
What are Tensors?
Let’s see what Tensors are.
▪ Tensors are the standard way of representing data in TensorFlow (deep learning).
▪ Tensors are multidimensional arrays, an extension of two-dimensional tables (matrices) to data with higher dimension.
(Figure: example tensors of dimension [1], [2] and [3])
Tensor Rank

Rank | Math Entity | Python Example
0 | Scalar (magnitude only) | s = 483
1 | Vector (magnitude and direction) | v = [1.1, 2.2, 3.3]
2 | Matrix (table of numbers) | m = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
3 | 3-Tensor (cube of numbers) | t = [[[2], [4], [6]], [[8], [10], [12]], [[14], [16], [18]]]
n | n-Tensor (you get the idea) | ....
Tensor Data Types
In addition to dimensionality, Tensors have a data type as well; you can assign any one of TensorFlow’s data types (such as tf.float32, tf.int32, tf.string or tf.bool) to a Tensor.
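As a small illustration (a minimal sketch using the same TensorFlow 1.x API as the rest of this deck), the dtype argument of tf.constant sets a tensor’s data type, and tf.cast converts between types:

import tensorflow as tf

# A few of the available data types; the full list also includes
# tf.int8/16/64, tf.uint8, tf.float64, tf.bool, tf.complex64, ...
a = tf.constant(3, dtype=tf.int32)      # 32-bit integer tensor
b = tf.constant(3.0, dtype=tf.float32)  # 32-bit float tensor
c = tf.cast(a, tf.float32)              # convert a to float32

sess = tf.Session()
print(sess.run([a, b, c]))              # [3, 3.0, 3.0]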
What Is TensorFlow?
Now it’s time to explore TensorFlow.
▪ TensorFlow is a Python library used to implement deep networks.
▪ In TensorFlow, computation is approached as a dataflow graph.
(Figure: a tensor of numbers flowing through a computational graph of functions such as MatMul, Add and ReLU applied to W, x and b; the tensors and the dataflow graph together give TensorFlow its name)
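A minimal sketch of the graph in the figure above (the shapes of W, x and b are hypothetical, chosen only for illustration):

import tensorflow as tf
import numpy as np

W = tf.constant(np.random.randn(3, 4), dtype=tf.float32)  # weights
x = tf.constant(np.random.randn(4, 1), dtype=tf.float32)  # input
b = tf.constant(np.random.randn(3, 1), dtype=tf.float32)  # bias

y = tf.nn.relu(tf.matmul(W, x) + b)  # MatMul -> Add -> ReLU, as in the figure

sess = tf.Session()
print(sess.run(y))  # the resulting 3x1 tensor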
TensorFlow Code-Basics
Let’s understand the fundamentals of TensorFlow
TensorFlow core programs consist of two discrete sections:
▪ Building a computational graph
▪ Running a computational graph
A computational graph is a series of TensorFlow
operations arranged into a graph of nodes
TensorFlow Building And Running A Graph

Building a computational graph:
import tensorflow as tf
node1 = tf.constant(3.0, tf.float32)   # constant nodes
node2 = tf.constant(4.0)
print(node1, node2)

Running a computational graph:
sess = tf.Session()
print(sess.run([node1, node2]))

To actually evaluate the nodes, we must run the computational graph within a session, as the session encapsulates the control and state of the TensorFlow runtime.
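For reference, printing the nodes shows Tensor objects rather than values, while running them in a session produces the values (output as in the standard TensorFlow 1.x getting-started example; exact node names may differ):

print(node1, node2)
# Tensor("Const:0", shape=(), dtype=float32) Tensor("Const_1:0", shape=(), dtype=float32)
print(sess.run([node1, node2]))
# [3.0, 4.0]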
TensorFlow Example

import tensorflow as tf
# Build a graph
a = tf.constant(5.0)
b = tf.constant(6.0)
c = a * b
# Launch the graph in a session
sess = tf.Session()
# Evaluate the tensor 'c'
print(sess.run(c))

(Figure: the computational graph, in which the Const nodes a (5.0) and b (6.0) feed a Mul node c that evaluates to 30.0 when the graph is run)
Graph Visualization
▪ For visualizing TensorFlow graphs, we use TensorBoard.
▪ The first argument when creating the FileWriter is an output directory name, which will be created if it doesn’t exist.

file_writer = tf.summary.FileWriter('log_simple_graph', sess.graph)

TensorBoard runs as a local web app on port 6006 (the default port; “6006” is “goog” upside-down).

tensorboard --logdir=path_to_the_graph

Execute this command in the command prompt.
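Putting the two snippets together, a minimal end-to-end sketch (the directory name log_simple_graph is just an example):

import tensorflow as tf

a = tf.constant(5.0, name='a')
b = tf.constant(6.0, name='b')
c = tf.multiply(a, b, name='c')

sess = tf.Session()
print(sess.run(c))  # 30.0

# Write the graph definition so TensorBoard can visualize it
file_writer = tf.summary.FileWriter('log_simple_graph', sess.graph)
file_writer.close()

# Then, from the command line:
#   tensorboard --logdir=log_simple_graph
# and open http://localhost:6006 in a browser.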
Constants, Placeholders and Variables
Let’s understand what constants, placeholders and variables are.
Constant
One type of node is a constant. It takes no inputs, and it outputs a value it stores internally.

import tensorflow as tf
node1 = tf.constant(3.0, tf.float32)   # constant nodes
node2 = tf.constant(4.0)
print(node1, node2)

What if I want the graph to accept external inputs?
Placeholder
A graph can be parameterized to accept external inputs, known as placeholders. A placeholder is a promise to provide a value later.
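A minimal sketch of feeding placeholders at run time (the node names are illustrative):

import tensorflow as tf

a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b   # shortcut for tf.add(a, b)

sess = tf.Session()
print(sess.run(adder_node, {a: 3, b: 4.5}))          # 7.5
print(sess.run(adder_node, {a: [1, 3], b: [2, 4]}))  # [3. 7.]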
How do I modify the graph if I want a new output for the same input?
Variable
To make the model trainable, we need to be able to modify the graph to get new outputs with the same input. Variables allow us to add trainable parameters to a graph.
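A minimal sketch (the counter variable is purely illustrative): unlike constants, variables must be initialized explicitly, and their value can be changed later, for example by tf.assign or by an optimizer:

import tensorflow as tf

counter = tf.Variable(0, name='counter')     # a stateful, modifiable node
increment = tf.assign(counter, counter + 1)  # an op that updates the variable

init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)                               # variables hold no value until initialized

for _ in range(3):
    print(sess.run(increment))               # 1, 2, 3: same graph, new outputs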
Let Us Now Create A Model
Simple Linear Model
import tensorflow as tf
W = tf.Variable([.3], tf.float32)
b = tf.Variable([-.3], tf.float32)
x = tf.placeholder(tf.float32)
linear_model = W * x + b
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)
print(sess.run(linear_model, {x:[1,2,3,4]}))
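With W = 0.3 and b = -0.3 this prints approximately the following (values as in the standard getting-started walkthrough; float rounding may differ slightly):

# linear_model = 0.3 * x - 0.3 evaluated at x = [1, 2, 3, 4]
# [ 0.          0.30000001  0.60000002  0.90000004]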
We've created a model, but we
don't know how good it is yet
How To Increase The Efficiency Of The Model?
(Figure: the training loop, in which the loss of the model is calculated, the variables are updated, and the process is repeated until the loss becomes very small)
A loss function measures how far apart the current model is from the provided data.
Calculating The Loss
In order to understand how good the model is, we should know the loss/error.
To evaluate the model on training data, we need a y, i.e. a placeholder to provide the desired values, and we need to write a loss function.
We’ll use a standard loss model for linear regression:
▪ (linear_model - y) creates a vector where each element is the corresponding example’s error delta.
▪ tf.square is used to square that error.
▪ tf.reduce_sum is used to sum all the squared errors.
y = tf.placeholder(tf.float32)
squared_deltas = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_deltas)
print(sess.run(loss, {x:[1,2,3,4], y:[0,-1,-2,-3]}))
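For the training data above this prints a loss of 23.66. As a sanity check (a sketch based on the standard getting-started example, not part of the original slides), assigning the “perfect” values W = -1 and b = 1 drives the loss to zero:

fixW = tf.assign(W, [-1.])
fixb = tf.assign(b, [1.])
sess.run([fixW, fixb])
print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))  # 0.0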
Reducing The Loss
The optimizer modifies each variable according to the magnitude of the derivative of the loss with respect to that variable. Here we will use the Gradient Descent Optimizer.
How does Gradient Descent actually work? Let’s understand this with an analogy.
Reducing The Loss
• Suppose you are at the top of a mountain, and you have to reach a lake which is at the lowest point of the mountain (a.k.a. the valley).
• The twist is that you are blindfolded and have zero visibility of where you are headed. So, what approach will you take to reach the lake?
• The best way is to check the ground near you and observe where the land tends to descend.
• This will give you an idea of the direction in which you should take your first step. If you follow the descending path, it is very likely you will reach the lake.
In this analogy, consider the length of the step as the learning rate, the position of the hiker as the weight, and the process of climbing down the mountain as minimizing the cost/loss function.
(Figure: the cost curve J(w) with its global cost/loss minimum Jmin(w))
Let us understand the math behind Gradient Descent.
Batch Gradient Descent
▪ The weights are updated incrementally after each epoch. The cost function J(⋅), the sum of squared errors (SSE), can be written as shown below.
▪ The magnitude and direction of the weight update is computed by taking a step in the opposite direction of the cost gradient.
▪ The weights are then updated after each epoch via the update rule below, where Δw is a vector that contains the weight updates of each weight coefficient w.
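The formulas themselves did not survive the slide export; for reference, the standard SSE cost and batch gradient descent update that match the description above are:

J(w) = \frac{1}{2} \sum_{i} \left( y^{(i)} - \hat{y}^{(i)} \right)^{2}

w := w + \Delta w, \qquad \Delta w = -\eta \, \nabla J(w)

\Delta w_j = -\eta \, \frac{\partial J}{\partial w_j} = \eta \sum_{i} \left( y^{(i)} - \hat{y}^{(i)} \right) x_j^{(i)}

where \hat{y}^{(i)} is the model’s output for training sample i and \eta is the learning rate.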
Reducing The Loss
Suppose we want to find the best parameters (W and b) for our learning algorithm. We can apply the same analogy and find the best possible values for those parameters. Consider the example below:

optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)

sess.run(init)
for i in range(1000):
    sess.run(train, {x:[1,2,3,4], y:[0,-1,-2,-3]})

print(sess.run([W, b]))
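After 1000 training steps this converges very close to the ideal parameters (final values as reported in the standard getting-started walkthrough; exact digits may vary):

# [array([-0.9999969], dtype=float32), array([ 0.99999082], dtype=float32)]
# i.e. W ≈ -1 and b ≈ 1, which fit the training data y = -x + 1 exactly.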
TensorFlow Use-Case
Long Short Term Memory Networks Use-Case
We will feed an LSTM with correct sequences from the text, 3 symbols as inputs and 1 labeled symbol as output; eventually the neural network will learn to predict the next symbol correctly.
(Figure: an LSTM cell with three inputs and 1 output; the inputs “had a general” produce a prediction that is compared against the label “council”)
long ago , the mice had a general council to consider what measures
they could take to outwit their common enemy , the cat . some said
this , and some said that but at last a young mouse got up and said he
had a proposal to make , which he thought would meet the case . you
will all agree , said he , that our chief danger consists in the sly and
treacherous manner in which the enemy approaches us . now , if we
could receive some signal of her approach , we could easily escape from
her . i venture , therefore , to propose that a small bell be procured , and
attached by a ribbon round the neck of the cat . by this means we
should always know when she was about , and could easily retire while
she was in the neighborhood . this proposal met with general applause ,
until an old mouse got up and said that is all very well , but who is to
bell the cat ? the mice looked at one another and nobody spoke . then
the old mouse said it is easy to propose impossible remedies .
A short story from Aesop’s Fables with 112 unique symbols.
How do we train the network?
Long Short Term Memory Networks Use-Case
A unique integer value is assigned to each symbol because
LSTM inputs can only understand real numbers.
20 6 33
LSTM
cell
LSTM cell with
three inputs and
1 output.
had a general
.01 .02 .6 .00
37
37
vs
Council
Council
112-element
vector
Copyright © 2017, edureka and/or its affiliates. All rights reserved.
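A minimal sketch of this setup (the hyperparameters, variable names and optimizer are assumptions, not taken from the session):

import tensorflow as tf

vocab_size = 112   # unique symbols in the story
n_input = 3        # the LSTM sees 3 symbols at a time
n_hidden = 512     # assumed LSTM state size

x = tf.placeholder(tf.float32, [None, n_input, 1])   # 3 integer-encoded symbols in
y = tf.placeholder(tf.float32, [None, vocab_size])   # one-hot label of the next symbol

weights = tf.Variable(tf.random_normal([n_hidden, vocab_size]))
biases = tf.Variable(tf.random_normal([vocab_size]))

# Split the 3-symbol window into a list of inputs and run them through one LSTM cell
inputs = tf.unstack(x, n_input, axis=1)
cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden)
outputs, states = tf.nn.static_rnn(cell, inputs, dtype=tf.float32)

# Project the last output onto a 112-element vector of scores (one per symbol)
logits = tf.matmul(outputs[-1], weights) + biases
pred = tf.nn.softmax(logits)   # probabilities; the argmax is the predicted symbol

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
train = tf.train.RMSPropOptimizer(learning_rate=0.001).minimize(loss)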
Session In A Minute
▪ Machine Learning vs Deep Learning
▪ What is Deep Learning?
▪ What is TensorFlow?
▪ TensorFlow Code-Basics
▪ Simple Linear Model
▪ TensorFlow Use-Case