Autoencoders
● Autoencoders are
○ Artificial neural networks
○ Capable of learning efficient representations of the input data, called
codings, without any supervision
○ The training set is unlabeled.
● These codings typically have a much lower dimensionality than the input
data, making autoencoders useful for dimensionality reduction
Why use Autoencoders?
● Useful for dimensionality reduction
● Autoencoders act as powerful feature detectors,
● And they can be used for unsupervised pre-training of deep neural
networks
● Lastly, they are capable of randomly generating new data that looks very
similar to the training data; this is called a generative model
Why use Autoencoders?
For example, we could train an autoencoder on pictures of faces, and it
would then be able to generate new faces
● Surprisingly, autoencoders work by simply learning to copy their inputs
to their outputs
● This may sound like a trivial task, but we will see that constraining the
network in various ways can make it rather difficult
For example
● You can limit the size of the internal representation, or you can add noise
to the inputs and train the network to recover the original inputs (see the
noise-injection sketch after this list).
● These constraints prevent the autoencoder from trivially copying the
inputs directly to the outputs, which forces it to learn efficient ways of
representing the data
● In short, the codings are byproducts of the autoencoder’s attempt to
learn the identity function under some constraints
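To make the noise-injection constraint concrete, here is a minimal sketch (not from the original slides) in the same TensorFlow 1.x style used later in this deck. The layer sizes, the noise_level value, and the learning rate are arbitrary assumptions for illustration only:

import tensorflow as tf
from tensorflow.contrib.layers import fully_connected

n_inputs = 3         # assumed input dimensionality
n_hidden = 2         # size-limited internal representation (the codings)
noise_level = 0.1    # assumed noise strength

X = tf.placeholder(tf.float32, shape=[None, n_inputs])
X_noisy = X + noise_level * tf.random_normal(tf.shape(X))  # corrupt the inputs

hidden = fully_connected(X_noisy, n_hidden, activation_fn=None)   # encoder
outputs = fully_connected(hidden, n_inputs, activation_fn=None)   # decoder

# The loss compares the reconstructions to the clean inputs, so the network
# cannot succeed by simply passing the (noisy) inputs through unchanged.
reconstruction_loss = tf.reduce_mean(tf.square(outputs - X))
training_op = tf.train.AdamOptimizer(0.01).minimize(reconstruction_loss)

Because the loss is measured against the clean inputs, the only way to reconstruct well is to capture the underlying structure of the data rather than the noise.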
What we’ll learn?
● We will explain in more depth how autoencoders work
● What types of constraints can be imposed
● And how to implement them using TensorFlow, whether it is for
○ Dimensionality reduction,
○ Feature extraction,
○ Unsupervised pretraining,
○ Or as generative models
Let’s try to understand, with an example, why constraining an autoencoder
during training pushes it to discover and exploit patterns in the data.
Efficient Data Representations
Which of the following number sequences do you find the easiest
to memorize?
● 40, 27, 25, 36, 81, 57, 10, 73, 19, 68
● 50, 25, 76, 38, 19, 58, 29, 88, 44, 22, 11, 34, 17, 52, 26, 13, 40, 20
● At first glance, it would seem that the first sequence should be easier,
since it is much shorter
● However, if you look carefully at the second sequence, you may notice
that it follows two simple rules:
○ Even numbers are followed by their half,
○ And odd numbers are followed by their triple plus one
This is a famous sequence known as the hailstone sequence
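To make the two rules concrete, here is a small Python sketch (not part of the original slides) that generates a hailstone sequence from a starting number and a desired length:

def hailstone(start, length):
    # Even numbers are followed by their half, odd numbers by their triple plus one
    seq = [start]
    while len(seq) < length:
        n = seq[-1]
        seq.append(n // 2 if n % 2 == 0 else 3 * n + 1)
    return seq

print(hailstone(50, 18))
# [50, 25, 76, 38, 19, 58, 29, 88, 44, 22, 11, 34, 17, 52, 26, 13, 40, 20]

Knowing the two rules, the first number (50), and the length (18) is enough to regenerate the entire second sequence.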
● Once you notice this pattern, the second sequence becomes much easier
to memorize than the first because you only need to memorize the two
rules,
○ The first number,
○ And the length of the sequence
● Note that if you could quickly and easily memorize very long sequences,
you would not care much about the existence of a pattern in the
second sequence
● You would just learn every number by heart, and that would be that
● It is the fact that it is hard to memorize long sequences that makes
it useful to recognize patterns,
● And hopefully this clarifies why constraining an autoencoder during
training pushes it to discover and exploit patterns in the data
The relationship between memory, perception, and pattern matching was
famously studied by William Chase and Herbert Simon in the early
1970s
Let’s see how their study of chess players is similar to an
Autoencoder
● They observed that expert chess players were able to memorize the
positions of all the pieces in a game by looking at the board for just 5
seconds,
● A task that most people would find impossible
● However, this was only the case when the pieces were placed in realistic
positions from actual games, not when the pieces were placed randomly.
● Chess experts don’t have a much better memory than you and me,
● They just see chess patterns more easily thanks to their experience with
the game
● Noticing patterns helps them store information efficiently
● Just like the chess players in this memory experiment, an autoencoder
○ looks at the inputs,
○ converts them to an efficient internal representation,
○ and then spits out something that (hopefully) looks very close to the
inputs
Composition of Autoencoder
An autoencoder is always composed of two parts:
● An encoder or recognition network that converts the inputs to an
internal representation,
● Followed by a decoder or generative network that converts the
internal representation to the outputs
An autoencoder typically has the same architecture as a Multi-Layer Perceptron,
except that the number of neurons in the output layer must be equal to the
number of inputs.
There is just one hidden layer composed of two neurons (the encoder), and one
output layer composed of three neurons (the decoder).
The outputs are often called the reconstructions, since the autoencoder tries
to reconstruct the inputs, and the cost function contains a reconstruction loss
that penalizes the model when the reconstructions are different from the inputs.
Because the internal representation has a lower dimensionality than the input
data (it is 2D instead of 3D), the autoencoder is said to be undercomplete.
● An undercomplete autoencoder cannot trivially copy its inputs to the codings,
yet it must find a way to output a copy of its inputs
● It is forced to learn the most important features in the input data and drop
the unimportant ones
Let’s implement a very simple undercomplete autoencoder for
dimensionality reduction
PCA with an Undercomplete Linear Autoencoder
If the autoencoder uses only linear activations and the cost function is
the Mean Squared Error (MSE), then it can be shown that it ends up
performing Principal Component Analysis (PCA)
Now we will build a simple linear autoencoder to perform PCA on a 3D
dataset, projecting it to 2D
>>> import tensorflow as tf
>>> from tensorflow.contrib.layers import fully_connected
>>> n_inputs = 3 # 3D inputs
>>> n_hidden = 2 # 2D codings
>>> n_outputs = n_inputs
>>> learning_rate = 0.01
>>> X = tf.placeholder(tf.float32, shape=[None, n_inputs])
>>> hidden = fully_connected(X, n_hidden, activation_fn=None)
>>> outputs = fully_connected(hidden, n_outputs, activation_fn=None)
>>> reconstruction_loss = tf.reduce_mean(tf.square(outputs - X)) # MSE
>>> optimizer = tf.train.AdamOptimizer(learning_rate)
>>> training_op = optimizer.minimize(reconstruction_loss)
>>> init = tf.global_variables_initializer()
Run it on Notebook
The two things to note in the previous code are:
● The number of outputs is equal to the number of inputs
● To perform simple PCA, we set activation_fn=None i.e., all neurons
are linear, and the cost function is the MSE.
Now let’s load the dataset, train the model on the training set, and use it to
encode the test set (i.e., project it to 2D):
>>> X_train, X_test = [...] # load the dataset
>>> n_iterations = 1000
>>> codings = hidden # the output of the hidden layer provides the codings
>>> with tf.Session() as sess:
        init.run()
        for iteration in range(n_iterations):
            training_op.run(feed_dict={X: X_train}) # no labels
        codings_val = codings.eval(feed_dict={X: X_test})
Run it on Notebook
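The dataset-loading step above is left elided in the slides. One possible way to produce a toy 3D dataset for this experiment (purely an assumption, not the deck's actual data) is to generate points that lie close to a 2D plane with NumPy:

import numpy as np

rnd = np.random.RandomState(4)
m = 200
angles = rnd.rand(m) * 3 * np.pi / 2 - 0.5
data = np.empty((m, 3), dtype=np.float32)
data[:, 0] = np.cos(angles) + np.sin(angles) / 2 + 0.1 * rnd.randn(m) / 2
data[:, 1] = np.sin(angles) * 0.7 + 0.3 * rnd.randn(m) / 2
data[:, 2] = data[:, 0] * 0.1 + data[:, 1] * 0.3 + 0.1 * rnd.randn(m)

X_train, X_test = data[:150], data[150:]   # simple train/test split

Any 3D data whose variance is mostly concentrated in a 2D subspace would work equally well here.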
The figure above shows the original 3D dataset (on the left) and the output of
the autoencoder’s hidden layer, i.e., the coding layer (on the right).
As you can see, the autoencoder found the best 2D plane to project the
data onto, preserving as much variance in the data as it could, just like PCA.
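To check the "just like PCA" claim empirically, one could compare the autoencoder against scikit-learn's PCA on the same data. This is a hedged sketch: it assumes the X_train/X_test arrays from above and an extra evaluation outputs_val = outputs.eval(feed_dict={X: X_test}) run inside the session; if training converged, both reconstruction errors should be very close:

import numpy as np
from sklearn.decomposition import PCA

# PCA reconstruction error on the test set
pca = PCA(n_components=2)
pca.fit(X_train)
X_test_rec_pca = pca.inverse_transform(pca.transform(X_test))
mse_pca = np.mean(np.square(X_test_rec_pca - X_test))

# Autoencoder reconstruction error on the test set
# (outputs_val is assumed to have been evaluated inside the tf.Session)
mse_ae = np.mean(np.square(outputs_val - X_test))

print("PCA reconstruction MSE:        ", mse_pca)
print("Autoencoder reconstruction MSE:", mse_ae)

Note that the 2D codings themselves may differ from the principal components by a rotation or scaling within the plane; it is the projection plane (and hence the reconstruction error) that matches PCA.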
Questions?
https://discuss.cloudxlab.com
reachus@cloudxlab.com