Compressed Sensing
Using Generative
Models
Kenneth Emeka Odoh
14th Nov 2018 (Kaggle Data Science Meetup | SFU Ventures Lab)
Vancouver, BC
Table of Contents
● Compressed sensing
● Generative model
Inspired by the paper Compressed Sensing Using Generative Models
[Bora et al., 2017]
{https://arxiv.org/abs/1703.03208} 2
Kenneth Emeka Odoh
Compressed Sensing
Compressed sensing estimates a signal from an underdetermined
system of measurements by exploiting the structure of the
problem.
y = Ax + η
where x is the signal to be recovered, η is noise, and y is the
measurement vector.
3
Kenneth Emeka Odoh
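As an illustration (not from the original deck), here is a minimal scikit-learn sketch of sparse recovery for y = Ax + η via L1-regularized least squares (Lasso). The matrix size, sparsity level, noise scale, and regularization strength are arbitrary choices for demonstration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

n, m, k = 50, 200, 5                              # n measurements, m-dim signal, k non-zeros
A = rng.normal(0, 1 / np.sqrt(n), size=(n, m))    # random Gaussian measurement matrix
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.normal(size=k)   # sparse ground-truth signal

y = A @ x_true + 0.01 * rng.normal(size=n)        # y = Ax + noise

# L1-regularized least squares exploits sparsity to solve the underdetermined system
lasso = Lasso(alpha=0.01, max_iter=10000)
lasso.fit(A, y)
x_hat = lasso.coef_

print("relative recovery error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```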
Applications of Compressed Sensing
● Dimensionality reduction / compression techniques
○ PCA, DCT, FFT, JPEG
● Denoising
● Deblurring
● Anomaly detection
● Blind source separation
There are a number of recovery techniques for compressed sensing, drawn from
optimization theory, e.g. linear programming and gradient descent.
4
Underdetermined Linear System
Ax = b
A has dimension n x m, x is m x 1,
and b is n x 1, where n < m (fewer equations than unknowns)
b is the measured signal, x is the signal to be recovered
b and x are different representations of the same
signal, e.g. in the time and frequency domains
● Infinitely many solutions
5
Kenneth Emeka Odoh
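A small numpy sketch (my own illustration, with arbitrary dimensions) showing that an underdetermined system has infinitely many solutions and that the pseudoinverse picks the minimum-norm one.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 8                       # fewer equations than unknowns: underdetermined
A = rng.normal(size=(n, m))
b = rng.normal(size=n)

# The pseudoinverse selects the minimum-L2-norm solution out of infinitely many
x_min_norm = np.linalg.pinv(A) @ b

# Adding any null-space vector of A gives another valid solution
null_proj = np.eye(m) - np.linalg.pinv(A) @ A     # projector onto the null space of A
x_other = x_min_norm + null_proj @ rng.normal(size=m)

print(np.allclose(A @ x_min_norm, b), np.allclose(A @ x_other, b))   # both solve Ax = b
print(np.linalg.norm(x_min_norm) <= np.linalg.norm(x_other))         # min-norm is smallest
```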
Structure on the Problem
● Sparsity
● Orthogonality
● Symmetry
● Smoothness
Imposing structure on the problem is a form of inductive bias. This is achieved by
adding a regularizer, which improves efficiency and simplifies processing.
Handling different representations
● Parseval's theorem (see the sketch after this slide)
● Heisenberg uncertainty principle
6
Kenneth Emeka Odoh
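A quick numerical check of Parseval's theorem (an illustrative sketch, not part of the original deck), using numpy's unnormalized FFT convention: the signal's energy is the same in the time and frequency representations.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=1024)            # time-domain signal
X = np.fft.fft(x)                    # frequency-domain representation

# Parseval's theorem: energy is preserved across the two representations
energy_time = np.sum(np.abs(x) ** 2)
energy_freq = np.sum(np.abs(X) ** 2) / len(x)   # 1/N factor for numpy's FFT convention
print(np.isclose(energy_time, energy_freq))     # True
```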
Estimating A
Conditions on the measurement matrix A:
● Restricted Isometry Property (RIP)
A behaves approximately like an orthonormal matrix on sparse vectors, preserving their norms.
● Restricted Eigenvalue Condition (REC)
This is a sufficient condition: it ensures that sparse vectors are far from the null
space of A. It can be satisfied by drawing the entries of A from a thin-tailed distribution.
Theorems 1.1 and 1.2 imply that recovery achieves low loss with high probability when A is
sampled from a thin-tailed (Gaussian) distribution, with tight bounds on the number of
measurements [Bora et al., 2017].
7
Kenneth Emeka Odoh
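An empirical sketch (illustrative only, with arbitrary sizes) of the RIP-style behaviour of a Gaussian measurement matrix: with entries drawn from a thin-tailed distribution, A approximately preserves the norms of sparse vectors.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 100, 400, 10
A = rng.normal(0, 1 / np.sqrt(n), size=(n, m))   # thin-tailed (Gaussian) measurement matrix

# Empirical RIP-style check: ||Ax|| / ||x|| should concentrate around 1 for k-sparse x
ratios = []
for _ in range(1000):
    x = np.zeros(m)
    x[rng.choice(m, k, replace=False)] = rng.normal(size=k)
    ratios.append(np.linalg.norm(A @ x) / np.linalg.norm(x))

print(min(ratios), max(ratios))   # both close to 1 for a well-behaved A
```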
Variational Autoencoder (VAE) vs
“Vanilla” Autoencoder
An autoencoder learns a compressed representation of the input by
compressing it and then decompressing it to reconstruct the
original input. It is an unsupervised learning method whose goal
is to learn functions for compression (encoding) and
decompression (decoding).
A VAE instead learns the parameters of a probability distribution
that represents the data; it focuses on learning a probability
distribution rather than deterministic functions.
8
Kenneth Emeka Odoh
VAE
[https://jaan.io/what-is-variational-autoencoder-vae-tutorial/]
9
Kenneth Emeka Odoh
[https://cedar.buffalo.edu/~srihari/CSE676/20.10.3-VAE.pdf] 10
Kenneth Emeka Odoh
Reparameterization Trick
This is needed to backpropagate through a random node (latent node)
[https://stats.stackexchange.com/questions/199605/how-does-the-reparameterization-trick-for-vaes-work-and-why-is-it-important] 11
Kenneth Emeka Odoh
z = f(ϵ, μ, L), e.g. z = μ + Lϵ with ϵ ~ N(0, I),
so z can be differentiated with respect to μ (and L)
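A minimal PyTorch sketch of the reparameterization trick (illustrative only; it uses a diagonal covariance parameterized by a log-variance rather than the Cholesky factor L above, and a stand-in loss instead of a decoder).

```python
import torch

mu = torch.zeros(3, requires_grad=True)        # mean of the latent Gaussian
log_var = torch.zeros(3, requires_grad=True)   # log-variance (diagonal covariance)

eps = torch.randn(3)                           # noise sampled outside the computation graph
z = mu + torch.exp(0.5 * log_var) * eps        # z = f(eps, mu, sigma): deterministic in mu

loss = (z ** 2).sum()                          # stand-in for a decoder / reconstruction loss
loss.backward()                                # gradients flow back to mu and log_var through z

print(mu.grad, log_var.grad)                   # non-None: the random node is now differentiable
```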
GAN
[https://deeplearning4j.org/generative-adversarial-network] 12
Kenneth Emeka Odoh
● This consists of a generator and a discriminator pitted against each other in a
zero-sum game.
● The generator models the data distribution.
● The discriminator estimates the probability that a sample came from the data
distribution rather than the model (generator) distribution.
● The generator keeps creating counterfeit objects and the discriminator
keeps trying to detect the counterfeits. The process continues until the
generator produces counterfeits that cannot be distinguished
from the originals.
● Generative models and reinforcement learning are natural
ways to incorporate game-theoretic principles into machine learning. 13
Kenneth Emeka Odoh
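A toy PyTorch sketch of the adversarial training loop described above (illustrative only; the 1-D "data distribution", network sizes, and hyperparameters are arbitrary).

```python
import torch
import torch.nn as nn

# Tiny networks, only to show the alternating generator/discriminator updates
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(100):
    real = torch.randn(32, 1) * 2 + 3            # samples from the "data distribution"
    fake = G(torch.randn(32, 8))                 # samples from the model distribution

    # Discriminator step: tell real samples apart from counterfeits
    d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: make counterfeits the discriminator labels as real
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```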
Why Generative Models May Require Fewer
Training Examples
● Regularization is a form of inductive bias that exploits the
structure of the problem, thereby simplifying the problem
space.
● Suitable representation learning with good priors reduces
the effective VC-dimension, which in turn reduces the required training size.
○ VC-dimension is a measure of the capacity of a space of
functions.
Kenneth Emeka Odoh
14
Results
15
Kenneth Emeka Odoh
16
Kenneth Emeka Odoh
Extensions of the Work
● Use the Wasserstein metric instead of KL-divergence in the VAE and
GAN formulations; this may reduce error.
● Develop a rigorous theoretical justification for when regularization leads
to underfitting; this relates to the bias-variance tradeoff.
● Study how early stopping, as a regularizer, impacts the experiments.
● Quantify the uncertainty (aleatoric and epistemic).
● How should generative models be evaluated?
● Put constraints on the output of the generative model, e.g. using sums
of squares [https://www.wired.com/story/a-classical-math-problem-gets-pulled-into-self-driving-cars]
17
Kenneth Emeka Odoh
Conclusions
● Lasso is unstable in the presence of multicollinearity.
● Try an elastic-net regularizer.
● L2 regularization leads to a closed-form, unique solution.
● L1 regularization has no closed form, but it promotes sparsity and
is suitable for feature selection.
● Representation learning with regularization may require
fewer training examples, thereby lowering training time with
less complex models.
18
Kenneth Emeka Odoh
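A small numpy/scikit-learn sketch (illustrative, with arbitrary data) contrasting the closed-form L2 (ridge) solution with the sparse L1 (lasso) solution mentioned in the conclusions.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.0, 0.5]                    # only 3 informative features
y = X @ w_true + 0.1 * rng.normal(size=100)

# L2 (ridge): closed-form, unique solution  w = (X^T X + lam I)^{-1} X^T y
lam = 1.0
w_ridge_closed = np.linalg.solve(X.T @ X + lam * np.eye(20), X.T @ y)
w_ridge_sklearn = Ridge(alpha=lam, fit_intercept=False).fit(X, y).coef_
print(np.allclose(w_ridge_closed, w_ridge_sklearn))   # same unique solution

# L1 (lasso): no closed form, solved iteratively, but yields a sparse solution
w_lasso = Lasso(alpha=0.1, fit_intercept=False).fit(X, y).coef_
print("ridge non-zeros:", int(np.sum(np.abs(w_ridge_closed) > 1e-6)),
      "lasso non-zeros:", int(np.sum(np.abs(w_lasso) > 1e-6)))
```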
References
1. http://slazebni.cs.illinois.edu/spring17/lec12_vae.pdf
2. Carl Doersch, Tutorial on Variational Autoencoders, 2016.
3. Neelakantan et al., Adding Gradient Noise Improves Learning for Very Deep
Networks, 2016.
4. Bora et al., Compressed Sensing Using Generative Models, 2017.
5. Ian Goodfellow, NIPS 2016 Tutorial: Generative Adversarial Networks.
19
Kenneth Emeka Odoh
Past Talks
Some of my past talks
● Tutorial on Cryptography, slide:
https://www.slideshare.net/kenluck2001/crypto-bootcamp-108671356 , 2018
● Landmark Retrieval & Recognition, slide:
https://www.slideshare.net/kenluck2001/landmark-retrieval-recognition-105605174 , video:
https://youtu.be/YD6ihpBMyso , 2018
● Tracking the tracker: Time Series Analysis in Python from First Principles, slide:
https://www.slideshare.net/kenluck2001/tracking-the-tracker-time-series-analysis-in-python-from-first-principles-101506045 , 2018
● WSDM Recommender System, slide:
https://www.slideshare.net/kenluck2001/kaggle-kenneth , video:
https://youtu.be/exwJmQzDBag , 2018
Kenneth Emeka Odoh
20