This document discusses deep generative models, including variational autoencoders (VAEs) and generative adversarial networks (GANs). It explains that generative models learn the distribution of the input data and can generate new samples from that distribution. VAEs use variational inference to learn a latent space and generate new data by varying the latent variables. The document outlines the key concepts of VAEs, including the evidence lower bound (ELBO) objective used for training, which maximizes a lower bound on the log-likelihood of the data.
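As a rough illustration of the ELBO objective described above, here is a minimal sketch of a VAE training loss in PyTorch. It assumes a Bernoulli decoder (so the reconstruction term is a binary cross-entropy), a diagonal-Gaussian approximate posterior q(z|x) parameterized by mu and logvar, and a standard normal prior p(z); the function name elbo_loss and the variable names are illustrative, not taken from the document.

```python
import torch
import torch.nn.functional as F

def elbo_loss(x, x_recon, mu, logvar):
    # Reconstruction term: expected log-likelihood of the data under the
    # decoder (Bernoulli decoder assumed, hence binary cross-entropy).
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    # KL divergence between the approximate posterior q(z|x) = N(mu, sigma^2)
    # and the standard normal prior p(z) = N(0, I), in closed form.
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    # Minimizing (recon + kl) maximizes the ELBO, a lower bound on log p(x).
    return recon + kl
```

Under these assumptions, minimizing this loss during training is equivalent to maximizing the evidence lower bound, which is how the VAE pushes up a lower bound on the likelihood of the data.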