InfoGAN is a method for learning disentangled, interpretable representations with generative adversarial networks (GANs). It augments the standard GAN objective with a term that maximizes the mutual information between a small subset of the latent codes and the generated images, so that those codes come to correspond to interpretable factors of variation in the image domain. The paper presents results on MNIST, 3D faces, 3D chairs, SVHN, and CelebA, where the latent codes discover meaningful and interpretable factors such as digit identity, azimuth, and lighting conditions without any supervision.
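
To make the mutual-information term concrete, below is a minimal PyTorch sketch (the layer sizes, names, and the weight `lam` are illustrative assumptions, not the paper's exact architecture). An auxiliary head Q predicts the categorical code c from the generated image; the resulting negative log-likelihood -log Q(c | G(z, c)) is a variational lower bound on I(c; G(z, c)), and both the generator and Q are trained to maximize it alongside the usual adversarial loss. The discriminator's own update is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NOISE_DIM, CODE_DIM, IMG_DIM = 62, 10, 784  # e.g. MNIST flattened to 784 pixels

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + CODE_DIM, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh(),
        )
    def forward(self, z, c_onehot):
        # Generator conditions on both the noise z and the latent code c
        return self.net(torch.cat([z, c_onehot], dim=1))

class DiscriminatorQ(nn.Module):
    """Discriminator body shared with the auxiliary Q head."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2))
        self.d_head = nn.Linear(256, 1)         # real/fake logit
        self.q_head = nn.Linear(256, CODE_DIM)  # logits over the categorical code
    def forward(self, x):
        h = self.body(x)
        return self.d_head(h), self.q_head(h)

G, DQ = Generator(), DiscriminatorQ()
# G and the Q head are updated together on the shared objective
opt_g = torch.optim.Adam(list(G.parameters()) + list(DQ.q_head.parameters()), lr=1e-3)

batch = 16
z = torch.randn(batch, NOISE_DIM)
c = torch.randint(0, CODE_DIM, (batch,))          # sample a categorical latent code
c_onehot = F.one_hot(c, CODE_DIM).float()

fake = G(z, c_onehot)
d_logit, q_logits = DQ(fake)

# Non-saturating generator loss plus the mutual-information lower bound term
gan_loss = F.binary_cross_entropy_with_logits(d_logit, torch.ones_like(d_logit))
mi_loss = F.cross_entropy(q_logits, c)  # -log Q(c | G(z, c))
lam = 1.0                               # weighting of the MI term (assumed value)

opt_g.zero_grad()
(gan_loss + lam * mi_loss).backward()
opt_g.step()
```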