This document discusses convolutional neural networks and provides details on GoogLeNet, the "Going Deeper with Convolutions" architecture that achieved state-of-the-art results in image classification at ILSVRC 2014. It covers related work on CNNs and the Network in Network model. The architectural details section explains the innovations in GoogLeNet, most notably the Inception module, while the training methodology section discusses hyperparameters such as the optimizer, momentum, learning rate, and data splits. Training used asynchronous stochastic gradient descent with 0.9 momentum and a fixed schedule that decreased the learning rate by 4% every 8 epochs.
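As a concrete illustration of the training setup described above, here is a minimal single-machine sketch in PyTorch, assuming the momentum and learning-rate-decay values reported for GoogLeNet (0.9 momentum, 4% decay every 8 epochs); the placeholder model, base learning rate, and data loader are hypothetical, and the original asynchronous, distributed training is not reproduced here.

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Hypothetical stand-in model; the real GoogLeNet is a 22-layer Inception network.
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
    nn.ReLU(inplace=True),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 1000),  # ILSVRC classification has 1000 classes
)

# SGD with 0.9 momentum, as in the training methodology; base lr is an assumption.
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# Fixed schedule: multiply the learning rate by 0.96 (a 4% decrease) every 8 epochs.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=8, gamma=0.96)

criterion = nn.CrossEntropyLoss()

def train_one_epoch(loader):
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

# Usage (train_loader and num_epochs are assumed to be defined elsewhere):
# for epoch in range(num_epochs):
#     train_one_epoch(train_loader)
#     scheduler.step()  # applies the 4%-every-8-epochs decay
```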