This document summarizes an implementation of neural style transfer, which combines the content of one image with the artistic style of another. The authors used a pretrained VGG-16 convolutional neural network to extract feature representations from images, and defined a loss function combining a content loss and a style loss that measure how far the generated image's features deviate from those of the content and style images, respectively. The generated image was updated iteratively with L-BFGS optimization. Testing on sample images produced good results within 5 epochs, transferring the style of a painting onto photographs. Suggested future work includes optimizing speed for use in a web app and experimenting with different loss weights and image sizes.
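The loss structure described above can be sketched as follows. This is a toy illustration rather than the authors' code: small random arrays stand in for VGG-16 feature maps, and the weights `alpha` and `beta` are illustrative assumptions. It shows the three pieces the summary mentions: a content loss on feature maps, a style loss on Gram matrices, and L-BFGS minimization (here via SciPy).

```python
import numpy as np
from scipy.optimize import minimize

def gram(F):
    # F: (channels, pixels) feature map; the Gram matrix captures
    # channel-to-channel correlations, which encode "style".
    return F @ F.T / F.size

def total_loss(x, content_feat, style_gram, alpha=1.0, beta=10.0):
    # Reshape the flat optimization variable back into a feature map.
    F = x.reshape(content_feat.shape)
    content_loss = 0.5 * np.sum((F - content_feat) ** 2)
    style_loss = np.sum((gram(F) - style_gram) ** 2)
    # alpha/beta trade off content fidelity against style matching
    # (illustrative values, not the authors' settings).
    return alpha * content_loss + beta * style_loss

rng = np.random.default_rng(0)
content_feat = rng.normal(size=(4, 16))   # stand-in for a VGG content feature map
style_feat = rng.normal(size=(4, 16))     # stand-in for a VGG style feature map
style_gram = gram(style_feat)

# Start from a random "image" and minimize the combined loss with L-BFGS,
# mirroring the iterative update described in the summary.
x0 = rng.normal(size=content_feat.size)
res = minimize(total_loss, x0, args=(content_feat, style_gram),
               method="L-BFGS-B")
```

In the real implementation the optimization variable is the image itself and the feature maps are recomputed by a forward pass through VGG-16 at each step; here the features are optimized directly to keep the sketch dependency-free.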