This document presents MegaDepth, a new dataset for single-view depth prediction generated from internet photos using structure-from-motion and multi-view stereo techniques. It outlines the limitations of existing depth datasets and the contributions of MegaDepth, which comprises over 130,000 images of landmarks after filtering. An hourglass convolutional neural network is trained on MegaDepth using novel scale-invariant and ordinal depth losses to predict dense depth from a single image. Evaluation shows that the model generalizes well to other datasets, while also revealing limitations on complex surfaces, thin objects, and difficult materials.
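Since the losses are central to the training setup, a minimal PyTorch sketch may help make them concrete. The scale-invariant term follows the standard log-depth formulation on which the paper builds; the ordinal term below is a simplified pairwise logistic ranking loss in the spirit of the paper's ordinal supervision, not necessarily its exact form. The tensor shapes, the mask convention, and the sign convention for `relation` are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F


def scale_invariant_loss(pred_log_depth: torch.Tensor,
                         gt_log_depth: torch.Tensor,
                         mask: torch.Tensor) -> torch.Tensor:
    """Scale-invariant log-depth loss over pixels with valid MVS depth.

    pred_log_depth, gt_log_depth: (B, H, W) log-depth maps.
    mask: (B, H, W) binary map, 1 where ground-truth depth exists
    (MVS reconstructions are sparse, so many pixels lack depth).
    """
    mask = mask.float()
    n = mask.sum(dim=(1, 2)).clamp(min=1.0)          # valid pixels per image
    r = (pred_log_depth - gt_log_depth) * mask       # residuals, zeroed where invalid
    mse_term = (r ** 2).sum(dim=(1, 2)) / n          # (1/n) * sum_i r_i^2
    bias_term = (r.sum(dim=(1, 2)) / n) ** 2         # (1/n^2) * (sum_i r_i)^2
    return (mse_term - bias_term).mean()             # invariant to a global log-depth shift


def ordinal_loss(pred_log_depth: torch.Tensor,
                 pairs_i: torch.Tensor,
                 pairs_j: torch.Tensor,
                 relation: torch.Tensor) -> torch.Tensor:
    """Pairwise logistic ranking loss on ordinal depth labels (illustrative form).

    pred_log_depth: (N,) predicted log depths at sampled pixels.
    pairs_i, pairs_j: (K,) indices of sampled pixel pairs.
    relation: (K,) values in {+1, -1}; +1 means pixel i should be
    farther than pixel j (assumed convention).
    """
    diff = pred_log_depth[pairs_i] - pred_log_depth[pairs_j]
    # log(1 + exp(-r * diff)) penalizes pairs whose predicted ordering
    # disagrees with the labeled relation.
    return F.softplus(-relation.float() * diff).mean()
```

In practice the paper combines terms like these (plus a multi-scale gradient-matching term) into a single weighted training objective; the weights and pair-sampling scheme are hyperparameters not shown here.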