The document discusses combining Spark and IPython for interactive distributed computing. Spark is a computation engine that distributes work across a cluster, while IPython provides an interactive development environment. The goal is to connect IPython to a Spark cluster so that developers can explore large datasets interactively, working at production scale during development and then exporting the resulting code to production with little change. The document gives an overview of the Spark and IPython architectures and demonstrates attaching an IPython kernel to a Spark context so that Spark scripts can be developed interactively.
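As a concrete illustration of that workflow, the sketch below shows what driving Spark from an IPython session typically looks like: a SparkContext is created in the interactive session, and distributed computations are then built up and tested step by step. The application name and master URL are placeholders, not taken from the source; "local[*]" runs Spark in local threads for experimentation, and a real cluster address (e.g. a spark://host:7077 URL) would replace it. This is a minimal sketch assuming pyspark is importable in the IPython environment, not the document's exact setup.

```python
# Minimal sketch: using Spark interactively from an IPython session.
# Assumes pyspark is on PYTHONPATH; master URL and app name are placeholders.
from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setAppName("ipython-interactive")  # hypothetical app name
        .setMaster("local[*]"))             # swap in a real cluster URL here

sc = SparkContext(conf=conf)

# Build a distributed computation incrementally, inspecting results as you go.
rdd = sc.parallelize(range(1_000_000))
total = rdd.map(lambda x: x * x).reduce(lambda a, b: a + b)
print(total)

sc.stop()
```

Because the same SparkContext API is used whether the master is local threads or a full cluster, code prototyped this way in IPython can later be moved into a standalone Spark script largely unchanged, which is the development-to-production path the document describes.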