This document summarizes a presentation about Apache Spark given by Radu Moldovan. It introduces Spark as a cluster computing platform for in-memory processing of large datasets, positioned as an alternative to Hadoop MapReduce. The presentation covers Spark's core functionality, its modules such as Spark SQL, its RDD-based programming model, and its architecture. It also demonstrates Spark through code examples and discusses Spark's integration with Mesos, as well as the presenter's experience using different Spark versions for large-scale healthcare data processing.
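The presentation's own code examples are not reproduced in this summary, but a minimal sketch of the RDD programming model it refers to might look like the following. The word-count pipeline, object name, and local[*] master setting are illustrative assumptions rather than material taken from the talk; in a real deployment the master would point at a cluster manager such as Mesos.

```scala
import org.apache.spark.sql.SparkSession

object RddWordCount {
  def main(args: Array[String]): Unit = {
    // Local SparkSession for illustration; a cluster run would set the
    // master to a Mesos or YARN endpoint instead of local[*].
    val spark = SparkSession.builder()
      .appName("RddWordCount")
      .master("local[*]")
      .getOrCreate()
    val sc = spark.sparkContext

    // Build an RDD from an in-memory collection and chain lazy
    // transformations, ending with a collect() action on the driver.
    val lines = sc.parallelize(Seq("spark is fast", "spark runs in memory"))
    val counts = lines
      .flatMap(_.split("\\s+"))   // split each line into words
      .map(word => (word, 1))     // pair each word with a count of 1
      .reduceByKey(_ + _)         // aggregate counts per word across partitions
      .collect()                  // action: materialize results on the driver

    counts.foreach { case (word, n) => println(s"$word -> $n") }
    spark.stop()
  }
}
```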