Model-to-model transformations can be computationally expensive for large models or complex transformation rules. The authors present an approach for distributing ATL model transformations over MapReduce: mappers execute the local match and apply phases in parallel over partitions of the input model, and reducers perform a global resolve phase that combines the partial results into the final target model. An evaluation on Amazon EMR shows near-linear speedup for models of up to 100,000 lines of code. Open challenges include load balancing across workers, a model persistence layer supporting concurrent reads and writes, and parallelizing all phases of the transformation rather than only match and apply.
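The split between a parallel local phase and a global resolve phase can be illustrated with a minimal sketch. This is a hypothetical miniature, not the actual ATL-MR implementation or Hadoop API: each "mapper" transforms the source elements of its partition locally, emitting target elements whose cross-references are still recorded as source-element ids (unresolved bindings), and a single "reducer" then merges the trace information from all partitions and resolves those bindings globally.

```python
from collections import OrderedDict

def local_match_apply(partition):
    """Map phase (sketch): match and apply rules to one partition.

    Emits (source_id, target_element) trace pairs; 'refs' still hold
    SOURCE ids, since the referenced elements may live in another
    partition and can only be resolved globally."""
    out = []
    for elem in partition:
        target = {"name": elem["name"].upper(),   # the 'apply' part
                  "refs": list(elem["refs"])}     # unresolved bindings
        out.append((elem["id"], target))
    return out

def global_resolve(traced):
    """Reduce phase (sketch): merge traces and resolve bindings.

    Builds a global source-id -> target-element trace map, then rewrites
    each unresolved reference to point at the produced target element."""
    trace = dict(traced)
    for target in trace.values():
        target["refs"] = [trace[sid]["name"] for sid in target["refs"]]
    return trace

# Simulate two mappers working on disjoint partitions of one source model
# (the MapReduce shuffle between the phases is elided).
p1 = [{"id": 1, "name": "a", "refs": [2]}]
p2 = [{"id": 2, "name": "b", "refs": [1]}]
traced = local_match_apply(p1) + local_match_apply(p2)
model = global_resolve(traced)
print(model[1]["refs"])   # cross-partition reference resolved -> ['B']
```

The sketch also shows why the resolve phase is the hard part to parallelize: it needs the trace map built from every partition, which is exactly the global knowledge the map phase deliberately avoids.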