This document presents a splitting method for nonsmooth, nonconvex problems of the form h(Ax) + g(x), where h is nonsmooth and nonconvex, A is a linear map, and g is a convex regularizer. The method relaxes the problem by introducing an auxiliary variable w and alternately minimizes the resulting relaxed objective over w and x, using partial minimization in w and proximal gradient descent in x. Applications to phase retrieval, semi-supervised learning, and stochastic shortest path problems are discussed, together with convergence results and empirical performance on these applications.
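The summary does not spell out the exact form of the relaxation, so the following is only an illustrative sketch under one common assumption: the coupling h(Ax) is replaced by h(w) + (1/(2η))‖Ax − w‖², and the scheme alternates a prox (partial minimization) step in w with a proximal-gradient step in x. The specific choices of h (an ℓ0-type penalty, whose prox is hard thresholding, standing in for a generic nonsmooth nonconvex term) and g (a ridge regularizer) are placeholders, not the paper's applications:

```python
import numpy as np

def prox_h(v, t):
    # Hard thresholding: the prox of the nonconvex penalty t * ||.||_0.
    # A stand-in for a generic nonsmooth, nonconvex h.
    out = v.copy()
    out[v ** 2 < 2.0 * t] = 0.0
    return out

def prox_g(v, t, lam=0.1):
    # Prox of the convex ridge regularizer g(x) = (lam / 2) * ||x||^2.
    return v / (1.0 + t * lam)

def relax_and_split(A, x0, eta=1.0, step=None, iters=200):
    """Alternating scheme for the relaxed objective (an assumed form)

        h(w) + 1/(2*eta) * ||A x - w||^2 + g(x).

    w-update: exact partial minimization via the prox of h.
    x-update: a proximal-gradient step on the smooth coupling term.
    """
    x = x0.copy()
    if step is None:
        # Step 1/L for the coupling term, whose Lipschitz constant
        # is L = ||A||_2^2 / eta (||A||_2 = spectral norm).
        step = eta / (np.linalg.norm(A, 2) ** 2)
    for _ in range(iters):
        w = prox_h(A @ x, eta)              # partial minimization in w
        grad = A.T @ (A @ x - w) / eta      # gradient of the coupling term in x
        x = prox_g(x - step * grad, step)   # proximal-gradient step in x
    return x, w
```

A small usage example: `x, w = relax_and_split(np.eye(3), np.array([3.0, 0.1, -2.5]))` runs the alternating scheme on a trivial identity operator; swapping in the paper's h, g, and A for a given application would follow the same template.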