The linear programming approach to approximate dynamic programming (ALP) approximates the optimal value function of a Markov decision process within a linearly parameterized class of functions. ALP casts the search for the best parameters as a linear program whose constraints are derived from Bellman's equation, with one constraint per sampled state-action pair. Solving this linear program yields an approximate value function and an induced policy. The dissertation analyzes ALP, providing bounds on the approximation error and theoretical guidelines for implementation.
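To make the formulation concrete, the following is a minimal sketch of the ALP on a hypothetical toy MDP (the MDP, basis functions, and state-relevance weights are all illustrative assumptions, not taken from the dissertation). It maximizes the weighted value of the approximation subject to one Bellman-inequality constraint per state-action pair, then reads off the induced greedy policy:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy MDP (illustrative only): 3 states, 2 actions, discounted cost.
n_states, n_actions = 3, 2
alpha = 0.9  # discount factor
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, x, :] = transition probs
g = rng.uniform(0.0, 1.0, size=(n_actions, n_states))             # g[a, x] = one-stage cost

# Linear architecture: basis matrix Phi (constant feature and state index).
Phi = np.column_stack([np.ones(n_states), np.arange(n_states, dtype=float)])
c = np.ones(n_states) / n_states  # state-relevance weights (assumed uniform)

# ALP:  max_r  c' Phi r   s.t.  (Phi r)(x) <= g_a(x) + alpha * P_a(x,:) Phi r
# for every state-action pair, i.e. rows (Phi - alpha * P_a Phi) r <= g_a.
# Here the state space is small enough to enumerate all constraints; in general
# ALP uses a sampled subset of state-action pairs.
A_ub = np.vstack([Phi - alpha * P[a] @ Phi for a in range(n_actions)])
b_ub = np.concatenate([g[a] for a in range(n_actions)])

res = linprog(-(c @ Phi), A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * Phi.shape[1])  # maximize via negated objective
r = res.x
J_approx = Phi @ r  # approximate value function

# Induced policy: greedy with respect to the approximation.
Q = g + alpha * np.einsum('axy,y->ax', P, J_approx)  # Q[a, x]
policy = Q.argmin(axis=0)
print("approximate values:", J_approx, "induced policy:", policy)
```

Because every feasible point satisfies Phi r <= T(Phi r), the approximation lower-bounds the optimal value function componentwise, which is why the LP maximizes rather than minimizes the weighted objective.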